
Is the DPO the New AI Officer?

Date and time: 22.10.2025, 14:00
Location: Online

Is the DPO the New AI Officer? Practical AI Governance for GDPR and the EU AI Act

In this webinar, Krete Paal (CEO of GDPR Register) is joined by Maria Golofaeva (Nebius) and Margot Arnus (Veriff) to explore a question many organisations are now asking: is the Data Protection Officer becoming the de facto AI compliance lead? The discussion focuses on real-world implementation—how to organise AI governance, align it with GDPR compliance, and prepare for the EU AI Act without slowing down innovation.

Why privacy professionals are central to AI compliance

Because AI relies heavily on data (often personal data), privacy teams are well placed to lead or co-lead AI compliance. The speakers highlight that DPOs already have experience operating in grey areas, building accountability frameworks, and making risk-based decisions: skills that translate naturally into AI risk management and AI governance.

Getting started: three practical steps

The panel shares a pragmatic approach for organisations building AI compliance capability:

  • Raise AI literacy and awareness: ensure employees understand that AI is regulated, what is regulated, and why. (AI literacy is also framed as a concrete requirement under the EU AI Act.)
  • Reuse and extend your GDPR governance framework: adapt existing processes such as DPIAs, RoPA (Article 30 records), and accountability documentation to cover AI use cases and AI risk assessment.
  • Create cross-functional AI committees: bring together legal, privacy, engineering, data science, product, security, and business stakeholders to build shared ownership and consistent decision-making.

The biggest challenge: innovation vs compliance

A recurring theme is the tension between business goals and legal requirements. AI introduces complex systems that can be difficult to explain, which makes classic GDPR principles—transparency, purpose limitation, and data minimisation—harder to apply in practice. The panel emphasises that the aim is not to block AI, but to enable responsible use with clear guardrails.

Bridging the legal–technical gap

Both speakers stress that AI compliance fails when legal and technical teams work in isolation. Success depends on creating a shared language and a “same room” approach: asking basic questions, mapping data flows, and agreeing what the organisation considers “AI” in practical terms. The advice is simple and memorable: ask the ‘stupid’ questions early—once—rather than staying uncertain for months.

Avoiding overreaction and building a business case

AI regulation can trigger internal panic (“we cannot use AI anymore”). The panel argues for the opposite: understand what is actually regulated and communicate it clearly—internally and externally. Strong governance can also become a competitive advantage, helping privacy and legal teams secure resources by framing compliance as trust, differentiation, and market readiness, not just cost.

Reusing GDPR documentation for AI governance

Rather than starting from zero, the webinar recommends using GDPR documentation as a baseline and adding AI-specific sections. GDPR already demands clarity, documentation, and understandable transparency—making it a strong template for AI compliance. Practical examples include:

  • extending DPIA-style assessments to AI risks
  • maintaining a record of AI activities alongside RoPA
  • treating AI providers as vendors within vendor risk management
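
As a loose illustration of the "record of AI activities alongside RoPA" idea (not something shown in the webinar), an AI activity record could reuse the structure of an existing Article 30 entry and add a handful of AI-specific fields. The field names below are assumptions chosen for this sketch, not a prescribed template.

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class RopaEntry:
      # Article 30-style fields most organisations already maintain for GDPR
      processing_activity: str
      purpose: str
      data_categories: List[str]
      data_subjects: List[str]
      processors: List[str] = field(default_factory=list)

  @dataclass
  class AiActivityRecord(RopaEntry):
      # AI-specific fields layered on top of the existing record (illustrative)
      ai_system: str = ""                 # tool or model used
      provider: str = ""                  # vendor, if the system is third-party
      risk_assessment: str = "pending"    # outcome of the DPIA-style AI risk assessment
      human_oversight: str = ""           # who reviews outputs, and how
      provider_trains_on_data: bool = False

Keeping the AI record next to the existing RoPA entry means purposes, data categories, and vendors only need to be documented once.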

Vendor management and AI inventory: what to check first

To keep track of AI tools (including hidden AI features), the panel recommends integrating AI questions into supplier assessments and ongoing monitoring. Key checks include:

  • will the provider train on your data (especially personal data or confidential information)?
  • what technical and organisational measures are in place (segregation, controls, security)?
  • how is the AI deployed in your infrastructure, and for what purpose?

For generative AI and LLMs, the same tool can support many use cases—so organisations need both vendor controls and internal tracking of how teams actually use the tool.
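
Purely as a sketch, the vendor checks and the internal usage tracking described above could live side by side in a simple inventory. The tool name, keys, and values here are hypothetical, and the flagging rule is just one example of an ongoing monitoring check.

  # Hypothetical entry for one AI tool: vendor answers plus internal use cases.
  vendor_assessment = {
      "tool": "ExampleAssistant",            # placeholder name
      "trains_on_customer_data": False,      # confirm against the contract / DPA
      "technical_measures": ["tenant segregation", "encryption", "access controls"],
      "deployment": "SaaS, EU region",
      "last_reviewed": "2025-10-01",
  }

  internal_use_cases = [
      {"team": "Marketing", "purpose": "drafting copy", "personal_data": False},
      {"team": "Support", "purpose": "summarising tickets", "personal_data": True},
  ]

  # Simple monitoring rule: flag use cases that send personal data
  # to a tool that trains on submitted data.
  for use in internal_use_cases:
      if use["personal_data"] and vendor_assessment["trains_on_customer_data"]:
          print(f"Review: {use['team']} sends personal data to {vendor_assessment['tool']}")

The pairing reflects the point made in the webinar: vendor answers alone are not enough if nobody tracks how teams actually use the tool.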

Ethics and bias: why it cannot be an afterthought

The speakers discuss real incidents (such as bias in recruitment tooling and misclassification in image systems) to underline why AI bias is not only unethical but may also be unlawful. They note that the EU AI Act offers clearer signals on what is prohibited, but organisations still need internal principles, stakeholder buy-in, and processes for bias mitigation—often in tension with data minimisation.

Data classification: personal, non-personal, and “regulated” data

AI governance goes beyond personal data. The panel recommends clear internal policies that classify:

  • personal data
  • sensitive personal data
  • confidential information and trade secrets
  • non-personal data that may still be regulated or risky in AI contexts

Practical mitigations include restricting sensitive data in AI tools, using filters, and considering synthetic data or de-identification strategies to reduce exposure.
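
A minimal sketch of such a classification policy, assuming the four categories from the list above plus a public tier; the allowed/blocked mapping is an illustrative assumption, not a recommendation from the panel.

  from enum import Enum

  class DataClass(Enum):
      PUBLIC = "public / unrestricted"
      NON_PERSONAL_REGULATED = "non-personal but regulated or risky"
      CONFIDENTIAL = "confidential information and trade secrets"
      PERSONAL = "personal data"
      SENSITIVE = "sensitive personal data"

  # Illustrative policy: which classes may be entered into a general-purpose AI tool.
  # Each organisation sets its own mapping; "False" here means route through review,
  # filtering, or de-identification rather than an outright ban.
  ALLOWED_IN_AI_TOOLS = {
      DataClass.PUBLIC: True,
      DataClass.NON_PERSONAL_REGULATED: False,
      DataClass.CONFIDENTIAL: False,
      DataClass.PERSONAL: False,
      DataClass.SENSITIVE: False,
  }

  def may_use_in_ai(data_class: DataClass) -> bool:
      return ALLOWED_IN_AI_TOOLS.get(data_class, False)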

Using AI tools in privacy work (with human oversight)

The webinar recognises that AI can support privacy teams—provided human oversight remains in place. Examples include drafting policies, brainstorming, and accelerating RoPA inputs by generating structured descriptions of business processes (purpose, data categories, vendors) that colleagues can review and refine.

So, is the DPO the new AI officer?

The panel’s conclusion is nuanced: DPOs are often well positioned, but it depends on company size, resources, and the scale of AI use. AI compliance is best treated as a team effort, with privacy acting as a key driver of ongoing monitoring, accountability, and practical governance.

Key takeaway

Be cautious, but curious. AI is moving fast, regulation is evolving, and organisations that keep learning, keep documenting, and keep cross-functional conversations active will be best placed to use AI responsibly—and competitively.

Krete Paal is the CEO of GDPR Register and has extensive experience in privacy management and regulatory compliance. She works closely with organisations across sectors to operationalise GDPR requirements and translate complex regulatory obligations into practical, scalable processes—now increasingly in the context of AI governance.

Maria Golofaeva is a Data Protection Officer at Toloka, where she advises on privacy and data protection matters in data-intensive and AI-driven environments. Her work focuses on navigating GDPR compliance challenges in advanced data processing operations, including issues related to AI training, data quality, and risk mitigation.

Margot Arnus is Senior Privacy & Product Legal Counsel at Veriff, advising product and engineering teams on privacy-by-design, AI-enabled product development, and regulatory compliance. She brings hands-on experience in aligning fast-moving product innovation with GDPR, ethical considerations, and emerging AI regulation.

Speakers

Margot Arnus
Lead Legal Counsel
Margot Arnus (CIPP/US) is Lead Privacy and Product Legal Counsel at Veriff, where she advises on embedding privacy and regulatory compliance into AI-driven and biometric products used at scale. She is also the Co-Founder of Damus, supporting organisations in building practical privacy capability.

With deep expertise in both EU and US data protection law, Margot combines legal precision with a strong understanding of how products are built and operated. Her background spans cross-border privacy compliance, product counselling, and trust-focused governance in highly regulated environments.

At Veriff, she focuses on translating complex regulatory requirements into clear, business-ready solutions, bridging legal, technical, and commercial teams to ensure privacy is not just compliant, but a driver of trust and sustainable innovation.
Krete Paal
CEO, GDPR Register
Krete Paal is the CEO of GDPR Register, where she leads the development of AI-powered tools that make privacy compliance scalable and practical for organisations across Europe.

With a strong background in data protection and legal tech, from heading Veriff's DPO Office to earlier work with the Estonian Police and Border Guard, Krete combines deep regulatory expertise with product leadership.

At GDPR Register, she brings a forward-looking perspective on how AI can support GDPR compliance and align with emerging regulations, turning complex requirements into clear, actionable workflows.
Maria Golofaeva
Senior Privacy Lawyer at Nebius
Maria Golofaeva is a seasoned privacy professional and legal strategist whose career focuses on bridging complex regulatory realities with fast-moving AI and data environments.

She currently serves as Data Protection Officer at Toloka and Senior Privacy Lawyer at Nebius, where she helps organizations build trust and accountability in their AI and data-driven operations.

Previously, Maria worked as Lead Legal Counsel at Yandex, managing privacy and compliance in one of the region’s largest technology companies. She holds strong credentials in data protection (including CIPP/E) and brings hands-on experience in compliance, governance, and technology oversight.

At Toloka and Nebius, Maria is deeply involved in developing responsible AI practices — from model governance and data retention to automation in privacy processes. She frequently speaks about the evolving role of DPOs and how legal teams can embrace AI without losing control.

Maria is known for her pragmatic approach: enabling smart automation where it helps, while keeping accountability, transparency, and human judgment at the core of privacy practice.
