Is the DPO the New AI Officer? Practical AI Governance for GDPR and the EU AI Act
In this webinar, […]
The EU AI Act remains a major topic because AI is still a new and fast-moving technology with broad everyday use, and the law is rolling out in phases rather than switching on at once. Some parts are already applicable, while other obligations phase in through the coming years, including 2026 and 2027. This creates a multi-year compliance effort similar to GDPR, affecting product roadmaps, procurement, and vendor relationships.
AI differs from earlier regulated areas because many organizations still do not fully understand what AI is, what it will become, and how it behaves in real-world use. That uncertainty increases the need for ongoing interpretation and operational work.
The phased rollout also keeps the topic active. Companies must act before all guidance, templates, and harmonized standards are fully available, while deadlines continue to approach.
Transparency has moved into focus. Progress has been made in identifying which AI tools are used, developed, or sold, and in building internal and external documentation around those tools.
At the same time, the EU AI Act is still seen as more ambiguous than GDPR was after its first year. Data processing was already familiar before GDPR; AI is newer, which increases ambiguity and complexity.
The EU AI Act sets a foundation for human-centric and more trustworthy AI, including bans on certain harmful practices. The practical outcome is expected to become clearer once high-risk obligations are fully enforced, with 2026 highlighted as a key point in time.
AI literacy requirements also support this direction by requiring internal awareness and training so that employees understand AI use and outcomes.
The Act uses risk-based classifications for AI systems. This helps regulators and larger organizations determine requirements, but in practice the lines between categories can be unclear.
Effective classification requires cross-functional input from legal, product, and technical teams.
A further issue is that unclear boundaries can encourage attempts to position systems as lower risk than they are. More clarity is needed to make classification easier and to support external assessment of whether something is high risk.
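As a purely illustrative sketch of how that cross-functional input could be captured, the snippet below models a single entry in a hypothetical internal AI register. The field names and the example record are assumptions for illustration, not anything prescribed by the Act, and the risk tiers only loosely mirror its categories.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """Risk tiers broadly mirroring the EU AI Act's categories (illustrative labels)."""
    PROHIBITED = "prohibited practice"
    HIGH = "high risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI system register."""
    name: str
    intended_purpose: str
    owner_team: str                  # product or technical owner
    legal_reviewer: str              # who reviewed the classification
    risk_tier: RiskTier
    classification_rationale: str    # why this tier was chosen
    open_questions: list[str] = field(default_factory=list)

# Example record drafted by a product team and reviewed by legal.
record = AISystemRecord(
    name="CV screening assistant",
    intended_purpose="Rank incoming job applications",
    owner_team="HR Tech",
    legal_reviewer="Privacy and AI counsel",
    risk_tier=RiskTier.HIGH,
    classification_rationale="Employment-related use case; treated as high risk pending final guidance",
    open_questions=["Confirm the exact Annex category with outside counsel"],
)
print(record.risk_tier.value)
```

Keeping the classification rationale and open questions next to the record itself is one way to make a later external assessment of the risk tier easier to support.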
Whether a dedicated AI risk officer becomes common depends on company size and sector.
AI compliance ownership can remain divided between legal and product teams: legal asks the questions and translates the requirements, while product and technical teams provide the operational answers and the implementation.
Key obligations phase in between 2025 and 2027, while guidance, standardization, and documentation templates are still evolving. Companies must act before full clarity exists, which makes prioritization difficult for technical teams.
If a system is high risk, documentation and human oversight requirements become a heavy lift. The overlap with GDPR and other laws increases the overall compliance burden.
Maintaining up-to-date documentation and traceability is difficult, especially for iterative models. Paper-based approaches do not scale well without technical support.
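One way to move beyond paper-based documentation is to keep a machine-readable, append-only log of model changes. The sketch below is a minimal illustration of that idea; the class, field names, and example values are hypothetical, not a required documentation format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ModelVersionLogEntry:
    """Append-only log entry capturing what changed in one model iteration."""
    model_name: str
    version: str
    released_on: date
    training_data_reference: str   # pointer to the dataset snapshot used
    changes: str                   # human-readable summary of what changed
    reviewed_by: str               # who approved the documentation update

# Each retraining or material change appends a new entry instead of
# overwriting the previous documentation, preserving traceability.
log: list[ModelVersionLogEntry] = [
    ModelVersionLogEntry(
        model_name="support-ticket-classifier",
        version="1.3.0",
        released_on=date(2025, 3, 1),
        training_data_reference="datasets/tickets/2025-02-snapshot",
        changes="Retrained on February data; added two new ticket categories",
        reviewed_by="AI governance lead",
    ),
]
```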
Legal teams can translate requirements, but implementation depends on product and technical teams. Coordination becomes a core operational challenge.
Lack of internal expertise is a recurring hurdle, especially in smaller companies.
AI tools can support efficiency for resource-constrained legal teams by helping with brainstorming and producing bullet-point summaries. They are not a substitute for reading the law or establishing facts, but they can help structure thinking and clarify technical topics.
When requirements feel overly complex, simplification helps identify the core issues to tackle first. Explaining concepts in very simple terms helps reveal overlaps between laws and reduces the risk of getting stuck in complexity.
The EU AI Act can slow innovation in the short term due to compliance costs and legal uncertainty, especially in organizations whose internal legal teams actively raise compliance requirements.
Over time, transparency and baseline obligations are expected to increase trust. Compliance is positioned to become a competitive advantage, similar to GDPR, as buyers become more cautious about AI and demand clearer explanations of what is being done.
AI use in vendor products often creates interest but does not reliably trigger deeper questions until vendor assessment begins. Vendor vetting processes benefit from predefined question lists, which help identify red flags quickly. A key red flag is a vendor's inability to explain its claims in simple terms: complex terminology without clear explanation can indicate weak understanding and weak compliance.
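As a rough illustration of a predefined question list with a simple red-flag check, the sketch below flags answers that are empty or that hide behind jargon without explanation. The questions, the heuristic, and the thresholds are assumptions chosen for the example, not a vetted checklist.

```python
# Hypothetical predefined vendor questions used during assessment.
QUESTIONS = [
    "Which parts of your product use AI, and for what purpose?",
    "What data is used to train or operate the AI, and where does it come from?",
    "Can you explain, in plain language, how the AI reaches its outputs?",
    "Which EU AI Act risk category do you consider the system to fall into, and why?",
]

def flag_unclear_answers(answers: dict[str, str]) -> list[str]:
    """Return the questions whose answers need human follow-up."""
    red_flags = []
    for question in QUESTIONS:
        answer = answers.get(question, "").strip()
        # Heuristic only: empty answers, or very short answers that lean on
        # "proprietary" without explaining anything, get flagged for follow-up.
        if not answer or ("proprietary" in answer.lower() and len(answer) < 80):
            red_flags.append(question)
    return red_flags
```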
Mapping AI systems, understanding what is in use, and determining applicable requirements takes significant time. Waiting for perfect guidance risks losing years while deadlines continue to approach.
The Act does not prohibit AI use in general. It aims to clarify compliant use and limit certain high-risk or harmful practices. Fear-driven avoidance mirrors earlier GDPR misunderstandings.
There are no shortcuts. The workable approach is to start now: map the AI systems in use, determine which requirements apply, and build documentation step by step rather than waiting for complete guidance.
Where the AI Act overlaps with GDPR and other laws, the challenge is typically duplication with slightly different scope, not direct conflict.
Synthetic data is generated from pre-existing data to create new datasets. Whether it is truly free of the original data depends on how it was generated and whether traces of the original data can be ruled out.
Claims that a dataset is “free from regulation” remain suspicious and require follow-up questions. Personal data definitions and related concepts can also change over time through court cases and differ across jurisdictions, including differences in areas such as biometrics between the EU and certain US states.
Key expectations for the next phase center on the high-risk obligations: applicability and enforcement timelines, including August 2026, are expected to be a major turning point for organizations that fall into high-risk categories.