Webinars

EU AI Act turns 1: Challenges, Opportunities, and What’s Next

Location: Online

EU AI Act: Year One Review, Compliance Challenges, and What Comes Next

The EU AI Act remains a major topic because AI is still a new and fast-moving technology with broad everyday use, and the law is rolling out in phases rather than switching on at once. Some parts are already applicable, while other obligations phase in through the coming years, including 2026 and 2027. This creates a multi-year compliance effort similar to GDPR, affecting product roadmaps, procurement, and vendor relationships.

Why the EU AI Act still matters after year one

AI differs from earlier regulated areas because many organizations still do not fully understand what AI is, what it will become, and how it behaves in real-world use. That uncertainty increases the need for ongoing interpretation and operational work.

The phased rollout also keeps the topic active. Companies must act before all guidance, templates, and harmonized standards are fully available, while deadlines continue to approach.

What has improved: transparency and documentation

Transparency has moved into focus. Progress has been made in identifying which AI tools are used, developed, or sold, and in building internal and external documentation around those tools.

At the same time, the EU AI Act is still seen as more ambiguous than GDPR was after its first year. Data processing was already familiar before GDPR; AI is newer, which increases ambiguity and complexity.

Human-centric and trustworthy AI: foundation now, outcomes later

The EU AI Act sets a foundation for human-centric and more trustworthy AI, including bans on certain harmful practices. The practical outcome is expected to become clearer once high-risk obligations are fully enforced, with 2026 highlighted as a key milestone.

AI literacy requirements also support this direction by requiring internal awareness and training so that employees understand AI use and outcomes.

Risk-based classification: useful in theory, unclear in practice

The Act uses risk-based classifications for AI systems. This helps regulators and larger organizations determine requirements, but in practice the lines between categories can be unclear.

Effective classification requires cross-functional input:

  • identifying which AI systems are in use
  • defining the risk
  • classifying systems based on that risk

A further issue is that unclear boundaries can encourage attempts to position systems as lower risk than they are. More clarity is needed to make classification easier and to support external assessment of whether something is high risk.

AI risk ownership and the rise of risk roles

Whether a dedicated AI risk officer becomes common depends on company size and sector:

  • In highly regulated sectors (for example financial services) or where sensitive data is involved (for example healthcare), dedicated risk roles make more sense.
  • In smaller companies or less sensitive contexts, responsibilities are more likely to be added to existing roles, similar to how DPO responsibilities were handled across different company sizes under GDPR.

AI compliance ownership can remain divided between legal and product teams: legal raises questions and translates requirements, while product and technical teams provide the operational answers and implementation.

The biggest compliance challenges so far

1) Timing vs. missing standards

Key obligations phase in between 2025 and 2027, while guidance, standardization, and documentation templates are still evolving. Companies must act before full clarity exists, which makes prioritization difficult for technical teams.

2) Legal complexity and overlap with GDPR

If a system is high risk, documentation and human oversight requirements become a heavy lift. The overlap with GDPR and other laws increases the overall compliance burden.

3) Documentation, traceability, and logs

Maintaining up-to-date documentation and traceability is difficult, especially for iterative models. Paper-based approaches do not scale well without technical support.

4) Cross-functional cooperation

Legal teams can translate requirements, but implementation depends on product and technical teams. Coordination becomes a core operational challenge.

5) Internal expertise gaps

Lack of internal expertise is a recurring hurdle, especially in smaller companies.

Practical approaches for smaller companies

Using AI tools to increase legal efficiency

AI tools can support efficiency for resource-constrained legal teams by helping with brainstorming and producing bullet-point summaries. They are not a substitute for reading the law or establishing facts, but they can help structure thinking and clarify technical topics.

Simplification as a working method

When requirements feel overly complex, simplification helps identify the core issues to tackle first. Explaining concepts in very simple terms helps reveal overlaps between laws and reduces the risk of getting stuck in complexity.

Innovation impact: friction now, trust later

The EU AI Act can slow innovation in the short term due to compliance costs and legal uncertainty, especially in organizations whose internal legal teams actively raise compliance requirements.

Over time, transparency and baseline obligations are expected to increase trust. Compliance is positioned to become a competitive advantage, similar to GDPR, as buyers become more cautious about AI and demand clearer explanations of what is being done.

Vendor and third-party risk: a central issue

AI features in vendor products often attract interest, but they rarely trigger deeper questions until a formal vendor assessment begins. Vendor vetting processes benefit from:

  • embedded questions about AI use
  • requests for technical documentation
  • information needed for risk classification

Predefined question lists help identify red flags quickly. A key red flag is a vendor that cannot explain its claims in simple terms. Complex terminology without clear explanations can indicate weak understanding and weak compliance.

Common myths to remove

Myth: “There is plenty of time”

Mapping AI systems, understanding what is in use, and determining applicable requirements takes significant time. Waiting for perfect guidance risks losing years while deadlines continue to approach.

Myth: “The EU AI Act forbids using AI”

The Act does not prohibit AI use in general. It aims to clarify compliant use and limit certain high-risk or harmful practices. Fear-driven avoidance mirrors earlier GDPR misunderstandings.

Handling overlap with sector laws and GDPR

There are no shortcuts. The workable approach is:

  • identify which requirements apply under each relevant law
  • map them in parallel
  • reuse materials and documentation where overlaps exist
  • avoid document silos that become unmanageable

The challenge is typically duplication with slightly different scope, not direct conflict.

Synthetic data: not automatically “free from regulation”

Synthetic data is generated from pre-existing data to produce new datasets. Whether it is truly free of the original data depends on how it is created and whether the presence or visibility of the original data can be ruled out.

Claims that a dataset is “free from regulation” should be treated with suspicion and followed up with questions. Personal data definitions and related concepts can also change over time through court cases and differ across jurisdictions, including differences in areas such as biometrics between the EU and certain US states.

What to expect next: guidance, clarity, and enforcement milestones

Key expectations for the next phase include:

  • increased guidance from EU bodies to provide interpretational material and clarity on definitions
  • narrower gray areas around risk classification
  • more support materials for smaller businesses, such as ready-made checklists

The high-risk applicability and enforcement milestones, including August 2026, are expected to mark a major turning point for organizations that fall into high-risk categories.

Year two: likely biggest challenges

  • Vendor management: obtaining reliable, relevant information from vendors about AI use and models
  • Maintaining an AI system inventory: keeping an accurate overview of AI use inside the organization
  • Continued cross-functional cooperation and simplification of legal requirements
  • Balancing safeguards with access to enough high-quality data for model training, while addressing concerns about legality, bias prevention, and model accuracy

What “good” looks like over the next year

  • Less “black box” AI and more ability to explain what happens inside systems
  • Clearer technical and legal documentation, including public-facing explanations of how data is used in AI
  • Stronger internal relationships between legal, product, and technical teams to support implementation
  • Simpler documentation that improves understanding for internal stakeholders, customers, suppliers, and regulators

Speakers

Ksenia Laputko
Global Head of Data Protection
Ksenia Laputko is Head of Data Protection & AI Governance Officer at Joblio and the founder of BestDPO.net. She has mentored hundreds of privacy professionals worldwide and authored European Data Protection Law: Analysis of European (GDPR), Canadian, and US Regulations. With over a decade of academic and consulting experience, Ksenia lectures in international law and holds every major privacy certification (CIPP/E, CIPP/US, CIPP/A, CIPP/C, AIGP). Having worked across four continents and in multiple sectors from education to tech to AI, she brings a truly global perspective on data protection and AI governance.
Krete Paal
CEO, GDPR Register
Krete Paal is the CEO of GDPR Register, where she leads the development of AI-powered tools that make privacy compliance scalable and practical for organisations across Europe.

With a strong background in data protection and legal tech, from heading Veriff’s DPO Office to earlier work with the Estonian Police and Border Guard, Krete combines deep regulatory expertise with product leadership.

At GDPR Register, she brings a forward-looking perspective on how AI can support GDPR compliance and align with emerging regulations, turning complex requirements into clear, actionable workflows.