
Top 5 Myths About the EU AI Act — Expert Advice from GDPR Register’s CEO


Starting August 2, 2025, general-purpose AI models must comply with the transparency obligations of the EU Artificial Intelligence Act (AI Act). Krete Paal, CEO of the Estonian privacy tech startup GDPR Register, highlights the most common fears and myths surrounding the regulation – and explains what companies should be doing today.

The EU Artificial Intelligence Act is the world’s first comprehensive regulation designed to make the use of AI safe, responsible, and respectful of fundamental rights. While the goal is commendable, there’s still widespread uncertainty, fear, and misinformation surrounding it.

“It’s been a year since the AI Act was adopted, but in Europe and in Estonia we’re still seeing a wave of anxious questions and exaggerated interpretations. Some have even wondered if the regulation bans AI altogether or if businesses need to leave the EU to keep innovating,” says Paal.


Myth 1: The AI Act will kill innovation

It won’t kill innovation – it will guide it to become smarter, more transparent, and more human-centered.

“This reminds me of 2018 when GDPR came into force. Back then, there was also a lot of confusion and panic. Some companies shut down their websites ‘just in case,’ others overspent on legal audits. Meanwhile, smart businesses turned privacy into a competitive edge,” says Paal.

The AI Act presents a strategic opportunity. Proactive risk analysis and transparency can build trust and secure a company’s license to operate in the AI-driven future.

Myth 2: The AI Act bans AI

Not at all. The AI Act is not a list of bans. Most familiar AI tools – such as chatbots, marketing tools, and analytics systems – fall into the minimal- or limited-risk categories.

“That means their use is allowed with minimal added conditions. For instance, it will be mandatory to inform users that they are interacting with an AI-based tool,” explains Paal.

Only a very narrow set of practices deemed to pose an unacceptable risk is banned outright, such as real-time biometric mass surveillance in public spaces or manipulative AI systems. These are extreme edge cases – not everyday tools.

Myth 3: All AI products are high risk

Only specific use cases are defined as “high-risk” under the AI Act – for example:

  • AI used in pre-selecting job applicants

  • Automated grading of exams

  • AI systems in law enforcement or border control

Meanwhile, tools for content generation, social media personalisation, or production line optimisation are not considered high-risk. The Act also provides clear guidance for ensuring compliance through risk assessments, documentation, and transparency.

Myth 4: Generative AI will disappear

Generative AI is not going away – but it will become more accountable. Tools like ChatGPT are covered by the Act as general-purpose AI models (GPAI).

“Users must be clearly informed when content is AI-generated. Where possible, providers will also need to disclose the datasets used to train the model,” says Paal.

This is crucial in the era of deepfakes and misinformation. The new rules help protect users and build trust in generative AI.

Myth 5: AI compliance is just an IT issue

A dangerous myth is that AI compliance is only a technical concern.

“In reality, the regulation affects multiple business functions – marketing, product development, customer service, legal, and leadership,” Paal emphasises.

Marketing teams need to know when AI-generated content must be disclosed. Product teams must assess the risk category of AI features. Leadership must put risk management and compliance systems in place.

The AI Act is not just a technical manual – it’s a strategic framework. AI is a team sport, and everyone needs to know the rules.

What should companies be doing today?

  • Map your AI use: Identify which AI systems you use or develop. Some provisions are already in effect as of early 2025 – don’t wait.

  • Understand your risk category: Classify your AI systems so you know which obligations apply to you.

  • Build transparency now: Be clear with users and document your AI processes. Trust and governance are increasingly important.

  • Don’t act out of fear – act on facts: Stay informed and seek expert guidance if needed.

About GDPR Register
GDPR Register is an Estonian startup whose platform was developed in collaboration with IT experts to make GDPR compliance simple, logical, and efficient. The platform helps companies and public sector organisations streamline and manage all GDPR-related processes and documentation.

Contact:
Krete Paal
krete.paal@gdprregister.eu

