EU AI Act – Top 5 Myths Debunked by GDPR Register
Starting August 2, general-purpose AI systems must comply with the transparency obligations of the EU Artificial Intelligence Act (AI Act). Krete Paal, CEO of Estonian privacy tech startup GDPR Register, highlights the most common fears and misconceptions surrounding the regulation — and explains what businesses must start doing today to stay compliant and competitive.
The EU AI Act is the world’s first comprehensive regulation designed to ensure that the use of AI is safe, transparent, and aligned with fundamental rights. While the aim is positive, uncertainty, fear, and misinformation continue to circulate — especially in the European and Estonian tech sectors.
“It’s been a year since the AI Act was adopted, but in Europe and in Estonia we’re still seeing a wave of anxious questions and exaggerated interpretations. Some have even wondered if the regulation bans AI altogether or if businesses need to leave the EU to keep innovating,” says Paal.
Myth 1: The AI Act bans AI and will drive innovation out of Europe

This myth is false. The AI Act won't stifle innovation; it will shape it to be smarter, more transparent, and more human-centered.
“This reminds me of 2018 when GDPR came into force. Back then, there was also a lot of confusion and panic. Some companies shut down their websites ‘just in case,’ others overspent on legal audits. Meanwhile, smart businesses turned privacy into a competitive edge,” says Paal.
The AI Act creates a strategic advantage for companies that adopt responsible AI practices. Proactively managing risk and transparency helps build trust and a long-term license to operate in a tech-driven future.
Myth 2: The AI Act bans most AI tools

False. The AI Act is not a list of bans. Most common AI tools, including chatbots, marketing automation, and analytics, are classified as low or limited risk.
“That means their use is allowed with minimal added conditions. For instance, it will be mandatory to inform users that they are interacting with an AI-based tool,” explains Paal.
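The disclosure obligation Paal describes can be illustrated with a minimal sketch. The function name, disclosure wording, and first-turn logic below are assumptions for this example, not requirements prescribed by the AI Act itself:

```python
# Minimal sketch: prepending a transparency notice to a chatbot's reply.
# The disclosure text and function names are illustrative assumptions.

AI_DISCLOSURE = "You are chatting with an AI assistant."

def wrap_reply(reply: str, first_turn: bool) -> str:
    """Attach the disclosure notice on the first turn of a conversation."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(wrap_reply("Hello! How can I help?", first_turn=True))
```

In practice the notice would likely live in the chat UI rather than in the message text, but the point stands: informing the user is a small, one-time implementation cost, not a ban.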
Only practices deemed to pose unacceptable risk, such as real-time biometric surveillance in public spaces or manipulative AI, are banned outright. These are rare and do not affect most businesses.
Myth 3: Every AI system counts as high-risk

No. Only specific use cases are defined as high-risk under the AI Act, including:
- AI used in recruitment and other employment decisions
- credit scoring and access to essential services
- biometric identification systems
- safety components of critical infrastructure
- AI used in education, law enforcement, and the administration of justice
By contrast, content generation tools, recommendation systems, and manufacturing automation are not considered high-risk. The Act offers clear compliance pathways through documentation, risk assessments, and transparent design practices.
Myth 4: Generative AI will be banned

False again. Generative AI isn't going anywhere, but it will become more transparent and accountable under the Act.
“Users must be clearly informed when content is AI-generated. Where possible, providers will also need to disclose the datasets used to train the model,” says Paal.
These rules are especially important in the era of deepfakes and misinformation, helping to protect users and build trust in AI technologies like ChatGPT and image generators.
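The labelling requirement Paal mentions can be sketched in a few lines. The field names and label text below are assumptions chosen for illustration; the Act specifies the obligation to inform, not a particular implementation:

```python
# Illustrative sketch: marking a content item as AI-generated before
# publication, so readers see a visible label. Names are assumptions.

from dataclasses import dataclass

@dataclass
class ContentItem:
    body: str
    ai_generated: bool

def render(item: ContentItem) -> str:
    """Append a visible label when the content was machine-generated."""
    label = "\n[Label: this content was generated with AI]" if item.ai_generated else ""
    return item.body + label

print(render(ContentItem("Quarterly outlook, drafted by our assistant.", True)))
```

Real systems might also embed machine-readable provenance metadata (for example, watermarks) alongside the visible label, but the compliance principle is the same: the reader must be able to tell.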
Myth 5: AI compliance is only the IT department's problem

This myth is dangerous. AI compliance is not just the responsibility of developers or technical teams.
“In reality, the regulation affects multiple business functions — marketing, product development, customer service, legal, and leadership,” Paal emphasises.
Marketing teams must know when to disclose AI-generated content. Product managers must assess if new features are high-risk. Company leadership must ensure governance, policies, and compliance structures are in place. The AI Act is a cross-functional framework — and AI is a team sport.
About GDPR Register

GDPR Register is an Estonian privacy tech startup built in collaboration with legal and IT professionals to make GDPR compliance logical, simple, and effective. The platform helps businesses and public sector organisations manage all GDPR-related processes and documentation in one place.
Krete Paal
CEO, GDPR Register
krete.paal@gdprregister.eu