On 7 May 2026, the EU Council and European Parliament reached a provisional agreement on a package of amendments intended to streamline and simplify the EU AI Act. The changes form part of the broader Omnibus VII legislative package, the EU’s ongoing effort to reduce regulatory complexity and ease compliance burdens for businesses.
The agreement does not water down the AI Act. The core risk-based framework remains intact. However, the amendments do give companies more time in some areas, while accelerating obligations in others. For organisations developing, deploying or procuring AI systems in Europe, the message is clear: the AI Act may become more workable, but it is not becoming optional.
Official press release ->
The agreement introduces several practical changes that companies should immediately factor into their AI governance and compliance planning.
The most immediate practical impact for many companies is the revised application schedule for high-risk AI rules. Stand-alone high-risk AI systems are expected to become subject to the full set of high-risk requirements from 2 December 2027. High-risk AI systems embedded in regulated products, such as medical devices, industrial machinery or vehicles, are expected to face a later deadline of 2 August 2028.
For companies that have been preparing against a tighter internal timeline, this is welcome breathing room. But it should not be mistaken for a pause. The infrastructure for AI Act compliance takes time to build. Governance frameworks, documentation practices, conformity assessments, human oversight measures and post-market monitoring processes cannot be created overnight.
Organisations that use this additional time strategically will be far better positioned than those that treat the delay as permission to wait.
One area where the agreement accelerates the clock is transparency for AI-generated content. The grace period for implementing technical transparency solutions, including watermarking and provenance-labelling of AI-generated outputs, has been shortened, with the new deadline expected to be 2 December 2026.
For companies deploying generative AI tools, this is no longer a medium-term concern. Customer-facing chatbots, content generation platforms, synthetic media tools and other AI systems capable of producing text, images, audio or video should now be reviewed from a transparency perspective.
The key question is no longer whether AI-generated content should be traceable. The question is whether the organisation has the technical, contractual and operational measures to make that traceability work in practice.
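As a purely illustrative sketch of what machine-readable traceability could look like in practice, a deployer might attach provenance metadata to every generated output. The field names and structure below are our assumptions, not a format mandated by the AI Act or any standard:

```python
import json
from datetime import datetime, timezone

def label_generated_content(content: str, model_name: str) -> dict:
    """Wrap AI-generated text with machine-readable provenance metadata.

    The field names here are illustrative assumptions, not a schema
    prescribed by the AI Act or any existing standard.
    """
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,   # explicit machine-readable disclosure flag
            "generator": model_name,  # which system produced the output
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_generated_content("Draft customer reply ...", "example-llm-v1")
print(json.dumps(record["provenance"], indent=2))
```

In a real deployment, organisations would more likely rely on an established provenance standard and on the watermarking capabilities of their model vendor; the point of the sketch is that the disclosure must travel with the content, not live in a policy document.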
One of the more technically significant provisions concerns the use of special categories of personal data for bias detection and correction.
The agreement clarifies that such data may be processed for this purpose, but only where strictly necessary. This matters for companies developing or deploying high-risk AI systems, because meaningful bias testing may sometimes require access to sensitive data, such as health data, biometric data or data revealing racial or ethnic origin.
However, this is not a general permission to collect, retain or reuse sensitive data throughout the AI lifecycle. The processing must be limited, justified and documented. Companies will need to show why the data is necessary, how the use is minimised, how access is controlled, and how the processing aligns with GDPR Article 9 requirements, which continue to apply in parallel.
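To make the trade-off concrete, here is a minimal sketch of the kind of bias check that can require a sensitive attribute: comparing favourable-outcome rates across groups. The data, group labels and the idea of flagging the gap are illustrative assumptions; real bias testing involves far more than one metric:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Favourable-outcome rate per group.

    `outcomes` is a list of (group_label, decision) pairs, where
    decision 1 means a favourable outcome. The sensitive attribute
    is used only to compute per-group rates and is not retained.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Illustrative data only.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(sample)
disparity = max(rates.values()) - min(rates.values())
print(rates, disparity)
```

The compliance point is visible in the code itself: the sensitive attribute enters the computation only to produce aggregate per-group rates, which is the kind of "strictly necessary" and minimised use the agreement contemplates.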
The agreement also clarifies registration obligations for AI providers. Providers may need to register their systems in the EU AI database even where they consider the system to be exempt from high-risk classification.
This is practically important for companies operating in or near high-risk categories. A self-assessed exemption should not be treated as the end of the compliance analysis. Legal, compliance and product teams should review AI inventories and determine whether registration obligations apply despite an exemption position.
In practice, this means that internal AI inventories should not only classify systems as high-risk or not high-risk. They should also record the reasoning behind that classification, the exemption analysis, the responsible owner and any related registration decision.
For companies operating in heavily regulated sectors, such as medical devices, machinery, automotive or other product-regulated industries, the agreement introduces a mechanism to address overlapping compliance obligations.
This is a response to one of the most persistent concerns from industry: the risk that companies could be caught between the AI Act and sector-specific legislation that already imposes equivalent or similar AI-related requirements.
The agreement provides for a process through implementing acts to limit the AI Act’s application in certain cases where sectoral law already imposes equivalent AI-specific requirements. The aim is to minimise duplication and reduce unnecessary compliance burden.
For legal and regulatory teams, this is meaningful progress. But it also creates a new dependency: the final compliance position may depend heavily on future Commission guidance and implementing measures. Regulated-sector companies should therefore monitor developments closely and avoid assuming that overlap issues have already been fully resolved.
The agreement also adds a new prohibited practice concerning AI-generated sexual and intimate content. The AI Act will explicitly prohibit AI practices related to the generation of non-consensual sexual and intimate content, as well as child sexual abuse material.
Many companies may assume this is not relevant to them. However, it may have practical implications for platform providers, generative AI developers, AI tool vendors and any organisation deploying open-ended content generation capabilities at scale.
Terms of service, moderation systems, model safeguards, abuse reporting channels and technical restrictions should be reviewed in light of this explicit prohibition.
For companies hoping to use national regulatory sandboxes to test innovative AI applications in a supervised environment, the deadline for competent national authorities to establish those sandboxes has been moved to 2 August 2027.
This means that sandbox access may not be available as quickly as some companies expected. For organisations planning to rely on sandbox participation as part of their compliance or market-entry strategy, this should be reflected in product development timelines.
Sandboxes may still become a valuable route for testing AI systems with regulatory engagement. But they should not be the only compliance strategy.
The agreement is still provisional and must be formally endorsed by both the Council and Parliament before legal and linguistic revision and final adoption. However, the direction is clear enough for companies to act.
The simplification package should be treated as the starting point for practical compliance planning, not as a reason to postpone it.
The AI Act is not just a legal text to monitor. It creates an ongoing need to map AI systems, assess risk, assign responsibilities, document decisions and keep compliance evidence up to date.
That is difficult to manage with scattered spreadsheets and one-off reviews. Companies need a structured way to turn AI Act requirements into practical tasks that legal, privacy, compliance and product teams can actually follow.
Our EU AI Act compliance software helps organisations move from uncertainty to a clear, structured compliance process.
The EU AI Act remains one of the most consequential pieces of technology regulation in a generation. The simplification agreement may make the framework more workable for companies, especially those dealing with high-risk systems or regulated products.
But the key compliance message has not changed. Companies still need to understand where AI is used, classify systems correctly, document decisions, manage risks, and prepare for transparency, oversight and registration obligations.
The AI Act is becoming more practical. It is not becoming optional.
Adjusted timelines do not change the underlying work: companies still need to map where AI is used, assess risk, document decisions and prepare for transparency and governance obligations.
GDPR Register’s EU AI Act compliance software helps organisations map AI systems, assess risk and manage AI Act documentation in one structured workflow.
Learn more about GDPR Register’s EU AI Act compliance software ->