When, in October 2025, the European Commission proposed delaying key parts of the EU’s landmark AI Act, it framed the move as “clarification” and “simplification”, a pragmatic adjustment to a complex regulatory rollout.
But beneath the technocratic language lies a politically charged decision with real consequences: a window of regulatory uncertainty that risks privileging market speed over rights, fragmenting enforcement across member states, and ceding strategic ground in the global AI race.
The draft “Digital Omnibus” proposals would push back major compliance deadlines for so-called “high-risk” AI systems (in some formulations into 2027 or beyond) and add time-limited grace periods for companies to bring models already on the market into compliance, rather than granting a blanket exemption.
What’s being delayed, and who asked for it?
The Commission’s package does not repeal the AI Act; it proposes altering the timing of certain obligations so that some requirements for high-risk systems apply only after technical standards are finalised, or on extended fixed dates (reports speak of deadlines stretching into late 2027/2028, with extra months for generative systems). The changes include retrofit windows for models already on the market and softer transitional provisions for providers.
Trump and his administration see the EU AI Act as a protectionist measure aimed squarely at American tech giants, one “designed to harm or discriminate against American technology”. According to reports, the US Mission to the EU formally lobbied Brussels against the adoption of the EU’s voluntary “code of practice” that complements the AI Act. Trump has also threatened retaliatory tariffs if the EU maintains regulations he considers discriminatory (such as GDPR, the e-Privacy Directive, and the Data Act).
Big Tech agrees with Trump. Its lobbying spending in Brussels in 2025 (year to date) totals over EUR 151 million ($175 million), according to the NGOs Corporate Europe Observatory (CEO) and LobbyControl. Companies have also sent letters, one of which was signed by 46 CEOs of major European companies.
Regulatory certainty vs capture
Supporters of the delay argue the move is pragmatic: standards-setting bodies have not finished the technical norms the Act relies on, and firms legitimately need time to adapt. Yet the political optics and timing matter. When a law designed to safeguard fundamental rights is postponed after an intense lobbying campaign by well-resourced platforms and model providers, it looks less like bureaucratic housekeeping and more like regulatory capture by incumbents.
Critics warn that grace periods allow already-deployed models to continue operating without independent assessment of risks, transparency measures, or mandatory safety checks (precisely the gaps the AI Act aimed to close).
Who pays for the AI Act pause?
Delays do not distribute risk evenly. Citizens, patients, applicants for loans or jobs, and protesters are exposed to biased, opaque, or safety-critical systems in the interim, while the companies that profit from those systems gain breathing room.
Advocacy groups and privacy defenders have described the Commission’s proposals as a “rollback” of digital protections and warned they could weaken data-protection safeguards or enable broader use of personal data for model training without explicit consent. The consequence is not just theoretical. Unchecked deployment of high-risk systems can produce harms that are hard to reverse and disproportionately borne by already disadvantaged groups.
Delays in oversight slow the feedback loop that would detect systemic issues early. In some domains (for instance, medical diagnostics, critical infrastructure control, law enforcement), these are not mere inconveniences but real hazards.
A postponed regulatory framework may therefore produce what economists call moral hazard: firms take greater risks because the cost of detecting or correcting those risks is deferred.
The “retrofit” and grandfathering provisions are the most politically charged elements of a proposed delay.
Allowing legacy models an extended runway to meet transparency, watermarking, or risk-management rules creates a two-tier ecosystem: incumbents can continue operations while challengers face full compliance costs for new deployments. That increases barriers to entry and can entrench dominant players, an outcome the EU’s digital policy architecture has historically tried to prevent. The net effect could be a regulatory framework that looks strong on paper but is toothless in practice for the systems that matter most.
What should the EU do?
If the Commission genuinely needs more time for standards, it should pair any delay with concrete guardrails: grandfathering that is subject to review, mandatory transparency reports from firms benefiting from the delay, accelerated investment in enforcement capacity, and sunset clauses tied to standard-setting milestones.
Crucially, co-legislators (the European Parliament and Member States) must not let procedural convenience become a cover for permanent dilution.
