The EU AI Act has become the global blueprint for the governance of AI, a transformative technology.

On August 2, 2025, the second stage of the AI Act came into force, introducing obligations for general-purpose AI (GPAI) models. The act takes a risk-based approach, aiming to ensure transparency and accountability for AI systems and their developers.

Although the act was enacted into law in 2024, the first enforcement stage took effect in February 2025, covering models deemed of “unacceptable risk” – AI systems considered a clear threat to societal safety.

The second wave, implemented this month, covers GPAI models and is arguably the most significant, at least in terms of scope. Further stages are expected in August 2026 (“high-risk systems”) and August 2027 (final steps of implementation).

GPAI compliance timelines under the EU AI Act

From August 2, 2025, GPAI providers must comply with transparency and copyright obligations when placing their models on the EU market. This applies not only to EU-based companies but also to any organisation with operations in the EU. Providers of GPAI models already on the market before August 2, 2025, have until August 2, 2027, to ensure compliance. GPAI models include those trained with more than 10^23 FLOP (floating-point operations) and capable of generating language (whether as text or audio), text-to-image, or text-to-video output.

Providers of GPAI models must keep technical documentation about the model, including a sufficiently detailed summary of its training corpus. In addition, they must implement a policy to comply with EU copyright law.

Within the group of GPAI models, there is a special tier considered to be of “systemic risk”, pertaining to very advanced models that only a small handful of providers will develop. Firms within this tier face additional obligations – for instance, notifying the European Commission when developing a model deemed to pose systemic risk and taking steps to ensure the model’s safety and security. The classification of which models pose systemic risks can change over time as the technology evolves, but it generally covers risks to fundamental rights and safety, and risks related to loss of control over the model. From August 2, 2026, the European Commission’s enforcement powers enter into application for all GPAI models, including “systemic risk” models.


GlobalData analyst Beatriz Valle commented: “The EU AI Act has been developed with the collaboration of thousands of stakeholders in the private sector, at a time when businesses are craving regulatory guidance to provide them with clear operational parameters. It is also introducing standard security practices across the EU, in a critical period of adoption, supporting a harmonised approach that integrates security considerations from the outset and throughout an AI system’s entire life cycle. It is setting a global benchmark for others to follow in a time of great upheaval, a commendable effort that will make history.”

Valle continued: “However, the act has also drawn criticism because of its disproportionate impact on startups and SMBs, with some experts arguing that it should include exceptions for technologies that are yet to have some hold on the general public and do not have a substantial impact or potential for harm. Others say it could slow down progress among European organisations in the process of training their AI models.”