The AI Act comes at a time when the importance of regulating AI is widely recognized, given the associated risks of misinformation, bias, data privacy breaches, copyright infringement, and cybersecurity threats.

How AI is regulated, however, is a different story, as nations are taking broadly different approaches. So far, very few countries or international bodies have taken firm steps to regulate the industry, leaving the sector and Big Tech to self-regulate.

The AI Act

Proposed in April 2021, the EU AI Act is the first EU regulatory framework for AI. In June 2023, the European Parliament voted on the proposed act to consolidate its position ahead of talks with EU member states. There were 499 votes in favor, 28 against, and 93 abstentions.

The EU AI Act is a proposal to regulate the use of AI, aiming to balance its benefits and risks while fostering innovation. The key priorities the European Parliament has set for AI regulation are safety, transparency, traceability, non-discrimination, and environmental friendliness. The act will not become binding until late 2023 or even 2024. Even once it becomes binding, there will be a grace period of potentially 24 to 36 months before it comes into full force.

The EU AI Act takes a risk-based approach, classifying each application into one of three categories: unacceptable risk, high risk, and limited, minimal, or no risk. The obligations placed on an AI system depend on the risk level it is assigned.

AI systems categorized as posing unacceptable risk are considered a threat to people. Examples include tools that manipulate behavior, social scoring systems, and real-time biometric identification systems. All unacceptable-risk systems will be banned.

High-risk systems are those covered by the EU’s product safety legislation, such as cars, medical devices, and toys, or those that fall into one of the following eight areas:

  1. Biometric identification and categorization of natural persons
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, worker management, and access to self-employment
  5. Access to and enjoyment of essential private services and public services and benefits
  6. Law enforcement
  7. Migration, asylum, and border control management
  8. Assistance in legal interpretation and application of the law

All high-risk AI systems will be assessed before being put on the market and throughout their lifecycle.

Generative AI

Under the AI Act, generative AI models need to adhere to transparency requirements. This means disclosing that content is generated by AI, preventing models from generating illegal content, and publishing summaries of the copyrighted data used for training.

Reaction

Many of the EU’s largest companies wrote a letter warning the European Commission that the drafted legislation “would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.” More than 150 executives from companies including Renault, Heineken, Siemens, and Airbus signed the letter.

The tension between innovation and regulation is particularly pertinent when discussing AI. Many AI entrepreneurs and developers have argued that excessive red tape could impose unnecessary and burdensome hurdles that stifle innovation. However, a lack of regulation can be equally damaging to innovation, as investing in an unregulated space can be seen as too risky. Many are hoping that the ‘Brussels effect’ will kick in, a phenomenon in which other jurisdictions are influenced to adopt EU laws into their own legal frameworks.

If consolidated international principles are not agreed upon, the digital space will become further fragmented.