The European Parliament, today (June 14), approved the much-scrutinised EU Artificial Intelligence (AI) Act – marking the first global rules on the pervasive technology.

Approval of the AI Act marks a historic moment in AI regulation, following months of criticism from governments, business figures and human rights groups, as well as disputes among MEPs.

European Parliament president Roberta Metsola announced the outcome of the plenary vote in Strasbourg as a “balanced and human-centred approach … consistent with the EU’s will to be world leaders” in AI regulation. The AI Act imposes a full ban on the use of AI for biometric surveillance, emotion recognition and predictive policing.

Co-rapporteur Brando Benifei said: “All eyes are on us today. While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose.”

The plenary was not completely smooth sailing, as a last-minute amendment tabled by the European People’s Party (EPP) caused division around the contentious topic of remote biometric identification.

The key issue: remote biometric surveillance

With lawmakers torn between ensuring security and mitigating the risk of mass surveillance, remote biometric identification has been the most divisive topic in European Parliament debates leading up to the vote. The AI Act prioritises the latter, endorsing a blanket ban on live remote biometrics in public spaces.


While the four main political parties of the European Parliament agreed not to table alternative amendments, the centre-right EPP pushed for some flexibility on the issue, proposing an amendment that would have allowed various exemptions to live biometric technologies such as facial recognition.

Coordinated by MEP Jeroen Lenaers, the EPP’s amendment would have pushed the AI Act towards a more security-oriented approach. With judicial approval, real-time biometric identification would have been permitted in three exceptional scenarios: to find a missing person, prevent a terrorist attack, or locate a crime suspect.

Tailor-made rules for generative AI

The Act provides two overarching rules regarding generative AI.

Firstly, it mandates the labelling of AI-generated content. There has been widespread concern about the spread of political disinformation through AI tools such as fake imagery, micro-targeting, cloned human voices, political chatbots and facial recognition databases.

Clear identification of AI use in texts, images and videos is intended to blunt the polarising impact of AI-generated disinformation – although the specifics of how such content will be identified are yet to be confirmed.

Retrospective and real-time remote biometric identification has been condemned by Amnesty International as “invasive”, “discriminatory” and potentially “racist”.

Ahead of the plenary vote, the human rights organisation released a statement calling for the EU’s AI Act to prohibit profiling systems, citing research that facial recognition technology magnifies discriminatory law enforcement against marginalised and racialised groups – such as the disproportionate rate of stop-and-search practices among ethnic minorities.

Mher Hakobyan, Advocacy Advisor on AI Regulation at Amnesty International, said that “lawmakers must ban racist profiling and risk assessment systems which label migrants and asylum seekers as ‘threats’; and forecasting technologies to predict border movements and deny people the right to asylum”. It is also crucial that the AI Act blocks the export of any surveillance systems which are not allowed in the EU, according to the statement.

When asked about these human rights objections, co-rapporteur Brando Benifei said: “We will deal with AI used in border management and migration contexts as ‘high-risk’”, referring to systems that carry an “unacceptable level of risk”. These include systems that could influence elections.

Space to innovate?

The EU AI Act has faced criticism for stifling innovation. In the UK’s own bid to set a global standard for AI regulation, British government officials labelled the Act “draconian”, while Sam Altman, CEO of OpenAI, claimed that the Act was “over-regulating” during his (quickly reversed) threat to pull ChatGPT from the EU. The Brookings Institution think tank released a critical study outlining how the drafted version of the Act could limit the type of research behind general-purpose AI (GPAI) tools like GPT-3.

If mismanaged, open-source GPAI feeds a variety of legal, technical, societal and ethical risks – but experts have said that restricting it will curb transparency, auditing and citizen trust. As put by Alex Engler, Technology Analyst for Brookings: “Without open-source GPAI, the public will know less, and large technology companies will have more influence over the design and execution of these models.”

In response, MEPs have agreed on exemptions for research activities and open-source AI components. The Act also provides for “regulatory sandboxes” on AI. First piloted by the Spanish government and European Commission last year, these sandboxes are controlled environments created by authorities to test AI developments before public release.

Co-rapporteur Dragoş Tudorache described an “ideological battle in this house on definitions [of AI types]”, but promised to avoid the kind of ambiguity seen with the EU’s GDPR laws in 2019, when “companies were left in legal battles” over definitions.

With the Act, EU lawmakers have taken an unprecedented step in regulating how companies use AI, setting up a confrontational dynamic between Brussels and the American tech behemoths funnelling billions into the technology.

MEPs have confirmed that final negotiations with the EU Council and member states will begin tonight.