France, Germany and Italy have reached an agreement on the regulation of artificial intelligence (AI), according to a joint paper seen by Reuters. The landmark move is expected to accelerate negotiations for AI regulation across the entire EU.
The agreement follows months of disagreement among key EU member states, which had significantly slowed the path to finalised EU legislation.
All three governments have stated that they support “mandatory self-regulation through codes of conduct” in the development of AI foundation models, the large models that underpin generative AI (GenAI) systems and can produce outputs from one or more prompts.
“Together we underline that the AI Act regulates the application of AI and not the technology as such,” the joint paper said.
“The inherent risks lie in the application of AI systems rather than in the technology itself,” it added.
The European Parliament initially proposed that the codes of conduct should apply only to major AI providers, most of which are based in the US.
However, the three governments argued in the new agreement that this could reduce trust in smaller European AI companies rather than give them an advantage.
The codes of conduct should therefore be binding on everyone, the paper said.
The paper also stated that all developers of AI foundation models must provide model cards.
“The model cards shall include the relevant information to understand the functioning of the model, its capabilities and its limits and will be based on best practices within the developers community,” according to the paper.
As the agreement currently stands, no sanctions will be imposed on those who fail to follow the rules. However, if violations are identified after a certain period of time, a system of sanctions could be set up, according to the paper.
The European Commission, the European Parliament and the European Council are currently discussing how they will position themselves on GenAI regulation.
Amelia Connor-Afflick, senior analyst at research company GlobalData, said the agreement “recognises that the harms and risks of AI lie in its application as opposed to in the technology itself.”
“This distinction attempts to balance regulation and innovation so that the EU can lead in the AI field,” she told Verdict.
Connor-Afflick said the approach to include no sanctions “has received criticism for inadequate consumer protection and for not balancing innovation and regulation.”
Heather Dawe, UK head of data science, machine learning and AI at digital transformation company UST, told Verdict that the agreement will differentiate the EU from other blocs.
“Simply, the clarity regarding how AI is being regulated in the EU will differentiate from other blocs such as the UK which have less clarity,” Dawe said.
“Within the EU, businesses will have a set of rules and guidelines that sets the standards for what Responsible AI is,” she said.