The EU has launched a public consultation on draft election security mitigations in the hopes of tackling risks associated with generative AI (GenAI) and deepfakes.
The proposed recommendations encompass a comprehensive approach, addressing content moderation resourcing, service integrity, transparency in political ads and media literacy.
The guidelines, developed under the recently revamped e-commerce rules known as the Digital Services Act (DSA), focus on nearly two dozen platform giants and search engines.
The EU aims to ensure these platforms, designated under the DSA, implement measures to mitigate risks related to GenAI and its potential misuse during electoral processes.
Concerns about the impact of advanced AI systems such as large language models have surged since the viral rise of GenAI tools like OpenAI’s ChatGPT.
The EU aims to confront the challenges posed by these technologies, which have the ability to produce realistic text, images, videos and other synthetic content.
The guidelines acknowledge the potential for GenAI to mislead voters and manipulate electoral processes by creating and disseminating inauthentic content.
The proposed measures include clear and persistent labelling of GenAI-altered images, deepfakes and other media manipulations. The EU suggests that labels should be prominent and effective, and should travel with content when it is reshared.
Social media platforms and search engines have been encouraged to provide users with accessible tools to easily add labels to any AI content.
The guidelines recommend that social media platforms draw on best practices from the recently agreed AI Act and the non-legally binding AI Pact.
Platforms have been urged to watermark any AI-generated content themselves where possible, with specific attention to content involving candidates, politicians or political parties.
Under public consultation until 7 March, the draft guidelines advocate for “reasonable, proportionate and effective” mitigation measures by tech giants.