A senior Indian government official has warned that social media companies will be held accountable for AI-generated deepfakes posted on their platforms, in a fresh crackdown on harmful AI content.

Rajeev Chandrasekhar, India’s minister of state for electronics and IT, said that social media companies will be made to abide by “very clear and explicit rules” in dealing with deepfake content.

Chandrasekhar said that the country had been “woken up early” to the dangers of deepfakes because of its large online population. 

The stark warning follows an advisory published by the Indian government in December ordering all social media platforms to comply with Indian law on illegal content. 

The advisory warned platforms to “identify and remove misinformation which is patently false, untrue or misleading in nature and impersonates another person, including those created using deepfakes”. 

Jake Moore, global cybersecurity advisor at internet security company ESET, told Verdict that this is the correct approach to tackling deepfakes, and that other countries will likely be monitoring these strict rules and taking notes on how they work.


“Deepfake technology is fast becoming an inevitable beast of its own and needs to be contained as best it can,” Moore said.

The UK, which recently outlined its Online Safety Bill, made an amendment in 2022 to make non-consensual deepfake pornography illegal in the country.

Prime Minister Rishi Sunak recently announced that the government was exploring the use of AI labelling to help quell the deepfake issue.

As part of the initiative, pictures and videos created with an AI model would have to be clearly labelled.
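One way such labelling could work in practice is a cryptographically signed provenance tag attached to a file's metadata, so a platform can later verify both that content was declared AI-generated and that it has not been swapped out since. The sketch below is a hypothetical toy scheme (the key, field names and functions are illustrative assumptions, not the UK proposal or any real standard such as C2PA):

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI generator (illustrative only).
SECRET_KEY = b"generator-signing-key"


def label_media(media_bytes: bytes) -> dict:
    """Attach an 'ai_generated' label, bound to the content by a signed hash."""
    content_hash = hashlib.sha256(media_bytes).hexdigest()
    label = {"ai_generated": True, "content_sha256": content_hash}
    payload = json.dumps(label, sort_keys=True).encode()
    label["signature"] = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return label


def verify_label(media_bytes: bytes, label: dict) -> bool:
    """Check the signature and that the label matches this exact content."""
    claimed = {k: v for k, v in label.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, label["signature"])
            and claimed["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())


fake_frame = b"pixels of an AI-generated image"
tag = label_media(fake_frame)
print(verify_label(fake_frame, tag))          # True: label matches content
print(verify_label(b"tampered pixels", tag))  # False: content was altered
```

Binding the label to a hash of the content is what makes the scheme enforceable: stripping or copying the label onto different media breaks verification.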

However, the UK and other countries have struggled to keep up with the rapid advancements of AI.

Most deepfake content is not currently illegal, and considerable ambiguity remains over its legal status.

Do we have the tech to stop deepfakes?

As AI continues to grow in sophistication, social media platforms have come under increasing pressure to come up with their own rules to combat deepfakes.

Meta, TikTok and X all require manipulated media to be taken down or labelled. Google and Meta have also recently announced that political campaign content altered with AI must be disclosed.

Moore believes the technology needed to reliably spot a deepfake is not yet accurate enough to deploy.

“There is no simple code or feature that makes deepfakes stand out quickly by using technology, so it will mean an extensive human approach will be necessary – much like with spotting misinformation,” Moore said.

However, Jose Luis Riversos, security researcher and consultant at cybersecurity company Trustwave, believes the same AI that creates deepfakes can be used to effectively combat them.

“We can use AI to help mitigate a deepfake’s impact, essentially using the same technology that created the deepfake to prove it is a fabrication,” Riversos said.

“Just as there are applications to identify essays created by ChatGPT, there should be a way to adjust these tools or create new ones that determine whether a video is real or a deepfake.”
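The detection approach Riversos describes boils down to comparing a suspect piece of media against a trusted signal and flagging large deviations. Real detectors use trained neural networks; the toy sketch below (all names and the threshold are illustrative assumptions) only shows that compare-and-flag shape, using a simple average-hash fingerprint of an 8x8 grayscale frame:

```python
# Toy illustration of deepfake flagging (not a real detector): compare a
# suspect frame's perceptual fingerprint against the authentic original's.

def average_hash(frame):
    """64-bit perceptual hash of an 8x8 grayscale frame (pixel values 0-255):
    each bit is set when that pixel is brighter than the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def looks_manipulated(original, suspect, threshold=10) -> bool:
    """Flag the suspect frame if its fingerprint drifts too far (threshold
    is an arbitrary illustrative value)."""
    return hamming(average_hash(original), average_hash(suspect)) > threshold


original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
untouched = [[p + 1 for p in row] for row in original]  # minor compression noise
swapped = [[255 - p for p in row] for row in original]  # heavily altered region

print(looks_manipulated(original, untouched))  # False: fingerprint unchanged
print(looks_manipulated(original, swapped))    # True: fingerprint inverted
```

Perceptual hashing tolerates benign re-encoding while catching gross alteration, which is why variants of it underpin some platform media-matching systems; detecting a seamless AI face swap, as the article notes, is a much harder problem.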