ElevenLabs, an artificial intelligence (AI) startup that converts text to speech, has reportedly achieved a valuation of over $1bn after a new funding round raised $80m, underscoring investor enthusiasm for the technology.
The company’s flagship product allows users to type text that is then read by one of a series of “lifelike” AI-generated voices. It is augmented by a host of features, including speech-to-speech AI dubbing, which translates a person’s voice into another language while “preserving voice characteristics”, and the ability to generate an AI copy of your own voice that can then be sold to others on a marketplace.
Marketed primarily to the advertising and media sectors, the product is used by games companies, audiobook providers and media organisations including TMZ and LadBible. The company also reports that its technology is used by employees at 41% of Fortune 500 companies.
Deepfakes and AI fraud
As the technology evolves, however, fears are growing about its potential misuse. Last year, ElevenLabs announced on X (then Twitter) that “a set of actors” were misusing its technology for malicious purposes.
Though it did not go into detail about those uses, technology that replicates voices can be used to create convincing deepfakes: fraudulent audio or video that purports to feature an individual who had no involvement with the content. Deepfakes often feature celebrities or politicians and can be used to spread fake news, defame individuals and create fake pornography.
This is particularly concerning in the run-up to the 2024 US election, GlobalData analyst Emma Christy told Verdict: “Audio deepfakes are troubling as it is easier and cheaper to replicate voice without the corresponding video, and they are difficult for even technology to detect. A significant number of people will be unable to discern deepfake audio from reality, with catastrophic implications for countries holding elections this year.
“As AI technology becomes more accessible, in part due to the proliferation of open-source AI, and convincing audio deepfakes become more pervasive, voters could distrust potentially legitimate material – a problem known as the liar’s dividend, where it becomes unclear what is real and what is fake.”
Can it be stopped?
A GlobalData report on misinformation sees deepfakes as a core part of digital misinformation and suggests that regulators are likely to take increased action against this kind of content in future.
China passed landmark legislation last year that requires deep synthesis service providers (including speech generators) to, among other things, inform an individual whose voice is being altered by the software and obtain their explicit consent. Similar legislation has been introduced in the US, though it has not yet passed the House or Senate, so any substantive regulation in the country is unlikely to come soon.
Even so, companies like ElevenLabs may find themselves constrained by other forces. SAG-AFTRA, a union that represents Hollywood actors, went on strike last year for 118 days over payment structures and the use of AI in the entertainment industry. Wired called generative AI “the major sticking point” in the negotiations, and the strike ended in a deal that included consent provisions for digital replicas of actors similar to those seen in China’s legislation.
This does not preclude businesses from using completely AI-generated voices, however, and ElevenLabs’ early introduction of a marketplace for voices should help the company to stay on the right side of regulation for now. As its adoption grows, however, government oversight is likely to as well.