As the headline suggests, this was the stark message from the November AI Creative Summit in London, which examined the key use cases and threats of artificial intelligence (AI) in the entertainment sector.
After over 70 years of experimentation, AI is now at a turning point. It is becoming widely accessible and is trialled daily by millions, and there are no signs of it slowing down in 2024 and beyond.
Generative AI is one of five advanced AI capabilities and refers to self-learning algorithms that use existing data, such as text, audio, or images, to produce realistic new content. GlobalData forecasts that the generative AI market will grow at a CAGR of 80% between 2022 and 2027.
Generative AI is moving beyond text and audio to immersive 3D worlds of videos and images, increasing its impact across sectors, particularly in media and entertainment. AI can optimize the creative process by maximizing content efficiency and scale, augmenting creativity, and shortening the feedback loop.
Speaking at the event, Jamie Allan, Head of Media Industries at NVIDIA, said he has seen a democratization effect around AI over the past couple of years. With the power required to run models falling even as their size and quality increase monthly, Allan described AI as being in a true tech labour market now. He believes a hybrid approach to the integration of generative AI and machine learning (ML) is key in the entertainment sector because retraining existing models with proprietary company data is cheaper and easier than building foundation models.
AI brave new world
Currently, generative AI’s output is highly dependent on input expertise, which means prompt engineering is becoming an increasingly valuable skill. Generative AI output was likened to the work of a couple of interns: the material it produces is not perfect straight away, but it is very much workable toward the desired outcome. It was suggested that in the future it will be possible to buy and sell successful prompt inputs. Human expertise is vital at this stage, both as a guardrail and for creative direction.
Between 2024 and 2030, GlobalData research suggests that AI will be used in place of basic CGI and will be capable of creating blogs, articles, and more sophisticated scripts. GenAI currently excels at short-form content, and Eline van der Velden, CEO and Founder of Particle6, believes that within six months it will be creating usable feature film-length content. Despite being good at scripts, GenAI is notoriously bad at subtext—another area where human guardrails are needed.
Several interesting use cases of AI were also discussed at the summit, including voice replication and generation to help connect sufferers of motor neurone disease with loved ones, and World War II letters being brought to life using their text as prompts.
Sanjeevan Bala, Chief Data and AI Officer at ITV, described a co-pilot, rather than auto-pilot, approach to AI use cases in ideation, production, and post-production. AI enables ITV to target advertisements based on positive mentions during a programme. For example, if coffee is mentioned positively in a TV show, a coffee advertisement will appear in the following ad break.
More terrifyingly, HSBC has been trialling GenAI in its ‘Faces of Fraud’ campaign, which creates detailed composites of scammers’ faces from inputs of their voices alone. AI can infer a person’s characteristics, including gender, ethnicity, weight, and age, to create an image. While notable for its ingenuity, this use case will only worsen regulatory problems with bias and data privacy.
The danger of deepfakes
Another topic of contention at the AI summit was deepfakes (visual or audio content manipulated or generated using AI). Deepfakes can be used for more realistic dubbing in post-production, including mouth movement correction. This allows companies such as advertising group WPP to scale e-commerce and advertising content internationally in many languages at almost zero cost. According to GlobalData, 43% of individuals cannot differentiate a deepfake video from a genuine one.
However, there is a very real fear that deepfakes and disinformation will be used to undermine democracy, particularly in the forthcoming 2024 US presidential election. Fake content must be regulated, with copyright and originality protected.
The AI regulation void
There is a void of AI regulation, particularly within the media and entertainment sectors. This must be filled with industry best practices and guidelines. The EU’s AI Act and President Biden’s executive order on managing the risks of AI are a step forward for transparency, watermarking, and content labeling. However, technological challenges in detecting content ownership and the fragility of AI watermarks must be addressed.
The greatest hurdles to businesses integrating AI into their operations are insufficient regulation, a lack of urgency in establishing an AI strategy, and trust. Bias in models and underlying training data must be evaluated and addressed as trust and transparency start from the data level.
Despite the Writers Guild of America’s recent success in securing AI protections for workers, fear of copyright issues is growing; legal teams must be trained to protect company IP and strategy.
Universal Music has filed a copyright infringement lawsuit against AI company Anthropic. Universal and two other music companies allege that Anthropic scrapes their songs without permission and uses them to generate “identical or nearly identical copies of those lyrics” via Claude, its rival to ChatGPT. Google’s Bard model is now reportedly capable of bypassing paywalls and summarizing the content behind them, yet another blow for publishers.
Cracking the copyright challenge
Several companies have taken steps to address copyright and customer liability. OpenAI will underwrite copyright risk for paid versions of its chatbots, and Microsoft will assume commercial customers’ liability for using any AI-generated output.
Elsewhere, Google echoed this stance: “If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved”. However, most disputes and legal precedents around AI and copyright are still in development, so it seems unlikely that OpenAI, Microsoft, and Google will be writing big cheques anytime soon.