The collaboration between OpenAI, an artificial intelligence research and technology company, and SAG-AFTRA, a performers’ labour union, marks a pivotal moment for the use of AI in Hollywood.
The move followed a viral controversy in which users of OpenAI’s Sora 2 model, an AI text-to-video generator, created a realistic video of actor Bryan Cranston appearing alongside a digital likeness of Michael Jackson, without the actor’s consent.
The deepfake sparked outrage among performers and unions, reigniting long-simmering concerns about the unauthorised use of human likeness in synthetic media. Cranston and SAG-AFTRA quickly raised the issue with OpenAI, prompting the company to tighten its usage policies and publicly reaffirm its stance that any replication of a performer’s image, likeness, or voice must be done with explicit opt-in consent.
Reinforcing guidelines and licensing frameworks
In a joint statement, OpenAI, Cranston, and SAG-AFTRA confirmed that OpenAI is developing clearer guidelines and enforcement tools to prevent “unintentional generations” of real people in AI content. The incident served as a wake-up call for OpenAI executives, prompting the company to put measures in place to avoid similar scenarios in the future.
For the first time, a major AI company and a prominent actors’ union are working together to create a consent-based model for generative video technology. It marks a shift from the old “opt-out” approach: the new system would require affirmative permission before any replication of a performer can occur. This change could influence future labour contracts, creative tool design, and even national legislation.
How does the technology work?
Sora 2 combines the strengths of diffusion and transformer architectures to generate realistic, temporally consistent video from text prompts. The model first translates a user’s description into a series of 3D “patches”, representations that capture both spatial detail and motion over time. These patches are the equivalent of the tokens used in large language models (LLMs).
Using a diffusion process, the model progressively denoises these patches, refining them into coherent frames that are then decoded into full video sequences. This hybrid design allows Sora 2 to model complex scenes with enhanced realism, motion stability, and synchronised audio.
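To make the pipeline concrete, the sketch below walks through the general idea in Python: a prompt is embedded, a block of noisy spacetime “patches” is iteratively denoised, and the result is decoded into frames. This is a toy illustration of the diffusion-plus-transformer approach described above, not OpenAI’s implementation; all shapes, step counts, and the stand-in `embed_prompt` and `toy_denoiser` functions are assumptions made purely for demonstration.

```python
# Conceptual sketch (not OpenAI's code) of a text-to-video diffusion pipeline:
# embed the prompt, start from noise in spacetime-patch space, iteratively
# denoise, then decode patches into frames. All components are toy stand-ins.
import numpy as np

FRAMES, HEIGHT, WIDTH, PATCH = 8, 4, 4, 16   # tiny illustrative dimensions
NUM_PATCHES = FRAMES * HEIGHT * WIDTH        # spacetime patches ~ "tokens"
STEPS = 50                                   # number of denoising steps

rng = np.random.default_rng(0)

def embed_prompt(prompt: str) -> np.ndarray:
    """Stand-in for a text encoder: map the prompt to a fixed vector."""
    seed = abs(hash(prompt)) % (2**32)
    return np.random.default_rng(seed).normal(size=PATCH)

def toy_denoiser(patches: np.ndarray, cond: np.ndarray, t: float) -> np.ndarray:
    """Stand-in for the transformer: predict the noise to remove.
    A real model attends across all spacetime patches, conditioned on
    the prompt embedding and the current timestep t."""
    return (patches - cond) * t   # pulls patches toward the conditioning

def generate_video(prompt: str) -> np.ndarray:
    cond = embed_prompt(prompt)
    # Start from pure Gaussian noise in patch space.
    patches = rng.normal(size=(NUM_PATCHES, PATCH))
    for step in range(STEPS, 0, -1):
        t = step / STEPS
        predicted_noise = toy_denoiser(patches, cond, t)
        patches = patches - (1.0 / STEPS) * predicted_noise  # one denoising update
    # "Decode" patches back into a frames x height x width video tensor.
    return patches.mean(axis=1).reshape(FRAMES, HEIGHT, WIDTH)

video = generate_video("an actor walking through a rain-soaked street")
print(video.shape)  # (8, 4, 4): one toy value per spatial patch per frame
```

The point of the sketch is the structure of the loop: generation is not a single forward pass but a repeated refinement of the same set of spacetime patches, which is what gives diffusion-based video models their temporal consistency.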
The potential of OpenAI Sora 2
Despite the controversy, Sora 2 continues to demonstrate significant potential for creative industries.
Its ability to transform text into high-fidelity photorealistic videos collapses the barriers between imagination and screen, giving creators the power to visualise ideas without cameras, sets, or crews.
The technology could impact industries beyond film, accelerating advertising campaigns, educational simulations, game development, and virtual storytelling. Its advanced physics modelling, dynamic lighting, and synchronised audio hint at a future where anyone can produce cinematic-quality content on demand.
The bottom line
The Cranston-Sora 2 controversy has accelerated a necessary reckoning. It reminded the public that generative AI is not merely a technical revolution, but a social contract in progress—one that must balance innovation with identity, consent, and human dignity.
Whether OpenAI’s willingness to listen, adjust, and partner endures remains to be seen, but it could mark the beginning of a new chapter in AI-driven storytelling.
