OpenAI has strengthened safeguards for its Sora 2 AI video generator after actor Bryan Cranston's voice and likeness appeared in generated videos without his consent, violating the company's official opt-in policy.
"I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way," Cranston said, adding that he reported the incident to SAG-AFTRA, his union.
OpenAI described the videos as "unintentional generations" and, after talks with SAG-AFTRA and Cranston, has "strengthened guardrails around replication of voice and likeness" in Sora 2. Its opt-in policy remains in place, meaning a performer's digital likeness may only be used with their explicit consent.
OpenAI has also promised to handle complaints quickly. Cranston welcomed the changes but stressed that control over one's voice and appearance is a fundamental right for all artists.
The new safeguards are part of a joint declaration signed by SAG-AFTRA, OpenAI, Cranston, United Talent Agency, Creative Artists Agency, and the Association of Talent Agents (ATA). The group also backs the NO FAKES Act, proposed federal legislation that would ban unauthorized digital copies of a person's voice or likeness.
Cranston has been an outspoken critic of AI's role in Hollywood since 2023, calling for respect for actors' rights and warning against replacing human labor with AI.
Silicon Valley's "move fast and break things" mentality is back
Since its launch in early October, Sora 2 has faced steady criticism for generating unauthorized imitations of celebrities and copyrighted content, including entire episodes of "South Park."
OpenAI leaned into launch hype at the expense of strong safeguards, introducing restrictions only after public backlash. The company took a similar approach with its new image model in ChatGPT, as the wave of Studio Ghibli-style images showed. CEO Sam Altman has promised that rights holders will receive a share of revenue, though the details remain unclear.
A recent NewsGuard investigation found that Sora 2 can generate convincing fake videos in minutes with minimal effort. It's a striking change for OpenAI, which in 2019 withheld the full version of its comparatively weak GPT-2 language model over fake-news concerns, yet now ships a video model that NewsGuard warns could be used to spread disinformation.