
Ed Newton-Rex, head of the audio team at Stability AI, has resigned, citing a disagreement over whether training generative AI models on copyrighted works qualifies as 'fair use'.


In a public statement, Newton-Rex expressed his concerns: "I don’t agree with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’."

He acknowledged the thoughtful approach of many at Stability AI and praised the company for its support in developing Stable Audio, an AI music generator based on licensed training data that shares revenue with rights holders. However, he said that this had not changed the prevailing view within the company about fair use.

US Copyright Office considers fair use in generative AI

The issue of fair use in generative AI came to the forefront when the US Copyright Office recently invited public comments on the subject, and Stability AI was among many companies that responded. In their 23-page submission, Stability AI stated, "We believe that AI development is an acceptable, transformative, and socially-beneficial use of existing content that is protected by fair use."


Fair use is a legal doctrine that allows limited use of copyrighted material without requiring permission from the rights holders. According to Newton-Rex, one factor affecting whether copying is fair use is "the effect of the use upon the potential market for or value of the copyrighted work."

He believes that today's generative AI models can be used to create works that compete with the copyrighted works they are trained on, which challenges the idea that training AI models in this manner can be considered fair use.

AI training without consent is morally wrong

Apart from the fair use argument, Newton-Rex also believes that training generative AI models without permission is morally wrong. He said that companies worth billions of dollars are training AI models on creators' works without permission, potentially undermining the creators' livelihoods.

Despite his disagreement with Stability AI's fair use stance, Newton-Rex remains a supporter of generative AI, having worked in the field for 13 years. However, he emphasizes that he can only support generative AI that does not exploit creators by training models on their work without permission.

Newton-Rex hopes that others within generative AI companies will speak up about the fair use issue and push for a change in how creators are treated in the development of generative AI technology.


High licensing fees could slow generative AI, say AI companies

In addition to Stability AI, AI companies such as Meta, Google and OpenAI have also submitted comments to the US Copyright Office, arguing that training AI models with copyrighted material is fair use and does not infringe the rights of copyright holders. Meta compared generative AI to a printing press, a camera or a computer, and argued that high licensing fees for AI training data could slow the development of generative AI.

Google and OpenAI argued for a flexible interpretation of fair use and warned against premature legislation that could stifle innovation and limit the potential of AI technology.

Summary
  • Ed Newton-Rex, head of the audio team at Stability AI, has resigned due to disagreements with Stability AI over the application of fair use in the training of generative AI models.
  • The US Copyright Office is currently investigating the issue of fair use in the context of generative AI and has invited public comment from companies including Stability AI, Meta, Google, and OpenAI.
  • Newton-Rex argues that training generative AI models on copyrighted works without permission is not fair use and is morally wrong, and he hopes that other employees at AI companies will push for a change in how creators are treated.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.