Former Meta executive Nick Clegg is warning that a legal requirement for tech companies to secure permission before using copyrighted content to train AI models would devastate the UK's AI sector.
Clegg argues that such a rule would "basically kill the AI industry in this country overnight." He calls the idea of a universal permission requirement "somewhat implausible" given the scale of data involved: "I just don't know how you go around, asking everyone first. I just don't see how that would work."
At the center of the debate is whether AI companies should be required to obtain permission (opt-in) or whether content owners should have to opt out, and whether licensing fees should be mandatory when AI providers use third-party material.
Clegg supports a clear opt-out approach, saying, "It seems to me to be not unreasonable to opt out," but maintains that expecting companies to ask permission in advance "just collides with the physics of the technology itself."
At the same time, artists such as Elton John and Paul McCartney are calling for stronger protections for intellectual property and warning that current plans could threaten the livelihoods of millions in the UK's creative sector.
The backdrop to these comments is a recent vote in the British Parliament, where lawmakers rejected a proposal to increase transparency in how protected works are used for AI. Clegg made his remarks during an event promoting his upcoming book, "How to Save the Internet," set for release in September.
Meta has also maintained that broad licensing for training data is unworkable, citing prohibitive costs and the sheer scale of data required for AI development. According to the company, there is currently no viable market for licensing all the data AI models need.
OpenAI expressed a similar position to the UK Parliament in December 2023, stating that it would be impossible to train today's AI systems without using copyrighted content, since "copyright today covers virtually every sort of human expression."
Copyright disputes over AI training escalate
Legal and regulatory challenges are piling up. A US judge in San Francisco recently questioned whether Meta can legally use copyrighted books for AI development without explicit permission. "You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products," said Judge Vince Chhabria.
A recent report from the US Copyright Office went further, rejecting the idea that all data used for AI training should automatically qualify as fair use. The agency said only certain kinds of training might be considered transformative.
Shortly after this report was published, Shira Perlmutter, the head of the Copyright Office, was dismissed by the Trump administration. The US government remains in close contact with the AI industry, including figures like Elon Musk (xAI) and Sam Altman (OpenAI).