
At a U.S. Senate committee hearing, media experts and academics discussed the impact of generative AI on journalism.

A central theme of the hearing was the existential crisis facing local journalism: speakers emphasized the critical role of local reporting in democracy and pointed to declining revenues, rising costs, and disinformation as contributing factors to this crisis. AI technologies have the potential to exacerbate these challenges by using the work of newspapers and writers to train AI models without pay or credit. This could lead to even more "news deserts" where local journalism is absent.

Roger Lynch, CEO of Condé Nast, disagreed with the view that training generative AI falls under fair use. "Fair use is to allow criticism, parody, scholarship, research, news reporting," he said, not to enrich tech companies that simply don't want to pay.

He also pointed to the recent New York Times lawsuit against OpenAI; the newspaper was able to extract near-exact copies of its articles from ChatGPT. OpenAI, for its part, argues that reproducing copyrighted content verbatim is a bug, accuses the New York Times of violating its terms of service, and maintains that its use of the content is protected by fair use. "These technologies should be licensing our content. If they're not, Congress should act," Lynch said.


National Association of Broadcasters considers licensing necessary and feasible

Curtis LeGeyt, president and CEO of the National Association of Broadcasters, also pushed back on the notion that licensing training data is impossible. "The notion that the tech industry is saying that it's too complicated to license from such an array of content owners doesn't stand up," he said. "Over the past three decades, local TV broadcasters have literally done thousands of deals with cable and satellite systems across the country for the distribution of their programming."

In addition to issues of copyright and fair use, the discussion also touched on the potential for AI to misidentify or misattribute statements and spread misinformation. Some speakers expressed concern that AI could replace human journalists, leading to a loss of trust and compromising the quality of journalism. Some called on Congress to take legislative action. Suggestions included frameworks for AI transparency, licensing, updating antitrust laws, and clarifying that Section 230 on social media liability does not apply to AI.

"There's just no business model for us in that ecosystem"

Danielle Coffey, CEO of the News/Media Alliance, an industry trade group, pointed out that beyond ChatGPT, AI tools like Microsoft Bing and Perplexity, which crawl the web and function as search engines, can also summarize articles. She also called for tech companies to be required to make their training data searchable.

At the end of her testimony, Coffey summed up the core issue: "We find that 65% of users don't leave these walled gardens and click through, which is the only way we can monetize through advertising. AI is only going to make the situation much worse, because if you have summaries and there's nothing left you need from the original article, this will become an existential threat to our industry, and there's just no business model for us in that ecosystem."

Summary
  • Media experts and researchers discussed the impact of generative AI on journalism at a US Senate hearing.
  • Roger Lynch, CEO of Condé Nast, among others, argued against the view that generative AI falls under fair use.
  • Suggestions for regulating AI in journalism included transparency, licensing, updating antitrust laws, and clarifying that Section 230 on social media liability does not apply to AI.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.