
According to a recent study, people view headlines labeled as "AI-generated" as less trustworthy and are less likely to share them - even if the content is accurate or was created by humans. They assume the articles were written entirely by AI.


Researchers at the University of Zurich conducted two experiments with nearly 2,000 participants to examine how labeling headlines as "AI-generated" affects perceived accuracy and willingness to share.

The results show: Labeling reduced credibility and sharing likelihood, regardless of content accuracy or whether it was actually created by AI or humans. Even human-written headlines falsely labeled as AI-generated were viewed just as skeptically as genuine AI headlines.

In a survey, few participants said they could tell just by reading whether an article was written by a human or a machine, or that they trust AI to write accurate news articles.

Image: Altay et al.

The second study explored reasons for this mistrust. Researchers explained to participants what "AI-generated" means, from a weak definition where AI only improved clarity and style, to a strong definition where AI chose the topic and wrote the entire article.

Skepticism only occurred when participants weren't given a definition or were presented with the "strong" definition. The researchers conclude people assume headlines labeled as AI-generated were created entirely by AI.

Recommendations for news organizations

The study authors caution against simplistic use of AI labels, as these could deter useful AI content. They suggest harmful AI content should be explicitly marked instead of just labeled as AI-generated. Transparency about label meanings is also important to prevent false assumptions.

These findings align with other recent studies showing general skepticism toward AI-generated news. One study found similar sharing willingness for AI-generated and human-generated fake news, though both were seen as less credible than real news.

A Reuters Institute survey also revealed most people consider AI-generated news less valuable than human-written content.

Recommendation

Experts recommend news organizations use AI thoughtfully, especially for support tasks and new presentation formats. Caution is needed when creating content. Transparent communication about AI use is crucial - and all content, whether created with or without AI, should be human-verified. Otherwise, organizations risk losing audience trust.

Meta wants to better identify and label AI images and content | Image: Meta AI

AI content labeling matters not just in news, but also on social media. Meta now allows users to add an "AI info" tag to posts. However, the tag does not reveal how much AI contributed to an image - whether it was fully generated, had its sky enhanced, or only had minor elements removed.

Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Summary
  • A study from the University of Zurich shows that people rate headlines labeled "AI-generated" as less credible and are less likely to share them - even if the content is true or created by humans.
  • Skepticism was particularly strong when participants were given no definition of "AI-generated" or were told that an AI chose the topic and wrote the entire article. The researchers conclude that people assume content labeled as AI-generated was created entirely by AI.
  • Experts recommend that news organizations use AI carefully, primarily in a supportive way. Caution is advised when creating content. Transparency about the use of AI and human review of all content is critical to avoid a loss of trust.
Jonathan works as a freelance tech journalist for THE DECODER, focusing on AI tools and how GenAI can be used in everyday work.