
Microsoft says it wants to use AI responsibly, but MSN.com tells a different story

Image: MSN.com

Key Points

  • A CNN report criticized Microsoft's MSN AI model for news aggregation, pointing out instances of inaccurate or inappropriate content selection, such as a false story about President Biden and a derogatory obituary.
  • Microsoft has been using automated systems for MSN since 2020, rewriting headlines and replacing images, but the criteria for content selection and the extent of automation remain unclear.
  • Microsoft's Bing Chatbot also faced criticism for providing incorrect information about elections in Germany and Switzerland, raising concerns about the responsible and safe use of AI by the company.

A recent CNN report criticized Microsoft's MSN AI model for news aggregation, citing questionable editorial decisions. This isn't new.

The CNN report highlights instances where the AI selected stories that were either inaccurate or used inappropriate language.

Examples include a story that falsely claimed President Joe Biden fell asleep during a moment of silence for wildfire victims, and an obituary that referred to an NBA player in a derogatory manner.

The report argues that Microsoft's automated system continues to display or generate content containing offensive language and false information without clear accountability.


This is hardly new: In 2020, Microsoft began replacing MSN's human news editors with automated systems that rewrite headlines and swap out images, often with questionable results. Even then, the company said it planned to increase automation, which it apparently did.

MSN aggregates content from other publishers and republishes it on its site. The exact process by which Microsoft selects this content and these publishers is unclear, as is the extent to which automated systems are involved.

But MSN.com isn't the only place where Microsoft's AI information tools are failing in ways that threaten society: A recent study by AlgorithmWatch and AI Forensics, conducted in collaboration with the Swiss broadcasters SRF and RTS, found that Microsoft's Bing Chat gave incorrect answers to questions about upcoming elections in Germany and Switzerland.

The chatbot provided misleading information, such as inaccurate poll results and incorrect names of party candidates. A Microsoft spokesperson responded to the study by saying that the company is committed to improving its services and has made significant progress in improving the accuracy of Bing Chat's responses. But that response does not address the fundamental structural issues with large language models that lead to these errors.


For a company that claims to be committed to the responsible and safe use of AI, this is a pretty poor track record.


Source: CNN