
Study shows how feed ranking shapes political hostility

Image: GPT-4o prompted by THE DECODER

A new study in Science found that the way social media feeds are ranked directly affects political hostility. The researchers demonstrated this using a technical workaround that removed the need for cooperation from the platform itself.

The idea that social media algorithms intensify political polarization has been debated for years. The suspicion has always been strong, but rigorous data has been limited, mostly because meaningful research typically depends on deep access to platform systems. Now a team of scientists has published the results of a field experiment in Science suggesting a causal link.

The researchers manipulated the feeds of 1,256 U.S. users on X (formerly Twitter) in real time during the 2024 summer election season. When posts expressing antidemocratic attitudes and partisan animosity (AAPA) were filtered out, participants' negative feelings toward the opposing party dropped significantly. When those posts were amplified, hostility rose.

The magnitude of the effect was considerable. According to the authors, the shift in affective polarization produced by the intervention resembled the amount of change usually observed in the U.S. over a three-year period.


A browser extension gives researchers control of the feed

Unlike earlier, often controversial experiments conducted by tech companies, this study did not involve hidden changes to user accounts. Participants were recruited through survey platforms, gave informed consent, and installed a browser extension voluntarily in exchange for a $20 stipend. They knew their feeds would be altered but did not know whether they would see more or fewer toxic posts.

The extension intercepted the data stream for X's "For You" feed. A background large language model analyzed political posts in real time, flagging signs such as support for political violence or attacks on political opponents. Based on the assigned test group, the extension reordered the feed. One group saw toxic posts removed, while another saw them promoted.
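The reordering step described above can be sketched in a few lines. This is a minimal illustration, not the study's actual code: the function and field names (`rerank_feed`, `aapa`, the condition labels) are hypothetical, and the upstream language-model classification is assumed to have already flagged each post.

```python
# Hypothetical sketch of the feed-reordering logic: one group has flagged
# posts removed, another has them moved to the top of the feed.

def rerank_feed(posts, condition):
    """Reorder a feed based on per-post AAPA flags.

    posts: list of dicts with "id" and "aapa" (bool, assumed to be set by
           an upstream language-model classifier).
    condition: "reduce" removes flagged posts, "amplify" surfaces them first.
    """
    flagged = [p for p in posts if p["aapa"]]
    clean = [p for p in posts if not p["aapa"]]
    if condition == "reduce":
        return clean              # hostile posts filtered out entirely
    if condition == "amplify":
        return flagged + clean    # hostile posts promoted to the top
    return posts                  # control: ranking left untouched


feed = [
    {"id": 1, "aapa": False},
    {"id": 2, "aapa": True},
    {"id": 3, "aapa": False},
]
print([p["id"] for p in rerank_feed(feed, "reduce")])   # flagged post dropped
print([p["id"] for p in rerank_feed(feed, "amplify")])  # flagged post first
```

In the real experiment this logic ran inside the browser extension, between X's "For You" response and what the participant actually saw rendered.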

Less toxicity means fewer clicks

The study also carried economic implications. Usage data showed that filtering out hostile content slightly reduced engagement.

Participants who received cleaner feeds spent less time on the platform and liked fewer posts. Higher exposure to AAPA content, on the other hand, correlated with stronger negative emotions such as anger and sadness. These effects were symmetric across political lines, with Democrats and Republicans responding in similar ways to the algorithmic changes.


The authors argue that their method provides a path for researchers to operate independently from platform operators. They contend that scientists no longer have to wait for tech companies to grant access to the inner workings of their systems.

Previous attempts faced far greater constraints. Research conducted with Meta during the 2020 U.S. election on Facebook and Instagram produced mixed results. Those projects were limited to interventions the company approved beforehand.
