
A new study in Science found that the way social media feeds are ranked directly affects political hostility. The researchers demonstrated this using a technical workaround that removed the need for cooperation from the platform itself.


The idea that social media algorithms intensify political polarization has been debated for years. The suspicion has always been strong, but rigorous data has been limited, mostly because meaningful research typically depends on deep access to platform systems. Now a team of scientists has published the results of a field experiment in Science suggesting a causal link.

The researchers manipulated the feeds of 1,256 U.S. users on X (formerly Twitter) in real time during the summer of the 2024 election season. When posts expressing antidemocratic attitudes and partisan animosity (AAPA) were filtered out, participants' negative feelings toward the opposing party dropped significantly. When those posts were amplified, hostility rose.

The effect was substantial. According to the authors, the intervention shifted affective polarization by roughly as much as it typically changes in the U.S. over a three-year period.


A browser extension gives researchers control of the feed

Unlike earlier, often controversial experiments run by tech companies, this study involved no hidden changes to user accounts. Participants were recruited through survey platforms, gave informed consent, and voluntarily installed a browser extension in exchange for a $20 stipend. They knew their feeds would be altered, but not whether they would see more or fewer toxic posts.

The extension intercepted the data stream for X's "For You" feed. A large language model running in the background analyzed political posts in real time, flagging signals such as support for political violence or attacks on political opponents. Depending on a participant's assigned group, the extension then reordered the feed: one group saw toxic posts removed, the other saw them amplified.

Less toxicity means fewer clicks

The study also carries an economic implication: usage data showed that filtering out hostile content slightly reduced engagement.

Participants with cleaner feeds spent less time on the platform and liked fewer posts. Higher exposure to AAPA content, by contrast, correlated with stronger negative emotions such as anger and sadness. The effects were symmetric across party lines, with Democrats and Republicans responding to the algorithmic changes in similar ways.

The authors argue that their method provides a path for researchers to operate independently from platform operators. They contend that scientists no longer have to wait for tech companies to grant access to the inner workings of their systems.


Previous attempts faced far greater constraints. Research conducted with Meta on Facebook and Instagram during the 2020 U.S. election produced mixed results, and those projects were limited to interventions the company approved beforehand.

Summary
  • A new Science study shows that adjusting how social media feeds rank political content on X directly influences users' hostility toward the opposing party.
  • Researchers used a browser extension and a background LLM to reorder feeds in real time without relying on platform cooperation, demonstrating that increased exposure to toxic political posts raises affective polarization, while reduced exposure lowers it.
  • Filtering out hostile content slightly reduced user engagement, and the effects appeared evenly across Democrats and Republicans, suggesting broader implications for both platform design and independent research methods.