A new study from Stanford University, the University of Washington, and Northeastern University demonstrates how social media algorithms can measurably alter users’ political attitudes. Researchers developed a browser extension that re-ranks posts on X (formerly Twitter) based on their level of partisan hostility. The results, published in Science, show that even temporary changes to algorithmic exposure can shift public perceptions of opposing political groups.
How the Study Worked
The study involved over 1,200 participants who consented to having their X feeds modified for ten days before the 2024 US presidential election. Half the group used a browser extension that pushed highly divisive content (posts calling for violence or extreme actions against political opponents) lower in their feeds. The other half used an extension that increased exposure to such content.
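The re-ranking idea can be illustrated with a minimal sketch. This assumes each post already carries a hostility score from some upstream classifier; the function and field names here are illustrative, not taken from the paper, and the real extension operates on the live X feed rather than a list of dictionaries.

```python
def rerank_feed(posts, hostility_threshold=0.8, demote=True):
    """Reorder a feed so that posts scored at or above the hostility
    threshold move to the bottom (demote=True) or to the top
    (demote=False), preserving original order within each group."""
    flagged = [p for p in posts if p["hostility"] >= hostility_threshold]
    rest = [p for p in posts if p["hostility"] < hostility_threshold]
    return rest + flagged if demote else flagged + rest

# Toy feed: posts 1 and 3 exceed the (assumed) hostility threshold.
feed = [
    {"id": 1, "hostility": 0.90},
    {"id": 2, "hostility": 0.10},
    {"id": 3, "hostility": 0.85},
    {"id": 4, "hostility": 0.20},
]

print([p["id"] for p in rerank_feed(feed)])                # divisive posts demoted
print([p["id"] for p in rerank_feed(feed, demote=False)])  # divisive posts boosted
```

The two branches mirror the two study arms: the same scoring step drives both the down-ranking and the increased-exposure conditions, differing only in where flagged posts are placed.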
The key finding is that simply altering algorithmic prioritization had a significant effect. Participants exposed to less polarizing content showed a measurable improvement in their attitudes toward the opposing party. This change, averaging two points on a 100-point scale, is equivalent to roughly three years of natural shifts in American political polarization.
Bipartisan Effects and Emotional Impact
The shift in attitudes was consistent across the political spectrum: both liberal and conservative users showed similar changes. The researchers also observed an immediate emotional response: participants exposed to less hostile content reported lower levels of anger and sadness while using the platform. However, these emotional effects did not persist after the study concluded.
Implications for Platform Regulation
This research highlights that platforms have the power to reduce polarization through algorithmic adjustments. Down-ranking extreme content can demonstrably improve attitudes toward opposing groups. The study also points to a practical workaround: these interventions can be implemented “without platform collaboration,” meaning researchers and independent developers can modify feeds directly via browser extensions.
This suggests an alternative to relying on social media companies to self-regulate. The tool tested could be adapted to other platforms, although the study acknowledges that its current form is limited to browser-based X access (it does not affect the mobile app).
Long-Term Questions Remain
The study does not measure the lasting impact of reduced exposure to divisive content. It remains unclear whether these shifts in attitude would persist over time or if users would eventually revert to their previous biases. Nevertheless, the findings offer a direct, measurable link between algorithmic design and political polarization.
The study provides compelling evidence that social media platforms are not neutral arbiters but actively shape public perceptions. Algorithmic adjustments can be a powerful tool for mitigating political hostility, though more research is needed to understand long-term effects.