The Rage-Scroll Loop: How Twitter’s Algorithms Trap Us in Filter Bubbles and Outrage
Why Outrage Travels Faster Than Truth on X/Twitter—and What It Means for Writers and Readers
We all know the experience: you see a tweet that pushes some anger button, you click, reply, maybe share. Then the next feed item is similar. Before long, your X/Twitter timeline feels like an echo chamber of rage. What’s really happening here, and what are the costs to public discourse?
What the Research Tells Us
A 2025 study by Smitha Milli et al. confirms something many of us have felt: emotionally charged content, especially anger-laden content, is heavily amplified by Twitter’s engagement-based algorithm. Relative to a reverse-chronological timeline, Twitter shows significantly more content that expresses out-group animosity and anger—and users report feeling worse about political out-groups after seeing those tweets. (PMC)
Another piece of evidence: the study Evaluating Twitter’s algorithmic amplification of low-credibility content (Corsi, 2024) found that low-credibility URLs perform disproportionately well on Twitter when paired with high engagement, particularly from influential/verified users. This suggests that posts which may be misleading or false get a boost if they generate enough traffic. (SpringerOpen)
Meanwhile, Tulane University’s “rage clicks” research shows that people are more likely to engage with content that explicitly challenges their beliefs than with content that simply affirms them. This “confrontation effect” means content evoking disagreement or outrage gets more traction. (Tulane News)
“The filter bubble tends to dramatically amplify confirmation bias -- in a way, it’s designed to.” (Eli Pariser, The Filter Bubble, via Goodreads)
How My Feed Felt During Observation
When I paid attention to my own X/Twitter feed, the pattern mirrored these studies. Posts with provocative headlines, sharp moral judgments, or sensational framing got the most visibility. Calm or nuanced posts (long threads, fact-checks, moderate voices) were less common. The feed seemed to lean toward outrage, repetition, and polarization.
I noticed that after I engaged (liked, replied to) even one or two posts of a certain kind, I started seeing far more of the same sort: dissent, anger, conflict. It was as if the algorithm had “learned”.
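That “learning” can be illustrated with a toy sketch. This is not Twitter’s actual system; the topic names, starting weights, and the 1.5× boost factor are all illustrative assumptions. The point is only that a recommender which multiplies a topic’s visibility every time you engage with it will skew the feed after just a couple of interactions.

```python
import random

# Toy engagement-based recommender (illustrative, not X/Twitter's real code).
# Every like/reply multiplies that topic's weight, so future sampling
# favors whatever the user engaged with.
weights = {"outrage": 1.0, "nuance": 1.0, "news": 1.0}

def pick_post(rng):
    """Sample the next feed item in proportion to the learned weights."""
    topics, w = zip(*weights.items())
    return rng.choices(topics, weights=w, k=1)[0]

def record_engagement(topic, boost=1.5):
    """Each interaction multiplies that topic's future visibility."""
    weights[topic] *= boost

rng = random.Random(42)

# Engage with just two outrage posts...
record_engagement("outrage")
record_engagement("outrage")

# ...and a 100-item feed now typically skews toward them,
# even though all three topics started out equally weighted.
feed = [pick_post(rng) for _ in range(100)]
print(feed.count("outrage"), feed.count("nuance"), feed.count("news"))
```

Two engagements take the outrage weight from 1.0 to 2.25, so outrage posts now account for roughly half the sampled feed instead of a third: the loop feeds on itself.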
Rage to Engage: Rhetoric and Risk
Consider three types of “rage to engage” content I observed:
Hyperbolic Headlines / All-Caps / Exaggeration
Creators use bold, extreme language (“THIS IS UNBELIEVABLE”, “They’ve Ruined It”) to provoke instant emotional reaction. The voice is urgent, moralistic.

Selective Framing / Context Omission

A video or quote is stripped from its broader context; on-screen text makes a definitive claim (“Proof they are lying”) that may mislead. The rhetoric relies on implication rather than careful assertion.

Mockery / Out-group Ridicule

Sarcasm, insults, or dismissive language aimed at the opposing side. The tone is often informal, even comedic, designed to draw in shares and replies, not necessarily to engage honestly.
These rhetorical strategies are effective at getting attention, but they carry ethical costs. They tend to magnify mis- or disinformation, encourage tribalism, and devalue truth in favor of virality.
Filter Bubbles and Algorithmic Incentives
The research by Milli et al. shows that Twitter’s algorithm tends to prioritize content that provokes emotional reaction, especially anger or out-group hostility. (PMC) That same pull toward outrage-inducing content shapes what we see, and what we come to expect. The algorithm “rewards” posts that generate likes, shares, and replies, especially if they provoke disagreement. That means “rage to engage” content is essentially economically rational for the platform.
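A minimal sketch shows how this incentive plays out mechanically. The weight values below are assumptions for illustration (X/Twitter’s real ranking weights are not public, though reporting on the open-sourced ranking code suggests replies count far more than likes): when replies are weighted heavily, a post that provokes argument outranks a calmer post with more likes.

```python
# Illustrative engagement-weighted ranking. The weights are assumed,
# not X/Twitter's actual values; the structure is what matters:
# replies (i.e., provoked argument) dominate the score.
WEIGHTS = {"likes": 1.0, "replies": 13.5, "reposts": 2.0}

def engagement_score(post):
    """Sum each engagement count times its assumed ranking weight."""
    return sum(WEIGHTS[k] * post.get(k, 0) for k in WEIGHTS)

# A calm, well-liked thread vs. a reply-bait rage post.
nuanced_thread = {"likes": 400, "replies": 12, "reposts": 30}
rage_bait = {"likes": 150, "replies": 90, "reposts": 60}

print("nuanced:", engagement_score(nuanced_thread))  # 622.0
print("rage:", engagement_score(rage_bait))          # 1485.0
```

The nuanced thread has nearly three times the likes, yet the rage post scores more than twice as high, because the reply-heavy weighting converts provoked disagreement directly into visibility.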
Conclusion
What I observed in my own feed is mirrored in academic studies: there is a strong incentive built into X/Twitter’s algorithm for outrage, controversy, and emotionally charged content. That incentive is one of the engines behind filter bubbles and growing polarization. It doesn’t mean every post or user is complicit, and there is potential to break the cycle. But it requires awareness, not just from readers, but from writers and platforms themselves.
Works Cited
Milli, Smitha, Micah Carroll, Yike Wang, Sashrika Pandey, Sebastian Zhao, and Anca D. Dragan. “Engagement, User Satisfaction, and the Amplification of Divisive Content on Social Media.” 2025. (PMC)
Corsi, G. “Evaluating Twitter’s Algorithmic Amplification of Low-Credibility Content.” 2024. (SpringerOpen)
Tulane University. “Rage clicks: Study shows how political outrage fuels social media engagement.” Oct. 9, 2024. (Tulane News)
Currie, Blaze. “The Algorithm of Outrage.” Medium. (On the nature of outrage-driven engagement tactics.) (Medium)