Brain Rot, Feedback Loops, and the Shared Costs of Social Media Optimization
In last month’s post, I introduced the idea of linguistic feedback loops in large language models (LLMs) through small but telling examples—words like delve that appear with surprising frequency in AI-generated text.
These indicators of generative AI use emerge because they are statistically overrepresented in training data, and LLMs tend to reproduce what they encounter most often. A feedback loop forms when humans, through repeated exposure, begin to adopt these same patterns and use them more frequently in online content. That content then feeds back into future training sets, reinforcing the cycle.
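To make the mechanism concrete, here is a minimal sketch of that loop, with every number hypothetical: a model slightly overproduces a marker word relative to its training data, humans absorb some of that overuse, and the blended text becomes the next training corpus.

```python
# Toy model of a linguistic feedback loop. All rates and parameters are
# illustrative assumptions, not measured values.

def next_frequency(freq, model_bias=1.5, model_share=0.3, adoption=0.2):
    """Advance the marker word's corpus frequency by one training generation.

    freq        -- current rate of the marker word in the training corpus
    model_bias  -- how much the model overproduces the word (assumed > 1)
    model_share -- fraction of new online text that is model-generated
    adoption    -- how far human usage drifts toward the model's rate
    """
    model_rate = min(freq * model_bias, 1.0)            # model overrepresents the word
    human_rate = freq + adoption * (model_rate - freq)  # humans pick some of it up
    # The next corpus blends human-written and model-generated text.
    return model_share * model_rate + (1 - model_share) * human_rate

freq = 0.01  # say the word starts in 1% of documents
for generation in range(5):
    print(f"generation {generation}: {freq:.3%}")
    freq = next_frequency(freq)
```

Even with modest parameters the frequency compounds every generation, and nothing in the loop pushes it back down. That is exactly what makes noticing these patterns early so valuable.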
The “delve” phenomenon reflects a feedback loop that begins inside model training and spills outward into human use. This month, I want to look at a different (and potentially more consequential) dynamic: one that does not originate in models at all, but takes shape within human–social media platform ecosystems before being folded back into future training, with far more troubling implications.
That dynamic has already been named. Oxford University Press’s Word of the Year for 2024: think mindless TikToks and memes, low-effort YouTube, rage-bait and recycled trend mashups. Yes, that’s right: brain rot. The phenomenon is familiar to followers of technological and cultural trends, as well as to any parent of a nine-to-fourteen-year-old who has wondered how a sequence of jump cuts, exaggerated intonation, and looping phrases can be delivered with such energy and yet be so mentally exhausting to overhear.
There is much for a parent to worry about in the rise of this content style: reduced tolerance for boredom, emotional dysregulation driven by constant stimulation, and dopamine reward loops that shape habit formation and have been shown to affect neurological development. That, however, is a topic for another day. To keep the focus on feedback loops in language, I’ll use brain rot as a lens for understanding how these environments can reinforce harmful patterns over time.
What Exactly Is Brain Rot?
The term is commonly used to describe a dominant mode of communication across social media, one adopted by influencers and content creators and tuned for speed, engagement, and emotional reaction. This style did not emerge accidentally. As with many platform-shaped behaviors, its spread is driven less by preference than by incentives: advertising-funded systems systematically reward whatever captures attention for the longest time, regardless of cognitive or social cost. Platforms optimize for engagement, engagement fuels revenue, and creators adapt their content accordingly.
Over time, the most successful formats become codified. They are formalized, taught, and replicated through courses, playbooks, hooks, and pacing rules. What begins as experimentation becomes standardized and professionalized, simply because it performs.
Common characteristics of the brain rot content style include:
- Short, repetitive phrasing that minimizes processing effort
- Emotional immediacy, favoring reaction over reflection
- Low ambiguity, with statements framed as settled rather than open
- Fast cognitive payoff, delivering a sense of understanding without sustained thinking
Oxford University Press defines brain rot as “the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging.”
The term itself was coined by Henry David Thoreau in Walden in 1854, when he compared intellectual decline to the potato rot that England was then laboring to cure, the same blight behind the Great Famine in Ireland. He was lamenting a growing preference for ease, clarity, and entertainment over intellectual rigor. Although the context is very different, his concerns map closely onto the modern usage of the term: an atrophy of sustained attention, erosion of critical reasoning, and the replacement of reflective thought with passive consumption.
Why This Style Spreads
The traits associated with brain rot perform well because they align closely with how social media platforms reward attention. Recommender algorithms amplify content that generates fast signals such as clicks, likes, shares, and watch time. Infinite scroll favors material that captures attention immediately, while short-form video and captioned formats bias communication toward speed and low context. Together, these conditions systematically favor language that is easy to process and quick to reward, rather than language that requires sustained thought.
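A toy ranking function makes the incentive visible. The fields and weights below are hypothetical, and real recommender systems are vastly more complex, but the shape of the objective is the point: every term rewards a fast, low-effort reaction, and nothing rewards sustained thought.

```python
# Hypothetical engagement-ranking sketch; not any platform's actual scoring.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    expected_watch_seconds: float  # how long a typical viewer stays
    click_rate: float              # probability of a click, like, or share
    seconds_to_hook: float         # how quickly the payoff arrives

def engagement_score(item: Item) -> float:
    # Reward watch time and clicks; penalize anything slow to pay off.
    return (item.expected_watch_seconds * item.click_rate) / (1 + item.seconds_to_hook)

feed = [
    Item("long-form essay on attention", 240.0, 0.02, 45.0),
    Item("looping meme with jump cuts", 25.0, 0.30, 1.0),
]

for item in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):8.2f}  {item.title}")
```

On these two items, the looping meme outscores the essay by more than an order of magnitude, not because anyone judged it better, but because the objective never asked.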
This is not simply a story about “bad” content. Every generation has had low-quality, escapist media. What is different—and troubling—is the degree to which it is systematically optimized and industrialized.
Unlike the feedback loops beginning to emerge around terms like delve, brain rot is not yet a true feedback loop. For now, it functions more like an optimization funnel, where certain styles are aggressively promoted even though they are not yet self-reinforcing through model training. However, that could change.
When Large Language Models Enter the Loop
Left to human–platform dynamics alone, brain rot would already warrant concern. The more serious risk emerges when large language models enter the loop, transforming a cultural tendency into a scalable, self-reinforcing system.
First, LLMs dramatically reduce the cost of producing engagement-optimized language. What once required time, intuition, and labor can now be generated at scale. Creators are incentivized to maximize output and revenue, not to diversify style. We are already seeing AI-generated content with clear brain rot characteristics, and the trend is likely to accelerate.
Second, as the volume of this content grows, it comes to occupy an increasingly large share of the linguistic environment and, eventually, of the data from which future models are trained. This is where a potential feedback loop begins to form.
Brain rot promotes a communicative environment optimized for engagement rather than depth, one that minimizes the epistemic friction required for genuine understanding. If most of what we encounter is effortless to consume and instantly rewarding, skills such as vocabulary growth, sustained attention, and comfort with complexity are exercised less and may deteriorate over time. There are also open questions about what this environment means for LLMs’ own capabilities. Recent research suggests that models trained predominantly on content with brain rot characteristics exhibit measurable cognitive decline themselves, raising the possibility that the same optimization pressures shaping human communication could eventually degrade the systems trained on it.
Feedback Loops, Not Fate
The good news? None of this is inevitable. Feedback loops only take hold when patterns go unnoticed or unexamined, and part of the value in tracing them early is simply to keep options open. Linguistic environments are shaped over time, sometimes deliberately and sometimes accidentally, and they can be influenced in more than one direction. But that influence does not happen automatically. It depends on what people are willing to notice, question, and push back on.
When it comes to large language models, there are already practical ways to intervene. Choices about training data matter. Systems trained on a wider range of curated sources, including books, academic work, long-form journalism, and expert material, are less likely to be dominated by the patterns of low-effort, engagement-optimized language. Models do not have to mirror whatever happens to be most abundant online; they reflect what we decide is worth preserving, and what we decide to counterbalance.
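What that intervention can look like in practice is a deliberate data mixture. The sketch below uses invented source categories and weights; the point is only that the sampling mixture is a choice, not a given.

```python
# Hypothetical data-curation sketch: reweighting a raw web crawl toward a
# deliberate training mixture. All categories and numbers are invented.

raw_share = {           # what is most abundant in a raw crawl (illustrative)
    "short-form social": 0.55,
    "forums/comments":   0.25,
    "news/long-form":    0.12,
    "books/academic":    0.08,
}

target_mix = {          # what a curated training mixture might aim for
    "short-form social": 0.10,
    "forums/comments":   0.15,
    "news/long-form":    0.35,
    "books/academic":    0.40,
}

# Per-source sampling weight that turns the raw crawl into the target mix
# (weights below 1 downsample a source, weights above 1 upsample it).
for source, raw in raw_share.items():
    print(f"{source:18s} weight x{target_mix[source] / raw:.2f}")
```

Real pipelines weight by quality signals as well as source type, but the lever is the same: decide the mix rather than inherit it.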
Understanding how platforms, incentives, and models shape what we read, hear, and repeat is not just a technical concern. It is a cultural one. As generative AI becomes embedded across more areas of life, opting out of understanding these dynamics increasingly means opting out of influence over how they develop. If we want a seat at the table—in deciding what kinds of language, thought, and attention we value—then noticing these feedback loops is not optional. It is the starting point.
Alexandra Frye
Alexandra Frye edits the Technology & Society blog, where she brings philosophy into conversations about tech and AI. With a background in advertising and a master’s in philosophy focused on tech ethics, she now works as a responsible AI consultant and advocate.
