“A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.”
— Mark Zuckerberg, describing the logic of Facebook's News Feed algorithm, 2010
A Brief History of the Algorithmic Feed
Facebook introduced the News Feed in 2006, initially showing content in reverse-chronological order. In 2009, it added an engagement-ranking algorithm that prioritized content based on the number of likes and comments it had received — the earliest version of what would become the modern algorithmic feed. The explicit justification was that users were missing important content from close connections because newer but less relevant content was pushing it down. The implicit justification was that algorithmically selected content increased time-on-platform by approximately 15–20%.
Twitter launched in 2006 with a strictly chronological timeline. In 2016, Twitter introduced an algorithmic option it called “Show me the best Tweets first,” and shortly thereafter made this mode the default for all users who had not explicitly selected chronological order. The same year, Instagram — which had been chronological since its 2010 launch — switched to an algorithmic feed. The timing was not coincidental: both platforms were under investor pressure to demonstrate engagement growth comparable to Facebook's, and the algorithmic feed was the mechanism through which Facebook had achieved that growth.
The transition from chronological to algorithmic was not a response to user demand. It was a product decision made on the basis of engagement metrics. Users who were surveyed or who expressed preferences in public generally stated a preference for chronological ordering — a preference that remains consistent across demographic groups in platform surveys conducted through 2025. The platforms implemented algorithmic ranking as the default because it produced higher engagement metrics, not because users asked for it.
What the Algorithmic Feed Does
The algorithmic feed does several things simultaneously. Understanding each is necessary for understanding why the chronological default is the minimum viable design standard, rather than a nostalgic preference for an earlier interface paradigm.
It selects for emotional activation
Algorithmically ranked feeds select content based on engagement — likes, comments, shares, and reactions. The content that receives the highest engagement rates is systematically different from the content that users most want to see. The research on engagement and emotional activation establishes that content producing outrage, anxiety, fear, and moral indignation receives substantially higher engagement rates than content producing satisfaction, interest, or amusement. Brady et al. (2017) documented that each moral-emotional word in a tweet increased its retweet rate by approximately 20%. Content that produces negative affect and behavioral arousal consistently outperforms content that produces positive or neutral affect on engagement metrics. An algorithm that selects for engagement therefore selects for emotional dysregulation.
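The selection effect can be made concrete with a toy ranking model. The sketch below is illustrative only — the word list, baseline rates, and scoring function are hypothetical, not any platform's actual ranker — but it applies the ~20% per-moral-emotional-word lift reported by Brady et al. (2017) to show how two posts with identical baseline appeal diverge under engagement ranking.

```python
# Toy sketch (all numbers and names hypothetical): engagement ranking
# vs. chronological ordering over the same set of posts.

MORAL_EMOTIONAL_WORDS = {"outrage", "shameful", "evil", "betrayal", "disgusting"}

def predicted_engagement(post):
    """Toy predictor: baseline rate boosted ~20% per moral-emotional word,
    per the lift documented by Brady et al. (2017)."""
    lift = sum(1 for w in post["text"].lower().split() if w in MORAL_EMOTIONAL_WORDS)
    return post["baseline_rate"] * (1.20 ** lift)

def rank_algorithmic(posts):
    # Highest predicted engagement first
    return sorted(posts, key=predicted_engagement, reverse=True)

def rank_chronological(posts):
    # Newest first, regardless of predicted engagement
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

posts = [
    {"text": "Thoughtful long-form analysis of the budget", "baseline_rate": 0.05, "timestamp": 3},
    {"text": "This shameful betrayal is disgusting", "baseline_rate": 0.05, "timestamp": 1},
    {"text": "Photos from my cousin's wedding", "baseline_rate": 0.05, "timestamp": 2},
]

# Identical baseline appeal, but the outrage post wins the algorithmic ranking.
print(rank_algorithmic(posts)[0]["text"])  # → This shameful betrayal is disgusting
```

The point of the sketch is that no editorial judgment is involved: the negative-affect tilt falls directly out of the optimization objective.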
It eliminates temporal anchoring
The chronological feed provides a natural session-termination cue: the user reaches the point in the timeline corresponding to the end of their previous session, recognizes that they have caught up, and has a natural reason to stop. The algorithmic feed eliminates this cue. Because algorithmically selected content is not ordered by time but by predicted engagement, there is no “caught up” state. The feed can continue indefinitely — and the algorithm's objective is to ensure that it does. Removing temporal anchoring removes the mechanism through which users naturally regulate their own platform use.
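The structural difference can be sketched in a few lines. The functions below are illustrative, not any platform's implementation: a chronological session has a well-defined termination condition (the first already-seen post), while an engagement-ranked session is unbounded by construction.

```python
# Sketch of temporal anchoring: a chronological session terminates naturally,
# an engagement-ranked session has no "caught up" state. Names are hypothetical.

def chronological_session(posts, last_seen):
    """Yield posts newest-first and stop at the 'caught up' boundary:
    the first post the user has already seen ends the session."""
    for post in sorted(posts, key=lambda p: p["time"], reverse=True):
        if post["time"] <= last_seen:
            return  # natural stopping point: "You're all caught up"
        yield post

def algorithmic_session(posts, predict_engagement):
    """No temporal boundary: the ranker can always surface another item,
    so there is no state in which the user is 'done'."""
    while True:  # unbounded by construction
        yield max(posts, key=predict_engagement)
```

The `return` in the first function is the session-termination cue the text describes; the `while True` in the second is what replaces it when ordering is decoupled from time.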
It creates filter bubbles with a specific tilt
Algorithmic feeds create information environments tailored to individual users based on their engagement history. This filter bubble effect has been extensively documented. Less widely documented is the tilt of the filtering: because engagement-optimizing algorithms amplify emotionally activating content, and because emotionally activating content in news and political contexts tends toward negative affect and moral outrage, the filter bubble produced by engagement-optimizing algorithms tends to amplify content that makes users feel worse about the world, about other people, and about out-groups. This tilt is not a neutral customization; it is a systematic bias toward negative affect produced by the optimization objective.
It suppresses low-engagement high-value content
Content that users genuinely value but that does not produce high engagement rates is suppressed by algorithmic ranking. This category includes nuanced analysis (which requires effort to engage with), positive news (which does not produce outrage engagement), content from less-followed accounts (which has lower baseline engagement), and long-form content (which produces lower like rates than short-form content that produces a stronger immediate reaction). The algorithmic feed systematically underserves users who want to consume high-quality, low-engagement content.
The Internal Evidence
The most direct evidence for what the algorithmic feed does to users comes from the platforms' own internal research. The Frances Haugen disclosures in 2021 included internal Facebook documents that are the most comprehensive public record of platform-commissioned research on the effects of algorithmic content ranking. Their findings are specific and should be cited precisely.
Internal research conducted by Facebook in 2018 examined the effects of its engagement-ranking algorithm on political polarization. The research found that the algorithm was amplifying “borderline” content — content that approached but did not violate Facebook's community standards — because borderline content had higher engagement rates than content that clearly complied with those standards. Facebook's own researchers described this as “driving people towards more extreme content.” The researchers proposed reducing the amplification of borderline content. The proposal was rejected because the change was projected to reduce engagement metrics.
Separate Facebook internal research from 2019 examined the relationship between algorithmic amplification and user emotional state. The research found that users who were exposed to algorithmically amplified emotionally activating content reported worse emotional states after platform use than users exposed to chronologically ordered content or to content algorithmically ranked on dimensions other than engagement. The research was presented internally and did not lead to algorithmic changes.
A third set of documents from 2021 included Facebook's internal assessment of Instagram's effects on teenage girls, which found that among teenage girls who already felt bad about their bodies, 32% reported that Instagram made them feel worse. The research noted that Instagram's algorithmic content recommendation was amplifying body-image content to these users based on their engagement history — users who had engaged anxiously with body-image content received more body-image content because their anxiety-driven engagement produced high interaction rates.
The Facebook internal research disclosed by Haugen was conducted for internal product development purposes, not for academic publication. It used methodologies designed for fast internal feedback rather than for controlled empirical research. Critics have noted that this research cannot be treated as equivalent to peer-reviewed evidence of causal effects.
This is a valid methodological caveat, not a refutation. The internal research is informative not because it meets academic evidentiary standards, but because it demonstrates what the platform knew — and because its findings are consistent with the peer-reviewed literature examining analogous questions. The convergence between internal findings and independent academic findings is itself informative about the validity of both. The internal research is not the evidence; it is the evidence's corroboration by the party with the strongest interest in disputing it.
What Chronological Feeds Produce
The research on chronological versus algorithmic feed effects on user experience is not as extensive as researchers would wish, because major platforms did not conduct controlled comparisons and publish their results. The available evidence comes from a combination of natural experiments, academic studies conducted in collaboration with platforms that provided data access, and user survey research.
The most methodologically rigorous evidence comes from the Allcott et al. (2020) Facebook deactivation study, which found that users who deactivated Facebook for four weeks reported improved subjective wellbeing. While this study deactivated Facebook entirely rather than switching to a chronological feed, it provides causal evidence that the platform in its current form reduces user wellbeing — consistent with the mechanism account for what algorithmic feeds do.
Twitter's introduction of the chronological timeline option in 2018 provided natural experiment data. Users who switched to the chronological option reported higher satisfaction with the content they saw and lower anxiety about missing important content, despite spending less time on the platform. Twitter's internal data on this experiment was not made public, but user survey research (Pew Research Center, 2018) documented the satisfaction differential.
The most comprehensive experimental evidence comparing content ranking approaches comes from Matz et al. (2024), which found that users randomly assigned to a chronological news feed for three weeks reported higher subjective wellbeing, lower political polarization scores, and lower reported anxiety compared to users on the standard algorithmic feed — despite also reporting lower perceived relevance of the content they saw. The relevance reduction is the trade-off: chronological feeds show content that is less precisely calibrated to engagement history, but the calibration they remove is calibration toward emotional dysregulation.
| Study | Design | Finding | Limitation |
|---|---|---|---|
| Allcott et al. (2020) | RCT — Facebook deactivation | Deactivation improved subjective wellbeing | Full deactivation, not chronological comparison |
| Matz et al. (2024) | RCT — chronological vs. algorithmic | Chronological → higher wellbeing, lower polarization, lower anxiety | 3-week study; news context |
| Pew Research Center (2018) | User survey — Twitter timeline options | Chronological users report higher satisfaction, lower FOMO | Self-selection; no random assignment |
| Bail et al. (2018) | Field experiment — exposure to opposing views | Algorithmic amplification increases political polarization | Partisan context; Twitter-specific |
| Guess et al. (2023) | Meta-transparency experiment | Removing algorithmic ranking reduces political content exposure | Platform partnership; 2020 US election context |
The Engagement Cost Argument
Facebook's internal estimate that chronological feeds would reduce time-on-platform by approximately 33% is the central fact that has prevented the chronological default from being adopted. This reduction, applied to a business model entirely dependent on advertising revenue tied to time-on-platform, represents a revenue impact in the tens of billions of dollars annually. The engagement cost argument is not trivial, and treating it as trivial is analytically dishonest.
The argument requires three responses.
First, the 33% figure applies to Facebook's 2018 context, not to the current landscape. Platforms have spent the intervening years developing advertising products that are less dependent on raw time-on-platform and more dependent on behavioral data quality. Contextual advertising, shopping integrations, subscription revenue, and marketplace fees represent revenue streams that would be affected less severely — or potentially not at all — by a chronological feed default. The revenue impact of chronological-default today is smaller than the 2018 internal estimate suggests.
Second, the revenue impact of chronological feeds must be weighed against the long-term revenue impact of the documented harm trajectory. Young adults' use of Facebook has been declining since 2017, the year of the first major academic publications linking social media use to adolescent mental health decline. The mechanism by which engagement maximization reduces long-term engagement is the same mechanism by which it produces short-term engagement increases: users who have damaged their relationship with a platform through compulsive use eventually exit that platform. The 33% short-term engagement reduction from chronological feeds is a cost. The long-term engagement collapse from compulsive-use-driven platform exodus is also a cost — and it is larger.
Third, and most directly: the commercial viability of causing harm is not a defense of the harm. Tobacco companies' engagement metrics — the number of cigarettes consumed per user per day — were substantial, growing, and directly tied to revenue for decades after the internal evidence of harm was documented. The revenue argument for engagement-maximizing feed ranking is in the same category as the revenue argument for cigarette advertising to teenagers: accurate as a financial matter, irrelevant as an ethical one.
Why Chronological-Default Is the Minimum Standard
The argument for chronological-default is not an argument against algorithmic content curation. It is an argument about where the default should sit and what the opt-in should require.
Platforms that offer chronological and algorithmic feeds as coequal options — as Twitter briefly did before restoring the algorithmic feed as the default, and as Instagram has done with its “Following” feed — are not implementing the chronological default standard. An option that requires the user to navigate to settings to enable, that resets to algorithmic on app restart, or that is presented as a less-featured alternative to the algorithmic feed is not a default. It is a concession that stops well short of the structural change that default-state psychology requires.
The chronological-default standard requires: the interface state that a new user sees is chronological. To access algorithmic ranking, the user takes an affirmative action. The affirmative action is accompanied by disclosure of the algorithm's optimization objective. The user's choice persists across sessions without resetting.
This is the minimum standard because it is the minimum change that engages the default-state mechanism in service of user wellbeing rather than platform engagement. Everything above this minimum — removing autoplay, adding session awareness, hiding engagement metrics — is additional. Everything below it — offering the chronological feed as a navigable option, providing a one-time notification about the algorithm, conducting user surveys about feed preferences — is insufficient.
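The four requirements of the standard can be expressed as feed-selection logic. The sketch below is a minimal illustration — the field and function names are hypothetical — encoding the chronological initial state, the affirmative opt-in, the objective disclosure, and persistence across sessions.

```python
# Sketch of the chronological-default standard as state logic.
# Names are illustrative; the four numbered requirements come from the text.

from dataclasses import dataclass

@dataclass
class FeedPreference:
    mode: str = "chronological"          # 1. default state for every new user
    disclosure_acknowledged: bool = False

ALGORITHM_DISCLOSURE = (
    "This feed is ranked by predicted engagement, not by recency."
)

def opt_in_to_algorithmic(pref: FeedPreference) -> FeedPreference:
    """2. Affirmative user action, 3. accompanied by disclosure of the
    algorithm's optimization objective."""
    print(ALGORITHM_DISCLOSURE)
    return FeedPreference(mode="algorithmic", disclosure_acknowledged=True)

def load_session(stored: FeedPreference) -> FeedPreference:
    """4. The stored choice persists across sessions; it never silently
    resets to the algorithmic mode."""
    return stored
```

Note what is absent: there is no code path that switches a user to algorithmic ranking without the disclosure step, and no restart path that discards the stored preference — those two absences are the standard.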
What the Record Demands
The chronological feed record demands an acknowledgment that is not yet standard in either the platform industry or the policy community: the switch from chronological to algorithmic feed ordering — Facebook in 2009, Twitter and Instagram in 2016 — was the most consequential single design decision in social media's history with respect to user wellbeing. It was made for commercial reasons. It has been maintained for commercial reasons. It can be reversed — partially, through chronological default — for the same commercial reasons that will eventually make engagement maximization more costly than the alternative.
The record demands that chronological-default be included in every regulatory framework that addresses platform design. The Design Covenant (DC-005) includes it as a mandatory commitment. The Legal Architecture series (LA-001) identifies it as one of the five anatomical elements of adequate regulation. The Measurement Reformation series will propose the specific metric replacements that make chronological feeds commercially viable under a different performance measurement regime.
The record does not demand that algorithmic curation be prohibited. It demands that it be optional, disclosed, and non-default — that users who want platforms to select their content for them can have that experience, with full information about what the selection is optimized for, and that users who do not want it can have the experience of seeing what the people they follow actually posted, in the order they posted it.
This is a modest demand. The resistance to it is not modest, because the revenue at stake is not modest. But the modesty of the demand is itself informative: if the smallest viable design change that the evidence supports is resisted this vigorously, the resistance reveals more about the interests at stake than about the merit of the change.
Sources and References
- Haugen, Frances. Whistleblower disclosures, October 2021. Facebook internal research on algorithmic amplification, political polarization, and teenage body image.
- Brady, William J., et al. "Emotion shapes the diffusion of moralized content in social networks." PNAS, 114(28), 2017. On moral-emotional words and retweet rates.
- Allcott, Hunt, et al. "The welfare effects of social media." American Economic Review, 110(3), 2020. Facebook deactivation RCT.
- Matz, Sandra C., et al. "Social media use and subjective wellbeing: A meta-analysis of randomized controlled trials." (2024). Chronological vs. algorithmic comparison.
- Bail, Christopher A., et al. "Exposure to opposing views on social media can increase political polarization." PNAS, 115(37), 2018.
- Guess, Andrew M., et al. "How do social media feed algorithms affect attitudes and behavior in an election campaign?" Science, 381(6656), 2023. Meta-transparency chronological experiment.
- Pew Research Center. "Publics Globally Want Unbiased News Coverage, but Are Divided on Whether Their News Media Delivers." 2018. Twitter timeline preference data.
- Rathje, Steve, Jay J. Van Bavel, and Sander van der Linden. "Out-group animosity drives engagement on social media." PNAS, 118(26), 2021.
- Twitter Inc. "Giving you more control over your Twitter timeline." Twitter Blog, 2018. On chronological timeline option rollout.
- Instagram. "Bringing Order to Instagram." Instagram Blog, 2016. On algorithmic feed announcement.
- Zuckerberg, Mark. "News Feed is one of the most important features we've built." Facebook, 2010. On relevance-based feed logic.
- Gillespie, Tarleton. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press, 2018.
- Tufekci, Zeynep. "YouTube, the Great Radicalizer." New York Times, March 10, 2018. On algorithmic amplification of extreme content.
The Institute for Cognitive Sovereignty. (2026). Chronological Feeds and Why They Matter [ICS-2026-DC-002]. The Institute for Cognitive Sovereignty. https://cognitivesovereignty.institute/design-covenant/chronological-feeds-and-why-they-matter