“We can build technologies that support human flourishing. We just haven't had sufficient reason to — yet.”
— Tristan Harris, testimony before the Senate Commerce Subcommittee, 2021
What Attention Design Currently Optimizes For
The design choices that constitute modern social media platforms — algorithmic content ranking, variable-ratio notification scheduling, infinite scroll, engagement metric displays, autoplay, sleep-disrupting notification timing, and the suppression of session awareness — are not accidents or defaults or consequences of technical constraints. They are the output of design processes oriented toward a single primary objective: maximizing the time users spend on the platform and the behavioral data that time generates for advertising targeting.
This is the attention economy's engineering specification. Every major design decision in a behaviorally targeted advertising business is evaluated against its effect on time-on-platform, daily active users, and the behavioral signal density that determines advertising targeting precision. The engineer who ships a feature that reduces time-on-platform by 5% has produced a negative product outcome regardless of what that reduction means for user wellbeing. The product manager whose features increase daily active users has succeeded regardless of whether the mechanism of that increase is behavioral compulsion. The executive whose quarterly earnings call shows engagement growth has demonstrated business health regardless of the cost of that growth to the minds of the people doing the engaging.
This is not a description of malicious intent. Most of the engineers and product managers and executives who built the attention economy did not set out to harm anyone. They were solving the engineering problem they were given: build something people use a lot, so that the advertising system has a large and precisely targetable audience. The harm is a consequence of the objective function, not of any individual's malice. But the objective function is a choice — and the question this paper addresses is what a different choice would require.
The mechanism research documented in Saga I established what engagement-maximizing design does to human attention, sleep, social comparison, dopaminergic reward circuitry, and adolescent development. This paper takes that research as given and asks the design question: if the objective were user cognitive wellbeing rather than user behavioral data generation, what would the engineering specification look like?
Why Engagement Maximization Is the Wrong Objective
Engagement maximization fails as a design objective for the same reason that cigarette consumption maximization fails as a restaurant design objective: it optimizes for a metric that is systematically decoupled from the wellbeing of the people whose behavior generates the metric. A user who spends four hours scrolling through anxiety-inducing content and leaves the platform feeling worse than when they arrived has maximized the platform's engagement metric while suffering an unambiguously negative experience. The metric does not distinguish between this and an equivalent session that left the user feeling informed, connected, and satisfied.
This is not a hypothetical criticism. The Facebook internal research disclosed by Frances Haugen in 2021 included documents showing that Facebook's own researchers had identified that the platform's algorithmic content ranking was amplifying emotionally activating — specifically, anxiety-producing and outrage-producing — content because that content had the highest engagement rates. The researchers proposed changes to the algorithm that would reduce this amplification. The changes were evaluated against their effect on engagement metrics. They reduced engagement. They were not implemented.
The internal logic of engagement maximization, applied consistently, will always produce this outcome. The metric that engagement maximization optimizes for is systematically correlated with emotional dysregulation. Outrage engages. Fear engages. Social comparison engages. Content that produces satisfaction and closure does not engage in the same way — it resolves rather than amplifies, and resolution terminates the behavioral loop that engagement metrics require.
A platform designed for user cognitive wellbeing would require a different primary metric. This paper proposes voluntary return rate — the proportion of users who return to the platform within a given period after voluntarily ending a session — as the most coherent replacement. Voluntary return rate measures whether users found sufficient value in a session to choose to return, rather than measuring whether the platform's design was effective in preventing them from leaving. The distinction is the difference between a business model based on providing value and a business model based on exploiting compulsion.
The Eight Principles
The following eight principles constitute the Design Covenant's core engineering specification. Each is derived from the mechanism research in Saga I, specified as a concrete design requirement, and assessed for technical and commercial feasibility in the Technical Feasibility and Commercial Viability sections below.
Principle 1: Chronological-Default Feed Ordering
Content feeds must default to chronological ordering. Users may opt into algorithmic ranking, but the default — the interface state that users encounter without taking action — presents content in the order it was posted, not in an order optimized for engagement. The reasoning and evidence for this principle are sufficiently important that they occupy the entire next paper in this series (DC-002). The short form: algorithmic engagement ranking selects for emotionally activating content and obscures the user's sense of temporal position, disabling the natural session-termination cue that “I've caught up on what I missed” provides.
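The default-versus-opt-in distinction is expressible in a few lines of code. The sketch below is illustrative only; the `Post` and `FeedPreferences` shapes are assumptions, not any platform's actual schema. The point is structural: chronological ordering is the zero-action state, and engagement ranking is reachable only through an explicit flag.

```typescript
// Illustrative sketch of chronological-default feed ordering.
// All type names and fields here are assumptions for demonstration.
interface Post {
  id: string;
  createdAt: number;       // Unix epoch milliseconds
  engagementScore: number; // platform-computed ranking signal
}

interface FeedPreferences {
  // Defaults to false: ranking requires an affirmative opt-in.
  algorithmicRankingOptIn: boolean;
}

function orderFeed(posts: Post[], prefs: FeedPreferences): Post[] {
  const sorted = [...posts];
  if (prefs.algorithmicRankingOptIn) {
    // Opt-in path: engagement-ranked, highest score first.
    sorted.sort((a, b) => b.engagementScore - a.engagementScore);
  } else {
    // Default path: reverse-chronological, newest first.
    sorted.sort((a, b) => b.createdAt - a.createdAt);
  }
  return sorted;
}
```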
Principle 2: Opt-In Push Notifications
Push notifications must be opt-in by default, not opt-out. Users who want push notifications must affirmatively enable them; the platform's default state is no push notifications. The evidence for this principle is examined in DC-003. The short form: push notifications function as behavioral interrupts that reinstantiate the dopaminergic seeking circuit on a variable-ratio schedule — the same schedule that makes slot machines maximally compulsive. The opt-out default that platforms currently use ensures that the majority of users, through inertia, remain in a state of continuous behavioral interrupt exposure.
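As with Principle 1, the requirement is a default state, not a new capability. A minimal sketch, assuming a hypothetical `NotificationSettings` record: the only path to enabled push notifications is an explicit user action.

```typescript
// Minimal sketch of the opt-in default; NotificationSettings is hypothetical.
interface NotificationSettings {
  pushEnabled: boolean;
  enabledAt?: number; // recorded only when the user affirmatively opts in
}

// The state every new account receives: no push notifications.
function defaultNotificationSettings(): NotificationSettings {
  return { pushEnabled: false };
}

// The only transition to pushEnabled === true is an explicit user action.
function optIntoPush(settings: NotificationSettings): NotificationSettings {
  return { ...settings, pushEnabled: true, enabledAt: Date.now() };
}
```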
Principle 3: Session Awareness Display
The platform must display, in a visible and non-dismissible interface element, the current session duration and the total platform use for the current day and week. This information must be accessible without navigating to a settings menu. The mechanism of harm that session awareness addresses is the elimination of temporal landmarks: infinite scroll and algorithmic feed ordering together create an interface environment in which the natural time-markers that govern offline behavior — reaching the end of the newspaper, running out of cards in the physical pile — are absent. Session awareness restores the temporal landmark function.
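A sketch of the counter logic follows, with persistence and rendering stubbed out; the class and its fields are assumptions for illustration. The essential property is that the running totals are computed continuously and surfaced as a plain-language label rather than buried in a settings menu.

```typescript
// Illustrative session-awareness tracker. Storage of prior-day and
// prior-week totals is assumed to happen elsewhere; only the arithmetic
// and the label for the persistent UI element are shown.
class SessionAwareness {
  private readonly sessionStart = Date.now();

  constructor(
    private readonly priorDayMs: number,  // usage earlier today
    private readonly priorWeekMs: number  // usage earlier this week
  ) {}

  // Plain-text content for the visible, non-dismissible element.
  label(): string {
    const sessionMs = Date.now() - this.sessionStart;
    const minutes = (ms: number) => Math.round(ms / 60_000);
    return (
      `This session: ${minutes(sessionMs)}m · ` +
      `Today: ${minutes(this.priorDayMs + sessionMs)}m · ` +
      `This week: ${minutes(this.priorWeekMs + sessionMs)}m`
    );
  }
}
```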
Principle 4: No Engagement Metric Display to Users
Platforms must not display engagement metrics (counts of likes, shares and retweets, comments, and views) on content in the primary feed. Users who navigate to specific content may see these counts, but they must not appear in the default feed interface. The mechanism this addresses is the social comparison and status anxiety loop that like counts activate: the user posting content evaluates their social standing in real time via engagement metrics, and the user consuming content evaluates others' social standing via the same metrics. Neither effect is neutral with respect to anxious self-monitoring or social comparison activation.
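One way to express the requirement, sketched below with hypothetical names, is to gate metric visibility on render context rather than deleting the data: the counts still exist, but the default feed renderer never receives them.

```typescript
// Sketch: engagement counts gated by render context. Names are illustrative.
type RenderContext = "feed" | "detail";

interface ContentMetrics {
  likes: number;
  shares: number;
  comments: number;
  views: number;
}

// In the default feed the metrics are withheld entirely; on a detail view
// the user has deliberately navigated to, they may be shown.
function visibleMetrics(
  metrics: ContentMetrics,
  context: RenderContext
): ContentMetrics | null {
  return context === "detail" ? metrics : null;
}
```

Gating at the render layer, rather than stripping the counts from the data model, preserves the principle's allowance that a user who deliberately opens specific content may still see its metrics.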
Principle 5: No Autoplay for Video Content
Video content must not autoplay. Each video requires a distinct user action to begin. The autoplay mechanism exploits the endowment effect (a video already in progress is harder to stop than one not yet started) and the inertia of disengagement to extend session duration past the user's intended endpoint. Removing autoplay restores the user's intentional control over content consumption.
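The implementation burden is close to zero, as the hypothetical sketch below suggests: loading the next video and starting playback are separate operations, and only a user gesture connects them.

```typescript
// Sketch of a no-autoplay player default; PlayerState is hypothetical.
interface PlayerState {
  videoId: string;
  playing: boolean;
}

// Loading the next video never starts playback on its own.
function loadVideo(videoId: string): PlayerState {
  return { videoId, playing: false };
}

// Only an explicit user gesture transitions the player to playing.
function onUserPlayGesture(state: PlayerState): PlayerState {
  return { ...state, playing: true };
}
```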
Principle 6: Voluntary Return Rate as Primary Performance Metric
The platform's primary internal performance metric — the metric that governs product decisions, engineer performance evaluations, and executive reporting — must be voluntary return rate, not time-on-platform, daily active users, or engagement rate. This principle is structural rather than interface-level: it changes the objective function that drives all other design decisions. A platform that measures success by whether users choose to return is a platform with an incentive to make sessions satisfying rather than compulsive.
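The metric admits a concrete definition. The sketch below is one possible formalization under stated assumptions: a session end is classified as voluntary or interrupted at capture time, and a "return" is a subsequent session starting within a chosen window (seven days here, purely as a placeholder). The session schema and window are assumptions, not a settled specification; the Commercial Viability section below discusses why these definitional choices are genuinely hard.

```typescript
// One possible formalization of voluntary return rate. The Session shape,
// the end-type classification, and the 7-day window are all assumptions.
type SessionEnd = "voluntary" | "interrupted";

interface Session {
  userId: string;
  startedAt: number; // Unix epoch milliseconds
  endedAt: number;
  endType: SessionEnd;
}

function voluntaryReturnRate(
  sessions: Session[],
  windowMs: number = 7 * 24 * 60 * 60 * 1000 // placeholder return window
): number {
  // Group each user's sessions and order them by start time.
  const byUser = new Map<string, Session[]>();
  for (const s of sessions) {
    const list = byUser.get(s.userId) ?? [];
    list.push(s);
    byUser.set(s.userId, list);
  }

  let voluntaryEnds = 0;
  let returns = 0;
  for (const list of byUser.values()) {
    list.sort((a, b) => a.startedAt - b.startedAt);
    for (let i = 0; i < list.length; i++) {
      // Interrupted endings (calls, crashes, OS kills) are excluded:
      // they say nothing about whether the session satisfied the user.
      if (list[i].endType !== "voluntary") continue;
      voluntaryEnds++;
      const next = list[i + 1];
      if (next && next.startedAt - list[i].endedAt <= windowMs) returns++;
    }
  }
  return voluntaryEnds === 0 ? 0 : returns / voluntaryEnds;
}
```

A production version would also need to censor sessions too recent for the window to have elapsed and to reconcile multiple devices per user; the sketch ignores both.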
Principle 7: Age-Differentiated Design for Minor Users
For users under 18, principles 1–6 are mandatory rather than default. Minor users may not opt into algorithmic feed ranking. Minor users may not enable push notifications during hours that the eSafety Commissioner or equivalent authority designates as protected time (specifically: 10pm–7am and school hours). Engagement metrics are not displayed to minor users under any condition. These requirements extend beyond the principles applicable to adult users because the mechanism research establishes that the developing brain is categorically more vulnerable to the attention-capture mechanisms that principles 1–6 address.
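A sketch of how the age gate composes with the earlier principles, with hypothetical names throughout; the 10pm–7am window follows the text above, and school hours are omitted for brevity.

```typescript
// Illustrative age-differentiated gating. UserProfile and the hour-based
// check are assumptions; a real system would use the regulator's full
// designated protected times, including school hours.
interface UserProfile {
  age: number;
}

// Principle 7: minors may not opt into algorithmic ranking at all.
function canOptIntoAlgorithmicRanking(user: UserProfile): boolean {
  return user.age >= 18;
}

// Protected time: 22:00 through 06:59 local time.
function isProtectedHour(localHour: number): boolean {
  return localHour >= 22 || localHour < 7;
}

// Push delivery to minors is suppressed during protected hours.
function mayDeliverPush(user: UserProfile, localHour: number): boolean {
  return user.age >= 18 || !isProtectedHour(localHour);
}
```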
Principle 8: Transparent Algorithmic Disclosure
When a platform offers users an opt-in algorithmic content ranking, it must disclose in plain language — in the same interface layer where the opt-in is presented — the specific behavioral objectives the algorithm is optimized for. A platform that tells users “Our algorithm shows you more of what you engage with” is disclosing an objective that sounds neutral but optimizes for engagement. The required disclosure must specify the objective function: “Our algorithm is optimized for time-on-platform and behavioral data collection for advertising targeting.” Informed consent to algorithmic ranking requires knowing what the ranking is optimized for.
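Structurally, the requirement can be enforced by making the opt-in record depend on a disclosure acknowledgement, as in this hypothetical sketch: there is no code path to an enabled ranking that bypasses the plain-language objective statement.

```typescript
// Sketch: opt-in is valid only with an acknowledged disclosure.
// DisclosureRecord and its fields are illustrative assumptions.
interface DisclosureRecord {
  objectiveFunction: string; // plain-language statement shown at opt-in
  acknowledgedAt: number | null;
}

function enableAlgorithmicRanking(disclosure: DisclosureRecord): boolean {
  // Ranking can be enabled only after the user has seen and acknowledged
  // what the ranking is optimized for.
  return disclosure.acknowledgedAt !== null;
}
```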
Technical Feasibility
Each of the eight principles is technically feasible with current platform infrastructure. None requires new technology. Each requires engineering decisions that platforms are capable of making and have chosen not to make.
Chronological feed ordering requires setting the feed sorting parameter to timestamp rather than engagement score. This is a configuration change, not a new technical capability. Platforms already maintain chronological feeds internally for logging purposes; they have the infrastructure and simply choose not to serve it as the default.
Implementing opt-in push notifications requires setting the default notification state to off rather than on during the onboarding flow. Both iOS and Android support this; it is a default-state configuration. Where the operating system already requires a permission prompt, platforms engineer the ask to maximize acceptance; where it does not, platforms default to opt-out because the opt-out default produces higher notification subscription rates. The technical mechanism to implement opt-in exists; the barrier is commercial, not technical.
Session awareness display requires a persistent UI element showing a counter that increments with time. This is among the simplest possible interface features — it requires less engineering than the notification systems that already exist. Apple's Screen Time and Google's Digital Wellbeing features demonstrate that the capability exists at the OS level; implementing it at the platform level requires a product decision, not technical development.
Removing engagement metric display from feeds requires a UI configuration change — hiding like and share counts from the feed render. Instagram tested hiding like counts across multiple countries in 2019 and documented that the change was technically trivial to implement. The tests produced user wellbeing improvements. Instagram did not make hidden counts the default; it later shipped like-count hiding only as an opt-in user setting.
Removing autoplay requires setting the default video player behavior to require a user action. YouTube, Netflix, and every other major video platform already offer this as a setting. Defaulting to it requires a product decision, not new technology.
| Principle | Technical Complexity | Engineering Hours (Est.) | Evidence of Existing Capability |
|---|---|---|---|
| 1. Chronological-default feed | Minimal — configuration | < 40 hours | Twitter offered chronological option 2018; Instagram "Following" feed |
| 2. Opt-in notifications | Minimal — default state | < 20 hours | iOS and Android 13+ require notification permission prompts; platforms configure the ask |
| 3. Session awareness display | Low — counter UI element | < 80 hours | Apple Screen Time, Google Digital Wellbeing; TikTok has screen time reminder |
| 4. No engagement metrics in feed | Minimal — UI hide | < 40 hours | Instagram global like-count hide test, 2019 |
| 5. No autoplay | Minimal — player default | < 20 hours | YouTube has autoplay off setting; Netflix "Are you still watching?" prompt |
| 6. Voluntary return rate metric | Moderate — analytics rebuild | 200–500 hours | No existing implementation; requires session-end tracking |
| 7. Age-differentiated design | Moderate — feature flag system | 300–600 hours | Platforms have user age data for advertising; applying it to UI is straightforward |
| 8. Algorithmic disclosure | Low — copy and UI | < 40 hours | No existing implementation; copywriting + modal placement |
Commercial Viability
The industry's standard objection to each of these principles is that implementing them would reduce engagement metrics and therefore reduce advertising revenue. This objection is worth taking seriously and equally worth challenging on its own terms.
The engagement reduction from chronological feeds is real. Facebook's internal research, disclosed in 2021, estimated that chronological feeds would reduce time-on-platform by approximately 33%. This is a significant revenue impact for a business model entirely dependent on time-on-platform. The objection has commercial force.
The argument from commercial viability has three responses. First, the revenue impact is real but overstated: users who find a platform more satisfying use it more willingly over time, and platforms that exploit compulsion rather than providing value face eventual user backlash and exodus — as documented by declining young-adult user rates on Facebook following the attention research publication cycle of 2017–2022. Second, the behavioral advertising model that requires engagement maximization is not the only viable monetization model: subscription models, marketplace models, and advertising models based on contextual rather than behavioral targeting all exist and have demonstrated commercial viability at scale. Third, the commercial viability of engagement maximization does not constitute a defense of its harms — the tobacco industry was highly commercially viable during the period when it was causing the most harm to the most people.
Voluntary return rate, proposed in this paper as the primary replacement metric for time-on-platform, has genuine measurement challenges. Distinguishing voluntary session endings from interruptions, defining the time window for "return," and handling users with multiple accounts or devices all create measurement complexity. The metric is not a drop-in replacement; it requires engineering investment (estimated 200–500 hours) and careful definition to be useful.
These measurement challenges are real. They do not constitute an argument against the principle. They constitute an argument for investing the engineering effort required to implement the metric correctly. The commercial viability of engagement maximization rests on the existence of a simple, easy-to-measure metric that can be optimized. Ethical attention design requires accepting that the right metric is more complex to measure than the wrong one.
What Industry Arguments Get Right (and Wrong)
The major platforms have made several substantive arguments against design principles of the type specified in this paper. These arguments deserve engagement rather than dismissal.
The user preference argument: Users have chosen algorithmic feeds over chronological ones when offered both options, suggesting that the algorithmic feed is what users actually want. This argument has partial validity: users do engage more with algorithmic feeds, and engagement is a form of revealed preference. The argument fails because revealed preference in an environment designed to exploit compulsion does not constitute evidence of authentic preference. A user who engages more with algorithmically ranked content is responding to the ranking mechanism in the same way that a slot-machine player on a variable-ratio reinforcement schedule pulls the handle more often. The revealed preference reflects the effectiveness of the mechanism, not the authenticity of the want.
The user control argument: These principles should be implemented as user choices, not as defaults, because users should control their own experience. This argument is correct about the value of user control and incorrect about the mechanism through which defaults work. Defaults are not neutral. The research on default effects in behavioral economics demonstrates that default states shape behavior independently of user preferences — most users never change defaults regardless of their stated preferences. A platform that provides the choice between chronological and algorithmic while defaulting to algorithmic has functionally chosen algorithmic for the majority of its users. Defaulting to the option that serves user wellbeing rather than platform engagement is itself a user-respecting choice.
The innovation argument: Prescriptive design principles constrain platform innovation. This is the weakest of the three arguments. It conflates the freedom to experiment with the freedom to exploit. Platforms have substantial remaining design space within the eight principles — they can innovate in recommendation quality, in social connection features, in creator tools, in event discovery, and in dozens of other dimensions. The principles constrain one specific design choice — the choice to exploit behavioral compulsion as the primary engagement mechanism — and leave the rest of the design space open.
What the Principles Demand
The eight principles are not aspirations. They are an engineering specification — a description of what a platform built for user cognitive wellbeing rather than advertiser data generation would look like as a designed artifact. Each is technically achievable. Each addresses a documented mechanism of harm. Each requires a commercial decision that platforms have chosen not to make.
The principles demand, first, that the design community accept that engagement maximization is a choice and not a constraint. The engineers and product managers and designers who built the attention economy were not building the only possible internet. They were building one specific kind of internet, optimized for one specific objective, producing one specific set of harms. A different kind of internet, optimized for a different objective, is technically possible and commercially arguable — and the argument for building it is the subject of the rest of this series.
The principles demand, second, that the legal frameworks examined in the Legal Architecture series incorporate affirmative design standards alongside prohibitions on harmful practices. Regulation that says only “you may not do X” leaves platforms free to optimize for harm in all directions that X does not cover. Regulation that says “you must implement Y” — where Y is derived from the mechanism research and specified with the precision that this paper attempts — gives platforms an engineering target that serves user wellbeing.
The principles demand, third, that the voluntary Design Covenant proposed in DC-005 be adopted by at least some platforms before mandatory standards arrive. The voluntary covenant is not a substitute for mandatory standards. It is a demonstration that the standards are achievable — a proof of concept that platforms can operate under ethical design principles without commercial collapse, a record that removes the commercial viability objection from the mandatory standard debate.
Sources and References
- Haugen, Frances. Whistleblower disclosures, October 2021. Facebook internal research documents on engagement ranking and emotional contagion.
- Harris, Tristan. Testimony before the Senate Commerce Subcommittee on Consumer Protection. June 25, 2019; September 30, 2021.
- Center for Humane Technology. "Ledger of Harms." humanetech.com, 2019–2024.
- Thaler, Richard H. and Cass R. Sunstein. Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press, 2008. On default effects.
- Johnson, Eric J., and Daniel Goldstein. "Do Defaults Save Lives?" Science, 302(5649), 2003. On the mechanism of defaults in behavior.
- Brady, William J., et al. "Emotion shapes the diffusion of moralized content in social networks." PNAS, 114(28), 2017. On engagement and emotional activation.
- Allcott, Hunt, et al. "The welfare effects of social media." American Economic Review, 110(3), 2020. On revealed preference and compulsive engagement.
- Twenge, Jean M., et al. "Trends in U.S. Adolescents' Media Use, 1976–2016." Psychology of Popular Media Culture, 8(3), 2019.
- Orben, Amy, and Andrew K. Przybylski. "The association between adolescent well-being and digital technology use." Nature Human Behaviour, 3(2), 2019.
- Instagram. "Instagram Update on Like Counts." Newsroom, 2019. On global like-hide test results.
- Apple Inc. "Screen Time." iOS 12+ feature documentation, 2018.
- Google. "Digital Wellbeing." Android feature documentation, 2018.
- Fogg, B.J. Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann, 2003. On persuasive design mechanisms.
- Montag, Christian, et al. "Addictive features of social media/messenger platforms and freemium games against the background of psychological and economic theories." International Journal of Environmental Research and Public Health, 16(14), 2019.
- Lanzing, Marjolein. "Strongly Recommended: Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies." Philosophy & Technology, 32, 2019.
Standards landscape: These design principles are positioned within an existing landscape of technology ethics standards. IEEE 7010-2020 (Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being) addresses several overlapping concerns, including wellbeing metrics and stakeholder impact assessment. The UK Age Appropriate Design Code (AADC, 2020) implements child-specific protections aligned with Principle 7 (age-differentiated design for minor users) and, in its transparency provisions, with Principle 8 (transparent algorithmic disclosure). The ACM Code of Ethics (2018) establishes professional obligations consistent with the cognitive sovereignty framework. The principles proposed here are intended to complement, not replace, these established standards.
The Institute for Cognitive Sovereignty. (2026). The Principles of Ethical Attention Design [ICS-2026-DC-001]. The Institute for Cognitive Sovereignty. https://cognitivesovereignty.institute/design-covenant/the-principles-of-ethical-attention-design