“The freedom to publish is now the freedom to capture. The question is whether the commons can survive the capture.”
— Walter Lippmann's concern about the press, updated for the attention economy
Cognitive Sovereignty as a Collective Resource
The Institute's research program has been organized around individual cognitive sovereignty — the capacity of a specific person to direct their own attention, reason from evidence, and maintain epistemic agency. This framing is not wrong. Cognitive sovereignty is an individual condition. The person whose attention has been captured is the person who is harmed; the treatment is for that person; the recovery practices are individual practices.
But individual cognitive sovereignty does not exist in isolation from collective attention infrastructure. A democracy whose citizens cannot sustain attention across a complex policy argument, cannot evaluate the credibility of competing information sources, and cannot reason effectively about statistical evidence is a democracy in which the preconditions for democratic deliberation are failing — not because any individual has lost cognitive capacity, but because the aggregate distribution of attention, credibility assessment, and reasoning quality across the population has degraded to the point where collective deliberation is impaired.
This is the attentional commons problem: cognitive sovereignty, like clean air and deliberative discourse, is partly a collective resource. An individual who develops good attentional practices in a social environment where everyone else is captured by the same algorithms gains individual benefit but does not escape the consequences of living in a society whose collective epistemic infrastructure is compromised. The quality of the information environment she inhabits — the media ecosystem, the political discourse, the epistemic standards of the institutions she depends on — is a function of the aggregate cognitive health of all the people who produce and consume that environment.
This paper argues that measuring cognitive sovereignty exclusively at the individual level misses a crucial dimension of the problem — and proposes population-level indicators that capture what individual measurement cannot.
What the Attentional Commons Is
The commons, in the sense Hardin specified in 1968, is a shared resource that is non-excludable (everyone has access to it) and subtractable (use by one person reduces availability to others). The classic commons — fisheries, grazing land, clean air — is a physical resource. The attentional commons is a cognitive resource: the aggregate attention, credibility assessment capacity, and reasoning quality of a population that constitutes the raw material from which collective deliberation is built.
The attentional commons has the properties of a commons in the relevant sense. It is non-excludable: no individual can exclude herself from living in a social environment whose epistemic quality is determined by the aggregate attention and reasoning capacity of everyone who participates in it. It is subtractable in a specific sense: when the attention and reasoning capacity of millions of people is captured by engagement-maximizing algorithms and redirected toward anxiety-inducing, outrage-amplifying, and credibility-degrading content, the epistemic quality of the information environment that everyone inhabits is reduced.
The tragedy of the attentional commons is not that individuals are harmed when their own attention is captured; that is the individual dimension of the problem. The tragedy is that the capture of millions of people's attention by systems optimized for engagement produces a collective epistemic environment in which deliberation, shared fact-finding, and democratic decision-making become progressively harder for everyone, including those with high individual cognitive sovereignty.
Population-Level Cognitive Health Indicators
Three categories of population-level indicators are proposed for measuring the health of the attentional commons. Each is operationally definable, historically trackable, and distinguishable from the platform metrics (engagement, time-on-platform) that currently define how digital infrastructure is assessed.
Indicator Category 1: Epistemic Quality Indicators
Epistemic quality indicators measure the aggregate capacity of a population to distinguish reliable from unreliable information, evaluate source credibility, and maintain calibrated uncertainty about contested claims. Operationalizations include: misinformation belief prevalence (survey-based; tracked by Reuters Institute, Pew Research, and university media labs); source credibility discrimination accuracy (experimental tasks embedded in large-scale surveys); news consumption diversity (media diet diversity indices, trackable via disclosed platform data); and epistemic humility calibration (the degree to which populations correctly assess their own uncertainty about contested claims).
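To make two of these operationalizations concrete, the sketch below computes a normalized Shannon-entropy index of news consumption diversity and a simple confidence-accuracy calibration gap from survey-style inputs. The function names and input formats are illustrative assumptions rather than an existing instrument; a real deployment would use binned calibration curves and weighted survey samples.

```python
import math

def news_diet_diversity(source_shares: list[float]) -> float:
    """Normalized Shannon entropy of a respondent's news diet.

    source_shares: fraction of total news consumption per source (sums to ~1).
    Returns 0.0 for a single-source diet, 1.0 for a perfectly even diet.
    """
    shares = [s for s in source_shares if s > 0]
    if len(shares) <= 1:
        return 0.0
    entropy = -sum(s * math.log(s) for s in shares)
    return entropy / math.log(len(shares))


def calibration_gap(confidences: list[float], correct: list[bool]) -> float:
    """Absolute gap between mean stated confidence and observed accuracy
    on a set of contested factual claims (0.0 = well calibrated overall)."""
    accuracy = sum(correct) / len(correct)
    mean_confidence = sum(confidences) / len(confidences)
    return abs(mean_confidence - accuracy)


# Illustrative respondent: 70% of news from one outlet, the rest spread thinly.
print(news_diet_diversity([0.70, 0.10, 0.10, 0.05, 0.05]))  # ~0.63
print(calibration_gap([0.9, 0.8, 0.95, 0.7], [True, False, False, True]))  # 0.3375
```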
The Reuters Institute Digital News Report has tracked news source trust, consumption patterns, and avoidance behaviors annually since 2012, expanding to 46 markets in its most recent editions; it is among the longest continuous cross-national datasets bearing on epistemic quality. The data show consistent declines in news trust, increases in news avoidance, and growing divergence between partisan information environments in countries with high social media penetration.
Indicator Category 2: Deliberative Capacity Indicators
Deliberative capacity indicators measure the aggregate capacity of a population to engage in sustained, evidence-responsive, across-difference discourse — the capacity that democratic deliberation requires. Operationalizations include: political polarization measures adjusted for informational environment (distinguishing policy disagreement from epistemic tribalism); cross-partisan communication frequency and quality; capacity to accurately attribute positions to political opponents (rather than caricatured versions); and discourse quality in public forums (coded for reasoning quality, evidence use, and good-faith engagement).
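As a sketch of how the first two of these operationalizations might be computed from standard survey items, the following assumes feeling-thermometer ratings on a 0–100 scale and paired attributed-versus-actual position items; the variable names and scales are illustrative assumptions, not a reference to any specific survey instrument.

```python
def affective_polarization(in_party: list[float], out_party: list[float]) -> float:
    """Mean in-party minus mean out-party feeling-thermometer rating (0-100 scale),
    the standard survey measure of affective polarization."""
    return sum(in_party) / len(in_party) - sum(out_party) / len(out_party)


def perception_gap(attributed: list[float], actual: list[float]) -> float:
    """Mean absolute error when respondents estimate opposing partisans' positions.

    attributed: what respondents believe the average opponent thinks (e.g. 1-7 scale).
    actual: what opposing partisans actually report on the same scale.
    0.0 = accurate attribution; larger values = caricatured opponents.
    """
    return sum(abs(a - b) for a, b in zip(attributed, actual)) / len(actual)


# Illustrative cohort values.
print(affective_polarization([82, 75, 90], [20, 35, 15]))  # 59.0-point thermometer gap
print(perception_gap([6.5, 6.0, 5.8], [4.2, 4.0, 4.5]))    # ~1.87 scale points of misperception
```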
The Pew Research Center's political polarization tracking series, running since 1994, provides the baseline dataset. The polarization increase documented between 2016 and 2022, squarely within the period of aggressive algorithmic feed optimization across major platforms, is the strongest candidate for a measurable attentional commons effect, though the causal attribution is contested (addressed in Section VI).
Indicator Category 3: Collective Attention Span Indicators
Collective attention span indicators measure a population's aggregate capacity for sustained engagement with complex, long-form information, the capacity that complex policy analysis, long-form journalism, and democratic governance require. Operationalizations include: long-form reading duration (trackable via publishing analytics); news article completion rates (available from publisher data); sustained engagement with complex legislative and judicial processes (civic engagement tracking); and educational attainment in sustained-attention disciplines (reading comprehension longitudinal data).
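A minimal sketch of the completion-rate and read-depth measures follows, assuming a publisher analytics export of per-session maximum scroll depth as a fraction of article length; the threshold and input format are illustrative assumptions.

```python
from statistics import mean

def completion_rate(scroll_depths: list[float], threshold: float = 0.9) -> float:
    """Share of reading sessions that reach `threshold` of the article body.

    scroll_depths: maximum scroll depth per session as a fraction of article
    length (0.0 = headline only, 1.0 = reached the end).
    """
    return sum(1 for d in scroll_depths if d >= threshold) / len(scroll_depths)


def mean_read_depth(scroll_depths: list[float]) -> float:
    """Average maximum scroll depth across sessions; a blunter but
    longitudinally comparable attention-span proxy."""
    return mean(scroll_depths)


# Illustrative long-form article: most sessions abandon early.
sessions = [0.15, 0.20, 0.95, 0.30, 1.00, 0.10, 0.45, 0.25]
print(completion_rate(sessions))   # 0.25
print(mean_read_depth(sessions))   # 0.425
```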
| Indicator Category | Key Measure | Existing Data Source | Trend Direction (2012–2024) |
|---|---|---|---|
| Epistemic Quality | Misinformation belief prevalence | Reuters Institute, Pew Research | Increasing in most markets |
| Epistemic Quality | News source trust | Reuters Institute Digital News Report | Declining; avoidance increasing |
| Deliberative Capacity | Cross-partisan attitude accuracy | More in Common; Academic polarization studies | Declining — misperception increasing |
| Deliberative Capacity | Affective political polarization | Pew Research Political Polarization Series | Increasing sharply post-2016 |
| Collective Attention Span | Long-form article completion rates | Publisher-disclosed scroll depth data | Declining; average read time falling |
| Collective Attention Span | Reading comprehension (adolescents) | PISA international assessments | Declining in digital-native cohorts |
What Capturing the Commons Produces
When the attention infrastructure of an entire population is systematically captured and redirected toward engagement-maximizing content, three collective functions fail, and individual-level cognitive sovereignty cannot restore them.
Shared fact-finding fails. Democratic governance requires a population that can converge on shared empirical assessments of the world — what the unemployment rate is, whether a public health threat is real, whether a policy has its claimed effects. This convergence is not natural; it requires epistemic infrastructure: reliable information sources, shared credibility standards, and the social trust that allows citizens to believe information from institutions they did not personally verify. When engagement-maximizing algorithms select for content that disputes consensus facts, rewards outrage over accuracy, and creates information environments in which different groups encounter categorically different empirical claims, the shared factual ground that democratic deliberation requires erodes.
Coordinated response breaks down. Collective action on shared problems — climate change, public health crises, infrastructure investment — requires that populations can form shared assessments of problems and commit to shared responses. This requires that attention can be sustained across complex issues for long enough to form considered preferences, and that deliberation can produce genuine consensus rather than tribal signal-sending. When the attentional commons is captured, collective action problems become harder to solve, not because people are less willing to cooperate, but because the epistemic infrastructure for identifying the problem, agreeing on its scope, and coordinating on a response has degraded.
Institutional trust fails selectively. Captured attention environments tend to erode trust in complex institutions — government, journalism, science, public health — that require sustained attention and epistemic trust to function. They do not erode trust uniformly; they erode trust in ways that serve engagement: distrust in establishment institutions generates engagement (outrage, sharing, discussion), while trust generates complacency (no story). The result is a population that is highly engaged with narratives of institutional failure and systematically underexposed to evidence of institutional function. This is not irrational given the information environment — it is the rational response to the information environment that engagement-maximizing design produces.
Historical Precedents for Commons Measurement
The attentional commons is not the first collective resource to be systematically degraded by private economic actors in ways that individual-level measurement could not detect. Environmental commons degradation — air quality, water quality, biodiversity — was similarly invisible in individual-level frameworks until population-level measurement infrastructure was developed.
The Clean Air Act (1970) and the environmental standards it mandated were possible because population-level air quality measurement, later standardized by the EPA as the Air Quality Index, made diffuse collective harm visible and actionable in regulatory frameworks. Before standardized population-level measurement existed, air pollution harms were documented only at the individual clinical level: this patient has elevated lead levels; this child has asthma. The population-level indicators revealed what individual cases could not: that the aggregate harm was systematic, traceable to specific industrial sources, and amenable to regulatory intervention.
The economic commons analogy is the GDP problem that the Measurement Crisis series (MC-001: What GDP Cannot See) documents. GDP, like the engagement metric, is a high-resolution measure of something real (economic activity) that is systematically decoupled from the collective welfare it was designed to track. The development of alternative economic indicators — the Genuine Progress Indicator, the OECD Better Life Index, the Human Development Index — has been exactly the process of developing population-level measurement infrastructure for the economic commons that this paper proposes for the attentional commons.
The Evidence Base for Population-Level Effects
The causal evidence for population-level attentional commons degradation is more contested than the causal evidence for individual-level harms, because population-level experiments are harder to run and confounders are harder to control. The available population-level evidence is associational and quasi-experimental; the randomized evidence that does exist operates at the individual level.
The strongest of that randomized evidence comes from platform exposure and deactivation experiments. Bail et al. (2018, PNAS) randomized exposure to opposing political views on Twitter and found that such exposure increased political polarization, with the clearest effect among Republicans. Allcott et al. (2020, American Economic Review) ran a randomized study of Facebook deactivation, finding reductions in political news consumption, polarization, and post-experiment Facebook use, but only modest effects on knowledge and attitudes. The effects are real; the magnitude is contested.
The epidemiological evidence is stronger at the level of association. The period of aggressive algorithmic feed optimization (2014–2022) correlates with documented increases in affective political polarization (Pew), declines in institutional trust across Western democracies (Edelman Trust Barometer), increases in misinformation belief prevalence (Reuters Institute), and declining reading comprehension in adolescent cohorts (PISA). The correlation does not establish causation, but it is consistent with the mechanism hypothesis and is not fully explained by alternative accounts that ignore changes in the information environment.
Economic inequality, institutional failures, and political culture changes, all of which predate the social media period, are plausible alternative explanations for the trends attributed here to attentional commons capture. Gurri (2018) argues that the crisis of authority is a consequence of information abundance per se, not of engagement-maximizing design specifically. Achen and Bartels (2016) argue that voters were never as rational as democratic theory assumes, a deficit that predates social media entirely.
The response is that the attentional commons hypothesis does not require that social media be the sole cause of polarization or institutional distrust, only that engagement-maximizing design is a systematic amplifier of those trends, operating at a scale and through a mechanism that population-level measurement can detect and that regulatory intervention can address. The environmental analogy is precise: industrial pollution was not the only cause of respiratory illness in 1970, but it was a systematic and regulable contributor that population-level measurement made visible. The question is not whether social media is the sole cause; it is whether it is a substantial, measurable, and correctable contributor to attentional commons degradation.
What Measuring the Commons Demands
Population-level attentional commons measurement demands infrastructure that does not exist: standardized indicators, longitudinal data collection, platform disclosure requirements, and independent research access.
The platform disclosure problem is the most acute. The best data for measuring attentional commons health is platform-held data: the actual content that different demographic groups encounter, the algorithmic ranking decisions that shape exposure, the engagement patterns across different content types, and the correlations between platform exposure patterns and downstream belief and behavior outcomes. This data exists; it is held by the platforms that produce it; it is not available to independent researchers. The EU Digital Services Act's vetted-researcher access provisions and the US KOSA's research access provisions (documented in LA-002 and LA-003 respectively) are the legislative instruments most relevant to this access problem.
Without platform data access, population-level cognitive health measurement is limited to survey-based indicators and behavioral data collected outside the platform. These are valuable but insufficient for the causal attribution questions that regulation requires. A regulatory framework that mandated platform disclosure of the data necessary to compute attentional commons indicators would, for the first time, make it possible to connect specific platform design choices to population-level epistemic outcomes — exactly the evidence base that the Legal Architecture series argues is necessary for effective cognitive sovereignty regulation.
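One way to make the disclosure requirement concrete is to specify the minimal per-exposure record a platform would need to export for independent researchers to compute the indicators above. The schema below is an illustrative sketch, not a field list drawn from any existing statute, platform API, or the LA-series papers; it assumes coarse, k-anonymous demographic buckets rather than individual identities.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ExposureRecord:
    """Minimal per-exposure disclosure record sufficient to compute
    population-level attentional commons indicators (illustrative schema)."""
    exposure_id: str             # opaque identifier; no user identity required
    timestamp: datetime          # when the item was shown
    demographic_bucket: str      # coarse, k-anonymous cohort, e.g. "US-18-24"
    content_category: str        # e.g. "political-news", "entertainment"
    source_domain: str           # publisher or account domain, for diversity indices
    ranking_reason: str          # e.g. "engagement-predicted" or "chronological"
    predicted_engagement: float  # the platform's own ranking score, if one was used
    dwell_seconds: float         # observed attention, for attention-span indicators
    completed: bool              # whether the item was consumed to the end


def source_diversity(records: list[ExposureRecord]) -> float:
    """Distinct source domains per exposure within a cohort: a crude stand-in
    for the news consumption diversity indices described earlier."""
    if not records:
        return 0.0
    return len({r.source_domain for r in records}) / len(records)
```

Even a record this minimal would allow the epistemic quality, deliberative capacity, and attention span indicators described above to be computed per demographic cohort and tracked longitudinally, which survey data alone cannot support.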
The Measurement Reformation capstone (MR-004) addresses how this infrastructure gets built. The present paper establishes what it needs to measure: the attentional commons is real, its degradation is consequential, and its invisibility to current regulatory frameworks is a structural deficiency that the Collective Blind Spot perpetuates.
Selected Sources
- Hardin, G. (1968). The tragedy of the commons. Science, 162(3859), 1243–1248.
- Bail, C.A., et al. (2018). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences, 115(37), 9216–9221.
- Allcott, H., et al. (2020). The welfare effects of social media. American Economic Review, 110(3), 629–676.
- Reuters Institute for the Study of Journalism. (2024). Digital News Report 2024. University of Oxford.
- Pew Research Center. (2014). Political Polarization in the American Public. Washington, D.C.
- Edelman. (2024). Edelman Trust Barometer 2024. Edelman Intelligence.
- OECD. (2023). PISA 2022 Results (Volume I): The State of Learning and Equity in Education. OECD Publishing.
The Institute for Cognitive Sovereignty. (2026). The Attentional Commons [ICS-2026-MR-003]. The Institute for Cognitive Sovereignty. https://cognitivesovereignty.institute/measurement-reformation/the-attentional-commons