The Measurement Reformation · Paper II

The Cognitive Sovereignty Index

A Scored, Citable Metric for Individual and Population-Level Cognitive Agency

The Institute for Cognitive Sovereignty · 2026 · Research Paper · Open Access · CC BY-SA 4.0

ICS-2026-MR-002 · Published March 6, 2026 · 20 min read

  • 0 · scored, citable composite metrics for cognitive sovereignty in regular research or clinical use
  • 6 · HEXAD dimensions that the Cognitive Sovereignty Index operationalizes into a composite score
  • 79 · papers in the Institute's published evidence base that the CSI draws on for calibration and validation
“Not everything that can be counted counts, and not everything that counts can be counted.”
— Attributed to William Bruce Cameron, 1963 — and precisely the problem this paper attempts to solve
Methodological status: The CSI scoring architecture presented in this paper is a proposed framework. No pilot data has been collected, no reliability coefficients calculated, and no population norms established. The architecture specifies what would need to be measured and how components would be aggregated — not what has been measured. Empirical validation, including prospective administration, predictive validity assessment, and cross-cultural testing, is required before the CSI can function as a deployed measurement instrument. Section VII addresses these requirements explicitly.
Section I

Why a Composite Index Is Necessary

Cognitive sovereignty, as the Institute's research program has established, is not a single-dimensional construct. It is the aggregate condition of a person's capacity to direct their own attention, evaluate their own perceptions, reason from evidence, regulate their emotional states, navigate social information environments, and maintain epistemic agency in the face of systems designed to compromise it. These capacities are related but distinct; they degrade under capture in different ways and at different rates; and they respond to different interventions.

The existing research base documented across the Institute's series includes measures for components of this condition — attention measures (the Sustained Attention to Response Task, the Attention Network Test), mood and well-being instruments (the PANAS, the Warwick-Edinburgh Mental Well-being Scale), screen time self-report measures, social media use inventories — but no single measure operationalizes cognitive sovereignty as a composite. The result is a measurement gap with practical consequences: researchers cannot easily compare cognitive sovereignty outcomes across studies that use different component measures, clinicians cannot holistically assess a patient's cognitive sovereignty status, and regulators cannot mandate platform disclosure of cognitive sovereignty impacts because there is no agreed metric to disclose.

A composite index addresses this problem by aggregating component measures into a single score that is standardized, reproducible, and citable. The Human Development Index (HDI), the Genuine Progress Indicator (GPI), and the OECD Better Life Index are all examples of composite indices that have achieved policy relevance by aggregating multiple components into a single tractable number. The Cognitive Sovereignty Index (CSI) proposed here is designed to occupy a similar role: not to replace component measures in research settings, but to provide a tractable summary measure for policy, clinical, and public communication purposes.


Section II

What the CSI Measures

The Cognitive Sovereignty Index is organized around the six dimensions of the HEXAD framework, which the Institute's existing materials at hexad.html describe and which will be formally specified in the HEXAD series papers. The six dimensions and their cognitive sovereignty relevance are:

Dimension 1: Attentional Sovereignty

The capacity to direct and sustain voluntary attention. Operationalized in the CSI as: self-reported difficulty sustaining attention on chosen tasks; behavioral measures of distractibility; frequency of attention interruptions; subjective sense of attentional control. Degradation profile: attentional sovereignty degrades under notification load, sleep deprivation, social comparison activation, and variable-ratio reinforcement exposure.

Dimension 2: Perceptual Sovereignty

The capacity to evaluate one's own perceptions for reliability — to maintain calibrated skepticism about information sources without collapsing into either credulity or wholesale distrust. Operationalized as: performance on source credibility assessment tasks; susceptibility to misinformation; epistemic humility calibration. Degradation profile: perceptual sovereignty degrades under information overload, algorithmic filter bubbles, and chronic time pressure.

Dimension 3: Reasoning Sovereignty

The capacity to reason from evidence to conclusions without systematic distortion by motivated reasoning, availability bias, or emotional state contamination. Operationalized as: performance on cognitive reflection tasks; susceptibility to common reasoning fallacies; ability to reason effectively about statistical evidence. Degradation profile: reasoning sovereignty degrades under high cognitive load, emotional dysregulation, and sleep deprivation.

Dimension 4: Emotional Sovereignty

The capacity to regulate emotional states without external prosthesis — specifically, without using platform engagement as an emotional regulation mechanism. Operationalized as: self-reported emotional regulation capacity; use of social media for mood management; ability to tolerate negative affect without behavioral compulsion. Degradation profile: emotional sovereignty degrades under dopaminergic conditioning, social comparison activation, and variable-ratio reinforcement.

Dimension 5: Social Cognitive Sovereignty

The capacity to navigate social information environments without systematic distortion by platform-mediated social comparison, status anxiety, or tribalistic in-group/out-group amplification. Operationalized as: social comparison orientation; in-group/out-group attribution patterns; susceptibility to outrage-amplifying content. Degradation profile: social cognitive sovereignty degrades under engagement-ranked feed exposure, like-count display, and algorithmic outrage amplification.

Dimension 6: Epistemic Sovereignty

The capacity to form, revise, and hold beliefs through one's own epistemic processes rather than through deference to algorithmically curated consensus signals. Operationalized as: intellectual independence measures; susceptibility to social proof manipulation; capacity to maintain well-reasoned minority positions against social pressure. Degradation profile: epistemic sovereignty degrades under filter bubble exposure, engagement-ranked consensus amplification, and social media crowd behavior.

Named Condition — ICS-2026-MR-002
The Measurement Gap
The absence of a standardized, scored, citable composite metric for cognitive sovereignty — the condition in which the research base for a construct exists across dozens of component measures in multiple disciplines, none of which has been integrated into an index deployable for policy, clinical, or public communication purposes. The Measurement Gap produces a specific failure mode: researchers and clinicians can document degradation in individual components of cognitive sovereignty but cannot compare, aggregate, or communicate the overall condition in terms that regulatory and policy processes can act on.

Section III

The Scoring Architecture

The CSI is scored on a 0–100 scale, with 100 representing maximum cognitive sovereignty across all six dimensions. Each dimension contributes a sub-score on the same 0–100 scale; the composite CSI score is a weighted average of the six dimension scores.

The proposed default weighting is equal: each dimension contributes one-sixth of the composite score. This is a deliberate starting point rather than a validated conclusion. The HDI was similarly launched with equal weighting before decades of research on component relationships informed revised weightings; the CSI anticipates the same iterative development. The equal-weighted composite is the minimal credible starting position.
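The equal-weighted composite described above can be sketched directly. This is a minimal illustration of the proposed aggregation, not a published CSI implementation; the function and dimension key names are assumptions introduced here for clarity.

```python
# Sketch of the proposed CSI composite: an equal-weighted average of the six
# HEXAD dimension sub-scores, each on a 0-100 scale. Dimension keys and the
# function name are illustrative, not part of a published specification.

HEXAD_DIMENSIONS = (
    "attentional", "perceptual", "reasoning",
    "emotional", "social_cognitive", "epistemic",
)

def composite_csi(sub_scores: dict[str, float]) -> float:
    """Equal-weighted composite of the six 0-100 dimension sub-scores."""
    missing = set(HEXAD_DIMENSIONS) - set(sub_scores)
    if missing:
        raise ValueError(f"missing dimension scores: {sorted(missing)}")
    for name in HEXAD_DIMENSIONS:
        if not 0 <= sub_scores[name] <= 100:
            raise ValueError(f"{name} sub-score must lie in [0, 100]")
    # Equal weighting: each dimension contributes one-sixth of the composite.
    return sum(sub_scores[d] for d in HEXAD_DIMENSIONS) / len(HEXAD_DIMENSIONS)
```

Because each dimension contributes exactly one-sixth, a uniform profile maps to its own value (all sub-scores at 60 yield a composite of 60), which keeps the 0–100 interpretation of the composite aligned with the sub-scales.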

Each dimension score is derived from three input types:

  • Self-report instruments (validated scales, administered via questionnaire; weight: 40% of dimension score)
  • Behavioral tasks (cognitive assessments administered digitally; weight: 40% of dimension score)
  • Environmental factors (objective conditions: daily screen time, notification frequency, sleep duration, exercise frequency; weight: 20% of dimension score)

The environmental factor component is designed to capture structural conditions that influence cognitive sovereignty independent of individual performance on assessments — the person who scores well on attentional tasks but is sleeping five hours a night in a high-notification environment is in a more fragile attentional sovereignty position than their task performance alone indicates.
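The 40/40/20 blend for a single dimension can be sketched as follows, assuming each input has already been normalized to the 0–100 scale (the normalization procedures themselves are not specified in this paper, and the function name is illustrative):

```python
# Sketch of one dimension sub-score under the proposed 40/40/20 input
# weighting. Assumes all three inputs are pre-normalized to 0-100.

INPUT_WEIGHTS = {
    "self_report": 0.40,    # validated scales via questionnaire
    "behavioral": 0.40,     # digitally administered cognitive tasks
    "environmental": 0.20,  # screen time, notifications, sleep, exercise
}

def dimension_score(self_report: float, behavioral: float,
                    environmental: float) -> float:
    """Weighted blend of the three normalized (0-100) input components."""
    components = {
        "self_report": self_report,
        "behavioral": behavioral,
        "environmental": environmental,
    }
    return sum(INPUT_WEIGHTS[k] * v for k, v in components.items())
```

The 20% environmental term behaves as the text intends: strong task performance (say, 90) combined with a fragile environment (say, 30) produces a lower sub-score than task performance alone would suggest.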

For each dimension, the mapping below gives the primary self-report instrument, the primary behavioral task, and the key environmental factors:

  • Attentional Sovereignty · Self-report: Adult ADHD Self-Report Scale (ASRS) + custom attentional control items · Behavioral task: Sustained Attention to Response Task (SART) · Environmental: daily notification count; sleep duration
  • Perceptual Sovereignty · Self-report: Actively Open-minded Thinking scale (AOT) · Behavioral task: news credibility assessment task (standardized stimuli) · Environmental: primary news source diversity; platform filter bubble score
  • Reasoning Sovereignty · Self-report: Need for Cognition scale (NCS) · Behavioral task: Cognitive Reflection Test (CRT, 7-item version) · Environmental: daily screen time; sleep duration
  • Emotional Sovereignty · Self-report: Emotion Regulation Questionnaire (ERQ) + social media for mood items · Behavioral task: delay discounting task (impulsivity proxy) · Environmental: daily social media use duration; notification interruptions
  • Social Cognitive Sovereignty · Self-report: Social Comparison Orientation scale (SCS) + FOMO scale · Behavioral task: outrage susceptibility task (custom) · Environmental: engagement-ranked feed exposure; like-count display exposure
  • Epistemic Sovereignty · Self-report: Actively Open-minded Thinking (epistemic subscale) + intellectual humility items · Behavioral task: belief revision task (performance under social pressure) · Environmental: filter bubble score; social proof manipulation exposure

Section IV

Calibration Against the Evidence Base

The CSI's component measures are selected for their alignment with the mechanisms documented in the Institute's published papers. This is calibration in the sense of ensuring that the instrument tracks the constructs that the evidence base has established as relevant — not empirical calibration of scale properties, which requires prospective data collection.

The Attention Series (AS-001 through AS-005) documents engagement-maximizing design's effects on sustained attention, dopaminergic conditioning, and adolescent development. The CSI's Attentional Sovereignty dimension is designed to track exactly the degradation these mechanisms produce. The Neurotoxicity Record (NR-001 through NR-006) documents the molecular and physiological pathway from platform use to cognitive impairment; the environmental factors in the CSI (sleep, notification load, screen time) are selected because they are the proximate variables the NR series identifies as upstream of those pathways.

The Measurement Crisis series (MC-001 through MC-006) documents how metrics become targets and how targets replace the things they were meant to measure. The CSI is designed with this risk explicitly in mind: the composite architecture and the equal weighting are intended to make it harder to optimize for CSI score without improving the underlying condition, because the components track distinct mechanisms that are not simultaneously improvable through a single intervention.

Calibration against prospective data will require longitudinal studies administering the CSI alongside validated component measures and tracking predictive validity — the extent to which CSI scores at time 1 predict relevant outcomes at time 2. This is the next stage of development and requires institutional support and research funding that the Institute does not currently have. This paper establishes the theoretical basis for the index; empirical calibration is the research agenda it implies.


Section V

Individual vs. Population Applications

The CSI is designed to function at two levels: as a self-assessment tool for individual use, and as a research instrument for population-level measurement.

At the individual level, the CSI provides a scored, interpretable summary of a person's current cognitive sovereignty status across six dimensions. A person who scores 78 overall with a low Emotional Sovereignty sub-score (42) and high Attentional Sovereignty (90) can identify Emotional Sovereignty as the primary degradation domain and access the relevant practices from the Recovery Architecture series. The individual-level application does not require validated population norms — it provides directionally useful information about the distribution of strength and vulnerability across dimensions.
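The profile interpretation described above is mechanical: the lowest sub-score identifies the priority domain. A minimal sketch, using a hypothetical profile whose unspecified sub-scores are chosen so the equal-weighted composite comes out at the 78 used in the text:

```python
# Illustrative individual profile matching the worked example in the text:
# composite 78 overall, Emotional Sovereignty low (42), Attentional high (90).
# The other four sub-scores are hypothetical values introduced here.

profile = {
    "Attentional": 90,
    "Perceptual": 80,
    "Reasoning": 82,
    "Emotional": 42,
    "Social Cognitive": 85,
    "Epistemic": 89,
}

# Equal-weighted composite across the six dimensions.
composite = sum(profile.values()) / len(profile)

# The lowest-scoring dimension is the primary degradation domain,
# pointing the individual to the relevant Recovery Architecture practices.
priority_domain = min(profile, key=profile.get)
```

Note that no population norms are required for this use: the comparison is across the person's own dimensions, which is what makes the individual-level application directionally useful before validation.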

At the population level, the CSI enables comparisons across groups, platforms, interventions, and time periods. Population-level CSI measurement requires sampling methodology, standardized administration, and the population norms that individual-level use does not require. The population-level application is the policy-relevant application — the application that regulators could mandate platforms to facilitate, that public health researchers could use to track cognitive sovereignty outcomes, and that the Attentional Commons paper (MR-003) uses as a foundation for collective well-being measurement.

The precedent for this dual-level application is the Patient Reported Outcomes Measurement Information System (PROMIS), a National Institutes of Health initiative that developed standardized, validated instruments for patient-reported health outcomes that function both as individual assessment tools and as population-level research instruments. The CSI is designed for an analogous role in cognitive health research.


Section VI

Limitations of Composite Indices

Composite indices are methodologically vulnerable in three respects that any honest presentation of the CSI must acknowledge.

First, compositing produces averaging artifacts. A person with extreme degradation in one dimension and high function in others can achieve a moderate composite score that does not accurately represent their condition. The CSI mitigates this by reporting dimension sub-scores alongside the composite — the composite is a summary, not a replacement for dimensional analysis.

Second, composite indices are subject to Goodhart's Law in precisely the way this series documents engagement metrics being subject to it. If the CSI becomes a target — if platforms are required to improve their users' CSI scores — there will be incentives to game the instrument. High CSI scores could be achieved by coaching users on the behavioral tasks without improving the underlying capacities. This is the same gaming vector that the MR-001 paper identifies for self-report metrics: it is manageable with third-party administration and auditing, but not with platform-controlled administration.

Third, the equal weighting is a pragmatic choice without strong theoretical justification. Different weighting schemes would produce different composite scores; the choice of weights is a normative decision about which dimensions of cognitive sovereignty matter most, and there is no value-neutral answer to that question. The CSI is transparent about this: the weights are reported alongside the score, and alternative weightings can be applied to the sub-scores by anyone who disagrees with the default.
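Applying an alternative weighting to published sub-scores, as the paragraph above allows, is a one-line reweighting. A minimal sketch, assuming the weight vector shares the sub-scores' dimension keys; all numbers are illustrative:

```python
# Sketch of recomputing the composite under an explicit, user-chosen weight
# vector. The choice of weights is a normative decision; nothing here is a
# sanctioned CSI weighting scheme.

def weighted_csi(sub_scores: dict[str, float],
                 weights: dict[str, float]) -> float:
    """Composite under an explicit weight vector (weights must sum to 1)."""
    if set(weights) != set(sub_scores):
        raise ValueError("weights must cover exactly the scored dimensions")
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(weights[d] * sub_scores[d] for d in sub_scores)
```

Two defensible weight vectors over the same sub-scores will generally disagree, which is exactly why the paper requires the weights to be reported alongside the score.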

Counterpoint
Composite indices obscure more than they reveal — the component measures are what matter

A serious methodological objection holds that composite indices trade precision for communicability at an unacceptable cost. The component measures are the actual data; the composite is a summary statistic that aggregates across incommensurable dimensions. A person with high attentional sovereignty and low emotional sovereignty is not usefully described by a composite score that averages those conditions. The component profile is informative; the composite number is not.

The response is that this objection applies with equal force to the Human Development Index, the OECD Better Life Index, and every other composite measure that has achieved policy relevance precisely because it provides a tractable summary. The composite is not designed to replace component analysis in research settings; it is designed to provide a summary metric that can be cited in policy documents, mandated in regulatory frameworks, and communicated to general audiences. These are legitimate uses that component profiles cannot serve efficiently. The composite and the component profile are complementary, not competing.


Section VII

What the CSI Demands

The Cognitive Sovereignty Index as proposed here is a theoretical instrument. It requires three things to become a deployed measurement tool.

First, it requires empirical validation: prospective studies administering the CSI in standardized conditions, tracking predictive validity against established outcome measures, and producing the population norms without which individual scores are not interpretable in comparative terms. This is a substantial research program requiring institutional support, funding, and collaboration across the psychology, public health, and measurement science communities.

Second, it requires open-source implementation: a publicly available, freely licensed version of the CSI instrument — questionnaires, task specifications, scoring algorithms — that researchers and clinicians can adopt and that platforms can be required to support without licensing barriers. The instrument must be a public good, not a proprietary tool, if it is to function as a regulatory standard.

Third, it requires adoption by a standards body: a regulatory agency, professional association, or international body with the authority to mandate use of the CSI as a reporting metric for platforms operating in its jurisdiction. The Legal Architecture series (LA-001 through LA-005) describes the statutory framework that would supply this mandate. The Measurement Reformation paper (MR-004) describes the institutional pathway to adoption.

The CSI's relationship to the Institute's project is the relationship between a measurement instrument and the thing it measures: the four prior sagas documented cognitive sovereignty's degradation across dozens of mechanisms; the Measurement Reformation series documents what measuring cognitive sovereignty coherently would require. The CSI is the answer to the question “measured how?” — a question that the critique of engagement metrics makes unavoidable.


Sources

Selected Sources

  • UNDP. (2020). Human Development Report 2020. United Nations Development Programme. (HDI methodology reference)
  • Cella, D., et al. (2010). The Patient-Reported Outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks. Journal of Clinical Epidemiology, 63(11), 1179–1194.
  • Gross, J.J., & John, O.P. (2003). Individual differences in two emotion regulation processes: implications for affect, relationships, and well-being. Journal of Personality and Social Psychology, 85(2), 348–362. (ERQ validation)
  • Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42. (CRT validation)
  • Stanovich, K.E., & West, R.F. (1997). Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology, 89(2), 342–357. (AOT scale)
  • Robertson, R.E., et al. (2023). Auditing partisan audience bias within Google Search. Proceedings of the National Academy of Sciences, 120(36). (Filter bubble measurement methodology)
How to Cite

The Institute for Cognitive Sovereignty. (2026). The Cognitive Sovereignty Index [ICS-2026-MR-002]. The Institute for Cognitive Sovereignty. https://cognitivesovereignty.institute/measurement-reformation/the-cognitive-sovereignty-index

References

Internal: This paper is part of The Measurement Reformation (MR series), Saga V. It draws on and contributes to the argument documented across 20 papers in 5 series.

External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.