HEXAD Series — Paper III of V

The Dimensional Assessment Protocol

How to Measure Where You Are Across All Six Dimensions
ICS-2026-HX-003 · March 2026 · The Institute for Cognitive Sovereignty · 20 min read
6 dimensions, each assessed by distinct instrument types
3 input types per dimension: self-report, behavioral, contextual
0 validated cross-dimensional protocols in prior literature

I. The Assessment Gap

The six HEXAD dimensions (specified in Paper I) and their degradation signatures (documented in Paper II) create a diagnostic framework. What the framework has lacked, until now, is an operationalized assessment instrument: a concrete protocol that tells a practitioner or individual how to measure where they actually stand across each dimension.

Existing cognitive assessment instruments are not adequate substitutes. Standardized cognitive testing measures underlying capacity rather than capture-induced impairment. Mental health screening instruments capture symptoms of conditions that may or may not correlate with cognitive sovereignty status. Media literacy assessments capture one dimension — perceptual — without attending to the others. No validated instrument has existed for the composite profile that the HEXAD framework specifies.

This paper proposes the Dimensional Assessment Protocol (DAP): a structured, multi-input assessment procedure for generating a dimensional profile across all six HEXAD dimensions. The DAP is not a finished product but a research proposal — an evidence-grounded specification for what a validated instrument would need to include, how it would be administered, and how its outputs would be interpreted.

Named Condition
The Assessment Void
The absence of a validated, multi-dimensional instrument for measuring the full profile of cognitive sovereignty impairment across individuals, leaving practitioners, researchers, and individuals without the diagnostic foundation required for targeted intervention. The Assessment Void means that restoration efforts must be designed without precise knowledge of what needs restoring.

II. Design Requirements

A valid Dimensional Assessment Protocol must satisfy six design requirements derived from the clinical assessment literature and the specific structure of the HEXAD framework:

  1. Dimensional specificity: The instrument must generate distinct scores for each of the six dimensions, not a single aggregate. Capture profiles differ across individuals; a single aggregate score would obscure the diagnostic information that motivates the dimensional framework.
  2. Multi-input architecture: Each dimension must be assessed through more than one input type. Self-report alone is insufficient: attentional degradation impairs the metacognitive accuracy required for accurate self-report; perceptual degradation distorts the referents against which self-assessments are made. Behavioral and contextual data must supplement self-report.
  3. Ecological validity: The protocol must assess dimensions as they operate in natural contexts, not only in controlled test environments. Attentional capacity in a laboratory may differ substantially from attentional sovereignty in an attention-capture environment.
  4. Low assessment burden: An instrument requiring four hours of clinical testing cannot be widely deployed. The DAP targets a 45–60 minute completion time for a full dimensional profile.
  5. Interpretive clarity: Scores must be interpretable against a population distribution and actionable — they must point toward specific interventions. The Cognitive Sovereignty Index (Measurement Reformation Paper II) provides the population reference framework.
  6. Repeated-measures capacity: The DAP must be suitable for repeated administration at 4–12 week intervals to track restoration progress. Instrument design must account for practice effects and establish test-retest reliability thresholds.
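To make requirements 1 and 6 concrete, the following is a minimal sketch, in Python (the paper specifies no implementation language), of a profile data structure that keeps the six dimensional scores distinct rather than aggregated and supports per-dimension comparison across repeated administrations. All names here are illustrative assumptions, not part of the DAP specification.

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict

# The six HEXAD dimensions named in the paper (labels are assumed shorthand).
DIMENSIONS = (
    "attentional", "perceptual", "reasoning",
    "emotional", "social_cognitive", "epistemic",
)

@dataclass
class DimensionalProfile:
    """One DAP administration: six distinct 0-100 scores, never one aggregate."""
    administered: date
    scores: Dict[str, float]  # dimension name -> standardized 0-100 score

    def __post_init__(self):
        missing = set(DIMENSIONS) - set(self.scores)
        if missing:
            raise ValueError(f"profile must cover all six dimensions; missing: {missing}")
        for dim, s in self.scores.items():
            if not 0 <= s <= 100:
                raise ValueError(f"{dim} score out of 0-100 range: {s}")

def change_since(baseline: DimensionalProfile,
                 follow_up: DimensionalProfile) -> Dict[str, float]:
    """Per-dimension change across repeated administrations (requirement 6)."""
    return {d: follow_up.scores[d] - baseline.scores[d] for d in DIMENSIONS}
```

Keeping the profile as a mapping rather than a sum enforces requirement 1 structurally: no aggregate exists unless a population-research caller computes one deliberately.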

III. Instrument Structure: Three Input Types

For each of the six dimensions, the DAP combines three input types:

Input Type 1: Self-Report Subscale

A validated self-report subscale for each dimension, drawing on or adapting existing validated instruments where available (e.g., the MAAS for attentional items, the News Media Literacy subscale for perceptual items, the Need for Cognition Scale for reasoning items, the Difficulties in Emotion Regulation Scale for emotional items, the Interpersonal Reactivity Index for social cognitive items, the Actively Open-minded Thinking Scale for epistemic items). Where existing instruments require adaptation, the DAP specifies the adaptation rationale and required validation procedures.

Self-report subscales use a 5-point Likert format, 6–8 items per dimension, producing a 36–48 item self-report module.
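The 5-point format and 6–8 item length above imply a simple standardization step for each subscale. The sketch below, a Python illustration rather than part of the DAP specification, maps mean item response (1–5) onto the 0–100 range used in Section V; the reverse-coded-items parameter is an assumption, since the paper does not specify item polarity.

```python
def score_subscale(responses, reverse_coded=()):
    """
    Standardize one 5-point Likert subscale (items coded 1-5) to 0-100.
    `responses` holds the 6-8 item responses for a single dimension;
    item indices in `reverse_coded` are flipped (6 - x) before averaging.
    Reverse coding is an assumption here, not specified by the DAP.
    """
    if not 6 <= len(responses) <= 8:
        raise ValueError("DAP self-report subscales use 6-8 items per dimension")
    adjusted = [6 - r if i in reverse_coded else r
                for i, r in enumerate(responses)]
    mean = sum(adjusted) / len(adjusted)
    return (mean - 1) / 4 * 100  # map the 1-5 response range onto 0-100
```

A midpoint response on every item ("3") yields 50, the anchors yield 0 and 100, so subscales of different lengths remain directly comparable.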

Input Type 2: Behavioral Indicator Module

A structured behavioral observation or task-based module for each dimension. Behavioral indicators are less susceptible to the distortions that make self-report alone insufficient in degraded cognitive states.

Input Type 3: Contextual Exposure Profile

A structured intake of the assessee's current capture exposure context: primary platforms, average daily use hours, notification volume, sleep timing and quality, primary information sources. The contextual profile does not directly produce dimension scores but provides essential calibration for interpreting behavioral and self-report data, and serves as the primary input for the Capture Profile designation (Paper II).

IV. Per-Dimension Assessment Specifications

Dimension I — Attentional Sovereignty
  1. On a typical day, how often do you check your phone without deciding to? (Never / Rarely / Sometimes / Often / Very often)
  2. When you sit down to focus on a single task, how long can you typically sustain focus before an urge to check email, social media, or messages arises?
  3. How often do you begin a task intending to work for 30+ minutes and find yourself having switched to something else within 10 minutes?
  4. Do you find it difficult to read a book or long article without the impulse to check your phone or open another tab?
  5. How many times per hour do notifications interrupt your work or leisure time on a typical day?
  6. Rate your overall ability to direct and sustain your own attention on a scale of 1–10.
Dimension II — Perceptual Sovereignty
  1. When you encounter a news story or claim online, do you typically verify it with an additional source before accepting it?
  2. How often do you actively seek information that contradicts your existing beliefs?
  3. Can you name the primary owners and editorial orientations of your three most-used information sources?
  4. Do you believe the content you see in your social media feed is representative of what is actually happening in the world?
  5. How confident are you in your ability to distinguish reliable from unreliable online sources?
  6. In the past month, have you changed your view on any significant topic based on new information?
Dimension III — Reasoning Sovereignty
  1. How often do you find yourself making decisions impulsively that you later regret?
  2. When you encounter a compelling argument you initially agree with, do you look for counterarguments before accepting it?
  3. How would you rate your typical quality of sleep on a scale of 1–10?
  4. On days when you feel emotionally activated (angry, anxious, excited), do you notice any difference in the quality of your reasoning?
  5. How often do you feel cognitively overwhelmed by the volume of information you receive?
  6. How comfortable are you with remaining uncertain about a question until you have sufficient evidence to form a view?
Dimension IV — Emotional Sovereignty
  1. How often does seeing content online (posts, news, comments) change your emotional state in a way that persists for hours?
  2. When you are in a negative emotional state, how long does it typically take you to return to baseline without external stimulation (scrolling, seeking validation, etc.)?
  3. How much does the number of likes or positive responses your posts receive affect how you feel about yourself on that day?
  4. Do you use social media or phone use as a primary way to manage negative emotions (boredom, loneliness, anxiety)?
  5. How often do you experience outrage or moral indignation in response to online content?
  6. Rate your overall capacity to regulate your emotional state without external stimulation on a scale of 1–10.
Dimension V — Social Cognitive Sovereignty
  1. In a typical week, how many hours do you spend in reciprocal in-person social interaction vs. consuming parasocial content (podcasts, streams, influencer content)?
  2. When you disagree with someone online, do you typically try to understand their reasoning before responding?
  3. Do you believe that people who hold views very different from yours are genuinely trying to act on their values, even if you think their values are wrong?
  4. How often do your online interactions involve people you have never met and will never meet in person?
  5. How comfortable are you with the ambiguity and unpredictability of other people's mental states?
  6. Have online interactions made you more or less trusting of strangers compared to five years ago?
Dimension VI — Epistemic Sovereignty
  1. How often do you form or revise opinions primarily because you saw that many people around you held them?
  2. Can you articulate the primary evidence for at least two of your most important political or social beliefs?
  3. How often do you read primary sources (research papers, official documents, original speeches) rather than summaries or commentary?
  4. When a trusted figure (expert, community leader, media personality) states something confidently, how likely are you to accept it without further verification?
  5. How comfortable are you arriving at a conclusion that differs from the consensus in your social circle?
  6. Rate your confidence in your capacity for genuinely independent thought on a scale of 1–10.

V. Scoring Architecture

Raw scores from each input type are converted to standardized scores (0–100 range) and combined using the weighting structure derived from the Cognitive Sovereignty Index framework (Measurement Reformation Paper II). Note: These weights are hypothesized based on theoretical rationale, not empirically calibrated. They require validation against external criterion measures before clinical use.

Input Type | Weight | Rationale | Notes
Self-Report Subscale | 35% | Captures experienced impairment; reduced weight due to metacognitive bias risk in degraded states | May be reduced to 25% if the assessee scores >80 on contextual capture exposure
Behavioral Indicator Module | 45% | More resistant to metacognitive distortion; primary source of diagnostic validity | Requires standardized administration conditions
Contextual Exposure Profile | 20% | Calibration and contextual adjustment; exposure predicts degradation even where self-report lags | Also used to generate the Capture Profile designation
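The hypothesized weighting scheme can be sketched as a small combination function. This is a Python illustration of the table, not a validated scoring routine; in particular, when the self-report weight drops to 25% under high capture exposure, the table does not say where the freed 10% goes, so reallocating it to the behavioral module is an assumption flagged in the code.

```python
def dimension_score(self_report, behavioral, contextual, capture_exposure):
    """
    Combine the three standardized (0-100) inputs for one dimension using
    the hypothesized DAP weights (35/45/20). When contextual capture
    exposure exceeds 80, the self-report weight drops to 25% per the table;
    moving the freed weight to the behavioral module is an ASSUMPTION the
    paper does not specify.
    """
    if capture_exposure > 80:
        w_self, w_behav = 0.25, 0.55  # assumed reallocation of the freed 10%
    else:
        w_self, w_behav = 0.35, 0.45
    w_ctx = 0.20  # contextual weight held constant
    return w_self * self_report + w_behav * behavioral + w_ctx * contextual
```

As the paper notes, these weights are theoretical; any implementation would need recalibration against external criterion measures before use.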

Dimensional scores are not aggregated into a single composite for individual-use purposes. The full six-dimension profile is the primary output. Aggregation into a Cognitive Sovereignty Index score is available for population-level research applications.

VI. Score Interpretation

DAP scores are interpreted against three reference frames drawn from the preceding sections: the population distribution provided by the Cognitive Sovereignty Index (Section II, requirement 5), the individual's own baseline across repeated administrations (Section II, requirement 6), and the contextual exposure profile used to calibrate self-report and behavioral data (Section III).

VII. Required Validation Studies

The DAP as specified here is a research proposal, not a validated instrument. Six validation studies are required before clinical or regulatory deployment:

  1. Internal consistency validation: Cronbach's alpha ≥ 0.75 for each dimensional subscale, across a demographically diverse sample of n ≥ 500.
  2. Convergent validity: Correlations between DAP dimensional scores and existing validated instruments for overlapping constructs (e.g., MAAS for attentional, DERS for emotional). Expected r ≥ 0.50 for relevant pairings.
  3. Discriminant validity: DAP dimensional scores should show lower correlations with conceptually distinct constructs than with convergent constructs. Cross-dimensional DAP correlations should be < within-dimension input-type correlations.
  4. Predictive validity: DAP scores at baseline should predict intervention response at follow-up in line with the mechanisms specified in Paper II and the practice prescriptions in Paper IV.
  5. Test-retest reliability: Intraclass correlation ≥ 0.80 across administrations 4 weeks apart in a stable-context control condition.
  6. Sensitivity to intervention: In a randomized controlled trial with interventions from the Dimensional Practice Guide (Paper IV), DAP scores should show significant change in the expected direction on targeted dimensions, without equivalent change on non-targeted dimensions.
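The internal-consistency criterion in study 1 is directly computable. The following is a minimal, dependency-free Python sketch of Cronbach's alpha for one dimensional subscale, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals); the function name and data layout are illustrative assumptions.

```python
def cronbach_alpha(item_scores):
    """
    Cronbach's alpha for one subscale. `item_scores` is a list of
    respondents, each a list of k item responses. Uses sample variance
    (denominator n - 1) throughout.
    """
    k = len(item_scores[0])  # items per respondent
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    # Variance of each item column across respondents.
    item_vars = [var([resp[i] for resp in item_scores]) for i in range(k)]
    # Variance of each respondent's total score.
    total_var = var([sum(resp) for resp in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Study 1's threshold would then read: `cronbach_alpha(subscale_data) >= 0.75` for each of the six subscales in a sample of at least 500 respondents.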
Counterpoint

The behavioral indicator module, as specified, presents significant standardization challenges. Lateral reading performance, theory of mind tasks, and belief-update tasks all require administration conditions that cannot be guaranteed outside clinical or research settings. A protocol that requires standardized administration cannot be a widely deployed individual-use tool. This is a genuine tension between diagnostic validity and accessibility that the DAP does not fully resolve.

VIII. What the Protocol Demands

The Dimensional Assessment Protocol cannot be validated by any single research group. It requires a coordinated research program spanning instrument refinement, the six validation studies specified in Section VII, and normative data collection across demographically diverse samples.

This is not a trivial research agenda. But the alternative — deploying restoration interventions (Paper IV) without knowing which dimensions need restoring — is the current state. The Assessment Void means that most wellness interventions for capture-related impairment are undirected. The DAP exists to replace undirected intervention with diagnostic precision.

References

Internal: This paper is part of The HEXAD Series (HX series), Saga V. It draws on and contributes to the argument documented across 20 papers in 5 series.

External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.