I. The Assessment Gap
The six HEXAD dimensions (specified in Paper I) and their degradation signatures (documented in Paper II) create a diagnostic framework. What the framework has lacked is an operationalized assessment instrument: a concrete protocol that tells a practitioner or individual how to measure where they actually stand on each dimension.
Existing cognitive assessment instruments are not adequate substitutes. Standardized cognitive testing measures underlying capacity rather than capture-induced impairment. Mental health screening instruments capture symptoms of conditions that may or may not correlate with cognitive sovereignty status. Media literacy assessments capture one dimension — perceptual — without attending to the others. No validated instrument has existed for the composite profile that the HEXAD framework specifies.
This paper proposes the Dimensional Assessment Protocol (DAP): a structured, multi-input assessment procedure for generating a dimensional profile across all six HEXAD dimensions. The DAP is not a finished product but a research proposal — an evidence-grounded specification for what a validated instrument would need to include, how it would be administered, and how its outputs would be interpreted.
II. Design Requirements
A valid Dimensional Assessment Protocol must satisfy six design requirements derived from the clinical assessment literature and the specific structure of the HEXAD framework:
- Dimensional specificity: The instrument must generate distinct scores for each of the six dimensions, not a single aggregate. Capture profiles differ across individuals; a single aggregate score would obscure the diagnostic information that motivates the dimensional framework.
- Multi-input architecture: Each dimension must be assessed through more than one input type. Self-report alone is insufficient: attentional degradation impairs the metacognitive accuracy required for accurate self-report; perceptual degradation distorts the referents against which self-assessments are made. Behavioral and contextual data must supplement self-report.
- Ecological validity: The protocol must assess dimensions as they operate in natural contexts, not only in controlled test environments. Attentional capacity in a laboratory may differ substantially from attentional sovereignty in an attention-capture environment.
- Low assessment burden: An instrument requiring four hours of clinical testing cannot be widely deployed. The DAP targets a 45–60 minute completion time for a full dimensional profile.
- Interpretive clarity: Scores must be interpretable against a population distribution and actionable — they must point toward specific interventions. The Cognitive Sovereignty Index (Measurement Reformation Paper II) provides the population reference framework.
- Repeated-measures capacity: The DAP must be suitable for repeated administration at 4–12 week intervals to track restoration progress. Instrument design must account for practice effects and establish test-retest reliability thresholds.
III. Instrument Structure: Three Input Types
For each of the six dimensions, the DAP combines three input types:
Input Type 1: Self-Report Subscale
A validated self-report subscale for each dimension, drawing on or adapting existing validated instruments where available (e.g., the Mindful Attention Awareness Scale (MAAS) for attentional items, the News Media Literacy subscale for perceptual items, the Need for Cognition Scale for reasoning items, the Difficulties in Emotion Regulation Scale (DERS) for emotional items, the Interpersonal Reactivity Index for social cognitive items, the Actively Open-minded Thinking Scale for epistemic items). Where existing instruments require adaptation, the DAP specifies the adaptation rationale and the required validation procedures.
Self-report subscales use a 5-point Likert format with 6–8 items per dimension, yielding a 36–48 item self-report module.
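As a concrete illustration of this module structure, the sketch below shows one way the item bank could be represented, assuming Python and hypothetical names (`Dimension`, `LikertItem`, `validate_item_bank`); only the six dimensions and the 6–8 items-per-dimension constraint come from the specification above.

```python
from dataclasses import dataclass
from enum import Enum

class Dimension(Enum):
    ATTENTIONAL = "attentional"
    PERCEPTUAL = "perceptual"
    REASONING = "reasoning"
    EMOTIONAL = "emotional"
    SOCIAL_COGNITIVE = "social_cognitive"
    EPISTEMIC = "epistemic"

@dataclass(frozen=True)
class LikertItem:
    item_id: str
    dimension: Dimension
    text: str
    reverse_scored: bool = False  # assumption: some items are reverse-keyed

def validate_item_bank(items: list[LikertItem]) -> None:
    """Enforce the 6-8 items-per-dimension spec (36-48 items in total)."""
    for dim in Dimension:
        n = sum(1 for item in items if item.dimension == dim)
        if not 6 <= n <= 8:
            raise ValueError(f"{dim.value}: {n} items; the spec requires 6-8")
```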
Input Type 2: Behavioral Indicator Module
A structured behavioral observation or task-based module for each dimension. Behavioral indicators are less susceptible to the distortions that make self-report alone insufficient for degraded cognitive states:
- Attentional: Sustained attention task (e.g., continuous performance task variant), self-report consistency across attention-demanding vs. low-demand conditions
- Perceptual: Lateral reading performance task (given a set of claims and online access, how effectively does the assessable identify source credibility and bias?)
- Reasoning: Cognitive reflection test (3-item or 7-item) + a short argument evaluation task
- Emotional: Affect regulation self-monitoring log (3-day brief ecological momentary assessment) + response latency on emotion recognition task
- Social Cognitive: Theory of mind inference task (e.g., Reading the Mind in the Eyes, adapted) + self-report of weekly in-person vs. parasocial contact hours
- Epistemic: Actively Open-minded Thinking task + belief update task (given new evidence on a prior belief, how does the assessable update?)
Input Type 3: Contextual Exposure Profile
A structured intake of the assessable's current capture exposure context: primary platforms, average daily use hours, notification volume, sleep timing and quality, primary information sources. The contextual profile does not directly produce dimension scores but provides essential calibration for interpreting behavioral and self-report data, and serves as the primary input for the Capture Profile designation (Paper II).
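A minimal sketch of the intake structure, assuming Python; the field names and encodings are assumptions, since the protocol names the intake categories but not their representation.

```python
from dataclasses import dataclass, field

@dataclass
class ContextualExposureProfile:
    """Intake fields named in the text; encodings are assumptions."""
    primary_platforms: list[str] = field(default_factory=list)
    average_daily_use_hours: float = 0.0
    daily_notification_volume: int = 0
    typical_sleep_onset: str = ""          # e.g. "23:30"
    sleep_quality_rating: int = 0          # assumed 1-10 self-rating
    primary_information_sources: list[str] = field(default_factory=list)
    # Note: this profile calibrates interpretation and feeds the Capture
    # Profile designation; it does not directly produce dimension scores.
```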
IV. Per-Dimension Assessment Specifications
The illustrative self-report items below indicate the content each dimensional subscale is intended to cover. Item formats are provisional; the validated instrument would standardize response scales to the 5-point format specified in Section III. A sketch of how item responses convert to subscale scores follows the item lists.
Attentional Dimension
- On a typical day, how often do you check your phone without deciding to? (Never / Rarely / Sometimes / Often / Very often)
- When you sit down to focus on a single task, how long can you typically sustain focus before an urge to check email, social media, or messages arises?
- How often do you begin a task intending to work for 30+ minutes and find yourself having switched to something else within 10 minutes?
- Do you find it difficult to read a book or long article without the impulse to check your phone or open another tab?
- How many times per hour do notifications interrupt your work or leisure time on a typical day?
- Rate your overall ability to direct and sustain your own attention on a scale of 1–10.
Perceptual Dimension
- When you encounter a news story or claim online, do you typically verify it with an additional source before accepting it?
- How often do you actively seek information that contradicts your existing beliefs?
- Can you name the primary owners and editorial orientations of your three most-used information sources?
- Do you believe the content you see in your social media feed is representative of what is actually happening in the world?
- How confident are you in your ability to distinguish reliable from unreliable online sources?
- In the past month, have you changed your view on any significant topic based on new information?
Reasoning Dimension
- How often do you find yourself making decisions impulsively that you later regret?
- When you encounter a compelling argument you initially agree with, do you look for counterarguments before accepting it?
- How would you rate your typical quality of sleep on a scale of 1–10?
- On days when you feel emotionally activated (angry, anxious, excited), do you notice any difference in the quality of your reasoning?
- How often do you feel cognitively overwhelmed by the volume of information you receive?
- How comfortable are you with remaining uncertain about a question until you have sufficient evidence to form a view?
Emotional Dimension
- How often does seeing content online (posts, news, comments) change your emotional state in a way that persists for hours?
- When you are in a negative emotional state, how long does it typically take you to return to baseline without external stimulation (scrolling, seeking validation, etc.)?
- How much does the number of likes or positive responses your posts receive affect how you feel about yourself on that day?
- Do you use social media or phone use as a primary way to manage negative emotions (boredom, loneliness, anxiety)?
- How often do you experience outrage or moral indignation in response to online content?
- Rate your overall capacity to regulate your emotional state without external stimulation on a scale of 1–10.
Social Cognitive Dimension
- In a typical week, how many hours do you spend in reciprocal in-person social interaction vs. consuming parasocial content (podcasts, streams, influencer content)?
- When you disagree with someone online, do you typically try to understand their reasoning before responding?
- Do you believe that people who hold views very different from yours are genuinely trying to act on their values, even if you think their values are wrong?
- How often do your online interactions involve people you have never met and will never meet in person?
- How comfortable are you with the ambiguity and unpredictability of other people's mental states?
- Have online interactions made you more or less trusting of strangers compared to five years ago?
Epistemic Dimension
- How often do you form or revise opinions primarily because you saw that many people around you held them?
- Can you articulate the primary evidence for at least two of your most important political or social beliefs?
- How often do you read primary sources (research papers, official documents, original speeches) rather than summaries or commentary?
- When a trusted figure (expert, community leader, media personality) states something confidently, how likely are you to accept it without further verification?
- How comfortable are you arriving at a conclusion that differs from the consensus in your social circle?
- Rate your confidence in your capacity for genuinely independent thought on a scale of 1–10.
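Before the weighting described in the next section can be applied, each self-report subscale must be reduced to a single 0–100 score. The sketch below shows one plausible reduction, assuming equal item weights, 1–5 Likert coding, reverse-keying by reflection, and linear rescaling; none of these choices is fixed by the protocol, and the actual standardization (for example, norm-referenced scaling) would be settled during validation.

```python
def likert_subscale_score(responses: list[int],
                          reverse_keyed: list[bool]) -> float:
    """Convert 5-point Likert responses (coded 1-5) to a 0-100 score.

    Assumptions, not protocol specifications: equal item weights, reverse
    items reflected as (6 - x), and the 1-5 item mean rescaled linearly
    so that 1 -> 0 and 5 -> 100.
    """
    if len(responses) != len(reverse_keyed):
        raise ValueError("one reverse-key flag is required per response")
    keyed = [(6 - r) if rev else r for r, rev in zip(responses, reverse_keyed)]
    mean = sum(keyed) / len(keyed)
    return (mean - 1.0) / 4.0 * 100.0
```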
V. Scoring Architecture
Raw scores from each input type are converted to standardized scores (0–100 range) and combined using the weighting structure derived from the Cognitive Sovereignty Index framework (Measurement Reformation Paper II). Note: these weights are hypothesized on theoretical grounds, not empirically calibrated, and require validation against external criterion measures before clinical use.
| Input Type | Weight | Rationale | Notes |
|---|---|---|---|
| Self-Report Subscale | 35% | Captures experienced impairment; reduced weight due to metacognitive bias risk in degraded states | Weight may be reduced to 25% if the assessable's contextual capture exposure score exceeds 80 |
| Behavioral Indicator Module | 45% | More resistant to metacognitive distortion; primary source of diagnostic validity | Requires standardized administration conditions |
| Contextual Exposure Profile | 20% | Calibration and contextual adjustment; exposure predicts degradation even where self-report lags | Used also to generate Capture Profile designation |
Dimensional scores are not aggregated into a single composite for individual-use purposes. The full six-dimension profile is the primary output. Aggregation into a Cognitive Sovereignty Index score is available for population-level research applications.
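The weighted combination above transcribes compactly into code. The sketch below assumes Python and hypothetical function names; the one genuinely unspecified step, flagged in a comment, is where the 10% removed from self-report goes when the capture-exposure adjustment applies.

```python
# Hypothesized input-type weights from the table above (not calibrated).
BASE_WEIGHTS = {"self_report": 0.35, "behavioral": 0.45, "contextual": 0.20}

def dimension_score(self_report: float, behavioral: float, contextual: float,
                    capture_exposure: float) -> float:
    """Combine three standardized (0-100) inputs into one dimensional score."""
    weights = dict(BASE_WEIGHTS)
    if capture_exposure > 80:
        weights["self_report"] = 0.25
        # Assumption: the protocol does not say where the freed 10% goes;
        # here it is reallocated to the behavioral module.
        weights["behavioral"] = 0.55
    return (weights["self_report"] * self_report
            + weights["behavioral"] * behavioral
            + weights["contextual"] * contextual)

def dap_profile(inputs: dict[str, tuple[float, float, float]],
                capture_exposure: float) -> dict[str, float]:
    """Return the six-dimension profile; no individual composite is formed."""
    return {dim: dimension_score(sr, beh, ctx, capture_exposure)
            for dim, (sr, beh, ctx) in inputs.items()}
```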
VI. Score Interpretation
DAP scores are interpreted against three reference frames:
- Population norms: Where does this score fall in the population distribution? (This requires the population validation studies specified in Section VII.) Proposed interpretive ranges, pending empirical validation: scores below 40 would indicate significant degradation; scores of 40–70 would indicate partial capture; scores above 70 would indicate relatively intact sovereignty on the dimension. These thresholds are stipulated by analogy with clinical instruments and require calibration against validated outcomes.
- Intra-individual profile shape: Which dimensions are most and least impaired? A flat profile (all dimensions similarly impaired) suggests a different capture environment than a peaked profile (one or two dimensions severely impaired, others intact). The Capture Profile (Paper II) provides the interpretive framework for profile shape.
- Change over time: For repeated-measures use, a clinically significant change is provisionally defined as a score shift of ≥8 points on any dimension between administrations at least 4 weeks apart. This threshold is proposed by analogy with benchmarks in the PROMIS system and requires empirical validation. Both criteria are applied in the sketch following this list.
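Both thresholds transcribe directly into code; the thresholds below are taken from the text, while the function names are hypothetical.

```python
def interpret_score(score: float) -> str:
    """Map a 0-100 dimensional score to the provisional interpretive ranges."""
    if score < 40:
        return "significant degradation"
    if score <= 70:
        return "partial capture"
    return "relatively intact sovereignty"

def clinically_significant_change(before: float, after: float,
                                  weeks_apart: int) -> bool:
    """Proposed criterion: a shift of >= 8 points between administrations
    at least 4 weeks apart (threshold pending empirical validation)."""
    return weeks_apart >= 4 and abs(after - before) >= 8
```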
VII. Required Validation Studies
The DAP as specified here is a research proposal, not a validated instrument. Six validation studies are required before clinical or regulatory deployment:
- Internal consistency validation: Cronbach's alpha ≥ 0.75 for each dimensional subscale, across a demographically diverse sample of n ≥ 500 (a computation sketch follows this list).
- Convergent validity: Correlations between DAP dimensional scores and existing validated instruments for overlapping constructs (e.g., MAAS for attentional, DERS for emotional). Expected r ≥ 0.50 for relevant pairings.
- Discriminant validity: DAP dimensional scores should correlate more weakly with conceptually distinct constructs than with convergent constructs, and cross-dimensional DAP correlations should be lower than within-dimension input-type correlations.
- Predictive validity: DAP scores at baseline should predict intervention response at follow-up in line with the mechanisms specified in Paper II and the practice prescriptions in Paper IV.
- Test-retest reliability: Intraclass correlation ≥ 0.80 across administrations 4 weeks apart in a stable-context control condition.
- Sensitivity to intervention: In a randomized controlled trial with interventions from the Dimensional Practice Guide (Paper IV), DAP scores should show significant change in the expected direction on targeted dimensions, without equivalent change on non-targeted dimensions.
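To make the internal-consistency criterion concrete, here is a minimal sketch of the standard Cronbach's alpha computation for one dimensional subscale; the formula is the standard one, while the function name and data layout are assumptions.

```python
def cronbach_alpha(item_scores: list[list[float]]) -> float:
    """Cronbach's alpha: item_scores[i][j] is respondent i's score on item j.

    Standard formula: alpha = k/(k-1) * (1 - sum(item variances) /
    variance of total scores), using sample (n-1) variances.
    """
    n, k = len(item_scores), len(item_scores[0])
    if n < 2 or k < 2:
        raise ValueError("need at least 2 respondents and 2 items")

    def variance(xs: list[float]) -> float:
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[j] for row in item_scores]) for j in range(k)]
    totals = [sum(row) for row in item_scores]
    return k / (k - 1) * (1 - sum(item_vars) / variance(totals))

# Validation criterion: alpha >= 0.75 per subscale, sample n >= 500.
```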
The behavioral indicator module, as specified, presents significant standardization challenges. Lateral reading performance, theory of mind tasks, and belief-update tasks all require administration conditions that cannot be guaranteed outside clinical or research settings. A protocol that requires standardized administration cannot be a widely deployed individual-use tool. This is a genuine tension between diagnostic validity and accessibility that the DAP does not fully resolve.
VIII. What the Protocol Demands
The Dimensional Assessment Protocol cannot be validated by any single research group. It requires a coordinated research program with the following components:
- A multi-site validation consortium spanning at least three countries and five languages
- Funding for longitudinal data collection across a minimum two-year period
- Open data commitments enabling independent replication of validity findings
- Platform cooperation for behavioral data integration, or regulatory mandates requiring such cooperation
- A digital infrastructure for standardized behavioral task administration that is accessible, private, and free at point of use
This is not a trivial research agenda. But the alternative — deploying restoration interventions (Paper IV) without knowing which dimensions need restoring — is the current state. The assessment gap identified in Section I means that most wellness interventions for capture-related impairment are undirected. The DAP exists to replace undirected intervention with diagnostic precision.