Engineered Incompetence — Paper 3 of 3

The Unmeasurable Made Invisible

How Reductionist Neuroscience Methodology Structurally Excludes Consciousness as a Research Object

CSI-2026-EI-003 · January 15, 2026 · 35 min read · Consciousness Research
30 years: Neural Correlates of Consciousness research
1995: Chalmers identified the gap — it has not closed
0: instances of explanatory gap closure despite massive investment

Abstract

Consciousness research faces a problem that no amount of additional funding, better imaging technology, or larger participant samples will resolve: the primary instrument of mainstream neuroscience — third-person, reductionist measurement of neural correlates — was designed to study the material substrate of consciousness, not consciousness itself. The hard problem of consciousness (why physical processes produce subjective experience at all) is not a question the instrument can answer, because subjective experience is definitionally inaccessible to third-person measurement. Over thirty years of Neural Correlates of Consciousness research has produced a detailed map of what happens in the brain when conscious states occur, while leaving the explanatory gap between physical process and subjective experience exactly where Chalmers found it in 1995. The field has responded not by developing new instruments but by redefining success: finding correlates has been institutionally elevated to explaining consciousness, and researchers who challenge this redefinition find their funding and publication pathways constrained. This paper documents the instrument mismatch, demonstrates that the explanatory gap has not closed despite massive investment, and proposes a research architecture that treats first-person data as scientifically primary rather than methodologically embarrassing.

I The Problem That Doesn't Go Away

In 1995, philosopher David Chalmers published 'Facing Up to the Problem of Consciousness' in the Journal of Consciousness Studies. He distinguished between what he called the 'easy problems' and the 'hard problem.' The easy problems — explaining how the brain integrates information, discriminates stimuli, reports mental states, controls behavior — are not actually easy. They will require decades of careful research. But they are tractable in principle: they are questions about how physical systems perform functions, and functional explanations are what science does.

The hard problem is different. It asks: why is there subjective experience at all? Why, when light of a certain wavelength hits your retina and triggers a cascade of neural firing, is there something it is like to see red? The neural cascade can be described in full mechanistic detail. What cannot be explained is why that mechanism is accompanied by experience — why there is an 'inside' to the process at all, rather than simply the physical events occurring in the dark.

Chalmers' paper was published more than thirty years ago. In the intervening decades, neuroscience has made extraordinary progress. We can image brain activity at submillimeter resolution. We can record from thousands of neurons simultaneously. We can identify the neural signatures of specific conscious states with remarkable precision. We know more about the neural substrate of consciousness than any previous generation of scientists.

KEY TERMS

The 'hard problem of consciousness' (Chalmers, 1995): why does any physical process produce subjective experience — something it is like to undergo — rather than simply executing its function in the absence of inner experience?

'Neural correlates of consciousness' (NCCs): the neural activity patterns that reliably accompany particular conscious states.

The 'explanatory gap' (Levine, 1983): the conceptual distance between a complete physical description of a process and an explanation of why that process is accompanied by subjective experience rather than none.

These terms are used with their original technical precision throughout this paper.

The hard problem remains exactly where it was in 1995. Not approximately where it was — exactly. No mechanism has been proposed that explains why any physical process, however complex, produces subjective experience rather than simply executing its function in the absence of experience. The explanatory gap has not narrowed. The instrument has produced everything it was designed to produce. The phenomenon being sought is something else.

II The Instrument: What Third-Person Neuroscience Can and Cannot Do

2.1 The Design Logic of Reductionist Neuroscience

Third-person neuroscience is built on a foundational methodological commitment: that the phenomena of mind can be explained by identifying their physical substrates and the causal mechanisms relating them. This is not merely a practical choice — it reflects a philosophical position (physicalism, or materialism) that most practicing neuroscientists hold implicitly or explicitly. The position is that consciousness is, at bottom, a physical process, and that explaining the physical process will eventually constitute explaining consciousness.

The instrument that follows from this commitment is measurement of physical variables from an external, third-person perspective: fMRI measures blood oxygenation as a proxy for neural activity; EEG measures scalp electrical potentials; single-cell recording measures individual neuron firing; lesion studies identify brain regions necessary for specific functions. These tools are exquisitely suited to their purpose. They have transformed our understanding of the neural implementation of cognition, perception, memory, and behavior.

What they cannot do, by design, is measure subjective experience from the inside. fMRI cannot tell you what it is like to see red. It can tell you which regions are active when a subject reports seeing red. EEG cannot measure the quality of pain — it can measure the neural pattern correlated with pain reports. The instrument measures physical correlates. The hard problem asks about the subjective correlate. These are different questions.

2.2 The Redefinition of Success

Faced with this structural limitation, mainstream neuroscience has not developed alternative instruments. It has redefined success. The research program called Neural Correlates of Consciousness (NCC) — now over thirty years old and the dominant paradigm in consciousness research — has institutionally elevated the identification of neural correlates to a proxy for explaining consciousness itself.

THE REDEFINITION

Consciousness research used to ask: why does subjective experience exist? The NCC program asks: what neural patterns are associated with conscious states? These are different questions. The second is answerable with the existing instrument. The first is not. The institutional response has been to treat progress on the second question as progress on the first — to declare that mapping correlates is explaining consciousness, rather than describing its neural shadow.

This redefinition is not always explicit. Individual researchers conducting fMRI studies of visual consciousness are not consciously evading the hard problem. But the institutional incentive structure — which funds research that produces publishable results and defunds research that does not — has shaped the field toward questions the instrument can answer and away from questions it cannot. The hard problem has not been solved. It has been administratively demoted.

2.3 What 'Progress' Has Actually Produced

The NCC program has produced genuine knowledge. We know that consciousness requires activity in a distributed network including the prefrontal cortex, parietal cortex, and thalamus. We know that specific conscious contents — the perception of a face, the recognition of a word, the experience of pain — have identifiable neural signatures. We know that consciousness can be disrupted by targeted interventions in specific brain regions. This is real science producing real knowledge about the neural substrate.

What it has not produced, after three decades and billions of dollars of investment:

Any mechanistic account of why neural activity produces subjective experience

Any explanation of why the particular neural patterns associated with consciousness produce the particular qualities of experience they do

Any resolution of whether a system with the same neural correlate patterns but different physical substrate (silicon, for example) would have subjective experience

Any agreement between the field's two leading theoretical frameworks — Global Workspace Theory and Integrated Information Theory — which make mutually incompatible predictions and cannot both be correct

Any empirical test that could distinguish between 'consciousness requires the specific biological implementation' and 'consciousness is substrate-independent' — a question with enormous implications that the instrument cannot adjudicate

The list of things neuroscience cannot tell us about consciousness is not getting shorter. It is getting longer as the questions become more precise. This is the signature of instrument mismatch, not of a problem requiring more time and funding.

III Two Theories, One Problem: The Framework Wars

3.1 Global Workspace Theory

Global Workspace Theory (GWT), developed by Bernard Baars and extended computationally by Stanislas Dehaene and colleagues, proposes that consciousness arises when information is 'broadcast' widely across the brain via a global workspace — a neural architecture that makes information available to multiple cognitive systems simultaneously. Unconscious processing occurs in modular, encapsulated systems. Conscious experience occurs when information enters the global workspace and becomes globally accessible.

GWT makes specific empirical predictions: conscious stimuli should produce widespread, late-latency neural activity (the 'ignition' phenomenon); unconscious stimuli should produce early, localized activity. These predictions have been broadly supported by EEG and fMRI studies. GWT is testable with third-person instruments and has accumulated substantial empirical support within the NCC program.

3.2 Integrated Information Theory

Integrated Information Theory (IIT), developed by Giulio Tononi and championed by Christof Koch, proposes that consciousness is identical to integrated information — a quantity called phi (Φ) that measures how much a system's elements are causally integrated beyond the sum of their independent contributions. High phi means high consciousness. The theory predicts that consciousness is substrate-independent: any system with sufficient integrated information is conscious, regardless of whether it is biological.
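IIT's Φ is defined over a system's full cause-effect structure and requires a search over partitions, which is intractable for all but tiny systems. As a much simpler stand-in, the sketch below computes total correlation (multi-information), a basic measure of how far a joint distribution departs from independence. This is not Φ and makes no IIT-specific claims; it only makes concrete what 'causally integrated beyond the sum of independent contributions' means at the level of a probability table. The distribution is illustrative.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a probability table given as a dict."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(joint, idx):
    """Marginal distribution of unit `idx` from a joint table."""
    m = {}
    for state, prob in joint.items():
        m[state[idx]] = m.get(state[idx], 0.0) + prob
    return m

# Toy joint distribution over two binary units that tend to agree.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Total correlation: sum of marginal entropies minus joint entropy.
# Zero iff the units are statistically independent; positive when the
# whole carries structure the parts do not account for separately.
tc = sum(entropy(marginal(joint, i)) for i in range(2)) - entropy(joint)
print(round(tc, 3))  # → 0.278
```

For a fully independent joint distribution the same computation returns zero, which is the sense in which the measure isolates integration rather than mere activity.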

IIT makes predictions that are in direct conflict with GWT. IIT predicts that the posterior cortex (which has high integration) is the primary seat of consciousness, not the prefrontal cortex (which GWT emphasizes). IIT predicts that certain feedforward networks — which have high information but low integration — are not conscious. IIT implies that some simple systems may be conscious while some complex computers may not be.

3.3 The Adversarial Collaboration and Its Result

In 2019, the Allen Institute for Brain Science and collaborators launched an adversarial collaboration between GWT and IIT proponents — a preregistered study designed to test the theories' conflicting predictions about where in the brain consciousness originates. Its first results, presented in 2023 and published in Nature in 2025 after years of data collection, came from one of the largest and most carefully designed consciousness experiments ever conducted.

The result: both theories received partial support and partial disconfirmation. Neither was clearly vindicated. The study's authors concluded that the question remained open. Notably, both Dehaene (GWT) and Koch (IIT) disputed the interpretation of the results as challenging their respective theories.

WHAT THE ADVERSARIAL COLLABORATION ACTUALLY DEMONSTRATED

The two leading theoretical frameworks in consciousness research, after more than thirty years of development, a preregistered adversarial collaboration, and the most sophisticated neuroimaging data ever collected, cannot be adjudicated by the instrument being used to test them. They disagree on where in the brain consciousness lives, and the instrument — which measures neural activity — cannot definitively say which localization is correct. The theories are not converging. They are diverging. In a healthy field with the right instrument, more data resolves disputes. In a field with the wrong instrument, more data produces more sophisticated disagreement.

3.4 Why Neither Theory Solves the Hard Problem

There is a more fundamental issue than the empirical disagreement between GWT and IIT. Neither theory, even if empirically confirmed, would solve the hard problem. GWT explains which neural processes are associated with consciousness. It does not explain why global broadcast produces experience rather than simply making information widely available without anyone experiencing it. IIT explains which systems have high integrated information. It does not explain why high phi is accompanied by subjective experience rather than simply by complex information processing.

David Chalmers — whose formulation of the hard problem triggered the NCC program — has made this point repeatedly and has not been answered. The theories are sophisticated accounts of the neural and computational correlates of consciousness. They are not accounts of consciousness itself. The instrument used to develop and test them is a correlate-finder. It cannot find the thing the correlates are correlates of.

IV The Funding Architecture: How the Instrument Locks In

4.1 Where the Money Goes

The NIH's National Institute of Neurological Disorders and Stroke (NINDS) and National Institute of Mental Health (NIMH) together fund the majority of neuroscience research in the United States. Neither institute has a dedicated consciousness research program. Consciousness research is funded under other categories — basic neuroscience, cognitive neuroscience, perception — and must justify itself using the instruments and metrics those categories reward. The ratio of consciousness-specific funding to total neuroscience funding is difficult to calculate because no NIH budget line uses that category — which is itself diagnostic: the absence of a budget line reflects the field's institutional posture as much as any figure would.

The practical effect: consciousness research that produces neural correlate data (publishable in high-impact journals, citable, replicable with the existing instrument) receives funding. Consciousness research that proposes first-person methodologies, develops phenomenological frameworks, or challenges the NCC program's assumptions does not produce the same type of output and does not receive the same funding.

Research Type and Funding Pathway

fMRI study of visual consciousness correlates: NIH R01 basic neuroscience; high-impact-factor journals; standard review panels

EEG study of attention and consciousness markers: NIH R01 cognitive neuroscience; standard pathway

Neurophenomenology (first-person + third-person integrated methods): no dedicated NIH program; must justify itself to panels trained in standard methods; lower funding success

IIT phi calculation research: fundable but controversial; requires computational framing to pass review

Hard problem theoretical research (philosophy of mind): not fundable by NIH; requires philosophy funding, which is orders of magnitude smaller

Psychedelic-assisted consciousness research: Schedule I barriers; specialized DEA licenses; extremely limited site availability
The funding architecture does not merely reflect a preference for certain types of research. It actively shapes what counts as consciousness research in the first place. Researchers who want careers in consciousness science must produce fMRI data. The instrument is not just a tool — it is the credential.

4.2 The Publication Incentive Structure

High-impact neuroscience journals — Nature Neuroscience, Neuron, Current Biology — evaluate papers on methodological rigor, novelty of findings, and sample size. These criteria systematically favor neural correlate studies (which produce novel, visually compelling brain images with large effect sizes) over phenomenological or theoretical work (which produces arguments and frameworks that cannot be visualized as activation maps).

Researchers are evaluated for tenure, grants, and institutional standing primarily on publication record in high-impact journals. A researcher who develops a sophisticated theoretical framework for consciousness that cannot produce fMRI data is disadvantaged relative to a researcher who runs the same correlate study with a larger sample. The incentive structure rewards instrument use, not insight.

The result is a field where the most important questions — why does subjective experience exist? what distinguishes conscious from unconscious processing in a way that explains rather than merely describes? — are effectively delegated to philosophy departments, which have a fraction of the funding and none of the institutional prestige. The questions are not considered unanswerable. They are considered unfundable. The practical effect is the same.

V The Psychedelic Window: When the Instrument Accidentally Opened

One of the most significant developments in consciousness research in the past two decades has come from an unexpected direction: psychedelic neuroscience. The revival of MDMA and psilocybin research (documented in Paper 2 of this series) has produced neuroimaging data that the existing instrument can measure but cannot explain — data that points directly at the instrument's limitations.

Psilocybin studies at Imperial College London (Carhart-Harris et al., 2012–2023) have produced a consistent and striking finding: psilocybin dramatically increases the complexity and integration of brain activity as measured by fMRI and EEG, while simultaneously producing profound alterations in subjective experience — ego dissolution, mystical-type experiences, heightened emotional intensity, and, in therapeutic contexts, lasting personality changes and reduced depression.

The neural correlates are clear: neural complexity increases, the default mode network (the brain's self-referential 'resting state' network, associated with the constructed self) is disrupted, and cross-network connectivity increases. The instrument can measure all of this. What it cannot explain is why these specific patterns produce the specific qualities of experience they produce — why disrupting the default mode network feels like the dissolution of the self rather than simply changes in self-referential processing, or why increased neural entropy is accompanied by reported mystical experiences rather than simply by more variable information processing.
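The 'complexity' in these studies is typically quantified with Lempel-Ziv measures applied to binarized neural signals. The sketch below uses a simplified LZ78-style phrase count rather than the LZ76 variant used in the published analyses, and it runs on toy strings rather than EEG data; it only illustrates the underlying idea that less predictable signals parse into more distinct phrases and are therefore less compressible.

```python
def lz_complexity(bits):
    """Greedy LZ78-style parse: count the distinct phrases needed to
    cover a binary string. Less predictable input -> more phrases."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""  # phrase complete; start the next one
    return len(phrases) + (1 if current else 0)

# A constant signal is maximally compressible...
print(lz_complexity("0" * 64))   # → 11
# ...a strictly periodic one needs only a few more phrases...
print(lz_complexity("01" * 32))  # → 15
# ...and a random string of the same length needs more still.
```

In empirical use, each channel's time series is first binarized (for example, around its mean amplitude) and the phrase count is normalized by signal length before comparing conditions.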

THE PSYCHEDELIC DATA AS INSTRUMENT PROBE

Psychedelic states represent the most dramatic naturally producible alterations of consciousness available for study. They are pharmacologically precise (specific receptor targets), reproducible, dose-dependent, and accompanied by rich, detailed first-person reports. The gap between the neural correlate data and the quality of experience reported in these states is wider and more precise than in ordinary consciousness research. This makes psychedelic data not just therapeutically interesting but methodologically valuable: it is a high-contrast demonstration of exactly where the instrument's explanatory power ends and the hard problem begins.

VI Devil's Advocate: The Case for the NCC Program

SERIES STANDARD

Every paper in the Engineered Incompetence series is required to present the strongest possible opposing argument and engage it seriously before responding. The arguments below are the ones this paper's thesis must actually defeat. The Institute's publishing rationale — why these papers are not submitted to mainstream peer review — is documented in Paper 1, Section 6.2 of this series.

6.1 The Emergence Argument: Give It Time

The strongest defense of the NCC program is straightforward: consciousness is an emergent property of sufficiently complex neural computation, and we simply have not yet identified the full computational architecture. The history of science is full of phenomena that seemed mysterious until the right level of analysis was identified. Vitalism — the belief that living organisms require a non-physical 'life force' — seemed compelling until biochemistry explained the mechanisms of metabolism, reproduction, and development. The apparent mystery of life dissolved when the right instrument (molecular biology) was applied at the right level of description.

On this view, the hard problem is a temporary philosophical puzzle that will dissolve when neuroscience identifies the computational principles that give rise to consciousness. The explanatory gap feels unbridgeable now because we are in the early stages of understanding the relevant mechanisms. Calling for alternative instruments is premature — it risks abandoning the program just as it approaches its breakthrough.

This is the argument that consciousness researchers themselves most frequently deploy, and it deserves the most serious engagement.

Response to the Emergence Argument

The vitalism analogy is the most commonly offered defense of the NCC program and it is instructive precisely because it fails in a specific, diagnosable way. Vitalism dissolved because biochemistry provided a mechanistic account of how molecular processes produce the functions of life — metabolism, reproduction, homeostasis. The explanatory gap closed because the 'life force' was a placeholder for mechanisms we didn't yet understand, and when we understood the mechanisms, the placeholder became unnecessary.

The hard problem of consciousness is structurally different. The gap is not between 'we don't understand the mechanism' and 'we do understand the mechanism.' The gap is between 'we understand the mechanism fully' and 'we still don't understand why the mechanism is accompanied by subjective experience.' Chalmers' point — made precisely and not yet answered — is that even a complete functional and computational account of all neural processes would leave the question open: why is any of this accompanied by experience?

Daniel Dennett's response — that subjective experience is an illusion, that there is no 'hard' problem because there is nothing beyond the functional explanation — is the only genuine dissolution of the hard problem available within the reductionist framework. Keith Frankish's 'illusionism' develops this position most rigorously, arguing that phenomenal consciousness as we conceive it does not exist and that the hard problem dissolves once we stop mistaking a cognitive representation for a metaphysical fact. But the position is, as critics have noted, an attempt to solve the problem by denying the phenomenon. If consciousness is an illusion, there remains the question of what it is like to have the illusion; the experience of illusion is still experience. Illusionism names the problem differently but does not make it go away — it relocates the explanatory gap from 'why does consciousness exist?' to 'why does the illusion of consciousness seem so vivid and real?' The regress continues.

The emergence argument has been available since 1995. After more than thirty years, the explanatory gap remains. At some point — and this paper argues we are there — the continued insistence that more time and more data will close the gap must be treated as a prediction that has failed, not a prediction that hasn't yet been tested.

6.2 The Methodological Rigor Argument: First-Person Data Is Unreliable

A second serious objection: the alternative methodologies proposed — phenomenology, neurophenomenology, first-person reports as primary data — are not scientifically reliable. Introspection is notoriously unreliable. People confabulate, misremember, rationalize, and are subject to demand effects. The entire edifice of psychological research built on self-report has proven less replicable than the field assumed. Introducing first-person data as scientifically primary is not an expansion of consciousness research — it is a regression to pre-scientific introspective psychology.

This is also a serious argument and one that requires careful handling.

Response to the Methodological Rigor Argument

The objection correctly identifies a real limitation of introspective self-report. Unaided introspection is unreliable in the ways the objection describes. The neurophenomenological program — associated primarily with Francisco Varela, Evan Thompson, and Eleanor Rosch — does not propose naive introspection as a primary data source. It proposes trained, disciplined first-person inquiry (drawing on phenomenological tradition) as a complement to third-person measurement, with the two streams used to mutually constrain and enrich each other.

The distinction matters. Asking a research participant 'what did you experience?' and accepting their answer uncritically is unreliable. Training participants in phenomenological attention, developing structured protocols for first-person inquiry, validating first-person reports against behavioral and physiological data, and building a systematic science of the structure of experience — as opposed to its content — is a different enterprise. It is difficult. It requires new methodological infrastructure. But its difficulty is not an argument for the adequacy of the existing instrument. It is an argument for investing in developing the new one.

The reliability of self-report data is also not uniformly poor. The psychophysics tradition has used carefully structured first-person reports to build a precise, reliable science of perception for over a century. Signal detection theory depends on first-person reports. Pain research, clinical psychology, and quality-of-life measurement all depend on first-person data and have developed validated instruments for collecting it. The objection proves too much: if first-person data is too unreliable to be scientifically useful, it is also too unreliable to serve as the outcome measure for SSRI trials, pain medication studies, and psychiatric diagnosis. The field selectively accepts first-person data when it fits the existing instrument and rejects it when it challenges the instrument's sufficiency.
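The psychophysics point can be made concrete. Signal detection theory turns two first-person report rates — hits and false alarms — into a sensitivity index, d', that separates genuine perceptual sensitivity from response bias. A minimal standard-library sketch (the rates are illustrative):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index: separation between the signal and noise
    distributions, in standard-deviation units."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# An observer who says "yes" on 85% of signal trials but only 20%
# of noise trials shows real sensitivity...
print(round(d_prime(0.85, 0.20), 2))  # → 1.88
# ...while matched hit and false-alarm rates mean zero sensitivity,
# whatever the observer's bias toward saying "yes".
print(round(d_prime(0.50, 0.50), 2))  # → 0.0
```

The design choice is the point: the raw reports are subjective, but the derived quantity is stable, replicable, and has anchored perceptual science for decades.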

VII The Alternative: A Research Architecture for Consciousness

7.1 Neurophenomenology as the Bridge Instrument

Francisco Varela's neurophenomenology proposal — developed in collaboration with Evan Thompson and Eleanor Rosch in 'The Embodied Mind' (1991) and extended in subsequent work — represents the most developed framework for consciousness research that takes first-person data seriously as scientific data. The core idea is that rigorous first-person inquiry and third-person measurement should be conducted simultaneously, with each stream used to generate hypotheses and constraints for the other.

In practice: participants are trained in phenomenological attention — not generic mindfulness, but structured protocols derived from the phenomenological tradition (Husserl, Merleau-Ponty) for attending to the structure of experience rather than its content. During neuroimaging, participants provide structured reports using validated phenomenological interview protocols. The first-person data is treated not as an outcome measure but as a source of hypotheses: if participants report a specific structure of temporal experience during a task, the neuroscience asks what neural dynamics could implement that structure.

This is not a rejection of third-person neuroscience. It is an expansion of its data sources. The third-person instrument remains. It is now paired with a first-person instrument that can access the dimension of reality the third-person instrument cannot reach.
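One way to make the mutual-constraint loop concrete: treat each imaging epoch as a paired observation, with a structured first-person rating on one side and a neural measure on the other, then ask whether the reported structure tracks the measurement. The `Trial` fields, the 0-10 rating scale, and all numbers below are hypothetical; the statistic is ordinary Spearman rank correlation, implemented here for the tie-free case only.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    # First-person stream: structured rating from a phenomenological
    # interview (hypothetical 0-10 "experienced integration" scale).
    reported_integration: float
    # Third-person stream: a neural measure from the same epoch
    # (hypothetical normalized cross-network connectivity).
    connectivity: float

def rank(values):
    """1-based ranks; ties are not handled (toy data has none)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def spearman(xs, ys):
    """Spearman rank correlation for tie-free data."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(xs), rank(ys)))
    return 1 - 6 * d2 / (n * (n * n - 1))

trials = [Trial(2, 0.21), Trial(7, 0.55), Trial(4, 0.30),
          Trial(9, 0.62), Trial(5, 0.41)]
rho = spearman([t.reported_integration for t in trials],
               [t.connectivity for t in trials])
print(rho)  # → 1.0 (toy data is perfectly monotone)
```

In the neurophenomenological framing, a strong correlation is not the finding itself but a generated hypothesis: it tells the third-person stream which neural dynamics to interrogate next.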

7.2 A Proposed Research Architecture

Research Layer: Method and Contribution

First-person inquiry
Method: trained phenomenological interview, experience sampling, micro-phenomenology protocol (Petitmengin)
Contribution: access to the structure of experience — temporal flow, spatial character, affective tone, degree of integration — data the third-person instrument cannot capture

Second-person interface
Method: intersubjective verification, dyadic phenomenological interview, contemplative traditions as data source (validated experiential reports from centuries of systematic introspective training)
Contribution: cross-validation of first-person reports; identifies shared structures vs. idiosyncratic content; builds the reproducibility infrastructure first-person data requires

Third-person measurement
Method: fMRI, EEG, single-cell recording, pharmacological probes (psychedelics as precision tools for consciousness alteration)
Contribution: neural implementation data; identifies correlates; provides biological constraints for theoretical frameworks; tests predictions generated by first-person inquiry

Theoretical integration
Method: neurophenomenology (Varela), Integrated Information Theory as bridge framework, predictive processing accounts (Friston), active inference framework
Contribution: builds mechanistic bridges between first-person structure and third-person implementation; generates falsifiable predictions across both streams

7.3 Funding Reform Requirements

The proposed architecture requires institutional change, not just methodological change. Specifically:

A dedicated NIH funding stream for consciousness research that does not require justification by neural correlate output — analogous to the dedicated funding streams for cancer, neurological disease, and psychiatric disorders

Interdisciplinary review panels that include philosophers of mind, phenomenologists, and contemplative science researchers alongside neuroscientists — the current review panels are composed entirely of researchers committed to the third-person instrument

Graduate training programs that teach phenomenological methodology alongside neuroscience methods — the current training pipeline produces only third-person instrument users

Publication pathways in high-impact journals for theoretical and phenomenological work — the current impact metrics systematically disadvantage non-imaging research

Expansion of psychedelic research infrastructure (Schedule II rescheduling, increased site licensing) given psychedelics' unique value as precision consciousness-alteration probes

VIII The Series Connection: Consciousness and the Instrument Capture Loop

Papers 1, 2, and 3 of this series have now documented the Engineered Incompetence pattern in three domains: particle physics, pharmaceutical regulatory science, and neuroscience. The pattern is consistent across all three.

Domain: Instrument and Phenomenon Missed

Particle Physics
Instrument: high-energy collision detector
Phenomenon missed: emergent properties of the cosmological vacuum state; fundamental constant variation

Psychiatric Pharmacology
Instrument: blinded single-compound RCT
Phenomenon missed: experience-dependent therapeutic mechanisms; set-and-setting effects; neuroplasticity windows

Consciousness Research
Instrument: third-person neural correlate measurement
Phenomenon missed: subjective experience itself; the explanatory relationship between physical processes and consciousness

In each case: the instrument was correct for its original domain. The institution grew around it. The theoretical frontier moved. The institution did not. The field redefined success to match the instrument's capabilities. The phenomenon being sought — gravitational-geometric signals, experiential therapy mechanisms, subjective experience — is not accessible to the instrument.

Consciousness research is the most philosophically fundamental case in the series. The instrument mismatch in physics costs money and delays discovery. The instrument mismatch in psychiatry costs lives through denied treatment. The instrument mismatch in consciousness research costs something harder to quantify: it has allowed the most important question in science — what is the nature of subjective experience? — to be administratively classified as a philosophical curiosity while the real science funds correlate-mapping that cannot answer it.

The meta-analysis that follows these position papers will name the structural mechanism that produces all three failures. This paper has established its third pillar.

IX Conclusion

More than thirty years after the hard problem was precisely formulated, the explanatory gap between neural processes and subjective experience has not closed. The primary instrument of consciousness research — third-person measurement of neural correlates — has produced genuine knowledge about the neural substrate while leaving the question it was supposed to answer exactly where it was. The field has responded by redefining the question: finding correlates has been elevated to explaining consciousness, and the hard problem has been administratively demoted from a central scientific challenge to a philosophical nuisance.

The instrument is not wrong — it is wrong for this. fMRI and EEG are powerful tools for understanding the neural implementation of cognition and the physical correlates of conscious states. They cannot answer why any physical process is accompanied by subjective experience, because subjective experience is definitionally inaccessible to third-person measurement. No additional resolution, no larger sample, no better imaging will change this. It is a categorical limitation, not a practical one.

The alternative exists. Neurophenomenology, contemplative science, first-person protocol development, and psychedelic neuroimaging represent a convergent set of methodological tools that can treat subjective experience as primary scientific data rather than noise to be controlled. Developing these tools requires institutional will: new funding streams, new review panels, new training pipelines, new publication standards. The cost is real. It is also vastly smaller than the cost of another generation of correlate-mapping that leaves the question unanswered.

The most important question in science deserves an instrument that can reach it.

References

  1. Chalmers, D.J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.
  2. Chalmers, D.J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
  3. Baars, B.J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press. [Global Workspace Theory]
  4. Dehaene, S., Changeux, J.P., & Naccache, L. (2011). The global neuronal workspace model of conscious access: From neuronal architectures to clinical applications. In S. Dehaene & Y. Christen (Eds.), Characterizing Consciousness: From Cognition to the Clinic? Springer.
  5. Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5, 42. [IIT]
  6. Koch, C., Massimini, M., Boly, M., & Tononi, G. (2016). Neural correlates of consciousness: Progress and problems. Nature Reviews Neuroscience, 17, 307–321.
  7. Doerig, A., Schurger, A., Hess, K., & Herzog, M.H. (2019). The unfolding argument: Why IIT and other causal structure theories cannot explain consciousness. Consciousness and Cognition, 72, 49–59.
  8. Cogitate Consortium; Melloni, L., et al. (2023). An adversarial collaboration to critically evaluate theories of consciousness. Nature, 621, 321–329. [Results paper. The preregistered protocol appeared in PLOS One (2021); the results were published in Nature. Both GWT and IIT received partial support and partial disconfirmation; the dispute remained unresolved.]
  9. Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450.
  10. Levine, J. (1983). Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly, 64(4), 354–361.
  11. Varela, F.J., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. MIT Press. [Neurophenomenology foundation]
  12. Varela, F.J. (1996). Neurophenomenology: A methodological remedy for the hard problem. Journal of Consciousness Studies, 3(4), 330–349.
  13. Thompson, E. (2007). Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Harvard University Press.
  14. Petitmengin, C. (2006). Describing one's subjective experience in the second person: An interview method for the science of consciousness. Phenomenology and the Cognitive Sciences, 5, 229–269.
  15. Carhart-Harris, R.L., et al. (2012). Neural correlates of the psychedelic state as determined by fMRI studies with psilocybin. PNAS, 109(6), 2138–2143.
  16. Carhart-Harris, R.L., & Friston, K.J. (2019). REBUS and the anarchic brain: Toward a unified model of the brain action of psychedelics. Pharmacological Reviews, 71(3), 316–344.
  17. Dennett, D.C. (1991). Consciousness Explained. Little, Brown. [The dismissal of the hard problem]
  18. Frankish, K. (2016). Illusionism as a theory of consciousness. Journal of Consciousness Studies, 23(11–12), 11–39.
  19. Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127–138.

The Institute for Cognitive Sovereignty

Engineered Incompetence Series | Paper 3 | February 2026

Uncomfortable but Rigorous