For every compliance theater mechanism and EPD pattern, there is a question the standard audit is structured not to ask. This paper maps those questions — and derives a methodology for generating them in any regulated domain.
Questions are omitted from standard audit frameworks through the same mechanism that produces compliance theater: the audit framework is designed within a process the regulated entity influences, and the entity's interest is in a framework that its compliance artifacts will satisfy. This is the Inspection Surface (CT-002) applied to the audit question set itself: the scope of what the audit asks is negotiated through the regulatory process, and the regulated entity participates in that negotiation with a strong interest in narrowing the scope to questions answerable by compliance artifacts rather than by operational outcomes.
The result is a Question Architecture with a specific shape: it asks whether required procedures are documented, whether required tests are performed and logged, whether required findings are within specified ranges. It does not ask whether the required procedures, if genuinely followed, would be sufficient to detect the failure modes most likely to cause harm; whether the required tests are designed with sufficient sensitivity to detect the concentrations of the regulated substance most likely to be present; or whether the specified ranges are set at levels that reflect the harm threshold rather than the detection capability of the testing methodology. The standard Question Architecture is calibrated to the compliance artifact, not to the harm.
For each compliance theater mechanism and EPD mechanism, the standard audit has a characteristic question that the mechanism is designed to satisfy, and the forensic audit has a different question that the mechanism is designed to make unanswerable. The pairs are derived directly from the mechanism analysis in prior series.

| CT Mechanism | Standard audit question (what the artifact satisfies) | Right question (what reaches behind the artifact) |
|---|---|---|
| Procedural Decoupling (CT-001) | Is there a documented procedure that specifies the required steps? | Has the procedure been empirically validated to produce the outcome it specifies — not only in controlled conditions, but in the operational context where it is actually performed? |
| The Inspection Surface (CT-002) | Has the entity complied with all applicable requirements within the defined inspection scope? | What does the entity know about harms occurring in domains explicitly excluded from the current inspection scope, and how does that knowledge compare to what would be found within the scope if the scope were extended? |
| Performed Compliance (CT-003) | Does the entity's quality management system produce the documentation required by the applicable standard? | Is the documentation produced by the quality management system generated by genuine operational outcomes, or by a process that generates compliant documentation independently of whether the underlying operation produced a compliant result? |

| EPD Mechanism | Standard audit question | Right question |
|---|---|---|
| Verification Gap (EPD-001) | Are all required tests being performed, and are the results within specification? | What tests are not being performed — and for each category of failure mode not tested for, what is the commercial or regulatory rationale for the exclusion, and who made that decision? |
| SOP Lacuna (EPD-002) | Does the current version of the SOP include all required elements? | What is the version history of this SOP — specifically, what steps were present in prior versions that are absent in the current version, and what was the documented rationale for their removal? |
| Tiered Disclosure (EPD-003) | Has the entity produced all non-privileged records responsive to the information request? | What categories of adverse findings are routed to privileged or high-access tiers, and what is the process by which the decision to route them there rather than to standard quality system records is made? |
| Flush Doctrine (EPD-004) | Do the cleaning records show that validated cleaning procedures were performed at required intervals? | What is the actual cleaning agent used, at what concentration, and has the validated efficacy of that agent been verified under the specific contamination conditions present on this line — not only in general validation studies? |
| Absence Standard (EPD-005) | [In regulatory context:] Does the entity's record support its claim of compliance? | For each category of harm claimed to be absent from the record, what tests or monitoring systems would be expected to detect that harm if it were present — and are those systems in place and generating data? |
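
The mechanism-to-question mapping above is regular enough to encode as data. The sketch below is purely illustrative: `QuestionPair`, `CATALOG`, and `right_question_for` are hypothetical names, not part of any published toolkit, and only two of the eight rows are transcribed.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class QuestionPair:
    """One row of the mechanism/question tables above (illustrative only)."""
    mechanism_id: str       # e.g. "CT-001", "EPD-002"
    standard_question: str  # what the compliance artifact is built to satisfy
    right_question: str     # what reaches behind the artifact


# Two rows transcribed from the tables; a full catalog would carry all eight.
CATALOG = [
    QuestionPair(
        "CT-001",
        "Is there a documented procedure that specifies the required steps?",
        "Has the procedure been empirically validated to produce the outcome "
        "it specifies in the operational context where it is performed?",
    ),
    QuestionPair(
        "EPD-002",
        "Does the current version of the SOP include all required elements?",
        "What steps present in prior SOP versions are absent from the current "
        "version, and what was the documented rationale for their removal?",
    ),
]


def right_question_for(mechanism_id: str) -> str:
    """Look up the forensic question for a mechanism; KeyError if unknown."""
    for pair in CATALOG:
        if pair.mechanism_id == mechanism_id:
            return pair.right_question
    raise KeyError(mechanism_id)
```

Encoding the pairs as data makes the point of the table explicit: the standard and forensic questions are parallel objects keyed to the same mechanism, not ad hoc criticisms.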
The right question pairs above are instances of a more general methodology that can be applied to any regulated domain. The methodology has three steps. First, identify what the standard audit question treats as its terminal object — the artifact it is designed to verify the existence of. Second, ask what operational reality the artifact is designed to represent — what would be happening in the facility, the research program, or the governance system if the artifact accurately reflected the underlying operations. Third, identify the question that would test whether the artifact accurately represents that operational reality, rather than accepting the artifact as evidence of it.
Applied systematically: the standard audit asks whether a cleaning log exists (artifact). The operational reality the cleaning log represents is that the cleaning procedure was performed and was effective at removing the contaminants present on the line (operational reality). The right question asks what evidence, beyond the signature in the log, shows that the procedure was actually performed and was effective under the specific conditions on this line (test of representation). The generative methodology produces the right question from the structure of the artifact and its claimed representation, without requiring prior knowledge of the specific EPD mechanism the artifact is designed to satisfy.
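The three steps can be sketched as a question generator. This is a minimal sketch under stated assumptions: `Artifact` and `generate_right_question` are hypothetical names introduced here for illustration, and the question template is one possible rendering of step three, not a canonical formulation.

```python
from dataclasses import dataclass


@dataclass
class Artifact:
    """Step 1: what the standard audit treats as its terminal object."""
    name: str             # e.g. "cleaning log"
    claimed_reality: str  # step 2: the operational reality it claims to represent


def generate_right_question(artifact: Artifact) -> str:
    """Step 3: ask whether the artifact accurately represents the
    claimed operational reality, rather than accepting it as evidence."""
    return (
        f"Beyond the existence of the {artifact.name}, what independent "
        f"evidence shows that {artifact.claimed_reality}?"
    )


# Worked example from the text: the cleaning log.
cleaning_log = Artifact(
    name="cleaning log",
    claimed_reality=(
        "the cleaning procedure was actually performed and was effective "
        "against the contaminants present on this line"
    ),
)
```

The template makes the structure of the methodology visible: every forensic question is the same question, parameterized by the artifact and the reality it claims to represent.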
Pesticide registration in the United States requires submission of safety data to the EPA demonstrating that the product is safe for its intended uses when used as directed. The standard registration review asks: has the registrant submitted studies of the required types, were the studies conducted according to Good Laboratory Practice (GLP), and do the study results fall within the acceptable range for registration?
The right questions, derived from the generative methodology: Who designed the studies that were submitted — were they designed to test whether the pesticide causes harm at exposure levels likely to occur in actual use, or only at the specific endpoints required for registration, which may differ from the harm pathways most likely to be relevant? What studies were conducted that were not submitted — and for each non-submitted study, what was the outcome and the rationale for non-submission? What is the relationship between the exposure levels tested and the exposure levels that applicators, farmworkers, and nearby residents actually experience during and after application? These questions are not asked in the standard pesticide registration process. That process accepts the submitted study package as the evidentiary basis for registration; it does not ask whether the package represents the full evidentiary record of the pesticide's safety profile.
Post-market surveillance for opioid analgesics requires pharmaceutical manufacturers to monitor adverse events, addiction signals, and misuse patterns and to report specified events to the FDA. The standard question: are required adverse event reports being submitted within the required timeframes?
The right questions: What monitoring systems are in place to detect addiction signals that would not be captured by adverse event reports — specifically, what data is the company collecting about prescribing patterns, prescription fill rates at pharmacies, and geographic clustering of high-volume prescribers that might indicate non-therapeutic use? Has the company's pharmacovigilance system been validated to detect the specific signal that opioid addiction produces in prescription data — namely, the pattern of escalating dose requests, early refill requests, and multi-physician prescribing — or does the system capture only the adverse events that patients self-report to their prescribers? What is the process by which pharmacovigilance findings reach the commercial team, and is there documentation of commercial decisions made with awareness of pharmacovigilance signals? These questions address the Verification Gap and the Liability Partition simultaneously, and they do not appear in the standard FDA post-market surveillance framework for opioids.
The right questions address what to ask. AOA-003 addresses what to do when the answers to those questions are missing — when the institution produces no data responsive to the forensic question because the system was designed not to generate it. The methodology for reading absence as evidence is the third component of the forensic audit toolkit.
An anticipated objection: the right questions as described would require regulators to second-guess the study designs and operational procedures of highly specialized industries, using standards that were not established through the normal regulatory process. This amounts, the objection runs, to regulators imposing substantive requirements that have not gone through notice-and-comment rulemaking, creating legal uncertainty and potentially exceeding regulatory authority.
The objection correctly identifies the legal constraint on regulatory action — regulators cannot impose substantive requirements outside the established rulemaking process. But the forensic audit is not proposing to impose requirements. It is proposing to ask questions — and to treat the answers (including absence of answers) as evidence about the entity's compliance with existing requirements. Asking whether a cleaning procedure has been validated to be effective under the specific contamination conditions on a specific line is not imposing a new requirement; it is investigating whether the existing validation requirement has been genuinely met, rather than met through a validation study that does not represent the operational conditions. The right questions are evidentiary questions about compliance with existing standards, not new substantive requirements.
Internal: This paper is part of Auditor of Auditors (AOA series), Saga VI. It draws on and contributes to the argument documented across 23 papers in 5 series.