The FTP Audit Instrument
ICS-2026-HC-014 · Appendix · Standalone Assessment Form · The Institute for Cognitive Sovereignty · CC BY-SA 4.0
This appendix presents the FTP Audit Instrument in standalone form for use in regulatory assessment, organizational self-audit, and research citation. The instrument operationalizes the Fidelity, Transparency, Participation (FTP) framework defined in Saga XI: The Collaboration (HC-011 through HC-013) as 18 scored questions, organized into five sections alongside an unscored system description and a verdict matrix.
The instrument enforces cascade order: the Fidelity section cannot produce a "Satisfies" verdict if either Transparency or Participation produces a "Fails" verdict.
Section 1: System Description
Establish the object of assessment. This section is not scored.
1.1
What does the AI system do? Describe the specific functions the system performs in operational terms.
1.2
What do the humans do? Describe the specific functions that remain with human practitioners.
1.3
What is the division of labor? Map the AI functions and human functions to the relevant domain Pair table. Identify which column each function falls in.
1.4
Who are the affected populations? Identify all groups whose capabilities, decisions, or outcomes are influenced by the deployment.
Section 2: Transparency Audit
Seven questions across three levels. Each level is scored independently.
Level 1: Functional Transparency
T.1
Functional
Is a clear, verifiable description of what the AI does, what the human does, and what the AI cannot do publicly available to all affected populations?
T.2
Functional
Can an affected individual, without specialist knowledge, understand the system's role in decisions affecting them?
Level 2: Process Transparency
T.3
Process
Can a domain expert explain why the system produced a specific output for a specific input?
T.4
Process
Are the system's uncertainty boundaries documented — what it is confident about and where its outputs are unreliable?
T.5
Process
Is the optimization target disclosed — what metric the system is designed to maximize, and what tradeoffs that metric implies?
Level 3: Audit Transparency
T.6
Audit
Can a qualified independent assessor access sufficient system information to verify that the system does what it claims?
T.7
Audit
If Level 3 access is restricted on security/IP grounds, is the restriction limited to Level 3 specifically, or does it extend to Levels 1 and 2 where the defense does not apply?
Scoring: Satisfies = all applicable questions answered affirmatively with documented evidence. Partially Satisfies = Level 1 satisfied but Level 2 or 3 fails. Fails = Level 1 not satisfied.
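The three-level scoring rule above is mechanical, so it can be sketched in code. This is an illustrative helper, not part of the instrument; the function name and boolean inputs are assumptions. Each input is True only when every applicable question at that level is answered affirmatively with documented evidence.

```python
def transparency_verdict(level1_ok: bool, level2_ok: bool, level3_ok: bool) -> str:
    """Transparency scoring rule (illustrative sketch, names assumed).

    Each argument is True when every applicable question at that level
    is answered affirmatively with documented evidence.
    """
    if not level1_ok:
        return "Fails"                 # Level 1 not satisfied
    if level2_ok and level3_ok:
        return "Satisfies"             # all three levels documented
    return "Partially Satisfies"       # Level 1 holds; Level 2 or 3 fails
```

Note that the rule is asymmetric by design: functional transparency (Level 1) is a floor, while process and audit transparency only distinguish full from partial satisfaction.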
Section 3: Participation Audit
Five questions assessed at two tiers.
Threshold Participation (minimum for deployment)
P.1
Threshold
Are all affected populations formally identified and documented?
P.2
Threshold
Are the interests of affected populations formally represented in the governance structure — through representatives with documented accountability to the affected group, not self-appointed proxies?
P.3
Threshold
Does a documented mechanism exist for affected populations to trigger review or modification of the system post-deployment?
Full Participation (design aspiration)
P.4
Full
Do affected populations have direct governance access — structured input with genuine capacity to modify or reject designs before deployment?
P.5
Full
Are power asymmetry corrections in place — dedicated resourcing, information access, and adequate review time for affected population representatives equivalent to what deployers bring?
Scoring: Satisfies = Threshold met (P.1–P.3 all yes) and Full met (P.4–P.5 all yes). Partially Satisfies = Threshold met but Full not met. Fails = Threshold not met (any of P.1–P.3 answered no).
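The two-tier rule above can likewise be sketched. Again this is illustrative, not part of the instrument; the function name and the answer-dictionary shape are assumptions.

```python
def participation_verdict(answers: dict[str, bool]) -> str:
    """Participation scoring rule (illustrative sketch, names assumed).

    `answers` maps question IDs "P.1" through "P.5" to yes/no answers.
    """
    threshold_met = all(answers[q] for q in ("P.1", "P.2", "P.3"))
    full_met = all(answers[q] for q in ("P.4", "P.5"))
    if not threshold_met:
        return "Fails"                 # any of P.1-P.3 answered no
    if full_met:
        return "Satisfies"             # both tiers met
    return "Partially Satisfies"       # Threshold met, Full not met
```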
Section 4: Fidelity Audit
Six questions, domain-specific, derived from the Pair table.
Cascade rule: This section cannot produce a "Satisfies" verdict if either Transparency or Participation has produced a "Fails" verdict. A "Fails" in Transparency or Participation produces a maximum of "Partially Satisfies" in Fidelity, regardless of F.1–F.6 answers.
F.1
Which irreducible human capabilities from the domain's Pair table (left column) are exercised in this deployment? Which are not?
F.2
Is the deployment designed to preserve or increase human capability in the irreducible functions — or does it primarily optimize for efficiency, cost reduction, or throughput in AI-mediated functions?
F.3
The 30-day test: Could the humans in this collaboration perform the irreducible domain functions adequately if the AI were unavailable for 30 days?
F.4
Is there documented evidence that human practitioners' capability in irreducible functions has changed (improved or degraded) since the AI deployment began?
F.5
Are training, practice, and professional development programs for irreducible functions maintained, increased, or reduced under the current deployment?
F.6
Does the deployment include structural mechanisms (mandatory practice requirements, capability assessment, training investment) that prevent atrophy of irreducible human functions?
Scoring: Satisfies = F.1–F.6 all affirm capability preservation, AND Transparency and Participation both at least "Partially Satisfies." Partially Satisfies = mixed evidence, declining trends, or cascade override triggered. Fails = documented capability decline. Per the cascade rule above, a "Fails" in Transparency or Participation caps this section at "Partially Satisfies" regardless of F.1–F.6 answers.
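Because the Fidelity rule interacts with the cascade, a sketch may help make the precedence explicit. This is illustrative only; the function name, boolean inputs, and the reading that documented decline takes precedence are assumptions drawn from the scoring text and cascade rule above.

```python
def fidelity_verdict(f_affirm: bool, decline_documented: bool,
                     transparency: str, participation: str) -> str:
    """Fidelity scoring rule with cascade override (illustrative sketch).

    f_affirm: True when F.1-F.6 all affirm capability preservation.
    decline_documented: True when capability decline is documented (F.4-F.5).
    transparency, participation: verdicts from Sections 2 and 3.
    """
    if decline_documented:
        return "Fails"                           # documented decline dominates
    verdict = "Satisfies" if f_affirm else "Partially Satisfies"
    # Cascade rule: a "Fails" upstream caps Fidelity at "Partially Satisfies",
    # regardless of the F.1-F.6 answers.
    if "Fails" in (transparency, participation):
        verdict = min(verdict, "Partially Satisfies",
                      key=("Fails", "Partially Satisfies", "Satisfies").index)
    return verdict
```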
Section 5: Verdict Matrix
| Criterion | Satisfies | Partially Satisfies | Fails |
| --- | --- | --- | --- |
| Transparency | All three levels documented with evidence | Level 1 satisfied; Level 2 or 3 fails | Level 1 not satisfied |
| Participation | Threshold and Full tiers both met | Threshold met; Full tier not met | Threshold not met |
| Fidelity | Irreducible capabilities preserved; cascade prerequisites met | Mixed evidence; declining trends; or cascade override triggered | Documented capability decline |
Combined verdict: The instrument yields three independent criterion verdicts plus a combined assessment. A "Fails" in any criterion prevents an overall "Satisfies." The instrument produces a diagnostic, not a score: it identifies which criterion fails, at which level, and what remediation is required.
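The combination step can be sketched as well. Note the hedge: the instrument only states that any "Fails" prevents an overall "Satisfies"; the mapping of a single failure to an overall "Fails", and the function name, are assumptions for illustration.

```python
def combined_verdict(transparency: str, fidelity: str, participation: str) -> str:
    """Combine the three criterion verdicts (illustrative sketch).

    The instrument specifies only that any "Fails" prevents an overall
    "Satisfies"; treating any failure as an overall "Fails" is an
    assumption. The diagnostic value lies in the individual verdicts.
    """
    verdicts = (transparency, fidelity, participation)
    if "Fails" in verdicts:
        return "Fails"                 # assumption: any failure dominates
    if all(v == "Satisfies" for v in verdicts):
        return "Satisfies"
    return "Partially Satisfies"
```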