The first prerequisite in the FTP cascade — you cannot participate meaningfully in what you cannot see
Transparency is the first prerequisite in the FTP cascade. It is not one of three parallel criteria. It is the condition without which the other two cannot function.
The logic is straightforward: you cannot participate meaningfully in what you cannot see. A governance structure that withholds full transparency about what the system does, what it optimizes for, and where it fails cannot satisfy Participation even if affected populations are nominally included. They are included in a process they cannot evaluate.
And you cannot design for Fidelity without genuine Participation. If the populations whose capability is at stake have no structural input, there is no mechanism to catch when the design optimizes for something other than human capability. An audit instrument that checks Fidelity without first verifying Transparency and Participation produces false positives by design. This is the documented failure mode of existing frameworks — the compliance theater that HC-015 examines in detail.
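The cascade logic can be made concrete as a gating rule. The following is a minimal Python sketch (names are illustrative, not the HC-014 instrument itself): a failed prerequisite leaves every later criterion *not assessable*, rather than scoring it and producing a false positive.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    PASS = "pass"
    FAIL = "fail"
    NOT_ASSESSABLE = "not assessable"


@dataclass
class CascadeResult:
    transparency: Verdict
    participation: Verdict
    fidelity: Verdict


def assess_cascade(transparency_ok: bool, participation_ok: bool,
                   fidelity_ok: bool) -> CascadeResult:
    """Evaluate the FTP prerequisites in order; a failed prerequisite
    leaves every later criterion not assessable rather than scored."""
    if not transparency_ok:
        # Without Transparency, inclusion is nominal: the affected
        # population is participating in a process it cannot evaluate.
        return CascadeResult(Verdict.FAIL, Verdict.NOT_ASSESSABLE,
                             Verdict.NOT_ASSESSABLE)
    if not participation_ok:
        # Without Participation, a Fidelity "pass" is a false positive
        # by design: no one is positioned to catch a misaligned target.
        return CascadeResult(Verdict.PASS, Verdict.FAIL,
                             Verdict.NOT_ASSESSABLE)
    return CascadeResult(Verdict.PASS, Verdict.PASS,
                         Verdict.PASS if fidelity_ok else Verdict.FAIL)
```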
Transparency in AI deployment is not a binary condition. A system can be transparent about some things and opaque about others. The three-level framework makes this assessable:
| Level | Question | What it requires |
|---|---|---|
| 1. Functional | What does the AI do? What does the human do? What can the AI not do? | A clear, verifiable description of the division of labor — not marketing language but operational specification |
| 2. Process | How does the system produce outputs? Where does uncertainty live? | Sufficient disclosure of the decision process to allow a domain expert to understand why a specific output was produced |
| 3. Audit | Can the system be independently assessed for compliance with its stated function? | Access for qualified independent assessors to evaluate whether the system does what it claims |
Each level is independently assessable. A system can satisfy Level 1 (clear functional description) while failing Level 2 (no process transparency) and Level 3 (no independent audit access). This matters because different transparency failures produce different harms — and because the most common defense against transparency requirements (security and IP concerns) applies to Level 3 only.
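The independence of the levels can be sketched directly. In the following Python fragment (field and class names are illustrative, not the HC-014 schema), each level carries its own verdict, and a pass at one level implies nothing about the others:

```python
from dataclasses import dataclass


@dataclass
class TransparencyAssessment:
    """Per-level verdicts for one deployment. Field names are
    illustrative, not the HC-014 schema."""
    functional: bool  # Level 1: division of labor operationally specified?
    process: bool     # Level 2: output logic evaluable by a domain expert?
    audit: bool       # Level 3: independent assessor access in place?

    def failures(self) -> list[str]:
        # Each level is scored on its own; a Level 1 pass implies
        # nothing about Levels 2 or 3.
        levels = [("Level 1 (functional)", self.functional),
                  ("Level 2 (process)", self.process),
                  ("Level 3 (audit)", self.audit)]
        return [name for name, ok in levels if not ok]


# A clear functional description with no process or audit transparency:
print(TransparencyAssessment(functional=True, process=False, audit=False)
      .failures())  # ['Level 2 (process)', 'Level 3 (audit)']
```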
Functional transparency answers three questions: What does the AI do? What does the human do? What can the AI not do?
Frank Pasquale's *The Black Box Society* (2015) documented the systematic failure of functional transparency across finance, healthcare, and reputation systems. The finding is not that these systems are opaque for technical reasons. It is that opacity serves identifiable interests: it prevents affected parties from understanding how decisions about them are made, which prevents them from contesting those decisions effectively.
The EU AI Act (Articles 13–14) now requires functional transparency for high-risk systems: a description of the intended purpose, the level of accuracy and limitations, and the conditions under which it performs as specified. This is Level 1 transparency encoded in regulation. The gap between the regulatory requirement and actual deployment practice remains substantial — a gap the compliance theater record (HC-015) documents in detail.
Functional transparency is the minimum. It is also the level that most failing deployments lack. In criminal justice (ProPublica's COMPAS investigation), in hiring (automated resume screening), in content moderation — the affected populations often cannot answer the basic question: what does this system do to me?
Process transparency goes deeper: not just what the system does, but how it produces its outputs. Where does uncertainty live? What is the system confident about and what is it guessing? What variables drive the output?
Wachter, Mittelstadt, and Floridi (2017) argued that GDPR Article 22 does not in fact mandate a "right to explanation" of specific decisions, and that even where an explanation is supplied it is meaningless without process transparency. A system can provide an "explanation" that is technically compliant but informationally empty: "The decision was based on your profile data" explains nothing. Process transparency requires disclosure sufficient for a domain expert to understand why this specific output was produced for this specific input.
Process transparency does not require disclosing model weights, training data, or proprietary architecture. It requires disclosing, in terms a domain practitioner can evaluate, what the system is doing with their case. A radiologist needs to know what features the AI flagged and with what confidence. A judge needs to know what variables drove the risk score and how sensitive the score is to changes in those variables. A teacher needs to know why the system placed a child at a particular level.
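As an illustration, a process-level disclosure can be represented as a small record. The fields and values below are invented for illustration; the point is that none of them require weights, training data, or architecture:

```python
from dataclasses import dataclass


@dataclass
class ProcessDisclosure:
    """One output's process-level disclosure. Hypothetical fields;
    none require model weights, training data, or architecture."""
    output: str                    # what the system produced for this case
    drivers: dict[str, float]      # variables that drove the output, with contributions
    confidence: float              # how certain the system is about this output
    sensitivity: dict[str, float]  # how much the output shifts if a driver changes


# Invented illustration: a risk score that a judge or defense
# counsel could actually interrogate.
disclosure = ProcessDisclosure(
    output="risk score 7/10",
    drivers={"prior_arrests": 0.42, "age_at_first_offense": 0.31,
             "employment_status": -0.18},
    confidence=0.64,
    sensitivity={"prior_arrests": 0.9, "age_at_first_offense": 0.4,
                 "employment_status": 0.3},
)
```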
Audit transparency answers whether the system can be independently verified. Not by its developers, not by the deploying organization, but by qualified external assessors with sufficient access to determine whether the system does what it claims.
This is where the legitimate tension with security and intellectual property interests lives. Audit access to model internals can create security vulnerabilities (adversarial attack surfaces) and can expose proprietary innovations. These concerns are real — but they apply to Level 3 specifically, not to Levels 1 and 2.
The critical finding: most deployments that claim security or IP justification for opacity are not opaque at Level 3. They are opaque at Levels 1 and 2 — where the security/IP defense does not apply. A hiring algorithm that will not disclose what criteria it uses (Level 1) is not protecting trade secrets. It is preventing contestation. A criminal risk score that will not disclose what variables it weighs (Level 2) is not protecting against adversarial gaming. It is preventing judicial review.
"Full transparency creates adversarial vulnerability. If you publish how the system works, bad actors game it. If you publish the architecture, competitors copy it. Transparency and security are in tension."
This objection is valid against Level 3 audit transparency in narrow cases. It is not valid against Levels 1 and 2, which is where most failing deployments are actually opaque.
The three-level framework resolves this by making each level independently assessable. A system can satisfy Levels 1 and 2 (functional and process transparency) while restricting Level 3 (audit access) to qualified, credentialed assessors under appropriate security protocols — analogous to financial auditing, where independent auditors have access to internal records under confidentiality agreements.
The critical reframe: the security/IP objection is used as a blanket defense for opacity at all three levels. The FTP audit instrument (HC-014) assesses each level separately and flags when the defense is being deployed against the wrong level. This converts a vague objection ("we can't be transparent because security") into a precise diagnostic ("you can't satisfy Level 3 audit access — but you haven't satisfied Level 1 functional transparency either, and the security defense doesn't apply there").
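A sketch of that diagnostic (a hypothetical function, not the HC-014 instrument itself): it takes per-level verdicts plus the claimed defense, and emits one finding per level so the defense can only answer the finding it actually applies to.

```python
def diagnose_opacity_defense(functional: bool, process: bool, audit: bool,
                             claims_security_or_ip: bool) -> list[str]:
    """Flag a security/IP defense deployed against the wrong level.
    Sketch only; the defense applies, at best, to Level 3."""
    findings = []
    if not claims_security_or_ip:
        return findings
    if not functional:
        findings.append("Level 1 opaque: security/IP does not protect a "
                        "functional description; likely contestation prevention.")
    if not process:
        findings.append("Level 2 opaque: process disclosure needs no weights "
                        "or architecture; defense misapplied.")
    if not audit:
        findings.append("Level 3 opaque: defense is in scope; assess whether "
                        "credentialed assessor access would resolve it.")
    return findings


# A deployment citing security for blanket opacity gets three separate
# findings, only one of which the defense can answer.
for finding in diagnose_opacity_defense(False, False, False, True):
    print(finding)
```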
The transparency failure in high-stakes AI deployment is not incidental. It is structural. Opacity serves identifiable interests:
Competitive advantage. A hiring algorithm's criteria, a pricing algorithm's logic, a content recommendation system's optimization target — these are competitive assets. Transparency reduces competitive advantage by allowing competitors to replicate the approach and allowing customers to evaluate the product against alternatives.
Liability exposure. If a system's decision logic is transparent, failures are attributable. If a healthcare AI's reasoning process is documented, medical malpractice liability can attach to the logic, not just the outcome. Opacity provides liability diffusion — the "nobody can determine exactly what went wrong" defense.
Contestation prevention. A credit score that is transparent can be contested. A criminal risk score that is transparent can be challenged by defense counsel. A content moderation decision that is transparent can be appealed on its merits. Opacity prevents effective contestation — not by denying it formally, but by denying the informational basis for it.
Opacity is not a bug in the deployment. It is the deployment working as designed — for the deployer's interests, against the affected population's interests.
This structural incentive is why Transparency is the first prerequisite in the FTP cascade. Without regulatory or governance requirements for transparency, market incentives produce opacity. And opacity makes Participation impossible and Fidelity unverifiable.
Transparency is necessary but not sufficient. A system can be fully transparent and still exclude the affected population from governance (Participation failure) or still degrade human capability over time (Fidelity failure). Transparency opens the door. What walks through it is the subject of the next two papers.
HC-012 (Participation: The Governance Requirement) examines what happens after Transparency is satisfied: who has structural access to influence the design? The Consent Deficit — the gap between who bears the consequences and who makes the decisions — is the second prerequisite in the cascade.
HC-013 (Fidelity: The Capability Test) poses the terminal test: are the humans in this collaboration becoming more or less capable over time in the domain's irreducible functions? Fidelity cannot be assessed without Transparency (you cannot measure what you cannot see) and without Participation (you cannot identify the right measurement without the affected population's input).
The cascade is the argument: Transparency enables Participation enables Fidelity. Break it at any link and the chain fails.