ICS-2026-IA-006 · Influence Architecture · Series 38

The Detection Standard

Three forensic criteria for distinguishing organic information environments from manufactured ones: the right questions, applied to information environments rather than to documents or definitions.

Named condition: The Organic-Synthetic Distinction · Saga VII · Series 38 · 15 min read · Open Access · CC BY-SA 4.0

I. The Problem This Paper Solves

The five preceding papers documented three mechanisms of influence architecture — affective engineering (IA-001), consensus engineering (IA-002), and source laundering (IA-003) — and two primary case studies demonstrating their combined deployment (IA-004, IA-005). The mechanisms are documented. The question this paper answers is: how do you detect them in real time?

The Semantic Record's Counter-Semantic Standard (SR-006) specified three forensic criteria for distinguishing linguistic evolution from semantic capture. This paper specifies the equivalent standard for the information environment: three forensic criteria for distinguishing organic information environments from manufactured ones.

The standard does not determine truth. It determines whether apparent consensus, apparent emotional activation, and apparent source diversity are consistent with organic origin — or whether they bear the structural signatures of engineering.

II. The Three Forensic Criteria

Criterion 1: Volume Signature

The question: Is the pattern of engagement around this topic consistent with organic interest — or does it show the statistical signatures of coordinated amplification?

What organic looks like: Organic trending follows a power-law distribution: a small number of highly connected accounts initiate sharing; the content diffuses through their networks at rates proportional to the network's existing connectivity patterns; engagement builds gradually, peaks, and decays following a natural curve. The time-to-peak is typically hours to days for genuinely viral content.
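To make the organic baseline concrete, here is a minimal sketch of the power-law check in Python, assuming per-account share counts are available from a platform API or data partnership. The share counts below are invented for illustration, and the slope reading is a heuristic, not a calibrated test.

```python
# A minimal sketch, assuming per-account share counts are available.
# All numbers are invented for illustration.
import numpy as np

def rank_frequency_slope(shares_per_account: np.ndarray) -> float:
    """Slope of the log-log rank-frequency curve of share counts.

    Organic diffusion is heavy-tailed: a few hubs share heavily, a long
    tail shares once or twice, giving a steep negative slope. A near-flat
    slope (every account roughly equally active) is a uniformity that
    organic interest rarely produces.
    """
    counts = np.sort(shares_per_account)[::-1]      # descending
    ranks = np.arange(1, len(counts) + 1)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(counts), deg=1)
    return slope

organic_like = np.array([900, 310, 140, 80, 44, 21, 12, 7, 4, 2])
bot_like = np.full(10, 50)
print(rank_frequency_slope(organic_like))   # steep negative slope
print(rank_frequency_slope(bot_like))       # approximately 0.0 (flat)
```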

What manufactured looks like: Manufactured trending shows three anomalies: posting-frequency distributions clustered at rates and intervals no organic user population produces, account-age distributions concentrated in narrow creation windows, and engagement-to-follower ratios disproportionate to the accounts' organic reach.

Application protocol: For any topic that appears to be trending, map the posting frequency distribution, account-age distribution, and engagement-to-follower ratio of the accounts driving the trend. If all three show anomalies consistent with coordinated deployment, the volume signature criterion is met.
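As a sketch of how this protocol might be operationalised: the function below flags a topic only when all three account-level distributions look coordinated. The field names and every threshold (10 posts per hour, 30-day account age, a 20x engagement-to-follower ratio, 50% prevalence) are illustrative assumptions, not calibrated values from any published methodology.

```python
# Sketch of the Criterion 1 protocol. All thresholds are illustrative
# assumptions, not calibrated values.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_hour: float   # posting frequency on the topic
    age_days: int           # account age when it joined the trend
    engagements: int        # engagement received on topic content
    followers: int

def volume_signature_met(accounts: list[Account]) -> bool:
    n = len(accounts)
    # Anomaly 1: posting frequencies clustered at machine-like rates.
    high_rate = sum(a.posts_per_hour > 10 for a in accounts) / n
    # Anomaly 2: account ages concentrated in a narrow creation window.
    young = sum(a.age_days < 30 for a in accounts) / n
    # Anomaly 3: engagement disproportionate to follower counts.
    skewed = sum(a.engagements > 20 * max(a.followers, 1) for a in accounts) / n
    # Criterion 1 is met only when all three distributions look coordinated.
    return high_rate > 0.5 and young > 0.5 and skewed > 0.5
```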

Criterion 2: Origin Diversity

The question: Does the apparent geographic, demographic, and ideological diversity of the accounts engaging with this content reflect genuine distributed opinion — or does it show the structural signatures of manufactured diversity?

What organic looks like: Genuine widespread agreement on a topic produces engagement from accounts with genuinely diverse characteristics: different posting histories (varied topic interests over months or years), different social networks (connections to different communities), different linguistic patterns (regional dialects, varying formality, different vocabularies), and different demographic indicators (varied profile information, varied photo styles, varied biographical details).

What manufactured looks like: Manufactured diversity is designed to simulate organic diversity but fails at the level of granular consistency: linguistic clustering (near-identical phrasing across supposedly independent accounts), social network isolation (accounts that interact densely with one another but sparsely with the wider network), and content convergence (the same framings, links, and talking points recurring across apparently unrelated accounts).
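Of the three signals, linguistic clustering is the simplest to illustrate. The sketch below scores pairwise token overlap with Jaccard similarity; the example posts and the 0.7 threshold are invented, and a production system would use more robust similarity measures alongside the network and content analyses.

```python
# Sketch of one Criterion 2 signal: linguistic clustering. Example posts
# and the 0.7 threshold are invented for illustration.
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def near_duplicate_ratio(posts: list[str], threshold: float = 0.7) -> float:
    """Fraction of post pairs whose phrasing is near-identical."""
    pairs = list(combinations(posts, 2))
    return sum(jaccard(a, b) >= threshold for a, b in pairs) / len(pairs)

posts = [
    "this policy is a disaster for working families",
    "this policy is a total disaster for working families",
    "honestly this policy is a disaster for working families",
]
print(near_duplicate_ratio(posts))  # 1.0: supposedly independent voices converge
```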

Criterion 3: Temporal Pattern

The question: Does the timing of engagement follow the pattern of organic information diffusion — or does it show the signatures of coordinated deployment?

What organic looks like: Organic information diffusion follows a cascade pattern: initial sharing by a small number of accounts, gradual spread through their networks, accelerating engagement as the content reaches more densely connected hubs, peak engagement, and natural decay. The cascade pattern reflects the actual social network structure through which the content travels.

What manufactured looks like: Coordinated deployment produces three temporal anomalies: spike-without-build-up (engagement that jumps to peak without the gradual cascade organic diffusion requires), sustained-without-decay (engagement held at a plateau long after organic attention would have faded), and cross-platform synchronisation (near-simultaneous peaks on platforms that organic diffusion would reach sequentially).
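Each anomaly lends itself to a simple check over an hourly engagement series, sketched below. The window sizes and ratios are illustrative assumptions, not calibrated detection thresholds.

```python
# Sketch of the three Criterion 3 checks over an hourly engagement series.
# Window sizes and ratios are illustrative assumptions.
def spike_without_buildup(series: list[int], ramp_hours: int = 3) -> bool:
    """Peak reached with almost no preceding ramp; organic cascades build."""
    peak_idx = series.index(max(series))
    ramp = sum(series[:peak_idx])
    return peak_idx < ramp_hours or max(series) > 10 * max(ramp, 1)

def sustained_without_decay(series: list[int], window: int = 6) -> bool:
    """Engagement held near peak instead of decaying naturally."""
    peak_idx = series.index(max(series))
    tail = series[peak_idx:peak_idx + window]
    return len(tail) == window and min(tail) > 0.8 * max(series)

def cross_platform_synchronised(peak_hours: list[int], tolerance: int = 1) -> bool:
    """Near-simultaneous peaks on platforms organic diffusion reaches
    sequentially."""
    return max(peak_hours) - min(peak_hours) <= tolerance

print(spike_without_buildup([2, 0, 1, 500, 480, 470]))            # True: no ramp
print(sustained_without_decay([5, 40, 100, 98, 97, 99, 96, 95]))  # True: plateau
```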

III. The Application Protocol

The three criteria function as a diagnostic sequence, identical in structure to the Counter-Semantic Standard (SR-006):

Step 1: Identify the information environment under evaluation. This may be a trending topic, a public debate, a pattern of media coverage, or any other information phenomenon whose organic or synthetic origin is in question.

Step 2: Apply Criterion 1 (Volume Signature). Map the posting frequency, account-age, and engagement-to-follower distributions of the accounts driving the phenomenon. If all three show anomalies consistent with coordinated deployment, Criterion 1 is met.

Step 3: Apply Criterion 2 (Origin Diversity). Analyse the linguistic patterns, social network structures, and content convergence of the apparently diverse accounts. If the analysis reveals clustering inconsistent with genuine independent diversity, Criterion 2 is met.

Step 4: Apply Criterion 3 (Temporal Pattern). Map the engagement timeline. If the pattern shows spike-without-build-up, sustained-without-decay, or cross-platform synchronisation inconsistent with organic diffusion, Criterion 3 is met.

Step 5: Classification. If all three criteria are met, the information environment bears the structural signatures of manufactured origin. If only one or two criteria are met, the environment warrants investigation but does not meet the full evidentiary standard for a manufactured-origin finding.
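The classification rule reduces to a conjunction over the three criteria. A minimal sketch, assuming the boolean inputs come from analyses like those sketched above:

```python
# Sketch of the Step 5 rule: all three criteria met yields a manufactured-
# origin finding; one or two met yields investigation without a finding.
def classify(volume: bool, diversity: bool, temporal: bool) -> str:
    met = sum([volume, diversity, temporal])
    if met == 3:
        return "structural signatures of manufactured origin"
    if met >= 1:
        return "warrants investigation; evidentiary standard not met"
    return "consistent with organic origin"

print(classify(volume=True, diversity=True, temporal=False))
```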

IV. The Limits of This Standard

1. It cannot determine intent. The standard identifies structural signatures consistent with coordination. It cannot determine whether the coordination was conducted for political manipulation, commercial marketing, state propaganda, or genuine grassroots organising that happens to use coordinated tactics. Intent evidence must come from primary documents (internal communications, funding records, operational orders), not from the structural analysis alone.

2. It requires computational infrastructure. The three criteria cannot be applied by individual users scrolling through their feeds. They require data access (platform APIs or data partnerships), computational analysis (statistical modelling of posting patterns, network analysis, temporal mapping), and domain expertise (understanding of what organic distributions look like in different contexts). The Detection Standard is a tool for researchers, journalists, regulators, and platform trust-and-safety teams — not for individual media consumers.

3. The arms race is permanent. Sophisticated operators design their operations to evade the specific detection methodologies that are publicly known. Every publication of a detection methodology provides a roadmap for the next evasion. This does not make the standard useless; it means that the standard must be continuously updated, and that the most effective detection methodologies may need to remain partially unpublished to prevent evasion.

4. It does not address source laundering. The three criteria detect consensus engineering through coordinated inauthentic behaviour (IA-002) and affective engineering (IA-001). They do not detect source laundering (IA-003), which operates through single legitimate-appearing entities rather than through coordinated account networks. Source laundering detection requires the funding, ownership, and editorial analysis specified in IA-003.

V. The Institutional Requirement

The Detection Standard, like the Counter-Semantic Standard (SR-006), is not a consumer tool. It is an institutional requirement — a specification for what platforms, regulators, and independent research bodies must be capable of doing to maintain the epistemic integrity of the information environments they govern.

For platforms: The three criteria should be applied continuously to trending topics and high-engagement content. Results should be published in quarterly transparency reports. The criteria should be supplemented by internal methodologies that are not published (to prevent evasion).

For regulators: The Detection Standard should inform the design of platform transparency mandates. Regulators should require platforms to report Criterion 1, 2, and 3 anomalies for any content that reaches a defined engagement threshold — and to disclose the action taken (or not taken) in response.

For independent researchers: The Oxford Internet Institute's Computational Propaganda Project, the Stanford Internet Observatory, the Atlantic Council's Digital Forensic Research Lab, and comparable institutions are the existing approximations of the Detection Standard's institutional requirement. Their work should be funded at a scale proportional to the threat; currently it is not.

For the Institute: The Detection Standard completes the forensic toolkit that the Auditor of Auditors series (AOA-001 through AOA-006) began. The AOA series specified how to audit institutions. The SR series specified how to audit definitions. The IA series specifies how to audit information environments. Together, they constitute the three layers of the cognitive sovereignty audit: can the institution be held accountable (AOA), can the language be held stable (SR), and can the information environment be held honest (IA)?

Named Condition

The Organic-Synthetic Distinction — the evidentiary standard and forensic methodology for identifying whether an information environment reflects genuine distributed opinion or manufactured apparent consensus. Consists of three criteria applied in sequence: volume signature analysis (posting frequency, account-age, engagement-to-follower distributions), origin diversity analysis (linguistic clustering, social network isolation, content convergence), and temporal pattern analysis (spike-without-build-up, sustained-without-decay, cross-platform synchronisation). The information-operations equivalent of the forensic audit methodology: the right questions applied to information environments rather than documents or definitions.

How to cite this paper
The Institute for Cognitive Sovereignty. “The Detection Standard.” ICS-2026-IA-006. Series 38: The Influence Architecture. Saga VII: The Archive. cognitivesovereignty.institute, March 2026.

References

Internal: This paper is part of The Influence Architecture (IA series), Saga VII. It draws on and contributes to the argument documented across 69 papers in 13 series.

External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.