Six moves. Every effective piece of misinformation uses at least one. Once you can name the move, you can see it coming before it lands.
Rhetoric is the oldest technology of persuasion. Aristotle catalogued its moves in 350 BCE. The Sophists were teaching it for money a century before that. What is new in the current moment is not that manipulation tactics exist — it is that they have been operationalized at scale, studied empirically through randomized controlled trials, and the results used both to deploy manipulation more effectively and to build resistance against it.
The research program that produced the most systematic catalogue of manipulation tactics emerged from an unlikely collaboration: Cambridge University psychologists working with Google Jigsaw, the technology incubator within Google focused on countering extremism and disinformation. Their goal was not to produce an academic taxonomy but to build an intervention: a game called Bad News, in which players take the role of a misinformation producer and learn the tactics from the inside out. The theory was that actively producing manipulative content would generate stronger resistance than passively recognizing it. The theory was correct.
What emerged from this research program — validated across five languages, multiple cultural contexts, and tens of thousands of participants — was a compact taxonomy of six manipulation tactics that underlie the overwhelming majority of effective misinformation: scapegoating, emotional hijacking, false authority, conspiracy framing, decontextualization, and discrediting.
These six are not exhaustive of all manipulation techniques. They are the six that appear most consistently across effective misinformation, that transfer across cultural and linguistic contexts, and that respond most reliably to inoculation-based interventions.
One might ask why manipulation reduces to six tactics rather than sixty or six hundred. The answer lies in the psychology of persuasion: effective manipulation must work within the constraints of human cognition, specifically the limited bandwidth of the system it is attempting to bypass. Manipulation that requires extensive cognitive processing is not manipulation — it is argument. The six tactics are those that have been evolutionarily and culturally selected for their capacity to bypass the evaluation process rather than engage it.
Scapegoating works because assigning blame is cognitively cheaper than analyzing causes. Emotional hijacking works because the evaluative system is suppressed under arousal. False authority works because evaluating credibility is hard. Conspiracy framing works because unfalsifiable claims resist refutation. Decontextualization works because reconstructing context requires bandwidth. Discrediting works because source evaluation precedes content evaluation.
Each tactic exploits a specific cognitive shortcut. None of them would work on a mind with unlimited bandwidth and perfect information. They are designed for the human mind as it actually is — bandwidth-limited, heuristic-dependent, and operating in an information environment that long ago exceeded its evaluation capacity.
Roozenbeek and van der Linden's Bad News game, developed through the Cambridge Social Decision-Making Lab and deployed in five language versions (German, Greek, Polish, Swedish, English) in collaboration with the UK Foreign and Commonwealth Office, tested whether active inoculation against the six tactics produced measurable resistance. The finding: significant reductions in the perceived reliability of manipulative content across all tested languages and cultural contexts, with the effect holding from Germany to Poland. The manipulation grammar is not culturally specific — it is a feature of how human cognition processes information under conditions of uncertainty.
The EU election prebunking campaign reported in Communications Psychology (2025) deployed videos specifically targeting scapegoating, decontextualization, and discrediting, evaluated across 13 surveys in 12 EU nations and reaching over 120 million YouTube users. The videos improved both manipulation discernment and sharing decisions, including among adults 45 and older, who had previously been considered harder to reach with digital interventions.
The formal identification of this grammar has a specific practical implication: it transforms recognition from an intuitive, case-by-case judgment into a trainable structural skill. You do not need to be familiar with a particular piece of misinformation to recognize it. You need to be able to identify which of the six moves it is making.
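The structural skill can be sketched as data: each tactic paired with the diagnostic question its mechanism implies. This is a purely illustrative Python sketch — the tactic names follow the taxonomy above, but the questions are paraphrases of the mechanisms described, and the `screen` function is a hypothetical helper, not part of Bad News or any published instrument.

```python
# Illustrative only: the six-tactic taxonomy encoded as a checklist.
# Tactic names come from the article's taxonomy; the diagnostic
# questions are paraphrases of the mechanisms it describes.

TACTICS = {
    "scapegoating": "Is a complex problem blamed on a single group or actor?",
    "emotional_hijacking": "Is the framing built to provoke arousal before evaluation?",
    "false_authority": "Is credibility asserted rather than demonstrated?",
    "conspiracy_framing": "Is the claim constructed so no evidence could refute it?",
    "decontextualization": "Is a real fact presented stripped of its context?",
    "discrediting": "Is the source attacked instead of the content?",
}

def screen(flags):
    """Return the diagnostic questions for tactics flagged in a piece of content."""
    return [TACTICS[t] for t in sorted(flags) if t in TACTICS]

# A post flagged as scapegoating plus decontextualization yields two questions
# to ask of it, regardless of its topic or source.
questions = screen({"scapegoating", "decontextualization"})
```

The point of the data structure mirrors the point of the training: recognition keys on the move, not the message, so the same six questions apply to any content.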
This is the prebunking insight. Traditional media literacy education attempts to teach people what is true. Prebunking teaches people the structure of manipulation — so that when the move is made, regardless of content, topic, or source, the recognition fires before the persuasion lands.
The manipulation grammar is the prior layer. Before the epistemic crisis (Series II) becomes legible, before the inoculation science (Series III) makes sense, before informational sovereignty (Series IV) becomes practical — the vocabulary must be established. These are the six words. Everything else is sentences.