The internal research identified design interventions that would reduce the documented harms. Each was evaluated against its revenue cost. Each was rejected, delayed, or implemented in insufficient form.
The internal research at Facebook did not merely document harm. It identified specific design interventions that would reduce it. This distinction matters because it transforms the evidentiary record from a story about ignorance into a story about decision-making. The company did not learn that its platform harmed adolescents and then face the difficult question of what to do about it. The company learned that its platform harmed adolescents and was presented, by its own researchers, with a catalog of actionable modifications. Each modification had a projected impact on adolescent welfare. Each had a projected impact on engagement and revenue. The catalog existed. The decisions were made.
The first intervention category involved like count visibility. Internal researchers documented that the public display of like counts was a primary driver of social comparison among adolescent users. The mechanism was direct: a visible like count converts a social interaction into a quantified ranking. The recommendation was to hide like counts, or at minimum to hide them for users under eighteen. The projected welfare benefit was measurable: reduced social comparison, reduced anxiety around posting, reduced fixation on quantified social validation. The projected revenue cost was also measurable: reduced engagement with like-based feedback loops, reduced posting frequency driven by like-count anxiety, reduced time-on-platform associated with like-count monitoring.
The second category involved content recommendation algorithms. Internal research documented that the Explore page and algorithmic feed surfaced appearance-related, diet-related, and body-comparison content to adolescent users at rates that correlated with increased body dissatisfaction and negative self-evaluation. The recommendation was to modify the algorithmic ranking for users under eighteen to reduce exposure to content categories that the research had specifically identified as harm vectors. The projected welfare benefit was reduction in the documented body image effects. The projected revenue cost was reduction in engagement generated by the highest-engagement content categories for adolescent users — which were, precisely, the categories that produced the harm.
The third category involved direct content filtering for diet and body image material. The research recommended limiting adolescent exposure to content promoting extreme dieting, body transformation, and appearance comparison. This was not a general content moderation recommendation. It was specific to content categories that the internal research had identified as producing measurable harm in the adolescent population. The implementation would have required content classification at scale, which the company already performed for advertising purposes but had not applied to adolescent welfare.
The fourth category involved usage time prompts and friction mechanisms. The research recommended implementing prompts that would alert adolescent users to time spent on the platform, encourage breaks, and introduce friction into the infinite scroll architecture. The projected welfare benefit was reduction in the cumulative exposure effects that the research had documented: the relationship between time-on-platform and the severity of body image, comparison, and mood effects was dose-dependent, and reducing the dose would reduce the effect. The projected revenue cost was, by definition, proportional to the reduction in time-on-platform, because time-on-platform is the primary input from which advertising inventory, and therefore advertising revenue, is generated.
Each of these recommendations existed in documented form within the company before the Haugen disclosure. Each had been evaluated. Each had a projected welfare benefit and a projected revenue cost. The remediation catalog was not hypothetical. It was operational — the product of research conducted by the company's own employees, presented through the company's own internal channels, and assessed against the company's own metrics.
The organizational structure in which these recommendations were evaluated is the key to understanding why they were not implemented. The structure is not complex. It does not require allegations of malice or conspiracy. It requires only the recognition that Facebook, like all publicly traded companies operating on an attention-inventory business model, evaluates product decisions against engagement and revenue metrics.
The attention-inventory model operates on a specific economic logic: the company sells advertising impressions. The value of those impressions is determined by the number of users, the time each user spends on the platform, and the precision of the targeting. Product changes that increase any of these three variables increase revenue. Product changes that decrease any of them decrease revenue. This is not a distortion of the business model. It is the business model.
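That logic can be stated as a stylized identity. The notation below is introduced here for illustration only; it is not the company's internal revenue model, and every symbol is an assumption of this formalization.

```latex
% Stylized revenue identity for an attention-inventory business
% (author's illustrative notation, not the company's internal model):
\[
  R = U \cdot T \cdot a \cdot p(\theta)
\]
% U:         active users
% T:         average time-on-platform per user
% a:         ad impressions served per unit of time (ad load)
% p(\theta): price per impression as a function of targeting precision \theta
%
% R is strictly increasing in each factor, so any design change that
% lowers U, T, or \theta lowers R mechanically. No separate decision
% to "trade welfare for revenue" is required; the identity does the work.
```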
The welfare research documented that the design features producing adolescent harm were the same features producing adolescent engagement. The like count that drives social comparison also drives posting frequency and return visits. The algorithmic amplification of appearance content that produces body dissatisfaction also produces the highest engagement rates among adolescent users. The infinite scroll architecture that enables cumulative exposure also maximizes time-on-platform. The harm and the revenue are not independent variables. They are two outputs of the same design choices, and they are inversely related: reducing the harm requires reducing the engagement, and reducing the engagement reduces the revenue.
This is the Revenue-Welfare Inversion. It is not a failure of corporate ethics. It is an incentive structure. When the organizational decision-making process evaluates product changes against revenue impact, and when the welfare-improving changes carry revenue costs, the predictable output of the decision-making process is that the welfare-improving changes will not be implemented. The inversion does not require anyone to decide to harm adolescents. It requires only that no one in the decision chain has the authority or incentive to accept the revenue cost of not harming them.
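The inversion can be modeled in a few lines. The sketch below is an illustrative reduction of the decision process described above; the proposal names are drawn from the remediation catalog, but the impact figures are invented for the illustration, and this is not a reconstruction of any actual review workflow.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    name: str
    welfare_delta: float  # projected change in adolescent welfare (positive = better)
    revenue_delta: float  # projected change in revenue (positive = more)

def approved(p: Proposal) -> bool:
    # Welfare appears nowhere in the acceptance test. That absence, not any
    # individual decision to cause harm, is what produces the outcome.
    return p.revenue_delta >= 0

# Hypothetical catalog entries with invented impact figures.
catalog = [
    Proposal("hide like counts by default for minors",   +1.0, -0.8),
    Proposal("demote appearance content for minors",     +1.2, -1.1),
    Proposal("restrict adult-to-minor direct messages",  +0.3,  0.0),
]

for p in catalog:
    print(f"{p.name}: {'implemented' if approved(p) else 'foregone'}")
```

Under this rule, the only welfare measures that survive are those whose revenue cost is zero, which is exactly the selection pattern documented in the sections that follow.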
The AE series (Saga VIII) documents this incentive architecture in detail as a general feature of attention-economy business models. The Instagram case is a specific instance of the general pattern. The internal research provided the company with the information necessary to modify the architecture. The incentive structure ensured that the information would not produce the modification. The research existed. The remediation catalog existed. The Revenue-Welfare Inversion determined the outcome.
The company's response to its internal research was not total inaction. It was selective action — and the pattern of selection reveals the operating logic.
Like count hiding was tested. In 2019, Instagram began testing hidden like counts in several countries, including Canada, Australia, Brazil, and Ireland. The feature removed the public display of like counts on posts, allowing only the post's author to see the number. In 2021, the option to hide like counts was made available to all users globally — as an opt-in feature. The default remained visible like counts. The distinction between a default-on and an opt-in implementation is not trivial. Default settings determine the experience of the vast majority of users because the vast majority of users do not change defaults. An opt-in feature that addresses the documented harm but is not enabled by default addresses the public relations problem without modifying the actual user experience at scale. The feature exists. The harm architecture persists.
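The scale difference between the two designs is simple arithmetic. The figures below are assumptions for illustration, not Meta's numbers; the empirical literature on default effects consistently finds that only a small minority of users ever change a default setting.

```python
# Illustrative arithmetic only: the population and rates are assumed, not reported.
def users_with_hidden_likes(population: int, rate: float) -> int:
    """Users who actually experience hidden like counts under a given design."""
    return round(population * rate)

teen_users = 10_000_000  # hypothetical adolescent user base

# Opt-in design: assume ~5% of users ever change the default.
opt_in = users_with_hidden_likes(teen_users, 0.05)
# Default-on design: assume the same ~5% would opt back out.
default_on = users_with_hidden_likes(teen_users, 0.95)

print(f"opt-in design:     {opt_in:>10,} users with hidden like counts")
print(f"default-on design: {default_on:>10,} users with hidden like counts")
```

Under the same assumptions about user behavior, the two designs differ by roughly a factor of twenty in the population they actually reach.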
Algorithmic modifications for adolescent users were not implemented in the period between the internal research findings and the Haugen disclosure. The Explore page continued to surface content through the same engagement-optimized ranking for users under eighteen as for all other users. The content categories that internal research had specifically identified as harm vectors — appearance-focused content, diet content, body comparison content — continued to be algorithmically amplified for adolescent users because they continued to generate high engagement rates among adolescent users. The algorithm was not modified. The harm mechanism continued to operate as documented.
Content filtering for diet and body image material was minimal. The company's content moderation policies addressed content that violated community standards — graphic self-harm imagery, content promoting eating disorders in explicit terms — but did not address the broader category of appearance and diet content that the internal research had identified as the harm vector. The distinction is between content that is explicitly harmful in its messaging (pro-anorexia communities, self-harm tutorials) and content that produces harm through its structural position in the comparison architecture (aspirational body imagery, fitness transformation content, diet culture). The former was moderated. The latter was amplified.
Usage time prompts were implemented — and designed to be dismissed with a single tap. Instagram introduced a "Take a Break" feature that reminded users after a specified time period. The implementation included a prompt that could be dismissed instantly, no default-on time limit, and no friction mechanism that would interrupt the scroll architecture. The feature created the appearance of a time-management tool without modifying the underlying design that the research had identified as the engagement mechanism. The infinite scroll continued. The autoplay continued. The notification architecture continued. The usage prompt existed alongside the engagement architecture rather than modifying it — a speed limit sign posted on a road with no enforcement.
"Facebook has made numerous changes to protect teen users: restricting DMs from adults, implementing time management tools, hiding like counts in some markets. The claim that nothing was done is factually incorrect."
The distinction is between safety features and architectural modifications. Restricting direct messages from adults to minors addresses a specific safety risk: predatory contact. It does not modify the engagement architecture that the internal research identified as the harm vector. Implementing time management tools addresses the appearance of concern about usage duration. It does not modify the infinite scroll, autoplay, notification, and algorithmic amplification systems that the internal research documented as the mechanisms through which duration produces harm. Hiding like counts in some markets as an opt-in feature addresses the public relations dimension of the like-count research. It does not change the default experience for the population the research identified as harmed.
The pattern is consistent: changes that address specific safety risks without revenue impact were implemented. Changes that address the core engagement architecture — the algorithmic amplification, the comparison engine, the reward loop — with revenue impact were not. Safety features and architectural modifications are different categories. The company implemented the former and declined the latter. This is consistent with the Revenue-Welfare Inversion, not a refutation of it.
The routing of internal welfare research within Facebook's organizational structure is itself a design decision with structural consequences. The research produced by the company's internal teams documenting adolescent harm was routed to legal review rather than product review. This routing decision determined the organizational response.
Legal review and product review are different institutional functions with different outputs. Product review assesses a finding in terms of design modification: given this research, what should the product do differently? The output of product review is a design change — a modification to the algorithm, the interface, the default settings, the content ranking. Product review converts research into product decisions.
Legal review assesses a finding in terms of liability exposure: given this research, what is the company's legal risk? The output of legal review is a legal strategy — privilege assertions, document retention policies, public statement drafting, regulatory positioning. Legal review converts research into liability management. It does not produce product changes because product changes are not within its institutional function.
By routing welfare research to legal rather than product, the company converted a welfare finding into a liability assessment. The organizational consequence was predetermined by the routing decision. Legal review does not have the authority to order product modifications. It does not evaluate design alternatives against welfare outcomes. It evaluates documents against litigation risk. The research that documented adolescent harm was processed through an institutional function that could only produce one category of output: legal strategy. The design modifications that the research recommended were never evaluated by the institutional function that had the authority to implement them — because the research was never routed there.
This is the architecture of non-decision. The company did not decide not to implement the remediation catalog. It routed the remediation catalog to an institutional function that could not implement it. The non-decision is structural. No individual executive needed to reject the welfare recommendations. The organizational architecture rejected them by directing them to a function whose output is legal strategy rather than product design. The remediation catalog entered the legal department and became a liability document. It never emerged as a product specification.
The routing decision itself, of course, was a decision. Someone — or some organizational protocol — determined that research documenting harm to adolescent users would be processed through legal channels rather than product channels. That routing decision is the decision that matters. Everything that followed was the predictable output of the routing architecture. Once the research entered the legal function, the only possible organizational outputs were legal outputs: privilege designation, litigation preparation, regulatory positioning, public relations strategy. The product remained unchanged because the product function never received the research through channels that would have produced product changes.
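The routing argument can be expressed as a type constraint. The sketch below is an illustrative model, not Meta's actual workflow, and every class and function name is invented here. Its point is structural: the legal function's output type contains no product change, so research routed there cannot yield one regardless of what the research says.

```python
from dataclasses import dataclass

@dataclass
class Research:
    finding: str

@dataclass
class LegalStrategy:
    """The only output type legal review can produce."""
    privilege_review: bool
    public_positioning: str

@dataclass
class ProductChange:
    """Producible only by product review."""
    design_spec: str

def legal_review(r: Research) -> LegalStrategy:
    return LegalStrategy(True, f"prepare positioning on: {r.finding}")

def product_review(r: Research) -> ProductChange:
    return ProductChange(f"design modification addressing: {r.finding}")

# The routing decision fixes the category of output before any finding is read.
route = legal_review
result = route(Research("visible like counts drive social comparison in teens"))
print(type(result).__name__)  # LegalStrategy, never ProductChange
```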
The pattern documented in SG-001 through SG-004 is not unique to Facebook. It is a structural pattern that recurs whenever an industry generates internal research documenting harm caused by its own products, and when remediation of that harm would reduce the revenue generated by those products. The tobacco industry provides the clearest historical parallel — and the parallel is precise, not metaphorical.
Tobacco companies' internal research programs, beginning in the 1950s and continuing through the 1990s, produced findings documenting the health effects of smoking. These findings were not limited to harm documentation. They included identification of harm-reduction measures: filter design modifications that would reduce tar and particulate delivery, nicotine reduction strategies, additive modifications that would reduce the formation of specific carcinogens. The remediation catalog existed. It was evaluated. It was not implemented.
The reason it was not implemented follows the same structural logic as the Instagram case. Implementing harm-reduction measures implied acknowledging that the product caused harm. Acknowledging that the product caused harm created liability. The organizational incentive structure therefore disfavored both the acknowledgment and the remediation. The optimal strategy under this incentive structure was to suppress the research, delay the remediation, and publicly contest the evidence of harm — which is what the industry did for approximately four decades.
The organizational logic is identical in both cases: once internal research documents harm, any remediation based on that research functions as an implicit acknowledgment of the harm. The acknowledgment creates liability exposure. The liability exposure exceeds the welfare benefit of the remediation in the organization's decision calculus — because the organization's decision calculus weights financial outcomes, and the liability cost of acknowledging harm exceeds the financial benefit of reducing it. The remediation is therefore foregone not because the organization does not know how to implement it, but because implementing it is structurally adverse to the organization's interests as defined by its incentive architecture.
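Stated as an inequality (the author's formalization of the paragraph above, not a figure from either industry's records), remediation is implemented only if the firm-internal benefit exceeds the firm-internal costs:

```latex
% Author's stylized decision inequality; all symbols introduced here.
% Implement remediation only if:
\[
  B_{\text{rem}} \;>\; C_{\text{rev}} + C_{\text{liab}}
\]
% B_rem:  financial benefit to the firm of reducing the harm
%         (approximately zero, because the harm is externalized onto users)
% C_rev:  engagement revenue foregone by the remediation
% C_liab: liability exposure created by the implicit acknowledgment
%
% With B_rem near zero and both costs positive, the inequality fails,
% and the remediation is foregone.
```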
The TB series (Saga VII) documents this pattern in detail for the tobacco industry. The Instagram case extends the pattern to the attention economy. The specific products are different. The specific harms are different. The organizational logic — the incentive structure that converts internal welfare research into a liability problem rather than a design problem — is the same. The Foregone Remediation is not a coincidence. It is a structural feature of industries in which harm is produced by the same product features that generate revenue.
The evidentiary significance of the Foregone Remediation is specific, and it is distinct from the significance of the internal research itself.
The internal research documented in SG-001 through SG-003 establishes institutional knowledge: the company knew that its platform produced measurable harm in adolescent users. This finding is significant but not, by itself, sufficient to establish the full evidentiary case. A company might know that its product produces harm and be unable to modify the product without destroying its utility. A company might know that its product produces harm and be genuinely uncertain about which modifications would reduce it. A company might know that its product produces harm and be in the process of developing and testing remediation.
The Foregone Remediation eliminates each of these defenses. The company was not unable to modify the product — its own researchers identified specific, technically feasible modifications. The company was not uncertain about which modifications would work — its own researchers projected the welfare impact of each. The company was not in the process of developing remediation — it evaluated the remediation catalog against its revenue impact and declined to implement the modifications that carried revenue costs.
What the Foregone Remediation establishes is the connection between institutional knowledge and institutional inaction. The company knew the harm. The company knew the remediation. The company's organizational architecture was structured to prevent the remediation from being implemented. This is not a case of a company that discovered harm and struggled with the difficult tradeoffs of addressing it. This is a case of a company whose internal structure — the routing of welfare research to legal, the evaluation of product changes against revenue metrics, the opt-in rather than default-on implementation of the modifications that were partially deployed — was designed, intentionally or emergently, to ensure that the knowledge of harm would not produce the remediation of harm.
The Foregone Remediation is the evidentiary link that transforms the Instagram case from a story about unforeseen consequences into a story about organizational architecture. The harm was foreseen. The remediation was identified. The architecture prevented the remediation. The architecture was not modified. The harm continued. Each of these statements is documented in the company's own internal record. The Foregone Remediation is what the company's own evidence proves about the company's own decisions.
This is the structural core of the Instagram case as an evidentiary matter. Not what the company did not know — it knew. Not what the company could not have done — it could have. Not what the company intended — intention is irrelevant to the structural analysis. What the company's organizational architecture was designed to produce: the continuation of the harm-generating design, in the presence of the harm-documenting research, in the absence of the harm-reducing remediation. The Foregone Remediation is not an accusation. It is a description of how the system worked.
Internal: This paper is part of The Instagram Files (SG series), Saga IX. It draws on and contributes to the argument documented across 22 papers in 5 series.
External references for this paper are in development. The Institute’s reference program is adding formal academic citations across the corpus. Priority papers (P0/P1) have complete references sections.