Platforms that shape citizen information environments should be treated as information fiduciaries — with affirmative duties of loyalty that constrain data use and attention monetization.
A fiduciary is a person or entity legally bound to act in the interest of another party. The concept is ancient in common law and operates across multiple domains of contemporary legal practice. What distinguishes the fiduciary relationship from ordinary commercial relationships is the duty of loyalty: an affirmative obligation to act in the beneficiary's interest, including — and this is the critical feature — when the beneficiary's interest conflicts with the fiduciary's own. The duty is not merely to avoid harm. It is to serve the other party's interest as the primary constraint on the fiduciary's behavior.
Three established fiduciary relationships demonstrate the principle. Physicians bear a fiduciary duty to patients: they must recommend treatments that serve the patient's medical interest, even when alternative treatments would be more profitable for the physician. Attorneys bear a fiduciary duty to clients: they must pursue the client's legal interest, even when doing so requires actions that are inconvenient, unprofitable, or contrary to the attorney's preferences. Financial advisors operating under the fiduciary standard must recommend investments that serve the client's financial interest, even when alternative products would generate higher commissions for the advisor.
In each case, the duty arises from a specific structural condition: one party entrusts something of value to another party who has the power to use that trust for their own benefit at the beneficiary's expense. The fiduciary duty exists precisely because the relationship creates the opportunity for exploitation. The legal framework converts a relationship of vulnerability into a relationship of obligation. Jack Balkin of Yale Law School has argued, in a series of influential papers beginning in 2016, that this framework applies directly to the relationship between digital platforms and their users — and that the failure to recognize this application has produced an information environment in which entities with immense power over citizens' epistemic lives operate without the legal obligations that comparable power relationships have historically required.
The argument is structural, not analogical. It does not claim that platforms are "like" doctors. It claims that the conditions that give rise to fiduciary duties in law — entrustment, power, vulnerability, inability to monitor — are present in the platform-user relationship as a matter of documented fact. The legal category already exists. What is missing is its application to the entities that most pervasively shape the information environments of democratic citizens.
Three conditions establish a fiduciary relationship in common law. All three are present in the relationship between digital platforms and their users. The conditions are not matters of interpretation. They are observable features of the relationship as it currently operates.
Entrustment of something of value. Users entrust platforms with personal data — behavioral data, social graph data, location data, communication data, search history, purchase history, biometric data. This data has economic value, which is why platforms collect it. But the entrustment extends beyond data. Users entrust platforms with their information environment: the algorithmic curation that determines what news they see, what opinions they encounter, what facts they are exposed to, and what content is suppressed or amplified in their feeds. Users entrust platforms with their attention — a finite cognitive resource that the platform monetizes through advertising. The entrustment is comprehensive and ongoing. It encompasses data, epistemic environment, and cognitive resources.
Power that creates vulnerability. Platforms exercise power over users' information environments that users cannot reciprocally exercise over the platform. The platform determines the algorithmic ranking that shapes the user's information diet. The platform determines which content is amplified and which is suppressed. The platform determines the design of the interface — including dark patterns that exploit cognitive vulnerabilities to increase engagement, data sharing, or time on platform. The power asymmetry is not marginal. A single platform (Meta) shapes the information environment of over three billion people. The decisions made by a small number of engineers and product managers about algorithmic ranking, notification design, and content moderation policy affect the epistemic lives of more people than any government policy in history.
Inability to monitor or constrain. Users have effectively no ability to monitor how platforms use the trust relationship. Algorithmic curation is proprietary and opaque. Users cannot determine why specific content appears in their feeds, what data is being collected about them in real time, how that data is being sold or shared, or what experiments are being conducted on their behavior. The Terms of Service that nominally govern the relationship are unilateral, non-negotiable, and incomprehensible to the vast majority of users. The information asymmetry between the platform and the user is as extreme as the information asymmetry between a physician and a patient — and in the medical context, that asymmetry is precisely what gives rise to the fiduciary duty.
The three conditions are met. The relationship is fiduciary in structure. What it lacks is the legal recognition that would attach the duty of loyalty. The consequence of this gap is an information environment in which entities with greater power over citizens' epistemic lives than any institution in history operate with no affirmative obligation to serve the interests of the citizens whose information environments they control.
Under a fiduciary framework, platforms would bear an affirmative duty not to use the trust relationship — the data, the attention, the information environment — in ways that conflict with users' interests. This duty would constrain specific practices that are currently standard in the platform business model.
Data exploitation against user interest. The sale or sharing of user data with third parties whose interests conflict with the user's interest would be constrained. A platform that collects data about a user's mental health struggles and sells that data to advertisers targeting vulnerable populations is using the trust relationship against the user's interest. A platform that collects data about a user's political views and sells that data to political operatives seeking to manipulate the user's voting behavior is using the trust relationship against the user's interest. The fiduciary duty would not prohibit all data use — it would prohibit data use that conflicts with the user's interest, as fiduciary duties do in every other domain.
Dark patterns that exploit cognitive vulnerabilities. Interface design elements that exploit documented cognitive vulnerabilities to increase engagement against users' stated preferences would be constrained. Infinite scroll that overrides the user's intention to stop browsing. Notification patterns designed to trigger compulsive checking behavior. Default settings that maximize data sharing and require affirmative action to reduce it. These design patterns use the trust relationship — the user's engagement with the platform — against the user's interest in managing their own attention and data. Under a fiduciary framework, the platform's interest in maximizing engagement would be subordinated to the user's interest in an interface that respects their cognitive autonomy.
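To make the default-settings point concrete, the sketch below contrasts the two default regimes as configuration objects. The setting names and values are hypothetical, chosen only to make the asymmetry visible; they are not drawn from any actual platform's configuration.

```python
# Hypothetical settings objects illustrating the default-settings asymmetry.
# Under the current model, data sharing is on by default and the user must
# act to reduce it; under a fiduciary model, the protective value is the
# default and data sharing requires affirmative opt-in.

CURRENT_DEFAULTS = {
    "share_activity_with_advertisers": True,   # user must opt out
    "infinite_scroll": True,                   # browsing continues unless the user resists
    "push_notifications": "all",               # every engagement trigger enabled
}

FIDUCIARY_DEFAULTS = {
    "share_activity_with_advertisers": False,  # user must opt in
    "infinite_scroll": False,                  # paginated by default
    "push_notifications": "mentions_only",     # minimal interruption by default
}
```

The substantive rule is identical in both regimes; only the starting point differs. The fiduciary model recognizes that, at platform scale, the default is the policy for the overwhelming majority of users.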
Recommendation algorithms that degrade informational quality. When engagement optimization and informational quality conflict — as they systematically do — the fiduciary duty would require that informational quality take priority. A recommendation system that amplifies outrage content because outrage generates engagement, despite the documented effect of outrage amplification on users' epistemic environments, is using the trust relationship against users' epistemic interest. The duty would not require that platforms abandon recommendation systems. It would require that recommendation systems not degrade users' information environments in pursuit of engagement metrics.
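The priority ordering can be stated precisely as a ranking constraint: engagement may order results, but only among items that clear an informational-quality floor. The sketch below illustrates the idea; the scoring functions and the threshold are hypothetical stand-ins for whatever measures a platform or independent auditor would actually certify.

```python
# Sketch of a fiduciary-constrained ranker: informational quality is a hard
# constraint; engagement only orders the items that clear it. The quality
# floor and scoring functions here are hypothetical assumptions.

QUALITY_FLOOR = 0.6  # hypothetical floor a regulator or auditor might certify

def fiduciary_rank(items, quality_score, engagement_score):
    """Rank by engagement, but only among items meeting the quality floor."""
    eligible = [item for item in items if quality_score(item) >= QUALITY_FLOOR]
    return sorted(eligible, key=engagement_score, reverse=True)

# Toy usage: outrage content scores highest on engagement but is excluded
# because it falls below the quality floor.
items = [
    {"id": "outrage",   "quality": 0.2, "engagement": 0.9},
    {"id": "report",    "quality": 0.8, "engagement": 0.5},
    {"id": "explainer", "quality": 0.7, "engagement": 0.6},
]
ranked = fiduciary_rank(items, lambda i: i["quality"], lambda i: i["engagement"])
print([i["id"] for i in ranked])  # ['explainer', 'report']
```

The design choice mirrors the fiduciary structure itself: the platform's interest (engagement) operates freely, but only inside the boundary set by the user's interest (quality).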
Monetization of outrage and conflict. The amplification of content that generates engagement through outrage, fear, and intergroup conflict would be constrained when the amplification degrades users' epistemic environments. The platform's financial interest in engagement-driven advertising revenue would be subordinated to users' interest in an information environment that supports informed judgment. This is the direct analog of the physician who may not prescribe an addictive drug because it is profitable rather than medically beneficial, or the financial advisor who may not recommend a high-commission product that does not serve the client's returns.
"Imposing fiduciary duties on platforms would stifle innovation and harm the digital economy." — Imposing fiduciary duties on physicians did not stifle medical innovation. Imposing fiduciary duties on financial advisors did not destroy the financial services industry. In both cases, the duty of loyalty constrained specific practices that harmed the beneficiary while preserving the underlying business model. The information fiduciary framework does not prohibit advertising, recommendation systems, or data-driven personalization — it constrains these practices when they conflict with users' interests. The objection treats any constraint on platform behavior as a threat to innovation; the fiduciary model treats the absence of constraint as a threat to the public interest.
The information fiduciary is not a novel legal category created from whole cloth. It is the application of an established legal framework to a new context. Understanding how the platform analogy maps onto existing fiduciary relationships clarifies both what the duty would require and what it would not.
The physician-patient analogy. Physicians can profit from the relationship with patients. They can charge fees, earn salaries, and operate within profit-generating institutions. What they cannot do is prescribe treatments that serve the physician's financial interest at the expense of the patient's health. A physician who prescribes an unnecessary procedure because it generates revenue is violating the fiduciary duty — not because profit is prohibited but because profit that conflicts with the patient's interest is prohibited. The platform analog: platforms can monetize the relationship through advertising. What they cannot do — under a fiduciary framework — is shape the user's information environment in ways that serve the platform's engagement metrics at the expense of the user's epistemic wellbeing.
The financial advisor analogy. Financial advisors operating under the fiduciary standard can earn commissions and fees. What they cannot do is recommend investments that serve the advisor's interest at the expense of the client's returns. The suitability standard (the weaker, non-fiduciary standard) requires only that recommendations be "suitable" — not that they be in the client's best interest. The difference between the suitability standard and the fiduciary standard is the difference between "not actively harmful" and "affirmatively in the client's interest." Platforms currently operate under something weaker than even the suitability standard: they have no affirmative obligation regarding the quality of the information environment they create for users.
The attorney-client analogy. Attorneys can profit from representing clients. What they cannot do is use the trust relationship — the confidential information, the legal position, the vulnerability of the client — in ways that serve the attorney's interest at the expense of the client's. Attorney-client privilege exists because the relationship requires the client to entrust information that could be used against them. The platform analog is direct: users entrust platforms with information — behavioral, social, personal — that can be and is used against their interests. The legal framework that protects the client in the attorney relationship does not exist in the platform relationship.
In each of these analogies, the fiduciary duty does not prohibit the fiduciary from conducting business or earning revenue. It constrains the fiduciary from using the specific trust relationship against the specific interests of the specific beneficiary. The information fiduciary would operate on the same principle: not a prohibition on the platform business model but a constraint on the practices within that model that conflict with users' interests.
Current regulatory approaches to platform governance focus overwhelmingly on consent. The GDPR, the California Consumer Privacy Act, and comparable frameworks operate on the principle that users should be informed about data practices and given the opportunity to consent or decline. The consent model has three structural failures that the fiduciary model addresses.
The comprehension problem. Meaningful consent requires that the consenting party understand what they are consenting to. Privacy policies average 4,000 to 6,000 words. A 2008 study by McDonald and Cranor estimated that reading all the privacy policies a typical internet user encounters would require approximately 244 hours per year. The policies are written in legal language that is incomprehensible to most users. The data practices they describe are technically complex and their consequences are opaque. Users cannot meaningfully consent to practices they cannot meaningfully understand. The consent model requires comprehension that the consent mechanism structurally prevents.
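The 244-hour figure can be reconstructed with back-of-envelope arithmetic. The sketch below uses parameters of roughly the magnitude the study worked with (on the order of 1,500 distinct policies encountered per year, each taking about ten minutes to read, with 2008-era policies shorter than the 4,000 to 6,000 words typical today); treat the specific values as illustrative assumptions rather than quotations from the study.

```python
# Back-of-envelope reconstruction of the McDonald & Cranor (2008) estimate.
# The parameter values below are illustrative assumptions, not figures
# quoted directly from the study.

POLICIES_PER_YEAR = 1462   # distinct sites with privacy policies visited annually (assumed)
WORDS_PER_POLICY = 2500    # 2008-era average policy length in words (assumed)
READING_SPEED_WPM = 250    # adult reading speed, words per minute (assumed)

minutes_per_policy = WORDS_PER_POLICY / READING_SPEED_WPM      # ~10 minutes
hours_per_year = POLICIES_PER_YEAR * minutes_per_policy / 60   # ~244 hours

print(f"{minutes_per_policy:.0f} minutes per policy")
print(f"{hours_per_year:.0f} hours per year")  # roughly six 40-hour work weeks
```

Even under these conservative assumptions, informed consent would demand the equivalent of six full work weeks per year of reading, which is why the comprehension failure is structural rather than a matter of user diligence.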
The temporal problem. Consent is a one-time event. The platform relationship is ongoing and evolving. A user who consents to a platform's data practices at the moment of account creation is nominally consenting to practices that will change — through updated terms of service, new features, algorithmic modifications, and new data partnerships — over the duration of the relationship. The consent given at time zero does not meaningfully cover the practices that emerge at time one hundred. The fiduciary duty, by contrast, is an ongoing obligation that tracks the ongoing nature of the relationship. It does not require periodic re-consent. It requires continuous compliance with the duty of loyalty.
The power asymmetry problem. Consent between parties of roughly equal bargaining power has legal meaning. Consent between a multi-hundred-billion-dollar corporation and an individual user — where the user's alternative is to forgo access to the dominant communication and information infrastructure of their society — is a legal fiction. The user does not negotiate the terms. The user does not have meaningful alternatives. The user clicks "I agree" because participation in the digital information environment requires it. This is not consent in any legally meaningful sense. It is acquiescence to non-negotiable terms under conditions of structural dependency. The fiduciary model does not require consent from the beneficiary. It imposes obligations on the fiduciary regardless of what the beneficiary agrees to — because the power asymmetry that gives rise to the duty also invalidates the consent that the weaker party can offer.
The consent model treats the platform-user relationship as a market transaction between equal parties. It is not. It is a relationship of entrustment, power, and vulnerability — the conditions that give rise to fiduciary duties. The regulatory framework should match the structure of the relationship it governs.
Implementing the information fiduciary framework would require specific legal and institutional changes. The framework is not self-executing. It requires statutory establishment, definitional precision, enforcement mechanisms, and coordination with existing regulatory structures.
Statutory establishment. Congress would need to establish the information fiduciary as a legal category through legislation. The statute would define which entities qualify as information fiduciaries — likely based on criteria including the volume of personal data collected, the degree to which the entity shapes users' information environments through algorithmic curation, and the number of users affected. Not every website or app would qualify. The category is designed for entities whose relationship to users meets the three fiduciary conditions: entrustment of value, power creating vulnerability, and inability to monitor.
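To make the definitional task concrete, the sketch below expresses the three-condition test as a qualification function. Every field name and threshold is a hypothetical placeholder, not language from Balkin's papers or from any pending bill; a statute would fix these criteria and values through the legislative process.

```python
from dataclasses import dataclass

@dataclass
class PlatformProfile:
    """Facts about an entity relevant to fiduciary classification (illustrative fields)."""
    monthly_active_users: int
    collects_personal_data: bool     # behavioral, location, biometric, etc.
    algorithmic_curation: bool       # ranks or filters users' information feeds
    users_can_audit_practices: bool  # meaningful external transparency exists

# Hypothetical statutory threshold -- a real statute would set this value.
USER_THRESHOLD = 10_000_000

def is_information_fiduciary(p: PlatformProfile) -> bool:
    """Sketch of the three-condition test described in this paper:
    entrustment of value, power creating vulnerability, inability to monitor."""
    entrustment = p.collects_personal_data                                        # condition 1
    power = p.algorithmic_curation and p.monthly_active_users >= USER_THRESHOLD   # condition 2
    opacity = not p.users_can_audit_practices                                     # condition 3
    return entrustment and power and opacity

# Example: a large feed-ranking platform with opaque practices qualifies.
example = PlatformProfile(3_000_000_000, True, True, False)
assert is_information_fiduciary(example)
```

The point of the sketch is that the category is bounded: a small site that collects little data and does no algorithmic curation fails the test, while a feed-ranking platform operating at scale with opaque practices meets it.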
Definition of the duty of loyalty. The statute would need to define the duty of loyalty in the platform context with sufficient specificity to guide compliance and enforcement. This would include: a prohibition on using personal data in ways that conflict with users' interests; a prohibition on interface design that exploits documented cognitive vulnerabilities; a requirement that recommendation algorithms not systematically degrade users' information environments; and a requirement for transparency about the data practices and algorithmic systems that affect users' information environments.
Enforcement mechanisms. The duty would be enforceable through multiple channels: FTC enforcement authority for systemic violations; state attorney general enforcement for violations affecting state residents; and a private right of action for individual users or classes of users harmed by violations of the fiduciary duty. The enforcement architecture should be modeled on existing fiduciary enforcement — combining regulatory oversight with private litigation to create comprehensive accountability.
Safe harbors. Platforms that demonstrate good-faith compliance with the fiduciary duty through documented practices — transparent algorithmic auditing, independent review of data practices, proactive elimination of dark patterns — would receive safe harbor protection from certain categories of enforcement action. The safe harbor incentivizes compliance without requiring perfection and provides legal certainty for platforms that invest in meeting the fiduciary standard.
Coordination with existing frameworks. The information fiduciary framework would complement, not replace, existing data protection frameworks. The GDPR's data minimization and purpose limitation principles align with the fiduciary duty. State privacy laws provide additional protections. The fiduciary framework adds what these frameworks lack: an affirmative, ongoing duty of loyalty that constrains platform behavior beyond the minimum requirements of data protection law. The information fiduciary is the legal component of the Civic Architecture — the complement to the institutional component (public media, AR-001) and the design component (civic technology, AR-003).