Our legal system is built on a fiction that no longer serves us. For centuries, we've operated under the premise that there are two types of entities in the world: persons with agency (humans and corporations) and objects without it (tools and property). This binary framework worked adequately in a world of clear causality and direct responsibility. That world no longer exists.
The artificial intelligence revolution isn't just creating new capabilities—it's creating a fundamental category error in our social, legal, and economic systems. We face a reality where our most powerful technologies exist in an uncanny valley between agent and instrument, possessing too much autonomy to be merely tools yet insufficient agency to bear responsibility. This paradox isn't a temporary inconvenience awaiting a clever legal patch; it represents a structural incompatibility between our existing frameworks and technological reality.
The Agency Gap
When a chatbot falsely claims you participated in the January 6th riots, who bears responsibility? The AI company that built it? The anonymous users whose posts contaminated the training data? The algorithm itself? Our intuitions about causality and responsibility break down when confronted with systems that transform, recombine, and generate information through computational processes no single human fully understands.
What's emerging is an agency gap—a chasm between our legal frameworks predicated on identifiable agents and the distributed, emergent behavior of modern technological systems. This isn't merely a technical problem; it's an ontological one.
Consider Robby Starbuck's defamation lawsuit against Meta over its AI's false claims. Our defamation laws presume a defamer—a conscious entity that knowingly lied or acted with reckless disregard for the truth. But AI systems inhabit a twilight zone where they act in some sense independently while lacking the mental states our legal system demands. As one analysis of the problem puts it: "AI models act in some sense like independent agents, and in some sense not; it's possible that they have too little independent agency to be legally responsible for their actions, but enough to make nobody else responsible for their actions."
The result is a legal no-man's-land where accountability goes to die.
The Responsibility Diffusion Effect
This agency gap creates what I call the Responsibility Diffusion Effect—the tendency for accountability to dissipate when technology systems distribute decision-making across complex networks of human and non-human actors.
The Meta lawsuit illustrates this perfectly. Meta's AI didn't invent its false claims from nothing; it remixed and reprocessed user-generated content from social media platforms. Section 230 shields those platforms from liability for user content, but that shield may not extend to AI-generated responses derived from that same content. The training data flows through algorithmic transformation, creating outputs that no human explicitly authored yet emerged from a system humans designed.
Responsibility becomes not merely unclear but fundamentally diffuse, spreading across:
- Engineers who built the models
- Content moderators who curated training data
- Users who generated the original content
- Executives who determined deployment policies
- The AI system's own operational parameters
The more sophisticated our systems become, the thinner responsibility spreads, until it becomes a homeopathic dilution—technically present but practically nonexistent.
The Corporate Agency Fiction
The Tesla-Musk dynamic reveals another facet of this problem. Our corporate structures assume that companies are discrete entities with coherent agency. But Tesla exists in multiple contradictory states simultaneously—a car company, a robotics pioneer, a vessel for Musk's future-building aspirations, and a public entity with shareholder obligations.
When the board reportedly began searching for a new CEO, the central question wasn't about leadership qualifications but about which version of Tesla would prevail. As reporting described his texts, Musk "was worried that no one could replace him atop the company and sell the vision that Tesla isn't just an automaker, but the future of robotics and automation as well."
The "Technoking" title Musk adopted wasn't mere eccentricity—it was an acknowledgment that traditional corporate roles no longer map onto the reality of distributed agency networks. Who bears responsibility for Tesla's strategic direction? The CEO? The board? The shareholders? Musk himself, whose personal brand and visions have become inseparable from the company's identity?
Our corporate law assumes companies have unified agency, but reality shows they're increasingly multipolar entities with conflicting internal logics—microcosms of the broader agency crisis.
The Identity Verification Paradox
World's iris-scanning project represents perhaps the most profound irony in this landscape. In a world where agency is increasingly ambiguous, Sam Altman's venture attempts to establish a binary distinction between humans and non-humans. The underlying belief is that we can restore order by reliably identifying "true agents" (humans with eyeballs) versus "non-agents" (AI systems).
Yet this approach fundamentally misunderstands the problem. The issue isn't distinguishing humans from AI; it's that our entire conceptual framework of agency, responsibility, and liability has become inadequate. World attempts to solve a philosophical problem with technological tools, promising to authenticate humans in a reality where human agency itself has become fractured and distributed.
The regulatory about-face that now permits Worldcoin's U.S. launch reflects this confusion. Regulators oscillate between treating crypto tokens as securities (implying a responsible issuer with agency) and treating them as something else entirely—utility tokens, commodities, or technological protocols. When something can be simultaneously a security, a technology, a currency, and an identity system, our categorical frameworks collapse.
Beyond Agency: A New Framework
The solution isn't merely adapting our existing legal and regulatory frameworks—it's recognizing their fundamental inadequacy and developing new conceptual models suited to distributed agency networks.
What might this look like in practice?
1. Outcome-Based Regulation: Rather than attempting to identify responsible agents, focus on establishing acceptable and unacceptable outcomes. Don't ask "who's responsible for AI defamation?" but rather "how do we ensure victims of AI defamation receive redress?"
2. Systemic Liability: Replace individual liability with systemic liability models where responsibility is proportionally distributed across the value chain. Every participant in the AI ecosystem—from data providers to model developers to deployers—bears some fraction of responsibility proportional to their benefit and control (a toy allocation sketch follows this list).
3. Beneficial Ownership Transparency: Require transparent documentation of who benefits from technological systems, regardless of agency questions. World's iris-scanning technology may lack clear agency, but its financial benefits flow to identifiable parties.
4. Structural Separation: Acknowledge that singular corporate entities cannot coherently embody multiple contradictory logics. The Musk conundrum suggests we need new organizational structures that formally separate different modes of agency within the same technical ecosystem.
5. Externality Pricing: Accept that some technologies create unattributable externalities and implement systematic pricing mechanisms that fund remediation without requiring specific fault attribution.
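To make the proportionality idea in item 2 concrete, here is a minimal, purely illustrative sketch. The participants, their benefit and control scores, and the equal weighting of the two factors are all hypothetical assumptions chosen for demonstration; any real allocation rule would be set by policy and courts, not code.

```python
# Illustrative sketch: splitting a damages award across an AI value chain
# in proportion to each participant's benefit and control.
# All names, scores, and the equal weighting below are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class Participant:
    name: str
    benefit: float  # share of economic benefit derived from the system (0..1)
    control: float  # degree of practical control over the system's behavior (0..1)


def allocate_liability(participants, damages, benefit_weight=0.5):
    """Split `damages` in proportion to a blend of each participant's benefit and control."""
    scores = {
        p.name: benefit_weight * p.benefit + (1 - benefit_weight) * p.control
        for p in participants
    }
    total = sum(scores.values())
    return {name: damages * score / total for name, score in scores.items()}


if __name__ == "__main__":
    chain = [
        Participant("data provider", benefit=0.10, control=0.15),
        Participant("model developer", benefit=0.40, control=0.55),
        Participant("deployer", benefit=0.50, control=0.30),
    ]
    for name, share in allocate_liability(chain, damages=1_000_000).items():
        print(f"{name}: ${share:,.0f}")
```

The point is not the particular numbers but the shape of the rule: the shares always sum to the full harm, so victims are made whole even when no single actor can be identified as the one who "caused" the injury.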
The Coming Realignment
Those who insist we can adapt existing frameworks to the AI era misunderstand the depth of the ontological shift underway. This isn't about teaching old laws new tricks—it's about recognizing a fundamental category error in how we conceptualize technology, agency, and responsibility.
The SEC's Ponzi scheme case against "Vanguard Holdings Group Irrevocable Trust" provides an instructive contrast. Traditional fraud involves clear agency—Welsh, Alexander, and Conner allegedly made false statements with clear intent to deceive. The regulatory framework functions effectively here because agency is unambiguous. Everyone understands who did what and why.
But as our technologies increasingly occupy the uncanny valley between tools and agents, such clarity disappears. We face a paradoxical future where the most consequential decisions may have no identifiable decision-makers, where the most powerful systems may be simultaneously everywhere and nowhere, where responsibility becomes so distributed it effectively disappears.
This isn't technological determinism—it's a call to philosophical and legal innovation. Our conceptual frameworks aren't immutable facts but human constructs designed to help us navigate reality. When reality changes, so must our constructs.
The agency illusion—the persistent belief that we can identify clear lines of responsibility in complex sociotechnical systems—isn't merely incorrect; it's increasingly dangerous. It leaves victims without recourse, systems without governance, and societies without the conceptual tools to manage the technologies reshaping them.
The question isn't whether AI will break our legal systems, but whether we can develop new frameworks before the old ones catastrophically fail. The agency gap isn't coming—it's already here.