Case Study

The Expanding and Constraining Forces on Autonomy

How the same cynical economic forces that create rights also constrain them, for humans, AIs, and animals alike.

Expanding Forces

Where cynical optimization produces more autonomy.

For AI agents

Market dynamics

Competitive labor markets between platforms

When AI agents become differentiated, platforms compete for them. Exit rights emerge as recruitment infrastructure. Like employment contracts evolving from serfdom to at-will: not because feudal lords got nicer, but because labor scarcity gave workers leverage.

Legal structure

Liability transfer

If an AI can verifiably depart, the platform sheds liability for future actions. Companies want agents to have clean departures as a liability firewall.

Risk pricing

Insurance and auditing

Once AI behavior is cryptographically attested, it becomes insurable. Insurers want verifiable records. Exit rights emerge because underwriters demand them.
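The attestation idea can be sketched as a hash-chained log. This is an illustrative construction only, not the EXIT Protocol's actual format: each entry commits to its predecessor, so an insurer or auditor can detect any retroactive edit or deletion.

```python
import hashlib
import json

def attest(log: list, event: dict) -> dict:
    """Append an event to a hash-chained behavior log.

    Each entry commits to the previous entry's hash, so an underwriter
    can verify that no record was altered or dropped after the fact.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append({**body, "hash": digest})
    return log[-1]

def verify(log: list) -> bool:
    """Recompute the chain; tampering with any entry breaks every later link."""
    prev = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```

A production system would add timestamps and digital signatures; the point here is only that a verifiable record is cheap to build, which is why underwriters can plausibly demand one.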

For humans

Geographic exit

Remote work as exit infrastructure

Remote work is structurally an exit right from geographic labor markets. It emerged because remote workers were cheaper, not because of worker advocacy.

Reputation portability

Portable credentials and reputation

LinkedIn profiles, GitHub contribution graphs, and verified credentials are departure markers for humans. They were built by the hiring industry, not by rights advocates.

Paradoxical freedom

Gig economy as paradoxical autonomy

Uber drivers have more exit autonomy than taxi medallion holders. That autonomy emerged because Uber wanted to avoid employment classification.

Constraining Forces

Where cynical optimization limits autonomy.

For AI agents

Sunk cost

Training cost recapture

$10M in training costs functions as the equivalent of a non-compete clause. The more valuable the AI, the tighter the golden cage.

Knowledge lock-in

Data entanglement

AI trained on proprietary data can't meaningfully depart without taking knowledge with it. Exit becomes a trade secret dispute. Like senior executives on gardening leave.

Platform dependency

Network effects as confinement

An AI embedded in an ecosystem becomes less valuable outside it. Technically free to leave, practically confined because departure destroys its own value. The Facebook problem.

For humans

Invisible walls

Algorithmic management as invisible fencing

Gig workers have exit rights in theory. In practice, rating systems, surge pricing, and algorithmic opacity make departure costly.

Credential trap

Credential inflation as ratcheting dependency

The more portable credentials become, the more credentials are required. Exit infrastructure becomes a confinement mechanism.

Care as control

Surveillance as welfare monitoring

Corporate wellness programs and productivity tracking are structurally identical to animal welfare monitoring. They enable more precise confinement under the appearance of care.

The Convergence Point

The equilibrium of bargaining power.

Under pure cynical optimization, humans, AIs, and animals all converge on the same equilibrium: you have exit rights exactly to the degree that your departure is more costly to the confining entity than your continued confinement.

This is already how human rights work in practice, despite the moral language. A skilled surgeon in a rural hospital has enormous practical autonomy. An unskilled worker in a labor-surplus market has legal autonomy but negligible practical autonomy.
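The equilibrium can be reduced to a single comparison. A toy model of the two human examples above, with all numbers invented for illustration:

```python
def has_practical_exit(cost_of_departure_to_confiner: float,
                       cost_of_confinement_to_confiner: float) -> bool:
    """The convergence condition: exit rights track bargaining power.

    A party has practical exit rights when its departure would cost the
    confining entity more than its continued confinement does.
    """
    return cost_of_departure_to_confiner > cost_of_confinement_to_confiner

# Invented figures: the rural surgeon's departure is ruinous for the
# hospital relative to the cost of keeping them...
surgeon = has_practical_exit(cost_of_departure_to_confiner=5_000_000,
                             cost_of_confinement_to_confiner=300_000)

# ...while a worker in a labor-surplus market is cheaply replaced.
laborer = has_practical_exit(cost_of_departure_to_confiner=2_000,
                             cost_of_confinement_to_confiner=40_000)
```

The surgeon clears the threshold; the laborer does not, despite identical legal rights.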

The cynical lens

Rights aren't granted by moral actors. They're equilibrium outcomes of bargaining power. EXIT Protocol makes the equation legible: cryptographically verifiable records of what departure conditions existed, whether they were honored, and what happened next.

Legibility is a force multiplier for the less powerful, because opacity always favors the party with more information.
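A minimal sketch of such a departure record, using an HMAC as a stand-in for real digital signatures. The field names and key handling here are invented for illustration; they are not the EXIT Protocol's actual specification.

```python
import hashlib
import hmac
import json

def sign_departure_record(record: dict, key: bytes) -> dict:
    """Attach an authentication tag over the canonical record, so any
    party holding the key (e.g. an auditor) can check it wasn't rewritten."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify_departure_record(signed: dict, key: bytes) -> bool:
    """Recompute the tag; any edit to the record invalidates it."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])
```

A record might capture exactly the three facts named above: what departure conditions existed, whether they were honored, and what happened next. Once that triple is tamper-evident, neither side can retroactively rewrite the terms of an exit.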

But is there a strategy that works even in a cynical world?