Case Study

Reputation and the Right to Be Known

Uncensorable consensus creates a world of zero accountability. Distributed reputation resolves this without reintroducing centralized authority. But reputation has its own dark side.

The Problem

Zero accountability as structural consequence.

The thought experiment gives us uncensorable coordination. SEAL gives us privacy. EXIT gives us departure rights. Together, these three layers enable any actor to operate with zero accountability.

The dissident is protected. So is the criminal. The medium cannot distinguish between them. That is not a flaw to be patched. It is a structural consequence of building a communication layer that no authority can censor.

Privacy plus uncensorable communication plus the right to leave equals the ability to act without consequence. This is the honest assessment from the thought experiment. The question is what comes next.

The Resolution

NAME: a deep graph of mutual attestation.

Not a score. Scores are gameable, reductive, and owned by whoever runs the platform. NAME is a high-dimensional trust graph. Dense, portable, and belonging to no one but you.

Mutual attestation at every interaction

Every interaction can produce a mutual attestation: "this entity honored our agreement" or "this entity broke it." Both parties sign. Neither can fabricate the other's signature. The record is append-only and cryptographically bound to the participants.
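A minimal sketch of such a record, assuming nothing about NAME's actual wire format. A real system would use asymmetric signatures (e.g. Ed25519); here HMAC with per-party secret keys stands in for signatures so the example stays standard-library only, and `prev_hash` makes the record append-only by chaining each attestation to its predecessor.

```python
import hashlib
import hmac
import json

def sign(secret: bytes, payload: bytes) -> str:
    """Stand-in for a digital signature (real systems: asymmetric keys)."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def mutual_attestation(a_id, a_secret, b_id, b_secret, honored: bool, prev_hash: str) -> dict:
    """Both parties sign the same payload; the record chains to its predecessor."""
    payload = json.dumps({
        "parties": sorted([a_id, b_id]),
        "honored": honored,      # "honored our agreement" or "broke it"
        "prev": prev_hash,       # append-only: each record binds the last
    }, sort_keys=True).encode()
    record = {
        "payload": payload.decode(),
        "sig_" + a_id: sign(a_secret, payload),
        "sig_" + b_id: sign(b_secret, payload),
    }
    record["hash"] = hashlib.sha256(
        payload
        + record["sig_" + a_id].encode()
        + record["sig_" + b_id].encode()
    ).hexdigest()
    return record

rec = mutual_attestation("alice", b"ka", "bob", b"kb", honored=True, prev_hash="genesis")
# Neither party can fabricate the other's signature without the other's key:
assert rec["sig_bob"] == sign(b"kb", rec["payload"].encode())
```

The identifiers and field names are illustrative; the point is the shape: one payload, two signatures, one chained hash.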

Portable and domain-specific

Reputation travels with you via EXIT. It is not locked inside any platform. And it is contextual: being trusted as a plumber does not mean being trusted as a childminder. Trust is earned per domain, not granted universally.

Sybil resistance through graph density

Real participants accumulate thousands of attestations across years. Interlocking relationships, overlapping communities, temporal depth. A fabricated identity has a week of shallow connections. The graph knows the difference without requiring any central registry.
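The density argument can be sketched numerically. The scoring formula below is an assumption for illustration, not part of NAME: it multiplies distinct counterparties, distinct communities, and distinct years of activity, so an identity that is shallow on any axis scores near zero.

```python
# Hypothetical density score: counterparties x communities x temporal depth.
def graph_depth_score(attestations):
    """attestations: list of (counterparty, community, year) tuples."""
    counterparties = {a[0] for a in attestations}
    communities = {a[1] for a in attestations}
    years = {a[2] for a in attestations}
    # All three axes must be deep; a product punishes shallow fakes.
    return len(counterparties) * len(communities) * len(years)

# Years of interlocking relationships across overlapping communities:
veteran = [(f"peer{i}", f"comm{i % 7}", 2015 + i % 9) for i in range(3000)]
# A week-old fabricated identity: many sockpuppets, one bubble, one year:
sybil = [(f"sock{i}", "comm0", 2024) for i in range(50)]

print(graph_depth_score(veteran) > 1000 * graph_depth_score(sybil))  # True
```

No central registry is consulted; the asymmetry falls out of the graph's own topology.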

Contextual trust, not universal scoring

There is no single "reputation score." Trust is a question asked in context: does this entity have a history of honoring agreements of this type, with participants in this domain, over this time period? The answer is always specific. Never a number.

How NAME addresses the dark scenarios

Terrorist coordination

The identity trap

Members either bring their real NAME, making them traceable through social topology, or use a fresh NAME with zero reputation. No one trusts an empty graph. They cannot acquire resources, recruit, or coordinate with anyone outside their cell. The medium is uncensorable. But trust is earned, not assumed.

Parallel legal systems

The hemorrhage cost

Communities that abuse their members lose them. Members who leave carry their attestations with them. A community that hemorrhages participants accumulates collective reputation cost visible to the entire graph. Not shut down by authority. Shunned by the network.

Conspiracy communities

Sovereign but irrelevant

A closed community can reach internal consensus on anything. Their collective attestations carry near-zero weight outside their bubble. They are sovereign. They are also irrelevant. The graph does not suppress them. It simply does not trust them.

The accountability paradox

Identity versus reputation

You can act privately through SEAL. You can communicate freely through SENSUS. You can leave freely through EXIT. But your reputation follows you. Not your identity. Your reputation. The distinction matters. Privacy protects who you are. Reputation records what you have done.

The Dark Side

Reputation creates its own prisons.

This section is not a caveat. It is the core problem. NAME solves accountability. It also creates new forms of confinement that must be addressed honestly.

Inescapable state

You can EXIT a platform. You cannot EXIT your own reputation.

A person who made bad decisions at 19 carries that graph forever. Every broken agreement, every negative attestation, woven into a structure that follows them across every context and community. NAME creates a new prison: not physical confinement, but social confinement. The walls are made of other people's memories, and the doors do not open.

Power-law dynamics

The rich get richer in reputation.

Well-connected nodes accumulate trust faster. They are more visible, more attested, more trusted by default. Newcomers start with nothing. Structural inequality baked into graph topology. The same concentration dynamics that plague financial systems reproduce themselves in trust networks. Different currency, same outcome.

Permanent exclusion

Communities that blacklist create dead zones in the graph.

A former member blacklisted by a large community carries a void where connections used to be. Other communities see the gap. NAME without correction mechanisms recreates class hierarchy through trust concentration. The excluded remain excluded, not by decree, but by topology.

Weaponized attestation

A mob can destroy someone's NAME through coordinated false attestation.

Negative attestations become a social attack when a coordinated group files false claims against a single node. The graph cannot distinguish genuine grievance from organized harassment without additional structure. Reputation becomes a weapon in the hands of the many against the few.

Corrections

Addressing the dark side without reintroducing authority.

Each correction operates within the graph itself. No central authority decides who deserves forgiveness or who is being unfairly excluded. The graph surfaces deviations from healthy norms. Social pressure does the rest.

Cryptographic forgetting

Graduated decay of attestation detail

Old attestations become verifiable-that-they-existed but not verifiable-in-detail. "This node had a negative period ten years ago" without revealing "this node did X specific thing." The fact persists. The specifics fade. This mirrors how human memory actually works: we remember that someone was unreliable in their twenties without remembering every instance. Cryptographic forgetting makes this property structural rather than accidental.
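One way to make "verifiable-that-it-existed but not verifiable-in-detail" concrete is a hash commitment, sketched below under assumed field names. While fresh, the attestation carries its full detail plus a commitment; after the decay window the detail is discarded, and the commitment still proves a negative attestation existed without revealing what was done.

```python
import hashlib

def commit(detail: str) -> str:
    """Hash commitment to the attestation's specifics."""
    return hashlib.sha256(detail.encode()).hexdigest()

attestation = {
    "year": 2015,
    "polarity": "negative",  # the fact that persists
    "detail": "failed to deliver goods after payment",
    "commitment": commit("failed to deliver goods after payment"),
}

def decay(att: dict) -> dict:
    """Graduated decay: drop the specifics, keep the verifiable fact."""
    faded = dict(att)
    faded.pop("detail")      # the specifics fade
    return faded

old = decay(attestation)
# Still provable that a negative period existed; the graph itself no
# longer stores what specifically happened.
assert "detail" not in old
assert old["commitment"] == commit("failed to deliver goods after payment")
```

Anyone who retained the original detail can still prove it matches the commitment, but the graph no longer broadcasts it by default.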

Graph health auditing

The graph detects its own pathologies

When decay rates deviate from healthy norms, the deviation becomes visible. People holding grudges beyond what evidence warrants. Communities operating punitively rather than protectively. The graph surfaces the pattern. It does not enforce a correction. It makes the dysfunction legible so social pressure can operate on accurate information rather than hidden bias.

Natural decay

The Harberger question

We probably cannot control how quickly reputation decays in people's minds. That process is organic, uneven, and deeply human. But a graph audit can surface when participants are being harsher than roughly 15% natural annual decay, or when bias concentrates against specific nodes or communities. The audit does not override human judgment. It contextualizes it.
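A decay audit could be sketched as follows. The 15% figure comes from the text; the weighting scheme and the flagging threshold are assumptions. Under ~15% annual decay, a negative attestation's "felt weight" should fall to roughly 0.85^age; counterparties still weighting old grievances far above that baseline get surfaced.

```python
NATURAL_ANNUAL_DECAY = 0.15  # from the text; illustrative baseline

def expected_weight(age_years: float) -> float:
    """Weight an attestation 'should' carry under natural decay."""
    return (1 - NATURAL_ANNUAL_DECAY) ** age_years

def audit(observations):
    """observations: list of (age_years, weight_still_applied).
    Flags grudges held far beyond the natural-decay baseline."""
    flags = []
    for age, applied in observations:
        if applied > 2 * expected_weight(age):  # 2x threshold is an assumption
            flags.append((age, applied))
    return flags

# A 10-year-old grievance should weigh ~0.85**10 ≈ 0.20; still weighting
# it at 0.9 is a grudge the audit surfaces. Recent weight is untouched.
print(audit([(10, 0.9), (1, 0.8), (10, 0.15)]))  # [(10, 0.9)]
```

The audit output is informational: it contextualizes human judgment rather than overriding it.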

EXIT as reputation reset

Departure carries topology, not judgment

Leaving one context for another carries your graph structure, which prevents Sybil attacks while allowing recontextualization. You do not escape your past, but you can demonstrate that your present is different. The new community sees your graph density (proof of real participation) without inheriting the old community's specific grievances. You start with trust earned through structural proof, not through the lens of whoever you left behind.
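What a departing member carries might look like the summary below: structural proof of real participation (density, temporal depth) with the old community's specific claims deliberately absent. Field names are illustrative.

```python
# Hypothetical exit summary: topology travels, judgment does not.
def exit_summary(attestations):
    counterparties = {a["peer"] for a in attestations}
    years = {a["year"] for a in attestations}
    return {
        "counterparties": len(counterparties),  # proof of real participation
        "years_active": len(years),             # temporal depth
        # deliberately absent: polarity and detail of individual attestations
    }

history = [{"peer": f"p{i}", "year": 2016 + i % 8, "honored": i % 5 != 0}
           for i in range(400)]
print(exit_summary(history))  # {'counterparties': 400, 'years_active': 8}
```

The receiving community can verify the structure is real (a Sybil cannot fake it cheaply) without importing anyone else's verdicts.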

The Irreducible Minimum

The one-shot problem.

NAME addresses accountability for repeated interactions. It creates cost for betrayal, incentives for cooperation, and consequences for patterns of harm. But it does not address the one-shot attack: someone willing to burn their entire NAME for a single act of destruction.

The cost is high. The damage is done.

Years of accumulated trust, destroyed in a single action. Starting over from zero in every context, permanently marked by the graph's memory of what happened. The cost is real and severe.

But the damage is already done.

This is the irreducible minimum of harm in any free system. Even in physical reality, a person willing to sacrifice everything can cause destruction. No reputation system, no legal system, no surveillance apparatus eliminates this possibility entirely. The question is not whether it can be prevented. It is whether the damage can be bounded.

The resolution probably lies elsewhere in the architecture. Not in reputation, but in deterministic power constraints that timebox effects. Preventing any single action from cascading beyond a bounded scope. Power gated behind deterministic checkpoints that prevent simultaneous activation.

Coming Soon

Deterministic power constraints. The next layer in the architecture. Not yet documented here.

The Architecture So Far

Each layer addresses a dark side of the previous layers.

No single layer is safe alone. Circular dependency is the point.

Layer 1

EXIT: The right to leave

Protects against confinement. No platform, community, or institution can hold you against your will. Departure is unconditional and verifiable.

Layer 2

SEAL: The right to privacy

Protects against surveillance. Your actions, communications, and associations are yours to reveal or conceal. Privacy is not secrecy. It is sovereignty over your own information.

Layer 3

SENSUS: The right to agree on truth

Protects against censorship and manipulation. No authority can prevent groups from reaching genuine agreement or fabricate consensus on their behalf.

Layer 4

NAME: The right to be known by your actions

Protects against unaccountability. Your reputation is portable, contextual, and unforgeable. You are known by what you have done, not by what any institution claims about you.

Layer 5

MANTLE: Deterministic power constraints

Protects against coordinated attacks and cascading harm. Power gated behind deterministic checkpoints. The final layer that bounds the damage any single actor can cause. Coming soon.

Closing

Reputation without privacy is surveillance.
Privacy without reputation is impunity.
Together: you are known by what you choose to do, not by what others choose to see.


EXIT Protocol · Case Studies