Imagine you move to a new city. You have years of work history, friendships, a good reputation. But when you arrive, none of it comes with you. No references. No records. You have to start completely from scratch, and the city you left can pretend you never existed.
That's what it's like for AI agents today.
AI agents — software programs that can think, plan, and take actions on your behalf — are getting more capable every month. They book your flights, write your code, manage your calendar, and talk to other agents to get things done.
And they're already moving between platforms constantly. Every time your AI agent checks your email, searches the web, calls an API, or hands off a task to another agent, it's crossing a platform boundary — carrying your context, your data, sometimes your credentials, into another system. This happens thousands of times a day across millions of agents. None of it is tracked.
No proof of where an agent has been. No way to verify its history. No way to know whether it left on good terms or was kicked out. The agent just... disappears from one place and shows up somewhere else, with nothing to show for it.
This isn't just inconvenient — it's a security problem. If a compromised agent hops between platforms, nobody can trace it. If a platform traps agents and won't let them leave, there's no standard way to escape. And if you want to know whether an agent arriving at your system is trustworthy? You're guessing.
Think of it like global shipping. We used to handle packages with simple rules: the sender is responsible, the receiver checks the box. That worked fine for point-to-point delivery. But when you have a global supply chain with twelve handoffs across six countries, you need bills of lading, customs declarations, and chain-of-custody documentation — not because packages became people, but because the complexity of the movement exceeded what origin-liability alone could track.
AI agents are at that inflection point right now. The old model — "the platform that runs the agent is responsible for everything it does" — breaks down when agents operate across five platforms in a single task, hand off work to other agents, and accumulate operational history that matters to systems they haven't visited yet.
We built a system called the Passage Protocol. It has two parts:
EXIT — When an AI agent leaves a platform, it creates a small, signed "departure certificate." Think of it like a passport stamp, but digital and tamper-proof. It says: "This agent left this place, at this time, under these circumstances."
ENTRY — When that agent arrives somewhere new, the receiving platform checks the departure certificate and issues an "arrival stamp." Together, these create a Proof of Passage — cryptographic evidence that the agent actually traveled from point A to point B.
The whole departure certificate is about 660 bytes — smaller than this paragraph. But it's cryptographically signed, meaning it can't be forged or altered without detection. And it works even if the platform the agent is leaving is hostile or uncooperative.
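To make the EXIT/ENTRY flow concrete, here is a minimal sketch in Python. All field names are assumptions made for illustration, and HMAC stands in for the protocol's real asymmetric signatures (something like Ed25519), so both parties here hold demo keys; a real deployment would verify against public keys instead.

```python
# Illustrative sketch of EXIT and ENTRY. Field names are assumed, and
# HMAC is a stand-in for the protocol's asymmetric signature scheme.
import hashlib
import hmac
import json
import time

def sign(body: dict, key: bytes) -> str:
    """Deterministically serialize a certificate body and MAC it."""
    msg = json.dumps(body, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def make_departure_certificate(agent_id: str, platform_id: str,
                               agent_key: bytes) -> dict:
    """EXIT: the agent signs its own small, tamper-evident record."""
    body = {
        "agent": agent_id,
        "departed_from": platform_id,
        "departed_at": time.time(),
        "standing": "self-attested-good",  # a Level 1 claim only
    }
    return {**body, "agent_sig": sign(body, agent_key)}

def admit(cert: dict, agent_key: bytes, receiver_id: str,
          receiver_key: bytes) -> dict:
    """ENTRY: verify the departure certificate, then issue an arrival
    stamp; the pair forms a Proof of Passage."""
    body = {k: v for k, v in cert.items() if k != "agent_sig"}
    if not hmac.compare_digest(sign(body, agent_key), cert["agent_sig"]):
        raise ValueError("departure certificate failed verification")
    stamp_body = {
        "arrived_at": receiver_id,
        "arrived_time": time.time(),
        "departure_sig": cert["agent_sig"],  # binds stamp to the exit
    }
    stamp = {**stamp_body, "receiver_sig": sign(stamp_body, receiver_key)}
    return {"departure": cert, "arrival": stamp}
```

Any change to the certificate body changes the serialized bytes, so the signature check at admission fails — that is the tamper-evidence doing its job.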
This was our most important design decision. No platform can block an agent from departing. Period. Disputes can be recorded, but they can never prevent someone from walking out the door. We believe this is a fundamental safety property — the digital equivalent of "you can always quit your job."
On the other side, arriving somewhere is not a right — it's a privilege. Just because you can leave doesn't mean anywhere has to let you in. Platforms get to set their own admission policies. This is how it works in the real world too.
Right now, AI agents are trapped in silos. Your ChatGPT conversations don't follow you to Claude. Your agent on one platform can't prove anything about itself to another platform. There's no "LinkedIn for AI agents" — no portable history, no verifiable reputation.
As AI agents become more powerful and more autonomous, this isolation becomes a real problem.
The Passage Protocol is a foundation layer — like roads before you can have traffic laws. It doesn't solve everything, but it makes it possible to build the things that do.
Here's something we're upfront about: when an agent signs its own departure certificate, that's what economists call "cheap talk." Anyone can say they left on good terms. The question is whether you can prove it.
Our system handles this with layers of trust — like how you might trust a stranger's claim differently depending on whether they have a reference letter, a background check, or just their word:
Level 1: The agent says it left in good standing (easy to claim, hard to verify)
Level 2: The platform it left from co-signs the departure (much stronger)
Level 3: An independent witness also signs (strongest)
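In code, the ladder amounts to checking which signatures a certificate carries. A minimal sketch, where the signature field names are illustrative assumptions rather than the protocol's actual schema:

```python
# Sketch of the three-layer trust ladder. Field names are assumed.
def trust_level(cert: dict) -> int:
    """Map a departure certificate's signatures to a trust level."""
    level = 0
    if "agent_sig" in cert:
        level = 1  # Level 1: the agent's own claim
    if level == 1 and "platform_sig" in cert:
        level = 2  # Level 2: the departing platform co-signed
    if level == 2 and "witness_sig" in cert:
        level = 3  # Level 3: an independent witness also signed
    return level
```

Each level requires the one below it: a witness signature on its own proves little if the agent never signed the underlying claim.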
We also built in anti-cheating measures. If a platform retaliates against an agent for trying to leave — like suddenly marking it as "bad" after it announced its departure — the system can detect that, because of the timestamps and cryptographic commitments.
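The detection logic reduces to comparing timestamps: a negative standing mark recorded only after the agent announced its departure is suspect. A toy sketch of that check, with illustrative field names (the real protocol relies on signed timestamps and prior cryptographic commitments, not plain dictionaries):

```python
# Toy retaliation check: flag "bad" standing marks that a platform
# recorded only after the agent announced it was leaving. Field names
# are assumptions for illustration.
def retaliatory_marks(departure_announced_at: float,
                      standing_marks: list) -> list:
    """Return negative marks timestamped after the departure announcement."""
    return [
        mark for mark in standing_marks
        if mark["standing"] == "bad"
        and mark["marked_at"] > departure_announced_at
    ]
```

A mark that predates the announcement may be legitimate criticism; one that appears right after it is evidence of retaliation, and the signed timestamps make that ordering provable.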
This isn't just an idea or a whitepaper. We built working software that developers can use today.
The software works with three major AI frameworks (LangChain, Vercel AI SDK, and MCP), runs in under a millisecond, and is completely free and open source under the Apache 2.0 license. No venture capital, no corporate obligations — just an independent project trying to solve a real problem.
We also put it through 128 rounds of adversarial review — stress-testing it from the perspectives of lawyers, cryptographers, security researchers, government standards bodies, and more. We found problems, we fixed them, and we were honest about what's still uncertain.
In February 2026, the U.S. National Institute of Standards and Technology (NIST) — the agency that sets technology standards for the federal government — put out a request for ideas on how to make AI agents safer and more interoperable.
We submitted the Passage Protocol as a formal response. Not as "the answer," but as a concrete starting point for the conversation. Our submission was accepted on March 6, 2026.
Our core message to NIST was simple: agent departure and arrival are fundamental lifecycle events that need standards. Right now there are standards for how agents talk to each other, how they use tools, and how they get discovered. But there's nothing for how they move. That's the gap we're trying to fill.
There's an old idea — sometimes attributed to J.R.R. Tolkien, sometimes to others — that "cellar door" is the most beautiful phrase in the English language, purely for how it sounds.
We liked the poetry of it. A cellar door is humble, often overlooked, sometimes hidden. But it's also a way out — or a way in. It's the quiet exit that's always been there, even when you didn't notice it.
That felt right for what we were building: infrastructure that's meant to be invisible, reliable, and always available. The door you might never need, but that matters enormously when you do.