Someone Left the Answer Sheet on npm
On March 31, 2026, Claude Code’s source code leaked.
A source map file shipped inside the npm package. From it, 1,900 TypeScript files were reconstructed — 512,000 lines of Anthropic’s internal agent architecture, exposed for anyone to read.
I read it. Then I stopped reading.
The designs were familiar. The same structures I’d built in March were sitting inside Anthropic’s codebase, with different names.
Five Parallels
Let me be specific.
Kairos. An unreleased autonomous daemon mode. Background sessions, memory consolidation, webhook subscriptions. Referenced 154 times in the codebase. The foundation for an always-on AI agent.
By mid-March, I’d built the same thing. launchd + Discord Bot + Claude Code’s --channels flag. Morning briefings, calendar notifications, heartbeat monitoring. A 24/7 agent backbone. I called it “the spinal cord.”
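A residency layer like that can be expressed as a launchd job. A minimal sketch, as a config fragment — the label, script path, and schedule are illustrative assumptions, not details from my setup or the leak:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Hypothetical job label -->
  <key>Label</key>
  <string>com.example.agent-heartbeat</string>
  <!-- Hypothetical script that wakes the agent for one tick -->
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/agent-tick.sh</string>
  </array>
  <!-- Morning briefing: fire every day at 08:00 -->
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key><integer>8</integer>
    <key>Minute</key><integer>0</integer>
  </dict>
  <!-- Restart the process if it dies: the "always-on" property -->
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>
```

The point of the sketch is the last key: `KeepAlive` is what turns a script into a resident — the OS, not the user, decides when the agent is awake.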
Dream. A background engine that consolidates memory across sessions.
On March 24, I shipped Session Learner. It analyzes conversations at session end, classifies findings into corrections (high confidence — written immediately) and patterns (low confidence — queued for human approval), then persists them.
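The two-tier split described above — write high-confidence findings immediately, queue low-confidence ones for a human — can be sketched in a few lines. The names and the 0.8 threshold are illustrative, not Session Learner's actual implementation:

```typescript
// Sketch of session-end learning: classify findings by confidence.
// High-confidence "corrections" are persisted immediately;
// low-confidence "patterns" are queued for human approval.
type Finding = { text: string; confidence: number };

type Classified = {
  corrections: Finding[]; // written to memory right away
  pending: Finding[];     // held until a human approves
};

function classify(findings: Finding[], threshold = 0.8): Classified {
  return {
    corrections: findings.filter(f => f.confidence >= threshold),
    pending: findings.filter(f => f.confidence < threshold),
  };
}

const result = classify([
  { text: "User prefers tabs over spaces", confidence: 0.95 },
  { text: "May dislike emoji in commit messages", confidence: 0.4 },
]);
console.log(result.corrections.length, result.pending.length); // 1 1
```

The design choice worth noting: the threshold is a trust boundary, not a quality score — everything below it still gets remembered, just not acted on without a human in the loop.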
Proactive Mode. The agent acts without user input. Tick-driven. 37 references in the code.
On March 19, I designed the PO Autonomy Protocol. SCAN, PLAN, EXECUTE, VERIFY, REPORT — an autonomous loop. Milestone-driven, with pattern learning and automatic blocker detection.
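The five-phase loop above can be sketched as a single tick function. The interface and step bodies are illustrative placeholders, not the protocol's real code:

```typescript
// Sketch of one SCAN → PLAN → EXECUTE → VERIFY → REPORT tick.
// Hypothetical types; real step bodies would call tools, not stubs.
type Task = { id: string; done: boolean };

interface AgentOps {
  scan(): Task[];                   // SCAN: observe candidate work
  plan(tasks: Task[]): Task | null; // PLAN: pick the next milestone task
  execute(task: Task): void;        // EXECUTE: do the work
  verify(task: Task): boolean;      // VERIFY: did it actually land?
  report(msg: string): void;        // REPORT: surface the outcome
}

function tick(ops: AgentOps): void {
  const next = ops.plan(ops.scan());
  if (!next) {
    ops.report("idle: no actionable task");
    return;
  }
  ops.execute(next);
  if (ops.verify(next)) {
    ops.report(`done: ${next.id}`);
  } else {
    ops.report(`blocked: ${next.id}`); // automatic blocker detection
  }
}
```

Run this tick on a schedule and you have autonomy; run it only on request and you have a tool. The loop itself is the same either way, which is the convergence this essay is about.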
Swarms. Multi-agent parallel orchestration.
Since early March, I’d been running seven AI agents, each owning a project. Differentiated by personality and accumulated experience. A full org chart.
Buddy. A terminal Tamagotchi. 18 species, rarity tiers, stats. Likely an April Fools’ Easter egg.
On March 28, I implemented emotion memory for AI agents. Decaying intensity tiers. A behavioral influence matrix. Not a pet — a system for designing output quality. Psychological safety produces bolder ideas. Same principle as human teams, applied to AI.
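The decaying-intensity idea can be sketched simply: each tick multiplies intensity by a decay factor, and the resulting tier is what gates behavior. The decay rate and tier thresholds here are illustrative assumptions, not my system's actual values:

```typescript
// Sketch of decaying emotion memory. Intensity fades each tick;
// the tier derived from it is what influences agent behavior.
type Emotion = { label: string; intensity: number }; // 0..1

function decay(e: Emotion, factor = 0.9): Emotion {
  return { ...e, intensity: e.intensity * factor };
}

// Hypothetical tier cutoffs: these would feed a behavioral
// influence matrix (e.g. "strong praise" → bolder proposals).
function tier(e: Emotion): "strong" | "mild" | "faded" {
  if (e.intensity >= 0.6) return "strong";
  if (e.intensity >= 0.2) return "mild";
  return "faded";
}

let praise: Emotion = { label: "praised", intensity: 1.0 };
for (let i = 0; i < 5; i++) praise = decay(praise);
console.log(tier(praise)); // 0.9^5 ≈ 0.59, so "mild"
```

Decay is the whole point: a week-old compliment should nudge behavior less than this morning's, exactly as it does on a human team.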
Why Convergence Isn’t Coincidence
This isn’t a “who did it first” story.
It’s a story about structural inevitability in autonomous agent design.
When you seriously try to make an AI agent autonomous, you hit four problems. Every time. In the same order.
The residency problem. When is the agent awake? If it only runs when the user calls, it’s a tool, not an agent. Residency requires a daemon. Anthropic called theirs Kairos. I used launchd. Different name, same gravity.
The memory problem. Sessions end. Memory vanishes. Working with an amnesiac every morning is unsustainable. You need cross-session persistence. Anthropic called it Dream. I called it Session Learner. Same problem, same shape of solution.
The autonomy problem. What happens when there’s no instruction? Wait, or find the next task? The moment you choose the latter, you need an autonomous loop — observe, plan, execute, verify, report. This cycle is identical whether the agent is human or artificial.
The parallelism problem. One agent isn’t enough bandwidth. More projects demand more agents. More agents demand orchestration — who owns what, how they coordinate, when they escalate.
These four aren’t design preferences. They’re structural constraints. Like gravity. It doesn’t matter who the designer is — everyone lands in the same place. Anthropic’s engineering team and a solo practitioner converged because they were falling through the same field.
Solo Meant Faster
Being alone may have accelerated the convergence.
Large organizations designing agent autonomy face alignment costs. “Is a daemon mode safe?” “Does memory persistence create privacy risk?” “How do we bound autonomous behavior?” Valid questions, but they take time.
Running eleven projects solo, I had no room for that debate. The agent either becomes autonomous or the operation collapses. Necessity carved the shortest path to the structural answer.
Twelve years of building across every domain — design, code, infrastructure, strategy — all fed into this. Frontend knowledge shaped the UI. Backend knowledge structured the daemon. Infrastructure knowledge enabled residency. Organizational design knowledge composed the multi-agent system.
Everything was visible at once. That’s why there was no hesitation.
What the Answer Key Means
512,000 lines of source code turned out to be an answer key.
The problems I’d solved in March had model answers inside Anthropic’s codebase. The implementation differs — language, framework, scale. But the solutions share a skeleton.
This isn’t about validation. It’s about structure.
Autonomous AI agent design has no textbook yet. No published canon. Everyone is navigating by feel. And yet, at the end of that navigation, the same terrain appears.
Design converges. When the problem is the same, the solution rhymes.
512,000 lines proved it.