On March 30, 2026, Anthropic accidentally published Claude Code version 2.1.88 to npm along with a source map file that could reverse the bundling process and recover the original source. Within hours, Chaofan Shou, an intern at blockchain startup Solayer Labs, had decoded the bundle and published the result. The repository was forked more than 41,500 times before Anthropic managed to take it down, making it one of the fastest-growing repositories in GitHub history. The exposed codebase contained over 512,000 lines across 1,900 files, and analysis of what was inside has sent shockwaves through the AI industry.
The leak revealed 44 hidden features that paint a picture far more ambitious than what Anthropic has publicly described. The most significant discovery is KAIROS, an always-on background agent subsystem designed to run continuously without user prompting. KAIROS includes a logic system called autoDream, which actively merges duplicate memories, eliminates contradictions, resolves speculative entries, and prunes stored information to make it more actionable. In practical terms, this means Claude would not just respond to requests but would constantly refine its understanding of your work, preferences, and context between sessions.
Other notable features include ULTRAPLAN, a remote planning mode capable of handling resource-intensive tasks in the cloud for up to 30 minutes per session. There is also a voice interface, a persona system called Buddy that provides commentary on your work as you code, and an “undercover mode” that allows the agent to commit files to public git repositories without leaving its signature. The leak also exposed codenames for models that have not been publicly announced, giving competitors a preview of Anthropic’s product roadmap.
The root cause was a missing .npmignore file. When companies ship closed-source JavaScript, a bundler typically minifies the source into compact, effectively unreadable output. Anthropic’s build process did this correctly, but the published npm package also included the source map file, which maps the minified bundle back to the original source files line by line. One missing configuration line turned a routine software update into the biggest source code leak in AI history.
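The fix for this class of mistake is a one-line publish exclusion. A minimal sketch, assuming a standard npm layout (the file names and patterns here are illustrative, not Anthropic’s actual build output):

```
# .npmignore — keep generated source maps out of the published tarball
*.map
```

An allowlist is safer still: setting the `files` field in `package.json` (e.g. `"files": ["dist/cli.js"]`) means new build artifacts are excluded by default rather than included by default, and `npm publish --dry-run` prints exactly which files would ship before anything goes live.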
What the leak reveals about Anthropic’s strategy is arguably more important than the specific features. What was being marketed as a coding assistant is actually the foundation of a complete operating system for AI agents. The architecture supports persistent memory, autonomous background processing, multi-modal interaction, and cloud-based resource allocation. The gap between the Claude that users interact with today and the version Anthropic has been building internally is substantial.
Anthropic’s official response was measured. The company acknowledged the leak but stated it was not accidental, a claim that drew skepticism from the developer community given that the exposure resulted from a basic packaging error. Whether it was truly intentional (perhaps as a controlled preview to gauge market interest) or a genuine mistake will likely remain debated. Either way, every major competitor now has a detailed blueprint of where Anthropic is headed.
For the broader AI ecosystem, this leak confirms that autonomous, persistent AI agents are not months away but weeks away, and that every major player now knows exactly what the others are building.