On March 31, 2026, a routine Claude Code update accidentally bundled a source map file that exposed approximately 59.8 MB of Anthropic's internal source code — 1,906 files, 513,000 lines of proprietary code. Anthropic confirmed the incident. SecurityWeek, The Register, Fortune, and VentureBeat have all covered it. This is no longer a rumor.
There is now an active Vidar malware campaign targeting developers who search “leaked Claude Code” on Google. Fake GitHub repositories are distributing trojanized executables that install GhostSocks, a proxy backdoor. If you downloaded anything claiming to be the “leaked Claude Code” and ran an executable, treat your machine as compromised.
What Anthropic Has Confirmed
Anthropic issued a public statement acknowledging that a source map file was mistakenly included in a Claude Code package update. The exposed bundle contained internal tooling, prompt scaffolding, and system infrastructure code. Anthropic stated the exposure window was limited and that no customer data or API credentials were included. An internal review was initiated immediately.
Independent researchers confirmed the file counts and sizes before Anthropic's statement was published. The source map exposure is a build-pipeline failure — not a breach in the traditional sense. No attacker exfiltrated anything; the files were briefly publicly accessible in a signed update package.
⚠️ Active Malware Warning: The Fake “Leak” Downloads
This is the most urgent information in this article, and it was absent from most early coverage.
Within 48 hours of the story breaking, threat actors created fake GitHub repositories with names like anthropic-claude-code-leak, claude-source-dump, and leaked-claude-internal. These repos include README files with plausible-looking file trees and link to ZIP archives or EXE installers. Running the executable installs Vidar, an information-stealing malware, which then deploys GhostSocks — a SOCKS5 proxy backdoor that persists on the machine and routes attacker traffic through the victim's connection.
What to do if you ran the executable
- Isolate the machine immediately — disconnect from your network, VPN, and corporate systems.
- Rotate all credentials accessible from that machine — API keys, SSH keys, cloud provider credentials, GitHub tokens, database passwords.
- Audit outbound connections from the infected machine over the past 72 hours using firewall or EDR logs.
- Do not just run antivirus — GhostSocks is designed to survive standard AV. Assume a full wipe and restore from a known-clean backup is required.
- Alert your security team — if this happened on a corporate device, this is an active incident, not a personal cleanup task.
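The outbound-connection audit in step three can start from a log export rather than a live console. Here is a minimal sketch that filters a CSV export of firewall or EDR logs to outbound connections in the last 72 hours — the column names (`timestamp`, `direction`, `dest_ip`, `dest_port`) and the CSV layout are assumptions; adapt them to whatever schema your tooling actually exports. Connections to unfamiliar hosts, especially on common proxy ports such as 1080, deserve the closest scrutiny.

```python
import csv
import io
from datetime import datetime, timedelta, timezone

def recent_outbound(log_csv, now=None, window_hours=72):
    """Return (dest_ip, dest_port) pairs for outbound connections
    within the last `window_hours`.

    Assumes a CSV export with `timestamp` (ISO 8601), `direction`,
    `dest_ip`, and `dest_port` columns -- a hypothetical schema;
    adjust the field names to your firewall or EDR product.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=window_hours)
    hits = []
    for row in csv.DictReader(io.StringIO(log_csv)):
        ts = datetime.fromisoformat(row["timestamp"])
        if row["direction"] == "outbound" and ts >= cutoff:
            hits.append((row["dest_ip"], row["dest_port"]))
    return hits

# Example export: one stale outbound, one recent outbound, one inbound.
sample = """timestamp,direction,dest_ip,dest_port
2026-03-28T00:00:00+00:00,outbound,203.0.113.7,443
2026-04-01T09:00:00+00:00,outbound,198.51.100.9,1080
2026-04-01T09:05:00+00:00,inbound,192.0.2.4,22
"""

now = datetime(2026, 4, 1, 12, 0, tzinfo=timezone.utc)
print(recent_outbound(sample, now=now))  # only the recent outbound row survives
```

This is triage, not forensics — it narrows the window so your security team knows which destinations to investigate first.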
The malicious repos were reported to GitHub and several were taken down, but mirror copies and fresh forks continue appearing. Do not download anything claiming to be “the Claude source code” from any unofficial source.
What the Exposed Code Actually Contains
Researchers who reviewed the source map before takedown reported that the exposed files primarily cover Claude Code's internal orchestration — the scaffolding that manages tool calls, context windows, and the session loop. There are references to internal model routing logic and prompt construction templates.
What the files do not appear to contain: model weights, training data, customer conversation logs, or API authentication infrastructure. The practical competitive risk is real but bounded — a competitor gains insight into Anthropic engineering patterns, not the models themselves. For most development teams using Claude Code commercially, the immediate risk is not from the source exposure. It is from the malware campaign exploiting the story.
The 6-Hour Incident Playbook for Development Teams
If your leadership channel is asking whether your team is exposed, this is the sequence for the first six hours. It is designed to be fast and repeatable.
Hour 0–1: Stop silent spread
- Pause nonessential AI automations in CI and release tooling.
- Disable broad prompt logging where possible.
- Open one incident thread with clear ownership and timestamps.
- Check whether anyone on the team searched for or downloaded anything related to the leaked code.
Hour 1–2: Rotate secrets as if exposure already happened
- Rotate API keys and service tokens tied to build, deploy, and data access.
- Revoke stale personal access tokens and machine users no one has reviewed recently.
- Prioritize secrets with production or customer-data scope first.
Hour 2–4: Prove or disprove data movement
- Search logs for signature patterns: internal hostnames, customer IDs, credential prefixes, repository-specific strings.
- Audit outbound requests from IDE plugins and automation agents during the suspected window.
- Capture immutable snapshots of relevant logs before retention jobs prune them.
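The signature search in the first bullet can begin with a simple pattern scan. In this sketch, the credential prefixes are publicly documented formats (Anthropic API keys begin with `sk-ant-`, classic GitHub personal access tokens with `ghp_`, AWS access key IDs with `AKIA`); internal hostnames, customer-ID formats, and repository-specific strings are yours to add.

```python
import re

# Publicly documented credential prefixes; extend this dict with your
# own internal hostnames, customer-ID formats, and repo-specific strings.
SIGNATURES = {
    "anthropic_api_key": re.compile(r"sk-ant-[A-Za-z0-9_-]{8,}"),
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_lines(lines):
    """Yield (line_number, signature_name) for every matching line."""
    for n, line in enumerate(lines, start=1):
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                yield (n, name)

log = [
    "GET /v1/messages 200",
    "debug: key=AKIAIOSFODNN7EXAMPLE",  # AWS's documented example key ID
    "user pasted: -----BEGIN RSA PRIVATE KEY-----",
]
print(list(scan_lines(log)))
```

Run it across your immutable log snapshots, not the live sinks, so the evidence you search is the evidence you preserved.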
Hour 4–6: Communicate what is true
- Publish a short internal update with facts, unknowns, and next update time.
- Give support and customer teams a single approved statement so they do not improvise.
- List immediate controls now active so stakeholders see action instead of ambiguity.
Hardening Your AI Stack So the Next Incident Hurts Less
Control the input path
- Add pre-send redaction rules for obvious sensitive patterns.
- Block high-risk files from assistant context by default — env files, private keys, internal config.
- Require explicit opt-in before tools access full-repository context.
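The first two controls above could be enforced with a pre-send hook along these lines. The redaction patterns and blocked filenames are illustrative defaults, not a complete policy; the hook point itself depends on where your tooling lets you intercept outgoing context.

```python
import re
from pathlib import PurePath

# Illustrative defaults -- tune both lists to your environment.
REDACT_PATTERNS = [
    (re.compile(r"sk-ant-[A-Za-z0-9_-]{8,}"), "[REDACTED:anthropic-key]"),
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "[REDACTED:github-token]"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED:aws-key]"),
]
BLOCKED_NAMES = {".env", "id_rsa", "id_ed25519", "credentials.json"}

def redact(text):
    """Replace obvious secret patterns before a prompt leaves the machine."""
    for pattern, replacement in REDACT_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

def allowed_in_context(path):
    """Block high-risk files from assistant context by default."""
    name = PurePath(path).name
    return name not in BLOCKED_NAMES and not name.endswith(".pem")

print(redact("export KEY=AKIAIOSFODNN7EXAMPLE"))  # key is masked
print(allowed_in_context("config/.env"))          # env files are blocked
```

A denylist like this is deliberately conservative: false positives cost a developer a moment of friction, while a false negative puts a credential in someone else's logs.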
Control the storage path
- Shorten retention for AI prompt and response logs.
- Segment observability streams so engineering analytics never gets raw secrets.
- Run weekly scans over log sinks for credential patterns and private identifiers.
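Shortened retention can be as simple as a scheduled job that prunes prompt and response logs past a cutoff. The flat `*.log` directory layout and 30-day window below are assumptions; the dry-run default lets you verify what would be removed before enabling deletion.

```python
import time
from pathlib import Path

def prune_old_logs(log_dir, max_age_days=30, now=None, dry_run=True):
    """Return AI prompt/response logs older than `max_age_days`,
    deleting them when dry_run is False.

    Assumes logs live as flat `*.log` files under `log_dir` -- a
    hypothetical layout; adapt the glob to your sink's structure.
    """
    now = now or time.time()
    cutoff = now - max_age_days * 86400
    stale = []
    for path in Path(log_dir).glob("*.log"):
        if path.stat().st_mtime < cutoff:
            stale.append(path)
            if not dry_run:
                path.unlink()
    return stale
```

Schedule it alongside the weekly credential scan so retention and detection shrink the same window of exposure together.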
Control the people path
- Train developers on what not to paste into prompts. Keep it practical and short.
- Write a one-page AI incident first-response doc and rehearse it quarterly.
- Define who can approve high-risk assistant integrations before they go live.
What This Signals for the AI Tooling Industry
The Claude Code source exposure is not an isolated incident. It reflects a structural challenge in the AI tooling industry: software moves from experimental to production-critical faster than security practices adapt. Buyers are now asking harder questions: Where is data processed? How long is it retained? Can we audit it? Can we shut it off quickly?
In 2024, teams mostly asked whether AI coding tools made developers faster. In 2026, that question still exists, but it has a twin: can we trust this in production when pressure spikes? The teams that keep shipping through incidents like this will be the ones who can answer on demand: what did the system see, what did it store, and who can prove it?
Frequently Asked Questions
Has the Claude Code source leak been officially confirmed?
Yes. Anthropic confirmed the incident. The exposure involved approximately 59.8 MB of internal source code across 1,906 files and 513,000 lines. The cause was a source map file accidentally bundled into a production update package. Coverage appeared in SecurityWeek, The Register, Fortune, and VentureBeat.
What is the Vidar malware risk connected to this incident?
Threat actors are using the story to distribute malware through fake “Claude Code leak” repositories on GitHub. Downloads contain Vidar, an information stealer, which installs GhostSocks — a SOCKS5 proxy backdoor. If you downloaded and ran any executable from an unofficial source claiming to be the leaked code, isolate the machine immediately and treat it as compromised.
Was customer data or API credentials included in the leak?
According to Anthropic's statement, no customer data or API authentication credentials were included. The exposed files primarily cover internal tooling, orchestration scaffolding, and system infrastructure. Model weights and training data were not part of the exposure.
Should engineering teams stop using Claude Code?
Not necessarily. The direct risk to teams using Claude Code legitimately is low — the source exposure does not affect how the tool functions or compromise user data. The primary immediate risk is the malware campaign. Continue using Claude Code through official channels only, and apply the hardening steps above.
For teams evaluating their toolchain while the situation stabilizes: our roundup of AI code generation tools in 2026 covers which assistants have strong security track records and enterprise hardening options. Also relevant: the AI accountability and legal landscape in 2026 — source-code incidents like this are precisely what new regulations are now targeting.
What should engineering teams do in the first hour?
Pause high-risk automations, reduce prompt logging, open a single incident channel, check whether anyone downloaded files from unofficial sources, and start key rotation for systems with production access.
