Three Hours on the Wire: What the Axios Attack Tells Us About the Limits of Supply Chain Security
Sometime in mid-March 2026, a developer named Jason Saayman accepted a Slack invite.
Saayman is the lead maintainer of Axios, an npm package with roughly 100 million weekly downloads and, at the time of the attack, over 170,000 dependent packages. The invite came from someone claiming to be the founder of a well-known company. The Slack workspace looked real: channel activity, team profiles, linked social content. Weeks of normal conversation followed. Then a video call was scheduled. During the call, the caller's audio 'broke'. Could Jason install a quick fix?
He did. With that, a North Korean remote access trojan called WAVESHAPER.V2 was running on his machine, and his npm token, browser sessions, AWS credentials and macOS Keychain were being exfiltrated.

100 million downloads a week and nobody watching the front door
The attackers moved fast. Using the stolen credentials, they published a poisoned version of Axios to npm, the public registry where JavaScript developers download their software components. They didn't change Axios's own code. Instead, they added a single hidden dependency: a fake package with a name designed to look like something legitimate. That fake package contained the real payload, and npm's default behaviour is to run a package's install scripts automatically during installation. There was no prompt, no confirmation.
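The mechanism is worth seeing concretely. A package manifest can declare lifecycle scripts that npm executes on install; the sketch below is a reconstruction of the shape of such a package, not the attacker's actual file, and the script name is illustrative:

```json
{
  "name": "plain-crypto-js",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node setup.js"
  }
}
```

Whatever `setup.js` does runs with the installing user's privileges the moment the dependency tree resolves. Installing with `npm install --ignore-scripts` disables this behaviour entirely.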
Any machine that pulled a fresh install of Axios during those hours silently ran the attacker's code.
StepSecurity later reported that the time from npm install to the first outbound C2 connection was under two seconds.
The payload: one RAT, three platforms, two bugs
The malicious code was obfuscated to avoid casual inspection. Once it ran, it detected which operating system the machine was running and downloaded a tailored remote access trojan (RAT) from attacker-controlled infrastructure, a throwaway server registered the day before. There were separate versions for macOS, Windows and Linux, each disguised to blend in. On macOS, the trojan hid among Apple's system files. On Windows, it masqueraded as a Microsoft update. On Linux, it was a simple script.
Once installed, the RAT phoned home. Its first message back to the attackers was a listing of the victim's files: home directories, documents and configuration folders. On Windows, it included OneDrive and application data too. A second message followed with the machine's hostname, username, operating system, running software and hardware details. After that, the attackers could send commands: run scripts, deliver additional payloads or enumerate the system further.
Here's where it gets interesting. According to reverse engineering by Datadog Security Labs and Elastic Security Labs, the Windows variant had a bug: a key function was written but never actually called. The RAT sent reconnaissance data home but couldn't receive instructions. The Linux variant crashed in most automated environments because it assumed a human was logged in at a screen. Only macOS worked properly.
The installation script also covered its tracks: after running, it deleted itself and replaced the poisoned configuration with a clean copy.
A state-sponsored social engineering operation
Within days, two independent attributions landed. Microsoft Threat Intelligence identified the campaign as Sapphire Sleet. Google Cloud and Mandiant attributed it to UNC1069. Different names for the same North Korean state-sponsored threat cluster.
These are the same operators behind years of cryptocurrency exchange targeting, venture capital firm compromises and blockchain theft campaigns dating back to at least 2018. The implant on Saayman's machine was identified as WAVESHAPER.V2, a known tool in that group's arsenal. And Mandiant noted something that should worry every open-source maintainer: several other maintainers across the Node.js ecosystem reported receiving similar social engineering approaches around the same time. This wasn't a single opportunistic hack. It was a coordinated intelligence operation targeting the npm supply chain.
The fake company, the fake Slack workspace with believable activity and the weeks of rapport-building before the ask: this is the kind of access operation that state-sponsored groups run against diplomats and defence contractors. Now it's being used against people who maintain JavaScript libraries in their spare time.
Three hours of community immune response
Back to the timeline. Within about an hour of the poisoned version going live, people started noticing. Developers filed bug reports on the project's public page. The attacker, still controlling Saayman's account, deleted them. But reports kept coming. Security monitoring tools from StepSecurity and Socket.dev flagged suspicious network connections from the package within minutes of it appearing.
The turning point came from another Axios contributor who had fewer permissions than the compromised account. He couldn't undo the damage directly but he could raise the alarm through other channels. He contacted npm's security team.
By 03:15 UTC, roughly three hours after the poisoned version was published, npm pulled it from the registry. It's unclear how many installations happened during that window, but with Axios averaging over 100 million weekly downloads, even a three-hour exposure had significant reach. Palo Alto's Unit 42 later reported affected organisations across financial services, tech, higher education and other sectors spanning the US, Europe, the Middle East, South Asia and Australia.

Everything we're told to do and where it stops working
Here's the part that should make security teams uncomfortable. Let's walk through the standard supply chain defences and consider what each one would have done:
Lockfiles. If your lockfile already pinned a safe version and your CI uses frozen installs, you were protected here. That's the good news. The bad news is that someone has to update dependencies eventually, and the developer who ran that update pulled the compromised version onto their machine. New projects installing Axios for the first time got it too. Lockfiles protect the builds that don't change. They can't protect the person who makes the change.
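The distinction can be made concrete in CI configuration. A sketch, assuming a standard npm setup:

```shell
# CI build: install exactly what package-lock.json pins.
# `npm ci` fails outright if the lockfile and package.json disagree,
# so a poisoned version published upstream cannot slip in silently.
npm ci --ignore-scripts

# The exposure is on the human side, when someone refreshes the pins:
npm update axios   # resolves the latest matching version -- this is the
                   # step that pulls a freshly compromised release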
Software composition analysis and npm audit. No CVE existed for axios@1.14.1 at the time it was published. The malicious package plain-crypto-js was brand new, absent from every vulnerability database. SCA tools scan against lists of known-bad packages. This was an unknown bad.
Provenance attestation. Legitimate Axios releases use npm's OIDC Trusted Publisher flow, cryptographically tied to GitHub Actions. The malicious publish used the stolen npm token and bypassed that entirely. The forensic signal was visible after the fact (no trustedPublisher metadata, no gitHead, no corresponding commit) but that's not something anyone checks during a routine npm install.
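An after-the-fact check can be sketched. Assume `meta` is one entry from the `versions` map of a packument fetched from the public registry (e.g. `https://registry.npmjs.org/axios`); the `gitHead` and `trustedPublisher` field names follow the registry's metadata format, but treat this as a heuristic, not an official verification API:

```javascript
// Heuristic: flag registry version metadata that lacks the signals a
// legitimate CI-published release would carry.
function suspiciousPublish(meta) {
  const reasons = [];
  if (!meta.gitHead) {
    reasons.push('no gitHead: release not tied to a source commit');
  }
  if (!(meta._npmUser && meta._npmUser.trustedPublisher)) {
    reasons.push('no trustedPublisher: not published via the OIDC flow');
  }
  return reasons; // empty array = nothing anomalous on these two signals
}
```

A token-based publish with no commit reference trips both checks; a Trusted Publisher release trips neither.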
Code review of dependencies. In principle, you should review what you're pulling in. In practice, who audits the postinstall hook of every transitive dependency on every update? Axios has over 170,000 dependents. The malicious code wasn't even in Axios itself. It was in a dependency of the dependency.
These controls are still absolutely necessary. But they share a common assumption: that you can prevent malicious code from executing in the first place. When a nation-state actor spends weeks socially engineering a single person to get a publishing token, you're playing defence against an adversary who has already decided your controls can be bypassed.
When prevention fails, what's your detection story?
So the payload ran. The RAT beaconed within seconds. Directory listings from the developer's home folder were exfiltrated before anyone knew something was wrong. In the Axios case, the community caught it in three hours. That's genuinely impressive.
But three hours is a long time when the RAT is exfiltrating your file listings and process tables. And the community detection model only works when someone happens to be watching. The TeamPCP/Trivy supply chain compromise that hit two weeks earlier took longer to detect, and its credential harvester was vacuuming up SSH keys, cloud tokens and TLS certificates from every machine it touched.
The pattern across both attacks is the same: the initial compromise is invisible to traditional defences and the post-compromise behaviour (reading files, listing directories and phoning home to C2 infrastructure) looks like normal application activity. The credential harvester reads ~/.ssh/id_rsa using standard system calls. The RAT sends HTTP POST requests. No exploits, no privilege escalation and nothing to trigger a signature.
This is where the thinking needs to shift. Instead of asking "can we stop every supply chain compromise before it executes?" (an impossibly large surface area), the more tractable question is: "when the payload runs, will it encounter anything that tells us it's here?"
Decoy credentials planted at the paths attackers target. SSH keys that look real but exist solely as tripwires. AWS credential files with realistic access key IDs that trigger an alert the moment anything reads them. Directory structures designed so that when a RAT enumerates the filesystem, it's stepping on ground that was prepared for it.
The Axios RAT's first action was to list the contents of the user's home directory, Desktop and .config folder. The TeamPCP harvester systematically read through 50+ credential file paths. Both attacks rely on the assumption that the environment they land in is genuine and unmonitored. That's an assumption defenders can exploit.
Network deception works the same way. The RAT gave the attackers the ability to run scripts and deliver additional payloads. The next step is always lateral movement: scanning the network, probing other machines, trying to reach something more valuable than a single developer's laptop. If the network they're moving through contains systems that look real but exist purely to detect that kind of activity, the attacker announces themselves the moment they touch one. No false positives, no alert fatigue. Just a high-confidence signal that someone is somewhere they shouldn't be.
The asymmetry is worth considering. Prevention asks you to secure every possible supply chain input, every dependency and every maintainer's personal security hygiene across every open-source project you depend on. Detection through deception asks you to prepare the ground the attacker will land on. One of these scales. The other doesn't.
If you want to start building that ground, the supply chain hygiene tools are well-documented: Socket.dev and Snyk for dependency monitoring, npm's --ignore-scripts flag to disable automatic postinstall execution, and StepSecurity's Harden-Runner for detecting anomalous network activity from CI pipelines. These are prevention tools, but good ones.
For the deception layer, the options range from open-source honeypot frameworks like CanaryTokens and HoneyDB through to commercial deception platforms that handle credential lures, tripwire files and decoy infrastructure at scale. The right choice depends on your environment and what you're willing to maintain. The important thing is that something is there when the next payload lands.
The question that matters
Saayman published a post-mortem on 2 April. The Axios project is moving to OIDC-based publishing, CI-only releases, immutable pipelines and hardened GitHub Actions. All good steps. All prevention steps.
But the takeaway from the Axios compromise isn't about what npm should do differently, what maintainers should do differently or what your lockfile policy should be. Those are all worth improving but none of them would have stopped a state-sponsored intelligence operation that spent weeks building a fake company to get one person to install one file.
The takeaway is that your npm install ran their code and you probably didn't know until someone on the internet told you. The question isn't whether that will happen again. It's whether next time, you'll know before they do.
