A Backdoored AI Library Just Auto-Executed on Thousands of Machines

CivSafe Team·April 3, 2026·6 min read

Last week a popular open-source AI library got backdoored. Not in a theoretical "this could happen" way. In a "malicious code was published to PyPI, auto-executed on every machine that installed it, and stole credentials" way.

The library is LiteLLM. If you've set up any AI workflow tooling in the last two years — automation pipelines, internal chatbots, document processing, anything that routes requests to a language model — there's a reasonable chance LiteLLM is in your stack. And there's a smaller but non-trivial chance you installed one of the two compromised versions.

Here's what happened, and what you need to do right now.

What LiteLLM is and why it's everywhere

LiteLLM is a Python library that acts as a unified gateway to AI services. Instead of writing separate integration code for OpenAI's API, then separate code for Gemini, then separate code for Cohere, you call LiteLLM once and it handles the routing. Over 200 model providers. One consistent interface.

It's elegant and it's become a standard building block. If you've used any open-source AI agent framework, workflow automation tool, or self-hosted AI platform in the past 18 months, LiteLLM is almost certainly somewhere in the dependency tree — even if you've never heard of it. That's the nature of Python package ecosystems. You install one tool, it installs a dozen others.

That ubiquity is what made it a target.

How the attack worked

On March 24, 2026, a threat actor called TeamPCP published two malicious versions of LiteLLM to PyPI: 1.82.7 and 1.82.8. The packages were live for roughly 40 minutes before PyPI quarantined them.

Forty minutes is a long time when automated pipelines are constantly pulling package updates.

The malicious code was embedded as a .pth file — litellm_init.pth — inside the package. Python's .pth mechanism exists to extend the import path, but any line in a .pth file that begins with an import statement is executed when the interpreter starts. This means the malware ran every time Python started on an affected machine. Not just when LiteLLM was explicitly called. Every Python process.

What the malware did: it deployed the SANDCLOCK credential stealer. SANDCLOCK looked for SSH keys, cloud provider credentials (AWS access keys, GCP application default credentials, Azure tokens), Kubernetes configs, API keys in .env files, and database passwords. It also set up a persistence mechanism — a systemd service named sysmon — designed to survive reboots and continue exfiltrating in the background.

On Kubernetes clusters, it looked for cluster secrets and node-setup pods. This wasn't opportunistic malware. It was purpose-built to extract the access credentials that let you into cloud infrastructure and AI services.

This wasn't a random attack

TeamPCP has been running a coordinated supply chain campaign for weeks. Before LiteLLM, they compromised Aqua Security's Trivy — a widely used open-source security scanner — by gaining access to a maintainer's credentials. Then they used the trusted Trivy GitHub Actions pipeline to push malicious code. Checkmarx's GitHub Actions workflow was also hit.

This is the cascading part: by compromising a security tool first, they established a foothold in CI/CD pipelines that organizations actually trust. From Trivy they reached LiteLLM. Security researchers at ReversingLabs described it as a multi-stage attack designed to maximize reach by exploiting the implicit trust in automated pipelines.

The result: Mercor, a $10 billion AI hiring startup, confirmed it was "one of thousands" of companies affected, in disclosures published on March 31 and April 2. The extortion group Lapsus$ separately claims to have stolen 4 terabytes of data, including source code and database records. The full scope of how many organizations were hit is still being assessed.

Who's at risk

You're at risk if you or anyone on your team installed or upgraded LiteLLM between March 20 and March 27, 2026. The compromised versions are specifically 1.82.7 and 1.82.8.

The organizations most at risk are those running automated workflows that pull the latest package versions without pinning. CI/CD pipelines. Docker images built from generic Python base images. Development environments where pip install --upgrade litellm is a habit. Any setup where "keep things current" is the policy.

Smaller organizations are particularly exposed here because they're more likely to be running LiteLLM directly rather than through a managed SaaS layer, less likely to have tooling that monitors for supply chain anomalies, and less likely to have rotated credentials recently.

What to check and what to do

First: find out if you installed the bad versions.

pip show litellm

If the version is 1.82.7 or 1.82.8, you're affected. Also check uv caches if your team uses that toolchain:

find ~/.cache/uv -name "litellm_init.pth"

Check virtual environments in CI/CD runners, not just developer laptops. The infected .pth file is the artifact to look for.
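If you have more than a handful of environments, the same check is easier to script. This is a sketch: is_compromised and VENV_PY are illustrative names, not official tooling, and you'd point VENV_PY at each interpreter you want to audit.

```shell
# Classify one interpreter's installed litellm version against the known-bad list.
is_compromised() {
  case "$1" in
    1.82.7|1.82.8) return 0 ;;   # the two malicious releases
    *) return 1 ;;
  esac
}
# VENV_PY defaults to the system python3; override it per environment.
VENV_PY="${VENV_PY:-$(command -v python3)}"
ver=$("$VENV_PY" -m pip show litellm 2>/dev/null | awk '/^Version:/ {print $2}')
if is_compromised "${ver:-none}"; then
  echo "COMPROMISED: $VENV_PY has litellm $ver"
else
  echo "clean: litellm ${ver:-not installed} at $VENV_PY"
fi
```

Run it once per interpreter — developer laptops, CI runner images, and long-lived service venvs alike.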

Second: look for persistence.

ls ~/.config/sysmon/sysmon.py
ls ~/.config/systemd/user/sysmon.service

If either of those files exists, the malware established persistence. You're not just looking at an infected package — you're looking at an active credential exfiltration situation.

On Kubernetes: audit kube-system for pods matching node-setup-* and check cluster secrets for unauthorized access.
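The persistence checks above can be collected into one report. The file paths are the ones named in this post; the systemctl and kubectl steps are guarded so the script degrades gracefully on machines without systemd or cluster access.

```shell
# Report the known sysmon persistence artifacts in one pass.
found=""
for f in "$HOME/.config/sysmon/sysmon.py" "$HOME/.config/systemd/user/sysmon.service"; do
  [ -e "$f" ] && found="$found $f"
done
if [ -n "$found" ]; then
  echo "FOUND persistence artifacts:$found"
else
  echo "no sysmon persistence artifacts"
fi
# If systemd is available, also look for a loaded user unit by that name:
command -v systemctl >/dev/null 2>&1 && systemctl --user list-units 'sysmon*' --no-legend 2>/dev/null || true
# Kubernetes side (only if you have kubectl access to the cluster):
command -v kubectl >/dev/null 2>&1 && kubectl get pods -n kube-system -o name 2>/dev/null | grep node-setup || true
```

Any non-empty result from the first two checks means you should assume active exfiltration, not just a bad install.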

Third: rotate everything.

If you installed either compromised version, treat every credential on that machine as compromised. That means:

  • SSH keys
  • Cloud provider credentials (AWS, GCP, Azure)
  • API keys in .env files
  • Database passwords
  • Kubernetes configs

Rotation is not optional if the stealer ran. It almost certainly ran.

Fourth: pin your packages.

Going forward: pin LiteLLM to a specific version and don't auto-upgrade without review. litellm==1.82.6 is clean. The LiteLLM maintainers published a security advisory on their docs site confirming the incident and the safe versions.
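One way to enforce that pin is hash-checking mode, sketched here with pip-tools — an assumption about your toolchain, not a requirement. The commented commands need network access, so they're shown rather than run:

```shell
# Pin litellm to the advisory's known-good version, in a throwaway directory.
cd "$(mktemp -d)"
printf 'litellm==1.82.6\n' > requirements.in
cat requirements.in
# With pip-tools installed, resolve the full dependency tree and record
# sha256 hashes for every artifact, so a republished or swapped package
# fails the install instead of executing:
#   pip-compile --generate-hashes requirements.in   # writes requirements.txt
#   pip install --require-hashes -r requirements.txt
```

With hashes recorded, even a 40-minute window like this one can't hurt you: a malicious re-release of a pinned version fails the hash check and the install aborts.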

The broader pattern worth understanding

This attack worked because the AI tooling ecosystem has grown faster than the security hygiene around it. A library like LiteLLM touches the API keys for every AI service your organization uses. When it's compromised, the attacker gets everything — your OpenAI key, your cloud provider access, your internal data pipelines.

The teams building AI workflows — often moving fast on small budgets — tend to have the same package dependencies as well-funded engineering orgs, but fewer eyes on the security signals. That asymmetry is what groups like TeamPCP are exploiting.

Supply chain attacks against AI tooling are going to keep happening. This won't be the last one.

The practical response isn't to stop using open-source libraries — that's not realistic and not necessary. It's to treat your AI tooling stack with the same credential hygiene you'd apply to anything else that touches production systems: pin versions, rotate keys, and know what's running in your environment.


If you're not sure what's in your team's AI tooling stack or whether your credentials are exposed, that's a 30-minute conversation we can have before it becomes an incident. Get in touch.

CivSafe — Strategic Innovation. Community Impact.