March 25, 2026
5 min
Matteo Dora
David Berenstein

LiteLLM supply chain compromise: what happened, impact on Giskard, and lessons learned

On 24 March 2026, two malicious versions of the widely used LiteLLM Python library (1.82.7 and 1.82.8) were published to PyPI. The compromised packages carried a credential-stealing payload that exfiltrated secrets (SSH keys, cloud provider sessions, Terraform state, etc.) and attempted lateral movement into Kubernetes clusters. Both versions have since been removed from PyPI.

What happened

The attack is part of a broader campaign by a threat actor known as TeamPCP, who previously compromised Aqua Security's Trivy scanner and Checkmarx's KICS GitHub Action over the preceding week.

Timeline:

  • March 19: Trivy compromised. TeamPCP pushed a malicious Trivy release whose payload scraped memory from the CI/CD runner to extract secrets.
  • LiteLLM's CI/CD pipeline used Trivy for security scanning. The compromised Trivy action exfiltrated LiteLLM's PyPI credentials and GitHub tokens from their CI runner.
  • March 24, ~10:39–10:52 UTC: LiteLLM compromise. Using the stolen credentials, the attacker published two malicious LiteLLM versions to PyPI (1.82.7 and 1.82.8).
  • March 24, ~14:00 UTC: PyPI quarantined the litellm project and then removed the compromised versions.

Impact on Giskard

Giskard’s commercial customers are not affected. While some of our services use LiteLLM, it was pinned to a non-compromised version.

Open-source users may be affected if they installed or upgraded dependencies on March 24 between ~10:39 and ~16:00 UTC. The litellm package could have been pulled in as a transitive dependency when installing giskard[llm], giskard-agents, or giskard-checks (alpha). This applies regardless of whether LiteLLM was pulled in via Giskard or any other library in your environment: LiteLLM is a common dependency across many GenAI Python projects, so we strongly recommend checking for indicators of compromise in any case.

How to check if you’ve been compromised

If you think you might be affected, check immediately for indicators of compromise. The malicious payload left traces of the exfiltration on the system. We have published a small bash script, attached to our GitHub discussion, that performs these checks automatically.

You can also manually check for these indicators:

  • litellm_init.pth (inside the infected litellm package)
  • ~/.config/sysmon/sysmon.py
  • The installed LiteLLM version in any Python environment (1.82.7 and 1.82.8 were compromised)
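A manual check along these lines can be scripted. The sketch below searches the paths listed above; the pip-based version check is an assumption, so adapt it to however you manage your Python environments (virtualenvs, containers, CI runners).

```shell
#!/usr/bin/env bash
# Sketch of an IoC check for the malicious LiteLLM releases.

check_version() {
    # Classify an installed litellm version string
    case "$1" in
        1.82.7|1.82.8) echo "COMPROMISED" ;;
        *) echo "OK" ;;
    esac
}

# File-based indicator: persistence script dropped by the payload
if [ -f "$HOME/.config/sysmon/sysmon.py" ]; then
    echo "WARNING: found ~/.config/sysmon/sysmon.py"
fi

# File-based indicator: the malicious .pth file inside the infected package
find "$HOME" -name "litellm_init.pth" -path "*site-packages*" 2>/dev/null

# Version-based indicator (assumes pip points at the environment to check)
installed=$(pip show litellm 2>/dev/null | awk '/^Version:/ {print $2}')
if [ -n "$installed" ]; then
    echo "litellm $installed: $(check_version "$installed")"
else
    echo "litellm is not installed in this environment"
fi
```

Remember to run the check once per Python environment, not just once per machine.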

If compromised, rotate all credentials that were accessible from the affected machine: SSH keys, cloud tokens, API keys, database passwords, etc. In Kubernetes environments, audit the kube-system namespace for unusual pods and remove any you find.
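One starting point for the Kubernetes audit is sketched below (it assumes kubectl access to the affected cluster; the wrapper function name is ours). It only surfaces what is running in kube-system for manual review: it cannot decide for you which pods are malicious.

```shell
#!/usr/bin/env bash
# Sketch: surface kube-system pods for manual review after a compromise.

audit_kube_system() {
    if ! command -v kubectl >/dev/null 2>&1; then
        echo "kubectl not found: run this where you have cluster access"
        return 0
    fi
    # Sort by creation time so recently planted pods stand out
    kubectl get pods -n kube-system \
        --sort-by=.metadata.creationTimestamp -o wide
    # List container images in use, to spot pulls from unexpected registries
    kubectl get pods -n kube-system \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
}

audit_kube_system
```

Once you have confirmed a pod is malicious, delete it with kubectl delete pod in the kube-system namespace, and treat every secret mounted into it as compromised.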

Also check the official security update from LiteLLM for more information: https://docs.litellm.ai/blog/security-update-march-2026

What we learned from this attack

This incident is a reminder that supply chain attacks are extraordinarily dangerous and that CI/CD pipelines are a primary attack surface. TeamPCP's campaign compromised a trusted CI/CD tool, harvested credentials from the runner, and used those credentials to poison the next targets.

While it would not be fair to blame the LiteLLM maintainers – they were victims of a relatively sophisticated, multi-stage campaign – the incident does expose a series of avoidable mistakes:

  • LiteLLM's CI/CD pipeline gave Trivy and other tools full access to all environment secrets, including the PyPI token and GitHub PATs (which they had no reason to see). Steps were not isolated following the least-privilege principle.
  • The project also relied on long-lived PyPI API tokens rather than trusted publishing, meaning a single leaked secret was enough to push arbitrary packages.
  • CI/CD dependencies and tools were not pinned (the pipeline would pull the latest release of Trivy).
  • Credentials were not rotated after the Trivy compromise.
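As a sketch, a GitHub Actions release workflow that avoids the first three mistakes could look like the fragment below. The `<commit-sha>` values are placeholders for the full commit hashes of the action releases you audit; the job layout is an illustration, not LiteLLM's actual pipeline.

```yaml
name: release
on:
  release:
    types: [published]

permissions: {}  # no default token permissions; each job requests only what it needs

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # OIDC for PyPI trusted publishing: no long-lived token to steal
    steps:
      # Pin actions to full commit SHAs, not mutable tags like @v4
      - uses: actions/checkout@<commit-sha>
      - run: python -m pip install build && python -m build
      # Trusted publishing: PyPI trusts this repo/workflow via OIDC,
      # so there is no API token in the CI secrets to exfiltrate
      - uses: pypa/gh-action-pypi-publish@<commit-sha>
```

Scanners like Trivy would then run in a separate job with no publishing permissions, so a compromised scanner never sees anything worth stealing.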

This specific compromise would not have affected Giskard: we use PyPI trusted publishing rather than long-lived tokens, we pin dependencies to commit hashes, and we isolate our CI/CD steps. But supply chain attacks are unpredictable, and it is difficult to guarantee that no other sensitive credentials would have been exposed in a similar scenario. CI/CD pipelines are complex, and it is easy to let mistakes slip in.

Although we were not directly affected, this incident served as a reminder to actively audit our own supply chain posture. Over the next few days, we'll verify that all good practices are being followed across our projects and reinforce them with additional checks.

What we already do:

  • Use PyPI trusted publishers instead of static tokens
  • Pin GitHub Actions and dependencies to commit hashes (via Renovate)
  • Scan our code, including CI workflows, with semgrep

Our next steps:

  • Integrate zizmor, a static analysis tool for GitHub Actions, to catch more specific CI/CD misconfigurations
  • Configure strict cooldown periods for our dependencies using Renovate
  • Audit all secrets used in our CI/CD pipelines against the least-privilege principle
  • Review our own dependencies to assess third-party risk
