Popular AI gateway startup LiteLLM ditches controversial compliance firm Delve

Millions of developers who rely on the LiteLLM gateway must prepare for major changes to its security architecture after the open-source version of the tool fell victim to aggressive credential-stealing malware. In response to the incident, LiteLLM has officially severed ties with Delve, the controversial firm responsible for its previous security certification. Despite holding two compliance certificates that were supposed to guarantee procedures minimizing the risk of attack, the systems failed at the critical moment. The decision to repeat the entire certification process with a new auditor signals that the AI gateway industry is beginning to treat compliance with far greater seriousness. For users worldwide, it means heightened vigilance in managing API keys and an unavoidable upgrade to library versions free of malicious code.

The incident exposes the weakness of box-ticking certifications in the face of real-world credential-stealing threats. In an era of mass adoption of large language models, trust in data intermediaries is becoming a currency more valuable than the functionality of the code itself. LiteLLM is now going all in, attempting to rebuild its reputation through total transparency and a change of the technology partners responsible for auditing. It is a brutal lesson for the entire tech sector: a certificate on a website is no substitute for continuous monitoring of software supply chain integrity.
In the world of AI infrastructure, where trust is a currency as valuable as computing power, LiteLLM has just been handed a brutal lesson in humility. The startup behind the immensely popular AI gateway, used by millions of developers worldwide, has officially announced the end of its partnership with Delve. The decision is not merely a change of service provider but a drastic rescue move after the open-source version of the project fell victim to a credential-stealing malware attack.
The situation is particularly paradoxical because LiteLLM held two security compliance certificates, both obtained through Delve. In theory, those certifications were meant to guarantee that the startup had procedures in place to minimize the risk of exactly this kind of incident. Reality tested the assurances in the worst possible way: a direct leak of authentication credentials, striking at the foundations of the ecosystem built around LiteLLM.
The Illusion of Security and Empty Certificates
LiteLLM's core mechanism is unifying access to hundreds of large language models (LLMs) behind a single, consistent API. For developers, this means enormous convenience; for cybercriminals, it represents a single, central point of attack, where one breach grants access to the API keys of many different providers. The use of credential-stealing malware against the open-source version demonstrated that the audit processes conducted by Delve may have been merely superficial.
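To see why this concentration is so dangerous, consider a minimal sketch of the unified-API pattern. The snippet follows LiteLLM's publicly documented completion() interface and the standard provider environment variables, but the keys are placeholders and exact model names may differ between versions.

```python
# Minimal sketch of the unified-API pattern: one call signature, many
# providers. Every provider's API key lives in the same process
# environment, which is exactly the surface credential-stealing malware targets.
import os
from litellm import completion

# Keys for several providers sit side by side in one environment.
os.environ["OPENAI_API_KEY"] = "sk-..."          # placeholder, not a real key
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."   # placeholder, not a real key

messages = [{"role": "user", "content": "Name one supply chain risk."}]

# The same function routes to different providers based on the model string.
for model in ("gpt-4o-mini", "claude-3-haiku-20240307"):
    response = completion(model=model, messages=messages)
    print(model, "->", response.choices[0].message.content)
```

One compromised process therefore leaks every provider key at once, which is what made this breach so much more damaging than a leak at any single provider would have been.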
This incident casts a shadow over the entire "AI compliance startups" sector. Companies like Delve promise a quick and painless path through certification processes, which is crucial for young tech firms when acquiring corporate clients. However, the LiteLLM case proves that possessing the right document is not synonymous with actual resilience to attacks. The malicious code that infected the repositories operated within a trusted environment, exposing vulnerabilities that auditors simply failed to notice or ignored.

Hard Reset: A New Auditor and Rebuilding Trust
LiteLLM's reaction has been immediate and radical. The company publicly declared that it will repeat the certification process from scratch, this time with a different partner and an independent auditor. It is rare in this industry for a startup to admit it chose the wrong security partner and to scrap its existing formal credentials in order to regain credibility with the open-source community.
- Complete termination of cooperation: LiteLLM is definitively ending its relationship with Delve.
- Invalidation of certificates: Previous security credentials are deemed unreliable.
- New audit process: All security procedures will undergo re-verification by an external entity.
- Cleaning the open-source version: The priority is to remove all traces of malware and secure the code distribution mechanisms.
The key challenge for the LiteLLM team is now proving that it can secure its software supply chain. In the age of AI, where libraries are updated almost daily, the injection of malicious code into popular packages has become one of the greatest threats. Walking away from Delve signals that the company has stopped believing in automated compliance box-ticking and is turning to real penetration testing and deep code analysis.
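One concrete supply-chain control implied here is refusing to install any artifact whose cryptographic digest is not already pinned in a reviewed lockfile. The sketch below is a generic illustration of that idea, not LiteLLM's actual release tooling; the wheel name and digest are hypothetical placeholders.

```python
# Generic sketch: check a downloaded package artifact against a pinned
# sha256 digest before installation. The wheel name and digest are
# illustrative placeholders, not real LiteLLM release values.
import hashlib
from pathlib import Path

PINNED_DIGESTS = {
    # In practice this table comes from a version-controlled lockfile.
    "litellm-1.0.0-py3-none-any.whl":
        "0000000000000000000000000000000000000000000000000000000000000000",
}

def verify_artifact(path: Path) -> None:
    """Raise if the file's sha256 does not match its pinned digest."""
    expected = PINNED_DIGESTS.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name} is not pinned; refusing to install")
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    if actual != expected:
        raise RuntimeError(f"{path.name} digest mismatch: got {actual}")

if __name__ == "__main__":
    verify_artifact(Path("litellm-1.0.0-py3-none-any.whl"))
```

pip offers the same guarantee off the shelf: installing with --require-hashes rejects any requirement whose recorded hash is missing or wrong, turning an injected release into an install-time failure instead of a silent compromise.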
Crisis in the AI Compliance Sector
The collapse of Delve's reputation in the wake of this attack could reverberate across the entire certification services market. If the tools meant to protect startups fail against classic, albeit aggressive, malware, the whole idea of fast-track certification loses its meaning. The tech industry must ask itself whether the rush to deploy AI models is coming at the expense of elementary digital hygiene.
"Certification that does not withstand a real threat is merely expensive marketing, not a component of a defense strategy."
For LiteLLM, the coming months will be a test of survival. Although the tool remains technically excellent and essential for many developers, the stigma of "stolen credentials" will be hard to erase. Moving to a new auditor is a step in the right direction, but it will require full transparency about how the malicious software entered the codebase in the first place and why monitoring systems failed to catch it.
The AI gateway sector is facing a turning point. This incident will force LiteLLM's competitors to revise their own standards. Security in the AI world cannot end at the model layer – it must encompass every script, every library, and every intermediary that claims to have a stamp guaranteeing data integrity.