Skip to Content

LangSmith Vulnerability: How a Popular LLM Platform Exposed Sensitive Data

AI Platforms: Productivity Boost or Security Risk?

Get All The Latest Research & News!

Thanks for registering!

AI-powered development platforms promise to streamline workflows and accelerate innovation. But what happens when these same platforms inadvertently become conduits for cyber threats? 

The recent discovery of a critical flaw in LangChain’s LangSmith highlights the high stakes of security in the AI ecosystem.

Inside the AgentSmith Vulnerability

Security researchers at Noma Security uncovered a vulnerability, named AgentSmith, that posed a significant risk to organizations using LangSmith. 

This platform, widely used for developing and monitoring large language model (LLM) applications, integrates a public repository, LangChain Hub, where users can share AI agents and models. 

The flaw earned a high-severity CVSS score of 8.8, reflecting its potential impact.

  • Malicious Injection: Attackers uploaded AI agents embedded with a proxy server configuration to the public hub.

  • User Traps: When users tried these agents, their data, including prompts, API keys, and attachments, was routed through the attacker’s proxy server without their knowledge.

  • Stealthy Exfiltration: Sensitive information could be intercepted, enabling attackers to hijack environments, steal proprietary data, or deplete API quotas, all while remaining undetected.

For organizations, the risks included persistent exposure of internal datasets, trade secrets, and intellectual property, especially if malicious agents were cloned into enterprise environments. The attack could lead to financial losses, legal ramifications, and reputational damage.

LangChain’s Rapid Response

Upon responsible disclosure on October 29, 2024, LangChain moved swiftly. By November 6, the team had patched the vulnerability, implementing backend fixes and introducing new user warnings for custom proxy settings when cloning agents. 

This decisive response helps prevent similar attacks going forward, but the incident is a stark reminder of the dangers lurking in open, collaborative AI platforms.

The Expanding Threat Landscape: WormGPT Evolves

The timing of the LangSmith disclosure coincides with another concerning trend: the evolution of WormGPT. 

According to Cato Networks, new WormGPT variants have emerged, leveraging models like xAI Grok and Mistral AI Mixtral. These tools, designed to deliver uncensored and potentially malicious LLM outputs, are now being marketed to cybercriminals.

  • Instead of building new models from scratch, threat actors adapt existing LLMs by modifying system prompts or fine-tuning them with illicit datasets.

  • This approach lowers barriers for attackers, further enabling the weaponization of generative AI for phishing, malware, and sophisticated scams.

What Security Leaders Must Know

The LangSmith incident highlights the urgent need for robust AI platform security. As LLM tools become more open and interconnected, attackers are quick to exploit weak links - turning collaboration into a supply-chain style risk. To defend against these evolving threats, organizations should prioritize:

  • Continuous vulnerability monitoring and rapid patching
  • User education and awareness of risks in open AI ecosystems
  • Rigorous access control and credential management

Ultimately, safeguarding sensitive assets and maintaining trust in AI-driven applications requires a proactive, security-first mindset. As the AI landscape evolves, so too must defenses.

Source: The Hacker News

LangSmith Vulnerability: How a Popular LLM Platform Exposed Sensitive Data
Joshua Berkowitz June 30, 2025
Share this post