Security researchers have identified a new potential threat to software supply chains arising from AI-generated code: a technique called "slopsquatting."
Coined by security researcher Seth Larson, the term "slopsquatting" describes an attack in which malicious actors publish harmful packages under fictional names that AI models like ChatGPT or CodeLlama repeatedly suggest. Unlike the well-known "typosquatting" technique, these names aren't typo variants of existing libraries but package names the AI fabricates outright.
The problem arises because generative AI models often suggest libraries that don't actually exist when writing code. A study published in March 2025 found that roughly 20 percent of the analyzed AI code samples (drawn from a total of 576,000 Python and JavaScript snippets) referenced non-existent packages. Even established commercial models like GPT-4 hallucinate packages about 5 percent of the time, while open-source models like DeepSeek, WizardCoder, or Mistral show significantly higher rates. The research team hasn't tested newer models yet, but hallucinations remain an issue even with the most advanced language models.
These hallucinated package names often sound plausible. Developers who adopt AI-generated code might try to install such packages without realizing they don't exist. Attackers could register these invented names in repositories like PyPI or npm and publish malicious code under them. When such a package is installed, the harmful code enters the software without warning. Since many developers rely on AI-generated code or process it automatically, this creates a potential entry point for attacks.
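Because a hallucinated name only becomes dangerous once someone tries to install it, a cheap first check is to ask the registry whether a suggested package exists at all. The sketch below queries PyPI's public JSON API; the helper name and the example package names are illustrative, and a hit on PyPI is not proof of legitimacy, since an attacker may already have registered the invented name.

```python
import json
import urllib.error
import urllib.request

PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"


def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI has a project registered under this exact name."""
    try:
        with urllib.request.urlopen(PYPI_JSON_URL.format(name=name), timeout=10) as resp:
            # A registered project returns a JSON document with an "info" block.
            return "info" in json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # No such project on PyPI: a strong hint the name was hallucinated.
            return False
        raise


# "requests" is a real, long-established package; the second name is made up
# for illustration and would normally return False (unless someone registers it).
print(package_exists_on_pypi("requests"))
print(package_exists_on_pypi("fastjson-parallel-utils"))
```

Even when a suggested name does exist, it is worth looking at the project's maintainers, release history, and download numbers before installing it, because registering exactly these plausible-sounding names is the attacker's goal.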
Hallucinations are predictable, making them attractive to attackers
What makes this particularly concerning is that, according to the study, 58 percent of hallucinated package names recurred across similar queries. This predictability makes them especially attractive targets for attackers. Socket, a company specializing in open-source security, warns that these patterns represent a "predictable attack target." Of the hallucinated names, 38 percent resembled real packages, 13 percent were typos of existing package names, and the rest were pure invention.
To protect against these threats, the researchers recommend several measures: never adopt suggested package names without verifying them, pin exact version numbers (using lockfiles), enable hash verification, and always test AI-generated code in isolated environments. Additionally, reducing the "temperature" parameter, which controls the randomness of AI output, can help minimize hallucinations.
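For the pinning and hash-verification advice, pip offers a hash-checking mode that works with a fully pinned requirements file. The entry below is only a sketch: the version and the sha256 digest are placeholders, and in practice the digests would be generated with a tool such as pip-compile --generate-hashes from the pip-tools project.

```
# requirements.txt -- every dependency pinned to an exact version and digest
# (placeholder digest shown; do not copy it)
requests==2.32.3 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Installing with pip install --require-hashes -r requirements.txt then rejects anything that isn't listed with a matching hash, so neither an unexpected extra dependency nor a tampered package file is pulled in silently.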