Security researchers have identified a new potential threat to software supply chains stemming from AI-generated code through a technique called "slopsquatting."

Coined by security researcher Seth Larson, the term "slopsquatting" describes an attack where malicious actors publish harmful packages under fictional names - names that AI models like ChatGPT or CodeLlama frequently suggest incorrectly. Unlike the well-known "typosquatting" technique, these names aren't based on typos of known libraries but instead on package names the AI completely fabricates.

The problem occurs because generative AI models often suggest libraries that don't actually exist when writing code. A study published in March 2025 found that approximately 20 percent of analyzed AI code samples (out of 576,000 generated Python and JavaScript snippets) referenced non-existent packages. Even a commercial model like GPT-4 hallucinates packages about 5 percent of the time, while open-source models like DeepSeek, WizardCoder, or Mistral show significantly higher rates. The research team hasn't tested newer models yet, but hallucinations remain an issue even with the most advanced language models.

These hallucinated package names often sound plausible. Developers who adopt AI-generated code might try to install such packages without realizing they don't exist. Attackers could register these invented names in repositories like PyPI or npm and publish malicious code under them. When such a package is installed, the harmful code enters the software without warning. Since many developers rely on AI-generated code or process it automatically, this creates a potential entry point for attacks.
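
One basic precaution is to check whether a suggested name is registered at all before running an install. The minimal sketch below assumes nothing beyond Python's standard library and PyPI's public JSON API; a 404 flags a name that doesn't exist yet. Keep in mind that a hit is not proof of safety, since an attacker may already have squatted a hallucinated name, so registered packages still deserve scrutiny.

```python
import json
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Check whether a package name is actually registered on PyPI.

    Uses PyPI's public JSON API; a 404 means the name is not registered,
    which is a red flag when the name came from an AI suggestion.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            data = json.load(response)
            # A registered package still needs vetting: check its age,
            # maintainers, and download history before trusting it.
            print(f"{name}: found, latest version {data['info']['version']}")
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"{name}: not on PyPI - possibly hallucinated")
            return False
        raise

# Example: vet every package an AI assistant suggested before installing.
# The second name below is deliberately made up for illustration.
for suggested in ["requests", "fastjsonparse"]:
    package_exists_on_pypi(suggested)
```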

Hallucinations are predictable - making them attractive to attackers

What makes this particularly concerning is that the study shows 58 percent of hallucinated package names appeared multiple times across similar queries. This predictability makes them especially attractive targets for attackers. Socket, a company specializing in open-source security, warns that these patterns represent a "predictable attack target." Of the hallucinated names, 38 percent resembled real packages, 13 percent were typos, and the rest were pure invention.

To protect against these threats, researchers recommend several measures: never adopt package names without verification, specify version numbers (using lockfiles), implement hash verification, and always test AI-generated code in isolated environments. Additionally, reducing the "temperature" parameter - which controls the randomness of AI output - can help minimize hallucinations.
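
To illustrate the hash-verification step, here is a minimal sketch of the underlying check: it recomputes an artifact's SHA-256 digest and compares it with a pinned value, the way a lockfile entry would. The file name and expected hash are placeholders. In practice, pip's own hash-checking mode (a hash-pinned requirements file installed with --require-hashes) performs this check automatically.

```python
import hashlib
import sys

def verify_sha256(path: str, expected_hash: str) -> None:
    """Compare a downloaded artifact's SHA-256 digest against a pinned value.

    The expected hash would normally come from a lockfile or a hash-pinned
    requirements file generated from a known-good build.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    actual = digest.hexdigest()
    if actual != expected_hash:
        sys.exit(f"Hash mismatch for {path}: got {actual}, expected {expected_hash}")
    print(f"{path}: hash verified")

# Usage (arguments are placeholders for a real wheel and its pinned hash):
#   python verify.py some_package-1.0.0-py3-none-any.whl <sha256-from-lockfile>
if __name__ == "__main__":
    verify_sha256(sys.argv[1], sys.argv[2])
```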

Summary
  • Security researchers warn of "slopsquatting": attackers using made-up package names that AI models hallucinate to distribute malicious code.
  • In one analysis, 20 percent of AI-generated code samples contained non-existent packages, while the rate for GPT-4 was around 5 percent.
  • Because many names are predictable, experts recommend lockfiles, hash checks, and isolated testing for AI code.