FraudGPT can write phishing emails and develop malware

Jonathan Kemper
Image: Midjourney prompted by THE DECODER

FraudGPT is an unmoderated chatbot circulating on the darknet, where criminals use it to write phishing emails and develop malware.

"With the rise of generative AI models, the threat landscape has changed drastically," says Netenrich researcher Rakesh Krishnan. Recently, Netenrich's research team found evidence of software called FraudGPT in Darknet forums and Telegram channels.

The chatbot, built for criminal purposes, is believed to have been circulating on darknet marketplaces since at least July 22, 2023.

According to the product description, FraudGPT helps cybercriminals write phishing emails, develop cracking tools, and forge credit cards. It even claims to help identify the most vulnerable targets. The screenshots below show examples.

Screenshots: Netenrich

The exact technology behind FraudGPT is not clear from the darknet marketplace listing. The screenshots suggest that it is accessible through a custom web interface.

Starting at $200 per month

FraudGPT is offered on several marketplaces, according to Netenrich. The developer, who goes by "CanadianKingpin," charges $200 per month but offers discounts for three- and six-month subscriptions; twelve months of access costs $1,700.

The seller appears to be no stranger to the scene, advertising more than 3,000 sales on underground platforms.

AI also serves cybercrime

Reports of FraudGPT follow the launch of WormGPT, a language model allegedly trained or fine-tuned on malware data.

Both are examples of cybercriminals using AI tools for illegal activities. Because these models, like ChatGPT, can be instructed simply through conversation, they lower the barrier to entry for cybercriminals. Even those who are less tech-savvy or articulate can generate malware or write phishing emails.

Operators of chatbots such as Claude, ChatGPT, or Bard go to great lengths to prevent their technology from being used for malicious purposes. They are constantly responding to new tactics designed to trick the AI into producing malicious code or phishing emails.

But this is true only of commercial providers. With the advent of increasingly powerful open-source models, criminal use of AI seems hard to contain.