Cybercriminals are increasingly using generative AI tools such as ChatGPT and WormGPT, a model purpose-built for cybercrime, to send highly convincing fake emails to organizations and bypass their security measures.
A new wave of highly convincing fake emails is hitting unsuspecting employees. That's according to British computer hacker Daniel Kelley, who has been researching WormGPT, an AI tool optimized for cybercrimes such as Business Email Compromise (BEC). Kelley bases his findings on observations from underground forums.
On those forums, users also exchange special prompts known as "jailbreaks" that manipulate models such as ChatGPT into generating output they would normally refuse, such as disclosing sensitive information or producing malicious code.
"This method introduces a stark implication: attackers, even those lacking fluency in a particular language, are now more capable than ever of fabricating persuasive emails for phishing or BEC attacks," Kelley writes.
AI-generated emails are grammatically correct, which Kelley says makes them more likely to go undetected. At the same time, easy-to-use AI models have lowered the barrier to entry for such attacks.
WormGPT is an AI model optimized for online fraud
WormGPT is an AI model designed specifically for criminal and malicious activity and shared on popular online cybercrime forums. It is marketed as a "blackhat" alternative to official GPT models and advertised with promises of privacy and "fast money".
Like ChatGPT, WormGPT can create convincing and strategically sophisticated emails, making it a powerful tool for phishing and BEC attacks. Kelley describes it as ChatGPT with "no ethical boundaries or restrictions".
WormGPT is based on the open-source GPT-J model, which approaches the performance of GPT-3 and can handle text tasks similar to ChatGPT, as well as write or format simple code. The WormGPT derivative is said to have been trained on additional malware-related datasets, although the tool's author does not disclose which ones.
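Part of what makes this lineage notable is how accessible the base model is. The following minimal sketch, assuming the publicly available EleutherAI/gpt-j-6b checkpoint on the Hugging Face Hub, shows how the unmodified GPT-J model can be loaded and prompted with the standard transformers library; the prompt is a harmless placeholder and this is not WormGPT's undisclosed derivative.

```python
# Minimal sketch: loading and prompting the open-source GPT-J base model.
# Assumes the public "EleutherAI/gpt-j-6b" checkpoint on the Hugging Face Hub;
# this is the plain base model, not the modified WormGPT derivative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")

# Any text prompt works; GPT-J handles general text and simple code tasks.
prompt = "Explain in one sentence what Business Email Compromise is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That a model of this capability can be downloaded and run by anyone underlines Kelley's point about the lowered barrier to entry.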
Kelley tested WormGPT on a phishing email designed to trick a customer service representative into paying an urgent bogus invoice. The email was made to appear as if it came from the CEO of the targeted company.
Kelley calls the results of the experiment "unsettling" and describes the generated fraudulent email as "remarkably persuasive, but also strategically cunning." Even inexperienced cybercriminals could pose a significant threat with a tool like WormGPT, he writes.
The best protection against AI-based BEC attacks is prevention
As AI tools continue to proliferate, new attack vectors will emerge, making strong prevention measures essential, Kelley says. Organizations should develop BEC-specific training and implement enhanced email verification measures to protect against AI-based BEC attacks.
For example, Kelley cites alerts for emails from outside the organization that impersonate managers or suppliers. Other systems could flag messages containing keywords associated with BEC attacks, such as "urgent," "sensitive," or "wire transfer" (see the sketch below).
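What such a rule could look like in practice is sketched below. This is a hypothetical illustration, not a tool Kelley describes: the company domain, executive names, and keyword list are assumptions chosen for demonstration.

```python
# Hypothetical sketch of a simple BEC pre-filter: flag external emails that
# use an internal executive's display name, or that contain keywords commonly
# associated with BEC attacks. Domain, names, and keywords are illustrative
# assumptions, not part of Kelley's write-up.
from email import message_from_string
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"                   # assumed company domain
EXECUTIVE_NAMES = {"jane doe", "john smith"}      # assumed internal managers
BEC_KEYWORDS = {"urgent", "sensitive", "wire transfer"}

def flag_bec_indicators(raw_email: str) -> list[str]:
    """Return human-readable warnings for a raw RFC 822 email message."""
    msg = message_from_string(raw_email)
    display_name, address = parseaddr(msg.get("From", ""))
    warnings = []

    # Alert on external senders impersonating internal managers.
    is_external = not address.lower().endswith("@" + INTERNAL_DOMAIN)
    if is_external and display_name.lower() in EXECUTIVE_NAMES:
        warnings.append(f"External sender {address} uses executive name '{display_name}'")

    # Flag messages containing keywords associated with BEC attacks.
    payload = msg.get_payload()
    body = payload if isinstance(payload, str) else ""
    text = (msg.get("Subject", "") + " " + body).lower()
    for keyword in BEC_KEYWORDS:
        if keyword in text:
            warnings.append(f"Message contains BEC keyword: '{keyword}'")

    return warnings
```

A real deployment would also parse multipart MIME bodies, check authentication results such as SPF, DKIM, and DMARC, and route flagged mail for human review rather than relying on keyword matching alone.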