AI in practice

AI-generated phishing emails are closing in on human effectiveness, IBM study reveals

Matthias Bastian
Image: DALL-E 3 prompted by THE DECODER

An IBM research team is studying how effective generative AI is at social engineering and writing phishing emails.

The researchers used five ChatGPT prompts to create phishing emails for specific industries. The prompts focused on the top concerns of employees in those industries and deliberately incorporated social engineering and marketing techniques to increase the likelihood that employees would click on a link in the email.

AI phishing is almost as good as human phishing, but much faster

The AI- and human-generated phishing emails were then sent to over 800 employees in an A/B test. The results showed that the AI-generated phishing emails were only slightly behind the human-generated phishing emails.

The click-through rate for AI phishing is close to the click-through rate for human phishing emails (11% vs. 14%). | Image: Stephanie Carruthers via securityintelligence.com

According to IBM, the human emails kept their edge because they were more customized and personalized to the target company, while ChatGPT took a more generic approach. Still, ChatGPT's email attacks were nearly as effective as the human attacks, although they were reported as suspicious more often.

AI phishing emails were slightly more suspicious than human emails, but the difference is small. | Image: Stephanie Carruthers via securityintelligence.com

The difference is in the time required: it takes IBM's red team about 16 hours to create a high-quality phishing email. ChatGPT did it in five minutes.

"Attackers can potentially save almost two days of work by using generative AI models," writes Stephanie Carruthers, chief people hacker at IBM X-Force Red.

ChatGPT's phishing attempt. | Image: Stephanie Carruthers via securityintelligence.com
The human phishing email. | Image: Stephanie Carruthers via securityintelligence.com

IBM researchers point to tools such as WormGPT, LLMs optimized for cyberattacks that are sold online. They expect AI-driven attacks to become more sophisticated and eventually surpass human attacks, although they have not yet observed generative AI phishing attacks in the wild.

In this context, a recent quote from OpenAI CEO Sam Altman is worth noting: he predicts that AI will be "capable of superhuman persuasion" even before it is generally intellectually superior to humans. You can imagine what this means for phishing and cybersecurity.

To prepare for the changing threat landscape, IBM's security researchers believe businesses and consumers should consider the following recommendations: