An IBM research team is studying how effective generative AI is at social engineering and writing phishing emails.
The researchers used five ChatGPT prompts to create phishing emails for specific industries. The prompts focused on the top concerns of employees in those industries and applied social engineering and marketing techniques chosen to increase the likelihood that employees would click on a link in the email.
AI phishing is almost as good as human phishing, but much faster
The AI- and human-generated phishing emails were then sent to over 800 employees in an A/B test. The results showed that the AI-generated phishing emails were only slightly less effective than the human-generated ones.
According to IBM, the human-written emails kept their edge because they were more customized and personalized to the target company, while ChatGPT took a more generic approach. Still, ChatGPT's email attacks were nearly as effective as the human ones, although they were reported as suspicious more often.
The difference is in the time required: it takes IBM's red team about 16 hours to create a high-quality phishing email. ChatGPT did it in five minutes.
"Attackers can potentially save almost two days of work by using generative AI models," writes Stephanie Carruthers, chief people hacker at IBM X-Force Red.
IBM researchers point to tools such as WormGPT, large language models optimized for cyberattacks that can be purchased online. They expect AI attacks to become more sophisticated and eventually surpass human attacks, although they themselves have not yet observed generative AI phishing attacks in practice.
In this context, a recent quote from OpenAI CEO Sam Altman is worth noting: He predicts that AI will be "capable of superhuman persuasion" even before it is generally intellectually superior to humans. You can imagine what this means for phishing and cybersecurity.
To prepare for the changing threat landscape, IBM's security researchers believe businesses and consumers should consider the following recommendations:
- When in doubt, contact the sender directly to verify whether an email or phone call is legitimate.
- Abandon the stereotype that phishing emails are full of bad grammar, and instead train employees to pay attention to the length and complexity of email content.
- Revamp social engineering programs, for example by incorporating techniques such as vishing (voice phishing) into training.
- Tighten identity and access management controls to verify user access and privileges.
- Continually adapt and innovate to stay ahead of bad actors.