OpenAI and Microsoft have taken down accounts belonging to five state-affiliated threat actors that were using AI services for malicious cyber activity.
Specifically, the five actors were from China, Iran, North Korea, and Russia. They used OpenAI models for tasks such as researching companies and cybersecurity tools, translating technical articles, debugging code, and creating malicious scripts or content for phishing campaigns.
The actors also attempted to understand publicly available vulnerabilities and conduct open-source research on satellite communications protocols and radar imaging technology. One actor attempted to locate defense experts in the Asia-Pacific region.
Working with Microsoft Threat Intelligence, OpenAI terminated the accounts of all five actors.
According to OpenAI, the actors' activities are consistent with previous red-team assessments the company conducted with external cybersecurity experts.
These assessments found that GPT-4 provides "limited, incremental capabilities for malicious cybersecurity tasks" that OpenAI believes do not go far beyond what is already possible with publicly available non-AI tools.
According to reports from cybersecurity companies, sophisticated phishing is on the rise as tools like ChatGPT make it easier and faster to come up with ideas and designs for attacks.
OpenAI recently published an experiment investigating whether GPT-4 could help professionals or students create biological weapons more efficiently than with conventional Internet resources. The results indicated that GPT-4 does help, but only marginally.