
OpenAI and Microsoft have taken down five state-affiliated threat actor accounts that were using AI services for malicious cyber activity.

The five actors were affiliated with China, Iran, North Korea, and Russia. They used OpenAI models for tasks such as researching companies and cybersecurity tools, translating technical articles, debugging code, and creating malicious scripts or content for phishing campaigns.

The actors also attempted to understand publicly available vulnerabilities and conduct open-source research on satellite communications protocols and radar imaging technology. One actor attempted to locate defense experts in the Asia-Pacific region.

Working with Microsoft Threat Intelligence, OpenAI shut down all five accounts.


According to OpenAI, the actors' activities are consistent with previous red-team assessments OpenAI conducted with external cybersecurity experts.

These assessments found that GPT-4 provides "limited, incremental capabilities for malicious cybersecurity tasks" that OpenAI believes do not go far beyond what is already possible with publicly available non-AI tools.

According to reports from cybersecurity companies, sophisticated phishing is on the rise as tools like ChatGPT make it easier and faster to come up with ideas and designs for attacks.

OpenAI recently published an experiment investigating whether GPT-4 could help professionals or students create biological weapons more efficiently than with traditional internet tools. The results indicated that GPT-4 does help, but only marginally.

Summary
  • OpenAI and Microsoft have identified and taken down five nation-state threat actors from China, Iran, North Korea, and Russia that used AI services for malicious cyber activities.
  • The actors used AI services for tasks such as company research, translating technical articles, debugging code, and creating malicious scripts or content for phishing campaigns.
  • OpenAI emphasizes that GPT-4 provides limited, incremental capabilities for malicious cybersecurity tasks that do not go far beyond what is already possible with publicly available non-AI tools.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.