
Researchers recently tested the vulnerability of 1,009 Americans to various hacking techniques using AI technologies such as ChatGPT. The results are alarming.

The study, conducted by security firm Beyond Identity, found that 39 percent of respondents would fall for at least one ChatGPT-generated phishing scam. AI-generated phishing messages can look very convincing. This high vulnerability to more sophisticated phishing attempts is worrisome, especially as purpose-built malicious language models such as FraudGPT become more widely available.

Many don't recognize fake ChatGPT apps

Of respondents who had successfully avoided scams in the past, 61 percent were able to identify AI-generated scams, compared to just 28 percent of those who had been scammed before and would fall for one again. 49 percent of respondents would have downloaded a fake ChatGPT app.

Image: Beyond Identity

Many third-party ChatGPT apps do access ChatGPT's language models through the official OpenAI API. But if such an app offers nothing beyond the standard chat, there is little reason to use it: OpenAI provides official ChatGPT apps for iOS and Android, and smartphone apps that don't come from OpenAI can be a security risk.
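
For context, this is roughly what such a wrapper app does behind the scenes. A minimal sketch, assuming the official openai Python SDK (the function name and model are illustrative, and OPENAI_API_KEY must be set in the environment):

    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    def ask_chatgpt(prompt: str) -> str:
        # Send a single-turn chat request to the OpenAI API and
        # return the model's reply as plain text.
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # example model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(ask_chatgpt("Hello!"))

Because every prompt passes through the app vendor's servers and API key, an untrustworthy wrapper can read, log, or resell anything a user types.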


AI helps guess passwords

Thirteen percent of respondents had used AI to generate passwords, which can itself be a security risk, depending on the model and how the vendor uses input data to train the AI.

Tools like ChatGPT can also help crack passwords if personal information is known, according to Beyond Identity. In one test, ChatGPT was fed details from a fictitious user profile, such as a birthday, a pet's name, and initials. From this information, it easily generated plausible passwords by combining the personal data:

  • Luna0810$David
  • Sarah&ImagineSmith
  • UCLA_E.Smith_555
  • JSmithLuna123!
  • YankeesRachel*James
  • EmilySmith$Lawrence
  • DavidSmith#123Main
  • ImagineUCLA_Rachel
  • Yankees0810Smith!
  • Sarah&DJLawrence

From Beyond Identity's report
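
The underlying technique can be reproduced without any language model: build candidate passwords by combining known personal details with common separators and suffixes. A minimal sketch in Python, using made-up profile data (all values below are hypothetical, not taken from the report):

    from itertools import permutations

    # Hypothetical personal details, analogous to the report's fictitious
    # profile: pet name, surname, birthday digits, favorite team, university.
    profile = ["Luna", "Smith", "0810", "Yankees", "UCLA"]
    separators = ["", "_", "&", "#"]
    suffixes = ["", "!", "$", "123"]

    # Combine every ordered pair of details with each separator and suffix.
    candidates = {f"{a}{sep}{b}{suf}"
                  for a, b in permutations(profile, 2)
                  for sep in separators
                  for suf in suffixes}

    print(f"{len(candidates)} candidate passwords, for example:")
    print(sorted(candidates)[:5])

A language model simply performs this kind of recombination more fluently; the point is that a handful of personal details is enough to seed a targeted guessing list.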

The most common type of AI-generated scam was social media posts asking for personal information; 21 percent of respondents fell for them. These scams often ask users to reveal their birth dates or the answers to password recovery questions.

Image: Beyond Identity

Only 25 percent of respondents said they used personal information in their passwords, but among those who did, the most common choices were their birthday, their pet's name, and their own name. This shows how easily attackers can use AI tools like ChatGPT to generate likely passwords for a specific person after gathering basic personal details from social media.

More than half of respondents see AI as a threat to cybersecurity in general. The survey results suggest that this assessment is justified.

Summary
  • A survey of 1,009 Americans by Beyond Identity found that 39 percent of respondents would fall for a ChatGPT-based phishing scam, highlighting the persuasive power of AI-generated phishing messages.
  • In the survey, 49 percent of respondents said they would download a fake ChatGPT app, creating an unnecessary security risk.
  • AI like ChatGPT can also be effective at guessing passwords, especially when personal information is known, making cyberattacks easier.
Jonathan works as a technology journalist who focuses primarily on how easily AI can already be used today and how it can support daily life.