China and Iran Have Crossed a Red Line: They’re Using ChatGPT to Create Malware and Carry Out Phishing Attacks

  • Over the past few weeks, OpenAI has identified more than 20 cyberattacks orchestrated using ChatGPT and other LLMs.

  • China and Iran are behind these cyberattacks, which primarily target several Asian governments.

OpenAI recently identified more than 20 cyberattacks orchestrated using ChatGPT and other LLMs. According to its engineers, Chinese and Iranian hackers have leveraged these AI systems for malware development, debugging, and other malicious activities.

Chinese hackers reportedly planned the first attack using ChatGPT, targeting multiple Asian governments. Notably, the attack used a spear phishing strategy called “SweetSpecter.” This method involves a ZIP archive containing a malicious file that, once downloaded and opened, infects the user’s system. OpenAI engineers discovered that the attackers employed several ChatGPT accounts to write code and exploit system vulnerabilities.

The Other Side of AI

A group based in Iran known as CyberAv3ngers carried out the second attack. This organization used ChatGPT to exploit vulnerabilities and steal passwords from macOS users. Another group, Storm-0817, also based in Iran, launched the third attack. They used ChatGPT to develop Android malware capable of stealing contact lists, call logs, and browsing history.

According to OpenAI, the attackers relied on well-known methods and didn’t create any new techniques using ChatGPT. As a result, no substantially new malware variants have emerged. The real concern, however, is that these cyberattacks highlight how easy it is for hackers with minimal knowledge to leverage AI to develop software with significant malicious potential.

OpenAI has confirmed it will continue improving its technology to prevent malicious use. In the meantime, the company has established several specialized security and protection teams that will share their findings with other companies and the broader community to help prevent further cyberattacks. However, it’s not just OpenAI that needs to take these precautions. Other companies that develop generative AI models should follow suit and adopt similar measures to enhance security in this area.

Image | Lucas Andrade (Pexels)

Related | Cybercriminals Are Using a New Method to Steal Google Passwords: Full-Screen Mode
