WormGPT: Hackers’ New Way of Phishing You


ChatGPT, released in November 2022, created buzz as a highly interactive AI. It can converse with users and assist people across many different fields, and its usage has only expanded since its launch. Now hackers have unleashed a malicious counterpart to OpenAI’s ChatGPT called WormGPT, which is being used to craft effective email phishing attacks on thousands of victims.

WormGPT is based on GPT-J, a large language model developed by EleutherAI in 2021, and is designed specifically for malicious activities, according to a report by cybersecurity firm SlashNext. Its features include unlimited character support, chat memory retention, and code formatting, and it has been trained on malware-related datasets.

Cybercriminals are now using WormGPT to launch a type of phishing attack known as a Business Email Compromise (BEC) attack.

ChatGPT has guardrails in place to protect against unlawful or nefarious use cases. WormGPT has no such guardrails, so it can also be used to create malware.

Phishing is commonly executed via email, text messages, or social media posts under a false name. In a business email compromise attack, an attacker poses as a company executive or employee and tricks the target into sending money or sensitive information in exchange for some offered service. Hackers use WormGPT to create carefully crafted inputs designed to manipulate interfaces like ChatGPT into disclosing sensitive information, producing inappropriate content, or executing harmful code. Malware code can infect victims’ devices with viruses, worms, trojans, ransomware, spyware, or keyloggers that damage, steal, or encrypt their data. WormGPT can also write convincing, human-like emails, making fraudulent messages harder to spot. By automating the creation of highly convincing emails personalized to the recipient, cybercriminals can increase an attack’s success rate.

How to Safeguard Against It?

BEC-Specific Training: Companies should develop extensive, regularly updated training programs aimed at countering BEC attacks, especially those enhanced by AI. Such programs educate employees on the nature of BEC threats, how AI is used to augment them, and the tactics employed by attackers.

Enhanced Email Verification Measures: To fortify against AI-driven BEC attacks, organisations should enforce stringent email verification processes. Implement systems that automatically alert when emails originating outside the organisation impersonate internal executives or vendors, and use email systems that flag messages containing keywords linked to BEC, such as “urgent”, “sensitive”, or “wire transfer”. Ensure that potentially malicious emails are subjected to thorough examination before any action is taken.
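The flagging rules above can be sketched in a few lines of Python. This is a minimal illustration, not a production filter: the domain, executive names, and keyword list are hypothetical placeholders, and a real deployment would layer this on top of gateway-level authentication such as SPF and DMARC.

```python
# Minimal sketch of BEC flagging rules. All names and domains below are
# hypothetical; a real system would pull them from directory services.
BEC_KEYWORDS = {"urgent", "sensitive", "wire transfer"}
INTERNAL_DOMAIN = "example.com"          # hypothetical company domain
EXECUTIVE_NAMES = {"jane doe", "john smith"}  # hypothetical executives

def flag_email(sender: str, display_name: str, body: str) -> list[str]:
    """Return a list of reasons this email should be held for review."""
    reasons = []
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    # Rule 1: external sender using an internal executive's display name
    if sender_domain != INTERNAL_DOMAIN and display_name.lower() in EXECUTIVE_NAMES:
        reasons.append("external sender impersonating an internal executive")
    # Rule 2: body contains keywords commonly linked to BEC attempts
    hits = sorted(kw for kw in BEC_KEYWORDS if kw in body.lower())
    if hits:
        reasons.append("BEC keywords: " + ", ".join(hits))
    return reasons
```

An email from an outside domain whose display name matches an executive, with a body mentioning a wire transfer, would trigger both rules; a routine internal message would return an empty list and pass through untouched.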

Will it Stop at This?

No; such malicious tools can harm society at large in various ways. For example, fake news can spread misinformation and disinformation that influence public opinion and undermine democracy. Deepfakes can create realistic but false images or videos that impersonate or defame people. Spam can clog networks and servers with unsolicited messages that waste resources and bandwidth. And such hacking tools can infiltrate and compromise the device or account of the user who runs them, via exploits, brute-force attacks, or social engineering techniques.
