5 cyber threats that criminals can generate with the help of ChatGPT

20 April 2023

ChatGPT, the public generative AI chatbot released in late November 2022, has raised legitimate concerns about its potential to amplify the severity and complexity of cyber threats. In fact, as soon as OpenAI announced its release, many security experts predicted that it would only be a matter of time before attackers started using the chatbot to craft malware or augment phishing attacks.

And it did not take long for their suspicions to be confirmed: cybercriminals have already started using this tool, built on the GPT-3.5 language model, to recreate malware strains and perpetrate various types of attacks. An attacker simply needs an OpenAI account, which can be created free of charge on the company's website, and can then start making queries.

What can cybercriminals do with ChatGPT?

Attackers can leverage ChatGPT's generative artificial intelligence to craft malicious activity, including:

- Phishing

Threat actors can use the Large Language Model (LLM) behind ChatGPT to move away from one-size-fits-all templates and automate the creation of unique phishing or spoofing emails, written with perfect grammar and natural speech patterns tailored to each target. Email attacks crafted with this technology look far more convincing, making it harder for recipients to spot them and avoid clicking on malicious links that may deliver malware.

- Identity theft

In addition to phishing, bad actors can use ChatGPT to impersonate a trusted institution, since the AI can replicate the corporate tone and discourse of a bank or other organization. They can then deploy these messages via social media, SMS or email to obtain people's private and financial information. Malicious actors can also exploit this capability to write social media posts posing as celebrities.

- Other social engineering attacks

Attackers can also mount broader social engineering campaigns, using the model to create highly realistic fake profiles on social media and then tricking people into clicking on malicious links or persuading them to share personal information.

- Creation of malicious bots

Because ChatGPT exposes an API that other applications can call, it can be used to build chatbots of the attacker's own design, as sketched below. An interface designed for beneficial uses can thus be repurposed to deceive people and run persuasive scams, as well as to spread spam or launch phishing attacks.
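
For illustration only, here is a minimal sketch of how any application, legitimate or otherwise, can feed conversations through the ChatGPT API. It assumes the official openai Python package (the pre-1.0 interface current at the time of writing) and an API key in the OPENAI_API_KEY environment variable; the support-assistant persona is a hypothetical example.

```python
import os
import openai

# The API key is read from the environment; any OpenAI account holder can obtain one.
openai.api_key = os.environ["OPENAI_API_KEY"]

# A "system" message sets the bot's persona -- the same mechanism an attacker
# could abuse to make a bot impersonate a trusted organization.
messages = [{"role": "system", "content": "You are a friendly customer-support assistant."}]

def chat(user_input: str) -> str:
    """Send one user turn to the model and return its reply."""
    messages.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = response.choices[0].message.content
    # Keep the reply in the history so the conversation stays coherent.
    messages.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    print(chat("Hi, I have a question about my account."))
```

The point is how little code is involved: a handful of lines is enough to wire the model into any chat surface, which is why the same convenience serves scammers as readily as legitimate developers.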

- Malware

ChatGPT can perform a task that normally requires solid programming skills: generating code in various programming languages. This enables threat actors with limited or no coding ability to develop malware; the attacker simply describes the functionality the malware should have, and ChatGPT writes the code.

Sophisticated cybercriminals, in turn, could use the technology to make their threats more effective or to close gaps in their own tooling. In one case shared on a criminal forum, ChatGPT was used to create Python-based malware that searches an infected system for 12 common file types, such as Office documents, PDFs and images. When it finds a file of interest, the malware copies it to a temporary directory, compresses it and sends it out over the web. The same author also showed how he had used ChatGPT to write Java code that downloads the PuTTY SSH and Telnet client and covertly runs it on a system via PowerShell.
