How ChatGPT will change cybersecurity

27 January 2023

Kaspersky is investigating how public access to ChatGPT might change the established rules of the cybersecurity world. The investigation comes a few months after OpenAI released ChatGPT, one of the most powerful AI language models to date. ChatGPT can explain complex scientific concepts better than many professors, write music, and produce almost any text a user asks for.

ChatGPT is an artificial intelligence language model that generates persuasive text that is hard to distinguish from what humans write. Cybercriminals are therefore already trying to leverage this technology for phishing attacks. Previously, the main obstacle to mass targeted phishing campaigns was the high cost of writing each individual email. ChatGPT drastically changes that balance, because it allows attackers to create convincing, personalized phishing messages on an industrial scale. It can even add verisimilitude to a scam by generating fake correspondence that appears to have been exchanged between employees. Unfortunately, this means the number of successful phishing attacks may increase.

Many users have already found that ChatGPT is capable of generating code, including malicious code. Creating a simple infostealer becomes doable even without programming skills. However, diligent users have nothing to fear: security solutions detect and neutralize bot-written code just as quickly as human-created malware. While some analysts are concerned that ChatGPT could even generate unique malware for each specific victim, such samples would still exhibit malicious behavior that a security solution is likely to detect. Additionally, bot-written malware tends to contain subtle bugs and logical inconsistencies, meaning that full automation of malware coding has yet to be achieved.

While the tool can be useful for attackers, defenders can also benefit from it. For example, ChatGPT can already quickly explain what a particular piece of code does. For SOC teams, whose constantly overworked analysts have to spend as little time as possible on each incident, it could act as a tool to speed up triage and analysis. In the future, users will likely see several specialized products: a reverse-engineering model for better understanding code, a CTF-solving model, a vulnerability research model, and more.
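As an illustration of the SOC use case, the sketch below asks a ChatGPT-style model to summarize what a suspicious snippet appears to do. It is a minimal example, assuming the openai Python package (v1.x), an API key in the environment, and an illustrative model name and prompt; it is not a Kaspersky tool or a recommended workflow, just one way an analyst might wire such a query into triage.

```python
# Hedged sketch: ask an LLM to explain a suspicious code snippet during triage.
# Assumes the `openai` Python package (v1.x) with OPENAI_API_KEY set in the
# environment; the model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def explain_snippet(snippet: str, model: str = "gpt-3.5-turbo") -> str:
    """Return a plain-language summary of what `snippet` appears to do."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": "You are assisting a SOC analyst. Summarize what the "
                           "following code does and flag anything suspicious.",
            },
            {"role": "user", "content": snippet},
        ],
        temperature=0,  # deterministic output for repeatable triage notes
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Placeholder suspicious command line; replace with the artifact under review.
    sample = "powershell -nop -w hidden -enc <base64 payload here>"
    print(explain_snippet(sample))
```

In practice, the model's summary is a starting point rather than a verdict: the analyst still verifies it against the actual code and the organization's detection tooling.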
