
Cybercriminals don't think much of ChatGPT

Overrated and superfluous

A criminal chatbot with ChatGPT quality will hopefully remain just a dream of cybercriminals.


Security service providers are investigating how cybercriminals use ChatGPT & Co. It turns out that although they do use large language models, they consider their capabilities overrated, even superfluous. The combination of criminals and ChatGPT is dangerous nonetheless.

Exactly one year ago, the chatbot ChatGPT was released and caused an immediate stir. The capabilities of the underlying large language model (LLM) are indeed enormous, not least because of its versatility. ChatGPT can also write and improve code. From the outset, there were therefore fears that the AI could be used to generate malware.

However, criminal hackers have hardly used ChatGPT for this purpose, as two security companies have found in separate investigations. Experts among the criminals even consider the model overrated, if not superfluous, writes Sophos in a recent report. Trend Micro had previously come to a similar conclusion in a study.

To gain an overview of criminal AI use, the security service providers analyzed threads in darknet forums in which cybercriminals discuss ChatGPT & Co. Sophos found that AI is not a hot topic there: in two forums, the company's experts found just around 100 posts on the subject, compared with almost 1,000 on cryptocurrencies.

Professionals have concerns

Among other things, the Sophos experts suspect that many cybercriminals still see generative AI as being in its infancy. Discussing it on darknet forums is also less rewarding than on social networks, where such posts attract far more attention.

According to Sophos, many criminals also fear that using AI could leave traces that antivirus programs can detect. Software that monitors endpoints in networks in real time (Endpoint Detection and Response, EDR) could also catch them out.

The objections to AI are well-founded. The forum members may be criminals, but among them are very capable hackers who know how to assess the complicated new technology. As Trend Micro and Sophos discovered, they hardly ever use ChatGPT & Co. to generate malware.

Improving code and social engineering

Instead, like other developers, they mainly use large language models to improve their code. Criminals also like to use ChatGPT for social engineering in order to make their spam and phishing campaigns more successful. Among other things, they need deceptively genuine emails that entice users to open infected attachments or click on dangerous links.

You shouldn't count on recognizing a phishing email by its language errors, ESET expert Cameron Camp wrote back in the spring. The AI probably has a better command of a language than native-speaking users do.

Jailbreaks are a big issue

Various filters are designed to prevent ChatGPT from answering harmful, illegal or inappropriate questions, which are of course exactly the questions that interest criminals, not least for planning crimes in the real world.

A big topic in the darknet forums is therefore how to get around these barriers. So-called jailbreaks, which grant practically unrestricted access, are in particularly high demand. DAN, which stands for "Do Anything Now", has been notorious since the early days of ChatGPT. The prompt is constantly being adapted; it is a kind of cat-and-mouse game between the ChatGPT developers and the jailbreakers.

Even more frequently, hacked ChatGPT accounts are offered for sale in the darknet forums, writes Sophos. That is hardly surprising. What is unclear, however, is which target group these accounts are aimed at and what buyers can do with them. The experts speculate that buyers may mine previous queries for confidential information. They could also use the access to run their own queries or to test the credentials for password reuse elsewhere.

Criminal LLMs mainly hot air so far

The security experts find it particularly interesting that criminal models were and are also offered in the forums. The best known are WormGPT and FraudGPT.

WormGPT was released in June of this year and was allegedly based on the GPT-J model. The chatbot cost 100 euros per month or 550 euros per year. However, the developer discontinued it in August - allegedly because of the huge media attention.

However, he also wrote: "Ultimately, WormGPT is nothing more than an unrestricted ChatGPT. Anyone on the internet can use a known jailbreak technique and achieve the same, if not better, results." Several forum members suspected fraud, writes Sophos.

The situation is similar with FraudGPT. There are also doubts about the capabilities of the model. One member asked in the chat whether it could actually generate malware that antivirus software cannot detect. He received a clear answer.

All malware is detected eventually

No, that is impossible, came the reply. LLMs are simply not capable of it: they can neither translate code nor "understand" it beyond the language in which it was written. By the same token, the author wrote, there is no such thing as undetectable malware. Sooner or later, every piece of malicious code is discovered.
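That claim is easiest to picture with the simplest layer of antivirus detection: comparing file fingerprints against a database of known samples. The following is a minimal illustrative sketch in Python, not any vendor's actual engine; the "samples" directory and the signature entry are placeholders (real products add heuristics and the behavioral monitoring of the EDR tools mentioned above):

import hashlib
from pathlib import Path

# Hypothetical signature database of SHA-256 fingerprints of known
# malware samples. The entry below is a placeholder, not a real signature.
KNOWN_BAD_SHA256 = {
    "0" * 64,
}

def scan_file(path: Path) -> bool:
    # Flag the file if its fingerprint matches a known sample.
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_SHA256

if __name__ == "__main__":
    for file in Path("samples").rglob("*"):  # placeholder scan target
        if file.is_file() and scan_file(file):
            print(f"Known malware signature: {file}")

As soon as a sample circulates, defenders can add its fingerprint, or a behavioral pattern, to such a database, which is why "undetectable" in practice only ever means "not detected yet".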

There are similar doubts about the authenticity and/or performance of every other criminal chatbot that has surfaced to date. Trend Micro writes that, despite all the controversy, it considers WormGPT most likely the only example of an actual custom LLM.

Experienced criminals wait and see, script kiddies celebrate

For savvy cybercriminals, "there is no need to develop a separate LLM like ChatGPT, as ChatGPT already works well enough for their needs," Trend Micro concludes. They use AI in much the same way as legitimate programmers. Sophos likewise found hardly any evidence of criminals using AI to generate malware; it is only used for individual attack tools. However, this could change in the future.

A bonus for experienced cybercriminals is that they can now improve their code with little effort and also write malware in programming languages they are less proficient in. Sophos also noted that they use AI to complete everyday programming tasks or to improve their forums.

Both security companies do see one problem: the use of AI has lowered the barrier to entry for becoming a criminal hacker. According to Trend Micro, anyone with a broken moral compass can now start developing malware without any programming knowledge.


Source: www.ntv.de
