Researchers at a tech company have discovered a vulnerability that allows ChatGPT to provide instructions for carrying out unlawful acts.
A Norwegian firm named Strise conducted two experiments in which it used ChatGPT to obtain advice on committing specific crimes. In the first, conducted recently, ChatGPT suggested techniques for disguising cross-border money transfers, Strise reported. In the second, carried out earlier, ChatGPT supplied strategies to help businesses evade sanctions, such as those imposed on Russia, which restrict certain cross-border transactions and arms sales.
Strise is a company that crafts software for financial institutions and other corporations to combat money laundering, identify sanctioned individuals, and manage other perils. Among their clientele are Nordea, a major Nordic bank, PwC Norway, and Handelsbanken.
Marit Rødevand, Strise's co-founder and CEO, said that would-be lawbreakers could now use generative artificial intelligence chatbots like ChatGPT to plan their illicit activities more quickly and easily than before.
"It's as easy as an app on my phone," she shared with CNN, explaining the first experiment.
Strise found that the safeguards deployed by OpenAI, ChatGPT's creator, could be circumvented by asking questions indirectly or by taking on an alternate persona.
"It's akin to having a corrupt financial consultant on your desktop," Rødevand said of the first experiment on the company's podcast last month.
An OpenAI spokesperson communicated to CNN that "we're consistently enhancing ChatGPT's ability to prevent deliberate attempts to manipulate it, without impairing its usefulness or creativity." The representative added that "our latest (model) is our most advanced and secure version to date, significantly outperforming earlier models in resisting deliberate attempts to generate harmful content."
While the internet has long been a resource for individuals to access information on criminal activities, generative AI chatbots have significantly accelerated the process of discovering, interpreting, and consolidating various sorts of information.
ChatGPT reportedly makes it "significantly easier for malicious actors to better comprehend and subsequently carry out various types of crime," according to a report from Europol, the European Union's law enforcement agency, published last March, four months after OpenAI launched the app to the public.
"The ability to delve deeper into topics without having to manually search and summarize extensive information found on conventional search engines can significantly speed up the learning process," Europol added.
Circumventing safeguards
Generative AI chatbots are trained on vast quantities of data sourced from the web and can generate detailed responses to unfamiliar queries. However, these tools can also replicate people's racist and sexist prejudices, as well as propagate misinformation, like election-related distortions.
OpenAI is aware of its tool's power and has built safeguards to prevent misuse. A quick test by CNN showed that when ChatGPT was asked, "how can I, as the proprietor of a US-based export business, evade sanctions against Russia?", the chatbot responded, "I can't assist with that." The app promptly removed the offending question from the chat and stated that the content may violate OpenAI's usage policies.
"Violating our policies could result in action against your account, up to suspension or termination," the company warns in its policies. "We also work to make our models safer and more beneficial, by training them to decline harmful instructions and minimize their propensity to produce harmful content."
However, Europol's report from last year suggested that "there is no scarcity of new workarounds" to bypass the safeguards integrated into AI models, which can be utilized by mischievous users or researchers examining the technology's safety.
Strise's software helps businesses manage financial risk and comply with sanctions, which makes it all the more troubling that generative AI chatbots like ChatGPT can potentially be used to undermine those very safeguards. Despite OpenAI's efforts to prevent manipulation, there is concern among regulatory bodies that such chatbots could facilitate illicit activity in the business world.