NewsBite

Why easy to manipulate prompt engineering is the new threat

Cybersecurity experts say the effort it takes to manipulate a large language model into providing false, misleading or malicious content is minimal.

Cybersecurity experts say prompt engineering is on the rise. Picture: AFP
Cybersecurity experts say prompt engineering is on the rise. Picture: AFP

Cybersecurity experts have warned that prompt engineering may be the new social engineering, airing concerns after finding cyber criminals were intentionally distorting large language models such as ChatGPT into providing false, misleading or malicious information.

Prompt engineering is the practice of developing and optimising prompts for better utilisation of LLMSs.

While the discipline itself is not intended for malicious purposes, bad-threat actors were increasingly moving toward using prompt engineering to fleece people, Netskope chief security officer David Fairman said.

“Prompt engineering is a way to inject malicious behaviour or instil a malicious outcome,” he said.

“The misuse of prompt engineering can lead to a large language model, or an AI application, performing in a way that it’s not meant to, generating incorrect output or performing a task programmed not to perform.”

Netskope had seen some cases in which LLMs were being manipulated so that when a user asked for a translation of a language, the AI engine gave incorrect information.

CheckPoint Software Technologies, when testing ChatGPT, had gone as far as to see whether the generative AI system would produce scam emails, said cybersecurity evangelist Ashwin Ram.

The answer was yes. With a little prompting, ChatGPT demonstrated it understood what type of emails scammers could use to get an employee to click on and went as far as writing an email under one of its suggestions.

“Our observation is that despite the presence of safeguards, some restrictions can be easily circumvented, enabling threat actors to achieve their objectives without significant hindrance,” Mr Ram said.

“Cyber criminals are constantly searching for new ways to exploit and manipulate. Prompt engineering is yet another avenue for threat actors to carry out social engineering.”

Mr Ram said the CheckPoint research team had “identified numerous social engineering campaigns that imitate the ChatGPT website, aiming to lure users into downloading malicious files or revealing sensitive information”.

“The frequency of these attack attempts has steadily increased over the past few months, with tens of thousands of attempts to access these malicious ChatGPT websites,” he said.

Sam Altman – AI regulation in Australia is on a good path

Mr Fairman said Netskope had monitored a rise in the number of bad-threat actors using prompt engineering for malicious reasons over the past few months.

“I think the challenge today is that as an industry, a lot of us are still learning about the power of this technology and how it can be misused,” he said.

Mr Fairman and Mr Ram said prompt engineering in all accounts was a low-effort, high-yield way for bad-threat actors to manipulate systems into providing loopholes which they could later exploit.

“All that is required by a malicious actor are the interpersonal skills to persuade and manipulate generative AI engines,” Mr Ram said.

Mr Fairman added: “I think the challenge to that is what we’re facing as an industry. A lot of us are still learning in terms of the power of this technology and how it can be misused.”

Another malicious aspect of prompt engineering was that cyber criminals intentionally distorted code to provide entry into software applications, said Sophos Asia Pacific chief technology officer Aaron Bugal.

Mr Bugal said he believed prompt engineering would be “short lived” as the developers of LLMs would learn from early scandals and implement guard rails.

“The builders of these LLMS will restrict abuses of prompt engineering. It’s very much a game of cat and mouse,” he said.

Mr Bugal said his firm had seen a number of cybersecurity organisations which had begun to restrict the use of generative AI applications in a bid to prevent being the victims of prompt engineering, among other malicious actions aimed at generative AI products.

“There’s a lot of apps appearing in various app stores that claim to provide access to an LLM for output but many of these are just harvesting information from the users of that service. Others are charging through the roof and fleecing customers,” he said.

Joseph Lam
Joseph LamReporter

Joseph Lam is a technology and property reporter at The Australian. He joined the national daily in 2019 after he cut his teeth as a freelancer across publications in Australia, Hong Kong and Thailand.

Add your comment to this story

To join the conversation, please Don't have an account? Register

Join the conversation, you are commenting as Logout

Original URL: https://www.theaustralian.com.au/business/technology/why-easy-to-manipulate-prompt-engineering-is-the-new-threat/news-story/3bf384d2a2f39aa65bf0a40f63035268