Cyber criminals are selling access to your ChatGPT account and paying for others with your credit card
Premium ChatGPT accounts are being traded and sold on the dark web in their hundreds. But cybersecurity firms are fighting back.
Premium ChatGPT accounts are being traded and sold on the dark web in their hundreds as threat actors capitalise on the hype and mass adoption of the new technology.
Cybersecurity company Check Point Software Technologies has revealed that cyber criminals are selling lifetime access to premium and enterprise ChatGPT accounts for as little as $59.99, many of which it believes are being paid for with stolen credit cards.
The cost of a ChatGPT premium account is $20 per month.
Some criminals were selling access to hacked accounts, which threat actors were raiding for potentially sensitive information entered by the account owner.
The revelation comes as a Salesforce survey found 90 per cent of C-suite executives in Australia believe they understand generative AI and can safely use the platforms without exposing themselves or their company to a data breach.
A number of top technologists and cybersecurity experts say the overly confident view of the new technology is likely due to a “knowledge gap”, and that C-suite executives should keep in mind they are often the most at risk and the most likely to be targeted by cyber criminals.
Check Point Software cybersecurity evangelist Ashwin Ram said cybersecurity firms were not denying the benefits of AI, which many people have embraced while overlooking the harm it can cause when used incorrectly, but were concerned about the threats posed by use of the platforms.
“We’re seeing hundreds of accounts being traded. It’s not one or two, there are hundreds of accounts being traded on the dark web now,” Mr Ram said.
“Organisations need to be very careful. Especially if they’ve given their staff access to corporate accounts. If those accounts are compromised, that means threat actors will also have access to your chat history.”
Implementing guidelines which dictate the safe use of ChatGPT was the only responsible way to use the technology in a workplace, Mr Ram said.
“With ChatGPT, there have already been two security incidents where there was accidental exposure of data from specific users which was shared with others. That’s already happened, it’s no longer a case of it could happen, we’ve seen it,” he said.
“This software can potentially have vulnerabilities and that’s what threat actors are trying to exploit. If you’re putting personal data into these platforms, be aware that threat actors are trying to find it.”
On the confidence levels of local C-suite leaders, Mr Ram said: “It may be that people don’t understand all of the risks associated with these platforms.”
Check Point Software had found that one in 25 registered domain names related to ChatGPT or OpenAI was “either suspicious or malicious”. “They are being used for phishing campaigns and to harvest credentials,” Mr Ram said.
Cybersecurity companies have for months been warning users about the potential threat generative AI platforms pose to companies when employees use the technology regularly.
Salesforce senior vice president of solution and customer advisory Rowena Westphalen, who was behind the study of C-suite executives, said one of the debates the company’s customers were having right now was whether to build their own large language models or to outsource.
“There’s almost an assumption that if you’re going to use a public LLM that, you know, the data that you’ve got is out there,” she said.
“In our regulated industries, when we’ve got personal or private information and customer information, you’ve got to be really careful about what you share.”
The downside of building your own large language model was that it was quite costly. Salesforce allowed customers to build their own LLM or provided a prompt service that helped customers navigate LLMs accurately while preventing the LLM from learning from those prompts, she said.
Darktrace is one of three companies to have evolved their cybersecurity products to prevent workers from entering sensitive information into generative AI engines.
The capabilities form part of its DETECT and RESPOND products, which Oakley Cox, the company’s APAC analyst technical director, said had in one case stopped an employee from uploading one gigabyte of sensitive data to a generative AI engine.
“C-suites or VIP employees within an organisation are certainly an attack target. Whaling is the industry term where you’re fishing for the big fish, the whales,” he said.
“Part of that is playing off their role with an organisation. Threat actors will play off their confidence in that organisation in an attempt to trick them into clicking a link or inputting particularly sensitive data.”
Zscaler is another cybersecurity firm that has developed what it calls a data loss prevention tool, which restricts sensitive information from being entered into ChatGPT.
“This technology is built to allow companies to now use generative AI with a certain amount of boundary lines set,” said director of sales engineering Rajiv Niles.
Mr Niles said organisations needed to understand that shadow IT – a term which describes employees using SaaS services that haven’t been approved by IT – was a big threat.
“We’re now bringing visibility into what we call unsanctioned transactions,” he said, adding that Zscaler was constantly assessing the generative AI tools its clients used and tracking how the platforms were being used to detect which employees may pose the biggest risk to the company.
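Vendors rarely publish the internals of these controls, but the general idea can be sketched in a few lines: inspect an outbound prompt for patterns that look like sensitive data and block it before it reaches a generative AI service. The Python sketch below is purely illustrative; the pattern names, rules and responses are assumptions for demonstration and do not represent Zscaler’s product logic.

```python
import re

# Illustrative prompt-level data loss prevention (DLP) check.
# The patterns below are assumptions for demonstration only; real products use
# far richer detection (classifiers, exact-data matching, file fingerprinting,
# user risk scoring) rather than three regular expressions.
SENSITIVE_PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


def send_to_llm(prompt: str) -> str:
    """Decide whether a prompt may be forwarded to a generative AI service."""
    findings = check_prompt(prompt)
    if findings:
        # A real gateway would log this for the security team and tell the
        # user which policy was triggered.
        return "Blocked: prompt appears to contain " + ", ".join(findings) + "."
    return "Allowed: prompt forwarded to the model."  # placeholder for the API call


if __name__ == "__main__":
    print(send_to_llm("Summarise this contract for card 4111 1111 1111 1111"))
    print(send_to_llm("Write a polite out-of-office reply"))
```

Commercial tools layer this kind of pattern matching with context about the user, the destination and the classification of the data, which is what allows them to distinguish routine prompts from risky ones.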
Netskope’s data loss prevention tool had gone a step further and was able to scan images for potentially sensitive information and prevent them from being entered into generative AI products.
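Netskope has not detailed how its image inspection works, but a plausible, simplified approach is to run optical character recognition over an image before it is uploaded and apply the same kind of pattern checks to the extracted text. The sketch below assumes the open-source Pillow and pytesseract libraries (with a local Tesseract install) and a single illustrative card-number rule; it demonstrates the technique rather than Netskope’s implementation.

```python
import re

from PIL import Image      # Pillow, for loading the screenshot or photo
import pytesseract         # OCR wrapper around the Tesseract engine

# Single illustrative rule; a real product would carry a full policy set.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")


def image_contains_sensitive_text(path: str) -> bool:
    """OCR the image and check the extracted text for card-number-like strings."""
    text = pytesseract.image_to_string(Image.open(path))
    return bool(CARD_PATTERN.search(text))


# A gateway sitting in front of a generative AI upload could then refuse the file:
# if image_contains_sensitive_text("screenshot.png"):
#     raise PermissionError("Upload blocked: image appears to contain card data")
```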
“Adversaries are really good at figuring out how to take normal processes and abuse them,” said Netskope chief information security officer David Fairman.
On C-suite executives being confident in the use of generative AI, he said that may be a “narrow view”.
“Even technologists and cybersecurity professionals are still grappling with this and trying to understand this very fast moving technology,” he said.