How criminals are getting help from AI hacking tools
A baffling incident from Hong Kong, where a worker was fooled by deepfake colleagues into sending fraudsters $US25 million, shows the risks that companies are facing.
One of the first malicious generative AI tools closed down not with a bang or a whimper, but a warning.
The creators of WormGPT – which was designed as a ChatGPT equivalent freed of ethical constraints to aid hacking, phishing and fraud – posted a screed on their private chat group in August blaming the media for its demise. But amid the self-justifications and complaints about the scrutiny it attracted, the five anonymous creators made clear how easily anyone else could create a criminal AI.