NewsBite

‘This is happening very, very fast’: The good, the bad and the future of autonomous AI

If you thought ChatGPT was impressive, it is only the beginning. Semi-autonomous artificial intelligence, such as AutoGPT, is set to change everything.


Imagine an artificial intelligence tool that not only responds to your instructions but can instruct itself and execute tasks without the need for human approval.

It’s not science fiction – the tech is already here with semi-autonomous agents such as AutoGPT.

Built on the same technology as OpenAI’s wildly popular chatbot ChatGPT, AutoGPT is more powerful because it is connected to the internet and does not wait to be told what to do next.

It can take a user’s broad goal then break it down into sub-tasks and execute each one – such as reading documents, searching the internet, creating spreadsheets and even downloading software – to eventually produce an output without any further prompting from the user.
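In code terms, that goal-to-sub-tasks cycle can be sketched roughly as below. This is a minimal illustration only, not AutoGPT’s actual implementation: the helper names (`plan_subtasks`, `execute`) are hypothetical stand-ins for what, in a real agent, would be calls to a language model and to tools such as web search or file access.

```python
# Hypothetical sketch of an AutoGPT-style agent loop.
# plan_subtasks() and execute() are stubs standing in for
# real LLM calls and tool use (search, file I/O, running code).

def plan_subtasks(goal):
    # Stand-in for an LLM call that decomposes a broad goal into steps.
    return [f"research: {goal}",
            f"draft output for: {goal}",
            f"review: {goal}"]

def execute(subtask):
    # Stand-in for tool use; returns a result for each step.
    return f"result of '{subtask}'"

def run_agent(goal):
    """Break a broad goal into sub-tasks and run each one
    without asking the user for further prompts."""
    results = []
    for subtask in plan_subtasks(goal):
        results.append(execute(subtask))
    return results

if __name__ == "__main__":
    for line in run_agent("market research for a cafe idea"):
        print(line)
```

The key difference from a chatbot is the loop itself: the user supplies only the top-level goal, and the agent decides and performs every intermediate step.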

For any Terminator fans, it may sound eerily similar to Skynet, but experts insist AutoGPT is not self-aware and “generalised AI” with the ability to think is decades away.

“We haven’t got to that Skynet stage,” University of New South Wales Business School associate professor Dr Rob Nicholls said.

“Generative AI products (such as ChatGPT and AutoGPT) appear to be very smart because they are very smart but they are not thinking products.

“Generalised AI is theoretically possible but we don’t know how to get to it at the moment.”

Don’t worry, Skynet is not here. Yet. Picture: Supplied.

HOW AUTOGPT IS BEING USED

Already, people are using AutoGPT to help create apps and websites, conduct market research for business ideas, automate job searches and generate Dungeons and Dragons campaigns.

Dr Nicholls has personally used it to come up with questions for his university students that cannot be answered by generative AI – essentially using AutoGPT to render itself useless.

While his experimentation has produced results, the bot still “takes a lot of tweaking”.

He is confident, however, this will quickly improve.

“It learns from what it’s asked and by its mistakes,” he said.

“When somebody else comes and asks a similar question, it will require less tweaking and if lots of people are asking similar questions, the amount of tweaking for the 100th person will be quite minimal.”

UNSW Business School associate professor Dr Rob Nicholls. Picture: Supplied

ChatGPT took just two months to reach 100 million users, allowing it to improve very rapidly.

“It was almost like you were asking a child in December and now you are talking to a well-educated adult,” Dr Nicholls said.

In the future, experts hope AI will improve everything from transport (safer autonomous vehicles) to healthcare (earlier cancer diagnoses), law and order (faster crime detection) and education, where AI assistants could relieve teachers of the repetitive work that leads to burnout.

Dr Nicholls hoped it would also learn to sort fact from fiction, addressing the issue of misinformation and disinformation.

THE GOOD AND THE BAD OF GENERATIVE AI

University of Adelaide’s Australian Institute of Machine Learning founding director Professor Anton van den Hengel said AI would give countries and companies an economic advantage.

“It’s the single technology that will improve productivity in every Australian industry,” he said.

Global research by professional services company Accenture projected AI could double annual economic growth rates and boost labour productivity by up to 40 per cent by 2035.

Professor Anton van den Hengel is urging Australia to invest in AI for its productivity benefits. Picture: Calum Robertson.

But AI could also have a dark side if it was used for mischievous projects or allowed to run wild with “hallucinations” – inaccurate or completely fabricated responses presented as fact.

AutoGPT could be used to speed up malware creation and increase the reach of scams, such as phishing emails that attempt to steal personal details.

Still, Dr Nicholls – also deputy director of the UNSW Institute for Cyber Security – did not discourage people from experimenting with AutoGPT.

He only warned against over-sharing personal data, or that of an employer.

“When all of the large language models (ChatGPT, AutoGPT, etc) collect information off the internet, primarily they try to de-identify information, but if you are continually feeding information about you in, there is a risk that will form part of the training model,” he said.

“Anything you wouldn’t be comfortable saying to a stranger, you shouldn’t be saying to a generative AI.”

He was not concerned, however, that AutoGPT would be hacked to execute tasks, such as downloading malware, without a user’s knowledge.

“All the players creating these systems – OpenAI, Microsoft, Google, Meta, etc. – they see this as a huge reputational risk (and) provide protection against the potential for hacking,” he said.

OpenAI chief executive Sam Altman, whose company’s reputation is staked on its cybersecurity. Picture: Jason Redmond / AFP

While AutoGPT was created by video game developer Toran Bruce Richards rather than one of the major tech companies, it was based on OpenAI’s GPT-4 (the model that powers ChatGPT) and Dr Nicholls said access to this was designed to minimise risk to users.

AutoGPT itself may not be so confident, though.

Its own disclaimer included an agreement that “by using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise”.

It also said that “as an autonomous experiment, Auto-GPT may generate content or take actions that are not in line with real-world business practices or legal requirements.”

To address AI’s potential issues, regulation has become a hot topic across the world, with governments scrambling to control the technology that is evolving faster than they can keep up.

Professor van den Hengel said the only way Australia would have any control over the AI used here would be to develop its own.

“At the moment, any AI we use is being developed overseas so anything we say is going to be irrelevant,” he said.

AUSSIES GETTING ON BOARD

Melbourne’s Riley James – co-founder and chief executive of DeFi Link, an incubator for projects in emerging technologies – has not only been experimenting with AutoGPT but his team is about to launch its own free, web-based version, Xeus.AI.

“If people want to go crazy with it, they can go there and give it a go,” he said.

“It’s going to be hosted in the interface itself so it’s on our server, not on (the user’s) computer, so you don’t have to download anything.”

Riley James is an early adopter of AutoGPT. Picture: Supplied

Aside from developing Xeus.AI, Mr James’ team had mostly been using AutoGPT to produce website and social media content.

“AutoGPT can give you a response that’s 10 times better than ChatGPT in its most simple version because it has the ability to go and research things,” he said.

“It will write an article then it can act like the editor and review the article.

“We’ve done a lot of that mostly for creating SEO (search engine optimisation) content or creating content for websites or Twitter posts or LinkedIn posts.

“There’s not much editing involved once you get the prompts right.”
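The “write an article, then act as the editor” workflow Mr James describes is a generate-and-review loop, which can be sketched as follows. This is an illustrative outline only: the `llm()` function is a hypothetical stub, where a real system would call a language model with each prompt.

```python
# Hypothetical sketch of the draft-then-edit pattern: one model pass
# writes the content, a second pass reviews and revises it.

def llm(prompt):
    # Stub standing in for a real language-model call.
    if prompt.startswith("Draft"):
        return "Draft article about " + prompt.split(": ", 1)[1]
    return "Revised: " + prompt.split(": ", 1)[1]

def write_and_edit(topic):
    """Generate a draft, then prompt the model to act as its own editor."""
    draft = llm(f"Draft an article on: {topic}")
    revised = llm(f"Act as an editor and improve: {draft}")
    return revised
```

Chaining the two roles is what reduces the human editing needed at the end: the review pass catches much of what a person would otherwise fix by hand.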

They had also dabbled in building software with AutoGPT, but Mr James said the self-coding was not yet up to standard.

He said they had also been careful to protect themselves while experimenting with the tool, given its ability to execute tasks without explicit permission.

“All of our team have second computers that are just throwaway computers,” he said.

“There’s going to be a lot of, I think, difficulties with hacking and cyber security – it’s a can of worms that we’re opening up with AI, if it can do its own things and download stuff and go crazy.”

In the not-so-distant future, Mr James expected to see different AI tools and plug-ins connect with each other to work as a decentralised autonomous organisation (DAO) that would be “AutoGPT on steroids”.

“This is happening very, very fast … this kind of tech probably only came about in the last 30 days,” he said.

“You can already get it to execute trades if you’re trading crypto or shares.

“You can already connect it to robots now, so people have created an interface where the chat functionality then talks to the code base of the robot and … tell it to actually do commands and walk around the house.

“That’s happening very quickly, so I think you’ll have ‘I, Robot’ type droids walking around very soon – probably have some beta versions of that in, like, six months.”

Originally published as ‘This is happening very, very fast’: The good, the bad and the future of autonomous AI




Original URL: https://www.adelaidenow.com.au/news/national/this-is-happening-very-very-fast-the-good-the-bad-and-the-future-of-autonomous-ai/news-story/d2b4ea819c47faf35955c8a3d4ef21bc