Don’t believe all the doom and gloom — AI can be a force for good

AI can be a force for good in a world where you can no longer trust anything you see or hear.

AI-generated deepfake images of former US president Donald Trump being arrested went viral.

It’s official: the research and consultancy firm Gartner recently declared generative AI to be at the very peak of inflated expectations on its well-known hype cycle. As an AI researcher, I’m relieved. Hopefully this means it isn’t going to get any madder than it is.

When ChatGPT was released to the public on the last day of November 2022, it was an overnight success. It was in the hands of more than a million people in the first week. And now, just over a year later, it is available to more than a billion people via Bing, Skype, Snapchat and many other apps. We’ve never seen anything like this.

OpenAI, the company behind ChatGPT, has gone from being your typical loss-making start-up to a company with an annual turnover of more than $US1bn. And chief executive Sam Altman is now on Time magazine’s annual list of the 100 most influential people and is courted by heads of state around the world.

This overnight success was, as the joke goes, decades in the making. The “T” in GPT is the transformer neural network architecture invented at Google in 2017. Transformers built upon the deep learning revolution that Geoffrey Hinton and others have been leading over the past decade. And that deep learning revolution can itself be traced back to Frank Rosenblatt’s perceptron from the late 1950s, Warren McCulloch and Walter Pitts’ artificial neuron from the 1940s, and even the chain rule from Gottfried Leibniz’s 17th-century calculus, the mathematics that underpins back propagation today.

This article is part of a special series by senior journalists to mark The Australian’s 60th anniversary this year.

The impact that AI is starting to have is large. The impact that AI will ultimately have is immense. Comparisons are easy to make. Bigger than fire, electricity or the internet, according to Alphabet chief executive Sundar Pichai. The best or worst thing ever to happen to humanity, according to historian and best-selling author Yuval Harari. Even the end of the human race itself, according to the late Stephen Hawking.

Google CEO Sundar Pichai. Picture: AFP
Stephen Hawking. Picture: Getty Images

The public is, not surprisingly, starting to get nervous. A recent survey by KPMG showed that a majority of the public in 17 countries, including Australia, were either ambivalent or unwilling to trust AI, and that most of them believed that AI regulation was necessary.

Perhaps this should not be surprising when many people working in the field themselves are getting nervous. Last March, more than 1000 tech leaders and AI researchers signed an open letter calling for a six-month pause in developing the most powerful AI systems. And in May, hundreds of my colleagues signed an even shorter and simpler statement warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

For the record, I declined to sign both letters as I view them as alarmist, simplistic and unhelpful. But let me explain the very real concerns behind these calls, how they might affect us over the next decade or two, and how we might address them constructively.

AI is going to cause significant disruption. And this is going to happen perhaps faster than any previous technology-driven change. The Industrial Revolution took many decades to spread out from the northwest of England and take hold across the planet.

The internet took more than a decade to have an impact as people slowly connected and came online. But AI is going to happen overnight. We’ve already put the plumbing in.

It is already clear that AI will cause considerable economic disruption. We’ve seen AI companies worth billions appear from nowhere. Mark Cuban, owner of the Dallas Mavericks and one of the main “sharks” on the ABC reality television series Shark Tank, has predicted that the world’s first trillionaire will be an AI entrepreneur. And Forbes magazine has been even more precise and predicted it will be someone working in the AI healthcare sector.

A 2017 study by PwC estimated that AI will increase the world’s GDP by more than $15 trillion in inflation-adjusted terms by 2030, with growth of about 25 per cent in countries such as China compared to a more modest 15 per cent in countries like the US. A recent report from the Tech Council of Australia and Microsoft estimated AI will add $115bn to Australia’s economy by 2030. Given the economic headwinds facing many of us, this is welcome to hear.

But while AI-generated wealth is going to make some people very rich, others are going to be left ­behind. We’ve already seen inequality within and between countries widen. And technological unemployment will likely cause significant financial pain.

There have been many alarming predictions, such as the famous report that came out a decade ago from the University of Oxford predicting that 47 per cent of jobs in the US were at risk of automation over the next two decades. Ironically, AI (specifically machine learning) was used to compute this estimate. Even the job of predicting which jobs will be automated has been partially automated.

Such predictions do not, however, count the number of new jobs created by new technologies, nor take account of demographic shifts, workforce participation rates or even changes in the hours that we all work. But whatever the net impact on jobs, AI will be disruptive and people must prepare for work to change.

AI can now do many of the cognitive and creative tasks that some white-collar workers thought would keep them safe from automation.

The people most at risk are not necessarily those you might expect. A while back, many thought it would be blue-collar workers replaced by robots. And while car factories are now full of robots that paint, and warehouses full of robots that pick, it is many white-collar workers who are now most fearful. Robots are expensive, and many of those blue-collar jobs were not that well paid in the first place, so it may not make much economic sense to replace them with robots.

But generative AI can now do many of the cognitive and creative tasks that some of those more highly paid white-collar workers thought would keep them safe from automation. Be prepared, then, for a significant hollowing out of the middle. The impact of AI won’t be limited to economic disruption.

Indeed, the societal disruption caused by AI may, I suspect, be even more troubling. We are, for example, about to face a world of misinformation, where you can no longer trust anything you see or hear. We’ve already seen a deepfake image that moved the stock market, and a deepfake video that might have triggered a military coup. This is sure to get much, much worse.

Eventually, technologies such as digital watermarking will be embedded within all our devices to verify the authenticity of anything digital. But in the meantime, expect to be spoofed a lot. You will need to learn to be a lot more sceptical of what you see and hear.

Social media should have been a wake-up call about the ability of technology to hack how people think. AI is going to put this on steroids. I have a small hope that fake AI content on social media will get so bad that we realise social media is merely the place we go to be entertained, and that absolutely nothing on it can be trusted.

This will provide a real opportunity for old-fashioned media to step in and provide the authenticated news that we can trust.

All of this fake AI content will perhaps be just a distraction from what I fear is the greatest heist in history. All of the world’s information – our culture, our science, our ideas, our politics – is being ingested by large language models.

If the courts don’t move quickly and make some bold decisions about fair use and intellectual property, we will find out that a few large technology companies own the sum total of human knowledge. If that isn’t a recipe for the concentration of wealth and power, I’m not sure what is.

But this might not be the worst of it. AI might disrupt humanity itself. As Yuval Harari has been warning us for some time, AI is the perfect technology to hack humanity’s operating system. The dangerous truth is that we can easily change how people think; the trillion-dollar advertising industry is predicated on this fact. And AI can do this manipulation at speed, at scale and at minimal cost. Even if our minds aren’t being hacked, AI may dumb us down and lead to a loss of human creativity.

Who knows? We may one day wake up in some sort of Matrix.

Historian, philosopher, writer and futurologist Yuval Noah Harari. Picture: Olivier Middendorp

This might not even be the worst possible outcome. AI may pose an existential risk. Human life may disappear completely. The other, less intelligent life on the planet hasn’t done very well since humanity took over. According to the UN Convention on Biological Diversity, up to 150 species are lost every day due to our greed and laziness. What hope then is there for humanity if we are no longer the most intelligent species on the planet?

Doomsters have predicted a number of possible existential risks. Superintelligent AI that takes over the planet. AI-invented bioweapons that kill us all. An AI warning system that mistakenly sparks nuclear war. These may sound like science fiction, but Russia already has the Poseidon, an autonomous, nuclear-powered, unmanned submarine that can deliver a dirty cobalt bomb to any coastal location on the planet. The Poseidon could travel at high speed underwater and undetected into Sydney Harbour and take out half the city. How can we be so foolish as to hand this sort of decision to an algorithm?

Given the scale of the risk, the precautionary principle requires us to treat such existential issues seriously. The good news is that we still have a long distance to go before machines match humans in all their cognitive capabilities.

Large language models such as ChatGPT, despite all their fluency, are still remarkably stupid. Ask ChatGPT how to measure out 2 litres of water using just a 3-litre jug and a 1-litre jug, and it will give you a four-step plan that ends “with 2 litres of water in the 1-litre jug”. The correct answer takes just two moves: fill the 3-litre jug, then pour off 1 litre into the smaller jug, leaving 2 litres behind. ChatGPT’s plan, by contrast, ends with twice as much water in the 1-litre jug as it can physically hold.
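
For the technically minded, it is easy to check just how simple the puzzle really is. The short Python sketch below is my own illustration, not anything from ChatGPT or a standard library: the solve helper and its defaults are invented for this example. It brute-forces the puzzle with a breadth-first search over jug states and confirms the two-move answer:

    from collections import deque

    def solve(caps=(3, 1), target=2):
        # Breadth-first search over every reachable tuple of water levels.
        start = (0,) * len(caps)
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, plan = queue.popleft()
            if target in state:
                return plan
            successors = []
            for i, cap in enumerate(caps):
                # Fill jug i to the brim, or empty it completely.
                successors.append((tuple(cap if j == i else v for j, v in enumerate(state)),
                                   f"fill the {cap}-litre jug"))
                successors.append((tuple(0 if j == i else v for j, v in enumerate(state)),
                                   f"empty the {cap}-litre jug"))
                # Pour jug i into jug k until i is empty or k is full.
                for k, kcap in enumerate(caps):
                    if k != i:
                        nxt = list(state)
                        moved = min(nxt[i], kcap - nxt[k])
                        nxt[i] -= moved
                        nxt[k] += moved
                        successors.append((tuple(nxt),
                                           f"pour the {cap}-litre jug into the {kcap}-litre jug"))
            for nxt_state, move in successors:
                if nxt_state not in seen:
                    seen.add(nxt_state)
                    queue.append((nxt_state, plan + [move]))

    print(solve())
    # -> ['fill the 3-litre jug', 'pour the 3-litre jug into the 1-litre jug']
    # Two moves, ending with 2 litres in the 3-litre jug, not the 1-litre one.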

But the bad news is that AI is leaving the research laboratory rapidly – let’s not forget the billion people with access to ChatGPT – and even the limited AI capabilities we have today could be harmful.

When AI is serving up advertisements, there are few harms if AI gets it wrong. But when AI is deciding sentencing, welfare payments, or insurance premiums, there can be real harms. What then can be done? The tech industry has not done a great job of regulating itself so far. Therefore it would be unwise to depend on self-regulation. The open letter calling for a pause failed. There are few incentives to behave well when trillions of dollars are in play.

Some have called for governments to limit access to AI hardware such as the specialised graphics processing units (GPUs) that are now in high demand. This is, I’m afraid, a fool’s errand. We’ve already seen a more than hundred-fold reduction in the compute needed to build large language models. Researchers at Stanford built a ChatGPT clone for just $600 in compute credits. And the ultimate goal is to match the human brain, which runs on just 20W of power.

Similarly, regulation to prevent access to AI software is unlikely to work. In the 1990s, the US government tried to limit access to high-grade encryption software. This completely failed. The British government has announced an AI safety summit at England’s Bletchley Park. I hold out little hope for this.

There are, however, three important levers we must use.

The first is the law. We need to vigorously enforce existing laws. For instance, AI is embedded in physical products, and we have existing laws about product liability. Companies will behave more responsibly in releasing AI into the wild if the courts vigorously enforce existing product liability laws on AI products.

The second lever is the market itself. If many companies have advanced AI, then competition will weed out bad behaviours. We need to have choice. The market is an imperfect beast, and it needs rules to ensure, for instance, that externalities are properly priced. And for the market to work, we need to apply antitrust regulation more forcefully. The tech space has become monopolistic and anti-competitive.

And finally, the third lever is money. In particular, the government must invest more boldly. This will ensure that Australia has the human capital to ride the AI wave, and the fundamental AI research to feed the innovation pipeline.

The future is, I believe, bright. The future is AI.

Professor Toby Walsh is the Chief Scientist, Laureate Fellow and Scientia Professor of AI at UNSW’s AI Institute.
