
Opinion

Time is running out to subdue AI’s overwhelming power

Jeff Bleich had a problem with his cat. The former US ambassador to Australia wanted to send Oscar a message: Stop hunting small animals and bringing them into the house.

Illustration by Dionne Gain

He decided to ask the computer program ChatGPT – hailed as the mainstream advent of artificial intelligence – to write the message for him. In an extra twist, Bleich asked the chatbot to write it as Shakespeare would. In a matter of moments he got back a sonnet:

Oscar, thou mischievous feline of mine/ Thy hunting and slaying of birds and mice

Makes me shudder with fear and disgust/ Prithee, desist from thy bloodthirsty deeds,

Lest thou anger the gods and suffer their wrath/ Thy playful nature is a joy to behold,

But thy wanton cruelty doth not please me/ Cease thy savage behaviour and become

A gentle and loving companion once more/ And if thou dost persist in thy evil ways,

I fear thou mayest be banished from this house/ And forced to roam the cold and lonely streets.

Think on these things, dear Oscar, and change thy ways/ Before it is too late, and thou art doomed to a miserable fate.


Bleich’s whimsy starkly illustrates the education system’s dilemma. There is no longer any way a teacher or lecturer can be confident that students’ work is their own – existing plagiarism checkers have no defence against ChatGPT.

Microsoft billionaire Bill Gates says AI technology is now “targeting” white-collar jobs. Credit: Peter Morris

So far, three Australian states have blocked its use on school internet networks – NSW, Queensland and Tasmania – and so have many other school authorities internationally. But they can’t block it everywhere. As Elon Musk remarked in response to its arrival: “Goodbye homework.”

Jeremy Weinstein, a professor at Stanford University in the heart of Silicon Valley and co-author of System Error: Where Big Tech Went Wrong and How We Can Reboot, points out that the maker of ChatGPT – a San Francisco firm called OpenAI – “is only one company and there are tens of companies developing these large language models”.

Weinstein says that “it’s obviously a revolution” and that “like many of the tech advances before, the world is going to be completely different” as a result.

In an anonymous survey of some 4500 Stanford students conducted this month by the campus newspaper, The Stanford Daily, 17 per cent said they had used ChatGPT in their final exams and assignments, even when doing so violated ethics codes.

“One of the costs this imposes is on teachers and the education system – we are in the moment where teachers and school districts are being overwhelmed,” Weinstein tells me. “Are we approaching this new moment with concern for potential harms? We absolutely are not.”

It should be possible to integrate a program like ChatGPT into teaching, much as the once new-fangled calculator was eventually integrated into the teaching of maths. But schools, companies and regulators are unready, says Weinstein: “Do any companies or governments have the infrastructure to allow the benefits of this technology and mitigate its potential harms? We don’t have standards or codes in companies, and we have a race between disruption and democracy – and democracy’s always losing.”

The world is at a “seatbelt moment” with machine learning, just as it was when that basic safety device was imposed on the car industry in the 1960s and ’70s – but so far, no one is installing the seatbelts. “Government is largely absent from the regulatory landscape across the board in tech,” says Weinstein. “In AI we are reliant on self-regulation. It gives us things like platform moderation in the hands of a single individual, which is deeply uncomfortable for many people” – a reference to Musk’s control of Twitter.

It also produces perverse outcomes, like the hiring system Amazon built. The machine-learning program devoured all of Amazon’s existing data about hiring practices and applied it to new job applicants. The result was a bot that systematically discriminated against women. The bot, unfixable, had to be junked.


It’s one of the limitations of machine learning that it learns only from the data it’s trained on. So ChatGPT can draw, impressively broadly and quickly, on what has been published on the internet, but it’s only as accurate as what’s on the internet. And we all know how accurate that is. Caveat emptor.

The dilemma imposed by ChatGPT extends far beyond education. “There will be a lot of angst about the fact that [artificial intelligence] is targeting white-collar work,” Bill Gates predicted during a visit to Sydney last week.

It was already targeting blue-collar work, as Bleich well knows. Barack Obama’s representative in Australia from 2009 to 2013 is now the chief legal officer of Cruise, a company that already has a hundred driverless taxis on the streets of San Francisco offering rides to the public.

Driverless vehicles are yet to be perfected but already have a better safety record than humans behind the wheel. The implications are plain for the millions of people who make a living as delivery drivers, couriers, truckies, cabbies, Uber drivers.


The release of ChatGPT is now sending a chill through the smug set. Lawyers, doctors, journalists, academics all face the prospect of serious disruption as machine learning promises to do part of their work faster and at near-zero cost. Millions more jobs face disruption.

One US Congress member, Ted Lieu, a California Democrat with a degree in computer science, says he is “freaked out” by AI. He proposes a federal commission to consider how to regulate it, and hopes it will eventually lead to a dedicated agency, something like the US Food and Drug Administration.

Weinstein agrees that this is the sort of ambition that’s needed. He says that Australia’s regulators can play an important part: “I think we are in a moment of regulatory experimentation. For this reason, even if small markets can’t influence the extra-territorial behaviour of large tech companies, they can experiment with new policy and regulatory approaches. That’s huge value right now.”

As for Oscar, the sonnet failed to change his ways, says Bleich. “But the bell we finally managed to get around his neck seemed to do the trick.” A steadying reminder that there are some things machine learning can’t do. Yet.



