
Real challenge to build ethics into artificial intelligence

China has very different ideas about deploying AI from the West: Beijing’s deep integration of facial-recognition tracking in the state security apparatus is a case in point.

China has very different ideas about deploying AI from the West. Picture: Supplied

When we think about artificial intelligence and the inroads the technology is making in our daily lives, what comes to mind? The Jetsons zooming cheerfully into a future curated by super-smart machines, or that proverbial frog in hot water, blissfully unaware of what’s cooking until it’s too late?

The advances are coming so fast, with such profound implications for how people work, play and relate to the world around them – as well as each other – that it is impossible to know when the tsunami of change will abate. Right now, the challenge is merely to keep up.

So this week’s launch by Industry and Science Minister Ed Husic of a “responsible artificial intelligence network” to guide business and innovators on the safe and ethical use of AI presents a timely opportunity to take stock. Not only of the benefits and inevitable disruption flowing from the latest wave of the information revolution, but also of the foundational questions that tend to get subsumed when the pot is keenly bubbling.

These go to first principles of what AI systems should or shouldn’t do when interfacing with people, and the values underpinning them. For years, scientists have debated the alternatives. Some favour a charter along the lines of Isaac Asimov’s prescient Laws of Robotics, envisaged by the great science fiction writer in the 1940s to forbid a robot from hurting a person through either overt action or a failure to act. Ever since, novelists and Hollywood have come up with equally ingenious ways to get around the famous treatise.

Others put the case for a version of the Hippocratic oath binding medical doctors to “first do no harm”. CSIRO’s Data61 R&D division has even published a somewhat clunky table of eight “core principles” for AI, starting with the proposition that the benefits for people must outweigh the costs.

Few would argue, and that’s also the point. Motherhood statements are well and good, yet in the real world they have their limits. The government-backed AI network is an attempt to take the next step and put the theory into practice, a claimed world-first. It will be driven by the National AI Centre, whose boss, Stela Solar, says Australian businesses want to do the right thing by both their customers and the community, but frequently don’t understand all that’s involved in making the technology safe and ethical.

“Right now, no one in the world has worked out how to do AI completely responsibly,” she warns.

Australian businesses want to do the right thing by both their customers and the community, but frequently don’t understand all that’s involved in making the technology safe and ethical. Picture: AFP

“There is no one checklist to follow, no scaffold to use. And when there is no plan for industry to follow, it becomes a risk for whether these services can be provided in fair and equitable ways.”

The partnership with the CSIRO, employer organisation the Australian Industry Group, Standards Australia, the Committee for Economic Development of Australia and the Tech Council of Australia, among other players, aims to provide expert advice and coaching on the traps to avoid when deploying AI, with an eye to regulation that’s in the works internationally. “We think that a lot of the commercial sector is probably not ready and not aware of the changes that are coming,” Solar says, drawing on her experience as a former head of global AI strategy for Microsoft.

“So the network is about demystifying what it means to have a robust and reliable approach to the practice of AI.”

She cites devastating claims in the US that the automated tools used by banks there to assess home-loan applications were biased on race and gender grounds. A 2021 investigation by The Markup, a news outlet that tracks the social impact of digital technologies, alleged the credit-scoring algorithms discriminated heavily against non-white applicants. The rejection rate for African-Americans was a staggering 80 per cent higher than for whites in a similar financial bracket, and Latinos were 40 per cent more likely to be turned down for a mortgage, The Markup asserted.

While its findings have been disputed by the American Bankers Association, the prospect of Australian women facing similar discrimination has been flagged by the federal Human Rights Commission.

AI is only as good as the data punched into it, and this can be undermined by inherent bias. Picture: Getty Images

The lesson, says Solar, is that AI is only as good as the data punched into it, and this can be undermined by inherent bias. (In the case of algorithmic lending, the racial problem was said to lie in the way the algorithms rewarded traditional credit, to which minorities had less access, while failing to take into account other indicators such as regular payments for rent and utilities.) “That’s obviously against our human values of fairness,” she continues. “Responsible AI is about making the connection between those core values and how we embed them into how AI systems are shaped and the outcomes they create.”
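To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. The scoring weights, approval threshold and applicant profiles are all invented for illustration and reproduce no real lender’s model; the sketch simply shows how a score built only on traditional credit history can reject an applicant whose rent and utility record tells a different story.

```python
# Hypothetical illustration of the feature-selection bias described above.
# All weights, thresholds and profiles are invented; the point is only that
# a score counting nothing but traditional credit penalises thin-file
# applicants even when other payment records show the same capacity to pay.

def score_traditional(a: dict) -> int:
    """Score from traditional credit history only (the narrow design)."""
    return 10 * a["credit_lines"] + 5 * a["years_of_credit"]

def score_inclusive(a: dict) -> int:
    """Score that also credits on-time rent and utility payments."""
    return (score_traditional(a)
            + 8 * a["on_time_rent_years"]
            + 4 * a["on_time_utility_years"])

APPROVAL_THRESHOLD = 60  # invented cut-off

applicants = {
    # Long-established borrower: strong traditional paper trail.
    "thick file": {"credit_lines": 4, "years_of_credit": 10,
                   "on_time_rent_years": 0, "on_time_utility_years": 0},
    # Reliable payer with little access to traditional credit.
    "thin file": {"credit_lines": 1, "years_of_credit": 2,
                  "on_time_rent_years": 10, "on_time_utility_years": 10},
}

for name, a in applicants.items():
    for label, score in [("traditional", score_traditional(a)),
                         ("inclusive", score_inclusive(a))]:
        verdict = "approve" if score >= APPROVAL_THRESHOLD else "reject"
        print(f"{name:10s} {label:11s} score={score:3d} -> {verdict}")
```

Under the narrow score the thin-file applicant is rejected outright; add the rent and utility signals and the same applicant clears the bar comfortably. That is the gap between data a model happens to see and the fairness a community expects.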

Closer to home, consider the fallout from the emergence four months ago of OpenAI’s ChatGPT chatbot, the new interactive online tool that can answer complex questions, write poetry and essays, even mimic human emotions. Flush with billions invested by Microsoft and other tech sector notables, the San Francisco company this week unveiled a new version of the system that powers its chatbots, upping the ante in Silicon Valley’s race to embrace AI.

GPT-4 reportedly scores better than most law graduates sitting for bar exams in the US.

Still, schools and universities in Australia are scrambling to catch up, appalled by the potential for the technology to be used by students to cheat. As Data61’s Jon Whittle points out, ChatGPT was unleashed on an unprepared world by design; those who flocked to try it for free were in effect human guinea pigs. “The reason OpenAI put it out was they wanted to get lots and lots of people using it to test the system,” he says. “These people, and we’re probably talking in the hundreds of millions, were test subjects. OpenAI can do that because they don’t have to worry too much about market share whereas other companies such as Google have taken a more cautious approach.”

The subtext in the unfolding story of digital transformation is etched in unforeseen consequences. Remember the excitement attending the launch of Apple’s iPhone in 2007 by the late Steve Jobs? A mobile device that could make calls, send and receive emails, navigate the web, play your songs! What was not to like?

Who could have imagined the smartphone would also become an instrument for propagating the excesses of social media, then in its infancy? Or that an entire generation of children would have their worldview reduced to the dimensions of a touchscreen? Machine learning, a building block of AI, is driving the technology into new realms.


The latest five-yearly report of the global 100 Year Study on Artificial Intelligence hosted by Stanford University sets out the challenge: to build machines that can co-operate and collaborate seamlessly with people, capable of making decisions that are “aligned with fluid and complex human values and preferences”.

The verdict is mixed. People are using AI more than ever to dictate to their phone, get recommendations, enhance the background on conference calls made ubiquitous by the pandemic. The 100 Year panel, including the University of Sydney’s influential professor of artificial intelligence Toby Walsh, found in 2021 that in addition to chatbots, neural network language models support machine translation and useful applications in text classification and speech recognition. Image processing tech is widespread but increasingly prone to backlash over deep-fake misrepresentations online – so-called synthetic porn is one concerning example of this – and facial recognition systems are being shunned by governments in Europe.

The driverless car is yet to materialise, though the technology is close and test vehicles have taken to the road in trials in South Australia, regional NSW and Ipswich in southeast Queensland. In medicine, AI tools now exist for identifying a variety of eye and skin disorders, detecting cancer, and supporting measurements needed for clinical diagnosis. Studies show that AI is better at reading some X-rays than a trained radiologist.

Many of the “grand challenges” for AI set by pioneering scientist Raj Reddy in 1988 have been achieved: a “self-organising system” that can read a textbook and answer questions, check; a world-champion chess machine, done; the accident-avoiding car and translating telephone – yes and yes.

Others, however, remain unmet, among them mathematical discovery and a self-replicating system that enables a small set of machine tools to produce other tools using locally available raw materials, a stepping stone towards realising the sci-fi dream of colonising space. The point is that expectations around AI have been over-hyped, for all the remarkable gains in the field.

“One of the most pressing dangers of AI is techno-solutionism, the view that AI can be seen as a panacea when it is simply a tool,” the 100 Year panel warns.

If there is wide agreement on the need for ethical and responsible AI – and who would argue for the alternative? – that’s where the consensus ends. Whittle, the director of CSIRO’s Data61 unit, says communist China has very different ideas from the West about deploying AI – Beijing’s deep integration of facial-recognition tracking in the state security apparatus being a case in point.

The censorship China imposes on the internet is likely built into the large language models developed for its version of a chatbot, Whittle says, which wouldn’t be tolerated in Europe, Australia or the US. “When we talk about AI ethics or AI values, the natural question to ask is, whose ethics and values?” he tells Inquirer.


At Flinders University’s College of Science and Engineering in Adelaide, researcher Russell Brinkworth has reverse-engineered the way insects see to build an enhanced robotic eye. The definition is so acute it can pick out a near-invisible koala from the thick of the forest, or the obscured features of a hoodie wearer. Brinkworth concedes the system could be applied to facial recognition, but insists that’s not for him to decide. “We’re making a smart eye … and through that we’re extracting more usable information from the environment. The AI, the brain behind it, is something else,” he says.

The notion that there is an easy fix to these ethical dilemmas is sheer wishful thinking. When he was a federal human rights commissioner, Edward Santow led an inquiry that considered whether a dedicated federal regulator was the solution. He soon concluded that it wasn’t.

A better answer, he believes, is to be found at the grassroots level where he now operates, working as industry professor and co-director of the Human Technology Institute at the University of Technology Sydney, a partner in the Responsible AI Network. It means providing actionable guidance, not lip service, to AI engineers who complain about ethics briefs pitched “at such a high level” that they don’t have a clue how to implement them.

“The empirical research is quite stark,” Santow says, noting that there have been in the order of 500 inquiries and official investigations worldwide into the ethics of AI since 2015.

“It suggests that the vast majority of these ethical principles … have had no discernible impact at all.”

Recently, Whittle had a session with Microsoft’s new and, for now, invitation-only Bing AI chatbot, keen to test its selling point to scientists of delivering academically sourced responses. He asked the system to summarise the latest developments in his area of expertise, responsible AI. Sure enough, two studies popped up, one of them from the CSIRO. The second paper puzzled Whittle. He had never heard of it.

“I thought, well, that’s interesting. It’s found something that I didn’t know about, even though I’ve been working in this area for years,” he says. It took quite a bit of back and forth for the chatbot to, in effect, fess up: the research concerned didn’t actually exist. The machine had concocted it.

And here, at least, the term artificial intelligence couldn’t be more apt.
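Whittle’s experience points to a practical safeguard: verify a chatbot’s citations against a bibliographic index before relying on them. Below is a minimal sketch of that check using Crossref’s public works API; the example title is invented, and a more careful pipeline would also match authors, venue and publication year rather than relying on a crude word-overlap test.

```python
# A minimal citation sanity check: look a claimed paper title up in
# Crossref's public index before trusting it. The example title below is
# invented; real use would also compare authors, venue and year.
import requests

def citation_exists(title: str, min_overlap: float = 0.8) -> bool:
    """Return True if Crossref holds a record closely matching `title`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    found = (items[0].get("title") or [""])[0].lower()
    words = title.lower().split()
    # Crude similarity: fraction of the query's words present in the hit.
    overlap = sum(w in found for w in words) / len(words)
    return overlap >= min_overlap

# A fabricated reference should come back False; a real one, True.
print(citation_exists("A hypothetical survey of responsible AI practices"))
```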

Jamie Walker, Associate Editor

Jamie Walker is a senior staff writer, based in Brisbane, who covers national affairs, politics, technology and special interest issues. He is a former Europe correspondent (1999-2001) and Middle East correspondent (2015-16) for The Australian, and earlier in his career wrote for The South China Morning Post, Hong Kong. He has held a range of other senior positions on the paper including Victoria Editor and ran domestic bureaux in Brisbane, Perth and Adelaide; he is also a former assistant editor of The Courier-Mail. He has won numerous journalism awards in Australia and overseas, and is the author of a biography of the late former Queensland premier, Wayne Goss. In addition to contributing regularly to the news and Inquirer sections, he is a staff writer for The Weekend Australian Magazine.

