
How artificial intelligence guru Catriona Wallace is policing ChatGPT, Web 3.0 and the metaverse

Catriona Wallace works with government ministers and top-level mandarins to address the risks posed by emerging technology. So why has she been in the Amazon taking potent psychedelics?

AI expert Catriona Wallace. Picture: Nick Cubbin

In November last year, Catriona Wallace travelled to the Peruvian Amazon to take part in a shamanic initiation ritual. In near-impenetrable jungle just outside the port city of Iquitos, she spent a month doing a “dieta”, an intense process that involved communing with a master healing plant, the chullachaqui or ­“protector plant”, chosen for her by the resident Shipibo people. She drank tea made from it, bathed in it, and sat in silent prayer with it, sweltering in the unrelenting ­humidity of the rainforest.

Sydney-born Wallace was one of a dozen international participants, all with extensive experience in plant medicine. Introducing themselves the first night, “we went round the circle and it was: healer, healer, healer, healer, technologist”, Wallace says. “They were all looking at me like, ‘What are you doing here?’”

What indeed. Wallace, 57, is a world leader in the field of artificial intelligence, the game-changing technology that has leapt to the ­forefront of mainstream consciousness with the release of a new wave of generative AI ­programs such as the chatbot ChatGPT, art tool DALL-E and text-to-music bot MusicLM.

Catriona Wallace wants the world to get ahead of the game. Picture: Nick Cubbin

A behavioural scientist, entrepreneur and futurist, she has a PhD in human-technology interaction and a unique skill-set that means she has the ear of commonwealth and state governments, Interpol, large corporations, and billionaire investors such as Richard Branson.

Wallace lives on the bleeding edge of emerging tech, immersed in a world of augmented and virtual reality, machine learning, cyber­security, blockchain and cryptocurrency. She’s already looking beyond generative AI to Web 3.0, the next iteration of the internet, and working with government ministers and top-level mandarins to address potential risks posed in the immersive community of the metaverse. ­The eSafety Commissioner Julie Inman Grant calls her a “pioneer” in the space.

So what was Wallace doing in the Amazon, among the monkeys and mosquitoes, tapping the knowledge of an indigenous tribe whose technological advances stopped at machetes and spears? And taking ayahuasca, the potent psychedelic brew that’s legal there?

She was seeking answers. Unconventional? Yes. But these are unconventional times. “If technology consciousness keeps evolving and human consciousness doesn’t, then we’re in trouble,” Wallace says, settling into a couch in her coastal-chic-meets-Woodstock home on Sydney’s northern beaches. “We can sit around and keep talking about the same things, or we can start thinking more expansively about new technologies,” she says. “I think psychedelics provide a great way to do that.”

Virtual worlds, creative robots, hallucinogens as a tonic for future shock. This will all sound pretty radical, I suggest, to the average Australian whose bandwidth is already being challenged by pandemic fallout, geopolitical unrest and economic disruption. Wallace stirs her decaf chai latte, nods her head. “Correct.”

-

“In the next 12 months we will see more and more evidence that this is not science fiction”

-

AI is the fastest-growing technology sector in the world; PricewaterhouseCoopers estimates it will add US$15.7 trillion to the global economy by 2030. The technology has already transformed everything from customer support to playlist recommendations, and momentum is accelerating: experts estimate there have been five years’ worth of tech advancements in the past 12 months.

The November launch of ChatGPT by Microsoft-backed research company OpenAI put a rocket under the sector. By January, the online chatbot had 100 million monthly users, making it the fastest-growing consumer app in history. Trained to recognise patterns in the vast tracts of information on the internet, the tool offers coherent, conversational responses to natural language prompts. It can explain string theory, write a poem, suggest birthday party ideas or, much to the consternation of educators, churn out entire school assignments. In January, US Democrat Ted Lieu introduced a congressional resolution calling for the regulation of AI; its text was written entirely by ChatGPT. Meanwhile, AI image generator DALL-E is being used to create surrealist art and photorealistic images both beautiful and horrifying.

In the wake of ChatGPT’s release, a slew of contenders has jumped into the space, including Microsoft with a new, AI-powered version of its search engine Bing, and Google with its own AI chatbot, Bard. (It soon became clear the technology wasn’t quite ready for prime time. In early tests, the new Bing began to have existential meltdowns, threatening its users or declaring its love for them; and during a demo, Bard produced a factual error, wiping about US$100 billion off the market value of Google’s parent company Alphabet Inc.)

Wallace says robots will not be limited to simple, repetitive tasks. Picture: Nick Cubbin

Almost overnight, it seems, the wider world has become aware of what technologists like Wallace have long understood: the world is being fundamentally reshaped by algorithm-driven AI, and the work of robots will not be limited to simple, repetitive tasks.

Human creativity was expected to be the last holdout against the progress of artificial intelligence. But, a bit like Hemingway’s line about bankruptcy happening “gradually, then suddenly”, AI’s entry into the arena of imaginative output is suddenly upon us. “The arrival of ChatGPT was fabulous because it just woke everyone up,” Wallace says. And we needed waking up, she says. The metaverse – a virtual world resembling reality, in which people will connect socially, work, shop, learn and ­conduct ­business – sits just on the horizon, underpinned by ever more powerful AI, and there are myriad issues to deal with before its arrival. “We need to get regulation in place now, ahead of the metaverse coming within three to five years and being really mainstream in 10 years,” she says. Advances in augmented/virtual reality and blockchain (the technology that enables cryptocurrency) will be required before the metaverse is fully ­realised, but Apple is expected to announce a new high-end AR/VR headset this year, “and when Apple comes to market, that means it’s moving into the mainstream,” Wallace says. “In the next 12 months we will see more and more evidence that this is not science fiction.”

Wallace using a virtual reality headset. Picture: Megan Lehmann

As well as sitting on the board of the Garvan Institute of Medical Research, the Sydney lab at the forefront of genome analytics, and non-profit AI research facility the Gradient Institute, Wallace co-chairs the AI Coalition initiative of Richard Branson’s B Team. She is also consulting on the federal government’s AI Action Plan, a $113m initiative established by the former Coalition government and continued under Labor with the aim of making Australia a leading digital economy by 2030. As chair of Boab AI, a venture capital fund that invests in artificial intelligence, she sees new advances coming down the pike before most.

While everyone else was busy asking ChatGPT for workout plans and travel itineraries, Wallace was tapping the ancient wisdom of the Amazon, expanding her thinking in search of novel solutions to unprecedented challenges. “What I recognised by the end of my time in the jungle is that I’m not necessarily a healer,” she says of the month she spent working with ­indigenous plant medicine. “It made me laser-­focused on my soul’s purpose, which is to be a protector in the world of dangerous tech.”

Wallace first glimpsed the future in the late 1990s. She’d spent four years with the NSW Police Force, but left to study organisational behaviour at UNSW’s Australian Graduate School of Management (where she’s now an adjunct professor), writing her PhD thesis on the likelihood of technology replacing human jobs. “That piqued my interest,” she says. “I could see that there was going to be a much stronger merging of technology and humanity.”

Rather than switch disciplines to become a software engineer or programmer, Wallace took an unconventional route: she entered the digital space via the stock exchange. In 2013, she founded Flamingo AI, one of Australia’s first AI companies and an early chatbot developer. Three years later, it became only the second woman-led business to list on the Australian Securities Exchange. Wallace ran the company out of New York for four years and sold it in 2020, but not before she noticed “a lot of ethical challenges” with the way corporates in the US and Australia were using AI. “The big financial companies did not have rigorous processes to ensure the data sets they used to train banking algorithms were without bias,” she says. “There were also some institutions who didn’t disclose to their customers that they were interacting with AI, and there was no real monitoring for unintended consequences or harms. Essentially, the banks and insurance companies would set the systems up then just let them run with minimal understanding of ethics or responsible AI governance.”

-

“Three to four male avatars virtually gang-raped my avatar”

-

Watching formal laws and regulations struggle to play catch-up, she became active in the field of responsible AI (also known as ethical or trustworthy AI), the practice of designing and deploying AI systems that are fair, safe, transparent and respectful of users’ privacy.

Last year, Wallace co-authored a book, Checkmate Humanity: The How and Why of Responsible AI, which outlines the potential perils AI poses, from individual harms such as fraud, harassment, discrimination and disinformation through to “full existential risk”. The latter is as frightening as it sounds. In his 2020 book The Precipice, Australian philosopher Toby Ord, a senior research fellow at Oxford University, estimated the likelihood of various events leading to “existential catastrophe” in the next 100 years. The chance that oblivion comes via asteroid impact is 1 in 1 million; nuclear war: 1 in 1000; climate change: 1 in 1000; a naturally arising pandemic: 1 in 10,000; an engineered pandemic: 1 in 30. The risk that humanity will be wiped out by “unaligned artificial intelligence” (that is, AI that behaves differently from the ­intentions of its programmer) is 1 in 10.

Nightmare fuel. But it’s an extreme-case scenario and Wallace is, in fact, an enthusiastic AI booster. She is most optimistic about its ­benefits in the realms of education and healthcare, where it could be used to diagnose and treat patients, help older Australians live fuller lives, and contribute to public-health programs built around massive amounts of data collected on, for example, personal genomes. Automated ­systems in vehicles, buildings, on farms and in business will save time, money and lives by ­taking care of the “four Ds” – dirty, dull, difficult and dangerous jobs. She speaks of a “beautiful” future of shared knowledge, bolstered creativity and, eventually, transhumanism, the embedding of software and hardware in human bodies to enhance physical and ­mental capabilities. “I think most of the philosophers believe it’s inevitable, this transhuman path,” she says. “A lot of people, particularly my medicine friends and my indigenous friends, think it’s terrible, it’s sacrilegious, but to me, it’s about becoming more human.”

VR platform Horizon Worlds, created by Meta (formerly Facebook).

Besides, the genie’s out of the bottle. “One of the things that AI does is remove international boundaries,” she says. “Companies like Uber, Google, Facebook and Amazon are now prolific in Australia, and that will happen to all our other industries if we don’t start to adopt and use AI. There’ll be that shift of power and control through data and AI if we don’t get up to speed very quickly.”

She just believes tech evangelists should pump the brakes a little while we get some ethical frameworks in place. Wallace has previously worked with the Gradient Institute in developing the NSW government’s AI ethics policy, and has just finished work on its new metaverse strategy. “We recommended specific services for the NSW government to put in the metaverse, as well as ways to build their capabilities in order to be metaverse-ready,” Wallace says.

Wallace in Mexico.

But how to police this amorphous space? And whose jurisdiction is it? These are the kinds of questions Wallace is currently chewing over with state and federal police in Australia, the ­Department of Home Affairs, and Interpol’s Metaverse Expert Group. Even Interpol, which last year launched its own virtual offices in the metaverse, is struggling with the problem of regulation. The agency recently admitted it’s ­unsure whether some actions, such as sexual harassment, should be handled the same way in virtual reality as in a physical space.

Wallace’s home is a study in contradictions. Large windowed spaces, plantation shutters and white-on-white stylings suggest mainstream beachside living, yet tucked away in nooks and alcoves are dreamcatchers and carvings, cultural artefacts and folk art: evidence of her psychedelic wanderings through Mexico and Peru. Above the dining table hangs an indigenous painting, foregrounding a “grandmother spirit” with the eyes of a jaguar, the spirit animal associated with ayahuasca.

Wallace shares a blended family of eight children with partner Dr Arne Rubinstein, an adolescent development expert; she lives here with three of them, plus two dogs and a snake. None of these children are in the tech industry but, she says, AI is already starting to infiltrate each of their working lives.

Ethics have played a role in Wallace’s life since childhood. Her father, a successful Sydney businessman, made sure she and her three ­siblings were aware that, as middle-class white people, they’d been born privileged and thus had an obligation to be of service to those with fewer resources and opportunities.

Nina Jane Patel.

In her late twenties, Wallace spent 18 months apprenticed to a Native American elder in New Mexico, an experience that awakened her interest in spirituality and indigenous medicine. Since selling Flamingo AI – “so I was no longer the CEO of a publicly listed company” – she has regularly travelled overseas to take natural psychedelics in places where they are legal. Psychedelics are thought to have been integral to the foundation of Silicon Valley (Apple co-founder Steve Jobs was a big fan of LSD), and the quest for breakthrough ideas has led to a “psychedelic renaissance” in many of the world’s innovative hot spots.

In early 2022, Wallace travelled to Mexico to take 5-MeO-DMT, a compound derived from the venom of the Sonoran Desert toad. Known as the “God molecule”, it is one of the most potent psychedelic agents on Earth. During an intense, half-hour trip she had a “vision” for a way to tackle the complexities facing a society moving quickly from the 2D internet into the 3D space of the metaverse. “I just saw what needed to be built,” she says. Upon returning to Australia, she started drawing up plans for a global initiative called the Responsible Metaverse Alliance (RMA).

If the world of generative AI is the Wild West, the metaverse is the cantina scene in Star Wars. Currently dominated by zombie-slaying gamers, the metaverse is an ungoverned – some say ungovernable – realm of decentralised, interconnected virtual worlds in which people – represented as digital avatars – will soon be able to work, play and trade in all sorts of ways. Tech research firm Gartner estimates that by 2026, one in four people will spend at least an hour a day in the metaverse. As the next milestone in the evolution of the internet, it will inevitably up-end both private and public enterprises. On one hand: unimagined opportunities. On the other: unprecedented threats. Ensuring the former outweighs the latter is, Wallace believes, the challenge of our time.

The Responsible Metaverse Alliance aims to prevent the mistakes of Web 2.0, where bullying, toxicity and misinformation have often run rampant on social media sites such as Facebook and Twitter. Its July launch attracted some heavy hitters: federal industry and science minister Ed Husic, NSW minister for digital government Victor Dominello, eSafety Commissioner Julie Inman Grant and Human Rights Commissioner Lorraine Finlay. Also in attendance: UK-based psychotherapist and futurist Nina Jane Patel, who made international headlines last year for revealing she was virtually gang-raped as a beta tester on the VR platform Horizon Worlds, created by Meta (formerly Facebook).

INTERPOL has launched in the Metaverse.

“Within 60 seconds of joining I was verbally and sexually harassed,” Patel wrote in a Medium post. “Three to four male avatars, with male voices essentially, virtually gang-raped my avatar and took photos. As I tried to get away, they yelled, ‘Don’t pretend you didn’t love it’.” She wrote that although it occurred ­virtually, “my physiological and psychological response was as though it happened in reality”.

This is exactly the sort of experience RMA hopes to head off at the pass. “Hyper-realistic, potentially invasive experiences in the metaverse can only amplify the kinds of trauma we’ve already seen occurring online,” Inman Grant says. “We need architects of the metaverse to be designing in fundamental safety protections today, rather than waiting for devastating new forms of abuse to be unleashed in this brave new online world.”

Wallace is not a coder and that’s a good thing. Software and programming engineers “can get so caught up in what’s possible – ‘We can do this and we can do this’ – that they don’t stop to think, ‘Should we?’,” she says. Neither is she male, also a good thing.

Professor Toby Walsh, 59, is chief scientist at UNSW’s AI Institute and is also active in the field of responsible AI. He notes a distinct lack of diversity in the technology sector. “Most of the people are white males like me,” he says. “And inevitably the biases within the field affect what choices are made and how we build stuff.” Without a diverse group monitoring outcomes, harms and unintended consequences of AI, “lots of questions that should be asked don’t get asked,” he says. “That’s what Catriona’s passionate about, and rightly so.”

The core of a responsible metaverse “is that there are regulations and policies that govern these worlds in a way that creates a safe place for women, children and the vulnerable”, Wallace says. “Diversity and inclusion also need to be at the core of its design. We don’t believe any of that should take away from the commercial viability of applications and the metaverse itself. We think it should go hand-in-hand.”

-

“We need architects of the metaverse to be designing in fundamental safety protections today”

-

Last month the Partnership on AI, a non-profit backed by tech titans such as Meta, Amazon, Google and Microsoft that works to establish ethical AI standards and practices, released a guide for “Responsible Practices in Synthetic Media”. Synthetic media, known colloquially as deepfakes, is media artificially created or modified by AI; it is used in entertainment, art, education and research, but also has the potential for use in scams, harassment and electioneering.

Adherence is voluntary, however, and this throws up one of the more vexing questions around the application of ethical AI principles in the metaverse: its promise as a decentralised, non-authoritarian society would necessarily be compromised by a governing body. “One of the challenges we’re talking about is that, in order to decentralise it, does there need to be a central body that’s monitoring the decentralisation of it?” Wallace says. “So it gets in that loop.”

As a start, the RMA advocates the establishment of universal rules relating to identity. “One of the problems in the metaverse is you can go in with whatever name, whatever avatar, you like [and] abuse people, defraud people without it being possible to easily identify you... We think whatever laws apply in the physical world now should absolutely apply in the virtual world.”

Presumably, there are those who will have to be dragged kicking and screaming into this fuzzily drawn theoretical future? “Yes, the politicians mostly think it’s just not real,” she says. “They’ve got other priorities to look after and they think this isn’t important. But it is.”

The solution? She brings her trusty Oculus Quest 2 VR headset to meetings. “We’ve had the most success by giving them immersive ­experiences,” she says. “They put the headsets on and within five minutes even the most sceptical are saying, ‘Oh, I get it, now what do we have to do?’” NSW digital minister Victor Dominello was one of the first to experience the metaverse, in March 2022, and was an ­enthusiastic convert, followed by the commissioners for human rights, disability discrimination and age discrimination.

Toby Walsh. Picture: Grant Turner

While those reluctant or unable to use VR technology will still be able to visit physical offices or go online, Wallace holds concerns for the elderly and those living in remote communities. “I will put pressure on government and the tech companies themselves that if they are going to push people into these virtual channels, they must provide appropriate hardware and software, the training and the internet bandwidth to be able to provide good services to everyone.”

Once upon a time, people remembered phone numbers and birthdays, and they found things out by asking people. Now their smartphone does that and much more for them. It’s an extension of self. Elon Musk’s Neuralink has yet to implant its skull-embedded brain chip in a human, but the billionaire already foresees an upgrading frenzy similar to the release of a new iPhone. Where does it end?

Wallace is matter-of-fact. Next stop: the bifurcation of the species, into those who support transhumanism, the marriage of man and machine, and those who want to continue to live organically, like the Shipibo people of Peru. “There will be people who say, ‘I don’t want any of that. I believe to be human is to be in this body, on this land, connected to the Earth’,” she says.

I picture the uncontacted tribes of the Amazon, flitting between countries, over meaningless borders, communing with plants and gazing at the stars. Rejecting the robots for something freer. Surely this bifurcation would disadvantage those who hold out against progress? “Highly likely,” Wallace says. She pauses for a minute. Sips her tea. Then: “Depends what we think ­disadvantage is, right?”

Megan Lehmann, Feature Writer

Megan Lehmann writes for The Weekend Australian Magazine. She got her start at The Courier-Mail in Brisbane before moving to New York to work at The New York Post. She was film critic for The Hollywood Reporter and her writing has also appeared in The Times of London, Newsweek and The Bulletin magazine. She has been a member of the New York Film Critics Circle and covered international film festivals including Cannes, Toronto, Tokyo, Sarajevo and Tribeca.

