NewsBite

Inside the AI ‘arms race’ to change the world

Ending hunger, curing disease, extending lifetimes: artificial intelligence may hold untold promise for humanity, say its champions, but some experts warn of catastrophic danger if the technology is allowed to outsmart and escape its creators.

Experts have warned of catastrophic danger if AI technology is allowed to outsmart and escape its creators. Collage: Frank Ling

In his new book, The Singularity is Nearer, Ray Kurzweil shares a recent wide-ranging conversation with his father, Fredric.

A noted concert pianist and educator, Fred reflects on his passion for music, hopes for his legacy, and regrets that supporting his family limited his creative ambitions. Despite rarely arriving home before 9pm, Fred encouraged the technology pursuits of his prodigy son, who at 16 made his TV debut performing a piano tune composed by a computer he built.

The exchange is particularly poignant because Fred died more than 50 years ago. Using his father’s course notes, writings and letters, Kurzweil has created a chat AI replicant of Fred: “the first step”, he writes, “in bringing my father back”.

Ray Kurzweil. Picture: Getty

By the late 2030s, Kurzweil foresees molecular-scale engineering reviving the dead as androids with superior DNA-based bodies – a theory that might easily be dismissed if not for Kurzweil’s remarkable record of prescience. Having pioneered text digitisation, automated speech-to-text technology and music synthesis, the futurist, sometimes considered machine learning’s godfather, carries the title principal researcher and AI visionary at Google.

Kurzweil presents eye-watering AI-enabled opportunities, from clean energy to curing devastating diseases and eradicating poverty. Cultured meat will feed the world’s population while using a fraction of the land currently farmed. Material abundance will reduce incentives to hoard resources or wage war. 3D printing will revolutionise decentralised manufacturing, facilitating affordable construction of housing and skyscrapers worldwide.

Custom-printed organs will become commonplace. Simulated medical trials will enable simultaneous testing of infinite variations as medical innovation evolves “from a linear hit-or-miss process to an exponential information technology in which we systematically reprogram the suboptimal software of life”. This statement is at once thought-provoking and unhinged. Evolution isn’t entirely random; natural selection has shaped life through directional pressures over millennia. Suggesting that technology can simply reprogram ­biology glosses over the intricacy of genes, molecules and ecosystems, while calling evolutionary advances suboptimal overlooks their resilience and efficiency.

Yet Kurzweil is right that, despite hand-wringing sparked by new technologies from the printing press onwards, billions of lives have been improved thanks to tech entrepreneurs. Global poverty has halved in two decades and access to clean drinking water has more than doubled since 1990. But there are no historical analogues for today’s moment. Increasingly, opportunities will emerge for AI to wreak catastrophic destruction, whether by accident or design.

 ■  ■

As machine learning accelerates, so do technologists’ grandiose promises about a radiant AI-powered future. But AI prophets undermine their cause by downplaying the existential threats posed by their creations.

In a blog titled Why AI Will Save the World, venture capital heavyweight Marc Andreessen envisages every child with “an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful”, delivering “the machine version of infinite love”.

In his vision, modern warfare will become less bloody as AI tacticians empower military commanders to make better decisions. Far from ushering in a harsher, mechanistic era, “infinitely patient and sympathetic AI will make the world warmer and nicer”. He waves off critics as “bootleggers … paid to attack AI by their universities, think tanks, activist groups and media outlets”. That seems rich coming from the co-founder of Andreessen Horowitz, with $US42bn ($64.5bn) in assets and dozens of AI investments. “AI companies should be allowed to build AI as fast and aggressively as they can,” Andreessen insists. “There should be no regulatory barriers.”

OpenAI founder Sam Altman’s recent blog, The Intelligence Age, similarly trumpets “shared prosperity to a degree that seems unimaginable today”. His manifesto describes an epoch so extraordinary that “no one can do it justice by trying to write about it now”. AI, he argues, will amplify human potential and fulfil our “innate desire to create and to be useful to each other”. Altman’s promise to return humanity to co-operative living resonates with religious themes of redemption: “We will be back in an expanding world, and we can again focus on playing positive-sum games.”

OpenAI founder Sam Altman. Picture: AFP

In 2021, OpenAI executive Dario Amodei defected to launch Anthropic, creating Claude to compete with ChatGPT, citing concerns about Altman’s lax approach to AI safety. But doomerism makes a poor pitch deck, and as Anthropic seeks capital to augment its cheques from Amazon, Amodei has succumbed to starry-eyed futurism. In a 14,000-word blog published last month, Machines of Loving Grace, he imagines AI curing mental illnesses and granting “biological freedom”, delivering humans control over their bodies and potentially indefinite life.

Whether these self-styled messiahs believe their own propaganda or are merely talking their book remains unclear. Vanity Fair’s Nick Bilton writes about a dinner with Larry Page, at which the Google co-founder held court about the inevitable creation of superintelligent machines, or artificial general intelligence. These computers would eventually decide that humans have little use and “get rid of us”. When challenged, Page called it “just the next step in evolution”, dismissing objections as “specism”.

Google co-founder Larry Page. Picture: David Paul Morris/Bloomberg

“All of the people leading the development of AI right now are completely disingenuous in public,” a veteran lobbyist tells Bilton.

“They are all just in a race to be the first to build AGI and are either oblivious to the consequences of what could go wrong, or they just don’t care.”

 ■  ■

Against the grain of these evangelical visions, newly anointed Nobel laureate Geoffrey Hinton has emerged as an advocate for AI safety. Last year he ended his decade-long tenure at Google to sound the alarm about the AI technologies he helped create. “I console myself with the normal excuse: if I hadn’t done it, somebody else would have,” he reflects. “It is hard to see how you can prevent the bad actors from using it for bad things.”

Hinton championed California’s AI safety bill that cleared the state legislature in September, mandating that firms spending more than $US100m on training advanced AI models must conduct thorough safety testing and face liability for mass casualties or financial damages exceeding $US500m. A key provision required AI makers to implement a “kill switch”, allowing immediate shutdown of their models if necessary. The bill was intended to broadly align California’s AI legislation with the EU’s, which last year passed the first comprehensive AI regulatory framework, banning high-risk use cases. “It’s critical that we have legislation with real teeth to address the risks,” says Hinton.

Predictably, the tech sector panicked, ordering politicians off its turf. Meta chief AI scientist Yann LeCun brushed off the bill as grounded in “completely hypothetical science fiction scenarios”. His warning of “an end to innovation” was echoed by other Silicon Valley titans, even though the threshold meant only large AI labs, not start-ups, would fall within the law. But if tech firms are so confident that their models won’t cause harm, why fear liability?

Bill author senator Scott Wiener insisted the legislation merely ensured that large AI labs followed through on their existing commitments to test for catastrophic risks. “When technology companies promise to perform safety testing and then baulk at oversight of that safety testing, it makes one think hard about how well self-regulation will work out for humanity,” Wiener argues.

Politicians cited social media’s unchecked growth as evidence that early intervention was needed before the AI industry gained similarly unbridled power. Yet, as firms threatened to exit the state, California Governor Gavin Newsom nixed the bill.

Stepping into the void of congressional inaction, US President Joe Biden issued an executive order last year mandating that advanced AI developers share their safety testing data with the government and participate in “red-teaming” security tests. President-elect Donald Trump has called the order a threat to “AI innovation … free speech, and human flourishing”, and vowed to repeal it on “day one”.

President-elect Donald Trump. Picture: AFP

But while Trump’s anti-regulation impulses have garnered him support from tech oligarchs, AI safety could make it on to his second administration’s agenda with Elon Musk in its inner orbit. “A super genius” is how Trump christened the xAI founder and Republican Party megadonor on election night, inviting him to pose with extended family in a victory photo.

Musk had championed California’s failed AI bill against the tech sector consensus and last year joined more than 1000 signatories calling for a pause on creating powerful AI “digital minds (like GPT-4) that no one – not even their creators – can understand, predict, or reliably control”. The letter warns: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Trump’s newly appointed efficiency tsar is poised to become his de facto tech adviser as well. Yet, with xAI eyeing a funding round that could result in its valuation rising to $US40bn, is Musk now so deeply invested in AI that, like Anthropic’s founder, he will be willing to shrug off the very risks he once warned against?

Tesla and SpaceX CEO Elon Musk (R) jumps on stage as he joins Donald Trump during a campaign rally last month. Picture: AFP

One potential point of friction with Trump could stem from Musk’s advocacy for open-source AI – a position that raised alarms among some China hawks due to concerns about sharing powerful AI models such as Meta’s Llama, already used by the People’s Liberation Army for military applications. xAI recently released the code for its chatbot, Grok, aiming to gain ground on ChatGPT. Musk labelled the Altman-run market leader, which he co-founded before exiting, “ClosedAI” and touts the democratic promise of open-sourcing. Mark Zuckerberg similarly framed the open release of Llama as a gift to humanity, asserting in July that “open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society”.

In practice, Musk likely sees open access to Grok as a way to grow its developer base and reach more users, feeding xAI with the data needed to refine its tech. But, at least, Trump’s soon-to-be vice-president has signed up to this philosophy. In a March post on X, JD Vance argues one of AI’s biggest dangers is “a partisan group of crazy people us(ing) AI to infect every part of the information economy with left-wing bias,” emphasising that “the solution is open source”.

 ■  ■

Locally, the Albanese government faces pressures similar to those on California’s Newsom to avoid introducing prescriptive Europe-style AI laws. In a recently unveiled discussion paper, the Australian government also floated mandating a kill switch to keep AI systems safe. Having spent last year consulting with the tech industry, the government convened an AI advisory body in February and seems stuck in a holding pattern, with further roundtables under way. But while local development stalls, the global AI arms race charges on.

With governments stretched and under-skilled to address this frontier, critics from within the tech establishment are essential.

But while Hinton and Google could no longer coexist, Microsoft AI chief executive Mustafa Suleyman’s book, The Coming Wave, shows it’s possible to thread this needle. Suleyman, co-founder of Inflection AI, recently acquired by Microsoft, is less prescriptive than Kurzweil with his crystal ball; instead, he envisages power simultaneously concentrating and decentralising over the coming decade, causing widespread social and political upheaval.

Microsoft AI chief executive Mustafa Suleyman. Picture: AFP

Suleyman foresees more power flowing to tech giants. Meanwhile, some governments will respond to the risk of cyber attacks, engineered pandemics and automated wars with intensified surveillance, using the spectre of catastrophe to shore up control. China, which has unveiled a strategy to become the leader in AI by 2030, produces four times more academic papers on AI than the US, much of that research focused on surveillance.

However, Suleyman also expects power to splinter, as governments struggle to meet their constituents’ needs. Tech will fill the gaps; it will become much easier for social groups bound by ideology, faith, culture, or identity to establish self-sufficient communities, with medical and educational systems now cheaper and simpler to implement. Suleyman also envisages cutting-edge artillery becoming the work of start-ups or even garage amateurs. The outcome: sophisticated offensive capabilities will cease to be just the province of nation-states.

Lethal autonomous weapons will dramatically lower the material costs of going to war. While Kurzweil holds faith in Cold War-style deterrence, Suleyman points out that mutually assured destruction breaks down when attackers are unknown and any laptop can house a world-altering algorithm.

Suleyman has a phrase for tech’s reluctance to face up to AI’s dark possibilities: the “pessimism-aversion trap”, our tendency to put on blinkers when overwhelmed by fear. As he writes: “Our species is not wired to truly grapple with transformation at this scale, let alone the potential that technology might fail us in this way.”

 ■  ■

Certainly, Kurzweil isn’t wired as such. He does acknowledge AI’s potential for extinction-level events. Worst case? The accidental genesis of self-replicating nanobots triggering an unstoppable chain reaction, consuming the Earth’s biomass and transforming everything we know into “grey goo”. But Kurzweil is undaunted, confident that AI will enable us to handle the threats it creates.

His optimism about the future of work is equally questionable. So far, new technologies have created opportunities while making existing jobs obsolete. But as technologies eclipse human cognitive abilities, new jobs are unlikely to emerge nearly as fast as white-collar workers are displaced.

For Kurzweil, all paths lead to the singularity, where human and machine intelligence merge. He predicts the singularity will arrive by 2045, with nanotechnology integrating cloud-based neurons, our brains becoming more than 99.9 per cent non-biological and our minds expanding “many millions-fold”. He even implies a future treatment for gender dysphoria; noting that transgender people can now align their bodies with their identities, he adds: “Imagine how much more we’ll be able to shape ourselves when we can program our brains directly.”


Nanobots will patrol our bloodstream, preventing disease and repairing ageing tissues long before problems become detectable by doctors. Kurzweil finds “sound logic” in gerontologist Aubrey de Grey’s claim that the first person to live to age 1000 is likely already among us. Our challenge is to live long enough to reach “longevity escape velocity”, where anti-ageing research and nanomedicine add at least a year of life annually.

Within 20 years, he expects nanobots will enter the brain to copy memories and personality, creating a biological and psychological twin, “You 2”. But merging with superintelligent AI is just a step: “once our brains are backed up on a more advanced digital substrate, our self-modification powers can be fully realised. Our behaviours can align with our values, and our lives will not be marred and cut short by the failings of our biology. Finally, humans can be truly responsible for who we are.”

Kurzweil’s edifice assumes our biological origins and evolutionary hardwiring limit authenticity and self-actualisation. But can identity transcend our biological machinery, from which emotions and consciousness emerge? Surpassing it may mean losing what makes us human, reducing identity to mere programming rather than the accumulation of experiences. History teems with regimes and ideologies that have sought to re-engineer humanity towards perfection, with disastrous results.

 ■  ■

Philosopher John Gray positions Kurzweil within the transhumanist movement, where religion is recycled as science and science replaces God. “Transhumanists believe human beings are essentially sparks of consciousness, which can escape mortality by detaching themselves from the decaying flesh in which they happen to be embodied,” Gray writes in Seven Types of Atheism. Transhumanism rekindles, through science, the transcendent aspirations science extinguished. Even the language of the singularity has biblical undertones: “The singularity echoes apocalyptic myths in which history is about to be interrupted by a world-transforming event.”

Kurzweil’s 2011 biographical documentary Transcendent Man opens with him admitting he often fantasises about dying: “It’s such a profoundly sad, lonely feeling that I really can’t bear it. So I go back to thinking about how I’m not going to die.” Today, the 76-year-old’s quest to escape death is more strident than ever, emboldened by tech’s acceleration and perhaps his advancing years. But AI’s near-term promise and peril for mortals who, unlike Kurzweil, won’t be cryogenically preserved demand a different analysis.

Suleyman would draw a red line around allowing AI capabilities such as autonomy and recursive self-improvement. He argues that AI should be trained in self-doubt, soliciting feedback and deferring to human judgment. Currently, models are black boxes with opaque decision-making, but developing ways for AI to justify its choices and reveal internal workings for examination is vital.

Suleyman emphasises the need for more AI safety researchers, a niche he describes as “shockingly small”. As technologies become more powerful, cheap, and widely distributed, we need influential insiders such as Microsoft AI’s chief executive, who appreciate that tech advancement is a net positive for humanity but also take harm mitigation seriously.

“If the wave is uncontained, it’s only a matter of time,” warns Suleyman. “Allow for the possibility of accident, error, malicious use, evolution beyond human control, unpredictable consequences. At some stage, in some form, something, somewhere, will fail.”

Rather than a disaster such as Chernobyl or Bhopal, it will be a worldwide calamity and “the legacy of technologies produced, for the most part, with the best of intentions”.

Ben Naparstek is chief executive of lifestyle publisher Urban List and a former Amazon executive.


Original URL: https://www.theaustralian.com.au/inquirer/inside-the-ai-arms-race-to-change-the-world/news-story/d7fc389b2b740ff57867c013704ed8a3