What AI can learn from the history of the moon landing

During the past decade that I have been working in AI, the historical analogy I have heard spoken of the most is that of the atomic bomb. We all know the story of how a group of physicists invented a world-changing, potentially world-destroying, nuclear weapon with the unlimited resources of the United States military during the Second World War. In 2019 Sam Altman, the CEO of OpenAI, casually compared his company to this effort, highlighting the fact that he shares a birthday with J. Robert Oppenheimer, the physicist who led the team that created the bomb. You don’t have to be a history professor to see why it’s a disturbing analogy to choose.

For many in the world of fast-moving technological advances, the Manhattan Project, the effort that built the atomic bomb, was the perfect execution of a tortuously complicated task in record time and under immense pressure. Admirers will say that it’s not the mass destruction they esteem, but the speed of the project, its ambition, impact, and power. And there’s nothing wrong with these qualities in themselves. Speed in pursuit of solutions that help people is welcome. Ambition and the exercise of power can bring enormous advantages if wielded carefully. But who is making those decisions about the creation, scale, and application of transformative technology? What motivates them, and from where do they draw their power?

This detail of a July 20, 1969 photo made available by NASA shows astronaut Neil Armstrong reflected in the helmet visor of Buzz Aldrin on the surface of the moon. Credit: AP

The purpose of AI cannot be to win, to shock, to harm. Yet, the ease with which some AI experts today refer to it as nothing more than a tool of national security indicates a broken culture. Competitiveness is natural and healthy, but we must avoid dangerous hyperbole, especially from those who do not understand the history behind it.

The geopolitical environment today is unstable and unnerving, but the international institutions that emerged from the wreckage of the last global war exist for a reason - to avoid such devastation again. Implying that AI is analogous to the atomic bomb does a disservice to the positive potential of the technology and falls short of the high standards to which technologists should hold themselves.

It co-opts and sensationalises an otherwise important debate. It implies that all of our energy must be put into preventing the destruction of mankind by machines that, now released, we cannot control. It presumes a powerlessness on the part of society at large to prevent harm while inflating the sense of superiority and importance of those building this technology. And by enhancing their own status, it gives those closest to the technology, those supposedly aware of the truth about its future implications and impacts, a disproportionate voice in public policy decision-making.

So, if you don’t want the future to be shaped by a dominant monoculture, then what’s the answer? Fortunately, we’ve seen versions of this story before, and we can learn from them. The study of history can in fact ground us, give us our bearings. It is critical to understanding our future and an important companion of scientific innovation. But it requires humility to learn lessons that might not always be palatable, a quality often forgotten in the profit-driven race to technological advancement.

With a background in both history and politics, I entered the world of AI with the aim of mediating between the technology industry and society at large. It soon became clear that the insights I gained from those disciplines were sorely missing in “the land of the future.” Looking at how democratic societies have coped with transformative technologies in the past will illuminate our path forward, and I have endeavoured to find historical examples beyond the ubiquitous atomic bomb analogy - histories of recent, world-changing technologies that didn’t always place the technologists at their centre.

Through the successes and failures of the past it is possible to see a different way forward, one that accepts neither the ideology of the flawed genius nor the notion that disruption must come at great cost to the most vulnerable. Instead, these examples show that science is a human practice and never value-neutral. We can build and use technology that is peaceful in its intent, serves the public good, embraces its limitations rather than fighting them, and is rooted in societal trust. It is possible, but only through a deep intention by those building it, principled leadership by those tasked with regulating it, and active participation from those of us experiencing it. It is possible, but only if more people engage, take their seat at the table, and use their voice.

We need new technology to move forward as a species and as a planet, to help us make progress on problems, large and small, just like we always have. That is why the future of AI, who builds it and who gets a say in how it’s developed, is so important. And to guide this transformative technology in a way that aligns with our best and brightest ideals, and not with our shadow selves, we will need to face up to the realities of the environment in which it is currently being built.

The front page of the Evening News about the birth of Louise Brown, the world’s first test-tube baby, in July 1978.

Because there is no doubt that the technology industry, in Silicon Valley and beyond, has a culture problem, and that this is dangerous for the future of AI. Too many powerful men survive and thrive. Too many women and underrepresented groups suffer and leave. Trust is waning. Greed is winning.

From the history and governance of three recent transformative technologies - the space race, in vitro fertilisation (IVF), and the internet - I will argue that in a democratic society a myriad of citizens can and should take an active role in shaping the future of artificial intelligence. That science and technology are created by human beings and are thus inherently political, dictated by the human values and preferences of their time. And that recognising this gives us cause for hope, not fear.

We can draw hope from the diplomatic achievement that was the United Nations Outer Space Treaty of 1967, which ensured that outer space became the “province of all mankind” and that, as you are reading this, there are no nuclear weapons on the moon. In their handling of the space race, US presidents Eisenhower, Kennedy, and Johnson showed us that it is possible to simultaneously pursue the selfish interests of national defence and the greater ideals of international co-operation and pacifism.

We can also draw hope from the birth of Louise Joy Brown. The first baby born through in vitro fertilisation in 1978 sparked a biotechnology revolution that made millions happy and millions deeply uncomfortable, but triumphed due to the careful setting of boundaries and pursuit of consensus. The extraordinary success of the Warnock Commission in resolving debates over IVF and embryo research shows that a broad range of voices can inform regulation of a contentious issue.

Great legislation is the product of compromise, patience, debate, and outreach to translate technical matters for legislators and the public. Such a process can draw lines in the sand that are firm and easily understood, reassuring the public and providing private industry with the confidence to innovate and profit within those bounds.

AI Needs You by Verity Harding.

And we can learn from the early days of the internet, a fascinating tale of politics and business, and the creation of the Internet Corporation for Assigned Names and Numbers (ICANN), an obscure body that underpins the free and open global network through multi-stakeholder and multinational cooperation and compromise. The pioneers of the early internet built this world-changing technology in the spirit of ongoing collaboration, constantly engaging stakeholders and revising ideas and plans as the situation changed.

When the internet grew large enough that this system became unwieldy, technologists developed governing bodies to manage and discipline actors on this new frontier while preserving aspects of that founding spirit. When it became necessary, the government stepped in to offer co-ordination and guidance, ensuring that the narrow, warring private interests would not break the internet.

Finally, when the whole world needed to feel more included in that governance, brilliant political manoeuvring led it out of US control and made it global and truly independent.

Verity Harding is one of TIME’s 100 Most Influential People in AI.

Looking at these tales - of innovation, diplomacy, and very unglamorous efforts by normal people in meeting rooms trying to make things work - we can start to see a different sort of future for AI. Great change is never easy, and putting AI on the right track will require tremendous work by government, technology companies, and the public. We may not succeed. But our best chance will come from informing our actions today with the lessons of yesterday.

History suggests that we can imbue AI with a deep intentionality, one that recognises our weaknesses, aligns with our strengths, and serves the public good. It is possible to change the future of AI and to save our own. But to make this happen, AI needs you.

This is a book extract from AI Needs You: How We Can Change AI’s Future and Save Our Own by Verity Harding (Princeton University Press), out now, RRP AUD$39.99.


Original URL: https://www.theage.com.au/national/what-ai-can-learn-from-the-history-of-the-moon-landing-20240828-p5k65v.html