


Helen Toner, the effective altruist who sparked the OpenAI coup

By Tim Biggs

Helen Toner, the Melbourne researcher seemingly at the heart of an ideological debate in which generative AI poster child Sam Altman was ousted from OpenAI, will not be remaining on the company’s not-for-profit board in the wake of Altman’s return to the fold.

The story of Altman’s removal and comeback has been a rollercoaster ride since the board’s decision earlier this week, and the episode is the latest flashpoint in a philosophical conflict under way in Silicon Valley.

Helen Toner in May cautioned against over-relying on AI chatbots, saying there was “still a lot we don’t know” about them.

Helen Toner in May cautioned against over-relying on AI chatbots, saying there was “still a lot we don’t know” about them.

While Altman, the face of OpenAI and “generative artificial intelligence” in general, represents the commercial aspirations that underpin the future of the technology, Toner represents the effective altruism movement which wants to maximise good and limit harm from AI.

The tug of war between these two ideals, which originally resulted in Altman being shown the door, appears to have pulled him back in and put Toner on the outside within a matter of days.

OpenAI, unusually structured as a charity with a for-profit arm whose products, such as ChatGPT, have become wildly successful in a short time, has become a case study in the clash of visions for the future of artificial intelligence.

Toner has been associated with effective altruism since her time at the University of Melbourne. After graduating with a bachelor of science in 2014, she went on to work at Melbourne AI companies Draftable and Vesparum, as well as effective altruism-aligned charity evaluator GiveWell.

She joined the Open Philanthropy Project as an analyst in 2015, had a stint doing research on AI governance at the University of Oxford, and in 2019 joined Georgetown University’s Center for Security and Emerging Technology (CSET), which advises on potentially dangerous implications of AI and other developments.

It’s funded in part by effective altruist-linked organisations including Open Philanthropy and the William and Flora Hewlett Foundation, as well as Elon Musk’s Musk Foundation.

In 2021, she graduated from Georgetown with a master’s degree in security studies and joined OpenAI’s non-profit board. In 2022, she also became director of foundational research grants at CSET.


Effective altruism, which was popularised by philosophers including Peter Singer but was most recently in the news because of (disputed) associations with FTX founder Sam Bankman-Fried, encourages people to take careers where they can maximise global positive impact.

Some effective altruists are dedicated to funding charities with the most impact, while others focus on animal welfare. Many, such as Toner, work to manage long-term existential risks such as AI.

“Looking back over history, a small number of major transition points had radically larger effects on how people live and how civilisation functions than many of the smaller changes put together,” she said during a talk on AI risk at the Centre for Effective Altruism in London in 2017.

“If it looks like something on the horizon might have a reasonable chance of being one of these major transitions, and it looks like there might be reasonable work you can do to make that transition go better for humanity, then that could be a really valuable thing to work on.”

In the talk, Toner said she was not concerned that AI would become sentient and attack humans; rather, that careless development could result in unreliable and poorly designed products and services.


Speaking to the Financial Times this year, she said AI executives generally took the risks seriously, but warned they were also motivated by profit.

“I think it’s really important to make sure that there is outside oversight, not just by the boards of the companies but also by regulators and by the broader public,” she said. “Even if their hearts are in the right place, we shouldn’t rely on that as our primary way of ensuring they do the right thing.”

Toner has been contacted for comment.

During Toner’s time on the OpenAI board, Altman had reportedly complained that her public comments and published research seemed to criticise OpenAI’s approach to safety.

US-based technology journalist Kara Swisher said tension between Altman, who has pushed for rapid commercialisation of OpenAI’s products, and Toner was a key factor in the board’s decision to eject Altman.

On Reddit and X, Toner has become a lightning rod for criticism, with commenters blaming her for limiting the power of OpenAI’s generative tools despite several other people being involved in the board’s decision.

On the other hand, it did not escape some commenters that, following this apparent disagreement over balancing money-making and safety, OpenAI had ditched the only two women on its board, kept the two men who were also involved in removing Altman, and elected other men to fill the empty chairs.

Ilya Sutskever, the company’s co-founder and chief scientist, who played a major role in Altman’s ousting, later said on X that he regretted his actions. He will stay at OpenAI as Altman returns.

With some reports indicating that OpenAI had recently hit a milestone in its attempts to create an artificial general intelligence (AGI) model that would amplify both the power and potential dangers of the product, it’s no surprise that tensions between the ethicists on the company’s board and those looking to run the company as a business would intensify.

Given Toner’s background, it’s no surprise where she stood.




Original URL: https://www.smh.com.au/link/follow-20170101-p5emb7