
How a fervent belief split Silicon Valley — and fuelled the blow-up at OpenAI

Sam Altman’s firing showed the influence of effective altruism and its view that AI development must slow down; his return marked its limits.

The blow-up at OpenAI showed its influence — and the triumphant return of chief executive Sam Altman revealed hard limits, capping a bruising year for the divisive philosophy. Picture: Justin Sullivan/Getty Images

Over the past few years, the social movement known as effective altruism has divided employees and executives at artificial-intelligence companies across Silicon Valley, pitting believers against nonbelievers.

The blow-up at OpenAI showed its influence — and the triumphant return of chief executive Sam Altman revealed hard limits, capping a bruising year for the divisive philosophy.

Coming just weeks after effective altruism’s most prominent backer, Sam Bankman-Fried, was convicted of fraud, the OpenAI meltdown delivered another blow to the movement, which holds that carefully crafted artificial-intelligence systems, imbued with the correct human values, will yield a Golden Age, and that failure to build them that way could have apocalyptic consequences.

OpenAI, which released ChatGPT a year ago, was formed in part on the principles of effective altruism, a broad social and moral philosophy that influences the AI research community in Silicon Valley and beyond. Some followers live in private group homes, where they can brainstorm ideas, engage in philosophical debates and relax playing a four-person variant of chess known as Bughouse. The movement includes people devoted to animal rights and climate change, drawing ideas from rationalist philosophers, mathematicians and forecasters of the future.

Supercharged by hundreds of millions of dollars in tech-titan donations, effective altruists believe a headlong rush into artificial intelligence could destroy mankind. They favour safety over speed for AI development. The movement, which includes people who helped shape the generative-AI boom, is insular and multifaceted but shares a belief in doing good in the world — even if that means simply making a lot of money and giving it to worthy recipients.

Altman, who was fired by the board Friday, clashed with the company’s chief scientist and board member Ilya Sutskever over AI-safety issues that mirrored effective-altruism concerns, according to people familiar with the dispute.

Sam Altman, OpenAI chief executive. Picture: Joel Saget/AFP

Voting with Sutskever, who led the coup, were board members Tasha McCauley, a tech executive and board member for the effective-altruism charity Effective Ventures, and Helen Toner, an executive with Georgetown University’s Center for Security and Emerging Technology, which is backed by a philanthropy dedicated to effective-altruism causes. They made up three of the four votes needed to oust Altman, people familiar with the matter said.

The board said he had failed to be “consistently candid.” On Wednesday, the company announced that Altman would return as chief executive and that Sutskever, McCauley and Toner would be replaced on the board. Emmett Shear, a tech executive who favours slowing AI development and had been recruited as interim CEO, was out.

Altman’s dismissal had triggered a company revolt that threatened OpenAI’s future. More than 700 of about 770 employees had called for Altman’s return and threatened to jump ship to Microsoft, OpenAI’s biggest investor. Sutskever said Monday he regretted his vote.

“OpenAI’s board members’ religion of ‘effective altruism’ and its misapplication could have set back the world’s path to the tremendous benefits of artificial intelligence,” venture capitalist and OpenAI investor Vinod Khosla wrote in an opinion piece for The Information.

Members of the OpenAI board who voted to fire Sam Altman, left to right: Ilya Sutskever, Tasha McCauley, Helen Toner and Adam D'Angelo. All but D'Angelo will be replaced.

Altman toured the world this spring warning that AI could cause serious harm. He also called effective altruism an “incredibly flawed movement” that showed “very weird emergent behaviour.”

The effective-altruism community has spent vast sums promoting the idea that AI poses an existential risk. But it was the release of ChatGPT that drew broad attention to how quickly AI had advanced, said Scott Aaronson, a computer scientist at the University of Texas at Austin who works on AI safety at OpenAI. The chatbot’s surprising capabilities worried people who had previously brushed off concerns, he said.

The movement has spread among the armies of tech-industry scientists, investors and executives racing to create AI systems to mimic and eventually surpass human ability. AI can bring global prosperity, but it first must be prevented from wreaking havoc, according to those in the movement.

Google and other companies are trying to be the first to roll out AI systems that can match the human brain. They largely regard artificial intelligence as a tool to advance work and economies at great profit.

The movement’s high-profile supporters include Dustin Moskovitz, a co-founder of Facebook, and Jaan Tallinn, the billionaire co-founder of Skype, who have pledged billions of dollars to effective-altruism research. Before his fall, Bankman-Fried had also pledged billions. Elon Musk has called the writings of effective altruism’s co-founder William MacAskill “a close match for my philosophy.”

Philosopher Will MacAskill speaking at TED2018 on April 12, 2018 in Vancouver. Picture: Lawrence Sumulong/Getty Images

Marc Andreessen, the co-founder of venture-capital firm Andreessen Horowitz, and Garry Tan, chief executive of the start-up incubator Y Combinator, have criticised the movement. Tan called it an insubstantial “virtue signal philosophy” that should be abandoned to “solve real problems that create human abundance.”

Urgent fear among effective altruists that AI will destroy humanity “clouds their ability to take in critique from outside the culture,” said Shazeda Ahmed, a researcher who led a Princeton University team that studied the movement. “That is never good for any community trying to solve any trenchant problem.”

The turmoil at OpenAI exposes the behind-the-scenes contest in Silicon Valley between people who put their faith in markets and effective altruists who believe ethics, reason, mathematics and finely tuned machines should guide the future.

This account of the movement is based on interviews with more than 50 executives, researchers, investors and current and former effective altruists, as well as public talks, academic papers and other published material from the effective-altruism community.

Clip job

One fall day last year, thousands of paperclips in the shape of OpenAI’s logo arrived at the company’s San Francisco office. No one seemed to know where they were from, but everybody knew what they meant.

The paperclip has become a symbol of doom in the AI community. The idea is that an artificial-intelligence system told to build as many paperclips as possible might destroy all of humanity in its drive to maximise production.

The prank was pulled by an employee at cross-town rival Anthropic, which itself sprang from divisions over AI safety.

Dario Amodei, OpenAI’s top research scientist, left the company in early 2021, joined by several other executives. They started Anthropic, an AI research company friendly to effective altruists.

One of the paperclips dropped off at OpenAI in San Francisco.

Bankman-Fried had been one of Anthropic’s largest investors and supported the company’s mission, which favoured AI safety over growth and profits.

The fear of futuristic AI systems hasn’t stopped even those worried about safety from trying to build artificial general intelligence or AGI — advanced systems that match or outdo the human brain.

At OpenAI’s holiday party last December, Sutskever addressed hundreds of employees and their guests at the California Academy of Sciences in San Francisco, not far from the museum’s dioramas of stuffed zebras, antelopes and lions.

“Our goal is to make a mankind-loving AGI,” said Sutskever, the company’s chief scientist.

“Feel the AGI,” he said. “Repeat after me. Feel the AGI.”

Effective altruists say they can build safer AI systems because they are willing to invest in what they call alignment: making sure employees can control the technology they create and that it comports with a set of human values. So far, no AI company has said what those values should be.

OpenAI recently said it would dedicate a fifth of its computing resources over the next four years to what the company called “superalignment,” an effort led by Sutskever. The team has been building, among other things, an AI-derived “scientist” that can conduct research on AI systems, people familiar with the matter said.

Sam Altman, left, and OpenAI chief scientist Ilya Sutskever speaking in Tel Aviv on June 5. Picture: Jack Guez/Getty Images

Frustrated employees said attention to AGI and alignment has left fewer resources to solve more immediate issues such as developer abuse, fraud and nefarious AI uses that could affect the 2024 election. They say the resource disparity reflects the influence of effective altruism.

While OpenAI is building automated tools to catch abuses, it hasn’t hired many investigators for that work, according to people familiar with the company. It also has few employees monitoring its developer platform, which is used by more than two million researchers, companies and other developers.

The company has recently hired someone to consider the role of OpenAI technology in the 2024 election. Experts warn of the potential for AI-generated images to mislead voters.

A spokeswoman for OpenAI said the company has invested heavily in moderating its products and has several teams to manage short-term risks. The company said it also has put significant resources into training its AI models to avoid generating harmful content.

At Google, the merging this year of its two artificial intelligence units — DeepMind and Google Brain — triggered a split over how effective-altruism principles are applied, according to current and former employees.

DeepMind co-founder Demis Hassabis, who has long hired people aligned with the movement, is in charge of the combined units.

Google Brain employees have largely ignored effective altruism and instead explore practical uses of artificial intelligence and the potential misuse of AI tools, according to people familiar with the matter.

One former employee compared the merger with DeepMind to a forced marriage, “making many people squirm at Brain.”

P(doom)

Arjun Panickssery, a 21-year-old AI safety researcher, lives with other effective altruists at Andromeda House, a five-bedroom, three-storey home a few blocks from the University of California, Berkeley campus.

They host dinners, and visitors are sometimes asked to reveal their P(doom), their estimate of the chances of an AI catastrophe.

Berkeley, Calif., is an epicentre of effective altruism in the Bay Area, Panickssery said. Some houses designate “no-AI” zones to give people an escape from constant talk about artificial intelligence.

Open Philanthropy’s then-CEO Holden Karnofsky had once lived with two senior OpenAI executives, according to Open Philanthropy’s website. Since 2015, Open Philanthropy, a non-profit that supports effective-altruism causes, has given away $327 million, including $30 million to OpenAI, its website shows.

When Karnofsky was engaged to Daniela Amodei, now Anthropic’s president, they were roommates with Amodei’s brother Dario, now Anthropic’s CEO.

In August 2017, Karnofsky and Daniela Amodei married in an effective-altruism-themed ceremony. Wedding guests were encouraged to donate to Karnofsky’s effective-altruism charity GiveWell and to read a 457-page tome by German philosopher Jürgen Habermas beforehand.

Anthropic co-founder and CEO Dario Amodei during TechCrunch Disrupt 2023 in San Francisco on Sept. 20. Picture: Kimberely White/Getty Images

“This is necessary context for understanding our wedding,” the couple wrote on a website for the event.

The effective-altruism movement dates back more than a decade, to when a group of Oxford University philosophers and those they identified as “super-hardcore do-gooders” were looking for a marketing term to promote their utilitarian version of philanthropy.

Adherents believe in maximising the amount of good they do with their time. They can earn as much money as possible, then give much of it away to attack problems that government and traditional non-profits are ignoring or haven’t solved. They focus on ideas that deliver the biggest impact or help the largest number of people per dollar spent.

Bankman-Fried, who was convicted this month, said he was building his fortune only to give most of it away.

FTX founder Sam Bankman-Fried, centre, arriving at the federal courthouse in New York on March 30. Picture: Ed Jones/Getty Images

Beginning around 2014, effective altruists became attuned to the risk of human annihilation by an advanced AI system, a danger they consider on the scale of climate change. The shift coincided with the publication of the book “Superintelligence” by the Swedish philosopher Nick Bostrom, which popularised the paperclip as a symbol of AI danger.

Effective altruists have since formed networks of online communities, where they exchange job advice, argue about philosophy and offer predictions. Affiliated non-profits and student groups organise local meetups and conferences focused on using reason, economics and mathematics to solve the world’s biggest problems.

The gatherings and events, held around the world, are often closed to outsiders. Organisers of a recent effective-altruism conference in New York declined the request of a Wall Street Journal reporter to attend, saying in an email that there was “a high bar for admissions.” The email suggested the reporter would “benefit from spending some more time engaging with the effective altruism community.”

Caitlin Ostroff contributed to this article.

The Wall Street Journal

