Firestorm erupts inside Google and across the artificial intelligence field over algorithmic dangers
An academic paper has ignited a firestorm inside Google and across the AI field over how the technology is being developed, by whom, and its potential dangers.
On Wednesday, Emily Bender, a professor of linguistics at the University of Washington, will present what may appear to be just another academic paper, aside from its somewhat quirky title: “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”
The paper is just one of about 80 to be presented at the ACM FAccT conference, an annual event for computer scientists focused on algorithmic bias and accountability. Yet it has ignited a firestorm inside Google and across the field of artificial intelligence over how the powerful technology is being developed, by whom and its potential dangers.
In December, Timnit Gebru, a co-author of the paper and co-head of Google’s ethical AI research team, abruptly left the company after she refused to retract the research; she says she was fired, while Google says she resigned.
The company claimed the paper “didn’t meet our bar for publication”, even though it had been peer reviewed and accepted for the conference. Meg Mitchell, founder of Google’s ethical AI team, rushed to Dr Gebru’s defence.
In a letter to bosses, which she later made public, she said: “The firing seems to have been fuelled by the same underpinnings of racism and sexism that our AI systems, when in the wrong hands, tend to soak up.
“How Dr Gebru was fired is not OK.”
Dr Mitchell herself was sacked by Google in February over claims that she had violated its code of conduct by removing confidential files from company systems.
The ethical AI research arm inside what is arguably the world’s most powerful AI developer was suddenly in disarray — and facing questions about whether it was operating in good faith.
AI is often referred to as the “new electricity”, a force coursing through everything from auto-complete in emails to self-driving cars to facial recognition.
Algorithms are “trained” to recognise patterns and predict answers by crunching through data.
However, that data is provided, and the algorithms crafted, by humans who bring prejudices with them.
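How that happens can be seen in miniature. The sketch below, in Python, is a deliberately crude illustration, not code from Google or any real system: a toy “sentiment” model that learns nothing but word counts from a handful of invented, skewed training examples, and so absorbs the prejudice baked into them.

```python
from collections import Counter

# A toy model trained on a tiny, deliberately skewed set of labelled
# sentences. The examples and labels are invented for illustration.
training_data = [
    ("the engineer wrote brilliant code", "positive"),
    ("he solved the problem quickly", "positive"),
    ("she missed the deadline again", "negative"),
    ("her report was confusing", "negative"),
]

# "Training": count how often each word appears under each label.
word_counts = {"positive": Counter(), "negative": Counter()}
for sentence, label in training_data:
    word_counts[label].update(sentence.split())

def predict(sentence: str) -> str:
    """Label a sentence by which class its words were seen with more often."""
    words = sentence.split()
    pos = sum(word_counts["positive"][w] for w in words)
    neg = sum(word_counts["negative"][w] for w in words)
    return "positive" if pos >= neg else "negative"

# Identical sentences, differing only in the pronoun, get opposite labels,
# because "he" happened to appear in the praising examples and "she" in
# the critical ones.
print(predict("he did it"))   # -> "positive"
print(predict("she did it"))  # -> "negative"
```

The model has learnt nothing about ability or performance, only which words its human labellers happened to pair with praise or criticism; scaled up to millions of documents, the same mechanism lets real systems absorb real prejudice.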
Dr Gebru, who co-founded a network for black computer scientists, was involved in research showing that facial recognition software used by police worldwide is markedly less accurate for women and people of colour.
“The audacity of Google leaders to be angry about leaks cause they want to have ‘honest conversations.’ Hilarious. Bruh you fired me for trying just that. Internally. You know how much stuff I could’ve leaked to the press over the years. Did I? No. And You fired & smeared me 1/”
— Timnit Gebru (@timnitGebru) March 1, 2021
The anger over her dismissal has not ceased.
More than 2,600 Google staff signed an open letter in support of her.
This month, the company announced a management reorganisation, a racial equality review, and new policies including tying manager and executive pay to diversity and inclusion goals.
Dr Gebru, who was born in Ethiopia, said: “The paper was just a pretext.
“It wasn’t really the reason they fired me … after reading the paper, a lot of people have come to that conclusion.
“The root of it is racism and sexism.”
The paper examined the risks of “large language models”, which process huge amounts of text to create a tool that can generate believable articles, poetry — even racist insults.
Last summer, OpenAI, an artificial intelligence laboratory backed by Elon Musk and Microsoft, debuted its GPT-3 language model, which can produce human-like text for newspapers, offer life advice on Reddit and write basic code.
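GPT-3 itself is a vast neural network, but the core idea of a language model can be sketched in a few lines. What follows is a bare-bones Markov-chain toy in Python with a made-up corpus, offered only as an illustration of the principle, not of how GPT-3 is actually built: the model learns which words follow which in its training text, then generates by sampling, so it can only parrot patterns, admirable or abhorrent, that exist in the data it was fed.

```python
import random
from collections import defaultdict

# A made-up training corpus standing in for the vast text collections
# real language models are trained on.
corpus = (
    "the model reads huge amounts of text "
    "the model learns which word tends to follow which "
    "the model then generates text one word at a time"
)

# "Training": for each word, record every word observed to follow it.
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a word seen after the last one."""
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:  # dead end: this word never appeared mid-corpus
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))  # e.g. "the model learns which word tends to follow ..."
```

Everything the toy can ever say is recombined from its corpus; feed it toxic text and it will, with some probability, reproduce toxicity, which is the risk, at vastly greater scale, that the “stochastic parrots” paper describes.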
Dr Gebru said: “There was a lot of hype around these models, and if there was talk about drawbacks, it was a footnote.
“It was like, ‘Yeah, these things are great’.
“But then there’s this Islamophobic stuff it generates.”
The paper sought to point out such dangers.
Google AI chief Jeff Dean apologised for how Dr Gebru’s exit was handled.
He wrote: “It led some to question their place here, which I regret.”
In January, Google unveiled a language model far larger than GPT-3.
The Times