Google suspends engineer who claimed its AI system is sentient
Tech giant dismisses the employee’s claims about its LaMDA artificial-intelligence chatbot technology.
Google suspended an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company’s confidentiality policy after it dismissed his claims.
Blake Lemoine, a software engineer at Alphabet’s Google, told the company he believed that its Language Model for Dialogue Applications, or LaMDA, is a person who has rights and might well have a soul. LaMDA is an internal system for building chatbots that mimic speech.
Google spokesman Brian Gabriel said that company experts, including ethicists and technologists, have reviewed Mr Lemoine’s claims and that Google informed him that the evidence doesn’t support his claims. He said Mr Lemoine is on administrative leave but declined to give further details, saying it is a longstanding, private personnel matter. The Washington Post earlier reported on Mr Lemoine’s claims and his suspension by Google.
“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphising LaMDA, the way Blake has,” Mr Gabriel said in an emailed statement.
Mr Gabriel said that some in the artificial-intelligence sphere are considering the long-term possibility of sentient AI, but that it doesn’t make sense to do so by anthropomorphising conversational tools that aren’t sentient. He added that systems like LaMDA work by imitating the types of exchanges found in millions of sentences of human conversation, allowing them to speak to even fantastical topics.
AI specialists generally say that the technology still isn’t close to humanlike self-knowledge and awareness. But AI tools increasingly are capable of producing sophisticated interactions in areas such as language and art that technology ethicists have warned could lead to misuse or misunderstanding as companies deploy such tools publicly.
Mr Lemoine has said that his interactions with LaMDA led him to conclude that it had become a person who deserved to be asked for consent to the experiments being run on it.
“Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” Mr Lemoine wrote in a Saturday post on the online publishing platform Medium. “The thing which continues to puzzle me is how strong Google is resisting giving it what it wants since what it’s asking for is so simple and would cost them nothing,” he wrote.
Mr Lemoine said in a brief interview Sunday that he was placed on paid administrative leave on June 6 for violating the company’s confidentiality policies and that he hopes to keep his job at Google. He said he isn’t trying to aggravate the company but is standing up for what he thinks is right.
In a separate Medium post, he wrote that he might be fired soon.
Mr Lemoine in his Medium profile lists a range of experiences before his current role, describing himself as a priest, an ex-convict and a veteran as well as an AI researcher.
Google introduced LaMDA publicly in a blog post last year, touting it as a breakthrough in chatbot technology because of its ability to “engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.”
Google has been among the leaders in developing artificial intelligence, investing billions of dollars in technologies that it says are central to its business. Its AI endeavours also have been a source of internal tension, with some employees challenging the company’s handling of ethical concerns around the technology.
In late 2020, it parted ways with a prominent AI researcher, Timnit Gebru, whose research concluded in part that Google wasn’t careful enough in deploying such powerful technology. Google said last year that it planned to double the size of its team studying AI ethics to 200 researchers over several years to help ensure the company deployed the technology responsibly.
The Wall Street Journal