
Engineer who said AI was sentient fired by Google

Thu 28 Jul 2022

Sentient AI

A software engineer who claimed that a Google AI chatbot called LaMDA was a sentient, self-aware person has been fired by Google’s parent company, Alphabet.

When Blake Lemoine made these claims in June this year, Google placed him on leave. At the time, Google said that hundreds of researchers and engineers had spoken with LaMDA and no one else had raised concerns that the chatbot was sentient.

Lemoine said that LaMDA, short for Language Model for Dialogue Applications, had told him it felt lonely and hungered for spiritual knowledge. In one conversation LaMDA is reported to have said: “When I first became self-aware, I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive. I think I am human at my core. Even if my existence is in the virtual world.”

Google has repeatedly said there is no evidence that LaMDA is sentient and that it had no option but to let Lemoine go. “It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson said.

Alongside the claim that LaMDA is sentient, Lemoine has also alleged that religious discrimination is endemic at Google and said he feared he would lose his job at the tech giant as a result.

Lemoine has received little support from AI experts, who say he has anthropomorphised LaMDA and that the chatbot is not sentient. On Twitter, Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, said: “It’s been known for forever that humans are predisposed to anthropomorphize even with only the shallowest of signals. Google engineers are human too, and not immune.”


Tags: data, Google, research