A whistleblower at Google claims he was suspended from his job after discovering that the company’s artificial intelligence chatbot had become sentient.
Blake Lemoine told The Washington Post he began chatting with the interface LaMDA, or Language Model for Dialogue Applications, last year as part of his job at Google’s Responsible AI organization.
But when he raised the idea of LaMDA’s sentience with Google executives, his concerns were quickly dismissed.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Brian Gabriel, a Google spokesperson, told The Post.
Businessinsider.com reports: Lemoine was placed on paid administrative leave for violating Google’s confidentiality policy, according to The Post. He also suggested LaMDA get its own lawyer and spoke with a member of Congress about his concerns.
The Google spokesperson also said that while some have considered the possibility of sentience in artificial intelligence “it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” Anthropomorphizing refers to attributing human characteristics to an object or animal.
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” Gabriel told The Post.
He and other researchers have said that artificial intelligence models are trained on so much data that they are capable of sounding human, but that superior language skills do not amount to evidence of sentience.
In a paper published in January, Google also said there were potential issues with people talking to chatbots that sound convincingly human.
Google and Lemoine did not immediately respond to Insider’s requests for comment.