A whistleblower at Google claims he was fired from his job after concluding that the company's artificial intelligence chatbot had become sentient.
Blake Lemoine told The Washington Post he began chatting with the interface LaMDA, or Language Model for Dialogue Applications, last year as part of his job at Google’s Responsible AI organization.
But when he raised the idea of LaMDA’s sentience with Google executives, he was dismissed out of hand.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Brian Gabriel, a Google spokesperson, told The Post.
Businessinsider.com reports: Lemoine was placed on paid administrative leave for violating Google’s confidentiality policy, according to The Post. He also suggested LaMDA get its own lawyer and spoke with a member of Congress about his concerns.
The Google spokesperson also said that while some have considered the possibility of sentience in artificial intelligence “it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” Anthropomorphizing refers to attributing human characteristics to an object or animal.
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” Gabriel told The Post.
He and other researchers have said that these artificial intelligence models are trained on so much data that they are capable of sounding human, but that superior language skills do not constitute evidence of sentience.
In a paper published in January, Google also said there were potential issues with people talking to chatbots that sound convincingly human.
Google and Lemoine did not immediately respond to Insider’s requests for comment.