“If we one day develop machines with general intelligence that surpasses ours, they would be in a very powerful position,” says Nick Bostrom, Oxford professor and founding director of the Future of Humanity Institute.
Bostrom sat down with Reason science correspondent Ron Bailey to discuss his latest book, Superintelligence: Paths, Dangers, Strategies, in which he examines the risks humanity will face when artificial intelligence (AI) is created. Bostrom worries that, once computer intelligence exceeds our own, machines will be beyond our control and will seek to shape the future according to their will. If the machines’ goals aren’t properly set by designers, they could see humans as liabilities—leading to our annihilation.
How do we avoid a robot apocalypse? Bostrom proposes two solutions: either confine AI to answering questions within preset boundaries, or engineer its goals to include human preservation. “We have got to solve the control problem before we solve the AI problem,” Bostrom explains. “The big challenge then is to reach into this huge space of possible mind designs, motivation system designs, and try to pick out one of the very special ones that would be consistent with human survival and flourishing.”
Until then, Bostrom believes AI research should be slowed dramatically, giving humanity ample time to understand its own objectives.
By Royce Christyn