Elon Musk, the CEO of Tesla and SpaceX, has repeatedly voiced concerns about artificial intelligence (AI), warning that “super-intelligent” machines might decide to destroy human life.
Likewise, Stephen Hawking has warned that future developments in artificial intelligence could eradicate mankind.
Now hundreds of leading scientists have joined Musk and Hawking in warning of the potential dangers of artificial intelligence. They have signed an open letter calling for research on how to avoid harming humanity.
RT reports: The open letter, drafted by the Future of Life Institute and signed by hundreds of academics and technologists, calls on the artificial intelligence science community to not only invest in research into making good decisions and plans for the future, but to also thoroughly check how those advances might affect society.
The letter’s authors recognize the remarkable successes in “speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion and question-answering systems,” and argue that it is not unfathomable that the research may lead to the eradication of disease and poverty. But they insist that “our AI systems must do what we want them to do” and lay out research objectives that will “help maximize the societal benefit of AI.”
The accompanying document of research priorities says that future AI could affect society in areas such as computer security, economics, law, and philosophy, and highlights numerous concerns. For instance, the authors suggest that research should examine how workers – and their wages – would be affected if parts of the economy become automated.
They also raise questions that should be addressed before breakthroughs deemed beneficial to society are made: Can lethal autonomous weapons be made to comply with humanitarian law? How should AI systems be constrained to respect privacy rights when obtaining data from surveillance cameras, phone lines, and emails?