Elon Musk and Stephen Hawking sign deal to keep AI from ending humanity

12 Jan 2015


Terminator robot image via Wikimedia Commons


Hundreds of scientists, academics and entrepreneurs, including Elon Musk and Stephen Hawking, have digitally added their names to an open letter pledging to keep artificial intelligence (AI) from ending humanity.

Both Musk and Hawking have been quite vocal in recent months about their fears over the direction AI development is taking, and their belief that its unmitigated development could, in the far future, lead to robots overtaking mankind and eventually destroying it.

While many regard films such as the Terminator series as pure science fiction, the Future of Life Institute (FLI) certainly doesn’t treat the concept as fantasy in its open letter.

While not putting it in such dramatic terms, the group is trying to promote the idea that there needs to be increased regulation of how robotics and AI technology develops if it is to share the workplace with humans and, one day, our homes.

Benefits and pitfalls of AI

“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” says the open letter.

It continued, “The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

As part of the letter, the FLI has also published a research document detailing specifically which points need to be addressed in the development of AI, including the future of autonomous weapons and vehicles, market disruption caused by AI, and AI’s role in privacy and the analysis of people’s online data.

Calling for this research to be expanded across the greater science and technology community, the letter said: “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

Colm Gorey is a journalist with Siliconrepublic.com

editorial@siliconrepublic.com