Hawking, Musk and Wozniak call for worldwide ban on autonomous weapons

28 Jul 2015

Experts worry that the continued development of autonomous weapons will have wide-reaching, global consequences

A worldwide ban on the development and use of “offensive autonomous weapons” has been called for in an open letter from the Future of Life Institute, with signatories including Stephen Hawking, Elon Musk, Steve Wozniak and hundreds of other experts in robotics and artificial intelligence.

Autonomous weapons, the letter argues, could set off the “third revolution in warfare, after gunpowder and nuclear weapons”.

The letter was published to coincide with the International Joint Conference on Artificial Intelligence, currently taking place in Buenos Aires.

Autonomous weapons – more bluntly referred to by some as killer robots – are capable of selecting and neutralising targets without any human oversight.

According to the letter: “AI technology has reached a point where the deployment of such systems is – practically, if not legally – feasible within years, not decades.”

The letter cites the relative affordability of developing these weapons systems as a major cause for concern, saying that, if development of the technologies continues, it is only a matter of time before they are used by terrorists and warlords.

Of course, terrorists and warlords aren’t the only people capable of inflicting a huge amount of damage with these systems.

The letter argues that the potential ubiquity of offensive autonomous weapons, in negating the need to send humans into the battlefield, could increase the likelihood of wars breaking out. When governments no longer have to consider the loss of life among their own people, what’s to stop them becoming increasingly trigger-happy?

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the end point of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” reads the letter.

Another concern of the letter writers and signatories is the immense damage that weaponising AI could do to public perceptions of the technology.

The experts and researchers behind the letter acknowledge the many positive applications of AI, ranging from fighting disease to carrying out rescues.

However, the letter argues, if AI is capable of killing without a human instructing it to do so, the public may not trust it to carry out these humanitarian tasks, setting the industry back by decades.

If the generally horrified reaction to the drone-mounted automatic handgun that came to public attention a few weeks ago is anything to go by, they might be right about that.


Kirsty Tobin was careers editor at Silicon Republic