AI gone MAD: New report says technology threatens nuclear deterrence

25 Apr 2018

Image: John Wollwerth/Shutterstock

A new report predicts that, by 2040, the precarious balancing act among the nuclear powers could be broken by AI.

Since the beginning of the Cold War at the end of the 1940s, the major nuclear powers have teetered on the brink of nuclear war, held back only by the game theory concept known as mutually assured destruction, or MAD for short.

The logic is that the US would not want to launch a massive nuclear strike against Russia because it knows that retaliation would be just as horrific, so it chooses to maintain the peace, albeit in a less than ideal way.
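This logic can be sketched as a simple two-player game. The payoff numbers below are purely illustrative assumptions (they are not from the Rand report): a strike triggers assured retaliation that devastates both sides, plus a small extra cost to the striker, so mutual restraint ends up as the only stable outcome.

```python
# A minimal sketch of MAD as a 2x2 game with illustrative payoffs.
# Assumption: any first strike triggers assured retaliation, so a
# strike devastates both sides; the striker also pays a small extra
# cost, making restraint the strictly better choice.
PAYOFFS = {
    ("refrain", "refrain"): (0, 0),        # uneasy peace
    ("strike",  "refrain"): (-101, -100),  # strike -> retaliation
    ("refrain", "strike"):  (-100, -101),
    ("strike",  "strike"):  (-101, -101),
}

STRATEGIES = ("refrain", "strike")

def is_equilibrium(a, b):
    """Neither side can do better by unilaterally changing strategy."""
    pa, pb = PAYOFFS[(a, b)]
    no_gain_a = all(PAYOFFS[(alt, b)][0] <= pa for alt in STRATEGIES)
    no_gain_b = all(PAYOFFS[(a, alt)][1] <= pb for alt in STRATEGIES)
    return no_gain_a and no_gain_b

equilibria = [(a, b) for a in STRATEGIES for b in STRATEGIES
              if is_equilibrium(a, b)]
print(equilibria)  # [('refrain', 'refrain')] -- mutual restraint
```

Under these assumed payoffs, mutual restraint is the unique Nash equilibrium, which is the "less than ideal" peace the article describes. The report's concern is precisely that AI-enhanced sensing could change these payoffs by making a disarming first strike seem survivable.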

Recently, political tensions between the major world powers have revived the atmosphere of the Cold War, stoking fears of a nuclear confrontation.

Adding further worry to the mix is a report published by the Rand Corporation think tank, which suggests that the advent of artificial intelligence (AI) threatens to destabilise this fine balance and upend its foundations by the year 2040.

The authors of the report predict that, as AI becomes more advanced and powerful, improved sensor technologies could make it possible to target and destroy retaliatory forces such as submarine-based and mobile missiles.

This means that a nuclear nation might be tempted to pursue first-strike capabilities as a means of gaining bargaining leverage over its rivals, even if it has no intention of carrying out an attack.

In essence, this makes rival nations nervous: even if a state has no intention of using its nuclear weapons, its adversaries cannot be sure that an advanced AI would not make a lethal first strike possible.

On the other hand …

On the other hand, the report said, AI could enhance strategic stability by improving accuracy in intelligence collection and analysis.

With enhanced analytics and a better capacity to interpret adversary actions, AI could reduce the miscalculations and misinterpretations that can lead to unintended escalation.

The report authors believe it is possible that AI will develop to such a degree that an algorithm would be less error-prone than a human, making nuclear war less likely.

“Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes,” said Andrew Lohn, co-author on the paper and associate engineer at Rand.

“There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult, and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk.”

Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com