AI ethics and the military: A tangled web

19 Feb 2019

The Pentagon. Image: © icholakov/Stock.adobe.com

With increased discussion around the use of AI by global militaries, how is the ethical conversation evolving?

The difficult ethical questions arising from the use of AI (artificial intelligence) are manifold, and Siliconrepublic.com has covered them time and again. While AI is often hyped as a business saviour and derided as a job killer, the question of AI ethics also looms over military uses of the technology, particularly in the wake of the Project Maven furore at Google.

More and more militaries using AI

The reality is that AI is already a growing element of the military strategy of many countries, while powers such as the EU and China have been engaging with the issue of AI ethics for some time.

Just this month, the Pentagon called for the rapid adoption of AI across all aspects of the US military and asked big tech firms for collaborative help. Earlier in the year, it had sought clearer ethical guidelines for the use of AI.

Dana Deasy, CIO at the US Department of Defense, told press: “We must adopt AI to maintain our strategic position and prevail on future battlefields.” Oracle, IBM, Google and SAP have all indicated interest in working on future Department of Defense AI projects.  

When people think of the use of AI by the military, they may first think of the ‘killer robots’ or autonomous weapons that many have warned about. While AI weapons are a stark reality, many deployments involve automated diagnostics, defensive cybersecurity and hardware maintenance assistance. The contentious use of facial recognition by US Immigration and Customs Enforcement (ICE) can also be considered a deployment of AI in an increasingly militarised world.

The use cases for AI in defence are plentiful. Antony Edwards is COO of Eggplant, a provider of continuous intelligent test automation services that has some clients in the defence space. These services are used by NASA to ensure all the systems in the Orion spacecraft’s digital cockpit are behaving correctly. “That these instruments are showing the correct information, and entering information into the instrument has the correct effect, is clearly critical to mission success,” Edwards explained.

The Federal Aviation Administration also uses Eggplant to ensure its digital displays are correct: “ie if an aircraft comes into the monitored airspace, it shows on the appropriate screen in the appropriate way.”
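To make the idea concrete, here is a minimal sketch of the kind of automated display check described above. It simulates an aircraft track entering monitored airspace and verifies it is rendered in the right place. All the class and function names here are invented for illustration; this is not Eggplant’s actual API.

```python
# Hypothetical sketch of an automated display-verification test.
# The Track/DisplayUnderTest names are stand-ins invented for this example.

from dataclasses import dataclass


@dataclass
class Track:
    callsign: str
    lat: float
    lon: float


class DisplayUnderTest:
    """Stand-in for the system whose screen is being verified."""

    def __init__(self):
        self.rendered = {}  # callsign -> (lat, lon) as drawn on screen

    def ingest(self, track: Track):
        # A real test would drive the full rendering pipeline end to end.
        self.rendered[track.callsign] = (track.lat, track.lon)


def verify_track_displayed(display: DisplayUnderTest, track: Track,
                           tolerance: float = 0.01) -> bool:
    """Check the track appears on screen at (approximately) the right position."""
    pos = display.rendered.get(track.callsign)
    if pos is None:
        return False
    return (abs(pos[0] - track.lat) <= tolerance
            and abs(pos[1] - track.lon) <= tolerance)


if __name__ == "__main__":
    display = DisplayUnderTest()
    incoming = Track("EI123", lat=53.42, lon=-6.27)
    display.ingest(incoming)  # aircraft enters monitored airspace
    assert verify_track_displayed(display, incoming), "Track missing or misplaced"
    print("Display check passed: EI123 shown at expected position")
```

The point of such a test is exactly what Edwards describes: asserting that what the operator sees matches what the underlying system knows.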

How should AI be approached?

According to an Electronic Frontier Foundation (EFF) white paper aimed at militaries, there are several concrete steps that can be taken to approach AI in a thoughtful way.

These include supporting civilian leadership of AI research, supporting international agreements and institutions on the issues, focusing on predictability and robustness, encouraging open research and dialogue between nations, and placing a higher priority on defensive cybersecurity measures. 

Looking at ethical codes, some legal experts argue that ethics themselves are too subjective to govern the use of AI, according to the MIT Technology Review.

Human rights issues

Many leading human rights organisations argue that the use of weapons such as armed drones will lead to an increase in civilian deaths and unlawful killings. Others are concerned that unregulated AI will lead to an international arms race.

This is a concern for many who are not convinced that AI, as it exists now, should be deployed in certain circumstances, given known vulnerabilities and gaps in our understanding of the weaknesses of certain models.

AI expert David Gunning spoke about the issues with Siliconrepublic.com: “We don’t want there to be a military arms race on creating the most vicious AI system around … But, to some extent, I’m not sure how you avoid it.

“Like any technology arms race, as soon as our enemies use it, we don’t want to be left behind. We certainly don’t want to be surprised.”  

Edwards believes that more awareness of AI among software acquirers is an important element when it comes to using it in these contexts. “AI breaks many of the assumptions that people make about software and its potential negative impacts, so anyone acquiring a product that includes AI must understand what that AI is doing, how it works, and how it is going to impact the behaviour of the software.  

“They must also understand what safety mechanisms have been built in to protect against errant algorithms.” 
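As one illustration of the kind of safety mechanism Edwards mentions, here is a minimal, hypothetical sketch of a runtime guard that rejects out-of-range model outputs before they can drive an action. The function names, ranges and fallback value are all invented for this example.

```python
# Hypothetical sketch of a runtime guard around a model's output, illustrating
# the kind of built-in protection against errant algorithms described above.


def model_predict(sensor_reading: float) -> float:
    """Stand-in for an AI component; returns a proposed actuator setting."""
    return sensor_reading * 1.8  # placeholder logic


def guarded_predict(sensor_reading: float,
                    lower: float = 0.0,
                    upper: float = 100.0,
                    fallback: float = 50.0) -> float:
    """Never let an out-of-range output reach the actuator."""
    proposal = model_predict(sensor_reading)
    if not (lower <= proposal <= upper):
        # Flag the rejection and fall back to a known-safe default.
        print(f"Rejected out-of-range output {proposal:.1f}; using fallback {fallback}")
        return fallback
    return proposal


if __name__ == "__main__":
    print(guarded_predict(30.0))  # 54.0 is within range and passes through
    print(guarded_predict(90.0))  # 162.0 is out of range -> fallback 50.0
```

The design choice here is that the guard sits outside the model: whatever the algorithm does internally, its output is checked against explicit, human-set bounds before anything acts on it.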

AI ethics can be unclear

Luca De Ambroggi is senior research director of AI at IHS Markit, with decades of experience in AI and machine learning. He says that when it comes to military projects, ethics “can get very muddy”.  

He added: “AI ethics are generally complex at a global level precisely because different cultures have different values.

“However, because the fine lines of war and peace are at stake, the military arena can actually be where a global consensus is found if and when the international community come together around a table.  

“From the Nuclear Non-Proliferation Treaty to the Geneva Convention, there is a long history of creating good faith agreements about the rules of war. As with nuclear, however, there will also be rogue states that openly disavow any agreed AI ethical framework and those who choose not to act in its spirit.” 

As was mentioned earlier, De Ambroggi agrees that the willingness of nations to discuss these issues is paramount. He added: “It would be irresponsible if ethics was ignored. AI usage will remain with the human operator for now, as it is still intended to aid humans at a tactical and command level.  

“For this reason, it is vital a code is developed and adhered to. However, we must continue to research the benefits and pitfalls of widespread AI application and implementation within military usage, to further inform the ethics of AI.” 

Who makes the call?

Principal technology strategist at Quest, Colin Truran, got to the core of the broader issue of AI ethics: “The current overarching conundrum surrounding AI ethics is really in who decides what is ‘ethical’. AI is developing in a global economy, and there is a high likelihood of data exchange between multiple AI solutions.”

Ultimately, these are ethical quandaries that will likely take years to find answers to, if such a feat is even possible. As the EFF notes, the coming years will be a critical period in determining how militaries use AI: “The present moment is pivotal: in the next few years either the defence community will figure out how to contribute to the complex problem of building safe and controllable AI systems, or buy into the hype and build AI into vulnerable systems and processes that we may come to regret in decades to come.”

Ellen Tannam was a journalist with Silicon Republic, covering all manner of business and tech subjects

editorial@siliconrepublic.com