Humans must retain control of military AI systems, MEPs say

21 Jan 2021


MEPs have called for an EU strategy prohibiting the use of lethal autonomous weapon systems and a ban on so-called ‘killer robots’.

A new report from MEPs calls for an EU legal framework on AI with definitions and ethical principles, including its military use.

MEPs have stressed the importance of ensuring that AI and related technologies are human-centred, and that AI systems must be subject to “meaningful control” from humans.

The report was adopted on Wednesday (20 January) with 364 votes in favour, 274 votes against and 52 abstentions. It focused on military use of AI, mass surveillance, deepfakes and AI systems in the public sector.

“The use of lethal autonomous weapon systems (LAWS) raises fundamental ethical and legal questions on human control”, the report said, insisting on the need for an EU-wide strategy against LAWS and a ban on so-called ‘killer robots’.

“The decision to select a target and take lethal action using an autonomous weapon system must always be made by a human exercising meaningful control and judgement, in line with the principles of proportionality and necessity.”

The text calls on the EU to take a leading role in creating and promoting a global framework governing the military use of AI, alongside the UN and the international community.

AI use in non-military settings

Outside of military use, the report states that personal data must be protected and the principle of equal treatment upheld when AI is used in the public sector.

“While the use of AI technologies in the justice sector can help speed up proceedings and take more rational decisions, final court decisions must be taken by humans, be strictly verified by a person and be subject to due process,” it said.

MEPs also warned of threats to fundamental human rights when it comes to the use of AI technologies in mass civil and military surveillance. The report states that the use of “highly intrusive social scoring applications” for monitoring and rating citizens must be banned.

One of the most notable examples of a social scoring application is China’s planned social credit system, which is already being rolled out.

The MEP report also raised concerns around deepfake technologies and their potential to “destabilise countries, spread disinformation and influence elections”.

MEP Gilles Lebreton said legal responses are needed to the challenges posed by AI development.

“To prepare the Commission’s legislative proposal on this subject, this report aims to put in place a framework which essentially recalls that, in any area, especially in the military field and in those managed by the state such as justice and health, AI must always remain a tool used only to assist decision-making or help when taking action. It must never replace or relieve humans of their responsibility,” he said.

The latest call from MEPs follows a number of draft proposals adopted by the European Parliament in October 2020. These proposals laid the groundwork for regulating AI with regard to ethics, liability and intellectual property.

Jenny Darmody is the editor of Silicon Republic

editorial@siliconrepublic.com