State surveillance and automated warfare: Experts call for AI regulation

21 Feb 2018

Automated, sophisticated surveillance systems are a risk posed by AI. Image: sspopov/Shutterstock

According to a group of leading experts, AI could become a dangerous tool for manipulation if immediate action is not taken.

While artificial intelligence (AI) is involved in some of the most exciting and beneficial advancements in history, from disaster-relief supply deliveries made via intelligent drones to expediting scientific research, there is still a great deal of risk inherent in this emerging area.

The technology's current level of sophistication should be viewed as an early indicator of AI's potential; the dramatic evolution of adjacent fields such as robotics, together with ever-cheaper hardware, means it will progress rapidly in the near future.

A group of 26 experts has presented a major report on the threat of AI falling into the wrong hands, with institutions such as the Electronic Frontier Foundation (EFF), the University of Oxford and the Future of Humanity Institute taking part.

The report, titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, issues a stark warning that those who design and research AI systems need to do more to mitigate major risks, while governments need to consider new laws in the face of these new issues.

The report recommends four key steps:

  • Researchers and policymakers need to collaborate more closely to investigate, prevent and mitigate malicious uses of AI
  • Those who work in the AI field must take heed of the dual-use nature of the technology they are helping to create. Proactive and mindful steps should be taken to reach out to the relevant people when harmful applications of AI are possible
  • Best practices should be modelled on industries that have more mature methods of dealing with dual-use concerns such as computer security
  • The range of stakeholders and experts engaging in discussions around the risks of AI should be expanded

AI technology is catching up to the hype

For a long time, hype around AI ran far ahead of the technology itself, but this has changed in recent years thanks to rapid technical progress and a fall in the price of the resources needed to build AI systems.

The report says there will be three key changes in the threat landscape brought about by AI:

  • expansion of existing threats, as cheaper attacks widen the set of actors able to carry them out
  • emergence of entirely new threats made possible by new technologies
  • a change in the character of attacks, which will become more targeted, more effective and more likely to exploit vulnerabilities in AI systems themselves

Researchers need to think about potential threats far earlier than they currently do, or they risk empowering bad actors to change the fabric of everyday life. Cyberattacks could become far easier if the discovery of critical software bugs is automated, while social engineering attacks could become far more convincing with the aid of algorithmic profiling. Synthesised-speech impersonation and finely targeted spam emails are just some of the risks that could become realities.
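To make the automated bug discovery point concrete, here is a minimal, purely illustrative sketch (not taken from the report) of random fuzzing, the simplest form of automated bug hunting. The parse_record function and its planted bounds-check bug are invented for the example.

    import random

    def parse_record(data: bytes) -> int:
        # Toy parser with a planted bug: it trusts the length byte blindly.
        length = data[0]             # first byte claims the payload length
        checksum = data[1 + length]  # bug: no bounds check on the claimed length
        return checksum

    def fuzz(trials: int = 10_000) -> None:
        # Throw random byte strings at the parser; report the first crash found.
        random.seed(0)  # deterministic, for the sake of the example
        for i in range(trials):
            data = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
            try:
                parse_record(data)
            except IndexError as exc:
                print(f"trial {i}: parser crashed on input {data!r}: {exc}")
                return
        print("no crash found")

    if __name__ == "__main__":
        fuzz()

Real-world tools such as coverage-guided fuzzers are far more sophisticated, but the principle is the same: machines trying inputs at a scale no human could, and flagging the ones that break things.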

Weaponising AI

The digital, political and physical security of the world could be endangered by irresponsible development of AI technologies, from cyberattack automation to the creation of highly believable fake videos used to manipulate public opinion. In a worrying prediction, the report says weaponised drones could be hijacked in the future and human control in warfare situations could easily be lost.

The report lays out a number of scenarios that could unfold if the path of regulation and more considered development isn't followed, from AI cleaning robots being hijacked to detonate explosive devices, to AI systems being used to build state surveillance regimes.

In terms of attacks on the physical world, the increased anonymity and psychological distance afforded by AI-enabled systems could leave bad actors feeling less empathy towards their targets, opening the door to more severe attacks.

Paper co-author Dr Seán Ó hÉigeartaigh, executive director of the University of Cambridge’s Centre for the Study of Existential Risk, said: “We live in a world that could become fraught with day-to-day hazards from the misuse of AI, and we need to take ownership of the problems because the risks are real.

“There are choices that we need to make now, and our report is a call to action for governments, institutions and individuals across the globe.”

The subject of AI is not one that can be discussed in black-and-white terms, as the paper co-authors clearly note. What they do say, though, is that there needs to be far more collaboration, discussion and proactive planning to prevent this pivotal technology from being used for dangerous and malicious ends.

Ethical quandaries

Commenting on the report, founding director of the SFI-funded Insight Centre for Data Analytics at Dublin City University, Prof Alan Smeaton, noted: “The report is really valuable because it has the backing of so many important organisations and people, and it’s really timely because people are starting to get spooked about what AI or data analytics might actually be able to do.”

He added that, in his view, trying to regulate uses of AI is impossible. “We can’t anticipate what the downstream uses of AI or any new technology might be, so all we have is the science fiction of programmes like Netflix’s popular Black Mirror, which has been running for four seasons already.

“What the report doesn’t do is make a distinction between what can and cannot be regulated about future (mis)use of our own data, which is used to power the data analytics which we call artificial intelligence. Regulating the use of our data is being addressed through the EU-wide GDPR legislation coming into force in May of this year. Ethical use and misuse is not regulated, and that’s what spooks people.”

Updated, 3.50pm, 21 February 2018: This article was updated to include comments from Prof Alan Smeaton of the Insight Centre for Data Analytics.

Ellen Tannam was a journalist with Silicon Republic, covering all manner of business and tech subjects.

editorial@siliconrepublic.com