EU reveals its 7 essentials for achieving trustworthy AI

9 Apr 2019


A group of Europe’s top AI experts has published its AI ethics guidelines with the aim of dissuading stakeholders from going down a dangerous path.

After months of deliberation and fine-tuning, the European Commission’s High Level Expert Group on Artificial Intelligence has published the ethics guidelines for trustworthy AI.

The group, which includes University College Cork’s Prof Barry O’Sullivan, will now make itself available to industry, research institutes and public authorities during a pilot phase, cooperating on what amounts to a loose framework for discouraging developers from following a potentially dangerous path.

Among the guidelines published were a set of seven essentials for achieving trustworthy AI:

  1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy
  2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life-cycle phases of AI systems
  3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them
  4. Transparency: The traceability of AI systems should be ensured
  5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility
  6. Societal and environmental wellbeing: AI systems should be used to enhance positive social change and to foster sustainability and ecological responsibility
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes

‘Important step towards ethical and secure AI in the EU’

From today (9 April), companies, public administrations and organisations can sign up to the European AI Alliance ahead of the launch of the guidelines’ pilot phase sometime in the summer of this year.

Speaking of the launch, the EU’s commissioner for digital economy and society, Mariya Gabriel, said: “Today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values … following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society.

“We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”

Starting in autumn 2019, the EU plans to launch a number of ‘excellence centres’ and will start setting up networks of digital innovation hubs in member states to develop and implement a model for best practices in data sharing among them.

Once the pilot phase is completed in early 2020, the AI expert group will take on board feedback from European AI Alliance members and propose any potential changes or further steps to prevent any instances of ‘AI run amok’.

Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com