The High-Level Expert Group on Artificial Intelligence has released the first draft of its AI ethics guidelines and is seeking feedback from citizens.
The European Commission (EC) has been working on several pieces of technology policy for some time now. In April of this year, the EC chose to develop guidelines and policy around AI (artificial intelligence), 5G networks and blockchain.
Some months have passed since the initial discussions and today (18 December), the EC High-Level Expert Group on Artificial Intelligence released the first draft of its ethics guidelines for the development and use of AI.
AI ethics crucial to build trust
A group of 52 experts from academia, business and civil society created the draft to help developers and users of AI ensure the technology respects fundamental rights, regulations and core principles.
EC vice-president and commissioner for the Digital Single Market, Andrus Ansip, said: “AI can bring major benefits to our societies, from helping diagnose and cure cancers to reducing energy consumption. But, for people to accept and use AI-based systems, they need to trust them, know that their privacy is respected, that decisions are not biased.”
Commissioner for digital economy and society, Mariya Gabriel, said that the use of AI must “always be aligned with our core values and uphold human rights”, noting that the purpose of the draft guidelines was to ensure these values were enshrined in practice.
Two ingredients for trustworthy AI
The guidelines lay out two elements for the creation of trustworthy AI, which was described in the report as the ‘north star’ for further development of the technology. “Trustworthy AI has two components: one, it should respect fundamental rights, applicable regulation, and core principles and values, ensuring an ‘ethical purpose’; and two, it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.”
The EC also highlighted the need for “responsible competitiveness”, arguing that this will generate user trust and so ease uptake of AI. It added: “These guidelines are not meant to stifle AI innovation in Europe, but instead aim to use ethics as inspiration to develop a unique brand of AI, one that aims at protecting and benefiting both individuals and the common good.”
AI: Good and bad
Running to 37 pages, the detailed draft document addresses the major issue of AI bias, as well as the importance of human values and the need for robust systems.
On the other hand, it highlighted the potential benefits: “AI systems can be a force for collective good when deployed towards objectives like the protection of democratic process and rule of law; the provision of common goods and services at low cost and high quality; data literacy and representativeness [etc].”
Thorny points such as lethal autonomous weapons, mass citizen-scoring and covert AI systems are also highlighted in the document.
Have your say
The draft guidelines are now open for comments until 18 January, with discussions taking place through the European AI Alliance. The expert group will present its final guidelines to the EC in March next year, with the hope of bringing the European ethical approach to a global audience.
As well as the ethics guidelines, the EC said it hopes to boost investment in AI in the bloc, where investment is low and fragmented compared with other parts of the world. It plans to do so through national AI strategies in each member state by mid-2019, a new European public-private partnership and the introduction of a new AI scale-up fund.
The development and linking of world-leading AI research centres is also a key element of the overall AI strategy, alongside secure, robust EU-wide databases and a push to boost digital skills.