The European Commission has released plans for a pilot project intended to test the first draft of ethical rules for developing and applying AI technologies to ensure they can be implemented in practice.

It’s also planning to use feedback from the project to figure out the best solutions to implement and verify its recommendations whilst also encouraging international development and cooperation for ‘human-centric AI’ initiatives.

In December 2018, the Commission’s High-Level Expert Group on AI published its first draft ethics guidelines for responsible AI.

The group comprises 52 experts from across industry, academia and civil society. A revised version of the document was presented to the Commission in March.

Vice-President for the Digital Single Market, Andrus Ansip, said: “I welcome the work undertaken by our independent experts. The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies.

“Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”

The Commission is taking a “three-step approach”: setting-out the key requirements for trustworthy AI, launching a large scale pilot phase for feedback from stakeholders, and working on international consensus building for human-centric AI.

It declared “seven essentials for achieving trustworthy AI” consisting of:

Human agency and oversight: “AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.”

Robustness and safety: “Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.”

Privacy and data governance: “Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.”

Transparency: “The traceability of AI systems should be ensured.”

Diversity, non-discrimination and fairness: “AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.”

Societal and environmental well-being: “AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.”

Accountability: “Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.”

Following the pilot period, in early 2020, the Commission said the AI expert group will review the assessment lists for the key requirements, building on the feedback received, before proposing any further steps.

Furthermore, to ensure the ethical development of AI, the Commission will launch a set of networks of AI research excellence centres and, together with Member States and stakeholders, begin setting up networks of digital innovation hubs by autumn 2019.

Commissioner for Digital Economy and Society, Mariya Gabriel, added: “Today, we are taking an important step towards ethical and secure AI in the EU.

“We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”