Just under a year after first announcing its strategy for managing AI, the European Commission on April 8 presented a seven-step approach to ensuring that all future AI solutions are ethical and trustworthy.
The new guidelines operate under the Commission’s previously established three-pronged strategy to take a “European approach” to AI by staying ahead of the curve with technological developments, preparing for socioeconomic changes and ensuring an ethical and legal framework for the use of AI. A high-level expert group appointed to lead the ethics effort first published a draft of the guidelines in December.
“The ethical dimension of AI is not a luxury feature or an add-on,” Andrus Ansip, the Commission’s vice president for the digital single market, said in a statement. “It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe—being a leader of human-centric AI that people can trust.”
The Commission is looking at AI in the context of healthcare, car safety, crime, climate change and energy consumption, according to the statement. The high-level expert group on AI outlined the following seven considerations for building trustworthy tech:
- Human agency and oversight: AI systems should support human agency and fundamental rights while avoiding decreasing, limiting or misguiding human autonomy
- Robustness and safety: Algorithms need to be secure, reliable and robust enough to deal with errors and inconsistencies
- Privacy and data governance: Citizens should have full control over their own data, and it shouldn’t be used to harm or discriminate against them
- Transparency: All AI systems should be traceable
- Diversity, non-discrimination and fairness: AI should be accessible and consider the whole range of human abilities and needs
- Societal and environmental wellbeing: Systems should be used to enhance positive change and promote sustainability and ecological responsibility
- Accountability: Mechanisms need to be put in place to hold AI and its outcomes accountable
Members of the expert group are scheduled to present their work during Digital Day 2019 in Brussels on April 9. The Commission is also expected to launch a pilot phase this summer to ensure the ethical guidelines are usable in practice.
“Today, we are taking an important step toward ethical and secure AI in the EU,” Commissioner for Digital Economy and Society Mariya Gabriel said. “We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”
The EU isn’t the only entity trying to keep up with the rapid rise of AI—universities, global companies like Facebook and Google, and a slew of medical imaging societies have all established their own ethical frameworks for AI. The Commission said it plans to involve other countries and companies in its pilot.
“The Commission wants to bring this approach to AI ethics to the global stage because technologies, data and algorithms know no borders,” the statement read. “To this end, the Commission will strengthen cooperation with like-minded partners such as Japan, Canada or Singapore and continue to play an active role in international discussions and initiatives including the G7 and G20.”