Facebook gives $7.5M to establish AI ethics center

Facebook is providing $7.5 million in initial funding over five years to help establish an independent research center focused on the ethics of AI.

The funding comes at a time when the emergence of AI technology has raised new questions about its use in medical settings. Specifically, the center will explore ethical issues surrounding AI, including transparency in medical treatment cases and human-AI interaction.

The Institute for Ethics in Artificial Intelligence will be established through Facebook’s partnership with the Technical University of Munich (TUM) in Germany, the social media company announced. The center will focus on advancing the field of ethical research on new technology and explore fundamental issues affecting the use and impact of AI. It also plans to explore other funding opportunities from additional partners and agencies.

“At Facebook, ensuring the responsible and thoughtful use of AI is foundational to everything we do—from the data labels we use, to the individual algorithms we build, to the systems they are a part of,” Joaquin Quinonero Candela, Facebook’s director of applied machine learning, wrote in the announcement. “However, AI poses complex problems which industry alone cannot answer, and the independent academic contributions of the Institute will play a crucial role in furthering ethical research on these topics.”

Late last year, the University of Guelph in Ontario, Canada, launched a similar effort with its Center for Advancing Responsible and Ethical Artificial Intelligence (CARE-AI). The university's center aims to address the ethics of AI and build "machines with morals," focusing on applying the technology to several industries, including human and animal health, environmental sciences, agri-food and the bio-economy.

According to the blog post, Facebook's AI center will leverage TUM's academic experts to conduct independent, evidence-based research to provide insights and guidance for society, industry, legislators and decision-makers in the public and private sectors. Safety, privacy, fairness and transparency are among the issues the center hopes to address regarding the use and impact of AI.

In a statement, TUM business ethics professor Christoph Lütge, PhD, said the center's "evidence-based research will address issues that lie at the interface of technology and human values."

“Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms,” he said. “We will also deal with transparency and accountability, for example in medical treatment scenarios, or with rights and autonomy in human decision-making in situations of human-AI interaction.”


Danielle covers Clinical Innovation & Technology as a senior news writer for TriMed Media. Previously, she worked as a news reporter in northeast Missouri and earned a journalism degree from the University of Illinois at Urbana-Champaign. She's also a huge fan of the Chicago Cubs, Bears and Bulls. 
