Perspective: Machine learning's ethical challenges must be addressed

There’s a strong belief that AI and machine learning have the potential to improve the medical industry. But before healthcare providers go all-in on machine learning, the technology needs to align with safety and ethical standards, according to a recent perspective published in PLOS Medicine.

Namely, AI must meet data protection requirements, minimize the effects of bias and satisfy transparency standards.

In the perspective, authors Effy Vayena and Alessandro Blasimme, of the Department of Health Sciences and Technology at ETH Zurich in Switzerland, and I. Glenn Cohen, of Harvard Law School, outlined several challenges the healthcare industry must address before machine learning can have a positive impact in medicine.

Data protection requirements

The data used to train machine-learning algorithms are subject to privacy protections that require developers to acknowledge ethical and regulatory restrictions, the authors stated. They also stressed the importance of knowing the origin of data and obtaining consent to use and reuse it.

“Data that are used to train algorithms must have the necessary use authorizations, but determining which data uses are permitted for a given purpose is not an easy feat,” Vayena, Blasimme and Cohen wrote. “This will also depend on data type, jurisdiction, purpose of use and oversight models.”

Minimizing biases

To prevent bias in machine-learning-trained algorithms, researchers should avoid training on unrepresentative data sets, the authors stated. Typically, biases arise when data sources don’t reflect the true epidemiology within a given demographic or when an algorithm is trained on a data set that does not contain enough members of a given demographic.

To avoid the issue, the authors encouraged scientists to develop best practices for recognizing and minimizing the effects of biased training data sets.

“If algorithms trained on data sets with these characteristics are adopted in healthcare, they have the potential to exacerbate health disparities,” the authors wrote.
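The underrepresentation problem the authors describe can be made concrete with a simple audit. The sketch below (a hypothetical illustration, not a method from the perspective; the function name, `tolerance` threshold and group labels are invented for the example) compares each demographic group’s share of a training set against its share of the target population and flags groups that fall well short:

```python
from collections import Counter

def underrepresented_groups(train_labels, population_share, tolerance=0.5):
    """Flag demographic groups whose share of the training set is less
    than `tolerance` times their share of the target population.

    train_labels: one demographic label per training record.
    population_share: dict mapping label -> expected population fraction.
    Returns {group: (observed_fraction, expected_fraction)} for flagged groups.
    """
    counts = Counter(train_labels)
    total = len(train_labels)
    flagged = {}
    for group, expected in population_share.items():
        observed = counts.get(group, 0) / total
        if observed < tolerance * expected:
            flagged[group] = (observed, expected)
    return flagged

# Hypothetical training set: group B makes up 30% of the population
# but only 5% of the training records, so it gets flagged.
labels = ["A"] * 95 + ["B"] * 5
flagged = underrepresented_groups(labels, {"A": 0.7, "B": 0.3})
```

A check like this only catches headcount imbalance; an algorithm can still encode bias even when every group is numerically well represented, which is why the authors call for broader best practices.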

Transparency standards

The authors said the lack of transparency in machine learning raises the “most difficult ethical and legal questions.” They encouraged scientists to be open and honest about the use of machine-learning methods in medicine to enhance shared decision-making between patients and their physicians.

“Moreover, the disclosure of basic yet meaningful details about medical treatment to patients—a fundamental tenet of medical ethics—requires that the doctors themselves grasp at least the fundamental inner workings of the devices they use. Therefore, for MLm (machine learning in medicine) to be ethical, developers must communicate to their end users—doctors—the general logic behind MLm-based decisions,” the authors wrote.

“Given the importance of personal contact and interaction between patients, healthcare professionals and caregivers, it is key for healthcare providers to embed MLm in supportive and empowering delivery systems. Third-party auditing may also prove necessary, and such ethical integration may become a precondition for accreditation.”
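One way developers might surface the “general logic” behind a model’s output, as the authors urge, is to report how much each clinician-readable input contributes to a decision. The sketch below is a hypothetical illustration for a simple linear risk score (the weights, feature names and function are invented for the example, not drawn from the perspective); complex models would need dedicated interpretability tooling:

```python
def explain_decision(weights, features):
    """For a linear risk score, rank each input feature by its
    contribution (weight * value), so the overall logic of a
    prediction can be summarized for the clinician using it.

    Returns the total score and the features sorted by the
    absolute size of their contribution, largest first.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical model and patient: the ranking tells the doctor which
# inputs are driving this particular risk score.
weights = {"age_decades": 0.4, "systolic_bp": 0.02, "smoker": 1.1}
patient = {"age_decades": 6.5, "systolic_bp": 150, "smoker": 1}
score, ranked = explain_decision(weights, patient)
```

Even a coarse summary like this gives the end user something to disclose and discuss with a patient, which is the kind of shared decision-making the authors have in mind.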