AI could pose ethical challenges for those in healthcare

In a New England Journal of Medicine (NEJM) article published March 15, researchers at Stanford University raised concerns about the ethical implications of allowing artificial intelligence (AI) to make healthcare decisions for patients.

As machine learning and AI capabilities advance in healthcare, little is known about the ethical implications of letting these systems participate in clinical decision-making. Some concerns are straightforward, such as the possibility that an algorithm could mimic human biases based on race, while others pose less obvious risks.

“The use of machine learning in complicated care practices will require ongoing consideration, since the correct diagnosis in a particular case and what constitutes best practice can be controversial,” stated first author Danton Char, MD, and colleagues. “Prematurely incorporating a particular diagnosis or practice approach into an algorithm may imply a legitimacy that is unsubstantiated by data.”

Most notably, such algorithms have already demonstrated racial bias. In the article, Char and colleagues noted that an AI program used to predict offenders’ “risk of recidivism” had shown a tendency toward racial discrimination.

The researchers worried that this kind of discrimination could filter into healthcare algorithms when inclusive data are unavailable. For example, algorithms designed to predict patient outcomes from genetic data could become biased if certain populations are underrepresented in the underlying information. Char and colleagues pointed to risk models built on the Framingham Heart Study, whose predictions of cardiovascular events in non-white populations have produced biased results, both over- and underestimating risk.
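To illustrate the kind of miscalibration the authors describe, the hypothetical sketch below (not drawn from the NEJM article) trains a single risk model on simulated data dominated by one population and shows how the model can underestimate risk for an underrepresented group; all populations, parameters and numbers here are invented for illustration only.

```python
# Illustrative sketch (hypothetical data, not from the NEJM article):
# a risk model trained mostly on one population can systematically
# mis-estimate risk for an underrepresented one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(n, slope, intercept):
    """Simulate a biomarker x and binary outcomes whose true risk
    follows a group-specific logistic curve."""
    x = rng.normal(0.0, 1.0, size=n)
    p = 1.0 / (1.0 + np.exp(-(slope * x + intercept)))
    y = rng.binomial(1, p)
    return x.reshape(-1, 1), y

# Group A dominates the training data; group B is scarce and has a
# higher baseline risk the pooled model never properly learns.
xa, ya = simulate(5000, slope=1.0, intercept=-1.0)  # well represented
xb, yb = simulate(100, slope=1.0, intercept=0.5)    # underrepresented

X = np.vstack([xa, xb])
y = np.concatenate([ya, yb])
model = LogisticRegression().fit(X, y)

# Compare mean predicted risk with the observed event rate per group.
for name, (x, yy) in {"A": (xa, ya), "B": (xb, yb)}.items():
    predicted = model.predict_proba(x)[:, 1].mean()
    print(f"group {name}: predicted {predicted:.2f}, observed {yy.mean():.2f}")
# The pooled model tracks group A closely but underestimates group B's
# risk, mirroring the over-/underestimation the authors describe.
```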

In the NEJM article, the researchers pushed for updated ethical guidelines for machine learning and AI, including educating physicians on how these systems are built, improving the selection of the data sets used to train them and setting appropriate limits on their use. Allowing AI systems to proceed as currently constructed, they wrote, would only accelerate ethically concerning decisions.

“We believe that challenges such as the potential for bias and questions about the fiduciary relationship between patients and machine-learning systems will have to be addressed as soon as possible,” concluded Char and colleagues. “Machine-learning systems could be built to reflect the ethical standards that have guided other actors in healthcare—and could be held to those standards. A key step will be determining how to ensure that they are—whether by means of policy enactment, programming approaches, task-force work or a combination of these strategies.”

Cara Livernois, News Writer

Cara joined TriMed Media in 2016 and is currently a Senior Writer for Clinical Innovation & Technology. A native of Detroit, Michigan, she holds a bachelor’s degree in Health Communications from Grand Valley State University.
