Researchers have detected racial bias in an algorithm commonly used by health systems to make decisions about patient care, according to a new study published in Science.
The algorithm, the study’s authors explained, is deployed throughout the United States to evaluate patient needs.
“Large health systems and payers rely on this algorithm to target patients for ‘high-risk care management’ programs,” wrote Ziad Obermeyer, MD, of the School of Public Health at the University of California, Berkeley, and colleagues. “These programs seek to improve the care of patients with complex health needs by providing additional resources, including greater attention from trained providers, to help ensure that care is well coordinated. Most health systems use these programs as the cornerstone of population health management efforts, and they are widely considered effective at improving outcomes and satisfaction while reducing costs.”
While studying the algorithm—which, the team noted, does not explicitly track race—Obermeyer et al. found that its predictions reflect healthcare costs, such as insurance claims, rather than health needs. Because Black patients generate “lesser medical expenses, conditional on health, even when we account for specific comorbidities,” even accurate cost predictions will automatically contain a certain amount of racial bias.
Correcting this unintended bias, the authors noted, would increase the percentage of Black patients flagged for additional help by the algorithm from 17.7% to 46.5%. The team then worked toward a solution: by retraining the algorithm to predict a combination of health and cost rather than future costs alone, the researchers achieved an 84% reduction in bias. They are continuing this work, “establishing an ongoing (unpaid) collaboration” to make the algorithm even more effective.
“These results suggest that label biases are fixable,” the authors wrote. “Changing the procedures by which we fit algorithms (for instance, by using a new statistical technique for decorrelating predictors with race or other similar solutions) is not required. Rather, we must change the data we feed the algorithm—specifically, the labels we give it.”
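The authors' point about labels can be illustrated with a toy simulation. The sketch below is not the study's model or data; the group names, cost gap, and enrollment cutoff are all hypothetical. It simply shows how ranking patients by a cost label can under-select a group that generates lower costs at the same level of health need, while ranking by a need-based label does not.

```python
import random

random.seed(0)

def make_patients(n=10000):
    """Synthetic patients with a latent health need and an observed cost.

    Hypothetical assumption: group B generates lower cost than group A
    at the same level of need (cost factor 0.7 vs 1.0).
    """
    patients = []
    for i in range(n):
        group = "A" if i % 2 == 0 else "B"
        need = random.gauss(5, 2)                    # latent health need
        cost_factor = 1.0 if group == "A" else 0.7   # B costs less at equal need
        cost = max(0.0, need * cost_factor + random.gauss(0, 0.5))
        patients.append({"group": group, "need": need, "cost": cost})
    return patients

def enroll_top(patients, label, frac=0.03):
    """Stand-in for the algorithm: rank by a label, enroll the top fraction."""
    ranked = sorted(patients, key=lambda p: p[label], reverse=True)
    return ranked[: int(len(ranked) * frac)]

patients = make_patients()
by_cost = enroll_top(patients, "cost")   # cost as the training label
by_need = enroll_top(patients, "need")   # health need as the training label

share_b_cost = sum(p["group"] == "B" for p in by_cost) / len(by_cost)
share_b_need = sum(p["group"] == "B" for p in by_need) / len(by_need)
print(f"Group B share when ranking by cost: {share_b_cost:.1%}")
print(f"Group B share when ranking by need: {share_b_need:.1%}")
```

Because both groups have the same need distribution, ranking by need enrolls them at roughly equal rates, while ranking by cost sharply under-enrolls group B—the same mechanism the authors describe, fixed by changing the label rather than the fitting procedure.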