Explainable AI’s pros ‘not what they appear’ while its cons are ‘worth highlighting’

Before healthcare AI can achieve widespread translation from lab to clinic, it will need to overcome its proclivity for producing biased outputs that worsen social disparities.

Consensus on this among experts and watchdogs is readily observable.

Increasingly, so is consensus on a cure: explainable healthcare AI.

Not so fast, warns an international and multidisciplinary team of academics.

The latter consensus, the group writes in a paper published July 16 in Science, “both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.”

The paper’s lead author is Boris Babic, JD, PhD (philosophy), MSc (statistics), of the University of Toronto. Senior author is I. Glenn Cohen, JD, director of Harvard Law School’s Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics.

En route to fleshing out their argument, the authors supply a helpful primer on the difference between explainable and interpretable AI (and machine learning).

Interpretable AI/ML, which is tangential to the paper’s main thrust, “uses a transparent [‘white-box’] function that is in an easy-to-digest form,” they write.

By contrast, explainable AI/ML “finds an interpretable function that closely approximates the outputs of the black box. … [T]he opaque function of the black box remains the basis for the AI/ML decisions, because it is typically the most accurate one.”
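To make the distinction concrete, consider a minimal Python sketch of the post-hoc approach the authors describe: a black-box model is trained first, and a shallow decision tree is then fitted to the black box’s own predictions to serve as the “explanation.” The synthetic data, model choices and feature names below are illustrative assumptions, not anything drawn from the paper.

# Minimal sketch of post-hoc "explainable AI": fit a transparent surrogate
# to a black-box model's predictions. Data and models are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for tabular clinical data (hypothetical features).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# 1. The black box: accurate but opaque.
black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# 2. The post-hoc surrogate: a shallow tree trained to mimic the black box's
#    outputs rather than the ground truth -- an "interpretable function that
#    closely approximates the outputs of the black box."
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box (not with reality).
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2f}")

# The human-readable "explanation" describes the surrogate tree,
# not the black box's actual internal logic.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))

The decision-relevant model remains the random forest; the printed tree merely rationalizes its outputs, which is precisely the gap the authors go on to probe.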

From there, Babic and colleagues walk the reader through several reasons explainable AI/ML algorithms are unlikely to keep their implied promise in healthcare: facilitating user understanding, building trust and supporting accountability.

The reasons they describe include:

Ersatz understanding. Explainable AI’s supposed advantage is “fool’s gold,” since its rationales for black-box predictions shed little if any light on the box’s internal logic.

“[W]e are likely left with the false impression that we understand it better,” the authors note. “We call the understanding that comes from [such] post hoc rationalizations ‘ersatz understanding.’”

Lack of robustness. Explainable algorithms should show their chops by outputting similar explanations for similar inputs. But they might be thrown by a tiny change in input—an addition or subtraction of a few pixels in an image, for example—and so yield contradictory or competing recommendations (an instability sketched in code after this list of reasons).

“A doctor using such an AI/ML-based medical device,” the authors comment, “would naturally question that algorithm.”

Tenuous connection to accountability. The relationship between explainability and accountability is compromised by the reliance of AI/ML systems on multiple components, “each of which may be a black box in and of itself,” Babic and co-authors point out. In such a scenario, the clinical end user would need a fact finder or investigator to “identify, and then combine, a sequence of partial post hoc explanations.”

“Thus,” they note, “linking explainability to accountability may prove to be a red herring.”
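To see the robustness worry in code, here is a rough, purely illustrative check of whether near-identical inputs receive similar post-hoc explanations. The “explanation” is a simple local linear surrogate written inline (a LIME-style stand-in, not the actual LIME library), and the data and black-box model are synthetic assumptions; nothing in the setup forces the two sets of weights to agree.

# Illustrative robustness check: do near-identical inputs get similar
# post-hoc "explanations"? Here the explanation is a simple local linear
# surrogate fit around each input (a LIME-like sketch, not the real library).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

def local_explanation(x, n_samples=500, scale=0.3):
    """Fit a local linear model to the black box's probabilities around x."""
    neighborhood = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    probs = black_box.predict_proba(neighborhood)[:, 1]
    weights = np.exp(-np.linalg.norm(neighborhood - x, axis=1))
    linear = Ridge(alpha=1.0).fit(neighborhood, probs, sample_weight=weights)
    return linear.coef_

x = X[0]
x_perturbed = x + rng.normal(0.0, 0.01, size=x.size)  # tiny input change

coef_a = local_explanation(x)
coef_b = local_explanation(x_perturbed)

# Compare the top-ranked "explanatory" features for the two near-identical inputs.
print("Top features (original):  ", np.argsort(-np.abs(coef_a))[:3])
print("Top features (perturbed): ", np.argsort(-np.abs(coef_b))[:3])

If the top-ranked features differ between the two runs, the clinician in the authors’ scenario is handed two conflicting rationales for essentially the same patient.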

In some clinical cases where an assist from AI/ML would clearly help guide decision-making, interpretable AI/ML may offer a firm middle ground. The authors cite as an example the need to predict risk of kidney failure in order to optimally assign patients to dialysis when kidney patients outnumber dialysis machines.

“In such cases, the best standard would be to simply use interpretable AI/ML from the outset, with clear predetermined procedures and reasons for how decisions are taken,” the authors write. “In such contexts, even if interpretable AI/ML is less accurate, we may prefer to trade off some accuracy [as] the price we pay for procedural fairness.”
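As a rough illustration of what “interpretable from the outset” could look like in such a triage setting, the sketch below fits a plain logistic regression on hypothetical kidney-failure risk factors and ranks patients by predicted risk. The features, synthetic data and allocation rule are assumptions for illustration only, not the authors’ proposal.

# Sketch of "interpretable from the outset": a transparent risk model whose
# parameters can be read, audited and turned into predetermined rules.
# Features, data and allocation rule here are hypothetical, not from the paper.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

features = ["eGFR", "albuminuria", "age", "diabetes"]
rng = np.random.default_rng(0)

# Synthetic stand-in for a registry of kidney patients.
X = pd.DataFrame(rng.normal(size=(1000, len(features))), columns=features)
y = (0.8 * X["albuminuria"] - 1.2 * X["eGFR"] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Every coefficient is inspectable, so the basis for prioritization is explicit.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>12}: {coef:+.2f}")

# Predetermined procedure: rank patients by predicted risk of kidney failure
# and allocate scarce dialysis slots to the highest-risk patients first.
risk = model.predict_proba(X)[:, 1]
priority_order = np.argsort(-risk)
print("First 5 patients by priority:", priority_order[:5])

Because every coefficient is visible, the procedure behind each dialysis assignment can be stated and audited in advance, in the spirit of the procedural fairness the authors describe.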

Babic et al. conclude by restating the gist of their argument and clarifying its ramifications for care:

“[T]he current enthusiasm for explainability in healthcare is likely overstated: Its benefits are not what they appear, and its drawbacks are worth highlighting. ... Healthcare professionals should strive to better understand AI/ML systems to the extent possible and educate themselves about how AI/ML is transforming the healthcare landscape, but requiring explainable AI/ML seldom contributes to that end.”

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
