6 hurdles primary care must clear to begin leveraging AI

Despite the inroads AI has made into many medical specialties, the technology has failed to find a foothold in primary care. Why is that?

A physician with expertise in both that field and population-health science teams up with a distinguished professor of computer science to answer the question in an opinion piece running in the May edition of Annals of Family Medicine.

Primary-care artificial intelligence “should aim to improve care delivery and health outcomes; using this benchmark, it has yet to make an impact,” write Winston Liaw, MD, MPH, and Ioannis Kakadiaris, PhD, both with the University of Houston.

Citing the lack of engagement from the primary-care community as a prime reason for the disappointing showing to date, the authors suggest the widespread reticence has real-world consequences.

“Without input from primary care,” they point out, healthcare AI researchers “may fail to grasp the context of primary care data collection, its role within the health system and the forces shaping its evolution.”

Liaw and Kakadiaris then lay out six AI challenges primary care must meet if the profession is to catch up with U.S. healthcare as a whole.

1. Inefficient data entry. “Without timely data, artificial intelligence systems do not have the information they need to make decisions,” the authors write.

2. Poorly processed data. Because researchers mistrust the accuracy of the data that does get entered in primary care, the understandable tendency is to “omit or modify data according to arbitrary or inappropriate rules, which can lead artificial intelligence systems to learn the wrong lessons.”

3. Unexplained (“black box”) AI results. “For users to trust artificial intelligence systems, they need to understand why decisions are made.”

4. Magnification of existing biases. “The systematic under- or over-prediction of probabilities for populations emerges for multiple reasons, including biased training data and outcomes influenced by earlier, biased decisions.”  

5. Siloed data. “This leads to tools that perform worse when used at different institutions. Furthermore, the population on which the tool was trained may shift, causing its performance to suffer over time.”

6. Privacy concerns. “With the digitization of data, patients are increasingly unable to determine when, how and to what extent information about them is communicated to others. Breaches and misuse erode trust in artificial intelligence systems and may make individuals reluctant to access care.”

“[W]e do not simply need the application of artificial intelligence to primary care, but rather, the development of new methods that are tailored to the breadth, complexity and longitudinality of primary care,” Liaw and Kakadiaris conclude. “[G]eneralists are set apart by our overriding interest in people, an interest that is vital to the creation of a bond between physician and patient.”

To this the authors add that the proliferation of EHRs, and with it the rise of AI, threatens this bond by adding "more and more layers" of technology.

To keep this threat from maturing into a hazard, primary care AI “needs to narrow this divide by facilitating new opportunities for connection” between primary-care researchers and academic AI experts, the authors write. “Finding creative solutions to this challenge is necessary if we hope to restore the relationships that sustain us and our patients.”

The full piece is available in Annals of Family Medicine.