3 key differences between the diagnostic reasoning of humans and AI

The use of AI in healthcare is rising rapidly, but healthcare providers remain an essential part of patient care, according to a new analysis published in CMAJ.

AI can’t replace human reasoning, the authors added, but it can play a valuable role in supporting physicians’ day-to-day work.

“Several studies have shown the extent to which AI can be used to make and support diagnosis in medicine,” wrote Thierry Pelaccia, of the University of Strasbourg in France, and colleagues. “Since current evidence supports the effectiveness of AI for only a small selection of diagnostic tasks and human experts remain able to learn and diagnose a wide array of conditions, human intelligence would seem to remain essential to diagnosis for now.”

Pelaccia and colleagues detailed some of the most important differences between human intelligence and AI. These are three of the biggest ones covered in their analysis:

1. The way they make diagnoses

“Physicians mainly use a hypothetico-deductive approach to make diagnoses. After generating diagnostic hypotheses early, they spend most of their diagnostic time testing them by collecting more data,” the authors wrote. “This approach is underpinned by cognitive processes that, according to the dual-process theory, can be either intuitive or analytical.”

AI, on the other hand, makes a diagnosis based on properly collected and labeled data: the model stores knowledge and is developed iteratively until it “proposes accurate outputs” on a training set.
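
To make that contrast concrete, here is a minimal sketch of the kind of supervised training the authors describe, written in Python with scikit-learn. The toy dataset and classifier are illustrative assumptions, not anything specified in the CMAJ analysis: a model is fit to labeled examples and then checked for accurate outputs on its own training set.

```python
# A minimal sketch of supervised training: a model is fit to labeled
# examples until it "proposes accurate outputs" on the training set.
# The dataset and classifier are illustrative, not from the CMAJ paper.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Labeled data: feature vectors X and diagnostic labels y
# (malignant vs. benign in this toy example).
X, y = load_breast_cancer(return_X_y=True)

# "Store knowledge" by fitting the model to the labeled examples.
model = LogisticRegression(max_iter=5000)
model.fit(X, y)

# Check how accurate the model's proposed outputs are on the training set.
print("training accuracy:", accuracy_score(y, model.predict(X)))
```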

“Although humans understand cause-and-effect relations, these are not yet modelled in AI,” the team added. “This subject has been studied for a long time in AI, but it is only recently that first attempts to define an AI that ‘thinks like a human’ have been proposed.”

2. What can lead to a misdiagnosis

There are more than 12 million misdiagnoses made annually in the United States, according to data shared by the authors, and diagnostic error rates range from 5% to 15%. Cognitive biases are a common cause, and one that researchers have studied extensively over the years.

Errors made by AI models, however, typically stem from how they were trained: the data may be of poor quality, for example, or the experiment itself may have been poorly designed.
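
As a rough illustration of how training-data quality drives model error, the sketch below (again an illustrative scikit-learn example, not from the analysis itself) trains the same classifier twice, once on clean labels and once with a fraction of labels deliberately flipped, and compares the results on held-out data.

```python
# Illustration of "data not up to par": the same classifier is trained
# on clean labels and on partially corrupted labels, to show how
# training-data quality drives model error. The dataset and the 30%
# noise level are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Corrupt 30% of the training labels to simulate poor-quality data.
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

for labels, name in [(y_train, "clean labels"), (noisy, "noisy labels")]:
    model = LogisticRegression(max_iter=5000).fit(X_train, labels)
    print(name, "test accuracy:", model.score(X_test, y_test))
```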

3. Physicians can learn a lot with limited data

Physicians can go far with “very few data,” working toward the proper diagnosis and the best possible patient care. AI models, however, depend on massive datasets that take time, energy and money to assemble.

“Most AI systems do not model intuition and therefore require substantial data to make a relevant diagnosis,” the authors wrote. “This is why AI is presently most effective in situations where all the data of the problem to be solved are immediately accessible, such as in medical imaging. Artificial intelligence also requires data transformation, but in AI this is a much more complex and time-consuming process.”
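
The “data transformation” the authors mention can be pictured, in miniature, as a preprocessing step chained in front of a model. The pipeline below is a deliberately simple sketch under that assumption; real clinical systems involve far more elaborate cleaning, normalization and labeling.

```python
# A small sketch of a "data transformation" step: raw inputs are
# rescaled before the classifier sees them. In real clinical AI this
# stage is far more involved; the pipeline below is only illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Chain the transformation and the model so both are applied consistently.
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipeline.fit(X, y)
print("training accuracy:", pipeline.score(X, y))
```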

Overall, Pelaccia et al. concluded, there is still a significant amount of work to be done in the development of AI. The quality and accessibility of medical data must be improved, they wrote, and physicians will need to fully embrace these evolving technologies instead of being resistant to change. Over time, however, it is possible for AI to “assume its place as a routine tool for medical practice.”

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
