AI predicts common childhood ailments better than some physicians

Researchers in China have developed an AI-based natural language processing algorithm that mines free text from physician notes in electronic health records (EHRs) to predict common ailments in pediatric patients. The algorithm outperformed junior physicians in diagnosing some illnesses.

Their findings were published in Nature Medicine.

“AI-based methods have emerged as powerful tools to transform medical care,” wrote lead author Kang Zhang, MD, PhD, of the University of California, San Diego, and colleagues. “Although machine learning classifiers (MLCs) have already demonstrated strong performance in image-based diagnoses, analysis of diverse and massive EHRs data remains challenging.” 

The researchers' model used deep learning techniques to extract clinically relevant information from EHRs. More than 101 million data points, drawn from EHR entries and handwritten notes across 1.3 million pediatric patient visits by 567,498 patients, were analyzed to train and validate the algorithm.
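The study's deep-learning pipeline is not detailed here, but its core idea, turning free-text clinical notes into features a model can learn from, can be sketched with a minimal bag-of-words example. The note text and vocabulary below are hypothetical, not taken from the study:

```python
from collections import Counter
import re

def featurize(note: str, vocabulary: list[str]) -> list[int]:
    """Turn a free-text clinical note into a bag-of-words count vector.

    A toy stand-in for the study's deep-learning NLP step, which
    extracted clinically relevant features from EHR free text.
    """
    tokens = re.findall(r"[a-z]+", note.lower())
    counts = Counter(tokens)
    return [counts[term] for term in vocabulary]

# Hypothetical vocabulary of clinically relevant terms.
VOCAB = ["fever", "cough", "wheezing", "rash", "headache"]

note = "3-year-old with fever, persistent cough, and wheezing on exam."
print(featurize(note, VOCAB))  # -> [1, 1, 1, 0, 0]
```

A real system would learn its representation from millions of notes rather than a fixed keyword list, but the shape of the task, free text in, numeric features out, is the same.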

The system achieved high accuracy in diagnosing several diseases, including asthma, bacterial meningitis, varicella, the flu, mononucleosis and roseola. All of these conditions can potentially become life-threatening, so accurate diagnosis is of utmost importance, the researchers wrote.

“Our model demonstrated high diagnostic accuracy across multiple organ systems and was comparable to experienced pediatricians in diagnosing common childhood diseases,” the researchers wrote. 

At the first level, where the diagnostic system classified a patient's diagnosis into a broad organ system, accuracy was 90 percent, ranging from 85 percent for gastrointestinal diseases to 98 percent for neuropsychiatric disorders.
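The two-level scheme described above, a broad organ-system call first and a specific diagnosis within that system second, can be illustrated with a toy keyword-based router. The categories and keywords below are illustrative placeholders, not the study's actual learned model:

```python
# Toy two-level diagnostic hierarchy: pick an organ system first,
# then a specific condition within it. The real system used a
# learned hierarchical classifier; these keyword rules are only
# stand-ins to show the structure.
ORGAN_SYSTEMS = {
    "respiratory": ["wheezing", "cough"],
    "gastrointestinal": ["vomiting", "diarrhea"],
}
DIAGNOSES = {
    "respiratory": {"asthma": ["wheezing"], "bronchitis": ["cough"]},
    "gastrointestinal": {"gastroenteritis": ["vomiting", "diarrhea"]},
}

def diagnose(symptoms: set[str]) -> tuple[str, str]:
    # Level 1: pick the organ system with the most keyword matches.
    system = max(ORGAN_SYSTEMS,
                 key=lambda s: len(symptoms & set(ORGAN_SYSTEMS[s])))
    # Level 2: pick the best-matching diagnosis within that system.
    dx = max(DIAGNOSES[system],
             key=lambda d: len(symptoms & set(DIAGNOSES[system][d])))
    return system, dx

print(diagnose({"fever", "wheezing"}))  # -> ('respiratory', 'asthma')
```

Routing through a coarse level before a fine one mirrors how the reported accuracies are broken down: one figure per organ system, then per specific condition.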

The algorithm achieved “a very high level” of accuracy in predicting diagnoses for the generalized systemic conditions, with an accuracy of 90 percent for infectious mononucleosis, 94 percent for the flu, 93 percent for varicella, 97 percent for hand-foot-mouth disease and 93 percent for bacterial meningitis. 

When the researchers compared their AI algorithm to physicians’ performance, they found their algorithm performed better than junior physicians but did not outperform experienced providers. 

“This system achieved excellent performance across all organ systems and subsystems, demonstrating a high level of accuracy for its predicted diagnoses when compared with the initial diagnoses determined by an examining physician,” the researchers noted.

Zhang et al. noted their framework could be implemented in clinical practice in a variety of ways: it could support triage in urgent care settings by helping prioritize patients based on their predicted diagnoses, and it could help providers diagnose patients with complex or rare conditions.

“Our study provides a proof of concept for implementing an AI-based system as a means to aid physicians in addressing large amounts of data and augmenting diagnostic evaluations and provide clinical decision support in cases of diagnostic uncertainty or complexity,” the researchers concluded. “Although this impact may be most evident in areas where healthcare providers are in relative shortage, the benefits of such an AI system are likely to be universal.”

More companies are investing in machine learning for pediatric care, which may signal growing interest in this population. Recently, Pr3vent, a California-based AI healthcare company, closed a Series A financing round. Pr3vent's AI-enabled screening tool detects abnormalities in images of a baby's retina and is the first diagnostic screening tool of its kind for early detection of preventable vision loss in newborns.