The smartphone in your hand could one day help recognize signs of depression in patients and lead to earlier intervention, according to Stanford University researchers, who recently published a paper presenting a machine-learning method for measuring the severity of depression symptoms.
When tested, the method achieved 83.3 percent sensitivity and 82.6 percent specificity for detecting major depressive disorder. It also achieved an average error of 3.67 points on the Patient Health Questionnaire (PHQ) scale, a clinically validated tool that helps clinicians diagnose depression and monitor treatment response.
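To make those metrics concrete, here is a minimal sketch, using toy data rather than anything from the study, of how sensitivity, specificity and mean absolute error are typically computed:

```python
def sensitivity(y_true, y_pred):
    """True-positive rate: correctly flagged depressed cases / all depressed cases."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

def specificity(y_true, y_pred):
    """True-negative rate: correctly cleared non-depressed cases / all non-depressed cases."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tn / (tn + fp)

def mean_absolute_error(scores_true, scores_pred):
    """Average absolute gap between predicted and clinician-assigned PHQ scores."""
    return sum(abs(t - p) for t, p in zip(scores_true, scores_pred)) / len(scores_true)

# Toy labels: 1 = major depressive disorder, 0 = not.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
print(sensitivity(y_true, y_pred))   # 2 of 3 depressed cases caught
print(specificity(y_true, y_pred))   # 2 of 3 non-depressed cases cleared
print(mean_absolute_error([10, 4, 15], [12, 5, 11]))
```

An 83.3 percent sensitivity, in these terms, means the method flagged about five of every six truly depressed interviewees.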
“Overall, this paper shows how speech recognition, computer vision and natural language processing can be combined to assist mental health patients and practitioners,” Stanford University graduate students Albert Haque and Michelle Guo, psychiatry and behavioral sciences instructor Adam S. Miner and computer science professor Li Fei-Fei wrote. “This technology could be deployed to cell phones worldwide and facilitate low-cost universal access to mental healthcare.”
Though more than 300 million people worldwide have depression, access barriers such as social stigma, cost and treatment availability mean that about 60 percent of mentally ill adults receive no mental health services. The researchers argue that automatic detection of depressive symptoms could improve diagnostic accuracy and availability, leading to faster intervention.
Clinicians typically identify depression through a series of in-person interviews and other assessments, which can take time and delay access to treatment, the researchers said.
“Thus, AI-based solutions to assessing symptom severity may address entrenched barriers to access and treatment,” Haque et al. wrote. “We envision an AI-based solution where depressed individuals can receive evidence-based mental health services while avoiding existing barriers to access.”
The machine-learning method measured depressive symptoms using audio, 3D facial expressions and text transcriptions from clinical interviews with patients. The information came from a dataset comprising 50 hours of data collected across 189 clinical interviews with 142 patients. The data was de-identified and contained no protected health information, the researchers noted. From these inputs, the model produced either a PHQ score or a classification label indicating a depressive disorder.
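The paper's actual system is a deep multi-modal network; purely to illustrate the shape of such a pipeline, the sketch below (every name, weight and feature value here is hypothetical) fuses per-modality feature vectors into a single PHQ-style estimate and thresholds it for a screening label:

```python
def fuse_modalities(audio_feats, face_feats, text_feats):
    """Concatenate per-modality feature vectors into one joint vector (late fusion)."""
    return audio_feats + face_feats + text_feats

def predict_phq(features, weights, bias):
    """Toy linear regressor mapping the fused features to a PHQ-style score."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def screen(phq_score, threshold=10.0):
    """Binary screening label; PHQ-9 scores of 10+ are commonly read as moderate depression."""
    return 1 if phq_score >= threshold else 0

# Hypothetical per-modality features; in a real system these would come from
# speech, 3D facial-expression and language sub-networks.
audio = [0.2, 0.5]
face = [0.1]
text = [0.7, 0.3]
fused = fuse_modalities(audio, face, text)
weights = [4.0, 3.0, 2.0, 5.0, 1.0]
score = predict_phq(fused, weights, bias=1.0)
print(score, screen(score))
```

The key design idea the paper describes, producing either a continuous severity score or a discrete diagnostic label from the same fused representation, is what the two output functions stand in for here.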
The researchers said that just like the machine-learning model, smartphones collect 3D facial expressions and spoken language from users, and suggested an AI-based solution could “leverage multi-modal sensors or text messages, as is common on modern smartphones, to increase timely and cost-effective symptom screening.”
“Conversational AIs are another potential solution,” Haque et al. said. “Our hope is that automated feedback will (i) provide actionable feedback to individuals who may be depressed, and (ii) improve automated depression screening tools for clinicians, by including visual, audio and linguistic signals.”
Stanford University researchers aren't the only group making an effort to utilize smartphones and AI to help detect mental health problems in users. In September, researchers with Worcester Polytechnic Institute (WPI) in Worcester, Massachusetts, secured $2.8 million in funding to develop a smartphone application that can detect signs of various medical conditions in soldiers.
The funding will support WPI computer scientists’ effort to develop “machine-learning algorithms that will sort through a host of data collected by sensors in smartphones to detect telltale signs of medical conditions that can affect a soldier’s readiness.”