Seeking to improve the speed, quality and transparency of research involving healthcare AI, a multinational group of scholars has put together two fresh sets of guidelines for researchers testing the technology in clinical trials.
One set, SPIRIT-AI, deals with protocols for conducting such trials; the other, CONSORT-AI, guides study reporting.
If those names sound familiar, it’s because earlier versions of both frameworks have been in use for several years.
Publishing the updated guidance in three peer-reviewed journals (Nature Medicine, the BMJ and The Lancet Digital Health), the authors say their primary objective is to speed AI-based diagnostics and treatments to patients.
Secondarily, by boosting public confidence in the safety, efficacy and appropriateness of interventions with AI components, the team hopes realistic expectations will win out over idealistic claims.
“Patients could benefit hugely from the use of AI in medical settings, but before we introduce these technologies into everyday practice we need to know that they have been robustly evaluated and proven to be effective and safe,” explains corresponding author Alastair Denniston, PhD, MRCP, of the University of Birmingham in the U.K., in a news release sent by that institution. “Our previous work showed just how big a problem this can be and that we needed a way to cut through the hype surrounding AI in healthcare.”
Melanie Calvert, PhD, also a professor at the University of Birmingham, adds that the publication of fresh guidance reduces the risk of “not generating sufficiently robust evidence to decide whether AI interventions should be commissioned in the real world. These new guidelines will help to identify and overcome research challenges associated with AI-led health innovation.”