In the era of AI and machine learning, it’s no surprise that clinical decision support systems (CDSSs) are becoming more available to clinicians. However, the industry has yet to define the capabilities an effective CDSS requires, which has kept the tools from being widely adopted in clinical settings despite research demonstrating their benefits.
To address those challenges, developers should build several characteristics into CDSSs so the tools can be readily accepted and integrated into routine clinical workflows, argued Edward H. Shortliffe, MD, PhD, adjunct professor of biomedical informatics at Columbia University, and Martin J. Sepulveda, MD, ScD, IBM fellow, in a viewpoint article published in JAMA Nov. 5.
Specifically, Shortliffe and Sepulveda proposed six ways to make CDSSs more transparent, easier to use and grounded in peer-reviewed scientific evidence:
- Transparency should be incorporated into CDSSs so that users can understand why certain advice or recommendations were offered.
- CDSSs should be efficient and blend into a busy workflow in a clinical environment.
- Major training should not be required for clinicians to use CDSSs. The systems should be intuitive and simple to learn, making it easy for clinicians to obtain advice or analytic results.
- The insights provided by a CDSS should be relevant to clinicians.
- Advice given by CDSSs should recognize the expertise of the user. It should also be clear that the system is designed to inform and assist clinicians, not replace them.
- CDSSs and their recommendations should be grounded in rigorous, peer-reviewed scientific evidence, helping establish safety, usability and reliability.
“Clinical decision support systems are not perfect instruments and will have failures. But a CDSS must be designed to be fail-safe and to do no harm,” the authors wrote. “In addition, equal attention must be paid to evidence of a CDSS’ ease of workflow integration and its usability as measured against the attributes previously noted, without which a CDSS will inevitably fail.”
Complexities within AI and decision support in clinical settings may limit the ability to move forward with the technologies "as quickly as some may predict," the authors concluded.