Physicians considering new technology for their practices tend to ask one or more of four questions before moving ahead: Does it actually work? Will I get paid for it? What are the liability issues? And will it work in my practice environment?
This was noted by Vanderbilt University’s Jesse Ehrenfeld, MD, MPH, chair-elect of the American Medical Association’s board of trustees, at a workshop convened last fall in Washington by the National Academies of Sciences, Engineering, and Medicine.
Numerous influential participants shared ideas, all centered on a common theme: balancing safety and autonomy when using AI applications for older adults and people with disabilities.
Scribes took minutes, and on May 17 a comprehensive summary was posted online.
Speaking to Ehrenfeld’s point about physician questions, Amanda Lazar, PhD, of the University of Maryland’s College of Information Studies, pointed out that technologies are often touted as relieving health workers’ workloads—yet many such solutions change the work without reducing the load, according to the summary.
To this, Robyn Stone, PhD, of the LeadingAge association added that she hasn’t seen much happening with technologies for managing the care of patients with chronic conditions.
“It may make more sense to be putting more money in training human capital than investing in some technology that’s obsolete within two years or isn’t going to take them where they want to go,” Stone said.
All the above comments came during a panel discussion on the use of AI to promote health and well-being and to provide care.
Discussions with other experts took up such subtopics as defining AI, considering the consumer perspective, and moving AI toward user-centered design.
The full summary of the proceedings is available online at no cost.