Talk to Me: Speech Recognition Streamlines Clinical Communication

Speech recognition technology is well on its way to becoming one of the most widely adopted technologies in healthcare settings because it can save documentation time and boost both the availability and accuracy of patient records.

Although speech recognition software, along with dictation and transcription applications, is no stranger to healthcare facilities, interest in these tools is growing as meaningful use definitions unfold. Governmental pressure, incentives and an overall sense that “it’s time” to get on board are pushing organizations to invest in EHRs, says Reid Conant, MD, CMIO of Tri-City Medical Center in Oceanside, Calif. For example, speech recognition software could help ensure compliance with the meaningful use requirement that a patient’s health record be made immediately available by the provider, he notes.
 

Quicker turnaround times

Front-end speech recognition architectures enable integrated output that can shorten patient report turnaround times, reducing the time physicians spend on documentation and, in turn, costs. St. Luke’s Hospital in Cedar Rapids, Iowa, which integrated MedQuist’s SpeechQ speech recognition software in 2005, offers one example. By February 2008, an average of 85 percent of signed emergency department (ED) final reports were delivered within 30 minutes of the initial exam, says Joe Moore, MD, CIO for the physician-owned Radiology Consultants of Iowa (RCI), which serves 14 hospitals. In addition, 96 percent of dictated exams were signed in less than 60 minutes, Moore adds.

Front-end speech recognition uses static or handheld recording devices, and physicians see their words appear on screen as they speak. Instead of relying on transcriptionists to proof or edit the note, the physician can spot errors immediately and correct them using speech-activated commands.
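As a rough illustration only, not any vendor’s actual interface, that front-end workflow can be sketched as a loop that transcribes each utterance, displays it immediately and treats certain phrases as correction or signing commands. The names transcribe_utterance and display below are hypothetical placeholders:

# A minimal, hypothetical sketch of a front-end dictation loop.
# transcribe_utterance() and display() are placeholders, not a vendor API.
def front_end_session(utterances, transcribe_utterance, display):
    note_lines = []
    for utterance in utterances:
        text = transcribe_utterance(utterance)        # speech -> text in real time
        command = text.strip().lower()
        if command == "scratch that" and note_lines:  # voice command: undo the last line
            note_lines.pop()
        elif command == "sign note":                  # voice command: finish the note
            break
        else:
            note_lines.append(text)                   # ordinary dictated text
        display("\n".join(note_lines))                # physician sees the note as spoken
    return "\n".join(note_lines)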

“One of the greatest benefits of speech recognition is the quality of care we’re able to give to outreach services provided by rural hospitals. We have a digital network in place to shift images around, along with electronic reporting capability using speech recognition,” says Moore. Rural facilities like Marengo Memorial Hospital, a 25-bed critical access facility in Marengo, Iowa, that performs 4,000 exams a year, are accustomed to turnaround times of a couple of days. They now receive final signed reports within minutes, the same level of quality service provided at RCI’s St. Luke’s location, says Moore.
 

When the speech comes in

At Tri-City, physicians use Nuance’s Dragon Medical front-end speech recognition software incorporated into the template-based physician documentation system, Cerner’s PowerNote ED, to access any of the facility’s 72,000 patient records, Conant says. Physicians record their notes in real time into Tri-City’s Cerner Millennium EMR.

EMR programs with documentation templates can generate notes that sound similar, says Greg Hindahl, MD, CMIO of Deaconess Health System in Evansville, Ind. Deaconess implemented the EpicCare Ambulatory EMR in May 2009, and the 500-bed acute-care facility went live in November 2009 with Epic’s inpatient EMR to connect 700 members of Deaconess’ medical staff. Dragon Medical software is integrated into the EMR system to streamline documentation. Using the software, “you can use templates but also use voice recognition to dictate some or all of the patient’s history narrative. You can have a template for repetitious documents like physical exams, but also use voice to get the patient’s history how you want it,” says Hindahl.

“It’s important to maintain the codified elements of a template-based documentation system to get the reporting information such as core measures, illness tracking and compliance-related issues,” adds Conant, “but there can be a real challenge of getting the narrative into the patient’s chart and we filled that gap with speech recognition software.”

Within Deaconess’ ambulatory offices, static desktop microphones have worked very well, producing accurate reports. Hindahl warns, however, that although the accuracy of speech recognition software is improving, background noise can be inadvertently turned into text if physicians don’t invest in decent microphones. In Deaconess’ ED, Hindahl says a Dictaphone PowerMic II handheld microphone effectively blocks out the department’s background noise and accurately captures physician speech.

In June 2008, Prevea Health System, a network of 20 clinics based in Green Bay, Wis., added back-end speech recognition, an alternative to front-end speech recognition, to its Epic EHR system, aiming to drive down costs and increase productivity, says Monica Zeller, the system’s director of health information management.

Since then, Prevea has increased productivity, measured in turnaround time and the speed of editing and transcribing notes, by 134 percent, even as its workload has grown. Zeller says Prevea saved more than $275,000 in outsourced transcription costs in eScription’s first year and expects to reach $1 million in savings over the first five years of going live.

Unlike front-end software, back-end speech recognition relies on transcriptionists to review physicians’ notes. Using a back-end system, a physician dictates into a static microphone or headset. At Prevea, the dictation enters the Epic EHR system, is converted to text by Nuance’s eScription, and goes to a back-end transcriptionist who edits and cross-checks the note before it is returned to the EHR for the provider to sign.
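A minimal sketch of that back-end flow, under the assumption of a simple dictate-draft-edit-sign pipeline, might look like the following. The function names (recognize, edit, file_to_ehr) are illustrative placeholders, not Epic or eScription APIs:

from dataclasses import dataclass

@dataclass
class Note:
    mrn: str          # medical record number
    draft: str        # machine-generated transcript
    final: str = ""   # transcriptionist-edited text
    signed: bool = False

def back_end_pipeline(mrn, audio, recognize, edit, file_to_ehr):
    note = Note(mrn=mrn, draft=recognize(audio))  # speech engine drafts the note
    note.final = edit(note.draft)                 # transcriptionist edits and cross-checks
    file_to_ehr(note)                             # routed back to the EHR for signature
    return note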

“With Epic’s patient identifiers, our technical setup uses back-end numbers to put the patient’s information into the physician’s note,” says Zeller. The patient’s information starts with the medical record number, followed by other identifiers. For example, if a single patient undergoes multiple services in one day, that information is routed through the system using back-end numbers and patient identifiers before being entered into the patient’s medical record.
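As a hypothetical illustration of that identifier-driven routing, the sketch below keys each chart by medical record number and distinguishes multiple same-day services with a per-service identifier; the field names mrn and service_id are assumptions for this example, not Epic’s actual scheme:

from collections import defaultdict

def route_dictations(dictations):
    """Group dictated notes under one chart per MRN, keyed by a per-service ID."""
    chart = defaultdict(dict)
    for d in dictations:
        chart[d["mrn"]][d["service_id"]] = d["text"]
    return chart

# One patient, two same-day services, filed under a single medical record number.
notes = [
    {"mrn": "000123", "service_id": "ED-1", "text": "ED evaluation note..."},
    {"mrn": "000123", "service_id": "XR-2", "text": "Chest x-ray report..."},
]
chart = route_dictations(notes)
print(sorted(chart["000123"]))  # ['ED-1', 'XR-2']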

“One of our goals with eScription was to not change providers’ workflow, since Prevea works with so many workflow models,” says Zeller. At Prevea, partial dictation can be run through eScription and the back end can take care of the rest, she says.
 

Hybrid system drives medical center

For the four radiologists at Rutland Regional Medical Center (RRMC) in Rutland, Vt., GE Healthcare’s Centricity Precision Reporting offers a hybrid of front-end and back-end platforms designed to streamline dictation without distracting the radiologist, says J.C. Biebuyck, MD, director of MRI services at RRMC. Serving 70,000 patients, the 188-bed facility has been using Precision Reporting since August 2009.

“For me, the most important thing is not to be distracted from the images. The fewer distractions, the better,” says Biebuyck. Precision Reporting allows the physician to dictate into the system; once the patient’s record is completely dictated, the software automatically generates the report. Upon completion, Centricity Precision Reporting gives the physician an option to sign the report or send it to a back-end transcriptionist if corrections are necessary, says Biebuyck.

Biebuyck signs about 150 cases a day, and 60 to 70 percent of reports go to a transcriptionist over the course of the day, he says. Report turnaround time at RRMC has improved from 24 hours to 7 hours since the facility went live with the speech recognition platform.
 

Minimum training, maximum returns

Speech recognition software generally requires little training: it can take as little as 30 minutes to train the software and a couple of hours to train users, according to Conant, which increases its acceptance among clinicians. Zeller says 98 percent of Prevea’s providers are using the healthcare system’s speech recognition tools. She credits the high adoption rate to flexible workflow processes that adapt to physicians’ work habits.

“Some older physicians at Deaconess think that an EMR system is going to be frustrating because they don’t type well, but the speech component has actually allowed these older physicians to continue to practice medicine by being able to do the work in the computer without doing the typing,” says Hindahl. He believes, however, that speech recognition delivers its full benefit only when macro templates are used in conjunction with dictation to maximize efficiency.

“When you’re pushing a system on providers that [could potentially make] workflow more complicated, it’s important to add applications to simplify those workflows,” Conant says. For example, the PowerMic II can bundle commands, packaging a series of short workflow functions into a single action, according to Conant. “A ‘Sign Note’ button can be clicked and Dragon will perform the programmed, bundled key commands in front of the physician faster than if they were performing the commands themselves,” he says.
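A rough sketch of that kind of command bundling, with invented step names rather than Dragon’s actual command set, could look like this:

# Hypothetical command bundling: one trigger runs several short workflow steps.
BUNDLES = {
    "sign note": ["save_draft", "run_spell_check", "apply_signature", "close_chart"],
}

def run_bundle(trigger, actions):
    """Execute every step mapped to a single button press or voice command."""
    for step in BUNDLES.get(trigger, []):
        actions[step]()  # each step is a short workflow function

# Example wiring with trivial placeholder actions:
actions = {name: (lambda n=name: print("running", n))
           for steps in BUNDLES.values() for name in steps}
run_bundle("sign note", actions)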

For best results, CMIOs considering speech recognition technology should analyze both their current workflow and their projected future workflow, says Conant, who provides health IT consulting through Conant & Associates. It’s a matter of efficiency: understanding workflow “makes training more effective, adoption better and usability greatly improves.”

Beyond understanding the clinical care process, it’s important to get buy-in by demonstrating what speech recognition offers, while recognizing that “the software is as accurate as you make it,” Zeller says.

“If the staff is on board doing the same things, it greatly benefits the quality of care. Our approach was for everyone to be involved in implementation from the very beginning. That meant answering questions before going live and letting everyone decide end-goals,” Zeller says.
