News You Need to Know Today

Ready AI tips for reluctant nursing teams | Healthcare AI newsmakers

Friday, May 10, 2024

In cooperation with Northwestern

AI in nursing

High-tech, high-touch: 4 ways nursing leaders can strike the right balance for AI-averse teams

Most nurses are at least somewhat skittish about the march of AI into their workflows. Their main concern is the technology’s potential to erode the human touch in care delivery.

At the same time, some have made their peace with the presence of AI applications in their work lives. Members of this subgroup appreciate its promise to help nurses fix inefficiencies, complete administrative tasks, analyze data, build new skills and automate monitoring.

These are among the findings from a survey of more than 1,100 U.S. nurses and nursing students taken earlier this year. The work was conducted by Cross Country Healthcare, a Florida-based staffing consultancy, in partnership with Florida Atlantic University’s Christine E. Lynn College of Nursing.

In introducing their findings and observations, the authors of the survey report comment that AI “will not replace wisdom—intuition, empathy and experience. Nothing can replace the human experience.” However, they add, AI “has the potential to free time from routine tasks to help nursing practitioners focus more on their patients and healthcare outcomes.”

Directing their attention to nursing leaders, the authors suggest four steps for equipping AI-ambivalent nursing teams with tech-savvy skills and perspectives. Here are the action items they recommend and the advice they offer.

1. Embrace transparency.

Healthcare leaders must communicate openly about AI implementation to foster trust and alleviate concerns. Providing clear insights into how AI will be used and its potential impact on nursing workflows can empower nurses to embrace innovation confidently.

‘Transparency also involves addressing apprehensions regarding job security and privacy. By openly discussing these concerns and outlining the organization’s strategies to mitigate them, nurses can feel reassured about their future within the evolving healthcare landscape.’

2. Ramp up training.

As AI becomes increasingly integrated into healthcare settings, ensuring nurses possess the necessary skills to leverage this technology is crucial. Comprehensive training programs tailored to nurses can demystify AI and enhance their proficiency in AI-powered tools.

‘By investing in ongoing training initiatives, healthcare organizations can equip nurses with the knowledge and skills needed to embrace AI confidently. This enhances their professional development and fosters a culture of continuous learning within the nursing workforce.’

3. Tailor communications.

Recognizing the diverse perspectives within the nursing community is essential for effective AI integration. Healthcare leaders should tailor communication strategies to resonate with different nurse personas, acknowledging their unique concerns and preferences.

‘Whether addressing AI skeptics, cautious believers or enthusiasts, personalized communication strategies can foster understanding and acceptance of AI among nurses. By actively listening to nurses’ feedback and adapting communication approaches accordingly, healthcare organizations can cultivate a supportive environment conducive to AI adoption.’

4. Listen to employees and incorporate their input.

Engaging nurses in AI implementation is vital to acceptance and adoption. By soliciting and incorporating nurses’ feedback, healthcare organizations can tailor AI solutions to address specific pain points and enhance the nursing experience.

‘By prioritizing employee feedback and emphasizing AI’s positive impact on nursing practice, healthcare organizations can foster a culture of innovation and collaboration. Moreover, highlighting AI’s tangible benefits, such as streamlining administrative tasks and improving patient outcomes, can inspire nurses to embrace this transformative technology.’

The authors close on a pro-AI note. The technology, they reiterate, can offer efficiency gains to supplement staffing levels and reduce stressful working conditions. However, they add:

‘It is essential to the future success of healthcare that we acknowledge that skilled talent will remain indispensable to effective healthcare delivery and outcomes.’

Read the full report.

 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Google sics Med-Gemini on GPT-4. But only in a manner of speaking. In a study comparing the two on competence for complex clinical tasks, Google’s own researchers found their brainchild—which is still under development—“surpassed [OpenAI’s] GPT-4 model family on every benchmark where a direct comparison is viable, often by a wide margin.” The study authors comment that Med-Gemini’s strong performance may mean it’s not far from release into the real world. In fact, the test version, they report, bested human experts at summarizing medical texts and showed “promising potential” for medical research, education and multimodal medical dialogue. A summary of the study, which has not yet been peer-reviewed, is here. (Click “View PDF” for the full study draft.)
     
  • AI’s ability to uncover occult patterns makes the technology a natural fit for cancer doctors. Chevon Rariy, MD, chief health officer and senior VP of digital health at Arizona-based Oncology Care Partners, makes the point in an interview with HIMSS Media’s Healthcare IT News. “By leveraging patient engagement tools that are AI-driven and individualized, we are transforming the way oncology care is delivered,” she says, adding that patient input guides adjustments in care plans in ways it never used to. The approach lets patients take a more active role in their care, which, Rariy suggests, contributes to better treatment outcomes as well as more satisfying patient experiences.
     
  • GenAI models are only as intelligent as the data fed to them—and the filters built into them. This became clear with one look at the bungled results Google’s Gemini image generator produced in February. Having learned from Google’s stumble, OpenAI is working on a framework for avoiding that kind of public embarrassment. Its solution, called Model Spec in an early draft iteration, will incorporate public input on how models in ChatGPT and the OpenAI application programming interface (API) should behave in interactions with end users. OpenAI says it’s not waiting for finalization to post the draft because it wants to “provide more transparency on our approach to shaping model behavior and to start a public conversation about how it could be changed and improved.” The company adds that it will continuously update Model Spec based on what it learns from sharing the framework and hearing back on it from stakeholders.
     
  • Here’s an AI-equipped doctor who’s shocking patients by what she’s not doing: tapping keys during patient time. A GenAI notetaking app now does that for the physician, Amy Wheeler, MD, a primary care doctor in the Mass General Brigham system. Wheeler tells The Wall Street Journal she’s gratified to be giving patients her undivided attention. Meanwhile the health system’s CMIO, Rebecca Mishuris, MD, says the pilot project will measure the value of the technology by, among other things, patient experience and physician retention. So far, Mishuris adds, “the feedback is impressive. I have quotes from people saying, ‘I’m no longer leaving medicine.’”
     
  • Do China’s AI models use any U.S. technology? What is Beijing’s stance on U.S. AI models? How accessible are OpenAI’s AI models in China? And while we’re at it, how dependent—if at all—is China on U.S. artificial intelligence technology? Reuters asks these questions in the context of rhetoric emanating from Washington about restricting exports of non-open AI models made in the USA. As the news service points out, China has similar designs of its own. Get the answers, Reuters-style.
     
  • Nobody knows what the perfect CAIO looks, sounds or acts like. “We’re still figuring it out,” explains Mark Daley, PhD, chief AI officer at Western University in Ontario, in comments made to CIO.com. “You need someone with enough technical knowledge to be able to keep up with the latest developments … and sort the ‘real’ from the mirages. But you also need someone who understands business process—not just how the organization operates, but why it operates the way it does.”
     
  • If someone wins a Pulitzer Prize using GenAI, does the AI’s creator get a share of the spoils? This is no longer just a hypothetical scenario. On May 6, two of 15 winners in the journalism category admitted using the technology to write their winning works. One of them, Ishaan Jhaveri of The New York Times, tells the Nieman Journalism Lab that his team didn’t use GenAI on work that otherwise would have been done manually. “We used AI precisely [for] the type of task that would’ve taken so long to do manually that [it would distract from] other investigative work,” Jhaveri says. As he puts it, the Nieman Lab adds, AI can help investigative reporters find needles in proverbial haystacks while they go about their real work: investigating and reporting. 
     
  • Research roundup:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand
