Assistive robots used in medical settings could inspire caregivers—familial as well as professional—to treat patients more empathetically and patiently, potentially improving outcomes.
That’s according to a study conducted at Tufts University in Massachusetts and published in the June issue of the Journal of Medical Internet Research.
Psychologist Meia Chita-Tegmark, PhD, and colleagues showed 188 randomly recruited lay participants vignette scenarios in which robots took two approaches to assisting patients: patient-centered or task-centered.
The patient-centered robots focused on the needs and choices of the patient as he or she complied with or resisted a treatment plan.
The task-centered robots focused on how strictly the patient followed the same plan.
After the exercise, participants completed a questionnaire designed to measure their perceptions of emotional intelligence in the robot, trust in the robot and potential acceptance of the robot for the management of their own health.
Most relevant to testing the study’s main hypothesis, that “people’s impressions of a patient will be affected by the robot’s behavior,” the questionnaire also asked for post-vignette impressions of the patient.
Analyzing the responses, Chita-Tegmark and colleagues found that participants formed more positive impressions of the patient in scenarios describing a robot acting in a patient-centered manner.
The questions aimed at ascertaining these impressions asked, for example, whether the patient seemed competent, honest and self-disciplined rather than disruptive, hostile and disorganized.
“These psychological attributes have been shown to make a difference in the quality of care a patient may be given,” the authors explained. “As we design social robots for healthcare, we need to understand … how the robot fits overall into the network and dynamics of the patient’s social relationships. This is important because social life and support has been consistently shown to be a crucial predictor of health outcomes.”
Commenting on people’s willingness to take cues from robots in patient-care scenarios, and on how readily that influence translated into more positive impressions of the patients, Chita-Tegmark and colleagues express some surprise.
“It is remarkable that robots are able to have this effect given that their language output is scripted and certainly does not come with the emotional connotations that a person’s choice of language would have,” the authors write. “A robot’s behavior is not connected to beliefs and attitudes in the same way a human’s behavior is, yet our findings suggest that we inadvertently let our perceptions and impressions of others be guided by the robot’s actions.”
This finding may have cautionary implications for the field of human-robot interaction, the authors acknowledge, but it also holds promise.
“If robots can have an influence on how we think of others,” they write, “then perhaps they can be used for improving relationships in the healthcare setting and beyond.”