4 hazards to avoid as AI expands in eldercare

AI holds the promise of improving gerontology by making it more predictive, personalized, preventive and participatory. Getting there, however, will require negotiating an equal number of pronounced risks.

Bioethicist Giovanni Rubeis, PhD, of the Institute of Medical History and Ethics at Heidelberg University in Germany fleshes out the details of the challenge in an article published online July 15 in Archives of Gerontology and Geriatrics.

The four Ds that make up the downside, as Rubeis sees it:

  1. Depersonalization of care through algorithm-based standardization
  2. Discrimination against minority groups through generalization
  3. Dehumanization of the care relationship through automation
  4. Disciplining of patients, however inadvertent, through monitoring and surveillance

Rubeis suggests ways to avoid each of these pitfalls.

To avoid both depersonalization and discrimination, he asserts, healthcare providers and researchers need to maintain a dual focus, zeroing in on the needs and characteristics of minority groups in particular and of older people in general.

The overarching aim should be to provide “relevant, reliable and context-sensitive” data for Big Data analytics.

“This also includes increased research efforts regarding the acceptance of older people vis-à-vis existing AI-based gerontechnology,” Rubeis points out. “In addition, ethical issues connected to AI in elderly care need more in-depth analysis and concise models for framing.”

To circumvent the dangers of dehumanization, AI developers must tailor models for older people rather than relying on modifications to algorithms for nonelderly adults, Rubeis argues.

“Design processes should be participatory and user-driven, integrating the actual needs and resources of older people,” he writes. “In addition, caregivers play an important role here. As overseers of care, they can act as advocates of older people’s interests and support them in making their voices heard by care providers and policy makers.”

And keeping eldercare-specific AI applications from making patients feel they're being watched for disciplinary reasons will take some structured questioning of concepts embedded in cultural attitudes toward aging.

“Instead of simply implementing AI-based solutions just because they promise cost reduction, the actual needs of patients have to be assessed,” Rubeis urges. “It is crucial to adapt the methods of assessment to the specific conditions present in different healthcare systems. That means that the implementation of AI-based solutions in elderly care cannot simply follow a ‘one-size-fits-all’ attitude.”

In his conclusion, Rubeis underscores that AI-based gerontechnology is going to disrupt eldercare—and the disruption could end up a net plus or net minus for older people.

Where it ultimately lands between those two poles, he writes, will depend on “whether the needed joint efforts of users, caregivers, care providers, engineers and policy makers will be made.”