
Ethical Issues Abound in Adoption of Artificial Intelligence in Cancer Care

Issues include explainability, patient consent, and responsibility

By Lori Solomon, HealthDay Reporter

FRIDAY, March 29, 2024 (HealthDay News) — There may be ethical barriers to the adoption of artificial intelligence (AI) in cancer care, according to a study published online March 28 in JAMA Network Open.

Andrew Hantel, M.D., from the Dana-Farber Cancer Institute in Boston, and colleagues surveyed oncologists on their views of the ethical domains of AI use in clinical care. The analysis included 204 survey responses from oncologists in 37 states.

The researchers found that most participants (84.8 percent) reported that AI-based clinical decision models needed to be explainable by oncologists to be used in the clinic, while 23.0 percent said they also needed to be explainable by patients. Eight in 10 respondents (81.4 percent) supported requiring patient consent for AI model use during treatment decisions. In a scenario in which an AI decision model selected a different treatment regimen than the oncologist planned to recommend, the most common response was to present both options and let the patient decide (36.8 percent); respondents from academic settings were more likely than those from other settings to defer to the patient (odds ratio, 2.56). Three-quarters of respondents (76.5 percent) agreed that oncologists should protect patients from biased AI tools, but only 27.9 percent were confident in their ability to identify poorly representative AI models.

“These findings suggest that the implementation of AI in oncology must include rigorous assessments of its effect on care decisions as well as decisional responsibility when problems related to AI use arise,” the authors write.

Several authors disclosed ties to the pharmaceutical and biotechnology industries.

Copyright © 2024 HealthDay. All rights reserved.
