AI scores higher on empathy than human doctors: Study

ChatGPT scored higher on empathy than human doctors, according to a new study.

The study compared responses to real-world health questions by human physicians and artificial intelligence assistant ChatGPT.


A panel of licensed healthcare professionals preferred the AI's answers nearly 80% of the time.

The authors of the study believe their findings show room for doctors to improve their bedside manner and could help medical professionals draft responses to patient questions. However, the study does not present AI as a replacement for doctors, but rather as an assistant that could improve doctor performance.

“The opportunities for improving healthcare with AI are massive,” lead author Dr. John W. Ayers, vice chief of innovation at the University of California, San Diego School of Medicine’s Division of Infectious Disease and Global Public Health, said in a press release. “AI-augmented care is the future of medicine.”

With the recent spike in the use of telemedicine, some doctors believe this tool can be helpful in providing patients with personalized, empathetic, and high-quality responses while battling against “physician burnout.”

UC San Diego researchers used patient questions from Reddit's r/AskDocs forum that had received responses from a verified healthcare professional, then fed the same questions to ChatGPT.

They found that ChatGPT's responses were rated high in "quality" 78.5% of the time, compared with 22.1% for physicians' responses. ChatGPT also rated higher on empathy, at 45.1%, compared with physicians' 4.6%.

There are some questions about the quality of the study, Do No Harm Chairman Dr. Stanley Goldfarb told the Washington Examiner, noting that the study's self-described limitations section was "one of the longest sections I have ever seen devoted to this topic."

One concern for Goldfarb is that the study “evaluators did not assess the chatbot responses for accuracy or fabricated information.”

Additional limitations the study acknowledges include that its measurements of quality and empathy were not tested or validated, that the evaluators were also coauthors, "which could have biased their assessments," and that the chatbot's responses, which were typically longer, could have been incorrectly associated with "greater empathy."

“It also looks like the authors had several real potential conflicts given that they are owners, or at least have equity in a company that apparently supports chatbot activity,” Goldfarb pointed out.

Several of the authors have connections to data analytics and health analytics companies. Ayers, for instance, has served as CEO of Good Analytics and owns equity in the firm; he also owns equity in the healthcare software company Health Watcher.

Dr. Mark Dredze was the chief scientific officer of Good Analytics and received "personal fees" from Bloomberg LP and Sickweather, according to the study's conflict of interest disclosures.

Dr. Eric Leas reported personal fees from Good Analytics “during the conduct of the study.”


Dr. Michael Hogarth is an adviser for healthcare chatbot company LifeLink.

“Artificial intelligence will play a role in some low-level human interactions,” Goldfarb concluded. “The question is, will they be capable of dealing with some real, complex issues? It would’ve been nice if they had given us some examples of what they considered better responses by the computer compared to the physician.”
