The potential applications of advancements in artificial intelligence (AI) assistants like ChatGPT in medicine have been the subject of a great deal of speculation.
An early look at the potential role that AI assistants could play in medicine is provided by a new study that was published in JAMA Internal Medicine and was led by Dr. John W. Ayers from the Qualcomm Institute at the University of California, San Diego. The study compared ChatGPT and physician-written responses to real-world health questions. 79% of the time, a panel of licensed health care professionals preferred ChatGPT’s responses because they were of higher quality and showed more empathy.
Ayers, who is also vice chief of innovation in the Division of Infectious Disease and Global Public Health at the UC San Diego School of Medicine, stated, “The opportunities for improving health care with AI are massive. AI-augmented care is the future of medicine.”
Is ChatGPT ready for health care?
In the new study, the research team set out to answer the question: Can ChatGPT respond accurately to the questions patients send to their doctors? If so, AI models could be integrated into health systems to improve physician responses to patient inquiries and ease the ever-increasing workload placed on physicians.
“ChatGPT might be able to pass a medical licensing exam,” said study co-author Dr. Davey Smith, a physician-scientist, co-director of the UC San Diego Altman Clinical and Translational Research Institute, and professor at the UC San Diego School of Medicine, “but directly answering patient questions accurately and empathetically is a different ballgame.”
Dr. Eric Leas, a Qualcomm Institute affiliate and assistant professor at the UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science who was a co-author of the study, added, “The COVID-19 pandemic accelerated virtual health care adoption. While this made accessing care easier for patients, physicians are burdened by a barrage of electronic patient messages seeking medical advice that have contributed to record-breaking levels of physician burnout.”
Designing a study to test ChatGPT in a health care setting
To obtain a large and diverse sample of health care questions and physician answers that did not contain identifiable personal information, the team turned to social media, where millions of patients publicly post medical questions that doctors answer: Reddit’s AskDocs.
r/AskDocs is a subreddit with approximately 452,000 members who post medical questions and read answers from health care professionals. While anyone can respond to a question, moderators verify health care professionals’ credentials, and responses display the respondent’s credentials. The result is a large and varied collection of medical questions posed by patients and corresponding answers from licensed medical professionals.
While some might wonder whether question-and-answer exchanges on social media are a fair test, team members noted that the exchanges were reflective of their clinical experience.
The team randomly sampled 195 exchanges from AskDocs in which a verified physician responded to a public question. The team provided ChatGPT with the original question and asked it to write a response. Each question and its corresponding responses were evaluated by a panel of three licensed health care professionals who were blinded to whether a response came from the physician or ChatGPT. They compared responses in terms of information quality and empathy, noting which one they preferred.
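The blinded comparison described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the study’s actual pipeline; the function names and data layout are assumptions:

```python
import random

def blinded_pairwise_trial(question, physician_resp, chatgpt_resp, rate, rng=random):
    """Show a rater the two responses in random order (source hidden),
    record which one they prefer, then un-blind the winner's label."""
    pair = [("physician", physician_resp), ("chatgpt", chatgpt_resp)]
    rng.shuffle(pair)  # blinding: the rater never sees the source labels
    labels, texts = zip(*pair)
    choice = rate(question, texts[0], texts[1])  # rater returns 0 or 1
    return labels[choice]

def preference_rate(results, source="chatgpt"):
    """Fraction of exchanges in which the panel preferred `source`."""
    return sum(r == source for r in results) / len(results)
```

Randomizing the presentation order per exchange is what prevents raters from inferring the source from position alone; preferences are tallied only after each trial is un-blinded.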
The panel of health care professional evaluators preferred ChatGPT responses to physician responses 79% of the time.
“ChatGPT messages responded with nuanced and accurate information that often addressed more aspects of the patient’s questions than physician responses,” said Jessica Kelley, a nurse practitioner with San Diego firm Human Longevity and study co-author.
Additionally, ChatGPT responses were rated significantly higher in quality than physician responses: good or very good quality responses were 3.6 times more prevalent for ChatGPT than for physicians (physicians 22.1% vs. ChatGPT 78.5%). ChatGPT responses were also more empathetic: empathetic or very empathetic responses were 9.8 times more prevalent for ChatGPT than for physicians (physicians 4.6% vs. ChatGPT 45.1%).
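The 3.6× and 9.8× figures are simple prevalence ratios that follow directly from the reported percentages. A quick arithmetic check:

```python
# Percentages of responses rated "good or very good" in quality and
# "empathetic or very empathetic", as reported in the study.
quality = {"physician": 22.1, "chatgpt": 78.5}
empathy = {"physician": 4.6, "chatgpt": 45.1}

# Prevalence ratio: ChatGPT's rate divided by the physicians' rate.
quality_ratio = quality["chatgpt"] / quality["physician"]  # ~3.6
empathy_ratio = empathy["chatgpt"] / empathy["physician"]  # ~9.8
```

Both ratios round to the values the study reports, confirming the two results are internally consistent.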
“I never imagined saying this,” added Dr. Aaron Goodman, an associate clinical professor at the UC San Diego School of Medicine and study co-author, “but ChatGPT is a prescription I’d like to give to my inbox. The tool will transform the way I support my patients.”
Harnessing AI assistants for patient messages
“While our study pitted ChatGPT against physicians, the ultimate solution isn’t throwing your doctor out altogether,” said Dr. Adam Poliak, an assistant professor of computer science at Bryn Mawr College and study co-author. “Instead, a physician harnessing ChatGPT is the answer for better and more empathetic care.”
Dr. Christopher Longhurst, Chief Medical Officer and Chief Digital Officer at UC San Diego Health, stated, “Our study is among the first to show how AI assistants can potentially solve real-world health care delivery problems. These results suggest that tools like ChatGPT can efficiently draft high-quality, personalized medical advice for clinician review, and we are beginning that process at UC San Diego Health.”
“It is important that integrating AI assistants into health care messaging be done in the context of a randomized controlled trial to judge how the use of AI assistants impacts outcomes for both physicians and patients,” said Dr. Mike Hogarth, a physician-bioinformatician who is also a study co-author and co-director of the Altman Clinical and Translational Research Institute at UC San Diego.
Investing in AI assistant messaging may have an effect on physician performance and patient health, in addition to enhancing workflow.
“We could use these technologies to train doctors in patient-centered communication, eliminate health disparities suffered by minority populations who frequently seek health care via messaging, build new medical safety systems, and assist doctors by delivering higher quality and more efficient care,” said study co-author Dr. Mark Dredze, Johns Hopkins John C. Malone Associate Professor of Computer Science.
More information: Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum, JAMA Internal Medicine (2023). DOI: 10.1001/jamainternmed.2023.1838. jamanetwork.com/journals/jamai … ainternmed.2023.1838