New Delhi: ChatGPT outperforms physicians in providing high-quality, empathetic answers to patient questions, according to a study. There has been widespread speculation about how advances in artificial intelligence (AI) assistants such as ChatGPT could be used in medicine. The study, published in JAMA Internal Medicine, compared the written responses of physicians and ChatGPT to real-world health questions.
A panel of licensed health professionals preferred ChatGPT’s responses 79 percent of the time and rated ChatGPT’s responses as higher quality and more empathetic.
“The opportunities to improve health care with AI are massive,” said John W. Ayers. “AI-augmented care is the future of medicine.”
In the new study, the research team set out to answer the question: Can ChatGPT accurately answer questions sent by patients to their doctors?
If so, AI models could be integrated into healthcare systems to improve physicians’ responses to patient-sent queries and reduce the ever-increasing burden on physicians.
“ChatGPT may be able to pass the medical licensing exam,” said Dr. Davey Smith, co-director of the UC San Diego Altman Clinical and Translational Research Institute, “but answering patient questions accurately and empathetically is a whole different ballgame.”
According to the researchers, while the COVID-19 pandemic accelerated the adoption of virtual healthcare and made it easier to reach patients, physicians are now burdened with electronic patient messages seeking medical advice, which has contributed to record-breaking levels of physician burnout.
To understand how ChatGPT can help, the team randomly sampled 195 exchanges from Reddit’s AskDocs where a verified practitioner answered a public question.
The team provided the original question to ChatGPT and asked it to write a response. A panel of three licensed health professionals then evaluated each question and its associated responses without being told whether a response came from the physician or from ChatGPT.
The evaluators compared responses on information quality and empathy and indicated which they preferred. They preferred ChatGPT’s responses to physicians’ responses 79 percent of the time.
The study showed that ChatGPT’s responses contained nuanced and precise information, and often addressed more aspects of a patient’s question than the physician’s response did.
Additionally, ChatGPT’s responses were rated significantly higher in quality than physicians’ responses: responses rated good or very good in quality were 3.6 times more prevalent for ChatGPT than for physicians. ChatGPT’s responses were also more empathetic: responses rated empathetic or very empathetic were 9.8 times more prevalent for ChatGPT than for physicians.
However, the team said, the ultimate solution isn’t to replace your doctor entirely. “Instead, a physician using ChatGPT is the answer to better and more empathetic care,” said Adam Poliak, assistant professor of computer science at Bryn Mawr College.