ChatGPT May Aid Decision Making in the ED for Acute UC

The artificial intelligence chatbot ChatGPT (OpenAI) may serve as a reliable clinical decision-support tool for assessing patients who present to the emergency department (ED) with acute ulcerative colitis (UC).

In a small study, ChatGPT version 4 (GPT-4) accurately gauged disease severity and made hospitalization decisions that were largely in line with those of expert gastroenterologists.

“Our findings suggest that GPT-4 has potential as a clinical decision-support tool in assessing UC severity and recommending suitable settings for further treatment,” say the authors, led by Asaf Levartovsky, MD, Department of Gastroenterology, Sheba Medical Center, Tel Aviv University, Israel.

The study was published online August 23 in the American Journal of Gastroenterology.

Assessing Its Potential

UC is a chronic inflammatory bowel disease characterized by alternating flare-ups and remissions. A flare-up often results in a trip to the ED, where staff must rapidly assess disease severity and the need for hospital admission.

Levartovsky and colleagues explored how helpful GPT-4 could be in 20 distinct presentations of acute UC in the ED. They assessed the chatbot’s ability to determine the severity of disease and whether a specific presentation warranted hospital admission for further treatment.

They fed GPT-4 case summaries that included crucial data such as symptoms, vital signs, and laboratory results. For each case, they asked the chatbot to assess disease severity based on established criteria and recommend hospital admission or outpatient care.
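As an illustration only, here is a minimal sketch of how a structured case summary might be submitted to GPT-4 through the OpenAI API. The case values, prompt wording, and severity criteria below are hypothetical assumptions for the sketch, not the study's actual prompts or data.

```python
# Hypothetical sketch: submitting a structured UC case summary to GPT-4.
# The client call follows the OpenAI Python SDK; the case fields and
# prompt text are illustrative assumptions, not the study's prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

case_summary = """
Symptoms: 8 bloody stools/day, abdominal pain
Vitals: HR 104 bpm, temp 37.9 C
Labs: hemoglobin 10.2 g/dL, CRP 45 mg/L
"""  # invented example values

prompt = (
    "Based on established criteria for acute ulcerative colitis severity, "
    "classify this presentation as mild, moderate, or severe, rate each "
    "criterion variable, and recommend hospital admission or outpatient "
    "care.\n" + case_summary
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```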

GPT-4's answers were compared with assessments made by gastroenterologists and with the actual hospitalization decision made by the ED physician.

Overall, ChatGPT categorized acute UC as severe in 12 patients, moderate in 7, and mild in 1. In each case, the chatbot provided a detailed answer that rated the severity of every criterion variable and gave an overall severity classification.

ChatGPT's severity assessments matched the gastroenterologists' in 80% of cases, with a "high degree of reliability" between the two sets of ratings, the study team reports.

The average correlation between ChatGPT's and the physicians' ratings was 0.839 (P < .001). The inconsistencies in four cases stemmed largely from ChatGPT applying inaccurate cut-off values for systemic variables, such as hemoglobin level and heart rate.
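For readers curious how such agreement figures are derived, the following is a generic sketch, not the study's code, that computes percent agreement and a Pearson correlation between two raters' severity scores. The ordinal coding and the scores themselves are invented, and the study's exact statistical method is not specified here.

```python
# Generic sketch of inter-rater agreement metrics; the scores below are
# invented, and the ordinal coding (mild=1, moderate=2, severe=3) is an
# assumption for illustration.
from scipy.stats import pearsonr

gpt4_scores      = [3, 3, 2, 3, 2, 1, 3, 2, 3, 2]
physician_scores = [3, 3, 2, 2, 2, 1, 3, 2, 3, 3]

# Fraction of cases where the two raters gave the same classification
agreement = sum(a == b for a, b in zip(gpt4_scores, physician_scores)) / len(gpt4_scores)

# Pearson correlation between the two sets of ordinal ratings
r, p = pearsonr(gpt4_scores, physician_scores)
print(f"Percent agreement: {agreement:.0%}, Pearson r = {r:.3f} (P = {p:.3f})")
```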

Following the severity assessment, ChatGPT leaned toward hospital admission for 16 of the 18 patients for whom it gave a definitive recommendation, whereas in actual clinical practice only 12 patients were hospitalized. In one patient with moderate UC who was discharged, ChatGPT favored hospitalization based on the patient's age and comorbid conditions. For the remaining two moderate UC cases, the chatbot recommended consultation with a healthcare professional for further evaluation and management, which the researchers deemed an indecisive response.

Based on their findings, the researchers say ChatGPT could serve as a real-time decision-support tool, one meant not to replace physicians but to enhance human decision-making.

They note that the small sample size is a limitation and that ChatGPT’s accuracy rate requires further validation across larger samples and diverse clinical scenarios.

The study had no specific funding. The authors report no relevant financial relationships.

Am J Gastroenterol. Published online August 23, 2023.
