Expert review of AI-generated responses to the top ten patient complaints in primary care
Background: Artificial intelligence (AI) systems such as ChatGPT are among the fastest-growing applications of all time. Most physicians are familiar with the pitfalls of “Dr. Google” and WebMD, but the same familiarity does not yet extend to the increasingly accessed AI-based applications. This study aims to evaluate ChatGPT’s responses to the top ten complaints seen in primary care to help clinicians assess the utility and accuracy of a popular AI-based application. Methods: The top ten patient-reported complaints leading to a primary care visit were each used to generate two questions, one regarding cause and one regarding treatment. These questions were then posed via the Perplexity AI search engine. Each response was graded by three experienced family medicine clinicians, and an overall score was reported for its utility and appropriateness. Results: About 95% of responses were rated as useful, and 85% were clinically appropriate. Three responses were deemed inappropriate by the reviewers, indicating possible areas of harmful omission or improper triage. The response on treatment of shortness of breath was unanimously regarded as not useful and inappropriate because of its lack of emphasis on seeking medical care and on life-threatening conditions. Fatigue received the highest utility ratings for both etiology and treatment. Responses were overall focused and concise; however, citations were secondary sources with variable utility and clinical safety. Conclusion: “Doctor AI” is here to stay and will require ongoing investigation as it inevitably plays an increasing role in providing medical information and advice to patients. The rapid pace of AI search engine development limits a study of this design, as results are likely to differ over a short period of time. More research on the safety and utility of medical AI in the primary care setting is paramount.
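As a quick plausibility check on the abstract's figures, the sketch below reproduces the reported proportions. It is a minimal illustration only, assuming 10 complaints × 2 questions = 20 graded responses and majority voting across the three reviewers; the complaint list is hypothetical, since the record does not enumerate it.

```python
# Sketch of the study's tallying logic: 10 complaints x 2 questions
# (cause, treatment) = 20 AI responses, each graded by 3 clinicians.
# The complaint names and the majority-vote aggregation are illustrative
# assumptions; the record specifies neither.
from itertools import product

complaints = ["cough", "back pain", "fatigue", "shortness of breath",
              "headache", "dizziness", "abdominal pain", "sore throat",
              "joint pain", "insomnia"]  # hypothetical top-ten list

# Every (complaint, question-type) pair yields one response to grade.
questions = list(product(complaints, ["cause", "treatment"]))
assert len(questions) == 20  # 10 complaints x 2 questions each

def overall(ratings):
    """Aggregate three reviewer verdicts (True/False) by majority vote."""
    return sum(ratings) >= 2

# With 19 of 20 responses rated useful and 17 of 20 appropriate
# (3 deemed inappropriate), the abstract's 95% / 85% figures follow.
useful = 19 / 20
appropriate = 17 / 20
print(f"useful: {useful:.0%}, appropriate: {appropriate:.0%}")  # 95%, 85%
```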
Main Authors: Monica Gillie, George Kent
Format: Article
Language: English
Published: Academia.edu Journals, 2024-11-01
Series: Academia Medicine
Online Access: https://www.academia.edu/125308167/Expert_review_of_AI_generated_responses_to_the_top_ten_patient_complaints_in_primary_care
_version_ | 1823859728252403712 |
author | Monica Gillie; George Kent |
author_facet | Monica Gillie; George Kent |
author_sort | Monica Gillie |
collection | DOAJ |
description |
Background: Artificial intelligence (AI) systems such as ChatGPT are among the fastest-growing applications of all time. Most physicians are familiar with the pitfalls of “Dr. Google” and WebMD, but the same familiarity does not yet extend to the increasingly accessed AI-based applications. This study aims to evaluate ChatGPT’s responses to the top ten complaints seen in primary care to help clinicians assess the utility and accuracy of a popular AI-based application. Methods: The top ten patient-reported complaints leading to a primary care visit were each used to generate two questions, one regarding cause and one regarding treatment. These questions were then posed via the Perplexity AI search engine. Each response was graded by three experienced family medicine clinicians, and an overall score was reported for its utility and appropriateness. Results: About 95% of responses were rated as useful, and 85% were clinically appropriate. Three responses were deemed inappropriate by the reviewers, indicating possible areas of harmful omission or improper triage. The response on treatment of shortness of breath was unanimously regarded as not useful and inappropriate because of its lack of emphasis on seeking medical care and on life-threatening conditions. Fatigue received the highest utility ratings for both etiology and treatment. Responses were overall focused and concise; however, citations were secondary sources with variable utility and clinical safety. Conclusion: “Doctor AI” is here to stay and will require ongoing investigation as it inevitably plays an increasing role in providing medical information and advice to patients. The rapid pace of AI search engine development limits a study of this design, as results are likely to differ over a short period of time. More research on the safety and utility of medical AI in the primary care setting is paramount. |
format | Article |
id | doaj-art-f3318e7aa1b94b4088f2ab5edbb404ee |
institution | Kabale University |
issn | 2994-435X |
language | English |
publishDate | 2024-11-01 |
publisher | Academia.edu Journals |
record_format | Article |
series | Academia Medicine |
spelling | doaj-art-f3318e7aa1b94b4088f2ab5edbb404ee; 2025-02-10T22:26:34Z; eng; Academia.edu Journals; Academia Medicine; 2994-435X; 2024-11-01; 1(4); 10.20935/AcadMed7388; Expert review of AI-generated responses to the top ten patient complaints in primary care; Monica Gillie (Stanford-O’Connor Family Medicine Residency Program, Stanford University, San Jose, CA 95128, USA); George Kent (Department of Medicine, Division of Primary Care and Population Health, Stanford University School of Medicine, USA); https://www.academia.edu/125308167/Expert_review_of_AI_generated_responses_to_the_top_ten_patient_complaints_in_primary_care |
spellingShingle | Monica Gillie; George Kent; Expert review of AI-generated responses to the top ten patient complaints in primary care; Academia Medicine |
title | Expert review of AI-generated responses to the top ten patient complaints in primary care |
title_full | Expert review of AI-generated responses to the top ten patient complaints in primary care |
title_fullStr | Expert review of AI-generated responses to the top ten patient complaints in primary care |
title_full_unstemmed | Expert review of AI-generated responses to the top ten patient complaints in primary care |
title_short | Expert review of AI-generated responses to the top ten patient complaints in primary care |
title_sort | expert review of ai generated responses to the top ten patient complaints in primary care |
url | https://www.academia.edu/125308167/Expert_review_of_AI_generated_responses_to_the_top_ten_patient_complaints_in_primary_care |
work_keys_str_mv | AT monicagillie expertreviewofaigeneratedresponsestothetoptenpatientcomplaintsinprimarycare AT georgekent expertreviewofaigeneratedresponsestothetoptenpatientcomplaintsinprimarycare |