A comparative analysis of ChatGPT and Google in providing quality and clinical relevance of responses to patients' frequently asked questions on robotic-assisted total knee arthroplasty



Aydin M., Aral F., Dasci M. F., Surucu S., Mahirogullari M., Citak M.

ARCHIVES OF ORTHOPAEDIC AND TRAUMA SURGERY, vol. 145, no. 1, 2025 (SCI-Expanded, Scopus)

Abstract

Introduction

The purpose of this study was to identify the most frequent questions a patient might encounter in an internet search about robotic-assisted total knee arthroplasty (RATKA), and to categorize the answers to these questions in order to assess the suitability of Chat Generative Pre-Trained Transformer (ChatGPT) and the Google search engine as online health information sources for patients.

Methods

The 20 most frequently asked questions (FAQs) were identified by entering the search term "Robot-Assisted Total Knee Replacement" into both Google Search and ChatGPT-4. For Google, a clean search was performed and the 20 FAQs were extracted from the "People also ask" section. For ChatGPT-4, a specific prompt was used to generate the 20 most frequently asked questions. All identified questions, along with the corresponding answers and cited references, were systematically recorded. A modified version of the Rothwell system was used to categorize the questions into 10 subtopics. Each reference was assigned to one of the following groups: commercial, academic, medical practice, single-surgeon personal, or social media. The questions and sources obtained from ChatGPT and Google were compared using Fisher's exact test.

Results

The percentage distribution of questions by category between Google and ChatGPT was as follows: indications/management (15% vs. 25%), technical details (35% vs. 30%), evaluation of surgery (0% vs. 0%), risks/complications (5% vs. 5%), restrictions (10% vs. 0%), specific activities (15% vs. 5%), timeline of recovery (10% vs. 20%), pain (0% vs. 5%), longevity (0% vs. 0%), and cost (10% vs. 10%). Answers were sourced from academic websites more frequently by ChatGPT than by Google (70% vs. 20%; p = 0.0025).

Conclusion

ChatGPT offers a promising alternative to traditional search engines for patient education, particularly in the context of preparing for RATKA. Compared to Google, ChatGPT provided significantly fewer references to commercial content and offered responses that were more closely aligned with academic sources.

Level of evidence

Level IV, survey study of Internet sources.
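As a minimal sketch of the statistical comparison described above: the abstract reports percentages (70% vs. 20% academic sources) rather than raw counts, so the 2x2 table below assumes 14/20 academic answers for ChatGPT and 4/20 for Google, inferred from those percentages over 20 questions each. The function is a generic two-sided Fisher's exact test built from the hypergeometric distribution, not the authors' own code; rounding of the reported percentages or different software defaults could yield a slightly different p-value than the published 0.0025.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    n = a + b + c + d
    row1 = a + b          # first row total
    col1 = a + c          # first column total

    def prob(x):
        # Hypergeometric probability that cell (0, 0) equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo = max(0, row1 + col1 - n)   # smallest feasible value of cell (0, 0)
    hi = min(row1, col1)           # largest feasible value of cell (0, 0)
    # Small tolerance so ties with p_obs are counted despite float rounding
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Counts inferred (assumption) from the reported percentages:
# ChatGPT: 14 academic / 6 other; Google: 4 academic / 16 other
p_value = fisher_exact_two_sided(14, 6, 4, 16)
print(f"p = {p_value:.4f}")
```

With these assumed counts the test is significant at the conventional 0.05 level, consistent with the study's conclusion that ChatGPT's answers drew on academic sources significantly more often than Google's.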