Examining ChatGPT’s validity as a source for scientific inquiry and its misconceptions regarding cell energy metabolism


Elmas R., ADIGÜZEL ULUTAŞ M., YILMAZ M.

Education and Information Technologies, 2024 (SSCI)

  • Publication Type: Article / Full Article
  • Publication Date: 2024
  • DOI: 10.1007/s10639-024-12749-1
  • Journal Name: Education and Information Technologies
  • Journal Indexes: Social Sciences Citation Index (SSCI), Scopus, Communication Abstracts, EBSCO Education Source, Educational Research Abstracts (ERA), ERIC (Education Resources Information Center), INSPEC
  • Keywords: Artificial intelligence (AI), Biochemistry, ChatGPT, Misconceptions
  • Gazi University Affiliated: Yes

Abstract

Today, many people rely on technological tools that are widely accessible, respond quickly, and draw on extensive information networks. Given recent technological advances in education and the growing acceptance of Artificial Intelligence (AI) technologies, the issues surrounding their implementation in education need to be identified and analyzed. ChatGPT (Chat Generative Pre-trained Transformer), an AI program developed by OpenAI and released to users in 2022, has several notable characteristics: it is a machine learning-powered chatbot that can deliver detailed responses to inquiries. This research aims to evaluate the validity of ChatGPT-generated responses to scientific questions from the discipline of biochemistry. A document analysis was conducted to determine the scientific validity of the responses ChatGPT produced for five questions. The five questions, drawn from biochemistry content, were posed to ChatGPT in written form, and the generated answers were saved and analyzed for scientific validity. The study found that ChatGPT gave scientifically incorrect or incomplete answers to all five questions. Moreover, when asked to justify its responses, the AI insisted on its invalid answers. Its performance was then evaluated after it was prompted about its certainty: it provided scientifically correct answers to the first two questions, a partially correct answer to the third, and continued to offer invalid answers to the remaining questions. Ultimately, ChatGPT's ability to provide scientifically rigorous responses is limited. To obtain accurate and appropriate answers, comprehensive and detailed inquiries must be posed so as to elicit a more precise and informed response. Scholars and researchers should recognize that ChatGPT harbors certain misconceptions and therefore constitutes only a partially dependable and scientifically valid resource.