Automated scoring in the era of artificial intelligence: An empirical study with Turkish essays


Aydın B., Kışla T., Elmas N. T., Bulut O.

System, vol. 133, 2025 (SSCI)

  • Publication Type: Article
  • Volume: 133
  • Publication Date: 2025
  • Doi Number: 10.1016/j.system.2025.103784
  • Journal Name: System
  • Journal Indexes: Social Sciences Citation Index (SSCI), Scopus, Academic Search Premier, IBZ Online, Periodicals Index Online, Applied Science & Technology Source, Communication Abstracts, EBSCO Education Source, Educational Research Abstracts (ERA), Linguistics & Language Behavior Abstracts, MLA - Modern Language Association Database
  • Keywords: Automated scoring, Large language models, Multilevel models, Rater reliability, Turkish essays, Zero-shot with rubric
  • Gazi University Affiliated: Yes

Abstract

Automated scoring (AS) has gained significant attention as a tool to enhance the efficiency and reliability of assessment processes. Yet its application in under-represented languages, such as Turkish, remains limited. This study addresses this gap by empirically evaluating AS for Turkish using a zero-shot approach with a rubric, powered by OpenAI's GPT-4o. A dataset of 590 essays written by learners of Turkish as a second language was scored by professional human raters and by an artificial intelligence (AI) model integrated via a custom-built interface. The scoring rubric, grounded in the Common European Framework of Reference for Languages, assessed six dimensions of writing quality. Results revealed strong alignment between human and AI scores, with a Quadratic Weighted Kappa of 0.72, a Pearson correlation of 0.73, and an overlap measure of 83.5%. Analysis of rater effects showed minimal influence on score discrepancies, though factors such as experience and gender exhibited modest effects. These findings demonstrate the potential of AI-driven scoring in Turkish and offer valuable insights for broader implementation in under-represented languages, including possible sources of disagreement between human and AI scores. Because these conclusions are based on a single writing task scored by one human rater, future research should explore more diverse writing inputs and multiple raters.
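
The abstract reports agreement between human and AI scores in terms of Quadratic Weighted Kappa, Pearson correlation, and an overlap measure. The following is a minimal sketch (not the authors' code) of how such metrics can be computed with scikit-learn and SciPy; the score vectors are hypothetical placeholders, and the exact definition of "overlap" used in the paper may differ from the adjacent-agreement definition assumed here.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical human and GPT-4o ratings for a handful of essays (ordinal scores).
human_scores = np.array([3, 4, 2, 5, 4, 3, 1, 4])
ai_scores    = np.array([3, 4, 3, 5, 4, 2, 1, 4])

# Quadratic Weighted Kappa: chance-corrected agreement that penalizes larger
# disagreements between ordinal scores more heavily.
qwk = cohen_kappa_score(human_scores, ai_scores, weights="quadratic")

# Pearson correlation between the two score vectors.
r, _ = pearsonr(human_scores, ai_scores)

# Overlap, assumed here to mean scores differing by at most one point.
overlap = np.mean(np.abs(human_scores - ai_scores) <= 1)

print(f"QWK = {qwk:.2f}, Pearson r = {r:.2f}, overlap = {overlap:.1%}")
```

The study's scoring pipeline itself (a zero-shot rubric prompt sent to GPT-4o through a custom-built interface) is not specified in this record. A minimal sketch using the OpenAI Python SDK is shown below, assuming an `OPENAI_API_KEY` in the environment; the rubric wording and dimension names are hypothetical, since the paper's CEFR-based rubric and prompt are not reproduced here.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical rubric text; the paper uses a CEFR-grounded rubric with six dimensions.
rubric = (
    "Score the essay from 1 to 5 on each of six dimensions of writing quality "
    "(e.g., task achievement, coherence, vocabulary, grammar, spelling, punctuation), "
    "following the rubric descriptors."
)
essay = "..."  # a learner essay in Turkish

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,
    messages=[
        {"role": "system", "content": "You are an experienced rater of Turkish L2 essays."},
        {"role": "user", "content": f"{rubric}\n\nEssay:\n{essay}\n\nReturn one score per dimension."},
    ],
)
print(response.choices[0].message.content)
```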