Investigation of Interrater Reliability in The Evaluation of Foreign Language Writing Skills With Multigroup Confirmatory Factor Analysis

Önen E., Taşdelen Yayvak M. K.

Journal of Education and Training Studies, vol.7, no.1, pp.30-37, 2019 (Peer-Reviewed Journal)

  • Publication Type: Article
  • Volume: 7 Issue: 1
  • Publication Date: 2019
  • DOI: 10.11114/jets.v7i1.3421
  • Journal Name: Journal of Education and Training Studies
  • Journal Indexes: ERIC (Education Resources Information Center)
  • Page Numbers: pp.30-37

Abstract

This study aimed to examine the interrater reliability of the scoring of paragraph writing skills in foreign languages using measurement invariance tests. The study group consists of 267 students studying English at the Preparatory School at Gazi University. Each student wrote a paragraph on the same topic, and the paragraphs were rated separately by three different raters using the same scoring key. Evidence for the validity of the measurements was collected with exploratory factor analysis (EFA) and confirmatory factor analysis (CFA), while evidence for the reliability of the measurements was collected with the Cronbach alpha (α) coefficient. When interrater reliability was tested with multigroup confirmatory factor analysis in the context of measurement invariance, evidence of full configural and metric invariance was obtained, but no evidence of full or partial scalar invariance could be obtained. The lack of evidence of scalar invariance means that the raters scoring the writing skills did not start from the same baseline level of performance. Consequently, invariant uniqueness and invariant factor variances could not be tested, and therefore no evidence of reliability between raters could be obtained.
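The Cronbach alpha (α) coefficient mentioned in the abstract can be computed directly from a score matrix. The sketch below is illustrative only; the function and the demo ratings are hypothetical and are not data from the study.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (e.g., rubric criteria)
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # sample variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 students scored on 3 rubric items
demo = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 2],
])
print(round(cronbach_alpha(demo), 3))  # → 0.962
```

Values near 1 indicate high internal consistency among the items; note that alpha addresses consistency, not the scalar invariance across raters that the study tests with multigroup CFA.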

Keywords: interrater reliability, measurement invariance, evaluation of writing skills, multigroup confirmatory factor analysis