Test Equating: Examining the Statistical Equivalence of Tests Composed of Different Item Formats that Measure the Same Behaviors


KAN A.

JOURNAL OF MEASUREMENT AND EVALUATION IN EDUCATION AND PSYCHOLOGY-EPOD, vol.1, no.1, pp.16-21, 2010 (ESCI)

Abstract

In this study, the first aim was to determine whether tests with different item formats create an advantage or disadvantage for examinees, in other words, whether examinees' scores are affected by the particular item format of the test form taken. The second aim was to use the test equating procedure as a tool for examining the effect of item format on test performance and thereby to determine whether scores from the two differently formatted tests could be used interchangeably. The data were collected from 402 6th grade students in various secondary schools. A single group design and the linear equating procedure from classical equating methodology were used. As part of the equating process, standard errors of equating (SEEs) for the single group design were estimated, and confidence bands based on the SEE were used to assess the equivalence of the two mathematics test forms with different item formats. Differences between the equating function and the identity function were found, ranging from -0.041 to 1.159. Because these differences exceed two SEEs over part of the score range, the two differently formatted mathematics test forms (numeric format and word format) cannot be considered equivalent or interchangeable.
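The procedure the abstract describes can be sketched in a few lines: linear equating maps a score on one form onto the scale of the other by matching the two forms' means and standard deviations, and equivalence at a score point is judged by whether the equating-minus-identity difference stays within two SEEs. The function names and the score data below are hypothetical illustrations, not the study's actual data or software.

```python
import statistics

def linear_equate(x, scores_x, scores_y):
    """Linear equating: map a Form X score onto the Form Y scale by
    matching the means and standard deviations of the two forms."""
    mx, my = statistics.mean(scores_x), statistics.mean(scores_y)
    sx, sy = statistics.stdev(scores_x), statistics.stdev(scores_y)
    return my + (sy / sx) * (x - mx)

def equivalent(diff, see):
    """Treat the forms as interchangeable at a score point only if the
    equating-minus-identity difference lies within +/- 2 SEEs."""
    return abs(diff) <= 2 * see

# Hypothetical single-group data: each examinee took both forms.
form_x = [12, 15, 18, 20, 25, 27, 30]
form_y = [14, 16, 20, 21, 27, 28, 33]

for x in (15, 20, 25):
    y_eq = linear_equate(x, form_x, form_y)
    diff = y_eq - x  # deviation from the identity function y = x
    print(x, round(y_eq, 3), round(diff, 3))
```

In the study's terms, the decision rule is applied score point by score point: because some of the observed differences (up to 1.159) exceeded two SEEs, `equivalent` would return False over part of the score range, which is exactly why the two forms were judged not interchangeable.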