SAGE Open, vol. 16, no. 1, 2026 (SSCI, Scopus)
This study explores how test-takers process multiple-choice questions in a reading comprehension test using eye-tracking data. A total of 159 participants completed a 10-item English test while their eye movements were recorded with the Smart Eye Aurora eye tracker. The study first examined item processing patterns using latent profile analysis and then compared test performance across the resulting groups. To identify processing patterns, latent profile models (ranging from one to four classes) were tested for each item based on average log-transformed process times across defined Areas of Interest (AOIs) covering text lines and answer choices. Results showed that a two-class model (fast- and slow-pacing) provided the best fit for five items, while a three-class model (fast-, moderate-, and slow-pacing) best fit the remaining items. Items with two subgroups were typically either moderately difficult and short or easy and long, whereas items with three subgroups varied in both difficulty and length. Additionally, test-takers in the fast-pacing group were more likely to answer items correctly and achieved higher total scores than those in the other groups, particularly for highly discriminating items of moderate or low difficulty. Overall, these findings highlight the importance of examining item processing patterns to better understand how individuals interact with multiple-choice test items. Item characteristics—such as difficulty, discrimination, and length—play a crucial role in shaping processing behaviors, providing deeper insights into the cognitive aspects of eye movement patterns during test-taking.
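The model-selection step described above — fitting latent profile models with one to four classes per item and retaining the best-fitting one — can be sketched with a Gaussian mixture model, a common implementation of latent profile analysis. This is an illustrative sketch, not the authors' actual pipeline: the data below are simulated (two AOI-level average log process times for 159 hypothetical test-takers), and BIC is assumed as the fit criterion.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical data: per-person average log process times on two AOI
# types (text lines, answer choices) for one item, simulated as a
# mixture of a "fast-pacing" and a "slow-pacing" subgroup.
fast = rng.normal(loc=[0.8, 0.5], scale=0.2, size=(100, 2))
slow = rng.normal(loc=[1.6, 1.2], scale=0.2, size=(59, 2))
X = np.vstack([fast, slow])

# Fit one- to four-class profile models and compare fit by BIC
# (lower is better); a GMM here stands in for LPA.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 5)}
bic = {k: m.bic(X) for k, m in models.items()}
best_k = min(bic, key=bic.get)

# Class membership for each test-taker under the selected model.
labels = models[best_k].predict(X)
print(best_k)
```

For these well-separated simulated subgroups, the two-class model should win on BIC, mirroring the fast-/slow-pacing split the study reports for five items; with real process-time data, the selected number of classes would vary by item.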