Objectives: Autoverification (AV) is the process of evaluating and validating laboratory results using predefined computer-based algorithms without human intervention. With autoverification, all reports are validated against standard evaluation criteria using predefined rules, and the number of reports requiring review by a laboratory specialist is reduced. However, creating and validating these rules are the most demanding steps in setting up an autoverification system. In this study, we aimed to develop a model that helps users establish autoverification rules and evaluate their validity and performance.

Design & methods: The proposed model was established by analyzing white papers, previous study results, and national/international guidelines. Autoverification software (myODS) was developed to create rules according to the model and to evaluate the rules and autoverification rates. Simulation results produced by the software were used to demonstrate that the proposed framework works as expected. Both autoverification rates and step-based evaluations were assessed using actual patient results. Two algorithms defined according to delta check usage (Algorithms A and B) and three manual review limits were used for the evaluation.

Results: Six hundred seventeen rules were created according to the proposed model, and 1,976 simulation results were generated for validation. Our results showed that manual review limits are the most critical step in determining the autoverification rate, and that delta check evaluation is especially important for inpatients. Algorithm B, which includes consecutive delta check evaluation, had higher AV rates.

Conclusions: Systematic rule formation is a critical factor for successful AV. Our proposed model can help laboratories establish and evaluate autoverification systems, and rules created according to this model could serve as a starting point for different test groups.
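To make the rule-evaluation steps concrete, the following is a minimal sketch of a rule-based autoverification decision combining manual review limits with a percentage-based delta check. All names, limits, and the delta threshold here are illustrative assumptions; this is not the myODS implementation or the authors' actual rule set.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Result:
    """A single laboratory result (hypothetical structure)."""
    value: float
    previous: Optional[float] = None  # patient's prior result, for the delta check

def autoverify(r: Result,
               review_low: float, review_high: float,
               delta_limit_pct: float = 20.0,
               use_delta: bool = True) -> bool:
    """Return True if the result may be released without manual review."""
    # Step 1: manual review limits -- values outside the interval are always held.
    if not (review_low <= r.value <= review_high):
        return False
    # Step 2: delta check -- flag large changes from the patient's previous result.
    if use_delta and r.previous is not None:
        delta_pct = abs(r.value - r.previous) / r.previous * 100
        if delta_pct > delta_limit_pct:
            return False
    return True

# Example with assumed glucose-like limits (40-400) and a 20% delta threshold:
print(autoverify(Result(value=95, previous=90), 40, 400))  # passes both checks
print(autoverify(Result(value=95, previous=50), 40, 400))  # held: 90% change fails delta
```

Running the rule without the delta step (`use_delta=False`) mirrors the contrast between Algorithms A and B described above: disabling the delta check releases more results on limits alone, which is why delta evaluation matters most for patients with prior results, such as inpatients.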