PAMUKKALE UNIVERSITY JOURNAL OF ENGINEERING SCIENCES-PAMUKKALE UNIVERSITESI MUHENDISLIK BILIMLERI DERGISI, vol.30, no.4, pp.494-508, 2024 (ESCI)
There is often a trade-off between accuracy and interpretability in Machine Learning (ML) models: as a model becomes more complex, its accuracy generally increases while its interpretability decreases. Interpretable Machine Learning (IML) methods have emerged to make complex ML models interpretable while maintaining accuracy, so accuracy is preserved while feature importance is determined. In this study, we compare model-agnostic IML methods, including SHAP and ELI5, with intrinsic IML methods and Feature Selection (FS) methods in terms of the similarity of the attributes they select. We also compare the agnostic IML models (SHAP, LIME, and ELI5) with each other in terms of the similarity of local attribute selection. Experimental studies were conducted on both general and private datasets to predict company default. The results confirm the reliability of agnostic IML methods, which show up to 86% similarity in attribute selection compared with intrinsic IML and FS methods. In addition, certain agnostic IML methods can interpret models for local instances. The findings indicate that agnostic IML models can be applied to complex ML models to offer both global and local interpretability while maintaining high accuracy.
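One common way to quantify the kind of attribute-selection similarity compared in this study is the overlap between the top-k features ranked by two methods. The sketch below illustrates this idea only; the feature names, importance values, and the choice of a simple top-k overlap metric are all hypothetical assumptions, not the paper's actual methodology.

```python
def top_k_overlap(imp_a, imp_b, k):
    """Fraction of shared features among the top-k ranked by each method."""
    # Sort each feature-importance mapping by descending importance
    # and keep the names of the k highest-ranked features.
    top_a = {f for f, _ in sorted(imp_a.items(), key=lambda x: -x[1])[:k]}
    top_b = {f for f, _ in sorted(imp_b.items(), key=lambda x: -x[1])[:k]}
    return len(top_a & top_b) / k

# Hypothetical global importances from a model-agnostic method (e.g. SHAP-like)
imp_agnostic = {"debt_ratio": 0.9, "cash_flow": 0.7, "roa": 0.5,
                "firm_size": 0.2, "firm_age": 0.1}
# Hypothetical importances from a feature-selection method
imp_fs = {"debt_ratio": 0.8, "roa": 0.6, "firm_age": 0.5,
          "cash_flow": 0.3, "firm_size": 0.1}

similarity = top_k_overlap(imp_agnostic, imp_fs, k=3)
print(f"top-3 overlap: {similarity:.2f}")  # 2 of 3 top features agree
```

A similarity score near 1.0 would indicate that the two methods select largely the same attributes, which is the sense in which the abstract reports agreement of up to 86%.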