International Conference of Innovative Computer Engineering (ICE 2025), Ankara, Türkiye, 06 November 2025, p. 7 (Abstract)
Abstract
Aim: To analyze the clinical interpretability of deep learning models for classifying brain tumors on MRI.
Methods: We compared a baseline Convolutional Neural Network (CNN), an attention-enhanced CNN, and transfer-learning backbones (MobileNetV2, InceptionV3, Xception) on a four-class MRI dataset (glioma, meningioma, pituitary, no tumor). Models were trained with the Adam optimizer and evaluated using accuracy, precision, recall, F1-score, confusion matrices, and Grad-CAM visualizations.
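A minimal sketch of the kind of transfer-learning classifier described above, assuming a TensorFlow/Keras pipeline: the input size, dropout rate, learning rate, and the dataset objects referenced in the comments are illustrative assumptions, not the study's exact configuration.

```python
# Sketch of a MobileNetV2 transfer-learning classifier for the four-class
# MRI problem; hyperparameters here are placeholders, not the paper's values.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # glioma, meningioma, pituitary, no tumor

def build_mobilenetv2_classifier(input_shape=(224, 224, 3)):
    # Pretrained ImageNet backbone, frozen for the initial training phase.
    backbone = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False

    inputs = layers.Input(shape=input_shape)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_mobilenetv2_classifier()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# Precision, recall, F1-score, and the confusion matrix can be computed from
# model.predict(...) outputs, e.g. with sklearn.metrics.classification_report.
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # data pipeline omitted
```

Swapping MobileNetV2 for InceptionV3 or Xception only changes the backbone constructor and the matching preprocess_input call.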
Results: MobileNetV2 achieved the highest accuracy (95.7%), with closely aligned precision, recall, and F1-score. The attention-augmented CNN performed competitively, and Grad-CAM highlighted tumor-relevant regions, supporting model reliability.
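For reference, Grad-CAM heatmaps of the kind used in this evaluation can be computed as in the generic sketch below; this is not the authors' code, and the layer name "Conv_1" (the last convolution in Keras' MobileNetV2) and the dummy input are assumptions for illustration.

```python
# Generic Grad-CAM: weight the last conv feature maps by the gradient of the
# predicted class score, then ReLU and normalize to obtain a saliency heatmap.
import tensorflow as tf

def grad_cam_heatmap(model, image, conv_layer_name, class_index=None):
    """Compute a Grad-CAM heatmap for one preprocessed image of shape (H, W, 3)."""
    # Model mapping the input to both the conv feature maps and the predictions.
    grad_model = tf.keras.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(conv_layer_name).output, model.output])

    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(tf.expand_dims(image, axis=0))
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # top predicted class
        class_score = preds[:, class_index]

    # Channel importance = spatially averaged gradient of the class score.
    grads = tape.gradient(class_score, conv_maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted combination of feature maps, ReLU, normalize to [0, 1].
    cam = tf.einsum("hwc,c->hw", conv_maps[0], weights)
    cam = tf.nn.relu(cam)
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()

# Usage sketch on a stock ImageNet MobileNetV2; the same function applies to a
# fine-tuned classifier whose conv layer is reachable by name on the model.
mobilenet = tf.keras.applications.MobileNetV2(weights="imagenet")
dummy = tf.random.uniform((224, 224, 3))  # stand-in for a preprocessed MRI slice
heatmap = grad_cam_heatmap(mobilenet, dummy, "Conv_1")
```

The resulting heatmap is typically upsampled to the input resolution and overlaid on the MRI slice to check whether the model attends to the tumor region.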
Conclusion: Transfer learning, especially MobileNetV2, offers strong performance for MRI tumor classification, while attention mechanisms and Grad-CAM improve focus and interpretability, facilitating clinical adoption.