A Lightweight and Explainable AI Framework Toward Automated Infraocclusion Detection in Pediatric Panoramic Radiographs


HATİPOĞLU PALAZ Z., Cege E. E., Maiga B., Dalveren Y., Dalveren G. G. M., KARA A., et al.

Diagnostics, vol. 16, no. 6, 2026 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 16 Issue: 6
  • Publication Date: 2026
  • DOI: 10.3390/diagnostics16060866
  • Journal Name: Diagnostics
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus, EMBASE, Directory of Open Access Journals
  • Keywords: artificial intelligence, classification, deep learning, detection, infraocclusion, panoramic radiographs
  • Gazi University Affiliated: Yes

Abstract

Background/Objectives: Infraocclusion in pediatric patients may result in space loss, malocclusion, and the need for complex orthodontic treatment if not detected early. Conventional diagnosis is subject to human error and can be challenging, particularly in pediatric cases. The aim of this study was to design and evaluate a lightweight, two-stage deep learning framework with integrated explainable AI (XAI) techniques for automated infraocclusion detection in pediatric panoramic radiographs. Methods: Annotated panoramic radiographs of pediatric patients aged 7–11 years were used for training and validation. In the first stage, a MobileNet V2 Lite model was used to detect the region of interest (ROI) comprising the premolars and molars. In the second stage, a custom CNN classifier was proposed to distinguish between infraocclusion and no infraocclusion. Model performance was evaluated in terms of diagnostic accuracy, computational complexity, and statistical significance. XAI techniques were also incorporated to visualize model attention and enhance interpretability. Results: The detection stage achieved high reliability, with precision, recall, F1-score, and AP50 values of 0.99 and an AP75 of 0.89, indicating accurate ROI localization. The classification stage reached an overall accuracy of 98.78%, with class-specific accuracies of 99.25% for infraocclusion and 98.31% for no infraocclusion cases. The framework also demonstrated computational efficiency, requiring only 1.88 M trainable parameters (7.19 MB), with short training times and low inference latency (0.8 ms for classification and 19 ms for detection). XAI visualizations consistently highlighted clinically relevant regions, such as occlusal margins and interproximal areas, confirming the model's alignment with radiographic features recognized by clinicians.
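As a quick consistency check on the figures above: a float32 parameter occupies 4 bytes, so roughly 1.88 million trainable parameters come to about 7.2 MB, consistent with the reported 7.19 MB (the small gap presumably reflects the exact, unrounded parameter count). A minimal, illustrative sketch in plain Python:

```python
# Consistency check: float32 memory footprint of ~1.88 M trainable parameters.
BYTES_PER_FLOAT32 = 4
params = 1.88e6  # trainable parameters reported for the framework

size_mb = params * BYTES_PER_FLOAT32 / 2**20  # bytes -> mebibytes
print(f"{size_mb:.2f} MB")  # ~7.17 MB, in line with the reported 7.19 MB

# F1 follows directly from the reported detection precision and recall (both 0.99):
precision, recall = 0.99, 0.99
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}")  # F1 = 0.99
```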
Conclusions: The proposed two-stage framework provides an accurate, computationally efficient, and interpretable solution for automated infraocclusion detection in pediatric patients. Its modular design and reduced complexity support practical integration into routine clinical workflows, including resource-constrained environments. These findings indicate that lightweight, explainable AI systems may enhance early infraocclusion detection while maintaining clinical transparency.
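The modular two-stage design described in the abstract can be sketched as a simple pipeline: stage 1 localizes the premolar–molar ROI in the radiograph, and stage 2 classifies the cropped ROI. The sketch below is a structural illustration only, assuming a grayscale image as a NumPy array; the fixed bounding box in `detect_roi` and the intensity threshold in `classify_roi` are hypothetical stand-ins for the paper's MobileNet V2 Lite detector and custom CNN classifier, not the authors' models:

```python
import numpy as np

def detect_roi(radiograph: np.ndarray) -> tuple[int, int, int, int]:
    """Stage 1 stub: returns an ROI box as (top, left, height, width).

    In the published framework this role is played by a MobileNet V2 Lite
    detector; here a fixed central box stands in for illustration.
    """
    h, w = radiograph.shape
    return (h // 4, w // 4, h // 2, w // 2)

def classify_roi(roi: np.ndarray) -> str:
    """Stage 2 stub: binary infraocclusion decision on the cropped ROI.

    The paper uses a custom CNN classifier; a mean-intensity threshold
    is a placeholder that only demonstrates the interface.
    """
    return "infraocclusion" if roi.mean() > 0.5 else "no infraocclusion"

def two_stage_pipeline(radiograph: np.ndarray) -> str:
    """Detect the ROI, crop it, then classify — the modular two-stage flow."""
    top, left, h, w = detect_roi(radiograph)
    roi = radiograph[top:top + h, left:left + w]
    return classify_roi(roi)

if __name__ == "__main__":
    dummy = np.ones((256, 512))  # stand-in for a panoramic radiograph
    print(two_stage_pipeline(dummy))
```

Because the stages communicate only through the ROI crop, either stub can be swapped for a trained model without touching the other — the modularity the conclusions highlight.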