Real-time fire and smoke detection for mobile devices using deep learning

Safak E., BARIŞÇI N.

JOURNAL OF THE FACULTY OF ENGINEERING AND ARCHITECTURE OF GAZI UNIVERSITY, vol.38, no.4, pp.2179-2190, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 38 Issue: 4
  • Publication Date: 2023
  • DOI: 10.17341/gazimmfd.1041091
  • Journal Name: JOURNAL OF THE FACULTY OF ENGINEERING AND ARCHITECTURE OF GAZI UNIVERSITY
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Art Source, Compendex, TR DİZİN (ULAKBİM)
  • Page Numbers: pp.2179-2190
  • Keywords: Fire and smoke detection, fire and smoke detection for mobile devices, convolutional neural networks, transfer learning, internet of things, IoT, deep learning
  • Gazi University Affiliated: Yes

Abstract

Fire is a natural disaster that causes ecological, social and economic damage. With global warming and the widespread use of explosive/flammable chemicals, fires have become one of the most important problems facing humanity. Early fire detection is critical to minimizing destruction, and for this reason studies on detecting fires from images have been carried out. Recent studies on fire detection from images generally use deep learning algorithms and focus on analyzing camera images with models running on powerful servers. With developments in mobile devices and the internet of things, images can now be analyzed on edge devices. In this study, a fire and smoke detection model requiring low processing power was developed so that images can be analyzed on a mobile device without being transferred to a server. The MobileNet convolutional neural network was revised: its last 3 layers were removed and replaced with a flattening layer and a dense layer consisting of two nodes. For the fire and smoke detection model, the method with the highest accuracy was selected from among models developed using the revised MobileNet, the original MobileNet, MobileNetV2, EfficientNetB0, ShuffleNet, NASNetMobile and PeleeNet convolutional neural networks. To train and test the models, 80% of the dataset consisting of 43,355 images was used for training and 20% for testing. According to the test results, the highest accuracy rate, 98.37%, was achieved with the revised MobileNet.
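
The sketch below illustrates the kind of modification described in the abstract: taking a pre-trained MobileNet backbone and replacing its classification head with a flattening layer and a two-node dense layer. It is written with TensorFlow/Keras as an assumption; the input size, optimizer, loss and the exact layers removed are illustrative and not taken from the authors' code.

    # Illustrative sketch (assumed TensorFlow/Keras), not the authors' implementation.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_revised_mobilenet(input_shape=(224, 224, 3)):
        # Pre-trained MobileNet backbone without its original classification head
        # (include_top=False drops the final pooling/dropout/prediction layers).
        base = tf.keras.applications.MobileNet(
            input_shape=input_shape,
            include_top=False,
            weights="imagenet",
        )
        # Replace the removed head with a flattening layer followed by a
        # two-node dense (softmax) layer, matching the two-class output
        # described in the abstract.
        x = layers.Flatten()(base.output)
        outputs = layers.Dense(2, activation="softmax")(x)
        return models.Model(inputs=base.input, outputs=outputs)

    model = build_revised_mobilenet()
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # Training would then use an 80%/20% train/test split of the image
    # dataset (43,355 images in total), as reported in the study.

Such a head keeps the computational cost of the backbone low enough for mobile and edge deployment while adapting the network, via transfer learning, to the two-class fire/smoke detection task.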