IoT based mobile driver drowsiness detection using deep learning


ŞAFAK E., DOĞRU İ. A., BARIŞÇI N., TOKLU S.

JOURNAL OF THE FACULTY OF ENGINEERING AND ARCHITECTURE OF GAZI UNIVERSITY, vol. 37, no. 4, pp. 1869-1881, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 37 Issue: 4
  • Publication Date: 2022
  • DOI: 10.17341/gazimmfd.999527
  • Journal Name: JOURNAL OF THE FACULTY OF ENGINEERING AND ARCHITECTURE OF GAZI UNIVERSITY
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Art Source, Compendex, TR DİZİN (ULAKBİM)
  • Page Numbers: pp. 1869-1881
  • Keywords: Driver drowsiness detection, driver drowsiness detection on mobile devices, convolutional neural networks, IoT, deep learning, DETECTION SYSTEM, IMAGES
  • Gazi University Affiliated: Yes

Abstract

Driver drowsiness detection is an important issue in preventing traffic accidents, as 40% of severe traffic accidents are due to drowsiness. Various methods are used for driver drowsiness detection. The first is detection based on the analysis of physiological signals such as EEG and ECG. Another is detection based on vehicle-driver interaction. The last method, which is the one used in this study, is drowsiness detection from images. It is more advantageous than the other two in terms of cost and usability because no driver intervention is required. Both classical image processing techniques and deep learning algorithms are used for image-based drowsiness detection, and recent studies are based on deep learning models. In addition, a model must be able to run on mobile devices in order to ensure widespread use. In this study, Convolutional Neural Networks were used for driver drowsiness detection on mobile devices. To increase the success rate of the model, a pre-trained model was reused with the transfer learning technique. The developed model consists of 14 layers and 1,236,217 parameters and was trained on a dataset of 2425 images in two categories, open-eye and closed-eye. It achieved 95.65% accuracy, 95.86% precision, 94.32% recall, and a 95.17% F1 score, better results than previous studies.
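The abstract does not spell out the exact layer configuration, so the sketch below is only an illustration of the general approach it describes: reusing a pre-trained, mobile-oriented backbone via transfer learning and attaching a binary open-eye/closed-eye classification head, with an optional TensorFlow Lite conversion for on-device use. The choice of MobileNetV2, the input size, and all hyperparameters are assumptions, not the authors' architecture.

```python
# Illustrative sketch only: a mobile-friendly transfer learning model for
# binary eye-state (open/closed) classification. Backbone, input size and
# hyperparameters are assumptions, not the 14-layer model from the paper.
import tensorflow as tf

IMG_SIZE = (96, 96)  # assumed input resolution

# Pre-trained backbone reused via transfer learning; its weights are frozen.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet"
)
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # open vs. closed eye
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.Precision(),
             tf.keras.metrics.Recall()],
)

# train_ds / val_ds would be built from the labelled eye-image dataset,
# e.g. with tf.keras.utils.image_dataset_from_directory(...), and training
# would run as: model.fit(train_ds, validation_data=val_ds, epochs=10)

# For mobile/IoT deployment, the trained model can be converted to
# TensorFlow Lite for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
```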