APPLIED SCIENCES, vol. 14, no. 19, pp. 8769-8920, 2024 (SCI-Expanded)
This paper introduces a novel method for emotion classification within
virtual reality (VR) environments, which integrates biosignal processing
with advanced machine learning techniques. It focuses on the processing
and analysis of electrocardiography (ECG) and galvanic skin response
(GSR) signals, which are established indicators of emotional states. To
develop a predictive model for emotion classification, we extracted key
features, namely heart rate variability (HRV) metrics, morphological
characteristics, and Hjorth parameters. We refined the dataset using a
feature selection process based on statistical techniques to optimize it
for machine learning applications. The model achieved an accuracy of
97.78% in classifying emotional states, suggesting that VR systems
capable of accurately identifying and responding to user emotions in
real time can become more immersive, personalized, and emotionally
resonant.
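Among the extracted features, the Hjorth parameters (activity, mobility, and complexity) admit a compact definition in terms of a signal's variance and the variances of its successive differences. The following is a minimal pure-Python sketch of that computation; the sampling rate and test signal are illustrative assumptions, not values from the paper:

```python
import math
from statistics import pvariance

def hjorth_parameters(signal):
    """Compute the Hjorth parameters (activity, mobility, complexity)
    of a 1-D signal, one of the feature families described above."""
    # First and second discrete differences approximate derivatives.
    d1 = [b - a for a, b in zip(signal, signal[1:])]
    d2 = [b - a for a, b in zip(d1, d1[1:])]

    var0 = pvariance(signal)  # activity: variance of the signal itself
    var1 = pvariance(d1)
    var2 = pvariance(d2)

    activity = var0
    mobility = math.sqrt(var1 / var0)                # proxy for mean frequency
    complexity = math.sqrt(var2 / var1) / mobility   # proxy for bandwidth
    return activity, mobility, complexity

# Illustrative input: a 5 Hz sinusoid at a hypothetical 250 Hz sampling rate.
# A pure sinusoid is the narrowest-band signal, so its complexity is near 1.
fs = 250.0
sine = [math.sin(2 * math.pi * 5 * n / fs) for n in range(2000)]
act, mob, comp = hjorth_parameters(sine)
```

In practice such a function would be applied per analysis window of the ECG and GSR recordings, yielding three scalar features per window alongside the HRV and morphological descriptors.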
Ultimately, the potential applications of this method are extensive,
spanning various fields. In education, emotion recognition could enable
adaptive learning environments that respond to students' current
emotional states, fostering greater engagement and better learning
outcomes. In psychotherapy, virtual systems could use emotion
recognition to deliver more personalized and effective therapy by
dynamically adjusting therapeutic content. Similarly, in the
entertainment domain, this approach could let users shape experiences
according to their emotional preferences. These
applications highlight the revolutionary potential of emotion
recognition technology in improving the human-centric nature of digital
experiences.