Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions

Bibliographic Details
Main Authors: Songul Erdem Guler, Fatma Patlar Akbulut
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10870204/
Description
Summary: Despite advances in emotion recognition, the field still faces two main limitations: the growing computational complexity of deep models and the difficulty of identifying emotions across different data types. This study aims to advance knowledge of multimodal emotion recognition by combining electroencephalography (EEG) signals with facial expressions, using models such as the Transformer, Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). The results validate the effectiveness of this approach: the GRU model achieved an average classification accuracy of 91.8% on unimodal (EEG-only) data and 97.8% on multimodal (EEG and facial expressions) data across multi-class emotion categories. The findings emphasize that, within a multi-class classification framework, multimodal approaches offer significant improvements over traditional unimodal techniques. The proposed framework captures both complex neural dynamics and visible emotional cues, enhancing the robustness and accuracy of emotion recognition systems. These results have practical implications, showing how integrating diverse data sources with advanced models can overcome the limitations of single-modality systems.
ISSN: 2169-3536
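
Note: The abstract describes a multimodal pipeline in which EEG signals and facial-expression features are fused for multi-class emotion classification, with the GRU performing best. The PyTorch sketch below illustrates one plausible late-fusion GRU design consistent with that description; it is not the authors' implementation, and every dimension (32 EEG channels, 128-dimensional facial features, 4 emotion classes, hidden size 64) is an illustrative assumption.

# Hypothetical sketch of a multimodal GRU classifier of the kind the
# abstract describes: one GRU encoder per modality, late fusion of the
# final hidden states, then a small multi-class classification head.
# All sizes below are assumptions, not values taken from the paper.
import torch
import torch.nn as nn

class MultimodalGRUClassifier(nn.Module):
    def __init__(self, eeg_channels=32, face_dim=128, hidden=64, num_classes=4):
        super().__init__()
        # One GRU per modality; each consumes a (batch, time, features) sequence.
        self.eeg_gru = nn.GRU(eeg_channels, hidden, batch_first=True)
        self.face_gru = nn.GRU(face_dim, hidden, batch_first=True)
        # Late fusion: concatenate the two final hidden states, then classify.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, eeg, face):
        _, h_eeg = self.eeg_gru(eeg)     # h_eeg: (num_layers, batch, hidden)
        _, h_face = self.face_gru(face)
        fused = torch.cat([h_eeg[-1], h_face[-1]], dim=-1)
        return self.head(fused)          # logits over emotion classes

# Smoke test with random tensors standing in for real EEG / facial data.
model = MultimodalGRUClassifier()
eeg = torch.randn(8, 256, 32)    # batch of 8, 256 time steps, 32 EEG channels
face = torch.randn(8, 30, 128)   # batch of 8, 30 video frames, 128-d features
print(model(eeg, face).shape)    # torch.Size([8, 4])

Because the fusion is late, each modality's encoder stays independent, so the EEG branch could be trained and evaluated alone to reproduce a unimodal baseline comparable to the EEG-only result reported above.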