Deep Learning-Based Speech Emotion Recognition Using Multi-Level Fusion of Concurrent Features
The detection and classification of emotional states in speech involves the analysis of audio signals and text transcriptions. There are complex relationships between the extracted features at different time intervals that must be analyzed to infer the emotions in speech. These relationships...
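The record's abstract describes fusing concurrent audio and text features to infer emotion. The paper's actual architecture is not shown here, but a minimal illustrative sketch of one common fusion pattern (pool each modality over time, then concatenate the summaries before classification) can clarify the idea. All names, dimensions, and the random weights below are assumptions for illustration, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concurrent feature streams (shapes are illustrative assumptions):
# - audio: 40-dim acoustic features over 100 frames
# - text: 50-dim word embeddings over 12 tokens of the transcription
audio_frames = rng.standard_normal((100, 40))
text_tokens = rng.standard_normal((12, 50))

# Level 1: temporal pooling within each modality (mean over the time axis)
audio_utt = audio_frames.mean(axis=0)   # shape (40,)
text_utt = text_tokens.mean(axis=0)     # shape (50,)

# Level 2: fuse the utterance-level summaries by concatenation
fused = np.concatenate([audio_utt, text_utt])  # shape (90,)

# Toy linear classifier over 4 emotion classes (weights are random stand-ins)
W = rng.standard_normal((4, fused.shape[0]))
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(fused.shape, probs.shape)  # (90,) (4,)
```

Real systems typically replace the mean pooling with recurrent or attention layers so that relationships across time intervals, which the abstract emphasizes, are modeled rather than averaged away.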
Saved in:
Main Authors: Kakuba, Samuel; Poulose, Alwin; Han, Dong Seog (Senior Member, IEEE)
Format: Article
Language: English
Published: IEEE, 2023
Subjects:
Online Access: http://hdl.handle.net/20.500.12493/921
Similar Items
- Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution
  by: Kakuba, Samuel, et al.
  Published: (2023)
- Fusion of MHSA and Boruta for key feature selection in power system transient angle stability
  by: WANG Man, et al.
  Published: (2025-01-01)
- Manet: motion-aware network for video action recognition
  by: Xiaoyang Li, et al.
  Published: (2025-02-01)
- Research on YOLOv5 Oracle Recognition Algorithm Based on Multi-Module Fusion
  by: Xinhang Zhang, et al.
  Published: (2025-01-01)
- Two-dimensional semantic morphological feature extraction and atlas construction of maize ear leaves
  by: Hongli Song, et al.
  Published: (2025-02-01)