Harnessing the Multi-Phasal Nature of Speech-EEG for Enhancing Imagined Speech Recognition

Bibliographic Details
Main Authors: Rini Sharon, Mriganka Sur, Hema Murthy
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Open Journal of Signal Processing
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10839023/
Description
Summary: Analyzing speech-electroencephalogram (EEG) signals is pivotal for developing non-invasive and naturalistic brain-computer interfaces. Recognizing that human communication involves multiple phases, such as audition, imagination, articulation, and production, this study uncovers the shared cognitive imprints that represent speech cognition across these phases. Regression analysis using correlation metrics reveals pronounced inter-phasal congruence. This insight motivates a shift from single-phase-centric recognition models to harnessing integrated phase data, thereby enhancing recognition of cognitive speech. Having established the presence of inter-phase associations, a common representation learning feature extractor is introduced, adept at capturing the correlations and replicability across phases. The extracted features are observed to provide superior discrimination of cognitive speech units. Notably, the proposed approach proves resilient even in the absence of comprehensive multi-phasal data. These observations are substantiated through thorough control checks and illustrative topographical visualizations. The findings indicate that the proposed multi-phase approach significantly enhances EEG-based speech recognition, achieving an accuracy gain of 18.2% for 25 cognitive units in continuous speech EEG over models reliant solely on single-phase data.
ISSN:2644-1322
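The inter-phasal congruence reported in the summary rests on correlating EEG features recorded during different speech phases (e.g. imagined vs. articulated speech). The sketch below is an illustrative assumption, not the authors' pipeline: it uses synthetic data and a hypothetical `interphase_correlation` helper to show how a mean per-channel Pearson correlation between two time-aligned phase recordings could be computed.

```python
import numpy as np

def interphase_correlation(phase_a, phase_b):
    """Mean per-channel Pearson correlation between two speech phases.

    phase_a, phase_b: arrays of shape (channels, samples) holding
    time-aligned EEG features for the same utterance. Illustrative
    helper only; the paper's actual analysis may differ.
    """
    corrs = []
    for ca, cb in zip(phase_a, phase_b):
        ca = ca - ca.mean()
        cb = cb - cb.mean()
        denom = np.sqrt((ca ** 2).sum() * (cb ** 2).sum())
        corrs.append(float((ca * cb).sum() / denom))
    return float(np.mean(corrs))

# Synthetic example: two phases sharing a common cognitive imprint
# plus independent noise should correlate strongly across channels.
rng = np.random.default_rng(0)
shared = rng.standard_normal((8, 256))               # shared imprint
imagined = shared + 0.3 * rng.standard_normal((8, 256))
articulated = shared + 0.3 * rng.standard_normal((8, 256))
print(interphase_correlation(imagined, articulated))  # close to 1.0
```

A pronounced positive value here would mirror the "inter-phasal congruence" claim; unrelated recordings would hover near zero.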