Deep Learning Approaches for Motor Imagery Brain-Computer Interfaces
Raghda Hassan Mohamed
Abstract
Motor imagery is a Brain-Computer Interface (BCI) paradigm that has been used to develop applications that assist subjects with motor disabilities. This paradigm relies on analyzing brain electroencephalography (EEG) activity to identify the intended movement direction. Existing motor imagery feature extraction methods rely largely on traditional signal processing and machine learning techniques. Recent advances in deep learning have inspired the development of several methods for motor imagery classification that achieve further performance improvements. This thesis introduces a deep neural network approach for motor imagery classification using Long Short-Term Memory (LSTM) combined with autoencoders in a sequence-to-sequence architecture. The proposed network extracts features from the frequency-domain representation of EEG signals. It is trained to obtain a low-dimensional representation of the EEG features, which is then fed into a three-layer multi-layer perceptron for classification. Systematic and extensive experiments were carried out by applying the approach to public benchmark EEG datasets. The obtained results outperform classical state-of-the-art methods that employ standard frequency-domain features and common spatial patterns, and are comparable to methods such as the filter bank common spatial pattern and its variants. Our results indicate the efficacy of the proposed LSTM autoencoder approach for EEG motor imagery classification.
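To illustrate the architecture described in the abstract, the following is a minimal PyTorch sketch of an LSTM sequence-to-sequence autoencoder whose low-dimensional code is classified by a three-layer MLP. The layer sizes, number of frequency bins, sequence length, and number of motor imagery classes are illustrative assumptions, not the hyperparameters used in the thesis.

```python
# Hypothetical sketch: LSTM seq2seq autoencoder + 3-layer MLP classifier
# for frequency-domain EEG features. Dimensions are assumptions.
import torch
import torch.nn as nn


class LSTMAutoencoderClassifier(nn.Module):
    def __init__(self, n_freq_bins=64, hidden_dim=32, n_classes=4, seq_len=10):
        super().__init__()
        self.seq_len = seq_len
        # Encoder LSTM compresses the frequency-domain EEG sequence.
        self.encoder = nn.LSTM(n_freq_bins, hidden_dim, batch_first=True)
        # Decoder LSTM reconstructs the input sequence from the code.
        self.decoder = nn.LSTM(hidden_dim, n_freq_bins, batch_first=True)
        # Three-layer MLP classifies the low-dimensional representation.
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        # x: (batch, seq_len, n_freq_bins) frequency-domain EEG features.
        _, (h_n, _) = self.encoder(x)
        code = h_n[-1]                                  # (batch, hidden_dim)
        # Repeat the code over time steps to drive the decoder (seq2seq).
        repeated = code.unsqueeze(1).repeat(1, self.seq_len, 1)
        reconstruction, _ = self.decoder(repeated)
        logits = self.classifier(code)
        return reconstruction, logits


# Usage: joint reconstruction + classification loss on a dummy batch.
model = LSTMAutoencoderClassifier()
x = torch.randn(8, 10, 64)                              # fake EEG spectra
y = torch.randint(0, 4, (8,))                           # fake MI labels
recon, logits = model(x)
loss = nn.MSELoss()(recon, x) + nn.CrossEntropyLoss()(logits, y)
loss.backward()
```

In this sketch the autoencoder and classifier are trained jointly for brevity; the thesis may instead pretrain the autoencoder and then train the MLP on the learned representation.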
Other data
| Field | Value |
|---|---|
| Title | Deep Learning Approaches for Motor Imagery Brain-Computer Interfaces |
| Other Titles | أساليب التعلم العميق لواجهات المخ والحاسب الخاصة بتصور الحركة (Deep learning approaches for motor imagery brain-computer interfaces) |
| Authors | Raghda Hassan Mohamed |
| Issue Date | 2021 |
Attached Files
| File | Size | Format |
|---|---|---|
| BB2598.pdf | 1.26 MB | Adobe PDF |