Abstract:
Brain-computer interfaces (BCIs) are becoming increasingly important across many fields, offering transformative potential for medical rehabilitation, assistive technology, and human-computer interaction. Central to their effectiveness is the ability to accurately interpret electroencephalography (EEG) signals, in particular to distinguish the brain's resting state from its movement state. This distinction directly affects the responsiveness and accuracy of the interface. While numerous deep learning models have been developed to classify movement-related EEG signals, e.g. forward, right, left, or wrist movements, there is a notable gap in the literature concerning the classification of resting-state EEG signals. Reliably separating resting from movement signals is essential for prosthetic control, robotics, and computer interfaces driven by motor-cortex activity, since these systems must not mistake a resting signal for an intended movement.
To address this gap, this thesis explores a range of deep learning models, from classical approaches to state-of-the-art architectures, applied to EEG data for binary classification of movement and resting signals. Among these models, Long Short-Term Memory (LSTM) networks and transformers stand out, achieving an average accuracy of over 95% in predicting movement and resting states. These promising results suggest that advanced AI models capable of distinguishing resting signals from movement signals can significantly improve the reliability and effectiveness of neuro-controlled devices.