Developing Explainable Deep Learning Models Using EEG for Brain Machine Interface Systems

dc.contributor.advisorContreras-Vidal, Jose L.
dc.contributor.committeeMemberParikh, Pranav J.
dc.contributor.committeeMemberBuckner, Cameron J.
dc.contributor.committeeMemberMayerich, David
dc.contributor.committeeMemberNguyen, Hien Van
dc.creatorSujatha Ravindran, Akshay
dc.creator.orcid0000-0001-7213-9587
dc.date.created2021
dc.date.issued2021
dc.description.abstractDeep learning (DL) based decoders for Brain-Computer Interfaces (BCI) using electroencephalography (EEG) have gained immense popularity recently. However, the interpretability of DL models remains an under-explored area. This thesis aims to develop and validate computational neuroscience approaches that make DL models more robust and explainable. First, a simulation framework was developed to evaluate the robustness and sensitivity of twelve back-propagation-based visualization methods. When compared against ground-truth features after randomizing model weights and labels, multiple methods showed reliability issues: the gradient approach, the most widely used visualization technique in EEG, was neither class- nor model-specific. Overall, DeepLift was the most reliable and robust method. Second, we demonstrated how model explanations, combined with a clustering approach, can complement the analysis of DL models applied to measured EEG in three tasks. In the first task, DeepLift identified the EEG spatial patterns associated with hand motor imagery in a data-driven manner from a database of 54 individuals. The explanations identified the different strategies used by individuals and exposed the drawbacks of limiting decoding to the sensorimotor channels. The clustering approach improved decoding in high-performing subjects. In the second task, we used GradCAM to explain a Convolutional Neural Network's (CNN) decisions when detecting balance perturbations while wearing an exoskeleton, a system deployable for fall prevention. Perturbation-evoked potentials (PEP) in EEG (∼75 ms) preceded both the peak in electromyography (∼180 ms) and in the center of pressure (∼350 ms). The explanations showed that the model utilized electro-cortical components of the PEP and was not driven by artifacts. They also aligned with dynamic functional connectivity measures and prior studies, supporting the feasibility of using BCI-exoskeleton systems for fall prevention.
In the third task, the susceptibility of DL models to eye-blink artifacts was evaluated. The frequent presence of blinks (in 50% of trials or more), whether or not they biased a particular class, led to a significant difference in decoding performance when using a CNN. In conclusion, this thesis contributes toward improving BCI decoders based on DL models through model explanation approaches. Specific recommendations and best practices for using back-propagation-based visualization methods in BCI decoder design are discussed.
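The gradient approach mentioned in the abstract attributes a model's decision to its inputs by differentiating the class score with respect to each input feature. As a minimal illustration only (not the thesis code or models), the sketch below computes an analytic gradient-based saliency map for a hypothetical logistic-regression EEG decoder; all shapes, weights, and data are stand-ins:

```python
import numpy as np

# Toy gradient-based attribution (saliency) for a hypothetical EEG decoder.
# Model: logistic regression over a flattened (channels x time) trial.
rng = np.random.default_rng(0)
n_channels, n_samples = 8, 64
w = rng.normal(size=n_channels * n_samples)   # stand-in trained weights
b = 0.1                                       # stand-in bias
x = rng.normal(size=n_channels * n_samples)   # one simulated EEG trial, flattened

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: predicted probability of class 1
p = sigmoid(w @ x + b)

# Gradient attribution: dp/dx = p * (1 - p) * w
saliency = p * (1.0 - p) * w

# Reshape back to (channels, time) to inspect spatial/temporal patterns
saliency_map = np.abs(saliency).reshape(n_channels, n_samples)
per_channel = saliency_map.mean(axis=1)       # channel-level importance
```

Note that for this linear model the gradient is proportional to the weights regardless of the input trial, which hints at why pure gradient maps can fail to be input- or class-specific; reference-based methods such as DeepLift, evaluated in the thesis, address this by attributing relative to a baseline input.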
dc.description.departmentElectrical and Computer Engineering, Department of
dc.format.digitalOriginborn digital
dc.rightsThe author of this work is the copyright owner. UH Libraries and the Texas Digital Library have their permission to store and provide access to this work. Further transmission, reproduction, or presentation of this work is prohibited except with permission of the author(s).
dc.subjectDeep Learning
dc.subjectBrain-Machine Interface
dc.titleDeveloping Explainable Deep Learning Models Using EEG for Brain Machine Interface Systems
dcterms.accessRightsThe full text of this item is not available at this time because the student has placed this item under an embargo for a period of time. The Libraries are not authorized to provide a copy of this work during the embargo period.
local.embargo.terms2023-12-01
thesis.degree.collegeCollege of Engineering
thesis.degree.departmentElectrical and Computer Engineering, Department of
thesis.degree.grantorUniversity of Houston
thesis.degree.nameDoctor of Philosophy

