-
problem statement is to find representations that are invariant to inter- and intra-subject differences
- reducing the noise associated with EEG data is key → find resources
-
task at hand is a mental load classification task
-
Procedure is as follows
- EEG activity data transformed into topology-preserving multi-spectral images: each color channel captures power in a specific frequency band
- train a recurrent-convolutional network based on video classification techniques to learn from the image sequences
- preserves the spatial, spectral, and temporal structure of EEG → leads to features that are less sensitive to variations and distortions within each dimension
- RNNs found to give improved classification results
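The image-construction step above can be sketched with scattered-data interpolation: per-electrode band power is interpolated onto a regular grid, one color channel per band. Electrode positions, the three bands, and the 32x32 grid size are hypothetical stand-ins here, not values from the notes.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Hypothetical: 32 electrode positions already projected onto a 2-D plane
# (a topology-preserving projection of the 3-D cap layout).
n_electrodes = 32
pos = rng.uniform(-1, 1, size=(n_electrodes, 2))

# One power value per electrode in each of three frequency bands
# (e.g. theta, alpha, beta -- assumed band names).
power = rng.uniform(size=(n_electrodes, 3))

# Interpolate each band onto a 32x32 grid -> one color channel per band.
grid_x, grid_y = np.mgrid[-1:1:32j, -1:1:32j]
image = np.stack(
    [
        griddata(pos, power[:, b], (grid_x, grid_y),
                 method="cubic", fill_value=0.0)
        for b in range(3)
    ],
    axis=-1,
)

print(image.shape)  # a topology-preserving multi-spectral image
```

Stacking the interpolated bands as channels is what keeps spatial and spectral neighborhoods intact for the ConvNet that follows.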
-
Dataset availability is a key issue (found on Kaggle lmao)
-
deep belief networks and ConvNets have been used to learn representations from MRI, and the same approach can be used for other neuroimaging data
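A minimal deep-belief-style pipeline can be sketched by stacking scikit-learn RBM layers in front of a supervised classifier. The data here is random binary stand-in data, not real MRI or EEG features, and the layer sizes are arbitrary.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 200 samples of 64 binary "features".
X = (rng.uniform(size=(200, 64)) > 0.5).astype(float)
y = rng.integers(0, 2, size=200)

# Two stacked RBM layers, each fit greedily on the previous layer's
# output, feeding a supervised classifier on top.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, n_iter=5, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, n_iter=5, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print(model.predict(X[:5]).shape)
```

Each RBM learns a representation of the layer below it, which is the mechanism behind the "increasingly complex representations" claim later in these notes.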
-
Convolutional and recurrent neural networks used to extract representations from EEG time series → studies demonstrated potential benefits of downscaling the data and applying this approach → goal was to preserve the structure of EEG data across space, time, and frequency
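The structure-preservation point can be seen directly in array shapes: keeping the data as a tensor retains spatial, spectral, and temporal neighborhoods, while flattening discards all of them. Dimensions here are hypothetical.

```python
import numpy as np

# Hypothetical dimensions: 7 time frames, 3 frequency bands, 32x32 spatial grid.
eeg_tensor = np.zeros((7, 3, 32, 32))   # (time, band, height, width)

# The traditional approach flattens everything into one feature vector,
# destroying the neighborhood structure in every dimension:
flat = eeg_tensor.reshape(-1)

print(eeg_tensor.shape, flat.shape)
```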
-
EEG workings
- uses changes in electrical voltage across the scalp induced by cortical activity

- adding Restricted Boltzmann Machine layers to a deep belief network and using unsupervised layer-wise pretraining results in networks that can learn increasingly complex representations of the data and achieve a considerable accuracy increase compared to other classifiers
- the problem was framed similarly to multi-channel speech obtained from several different microphones; in this case, electrodes record signals from cortical regions rather than microphones from speakers
- goal is to learn the discriminative manifold between different states and corresponding EEG readings
- procedure involved turning EEG features into a multi-dimensional tensor, which retains the structure of the data throughout the learning process, unlike the traditional flattened-vector approach
- train deep recurrent-convolutional neural network architectures inspired by video classification to learn from the time series data
- use ConvNets to extract spatially and spectrally invariant representations from each frame
- adopt an LSTM network to extract temporal patterns from the frame sequence
- this approach reduced classification error from 15.3% to 8.9%
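The per-frame ConvNet + LSTM pipeline above can be sketched in plain NumPy. A single shared 3x3 filter with global average pooling stands in for the ConvNet, and an Elman-style recurrence stands in for the LSTM; all dimensions and weights are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: 7 frames of 3-channel 8x8 topographic images.
T, C, H, W = 7, 3, 8, 8
frames = rng.standard_normal((T, C, H, W))

# Shared "ConvNet": one 3x3 filter per channel + global average pooling,
# applied identically to every frame.
kernel = rng.standard_normal((C, 3, 3))

def conv_features(img):
    feats = []
    for c in range(C):
        out = np.zeros((H - 2, W - 2))
        for i in range(H - 2):
            for j in range(W - 2):
                # valid 2-D correlation at position (i, j)
                out[i, j] = np.sum(img[c, i:i + 3, j:j + 3] * kernel[c])
        feats.append(out.mean())    # global average pool per channel
    return np.array(feats)          # shape (C,)

# Simple recurrence over the per-frame features, standing in for the LSTM.
Wx = rng.standard_normal((4, C))
Wh = rng.standard_normal((4, 4))
h = np.zeros(4)
for t in range(T):
    h = np.tanh(Wx @ conv_features(frames[t]) + Wh @ h)

print(h.shape)  # final hidden state summarizing the frame sequence
```

The final hidden state plays the role of the sequence-level representation that a classifier head would consume; a real implementation would use a trained ConvNet and an actual LSTM cell.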