hvEEGNet: a novel deep learning model for high-fidelity EEG reconstruction

Front Neuroinform. 2024 Dec 20;18:1459970. doi: 10.3389/fninf.2024.1459970. eCollection 2024.

Abstract

Introduction: Modeling multi-channel electroencephalographic (EEG) time-series is a challenging task, even for the most recent deep learning approaches. In particular, this work targets the high-fidelity reconstruction of this type of data, as it is of key relevance for several applications such as classification, anomaly detection, automatic labeling, and brain-computer interfaces.

Methods: We analyzed the most recent works, finding that high-fidelity reconstruction is seriously challenged by the complex dynamics of EEG signals and by the large inter-subject variability. So far, previous works have achieved either high-fidelity reconstruction of single-channel signals or poor-quality reconstruction of multi-channel datasets. Therefore, in this paper we present a novel deep learning model, called hvEEGNet, designed as a hierarchical variational autoencoder and trained with a new loss function. We tested it on the benchmark Dataset 2a of BCI Competition IV (including 22-channel EEG data from 9 subjects).
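The abstract does not include implementation details. As a rough orientation, the sketch below shows, in PyTorch, the general structure of a two-level hierarchical VAE operating on (batch, channels, time) EEG windows. All names, layer sizes, and the standard ELBO loss are illustrative assumptions, not the authors' architecture or their new loss function.

```python
import torch
import torch.nn as nn

class HierarchicalVAE(nn.Module):
    """Minimal two-level hierarchical VAE sketch for (batch, 22, T) EEG windows."""
    def __init__(self, n_channels=22, n_samples=512, h1=128, h2=32):
        super().__init__()
        d = n_channels * n_samples
        # Level-1 encoder: input -> first (lower) latent
        self.enc1 = nn.Sequential(nn.Flatten(), nn.Linear(d, 256), nn.ReLU())
        self.mu1, self.logvar1 = nn.Linear(256, h1), nn.Linear(256, h1)
        # Level-2 encoder: conditions on the lower latent z1
        self.enc2 = nn.Sequential(nn.Linear(h1, 64), nn.ReLU())
        self.mu2, self.logvar2 = nn.Linear(64, h2), nn.Linear(64, h2)
        # Decoder: top latent -> lower latent -> reconstruction
        self.dec2 = nn.Sequential(nn.Linear(h2, h1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Linear(h1, 256), nn.ReLU(),
                                  nn.Linear(256, d))
        self.shape = (n_channels, n_samples)

    @staticmethod
    def reparameterize(mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        h = self.enc1(x)
        mu1, lv1 = self.mu1(h), self.logvar1(h)
        z1 = self.reparameterize(mu1, lv1)
        h2 = self.enc2(z1)
        mu2, lv2 = self.mu2(h2), self.logvar2(h2)
        z2 = self.reparameterize(mu2, lv2)
        # Decode from the top of the hierarchy back down
        z1_hat = self.dec2(z2)
        x_hat = self.dec1(z1_hat).view(-1, *self.shape)
        return x_hat, (mu1, lv1), (mu2, lv2)

def kl_term(mu, logvar):
    # KL divergence of N(mu, sigma^2) from N(0, I), summed over latent dims
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)

def elbo_loss(x, x_hat, stats1, stats2):
    # Mean-squared reconstruction error plus a KL term at every level.
    # NOTE: this is a standard ELBO stand-in; the paper's new loss
    # function is not reproduced here.
    rec = torch.mean((x - x_hat) ** 2, dim=(1, 2))
    return (rec + kl_term(*stats1) + kl_term(*stats2)).mean()
```

A training loop would then minimize `elbo_loss` over mini-batches of EEG windows with a standard optimizer such as Adam.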

Results: We show that hvEEGNet reconstructs all EEG channels with high fidelity, quickly (within a few tens of training epochs), and with high consistency across different subjects. We also investigated the relationship between reconstruction fidelity and training duration and, using hvEEGNet as an anomaly detector, identified corrupted data in the benchmark dataset that had never been highlighted before.
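As an illustration of the anomaly-detection use reported above, the following sketch flags trials whose per-trial reconstruction error is an outlier. The `model` interface matches the sketch above, and the k-sigma threshold is an illustrative assumption, not the criterion used in the paper.

```python
import torch

@torch.no_grad()
def flag_anomalies(model, trials, k=3.0):
    """Flag EEG trials with outlying reconstruction error.

    `model` is any trained autoencoder returning (x_hat, ...) as in the
    sketch above; `trials` is a (N, channels, T) tensor of EEG windows.
    """
    model.eval()
    x_hat = model(trials)[0]
    err = torch.mean((trials - x_hat) ** 2, dim=(1, 2))  # one score per trial
    thr = err.mean() + k * err.std()                     # k-sigma rule
    return torch.nonzero(err > thr).flatten(), err

# Usage sketch:
# idx, err = flag_anomalies(model, trials)  # idx holds suspect trial indices
```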

Discussion: Thus, hvEEGNet could be very useful in several applications where the automatic labeling of large EEG datasets is needed but time-consuming. At the same time, this work opens new fundamental research questions about (1) the effectiveness of training deep learning models on EEG data and (2) the need for a systematic characterization of the input EEG data to ensure robust modeling.

Keywords: EEG; VAE; latent representation; motor imagery; variational autoencoder.

Grants and funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was partially supported by the Italian Ministry of University and Research under the grant “Dipartimenti di Eccellenza 2023–2027” of the Department of Informatics, Systems and Communication of the University of Milano-Bicocca, Italy. G. Cisotto also acknowledges the financial support of PON “Green and Innovation” 2014–2020 action IV.6, funded by the Italian Ministry of University and Research, to the University of Milano-Bicocca (Milan, Italy). A. Zancanaro acknowledges the financial support of PON “Green and Innovation” 2014–2020 action IV.5, funded by the Italian Ministry of University and Research, to the University of Padova (Padova, Italy).