FDCN-C: A deep learning model based on frequency enhancement, deformable convolution network, and crop module for electroencephalography motor imagery classification

PLoS One. 2024 Nov 21;19(11):e0309706. doi: 10.1371/journal.pone.0309706. eCollection 2024.

Abstract

Motor imagery (MI)-electroencephalography (EEG) decoding plays an important role in brain-computer interfaces (BCIs), which enable motor-disabled patients to communicate with the external world by manipulating smart devices. Currently, deep learning (DL)-based methods are popular for EEG decoding. However, these methods do not exploit EEG features in the frequency and temporal domains efficiently, which results in poor MI classification performance. To address this issue, an EEG-based MI classification model built on a frequency enhancement module, a deformable convolutional network, and a crop module (FDCN-C) is proposed. First, the frequency enhancement module is designed to extract frequency information: it applies convolution kernels at successive time scales to extract features from different frequency bands, and these features are screened by an attention mechanism and integrated into the original EEG data. Second, for temporal feature extraction, a deformable convolution network is employed, in which learned offset parameters modulate the effective convolution kernel size. In the spatial domain, a one-dimensional convolution layer integrates information across all channels. Finally, dilated convolutions form a crop classification module, in which diverse receptive fields over the EEG data are computed multiple times. Two public datasets are employed to verify the proposed FDCN-C model; the classification accuracy obtained from the proposed model is greater than that of state-of-the-art methods. The model's accuracy is improved by 14.01% compared to the baseline model, and an ablation study confirms the effectiveness of each module.
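The abstract gives no implementation details, so the following is only a minimal PyTorch-style sketch of how a frequency enhancement module of the kind described might look: convolution kernels at several temporal scales act as rough band filters, an attention vector screens the resulting band features, and the weighted result is fused back into the original EEG. The kernel lengths, channel counts, and squeeze-and-excitation-style attention are illustrative assumptions, not the authors' exact design.

    import torch
    import torch.nn as nn

    class FrequencyEnhancement(nn.Module):
        """Hypothetical frequency enhancement block (not the paper's exact layers).

        Temporal convolutions with different kernel lengths approximate different
        frequency bands; an attention weight per band screens the features before
        they are added back to the raw EEG input.
        """
        def __init__(self, kernel_sizes=(15, 25, 35, 45)):
            super().__init__()
            # One temporal convolution per assumed "time scale" / frequency band
            self.band_convs = nn.ModuleList([
                nn.Conv2d(1, 1, kernel_size=(1, k), padding=(0, k // 2), bias=False)
                for k in kernel_sizes
            ])
            # Attention over the band dimension (squeeze-and-excite style assumption)
            self.attn = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),                 # (B, n_bands, 1, 1)
                nn.Flatten(),                            # (B, n_bands)
                nn.Linear(len(kernel_sizes), len(kernel_sizes)),
                nn.Softmax(dim=1),
            )

        def forward(self, x):
            # x: (batch, 1, n_channels, n_samples) raw EEG
            bands = torch.cat([conv(x) for conv in self.band_convs], dim=1)  # (B, n_bands, C, T)
            w = self.attn(bands).view(x.size(0), -1, 1, 1)                   # band attention weights
            enhanced = (bands * w).sum(dim=1, keepdim=True)                  # weighted band fusion
            return x + enhanced                                              # integrate into original EEG

    # Example: a batch of 8 trials, 22 electrodes, 1000 time samples
    x = torch.randn(8, 1, 22, 1000)
    print(FrequencyEnhancement()(x).shape)  # torch.Size([8, 1, 22, 1000])

The residual fusion (`x + enhanced`) reflects the abstract's statement that the screened band features are "integrated into the original EEG data"; the subsequent deformable temporal convolution, spatial one-dimensional convolution, and dilated-convolution crop module are not sketched here.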

MeSH terms

  • Brain-Computer Interfaces*
  • Deep Learning*
  • Electroencephalography* / methods
  • Humans
  • Imagination / physiology
  • Neural Networks, Computer

Grants and funding

This research was funded by the National Natural Science Foundation of China under Grants U1813212 and 52277061, and in part by the Shenzhen Science and Technology Program under Grants JCYJ20220818095804009, JSGG20200701095406010, and 20220809200041001 awarded to GC. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.