Motor imagery (MI)-electroencephalography (EEG) decoding plays an important role in brain-computer interfaces (BCIs), which enable motor-disabled patients to interact with the external world by manipulating smart equipment. Deep learning (DL)-based methods are currently popular for EEG decoding. However, existing methods do not exploit EEG features in the frequency and temporal domains efficiently, which limits MI classification performance. To address this issue, an EEG-based MI classification model is proposed that combines a frequency enhancement module, a deformable convolution network, and a crop module (FDCN-C). First, the frequency enhancement module is designed to extract frequency information: it applies convolution kernels at continuous time scales to capture features across different frequency bands, and these features are screened by an attention mechanism and fused back into the original EEG data. Second, for temporal feature extraction, a deformable convolution network strengthens feature extraction by using learnable offset parameters to adjust the convolution kernel's sampling positions. In the spatial domain, a one-dimensional convolution layer integrates information across all channels. Finally, a dilated convolution forms a crop classification module, in which predictions are computed multiple times over diverse receptive fields of the EEG data. On two public datasets, the proposed FDCN-C model achieves higher classification accuracy than state-of-the-art methods, improving on the baseline model by 14.01%; an ablation study confirms the effectiveness of each module.
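As a rough illustration of the pipeline the abstract describes, the following PyTorch sketch assembles the four components in order: multi-scale frequency enhancement with attention screening, a deformable temporal convolution, a spatial convolution over all electrodes, and a dilated "crop" classifier whose per-position predictions are averaged. All module names, kernel lengths, filter counts, and pooling/dilation settings are illustrative assumptions rather than the authors' configuration, and torchvision's DeformConv2d stands in for the paper's deformable convolution.

```python
# Minimal sketch of an FDCN-C-style model; hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class FrequencyEnhancement(nn.Module):
    """Parallel temporal convolutions at several kernel lengths emulate
    different frequency bands; a softmax attention over the branches screens
    them before the weighted sum is fused back into the raw EEG.
    Kernel lengths here are assumed, not taken from the paper."""

    def __init__(self, scales=(15, 25, 35, 45)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(1, 1, (1, k), padding=(0, k // 2)) for k in scales
        )

    def forward(self, x):                       # x: (B, 1, chans, time)
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # (B, S, C, T)
        weights = torch.softmax(feats.mean(dim=(2, 3)), dim=1)   # (B, S)
        fused = (feats * weights[:, :, None, None]).sum(1, keepdim=True)
        return x + fused                        # enhanced EEG, input shape kept


class DeformableTemporalConv(nn.Module):
    """Temporal convolution whose sampling positions are shifted by learned
    offsets, via torchvision's DeformConv2d with a 1-by-k kernel."""

    def __init__(self, in_ch, out_ch, k=11):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * k, (1, k), padding=(0, k // 2))
        self.deform = DeformConv2d(in_ch, out_ch, (1, k), padding=(0, k // 2))

    def forward(self, x):
        return self.deform(x, self.offset(x))


class FDCNC(nn.Module):
    """Frequency enhancement -> deformable temporal conv -> spatial conv over
    all electrodes -> dilated crop classifier averaged over positions."""

    def __init__(self, n_chans=22, n_classes=4, f1=16, f2=32):
        super().__init__()
        self.freq = FrequencyEnhancement()
        self.temporal = DeformableTemporalConv(1, f1, k=11)
        self.spatial = nn.Conv2d(f1, f2, (n_chans, 1))  # integrates channels
        self.bn = nn.BatchNorm2d(f2)
        self.pool = nn.AvgPool2d((1, 8))
        self.crop_head = nn.Conv2d(f2, n_classes, (1, 9), dilation=(1, 4))

    def forward(self, x):                       # x: (B, 1, n_chans, time)
        x = self.freq(x)
        x = torch.relu(self.temporal(x))
        x = self.pool(torch.relu(self.bn(self.spatial(x))))
        crops = self.crop_head(x)               # (B, n_classes, 1, n_crops)
        return crops.mean(dim=(2, 3))           # average crop-wise logits


if __name__ == "__main__":
    model = FDCNC()
    eeg = torch.randn(4, 1, 22, 1000)           # e.g. BCI Competition IV-2a shape
    print(model(eeg).shape)                     # torch.Size([4, 4])
```

Averaging the crop-wise logits mirrors cropped-decoding strategies common in EEG models, where each dilated-convolution output position corresponds to a different receptive field over the trial.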
Copyright: © 2024 Liang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.