Decoding the Locus of Covert Visuospatial Attention from EEG Signals

PLoS One. 2016 Aug 16;11(8):e0160304. doi: 10.1371/journal.pone.0160304. eCollection 2016.

Abstract

Visuospatial attention can be deployed to different locations in space independently of ocular fixation, and studies have shown that event-related potential (ERP) components can effectively index whether such covert visuospatial attention is deployed to the left or right visual field. However, it remains unclear whether a more precise spatial localization of the focus of attention can be obtained from EEG signals during central fixation. In this study, we used a modified Posner cueing task with an endogenous cue to determine the degree to which information in the EEG signal can be used to track visuospatial attention in presentation sequences lasting 200 ms. We used a machine learning classification method to evaluate how well EEG signals discriminate among four different locations of the focus of attention: specifically, a multi-class support vector machine (SVM) evaluated within a leave-one-out cross-validation framework, with performance quantified as decoding accuracy (DA). We found that ERP-based features from occipital and parietal regions supported a statistically significant prediction of the location of the focus of visuospatial attention (DA = 57%, p < .001; chance level = 25%). The mean distance between the predicted and the true focus of attention was 0.62 letter positions, corresponding to a mean error of 0.55 degrees of visual angle. In addition, ERP responses also successfully predicted whether or not spatial attention was allocated to a given location, with an accuracy of 79% (p < .001). These findings are discussed in terms of their implications for visuospatial attention decoding, and directions for future research are proposed.
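
The decoding pipeline summarized above can be illustrated with a minimal sketch (not the authors' code): a multi-class linear SVM evaluated with leave-one-out cross-validation, where decoding accuracy (DA) is the proportion of held-out trials whose attended location is predicted correctly. The feature matrix X below is a random placeholder standing in for trial-wise ERP features from occipital and parietal electrodes; the dimensions and the scikit-learn-based implementation are assumptions for illustration, not details taken from the paper.

# Hypothetical sketch of multi-class SVM decoding with leave-one-out CV.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 120, 64                     # assumed, for illustration only
X = rng.standard_normal((n_trials, n_features))    # placeholder for ERP-based features
y = rng.integers(0, 4, size=n_trials)              # attended location: one of 4 loci

clf = SVC(kernel="linear")                         # multi-class SVM (linear kernel)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())  # one trial held out per fold
da = scores.mean()                                 # decoding accuracy (DA)
print(f"DA = {da:.2%} (chance level = 25%)")

With four equiprobable locations, chance-level DA is 25%; in practice, the significance of an observed DA is typically assessed against an empirical null distribution, for example by repeatedly shuffling the labels y and recomputing DA.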

MeSH terms

  • Artifacts
  • Attention / physiology*
  • Electroencephalography*
  • Evoked Potentials / physiology
  • Female
  • Humans
  • Male
  • Photic Stimulation
  • Signal Processing, Computer-Assisted*
  • Spatial Processing / physiology*
  • Support Vector Machine
  • Visual Perception / physiology
  • Young Adult

Grants and funding

This work was supported by FRQNT (http://www.frqnt.gouv.qc.ca/en/accueil); Natural Sciences and Engineering Research Council of Canada (http://www.nserc-crsng.gc.ca/index_eng.asp); and FRQS (http://www.frqs.gouv.qc.ca/en/accueil). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.