Heteromodal Cortical Areas Encode Sensory-Motor Features of Word Meaning

J Neurosci. 2016 Sep 21;36(38):9763-9. doi: 10.1523/JNEUROSCI.4095-15.2016.

Abstract

The capacity to process information in conceptual form is a fundamental aspect of human cognition, yet little is known about how this type of information is encoded in the brain. Although the role of sensory and motor cortical areas has been a focus of recent debate, neuroimaging studies of concept representation consistently implicate a network of heteromodal areas that seem to support concept retrieval in general rather than knowledge related to any particular sensory-motor content. We used predictive machine learning on fMRI data to investigate the hypothesis that cortical areas in this "general semantic network" (GSN) encode multimodal information derived from basic sensory-motor processes, possibly functioning as convergence-divergence zones for distributed concept representation. An encoding model based on five conceptual attributes directly related to sensory-motor experience (sound, color, shape, manipulability, and visual motion) was used to predict brain activation patterns associated with individual lexical concepts in a semantic decision task. When the analysis was restricted to voxels in the GSN, the model was able to identify the activation patterns corresponding to individual concrete concepts significantly above chance. In contrast, a model based on five perceptual attributes of the word form performed at chance level. This pattern was reversed when the analysis was restricted to areas involved in the perceptual analysis of written word forms. These results indicate that heteromodal areas involved in semantic processing encode information about the relative importance of different sensory-motor attributes of concepts, possibly by storing particular combinations of sensory and motor features.
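The identification analysis described above can be illustrated with a minimal sketch. This is not the authors' code: the data, the ridge penalty, the leave-one-out scheme, and all variable names are placeholder assumptions, chosen only to show the general shape of an attribute-based encoding model in which each word is a five-dimensional vector of sensory-motor ratings, a regression model maps ratings to voxel patterns, and a held-out word is identified by matching its predicted pattern to the observed ones.

```python
# Minimal sketch (hypothetical, not the published pipeline): attribute-based
# encoding model with leave-one-out identification of individual concepts.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_words, n_attributes, n_voxels = 60, 5, 500   # placeholder dimensions

# Placeholder data: per-word sensory-motor ratings (sound, color, shape,
# manipulability, visual motion) and per-word voxel activation patterns.
X = rng.random((n_words, n_attributes))
Y = rng.standard_normal((n_words, n_voxels))

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    # Fit the encoding model on all but one word, then predict the
    # activation pattern of the held-out word from its attribute ratings.
    model = Ridge(alpha=1.0).fit(X[train_idx], Y[train_idx])
    pred = model.predict(X[test_idx]).ravel()
    # Identification: the prediction should correlate most strongly with
    # the observed pattern of the correct word among all candidates.
    sims = [np.corrcoef(pred, Y[i])[0, 1] for i in range(n_words)]
    correct += int(np.argmax(sims) == test_idx[0])

print(f"identification accuracy: {correct / n_words:.2f} "
      f"(chance = {1 / n_words:.2f})")
```

With random placeholder data the accuracy hovers at chance; the point of the sketch is only the logic of predicting voxel patterns from a small set of sensory-motor attributes and testing whether those predictions discriminate individual concepts above chance.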

Significance statement: The present study used a predictive encoding model of word semantics to decode conceptual information from neural activity in heteromodal cortical areas. The model is based on five sensory-motor attributes of word meaning (color, shape, sound, visual motion, and manipulability) and encodes the relative importance of each attribute to the meaning of a word. This is the first demonstration that heteromodal areas involved in semantic processing can discriminate between different concepts based on sensory-motor information alone. This finding indicates that the brain represents concepts as multimodal combinations of sensory and motor representations.

Keywords: concepts; embodiment; lexical semantics; multimodal processing; predictive machine learning; semantic memory.

Publication types

  • Research Support, N.I.H., Extramural

MeSH terms

  • Adult
  • Algorithms
  • Brain Mapping*
  • Cerebral Cortex / diagnostic imaging
  • Cerebral Cortex / physiology*
  • Computer Simulation
  • Concept Formation / physiology*
  • Decision Making
  • Female
  • Humans
  • Image Processing, Computer-Assisted
  • Magnetic Resonance Imaging
  • Male
  • Middle Aged
  • Models, Neurological*
  • Oxygen / blood
  • Photic Stimulation
  • Reaction Time / physiology
  • Semantics*
  • Young Adult

Substances

  • Oxygen