Seeing it all: Convolutional network layers map the function of the human visual system

Neuroimage. 2017 May 15;152:184-194. doi: 10.1016/j.neuroimage.2016.10.001. Epub 2016 Oct 21.

Abstract

Convolutional networks used for computer vision are candidate models for the computations performed in mammalian visual systems. We use them as a detailed model of human brain activity during the viewing of natural images, constructing predictive models that map their different layers to BOLD fMRI activations. Analyzing predictive performance across layers yields a characteristic fingerprint for each visual brain region: early visual areas are better described by lower-level convolutional-net layers and later visual areas by higher-level layers, with a progression along both the ventral and dorsal streams. Our predictive model generalizes beyond brain responses to natural images. We illustrate this on two experiments, retinotopy and face-place oppositions, by synthesizing brain activity for their stimuli and performing classical brain mapping on the synthesized data. The synthesis recovers the activations observed in the corresponding fMRI studies, showing that this deep encoding model captures representations of brain function that are universal across experimental paradigms.
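
As a rough illustration of the layer-wise encoding approach described in the abstract, the sketch below fits one regularized linear model per convolutional-net layer and scores how well each layer predicts held-out BOLD responses. The array names (`layer_features`, `bold`), the ridge penalties, and the train/test split are assumptions for illustration only; they do not reproduce the paper's actual feature extraction or validation pipeline.

```python
# Minimal sketch of a layer-wise encoding model, assuming precomputed inputs:
# `layer_features` maps each convolutional-net layer name to an array of
# shape (n_images, n_features), and `bold` holds the corresponding BOLD
# responses with shape (n_images, n_voxels). Both names are hypothetical.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split


def layer_fingerprint(layer_features, bold, alphas=(0.1, 1.0, 10.0, 100.0)):
    """Fit one ridge encoding model per layer and score it on held-out images.

    Returns a dict mapping layer name -> per-voxel R^2 scores, from which
    each voxel (or region) can be assigned its best-predicting layer.
    """
    scores = {}
    for name, X in layer_features.items():
        X_train, X_test, y_train, y_test = train_test_split(
            X, bold, test_size=0.2, random_state=0)
        model = RidgeCV(alphas=alphas).fit(X_train, y_train)
        y_pred = model.predict(X_test)
        # Per-voxel R^2 on held-out images
        ss_res = np.sum((y_test - y_pred) ** 2, axis=0)
        ss_tot = np.sum((y_test - y_test.mean(axis=0)) ** 2, axis=0)
        scores[name] = 1.0 - ss_res / ss_tot
    return scores


# Example: assign each voxel the layer that predicts it best.
# layer_names = list(layer_features)
# best_layer = np.argmax(np.stack([scores[k] for k in layer_names]), axis=0)
```

Once fitted, the same per-layer models can in principle be applied to features of new stimuli (for example, retinotopic wedges or face and place images) to synthesize brain activity for paradigms outside the training set, which is the generalization step the abstract describes.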

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Brain Mapping / methods*
  • Humans
  • Image Processing, Computer-Assisted
  • Magnetic Resonance Imaging
  • Models, Neurological*
  • Photic Stimulation
  • Signal Processing, Computer-Assisted
  • Visual Cortex / physiology*
  • Visual Pathways / physiology
  • Visual Perception / physiology*