Where Does Value Come From?

Trends Cogn Sci. 2019 Oct;23(10):836-850. doi: 10.1016/j.tics.2019.07.012. Epub 2019 Sep 4.

Abstract

The computational framework of reinforcement learning (RL) has allowed us to both understand biological brains and build successful artificial agents. However, in this opinion article, we highlight open challenges for RL as a model of animal behaviour in natural environments. We ask how the external reward function is designed for biological systems, and how we can account for the context sensitivity of valuation. We summarise both old and new theories proposing that animals track current and desired internal states and seek to minimise the distance to a goal across multiple value dimensions. We suggest that this framework readily accounts for canonical phenomena observed in the fields of psychology, behavioural ecology, and economics, and recent findings from brain-imaging studies of value-guided decision-making.
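The abstract's central idea, that reward reflects progress towards a desired internal state across multiple value dimensions, can be illustrated with a minimal sketch in the spirit of homeostatic or drive-reduction accounts of RL. This is not the article's specific model: the Euclidean distance metric, the function names, and the example numbers below are all illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the article's model): reward defined as
# the reduction in distance between a multidimensional internal state and a
# desired setpoint (goal), i.e. drive reduction.
import numpy as np

def drive(state, setpoint):
    """Drive = Euclidean distance from the setpoint across value dimensions (assumed metric)."""
    return np.linalg.norm(np.asarray(setpoint) - np.asarray(state))

def homeostatic_reward(state, next_state, setpoint):
    """Reward for a transition = how much the transition reduced the drive."""
    return drive(state, setpoint) - drive(next_state, setpoint)

# Hypothetical example with two value dimensions (e.g. energy and hydration).
setpoint   = [1.0, 1.0]   # desired internal state
state      = [0.2, 0.6]   # current internal state
next_state = [0.6, 0.7]   # internal state after consuming an outcome

print(homeostatic_reward(state, next_state, setpoint))  # positive: the agent moved towards its goal
```

Under this kind of formulation, the same external outcome can yield different rewards depending on the agent's current internal state, which is one way to capture the context sensitivity of valuation discussed in the abstract.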

Keywords: goal-directed decision-making; homeostasis; medial prefrontal cortex; reinforcement learning; reward; value.

Publication types

  • Research Support, Non-U.S. Gov't
  • Review

MeSH terms

  • Decision Making*
  • Functional Neuroimaging
  • Goals*
  • Humans
  • Learning*
  • Motivation
  • Prefrontal Cortex / diagnostic imaging
  • Prefrontal Cortex / physiology
  • Reinforcement, Psychology*
  • Reward