
Cognitive Neurodynamics, Volume 4, Issue 2, pp 91–105

Computational models of reinforcement learning: the role of dopamine as a reward signal

Review

Abstract

Reinforcement learning is ubiquitous. Unlike other forms of learning, it involves the processing of fast yet content-poor feedback information to correct assumptions about the nature of a task or of a set of stimuli. This feedback is often delivered as generic rewards or punishments and has little to do with the stimulus features to be learned. How can such low-content feedback lead to such an efficient learning paradigm? Through a review of existing neuro-computational models of reinforcement learning, we suggest that the efficiency of this type of learning resides in the dynamic and synergistic cooperation of brain systems that use different levels of computation. The implementation of reward signals at the synaptic, cellular, network and system levels gives the organism the robustness, adaptability and processing speed required for evolutionary and behavioral success.
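As a minimal illustration of the temporal-difference (TD) framework underlying many of the models reviewed here, the sketch below implements TD(0) value learning on a toy cue-delay-reward trial; the prediction error delta is the quantity commonly compared with phasic dopamine responses. The states, parameters and learning rates are illustrative assumptions, not the authors' specific model.

```python
# Minimal TD(0) value-learning sketch: the prediction error "delta" is the
# content-poor feedback signal often compared to phasic dopamine responses.
# States, rewards, and parameters here are illustrative assumptions only.

states = ["cue", "delay", "reward"]      # a toy trial: cue -> delay -> reward
V = {s: 0.0 for s in states}             # learned value of each state
alpha, gamma = 0.1, 0.95                 # learning rate, temporal discount factor

for trial in range(200):
    for i, s in enumerate(states):
        r = 1.0 if s == "reward" else 0.0                      # generic, low-content feedback
        v_next = V[states[i + 1]] if i + 1 < len(states) else 0.0
        delta = r + gamma * v_next - V[s]                      # TD prediction error
        V[s] += alpha * delta                                  # update the value estimate

print(V)  # learned values: reward ~1.0, delay ~0.95, cue ~0.90
```

Over repeated trials the learned value propagates backward from the reward to the earliest predictive cue, mirroring the well-documented shift of dopamine responses from reward delivery to reward-predicting stimuli.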

Keywords

Reinforcement learning · Dopamine · Reward · Temporal difference

Notes

Acknowledgments

The authors wish to thank Dr. Ian Fasel, Nathan Insel and Minryung Song for useful comments on the manuscript. R.D.S. was supported by the Canadian Institutes of Health Research SIB 171357.


Copyright information

© Springer Science+Business Media B.V. 2010

Authors and Affiliations

  • R. D. Samson (1)
  • M. J. Frank (2)
  • Jean-Marc Fellous (3)

  1. Evelyn F. McKnight Brain Institute and Neural Systems, Memory and Aging, University of Arizona, Tucson, USA
  2. Department of Cognitive and Linguistic Sciences and Department of Psychology, Brown Institute for Brain Science, Brown University, Providence, USA
  3. Department of Psychology and Applied Mathematics, University of Arizona, Tucson, USA
