Introduction

The functions of the basal ganglia are closely linked to movements. This view derives from deficits arising after lesions, from anatomical connections with other brain structures with well-understood motor functions, and from the neuronal activity recorded in behaving animals. However, the effects of lesions in nucleus accumbens and of electrical self-stimulation of dopamine neurons point also to non-motor functions, and more specifically to reward and motivation (Kelly et al. 1975; Corbett and Wise 1980; Fibiger et al. 1987). All basal ganglia nuclei show distinct and sophisticated forms of reward processing, often combined with movement-related activity. These motor and non-motor functions are closely linked; for example, large fractions of neurons in the striatum show both movement- and reward-related activity (e.g., Hollerman et al. 1998; Kawagoe et al. 1998; Samejima et al. 2005). Such combined processing may be the hallmark of a neuronal system involved in goal-directed behavior and habit learning (Yin et al. 2004, 2005), which require the processing of reward information and of the actions needed to obtain the reward (Dickinson and Balleine 1994).

This review describes electrophysiological recordings of the coding of reward prediction errors by dopamine neurons, the possible contribution of inputs from the pedunculopontine nucleus (PPN) to this signal, and the possible influence of this dopamine signal on action value coding in the striatum. However, neuronal signals for various aspects of reward, such as prediction, reception and amount, exist also in all other basal ganglia structures, including the globus pallidus (Gdowski et al. 2001; Tachibana and Hikosaka 2012), subthalamic nucleus (Lardeux et al. 2009) and pars reticulata of substantia nigra (Cohen et al. 2012; Yasuda et al. 2012). The reviewed studies concern neurophysiological recordings from monkeys, rats and mice during performance of controlled behavioral tasks involving learning of new stimuli and choices between known reward-predicting stimuli. Most of the cited studies involve recordings from individual neurons, one at a time, while the animal performs a standard task, often together with specific controls. To this end, monkeys sit, or rodents lie, in specially constructed chairs or chambers in the laboratory where they are fully awake and relaxed and react to stimuli mostly with arm or eye movements to obtain various types and amounts of liquid or food rewards. The stimuli are often presented on computer monitors in front of the animals and pretrained to predict specific rewards. Rewards are precisely quantified drops of fruit juices or water that are quickly delivered at specific time points under computer control, thus eliciting well defined, phasic stimulation of somatosensory receptors at the mouth. The experimental designs are based on constructs from animal learning theory and economic decision theory that conceptualize the functions of rewards in learning and choices. The experiments comprise the learning and updating of behavioral acts, the elicitation of approach behavior and the choice between differently rewarded levers or visual targets. Conceptual and experimental details are found in Schultz (2015).

Midbrain dopamine neurons

Studies of the behavioral deficits in Parkinson’s disease and schizophrenia help researchers to develop hypotheses about the functions of dopamine in the brain. The symptoms are probably linked to disorders in slowly changing or tonic dopamine levels. By contrast, electrophysiological recordings from individual dopamine neurons in substantia nigra pars compacta and ventral tegmental area identify a specific, phasic signal reflecting reward prediction error.

Prediction error

Dopamine neurons in monkeys, rats and mice show phasic responses to food and liquid rewards and to stimuli predicting such rewards in a wide variety of Pavlovian and operant tasks (Ljungberg et al. 1991, 1992; Schultz et al. 1993, 1997; Hollerman and Schultz 1998; Satoh et al. 2003; Morris et al. 2004; Pan et al. 2005; Bayer and Glimcher 2005; Nomoto et al. 2010; Cohen et al. 2012; Kobayashi and Schultz 2014). In all of these diverse tasks, the dopamine signal codes a reward prediction error, namely the difference between received and predicted reward. A reward that is better than predicted at a given moment in time (positive reward prediction error) elicits a phasic activation, a reward that occurs exactly as predicted in value and time (no prediction error) elicits no phasic change in dopamine neurons, and a reward that is worse than predicted at the predicted time (negative prediction error) induces a phasic depression in activity. Reward-predicting stimuli evoke similar prediction error responses, suggesting that dopamine neurons treat rewards and reward-predicting stimuli commonly as events that convey value. These responses occur also in more complex tasks, including delayed response, delayed alternation and delayed matching-to-sample (Ljungberg et al. 1991; Schultz et al. 1993; Takikawa et al. 2004), sequential movements (Satoh et al. 2003; Nakahara et al. 2004; Enomoto et al. 2011), random dot motion discrimination (Nomoto et al. 2010), somatosensory signal detection (de Lafuente and Romo 2011) and visual search (Matsumoto and Takada 2013). The prediction error response involves 70–90 % of dopamine neurons, is very similar in latency across the dopamine neuronal population, and shows only graded rather than categorical differences between medial and lateral neuronal groups (Ljungberg et al. 1992) or between dorsal and ventral groups (Nomoto et al. 2010; Fiorillo et al. 2013a). No other brain structure shows such a global and stereotyped reward signal with similar response latencies and durations across neurons (Schultz 1998). The homogeneous signal leads to locally varied dopamine release that acts on heterogeneous postsynaptic structures and thus results in diverse dopamine functions.
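
In minimal notation (a schematic summary of the description above, not a formula taken from the cited studies), the bidirectional prediction error response can be written as a signed difference between received and predicted reward value:

```latex
% Reward prediction error at the time of reward (schematic notation)
\delta_t \;=\; \lambda_t \;-\; V_t
% \lambda_t : value of the reward actually received at time t
% V_t      : reward value predicted for time t
% \delta_t > 0 : phasic activation  (reward better than predicted)
% \delta_t = 0 : no phasic change   (reward as predicted)
% \delta_t < 0 : phasic depression  (reward worse than predicted)
```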

The phasic dopamine reward signal satisfies stringent tests for bidirectional prediction error coding suggested by formal animal learning theory, such as blocking and conditioned inhibition (Waelti et al. 2001; Tobler et al. 2003; Steinberg et al. 2013). With these characteristics, the dopamine prediction error signal implements teaching signals of Rescorla–Wagner and temporal difference (TD) reinforcement learning models (Rescorla and Wagner 1972; Sutton and Barto 1981; Mirenowicz and Schultz 1994; Montague et al. 1996; Enomoto et al. 2011). Via a three-factor synaptic arrangement, the dopamine reinforcing signal would affect coincident synaptic transmission between cortical inputs and postsynaptic striatal, frontal cortex or amygdala neurons (Freund et al. 1984; Goldman-Rakic et al. 1989; Schultz 1998), both immediately and via Hebbian plasticity. Positive dopamine prediction error activation would enhance behavior-related neuronal activity and thus favor behavior that leads to increased reward, whereas negative dopamine prediction error depression would reduce neuronal activity and thus disfavor behavior resulting in diminished reward.
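
As an illustration of how such a common error term implements the Rescorla–Wagner rule and reproduces blocking, the following minimal Python sketch (with arbitrary illustrative parameters, not values from the cited studies) updates stimulus associations from a shared prediction error; after stimulus A has been trained to asymptote, the added stimulus B acquires almost no association because the compound AB generates no prediction error:

```python
# Minimal Rescorla–Wagner simulation of blocking (illustrative parameters).
alpha, reward = 0.2, 1.0          # learning rate and reward value (lambda)
V = {"A": 0.0, "B": 0.0}          # associative strengths of stimuli A and B

def trial(stimuli):
    """One conditioning trial: compute the common prediction error and
    update every presented stimulus with the same error term."""
    prediction = sum(V[s] for s in stimuli)
    delta = reward - prediction    # reward prediction error
    for s in stimuli:
        V[s] += alpha * delta
    return delta

for _ in range(100):               # phase 1: A -> reward, trained to asymptote
    trial(["A"])
for _ in range(100):               # phase 2: compound AB -> reward
    trial(["A", "B"])

print(V)                           # V["A"] ~ 1.0, V["B"] ~ 0.0 (blocked)
```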

The phasic dopamine signal has all characteristics of effective teaching signals for model-free reinforcement learning. In addition, the signal incorporates predictions from models of the world (acquired by other systems) (Nakahara et al. 2004; Tobler et al. 2005; Bromberg-Martin et al. 2010), thus possibly serving to update predictions using a combination of (model-free) experience and model-based representations. About one-third of dopamine neurons show also a slower, pre-reward activation that varies with reward risk (mixture of variance and skewness) (Fiorillo et al. 2003), which constitutes the first neuronal risk signal ever observed. A risk signal derived purely from variance and distinct from value exists in orbitofrontal cortex (O’Neill and Schultz 2010). The dopamine risk response might be appropriate for mediating the influence of attention on the learning rate of specific learning mechanisms (Pearce and Hall 1980) and thus would support the teaching function of the phasic prediction error signal. The effects of electrical and optogenetic activation further support a teaching function of the phasic dopamine response (Corbett and Wise 1980; Tsai et al. 2009; Adamantidis et al. 2011; Steinberg et al. 2013), thus suggesting a causal influence of phasic dopamine signals on learning.
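
One common rendering of how such a signal could modulate the learning rate is the Pearce–Hall associability rule, in which the attention paid to a stimulus (and hence its learning rate) tracks the recent magnitude of unsigned prediction errors; this is a schematic textbook formulation, not an equation taken from the dopamine recordings:

```latex
% Pearce–Hall style associability update (schematic)
\alpha_{t+1} \;=\; \gamma \,\lvert \delta_t \rvert \;+\; (1-\gamma)\,\alpha_t
% \alpha_t : associability (learning rate) attached to the stimulus
% \delta_t : reward prediction error on trial t
% \gamma   : weighting of the most recent surprise (0 < \gamma < 1)
```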

Unclear movement relationships

Besides serving as reinforcement for learning, stimulation of dopamine neurons elicits immediate behavioral actions, including contralateral rotation, locomotion (Kim et al. 2012), food seeking (Adamantidis et al. 2011) and approach behavior (Hamid et al. 2015); stimulation of striatal neurons expressing specific dopamine receptor subtypes induces differential contralateral or ipsilateral choice preferences (Tai et al. 2012). Although these dopamine effects may be related to the role of dopamine in Parkinson’s disease, the phasic dopamine response does not code movement. The dopamine responses to conditioned stimuli occur close to the time of the movement evoked by such stimuli, but dissection of temporal relationships reveals the close association with stimuli rather than movements (Ljungberg et al. 1992). Slower electrophysiological dopamine changes occur with movements (Schultz et al. 1983; Schultz 1986; Romo and Schultz 1990), but are sluggish and inconsistent and often fail to occur in better controlled behavioral tasks (DeLong et al. 1983; Schultz and Romo 1990; Ljungberg et al. 1992; Waelti et al. 2001; Satoh et al. 2003; Cohen et al. 2012; Lak et al. 2014); they seem to reflect general behavioral reactivity rather than specific motor processes. Similar or even slower changes in dopamine release occur with reward, motivation, stress, punishment, movement and attention (Young et al. 1992; Cheng et al. 2003; Young 2004; Howe et al. 2013). The lack of movement relationships of dopamine neurons is compatible with the fact that dopamine receptor stimulation improves hypokinesia without restoring phasic dopamine activity. Apparently, the Parkinsonian deficits do not reflect phasic movement-related dopamine changes; rather, tonic, ambient dopamine concentrations seem to underlie the movements and other behavioral processes that are deficient in this disorder. Taken together, these dopamine effects demonstrate the different time scales and wide spectrum of dopamine influences (Grace 1991; Grace et al. 2007; Schultz 2007; Robbins and Arnsten 2009) and suggest that the phasic dopamine reward signal does not explain Parkinsonian motor deficits in a simple way.

Two phasic response components

The phasic dopamine signal consists of two distinct components (Mirenowicz and Schultz 1996; Waelti et al. 2001; Tobler et al. 2003; Day et al. 2007; Joshua et al. 2008; Fiorillo et al. 2013b), similar to other, non-dopamine neurons involved in sensory and cognitive processing (Thompson et al. 1996; Kim and Shadlen 1999; Ringach et al. 1997; Shadlen and Newsome 2001; Bredfeldt and Ringach 2002; Roitman and Shadlen 2002; Mogami and Tanaka 2006; Paton et al. 2006; Roelfsema et al. 2007; Ambroggi et al. 2008; Lak et al. 2010; Peck et al. 2013; Stanisor et al. 2013; Pooresmaeili et al. 2014). The first dopamine response component consists of a brief activation that begins with latencies of 60–90 ms and lasts 50–100 ms; it is unselective and arises even with motivationally neutral events, conditioned inhibitors and punishers (Steinfels et al. 1983; Schultz and Romo 1990; Mirenowicz and Schultz 1996; Horvitz et al. 1997; Tobler et al. 2003; Joshua et al. 2008; Kobayashi and Schultz 2014), apparently before the stimuli and their reward values have been properly identified. The component is highly sensitive to sensory intensity (Fiorillo et al. 2013b), reward generalization (responses to unrewarded stimuli that resemble rewarded ones) (Mirenowicz and Schultz 1996; Day et al. 2007), reward context (Kobayashi and Schultz 2014), and novelty (Ljungberg et al. 1992); thus it codes distinct forms of salience related to these physical, motivational and novelty aspects. The response is sensitive to prediction (Nomoto et al. 2010) and represents the initial, unselective, salience part of the dopamine prediction error response. The second dopamine response component begins already during the initial component; it codes reward value as prediction error and thus constitutes the specific phasic dopamine reward response. The two components become completely separated in more demanding tasks, such as random dot motion discrimination, in which the first component stays constant (Fig. 1a, blue), whereas the second dopamine response component begins later, at latencies around 250 ms, and varies with reward value (red) (Nomoto et al. 2010). The transient, initial dopamine response to physically intense stimuli may mislead investigators into assuming a primarily attentional dopamine function if rewards are not tested and the second component is not revealed (Steinfels et al. 1983; Horvitz et al. 1997; Redgrave et al. 1999).

Fig. 1
figure 1

Basic characteristics of phasic dopamine responses. a Two dopamine response components: initial detection response (blue), and subsequent value response (red) in a dot motion discrimination task. The motion coherence increasing from 0 to 50 % leads to better behavioral dot motion discrimination, which translates into increases of reward probability from p = 0.49 to p = 0.99 [dopamine neurons process reward probability as value (Fiorillo et al. 2003)]. The first response component is constant (blue), whereas the second component grows with reward value derived from probability (reward prediction error). From Nomoto et al. (2010). b Accurate value coding at the time of reward despite initial indiscriminate stimulus detection response. Blue and red zones indicate the initial detection response and the subsequent value response, respectively. After an unrewarded stimulus (CS-), surprising reward (R) elicits a positive prediction error response, suggesting that the prediction at reward time reflects the lack of value prediction by the CS-. From Waelti et al. (2001). c Inverse relationship of dopamine activations to aversiveness of bitter solutions. The activation to the aversive solution (black, Denatonium, a strong bitter substance) turns into a depression with increasing aversiveness due to negative value (red), suggesting that the activation reflects physical impact rather than punishment. Imp/s impulses per second, n number of dopamine neurons. Time = 0 indicates onset of liquid delivery. From Fiorillo et al. (2013b)

After the stimulus identification by the second component, the reward representation stays on in dopamine neurons; this is evidenced by the prediction error response at the time of the reward, which reflects the predicted value at that moment (Fig. 1b) (Tobler et al. 2003; Nomoto et al. 2010). The early onset of the value component before the behavioral action explains why animals usually discriminate well between rewarded and unrewarded stimuli despite the initial, indiscriminate dopamine response component (Ljungberg et al. 1992; Joshua et al. 2008; Kobayashi and Schultz 2014). Thus, the second, value component contains the principal dopamine reward value message.

Advantage of initial dopamine activation

The initial activation reflects different components of stimulus-driven salience and may be beneficial for neuronal reward processing. The physical and motivational salience components may affect the speed and accuracy of actions (Chelazzi et al. 2014) via similar neuronal mechanisms as the enhancement of sensory processing by stimulus-driven physical salience (Gottlieb et al. 1998; Thompson et al. 2005). The novelty salience component may promote reward learning via the learning rate, as conceptualized by attentional learning rules (Pearce and Hall 1980). However, the initial dopamine activation is only a transient salience signal, as it is quickly replaced by the subsequent value component that conveys accurate reward value information. In this way, the initial dopamine activation is beneficial for neuronal processing and learning without the cost of unfocusing or misleading the behavior.

The unselective activation may help the animal to gain more rewards. Its high sensitivity to stimulus intensity, reward similarity, reward context and novelty ensures the processing of a maximal number of stimuli and avoids missing a reward. Through its short latency, the initial dopamine response detects these stimuli very rapidly, even before having identified their value. As stimulation of dopamine neurons and their postsynaptic striatal neurons induces learning and approach behavior (Tsai et al. 2009; Tai et al. 2012), the fast dopamine response might induce early movement preparation and thus speed up reward acquisition before a competitor arrives, which is particularly valuable in times of scarcity. Yet the response is brief enough to allow movement preparation to be canceled if the stimulus turns out not to be a reward, thus avoiding errors and unnecessary energy expenditure. Thus, the two-component structure with the early component may facilitate rapid behavioral reactions resulting in more rewards.

Confounded aversive activations

The initial dopamine response component arising with unrewarded stimuli occurs also with punishers. This activation (Mirenowicz and Schultz 1996) may appear like an aversive signal (Guarraci and Kapp 1999; Joshua et al. 2008) and might suggest a role in motivational salience common to rewards and punishers (Matsumoto and Hikosaka 2009). However, this interpretation fails to take the physical stimulus components of punishers into account, in addition to reward generalization and context. Indeed, independent variations of physical stimulus intensity and aversiveness show positive correlations of dopamine activations with physical intensity, but negative correlations with aversiveness of punishers (Fiorillo et al. 2013a, b). An aversive bitter solution, such as denatonium, induces substantial dopamine activations, whereas its tenfold higher concentration in same-sized drops elicits depressions (Fig. 1c). The activations likely reflect the physical impact of the liquid drops on the monkey’s mouth, whereas the depression undercutting the activation may reflect the absence of reward (negative prediction error) or negative punisher value (Fiorillo 2013). The absence of bidirectional prediction error responses with punishers (Joshua et al. 2008; Matsumoto and Hikosaka 2009; Fiorillo 2013) supports also the physical intensity account. Graded regional differences in responses to aversive stimuli (Matsumoto and Hikosaka 2009; Brischoux et al. 2009) may reflect sensitivity differences of the initial dopamine activation to physical salience, reward generalization and reward context (Mirenowicz and Schultz 1996; Fiorillo et al. 2013b; Kobayashi and Schultz 2014); this might also explain the stronger activations by conditioned compared to unconditioned punishers (Matsumoto and Hikosaka 2009) that defy basic notions of animal learning theory. Correlations with punisher probability (Matsumoto and Hikosaka 2009) may reflect the known sensitivity to salience differences between stimuli (Kobayashi and Schultz 2014). Thus, the dopamine activation to punishers might be explained by other factors than aversiveness, and one may wonder how many aversive dopamine activations remain when all confounds are accounted for.

Subjective value

The value of rewards originates in the organism’s requirements for nutritional and other substances. Thus, reward value is subjective and not entirely determined by physical parameters. A good example is satiation, which reduces the value of food rewards, although the food remains physically unchanged. The usual way to assess subjective value involves eliciting behavioral preferences in binary choices between different rewards. The subjective value can then be expressed as measured choice frequencies or as the amount of a reference reward against which an animal is indifferent in binary choices. This measure varies on an objective, physical scale (e.g., ml of juice for animals). As is typical of subjective value, preferences differ between individual monkeys (Lak et al. 2014). The choices reveal rank-ordered subjective reward values and satisfy formal transitivity, suggesting meaningful choices by the animals.
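
As an illustration of how such an indifference point could be read out from binary choices, the sketch below (hypothetical data, assuming a standard logistic choice model; not the analysis of any cited study) fits choice frequencies against the amount of the reference reward and reports the amount at which choice probability reaches 0.5:

```python
# Hypothetical example: estimating choice indifference from binary choices.
import numpy as np
from scipy.optimize import curve_fit

# Reference reward amounts (ml) offered against a fixed test reward,
# and the fraction of trials on which the reference was chosen (made-up data).
reference_ml = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
p_choose_ref = np.array([0.05, 0.15, 0.40, 0.65, 0.85, 0.95])

def logistic(x, x0, k):
    """Standard logistic choice function; x0 is the indifference point."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, reference_ml, p_choose_ref, p0=[0.35, 10.0])
print(f"indifference point ~ {x0:.2f} ml of reference reward")
```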

The phasic dopamine prediction error signal follows closely the rank-ordered subjective values of different liquid and food rewards (Lak et al. 2014). The dopamine signal reflects also the arithmetic sum of positive and negative subjective values of rewards and punishers (Fiorillo et al. 2013b). Thus, dopamine neurons integrate different outcomes into a subjective value signal.

Subjective reward value is also determined by the risk with which rewards occur. Risk avoiders attribute lower subjective value, and risk seekers higher value, to risky compared with safe rewards of equal mean physical amount. Correspondingly, dopamine value responses to risk-predicting cues are reduced in monkeys that avoid risk and enhanced when they seek risk (Lak et al. 2014; Stauffer et al. 2014). Voltammetric measurements show similar risk-dependent dopamine changes in rat nucleus accumbens, which follow the risk attitudes of the individual animals (Sugam et al. 2012). Thus, the reward value signal of dopamine impulses and dopamine release reflects the influence of risk on subjective value.

A further contribution to subjective value is the temporal delay to reward. Temporal discounting reduces reward value even when the physical reward remains unchanged. We usually prefer receiving £100 now rather than in 3 months. Monkeys show temporal discounting across delays of a few seconds in choices between early and late rewards. The value of the late reward, as assessed by the amount of early reward at choice indifference (point of subjective equivalence), decreases monotonically in a hyperbolic or exponential fashion. Accordingly, phasic dopamine responses to reward-predicting stimuli decrease across these delays (Fiorillo et al. 2008; Kobayashi and Schultz 2008). Voltammetric measurements show corresponding dopamine changes in rat nucleus accumbens reflecting temporal discounting (Day et al. 2010). Taken together, in close association with behavioral preferences, dopamine neurons code subjective value with different rewards, risky rewards and delayed rewards.
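
The two standard discounting forms mentioned here can be written as follows (A is the reward amount, D the delay and k an individual discount parameter; the notation is a generic textbook sketch rather than the specific fits of the cited studies):

```latex
% Hyperbolic and exponential temporal discounting (generic forms)
V_{\mathrm{hyperbolic}}(A, D) \;=\; \frac{A}{1 + kD}
\qquad
V_{\mathrm{exponential}}(A, D) \;=\; A\, e^{-kD}
% V : discounted (subjective) value of amount A delivered after delay D
% k : discount rate, estimated from the animal's choice indifference points
```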

Formal economic utility

Utility provides a mathematical characterization of reward preferences (Von Neumann and Morgenstern 1944; Kagel et al. 1995) and constitutes the most theory-constrained measure of subjective reward value. Whereas subjective value estimated from direct preferences or choice indifference points is expressed on an objective, physical scale, formal economic utility provides an internal measure of subjective value (Luce 1959) whose units are often called utils. Tools from experimental economics allow the construction of continuous, quantitative mathematical utility functions from behavioral choices between risky rewards (Von Neumann and Morgenstern 1944; Caraco et al. 1980; Machina 1987). The best-defined test for symmetric (variance) risk employs binary equiprobable gambles (Rothschild and Stiglitz 1970). Estimated in this way, utility functions in monkeys are nonlinear and have inflection points between convex (risk-seeking) and concave (risk-avoiding) domains (Fig. 2, red) (Stauffer et al. 2014).
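
A common way to obtain such a function from binary equiprobable gambles is the fractile (certainty-equivalent) procedure of expected utility theory: the certainty equivalent of a 50/50 gamble between two outcomes is assigned a utility midway between the utilities of those outcomes. The sketch below illustrates this bisection logic with hypothetical certainty equivalents; it is a generic expected-utility construction, not the exact procedure or data of Stauffer et al. (2014):

```python
# Fractile construction of a utility function from 50/50 gambles
# (hypothetical certainty equivalents, amounts in ml of juice).
utility = {0.0: 0.0, 1.0: 1.0}          # anchor the scale: u(0) = 0, u(max) = 1

def add_certainty_equivalent(low, high, ce):
    """The certainty equivalent of an equiprobable gamble (low, high)
    receives the utility midway between u(low) and u(high)."""
    utility[ce] = 0.5 * (utility[low] + utility[high])

add_certainty_equivalent(0.0, 1.0, ce=0.55)   # u(0.55 ml) = 0.50
add_certainty_equivalent(0.0, 0.55, ce=0.35)  # u(0.35 ml) = 0.25
add_certainty_equivalent(0.55, 1.0, ce=0.75)  # u(0.75 ml) = 0.75

for amount in sorted(utility):
    print(f"{amount:.2f} ml -> utility {utility[amount]:.2f}")
```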

Fig. 2
figure 2

Utility prediction error signal in monkey dopamine neurons. Red utility function derived from behavioral choices using risky gambles. Black corresponding, nonlinear increase of population response (n = 14 dopamine neurons) in same animal to unpredicted juice. Norm imp/s normalized impulses per second. From Stauffer et al. (2014)

As utility provides the ultimate formal definition of reward value for decision-making, it should be employed for investigating neuronal reward signals in preference to other, less direct measures of subjective reward value. Dopamine responses to unpredicted, free juice rewards show similar nonlinear increases (Fig. 2, black) (Stauffer et al. 2014). The neuronal responses increase only very slightly with small reward amounts where the behavioral utility function is flat, then linearly with intermediate rewards, and then again more slowly, thus following the nonlinear curvature of the utility function rather than the linear increase in physical amount. Testing with well-defined risk in binary gambles required by economic theory results in very similar nonlinear changes in prediction error responses (Stauffer et al. 2014). With all factors affecting utility, such as risk, delay and effort cost, held constant, this signal reflects income utility rather than net benefit utility. These data suggest that the dopamine reward prediction error response constitutes a utility prediction error signal and implements the elusive utility in the brain. This neuronal signal reflects an internal metric of subjective value and thus extends well beyond the coding of subjective value derived from choices.

Pedunculopontine nucleus

The PPN projects to midbrain dopamine neurons, pars reticulata of substantia nigra, internal globus pallidus and subthalamic nucleus. Other major inputs to dopamine neurons arise from striatum, subthalamic nucleus and GABAergic neurons of pars reticulata of substantia nigra (Mena-Segovia et al. 2008; Watabe-Uchida et al. 2012). Inputs to PPN derive from cerebral cortex via internal globus pallidus and subthalamic nucleus, and from the thalamus, cerebellum, forebrain, spinal cord, pons and contralateral PPN.

Neurons in the PPN of monkeys, cats and rats show considerable and heterogeneous activity related to a large range of events and behavior. The anatomical and chemical identity of these different neuron types is unknown. Subpopulations of PPN neurons are activated, or sometimes depressed, by sensory stimuli irrespective of whether they predict reward (Dormont et al. 1998; Pan and Hyland 2005). Some PPN neurons show spatially tuned activity with saccadic eye movements (Kobayashi et al. 2002; Hong and Hikosaka 2014). In other studies, PPN neurons show differential phasic or sustained activations following reward-predicting stimuli and rewards (Fig. 3) (Kobayashi et al. 2002; Kobayashi and Okada 2007; Okada et al. 2009; Norton et al. 2011; Hong and Hikosaka 2014). Different from dopamine neurons, distinct PPN neurons code reward-predicting stimuli and rewards, rather than both together. Sustained activations following reward-predicting stimuli continue until reward delivery in some PPN neurons; with reward delays, the activation continues until the reward finally occurs (Okada et al. 2009). PPN neurons differentiate between reward amounts. They usually show higher activity to stimuli predicting larger rewards and lower activations or outright depressions to stimuli predicting smaller rewards (Okada et al. 2009; Hong and Hikosaka 2014). Their responses to reward delivery show similarly graded coding, although without displaying depressions. Thus, PPN neurons show various activities during behavioral tasks that are separately related to sensory stimuli, movements, reward-predicting stimuli and reward reception.

Fig. 3
figure 3

Reward processing in monkey pedunculopontine nucleus. a Phasic and sustained responses to reward-predicting stimuli. Imp/s impulses per second. From Kobayashi and Okada (2007). b Magnitude discriminating reward responses. Norm imp/s normalized impulses per second. From Okada et al. (2009)

The PPN responses to reward delivery have complex relationships to prediction. Some PPN neurons are activated by both predicted and unpredicted rewards, but their latencies are shorter with predicted rewards, and the responses sometimes anticipate the reward (Kobayashi et al. 2002). Some neurons code reward amount irrespective of the rewards being predicted or not. They are not depressed by omitted or delayed rewards and show an activation to the reward whenever it occurs, irrespective of this being at the predicted or a delayed time (Okada et al. 2009; Norton et al. 2011). Other PPN neurons are depressed by smaller rewards randomly alternating with larger rewards (Hong and Hikosaka 2014), thus showing relationships to average reward predictions. Thus, reward responses of PPN neurons show some prediction effects, but do not display outright bidirectional reward prediction error responses in the way dopamine neurons do.

Some of the reward responses of PPN neurons may induce components of the dopamine reward prediction error signal after conduction via known PPN projections to the midbrain. Electrical stimulation of PPN under anesthesia induces fast and strong burst activations in 20–40 % of dopamine neurons, in particular in spontaneously bursting dopamine neurons (Scarnati et al. 1984; Lokwan et al. 1999). Non-NMDA receptor and acetylcholine receptor antagonists differentially reduce excitations of dopamine neurons, measured as EPSPs or extracellularly recorded action potentials (Scarnati et al. 1986; Di Loreto et al. 1992; Futami et al. 1995), suggesting an involvement of both glutamate and acetylcholine in driving dopamine neurons. Electrical PPN stimulation in behaving monkeys induces activations in midbrain dopamine neurons (Hong and Hikosaka 2014). Correspondingly, inactivation of PPN neurons by local anesthetics in behaving rats reduces dopamine prediction error responses to conditioned, reward-predicting stimuli (Pan and Hyland 2005). PPN neurons differentiating between reward amounts with positive and negative responses project to midbrain targets above the substantia nigra, as shown by their antidromic activation from this region (Hong and Hikosaka 2014). Consistent with conduction of neuronal excitation from PPN to dopamine neurons, latencies of neuronal stimulus responses are slightly shorter in PPN compared to dopamine neurons (Pan and Hyland 2005). Through these synaptic influences, different groups of PPN neurons may separately induce components of the dopamine reward prediction error signal, including responses to reward-predicting stimuli, activations by unpredicted rewards, and depressions by smaller-than-predicted rewards. However, it is unknown whether PPN neurons with response characteristics not seen in dopamine neurons, such as responses to fully predicted rewards, affect dopamine neurons.

Striatum

The deficits arising from Parkinson’s disease, Huntington’s chorea, dyskinesias, obsessive–compulsive disorder and other movement and cognitive disorders suggest a prominent function of the striatum in motor processes and cognition. Consistent with this functional diversity, the three distinct groups of phasically firing, tonically firing and fast-spiking neurons in the striatum (caudate nucleus, putamen, nucleus accumbens) show a variety of behavioral relationships when sufficiently sophisticated tasks permit their detection. Each of these behavioral relationships engages relatively small fractions of different striatal neurons. However, apart from the small groups of tonically firing interneurons or fast-spiking interneurons, the anatomical and chemical identities of the different functional categories of the large group of medium-spiny striatal neurons are poorly understood. Most of these heterogeneous neurons are influenced by reward information.

Pure reward

All groups of striatal neurons process reward information without reflecting sensory stimulus components or movements. Some of these neurons show selective responses following reward-predicting stimuli or liquid or food rewards (Kimura et al. 1984; Hikosaka et al. 1989; Apicella et al. 1991, 1992; Bowman et al. 1996; Shidara et al. 1998; Ravel et al. 1999; Adler et al. 2013). Striatal responses to reward-predicting stimuli discriminate between different reward types irrespective of predictive stimuli and movements (Fig. 4a) (Hassani et al. 2001). Other groups of striatal neurons show slower, sustained increases of activity for several seconds during the expectation of reward evoked by predictive stimuli (Hikosaka et al. 1989; Apicella et al. 1992). Some of these activations begin and end at specific, predicted time points and reflect the time of reward occurrence (Schultz et al. 1992). Thus, some striatal neurons show passive responses to reward-predicting stimuli and rewards and sustained activities in anticipation of predicted rewards without coding sensory or motor information.

Fig. 4
figure 4

Reward processing in monkey striatum. a Pure reward signal in ventral striatum. The neuron discriminates between raspberry and blackcurrant juice irrespective of movement to left or right target, and irrespective of the visual image predicting the juice (top vs. bottom). Trials in rasters are ordered from top to bottom according to left and then right stimulus presentation. From Hassani et al. (2001). b Conjoint processing of reward (vs. no reward) and movement (vs. no movement) in caudate nucleus (delayed go-nogo-ungo task). The neuronal activities reflect the specific future reward together with the specific action required to obtain that reward. From Hollerman et al. (1998). c Action value coding of single striatal neuron. Activity increases with value (probability) for left action (left panel blue vs. orange), but is unaffected by value changes for right action (right panel), indicating left action value coding. Imp/s impulses per second. From Samejima et al. (2005). d Adaptation of reward expectation activity in ventral striatum during learning. In each learning episode, two new visual stimuli instruct a rewarded and an unrewarded arm movement, respectively, resulting in different reward expectations for the same movement. With rewarded movements (left), the animal’s hand returns quickly to the resting key after reward delivery (long vertical markers, right to reward). With pseudorandomly alternating unrewarded movements, the hand returns quickly after an unrewarded tone to the resting key in initial trials (top right), but subsequently returns before the tone (green arrows), indicating initial reward expectation that disappears with learning. The reward expectation-related neuronal activity (short dots) shows a similar development during learning (from top to bottom). From Tremblay et al. (1998)

Striatal responses to reward-predicting stimuli vary monotonically with reward amount (Cromwell and Schultz 2003; Báez-Mendoza et al. 2013). Increasing reward probability enhances reward value in a similar way as increasing reward amount. Accordingly, striatal neurons code reward probability (Samejima et al. 2005; Pasquereau et al. 2007; Apicella et al. 2009; Oyama et al. 2010). However, in these tests, objective and subjective reward values are monotonically related to each other; increasing objective value increases also subjective value. The difference in these value measures can be better tested by making different alternative rewards available and thus changing behavioral preferences and subjective reward value without affecting objective value. Correspondingly, some striatal neurons show stronger responses to whichever reward is more preferred by the animal irrespective of its objective value, suggesting subjective rather than objective reward value coding (Cromwell et al. 2005). In a different test for subjective value, striatal reward responses decrease with increasing reward delays, despite unchanged physical reward amount (Roesch et al. 2009; Day et al. 2011). Taken together, groups of striatal neurons signal subjective reward value.

Some striatal neurons respond to surprising rewards and reward prediction errors. Some of them show full, bidirectional coding, being activated by positive prediction errors and depressed by negative errors, although inversely coding neurons exist also (Apicella et al. 2009; Kim et al. 2009; Ding and Gold 2010; Oyama et al. 2010). These responses may affect plasticity during reinforcement learning in a similar manner as dopamine signals, although the anatomically more specific striatal projections onto select groups of postsynaptic neurons would suggest more point-to-point influences on neurons. Other striatal neurons respond either to unpredicted rewards or to reward omission (Joshua et al. 2008; Kim et al. 2009; Asaad and Eskandar 2011). Responses in some of them are stronger in Pavlovian than in operant tasks (Apicella et al. 2011) or occur only after particular behavioral actions (Stalnaker et al. 2012), suggesting selectivity for the behavior that resulted in the error. These unidirectional responses may confer surprise salience or single components of reward prediction errors.

Conjoint reward and action

In contrast to pure reward signals, some reward neurons in the striatum code reward together with specific actions. These neurons differentiate between movement and no-movement reactions (go-nogo) or between spatial target positions during the instruction, preparation and execution of action (Fig. 4b) (Hollerman et al. 1998; Lauwereyns et al. 2002; Hassani et al. 2001; Cromwell and Schultz 2003; Ding and Gold 2010). By processing information about the forthcoming reward during action preparation or execution, these activities may reflect reward representations before and during the action toward the reward, which suggests a relationship to goal-directed behavior (Dickinson and Balleine 1994). The reward influence on striatal movement activity is so strong that rewards presented at specific positions can alter the spatial preferences of saccade-related activity (Kawagoe et al. 1998). Thus, some striatal neurons integrate reward information into action signals, informing about the value of a chosen action; this integration of reward information into motor processing extends well beyond primary motor functions.

Action value

Specific actions lead to specific rewards with specific values. Action value reflects the reward value (amount, probability, utility) that is obtained by a particular action, thus combining non-motor (reward value) with motor processes (action). If more reward occurs at the left compared to the right, action value is higher for a left than a right movement. Thus, action value is associated with an action, irrespective of this action being chosen. Action value is conceptualized in machine learning as an input variable for competitive decision processes and is updated by reinforcement processes that are distinct from pure motor learning (Sutton and Barto 1998). The decision process compares the values of the available actions and selects the action that will result in the highest value. Neurons coding action values can serve as suitable inputs for neuronal decision mechanisms if each action is associated with a distinct pool of action value neurons. Thus, action value needs to be coded for each action by separate neurons irrespective of the action being chosen, a crucial characteristic of action value.
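
In machine-learning terms (following the Sutton and Barto formalism cited above), this corresponds to maintaining one value per available action, updating the chosen action's value from the reward prediction error, and letting the decision process compare the stored values. The following minimal sketch uses arbitrary illustrative parameters and hypothetical reward probabilities:

```python
# Minimal action-value (Q-learning style) sketch for a two-choice task
# (illustrative parameters and reward probabilities, not experimental values).
import math
import random

q = {"left": 0.0, "right": 0.0}          # one action value per available action
alpha, beta = 0.2, 5.0                   # learning rate, softmax temperature
p_reward = {"left": 0.8, "right": 0.2}   # hypothetical reward probabilities

def choose():
    """Decision stage: compare the stored action values and choose via softmax."""
    w = {a: math.exp(beta * q[a]) for a in q}
    total = sum(w.values())
    return "left" if random.random() < w["left"] / total else "right"

for _ in range(500):
    action = choose()
    reward = 1.0 if random.random() < p_reward[action] else 0.0
    q[action] += alpha * (reward - q[action])   # prediction-error update of the chosen action

print(q)   # roughly q["left"] ~ 0.8, q["right"] ~ 0.2
```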

Action values are subjective and can be derived from computational models fitted to behavioral choices (Samejima et al. 2005; Lau and Glimcher 2008; Ito and Doya 2009; Seo et al. 2012) or from logistic regressions on the animal’s choice frequencies (Kim et al. 2009). Subgroups of neurons code action values in monkey and rat striatum. Their activities reflect the values obtained by specific (left vs. right) arm or eye movements (Fig. 4c) (Samejima et al. 2005; Lau and Glimcher 2008; Ito and Doya 2009; Kim et al. 2009; Seo et al. 2012) and occur irrespective of the animal’s choice, thus following the strict definition of action value from machine learning (see Sutton and Barto 1998). Action value neurons are more frequent in monkey striatum than dorsolateral prefrontal cortex (Seo et al. 2012). Thus, the theoretical concept of action value serving as input for competitive decision mechanisms has a biological correlate that is consistent with the important movement function of the striatum.

In a basic economic decision model involving the striatum (Schultz 2015), reinforcement processes would primarily affect neuronal action value signals, as compared to other decision signals. The dopamine prediction error signal from the experienced primary or higher order reward may conform to the formalism of chosen value (Morris et al. 2006; Sugam et al. 2012) and might serve to update synaptic weights on striatal neurons coding action value (Schultz 1998). In a three-factor Hebbian arrangement, the dopamine signal would primarily affect synapses that had been used in the behavior that led to the reward. As rewards are effective when occurring after the behavior, rather than before it, these synapses must have been marked by an eligibility trace (Sutton and Barto 1981). The neuronal reinforcement signal would affect only synapses carrying eligibility traces. By contrast, inactive synapses would not carry an eligibility trace and thus remain unchanged or even undergo mild spontaneous decrement. In this way, dopamine activations after a positive prediction error would enhance the synaptic efficacy of active cortical inputs to striatal or other cortical neurons, whereas neuronal depressions after a negative prediction error would reduce the synaptic efficacy. Although this model is simplistic, the reward functions of the basal ganglia could be instrumental in updating economic values in decision processes.
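
A schematic rendering of this three-factor arrangement (illustrative variable names and constants; a sketch of the logic, not a biophysical model) is given below: presynaptic–postsynaptic coincidence sets an eligibility trace on a synapse, the trace decays over time, and a later dopamine prediction error changes only the weights of synapses that still carry a trace:

```python
# Three-factor plasticity sketch: eligibility traces gated by a dopamine
# prediction-error signal (illustrative constants, not a biophysical model).
import numpy as np

n_synapses = 5
w = np.full(n_synapses, 0.5)       # corticostriatal synaptic weights
e = np.zeros(n_synapses)           # eligibility traces, one per synapse
eta, decay = 0.1, 0.9              # learning rate and trace decay per time step

def step(pre_active, post_active, dopamine_delta):
    """One time step: decay traces, tag coincidently active synapses,
    then let the dopamine prediction error modify only eligible synapses."""
    global e, w
    e *= decay
    e += pre_active * post_active           # factors 1 & 2: Hebbian coincidence
    w += eta * dopamine_delta * e           # factor 3: dopamine error signal
    w = np.clip(w, 0.0, 1.0)

pre = np.array([1, 1, 0, 0, 0], dtype=float)   # synapses 0 and 1 active during the behavior
step(pre_active=pre, post_active=1.0, dopamine_delta=0.0)    # action, no reward yet
step(pre_active=np.zeros(n_synapses), post_active=0.0,
     dopamine_delta=+0.5)                   # later reward: positive prediction error
print(w)   # only the previously active (eligible) synapses are strengthened
```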

Reward learning

All groups of striatal neurons acquire discriminant responses to visual, auditory and olfactory stimuli predicting liquid or food rewards (Aosaki et al. 1994; Jog et al. 1999; Tremblay et al. 1998; Adler et al. 2013). Neuronal responses to trial outcomes increase and then decrease again during the course of learning, closely following the changes in reward associations measured by the learning rate (Williams and Eskandar 2006). Other striatal neurons respond during initial learning indiscriminately to all novel stimuli and differentiate between rewarded and unrewarded stimuli as learning advances (Tremblay et al. 1998). Striatal neurons with sustained activations preceding reward delivery show expectation-related activations in advance of all outcomes and become selective for reward as the animal learns to distinguish rewarded from unrewarded trials (Fig. 4d) (Tremblay et al. 1998). These changes seem to reflect the adaptation of reward expectation to currently valid predictors. With reversals of reward predictions, striatal neurons switch differential responses rapidly when a previously rewarded stimulus becomes unrewarded, in close correspondence to behavioral choices (Pasupathy and Miller 2005). Some striatal responses reflect correct or incorrect performance of previous trials (Histed et al. 2009). Neuronal responses in some striatal neurons code reward value using inference from paired associates and exclusion of alternative stimuli, thus reflecting acquired rules (Pan et al. 2014). Taken together, striatal neurons are involved in the formation of reward associations and the adaptation of reward predictions.

Social rewards

Observing reward in others allows social partners to appreciate the outcomes of social interactions, compare others' rewards with their own reward, and engage in mutually beneficial behavior, such as coordination and cooperation. The basic requirement for these processes involves the distinction between own rewards and the rewards of others. To attribute the reception of a reward to a specific individual requires identifying the agent whose action led to the reward. Once this issue is solved, one can advance to investigating the inequality in reward that different individuals receive for the same labor, and the inequality in labor across individuals that leads to the same reward, which has wide-ranging social consequences.

In a social reward experiment, two monkeys sit on opposite sides of a horizontally mounted touch-sensitive computer monitor and are presented with visual stimuli indicating who receives reward and who needs to act to produce that reward (Fig. 5a). Phasically active neurons in monkey striatum process mostly own rewards, irrespective of whether the other animal receives reward (Fig. 5b, red and green) (Báez-Mendoza et al. 2013). Very few striatal neurons signal a reward that is delivered only to another monkey. Thus, striatal reward neurons seem to be primarily interested in own reward. Neurons that signal a reward to another monkey are found more frequently in anterior cingulate cortex (Chang et al. 2013). Striatal social reward neurons show an additional crucial feature. Most of their reward processing depends on the animal whose action produces the reward. It makes a difference whether I receive reward because of my own action or because of the action of another individual. Striatal neurons make exactly this distinction. Many of them code own reward only when the animal receiving the reward acts (Fig. 5c), whereas other striatal neurons conversely code own reward only when the conspecific acts (Fig. 5d), thus dissociating actor from reward recipient. These contrasting activities are not due to differences in effort and often disappear when a computer actor replaces the conspecific, thus suggesting a social origin. Taken together, some striatal neurons process reward in a meaningful way during simple social interactions. Such neuronal processes may constitute building blocks of social behavior involving the outcomes of specific actions.

Fig. 5
figure 5

Social reward signals in monkey striatum. a Behavioral task. Two monkeys sit opposite each other across a horizontally mounted touch-sensitive computer monitor. The acting animal moves from a resting key towards a touch table to give reward to itself, the other animal, both, or none, depending on a specific cue on the monitor (not shown). b Neuronal activation when receiving own reward, either only for the actor (red) or for both animals (green), but no activation with reward only for the other (violet) or nobody (blue). c Activation to the cue predicting own reward (for actor only and for both, red and green), conditional on own action, and not occurring with conspecific’s action. d Activation following target touch predicting own reward, conditional on conspecific’s action (dotted lines), and not occurring with own action. b–d Imp/s impulses per second, from Báez-Mendoza et al. (2013)

Conclusions

The motor functions of the basal ganglia known from clinical conditions extend into a more global role in actions that are performed to attain a rewarding goal or occur on a more automatic, habitual basis. The link from movement to reward is represented in neuronal signals in the basal ganglia and the PPN. Of these structures, the current review describes the activity in dopamine neurons, PPN and striatum. Dopamine neurons represent movement and reward with two entirely different time scales and mechanisms. The movement function of dopamine is restricted to ambient levels that are necessary for movements to occur, whereas phasic dopamine changes with movements are not consistently observed. By contrast, the reward function is represented in the phasic dopamine reward utility prediction error signal. PPN neurons show a large variety of phasic movement relationships that are often modulated by reward, or display reward-related activities irrespective of movements. Thus, PPN neurons show closer neuronal associations between motor and reward processing than dopamine neurons. The striatum shows even more closely integrated motor and reward processing; its neurons process reward conjointly with movement and code very specific, action-related variables suitable for maximizing reward in economic decisions. Although this general framework is likely to be revised in the coming years, the reviewed data suggest non-motor, reward processes as inherent features of basal ganglia function.