Neuroethics, Volume 11, Issue 3, pp 245–257

Decision-Making and Self-Governing Systems

Adina L. Roskies
Original Paper

Abstract

Neuroscience has illuminated the neural basis of decision-making, providing evidence that supports specific models of decision processes. These models are typically quite mechanical: they portray decisions as realizations of abstract mathematical “diffusion to bound” models. While effective decision-making seems to be essential for sophisticated behavior, central to an account of freedom, and a necessary characteristic of self-governing systems, it is not clear how the simple models neuroscience inspires can underlie the notion of self-governance. Drawing from both philosophy and neuroscience, I explore ways in which the proposed decision-making architectures can play a role in systems that can reasonably be thought of as “self-governing”.

Keywords

Self-control · Agency · Prospection · Free will · Determinism

Introduction

Research in neuroscience has moved beyond the explanation of only simple or low-level sensory and motor phenomena to grapple with some of the most complex kinds of cognitive processes humans can engage in. Some of this progress is due to the development of neuroimaging tools that allow the monitoring of brain activity noninvasively in healthy humans. Some is due to the development of animal models for complex cognitive tasks, and the consequent emergence of an understanding of such behaviors at the level of the operation of individual neurons. As advances in neuroscience increasingly explain cognitive behaviors once thought to be exercises of an immaterial mind or soul, there is an increasingly unsettling worry that neuroscience will explain away the aspects of behavior that make it seem like the behavior of an agent: behavior that is freely chosen or under agentive control. The threats are twofold. The threat that receives the most attention is from determinism. Some have argued that the predictability of certain kinds of neural activity or behaviors provides evidence for the truth of determinism, and thus for the falsity of free will [1, 2]. That determinism is indeed perceived as a threat is evidenced by the dominant focus in the philosophy of free will on the truth and potential implications of determinism. It is interesting, however, that drawing from the same types of data (i.e., the activity of single neurons, behaviors), others argue that it is their unpredictability given identical stimuli that provides evidence for indeterminism, and thus for free will (cf. [3]). I have argued that this focus on determinism from a neuroscientific standpoint is inconclusive, for two reasons: first, neuroscience cannot provide strong evidence for either determinism or indeterminism; and second, other philosophically plausible positions exist in which free will is compatible with determinism [4]. If so, then even evidence for determinism would not be sufficient to settle the question of free will. In any event, the focus of this paper is not free will, but rather the possibility of self-governance as it plays a role in agency.

The second threat has been less thoroughly explored. I will call it the threat of mechanism. Here, mechanism is just the view that the mind/brain is some sort of machine or physical device, composed of interacting parts and governed by physical law.1 Mechanistic explanations have been argued to be the core aim of neuroscientific investigation [5], but they are at odds with appeals to the irreducibly mental, with agent causation, and the like. The threat of mechanism is distinguishable from the threat of determinism because mechanism is orthogonal to determinism: some mechanisms, for example, may have indeterministic elements, and thus there may be indeterministic as well as deterministic mechanisms. However, it is likely that mechanism is often confused with determinism, because mechanisms are implementations of causal chains, and the underlying assumption common to both mechanism and determinism is that causation precludes freedom [6]. Nahmias and colleagues have suggested that people responding to philosophical questions about determinism often misinterpret the claim of determinism to be equivalent to that of mechanism, and that it is mechanism that poses the greater threat to our folk philosophical notions. The reason, they suggest, is that people view mechanisms as precluding the efficacy of the psychological: they view action as depending not on psychological states, but rather on their underlying substrate (what Nahmias and colleagues term “bypassing”) [7, 8]. Their work suggests that what is troubling about mechanism is the causal impotence of the mental. In other words, people naively assume that mechanism entails mindlessness. This assumption may be intuitive and may to some seem obviously true on its face, but its truth is doubtful.

Descartes was familiar with complex mechanical systems: mechanical automata such as dolls that performed complex-seeming actions were popular in his time, and he conjectured that animals, too, were simply automata, unthinking and unfeeling, to be distinguished from humans, who had souls [9]. The Cartesian view thus illustrates how mechanistic accounts may seem to “leave something out” with respect to human agency. Indeed, if one takes the paradigm of a mechanism to be a clock, or some other kind of existing human-constructed automaton, then it seems obvious that humans are different in kind. An automaton’s complex behavior is merely superficially mind-like and intention-like: it has no mind, it does not deliberate, it does not form intentions to act, it has no control over itself, and obviously it cannot be held responsible for its behavior.

To be sure, the machines we have thus far constructed do not have psychological states, but why think that the mechanisms we have constructed exhaust the realm of possible mechanisms (complex machines operating according to causal laws), or that the properties they exhibit exhaust the possible properties of mechanisms? Why not instead entertain the possibility that certain kinds of mechanisms can transcend mindlessness? After all, as science progresses, more and more types of phenomena can be explained as the result of physical mechanism. For example, vital forces have given way to an understanding of life as emerging from mechanistic processes. As we understand more about cellular and molecular processes, and about development, we can see human development as the result of the unfolding of relatively simple mechanistic processes, and we can understand the increasingly complex set of capacities possessed by the developing organism as the result of the appearance of increasingly complex brain mechanisms. At no point during ontogeny is there evidence for the appearance of some other factor (i.e. a factor not realized by mechanism), despite the emergence of minded behavior. To put it another way, if we are mechanisms, then we are undoubtedly mechanisms with minds, and an existence proof for the possibility of minded mechanisms.

Thus, although the inference from mechanism to mindlessness is easy and perhaps natural, it is almost certainly mistaken, and based upon restricted paradigm cases and an oversimplified understanding of what kinds of mechanisms are possible. Once one admits of the possibility that some kinds of mechanisms may not just imitate but actually give rise to mentality, mechanism does not seem like such a threat to the possibility of agency, intentionality, or free will.

That being said, it is clear we do not yet have a neuroscience that explains how these concepts can be mechanistically instantiated. My suggestions here focus on the question of self-governance (not free will or intentionality), and are largely speculative. In what follows, I take the science of decision-making as an example. I briefly explain the basic framework of decision-making as we currently understand it at a neural level, and suggest why it might be taken to preclude real agency. I will then suggest that the shortcomings may be remedied by looking for elements that can underlie a notion of self-governance. In so doing, I take inspiration from decentralized political systems, and make some suggestions about what kind of evidence we should look for in brains to bolster a mechanistic but full-blooded account of human agency.

What Does Neuroscience Tell us about Decision-Making?

Over the last few decades neuroscience has made significant progress in elucidating the basic neural framework for decision-making. Much of what we know comes from experiments in monkeys, in which experimenters record from cells in the cortex while monkeys perform a decision task under uncertainty. One of the best-studied tasks is the random-dot-motion (RDM) task. In the RDM task, monkeys view visual stimuli of randomly moving dots, where some percentage of those dots move in a common direction, the others moving in random directions, adding noise to the input [10]. The monkey’s task is to decide whether the overall motion signal from the stimulus is toward the right or the left. He indicates his decision by moving his eyes to a target on the screen, located either to the right or left of the moving stimulus. By varying the percentage of dots with coherent motion the experimenter can vary the strength of the motion signal, and thus the degree of uncertainty and the difficulty of the decision. At the same time that the monkey performs this task, the experimenter can record from individual neurons in the monkey’s brain, and correlate their activity with aspects of task performance.

We have known for some time that perceptual information is processed in modality-specific sensory cortex, and that different brain regions compute different kinds of features. MT, or the middle temporal visual area, computes motion signals. MT cells are sensitive to visual motion in particular directions, and modulate their background firing rate depending on the motion of the dots in their receptive field. They fire more to dots moving in their preferred direction, and suppress firing to those moving opposite their preferred direction. A motion stimulus with strong rightward motion will activate cells in area MT that are sensitive to rightward motion; the neural activity varies with the momentary motion strength of the visual stimulus [11]. Lesion and microstimulation experiments make clear that these MT neurons are causally relevant to the monkey’s decision [12, 13]. These cells are thus thought to supply evidence of visual motion to the decision process further downstream (for review see [14]).
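
To make the division of labor concrete, the momentary evidence can be pictured as the difference in activity between opponent, direction-tuned populations. The following is a minimal sketch of that idea in Python; the gain and noise parameters are illustrative assumptions, not values fitted to any data.

```python
# A minimal sketch of momentary motion evidence as the difference between
# opponent, direction-tuned MT-like populations. The gain and noise values
# are illustrative assumptions, not fitted parameters.
import numpy as np

rng = np.random.default_rng(0)

def momentary_evidence(coherence, n_steps, gain=1.0, noise_sd=1.0):
    """Per-timestep evidence for rightward motion: right-preferring minus
    left-preferring population activity. coherence lies in [-1, 1], with
    positive values denoting net rightward dot motion."""
    return gain * coherence + noise_sd * rng.standard_normal(n_steps)

samples = momentary_evidence(coherence=0.128, n_steps=1000)
print(round(samples.mean(), 3))  # hovers near 0.128: a weak rightward signal
```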

While MT responses represent the momentary information in the stimulus, neurons in cortical area LIP (lateral intraparietal area) represent the accumulation of this momentary evidence over time. LIP is a region involved in motor planning, where neuronal firing indicates preparation of an eye movement to a particular part of visual space. LIP neurons are thought to associate information from vision with plans to look [15, 16, 17]. LIP cells also fire during the RDM task: these neurons integrate the momentary evidence from MT, gradually increasing or decreasing their rate of discharge as evidence for or against the saccade target in their receptive field (RF) changes over time. LIP cells thus appear to represent the overall strength of the motion stimulus over time. When LIP firing reaches a certain threshold, the monkey executes an eye movement in the direction specified by the dominant population of LIP cells.

There are two important features of the dynamics of LIP firing during the RDM task. First, during this task the firing rates of the cells tuned for motion toward a target increase or decrease in proportion to the perceptual evidence for motion in the target direction. Thus, neural signals responding to a strongly coherent stimulus in the preferred direction will quickly ramp up to the decision point; when responding to a weak motion stimulus the slope is less steep, and the time taken to reach a decision is increased. In information-theoretic terms, the neurons must integrate information from a weak or noisy motion stimulus over a longer period in order to reach a conclusion about the direction of motion [14, 18].

Second, when traces of the cellular firing over the course of many trials are averaged and time-locked to the moment of decision, as indicated by the monkey’s eye movement, researchers find that a decision is made when the firing rates of task-related LIP neurons achieve a critical threshold level, regardless of the dynamics of the decision process (such as, for example, how long it takes to make the decision). This suggests that there is a threshold for terminating the decision process [12, 13, 19].

In a variant of the standard RDM task, the monkey is required to withhold his response until cued to act. In such trials, LIP neurons indicating the chosen option continue to fire at a high rate in the absence of a stimulus, until the monkey responds. Thus, LIP neurons have the additional capacity to maintain discharge for longish periods in the absence of an immediate sensory stimulus [18]. This feature is another important desideratum for neurons that represent a decision, since a decision is a commitment to act whether or not it results in immediate action. Thus, when a decision is made but not executed, these same neurons in LIP maintain the decision outcome. In other words, they encode a plan of action. Finally, the sustained firing ceases when the monkey performs the planned action. In sum, when the LIP neurons representing one or another target RF reach a threshold firing rate, the monkey renders a decision, either by indicating his response with a saccade, or by forming a commitment to produce a particular response when a response is warranted [14].

Accumulation of evidence to a threshold level is paradigmatic of a familiar mathematical model of decision-making, called a bounded accumulation (or bounded drift-diffusion, or random walk to bound) model [14]. Such models were initially developed to describe observable aspects of decision-processes at the psychological level. The diffusion to bound model represents the two options in a binary decision as boundaries in a space for the evolution of an abstract parameter called a decision variable. Each bound represents the threshold for deciding on the option it represents. The decision variable is a quantity that depends on multiple factors and evolves over time. Its value may change in response to incoming evidence, for example, or to changes in valuation of the outcome, or in models that incorporate noise, to random noise. As the decision variable evolves, it approaches one bound, and gets further from the other. A decision is made when the decision variable reaches one of the bounds: the hypothesis or option the bound represents is chosen. Although this general model has been used to describe behavioral features of decisions, it also describes well the neural activity seen in LIP, and its relation to decision-related behaviors [14, 20]. Moreover, the model seems to generalize: the neural dynamics found in the RDM task have been found in other decision tasks in different populations of cells that are involved in performing those tasks [14, 21, 22, 23].
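
The core dynamics of such a model can be captured in a few lines of simulation. The sketch below is illustrative only (the bound, gain, and noise values are assumptions, not fits to neural or behavioral data), but it reproduces the qualitative features described above: steeper ramps and faster, more accurate decisions for stronger motion, with a fixed terminating threshold regardless of how long accumulation takes.

```python
# Illustrative diffusion-to-bound simulation. All parameter values are
# assumptions chosen for demonstration, not fits to neural or behavioral data.
import numpy as np

rng = np.random.default_rng(1)

def diffusion_to_bound(coherence, bound=30.0, gain=1.0, noise_sd=1.0,
                       max_steps=10_000):
    """One trial: accumulate noisy momentary evidence until a bound is hit.
    Returns (choice, decision_time). Positive coherence favors 'right'."""
    dv = 0.0  # the decision variable starts midway between the two bounds
    for t in range(1, max_steps + 1):
        dv += gain * coherence + noise_sd * rng.standard_normal()
        if dv >= bound:
            return "right", t  # commitment to the upper-bound option
        if dv <= -bound:
            return "left", t   # commitment to the lower-bound option
    return "none", max_steps   # no commitment within the allotted time

for coh in (0.512, 0.128, 0.032):
    trials = [diffusion_to_bound(coh) for _ in range(500)]
    accuracy = np.mean([choice == "right" for choice, _ in trials])
    mean_rt = np.mean([t for _, t in trials])
    # Weaker motion -> shallower drift -> longer integration, more errors.
    print(f"coherence {coh:.3f}: p(correct) {accuracy:.2f}, mean steps {mean_rt:.0f}")
```

Time-locking simulated trials to the moment the bound is crossed reproduces the stereotyped firing level at decision time noted above, however long the accumulation took.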

By identifying neural firing rates with various aspects of the diffusion-to-bound model, a number of decision-related phenomena can be explained, including the speed-accuracy trade-off [14, 24], data relevant to the phenomenon of change-of-mind [25], and post-decision wagering [26]. The model explains why manipulation of neural activity at different points in the processing stream has the behavioral effects it does, and it illuminates the way in which value representations play a role in decision. Although the work has been done in monkeys, evolutionary relationships, functional homologies, and comparable performance between humans and monkeys on this task suggest that these same neural mechanisms are involved in human decision-making [14, 27, 28, 29]. Yet despite the rich understanding supported by linking the neural data with computational models, this work can also be interpreted as showing that neural processes of decision-making cannot support traditional notions of free will and agency.

How Does the Neuroscience Data Raise the Threat of Mechanism?

The explanatory power of the diffusion-to-bound model of decision-making is impressive. However, the identification of neural activity with elements of this model can appear to pose a challenge to our intuitive understanding of decision-making as a process that is up to us, that is, an exercise of our high-level deliberative capacities, and an important nexus for the expression of free will. The realization of this abstract mathematical model in our brains may thus threaten to paint decision-making as a mechanistic, bottom-up process, unsuited to accommodating the notion of self-governance or top-down control that seems to describe our experiences of choosing, and that seems essential to the characterization of a responsible agent. Indeed, some scientists and philosophers have used the model of decision-making derived from the neural data and described by these mathematical models as evidence for the lack of free will and autonomy [30, 31, 32, 33]. For if decision-making is merely a competition between streams of incoming sensory information favoring one or the other of two options, decisions can be reached without control, without awareness, and perhaps even without the operation of mind entirely. Self-governance seems out of the question: there is no “self” to do the self-governing, or to establish self-control. The low-level processing operates without higher-level constraints. If this is all there is to decision-making, the data from the RDM task and others like it threaten to paint human decision-making as a blind and mindless bottom-up process.2

Countering the Threat of Mechanism: Self-Governance

The Notion of Self-Governance

Clearly, the basic model of decision-making described in the preceding sections cannot exhaust the kinds of mechanisms that we must be if we are indeed agentive systems of the kind we take ourselves to be. The elements that seem to drop out of the picture on the mechanistic view are those that make sense of an agent engaging in processes of decision-making, such as the ability to establish, endorse, or reevaluate priorities or values, and the ability to intentionally affect one’s own deliberation so as to exert self-control. Can these kinds of abilities be squared with a mechanistic model of decision-making in a way that can support intuitive notions of autonomy and responsibility? One way to make progress on this issue is to think of what may be required of complex systems for them to be reasonably thought of as self-governing.

Self-governing systems are a subset of intentional systems, which are systems that represent goals and states of the world, and that act in light of those representations to achieve their goals [34].3 At least some intentional systems are unproblematically physical, but clearly not minded (e.g. thermostats, computers). But not all intentional systems are equivalent: the class can be further differentiated by other features. For example, McGeer and Pettit distinguish routinized intentional systems – those that act in a goal-directed way without control over the way or degree to which they abide by intentional constraints – from self-regulating systems, which act intentionally with regard to their constraint-conforming behavior [35]. They argue that metacognition is necessary for this kind of self-regulation: such a system must be able to represent the contents of its own representations. In other words, a self-regulating system must be capable of paying attention to the contents of its beliefs and be able to represent its representational states in order to intentionally act upon them. With regard to humans, this requirement seems to be obviously satisfied: as McGeer and Pettit explain, “the availability of language makes it unproblematic to claim that people can discriminate and attend to the contents of their intentional states and so form beliefs and desires about them” ([35], p. 286).4 Once in the position of recognizing rational constraints as what they are, people are further in a position to act intentionally regarding the satisfaction of those constraints.

Adopting this general characterization of what it is to be self-regulating, and drawing inspiration from folk political theory, I will call a broad ability to self-regulate “self-governance”, for governance is a notion more sweeping and systematic than mere regulation. Regulation can be implemented piecemeal, while governance alludes to a more global systemic property (but note, not necessarily an entirely global, systematic, or coherent one). An analogy with civic governance is apt: when we think about governments, they are broadly acting systems both responsive to and able to direct the behaviors of their constituents in a variety of ways, and often flexibly so. Some of the most common ways are by setting goals, establishing policies (such as enacting laws or regulations) aimed at meeting those goals, and incentivizing or disincentivizing specific behaviors. Importantly, the governing rules (and the governing structure itself) can be revised in light of experience. I suspect similar kinds of elements may characterize self-governance for a biological system.5

Self-Governance in the Absence of a Unitary Self

An obvious prima facie question that arises when considering the possibility of self-governance is the nature of the notion of self that is alluded to in the word “self-governance”. Thus far, nothing in neuroscience provides evidence against there being an identifiable thing, system or process that is the self. However, if there is such a thing, neither has neuroscience illuminated what it might be. We must take seriously the possibility then that there is no unitary entity residing in our minds or persons that is “the self”. How, then, can we account for self-governance? It seems downright puzzling how something without a self can be self-governing. However, the analogy to civic governance suggests that this is not compelling as an objection. After all, when we think of our nation, it is not clear that there is a “self” there either, much less any identifiable or discrete thing that is the government. Rather, there are sets of processes and structures that are established by the acceptance of particular policies, and these processes in turn affect the operation of parts of the system in such a way that certain goals are achieved or at least approximated. Likewise, instead of seeking some discrete and identifiable neural entity that is the self, we can recognize that the agent is a collection of processes that work together in a way that appears orchestrated, and that serves identifiable and relatively stable long-term interests. As one theorist comments, the self “is a population of partly conflicting interests,” where those interests are identified as a collection of reward-seeking processes ([36], p. 62). Unity is to be found in the fact that there is the hard constraint of a single effector system (the body) that limits what actions can be undertaken at any given time, and that various processes can organize to best achieve their goals given existing constraints. If this is on the right track, we should look for self-governance in the interactions of parts of the system with other parts of the system in ways that suggest they provide, incorporate and respond to feedback, and for evidence that the system as a whole establishes particular goal states, functions to achieve them, and reconfigures its behavior to be better able to achieve them. These kinds of processes will incorporate and shape the decision-processes whose (partial) neural bases have been identified. Thus, the flexible operation of those processes will be evidence for their existence.

In conceiving of how self-governance may be realized in systems like us, we need to bear in mind that the work characterizing the activity of a subset of MT and LIP cells during decision-making, though revolutionary in its implications, provides only a partial (and for that matter, in all likelihood an incredibly minute) window into the entirety of computations that take place during much decision-making. There is ample opportunity for other processes to interact with the ones thus far identified in ways that support self-governance, for we know that decision-making involves much more global circuitry. For example, frontal brain regions are thought to be centrally involved in executive function. It is known that LIP is reciprocally connected with frontal brain areas, and that frontal activity has a modulatory effect on LIP activity [37, 38, 39, 40, 41]. Below, I suggest some ways in which viewing the diffusion-to-bound models as part of larger systems could put flesh on the notion of self-governance. On this view, the self-governing nature of the system is not undercut by, but is entirely consistent with, the mechanistic nature of the nervous system. (Of course, it is conceptually possible that the relevant larger context includes non-mechanistic phenomena in virtue of which agency is preserved, but the optimistic induction from the successes of the mechanistic neuroscientific approach is that self-governance can be achieved without appeal to non-mechanistic elements).

Setting High-Level Goals

Any kind of self-governance requires the establishment of sets of goals or ideals that serve to structure the operation of the self-governing system. This will involve the establishment or endorsement of a set of valuations of particular behaviors or outcomes, valuations that respect or instantiate some kind of hierarchical structure, so that conflicts among competing alternatives can be resolved. Some of these values will necessarily be expressions of innate valuations, but others are certain to be of more diverse origin, as constructions based on these innate values, or acquired through learning, conditioning, or reasoning. The neuroscience of value is a thriving area, and although the findings are too complex to describe here, innate mechanisms for valuation of primary reinforcers have been described in mid-brain dopaminergic systems [42], learning mechanisms have been identified, and cortical representations of value have been identified in ventromedial prefrontal cortex [37, 43]. In addition, mechanisms for resolving conflicts and for modulating decision processes themselves have been investigated [44, 45].
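
Although the cited findings cannot be reviewed here, the general flavor of such learning mechanisms can be conveyed by a textbook reward-prediction-error update (a Rescorla–Wagner-style rule, offered only as a sketch and not as a model of any particular study cited above):

```python
# A textbook reward-prediction-error update, offered only to convey the
# flavor of value learning; the learning rate and reward size are
# illustrative assumptions.
def update_value(value, reward, learning_rate=0.1):
    """Move the stored value toward the obtained reward in proportion to the
    prediction error (reward minus current expectation)."""
    prediction_error = reward - value
    return value + learning_rate * prediction_error

v = 0.0
for _ in range(20):           # repeated pairings with a reward of size 1.0
    v = update_value(v, 1.0)
print(round(v, 2))            # the stored value climbs toward 1.0
```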

Despite these advances, studies of decision-making thus far neither explore nor illuminate the question of how overarching task-related goals are established or neurally represented. The behavioral paradigms used in these physiological studies have instead exploited basic goals of monkeys in captivity: acquiring preferred foods, avoiding boredom, and so on. Scientists have trained monkeys to construct new goals (e.g. of successfully performing a task) without examining in detail how these new goals are represented (but see [42, 46, 47] and related work for ideas). Rewards in tasks are chosen for experiments based on the value already attributed to them by the experimental animal; and the goals, in this case completing the task, are instilled by reinforcement learning. Furthermore, in general the animal does not have the ability to prioritize his goals in the context of the task, so these experiments are ill-suited to exploring this important aspect of self-governance.

Regulating Desires/Modulating Valuation

Having a hierarchy of goals is a necessary prerequisite for structuring a coherent system, but the goal hierarchy must be a flexible one. An important way in which a system can be self-regulating is by altering established values associated with various options. Monkey studies in RDM-type tasks have shown that modulation of reward value (e.g. replacing a drop of juice with a raisin or something else the monkey values more) leads to a modulation of activity in the LIP neurons that represent that choice [48]. In essence, valuing a choice more makes it more likely that that option will be chosen, by increasing the probability that the neurons representing that option will reach threshold firing rates. These value representations explain the fact that we are more likely to make decisions that lead to outcomes we value more. While this kind of modulation can be achieved by manipulation of the reward itself, as the monkey work has shown, in theory it should be achievable also by the organism’s own alteration of the valuation of one option relative to another.6
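
One simple way to picture this modulation within the accumulation framework is as a change in the starting level of the accumulator representing the more valued option. In the illustrative sketch below (the offset, drift, and noise values are assumptions), the option whose baseline is raised wins the race to threshold more often:

```python
# Sketch: value as a head start in a race to threshold. All parameters are
# illustrative assumptions, not measurements.
import numpy as np

rng = np.random.default_rng(2)

def choice_probability(offset_a, drift=0.2, bound=30.0, noise_sd=1.0,
                       n_trials=2000):
    """Fraction of trials on which accumulator A reaches the bound first,
    given that valuation raises A's baseline activity by `offset_a`."""
    wins = 0
    for _ in range(n_trials):
        a, b = offset_a, 0.0
        while a < bound and b < bound:
            a += drift + noise_sd * rng.standard_normal()
            b += drift + noise_sd * rng.standard_normal()
        wins += a >= bound
    return wins / n_trials

print(choice_probability(0.0))  # ~0.5: equally valued options tie
print(choice_probability(5.0))  # >0.5: the more valued option wins more often
```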

There is evidence that ventromedial (VM) prefrontal cortex is responsible for the association of value with choice options [46, 49, 50]. While establishing these associations often occurs as a subpersonal or implicit process, I hypothesize that person-level explicit strategies ought also to be able to affect valuation (indeed, it seems clear that we do this all the time). Such explicit strategies should result in measurable modulation of cortical representations. These deliberate strategies for increasing and decreasing the value of stimuli are ways in which an agent can exert control over a key element in decision-making. For example, by focusing our attention on the aspects of a stimulus that we value, or on the reasons for which we value it, we may be able to potentiate the role of that value representation in the decision process [51, 52, 53]. Alternatively, building associations between a stimulus and other things that we value could increase the represented valuation of that stimulus. Determining the neural mechanisms by which these kinds of processes might occur is a job for neuroscience. Since it is difficult to do physiology at the single-neuron level in humans, the challenge is to design tasks in which animals use high-level strategies to re-evaluate subjective reward structure. If it turns out that language is necessary for such re-evaluation, we may be forced to explore this in humans alone, using tools that only provide information about aggregate neural activity.

Establishing Policy

All governments regulate the behaviors of their constituents (some more than others). There are various ways of regulating. Legislation is perhaps the most familiar: governments can establish laws that mandate or prohibit behaviors. In the law, policies are sometimes encoded as rules, sometimes as guidelines. Some policies set up structures that constitute the system, and some govern how that system is supposed to operate.

Similar sorts of elements can be found in neuroscience. The scientific study of the neural basis of rule-representation and rule-following is in its infancy. There is evidence that populations of neurons in prefrontal cortex are involved in rule-representation [39, 54, 55, 56, 57]. Whether these representations function to divide logical space into clear-cut categories or instead act as graded or paradigmatic attractors (more like guidelines) remains to be determined [58]. Recent studies have shown that individual neurons involved in such representations appear to operate in context-dependent ways, with the same neurons playing a role in representing different rules in different contexts [56]. This work suggests that the circuitry of the brain can be dynamically reconfigured depending on high-level goals.

The dynamic configuration of cognitive processes in order to perform a task is called “task set”. Some recent work has focused on how the brain establishes, switches, and maintains task set [59, 60, 61, 62]. Once a goal is recognized or privileged among competing potential goals, and a high-level strategy or rule is identified, how are neural mechanisms flexibly recruited to satisfy the task demands the goal imposes? Work in monkeys and humans suggests that the important work of defining the task set involves the flexible configuring of frontal regions which interact with parietal areas that have been implicated in decision-making [55, 60, 63]. While extant work does not yet provide clear evidence about how task-set reconfiguration is achieved, it does support the idea that frontal-parietal interactions will be important for understanding aspects of self-governance at the neural level.
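
One way to picture such reconfiguration, loosely inspired by the context-dependent selection reported in [55], is as a gate that determines which feature stream feeds a common accumulator; switching the rule redirects the same machinery to different evidence. The following toy sketch rests on assumed parameters and is not a model of the cited data:

```python
# Toy illustration of task set as a gate on evidence streams, loosely
# inspired by context-dependent selection [55]. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(3)

def decide(motion_signal, color_signal, rule, bound=30.0, noise_sd=1.0):
    """Integrate only the rule-relevant stream to a bound; the same
    accumulator serves both rules."""
    dv = 0.0
    while abs(dv) < bound:
        evidence = motion_signal if rule == "motion" else color_signal
        dv += evidence + noise_sd * rng.standard_normal()
    return "A" if dv > 0 else "B"

# Identical stimulus, different task set, different decision:
print(decide(motion_signal=0.2, color_signal=-0.2, rule="motion"))  # usually A
print(decide(motion_signal=0.2, color_signal=-0.2, rule="color"))   # usually B
```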

Altering Thresholds

The mechanistic models of decision-making in neuroscience support the notion that decisions are made when a bound or threshold of firing is reached. Recall that decisions (in the form of sustained activity that drives or could drive saccades, in the RDM experiment) are made when the neurons representing evidence in the populations signaling either rightward or leftward motion reach a threshold level. Decisions can change with a change in threshold: a lower threshold will allow a decision to be reached with less evidence in favor of a particular option; a higher threshold will lead to a more conservative decision. Thus, one way of altering decisions would be to alter the decision thresholds [64].

Although changing the threshold would be a way of regulating the decision process, evidence for flexibility of threshold is limited. There is evidence that thresholds decay over time, perhaps reflecting the cost of gathering additional evidence [65]. But are there other ways of controlling the threshold? Increasing the number of options doesn’t change the dynamics of the decision process or the threshold, but it lowers the floor from which the process begins [66, 67]. Likewise, changing the value of the competing options, or incorporating information about priors, raises or lowers the floor. Thus, the base rate of firing, the floor from which the threshold must be reached, varies with task-related dimensions. Both changing the floor and changing the threshold firing rate alter the total excursion necessary to reach a decision, but it is not known whether the effects of the two are functionally equivalent in all circumstances, or whether there are computational differences in making decisions easier or harder in these two ways. One hypothesis for why the brain tends to alter the baseline level of firing rather than thresholds is that this allows the system to effectively alter the amount of evidence needed for a decision without requiring a retooling of the central operating principles of the system in subsequent steps of processing. It may thus endow the processing streams with a kind of modularity. It also allows for the conservation of the decision-making machinery (which may be simple threshold detection) regardless of the complexity of the inputs or demands on the system [24].
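
The difference between the two manipulations can be made vivid in simulation. In the sketch below (all parameters are illustrative assumptions), lowering the threshold and raising the floor both shorten the excursion to the correct bound, but with different consequences for errors, since raising the floor also moves the process away from the incorrect bound:

```python
# Two ways of shrinking the excursion to a decision bound. All parameter
# values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

def trial(drift=0.05, start=0.0, bound=30.0, noise_sd=1.0):
    """Drift toward the upper (correct) bound; return (correct?, steps)."""
    dv, t = start, 0
    while -bound < dv < bound:
        dv += drift + noise_sd * rng.standard_normal()
        t += 1
    return dv >= bound, t

def summarize(label, **kwargs):
    results = [trial(**kwargs) for _ in range(2000)]
    accuracy = np.mean([ok for ok, _ in results])
    mean_rt = np.mean([t for _, t in results])
    print(f"{label}: p(correct) {accuracy:.2f}, mean steps {mean_rt:.0f}")

summarize("default: start 0, bounds +/-30")
summarize("lowered threshold (+/-20)", bound=20.0)  # faster, but more errors
summarize("raised floor (start +10)", start=10.0)   # faster, biased to favored option
```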

Policies, or general strategies, can influence first-order decisions, so they may be appropriate candidates for aspects of behavior for which a person can be held responsible. Thus, it will be of interest whether the setting of certain policies is available to consciousness, and the extent to which they are revisable by learning procedures. For example, the nervous system is suffused with noise, which affects both the inputs to decision processes and the mechanisms by which they are instantiated. Noise can affect the outcome of a decision process, and thus the nervous system must employ policies to manage the noise, such as by altering the effective decision threshold [27]. As this can affect speed-accuracy tradeoffs, determining appropriate context-relative speed/accuracy ratios is one way in which we can self-regulate. To the extent that we are able to modulate the relevant parameters by adopting such policies, we can be held responsible for our choices. Investigations of potential policy mechanisms and their relation to higher-level control should be a matter of priority for decision researchers. How policies are set, assessed, and revised are important elements for a neuroscientific theory of agency and a compatibilist theory of responsibility.

Prospection

A system that is self-governing must be able to care about its future state, and must be able to imagine or consider alternative potential future states in its processes of planning and decision-making. This ability to engage in what some refer to as “mental time travel” involves several subsidiary abilities: among them, the ability to imagine or think about the non-actual; the ability to identify relevant counterfactuals or possible futures; and the ability to envision the consequences of and deliberate about these non-actual situations [68, 69, 70].

The importance of prospection for self-governance cannot be overestimated. One example in which it promises to play an important role is in the exercise, and failures, of self-control. Philosophers have long been puzzled by the apparent possibility of akrasia or weakness of will: choosing to act in ways that one judges not to be in one’s best interest. Weakness of will is thought to be an example of irrational choice. If one’s theory of choice is that one always decides to pursue the option that has the highest value, and that it is rational to choose what one most values, it is hard to explain irrational choices. Apparent cases of weakness of will would really be cases of mistaken valuation: overvaluing an option that is in fact not the most valuable option. And indeed, if one cannot rationally criticize the strength of desires (see Hume’s famous observation that it is “not contrary to reason to prefer the destruction of the whole world to the scratching of my finger”), we cannot explain irrational choice.

Progress has been made on this question by the realization that we discount the value of options with time, so that rewards we expect in the distant future are less valuable to us now than the same rewards expected in the near future [47]. It was generally assumed that the temporal discounting function was exponential, which would imply that the relative values of different future rewards, regardless of their proximity, and thus our preferences among them, would remain stable over time. However, research has established that the temporal discounting function is not exponential, but hyperbolic [36]. Hyperbolic discounting makes options available in the short term temporarily more valuable, providing an explanation of impulsivity: options that we may value less in the long run may be valued more than other options if they are immediately available. The shape of the discounting curve thus can explain the phenomenon of akrasia: even though we may objectively (in the long run) value option A more than B, if B is immediate and A further in the future, B’s current value may eclipse A’s, and lead us to opt for B.
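
The difference between the two discounting functions, and why only the hyperbolic one predicts preference reversals, can be seen in a small worked example (the amounts, delays, and discount rates are arbitrary illustrative choices):

```python
# Worked example: exponential discounting preserves preference order over
# time; hyperbolic discounting can reverse it. All values are illustrative.
import numpy as np

def exponential(amount, delay, k=0.05):
    return amount * np.exp(-k * delay)     # V = A * exp(-k * D)

def hyperbolic(amount, delay, k=0.25):
    return amount / (1.0 + k * delay)      # V = A / (1 + k * D)

# Option B: 50 units soon; option A: 100 units five days after B.
for to_B in (10, 0):  # evaluated 10 days out, then at the moment of choice
    for name, f in (("exponential", exponential), ("hyperbolic", hyperbolic)):
        vB, vA = f(50, to_B), f(100, to_B + 5)
        pick = "A (larger-later)" if vA > vB else "B (smaller-sooner)"
        print(f"{to_B:2d} days out, {name:11s}: choose {pick}")
```

With these numbers, the exponential chooser picks the larger-later reward from both vantage points, while the hyperbolic chooser switches to the smaller-sooner option once it becomes imminent.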

If this is the case, why aren’t we always impulsive, always subject to weakness of will? Ainslie makes a compelling argument that the value of repeated choices can be summed prospectively: by seeing one future choice as potentially one in a series of future choices, the values of iterated outcomes can be summed, and the summed value may still outweigh the value of a more immediate choice [71, 72]. This is an example of how something akin to self-control could emerge from decision-making about prospective behavior: a process that could counteract the primacy of short-term rewards in decisions that have long-term implications [73, 74, 75]. There is evidence that we, and even rats and monkeys, can identify a series of prospective choices of a type, and can bundle the value of the series [36, 51, 71]. The value of a series is roughly equal to the sum of the discounted values of the individual choices [71].
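
A small calculation shows how bundling can work, again with arbitrary illustrative values: a single immediate reward of 50 beats 100 available five days later, but a prospective series of ten such weekly choices, summed under the same hyperbolic discount, favors the larger-later series.

```python
# Sketch of Ainslie-style bundling with illustrative numbers. The discount
# rate, reward sizes, and series length are assumptions.
def hyperbolic(amount, delay, k=0.25):
    return amount / (1.0 + k * delay)

# A single choice, made at the moment of temptation: the immediate reward wins.
print(hyperbolic(50, 0), ">", round(hyperbolic(100, 5), 1))   # 50.0 > 44.4

# Bundled: ten weekly episodes valued prospectively from today. Episode i
# offers 50 immediately (day 7i) or 100 five days later (day 7i + 5).
small_soon = sum(hyperbolic(50, 7 * i) for i in range(10))
larger_later = sum(hyperbolic(100, 7 * i + 5) for i in range(10))
print(round(small_soon, 1), "<", round(larger_later, 1))      # 113.1 < 146.5
```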

This process of prospection and bundling of future actions for purposes of valuation may be central to self-governance, for it makes sense of what it is for an entity to see itself as existing through time. What distinguishes humans from other animals may be the extent, effectiveness, and pervasiveness of our planning and our ability to foresee and to care about the distant future. All animals exhibit hyperbolic discounting, and some have the ability to forego short-term rewards for longer-term benefits. Pigeons and rats have shown the ability to bundle, and to make short-term decisions that favor longer-term benefits, but the temporal scope of these decisions for delayed gratification is on the order of seconds or minutes. In contrast, humans can make these kinds of compromises over temporal spans of years or even centuries.

It is likely that other aspects of our cognition that are not themselves decision-processes, but interact with them, are essential to the scope of this self-control ability. For example, our conceptual sophistication allows us to flexibly see future potential episodes of decision as of a kind: an opportunity to eat ice cream can be conceptualized as an opportunity to eat mocha ice cream, an opportunity to eat ice cream, an opportunity to eat ice cream on a special day (e.g. my birthday), an opportunity to eat something sweet, an opportunity to eat something sweet and fattening, an opportunity to eat dessert, an opportunity to eat, and so on. Depending on how we categorize that opportunity, it may or may not be bundled with other events: there will be many possible future events to bundle it with if it is categorized as an opportunity to eat, and many fewer if it is categorized as an opportunity to eat artisan ice cream on my twenty-first birthday. If this story is correct, an ability to flexibly conceptualize future events may be essential for self-control, and a richer ability to conceptualize may provide many more opportunities to exert (or fail to exert) self-control. These conceptual abilities are thus also going to be important for self-regulation, and how they are neurally realized is something we have yet to understand.

Moral Responsibility and Self-Governance

In the preceding sections I have tried to gesture at the types of processes that are likely to be coupled with known decision-related activity in order to make self-governance possible. Systems that are self-governing are also in an important sense autonomous or self-creating. Autonomy is intimately tied to questions of moral responsibility, and understanding the capacity for self-governance can help explain why it is appropriate to hold such a system morally responsible. In brief, self-governance requires that the system establish some set of ideals of adequate or desirable functioning, and that it establish a more-or-less coherent set of policies and structures to achieve or approach those ideals. This places the system in a normative framework from which it can be evaluated: it can be criticized with respect to the effectiveness of its policies in achieving these ideals, the adequacy with which the enacted structures allow it to implement its stated policies, and even the adequacy of its stated ideals, when such a system is compared to other systems. It is perhaps unsurprising that the organisms that are the best candidates for being considered self-governing in this sense tend to be social organisms that routinely interact with other such systems.

McGeer and Pettit [35] explain that the ability to recognize reasons and to act rationally for the sake of so acting likewise enables a system to act despite those constraints: just as metacognitive abilities allow people to actively pursue the constraints of rationality, they can also enable people to intentionally flout them. Thus, because we recognize the constraints of rationality, we can be held accountable for acting according to them or failing to. Our theories of moral responsibility add complexity to this basic normative element, for they are also highly context-dependent: sometimes this willful embrace of irrationality is criticizable, as in wishful thinking; other times it may be called for practically rational reasons, as when we trust others without clear evidence of their trustworthiness in order to foster relationships of trust. Thus, the capacity for self-governance grounds a basic notion of responsibility. The sophistication of the structures supporting self-governance may modulate the kinds of responsibility judgments that are warranted.7

Conclusion

Here I have taken for granted some contentious but plausible aspects of philosophy of mind: that brain states are representational states, and that they can be assessed for content. I recognize that neuroscience has not yet aligned itself with a particular theory of content (various elements of causal functional theories are appealing), and that one will need to be given to defend the attribution of mindedness. I also acknowledge that even when we assume a theory of content, there are worries that content in the absence of an adequate causal story will put pressure on aspects of our intuitive conception of mind or agency. Without a doubt, there are theoretical challenges to be met regarding the level at which causation occurs, as exemplified by Kim’s exclusion argument [78]. My aim here was much more humble than to alleviate these worries. It was not to demonstrate how mechanisms can be minded, but rather to suggest where we might make progress in seeing how this could be so. What I have suggested is that the picture neuroscience has given us thus far is consistent with a mechanistic picture of human brain function, but that that mechanistic picture may be a small part of a larger mechanistic yet self-governing system, and that such systems are likely rich enough to undergird notions of agency and mindedness, and to support normative notions of responsibility. The job for neuroscientists is to explore these hypotheses, and to determine whether and how such abstract characterizations are realized. Perhaps, when all is said and done, we will have a clearer picture of how mechanism can underlie, and not undercut, mindedness.

Footnotes

1. There is doubtless more to be said about what things count as mechanisms, but here my aim is only to gesture at the problem.

2. Of course, the fact that the neuroscientific picture is consistent with a purely mechanistic account does not (and cannot) empirically rule out the possibility that non-mechanistic “mental” phenomena also contribute. It is not that the neuroscience demonstrates that traditional mental explanations are untenable, but it does offer inductive support: generalizing from the success of the current model systems, we begin to suspect that nonmechanistic explanations are superfluous.

3. Here I use “intentional” as a technical term, as introduced by Dennett, and not colloquially, meaning “having intentions”.

4. Although language use provides clear evidence of metacognition, a capacity to verbally articulate the contents of one’s own mental states may not be necessary for the requisite kind of metacognition for self-regulation. There is some evidence that some nonlinguistic creatures nonetheless can plan, coordinate their behavior with others, and so on, and thus may also have the capacity for a degree of self-regulation. For example, monkeys are able to decide whether to opt out of a decision trial for a certain lesser reward in a way that provides evidence that they are able to represent and effectively assess their confidence in their motion judgments, a kind of metacognition [26]. Whether self-governance requires language is a further question: because of its breadth, the needed metacognitive capacities may indeed require linguistic abilities, for the broad representational powers of language provide a scope that exceeds that of domain-specific metarepresentational abilities.

5. Such analogies are not new. They go back as far as Plato, and are operative in the work of Marvin Minsky (Society of Mind) and in discussions of group agency (see List and Pettit, Group Agency).

6. This is not to be blind to the thesis that our desires are given, not chosen. To be sure, there are certain things that serve as rewards despite our second order desires for them not to be objects of desire. All creatures innately value the primary reinforcers. But given those innate triggers for our reward system, much work, from the classical conditioning of Pavlov to the present day, has shown that the realm of valuation, from what acts as a reward to the potency of that reward, is plastic and open to change, both deliberate and unintentional.

7. This structural view of the basis for moral responsibility has affinities with, for example, Frankfurt’s views on the will and moral responsibility [76, 77], but it allows for much more structural and dynamic complexity than a purely Frankfurtian view advocates.

Acknowledgments

This article was made possible through the support of grants from the NEH and the John Templeton Foundation via a Philosophy and Science of Self-Control grant. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the granting agencies.

References

  1. 1.
    Coyne, J. A. 2012. Column: Why you don’t really have free will – USATODAY.com, USA Today.
  2. 2.
    Cashmore, A.R. 2010. The Lucretian swerve: The biological basis of human behavior and the criminal justice system. Proceedings of the National Academy of Sciences 107(10): 4499–4504.CrossRefGoogle Scholar
  3. 3.
    Brembs, B. 2010. Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates. Proceedings of the Royal Society of London - Series B: Biological Sciences. doi: 10.1098/rspb.2010.2325.Google Scholar
  4. 4.
    Roskies, A.L. 2006. Neuroscientific challenges to free will and responsibility. Trends in Cognitive Sciences 10(9): 419–423.CrossRefGoogle Scholar
  5. 5.
    Craver, C.F. 2007. Explaining the brain: Mechanisms and the Mosaic unity of neuroscience. OXFORD: Clarendon Press.Google Scholar
  6. 6.
    Roskies, A. L. 2014. Can neuroscience resolve issues about free will? In Moral psychology volume 4: free will and moral responsibility, vol. 4, MIT Press.Google Scholar
  7. 7.
    Nahmias, E. 2011. Intuitions about free will, determinism, and bypassing. In The Oxford handbook of free will, 2nd ed., ed R. Kane, Oxford University Press.Google Scholar
  8. 8.
    Murray, D., and E. Nahmias. 2014. Explaining Away Incompatibilist Intuitions. Philosophy and Phenomenological Research 88(2): 434–467.CrossRefGoogle Scholar
  9. 9.
    Descartes, R. 1641. Meditations on first philosophy. Cambridge: Cambridge University Press.Google Scholar
  10. 10.
    Newsome, W.T., K.H. Britten, and J.A. Movshon. 1989. Neuronal correlates of a perceptual decision. Nature 341(6237): 52–54.CrossRefGoogle Scholar
  11. 11.
    Britten, K.H., W.T. Newsome, M.N. Shadlen, S. Celebrini, and J.A. Movshon. 1996. A relationship between behavioral choice and the visual responses of neurons in macaque MT. Visual Neuroscience 13(1): 87–100.CrossRefGoogle Scholar
  12. 12.
    Ditterich, J., M.E. Mazurek, and M.N. Shadlen. 2003. Microstimulation of visual cortex affects the speed of perceptual decisions. Nature Neuroscience 6(8): 891–898.CrossRefGoogle Scholar
  13. 13.
    Hanks, T.D., J. Ditterich, and M.N. Shadlen. 2006. Microstimulation of macaque area LIP affects decision-making in a motion discrimination task. Nature Neuroscience 9(5): 682–689.CrossRefGoogle Scholar
  14. 14.
    Gold, J.I., and M.N. Shadlen. 2007. The neural basis of decision making. Annual Review of Neuroscience 30(1): 535–574.CrossRefGoogle Scholar
  15. 15.
    Colby, C.L., and M.E. Goldberg. 1999. Space and attention in parietal cortex. Annual Review of Neuroscience 22(1): 319–349.CrossRefGoogle Scholar
  16. 16.
    Mazzoni, P., R.M. Bracewell, S. Barash, and R.A. Andersen. 1996. Motor intention activity in the macaque’s lateral intraparietal area. I. Dissociation of motor plan from sensory memory. Journal of Neurophysiology 76(3): 1439–1456.CrossRefGoogle Scholar
  17. 17.
    Snyder, L.H., A.P. Batista, and R.A. Andersen. 2000. Intention-related activity in the posterior parietal cortex: a review. Vision Research 40(10–12): 1433–1441.CrossRefGoogle Scholar
  18. 18.
    Roitman, J.D., and M.N. Shadlen. 2002. Response of Neurons in the Lateral Intraparietal Area during a Combined Visual Discrimination Reaction Time Task. The Journal of Neuroscience 22(21): 9475–9489.CrossRefGoogle Scholar
  19. 19.
    Huk, A.C., and M.N. Shadlen. 2005. Neural activity in macaque parietal cortex reflects temporal integration of visual motion signals during perceptual decision making. Journal of Neuroscience: The Official Journal of the Society for Neuroscience 25(45): 10420–10436.CrossRefGoogle Scholar
  20. 20.
    Busemeyer, J.R., R.K. Jessup, J.G. Johnson, and J.T. Townsend. 2006. Building bridges between neural models and complex decision making behaviour. Neural Networks 19(8): 1047–1058.CrossRefGoogle Scholar
  21. 21.
    Heekeren, H.R., S. Marrett, P.A. Bandettini, and L.G. Ungerleider. 2004. A general mechanism for perceptual decision-making in the human brain. Nature 431(7010): 859–862.CrossRefGoogle Scholar
  22. 22.
    Rorie, A.E., and W.T. Newsome. 2005. A general mechanism for decision-making in the human brain? Trends in Cognitive Sciences 9(2): 41–43.CrossRefGoogle Scholar
  23. 23.
    Cui, H., and R.A. Andersen. 2011. Different Representations of Potential and Selected Motor Plans by Distinct Parietal Areas. The Journal of Neuroscience 31(49): 18130–18136.CrossRefGoogle Scholar
  24. 24.
    Standage, D., G. Blohm, and M.C. Dorris. 2014. On the neural implementation of the speed-accuracy trade-off. Decision Neuroscience 8: 236.Google Scholar
  25. 25.
    Kiani, R., C.J. Cueva, J.B. Reppas, and W.T. Newsome. 2014. Dynamics of Neural Population Responses in Prefrontal Cortex Indicate Changes of Mind on Single Trials. Current Biology 24(13): 1542–1547.CrossRefGoogle Scholar
  26. 26.
    Kiani, R., and M.N. Shadlen. 2009. Representation of confidence associated with a decision by neurons in the parietal cortex. Science 324(5928): 759–764.CrossRefGoogle Scholar
  27. 27.
    Shadlen, M.N., and A.L. Roskies. 2012. The neurobiology of decision-making and responsibility: reconciling mechanism and mindedness,” Front. Decision Sciences 6(56): 1–12.Google Scholar
  28. 28.
    Neubert, F.-X., R.B. Mars, J. Sallet, and M.F.S. Rushworth. 2015. Connectivity reveals relationship of brain areas for reward-guided learning and decision making in human and monkey frontal cortex. Proceedings of the National Academy of Sciences 112(20): E2695–E2704.CrossRefGoogle Scholar
  29. 29.
    Wallis, J.D. 2012. Cross-species studies of orbitofrontal cortex and value-based decision-making. Nature Neuroscience 15(1): 13–19.CrossRefGoogle Scholar
  30. 30.
    Fisher, C.M. 2001. If there were no free will. Medical Hypotheses 56(3): 364–366.CrossRefGoogle Scholar
  31. 31.
    Sternberg, E.J. 2010. My brain made me do it: the rise of neuroscience and the threat to moral responsibility. Amherst: Prometheus Books.Google Scholar
  32. 32.
    Montague, P.R. 2008. Free will. Current Biology 18(14): R584–R585.CrossRefGoogle Scholar
  33. 33.
    Spinney, L. 2004. I’m not guilty - but my brain is. The Guardian. August 12.Google Scholar
  34. 34.
    Dennett, D.C. 1989. The intentional stance. MIT: Reprint edition.Google Scholar
  35. McGeer, V., and P. Pettit. 2002. The self-regulating mind. Language & Communication 22: 281–299.
  36. Ainslie, G. 2011. Free will as recursive self-prediction: does a deterministic mechanism reduce responsibility? In Addiction and responsibility, eds. J. Poland and G. Graham, 55–88. The MIT Press.
  37. Jocham, G., P.M. Furlong, I.L. Kröger, M.C. Kahn, L.T. Hunt, and T.E.J. Behrens. 2014. Dissociable contributions of ventromedial prefrontal and posterior parietal cortex to value-guided choice. NeuroImage 100: 498–506.
  38. Waskom, M.L., D. Kumaran, A.M. Gordon, J. Rissman, and A.D. Wagner. 2014. Frontoparietal representations of task context support the flexible control of goal-directed cognition. The Journal of Neuroscience 34(32): 10743–10755.
  39. Woolgar, A., S. Afshar, M.A. Williams, and A.N. Rich. 2015. Flexible coding of task rules in frontoparietal cortex: an adaptive system for flexible cognitive control. Journal of Cognitive Neuroscience 27(10): 1895–1911.
  40. Fitzgerald, J.K., S.K. Swaminathan, and D.J. Freedman. 2012. Visual categorization and the parietal cortex. Frontiers in Integrative Neuroscience 6: 18.
  41. Crowe, D.A., S.J. Goodwin, R.K. Blackman, S. Sakellaridi, S.R. Sponheim, A.W. MacDonald III, and M.V. Chafee. 2013. Prefrontal neurons transmit signals to parietal neurons that reflect executive control of cognition. Nature Neuroscience 16(10): 1484–1491.
  42. Schultz, W. 2015. Neuronal reward and decision signals: from theories to data. Physiological Reviews 95(3): 853–951.
  43. Louie, K., and P.W. Glimcher. 2012. Efficient coding and the neural representation of value. Annals of the New York Academy of Sciences 1251(1): 13–32.
  44. Botvinick, M., and T. Braver. 2015. Motivation and cognitive control: from behavior to neural mechanism. Annual Review of Psychology 66(1): 83–113.
  45. Bronfman, Z.Z., N. Brezis, R. Moran, K. Tsetsos, T. Donner, and M. Usher. 2015. Decisions reduce sensitivity to subsequent information. Proceedings of the Royal Society B 282(1810). doi: 10.1098/rspb.2015.0228.
  46. Glimcher, P.W., and E. Fehr. 2013. Neuroeconomics: decision making and the brain, 2nd edn. Amsterdam: Academic Press.
  47. Schultz, W. 2010. Subjective neuronal coding of reward: temporal value discounting and risk. The European Journal of Neuroscience 31(12): 2124–2135.
  48. Platt, M.L., and P.W. Glimcher. 1999. Neural correlates of decision variables in parietal cortex. Nature 400(6741): 233–238.
  49. Smith, D.V., B.Y. Hayden, T.-K. Truong, A.W. Song, M.L. Platt, and S.A. Huettel. 2010. Distinct value signals in anterior and posterior ventromedial prefrontal cortex. The Journal of Neuroscience 30(7): 2490–2495.
  50. McNamee, D., A. Rangel, and J.P. O’Doherty. 2013. Category-dependent and category-independent goal-value codes in human ventromedial prefrontal cortex. Nature Neuroscience 16(4): 479–485.
  51. Steinbeis, N., J. Haushofer, E. Fehr, and T. Singer. 2016. Development of behavioral control and associated vmPFC–DLPFC connectivity explains children’s increased resistance to temptation in intertemporal choice. Cerebral Cortex 26(1): 32–42.
  52. de Lange, F.P., S. van Gaal, V.A.F. Lamme, and S. Dehaene. 2011. How awareness changes the relative weights of evidence during human decision-making. PLoS Biology 9(11): e1001203.
  53. Herrington, T.M., and J.A. Assad. 2009. Neural activity in the middle temporal area and lateral intraparietal area during endogenously cued shifts of attention. The Journal of Neuroscience 29(45): 14160–14176.
  54. Hanks, T.D., C.D. Kopec, B.W. Brunton, C.A. Duan, J.C. Erlich, and C.D. Brody. 2015. Distinct relationships of parietal and prefrontal cortices to evidence accumulation. Nature 520(7546): 220–223.
  55. Mante, V., D. Sussillo, K.V. Shenoy, and W.T. Newsome. 2013. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503(7474): 78–84.
  56. Roy, J.E., T.J. Buschman, and E.K. Miller. 2014. PFC neurons reflect categorical decisions about ambiguous stimuli. Journal of Cognitive Neuroscience 26(6): 1283–1291.
  57. Buschman, T.J., and E.K. Miller. 2014. Goal-direction and top-down control. Philosophical Transactions of the Royal Society B 369(1655): 20130471.
  58. von Helversen, B., L. Karlsson, B. Rasch, and J. Rieskamp. 2014. Neural substrates of similarity and rule-based strategies in judgment. Frontiers in Human Neuroscience 8: 809.
  59. Etzel, J.A., M.W. Cole, J.M. Zacks, K.N. Kay, and T.S. Braver. 2015. Reward motivation enhances task coding in frontoparietal cortex. Cerebral Cortex. doi: 10.1093/cercor/bhu327.
  60. Rainer, G., W.F. Asaad, and E.K. Miller. 1998. Selective representation of relevant information by neurons in the primate prefrontal cortex. Nature 393(6685): 577–579.
  61. Dosenbach, N.U.F., K.M. Visscher, E.D. Palmer, F.M. Miezin, K.K. Wenger, H.C. Kang, E.D. Burgund, A.L. Grimes, B.L. Schlaggar, and S.E. Petersen. 2006. A core system for the implementation of task sets. Neuron 50(5): 799–812.
  62. McKee, J.L., M. Riesenhuber, E.K. Miller, and D.J. Freedman. 2014. Task dependence of visual and category representations in prefrontal and inferior temporal cortices. The Journal of Neuroscience 34(48): 16065–16075.
  63. Wisniewski, D., C. Reverberi, A. Tusche, and J.-D. Haynes. 2015. The neural representation of voluntary task-set selection in dynamic environments. Cerebral Cortex 25(12): 4715–4726.
  64. Standage, D., D.-H. Wang, and G. Blohm. 2014. Neural dynamics implement a flexible decision bound with a fixed firing rate for choice: a model-based hypothesis. Frontiers in Neuroscience 8: 318.
  65. Drugowitsch, J., R. Moreno-Bote, A.K. Churchland, M.N. Shadlen, and A. Pouget. 2012. The cost of accumulating evidence in perceptual decision making. The Journal of Neuroscience 32(11): 3612–3628.
  66. Churchland, A.K., and J. Ditterich. 2012. New advances in understanding decisions among multiple alternatives. Current Opinion in Neurobiology 22(6): 920–926.
  67. Churchland, A.K., R. Kiani, and M.N. Shadlen. 2008. Decision-making with multiple alternatives. Nature Neuroscience 11(6): 693–702.
  68. Buckner, R.L., and D.C. Carroll. 2007. Self-projection and the brain. Trends in Cognitive Sciences 11(2): 49–57.
  69. Mullally, S.L., and E.A. Maguire. 2014. Memory, imagination, and predicting the future: a common brain mechanism? The Neuroscientist 20(3): 220–234.
  70. Szpunar, K.K., R.N. Spreng, and D.L. Schacter. 2014. A taxonomy of prospection: introducing an organizational framework for future-oriented cognition. Proceedings of the National Academy of Sciences 111(52): 18414–18421.
  71. Ainslie, G., and J.R. Monterosso. 2003. Building blocks of self-control: increased tolerance for delay with bundled rewards. Journal of the Experimental Analysis of Behavior 79(1): 37–48.
  72. Ainslie, G. 2013. Intertemporal bargaining in addiction. Frontiers in Psychology 4: 1–5.
  73. Liu, L., T. Feng, J. Chen, and H. Li. 2013. The value of emotion: how does episodic prospection modulate delay discounting? PLoS ONE 8(11): e81717.
  74. Jimura, K., M.S. Chushak, and T.S. Braver. 2013. Impulsivity and self-control during intertemporal decision making linked to the neural dynamics of reward value representation. The Journal of Neuroscience 33(1): 344–357.
  75. Sasse, L.K., J. Peters, C. Büchel, and S. Brassen. 2015. Effects of prospective thinking on intertemporal choice: the role of familiarity. Human Brain Mapping 36(10): 4210–4221.
  76. Frankfurt, H.G. 1969. Alternate possibilities and moral responsibility. Journal of Philosophy 66(23): 829–839.
  77. Frankfurt, H.G. 1971. Freedom of the will and the concept of a person. Journal of Philosophy 68(1): 5–20.
  78. Kim, J. 2000. Mind in a physical world: an essay on the mind-body problem and mental causation, reprint edn. Cambridge: A Bradford Book/MIT Press.

Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. Department of Philosophy, Dartmouth College, Hanover, USA
