Extended cognition, personal responsibility, and relational autonomy
- Cash, M. Phenom Cogn Sci (2010) 9: 645. doi:10.1007/s11097-010-9177-8
The Hypothesis of Extended Cognition (HEC)—that many cognitive processes are carried out by a hybrid coalition of neural, bodily and environmental factors—entails that the intentional states that are reasons for action might best be ascribed to wider entities of which individual persons are only parts. I look at different kinds of extended cognition and agency, exploring their consequences for concerns about the moral agency and personal responsibility of such extended entities. Can extended entities be moral agents and bear responsibility for actions, in addition to or in place of the individuals typically held responsible? What does it mean to be “autonomous” when one’s cognition is influenced and supported by a milieu of environmental factors? To answer these questions, I explore strong parallels between HEC’s critique of individualism in cognition, and feminist critiques of individualist accounts of self, agency, and autonomy. This relational and social conception of autonomous agency, as scaffolded and supported (or undermined and impaired) by a milieu of social, relational, and normative factors, has important lessons for HEC. Drawing together these two visions of distributed and decentralized aspects of personhood highlights how cognition, action, and responsibility are inextricably linked. It also encourages a reconceptualization of all cognition and all concerns about responsibility for actions, not simply as sometimes “extended” around individuals, but as fundamentally communal, social, and normative, with individual cognition and individual moral responsibility being derivative special cases, not the paradigm examples. Individuals are merely one of many possible loci of cognition, action, and responsibility.
Keywords: Extended cognition · Moral agent · Responsibility · Collective responsibility · Autonomy · Relational self
Hypothesis of Extended Cognition
“We are responsible for boundaries. We are they.”
—Donna Haraway (1991)
“A Cyborg Manifesto”
While elaborating and defending the Hypothesis of Extended Cognition (HEC)—with its vision of human cognition as supported by promiscuously opportunistic, soft-assembled, hybrid coalitions of neural, bodily, and environmental elements—Clark (2001, p. 141) mentions in passing that “we need (I suspect) an account of personal responsibility and moral agency which respects the thin, decentralized and distributed nature of situated intelligent control”. I think he’s right. HEC certainly questions the traditionally dominant assumption of individualism in philosophy of mind and action (Wilson 2004). HEC entails that some of the intelligent control of action—indeed some of the intentional states that are the reasons for action—are best seen as distributed across a system of which an individual person is only a part. This also seems to challenge our traditional individualistic conceptions of moral agency and responsibility. Are the requirements for being a moral agent—an agent that autonomously governs its actions based, in part, on reflection on its goals and values, and on a conception of what it ought to do—affected by this hybrid vision of cognition and action? Could a decentralized and distributed “extended” system that controls action itself be a moral agent? Does the influence of these external factors affect the moral agency of the individuals involved in such systems? Furthermore, could taking seriously HEC’s hybrid conception of agency reveal problems with our practice of concentrating moral responsibility on individual persons? Can entities wider than individual persons bear moral responsibility in addition to or instead of the responsibility of the individuals involved?1
While exploring these questions, I will also highlight important and so far underappreciated parallels between arguments for HEC and contemporary feminist arguments for nonindividualistic, relational, and socially constituted conceptions of self, autonomous agency, and responsibility. I draw some important conclusions from this exploration for the conception of HEC and the very idea of the bounds of cognition.
Three approaches to intentionality and agency
We can begin by distinguishing three different conceptions of the relationship between intentional states, actions, and responsibility for actions. The first, internalist, conception takes intentional states to be reducible to brain states that have “genuine” or “intrinsic” or “nonderived” intentionality based on their neurological properties and/or their functional roles in producing and guiding behavior. Actions are thus bodily movements caused by such “intrinsically” intentional brain states and processes. The responsible agent is the individual whose brain contains the causally efficacious intentional state.2
This kind of account seems incompatible with HEC at the outset. Both Clark (2005, 2007a, 2008) and Menary (2006) argue that situating all intentionality purely in neurological states and processes begs the question against the view that entities larger than individual human beings might be appropriately seen as agents that have intentional states and might be responsible for their actions.
This view contrasts with an ascriptive conception of intentionality and agency, such as that supported by Dennett (1971, 1991). On this view, an agent is an entity whose movements an observer can explain by adopting the intentional stance and ascribing to it contentful intentional states that make rational sense of its actions. For Dennett (1991), such ascriptions are justified to the extent that these explanations give the observer a pragmatic advantage (over the physical stance, as well as over alternative theories of intentional states) in explaining and predicting the agent’s actions.3 Such ascriptions of contentful intentional states are, for Dennett, based in evolutionarily derived, subjective norms of rational action. Gibbard (1990, 2003) defends a similar view. Biological and cultural evolution has given humans (who gain a practical benefit from predicting the actions of others) a somewhat common subjective sense of “rationality” that leads us each to have expectations about what it would be rational to believe and to do in particular situations, and which we each use to ascribe intentional states to others.
Most defenders of HEC, including Clark, seem to adopt an ascriptive approach like this. While individual organisms are the prototypical targets of ascriptions of intentionality and agency, they might not be the only targets of such ascriptions. It is at least conceivable that a system that extends beyond an individual agent could have intentional states ascribed to it as reasons. HEC posits that this is so. For instance, Clark and Chalmers give the example of Otto, who compensates for deficits with his neurological memory by using a notebook, arguing that the dispositional belief that the Museum is on 53rd Street is best ascribed to the integrated system constituted by Otto and his notebook.
I believe, and have argued elsewhere (Cash 2008; see also Brandom 1994; Haugeland 1990), that a third, normative, account of agency and intentionality has advantages over both internalist and ascriptive conceptions. Dennett and Gibbard hold that ascriptions of intentional states to an agent are made by an isolated observer who justifies those ascriptions pragmatically. On this normative view, in contrast, the paradigmatic cases of such ascriptions are made by another member of the agent’s linguistic and normative community; the ascriptions abide by, and are justified by, the norms of that community’s practice of giving intentional states as reasons for actions. This practice is firmly situated in and supported by that community’s shared, public language, with its norms regulating the appropriate uses of words to give content to intentional states. Most importantly, the shared norms of this social and linguistic practice regulate both agents and observers, usually as part of cooperative activities. As Morton (2003, p. viii) puts it, “we make ourselves intelligible, in order to have the benefits of being understood” by other members of our community. Many of the norms of this practice are tacit norms rather than explicitly followed rules; they are instituted by a community’s agreement in judgments about the propriety of particular actions, utterances, and ascriptions of reasons. And often the ascriptions themselves are also tacit, not explicitly thought or expressed but simply enacted in our responses to others’ actions. This practice constrains which ascriptions of intentional states an observer is licensed to make on the basis of the agent’s behavior. But these norms also constrain the further actions of the agent: agents who recognize that observers are licensed to ascribe particular intentional states to them ought to take themselves to be committed to further actions consistent with those intentional states.
If I say to you that I intend to go for a walk, I should recognize that this utterance licenses you to ascribe to me the intention to go for a walk; I have licensed you to expect me to go for a walk, and thus I have placed myself under a commitment (ceteris paribus) to go for a walk.
I will not here give a thorough defense of this normative approach (I give one in Cash 2008); for much of what I argue here, the distinctions between the ascriptivist and normative accounts are subtle and not particularly salient. Here I give only this brief sketch, illustrating its suitability for making sense of the connections between intentional states and agency, on the one hand, and moral agency and personal responsibility, on the other. My initial task here is to explore the implications of accepting HEC for the ascriptivist and normative conceptions of cognition and of agency, and for the account of moral responsibility that aligns with either conception.
However, in later sections of this paper, I will argue that accepting the normative account of intentionality and agency helps us appreciate ways that concerns about the locus of cognition and about responsibility for action are deeply entwined. Here, the propriety of individuating an event as an action, and of ascribing intentional states to the agent of that action, is not grounded in the theoretical stance of an individual observer applying subjective norms of rational explanation and rational action (as Dennett, Gibbard and Clark4 hold). Rather, it is grounded in the shared, intersubjective norms of the social and linguistic practice of ascribing intentional states to one another as reasons for actions. This practice constitutes certain entities as persons (entities that can act for reasons), constitutes some events as intentional actions (ones for which asking those persons for reasons is appropriate), constitutes some intentional actions as a particular person’s responsibility (ones for which it is appropriate to ask that person for reasons, and to sanction, applaud, or excuse that person for performing), and constitutes some entities as moral agents (entities whose actions are subject to moral evaluation; it makes sense to say that they know they ought to act in particular ways).
Diminished moral agency and personal responsibility?
The question we need to settle, then, is whether it is possible and reasonable for this practice to recognize entities wider than individual human beings as persons in this sense of being responsible moral agents. Answering this question will depend on the relationship between any individuals involved and the wider extended entities that HEC might encourage us to see (also) as the responsible agent. It seems clear that an action could be performed by what Wilson (2001) calls a “radically wide” system: a system in which parts “most readily identifiable as playing a crucial role in producing or sustaining” the action are external to an individual person (Wilson 2001, p. 8). Clark seems to hold that the reasons ascribable to an extended agent can depend crucially (however we cash this term out) upon entities external to any individual person. For instance, in Clark and Chalmers’ (1998) example, Otto has memory problems and depends upon trusted information he has recorded in a habitually consulted notebook. The standing dispositional belief that the Museum of Modern Art is on 53rd Street, according to Clark, is appropriately ascribed to the extended system of Otto and his notebook, not simply ascribed as a temporary occurrent belief held by the embodied and embedded person Otto while he is consulting his notebook (e.g., Clark 2008, p. 98). The notebook, which is external to Otto as an individual person, is one of the crucial constituents of the system to which the belief that the museum is on 53rd Street is appropriately ascribed. And since this belief is appropriately ascribed as the reason for the action of going to 53rd Street, this extended system of Otto and his notebook is perhaps best seen as the agent of that action.
In most cases of extended agency like this, we can already assign responsibility well. As Clark (2003) argues, we have long been “Natural Born Cyborgs” and so have always appreciated that agents use external equipment to perform certain actions. The practice of giving and accepting reasons for action incorporates much hard-won wisdom about conditions under which “extenuating circumstances” should excuse or exculpate the person from (full) responsibility. A potentially challenging kind of hybrid agency, however, is radically wide agency, in which the environmental components of the system that produced the action are a “crucial” part of the system and are beyond the individual’s control. “It’s not my fault that I hit the pedestrian while driving. My brakes suddenly failed.” In such a case, the individual person can appear to lack sufficient control over the event for it to count as their intentional action. In such cases, holding the individual responsible is debatable, to the extent that it is debatable whether that external component is within the individual’s control. For instance, if the brakes failed because the driver/owner had not performed the kind of regular maintenance that would have prevented the failure, then the individual might nonetheless be appropriately held responsible, because the failure was preventable by things a responsible driver/owner ought to do. But if the brakes failed because a sharp object was accidentally propelled into the brake hose, such that the brake system lost all pressure, then hitting the pedestrian would be an excusable tragic accident that could have happened to even the most careful driver/owner.
Thus it seems that HEC would not significantly affect personal responsibility for actions in many such cases; controversial ascriptions of responsibility would still be somewhat controversial, based on debate about whether the extended agent had crucial parts outside the individual and not sufficiently under the individual’s control. These concerns are already parts of traditional debates over mitigating circumstances.
However, some “radically wide” agency might be seen to excuse the individual involved, not because the system’s action was not under the individual’s control, but because the standard model of an individual intentionally acting for reasons does not apply well to the situation. As Doris (2009) eloquently shows, there are many situations in which persons fail to exemplify the “reflectivist” model of action, in which a person reflects on their values and their beliefs about a situation, and deliberately chooses to perform an action that reflects those values and beliefs. In such situations, the causes of the agent’s “actions” are often not their explicitly entertained intentional states but environmental factors that cue the behavior at a rather subconscious level, beyond the individual’s awareness. An example might be Bargh et al.’s (1996) study of subconscious stereotype priming, in which people were subconsciously primed for either the concept of rudeness or of politeness by a task involving making sentences from sets of supplied words, in which approximately half the 30 sets contained a single word related to the concept. Subjects were then asked to go to a different room to get a subsequent task from a confederate who, apparently deep in conversation with an assistant, ignored the subject for up to ten minutes or until the subject interrupted. Those subjects whose concept of rudeness was primed were significantly more likely to interrupt this conversation than controls primed for neither concept, and those primed by words related to politeness were significantly less likely to interrupt the conversation. The subjects did not show any conscious awareness of the priming, and their explicit judgments about the experimenter’s rudeness or politeness showed no significant differences among subjects. 
The priming subconsciously influenced the subjects’ behavior, but not the kinds of consciously consulted beliefs and values that it would normally be appropriate to ascribe as the subjects’ reasons for waiting or interrupting.
In situations like these, social behavior for which we might ordinarily take individuals to be praiseworthy or blameworthy seems strongly influenced by factors of which the agents were not at all conscious. Thus it might seem quite reasonable not to ascribe to the subjects, as their reasons for acting, any explicitly consulted beliefs about the virtue of helping or the impropriety of interrupting others. And if so, perhaps there is a case to be made that the subjects do not deserve any credit or blame for what happened, since it may not even qualify as an action performed for reasons. In these situations, the subjects’ moral agency is, perhaps, somewhat impaired. The politeness or rudeness could be described as caused by the situation, rather than as the result of the subjects consciously reflecting on their beliefs and values and choosing to act in accordance with those beliefs and values.
In contrast with this kind of situation, which might simply undermine some aspect of moral agency, other situations could be similar in structure to cases in which coercion mitigates individuals’ responsibility. We do not necessarily hold the bank teller fully responsible for giving the armed robber the bank’s money, because we recognize the coercion as mitigating the teller’s responsibility. Could some situations be implicitly coercive in ways that also mitigate individuals’ responsibility? Milgram’s (1974) experiment might be a good example of a somewhat implicitly coercive social situation, in which a “core” cause of an individual’s action might be external, though here the “core” cause is the social situation and group dynamics rather than the physical situation. In this experiment, subjects, recruited for what they were told was a study of learning involving negative reinforcement of errors, administered what they believed were increasing levels of electrical shock for incorrect answers given by another person (a confederate who acted shocked, though no shock was actually administered). When subjects wanted to stop, the experimenter, an authority figure, repeated “the experiment requires that you continue”. Sixty-five percent of the subjects capitulated to the instruction and continued to administer what they believed to be potentially fatal 450-volt shocks to someone they thought was a volunteer experimental subject like themselves. In the light of the large number of Milgram’s subjects who followed the instruction to continue, it may be reasonable to see Milgram’s experiment as revealing aspects of certain social situations to be more coercive than previously recognized, rather than simply interpreting it as evidence of how subjects mistakenly abdicate responsibility that is truly theirs. Would it be reasonable to see Milgram’s subjects as less responsible, if the situation were judged to be coercive?
Would it be reasonable to see the wider system of the experimental situation plus individuals as producing the shocks, in a way that somewhat mitigates or excuses the individuals from full and sole responsibility?
Can radically wide agency diminish personal responsibility?
It’s important to note that I am not here arguing that the radically wide extension of agency in these cases should absolve individuals of responsibility. My point is merely that HEC seems to entail the possibility of exculpation or excuse in at least some cases like these, in which actions are not wholly the individual person’s actions. But even in such cases, there are nonetheless very good moral, pragmatic, and social reasons to hold the individuals in such situations responsible.
A skeptic might take these cases to an extreme and object that, if we allow that external situations might mitigate or exculpate the individuals involved, then it would follow that no individual is responsible for anything they do, since antecedent conditions can always be shown to play an important role in initiating an action. For example: “I’m not responsible. My genes and my environment made me the kind of person who does that” (e.g., Blatchford 1918). If HEC licensed such widespread exculpation, this might be taken as a reductio ad absurdum of the hypothesis.
Two responses are open to this objection, however. One is to show that the slope is not as slippery as it might appear by identifying limits on how far, or in what situations, agency and responsibility can legitimately be extended. In a similar vein, Clark repeatedly blocks slippery-slope arguments against HEC that would lead to the reductio ad absurdum that, on HEC, my cognitive system extends to every agent I have ever conversed with and to every piece of information found in the library or on the internet (e.g., Brook 2006). Clark describes what he takes to be reasonable boundaries to cognitive extension, arguing that only external equipment that plays a relevantly similar functional role to unconscious parts of one’s biological brain should be considered part of an agent’s cognitive system. This would include only external equipment that is habitually, fluently, and transparently used, is available as and when needed, and whose results are as trusted as information recalled from memory (Clark 1997, p. 217, 2005, p. 3, 2008, pp. 79–81; Clark and Chalmers 1998, p. 17). It seems reasonable to hold that similar limits on the extension of responsibility could be argued for. (Although I have not attempted to describe such limits here, they could perhaps be based on the degree of control the individual has over the situation, the level of awareness they have of the role of situational influence, and on how generally coercive the situation is: would almost anyone act similarly?) This would halt the slippery slope at a limited range of cases, without generally exculpating every individual from responsibility for any action with which external influences were somehow involved.
A more effective response, however, is to stress that absolving individuals of any responsibility for what outside forces caused them to do forfeits the force of socialization, or “responsibilization” as Pettit (2007, p. 177) calls it, which is also a justification for sanctioning individuals. Even though young children may not yet have developed all of the habits and skills needed for full moral responsibility, we have good reason to hold them responsible anyway, because the practice of doing so is intended to motivate individuals to improve their decision making, to encourage people to become responsible moral agents. Individual human beings (usually) develop into responsible moral persons through socialization; they rise to meet the expectations their caregivers have for them. As we develop, we gradually take responsibility for our actions because of the social benefits of freedom and trust earned by showing that we can be responsible members of the community. (Teenagers who want the freedom to drive a car are motivated to show they are responsible enough to be allowed to drive.) There is also a social cost to not following these norms, in the form of sanctions that others are entitled to apply in cases of norm infraction.
A defense of “I cannot help it, my genes and society made me that way” or “my environment made me do it” abdicates the responsibility for attempting to improve and to earn the right to be trusted, a right that a society otherwise accords to all members who have shown themselves capable of living up to the responsibility that this trust assumes. Of course, a newcomer from a different community with different norms, for instance, could bypass much of the socialization members of their new community might get. Such a person could easily infringe against important norms of their new community by following the norms of the community in which they were raised. But for such a newcomer to argue that they should be excused on this ground is effectively for them to argue that they have no responsibility to compensate for this history by attempting to learn the norms of their new community and to adapt to these norms. This would effectively be to argue that they should not be trusted and so not accorded much responsibility or freedom. To take such a defense too seriously would be to give up on the potential for taking responsibility for such a condition and for attempting to minimize its potential negative effects.
Recognition of this potential for improvement, and thus for taking control over factors that otherwise might be external influences on one’s actions, is crucial. It can enable us to admit that the potential for systems wider than individual agents to produce actions does not automatically absolve individuals of all responsibility. A forward-looking justification for practices of punishment would support sometimes applying relevant sanctions even to people with a history or environment that might undermine their ability to act responsibly, in order to increase their ability to do so. Praising the politeness and discouraging the rudeness of subjects in Bargh et al.’s (1996) study of subconscious stereotype priming might still be worthwhile, even if the behavior was subconsciously “caused” by environmental factors that primed them for rudeness or politeness. Furthermore, expecting people to exercise some autonomy even in relatively coercive situations encourages people to do the right thing, even when it is difficult or socially awkward to do so. In general, encouraging good behavior, discouraging bad behavior, and encouraging people to resist ceding their autonomy in socially coercive situations, is worthwhile, because that social expectation is a further external influence that reinforces the value of—and increases the likelihood of—responsible, autonomous moral agency.
Furthermore, there is potential for us to use this knowledge of potential external influences to reinforce good behavior. Understanding positive or negative influences on people’s behavior might lead people to seek out or avoid such influences and even manipulate themselves and one another either into situations that cause them to behave well or away from causes of bad behavior. This kind of ability to manage the effects of influences around us, to mitigate some and amplify others, is an important component of human autonomy, to which I will return in a later section.
Thus radically extended agency does raise some concerns about personal responsibility for actions. However, in many such cases, a focus on the individuals at the center of such extended agents can still be justified. And in cases in which the notion of a radically extended agent does seem to undermine the fairness of holding the individual responsible, an individualistic account of moral responsibility would face similar concerns regarding mitigating circumstances that might reduce individuals’ personal responsibility. HEC-inspired accounts, furthermore, have the advantage of explicitly encouraging moral evaluation of the environmental factors that interact with individuals to produce good and bad behavior, rather than simply focusing on individuals’ behavior. In addition to praising and sanctioning individuals, we might have a moral obligation also to encourage good behavior by explicitly manipulating environments in ways that induce individuals to behave well. Perhaps we also ought to identify and modify the kinds of environments that tend to produce bad behavior. The educational initiatives, socialization efforts, ethical theorizing, and moral norms in our societies should be seen as part of the system that produces morally valenced behavior.
Group agents and collective responsibility
(a) The National Aeronautics and Space Administration (NASA) planned and completed a successful mission to the International Space Station.
(b) A paper with 67 coauthors employs an experimental design that emphasizes the positive benefits of ABC’s new drug and minimizes discovery of serious risk factors that other researchers have found.
(c) XYZ Corporation issued a new version of its popular software and regrets the number of problems that bugs caused to customers who installed it.
(d) Although they were behind at the close of the first half, the football team worked together to win the championship game.
(e) During the New York draft riots of 1863, the angry mob burned 50 buildings to the ground.
(f) The people of the industrialized nations of the world are largely responsible for global climate change.
(g) The University Curriculum Committee approved the proposed changes to the Philosophy Major program.
Consider (a). No single individual has the knowledge and skills required to successfully design, construct, launch, navigate, pilot, and land a space shuttle. This ability belongs to NASA as an organization rather than to any individual member. The principal credit for a successful mission is most appropriately ascribed to NASA as an organization, rather than to any particular pilot, manager, or engineer. Similarly, the blame for an unsuccessful mission might also be ascribed to NASA as an organization. The Presidential commission on the Challenger Disaster (Rogers et al. 1986) found that NASA’s organizational culture and group decision-making processes were a significant contributing cause of the disaster. Thus it might be said that NASA, as an organization, bore at least a significant portion of the responsibility for the disaster.
Cases like this, in which complex systems are produced by many people, are often lamented as examples of the “many hands” problem (Thompson 1980), in which individually reasonable decisions can lead to disaster collectively. Such cases are often bemoaned from an individualistic perspective as involving a diffusion of responsibility, in which it is difficult to justify ascription of responsibility to any particular individuals. Consider (b), for instance. If there are problems with a multi-authored paper’s methodology or conclusions,8 it is often difficult to pin the responsibility on any particular author, since many different people contributed to the project in differing amounts, in many different ways. From the perspective of HEC and collective responsibility, however, we might avoid the problematic desire to “scapegoat” particular individuals and look instead to the organization as a whole as the responsible agent.
This will be more or less appropriate in different cases. The collective of authors in (b) might be a temporary affiliation, formed only for this research project with no lasting integrity beyond that particular collaborative activity, such that the desire to hold the collective responsible for questionable research methodology might be difficult to put into practice. But in many cases of problematic group actions, there is an enduring collective entity, such as the corporation in (c), to hold responsible as the defendant in a civil suit. Or, in cases like (d), the whole team is given credit for winning the game, even though particular players might also be singled out for significant contributions to that effort.
Can groups be fully accredited moral agents?
However, it may be that we require more than just a group’s actions having a morally valenced effect for a group to be capable of full moral agency and responsibility. I argued above that we should see moral responsibility as a socially cultivated competence. A person becomes a responsible moral agent as they are socialized into their normative community and become able to participate in the community’s normative practices. We should see a fully accredited moral agent as an agent who can act for reasons (act on conceptions of reasons), as opposed to simply having reasons ascribed to it. A fully socialized moral agent can do something (or refrain from doing something) because they know that they should (or should not) do that. Such a moral agent can understand and undertake the kinds of commitments and obligations that follow from acting as a member of a normative community. Such a moral agent also has a narrative sense of itself as an enduring agent, with goals, beliefs, commitments, preferences, character, and history. Now, while we can have good explanatory reasons for ascribing intentional states to groups and treating them as agents that are causally responsible for actions, many groups would lack the metacognitive skills required for moral responsibility. Consider the mob in (e) above: presumably no single person in this mob had the intention to burn down 50 buildings. Or consider (f), the collective effect of a population’s choices on global climate change. These seem to be candidate examples in which a collective is causally responsible for the overall result, yet perhaps such very loosely affiliated collectives could not be morally responsible over and above, or as well as, the responsibilities of individuals in the group.
But it might be argued that more cohesive, enduring groups like NASA, a football team, a corporation, or a collaboration of scientists, could be seen as moral agents if they satisfy particular conditions for collective moral agency and collective responsibility.
What kinds of conditions would such a group have to satisfy to be a responsible collective agent? Pettit (2007) makes a strong argument that collective agents can be responsible, if the group is autonomous, faces a significant choice between morally valenced options, has the understanding and access to evidence required for judging the relative value of options, and has the control required to make such judgments and put them into action. Pettit argues that many incorporated collectives (corporations, churches, clubs, for example) can satisfy these conditions.
Tuomela (2008) adds to these by distinguishing between group members acting in the “We-mode” and in the “I-mode”. In the “We-mode,” the group members think and act as one agent, essentially acting for a collectively constructed “group reason”. In such a case, the group members share a commitment to a collective “ethos”: collectively accepted constitutive goals, values, norms, standards, traditions, and so on. When acting for a group reason, the group’s constitutive ethos gives members, as group members, a reason to think, feel,9 and act in certain ways. Group members, by virtue of their commitment to the group’s “ethos,” are together committed to the group’s chosen courses of action or goals, and thus are “in the same boat,” falling or standing together. However, Tuomela argues that the group is not an extra agent over and above the group’s members (p. 5). The group’s actions are the actions of individual members when they act as group members to carry out their individual contributions to the group’s action. For instance, in (g) above, the chair of the University Curriculum Committee sends the Provost the committee’s approval of proposed changes to the Philosophy major. This constitutes the chair’s contribution, as a group member, to the committee’s action of approving these changes. Other members contribute by presenting arguments for or against the motion and by voting on the motion. This act of approving the changes is an action for which the committee as a whole is responsible. 
The chair does not bear a disproportionate portion of the responsibility by virtue of sending the notice of approval, because the chair’s action is merely his individual contribution to an action that the committee members collectively have the “we-intention” to perform.10 Even individual committee members who argued against the changes and voted against the motion are part of the responsible collective, by virtue of their continuing “We-mode” commitment to the ethos, reasons, and actions that constitute the group.
In addition to this shared commitment to the group’s ethos, Sheehy (2007) adds a further condition on the possibility of collective moral responsibility by appealing to an analogy with individual moral agents. Individuals are blameworthy, Sheehy argues, only to the extent that they are capable of reflecting on their goals, values, and actions, and of revising them. Groups, then, might be capable of collective moral responsibility for collective actions only when the group has shared representations of its practices, character, goals, and values and has a capacity for collective reflection on them and for their revision. Only such groups could be fully accredited moral agents and so be appropriate targets of ascriptions of praise, blame, and other forms of moral responsibility. Sheehy gives an example (p. 432) of a group whose practices, values, and goals presuppose or entail the endorsement of racist attitudes, policies, or practices, but only as an unappreciated side effect: the individual members endorse a commitment to an ethos that makes no explicit mention of its racially biased effects.11 It would be appropriate to hold the group morally responsible for such racist practices, however, only if it was at least possible for the group to identify the racist effects of its actions and policies and revise its “wicked” ethos. Only if the group had mechanisms for engaging in such critical deliberation on its ethos, and for adjusting it, would it be reasonable to hold the group responsible for not utilizing that ability. The angry mob probably lacks any shared representation of goals and has no ability to collectively reflect on them. However, requiring a more cohesive and enduring group to revise its practices, values, and goals and to make reparations to victims of its actions might also be a reasonable response, even if this places a burden on nonracist individuals within the group. This burden is justified by those members’ ongoing commitment to the group and to its ethos.
While I take the above accounts of the conditions under which collectives can be responsible moral agents to be somewhat compelling, I should concede that the possibility of collectives being responsible moral agents is certainly controversial. Most objections to collective responsibility, however, follow a pattern similar to that of internalist objections to HEC, such as Adams and Aizawa’s, being based on the belief that there is something special about individual human beings’ cognition and their moral agency that disqualifies entities wider than individuals from having genuine cognition or genuine responsibility.
HEC: analogy or reconception?
This brings us to a pivotal concern: in much of the debate regarding extended cognition, action, and responsibility, defenders and objectors alike appeal to an analogy between individuals and wider entities. The arguments above for collective responsibility take this form. The defenders described above appeal to criteria for individuals’ moral agency, arguing that some collectives can also satisfy these criteria. Objectors point out disqualifying differences between collectives and individuals,12 such that groups might well be agents, but only individuals can be moral agents and bear personal responsibility.
But is such an analogy the best way to conceive of HEC and of decentralized and distributed accounts of moral agency and responsibility? Many interpretations of Clark and Chalmers’ (1998) “parity principle,” including some of Clark’s (e.g., 2008, pp. 114–116), have this form, taking processes “in the head” as our paradigm example of cognitive processes and justifying HEC by appeal to an analogy between external processes and those “done in the head.” Likewise, some objectors to HEC (e.g., Adams and Aizawa 2001; Rupert 2004) base their arguments on dissimilarities between internal and external processes. Clark (2008, pp. 97–99) defends the parity principle against such charges by disputing the scale of the analogy: he defends a rather coarse-grained functional similarity between internal and external mechanisms (does it guide behavior in the same kind of way?), in contrast with objectors who require precisely equivalent fine-grained similarity of internal and external mechanisms. The question in this debate is whether extended cognitive tools are similar enough, or similar in the right kinds of ways, to neurological cognitive tools to be also considered “genuinely” cognitive parts of a cognitive system.
However, we should ask how these debates over extended cognition and over its effects on moral agency and responsibility would look if we moved past analogies to individuals, and examined distributed, decentralized, hybrid coalitions on their own terms. It is worth considering HEC not simply as an addendum to the traditional neurocentric account of cognition but as positing a rather fundamental reconception of cognition and thus of agency and even of responsibility for actions. Some of Clark’s and Menary’s rhetoric advocates this revisionist model, in which HEC entails a new hybrid or integrationist conception of cognition and of cognitive science.13 Many cases of progress in science have this form; a new conception of a phenomenon is proposed to deal with anomalies that the traditional conception can only explain using somewhat ad hoc, complicated “epicycles.” This new conception both smoothly explains the anomalies and also entails a reconception of phenomena that we previously thought were well explained by the traditional vision.
HEC’s hybrid account of cognition, I argue below, gives us reason to reconceptualize all of cognition and cognitive science: it is not merely that some cognitive processes are externally scaffolded. Rather, cognitive science should shift its focus to the operations of hybrid ensembles of neural, bodily, and environmental resources and processes. If it did so, then even traditional neuroscientific investigations of brain function should acknowledge the myriad ways in which neural, bodily, and environmental elements together constitute the systems responsible for cognition and action, even if the role of the brain is most salient in some investigations.
There is good reason to similarly revise individualistic accounts of moral agency and responsibility. As I have argued above, there are troubling potential anomalies in some cases of radically extended action, especially collective action. I believe a hybrid, integrationist alternative account of moral agency and responsibility can more smoothly explain some of these anomalous cases. But perhaps more significantly, as I argue below, there are further troubling aspects of personal responsibility and moral agency—including lessons we can draw from other contemporary critiques of individualistic conceptions of agency and responsibility—which show that responsibility can be collective and distributed in ways that are quite unlike our typical accounts of individual responsibility. The parallels between these arguments and those for HEC are striking and significantly under-appreciated. Taking them seriously highlights the collective and decentralized nature of both cognition and responsibility for actions.
These parallels also highlight how accounts of cognition, action, and responsibility are inextricably bound together. And this further encourages a reconceptualization of all cognition and all concerns about responsibility for actions, not simply as sometimes “extended” around individuals, but as fundamentally communal, social, and normative, with individual cognition and individual moral responsibility being derivative special cases, not the paradigm examples.
Extended cognition and relational autonomy
Arguments for revision of the concepts of personal responsibility and moral agency are not new. Many—principally feminist—scholars have critiqued accounts of self, agency, and autonomy that view the self as a Cartesian mind or an unencumbered individual who thinks and acts independently from any external influence. They instead highlight the way our selves, our capacities for action, our values, and our attitudes are developed in social relationships and are scaffolded and supported (but also constrained and undermined) by the teaching, guidance, advice, examples, and normative practices of our communities.14 The social, cultural, and normative context in which we are socialized instills in us not only the community’s collective intelligence and wisdom, its norms and role models, and its judgments and attitudes but also its prejudices and oppressive practices. The relationships within which we develop scaffold and support (but can also impede and repress) our emerging sense of ourselves, of our abilities, and of our values, aims, and desires.
Such accounts must, as Friedman (2000, p. 219) points out, reconcile this conception of agency and autonomy as inextricably social and relational with the notion that autonomy and moral agency require some degree of individual personhood. This requires a reconception of selfhood and individuality that incorporates the recognition that one’s self and one’s sense of autonomy are somewhat decentralized and are relationally and socially constituted. Meyers (1991, 2005), for instance, argues that these social and relational influences on our selves do not deterministically limit our autonomy, but rather they support and construct each person’s self and their competencies for autonomous activity.
Autonomous persons are not somehow independent of these social forces; rather, they are able to critically reflect on the forces that shape their values, desires, attitudes, actions, and commitments and to engage with them, endorsing and enhancing some, rejecting and resisting others. It is through such reflection and engagement that one can autonomously construct an authentic self within this social and relational context. Consider Meyers’ (2005, pp. 32–33) example of developing a metabolic condition that requires a dietary restriction, one she is occasionally tempted to break for deliciously tempting foods. Meyers told many of her friends about her condition and then found that, because these friends were aware of it, she was less tempted to break the restriction when eating around them. She had not asked them to help her stick to her dietary restriction, but knowing that they know, and so expect her to restrict her diet, adds social pressure that discourages her from violating the diet. Now that she is conscious of this social reinforcement effect, Meyers says she deliberately tells acquaintances about her condition in order to recruit them as unwitting extensions of her resolve. Thus her autonomous will to stick to the dietary restriction is partially offloaded onto her relationships with friends and acquaintances. Her capacity for autonomy is reinforced by deliberately endorsing and enhancing particular aspects of her social relationships; telling others about her condition better enables her self-as-relational to autonomously refuse delicious but harmful food.
Similarly, a culture’s norms and practices play a crucial role in shaping one’s sense of propriety, one’s values and desires, one’s actions and commitments. Some of these are norms and practices we embrace and happily reinforce; some we reject and struggle to revise. Integral to a person’s autonomy is the ability to critically reflect (often together with others) on these social norms and to use the skills for resistance they also inherit from their community to autonomously work (often together with others) to undermine problematic values and norms. Meyers gives the example of feminist consciousness raising and the consequent political critiques and oppositional activism. These are examples of relationally enhanced, socially equipped autonomous engagement with—and attempts to improve—one’s self, one’s relationships, and the norms and practices of one’s community.
These arguments for relational autonomy seem precisely parallel to HEC’s critiques of individualistic conceptions of agency and cognition and the accompanying focus on the ways in which our cognitive abilities are the result of hybrid coalitions of neural, bodily, and environmental factors. As Clark (2007b) argues, we engage in “ecological control” by not micromanaging every detail of our actions, but instead delegating much of our problem-solving responsibilities, taking advantage of robust, reliable sources of order in the body, brain, and environment (p. 103). My conscious mind is not a “central organizer” of my activities, an independent and unified point source of intelligent control over my body, or an isolated, self-contained vessel for my intentional states. Rather, I am what Clark (2007b) calls a “soft self”: a “rough-and-tumble control sharing coalition of processes—some neural, some bodily, some technological—and an ongoing drive to tell a story in which ‘I’ am the central player” (p. 114). I have incorporated the technological, environmental, bodily, and—although Clark often downplays these aspects—social and normative resources around me15 to tell myself and others a story about who and what I am, and what I can do, what I want to do, and what I ought to do. I also take advantage of these resources to live up to those stories.
On this view of cognitive agency, an “intelligent” agent is not one who is intelligent independently of the external factors that shape, scaffold, and support our cognitive abilities, just as autonomy does not mean being independent of the social and relational forces that shape our selves. An autonomously “intelligent” agent is one who critically engages the cognitive tools around them, one who selects, endorses, and uses effective cognitive tools, who replaces, refines, or augments less effective cognitive tools, and who selectively incorporates these social, relational, technological, environmental, and bodily resources into their sense of who they are, what they know, what they want, and what they can do.
Thus both cognition and autonomy are offloaded onto the social, technological, and normative environment. These parallels between the arguments for extended cognition and the arguments for relational autonomy strike me as deeply important and well worth exploring further (I do not claim to have explored them in much detail here). My aim here is to briefly sketch one further important and interesting aspect of this relational and social approach to self and autonomy: a certain kind of collective responsibility that follows from it. This account of collective responsibility also has important parallels with HEC and with some common critiques of that thesis. These parallels should force us to take seriously the critiques, offered by some accounts of relational autonomy and the social constitution of selves, of a deeply entrenched notion of individual responsibility. A similarly entrenched notion of individual cognition underlies Clark’s account of “extended” cognition, and both need to be carefully reconsidered.
Collective cognition and collective responsibility
O’Connor (2001) presents a powerful argument, following from the relational and social conception of agency and autonomy, for a form of collective responsibility we have not yet examined. Consider her central example (pp. 41–43) of a spate of burnings and bombings of African–American churches in the Southern USA during the mid-1990s. A national task force failed to find any common thread running through these incidents that hinted at any kind of racially motivated conspiracy and concluded that these were isolated incidents. O’Connor charges that this fails to understand the problem, by assuming that there would only be a common thread here if there were individual organizers influencing all these cases and that the only racist acts are ones in which an explicit racist motivation is present (“I did it because I hate blacks!”). O’Connor’s central point is that it is a mistake to focus only on explicitly racist intentions and on those individuals who set the fires and bombs and to look only for individuals who might have organized those agents. Focusing only on such individuals keeps the focus away from the background of social and normative practices that support and perpetuate the kinds of values, judgments, prejudices, privileges, biases, stereotypes, and relationships of dominance and subjugation that maintain conditions of racial injustice and oppression. It is these background conditions that make such racist acts possible.
We collectively must take responsibility for the ways in which we indirectly cause harms by together creating and maintaining such background conditions, O’Connor argues (pp. 52–53). If I burn a church, I cause harm somewhat directly. But if I say “Blacks are lazy,” I indirectly cause harm by fostering a negative stereotype that may, in turn, lead others to act negatively toward Black people, for instance, by deciding not to hire a qualified Black person for a job. Such stereotypes, judgments, and prejudices can affect people’s actions on a very unconscious level. Actions still count as racist, she argues, if they are influenced, even unconsciously, by such background attitudes, judgments, and practices.
Thus collective responsibility, O’Connor (2001, pp. 126–131) argues, applies not just to the kinds of cohesive and enduring groups I discussed earlier, but even to the kinds of loosely affiliated collectives she describes as having a serial identity16; one that is unified passively by being oriented around particular objects, habits, or norms created by the unintended collective result of past actions. For example, “Whites” is a family resemblance category, with no necessary and sufficient uniting features. Rather, it names a collective that has a serial identity relative to a wide range of oppressive practices and beliefs. And White people, qua Whites, have a collective responsibility for the practices and institutions that shape them, that constitute them as “Whites,” and which they benefit from and maintain; even if they maintain them only passively through unintentional inaction (being the recipients of white privilege, for example, which means that particular problems are not apparent to a person and certain choices might not even occur to them). By accepting the privileges of whiteness and not struggling against this practice and not rejecting (as much as one can) the benefits it confers, white people implicitly consent to and contribute to these practices, the identities they construct, and the choices they make possible and make difficult. Thus although the individuals who set the fires and the bombs bear primary responsibility for their actions, O’Connor argues that the diffuse community which often tacitly maintains (and which fails to ameliorate) the background of racist attitudes, judgments, and practices that make such actions possible, is collectively also responsible for those individuals’ actions.17 Responsibility for actions, on her view, has a thoroughly collective dimension, independent of any similarity between collectives and individuals.
Gallagher and Crisafi (2009) give a very similar response to the individualism in HEC. They reject the parity principle’s argument from analogy, arguing that similarity to internal cognitive mechanisms is not a good hallmark of cognitive processes. They argue that cognition is extended into “mental institutions” such as legal systems and museums. The legal system’s body of principles, precedents, and procedures is a set of tools that aid jurists, judges, and lawyers in thinking about how to resolve legal disputes. They do not have to think cases through alone, from first principles each time, but can take advantage of and build upon others’ previous cognitive work. Our cognitive systems are enmeshed with and enabled by large-scale cognitive institutions like these. As another example, consider the scientific community’s practice of sharing and comparing theories and hypotheses, publishing and critiquing methods and results, developing and refining instruments and analytical techniques. This practice supports scientific activity by providing cognitive resources and tools for scientists to build upon, refine, replace, and share.
Institutions such as these, Gallagher and Crisafi argue (p. 49), are both the products of human cognitive activity and producers of human cognitive activity. The fact that these systems are explicitly designed to support human cognition makes them just as qualified to be vehicles of cognition as is Otto’s notebook. As long as these institutions are “linked in the right way” with human problem-solving activities, the question of the parity principle, they argue, is irrelevant: these institutions can collaborate in hybrid cognitive processes that could not, even in principle, be done in the head but should still count as cognitive. There is an important sense in which we, collectively, can honestly claim to understand and be able to do things that no individual can claim to know or be able to do.
To take this a little further than do Gallagher and Crisafi, it is not only our institutions that are constitutive parts of our cognitive abilities and processes but also our normative practices. For example, another crucial support of our cognitive activity is the English language itself, the normative linguistic practice within which we think thoughts, make notes, form hypotheses, run experiments, write and read papers, critique and refine ideas, and have debates and conversations. In his “Magic Words” paper, Clark (1998) expounds on the many and various ways in which acquiring a language augments individual human cognition (although Clark concentrates principally on intra-cognitive processes and less on collaborative, social cognitive processes). The language itself is a very pervasive and powerful support of and mechanism for cognition. In making additions and amendments to our languages (e.g., adopting Arabic rather than Roman numerals, which make complex math much easier), we eo ipso add to our cognitive abilities. The same goes for many of our normative practices and the myriad ways they make possible our cognitive activities. So our institutions, our languages, and the very cognitive and normative practices within which we cognize have been shaped by us to make cognition easier, and they have, in turn, shaped the cognitive abilities that language-enabled humans possess.
Thus there is an apposite dovetail between O’Connor’s account of our collective responsibility for the background practices and attitudes that support individual actions, and Gallagher and Crisafi’s account of the way we collectively pool our cognitive resources in our shared institutions, tools, and practices. Both cognition and responsibility for actions are ongoing, public, largely collective matters, based upon shared normative practices, to which each individual action potentially contributes and from which each action draws. In fact, as I argue in the next section, these are basically anchored in the same overall normative practice of giving reasons for actions.
If everyone’s responsible, then nobody is
The obvious objection this position invites is that the concepts of cognition and responsibility are being stretched too thinly. O’Connor notes that a common objection to her account is “if everyone’s responsible, then nobody is.” Similarly, the objection sometimes raised against Clark’s HEC (e.g., Brook 2006) is that “my” cognitive system is over-extended, to every book I’ve ever read, every person I talk to, and every site on the internet. If both my cognitive system and my responsibility are extended into the whole normative, social, technological, and environmental system, then the notion of any action or thought being “mine” is absurdly over-extended.
Clark and others defend against such objections to HEC by trying to place what they take to be reasonable limits on how far we should or could extend an individual’s cognitive system. They are thus embroiled in boundary disputes about the limits of cognitive extension. These objections come from the individualistic conceptions of cognition and of responsibility that locate cognition primarily within individuals. But Clark does not reject this individual-centered model, just the location of the boundary, wanting to “extend” the system beyond the individual’s skin and skull. It seems likely that he would also object to Gallagher and Crisafi’s notion of cognitive institutions as over-extending the mind.
O’Connor’s response to such objections (chap. 7) is different from Clark’s and is similar to Gallagher and Crisafi’s argument against the parity principle and for mental institutions. Rather than amending the individualistic account of moral agency and responsibility, she argues we should revise this concept of responsibility that focuses on individuals and their explicit intentions. There is much tacit guidance and influence in the background that supports our agency and helps produce particular actions, yet is not part of the agent’s explicit intentions. And similarly, on HEC, there is much to cognition beyond an individual’s explicit intentional states. There is much tacit knowledge and skill built into our cognitive tools and norms that shapes the behaviors of individual agents. Our basic concept of responsibility for action should extend far beyond any particular individual and their intentional states, as should our conception of cognitive processes.
So if what I say and do is simply a part of this vast cognitive and moral milieu, what makes any thoughts or actions “mine” then? How can I take credit or blame for anything? Here, we come back to the difference I sketched earlier in this paper, between internalist, ascriptive, and normative accounts of intentionality. On the internalist view, an action is really mine if my brain contains the intentional states that caused the action. Internalist accounts of intentional states are naturally aligned with retributive accounts of responsibility that center on whether an individual really is the agent of the action or really could have done otherwise and so deserves sanction or credit. Accepting HEC undermines and is incompatible with this account of what makes an idea or action mine.
On the ascriptive view, in contrast, ascriptions of intentional states are made relative to an individual’s subjective “theory” of intentionality and rationality, which helps that individual explain and predict the actions of others. So an action is mine only relative to a particular observer’s subjective perspective: it is mine if that observer has pragmatic or explanatory reasons to call it mine, but there is no fact about whether it is really mine (cf. Dennett 1991).
However, a normative understanding of others’ actions is shared among community members; it is constituted by the shared normative practice of giving and asking for reasons for actions. It is this practice that regulates judgments about when it is appropriate to take and to attribute responsibility for actions, about what reasons for actions count as “rational” reasons and reasonable excuses, and about what actions it is appropriate to judge as “rational” or acceptable. There are practice-constituted facts about the normative statuses ascribed within this practice, in the same way that there are practice-constituted facts about runs scored in baseball games. There is no objective fact about whether an action is really mine. But this shared normative practice of giving reasons for actions grounds an agreement in judgments about whether an action or utterance should count as “mine”: as one for which I ought to take responsibility by giving reasons and for which I can appropriately be held responsible by asking for my reasons. All responsibility for actions is assigned according to this shared social practice and its norms.
This practice of holding people responsible for their behavior has wrestled with questions about fairly apportioning responsibility for actions for a very long time. This evolving normative practice constitutes what it is to be fairly held responsible for an event. It has changed little in response to metaphysical arguments about the true nature of agency, though many incompatibilists about free will (who are, not incidentally, mostly also internalists about intentionality and responsibility) seem to assume it should.18 Normative (and also ascriptive) approaches to responsibility ask consequentialist, forward-looking questions about the justification of our practice of holding individual agents accountable for their actions (doing so helps make people act responsibly) and only secondarily ask the backward-looking question of whether it is fair and reasonable to apply that practice to a particular case and hold a particular individual responsible for a particular action.19
These practices take agency and responsibility as primitive concepts, about which we already have a sophisticated and nuanced understanding. They incorporate a lot of hard-won wisdom about the conditions under which it is appropriate or fair to hold a person morally responsible or legally accountable for an action and about what conditions excuse or exculpate individuals. The distinction between actions for which one should be held responsible and actions for which one should not be held (fully) responsible is continually being modified as we come across and debate cases in the grey area in which excuses are proffered and considered. The practice changes over time in response to problematic cases that prompt hard-fought debates about the fairness of holding an agent responsible. Legal cases, public debates, and other venues for discussion often do prompt revisions to our practice of holding agents responsible. And—importantly for present purposes—scientific evidence can also be influential in helping us better understand certain underappreciated (especially social and relational) aspects of human psychology and decision making relevant to deciding whether particular kinds of agents can fairly be held fully responsible for particular kinds of actions (e.g., “battered person’s syndrome,” Roth and Coles 1995). These practices can also be informed by neuroscience, social science, and the kinds of cognitive science motivated by HEC, which may well suggest further modifications as we better understand the contributions of external factors to actions for which we find it appropriate to encourage or to sanction individuals. But our current practices are the starting point, the context, for explorations of our conception of responsibility for actions and of how it might be improved.
Individual action and cognition are derived from public practices and institutions
This background of normative and cognitive practices is the result of our exercises of moral agency, and it in turn makes such agency possible. These practices help make us into responsible and rational agents. Our actions, utterances, and attitudes are contributions to this background of norms, practices, institutions, relationships, values, knowledge, wisdom, and cognitive tools, for which we are all collectively responsible and by which we are each constituted. As we act with and around one another, we support and shape our community of norm-abiding, reasonable, intelligent citizens. This is so when we act to sanction or encourage particular kinds of actions, as well as when we explicitly construct, articulate, and teach others about laws and ethical principles. The reason we sanction and encourage (but also excuse and exculpate) one another is, in part, to socialize others and shape their behavior, and thereby to shape and maintain the kinds of practices (and exceptions to their application) that we endorse and want to shape our community and its members.
Extending O’Connor’s argument a little, then, I want to argue that any individual responsibility we have for our actions is derived from the responsibility we collectively have for our normative practices, attitudes, and the actions that follow from them. Individuals’ taking responsibility and their being held responsible for their actions is derived from and embodied in the community (including that individual) collectively taking responsibility for maintaining the normative practices within which we live and act and that individual transgressions would otherwise undermine. Sanctioning and reinforcing individuals’ actions is eo ipso taking responsibility for creating, correcting, and maintaining the practices and attitudes that constitute the kind of community in which we want to live.
Individual cognitive actions and utterances are derived from, and contribute to, the milieu of cognitive and normative practices, institutions, and tools that we collectively create and maintain. Cognition is a two-way interaction. In addition to working within and using the background of tools, institutions and practices my community affords me, I, in turn, reinforce and reshape this background through my personal actions and utterances by using the language as I do, by explicitly using and teaching cognitive norms in the courses I teach, by participating as I do in the production of scholarly research, and by raising critiques of ideas and research programs I hold to be important and worth improving. In a very real sense, these cognitive activities I participate in are indeed derived from these larger social and cognitive institutions and practices that my personal cognitive contributions build upon and draw from.
Indeed, given the nature of such ‘mental institutions,’ including the learning practices that are propagated in educational institutions, it may be more appropriate to say that the cognition that goes on in one’s individual head is really derivative from, or perhaps an internalized version of these larger processes—socially instituted processes—that are ongoing and outside of any particular individual’s head (Gallagher and Crisafi 2009, p. 49).
We are (individually) products and (collectively) producers of our social, normative, and cognitive practices. Individuals’ cognition is thus not best seen as “extended.” Rather, the cognitive activities of individuals are collaboratively constructed and supported by the community’s tools, institutions, and normative practices and by the myriad ways in which we collectively maintain, support, and improve this normative and cognitive milieu. When the question is who gets the credit or the blame for a particular action, we often have good normative and social reasons for holding individuals responsible and for expecting them to take responsibility. Sometimes, however, it is not enough to applaud, excuse, or sanction individuals’ actions. In many cases, we should also focus on the background of normative, social, and cognitive factors that support, shape, and scaffold that agent and their actions. Correcting or amending these background factors (often, but not only, by correcting individuals) also contributes to our shared form(s) of life.
Overall then, accepting HEC invites questions and potential objections regarding agency and responsibility for actions. However, as I have argued here, our normative practices of ascribing responsibility for actions are already well equipped to handle most putative cases of “extended” agency. And HEC can often be useful in encouraging us to address the contributions of wider systems and the roles they play in producing actions, rather than simply responding by sanctioning individuals. Even so, in most cases, it may still be appropriate to sanction individuals who are part of wider systems, to encourage those individuals to take responsibility for the role they play in such systems. Furthermore, an approach that respects HEC might be better able to address and respond to ways that groups themselves might appropriately be held responsible for their actions, rather than expecting all such responsibility either to fall only to individuals within such groups or to dissipate blamelessly over collections of individuals.
But perhaps the most significant and revisionary result of this exploration of responsibility and cognition is that it highlights our collective responsibility for the cognitive tools and normative practices (particularly the practice of giving reasons for actions) that constitutively support our moral and cognitive agency. And this leads to a revision of the traditional notion of individual-centered “extended” cognition. Rather than starting with individual persons and arguing that some situations “extend” the cognition and agency of that individual (and then disputing the proper boundaries of cognition), I have argued that cognition is instead best seen as socially embedded, environmentally supported, relationally and normatively constituted, and very much a collective and normative enterprise. Many events, tools, norms, and policies, on many different scales, including individual actions, contribute to our shared background of cognitive and normative tools and practices. But often, we have normative reasons, derived from our collective responsibility for our shared normative practice of holding moral agents responsible for their actions, for concentrating on the contributions or transgressions of particular individuals.
We hold individuals responsible by sanctioning, letting pass, or encouraging their actions and by applauding or critiquing their contributions to the pool of cognitive tools and norms we use to solve problems. And as we do so, we are eo ipso reinforcing or reforming the social practices and cognitive resources that constitute us as intelligent, responsible, moral agents and that constitute the collective forms of life within which such autonomous agency is possible.
On this collective and normative approach to agency and responsibility, the evolving practice of giving reasons for actions can adapt to our increasingly explicit awareness of the many normative, social, and environmental influences on human decision making. This practice can come to assign responsibility for actions to individuals more fairly, when appropriate, and to deal with the more “anomalous” cases of radically extended agency and group agency, to which an exclusively individualistic account of agency can have trouble apportioning responsibility.
The principal function of assignments of responsibility is to maintain and improve the cognitive, social, and normative background in which we think and act together. On this view, we contribute to this goal each time we reinforce, sanction, or excuse individuals’ actions. But we need not be restricted to focusing on individuals and their actions. We can correct and reform whatever seems to be the most salient aspects of the decentralized and distributed individual, social, technological, cognitive, and normative systems that act around us. Often, but by no means always, this will involve addressing individual persons and their attitudes and actions. But we should always also take seriously our collective responsibility to better understand and more directly address (enhance, reform, or even replace) aspects of the background within which such individuals live, think, and act. We can and should focus such corrective efforts also on groups and their dynamics, on the social environment, and on the cognitive tools and norms that also contribute to actions we find problematic. Individuals are merely one of many possible foci of cognition, action, and responsibility.
Note that I am not here exploring the moral implications of the extended mind thesis, as some have done. Levy (2007), for instance, focuses on the implications for “neuroethics” of HEC, arguing that considering there to be no privileged boundary between internal and external aspects of the mind refocuses the question from whether neurological interventions are ethical to which interventions into the mind (either internal or external) are ethically permitted and under what conditions.
This kind of view is defended, for example, by Searle (1992), Dretske (1989, 1995) and Fodor (1984, 1987a,b, 1990). Cummins (1996) also seems to defend such a view of intentionality. The arguments against HEC by Adams and Aizawa (2001, 2008, 2010) also assume and defend this position on intrinsic (or “nonderived”) intentionality and on the brain-localized nature of cognition.
Dennett (1971, 1991) explicitly defends adopting the intentional stance toward an entity, by way of appeal to the pragmatic advantage gained by being better able to explain and predict the entity’s actions.
For an illustrative example of Clark’s defense of an ascriptive account over a normative account, see Clapin’s (2002) excellent collection of papers, commentaries, and transcribed conversations between Dennett and Clark (who defend ascriptive accounts), Haugeland and Smith (defending normative accounts), and Cummins (defending an internalist account). Haugeland’s commentary on Clark, Clark’s responses, and the ensuing discussion are especially illuminating in this regard.
Although Clark often uses examples of cooperative cognition, his major focus is on the “organism centered” (e.g. 2008, p. 139) cognition of a system comprised of brain, body, and environment working together. Examples and explanations of collective cognition play a minimal role in his explorations of HEC. Socially extended cognition, and even the social and collaborative construction of cognitive tools (such as the shared public languages that external representations often employ), seems to play a background role for Clark.
For debates on this issue, see the special issue of the Journal of Social Philosophy on collective responsibility (Fall 2007, Vol. 38, Issue 3). See also the special issue of Midwest Studies in Philosophy on the same topic (September 2006, Vol. 30, Issue 1).
David Copp (2007) presents several other interesting candidate examples of collective responsibility, most involving explicit coercion that makes certain excuses open to individual members but not to the group. For criticism of Copp’s argument and examples, see Ludwig (2007) and Miller (2007).
I owe thanks to Rebecca Kulka (2009) for bringing multiple-authored papers to my attention as examples of collective action in which responsibility for problematic methodology or conclusions is difficult to apportion.
In a related vein, Helm (2008) focuses on collective agents being truly responsible (as opposed to mere intentional and moral patients) by virtue of the collective’s members caring for one another and for their shared goals and engaging together in activity motivated by such emotions. A group having such cares, Helm argues, is a necessary condition of plural agency.
Mäkelä (2007) disagrees with Tuomela, arguing that groups’ actions always depend upon the decisions of individuals who carry out the group’s actions, so collectives lack the control or autonomy necessary for full moral responsibility. Tuomela’s response, however, would be that these individuals’ shared commitment to the group’s ethos and goals will motivate them to act on the group’s behalf even when they disagree. If they don’t do so, then other individuals should step up and act for the group. But if the group cannot put its decisions into action because no individuals are willing or able to carry them out, then perhaps that particular group fails to have the autonomy required for moral agency on that particular occasion. Compare the Ironman competitor who has exhausted their body’s energy supplies and collapsed, unable to run or even walk further. No matter how badly they want to get up and finish the race, their legs cannot comply. This dependence on a noncompliant part may disqualify them from moral agency with respect to finishing the race; we perhaps should not blame them for not finishing. However, this momentary inability does not necessarily undermine the control and autonomy they generally have on other occasions.
An example might be “Driving While Black”; in some areas, police stop, search, and arrest a disproportionate number of Black motorists driving in predominantly White neighborhoods, in which Blacks are perceived to be “out of place” (Bates 2010). The practice is sometimes defended as an efficient law-enforcement strategy, based on officers’ experienced assessment that Blacks are more likely to break laws (MacDonald 2001), or as an accurate reflection of the proportion of speeders (MacDonald 2002). Statistical studies do not seem to support this defense, however. For example, Engel and Calnon (2004) show that (controlling for factors other than race) Blacks are 47% more likely to be stopped and 79% more likely to be arrested than Whites. And among all those arrested, Blacks are 50% more likely to be searched. Yet Whites who were searched were nearly twice as likely to be found with contraband. This, Bates argues, undermines the appeal to the efficiency of stopping and searching Black motorists. Bates (2010) reports a particularly revealing example of this pattern in Detroit.
See, for example, Miller (2006) who argues that all collective moral responsibility amounts to interdependent joint individual responsibility, in which each individual is responsible, but their responsibility is conditional on that of the other individuals in the group. Consider also Mäkelä’s (2007) objection that groups don’t have the kind of control individuals have, which is necessary for moral agency, since they always depend on individuals to carry out their decisions.
For example, Menary (2006, pp. 339–342) critiques an analogy reading of the parity principle argument, arguing that the aim of “cognitive integrationism” is not to show how internal and external processes are similar, but how they are complementary. Cognitive science, he argues, needs to focus on integrated complementary hybrid systems that complete cognitive tasks. Although Clark often uses analogies between internal and extended systems, he also argues that the Parity Principle is not a call “for fine-grained sameness of processing and storage”. Rather, it is “a call for sameness of opportunity, such that bioexternal elements might turn out to be parts of the mechanism of cognition, even if their contributions are quite unlike (perhaps deeply complementary to) those of the biological brain” (2008, p. 115, emphasis original).
See, for example, the papers collected in Meyers (1997) and Mackenzie and Stoljar (2000). For an overview of such feminist critiques of agency, see Meyers (1998). For an overview of the feminist critique of autonomy, see Friedman (1997). Meyers (2005) gives an account of five different dimensions of selfhood (self-as-unitary, self-as-relational, self-as-embodied, self-as-social, and self-as-divided) and the capacities for autonomy each engenders.
For an account of the way the normative practice of giving reasons for actions shapes people and their cognitive capacities, see Zawidzki (2008).
Similarly, then, there is a sense in which even a loose, disorganized group of rioters is collectively responsible for the ways in which they create and maintain the energy and angry mood of the rioting crowd, and thus they are collectively responsible, indirectly, for the damage caused by members whose violent actions are made possible by this background of energy and anger. In addition, if police panicked the crowd by firing on them and caused a stampede in which several people were trampled to death, there is a sense in which the police could be seen to share responsibility for creating the panic and thus indirectly causing the trampling.
See Dennett (2003, chaps. 9 and 10) for an argument that our time-tested practice of holding people accountable for their actions should and does already answer most questions about whether an agent “could have done otherwise” in all the senses that matter for deciding whether they are responsible for their actions. Thus we do not need a metaphysical solution to the problem of freedom and determinism to answer them. We already have legal and ethical practices to do that.