Minds and Machines, Volume 23, Issue 3, pp 339–352

The Mark of the Cognitive

Authors

  • Fred Adams
    • University of Delaware
  • Rebecca Garrison
    • University of Delaware

DOI: 10.1007/s11023-012-9291-1

Cite this article as:
Adams, F. & Garrison, R. Minds & Machines (2013) 23: 339. doi:10.1007/s11023-012-9291-1

Abstract

It is easy to give a list of cognitive processes. They are things like learning, memory, concept formation, reasoning, maybe emotion, and so on. It is not easy to say, of these things that are called cognitive, what makes them so. Knowing the answer is one very important reason to be interested in the mark of the cognitive. In this paper, we consider some answers that we think do not work and then offer one of our own, which ties cognition to actions explained via the having of reasons.

Keywords

Cognition · Cognitive processing · Reasons · Information · Representation · Function

Introduction

Suppose you want to build a computer or robot that can think; one that has cognitive processes. What do you have to include to ensure genuine thought or cognition is taking place? There are robots now that go around AI labs, pick up coke cans, and perform other such tasks, but there is no reason to believe that these robots can think.1 When one picks up a coke can, it doesn’t know what a coke can is. Even if these robots detect coke cans by detecting shape and/or color and size, it is fairly clear that the robots do not have the concepts of shape, size, or color. That is, they don’t have concepts of what they are doing. They don’t understand coke, can, or even picking up. So what would one have to do to get a robot to be able to understand and think and cognitively process information? In part, the search for the mark of the cognitive is the search for what is missing in existing robots that is not missing in us, viz. cognitive processing.

One could also get at the problem of the mark of the cognitive by asking where in the biological chain of life (from the simplest organisms to the most advanced) one crosses the boundary of the cognitive. Where does cognition begin? It seems fairly clear that single-celled organisms don’t think.2 It seems that birds do think. It is less clear where the change takes place. A mark of the cognitive would help one find that transition. What is the mark (or marks) that distinguishes non-cognitive processing from cognitive processing?

Armed with a mark of the cognitive, one would be well placed to try to build a cognitive artifact, a genuinely intelligent agent that thinks for itself. And armed with a mark of the cognitive one would be able to say how far down the biological chain cognitive processing extends—to mice (surely), to amphibians (likely), to ants or mosquitoes (maybe, but doubtful).

Why a Mark of the Cognitive is Relevant

Why be interested? One reason is to answer the questions raised above. Another is that, with the rise of the cognitive sciences over the last twenty-five plus years, it is embarrassing, to say the least, for there to be a science of cognition (of the cognitive) that is unable to say what constitutes cognition.3

It is easy to give a list of cognitive processes. They are things like learning, memory, concept formation, reasoning, maybe emotion, and so on. It is not easy to say, of these things that are called cognitive, what makes them so. Knowing the answer is one very important reason to be interested in the mark of the cognitive.

A mark of the cognitive would also be handy in settling some recent philosophical disputes over cognition and the four E’s: embedded, embodied, extended, and enactive. Does cognition extend beyond the boundaries of the body or brain? Do cognitive processes extend into tools that we use as cognitive aids: smart phones, pencil and paper, notebooks, the Internet?4 Is cognition embodied? That is, do the body and its properties determine the type of mind or thoughts that a cognitive creature may have? Or again, does cognition extend across the sensory-motor divide, rather than reside within a central system “sandwiched” between sensory and motor processing?5 Is cognition enactive? That is, is cognition an action, an activity?6 And, is cognition embedded?7 Do minds only form and develop when causally embedded in suitable environments?

Behavior Alone Does Not Reveal What Is or Is Not Cognitive Processing

Behavior alone won’t tell you whether something is cognitive. Behavior is sometimes viewed as bodily motion (regardless of how it is caused). So Tom’s arm having moved would be considered behavior on this view. Another way of viewing behavior is that it is the causing of bodily movement. So Tom’s moving his arm would be his behavior (not his arm’s moving). For our purposes here, we will treat behavior as the causing of bodily movement.8

The mere fact that two instances of behavior have the same physical characteristics won’t tell you how either came about or whether it has cognitive states at its source. So consider bacteria. While some bacteria are immobile, others move, propelled by a flagellum. The latter bacteria exhibit positive taxis (movement toward a stimulus) and negative taxis (movement away from a stimulus). Essentially, they move toward food and away from toxins. To an observer this behavior may seem cognitive, and one may be tempted to attribute beliefs, desires, and goals to the bacterium. But on closer inspection there are complete explanations (none of which constitute cognitive processing) of their “sensing,” of the chemical reactions that produce energy for their movement, and of the stimuli that trigger their movement.9 Though their behavior alone is similar to cognitively produced movements, only some enactivists seriously contend that bacteria are cognitive systems and that thoughts are involved in the explanation of their behavior.10 Nonetheless, the behavior alone won’t tell one whether bacteria are or are not cognitive systems.
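The point can be made vivid with a toy model. The following sketch (ours, purely illustrative; the run-and-tumble rule and the numbers are assumptions, not drawn from any actual bacterial data) shows how taxis-like behavior can fall out of a bare stimulus-triggered rule, with no beliefs, desires, or goals represented anywhere in the system:

```python
import random

def chemotaxis_step(prev_concentration, curr_concentration):
    """Toy run-and-tumble rule: keep heading if the attractant signal is rising,
    otherwise pick a new heading at random. Nothing here represents food, a
    goal, or a strategy; it is a bare stimulus-response rule."""
    if curr_concentration > prev_concentration:
        return "run"  # keep moving in the same direction
    return random.choice(["tumble left", "tumble right"])  # reorient at random

# A short trace: the behavior drifts up the gradient, yet the only "state"
# involved is the most recent concentration reading.
readings = [0.10, 0.12, 0.11, 0.15, 0.20]
for prev, curr in zip(readings, readings[1:]):
    print(f"{prev:.2f} -> {curr:.2f}: {chemotaxis_step(prev, curr)}")
```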

Or consider slime mold, which has received much attention for behavior that resembles behavior demonstrating basic intelligence. Slime mold can navigate a maze and find the shortest route to the end of the maze (where a food source is located).11 What is more, it can time its movement through the maze to match the periodicity of on-and-off infusions of cold (which inhibit motion through the maze). When cold is introduced into the maze, the mold’s advance slows. Then, later, the mold slows its own movements in anticipation of the next bout of cold (even when it is not infused into the maze).12

If, not knowing how the mold does this, you or I were to produce similar behavior, it would be explained cognitively, in terms of our goals, beliefs, desires, and memories. Whatever the final explanation of the behavior of the slime mold, it is very unlikely that it is due to cognitive resources. However, behavior alone won’t tell us whether the behavior is or is not cognitively produced.
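Footnote 12’s oscillator suggestion can be illustrated in the same spirit. In the toy sketch below (ours; the period and the entrainment rule are illustrative assumptions, not the mold’s actual physiology), an oscillator entrained to periodically arriving cold pulses keeps producing slow-downs at the same phase after the pulses stop, with nothing goal-like or memory-like in play:

```python
def slowdown_times(cold_pulse_times, horizon, period=60):
    """Toy entrained-oscillator model: after cold pulses arriving at a fixed
    period, an internal oscillator locked to that period keeps triggering
    slow-downs at the same phase even when the pulses stop. No representation
    of 'the next cold spell' is stored anywhere."""
    if not cold_pulse_times:
        return []
    last_pulse = cold_pulse_times[-1]
    return list(range(last_pulse + period, horizon, period))

# Pulses at t = 60, 120, 180; the oscillator keeps "anticipating" at 240, 300, 360.
print(slowdown_times([60, 120, 180], horizon=400))
```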

Lastly, consider Rodney Brooks’s (1991) robot Herbert. Herbert moves through the A.I. lab, picks up soda cans, and returns with them. Suppose that a human lab assistant in the A.I. lab picks up a soda can and returns it to a trash can. In some sense of behavior, Herbert and the lab assistant have done the “same thing.” The overt movements are the same. However, on the notion of behavior that involves causing, these may not be the same behavior. The lab assistant retrieves the can for a reason. Perhaps the reason is to remove clutter in the lab. Herbert’s movements, however, are not performed for Herbert’s reasons. Herbert has no reasons.13 So the similarity of the overt movements that Herbert shares with the lab assistant masks an important difference in their behavior—acting for a reason versus not acting for a reason. This difference is not observable from bodily motion alone.14

Hence, whether the motions of a system (or creature) have cognitive causes cannot be determined by observing the motions alone. Whether there is genuine intentional behavior explicable in cognitive terms depends on the causes of the motions (movements). Only motions with cognitive explanations are truly cases of intelligent behavior. Consider tropisms. Plants will turn their leaves toward the light. There is some chemical mechanism within the plant that causes its behavior to mimic that of an intelligent agent. But plants are only cognitive systems if they have beliefs, desires, or intentions of their own.15 We suspect the same thing is going on in the case of Brooks’s robots. They are mimicking the motions of intelligent agents by the design of their creator, but they have the wrong sorts of internal (and non-cognitive) mechanisms for their behavior to be purposive in any interesting sense. Indeed, once one sees that an act is not just a bodily motion, but includes the cause of the bodily motion, then it is hard to accept that intelligent action could be solely “in” the bodily movements.

The “Mark” According to Mark (Rowlands): Mark’s Mark

Mark Rowlands (2010, pp. 110–111) recently offered a mark of the cognitive that consists of four conditions. They are—“A process P is a cognitive process if:

  1. P involves information processing—the manipulation and transformation of information-bearing structures.

  2. This information processing has the proper function of making available either to the subject or to subsequent processing operations information that was, prior to this processing, unavailable.

  3. This information is made available by way of the production, in the subject of P, of a representational state.

  4. P is a process that belongs to the subject of that representational state.”
There is much in the way of explanation of these conditions that we cannot rehearse here due to limitations of space, but there are a few things we need to add before going on to evaluate them. First, these are offered only as sufficient conditions for something to be a cognitive process (not necessary ones). So they would not help us with, say, the matter of how far down the biological scale cognitive processing goes. What is true higher up may not be true lower down, and Rowlands’ criteria would not address that matter.

Second, by stipulation (condition 2) the information processing of condition (1) must have making information available as a proper function. Rowlands may be limiting his criteria to biological systems, for if we build a robot that can think, it will not have “proper function” in any biological sense of a process that was selected for via natural evolutionary causes. And all of the examples he uses to explain proper function are biological (the function of the heart, of the kidneys). Only if Rowlands would include artifact functions as proper functions would his criteria extend to cover the minds of machines or robots (should we succeed in building them).16

Third, the types of states to which information is made available, according to this condition, can be personal level or sub-personal level states or processes. Hence, such processes at the sub-personal level are sub-personal cognitive processes. According to Rowlands, eventually cognitive processes must contribute to and percolate upward to personal level processing of a conscious subject. Rowlands seems to reject that all cognitive processing could be at a sub-personal level (an entirely sub-personal level cognitive subject).

Fourth, Rowlands would cash out the notion of representation naturalistically. On his view, not all representations have content that has truth values. The content of a representation can be lower-level than that—what he calls “adequacy conditions.” These are less than truth conditions but enough to explain how representations can be felicitous or infelicitous with respect to what they are supposed to represent (as he sees all representation as normative and the normativity as deriving from proper function).17

Fifth, for his purposes (and ours) he accepts that the content of these representations is non-derived. This just means that the content in a cognitive representation is not borrowed from the meaning or content of another system but is original to it.18

Sixth, according to Rowlands, all cognitive processing must eventually trickle up to or contribute to personal level cognitive processing. That is, all cognitive processing must belong to a subject and, in the end, belong at the personal level (though there may be some sub-personal processing along the way that must become “appropriately integrated” at the personal level).

Now we turn to the evaluation of his conditions. There is no objection to condition #1. Indeed, elsewhere one of us has already committed to this condition (Adams 2010). We have no strong objection to condition #2. One concern is that in the future medical technology may advance and devise prosthetic devices that can be implanted into the brain to take over functions of damaged brain tissue. For instance, in dementia people lose short-term memory, but perhaps some man-made device could be implanted that would improve short-term memory. These devices would contribute to cognitive processing of memory but would not have biological proper function. Rowlands would have to rule them out as cognitive processes. Another concern is whether some event or process could contribute to cognitive processing prior to becoming selected for and thus having “proper function.” So, here’s the worry. On many theories of teleological function, biological traits or processes do not have proper function until they are selected for. So they don’t establish proper function until the second generation (Enc and Adams 1998). So suppose an individual has a genetic mutation and the mutation develops a brain region that gives the individual a new cognitive ability (such as increased memory capacity—boosting the “George Miller” magic number from 7 plus or minus 2 to 9 plus or minus 2 chunks of memory). Surely this would count as cognitive processing, and yet it would not satisfy Rowlands’ condition. Of course, we don’t want to make too much of this because Rowlands is only offering “sufficient” conditions for cognitive processing, not necessary ones. Still, we wonder why he would so limit his sufficient conditions.

We have some concerns about the third condition, though not about the fact that information is made available via representations. The notion of cognition as the manipulation of representations goes back to Turing and has been at the heart of cognitive science ever since. We do have some concerns about the understanding of “made available,” because towards the end of the book this becomes labeled “disclosure” and Rowlands claims that disclosure is the heart of intentionality itself. We will have more to say about this shortly.

About condition (4), Rowlands (p. 135) himself acknowledges that it has the air of circularity about it. Is a currently existing computer a subject? If it is, then our computer thinks. If not, why not? Rowlands has to tell us what counts as a subject. The air of circularity comes in that attempt. “After all, it is not any sort of subject that can own a cognitive process: the subject in question must be a cognitive subject. But a cognitive subject…is a subject of cognitive processes. Therefore, the criterion presupposes, rather than explains, an understanding of what it is for a process to be cognitive.” Rowlands’ reply to this concern is that the conditions are “recursive rather than circular,” and that all he means by “cognitive subject” is something satisfying his mark of the cognitive, and that it “owns” information-processing operations.19

We think that Rowlands does not escape the worry about circularity. One reason to want to know what cognitive processes are is in order to explain how cognitive subjects come to be. Where in the biological chain do things cross from non-cognitive systems to the existence of cognitive subjects? Cognitive processes are the type of processes that generate the existence of cognitive subjects. It can hardly be a requirement for a process to be cognitive that a cognitive subject already exist. The existence of such subjects is the expected outcome of cognitive processes, not a precursor of their existence.20

Lastly, in conjunction with the “making available” aspect of condition (2), Rowlands thinks intentionality in general is a type of disclosing activity (chapter 8, p. 206). Without explicitly giving the steps of an argument of this form, he indirectly seems to be arguing as follows: cognitive processes are a type of disclosing activity; not all disclosing activity is limited to events inside the body or brain; therefore, not all cognitive processes are limited to activities or events inside the body or brain. This is part of his argument that the mind extends.

But what counts as disclosing? Rowlands himself points out (p. 212) that not all disclosing is cognitive. To be cognitive, he says, is to satisfy his marks of the cognitive. He realizes that those who challenge his view will look for easy counterexamples, and he offers one himself: walking around the corner. Why offer this as a potential counterexample? Because in walking around the corner one engages in activity that makes available new information in the form of representations, and the new information is revealed to an owning subject. Rowlands attempts to deflect the example by denying that walking around the corner has a proper function. True enough. But walking (full stop) does have a proper function. And one of its functions is to locomote, and in so far as one does, one makes available to oneself new information about the world. So it appears that Rowlands does not escape the fact that his view licenses the claim that walking is thinking. Indeed, any information-bearing activity by any biological item that has a proper function to reveal and can be manipulated to produce representations in a cognitive subject will count as thinking (cognitive processing) on this view. So, turning pages in a book (using one’s finger, which has the proper function of manipulating things in one’s environment to reveal the world) will be cognitive. Turning one’s head to hear or see what is over there will be cognitive. Not the receipt of the information about what’s over there, but the turning of the head will be cognitive…the motor activity. This may not be surprising to Rowlands, to enactivists, or to some embodied cognitivists, but it will be to everyone else. On his view, turning over a rock to see what is under it is a cognitive process (and not just the seeing what is under it but the turning). Opposable thumbs have proper function in spades! Who would have thought that thinking could be so easy?21

Of course, Rowlands may say that people walk for many purposes, and so locomotion, not disclosing, is the proper function of walking. But why think that is true? If humans had been able to move but not learn, not have information disclosed, surely there would have been no selection for locomotion. Surely the function of walking is to move and, in so doing, find food, find mates, find shelter, find (and avoid) predators, and so on. All of these functions seem to come inescapably tied to disclosing valuable information.

Reasons Meet the Mark of the Cognitive

In this section we will suggest that on our view reasons form a central necessary (but not sufficient) part of the mark of the cognitive. We will begin by suggesting that creatures that cognitively process information are capable of doing things for reasons. Earlier we objected to saying that bacteria or slime mold or Brooks’ robots do things for reasons. The explanation of the behavior of these things is always at a different level. There are non-representational explanations of why they do what they do. The explanations may be chemical, physical, or electronic and programmable, but even though one may find these very same features in a cognitive creature, one will also find that the explanation of cognitive behavior includes the representational content of the internal states. That is, reasons may be physically instantiated in cognitive systems, but reasons include more than the intrinsic properties of those states. Reasons look beyond the physical properties of internal states to what they represent about the world. That will be a crucial difference between an explanation of behavior in terms of reasons and an explanation not in terms of reasons. In so far as the explanation is in terms of reasons, the behavior will be cognitively explainable.

Since we are relying heavily on the notion of behavior that is explicable in terms of reasons, we want to distinguish two uses of the term “reason.” Suppose someone says that “the reason plants turn their leaves toward the light is to maximize opportunity for photosynthesis.” This use of the term “reason” (call it evolutionary reason) is not the one that underwrites cognitive processes. This is an evolutionary level explanation that operates at the level of the species, not at the level of the individual. The chemical mechanism within the plant that causes it to turn its leaves toward the light doesn’t rise to the level of attributing reasons to the plant itself. The internal chemical causes aren’t representations of the plant’s goals or strategies for attaining them. The plant is not doing things for reasons (not reasons of its own).

The same is true in the case of Brooks’s robot Herbert. One may say, “the reason Herbert is going over to the corner is to pick up and return with a coke can.” Again, the explanation of why this is true is at the level of the programmer. The programmer has constructed Herbert such that he will retrieve coke cans. If anyone’s reasons are involved in what Herbert is doing, they are the reasons of the programmer, not of Herbert. At the level of the individual robot Herbert, there are no reasons. Herbert has no reasons for what he does, and Brooks proudly flaunts this in the titles of his articles and his explanations of how Herbert works. Herbert has no beliefs, desires, intentions, or plans of his own.22

So on our view of the reasons that are involved in cognitive systems with cognitive processes, the reasons required must be the system’s own reasons, and the explanation in terms of reasons must not be solely at the evolutionary level of the species or at the level of the programmer (or whatever). Call these reasons system-centered reasons. The explanations in terms of these reasons are teleological. Cognitive systems do one thing A in order to do (achieve) another thing B. In the types of reasons explanations we are featuring, the goal of doing B and the strategy for accomplishing B by doing A are represented within the system. So the cat stalking the bird is crouching and moving slowly in order to catch the bird. The goal is to catch the bird by sneaking up on it and not being seen. The reason the cat is engaging in stalking behavior (to catch the bird) requires the cat to have a mental model of what is to be accomplished and, in some degree of detail, of how to achieve that goal. There is a reason why the cat is moving in the way that it is. There may be an evolutionary story of why cats stalk their prey in this way. But in the case of this individual cat and this individual token of stalking, the explanation has moved from the level of the species to the level of the individual reasons of this individual cat. The explanation is now in terms of cognitive reasons. This cat wants to catch that bird and is stalking it for this reason. We maintain that if the full and accurate explanation of a system’s behavior must include the system’s own reasons, then one is explaining the behavior of a cognitive system, and those system-centered reasons will be cognitive reasons.23
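By way of contrast with the reactive sketches above, here is a minimal sketch (ours; the Goal and Strategy structures are illustrative assumptions, not a theory of feline cognition) of the shape an explanation in terms of system-centered reasons has: the goal and the strategy for achieving it are represented within the system, and each action is performed because of what those internal states are about:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str      # a represented, possibly non-actual state of affairs
    achieved: bool = False

@dataclass
class Strategy:
    goal: Goal
    steps: list = field(default_factory=list)  # a represented plan for bringing the goal about

def act_for_reason(strategy: Strategy) -> None:
    """Behavior selected because of what the internal states represent: the
    explanation of each step cites the system's own goal and plan, not merely
    the physics of the movements."""
    for step in strategy.steps:
        if strategy.goal.achieved:
            break
        print(f"doing '{step}' in order to bring about '{strategy.goal.description}'")

# The cat's stalking, schematically: the goal (a caught bird) is non-actual but
# represented within the system, and the crouch-and-creep strategy is executed
# for that represented goal.
act_for_reason(Strategy(goal=Goal("the bird is caught"),
                        steps=["crouch", "creep slowly", "pounce"]))
```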

In the case of evolutionary level (only) reasons, there need be nothing in the system that represents the goal to be achieved. There need be nothing like a plan or strategy or history of memories of prior events of attempting to achieve such a goal. The processes of selection select the internal causes that produce the desired outcome. In the case of the plant turning its leaves toward the light, there was natural selection for internal chemical mechanisms that produce the tropism of turning the leaves toward the light. This benefited the plant and sustained the mechanisms that produce this behavior in the next generation.

In the case of Herbert, the programmers selected the internal mechanisms in Herbert that produced the behavior of picking up coke cans and disposing of them. The mechanisms were produced and selected for, and thus sustained for their positive outcomes benefitting the programmers (not Herbert).

In the case of the cat, the mechanisms that produce the stalking include those that cause hunger, the desire for food, the desire to catch birds or mice or other animals as a form of amusement, and so on. There is selection for the mechanisms that produce this drive or desire because the drive or desire produces the stalking behavior that produces the food as a benefit of successful stalking. Of course, there is also some selection for the stalking strategy. There can be an innate strategy that all cats eventually try (stalking), but individual cats can learn and become better at it. This learning coupled with the drive or desire for food yields the neural structure that causes the cat’s stalking. There may well be selection operating at the level of the species, but it is intertwined with the individual learning and recruitment in the brain of the individual cat that constitutes its own reason for stalking. That is, this individual cat also has its own reasons for stalking this individual bird (it is hungry and wants to catch the bird for food or it is interested in play and wants to catch the bird for its amusement—as cats sometimes do). In any case, the reasons are the cat’s own when they spring from the cat’s own desires and beliefs about how best to catch its prey. The evolutionary level may select for these types of cognitive mechanisms being present. At the level of the individual, these mechanisms will yield representations of goal-states of the individual cat and learned strategies that have been honed for achieving those goals. It is the fact that these are represented within the cat that makes these the cat’s own reasons. So evolution selects for a reasoning24 mechanism within the individual organism, such as the cat, and the having of this mechanism constitutes the having of cognitive reasons for this animal’s behavior. We have perfectly good names for these mechanisms: beliefs, desires, plans, intentions, and reasons.

Reasons and Meaning Versus Information

In a recent paper, Turner (forthcoming) makes and supports claims for the view that there is cognition taking place within a termite mound considered as an extended individual organism. The paper is interesting for a number of reasons, including the claim that a certain variety of termite mound constitutes an individual organism possessing cognitive states at the level of the mound. We won’t be concerned with the extended-organism claim here. Instead, we will focus on Turner’s claims that there is cognition taking place within the specific type of termite mound itself (Macrotermes), not merely among the termites. Turner thinks that cognition did not start with humans, and that it is very likely that other systems evolved with degrees of cognition and self-awareness. He thinks (forthcoming, p. 10) the Macrotermes colony is such a system: “The Macrotermes colony provides an interesting example of just such a cognitively aware ‘superindividual.’ Cognitive self-awareness in the Macrotermes colony arises most strikingly in the context of mound injury and repair.”

He seems to attribute cognition both to the individual termites and to the mound itself, considered as a “superindividual.” When there is a breach in the mound, individual termites detect the damage and repair the breach, and they detect the damage via detecting changes in the atmosphere and air flow within the mound. At the level of individual termites, he attributes to them the cognitive attributes of being sensory transducers for temporal variation of turbulent wind velocity (p. 14), and adds that they are sensitive to sudden changes in variation and are “cognitively analogous to sensory neurons” (p. 16).

At the level of the mound, the swarm itself is attributed the ability to locate the source of the damage and initiate repair activities (p. 18). Turner treats the mound itself as a cognitive individual with knowledge not possessed by any of the individual termite workers alone. So it must detect the damage, repair it, stop the repairing activity when the breach is fixed, and go on about orchestrating other colony activities. On the model Turner is considering, there is, therefore, cognition at the level of the colony itself. Why? Because there is information about the breach that is used by the workers to repair the damage and return the colony to homeostasis. Indeed, Turner considers “cognition to be a fundamental property of living systems” (pp. 26–27), one that extends right to the level of the cell itself (p. 28).25

Upon detection of a breach in the mound, some termites rush toward the breach (the “first responders”) and others (the tocsins) rush in the opposite direction to inform other termites of the breach. The problem of where to start repair is solved when a “threshold” of variation in turbulent wind velocity is detected as having been crossed. When it is crossed, the termites start dropping particles of soil and commence shoring up the breach. “Cognitively, each site represents a different ‘hypothesis’ by the swarm about where the damage is” (p. 19).

What we think is happening in the explanations Turner gives is that information about the breach in the mound is created and transmitted to the termite workers via changes in air flow. The detection of this information via the bodily mechanisms of the termites turns on a pattern of behavior appropriate either to the first responders or to the tocsins. They do different things because they have different mechanisms turned on by detection of the change in airflow. The explanation of the behavior of the termites does not rise to the level of reasoning in the termites. Why not?

We maintain that to get to the level of reasoning, the internal states that represent what is happening within the termite mound must rise to the level of having meaning. The states must be about the environment in ways that beliefs, desires, or intentions can be about the environment. That is, there must be something that can be falsely tokened, that can misrepresent; that can represent non-actual states of affairs. For instance, if the termites reason about how to repair the damage to the mound, they have to be able to represent something non-actual. They have to be able to represent the non-actual goal of having a repaired mound at the site where it is currently damaged and be working to bring that still non-actual state into existence. We suspect that at most they have the capacity to detect whether the actual state of the mound is that of being repaired or of being not repaired. But in either case, they are only detecting the actual state of the mound not some future non-actual state that they are striving to bring into existence. This latter kind of representation is something a cognitively reasoning creature is able to do.26

How to make the distinction? An easy way to distinguish mere information-handling capabilities from reasoning is to contrast two senses of “meaning.” The informational sense of meaning is where one thing may indicate or be about another but may not be falsely tokened or represent a non-actual state of affairs. So the relation in the wild (no tricks) between smoke and fire is an informational relation. Smoke in the woods indicates (informs one of) the presence of fire. However, the relation of “smoke” to smoke is not of this type. Whereas smoke won’t exist without fire, the word “smoke” may well exist without the presence of smoke (as in “if I start a fire will there be smoke?”). Reasoning about smoke permits the ability to think about what does not yet exist (as in the example of the question above). Merely detecting the presence of smoke can inform one about the presence of fire, because the one does not occur without the other. We suspect that the termites exploit the natural law-like regularity between changes in airflow within the mound and the corresponding breach (or state of repair) of the mound. And that is all that is going on. There is nothing that rises to the level of reasoning within the termites or within the mound. And we would agree with Turner’s claim that what they do is analogous to “sensory neurons.” Sensory neurons (taken by themselves and independently of their contribution to the rest of the brain) do not engage in reasoning either. For the most part, they are merely transducers of information.27
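Our reading of the termite case can be put in the same schematic terms. In the sketch below (ours; the threshold value and the role labels are illustrative assumptions, not Turner’s data), a detected actual quantity either crosses a threshold or it does not, and that alone switches on a fixed behavior pattern; nowhere is a non-actual state of affairs, such as “the breach is repaired,” represented:

```python
def termite_response(wind_velocity_variation: float, role: str, threshold: float = 0.5) -> str:
    """Toy transducer: detection of an actual quantity (variation in turbulent
    wind velocity) switches on a fixed behavior pattern for the termite's role.
    The system only registers how things are, never a non-actual goal state."""
    if wind_velocity_variation <= threshold:
        return "carry on with normal colony activity"
    if role == "first_responder":
        return "drop soil particles here"                   # shore up the breach on the spot
    return "rush the other way to recruit more workers"     # the tocsin pattern

print(termite_response(0.8, "first_responder"))
print(termite_response(0.8, "tocsin"))
print(termite_response(0.2, "tocsin"))
```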

Everything we have argued so far in this paper supports the view that cognition involves reasoning. So if there were cognition taking place in the termite mound itself, there would be reasoning in the mound itself. For that matter, if there is cognition in the individual termites, there would need to be reasoning taking place within the individual termites. This makes the following kind of argument possible. Cognition involves reasoning. What is taking place in the termite mound does not involve reasoning. So what is taking place in the termite mound does not involve cognition. We hope to have shown above why we think it is very plausible that what is happening in the termite mound does not involve reasoning (at the appropriate level of description). At most it involves non-cognitive processing of information.

Conclusion

In this paper, we have attempted to motivate interest in the mark of the cognitive. We have found work to be done when armed with such a mark (work such as settling disputes over the 4 E’s of cognition: extended, embodied, enactive, and embedded cognition). In addition, a science of cognition should be able to say what constitutes cognitive processing, where it begins in the biological chain, or how to build a cognitive agent.

Next, we pointed out why mere behavior will not disclose whether a system or creature has cognition. Cognitive processing causes certain kinds of purposive behavior, but it is not detectable from the observable portion of the behavior itself. We considered some examples from science: the behavior of bacteria, slime mold, and Rodney Brooks’s robots. In each case the behavior could be explained without the involvement of cognition.

We considered a mark of the cognitive recently proposed by Mark Rowlands and found it to be wanting. Though there are good features of the view proposed by Rowlands, in the end it seems to presuppose the existence of cognitive subjects in the explanation of what makes a process itself cognitive.

We proposed that the mark of the cognitive is inextricably bound to the notion of behaving (or acting) for a reason. We propose that only cognitive creatures or systems are capable of acting/behaving for reasons (explainable via reasons at the level of the individual).

Lastly, we tested this idea against a suggestion by Turner that a certain kind of termite builds a super-individual mound that has cognitive processes at the level of the mound itself. We gave reasons why the mound, and the information processing by the termites that build and maintain it, do not rise to the level of cognitive processing in terms of reasons held by the individual termites or by the mound itself. Instead, we proposed that a full explanation can be given in terms of information that is being exploited, at a non-cognitive level, by the termites within the mound.

Footnotes
1

We discuss these cases below.

 
2

Noe (2010, p. 42) talks as though he attributes mind to bacteria, but it is not clear to us whether he maintains that bacteria think.

 
3

Everyone agrees and knows that memory, perception, reasoning, and perhaps emotion are cognitive processes, but what we are asking is what makes them cognitive? In virtue of what are these processes of the same type—cognitive? Most practicing scientists presume there is an answer to this question but few try to give it.

 
4

See Clark and Chalmers (1998) and Rowlands (2010), for the pro and Adams and Aizawa (2008) for the con.

 
5

For more, see Adams (2010).

 
6

For arguments that it is see Noe (2004, 2010), for reservations see Aizawa (2007).

 
7

For examples and arguments that cognition requires embedding in an environment and use of that environment as a type of “scaffolding” see Clark (2008). We are not sure there is opposition to this view, if the scaffolding is not considered constitutive of mind.

 
8

We are not suggesting so-called “agent” causation, but differentiating two schemas for individuating actions.

 
9

See Parker (2001). Our claim is not that if some behavior has a chemical explanation it is not cognitive. Rather it is that if it has a chemical explanation that does not constitute being a reason or representation, then it is not cognitive. Reasons and representations themselves no doubt have chemical constitutions.

 
10

See Noe (2010), Maturana and Varela (1980), and Thompson (2010).

 
11

See Saigusa et al. (2008).

 
12

Of course this is not too surprising given the tendency of slime mold to be sensitive to properties such as heat and cold, oxygen and ammonia gradients, light and dark (Bonner 2009). Intermittent introduction of cold bursts could result in changes to internal chemical reactions. The periodicity could mimic memory (Ball 2008) and be explained in terms of the physics of well-understood oscillators.

 
13

We think having concepts and having reasons are related in this way. To have a reason one needs concepts, but having reasons extends beyond the mere having of concepts. Nonetheless, concepts are featured in reasons. If Herbert gathered coke cans for the reason that he wanted to keep the office tidy, he’d have both concepts of coke cans and reasons to collect them.

 
14

Herbert can’t do anything intentionally or purposefully. He has no reasons. He does not know what a soda can is. He has no beliefs or desires. He has no desire to retrieve the can, nor beliefs about retrieval. Herbert is not a cognitive agent despite the fact that his motions mimic those of a cognitive agent. It is on the basis of this mimicry of motion that Brooks deems Herbert’s behavior to be “intelligent.” Brooks claims that his robots “have goals,” “make predictions,” and “do things,” using the language of the intentional idiom, as if they were purposive, intentional agents. But there is no good reason to think any of this is true precisely because purposive, intentional agents act for reasons (just what Brooks denies of his robots). Someone might claim that if Herbert revises his behavior until the can is picked up successfully then this shows he does have the desire to pick up the can. To us, this would be a rampant form of behaviorism, long abandoned for good reasons.

 
15

There are, of course, theorists who do attribute cognition to plants based on the activity that plants engage in which is behaviorally similar to the behavior of reasoning creatures. See Calvo and Keijzer (2008).

 
16

Another reason to think that Rowlands may be willing to include artifact function is that, in the end, he thinks the processes of a person (Otto) using a notebook to help remember his way around New York City count as cognitive processing (2010, p. 208) and the processing includes processing language and languages are clearly human artifacts.

 
17

Rowlands borrows from Millikan (1984, 1993) an account of “proper function” as that which some item is “supposed” to do or has been evolutionarily “designed” to do.

 
18

See Adams and Aizawa (2008).

 
19

It is sometimes hard to know exactly what to make of Rowlands’ view. For instance, at one point, to defuse worry about what “person” means in “personal level” cognitive processing, he says: “‘Person,’ in this context, approximates to ‘organism capable of detecting changes in the environment and modifying its behavior accordingly’” (p. 146). The problem, of course, is that bacteria, plants, and even slime mold can do this. So, surely there is more to being a person or subject than this, and the problem Rowlands faces is saying what that is without circularity among his conditions of the mark of the cognitive.

 
20

So we continue to find Rowlands’ attempt to shrug off circularity unpersuasive. He spends an entire chapter (6) on what “ownership” comes to, but we find this explanation even less clear or persuasive.

 
21

We suspect Rowlands will try to evade these objections by invoking a “vehicle/content” distinction or a “causal/constitutive” distinction. But what he has to say about these things in the relevant pages of the book is far from clear or persuasive (see chapter 8). What is more, as far as we can see, our examples fit the letter of his marks of the cognitive. Even if walking is only a vehicle, it is still a vehicle in a cognitive process of disclosing information about the world and will count as a cognitive process on his conditions, despite his protest to the contrary. Notice that his response to “walking around the corner” was not about vehicles or contents or causation or constitution but about proper function. So we think he went with his best shot, and we think it still is not good enough.

 
22

We are not saying that in principle a robot could not be built which operates with reasons, just that this is not what is happening with Herbert.

 
23

We are not saying that evolution cannot supply a creature with reasons. When it does, there must be some internal structure that constitutes a representation of a goal or desire that grounds the truth of the statement that the individual does something for a reason (its reason). Naturally, we also do not claim that reasons have to be consciously entertained.

 
24

We are not saying this is a mechanism to which minds have conscious access.

 
25

See Adams and Beighley (2011) for discussion of a similar idea by Fitch (2007).

 
26

See also Sterelny (2003) who suggests that “de-coupled” representations are a key ingredient in the evolution of cognition.

 
27

Of course not all sensory neurons are transducers. Amacrine cells in the retina are an example. But these would not be examples of neurons playing the role described in the termite mound. One may also object that sensory neurons themselves can be falsely tokened. But one must solve the “disjunction” problem for them, as well. Do they falsely indicate their usual cause or truly indicate a broader set of disjunctive causes? If the latter, they are still just information transducers. We suspect that the analogy to the termites is of the latter variety.

 

Acknowledgments

We want to thank all those who attended the University of Delaware Cognitive and Neuroscience Workshop in September 2011 for helpful comments. Additional thanks to Ken Aizawa, Gary Fuller, and John A. Barker for helpful suggestions. We thank two anonymous referees for helpful comments. We also thank the University of Delaware's Office of Undergraduate Research for support for this project.

Copyright information

© Springer Science+Business Media Dordrecht 2012