1 Introduction

New trends in cognitive research capitalize on anti-representational dynamicism, which these days has taken much of the role once played by connectionism as the hallmark of heterodoxy. Connectionism is apparently not at the centre of the controversy anymore. This seems to be so in spite of the fact that connectionism arguably provided the research context that gave rise to and supported dynamical theory in the first place.Footnote 1 Prominent quarters in cognitive science agree that cognition is essentially accountable in terms of the assumptions and mathematical language of dynamics. As many authors have pointed out, dynamicism is not necessarily anti-representational. However, dynamicism, unlike its connectionist and classicist predecessors, “forms a powerful framework for developing models of cognition that sidestep representation altogether” (Van Gelder 1998, p. 622). More importantly, the hopes associated with dynamical approaches to cognition crystallize vividly in a growing consensus that assumes anti-representationalism as a working hypothesis (e.g., Varela et al. 1991; Van Gelder 1995; Port and van Gelder 1995; Thompson 2007; Calvo Garzón 2008; Chemero 2009).

Remarkably, the anti-representational dynamicist turn has led to a number of influential accounts in several fields: robotics (Brooks 1991), motor control and coordination (Haken et al. 1985; Kelso 1995; Beer 1995), developmental psychology (Thelen and Smith 1994; Thelen et al. 2001), perceptual categorization (Beer 2003) and imagined action (Van Rooij et al. 2002), to name a few. In this paper, I concentrate on a particular explanatory target, namely, systematicity. At present, we lack a fully developed or explicitly articulated dynamicist account of systematicity. However, proponents of non-representational dynamicism naturally attempt to extend their models to phenomena of high cognition and hence try to deal with what are apparently “representation-hungry problems” (Clark and Toribio 1994; Clark 1997, pp. 166–170). In this context, systematicity is indeed a decisive and central touchstone for any promising and overarching cognitive research program of high cognition. Unsurprisingly, systematicity has been explicitly acknowledged as a genuine explanandum in dynamicist developments (Horgan and Tienson 1994, pp. 328–333; Petitot 1995; Van Gelder 1998, §6.9; Calvo Garzón 2004). The clear presumption, therefore, is that dynamicism may offer a rival non-representational explanation of systematicity and contribute as a third party to the debate carried out by classicist and connectionist contenders for the last 30 years or so.

As is known, the ‘systematicity debate’ has consisted of a relentless exchange between classicists and connectionists. Since Fodor and Pylyshyn’s seminal paper (1988), the dialectics surrounding systematicity may make it seem that we are, as it were, stuck in a closed loop. On the one hand, supporters of classical schemes of mental representation have emphasized again and again the need for a compositional system of symbols in the account of systematicity phenomena (e.g., Fodor and McLaughlin 1990; McLaughlin 1993, 2009; García-Carpintero 1995; Aydede 1997; Fodor 1997; Hadley 2004). On the other hand, proponents of connectionist sub-symbolic schemes of mental representation have tried to respond to the different challenges posed by classicists in a variety of ways (e.g., Van Gelder 1990; Smolensky 1990; Smolensky et al. 1992; Matthews 1994; Cummins 1996; Cummins et al. 2001) (see Aizawa 2003 for detailed discussion). Whether or not connectionist models support a form of analogue computation, both classicist and connectionist competitors in the debate are versions of representational computationalism, that is to say, the view that cognition consists in the manipulation of representations according to rules. We are indeed familiar with the view that (radical or extreme) dynamicism parts company with representational computationalism (e.g., Van Gelder and Port 1995; Eliasmith 1996; Chemero 2009; Fresco 2012). In this context, the desiderata associated with anti-representational dynamicism, considered as a general theory of cognition, clearly include the aim of offering a rival and new kind of explanation of systematicity. Some authors have even shown sympathy for the view that dynamicism may not only offer a rival account, but entirely reformulate the debate outside the representational-computational paradigm so as to resolve or somehow stop the dialectical loop between classicist and connectionist parties. This seems to be Calvo Garzón’s standpoint when he writes:

It is noteworthy, however, that both hypotheses, the classical and the connectionist, fall neatly within the information-processing paradigm. [...] Perhaps we are stuck in a never-ending dialectic of positing challenges to connectionism, and then trying to account for them statistically, forever and ever. [...] In view of this scenario, I contend, we may need to consider turning to questions concerning the role that potential contenders, such as Dynamic Systems Theory (DST) [...] may play in the future. (Calvo Garzón 2004, p. 14)

Calvo Garzón’s clear suggestion in this passage is that anti-representational DST is called upon to provide, in the context of discussions about systematicity, a way out of the “information-processing blind alley”, as he calls it, represented by classicist and connectionist versions of computationalism.Footnote 2

It is my view that anti-representational dynamicism will probably not offer a fully satisfactory account of systematicity of the sort suggested in Calvo Garzón’s quotation. In this paper, I will argue, more precisely, that any explanations of systematicity offered on the anti-representational dynamicist model should be ready to meet a renewed version of the old systematicity challenge. Fodor and Pylyshyn’s (1988) celebrated challenge can be stated thus: either connectionist accounts do not explain systematicity or, if they do, they are (mere) implementations of (and hence no real alternative to) accounts in terms of symbolic representation. Without endorsing any particular position regarding the outcome of this challenge, in this paper I show that it is revisited in the anti-representational version encouraged by dynamicism: either non-representational dynamicism does not explain systematicity, or else, if it does, it is just an implementation of representational accounts.Footnote 3

I do not mean to suggest that the presented line of reasoning is, if sound, conclusive against the merits of an anti-representationalist stance generally. Anti-representational approaches may shed light on a variety of aspects of cognition and may also have a regulative role for the explanatorily useful postulation of representations. Furthermore, the discussion will involve certain obvious limitations: this critical assessment will be based upon an analysis of just one particular, simple case, and in relation to the specific problem of systematicity. Sympathizers of anti-representational dynamicism may still argue for the irrelevance of the case, the problem, or both. Nonetheless, the suggestion is that the analysis to follow spells out (1) the nature of the difficulties anti-representational dynamicism must face in the account of systematicity and (2) the traits of the debate we are forced to face once anti-representational dynamicism is under serious consideration as a general alternative model of systematic cognition.

The paper is structured in the following way. The next section presents the issue under scrutiny via a neutral characterization of dynamicism, representation and systematicity. This clarification task is especially demanding in the case of the notion of systematicity because an explicit non-representational characterization of this notion is lacking in the literature. In Sect. 3, I will analyze a particular case of systematic sensorimotor behavior, one that cannot be suspected of involving tendentious computational or representational assumptions: systematic behavior in the honey bee. I will also carefully examine in what sense dynamicist approaches may provide an explanation of such behavior and conclude that anti-representational dynamicist accounts fail to explain the fundamental trait of systematic behaviors qua systematic, i.e., their involving the exercise of the same behavioral capacities. As a conclusion (Sect. 4), I will suggest a distinctive way out of the pose-challenge/respond-to-challenge dialectics: to look for a unified, though rich and complex, cognitive science where different levels of explanation–including notably the representational level–result in powerful accounts of cognitive phenomena, such as systematicity.

2 Dynamicism, representation and systematicity. The issue neutrally described

This is the question that concerns us here: can dynamicism account for systematicity phenomena without recourse to representations? A proper statement of the question requires an explanation of the meaning of the terms involved.

2.1 Dynamicism

It is hard to overstate the rising importance of dynamicism, or dynamical systems theory, or the dynamical hypothesis, in cognitive science in recent years. Since scholars are familiar with the fundamental traits of this school, a brief characterization will suffice for present purposes.

Dynamicism is the general view that cognitive behavior can be accounted for in terms of the mathematical models of dynamics. More precisely, the dynamicist thesis claims that cognitive behavior is to be explained in terms of sets of (nonlinear) differential equations that provide the values of a number \(n\) of variables as changing over time and which define an \(n\)-dimensional state space or dynamic field.Footnote 4 In a sense, this thesis is a truism: since, by general assent, cognitive behavior is physically implemented, it must be accountable in terms of the dynamical language of physics. The truism disappears when it is claimed that cognition can and should be modeled in such terms across the board. Thus, dynamicism turns on a commitment to explanations of cognitive phenomena essentially involving (numerical) quantification of variables, a metric of time, analysis of the interdependence between variables, and a focus on differential equations describing the change of those variables over time. In the literature, there are several features that can be emphasized or added to the picture: stability or self-organization (of behavior under certain conditions) (e.g., Kelso 1995), real time modeling (as opposed to ‘ersatz’ time modeling) (e.g., Van Gelder and Port 1995), continuity in state-space evolution (e.g., Calvo Garzón 2008), agent-environment coupling (e.g., Beer 1995; Chiel and Beer 1997) or quantitative character (of the variables and behavior in the system) (e.g., Van Gelder 1998). Careful delineation of all these related aspects goes clearly beyond the purposes of our discussion.

Crucially, some theorists take it that dynamicism amounts to or else strongly encourages anti-representationalism, that is, the thesis that cognition is not representational computation. The question of whether dynamicism really entails or otherwise naturally demands anti-representationalism is contentious. As is known, even paradigmatic cases of dynamical models can be taken to involve representations in fundamental ways (Bechtel 1998; Grush 2003). Assessment of such deep questions concerning dynamicism is beyond the reasonable scope of this paper. Here, I will explore the nature of dynamicist explanations of systematicity only insofar as they are within the scope of anti-representational dynamicism (see also Sect. 1 and fn. 2).

2.2 Representation

Although more ambitious statements are available, by representation I understand a state, set of states or process (hence, something out there in the world, plausibly in somebody’s head) which stands in for or carries information about other states or events. That which the state or process stands in for or carries information about counts as the content of the representation. Representations, so understood, involve (a) a possible range of contents associated with them; (b) these contents being associated in a determined way (in accordance with a certain representational scheme); and (c) conditions on proper functioning or manipulation and on correct representation. Thus, I follow many authors (e.g., Clark and Toribio 1994; Clark 1997; Van Gelder 1995; Bechtel 1998; Grush 2003; Chemero 2009) in taking a version of Haugeland’s tripartite characterization (cf. Haugeland 1991, p. 62) as a baseline notion of representation. Although such a notion of representation involves a commitment to (information-processing) computationalism, this capsule-form definition has the merit of being quite neutral as regards the many different ways in which the notion of representation might be ultimately understood. For instance, it remains neutral as to whether representational contents are internal or external to the computing system, whether representation is primarily language-like or analogue, or whether representation should be characterized in terms of strong decouplability (of the potentially absent target represented and the representing state) or else in terms of a weak (feedback-dependent) decouplability (as in emulation theory). For present purposes, we can postpone a statement of the precise nature of representation by appealing to this baseline, all-embracing conception of representation.

Generally, and in the particular case of systematicity, I assume that the legitimate postulation of representations requires the specification of a real explanatory task not achievable by alternative theoretical means. Even though one can find a priori arguments in favor of classical representations in cognitive science (e.g., Davies 1991), I also assume that whether we are right in considering a state, set of states or process of an organism as fulfilling such a task is, ultimately, an empirical question. There seems to be a broad consensus on these points among classicists and dynamicists alike (e.g., Burge 2010; Van Gelder 1995, p. 352; Bechtel 1998; Beer 2014).

2.3 Systematicity

Our discussion must rely on a neutral notion of systematicity. More precisely, the issue at stake requires that we describe systematicity without appealing, either explicitly or implicitly, to the notion of representation. This turns out to be no easy task. The reason is that representation-free characterizations of systematicity are, if not utterly absent, quite unusual in the literature: the paradigm cases of systematicity phenomena are linguistic or conceptual cases, that is to say, cases that very obviously fit the representational scheme (see e.g., Fodor 1987; Fodor and Pylyshyn 1988; McLaughlin 1993, 2009). Some authors would even suggest that systematicity should be stated in terms of “abilities to have mental representations with propositional contents” (McLaughlin 2009, p. 254). However, a purely behavioral characterization of systematicity is very much needed for a neutral assessment of the systematicity issue generally (see e.g., García-Carpintero 1995; Verdejo 2012). A purely behavioral characterization prevents prejudice in the assessment of rival explanations at the representational level and is patently required when one is engaged in the project of assessing anti-representational theories. What I propose for present purposes is, therefore, to characterize systematicity in terms of causally and nomologically related behaviors.

By nomologically and causally related behaviors (or nomologically related behaviors for short) I understand cognitive behaviors for which conditionals of a certain form are true in virtue of, or justified by appeal to, causal laws, or laws involving causal relations. By ‘law’ in this context I refer simply to ceteris paribus empirical generalizations or regularities which are counterfactual supporting (cf. Aizawa 2003, pp. 28–29; McLaughlin 2009, pp. 252–253). For present purposes, we do not need to specify the nature of the causal relation that the law involves. Suffice it to say that the causal relation in question would prevent the law from being a merely accidental regularity or generalization. The form of the conditionals is as follows: if behavior (of type) A occurs, then behavior (of type) B (at least potentially) also occurs. The conditionals do not express a purely causal relation between behaviors A and B. They express a law or generalization, or an instance of a law or generalization, that connects A and B and which involves a causal relation. Thus, the causal relation need not, and typically should not, be read off the conditional itself. The conditionals do not say, and in the cases to be considered it is generally not the case, that A behaviors cause or bring about B behaviors.

Thus, the propounded basic characterization is this: given two nomologically and causally related cognitive behaviors, A and B, they are systematically related behaviors insofar as they are exercises of the same cognitive behavioral capacity or set of behavioral capacities. The relevant truism of this characterization is that any pair of behaviors whatsoever, A and B, might be nomologically related without being systematically related. Let us state this more precisely. The characterization assumes that every pair of cognitive behaviors, A and B, are, respectively, the behavioral results of sets of cognitive behavioral capacities A(\(c_{1}^{a}, c_{2}^{a},\ldots ,c_{n}^{a}\)) and B(\(c_{1}^{b}, c_{2}^{b},\ldots ,c_{n}^{b}\)). Provided that there is a causal law that connects A and B, rendering A and B systematic requires, in addition, that at least one of the capacities involved in the production of A is the same as one of the capacities involved in the production of B—i.e., \(\exists c_{i}^{a}\exists c_{u}^{b}(c_{i}^{a}=c_{u}^{b})\).Footnote 5

Some examples will help to elucidate this notion of systematicity. Thus, a subject S’s riding a bicycle (behavior A) is nomologically and causally related to S’s (potential) perceptual identification of bicycles (behavior B). However, the capacities responsible for A (having to do with limb movement, motor coordination or equilibrium) are in this case completely distinct from the capacities responsible for B (such as memory retrieval or recognition of bicycle-defining perceptual traits). Similarly, S’s eating a burger (behavior A) is nomologically and causally related to, as it might be, S’s digesting it (behavior B). These behaviors would however fail to be systematic because none of the behavioral capacities for A (such as capacities for biting, chewing or swallowing) are the same as the capacities intervening in the production of B (such as the ones involved in nutrient decomposition, chemical alteration and absorption).

Now, a paradigmatic case of systematic behavior is the following: to utter the sentence ‘John loves Mary’ and to utter the sentence ‘Mary loves John’. These behaviors are nomologically related ones: in actual language-users, if one exhibits one of them, then one must (at least potentially) exhibit the other. Following our proposed characterization, this pair of behaviors is, in addition to being nomologically related, also a systematic pair: the behaviors at stake correspond to the same behavioral capacities, namely, the capacities to utter the words ‘Mary’, ‘John’, and ‘loves’.

The distinction between merely causally and nomologically related behaviors and systematically related ones is crucial for a correct assessment of the systematicity debate. The distinction reflects the ‘intrinsic-connection’ requirement for systematicity appealed to by Fodor and allies (e.g., Fodor 1987, p. 149; Fodor and Pylyshyn 1988, p. 37) where, to put it in Aizawa’s terms, “the claim regarding intrinsic connections concerns cognitive capacities or competences” (2003, p. 92). In a similar vein, McLaughlin analyzes systematicity as involving a “constitutive basis” for the possession of capacities (McLaughlin 1993, §2). In the context of our discussion, we can gloss these expressions as making the fundamental point that, in order for a pair of behaviors to be systematic, it is not enough that their presence or occurrence is regularly and causally connected in nature. In addition, there must be a common explanation of this fact. In the terms I am proposing, the explanation concerns the existence of common behavioral capacities involved in the production of the target behaviors. We may state the foregoing points in terms of Behavioral Systematicity (BS):

(BS) A given pair of behaviors A and B is systematic to the extent that:

(a) A and B are causally and nomologically related behaviors (they comply with causal laws of the conditional form: if A, then (potentially) B).

(b) A and B are exercises of at least one common behavioral capacity.

(c) The fact that (b) is (part of) an explanation of the fact that (a).Footnote 6

BS invokes only behaviors and behavioral capacities and is, therefore, a representation-free notion of systematicity of the sort we need in order to be neutral with respect to anti-representationalist views. Clearly, what is crucial for BS is the correct, empirically tested, identification of behavioral capacities as manifested in overt or observable (types of) behavior. Several counterfactuals may be used to establish the existence of such capacities for any pair of systematic behaviors A and B. The counterfactuals would show that one does not find organisms that exhibit A without exhibiting B, or that if an organism ceases to exhibit A, then it ceases to exhibit B. These would be empirically tested counterfactuals to the effect that A and B constitute kinds of cognitive behaviors that, as Fodor and Pylyshyn put it, come “in structurally related clusters” (Fodor and Pylyshyn 1988, p. 49). One may of course deny, or otherwise exhibit extreme skepticism towards, the existence or the widespread existence of systematicity phenomena (e.g., Johnson 2004; Gomila et al. 2012). The ongoing dialectics, however, requires that one accept some neutral characterization of systematicity in order to advance the present discussion.
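To fix ideas, BS conditions (a) and (b) admit of a simple schematic rendering. The following minimal sketch (in Python, with behavior names and capacity labels that are merely hypothetical placeholders drawn from the examples above) encodes the capacity-overlap requirement; condition (c), being an explanatory rather than a set-theoretic matter, is deliberately left outside the code.

```python
# A minimal, purely illustrative sketch of BS (all labels hypothetical).
# A nomologically related pair (A, B) counts as systematic only if the
# capacity sets producing A and B share at least one member (BS (b)).
# BS (c), the explanatory condition, is not captured by this check.

capacities = {
    "ride_bicycle":        {"limb_movement", "motor_coordination", "equilibrium"},
    "identify_bicycle":    {"memory_retrieval", "perceptual_recognition"},
    "utter_JohnLovesMary": {"utter_John", "utter_loves", "utter_Mary"},
    "utter_MaryLovesJohn": {"utter_John", "utter_loves", "utter_Mary"},
}

def systematic(a: str, b: str, nomologically_related: bool) -> bool:
    """BS (a) and (b): nomological relation plus capacity overlap."""
    return nomologically_related and bool(capacities[a] & capacities[b])

print(systematic("ride_bicycle", "identify_bicycle", True))            # False
print(systematic("utter_JohnLovesMary", "utter_MaryLovesJohn", True))  # True
```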

Several caveats will prevent much misunderstanding in the discussion to follow. In the first place, the presented notion of systematicity is not to be considered a definition with necessary and sufficient conditions. I side here with McLaughlin in thinking that “no statement of such [noncircular necessary and sufficient] conditions, no definition [of systematicity], is to be had” (McLaughlin 2009, p. 252). In the context of our discussion, it is enough that we have a sufficiently clear, coherent and handy notion of behavioral systematicity applicable to central cases.

Admittedly, however, the target notion may be put to the test by a number of limiting cases in which the individuation of behaviors and behavioral capacities is not deemed appropriate. For instance, my catching this baseball at 75 km/h (behavior A) is certainly an exercise of the same behavioral capacity as catching this baseball at 75.5 km/h (behavior B). I doubt, however, that one is really tempted to judge that these behaviors are, in any cognitively relevant sense, systematically related (instead of considering them as the same kind of behavior simpliciter). We may refer to these cases as trivially systematic: BS is fulfilled for A and B but only because A \(=\) B. Since we do not want the number of systematic behaviors under consideration to increase absurdly, a minimally interesting reading of the notion requires excluding cases of trivial systematicity. Similarly, note that condition (c) rules out cases of what we can call idle systematicity, that is, cases in which (a) and (b) in BS are fulfilled for A and B but only because the capacity considered to be involved is a catch-all, explanatorily irrelevant capacity. For example, eating, climbing, flying or running might all be viewed as exercises of some general capacity of survival or as involving the capacity of breathing. But these behaviors are systematically related only in an idle and explanatorily irrelevant sense in relation to such capacities.

Relatedly, an inappropriate reading of the target characterization may seem to rule out paradigmatic instances of systematicity. Mentally performing a piece of mathematical or logical reasoning is certainly a case in point. There is no obvious sense in which we should consider the steps in the mental calculus as different sorts of behaviors–indeed, it is not obvious why we should consider them as overt behaviors at all–in which case the propounded characterization may be called into question. Such cases are characterized by the absence of specific observable behaviors associated with them. A sufficiently complex analysis, however, might construe cases of logical or mathematical reasoning as involving relevant sorts of (intelligent) overt behavior, such as successfully performing calculations or providing correct answers to logical problems. The details of such a story may be hard to fill in. The presumption, however, is that such a story would provide candidate behaviors that would render abstract reasoning and similar processes of high-level cognition systematic in our sense.

Finally, there are two further conditions that the relevant behaviors must reasonably meet. First, the nomological relation between behaviors that are candidates for systematicity must plausibly be symmetric; that is, conditionals of the form ‘if behavior A, then (potentially) behavior B’ must be reversible to ‘if behavior B, then (potentially) behavior A’. If the nomological relation is not reversible in this way, this would be quite strong evidence that there is no common behavioral capacity involved in their production. Secondly, the kinds of behavior at stake must not be purely reactive behaviors. The amoeba moving around in water may be taken to satisfy some instances of BS. Granted that the behavior of the amoeba is indeed purely reactive or automatic, or involves purely reactive capacities, this is an undesired result. This result is avoided if we restrict our characterization to bona fide, non-purely-reactive cognitive behaviors.

3 Systematic behavior in the honey bee

In a completely different context, Carruthers (2004) has argued that honey bee behavior involves a bee’s mind in the quite demanding sense of a belief/desire psychology. Nothing of the sort will be defended here. However, bee behavior is a case of systematicity that is especially salient in this context. To put it mildly, bee behavior is a significant kind of sensorimotor, embedded, embodied and completely practical behavior. Bluntly put, bee behavior is not a chess game or some other sort of logical or abstract cognitive phenomenon apt for easy computational or representational interpretation. My focus on such a case involves a clear suggestion: if anti-representational dynamicism cannot explain all there is, from a purely behavioral standpoint, to bee systematic behavior, that must be because systematic behavior generally does not fit the anti-representational dynamicist mould. Here is the broad outline of the argument that follows:

(1) Systematic behaviors are exercises of the same behavioral capacities (as per BS).

(2) Anti-representational dynamicism offers (eventually very complex) specifications of behavior in dynamic fields, but no account of the behavioral capacities underlying such specifications, and hence no account of systematicity.

(3) Representational theories do offer an account of the behavioral capacities underlying systematic behavior in terms of representational schemes.

Conclusion: Anti-representational dynamicism cannot adequately explain systematicity except by dynamically implementing a representational account.

To establish (2), it will be useful to investigate the explanatory import of dynamical accounts regarding (merely) nomologically related behaviors (Subsect. 3.1) and behaviors that are furthermore systematic (Subsect. 3.2). This (abductive) argument will be completed in Subsect. 3.3 with a defense of (3) via an analysis of the representational alternative.

3.1 Merely nomologically related behavior: honey bee flying behavior

It is common wisdom that honey bees–Apis mellifera–spend their lives foraging nectar and pollen for the hive’s colony. They look for a source of nectar and then go back to the nest over and over again, from birth until death. Detailed observation has shown that honey bee spatial behavior is richer than that of many other insects, involving a variety of flight strategies such as straight flight trajectories, landmark exploitation and shortcutting. For present purposes, we can select the following extremely simplified conditional regarding bee cognitive behavior: if a honey bee is capable of flying from the hive to sources of nectar, then, as a matter of empirical and contingent truth, it is also capable of flying back from the sources of nectar to the hive. We have then the following pair of nomologically related flying behaviors:

  • FB1: flying from the hive to sources of nectar.

  • FB2: flying from sources of nectar to the hive.

Scholars have postulated quite complex kinds of representations in order to account for bee flying and navigating behavior, including both egocentric view-based dead reckoning and allocentric, general map-like spatial representation (see e.g., Menzel et al. 2006). But our aim is to consider whether an account in terms of anti-representational dynamicism can be given that dispenses with representations altogether and is still a satisfactory account of bee flying behavior. On the assumption that FB1 and FB2 are not systematic, but merely nomologically and causally connected cognitive behaviors,Footnote 7 there is no obvious reason to doubt that such an explanation is possible.

Let us illustrate this point with a toy example. Consider a dynamical function where the target behaviors FB1 and FB2 are accounted for in terms of the bee’s flying position as continuously changing over time with respect to the hive’s position. Let us introduce, then, the position of the bee in the hive’s range \((x_{\mathrm{b}})\) and the position of the hive \((x_{0})\). Bee flying behavior can be seen as responding to distance with respect to the hive \((x_{\mathrm{b}} - x_{0})\). When a maximum distance in a foraging flight has been reached (and a target quantity of nectar achieved), bee dynamics makes the bee return home. That is, the effect of distance would be a continuously increasing negative force towards the hive’s position. Once at the hive, however, a certain ‘foraging inertia’ produces another foraging flight, which, again, would be associated with a continuously increasing back-to-hive flying response. Thus, to a very rough approximation, bee flying behavior (FB1, FB2) can be accounted for in terms of the following second-order linear differential equation:

$$\begin{aligned} m_{\mathrm{b}}\,\frac{\mathrm{d}^{2}x_{\mathrm{b}}}{\mathrm{d}t^{2}} = -k_{\mathrm{b}}(x_{\mathrm{b}} - x_{0}), \end{aligned}$$

where \(m_{\mathrm{b}}\) is interpreted as a bee foraging constant, \(\mathrm{d}^{2}x_{\mathrm{b}}/\mathrm{d}t^{2}\) stands for the continuously changing bee foraging impetus, and \(k_{\mathrm{b}}\) is a homing constant. To be sure, this serves at most as a rough and even metaphorical approximation to a dynamical account of bee flying behavior, but it is enough to illustrate how easily we can begin to provide a dynamical account of such behavior. Indeed, so interpreted, bee flying behavior can be assimilated to the behavior of a simple harmonic oscillator–such as a spring or a pendulum–where \(m_{\mathrm{b}}\) is the inertial mass, \(\mathrm{d}^{2}x_{\mathrm{b}}/\mathrm{d}t^{2}\) is the acceleration and \(k_{\mathrm{b}}\) is the stiffness constant: a paradigmatic instance of behavior accountable in dynamical terms (see Fig. 1).Footnote 8

Fig. 1 Bee flying behavior can be initially modeled as an undamped pendulum or some other form of simple harmonic motion. Foraging displacements (black arrows) are associated with a restoring force towards the equilibrium position \(x_{0}\) (the hive), which produces homing displacements (grey arrows) at the maximum distance. Once bees arrive at the hive, a certain foraging inertia makes them go past equilibrium to complete the cycle

As desired, the dynamics here are completely free from representational posits and offer an account of why FB1 behavior is connected in nature to FB2 behavior: the bee flies in an oscillatory fashion, in such a way that it seeks nectar as it flies from the hive until it reaches a maximum distance (FB1) and, given the restoring force associated with the maximum distance, it goes back to the hive to unload its cargo (FB2). The bee repeats this operation because a certain ‘foraging inertia’ makes it go past the hive’s equilibrium position, oscillating back and forth through the hive. In short, with sufficient aid of mathematical sophistication, there is no obvious reason for doubting that a satisfactory account of the nomological relation between FB1 and FB2 could be given in non-representational dynamics.
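For concreteness, the oscillator reading of the toy model can be simulated in a few lines. The following sketch (in Python, with parameter values that are illustrative rather than empirically fitted) integrates the equation above and labels each phase of the cycle as FB1 or FB2 according to whether the bee is moving away from or towards the hive.

```python
# Minimal simulation sketch of the toy oscillator model (all parameter
# values illustrative). Integrates m_b * x'' = -k_b * (x_b - x_0) with
# forward Euler and labels each phase as FB1 (outbound) or FB2 (homing).

m_b, k_b = 1.0, 4.0      # foraging constant, homing constant
x0 = 0.0                 # hive position (equilibrium)
x, v = 0.0, 2.0          # initial position and 'foraging impetus'
dt = 0.01

for step in range(1000):
    a = -k_b * (x - x0) / m_b          # restoring 'homing' acceleration
    v += a * dt
    x += v * dt
    moving_away = (x - x0) * v > 0     # velocity points away from the hive
    phase = "FB1 (foraging)" if moving_away else "FB2 (homing)"
    if step % 100 == 0:
        print(f"t={step*dt:4.1f}  x={x:+.2f}  {phase}")
```

As the printout shows, the model yields the nomological connection between FB1 and FB2 (each outbound phase is followed by a homebound phase) without a single representational posit.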

3.2 Systematic behavior: honey bee communicating behavior

As is known, honey bees do more than just fly around foraging nectar for the hive’s colony. They also possess a unique and considerably rich system of communication to report to other bees the presence and position of nectar, pollen or water relative to the position of the hive and the position of the sun in the sky. They dance in a figure-eight pattern in the hive, signaling with the straight movements crossing the center of the figure the direction of the source of nectar (determined by the angle described with respect to the sun) and its distance (determined by the number of waggles performed) (Gould and Gould 1988). For present purposes, let us concentrate on the following simplified conditional regarding bee communicating behavior. If honey bees are capable of communicating, via an appropriately performed waggle dance, the position of nectar 500 m north of the hive, then they are, as a matter of empirical and contingent truth, also capable of communicating, via an appropriately performed waggle dance, the position of nectar 500 m south of the hive. The relevant pair of nomologically connected communicating behaviors is then CB1 and CB2:

  • CB1: perform waggle dance so as to signal nectar 500 m north of hive.

  • CB2: perform waggle dance so as to signal nectar 500 m south of hive.

This is, as required, a completely neutral description of bee communicating behavior. Now, can this communicating behavior be explained in non-representational dynamical terms?

Let us have a look at a possible strategy. Thelen and colleagues first suggested (Thelen and Smith 1994) and then carefully articulated (Thelen et al. 2001) a dynamical account of a classic case of infant perseverative reaching behavior, the so-called ‘A-not-B error’ originally reported by Piaget (1954). Infants between 7 and 12 months of age exhibit object-reaching behavior as if objects presented to them and then hidden had lasting existence where they first appeared (location A), even if they observe how the object is switched to another hiding location (location B). In broadest outline, Thelen et al. model the infant’s reaching behavior as a “coupled dynamics of looking, planning, reaching and remembering within the particular context of the task” (Thelen et al. 2001, p. 5). Thus, reaching-A and reaching-B behaviors are the result of the dynamic activation of a motor planning field, which changes over time and is itself the result of the integration of the activation of visual and memory parameters (namely, the task input concerning the target locations in the experimental setting, the specific input relative to the effect of a cued location, and the memory input regarding previous reaching decisions) together with a developmentally constrained cooperativity parameter. When the dynamics yield activation in the motor field above threshold, the reaching movement thereby specified is generated by the infant (see Thelen et al. 2001 for details).

Now, Thelen et al.’s (2001) non-representational model works very well in dynamically accounting for the different aspects that affect child development (including, notably, the child’s memory) and that determine the ‘A-not-B’ error. This and similar models that account for the dynamics of motor planning (e.g., Erlhagen and Schöner 2002; Schöner et al. 1997) may therefore constitute a substantial anti-representational basis for the explanation of honey bee communicating behavior.

Consider the following sketch of a model by way of illustration. Analogously to the way in which Thelen et al. model the infant’s (reaching-A and reaching-B) behavior, bee communicating behavior can be modeled in terms of a motor planning field–in this case a communicating planning field–which shows the activation of a continuous communicating parameter, x, over time. Different levels of activation of the communicating parameter, u(x), would correspond to different specifications of communicating behavior. If the level of activation corresponding to a specification of communicating behavior is above threshold, then the bee would actually perform the corresponding waggle dance. The target communicating behaviors are CB1 and CB2 above, but the model could be extended to any communicating behavior CB\(n\). The dynamic field may thus be a function of the communicating parameter and time, u(x, t). The system, in addition, can be taken to be a function of a great many different input parameters (such as the ones corresponding to sources of nectar detected, quantity/quality of nectar, distance, moment of the day, etc.). These input parameters would be the counterpart of Thelen et al.’s task input, specific input and memory input. Now, if the input parameters and the evolution of the system in time are appropriate, a given level of activation of the communicating parameter would be above the performing threshold and the bee would actually perform the corresponding waggle dance (see Fig. 2).

Fig. 2 The dynamic field for bee communicating behavior could be seen as a parameter, x, whose activation, u(x), specifies a range of different behaviors (CB1, CB2, ..., CB\(n\)) and changes continuously over time. At t\(^\prime \), the level of activation for CB1 is above threshold and the bee would actually perform the corresponding waggle dance. These dynamics can be understood as a function of a variety of input parameters, such as, say, the one corresponding to the quality of nectar detected: if the quality of nectar is higher at a given location, the activation of the communicating parameter for that location would also be higher
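Again for concreteness only, here is a minimal sketch of the kind of dynamic field just described. Everything in it is a hypothetical simplification: a one-dimensional communicating parameter x (read as signed distance from the hive, north positive), an activation field u(x, t) that relaxes towards a Gaussian input bump for detected nectar, and a fixed performance threshold.

```python
# Minimal sketch of the dynamic-field idea (all parameters hypothetical).
# u(x, t) is the activation of a continuous 'communicating parameter' x;
# input (e.g. nectar quality at a location) pushes activation up, and a
# waggle dance for x is performed once u(x, t) crosses the threshold.

import numpy as np

x = np.linspace(-500, 500, 201)   # signed distance: north (+) / south (-)
u = np.zeros_like(x)              # activation field u(x, t)
threshold, tau, dt = 1.0, 5.0, 0.1

def nectar_input(x, location, quality, width=50.0):
    # Gaussian input bump centred on the detected nectar location
    return quality * np.exp(-((x - location) ** 2) / (2 * width ** 2))

inp = nectar_input(x, location=500.0, quality=2.0)   # nectar 500 m 'north' (CB1)

for t in range(200):
    u += dt / tau * (-u + inp)    # field relaxes towards its input
    above = x[u > threshold]
    if above.size:                # field specifies a dance: CB1 is performed
        print(f"t={t*dt:.1f}: dance for locations {above.min():.0f}..{above.max():.0f} m")
        break
```

Swapping the input bump to location=-500.0 would make the field specify CB2 instead; the sketch thereby captures the nomological connection between CB1 and CB2, which is precisely all that, I argue below, such a model captures.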

Arguably, if fully laid out, one such account would be promising for the explanation of the nomological connection of communicating behaviors CB1 and CB2. However, CB1 and CB2 are not only nomologically related from a strictly behavioral point of view. In addition, bee communicating behavior is, to all intents and purposes, a clear instance of systematic behavior.

To repeat, systematic behaviors are (causally and nomologically) connected behaviors which are, as a matter of empirically testable fact, explained as involving exercises of the same behavioral capacities (see BS in Sect. 2). CB1 and CB2 are a case in point. CB1 clearly differs from CB2: it is not the same, from a behavioral point of view, to signal nectar 500 m north of the hive as to signal nectar 500 m south of the hive. However, and this is the crucial point, no one would reasonably deny that CB1 and CB2 are exercises of the same behavioral capacities, namely, the capacities involved in signaling nectar found in foraging tasks somewhere in the hive’s range.Footnote 9

The non-representational dynamicist account seems meager as an account of systematicity in this behavioral sense. Even if we could formulate a mathematical model with the assumptions and language of dynamic systems in this case, and even if all the variables and complexity of real bee behavior were taken into account in this mathematical model, the nontrivial question would still arise as to why bee communicating behavior CB1 is at all systematically linked with bee communicating behavior CB2, that is, why CB1 and CB2 involve exercises of the same behavioral capacities. In the appropriate system of differential equations, CB1 would be just a part (e.g., a particular point, a set of points, a basin of attraction, an arrangement of basins of attraction) of the continuous dynamic field, whereas CB2 would be just another part of the dynamic field. It would remain mysterious why CB1 and CB2 are exercises of the same bee communicating capacities. The systematicity question is thus how one can tell apart, just by analyzing the differential equations or parameters of a given dynamics, which behaviors are merely nomologically related and which are furthermore systematic. No easy answer to this question seems to be available because any behavior (independently of whether it is systematic or not) would be, from a dynamical perspective, just a part of the dynamic field. If this is correct, anti-representational dynamicism is, as a matter of principle, unable to properly identify, let alone explain, systematicity phenomena.

Anti-representationalists may wish to reply to the foregoing considerations in a number of ways. First, they may straightforwardly object that, in dynamical models, CB1 and CB2 can be considered exercises of the same behavioral capacity insofar as they are specified by the same set of nonlinear differential equations.

This suggestion, however, clearly underestimates the problem under consideration, namely, the problem of properly distinguishing nomologically and causally connected behaviors from systematic ones. Let us assume that we have a correct dynamical account of actual CB1 and CB2 in terms of differential equations. Now, let us imagine a counterfactual situation in which CB1 and CB2 are not exercises of the same behavioral capacity (say, CB1 is the exercise of a signaling-north communicating capacity and CB2 is the exercise of a signaling-south communicating capacity). The question is: would the difference between the actual (systematic) and counterfactual (non-systematic) scenarios involve a difference in the corresponding dynamic systems of differential equations? It is hard to see how, given that the target behaviors (as opposed to the underlying capacities) would still be the same and, hence, the dynamic activation values specifying the (occurrence of the) behaviors must also be the same in the actual and counterfactual cases. And what else can a set of differential equations provide apart from a dynamic activation field which specifies (the occurrence of) the target behaviors? Since the same specifications of bee behavior may correspond to different underlying capacities, these specifications are clearly insensitive to the capacities underlying such behavior. Therefore, “same set of differential equations” cannot plausibly be interpreted as “same underlying behavioral capacities”.

The anti-representational dynamicist may also try to reply that the discrimination between behaviors that are exercises of the same capacities and behaviors that are not can be made, in dynamical approaches, via empirical data somehow captured or predicted in the models. Relevant developmental and counterfactual data about a given pair of nomologically connected behaviors would provide evidence about whether the pair involves the same or distinct behavioral capacities. What the dynamicist does is provide a dynamical account of those empirically confirmed systematicities.

This reply would just be a plain acknowledgement of the main thesis here defended, namely, that anti-representational dynamicism cannot, in and of itself, discriminate, let alone provide a satisfactory account of, systematic behaviors. The dynamicist needs to appeal to other sorts of considerations, beyond dynamical approaches as such, in order to offer principled criteria for distinguishing merely nomologically connected behaviors from behaviors that are, in addition, systematic. Once the discrimination is made via alternative means, the dynamicist has nothing on offer that would explain or ground such discrimination.

The problem is not compellingly addressed by appealing to future improvements in dynamical accounts and concepts. Forthcoming dynamical analyses, one might be tempted to argue, could identify which points, sets of points, attractors or bifurcations in state space or dynamic field correspond to a given behavioral capacity, so as to identify in turn combinatorial structures involved in properly systematic behaviors. This strategy is, to my knowledge, quite speculative, but it illustrates the dialectical situation once we have arrived at this point. A new systematicity loop would seem to begin. Granted that we have a dynamic system with a definite dynamic field, it is hard to see what could ground the required identification of systematic behaviors–viz. the required selection of points, sets of points or basins of attraction in the dynamic field–apart from sheer stipulation. The dynamicist contender would try to provide improved and extremely complex mathematical criteria to address this worry. Still, the decision as to what parts of the dynamic field correspond to systematic behaviors would arguably be based upon considerations other than purely dynamic ones. The suggestion is that what would do the job in satisfactory dynamicist explanations of systematicity is not the dynamics of the system per se, but our best available theory about the target behavioral capacities. And which theory is that? No matter which one you choose exactly, it is to all appearances a representational theory of cognition.

Another possible way of approaching the problem at stake is in terms of the distinction between covering-law and mechanistic explanations. It is often claimed that dynamical models distinctively provide covering-law explanations, that is to say, explanations that proceed via subsumption of a target phenomenon under a natural law, from which it can be predicted when the relevant conditions are stated (Bechtel 1998; Walmsley 2008; Chemero 2009). The problem that anti-representational dynamicism faces regarding systematicity can be seen as a consequence of the fact that no causal processes or mechanisms are specified in such an account. Covering-law explanations for any pair of behaviors A and B are clearly silent regarding underlying behavioral capacities and, therefore, seem compatible with these behaviors being either systematic or unsystematic. This is so over and above the predictive power of the alleged dynamical model. But the point against dynamical accounts of systematicity is not merely, as it might be, that dynamical explanations are not mechanistic and are, for this reason, at fault (cf. Eliasmith 1996). The point is, more exactly, that, insofar as they are not mechanistic, no criterion appears to be in view for discriminating between merely nomologically connected and genuinely systematic behaviors.

True, some authors would be ready to argue that dynamical explanations are also mechanistic or causal (e.g., Gervais and Weber 2011; Zednik 2011). But conceding this point does not take the anti-representational dynamicist very far. For the problem would then simply turn on whether the dynamicist could really invoke non-representational mechanisms and causes in this context. Indeed, it is hard to see how the alleged mechanisms or causal agents, if any, figuring in would-be satisfactory dynamical explanations of systematicity could be other than, precisely, the corresponding mechanisms and causal agents postulated in schemes of representational explanation. This point is in agreement, for instance, with Carlos Zednik’s observation that advocates of mechanistic explanations in dynamical approaches “may be steering toward reconciliation with proponents of representationalism” (Zednik 2011, p. 261, emphasis his). Thus, the dilemma between covering-law and mechanistic-causal explanation in dynamicism as regards systematicity is plausibly seen as just a version of the dilemma between either failing to explain systematicity phenomena or else implementing representational explanations of them. Let us examine, therefore, the latter kind of explanation.

3.3 The representational solution

Compare the above scenario with the following: let us postulate a representational system or scheme for the honey bee, say, a system that involves representation of nectar position relative to the hive. For present purposes, we can state the representational scheme as a function R that takes direction, \(d\), distance with respect to the hive, \(dis\), and target, \(t\), as inputs and delivers the corresponding waggle dance as output. Let us assume that the function R(\(d\), \(dis\), \(t\)) determines a possible range of contents (of the form ‘Target \(t\), at direction \(d\) and distance \(dis\)’) expressible in a waggle dance. Let us assume further that this function is subject to malfunctioning and misrepresentation. Since the psychological reality of R would ultimately require its physical implementation in bee organisms, it follows that R would involve the sorts of states, sets of states and processes meeting our previous characterization of representation (see Sect. 2 above).

Although described in the roughest outline, this representational scheme constitutes what we can call a nectar-location representational system for the honey bee. With some such representational system in hand, we begin to see what would explain the fact that CB1 and CB2 are exercises of the same behavioral capacity: if a system such as R is used by the bee, then we can consider it the basis of the different manifestations of bee communicating behavior. Thus, if bees exploit R in order to signal ‘\(t =\) NECTAR, at \(d_{1} =\) NORTH and \(dis =\) 500 m’, then it is only to be expected that they exploit the same representational scheme in order to signal ‘\(t =\) NECTAR, at \(d_{2} =\) SOUTH and \(dis =\) 500 m’. The exercise of one and the same behavioral capacity is seen, on this view, as the use of one and the same representational scheme R.
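By way of a toy illustration (and on entirely hypothetical simplifying assumptions about how a dance encodes direction and distance), the scheme R can be sketched as follows. The point of the sketch is just that CB1 and CB2 come out as two exercises of one and the same function R.

```python
# Minimal sketch of a nectar-location representational scheme R (the
# dance encoding and all names are hypothetical simplifications). R maps
# a content <target t, direction d, distance dis> to waggle-dance
# parameters; CB1 and CB2 are then two uses of one and the same scheme.

from dataclasses import dataclass

@dataclass
class WaggleDance:
    angle_to_sun: float   # degrees; encodes direction of the source
    n_waggles: int        # encodes distance to the source

DIRECTION_ANGLE = {"NORTH": 0.0, "SOUTH": 180.0}   # toy encoding

def R(d: str, dis: float, t: str) -> WaggleDance:
    # one representational scheme underlying every communicating behavior CBn
    assert t == "NECTAR"
    return WaggleDance(angle_to_sun=DIRECTION_ANGLE[d],
                       n_waggles=round(dis / 100))   # toy distance code

cb1 = R(d="NORTH", dis=500, t="NECTAR")   # CB1: nectar 500 m north of hive
cb2 = R(d="SOUTH", dis=500, t="NECTAR")   # CB2: nectar 500 m south of hive
print(cb1, cb2)   # different exercises, same capacity (the scheme R)
```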

Filling in the details of the representational story involves all sorts of complex and substantial matters. The correct statement of a representational system must include fundamental claims about the exact content and structure of the representations, their biological and developmental plausibility, their actual availability to bee organisms and much else. Note, however, that the postulation of representational schemes of this sort is not an ad hoc explanation. A representational system R would be empirically grounded and made precise in the light of the bee’s testable signaling practices.Footnote 10 Its postulation need not be supported by prior, uncritically assumed behavioral capacities free from empirical commitments.

The explanatory import of representational systems seems to generalize widely. It is plausible that such systems provide an account of behavioral systematicity across many different cognitive domains and across many different kinds of organism. The relevant question in this context is of a familiar form: what, if not representational theories, would account for systematicity? The suggestion is that representational theories of cognition (whatever exactly their specific nature) are actually our best available accounts of systematic behaviors. To all appearances, then, the only way anti-representational dynamicism can account for systematicity is by appealing (however tacitly) to some such theory. There is of course nothing theoretically objectionable about this. There is a price to pay nonetheless: dynamicism is no longer anti-representational and becomes an implementation of representational theories. This is known territory; it is just the old systematicity challenge reformulated against anti-representational dynamicism.

4 Conclusion: a renewed systematicity challenge

This is the argument so far. Once equipped with neutral notions of representation and behavioral systematicity, we can analyze a substantial case of systematicity in nature, namely, honey bee communicating behavior. Unlike, perhaps, honey bee flying behavior, honey bee communicating behavior is patently behavior that arises from the same behavioral capacities. Dynamic fields cannot account for this fact because dynamic fields would include all sorts of behaviors and would offer no way of distinguishing, from among those behaviors, the ones that are exercises of the same behavioral capacities from the ones that are not. Things change dramatically, however, if we appeal to an appropriate scheme of representations. The moral is simple, as it is familiar. Anti-representational dynamicism has one of two options: either dismiss systematicity in any neutrally described terms (that is, dismiss systematicity tout court), or else be conceived as an implementation of representational systems. This would seem fatal for anti-representational dynamicism considered as an alternative and general theory of high cognition. We should try to do better, if we can.

The foregoing argument would perhaps not persuade anti-representationalists of the failure of their anti-representationalism. Even so, it is clear that the promise of a radically new kind of explanation in the systematicity debate, held out by anti-representational dynamicism, is simply ungrounded. In this context, anti-representational dynamicism might at best lead us to a slightly modified version of Fodor and Pylyshyn’s well-known systematicity challenge.

The line of reasoning defended here also provides support for a more general and far-reaching conclusion regarding the benefits of collaborative research, which may be succinctly expressed in terms of levels of explanation. Marr (1982) famously delineated a three-level distinction as regards explanation in cognitive research (i.e., the distinction between the computational, the algorithmic and the implementation levels). Even if Marr’s own developments may be controversial in several ways, the existence of different levels of explanation is perhaps the only unchallenged idea of old computational research (see Verdejo and Quesada 2011 for discussion). However, the dynamicist’s anti-representational turn actually involves an unjustified underestimation of the importance of the different levels and, especially in this context, of the algorithmic, representational level. The claim that there are no representations, or that we should dispense with representations, is a substantial and extreme claim concerning Marr’s algorithmic level 2: it amounts, in fact, to a radical dismissal of research at that level as a genuine source of scientific progress. By contrast, it would seem that we can accommodate everything that dynamicism has on offer within an overall framework of representational computationalism, writ large, in which explanatory (Marrian) levels are better demarcated and integrated. This would probably require the task of investigating, for each particular dynamical model on offer, at which level or levels it operates.Footnote 11

This does not mean that dynamical approaches would be secondary in this scenario. Here, as always, it is useful to keep in mind Marr’s own dictum: levels of explanation must be differentiated but all these levels are levels “at which an information-processing device must be understood before one can be said to have understood it completely” (Marr 1982, p. 24). Since information-processing devices can also be taken to include dynamical information-processing devices, nothing at first sight seems to tell against the idea of a dynamical account of systematicity that is part of an integrated account at various levels of explanation, including, notably, the representational or algorithmic level.Footnote 12 This strategy for a unified cognitive science would put us far from the pose-challenge/respond-to-challenge dialectics and is, plausibly, the most promising framework for reaching a powerful and rich account of systematicity phenomena.