Philosophical Studies, Volume 176, Issue 10, pp 2563–2588

Talking our way to systematicity

  • Léa Salje
Open Access


Do we think in a language-like format? Taking the marker of language-like formats to be the property of unconstrained systematicity, this paper considers the following master argument for the claim that we do: (1) language is unconstrainedly systematic, (2) if language is unconstrainedly systematic then so is thought, (3) so thought is unconstrainedly systematic. It is easy to feel that there is something right about this argument, that there will be some way of filling in its details that will vindicate the idea that our thought must be unconstrainedly systematic given that the language in which we express it is. Clearly, however, the second premise needs support—we need a principled reason for moving from the unconstrained systematicity of language to the unconstrained systematicity of thought. This paper gives three passes at formulating such a principle. This turns out to be much harder than it might seem. We should, I conclude, resist falling too easily for the lure of this master argument for the language-like format of thought.


Keywords: Thought · Language · Systematicity · Generality constraint

1 Introduction

Do we think in a language-like format? One reason to think that we do is that we are linguistic creatures—we can express our thoughts in public language, a systematic representational format in which there is no system-imposed limitation on what can be represented by the possible recombinations of words into new well-formed sentences. This status as complex language-users, so the argument might go, requires our underlying thoughts to be likewise unconstrainedly systematic. And if we take unconstrained systematicity to be a marker of language-like systems, then this secures the claim that our thoughts are language-like.

The schematic argument just given relies on a key bridging principle to take us from the unconstrained systematicity of public language to the unconstrained systematicity of thought. In outline the argument is, I think, a seductive one; it is easy to feel that there will be some way of filling in its details that will vindicate the idea that the system of thought underlying our use of public language must be unconstrainedly systematic, given that language is. This paper explores a number of forms the argument’s key bridging principle might take. To anticipate, it turns out to be much harder than it seems to say what it is. I conclude that we should resist falling too easily for the lure of this master argument for the claim that we think in a language-like format.

In the next section (Sect. 2) I say a bit more about what it means to take unconstrained systematicity as a marker of language-like representational formats. In Sect. 3 I introduce the master argument, and in the three sections following it (Sects. 4–6) I present three different passes at giving a bridging principle that will take us from the unconstrained systematicity of public language to that of the thought underlying language-use. The first pass exploits the idea that language expresses thought; thought, according to this first strategy, must be at least as systematic as the language that expresses it. The second pass has it that thought is relevantly similar to language, such that we should expect the two systems to share their systematicity profiles. The third and final pass appeals to facts about what it takes to learn a public language—we could not become public language users, the idea is, unless our thought was similarly systematic. I conclude in Sect. 7.

2 Unconstrained systematicity

The question whether we think in a language-like format needs clarifying. Plainly, we represent things in thought, and given that representations cannot float free of a format in which they are represented, questions about the format of thought are surely well-put. But (perhaps with the exception of episodes of inner speech) we don’t literally have sentences running through our introspectible streams of consciousness. This raises a methodological question: if the format of thought is not directly introspectible, how should we approach questions about it? A standard approach takes advantage of the fact that different formats organise contents in different ways; even if the format of thought cannot be directly known by introspection, it can be abductively inferred from observations about its computational character. To get our question off the ground, then, we need to pinpoint a distinguishing feature of language that reveals something about the way that contents computationally pattern in that format. Our question will then be whether thought shares this feature.1

An initially plausible candidate is the property of systematicity. If a representational medium is really like language then a fully competent user ought in principle to be able to recombine the simple parts making up complex strings in that medium into any number of new combinations, syntactic constraints aside. After all, full and free recombinability of parts seems to be permitted by the internal structure of language, so if thought is structured like language then we should expect the potential for unlimited recombinability of parts in thought too. This is a close descendant of Gareth Evans’ Generality Constraint on thought, the condition that ‘[i]f a subject can be credited with the thought that a is F, then he must have the conceptual resources for entertaining the thought that a is G, for every property of being G of which he has a conception’ (Evans 1982, p. 104).

There are a number of ways of formulating the property of systematicity. The following is medium-neutral: a medium is systematic iff a suitably idealised subject who understands the complex expressions \(e_{1}, \ldots, e_{n}\) in that medium also understands all the complex expressions that can be built up of the constituent parts of \(e_{1}, \ldots, e_{n}\) in accordance with the recombinability principles associated with that medium.2 Of course, much rides on what limitations are ruled out of relevance by mention of an idealised subject. The qualification is intended to capture performative shortcomings of ordinary subjects. In the case of thought, for instance, an ordinary thinker might be prevented from entertaining certain recombinations, not because of anything to do with the representational format of thought, but because of problems with ‘hardware’ as Chris Peacocke puts it—neural anomalies and their like, or because of the suppressive influences of PTSD, or because the thinker keeps being knocked unconscious at the critical moment, or for any of a long list of similar reasons.3 These are hardly the sorts of failings that will be revelatory of facts about the nature of the representational system in which the recombinations were attempted. For the property of systematicity to hold, it must be the case that there are no limitations on the possible recombinations available to a user imposed by features of the system itself, rather than by a given user’s deficiencies.

To say that language is systematic is to say that there is no system-imposed limitation on the new sentences available to a suitably idealised language-user, made up of constituent words from other complex sentences already in her grasp—for any two sentences ‘aRb’ and ‘cRd’ already available to the suitably idealised language-user, she will also be in a position to put together the sentences ‘aRd’ and ‘cRb’. Linguistic competence, that is to say, is not punctate, sentence by sentence, but rather consists in a set of interconnected abilities, such that an appropriately idealised user couldn’t be counted as having the former sentences in her understanding-range and not the latter.
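The recombination pattern just described can be made vivid with a toy sketch (mine, not the paper’s) in Python: given a stock of grasped ‘aRb’-style sentences, the system itself leaves no gaps in the recombinations available to an idealised user.

```python
# Toy model of systematicity for three-character 'aRb' sentences.
# An idealised user who grasps some sentences thereby has every
# recombination of their constituent parts in her understanding-range.
from itertools import product

def recombinations(sentences):
    """Return every well-formed 'xRy' string buildable from the names
    and relation symbols occurring in the input sentences."""
    names = {s[0] for s in sentences} | {s[2] for s in sentences}
    relations = {s[1] for s in sentences}
    return {f"{x}{r}{y}" for x, r, y in product(names, relations, names)}

grasped = {"aRb", "cRd"}
available = recombinations(grasped)

# Grasping 'aRb' and 'cRd' brings 'aRd' and 'cRb' along with it:
# linguistic competence is interconnected, not punctate.
assert {"aRd", "cRb"} <= available
```

The point of the sketch is only that the gaplessness is imposed by the system’s recombinability principles, not by anything about a particular user.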

There are, of course, many different sorts and strengths of understanding, and the plausibility of the claimed systematicity of language depends crucially on where we set that bar. A high bar might demand that the subject be able to conceive, in some appropriate sense, of the contribution being made to a sentence’s truth conditions. Call this strong notion of understanding s-understanding. If s-understanding is what we have in mind, then the claim that language is systematic is not obviously true (as Zoltán Szabó asks doubtingly, ‘do all who understand “within an hour” and “without a watch” also understand “within a watch” and “without an hour”?’ (Szabó 2017)). A much weaker notion has it that so long as the subject grasps the semantic significance of the words making up a sentence, and so long as we take the meaning of any complex linguistic expression to be determined by the meaning of its parts, then that is enough for the subject to count as understanding the sentence. Call this weak notion of understanding w-understanding. So stated, the systematicity of language is difficult to deny. Different linguistic expressions (or perhaps different uses of those expressions) have different grammatical functions. So long as a speaker grasps those individual expressions in their original strings, and reshuffles them in a way that respects those functions, she will have a new sentence in her w-understanding range. The semantic properties might be relevant to whether the sentence is (non)sensical, but not to whether it is a sentence that she w-understands.

Put like that it’s hard to see how language could fail to count as systematic.4 In what follows I assume the weaker notion of understanding, and take the systematicity of language for granted.

Provisionally, then, the question whether we think in a language-like format is, at least in part, the question whether we think in a systematic representational format. A further clause to this condition is prescribed by recent work by Elizabeth Camp, who has argued that mere systematicity is not unique to language-like systems.5 Once we prise apart the traditional dichotomy between language on the one hand (digital, compositional, systematic, conventional) and pictures on the other (analogue, non-systematic, non-conventional resemblance), we make space for a spectrum of ‘mixed’ formats in the middle, formats that combine features traditionally associated with one side of the dichotomy or the other. Examples discussed by Camp include Linnean taxonomic trees, Venn diagrams, seating charts, and maps, but we might also want to think about emoticons, braille, scuba diving signals, modern staff notation, flow diagrams, hieroglyphics, kanji, timelines, and many more besides.6 Camp convincingly argues that, like cases of ‘pure’ language that display all and only features of one side of the traditional dichotomy, many of these mixed systems also display systematicity—even if they also, for instance, encode some of their information analogically or by a principle of isomorphic resemblance. To give an example (mine, not Camp’s): there are semantically significant constituent parts to a timeline that can be recombined to represent different overall contents. The semantic properties of those basic parts matter for the question whether there is a coherent overall state of affairs represented by the resultant timeline, and if so, whether it accurately represents how things really are (or were, or will be) in the world.
But so long as standard timeline-syntax is respected—the scale points in one direction, no symbol is positioned off the scale, there is clear spatial ordering between the timeline’s basic referential parts, etc.—then we can reshuffle the semantically meaningful parts of a given timeline however we like, or reassemble it using parts from other timelines, and still have before us a well-formed representation in that format. Mere systematicity, then, is not a distinctive marker of language-like formats.

The distinguishing feature of language-like systems, Camp argues, is not systematicity per se, but the semantic laxity of the recombinability principles governing that systematicity.7 In language, the principles governing the ways in which the system’s basic parts (words) can be combined into complexes (sentences) have minimal semantic significance: they place no substantive de jure constraints on what the referents of those basic parts can be. She explains:

Predication [...] signifies instantiation or property-possession [...]; and this relation is sufficiently abstract and general that it can relate nearly any property and object. Further, in language the referential relation mapping basic expressions to objects and properties in the world is conventional or causal. Taken together, both the referential relation and the combinatorial principle are abstract enough that they don’t impose in-principle limitations on what can be assigned as referents to those basic expressions (Camp 2009a, pp. 120–121).8

Now, this lack of substantive constraint on what can be expressed by a format’s basic parts ramifies into the range of total contents that can be expressed in that format. If Camp is right, then it turns out that the vast expressive range of language can be traced back to the semantic insignificance of the principle governing the recombinability of its basic parts. What’s special about language is not mere systematicity, but semantically unconstrained systematicity (or ‘unconstrained systematicity’ for short).

Compare again the case of timelines. Timelines, like language, are systematic. Where they differ from language is that the principle of recombinability governing timelines has what we might call robust semantic significance, or one that places substantive constraints on the range of contents expressible in that format. It demands that the basic parts of a timeline be combined in a way that respects an isomorphism between the spatial ordering of the timeline’s parts, and the temporal ordering of the referents of those parts; this is, in effect, how a timeline works. But this rule considerably restricts what can serve as the referents of those timeline-parts—they must be events, or states with the right sort of temporal profile to feature in such an ordering relation. The result of all this is that the domain of contents that can be represented by timelines is narrower than for language, not because of a difference in whether those formats are systematic at all, but because of a difference in semantic restrictions imposed by the recombinability principles governing their respective systematicity profiles.

An advantage of joining Camp in recognising different degrees of semantic constraint here is that it makes explicit an issue that is sometimes left out of focus in discussions of systematicity: the question whether talk of systematicity is intended to capture something about a medium’s syntax alone, or something more ambitious about the way in which its syntax and its semantics interact. How we resolve this question, of course, matters for what we can rightly expect from a fully systematic medium. Understood as a purely syntactic property, a fully systematic medium will be associated with the capacity to fully recombine its parts without gaps. This reading is silent about what range of domains will be representable in that medium. By contrast, understood as a mixed syntactic-semantic property, a fully (or unconstrainedly) systematic medium will be one whose expressive range isn’t restricted by its governing recombinability principles. (Those principles impose no substantive restrictions on what can be represented by its basic parts, the idea is, and those parts can be gaplessly recombined.) By showing how the in-principle expressive range of a medium is derived from the level and kind of semantic constraint on its operative recombinability principles, Camp clearly elects a property of the more ambitious kind as the one that distinguishes language from other media.

Here is a summary of key points from the discussion so far. Systematicity is a property had by a given medium iff a suitably idealised subject who understands a given range of complex expressions in that medium also understands all the complex expressions that can be built up of their constituent parts in accordance with the recombinability principles governing that medium. Constraint is the restriction on what a medium’s basic parts can represent imposed by the recombinability principles that feature in a specification of that medium’s systematicity property. Most mixed media will have constrained systematicity, which means that the recombinability principles mentioned in a specification of their systematicity property impose substantive restrictions on what can be represented by their basic parts. As a consequence, these media will have an in-principle restricted expressive range, where that restriction is derived from facts about their recombinability principles. Language has unconstrained systematicity, which means that the recombinability principles mentioned in a specification of its systematicity property do not impose substantive restrictions on what can be represented by its basic parts.9 Putting this all together, I take it that demonstrating that a representational system displays unconstrained systematicity will secure the claim that it is language-like in this central respect.

Nothing so far settles it that unconstrained systematicity is the only distinctive marker of language-like systems. Two other natural candidates are the adjacent properties of compositionality and productivity. Could either of these do equally well? The short answer is that once we have Camp’s mixed systems before us it becomes clear that neither of these properties uniquely singles out language-like formats. Take compositionality first. A given system is compositional just in case the meaning of any complex string in that system is determined, along with its syntactic structure, by the meaning of its constituent parts.10 Plausibly, any systematic medium will be compositional; it’s hard to see how a format could be systematic if the meaning of the recombined strings wasn’t determined by the meaning of their recombined parts and syntactic structure. But we’ve already seen plenty of non-linguistic systems that are like this—just think of how the meaning of a Venn diagram is determined by the meaning of its parts. So compositionality cannot be the distinctive language-like marker we’re looking for.

A system is productive, by contrast, just in case its competent users are able to generate and understand infinitely many novel strings from prior understanding of finitely many constituent parts and recombinability principles. It is standard to understand the productivity of a system too as requiring its compositionality; as Frege writes, ‘the possibility of our understanding sentences which we have never heard before rests evidently on this, that we can construct the sense of a sentence out of parts that correspond to words.’ (Frege 1914/1980, p. 79) As with compositionality, it should be clear that productivity can’t be the property we’re looking for. Any mixed system with a recursive grammar will be productive even if it will not thereby be unconstrainedly systematic.11 By recursive operations, for instance, a competent timeline-user will be able to generate and understand indefinitely many novel strings in that system from a finite set of base resources. But we have already seen that unlike language, timelines are not unconstrainedly systematic.

The key to establishing that we think in a language-like format will be to show that our thoughts are unconstrainedly systematic.

3 The master argument

Most people think that human thought is like this, or near enough. Twentieth and twenty-first century analytic philosophy has been dominated by a tendency to think of thought as the hidden face of language, or as bearing many of the same structural properties as the language in which we express it. This way of seeing things is partly methodologically motivated. The properties of language are for the most part much more easily apprehended than those of thought, and we might think that our access to the format in which we think depends in some ineliminable way on the expression of thought in language. Thus Frege famously complained:

I am not in the happy position here of a mineralogist who shows his hearers a mountain crystal. I cannot put a thought in the hands of my readers with the request that they should minutely examine it from all sides. I have to content myself with presenting the reader with a thought, in itself immaterial, dressed in sensible linguistic form (Frege 1956, p. 298).

Or, to give it another well-trodden metaphor, ‘[l]anguage may be a distorting mirror; but it is the only mirror we have’ (Dummett 1993, p. 6). Little wonder the temptation to assimilate some of the structural features of thought to those of its linguistic apparel.

By itself, the idea that we can’t access thoughts except through language would not be a good reason to think that thought is like language in the relevant respects. (We might as well argue that since we can only see cells through microscopes, cells must be importantly like microscopes.) Still, we might think that our status as public language-users does give us a reason to think that our thought is like language in the relevant respects, if only we could find a bridging principle to take us from the recombinability properties of language to the recombinability properties of the thought of language-users.

This is a popular move. Mark Johnson, for instance, identifies the following master argument in the literature for the systematicity of thought:

The argument here is simple: (i) language is systematic; (ii) if language is systematic, then so is thought; hence, (iii) thought is systematic (Johnson 2004, p. 133).

Like many of the arguments to follow, Johnson’s master argument appeals to the property of systematicity simpliciter rather than that of unconstrained systematicity. To draw now on the moral of the last section, if we are really to establish that our thought is distinctively language-like then what we need is a way of getting from the unconstrained systematicity of language to the unconstrained systematicity of the thought of language-users. Discussions in this area haven’t always taken seriously enough the ramifications of accepting the kinds of mixed systems recognised by Camp, and the distinctive property of unconstrained systematicity that follows. Broadly stated, the aim of this paper is to show just how deeply these ramifications go. More narrowly stated, the aim of this paper is to consider whether a version of this master argument can be made to work even with the stronger property of unconstrained systematicity in place. For this, we will need a reason to accept a strengthened analogue of the second premise, call it (ii’); a bridging principle that will allow us to move from the unconstrained systematicity of language (i’) to the unconstrained systematicity of thought (iii’). Assuming such a principle could be found, we would then have good reason to think that we language-users also think in a language-like format.

In what follows I consider three candidates for such a bridging principle. But first, I want to put aside a somewhat flat-footed route to the claim that our thoughts are unconstrainedly systematic, one that doesn’t appeal to our linguistic capacities at all. That is, that even if it’s agreed that we don’t introspectively encounter mental sentences, perhaps the unconstrained systematicity of thought is something that we can ‘just tell’ from the inside. Thus, for instance, J.L. Bermúdez writes that, ‘[t]here is a clear sense in which thoughts seem to be structured entities, made up of elements that can reappear in further thoughts’ (Bermúdez 2003, p. 15).12 We human thinkers, the idea might be, are uniquely well-placed to say what our thoughts are like. And what they are like—or so it seems to us—is that they are so structured that their constituent parts could be recombined with other thought constituents in a semantically unconstrained way to form unlimited further thoughts.

I doubt whether everyone will share Bermúdez’s confidence in our ability to apprehend internal structure to our thoughts. In my own case, I couldn’t say that it seems one way or the other. More importantly, though, there are at least two reasons to be wary about reaching claims of unconstrained systematicity on the basis of first-personal introspective observation. The first is that unconstrained systematicity is a modal property—it is the property had by a representational system just in case it is possible for its semantically meaningful parts to be fully and freely recombined without semantic constraint by an idealised user. Such infinite possibilities needn’t be actualised for a system to count as systematic. Now even if it’s right that we have a special first-personal way of apprehending internal structure to our thoughts, this would surely be at most a way of telling what our thoughts are actually like, not how they could be. What’s more, even if we can know this by observation alone, what the Camp discussion above showed is that not just any kind of systematicity will do to establish that our thought is language-like. For that, our thought must be unconstrainedly systematic. That means that if we are to discover the language-like medium of thought by first-hand apprehension alone, our observational powers must be sufficiently fine-tuned to discriminate between unconstrained systematicity on the one hand, and anything less on the other. And that really does seem to stretch our observational powers too far.

A second reason for caution is that unconstrained systematicity would be a very hard thing to test in this way. To show that a claim to unconstrained systematicity can be violated by the kinds of thinkers we are, we would need to alight on a thought that (in principle) cannot be thought by thinkers like us. But, of course, for any thought we care to think of, its actuality bears witness to the possibility of entertaining it. This should put us on guard against a lurking refrigerator light fallacy—generalisations from the observed to the unobserved, where facts about the observational acts themselves played a part in determining what was found.13 We might be misled into thinking that our powers of representational recombination are unlimited because for every recombination we’ve tried, we’ve succeeded. (Compare: we have some reason to think that the recombinability principles governing baboon thought are semantically limited by domain.14 But a baboon performing the same test would meet with the same success: for every combination she could think of trying, she too would succeed.)

I turn now to the first of three passes at supplying the missing bridging principle.

4 First pass: sentences express thoughts

A first pass at a bridging principle might be that sentences express thoughts, so, roughly speaking, what goes for language must go for thought too. If every sentence expresses a thought, and every sentence in a language can be decomposed and its parts recombined through every possible permutation without semantic constraint with respect to what those parts can represent, then the same must be true of the thoughts they express—one’s representational scope in thought couldn’t fall short of one’s representational scope in language. So if the combinatorial possibilities in language amount to unconstrained systematicity, then so too must the possibilities for recombination in thought.

Something like this is just what Jerry Fodor seems to have in mind in the following argument. First, he offers a version of the master argument identified by Johnson above:

(a) There’s a certain property that linguistic capacities must have in virtue of the fact that natural languages have a combinatorial semantics

(b) Thought has this property too

(c) So thought must have a combinatorial semantics

[...] The property of linguistic capacities I have in mind is one that inheres in the ability to understand and produce sentences. That ability is—as I shall say—systematic [...] (Fodor 1987b, pp. 148–149).

Fodor seems to think that the fact that language has the relevant systematicity property in (a) gives us reason to think that thought has it too, in (b). And indeed, what he goes on to give us is precisely an argument taking us from the systematicity of language in (a) to the systematicity of thought in (b). He writes:

A fast argument is that cognitive capacities must be at least as systematic as linguistic capacities, since the function of language is to express thought. [...] You can’t have it that language expresses thought and that language is systematic unless you also have it that thought is as systematic as language is (Fodor 1987b, p. 151).15

It is because language expresses thought that we get to move from (a) to (b); since language expresses thought, if language is systematic, then so too is thought. Or so the fast argument goes.

Another writer offering a similar argument, but in a different philosophical tradition, is Bermúdez. It is offered (with some upfront interpretative latitude) on Frege’s behalf. Thoughts, for Frege, are expressed by sentences in virtue of being the senses of sentences. Of course, this doesn’t tell us much about the relation between thoughts and sentences until we know what it is for one thing to be the sense of another. But whatever it comes to, on Bermúdez’s reconstruction, it is in virtue of being the sense of a sentence that thoughts get to be causally efficacious in the world; he explains, ‘Frege’s account [...] gives a clear answer to the fundamental metaphysical problem of how we can make epistemic contact with thoughts—how thoughts can be the sort of things that can feature in reasoning and have causal effects within the world’ (Bermúdez 2003, p. 15). By being the sense of a sentence, thoughts have a presence in the world such that we can know about them, grasp them, and make use of them in our reasoning.

A further advantage of understanding the relation between sentences and thoughts in this way, Bermúdez goes on to argue, is that it provides us with direct access to the compositional structure of thoughts, and relatedly to the recombinability properties of thought as a system; we need only look to the corresponding properties of the sentences embodying those thoughts. If a thought gets to have presence in the world in virtue of being the sense of a sentence with a certain logical structure, then we must take it that that structure ‘identifies the constituents of the thought, that is, the sense of which it is composed’ (Bermúdez 2003, p. 16). More broadly, treating thoughts as the senses of sentences in this way entitles us to extrapolate from global structural features of language to those of thought:

Still on the semantic dimension of thought, there is a further and more overarching structural isomorphism in play, not simply between a thought and a sentence expressing it, but between thought as a whole and language as a whole. [...] Not only are they compositional (made up of recombinable constituents that can feature in a range of further thoughts in the way I have already considered) but the range of available thoughts is not in principle limited (Bermúdez 2003, p. 17).

Because thoughts are the senses of sentences their analysis can go via linguistic analysis; their internal structure can be read off from the logical structure of their corresponding sentences, and the unlimited recombinability of their constituent parts can be read off from the unlimited recombinability of the constituent parts of those sentences.

Both versions of this first bridging principle are too quick, and for the same reason. To have any chance of working—that is, to have any chance of simply reading off the combinatorial possibilities in thought from those of language on the strength of the expressive relation between them—we would need that relation to fund a mapping of the right kind between possible sentences and possible thoughts.

The strongest sentence-thought correspondence we might try for would be bijective: every possible sentence expresses exactly one thought, and every possible thought is expressed by exactly one sentence. This is surely strong enough to do the trick. Under such a mapping, if language is unconstrainedly systematic then so is thought—the two sets, under this suggestion, have exactly matching cardinalities, so there is no room for their systematicity profiles to come apart. That bijection is too strong a mapping to hope for, however, is easily demonstrated with sentences containing indexical expressions. The sentence ‘I’m hungry’, for example, expresses different thoughts in the mouths of different speakers. So it is false that every sentence expresses exactly one thought.16

Perhaps the demand that every sentence express exactly one thought is anyway more stringent than is needed to substantiate this first bridging principle. Presumably it doesn’t matter if there are more possible thoughts than there are possible sentences, if what we’re after is a guarantee that thought is at least as systematic as language. An alternative suggestion might be that we need only add the premise that every sentence expresses at least one thought to get this principle to work.

This weaker alternative certainly sidesteps the problem of indexicality, but comes with three problems of its own. First, some will want to resist even the downgraded claim that every sentence expresses at least one thought. For instance, the string

Colourless green ideas sleep furiously.

is a perfectly well-formed sentence, but many will want to say that it does not express a thought.

The answer to the question whether this sentence expresses a thought is far from obvious, so it would be better, if we can help it, not to leave the argument here. If the idea is that whether I express a thought with a string amounts to—or perhaps dovetails with—the question whether I can entertain its truth conditions, then clearly there’s a sense in which I fail this test here. But is this the relevant test? We’ve already noted different grades of understanding and registered the relevant grade for our discussion as w-understanding, on which I understand a string so long as I understand the semantic significance of the parts making it up in a compositional system. And, of course, I perfectly well count as w-understanding the above string; as Camp argues, the truth-conditions and truth-values of such strings are generally speaking ‘all too obvious’ (p. 226), even if our understanding of what they come to is importantly thinner than in the case of intra-categorial strings. What seems right to say is that I cannot visualise, otherwise imagine, or conceive of the truth conditions associated with this sentence obtaining. But neither can I when it comes to the truth conditions of sentences stating necessary mathematical falsehoods, though I obviously can and sometimes do think such falsehoods, and when I do I sometimes express them in language. So this more demanding notion of understanding can’t be the one we’re after in deciding which sentences do and which don’t express thoughts.

There are more general reasons for resisting this move too. Camp has compellingly argued that cross-categorial strings like this have substantive (i.e. not merely formal) inferential roles that we routinely and easily exploit in metaphor-involving communication. From the above string, for example, one could conclude that these colourless green ideas would not be effective exam-invigilators at present, or that their mode of sleeping is not peaceful.17 In a similar vein, Ofra Magidor has clearly and persuasively set out a battery of considerations that seem to tell in favour of the meaningfulness of such strings—that they unambiguously translate, can have synonyms, can be paraphrased, and that they behave like meaningful strings when embedded in propositional attitude ascriptions. At best, leaving things here would be to leave the argument on unsettled grounds.18

A second problem is that, even if we put these anomalous strings aside and grant that every sentence expresses at least one thought, we would still need to show that no two sentences express the same thought. It might not matter if there are more possible thoughts than possible sentences, but it would surely be enough to derail this strategy if it turned out that there were more possible sentences than possible thoughts. But consider the following pair of sentences:

Mary kicked the dog.

The dog was kicked by Mary.

These are distinct sentences whose difference does not run so deep as a difference in the thought expressed—one couldn’t rationally assent to one but not the other. So it’s false that no two sentences express the same thought.

There is also a third, and somewhat more interesting, problem with this way of going. Once we’ve given up on bijection, there is no longer a straightforward way of reading off the unconstrained systematicity of thought from that of language. For sure, if we (i) succeed in showing that there are at least as many possible thoughts as there are possible sentences, and (ii) establish that there are infinitely many possible sentences (because language is systematic and its principles of recombination include recursively applicable rules), then we will have earned the conclusion that there are infinitely many possible thoughts. But given the Camp discussion above, we now know that this is not to show very much. Once we home in on the property of unconstrained systematicity rather than systematicity tout court as the marker of language-like formats, it’s no longer enough to show that there are infinitely many possible recombinations in thought to get us to the conclusion that we think in a language-like format. We must also show that the systematicity profiles of thought and language have the very same shape: that they are governed by identically semantically (un)constraining principles of recombination. And for that, it will not be enough—as this first bridging principle offers—simply to say that sentences express thoughts, unless we can get that expressive relation to secure a one-one mapping between them.
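The mapping conditions at issue across this section can be collected into a single formal gloss. The set-theoretic framing below is my own summary, not something the proponent of the bridging principle is committed to:

```latex
% Let S be the set of possible sentences, T the set of possible thoughts,
% and E \subseteq S \times T the relation `sentence s expresses thought t'.
\begin{align*}
&\text{Totality:}      && \forall s \in S \; \exists t \in T \; E(s,t)
   && \text{(challenged by `colourless green ideas\ldots')} \\
&\text{Functionality:} && E(s,t) \wedge E(s,t') \rightarrow t = t'
   && \text{(fails for indexicals such as `I'm hungry')} \\
&\text{Injectivity:}   && E(s,t) \wedge E(s',t) \rightarrow s = s'
   && \text{(fails for active/passive pairs)}
\end{align*}
```

Totality plus injectivity would together guarantee at least as many possible thoughts as possible sentences, and hence, given recursion in language, infinitely many possible thoughts. But even if both conditions held, that secures only a claim about cardinality, not the further claim that thought’s principles of recombination are semantically unconstrained in the way language’s are.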

This first bridging principle, then, doesn’t yet give us a way of satisfactorily completing the master argument. There is something to say, however, about why it might have seemed like a good idea in the first place. The lure of this ‘fast argument’ rests on an equivocation between two ways of individuating sentences: as well-formed word-strings on the one hand, and as dedicated one-one expressions of thought on the other. Insofar as we’re thinking of sentences in the first way, we can help ourselves to the unconstrained systematicity of language—given the notion of w-understanding drawn in Sect. 2, it was effectively this notion of a sentence that featured in the case for the systematicity of language in Sect. 2. But this notion of a sentence does not guarantee a one-one relation between sentences and underlying thoughts. So as long as we’re thinking of sentences in this first way we can’t use a bridging principle of this kind to take us from the unconstrained systematicity of language to the unconstrained systematicity of thought. But insofar as we’re thinking of them in the second way—as linguistic entities that line up one-one with thoughts—then even if these are sentences of the right kind to play the required role in this bridging principle, these aren’t the sorts of sentences that featured in the original case for the unconstrained systematicity of language. Either way, we cannot get from the representational scope of language to the representational scope of thought as quickly as this first pass suggests. We will have to try something else.

5 Second pass: thought is like language

A second pass says that the systematicity profile of language is evidence for that of thought, because thought is like language in the relevant respects. And the relevant similarity between them is enough to get us to the claim that they share the property of unconstrained systematicity.

Similar in what way? An obviously bad answer to this question, given our reasons for asking it, would be that thought is like language with respect to its systematicity profile (so, the argument would go, we should expect them to share systematicity profiles). We would get to our desired conclusion all right, but only on the back of a glaringly tight explanatory circle. Of course, this isn’t a serious suggestion. But what it brings out is that whatever dimension of similarity between thought and language we recruit to the service of this second bridging principle, it must be close enough to the property of unconstrained systematicity that we can derive sameness of one from the sameness of the other, but it must not be too close.

An obvious target here is the Language of Thought Hypothesis (LOTH) championed by Fodor, on which thought is construed as a system of interacting representational mental symbols realised in a structurally isomorphic system of neural vehicles.19 Importantly, operations over these complexes of mental symbols, for Fodor, are semantically blind—thinking involves operations on these representations that are causally sensitive to their syntactic or structural features alone. If this is really what thinking is like, then this is a respect in which thought is like language. What it takes to put together a new (w-understandable) sentence is likewise a semantically blind operation that need only be sensitive to the syntactic features of the words it uses for the purpose—indeed, this was an important part of the case for the systematicity of language in Sect. 2. So if thought and language really share this point of similarity, then these seem like promising grounds for the claim that thought must be similar to language in its unconstrained systematicity too.

The next question to ask is: why accept Fodor’s view of thinking in the first place, on which this argument rests? And here’s where the trouble begins. The classic argument for Fodor’s view is that LOTH is an empirical hypothesis that best explains the productivity and systematicity of thought.20 (It’s not clear how far apart these arguments are supposed to be for Fodor: ‘Productivity and systematicity run together; if you postulate mechanisms adequate to account for one, then [...] you get the other automatically’ (Fodor 1987b, p. 150).) The fact that our thought is unconstrainedly systematic, that thinkers like us are able to decompose our thoughts and recompose their constituents into unlimited novel recombinations without semantic constraint, would be utterly mysterious if our thoughts lacked the kind of combinatorial syntax posited by the LOTH. LOTH is a hypothesis that best explains the data—data that include (and here’s the trouble) the unconstrained systematicity of thought.

Unconstrained systematicity really does seem to be a property of language, and one that imposes the kinds of structural requirements characteristic of a representational system with the compositional semantics and combinatorial syntax that LOTH theorists take to be central to language-like systems. So if it is right that thought is like language in this respect—i.e. with respect to its unconstrained systematicity—then there is nothing obviously problematic about arguing from there to the claim that thinking happens in a language-like format. So long, that is, as its starting premise can be given independent support. But, of course, what support we can give that premise is precisely the question we have been pursuing. So even if this argument is in good standing on its own terms, it cannot serve as a starting point for the bridging principle we’re after, on pain of only a slightly bigger circle than the one we began with: the systematicity profile of thought is evidenced by its similarity to language, and its similarity to language is evidenced by supposed facts about its systematicity.

Fodor’s attempt is surely not the only way of making good on this second bridging principle, but the example brings out a broader difficulty that is, I think, baked into this second approach. The success of this approach depends on identifying a dimension of similarity between thought and language that is (a) sufficiently tightly related to the relevant systematicity property to imply a shared systematicity profile, but (b) not so tightly related as to collapse the resulting argument into circularity. The task of finding a similarity feature that falls into this slim window would be hard even if the relevant systematicity property were that of mere systematicity rather than unconstrained systematicity. (Notice that nothing in the Fodorian attempt outlined in this section speaks directly to the stronger property, and yet it still falls into pitfall (b).) But recognition of Camp’s mixed representational formats makes it all the more improbable that we will find any dimension of similarity that fits the bill. That’s because once we dismantle the traditional dichotomy between linguistic and imagistic representational formats, then we also lose the right to appeal to anything like a loose clustering of ‘language-like’ properties (e.g. compositionality, productivity, the use of iconic symbols, etc.) to do the work of this second principle. The only feature that will get us the desired conclusion that we think in a language-like format is the highly specific and maximally strong feature of semantically unconstrained systematicity. And it’s not clear that there is any distinct property that would be evidence for unconstrained systematicity, that isn’t at the same time so closely related to it as to collapse the argument based on this bridging principle into circularity.

6 Third pass: linguistic abilities require full systematicity of thought

The third pass is that a language-user could not become a language-user if her thought was not as unconstrainedly systematic as language; the unconstrained systematicity of thought is required by what it takes to learn a language. The idea might be something like this. The way we learn a language is not like learning the contents of a phrase book. As Fodor and Zenon Pylyshyn put it:

[Y]ou can learn any part of a phrase book without learning the rest. [...] Perhaps it’s self-evident that the phrase book story must be wrong about language acquisition, because a speaker’s knowledge of his native language is never like that. You don’t, for example, find native speakers who know how to say in English that John loves the girl but don’t know how to say in English that the girl loves John (Fodor and Pylyshyn 1988, p. 37).

Language-learning involves gaining non-punctate, interconnected cognitive abilities to produce and parse whole networks of sentences.
In developing this capacity, however, we don’t just learn to put together words into sentences; we develop a non-punctate ability to put together strings adequate for the expression of thoughts. And it is here that we find our bridging principle. For if the non-punctate abilities we gain in learning a language partly amount to a non-punctate capacity to express one’s thoughts, then a natural explanation becomes available for why language-learning is the way that it is. The reason we learn to put together our sentences in the way that we do is that we put together our underlying thoughts in the way that we do. The recombinability principles governing the languages we learn reflect the recombinability principles governing the thoughts that we learn to express in them. Fodor and Pylyshyn again:

[J]ust as you don’t find people who can understand the sentence ‘John loves the girl’ but not the sentence ‘the girl loves John’, so too you don’t find people who can think the thought that John loves the girl but can’t think the thought that the girl loves John. Indeed, [...] the systematicity of thought follows from the systematicity of language if you assume—as most psychologists do—that understanding a sentence involves entertaining the thought it expresses (p. 39).

By an inference to the best explanation, the non-punctate nature of language-learning reveals the systematicity profile of thought as lining up with the systematicity profile of the language in which we learn to express it. Or at least, so the argument might go.

A few things really do seem guaranteed by what it takes to learn a language. One is that, given that language-learning is not a punctate process, a learner will have to master the recombinability principles governing the language, as well as certain grammatical categories featuring in those principles, and some vocabulary. That seems right. Another is that if she is to use language as a way of communicating her thoughts to others, then she must learn how to put that vocabulary together in line with those recombinability principles in such a way that the resulting sentences are well-suited to express her thoughts.

The problem with this third bridging principle is that language is not the only systematic public representational system we learn to use to express our thoughts. I can just as well—and in some cases much more easily—express a thought using a Venn diagram, or a seating chart, or a hastily drawn map. So unless there is a special case to be made for language, then it looks like this third pass threatens to overgenerate patterns of recombination to which our thought is ‘revealed’ to conform. In learning to use maps, for instance, I will likewise have to learn some of the conventional categories of representational symbols used in maps—a cross for churches, ‘atm’ letters for cashpoints, blue shading for water, contour lines for elevation, and so on. I will also need to master a number of principles that govern how these symbols are properly recombinable into novel maps. For instance, the principle that the relative distances between semantically meaningful components of the map must preserve the ratios between the actual distances of the represented area, or the principle that for some of the symbols (dashed lines, blue shading) the shapes and orientations of the symbols must be isomorphic to corresponding features of the represented topography, but that for others (crosses, ‘atm’ letters) those features of the representing symbols are insignificant. An analogue of the above argument would give us the result that if one is able to become cartographically competent, then we should infer that this is because the recombinability principles governing maps mirror those of the thoughts of the map-using thinker. And likewise for any other public system of representation that we might learn to use to express our thoughts.

One option at this point would be to embrace a kind of radical pluralism about what formats we think in. In answer to our starting question on this view: yes, we do think in a language-like format. But we also think in a map-like format, a Venn-diagram-like format, a staff-notation-like format, and any number of other formats depending on what public systems of representation we have mastered. And—contra the spirit of the arguments of Sect. 4—there is no guarantee that our linguistic utterances mirror, or even approximate, the structure of the thoughts they express. I like this view, but presume that such a ragbag approach is not what fans of the master argument from Sect. 3 have in mind. (If they do, the widespread emphasis on the language-like structure of thought to the exclusion of talk of other formats cries out for explanation.)21 By contrast, if we are gunning for a stronger claim that we are exclusively, or even just primarily linguistic thinkers, then we will need to say something about what is so special about the link between thought and language, over and above the link between thought and these other public representational media.22

Another response might be to say that when faced with competing principles of recombinability as revealed by the different public systems of representation that we are capable of learning, we should default to the strongest—in this case, the unconstrained systematicity that we get from language. We could account for our capacity to use all of these expressive systems in one blow, as it were, by positing an unconstrainedly systematic system of thought.

The problem with this suggestion is that it seems to show too much. For notice that the strongest principle of recombinability that we master in learning to use a language does not bottom out with the semantically unconstrained ways in which the words in a language are put together into novel sentences. A learner will also need to master the ways in which phonemes are combined to produce spoken words, or letters combined to spell out written words (or strokes on a page combined to make up letters), the way tones are combined with complexes of phonemes to make up meaningful words in a tonal language, and so on. Are we to take it to follow that if she is capable of learning to do this, it must be because the semantically meaningful constituents of the language-learner’s thoughts similarly further decompose into phonetic, letter-like, or tonal components? That would surely be a reductio of this approach. To avoid it we would need a non-ad hoc reason to default to unconstrained systematicity, rather than to anything stronger, and I don’t know what that reason would be.

Another thing we could try is to appeal to the vast difference in expressive range of these systems to block the generalisation from one case to the other. Language is the overwhelmingly dominant mode in which we express our thoughts, maps a vanishingly marginal one; while I can cartographically express only a very limited number of thoughts, I can use language to express near enough any thought I like. The question is, does this difference give us reason to say that learning to use language reveals thought to be governed by the same recombinability principles, whereas learning to use maps does not?

I see no reason why it would. The idea behind the Fodor-Pylyshyn argument above was that what it takes to learn to use a representational system with a certain recombinability profile (in the case they consider, language) reveals the learner’s underlying thoughts to display a matching recombinability profile, because learning to use that system involves developing a non-punctuate capacity to put together novel recombinations in a way that serves to express thoughts. There is no reason why considerations of expressive range should prevent this principle from applying to map-learning as much as to language-learning. On the face of it, the argument makes no appeal to the expressive range of language, so on the face of it, there’s no reason to think that it wouldn’t analogously apply to a representational system with a narrower range.

A similar but different line of response calls on the idea that although we can learn to use maps and the rest, we don’t use those formats to express our thoughts in anything like the same way we use language. Language, we might think, is the natural expressive medium for creatures like us, such that we have reason to privilege its distinctive variety of systematicity over and above the systematicity of maps and other public representational systems in making inferences about the representational format of thought in a thinker who is able to use them all.
Fig. 1 [Hand-drawn map of the area around Belgrave Music Hall; image not reproduced]
I don’t see this reply getting us very far. After all, why couldn’t I produce a map like the one in Fig. 1 to express a thought about where Belgrave Music Hall is located, just as fluently as by the use of a sentence? Indeed, when it comes to thoughts concerning spatial relations, a map seems much better suited than language to express my thought. To approach the same level of detail expressed using the map in Fig. 1, for instance, one would need to utter something like the following, rather cumbersome—

‘Belgrave Music Hall is located in a small square just off the corner of Merrion St. and North St., the latter of which is one down from Wade Lane, and two down from Albion St.’

—and even then, we lose detail about the respective orientations of the roads, their relative dimensions, the position of Belgrave Music Hall within the square, etc., contained in the original thought. Of course, it’s not entirely transparent what talk of ‘naturalness’ here amounts to, but it would be surprising if we counted the more laborious and ineffective mode of expression as the more natural of the two. So it is by no means a given that language is always the most natural way of expressing a thought.

What’s more, there are perfectly non-fanciful ways of accounting for the initial intuition that language is a ‘natural’ expressive medium. Spoken language has a lot going for it in virtue of which we use it far more often than we use maps to express thoughts: speed, cheapness of resources, ease of production, expressive power, etc. Such features invite the expectation that we will be much better practiced at expressing thoughts in language than in maps, and this in turn invites the expectation that a thought’s expression in language will have a sense of psychological immediacy to it that’s missing from its cartographical expression. This should not mislead us into losing sight of the fact that both are instruments for the expression of thought, even if one is more familiar to us than the other.

Another nearby move is to press on the broadly Chomskian claim that humans have a universal and innate biological capacity for language-learning. What it takes to learn a language involves mastering a set of fantastically complex grammatical rules and rules governing the assignment of meaning to words and sentences—rules so complex that their explicit articulation is beyond reach for most mature speakers. And yet for all that, linguistic proficiency is typically mastered by around the age of eight. Led by Chomsky, the dominant view among contemporary linguists, psychologists and philosophers of language is that such a feat would not be possible unless we were born with innate knowledge of a universal grammar: a genetically endowed store of linguistic information that forms the basis of the language-learning process. Perhaps, then, this is the relevant way in which language is our singularly natural expressive medium.23

There are, I think, at least two reasons not to be satisfied with this way of solving our overgeneration problem. The first is that it’s not straightforward how it solves, or even speaks to, that problem. That problem, in a nutshell, was that learning to use any systematic public representational medium will involve gaining non-punctate abilities to put together representational complexes in that medium in a way that serves to express underlying thoughts. If these features of the language-learning process are supposed to be revelatory of a linguistic format to our thoughts as Fodor and Pylyshyn suggest, then unless we can put our finger on a relevant difference-maker, we will be compelled by consistency to say the same about our learning-capacities in other formats. Facts about the biological basis for the learning-process in the case of language did not play any role in the argument, and neither is it obvious why it should make a difference whether some of this knowledge is possessed by the subject in embryonic form at birth and is merely ‘triggered’ by the relevant format-learning process, or whether it is all gained at the later stage. On either story, the thinker gains the relevant non-punctate ability to express thoughts in the format, which is what was supposed to do the work in the original argument.

Maybe the idea isn’t that this is explicitly part of the argument, but that if we are actively seeking a difference-maker then we should treat it as highly suggestive that we are born with an inbuilt ‘blueprint’ for language. This brings us to the second reason to be wary of appeals to the innateness of our language-learning capacities as a way of marking out the specialness of language: language is not the only complex representational system that children master at a strikingly early age. There is increasing research into the developmental trajectory of map-use, for example, that suggests that although full proficiency with specific graphic devices used in maps continues to develop into adulthood (compared by cognitive scientists Barbara Landau and Laura Lakusta to the continuing training required for proficiency with writing in particular scripts24), understanding of the symbolic function of maps emerges abruptly in typically developing children as early as 2½–3,25 and even without prior training, pre-school children are spontaneously able to exploit geometric information in maps to find corresponding locations in their environment.26 Moreover, there is evidence that the capacity to understand maps is universal across human cultures, including cultures that provide little exposure to graphic symbols.27 This clearly supports the hypothesis that the capacity to understand maps does not require training or previous exposure to similar formats. There are plausible hallmarks here of a capacity that is innate and universal.

There is, of course, much more work to be done in filling out the full ontogenetic picture of map-use. But the provisional conclusion seems warranted that we have parallel reasons to posit some form of cartographic ‘blueprint’, or innate capacity, as we do in the case of language. Moreover, given that this research into early map-use is indicative of spontaneously developing capacities for the manipulation and understanding of graphic symbolic representation, this raises as a serious possibility the extension of such innate and universal capacities to other graphic symbolic systems of representation too.28 Things don’t look good for the appeal to the innateness of language-learning capacities as the difference-maker to save this bridging principle.

There is a broader point here that surfaces across these various attempted responses to the overgeneration problem with this third bridging principle: reaching the unconstrained systematicity of thought from considerations about what it takes to learn a language requires there to be some aspect of the language-learning process that is both unique to language-learning, and ostensibly relevant to questions about the specific systematicity profile of thought. None of the responses considered so far meet both conditions. As things stand, of course, there’s no reason to think there isn’t such an aspect to be found; but as things stand, neither is there yet any reason to think that there is.

7 Conclusion

Over the last three sections I have reviewed and rejected three dominant suggestions in the literature about how to complete the master argument with which we started. I haven’t shown that there isn’t a better patch-up to be found. Given the intuitive grip of that master argument, however, I take this already to be an important result. It’s easy to feel that the claimed equivalence of systematicity profiles between thought and language is not only generally assumed to be correct, but assumed to be obviously so, because argumentatively overdetermined. The bridging principles I have considered here are independent of each other, and the arguments against them likewise. If those arguments are in good standing, then rather than an embarrassment of argumentative riches, what we have is an embarrassment of individually unsatisfying attempts to bridge an argumentative gap.

All of this amounts to an argument against an argument and not against its conclusion—that conclusion, that we think in a language-like format, might be reached in other ways. Perhaps, for instance, it will be argued that unconstrained systematicity is a property of thought that can be derived from the inferential patterns that characteristically shape our mental lives, given the sorts of reasoners we are.29 Or perhaps there will be arguments from evolutionary biology to the effect that the sort of unlimited representational power possessed by unconstrainedly systematic systems of thought has such enormous adaptive benefits for animals like us, that it outweighs the costs of not settling for a more representationally modest system.30

Proper discussion of these alternative approaches would take us beyond the scope of this paper, but let me just say that there seem to be important challenges for both to meet. The first seems to draw on a somewhat idealised picture of human reasoning, one increasingly undermined by work coming out of the cognitive sciences.31 By now it is standard to group our reasoning capacities into two kinds: system one capacities, loosely characterised as automatic, quick and subconscious, and system two capacities, characteristically effortful, conscious and slow. In place of the clean and well-executed inferences typically assumed by arguments taking up this first approach, there is mounting evidence that our system one reasoning processes are pervasively guided by rough-and-ready heuristics: low-level processes involving coarse-grained associations and mental habits that plausibly call for a very different-looking underlying representational format than would be required by a simple inferential model of human reasoning.32 It is eminently plausible that a proper understanding of the ways in which we move about in thought will be revelatory of the representational medium in which those movements take place. The difficulty facing this first approach is that, given what those movements are now known to be like, it is not at all obvious that the systematicity profile of the underlying representational medium will be revealed to be as fine-grained and unconstrained as that of language, at least for those parts of our cognitive lives governed by system one rules.

A different challenge faces the second approach: it must be made to respect the fact that we are animals like any other, occupying a particular ecological niche. After all, there is a very natural way of explaining the semantic limitations imposed by the systematicity characteristic of baboon thought: given its role with respect to its natural environment, there are some combinations in thought that a baboon will simply never have call to represent (e.g. this banana is a dominant male).33 The baboon is an animal of a certain kind occupying a certain ecological niche, and these sorts of limitations are plausibly explained by the fact that its system of thought has evolved in ways that best and most cheaply equip it to navigate that niche successfully. But so are we. So, all things being equal, we should likewise expect there to be similar limitations on the adaptively optimal representational range in the thought of human animals, limitations that might well be hidden from us, given refrigerator light considerations. For an argument of this kind to work, we need a reason to think that all things are not equal in this respect.

Besides, there is a much bolder view brought within reach by the discussion of this paper that now deserves consideration. Once we have Camp's mixed systems in our sights, we lose a key motivation for linking the format of thought to that of language. If we have only two models of what representation can be like, language and pictures, then it is natural to insist that there are features of thought that make it much more like language than the only alternative. But once we diversify our recognised models of representational formats in the way Camp has argued, we lose our incentive to hitch the format of thought to any one of these public systems of representation. That's because once we proliferate our models of representational formats this far, we make room for Bishop Butler's insight that everything is what it is, and not another thing.34 A live option now is that we think in a sui generis format, so that in answer to our starting question, we should say that we think in a thought-like format, one not especially like any other representational system. For fans of this bold view, progress on questions about what kinds of thinkers we are will be made by a better understanding of the unique glitches and contours of thought itself, rather than by linguistic analysis.

There is no conceptually necessary alignment between the systematicity profiles of language and of the thought of language-users. It is perfectly conceivable that a creature’s representational scope in language could outstrip her scope in thought. In that case, as Johnson puts it, ‘language as a means for expressing thought would then be like a tool that had more capabilities than its owner could ever make use of’ (Johnson 2004, p. 133). In other domains we are familiar with the idea that extra-mental representational systems of our own creation have powers that far exceed our own—computational powers, for instance, or the capacity for information-storage. There’s a real and surprising challenge in saying why we shouldn’t think of language in the same way.


Notes

  1.

    See Camp (2009a, p. 110), Johnson (2015, p. 3) and references contained therein for examples of this functional approach to questions of mental representational format.

  2.

    Formulations of systematicity or the Generality Constraint are sometimes further qualified by a categorial restriction on the relevant range of recombined strings that must be understood if a medium is to count as systematic (see, e.g. Strawson 1959, 99n; Evans 1982, 101 n.17; Peacocke 1992, p. 42). I follow Camp (Camp 2004, 2009b) in eschewing such restrictions, and discuss surrounding issues on pp. 13–14. However, adherents to this categorial restriction are invited to superimpose it here and throughout. Since the restriction will apply equally to the systematicity profiles mentioned on both ends of the key bridging principle, it will not substantively affect the arguments of the paper.

  3.

    Peacocke (1992, p. 43).

  4.

    This might overstate things. Johnson (2004) argues that there are significant difficulties in identifying the relevant syntactic categories that are supposed to feature in claims about the systematicity of language. The challenges he raises are serious, and must be faced by proponents of the master argument. Another challenge to the systematicity of language comes from so-called ‘Travis cases’: indexical-free sentences whose truth conditions seemingly depend on facts about the occasion of use. Take two utterances of the sentence ‘the ball is round’: the first in answer to the question ‘what shape does a squash ball assume on rebound?’ (on this occasion the sentence is false; on rebound the ball contorts into an ovoid), the second in answer to a question from a sporting naif, ‘what shape is the ball used by squash players?’ (on this occasion the sentence is true) (Travis 1996, p. 454). Travis uses such cases to argue that thought can’t be systematic given that there is no single set of application conditions for a given concept (Travis 1993), and we could raise a similar challenge at the level of language. Either of these arguments could block the master argument at the earliest possible point by providing reason to reject its first premise. In what follows I put these arguments aside and grant the (unconstrained) systematicity of language in order to explore problems later in the argument.

  5.

    Camp (2007) and Camp (2009a); see also Braddon-Mitchell and Jackson (2006) and Rescorla (2009) for similar arguments.

  6.

    See Johnson (2015) for a worked case study of such a hybrid system, manguage, that includes features standardly associated with both maps and languages by design.

  7.

    Cf. Camp (2009a, p. 120).

  8.

    Likewise, she argues, for linguistic combinatorial principles other than predicative concatenation; Camp (2009a, n.16).

  9.

    This position is slightly stronger than Camp’s, because Camp argues that the discrete nature of linguistic representation precludes its parts from representing continuous values (Camp 2007, p. 172; Camp 2009a, n.17). I am persuaded, in part by considerations raised in Johnson (2015), §4, that this view collapses the distinction between what is representable in a system and the features of the system in which it is represented; there is no in-principle reason why ranges of continuous values can’t be represented in a discrete sentential system (‘The range, call it R, between [ostensively defined terms] red_n and red_n+1’). For this reason I eschew this caveat of Camp’s. Cf. Peacocke (1986).

  10.

    This basic characterisation of compositionality will do for our purposes, but see Szabó (2012) for a careful discussion of important issues to do with its formulation.

  11.

    Cf. Blumson (2012) for a worked-through example of this in the case of maps.

  12.

    Dummett gestures at a similar claim, that we cannot help but apprehend the internal structure of our thoughts: ‘no one could have the thought, ‘This rose smells sweet’, [...] without apprehending its complexity, that is, in this example, without conceiving of himself as thinking about the rose, and as thinking of it something that can be true of other things and false of yet others’ (Dummett 1993, p. 135).

  13.

    The refrigerator light fallacy is a particular instance of the well-known observer selection effect; for a book-length discussion of this effect in other areas of philosophy and science see Bostrom (2002).

  14.

    Camp (2009a).

  15.

    A comparable argument comes from Dummett’s Frege: ‘The discernment of constituent senses as parts of a thought is parasitic upon the apprehension of the structure of the sentence expressing it. Frege claimed that the structure of a thought must be reflected in the structure of a sentence expressing it, and indeed that seems essential to the notion of expressing a thought, rather than merely encoding it’ (Dummett 1993, p. 7).

  16.

    Indeed, we might think that ‘Travis cases’ (see n.4) show that the context-sensitivity of language is even more pervasive than this, and so that a one-one correspondence is even less likely.

  17.

    Camp (2004), §III; these examples are mine, not Camp’s, but for similar examples see p. 221.

  18.

    Magidor (2009) and Magidor (2013), chap. 3 (Magidor’s view is strictly about the meaningfulness of these sentences, but I take it that her arguments could be extended to the claim that they express thoughts; see Camp (2016), p. 615 for a similar suggestion). In n. 2 I invited fans of categorial restrictions on formulations of systematicity to graft them onto the discussion throughout. Such readers will presumably reject the claim that the indented string should be counted as a sentence at all—or at least, that it is a sentence in the sense relevant to claims about the systematicity of language. So for these readers too, the argumentative move that intercategorial strings do not express thoughts is unsatisfactory; there’s nothing in this move that precludes the right sort of correspondence between the sentences in the range relevant to claims about systematicity and thoughts.

  19.

    Fodor (1975, 1987b, 2008) and Fodor and Pylyshyn (1988).

  20.

    See Aydede (2015), Blumson (2012) and Camp (2007) for representative overviews of this argumentative strategy; and see Fodor and Pylyshyn (1988) and Fodor (1987a, b, 2008).

  21.

    Indeed it seems directly to violate Bermúdez’s expressibility principle: ‘Any thinkable thought can in principle be linguistically expressed without residue or remainder’ (Bermúdez 2016, p. 2), or Fodor’s claim that ‘the content of a sentence is, or is the same as, the content of the corresponding thought’ (Fodor 2001, p. 12). Cf. Rescorla (2009, p. 378) for a pluralist view of this kind.

  22.

    It is possible that we could achieve full expressive generality with this patchwork, but not for the orderly reason that our thoughts take place in a uniformly semantically unconstrained systematic medium. This means that even if, per impossibile, we could circumvent refrigerator light worries to establish expressive generality for thinkers like us, we couldn’t work backwards from there to the uniform language-like format of thought. Merely establishing expressive generality for thinkers like us isn’t by itself a point in favour of defenders of the uniform language-like format of thought.

  23.

    See Chomsky (1965, 1988) and Chomsky (2007) for representative formulations of his central arguments for, and statements of, the universal grammar hypothesis.

  24.

    Landau and Lukusta (2009, p. 6).

  25.

    DeLoache (1995), Liben and Myers (2007) and Landau and Lukusta (2009).

  26.

    Landau (1986), Huttenlocher et al. (1999), Shusterman et al. (2008) and Huttenlocher et al. (2008).

  27.

    Dehaene et al. (2008).

  28.

    See, e.g., Shusterman et al. (2008, p. 8).

  29.

    See, e.g., Bermúdez (2003), Campbell (1986), Crane (1992), Davies (1998) and Devitt (2006) for examples of this sort of argument.

  30.

    See, e.g., Chomsky (2007, p. 17).

  31.

    See, e.g., Tversky and Kahneman (1974), Baron (2000), Slovic et al. (2002), Teigen (2004), Koehler and Harvey (2004), Hardman (2009), Kahneman (2011) and Fiedler and Sydow (2015).

  32.

    See Beck (2012, 2014) for pioneering work showing that an empirically adequate account of some of our reasoning processes involving representations of analogue magnitudes implies that some of our thoughts don’t meet Evans’ Generality Constraint; and see Rescorla (2009) for a proof of concept argument that rational cognitive processes can occur in non-logical representational media.

  33.

    This example is from Camp (2009b, p. 297); see also Carruthers (2004, p. 19) and Johnson (2015, p. 17) for similar points.

  34.

    Butler (1827).



Thanks very much to Helen Steward, Daniel Morgan, Daniel Elstein, Josephine Salverda, Marie Guillot and Ed Nettel for comments and discussion on earlier drafts of this paper, and to anonymous referees for this journal.


References

  1. Aydede, M. (2015). The language of thought hypothesis. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Accessed 1 Jan 2017.
  2. Baron, J. (2000). Thinking and deciding (3rd ed.). Cambridge: Cambridge University Press.
  3. Beck, J. (2012). The generality constraint and the structure of thought. Mind, 121(483), 563–600.
  4. Beck, J. (2014). Analogue magnitudes, the generality constraint, and nonconceptual thought. Mind, 123(492), 1155–1165.
  5. Bermúdez, J. L. (2003). Thinking without words. Oxford: Oxford University Press.
  6. Bermúdez, J. L. (2016). Understanding ‘I’. Oxford: Oxford University Press.
  7. Blumson, B. (2012). Mental maps. Philosophy and Phenomenological Research, 85(2), 413–434.
  8. Bostrom, N. (2002). Anthropic bias: Observer selection effects in science and philosophy. New York: Routledge.
  9. Braddon-Mitchell, D., & Jackson, F. (2006). Philosophy of mind and cognition: An introduction. Oxford: Wiley.
  10. Butler, J. (1827). Fifteen sermons preached at the Rolls Chapel (L. Dagg, Trans.). Cambridge: Hilliard and Brown; Boston: Hilliard, Gray, Little, and Wilkins. (Reproduced with permission of the Bishop Payne Library, Virginia Theological Seminary, 2005).
  11. Camp, E. (2004). The generality constraint and categorial restrictions. Philosophical Quarterly, 54(215), 209–231.
  12. Camp, E. (2007). Thinking with maps. Philosophical Perspectives, 21(1), 145–182.
  13. Camp, E. (2009a). A language of baboon thought? In R. W. Lurz (Ed.), The philosophy of animal minds (pp. 108–127). Cambridge: Cambridge University Press.
  14. Camp, E. (2009b). Putting thoughts to work: Concepts, systematicity, and stimulus-independence. Philosophy and Phenomenological Research, 78(2), 275–311.
  15. Camp, E. (2016). Review: Category mistakes by Ofra Magidor (OUP). Mind, 125(498), 611–615.
  16. Campbell, J. (1986). Conceptual structure. In C. Travis (Ed.), Meaning and interpretation. Oxford: Blackwell.
  17. Carruthers, P. (2004). On being simple minded. American Philosophical Quarterly, 41(3), 205–220.
  18. Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge: MIT Press.
  19. Chomsky, N. (1988). Language and problems of knowledge: The Managua lectures. Cambridge: MIT Press.
  20. Chomsky, N. (2007). Biolinguistic explorations: Design, development, evolution. International Journal of Philosophical Studies, 15, 1–21.
  21. Crane, T. (1992). The nonconceptual content of experience. In The contents of experience. Cambridge: Cambridge University Press.
  22. Davies, M. (1998). Language, thought, and the language of thought (Aunty’s own argument revisited). In P. Carruthers (Ed.), Language and thought (pp. 226–247). Cambridge: Cambridge University Press.
  23. Dehaene, S., Izard, V., Pica, P., & Spelke, E. S. (2008). Core knowledge of geometry in an Amazonian indigene group. Science, 311, 381–384.
  24. DeLoache, J. (1995). Early understanding and use of symbols: The model model. Current Perspectives in Psychological Science, 4, 109–113.
  25. Devitt, M. (2006). Ignorance of language. Oxford: Oxford University Press.
  26. Dummett, M. A. E. (1993). Origins of analytical philosophy. Cambridge: Harvard University Press.
  27. Evans, G. (1982). The varieties of reference (J. McDowell, Ed.). Oxford: Oxford University Press.
  28. Fiedler, K., & Sydow, M. (2015). Heuristics and biases: Beyond Tversky and Kahneman’s (1974) judgment under uncertainty. In M. W. Eysenck & D. Groome (Eds.), Cognitive psychology: Revising the classical studies (pp. 146–161). London.
  29. Fodor, J. A. (1975). The language of thought. Cambridge: Harvard University Press.
  30. Fodor, J. A. (1987a). Psychosemantics: The problem of meaning in the philosophy of mind. Cambridge: MIT Press.
  31. Fodor, J. A. (1987b). Why there still has to be a language of thought. In Psychosemantics. Cambridge: MIT Press.
  32. Fodor, J. A. (2001). Language, thought, and compositionality. Mind and Language, 16, 1–15.
  33. Fodor, J. A. (2008). LOT 2: The language of thought revisited. Oxford: Oxford University Press.
  34. Fodor, J. A., & Pylyshyn, Z. (1988). Connectionism and cognitive architecture. Cognition, 28(1–2), 3–71.
  35. Frege, G. (1914/1980). Letter to Jourdain. In Philosophical and mathematical correspondence (H. Kaal, Trans.). Chicago: University of Chicago Press.
  36. Frege, G. (1956). The thought: A logical enquiry. Mind, 65(259), 289–311.
  37. Hardman, D. (2009). Judgment and decision making: Psychological perspectives. New York: Wiley.
  38. Huttenlocher, J., Newcombe, N., & Vasilyeva, M. (1999). Spatial scaling in young children. Psychological Science, 10, 393–398.
  39. Huttenlocher, J., Vasilyeva, M., Newcombe, N., & Duffy, S. (2008). Developing symbolic capacity one step at a time. Cognition, 106, 1–12.
  40. Johnson, K. (2004). On the systematicity of language and thought. Journal of Philosophy, 101(3), 111–139.
  41. Johnson, K. (2015). Maps, languages, and manguages: Rival cognitive architectures? Philosophical Psychology, 28(6), 815–836.
  42. Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
  43. Koehler, D. J., & Harvey, N. (2004). Blackwell handbook of judgment and decision making. Hoboken: Wiley.
  44. Landau, B. (1986). Early map use as an unlearned ability. Cognition, 22, 201–223.
  45. Landau, B., & Lukusta, L. (2009). Spatial representation across species: Geometry, language, and maps. Current Opinion in Neurobiology, 19(1), 12–19.
  46. Liben, L., & Myers, L. (2007). Developmental changes in children’s understanding of maps: What, when, and how? In J. Plummert & J. Spencer (Eds.), The emerging spatial mind. Oxford: Oxford University Press.
  47. Magidor, O. (2009). Category mistakes are meaningful. Linguistics and Philosophy, 32, 553–581.
  48. Magidor, O. (2013). Category mistakes. Oxford: Oxford University Press.
  49. Peacocke, C. (1986). Analogue content. Proceedings of the Aristotelian Society, 60, 1–17.
  50. Peacocke, C. (1992). A study of concepts. Cambridge: MIT Press.
  51. Rescorla, M. (2009). Cognitive maps and the language of thought. British Journal for the Philosophy of Science, 60, 377–407.
  52. Shusterman, A., Ah Lee, S., & Spelke, E. S. (2008). Young children’s spontaneous use of geometry in maps. Developmental Science, 11(2), F1–F7.
  53. Slovic, P., Finucane, M., Peters, E., & MacGregor, D. G. (2002). The affect heuristic. In Heuristics and biases: The psychology of intuitive judgment (pp. 397–420). Cambridge: Cambridge University Press.
  54. Strawson, P. F. (1959). Individuals: An essay in descriptive metaphysics. London: Routledge.
  55. Szabó, Z. G. (2012). The case for compositionality. In M. Werning, W. Hinzen, & E. Machery (Eds.), The Oxford handbook of compositionality. Oxford: Oxford University Press.
  56. Szabó, Z. G. (2017). Compositionality. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2017 ed.). Accessed 1 Jan 2017.
  57. Teigen, K. H. (2004). Judgments by representativeness. In R. F. Pohl (Ed.), Cognitive illusions: A handbook on fallacies and biases in thinking, judgment and memory. Hove: Psychology Press.
  58. Travis, C. (1993). On constraints of generality. Proceedings of the Aristotelian Society, 94, 165–188.
  59. Travis, C. (1996). Meaning’s role in truth. Mind, 105(419), 451–466.
  60. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.

Copyright information

© The Author(s) 2018

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. School of Philosophy, Religion and History of Science, University of Leeds, Leeds, UK
