The final virtue of external interaction I will discuss is, in some ways, the summation of persistence, rearrangement, and reformulation. It may be called the power of construction. In making a construction—whether it be the graphical overlays of the dancer shown in Fig. 5, the geometric construction of Fig. 1, or a prototype of a design as in Fig. 8a—there is magic in actually making something in the world. As mentioned in the discussion of scale models, by constructing a structure we prove that its parts are mutually consistent. If we can build it, then it must be logically and physically viable. If we can run it, then the actions of those parts are consistent, at least some of the time; and if we can run it under all orderings, then it is consistent all of the time. The physical world does not lie.
The constructive process has a special place in human thinking because it is self-certifying. In mathematics, constructive reasoning means proving that a mathematical object exists by exhibiting it. For example, if it were claimed that a given set has a largest element, a constructivist proof would provide a method for finding the largest element, and then apply that method to actually display the element.
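The contrast can be made concrete in code. A constructive proof of "this finite set has a largest element" supplies a finding procedure and then runs it; the sketch below (an illustration of the idea, not drawn from the text) does exactly that.

```python
def largest(elements):
    """Constructively find the largest element of a finite, non-empty
    collection: scan it, keeping the biggest element seen so far."""
    it = iter(elements)
    best = next(it)      # a non-empty collection yields a first element
    for x in it:
        if x > best:
            best = x
    return best

# Applying the method displays the object whose existence was claimed.
print(largest({3, 17, 5, 11}))  # prints 17
```

Running the procedure does not merely argue that a largest element exists; it puts the element itself on display, which is what makes the proof self-certifying.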
Not every form of human reasoning is constructive. Humans reason by analogy and by induction; they offer explanations; and they think while performing other activities, such as following instructions or interpreting a foreign language. None of these is a constructive method in the mathematical sense. However, because of the incremental nature of construction, the effort to construct a solution may also be a way of exploring a problem. When students look for a constructive proof of a geometric problem, they use the evolving external structure to prompt ideas, bump into constraints, and realize possibilities. When they write down partial translations of a paragraph, they rely on those explicit fragments to guide the current translation.
The question that demands to be asked is whether thinking with external elements is ever necessary. Can we, in principle, do everything in our heads, or do we need to interact with something outside ourselves in order to probe and conceptualize, and get things right? In mathematics, externalization is necessary, not just for communication, but to display the mathematical object in question. It is like measurement: you cannot provide the value of a physical magnitude without measuring it. You cannot show the reality of a mathematical object (for constructivists) without producing a proof that displays it. Yet, during the discovery process, might not all the thinking be internal, the result of an interaction between elements inside the head? Where is the proof that, at first, all that probing and conceptualizing is not the outcome of a purely internal activity? Might it not be that all the ‘real’ thinking lives internally, and that the internal activity is simulating what it would be like to write things down outside? Or perhaps that the internal activity amounts to running through how one would present one’s idea to others? Mightn’t the truth be that we needed the outside world to teach us how to think,Footnote 3 but once we know how, we never need to physically encounter tangible two- or three-dimensional structures to epistemically probe the ‘world’?
I believe this is wrong: physical interaction with tangible elements is a necessary part of our thinking process because there are occasions when we must harness physical processes to formulate and transition between thoughts. There are cognitive things we can do outside our heads that we simply cannot do inside. On those occasions, external processes function as special cognitive artifactsFootnote 4 that we are incapable of simulating internally.
To defend this hypothesis is harder than it might seem. In practice, few people can multiply two four-digit numbers in their heads. And if they can, then increase the problem to ten-digit numbers. This ‘in practice’ limitation does not prove the ‘in principle’ claim, however, that normal human brains lack the capacity to solve certain problems internally that they can solve with external help, with tools, computers, or other people. There are chess masters who can play nearly as well blindfolded as with full sight (Chabris and Hearst 2003).Footnote 5 There is no evidence that a team of chess players is better than an individual.Footnote 6 There are people with savant syndrome who can multiply large numbers in their heads, or determine primes or square roots. Other savants with eidetic memories can read books at a rate of 8–10 s per page, memorizing almost everything (see footnote 5). Tesla said that when he was designing a device, he would run a simulation of it in his head for a few weeks to see which parts were most subject to wear (Hegarty 2004, p. 281, citing Shepherd). Stephen Hawking is said to have developed analytical abilities that allowed him to manipulate equations in his mind equivalent to more than a page of handwritten manipulations. For any reasoning problem of complexity n, how do we know there is not some person, somewhere, who can solve it in their head, or could, if trained long enough? To be sure, this says little about the average person. Any given person may reach their computational limit on problems much smaller than n. And our technology and culture have evolved to support the majority of people. So, in practice, all people rely on available tools, practices, and techniques for reasoning. Nonetheless, if a single person can cope with n, then we have an existence proof that the complexity of a problem does not, by itself, make internal simulation impossible.
It suggests that any problem we cannot solve in our heads but can solve with external help has more to do with cost structure than with an in-principle biological inability.
One way of making the in-principle case is to show that there are operations that can be performed on external representations that cannot be performed on internal representations, and that, somehow, these are essential. Are there epistemic activities we can perform outside that we cannot duplicate inside, not because of their complexity, but because there are physical properties and technologies available on the outside that we cannot duplicate mentally—operations we cannot mentally simulate with sufficient realism to deliver dependable answers?
Consider Fig. 10. The dots in the two images on the left are related to one another by a rotation of 4°. This is essentially invisible unless the two images are superimposed, as in the image on the right. Superimposition is a physical relation that can be repeated any number of times, as is rotation. Both require control over physical transformations: in the case of superimposition, the position of the layers must be controlled precisely; in the case of rotation, the angle. Are there such functions in the brain?
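The point about a 4° rotation being invisible until the layers are aligned can be checked numerically. The sketch below (illustrative values, not the actual dot patterns of Fig. 10) rotates a few points by 4° and then "superimposes" the two layers by subtracting aligned points: only then does the small, systematic displacement become explicit.

```python
import math

def rotate(points, degrees):
    """Rotate 2-D points about the origin by the given angle."""
    t = math.radians(degrees)
    return [(x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) + y * math.cos(t)) for x, y in points]

original = [(1.0, 0.0), (0.0, 1.0), (0.7, 0.7)]
rotated = rotate(original, 4.0)

# Superimposition: lay one layer over the other and read off the offsets.
offsets = [math.hypot(x1 - x2, y1 - y2)
           for (x1, y1), (x2, y2) in zip(original, rotated)]
print(max(offsets))   # each point moves only a few hundredths of a unit
```

Side by side, the two coordinate lists look almost identical; aligned and differenced, the rotation shows up at once. The computation stands in for the physical act of laying one transparency over another.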
The brain process we are requiring is analog in nature. For over 25 years, a dispute has raged over whether brains support analog processes or whether mental imagery is driven by non-analog means (Pylyshyn 2001). We can sidestep this question, though, by appealing to an in principle distinction between types of processes. In an important paper, Von Neumann (1948) mentioned that some processes in nature might be irreducibly complex. Any description of one of those processes would be as complex as the process itself. Thus, to simulate or model that process one would have to recreate all the factors involved. This holds whether the simulation or modeling is being performed internally or externally. Von Neumann put it like this:
“It is not at all certain that in this domain a real object might not constitute the simplest description of itself, that is, any attempt to describe it by the usual literary or formal-logical method may lead to something less manageable and more involved” (p. 311).
David Marr, invoking the same idea, spoke of Type 2 processes, where any abstraction would be unreliable because the process being described evolves as the result of “the simultaneous action of a considerable number of processes, whose interaction is its own simplest description” (Marr 1977). Protein folding and unfolding are examples of such processes, according to Marr.Footnote 7 Other examples might be the n-body problem, the solution of certain market-equilibrium problems, situations where the outcome depends on the voting of n participants, and certain quantum computations.
The hallmark of these problems is that there exist physical processes that start and end in an interpretable state, but the way they get there is unpredictable: the factors mediating between the start and end states are large in number and impossible to predict on any individual run. To determine the outcome, therefore, it is necessary to run the process, and best to run it repeatedly. No tractable equation will work as well.
How are these problems to be solved if we have no access to the process or system itself? The next best thing is to run a physically similar process. For example, to compute the behavior of an n-body system, such as our solar system, our best hope is to construct a small analog version of that system—an orrery—then run the model and read off the result (see Fig. 11). Using this analog process, we can compute a function (to a reasonable degree of approximation) that we have no other reliable way of computing.
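A digital cousin of the orrery makes the "run the model, read off the result" strategy concrete: encode the initial state of the bodies, step the physics forward, and inspect the outcome, with no closed-form equation in sight. The sketch below uses a toy two-body system with illustrative units (G = 1), not a real ephemeris.

```python
import math

G = 1.0  # gravitational constant in toy units (an illustrative choice)

def step(bodies, dt):
    """Advance one semi-implicit Euler step.
    Each body is a list [mass, x, y, vx, vy]."""
    for i, bi in enumerate(bodies):
        ax = ay = 0.0
        for j, bj in enumerate(bodies):
            if i == j:
                continue
            dx, dy = bj[1] - bi[1], bj[2] - bi[2]
            r = math.hypot(dx, dy)
            ax += G * bj[0] * dx / r**3
            ay += G * bj[0] * dy / r**3
        bi[3] += ax * dt   # update velocities first...
        bi[4] += ay * dt
    for b in bodies:       # ...then positions, for better stability
        b[1] += b[3] * dt
        b[2] += b[4] * dt

# A heavy central body and a light satellite on a circular orbit (v = sqrt(G*M/r)).
bodies = [[1.0, 0.0, 0.0, 0.0, 0.0],
          [1e-6, 1.0, 0.0, 0.0, 1.0]]
for _ in range(10000):
    step(bodies, 0.001)

# Read off the result: the orbital radius should remain close to 1.
print(math.hypot(bodies[1][1], bodies[1][2]))
```

Like the orrery, the simulation does not explain the trajectory; it simply runs a surrogate of the process and lets us read the answer off the final state.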
The implication is that for brains to solve this sort of problem, they would have to encode the initial state of the Type 2 system and then simulate the physical interaction of its parts. If this interaction is essentially physical—if, for instance, it relies on physical equilibria, mechanical compliance, or friction—there may be no reliable way of running an internal simulation. We need the cognitive amplification that exploiting physical models provides. We would need to rely on the parallel processing, the physical interaction, and the intrinsic unpredictability of those analog systems. There is nothing in our brains (or minds) like that.
The conclusion I draw is that to formulate certain thoughts and to transition to others, we must either be able to represent arbitrarily complex states—states that cannot be represented in compact form—or we must rely on the external states themselves to encode their values and then use them to transition to later states. These external states we are able to name but never characterize in full structural detail.Footnote 8