The Swapping Constraint

Abstract

Triviality arguments against the computational theory of mind claim that computational implementation is trivial and thus does not serve as an adequate metaphysical basis for mental states. It is common to take computational implementation to consist in a mapping from physical states to abstract computational states. In this paper, I propose a novel constraint on the kinds of physical states that can implement computational states, which helps to specify what it is for two physical states to non-trivially implement the same computational state.

Notes

  1. The mapping function between physical states and computations must be a mapping from physical states to states in the computation; as we will see, physical states are fine-grained enough that multiple physical states will map to one computational state.

  2. The sufficient level of complexity is just that the system is in a unique physical state at any given point, where uniqueness applies to the intrinsic properties of the system (Putnam 1987). I follow Godfrey-Smith (2009) in assuming an account of intrinsic properties along the lines of Langton and Lewis (1998).

  3. A non-structuralist response to triviality worries—which rejects some of these assumptions about what it takes to implement a computation—can be found in Rescorla (2014). This paper will focus on structuralist responses to issues of triviality.

  4. I will focus on Godfrey-Smith’s argument because it is immune to objections that have been raised against earlier triviality arguments, and because there is not yet a satisfying response to it in the literature. However, I take what is said here to be, more generally, a first step toward outlining a constraint on computational triviality.

  5. The contingency tree in Fig. 1 is taken from Godfrey-Smith (2009). It has been pointed out to me by an anonymous reviewer that this is a nonstandard description of an FSA, as it does not specify a non-arbitrary initial state or terminating states.
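     For comparison (this is the textbook definition, not Godfrey-Smith’s own notation), a deterministic FSA is standardly specified as a tuple \(M = (Q, \Sigma, \delta, q_0, F)\), where \(Q\) is a finite set of states, \(\Sigma\) an input alphabet, \(\delta : Q \times \Sigma \rightarrow Q\) a transition function, \(q_0 \in Q\) a designated initial state, and \(F \subseteq Q\) a set of terminating (accepting) states; the contingency tree leaves the last two components unspecified.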

  6. Chalmers (2012) says something similar: “A physical system implements a given computation when there exists a grouping of physical states of the system into state-types and a one-to-one mapping from formal states of the computation to physical state-types, such that computational states related by an abstract state-transition relation are mapped onto physical state-types related by a corresponding causal state-transition relation” (Chalmers 2012: 229). It should be noted that Chalmers reverses the domain and range of the mapping: whereas I (and Godfrey-Smith) state it as a mapping from physical states (P) to formal states (S), Chalmers states it as a mapping from formal states (S) to physical states (P). My reason for stating it as a P to S function is that if physical states are fine-grained, then there will be many physical states mapping to formal states (a many-to-one mapping). So, doing it the other way gives you a one-to-many mapping, which would not be a function (thanks to an anonymous reviewer for pointing this out to me). One could specify the mapping this way if one had a reasonably good characterization of the physical state types being mapped to; but, of course, this is exactly what is at issue.
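     The point about direction can be put in toy form (the state labels below are hypothetical placeholders of my own, not drawn from the paper or from Chalmers): when several fine-grained physical states map to a single formal state, the P-to-S direction is a well-defined function, while the reverse direction is one-to-many.

        # Toy sketch (hypothetical labels): many physical states map to one
        # formal state, so P -> S is a function; S -> P would not be.
        physical_to_formal = {
            "P1": "S1", "P2": "S1", "P3": "S1",   # three realizers of S1
            "P4": "S2", "P5": "S2",               # two realizers of S2
        }

        def implements(physical_state: str) -> str:
            """P -> S: each physical state determines exactly one formal state."""
            return physical_to_formal[physical_state]

        # Inverting the table yields a one-to-many relation, not a function
        # from formal states to single physical states.
        formal_to_physical: dict[str, list[str]] = {}
        for p, s in physical_to_formal.items():
            formal_to_physical.setdefault(s, []).append(p)

        print(implements("P2"))          # S1
        print(formal_to_physical["S1"])  # ['P1', 'P2', 'P3']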

  7. See Putnam (1987) for the origins of this kind of disjunctive description.

  8. Godfrey-Smith makes a note of this issue as well. “The criteria for realization discussed above look weak because of the existential quantifiers; all that is required [is] that a system have some physical states that map onto a given structure, or contain some states that are related in such a way that they occupy a given set of roles. But this weakness is often something that functionalism seeks, because of the message of multiple realizability, and the alleged ‘autonomy’ of high-level descriptions of complex systems” (Godfrey-Smith 2009: 289).

  9. This is more or less what Godfrey-Smith (2009) suggests in response to the triviality problem he raises.

  10. Thanks to an anonymous reviewer for pressing me to say more about this issue.

  11. As Godfrey-Smith notes, key to showing that some trivial system B implements some computation S is “that all B’s physical outputs, as well as inner states, are unique” (Godfrey-Smith 2009: 287).

  12. It is also worth noting that this interpretation requires us to give a different account of transduction than the one that is tacitly assumed. Transduction is typically thought of as an operation on the inputs/outputs of a system as a whole, but if we want to adjust the transducer layer so as to accommodate physical differences in swapped components, then we need to think of transduction from/to inputs and outputs at the level of states, rather than at the level of the system as a whole (thanks to an anonymous reviewer for pointing this out).

  13. Thanks to an anonymous reviewer for pushing me to address this in detail.

  14. We might also think that only true computational systems have the property of being claimed by us to be true computational systems. But this would not make such a property a compelling one to use in developing a constraint.

  15. We might wish to put this in terms of a microphysical duplicate of the state \(P_1\) being instantiated at the node of the contingency tree where \(P_{1000}\) is.

  16. That physical component could be swapped with a similar enough physical component in (a) a standard computer of the same kind, or (b) the same physical system at a different time.

  17. Thanks to an anonymous reviewer for pushing me to go into greater detail on this point.

  18. This is not crucial, but it is worth noting in order to highlight the difference between the bucket of water and a wall. In some sense the wall is ‘more’ trivial (which is why I use the bucket as the example), because its different states are not differentiable in the terms used here. We can add complexities to the case, such as drop levers and so on, in order to give a more direct mapping; however, it should be noted that in doing so we run the risk of turning the bucket of water into a nontrivial computer.

  19. This means that it is not entirely appropriate, in the contingency tree that follows, to represent only two physical inputs, rather than four (or more).

  20. “All the designer has to do to generate coke machine behavior over the interval is build a transducer device that does nothing when it detects \(O_1^P\) (etc.), emits a coke when it detects \(O_4^P\) (etc.), and emits a coke and change in response to \(O_6^P\) (etc.) ... It is as if a designer had enormous knowledge of the physical dispositions of the bucket of water, and very fine-grained ways of building input-output devices” (Godfrey-Smith 2009: 287).
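     A rough sketch of the sort of transducer layer Godfrey-Smith imagines might look as follows (the physical output labels and the three behaviors are illustrative placeholders of my own; the point is only that the transducer amounts to a long, fine-grained disjunction over physical states):

        # Sketch of the imagined 'designer' transducer (hypothetical labels):
        # each fine-grained physical output state of the bucket is mapped, by
        # brute enumeration, onto coke-machine behavior.
        DO_NOTHING = "do nothing"
        EMIT_COKE = "emit coke"
        EMIT_COKE_AND_CHANGE = "emit coke and change"

        transducer_table = {
            "O1_P": DO_NOTHING,
            "O2_P": DO_NOTHING,
            "O3_P": DO_NOTHING,
            "O4_P": EMIT_COKE,
            "O5_P": EMIT_COKE,
            "O6_P": EMIT_COKE_AND_CHANGE,
            # ... one entry for every physical output state over the interval
        }

        def transduce(physical_output: str) -> str:
            """Map a fine-grained physical output state to machine behavior."""
            return transducer_table.get(physical_output, DO_NOTHING)

        print(transduce("O4_P"))  # emit coke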

  21. No corresponding worry arises for the nontrivial system; we might think of the initial mapping of the nontrivial system as fixing the granularity with which we describe its behavior. With respect to that level of grain, swapping does not change the behavior of the system. With respect to the level of grain used to describe the behavior of a trivial system, it does.
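     A toy way to see the asymmetry (the labels are my own, and nothing here is meant to reproduce the paper’s formal apparatus): under a coarse-grained mapping, a swapped-in physical state that falls under the same state-type leaves the formal-level description of a run unchanged, whereas under a maximally fine-grained mapping every swap changes the described behavior by definition.

        # Illustrative only: coarse vs. maximally fine-grained mappings.
        coarse_mapping = {"P1": "S1", "P2": "S1", "P2_swapped": "S1", "P3": "S2"}

        def formal_trace(physical_trace, mapping):
            """Describe a run of the system at the grain fixed by the mapping."""
            return [mapping[p] for p in physical_trace]

        original = ["P1", "P2", "P3"]
        swapped = ["P1", "P2_swapped", "P3"]  # one component swapped for a similar one

        # Coarse (nontrivial) grain: the swap is invisible at the formal level.
        print(formal_trace(original, coarse_mapping) == formal_trace(swapped, coarse_mapping))  # True

        # Maximally fine grain: every unique physical state is its own type,
        # so any swap changes the described behavior.
        fine_mapping = {p: p for p in set(original + swapped)}
        print(formal_trace(original, fine_mapping) == formal_trace(swapped, fine_mapping))  # False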

  22. Of course, what counts as sufficient similarity is, in some sense, going to be determined here by the swapping constraint itself. Perhaps the strongly interpreted swapping constraint offers a way of distinguishing physical computers of the same ‘type’ in one sense, helping to distinguish multiple realizability between tokens from multiple realizability between types. This relates to classic work on the type-identity theory (Lewis 1966), as well as some more recent work on multiple realizability (Shapiro 2000).

  23. Thanks to an anonymous reviewer for bringing this example to my attention, and for pressing me to say more about this issue in general.

  24. However, something like a stronger view – that different material substrates realize different computations – has advocates in a related debate regarding mechanistic accounts of computation. Kaplan (2017), for example, responds to recent arguments against mechanistic accounts made on the basis of similar charges (Chirimuuta 2014). Kaplan’s argument, briefly, is that we cannot expect the scope of mechanistic explanations to completely account for considerations involving multiple realizability.

  25. Sprevak (2012) argues, somewhat compellingly, that the components of a CSA can actually be construed as FSAs that are particularly permissive about inputs and outputs. See Sect. 4 of Chalmers (2012) for some discussion of this proposal.

  26. See Glisky and Kong (2008) for some experimental evidence in support of this claim.

  27. See Olson (2002) and Shoemaker (2004) for some discussion of this issue in connection with theories of personal identity.

References

  • Anderson, M. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33(4), 245–266.

  • Chalmers, D. (1996). Does a rock implement every finite-state automaton? Synthese, 108(3), 309–333.

  • Chalmers, D. (2011). A computational foundation for the study of cognition. Journal of Cognitive Science, 12(4), 323–357.

  • Chalmers, D. (2012). The varieties of computation: A reply. Journal of Cognitive Science, 13, 211–248.

  • Chirimuuta, M. (2014). Minimal models and canonical neural computations: The distinctness of computational explanation in neuroscience. Synthese, 191(2), 127–153.

  • Glisky, E. (2007). Changes in cognitive function in human aging. In D. Riddle (Ed.), Brain aging: Models, methods, and mechanisms. Boca Raton, FL: CRC Press.

  • Glisky, E., & Kong, L. (2008). Do young and older adults rely on different processes in source memory tasks? A neuropsychological study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34(4), 809–822.

  • Godfrey-Smith, P. (2009). Triviality arguments against functionalism. Philosophical Studies, 145(2), 273–295.

  • Kaplan, D. M. (2017). Neural computation, multiple realizability, and the prospects for mechanistic explanation. In D. M. Kaplan (Ed.), Explanation and integration in mind and brain science. Oxford: Oxford University Press.

  • Langton, R., & Lewis, D. (1998). Defining ‘intrinsic’. Philosophy and Phenomenological Research, 58, 333–345.

  • Lewis, D. (1966). An argument for the identity theory. The Journal of Philosophy, 63(1), 17–25.

  • Olson, E. T. (2002). What does functionalism tell us about personal identity? Noûs, 36(4), 682–698. http://www.jstor.org/stable/3506231.

  • Putnam, H. (1987). Representation and reality. Cambridge, MA: MIT Press.

  • Rescorla, M. (2014). A theory of computational implementation. Synthese, 191(6), 1277–1307.

  • Searle, J. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association, 64(3), 21–37.

  • Shapiro, L. A. (2000). Multiple realizations. The Journal of Philosophy, 97(12), 635–654.

  • Shoemaker, S. (2004). Functionalism and personal identity: A reply. Noûs, 38(3), 525–533. http://www.jstor.org/stable/3506251.

  • Sprevak, M. (2010). Computation, individuation, and the received view on representation. Studies in History and Philosophy of Science, 41, 260–270.

  • Sprevak, M. (2012). Three challenges to Chalmers on computational implementation. Journal of Cognitive Science, 13, 107–143.

Author information

Corresponding author

Correspondence to Henry Ian Schiller.

Additional information

Thanks to Mark Sprevak for guidance and support in the early stages of this project, and for crucial feedback on an earlier draft. Thanks to Cory Juhl, and to several anonymous reviewers for crucial feedback on more recent drafts. Thanks are also due to Andy Clark, Jonny Lee, Becky Millar, Alex Rendón and audiences at the University of Edinburgh and the 2015 Northwest Philosophy Conference, for helpful questions and discussion.

Cite this article

Schiller, H.I. The Swapping Constraint. Minds & Machines 28, 605–622 (2018). https://doi.org/10.1007/s11023-018-9473-6
