Abstract
Predictive Processing accounts of Cognition, PPC, promise to forge productive alliances that will unite approaches that are otherwise at odds (see Clark, A. Surfing uncertainty: prediction, action and the embodied mind. Oxford University Press, Oxford, 2016). Can it? This paper argues that it can’t—or at least not so long as it sticks with the cognitivist rendering that Clark (2016) and others favor. In making this case the argument of this paper unfolds as follows: Sect. 1 describes the basics of PPC—its attachment to the idea that we perceive the world by guessing the world. It then details the reasons why so many find cognitivist interpretations to be inevitable. Section 2 examines how prominent proponents of cognitivist PPC have proposed dealing with a fundamental problem that troubles their accounts—the question of how the brain is able to get into the great guessing game in the first place. It is argued that on close inspection Clark’s (2016) solution, which he calls bootstrap heaven is—once we take a realistic look at the situation of the brain—in fact bootstrap hell. Section 3 argues that it is possible to avoid dwelling in bootstrap hell if one adopts a radically enactive take on PPC. A brief sketch of what this might look like is provided.
Similar content being viewed by others
Notes
Clark (2016) is very clear that we must distinguish two stories that might be told about predictive processing. The first story presents an “extremely broad” vision according to which the brain’s role in cognition is that of “multi-level probabilistic prediction” (p. 10). The second, more specific, predictive processing story attempts to give the details about how the first story might get told in light of current developments in neuroscience.
Clark (2016) frequently tells us that PPC is a “compelling ‘cognitive package deal’ in which perception, imagination, understanding, reasoning, and action are co-emergent from the whirrings and grindings of the predictive, uncertainty-estimating brain” (p. xvi).
According to Hohwy’s (2014) take on PPC, “prediction error minimization is the only principle for the activity of the brain” (p. 2, emphasis added). Yet there are reasons to abandon the all-encompassing, imperialist vision presented in standard formulations of PPC, which holds that the sole explanation of adaptive error minimization is the reduction of free energy. While the free energy principle is central to PPC, it should not be regarded as ‘the’ foundational, ultimate explanation of all adaptive behavior, the one factor that drives it. PPC must surrender this pretension because, as the ‘dark room’ objection highlights, the optimal strategy for reducing surprise and minimizing prediction error would be to find a stable environment and engage with the world as little as possible. Clearly, the ‘dark room’ strategy cannot explain why the world teems with so many diverse forms of adaptive life that employ an incredible variety of adventurous cognitive strategies. Highlighting the explanatory limitations of relying on a single principle, Menary (forthcoming; see also Menary 2015) supplies a compelling argument for relinquishing the ‘isolated brain’ interpretation of PPC in favour of situating the PPC enterprise within a broader, more ‘open minded’ and pluralist explanatory framework: one which assumes that explaining adaptive life and cognition demands appeal to a wider set of grounding evolutionary principles, not just the idea that organisms seek to minimize free energy. This argument against the secluded brain formulation of PPC concerns only PPC’s official explanatory ambitions. It is thus independent of the epistemic and semantic concerns raised above; nevertheless, the two lines of argument fit together as a seamless package.
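To make the ‘dark room’ objection vivid, here is a toy numerical illustration of our own, not part of the original argument: if surprise is measured as average surprisal (in nats) and the agent’s generative model already matches its environment exactly, a near-constant ‘dark room’ environment generates far less prediction error than a rich, varied one. All names and probabilities below are invented for exposition.

```python
import math

def avg_surprisal(dist):
    """Average surprisal (in nats) of observations drawn from dist,
    assuming the agent's model matches the environment exactly.
    (This is just the entropy of the observation distribution.)"""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

# A near-constant 'dark room' environment: almost every observation is 'dark'.
dark_room = {"dark": 0.99, "flicker": 0.01}

# A rich, varied environment: eight equiprobable kinds of observation.
varied = {f"scene_{i}": 1 / 8 for i in range(8)}

# Even with a perfect model, the varied world generates far more
# prediction error than the dark room does.
assert avg_surprisal(dark_room) < avg_surprisal(varied)
```

On a pure surprise-minimization metric, then, the dark room wins, which is precisely why the objection holds that an additional principle is needed to explain exploratory, adventurous forms of life.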
Clark (2015a) identifies the radical implications of PPC as follows: (1) the core flow of information is top-down, with the forward flow of information replaced by the forward flow of prediction error; (2) motor control is just top-down sensory prediction; (3) efference copies are replaced by top-down predictions; (4) cost functions are absorbed into predictions (p. 3). Despite proposing such major changes to the standard cognitivist account of cognition, Clark’s version of PPC exemplifies how one can be a radical revisionist without being a true revolutionary. For, if Clark is right, PPC requires only partial and piecemeal adjustments to traditional cognitivism. The radical changes he highlights fall a long way short of constituting a fundamental and wholesale replacement of the previous framework.
Interestingly, despite painting PPC as “a picture of the brain as a secluded inference-machine” that relies wholly on self-evidencing, Hohwy (2014) also notes the parallels with enactivist thinking, observing that “the notion of self-evidencing appears to be the epistemic cousin to the dynamic systems theory notions of self-organization and self-enabling, which are often used to explain enactivism” (p. 19).
REC also holds that some forms of cognition involve content. In some acts of cognition we represent the world as being in ways that might not obtain—namely, in ways that can be true or false, accurate or inaccurate, and so on. However, REC denies that this is a feature of the most fundamental forms of cognition.
Other theorists are even firmer and more forthright on the link between PPC and cognitivism—they regard PPC as absolutely and unavoidably wedded to the idea that the brain trades in contentful representations. Hohwy (2014) tells us, for example, that the brain’s predictions about likely sensory input “necessarily rely on internal representations of hidden causes in the world (including the body itself)” (p. 17, emphasis added).
Clark (2016) is quick to observe that this sort of matchmaking is often quite piecemeal and partial, as in cases of rapid perception during which we only get the ‘gist’ of a scene (p. 27).
Subjects viewed one of eight possible stimulus orientations while activity was monitored in early visual areas (V1–V4 and MT+) using standard fMRI procedures. For each 16-s ‘trial’ or stimulus block, a square-wave annular grating was presented at the specified orientation (0°, 22.5°, ..., 157.5°), flashing on and off every 250 ms with a randomized spatial phase to ensure that there was no mutual information between orientation and local pixel intensity.
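The stimulus schedule just described can be sketched in code. The following is our own toy reconstruction for exposition only; the variable names and data structures are ours, not drawn from Kamitani and Tong’s actual protocol.

```python
import random

# Eight stimulus orientations in 22.5-degree steps: 0.0, 22.5, ..., 157.5
orientations = [i * 22.5 for i in range(8)]

def stimulus_block(orientation, duration_s=16.0, flash_s=0.25):
    """One 16 s stimulus block: the grating alternates on/off every 250 ms
    (64 frames), with spatial phase randomized on each frame so that
    orientation carries no information about local pixel intensity."""
    n_frames = int(duration_s / flash_s)  # 64 frames per block
    return [{"orientation": orientation,
             "on": frame % 2 == 0,                 # flash on/off
             "phase": random.uniform(0.0, 360.0)}  # randomized spatial phase
            for frame in range(n_frames)]

block = stimulus_block(orientations[1])  # a block at 22.5 degrees
```

The phase randomization is the crucial design choice: it guarantees that any orientation information decoded from the fMRI signal reflects orientation tuning rather than incidental luminance differences.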
This is Clark’s (2016) own phrase: he tells us that, “the prediction task is ... a kind of bootstrap heaven” (Clark 2016, p. 19). He elsewhere tells us that, “the impasse was solved in principle at least by the development of learning routines that made iterated visits to bootstrap heaven” (Clark 2016, p. 20). Hohwy (2013) makes a similar move, holding: “a solution to the problem of perception ... must have a bootstrapping effect such that perceptual inference and prior belief is explained as being normative in one fell swoop, without helping ourselves to the answer by going beyond the perspective of the skull-bound brain” (p. 15).
Noting this, Rosenberg (2014) claims that, “the real challenge for neuroscience is to explain how the brain stores information when it can’t do so ... in sentences made up in a language of thought” (Rosenberg 2014, p. 26). This assessment is almost right. Here is REC’s proposed modification: The real challenge for PPC regarding neuroscience is to [get beyond trying to] explain how the brain stores information when it can’t do so ... in sentences made up in a language of thought.
Hohwy (2013) rejects the idea that the historical constraints on how a system copes with uncertainty could be understood in terms of what he calls ‘mere biases’. This is because he insists on construing PPC in terms of inference understood as a normative notion, where the norms in question are those of Bayesian epistemology, which tells us “what we should infer given our evidence” (p. 15).
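The Bayesian norm Hohwy invokes (“what we should infer given our evidence”) has a precise content: Bayes’ rule fixes the posterior an agent ought to adopt over hidden causes, given a prior and the likelihood each cause assigns to the evidence. The sketch below is ours, with invented numbers, and is not drawn from Hohwy’s text.

```python
def bayes_update(prior, likelihood):
    """Posterior over hypotheses given a prior and the likelihood each
    hypothesis assigns to the observed evidence (Bayes' rule)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalized.values())  # marginal probability of the evidence
    return {h: v / z for h, v in unnormalized.items()}

# Hypothetical illustration: two candidate hidden causes of a sensory
# signal, with the evidence four times likelier under cause_A.
prior = {"cause_A": 0.5, "cause_B": 0.5}
likelihood = {"cause_A": 0.8, "cause_B": 0.2}
posterior = bayes_update(prior, likelihood)  # cause_A: 0.8, cause_B: 0.2
```

On Hohwy’s construal, this is not merely a description of what a system does but a norm it can meet or fail to meet, which is why he resists recasting such updating as a ‘mere bias’.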
References
Akins, K. (1996). Of sensory systems and the “aboutness” of mental states. Journal of Philosophy, 93(7), 337–372.
Bruineberg, J., & Rietveld, E. (2014). Self-organization, free energy minimization, and optimal grip on a field of affordances. Frontiers in Human Neuroscience, 8, 1–14.
Byrge, L., Sporns, O., & Smith, L. B. (2014). Developmental process emerges from extended brain–body–behaviour networks. Trends in Cognitive Sciences, 18, 395–403.
Clark, A. (2013a). Expecting the world: Perception, prediction, and the origins of human knowledge. Journal of Philosophy, 110(9), 469–496.
Clark, A. (2013b). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36, 181–253.
Clark, A. (2015a). Embodied prediction. In T. Metzinger & J. M. Windt (Eds.), Open MIND (Vol. 7). Frankfurt am Main: MIND Group.
Clark, A. (2015b). Predicting peace: An end to the representation wars. In T. Metzinger & J. M. Windt (Eds.), Open MIND (Vol. 7). Frankfurt am Main: MIND Group.
Clark, A. (2016). Surfing uncertainty: Prediction, action and the embodied mind. Oxford: Oxford University Press.
de-Wit, L., Alexander, D., Ekroll, V., & Wagemans, J. (2016). Is neuroimaging measuring information in the brain? Psychonomic Bulletin & Review, 1–14. doi:10.3758/s13423-016-1002-0.
Friston, K. J. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127–138.
Friston, K. J., & Stephan, K. E. (2007). Free-energy and the brain. Synthese, 159, 417–458.
Gerrans, P. (2014). The measure of madness. Cambridge: MIT Press.
Gibson, M. (2004). From naming to saying: The unity of the proposition. London: Routledge.
Gładziejewski, P. (2015). Explaining cognitive phenomena with internal representations: A mechanistic perspective. Studies in Logic, Grammar and Rhetoric, 40(1), 63–90.
Gładziejewski, P. (2016). Predictive coding and representationalism. Synthese, 193(2), 559–582.
Hohwy, J. (2013). The predictive mind. Oxford: Oxford University Press.
Hohwy, J. (2014). The self-evidencing brain. Noûs, 50(2), 259–285.
Hutto, D. D. (2013). Fictionalism about folk psychology. The Monist, 96(4), 585–607.
Hutto, D. D., & Myin, E. (2013). Radicalizing enactivism: Basic minds without content. Cambridge: MIT Press.
Hutto, D. D., & Myin, E. (2017). Evolving enactivism: Basic minds meet content. Cambridge: MIT Press.
Kamitani, Y., & Tong, F. (2005). Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8(5), 679–685.
Kamitani, Y., & Tong, F. (2006). Decoding seen and attended motion directions from activity in the human visual cortex. Current Biology, 16(11), 1096–1102.
Menary, R. (2015). What? Now. Predictive coding and enculturation. In T. Metzinger & J. M. Windt (Eds.), Open MIND (Vol. 25). Frankfurt am Main: MIND Group.
Orlandi, N. (2014). The innocent eye: Why vision is not a cognitive process. Oxford: Oxford University Press.
Reddy, L., Tsuchiya, N., & Serre, T. (2010). Reading the mind’s eye: Decoding category information during mental imagery. NeuroImage, 50(2), 818–825.
Rescorla, M. (2016). Bayesian sensorimotor psychology. Mind and Language, 31(1), 3–36.
Rosenberg, A. (2014). Disenchanted naturalism. In B. Bashour & H. D. Muller (Eds.), Contemporary philosophical naturalism and its implications. London: Routledge.
Sprevak, M. (2013). Fictionalism about neural representations. The Monist, 96, 539–560.
Travis, C. (2004). The silence of the senses. Mind, 113(449), 57–94. doi:10.1093/mind/113.449.57.
Wittgenstein, L. (1953). Philosophical investigations. Oxford: Blackwell.
Acknowledgements
The primary research for this article was supported by the Australian Research Council Discovery Project “Minds in Skilled Performance” (DP170102987).
Hutto, D.D. Getting into predictive processing’s great guessing game: Bootstrap heaven or hell?. Synthese 195, 2445–2458 (2018). https://doi.org/10.1007/s11229-017-1385-0