The free energy principle says that any self-organising system that is at nonequilibrium steady state with its environment must minimize its (variational) free energy. It is proposed as a grand unifying principle for cognitive science and biology. The principle can appear cryptic, esoteric, too ambitious, and unfalsifiable—suggesting it would be best to suspend any belief in the principle, and instead focus on individual, more concrete and falsifiable ‘process theories’ for particular biological processes and phenomena like perception, decision and action. Here, I explain the free energy principle, and I argue that it is best understood as offering a conceptual and mathematical analysis of the concept of existence of self-organising systems. This analysis offers a new type of solution to long-standing problems in neurobiology, cognitive science, machine learning and philosophy concerning the possibility of normatively constrained, self-supervised learning and inference. The principle can therefore uniquely serve as a regulatory principle for process theories, to ensure that process theories conforming to it enable self-supervision. This is, at least for those who believe self-supervision is a foundational explanatory task, good reason to believe the free energy principle.
In machine learning, there is a recent focus on self-supervision as a crucial subtype of unsupervised learning, that is, learning that does not require labelled training data. In this research area, self-supervised learning proceeds by withholding some of the data and then letting the system predict it (cf. contextual cues); in this way data supervise the learning (such that self-supervision, in some sense, is a type of supervised learning too). In this paper, the notion of self-supervision is used in a quite general and generic sense, which captures the autonomy of self-supervision and its reliance on an internal model of causes of sensory input; it is intended to capture ‘truly’ unsupervised learning, namely where the system only relies on itself and its own exploration for normative constraints on learning and inference (this is pursued in Sects. 4, 5 below); the notion of unsupervised learning goes back to at least Barlow (e.g., Barlow 1989) and is now a textbook topic in machine learning.
There might of course be other concepts of the existence of biological organisms. The current approach focuses on the idea of nonequilibrium steady state, which immediately suggests a statistical analysis. There is a long history connecting the existence of biological systems with nonequilibrium steady state (or far from equilibrium states); see, e.g., Ashby 1947; Nicolis and Prigogine 1977; Prigogine and Nicolis 1971; Schrödinger 1944; Von Bertalanffy 1950. More philosophically-oriented discussion can be found in Mark Bickhard’s work, such as Bickhard (2009), which draws on the dynamical systems approach to discuss both the nature of representation and topics close to FEP; there is an interesting project in exploring their affinities (see also Bickhard 2016). There are non-equilibrium steady state systems that are not biological in the common sense, such as non-biological adaptive systems, and perhaps phenomena such as tornadoes. There is debate about this issue of the scope of FEP (e.g., Sims 2016). For this paper, I set aside a full discussion of scope issues and focus on the kinds of systems for which self-supervision and normativity are commonly discussed.
Surprise is also known as surprisal and corresponds to Shannon's self-information: the negative log probability of some system states, conditioned on that system or model. The average self-information of a system is its entropy, so avoiding surprise places an upper bound on the entropy of a system’s states.
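The relation between surprisal and entropy in this footnote can be made concrete numerically. The following sketch uses an arbitrary toy distribution of my own choosing, not anything from the paper:

```python
import math

# An example discrete distribution over four system states.
p = [0.5, 0.25, 0.125, 0.125]

# Surprisal (self-information) of each state: -log p(state), here in bits.
surprisal = [-math.log2(pi) for pi in p]  # [1.0, 2.0, 3.0, 3.0]

# Entropy is the expected surprisal under p.
entropy = sum(pi * s for pi, s in zip(p, surprisal))  # 1.75 bits

# A system that keeps the surprisal of the states it visits low thereby
# bounds the long-run average, i.e. the entropy of its states.
```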
There is substantial debate about the very notion of conceptual analysis and the a priori in philosophy. I do not here rely on any particular approach but merely on a basic sense of conceptual analysis as given by grasp and elucidation of concepts, and the way in which that is a priori at least in the sense of not immediately requiring empirical investigation. It may be that our conceptual analysis is susceptible to empirical evidence, and this can lead to considerations about whether the initial analysis failed, or whether the concept has changed; it may also be that there are different conceptual analyses of the same terms (e.g., in different cultures), which in turn raises questions whether this is a case of different understandings of the same concepts or of different concepts (much of this discussion plays out in Frank Jackson’s defence of conceptual analysis and the debate following Jackson (1998)). My argument here is subject to the eventual fate of these questions, together with other purported a priori conceptual analyses in philosophy.
The recognition model is sometimes described as a recognition density; namely, an approximate posterior probability density over unknown (external) states of the world causing sensory states. This model or density can be considered a Bayesian belief about something; namely, unknown states of the world (note that Bayesian beliefs are not propositional beliefs). Note also a distinction I set aside here: optimising a recognition density is different from optimising a model. In statistics, this is the difference between Bayesian model inversion—to produce an approximate posterior over unknown causes, given a model—and Bayesian model selection—to produce an approximate posterior over competing models.
A functional is a function of a function. The free energy is a functional because it is a function of a probability density; namely, q, which is a probability density function of external states.
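The functional character of free energy can be illustrated in a minimal discrete example (all numbers here are my own toy illustration, not the paper's): free energy takes the whole recognition density q as its argument, and for any q it bounds the surprisal of the sensory state from above, with equality at the exact posterior.

```python
import math

# Two external states x, one observed sensory state s; p_joint[x] = p(x, s).
p_joint = {0: 0.4, 1: 0.1}
p_s = sum(p_joint.values())      # marginal likelihood p(s) = 0.5
surprise = -math.log(p_s)        # surprisal of the sensory state

def free_energy(q):
    """F[q] = E_q[log q(x) - log p(x, s)]: a function of the density q."""
    return sum(q[x] * (math.log(q[x]) - math.log(p_joint[x])) for x in q)

q_guess = {0: 0.5, 1: 0.5}                       # an arbitrary density
q_exact = {x: p_joint[x] / p_s for x in p_joint} # the exact posterior p(x|s)

# For any q, F[q] >= surprise; equality holds at the exact posterior.
assert free_energy(q_guess) > surprise
assert abs(free_energy(q_exact) - surprise) < 1e-12
```

Minimising F over q therefore drives the recognition density toward the exact posterior while tightening the bound on surprise.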
The implicit conversion of an intractable integration problem into a tractable optimisation problem is due to Richard Feynman, who introduced the notion of variational free energy via the path integral formulation of quantum electrodynamics. It was subsequently exploited in machine learning, where minimising variational free energy is formally synonymous with approximate Bayesian inference.
Notice that the problem of self-supervised learning and inference relates to the general problem of learning where any learning system seeking to estimate a data generating process would have to minimize risk (expected value of some loss function). The problem is that the relevant joint probability distributions are unknown. So, the system has to minimize some proxy (e.g., empirical risk; Vapnik 1995). The FEP uses a proxy as well, minimizing (expected) free energy, to minimise surprise.
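The idea of minimising a proxy because the true joint distribution is unknown can be sketched as follows. This is a hedged toy example of empirical risk minimisation of my own construction (the data-generating process, loss, and grid search are all illustrative assumptions, not the paper's):

```python
import random

random.seed(0)

# The data-generating process is unknown to the learner; here we play
# nature and draw y = 2x + noise (an illustrative assumption).
def nature():
    x = random.gauss(0.0, 1.0)
    y = 2.0 * x + random.gauss(0.0, 0.1)
    return x, y

sample = [nature() for _ in range(500)]

def empirical_risk(w):
    """Proxy for the (unknown) expected squared-error loss under p(x, y)."""
    return sum((y - w * x) ** 2 for x, y in sample) / len(sample)

# Minimising the proxy over a grid of candidate parameters recovers
# something close to the true w = 2.0, without ever knowing p(x, y).
best_w = min((w / 100 for w in range(0, 401)), key=empirical_risk)
```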
There are various complexity measures in the literature. An influential approach is associated with the Akaike Information Criterion (Akaike 1974). Under FEP, complexity is conceived as the KL divergence between the prior and posterior distributions; a smaller divergence indicates that less complexity was introduced to account for new observations.
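The KL-divergence notion of complexity can be illustrated for univariate Gaussians, for which the divergence has a closed form (the particular prior and posteriors below are my own toy numbers):

```python
import math

def kl_gaussian(mu_q, sd_q, mu_p, sd_p):
    """KL[q || p] between two univariate Gaussians (closed form)."""
    return (math.log(sd_p / sd_q)
            + (sd_q**2 + (mu_q - mu_p)**2) / (2 * sd_p**2)
            - 0.5)

prior = (0.0, 1.0)

# A posterior that barely moved away from the prior: little complexity.
small_update = kl_gaussian(0.1, 0.9, *prior)

# A posterior that moved far and sharpened: much more complexity was
# introduced to account for the new observations.
large_update = kl_gaussian(2.0, 0.5, *prior)
```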
This is not to suggest that one should believe FEP merely because it is in some sense ‘mathematical’ (though there is perhaps a sense in which mathematical proof should be believed). Rather the point is that, when investigating what reasons we have for believing FEP, we should look for mathematical (and conceptual) reasons, not empirical evidence.
Hamilton’s principle states that a mechanical system develops in time such that the integral of the difference between kinetic and potential energy is stationary. There is some debate about the epistemic status of Hamilton’s principle (see, e.g., Smart and Thébault 2015; Stöltzner 2009); in unpublished work the latter authors have argued that a Humean about laws can place Hamilton’s principle as the most fundamental law, making it essentially empirical rather than a priori. I think most standard descriptions are consistent with the reading that it is not a law of nature in the usual sense but a mathematical principle for understanding the dynamics of a physical system in terms of a variational problem, given information about the system and the forces acting on it. My appeal to Hamilton’s principle is not intended to establish complete parity between it and FEP; it may very well be that the former is not driven by conceptual analysis in the way I have argued FEP is. I appeal to the principle here to indicate that there is precedent in science for considering something a principle, which systems may or may not conform with, rather than a law. Another question for further discussion is whether Newton’s laws stand to Hamilton’s principle as process theories like predictive coding stand to FEP. Note finally that there is deep affinity between FEP and Hamilton’s principle, such that FEP applies to systems for which Hamilton’s principle holds; hence, if the latter is not a priori then arguably FEP would not be either; for discussion of fundamental physics and FEP, see Friston (2019).
Heuristically, if the underlying distribution is multimodal (i.e., non-normal, or non-Gaussian), then predictive coding can mischaracterise a given sample, which is close to one peak, as a large prediction error relative to another peak; for discussion, see the Hierarchical Gaussian Filter developed in Mathys et al. (2014). For discussion of how this relates to empirical research on particular systems, such as human brains, see Friston (2009). In terms of evidence, Friston remarks that “there is no electrophysiological or psychophysical evidence to suggest that the brain can encode multimodal approximations: indeed, with ambiguous figures, the fact that percepts are bistable (as opposed to bimodal and stable) suggests the recognition density is unimodal” (2009: p. 298).
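The heuristic point can be made numerically. The sketch below is my own toy construction (not the Hierarchical Gaussian Filter itself): a bimodal density whose two modes are equally typical, and a unimodal recognition density that has settled on one of them.

```python
import math

# Illustrative bimodal density: equal mixture of Gaussians at -2 and +2.
def mixture_pdf(x, sd=0.5):
    g = lambda m: math.exp(-(x - m)**2 / (2 * sd**2)) / (sd * math.sqrt(2 * math.pi))
    return 0.5 * g(-2.0) + 0.5 * g(2.0)

# A unimodal recognition density that has settled on the +2 mode.
mu_q, sd_q = 2.0, 0.5

# A sample sitting exactly on the other mode is entirely typical under
# the true bimodal density...
sample = -2.0
typical = abs(mixture_pdf(sample) - mixture_pdf(2.0)) < 1e-12  # True, by symmetry

# ...but registers as an eight-standard-deviation prediction error
# relative to the unimodal approximation.
prediction_error = abs(sample - mu_q) / sd_q  # 8.0
```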
Here the question of the scope of FEP is relevant. FEP is so general that it may apply to systems like single fat cells, which would not be sharing cognitive architecture with humans. The question what FEP implies for such systems then depends on the assumptions made for them. For FEP applied to really basic model systems, see Friston (2013). Interesting questions arise about the meaning of key notions, such as ‘inference’, ‘representation’, and ‘computation’ and how far and in what manner they might deviate from their literal senses, associated with symbolic representation etc. It seems likely that to get literal inference/computation/representation, we need to appeal to some subset of process theories and assumptions of particular systems of FEP, such as those arguably applying to humans. The thrust of my argument in this paper, to be unfolded in the next section, is that FEP entails approximation to Bayesian inference, and therefore a sense of normativity that seems relevant for what might be regarded basic notions of representation and misrepresentation (at least in the sense of genuine norm-violation). I think it is a substantial further discussion how far and in what way the notions ‘inference’ and ‘representation’ used here deviate from literal (e.g., symbolic) inference and representation. My view is that, in so far as FEP ensures normativity, the use of those notions is justified to all systems for which FEP applies, since we often cash out those notions precisely in terms of normativity. I do also think it is likely that FEP will eventually lead to a recalibration of what we mean by ‘inference’ and ‘representation’. However, the issues here are substantial and some aspects will need to be developed in subsequent research.
I am here using these terms in a fairly generic, philosophical sense. In the fields of cognitive science, machine learning, and statistical learning, there is substantial treatment of the issue of supervision, using somewhat different understandings of the notion of supervision. In machine learning approaches, there are many unsupervised algorithms and many things that organisms do that involve supervised learning (including supervision by nature). In philosophy, supervised learning raises foundational problems of normativity, essentially related to the learner’s grasp of the labels, which must be considered before supervised learning can truly be understood. I am here implying that self-supervised learning is, or should be, a (or perhaps the) quest of machine learning. Of course, valuable machine learning advances can come from devising robust supervised learning algorithms, but from a philosophical perspective, machine learning will only throw light on human intelligence (or approximate human intelligence) if it begins from a basis of self-supervision. This claim is based on the observation, versions of which stretch all the way back to Kant and beyond, that human intelligence must come about just by relying on sensory input and prior belief. See also footnote 1 for comments on my use of the notion of self-supervision.
Here and elsewhere in the paper, various typically personal-level terms are used (‘know’, ‘believe’, ‘evidence’ etc.). This is not to imply that these are personal level rather than subpersonal level processes or states. I am agnostic on how to draw that boundary and here simply use these terms more or less like they are used in the wider literature and in textbooks on machine learning and statistical learning. ‘Knowing’ is the appropriate notion to use here because the reasoning behind FEP leads to the idea that the model is inferred (and what a system infers it in some sense knows). There is a substantial, different debate to be had about the sense in which ‘approximate inference’ is ‘inference’, related to these issues, which is beyond the scope of this paper.
I am not here providing a foundational defence of Bayesianism as such; I am relying on the fact that FEP implies an approximation to the exact Bayesian posterior, which is a good candidate for being a paradigm of normativity. For discussion of Bayesian optimality, see Rahnev and Denison (2018).
There have been previous suggestions linking self-organisation and self-supervision. Ashby, for example, argued that systems can be both self-organised and also display determinate behaviour (Ashby 1954). The current proposal makes this link via the notion of self-evidencing inherent in FEP.
In this paper, the focus is on self-organising systems, which is what FEP is formulated for. Such systems can act to maintain themselves in their expected states. There are some very substantial questions about where to put the boundary between self-organised systems in this sense and any other system in the broadest sense (e.g., mechanical systems, or any system indeed that physics can describe). In some iterations, the notion of free energy minimization is so general that it literally applies to every thing (Friston 2019); (see also fn 11). In other words, something is needed to distinguish mere causal mechanisms from self-organising biological organisms (and to distinguish between things and non-things). One distinction is between things that can model expected free energy and infer policies on this basis in order to engage in active self-evidencing, and things that cannot. Mechanical systems cannot do this, if their action repertoire is pre-set (by a designer who has performed the active inference for the mechanism, for example; for self-supervised artificial intelligence mechanisms, the discussion veers into the substance of this paper). Hence, the question how the system can minimize surprise, if it can’t know its model a priori, pertains to self-organising systems conceived in this way. Further research is needed to fully discuss the question what if anything FEP and kindred approaches imply about non-self-organising systems. In particular, there is the question what, if anything, the notions of perceptual and active inference come to in these kinds of cases where it is less clear that they apply; for example, a projectile is described by Hamilton’s principle but does not, in any sense of the word, “compute” its stationary points of action.
These comments may need some clarification, to set them in the context of computational neuroscience and machine learning approaches. The argument here is not that, a priori, all ‘bottom-up’ approaches fall short of converging on Bayes’ rule; and there are bottom-up learning methods (from single-layer perceptrons onwards) for which there are convergence proofs. Rather, the argument is that some of these bottom-up methods rest on supervised learning (e.g., perceptrons), which raises the problem of normativity in focus here, suggesting that they do not conform to FEP. If the bottom-up method does in fact conform to FEP, then it can potentially form a process theory (subject to assumptions and anatomical plausibility) for which the problem of normativity will not arise, and thereby provide a good starting point for the quest for truly self-supervised learning. In this light, convergence does not suffice for normativity because the underlying process should also be self-supervised in a manner that does not evoke the problem of normativity; conformity to FEP demonstrates that both constraints are satisfied. When it comes to the ‘top-down’ approaches more commonly endorsed by FEP, the claim is that they are normative in the sense of converging on Bayes; a good place to explore the mathematical grounds for this claim is in discussion of the Hierarchical Gaussian Filter (Mathys et al. 2011) that sets precision-weighted prediction error minimization in the context of approximate inference and reinforcement learning (with a dynamic learning rate).
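The idea of precision-weighted prediction error minimisation with a dynamic learning rate can be sketched in miniature. This is in the spirit of, but much simpler than, the Hierarchical Gaussian Filter; the function name and all numbers are my own illustrative choices:

```python
# Conjugate Gaussian belief updating: the learning rate is the ratio of
# sensory precision to total posterior precision, so it falls as
# precision (confidence) accumulates.
def update_belief(mu, pi_prior, sample, pi_sensory):
    pi_post = pi_prior + pi_sensory
    learning_rate = pi_sensory / pi_post
    prediction_error = sample - mu
    return mu + learning_rate * prediction_error, pi_post

mu, pi = 0.0, 1.0  # prior mean and precision
for s in [1.0, 1.2, 0.8, 1.1]:
    mu, pi = update_belief(mu, pi, s, pi_sensory=4.0)

# Early samples move the belief a lot; later ones progressively less,
# because the precision-weighting dynamically shrinks the learning rate.
```

Note that this sequential update reproduces exactly the batch Bayesian posterior for the same data, which is one sense in which precision-weighted prediction error minimisation approximates inference.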
The time scale over which free energy is assessed is important. Here I simply assume the time scale appropriate for the organism in question, but there is a substantial further question to address here. The critical point is that approximation is not instantaneous.
An interesting question is whether this approach will outlaw “weird” concepts like, indeed, a regular concept dog-or-sheep, which itself has satisfaction conditions. FEP could allow such concepts if they had conceivable free energy minimization properties. If not, and if such concepts are regarded as bona fide concepts nevertheless, then FEP would only be a partial solution. The example given here is semantic but the account extends to action, that is, the problem of inferring a specific policy, for example, for avoiding an environment that is too cold. Active inference sets out how expected free energy is minimised, given an internal model of the environment’s states (including the agent’s own states). That is, a precise policy for action (conceived as a series of control states) is inferred, which is expected to maintain the system in its expected (not too cold) states (e.g., putting on a coat). The system can rank and execute specific policies but only given an internal model governed by FEP, which cashes out normativity by allowing a KL-divergence to be minimised between states achievable given a policy and states the system expects to occupy. Policies in active inference (for expected surprise) are then like priors in perceptual inference (for actual surprise). A policy is inferred by assessing the surprise expected under different policies. On the basis of the model’s inferred policy, an expectation of sensory input is generated, which is minimized through action. As such, policies are part of the generative model and help specify the expected states of the system. The cost function itself is absorbed into the priors, and policies can also be updated and there can be model selection (based on complexity considerations/Bayesian model evidence). 
In this sense, just as a system’s internal model can have a fine-grained set of priors that describe its beliefs about the world, it can have a fine-grained set of policies that describe beliefs about how it acts in the world. Active inference thus furnishes an answer to challenges about decision-making and inference of specific policies, such as (Klein 2016).
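The ranking of policies described above can be sketched in a minimal discrete setup. The states, candidate policies, and probabilities below are my own illustrative assumptions (not the paper's full active-inference formalism): each policy induces a predicted distribution over future states, and the policy minimising the KL divergence to the states the system expects to occupy is selected.

```python
import math

def kl(q, p):
    """KL divergence between two discrete distributions over the same states."""
    return sum(q[s] * math.log(q[s] / p[s]) for s in q if q[s] > 0)

# The system expects to occupy 'warm' states almost always.
preferred = {"warm": 0.95, "cold": 0.05}

# Predicted state distributions under two candidate policies.
policies = {
    "put_on_coat": {"warm": 0.90, "cold": 0.10},
    "stay_as_is":  {"warm": 0.30, "cold": 0.70},
}

# Rank policies by divergence between achievable and expected states;
# the best policy minimises that divergence.
ranked = sorted(policies, key=lambda name: kl(policies[name], preferred))
best = ranked[0]  # "put_on_coat"
```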
There is interesting discussion from the dynamical systems perspective, but kindred to FEP topics, in Bickhard (2009).
An interesting question arises here about artificial systems running active inference algorithms but which are not easily seen as self-organised or autonomous systems: do they display normativity? A full answer is beyond the scope of this paper partly because it touches on issues around emulation, which may undermine true self-supervision. It may be that some artificial systems can be considered truly normative, in the FEP sense, and therefore self-organising.
Notice that here care is taken not to imply that FEP itself implies cognitive architecture. Notions of architecture will need to build on assumptions about the particular system in question, which will constrain processes for message passing structure. It is a topic for further discussion how assumptions play this role, and what assumptions may look like in various non-human systems. This invokes a larger issue in philosophy of science concerning the ways in which principles constrain process theories (or laws). For FEP, the starting point for this issue is the idea of the addition of assumptions about particular systems to the principle. However, further discussion is needed of what this exactly means: is the relation something on a spectrum between derivation (given assumptions) and more informal notions of coherence, for example? There is a significant body of literature on these questions, which is still to make contact with the specific status of principles versus process theories (see, e.g., Craver 2005; Zednik and Jäkel 2016).
There are many proposals for unsupervised learning, which are not designed to conform to FEP (e.g., Zheng et al. 2018). The claim is not that such approaches cannot deliver what they promise. It may be that they are in fact conforming to FEP, or they may establish normativity in their own right (perhaps conforming to some other principle). Here, the focus is on FEP and the claim is that it has a philosophically appealing approach to normativity in terms of self-supervision (as that term is used here).
Adams, F., & Aizawa, K. (2017). Causal theories of mental content. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (summer 2017 edition). https://plato.stanford.edu/archives/sum2017/entries/content-causal/.
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723.
Allen, M. (2018). The foundation: Mechanism, prediction, and falsification in Bayesian enactivism: Comment on “Answering Schrödinger’s question: A free-energy formulation” by Maxwell James Désormeau Ramstead et al. Physics of Life Reviews, 24, 17–20. https://doi.org/10.1016/j.plrev.2018.01.007.
Allen, M., & Friston, K. J. (2016). From cognitivism to autopoiesis: Towards a computational framework for the embodied mind. Synthese, 195, 2459–2482. https://doi.org/10.1007/s11229-016-1288-5.
Ashby, W. R. (1947). Principles of the self-organizing dynamic system. The Journal of General Psychology, 37(2), 125–128. https://doi.org/10.1080/00221309.1947.9918144.
Ashby, W. R. (1954). Design for a brain. New York: Wiley.
Bar, M. (2011). Predictions in the brain: Using our past to generate a future. Oxford: Oxford University Press.
Barlow, H. B. (1989). Unsupervised learning. Neural Computation, 1(3), 295–311. https://doi.org/10.1162/neco.1989.1.3.295.
Bickhard, M. H. (2009). The biological foundations of cognitive science. New Ideas in Psychology, 27(1), 75–84. https://doi.org/10.1016/j.newideapsych.2008.04.001.
Bickhard, M. H. (2016). The anticipatory brain: Two approaches. In V. C. Müller (Ed.), Fundamental issues of artificial intelligence (pp. 261–283). Cham: Springer International Publishing.
Bishop, C. M. (2007). Pattern recognition and machine learning. New York: Springer.
Block, N. (2015). The puzzle of perceptual precision. In T. Metzinger & J. M. Windt (Eds.), Open MIND. Frankfurt am Main: MIND Group.
Bogacz, R. (2017). A tutorial on the free-energy framework for modelling perception and learning. Journal of Mathematical Psychology, 76(Part B), 198–211. https://doi.org/10.1016/j.jmp.2015.11.003.
Brown, L. D. (1981). A complete class theorem for statistical problems with finite sample spaces. The Annals of Statistics, 9(6), 1289–1300.
Buckley, C. L., Kim, C. S., McGregor, S., & Seth, A. K. (2017). The free energy principle for action and perception: A mathematical review. Journal of Mathematical Psychology, 81, 55–79. https://doi.org/10.1016/j.jmp.2017.09.004.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
Clark, A. (2016). Surfing uncertainty: Prediction, action, and the embodied mind. New York: Oxford University Press.
Colombo, M., & Wright, C. (2018). First principles in the life sciences: The free-energy principle, organicism, and mechanism. Synthese. https://doi.org/10.1007/s11229-018-01932-w.
Constant, A., Ramstead, M. J. D., Veissière, S. P. L., Campbell, J. O., & Friston, K. J. (2018). A variational approach to niche construction. Journal of the Royal Society Interface, 15, 141. https://doi.org/10.1098/rsif.2017.0685.
Craver, C. F. (2005). Beyond reduction: Mechanisms, multifield integration and the unity of neuroscience. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 373.
Friston, K. (2003). Learning and inference in the brain. Neural Networks, 16(9), 1325–1352.
Friston, K. (2009). The free-energy principle: A rough guide to the brain? Trends in Cognitive Sciences, 13(7), 293–301.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews: Neuroscience, 11(2), 127–138.
Friston, K. (2011). What is optimal about motor control? Neuron, 72(3), 488–498. https://doi.org/10.1016/j.neuron.2011.10.018.
Friston, K. (2013). Life as we know it. Journal of the Royal Society Interface. https://doi.org/10.1098/rsif.2013.0475.
Friston, K. (2018). Does predictive coding have a future? Nature Neuroscience. https://doi.org/10.1038/s41593-018-0200-7.
Friston, K. (2019). A free energy principle for a particular physics. arXiv preprint arXiv:1906.10184.
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active inference: A process theory. Neural Computation, 29(1), 1–49.
Friston, K., Rigoli, F., Ognibene, D., Mathys, C., Fitzgerald, T., & Pezzulo, G. (2015). Active inference and epistemic value. Cognitive Neuroscience, 6(4), 187–214. https://doi.org/10.1080/17588928.2015.1020053.
Friston, K., & Stephan, K. (2007). Free energy and the brain. Synthese, 159(3), 417–458.
Glüer, K., & Wikforss, Å. (2018). The normativity of meaning and content. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2018 Edition ed.). https://plato.stanford.edu/archives/spr2018/entries/meaning-normativity/.
Gregory, R. L. (1968). Perceptual illusions and brain models. Proceedings of the Royal Society of London, Series B: Biological Sciences, 171, 179–196.
Gregory, R. L. (1980). Perceptions as hypotheses. Philosophical Transactions of the Royal Society B, 290, 181–197.
Heeger, D. J. (2017). Theory of cortical function. Proceedings of the National Academy of Sciences, 114(8), 1773–1782. https://doi.org/10.1073/pnas.1619788114.
Helmholtz, H. V. (1867). Handbuch der Physiologischen Optik. Leipzig: Leopold Voss.
Hohwy, J. (2013). The predictive mind. Oxford: Oxford University Press.
Hohwy, J. (2015). The neural organ explains the mind. In T. K. Metzinger & J. M. Windt (Eds.), Open MIND. Frankfurt am Main: MIND Group.
Hohwy, J. (2016). The self-evidencing brain. Noûs, 50(2), 259–285. https://doi.org/10.1111/nous.12062.
Jackson, F. (1998). From metaphysics to ethics. Oxford: Oxford University Press.
Kant, I. (1787). Kritik der reinen Vernunft. In Königlichen Preußischen Akademie der Wissenschaften (Ed.), 1900–, Kants gesammelte Schriften. Berlin: Georg Reimer.
Kauffman, S. (2019). A world beyond physics: the emergence and evolution of life. New York: Oxford University Press.
Kiefer, A., & Hohwy, J. (2018). Content and misrepresentation in hierarchical generative models. Synthese, 195(6), 2387–2415. https://doi.org/10.1007/s11229-017-1435-7.
Kirchhoff, M., Parr, T., Palacios, E., Friston, K., & Kiverstein, J. (2018). The Markov blankets of life: Autonomy, active inference and the free energy principle. Journal of the Royal Society Interface, 15, 138. https://doi.org/10.1098/rsif.2017.0792.
Klein, C. (2016). What do predictive coders want? Synthese, 195(6), 2541–2557. https://doi.org/10.1007/s11229-016-1250-6.
Knill, D. C., & Pouget, A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 712–719.
Kripke, S. (1982). Wittgenstein on rules and private language. Oxford: Oxford University Press.
Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. The Behavioral and Brain Sciences, 8, 529–566.
Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness potential): The unconscious initiation of a freely voluntary act. Brain, 106, 623–642.
MacKay, D. M. C. (1956). The epistemological problem for automata. In C. Shannon & J. McCarthy (Eds.), Automata studies (pp. 235–251). Princeton, NJ: Princeton University Press.
Mathys, C., Daunizeau, J., Friston, K., & Stephan, K. (2011). A Bayesian foundation for individual learning under uncertainty. Frontiers in Human Neuroscience. https://doi.org/10.3389/fnhum.2011.00039.
Mathys, C. D., Lomakina, E. I., Daunizeau, J., Iglesias, S., Brodersen, K. H., Friston, K. J., et al. (2014). Uncertainty in perception and the Hierarchical Gaussian Filter. Frontiers in Human Neuroscience. https://doi.org/10.3389/fnhum.2014.00825.
Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.
Nicolis, G., & Prigogine, I. (1977). Self-organization in non-equilibrium systems. New York: Wiley.
Parr, T., Markovic, D., Kiebel, S. J., & Friston, K. J. (2019). Neuronal message passing using Mean-field, Bethe, and Marginal approximations. Scientific Reports, 9(1), 1889. https://doi.org/10.1038/s41598-018-38246-3.
Piekarski, M. (2019). Normativity of predictions: A new research perspective. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2019.01710.
Prigogine, I., & Nicolis, G. (1971). Biological order, structure and instabilities. Quarterly Reviews of Biophysics, 4(2–3), 107–148. https://doi.org/10.1017/S0033583500000615.
Rahnev, D., & Denison, R. N. (2018). Behavior is sensible but not globally optimal: Seeking common ground in the optimality debate. Behavioral and Brain Sciences, 41, e251. https://doi.org/10.1017/S0140525X18002121.
Schrödinger, E. (1944). What is life?. Cambridge: Cambridge University Press.
Schwöbel, S., Kiebel, S., & Marković, D. (2018). Active inference, belief propagation, and the Bethe approximation. Neural Computation. https://doi.org/10.1162/neco_a_01108.
Sims, A. (2016). A problem of scope for the free energy principle as a theory of cognition. Philosophical Psychology, 29(7), 967–980. https://doi.org/10.1080/09515089.2016.1200024.
Smart, B. T. H., & Thébault, K. P. Y. (2015). Dispositions and the principle of least action revisited. Analysis, 75(3), 386–395. https://doi.org/10.1093/analys/anv050.
Spratling, M. W. (2017). A review of predictive coding algorithms. Brain and Cognition, 112, 92–97. https://doi.org/10.1016/j.bandc.2015.11.003.
Stefanics, G., Heinzle, J., Attila Horváth, A., & Enno Stephan, K. (2018). Visual mismatch and predictive coding: A computational single-trial ERP study. The Journal of Neuroscience, 38, 4020–4030. https://doi.org/10.1523/jneurosci.3365-17.2018.
Stöltzner, M. (2009). Can the principle of least action be considered a relativized a priori? In M. Bitbol, P. Kerszberg, & J. Petitot (Eds.), Constituting objectivity: Transcendental perspectives on modern physics (pp. 215–227). Dordrecht: Springer.
Vapnik, V. N. (1995). The nature of statistical learning theory. Dordrecht: Springer.
Varela, F. G., Maturana, H. R., & Uribe, R. (1974). Autopoiesis: The organization of living systems, its characterization and a model. Biosystems, 5(4), 187–196. https://doi.org/10.1016/0303-2647(74)90031-8.
Von Bertalanffy, L. (1950). The theory of open systems in physics and biology. Science, 111(2872), 23–29.
Wittgenstein, L. (1953). Philosophical investigations. Oxford: Basil Blackwell.
Yuille, A., & Kersten, D. (2006). Vision as Bayesian inference: Analysis by synthesis? Trends in Cognitive Sciences, 10(7), 301–308.
Zednik, C., & Jäkel, F. (2016). Bayesian reverse-engineering considered as a research strategy for cognitive science. Synthese, 193(12), 3951–3985. https://doi.org/10.1007/s11229-016-1180-3.
Zheng, D., Luo, V., Wu, J., & Tenenbaum, J. (2018). Unsupervised learning of latent physical properties using perception-prediction networks. arXiv preprint arXiv:1807.09244.
Thank you to Stephen Gadsby, Karl Friston and Thomas Parr for comments and suggestions on earlier versions; and to anonymous reviewers for several suggestions. Thank you for comments and suggestions to participants in workshops at Macquarie University and University of Cambridge, including Dan Williams who organised the latter, where some of this material was presented. Thank you to the members of the Cognition and Philosophy Lab for comments. This research is supported by the Australian Research Council DP190101805.
Hohwy, J. Self-supervision, normativity and the free energy principle. Synthese 199, 29–53 (2021). https://doi.org/10.1007/s11229-020-02622-2