
Habitual and Reflective Control in Hierarchical Predictive Coding

  • Conference paper
  • In: Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021)


In cognitive science, behaviour is often separated into two types. Reflexive control is habitual and immediate, whereas reflective control is deliberative and time-consuming. We examine the argument that Hierarchical Predictive Coding (HPC) can explain both types of behaviour as a continuum operating across a multi-layered network, removing the need for separate circuits in the brain. On this view, “fast” actions may be triggered using only the lower layers of the HPC schema, whereas more deliberative actions require the higher layers. We demonstrate that HPC can distribute learning throughout its hierarchy, with higher layers called into use only as required.
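The continuum view can be caricatured in a few lines of code. The sketch below is purely illustrative and not the paper's mechanism: the two-stage split, the thresholded escalation rule, and all weights and names are our own assumptions. A response is first computed by a lower "habitual" stage alone, and a higher "reflective" stage is recruited only when the lower stage's residual prediction error remains large.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative only: a lower "habitual" stage always responds immediately;
# a higher "reflective" stage is consulted only when the lower stage's
# prediction error stays large. Weights and threshold are arbitrary.
W_low = rng.normal(0.0, 0.3, (4, 8))   # fast observation -> action mapping
W_high = rng.normal(0.0, 0.3, (4, 4))  # slower corrective mapping

def respond(obs, threshold=1.0):
    action = W_low @ obs                # habitual response, always computed
    # Crude reconstruction error: how much of the observation the lower
    # stage failed to account for (component outside W_low's row space).
    residual = np.linalg.norm(obs - np.linalg.pinv(W_low) @ action)
    if residual > threshold:            # error unresolved: recruit higher stage
        action = action + W_high @ action
        deliberated = True
    else:
        deliberated = False
    return action, deliberated
```

A zero observation is perfectly "explained" by the lower stage and never triggers deliberation; a rich observation generally does.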



  Footnote 1.

    At approximately 78%, the accuracy we achieved is substantially lower than that of standard non-PCN deep learning methods. This is partly because the model has not been fine-tuned (e.g. hyper-parameters, convolutional layers). It is also true that generative models tend to underperform discriminative models on classification tasks, particularly in our implementation, which uses flat priors.




Acknowledgements

PK would like to thank Alec Tschantz for sharing the “Predictive Coding in Python” codebase on which the experimental code was based. Thanks also to three anonymous reviewers whose comments helped improve the clarity of this paper, particularly in relation to temporal aspects of predictive coding. PK is funded by the Sussex Neuroscience 4-year PhD Programme. CLB is supported by BBSRC grant number BB/P022197/1.

Author information

Correspondence to Paul F. Kinghorn.

A Network parameters

Network size: 4 layers

Number of nodes on each layer: 10, 100, 300, 785 for MNIST-group and MNIST-digit; 20, 100, 300, 785 for MNIST-barred. In the bottom layer, 784 nodes were fixed to the MNIST image, and the 785th node was an action node which updates during testing. In the initial set of experiments, the top layer was fixed during training to a one-hot representation of the MNIST label; in the second set of experiments, it was set to a random value and allowed to update.

Non-linear function: tanh

Bias used: yes

Training set size: full MNIST training set of 60,000 images, in batches of 640

Number of training epochs: 10

Testing set size: 1280 images selected randomly from MNIST test set

Learning parameters used in weight update of EM process: Learning Rate = 1e-4, Adam

Learning parameters used in node update of EM process: Learning Rate = 0.025, SGD

Number of SGD iterations in training: 200

Number of SGD iterations in test mode: 200 × epoch number. The iteration count is increased as epochs progress to allow for the decreasing size of the error between layers (as discussed in the text, this would normally be counteracted by an increase in precision values).

Random initialisation: Except where fixed, all nodes were initialised with random values drawn from \(\mathcal {N}(0.5, 0.05)\)

In the experiment using a 7-layer network, the number of nodes on each layer was: 10, 25, 50, 100, 200, 300, 794. All other parameters were the same as above.
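The EM-style procedure these parameters describe can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the layer sizes, tanh non-linearity, node initialisation and learning rates are taken from the list above, but the weight update is shown as plain gradient descent where the paper used Adam, precision terms are omitted, and all function names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes, top (label layer) to bottom (784 image nodes + 1 action node),
# matching the MNIST-group / MNIST-digit configuration above.
sizes = [10, 100, 300, 785]
L = len(sizes)

# W[i] and b[i] generate the prediction of layer i+1 from layer i.
W = [rng.normal(0.0, 0.05, (sizes[i + 1], sizes[i])) for i in range(L - 1)]
b = [np.zeros((sizes[i + 1], 1)) for i in range(L - 1)]

def f(x):
    return np.tanh(x)            # non-linearity used in the experiments

def df(x):
    return 1.0 - np.tanh(x) ** 2

def errors(x):
    """Prediction error at each layer below the top."""
    return [x[i + 1] - (W[i] @ f(x[i]) + b[i]) for i in range(L - 1)]

def energy(x):
    """Sum of squared prediction errors (both update steps descend this)."""
    return sum(float(np.sum(e ** 2)) for e in errors(x)) / 2.0

def init_state(x_top, x_bottom):
    """Clamp top and bottom layers; hidden nodes start at N(0.5, 0.05)."""
    hidden = [rng.normal(0.5, 0.05, (s, 1)) for s in sizes[1:-1]]
    return [x_top] + hidden + [x_bottom]

def infer(x, n_iters=200, lr=0.025):
    """Node update of the EM process: gradient descent on hidden activities
    (200 SGD iterations with learning rate 0.025, as listed above)."""
    for _ in range(n_iters):
        eps = errors(x)
        for i in range(1, L - 1):  # clamped layers are not updated
            x[i] -= lr * (eps[i - 1] - df(x[i]) * (W[i].T @ eps[i]))
    return x

def learn(x, lr=1e-4):
    """Weight update of the EM process, shown as plain gradient descent
    (the paper used Adam with learning rate 1e-4)."""
    eps = errors(x)
    for i in range(L - 1):
        W[i] += lr * eps[i] @ f(x[i]).T
        b[i] += lr * eps[i]
```

In training mode both the top (one-hot label) and bottom (image plus action node) layers would be clamped before alternating `infer` and `learn` over batches; in test mode the unclamped nodes are left free to settle.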


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Kinghorn, P.F., Millidge, B., Buckley, C.L. (2021). Habitual and Reflective Control in Hierarchical Predictive Coding. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93735-5

  • Online ISBN: 978-3-030-93736-2

  • eBook Packages: Computer Science (R0)
