Biological Cybernetics, Volume 102, Issue 3, pp 227–260

Action and behavior: a free-energy formulation

  • Karl J. Friston
  • Jean Daunizeau
  • James Kilner
  • Stefan J. Kiebel
Open Access
Original Paper

DOI: 10.1007/s00422-010-0364-z

Cite this article as:
Friston, K.J., Daunizeau, J., Kilner, J. et al. Biol Cybern (2010) 102: 227. doi:10.1007/s00422-010-0364-z


We have previously tried to explain perceptual inference and learning under a free-energy principle that pursues Helmholtz’s agenda to understand the brain in terms of energy minimization. It is fairly easy to show that making inferences about the causes of sensory data can be cast as the minimization of a free-energy bound on the likelihood of sensory inputs, given an internal model of how they were caused. In this article, we consider what would happen if the data themselves were sampled to minimize this bound. It transpires that the ensuing active sampling or inference is mandated by ergodic arguments based on the very existence of adaptive agents. Furthermore, it accounts for many aspects of motor behavior, from retinal stabilization to goal-seeking. In particular, it suggests that motor control can be understood as fulfilling prior expectations about proprioceptive sensations. This formulation can explain why adaptive behavior emerges in biological agents and suggests a simple alternative to optimal control theory. We illustrate these points using simulations of oculomotor control and then apply the same principles to cued and goal-directed movements. In short, the free-energy formulation may provide an alternative perspective on motor control that places it in an intimate relationship with perception.


Keywords: Computational · Motor · Control · Bayesian · Hierarchical · Priors

List of symbols

\({{\bf \Psi}\supseteq \{ \tilde{\bf{x}},{\tilde {\bf v}},{\boldsymbol \theta},{\boldsymbol \gamma}\}}, {\Psi \supseteq \{\tilde {x},\tilde {v},\theta ,\gamma \}}\)

Unknown causes of sensory input; variables in bold denote true values and those in italics denote variables assumed by the agent or model

\({\tilde{x}(t) = [x, {x}', {x}'',\ldots]^T, \dot{\tilde{x}}(t) = f(\tilde{x}, \tilde{v}, \theta)+\tilde{w}}\)

Generalised hidden-states that act on an agent. These are time-varying quantities that include all high-order temporal derivatives; they represent a point in generalised coordinates of motion that encodes a path or trajectory
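A generalised state packs a value and its successive temporal derivatives into one vector, so a single point encodes a local trajectory. A minimal NumPy sketch (the embedding order and example values are assumptions for illustration) shows the derivative operator that shifts these coordinates, as used later in the prediction error \(D\tilde{\mu}_x - f(\mu)\):

```python
import numpy as np

# A point in generalised coordinates packs a value and its temporal
# derivatives: x_tilde = [x, x', x'', ...]^T. Truncating at order n,
# the derivative operator D simply shifts the entries up by one slot.
n = 4                              # embedding order (assumed for illustration)
D = np.diag(np.ones(n - 1), k=1)   # D @ [x, x', x'', x'''] = [x', x'', x''', 0]

# Example: generalised coordinates of x(t) = sin(t) evaluated at t = 0
x_tilde = np.array([0.0, 1.0, 0.0, -1.0])  # sin, cos, -sin, -cos at t = 0
print(D @ x_tilde)                          # the motion encoded by the point
```

Because `D` is nilpotent at finite order, the highest derivative is mapped to zero; in practice the embedding order is chosen high enough that this truncation is negligible.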

\({\tilde{v}(t) = [v, {v}', {v}'',\ldots]^T}\)

Generalised forces or causal states that act on hidden states

\({\tilde{s}(t) = g(\tilde{x}, \tilde{v}, \theta) + \tilde{z}}\)

Generalised sensory states caused by hidden states

\({\theta \supseteq \{\theta_1 ,\theta_2, \ldots\}}\)

Parameters of the equations of motion and sensory mapping

\({\gamma \supseteq \{\gamma^s, \gamma^x, \gamma^v\}}\)

Parameters of the precision of random fluctuations \({\Pi(\gamma^i) : i \in s, x, v}\)

\({\tilde{w}(t) = [w, {w}', {w}'', \ldots]^T}\)

Generalised random fluctuations of the motion of hidden states

\({\tilde{z}(t) = [z, {z}', {z}'', \ldots]^T}\)

Generalised random fluctuations of sensory states

\({\tilde{n}(t) = [n, {n}', {n}'', \ldots]^T}\)

Generalised random fluctuations of causal states

\({\Pi^i := \Pi(\gamma^i) = \Sigma(\gamma^i)^{-1}}\)

Precisions or inverse covariances of generalised random fluctuations
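The precisions \(\Pi^i\) are simply inverse covariance matrices. A short sketch (the log-precision parameterisation \(\Pi = e^{\gamma} I\) is one common choice, assumed here for illustration rather than taken from the paper):

```python
import numpy as np

# Precision = inverse covariance: Pi(gamma) = Sigma(gamma)^{-1}.
# Parameterising the precision by a log-precision gamma keeps it
# positive definite for any real gamma (an illustrative assumption).
gamma = 2.0
Pi = np.exp(gamma) * np.eye(3)     # precision of the random fluctuations
Sigma = np.linalg.inv(Pi)          # the corresponding covariance
print(np.allclose(Pi @ Sigma, np.eye(3)))
```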

\({{\bf g}(\tilde{\bf{x}}, {\tilde{\bf v}}, {\boldsymbol \theta})}\) , \({{\bf f}(\tilde{\bf{x}}, {\tilde{\bf v}}, \tilde{a}, {\boldsymbol \theta})}\)

Sensory mapping and equations of motion generating sensory states

\({g(\tilde{x}, \tilde{v}, \theta)}\) , \({f(\tilde{x}, \tilde{v}, \theta)}\)

Sensory mapping and equations of motion modeling sensory states


Policy: a scalar function of generalised sensory and internal states

\({p(\tilde{\bf{x}}|m)}\) , \({p(\tilde{s}|m)}\)

Ensemble densities; the density of the hidden and sensory states of agents at equilibrium with their environment.

\({D(q\vert \vert p) = \left\langle{\ln (q/p)}\right\rangle_q}\)

Kullback-Leibler divergence or cross-entropy between two densities
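For two discrete densities the divergence \(D(q||p) = \langle\ln(q/p)\rangle_q\) can be computed directly; the toy densities below are assumptions for illustration:

```python
import numpy as np

# D(q||p) = <ln(q/p)>_q for two discrete densities on three states.
q = np.array([0.5, 0.3, 0.2])
p = np.array([0.4, 0.4, 0.2])
kl = np.sum(q * np.log(q / p))
print(kl)            # non-negative, and zero only when q == p
```

Non-negativity of this quantity (Gibbs' inequality) is what makes the free energy an upper bound on surprise.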

\({\langle \cdot \rangle_q }\)

Expectation or mean under the density q


\({m}\)

Model or agent; entailing the form of a generative model

\({H(X) = -\left\langle {\ln p(\tilde{\bf{x}}\vert m)}\right\rangle_p}\) , \({H(S) = -\left\langle {\ln p(\tilde {s}\vert m)}\right\rangle_p}\)

Entropy of generalised hidden and sensory states

\({-\ln p(\tilde{s}\vert m)}\)

Surprise or self-information of generalised sensory states

\({F(\tilde{s},\mu ) \ge -\ln p(\tilde{s}\vert m)}\)

Free-energy bound on surprise
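The bound \(F(\tilde{s},\mu) \ge -\ln p(\tilde{s}\vert m)\) holds because free energy equals surprise plus the divergence between the recognition density and the true posterior. A toy discrete model (all numbers are assumptions for illustration) makes the bound and its tightness explicit:

```python
import numpy as np

# Toy model: one binary hidden cause psi, one observed datum s = 1.
# F = <ln q(psi) - ln p(s, psi)>_q bounds the surprise -ln p(s|m),
# with equality when q equals the true posterior p(psi|s).
p_psi = np.array([0.7, 0.3])          # prior p(psi)
p_s   = np.array([0.9, 0.2])          # likelihood p(s=1|psi)
joint = p_psi * p_s                   # p(s=1, psi)
surprise = -np.log(joint.sum())       # -ln p(s=1|m)

q = np.array([0.5, 0.5])              # an arbitrary recognition density
F = np.sum(q * (np.log(q) - np.log(joint)))
print(F >= surprise)                  # free energy upper-bounds surprise

posterior = joint / joint.sum()       # the exact posterior p(psi|s=1)
F_opt = np.sum(posterior * (np.log(posterior) - np.log(joint)))
print(np.isclose(F_opt, surprise))    # the bound is tight at the posterior
```

Minimising F with respect to q therefore drives the recognition density toward the posterior, and minimising it with respect to the data (action) reduces surprise itself.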


\({q(\Psi|\mu)}\)

Recognition density on causes Ψ with sufficient statistics μ

\({\mu =\{\tilde {\mu}(t),\mu _\theta ,\mu _\gamma \}, \tilde {\mu}=\{\tilde {\mu}_x ,\tilde {\mu}_v \}}\)

Conditional or posterior expectation of the causes Ψ; these are the sufficient statistics of the Gaussian recognition density

\({\tilde{\eta}(t) = [\eta, {\eta}', {\eta}'', \ldots ]^T}\)

Prior expectation of generalised causal states

\({\xi_i = \Pi^i\tilde{\varepsilon}_i : i \in s, x, v}\)

Precision-weighted generalised prediction errors

\({{\tilde {\varepsilon}} = \left[ \begin{array}{l} \tilde {\varepsilon}_s =\tilde {s}-g(\mu)\\ \tilde {\varepsilon}_x =D\tilde {\mu}_x -f(\mu)\\ \tilde{\varepsilon}_v =\tilde {\mu}_v -\tilde {\eta}\end{array} \right]}\)

Generalised prediction errors on sensory states, the motion of hidden states, and forces or causal states
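The three prediction errors can be evaluated together once the sensory mapping g, the flow f, and the derivative operator D are fixed. A sketch for a toy linear model (g(x) = x, f(x) = −x, and all numerical values are assumptions for illustration):

```python
import numpy as np

# Generalised prediction errors for a toy linear model in generalised
# coordinates of motion (embedding order 3, assumed for illustration).
n = 3
D = np.diag(np.ones(n - 1), k=1)      # derivative (shift) operator

mu_x = np.array([1.0, -1.0, 1.0])     # conditional expectation of hidden states
mu_v = np.array([0.5, 0.0, 0.0])      # conditional expectation of causal states
eta  = np.array([0.0, 0.0, 0.0])      # prior expectation of causal states
s    = np.array([1.2, -0.9, 0.8])     # generalised sensory samples

eps_s = s - mu_x                      # sensory error:    s - g(mu), with g(x) = x
eps_x = D @ mu_x - (-mu_x)            # state error:      D mu_x - f(mu), f(x) = -x
eps_v = mu_v - eta                    # causal error:     mu_v - eta
print(eps_s, eps_x, eps_v)
```

Weighting each error by its precision then gives the \(\xi_i = \Pi^i\tilde{\varepsilon}_i\) that drive recognition dynamics.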

Copyright information

© The Author(s) 2010

Authors and Affiliations

  • Karl J. Friston
    • 1
  • Jean Daunizeau
    • 1
  • James Kilner
    • 1
  • Stefan J. Kiebel
    • 1
  1. The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK