Mathematical Appendix
The Data Rate Theorem
The DRT, a generalization of the classic Bode Integral Theorem for linear control systems, describes the stability of feedback control under data rate constraints (Nair et al. 2007). Given a noise-free data link between a discrete linear plant and its controller, unstable modes can be stabilized only if the feedback data rate \(\mathcal {I}\) is greater than the rate of ‘topological information’ generated by the unstable system. For the simplest incarnation, if the linear matrix equation of the plant is of the form \(x_{t+1}=\mathbf {A}x_{t}+ \cdots \), where \(x_{t}\) is the n-dimensional state vector at time \(t\), then the necessary condition for stabilizability is that
$$\begin{aligned} \mathcal {I} > \log \left[ \, \lvert \det \mathbf {A}^{u} \rvert \, \right] \end{aligned}$$
(30)
where \(\det \) is the determinant and \(\mathbf {A}^{u}\) is the decoupled unstable component of \(\mathbf {A}\), i.e., the part whose eigenvalues have magnitude \(\ge 1\). The determinant represents a generalized volume. Thus there is a critical positive data rate below which no quantization and control scheme can stabilize an unstable system (Nair et al. 2007).
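The bound of Eq. (30) can be sketched numerically: since \(|\det \mathbf {A}^{u}|\) is the product of the magnitudes of the unstable eigenvalues, the minimal feedback rate is the sum of their logarithms. The matrix below is an illustrative example, not one from the source.

```python
import numpy as np

def min_data_rate(A):
    """Return log|det A^u|, the DRT lower bound on the feedback data rate,
    where A^u collects the eigenvalues of A with magnitude >= 1."""
    eigs = np.linalg.eigvals(np.asarray(A, dtype=float))
    unstable = eigs[np.abs(eigs) >= 1.0]
    if unstable.size == 0:
        return 0.0  # no unstable modes: no positive rate is required
    # |det A^u| equals the product of the unstable eigenvalue magnitudes
    return float(np.sum(np.log(np.abs(unstable))))

A = [[2.0, 0.0], [0.0, 0.5]]  # one unstable mode (eigenvalue 2)
rate = min_data_rate(A)       # log 2 nats of 'topological information' per step
```

Any feedback channel carrying less than `rate` nats per time step cannot stabilize the plant, whatever coding scheme is used.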
The theorem and its variations relate control theory to information theory, and are as fundamental as the Shannon Coding, Source Coding, and Rate Distortion Theorems for understanding complex cognitive machines and biological phenomena (Cover and Thomas 2006). Very clearly, however, the DRT speaks to the centrality of the question of whether nonequilibrium phase transitions play an important role in living systems.
Morse Theory
Morse Theory explores relations between the analytic behavior of a function—the location and character of its critical points—and the underlying topology of the manifold on which the function is defined. We are interested in a number of such functions, for example, information source uncertainty on a parameter space, and in parameter manifolds determining critical behavior. An example might be the sudden onset of a giant component when ‘weak ties’ exceed a threshold. These can be reformulated from a Morse Theory perspective (Pettini 2007).
The basic idea of Morse Theory is to examine an \(n\)-dimensional manifold \(M\) as decomposed into level sets of some function \(f: M \rightarrow \mathbf {R}\) where \(\mathbf {R}\) is the set of real numbers. The \(a\)-level set of \(f\) is defined as
$$\begin{aligned} f^{-1}(a) = \{x \in M: f(x)=a\}, \end{aligned}$$
the set of all points in \(M\) with \(f(x)=a\). If \(M\) is compact, then the whole manifold can be decomposed into such slices in a canonical fashion between two limits, defined by the minimum and maximum of \(f\) on \(M\). Let the part of \(M\) below \(a\) be defined as
$$\begin{aligned} M_{a} = f^{-1}(-\infty , a] = \{x \in M : f(x) \le a\}. \end{aligned}$$
These sets describe the whole manifold as \(a\) varies between the minimum and maximum of \(f\).
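The level and sublevel sets can be sketched on a discrete grid; the function \(f(x,y)=x^{2}+y^{2}\) below is an illustrative example, not one from the source.

```python
import numpy as np

# Sample an illustrative function f(x, y) = x^2 + y^2 on a grid and
# extract a level set f^{-1}(a) and the sublevel set M_a = {f <= a}.
xs = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs)
F = X**2 + Y**2

a = 0.5
level_set = np.isclose(F, a, atol=1e-2)  # f^{-1}(a): grid points with f ~ a
sublevel_set = F <= a                    # M_a: grid points with f(x) <= a
# as a increases from min(f) to max(f), M_a sweeps out the whole domain
```

The sublevel sets are nested: \(M_{a} \subseteq M_{b}\) whenever \(a \le b\), which is what makes the decomposition of \(M\) into slices canonical.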
Morse functions are defined as a particular set of smooth functions \(f:M \rightarrow \mathbf {R}\) as follows. Suppose a function \(f\) has a critical point \(x_{c}\), so that the derivative \(df(x_{c})=0\), with critical value \(f(x_{c})\). Then, \(f\) is a Morse function if its critical points are nondegenerate in the sense that the Hessian matrix of second derivatives at \(x_{c}\), whose elements, in terms of local coordinates are
$$\begin{aligned} \mathcal {H}_{i,j}= \partial ^{2}f/\partial x^{i} \partial x^{j}, \end{aligned}$$
has rank \(n\), which means that it has only nonzero eigenvalues, so that there are no lines or surfaces of critical points and, ultimately, critical points are isolated.
The index of the critical point is the number of negative eigenvalues of \(\mathcal {H}\) at \(x_{c}\).
A level set \(f^{-1}(a)\) of \(f\) is called a critical level if \(a\) is a critical value of \(f\), that is, if there is at least one critical point \(x_{c} \in f^{-1}(a)\).
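The nondegeneracy condition and the index can be checked symbolically; the saddle function below is an illustrative example, not one from the source.

```python
import sympy as sp

# Find the critical point of an illustrative function f(x, y) = x^2 - y^2,
# verify the Morse (nondegeneracy) condition, and compute the index.
x, y = sp.symbols('x y', real=True)
f = x**2 - y**2

grad = [sp.diff(f, v) for v in (x, y)]
crit = sp.solve(grad, (x, y), dict=True)[0]   # the origin

H = sp.hessian(f, (x, y)).subs(crit)          # Hessian at the critical point
eigs = list(H.eigenvals().keys())

nondegenerate = all(ev != 0 for ev in eigs)   # full rank: f is Morse there
index = sum(1 for ev in eigs if ev < 0)       # number of negative eigenvalues
```

Here the Hessian is \(\mathrm{diag}(2,-2)\), so the origin is a nondegenerate critical point of index 1, i.e., a saddle.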
Again following Pettini (2007), the essential results of Morse Theory are:
1. If an interval \([a,b]\) contains no critical values of \(f\), then the topology of \(f^{-1}[a, v]\) does not change for any \(v \in (a, b]\). Importantly, this result remains valid even if \(f\) is merely a smooth function rather than a Morse function.
2. If the interval \([a, b]\) contains critical values, the topology of \(f^{-1}[a,v]\) changes in a manner determined by the properties of the Hessian \(\mathcal {H}\) at the critical points.
3. If \(f :M \rightarrow \mathbf {R}\) is a Morse function, the set of all the critical points of \(f\) is a discrete subset of \(M\), i.e., critical points are isolated. This is Sard’s Theorem.
4. If \(f : M \rightarrow \mathbf {R}\) is a Morse function, with \(M\) compact, then on a finite interval \([a, b] \subset \mathbf {R}\), there is only a finite number of critical points \(p\) of \(f\) such that \(f(p) \in [a, b]\). The set of critical values of \(f\) is a discrete subset of \(\mathbf {R}\).
5. For any differentiable manifold \(M\), the set of Morse functions on \(M\) is an open dense set in the set of real functions on \(M\) of differentiability class \(r\) for \(0 \le r \le \infty \).
6. Some topological invariants of \(M\), that is, quantities that are the same for all the manifolds that have the same topology as \(M\), can be estimated and sometimes computed exactly once all the critical points of \(f\) are known: let the Morse numbers \(\mu _{i}\ (i=0, \ldots , m)\) of a function \(f\) on \(M\) be the number of critical points of \(f\) of index \(i\) (the number of negative eigenvalues of \(\mathcal {H}\)). The Euler characteristic of the manifold \(M\) can be expressed as the alternating sum of the Morse numbers of any Morse function on \(M\),
$$\begin{aligned} \chi = \sum _{i=0}^{m} (-1)^{i}\mu _{i}. \end{aligned}$$
The Euler characteristic reduces, in the case of a simple polyhedron, to
$$\begin{aligned} \chi = V - E + F \end{aligned}$$
where \(V, E\), and \(F\) are the numbers of vertices, edges, and faces in the polyhedron.
7. Another important theorem states that, if the interval \([a,b]\) contains a critical value of \(f\) with a single critical point \(x_{c}\), then the topology of the set \(M_{b}\) defined above differs from that of \(M_{a}\) in a way determined by the index \(i\) of the critical point. Then \(M_{b}\) is homeomorphic to the manifold obtained from attaching to \(M_{a}\) an \(i\)-handle, i.e., the direct product of an \(i\)-disk and an \((m-i)\)-disk.
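The two expressions for the Euler characteristic can be checked on simple cases; the cube and the Morse numbers of the standard height function on the 2-sphere are illustrative examples, not from the source.

```python
def euler_from_polyhedron(V, E, F):
    """chi = V - E + F for a simple polyhedron."""
    return V - E + F

def euler_from_morse_numbers(mu):
    """chi = sum_{i=0}^{m} (-1)^i mu_i from the Morse numbers mu_i."""
    return sum((-1) ** i * m for i, m in enumerate(mu))

chi_cube = euler_from_polyhedron(8, 12, 6)        # cube: V=8, E=12, F=6
chi_sphere = euler_from_morse_numbers([1, 0, 1])  # height on S^2: one minimum,
                                                  # no saddles, one maximum
```

Both computations give \(\chi = 2\), the Euler characteristic of the sphere, as they must: the cube is homeomorphic to \(S^{2}\).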
Pettini (2007) and Matsumoto (2002) contain details and further references.
Groupoids
A groupoid, \(G\), is defined by a base set \(A\) upon which some mapping (a morphism) can be defined. Note that not all possible pairs of states \((a_{j}, a_{k})\) in the base set \(A\) need be connected by such a morphism. Those pairs that can be connected define the groupoid elements: morphisms \(g=(a_{j},a_{k})\) having the natural inverse \(g^{-1}=(a_{k}, a_{j})\). Given such a pairing, it is possible to define ‘natural’ end-point maps \(\alpha (g)=a_{j},\ \beta (g)=a_{k}\) from the set of morphisms \(G\) into \(A\), and a formally associative product \(g_{1}g_{2}\) in the groupoid, defined precisely when \(\beta (g_{1})=\alpha (g_{2})\), with \(\alpha (g_{1}g_{2})=\alpha (g_{1})\) and \(\beta (g_{1}g_{2})=\beta (g_{2})\). When defined, the product is associative: \((g_{1}g_{2})g_{3}=g_{1}(g_{2}g_{3})\). In addition, there are natural left and right identity elements \(\lambda _{g}, \rho _{g}\) such that \(\lambda _{g}g=g=g\rho _{g}\).
An orbit of the groupoid \(G\) over \(A\) is an equivalence class for the relation \(a_{j} \sim _{G} a_{k}\) if and only if there is a groupoid element \(g\) with \(\alpha (g)=a_{j}\) and \(\beta (g)=a_{k}\). A groupoid is called transitive if it has just one orbit. The transitive groupoids are the building blocks of groupoids in that there is a natural decomposition of the base space of a general groupoid into orbits. Over each orbit, there is a transitive groupoid, and the disjoint union of these transitive groupoids is the original groupoid. Conversely, the disjoint union of groupoids is itself a groupoid.
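The orbit decomposition over a finite base set amounts to finding the connected components of the relation generated by the morphisms; the base set and morphisms below are illustrative examples, not from the source.

```python
# Decompose a finite groupoid's base set into orbits: morphisms are stored
# as ordered pairs (a_j, a_k), and orbits are computed with a union-find.

def orbits(base, morphisms):
    parent = {a: a for a in base}

    def find(a):                      # union-find root lookup with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for aj, ak in morphisms:          # each morphism g = (a_j, a_k) ...
        parent[find(aj)] = find(ak)   # ... merges the orbits of its endpoints

    classes = {}
    for a in base:
        classes.setdefault(find(a), set()).add(a)
    return sorted(map(frozenset, classes.values()), key=min)

base = ['a', 'b', 'c', 'd']
G = [('a', 'b'), ('b', 'c')]          # 'd' is connected only to itself
orbit_list = orbits(base, G)          # two orbits: {a, b, c} and {d}
```

Since there are two orbits, this groupoid is not transitive; restricting to either orbit yields a transitive groupoid, and their disjoint union recovers the original, as described above.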
The isotropy group of \(a \in A\) consists of those \(g\) in \(G\) with \(\alpha (g)=a=\beta (g)\). These groups prove fundamental to classifying groupoids.
If \(G\) is any groupoid over \(A\), the map \((\alpha , \beta ):G \rightarrow A \times A\) is a morphism from \(G\) to the pair groupoid of \(A\). The image of \((\alpha , \beta )\) is the orbit equivalence relation \(\sim _{G}\), and the functional kernel is the union of the isotropy groups. If \(f: X \rightarrow Y\) is a function, then the kernel of \(f\), \(\ker (f)=\{(x_{1},x_{2})\in X \times X:f(x_{1})=f(x_{2})\}\), defines an equivalence relation.
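The kernel of a function as an equivalence relation can be made concrete by grouping points with equal images; the set \(X\) and function \(f\) below are illustrative examples, not from the source.

```python
# ker(f) relates x1 ~ x2 exactly when f(x1) = f(x2); its equivalence
# classes are the fibers of f, which partition X.

def kernel_classes(X, f):
    classes = {}
    for x in X:
        classes.setdefault(f(x), []).append(x)
    return [set(v) for v in classes.values()]

X = range(6)
f = lambda x: x % 3
classes = kernel_classes(X, f)   # fibers: {0, 3}, {1, 4}, {2, 5}
```

Each fiber is one equivalence class of \(\ker (f)\), mirroring how the orbit equivalence relation \(\sim _{G}\) arises as the image of \((\alpha , \beta )\).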
Groupoids may have additional structure. For example, a groupoid \(G\) is a topological groupoid over a base space \(X\) if \(G\) and \(X\) are topological spaces and \(\alpha , \beta \) and multiplication are continuous maps.
In essence, a groupoid is a category in which all morphisms have an inverse, here defined in terms of connection to a base point by a meaningful path of an information source dual to a cognitive process.
The morphism \((\alpha , \beta )\) suggests another way of looking at groupoids. A groupoid over \(A\) identifies not only which elements of \(A\) are equivalent to one another (isomorphic), but it also parameterizes the different ways (isomorphisms) in which two elements can be equivalent, i.e., in our context, all possible information sources dual to some cognitive process. Given the information theoretic characterization of cognition presented above, this produces a full modular cognitive network in a highly natural manner.