Abstract
The Fisher metric on a manifold of probability distributions is usually treated as a metric on the tangent bundle. In this paper, we focus on the metric on the cotangent bundle induced from the Fisher metric, which we call the Fisher co-metric. We show that the Fisher co-metric can be defined directly without going through the Fisher metric by establishing a natural correspondence between cotangent vectors and random variables. This definition clarifies a close relation between the Fisher co-metric and the variance/covariance of random variables, whereby the Cramér-Rao inequality is trivialized. We also discuss the monotonicity and the invariance of the Fisher co-metric with respect to Markov maps, and present a theorem characterizing the co-metric by the invariance, which can be regarded as a cotangent version of Čencov’s characterization theorem for the Fisher metric. The obtained theorem can also be viewed as giving a characterization of the variance/covariance.
1 Introduction
The Fisher metric on a statistical manifold (a manifold consisting of probability distributions) is one of the most important notions in information geometry [1]. It is usually treated as a Riemannian metric, that is, a metric on the tangent bundle. The subject of the present paper is the metric on the cotangent bundle corresponding to the Fisher metric, which we call the Fisher co-metric. The Fisher metric and the Fisher co-metric are essentially a single geometric object, each being induced from the other. Nevertheless, studying the Fisher co-metric has several implications, as the present paper intends to show.
Firstly, as will be seen in Sect. 2, the Fisher co-metric is defined via the variance/covariance of random variables based on a natural correspondence between cotangent vectors and random variables. This definition is very natural and does not seem arbitrary. There is no room for questions such as why \(\log p\) appears in the definition of the Fisher metric.
Secondly, the above relationship between cotangent vectors and random variables directly links the variance/covariance of an unbiased estimator with the Fisher co-metric, which trivializes the Cramér-Rao inequality. In light of this fact, the Fisher metric appears to be a detour to the Cramér-Rao inequality, at least conceptually.
Thirdly, once we focus on the Fisher co-metric, we are motivated to reconsider known results for the Fisher metric as a source of similar problems for the Fisher co-metric and the variance/covariance, which may lead to new insights. As an example, co-metric and variance/covariance versions of Čencov’s theorem on the characterization of the Fisher metric are investigated in this paper.
The paper is organized as follows. In Sect. 2, we introduce the Fisher co-metric on the manifold \(\mathcal {P}(\Omega )\), which is the totality of positive probability distributions on a finite set \(\Omega \), via the variance/covariance of random variables on \(\Omega \). In Sect. 3, the Fisher co-metric is shown to be equivalent to the Fisher metric by a natural correspondence. In Sect. 4, the Fisher metric and co-metric on an arbitrary submanifold of \(\mathcal {P}\) are discussed, where we see that the Cramér-Rao inequality is trivialized by considering the co-metric. Section 5 treats the e- and m-connections on \(\mathcal {P}(\Omega )\), where it is clarified that, in application to estimation theory, the role of the m-connection as a connection on the cotangent bundle and its relation to the Fisher co-metric are crucial. Sections 2–5 can be considered to constitute the first half of the paper, which is aimed at showing the naturalness and the usefulness of considering the Fisher co-metric.
The second half of the paper focuses on the monotonicity and the invariance of the Fisher metric and co-metric with respect to Markov maps. In Sect. 6, we investigate the monotonicity. We show there that the monotonicity of the Fisher metric, which is well known as a characteristic property of the metric, is equivalently translated into the monotonicity of the Fisher co-metric and that of the variance. In Sect. 7, after reviewing the invariance of the Fisher metric and Čencov’s theorem, we consider their co-metric versions. It is shown that, unlike the monotonicity, the invariance of the metric and that of the co-metric are not logically equivalent. We present a theorem on the characterization of the Fisher co-metric in terms of the invariance, which corresponds to Čencov’s theorem but does not follow from it. The obtained theorem can also be expressed as a theorem on the characterization of the variance/covariance. In Sect. 8, we investigate a stronger version of the invariance, which can be regarded as the joint condition that combines the invariance of the metric and that of the co-metric. The formulation used for expressing this condition is applied to affine connections in Sect. 9, whereby a kind of invariance condition for affine connections is obtained. The condition is shown to be equivalent to a known version of the invariance condition which is seemingly weaker than the original condition used by Čencov to characterize the \(\alpha \)-connections, but actually characterizes the \(\alpha \)-connections as well. Section 10 is devoted to concluding remarks.
Remark 1.1
Throughout this paper, we denote the tangent space and the cotangent space of a manifold M at a point \(p\in M\) by \(T_{p} (M)\) and \(T^*_{p} (M)\), respectively. We also denote the totality of smooth vector fields and that of smooth differential 1-forms on M by \(\mathfrak {X}(M)\) and \(\mathfrak {D}(M)\), respectively. We generally use capital letters \(X, Y, \ldots \) for vector fields in \(\mathfrak {X}(M)\), which are maps assigning tangent vectors \(X_p, Y_p, \ldots \) in \(T_{p} (M)\) to each point \(p\in M\). To economize on symbols, we also denote general tangent vectors in \(T_{p} (M)\) by \(X_p, Y_p, \ldots \), not only when they are the values of vector fields. Similarly, we use Greek letters \(\alpha , \beta , \ldots \) for 1-forms in \(\mathfrak {D}(M)\), which are maps assigning cotangent vectors \(\alpha _p, \beta _p, \ldots \) in \(T^*_{p} (M)\) to each point \(p\in M\), and also denote general cotangent vectors by \(\alpha _p, \beta _p, \ldots \), not only when they are the values of 1-forms. The pairing of \(X_p\in T_{p} (M)\) and \(\alpha _p\in T^*_{p}(M)\) is expressed as \(\alpha _p (X_p)\), considering a cotangent vector as a function on the tangent space. We reserve the initial capital letters \(A, B, \ldots \) for random variables (\(\mathbb {R}\)-valued functions on sample spaces).
2 The Fisher co-metric
We introduce the Fisher co-metric in this section, while its equivalence to the Fisher metric will be shown in the next section.
Let \(\Omega \) be a finite set with cardinality \(|\Omega |\ge 2\), and let \(\mathcal {P}(\Omega )\) be the totality of strictly positive probability distributions on \(\Omega \):
$$\begin{aligned} \mathcal {P}(\Omega ):= \Bigl \{ p\in \mathbb {R}^\Omega \,\Big \vert \, \forall \omega \in \Omega ,\; p(\omega )>0, \;\; \sum _{\omega \in \Omega } p(\omega ) = 1 \Bigr \}, \end{aligned}$$(2.1)
which is regarded as a manifold with \(\dim \mathcal {P}(\Omega ) = |\Omega |-1\). Let the totality of \(\mathbb {R}\)-valued functions on \(\Omega \) be denoted by \(\mathbb {R}^\Omega \), and define \(\bigl (\mathbb {R}^\Omega \bigr )_c:= \{A\in \mathbb {R}^\Omega \,\vert \, \sum _{\omega \in \Omega } A(\omega ) =c\}\) for a constant \(c\in \mathbb {R}\). Since \(\mathcal {P}\) is an open subset of the affine space \(\bigl (\mathbb {R}^\Omega \bigr )_1\), its tangent space can be identified with the linear space \(\bigl (\mathbb {R}^\Omega \bigr )_0\). Following the terminology of [1], we denote this identification \(T_{p} (\mathcal {P}) \rightarrow \bigl (\mathbb {R}^\Omega \bigr )_0\) by \(X_p \mapsto X_p^{(\textrm{m})}\), and call \(X_p^{(\textrm{m})}\) the m-representation of \(X_p\).
For an arbitrary submanifold M of \(\mathcal {P}\) (including the case when \(M=\mathcal {P}\)) we define \(T^{(\textrm{m})}_{p}(M):= \{X_p^{(\textrm{m})} \,\vert \, X_p \in T_{p}(M)\}\), which is a linear subspace of \(T^{(\textrm{m})}_{p}(\mathcal {P}) = \bigl (\mathbb {R}^\Omega \bigr )_0\). When the elements of M are parametrized as \(p_\xi \) by a coordinate system \(\xi = (\xi ^i)\) of M, the m-representation of \((\partial _i)_p \in T_{p}(M)\), where \(\partial _i:= \frac{\partial }{\partial \xi ^i}\), with \(p=p_\xi \) is represented as
$$\begin{aligned} (\partial _i)_{p}^{(\textrm{m})} = \partial _i\, p_\xi: \omega \mapsto \frac{\partial p_\xi (\omega )}{\partial \xi ^i}, \end{aligned}$$(2.2)
and \(\{ (\partial _i)_{p}^{(\textrm{m})} \}_{i=1}^n\) (\(n=\dim M\)) constitute a basis of \(T^{(\textrm{m})}_{p}(M)\).
We denote the expectation of a random variable \(A\in \mathbb {R}^\Omega \) w.r.t. a distribution \(p\in \mathcal {P}\) by
$$\begin{aligned} \langle A\rangle _p:= \sum _{\omega \in \Omega } A(\omega )\, p(\omega ), \end{aligned}$$(2.3)
and define the function
$$\begin{aligned} \langle A\rangle: \mathcal {P}\rightarrow \mathbb {R}, \quad p \mapsto \langle A\rangle _p. \end{aligned}$$(2.4)
Since \(\langle A\rangle \) is a smooth function on the manifold \(\mathcal {P}\), its differential \((d\langle A\rangle )_p \in T^*_{p} (\mathcal {P})\) at each point \(p\in \mathcal {P}\) is defined. We introduce the following map:
$$\begin{aligned} \delta _p: \mathbb {R}^\Omega \rightarrow T^*_{p} (\mathcal {P}), \quad \delta _p (A):= (d\langle A\rangle )_p. \end{aligned}$$(2.5)
Its action on tangent vectors is written by means of m-representations as
$$\begin{aligned} \delta _p (A) (X_p) = \sum _{\omega \in \Omega } X_p^{(\textrm{m})} (\omega )\, A(\omega ). \end{aligned}$$(2.6)
For this map, we have
Proposition 2.1
For every \(p\in \mathcal {P}\), the linear map \(\delta _p: \mathbb {R}^\Omega \rightarrow T^*_{p} (\mathcal {P})\) is surjective with \(\textrm{Ker}\,\delta _p =\mathbb {R}\), where \(\mathbb {R}\) is regarded as a subspace of \(\mathbb {R}^\Omega \) by identifying a constant \(c\in \mathbb {R}\) with the constant function \(\omega \mapsto c\). Hence, \(\delta _p\) induces a linear isomorphism \(\mathbb {R}^\Omega / \mathbb {R}\rightarrow T^*_{p} (\mathcal {P})\).
Proof
Every cotangent vector \(\alpha _p\in T^*_{p} (\mathcal {P})\) is a linear functional on \(T_{p} (\mathcal {P})\), which is represented as \(\alpha _p: X_p \mapsto \sum _\omega X_p^{(\textrm{m})} (\omega ) A (\omega )\) by some \(A\in \mathbb {R}^{\Omega }\). This means that \(\alpha _p = \delta _p (A)\) due to (2.6). Hence, \(\delta _p\) is surjective. For any \(A\in \mathbb {R}^{\Omega }\), we have
$$\begin{aligned} \delta _p (A) = 0 \;\Longleftrightarrow \; \forall X_p^{(\textrm{m})}\in \bigl (\mathbb {R}^\Omega \bigr )_0, \;\; \sum _{\omega \in \Omega } X_p^{(\textrm{m})} (\omega )\, A(\omega ) = 0 \;\Longleftrightarrow \; A\in \mathbb {R}, \end{aligned}$$
which proves \(\textrm{Ker}\,\delta _p =\mathbb {R}\). \(\square \)
For each \(p\in \mathcal {P}\), denote the \(L^2\) inner product and the covariance of random variables \(A, B\in \mathbb {R}^\Omega \) by
$$\begin{aligned} \langle A, B\rangle _p:= \sum _{\omega \in \Omega } A(\omega )\, B(\omega )\, p(\omega ) = \langle AB\rangle _p \end{aligned}$$(2.7)
and
$$\begin{aligned} \textrm{Cov}_p (A, B):= \bigl \langle A- \langle A\rangle _p,\; B- \langle B\rangle _p \bigr \rangle _p. \end{aligned}$$(2.8)
Then \(\textrm{Cov}_p: (A, B) \mapsto \textrm{Cov}_p (A, B)\) is a degenerate nonnegative bilinear form on \(\mathbb {R}^\Omega \) with kernel \(\mathbb {R}\), and defines an inner product on \(\mathbb {R}^\Omega / \mathbb {R}\). Therefore, Proposition 2.1 implies that an inner product on \(T^*_{p} (\mathcal {P})\), which we denote by \(g_p\), can be defined by
$$\begin{aligned} g_p \bigl (\delta _p (A), \delta _p (B)\bigr ):= \textrm{Cov}_p (A, B). \end{aligned}$$(2.9)
Denoting the norm for \(g_p\) by \(\Vert \cdot \Vert _p\), we have
$$\begin{aligned} \Vert \delta _p (A) \Vert _p^2 = V_p (A), \end{aligned}$$(2.10)
where the RHS is the variance \(V_p (A):= \langle (A- \langle A\rangle _p)^2 \rangle _p\).
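As a numerical sanity check (a sketch only; the sample space \(\Omega = \{0, \ldots , 4\}\), the distribution \(p\), and the random variable \(A\) below are arbitrary choices, not taken from the text), the map \(\delta _p\) can be realized through m-representations, and the kernel property and the relation (2.10) can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                 # |Omega| = 5
p = rng.random(n); p /= p.sum()       # a point of P(Omega)
A = rng.standard_normal(n)            # a random variable on Omega

# delta_p(A) acts on a tangent vector through its m-representation X
# (an array summing to 0):  delta_p(A)(X_p) = sum_w X(w) A(w).
def delta(A, X):
    return X @ A

# Fisher co-metric (2.9): g_p(delta_p(A), delta_p(B)) := Cov_p(A, B).
def cov(p, A, B):
    return p @ (A * B) - (p @ A) * (p @ B)

# Ker delta_p = R: adding a constant to A leaves delta_p(A) unchanged.
X = rng.standard_normal(n); X -= X.mean()     # X in (R^Omega)_0
assert np.isclose(delta(A, X), delta(A + 7.0, X))

# The squared norm of delta_p(A) is the variance V_p(A), cf. (2.10).
assert np.isclose(cov(p, A, A), p @ (A - p @ A) ** 2)
```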
We have thus defined the map g which maps each point \(p\in \mathcal {P}\) to the inner product \(g_p\) on \(T^*_{p} (\mathcal {P})\). We generally call such a map (a metric on the cotangent bundle) a co-metric. Although a co-metric is essentially equivalent to a usual (Riemannian) metric (a metric on the tangent bundle) by the correspondence explained in the next section, it is often useful to distinguish them conceptually. The co-metric defined by (2.9) is called the Fisher co-metric, since it corresponds to the Fisher metric as will be shown later.
Remark 2.2
Eq. (2.10) is found in Theorem 2.7 of the book [1], where the norm and the inner product on the cotangent space were considered to be induced from the Fisher metric.
3 The correspondence between a metric and a co-metric
By a standard argument of linear algebra, an inner product \(\langle \cdot , \cdot \rangle \) on an \(\mathbb {R}\)-linear space V establishes a natural linear isomorphism between V and its dual space \(V^*\), which we denote by \({\mathop {\longleftrightarrow }\limits ^{\langle \cdot , \cdot \rangle }}\). This gives a one-to-one correspondence between a metric on a manifold M and a co-metric on M as follows. Given a metric g on M, a tangent vector \(X_p\in T_{p} (M)\) and a cotangent vector \(\alpha _p\in T^*_{p} (M)\) at a point \(p\in M\) correspond to each other by
$$\begin{aligned} X_p \, {\mathop {\longleftrightarrow }\limits ^{g_p}} \, \alpha _p \; :\Leftrightarrow \; \forall Y_p \in T_{p}(M), \;\; g_p (X_p, Y_p) = \alpha _p (Y_p). \end{aligned}$$(3.1)
The correspondence is extended to the correspondence between a vector field \(X\in \mathfrak {X}(M)\) and a 1-form \(\alpha \in \mathfrak {D}(M)\) by
$$\begin{aligned} X \, {\mathop {\longleftrightarrow }\limits ^{g}} \, \alpha \; :\Leftrightarrow \; \forall p\in M, \;\; X_p \, {\mathop {\longleftrightarrow }\limits ^{g_p}} \, \alpha _p. \end{aligned}$$(3.2)
(Note: some literature refers to this correspondence as the musical isomorphism with notation \(\alpha = X^\flat \) and \(X = \alpha ^\sharp \), while we will use the symbol \(\sharp \) for a different meaning later.) This correspondence determines a co-metric on M, which is denoted by the same symbol g, such that for every \(p\in M\)
$$\begin{aligned} g_p (\alpha _p, \beta _p):= g_p (X_p, Y_p) \quad \text {where} \;\; X_p \, {\mathop {\longleftrightarrow }\limits ^{g_p}} \, \alpha _p \;\;\text {and}\;\; Y_p \, {\mathop {\longleftrightarrow }\limits ^{g_p}} \, \beta _p. \end{aligned}$$(3.3)
Conversely, given a co-metric g on M, the correspondence \({\mathop {\longleftrightarrow }\limits ^{g_p}}\) is defined by
$$\begin{aligned} X_p \, {\mathop {\longleftrightarrow }\limits ^{g_p}} \, \alpha _p \; :\Leftrightarrow \; \forall \beta _p \in T^*_{p}(M), \;\; g_p (\alpha _p, \beta _p) = \beta _p (X_p), \end{aligned}$$(3.4)
and a metric on M is defined by the same relation as (3.3). It should be noted that when a metric and a co-metric correspond in this way, the relations (3.1) and (3.4) are equivalent, so that there arises no confusion even if we use the same symbol g for the corresponding metric and co-metric in \({\mathop {\longleftrightarrow }\limits ^{g_p}}\) and \({\mathop {\longleftrightarrow }\limits ^{g}}\).
Note that for an arbitrary coordinate system \((\xi ^i)\) of M, \(g_{ij}:= g(\frac{\partial }{\partial \xi ^i}, \frac{\partial }{\partial \xi ^j})\) and \(g^{ij}:= g(d\xi ^i, d\xi ^j)\) form the inverse matrices of each other at every point of M. Note also that the norms for \((T_{p}(M), g_p)\) and \((T^*_{p}(M), g_p)\) are linked by
$$\begin{aligned} \Vert X_p \Vert _p = \max _{\alpha _p\in T^*_{p}(M) \setminus \{0\}} \frac{\alpha _p (X_p)}{\Vert \alpha _p \Vert _p} \end{aligned}$$(3.5)
and
$$\begin{aligned} \Vert \alpha _p \Vert _p = \max _{X_p\in T_{p}(M) \setminus \{0\}} \frac{\alpha _p (X_p)}{\Vert X_p \Vert _p}, \end{aligned}$$(3.6)
where the \(\max \)'s in these equations are achieved by those \(X_p\) and \(\alpha _p\) which correspond to each other by \({\mathop {\longleftrightarrow }\limits ^{g_p}}\) up to a constant factor.
For a tangent vector \(X_p\in T_{p} (\mathcal {P})\), define
$$\begin{aligned} L_{X_p}:= X_p^{(\textrm{m})} / p \;\; \in \mathbb {R}^\Omega , \end{aligned}$$(3.7)
which is the derivative of the map \(\mathcal {P}\rightarrow \mathbb {R}^\Omega \), \(p\mapsto \log p\), w.r.t. \(X_p\). (In [1], \(L_{X_p}\) is called the e-representation of \(X_p\) and is denoted by \(X_p^{(\textrm{e})}\).) Note that \(L_{X_p}\) is characterized by (cf. (2.6))
$$\begin{aligned} \forall B\in \mathbb {R}^\Omega , \quad \sum _{\omega \in \Omega } X_p^{(\textrm{m})} (\omega )\, B(\omega ) = \langle L_{X_p}, B\rangle _p. \end{aligned}$$(3.8)
The following proposition shows that the metric induced from the Fisher co-metric g by the correspondence \({\mathop {\longleftrightarrow }\limits ^{g}}\) is the Fisher metric.
Proposition 3.1
For each point \(p\in \mathcal {P}\), we have:
-
1.
\(\forall A\in \mathbb {R}^{\Omega }, \;\forall X_p\in T_{p}(\mathcal {P}), \;\; X_p {\mathop {\longleftrightarrow }\limits ^{g_p}} \delta _p (A) \; \Leftrightarrow \; L_{X_p} = A - \langle A\rangle _p\).
-
2.
\(\forall X_p, Y_p\in T_{p}(\mathcal {P}), \;\; g_p (X_p, Y_p) = \langle L_{X_p}, L_{Y_p}\rangle _p\).
Proof
1: According to (3.4), the condition \(X_p {\mathop {\longleftrightarrow }\limits ^{g_p}} \delta _p (A)\) is equivalent to
$$\begin{aligned} \forall B\in \mathbb {R}^\Omega , \quad g_p \bigl (\delta _p (A), \delta _p (B)\bigr ) = \delta _p (B) (X_p). \end{aligned}$$(3.9)
Here the LHS is equal to
$$\begin{aligned} \textrm{Cov}_p (A, B) = \langle A - \langle A\rangle _p,\, B\rangle _p, \end{aligned}$$
while the RHS is equal to \(\langle L_{X_p}, B\rangle _p\) by (3.8). Hence, (3.9) is equivalent to \(L_{X_p} = A-\langle A\rangle _p\).
2: Obvious from item 1 and (3.3). \(\square \)
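Proposition 3.1 can be confirmed numerically; in the following sketch (with arbitrarily chosen \(p\) and \(A\) on a six-point sample space), the tangent vector \(X_p\) corresponding to \(\delta _p (A)\) is constructed from the e-representation \(L_{X_p} = A - \langle A\rangle _p\):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
p = rng.random(n); p /= p.sum()
A = rng.standard_normal(n)

# Item 1: the e-representation of the corresponding X_p is A - <A>_p,
# hence its m-representation is X = p * L_X.
L_X = A - p @ A
X = p * L_X
assert np.isclose(X.sum(), 0.0)          # X is a valid m-representation

# Correspondence:  g_p(X_p, Y_p) = delta_p(A)(Y_p) for every tangent Y_p,
# where the Fisher metric is g_p(X, Y) = sum_w X(w) Y(w) / p(w).
for _ in range(100):
    Y = rng.standard_normal(n); Y -= Y.mean()
    assert np.isclose(np.sum(X * Y / p), Y @ A)

# Item 2:  g_p(X_p, X_p) = <L_X, L_X>_p.
assert np.isclose(np.sum(X * X / p), p @ (L_X * L_X))
```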
4 The Fisher co-metric on a submanifold and the Cramér–Rao inequality
Let M be an arbitrary submanifold of \(\mathcal {P}\). Then a metric on M is induced as the restriction of the Fisher metric g, which we denote by \(g_M: p\mapsto g_{M, p} = g_p\vert _{T_{p}(M)^2}\). When a coordinate system \(\xi = (\xi ^i)\) is given on M, corresponding to (2.2) it holds that
$$\begin{aligned} (\partial _i)_{p}^{(\textrm{m})} = \partial _i\, p_\xi \qquad (p = p_\xi \in M). \end{aligned}$$(4.1)
We have
$$\begin{aligned} g_{M, ij} (p):= g_{M, p} \bigl ((\partial _i)_p, (\partial _j)_p\bigr ) = \bigl \langle L_{(\partial _i)_p}, L_{(\partial _j)_p}\bigr \rangle _p, \end{aligned}$$(4.2)
which defines the Fisher information matrix \(G_M (p) = [g_{M, ij} (p) ]\). The metric \(g_{M}\) induces a co-metric on M, which is denoted by the same symbol \(g_{M}\). Letting
$$\begin{aligned} g_{M}^{ij} (p):= g_{M, p} \bigl ((d\xi ^i)_p, (d\xi ^j)_p\bigr ), \end{aligned}$$(4.3)
we have \(G_M (p)^{-1} = [g_{M}^{ij} (p) ]\).
Suppose that a cotangent vector \(\alpha _p \in T^*_{p}(M)\) on M is the restriction of a cotangent vector \(\tilde{\alpha }_p \in T^*_{p}(\mathcal {P})\) on \(\mathcal {P}\); i.e., \(\alpha _p = \tilde{\alpha }_p\vert _{T_{p} (M)}\). Then, it follows from (3.6) that
$$\begin{aligned} \Vert \alpha _p \Vert _{M, p} \le \Vert \tilde{\alpha }_p \Vert _p. \end{aligned}$$(4.4)
(Note that \(\Vert X_p \Vert _p = \Vert X_p \Vert _{M, p}\) since the metric on M is the restriction of the metric on \(\mathcal {P}\).) Furthermore, for an arbitrary \(\alpha _p \in T^*_{p}(M)\), there always exists \(\tilde{\alpha }_p \in T^*_{p} (\mathcal {P})\) satisfying \(\alpha _p = \tilde{\alpha }_p\vert _{T_{p} (M)}\) and \(\Vert \alpha _p \Vert _{M, p} =\Vert \tilde{\alpha }_p \Vert _{p}\). Indeed, letting \(X_p\in T_{p}(M)\) be defined by \(X_p {\mathop {\longleftrightarrow }\limits ^{g_{M, p}}} \alpha _p\), such an \(\tilde{\alpha }_p\) is obtained by \(X_p {\mathop {\longleftrightarrow }\limits ^{g_{p}}} \tilde{\alpha }_p\).
The above observations lead to the following proposition.
Proposition 4.1
-
1.
For any \(\alpha _p \in T^*_{p} (M)\), we have
$$\begin{aligned} \Vert \alpha _p \Vert _{M, p}&= \min \{\Vert \tilde{\alpha }_p \Vert _p\, \,\vert \, \,\tilde{\alpha }_p\in T^*_{p}(\mathcal {P}) \;\;\text {and}\;\; \alpha _p = \tilde{\alpha }_p\vert _{T_{p}(M)}\} \nonumber \\&= \Vert {(\alpha _p)}^\sharp \Vert _p, \end{aligned}$$(4.5)where \({(\alpha _p)}^\sharp := \arg \min _{\tilde{\alpha }_p}\{\Vert \tilde{\alpha }_p \Vert _p \,\vert \, \cdots \}\).
-
2.
For any \(\alpha _p, \beta _p \in T^*_{p} (M)\), we have
$$\begin{aligned} g_{M, p} (\alpha _p, \beta _p) = g_p ({(\alpha _p)}^\sharp , {(\beta _p)}^\sharp ). \end{aligned}$$(4.6)
The above proposition shows that the Fisher co-metric on M can be defined from the Fisher co-metric on \(\mathcal {P}\) directly by (4.5) and (4.6), not by way of the Fisher metric.
Corollary 4.2
(The Cramér-Rao inequality) Suppose that an n-tuple of random variables \(\textbf{A}=(A^1, \dots , A^n )\in (\mathbb {R}^\Omega )^n\) satisfies
$$\begin{aligned} \forall i, \quad \delta _p (A^i) \big \vert _{T_{p}(M)} = (d\xi ^i)_p \end{aligned}$$(4.7)
for a coordinate system \(\xi = (\xi ^i)\) of an n-dimensional submanifold M of \(\mathcal {P}\) and for a point \(p\in M\). Letting \(V_p (\textbf{A}) = [v^{ij}] \in \mathbb {R}^{n\times n}\) be the variance-covariance matrix of \(\textbf{A}\) defined by
$$\begin{aligned} v^{ij}:= \textrm{Cov}_p (A^i, A^j), \end{aligned}$$(4.8)
and letting \(G_M (p)\) be the Fisher information matrix, we have
$$\begin{aligned} V_p (\textbf{A}) \ge G_M (p)^{-1}, \end{aligned}$$(4.9)
where \(\ge \) denotes the matrix inequality in the sense of positive semidefiniteness.
Proof
For an arbitrary column vector \(c= (c_i)\in \mathbb {R}^n\), let
$$\begin{aligned} \tilde{\alpha }_p:= \delta _p \Bigl (\sum _i c_i A^i\Bigr ) \in T^*_{p}(\mathcal {P}) \quad \text {and}\quad \alpha _p:= \sum _i c_i\, (d\xi ^i)_p \in T^*_{p}(M). \end{aligned}$$
Since (4.7) implies that \(\tilde{\alpha }_p\vert _{T_{p}(M)} = \alpha _p\), it follows from Proposition 4.1 that \(\Vert \tilde{\alpha }_p \Vert _p \ge \Vert \alpha _p \Vert _{M, p}\). Noting that \(\Vert \tilde{\alpha }_p \Vert _p^2 = {}^{\textrm{t}}{c}\, V_p (\textbf{A})\, c\) and \(\Vert \alpha _p \Vert _{{M, p}}^2 = {}^{\textrm{t}}{c}\, {G_M (p)}^{-1} c\), where \({}^{\textrm{t}}{}\) denotes the transpose, we obtain (4.9). \(\square \)
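The proof can be traced numerically. In the sketch below, a hypothetical two-dimensional submanifold M is represented only through a randomly chosen basis of \(T^{(\textrm{m})}_{p}(M)\), and random variables satisfying the local unbiasedness condition (4.7) at the single point \(p\) are produced by a linear projection; the matrix \(V_p (\textbf{A}) - G_M (p)^{-1}\) is then checked to be positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 6, 2                        # |Omega| = 6, dim M = 2
p = rng.random(n); p /= p.sum()

# Basis of T_p(M) in m-representation: d random vectors summing to 0.
B = rng.standard_normal((d, n)); B -= B.mean(axis=1, keepdims=True)

# Fisher information matrix G_M(p), computed via e-representations.
G = (B / p) @ B.T

# Random variables A^j with sum_w B[i,w] A^j(w) = delta_ij, i.e. a
# locally unbiased estimator at p (condition (4.7)), via projection.
A = rng.standard_normal((d, n))
A = A - (A @ B.T - np.eye(d)) @ np.linalg.inv(B @ B.T) @ B
assert np.allclose(B @ A.T, np.eye(d))

# Variance-covariance matrix of A and the Cramer-Rao bound G^{-1}.
V = A @ np.diag(p) @ A.T - np.outer(A @ p, A @ p)
gap = V - np.linalg.inv(G)
assert np.all(np.linalg.eigvalsh(gap) >= -1e-9)   # V >= G^{-1}
```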
5 On the e, m-connections
An affine connection is usually treated as a connection on the tangent bundle, while it corresponds to a connection on the cotangent bundle by the relation
$$\begin{aligned} X (\alpha (Y)) = (\nabla _X \alpha ) (Y) + \alpha (\nabla _X Y) \qquad (X, Y \in \mathfrak {X}(M), \;\; \alpha \in \mathfrak {D}(M)). \end{aligned}$$(5.1)
This correspondence is one-to-one, so that we can define an affine connection by specifying a connection on the cotangent bundle. Therefore, the \(\alpha \)-connection in information geometry can also be introduced in this way. Although affine connections are outside the main subject of this paper, we will briefly discuss the significance of defining the m-connection (i.e. the \((\alpha = -1)\)-connection) in this way, since it is closely related to the role of the Fisher co-metric in the Cramér-Rao inequality.
We start by introducing the m-connection \(\nabla ^{(\textrm{m})}\) on \(\mathcal {P}= \mathcal {P}(\Omega )\) as a flat connection on the cotangent bundle for which the 1-form \(d\langle A\rangle \) is parallel for any \(A\in \mathbb {R}^\Omega \); i.e.,
$$\begin{aligned} \forall A\in \mathbb {R}^\Omega , \quad \nabla ^{(\textrm{m})} d\langle A\rangle = 0. \end{aligned}$$(5.2)
Since \(\dim \{d \langle A\rangle \,\vert \, A\in \mathbb {R}^\Omega \} = \vert \Omega \vert - 1 = \dim \mathcal {P}\), (5.2) implies that every parallel 1-form is represented as \(d\langle A\rangle \) by some \(A\in \mathbb {R}^\Omega \). Then the correspondence (5.1) determines a connection on the tangent bundle, which is denoted by the same symbol \(\nabla ^{(\textrm{m})}\). Letting \(\alpha = d\langle A\rangle \) in (5.1) and applying (5.2), we have
$$\begin{aligned} X (Y \langle A\rangle ) = (\nabla ^{(\textrm{m})}_X Y) \langle A\rangle . \end{aligned}$$(5.3)
This implies that, for any \(Y\in \mathfrak {X}(\mathcal {P})\),
$$\begin{aligned} Y \;\;\text {is m-parallel} \;\Longleftrightarrow \; \text {the map}\;\; p\mapsto Y_p^{(\textrm{m})} \;\;\text {is constant}, \end{aligned}$$(5.4)
where “m-parallel” means “parallel w.r.t. \(\nabla ^{(\textrm{m})}\)”. Since this property characterizes the m-connection on \(\mathcal {P}\) (e.g. Equation (2.39) of [1]), our definition of the m-connection is equivalent to the usual definition in information geometry.
Next, we define the e-connection \(\nabla ^{(\textrm{e})}\) as the dual connection of \(\nabla ^{(\textrm{m})}\) w.r.t. the Fisher metric g ([1, 2]), which means that
$$\begin{aligned} X\, g (Y, Z) = g \bigl (\nabla ^{(\textrm{e})}_X Y,\, Z\bigr ) + g \bigl (Y,\, \nabla ^{(\textrm{m})}_X Z\bigr ) \qquad (X, Y, Z \in \mathfrak {X}(\mathcal {P})). \end{aligned}$$(5.5)
Using (5.1), we can rewrite (5.5) into
$$\begin{aligned} Y \, {\mathop {\longleftrightarrow }\limits ^{g}} \, \alpha \;\Longrightarrow \; \nabla ^{(\textrm{e})}_X Y \, {\mathop {\longleftrightarrow }\limits ^{g}} \, \nabla ^{(\textrm{m})}_X \alpha . \end{aligned}$$(5.6)
This implies that, for any \(X\in \mathfrak {X}(\mathcal {P})\) and \(\alpha \in \mathfrak {D}(\mathcal {P})\),
Now, let us recall the situation of Corollary 4.2. An estimator \(\textbf{A}=(A^1, \dots {,} \, A^n )\) is said to be efficient for the statistical model \((M, \xi )\) when it is unbiased (i.e. \(\forall i\), \(\xi ^i = \langle A^i\rangle \vert _{M}\)) and achieves the equality in the Cramér-Rao inequality (4.9) for every \(p\in M\). Noting that the achievability at each \(p\in M\) is represented by the condition \(\forall i\), \(\delta _p (A^i) {= (d\langle A^i\rangle )_p} = {((d\xi ^i)_p)}^\sharp \) and recalling (5.2), we can see that the condition for \((M, \xi )\) to have an efficient estimator is expressed as
On the other hand, it is well known that the existence of an efficient estimator is equivalent to the condition that M is an exponential family and that \(\xi \) is an expectation coordinate system, which can be rephrased as (see Theorem 3.12 of [1])
Therefore, the two conditions (5.8) and (5.9) are necessarily equivalent. These are both purely geometrical conditions for a submanifold of the dually flat space \((\mathcal {P}, g, \nabla ^{(\textrm{e})}, \nabla ^{(\textrm{m})})\), and we can prove their equivalence within this geometrical framework, forgetting its statistical background. Indeed, the equivalence can be proved in a more general situation where M is a submanifold of a manifold S equipped with a Riemannian metric g and a pair of dual affine connections \(\nabla , \nabla ^*\), on the assumption that \(\nabla ^*\) is flat. Note that this assumption is weaker than the dual flatness of \((S, g, \nabla , \nabla ^*)\) in that \(\nabla \) is allowed to have non-vanishing torsion, which is essential in application to quantum estimation theory. See Sect. 7 of [4] for details.
6 Monotonicity
The monotonicity with respect to a Markov map is known to be an important and characteristic property of the Fisher metric. In this section we discuss the monotonicity of the Fisher co-metric and its relation to the variance of random variables.
Let \(\Omega _1\) and \(\Omega _2\) be arbitrary finite sets, and let \(\mathcal {P}_i:= \mathcal {P}(\Omega _i)\) for \(i=1, 2\). A map \(\Phi : \mathcal {P}_1 \rightarrow \mathcal {P}_2\) is called a Markov map when it is affine in the sense that \(\forall p, q\in \mathcal {P}_1 \), \(0\le \forall a \le 1\), \(\Phi (a p + (1-a) q ) = a\, \Phi (p) + (1-a)\, \Phi (q)\). Every Markov map \(\Phi \) is represented as
$$\begin{aligned} \Phi (p) (\omega _2) = \sum _{\omega _1\in \Omega _1} W (\omega _2 \,\vert \,\omega _1)\, p(\omega _1) \qquad (\omega _2 \in \Omega _2), \end{aligned}$$(6.1)
where W is a surjective channel from \(\Omega _1\) to \(\Omega _2\); i.e., \(W \ge 0\) satisfies
$$\begin{aligned} \forall \omega _1\in \Omega _1, \quad \sum _{\omega _2\in \Omega _2} W (\omega _2 \,\vert \,\omega _1) = 1 \end{aligned}$$(6.2)
and
$$\begin{aligned} \forall \omega _2\in \Omega _2, \; \exists \omega _1\in \Omega _1, \quad W (\omega _2 \,\vert \,\omega _1) > 0. \end{aligned}$$(6.3)
When \(\Phi \) is represented as (6.1), we write \(\Phi = \Phi _W\).
More generally, for a submanifold M of \(\mathcal {P}_1\) and a submanifold N of \(\mathcal {P}_2\), a map \(\varphi : M \rightarrow N\) is called a Markov map when there exists a Markov map \(\Phi : \mathcal {P}_1 \rightarrow \mathcal {P}_2\) such that \(\varphi = \Phi \vert _M\). Since a Markov map \(\varphi \) is smooth, it induces at each \(p\in M\) the differential
$$\begin{aligned} \varphi _* = \varphi _{*, p}: T_{p} (M) \rightarrow T_{\varphi (p)} (N) \end{aligned}$$(6.4)
and its dual
$$\begin{aligned} \varphi ^* = \varphi ^*_p:= {}^{\textrm{t}}{(\varphi _{*, p})}: T^*_{\varphi (p)} (N) \rightarrow T^*_{p} (M), \end{aligned}$$(6.5)
where \({}^{\textrm{t}}{}\) denotes the transpose of a linear map. See Remark 6.2 below for the notation \(\varphi ^* = \varphi ^*_p\).
As is well known, the Fisher metric satisfies the following monotonicity property for its norm:
$$\begin{aligned} \forall X_p\in T_{p}(M), \quad \Vert \varphi _* (X_p) \Vert _{N, \varphi (p)} \le \Vert X_p \Vert _{M, p}. \end{aligned}$$(6.6)
The cotangent version of the monotonicity is given below.
Proposition 6.1
We have
$$\begin{aligned} \forall \alpha _{\varphi (p)}\in T^*_{\varphi (p)}(N), \quad \Vert \varphi ^* (\alpha _{\varphi (p)}) \Vert _{M, p} \le \Vert \alpha _{\varphi (p)} \Vert _{N, \varphi (p)}, \end{aligned}$$(6.7)
where \(\Vert \cdot \Vert _{M, p}\) and \(\Vert \cdot \Vert _{N, \varphi (p)}\) denote the norms w.r.t. the Fisher co-metrics \(g_{M}\) and \(g_{N}\), respectively.
Proof
Since the inequality is trivial when \(\Vert \varphi ^*(\alpha _{\varphi (p)}) \Vert _{M, p} = 0\), we assume \(\Vert \varphi ^*(\alpha _{\varphi (p)}) \Vert _{M, p} > 0\). Then, invoking (3.6), we have
where the third equality follows since \(X_p\) achieving \(\max _{X_p\in T_{p} (M) \setminus \{0\}}\) should satisfy \(\varphi _*(X_p) \ne 0\) due to the assumption \(\Vert \varphi ^*(\alpha _{\varphi (p)}) \Vert _{M, p} > 0\), and the first \(\le \) follows from (6.6). \(\square \)
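The monotonicity (6.6) invoked in the proof can also be checked numerically; in the sketch below (the channel \(W\), the point \(p\), and the tangent vector are arbitrary choices), m-representations are pushed forward linearly through \(W\), and the Fisher norm is seen not to increase:

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2 = 6, 4
W = rng.random((n1, n2)); W /= W.sum(axis=1, keepdims=True)  # a channel
p = rng.random(n1); p /= p.sum()
q = p @ W                                    # q = Phi_W(p)

# m-representations push forward linearly through W.
X = rng.standard_normal(n1); X -= X.mean()   # X_p in m-representation
Y = X @ W                                    # Phi_*(X_p) in m-representation
assert np.isclose(Y.sum(), 0.0)

# Fisher squared norms: ||X_p||_p^2 = sum_x X(x)^2 / p(x), likewise at q.
assert np.sum(Y * Y / q) <= np.sum(X * X / p) + 1e-12
```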
Remark 6.2
We have written \(\varphi ^* (\alpha _{\varphi (p)})\) for \(\varphi ^*_p (\alpha _{\varphi (p)})\) above (and will use similar notations throughout the paper), considering that omitting p from \(\varphi ^*_p\) is harmless in the context and is better for the readability of expressions. Note that the notation \(\varphi ^* (\alpha _{\varphi (p)})\), if it appears alone, is mathematically ambiguous unless \(\varphi ^{-1} (\varphi (p))\) is the singleton \(\{p\}\) in that \(\varphi ^*_{p'} (\alpha _{\varphi (p)}) \in T^*_{p'}(M)\) depends on a choice of \(p' \in \varphi ^{-1} (\varphi (p))\). On the other hand, the notation \(\varphi _*(X_p)\) has no such ambiguity, since we know that the argument \(X_p\) belongs to \(T_{p}(M)\) and hence \(\varphi _*\) must be \(\varphi _{*, p}\).
Let us consider the case when \(M = \mathcal {P}_1\) and \(N = \mathcal {P}_2\), and let \(\Phi = \Phi _W: \mathcal {P}_1 \rightarrow \mathcal {P}_2\) be an arbitrary Markov map represented by a surjective channel W. Recalling (6.1) and the definition of m-representation of tangent vectors, we have
$$\begin{aligned} Y_{\Phi (p)} = \Phi _* (X_p) \;\Longleftrightarrow \; Y_{\Phi (p)}^{(\textrm{m})} (\omega _2) = \sum _{\omega _1\in \Omega _1} W (\omega _2 \,\vert \,\omega _1)\, X_p^{(\textrm{m})} (\omega _1) \quad (\forall \omega _2\in \Omega _2) \end{aligned}$$(6.8)
for \(X_p \in T_{p}(\mathcal {P}_1)\) and \(Y_{\Phi (p)}\in T_{\Phi (p)} (\mathcal {P}_2)\). We claim that
$$\begin{aligned} \forall A\in \mathbb {R}^{\Omega _2}, \quad \Phi ^* \bigl (\delta _{\Phi (p)} (A)\bigr ) = \delta _p \bigl (E_W (A\,\vert \,\cdot )\bigr ), \end{aligned}$$(6.9)
where \(\Phi ^* = \Phi ^*_p\), and \(E_W (A\,\vert \,\cdot ) \in \mathbb {R}^{\Omega _1}\) denotes the conditional expectation of A defined by
$$\begin{aligned} E_W (A\,\vert \,\omega _1):= \sum _{\omega _2\in \Omega _2} W (\omega _2 \,\vert \,\omega _1)\, A(\omega _2). \end{aligned}$$(6.10)
Eq. (6.9) is verified as follows; for every \(\beta _{p} = \delta _p (B) \in T^*_{p} (\mathcal {P}_1)\), where \(B \in \mathbb {R}^{\Omega _2}\), we have
where the second \(\Leftrightarrow \) follows from (2.6) and (6.8), the third \(\Leftrightarrow \) follows from \(T^{(\textrm{m})}_{p}(\mathcal {P}) = \bigl (\mathbb {R}^\Omega \bigr )_0\), and \(\mathbb {R}\) is identified with the set of constant functions on \(\Omega _1\).
Invoking (2.10) and (6.9), we see that the monotonicity (6.7) is equivalent to the following well-known inequality for the variance:
$$\begin{aligned} \forall A\in \mathbb {R}^{\Omega _2}, \quad V_p \bigl (E_W (A\,\vert \,\cdot )\bigr ) \le V_{\Phi _W (p)} (A), \end{aligned}$$(6.12)
which we refer to as the monotonicity of the variance.
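The monotonicity of the variance (6.12) is easily verified numerically; the sketch below (with an arbitrarily chosen channel \(W\), point \(p\), and random variable \(B\)) also confirms that the conditional expectation \(E_W (B\,\vert \,\cdot )\) preserves expectations, which underlies the identification (6.9):

```python
import numpy as np

rng = np.random.default_rng(4)
n1, n2 = 5, 7
W = rng.random((n1, n2)); W /= W.sum(axis=1, keepdims=True)  # a channel
p = rng.random(n1); p /= p.sum()
q = p @ W                          # q = Phi_W(p)

B = rng.standard_normal(n2)        # a random variable on Omega_2
EB = W @ B                         # conditional expectation E_W(B | .)

def var(p, A):
    return p @ (A * A) - (p @ A) ** 2

# Conditional expectation preserves means: <E_W(B|.)>_p = <B>_{Phi_W(p)}.
assert np.isclose(p @ EB, q @ B)

# Monotonicity of the variance (6.12): V_p(E_W(B|.)) <= V_{Phi_W(p)}(B).
assert var(p, EB) <= var(q, B) + 1e-12
```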
In the above proof of Proposition 6.1, we derived (6.7) from (6.6). Conversely, we can derive (6.6) from (6.7) by the use of (3.5) as follows; for any \(X_p\in T_{p}(M)\),
where we have assumed \(\Vert \varphi _*(X_p) \Vert _{\varphi (p)} >0\) with no loss of generality, which yields the third equality (cf. the proof of Proposition 6.1), and the first \(\le \) follows from (6.7). Thus, (6.6) and (6.7) are equivalent. Note that this equivalence is derived solely from a general argument on metrics and co-metrics, and does not rely on the special characteristics of the Fisher metric/co-metric. In this sense, we say that (6.6) and (6.7) are logically equivalent.
Recalling that the Fisher metric is characterized as the unique monotone metric up to a constant factor, we obtain the following propositions from the logical equivalence mentioned above.
Proposition 6.3
The monotonicity (6.7) characterizes the Fisher co-metric up to a constant factor.
Proposition 6.4
The variance is characterized up to a constant factor as the positive quadratic form for random variables satisfying the monotonicity (6.12).
Remark 6.5
We have described the above propositions in a rough form for the sake of readability. For the exact statement, we need a formulation similar to Theorems 7.1, 7.2 and 7.3 in the next section. See also Remark 8.4.
Remark 6.6
Since the monotonicity of the Fisher metric (6.6), that of the Fisher co-metric (6.7), and that of the variance (6.12) are all logically equivalent, we can derive (6.6) from the more popular (6.12).
7 Invariance
Čencov showed in [3] that the Fisher metric is characterized up to a constant factor as a covariant tensor field of degree 2 satisfying the invariance for Markov embeddings. Note that the invariance is weaker than the monotonicity and that the tensor field is assumed to be neither positive nor symmetric. In this section we review Čencov’s theorem and then investigate its co-metric version, which will be shown to be equivalent to a theorem characterizing the variance/covariance of random variables.
We begin by reviewing the invariance property of the Fisher metric. Suppose that M and N are arbitrary submanifolds of \(\mathcal {P}_1 = \mathcal {P}(\Omega _1)\) and \(\mathcal {P}_2 = \mathcal {P}(\Omega _2)\), respectively, and that a pair of Markov maps
$$\begin{aligned} \varphi: M \rightarrow N \quad \text {and}\quad \psi: N \rightarrow M \end{aligned}$$(7.1)
satisfies
$$\begin{aligned} \psi \circ \varphi = \textrm{id}_M, \end{aligned}$$(7.2)
where \(\circ \) denotes the composition of maps. Note that \(\varphi \) is injective while \(\psi \) is surjective. Given a pair of points \((p, q) \in M\times N\) satisfying
$$\begin{aligned} q = \varphi (p), \end{aligned}$$(7.3)
we have
$$\begin{aligned} \psi _{*, q} \circ \varphi _{*, p} = \textrm{id}_{T_{p}(M)}. \end{aligned}$$(7.4)
It then follows from the monotonicity (6.6) that
$$\begin{aligned} \Vert X_p \Vert _{M, p} = \Vert \psi _* (\varphi _* (X_p)) \Vert _{M, p} \le \Vert \varphi _* (X_p) \Vert _{N, q} \le \Vert X_p \Vert _{M, p}, \end{aligned}$$(7.5)
so that we have the invariance of the Fisher metric
$$\begin{aligned} \forall X_p\in T_{p}(M), \quad \Vert \varphi _* (X_p) \Vert _{N, q} = \Vert X_p \Vert _{M, p}, \end{aligned}$$(7.6)
which is equivalent to
$$\begin{aligned} \forall X_p, Y_p\in T_{p}(M), \quad g_{N, q} \bigl (\varphi _* (X_p), \varphi _* (Y_p)\bigr ) = g_{M, p} (X_p, Y_p). \end{aligned}$$(7.7)
This means that \(\varphi _{*, p}: T_{p}(M) \rightarrow T_{q} (N)\) is an isometry, which is represented as
$$\begin{aligned} (\varphi _{*, p})^\dagger \circ \varphi _{*, p} = \textrm{id}_{T_{p}(M)}, \end{aligned}$$(7.8)
where \((\varphi _{*, p})^\dagger : T_{q}(N) \rightarrow T_{p} (M)\) denotes the adjoint (Hermitian conjugate) of \(\varphi _{*, p} \) w.r.t. the inner products \(g_{M, p}\) and \(g_{N, q}\).
A Markov map \(\Phi : \mathcal {P}_1\rightarrow \mathcal {P}_2\) is called a Markov embedding when there exists a Markov map \(\Psi : \mathcal {P}_2\rightarrow \mathcal {P}_1\) such that
$$\begin{aligned} \Psi \circ \Phi = \textrm{id}_{\mathcal {P}_1}. \end{aligned}$$(7.9)
Note that \(\vert \Omega _1\vert \le \vert \Omega _2\vert \) necessarily holds in this case. As a special case of the invariance (7.7), we have
$$\begin{aligned} \forall p\in \mathcal {P}_1, \; \forall X_p, Y_p\in T_{p}(\mathcal {P}_1), \quad g_{\Phi (p)} \bigl (\Phi _* (X_p), \Phi _* (Y_p)\bigr ) = g_p (X_p, Y_p). \end{aligned}$$(7.10)
According to Čencov, this property characterizes the Fisher metric up to a constant factor. The exact statement is presented below.
Theorem 7.1
(Čencov [3]) For \(n=2, 3, \ldots \), let \(\Omega _n:= \{1, 2, \ldots , n\}\) and \(\mathcal {P}_n:= \mathcal {P}(\Omega _n)\), and let \(g_n\) be the Fisher metric on \(\mathcal {P}_n\). Suppose that we are given a sequence \(\{h_n\}_{n=2}^\infty \), where \(h_n\) is a covariant tensor field of degree 2 on \(\mathcal {P}_n\) which continuously maps each point \(p\in \mathcal {P}_n\) to a bilinear form \(h_{n, p}: T_{p} (\mathcal {P}_n)^2 \rightarrow \mathbb {R}\). Then the following two conditions are equivalent.
-
(i)
\(\exists c\in \mathbb {R}, \, \forall n, \;\; h_n = c g_n\).
-
(ii)
For any \(m\le n\) and any Markov embedding \(\Phi : \mathcal {P}_m \rightarrow \mathcal {P}_n\), it holds that
$$\begin{aligned} \forall p\in \mathcal {P}_m,\,&\forall X_{p}, \forall Y_p \in T_{p} (\mathcal {P}_m), \nonumber \\&h_{m, p} (X_{p}, Y_{p}) = h_{n, \Phi (p)} (\Phi _* (X_p), \Phi _* (Y_p)). \end{aligned}$$(7.11)
We now proceed to the invariance property of co-metrics. Let us consider the same situation as (7.1)-(7.3), which implies that
$$\begin{aligned} \varphi ^*_p \circ \psi ^*_q = \textrm{id}_{T^*_{p}(M)}. \end{aligned}$$(7.12)
Then it follows from the monotonicity (6.7) that, for any \(\alpha _{p}\in T^*_{p}(M)\),
$$\begin{aligned} \Vert \alpha _p \Vert _{M, p} = \Vert \varphi ^* (\psi ^* (\alpha _p)) \Vert _{M, p} \le \Vert \psi ^* (\alpha _p) \Vert _{N, q} \le \Vert \alpha _p \Vert _{M, p}, \end{aligned}$$(7.13)
so that we have the invariance of the Fisher co-metric
$$\begin{aligned} \forall \alpha _p\in T^*_{p}(M), \quad \Vert \psi ^* (\alpha _p) \Vert _{N, q} = \Vert \alpha _p \Vert _{M, p}, \end{aligned}$$(7.14)
which is equivalent to
$$\begin{aligned} \forall \alpha _p, \beta _p\in T^*_{p}(M), \quad g_{N, q} \bigl (\psi ^* (\alpha _p), \psi ^* (\beta _p)\bigr ) = g_{M, p} (\alpha _p, \beta _p). \end{aligned}$$(7.15)
This means that \(\psi ^*_{q}: T^*_{p}(M) \rightarrow T^*_{q} (N)\) is an isometry, which is represented as
$$\begin{aligned} (\psi ^*_{q})^\dagger \circ \psi ^*_{q} = \textrm{id}_{T^*_{p}(M)}, \end{aligned}$$(7.16)
where \((\psi ^*_{q} )^\dagger : T^*_{q}(N) \rightarrow T^*_{p} (M)\) denotes the adjoint (Hermitian conjugate) of \(\psi ^*_{q}\) w.r.t. the inner products \(g_{M, p}\) and \(g_{N, q}\) on the cotangent spaces. Due to \(\psi ^*_{q} = {}^{\textrm{t}}{(\psi _{*, q})}\), (7.16) can be rewritten as
$$\begin{aligned} \psi _{*, q} \circ (\psi _{*, q})^\dagger = \textrm{id}_{T_{p}(M)}. \end{aligned}$$(7.17)
We observed in Sect. 6 that the monotonicity of metrics (6.6) and that of co-metrics (6.7) are logically equivalent. On the other hand, such an equivalence does not hold for the invariance properties (7.6) and (7.14). Indeed, a linear-algebraic consideration shows that the implications (7.4) \(\wedge \) (7.8) \(\Rightarrow \) (7.17) and (7.4) \(\wedge \) (7.17) \(\Rightarrow \) (7.8) do not hold unless \(\varphi _{*, p}\) is a linear isomorphism (cf. Lemma 8.2). This means that we cannot expect Čencov’s theorem to yield a corollary which states that the Fisher co-metric is characterized by the invariance (7.14). Nevertheless, the statement itself is true as explained below.
Let us return to the situation of (7.9). We call a Markov map \(\Psi : \mathcal {P}_2 \rightarrow \mathcal {P}_1\) a Markov co-embedding when there exists a Markov embedding \(\Phi : \mathcal {P}_1 \rightarrow \mathcal {P}_2\) satisfying (7.9). As an example of (7.14), (7.9) implies the invariance
$$\begin{aligned} \forall q\in \Phi (\mathcal {P}_1), \; \forall \alpha \in T^*_{\Psi (q)}(\mathcal {P}_1), \quad \Vert \Psi ^* (\alpha ) \Vert _{q} = \Vert \alpha \Vert _{\Psi (q)}, \end{aligned}$$(7.18)
which can be rewritten as
Actually, the range \(\forall q\in \Phi (\mathcal {P}_1)\) in the above equation can be extended to \(\forall q\in \mathcal {P}_2\) for the reason described below.
It is known (e.g. Lemma 9.5 of [3]) that every pair \((\Phi , \Psi )\) of Markov embedding and co-embedding satisfying (7.9) is represented in the following form:
and
where F is a surjection \(\Omega _2\rightarrow \Omega _1\) which yields the partition \(\Omega _2 = \bigsqcup _{x\in \Omega _1} F^{-1} (x)\), and \(\{r_x\}_{x\in \Omega _1}\) is a family of probability distributions on \(\Omega _2\) such that the support of \(r_x\) is \(F^{-1}(x)\) for every \(x\in \Omega _1\). We note that a Markov co-embedding \(\Psi \) is determined by F alone, while a Markov embedding \(\Phi \) is determined by F and \(\{r_x\}_{x\in \Omega _1}\) together. Consequently, \(\Psi \) is uniquely determined from \(\Phi \), while \(\Phi \) for a given \(\Psi \) has the degree of freedom corresponding to \(\{r_x\}_{x\in \Omega _1}\). According to this fact, when a Markov co-embedding \(\Psi \) and a distribution \(q\in \mathcal {P}_2\) are arbitrarily given, we can always choose a Markov embedding \(\Phi \) satisfying (7.9) and \(q \in \Phi (\mathcal {P}_1)\); indeed, defining \(r_x\) by
the resulting \(\Phi \) satisfies \(q = \Phi (q^F) \in \Phi (\mathcal {P}_1)\). This is the reason why \(\forall q\in \Phi (\mathcal {P}_1)\) in (7.19) can be replaced with \(\forall q\in \mathcal {P}_2\). We thus have
or equivalently,
for every Markov co-embedding \(\Psi \).
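The representation (7.20)–(7.21), with \(\{r_x\}\) chosen as in (7.22), can be checked numerically. The following Python sketch (variable names ours; distributions identified with probability vectors) verifies that \(\Psi \circ \Phi = id \) and that the resulting \(\Phi \) satisfies \(q = \Phi (q^F)\):

```python
import math

# Psi is the pushforward q |-> q^F along a surjection F, and Phi is the
# Markov embedding built from q via r_{F(y)}(y) = q(y) / q^F(F(y)), cf. (7.22).
F = [0, 0, 1, 2, 2]                       # surjection Omega_5 -> Omega_3

def psi(q):                               # Markov co-embedding: sum over each fiber
    qF = [0.0, 0.0, 0.0]
    for y, qy in enumerate(q):
        qF[F[y]] += qy
    return qF

q = [0.1, 0.2, 0.3, 0.15, 0.25]
qF = psi(q)
r = [q[y] / qF[F[y]] for y in range(5)]   # r_{F(y)}(y) as in (7.22)

def phi(p):                               # the induced Markov embedding
    return [p[F[y]] * r[y] for y in range(5)]

p = [0.5, 0.2, 0.3]
assert all(math.isclose(a, b) for a, b in zip(psi(phi(p)), p))   # Psi o Phi = id
assert all(math.isclose(a, b) for a, b in zip(phi(qF), q))       # q = Phi(q^F)
```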
The invariance (7.23) characterizes the Fisher co-metric up to a constant factor. Namely, we have the following theorem.
Theorem 7.2
For \(n=2, 3, \ldots \), let \(\Omega _n:= \{1, 2, \ldots , n\}\) and \(\mathcal {P}_n:= \mathcal {P}(\Omega _n)\), and let \(g_n\) be the Fisher co-metric on \(\mathcal {P}_n\). Suppose that we are given a sequence \(\{h_n\}_{n=2}^\infty \), where \(h_n\) is a contravariant tensor field of degree 2 on \(\mathcal {P}_n\) which continuously maps each point \(p\in \mathcal {P}_n\) to a bilinear form \(h_{n, p}: T^*_{p} (\mathcal {P}_n)^2 \rightarrow \mathbb {R}\). Then the following two conditions are equivalent.
-
(i)
\(\exists c\in \mathbb {R}, \, \forall n, \;\; h_n = c g_n\).
-
(ii)
For any \(m\le n\) and any Markov co-embedding \(\Psi : \mathcal {P}_n \rightarrow \mathcal {P}_m\), it holds that
$$\begin{aligned} \forall q\in \mathcal {P}_n, \, \forall \alpha _{\Psi (q)},&\forall \beta _{\Psi (q)} \in T^*_{\Psi (q)} (\mathcal {P}_m),\nonumber \\&h_{m, \Psi (q)} (\alpha _{\Psi (q)}, \beta _{\Psi (q)}) = h_{n, q} (\Psi ^* (\alpha _{\Psi (q)}), \Psi ^* (\beta _{\Psi (q)})). \end{aligned}$$(7.25)
The proof will be given by rewriting the statement in terms of variance/covariance of random variables. Suppose that a Markov co-embedding \(\Psi : \mathcal {P}_2\rightarrow \mathcal {P}_1\) is represented as (7.20) by a surjection \(F: \Omega _2 \rightarrow \Omega _1\). Then \(\Psi \) is represented as \(\Psi = \Phi _W\) by the channel W from \(\Omega _2\) to \(\Omega _1\) defined by
For an arbitrary \(A\in \mathbb {R}^{\Omega _1}\), its conditional expectation w.r.t. W is represented as
so that it follows from (6.9) that
where we have invoked \(\Psi (q) = q^F\) from (7.20). Hence, (7.23) and (7.24) are rewritten as
and
These identities themselves are obvious, but what is important is that they characterize the variance/covariance up to a constant factor. Namely, we have the following theorem.
Theorem 7.3
In the same situation as Theorem 7.2, suppose that we are given a sequence \(\{\gamma _n\}_{n=2}^\infty \), where \(\gamma _n\) is a map which continuously maps each point \(p\in \mathcal {P}_n\) to a bilinear form \(\gamma _{n, p}\) on \(\mathbb {R}^{\Omega _n}\). Then the following conditions (i) and (ii) are equivalent.
-
(i)
\(\exists c\in \mathbb {R}, \, \forall n, \forall p\in \mathcal {P}_n, \;\; \gamma _{n, p} = c\, \textrm{Cov}_p\).
-
(ii)
(ii-1) \(\wedge \) (ii-2)
-
(ii-1)
\(\forall n, \forall p\in \mathcal {P}_n, \forall A\in \mathbb {R}^{\Omega _n}, \;\; \gamma _{n, p} (A, 1) =0\).
-
(ii-2)
For any \(m\le n\) and any surjection \(F: \Omega _n \rightarrow \Omega _m\), it holds that
$$\begin{aligned} \forall p\in \mathcal {P}_n, \forall A, B \in \mathbb {R}^{\Omega _m}, \;\; \gamma _{m, p^F} (A, B) = \gamma _{n, p} (A\circ F, B\circ F). \end{aligned}$$(7.31)
Note that, if we assume that \(\{\gamma _n\}_n\) are all symmetric tensors, then (7.31) can be replaced with
which corresponds to (7.29).
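That \(\gamma = \textrm{Cov}\) satisfies conditions (ii-1) and (ii-2) can be verified numerically. A minimal Python sketch for one instance of (7.31) (names ours):

```python
import math

# Check (7.31) for gamma = Cov: Cov_{p^F}(A, B) = Cov_p(A o F, B o F),
# together with (ii-1): Cov_p(A, 1) = 0.
F = [0, 1, 1, 2, 0]                       # surjection F: Omega_5 -> Omega_3

def pushforward(p):                       # p^F
    pF = [0.0, 0.0, 0.0]
    for y, py in enumerate(p):
        pF[F[y]] += py
    return pF

def mean(p, A):
    return sum(pi * a for pi, a in zip(p, A))

def cov(p, A, B):
    return mean(p, [a * b for a, b in zip(A, B)]) - mean(p, A) * mean(p, B)

p = [0.1, 0.25, 0.05, 0.4, 0.2]
A, B = [1.0, -2.0, 0.5], [3.0, 0.0, -1.0] # random variables on Omega_3
AF = [A[F[y]] for y in range(5)]          # A o F
BF = [B[F[y]] for y in range(5)]
assert math.isclose(cov(pushforward(p), A, B), cov(p, AF, BF))   # (7.31)
assert abs(cov(p, AF, [1.0] * 5)) < 1e-9                          # (ii-1)
```

Both identities follow from \(\langle A\circ F\rangle _p = \langle A\rangle _{p^F}\), i.e. the expectation is preserved under the pushforward.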
See A1 in Appendix for the proof, where we use an argument similar to Čencov’s proof of Theorem 7.1. It is obvious that Theorem 7.2 immediately follows from this theorem.
Remark 7.4
If we delete (ii-1) from (ii) in Theorem 7.3, then we have (i)\({}'\) \(\Leftrightarrow \) (ii-2) by replacing (i) with
- (i)\({}'\):
-
\( \exists c_1, \exists c_2\in \mathbb {R}, \, \forall n, \forall p\in \mathcal {P}_n,\)
\( \forall A, B\in \mathbb {R}^{\Omega _n}, \;\; \gamma _{n, p}(A, B) = c_1\, \langle A, B\rangle _p + c_2\, \langle A\rangle _p \langle B\rangle _p.\)
We give a proof for (i)\({}'\) \(\Leftrightarrow \) (ii-2) in A1, from which Theorem 7.3 is straightforward.
8 Strong invariance
In the preceding two sections, we have observed the following facts.
-
The monotonicity of metrics and that of co-metrics are logically equivalent.
-
The monotonicity logically implies the invariance of metrics and that of co-metrics.
-
The invariance of metrics and that of co-metrics are not logically equivalent.
In this section we introduce a new notion of invariance called the strong invariance, and show that:
-
The strong invariance of metrics and that of co-metrics are logically equivalent.
-
The monotonicity of metrics/co-metrics logically implies the strong invariance of metrics/co-metrics.
-
The strong invariance of metrics/co-metrics logically implies the invariance of metrics and that of co-metrics.
Recall the situation of (7.1), (7.2) and (7.3), where we are given submanifolds \(M \subset \mathcal {P}_1\) and \(N \subset \mathcal {P}_2\), Markov mappings \(\varphi : M \rightarrow N\) and \(\psi : N \rightarrow M\) satisfying \(\psi \circ \varphi = id _M\), and points \(p\in M\) and \(q\in N\) satisfying \(q = \varphi (p)\) and \(p = \psi (q)\). Then we have the following proposition.
Proposition 8.1
The Fisher metrics \(g_M\) and \(g_N\) on M and N satisfy
or equivalently,
In addition, \((\psi _{*, q})^\dagger \circ \psi _{*, q} = \varphi _{*, p} \circ \psi _{*, q}\) is the orthogonal projector from \(T_{q} (N)\) onto \(\varphi _{*, p} (T_{p} (M)) = T_{q} (\varphi (M))\).
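The projector claim of Proposition 8.1 can be tested numerically for a Markov embedding/co-embedding pair in the fiber representation (our construction; the Fisher inner product on \(T_{q}(\mathcal {P}_n)\) is taken in coordinates as \(\sum _i v_i w_i / q(i)\)):

```python
import math

# Check that phi_* o psi_* is the g-orthogonal projector (idempotent and
# self-adjoint w.r.t. the Fisher inner product at q). Names are ours.
F = [0, 0, 1, 2, 2]
r = [0.3, 0.7, 1.0, 0.4, 0.6]             # r_{F(y)}(y)
p = [0.2, 0.5, 0.3]
q = [p[F[y]] * r[y] for y in range(5)]    # q = Phi(p)

def psi_star(V):                          # differential of the (linear) co-embedding
    S = [0.0, 0.0, 0.0]
    for y, v in enumerate(V):
        S[F[y]] += v
    return S

def phi_star(X):                          # differential of the embedding
    return [X[F[y]] * r[y] for y in range(5)]

def proj(V):                              # candidate orthogonal projector on T_q
    return phi_star(psi_star(V))

def g(q, V, W):                           # Fisher inner product at q
    return sum(v * w / qi for v, w, qi in zip(V, W, q))

V = [0.05, -0.1, 0.2, -0.25, 0.1]         # tangent vectors (components sum to 0)
W = [-0.3, 0.1, 0.05, 0.05, 0.1]
PV = proj(V)
assert all(math.isclose(a, b) for a, b in zip(proj(PV), PV))      # idempotent
assert math.isclose(g(q, PV, W), g(q, V, proj(W)))                # self-adjoint
```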
The property (8.1) is called the strong invariance of the Fisher metric. The proposition will be proved by using the following lemma.
Lemma 8.2
Let U and V be finite-dimensional metric linear spaces, and let \(A: U\rightarrow V\) and \(B: V\rightarrow U\) be linear maps satisfying \(B A = I\). Then the following two conditions are equivalent.
-
(i)
\(A^\dagger A = B B^\dagger = I\).
-
(ii)
\(B = A^\dagger \).
When these conditions hold, \(B^\dagger B = A B\) is the orthogonal projector from V onto the image \(\textrm{Im}\, A\) of A.
Proof
It is obvious that (ii) \(\Rightarrow \) (i) under the assumption \(B A = I\). Conversely, if we assume (i) with \(B A = I\), then we have
from which (ii) follows.
Assume (i) and (ii). Then \((B^\dagger B)^2 = B^\dagger B\) due to \(B B^\dagger =I\), which implies that \(B^\dagger B\) is the orthogonal projector onto \(\textrm{Im}\, B^\dagger = \textrm{Im}\, A\). \(\square \)
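Lemma 8.2 is easy to illustrate in coordinates. In the sketch below (our example) we take the standard inner products, so that \(\dagger \) is the transpose; A has orthonormal columns and \(B = A^\dagger \), so BA = I, condition (i) holds, and \(B^\dagger B = AB\) is the orthogonal projector onto \(\textrm{Im}\, A\):

```python
import math

# Lemma 8.2 with U = R^2, V = R^3 and standard inner products (dagger = transpose).
A = [[1.0, 0.0],
     [0.0, 0.6],
     [0.0, 0.8]]                          # columns are orthonormal in R^3

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

B = transpose(A)                          # B = A^dagger, so (ii) holds
BA = matmul(B, A)                         # = I_2
P = matmul(A, B)                          # = B^dagger B, candidate projector
P2 = matmul(P, P)
assert all(math.isclose(BA[i][j], float(i == j)) for i in range(2) for j in range(2))
assert all(math.isclose(P2[i][j], P[i][j]) for i in range(3) for j in range(3))  # P^2 = P
assert all(math.isclose(P[i][j], P[j][i]) for i in range(3) for j in range(3))  # symmetric
```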
Proof of Proposition 8.1
Letting \(U:= T_{p} (M)\), \(V:=T_{q} ({N})\), \(A:= \varphi _{*, p}\) and \(B:= \psi _{*,q}\) in the previous lemma, we see from (7.4), (7.8) and (7.17) that the assumption \(BA = I\) and the condition (i) are satisfied, so that we have (ii), which means (8.2). \(\square \)
As can be seen from Lemma 8.2, the strong invariance (8.1) is equivalent to the condition that both the invariance for the metric (7.6)–(7.8) and the invariance for the co-metric (7.14)–(7.16) hold. Taking the transpose of both sides of (8.2), the strong invariance is also expressed as
which means that the Fisher co-metric satisfies
Since the strong invariance (8.4) logically implies the invariance (7.7) for the Fisher metric via (8.1), we see that the following proposition, which is stated in a rough form similar to Proposition 6.3, is obtained as a corollary of Čencov’s theorem (Theorem 7.1).
Proposition 8.3
The strong invariance (8.4) characterizes the Fisher co-metric up to a constant factor.
An exact formulation of this proposition will be given in A2 of Appendix with a proof based on Čencov’s theorem.
Remark 8.4
The above proposition is stronger than Proposition 6.3 and weaker than Theorem 7.2. To formulate Proposition 6.3 and Proposition 8.3 in exact forms similar to Theorem 7.2, it matters what assumptions should be imposed on bilinear forms \(\{h_{n, p}\}\) on the cotangent spaces prior to the monotonicity or the strong invariance. Here we should keep in mind that the significance of these propositions, which are weaker than Theorem 7.2, lies in the fact that they follow from Čencov’s theorem while Theorem 7.2 does not. For Proposition 6.3, we need to assume that \(\{h_{n, p}\}\) are inner products (positive symmetric forms) to ensure that the monotonicity of them is translated into the monotonicity of the corresponding inner products on the tangent spaces. For Proposition 8.3, on the other hand, we only need to assume \(\{h_{n, p}\}\) to be non-degenerate (nonsingular) and symmetric. See A2 for details.
Let us consider the strong invariance (8.4) for the case when \((\varphi , \psi )\) is a Markov embedding/co-embedding pair \((\Phi , \Psi )\) and rewrite it into an identity for the covariance of random variables. Let \(\Omega _1\) and \(\Omega _2\) be arbitrary finite sets satisfying \(\vert \Omega _1 \vert \le \vert \Omega _2 \vert \) and let \(\mathcal {P}_i:= \mathcal {P}(\Omega _i)\), \(i=1, 2\). Given a surjection \(F: \Omega _2 \rightarrow \Omega _1\) and a distribution \(q\in \mathcal {P}_2\), let \((\Phi , \Psi )\) be defined by (7.20) and (7.21) with (7.22). For arbitrary \(A \in \mathbb {R}^{\Omega _1}\) and \(B \in \mathbb {R}^{\Omega _2}\), let \(\alpha _{q^F}:= \delta _{q^F} (A) \in T^*_{{q^F}} (\mathcal {P}_1)\) and \(\beta _q:= \delta _q (B) \in T^*_{q} (\mathcal {P}_2)\), for which the strong invariance (8.4) is represented as
Recalling (7.28), we have
On the other hand, \(\Phi \) is represented as \(\Phi = \Phi _V\) by the channel V defined by
so that (6.9) yields
Hence, the strong invariance (8.5) is rewritten as
This identity can be verified directly as follows. Letting \(a:= \langle A\rangle _{q^F} = \langle A\circ F\rangle _q\) and \(b:= \langle B\rangle _q = \langle E_V (B\, \vert \, \cdot )\rangle _{q^F}\), we have
where the fourth equality follows from
Note that (7.30) is obtained from (8.9) by substituting \(B\circ F\) for B.
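On a plausible reading of the covariance form of the strong invariance, consistent with the remark that (7.30) follows from (8.9) by substituting \(B\circ F\) for B, the identity states \(\textrm{Cov}_{q^F} (A, E_V (B\,\vert \,\cdot )) = \textrm{Cov}_q (A\circ F, B)\). The following Python sketch (our names; this interpretation is an assumption) checks it numerically, along with the law of total expectation used at L hand:

```python
import math

# Check Cov_{q^F}(A, E_V(B | .)) = Cov_q(A o F, B), where E_V(B | x) is the
# conditional expectation of B on the fiber F^{-1}(x) under q (our reading).
F = [0, 0, 1, 2, 2]
q = [0.1, 0.2, 0.3, 0.15, 0.25]

def pushforward(q):
    qF = [0.0, 0.0, 0.0]
    for y, qy in enumerate(q):
        qF[F[y]] += qy
    return qF

qF = pushforward(q)

def cond_exp(B):        # E_V(B | x) = sum_{y in F^{-1}(x)} B(y) q(y) / q^F(x)
    num = [0.0, 0.0, 0.0]
    for y, by in enumerate(B):
        num[F[y]] += by * q[y]
    return [n / d for n, d in zip(num, qF)]

def mean(p, A):
    return sum(pi * a for pi, a in zip(p, A))

def cov(p, A, B):
    return mean(p, [a * b for a, b in zip(A, B)]) - mean(p, A) * mean(p, B)

A = [2.0, -1.0, 0.5]                      # A on Omega_3
B = [1.0, 3.0, -2.0, 0.0, 4.0]            # B on Omega_5
AF = [A[F[y]] for y in range(5)]
assert math.isclose(mean(qF, cond_exp(B)), mean(q, B))        # total expectation
assert math.isclose(cov(qF, A, cond_exp(B)), cov(q, AF, B))   # strong invariance
```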
9 Weak invariance for affine connections
In addition to characterizing the Fisher metric by the invariance with respect to Markov embeddings, Čencov also gave a characterization of the \(\alpha \)-connections by the invariance condition. In this section we show that a similar notion to the strong invariance of metrics, which is described in terms of Markov embedding/co-embedding pairs, can be considered for affine connections.
Let \(\Omega _1, \Omega _2\) be arbitrary finite sets satisfying \(2\le \vert \Omega _1 \vert \le \vert \Omega _2 \vert \), and let \(\Phi : \mathcal {P}_1 \rightarrow \mathcal {P}_2\) be a Markov embedding, where \(\mathcal {P}_i:= \mathcal {P}(\Omega _i)\), \(i=1, 2\). Suppose that affine connections \(\nabla \) and \(\nabla '\) are given on \(\mathcal {P}_1\) and \(\mathcal {P}_2\), respectively. When these connections are the \(\alpha \)-connection on \(\mathcal {P}_1\) and that on \(\mathcal {P}_2\) for some common \(\alpha \in \mathbb {R}\), they satisfy
Some remarks on the meaning of the above equation are in order. First, we define \(\Phi _* (X)\) for an arbitrary vector field X on \(\mathcal {P}_1\) as a vector field on \(K:= \Phi (\mathcal {P}_1)\) that maps each point \(q=\Phi (p)\in K\), where \(p\in \mathcal {P}_1\), to
Since K is a submanifold of \(\mathcal {P}_2\) on which the connection \(\nabla '\) is given, \(\nabla '_{\Phi _* (X)} \Phi _* (Y)\) in (9.1) is defined as a map which maps each point \(q\in K\) to a tangent vector in \(T_{q}(\mathcal {P}_2)\), although \(\nabla '_{\Phi _* (X)} \Phi _* (Y)\) does not belong to \(\mathfrak {X}(K)\) in general. The condition (9.1) means that K is autoparallel in \(\mathcal {P}_2\) with respect to \(\nabla '\) and that the restricted connection of \(\nabla '\) induced on the autoparallel K is obtained from \(\nabla \) by the diffeomorphism \(\Phi : \mathcal {P}_1 \rightarrow K\). Čencov [3] showed that the invariance condition characterizes the family \(\{\alpha \text {-connection}\}_{\alpha \in \mathbb {R}}\) by a formulation similar to Theorem 7.1.
Remark 9.1
As is mentioned above, the fact that \(\{\alpha \text {-connection}\}_{\alpha \in \mathbb {R}}\) satisfy the invariance (9.1) implies that \(K = \Phi (\mathcal {P}_1)\) is autoparallel in \(\mathcal {P}_2\) w.r.t. the \(\alpha \)-connection for every \(\alpha \in \mathbb {R}\). A kind of converse result is found in [5], which states that if a submanifold K of \(\mathcal {P}_2 = \mathcal {P}(\Omega _2)\) is autoparallel w.r.t. the \(\alpha \)-connection for every \(\alpha \in \mathbb {R}\) (or, for some two different values of \(\alpha \)), then K is represented as \(\Phi (\mathcal {P}_1)\) by some Markov embedding \(\Phi \) from some \(\mathcal {P}_1 = \mathcal {P}(\Omega _1)\) into \(\mathcal {P}_2\).
Let g and \(g'\) be the Fisher metrics on \(\mathcal {P}_1\) and \(\mathcal {P}_2\), respectively. Noting that \(\Phi _*\) is an isometry with respect to these metrics due to the invariance of the Fisher metric, (9.1) implies that
where the RHS denotes the function on \(\mathcal {P}_1\) such that
Since (9.3) is apparently weaker than (9.1), we call this property the weak invariance of the connections \(\nabla , \nabla '\). In fact, however, as is mentioned in [6] and proved in [7], the weak invariance (9.3) characterizes the family \(\{\alpha \text {-connection}\}_{\alpha \in \mathbb {R}}\) just as the stronger condition (9.1) does.
Now, recalling the strong invariance (8.1) of the Fisher metric, we see that the weak invariance (9.3) is equivalent to
where the RHS denotes the map
The fact that the weak invariance is a condition on connections is more clearly expressed in (9.5) than (9.3) in that (9.5) does not include the Fisher metric.
10 Concluding remarks
In this paper we have focused on the face of the Fisher metric as a metric on the cotangent bundle, calling it the Fisher co-metric to distinguish it from the original Fisher metric on the tangent bundle. What we have shown are listed below.
-
1.
Based on a correspondence between cotangent vectors and random variables, the Fisher co-metric is defined via the variance/covariance in a natural way (Sect. 2).
-
2.
The Cramér-Rao inequality is trivialized by considering the Fisher co-metric (Sect. 4).
-
3.
The role of the m-connection as a connection on the cotangent bundle is important in considering the achievability condition for the Cramér-Rao inequality (Sect. 5).
-
4.
The monotonicity of the Fisher metric is equivalently translated into the monotonicity of the Fisher co-metric and that of the variance (Sect. 6).
-
5.
The invariance of the Fisher metric and that of the Fisher co-metric are not logically equivalent, and a new Čencov-type theorem for characterizing the Fisher co-metric by the invariance is established, which can also be regarded as a theorem for characterizing the variance/covariance (Sect. 7).
-
6.
The notion of strong invariance is introduced, which combines the invariance of the Fisher metric and that of the Fisher co-metric (Sect. 8).
-
7.
The weak invariance of the \(\alpha \)-connections is expressed in a formulation similar to the strong invariance of the Fisher metric/co-metric (Sect. 9).
It should be noted that, although this paper emphasizes the importance of the Fisher co-metric, this does not diminish the importance of the Fisher metric at all. Apart from its importance as a metric on the tangent bundle itself, which is essential for the geometry of statistical manifolds, we should not forget that the Fisher information matrix (i.e. the components of the Fisher metric) is of primary importance as a practical tool to compute the Fisher co-metric. Even if we know that \(g_{M}^{ij} (p) = g_{M, p} ((d\xi ^i)_p, (d\xi ^j)_p)\) in (4.3) can be defined by (4.5) and (4.6), and that understanding \(g_{M}^{ij} (p)\) in this way is important for a conceptual understanding of the Cramér-Rao inequality, this does not, in general, give us a better way to compute \(g_{M}^{ij} (p)\) for a given statistical model \((M, \xi )\) than inverting the Fisher information matrix \(G_M (p):= [g_{M, ij} (p) ]\).
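As a one-parameter illustration of this point (our example, not from the paper): for the Bernoulli model \(p_\theta = (\theta , 1-\theta )\), the \(1\times 1\) Fisher information is \(G(\theta ) = 1/(\theta (1-\theta ))\), and its inverse, the co-metric component, equals the variance of the unbiased coordinate estimator, i.e. the Cramér-Rao bound holds with equality:

```python
import math

# Bernoulli model p_theta = (theta, 1 - theta) on a two-point space.
def fisher_info(theta):                   # sum_i (d p_i / d theta)^2 / p_i
    return 1.0 / theta + 1.0 / (1.0 - theta)

def var(theta, A):                        # variance of A under p_theta
    m = theta * A[0] + (1.0 - theta) * A[1]
    return theta * (A[0] - m) ** 2 + (1.0 - theta) * (A[1] - m) ** 2

theta = 0.3
A = [1.0, 0.0]                            # unbiased estimator of theta
assert math.isclose(1.0 / fisher_info(theta), var(theta, A))  # G^{-1} = Var
```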
Finally, we note that some of the results obtained here can be extended to the quantum case in several directions, which will be discussed in a forthcoming paper.
Data Availability
Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.
References
Amari, S., Nagaoka, H.: Methods of Information Geometry. American Mathematical Society, Providence, RI; Oxford University Press, Oxford (2000)
Nagaoka, H., Amari, S.: Differential geometry of smooth families of probability distributions. METR 82-7, Dept Math Eng and Instr Phys, Univ. of Tokyo (1982). https://www.keisu.t.u-tokyo.ac.jp/research/techrep/y1982/. https://bsi-ni.brain.riken.jp/database/item/104
Čencov, N.N.: Statistical decision rules and optimal inference. American Mathematical Society, Providence, RI (1982) (original Russian edition: Nauka, Moscow (1972))
Nagaoka, H., Fujiwara, A.: Autoparallelity of quantum statistical manifolds in the light of quantum estimation theory. arXiv:2307.03431
Nagaoka, H.: Information-geometrical characterization of statistical models which are statistically equivalent to probability simplexes. Proc. 2017 IEEE ISIT, pp. 1346–1350 (2017)
Fujiwara, A.: Hommage to Chentsov’s theorem. Inf. Geo. 7(Suppl 1), 79–98 (2024)
Fujiwara, A.: Foundations of Information Geometry. Kyoritsu Shuppan, Tokyo (2021). (in Japanese)
Acknowledgements
This work was partly supported by JSPS KAKENHI Grant Numbers 23H05492.
Ethics declarations
Conflict of interest
The author is a current member of the Editorial Board of Information Geometry. The author states that there is no other conflict of interest.
Communicated by Hiroshi Matsuzoe.
Appendix
Appendix
1.1 A1 Proof of Theorem 7.3
As mentioned in Remark 7.4, we show the equivalence (i)\({}'\) \(\Leftrightarrow \) (ii-2), expressing (i)\({}'\) as
- (i)\({}'\):
-
\( \exists c_1, \exists c_2\in \mathbb {R}, \, \forall n, \forall p\in \mathcal {P}_n, \; \gamma _{n, p} = c_1\, \langle \cdot , \cdot \rangle _p + c_2\, \langle \cdot \rangle _p \langle \cdot \rangle _p. \)
The implication (i)\({}'\) \(\Rightarrow \) (ii-2) is obvious. To show the converse, we assume (ii-2) and derive (i)\({}'\) in several steps. The uniform distribution on \(\Omega _n\) is denoted by \(u_n\) in the sequel.
-
(a)
\(\forall n, \,\exists c_{1,n}, \exists c_{2, n} \in \mathbb {R}, \; \gamma _{n, u_n} = c_{1,n} \, \langle \cdot , \cdot \rangle _{u_n} + c_{2,n} \, \langle \cdot \rangle _{u_n} \langle \cdot \rangle _{u_n}. \)
Proof
Fix \(n\ge 2\) arbitrarily, and let \(m=n\), and F be a permutation on \(\Omega _n\). Define \(e_{n, i}\in \mathbb {R}^{\Omega _n}\) for \(i\in \Omega _n\) by \(e_{n, i} (j) = \delta _{i, j}\). Then \({u_n}^F = u_n\), and condition (ii-2) claims that
Since this holds for any permutation F, there exist \(a_n, b_n \in \mathbb {R}\) such that
Comparing this with
we have the assertion of (a) with letting \(c_{1,n}:= n a_n\) and \(c_{2,n}:= n^2 b_n\). \(\square \)
-
(b)
\(\exists c_1, \exists c_2\in \mathbb {R}, \, \forall n, \; \gamma _{n, u_n} = c_1\, \langle \cdot , \cdot \rangle _{u_n} + c_2\, \langle \cdot \rangle _{u_n} \langle \cdot \rangle _{u_n}\).
Proof
It suffices to show that the constants \(\{c_{1,n}\}\) and \(\{c_{2,n}\}\) in (a) satisfy \(c_{1, n} = c_{1, m}\) and \(c_{2, n} = c_{2, m}\) for every \(n, m \ge 2\). Let \(F: \Omega _{mn} \rightarrow \Omega _n\) be defined by
Then we have \(u_{mn}^F = u_n\), and (ii-2) yields
Due to (a), the LHS of the above equation is represented as
while the RHS is represented as
Thus we have \(c_{1,n} = c_{1, mn}\) and \(c_{2,n} = c_{2, mn}\). Similarly, we can show that \(c_{1,m} = c_{1, mn}\) and \(c_{2,m} = c_{2, mn}\). Hence we obtain \(c_{1, n} = c_{1, m}\) and \(c_{2, n} = c_{2, m}\). \(\square \)
-
(c)
For any \(n\ge 2\) and any \(p\in \mathcal {P}_n\) whose components \(\{p(i)\}_{i\in \Omega _n}\) are all rational numbers, we have \(\gamma _{n, p} = c_1\, \langle \cdot , \cdot \rangle _p + c_2 \langle \cdot \rangle _p \langle \cdot \rangle _p \), where \(c_1\) and \(c_2\) are the constants in (b).
Proof
Suppose that the components of p are represented as \(p(i) = k_i / m\) by natural numbers \(m, k_1, \ldots , k_n\) satisfying \(\sum _{i=1}^n k_i = m\). Let \(\Omega _m = \bigsqcup _{i=1}^n K_i\) be a partition of \(\Omega _m\) such that \(\vert K_i \vert = k_i\), and define \(F: \Omega _m \rightarrow \Omega _n\) by
Then we have \(u_m^F = p\). Letting \(\gamma '_{n, p}:= c_1\, \langle \cdot , \cdot \rangle _p + c_2 \langle \cdot \rangle _p \langle \cdot \rangle _p\) and noting that both \(\{\gamma _{n, p}\}\) and \(\{\gamma '_{n, p}\}\) satisfy condition (ii-2), we obtain
where the second equality follows from (b). \(\square \)
-
(d)
(i)\({}'\) is derived from (c) and the continuity of \(p\mapsto \gamma _{n, p}\).
\(\square \)
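The construction in step (c) is easy to check in code. The sketch below (names ours) realizes a rational distribution p on \(\Omega _n\) as the pushforward \(u_m^F\) of the uniform distribution, taking F constant on blocks of size \(k_i\); exact rational arithmetic makes the equality exact:

```python
from fractions import Fraction

# Realize p(i) = k_i / m as u_m^F with |F^{-1}(i)| = k_i.
p = [Fraction(2, 10), Fraction(5, 10), Fraction(3, 10)]   # p on Omega_3, m = 10
k = [2, 5, 3]
F = []
for i, ki in enumerate(k):                # F maps a block of k_i points to i
    F += [i] * ki
m = len(F)                                # m = 10

uF = [Fraction(0) for _ in p]             # pushforward of the uniform u_m under F
for y in range(m):
    uF[F[y]] += Fraction(1, m)
assert uF == p                            # u_m^F = p, exactly
```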
1.2 A2 An exact form of Proposition 8.3
We present a proposition of Čencov-type that can be regarded as an exact version of Proposition 8.3, and show that the proposition follows from Theorem 7.1 (Čencov’s theorem).
Proposition A.1
For \(n=2, 3, \ldots \), let \(\Omega _n:= \{1, 2, \ldots , n\}\) and \(\mathcal {P}_n:= \mathcal {P}(\Omega _n)\), and let \(g_n\) be the Fisher co-metric on \(\mathcal {P}_n\). Suppose that we are given a sequence \(\{h_n\}_{n=2}^\infty \), where \(h_n\) is a contravariant tensor field of degree 2 on \(\mathcal {P}_n\) which continuously maps each point \(p\in \mathcal {P}_n\) to a non-degenerate and symmetric bilinear form \(h_{n, p}: T^*_{p} (\mathcal {P}_n)^2 \rightarrow \mathbb {R}\). Then the following two conditions are equivalent.
-
(i)
\(\exists c\in \mathbb {R}\setminus \{0\}, \, \forall n, \;\; h_n = c g_n\).
-
(ii)
For any \(m\le n\) and any pair \((\Phi , \Psi )\) of Markov embedding \(\Phi :\mathcal {P}_m\rightarrow \mathcal {P}_n\) and Markov co-embedding \(\Psi : \mathcal {P}_n \rightarrow \mathcal {P}_m\) satisfying \(\Psi \circ \Phi = id _{\mathcal {P}_m}\), it holds that
$$\begin{aligned} \forall p\in \mathcal {P}_m,\,&\forall \alpha _{p} \in T^*_{p} (\mathcal {P}_m), \forall \beta _{\Phi (p)} \in T^*_{\Phi (p)} (\mathcal {P}_n), \nonumber \\&h_{m, p} (\alpha _{p}, \Phi ^* (\beta _{\Phi (p)})) = h_{n, \Phi (p)} (\Psi ^* (\alpha _p), \beta _{\Phi (p)}). \end{aligned}$$(A.1)
Proof
Since the Fisher co-metric satisfies (8.4), we have (i) \(\Rightarrow \) (ii). To prove the converse, assume that \(\{h_n\}_{n=2}^\infty \) satisfies (ii). Due to the assumption that \(h_{n, p}\) is non-degenerate for every n and \(p\in \mathcal {P}_n\), \(h_{n, p}\) induces the correspondence \({\mathop {\longleftrightarrow }\limits ^{h_{n, p}}}\) between \(T_{p} (\mathcal {P}_n)\) and \(T^*_{p} (\mathcal {P}_n)\) together with the bilinear form \(h_{n, p}: T_{p} (\mathcal {P}_n)^2 \rightarrow \mathbb {R}\) as in the case of inner products by
and
Then, similar to the equivalence between (8.1) and (8.4), we see that (A.1) is equivalent to
Indeed, if
then
and
which shows the equivalence of (A.1) and (A.2). Due to \(\Psi _* \circ \Phi _* = id _{T_{p} (\mathcal {P}_m)}\), Eq. (A.2) implies (7.11) in (ii) of Theorem 7.1. Thus we have the following train of implications:
\(\square \)
Cite this article
Nagaoka, H. The Fisher metric as a metric on the cotangent bundle. Info. Geo. 7 (Suppl 1), 651–677 (2024). https://doi.org/10.1007/s41884-023-00126-9