The generalized fundamental equation of information on symmetric cones

In this paper we generalize the fundamental equation of information to the symmetric cone domain and find its general solution under the assumption of continuity of the respective functions.


Introduction
The generalized fundamental equation of information (actually its specification with "constant multiplicative function") on the open vector domain is a functional equation of the form

f(x) + g(y/(1 − x)) = h(y) + k(x/(1 − y)),   (1)

where (x, y) ∈ D_0 = {(x, y) ∈ (0, 1)^r × (0, 1)^r : x_i + y_i ∈ (0, 1), ∀ 1 ≤ i ≤ r} and 1 = (1, 1, . . . , 1) ∈ R^r. All operations in (1) are performed componentwise. It was first solved for r = 1 in [13] and later for any r ∈ N in [5]. There exists a vast literature on the fundamental equation of information and its generalizations. A history of the main results, together with references, may be found in [1,2,16] or in the monograph [7], where one chapter is devoted to this equation.
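Since the reconstruction above fixes the four-unknown form of (1), a quick numeric sanity check can illustrate it (a sketch; the function names and the constant kappa are ours, not the paper's): taking f = g = h = k = κ·log(1 − t), both sides collapse to κ·log(1 − x − y).

```python
import numpy as np

# Sketch: check that f = g = h = k = kappa*log(1-t) solves
# f(x) + g(y/(1-x)) = h(y) + k(x/(1-y)) on the scalar domain.
# kappa and the sampling range are illustrative choices.
kappa = 2.5
f = g = h = k = lambda t: kappa * np.log(1.0 - t)

rng = np.random.default_rng(0)
x, y = rng.uniform(0.05, 0.45, 2)      # x, y and x + y all lie in (0, 1)
lhs = f(x) + g(y / (1.0 - x))
rhs = h(y) + k(x / (1.0 - y))
residual = abs(lhs - rhs)              # both sides equal kappa*log(1-x-y)
```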
Our aim is to analyze the following generalization of (1), where the respective functions are defined on matrices. Let Ω_+ denote the cone of positive definite real symmetric matrices of rank r, let I denote the identity matrix and let D_+ = {x ∈ Ω_+ : I − x ∈ Ω_+} be the analogue of the interval (0, 1) in Ω_+. Consider unknown real functions f, g, h and k defined on D_+ that satisfy

f(x) + g((I − x)^{−1/2} · y · (I − x)^{−1/2}) = h(y) + k((I − y)^{−1/2} · x · (I − y)^{−1/2}),   (2)

where (x, y) ∈ D_+^0 = {(x, y) ∈ D_+^2 : x + y ∈ D_+} and · denotes the ordinary matrix product. Take x = u · diag(x) · u^T and y = u · diag(y) · u^T, where u is a fixed orthogonal matrix and (x, y) ∈ D_0. Then (x, y) ∈ D_+^0, x and y commute and (I − x)^{−1/2} = u · diag(1 − x)^{−1/2} · u^T. Thus, (2) takes the form

f_u(x) + g_u(y/(1 − x)) = h_u(y) + k_u(x/(1 − y))   (3)

for (x, y) ∈ D_0, where f_u(x) = f(u · diag(x) · u^T), g_u(x) = g(u · diag(x) · u^T) and so on. This means that (2) generalizes (1) to a wider domain, since (3) is satisfied for any orthogonal matrix u; when one takes noncommuting x and y, the situation is far more complicated, because the operations are no longer performed componentwise. It also justifies the name "generalized fundamental equation of information on Ω_+", despite its lack of a clear connection to developed information theory.
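The commuting case that collapses (2) to (3) can be checked numerically: for x = u·diag(x)·uᵀ and y = u·diag(y)·uᵀ, the matrix argument (I − x)^{−1/2} · y · (I − x)^{−1/2} equals u·diag(y/(1 − x))·uᵀ. A small sketch (dimensions and helper names are ours):

```python
import numpy as np

# Sketch: simultaneously diagonalizable x and y reduce the matrix argument
# of g in (2) to the componentwise quotient appearing in (3).
r = 3
rng = np.random.default_rng(1)
u, _ = np.linalg.qr(rng.standard_normal((r, r)))   # fixed orthogonal matrix
xv = rng.uniform(0.05, 0.4, r)                     # vector of eigenvalues of x
yv = rng.uniform(0.05, 0.4, r)                     # eigenvalues of y; x + y stays in D_+
x = u @ np.diag(xv) @ u.T
y = u @ np.diag(yv) @ u.T

def sym_inv_sqrt(m):
    # (.)^{-1/2} of a symmetric positive definite matrix via eigendecomposition
    w, v = np.linalg.eigh(m)
    return v @ np.diag(w ** -0.5) @ v.T

lhs = sym_inv_sqrt(np.eye(r) - x) @ y @ sym_inv_sqrt(np.eye(r) - x)
rhs = u @ np.diag(yv / (1.0 - xv)) @ u.T
err = np.max(np.abs(lhs - rhs))
```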
We want to go further and consider a more general notion of division of matrices, which is defined through a so-called multiplication algorithm. A multiplication algorithm is a mapping w: Ω_+ → GL(r, R) such that w(x) · w^T(x) = x for any x ∈ Ω_+. Multiplication algorithms (actually their inverses, called division algorithms) were introduced in [15] alongside the characterization of the Wishart probability distribution (see also [4] for the generalization to symmetric cones). There exist infinitely many multiplication algorithms; the two basic examples are w_1(x) = x^{1/2} (x^{1/2} being the unique positive definite symmetric square root of x) and w_2(x) = t_x, where t_x is the lower triangular matrix from the Cholesky decomposition x = t_x · t_x^T. For unknown functions f, g, h, k: D_+ → R we will analyze the following functional equation

f(x) + g(w^{−1}(I − x) · y · (w^{−1}(I − x))^T) = h(y) + k(w̄^{−1}(I − y) · x · (w̄^{−1}(I − y))^T),   (4)

where (x, y) ∈ D_+^0 and w and w̄ are two multiplication algorithms satisfying some additional natural properties. Note that for (x, y) ∈ D_+^0 the arguments of the functions g and k belong to D_+.
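The defining property w(x)·wᵀ(x) = x of a multiplication algorithm is easy to verify numerically for the two basic examples; the helper names below are ours:

```python
import numpy as np

# Sketch: the two basic multiplication algorithms on the cone of SPD matrices.
def w1(x):
    # symmetric positive definite square root, via eigendecomposition
    w, v = np.linalg.eigh(x)
    return v @ np.diag(np.sqrt(w)) @ v.T

def w2(x):
    # lower triangular Cholesky factor t_x
    return np.linalg.cholesky(x)

rng = np.random.default_rng(2)
a = rng.standard_normal((4, 4))
x = a @ a.T + 4 * np.eye(4)            # a generic SPD matrix
err1 = np.max(np.abs(w1(x) @ w1(x).T - x))
err2 = np.max(np.abs(w2(x) @ w2(x).T - x))
```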
Considering such a generalization of (2), it is in general not possible to reduce (4) to (1) as was possible for (2). However, taking scalar matrices (x = xI and y = yI for (x, y) ∈ (0, 1)^2 such that x + y ∈ (0, 1)), Eq. (4) comes down to (1) with r = 1. This fact will be the crux of the proof of the main theorem. The continuous solution to (4) will be given in terms of so-called w-logarithmic Cauchy functions, i.e., functions that satisfy the functional equation

f(w(x) · y · w^T(x)) = f(x) + f(y).   (5)

It is easy to see that f(x) = H(det x), where H is a generalized logarithmic function (H(ab) = H(a) + H(b), a, b > 0), is a w-logarithmic function for any w, but sometimes not the only one (see the comment before Corollary 3.9). The general form of w-logarithmic Cauchy functions for the two basic multiplication algorithms w_1(x) = x^{1/2} and w_2(x) = t_x, without any regularity assumptions, was recently found in [9]. Later on we will write w(x)y = w(x) · y · w^T(x); in this case, w(x) denotes the linear operator acting on Ω_+.
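The claim that f(x) = H(det x) is w-logarithmic for any w can be illustrated with H = log: since det(w(x))² = det x, we get f(w(x)·y·wᵀ(x)) = f(x) + f(y) for both basic algorithms. A sketch:

```python
import numpy as np

# Sketch: f = log(det(.)) satisfies the w-logarithmic Cauchy equation
# f(w(x) y w(x)^T) = f(x) + f(y) for both basic multiplication algorithms.
def w1(x):
    w, v = np.linalg.eigh(x)
    return v @ np.diag(np.sqrt(w)) @ v.T

def w2(x):
    return np.linalg.cholesky(x)

def f(x):                              # H o det with H = log
    return np.log(np.linalg.det(x))

rng = np.random.default_rng(3)
a, b = rng.standard_normal((2, 3, 3))
x = a @ a.T + 3 * np.eye(3)
y = b @ b.T + 3 * np.eye(3)
res1 = abs(f(w1(x) @ y @ w1(x).T) - f(x) - f(y))
res2 = abs(f(w2(x) @ y @ w2(x).T) - f(x) - f(y))
```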
Finally, it must be noted that Eq. (4) is of interest to probabilists, since it is closely related to a characterization of the matrix-variate beta probability distribution. In [17] and [11,12], the problem of characterizing the beta probability distribution was reduced to solving the fundamental equation of information with four unknown functions. Analogously, the solution to (4) is used to characterize the matrix-variate beta probability distribution in [10].
All above considerations can be generalized to symmetric cones, of which Ω + is the prime example. The paper is organized as follows. In the next section we give the necessary introduction to the theory of symmetric cones. Next, in Sect. 3.1 we recall known results concerning w-logarithmic Cauchy functions. Sect. 3.2 is devoted to the solution of (4) in the symmetric cone setting.

Preliminaries
In this section, we recall basic facts of the theory of symmetric cones, which are needed in the paper. For further details, we refer to [6].
A Euclidean Jordan algebra is a Euclidean space E (endowed with a scalar product denoted ⟨x, y⟩) equipped with a bilinear mapping (product) E × E ∋ (x, y) ↦ xy ∈ E and a neutral element e in E such that for all x, y, z in E: (i) xy = yx, (ii) x(x^2 y) = x^2(xy), (iii) xe = x, (iv) ⟨x, yz⟩ = ⟨xy, z⟩. For x ∈ E let L(x): E → E be the linear map defined by L(x)y = xy, and define

P(x) = 2L^2(x) − L(x^2).

The map P: E → End(E) is called the quadratic representation of E.
An element x is said to be invertible if there exists an element y in E such that L(x)y = e. Then, y is called the inverse of x and is denoted by y = x −1 . Note that the inverse of x is unique.

B. Kołodziejek, AEM Vol. 90 (2016)
A Euclidean Jordan algebra E is said to be simple if it does not contain any nontrivial ideal. Up to linear isomorphism, there are only five kinds of Euclidean simple Jordan algebras. Let K denote either the real numbers R, the complex numbers C, the quaternions H or the octonions O, and write S_r(K) for the space of r × r Hermitian matrices with entries in K, endowed with the Euclidean structure ⟨x, y⟩ = Trace(x · ȳ) and with the Jordan product

x ∘ y = (x · y + y · x)/2,   (6)

where x · y denotes the ordinary product of matrices and ȳ is the conjugate of y. Then S_r(R), r ≥ 1, S_r(C), r ≥ 2, S_r(H), r ≥ 2, and the exceptional S_3(O) are the first four kinds of Euclidean simple Jordan algebras. Note that in this case, if K = O, then necessarily r ≤ 3. The fifth kind is the Euclidean space R^{n+1}, n ≥ 2, with the Jordan product

(x_0, x_1)(y_0, y_1) = (x_0 y_0 + ⟨x_1, y_1⟩, x_0 y_1 + y_0 x_1),   (7)

where x = (x_0, x_1), y = (y_0, y_1) ∈ R × R^n. To each Euclidean simple Jordan algebra one can attach the set of Jordan squares Ω̄ = {x^2 : x ∈ E}. The interior Ω of Ω̄ is a symmetric cone. Moreover, Ω is irreducible, i.e., it is not the Cartesian product of two convex cones. One can prove that an open convex cone is symmetric and irreducible if and only if it is the cone Ω of some Euclidean simple Jordan algebra. Each simple Jordan algebra corresponds to a symmetric cone; hence, up to linear isomorphism, there exist also only five kinds of symmetric cones. The cone corresponding to the Euclidean Jordan algebra R^{n+1} equipped with the Jordan product (7) is called the Lorentz cone. Henceforth we will assume that Ω is irreducible. We denote by G(Ω) the subgroup of the linear group GL(E) of linear automorphisms which preserve Ω, and we denote by G the connected component of G(Ω) containing the identity. Recall that if E = S_r(R), then for any g ∈ G(Ω) there exists a ∈ GL(r, R) such that

g(x) = a · x · a^T

for all x ∈ Ω. This concept is consistent with the so-called division algorithm g, introduced in [15] and [4], that is, a mapping Ω ∋ x ↦ g(x) ∈ G such that g(x)x = e for any x ∈ Ω. If w is a multiplication algorithm, then g = w^{−1} is a division algorithm and vice versa; if g is a division algorithm, then w = g^{−1} is a multiplication algorithm. Note that Ω is closed under the multiplication x ∘_w y = w(x)y, but this multiplication is neither commutative nor associative. It may also lack a neutral element, but always w(e) ∈ K. One of the two basic examples of multiplication algorithms is the map w_1(x) = P(x^{1/2}). We will now introduce a very useful decomposition in E, called the spectral decomposition. An element c ∈ E is said to be an idempotent if cc = c ≠ 0. Idempotents a and b are orthogonal if ab = 0. An idempotent c is primitive if c is not a sum of two non-null idempotents. A complete system of primitive orthogonal idempotents is a set (c_1, . . . , c_r) of pairwise orthogonal primitive idempotents such that c_1 + · · · + c_r = e. The size r of such a system is a constant called the rank of E. Any element x of a Euclidean simple Jordan algebra can be written as

x = Σ_{i=1}^r λ_i c_i

for some complete system of primitive orthogonal idempotents (c_1, . . . , c_r) and real numbers λ_1, . . . , λ_r, called the eigenvalues of x; the determinant of x is det x = Π_{i=1}^r λ_i. For g ∈ G one has det(gx) = (Det g)^{r/dim Ω} det x, where Det denotes the determinant in the space of endomorphisms. Inserting the multiplication algorithm g = w(y), y ∈ Ω, and x = e, we obtain Det(w(y)) = (det y)^{dim Ω/r} and hence

det(w(y)x) = det y det x   (8)

for any x, y ∈ Ω.
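The spectral decomposition and the determinant identity (8) can be illustrated in S_r(R), where the primitive idempotents are the rank-one projectors c_i = v_i v_iᵀ onto an orthonormal eigenbasis. A sketch (sizes are illustrative):

```python
import numpy as np

# Sketch: spectral decomposition x = sum λ_i c_i with a complete system of
# primitive orthogonal idempotents, plus the identity det(w(y)x) = det y det x
# for the Cholesky multiplication algorithm acting as t y t^T.
r = 3
rng = np.random.default_rng(4)
a = rng.standard_normal((r, r))
x = a @ a.T + r * np.eye(r)
lam, v = np.linalg.eigh(x)
c = [np.outer(v[:, i], v[:, i]) for i in range(r)]  # rank-one idempotents
recon_err = np.max(np.abs(sum(lam[i] * c[i] for i in range(r)) - x))
sum_err = np.max(np.abs(sum(c) - np.eye(r)))        # c_1 + ... + c_r = e
orth_err = max(np.max(np.abs(c[i] @ c[j]))          # c_i c_j = 0 for i != j
               for i in range(r) for j in range(r) if i != j)

b = rng.standard_normal((r, r))
y = b @ b.T + r * np.eye(r)
t = np.linalg.cholesky(y)                           # w(y) acting as t . t^T
det_rel_err = abs(np.linalg.det(t @ x @ t.T)
                  / (np.linalg.det(y) * np.linalg.det(x)) - 1.0)
```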

Logarithmic Cauchy functions
Henceforth we will assume that Ω is an irreducible symmetric cone. As will be seen, the solution to the fundamental equation of information will be given in terms of so-called w-logarithmic Cauchy functions, i.e., functions f: Ω → R that satisfy the functional equation

f(w(x)y) = f(x) + f(y),  x, y ∈ Ω,   (9)

where w is a multiplication algorithm. Functional Eq. (9) for w_1(x) = P(x^{1/2}) on Ω_+ was already considered in [3] for differentiable functions and in [14] for continuous functions on real or complex Hermitian positive definite matrices of rank 2. Without any regularity assumptions, it was solved on the Lorentz cone in [18]. Recently, the general form of w_1-logarithmic functions, without any regularity assumptions, was given in [9]. In this case every solution is of the form f(x) = H(det x) for some generalized logarithmic function H. It should be stressed that there exist infinitely many multiplication algorithms. If w is a multiplication algorithm, then trivial extensions are given by w^{(k)}(x) = w(x)k, where k ∈ K is fixed. One may also consider multiplication algorithms of the form P(x^α) t_{x^{1−2α}}, which interpolate between the two main examples: w_1 (α = 1/2) and w_2 (α = 0). In general, any multiplication algorithm may be written in the form w(x) = P(x^{1/2}) k_x, where k_x ∈ K and K is the group of automorphisms.
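The trivial extension w^{(k)}(x) = w(x)k is itself a multiplication algorithm whenever k is orthogonal (in the matrix picture, elements of K act as orthogonal matrices), since w(x)·k·kᵀ·wᵀ(x) = w(x)·wᵀ(x) = x. A quick check, with illustrative names:

```python
import numpy as np

# Sketch: composing the Cholesky algorithm with a fixed orthogonal k still
# yields a multiplication algorithm: (t_x k)(t_x k)^T = x.
rng = np.random.default_rng(5)
r = 3
a = rng.standard_normal((r, r))
x = a @ a.T + r * np.eye(r)
k, _ = np.linalg.qr(rng.standard_normal((r, r)))    # a fixed orthogonal k
wk = np.linalg.cholesky(x) @ k                      # w^{(k)}(x) = t_x . k
ext_err = np.max(np.abs(wk @ wk.T - x))
```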
Note that, due to (8), the function H(det x) is always a solution to (9), regardless of the choice of the multiplication algorithm w, but it may not be the only one; the best example is the multiplication algorithm related to the triangular group (see [9, Theorem 3.5] and the comment before Corollary 3.9). If a w-logarithmic function f is additionally K-invariant, then f(x) = H(det x) is the only possible solution (Theorem 3.3). We now state the above-mentioned results, which will be useful in the proof of the main theorem.
Theorem 3.1. Let f, g, h: Ω → R satisfy f(w(x)y) = g(x) + h(y) for x, y ∈ Ω. Then there exist a w-logarithmic function f_0 and real constants a_0, b_0 such that for any x ∈ Ω, f(x) = f_0(x) + a_0 + b_0, g(x) = f_0(x) + a_0 and h(x) = f_0(x) + b_0.

Theorem 3.2. Let f be a w_1-logarithmic function, i.e., a solution to (9) with w = w_1. Then there exists a generalized logarithmic function H such that for any x ∈ Ω, f(x) = H(det x).

Theorem 3.3. Let f be a solution to (9). Assume additionally that f is K-invariant, i.e., f(kx) = f(x) for any k ∈ K and x ∈ Ω. Then there exists a generalized logarithmic function H such that for any x ∈ Ω, f(x) = H(det x).

The fundamental equation of information with four unknown functions on symmetric cones
The solution of the one-dimensional version of the fundamental equation with four unknown functions comes from [13]. An independent and shorter solution, under the additional assumption of local integrability of the functions, was given in [17] alongside the characterization of the beta probability distribution. The case when the equation is satisfied almost everywhere for measurable functions was considered in [11,12]. Recall the main result of [13] (for α = 0, with H_1 + H_3 substituted in place of H_1 compared to the original formulation). Henceforth we will assume that the multiplication algorithm w additionally satisfies the following natural conditions:
A. homogeneity of degree 1, that is, w(sx) = sw(x) for any s > 0 and x ∈ Ω,
B. continuity at e, that is, lim_{x→e} w(x) = w(e),
C. surjectivity of the mapping Ω ∋ x ↦ g(x)e ∈ Ω.
The same will be assumed for w̄. By g (resp. ḡ) we denote w^{−1} (resp. w̄^{−1}). It is easy to construct a multiplication algorithm that does not satisfy A and B, but we do not know whether there exists a multiplication algorithm that does not satisfy condition C.
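Condition A reads w(x) as the linear operator y ↦ w(x)·y·wᵀ(x), as the paper does; for the Cholesky algorithm, t_{s·x} = √s·t_x as matrices, so the operator is indeed homogeneous of degree 1. A numeric sketch:

```python
import numpy as np

# Sketch: degree-1 homogeneity of the Cholesky multiplication algorithm,
# read as an operator: w(s x) y = s (w(x) y).  As matrices, t_{s x} = sqrt(s) t_x.
rng = np.random.default_rng(6)
a = rng.standard_normal((3, 3))
x = a @ a.T + 3 * np.eye(3)
s = 2.7
hom_err = np.max(np.abs(np.linalg.cholesky(s * x)
                        - np.sqrt(s) * np.linalg.cholesky(x)))

b = rng.standard_normal((3, 3))
y = b @ b.T + 3 * np.eye(3)
t1 = np.linalg.cholesky(s * x)
t0 = np.linalg.cholesky(x)
op_err = np.max(np.abs(t1 @ y @ t1.T - s * (t0 @ y @ t0.T)))
```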
By w_e and g_e we will denote w(e) and g(e), respectively (analogously w̄_e and ḡ_e). Equation (4) is rewritten in the symmetric cone setting in the following way:

f(x) + g(g(e − x)y) = h(y) + k(ḡ(e − y)x),  (x, y) ∈ D_0,   (10)

where D = {x ∈ Ω : e − x ∈ Ω} and D_0 = {(x, y) ∈ D^2 : x + y ∈ D}. Note that this D_0 is a subset of Ω^2, while the D_0 of the one-dimensional setting is a subset of R_+^2. An analogous problem, in which a general multiplication algorithm was considered for the Olkin–Baker functional equation, was dealt with in [8], but that functional equation was much easier to solve. Interestingly, for scalar arguments the fundamental equation of information can be brought to the Olkin–Baker equation (see the proof of the main theorem in [13]). However, it is not known whether such a connection between these functional equations exists when one considers matrix- or cone-variate arguments.
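Assuming the form of (10) written above, one can check numerically that the K-invariant candidate φ(x) = log det(e − x), taken as f = g = h = k, solves it for any division algorithm, both sides collapsing to log det(e − x − y). A sketch with g = ḡ given by the Cholesky algorithm (names are ours):

```python
import numpy as np

# Sketch: phi(x) = log det(I - x), taken as f = g = h = k, solves
# f(x) + g(g(e-x)y) = h(y) + k(g(e-y)x), since
# I - g(I-x)y = g(I-x)(I-x-y) and det(g(m)z) = det z / det m.
def div(m, z):                          # division algorithm g(m) applied to z
    ti = np.linalg.inv(np.linalg.cholesky(m))
    return ti @ z @ ti.T

def phi(z):
    return np.log(np.linalg.det(np.eye(len(z)) - z))

r = 3
rng = np.random.default_rng(7)

def rand_in_D(scale):                   # SPD matrix with spectrum in (0, scale)
    q, _ = np.linalg.qr(rng.standard_normal((r, r)))
    return q @ np.diag(rng.uniform(0.05, scale, r)) @ q.T

x, y = rand_in_D(0.4), rand_in_D(0.4)   # x, y and x + y all lie in D
lhs = phi(x) + phi(div(np.eye(r) - x, y))
rhs = phi(y) + phi(div(np.eye(r) - y, x))
eq_res = abs(lhs - rhs)                 # both sides equal log det(I - x - y)
```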

Main Theorem 3.5 (Fundamental equation of information on symmetric cones).
Assume that continuous functions f, g, h and k satisfy (10). Then for any x ∈ D the functions f, g, h and k are expressed in terms of w- and w̄-logarithmic functions and real constants. The proof of Theorem 3.5 will be preceded by two lemmas. Note that in these lemmas it is additionally assumed that w_e = w̄_e = Id_Ω; however, this does not affect the generality of Theorem 3.5.
From this we will conclude that f(y) = 0 for any y ∈ D. Condition C states that for any y ∈ Ω there exists x ∈ Ω such that g(x)e = y. If y ∈ D, then x ∈ Ω\D, because e − y = e − g(x)e = g(x)(x − e) belongs to Ω if and only if x − e ∈ Ω. Therefore, for any y ∈ D there exists t ∈ Ω such that g(e + t)e = y. Then f(y) = f(g(e + t)e) = 0, which completes the proof.
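The linearity step e − g(x)e = g(x)(x − e) used above, and the resulting fact that g(x)e lands in D exactly when x − e ∈ Ω, can be checked numerically for the Cholesky division algorithm (acting by congruence):

```python
import numpy as np

# Sketch: for g(x) z = t_x^{-1} z t_x^{-T}, verify e - g(x)e = g(x)(x - e),
# and that x - e positive definite forces g(x)e to have spectrum in (0, 1).
rng = np.random.default_rng(8)
r = 3
a = rng.standard_normal((r, r))
x = a @ a.T + (r + 1) * np.eye(r)       # x - e is positive definite
ti = np.linalg.inv(np.linalg.cholesky(x))
lhs = np.eye(r) - ti @ np.eye(r) @ ti.T # e - g(x)e
rhs = ti @ (x - np.eye(r)) @ ti.T       # g(x)(x - e)
lin_err = np.max(np.abs(lhs - rhs))
eigs = np.linalg.eigvalsh(ti @ ti.T)    # spectrum of g(x)e
```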
Lemma 3.7. If the multiplication algorithms w = g^{−1} and w̄ = ḡ^{−1} satisfy conditions A–C and w_e = w̄_e = Id_Ω, then there exists a w- and w̄-logarithmic function h such that for any x ∈ D,

f(x) = h(e − x) + f_0 and g(x) = h(e − x) + g_0

for real constants f_0 and g_0.

Proof. As in the proof of Lemma 3.6, let us first consider (14) for x = αe and y = βe, (α, β) ∈ D_0. Define F(α) = K(α) = f(αe) and G(α) = H(α) = g(αe). Theorem 3.4 and the continuity of F, G, H and K imply that there exist real constants κ, f_0 and g_0 such that (15) holds. Without loss of generality we may assume f_0 = g_0 = 0. Let us take (x, y) = (s, e − (s + αt)) ∈ D_0. It is easy to see that (x, y) ∈ D_0 if and only if (s, αt) ∈ D_0. For (s, αt) ∈ D_0, Eq. (14) takes, after rearrangement, the form (16). The limit as α → 0 of the left-hand side of this equality exists, hence the limit of the right-hand side exists as well. Passing to the limit as α → 0, we obtain (17). Define h as the limit appearing in (17). We will show that h(x) exists for any x ∈ Ω. Inserting s = t ∈ D in (16) and recalling that f((1/(α + 1))e) = κ log(α/(α + 1)), we obtain that h(t) exists for any t ∈ D. Because of condition C, for any y ∈ Ω there exists x ∈ Ω such that g(x)e = y.
In the proof of the previous lemma it was shown that if y ∈ D, then x ∈ Ω\D. Analogously, if y ∈ Ω\D, then x ∈ D. Take y ∈ Ω; then e + y ∈ Ω\D, hence there exists an element x = e − t ∈ D such that g(e − t)e = y + e, i.e., y = g(e − t)e − e = g(e − t)t. This implies that the function h is well defined on the whole of Ω.
Using the definition of h and the form of the one-dimensional solutions (15), we obtain that h(βe) = κ log β and

h(βx) = κ log β + h(x)   (18)
for any β > 0 and x ∈ Ω. Returning to (16), using (17) and the definition of h, we get (19). Note that, due to (18), this equation holds for any t ∈ Ω. Our aim is to show that h is w-logarithmic. Put s → βs in (19) for s ∈ D and β > 0; by the homogeneity of w we have g(βs + αt)βs = g(s + (α/β)t)s. Thus, using (18), we arrive at (20). Observe that the limit on the right-hand side of (20) does not depend on β. Therefore, we may pass to the limit as β → 0 to obtain (h is continuous and g_e = Id_Ω)

Now, subtracting the above equation from (19), we get

h(g(e − s)s) − h(s) = h(g(e − s)t) − h(t)
for any s ∈ D and t ∈ Ω. Put s = e − αx and t = w(x)y and use the homogeneity of w and property (18) to obtain, after rearrangement, an equation in which we may again pass to the limit as α → 0; recalling that h(e) = κ log 1 = 0, we obtain (21). Therefore, by (17) and (21), we obtain for t ∈ D the corresponding representation through the analogously defined function h̄.
Using (9) on both sides, we obtain an identity which, by a standard argument (take t = αx and pass to the limit as α → 0), implies that h = h̄; that is, h is both w- and w̄-logarithmic.
Since e − ḡ(e − y)x = ḡ(e − y)(e − x − y), and using the fact that h is w- and w̄-logarithmic (that is, h(g(a)b) = h(b) − h(a) for a, b ∈ Ω, and analogously for ḡ), the equation reduces to one for the auxiliary function ḡ. The function ḡ is continuous; therefore Lemma 3.4 implies that ḡ is constant, which completes the proof.
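The identity h(g(e − s)s) − h(s) = h(g(e − s)t) − h(t) obtained in the proof can be illustrated with h = κ·log det, which is w-logarithmic for every w: both sides then equal −κ log det(e − s). A sketch (κ and the setup are ours):

```python
import numpy as np

# Sketch: check h(g(e-s)s) - h(s) = h(g(e-s)t) - h(t) for h = kappa*log det
# and the Cholesky division algorithm g(m) z = t_m^{-1} z t_m^{-T}.
kappa = 1.7

def h(z):
    return kappa * np.log(np.linalg.det(z))

def div(m, z):
    ti = np.linalg.inv(np.linalg.cholesky(m))
    return ti @ z @ ti.T

r = 3
rng = np.random.default_rng(9)
q, _ = np.linalg.qr(rng.standard_normal((r, r)))
s = q @ np.diag(rng.uniform(0.05, 0.5, r)) @ q.T    # s in D
b = rng.standard_normal((r, r))
t = b @ b.T + np.eye(r)                             # t in the cone
side_l = h(div(np.eye(r) - s, s)) - h(s)
side_r = h(div(np.eye(r) - s, t)) - h(t)
id_res = abs(side_l - side_r)                       # both equal -kappa*log det(I-s)
```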
Now we are ready to prove the main theorem.
Proof of Theorem 3.5. First observe that, without loss of generality, we may assume that g_e = Id_Ω. Indeed, the mapping ĝ: x ↦ w_e g(x) defines a division algorithm with this property, since ĝ(e) = w_e g_e = Id_Ω. Thus, for ĝ(x) := g(g_e x) we have g(g(e − x)y) = g(g_e w_e g(e − x)y) = ĝ(ĝ(e − x)y). The same can be done for ḡ and the function k. In this way we arrive at Eq. (10) with the division algorithms g and ḡ (and also the functions g and k) redefined in such a way that g_e = ḡ_e = Id_Ω (compare the formulation of Theorem 3.5). Consider Eq. (10) for (αx, e − y) ∈ D_0 and pass to the limit as α → 0. By the continuity of g at e and of g on D we obtain the limit l_1(x). We will prove that this limit exists for any x ∈ Ω. Taking, as in the proofs of the previous lemmas, x = αe and y = βe in (10) for (α, β) ∈ D_0, and defining F(α) = f(αe), G(α) = g(αe), H(α) = h(αe) and K(α) = k(αe), Theorem 3.4 implies that there exist constants κ_i, i = 1, 2, 3, and C_1, C_4 such that the corresponding one-dimensional formulas hold for α ∈ (0, 1). Without loss of generality we may assume C_1 = C_4 = 0. Therefore, we have lim_{α→0} {k(αe) − κ_2 log α} = 0. The above equalities and considerations similar to (18) imply that the appropriate homogeneity relations hold for x ∈ D and β > 0. The limit l_1(x) therefore exists for any x ∈ Ω.
Having solved the fundamental equation of information for an arbitrary multiplication algorithm, it is easy to give the specification (31) of its solution; the proof quickly follows from Theorems 3.2 and 3.5.
In order to give an example of a multiplication algorithm for which the solution to the generalized fundamental equation of information on Ω has a form different from (31), we would need a full introduction to the triangular group on Ω and the related Gauss decomposition. The plan in this paper was to keep it as concise as possible; therefore we restrict ourselves to the cone Ω_+ of positive definite real symmetric matrices of rank r, where the Gauss decomposition on Ω_+ and the Cholesky decomposition coincide. We adhere to the notation from the Introduction: t_x is the lower triangular matrix from the Cholesky decomposition x = t_x · t_x^T. Then it is easy to see that w_2(x) = t_x is a multiplication algorithm. Let Δ_k(x) denote the kth principal minor of x and define, for s ∈ R^r, the generalized power function

Δ_s(x) = Δ_1(x)^{s_1−s_2} Δ_2(x)^{s_2−s_3} · · · Δ_r(x)^{s_r}.
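The generalized power function is multiplicative under the triangular group: Δ_s(t·x·tᵀ) = Δ_s(t·tᵀ)·Δ_s(x) for lower triangular t with positive diagonal, since the kth leading block of t·x·tᵀ is t_k·x_k·t_kᵀ. A numeric sketch (the exponent vector s is illustrative):

```python
import numpy as np

# Sketch: generalized power function from principal minors, and its
# multiplicativity under congruence by a lower triangular matrix.
def delta_s(x, s):
    r = len(x)
    minors = [np.linalg.det(x[: k + 1, : k + 1]) for k in range(r)]
    exps = [s[k] - s[k + 1] for k in range(r - 1)] + [s[-1]]
    return np.prod([m ** e for m, e in zip(minors, exps)])

r = 3
s = np.array([1.3, 0.7, 0.2])
rng = np.random.default_rng(10)
a = rng.standard_normal((r, r))
x = a @ a.T + r * np.eye(r)
t = np.tril(rng.standard_normal((r, r)))
np.fill_diagonal(t, rng.uniform(0.5, 1.5, r))       # positive diagonal
mult_res = abs(delta_s(t @ x @ t.T, s)
               / (delta_s(t @ t.T, s) * delta_s(x, s)) - 1.0)
```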
We will now give the solution to the fundamental equation of information on symmetric cones for any two multiplication algorithms, but with the additional assumption that any two unknown functions are invariant under the