1 Introduction

Matsumoto and Yor [15, 16] have shown that if X and Y are independent random variables, where Y is gamma distributed with shape parameter p and scale parameter a and X has the generalized inverse Gaussian (GIG) distribution with parameters \((-p,a,b)\), then the random variables \(U=(X+Y)^{-1}\) and \(V=X^{-1}-(X+Y)^{-1}\) are independent, with respective distributions GIG with parameters \((-p,b,a)\) and gamma with parameters p and b.
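This univariate property is easy to probe numerically. The sketch below simulates X and Y and checks that V is gamma distributed with parameters p and b and that U and V are uncorrelated; the translation between the parametrization above and SciPy's geninvgauss shape/scale parameters is our own and should be treated as an assumption.

```python
import numpy as np
from scipy.stats import geninvgauss, gamma

rng = np.random.default_rng(0)
p, a, b = 1.5, 2.0, 3.0
n = 200_000

# GIG(-p, a, b) has density proportional to x^{-p-1} exp(-a x - b/x).
# SciPy's geninvgauss(q, s, scale=c) has density ~ x^{q-1} exp(-s (x/c + c/x)/2),
# which matches with q = -p, c = sqrt(b/a), s = 2 sqrt(a b)  (our translation).
X = geninvgauss.rvs(-p, 2 * np.sqrt(a * b), scale=np.sqrt(b / a), size=n, random_state=rng)
# gamma with shape p and scale parameter a: density ~ y^{p-1} exp(-a y)
Y = gamma.rvs(p, scale=1 / a, size=n, random_state=rng)

U = 1 / (X + Y)
V = 1 / X - U

# V should be gamma with parameters p and b, hence E[V] = p/b,
# and independence of U and V implies zero correlation.
assert abs(V.mean() - p / b) < 0.01
assert abs(np.corrcoef(U, V)[0, 1]) < 0.03
```

This is only a sanity check of necessary conditions, not a proof of independence.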

Matsumoto and Yor asked about a converse theorem based on the independence of U and V: assume that X and Y are non-degenerate, nonnegative, independent random variables such that U and V are independent. Does this imply that X and Y must follow GIG and gamma distributions, respectively?

A positive answer to this question was given by Letac and Wesołowski [13] with the use of Laplace transforms. In the same paper, both the Matsumoto–Yor property and its converse (with additional smoothness assumptions) were generalized to the cone \(\Omega _+\) of symmetric positive definite \((r\times r)\) real matrices in the following way. For \(p>(r-1)/2\) and \( {\mathbf{a }} , {\mathbf{b }} \in \Omega _+\), consider two independent random variables X and Y with the following densities

$$\begin{aligned} \mu _{-p, {\mathbf{a }} , {\mathbf{b }} }(\mathrm{d} {\mathbf{x }} )&=c_1 (\det {\mathbf{x }} )^{-p-(r+1)/2}\exp \left( -{\mathrm {tr}}\,( {\mathbf{a }} \cdot {\mathbf{x }} )-{\mathrm {tr}}\,( {\mathbf{b }} \cdot {\mathbf{x }} ^{-1})\right) I_{\Omega _+}( {\mathbf{x }} )\mathrm{d} {\mathbf{x }} ,\\ \gamma _{p, {\mathbf{a }} }(\mathrm{d} {\mathbf{y }} )&=c_2 (\det {\mathbf{y }} )^{p-(r+1)/2}\exp (-{\mathrm {tr}}\,( {\mathbf{a }} \cdot {\mathbf{y }} ))I_{\Omega _+}( {\mathbf{y }} )\mathrm{d} {\mathbf{y }} . \end{aligned}$$

The distribution of X is the GIG with parameters \((-p, {\mathbf{a }} , {\mathbf{b }} )\), and the distribution of Y is the Wishart distribution with shape parameter p and scale parameter \( {\mathbf{a }} \). Letac and Wesołowski have shown that if X and Y are as above, then \((U,V)\) has distribution \(\mu _{-p, {\mathbf{b }} , {\mathbf{a }} }\otimes \gamma _{p, {\mathbf{b }} }\). As was observed by the authors, the natural framework for the Matsumoto–Yor property is symmetric cones. A statement of the symmetric cone version of the Matsumoto–Yor property is given in Sect. 3.

In this paper, we give a new proof of the converse result of the Matsumoto–Yor property when X and Y take values in any irreducible symmetric cone. The smoothness assumption is reduced from \(C^2\) densities in [13] and differentiability in [17] to continuity only. A new solution of a related functional equation on symmetric cones (see Theorem 4.5) was found under the assumption of continuity of the respective functions, with the use of the corresponding univariate result due to Wesołowski [18]. A similar reduction in regularity assumptions was recently performed in the density version of the Lukacs–Olkin–Rubin theorem in [7].

It is worth mentioning that there are several related one-dimensional results [3, 10], as well as results for random matrices [9, 12].

While solving the functional equation, we use Hua’s identity, which allows us to write the inverse of \(V=X^{-1}-(X+Y)^{-1}\) in a very convenient form:

$$\begin{aligned} V^{-1}=X+X\cdot Y^{-1}\cdot X. \end{aligned}$$
(1)

Hua’s identity has already proved to be useful in some problems related to GIG and Wishart distributions—see [1], where it was used to analyze some random continued fractions on symmetric cones.
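For the matrix cone \(\Omega _+\), identity (1) is easy to verify numerically; a minimal sketch (the random SPD construction is ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    # a generic symmetric positive definite matrix
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

X, Y = random_spd(4), random_spd(4)
inv = np.linalg.inv

V = inv(X) - inv(X + Y)        # V = X^{-1} - (X+Y)^{-1}
lhs = inv(V)
rhs = X + X @ inv(Y) @ X       # right-hand side of Hua's identity (1)
assert np.allclose(lhs, rhs)
```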

The paper is organized as follows. We start in the next section with some basic definitions and theorems regarding analysis on symmetric cones. In Sect. 3, we define the GIG and Wishart distributions and state the Matsumoto–Yor property on symmetric cones. The core of the proof of the converse to the Matsumoto–Yor property is the solution of a functional equation for real functions with arguments in the cone. Section 4 is devoted to the analysis of this functional equation. The statement and the proof of the main result are given in Sect. 5. Finally, in Sect. 6, we give some remarks regarding the Matsumoto–Yor property on matrices of different dimensions and the related functional equation.

2 Symmetric Cones

In this section, we give a short introduction to the theory of symmetric cones. For further details, we refer to [4].

A Euclidean Jordan algebra is a Euclidean space \(\mathbb {E}\) (endowed with the scalar product denoted by \(\left\langle {\mathbf{x }} , {\mathbf{y }} \right\rangle \)) equipped with a bilinear mapping (product)

$$\begin{aligned} \mathbb {E}\times \mathbb {E}\ni \left( {\mathbf{x }} , {\mathbf{y }} \right) \mapsto {\mathbf{x }} {\mathbf{y }} \in \mathbb {E}\end{aligned}$$

and a neutral element \( {\mathbf{e }} \) in \(\mathbb {E}\) such that for all \( {\mathbf{x }} , {\mathbf{y }} , {\mathbf{z }} \) in \(\mathbb {E}\):

  • \( {\mathbf{x }} {\mathbf{y }} = {\mathbf{y }} {\mathbf{x }} \),

  • \( {\mathbf{x }} ( {\mathbf{x }} ^2 {\mathbf{y }} )= {\mathbf{x }} ^2( {\mathbf{x }} {\mathbf{y }} )\),

  • \( {\mathbf{x }} {\mathbf{e }} = {\mathbf{x }} \),

  • \(\left\langle {\mathbf{x }} , {\mathbf{y }} {\mathbf{z }} \right\rangle =\left\langle {\mathbf{x }} {\mathbf{y }} , {\mathbf{z }} \right\rangle \).

For \( {\mathbf{x }} \in \mathbb {E}\) let \(\mathbb {L}( {\mathbf{x }} ):\mathbb {E}\rightarrow \mathbb {E}\) be the linear map defined by

$$\begin{aligned} \mathbb {L}( {\mathbf{x }} ) {\mathbf{y }} = {\mathbf{x }} {\mathbf{y }} , \end{aligned}$$

and define

$$\begin{aligned} \mathbb {P}( {\mathbf{x }} )=2\mathbb {L}^2( {\mathbf{x }} )-\mathbb {L}\left( {\mathbf{x }} ^2\right) . \end{aligned}$$

Let \({\mathrm {End}}(\mathbb {E})\) denote the space of endomorphisms of \(\mathbb {E}\). The map \(\mathbb {P}:\mathbb {E}\rightarrow \mathrm {End}(\mathbb {E})\) is called the quadratic representation of \(\mathbb {E}\).

An element \( {\mathbf{x }} \) is said to be invertible if there exists an element \( {\mathbf{y }} \) in \(\mathbb {E}\) such that \(\mathbb {L}( {\mathbf{x }} ) {\mathbf{y }} = {\mathbf{e }} \). Then, \( {\mathbf{y }} \) is called the inverse of \( {\mathbf{x }} \) and it is denoted by \( {\mathbf{y }} = {\mathbf{x }} ^{-1}\). Note that the inverse of \( {\mathbf{x }} \) is unique. It can be shown that \( {\mathbf{x }} \) is invertible if and only if \(\mathbb {P}( {\mathbf{x }} )\) is invertible, and in this case, \(\left( \mathbb {P}( {\mathbf{x }} )\right) ^{-1} =\mathbb {P}\left( {\mathbf{x }} ^{-1}\right) \).

A Euclidean Jordan algebra \(\mathbb {E}\) is said to be simple if it is not a Cartesian product of two Euclidean Jordan algebras of positive dimensions. Up to linear isomorphism, there are only five kinds of Euclidean simple Jordan algebras. Let \(\mathbb {K}\) denote either the real numbers \(\mathbb {R}\), the complex numbers \(\mathbb {C}\), the quaternions \(\mathbb {H}\) or the octonions \(\mathbb {O}\). Let us write \(S_r(\mathbb {K})\) for the space of \(r\times r\) Hermitian matrices with entries in \(\mathbb {K}\), endowed with the Euclidean structure \(\left\langle {\mathbf{x }} , {\mathbf{y }} \right\rangle ={\mathrm {Trace}}\,( {\mathbf{x }} \cdot \bar{ {\mathbf{y }} })\) and with the Jordan product

$$\begin{aligned} {\mathbf{x }} {\mathbf{y }} =\tfrac{1}{2}( {\mathbf{x }} \cdot {\mathbf{y }} + {\mathbf{y }} \cdot {\mathbf{x }} ), \end{aligned}$$

where \( {\mathbf{x }} \cdot {\mathbf{y }} \) denotes the ordinary product of matrices and \(\bar{ {\mathbf{y }} }\) is the conjugate of \( {\mathbf{y }} \). Then \(S_r(\mathbb {R}), r\ge 1, S_r(\mathbb {C}), r\ge 2, S_r(\mathbb {H}), r\ge 2\), and the exceptional \(S_3(\mathbb {O})\) are the first four kinds of Euclidean simple Jordan algebras. Note that in this case if \(\mathbb {K}\ne \mathbb {O}\), then

$$\begin{aligned} \mathbb {P}( {\mathbf{y }} ) {\mathbf{x }} = {\mathbf{y }} \cdot {\mathbf{x }} \cdot {\mathbf{y }} . \end{aligned}$$
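This identity can be checked directly from the definition \(\mathbb {P}( {\mathbf{x }} )=2\mathbb {L}^2( {\mathbf{x }} )-\mathbb {L}( {\mathbf{x }} ^2)\); a small numerical sketch for real symmetric matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
sym = lambda m: (m + m.T) / 2
x, y = sym(rng.standard_normal((3, 3))), sym(rng.standard_normal((3, 3)))

# Jordan product on symmetric matrices: x o y = (x.y + y.x)/2 (ordinary matrix products)
L = lambda x, y: (x @ y + y @ x) / 2

# P(x)y = 2 L(x)(L(x)y) - L(x^2)y, where the Jordan square x^2 = x o x = x.x here
Pxy = 2 * L(x, L(x, y)) - L(x @ x, y)
assert np.allclose(Pxy, x @ y @ x)   # quadratic representation acts as y -> x.y.x
```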

The fifth kind is the Euclidean space \(\mathbb {R}^{n+1}, n\ge 2\), with the Jordan product

$$\begin{aligned} \begin{aligned} \left( x_0,x_1,\ldots , x_n\right) \left( y_0,y_1,\ldots ,y_n\right) =\left( \sum _{i=0}^n x_i y_i,x_0y_1+y_0x_1,\ldots ,x_0y_n+y_0x_n\right) . \end{aligned} \end{aligned}$$
(2)
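A short sketch implementing product (2) and checking the Jordan-algebra axioms listed above for it (the helper name jprod is ours):

```python
import numpy as np

def jprod(x, y):
    # Jordan product (2) on R^{n+1}: first coordinate <x, y>, then x0*y_i + y0*x_i
    return np.concatenate([[np.dot(x, y)], x[0] * y[1:] + y[0] * x[1:]])

rng = np.random.default_rng(3)
x, y, z = (rng.standard_normal(4) for _ in range(3))
e = np.array([1.0, 0.0, 0.0, 0.0])   # the neutral element

assert np.allclose(jprod(x, y), jprod(y, x))                        # xy = yx
assert np.allclose(jprod(x, e), x)                                  # xe = x
x2 = jprod(x, x)
assert np.allclose(jprod(x, jprod(x2, y)), jprod(x2, jprod(x, y)))  # x(x^2 y) = x^2(xy)
assert np.isclose(np.dot(x, jprod(y, z)), np.dot(jprod(x, y), z))   # <x, yz> = <xy, z>
```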

To each Euclidean simple Jordan algebra, one can attach the set \(\bar{\Omega }\) of Jordan squares

$$\begin{aligned} \bar{\Omega }=\left\{ {\mathbf{x }} \in \mathbb {E}: \text{ there } \text{ exists } {\mathbf{y }} \text{ in } \mathbb {E} \text{ such } \text{ that } {\mathbf{x }} = {\mathbf{y }} ^2 \right\} . \end{aligned}$$

The interior \(\Omega \) is a symmetric cone. Moreover, \(\Omega \) is irreducible, i.e., it is not the Cartesian product of two convex cones. One can prove that an open convex cone is symmetric and irreducible if and only if it is the symmetric cone \(\Omega \) of some Euclidean simple Jordan algebra. Each simple Jordan algebra corresponds to a symmetric cone; hence, up to linear isomorphism, there are also only five kinds of symmetric cones. The cone corresponding to the Euclidean Jordan algebra \(\mathbb {R}^{n+1}\) equipped with the Jordan product (2) is called the Lorentz cone.

We will now introduce a very useful decomposition in \(\mathbb {E}\), called the spectral decomposition. An element \( {\mathbf{c }} \in \mathbb {E}\) is said to be a primitive idempotent if \( {\mathbf{c }} {\mathbf{c }} = {\mathbf{c }} \ne 0\) and if \( {\mathbf{c }} \) is not a sum of two non-null idempotents. A complete system of primitive orthogonal idempotents is a set \(\left( {\mathbf{c }} _1,\ldots , {\mathbf{c }} _r\right) \) such that

$$\begin{aligned} \sum _{i=1}^r {\mathbf{c }} _i= {\mathbf{e }} \quad \text{ and }\quad {\mathbf{c }} _i {\mathbf{c }} _j=\delta _{ij} {\mathbf{c }} _i\quad \text{ for } 1\le i\le j\le r. \end{aligned}$$

The size r of such a system is a constant called the rank of \(\mathbb {E}\). Any element \( {\mathbf{x }} \) of a Euclidean simple Jordan algebra can be written as \( {\mathbf{x }} =\sum _{i=1}^r\lambda _i {\mathbf{c }} _i\) for some complete system of primitive orthogonal idempotents \(\left( {\mathbf{c }} _1,\ldots , {\mathbf{c }} _r\right) \). The real numbers \(\lambda _i, i=1,\ldots ,r\), are the eigenvalues of \( {\mathbf{x }} \). One can then define the trace and the determinant of \( {\mathbf{x }} \) by, respectively, \({\mathrm {tr}}\, {\mathbf{x }} =\sum _{i=1}^r\lambda _i\) and \(\det {\mathbf{x }} =\prod _{i=1}^r\lambda _i\). An element \( {\mathbf{x }} \in \mathbb {E}\) belongs to \(\Omega \) if and only if all its eigenvalues are strictly positive.
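For symmetric matrices, the spectral decomposition above is just the eigendecomposition; a brief numerical illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
m = rng.standard_normal((4, 4))
x = (m + m.T) / 2                    # an element of the Jordan algebra S_4(R)

lam, Q = np.linalg.eigh(x)           # eigenvalues lambda_i and orthonormal eigenvectors
c = [np.outer(Q[:, i], Q[:, i]) for i in range(4)]   # primitive idempotents c_i

assert np.allclose(sum(c), np.eye(4))                        # sum c_i = e
assert np.allclose(c[0] @ c[1], 0)                           # c_i c_j = 0 for i != j
assert np.allclose(sum(l * ci for l, ci in zip(lam, c)), x)  # x = sum lambda_i c_i
assert np.isclose(np.trace(x), lam.sum())                    # tr x = sum of eigenvalues
assert np.isclose(np.linalg.det(x), lam.prod())              # det x = product of eigenvalues
```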

Note that, up to a multiplicative constant, \({\mathrm {tr}}\,( {\mathbf{x }} {\mathbf{y }} )\) is the only scalar product on \(\mathbb {E}\) which makes \(\Omega \) self-dual. Henceforth, we assume that \(\Omega \) is an irreducible cone and that the corresponding Jordan algebra \(\mathbb {E}\) is equipped with the canonical scalar product \(\left\langle {\mathbf{x }} , {\mathbf{y }} \right\rangle ={\mathrm {tr}}\,( {\mathbf{x }} {\mathbf{y }} )\).

The rank r and the dimension \(\dim \Omega \) of an irreducible symmetric cone are connected through the relation

$$\begin{aligned} \dim \Omega =r+\frac{d\,r(r-1)}{2}, \end{aligned}$$

where d is an integer called the Peirce constant.

An important property of the determinant is that

$$\begin{aligned} \det \left( \mathbb {P}( {\mathbf{x }} ) {\mathbf{y }} \right) =(\det {\mathbf{x }} )^2 \det {\mathbf{y }} ,\quad ( {\mathbf{x }} , {\mathbf{y }} )\in \Omega ^2. \end{aligned}$$
(3)

It turns out that (3) characterizes the determinant—see Lemma 4.2 below. Moreover (see [4, Proposition II.4.2])

$$\begin{aligned} {\mathrm {Det}}\left( \mathbb {P}( {\mathbf{x }} )\right) =(\det {\mathbf{x }} )^{2\dim \Omega /r}, \end{aligned}$$
(4)

where \({\mathrm {Det}}\) denotes the determinant in the space of endomorphisms on \(\Omega \).
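Both (3) and (4) can be checked numerically on the cone of \(3\times 3\) symmetric positive definite matrices, where \(\mathbb {P}( {\mathbf{x }} ) {\mathbf{y }} = {\mathbf{x }} \cdot {\mathbf{y }} \cdot {\mathbf{x }} \) and \(\dim \Omega =r(r+1)/2\), so the exponent in (4) is \(2\dim \Omega /r=r+1\). A sketch (the basis construction below is ours):

```python
import numpy as np

rng = np.random.default_rng(5)
r = 3
def random_spd(n):
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

x, y = random_spd(r), random_spd(r)
det = np.linalg.det

# (3): det(P(x)y) = (det x)^2 det y, since P(x)y = x y x here
assert np.isclose(det(x @ y @ x), det(x)**2 * det(y))

# (4): Det(P(x)) = (det x)^{2 dim/r} = (det x)^{r+1} for real symmetric matrices.
# Build the matrix of the endomorphism y -> x y x in a basis of Sym(r).
basis = []
for i in range(r):
    for j in range(i, r):
        b = np.zeros((r, r)); b[i, j] = b[j, i] = 1.0
        basis.append(b)
coords = lambda m: np.array([m[i, j] for i in range(r) for j in range(i, r)])
M = np.column_stack([coords(x @ b @ x) for b in basis])
assert np.isclose(det(M), det(x)**(r + 1))
```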

In the proof of our main theorem, we will need the following identity (called Hua’s identity—see [4, Exercise 5c, p.39])

$$\begin{aligned} {\mathbf{a }} ^{-1}-( {\mathbf{a }} + {\mathbf{b }} )^{-1}=( {\mathbf{a }} +\mathbb {P}( {\mathbf{a }} ) {\mathbf{b }} ^{-1})^{-1} \end{aligned}$$
(5)

when \( {\mathbf{a }} \in \Omega , {\mathbf{b }} \in \mathbb {E}\) are such that \( {\mathbf{b }} , {\mathbf{a }} + {\mathbf{b }} \) and \( {\mathbf{a }} +\mathbb {P}( {\mathbf{a }} ) {\mathbf{b }} ^{-1}\) are invertible. Note that if \( {\mathbf{a }} , {\mathbf{b }} \in \Omega \), then \( {\mathbf{a }} ^{-1}-( {\mathbf{a }} + {\mathbf{b }} )^{-1}\in \Omega \). For the cone \(\Omega _+\) of symmetric positive definite real matrices, Hua’s identity takes the form given in (1).

3 Wishart and GIG Distributions

The Wishart distribution \(\gamma _{p, {\mathbf{a }} }\) in \(\bar{\Omega }\) is defined for any \( {\mathbf{a }} \in \Omega \) and any p in the set

$$\begin{aligned} \Lambda =\{0,d/2,d,\ldots ,d(r-1)/2\}\cup (d(r-1)/2,\infty ) \end{aligned}$$

by its Laplace transform

$$\begin{aligned} \int _{\bar{\Omega }} \exp (-\left\langle \sigma , {\mathbf{y }} \right\rangle )\gamma _{p, {\mathbf{a }} } (\mathrm{d} {\mathbf{y }} )=\left( \frac{\det {\mathbf{a }} }{\det \left( {\mathbf{a }} +\sigma \right) }\right) ^p, \end{aligned}$$

which holds for any \(\sigma + {\mathbf{a }} \in \Omega \). If \(p>\dim \Omega /r-1\), then \(\gamma _{p, {\mathbf{a }} }\) is absolutely continuous with respect to the Lebesgue measure and has the density

$$\begin{aligned} \gamma _{p, {\mathbf{a }} }(\mathrm{d} {\mathbf{x }} )=\frac{(\det {\mathbf{a }} )^p}{\Gamma _\Omega (p)} (\det {\mathbf{x }} )^{p-\dim \Omega /r}\mathrm{e}^{-\left\langle {\mathbf{a }} , {\mathbf{x }} \right\rangle } I_\Omega ( {\mathbf{x }} )\,\mathrm{d} {\mathbf{x }} ,\quad {\mathbf{x }} \in \Omega , \end{aligned}$$

where \(\Gamma _\Omega \) is the gamma function of the symmetric cone \(\Omega \) (see [4, p.124]).
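A Monte Carlo sanity check of the Laplace transform formula for \(r=2\) real matrices, using SciPy's wishart; the translation between parametrizations (df \(=2p\) and Wishart scale \( {\mathbf{a }} ^{-1}/2\)) is our own assumption:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(6)
p = 2.5                        # shape parameter; here p = df/2 for real 2x2 matrices
a = np.eye(2)                  # scale parameter of gamma_{p,a}
sigma = 0.3 * np.eye(2)

# Wishart(df, Sigma) with df = 2p and Sigma = a^{-1}/2 matches gamma_{p,a} (our mapping)
Y = wishart.rvs(df=2 * p, scale=np.linalg.inv(a) / 2, size=200_000, random_state=rng)

mc = np.exp(-np.einsum('ij,nji->n', sigma, Y)).mean()     # E exp(-<sigma, Y>)
exact = (np.linalg.det(a) / np.linalg.det(a + sigma))**p  # (det a / det(a+sigma))^p
assert abs(mc - exact) < 0.01
```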

The absolutely continuous generalized inverse Gaussian distribution \(\mu _{p, {\mathbf{a }} , {\mathbf{b }} }\) on \(\Omega \) is defined for \( {\mathbf{a }} , {\mathbf{b }} \in \Omega \) and \(p\in \mathbb {R}\) by its density

$$\begin{aligned} \mu _{p, {\mathbf{a }} , {\mathbf{b }} }(\mathrm{d} {\mathbf{x }} )=\frac{1}{K_p( {\mathbf{a }} , {\mathbf{b }} )} (\det {\mathbf{x }} )^{p-\dim \Omega /r}\mathrm{e}^{-\left\langle {\mathbf{a }} , {\mathbf{x }} \right\rangle - \left\langle {\mathbf{b }} , {\mathbf{x }} ^{-1}\right\rangle }I_\Omega ( {\mathbf{x }} )\,\mathrm{d} {\mathbf{x }} ,\quad {\mathbf{x }} \in \Omega , \end{aligned}$$

where \(K_p( {\mathbf{a }} , {\mathbf{b }} )\) is a normalizing constant.
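In the univariate case \(r=1\), the normalizing constant reduces to the classical Bessel integral \(\int _0^\infty x^{p-1}\mathrm{e}^{-ax-b/x}\,\mathrm{d}x=2(b/a)^{p/2}K_p(2\sqrt{ab})\), where \(K_p\) is the modified Bessel function of the second kind; a quick numerical check:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

p, a, b = 0.7, 2.0, 3.0

# numerical value of the normalizing integral of the univariate GIG density
num, _ = quad(lambda x: x**(p - 1) * np.exp(-a * x - b / x), 0, np.inf)

# closed form via the modified Bessel function of the second kind K_p
exact = 2 * (b / a)**(p / 2) * kv(p, 2 * np.sqrt(a * b))
assert abs(num - exact) < 1e-6
```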

In [13], Theorem 3.1 was proved in the special case of the cone \(\Omega _+\) of symmetric positive definite real matrices. As observed by the authors, symmetric cones are the natural framework for considering the Matsumoto–Yor property. We state the following theorem without proof, as the argument merely mimics the one for \(\Omega _+\). The original proof relies on the properties of Bessel-like functions (\(K_p( {\mathbf{a }} , {\mathbf{b }} )\)) introduced in [5], which retain their usual properties in the symmetric cone setting.

Theorem 3.1

Let \(p\in \Lambda \) and let \( {\mathbf{a }} \) and \( {\mathbf{b }} \) belong to an irreducible symmetric cone \(\Omega \). Let X and Y be independent random variables in \(\Omega \) and \(\bar{\Omega }\) with respective distributions \(\mu _{-p, {\mathbf{a }} , {\mathbf{b }} }\) and \(\gamma _{p, {\mathbf{a }} }\). Then the random variables \(U=(X+Y)^{-1}\) and \(V=X^{-1}-(X+Y)^{-1}\) are independent with respective distributions \(\mu _{-p, {\mathbf{b }} , {\mathbf{a }} }\) and \(\gamma _{p, {\mathbf{b }} }\).

4 Functional Equations

At the beginning of this section, we state three results that will be useful in the proof of the main technical result—Theorem 4.5. The first one regards regular additive functions (see [11]) on a symmetric cone.

Lemma 4.1

(Additive Cauchy functional equation) Let \(f:\Omega \rightarrow \mathbb {R}\) be a measurable function such that

$$\begin{aligned} f( {\mathbf{x }} )+f( {\mathbf{y }} )=f( {\mathbf{x }} + {\mathbf{y }} ),\quad ( {\mathbf{x }} , {\mathbf{y }} )\in \Omega ^2. \end{aligned}$$

Then there exists \( {\mathbf{f }} \in \mathbb {E}\) such that \(f( {\mathbf{x }} )=\left\langle {\mathbf{f }} , {\mathbf{x }} \right\rangle \) for any \( {\mathbf{x }} \in \Omega \).

An elementary proof of this lemma may be found in [6]. The following lemma was recently proved in [8].

Lemma 4.2

(Logarithmic Pexider functional equation) Let \(f_1, f_2, f_3:\Omega \rightarrow \mathbb {R}\) be measurable functions such that

$$\begin{aligned} f_1( {\mathbf{x }} )+f_2( {\mathbf{y }} )=f_3\left( \mathbb {P}\left( {\mathbf{x }} ^{1/2}\right) {\mathbf{y }} \right) ,\quad ( {\mathbf{x }} , {\mathbf{y }} )\in \Omega ^2. \end{aligned}$$

Then there exist a constant \(q\in \mathbb {R}\) and constants \(\gamma _1, \gamma _2\in \mathbb {R}\) such that for all \( {\mathbf{x }} \in \Omega \),

$$\begin{aligned} f_1( {\mathbf{x }} )&=q\log \det {\mathbf{x }} +\gamma _1, \\ f_2( {\mathbf{x }} )&=q\log \det {\mathbf{x }} +\gamma _2, \\ f_3( {\mathbf{x }} )&=q\log \det {\mathbf{x }} +\gamma _1+\gamma _2. \end{aligned}$$

The main technical result will rely on the following univariate result due to Wesołowski [18].

Theorem 4.3

Let g and \(\alpha \) be locally integrable real functions defined on \((0,\infty )\) such that

$$\begin{aligned} g(x(x+y))-g(y(x+y))=\alpha (x)-\alpha (y),\quad (x,y)\in (0,\infty )^2. \end{aligned}$$
(6)

Then there exist real numbers A, B, C and D such that for any \(x>0\),

$$\begin{aligned} g(x)=Ax+B\log x+C,\quad \alpha (x)=Ax^2+B\log x+D. \end{aligned}$$
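The stated solution is easy to verify symbolically by substituting it back into (6):

```python
import sympy as sp

x, y, A, B, C, D = sp.symbols('x y A B C D', positive=True)
g = lambda t: A * t + B * sp.log(t) + C
alpha = lambda t: A * t**2 + B * sp.log(t) + D

# g(x(x+y)) - g(y(x+y)) should equal alpha(x) - alpha(y)
expr = g(x * (x + y)) - g(y * (x + y)) - (alpha(x) - alpha(y))
assert sp.simplify(sp.expand_log(expr)) == 0
```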

The following result then follows from Theorem 4.3.

Theorem 4.4

Let A, B, C and D be locally integrable real functions defined on \((0,\infty )\) such that

$$\begin{aligned} A(x)+B(y)=C\left( (x+y)^{-1}\right) +D \left( x^{-1}-(x+y)^{-1}\right) ,\quad (x,y)\in (0,\infty )^2.\quad \end{aligned}$$
(7)

Then there exist real numbers p, f, g and \(C_i, i=1,\ldots ,4\), such that for any \(x>0\),

$$\begin{aligned} A(x)&=-p\log x+fx+gx^{-1}+C_1,\\ B(x)&=p\log x+fx+C_2,\\ C(x)&=-p\log x+g x+f x^{-1}+C_3,\\ D(x)&=p\log x+g x+C_4, \end{aligned}$$

and \(C_1+C_2=C_3+C_4\).
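That these functions indeed satisfy (7), given the constraint \(C_1+C_2=C_3+C_4\), can be verified symbolically:

```python
import sympy as sp

x, y, p, f, g, C1, C2, C3 = sp.symbols('x y p f g C1 C2 C3', positive=True)
C4 = C1 + C2 - C3            # encodes the constraint C1 + C2 = C3 + C4

A = lambda t: -p * sp.log(t) + f * t + g / t + C1
B = lambda t:  p * sp.log(t) + f * t + C2
C = lambda t: -p * sp.log(t) + g * t + f / t + C3
D = lambda t:  p * sp.log(t) + g * t + C4

u = 1 / (x + y)
v = sp.together(1 / x - 1 / (x + y))     # = y / (x (x + y))
expr = A(x) + B(y) - C(u) - D(v)         # left minus right side of (7)
assert sp.simplify(sp.expand_log(expr, force=True)) == 0
```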

Proof

Denote \(g_1(x)=A(x^{-1})-B(x^{-1})\) and \(\alpha _1(x)=D(x^2)\). Interchange the roles of x and y in (7) and subtract from the original equation. Then

$$\begin{aligned} g_1\left( x^{-1}\right) -g_1 \left( y^{-1}\right) =\alpha _1\left( \sqrt{\frac{y}{x(x+y)}}\right) -\alpha _1\left( \sqrt{\frac{x}{y(x+y)}}\right) . \end{aligned}$$

Inserting \(x=(u(u+v))^{-1}\) and \(y=(v(u+v))^{-1}\), we arrive at (6) with g and \(\alpha \) replaced, respectively, with \(g_1\) and \(\alpha _1\).

Substituting \(x\mapsto (x+y)^{-1}\) and \(y\mapsto x^{-1}-(x+y)^{-1}\) in (7), we obtain

$$\begin{aligned} A\left( (x+y)^{-1}\right) +B\left( x^{-1} -(x+y)^{-1}\right) =C(x)+D(y),\quad (x,y)\in (0,\infty )^2. \end{aligned}$$

As before, denoting \(g_2(x)=C(x^{-1})-D(x^{-1})\) and \(\alpha _2(x)=B(x^2)\) and subtracting the same equation with x and y interchanged, we see that (6) holds true for \(g_2\) and \(\alpha _2\) as well. The functions \(g_i\) and \(\alpha _i, i=1,2\), are locally integrable, because for \(g_1\) we have

$$\begin{aligned} \int _K|A(x^{-1})-B(x^{-1})|\,\mathrm{d}x=\int _{\phi (K)}|A(y)-B(y)|\frac{\mathrm{d}y}{y^2}\le c\int _{\phi (K)}|A(y)-B(y)|\,\mathrm{d}y \end{aligned}$$

for all compact sets \(K\subset (0,\infty )\), where \(\phi (K)\) is the (compact) image of K under \(\phi (x)=x^{-1}\) and c bounds \(1/y^2\) on \(\phi (K)\). Since A and B were assumed to be locally integrable, we see that \(g_1\) is locally integrable. We proceed analogously for \(g_2, \alpha _1\) and \(\alpha _2\). Thus, by Theorem 4.3 (whose notation we borrow), we obtain:

$$\begin{aligned} B(x)&=\alpha _2(\sqrt{x})=A_2x+\tfrac{B_2}{2}\log x+D_2, \\ D(x)&=\alpha _1(\sqrt{x})=A_1x+\tfrac{B_1}{2}\log x+D_1, \\ A(x)&= A(x)-B(x)+B(x)=g_1(x^{-1})+\alpha _2(\sqrt{x})\\&=A_2x+A_1x^{-1}-\left( B_1-\tfrac{B_2}{2}\right) \log x+C_1+D_2, \\ C(x)&=C(x)-D(x)+D(x)=g_2(x^{-1})+\alpha _1(\sqrt{x})\\&=A_1x+A_2x^{-1}-\left( B_2-\tfrac{B_1}{2}\right) \log x+C_2+D_1. \end{aligned}$$

Inserting it back into (7), it can be quickly verified that \(B_1=B_2=B\). \(\square \)

We are now ready to state and solve the functional equation related to the Matsumoto–Yor property on symmetric cones.

Theorem 4.5

Let a, b, c and d be continuous real functions defined on \(\Omega \) such that

$$\begin{aligned} a( {\mathbf{x }} )+b( {\mathbf{y }} )=c\left( ( {\mathbf{x }} + {\mathbf{y }} )^{-1}\right) +d\left( {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1}\right) ,\quad ( {\mathbf{x }} , {\mathbf{y }} )\in \Omega ^2. \end{aligned}$$
(8)

Then there exist constants \(q\in \mathbb {R}, {\mathbf{f }} , {\mathbf{g }} \in \mathbb {E}\) and \(\gamma _i\in \mathbb {R}, i=1, 2, 3\), such that for any \( {\mathbf{x }} \in \Omega \),

$$\begin{aligned} a( {\mathbf{x }} )&=q\log \det {\mathbf{x }} +\left\langle {\mathbf{f }} , {\mathbf{x }} \right\rangle +\left\langle {\mathbf{g }} , {\mathbf{x }} ^{-1}\right\rangle +\gamma _1+\gamma _3,\\ b( {\mathbf{x }} )&=-q\log \det {\mathbf{x }} +\left\langle {\mathbf{f }} , {\mathbf{x }} \right\rangle +\gamma _2,\\ c( {\mathbf{x }} )&=q\log \det {\mathbf{x }} +\left\langle {\mathbf{g }} , {\mathbf{x }} \right\rangle +\left\langle {\mathbf{f }} , {\mathbf{x }} ^{-1}\right\rangle +\gamma _3,\\ d( {\mathbf{x }} )&=-q\log \det {\mathbf{x }} +\left\langle {\mathbf{g }} , {\mathbf{x }} \right\rangle +\gamma _1+\gamma _2. \end{aligned}$$

Proof

By inserting \(( {\mathbf{x }} , {\mathbf{y }} )=(\alpha {\mathbf{z }} ,\beta {\mathbf{z }} )\) for \(\alpha , \beta >0\) and \( {\mathbf{z }} \in \Omega \) into (8), we arrive at equation (7) with \(A(\alpha ):=a(\alpha {\mathbf{z }} ), B(\alpha ):=b(\alpha {\mathbf{z }} ), C(\alpha ):=c(\alpha {\mathbf{z }} ^{-1})\) and \(D(\alpha ):=d(\alpha {\mathbf{z }} ^{-1})\). The functions A, B, C and D are continuous, so they are locally integrable. Therefore, by Theorem 4.4, for any \( {\mathbf{z }} \in \Omega \), there exist constants \(p( {\mathbf{z }} ), f( {\mathbf{z }} ), g( {\mathbf{z }} )\) and \(C_i( {\mathbf{z }} ), i=1,\ldots ,4\), such that

$$\begin{aligned} \begin{aligned} a(\alpha {\mathbf{z }} )&=-p( {\mathbf{z }} )\log \alpha +f( {\mathbf{z }} )\alpha +g( {\mathbf{z }} )\alpha ^{-1}+C_1( {\mathbf{z }} ),\\ b(\alpha {\mathbf{z }} )&=p( {\mathbf{z }} )\log \alpha +f( {\mathbf{z }} )\alpha +C_2( {\mathbf{z }} ),\\ c(\alpha {\mathbf{z }} ^{-1})&=-p( {\mathbf{z }} )\log \alpha +g( {\mathbf{z }} ) \alpha +f( {\mathbf{z }} ) \alpha ^{-1}+C_3( {\mathbf{z }} ),\\ d(\alpha {\mathbf{z }} ^{-1})&=p( {\mathbf{z }} )\log \alpha +g( {\mathbf{z }} )\alpha +C_4( {\mathbf{z }} ),\\ C_1( {\mathbf{z }} )+C_2( {\mathbf{z }} )&=C_3( {\mathbf{z }} )+C_4( {\mathbf{z }} ), \end{aligned} \end{aligned}$$
(9)

for any \(\alpha >0\) and \( {\mathbf{z }} \in \Omega \). The functions \( {\mathbf{z }} \mapsto p( {\mathbf{z }} ), {\mathbf{z }} \mapsto f( {\mathbf{z }} ), {\mathbf{z }} \mapsto g( {\mathbf{z }} )\) and \( {\mathbf{z }} \mapsto C_i( {\mathbf{z }} ), i=1,\ldots ,4\), are continuous, because a, b, c and d are continuous. Let \(\beta >0\). By the equality \(a(\alpha (\beta {\mathbf{z }} ))=a((\alpha \beta ) {\mathbf{z }} )\), we obtain that for any \(\alpha >0\),

$$\begin{aligned} a(\alpha \beta {\mathbf{z }} )=&-p( {\mathbf{z }} )\log \alpha \beta +f( {\mathbf{z }} )\alpha \beta +g( {\mathbf{z }} )\alpha ^{-1}\beta ^{-1}+C_1( {\mathbf{z }} )\\ =&-p(\beta {\mathbf{z }} )\log \alpha +f(\beta {\mathbf{z }} )\alpha +g(\beta {\mathbf{z }} )\alpha ^{-1}+C_1(\beta {\mathbf{z }} ), \end{aligned}$$

hence

$$\begin{aligned} \begin{aligned} f(\beta {\mathbf{z }} )&=\beta f( {\mathbf{z }} ),\qquad g(\beta {\mathbf{z }} )=\beta ^{-1}g( {\mathbf{z }} ), \\ p(\beta {\mathbf{z }} )&=p( {\mathbf{z }} ), \quad C_1(\beta {\mathbf{z }} )=C_1( {\mathbf{z }} )-p( {\mathbf{z }} )\log \beta . \end{aligned} \end{aligned}$$
(10)

Following the same procedure for the functions b, c and d, we have

$$\begin{aligned} \begin{aligned} C_i(\beta {\mathbf{z }} )&=C_i( {\mathbf{z }} )+p( {\mathbf{z }} )\log \beta ,\quad i=2,3,\\ C_4(\beta {\mathbf{z }} )&=C_4( {\mathbf{z }} )-p( {\mathbf{z }} )\log \beta . \end{aligned} \end{aligned}$$
(11)

Using (9) for \(\alpha =1\) in (8), we get

$$\begin{aligned} f( {\mathbf{x }} )+g( {\mathbf{x }} )+C_1( {\mathbf{x }} )+f( {\mathbf{y }} )+C_2( {\mathbf{y }} )= & {} g( {\mathbf{x }} + {\mathbf{y }} )+f( {\mathbf{x }} + {\mathbf{y }} )+C_3( {\mathbf{x }} + {\mathbf{y }} )\nonumber \\&+\,g\left( ( {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1})^{-1}\right) \nonumber \\&+\,C_4\left( ( {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1})^{-1}\right) . \end{aligned}$$
(12)

Consider the above equation for \((\alpha ^{-1} {\mathbf{x }} ,\alpha ^{-1} {\mathbf{y }} )\in \Omega ^2, \alpha >0\). Then, by (10),

$$\begin{aligned}&\alpha ^{-1} f( {\mathbf{x }} )+\alpha g( {\mathbf{x }} ) +C_1(\alpha ^{-1} {\mathbf{x }} )+\alpha ^{-1} f( {\mathbf{y }} )+C_2(\alpha ^{-1} {\mathbf{y }} ) \\&\quad =\alpha g( {\mathbf{x }} + {\mathbf{y }} )+\alpha ^{-1} f( {\mathbf{x }} + {\mathbf{y }} )+C_3(\alpha ^{-1}( {\mathbf{x }} + {\mathbf{y }} ))\\&\quad \quad +\,\alpha g\left( ( {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1})^{-1}\right) +C_4\left( \alpha ^{-1}( {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1})^{-1}\right) . \end{aligned}$$

Multiplying both sides of the above equation by \(\alpha \) and passing to the limit as \(\alpha \rightarrow 0\), by (11), we obtain

$$\begin{aligned}&f( {\mathbf{x }} )+ f( {\mathbf{y }} )-f( {\mathbf{x }} + {\mathbf{y }} ) \\&\quad =\lim _{\alpha \rightarrow 0}\alpha \left\{ C_3(\alpha ^{-1}( {\mathbf{x }} + {\mathbf{y }} )) +C_4\left( \alpha ^{-1}( {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1})^{-1}\right) \right. \\&\qquad \quad \quad \quad \quad \left. -C_1(\alpha ^{-1} {\mathbf{x }} )-C_2(\alpha ^{-1} {\mathbf{y }} )\right\} . \end{aligned}$$

By (10) and (11), the limit on the right-hand side of the above equation equals 0. Thus, by Lemma 4.1, there exists \( {\mathbf{f }} \in \mathbb {E}\) such that \(f( {\mathbf{x }} )=\left\langle {\mathbf{f }} , {\mathbf{x }} \right\rangle \). Analogously, consider (12) for \((\alpha {\mathbf{x }} ,\alpha {\mathbf{y }} )\in \Omega ^2, \alpha >0\), multiply both of its sides by \(\alpha \) and pass to the limit as \(\alpha \rightarrow 0\). Then

$$\begin{aligned}&g( {\mathbf{x }} )-g( {\mathbf{x }} + {\mathbf{y }} )-g\left( ( {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1})^{-1}\right) \\&\quad =\lim _{\alpha \rightarrow 0} \alpha \left\{ C_3(\alpha ( {\mathbf{x }} + {\mathbf{y }} ))+C_4\left( \alpha ( {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1})^{-1}\right) \right. \\&\qquad \quad \quad \quad \quad \left. -C_1(\alpha {\mathbf{x }} )-C_2(\alpha {\mathbf{y }} )\right\} =0. \end{aligned}$$

Define \(\bar{g}( {\mathbf{x }} )=g( {\mathbf{x }} ^{-1})\). Then,

$$\begin{aligned} \bar{g}( {\mathbf{x }} ^{-1})=\bar{g}(( {\mathbf{x }} + {\mathbf{y }} )^{-1})+ \bar{g}( {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1}). \end{aligned}$$

Thus, \(\bar{g}\) is additive, i.e., there exists \( {\mathbf{g }} \in \mathbb {E}\) such that \(g( {\mathbf{x }} )=\left\langle {\mathbf{g }} , {\mathbf{x }} ^{-1}\right\rangle \).

Using the above results for f and g, (12) simplifies to

$$\begin{aligned} C_1( {\mathbf{x }} )+C_2( {\mathbf{y }} )=C_3( {\mathbf{x }} + {\mathbf{y }} )+C_4 \left( ( {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1})^{-1}\right) . \end{aligned}$$
(13)

Recall that by Hua’s identity (5), the argument of \(C_4\) above may be written as

$$\begin{aligned} ( {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1})^{-1}= {\mathbf{x }} +\mathbb {P}( {\mathbf{x }} ) {\mathbf{y }} ^{-1}. \end{aligned}$$

Using this fact along with (11) in (13) for \( {\mathbf{y }} =\alpha {\mathbf{z }} \), we obtain

$$\begin{aligned}&C_1( {\mathbf{x }} )+C_2( {\mathbf{z }} )+p( {\mathbf{z }} )\log \alpha \\&\quad =C_1( {\mathbf{x }} )+C_2(\alpha {\mathbf{z }} )=C_3( {\mathbf{x }} +\alpha {\mathbf{z }} )+C_4\left( \alpha ^{-1}(\alpha {\mathbf{x }} +\mathbb {P}( {\mathbf{x }} ) {\mathbf{z }} ^{-1})\right) \\&\quad =C_3( {\mathbf{x }} +\alpha {\mathbf{z }} )+C_4(\alpha {\mathbf{x }} +\mathbb {P}( {\mathbf{x }} ) {\mathbf{z }} ^{-1})+p(\alpha {\mathbf{x }} +\mathbb {P}( {\mathbf{x }} ) {\mathbf{z }} ^{-1})\log \alpha . \end{aligned}$$

Passing to the limit as \(\alpha \rightarrow 0\) (recall that \(C_i\) are continuous on \(\Omega \)), we obtain

$$\begin{aligned} C_1( {\mathbf{x }} )+C_2( {\mathbf{z }} )-C_3( {\mathbf{x }} )-C_4(\mathbb {P}( {\mathbf{x }} ) {\mathbf{z }} ^{-1})=\lim _{\alpha \rightarrow 0}\log \alpha \left\{ p\left( \alpha {\mathbf{x }} +\mathbb {P}( {\mathbf{x }} ) {\mathbf{z }} ^{-1}\right) -p( {\mathbf{z }} )\right\} \end{aligned}$$
(14)

for any \(( {\mathbf{x }} , {\mathbf{z }} )\in \Omega ^2\). A necessary condition for the limit on the right-hand side to exist is

$$\begin{aligned} \lim _{\alpha \rightarrow 0} \left\{ p(\alpha {\mathbf{x }} +\mathbb {P}( {\mathbf{x }} ) {\mathbf{z }} ^{-1})-p( {\mathbf{z }} )\right\} =0. \end{aligned}$$

But p is continuous and \(\lim _{\alpha \rightarrow 0}p(\alpha {\mathbf{x }} +\mathbb {P}( {\mathbf{x }} ) {\mathbf{z }} ^{-1})=p(\mathbb {P}( {\mathbf{x }} ) {\mathbf{z }} ^{-1})\), hence \(p( {\mathbf{z }} )=p(\mathbb {P}( {\mathbf{x }} ) {\mathbf{z }} ^{-1})\) for all \(( {\mathbf{x }} , {\mathbf{z }} )\in \Omega ^2\). Since the cone is homogeneous, for fixed \( {\mathbf{z }} \) the element \(\mathbb {P}( {\mathbf{x }} ) {\mathbf{z }} ^{-1}\) runs through the whole of \(\Omega \) as \( {\mathbf{x }} \) varies; thus, the function p is constant and the right-hand side of (14) is equal to 0. Hence, substituting \( {\mathbf{z }} = {\mathbf{y }} ^{-1}\) and \( {\mathbf{x }} \mapsto {\mathbf{x }} ^{1/2}\) in (14), we get

$$\begin{aligned} C_1( {\mathbf{x }} ^{1/2})-C_3( {\mathbf{x }} ^{1/2})+C_2( {\mathbf{y }} ^{-1})=C_4(\mathbb {P}( {\mathbf{x }} ^{1/2}) {\mathbf{y }} ). \end{aligned}$$

Define \(f_1( {\mathbf{x }} ):=C_1( {\mathbf{x }} ^{1/2})-C_3 ( {\mathbf{x }} ^{1/2}), f_2( {\mathbf{x }} ):=C_2( {\mathbf{x }} ^{-1})\) and \(f_3( {\mathbf{x }} ):= C_4( {\mathbf{x }} )\) for \( {\mathbf{x }} \in \Omega \). Then

$$\begin{aligned} f_1( {\mathbf{x }} )+f_2( {\mathbf{y }} )=f_3(\mathbb {P}( {\mathbf{x }} ^{1/2}) {\mathbf{y }} ),\quad ( {\mathbf{x }} , {\mathbf{y }} )\in \Omega ^2. \end{aligned}$$

By Lemma 4.2, there exist real constants \(q, \gamma _1\) and \(\gamma _2\) such that for any \( {\mathbf{x }} \in \Omega \),

$$\begin{aligned} f_1( {\mathbf{x }} )&=q\log \det {\mathbf{x }} +\gamma _1,\\ f_2( {\mathbf{x }} )&=q\log \det {\mathbf{x }} +\gamma _2,\\ f_3( {\mathbf{x }} )&=q\log \det {\mathbf{x }} +\gamma _1+\gamma _2, \end{aligned}$$

that is,

$$\begin{aligned} C_1( {\mathbf{x }} )&=C_3( {\mathbf{x }} )+2q\log \det {\mathbf{x }} +\gamma _1,\\ C_2( {\mathbf{x }} )&=-q\log \det {\mathbf{x }} +\gamma _2,\\ C_4( {\mathbf{x }} )&=q\log \det {\mathbf{x }} +\gamma _1+\gamma _2. \end{aligned}$$

Let us go back to (13) and use the above result. Then

$$\begin{aligned}&C_3( {\mathbf{x }} )+2q\log \det {\mathbf{x }} -q\log \det {\mathbf{y }} \\&\quad =C_3( {\mathbf{x }} + {\mathbf{y }} ) +q\log \det ( {\mathbf{x }} +\mathbb {P}( {\mathbf{x }} ) {\mathbf{y }} ^{-1}),\quad ( {\mathbf{x }} , {\mathbf{y }} )\in \Omega ^2. \end{aligned}$$

Since \(\det ( {\mathbf{x }} +\mathbb {P}( {\mathbf{x }} ) {\mathbf{y }} ^{-1})=\det ( {\mathbf{x }} ^2)\det ( {\mathbf{x }} ^{-1}+ {\mathbf{y }} ^{-1})\), we obtain

$$\begin{aligned} C_3( {\mathbf{x }} )-q\log \det {\mathbf{y }} =C_3( {\mathbf{x }} + {\mathbf{y }} )+q \log \det \left( {\mathbf{x }} ^{-1}+ {\mathbf{y }} ^{-1}\right) . \end{aligned}$$

One can interchange \( {\mathbf{x }} \) and \( {\mathbf{y }} \) on the right-hand side to obtain

$$\begin{aligned} C_3( {\mathbf{x }} )+q\log \det {\mathbf{x }} =C_3( {\mathbf{y }} )+q\log \det {\mathbf{y }} =\mathrm{const}:=\gamma _3, \end{aligned}$$

that is, \(C_3( {\mathbf{x }} )=-q\log \det {\mathbf{x }} +\gamma _3\), which completes the proof. \(\square \)
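As a sanity check, the solution of Theorem 4.5 can be verified numerically on the cone of symmetric positive definite matrices, with \(\left\langle {\mathbf{x }} , {\mathbf{y }} \right\rangle ={\mathrm {tr}}\,( {\mathbf{x }} {\mathbf{y }} )\) and arbitrary symmetric parameters \( {\mathbf{f }} , {\mathbf{g }} \); the construction below is our own sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3
def random_spd(k):
    m = rng.standard_normal((k, k))
    return m @ m.T + k * np.eye(k)

# parameters of the solution: q and gamma_i real, f and g symmetric matrices
q, g1, g2, g3 = 0.7, 0.2, -0.4, 1.1
F, G = random_spd(n), random_spd(n)

inv = np.linalg.inv
ld = lambda m: np.linalg.slogdet(m)[1]   # log det on SPD matrices
ip = lambda s, t: np.trace(s @ t)        # <s, t> = tr(s t)

a = lambda x:  q * ld(x) + ip(F, x) + ip(G, inv(x)) + g1 + g3
b = lambda y: -q * ld(y) + ip(F, y) + g2
c = lambda u:  q * ld(u) + ip(G, u) + ip(F, inv(u)) + g3
d = lambda v: -q * ld(v) + ip(G, v) + g1 + g2

X, Y = random_spd(n), random_spd(n)
U = inv(X + Y)
V = inv(X) - U
assert abs(a(X) + b(Y) - c(U) - d(V)) < 1e-8   # equation (8) holds
```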

5 Main Result

In this section, we prove our main result, which is a converse to the Matsumoto–Yor property on symmetric cones. We reduce the smoothness conditions on the densities from \(C^2\) in [13] and differentiability in [17] to continuity only.

Theorem 5.1

Let X and Y be independent random variables in \(\Omega \) with continuous and strictly positive densities. If the random variables \(U=(X+Y)^{-1}\) and \(V=X^{-1}-(X+Y)^{-1}\) are independent, then there exist \(p>\dim \Omega /r-1\) and \( {\mathbf{a }} , {\mathbf{b }} \in \Omega \) such that X and Y follow the respective distributions \(\mu _{-p, {\mathbf{a }} , {\mathbf{b }} }\) and \(\gamma _{p, {\mathbf{a }} }\).

Proof

Define the map \(\Psi :\Omega ^2\rightarrow \Omega ^2\) by \(\Psi ( {\mathbf{x }} , {\mathbf{y }} )=\left( ( {\mathbf{x }} + {\mathbf{y }} )^{-1}, {\mathbf{x }} ^{-1}-( {\mathbf{x }} + {\mathbf{y }} )^{-1}\right) =( {\mathbf{u }} , {\mathbf{v }} )\). Obviously, \((U,V)=\Psi (X,Y)\). The function \(\Psi \) is a bijection. In order to find the joint density of (U, V), the essential computation is finding the Jacobian J of the map \(\Psi ^{-1}\), that is, the determinant of the linear map

$$\begin{aligned} \begin{pmatrix} \mathrm{d} {\mathbf{u }} \\ \mathrm{d} {\mathbf{v }} \end{pmatrix} \mapsto \begin{pmatrix} \mathrm{d} {\mathbf{x }} \\ \mathrm{d} {\mathbf{y }} \end{pmatrix} = \begin{pmatrix} \mathrm{d} {\mathbf{x }} /\mathrm{d} {\mathbf{u }} &{} \mathrm{d} {\mathbf{x }} /\mathrm{d} {\mathbf{v }} \\ \mathrm{d} {\mathbf{y }} /\mathrm{d} {\mathbf{u }} &{} \mathrm{d} {\mathbf{y }} /\mathrm{d} {\mathbf{v }} \end{pmatrix} \begin{pmatrix} \mathrm{d} {\mathbf{u }} \\ \mathrm{d} {\mathbf{v }} \end{pmatrix}. \end{aligned}$$

It is easy to see that \(\Psi =\Psi ^{-1}\), that is, \(( {\mathbf{x }} , {\mathbf{y }} )=\left( ( {\mathbf{u }} + {\mathbf{v }} )^{-1}, {\mathbf{u }} ^{-1}-( {\mathbf{u }} + {\mathbf{v }} )^{-1}\right) \). Note that the derivative of the map \( {\mathbf{x }} \mapsto {\mathbf{x }} ^{-1}\) at \( {\mathbf{x }} \) is \(-\mathbb {P}( {\mathbf{x }} )^{-1}\). Thus

$$\begin{aligned} J&=\left| \begin{array}{cc} -\mathbb {P}( {\mathbf{u }} + {\mathbf{v }} )^{-1} &{} -\mathbb {P}( {\mathbf{u }} + {\mathbf{v }} )^{-1} \\ -\mathbb {P}( {\mathbf{u }} )^{-1}+\mathbb {P}( {\mathbf{u }} + {\mathbf{v }} )^{-1} &{} \mathbb {P}( {\mathbf{u }} + {\mathbf{v }} )^{-1} \end{array} \right| \\&=\left| \begin{array}{cc} -\mathbb {P}( {\mathbf{u }} )^{-1} &{} 0 \\ -\mathbb {P}( {\mathbf{u }} )^{-1}+\mathbb {P}( {\mathbf{u }} + {\mathbf{v }} )^{-1} &{} \mathbb {P}( {\mathbf{u }} + {\mathbf{v }} )^{-1} \end{array} \right| \\&={\mathrm {Det}}\left( \mathbb {P}( {\mathbf{u }} + {\mathbf{v }} )^{-1}\mathbb {P}( {\mathbf{u }} )^{-1}\right) . \end{aligned}$$

By (4), we get

$$\begin{aligned} J=(\det {\mathbf{u }} \det ( {\mathbf{u }} + {\mathbf{v }} ))^{-2\dim \Omega /r}. \end{aligned}$$
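Both ingredients of this computation, the involutivity of \(\Psi \) and the value of \({\mathrm {Det}}\,\mathbb {P}( {\mathbf{x }} )\), can be checked numerically in the cone of real symmetric positive definite \(r\times r\) matrices, where \(\dim \Omega =r(r+1)/2\) and hence \(2\dim \Omega /r=r+1\); a sketch:

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(2)
inv = np.linalg.inv

def random_spd(r):
    """Random symmetric positive definite r x r matrix."""
    m = rng.standard_normal((r, r))
    return m @ m.T + r * np.eye(r)

def Psi(x, y):
    """(x, y) -> ((x + y)^{-1}, x^{-1} - (x + y)^{-1})."""
    return inv(x + y), inv(x) - inv(x + y)

r = 3
dim = r * (r + 1) // 2          # dim of Sym(r); here 2 * dim / r = r + 1
x, y = random_spd(r), random_spd(r)

# Psi is an involution: applying it twice recovers (x, y).
u, v = Psi(x, y)
x2, y2 = Psi(u, v)
assert np.allclose(x2, x) and np.allclose(y2, y)

# Matrix of the linear map P(x): s -> x s x in an orthonormal basis of Sym(r).
basis = []
for i, j in combinations_with_replacement(range(r), 2):
    e = np.zeros((r, r))
    e[i, j] = e[j, i] = 1.0 if i == j else 1.0 / np.sqrt(2)
    basis.append(e)
P = np.array([[np.trace(a @ x @ b @ x) for b in basis] for a in basis])

# Det P(x) = (det x)^{2 dim / r}, the determinant entering the Jacobian.
assert np.isclose(np.linalg.det(P), np.linalg.det(x) ** (2 * dim / r))
```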

Since (XY) and (UV) have independent components, the following identity holds almost everywhere with respect to the Lebesgue measure:

$$\begin{aligned} f_U( {\mathbf{u }} )f_V( {\mathbf{v }} )=(\det {\mathbf{u }} \det ( {\mathbf{u }} + {\mathbf{v }} ))^{-2\dim \Omega /r} f_X\left( ( {\mathbf{u }} + {\mathbf{v }} )^{-1}\right) f_Y\left( {\mathbf{u }} ^{-1}-( {\mathbf{u }} + {\mathbf{v }} )^{-1}\right) , \end{aligned}$$

where \(f_X, f_Y, f_U\) and \(f_V\) denote the densities of X, Y, U and V, respectively. Since the respective densities are assumed to be continuous, the above equation holds for every \(( {\mathbf{u }} , {\mathbf{v }} )\in \Omega ^2\). Taking the logarithms of both sides of the above equation (this is permitted since \(f_X, f_Y>0\) on \(\Omega \)), we get

$$\begin{aligned} a( {\mathbf{u }} )+b( {\mathbf{v }} )=c\left( ( {\mathbf{u }} + {\mathbf{v }} )^{-1}\right) +d\left( {\mathbf{u }} ^{-1}-( {\mathbf{u }} + {\mathbf{v }} )^{-1}\right) , \end{aligned}$$
(15)

where

$$\begin{aligned} a( {\mathbf{x }} )&=\log \, f_U( {\mathbf{x }} )+\tfrac{2\dim \Omega }{r}\log \det {\mathbf{x }} ,\\ c( {\mathbf{x }} )&=\log \, f_X( {\mathbf{x }} )+\tfrac{2\dim \Omega }{r}\log \det {\mathbf{x }} ,\\ b&=\log \, f_V,\qquad d=\log \, f_Y. \end{aligned}$$

By Theorem 4.5, there exist constants \(q\in \mathbb {R}, {\mathbf{f }} , {\mathbf{g }} \in \mathbb {E}\) and \(\gamma _i\in \mathbb {R}, i=1, 2, 3\), such that for any \( {\mathbf{x }} \in \Omega \),

$$\begin{aligned} c( {\mathbf{x }} )&=-q\log \det {\mathbf{x }} +\left\langle {\mathbf{g }} , {\mathbf{x }} \right\rangle +\left\langle {\mathbf{f }} , {\mathbf{x }} ^{-1}\right\rangle +\gamma _3,\\ d( {\mathbf{x }} )&=q\log \det {\mathbf{x }} +\left\langle {\mathbf{g }} , {\mathbf{x }} \right\rangle +\gamma _1+\gamma _2, \end{aligned}$$

that is,

$$\begin{aligned} f_X( {\mathbf{x }} )&= \mathrm{e}^{\gamma _3} (\det {\mathbf{x }} )^{-q-2\dim \Omega /r} \mathrm{e}^{\left\langle {\mathbf{g }} , {\mathbf{x }} \right\rangle +\left\langle {\mathbf{f }} , {\mathbf{x }} ^{-1}\right\rangle },\\ f_Y( {\mathbf{x }} )&= \mathrm{e}^{\gamma _1+\gamma _2}(\det {\mathbf{x }} )^q \mathrm{e}^{\left\langle {\mathbf{g }} , {\mathbf{x }} \right\rangle }. \end{aligned}$$

Since \(f_X\) and \(f_Y\) are probability densities, we must have \( {\mathbf{a }} =- {\mathbf{g }} \in \Omega \), \( {\mathbf{b }} =- {\mathbf{f }} \in \Omega \) and \(q=p-\dim \Omega /r>-1\). Thus, \(X\sim \mu _{-p, {\mathbf{a }} , {\mathbf{b }} }\) and \(Y\sim \gamma _{p, {\mathbf{a }} }\). \(\square \)
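As a cross-check, in the scalar case \(r=1\) (so \(\dim \Omega =1\) and \(2\dim \Omega /r=2\)) the density identity underlying the proof can be verified numerically with the explicit normalizing constants, the GIG constant involving the modified Bessel function \(K_p\). A sketch, assuming SciPy is available (the parameter and evaluation-point values are arbitrary):

```python
import numpy as np
from scipy.special import kv, gammaln

def log_gig(x, p, a, b):
    """log density of mu_{-p,a,b} on (0, inf): c * x^{-p-1} * exp(-a x - b / x),
    with c = (b/a)^{p/2} / (2 K_p(2 sqrt(a b)))."""
    log_c = (p / 2) * np.log(b / a) - np.log(2 * kv(p, 2 * np.sqrt(a * b)))
    return log_c + (-p - 1) * np.log(x) - a * x - b / x

def log_gamma(y, p, a):
    """log density of gamma_{p,a}: a^p / Gamma(p) * y^{p-1} * exp(-a y)."""
    return p * np.log(a) - gammaln(p) + (p - 1) * np.log(y) - a * y

p, a, b = 2.5, 1.3, 0.7
u, v = 0.4, 1.1
x, y = 1 / (u + v), 1 / u - 1 / (u + v)

# Density identity from the proof, for r = 1:
# f_U(u) f_V(v) = (u (u + v))^{-2} f_X(x) f_Y(y),
# where X ~ mu_{-p,a,b}, Y ~ gamma_{p,a}, U ~ mu_{-p,b,a}, V ~ gamma_{p,b}.
lhs = log_gig(u, p, b, a) + log_gamma(v, p, b)
rhs = -2 * np.log(u * (u + v)) + log_gig(x, p, a, b) + log_gamma(y, p, a)
assert np.isclose(lhs, rhs)
```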

6 Comments

Recall that \(S_r(\mathbb {K})\) denotes the space of \(r\times r\) Hermitian matrices with entries in \(\mathbb {K}\). Let \(\Omega _r(\mathbb {K})\) be the symmetric cone of the Jordan algebra \(\mathbb {E}=S_r(\mathbb {K})\), where \(\mathbb {K}\) denotes either the real numbers \(\mathbb {R}\), the complex numbers \(\mathbb {C}\) or the quaternions \(\mathbb {H}\). We exclude here the non-associative case \(\mathbb {K}=\mathbb {O}\) of the octonions.

Let z be a fixed \(s\times r\) matrix of full rank with entries in \(\mathbb {K}\), and define the linear mapping \(\mathbb {P}_{sr}:S_r(\mathbb {K})\rightarrow S_s(\mathbb {K})\) by

$$\begin{aligned} \mathbb {P}_{sr}(z) {\mathbf{x }} =z\cdot {\mathbf{x }} \cdot z^*. \end{aligned}$$

If \(r=s\), then \(\mathbb {P}_{sr}\) is the ordinary quadratic representation of \(\Omega _s\). In the rest of the paper, we drop the subscript and simply write \(\mathbb {P}\), abusing the notation of the previous sections.

Now, consider the transformation \(\psi _z:\Omega _r(\mathbb {K})\times \Omega _s(\mathbb {K})\rightarrow \Omega _s(\mathbb {K})\times \Omega _r(\mathbb {K})\) defined by

$$\begin{aligned} \psi _z( {\mathbf{x }} , {\mathbf{y }} )=\left( (\mathbb {P}(z) {\mathbf{x }} + {\mathbf{y }} )^{-1}, {\mathbf{x }} ^{-1}-\mathbb {P}(z^*)(\mathbb {P}(z) {\mathbf{x }} + {\mathbf{y }} )^{-1}\right) . \end{aligned}$$
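A small numerical sketch in the real case \(\mathbb {K}=\mathbb {R}\) (matrix sizes arbitrary), illustrating that \(\psi _z\) indeed maps \(\Omega _r\times \Omega _s\) into \(\Omega _s\times \Omega _r\) and reduces to the map \(\Psi \) of Sect. 5 when \(s=r\) and z is the identity:

```python
import numpy as np

rng = np.random.default_rng(3)
inv = np.linalg.inv

def random_spd(n):
    """Random symmetric positive definite n x n matrix."""
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

def psi_z(z, x, y):
    """psi_z(x, y) for real matrices, with P(z)x = z x z^T."""
    w = inv(z @ x @ z.T + y)
    return w, inv(x) - z.T @ w @ z

r, s = 4, 2
x, y = random_spd(r), random_spd(s)
z = rng.standard_normal((s, r))      # full rank with probability one

u, v = psi_z(z, x, y)
# psi_z maps Omega_r x Omega_s into Omega_s x Omega_r.
assert u.shape == (s, s) and v.shape == (r, r)
assert np.all(np.linalg.eigvalsh(u) > 0) and np.all(np.linalg.eigvalsh(v) > 0)

# For s = r and z = I, psi_z reduces to Psi from Section 5.
x0, y0 = random_spd(r), random_spd(r)
u0, v0 = psi_z(np.eye(r), x0, y0)
assert np.allclose(u0, inv(x0 + y0)) and np.allclose(v0, inv(x0) - inv(x0 + y0))
```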

It is natural to ask whether an analogue of Theorem 5.1 holds if we consider independent random variables X and Y valued in \(\Omega _r(\mathbb {K})\) and \(\Omega _s(\mathbb {K})\), respectively, and define \((U,V)=\psi _z(X,Y)\). The answer is affirmative; it was given in [14, Theorem 4.1]. Following the same steps as in the proof of Theorem 5.1, the problem of characterizing the probability measures reduces to solving the following functional equation:

$$\begin{aligned} a( {\mathbf{u }} )+b( {\mathbf{v }} )&=c\left( (\mathbb {P}(z^*) {\mathbf{u }} + {\mathbf{v }} )^{-1}\right) +d\left( {\mathbf{u }} ^{-1}-\mathbb {P}(z)(\mathbb {P}(z^*) {\mathbf{u }} + {\mathbf{v }} )^{-1}\right) ,\nonumber \\&\quad ( {\mathbf{u }} , {\mathbf{v }} )\in \Omega _s(\mathbb {K})\times \Omega _r(\mathbb {K}), \end{aligned}$$
(16)

where \(a,d:\Omega _s(\mathbb {K})\rightarrow \mathbb {R}\) and \(b,c:\Omega _r(\mathbb {K})\rightarrow \mathbb {R}\) are unknown functions. This functional equation was solved by Massam and Wesołowski [14] for \(\mathbb {K}=\mathbb {R}\) under the assumption that the unknown functions are differentiable. Through Theorem 4.5, this assumption can be weakened to continuity. Therefore, we obtain the following refinement of [14, Theorem 4.1]:

Theorem 6.1

Let X and Y be independent random variables with values in \(\Omega _r(\mathbb {K})\) and \(\Omega _s(\mathbb {K})\), respectively. Assume that X and Y have continuous, strictly positive densities. Define \((U,V)=\psi _z(X,Y)\).

If U and V are independent, then there exist matrices \(( {\mathbf{a }} , {\mathbf{b }} )\in \Omega _s(\mathbb {K})\times \Omega _r(\mathbb {K})\) and a constant \(p>\dim \Omega _{r}(\mathbb {K})/r-1\) such that

$$\begin{aligned} (X,Y)\sim \mu ^{(r)}_{-p,\mathbb {P}(z^*) {\mathbf{a }} , {\mathbf{b }} }\otimes \gamma ^{(s)}_{q, {\mathbf{a }} }, \end{aligned}$$

where \(q=p+(\dim \Omega _{s}(\mathbb {K})/s-\dim \Omega _{r}(\mathbb {K})/r)\).

The superscripts \(^{(s)}\) and \(^{(r)}\) are used to emphasize the ranks of the cones on which the distributions are considered.

The solution to (16) was also used in the proof of the characterization of the Wishart distribution through its block conditional independence structure (see [14, Theorem 5.1]). One of the technical assumptions there was that the random matrix in question has a differentiable density. This was assumed only in order to solve a functional equation whose solution was not known under weaker assumptions. Therefore, this assumption may be relaxed to the existence of a continuous density.

An analogous assumption was imposed on the densities in the recent paper of Bobecka [2], where the multivariate MY property on trees is considered; see [2, Theorem 4.3]. Thanks to the solution of (16) under weaker assumptions, this theorem remains true if only continuity of the densities is assumed.