1 Introduction

The Lukacs [17] theorem is one of the most celebrated characterizations of probability distributions. It states that if \(X\) and \(Y\) are independent, positive, non-degenerate random variables such that their sum and quotient are also independent, then \(X\) and \(Y\) have gamma distributions with the same scale parameter.

This theorem has many generalizations. The most important ones in the multivariate setting were given by Olkin and Rubin [21] and Casalis and Letac [7], where the authors extended the characterization to matrix-variate and symmetric cone-variate distributions, respectively. There is no unique way of defining the quotient of elements of the cone of positive definite symmetric matrices \( \Omega _+\), and in these papers, the authors considered the very general form \(U=g(X+Y)\cdot X\cdot g^T(X+Y)\), where \(g\) is a so-called division algorithm, that is, \(g( \mathbf{a} )\cdot \mathbf{a} \cdot g^T( \mathbf{a} )=I\) for any \( \mathbf{a} \in \Omega _+\), where \(I\) is the identity matrix and \(g( \mathbf{a} )\) is invertible for any \( \mathbf{a} \in \Omega _+\) (later on, abusing notation, we will write \(g( \mathbf{x} ) \mathbf{y} =g( \mathbf{x} )\cdot \mathbf{y} \cdot g^{T}( \mathbf{x} )\), that is, in this case, \(g( \mathbf{x} )\) denotes the linear operator acting on \( \Omega _+\)). The drawback of their extension was the additional strong assumption of invariance of the distribution of \(U\) under a group of automorphisms. This result was generalized to homogeneous cones in Boutouria et al. [6].
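To fix ideas, the following minimal numerical sketch (an illustration added here, assuming NumPy; it is not taken from the cited papers) checks the defining property of a division algorithm on \( \Omega _+\) for the particular choice \(g( \mathbf{a} )= \mathbf{a} ^{-1/2}\), which reappears below as \(g_1\).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(r):
    """A random element of the cone Omega_+ of positive definite symmetric matrices."""
    m = rng.standard_normal((r, r))
    return m @ m.T + r * np.eye(r)

def spd_power(a, p):
    """a^p for symmetric positive definite a, via the spectral decomposition."""
    w, v = np.linalg.eigh(a)
    return v @ np.diag(w ** p) @ v.T

r = 4
a, x = random_spd(r), random_spd(r)

g_a = spd_power(a, -0.5)                        # g(a) = a^{-1/2}
print(np.allclose(g_a @ a @ g_a.T, np.eye(r)))  # defining property: g(a).a.g(a)^T = I
u = g_a @ x @ g_a.T                             # the action g(a)x = g(a).x.g(a)^T
print(np.all(np.linalg.eigvalsh(u) > 0))        # the quotient stays in Omega_+
```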

There have been successful attempts at replacing the invariance of the “quotient” assumption with the existence of regular densities of the random variables \(X\) and \(Y\). Bobecka and Wesołowski [2], assuming the existence of strictly positive, twice differentiable densities, proved a characterization of the Wishart distribution on the cone \( \Omega _+\) for the division algorithm \(g_1( \mathbf{a} )= \mathbf{a} ^{-1/2}\), where \( \mathbf{a} ^{1/2}\) denotes the unique positive definite symmetric root of \( \mathbf{a} \in \Omega _+\). These results were generalized to all non-octonion symmetric cones of rank \(>2\) and to the Lorentz cone for strictly positive and continuous densities by Kołodziejek [12, 13].

Exploiting the same approach, with the same technical assumptions on densities as in Bobecka and Wesołowski [2], it was proven by Hassairi et al. [11] that the independence of \(X+Y\) and the quotient defined through the Cholesky decomposition, i.e., \(g_2( \mathbf{a} )=T_ \mathbf{a} ^{-1}\), where \(T_ \mathbf{a} \) is the lower triangular matrix such that \( \mathbf{a} =T_ \mathbf{a} \cdot T_ \mathbf{a} ^T\in \Omega _+\), characterizes a wider family of distributions called Riesz (sometimes called Riesz–Wishart). This fact shows that the invariance property assumed in Olkin and Rubin [21] and Casalis and Letac [7] is not merely of a technical nature. Analogous results for homogeneous cones were obtained by Boutouria [4, 5].

In this paper, we deal with the density version of the Lukacs–Olkin–Rubin theorem on symmetric cones for division algorithms satisfying some natural properties. We assume that the densities of \(X\) and \(Y\) are strictly positive and continuous. We consider the quotient \(U\) for an arbitrary, fixed division algorithm \(g\) as in the original paper of Olkin and Rubin [21], additionally satisfying some natural conditions. In the known cases (\(g=g_1\) and \(g=g_2\)), this improves the results obtained in Bobecka and Wesołowski, Hassairi et al. and Kołodziejek [2, 11, 13]. In the general case, the densities of \(X\) and \(Y\) are given in terms of so-called \(w\)-multiplicative Cauchy functions, that is, functions satisfying

$$\begin{aligned} f( \mathbf{x} )f\left( w(I) \mathbf{y} \right) =f \left( w( \mathbf{x} ) \mathbf{y} \right) ,\quad ( \mathbf{x} , \mathbf{y} )\in \Omega _+^2, \end{aligned}$$

where \(w( \mathbf{x} ) \mathbf{y} =w( \mathbf{x} )\cdot \mathbf{y} \cdot w^T( \mathbf{x} )\) (i.e., \(g( \mathbf{x} )=w( \mathbf{x} )^{-1}\) is a division algorithm). Consistently, we will call \(w\) a multiplication algorithm. Such functions were recently considered in Kołodziejek [14].

Unfortunately, we cannot answer the question whether there exists a division (or, equivalently, multiplication) algorithm resulting in the characterization of a distribution other than Riesz or Wishart. Moreover, the simultaneous removal of the assumptions of the invariance of the “quotient” and the existence of densities remains a challenge.

This paper is organized as follows. We start in the next section with basic definitions and theorems regarding analysis on symmetric cones. The statement and proof of the main result are given in Sect. 4. Section 3 is devoted to the consideration of \(w\)-logarithmic Cauchy functions and the Olkin–Baker functional equation. In that section, we offer a proof of the Olkin–Baker functional equation which is much shorter and simpler, and covers more general cones, than the ones given in Bobecka and Wesołowski, Hassairi et al. and Kołodziejek [2, 11, 13].

2 Preliminaries

In this section, we give a short introduction to the theory of symmetric cones. For further details, we refer to Faraut and Korányi [8].

A Euclidean Jordan algebra is a Euclidean space \({\mathbb {E}}\) (endowed with scalar product denoted \(\left\langle \mathbf{x} , \mathbf{y} \right\rangle \)) equipped with a bilinear mapping (product)

$$\begin{aligned} {\mathbb {E}}\times {\mathbb {E}}\ni \left( \mathbf{x} , \mathbf{y} \right) \mapsto \mathbf{x} \mathbf{y} \in {\mathbb {E}}\end{aligned}$$

and a neutral element \( \mathbf{e} \) in \({\mathbb {E}}\) such that for all \( \mathbf{x} \), \( \mathbf{y} \), \( \mathbf{z} \) in \({\mathbb {E}}\):

  1. (i)

    \( \mathbf{x} \mathbf{y} = \mathbf{y} \mathbf{x} \),

  2. (ii)

    \( \mathbf{x} ( \mathbf{x} ^2 \mathbf{y} )= \mathbf{x} ^2( \mathbf{x} \mathbf{y} )\),

  3. (iii)

    \( \mathbf{x} \mathbf{e} = \mathbf{x} \),

  4. (iv)

    \(\left\langle \mathbf{x} , \mathbf{y} \mathbf{z} \right\rangle =\left\langle \mathbf{x} \mathbf{y} , \mathbf{z} \right\rangle \).

For \( \mathbf{x} \in {\mathbb {E}}\), let \({\mathbb {L}}( \mathbf{x} ):{\mathbb {E}}\rightarrow {\mathbb {E}}\) be the linear map defined by

$$\begin{aligned} {\mathbb {L}}( \mathbf{x} ) \mathbf{y} = \mathbf{x} \mathbf{y} , \end{aligned}$$

and define

$$\begin{aligned} {\mathbb {P}}( \mathbf{x} )=2{\mathbb {L}}^2( \mathbf{x} )-{\mathbb {L}}\left( \mathbf{x} ^2\right) . \end{aligned}$$

The map \({\mathbb {P}}:{\mathbb {E}}\rightarrow \mathrm {End}({\mathbb {E}})\) is called the quadratic representation of \({\mathbb {E}}\).

An element \( \mathbf{x} \) is said to be invertible if there exists an element \( \mathbf{y} \) in \({\mathbb {E}}\) such that \({\mathbb {L}}( \mathbf{x} ) \mathbf{y} = \mathbf{e} \). Then, \( \mathbf{y} \) is called the inverse of \( \mathbf{x} \) and is denoted by \( \mathbf{y} = \mathbf{x} ^{-1}\). Note that the inverse of \( \mathbf{x} \) is unique. It can be shown that \( \mathbf{x} \) is invertible if and only if \({\mathbb {P}}( \mathbf{x} )\) is invertible, and in this case, \(\left( {\mathbb {P}}( \mathbf{x} )\right) ^{-1} ={\mathbb {P}}\left( \mathbf{x} ^{-1}\right) \).

A Euclidean Jordan algebra \({\mathbb {E}}\) is said to be simple if it is not a Cartesian product of two Euclidean Jordan algebras of positive dimensions. Up to linear isomorphism, there are only five kinds of Euclidean simple Jordan algebras. Let \({\mathbb {K}}\) denote either the real numbers \({\mathbb {R}}\), the complex numbers \({\mathbb {C}}\), the quaternions \({\mathbb {H}}\) or the octonions \({\mathbb {O}}\), and write \(S_r({\mathbb {K}})\) for the space of \(r\times r\) Hermitian matrices with entries valued in \({\mathbb {K}}\), endowed with the Euclidean structure \(\left\langle \mathbf{x} , \mathbf{y} \right\rangle ={\mathrm {Trace}}\,( \mathbf{x} \cdot \bar{ \mathbf{y} })\) and with the Jordan product

$$\begin{aligned} \mathbf{x} \mathbf{y} =\tfrac{1}{2}( \mathbf{x} \cdot \mathbf{y} + \mathbf{y} \cdot \mathbf{x} ), \end{aligned}$$
(1)

where \( \mathbf{x} \cdot \mathbf{y} \) denotes the ordinary product of matrices and \(\bar{ \mathbf{y} }\) is the conjugate of \( \mathbf{y} \). Then, \(S_r({\mathbb {R}})\), \(r\ge 1\), \(S_r({\mathbb {C}})\), \(r\ge 2\), \(S_r({\mathbb {H}})\), \(r\ge 2\), and the exceptional \(S_3({\mathbb {O}})\) are the first four kinds of Euclidean simple Jordan algebras. Note that in this case

$$\begin{aligned} {\mathbb {P}}( \mathbf{y} ) \mathbf{x} = \mathbf{y} \cdot \mathbf{x} \cdot \mathbf{y} . \end{aligned}$$
(2)
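As a quick sanity check of (1) and (2), here is a small sketch (added for illustration, assuming NumPy) verifying on \(S_5({\mathbb {R}})\) that \({\mathbb {P}}( \mathbf{x} )=2{\mathbb {L}}^2( \mathbf{x} )-{\mathbb {L}}( \mathbf{x} ^2)\) applied to \( \mathbf{y} \) gives the ordinary matrix product \( \mathbf{x} \cdot \mathbf{y} \cdot \mathbf{x} \).

```python
import numpy as np

rng = np.random.default_rng(1)

def sym(r):
    m = rng.standard_normal((r, r))
    return (m + m.T) / 2

def L(x, y):
    """Jordan multiplication operator L(x) applied to y, using the product (1)."""
    return (x @ y + y @ x) / 2

def P(x, y):
    """Quadratic representation P(x) = 2 L(x)^2 - L(x^2) applied to y."""
    return 2 * L(x, L(x, y)) - L(x @ x, y)

x, y = sym(5), sym(5)
print(np.allclose(P(x, y), x @ y @ x))   # Eq. (2): True
```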

The fifth kind is the Euclidean space \({\mathbb {R}}^{n+1}\), \(n\ge 2\), with Jordan product

$$\begin{aligned} \left( x_0,x_1,\ldots , x_n\right) \left( y_0,y_1,\ldots ,y_n\right) =\left( \sum _{i=0}^n x_i y_i,x_0y_1+y_0x_1,\ldots ,x_0y_n+y_0x_n\right) . \quad \end{aligned}$$
(3)

To each Euclidean simple Jordan algebra, one can attach the set of Jordan squares

$$\begin{aligned} \bar{ \Omega }=\left\{ \mathbf{x} ^2: \mathbf{x} \in {\mathbb {E}}\right\} . \end{aligned}$$

The interior \( \Omega \) of \(\bar{ \Omega }\) is a symmetric cone. Moreover, \( \Omega \) is irreducible, i.e., it is not the Cartesian product of two convex cones. One can prove that an open convex cone is symmetric and irreducible if and only if it is the cone \( \Omega \) of some Euclidean simple Jordan algebra. Each simple Jordan algebra corresponds to a symmetric cone; hence, up to linear isomorphism, there are also only five kinds of symmetric cones. The cone corresponding to the Euclidean Jordan algebra \({\mathbb {R}}^{n+1}\) equipped with Jordan product (3) is called the Lorentz cone.
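For the fifth kind, a short sketch (an added illustration, assuming NumPy) checks that \( \mathbf{e} =(1,0,\ldots ,0)\) is the neutral element of the product (3) and that every square \( \mathbf{x} ^2\) lands in the closed Lorentz cone \(\{ \mathbf{y} :y_0\ge \Vert (y_1,\ldots ,y_n)\Vert \}\).

```python
import numpy as np

rng = np.random.default_rng(2)

def jordan(x, y):
    """Jordan product (3) on R^{n+1}."""
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

n = 4
e = np.zeros(n + 1); e[0] = 1.0
x = rng.standard_normal(n + 1)

print(np.allclose(jordan(x, e), x))              # e is the neutral element
sq = jordan(x, x)
print(sq[0] + 1e-12 >= np.linalg.norm(sq[1:]))   # x^2 lies in the closed Lorentz cone
```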

We denote by \(G({\mathbb {E}})\) the subgroup of the linear group \(GL({\mathbb {E}})\) of linear automorphisms which preserves \( \Omega \), and we denote by \(G\) the connected component of \(G({\mathbb {E}})\) containing the identity. Recall that if \({\mathbb {E}}=S_r({\mathbb {R}})\) and \(GL(r,{\mathbb {R}})\) is the group of invertible \(r\times r\) matrices, elements of \(G({\mathbb {E}})\) are the maps \(g:{\mathbb {E}}\rightarrow {\mathbb {E}}\) such that there exists \( \mathbf{a} \in GL(r,{\mathbb {R}})\) with

$$\begin{aligned} g( \mathbf{x} )= \mathbf{a} \cdot \mathbf{x} \cdot \mathbf{a} ^T. \end{aligned}$$

We define \(K=G\cap O({\mathbb {E}})\), where \(O({\mathbb {E}})\) is the orthogonal group of \({\mathbb {E}}\). It can be shown that

$$\begin{aligned} K=\{ k\in G:k \mathbf{e} = \mathbf{e} \}. \end{aligned}$$

A multiplication algorithm is a map \( \Omega \rightarrow G: \mathbf{x} \mapsto w( \mathbf{x} )\) such that \(w( \mathbf{x} ) \mathbf{e} = \mathbf{x} \) for all \( \mathbf{x} \in \Omega \). This concept is consistent with the so-called division algorithm \(g\), which was introduced by Olkin and Rubin [21] and Casalis and Letac [7], that is, a mapping \( \Omega \ni \mathbf{x} \mapsto g( \mathbf{x} )\in G\) such that \(g( \mathbf{x} ) \mathbf{x} = \mathbf{e} \) for any \( \mathbf{x} \in \Omega \). If \(w\) is a multiplication algorithm, then \(g=w^{-1}\) (that is, \(g( \mathbf{x} )w( \mathbf{x} )=w( \mathbf{x} )g( \mathbf{x} )=Id_ \Omega \) for any \( \mathbf{x} \in \Omega \)) is a division algorithm, and vice versa: if \(g\) is a division algorithm, then \(w=g^{-1}\) is a multiplication algorithm. One of the two important examples of multiplication algorithms is the map \(w_1( \mathbf{x} )={\mathbb {P}}\left( \mathbf{x} ^{1/2}\right) \).
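A small sketch (added for illustration, assuming NumPy) of the pair \(w_1\), \(g_1=w_1^{-1}\) on \( \Omega _+\), checking \(w_1( \mathbf{x} ) \mathbf{e} = \mathbf{x} \) and \(g_1( \mathbf{x} )w_1( \mathbf{x} )=Id_ \Omega \):

```python
import numpy as np

rng = np.random.default_rng(3)

def random_spd(r):
    m = rng.standard_normal((r, r))
    return m @ m.T + r * np.eye(r)

def spd_power(a, p):
    w, v = np.linalg.eigh(a)
    return v @ np.diag(w ** p) @ v.T

def w1(x, y):
    """Multiplication algorithm w1(x) = P(x^{1/2}) acting on y: x^{1/2} . y . x^{1/2}."""
    s = spd_power(x, 0.5)
    return s @ y @ s

def g1(x, y):
    """The corresponding division algorithm g1 = w1^{-1}: x^{-1/2} . y . x^{-1/2}."""
    s = spd_power(x, -0.5)
    return s @ y @ s

r = 4
x, y = random_spd(r), random_spd(r)

print(np.allclose(w1(x, np.eye(r)), x))     # w1(x) e = x
print(np.allclose(g1(x, w1(x, y)), y))      # g1(x) w1(x) = identity on the cone
```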

We will now introduce a very useful decomposition in \({\mathbb {E}}\), called the spectral decomposition. An element \( \mathbf{c} \in {\mathbb {E}}\) is said to be an idempotent if \( \mathbf{c} \mathbf{c} = \mathbf{c} \ne 0\). Idempotents \( \mathbf{a} \) and \( \mathbf{b} \) are orthogonal if \( \mathbf{a} \mathbf{b} =0\). An idempotent \( \mathbf{c} \) is primitive if \( \mathbf{c} \) is not a sum of two non-null idempotents. A complete system of primitive orthogonal idempotents is a set \(\left( \mathbf{c} _1,\ldots , \mathbf{c} _r\right) \) such that

$$\begin{aligned} \sum _{i=1}^r \mathbf{c} _i= \mathbf{e} \quad \text{ and }\quad \mathbf{c} _i \mathbf{c} _j=\delta _{ij} \mathbf{c} _i\quad \text{ for } 1\le i\le j\le r. \end{aligned}$$

The size \(r\) of such a system is a constant called the rank of \({\mathbb {E}}\). Any element \( \mathbf{x} \) of a Euclidean simple Jordan algebra can be written as \( \mathbf{x} =\sum _{i=1}^r\lambda _i \mathbf{c} _i\) for some complete system of primitive orthogonal idempotents \(\left( \mathbf{c} _1,\dots , \mathbf{c} _r\right) \). The real numbers \(\lambda _i\), \(i=1,\dots ,r\), are the eigenvalues of \( \mathbf{x} \). One can then define the trace and the determinant of \( \mathbf{x} \) by, respectively, \({\mathrm {tr}}\, \mathbf{x} =\sum _{i=1}^r\lambda _i\) and \(\det \mathbf{x} =\prod _{i=1}^r\lambda _i\). An element \( \mathbf{x} \in {\mathbb {E}}\) belongs to \( \Omega \) if and only if all its eigenvalues are strictly positive.
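In \(S_r({\mathbb {R}})\), the spectral decomposition is the usual eigendecomposition, the idempotents \( \mathbf{c} _i\) being the rank-one projectors onto the eigenvectors. A small sketch (added for illustration, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(4)
r = 5
m = rng.standard_normal((r, r))
x = (m + m.T) / 2                      # an element of E = S_r(R)

lam, v = np.linalg.eigh(x)             # spectral decomposition x = sum_i lambda_i c_i
c = [np.outer(v[:, i], v[:, i]) for i in range(r)]

jordan = lambda a, b: (a @ b + b @ a) / 2
print(np.allclose(sum(c), np.eye(r)))                       # sum_i c_i = e
print(np.allclose(jordan(c[0], c[1]), np.zeros((r, r))))    # orthogonal idempotents
print(np.allclose(x, sum(l * ci for l, ci in zip(lam, c))))
print(np.isclose(np.trace(x), lam.sum()))                   # tr x = sum of eigenvalues
print(np.isclose(np.linalg.det(x), lam.prod()))             # det x = product of eigenvalues
```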

The rank \(r\) and the dimension \(\dim \Omega \) of an irreducible symmetric cone are connected through the relation

$$\begin{aligned} \dim \Omega =r+\frac{d r(r-1)}{2}, \end{aligned}$$

where \(d\) is an integer called the Peirce constant.
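For instance, for \({\mathbb {E}}=S_r({\mathbb {R}})\) one has \(d=1\), and indeed \(\dim \Omega =r+\tfrac{r(r-1)}{2}=\tfrac{r(r+1)}{2}\), the number of free entries of a symmetric \(r\times r\) matrix; for \(S_r({\mathbb {C}})\) and \(S_r({\mathbb {H}})\) one has \(d=2\) and \(d=4\), respectively.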

If \( \mathbf{c} \) is a primitive idempotent of \({\mathbb {E}}\), the only possible eigenvalues of \({\mathbb {L}}( \mathbf{c} )\) are \(0\), \(\tfrac{1}{2}\) and \(1\). We denote by \({\mathbb {E}}( \mathbf{c} ,0)\), \({\mathbb {E}}( \mathbf{c} ,\tfrac{1}{2})\) and \({\mathbb {E}}( \mathbf{c} ,1)\) the corresponding eigenspaces. The decomposition

$$\begin{aligned} {\mathbb {E}}={\mathbb {E}}( \mathbf{c} ,0)\oplus {\mathbb {E}}( \mathbf{c} ,\tfrac{1}{2})\oplus {\mathbb {E}}( \mathbf{c} ,1) \end{aligned}$$

is called the Peirce decomposition of \({\mathbb {E}}\) with respect to \( \mathbf{c} \). Note that \({\mathbb {P}}( \mathbf{c} )\) is the orthogonal projection of \({\mathbb {E}}\) onto \({\mathbb {E}}( \mathbf{c} ,1)\).

Fix a complete system of orthogonal idempotents \(\left( \mathbf{c} _i\right) _{i=1}^r\). Then for any \(i,j\in \left\{ 1,2,\ldots ,r\right\} \), we write

$$\begin{aligned} {\mathbb {E}}_{ii}&= {\mathbb {E}}( \mathbf{c} _i,1)={\mathbb {R}} \mathbf{c} _i, \\ {\mathbb {E}}_{ij}&= {\mathbb {E}}\left( \mathbf{c} _i,\frac{1}{2}\right) \cap {\mathbb {E}}\left( \mathbf{c} _j,\frac{1}{2}\right) \text{ if } i\ne j. \end{aligned}$$

It can be proved (see Faraut and Korányi [8, Theorem IV.2.1]) that

$$\begin{aligned} {\mathbb {E}}=\bigoplus _{i\le j}{\mathbb {E}}_{ij} \end{aligned}$$

and

$$\begin{aligned} {\mathbb {E}}_{ij}\cdot {\mathbb {E}}_{ij}&\subset {\mathbb {E}}_{ii}+{\mathbb {E}}_{ij}, \\ {\mathbb {E}}_{ij}\cdot {\mathbb {E}}_{jk}&\subset {\mathbb {E}}_{ik}, \text{ if } i\ne k, \\ {\mathbb {E}}_{ij}\cdot {\mathbb {E}}_{kl}&= \{0\}, \text{ if } \{i,j\}\cap \{k,l\}=\emptyset . \end{aligned}$$

Moreover (Faraut and Korányi [8, Lemma IV.2.2]), if \( \mathbf{x} \in {\mathbb {E}}_{ij}\), \( \mathbf{y} \in {\mathbb {E}}_{jk}\), \(i\ne k\), then

$$\begin{aligned} \mathbf{x} ^2&= \tfrac{1}{2}|| \mathbf{x} ||^2( \mathbf{c} _i+ \mathbf{c} _j),\\ \nonumber || \mathbf{x} \mathbf{y} ||^2&= \tfrac{1}{8}|| \mathbf{x} ||^2|| \mathbf{y} ||^2. \end{aligned}$$
(4)

The dimension of \({\mathbb {E}}_{ij}\) is the Peirce constant \(d\) for any \(i\ne j\). When \({\mathbb {E}}\) is \(S_r({\mathbb {K}})\), if \((e_1,\ldots ,e_r)\) is an orthonormal basis of \({\mathbb {R}}^r\), then \({\mathbb {E}}_{ii}={\mathbb {R}}e_i e_i^T\) and \({\mathbb {E}}_{ij}={\mathbb {K}}\left( e_i e_j^T+e_je_i^T\right) \) for \(i<j\), and \(d\) is equal to \(\dim _{{\mathbb {R}}}{\mathbb {K}}\).

For \(1\le k \le r\), let \(P_k\) be the orthogonal projection onto \({\mathbb {E}}^{(k)}={\mathbb {E}}( \mathbf{c} _1+\ldots + \mathbf{c} _k,1)\), \(\det ^{(k)}\) the determinant in the subalgebra \({\mathbb {E}}^{(k)}\), and, for \( \mathbf{x} \in \Omega \), \(\Delta _k( \mathbf{x} )=\det ^{(k)}(P_k( \mathbf{x} ))\). Then, \(\Delta _k\) is called the principal minor of order \(k\) with respect to the Jordan frame \(( \mathbf{c} _i)_{i=1}^r\). Note that \(\Delta _r( \mathbf{x} )=\det \mathbf{x} \). For \(s=(s_1,\ldots ,s_r)\in {\mathbb {R}}^r\) and \( \mathbf{x} \in \Omega \), we write

$$\begin{aligned} \Delta _s( \mathbf{x} )=\Delta _1( \mathbf{x} )^{s_1-s_2} \Delta _2( \mathbf{x} )^{s_2-s_3}\ldots \Delta _r( \mathbf{x} )^{s_r}. \end{aligned}$$

\(\Delta _s\) is called a generalized power function. If \( \mathbf{x} =\sum _{i=1}^r\alpha _i \mathbf{c} _i\), then \(\Delta _s( \mathbf{x} )=\alpha _1^{s_1}\alpha _2^{s_2}\cdots \alpha _r^{s_r}\).
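On \( \Omega _+\) with the standard Jordan frame, \(\Delta _k( \mathbf{x} )\) is the \(k\)th leading principal minor, and the identity \(\Delta _s(\sum _i\alpha _i \mathbf{c} _i)=\prod _i\alpha _i^{s_i}\) can be checked numerically (a sketch added for illustration, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)

def delta_k(x, k):
    """Principal minor of order k w.r.t. the standard Jordan frame of S_r(R)."""
    return np.linalg.det(x[:k, :k])

def delta_s(x, s):
    """Generalized power function Delta_s = prod_k Delta_k^{s_k - s_{k+1}}, with s_{r+1} = 0."""
    r = len(s)
    exps = [s[k] - (s[k + 1] if k + 1 < r else 0.0) for k in range(r)]
    return np.prod([delta_k(x, k + 1) ** exps[k] for k in range(r)])

r = 4
s = rng.uniform(0.5, 2.0, size=r)
alpha = rng.uniform(0.5, 2.0, size=r)

print(np.isclose(delta_s(np.diag(alpha), s), np.prod(alpha ** s)))  # Delta_s on the frame

m = rng.standard_normal((r, r))
x = m @ m.T + r * np.eye(r)
print(np.isclose(delta_k(x, r), np.linalg.det(x)))                  # Delta_r(x) = det x
```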

We will now introduce some basic facts about the triangular group. For \( \mathbf{x} \) and \( \mathbf{y} \) in \( \Omega \), let \( \mathbf{x} \Box \mathbf{y} \) denote the endomorphism of \({\mathbb {E}}\) defined by

$$\begin{aligned} \mathbf{x} \Box \mathbf{y} ={\mathbb {L}}( \mathbf{x} \mathbf{y} )+{\mathbb {L}}( \mathbf{x} ){\mathbb {L}}( \mathbf{y} )-{\mathbb {L}}( \mathbf{y} ){\mathbb {L}}( \mathbf{x} ). \end{aligned}$$

If \( \mathbf{c} \) is an idempotent and \( \mathbf{z} \in {\mathbb {E}}( \mathbf{c} ,\frac{1}{2})\), we define the Frobenius transformation \(\tau _ \mathbf{c} ( \mathbf{z} )\) in \(G\) by

$$\begin{aligned} \tau _ \mathbf{c} ( \mathbf{z} )=\exp (2 \mathbf{z} \Box \mathbf{c} ). \end{aligned}$$

Since \(2 \mathbf{z} \Box \mathbf{c} \) is nilpotent of degree \(3\) (see Faraut and Korányi [8, Lemma VI.3.1]), we get

$$\begin{aligned} \tau _{ \mathbf{c} }( \mathbf{z} )=I+(2 \mathbf{z} \Box \mathbf{c} )+\frac{1}{2}(2 \mathbf{z} \Box \mathbf{c} )^2. \end{aligned}$$
(5)

Given a Jordan frame \(( \mathbf{c} _i)_{i=1}^r\), the subgroup of \(G\),

$$\begin{aligned} {\mathcal {T}}=\left\{ \tau _{ \mathbf{c} _1}( \mathbf{z} ^{(1)})\ldots \tau _{ \mathbf{c} _{r-1}}( \mathbf{z} ^{(r-1)}){\mathbb {P}}\left( \sum _{i=1}^r \alpha _i \mathbf{c} _i\right) :\alpha _i>0, \mathbf{z} ^{(j)}\in \bigoplus _{k=j+1}^r{\mathbb {E}}_{jk}\right\} \end{aligned}$$

is called the triangular group corresponding to the Jordan frame \(( \mathbf{c} _i)_{i=1}^r\). For any \( \mathbf{x} \) in \( \Omega \), there exists a unique \(t_{ \mathbf{x} }\) in \({\mathcal {T}}\) such that \( \mathbf{x} =t_{ \mathbf{x} } \mathbf{e} \), that is, there exist (see Faraut and Korányi [8, Theorem IV.3.5]) elements \( \mathbf{z} ^{(j)}\in \bigoplus _{k=j+1}^r {\mathbb {E}}_{jk}\), \(1\le j\le r-1\) and positive numbers \(\alpha _1, \ldots ,\alpha _r\) such that

$$\begin{aligned} \mathbf{x} =\tau _{ \mathbf{c} _1}( \mathbf{z} ^{(1)})\tau _{ \mathbf{c} _2}( \mathbf{z} ^{(2)})\ldots \tau _{ \mathbf{c} _{r-1}}( \mathbf{z} ^{(r-1)})\left( \sum _{k=1}^r \alpha _k \mathbf{c} _k \right) . \end{aligned}$$
(6)

The mapping \(w_2: \Omega \rightarrow {\mathcal {T}}\), \( \mathbf{x} \mapsto w_2( \mathbf{x} )=t_{ \mathbf{x} }\), realizes a multiplication algorithm.

For \({\mathbb {E}}=S_r({\mathbb {R}})\), we have \( \Omega = \Omega _+\). Let us define for \(1\le i,j\le r\) the matrix \(\mu _{ij}=\left( \gamma _{kl}\right) _{1\le k,l\le r}\) such that \(\gamma _{ij}=1\) and all other entries are equal to \(0\). Then for the Jordan frame \(\left( \mathbf{c} _i\right) _{i=1}^r\), where \( \mathbf{c} _k=\mu _{kk}\), \(k=1,\ldots ,r\), we have \( \mathbf{z} _{jk}=(\mu _{jk}+\mu _{kj})\in {\mathbb {E}}_{jk}\) and \(|| \mathbf{z} _{jk}||^2=2\), \(1\le j,k\le r\), \(j\ne k\). If \( \mathbf{z} ^{(i)}\in \bigoplus _{j=i+1}^r {\mathbb {E}}_{ij}\), \(i=1,\ldots ,r-1\), then there exists \(\alpha ^{(i)}=(\alpha _{i+1},\ldots ,\alpha _r)\in {\mathbb {R}}^{r-i}\) such that \( \mathbf{z} ^{(i)}=\sum _{j=i+1}^r \alpha _j \mathbf{z} _{ij}\). Then, the Frobenius transformation reads

$$\begin{aligned} \tau _{ \mathbf{c} _i}( \mathbf{z} ^{(i)}) \mathbf{x} ={\mathcal {F}}_i(\alpha ^{(i)})\cdot \mathbf{x} \cdot {\mathcal {F}}_i(\alpha ^{(i)})^T, \end{aligned}$$

where \({\mathcal {F}}_i(\alpha ^{(i)})\) is the so-called Frobenius matrix:

$$\begin{aligned} {\mathcal {F}}_i(\alpha ^{(i)})=I+\sum _{j=i+1}^r \alpha _j \mu _{ji}, \end{aligned}$$

i.e., below the \(i\)th diagonal entry of the identity matrix the vector \(\alpha ^{(i)}\) is placed; in particular,

$$\begin{aligned} {\mathcal {F}}_2(\alpha ^{(2)})=\begin{pmatrix} 1 &{} 0 &{} 0 &{} \cdots &{} 0 \\ 0 &{} 1 &{} 0 &{} \cdots &{} 0 \\ 0 &{} \alpha _{3} &{} 1 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} \alpha _{r} &{} 0 &{} \cdots &{} 1 \end{pmatrix}. \end{aligned}$$
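In the same setting, the triangular multiplication algorithm \(w_2\) is realized by the Cholesky factor, and the Frobenius matrices are unit lower triangular. A small sketch (added for illustration, assuming NumPy; indices are 0-based in the code) follows.

```python
import numpy as np

rng = np.random.default_rng(6)

def random_spd(r):
    m = rng.standard_normal((r, r))
    return m @ m.T + r * np.eye(r)

def w2(v, z):
    """w2(v) z = T_v . z . T_v^T, with T_v the lower-triangular Cholesky factor of v."""
    t = np.linalg.cholesky(v)
    return t @ z @ t.T

def g2(v, z):
    """The division algorithm g2 = w2^{-1}: T_v^{-1} . z . T_v^{-T}."""
    t = np.linalg.cholesky(v)
    return np.linalg.solve(t, np.linalg.solve(t, z.T).T)

r = 4
x = random_spd(r)
print(np.allclose(w2(x, np.eye(r)), x))    # w2(x) e = x
print(np.allclose(g2(x, x), np.eye(r)))    # g2(x) x = e

# the Frobenius matrix F_2(alpha^{(2)}) from the display above (0-based column index 1)
alpha = rng.standard_normal(r - 2)
F2 = np.eye(r)
F2[2:, 1] = alpha
print(np.allclose(F2, np.tril(F2)))        # unit lower triangular
```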

It can be shown (Faraut and Korányi [8, Proposition VI.3.10]) that for each \(t\in {\mathcal {T}}\), \( \mathbf{x} \in \Omega \) and \(s\in {\mathbb {R}}^r\),

$$\begin{aligned} \Delta _s(t \mathbf{x} )=\Delta _s(t \mathbf{e} )\Delta _s( \mathbf{x} ) \end{aligned}$$
(7)

and for any \( \mathbf{z} \in {\mathbb {E}}( \mathbf{c} _i,\frac{1}{2})\), \(i=1,\ldots ,r\),

$$\begin{aligned} \Delta _s(\tau _{ \mathbf{c} _i}( \mathbf{z} ) \mathbf{e} )=1, \end{aligned}$$
(8)

provided that \(\Delta _s\) and \({\mathcal {T}}\) are associated with the same Jordan frame \(\left( \mathbf{c} _i\right) _{i=1}^r\).

We will now introduce some necessary basics regarding certain probability distributions on symmetric cones. The absolutely continuous Riesz distribution \(R_{s, \mathbf{a} }\) on \( \Omega \) is defined for any \( \mathbf{a} \in \Omega \) and \(s=(s_1,\ldots ,s_r)\in {\mathbb {R}}^r\) such that \(s_i>(i-1)d/2\), \(i=1,\ldots ,r\), through its density

$$\begin{aligned} R_{s, \mathbf{a} }(\text{ d } \mathbf{x} )=\frac{\Delta _s( \mathbf{a} )}{\Gamma _ \Omega (s)} \Delta _{s-\dim \Omega /r}( \mathbf{x} )e^{-\left\langle \mathbf{a} , \mathbf{x} \right\rangle }I_ \Omega ( \mathbf{x} )\,\text{ d } \mathbf{x} ,\quad \mathbf{x} \in \Omega , \end{aligned}$$

where \(\Delta _s\) is the generalized power function with respect to a Jordan frame \(( \mathbf{c} _i)_{i=1}^r\) and \(\Gamma _ \Omega \) is the Gamma function of the symmetric cone \( \Omega \). It can be shown that \(\Gamma _ \Omega (s)=(2\pi )^{(\dim \Omega -r)/2}\prod _{j=1}^r \Gamma (s_j-(j-1)\tfrac{d}{2})\) (see Faraut and Korányi [8, VII.1.1]). The Riesz distribution was introduced in Hassairi and Lajmi [10].

The absolutely continuous Wishart distribution \(\gamma _{p, \mathbf{a} }\) on \( \Omega \) is a special case of the Riesz distribution for \(s_1=\ldots =s_r=p\). If \( \mathbf{a} \in \Omega \) and \(p>\dim \Omega /r-1\), it has the density

$$\begin{aligned} \gamma _{p, \mathbf{a} }(\text{ d } \mathbf{x} )=\frac{(\det \mathbf{a} )^p}{\Gamma _ \Omega (p)} (\det \mathbf{x} )^{p-\dim \Omega /r}e^{-\left\langle \mathbf{a} , \mathbf{x} \right\rangle }I_ \Omega ( \mathbf{x} )\,\text{ d } \mathbf{x} ,\quad \mathbf{x} \in \Omega , \end{aligned}$$

where \(\Gamma _ \Omega (p):=\Gamma _ \Omega (p,\ldots ,p)\). The Wishart distribution is a generalization of the gamma distribution (the case \(r=1\)).
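As a concrete illustration on \( \Omega _+=S_r({\mathbb {R}})_+\) (where \(d=1\), \(\dim \Omega /r=(r+1)/2\) and \(\left\langle \mathbf{a} , \mathbf{x} \right\rangle ={\mathrm {Trace}}( \mathbf{a} \cdot \mathbf{x} )\)), the following sketch (added here, assuming NumPy/SciPy; the density is taken with respect to the Lebesgue measure induced by this scalar product) evaluates \(\Gamma _ \Omega (s)\) and the Riesz density, and checks that the Wishart case \(s_1=\ldots =s_r=p\) reduces \(\Delta _s\) to \((\det \mathbf{x} )^p\).

```python
import numpy as np
from scipy.special import gammaln

def delta_s(x, s):
    """Generalized power function via leading principal minors (standard Jordan frame)."""
    r = len(s)
    minors = [np.linalg.det(x[:k + 1, :k + 1]) for k in range(r)]
    exps = [s[k] - (s[k + 1] if k + 1 < r else 0.0) for k in range(r)]
    return float(np.prod([m ** e for m, e in zip(minors, exps)]))

def log_gamma_omega(s, d=1.0):
    """log Gamma_Omega(s) = ((dim Omega - r)/2) log(2 pi) + sum_j log Gamma(s_j - (j-1) d / 2)."""
    r = len(s)
    dim = r + d * r * (r - 1) / 2
    return (dim - r) / 2 * np.log(2 * np.pi) + sum(gammaln(s[j] - j * d / 2) for j in range(r))

def riesz_density(x, s, a):
    """Density of R_{s,a} at x, for the cone of positive definite matrices (d = 1)."""
    r = x.shape[0]
    log_dens = (np.log(delta_s(a, s)) - log_gamma_omega(s)
                + np.log(delta_s(x, s - (r + 1) / 2.0)) - np.trace(a @ x))
    return np.exp(log_dens)

r = 3
s = np.array([2.0, 2.5, 3.0])          # s_j > (j-1)/2, as required
a = np.eye(r)
x = np.diag([1.0, 2.0, 0.5])
print(riesz_density(x, s, a))

p = 2.5                                # the Wishart case
print(np.isclose(delta_s(x, np.full(r, p)), np.linalg.det(x) ** p))
```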

In general, Riesz and Wishart distributions do not always have densities, but due to the assumption of the existence of densities in Theorem 4.2, we are not interested in the other cases.

3 Functional Equations

3.1 Logarithmic Cauchy Functions

As will be seen, the densities of respective random variables will be given in terms of \(w\)-logarithmic Cauchy functions, i.e., functions \(f: \Omega \rightarrow {\mathbb {R}}\) that satisfy the following functional equation

$$\begin{aligned} f( \mathbf{x} )+f(w( \mathbf{e} ) \mathbf{y} )=f(w( \mathbf{x} ) \mathbf{y} ),\quad ( \mathbf{x} , \mathbf{y} )\in \Omega ^2, \end{aligned}$$
(9)

where \(w\) is a multiplication algorithm. If \(f\) is \(w\)-logarithmic, then \(e^f\) is called \(w\)-multiplicative. In the following section, we will give the form of \(w\)-logarithmic Cauchy functions for two basic multiplication algorithms, one connected with the quadratic representation

$$\begin{aligned} w_1( \mathbf{x} )={\mathbb {P}}( \mathbf{x} ^{1/2}), \end{aligned}$$
(10)

and the other related to a triangular group \({\mathcal {T}}\),

$$\begin{aligned} w_2( \mathbf{x} )=t_ \mathbf{x} \in {\mathcal {T}}. \end{aligned}$$
(11)

Such functions were recently considered without any regularity assumptions in Kołodziejek [14].

It should be stressed that there exist infinitely many multiplication algorithms. If \( w \) is a multiplication algorithm, then trivial extensions are given by \( w ^{(k)}( \mathbf{x} )= w ( \mathbf{x} ) k\), where \(k\in K\) is fixed (Remark 4.3 explains why this extension is trivial when it comes to multiplicative functions). One may also consider multiplication algorithms of the form \({\mathbb {P}}( \mathbf{x} ^\alpha )t_{ \mathbf{x} ^{1-2\alpha }}\), which interpolate between the two main examples: \( w _1\) (which is \(\alpha =1/2\)) and \( w _2\) (which is \(\alpha =0\)). In general, any multiplication algorithm may be written in the form \( w ( \mathbf{x} )={\mathbb {P}}( \mathbf{x} ^{1/2})k_{ \mathbf{x} }\), where \( \mathbf{x} \mapsto k_{ \mathbf{x} }\in K\).

Functional equation (9) for \(w_1\) was already considered by Bobecka and Wesołowski [3] for differentiable functions and by Molnár [19] for continuous functions of real or complex Hermitian positive definite matrices of rank \(>2\). Without any regularity assumptions, it was solved on the Lorentz cone by Wesołowski [23].

The case of \(w_2( \mathbf{x} )=t_ \mathbf{x} \in {\mathcal {T}}\) for a triangular group \({\mathcal {T}}\), perhaps a bit surprisingly, leads to a different solution. It was indirectly solved for differentiable functions by Hassairi et al. [11, Proof of Theorem 3.3].

By Faraut and Korányi [8, Proposition III.4.3], for any \(g\) in the group \(G\),

$$\begin{aligned} \det (g \mathbf{x} )=({\mathrm {Det}}\,g)^{r/\dim \Omega }\det \mathbf{x} , \end{aligned}$$

where \({\mathrm {Det}}\) denotes the determinant in the space of endomorphisms on \( \Omega \). Inserting a multiplication algorithm \(g=w( \mathbf{y} )\), \( \mathbf{y} \in \Omega \), and \( \mathbf{x} = \mathbf{e} \), we obtain

$$\begin{aligned} {\mathrm {Det}}\left( w( \mathbf{y} )\right) =(\det \mathbf{y} )^{\dim \Omega /r} \end{aligned}$$
(12)

and hence

$$\begin{aligned} \det (w( \mathbf{y} ) \mathbf{x} ) =\det \mathbf{y} \det \mathbf{x} \end{aligned}$$

for any \( \mathbf{x} , \mathbf{y} \in \Omega \). This means that \(f( \mathbf{x} )=H(\det \mathbf{x} )\), where \(H\) is a generalized logarithmic function, i.e., \(H(ab)=H(a)+H(b)\) for \(a,b>0\), is always a solution to (9), regardless of the choice of the multiplication algorithm \(w\). If a \(w\)-logarithmic function \(f\) is additionally \(K\)-invariant (\(f( \mathbf{x} )=f(k \mathbf{x} )\) for any \(k\in K\)), then \(H(\det \mathbf{x} )\) is the only possible solution (Theorem 3.4).
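The identity \(\det (w( \mathbf{y} ) \mathbf{x} )=\det \mathbf{y} \det \mathbf{x} \) can be checked numerically for the two basic multiplication algorithms on \( \Omega _+\) (a sketch added for illustration, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(7)

def random_spd(r):
    m = rng.standard_normal((r, r))
    return m @ m.T + r * np.eye(r)

def spd_sqrt(a):
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.sqrt(w)) @ v.T

r = 5
x, y = random_spd(r), random_spd(r)

w1_yx = spd_sqrt(y) @ x @ spd_sqrt(y)                         # w1(y) x
w2_yx = np.linalg.cholesky(y) @ x @ np.linalg.cholesky(y).T   # w2(y) x
target = np.linalg.det(x) * np.linalg.det(y)

print(np.isclose(np.linalg.det(w1_yx), target))   # True
print(np.isclose(np.linalg.det(w2_yx), target))   # True
# hence f = H(det .), e.g. H = log, solves the w-logarithmic equation (9) for both w1 and w2
print(np.isclose(np.log(np.linalg.det(w1_yx)),
                 np.log(np.linalg.det(x)) + np.log(np.linalg.det(y))))
```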

In Kołodziejek [14], the following theorems have been proved. They will be useful in the proof of the main theorems in this paper.

Theorem 3.1

(\(w_1\)-logarithmic Cauchy functional equation) Let \(f: \Omega \rightarrow {\mathbb {R}}\) be a function such that

$$\begin{aligned} f( \mathbf{x} )+f( \mathbf{y} )=f\left( {\mathbb {P}}\left( \mathbf{x} ^{1/2}\right) \mathbf{y} \right) ,\quad ( \mathbf{x} , \mathbf{y} )\in \Omega ^2. \end{aligned}$$

Then, there exists a logarithmic function \(H\) such that for any \( \mathbf{x} \in \Omega \),

$$\begin{aligned} f( \mathbf{x} )=H(\det \mathbf{x} ). \end{aligned}$$

Theorem 3.2

(\(w_2\)-logarithmic Cauchy functional equation) Let \(f: \Omega \rightarrow {\mathbb {R}}\) be a function satisfying

$$\begin{aligned} f( \mathbf{x} )+f( \mathbf{y} )=f(t_{ \mathbf{y} } \mathbf{x} ) \end{aligned}$$

for any \( \mathbf{x} \) and \( \mathbf{y} \) in the cone \( \Omega \) of rank \(r\), \(t_{ \mathbf{y} }\in {\mathcal {T}}\), where \({\mathcal {T}}\) is the triangular group with respect to the Jordan frame \(\left( \mathbf{c} _i\right) _{i=1}^r\). Then, there exist generalized logarithmic functions \(H_1,\ldots , H_r\) such that for any \( \mathbf{x} \in \Omega \),

$$\begin{aligned} f( \mathbf{x} )=\sum _{k=1}^r H_k(\Delta _k( \mathbf{x} )), \end{aligned}$$

where \(\Delta _k\) is the principal minor of order \(k\) with respect to \(\left( \mathbf{c} _i\right) _{i=1}^r\).

If we assume in Theorem 3.2 that \(f\) is additionally measurable, then functions \(H_k\) are measurable. This implies that there exist constants \(s_k\in {\mathbb {R}}\) such that \(H_k(\alpha )=s_k\log \alpha \) and

$$\begin{aligned} f( \mathbf{x} )=\sum _{k=1}^r s_k\log (\Delta _k( \mathbf{x} ))=\log \prod _{k=1}^r \Delta ^{s_k}_k( \mathbf{x} ). \end{aligned}$$

Thus, we obtain the following

Remark 3.3

If we impose on \(f\) in Theorem 3.2 some mild conditions (e.g., measurability), then there exists \(s\in {\mathbb {R}}^r\) such that for any \( \mathbf{x} \in \Omega \),

$$\begin{aligned} f( \mathbf{x} )=\log \Delta _s( \mathbf{x} ). \end{aligned}$$

Theorem 3.4

Let \(f: \Omega \rightarrow {\mathbb {R}}\) be a function satisfying (9). Assume additionally that \(f\) is \(K\)-invariant, i.e., \(f(k \mathbf{x} )=f( \mathbf{x} )\) for any \(k\in K\) and \( \mathbf{x} \in \Omega \). Then, there exists a logarithmic function \(H\) such that for any \( \mathbf{x} \in \Omega \),

$$\begin{aligned} f( \mathbf{x} )=H(\det \mathbf{x} ). \end{aligned}$$

Lemma 3.5

(\(w\)-logarithmic Pexider functional equation) Assume that functions \(a\), \(b\), \(c\) defined on the cone \(\Omega \) satisfy the following functional equation

$$\begin{aligned} a( \mathbf{x} )+b( \mathbf{y} )=c(w( \mathbf{x} ) \mathbf{y} ),\quad ( \mathbf{x} , \mathbf{y} )\in \Omega ^2. \end{aligned}$$

Then, there exist a \(w\)-logarithmic function \(f\) and real constants \(a_0, b_0\) such that for any \( \mathbf{x} \in \Omega \),

$$\begin{aligned} a( \mathbf{x} )&=f( \mathbf{x} )+a_0,\\ b( \mathbf{x} )&=f(w( \mathbf{e} ) \mathbf{x} )+b_0,\\ c( \mathbf{x} )&=f( \mathbf{x} )+a_0+b_0. \end{aligned}$$

3.2 The Olkin–Baker Functional Equation

In the following section, we deal with the Olkin–Baker functional equation on irreducible symmetric cones, which is related to the Lukacs independence condition (see the proof of Theorem 4.2).

Henceforth, we will assume that the multiplication algorithm \( w \) is additionally homogeneous of degree \(1\), that is, \( w (s \mathbf{x} )=s w ( \mathbf{x} )\) for any \(s>0\) and \( \mathbf{x} \in \Omega \). It is easy to create a multiplication algorithm without this property, for example:

$$\begin{aligned} w ( \mathbf{x} )={\left\{ \begin{array}{ll} w _1( \mathbf{x} ), &{} \text{ if } \det \mathbf{x} >1, \\ w _2( \mathbf{x} ), &{} \text{ if } \det \mathbf{x} \le 1. \end{array}\right. } \end{aligned}$$

The problem of solving

$$\begin{aligned} f(x)g(y)=p(x+y)q(x/y),\quad (x,y)\in (0,\infty )^2 \end{aligned}$$
(13)

for unknown positive functions \(f\), \(g\), \(p\) and \(q\) was first posed in Olkin [20]. Note that in the one-dimensional case, it does not matter whether one considers \(q(x/y)\) or \(q(x/(x+y))\) on the right-hand side of (13). Its general solution was given in Baker [1] and later analyzed in Lajkó [15] using a different approach. Recently, in Mészáros [18] and Lajkó and Mészáros [16], Eq. (13) was solved assuming that it is satisfied almost everywhere on \((0,\infty )^2\) for measurable functions which are nonnegative on its domain or positive on some sets of positive Lebesgue measure, respectively. Finally, a new derivation of the solution to (13), when the equation holds almost everywhere on \((0,\infty )^2\) and no regularity assumptions on the unknown positive functions are imposed, was given in Ger et al. [9]. The following theorem is concerned with an adaptation of (13), after taking logarithms, to the symmetric cone case.

Theorem 3.6

(Olkin–Baker functional equation on symmetric cones) Let \(a\), \(b\), \(c\) and \(d\) be real continuous functions on an irreducible symmetric cone \( \Omega \) of rank \(r\). Assume

$$\begin{aligned} a( \mathbf{x} )+b( \mathbf{y} )=c( \mathbf{x} + \mathbf{y} )+d\left( g \left( \mathbf{x} + \mathbf{y} \right) \mathbf{x} \right) ,\qquad ( \mathbf{x} , \mathbf{y} )\in \Omega ^2, \end{aligned}$$
(14)

where \( g ^{-1}= w \) is a multiplication algorithm which is homogeneous of degree \(1\). Then, there exist constants \(C_i\in {\mathbb {R}}\), \(i=1,\ldots ,4\), and \(\Lambda \in {\mathbb {E}}\) such that for any \( \mathbf{x} \in \Omega \) and \( \mathbf{u} \in {\mathcal {D}}=\left\{ \mathbf{x} \in \Omega : \mathbf{e} - \mathbf{x} \in \Omega \right\} \),

$$\begin{aligned} a( \mathbf{x} )&=\left\langle \Lambda , \mathbf{x} \right\rangle +e( \mathbf{x} )+C_1,\\ b( \mathbf{x} )&=\left\langle \Lambda , \mathbf{x} \right\rangle +f( \mathbf{x} )+C_2,\\ c( \mathbf{x} )&=\left\langle \Lambda , \mathbf{x} \right\rangle +e( \mathbf{x} )+f( \mathbf{x} )+C_3,\\ d( \mathbf{u} )&=e(w( \mathbf{e} ) \mathbf{u} )+f( \mathbf{e} -w( \mathbf{e} ) \mathbf{u} )+C_4, \end{aligned}$$

where \(e\) and \(f\) are continuous \( w \)-logarithmic Cauchy functions and \(C_1+C_2=C_3+C_4\).

We will need the following simple lemma. For the elementary proof, we refer to Kołodziejek [13, Lemma 3.2].

Lemma 3.7

(Additive Pexider functional equation on symmetric cones) Let \(a\), \(b\) and \(c\) be measurable functions on a symmetric cone \( \Omega \) satisfying

$$\begin{aligned} a( \mathbf{x} )+b( \mathbf{y} )=c( \mathbf{x} + \mathbf{y} ),\qquad ( \mathbf{x} , \mathbf{y} )\in \Omega ^2. \end{aligned}$$
(15)

Then, there exist constants \(\alpha , \beta \in {\mathbb {R}}\) and \(\lambda \in {\mathbb {E}}\) such that for all \( \mathbf{x} \in \Omega \),

$$\begin{aligned} a( \mathbf{x} )&=\left\langle \lambda , \mathbf{x} \right\rangle +\alpha , \nonumber \\ b( \mathbf{x} )&=\left\langle \lambda , \mathbf{x} \right\rangle +\beta , \nonumber \\ c( \mathbf{x} )&=\left\langle \lambda , \mathbf{x} \right\rangle +\alpha +\beta . \end{aligned}$$
(16)

Now, we can come back and give a new proof of the Olkin–Baker functional equation.

Proof of Theorem 3.6

In the first part of the proof, we adapt the argument given in Ger et al. [9], where the analogous result on \((0,\infty )\) was analyzed, to the symmetric cone setting.

For any \(s>0\) and \(( \mathbf{x} , \mathbf{y} )\in \Omega ^2\), we get

$$\begin{aligned} a(s \mathbf{x} )+b(s \mathbf{y} )=c(s( \mathbf{x} + \mathbf{y} ))+d\left( g(s \mathbf{x} +s \mathbf{y} )s \mathbf{x} \right) . \end{aligned}$$
(17)

Since \(w\) is homogeneous of degree \(1\), we have \(g(s \mathbf{x} )=\tfrac{1}{s}g( \mathbf{x} )\) and so \(g(s \mathbf{x} +s \mathbf{y} )s \mathbf{x} =g( \mathbf{x} + \mathbf{y} ) \mathbf{x} \) for any \(s>0\). Subtracting now (14) from (17) for any \(s>0\), we arrive at the additive Pexider equation on the symmetric cone \( \Omega \),

$$\begin{aligned} a_s( \mathbf{x} )+b_s( \mathbf{y} )=c_s( \mathbf{x} + \mathbf{y} ),\qquad ( \mathbf{x} , \mathbf{y} )\in \Omega ^2, \end{aligned}$$

where \(a_s\), \(b_s\) and \(c_s\) are functions defined by \(a_s( \mathbf{x} ):=a(s \mathbf{x} )-a( \mathbf{x} )\), \(b_s( \mathbf{x} ):=b(s \mathbf{x} )-b( \mathbf{x} )\) and \(c_s( \mathbf{x} ):=c(s \mathbf{x} )-c( \mathbf{x} )\).

Due to continuity of \(a\), \(b\) and \(c\) and Lemma 3.7, it follows that for any \(s>0\), there exist constants \(\lambda (s)\in {\mathbb {E}}\), \(\alpha (s)\in {\mathbb {R}}\) and \(\beta (s)\in {\mathbb {R}}\) such that for any \( \mathbf{x} \in \Omega \),

$$\begin{aligned} a_s( \mathbf{x} )&= \left\langle \lambda (s), \mathbf{x} \right\rangle +\alpha (s),\\ b_s( \mathbf{x} )&= \left\langle \lambda (s), \mathbf{x} \right\rangle +\beta (s),\\ c_s( \mathbf{x} )&= \left\langle \lambda (s), \mathbf{x} \right\rangle +\alpha (s)+\beta (s). \end{aligned}$$

By the definition of \(a_s\) and the above observation, it follows that for any \((s,t)\in (0,\infty )^2\) and \( \mathbf{z} \in \Omega \)

$$\begin{aligned} a_{st}( \mathbf{z} )=a_t(s \mathbf{z} )+a_s( \mathbf{z} ). \end{aligned}$$

Hence,

$$\begin{aligned} \left\langle \lambda (st), \mathbf{z} \right\rangle +\alpha (st)=\left\langle \lambda (t),s \mathbf{z} \right\rangle +\alpha (t)+\left\langle \lambda (s), \mathbf{z} \right\rangle +\alpha (s). \end{aligned}$$
(18)

Since (18) holds for any \( \mathbf{z} \in \Omega \), we see that \(\alpha (st)=\alpha (s)+\alpha (t)\) for all \((s,t)\in (0,\infty )^2\). That is, \(\alpha (s)=k_1\log s\) for \(s\in (0,\infty )\), where \(k_1\) is a real constant.

On the other hand

$$\begin{aligned} \left\langle \lambda (st), \mathbf{z} \right\rangle =\left\langle \lambda (s), \mathbf{z} \right\rangle +\left\langle \lambda (t),s \mathbf{z} \right\rangle =\left\langle \lambda (t), \mathbf{z} \right\rangle +\left\langle \lambda (s),t \mathbf{z} \right\rangle \end{aligned}$$
(19)

since one can interchange \(s\) and \(t\) on the left-hand side. Putting \(s=2\) and denoting \(\Lambda =\lambda (2)\), we obtain

$$\begin{aligned} \left\langle \lambda (t), \mathbf{z} \right\rangle =\left\langle \Lambda , \mathbf{z} \right\rangle (t-1) \end{aligned}$$

for \(t>0\) and \( \mathbf{z} \in \Omega \). It then follows that for all \(s\in (0,\infty )\) and \( \mathbf{z} \in \Omega \),

$$\begin{aligned} a_s( \mathbf{z} )=a(s \mathbf{z} )-a( \mathbf{z} )=\left\langle \Lambda , \mathbf{z} \right\rangle (s-1)+k_1\log \,s. \end{aligned}$$
(20)

Let us define the function \(\bar{a}\) by the formula

$$\begin{aligned} \bar{a}( \mathbf{x} )=a( \mathbf{x} )-\left\langle \Lambda , \mathbf{x} \right\rangle . \end{aligned}$$

From (20), we get

$$\begin{aligned} \bar{a}(s \mathbf{x} )=\bar{a}( \mathbf{x} )+k_1\log s \end{aligned}$$
(21)

for \(s>0\) and \( \mathbf{x} \in \Omega \).

Analogous considerations for the function \(b_s\) give the existence of a constant \(k_2\) such that \(\bar{b}(s \mathbf{x} )=\bar{b}( \mathbf{x} )+k_2\log s\), where

$$\begin{aligned} \bar{b}( \mathbf{x} )=b( \mathbf{x} )-\left\langle \Lambda , \mathbf{x} \right\rangle , \end{aligned}$$

hence \(\bar{c}(s \mathbf{x} )=\bar{c}( \mathbf{x} )+(k_1+k_2)\log s\) and

$$\begin{aligned} \bar{c}( \mathbf{x} )=c( \mathbf{x} )-\left\langle \Lambda , \mathbf{x} \right\rangle \end{aligned}$$

for any \(s>0\) and \( \mathbf{x} \in \Omega \).

The functions \(\bar{a}\), \(\bar{b}\), \(\bar{c}\) and \(d\) satisfy the original Olkin–Baker functional equation:

$$\begin{aligned} \bar{a}( \mathbf{x} )+\bar{b}( \mathbf{y} )=\bar{c}( \mathbf{x} + \mathbf{y} )+d\left( g \left( \mathbf{x} + \mathbf{y} \right) \mathbf{x} \right) ,\quad ( \mathbf{x} , \mathbf{y} )\in \Omega ^2. \end{aligned}$$
(22)

Taking \( \mathbf{x} = \mathbf{y} = \mathbf{v} \in \Omega \) in (22), we arrive at

$$\begin{aligned} \bar{a}( \mathbf{v} )+\bar{b}( \mathbf{v} )=\bar{c}(2 \mathbf{v} )+d(\tfrac{1}{2} \mathbf{e} )=\bar{c}( \mathbf{v} )+(k_1+k_2)\log 2+d(\tfrac{1}{2} \mathbf{e} ). \end{aligned}$$
(23)

Insert \( \mathbf{x} =\alpha w ( \mathbf{v} ) \mathbf{u} \) and \( \mathbf{y} = w ( \mathbf{v} )( \mathbf{e} -\alpha \mathbf{u} )\) into (22) for \(0<\alpha <1\) and \(( \mathbf{u} , \mathbf{v} )\in ({\mathcal {D}}, \Omega )\). Using (21), we obtain

$$\begin{aligned} \bar{a}( w ( \mathbf{v} ) \mathbf{u} )+\bar{b}( w ( \mathbf{v} )( \mathbf{e} -\alpha \mathbf{u} ))=\bar{c}( \mathbf{v} )+d\left( \alpha \mathbf{u} \right) -k_1\log \alpha ,\quad ( \mathbf{u} , \mathbf{v} )\in ({\mathcal {D}}, \Omega ). \end{aligned}$$

Let us observe that due to the continuity of \(\bar{b}\) on \( \Omega \) and \(\lim _{\alpha \rightarrow 0}\left\{ w ( \mathbf{v} )( \mathbf{e} -\alpha \mathbf{u} )\right\} = w ( \mathbf{v} ) \mathbf{e} = \mathbf{v} \in \Omega \) (convergence in the norm generated by the scalar product \(\left\langle \cdot ,\cdot \right\rangle \)), the limit as \(\alpha \rightarrow 0\) of the left-hand side of the above equality exists. Hence, the limit of the right-hand side also exists and

$$\begin{aligned} \bar{a}( w ( \mathbf{v} ) \mathbf{u} )+\bar{b}( \mathbf{v} )=\bar{c}( \mathbf{v} )+\lim _{\alpha \rightarrow 0}\left\{ d(\alpha \mathbf{u} ) -k_1\log \alpha \right\} ,\quad ( \mathbf{u} , \mathbf{v} )\in ({\mathcal {D}}, \Omega ). \end{aligned}$$
(24)

Subtracting (24) from (23), we have

$$\begin{aligned} \bar{a}( w ( \mathbf{v} ) \mathbf{u} )=\bar{a}( \mathbf{v} )+h( \mathbf{u} ) \end{aligned}$$
(25)

for \( \mathbf{u} \in {\mathcal {D}}\), \( \mathbf{v} \in \Omega \), where \(h( \mathbf{u} )=\lim _{\alpha \rightarrow 0}\left\{ d(\alpha \mathbf{u} ) -k_1\log \alpha \right\} -(k_1+k_2)\log 2-d(\tfrac{1}{2} \mathbf{e} )\) (the letter \(h\) is used here to avoid a clash with the division algorithm \(g\)). Due to the property (21), equation (25) holds for any \( \mathbf{u} \in \Omega \), so we arrive at the \( w \)-logarithmic Pexider equation. Lemma 3.5 implies that there exists a \( w \)-logarithmic function \(e\) such that

$$\begin{aligned} \bar{a}( \mathbf{x} )=e( \mathbf{x} )+C_1 \end{aligned}$$

for any \( \mathbf{x} \in \Omega \) and a constant \(C_1\in {\mathbb {R}}\). The function \(e\) is continuous, because \(\bar{a}\) is continuous. Coming back to the definition of \(\bar{a}\), we obtain

$$\begin{aligned} a( \mathbf{x} )=\left\langle \Lambda , \mathbf{x} \right\rangle +e( \mathbf{x} )+C_1, \quad \mathbf{x} \in \Omega . \end{aligned}$$

Analogously for the function \(b\): considering equation (22) for \( \mathbf{x} = w ( \mathbf{v} )( \mathbf{e} -\alpha \mathbf{u} )\) and \( \mathbf{y} =\alpha w ( \mathbf{v} ) \mathbf{u} \) and passing to the limit as \(\alpha \rightarrow 0\), we show that there exists a continuous \( w \)-logarithmic function \(f\) such that

$$\begin{aligned} b( \mathbf{x} )=\left\langle \Lambda , \mathbf{x} \right\rangle +f( \mathbf{x} )+C_2, \quad \mathbf{x} \in \Omega \end{aligned}$$

for a constant \(C_2\in {\mathbb {R}}\). The form of \(c\) follows from (23). Taking \( \mathbf{x} =w( \mathbf{e} ) \mathbf{u} \) and \( \mathbf{y} = \mathbf{e} -w( \mathbf{e} ) \mathbf{u} \) in (22) for \( \mathbf{u} \in {\mathcal {D}}\), we obtain the form of \(d\). \(\square \)

4 The Lukacs–Olkin–Rubin Theorem Without Invariance of The Quotient

In the following section, we prove the density version of the Lukacs–Olkin–Rubin theorem for any multiplication algorithm \(w\) satisfying

  1. (i)

    \( w (s \mathbf{x} )=s w ( \mathbf{x} )\) for \(s>0\) and \( \mathbf{x} \in \Omega \),

  2. (ii)

differentiability of the mapping \( \Omega \ni \mathbf{x} \mapsto w ( \mathbf{x} )\in G\).

We assume (ii) to ensure that the Jacobian of the considered transformation exists. We start with the direct result, showing that the considered measures have the desired property. The converse result is given in Theorem 4.2. For every generalized multiplication \( w \), the family of these \( w \)-Wishart measures (as defined in (26)) contains the Wishart laws. For \( w = w _1\), there are no other distributions, while the \( w _2\)-Wishart measures consist of the Riesz distributions. It is an open question whether there is a generalized multiplication \( w \) that leads to other probability measures in this family.

Theorem 4.1

Let \( w \) be a multiplication algorithm satisfying condition (ii) and define \( g = w ^{-1}\). Suppose that \(X\) and \(Y\) are independent random variables with densities given by

$$\begin{aligned} f_X( \mathbf{x} )&= C_X e( \mathbf{x} )\exp \left\langle \Lambda , \mathbf{x} \right\rangle I_ \Omega ( \mathbf{x} ),\nonumber \\ f_Y( \mathbf{x} )&= C_Y f( \mathbf{x} )\exp \left\langle \Lambda , \mathbf{x} \right\rangle I_ \Omega ( \mathbf{x} ), \end{aligned}$$
(26)

where \(e\) and \(f\) are \( w \)-multiplicative functions, \(\Lambda \in {\mathbb {E}}\) and \({\mathbb {E}}\) is the Euclidean Jordan algebra associated with the irreducible symmetric cone \( \Omega \).

Then, the vector \((U,V)=\left( g (X+Y)X,X+Y\right) \) has independent components.

Note that if \( w ( \mathbf{x} )= w _1( \mathbf{x} )={\mathbb {P}}( \mathbf{x} ^{1/2})\), then there exist positive constants \(\kappa _X\) and \(\kappa _Y\) such that \(e( \mathbf{x} )=(\det \mathbf{x} )^{\kappa _X-\dim \Omega /r}\) and \(f( \mathbf{x} )=(\det \mathbf{x} )^{\kappa _Y-\dim \Omega /r}\). In this case \(-\Lambda =: \mathbf{a} \in \Omega \) and \((X,Y)\sim \gamma _{\kappa _X, \mathbf{a} }\otimes \gamma _{\kappa _Y, \mathbf{a} }\). Similarly, if \( w ( \mathbf{x} )= w _2( \mathbf{x} )=t_ \mathbf{x} \), \(X\) and \(Y\) follow Riesz distributions with the same scale parameter \(-\Lambda \in \Omega \). In general, we do not know whether \( \mathbf{a} =-\Lambda \) should always belong to \( \Omega \).

Proof

Let \(\psi : \Omega \times \Omega \rightarrow {\mathcal {D}}\times \Omega \) be a mapping defined through

$$\begin{aligned} \psi ( \mathbf{x} , \mathbf{y} )=\left( g ( \mathbf{x} + \mathbf{y} ) \mathbf{x} , \mathbf{x} + \mathbf{y} \right) =( \mathbf{u} , \mathbf{v} ). \end{aligned}$$

Then, \((U,V)=\psi (X,Y)\). The inverse mapping \(\psi ^{-1}:{\mathcal {D}}\times \Omega \rightarrow \Omega \times \Omega \) is given by

$$\begin{aligned} ( \mathbf{x} , \mathbf{y} )=\psi ^{-1}( \mathbf{u} , \mathbf{v} )=\left( w ( \mathbf{v} ) \mathbf{u} , w ( \mathbf{v} )( \mathbf{e} - \mathbf{u} )\right) , \end{aligned}$$

hence \(\psi \) is a bijection. We are looking for the Jacobian of the map \(\psi ^{-1}\), that is, the determinant of the linear map \({\mathrm {D}}\psi ^{-1}( \mathbf{u} , \mathbf{v} ):{\mathbb {E}}\times {\mathbb {E}}\rightarrow {\mathbb {E}}\times {\mathbb {E}}\). Since \( \mathbf{x} + \mathbf{y} = w ( \mathbf{v} ) \mathbf{e} = \mathbf{v} \), composing \({\mathrm {D}}\psi ^{-1}( \mathbf{u} , \mathbf{v} )\) with the unimodular map \(( \mathbf{x} , \mathbf{y} )\mapsto ( \mathbf{x} , \mathbf{x} + \mathbf{y} )\) yields a block-triangular operator with diagonal blocks \( w ( \mathbf{v} )\) and the identity.

We have

$$\begin{aligned} J_{\psi ^{-1}}( \mathbf{u} , \mathbf{v} )={\mathrm {Det}}\left( w ( \mathbf{v} )\right) , \end{aligned}$$

where \({\mathrm {Det}}\) denotes the determinant in the space of endomorphisms on \( \Omega \). By (12), we get

$$\begin{aligned} {\mathrm {Det}}\left( w \left( \mathbf{v} \right) \right) =(\det \mathbf{v} )^{\dim \Omega /r}. \end{aligned}$$

Now, we can find the joint density of \((U,V)\). Since \((X,Y)\) has independent components, we obtain

$$\begin{aligned} f_{(U,V)}( \mathbf{u} , \mathbf{v} )=(\det \mathbf{v} )^{\dim \Omega /r}f_X( w ( \mathbf{v} ) \mathbf{u} )f_Y( w ( \mathbf{v} )( \mathbf{e} - \mathbf{u} )) \end{aligned}$$
(27)

We assumed (26); thus there exist \(\Lambda \in {\mathbb {E}}\), \(C_X, C_Y \in {\mathbb {R}}\) and \( w \)-multiplicative functions \(e\), \(f\) such that

$$\begin{aligned} f_{(U,V)}( \mathbf{u} , \mathbf{v} )=\,&(\det \mathbf{v} )^{\dim \Omega /r}f_X( w ( \mathbf{v} ) \mathbf{u} )f_Y( w ( \mathbf{v} )( \mathbf{e} - \mathbf{u} )) \\ =\,&C_XC_Y\, (\det \mathbf{v} )^{\dim \Omega /r} e( w ( \mathbf{v} ) \mathbf{u} ) f( w ( \mathbf{v} )( \mathbf{e} - \mathbf{u} ))\\&\quad e^{\left\langle \Lambda , \mathbf{v} \right\rangle }I_ \Omega ( w ( \mathbf{v} ) \mathbf{u} )I_ \Omega ( w ( \mathbf{v} )( \mathbf{e} - \mathbf{u} )) \\ =\,&C_XC_Y\, (\det \mathbf{v} )^{\dim \Omega /r} e( \mathbf{v} )f( \mathbf{v} ) e^{\left\langle \Lambda , \mathbf{v} \right\rangle }I_ \Omega ( \mathbf{v} ) \,\,\\&\quad e( w ( \mathbf{e} ) \mathbf{u} ) f( w ( \mathbf{e} )( \mathbf{e} - \mathbf{u} )) I_{\mathcal {D}}( \mathbf{u} ) \\ =\,&f_U( \mathbf{u} ) \, f_V( \mathbf{v} ), \end{aligned}$$

which completes the proof. \(\square \)
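As a numerical illustration of Theorem 4.1 (a Monte Carlo sketch added here, assuming NumPy; it uses the multiplication algorithm \( w _1\) and Wishart laws with identity scale, and checks only a necessary consequence of independence, namely vanishing correlation of a pair of statistics of \(U\) and \(V\)):

```python
import numpy as np

rng = np.random.default_rng(8)

def wishart(df, r):
    """A Wishart-distributed point of Omega_+ (identity scale), as G G^T with G Gaussian r x df."""
    g = rng.standard_normal((r, df))
    return g @ g.T

def spd_inv_sqrt(a):
    w, v = np.linalg.eigh(a)
    return v @ np.diag(w ** -0.5) @ v.T

r, n = 3, 5000
tU, tV = np.empty(n), np.empty(n)
for i in range(n):
    x, y = wishart(6, r), wishart(9, r)    # independent, same scale parameter
    v = x + y
    s = spd_inv_sqrt(v)
    u = s @ x @ s                          # U = g1(X+Y) X with g1(a) = P(a^{-1/2})
    tU[i], tV[i] = np.trace(u), np.trace(v)

# under Theorem 4.1, U and V are independent, so this sample correlation should be near 0
print(np.corrcoef(tU, tV)[0, 1])
```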

To prove the characterization of the given measures, we need to show that the inverse implication is also valid. The following theorem generalizes results obtained in Bobecka and Wesołowski, Hassairi et al. and Kołodziejek [2, 11, 13]. We consider the quotient \(U\) for any multiplication algorithm \( w \) satisfying conditions (i) and (ii) given at the beginning of this section (note that the multiplication algorithms \( w _1\) and \( w _2\) defined in (10) and (11), respectively, satisfy both of these conditions). The respective densities are then expressed in terms of \( w \)-multiplicative Cauchy functions.

Theorem 4.2

(The Lukacs–Olkin–Rubin theorem with densities on symmetric cones) Let \(X\) and \(Y\) be independent random variables valued in an irreducible symmetric cone \( \Omega \) with strictly positive and continuous densities. Set \(V=X+Y\) and \(U= g \left( X+Y\right) X\) for any multiplication algorithm \( w = g ^{-1}\) satisfying conditions (i) and (ii). If \(U\) and \(V\) are independent, then there exist \(\Lambda \in {\mathbb {E}}\) and \( w \)-multiplicative functions \(e\), \(f\) such that (26) holds.

In particular,

  1. (1)

    if \( g ( \mathbf{x} )= g _1( \mathbf{x} )={\mathbb {P}}( \mathbf{x} ^{-1/2})\), then there exist constants \(p_i>\dim \Omega /r-1\), \(i=1,2\), and \( \mathbf{a} \in \Omega \) such that \(X\sim \gamma _{p_1, \mathbf{a} }\) and \(Y\sim \gamma _{p_2, \mathbf{a} }\),

  2. (2)

    if \( g ( \mathbf{x} )= g _2( \mathbf{x} )=t_{ \mathbf{x} }^{-1}\) , then there exist constants \(s_i=(s_{i,j})_{j=1}^r\), \(s_{i,j}>(j-1)d/2\), \(i=1,2\), and \( \mathbf{a} \in \Omega \) such that \(X\sim R_{s_1, \mathbf{a} }\) and \(Y\sim R_{s_2, \mathbf{a} }\).

Proof

We start from (27). Since \((U,V)\) is assumed to have independent components, the following identity holds almost everywhere with respect to Lebesgue measure:

$$\begin{aligned} (\det ( \mathbf{x} + \mathbf{y} ))^{\dim \Omega /r}f_X( \mathbf{x} )f_Y( \mathbf{y} )=f_U\left( g \left( \mathbf{x} + \mathbf{y} \right) \mathbf{x} \right) f_V( \mathbf{x} + \mathbf{y} ), \end{aligned}$$
(28)

where \(f_X\), \(f_Y\), \(f_U\) and \(f_V\) denote the densities of \(X\), \(Y\), \(U\) and \(V\), respectively.

Since the respective densities are assumed to be continuous, the above equation holds for every \( \mathbf{x} , \mathbf{y} \in \Omega \). Taking logarithms of both sides of the above equation (it is permitted since \(f_X, f_Y>0\) on \( \Omega \)), we get

$$\begin{aligned} a( \mathbf{x} )+b( \mathbf{y} )=c( \mathbf{x} + \mathbf{y} )+d\left( g \left( \mathbf{x} + \mathbf{y} \right) \mathbf{x} \right) , \end{aligned}$$
(29)

where

$$\begin{aligned} a( \mathbf{x} )&=\log \, f_X( \mathbf{x} ),\\ b( \mathbf{x} )&=\log \, f_Y( \mathbf{x} ),\\ c( \mathbf{x} )&=\log \, f_V( \mathbf{x} )-\tfrac{\dim \Omega }{r}\log \det ( \mathbf{x} ),\\ d( \mathbf{u} )&=\log \, f_U( \mathbf{u} ), \end{aligned}$$

for \( \mathbf{x} \in \Omega \) and \( \mathbf{u} \in {\mathcal {D}}\).

The first part of the conclusion follows now directly from Theorem 3.6. Thus, there exist constants \(\Lambda \in {\mathbb {E}}\), \(C_i\in {\mathbb {R}}\), \(i\in \{1,2\}\), and \( w \)-multiplicative functions \(e\) and \(f\) (the exponentials of the \( w \)-logarithmic functions from Theorem 3.6) such that

$$\begin{aligned} f_X( \mathbf{x} )&=e^{a( \mathbf{x} )}=e^{C_1}e( \mathbf{x} )e^{\left\langle \Lambda , \mathbf{x} \right\rangle },\\ f_Y( \mathbf{x} )&=e^{b( \mathbf{x} )}=e^{C_2}f( \mathbf{x} )e^{\left\langle \Lambda , \mathbf{x} \right\rangle }, \end{aligned}$$

for any \( \mathbf{x} \in \Omega \).

Let us observe that if \( w ( \mathbf{x} )= w _1( \mathbf{x} )={\mathbb {P}}( \mathbf{x} ^{1/2})\), then by Theorem 3.1, there exist constants \(\kappa _i\), \(i=1,2\), such that \(e( \mathbf{x} )=(\det \mathbf{x} )^{\kappa _1}\) and \(f( \mathbf{x} )=(\det \mathbf{x} )^{\kappa _2}\). Since \(f_X\) and \(f_Y\) are densities, it follows that \( \mathbf{a} =-\Lambda \in \Omega \), \(\kappa _i=p_i-\dim \Omega /r>-1\) and \(e^{C_i}=(\det \mathbf{a} )^{p_i}/\Gamma _ \Omega (p_i)\), \(i=1,2\).

Analogously, if \( w ( \mathbf{x} )= w _2( \mathbf{x} )=t_ \mathbf{x} \), then Theorem 3.2 and Remark 3.3 imply that there exist constants \(s_i=(s_{i,j})_{j=1}^r\), \(s_{i,j}>(j-1)d/2\), \(i=1,2\), and \( \mathbf{a} =-\Lambda \in \Omega \) such that \(X\sim R_{s_1, \mathbf{a} }\) and \(Y\sim R_{s_2, \mathbf{a} }\). \(\square \)

Remark 4.3

Fix \(k\in K\) and consider \(w^{(k)}( \mathbf{x} )=w( \mathbf{x} ) k\). A \(w^{(k)}\)-multiplicative function \(f\) satisfies the equation

$$\begin{aligned} f( \mathbf{x} )f(w( \mathbf{e} )k \mathbf{y} )=f(w( \mathbf{x} ) k \mathbf{y} ). \end{aligned}$$

Substituting \( \mathbf{y} \mapsto k^{-1} \mathbf{y} \in \Omega \), we obtain

$$\begin{aligned} f( \mathbf{x} )f(w( \mathbf{e} ) \mathbf{y} )=f(w( \mathbf{x} ) \mathbf{y} ), \end{aligned}$$

that is, \(w^{(k)}\)-multiplicative functions are the same as \(w\)-multiplicative functions. This leads to the rather unsurprising observation that if we consider Theorem 4.2 with \(w( \mathbf{x} )={\mathbb {P}}( \mathbf{x} ^{1/2})k\) or \(w( \mathbf{x} )=t_ \mathbf{x} k\), regardless of \(k\in K\), we will characterize the same distributions as in points \((1)\) and \((2)\) of Theorem 4.2.

With Theorem 4.2, one can easily reprove the original Lukacs–Olkin–Rubin theorem (the version of Olkin and Rubin [22] and Casalis and Letac [7]), in which the distribution of \(U\) is invariant under a group of automorphisms:

Remark 4.4

Let us additionally assume in Theorem 4.2 that the quotient \(U\) has a distribution which is invariant under the group of automorphisms, that is, \(kU\mathop {=}\limits ^{d}U\) for any \(k\in K\). From the proof of Theorem 4.1, it follows that there exist continuous \( w \)-multiplicative functions \(e\) and \(f\) and a constant \(C\) such that for \( \mathbf{u} \in {\mathcal {D}}\),

$$\begin{aligned} f_U( \mathbf{u} )=C e( w ( \mathbf{e} ) \mathbf{u} ) f( \mathbf{e} - w ( \mathbf{e} ) \mathbf{u} ). \end{aligned}$$

The distribution of \(U\) is invariant under \(K\); thus the density \(f_U\) is a \(K\)-invariant function, that is, \(f_U( \mathbf{u} )=f_U(k \mathbf{u} )\) for any \(k\in K\). Note that \(w( \mathbf{e} )\in K\), thus

$$\begin{aligned} e( \mathbf{u} )f( \mathbf{e} - \mathbf{u} )=e(k \mathbf{u} )f( \mathbf{e} -k \mathbf{u} ),\quad (k, \mathbf{u} )\in K\times {\mathcal {D}}. \end{aligned}$$
(30)

We will show that both functions \(e\) and \(f\) are \(K\)-invariant. Recall that \(e( \mathbf{x} )e( w ( \mathbf{e} ) \mathbf{y} )=e( w ( \mathbf{x} ) \mathbf{y} )\); therefore, after taking \( \mathbf{y} =\alpha \mathbf{e} \), we obtain \(e(\alpha \mathbf{x} )=e( \mathbf{x} )e(\alpha \mathbf{e} )\) for any \(\alpha >0\) and \( \mathbf{x} \in \Omega \). Inserting \( \mathbf{u} =\alpha \mathbf{v} \) into (30), we arrive at

$$\begin{aligned} e( \mathbf{v} ) e(\alpha \mathbf{e} ) f( \mathbf{e} -\alpha \mathbf{v} ) =e(\alpha \mathbf{v} ) f( \mathbf{e} -\alpha \mathbf{v} )=e(\alpha k \mathbf{v} ) f( \mathbf{e} -\alpha k \mathbf{v} ) = e(k \mathbf{v} ) e(\alpha \mathbf{e} ) f( \mathbf{e} -k\alpha \mathbf{v} ). \end{aligned}$$

Thus, \(e( \mathbf{v} ) f( \mathbf{e} -\alpha \mathbf{v} )=e(k \mathbf{v} ) f( \mathbf{e} -k\alpha \mathbf{v} )\) for any \(\alpha \in (0,1]\), \( \mathbf{v} \in {\mathcal {D}}\). Since \(f( \mathbf{e} )=1\) and \(f\) is continuous on \( \Omega \), by passing to the limit as \(\alpha \rightarrow 0\), we get that \(e\) is \(K\)-invariant and so is \(f\). By Theorem 3.4 and continuity of \(e\) and \(f\), we get that there exist constants \(\kappa _1\), \(\kappa _2\) such that \(e( \mathbf{x} )=(\det \mathbf{x} )^{\kappa _1}\) and \(f( \mathbf{x} )=(\det \mathbf{x} )^{\kappa _2}\); hence, \(X\) and \(Y\) have Wishart distributions.