1 Moment function sequences of higher rank on hypergroups

The aim of this paper is to characterize moment function sequences of higher rank on some types of hypergroups. Concerning hypergroups we follow the monographs [1, 11]. Some elements of hypergroup theory can be applied not only in mathematics, but also in physics. In [12] one can find the physical definition of a finite hypergroup by means of particle collisions. In the same paper the connection of hypergroups with Information Theory and the Second Law of Thermodynamics is presented. The latter topics are discussed also in [13, Sect. 6].

Moment function sequences of higher rank on hypergroups were defined in [3] (see also [4, 5]). Here we recall the definition.

Let X be a commutative hypergroup and r a positive integer. For each multi-index \(\alpha \) in \(\mathbb {N}^r\) let \(\varphi _{\alpha }:X\rightarrow \mathbb {C}\) be a continuous function. The family \((\varphi _{\alpha })_{\alpha \in \mathbb {N}^r}\) is called a moment function sequence of rank r if \(\varphi _0\ne 0\), and

$$\begin{aligned} \varphi _{\alpha }(x*y)=\sum _{\beta \le \alpha } \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \varphi _{\beta }(x)\varphi _{\alpha -\beta }(y) \end{aligned}$$
(1)

holds for all x, y in X and for each multi-index \(\alpha \) in \(\mathbb {N}^r\). If the system (1) holds only for \(\vert \alpha \vert \le N\) with some given natural number N, then the family \((\varphi _{\alpha })_{\vert \alpha \vert \le N}\) is called a finite moment function sequence of rank r. If \(r=1\), then we simply call the family a moment function sequence, resp. a finite moment function sequence.

Clearly, for \(\alpha =0\) we have \(\varphi _0(x*y)=\varphi _0(x)\varphi _0(y)\), that is, \(\varphi _0\) is an exponential on the hypergroup X. We say that the moment function sequence of rank r corresponds to the exponential \(\varphi _0\). The functions \(\varphi _{\alpha }\) with \(\vert \alpha \vert =1\) satisfy

$$\begin{aligned} \varphi _{\alpha }(x*y)=\varphi _{\alpha }(x)\varphi _0(y)+ \varphi _{\alpha }(y)\varphi _0(x), \end{aligned}$$

and they are called \(\varphi _0\)-sine functions. In the case \(\varphi _0=1\), \(\varphi _0\)-sine functions are called additive functions.
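The defining identity (1) can be checked symbolically in the simplest special case. The sketch below is not a genuine hypergroup example: it uses the group \((\mathbb {R},+)\), regarded as the degenerate hypergroup with \(\delta _x*\delta _y=\delta _{x+y}\), with \(\varphi _0=1\) and \(\varphi _n(t)=a(t)^n\) for the additive function \(a(t)=t\); equation (1) for rank one then reduces to the binomial theorem.

```python
from sympy import symbols, binomial, expand, simplify

x, y = symbols('x y')

# On (R, +), regarded as a degenerate hypergroup, take phi_0 = 1 and
# phi_n(t) = a(t)**n with the additive function a(t) = t.  Equation (1)
# for rank r = 1 reads
#   phi_n(x + y) = sum_{k <= n} C(n, k) phi_k(x) phi_{n-k}(y),
# which is exactly the binomial theorem.
def phi(n, t):
    return t**n

for n in range(6):
    lhs = expand(phi(n, x + y))
    rhs = expand(sum(binomial(n, k) * phi(k, x) * phi(n - k, y)
                     for k in range(n + 1)))
    assert simplify(lhs - rhs) == 0
print("the moment identity (1) holds for phi_n(t) = t**n, n = 0..5")
```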

For instance, for \(r=2\) and \(N=2\) the system (1) has the following form:

$$\begin{aligned} \begin{aligned} \varphi _{0,0}(x*y)&=\varphi _{0,0}(x)\varphi _{0,0}(y)\\ \varphi _{1,0}(x*y)&=\varphi _{1,0}(x)\varphi _{0,0}(y)+\varphi _{0,0}(x)\varphi _{1,0}(y)\\ \varphi _{0,1}(x*y)&=\varphi _{0,1}(x)\varphi _{0,0}(y)+\varphi _{0,0}(x)\varphi _{0,1}(y)\\ \varphi _{1,1}(x*y)&=\varphi _{1,1}(x)\varphi _{0,0}(y)+\varphi _{1,0}(x)\varphi _{0,1}(y)+\varphi _{0,1}(x)\varphi _{1,0}(y)\\&\quad +\varphi _{0,0}(x)\varphi _{1,1}(y). \end{aligned} \end{aligned}$$
(2)

Our purpose is to describe the general solution of the system (1). We shall do this in the following paragraphs on some special hypergroups. We note that even the system (2) is not easy to solve in general. If, however, we assume \(\varphi _{0,0}=1\), then we can describe the general solution of (2). Indeed, by the second and third equations, \(\varphi _{1,0}\) and \(\varphi _{0,1}\) are arbitrary additive functions on X, say \(\varphi _{1,0}=a\), \(\varphi _{0,1}=b\), where

$$\begin{aligned} a(x*y)=a(x)+a(y),\qquad b(x*y)=b(x)+b(y) \end{aligned}$$

holds for all x, y in X. Setting \(\Phi =\varphi _{1,1}-a\cdot b\), we have

$$\begin{aligned} \Phi (x*y)&=\varphi _{1,1}(x*y)-a(x*y)b(x*y)\\&=\varphi _{1,1}(x)+\varphi _{1,0}(x)\varphi _{0,1}(y)+\varphi _{0,1}(x)\varphi _{1,0}(y)+\varphi _{1,1}(y)\\&\quad -a(x)b(x)-a(x)b(y)-a(y)b(x)-a(y)b(y)=\Phi (x)+\Phi (y), \end{aligned}$$

that is, \(\Phi \) is additive, say \(\Phi =c\) with \(c(x*y)=c(x)+c(y)\). It follows that \(\varphi _{1,1}=a\cdot b+c\). Conversely, it is easy to check that for any additive functions \(a,b,c:X\rightarrow \mathbb {C}\) the functions \(\varphi _{0,0}=1\), \(\varphi _{1,0}=a\), \(\varphi _{0,1}=b\), \(\varphi _{1,1}=a\cdot b+c\) form a solution of the system (2).
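The computation above can be verified symbolically, again in the degenerate case of the group \((\mathbb {R},+)\), where the continuous additive functions are of the form \(t\mapsto pt\); the symbols p, q, s below are free parameters of this illustration, not objects from the paper.

```python
from sympy import symbols, expand, simplify

x, y, p, q, s = symbols('x y p q s')

# continuous additive functions on (R, +) have the form t -> p*t;
# p, q, s are free parameters of the illustration
a = lambda t: p * t
b = lambda t: q * t
c = lambda t: s * t
phi11 = lambda t: a(t) * b(t) + c(t)

# the fourth equation of (2) with phi_{0,0} = 1, phi_{1,0} = a, phi_{0,1} = b
lhs = phi11(x + y)
rhs = phi11(x) + a(x) * b(y) + b(x) * a(y) + phi11(y)
assert simplify(expand(lhs - rhs)) == 0
print("phi_{1,1} = a*b + c solves the fourth equation of (2)")
```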

In fact, in [4], we described moment function sequences of higher rank on commutative groups. It turns out that, if the generating exponential in a moment function sequence of higher rank on a commutative hypergroup is the identically 1 function, then the methods of the proofs from [4] can be adapted, and one concludes that such moment function sequences can be represented using Bell polynomials \(B_{\alpha }\) as

$$\begin{aligned} f_{\alpha }(x)= B_{\alpha }(a(x)) \qquad \left( x\in X\right) \end{aligned}$$

with an appropriate sequence \(a= (a_{\alpha })_{\alpha \in \mathbb {N}^{r}}\) of additive functions.
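The binomial convolution identity of the complete Bell polynomials that underlies this representation can be tested in the rank-one (single-index) case with SymPy, whose `bell(n, k, symbols)` returns the partial Bell polynomials; the multi-index version is analogous. A sketch:

```python
from sympy import symbols, bell, binomial, expand, simplify

xs = symbols('x1:5')               # x1, ..., x4
ys = symbols('y1:5')
zs = tuple(xi + yi for xi, yi in zip(xs, ys))

def complete_bell(n, values):
    """Complete Bell polynomial B_n evaluated at values[0], ..., values[n-1]."""
    if n == 0:
        return 1
    poly = sum(bell(n, k, xs[:n - k + 1]) for k in range(1, n + 1))
    return poly.subs(list(zip(xs, values)))

# the convolution identity behind the representation f_alpha = B_alpha(a):
#   B_n(x + y) = sum_k C(n, k) B_k(x) B_{n-k}(y)
for n in range(5):
    lhs = expand(complete_bell(n, zs))
    rhs = expand(sum(binomial(n, k) * complete_bell(k, xs)
                     * complete_bell(n - k, ys) for k in range(n + 1)))
    assert simplify(lhs - rhs) == 0
print("Bell polynomial convolution identity verified for n = 0..4")
```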

In [5] a similar description is given on polynomial hypergroups in a single variable. Further, in [7, 9] and in [8] the authors described moment function sequences of rank one on polynomial hypergroups and on Sturm–Liouville hypergroups, respectively. In this paper we focus on moment function sequences of higher rank, from which the rank one case will follow.

2 Polynomial hypergroups in several variables

The basic concepts of polynomial hypergroups in several variables can be found in [6] and also in [1]. The single variable case of moment functions on polynomial hypergroups has been considered in [8]. Here we summarize the necessary facts.

Let X be a countable set equipped with the discrete topology and let d be a positive integer. We consider a set \((Q_x)_{x\in X}\) of polynomials in d complex variables. If for any nonnegative integer n the symbol \(X_n\) denotes the set of all elements x in X for which the degree of \(Q_x\) is not greater than n, then we suppose that the polynomials \(Q_x\) with x in \(X_n\) form a basis for all polynomials of degree not greater than n. In this case for every x, y in X the product \(Q_x\,Q_y\) admits a unique representation

$$\begin{aligned} Q_x\,Q_y=\sum _{w\in X}c(x,y,w)Q_w \end{aligned}$$
(3)

with some complex numbers c(x, y, w). A hypergroup \((X,*)\) is called a polynomial hypergroup in d variables or d-dimensional polynomial hypergroup if there exists a family of polynomials \((Q_x)_{x\in X}\) in d complex variables satisfying the above condition and such that the convolution in X is defined by

$$\begin{aligned} \delta _x*\delta _y(\{w\})=c(x,y,w) \end{aligned}$$

for all x, y, w in X. We say that this polynomial hypergroup is associated with the family of polynomials \((Q_x)_{x\in X}\). Equation (3) is called the linearization formula.

By the conditions on the sequence of polynomials \((Q_x)_{x\in X}\) it follows that there is exactly one element x in X for which \(Q_x\) is a nonzero constant. It is easy to see that necessarily \(x=e\) is the identity of the hypergroup, and \(Q_e=1\). Clearly, the family \((Q_x)_{x\in X}\) contains exactly d nonconstant linear polynomials, and these are linearly independent.
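A classical one-dimensional example (d = 1) is the Chebyshev hypergroup: for the Chebyshev polynomials of the first kind the linearization formula reads \(T_m\,T_n=\tfrac{1}{2}T_{m+n}+\tfrac{1}{2}T_{\vert m-n\vert }\), so \(c(m,n,w)=\tfrac{1}{2}\) for \(w=m+n\) and \(w=\vert m-n\vert \). This can be checked numerically:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev polynomials of the first kind satisfy
#   T_m T_n = (1/2) T_{m+n} + (1/2) T_{|m-n|},
# giving the convolution delta_m * delta_n = (delta_{m+n} + delta_{|m-n|}) / 2
# on X = N, the Chebyshev hypergroup.
def T(n):
    """Coefficient vector of T_n in the Chebyshev basis."""
    v = np.zeros(n + 1)
    v[n] = 1.0
    return v

for m in range(6):
    for n in range(6):
        prod = C.chebmul(T(m), T(n))       # product in the Chebyshev basis
        expected = np.zeros(m + n + 1)
        expected[m + n] += 0.5
        expected[abs(m - n)] += 0.5        # for m = n both halves land on T_0, T_{2n}
        assert np.allclose(prod, expected)
print("Chebyshev linearization verified for 0 <= m, n <= 5")
```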

3 Moment functions of higher rank on multivariate polynomial hypergroups

In this section we generalize the results in [5] by characterizing moment function sequences of higher rank on multivariate polynomial hypergroups.

Theorem 1

Let d, r be positive integers, and let X be a d-dimensional polynomial hypergroup associated with the family of polynomials \((Q_x)_{x\in X}\). The family of functions \(\varphi _{\alpha }:X\rightarrow \mathbb {C}\) \((\alpha \in \mathbb {N}^r)\) forms a moment function sequence of rank r on X if and only if

$$\begin{aligned} \varphi _{\alpha }(x)=\partial ^{\alpha }(Q_x\circ f)(0) \end{aligned}$$
(4)

holds for all x in X and for each \(\alpha \) in \(\mathbb {N}^r\), where \(f=(f_1,f_2,\dots ,f_d):\mathbb {R}^r\rightarrow \mathbb {C}^d\) and

$$\begin{aligned} f_{i}(t)=\sum _{\alpha \in \mathbb {N}^r} \frac{c_{i,\alpha }}{\alpha !}t^{\alpha },\qquad t\in \mathbb {R}^r \end{aligned}$$

for \(i=1,\dots ,d\), with some complex numbers \(c_{i,\alpha }\).

We note that although here f is defined by a formal power series, formula (4) uses only its coefficients, regardless of convergence.

Proof

For each natural number n, let \(X_n\) denote, as before, the set of all elements x in X with \(\deg Q_x\le n\). Let N be a natural number, let \(\alpha \) be in \(\mathbb {N}^r\) with \(\vert \alpha \vert \le N\), and let \(\varphi _{\alpha }\) denote the function defined by (4) in terms of some \(f=(f_1,f_2,\dots ,f_d):\mathbb {R}^r\rightarrow \mathbb {C}^d\), where each \(f_i:\mathbb {R}^r\rightarrow \mathbb {C}\) is an N-times differentiable function. By the linearization formula, we have

$$\begin{aligned} (Q_x\circ f)(t)(Q_y\circ f)(t)=\sum _{w\in X}c(x,y,w)(Q_w\circ f)(t) \end{aligned}$$

for each t in \(\mathbb {R}^r\) and for all x, y in X. Applying \(\partial ^{\alpha }\) on both sides with respect to t and substituting \(t=0\) we have for all x, y in X

$$\begin{aligned} \sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \varphi _{\beta }(x)\varphi _{\alpha -\beta }(y)&=\sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \partial ^{\beta }(Q_x\circ f)(0)\partial ^{\alpha -\beta }(Q_y\circ f)(0)=\sum _{w\in X}c(x,y,w)\partial ^{\alpha }(Q_w\circ f)(0)\\&=\sum _{w\in X}c(x,y,w)\varphi _{\alpha }(w)=\sum _{w\in X}(\delta _x*\delta _y)(\{w\})\varphi _{\alpha }(w)=\varphi _{\alpha }(x*y), \end{aligned}$$

which means that the functions \(\varphi _{\alpha }:X\rightarrow \mathbb {C}\) for \(\vert \alpha \vert \le N\) form a finite moment function sequence of rank r on X. As N is arbitrary, we have proved that the family \(\varphi _{\alpha }\) with \(\alpha \) in \(\mathbb {N}^r\) given in (4) forms a moment function sequence of rank r for arbitrary complex numbers \(c_{i,\alpha }\) \((i=1,2,\dots ,d,\ \alpha \in \mathbb {N}^r)\).

To prove the converse statement we assume that the family of functions \(\varphi _{\alpha }:X\rightarrow \mathbb {C}\) \((\vert \alpha \vert \le N)\) forms a finite moment function sequence of rank r on the hypergroup X. As \(\varphi _0\) is an exponential (see [11] for the form of exponentials on multivariate polynomial hypergroups), we have that \(\varphi _0(x)=Q_x(\lambda )\) holds for each x in X with some \(\lambda \) in \(\mathbb {C}^d\), and we set \((c_{1,0},c_{2,0},\dots ,c_{d,0})=\lambda \). The vectors

$$\begin{aligned} \bigl (\partial _1Q_x(\lambda ),\partial _2Q_x(\lambda ),\dots , \partial _dQ_x(\lambda )\bigr ) \end{aligned}$$

for x in \(X_1\) and \(x\ne e\) are linearly independent (see [11]), consequently, for every multi-index \(\alpha \) with \(1\le \vert \alpha \vert \le N\) the system of linear equations

$$\begin{aligned} \varphi _{\alpha }(x)=\sum _{i=1}^dc_{i,\alpha }\partial _iQ_x(\lambda ) \end{aligned}$$

for x in \(X_1\) with \(x\ne e\) has a unique solution \(c_{i,\alpha }\) \((i=1,2,\dots ,d)\). Then we define \(f=(f_1,f_2,\dots ,f_d)\) by

$$\begin{aligned} f_i(t)=\sum _{\vert \alpha \vert \le N}\frac{c_{i,\alpha }}{\alpha !}t^{\alpha } \end{aligned}$$

for each t in \(\mathbb {R}^r\) and for \(i=1,2,\dots ,d\). Further let

$$\begin{aligned} \psi _{\alpha }(x)=\varphi _{\alpha }(x)-\partial ^{\alpha }(Q_x\circ f)(0) \end{aligned}$$

for \(\vert \alpha \vert \le N\), whenever x is in X. We show that the functions \(\psi _{\alpha }\) vanish identically on X. For \(\alpha =0\) we have \(\psi _0(x)=\varphi _0(x)-Q_x(f(0))\) for all x in X. However, as \(f(0)=\lambda \), it follows immediately from the choice of \(\lambda \) that \(\varphi _0(x)=Q_x(f(0))\), hence \(\psi _0(x)=0\) for each x in X.

From the equation of the moment functions it follows by induction on \(\vert \alpha \vert \) that \(\varphi _{\alpha }(e)=0\) for \(1\le \vert \alpha \vert \le N\), consequently, we have that \(\psi _{\alpha }(e)=0\) for \(\vert \alpha \vert \le N\). On the other hand, for every x in \(X_1\), the polynomial \(Q_x\) is linear, hence

$$\begin{aligned} \partial ^{\alpha }(Q_x\circ f)(0)=\sum _{i=1}^d\partial _iQ_x(f(0))\partial ^{\alpha }f_i(0)=\sum _{i=1}^d\partial _iQ_x(\lambda ) c_{i,\alpha }= \varphi _{\alpha }(x) \end{aligned}$$

holds for \(1\le \vert \alpha \vert \le N\), whenever \(x\ne e\). This means that \(\psi _{\alpha }(x)=0\) for any x in \(X_1\) and for \(\vert \alpha \vert \le N\).

Now we proceed by induction on n. Suppose that we have proved \(\psi _{\alpha }(x)=0\) for \(\vert \alpha \vert \le N\) and for each x in \(X_n\), and let x be arbitrary in \(X_{n+1}\). We know (see the proof of Proposition 3.1.2 in [1]) that \(Q_x\) has a representation in the form

$$\begin{aligned} Q_x(\lambda )=\sum _{j=1}^s a_jQ_{x_j}(\lambda )Q_{y_j}(\lambda ) \end{aligned}$$
(5)

for any \(\lambda \) in \(\mathbb {C}^d\) with some complex numbers \(a_j\) and with some \(x_j\) in \(X_1\) and \(y_j\) in \(X_n\) \((j=1,2,\dots ,s)\), where s is a positive integer. This means that

$$\begin{aligned} \delta _x=\sum _{j=1}^s a_j\delta _{x_j}*\delta _{y_j} \end{aligned}$$

holds. Consequently, we have

$$\begin{aligned} \varphi _{\alpha }(x)=\sum _{j=1}^s a_j\varphi _{\alpha }({x_j}*{y_j}) \end{aligned}$$

for \(\vert \alpha \vert \le N\). On the other hand, replacing \(\lambda \) by f(t) in (5), applying \(\partial ^{\alpha }\) with respect to t, and substituting \(t=0\) we have

$$\begin{aligned} \partial ^{\alpha }\bigl (Q_x\circ f\bigr )(0)&=\sum _{j=1}^sa_j\sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \partial ^{\beta }\bigl (Q_{x_j}\circ f\bigr )(0) \partial ^{\alpha -\beta }\bigl (Q_{y_j}\circ f\bigr )(0)\\&=\sum _{j=1}^sa_j\sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \varphi _{\beta }(x_j)\varphi _{\alpha -\beta }(y_j)=\sum _{j=1}^sa_j \varphi _{\alpha }(x_j*y_j)= \varphi _{\alpha }(x), \end{aligned}$$

which means that \(\psi _{\alpha }(x)=0\) for \(\vert \alpha \vert \le N\). This completes the proof.\(\square \)
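The sufficiency direction of Theorem 1 can be illustrated on the Chebyshev hypergroup, a standard one-dimensional example in which \(\delta _m*\delta _n=\tfrac{1}{2}(\delta _{m+n}+\delta _{\vert m-n\vert })\). The truncated series f below, with free symbols c0, c1, c2, is an assumption of the sketch, sufficient for \(\vert \alpha \vert \le 2\):

```python
from sympy import symbols, chebyshevt, diff, binomial, simplify, Rational

t, c0, c1, c2 = symbols('t c0 c1 c2')
f = c0 + c1 * t + c2 * t**2 / 2    # truncated power series, enough for |alpha| <= 2

def d(expr, n):
    """n-th derivative with respect to t (n = 0 returns expr unchanged)."""
    return expr if n == 0 else diff(expr, t, n)

def phi(alpha, n):
    """phi_alpha(n) = d^alpha/dt^alpha T_n(f(t)) at t = 0, cf. formula (4)."""
    return d(chebyshevt(n, f), alpha).subs(t, 0)

def conv(alpha, m, n):
    """phi_alpha(m * n) for delta_m * delta_n = (delta_{m+n} + delta_{|m-n|}) / 2."""
    return Rational(1, 2) * (phi(alpha, m + n) + phi(alpha, abs(m - n)))

# check the moment identity (1) for |alpha| <= 2 and small m, n
for alpha in range(3):
    for m in range(4):
        for n in range(4):
            rhs = sum(binomial(alpha, b) * phi(b, m) * phi(alpha - b, n)
                      for b in range(alpha + 1))
            assert simplify(conv(alpha, m, n) - rhs) == 0
print("formula (4) yields moment functions on the Chebyshev hypergroup")
```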

4 Sturm–Liouville hypergroups

Sturm–Liouville hypergroups represent another important class of hypergroups, which arise from Sturm–Liouville boundary value problems on the nonnegative reals. In order to build up the Sturm–Liouville operator underlying the construction of these hypergroups, one introduces Sturm–Liouville functions. For further details see [1] and [2]. In what follows \(\mathbb {R}_0\) denotes the set of nonnegative real numbers.

The continuous function \(A:\mathbb {R}_0\rightarrow \mathbb {R}\) is called a Sturm–Liouville function if it is positive and continuously differentiable on the positive reals. Different assumptions on A can be found in [1] which lead to the desired Sturm–Liouville problem. For a given Sturm–Liouville function A one defines the Sturm–Liouville operator \(L_A\) by

$$\begin{aligned} L_Af=-f''-\frac{{A}'}{A}f', \end{aligned}$$

where f is a twice continuously differentiable real function on the positive reals. Using \(L_A\) one introduces the differential operator l by

$$\begin{aligned} l[u](x,y)&=(L_A)_xu(x,y)-(L_A)_yu(x,y)\\&=-\partial _1^2u(x,y)-\frac{A'(x)}{A(x)}\partial _1u(x,y)+ \partial _2^2u(x,y)+\frac{A'(y)}{A(y)}\partial _2u(x,y), \end{aligned}$$

where u is twice continuously differentiable for all positive reals x, y. Here \((L_A)_x\) and \((L_A)_y\) indicate that \(L_A\) operates on functions depending on x or y, respectively.

A hypergroup on \(\mathbb {R}_0\) is called a Sturm–Liouville hypergroup if there exists a Sturm–Liouville function A such that, for each nonnegative even \(C^{\infty }\)-function f on \(\mathbb {R}\), the function \(u_f\) defined by

$$\begin{aligned} u_f(x,y)=\int _{\mathbb {R}_0}\,f\,d(\delta _x*\delta _y)\qquad (x,y>0) \end{aligned}$$

is twice continuously differentiable and satisfies the partial differential equation

$$\begin{aligned} l[u_f]=0 \end{aligned}$$

with \(\partial _2u_f(x,0)=0\) for all positive x. Hence \(u_f\) is a solution of the Cauchy problem

$$\begin{aligned} \begin{aligned} \partial _1^2u(x,y)+\frac{A'(x)}{A(x)}\,\partial _1u(x,y)&=\partial _2^2u(x,y)+\frac{A'(y)}{A(y)}\,\partial _2u(x,y),\\ \partial _2u(x,0)&=0 \end{aligned} \end{aligned}$$

for all positive x, y. From general properties of one-dimensional hypergroups given in [1], it follows that \(u_f(y,0)= u_f(0,y)=f(y)\) and \(\partial _1u_f(0,y)=0\) hold, whenever y is a positive real number. In other words, \(u_f\) is the unique solution of the boundary value problem

$$\begin{aligned} \partial _1^2u(x,y)+\frac{A'(x)}{A(x)}\,\partial _1u(x,y)&= \partial _2^2u(x,y)+\frac{A'(y)}{A(y)}\,\partial _2u(x,y),\nonumber \\ \partial _1u(0,y)&=0,\qquad \partial _2u(x,0)=0,\nonumber \\ u(x,0)&=f(x),\qquad u(0,y)=f(y) \end{aligned}$$
(6)

for all positive x, y. As this boundary value problem uniquely determines \(u_f\) for each f, we may regard it as the boundary value problem defining the Sturm–Liouville hypergroup.

A systematic study of basic functional equations on Sturm–Liouville hypergroups can be found in the monograph [11]. In the sequel we shall use the following result (see Theorem 4.2 in [11, p. 62], and also [9, 10]).

Theorem 2

Let \(K=(\mathbb {R}_0,A)\) be the Sturm–Liouville hypergroup corresponding to the Sturm–Liouville function A. The continuous function \(m:\mathbb {R}_0\rightarrow \mathbb {C}\) is an exponential on K if and only if it is the restriction of an even \(C^{\infty }\)-function on \(\mathbb {R}\) and there exists a complex number \(\lambda \) such that

$$\begin{aligned} m''(x)+\frac{A'(x)}{A(x)}m'(x)=\lambda m(x),\qquad m(0)=1,\qquad m'(0)=0 \end{aligned}$$
(7)

holds for each \(x>0\).
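For a concrete, deliberately degenerate illustration, take \(A\equiv 1\), so that \(A'/A=0\); whether this choice satisfies all assumptions imposed on Sturm–Liouville functions in [1] depends on the variant chosen, so the sketch below is only a formal check of (7). Its solution is \(m(x)=\cosh (\sqrt{\lambda }\,x)\), verified here with \(\lambda >0\) for symbolic convenience:

```python
from sympy import symbols, sqrt, cosh, diff, simplify

x, lam = symbols('x lambda', positive=True)   # lambda > 0 only for convenience

# With A(x) = 1, equation (7) reduces to m'' = lambda*m, m(0) = 1, m'(0) = 0,
# whose solution is m(x) = cosh(sqrt(lambda)*x).
m = cosh(sqrt(lam) * x)

assert simplify(diff(m, x, 2) - lam * m) == 0    # the differential equation
assert m.subs(x, 0) == 1                         # m(0) = 1
assert diff(m, x).subs(x, 0) == 0                # m'(0) = 0
print("cosh(sqrt(lambda)*x) satisfies (7) when A = 1")
```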

5 Moment functions of higher rank on Sturm–Liouville hypergroups

Let \(K=(\mathbb {R}_0,A)\) be a Sturm–Liouville hypergroup. In this section we describe all moment function sequences of higher rank on K.

Theorem 3

Let \(K=(\mathbb {R}_0,A)\) be the Sturm–Liouville hypergroup corresponding to the Sturm–Liouville function A and let r be a positive integer. The family of continuous functions \(f_{\alpha }:\mathbb {R}_0\rightarrow \mathbb {C}\) \((\alpha \in \mathbb {N}^r)\) forms a moment function sequence of rank r on the hypergroup K if and only if these functions are restrictions of even \(C^{\infty }\)-functions on \(\mathbb {R}\), and there are complex numbers \(c_{\alpha }\) for each \(\alpha \) in \(\mathbb {N}^r\) such that

$$\begin{aligned} f_0''(x)+\frac{A'(x)}{A(x)}f_0'(x)=c_0\,f_0(x),\qquad f_0(0)=1,\qquad f_0'(0)=0 \end{aligned}$$
(8)

and

$$\begin{aligned} f_{\alpha }''(x)+\frac{A'(x)}{A(x)}f_{\alpha }'(x)=\sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) c_{\beta }\,f_{\alpha -\beta }(x),\qquad f_{\alpha }(0)=0,\qquad f_{\alpha }'(0)=0 \end{aligned}$$
(9)

holds for each positive x and for every nonzero \(\alpha \) in \(\mathbb {N}^r\).

Proof

First we prove the sufficiency. If the functions \(f_{\alpha }:\mathbb {R}_0\rightarrow \mathbb {C}\) \((\alpha \in \mathbb {N}^r)\) satisfy the conditions (8) and (9), then \(f_0\) is an exponential function, by Theorem 2, hence \(f_0(x*y)=f_0(x)f_0(y)\) holds for all nonnegative numbers x and y. We show that equation (1) holds for all \(\alpha \) in \(\mathbb {N}^r\), that is, the function

$$\begin{aligned} h(x,y)=\sum _{\beta \le \alpha } \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \,f_{\beta }(x)f_{\alpha -\beta }(y) \end{aligned}$$

is a solution of the boundary value problem (6). The differential equation in (6), applied to h, is equivalent to

$$\begin{aligned}&\sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \, f''_{\beta }(x)f_{\alpha -\beta }(y)+\frac{A'(x)}{A(x)}\sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \,f'_{\beta }(x)f_{\alpha -\beta }(y)\\&\quad =\sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \, f_{\beta }(x)f''_{\alpha -\beta }(y)+\frac{A'(y)}{A(y)}\sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \,f_{\beta }(x)f'_{\alpha -\beta }(y), \end{aligned}$$

which is equivalent to

$$\begin{aligned} \sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \,\Bigl ( f''_{\beta }(x)+\frac{A'(x)}{A(x)}\,f'_{\beta }(x)\Bigr )f_{\alpha -\beta }(y)= \sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \,\Bigl ( f''_{\alpha -\beta }(y)+\frac{A'(y)}{A(y)}\,f'_{\alpha -\beta }(y)\Bigr )f_{\beta }(x) \end{aligned}$$

that is, to

$$\begin{aligned} \sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \,\sum _{\gamma \le \beta }\left( {\begin{array}{c}\beta \\ \gamma \end{array}}\right) \, c_{\gamma } f_{\beta -\gamma }(x)f_{\alpha -\beta }(y)= \sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \,\sum _{\gamma \le \alpha -\beta }\left( {\begin{array}{c}\alpha -\beta \\ \gamma \end{array}}\right) c_{\gamma } f_{\alpha -\beta -\gamma }(y)f_{\beta }(x). \end{aligned}$$

But this equation holds true since, with the substitution \(\delta =\beta +\gamma \), the right hand side is equal to

$$\begin{aligned} \sum _{\delta \le \alpha }\sum _{\gamma \le \delta }\left( {\begin{array}{c}\alpha \\ \delta -\gamma \end{array}}\right) \left( {\begin{array}{c}\alpha -(\delta -\gamma )\\ \gamma \end{array}}\right) \, c_{\gamma }f_{\alpha -\delta }(y)f_{\delta -\gamma }(x), \end{aligned}$$

which is obviously equal to the left hand side. Moreover, the boundary conditions in (6) are also satisfied, as

$$\begin{aligned} \partial _1 h(0,y)=\sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \,f'_{\beta }(0)f_{\alpha -\beta }(y)=0, \end{aligned}$$

and

$$\begin{aligned} h(0,y)=\sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \,f_{\beta }(0)f_{\alpha -\beta }(y)=f_{\alpha }(y), \end{aligned}$$

and similarly \(\partial _2 h(x,0)=0\), and \(h(x,0)=f_{\alpha }(x)\), hence h is a solution of the boundary value problem, which implies, by uniqueness, that \(h(x,y)=f_\alpha (x*y)\).

Conversely, suppose that the family of continuous functions \(f_{\alpha }:\mathbb {R}_0\rightarrow \mathbb {C}\) \((\alpha \in \mathbb {N}^r)\) forms a moment function sequence of rank r. Then, by definition, \(f_0\) is an exponential, hence, by Theorem 2, it satisfies (8) with some complex number \(c_0\). Now we proceed by induction: we assume that (9) holds for the even \(C^{\infty }\)-functions \(f_{\alpha }\) with \(0<\vert \alpha \vert \le N\), where N is a natural number. Let \(\alpha \) be a multi-index with \(\vert \alpha \vert =N\) and let \(\alpha '=\alpha +(1,0,\dots ,0)\). We have

$$\begin{aligned} f_{\alpha '}(x*y)=\sum _{\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) f_{\beta }(x)f_{\alpha '-\beta }(y), \end{aligned}$$
(10)

and, by the definition of the hypergroup, this implies that

$$\begin{aligned}&\sum _{\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) f''_{\beta }(x)f_{\alpha '-\beta }(y)+ \frac{A'(x)}{A(x)}\sum _{\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) f'_\beta (x)f_{\alpha '-\beta }(y)\\&\quad =\sum _{\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) f_{\beta }(x)f''_{\alpha '-\beta }(y)+ \frac{A'(y)}{A(y)}\sum _{\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) f_\beta (x)f'_{\alpha '-\beta }(y). \end{aligned}$$

Rearranging the terms we have

$$\begin{aligned}&\Bigl (f''_{\alpha '}(x)+\frac{A'(x)}{A(x)}f_{\alpha '}'(x)\Bigr )f_0(y)+ \sum _{\beta <\alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) \Bigl (f''_{\beta }(x)+\frac{A'(x)}{A(x)}f'_\beta (x)\Bigr )f_{\alpha '-\beta }(y)\\&\quad =\Bigl (f''_{\alpha '}(y)+\frac{A'(y)}{A(y)}f_{\alpha '}'(y)\Bigr )f_0(x)+ \sum _{0<\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) \Bigl (f''_{\alpha '-\beta }(y)+ \frac{A'(y)}{A(y)}f'_{\alpha '-\beta }(y)\Bigr )f_{\beta }(x). \end{aligned}$$

Therefore, by the induction hypothesis

$$\begin{aligned}&\Bigl (f''_{\alpha '}(x)+\frac{A'(x)}{A(x)}f_{\alpha '}'(x)\Bigr )f_0(y)+ \sum _{\beta <\alpha '}\sum _{\gamma \le \beta }\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) \left( {\begin{array}{c}\beta \\ \gamma \end{array}}\right) c_{\gamma }\,f_{\beta -\gamma }(x)f_{\alpha '-\beta }(y)\\&\quad =\Bigl (f''_{\alpha '}(y)+\frac{A'(y)}{A(y)}f_{\alpha '}'(y)\Bigr )f_0(x)+ \sum _{0<\beta \le \alpha '}\sum _{\gamma \le \alpha '-\beta }\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) \left( {\begin{array}{c}\alpha '-\beta \\ \gamma \end{array}}\right) c_{\gamma }\,f_{\beta }(x)f_{\alpha '-\beta -\gamma }(y). \end{aligned}$$

In other words

$$\begin{aligned}&\Bigl (f''_{\alpha '}(x)+\frac{A'(x)}{A(x)}f_{\alpha '}'(x)\Bigr )f_0(y)\\&\qquad +\sum _{\beta <\alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) c_\beta \,f_{0}(x)f_{\alpha '-\beta }(y)+\sum _{\beta <\alpha '}\sum _{\gamma <\beta }\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) \left( {\begin{array}{c}\beta \\ \gamma \end{array}}\right) c_{\gamma }\,f_{\beta -\gamma }(x)f_{\alpha '-\beta }(y)\\&\quad =\Bigl (f''_{\alpha '}(y)+\frac{A'(y)}{A(y)}f_{\alpha '}'(y)\Bigr )f_0(x)\\&\qquad +\sum _{0<\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) c_{\alpha '-\beta }\,f_{\beta }(x)f_{0}(y)+\sum _{0<\beta \le \alpha '}\sum _{\gamma <\alpha '-\beta }\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) \left( {\begin{array}{c}\alpha '-\beta \\ \gamma \end{array}}\right) c_{\gamma }\,f_{\beta }(x)f_{\alpha '-\beta -\gamma }(y). \end{aligned}$$

It is easy to see that the last terms on the two sides are equal. This means that

$$\begin{aligned}&\Bigl (f''_{\alpha '}(x)+\frac{A'(x)}{A(x)}f_{\alpha '}'(x)-\sum _{0<\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) c_{\alpha '-\beta }\,f_{\beta }(x)\Bigr )f_0(y)\\&\quad =\Bigl (f''_{\alpha '}(y)+\frac{A'(y)}{A(y)}f_{\alpha '}'(y)-\sum _{\beta <\alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) c_\beta f_{\alpha '-\beta }(y)\Bigr )f_0(x) \end{aligned}$$

holds for each positive x and y, hence there exists a complex number \(c_{\alpha '}\) such that

$$\begin{aligned} f''_{\alpha '}(x)+\frac{A'(x)}{A(x)}f_{\alpha '}'(x)-\sum _{0<\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) c_{\alpha '-\beta }\,f_{\beta }(x)=c_{\alpha '}f_0(x). \end{aligned}$$

This proves the differential equation in (9) for \(\alpha '=\alpha +(1,0,\dots ,0)\). Similarly one can show that it holds also for \(\alpha '=\alpha +(0,1,0,\dots ,0)\), ..., \(\alpha '=\alpha +(0,0,\dots ,0,1)\). As a consequence of (10) we also have \(f_{\alpha '}(0)=0\), and, since \(\partial _2f_{\alpha '}(x*y)\big \vert _{y=0}=0\), differentiating (10) with respect to y at \(y=0\) gives

$$\begin{aligned} 0=\sum _{\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) f_\beta (x)f_{\alpha '-\beta }'(0) =f_0(x)f'_{\alpha '}(0) \end{aligned}$$

we get that \(f'_{\alpha '}(0)=0\). Hence (9) holds for \(\alpha '\) and the theorem is proved by induction.\(\square \)
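Theorem 3 can be illustrated in the same formal special case \(A\equiv 1\), where the convolution acts on even functions by \(g(x*y)=\tfrac{1}{2}\bigl (g(x+y)+g(x-y)\bigr )\). The symbols k (standing for \(\sqrt{c_0}\)) and c1 are assumptions of the sketch:

```python
from sympy import symbols, sqrt, cosh, sinh, diff, simplify, exp

x, y = symbols('x y')
k, c1 = symbols('k c1')            # k stands for sqrt(c_0); both are free parameters

# With A(x) = 1 we have A'/A = 0, so (8) gives f_0(x) = cosh(k*x) with
# c_0 = k**2, and (9) for alpha = 1 reads
#   f_1'' = c_0 f_1 + c_1 f_0,   f_1(0) = f_1'(0) = 0,
# which is solved by f_1(x) = c1 * x * sinh(k*x) / (2*k).
f0 = cosh(k * x)
f1 = c1 * x * sinh(k * x) / (2 * k)

# the ODE of (9) and the initial conditions
assert simplify(diff(f1, x, 2) - k**2 * f1 - c1 * f0) == 0
assert f1.subs(x, 0) == 0
assert simplify(diff(f1, x).subs(x, 0)) == 0

# the moment equation f_1(x*y) = f_1(x) f_0(y) + f_0(x) f_1(y); for this
# hypergroup g(x*y) = (g(x+y) + g(x-y)) / 2, and f_0, f_1 are even in x
def star(g):
    return (g.subs(x, x + y) + g.subs(x, x - y)) / 2

residual = star(f1) - (f1 * f0.subs(x, y) + f0 * f1.subs(x, y))
assert simplify(residual.rewrite(exp).expand()) == 0
print("f_1 solves (9) and the rank-one moment equation")
```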

The next theorem uses the notion of an exponential family on a hypergroup. For the formal definition and properties see [11].

Theorem 4

Let \(K=(\mathbb {R}_0,A)\) be the Sturm–Liouville hypergroup corresponding to the Sturm–Liouville function A with the exponential family \(\varphi \) and let r be a positive integer. The family of continuous functions \(f_{\alpha }:\mathbb {R}_0\rightarrow \mathbb {C}\) \((\alpha \in \mathbb {N}^r)\) forms a moment function sequence of rank r on the hypergroup K if and only if there are complex numbers \(c_\alpha \) for \(\alpha \in \mathbb {N}^r\) such that

$$\begin{aligned} f_\alpha (x)=\partial ^{\alpha }\varphi \big (x,f(t)\big )\big \vert _{t=0} \end{aligned}$$
(11)

holds for each x in \(\mathbb {R}_0\), where

$$\begin{aligned} f(t)=\sum _{\alpha \in \mathbb {N}^r}\,c_\alpha \frac{t^\alpha }{\alpha !} \end{aligned}$$

for each t in \(\mathbb {R}^r\).

As we noted after Theorem 1, although here f is defined by a formal power series, formula (11) uses only its coefficients, regardless of convergence.

Proof

Let N be a natural number and let \(\alpha \) be in \(\mathbb {N}^r\) with \(\vert \alpha \vert \le N\), further let \(\varphi \) be the exponential family of the hypergroup K, and let f be the function defined above. We shall use \(\partial _x\), resp. \(\partial _t\) for differentiation with respect to the variable x, resp. t. If we substitute \(\lambda =f(t)\) in (7), so that \(m=\varphi \big (\cdot ,f(t)\big )\), and apply \(\partial _t^\alpha \) on both sides, we obtain

$$\begin{aligned} \partial _x^2\partial _t^\alpha \varphi \big (x,f(t)\big )+\frac{A'(x)}{A(x)}\partial _x\partial _t^\alpha \varphi \big (x,f(t)\big )= \sum _{\beta \le \alpha } \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \, \partial _t^\beta f(t)\partial _t^{\alpha -\beta }\varphi \big (x,f(t)\big ). \end{aligned}$$
(12)

Taking \(t=0\) and writing \(f_\alpha (x)=\partial _t^\alpha \varphi \big (x,f(t)\big )\big \vert _{t=0}\), we obtain

$$\begin{aligned} f_\alpha ''(x)+\frac{A'(x)}{A(x)}f_\alpha '(x)=\sum _{\beta \le \alpha }\,\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) c_\beta \,f_{\alpha -\beta }(x), \end{aligned}$$

furthermore \(f_0(0)=1,\) \(f'_0(0)=0,\) and \(f_\alpha (0)=0,\) \(f'_\alpha (0)=0\) for \(\alpha > 0\). This means that all the conditions of Theorem 3 are satisfied, hence the family \((f_{\alpha })_{\vert \alpha \vert \le N}\) forms a finite moment function sequence of rank r. As this holds for each N, the sufficiency part of the theorem is proved.

To prove the converse we assume that the family of functions \(f_\alpha \) forms a moment function sequence of rank r, and we prove the necessity part by induction. It is obvious that the statement is true for \(f_0\), and we suppose that we have proved \(f_\alpha (x)=\partial _t^\alpha \varphi \big (x,f(t)\big )\big \vert _{t=0}\) for \(\vert \alpha \vert \le N\), where N is a natural number. Let \(\alpha \) be a multi-index with \(\vert \alpha \vert =N\), and let \(\alpha '=\alpha +(1,0,\dots ,0)\). We consider the function

$$\begin{aligned} g(x)=f_{\alpha '}(x)-\partial _t^{\alpha '}\varphi \big (x,f(t)\big )\big \vert _{t=0} \end{aligned}$$

for each positive x. Then the expression \(g''(x)+\frac{A'(x)}{A(x)}g'(x)\) is equal to

$$\begin{aligned} f''_{\alpha '}(x)+\frac{A'(x)}{A(x)}f_{\alpha '}'(x)-\partial _x^2\partial _t^{\alpha '}\varphi (x,f(t))\big \vert _{t=0} -\frac{A'(x)}{A(x)}\partial _x\partial _t^{\alpha '}\varphi \big (x,f(t)\big )\big \vert _{t=0}, \end{aligned}$$

and, using Theorem 3 and (12), we get

$$\begin{aligned}&c_0f_{\alpha '}(x)+\sum _{0<\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) c_\beta \partial _t^{\alpha '-\beta }\varphi \big (x,f(t)\big )\big \vert _{t=0}\\&\quad -\sum _{\beta \le \alpha '}\left( {\begin{array}{c}\alpha '\\ \beta \end{array}}\right) \, c_\beta \partial _t^{\alpha '-\beta }\varphi \big (x,f(t)\big )\big \vert _{t=0}=c_0\,g(x). \end{aligned}$$

The same computation applies to \(\alpha '=\alpha +(0,1,0,\dots ,0)\), ..., \(\alpha '=\alpha +(0,0,\dots ,0,1)\). Consequently

$$\begin{aligned} g''(x)+\frac{A'(x)}{A(x)}g'(x)=c_0\,g(x),\quad g(0)=0,\quad g'(0)=0, \end{aligned}$$

hence \(g(x)\equiv 0\) and the proof is complete.\(\square \)
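Finally, formula (11) can be checked against the ODE description of Theorem 3 in the same formal case \(A\equiv 1\), where the exponential family is assumed, for this illustration only, to have the closed form \(\varphi (x,\lambda )=\cosh (\sqrt{\lambda }\,x)\):

```python
from sympy import symbols, sqrt, cosh, sinh, diff, simplify

x, t = symbols('x t')
c0, c1 = symbols('c0 c1', positive=True)   # positivity only eases simplification

# With A(x) = 1, the exponential family is phi(x, lambda) = cosh(sqrt(lambda)*x)
# (closed form assumed for this illustration).  Formula (11) with the truncated
# series f(t) = c0 + c1*t gives f_1 for alpha = 1:
phi = lambda lam: cosh(sqrt(lam) * x)
f = c0 + c1 * t

f1 = diff(phi(f), t).subs(t, 0)

# this agrees with the solution of (9) for alpha = 1 obtained via Theorem 3
expected = c1 * x * sinh(sqrt(c0) * x) / (2 * sqrt(c0))
assert simplify(f1 - expected) == 0
print("formula (11) reproduces the solution of (9) for alpha = 1")
```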