1 Introduction

A hypergroup is a locally compact Hausdorff space X equipped with an involution and a convolution operation defined on the space of all bounded complex regular measures on X. For the formal definition, historical background, and basic facts about hypergroups we refer to [1]. In this paper X denotes a locally compact hypergroup with identity element o, an involution, and convolution \(*\). Strictly speaking, it is the quadruple formed by the space, the convolution, the involution and the identity that should be called a hypergroup, but for the sake of simplicity we shall call X a hypergroup. We shall consider commutative hypergroups only, hence we always suppose that X is a locally compact commutative hypergroup.

Given x in X, we denote by \(\delta _x\) the point mass supported on the singleton \(\{x\}\); it is a probability measure on X, and so is \(\delta _x*\delta _y\) whenever x, y are in X. For a continuous function \(h:X\rightarrow \mathbb {C}\) the integral

$$\begin{aligned} \int _X h(t)d(\delta _x*\delta _y)(t) \end{aligned}$$

will be denoted by \(h(x*y)\). Clearly, \(h(x*y)\) is the mathematical expectation of the random variable h on the probability space \((X,\mathcal B, \delta _x*\delta _y)\), \(\mathcal B\) being the \(\sigma \)-algebra of all Borel subsets of X. Given y in X, the function \(x\mapsto h(x*y)\) is the translate of h by y. A comprehensive monograph on the subject is [3].
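As a concrete illustration of translation (the particular hypergroup is our choice, for illustration only): on the Chebyshev polynomial hypergroup on the nonnegative integers one has \(\delta _x*\delta _y=\frac{1}{2}\bigl (\delta _{|x-y|}+\delta _{x+y}\bigr )\), so each translate \(h(x*y)\) is a two-point average. A minimal sketch:

```python
# Translation on the Chebyshev polynomial hypergroup on the nonnegative
# integers (illustrative choice): delta_x * delta_y = (delta_{|x-y|} + delta_{x+y}) / 2,
# so h(x*y) is the expectation of h under this two-point probability measure.

def convolve_point_masses(x: int, y: int) -> dict:
    """Return delta_x * delta_y as a dict {point: mass}."""
    measure = {}
    for point in (abs(x - y), x + y):
        measure[point] = measure.get(point, 0.0) + 0.5
    return measure

def translate(h, x: int, y: int) -> float:
    """h(x*y): the integral of h against delta_x * delta_y."""
    return sum(mass * h(t) for t, mass in convolve_point_masses(x, y).items())

print(translate(lambda n: n * n, 2, 3))  # (h(1) + h(5)) / 2 -> 13.0
```

The same two functions work verbatim on any discrete hypergroup once `convolve_point_masses` is replaced by the appropriate convolution.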

A set of continuous complex valued functions on X is called translation invariant if it contains all translates of its elements. A translation invariant linear subspace of the space of all continuous complex valued functions is called a variety if it is closed with respect to uniform convergence on compact sets. The smallest variety containing a given function h is called the variety of h, and is denoted by \(\tau (h)\); clearly, it is the intersection of all varieties containing h. A continuous complex valued function is called an exponential polynomial if its variety is finite dimensional. The simplest nonzero exponential polynomials are those with a one dimensional variety: such a variety consists of all constant multiples of a nonzero continuous function. If we normalize that function so that it takes the value 1 at o, then we arrive at the concept of an exponential. Recall that m is an exponential on X if it is a non-identically zero continuous complex valued function satisfying \(m(x*y)=m(x) m(y)\) for each x, y in X, and s is an m-sine function on X if it is a continuous complex valued function fulfilling \(s(x*y)=s(x)m(y)+m(x)s(y)\) for each x, y in X.
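Both functional equations are easy to test numerically. The sketch below uses the Chebyshev hypergroup on \(\mathbb {N}\) with \(\delta _x*\delta _y=\frac{1}{2}(\delta _{|x-y|}+\delta _{x+y})\) and the particular functions \(m(n)=\cos n\theta \), \(s(n)=n\sin n\theta \); all of these choices are ours, for illustration:

```python
import math

# Chebyshev hypergroup on N (illustrative): h(x*y) = (h(|x-y|) + h(x+y)) / 2.
THETA = 0.7  # arbitrary parameter; each theta yields its own exponential

def translate(h, x, y):
    return 0.5 * (h(abs(x - y)) + h(x + y))

def m(n):
    return math.cos(n * THETA)       # candidate exponential

def s(n):
    return n * math.sin(n * THETA)   # candidate m-sine function

def max_defect(xs=range(8)):
    """Largest violation of the exponential and m-sine equations on xs x xs."""
    exp_d = max(abs(translate(m, x, y) - m(x) * m(y)) for x in xs for y in xs)
    sin_d = max(abs(translate(s, x, y) - (s(x) * m(y) + m(x) * s(y)))
                for x in xs for y in xs)
    return max(exp_d, sin_d)

print(max_defect())  # both defects vanish up to rounding error
```

Note that \(m(0)=1\), as required by the normalization at the identity.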

By the commutativity of the hypergroup, every nonzero finite dimensional variety contains an exponential. In accordance with [4], an exponential polynomial is called an m-exponential monomial if m is the only exponential contained in its variety. Clearly, m itself is an m-exponential monomial. We define the degree of exponential monomials recursively as follows: exponential monomials with a one dimensional variety have degree 0, and the degree of an m-exponential monomial \(\varphi \) is \(n\ge 1\) if the degree of the exponential monomial \(x\mapsto \varphi (x*y)-m(y)\varphi (x)\) is \(n-1\). For instance, nonzero m-sine functions have degree 1.

For any nonnegative integer N the continuous function \(\varphi :X \rightarrow \mathbb {C}\) is called a generalized moment function of order N, if there exist complex valued continuous functions \(\varphi _k:X \rightarrow \mathbb {C}\) such that \(\varphi _N=\varphi \) and

$$\begin{aligned} \varphi _k(x*y)=\sum _{j=0}^k {k \atopwithdelims ()j}\varphi _j(x)\varphi _{k-j}(y) \end{aligned}$$

holds for all \(k=0,1,\dots , N\) and for all x, y in X. We say that the functions \((\varphi _k)_{k\in \left\{ 0,1,\dots ,N \right\} }\) form a generalized moment function sequence of order N. For the sake of simplicity, in this paper we omit the adjective “generalized” and refer simply to moment functions and moment function sequences. We note that in [2], a more general concept of moment function sequences was introduced.
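As a concrete example of a moment function sequence (our construction, for illustration): on the Chebyshev hypergroup on \(\mathbb {N}\), the exponentials \(m_\theta (n)=\cos n\theta \) depend smoothly on the parameter \(\theta \), and differentiating \(m_\theta (x*y)=m_\theta (x)m_\theta (y)\) k times with respect to \(\theta \) yields precisely the identity above, by the Leibniz rule. The sketch checks this for \(N=2\):

```python
import math
from math import comb

# Chebyshev hypergroup (illustrative): h(x*y) = (h(|x-y|) + h(x+y)) / 2.
# Successive theta-derivatives of the exponential cos(n*theta):
#   phi_0(n) = cos(n t), phi_1(n) = -n sin(n t), phi_2(n) = -n^2 cos(n t).
T = 0.4  # the parameter value at which we differentiate (arbitrary)

PHI = [
    lambda n: math.cos(n * T),
    lambda n: -n * math.sin(n * T),
    lambda n: -n * n * math.cos(n * T),
]

def translate(h, x, y):
    return 0.5 * (h(abs(x - y)) + h(x + y))

def moment_defect(k, xs=range(7)):
    """Largest violation of the order-k moment identity on xs x xs."""
    return max(abs(translate(PHI[k], x, y)
                   - sum(comb(k, j) * PHI[j](x) * PHI[k - j](y)
                         for j in range(k + 1)))
               for x in xs for y in xs)

print(max(moment_defect(k) for k in range(3)))  # vanishes up to rounding
```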

Observe that \(\varphi _0\) is an exponential on the hypergroup X. We say that \(\varphi _0\) generates the given moment function sequence of order N, and that the moment functions in this sequence correspond to \(\varphi _0\). Clearly, a moment function of order 1 corresponding to the exponential m is an m-sine function. Given the exponential m, all m-sine functions form a linear space.

Important examples of exponential monomials are provided by the moment functions. Clearly, every moment function corresponding to the exponential m is an m-exponential monomial. In particular, if the order of a generalized moment function is N, then it is an exponential monomial of degree at most N.

Exponential monomials are the basic building blocks of spectral synthesis. We say that a variety is synthesizable if the exponential monomials in the variety span a dense subspace. We say that spectral synthesis holds for a variety if every subvariety of it is synthesizable. If every variety on X is synthesizable, then we say that spectral synthesis holds on X. Clearly, spectral synthesis holds for finite dimensional varieties on every commutative hypergroup.

2 The main result

The above notions suggest that generalized moment functions may play a fundamental role in the theory of spectral analysis and spectral synthesis on commutative hypergroups. In our former paper [2], we described generalized moment functions on commutative groups using Bell polynomials, even in the higher rank case. In fact, the notion of an exponential monomial is not easy to handle compared with that of a generalized moment function: the functional equations characterizing generalized moment functions are more convenient than those for exponential monomials. Therefore it might be fruitful to know in which situations exponential monomials can be expressed in terms of generalized moment functions. In this work we initiate the study of this problem on commutative hypergroups. The statement below is the first step in this direction.

Theorem 2.1

Let X be a commutative hypergroup with identity o. Let \(m:X\rightarrow \mathbb {C}\) be an exponential, and \(\varphi :X\rightarrow \mathbb {C}\) an m-exponential monomial. If the linear subspace of the variety \(\tau (\varphi )\) of \(\varphi \) consisting of all m-sine functions is one dimensional, then \(\tau (\varphi )\) is generated by generalized moment functions.


Proof

Suppose that \(\varphi _0,\varphi _1,\dots ,\varphi _n\) is a basis of \(\tau (\varphi )\) such that \(\varphi _0=m\), \(\varphi _1=s\) is a nonzero m-sine function, \(\varphi _n=\varphi \), and the degrees of these basis functions are increasing with respect to their subscripts; in other words, we suppose that the mapping \(k\mapsto \deg \varphi _{k}\) is increasing.

Then we can write for \(k=1,2,\dots ,n+1\):

$$\begin{aligned} \varphi _{n+1-k}(x*y)=\sum _{i=1}^{n+1} c_{k,i}(y)\varphi _{n+1-i}(x) \end{aligned}$$

for each x, y in X. As the function \(k\mapsto \deg \varphi _k\) is increasing, it follows that the matrix \(\bigl (c_{k,i}(y)\bigr )\) is upper triangular for each y (i.e. \(c_{k,i}=0\) for \(k>i\)), and \(c_{i,i}=m\) for \(i=1,2,\dots ,n+1\). Furthermore,

$$\begin{aligned} \varphi _{n+1-k}\bigl ((x*y)*z\bigr )= & {} \sum _{i=1}^{n+1} c_{k,i}(z)\varphi _{n+1-i}(x*y)\\= & {} \sum _{i=1}^{n+1} \sum _{j=1}^{n+1} c_{k,i}(z)c_{i,j}(y)\varphi _{n+1-j}(x), \end{aligned}$$


$$\begin{aligned} \varphi _{n+1-k}\bigl (x*(y*z)\bigr )=\sum _{j=1}^{n+1} c_{k,j}(y*z)\varphi _{n+1-j}(x)=\sum _{j=1}^{n+1} c_{k,j}(z*y)\varphi _{n+1-j}(x). \end{aligned}$$

Hence, by associativity

$$\begin{aligned} c_{k,j}(z*y)=\sum _{i=1}^{n+1}c_{k,i}(z)c_{i,j}(y) \end{aligned}$$

for each y, z in X. If \(C:X\rightarrow \mathbb {C}^{(n+1)\times (n+1)}\) is the matrix function defined by \(C(x)=\bigl (c_{i,j}(x)\bigr )\), then we have

$$\begin{aligned} C(x*y)=C(x)C(y),\quad \text {and}\quad C(o)=I, \end{aligned}$$

where I is the \((n+1)\times (n+1)\) identity matrix. Since

$$\begin{aligned} c_{i,i+1}(x*y)=m(x)c_{i,i+1}(y)+c_{i,i+1}(x)m(y), \end{aligned}$$

it follows that \(c_{i,i+1}\) is an m-sine function for each \(i=1,2,\dots ,n\).
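The matrix identity \(C(x*y)=C(x)C(y)\) can be checked numerically in a small concrete case. All choices below are ours, for illustration: the Chebyshev hypergroup on \(\mathbb {N}\) with \(h(x*y)=\frac{1}{2}(h(|x-y|)+h(x+y))\), \(m(n)=\cos n\theta \), \(s(n)=-n\sin n\theta \), and \(c_{1,3}(n)=-\frac{1}{2}n^{2}\cos n\theta \):

```python
import math

T = 0.9  # arbitrary parameter of the exponential

def m(n):   return math.cos(n * T)
def s(n):   return -n * math.sin(n * T)              # an m-sine function
def c13(n): return -0.5 * n * n * math.cos(n * T)    # the (1,3) entry

def C(n):
    """Upper triangular 3x3 matrix C(n) with m on the diagonal."""
    return [[m(n), s(n), c13(n)],
            [0.0,  m(n), s(n)],
            [0.0,  0.0,  m(n)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def C_translate(x, y):
    """C(x*y): entrywise translate (C(|x-y|) + C(x+y)) / 2."""
    A, B = C(abs(x - y)), C(x + y)
    return [[0.5 * (A[i][j] + B[i][j]) for j in range(3)] for i in range(3)]

def rep_defect(xs=range(7)):
    """Largest entrywise violation of C(x*y) = C(x)C(y)."""
    return max(abs(C_translate(x, y)[i][j] - matmul(C(x), C(y))[i][j])
               for x in xs for y in xs for i in range(3) for j in range(3))

print(rep_defect())  # vanishes up to rounding; also C(0) is the identity
```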

We prove the statement by induction on the dimension n of \(\tau (\varphi )\). First we consider the cases \(n=1,2,3,4\) separately, and then we perform the induction step for \(n\ge 4\).

For \(n=1\) the statement is trivial, since a one dimensional variety consists of the constant multiples of an exponential, which is a generalized moment function.

For \(n=2\) the statement is obvious, because a two dimensional variety consists of the linear combinations of an exponential m and an m-sine function, which are generalized moment functions.

For \(n=3\) we have the system of equations

$$\begin{aligned} \varphi _2(x*y)= & {} c_{1,1}(y)\varphi _2(x)+c_{1,2}(y)\varphi _1(x)+c_{1,3}(y)\varphi _0(x)\\ \varphi _1(x*y)= & {} c_{2,1}(y)\varphi _2(x)+c_{2,2}(y)\varphi _1(x)+c_{2,3}(y)\varphi _0(x)\\ \varphi _0(x*y)= & {} c_{3,1}(y)\varphi _2(x)+c_{3,2}(y)\varphi _1(x)+c_{3,3}(y)\varphi _0(x), \end{aligned}$$

where the c’s are continuous complex valued functions in \(\tau (\varphi )\). Clearly, \(c_{2,1}, c_{3,1}, c_{3,2}\) are zero, and we have \(c_{1,1}=c_{2,2}=c_{3,3}=m\). Hence the above system can be written as

$$\begin{aligned} \varphi _2(x*y)= & {} c_{1,1}(y)\varphi _2(x)+c_{1,2}(y)\varphi _1(x)+c_{1,3}(y)\varphi _0(x)\\ \varphi _1(x*y)= & {} c_{2,2}(y)\varphi _1(x)+c_{2,3}(y)\varphi _0(x)\\ \varphi _0(x*y)= & {} c_{3,3}(y)\varphi _0(x). \end{aligned}$$

If \(C(x)=\bigl (c_{i,j}(x)\bigr )\), then we have \(C(x*y)=C(x)C(y)\), and we can write

$$\begin{aligned} C= \begin{pmatrix} m&{}\quad c_{1,2}&{}\quad c_{1,3}\\ 0&{}\quad m&{}\quad c_{2,3}\\ 0&{}\quad 0&{}\quad m \end{pmatrix}. \end{aligned}$$

It follows that \(c_{1,2}\) and \(c_{2,3}\) are m-sine functions, hence, since the m-sine functions in \(\tau (\varphi )\) form a one dimensional space spanned by s, we have \(c_{1,2}=\alpha _{1,2}s\) and \(c_{2,3}=\alpha _{2,3}s\) with some complex numbers \(\alpha _{1,2}, \alpha _{2,3}\). Then C has the following form:

$$\begin{aligned} C= \begin{pmatrix} m&{}\quad \alpha _{1,2}s&{}\quad c_{1,3}\\ 0&{}\quad m&{}\quad \alpha _{2,3}s\\ 0&{}\quad 0&{}\quad m \end{pmatrix}. \end{aligned}$$

As the first row of C generates \(\tau (\varphi )\), the functions \(m, c_{1,2}, c_{1,3}\) are linearly independent. It follows that \(\alpha _{1,2}\ne 0\). By the equation for \(\varphi _1(x*y)\) above, it follows that \(c_{2,3}=s\), hence \(\alpha _{2,3}=1\ne 0\). We have

$$\begin{aligned} c_{1,3}(x*y)=m(x)c_{1,3}(y)+\alpha _{1,2} \alpha _{2,3} s(x)s(y)+c_{1,3}(x)m(y), \end{aligned}$$

and we conclude that \(c_{1,1}, \frac{1}{\alpha _{1,2}}c_{1,2}, \frac{2}{\alpha _{1,2} \alpha _{2,3}}c_{1,3}\) form a generalized moment function sequence of order 2. This proves our statement for \(n=3\).

Now we prove the statement for \(n=4\). In that case the above notation will be modified as

$$\begin{aligned} \varphi _3(x*y)= & {} c_{1,1}(y)\varphi _3(x)+c_{1,2}(y)\varphi _2(x)+c_{1,3}(y)\varphi _1(x)+c_{1,4}(y)\varphi _0(x)\\ \varphi _2(x*y)= & {} c_{2,2}(y)\varphi _2(x)+c_{2,3}(y)\varphi _1(x)+c_{2,4}(y)\varphi _0(x)\\ \varphi _1(x*y)= & {} c_{3,3}(y)\varphi _1(x)+c_{3,4}(y)\varphi _0(x)\\ \varphi _0(x*y)= & {} c_{4,4}(y)\varphi _0(x), \end{aligned}$$


$$\begin{aligned} C= \begin{pmatrix} m&{}\quad \alpha _{1,2}s&{}\quad c_{1,3}&{}\quad c_{1,4}\\ 0&{}\quad m&{}\quad \alpha _{2,3}s&{}\quad c_{2,4}\\ 0&{}\quad 0&{}\quad m&{}\quad \alpha _{3,4}s\\ 0&{}\quad 0&{}\quad 0&{}\quad m \end{pmatrix}, \end{aligned}$$

where the c’s are continuous complex valued functions in \(\tau (\varphi )\). Here again, the functions in the first row generate \(\tau (\varphi )\), hence they are linearly independent. Consequently, \(\alpha _{1,2}\ne 0\). On the other hand,

$$\begin{aligned} c_{1,3}(x*y)=m(x)c_{1,3}(y)+\alpha _{1,2} \alpha _{2,3} s(x)s(y)+c_{1,3}(x)m(y), \end{aligned}$$

hence \(\alpha _{2,3}\ne 0\): otherwise \(c_{1,3}\) would be an m-sine function, i.e. a constant multiple of s, contradicting the linear independence of the functions in the first row. Finally, the equation for \(\varphi _1(x*y)\) gives \(c_{3,4}=s\), hence \(\alpha _{3,4}=1\ne 0\). We conclude that the functions

$$\begin{aligned} c_{1,1}, \frac{1!}{\alpha _{1,2}}c_{1,2}, \frac{2!}{\alpha _{1,2}\alpha _{2,3}}c_{1,3}, \frac{3!}{\alpha _{1,2}\alpha _{2,3}\alpha _{3,4}}c_{1,4} \end{aligned}$$

form a generalized moment function sequence, which proves our statement for \(n=4\).

Suppose that the statement has been proved whenever the dimension is at most \(n\), where \(n\ge 4\); now we prove it for dimension \(n+1\). Our previous notation in this general situation takes the form

$$\begin{aligned} \varphi _n(x*y)= & {} c_{1,1}(y)\varphi _n(x)+\cdots +c_{1,n}(y)\varphi _1(x)+c_{1,n+1}(y)\varphi _0(x)\\&\vdots&\\ \varphi _2(x*y)= & {} c_{n-1,n-1}(y)\varphi _2(x)+c_{n-1,n}(y)\varphi _1(x)+c_{n-1,n+1}(y)\varphi _0(x)\\ \varphi _1(x*y)= & {} c_{n,n}(y)\varphi _1(x)+c_{n,n+1}(y)\varphi _0(x)\\ \varphi _0(x*y)= & {} c_{n+1,n+1}(y)\varphi _0(x), \end{aligned}$$


$$\begin{aligned} C= \begin{pmatrix} m&{}\quad \alpha _{1,2}s&{}\quad c_{1,3}&{}\quad \ldots &{}\quad c_{1,n}&{}\quad c_{1,n+1}\\ 0&{}\quad m&{}\quad \alpha _{2,3}s&{}\quad \ldots &{}\quad c_{2,n}&{}\quad c_{2,n+1}\\ 0&{}\quad 0&{}\quad m&{}\quad \ddots &{}\quad \vdots &{}\quad \vdots \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad c_{n-2,n}&{}\quad c_{n-2,n+1}\\ 0&{}\quad 0&{}\quad 0&{}\quad \ldots &{}\quad m&{}\quad \alpha _{n,n+1}s\\ 0&{}\quad 0&{}\quad 0&{}\quad \ldots &{}\quad 0&{}\quad m \end{pmatrix}. \end{aligned}$$

From the fact that the functions in the first row generate \(\tau (\varphi )\) we infer that they are linearly independent. The case of dimension n can be applied for the variety spanned by \(\varphi _0,\varphi _1,\dots ,\varphi _{n-1}\) to deduce that \(\alpha _{1,2}, \alpha _{2,3},\dots ,\alpha _{n-1,n}\) are different from zero. Finally, the equation for \(\varphi _1(x*y)\) above shows that

$$\begin{aligned} \varphi _1(x*y)=c_{n,n}(y)\varphi _1(x)+c_{n,n+1}(y)\varphi _0(x), \end{aligned}$$

that is

$$\begin{aligned} s(x*y)=m(y)s(x)+\alpha _{n,n+1}s(y)m(x), \end{aligned}$$

which implies \(\alpha _{n,n+1}=1\ne 0\). Consequently, all the \(\alpha \)’s are nonzero. Then we let \(f_0=m\) and for \(k=1,2,\dots ,n\)

$$\begin{aligned} f_k=\frac{k!}{\alpha _{1,2}\alpha _{2,3}\cdots \alpha _{k,k+1}}c_{1,k+1}. \end{aligned}$$

We show that \(f_0,f_1,\dots ,f_n\) form a generalized moment function sequence of order n on X. We have

$$\begin{aligned}&f_k(x*y)=\frac{k!}{\alpha _{1,2}\cdots \alpha _{k,k+1}}c_{1,k+1}(x*y) =\frac{k!}{\alpha _{1,2}\cdots \alpha _{k,k+1}} \sum _{j=1}^{k+1} c_{1,j}(x) c_{j,k+1}(y)\\&\quad = \frac{k!}{\alpha _{1,2}\cdots \alpha _{k,k+1}} \sum _{j=0}^{k} \frac{\alpha _{1,2}\cdots \alpha _{j,j+1}}{j!}f_j(x) c_{j+1,k+1}(y)\\&\qquad =\sum _{j=0}^{k} \frac{k!}{j!}\frac{1}{\alpha _{j+1,j+2}\cdots \alpha _{k,k+1}}f_j(x) c_{j+1,k+1}(y)\\&\quad = \sum _{j=0}^{k} \frac{k!}{j!}\frac{1}{\alpha _{j+1,j+2}\cdots \alpha _{k-1,k}}f_j(x) \frac{c_{j,k}(y)}{\alpha _{j,j+1}}\\&\qquad =\sum _{j=0}^{k} \frac{k!}{j!}\frac{1}{\alpha _{j,j+1}\cdots \alpha _{k-1,k}}f_j(x) c_{j,k}(y)\\&\quad = \sum _{j=0}^{k} \frac{k!}{j!}\frac{1}{\alpha _{j,j+1}\cdots \alpha _{k-2,k-1}}f_j(x) \frac{c_{j-1,k-1}(y)}{\alpha _{j-1,j}}\\&\qquad =\sum _{j=0}^{k} \frac{k!}{j!}\frac{1}{\alpha _{j-1,j}\cdots \alpha _{k-2,k-1}}f_j(x) c_{j-1,k-1}(y). \end{aligned}$$

Continuing this process we arrive at

$$\begin{aligned}&f_k(x*y)=\sum _{j=0}^{k} \frac{k!}{j!}\frac{1}{\alpha _{2,3}\alpha _{3,4}\cdots \alpha _{k-j,k-j+1}}f_j(x)\frac{c_{1,k-j+1}(y)}{\alpha _{1,2}}\\&\quad = \sum _{j=0}^{k} \frac{k!}{j!(k-j)!}\frac{(k-j)!}{\alpha _{1,2}\alpha _{2,3}\cdots \alpha _{k-j,k-j+1}}f_j(x)c_{1,k-j+1}(y)\\&\qquad = \sum _{j=0}^{k} \left( {\begin{array}{c}k\\ j\end{array}}\right) f_j(x)f_{k-j}(y), \end{aligned}$$

which proves the statement. \(\square \)

One may ask how restrictive the condition is that the m-sine functions in a variety form a one dimensional linear space. Of course, this requirement is quite restrictive, but there are still large, important classes of commutative hypergroups having this property; the condition expresses a kind of “one-dimensionality” of the hypergroup. These classes include all polynomial hypergroups in one variable, Sturm–Liouville hypergroups, etc., where, in fact, every finite dimensional variety has this property. For instance, on polynomial hypergroups we can apply our result as follows.

Corollary 2.2

Let X be a polynomial hypergroup associated with the sequence of polynomials \(\bigl (P_n\bigr )_{n\in \mathbb {N}}\). Then every complex valued function on \(\mathbb {N}\) (i.e. every complex sequence) is the pointwise limit of linear combinations of generalized moment functions on X.


Proof

By Theorem 6.7 in [3], spectral synthesis holds on every polynomial hypergroup. This means that in every variety the exponential polynomials span a dense subspace. If \(f:\mathbb {N}\rightarrow \mathbb {C}\) is any function, then we can apply this result to the variety \(\tau (f)\) of f. Consequently, to prove our statement it is enough to show that for each exponential m on X, all m-sine functions form a one dimensional linear space. Let m be an exponential on X. By Theorem 2.2 in [3], there exists a complex number \(\lambda \) such that \(m(n)=P_n(\lambda )\) holds for each n in \(\mathbb {N}\). On the other hand, by Theorem 2.5 in [3], every m-sine function s on X has the form \(s(n)=c P'_n(\lambda )\) with some complex number c. It follows that all m-sine functions form a one dimensional linear space, hence our statement follows from Theorem 2.1. \(\square \)
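The two cited facts can be illustrated numerically in the Chebyshev case \(P_n=T_n\) (our choice of polynomial hypergroup, with \(\delta _x*\delta _y=\frac{1}{2}(\delta _{|x-y|}+\delta _{x+y})\)): differentiating the product formula \(T_{x+y}+T_{|x-y|}=2T_xT_y\) with respect to \(\lambda \) shows that \(m(n)=T_n(\lambda )\) is an exponential and \(s(n)=T'_n(\lambda )\) is an m-sine function. A sketch:

```python
def cheb_and_deriv(n, lam):
    """Return (T_n(lam), T_n'(lam)) via the three-term recurrence
    T_{k+1} = 2*lam*T_k - T_{k-1} and its lam-derivative."""
    if n == 0:
        return 1.0, 0.0
    t_prev, t_cur = 1.0, lam   # T_0, T_1
    d_prev, d_cur = 0.0, 1.0   # T_0', T_1'
    for _ in range(n - 1):
        t_next = 2 * lam * t_cur - t_prev
        d_next = 2 * t_cur + 2 * lam * d_cur - d_prev
        t_prev, t_cur = t_cur, t_next
        d_prev, d_cur = d_cur, d_next
    return t_cur, d_cur

def defects(lam, xs=range(5)):
    """Largest violation of the exponential and m-sine equations."""
    m = lambda n: cheb_and_deriv(n, lam)[0]
    s = lambda n: cheb_and_deriv(n, lam)[1]
    tr = lambda h, x, y: 0.5 * (h(abs(x - y)) + h(x + y))
    exp_d = max(abs(tr(m, x, y) - m(x) * m(y)) for x in xs for y in xs)
    sin_d = max(abs(tr(s, x, y) - (s(x) * m(y) + m(x) * s(y)))
                for x in xs for y in xs)
    return max(exp_d, sin_d)

# Works both for |lam| <= 1 (bounded exponentials) and for |lam| > 1:
print(defects(0.3), defects(1.5))
```

The recurrence works verbatim for complex `lam` as well, matching the fact that \(\lambda \) may be any complex number.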

The following result can be obtained on Sturm–Liouville hypergroups. Here \(\mathbb {R}_0\) denotes the set of nonnegative reals.

Corollary 2.3

Let \(X=(\mathbb {R}_0,A)\) be the Sturm–Liouville hypergroup associated with the Sturm–Liouville function \(A:\mathbb {R}_0\rightarrow \mathbb {R}\). Let V be a synthesizable variety on X. Then every function in V is the uniform limit on compact sets of linear combinations of generalized moment functions on X.


Proof

Applying an argument similar to that in the proof of the previous corollary, it is enough to show that for each exponential m on X, the linear space of all m-sine functions in the variety of an arbitrary m-exponential monomial is one dimensional.

By Theorem 4.2 in [3], the function \(m:\mathbb {R}_0\rightarrow \mathbb {C}\) is an exponential if and only if it is twice continuously differentiable and there exists a complex number \(\lambda \) such that

$$\begin{aligned} m''(x)+\frac{A'(x)}{A(x)}m'(x)=\lambda m(x) \end{aligned}$$

holds for \(x>0\), further \(m(0)=1\), \(m'(0)=0\). Suppose that m is such an exponential. Then, by Theorem 4.5 in [3], the function \(s:\mathbb {R}_0\rightarrow \mathbb {C}\) is an m-sine function if and only if it is twice continuously differentiable and there exists a complex number c such that

$$\begin{aligned} s''(x)+\frac{A'(x)}{A(x)}s'(x)-\lambda s(x)=c m(x) \end{aligned}$$

holds for \(x>0\), further \(s(0)=0\), \(s'(0)=0\). Let \(s_0\) be the twice continuously differentiable function satisfying

$$\begin{aligned} s_0''(x)+\frac{A'(x)}{A(x)}s_0'(x)-\lambda s_0(x)=m(x) \end{aligned}$$

for \(x>0\), together with \(s_0(0)=0\), \(s_0'(0)=0\). It is known that this initial value problem has a unique solution, hence \(s_0\) is well defined. On the other hand, if s is any m-sine function, then there is a unique complex number c such that s satisfies the problem above with this c. However, \(c s_0\) also satisfies it, hence we infer \(s=c s_0\), and the proof is complete. \(\square \)
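The final uniqueness argument can be illustrated numerically. The sketch below takes \(A(x)=e^{2x}\), so that \(A'(x)/A(x)=2\) (a hedged, purely illustrative choice; we do not verify the Sturm–Liouville axioms for it), integrates the two initial value problems with a classical fixed-step RK4 scheme, and checks that the solution belonging to a general constant c is c times the solution \(s_0\) belonging to \(c=1\):

```python
LAM = 0.8   # the eigenvalue parameter lambda (arbitrary)
H = 1e-3    # RK4 step size

def rk4(deriv, state, x0, x1):
    """Fixed-step classical Runge-Kutta for state' = deriv(x, state)."""
    x, y = x0, list(state)
    for _ in range(int(round((x1 - x0) / H))):
        k1 = deriv(x, y)
        k2 = deriv(x + H / 2, [yi + H / 2 * ki for yi, ki in zip(y, k1)])
        k3 = deriv(x + H / 2, [yi + H / 2 * ki for yi, ki in zip(y, k2)])
        k4 = deriv(x + H, [yi + H * ki for yi, ki in zip(y, k3)])
        y = [yi + H / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += H
    return y

def solve(c, x1=2.0):
    """Solve m'' + 2m' = LAM*m, m(0)=1, m'(0)=0 jointly with
    s'' + 2s' - LAM*s = c*m, s(0)=s'(0)=0; return s(x1)."""
    def deriv(x, y):
        m, dm, s, ds = y
        return [dm, LAM * m - 2 * dm, ds, LAM * s - 2 * ds + c * m]
    return rk4(deriv, [1.0, 0.0, 0.0, 0.0], 0.0, x1)[2]

s0 = solve(1.0)
print(abs(solve(3.5) - 3.5 * s0))  # linearity + uniqueness: s_c = c * s_0
```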