1 Introduction

We consider a problem originating from the \(\ell ^1\)-summability of multivariate Fourier series. Let f be a \(2\pi \)-periodic function in \(L^2({{\mathbb {T}}}^d)\) and let \({{\hat{f}}}_{\alpha }\) be the Fourier coefficient of f with the multi-index \({\alpha }= ({\alpha }_1,\ldots , {\alpha }_d) \in {{\mathbb {Z}}}^d\). Let

$$\begin{aligned} S_n^{(1)} (f;{\theta })= \sum _{|{\alpha }| \le n} {{\hat{f}}}_{\alpha }\textrm{e}^{\textrm{i}{\alpha }\cdot {\theta }}, \quad {\theta }\in {{\mathbb {T}}}^d, \quad n \in {{\mathbb {N}}}_0, \end{aligned}$$

be the n-th \(\ell ^1\)-partial sum of its Fourier series, where \(|{\alpha }|=|{\alpha }|_1 = |{\alpha }_1|+ \cdots + |{\alpha }_d|\), so that the summation is over indices in the \(\ell ^1\)-ball of radius n. The \(\ell ^1\)-summability has been studied in [2, 5, 8,9,10,11], and it is closely related to the summability of Fourier series of orthogonal polynomials on the cube [12]. The Dirichlet kernel of \(S_n^{(1)}(f)\) turns out to be a divided difference, defined below, of the form

$$\begin{aligned} D_{n}^{(1)}({\theta }) = [\cos {\theta }_1, \ldots , \cos {\theta }_d] G_{n,d}, \qquad {\theta }\in {{\mathbb {T}}}^d, \end{aligned}$$

where \(G_{n,d}\) is a function of one variable as shown in [2, 12] (see (2.2) in the next section). The divided difference can be written as an integral with a Peano kernel; in particular, for a \((d-1)\)-times differentiable function \(F: [-1,1]\rightarrow {{\mathbb {C}}}\),

$$\begin{aligned} {}[\cos {\theta }_1,\ldots ,\cos {\theta }_d] F = \int _{-1}^1 F^{(d-1)}(u) M_{d-1}(u| \cos {\theta }_1,\ldots ,\cos {\theta }_d) \textrm{d}u, \end{aligned}$$

where \(u \mapsto M_{d-1}(u| \cos {\theta }_1,\ldots ,\cos {\theta }_d)\) is the B-spline function, which is a piecewise polynomial function in \(C^{d-2}([-1,1])\) with \(\cos {\theta }_1,\ldots , \cos {\theta }_d\) as its knots (see the next section for its definition). Motivated by the \(\ell ^1\)-summability and functions defined by the above divided difference, we call a function \(f\) \(\ell ^1\)-invariant if \({{{\hat{f}}}_{\alpha }} = {{{\hat{f}}}_{\beta }}\) whenever \(|{\alpha }| = |{\beta }|\), and we study properties of such functions.

Our analysis is partially motivated by the study in [2], where the \(\ell ^1\)-summability of the Fourier transform in \({{\mathbb {R}}}^d\), defined by

$$\begin{aligned} R_{\rho ,d}^{(1)}(f; x) = \int _{|v|_1 \le \rho } {{\hat{f}}}(v) \textrm{e}^{\textrm{i}v \cdot x} \textrm{d}v, \quad x \in {{\mathbb {R}}}^d \quad \hbox {and}\quad \rho \ge 0, \end{aligned}$$

is treated and its associated Dirichlet kernel is shown to be given as a divided difference, namely

$$\begin{aligned} {{\mathfrak {D}}}_{\rho , d}(x) = \left[ x_1^2,\ldots ,x_d^2\right] {{\mathfrak {G}}}_{\rho , d}, \qquad x \in {{\mathbb {R}}}^d, \end{aligned}$$

where \({{\mathfrak {G}}}_{\rho ,d}\) is a function of one variable. The Fourier transform of the B-spline function \(x \mapsto M_{d-1}(u| x_1^2,\ldots , x_d^2)\), considered as a function of its knots, is analyzed in [2]; it turns out to enjoy a rich structure and provides necessary tools for studying the class of \(\ell ^1\)-invariant functions \(f(\Vert \cdot \Vert _1)\) defined on \({{\mathbb {R}}}^d\). In particular, it leads to a characterization of those \(f: {{\mathbb {R}}}_+ \rightarrow {{\mathbb {R}}}\) for which \(x \mapsto f(\Vert x\Vert _1)\) is a positive definite function on \({{\mathbb {R}}}^d\).

We will show that the \(\ell ^1\)-invariant functions on the torus are all given by divided differences with knots \(\cos \theta _1, \ldots , \cos \theta _d\), and we will study the Fourier series of the B-spline \({\theta }\mapsto M_{d-1}(u| \cos {\theta }_1,\ldots , \cos {\theta }_d)\), which is \(\ell ^1\)-invariant. While the Fourier transform of the B-spline \(x \mapsto M_{d-1}(u| x_1^2,\ldots , x_d^2)\) on \({{\mathbb {R}}}^d\) satisfies an integral recursive relation in dimension d, the Fourier coefficients of the B-spline \({\theta }\mapsto M_{d-1}(u| \cos {\theta }_1,\ldots , \cos {\theta }_d)\) on \({{\mathbb {T}}}^d\) satisfy a somewhat surprising biorthogonal relation with a family of polynomials. Let \(m_{n,d}\) denote the common value of the Fourier coefficients of the B-spline function for indices with \(|{\alpha }| = n\). Then there is a sequence of polynomials \(h_{n,d}\), given in terms of the Gegenbauer polynomials, such that \(\{m_{n,d}: n \in {{\mathbb {N}}}_0\}\) and \(\{h_{n,d}: n \in {{\mathbb {N}}}_0\}\) are biorthogonal in the sense that

$$\begin{aligned} \int _{-1}^1 m_{n,d}(u) h_{\ell ,d}(u)\,\textrm{d}u = \delta _{n,\ell }, \qquad n, \ell = 0,1, 2, \ldots . \end{aligned}$$

This biorthogonal relation can be used to derive the Fourier orthogonal series of the B-spline function in the Gegenbauer polynomials explicitly; the first term of the series gives, in particular, that

$$\begin{aligned} \frac{1}{(2\pi )^d} \int _{{{\mathbb {T}}}^d} M_{d-1}(u| \cos {\theta }_1,\ldots ,\cos {\theta }_d)\,\textrm{d}{\theta }= \frac{\Gamma (\frac{d+1}{2})}{\sqrt{\pi }\Gamma (\frac{d}{2})(d-1)!} (1-u^2)_+^{\frac{d-2}{2}}, \end{aligned}$$

an identity that appears to be new. We will also give a characterization of \(\ell ^1\)-invariant functions that are either positive definite or strictly positive definite on \({{\mathbb {T}}}^d\).

The paper is organized as follows. We recall the definition and basic properties of \(\ell ^1\)-summability in the next section and establish several necessary identities. The Fourier orthogonal series of the B-spline with respect to its knot is given in the third section. The positive definite functions among \(\ell ^1\)-invariant functions are discussed in the fourth section.

2 \(\ell ^1\)-summability on \({{\mathbb {T}}}^d\)

Let f be a \(2\pi \)-periodic function defined on \({{\mathbb {T}}}^d\). If \(f\in L^2({{\mathbb {T}}}^d)\), then the Fourier series of f is defined by

$$\begin{aligned} f(x) = \sum _{{\alpha }\in {{\mathbb {Z}}}^d} {{\hat{f}}}_{\alpha }\textrm{e}^{\textrm{i}{\alpha }\cdot x},\; x\in {{\mathbb {T}}}^{d}, \quad \hbox {with} \quad {{\hat{f}}}_{\alpha }= \frac{1}{(2\pi )^d} \int _{{{\mathbb {T}}}^{d}} f(y) \textrm{e}^{-\textrm{i}{\alpha }\cdot y}\,\textrm{d}y,\;\alpha \in {{\mathbb {Z}}}^{d}. \end{aligned}$$

We study the class of periodic functions that we call \(\ell ^1\)-invariant.

Definition 2.1

A function \(f: {{\mathbb {T}}}^d \rightarrow {{\mathbb {R}}}\) is called \(\ell ^1\)-invariant if

$$\begin{aligned} {{\hat{f}}}_{\alpha }= {{\hat{f}}}_{{\beta }} \quad \hbox {whenever}\, |{\alpha }| = |{\beta }|\, \hbox {for}\, {\alpha }, {\beta }\in {{\mathbb {Z}}}^d. \end{aligned}$$

We denote the Fourier coefficient \({{\hat{f}}}_{\alpha }\) of such a function by \({{\hat{f}}}_{|{\alpha }|}\).

If f is \(\ell ^1\)-invariant, then its Fourier series is of the form

$$\begin{aligned} f(x) = \sum _{n=0}^\infty {{\hat{f}}}_n E_n(x), \qquad E_n(x) = \sum _{|{\alpha }| = n} \textrm{e}^{\textrm{i}{\alpha }\cdot x}, \quad x\in {{\mathbb {T}}}^d, \end{aligned}$$
(2.1)

where \(|{\alpha }|\) is the \(\ell ^1\)-norm, that is \(|{\alpha }|:= |{\alpha }_1| + \cdots + |{\alpha }_d|\), of \({\alpha }\in {{\mathbb {Z}}}^d\).

A function f on \({{\mathbb {T}}}^d\) is called \(\ell ^1\)-summable if its partial sum \(S_n^{(1)} f\) over the expanding \(\ell ^1\)-ball, defined by

$$\begin{aligned} S_n^{(1)} f(x) = \sum _{|{\alpha }| \le n} {{\hat{f}}}_{\alpha }\textrm{e}^{\textrm{i}{\alpha }\cdot x},\qquad x\in {{\mathbb {T}}}^d, \end{aligned}$$

converges to f. The partial sum can be written as an integral operator

$$\begin{aligned} S_n^{(1)} f(x) = \frac{1}{(2\pi )^d} \int _{{{\mathbb {T}}}^d} f(y) D_{n,d}(x-y)\,\textrm{d}y = f*D_{n,d}(x), \end{aligned}$$

where the kernel \(D_{n,d}\) is the analog of the Dirichlet kernel defined by

$$\begin{aligned} D_{n,d}(y):= \sum _{|{\alpha }|\le n} \textrm{e}^{\textrm{i}{\alpha }\cdot y},\qquad y\in {{\mathbb {T}}}^d. \end{aligned}$$

It is shown in [2, 12] that the kernel \(D_{n,d}\) can be written as a divided difference

$$\begin{aligned} D_{n,d}(x) = [\cos x_1,\ldots ,\cos x_d] G_{n,d}, \end{aligned}$$

where \(G_{n,d}\) is a univariate function defined by

$$\begin{aligned} G_{n,d}(\cos {\theta }) = (-1)^{\lfloor \frac{d-1}{2} \rfloor } 2 \cos \tfrac{{\theta }}{2}(\sin {\theta })^{d-2} \times {\left\{ \begin{array}{ll} \cos (n+\tfrac{1}{2}){\theta }&{} \hbox {for } d \hbox { even}, \\ \sin (n+\tfrac{1}{2}){\theta }&{} \hbox {for } d \hbox { odd}. \end{array}\right. } \end{aligned}$$
(2.2)

We briefly recall the notion of the divided difference of a function that is at least continuous. Let f be a real- or complex-valued function on \({{\mathbb {R}}}\), and let \(m \in {{\mathbb {N}}}_0\). The m-th divided difference of f at the pairwise distinct knots \(x_0, x_1, \ldots , x_m\) in \({{\mathbb {R}}}\) is defined inductively as

$$\begin{aligned} {}[x_0]\,f = f(x_0) \quad \hbox {and}\quad [x_0,\ldots ,x_m]\,f = \frac{[x_0,\ldots ,x_{m-1}]\,f - [x_1,\ldots ,x_m]\,f }{x_0 - x_m}. \end{aligned}$$

The divided difference is a symmetric function of the knots. The knots of the divided difference may coalesce. In particular, if all knots coalesce and if the function is sufficiently differentiable, then the divided difference collapses to

$$\begin{aligned} {}[x_0,\ldots ,x_m] f = \frac{f^{(m)}(x_0)}{m!} \quad \hbox {if}\, x_0 = x_1 = \cdots = x_m. \end{aligned}$$
(2.3)
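These defining properties are easy to check in a few lines. The following sketch (an illustration, not from the paper; the knots are arbitrary) verifies exactly over the rationals that the m-th divided difference of \(t^m\) is 1, in agreement with the coalescent case (2.3), that divided differences of order m annihilate polynomials of degree below m, and that the result is symmetric in the knots.

```python
from fractions import Fraction
from itertools import permutations

def divdiff(f, xs):
    # [x0] f = f(x0);  [x0,...,xm] f = ([x0,...,x_{m-1}] f - [x1,...,xm] f) / (x0 - xm)
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff(f, xs[:-1]) - divdiff(f, xs[1:])) / (xs[0] - xs[-1])

knots = [Fraction(k) for k in (0, 1, 3, 7)]  # m = 3, pairwise distinct (arbitrary choice)

# [x0,...,x3] t^3 = 1, the leading coefficient; consistent with (2.3), since (t^3)'''/3! = 1
assert divdiff(lambda t: t**3, knots) == 1
# a divided difference of order m annihilates polynomials of degree < m
assert divdiff(lambda t: t**2 + 5*t + 2, knots) == 0
# symmetry in the knots
assert all(divdiff(lambda t: t**3 + t, list(p)) == divdiff(lambda t: t**3 + t, knots)
           for p in permutations(knots))
print("divided-difference checks passed")
```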

Our analysis depends heavily on an integral representation of the divided difference, for which we need the definition of the B-spline. For \(x_0< \cdots < x_m\), the B-spline of order m with knots \(x_0, \ldots , x_m\) is defined by

$$\begin{aligned} {{\mathbb {R}}}\ni u \quad \mapsto \quad M_m(u | x_0,\ldots ,x_m) = [x_0,\ldots ,x_m]\left\{ \frac{(\,\cdot \, - u)_+^{m-1}}{(m-1)!}\right\} . \end{aligned}$$

The B-spline vanishes outside the interval \((x_0,x_m)\) and it is strictly positive on the interval itself, and

$$\begin{aligned} \int _{{\mathbb {R}}}M_m(u|x_0,\ldots ,x_m)\;\,\textrm{d}u = \frac{1}{m!}. \end{aligned}$$

For ease of reference, we state the integral representation of the divided difference as a lemma.

Lemma 2.2

Let \(f:{{\mathbb {R}}}\rightarrow {{\mathbb {C}}}\) be m-times continuously differentiable. Then

$$\begin{aligned} {}[x_0,\ldots ,x_m]\,f = \int _{{\mathbb {R}}}f^{(m)}(u)M_m(u | x_0,\ldots ,x_m)\, \textrm{d}u. \end{aligned}$$

We shall also need the recurrence relation for B-splines (see Powell [6])

$$\begin{aligned} M_{m+1}(u| x_0,\ldots ,x_m,x_{m+1}) \,&= \frac{(u-x_0)M_m(u | x_0,\ldots ,x_m)}{x_{m+1}-x_0}\\&\quad + \frac{(x_{m+1}-u)M_m(u | x_1,\ldots ,x_{m+1})}{x_{m+1}-x_0}. \end{aligned}$$

We first write the function \(E_n\) as an integral against the B-spline. We will need the Gegenbauer polynomials, which are orthogonal polynomials with respect to the weight function

$$\begin{aligned} w_{\lambda }(t) = (1-t^2)^{{\lambda }-\frac{1}{2}}, \quad {\lambda }> -\tfrac{1}{2}, \end{aligned}$$

on the interval \([-1,1]\). Let \((a)_n = a(a+1)\cdots (a+n-1)\) denote the Pochhammer symbol. The Gegenbauer polynomial of degree n is denoted by \(C_n^{\lambda }\) and normalized by \(C_n^{\lambda }(1) = \frac{(2{\lambda })_n}{n!}\). The Gegenbauer polynomials satisfy the orthogonality

$$\begin{aligned} c_{\lambda }\int _{-1}^1 C_n^{\lambda }(t) C_m^{\lambda }(t) w_{\lambda }(t) \,\textrm{d}t = \frac{{\lambda }}{n+{\lambda }} C_n^{\lambda }(1) \delta _{n,m}, \qquad n, m \in {{\mathbb {N}}}_0, \end{aligned}$$

where \(c_{\lambda }\) is a constant so that \(c_{\lambda }\int _{-1}^1 w_{\lambda }(t) \,\textrm{d}t = 1\). For convenience, we also define

$$\begin{aligned} Z_n^{\lambda }(t):= \frac{n+{\lambda }}{{\lambda }} C_n^{\lambda }(t). \end{aligned}$$

The generating function of the Gegenbauer polynomials is given by

$$\begin{aligned} \frac{1}{(1- 2r t + r^2)^{{\lambda }}} = \sum _{n=0}^\infty C_n^{\lambda }(t) r^n, \qquad 0 \le r < 1. \end{aligned}$$

Throughout this paper we define \(C_n^{\lambda }(t) =0\) whenever \(n < 0\).
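A minimal sketch (not from the paper) for evaluating \(C_n^{\lambda }\) numerically uses the classical three-term recurrence \(n\, C_n^{\lambda }(t) = 2(n+{\lambda }-1)\, t\, C_{n-1}^{\lambda }(t) - (n+2{\lambda }-2)\, C_{n-2}^{\lambda }(t)\), a standard identity not stated above; it checks the normalization \(C_n^{\lambda }(1) = (2{\lambda })_n/n!\) and the generating function, with arbitrary test values of \({\lambda }\), t, and r.

```python
import math

def gegenbauer(n, lam, t):
    # C_0 = 1, C_1 = 2*lam*t;  n C_n = 2(n+lam-1) t C_{n-1} - (n+2lam-2) C_{n-2}
    if n < 0:
        return 0.0            # the convention C_n = 0 for n < 0
    c_prev, c = 1.0, 2.0 * lam * t
    if n == 0:
        return c_prev
    for k in range(2, n + 1):
        c_prev, c = c, (2.0 * (k + lam - 1.0) * t * c - (k + 2.0 * lam - 2.0) * c_prev) / k
    return c

def poch(a, n):               # Pochhammer symbol (a)_n
    p = 1.0
    for i in range(n):
        p *= a + i
    return p

lam, t, r = 2.0, 0.3, 0.4     # arbitrary test values with 0 <= r < 1

# normalization C_n^lam(1) = (2 lam)_n / n!
for n in range(8):
    assert abs(gegenbauer(n, lam, 1.0) - poch(2.0 * lam, n) / math.factorial(n)) < 1e-9

# generating function, truncated: the tail decays geometrically for r < 1
series = sum(gegenbauer(n, lam, t) * r**n for n in range(80))
assert abs(series - (1.0 - 2.0 * r * t + r * r) ** (-lam)) < 1e-10
print("Gegenbauer checks passed")
```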

Lemma 2.3

For \(\theta = ({\theta }_1,\ldots ,{\theta }_d)\in {{\mathbb {T}}}^d\), the function \(E_n\) satisfies

$$\begin{aligned} E_n(\theta ) \,&= [\cos {\theta }_1, \ldots , \cos {\theta }_d] H_{n,d} \\&= \int _{-1}^1 h_{n,d}(u) M_{d-1}(u | \cos {\theta }_1,\ldots , \cos {\theta }_d) \,\textrm{d}u, \nonumber \end{aligned}$$
(2.4)

where \(H_{n,d}\) and \(h_{n,d}\) are defined by \(H_{0,d} = G_{0,d}\) and, for \(n \ge 1\),

$$\begin{aligned} H_{n,d}(\cos {\theta }) = 2 (-1)^{\lfloor \frac{d-1}{2}\rfloor } (\sin {\theta })^{d-1} \times {\left\{ \begin{array}{ll} - \sin (n {\theta }) &{} \hbox {for } d \hbox { even}, \\ \cos (n {\theta }) &{} \hbox {for } d \hbox { odd}, \end{array}\right. } \end{aligned}$$
(2.5)

and \(h_{n,d}\) is a polynomial of degree n given by \(h_{0,d} = (d-1)!\) and, for \(n \ge 1\),

$$\begin{aligned} h_{n,d}(u) \,&= (d-1)! \sum _{j=0}^d (-1)^j \left( {\begin{array}{c}d\\ j\end{array}}\right) C_{n-2j}^d(u) \\&= (d-1)! \sum _{j=0}^{d-1}(-1)^j \left( {\begin{array}{c}d-1\\ j\end{array}}\right) Z_{n-2j}^{d-1}(u). \nonumber \end{aligned}$$
(2.6)

Proof

By its definition, \(E_n({\theta }) = D_{n,d}({\theta }) - D_{n-1,d}({\theta })\), so that \(E_n({\theta })\) is the divided difference of \(H_{n,d} = G_{n,d} - G_{n-1,d}\), from which (2.4) follows readily with \(h_{n,d} = H_{n,d}^{(d-1)}\). The identity (2.5) follows as a consequence of (2.2) and the trigonometric identities \(\cos (n+\frac{1}{2}){\theta }- \cos (n-\frac{1}{2}){\theta }= - 2 \sin n{\theta }\sin \frac{{\theta }}{2}\) and \(\sin (n+\frac{1}{2}){\theta }- \sin (n-\frac{1}{2}){\theta }= 2 \cos n{\theta }\sin \frac{{\theta }}{2}\). Now, it is shown in [2] that

$$\begin{aligned} g_{n,d}(t) = G_{n,d}^{(d-1)}(t) = f_{n,d}(t) + f_{n-1,d}(t), \end{aligned}$$

where \(f_{n,d}\) is given in terms of the Gegenbauer polynomials by

$$\begin{aligned} f_{n,d} (t) = (d-1)! \sum _{j=0}^{d-1} (-1)^j \left( {\begin{array}{c}d-1\\ j\end{array}}\right) C_{n-2j}^{d}(t). \end{aligned}$$

Using the relation [7, (4.7.29)]

$$\begin{aligned} (n+{\lambda }) C_n^{{\lambda }}(t) = {\lambda }\left[ C_n^{{\lambda }+1}(t) - C_{n-2}^{{\lambda }+1}(t) \right] \end{aligned}$$

with \({\lambda }= d-1\), we then obtain

$$\begin{aligned} h_{n,d} = f_{n,d} - f_{n-2,d} = (d-1)!\sum _{j=0}^{d-1} (-1)^j \left( {\begin{array}{c}d-1\\ j\end{array}}\right) \left[ C_{n-2j}^d(t) - C_{n-2j-2}^{d}(t)\right] , \end{aligned}$$

which is the second expression of \(h_{n,d}\) in (2.6), since \(C_{n-2j}^d - C_{n-2j-2}^{d} = Z_{n-2j}^{d-1}\) by the above relation. Furthermore, the first identity in (2.6) follows from \(\left( {\begin{array}{c}d-1\\ j\end{array}}\right) + \left( {\begin{array}{c}d-1\\ j-1\end{array}}\right) = \left( {\begin{array}{c}d\\ j\end{array}}\right) \) and

$$\begin{aligned} h_{n,d} = f_{n,d} - f_{n-2,d} = (d-1)!\sum _{j=0}^d (-1)^j \left( \left( {\begin{array}{c}d-1\\ j\end{array}}\right) + \left( {\begin{array}{c}d-1\\ j-1\end{array}}\right) \right) C_{n-2j}^d, \end{aligned}$$

where we define for convenience \(\left( {\begin{array}{c}d-1\\ m\end{array}}\right) = 0\) if \(m = -1\) or \(m = d\). \(\square \)

Let \(N_d(n) = \# \{{\alpha }\in {{\mathbb {Z}}}^d: |{\alpha }| =n\}\) be the cardinality of the set \(\{{\alpha }\in {{\mathbb {Z}}}^d: |{\alpha }| =n\}\). Then, \(N_{d}(n) = E_n(0)\). As a consequence of the identities (2.4) and (2.6), we obtain

$$\begin{aligned} N_d(n) = E_n(0) = \frac{h_{n,d}(1)}{(d-1)!} = \sum _{j=0}^d \frac{(-d)_j (2d)_{n-2j}}{j! (n-2j)!}, \end{aligned}$$

where, as before, \((a)_n = a(a+1)\cdots (a+n-1)\) is the Pochhammer symbol. The last sum can be written as a hypergeometric \({}_3F_2\) function evaluated at 1, but the series is not balanced, so it does not admit a closed-form evaluation. The first values of \(N_d(n)\), for \(n \ge 1\), are given below:

$$\begin{aligned} N_2(n) = 4n, \quad N_3(n) = 4 n^2+2, \quad N_4(n) = \frac{8}{3} n (n^2+2). \end{aligned}$$
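The counting formula and the listed closed forms are easy to confirm by brute force. The sketch below (not from the paper) compares the Pochhammer sum above with a direct lattice-point count, and with the closed forms for \(n \ge 1\) (note that \(N_d(0) = 1\)).

```python
from itertools import product
from math import factorial

def lattice_count(d, n):
    # direct count of {alpha in Z^d : |alpha_1| + ... + |alpha_d| = n}
    return sum(1 for a in product(range(-n, n + 1), repeat=d)
               if sum(abs(x) for x in a) == n)

def poch(a, n):               # Pochhammer symbol (a)_n, over the integers
    p = 1
    for i in range(n):
        p *= a + i
    return p

def N(d, n):
    # N_d(n) = sum_j (-d)_j (2d)_{n-2j} / (j! (n-2j)!); each term is an integer,
    # so the floor division below is exact
    return sum(poch(-d, j) * poch(2 * d, n - 2 * j) // (factorial(j) * factorial(n - 2 * j))
               for j in range(min(d, n // 2) + 1))

for d in (2, 3, 4):
    for n in range(6):
        assert N(d, n) == lattice_count(d, n)
for n in range(1, 6):
    assert N(2, n) == 4 * n
    assert N(3, n) == 4 * n * n + 2
    assert 3 * N(4, n) == 8 * n * (n * n + 2)
print("lattice-count checks passed")
```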

The function \(h_{n,d}\) satisfies a generating function identity which we state as the following result.

Lemma 2.4

Let \(0 \le r < 1\). Then

$$\begin{aligned} (d-1)! \frac{(1-r^2)^d}{(1-2 r u + r^2)^d} = \sum _{n= 0}^\infty h_{n,d}(u) r^n. \end{aligned}$$
(2.7)

Proof

By the explicit formula of \(h_{n,d}\), we obtain

$$\begin{aligned} \frac{1}{(d-1)!} \sum _{n=0}^\infty h_{n,d}(u) r^n \,&= \sum _{n=0}^\infty \sum _{j=0}^d (-1)^j \left( {\begin{array}{c}d\\ j\end{array}}\right) C_{n-2j}^d(u) r^n \\&= \sum _{j=0}^d (-1)^j\left( {\begin{array}{c}d\\ j\end{array}}\right) \sum _{n=2j}^\infty C_{n-2j}^d(u) r^{n-2j} r^{2j} \\&= \sum _{j=0}^d (-1)^j \left( {\begin{array}{c}d\\ j\end{array}}\right) r^{2j} \sum _{n= 0}^\infty C_n^d(u) r^n \\&= (1-r^2)^d \frac{1}{(1-2 r u+r^2)^d}, \end{aligned}$$

where we have used the generating function of the Gegenbauer polynomials. \(\square \)
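The generating function (2.7) can also be checked numerically by truncating the series. The sketch below (illustrative; the values of d, u, and r are arbitrary) evaluates \(h_{n,d}\) from the first formula in (2.6); note that the formula's \(n = 0\) value is \((d-1)!\), matching the left-hand side of (2.7) at \(r = 0\).

```python
import math

def gegenbauer(n, lam, t):
    # three-term recurrence, with the convention C_n = 0 for n < 0
    if n < 0:
        return 0.0
    c_prev, c = 1.0, 2.0 * lam * t
    if n == 0:
        return c_prev
    for k in range(2, n + 1):
        c_prev, c = c, (2.0 * (k + lam - 1.0) * t * c - (k + 2.0 * lam - 2.0) * c_prev) / k
    return c

def h(n, d, u):
    # first formula in (2.6): (d-1)! sum_j (-1)^j binom(d, j) C_{n-2j}^d(u)
    return math.factorial(d - 1) * sum((-1) ** j * math.comb(d, j) * gegenbauer(n - 2 * j, d, u)
                                       for j in range(d + 1))

d, u, r = 3, 0.25, 0.35       # arbitrary test values with 0 <= r < 1
lhs = math.factorial(d - 1) * (1.0 - r * r) ** d / (1.0 - 2.0 * r * u + r * r) ** d
series = sum(h(n, d, u) * r ** n for n in range(120))   # geometrically small tail
assert abs(lhs - series) < 1e-8
print("generating-function check passed")
```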

Our next result is of interest in itself, which gives an explicit formula for the divided difference of the function

$$\begin{aligned} P_r(t): = \frac{1}{1-2 r t +r^2}, \quad 0 \le r < 1, \quad t \in [-1,1]. \end{aligned}$$

Proposition 2.5

For \(0 \le r < 1\),

$$\begin{aligned} (d-1)! \int _{-1}^1 \frac{1}{(1-2 r u + r^2)^d}&M_{d-1}(u | \cos {\theta }_1,\ldots ,\cos {\theta }_d) \,\textrm{d}u \nonumber \\&= \frac{1}{\prod _{i=1}^d (1-2 r \cos {\theta }_i + r^2)}. \end{aligned}$$
(2.8)

In particular,

$$\begin{aligned}{}[\cos {\theta }_1, \ldots , \cos {\theta }_d] P_r = \frac{(2r)^{d-1}}{\prod _{i=1}^d (1-2 r \cos {\theta }_i + r^2)}. \end{aligned}$$
(2.9)

Proof

We start with the elementary identity

$$\begin{aligned} \frac{1-r^2}{1-2r \cos \phi +r^2} = \sum _{n=-\infty }^\infty r^{|n|} \textrm{e}^{\textrm{i}n \phi }. \end{aligned}$$

Reorganizing the d-fold product of this identity and setting \({\theta }= ({\theta }_1,\ldots , {\theta }_d)\) as above, we obtain the equalities

$$\begin{aligned} \frac{(1-r^2)^d}{\prod _{i=1}^d (1-2 r \cos {\theta }_i + r^2)} \,&= \sum _{n=0}^\infty r^n \sum _{|{\alpha }| = n} \textrm{e}^{\textrm{i}{\alpha }\cdot {\theta }} \nonumber \\&= \sum _{n=0}^\infty r^n E_n({\theta }) = \sum _{n=0}^\infty r^n [\cos {\theta }_1, \ldots , \cos {\theta }_d] H_{n,d} \nonumber \\&= \sum _{n=0}^\infty r^n \int _{-1}^1 h_{n,d}(u) M_{d-1} (u |\cos {\theta }_1,\ldots , \cos {\theta }_d) \,\textrm{d}u, \end{aligned}$$
(2.10)

from which the identity (2.8) follows from the generating function of \(h_{n,d}\). Now, taking derivatives of \(P_r\), we obtain readily that

$$\begin{aligned} P_r^{(d-1)} (u) = \frac{(d-1)! (2r)^{d-1}}{(1- 2 r u + r^2)^d}, \end{aligned}$$

so that the left-hand side of (2.8) can be identified with the divided difference of \(P_r\), which gives (2.9). \(\square \)
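The identity (2.9) is easy to test against the recursive definition of the divided difference; in the sketch below (illustrative only), the value of r and the angles \({\theta }_i\) are arbitrary, chosen so that the knots \(\cos {\theta }_i\) are distinct.

```python
import math

def divdiff(f, xs):
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff(f, xs[:-1]) - divdiff(f, xs[1:])) / (xs[0] - xs[-1])

r = 0.4
thetas = [0.5, 1.2, 2.0, 2.9]                 # d = 4; the knots cos(theta_i) are distinct
knots = [math.cos(th) for th in thetas]

lhs = divdiff(lambda t: 1.0 / (1.0 - 2.0 * r * t + r * r), knots)   # [knots] P_r
rhs = (2.0 * r) ** (len(knots) - 1) / math.prod(
    1.0 - 2.0 * r * math.cos(th) + r * r for th in thetas)
assert abs(lhs - rhs) < 1e-9
print("divided difference of P_r matches (2.9)")
```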

3 Fourier series of B-splines with respect to its knots

As a function of its knots, the B-spline function is a periodic function on \({{\mathbb {T}}}^d\),

$$\begin{aligned} {{\mathbb {T}}}^d \ni {\theta }\mapsto M_{d-1}(u| \cos {\theta }_1,\ldots ,\cos {\theta }_d) \in {{\mathbb {R}}}, \end{aligned}$$

for each u, and we also define for convenience

$$\begin{aligned} {{\mathcal {M}}}_d({\alpha }; {\theta }):= M_{d-1}(\cos \alpha | \cos {\theta }_1,\ldots ,\cos {\theta }_d). \end{aligned}$$

Studying \(u \in (-1,1)\) is sufficient since, for \(|u| \ge 1\), \( M_{d-1}(u| \cos {\theta }_1,\ldots ,\cos {\theta }_d) = 0\) for all \({\theta }\in {{\mathbb {T}}}^d\) by definition. We first show that it is an integrable function on \({{\mathbb {T}}}^d\).

Proposition 3.1

For \(u \in (-1,1)\), the function \({\theta }\mapsto M_{d-1}(u| \cos {\theta }_1,\ldots ,\cos {\theta }_d)\) is in \(L^1({{\mathbb {T}}}^d)\).

Proof

Let \(u = \cos {\alpha }\) with \(0< {\alpha }< \pi \) be fixed. Since the function \({{\mathcal {M}}}_d({\alpha };\cdot )\) is obviously even in each of its variables, we only need to consider \({\theta }\in [0, \pi ]^d\). Furthermore, since the divided difference is a symmetric function of its knots, \({{\mathcal {M}}}_{d}({\alpha };\cdot )\) is a symmetric function of \({\theta }_1, \ldots , {\theta }_d\); it is also nonnegative, so we only need to show that it is an \(L^1\) function on the domain

$$\begin{aligned} \triangle _d = \{ {\theta }= ({\theta }_1,\ldots ,{\theta }_d) \in {{\mathbb {T}}}^d: 0 \le {\theta }_d \le {\theta }_{d-1}\le \cdots \le {\theta }_1 \le \pi \}. \end{aligned}$$

Indeed, the above consideration leads readily to

$$\begin{aligned} \int _{{{\mathbb {T}}}^d} M_{d-1}(u| \cos {\theta }_1,\ldots ,\cos {\theta }_d)\,\textrm{d}{\theta }= 2^d d! \int _{\triangle _d} M_{d-1}(u| \cos {\theta }_1,\ldots ,\cos {\theta }_d)\,\textrm{d}{\theta }. \end{aligned}$$

We start with the case \(d=2\); the univariate case is trivial by continuity and compact support. On the domain \(\triangle _2\), the function is given by

$$\begin{aligned} {{\mathcal {M}}}_{2}({\alpha }; {\theta }) = \frac{\chi _{[{\theta }_2,{\theta }_1]} ({\alpha })}{\cos {\theta }_2 - \cos {\theta }_1} = {\left\{ \begin{array}{ll} 0 &{} {\alpha }\le {\theta }_2, \\ \frac{1}{\cos {\theta }_2-\cos {\theta }_1} &{} {\theta }_2< {\alpha }< {\theta }_1, \\ 0 &{} {\alpha }\ge {\theta }_1. \end{array}\right. } \end{aligned}$$

Hence, it follows readily that

$$\begin{aligned} \int _{\triangle _2} {{\mathcal {M}}}_{2}({\alpha }; {\theta }) \,\textrm{d}{\theta }\,&= \int _{\alpha }^\pi \int _0^{\alpha }\frac{1}{\cos {\theta }_2-\cos {\theta }_1} \,\textrm{d}{\theta }_2 \,\textrm{d}{\theta }_1 \\&= \int _{\alpha }^\pi \int _0^{\alpha }\frac{1}{2 \sin \frac{{\theta }_1-{\theta }_2}{2} \sin \frac{{\theta }_1+{\theta }_2}{2} } \,\textrm{d}{\theta }_2 \,\textrm{d}{\theta }_1 \\&\le \frac{\pi }{\min \{\sin \frac{{\alpha }}{2}, \cos \frac{{\alpha }}{2}\}} \int _{\alpha }^\pi \int _0^{\alpha }\frac{1}{{\theta }_1 - {\theta }_2} \,\textrm{d}{\theta }_2 \,\textrm{d}{\theta }_1, \end{aligned}$$

where we have used the inequality \(\sin \frac{{\theta }_1-{\theta }_2}{2} \ge \frac{{\theta }_1-{\theta }_2}{\pi }\) and, if \(\frac{{\theta }_1+{\theta }_2}{2} \le \frac{\pi }{2}\), \(\sin \frac{{\theta }_1+{\theta }_2}{2} \ge \sin \frac{{\theta }_2}{2} \ge \sin \frac{{\alpha }}{2}\), whereas if \(\frac{{\theta }_1+{\theta }_2}{2} > \frac{\pi }{2}\), \(\sin \frac{{\theta }_1+{\theta }_2}{2} = \sin (\frac{\pi - {\theta }_1}{2} + \frac{\pi - {\theta }_2}{2}) \ge \sin \frac{\pi - {\alpha }}{2} = \cos \frac{{\alpha }}{2}\). The last integral is equal to \(-{\alpha }\ln {\alpha }+ \pi \ln \pi + ({\alpha }- \pi ) \ln (\pi -{\alpha })\), which is bounded for \(0< {\alpha }< \pi \), so that \({{\mathcal {M}}}_{2}({\alpha }; {\theta }) \in L^1({{\mathbb {T}}}^2)\) for \(0< {\alpha }< \pi \).
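As a sanity check (not part of the proof), the closed form of the last double integral can be confirmed by a crude midpoint rule; the value of \({\alpha }\) and the grid size below are arbitrary.

```python
import math

alpha = 1.0                   # any 0 < alpha < pi
N = 1200                      # crude midpoint grid; the integrand blows up at the corner
h1 = (math.pi - alpha) / N
h2 = alpha / N
total = 0.0
for i in range(N):
    t1 = alpha + (i + 0.5) * h1
    for j in range(N):
        total += 1.0 / (t1 - (j + 0.5) * h2)
total *= h1 * h2

closed = (-alpha * math.log(alpha) + math.pi * math.log(math.pi)
          + (alpha - math.pi) * math.log(math.pi - alpha))
assert abs(total - closed) < 5e-3
print("double-integral check passed")
```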

For \(d > 2\), we use induction on d together with the B-spline recurrence relation stated above. Since

$$\begin{aligned} {{\mathcal {M}}}_{d+1}( \alpha ; {\theta }) =\,&\frac{\cos \alpha -\cos {\theta }_1}{\cos {\theta }_{d+1}-\cos {\theta }_1}{{\mathcal {M}}}_d(\alpha ;{\theta }_1,\ldots ,{\theta }_d) \\&+\frac{\cos {\theta }_{d+1}-\cos \alpha }{\cos {\theta }_{d+1}-\cos {\theta }_1}{{\mathcal {M}}}_{d}(\alpha ; {\theta }_2,\ldots , {\theta }_{d+1}) \end{aligned}$$

and \({{\mathcal {M}}}_{d+1}( \alpha ; {\theta }) =0\) if \(\alpha \not \in ({\theta }_{d+1},{\theta }_{1})\), while both fractions above lie in \([0,1]\) when \({\theta }_{d+1} \le \alpha \le {\theta }_1\), it follows that for \(\alpha \in [0,\pi ]\) and \(\theta \in \triangle _{d+1}\),

$$\begin{aligned} {{\mathcal {M}}}_{d+1}( \alpha ; {\theta })\le {{\mathcal {M}}}_d(\alpha ;{\theta }_1,\ldots ,{\theta }_d)+{{\mathcal {M}}}_{d}(\alpha ; {\theta }_2,\ldots , {\theta }_{d+1}). \end{aligned}$$

Consequently, the integrability of \({{\mathcal {M}}}_{d+1}({\alpha }; \cdot )\) follows from the integrability of \({{\mathcal {M}}}_d({\alpha }; \cdot )\) by induction. \(\square \)

Since \({{\mathcal {M}}}_d({\alpha };\cdot )\) is a nonnegative integrable function, we can expand it into multiple Fourier series, which leads us to consider the Fourier coefficients of the B-spline function as a function of its knots. More interestingly, we consider the \(\ell ^1\)-sum of its Fourier coefficients.

Definition 3.2

For \(d \ge 2\) and \(n \in {{\mathbb {N}}}_0\), we define

$$\begin{aligned} m_{n,d} (u):= \frac{1}{(2\pi )^d} \int _{{{\mathbb {T}}}^d} M_{d-1} (u | \cos {\theta }_1,\ldots ,\cos {\theta }_d) \frac{E_n({\theta })}{N_d(n)} \,\textrm{d}{\theta }. \end{aligned}$$

By the definition of \(E_n({\theta })\), \(m_{n,d}\) is the \(\ell ^1\) mean of the Fourier coefficients of the B-spline function \({\theta }\mapsto M_{d-1} (u | \cos {\theta }_1,\ldots ,\cos {\theta }_d)\), considered as a function of its knots, over the indices with \(|{\alpha }| = n\).

Theorem 3.3

The family of functions \(\{m_{n,d}: n \in {{\mathbb {N}}}_0\}\) and the family of functions \(\{h_{n,d}: n \in {{\mathbb {N}}}_0\}\) are biorthogonal; more precisely,

$$\begin{aligned} \int _{-1}^1 m_{n,d}(u) h_{n',d}(u) \,\textrm{d}u = \delta _{n,n'}, \qquad n, n' \in {{\mathbb {N}}}_0. \end{aligned}$$
(3.1)

Proof

Multiplying the first identity of (2.10) by \(E_n({\theta })\) and integrating over \({\theta }\in {{\mathbb {T}}}^d\), we obtain

$$\begin{aligned} r^n= \frac{1}{(2\pi )^d} \int _{{{\mathbb {T}}}^d} \frac{(1-r^2)^d}{\prod _{i=1}^d (1-2 r \cos {\theta }_i + r^2)}\frac{E_n({\theta })}{N_d(n)} \,\textrm{d}{\theta }. \end{aligned}$$

Using (2.8) and exchanging the order of integrals on the right-hand side, we obtain

$$\begin{aligned} r^n&= \frac{ (d-1)!}{(2\pi )^d} \int _{-1}^1 \frac{(1-r^2)^d}{(1-2 r u + r^2)^d} \int _{{{\mathbb {T}}}^d} M_{d-1}(u | \cos {\theta }_1,\ldots ,\cos {\theta }_d) \frac{E_n({\theta })}{N_d(n)} \,\textrm{d}{\theta }\,\textrm{d}u \\&= (d-1)! \int _{-1}^1 \frac{(1-r^2)^d}{(1-2 r u + r^2)^d} m_{n,d}(u) \,\textrm{d}u \\&= \sum _{k =0}^\infty \int _{-1}^1 h_{k,d}(u) m_{n,d}(u) \,\textrm{d}u \, r^k, \end{aligned}$$

where the last step follows from (2.7). Since the above identity holds for all \(0 \le r <1\), comparing the coefficients of \(r^k\) on both sides proves (3.1). \(\square \)

Using the orthogonality, we can now derive a series expansion of \(m_{n,d}\). Let us start with \(d =2\).

Proposition 3.4

For \(n =0, 1,2,\ldots \), \(m_{n,2}(u) = 0\) if \(|u| \ge 1\), and furthermore

$$\begin{aligned} m_{n,2}(\cos {\alpha }) = \frac{2}{\pi } \sum _{k=0}^\infty \frac{\sin ((n+2k+1) {\alpha })}{n+2k+1}, \qquad 0< {\alpha }< \pi . \end{aligned}$$
(3.2)

Proof

Let \(n \ge 0\) be fixed. Since \(m_{n,2}(\pm 1) =0\), we assume that \(m_{n,2}(u)\) contains a factor \(\sqrt{1-u^2}\) and takes the form

$$\begin{aligned} m_{n,2}(u) = \frac{2}{\pi }\sqrt{1-u^2} \sum _{k=0}^\infty a_k \frac{U_{n+2k}(u)}{n+2k+1}, \end{aligned}$$

where the coefficients \(a_k\) are real numbers that will be determined by the biorthogonality of (3.1) and \(U_n\) are as usual the Chebyshev polynomials of the second kind, satisfying \(U_n=C^1_n\). Using the orthogonality of \(U_n\),

$$\begin{aligned} \frac{2}{\pi } \int _{-1}^1 U_n(t) U_m(t) \sqrt{1-t^2} \,\textrm{d}t = \delta _{n,m}, \qquad n,m \ge 0, \end{aligned}$$

and the second explicit formula in (2.6), which for \(d = 2\) reads

$$\begin{aligned} h_{\ell ,2}(u) = (\ell +1)U_\ell (u) - (\ell -1) U_{\ell -2}(u), \qquad \ell \ge 0, \end{aligned}$$

we see that (3.1) becomes, for \(\ell =0,1,\ldots \),

$$\begin{aligned} \delta _{\ell ,n} = \int _{-1}^1 m_{n,2}(u) h_{\ell ,2}(u) \,\textrm{d}u&= \sum _{k=0}^\infty a_k \frac{2}{\pi } \int _{-1}^1 \frac{U_{n+2k}(u)}{n+2k+1} h_{\ell ,2}(u) \sqrt{1-u^2} \,\textrm{d}u. \end{aligned}$$

If \(\ell \) and n have different parity, then both the right-hand and the left-hand side are zero. Assume now that \(\ell \) and n have the same parity. If \(\ell < n\), then the right-hand side is trivially zero by the orthogonality of \(U_n\). Thus, we only need to consider \(\ell = n + 2j\) for \(j = 0,1,\ldots \), for which the identity becomes

$$\begin{aligned} \delta _{j,0} = a_j - a_{j-1}, \qquad j =0, 1, 2, \ldots , \end{aligned}$$

where \(a_{-1} =0\), so that \(a_0 =1\) and \(a_j =a_{j-1}\) for \(j \ge 1\). Hence, \(a_j = 1\) for \(j=0,1,\ldots \). Now, setting \(u = \cos {\alpha }\), we get \(\sqrt{1-u^2} U_{n+2k}(u) = \sin ((n+2k+1) {\alpha })\), which gives the expression for \(m_{n,2}\) in (3.2). \(\square \)

It turns out, surprisingly, that the expression (3.2) can be written, for each n, as a finite sum.

Theorem 3.5

For \(n =0, 1,2,\ldots \), and \(0< {\alpha }< \pi \),

$$\begin{aligned} m_{2 n,2}(\cos {\alpha }) \,&= \frac{1}{2} - \frac{2}{\pi } \sum _{k=0}^{n-1} \frac{\sin ((2k+1) {\alpha })}{2k+1}, \end{aligned}$$
(3.3)
$$\begin{aligned} m_{2 n+1,2}(\cos {\alpha })\,&= \frac{1}{2} - \frac{{\alpha }}{\pi } - \frac{2}{\pi } \sum _{k=1}^{n} \frac{\sin (2k{\alpha })}{2k}. \end{aligned}$$
(3.4)

In particular, we obtain

$$\begin{aligned} m_{0,2}(u) = \frac{1}{(2\pi )^2} \int _{{{\mathbb {T}}}^2} M_{1} (u | \cos {\theta }_1, \cos {\theta }_2) \,\textrm{d}{\theta }_1 \,\textrm{d}{\theta }_2 = {\left\{ \begin{array}{ll} 0 &{}\, u\le -1, \\ \frac{1}{2} &{}\, -1< u <1, \\ 0 &{}\, u\ge 1. \end{array}\right. } \end{aligned}$$
(3.5)

Proof

Let \(f_0\) and \(f_1\) be odd \(2\pi \)-periodic functions so that their restriction on \([0,\pi ]\) are defined by

$$\begin{aligned} f_0({\theta }) = {\left\{ \begin{array}{ll} 0 &{} {\theta }= 0, \\ \frac{\pi }{4} &{} 0< {\theta }<\pi , \\ 0, &{} {\theta }= \pi , \end{array}\right. } \qquad \hbox {and} \quad f_1({\theta }) = {\left\{ \begin{array}{ll} \frac{{\theta }}{2} &{} 0 \le {\theta }< \pi , \\ 0, &{} {\theta }= \pi . \end{array}\right. } \end{aligned}$$

A quick computation shows that the Fourier series of \(f_0\) is given by

$$\begin{aligned} f_0({\theta }) = \sum _{k=0}^\infty \frac{\sin (2k+1){\theta }}{2k+1}, \qquad - \pi \le {\theta }\le \pi , \end{aligned}$$

where the convergence is pointwise. This gives immediately (3.5). Furthermore, for \(m_{2n,2}\), we obtain from (3.2)

$$\begin{aligned} m_{2n,2}(\cos {\alpha }) = \frac{2}{\pi }\sum _{k= n}^\infty \frac{\sin ( (2k+1){\alpha })}{(2k+1)} = \frac{2}{\pi }\left[ \frac{\pi }{4} - \sum _{k= 0}^{n-1} \frac{\sin ( (2k+1){\alpha })}{(2k+1)} \right] , \end{aligned}$$

which is (3.3). Another quick computation shows that the Fourier series of \(f_1(\theta )\) is

$$\begin{aligned} f_1({\theta }) = \sum _{n=1}^\infty \frac{( -1)^{n-1} \sin (n {\theta })}{n} = \sum _{n=0}^\infty \frac{\sin ((2n+1){\theta })}{2n+1} - \sum _{n=1}^\infty \frac{\sin (2n {\theta })}{2n}, \end{aligned}$$

which shows in particular, together with the Fourier series of \(f_0\),

$$\begin{aligned} \sum _{n=1}^\infty \frac{\sin (2n {\theta })}{2n} = \frac{\pi }{4} - \frac{{\theta }}{2}, \qquad 0< {\theta }< \pi . \end{aligned}$$

Thus, for \(m_{2n+1,2}\), we obtain from (3.2) that

$$\begin{aligned} m_{2n+1,2}(\cos {\alpha }) = \frac{2}{\pi }\sum _{k= n+1}^\infty \frac{\sin ( 2k {\alpha })}{2k} = \frac{2}{\pi }\left[ \frac{\pi }{4} - \frac{{\alpha }}{2} - \sum _{k= 1}^{n} \frac{\sin ( 2k {\alpha })}{2k} \right] , \end{aligned}$$

which is (3.4). \(\square \)
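Both the finite forms (3.3)–(3.4) and the biorthogonality (3.1) can be tested numerically for \(d = 2\). The sketch below (illustrative only; the sample points, truncation lengths, and quadrature grid are arbitrary) first compares the finite forms against truncations of (3.2), and then checks \(\int _{-1}^1 m_{n,2}\, h_{\ell ,2}\,\textrm{d}u = \delta _{n,\ell }\) after the substitution \(u = \cos {\alpha }\).

```python
import math

def m_series(n, alpha, K=100000):
    # truncation of the series (3.2); the tail is O(1/K) by Abel summation
    return (2.0 / math.pi) * sum(math.sin((n + 2 * k + 1) * alpha) / (n + 2 * k + 1)
                                 for k in range(K))

def m2(n, alpha):
    # the finite forms (3.3) (n even) and (3.4) (n odd)
    if n % 2 == 0:
        return 0.5 - (2.0 / math.pi) * sum(math.sin((2 * k + 1) * alpha) / (2 * k + 1)
                                           for k in range(n // 2))
    return 0.5 - alpha / math.pi - (2.0 / math.pi) * sum(math.sin(2 * k * alpha) / (2 * k)
                                                         for k in range(1, (n + 1) // 2))

# the finite forms agree with (3.2)
for n in range(6):
    for alpha in (0.7, 1.5, 2.4):
        assert abs(m_series(n, alpha) - m2(n, alpha)) < 1e-4

def h2_sin(l, alpha):
    # h_{l,2}(cos a) * sin a = (l+1) sin((l+1)a) - (l-1) sin((l-1)a),
    # using U_m(cos a) = sin((m+1)a)/sin a and the convention U_m = 0 for m < 0
    t2 = (l - 1) * math.sin((l - 1) * alpha) if l >= 2 else 0.0
    return (l + 1) * math.sin((l + 1) * alpha) - t2

# biorthogonality (3.1) for d = 2, via the substitution u = cos(alpha)
N = 20000
h = math.pi / N
for n in range(5):
    for l in range(5):
        s = sum(m2(n, (i + 0.5) * h) * h2_sin(l, (i + 0.5) * h) for i in range(N)) * h
        assert abs(s - (1.0 if n == l else 0.0)) < 1e-3
print("Theorem 3.5 and biorthogonality checks passed for d = 2")
```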

Proposition 3.6

For \(d >2\), the function \(m_{n,d}\) is of the form

$$\begin{aligned} m_{n,d} (u) = (1-u^2)^{d-\frac{3}{2}} \frac{c_{d-1}}{(d-1)!} \sum _{k=0}^\infty \frac{(d-1)_k}{k!} \frac{C_{n+2k}^{d-1}(u)}{C_{n+2k}^{d-1}(1)},\qquad -1\le u\le 1, \end{aligned}$$
(3.6)

where \(c_{d-1}\) is the normalization constant defined by its reciprocal

$$\begin{aligned} c_{d-1}^{-1} = \int _{-1}^1(1-u^2)^{d-\frac{3}{2}}\textrm{d}u = \frac{\Gamma \left( \frac{1}{2}\right) \Gamma \left( d-\frac{1}{2}\right) }{(d-1)!}. \end{aligned}$$

Proof

For \(d > 2\) and fixed \({\theta }\in {{\mathbb {T}}}^d\), the function \(u\mapsto M_{d-1}(u | \cos {\theta }_1, \ldots , \cos {\theta }_d)\) has support in \((-1,1)\) and has \(d-2\) continuous derivatives; it follows that \(m_{n,d}\) is continuous and \(m_{n,d}^{(j)}(\pm 1) = 0\) for \(j = 0,1,\ldots , d-2\). We assume that \(m_{n,d}\) has the series expansion

$$\begin{aligned} m_{n,d} (u) = \frac{c_{d-1}}{(d-1)!} (1-u^2)^{d-\frac{3}{2}} \sum _{k=0}^\infty a_k^n \frac{C_{n+2k}^{d-1}(u)}{C_{n+2k}^{d-1}(1)}, \end{aligned}$$

where the coefficients \(a_k^n\) are to be determined by the biorthogonality, and the choice of index \(n+2k\) comes from (3.1) and the parity of \(h_{n,d}\). Now, the orthogonality of the Gegenbauer polynomials is equivalent to

$$\begin{aligned} c_{d-1} \int _{-1}^1 \frac{C_n^{d-1}(u)}{C_n^{d-1}(1)} Z_m^{d-1}(u) (1-u^2)^{d-\frac{3}{2}}\,\textrm{d}u = \delta _{n,m}. \end{aligned}$$
(3.7)

By the second explicit formula for \(h_{n,d}\) in (2.6), the term of highest degree in \(h_{n,d}\) is \((d-1)!Z_n^{d-1}\), whereas the term of lowest degree in \(m_{n,d}\) contains \(C_n^{d-1}\), namely the term \(k = 0\) in the sum. Hence the orthogonality (3.7) implies that the identity (3.1) for \(n' = n\) becomes

$$\begin{aligned} a_0^n = \delta _{n,n} = 1,\quad \forall n\in {{\mathbb {N}}}_0; \end{aligned}$$

moreover, for \(n'=n+2\ell \) and \(\ell = 1,2,3,\ldots \), (3.1) becomes

$$\begin{aligned} 0 = \int _{-1}^1 m_{n,d}(u) h_{n+2\ell ,d}(u)\,\textrm{d}u = (d-1)! \sum _{j=0}^{d-1} (-1)^j \left( {\begin{array}{c}d-1\\ j\end{array}}\right) a_{\ell -j}^n. \end{aligned}$$

Thus, using \(a_j^n = 0\) for \(j < 0\) and recalling that \(a_0^n = 1\), we see that the \(a_j^n\) satisfy

$$\begin{aligned} \sum _{j=0}^{\ell } (-1)^j\left( {\begin{array}{c}d-1\\ j\end{array}}\right) a_{\ell -j}^n = \delta _{0,\ell }, \qquad \ell = 0,1,2,\ldots . \end{aligned}$$
(3.8)

It shows, in particular, that the \(a_j^n\) are independent of n and that they can be determined recursively, so that the solution is unique. It turns out that the solution of (3.8) is given explicitly by \(a_j = (d-1)_j / j!\). To verify that this is indeed the case, we write

$$\begin{aligned} \left( {\begin{array}{c}d-1\\ j\end{array}}\right) = (-1)^j (-d + 1)_j /j! \quad \hbox {and}\quad a_{\ell -j} = \frac{ (d-1)_{\ell -j}}{(\ell -j)!} = \frac{(d-1)_\ell (-\ell )_j}{\ell ! (2-\ell -d)_j}, \end{aligned}$$

where the second identity follows from \((-x)_{\ell -j} = (-1)^j (-x)_{\ell }/(1-\ell +x)_j\), so that the left-hand side of (3.8) can be written as a hypergeometric function, namely

$$\begin{aligned} \sum _{j=0}^{\ell } (-1)^j\left( {\begin{array}{c}d-1\\ j\end{array}}\right) a_{\ell -j}^n = \frac{(d-1)_\ell }{\ell !} {}_2F_1\left( \begin{matrix} -\ell , \, 1-d \\ 2-d-\ell \end{matrix};1\right) = \frac{(d-1)_\ell (1-\ell )_\ell }{\ell ! (2-d-\ell )_\ell }, \end{aligned}$$

where the last step follows from the Chu-Vandermonde identity [1, p. 67]. Since \((-\ell +1)_\ell = 0\) for \(\ell \in {{\mathbb {N}}}\), this verifies that \(a_j = (d-1)_j / j!\) is the solution of (3.8). \(\square \)
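
The recursion (3.8) and its closed-form solution can also be verified with exact rational arithmetic; in the following Python sketch the helper names are illustrative, and the check covers indices \(\ell \ge d\) as well, which is the range used later in the proof of Corollary 3.8.

```python
from fractions import Fraction
from math import comb

def poch(x, n):
    """Pochhammer symbol (x)_n = x(x+1)...(x+n-1) as an exact Fraction."""
    p = Fraction(1)
    for i in range(n):
        p *= x + i
    return p

def solve_recursion(d, L):
    """Solve (3.8) recursively: a_0 = 1 and, for l >= 1, isolate a_l (the j = 0 term)
    from sum_{j=0}^{l} (-1)^j binom(d-1, j) a_{l-j} = 0."""
    a = [Fraction(1)]
    for l in range(1, L + 1):
        a.append(-sum(Fraction((-1) ** j * comb(d - 1, j)) * a[l - j]
                      for j in range(1, l + 1)))
    return a

for d in (3, 4, 6):
    a = solve_recursion(d, 20)
    for j in range(21):
        # closed-form solution a_j = (d-1)_j / j!
        assert a[j] == poch(d - 1, j) / poch(1, j)
```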

Theorem 3.7

For \(d > 2\) and \(-1 \le u \le 1\),

$$\begin{aligned} m_{0,d}(u) \,&= \frac{1}{(2\pi )^d} \int _{{{\mathbb {T}}}^d} M_{d-1}(u | \cos {\theta }_1,\ldots ,\cos {\theta }_d)\,\textrm{d}{\theta }\\&= \frac{\Gamma (\frac{d+1}{2})}{\sqrt{\pi }\Gamma (\frac{d}{2})(d-1)!} (1-u^2)^{\frac{d-2}{2}}. \end{aligned}$$

Proof

Let \(g(u) = (1-u^2)^{- \frac{d-1}{2}}\). We compute the Fourier-Gegenbauer coefficients \({{\hat{g}}}^{d-1}_n\) defined by

$$\begin{aligned} {{\hat{g}}}^{d-1}_n \,&= c_{d-1} \int _{-1}^1 g(t) C_n^{{d-1}}(t) (1-t^2)^{d-\frac{3}{2}}\,\textrm{d}t \\&= c_{d-1} \int _{-1}^1 C_n^{{d-1}}(t) (1-t^2)^{\frac{d-2}{2}}\,\textrm{d}t. \end{aligned}$$

By the parity of \(C_n^{d-1}\), \({{\hat{g}}}^{d-1}_n = 0\) if n is odd. To compute \({{\hat{g}}}_{2n}^{d-1}\), we use the connection coefficients of Gegenbauer polynomials [1, Theorem 7.1.4’, p. 360], which gives

$$\begin{aligned} C_{2n}^{d-1}(t) = \sum _{k=0}^n \frac{(d-1)_{2n-k} (\frac{d-1}{2})_{k} }{(\frac{d-1}{2}+1)_{2n-k} k!} Z_{2n-2k}^{\frac{d-1}{2}}(t). \end{aligned}$$

Using the orthogonality of \(Z_m^{\frac{d-1}{2}}\), we then conclude that

$$\begin{aligned} {{\hat{g}}}^{d-1}_{2n} = \frac{c_{d-1}}{c_{\frac{d-1}{2}} } \frac{(d-1)_{n} (\frac{d-1}{2})_{n} }{(\frac{d-1}{2}+1)_{n} n!} = \frac{c_{d-1}}{c_{\frac{d-1}{2}}} \frac{(d-1)_{n}}{n!} \frac{d-1}{2n+d-1}. \end{aligned}$$

(Note that \(c_\cdot \) is also defined for a non-integer index.) Consequently, the Fourier-Gegenbauer expansion of g is given by

$$\begin{aligned} g (u)= \sum _{n=0}^\infty {{\hat{g}}}_n^{d-1} \frac{C_n^{d-1}(u)}{h_n^{d-1}} = \frac{c_{d-1}}{c_{\frac{d-1}{2}}} \sum _{n=0}^\infty \frac{(d-1)_{n}}{n!} \frac{C_{2n}^{d-1}(u)}{C_{2n}^{d-1}(1)}, \end{aligned}$$

where \(h_n^{\lambda }= c_{\lambda }\int _{-1}^1 [C_n^{\lambda }(t)]^2 (1-t^2)^{{\lambda }-\frac{1}{2}}\,\textrm{d}t\) is the square of the norm of \(C_n^{\lambda }\) in the normalized weighted space \(L^2([-1,1], (1-t^2)^{{\lambda }-\frac{1}{2}})\), and it is equal to

$$\begin{aligned} h_n^{\lambda }= \frac{{\lambda }}{n+{\lambda }} C_n^{\lambda }(1), \qquad n =0,1,2,\ldots . \end{aligned}$$

Comparing this with (3.6) for \(n =0\), we see that

$$\begin{aligned} m_{0,d} (u)= (1-u^2)^{d-\frac{3}{2}} \frac{c_{\frac{d-1}{2}}}{(d-1)!} g(u) = \frac{c_{\frac{d-1}{2}}}{(d-1)!} (1-u^2)^{\frac{d-2}{2}},\end{aligned}$$
(3.9)

which is the stated result. \(\square \)
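
As a sanity check of the coefficient computation, \({{\hat{g}}}_{2n}^{d-1}\) can be evaluated by quadrature, generating \(C_m^{d-1}\) by its three-term recurrence. In the Python sketch below all function names are illustrative; the substitution \(t = \cos \theta \) is used so that the midpoint rule converges rapidly.

```python
import math

def gegenbauer(m, lam, x):
    """C_m^lam(x) via the three-term recurrence
    k*C_k = 2x(k+lam-1)*C_{k-1} - (k+2lam-2)*C_{k-2}."""
    c0, c1 = 1.0, 2.0 * lam * x
    if m == 0:
        return c0
    for k in range(2, m + 1):
        c0, c1 = c1, (2.0 * x * (k + lam - 1) * c1 - (k + 2 * lam - 2) * c0) / k
    return c1

def c_lambda(lam):
    """Normalization constant c_lam = Gamma(lam+1)/(Gamma(1/2)*Gamma(lam+1/2))."""
    return math.gamma(lam + 1) / (math.gamma(0.5) * math.gamma(lam + 0.5))

def g_hat(n, d, npts=20_000):
    """g_hat_{2n}^{d-1} = c_{d-1} * integral of C_{2n}^{d-1}(t)(1-t^2)^{(d-2)/2},
    computed with t = cos(theta) and the midpoint rule."""
    h = math.pi / npts
    s = sum(gegenbauer(2 * n, d - 1, math.cos((i + 0.5) * h))
            * math.sin((i + 0.5) * h) ** (d - 1) for i in range(npts))
    return c_lambda(d - 1) * h * s

def g_hat_closed(n, d):
    """Closed form (c_{d-1}/c_{(d-1)/2}) * (d-1)_n / n! * (d-1)/(2n+d-1)."""
    poch = math.gamma(d - 1 + n) / math.gamma(d - 1)  # Pochhammer (d-1)_n
    return (c_lambda(d - 1) / c_lambda((d - 1) / 2)
            * poch / math.factorial(n) * (d - 1) / (2 * n + d - 1))

for d in (3, 4):
    for n in range(4):
        assert abs(g_hat(n, d) - g_hat_closed(n, d)) < 1e-6
```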

In the case of \(d =2\), it is easy to see that the explicit formula (3.2) implies the relation

$$\begin{aligned} m_{n,2}(u) - m_{n+2,2}(u) = \frac{2}{\pi } \frac{\sin ((n+1) {\alpha })}{n+1} = \frac{2}{\pi } \sqrt{1-u^2} \frac{U_n(u)}{U_n(1)}, \quad u = \cos {\alpha }. \end{aligned}$$
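
The last equality rests on \(\sin ((n+1){\alpha }) = \sin {\alpha }\, U_n(\cos {\alpha })\) and \(U_n(1) = n+1\), which a short Python sketch can confirm (the helper name cheb_u is illustrative; it implements the standard Chebyshev recurrence).

```python
import math

def cheb_u(n, x):
    """Chebyshev polynomial of the second kind via
    U_0 = 1, U_1 = 2x, U_k = 2x*U_{k-1} - U_{k-2}."""
    u0, u1 = 1.0, 2.0 * x
    if n == 0:
        return u0
    for _ in range(2, n + 1):
        u0, u1 = u1, 2.0 * x * u1 - u0
    return u1

alpha = 0.7
u = math.cos(alpha)
for n in range(8):
    lhs = (2 / math.pi) * math.sin((n + 1) * alpha) / (n + 1)
    rhs = (2 / math.pi) * math.sqrt(1 - u * u) * cheb_u(n, u) / cheb_u(n, 1.0)
    assert abs(lhs - rhs) < 1e-12
    assert cheb_u(n, 1.0) == n + 1
```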

The following corollary gives a d-dimensional version of this recursive relation for \(m_{n,d}\).

Corollary 3.8

For \(d \ge 2\), \(n=0,1,2,\ldots \),

$$\begin{aligned} (d-1)! \sum _{j=0}^{d-1} (-1)^j \left( {\begin{array}{c}d-1\\ j\end{array}}\right) m_{n+2j,d}(u) = c_{d-1}(1-u^2)^{d-\frac{3}{2}} \frac{C_n^{d-1}(u)}{C_n^{d-1}(1)}. \end{aligned}$$
(3.10)

Proof

Using the explicit formula of \(m_{n,d}\) in (3.6), we obtain

$$\begin{aligned}&(d-1)! \sum _{j=0}^{d-1} (-1)^j \left( {\begin{array}{c}d-1\\ j\end{array}}\right) m_{n+2j,d} (u) \\&\qquad = (1-u^2)^{d-\frac{3}{2}} c_{d-1} \sum _{j=0}^{d-1} (-1)^j \left( {\begin{array}{c}d-1\\ j\end{array}}\right) \sum _{k= j}^\infty \frac{(d-1)_{k-j}}{(k-j)!} \frac{C_{n+2k}^{d-1}(u)}{C_{n+2k}^{d-1}(1)} \\&\qquad = (1-u^2)^{d-\frac{3}{2}} c_{d-1} \sum _{k=0}^\infty \sum _{j=0}^{d-1} (-1)^j \left( {\begin{array}{c}d-1\\ j\end{array}}\right) \frac{(d-1)_{k-j}}{(k-j)!} \frac{C_{n+2k}^{d-1}(u)}{C_{n+2k}^{d-1}(1)}, \end{aligned}$$

where we have used the convention that \((d-1)_{k-j} = 0\) if \(j > k\) and \(\left( {\begin{array}{c}d-1\\ j\end{array}}\right) =0\) if \(j > d-1\). The stated result then follows from (3.8). \(\square \)

In particular, the identity (3.10) shows that the finite combination of \(m_{n,d}\) on the left-hand side is a polynomial of degree n multiplied by \((1-u^2)^{d-\frac{3}{2}}\). The recursive formula can be used to determine \(m_{n,d}\) once the first \(2(d-1)\) elements \(m_{0,d}, \ldots , m_{2d-3,d}\) are known. However, the explicit expression of \(m_{0,d}\) in (3.9) contains the factor \((1-u^2)^{\frac{d-2}{2}}\), whose power differs from the power \(d - \frac{3}{2}\) on the right-hand side of (3.10); hence an analog of (3.3) is unlikely to hold for \(d>2\), and in particular \(m_{n,d}\) will not be a polynomial for even n when \(d >2\).

4 Positive definite \(\ell ^1\)-invariant functions

A function \(f\in C({{\mathbb {T}}}^d)\) is a positive definite function (PDF) if for every \(N \in {{\mathbb {N}}}\) and every set \(\Xi _N = \{\Theta _1,\ldots , \Theta _N\}\) of pairwise distinct points in \({{\mathbb {T}}}^d\), the matrix

$$\begin{aligned} f[\Xi _N]= \left[ f(\Theta _i - \Theta _j) \right] _{i,j = 1}^N \end{aligned}$$

is nonnegative definite; it is called a strictly positive definite function (SPDF) if the matrix is always positive definite. Let \(\Phi ({{\mathbb {T}}}^d)\) denote the set of PDFs on \({{\mathbb {T}}}^d\). By the definition of PDF, the space \(\Phi ({{\mathbb {T}}}^d)\) is closed under linear combinations with nonnegative coefficients; that is, if \(f, g \in \Phi ({{\mathbb {T}}}^d)\) and \(c_1, c_2\) are nonnegative constants, then \(c_1 f + c_2 g \in \Phi ({{\mathbb {T}}}^d)\). The PDFs on \({{\mathbb {T}}}^d\) are characterized by the following theorem:

Theorem 4.1

A function \(f \in C({{\mathbb {T}}}^d)\) is a PDF on \({{\mathbb {T}}}^d\) if and only if the Fourier coefficients \({{\hat{f}}}_{\alpha }\) are nonnegative for all \({\alpha }\in {{\mathbb {N}}}_0^d\).

One direction of the theorem follows readily from the closedness of \(\Phi ({{\mathbb {T}}}^d)\) and from the fact that the exponential functions \(\textrm{e}^{\textrm{i}{\alpha }\cdot {\theta }} \in \Phi ({{\mathbb {T}}}^d)\) for all \({\alpha }\in {{\mathbb {N}}}_0^d\), since

$$\begin{aligned} \sum _{i=1}^N \sum _{j=1}^N c_i c_j\textrm{e}^{\textrm{i}{\alpha }\cdot (\Theta _i - \Theta _j)} = \left| \sum _{i=1}^N c_i \textrm{e}^{\textrm{i}{\alpha }\cdot \Theta _i } \right| ^2 \ge 0. \end{aligned}$$

In the other direction, if f is a PDF on \({{\mathbb {T}}}^d\), then by the periodicity of f,

$$\begin{aligned} \int _{{{\mathbb {T}}}^d} f (\theta ) \,\textrm{d}\theta = \frac{1}{(2 \pi )^d} \int _{{{\mathbb {T}}}^d} \int _{{{\mathbb {T}}}^d} f (\theta - \phi ) \,\textrm{d}\theta \,\textrm{d}\phi . \end{aligned}$$

The right-hand side is nonnegative if f is a trigonometric polynomial, as can be seen by applying a positive cubature rule for the integral over \({{\mathbb {T}}}^d\) and using the positive definiteness of f. In particular, this shows that the left-hand side integral is nonnegative. Since \(\textrm{e}^{\textrm{i}{\alpha }\cdot {\theta }}\) is PDF, it follows from Schur’s theorem that

$$\begin{aligned} {{\hat{f}}}_{\alpha }= \frac{1}{(2 \pi )^d} \int _{{{\mathbb {T}}}^d} f({\theta }) \textrm{e}^{-\textrm{i}{\alpha }\cdot {\theta }} \,\textrm{d}{\theta }\ge 0, \qquad \forall {\alpha }\in {{\mathbb {N}}}_0^d. \end{aligned}$$
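
The mechanism behind this argument, namely that nonnegative Fourier coefficients force the quadratic form \(\sum _{i,j} c_i c_j f(\Theta _i - \Theta _j)\) to be nonnegative, can be illustrated numerically. The Python sketch below uses a hypothetical example on \({{\mathbb {T}}}^2\) with coefficients \({{\hat{f}}}_{\alpha }= r^{|{\alpha }|_1}\) (an assumption chosen for illustration only, not a function from the text).

```python
import cmath
import math
import random

random.seed(0)

def f(theta, deg=4, r=0.5):
    """A hypothetical example on T^2 with nonnegative coefficients:
    f(theta) = sum over |alpha|_1 <= deg of r^{|alpha|_1} * exp(i alpha.theta)."""
    total = 0.0
    for a1 in range(-deg, deg + 1):
        for a2 in range(-(deg - abs(a1)), deg - abs(a1) + 1):
            total += (r ** (abs(a1) + abs(a2))
                      * cmath.exp(1j * (a1 * theta[0] + a2 * theta[1]))).real
    return total

def quad_form(points, coeffs):
    """sum_{i,j} c_i c_j f(Theta_i - Theta_j); nonnegative whenever f is a PDF."""
    return sum(ci * cj * f((pi[0] - pj[0], pi[1] - pj[1]))
               for pi, ci in zip(points, coeffs)
               for pj, cj in zip(points, coeffs))

# random point sets and real coefficients: the quadratic form stays nonnegative
for _ in range(20):
    pts = [(random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi))
           for _ in range(5)]
    c = [random.uniform(-1, 1) for _ in range(5)]
    assert quad_form(pts, c) >= -1e-9
```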

Recall that a function \(f: {{\mathbb {T}}}^d \rightarrow {{\mathbb {R}}}\) is \(\ell ^1\)-invariant if \({{\hat{f}}}_{\alpha }= {{\hat{f}}}_{\beta }\) for all \(|{\alpha }| = |{\beta }|\), \({\alpha },{\beta }\in {{\mathbb {N}}}_0^d\). These functions are given as follows.

Theorem 4.2

A function \(f \in L^1({{\mathbb {T}}}^d)\) is \(\ell ^1\)-invariant if and only if

$$\begin{aligned} f ({\theta }_1,\ldots ,{\theta }_d) = [\cos {\theta }_1, \ldots , \cos {\theta }_d] F_d, \end{aligned}$$
(4.1)

where \(F_d: [-1,1]\rightarrow {{\mathbb {R}}}\) is a \((d-1)\)-times differentiable function satisfying

$$\begin{aligned} F_d(\cos {\theta }) = 2 (-1)^{\lfloor \frac{d+1}{2}\rfloor } (\sin {\theta })^{d-1} \sum _{n=0}^\infty a_n {\left\{ \begin{array}{ll} - \sin (n {\theta }) &{} \hbox {for }d \hbox {even}, \\ \cos (n {\theta }) &{} \hbox {for }d\hbox { odd}, \end{array}\right. } \end{aligned}$$
(4.2)

with a sequence of real numbers \(\{a_n\}_{n\ge 0}\).

Proof

If f is the divided difference (4.1) of the function \(F_d\) given in (4.2), then

$$\begin{aligned} f({\theta }) = \sum _{n=0}^\infty a_n [\cos {\theta }_1,\ldots , \cos {\theta }_d] H_{n,d} = \frac{1}{2} \sum _{n=0}^\infty a_n E_n({\theta }). \end{aligned}$$

By the definition of \(E_n({\theta })\), it follows readily that \({{\hat{f}}}_{\alpha }= {{\hat{f}}}_{\beta }\) if \(|{\alpha }| = |{\beta }|\). Moreover, when its knots coalesce, the divided difference becomes a derivative, as mentioned before. By (2.3) and the \((d-1)\)-times differentiability of \(F_d\), f is continuous.

In the other direction, if f is \(\ell ^1\)-invariant, then its Fourier series is given by (2.1). By (2.4), we obtain that \(f ({\theta }_1,\ldots ,{\theta }_d) = [\cos {\theta }_1, \ldots , \cos {\theta }_d] F\) with F given by

$$\begin{aligned} F(u) = \sum _{n=0}^\infty {{\hat{f}}}_n H_{n,d}(u), \end{aligned}$$

so that F is of the form (4.2) with \(a_n = \widehat{f}_n\) by (2.5). The continuity of f requires, by (2.3), that F has continuous derivatives up to order \(d-1\). \(\square \)

If f is \(\ell ^1\)-invariant, then \({{\hat{f}}}_{\alpha }= {{\hat{f}}}_{|{\alpha }|}\), so that \({{\hat{f}}}_{\alpha }\ge 0\) if and only if \({{\hat{f}}}_n \ge 0\) in (2.1). Consequently, the following characterization of PDFs holds.

Theorem 4.3

Let \(f \in C({{\mathbb {T}}}^d)\) be \(\ell ^1\)-invariant. Then f is a PDF on \({{\mathbb {T}}}^d\) if and only if f is given by (4.1) and (4.2) with \({{\hat{f}}}_n \ge 0\) for all \(n \in {{\mathbb {N}}}_0\).

The SPDFs have been characterized in [4, Theorem 1], where it is proved that a PDF is also an SPDF if and only if the set of indices \({\mathcal {G}}=\lbrace \alpha \in {{\mathbb {Z}}}^{d} \mid {\hat{f}}_{\alpha }>0 \rbrace \) intersects all the translations of each subgroup of \({{\mathbb {Z}}}^{d}\) of the form

$$\begin{aligned} (a_1{{\mathbb {Z}}},a_2{{\mathbb {Z}}},\ldots ,a_d {{\mathbb {Z}}}),\quad a_1,a_2,\ldots ,a_d \in {{\mathbb {N}}}. \end{aligned}$$

More precisely, for every pair of vectors \(\gamma \in {{\mathbb {Z}}}^{d}\) and \(\beta \in {{\mathbb {N}}}^{d}\), there exists a \(z\in {{\mathbb {Z}}}^{d}\) such that the index \(\alpha \) with \(\alpha _j=\gamma _j+z_j \beta _j\), \(j=1,\ldots ,d\), satisfies \({\hat{f}}_{\alpha }>0\). For \(\ell ^1\)-invariant functions, the SPDFs are characterized below.

Theorem 4.4

Let \(f \in C({{\mathbb {T}}}^d)\) be \(\ell ^1\)-invariant. Then f is an SPDF on \({{\mathbb {T}}}^d\) if and only if f is given by (4.1) and (4.2), f is a PDF, and for every pair \(n, \ell \in {{\mathbb {N}}}\) with \(n<\ell \) there is an \(m\in {{\mathbb {N}}}\) with \({\hat{f}}_{n+m\ell }>0\) or \({\hat{f}}_{(\ell -n)+m\ell }>0\).

Proof

We prove that the given condition is equivalent to the condition of [4, Theorem 1] under the assumption that f is \(\ell ^1\)-invariant.

We start with sufficiency. Suppose f is a PDF and satisfies the condition of the theorem. For any two vectors \(\gamma \in {{\mathbb {Z}}}^{d}\) and \(\beta \in {{\mathbb {N}}}^{d}\), we can assume without loss of generality that \(0\le \gamma _j\le \beta _j\), \(j=1,\ldots ,d\). We define \(n=\vert \gamma \vert \) and \(\ell =\vert \beta \vert \). Then there exists an \(m\in {{\mathbb {N}}}\) with \({\hat{f}}_{n+\ell m}>0\) or \({\hat{f}}_{(\ell -n)+\ell m}>0\). If \({\hat{f}}_{n+\ell m}>0\), then

$$\begin{aligned} {\hat{f}}_{\gamma +m \beta }={\hat{f}}_{n+\ell m}>0 \end{aligned}$$

since \(\vert \gamma +m \beta \vert = n+ m\ell \). Whereas if \({\hat{f}}_{(\ell -n)+\ell m}>0\), we can choose \(m'=m+1\). Now \(\vert \gamma -m' \beta \vert =\vert \gamma -\beta - m \beta \vert =(\ell -n) +m \ell \) implies

$$\begin{aligned} {\hat{f}}_{\gamma -m'\beta }={\hat{f}}_{(\ell -n)+\ell m}>0. \end{aligned}$$

For the necessity, suppose the condition of the theorem does not hold; then there exist \(n, \ell \in {{\mathbb {N}}}\) with \(n < \ell \) such that \({\hat{f}}_{n+m\ell }=0\) and \({\hat{f}}_{(\ell -n)+\ell m}=0\) for all \(m\in {{\mathbb {N}}}\). Set

$$\begin{aligned} \gamma =(n,0,\ldots ,0)\in {{\mathbb {Z}}}^{d},\quad \beta =(\ell ,\ldots ,\ell )\in {{\mathbb {Z}}}^{d}. \end{aligned}$$

Then for any \(z\in {{\mathbb {Z}}}^{d}\) define \(\alpha \in {{\mathbb {Z}}}^{d}\), \(\alpha _j=\gamma _j+z_j \beta _j\), so that with \(m=\vert z\vert \),

$$\begin{aligned} \vert \alpha \vert =\vert n+ z_1 \ell \vert + \ell \sum _{j=2}^{d} \vert z_j\vert = {\left\{ \begin{array}{ll} n+m\ell ,&{} z_1\ge 0, \\ (\ell -n)+(m-1)\ell ,&{} z_1<0. \end{array}\right. } \end{aligned}$$

Hence there would be no coefficient \({\hat{f}}_{\alpha }>0\) with \(\alpha _j=\gamma _j+z_j \beta _j\) for this choice of \(\gamma \) and \(\beta \), contradicting strict positive definiteness. \(\square \)
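
The case analysis for \(\vert \alpha \vert \) above can be double-checked by brute force; the Python sketch below (the function names are illustrative) compares \(\vert \gamma + z\circ \beta \vert _1\) with the predicted value for random choices of z.

```python
import random

random.seed(1)

def alpha_norm(n, ell, z):
    """|alpha|_1 for alpha = gamma + z*beta with gamma = (n,0,...,0), beta = (ell,...,ell)."""
    return abs(n + z[0] * ell) + ell * sum(abs(zj) for zj in z[1:])

def predicted(n, ell, z):
    """Case split from the proof, with m = |z|_1."""
    m = sum(abs(zj) for zj in z)
    return n + m * ell if z[0] >= 0 else (ell - n) + (m - 1) * ell

for _ in range(1000):
    d = random.randint(2, 6)
    ell = random.randint(2, 9)
    n = random.randint(1, ell - 1)   # 0 < n < ell, as in the proof
    z = [random.randint(-4, 4) for _ in range(d)]
    assert alpha_norm(n, ell, z) == predicted(n, ell, z)
```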