1 Introduction

The Fueter mapping theorem and the generalized Cauchy–Kovalevskaya (GCK) extension are two main tools in quaternionic, and more generally, in Clifford analysis, both allowing one to get axially regular functions, i.e. null solutions of the Cauchy–Fueter operator in \( {\mathbb {R}}^4\), starting from analytic functions of one real or complex variable.

The Fueter mapping theorem is a two-step procedure giving an axially regular function starting from a holomorphic function of one complex variable. This is achieved by using two operators. The first one is the so-called slice operator that extends holomorphic functions of one complex variable to slice hyperholomorphic functions. The theory of slice hyperholomorphic functions is nowadays well developed, see [23, 24]. The second operator is the Laplace operator in four real variables which maps slice hyperholomorphic functions to axially regular functions. On the other hand, the generalized CK-extension is defined in terms of powers of \( {\underline{x}} \partial _{x_0}\), where \(x_0\) and \( {\underline{x}}\) are the real and imaginary parts of a quaternion, respectively.

The two maps are not the same: the generalized CK-extension is an isomorphism, whereas the Fueter map is only surjective. In [31] a connection between the two extension operators has been proved. Furthermore, in [31] the authors showed that although for the exponential, trigonometric and hyperbolic functions the two extension maps coincide, the two maps differ in most cases, for example when acting on the rational functions.

In the framework of rational functions, we recall that in [18, 38] it is explained how the state space theory of linear systems gave rise to the notion of realization, which is a representation of a rational function. In the complex setting a realization in a neighbourhood of the origin is defined as

$$\begin{aligned} R(z)= D+zC(I-zA)^{-1}B,\qquad {z\in {\mathbb {C}},} \end{aligned}$$

where A, B, C and D are matrices of suitable dimensions. Moreover, the inverse of a realization is still a realization when D is square and invertible, as well as the sum and the product of two realizations of compatible sizes. See [18] and the beginning of Sect. 3 in the present paper.
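
As a quick illustration of these operations, the following minimal numerical sketch (with illustrative scalar data, not taken from the paper) evaluates a realization of \(R(z)=\frac{1}{1-z}\) and checks the inverse-realization formula recalled above.

```python
import numpy as np

# Toy realization of R(z) = 1/(1-z): D = C = A = B = 1 (illustrative choices).
D, C, A, B = (np.array([[v]]) for v in (1.0, 1.0, 1.0, 1.0))

def R(z):
    return D + z * C @ np.linalg.inv(np.eye(1) - z * A) @ B

z = 0.25
print(np.allclose(R(z), 1.0 / (1.0 - z)))               # True

# Inverse of a realization: R^{-1}(z) = D^{-1} - z D^{-1} C (I - z Ax)^{-1} B D^{-1},
# with Ax = A - B D^{-1} C.
Dinv = np.linalg.inv(D)
Ax = A - B @ Dinv @ C
Rinv = Dinv - z * Dinv @ C @ np.linalg.inv(np.eye(1) - z * Ax) @ B @ Dinv
print(np.allclose(Rinv, np.linalg.inv(R(z))))           # True
```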

In this paper we shall introduce the counterpart of the realization theory in the regular setting through the Fueter theorem and the generalized CK-extension. The main obstacles to achieve this goal are

  • a suitable replacement of monomials in the framework of axially regular functions,

  • an appropriate product between axially regular functions.

In order to explain how to overcome these issues, we fix the following notations. The set \({\mathbb {H}}\) of real quaternions is defined as:

$$\begin{aligned} {\mathbb {H}}:=\{x=x_0+e_1x_1+e_2x_2+e_3x_3 \,| \, x_0, x_1,x_2, x_3 \in {\mathbb {R}}\}, \end{aligned}$$

where the imaginary units satisfy the relations

$$\begin{aligned} e_1^2= & {} e_2^2=e_3^2=-1, \quad \hbox {and} \quad e_1e_2=-e_2e_1=e_3, \, e_2e_3=-e_3e_2=e_1,\\ e_3e_1= & {} -e_1e_3=e_2. \end{aligned}$$

We can also write a quaternion as \(x=x_0+ {\underline{x}}\), where we denote by \(x_0\) its real part and by \( {\underline{x}}:=e_1x_1+e_2x_2+e_3x_3\) its imaginary part. The conjugate of a quaternion \(x \in {\mathbb {H}}\) is defined as \({\bar{x}}=x_0- {\underline{x}}\) and its modulus is given by \(|x|= \sqrt{x {\bar{x}}}=\sqrt{x_0^2+x_1^2+x_2^2+x_3^2}\). By the symbol \({\mathbb {S}}\) we denote the sphere of purely imaginary unit quaternions defined as

$$\begin{aligned} {\mathbb {S}}:= \{ {\underline{x}}=e_1x_1+e_2x_2+e_3x_3 \, | \, x_1^2+x_2^2+x_3^2=1\}. \end{aligned}$$

We observe that if \(I \in {\mathbb {S}}\) then \(I^{2}=-1\). This means that I is an imaginary unit and that

$$\begin{aligned} {\mathbb {C}}_I:=\{u+Iv \,| \, u,v \in {\mathbb {R}}\}, \end{aligned}$$

is an isomorphic copy of the complex numbers.
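
The relations above can be checked directly; the short sketch below does so (our own illustration, assuming sympy's Quaternion class and identifying \(e_1, e_2, e_3\) with the quaternion units i, j, k).

```python
# Minimal check of the defining relations of H, assuming sympy's Quaternion:
# Quaternion(a, b, c, d) = a + b*e1 + c*e2 + d*e3.
from sympy import Rational, sqrt
from sympy.algebras.quaternion import Quaternion

e1, e2, e3 = Quaternion(0, 1, 0, 0), Quaternion(0, 0, 1, 0), Quaternion(0, 0, 0, 1)

print(e1 * e1 == Quaternion(-1, 0, 0, 0))                      # True
print(e1 * e2 == e3, e2 * e1 == Quaternion(0, 0, 0, -1))       # True True
print(e2 * e3 == e1, e3 * e1 == e2)                            # True True

# conjugate and modulus of x = x0 + x1 e1 + x2 e2 + x3 e3
x = Quaternion(1, 2, -1, 3)
xbar = Quaternion(x.a, -x.b, -x.c, -x.d)                       # x0 - underline(x)
print(x * xbar == Quaternion(15, 0, 0, 0), x.norm() == sqrt(15))   # True True

# any I in S is an imaginary unit: I^2 = -1
I = Quaternion(0, Rational(2, 3), Rational(2, 3), Rational(1, 3))  # |I| = 1
print(I * I == Quaternion(-1, 0, 0, 0))                        # True
```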

In quaternionic analysis, the Taylor expansion of a regular function is given in terms of the well-known Fueter polynomials, which play the role of the monomials \(x_0^{\alpha _0}x_1^{\alpha _1} \ldots x_n^{\alpha _n}\) in several real variables. An easy way to describe regular functions is through axially regular functions, see [46]. Indeed, for axially regular functions a simpler approach than that of the Fueter polynomials is available, namely the one based on the Clifford-Appell polynomials.

These polynomials are defined as

$$\begin{aligned} {\mathcal {Q}}_m(x)= \frac{2}{(m+1)(m+2)}\sum _{\ell =0}^m (m-\ell +1) x^{m-\ell } {\bar{x}}^{\ell }, \end{aligned}$$
(1.1)

and they were investigated in [21, 22]. We note that they arise as the action of the Fueter map on the monomials \(x^k\), \(x \in {\mathbb {H}}\), see [30]. Any axially regular function in a neighbourhood of the origin can be written as a power series in terms of the polynomials \( {\mathcal {Q}}_m(x)\) of the form

$$\begin{aligned} f(x)=\sum _{n=0}^\infty {\mathcal {Q}}_n(x) f_n, \qquad f_n \in {\mathbb {H}}. \end{aligned}$$
(1.2)
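
For concreteness, the sketch below (our own illustration, again assuming sympy's Quaternion class) evaluates (1.1) at a sample quaternion; one can check from (1.1) that \({\mathcal {Q}}_0=1\), \({\mathcal {Q}}_1(x)=x_0+\frac{{\underline{x}}}{3}\), and that on the real line \({\mathcal {Q}}_m\) reduces to \(x_0^m\).

```python
# Evaluating the Clifford-Appell polynomials (1.1) at a sample quaternion.
# Our own sketch, assuming sympy's Quaternion class (e1, e2, e3 print as i, j, k).
from sympy import Rational
from sympy.algebras.quaternion import Quaternion

def qpow(q, n):                           # n-fold quaternionic product
    out = Quaternion(1, 0, 0, 0)
    for _ in range(n):
        out = out * q
    return out

def Q(m, x):                              # Clifford-Appell polynomial (1.1)
    xbar = Quaternion(x.a, -x.b, -x.c, -x.d)
    out = Quaternion(0, 0, 0, 0)
    for l in range(m + 1):
        T = Rational(2 * (m - l + 1), (m + 1) * (m + 2))
        out = out + qpow(x, m - l) * qpow(xbar, l) * Quaternion(T, 0, 0, 0)
    return out

x = Quaternion(Rational(1, 2), 1, -2, 3)
print(Q(0, x))                                      # 1
print(Q(1, x))                                      # x0 + underline(x)/3
print(Q(3, Quaternion(Rational(1, 2), 0, 0, 0)))    # restricted to R: x0^3 = 1/8
```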

As we discussed above, another issue is the fact that one needs a suitable product between axially regular functions, since the pointwise product evidently spoils the regularity. A well known product between regular functions is the so-called CK-product. This product is defined for regular functions f and g as

$$\begin{aligned} f(x_0, {\underline{x}}) \odot _{CK} g(x_0, {\underline{x}})=CK \left[ f(0, {\underline{x}}) \cdot g(0, {\underline{x}})\right] . \end{aligned}$$

In [10] a CK-product between Clifford-Appell polynomials is performed. Precisely, it is given by

$$\begin{aligned} \left( {\mathcal {Q}}_k \odot _{CK} {\mathcal {Q}}_{s}\right) (x)= \frac{c_k c_s}{c_{k+s}} {\mathcal {Q}}_{k+s}(x). \end{aligned}$$
(1.3)

The drawback of the previous formula is the presence of the constant \(c_k\), depending on the degree k, which makes the formula unsuitable for some types of computations.

In [32] a new kind of product is defined between axially regular functions: the so-called generalized CK-product. This gives a more natural formula for the multiplication of the Clifford-Appell polynomials:

$$\begin{aligned} {\mathcal {Q}}_{m}(x) \odot _{GCK} {\mathcal {Q}}_{\ell }(x)= {\mathcal {Q}}_{m+ \ell }(x). \end{aligned}$$

Another advantage of the generalized CK-product is that it is a convolution (also called Cauchy product) of the coefficients of the Clifford-Appell polynomials.

The polynomials \( {\mathcal {Q}}_m(x)\) are also useful to define a counterpart of the Hardy space in the quaternionic unit ball for axially regular functions. This space consists of functions of the form (1.2) which satisfy the condition \( \sum _{n=0}^\infty |f_n|^2 <\infty \). In this context, the reproducing kernel of the Hardy space is given by

$$\begin{aligned} {\mathcal {K}}(x,y)= \sum _{m=0}^\infty {\mathcal {Q}}_m(x)\overline{{\mathcal {Q}}_m(y)}. \end{aligned}$$

The notion of Clifford-Appell polynomials and generalized CK-product paved the way to provide a definition of Schur multipliers in this setting.

In the literature, Schur multipliers are related to several applications: inverse scattering (see [12, 13, 20, 26]), fast algorithms (see [42, 43]), interpolation problems (see [33]) and several other ones.

In complex analysis a function s defined in the unit disk \( {\mathbb {D}}\) is a Schur multiplier if and only if the kernel

$$\begin{aligned} k_s(z,w)= \sum _{n=0}^\infty z^n(1- s(z)\overline{s(w)}) {\overline{w}}^n \end{aligned}$$

is positive definite in the open unit disk. Recently a generalization of Schur multipliers in the slice hyperholomorphic setting has been provided, see [3, 5]. The notion we shall consider in this paper is the following: a quaternionic-valued function S defined in the unit ball is a Schur multiplier if and only if the kernel

$$\begin{aligned} K_S(x,y)= \sum _{n=0}^\infty \left( {\mathcal {Q}}_{n}(x) \overline{{\mathcal {Q}}_n(y)}-(S \odot _{GCK} {\mathcal {Q}}_n)(x) \overline{(S \odot _{GCK} {\mathcal {Q}}_n)(y)}\right) \end{aligned}$$

is positive definite in the unit ball of \({\mathbb {R}}^4\).
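
In the classical complex case the kernel \(k_s\) above sums to \(\frac{1-s(z)\overline{s(w)}}{1-z\overline{w}}\), and positivity can be probed numerically on finite sets of points. The sketch below (our own illustration, not part of the paper's results) builds the corresponding Gram matrices for a Schur function and for a non-Schur function.

```python
import numpy as np

# Gram-matrix test of the classical criterion:
# k_s(z, w) = (1 - s(z) conj(s(w))) / (1 - z conj(w)).
def kernel_matrix(s, pts):
    Z = np.array(pts)
    return (1 - np.outer(s(Z), np.conj(s(Z)))) / (1 - np.outer(Z, np.conj(Z)))

pts = [0.0, 0.1 + 0.2j, -0.3 + 0.4j, 0.5 - 0.1j, -0.2 - 0.5j, 0.6]

s = lambda z: z**2 / 2           # a Schur function: |s(z)| <= 1 on the disk
K = kernel_matrix(s, pts)
print(np.linalg.eigvalsh(K).min() >= -1e-10)    # True: positive semidefinite

g = lambda z: 2 * z              # not a Schur function on the disk
G = kernel_matrix(g, pts)
print(np.linalg.eigvalsh(G).min() >= -1e-10)    # False: positivity fails
```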

With this definition, most of the characterizations of Schur multipliers can be adapted to the non-commutative framework of Clifford-Appell polynomials. We note that in the quaternionic matrix case, being a Schur function is not equivalent to taking contractive values; see [4, (62.38) p. 1767].

As a particular example of Schur multiplier, we define the so-called Clifford-Appell Blaschke factor by

$$\begin{aligned} {\mathcal {B}}_a(x)=(1- {\mathcal {Q}}_1(x) {\bar{a}})^{-\odot _{GCK}} \odot _{GCK} (a- {\mathcal {Q}}_{1}(x)) \frac{{\bar{a}}}{|a|}, \end{aligned}$$

with \(a \in {\mathbb {H}}\) such that \(|a|<1\). A different notion of Blaschke factor is obtained by applying the Fueter map to the slice hyperholomorphic Blaschke factor. Nevertheless, these two regular notions of Blaschke factor are not equivalent.

The paper is divided into eight parts besides the present introduction. In Sect. 2 we recall some key notions in hypercomplex analysis and we state the Fueter mapping theorem and the generalized CK-extension. In Sect. 3 we provide the notion of axially rational regular function by using the Fueter mapping theorem. In Sect. 4 we define the counterpart of rational functions in the regular setting by using the generalized CK-extension, and we prove some properties of regular rational functions. In Sect. 5 we define the Hardy space in this framework. In Sect. 6 we give the definition of Schur multipliers by means of the Clifford-Appell polynomials, and we give several characterizations of them. In Sect. 7 we prove a co-isometric realization of Schur multipliers. Section 8 is devoted to the study of a particular example of Schur multiplier: the Blaschke factor. Finally, in Sect. 9 we provide another notion of axially regular Blaschke factor through the Fueter map.

2 Preliminaries

2.1 Quaternionic-valued functions

In the quaternionic setting there are various classes of functions generalizing holomorphic functions, but in the past few years two classes have been the most studied: the slice hyperholomorphic functions and the regular functions. In this section we review their definitions and their main properties.

First of all we recall the following:

Definition 2.1

We say that a set \(U \subset {\mathbb {H}}\) is axially symmetric if, for every \(u+Iv \in U\), all the elements \(u+Jv\) for \(J \in {\mathbb {S}}\) are contained in U.

The sets defined above are tailored to the class of functions introduced in the next definition.

Definition 2.2

Let \(U \subset {\mathbb {H}}\) be an axially symmetric open set and let

$$\begin{aligned} {\mathcal {U}}:= \{(u,v) \in {\mathbb {R}}^2 \, | \, u+{\mathbb {S}}v \in U\}. \end{aligned}$$

A function \(f:U \rightarrow {\mathbb {H}}\) of the form

$$\begin{aligned} f(x)=f(u+Iv)= & {} \alpha (u,v)+I \beta (u,v) \\ (\hbox {resp.} f(x)= & {} f(u+Iv)= \alpha (u,v)+ \beta (u,v)I), \end{aligned}$$

is left (resp. right) slice hyperholomorphic if \(\alpha \) and \( \beta \) are quaternionic-valued functions and satisfy the so-called "even-odd" conditions i.e.

$$\begin{aligned} \alpha (u,v)=\alpha (u,-v), \qquad \beta (u,v)= - \beta (u,-v) \qquad \hbox {for all} \quad (u,v) \in {\mathcal {U}}. \end{aligned}$$
(2.1)

Moreover, the functions \( \alpha \) and \( \beta \) are required to satisfy the Cauchy-Riemann system

$$\begin{aligned} \partial _{u} \alpha (u,v)- \partial _v \beta (u,v)=0, \quad \hbox {and} \quad \partial _{v} \alpha (u,v)+ \partial _u \beta (u,v)=0. \end{aligned}$$

The set of left (resp. right) slice hyperholomorphic functions on U is denoted by \(\mathcal{S}\mathcal{H}_L(U)\) (resp. \(\mathcal{S}\mathcal{H}_R(U)\)). If the functions \( \alpha \) and \( \beta \) are real-valued functions, then we say that the slice hyperholomorphic function f is intrinsic, and the class of intrinsic functions is denoted by \( {\mathcal {N}}(U)\).
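
For instance, for the slice extension of \(z^2\) one has \(\alpha (u,v)=u^2-v^2\) and \(\beta (u,v)=2uv\); the following small sympy sketch (our own check) verifies the even-odd conditions (2.1) and the Cauchy-Riemann system for this example.

```python
# Our own sanity check of Definition 2.2 on the slice extension of z^2,
# for which alpha(u, v) = u^2 - v^2 and beta(u, v) = 2uv.
from sympy import symbols, simplify

u, v = symbols('u v', real=True)
alpha = u**2 - v**2
beta = 2 * u * v

# even-odd conditions (2.1)
print(simplify(alpha - alpha.subs(v, -v)) == 0)      # True
print(simplify(beta + beta.subs(v, -v)) == 0)        # True
# Cauchy-Riemann system
print(simplify(alpha.diff(u) - beta.diff(v)) == 0)   # True
print(simplify(alpha.diff(v) + beta.diff(u)) == 0)   # True
```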

We observe that the pointwise product of two slice hyperholomorphic functions is not, in general, slice hyperholomorphic. However, it is possible to define a product that preserves slice hyperholomorphicity.

Definition 2.3

Let \(f=\alpha _0+I\beta _0\), \(g=\alpha _1+I\beta _1 \in \mathcal{S}\mathcal{H}_L(U)\). We define their \(*\)-product as

$$\begin{aligned} f*g=(\alpha _0\alpha _1-\beta _0\beta _1) +I(\alpha _0 \beta _1+ \beta _0\alpha _1). \end{aligned}$$

Let \(f=\alpha _0+\beta _0I\), \(g=\alpha _1+\beta _1I \in \mathcal{S}\mathcal{H}_R(U)\). We define their \(*\)-product as

$$\begin{aligned} f*g=(\alpha _0\alpha _1-\beta _0\beta _1) +(\alpha _0 \beta _1+ \beta _0\alpha _1)I. \end{aligned}$$

Definition 2.4

Let \(f=\alpha _0+I\beta _0 \in \mathcal{S}\mathcal{H}_L(U)\). We define its left slice hyperholomorphic conjugate as \(f^c= \overline{\alpha _0}+I \overline{\beta _0}\) and its symmetrisation as \(f^s=f^c * f=f*f^c\). The left slice hyperholomorphic reciprocal is defined as \(f^{-*}=(f^s)^{-1}f^c\).

Let \(f=\alpha _0+\beta _0I \in \mathcal{S}\mathcal{H}_R(U)\). We define its right slice hyperholomorphic conjugate as \(f^c= \overline{\alpha _0}+\overline{\beta _0}I \) and its symmetrisation as \(f^s=f^c * f=f*f^c\). The right slice hyperholomorphic reciprocal is defined as \(f^{-*}=f^c(f^s)^{-1}\).

Another well studied class of quaternionic-valued functions is given by the Cauchy–Fueter regular (regular, for short) functions, see [19, 27, 35].

Definition 2.5

Let \( U \subset {\mathbb {H}}\) be an open set and let \(f:U \rightarrow {\mathbb {H}}\) be a function of class \( {\mathcal {C}}^1\). We say that the function f is (left) regular if

$$\begin{aligned} {\mathcal {D}}f(x)= (\partial _{x_0}+ \partial _{{\underline{x}}})f(x)= (\partial _{x_0}+e_1 \partial _{x_1}+e_2 \partial _{x_2}+e_3 \partial _{x_3})f(x)=0, \qquad \forall x \in U, \end{aligned}$$

where \({\mathcal {D}}\) is the so-called Cauchy–Fueter operator.

Example

The fundamental example of regular functions is given by the so-called Fueter variables defined as

$$\begin{aligned} \xi _1(x):=x_1-e_1x_0, \qquad \xi _2(x):=x_2-e_2x_0, \qquad \xi _3(x):=x_3-e_3x_0. \end{aligned}$$
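
The regularity of the Fueter variables can be verified symbolically. The sketch below (our own check, assuming sympy's Quaternion class and identifying \(e_1,e_2,e_3\) with i, j, k) applies the Cauchy–Fueter operator of Definition 2.5 componentwise; it also shows that the quaternionic variable x itself is not regular, since \({\mathcal {D}}x=-2\).

```python
# Symbolic check that the Fueter variables are regular, while the
# quaternionic variable x itself is not: D x = -2 (our own sketch).
from sympy import symbols
from sympy.algebras.quaternion import Quaternion

x0, x1, x2, x3 = symbols('x0 x1 x2 x3', real=True)
e1, e2, e3 = Quaternion(0, 1, 0, 0), Quaternion(0, 0, 1, 0), Quaternion(0, 0, 0, 1)

def dq(f, var):                 # componentwise partial derivative
    return Quaternion(f.a.diff(var), f.b.diff(var), f.c.diff(var), f.d.diff(var))

def cauchy_fueter(f):           # D = d_{x0} + e1 d_{x1} + e2 d_{x2} + e3 d_{x3}
    return dq(f, x0) + e1 * dq(f, x1) + e2 * dq(f, x2) + e3 * dq(f, x3)

xi1 = Quaternion(x1, -x0, 0, 0)     # xi_1 = x1 - e1 x0
xi2 = Quaternion(x2, 0, -x0, 0)     # xi_2 = x2 - e2 x0
xi3 = Quaternion(x3, 0, 0, -x0)     # xi_3 = x3 - e3 x0
x = Quaternion(x0, x1, x2, x3)

print([cauchy_fueter(f) for f in (xi1, xi2, xi3)])   # three zero quaternions
print(cauchy_fueter(x))                              # -2 (so x is not regular)
```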

A way to characterize regular functions is the well-known CK-extension, see [19, 27, 34]. An arbitrary regular function f is uniquely determined by its restriction to the hyperplane \(x_0=0\). Precisely, we define the CK-extension of a function \(f({\underline{x}})\), which is real analytic in a set \({\tilde{U}}\subset {\mathbb {R}}^3\) (in the real variables \(x_1,x_2,x_3\)), as the function defined in a suitable open set \(U\subseteq \mathbb H\cong {\mathbb {R}}^4\), \(U\supset {{\tilde{U}}}\) given by

$$\begin{aligned} CK[f({\underline{x}})](x)=\sum _{\ell =0}^\infty \frac{(-1)^\ell }{\ell !} x_0^\ell \partial _{{\underline{x}}}^{\ell }[f({\underline{x}})]. \end{aligned}$$

The pointwise product of regular functions is, in general, not regular. Indeed, the product of two Fueter variables already provides a counterexample. For this reason, a suitable product between regular functions has been introduced; since it is based on the CK-extension, it is called the CK-product, see [19].

Definition 2.6

Let f, g be two regular functions, then their CK-product is defined as

$$\begin{aligned} (f\odot g)(x)=CK[f({\underline{x}})g({\underline{x}})] \end{aligned}$$

where the product at the right hand side is the pointwise product of two real analytic functions in \(x_1,x_2,x_3\) which are the restrictions of f and g to \(x_0=0\).

We recall that for \(a_1\),...,\(a_n \in {\mathbb {H}}\) the symmetrized product is defined as

$$\begin{aligned} a_1 \times a_2 \times \ldots \times a_n= \frac{1}{n!} \sum _{\sigma \in S_n} a_{\sigma (1)} a_{\sigma (2)} \ldots a_{\sigma (n)}, \end{aligned}$$

where \(S_n\) is the set of all permutations of the set \(\{1,\ldots ,n\}\). By taking symmetrized products of the Fueter variables, we obtain the Fueter polynomials

$$\begin{aligned} \xi ^{\nu }:=\xi ^{\nu }(x)=\xi _1^{\nu _1 \times }(x) \times \xi _2^{\nu _2 \times }(x) \times \xi _3^{\nu _3 \times }(x), \qquad \nu =(\nu _1, \nu _2, \nu _3) \in {\mathbb {N}}_0^3. \end{aligned}$$
(2.2)

We observe that \( \xi ^{\nu }\) is the CK-extension of \(x^{\nu }=x_1^{\nu _1}x_2^{\nu _2}x_3^{\nu _3}\) and so it is in fact \( \xi ^{\nu }=\xi _1^{\nu _1} \odot \xi _2^{\nu _2} \odot \xi _3^{\nu _3}\).

Every regular function in a neighbourhood of the origin can be written in the following way

$$\begin{aligned} f(x)= \sum _{\nu \in {\mathbb {N}}^3_0} \xi ^{\nu }f_\nu \qquad f_{\nu } \in {\mathbb {H}}. \end{aligned}$$
(2.3)

The CK-product of the basis elements \(\xi ^{\nu }\) is given by

$$\begin{aligned} \xi ^{\nu }p\odot _{CK} \xi ^{\mu }q= \xi ^{\nu +\mu }pq, \qquad q,p \in {\mathbb {H}}, \quad \mu , \nu \in {\mathbb {N}}_0^3. \end{aligned}$$

Thus the CK-product of two functions written in the form (2.3) in a neighbourhood of the origin can be computed via the convolution (also called Cauchy product, see [36]) of the coefficients along the Fueter polynomials.

A distinguished subset of the regular functions is the right quaternionic module of axially regular functions. These functions are defined below:

Definition 2.7

Let U be an axially symmetric slice domain in \( {\mathbb {H}}\). We say that a function \(f:U \rightarrow {\mathbb {H}}\) is axially regular, if it is regular and it is of the form

$$\begin{aligned} f(x_0+ {\underline{x}})=A(x_0, |{\underline{x}}|)+ {\underline{\omega }} B(x_0, | {\underline{x}}|), \qquad {\underline{\omega }}:= \frac{{\underline{x}}}{|{\underline{x}}|}, \end{aligned}$$

where the functions A and B are quaternionic valued and satisfy the even-odd conditions (2.1). We denote by \( \mathcal{A}\mathcal{M}(U)\) the set of axially regular functions on U.

Axially regular functions constitute the “building blocks” of regular functions, in the sense of the result below, see [27].

Theorem 2.8

Let \(U \subseteq {\mathbb {H}}\) be an axially symmetric open set. Then every regular function \( f:U \rightarrow {\mathbb {H}}\) can be written as

$$\begin{aligned} f(x)= \sum _{k=0}^\infty \breve{f}_k(x), \end{aligned}$$

where \(\breve{f}_k(x)\) are functions of the form

$$\begin{aligned} \breve{f}_k(x)= \sum _{j=1}^{m_k} [A_{k,j}(x_0, | {\underline{x}}|)+ {\underline{\omega }}B_{k,j}(x_0, | {\underline{x}}|)] {\mathcal {P}}_{k,j}( {\underline{x}}), \end{aligned}$$

where \(A_{k,j}\) and \(B_{k,j}\) satisfy conditions (2.1) and \({\mathcal {P}}_{k,j}( {\underline{x}})\) form a basis for the space of spherical regular functions of degree k, which has dimension \(m_k\).

2.2 Fueter theorem and generalized CK-extension

We now recall how to induce slice hyperholomorphic functions from holomorphic intrinsic functions.

Definition 2.9

An open connected set in the complex plane is an intrinsic complex domain if it is symmetric with respect to the real axis.

Definition 2.10

A holomorphic function \(f(z)= \alpha (u,v)+i \beta (u,v)\) is intrinsic if it is defined in an intrinsic complex domain D and \(\overline{f(z)}=f({\bar{z}})\). We denote the set of holomorphic intrinsic functions on D by \( {\mathcal {H}}(D)\).

Remark 2.11

Slice hyperholomorphic intrinsic functions defined on

$$\begin{aligned} \Omega _D=\{x=x_0+{\underline{x}} \,\ |\ \ (x_0, |{\underline{x}}|) \in D\} \end{aligned}$$

are induced by intrinsic holomorphic functions defined in \(D \subset {\mathbb {C}}\), by the so-called slice operator defined in the following way

$$\begin{aligned} S: {\mathcal {H}}(D) \otimes {\mathbb {H}} \rightarrow \mathcal{S}\mathcal{H}_L(\Omega _D), \qquad \alpha (u,v)+i \beta (u,v) \mapsto \alpha (x_0, |{\underline{x}}|) +I \beta (x_0, |{\underline{x}}|),\nonumber \\ \end{aligned}$$
(2.4)

which consists in replacing the complex variable \(z=u+iv\) by the quaternionic variable \(x=x_0+ {\underline{x}}\) and the imaginary unit i by \(I:= \frac{{\underline{x}}}{|{\underline{x}}|}\).

Real analytic functions in one variable can be extended to slice hyperholomorphic functions in a suitable open set. In fact, let \( {\tilde{D}}:= D \cap {\mathbb {R}}\). We denote by \( {\mathcal {A}}({\tilde{D}})\) the space of real-valued analytic functions defined on \({\tilde{D}}\) with a unique holomorphic extension to the set D. The holomorphic extension map is defined as \(C=\exp (iv \partial _u)\). With this notation, we can define the slice regular extension map as \(S_1=S \circ C= \exp ({\underline{x}} \partial _{x_0})\).

Theorem 2.12

We have the isomorphism,

$$\begin{aligned} \mathcal{S}\mathcal{H}_L(\Omega ) \simeq {\mathcal {A}}({\tilde{D}}) \otimes {\mathbb {H}} \simeq {\mathcal {H}}(D) \otimes {\mathbb {H}}, \end{aligned}$$

and the following commutative diagram

(Commutative diagram relating \( {\mathcal {A}}({\tilde{D}}) \otimes {\mathbb {H}}\), \( {\mathcal {H}}(D) \otimes {\mathbb {H}}\) and \(\mathcal{S}\mathcal{H}_L(\Omega _D)\) through the maps C, S and \(S_1\).)

Remark 2.13

A slice operator can be defined also for right slice hyperholomorphic functions and a result similar to Theorem 2.12 is valid in this case.

In quaternionic analysis the main tools to transform analytic functions of one real or complex variable into axially regular functions are the Fueter mapping theorem (see [37]) and the Cauchy–Kovalevskaya (CK) extension (see [27]).

Theorem 2.14

(Fueter mapping theorem) Let \(f_{0}(z)= \alpha (u,v)+i \beta (u,v)\) be a holomorphic function defined in a domain (open and connected) D in the upper-half complex plane and let \(\Omega _D\) be as before. Then the operator S defined in (2.4) maps the set of holomorphic functions to the set of slice hyperholomorphic functions. Moreover, the function

$$\begin{aligned} \breve{f}(x):=\Delta \left( \alpha (x_0, |{\underline{x}}|)+ \frac{{\underline{x}}}{|{\underline{x}}|}\beta (x_0, |{\underline{x}}|)\right) , \end{aligned}$$

is axially regular, where \(\Delta := \partial _{x_0}^2+\partial _{x_1}^2+\partial _{x_2}^2+\partial _{x_3}^2\) is the Laplace operator in the four real variables \(x_{\ell }\), \( \ell =0,1,2,3\).

Remark 2.15

The Fueter theorem was extended to the Clifford setting in 1957 by M. Sce, in the case of odd dimensions, see [44]. In this case, the Laplace operator \(\Delta \) is replaced by \( \Delta _{n+1}^{\frac{n-1}{2}}\), where \( \Delta _{n+1}\) is the Laplacian in \(n+1\) dimensions and n is odd, so in this case we are dealing with a differential operator. The proof of M. Sce in the Clifford setting is just a particular case of the computations in a generic quadratic algebra, see [44] and its translation with commentaries in [25]. In 1997, T. Qian showed that the Fueter-Sce theorem can also be proved in even dimensions. In this case the operator \(\Delta _{n+1}^{\frac{n-1}{2}}\) is a fractional operator, see [40, 41].

Theorem 2.16

(Generalized CK-extension, [27]) Let \({\tilde{D}} \subset {\mathbb {R}}\) be a real domain and consider an analytic function \(f_0(x_0) \in {\mathcal {A}}({\tilde{D}}) \otimes {\mathbb {H}}\). Then there exists a unique sequence \( \{f_{j}(x_0)\}_{j=1}^\infty \subset {\mathcal {A}}( {\tilde{D}}) \otimes {\mathbb {H}}\) such that the series

$$\begin{aligned} f(x_0,{\underline{x}})= \sum _{j=0}^\infty {\underline{x}}^j f_{j}(x_0), \end{aligned}$$

is convergent in an axially symmetric 4-dimensional neighbourhood \( \Omega \subset {\mathbb {H}}\) of \({\tilde{D}}\) and its sum is a regular function, i.e., \((\partial _{x_0}+\partial _{{\underline{x}}})f(x_0, {\underline{x}})=0\).

Furthermore, the sum f is formally given by the expression

$$\begin{aligned} f(x_0, {\underline{x}})= \Gamma \left( \frac{3}{2}\right) \left( \frac{|{\underline{x}}|\partial _{x_0}}{2} \right) ^{- \frac{3}{2}} \left( \frac{|{\underline{x}}|\partial _{x_0}}{2} J_{\frac{1}{2}}\left( |{\underline{x}}|\partial _{x_0} \right) + \frac{{\underline{x}} \partial _{x_0}}{2} J_{\frac{3}{2}}\left( |{\underline{x}}|\partial _{x_0} \right) \right) f_{0}(x_0),\nonumber \\ \end{aligned}$$
(2.5)

where \(J_{\nu }\) is the Bessel function of the first kind of order \(\nu \).

The function in (2.5) is known as the generalized CK-extension of \(f_0\), and it is denoted by \(GCK[f_0](x_{0}, {\underline{x}})\).

This extension operator defines an isomorphism of right quaternionic modules:

$$\begin{aligned} GCK: {\mathcal {A}}({\mathbb {R}}) \otimes {\mathbb {H}} \rightarrow \mathcal{A}\mathcal{M}(\Omega ), \end{aligned}$$

whose inverse is given by the restriction operator to the real line, i.e. \(GCK[f_0](x_0,0)=f_0(x_0)\).

A precise relation between the generalized CK-extension and the Fueter theorem has been established in [31, Thm. 4.2]:

Theorem 2.17

Let \(f(u+iv)=\alpha (u,v)+i\beta (u,v)\) be an intrinsic holomorphic function defined on an intrinsic complex domain \(\Omega _2 \subset {\mathbb {C}}\). Then we have

$$\begin{aligned} \Delta \left[ f(x_0+ {\underline{x}})\right] =-2 \, GCK \left[ \partial _{x_0}^2 f_{|{\mathbb {R}}}\right] . \end{aligned}$$

2.3 Clifford-Appell polynomials

In this subsection we recall the definition and the main properties of the Clifford-Appell polynomials, see [21, 22]. These are defined by

$$\begin{aligned} {\mathcal {Q}}_m(x)= \sum _{\ell =0}^m T_\ell ^m x^{m-\ell } {\bar{x}}^{\ell }, \end{aligned}$$
(2.6)

where

$$\begin{aligned} T_\ell ^m:= \frac{2(m-\ell +1)}{(m+1)(m+2)}, \qquad m=0,1, \ldots \end{aligned}$$

The polynomials \( {\mathcal {Q}}_m(x)\) satisfy the Appell property

$$\begin{aligned} \frac{\overline{{\mathcal {D}}}}{2} {\mathcal {Q}}_m(x)= \frac{(\partial _{x_0}- \partial _{{\underline{x}}}) {\mathcal {Q}}_{m}(x)}{2}=m {\mathcal {Q}}_{m-1}(x), \end{aligned}$$
(2.7)

An interesting feature of the Clifford-Appell polynomials is that they come from the application of the Fueter map to the monomials \(x^m\). In particular, see [30], we have the formula

$$\begin{aligned} {\mathcal {Q}}_m(x)=- \frac{\Delta (x^{m+2})}{2(m+1)(m+2)}, \qquad m=0,1, \ldots \end{aligned}$$
(2.8)
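
Both the Appell property (2.7) and formula (2.8) can be verified symbolically for small degrees; the following sketch (our own check, assuming sympy's Quaternion class) does this for \(m=1,2\) and for \({\mathcal {Q}}_1(x)=-\Delta (x^{3})/12\).

```python
# Symbolic spot check of the Appell property (2.7) for m = 1, 2 and of the
# Fueter-map formula (2.8) for m = 1 (our own sketch).
from sympy import symbols, simplify, Rational
from sympy.algebras.quaternion import Quaternion

x0, x1, x2, x3 = symbols('x0 x1 x2 x3', real=True)
e = [Quaternion(0, 1, 0, 0), Quaternion(0, 0, 1, 0), Quaternion(0, 0, 0, 1)]
x = Quaternion(x0, x1, x2, x3)
xbar = Quaternion(x0, -x1, -x2, -x3)

def qpow(q, n):
    out = Quaternion(1, 0, 0, 0)
    for _ in range(n):
        out = out * q
    return out

def Q(m):                                  # Clifford-Appell polynomial (2.6)
    out = Quaternion(0, 0, 0, 0)
    for l in range(m + 1):
        T = Rational(2 * (m - l + 1), (m + 1) * (m + 2))
        out = out + qpow(x, m - l) * qpow(xbar, l) * Quaternion(T, 0, 0, 0)
    return out

def dq(f, var):                            # componentwise partial derivative
    return Quaternion(*[c.diff(var) for c in (f.a, f.b, f.c, f.d)])

def is_zero(f):
    return all(simplify(c) == 0 for c in (f.a, f.b, f.c, f.d))

for m in (1, 2):
    Qm = Q(m)
    d_imag = e[0] * dq(Qm, x1) + e[1] * dq(Qm, x2) + e[2] * dq(Qm, x3)
    lhs = dq(Qm, x0) + Quaternion(-1, 0, 0, 0) * d_imag            # (d_{x0} - d_{ux}) Q_m
    print(is_zero(lhs + Quaternion(-2 * m, 0, 0, 0) * Q(m - 1)))   # True: equals 2m Q_{m-1}

lap = lambda f: sum((dq(dq(f, v), v) for v in (x0, x1, x2, x3)), Quaternion(0, 0, 0, 0))
check = lap(qpow(x, 3)) * Quaternion(Rational(-1, 12), 0, 0, 0) + Quaternion(-1, 0, 0, 0) * Q(1)
print(is_zero(check))                                              # True: Q_1 = -Delta(x^3)/12
```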

Since the polynomials \({\mathcal {Q}}_m(x)\) are axially regular and

$$\begin{aligned} {\mathcal {Q}}_m(x)|_{{\mathbb {R}}}=x_0^m, \end{aligned}$$

we get that

$$\begin{aligned} {\mathcal {Q}}_m(x)=GCK[x_0^m]. \end{aligned}$$
(2.9)

The fact that the coefficients of the polynomials \({\mathcal {Q}}_m(x)\) satisfy the relation

$$\begin{aligned} \sum _{\ell =0}^m T_\ell ^m=1 \end{aligned}$$

implies the inequality

$$\begin{aligned} | {\mathcal {Q}}_m(x)| \le |x|^m. \end{aligned}$$
(2.10)

The Clifford-Appell polynomials are a basis for axially regular functions, see [10, Thm. 3.1].

Theorem 2.18

Let \( \Omega \subset {\mathbb {H}}\) be an axially symmetric slice domain containing the origin. Let f be an axially regular function on \(\Omega \). Then there exist \( \{a_k\}_{k \in {\mathbb {N}}_0} \subset {\mathbb {H}}\) such that

$$\begin{aligned} f(x)= \sum _{k=0}^\infty {\mathcal {Q}}_k(x)a_k. \end{aligned}$$

In [32] the authors defined a new product among regular functions which is more useful in the set of axially regular functions than the CK-product.

Definition 2.19

Let \(f(x_0, {\underline{x}})\) and \(g(x_0, {\underline{x}})\) be axially regular functions. We define

$$\begin{aligned} f(x_0, {\underline{x}}) \odot _{GCK} g(x_0, {\underline{x}})=GCK[f(x_0,0) \cdot g(x_0,0)]. \end{aligned}$$
(2.11)

Definition 2.20

Let \(f(x_0, {\underline{x}})\) be an axially regular function such that \(f(x_0,0)\) does not vanish. Then we define

$$\begin{aligned}{}[f(x_0, {\underline{x}})]^{-\odot _{GCK}}=GCK\left[ \frac{1}{f(x_0,0)}\right] . \end{aligned}$$

The previous definition introduces the multiplicative inverse with respect to the generalized CK-product; indeed

$$\begin{aligned}{}[f(x_0, {\underline{x}})]^{-\odot _{GCK}} \odot _{GCK} f(x_0, {\underline{x}})= f(x_0, {\underline{x}})\odot _{GCK}[f(x_0, {\underline{x}})]^{-\odot _{GCK}}=1. \end{aligned}$$

This product fits perfectly with the product of Clifford-Appell polynomials. Indeed we have

$$\begin{aligned} {\mathcal {Q}}_{m}(x) \odot _{GCK} {\mathcal {Q}}_{\ell }(x)= {\mathcal {Q}}_{m+ \ell }(x). \end{aligned}$$

Remark 2.21

If we consider two axially regular functions f and g expanded in convergent series

$$\begin{aligned} f(x)= \sum _{k=0}^\infty {\mathcal {Q}}_k(x)a_k, \qquad g(x)= \sum _{k=0}^\infty {\mathcal {Q}}_k(x)b_k, \qquad \{a_k\}_{k \in {\mathbb {N}}_0}, \{b_k\}_{k \in {\mathbb {N}}_0} \subset {\mathbb {H}}, \end{aligned}$$

then their generalized CK-product is given by

$$\begin{aligned} (f \odot _{GCK} g)(x)=\sum _{n=0}^\infty {\mathcal {Q}}_{n}(x) \sum _{\ell =0}^n a_\ell b_{n- \ell }. \end{aligned}$$

Thus the generalized CK-product is a convolution (also called Cauchy product, see [36]) on the coefficients along the Clifford-Appell polynomials.
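
A toy implementation of this Cauchy product (our own sketch) is immediate; for instance, the impulse sequences representing \({\mathcal {Q}}_2\) and \({\mathcal {Q}}_3\) convolve to the one representing \({\mathcal {Q}}_5\).

```python
# Our own toy implementation of the Cauchy product of coefficient sequences
# along the Clifford-Appell polynomials (sympy's Quaternion class is assumed).
from sympy.algebras.quaternion import Quaternion

def gck_coeffs(a, b):
    """c_n = sum_{l=0}^{n} a_l * b_{n-l} for quaternionic coefficient lists."""
    c = [Quaternion(0, 0, 0, 0) for _ in range(len(a) + len(b) - 1)]
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = c[i + j] + ai * bj
    return c

one, zero = Quaternion(1, 0, 0, 0), Quaternion(0, 0, 0, 0)
# Q_2 and Q_3 correspond to the impulse sequences (0,0,1) and (0,0,0,1);
# their GCK-product has a single unit coefficient in degree 5, i.e. it is Q_5.
print(gck_coeffs([zero, zero, one], [zero, zero, zero, one]))
```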

Remark 2.22

It is clear that

$$\begin{aligned} f(x_0, {\underline{x}}) \odot _{GCK} 1=1 \odot _{GCK} f(x_0, {\underline{x}})=f(x_0, {\underline{x}}). \end{aligned}$$
(2.12)

Remark 2.23

As we explained in the Introduction, formula (1.3) is unsuitable for some computations because of the presence of the constants. To have a more natural product, in [8] the authors introduced the polynomials

$$\begin{aligned} P_m(x)= \frac{{\mathcal {Q}}_m(x)}{\sum _{\ell =0}^m (-1)^\ell T_\ell ^m}. \end{aligned}$$

In this way formula (1.3) can be written as

$$\begin{aligned} (P_k \odot _{CK} P_{s})(x)=P_{k+s}(x). \end{aligned}$$

However, the polynomials \(P_m(x)\) do not satisfy an Appell property like the one in (2.7). Moreover, the CK-product is not a convolution on the coefficients along the polynomials \( P_m(x)\). Thus the product (2.11) appears to be the best option to work in the set of axially regular functions.

3 Axially rational regular functions through the Fueter theorem

We start by recalling that any \({\mathbb {C}}^{N \times M}\)-valued rational function R(z) without a pole at the origin can be written in the form

$$\begin{aligned} R(z)= D+zC(I-zA)^{-1}B, \end{aligned}$$
(3.1)

where D, C, A and B are matrices of suitable sizes. Formula (3.1) is known in the literature as a realization (centred at the origin). It is well known that the inverse of the function R(z) still admits a realization: if \(N=M\) and D is an invertible matrix, one has the following formula

$$\begin{aligned} R^{-1}(z)= D^{-1}-zD^{-1}C(I-zA^{\times })^{-1}BD^{-1},\qquad A^\times :=A-BD^{-1}C. \end{aligned}$$

Moreover the product of two realizations \(R_\ell (z)=D_\ell +zC_\ell (I-zA_\ell )^{-1}B_\ell \) of suitable sizes is given by

$$\begin{aligned} R_1(z)R_2(z)=D+zC(I-zA)^{-1}B, \end{aligned}$$

where

$$\begin{aligned} D=D_1D_2 \quad A=\begin{pmatrix} A_1 &{}&{} B_1C_2\\ 0 &{}&{} A_2 \end{pmatrix} \qquad C= \begin{pmatrix} C_1&\,&D_1C_2 \end{pmatrix} \qquad B= \begin{pmatrix} B_1D_2\\ B_2 \end{pmatrix}. \end{aligned}$$

The sum of two realizations is a realization as well. This follows as a special case of the product since

$$\begin{aligned} \begin{pmatrix} R_1(z)&\,&I_N \end{pmatrix} \begin{pmatrix} I_M\\ R_2(z) \end{pmatrix} = R_1(z)+R_2(z). \end{aligned}$$
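
These classical formulas are easy to test numerically; the sketch below (with randomly generated matrices, our own illustration) checks the cascade realization of the product at a sample point.

```python
import numpy as np

# Numerical sanity check of the cascade realization of R1(z) R2(z),
# with randomly generated matrices (our own illustrative data).
rng = np.random.default_rng(0)
n1, n2, N = 2, 3, 2
A1, B1, C1, D1 = (rng.standard_normal(s) for s in [(n1, n1), (n1, N), (N, n1), (N, N)])
A2, B2, C2, D2 = (rng.standard_normal(s) for s in [(n2, n2), (n2, N), (N, n2), (N, N)])

def R(z, A, B, C, D):
    return D + z * C @ np.linalg.inv(np.eye(len(A)) - z * A) @ B

A = np.block([[A1, B1 @ C2], [np.zeros((n2, n1)), A2]])
B = np.vstack([B1 @ D2, B2])
C = np.hstack([C1, D1 @ C2])
D = D1 @ D2

z = 0.1
print(np.allclose(R(z, A1, B1, C1, D1) @ R(z, A2, B2, C2, D2), R(z, A, B, C, D)))   # True
```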

The aim of this section is to introduce a notion of realization in the framework of axially regular functions. As we explained in the previous section, there are two possible ways to extend analytic functions of one complex variable to the regular setting. As we will see, the two approaches do not coincide for rational functions.

We start by studying the notion of axially rational function by means of the Fueter theorem. To this end we need to recall the notion of rational slice hyperholomorphic functions and their characterisation, [3, Thm. 4.6].

These functions arise from the study of the counterpart of state space equations in the slice hyperholomorphic setting, see [3].

Theorem 3.1

Let r be a \( {\mathbb {H}}^{N \times N}\)-valued function, slice hyperholomorphic in a neighbourhood \(\Omega \) of the origin. Then the following conditions are equivalent:

  1.

    The restriction of r to \(\Omega \cap {\mathbb {R}}\) is a rational function with values in \({\mathbb {H}}^{N\times N}\).

  2.

    There exist matrices A, B, C and D, of appropriate dimensions, such that

    $$\begin{aligned} r(x)= D+ xC*(I-xA)^{-*}B. \end{aligned}$$
    (3.2)
  3.

    The function r can be expanded in series as follows

    $$\begin{aligned} r(x)= D+ \sum _{n=1}^\infty x^n C A^{n-1}B, \end{aligned}$$

    for suitable matrices A, B, C, D.

Remark 3.2

It is important to note that the formula (3.2) is formally identical to that one in the classical complex case; however, when expanded, it gives

$$\begin{aligned} r(x)= D+ xC*(I-xA)^{-*}B=D+(xC-|x|^2CA)(|x|^2A^2-2x_0A+1)^{-1}B. \end{aligned}$$

This shows that the formula is very unconventional, because the term \(|x|^2\) is involved.

Remark 3.3

If we consider two functions \(r_1\), \(r_2\) admitting realizations of the form (3.2) of appropriate sizes, then \(r_1*r_2\) can be written in the form (3.2). Similarly the function \(r_1+r_2\) admits a realization of the form (3.2).

Now, we define the first notion of axially rational regular function of this paper:

Definition 3.4

A quaternionic valued function \( \breve{r}= \Delta r\) is called rational axially regular in a neighborhood of the origin if r satisfies one of the equivalent statements in Theorem 3.1.

We now prove some equivalent statements on rational axially regular functions:

Theorem 3.5

Let r be an \({\mathbb {H}}^{M \times N}\)-valued rational slice hyperholomorphic function in a neighbourhood of the origin. Then the following conditions are equivalent

  1.

    \( \breve{r}(x)= \Delta r(x)\) is a rational axially regular function.

  2.

    \( \breve{r}(x)\) can be written as

    $$\begin{aligned} \breve{r}(x)=-4(C- {\bar{x}}CA)Q_x(A)^{-2}AB, \end{aligned}$$

    where \(Q_x(A)= |x|^2 A^2-2 x_0A + I\) and A, B, C are quaternionic matrices of appropriate sizes

  3.

    \( \breve{r}\) can be expanded as follows

    $$\begin{aligned} \breve{r}(x)= E-2 \sum _{n=1}^\infty (n+1)(n+2) {\mathcal {Q}}_n(x) CA^{n+1}B, \end{aligned}$$

    where \(E:=-4CAB\) and \({\mathcal {Q}}_n(x)\) are the Clifford-Appell polynomials.

Proof

We start by showing that \(1) \Longleftrightarrow 2)\). By Definition 3.4 we know that a function \( \breve{r}\) is a rational axially regular function if there exists a rational slice hyperholomorphic function r such that

$$\begin{aligned} \breve{r}(x)= \Delta r(x). \end{aligned}$$

By Theorem 3.1 we know a characterization of rational slice hyperholomorphic functions, thus we can apply the Laplace operator to the entries \(r_{\ell j}\) of the rational slice hyperholomorphic function r, i.e.

$$\begin{aligned} r_{\ell j}(x)= d+(xc-|x|^2ca) {\mathcal {Q}}_x(a)^{-1}b, \quad \ell \in \{1, \ldots ,M\} \quad j \in \{1, \ldots , N\}, \end{aligned}$$

where \({\mathcal {Q}}_x(a):=|x|^2a^2-2x_0a+1\) and a, b, c and d represent the quaternionic entries of the matrices A, B, C and D.

To simplify the computations we set

$$\begin{aligned} g_{\ell j}(x):= (xc-|x|^2ca) {\mathcal {Q}}_x(a)^{-1}. \end{aligned}$$

Then, we have

$$\begin{aligned} \frac{\partial g_{\ell j}(x)}{\partial x_0}= & {} (c-2x_0ca) {\mathcal {Q}}_x(a)^{-1}-(xc-|x|^2ca) {\mathcal {Q}}_x(a)^{-2}(2x_0a^2-2a). \\ \frac{\partial ^2 g_{\ell j}(x)}{\partial x_0^2}= & {} -2ca {\mathcal {Q}}_x(a)^{-1}-2(c-2x_0ca) {\mathcal {Q}}_x(a)^{-2}(2x_0a^2-2a)\\{} & {} +2(xc-|x|^2ca) {\mathcal {Q}}_x(a)^{-3}(2x_0a^2-2a)^{2}\\{} & {} -2 (xc-|x|^2ca) {\mathcal {Q}}_x(a)^{-2}a^2\\= & {} -2ca {\mathcal {Q}}_x(a)^{-1}-4(cx_0a^2-ca-2x_0^2ca^3+2x_0ca^2) {\mathcal {Q}}_x(a)^{-2}\\ {}{} & {} +8 (xc-|x|^2ca) (x_0^2a^4+a^2-2x_0a^3) {\mathcal {Q}}_{x}(a)^{-3}\\{} & {} -2(xc-|x|^2ca)a^2 {\mathcal {Q}}_x(a)^{-2}. \end{aligned}$$

For \(1 \le i \le 3\) we have

$$\begin{aligned} \frac{\partial g_{\ell j}(x)}{\partial x_i}= & {} (e_ic-2x_ica) {\mathcal {Q}}_x(a)^{-1}-(xc-|x|^2ca) {\mathcal {Q}}_{x}(a)^{-2}(2 x_i a^2). \\ \frac{\partial ^2 g_{\ell j}(x)}{\partial x_i^2}= & {} -2ca {\mathcal {Q}}_x(a)^{-1}-2(e_ic-2x_ica) {\mathcal {Q}}_x(a)^{-2}(2 x_i a^2)\\{} & {} +2(xc-|x|^2ca) {\mathcal {Q}}_x(a)^{-3} (2x_ia^2)^2\\{} & {} -2(xc-|x|^2ca) {\mathcal {Q}}_x(a)^{-2}a^2. \end{aligned}$$

Finally, we get

$$\begin{aligned} \Delta g_{\ell j}(x)= & {} \frac{\partial ^2 g_{\ell j}(x)}{\partial x_0^2} + \sum _{i=1}^3 \frac{\partial ^2 g_{\ell j}(x)}{\partial x_i^2}\\= & {} -8ca {\mathcal {Q}}_x(a)^{-1}-8(xc-|x|^2ca) a^2 {\mathcal {Q}}_x(a)^{-2}\\{} & {} +4 \left( -{\underline{x}}ca^2+2 | {\underline{x}}|^2 c a^3-x_0ca^2+ca+2x_0^2ca^3 \right. \\{} & {} \left. -2x_0ca^2\right) {\mathcal {Q}}_{x}(a)^{-2}+8 (xc-|x|^2ca)(| {\underline{x}}|^2a^4+x_0^2a^4+a^2-2x_0a^3) {\mathcal {Q}}_x(a)^{-3}\\= & {} -8ca {\mathcal {Q}}_x(a)^{-1}-8(xc-|x|^2ca)a^2 {\mathcal {Q}}_x(a)^{-2}\\{} & {} +4 (-xca^2+2|x|^2ca^3+ca-2x_0ca^2) {\mathcal {Q}}_x(a)^{-2}\\{} & {} +8 (xc-|x|^2ca)(|x|^2a^4+a^2-2x_0a^3) {\mathcal {Q}}_x(a)^{-3}. \end{aligned}$$

Since \(2x_0=x+ {\bar{x}}\) we obtain

$$\begin{aligned} \Delta g_{\ell j}(x)= & {} -8 ca {\mathcal {Q}}_x(a)^{-1}-8(xca^2-|x|^2ca^3) {\mathcal {Q}}_x(a)^{-2}\\{} & {} +4 \left( -xca^2+2|x|^2ca^3+ca-xca^2 \right. \\{} & {} \left. - {\bar{x}}ca^2\right) {\mathcal {Q}}_x(a)^{-2}+8(xc-|x|^2ca)a^2 {\mathcal {Q}}_x(a)^{-2}\\= & {} \left( -8ca(|x|^2a^2-2x_0a+1) -8xca^2+8|x|^2ca^3-4xca^2+8 |x|^2ca^3\right. \\{} & {} \left. +4ca-4xca^2-4 {\bar{x}} ca^2 \right. \\{} & {} \left. +8xca^2-8|x|^2ca^3\right) {\mathcal {Q}}_x(a)^{-2}\\= & {} \left( -8|x|^2ca^3+16x_0ca^2-8ca-8xca^2+8|x|^2ca^3-4xca^2+8|x|^2ca^3\right. \\{} & {} \left. +4ca-4xca^2-4 {\bar{x}}ca^2 \right. \\{} & {} \left. +8xca^2-8|x|^2ca^3\right) {\mathcal {Q}}_x(a)^{-2}\\= & {} -4 (c-{\bar{x}}ca)a {\mathcal {Q}}_x(a)^{-2}. \end{aligned}$$

Therefore, we get

$$\begin{aligned} \breve{r}_{\ell j}(x)=-4(c- {\bar{x}}ca) {\mathcal {Q}}_x(a)^{-2}ab. \end{aligned}$$

We get the result with \(A=a\) and appropriate matrices B, C and D.

Now, we show the relation \(1) \Longleftrightarrow 3)\). By Theorem 3.1 we know that we can expand a rational slice hyperholomorphic function r as

$$\begin{aligned} r(x)= D+ \sum _{n=1}^\infty x^n CA^{n-1}B. \end{aligned}$$
(3.3)

Now, we consider the generic quaternions a, b, c and d that represent the entries of the quaternionic matrices A, B, C and D. We apply the Laplace operator in four real variables to the entries of (3.3), which are denoted by \(r_{\ell j}\), and we get

$$\begin{aligned} \breve{r}_{\ell j}(x)= \sum _{n=2}^\infty \Delta (x^n) ca^{n-1}b, \quad \ell \in \{1, \ldots ,M\} \quad j \in \{1, \ldots , N\}. \end{aligned}$$

By formula (2.8) we deduce that for \(n \ge 2\) we have \( \Delta (x^n)=-2(n-1)n {\mathcal {Q}}_{n-2}(x).\) This implies that

$$\begin{aligned} \breve{r}_{\ell j}(x)= & {} \Delta r_{\ell j}(x)\\= & {} -2 \sum _{n=2}^\infty (n-1)n {\mathcal {Q}}_{n-2}(x)ca^{n-1}b\\= & {} -2\sum _{n=0}^\infty (n+1)(n+2) {\mathcal {Q}}_n(x)ca^{n+1}b\\= & {} -4cab-2 \sum _{n=1}^\infty (n+1)(n+2) {\mathcal {Q}}_n(x)ca^{n+1}b. \end{aligned}$$

We get the result with \(A=a\) and appropriate matrices B, C and D. \(\square \)

Remark 3.6

If we restrict to the case \(x \in {\mathbb {R}}\) in Theorem 3.5 we can write the axially regular function \(\breve{r}\) as

$$\begin{aligned} \breve{r}(x)=-4C (I-xA)^{-3}AB. \end{aligned}$$
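
On the real line this closed form can be compared numerically with the series expansion from Theorem 3.5; the sketch below (our own check with random matrices) truncates the series \(-2\sum _{n\ge 0}(n+1)(n+2)x^nCA^{n+1}B\) and matches it against \(-4C(I-xA)^{-3}AB\).

```python
import numpy as np

# Our own numerical check, on the real line, that the closed form above agrees
# with the (truncated) series -2 * sum_n (n+1)(n+2) x^n C A^(n+1) B of Theorem 3.5.
rng = np.random.default_rng(1)
n, N = 3, 2
A = 0.3 * rng.standard_normal((n, n))        # small enough for convergence at x = 0.2
B = rng.standard_normal((n, N))
C = rng.standard_normal((N, n))
x = 0.2

closed = -4 * C @ np.linalg.matrix_power(np.linalg.inv(np.eye(n) - x * A), 3) @ A @ B
series = sum(-2 * (k + 1) * (k + 2) * x**k * C @ np.linalg.matrix_power(A, k + 1) @ B
             for k in range(200))
print(np.allclose(closed, series))           # True
```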

Remark 3.7

The rational axially regular functions defined in this section admit a realization as proved in Theorem 3.5, however they have some limitations. For example, if one performs the generalized CK-product of two rational axially regular functions then, in general, one does not get a rational axially regular function in the sense of Definition 3.4.

In particular, to preserve algebraic properties similar to those of the complex realizations we need to find an alternative definition of a rational axially regular function.

4 Axially rational regular function through the generalized CK-extension

In this section we propose another notion of rational axially regular function, different from the one in Definition 3.4. The new notion makes use of the generalized CK-extension. The main advantage is that we can prove the main algebraic properties of realizations in this setting.

The idea of the definition comes from the standard equivalent statements given in Theorem 3.1 in the case of slice hyperholomorphic functions, but using the product \(\odot _{GCK}\) instead of the \(*\)-product since we are in the set of regular functions.

Definition 4.1

An \( {\mathbb {H}}^{M \times N}\)-valued function r is called (left) rational axially regular in a neighborhood of the origin if it can be represented in the form

$$\begin{aligned} r(x)=D+ C \odot _{GCK} \left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x) B), \end{aligned}$$
(4.1)

where A, B, C and D are quaternionic matrices of appropriate sizes.

This notion arises by considering the counterpart of the state space equations in the regular hyperholomorphic setting. Let us consider the following quaternionic linear system

$$\begin{aligned} {\left\{ \begin{array}{ll} x_{n+1}=Ax_n+Bu_n, \qquad n=0,1, \ldots \\ y_n=Cx_n+Du_n. \end{array}\right. } \end{aligned}$$
(4.2)

where A, B, C and D are matrices of appropriate sizes with quaternionic entries and \( U:=\{u_n\}_{n \in {\mathbb {N}}_0}\) is a given sequence of vectors with quaternionic entries, and of suitable size. In the complex setting the “transfer function” of the system is defined by taking the \( {\mathcal {Z}}\)-transform which, in this framework, can be defined as

$$\begin{aligned} {\mathcal {Z}}(U):= {\mathcal {U}}(x)= \sum _{n=0}^\infty {\mathcal {Q}}_n(x) u_n. \end{aligned}$$

We observe that the \({\mathcal {Z}}\)-transform is right linear, since

$$\begin{aligned} {\mathcal {Z}}(UA)={\mathcal {Z}}(U) A. \end{aligned}$$

Furthermore \( {\mathcal {Z}}(U)\) is an axially regular function, see Theorem 2.18. Another important property of the \( {\mathcal {Z}}\)-transform is the following. If we set

$$\begin{aligned} \tau _{-1}U:=(u_1,u_2, u_3, \ldots ), \end{aligned}$$

then if \(u_0=0\) we have

$$\begin{aligned} {\mathcal {Z}}(\tau _{-1}U)=[{\mathcal {Q}}_1(x)^{-\odot _{GCK}}]\odot _{GCK}{\mathcal {Z}}(U). \end{aligned}$$

However, in the regular setting a “transfer function” cannot be defined by taking the \( {\mathcal {Z}}\)-transform exactly as in the complex case. The matrix-valued transfer function of the system (4.2) is the axially regular function

$$\begin{aligned} H(x):= {\mathcal {Y}}(x) \odot _{GCK} ({\mathcal {U}}(x))^{- \odot _{GCK}}, \end{aligned}$$

where \({\mathcal {Y}}(x)\) and \({\mathcal {U}}(x)\) are the GCK-extensions of the \( {\mathcal {Z}}\)-transforms of \( y_n\) and of \(u_n\), respectively. We now give the counterpart of the classical realization for the transfer function.

Theorem 4.2

Let A, B, C, D and \( \{u_n\}_{n \in {\mathbb {N}}_0}\) be defined as above. Then we have

$$\begin{aligned} H(x)=D+C \odot _{GCK} (I- {\mathcal {Q}}_1(x)A)^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x)B). \end{aligned}$$
(4.3)

Proof

We start by considering the system (4.2) on the real line, where A, B, C and D are replaced by given quaternionic numbers a, b, c and d. Now, we suppose that \( \{u_n\}_{n \in {\mathbb {N}}_{0}}\) is a given sequence of real numbers:

$$\begin{aligned} {\left\{ \begin{array}{ll} x_{n+1}=ax_n+bu_n, \qquad n=0,1,\ldots .\\ y_n=cx_n+du_n. \end{array}\right. } \end{aligned}$$

Let \(x_0 \in {\mathbb {R}}\). By applying the real-valued \( {\mathcal {Z}}\)-transform defined as

$$\begin{aligned} {\mathcal {Z}}(U):= {\mathcal {U}}(x_0):= \sum _{n=0}^\infty x_0^n u_n, \end{aligned}$$

where \(U= \{u_n\}_{n \in {\mathbb {N}}_0}\) we get

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {X}}(x_0)= x_0a {\mathcal {X}}(x_0)+x_0b {\mathcal {U}}(x_0)\\ {\mathcal {Y}}(x_0)= c {\mathcal {X}}(x_0)+d {\mathcal {U}}(x_0). \end{array}\right. } \end{aligned}$$

Since \(x_0\) is real, it commutes with all the quaternionic coefficients, thus we have

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {X}}(x_0)= (1-x_0a)^{-1} x_0b {\mathcal {U}}(x_0)\\ {\mathcal {Y}}(x_0)= c {\mathcal {X}}(x_0)+d {\mathcal {U}}(x_0). \end{array}\right. } \end{aligned}$$
(4.4)

All the functions involved in (4.4) are analytic on the real line, so we can use the generalized CK-extension (see Theorem 2.16) to get axially regular functions in the variable x. Thus by Definition 2.19 we have

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathcal {\breve{X}}(x)=(1- {\mathcal {Q}}_1(x)a)^{-\odot _{GCK}} \odot _{GCK} ( {\mathcal {Q}}_1(x)b) \odot _{GCK} {\mathcal {U}}(x)\\ {\mathcal {Y}}(x)=c \odot _{GCK}\mathcal {\breve{X}}(x)+ d \odot _{GCK} {\mathcal {U}}(x). \end{array}\right. } \end{aligned}$$

By substituting the first equation in the second one of the above system we get

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathcal {\breve{X}}(x)=(1- {\mathcal {Q}}_1(x)a)^{-\odot _{GCK}} \odot _{GCK} ( {\mathcal {Q}}_1(x)b) \odot _{GCK} {\mathcal {U}}(x)\\ {\mathcal {Y}}(x)=c \odot _{GCK}(1- {\mathcal {Q}}_1(x)a)^{-\odot _{GCK}} \odot _{GCK} ( {\mathcal {Q}}_1(x)b) \odot _{GCK} {\mathcal {U}}(x)+ d \odot _{GCK} {\mathcal {U}}(x). \end{array}\right. } \end{aligned}$$

Finally, by the definition of the function H(x) we obtain

$$\begin{aligned} H(x)= & {} {\mathcal {Y}}(x) \odot _{GCK} ({\mathcal {U}}(x))^{- \odot _{GCK}}\\= & {} \left( c \odot _{GCK}(1- {\mathcal {Q}}_1(x)a)^{-\odot _{GCK}} \odot _{GCK} ( {\mathcal {Q}}_1(x)b) \odot _{GCK} {\mathcal {U}}(x)\right. \\{} & {} \left. + d \odot _{GCK} {\mathcal {U}}(x)\right) \odot _{GCK} ({\mathcal {U}}(x))^{- \odot _{GCK}} \\= & {} c \odot _{GCK}(1- {\mathcal {Q}}_1(x)a)^{-\odot _{GCK}} \odot _{GCK} ( {\mathcal {Q}}_1(x)b) + d. \end{aligned}$$

In order to get a matrix-valued function it is sufficient to replace a, b, c and d, respectively, with the matrices A, B, C, D of suitable size and with quaternionic entries. Then we get the axially regular function

$$\begin{aligned} H(x)=D+C \odot _{GCK}(1- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}} \odot _{GCK} ( {\mathcal {Q}}_1(x)B). \end{aligned}$$

\(\square \)

Proposition 4.3

Let r be an \({\mathbb {H}}^{M \times N}\)-valued slice hyperholomorphic function and let \(\partial _{x_0}^2r_{| {\mathbb {R}}}\) be a rational function in the real variable \(x_0\). Then we have

$$\begin{aligned} \breve{r}(x)= \Delta r(x)=D+C \odot _{GCK} (I- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x)B), \end{aligned}$$

where D, C, B and A are quaternionic matrices of suitable sizes.

Proof

By hypothesis we know that \( \partial _{x_0}^2 r|_{{\mathbb {R}}}\) is rational. This implies that we can write

$$\begin{aligned} \partial _{x_0}^2 r(x)|_{{\mathbb {R}}}=D+C(I-x_0A)^{-1} (x_0B). \end{aligned}$$

By Theorem 2.17 we know that

$$\begin{aligned} \Delta r= -2 GCK[\partial ^2_{x_0} r(x)|_{{\mathbb {R}}}]. \end{aligned}$$

We replace the quaternionic matrices A, B, C, D with the respective entries a, b, c and d. Now, by Definitions 2.19 and 2.20 we obtain

$$\begin{aligned} GCK[(I-x_0a)^{-1}(x_0b)]=(I- {\mathcal {Q}}_1(x)a)^{-\odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x)b). \end{aligned}$$

This implies the following equality for the entries \(r_{\ell j}\) of the \({\mathbb {H}}^{M \times N}\)-valued function r

$$\begin{aligned} \breve{r}_{\ell j}(x){} & {} = \Delta r_{\ell j}(x)=-2[d+ c \odot _{GCK} \left( I- {\mathcal {Q}}_1(x)a\right) ^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x) b)]\\{} & {} \quad \forall \ell \in \{1, \ldots ,M\}, \ j \in \{1, \ldots , N\}. \end{aligned}$$

The statement follows by absorbing the constant \(-2\) in the matrices. \(\square \)

A relation between the two different notions of rational axially regular functions is discussed in the next result.

Proposition 4.4

A function which is rational axially regular according to Definition 3.4 is also rational according to Definition 4.1.

Proof

In Definition 3.4 we suppose that the function r is rational slice hyperholomorphic, so its restriction to the real line is a rational function and thus also the function \(\partial _{x_0}^2r_{| {\mathbb {R}}}\) is rational. The statement follows by Proposition 4.3. \(\square \)

4.1 Algebraic properties of rational axially regular functions

We now show that the notion of rational axially regular function given in Definition 4.1 is the most suitable one to extend to the Clifford-Appell framework the classical properties that hold for classical rational functions.

We begin by observing that a function which is a linear combination of the polynomials \({\mathcal {Q}}_{\ell }(x)\) admits a realization.

Lemma 4.5

Let M(x) be the \({\mathbb {H}}^{N \times N}\)-valued function defined as

$$\begin{aligned} M(x)= \sum _{\ell =0}^L {\mathcal {Q}}_{\ell }(x) M_\ell . \end{aligned}$$

Then

$$\begin{aligned} M(x)= D+ ({\mathcal {Q}}_1(x)C) \odot _{GCK} (I- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}}B, \end{aligned}$$

where \(D= M_0\) and

$$\begin{aligned} A:=\begin{pmatrix} 0_N &{}&{} I_N &{}&{} 0_N &{}&{} \ldots \\ 0_N &{}&{} 0_N &{}&{} I_N &{}&{} 0_N &{}&{} \ldots \\ &{}&{}.\\ &{}&{}.\\ &{}&{}.\\ 0_N &{}&{} \ldots &{}&{} \ldots &{}&{} 0_N &{}&{} I_N\\ 0_N &{}&{} 0_N &{}&{} \ldots &{}&{} 0_N &{}&{} 0_N \end{pmatrix} \qquad B:= \begin{pmatrix} 0_N\\ 0_N\\ .\\ .\\ .\\ I_N\\ \end{pmatrix} \qquad C:= \begin{pmatrix} M_{L}&\,&M_{L-1}&\,&\ldots&\,&M_1 \end{pmatrix} \end{aligned}$$

Proof

The assertion follows from the formula

$$\begin{aligned} (I- {\mathcal {Q}}_1(x) A)^{-\odot _{GCK}}= \begin{pmatrix} I_N &{}&{} {\mathcal {Q}}_1(x)I_N &{}&{} {\mathcal {Q}}_{2}(x)I_N &{}&{} \ldots &{}&{} {\mathcal {Q}}_{L-1}(x)I_N\\ 0_N &{}&{} I_N &{}&{} {\mathcal {Q}}_{1}(x)I_N &{}&{} \ldots &{}&{} {\mathcal {Q}}_{L-2}(x)I_N\\ &{}&{}.\\ &{}&{}.\\ &{}&{}.\\ 0_N &{}&{} \ldots &{}&{} \ldots &{}&{} I_N&{}&{} {\mathcal {Q}}_1(x)I_N\\ 0_N &{}&{} 0_N &{}&{} \ldots &{}&{} 0_N &{}&{} I_N\\ \end{pmatrix}. \end{aligned}$$

\(\square \)

Lemma 4.6

Let us consider two axially regular realizations of the following form

$$\begin{aligned} r_j(x)=D_j+ C_j \odot _{GCK} \left( I- {\mathcal {Q}}_1(x)A_j\right) ^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x) B_j) \qquad j=1,2, \end{aligned}$$

which are \( {\mathbb {H}}^{M \times N}\) and \( {\mathbb {H}}^{N \times R}\)-valued, respectively. The generalized CK-product \(r_1 \odot _{GCK} r_2\) is a \( {\mathbb {H}}^{M \times R}\)-valued function, which can be written as

$$\begin{aligned} ( r_1\odot _{GCK} r_2)(x)= & {} D_1D_2+ \begin{pmatrix} C_1&\,&D_1C_2 \end{pmatrix} \odot _{GCK} \left( I- {\mathcal {Q}}_1(x) U \begin{pmatrix} A_1 &{}&{} B_1C_2\\ 0 &{}&{} A_2 \end{pmatrix} \right) ^{-\odot _{GCK}} \\{} & {} \odot _{GCK} ({\mathcal {Q}}_1(x)U) \begin{pmatrix} B_1D_2\\ B_2 \end{pmatrix}, \end{aligned}$$

where \(U:= \begin{pmatrix} I &{}&{} 0\\ 0 &{}&{} I \end{pmatrix}\).

Given realizations of two rational \( {\mathbb {H}}^{M \times N}\)-valued functions \(r_1\) and \(r_2\), a realization of the sum \(r_1+r_2\) is given by

$$\begin{aligned} r_1(x)+r_2(x)= & {} D_1+D_2+ \begin{pmatrix} C_1&\,&C_2 \end{pmatrix} \odot _{GCK} \left( I- {\mathcal {Q}}_1(x) U \begin{pmatrix} A_1 &{}&{} 0\\ 0 &{}&{} A_2 \end{pmatrix} \right) ^{-\odot _{GCK}}\\{} & {} \odot _{GCK} ({\mathcal {Q}}_1(x)U) \begin{pmatrix} B_1\\ B_2 \end{pmatrix}. \end{aligned}$$

Proof

We start by proving the formula for the generalized CK-product between \(r_1\) and \(r_2\). We have

$$\begin{aligned} (r_1 \odot _{GCK} r_2)(x)= & {} D_1D_2+D_1C_2 \odot _{GCK} (I- {\mathcal {Q}}_1(x)A_2)^{-\odot _{GCK}} \odot _{GCK} {\mathcal {Q}}_1(x)B_2\\{} & {} +C_1 \odot _{GCK} (I- {\mathcal {Q}}_1(x) A_1)^{-\odot _{GCK}}\odot _{GCK} {\mathcal {Q}}_1(x) B_1D_2\\{} & {} +C_1 \odot _{GCK} (I- {\mathcal {Q}}_1(x)A_1)^{-\odot _{GCK}} \odot _{GCK} {\mathcal {Q}}_1(x) B_1C_2\\{} & {} \odot _{GCK} (I-{\mathcal {Q}}_1(x) A_2)^{-\odot _{GCK}}\\{} & {} \odot _{GCK} {\mathcal {Q}}_1(x)B_2. \end{aligned}$$

Then, by setting \( {\mathcal {A}}:= I- {\mathcal {Q}}_1(x)A_1\), \( {\mathcal {B}}:=- {\mathcal {Q}}_1(x)B_1C_2\) and \( {\mathcal {C}}=I- {\mathcal {Q}}_1(x)A_2\) we get

$$\begin{aligned} r_1(x) \odot _{GCK} r_2(x)= & {} D_1D_2+\begin{pmatrix} C_1&\,&D_1C_2 \end{pmatrix} \odot _{GCK}\\{} & {} \begin{pmatrix} {\mathcal {A}}^{-\odot _{GCK}} &{}&{} - {\mathcal {A}}^{-\odot _{GCK}} \odot _{GCK} {\mathcal {B}} \odot _{GCK} {\mathcal {C}}^{-\odot _{GCK}}\\ 0 &{}&{} {\mathcal {C}}^{-\odot _{GCK}} \end{pmatrix}\\{} & {} \odot _{GCK} \begin{pmatrix} {\mathcal {Q}}_1(x) B_1D_2\\ {\mathcal {Q}}_1(x)B_2 \end{pmatrix}. \end{aligned}$$

Now we observe that

$$\begin{aligned} \begin{pmatrix} {\mathcal {A}}^{-\odot _{GCK}} &{}&{} - {\mathcal {A}}^{-\odot _{GCK}} \odot _{GCK} {\mathcal {B}} \odot _{GCK} {\mathcal {C}}^{-\odot _{GCK}}\\ 0 &{}&{} {\mathcal {C}}^{-\odot _{GCK}} \end{pmatrix} = \begin{pmatrix} {\mathcal {A}} &{}&{} {\mathcal {B}}\\ 0 &{}&{} {\mathcal {C}} \end{pmatrix}^{-\odot _{GCK}}. \end{aligned}$$

The above formula implies that

$$\begin{aligned} (r_1 \odot _{GCK} r_2)(x)= & {} D_1D_2+\begin{pmatrix} C_1&\,&D_1C_2 \end{pmatrix} \odot _{GCK} \begin{pmatrix} I- {\mathcal {Q}}_1(x) A_1 &{}&{} - {\mathcal {Q}}_1(x) B_1C_2\\ 0 &{}&{} I- {\mathcal {Q}}_1(x)A_2 \end{pmatrix}^{-\odot _{GCK}}\\{} & {} \odot _{GCK} \begin{pmatrix} {\mathcal {Q}}_1(x) B_1D_2\\ {\mathcal {Q}}_1(x) B_2 \end{pmatrix}\\= & {} D_1D_2+ \begin{pmatrix} C_1&\,&D_1C_2 \end{pmatrix} \odot _{GCK} \left( I- {\mathcal {Q}}_1(x) U \begin{pmatrix} A_1 &{}&{} B_1C_2\\ 0 &{}&{} A_2 \end{pmatrix} \right) ^{-\odot _{GCK}} \\{} & {} \odot _{GCK} ({\mathcal {Q}}_1(x)U) \begin{pmatrix} B_1D_2\\ B_2 \end{pmatrix}. \end{aligned}$$

To show the formula for \(r_1(x)+r_2(x)\) it is enough to observe that

$$\begin{aligned} \begin{pmatrix} r_1&\,&I \end{pmatrix} \odot _{GCK} \begin{pmatrix} I\\ r_2 \end{pmatrix}=r_1(x)+r_2(x), \end{aligned}$$

and to apply the formula for \(r_1 \odot _{GCK} r_2\). \(\square \)

Lemma 4.7

Let us consider the following \( {\mathbb {H}}^{N \times N}\)-valued rational axially regular function

$$\begin{aligned} r(x)=D+ C \odot _{GCK} \left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x) B), \end{aligned}$$

where A, B, C and D are matrices with quaternionic entries and of appropriate sizes and such that D is invertible. Then the generalized CK-inverse of r admits the following realization

$$\begin{aligned} r^{- \odot _{GCK}}(x)=D^{-1}-D^{-1}C \odot _{GCK}\left( I- {\mathcal {Q}}_1(x){\tilde{A}} \right) ^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x)B D^{-1}), \end{aligned}$$

where \({\tilde{A}}:= A-BD^{-1}C.\)

Proof

We have to show

$$\begin{aligned} r(x) \odot _{GCK} r^{-\odot _{GCK}}(x)=I. \end{aligned}$$

Then we have

$$\begin{aligned}{} & {} \left( D+ C \odot _{GCK} \left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x) B)\right) \odot _{GCK}\\{} & {} \quad \left( D^{-1}-D^{-1}C \odot _{GCK}\left( I- {\mathcal {Q}}_1(x){\tilde{A}} \right) ^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x)B D^{-1})\right) \\{} & {} =I-C \odot _{GCK}\left( I- {\mathcal {Q}}_1(x){\tilde{A}} \right) ^{- \odot _{GCK}}\odot _{GCK} ({\mathcal {Q}}_1(x)B D^{-1}) +C \odot _{GCK} \left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}} \\{} & {} \quad \odot _{GCK} ({\mathcal {Q}}_1(x) BD^{-1})-C \odot _{GCK} \left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x) BD^{-1}C)\\{} & {} \quad \odot _{GCK}\left( I- {\mathcal {Q}}_1(x){\tilde{A}} \right) ^{- \odot _{GCK}}\odot _{GCK} ({\mathcal {Q}}_1(x) BD^{-1})\\{} & {} = I-C \odot _{GCK} \left\{ \left( I- {\mathcal {Q}}_1(x){\tilde{A}} \right) ^{- \odot _{GCK}}- \left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}}+\left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}} \right. \\{} & {} \quad \left. \odot _{GCK} ({\mathcal {Q}}_1(x) BD^{-1}C)\odot _{GCK}\left( I- {\mathcal {Q}}_1(x){\tilde{A}} \right) ^{- \odot _{GCK}} \right\} \odot _{GCK}({\mathcal {Q}}_1(x) BD^{-1}). \end{aligned}$$

Now, we observe that

$$\begin{aligned} {\mathcal {Q}}_1(x) BD^{-1}C= {\mathcal {Q}}_1(x)(A- {\tilde{A}})=(I- {\mathcal {Q}}_1(x) {\tilde{A}})-(I- {\mathcal {Q}}_1(x) A). \end{aligned}$$

This implies that

$$\begin{aligned}{} & {} \left( I- {\mathcal {Q}}_1(x){\tilde{A}} \right) ^{- \odot _{GCK}}- \left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}}\\{} & {} \quad +\left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x) BD^{-1}C) \odot _{GCK}\left( I- {\mathcal {Q}}_1(x){\tilde{A}} \right) ^{-\odot _{GCK}}\\{} & {} =\left( I- {\mathcal {Q}}_1(x){\tilde{A}} \right) ^{- \odot _{GCK}}- \left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}}\\{} & {} \quad +\left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}} \odot _{GCK} \left[ (I- {\mathcal {Q}}_1(x) {\tilde{A}}) \right. \\{} & {} \quad \left. -(I- {\mathcal {Q}}_1(x) A)\right] \odot _{GCK} (I- {\mathcal {Q}}_1(x){\tilde{A}})^{-\odot _{GCK}}\\{} & {} =\left( I- {\mathcal {Q}}_1(x){\tilde{A}} \right) ^{- \odot _{GCK}}- \left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}}\\{} & {} \quad -\left( I- {\mathcal {Q}}_1(x){\tilde{A}} \right) ^{- \odot _{GCK}}+ \left( I- {\mathcal {Q}}_1(x)A\right) ^{- \odot _{GCK}}\\{} & {} =0. \end{aligned}$$

Hence the expression in braces vanishes, so that \(r(x) \odot _{GCK} r^{-\odot _{GCK}}(x)=I\). This proves the statement. \(\square \)

For a generic axially regular function written in the form

$$\begin{aligned} f(x)= \sum _{n=0}^\infty {\mathcal {Q}}_n(x) f_n, \qquad \{f_n\}_{n \in {\mathbb {N}}_0} \in {\mathbb {H}}, \end{aligned}$$

we define the operator

$$\begin{aligned} (R_0f)(x)= {\left\{ \begin{array}{ll} {\mathcal {Q}}_1(x)^{-\odot _{GCK}} \odot _{GCK} \left( f(x)-f(0)\right) , &{} x \ne 0,\\ f_1, &{} x = 0, \end{array}\right. } \end{aligned}$$
(4.5)

which plays the role of the backward shift operator.
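Identifying an axially regular function \(f=\sum _n {\mathcal {Q}}_n(x)f_n\) with its coefficient sequence \((f_0,f_1,\ldots )\), the operator \(R_0\) simply drops the first coefficient, exactly as the classical backward shift does on Taylor coefficients. A minimal sketch of this coefficient-level action (toy model with real coefficients, purely illustrative):

```python
def R0(coeffs):
    # (R_0 f)(x) = Q_1(x)^{-GCK-inverse} GCK-times (f(x) - f(0)):
    # on the coefficient sequence (f_0, f_1, f_2, ...) this drops f_0.
    return coeffs[1:]

f = [3.0, 1.0, 4.0, 1.0, 5.0]     # coefficients f_0, ..., f_4 (illustrative)
print(R0(f))                      # [1.0, 4.0, 1.0, 5.0]
```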

Now, we prove some equivalent conditions that characterize rational axially regular functions.

Theorem 4.8

The following conditions are equivalent

  1. (1)

    A rational axially regular function can be written as

    $$\begin{aligned} r(x)= D+C \odot _{GCK} (I- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x)B), \end{aligned}$$
    (4.6)

    where D, C, A and B are quaternionic matrices of suitable sizes.

  2. (2)

    The function r can be written as a series converging in a neighbourhood of the origin

    $$\begin{aligned} r(x)=\sum _{k=0}^\infty {\mathcal {Q}}_k(x) r_k \qquad r_{k}= {\left\{ \begin{array}{ll} D, &{} k=0,\\ CA^{k-1}B, &{} k \ge 1. \end{array}\right. } \end{aligned}$$
    (4.7)
  3. (3)

    The right linear span \( {\mathcal {M}}(r)\) of the columns of the functions \( R_0r\), \(R_0^2 r, \ldots \) is finite dimensional.

Proof

We start by proving \((1) \Longleftrightarrow (2)\). We show the equivalence by considering the quaternionic entries of the matrices A, B, C and D, which we denote by a, b, c and d, respectively. By Definition 2.20 we have

$$\begin{aligned}{} & {} d+ c \odot _{GCK} \left( 1- {\mathcal {Q}}_1(x)a\right) ^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x) b)\\{} & {} \quad = d+ GCK[c(1-x_0a)^{-1}x_0b]\\{} & {} \quad =d+GCK \left[ \sum _{k=1}^\infty x_0^k ca^{k-1}b\right] . \end{aligned}$$

Since the generalized CK-extension is a right-linear operator and by (2.9) we get

$$\begin{aligned}{} & {} d+ c \odot _{GCK} \left( 1- {\mathcal {Q}}_1(x)a\right) ^{- \odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x) b)\\ {}{} & {} \quad = d+ \left( \sum _{k=1}^\infty GCK[x_0^k] ca^{k-1}b\right) \\{} & {} \quad = d + \sum _{k=1}^\infty {\mathcal {Q}}_k(x) ca^{k-1}b. \end{aligned}$$

Now, we show that \((1) \Longrightarrow (3)\). Firstly, we observe that

$$\begin{aligned} R_0 (r(x))=C \odot _{GCK} (I- {\mathcal {Q}}_1(x)A)^{- \odot _{GCK}}B. \end{aligned}$$

By iterating similar computations we have

$$\begin{aligned} R_0^j (r(x))= C \odot _{GCK} (I- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}} A^{j-1}B, \qquad j=1,2, \ldots \end{aligned}$$

This means that the right linear span \( {\mathcal {M}}(r)\) is included in the span of the columns of the function \(C \odot _{GCK} (I- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}}\). Therefore the span \( {\mathcal {M}}(r)\) is finite dimensional.

Now, we prove that \((3) \Longrightarrow (1)\). Since (3) is in force, there exists an integer \(m_0 \in {\mathbb {N}}\) such that for every \(v \in {\mathbb {H}}^q\) there exist vectors \(u_1, \ldots , u_{m_0}\) such that

$$\begin{aligned} R_0^{m_0} rv= \sum _{m=1}^{m_0} R_{0}^{m} ru_m. \end{aligned}$$
(4.8)

Now, we denote by E the \( {\mathbb {H}}^{p \times m_0q}\)-valued slice hyperholomorphic function

$$\begin{aligned} E= \begin{pmatrix} R_0r&\,&R_0^2r&\,&\ldots&\,&R_0^{m_0}r \end{pmatrix}. \end{aligned}$$

Now, by (4.8), there exists a matrix \(A \in {\mathbb {H}}^{m_0q \times m_0q}\) such that

$$\begin{aligned} R_0E=EA. \end{aligned}$$

By the definition of the operator \(R_0\), see (4.5), we have

$$\begin{aligned} E(x)-E(0)=E(x) \odot _{GCK} {\mathcal {Q}}_1(x)A. \end{aligned}$$

This implies that

$$\begin{aligned} E(x) \odot _{GCK}(I- {\mathcal {Q}}_1(x)A)=E(0). \end{aligned}$$

Therefore, we have

$$\begin{aligned} E(x)= E(0) \odot _{GCK} (I- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}}. \end{aligned}$$
(4.9)

Moreover, we have also that

$$\begin{aligned} (R_0 r)(x)=E(x) \begin{pmatrix} I_q\\ 0\\ \vdots \\ 0 \end{pmatrix}. \end{aligned}$$

The definition of the operator \( R_0\) and formula (4.9) implies that

$$\begin{aligned} r(x)- r(0)=E(0)\odot _{GCK} (I- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}}\odot _{GCK} {\mathcal {Q}}_1(x) \begin{pmatrix} I_q\\ 0\\ \vdots \\ 0 \end{pmatrix}. \end{aligned}$$

Then we have that r(x) is of the form (4.6). \(\square \)
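In the commutative toy model where \({\mathcal {Q}}_k(x)\) is replaced by \(z^k\), the equivalence \((1)\Longleftrightarrow (2)\) is the familiar identity \(D+zC(I-zA)^{-1}B=D+\sum _{k\ge 1}z^k CA^{k-1}B\). The following quick numerical check (all data illustrative) is only meant to make that identity concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3                                            # illustrative size
A = 0.2 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
D = np.array([[0.7]])

z = 0.1                                          # a point near the origin
# closed-form realization:  D + z C (I - zA)^{-1} B
closed = D + z * C @ np.linalg.inv(np.eye(n) - z * A) @ B
# truncated series:         D + sum_{k>=1} z^k C A^{k-1} B
series = D + sum(z**k * C @ np.linalg.matrix_power(A, k - 1) @ B for k in range(1, 60))
print(np.allclose(closed, series))               # True
```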

Remark 4.9

A different type of regular rational functions was previously considered in [16]. In that paper the authors studied a notion of rational hyperholomorphic function in \( {\mathbb {R}}^4\) by means of the Fueter variables and the CK-product. Precisely, they defined the counterpart of a rational function in the regular setting as

$$\begin{aligned} R(x)= & {} D+ C\odot (I- \xi _1(x) A_1-\xi _2(x)A_2-\xi _3(x)A_3)^{- \odot _{CK}} \odot _{CK} \\{} & {} (\xi _1(x) B_1+ \xi _2(x)B_2+\xi _3(x)B_3), \end{aligned}$$

where \(A_i\), \(B_i\) (with \(i=1,2,3\)) are constant matrices with quaternionic entries and of appropriate dimensions. We observe that the function R is Fueter regular in a neighbourhood of the origin.

A different notion of rational regular function of axial type was considered in [8]. There, a rational axially regular function is defined as

$$\begin{aligned} {\mathcal {R}}(x)=D+ \sum _{n=1}^\infty P_n(x) CA^{n-1}B, \qquad P_n(x)= \frac{Q_n(x)}{\sum _{\ell =0}^m (-1)^\ell B_\ell ^m}, \end{aligned}$$
(4.10)

where the matrices A, B, C and D are quaternionic matrices of suitable sizes. The main issue with the previous notion of rational axially regular function is that it is not possible to write a series expansion like the one in (4.7).

We summarize the notions of rational functions in the hyperholomorphic setting that appear in the literature:

  • Slice hyperholomorphic: realization \(D+ C*(I-xA)^{-*}*(xB)\); series \(D+\sum _{n=1}^\infty x^n C A^{n-1}B\).

  • Monogenic: realization \(D+ C\odot (I- \xi _1 A_1-\xi _2A_2-\xi _3A_3)^{- \odot } \odot (\xi _1 B_1+ \xi _2B_2+\xi _3B_3)\); series \(\sum _{n=0}^\infty \sum _{|\nu |=n} \xi ^{\nu } R_{\nu }\), where \(R_{\nu }:=\frac{(|\nu |-1)!}{\nu !} C\begin{pmatrix} \nu _1 A^{\nu -e_1}&\,&\nu _2 A^{\nu -e_2}&\,&\nu _3 A^{\nu -e_3} \end{pmatrix}B\).

  • Axially regular (CK): no realization available; series \(D+\sum _{k=1}^\infty P_k(x) CA^{k-1}B\).

  • Axially regular (GCK): realization \(D+C \odot _{GCK} (I- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x)B)\); series \(D+\sum _{k=1}^\infty {\mathcal {Q}}_k(x) CA^{k-1}B\).

5 Hardy space

Positive definite functions and kernels and their associated reproducing kernel Hilbert spaces are important in complex analysis, stochastic processes and machine learning, see [2, 45, 47]. In the quaternionic setting these notions are considered e.g. in [5, 15].

Definition 5.1

A quaternionic-valued function \({\mathcal {K}}(u,v)\), with u and v in some set \(\Omega \), is called positive definite if

  • it is Hermitian:

    $$\begin{aligned} {\mathcal {K}}(u,v)=\overline{{\mathcal {K}}(v,u)} \qquad \forall u,v \in \Omega . \end{aligned}$$
    (5.1)
  • for every \(N \in {\mathbb {N}}\), every \(u_1, \ldots , u_N \in \Omega \) and \(c_1, \ldots , c_N \in {\mathbb {H}}\) it holds that

    $$\begin{aligned} \sum _{\ell ,j =1}^N {\bar{c}}_\ell {\mathcal {K}}(u_{\ell }, u_j) c_j \ge 0. \end{aligned}$$
    (5.2)

From (5.1) it is clear that for any choice of the variables, the sum in (5.2) is a real number.

Associated with \( {\mathcal {K}}(u,v)\) there exists a uniquely defined reproducing kernel quaternionic (right)-Hilbert space \( {\mathcal {H}}({\mathcal {K}})\).

Definition 5.2

A quaternionic Hilbert space \( {\mathcal {H}}({\mathcal {K}})\) of quaternionic valued functions defined on a set \( \Omega \) is called reproducing kernel quaternionic Hilbert space if

  • for every \(v \in \Omega \) and \(c \in {\mathbb {H}}\) the function \( u \mapsto {\mathcal {K}}(u,v)c\) belongs to \( {\mathcal {H}}({\mathcal {K}})\),

  • for every \(f \in {\mathcal {H}}({\mathcal {K}})\), \(u \in \Omega \) and \(c \in {\mathbb {H}}\) it holds that

    $$\begin{aligned} {\bar{c}}f(v)= \langle f(.), {\mathcal {K}}(.,v)c \rangle _{{\mathcal {H}}({\mathcal {K}})}. \end{aligned}$$

It is possible to characterize the functions belonging to \( {\mathcal {H}}({\mathcal {K}})\); see the next result, originally proved in [15, Prop. 9.4].

Lemma 5.3

Let \( {\mathcal {H}}({\mathcal {K}})\) be a reproducing kernel quaternionic Hilbert space with reproducing kernel \( {\mathcal {K}}(u,v)\). Then, a function f belongs to \( {\mathcal {H}}( {\mathcal {K}})\) if and only if there exists a constant \( M>0\) such that

$$\begin{aligned} {\mathcal {K}}(u,v)- \frac{f(u) \overline{f(v)}}{M^2} \ge 0. \end{aligned}$$

In the previous inequality one can take \(M= \Vert f \Vert _{{\mathcal {H}}({\mathcal {K}})}\).

An example of reproducing kernel quaternionic Hilbert space is the Hardy space.

The aim of this section is to recall and study the main properties of the Hardy space defined through the Clifford-Appell polynomials. This space was already considered in [32], but in this paper we show more properties. We denote by \( {\mathbb {B}}\) the unit ball in \( {\mathbb {R}}^4\)

$$\begin{aligned} {\mathbb {B}}:= \{x \in {\mathbb {R}}^4: \, x_0^2+x_1^2+x_2^2+x_3^2 <1\}. \end{aligned}$$

To state the next result we introduce the notation

$$\begin{aligned} f^{m \odot _{GCK}}= \underbrace{f\odot _{GCK} f\odot _{GCK} \ldots \odot _{GCK}f}_{m-times}. \end{aligned}$$

Lemma 5.4

The function

$$\begin{aligned} {\mathcal {K}}(x,y)= \sum _{m=0}^\infty {\mathcal {Q}}_1(x)^{m\odot _{GCK}}\overline{{\mathcal {Q}}_1(y)}^{m\odot _{GCK}}, \end{aligned}$$
(5.3)

is absolutely convergent for x, \(y \in {\mathbb {B}}\).

Proof

The convergence follows by (2.10), indeed we have

$$\begin{aligned} | {\mathcal {K}}(x,y)| \le \sum _{m=0}^\infty |{\mathcal {Q}}_1(x)^{m\odot _{GCK}}| |{\mathcal {Q}}_1^{m \odot _{GCK}}(y)| \le \sum _{m=0}^\infty |xy|^m. \end{aligned}$$
(5.4)

By the behaviour of the geometric series (5.4) converges if x, \(y \in {\mathbb {B}}\). \(\square \)

By using the generalized CK-inverse, we have the following result.

Lemma 5.5

The function \({\mathcal {K}}(x,y)\), introduced in (5.3), for x, \(y \in {\mathbb {B}}\), can be written as

$$\begin{aligned} {\mathcal {K}}(x,y)=(1- {\mathcal {Q}}_1(x) {\mathcal {Q}}_1(y))^{-\odot _{GCK}}, \end{aligned}$$

where the generalized CK-extension is with respect to the variable x.

Proof

We set \( \alpha (y):={\mathcal {Q}}_1^{m \odot _{GCK}}(y)\) and we recall that \( {\mathcal {Q}}_1(x)^{m \odot _{GCK}}=GCK[x_0^m]\). Then we get

$$\begin{aligned} {\mathcal {K}}(x,y)= & {} \sum _{m=0}^\infty {\mathcal {Q}}_1(x)^{m \odot _{GCK}} \alpha (y)\\= & {} GCK \left[ \sum _{m=0}^\infty x_{0}^m \alpha (y)\right] . \end{aligned}$$

Since x, \(y \in {\mathbb {B}}\) we can write

$$\begin{aligned} {\mathcal {K}}(x,y)= & {} GCK \left[ \sum _{m=0}^\infty x_{0}^m \alpha (y)\right] \\= & {} GCK[(1-x_0 \alpha (y))^{-1}]\\= & {} (1- {\mathcal {Q}}_1(x) {\mathcal {Q}}_1(y))^{-\odot _{GCK}}. \end{aligned}$$

\(\square \)
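On the real axis, where \({\mathcal {Q}}_1(x)^{m\odot _{GCK}}=GCK[x_0^m]\) restricts back to \(x_0^m\), the kernel reduces to an ordinary geometric series with closed form \((1-x_0y_0)^{-1}\). A quick numerical check for real points of \({\mathbb {B}}\) (a toy sanity check only, not the full quaternionic kernel):

```python
x0, y0 = 0.6, -0.4                        # real points of the unit ball (illustrative)
series = sum((x0 * y0) ** m for m in range(200))   # truncated geometric series
closed = 1.0 / (1.0 - x0 * y0)                      # closed form on the real axis
print(abs(series - closed) < 1e-12)       # True
```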

Definition 5.6

The kernel in (5.3) is associated with a reproducing kernel Hilbert space called Hardy space. This will be denoted by \( {\textbf{H}}_2({\mathbb {B}})\).

Following [32] we recall a characterization of the Hardy space

Theorem 5.7

The Hardy space \( {\textbf{H}}_2({\mathbb {B}})\) consists of functions of the form

$$\begin{aligned} f(x)= \sum _{m=0}^\infty {\mathcal {Q}}_{m}(x) f_m, \qquad \{f_m\}_{m \ge 0} \subset {\mathbb {H}} \end{aligned}$$

where the coefficients satisfy the following condition

$$\begin{aligned} \sum _{m=0}^\infty |f_m|^2 < \infty . \end{aligned}$$

The norm of a function f in the Hardy space is given by \( \Vert f \Vert _{{\textbf{H}}_2({\mathbb {B}})}^2=\sum _{m=0}^\infty |f_m|^2.\)

Remark 5.8

Other types of Hardy spaces can be studied in the noncommutative setting. For example, in [16] the authors studied the so-called Drury–Arveson space. The reproducing kernel of this space is given by

$$\begin{aligned} K(x,y)= & {} \sum _{m=0}^\infty \sum _{|\nu |=m} \frac{|\nu |!}{\nu !} \xi (x)^\nu {\overline{\xi }}^{\nu }(y)\\= & {} (1- \xi _1(x)\overline{\xi _1(y)}-\xi _2(x)\overline{\xi _2(y)}-\xi _3(x)\overline{\xi _3(y)})^{-\odot _{CK}}. \end{aligned}$$

The convergence of the previous sum is guaranteed if x, y belong to the ellipsoid \( {\mathcal {E}}:=\{x \in {\mathbb {R}}^4 \,: \, 3x_0^2+x_1^2+x_2^2+x_3^2 <1\}\).

In [8], the authors use the axially regular kernel defined by

$$\begin{aligned} {\mathcal {K}}_{\mathcal {E'}}(x,y)= \sum _{m=0}^\infty P_1(x)^{m \odot _{CK}} \overline{P_1(y)}^{m \odot _{CK}}, \end{aligned}$$
(5.5)

where x, \(y \in \mathcal {E'}:=\{x \in {\mathbb {R}}^4\,: \, 9x_0^2+x_1^2+x_2^2+x_3^2 <1\}\). The function defined in (5.5) is a reproducing kernel of the Hardy space defined in terms of the polynomials \(P_{n}(x)\), see (4.10). We observe that the kernel (5.5) differs from the one used in this paper since we use another type of Clifford-Appell polynomials, see (2.6). Moreover, we make use of the GCK-product.

Finally, another hypercomplex setting where to consider the Hardy space is the slice hyperholomorphic framework, see [3, 5]. In this case the reproducing kernel is

$$\begin{aligned} k(x,y){} & {} =\sum x^n {\bar{y}}^n=(1-2y_0x+|y|^2x^2)^{-1}(1-xy)\\{} & {} =(1- {\bar{x}} {\bar{y}})(1-2x_0 {\bar{y}}+|x|^2 {\bar{y}}^2)^{-1}. \end{aligned}$$

All the reproducing kernels and domains of the different Hardy (or Drury–Arveson) spaces in the noncommutative setting are summarized below:

  • Slice hyperholomorphic: kernel \(\sum p^n {\bar{q}}^n\); domain \({\mathbb {B}}\).

  • Monogenic: kernel \(\sum _{m=0}^\infty \sum _{|\nu |=m} \frac{|\nu |!}{\nu !} \xi (x)^\nu {\overline{\xi }}^{\nu }(y)\); domain \({\mathcal {E}}\).

  • Axially regular (CK): kernel \( \sum _{m=0}^\infty P_1(x)^{m \odot _{CK}} \overline{P_1(y)}^{m \odot _{CK}}\); domain \(\mathcal {E'}\).

  • Axially regular (GCK): kernel \( \sum _{m=0}^\infty {\mathcal {Q}}_1(x)^{m\odot _{GCK}}\overline{{\mathcal {Q}}_1(y)}^{m\odot _{GCK}}\); domain \({\mathbb {B}}\).

We recall from [32] that the counterpart of the shift operator in our framework is given by

$$\begin{aligned} {\mathcal {M}}_{{\mathcal {Q}}_1}f={\mathcal {Q}}_1 \odot _{GCK} f, \qquad f \in {\textbf{H}}_2({\mathbb {B}}). \end{aligned}$$
(5.6)

In [32, Thm. 6.8] the authors proved that the adjoint of the previous operator is the so-called backward-shift operator and it is defined in the following way

$$\begin{aligned} {\mathcal {M}}_{{\mathcal {Q}}_1}^{*}(f)(x)= \sum _{m=0}^\infty {\mathcal {Q}}_m(x) f_{m+1}. \end{aligned}$$
(5.7)

Lemma 5.9

The operator defined in (5.7) for functions in \( {\textbf{H}}_2({\mathbb {B}})\) coincides with the operator \( R_0\) introduced in (4.5).

Proof

Let us consider the axially regular function on \({\mathbb {B}}\)

$$\begin{aligned} f(x)= \sum _{m=0}^\infty {\mathcal {Q}}_m(x) f_m. \end{aligned}$$

This implies that

$$\begin{aligned} f(x)-f(0)= \sum _{m=1}^\infty {\mathcal {Q}}_m(x) f_m. \end{aligned}$$

Therefore, we have that

$$\begin{aligned} (R_0f) (x)= \sum _{m=1}^\infty {\mathcal {Q}}_{m-1}(x) f_{m}={\mathcal {M}}_{{\mathcal {Q}}_1}^{*}(f)(x). \end{aligned}$$

\(\square \)

Lemma 5.10

Let \(f \in {\textbf{H}}_2({\mathbb {B}})\). The operator \({\mathcal {M}}_{{\mathcal {Q}}_1}\) is an isometry in the Hardy space. Moreover, we have

$$\begin{aligned} {\mathcal {M}}_{{\mathcal {Q}}_1} {\mathcal {M}}_{{\mathcal {Q}}_1}^{*}f(x)=f(x)-f(0), \qquad f \in {\textbf{H}}_2({\mathbb {B}}). \end{aligned}$$
(5.8)

Proof

It is easy to prove that the shift operator is an isometry on the Hardy space. By formula (5.7) we have

$$\begin{aligned} {\mathcal {M}}_{{\mathcal {Q}}_1} {\mathcal {M}}_{{\mathcal {Q}}_1}^{*}f(x)= & {} {\mathcal {Q}}_1(x) \odot _{GCK} \left( \sum _{m=0}^\infty {\mathcal {Q}}_m(x) f_{m+1}\right) \\= & {} \sum _{m=0}^\infty {\mathcal {Q}}_{m+1}(x)f_{m+1}\\= & {} f(x)-f(0). \end{aligned}$$

\(\square \)

Now we can define the point evaluation map in the Hardy space as \(Cf=f(0)\). The adjoint operator is defined as \(C^{*}u= {\mathcal {K}}(.,0)u=u\). Then by the equality (5.8) we get

$$\begin{aligned} I- {\mathcal {M}}_{{\mathcal {Q}}_1} {\mathcal {M}}^{*}_{{\mathcal {Q}}_1}=C^{*}C. \end{aligned}$$
(5.9)
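On coefficient sequences the identity (5.9) is transparent: \({\mathcal {M}}_{{\mathcal {Q}}_1}\) shifts the coefficients to the right, \({\mathcal {M}}_{{\mathcal {Q}}_1}^{*}\) shifts them to the left, and their gap is the rank-one projection onto the constant term \(f(0)=f_0\). A toy sketch with real coefficients (purely illustrative):

```python
import numpy as np

def M(c):        # M_{Q_1}: multiplication by Q_1 = right shift of the coefficients
    return np.concatenate(([0.0], c))

def Mstar(c):    # M_{Q_1}^*: backward shift of the coefficients
    return c[1:]

f = np.array([2.0, -1.0, 0.5, 3.0])                     # coefficients f_0, ..., f_3 (illustrative)
lhs = f - M(Mstar(f))                                   # (I - M M^*) f
rhs = np.concatenate(([f[0]], np.zeros(len(f) - 1)))    # C^*C f: the constant function f(0)
print(np.allclose(lhs, rhs))                            # True
print(np.isclose(np.linalg.norm(M(f)), np.linalg.norm(f)))   # the shift is an isometry
```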

Remark 5.11

A structural equality like the one in (5.9) is also obtained in the framework of Clifford-Appell polynomials in [8] but with the operator of multiplication by \(P_1\).

We sum up the main structural identities in the quaternionic setting:

  • Slice hyperholomorphic: \(I- {\mathcal {M}}_{p}{\mathcal {M}}^{*}_p=C^*C\).

  • Regular: \(I- {\mathcal {M}}_{\xi } {\mathcal {M}}^{*}_{\xi }=C^*C\).

  • Axially regular (CK): \(I- {\mathcal {M}}_{P_1} {\mathcal {M}}^{*}_{P_1}=C^{*}C\).

  • Axially regular (GCK): \(I- {\mathcal {M}}_{{\mathcal {Q}}_1} {\mathcal {M}}^{*}_{{\mathcal {Q}}_1}=C^{*}C\).

6 Schur multipliers

We recall that, in the complex setting, a Schur multiplier is a function s that satisfies any of the following equivalent conditions, see [1].

Theorem 6.1

The following are equivalent

  1. 1.

    The function s is analytic and contractive in the open unit disk.

  2. 2.

    The function s is defined in \( {\mathbb {D}}\) and the operator of multiplication by s is a contraction from the complex Hardy space into itself.

  3. 3.

    The function s is defined in \( {\mathbb {D}}\) and the kernel

    $$\begin{aligned} k_s(z,w)= \sum _{n=0}^\infty z^n(1- s(z)\overline{s(w)}) {\overline{w}}^n= \frac{1-s(z)\overline{s(w)}}{1-z {\bar{w}}} \end{aligned}$$

    is positive definite in the open unit disk.
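As a quick numerical illustration of condition 3 in the classical complex case, one can take a sample Schur function, for instance \(s(z)=z/2\) (chosen here only for illustration), evaluate the kernel \(k_s\) at a few points of the disk and check that the resulting Gram matrix is positive semidefinite:

```python
import numpy as np

s = lambda z: z / 2                               # a sample Schur function (illustrative)
k_s = lambda z, w: (1 - s(z) * np.conj(s(w))) / (1 - z * np.conj(w))

pts = [0.1, 0.3 + 0.2j, -0.5j, 0.7]               # points in the open unit disk
G = np.array([[k_s(z, w) for w in pts] for z in pts])    # Gram matrix [k_s(z_i, z_j)]
print(np.min(np.linalg.eigvalsh(G)) >= -1e-12)           # True: positive semidefinite
```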

In the literature Schur multipliers are related to several research directions: inverse scattering (see [12, 13, 20, 26]), fast algorithms (see [42, 43]), interpolation problems (see [33]) among others.

In [3, 5] the authors defined a counterpart of the Schur multipliers in the quaternionic setting by using the theory of slice hyperholomorphic functions. Also in this framework it is possible to show a list of equivalent conditions characterising Schur multipliers, see [5, Thm. 6.2.5].

In [17] Schur multipliers were introduced in the regular setting using the Cauchy–Kovalevskaya product and series of Fueter polynomials. Precisely, a function \({\textbf{S}}\) is a Schur multiplier in the regular setting if the kernel

$$\begin{aligned} K_{{\textbf{S}}}(x,y)= \sum _{k=0}^\infty \sum _{|\nu |=k} \left( \xi ^{\nu }(x) \overline{\xi ^{\nu }(y)}-({\textbf{S}}\odot _{CK} \xi ^\nu )(x)\overline{({\textbf{S}}\odot _{CK} \xi ^\nu )(y)}\right) , \end{aligned}$$

is positive.

Inspired by this definition, we give the definition of Schur multipliers in the present framework. We note that in [8] Schur multipliers were defined in the axially regular setting by using the polynomials defined in (4.10) and the CK-product but, as we discussed in the previous sections, the description via the GCK-product and the polynomials \({\mathcal {Q}}_{n}(x)\) has more advantages.

Definition 6.2

A function \(S: {\mathbb {B}} \rightarrow {\mathbb {H}}\) is a Schur multiplier if the kernel

$$\begin{aligned} K_S(x,y)= \sum _{n=0}^\infty \left( {\mathcal {Q}}_{n}(x) \overline{{\mathcal {Q}}_n(y)}-(S \odot _{GCK} {\mathcal {Q}}_n)(x) \overline{(S \odot _{GCK} {\mathcal {Q}}_n)(y)}\right) \end{aligned}$$

is positive in \( {\mathbb {B}} \times {\mathbb {B}}\).

The reproducing kernel Hilbert space with reproducing kernel \(K_S(x,y)\) will be denoted by \( {\mathcal {H}}(S)\). This space was first introduced in [28, 29].

In this paper we use the following notion for the multiplicative operator

Definition 6.3

Let \(S: {\mathbb {B}} \rightarrow {\mathbb {H}}\) be a generic function. The left \(\odot _{GCK}\)-multiplication operator by S is defined as

$$\begin{aligned} {\mathcal {M}}_S: f \mapsto S \odot _{GCK}f. \end{aligned}$$

If we consider a function regular in the unit ball \({\mathbb {B}}\) and written in the form \(f(x)=\sum _{k=0}^\infty {\mathcal {Q}}_k(x)f_{k}\), with \(f_k \in {\mathbb {H}}\), we can write the operator \( {\mathcal {M}}_S\) in the following way

$$\begin{aligned} ({\mathcal {M}}_S f)(x)=(S \odot _{GCK} f)(x)=\sum _{k=0}^\infty \left( S(x) \odot _{GCK}{\mathcal {Q}}_k(x) \right) f_k. \end{aligned}$$
(6.1)

Remark 6.4

In order to define the operator \( {\mathcal {M}}_S\) we need to require that the restriction of the function S to the real axis is real analytic. Moreover, we observe that if the operator \( {\mathcal {M}}_S\) maps \( {\textbf{H}}_2({\mathbb {B}})\) into itself, then \( S={\mathcal {M}}_S 1\) belongs to the Hardy space \( {\textbf{H}}_2({\mathbb {B}})\).

Theorem 6.5

A function \(S: {\mathbb {B}} \rightarrow {\mathbb {H}}\) is a Schur multiplier if and only if the operator \( {\mathcal {M}}_S\) is a contraction on \( {\textbf{H}}_2({\mathbb {B}})\).

Proof

Let us start by supposing that the operator \({\mathcal {M}}_S\) is a contraction. By the formula of the reproducing kernel of the Hardy space, see formula (5.3), and by (6.1) we obtain

$$\begin{aligned} {\mathcal {M}}_S {\mathcal {K}}(.,y)= \sum _{k=0}^\infty \left( S(x) \odot _{GCK}^x{\mathcal {Q}}_k(x) \right) \overline{{\mathcal {Q}}_k(y)}. \end{aligned}$$

By using the reproducing kernel property we have that

$$\begin{aligned} ({\mathcal {M}}_S^{*} {\mathcal {K}}(.,y))(x)= & {} \langle {\mathcal {M}}_S^* {\mathcal {K}}(.,y), {\mathcal {K}}(.,x) \rangle _{{\textbf{H}}_2({\mathbb {B}})}\nonumber \\= & {} \langle {\mathcal {K}}(.,y), S \odot _{GCK}^y {\mathcal {K}}(.,x) \rangle _{{\textbf{H}}_2({\mathbb {B}})}\nonumber \\= & {} \overline{ S(y) \odot _{GCK}^y {\mathcal {K}}(y,x) }\nonumber \\= & {} \overline{ S(y) \odot _{GCK}^y \sum _{k=0}^\infty {\mathcal {Q}}_k(y) \overline{{\mathcal {Q}}_k}(x) }\nonumber \\= & {} \sum _{k=0}^\infty \overline{\left( S(y) \odot _{GCK}^y {\mathcal {Q}}_k(y)\right) \overline{{\mathcal {Q}}_k}(x)}\nonumber \\= & {} \sum _{k=0}^\infty {\mathcal {Q}}_k(x) \overline{\left( S(y)\odot _{GCK}^y{\mathcal {Q}}_k(y) \right) }. \end{aligned}$$
(6.2)

This formula implies that

$$\begin{aligned}{} & {} \langle (I- {\mathcal {M}}_S {\mathcal {M}}_S^{*}) {\mathcal {K}}(.,y), {\mathcal {K}}(., x) \rangle _{ {\textbf{H}}_2({\mathbb {B}})}= \langle {\mathcal {K}}(.,y), {\mathcal {K}}(., x) \rangle _{ {\textbf{H}}_2({\mathbb {B}})}\\{} & {} \quad -\langle {\mathcal {M}}_S {\mathcal {M}}_S^{*} {\mathcal {K}}(.,y), {\mathcal {K}}(., x) \rangle _{ {\textbf{H}}_2({\mathbb {B}})}\\{} & {} \quad = \sum _{k=0}^\infty {\mathcal {Q}}_k(x) \overline{{\mathcal {Q}}_k}(y)- \sum _{k=0}^\infty \left( S \odot _{GCK} {\mathcal {Q}}_k\right) (x)\overline{\left( S \odot _{GCK}{\mathcal {Q}}_k \right) (y)}. \end{aligned}$$

Now, we consider a function \(f \in {\textbf{H}}_2({\mathbb {B}})\) of the form

$$\begin{aligned} f= \sum _{i=1}^r {\mathcal {K}}(., x_i) \alpha _i, \qquad r \in {\mathbb {N}}, \, x_i \in {\mathbb {B}}, \, \alpha _i \in {\mathbb {H}}. \end{aligned}$$
(6.3)

Therefore, we have

$$\begin{aligned} \langle (I- {\mathcal {M}}_S {\mathcal {M}}_S^{*}) f, f \rangle _{ {\textbf{H}}_2({\mathbb {B}})}= & {} \langle f, f \rangle _{ {\textbf{H}}_2({\mathbb {B}})}- \langle {\mathcal {M}}_S^{*} f, {\mathcal {M}}_S^{*} f\rangle _{ {\textbf{H}}_2({\mathbb {B}})}\nonumber \\= & {} \sum _{i,j=1}^r {\overline{\alpha }}_i {\mathcal {K}}(x_i,x_j) \alpha _j\nonumber \\{} & {} - \sum _{i,j=1}^r \sum _{k=0}^\infty {\overline{\alpha }}_i \left( S \odot _{GCK}{\mathcal {Q}}_k \right) (x)\overline{\left( S\odot _{GCK} {\mathcal {Q}}_k\right) (y)} \alpha _j\nonumber \\= & {} \sum _{i,j=1}^r {\overline{\alpha }}_i K_S(x_i, x_j) \alpha _j. \end{aligned}$$
(6.4)

Since the operator \( {\mathcal {M}}_S\) is a contraction we have that \(\langle (I- {\mathcal {M}}_S {\mathcal {M}}_S^{*}) f, f \rangle _{ {\textbf{H}}_2({\mathbb {B}})}\) is nonnegative. This implies that the quadratic form defined in (6.4) is nonnegative, and hence the kernel \(K_S\) is positive.

Now, we suppose that the kernel \( K_S\) is positive on \( {\mathbb {B}} \times {\mathbb {B}}\). Firstly, we observe that for each fixed \(y \in {\mathbb {B}}\) the function on the right-hand side of (6.2) belongs to \( {\textbf{H}}_2({\mathbb {B}})\). This implies that the following operator

$$\begin{aligned} T: {\textbf{H}}_2({\mathbb {B}}) \rightarrow {\textbf{H}}_2({\mathbb {B}}), \qquad \qquad {\mathcal {K}}(.,y) \mapsto \sum _{k=0}^\infty {\mathcal {Q}}_k(x) \overline{\left( S(y) \odot _{GCK}^y {\mathcal {Q}}_k(y)\right) } \end{aligned}$$

is well defined. It is possible to extend the previous operator by linearity to functions f of the form (6.3). Such functions are dense in \( {\textbf{H}}_2({\mathbb {B}})\), and so the operator T extends by continuity to all of \( {\textbf{H}}_2({\mathbb {B}})\). Using this density argument and formula (6.4), with the operator T in place of \( {\mathcal {M}}_S^{*}\), the positivity of \( K_S\) gives that T is a contraction on \( {\textbf{H}}_2({\mathbb {B}})\). Now, we compute the adjoint of the operator T. Let \(c_1\), \(c_2 \in {\mathbb {H}}\) and \(y_1\), \(y_2 \in {\mathbb {B}}\), then we get

$$\begin{aligned} \overline{c_2} (T^{*} {\mathcal {K}}(.,y_1) c_1)(y_2)= & {} \langle T^{*} {\mathcal {K}}(.,y_1) c_1, {\mathcal {K}}(., y_2) c_2 \rangle _{{\textbf{H}}_2({\mathbb {B}})}\\= & {} \left\langle {\mathcal {K}}(.,y_1) c_1, T\left( {\mathcal {K}}(., y_2)\right) c_2 \right\rangle _{{\textbf{H}}_2({\mathbb {B}})}\\= & {} \left\langle {\mathcal {K}}(.,y_1) c_1, \sum _{k=0}^\infty {\mathcal {Q}}_k(x) \overline{\left( S(y_2) \odot _{GCK}^{y_2} {\mathcal {Q}}_k(y_2)\right) } c_2 \right\rangle _{{\textbf{H}}_2({\mathbb {B}})}\\= & {} {\overline{c}}_2 \left( \sum _{k=0}^\infty \left( S(y_2) \odot _{GCK}^{y_2} {\mathcal {Q}}_k(y_2)\right) \overline{{\mathcal {Q}}_k(y_1)} c_1 \right) \\= & {} {\overline{c}}_2 \left( {\mathcal {M}}_S ({\mathcal {K}}(.,y_1)c_1)\right) (y_2). \end{aligned}$$

Thus we get that \(T^{*}= {\mathcal {M}}_S\). Since the operator T is a contraction, its adjoint is also a contraction. This implies that the operator \( {\mathcal {M}}_S\) is a contraction. \(\square \)

Another characterization of Schur multipliers is the following.

Theorem 6.6

A function \(S: {\mathbb {B}} \rightarrow {\mathbb {H}}\) is a Schur multiplier if and only if S belongs to \( \mathcal{A}\mathcal{M}({\mathbb {B}})\) and for all \( n\ge 0\) we have

$$\begin{aligned} I_{n+1}- L_n L_n^* \ge 0, \end{aligned}$$

where \(L_n\) is the lower triangular Toeplitz matrix given by

$$\begin{aligned} L_n:=\begin{pmatrix} S_0 &{} 0 &{} \ldots &{} 0\\ S_1 &{} S_0 &{} \ldots &{} 0\\ \vdots &{} &{} \ddots &{} \vdots \\ S_n &{} \ldots &{} S_1 &{} S_0 \end{pmatrix}, \qquad S(x)= \sum _{k=0}^\infty {\mathcal {Q}}_k(x) S_k. \end{aligned}$$
(6.5)

Proof

We assume that S is a Schur multiplier. Computations similar to those done in (6.2) show that for S written as in (6.5) we have, for all \( k \ge 0\), that

$$\begin{aligned} {\mathcal {M}}_S^{*}: {\mathcal {Q}}_{k}(x) \mapsto \sum _{j=0}^k {\mathcal {Q}}_{j}(x) {\overline{S}}_{k-j}, \end{aligned}$$

which extends by linearity to

$$\begin{aligned} {\mathcal {M}}_S^{*}: f(x)=\sum _{k=0}^n {\mathcal {Q}}_k(x) f_k \mapsto \sum _{k=0}^n {\mathcal {Q}}_k(x)\left( \sum _{j=k}^n {\overline{S}}_{j-k}f_j\right) . \end{aligned}$$

If we set \( {\textbf{f}}:=[f_0, \ldots , f_n]^T\), then by the shape of the matrix \(L_n\) we get

$$\begin{aligned} \Vert f \Vert _{{\textbf{H}}_2({\mathbb {B}})}^2-\Vert {\mathcal {M}}_S^{*}f \Vert _{{\textbf{H}}_2({\mathbb {B}})}^2= & {} \sum _{k=0}^n | f_k|^2- \sum _{k=0}^n \left| \sum _{j=k}^n {\overline{S}}_{j-k} f_j \right| ^2\nonumber \\= & {} {\textbf{f}}^{*}(I_{n+1}-L_nL^{*}_n) {\textbf{f}}. \end{aligned}$$
(6.6)

By Theorem 6.5 we know that \( {\mathcal {M}}_S\) is a contraction on \({\textbf{H}}_2({\mathbb {B}})\) (and thus so is \({\mathcal {M}}_S^{*}\)), hence (6.6) is nonnegative for every \( {\textbf{f}} \in {\mathbb {H}}^{n+1}\). This means that \(I_{n+1}- L_{n}L_{n}^{*} \ge 0\).

Conversely, we assume that \(I_{n+1}-L_nL^{*}_n \ge 0\) for each \(n \ge 0\). By (6.6) the operator \( {\mathcal {M}}_S^{*}\) acts contractively on functions of the form \(\sum _{k=0}^n {\mathcal {Q}}_k(x) f_k\). Such functions are dense in \( {\textbf{H}}_2({\mathbb {B}})\), and therefore the operators \( {\mathcal {M}}_S^{*}\) and \( {\mathcal {M}}_S\) are contractions. The thesis follows by Theorem 6.5. \(\square \)
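The finite-section test of Theorem 6.6 is easy to run numerically in a commutative toy model with real coefficients: build the lower triangular Toeplitz matrix \(L_n\) from the coefficients and check that \(I_{n+1}-L_nL_n^{*}\) has no negative eigenvalues. A sketch (the coefficients below are illustrative and correspond to a contractive toy multiplier):

```python
import numpy as np

S = [0.5, 0.4, 0.0, 0.0, 0.0]          # coefficients S_0, ..., S_n of a toy multiplier
n = len(S) - 1
L = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    for j in range(i + 1):
        L[i, j] = S[i - j]             # lower triangular Toeplitz matrix L_n as in (6.5)
gap = np.eye(n + 1) - L @ L.T          # I_{n+1} - L_n L_n^*
print(np.min(np.linalg.eigvalsh(gap)) >= -1e-12)   # True: the test of Theorem 6.6 passes
```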

Lemma 6.7

Let \(S_1\), \(S_2\) and S be Schur multipliers. Then we have the following equalities

  1. (1)

    \({\mathcal {M}}_{S_1} {\mathcal {M}}_{S_2}= {\mathcal {M}}_{S_1 \odot _{GCK} S_2}\)

  2. (2)

    \( {\mathcal {M}}_{{\mathcal {Q}}_1} {\mathcal {M}}_{S}= {\mathcal {M}}_{S}{\mathcal {M}}_{{\mathcal {Q}}_1}.\)

Proof

  1. (1)

    We observe that \(S_1 \odot _{GCK} S_2\) is an axially regular function with a series expansion in terms of the polynomials \( \{{\mathcal {Q}}_{n}(x)\}_{n \ge 0}\). Then we get

    $$\begin{aligned} {\mathcal {M}}_{S_1} {\mathcal {M}}_{S_2}(f)= & {} {\mathcal {M}}_{S_1}( S_2 \odot _{GCK}f)\\= & {} (S_1 \odot _{GCK} S_2)\odot _{GCK} f\\= & {} {\mathcal {M}}_{S_1 \odot _{GCK} S_2}(f). \end{aligned}$$
  2. (2)

    Let us consider a function \(f=\sum _{n=0}^\infty {\mathcal {Q}}_n(x) \alpha _n\), with \( \{\alpha _n\}_{n \in {\mathbb {N}}_0} \subset {\mathbb {H}}\). Then, by the fact that the generalized CK-product is a convolution product, and by (5.6), we have

    $$\begin{aligned} {\mathcal {M}}_{{\mathcal {Q}}_1} {\mathcal {M}}_{S}(f)= & {} {\mathcal {Q}}_1 \odot _{GCK} ({\mathcal {M}}_Sf)\\= & {} {\mathcal {Q}}_1 \odot _{GCK} \left( S \odot _{GCK} f\right) \\= & {} {\mathcal {Q}}_1 \odot _{GCK} \left( \sum _{n=0}^\infty ({\mathcal {Q}}_n(x) \odot _{GCK} S) \alpha _n\right) \\= & {} \sum _{n=0}^\infty ({\mathcal {Q}}_{n+1} \odot _{GCK} S)(x) \alpha _n\\= & {} {\mathcal {M}}_S \left( \sum _{n=0}^\infty {\mathcal {Q}}_{n+1}(x)\alpha _n\right) \\= & {} {\mathcal {M}}_S({\mathcal {Q}}_1 \odot _{GCK} f)\\= & {} {\mathcal {M}}_S{\mathcal {M}}_{{\mathcal {Q}}_1}(f). \end{aligned}$$

\(\square \)
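Since the generalized CK-product acts on coefficient sequences as a convolution, both properties of Lemma 6.7 can be seen, in a commutative toy model with real coefficients, as associativity of convolution and as commutation of convolution with the shift sequence representing \({\mathcal {Q}}_1\). A short sketch (all sequences illustrative):

```python
import numpy as np

S1 = np.array([0.3, 0.2, 0.1])              # toy multiplier coefficients (illustrative)
S2 = np.array([0.5, -0.4])
f  = np.array([1.0, 2.0, 0.5, 1.5])
conv = np.convolve                          # GCK product on coefficients = convolution (toy model)

# (1): M_{S1} M_{S2} f  =  M_{S1 GCK S2} f
print(np.allclose(conv(S1, conv(S2, f)), conv(conv(S1, S2), f)))   # True

# (2): M_{Q_1} M_{S1} f = M_{S1} M_{Q_1} f, with Q_1 represented by the sequence (0, 1)
Q1 = np.array([0.0, 1.0])
print(np.allclose(conv(Q1, conv(S1, f)), conv(S1, conv(Q1, f))))   # True
```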

Now, we show the counterpart of Schwarz’s lemma for Schur multipliers in this framework.

Theorem 6.8

Let S be a Schur multiplier, and assume that \(S(0)=0\). We set \(S(x)= (S^{(1)} \odot _{GCK}{\mathcal {Q}}_1 )(x)\). Then \(S^{(1)}\) is a Schur multiplier.

Proof

Since by hypothesis \(S(0)=0\) we have that \(1= {\mathcal {K}}_S(x,0) \in {\mathcal {H}}(S)\) and \( {\mathcal {K}}_S(0,0)=1=\Vert 1 \Vert _{{\mathcal {H}}(S)}\). Hence by Lemma 5.3 (with the function \(f \equiv 1\)) we have that

$$\begin{aligned} {\mathcal {K}}_S(x,y)-1 \ge 0, \end{aligned}$$

in \( {\mathbb {B}}\). By Definition 6.2 we have that

$$\begin{aligned} \sum _{n=0}^\infty \left( {\mathcal {Q}}_n(x) \overline{{\mathcal {Q}}_n(y)} - (S^{(1)} \odot _{GCK} {\mathcal {Q}}_{n+1})(x)\overline{( S^{(1)}\odot _{GCK}{\mathcal {Q}}_{n+1} )(y)}\right) \ge 1. \end{aligned}$$

Since \( {\mathcal {Q}}_0(x)= {\mathcal {Q}}_0(y)=1\) we have that

$$\begin{aligned} \sum _{n=1}^\infty {\mathcal {Q}}_n(x) \overline{{\mathcal {Q}}_n(y)}- \sum _{n=0}^\infty (S^{(1)} \odot _{GCK} {\mathcal {Q}}_{n+1})(x)\overline{( S^{(1)}\odot _{GCK}{\mathcal {Q}}_{n+1} )(y)} \ge 0. \end{aligned}$$

By shifting the index in the first sum we get

$$\begin{aligned} {\mathcal {Q}}_1(x){} & {} \odot _{GCK}^x \left( \sum _{n=0}^\infty {\mathcal {Q}}_n(x) \overline{{\mathcal {Q}}_n(y)}-( S^{(1)}\odot _{GCK} {\mathcal {Q}}_{n})(x)\overline{(S^{(1)} \odot _{GCK} {\mathcal {Q}}_{n})(y)}\right) \\{} & {} \odot _{GCK}^y \overline{{\mathcal {Q}}_1(y)} \ge 0. \end{aligned}$$

This implies that

$$\begin{aligned} \sum _{n=0}^\infty {\mathcal {Q}}_n(x) \overline{{\mathcal {Q}}_n(y)}-(S^{(1)} \odot _{GCK} {\mathcal {Q}}_{n})(x)\overline{(S^{(1)} \odot _{GCK} {\mathcal {Q}}_{n})(y)} \ge 0. \end{aligned}$$

Therefore, by Definition 6.2 we get the thesis. \(\square \)

Finally, we conclude this section with a characterization of the space \( {\mathcal {H}}(S)\). The proof is as in the classical case, see [9, 14].

Theorem 6.9

Let S be a Schur multiplier. Then

$$\begin{aligned} {\mathcal {H}}(S)= \hbox {range} \{\sqrt{I- {\mathcal {M}}_S {\mathcal {M}}^{*}_S}\} \end{aligned}$$

endowed with the norm

$$\begin{aligned} \Vert (\sqrt{I- {\mathcal {M}}_S {\mathcal {M}}_S^*})f \Vert _{{\mathcal {H}}(S)}=\Vert (I- \pi ) f \Vert _{{\textbf{H}}_2({\mathbb {B}})}, \end{aligned}$$

where \(\pi \) is the orthogonal projection on \(\hbox {Ker}(\sqrt{I- {\mathcal {M}}_S {\mathcal {M}}^{*}_S})\).

7 Realizations of Schur multipliers

In the case of holomorphic and slice hyperholomorphic functions, realizations and Schur multipliers are related to each other, see [3, 5], respectively. The aim of this section is to obtain similar results in the framework of Clifford-Appell polynomials.

We start by recalling the following notion.

Definition 7.1

A realization is called observable, or closely outer-connected, if the pair \((C,A) \in {\mathbb {H}}^{N \times N} \times {\mathbb {H}}^{M \times M}\) is observable, i.e.

$$\begin{aligned} \bigcap _{n=0}^\infty \hbox {ker} (CA^n)= \{0\}. \end{aligned}$$

Theorem 7.2

Let us consider a function \(S: {\mathbb {B}} \rightarrow {\mathbb {H}}\). Then S is a Schur multiplier if and only if there exists a right quaternionic Hilbert space \( {\mathcal {H}}(S)\) and a coisometric operator

$$\begin{aligned} \begin{pmatrix} A &{} B\\ C &{} D \end{pmatrix}: {\mathcal {H}}(S) \oplus {\mathbb {H}} \rightarrow {\mathcal {H}}(S) \oplus {\mathbb {H}} \end{aligned}$$
(7.1)

such that

$$\begin{aligned} S(x)= \sum _{n=0}^\infty {\mathcal {Q}}_n(x)S_n, \end{aligned}$$
(7.2)

where

$$\begin{aligned} S_n= {\left\{ \begin{array}{ll} D, \qquad n=0\\ CA^{n-1}B, \qquad n=1,2, \ldots \end{array}\right. } \end{aligned}$$
(7.3)

If we assume that the pair (C, A) is closely outer-connected, then the realization of S is unique up to an isometry of right quaternionic Hilbert spaces.

By Theorem 4.8 we can write the Schur multiplier S of the previous theorem as

$$\begin{aligned} S(x)= D+C \odot _{GCK} (I- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x)B). \end{aligned}$$
(7.4)

In order to prove the previous theorem we need some technical lemmas. We will use the notation \(\Gamma _S:=I- {\mathcal {M}}_S {\mathcal {M}}_S^{*}\). We recall that for h, \(g \in {\textbf{H}}_2({\mathbb {B}})\) we have the following relations

$$\begin{aligned} \langle \Gamma _S h, \Gamma _S g \rangle _{{\mathcal {H}}(S)}= & {} \langle \Gamma _S h,g \rangle _{{\textbf{H}}_2({\mathbb {B}})} \end{aligned}$$
(7.5)
$$\begin{aligned} \langle \sqrt{\Gamma _S }h, \Gamma _S g \rangle _{{\mathcal {H}}(S)}= & {} \langle \sqrt{\Gamma _S }h,g \rangle _{{\textbf{H}}_2({\mathbb {B}})}, \end{aligned}$$
(7.6)

see for instance [6, 34]. To show the next results we use a method similar to the one applied in [11], suitably adapted.

Lemma 7.3

Let S be a Schur multiplier. Then for \(x_1\), \(x_2 \in {\mathbb {B}}\) we have the following equality

$$\begin{aligned} \langle \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^* {\mathcal {K}}(.,x_1), \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*}{\mathcal {K}}(., x_2) \rangle _{{\mathcal {H}}(S)}= K_S(x_2,x_1)-1+S(x_2) \overline{S(x_1)}. \end{aligned}$$

Proof

First of all we observe that by (5.7) we have

$$\begin{aligned} {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} {\mathcal {K}}(x_2,x_1)= & {} {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} \left( \sum _{n=0}^\infty {\mathcal {Q}}_n(x_2) \overline{{\mathcal {Q}}_n(x_1)}\right) \nonumber \\= & {} \sum _{n=0}^\infty {\mathcal {Q}}_{n}(x_2) \overline{{\mathcal {Q}}_{n+1}(x_1)}, \end{aligned}$$
(7.7)

and

$$\begin{aligned} {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} {\mathcal {M}}_{S}^{*} {\mathcal {K}}(x_2, x_1)= & {} {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} \left( \sum _{n=0}^\infty {\mathcal {Q}}_{n}(x_2) \overline{(S \odot _{GCK} {\mathcal {Q}}_n)}(x_1)\right) \nonumber \\= & {} \sum _{n=0}^\infty {\mathcal {Q}}_{n}(x_2) \overline{(S\odot _{GCK} {\mathcal {Q}}_{n+1} )}(x_1). \end{aligned}$$
(7.8)

By formulas (7.5), (7.7) and (7.8) we have that

$$\begin{aligned}{} & {} \langle \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^* {\mathcal {K}}(.,x_1), \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*}{\mathcal {K}}(., x_2) \rangle _{{\mathcal {H}}(S)}=\langle (I- {\mathcal {M}}_S {\mathcal {M}}_{S}^{*}) {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} {\mathcal {K}}(.,x_1), \\{} & {} {\mathcal {M}}_{{\mathcal {Q}}_1}^* {\mathcal {K}}(., x_2)\rangle _{{\textbf{H}}_2({\mathbb {B}})}. \end{aligned}$$

Now, by the second point of Lemma 6.7 and Definition 6.2 we get

$$\begin{aligned}{} & {} \langle \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^* {\mathcal {K}}(.,x_1), \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*}{\mathcal {K}}(., x_2) \rangle _{{\mathcal {H}}(S)}\\{} & {} \quad = \langle {\mathcal {M}}_{{\mathcal {Q}}_1}^* {\mathcal {K}}(.,x_1), {\mathcal {M}}_{{\mathcal {Q}}_1}^* {\mathcal {K}}(.,x_2)\rangle _{{\textbf{H}}_2({\mathbb {B}})}-\langle {\mathcal {M}}_{{\mathcal {Q}}_1}^* {\mathcal {M}}_S^* {\mathcal {K}}(.,x_1), {\mathcal {M}}_{{\mathcal {Q}}_1}^* {\mathcal {M}}_S^* {\mathcal {K}}(.,x_2)\rangle _{{\textbf{H}}_2({\mathbb {B}})}\\{} & {} \quad = \sum _{n=0}^\infty {\mathcal {Q}}_{n+1}(x_2) \overline{{\mathcal {Q}}_{n+1}(x_1)}- \sum _{n=0}^\infty ( S\odot _{GCK}{\mathcal {Q}}_{n+1} )(x_2) \overline{ (S \odot _{GCK} {\mathcal {Q}}_{n+1})(x_1)}\\{} & {} \quad = \sum _{n=0}^\infty {\mathcal {Q}}_{n}(x_2) \overline{{\mathcal {Q}}_{n}(x_1)}- {\mathcal {Q}}_0(x_2) \overline{{\mathcal {Q}}_0(x_1)}\\{} & {} \qquad -\sum _{n=0}^{\infty } (S \odot _{GCK} {\mathcal {Q}}_{n})(x_2) \overline{ (S \odot _{GCK}{\mathcal {Q}}_{n} )(x_1)}+S(x_2)\overline{S(x_1)}\\{} & {} \quad = K_S(x_2,x_1)-1+S(x_2) \overline{S(x_1)}. \end{aligned}$$

\(\square \)

In the next result we will use the following notation

$$\begin{aligned} \omega _y u:= \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} {\mathcal {K}}(.,y)u, \qquad u \in {\mathbb {H}}, \quad y \in {\mathbb {B}}. \end{aligned}$$

Note that \(\omega _yu\) is well defined since \({\mathcal {M}}_{{\mathcal {Q}}_1}\) is a bounded operator from \({\textbf{H}}_2({\mathbb {B}})\) into itself.

Lemma 7.4

The right vector subspace of \(( {\mathcal {H}}(S) \oplus {\mathbb {H}}) \times ( {\mathcal {H}}(S) \oplus {\mathbb {H}})\) spanned by the pairs

$$\begin{aligned} \left( \begin{pmatrix} \omega _{y}u\\ v \end{pmatrix}, \begin{pmatrix} {\hat{A}}(\omega _{y} u)+{\hat{B}}(v)\\ {\hat{C}}(\omega _{y} u)+{\hat{D}}(v) \end{pmatrix}\right) \end{aligned}$$

where

$$\begin{aligned} {\hat{A}}(\omega _{y} u):= & {} \left( K_S(x,y)-K_{S}(x,0) \right) u, \qquad {\hat{B}}u:= K_S(x,0)u \\ {\hat{C}}(\omega _{y} u):= & {} \left( \overline{S(y)}-\overline{S(0)}\right) u, \qquad {\hat{D}}u:= \overline{S(0)} u, \end{aligned}$$

defines an isometric relation R

$$\begin{aligned} R:= \begin{pmatrix} {\hat{A}} &{}&{} {\hat{B}}\\ {\hat{C}} &{}&{} {\hat{D}} \end{pmatrix}, \end{aligned}$$

with dense domain.

Proof

Firstly we show that the relation R is an isometry. Precisely, we have to show that

$$\begin{aligned} \left\langle R \begin{pmatrix} \omega _{y_1}u_1\\ v_1 \end{pmatrix}, R \begin{pmatrix} \omega _{y_2}u_2\\ v_2 \end{pmatrix} \right\rangle _{{\mathcal {H}}(S) \oplus {\mathbb {H}}}= \left\langle \begin{pmatrix} \omega _{y_1}u_1\\ v_1 \end{pmatrix}, \begin{pmatrix} \omega _{y_2}u_2\\ v_2 \end{pmatrix} \right\rangle _{{\mathcal {H}}(S) \oplus {\mathbb {H}}}, \end{aligned}$$
(7.9)

where \(u_1\), \(u_2\), \(v_1\), \(v_2 \in {\mathcal {H}}(S)\) and \(y_1\), \(y_2 \in {\mathbb {B}}\). We can write relation (7.9) as

$$\begin{aligned}{} & {} \left\langle \begin{pmatrix} {\hat{A}}(\omega _{y_1} u_1)+{\hat{B}}(v_1)\\ {\hat{C}}(\omega _{y_1} u_1)+{\hat{D}}(v_1) \end{pmatrix}, \begin{pmatrix} {\hat{A}}(\omega _{y_2} u_2)+{\hat{B}}(v_2)\\ {\hat{C}}(\omega _{y_2} u_2)+{\hat{D}}(v_2) \end{pmatrix} \right\rangle _{{\mathcal {H}}(S) \oplus {\mathbb {H}}}\nonumber \\{} & {} = \left\langle \begin{pmatrix} \omega _{y_1}u_1\\ v_1 \end{pmatrix}, \begin{pmatrix} \omega _{y_2}u_2\\ v_2 \end{pmatrix} \right\rangle _{{\mathcal {H}}(S) \oplus {\mathbb {H}}}. \end{aligned}$$
(7.10)

By using Lemma 7.3 we write the term on the right hand side of (7.10) in the following way

$$\begin{aligned} \left\langle \begin{pmatrix} \omega _{y_1}u_1\\ v_1 \end{pmatrix}, \begin{pmatrix} \omega _{y_2}u_2\\ v_2 \end{pmatrix} \right\rangle _{{\mathcal {H}}(S) \oplus {\mathbb {H}}}= {\bar{u}}_2 K_S(y_2,y_1)u_1- {\bar{u}}_2 u_1+ {\bar{u}}_2 S(y_2) \overline{S(y_1)} u_1 + {\bar{v}}_2 v_1.\nonumber \\ \end{aligned}$$
(7.11)

We can write the term on the left hand side as

$$\begin{aligned}{} & {} {\bar{u}}_2 K_S(y_2, y_1) u_1-{\bar{u}}_2 K_S(y_2, 0) u_1-{\bar{u}}_2 K_S(0, y_1) u_1\nonumber \\{} & {} \quad +{\bar{u}}_2 K_S(0,0) u_1+{\bar{u}}_2 K_S(y_2, 0) v_1-{\bar{u}}_2 K_S(0,0) v_1\nonumber \\{} & {} \quad + {\bar{v}}_2 K_S(0, y_1) u_1- {\bar{v}}_2 K_S(0,0) u_1+ {\bar{v}}_2 K_S(0,0) v_1+ {\bar{u}}_2 S(y_2) \overline{S(y_1)}u_1\nonumber \\{} & {} \quad - {\bar{u}}_2 S(y_2) \overline{S(0)}u_1 -{\bar{u}}_2 S(0) \overline{S(y_1)} u_1\nonumber \\{} & {} \quad + {\bar{u}}_2 S(0) \overline{S(0)} u_1+ {\bar{u}}_2 S(y_2)\overline{S(0)} v_1- {\bar{u}}_2 S(0) \overline{S(0)} v_1+ {\bar{v}}_2 S(0) \overline{S(y_1)}u_1\nonumber \\{} & {} \quad - {\bar{v}}_2 S(0) {\overline{S}}(0)u_1+ {\bar{v}}_2 S(0) \overline{S(0)}v_1. \end{aligned}$$
(7.12)

By Definition 6.2 we observe that

$$\begin{aligned} K_S(0, 0)=1-S(0)\overline{S(0)}. \end{aligned}$$

This implies that

$$\begin{aligned}{} & {} {\bar{u}}_2 K_S(0,0) u_1+ {\bar{u}}_2 S(0) \overline{S(0)} u_1= {\bar{u}}_2 u_1, \\{} & {} {\bar{u}}_2 K_S(0,0) v_1+ {\bar{u}}_2 S(0) \overline{S(0)} v_1={\bar{u}}_2 v_1, \\{} & {} {\bar{v}}_2 S(0) \overline{S(0)}u_1+ {\bar{v}}_2 K_S(0,0) u_1= {\bar{v}}_2 u_1, \\{} & {} {\bar{v}}_2 K_S(0,0)v_1+{\bar{v}}_2 S(0) \overline{S(0)} v_1={\bar{v}}_2 v_1. \end{aligned}$$

Then by using

$$\begin{aligned} K_S(y_2,0)= & {} 1-S(y_2)\overline{S(0)}, \\ K_S(0, y_1)= & {} 1-S(0)\overline{S(y_1)}, \end{aligned}$$

we can write (7.12) in the following way

$$\begin{aligned}{} & {} {\bar{u}}_2 K_S(y_2,y_1)u_1- {\bar{u}}_2 u_1+ {\bar{u}}_2 S(y_2)\overline{S(0)} u_1- {\bar{u}}_2u_1+ {\bar{u}}_2S(0) \overline{S(y_1)}u_1+ {\bar{u}}_2 v_1\nonumber \\{} & {} \qquad - {\bar{u}}_2 S(y_2)\overline{S(0)} v_1\nonumber \\{} & {} \quad + {\bar{v}}_2 u_1-{\bar{v}}_2 S(0)\overline{S(y_1)} u_1- {\bar{u}}_2 S(y_2) \overline{S(0)}u_1-{\bar{u}}_2 S(0)\overline{S(y_1)} u_1\nonumber \\{} & {} \qquad +{\bar{u}}_2 S(y_2) \overline{S(y_1)}u_1- {\bar{u}}_2 S(y_2) \overline{S(0)}u_1\nonumber \\{} & {} \qquad + {\bar{u}}_2 S(y_2) \overline{S(0)} v_1+ {\bar{v}}_2 S(0) \overline{S(y_1)}u_1+ {\bar{v}}_2 v_1+ {\bar{u}}_2 u_1- {\bar{u}}_2v_1- {\bar{v}}_2 u_1\nonumber \\{} & {} \quad ={\bar{u}}_2 K_S(y_2,y_1)u_1- {\bar{u}}_2 u_1+ {\bar{u}}_2 S(y_2) \overline{S(y_1)} u_1 + \bar{v_2} v_1. \end{aligned}$$
(7.13)

Since (7.11) and (7.13) are equal we get that the relation R is an isometry.

Now, we show that the relation R has a dense domain. Let \(\omega _0 \in {\mathcal {H}}(S) \oplus {\mathbb {H}}\) be orthogonal to the domain of R, and let \(\omega _1\) be a generic element of the domain, where

$$\begin{aligned} \omega _0:= \begin{pmatrix} f_0\\ v_0 \end{pmatrix}, \qquad \omega _1:=\begin{pmatrix} \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*}{\mathcal {K}}(.,y)u\\ v \end{pmatrix}, \end{aligned}$$

with \(f_{0}:= \sqrt{\Gamma _S} h \in {\mathcal {H}}(S)\), \(h \in {\textbf{H}}_2({\mathbb {B}})\). If we first take \(u=0\) we get \(v_0=0\). If we now take \(v=0\), from the orthogonality of \(\omega _0\) and \( \omega _1\) we get

$$\begin{aligned} \langle \sqrt{\Gamma _S} h, \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} {\mathcal {K}}(., y)u \rangle _{{\mathcal {H}}(S)}=0. \end{aligned}$$
(7.14)

By formula (7.6), (5.7) and the reproducing kernel property of \( {\textbf{H}}_2({\mathbb {B}})\) we have

$$\begin{aligned} \langle \sqrt{\Gamma _S} h, \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} {\mathcal {K}}(., y)u \rangle _{{\mathcal {H}}(S)}= & {} \langle \sqrt{\Gamma _S}h, {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} {\mathcal {K}}(.,y)u \rangle _{{\textbf{H}}_2({\mathbb {B}})} \\ {}= & {} \langle {\mathcal {M}}_{{\mathcal {Q}}_1} \sqrt{\Gamma _S}h, {\mathcal {K}}(.,y)u \rangle _{{\textbf{H}}_2({\mathbb {B}})}\\ {}= & {} {\bar{u}}\left( {\mathcal {M}}_{{\mathcal {Q}}_1} \sqrt{\Gamma _S} h\right) (y)\\= & {} {\bar{u}} \left( {\mathcal {Q}}_1 \odot _{GCK} \sqrt{\Gamma _S} h\right) (y). \end{aligned}$$

By combining (7.14) with the previous computation we get

$$\begin{aligned} ({\mathcal {Q}}_1 \odot _{GCK} f_0)(x)=0. \end{aligned}$$

This implies that \(f_0(x)=0\). This concludes the proof. \(\square \)

Proposition 7.5

The relation R is the graph of a densely defined isometry. Moreover, its extension to all of \( {\mathcal {H}}(S) \oplus {\mathbb {H}}\) is of the form

$$\begin{aligned} \begin{pmatrix} A &{} B\\ C &{} D \end{pmatrix}^{*}: {\mathcal {H}}(S) \oplus {\mathbb {H}} \rightarrow {\mathcal {H}}(S) \oplus {\mathbb {H}}, \end{aligned}$$

where

$$\begin{aligned} (Af)(y):= (R_0f)(y), \end{aligned}$$
(7.15)
$$\begin{aligned} (Bv)(y):= {\left\{ \begin{array}{ll} {\mathcal {Q}}_1(y)^{-\odot _{GCK}} \odot _{GCK} \left( S(y)-S(0)\right) v, &{} y \ne 0,\\ S_1 v, &{} y=0, \end{array}\right. } \end{aligned}$$
$$\begin{aligned} Cf=f(0), \end{aligned}$$
(7.16)
$$\begin{aligned} Dv=S(0)v. \end{aligned}$$

Proof

First we prove that R is the graph of a densely defined isometry. By definition, the domain of R is the set of \(U \in {\mathcal {H}}(S) \oplus {\mathbb {H}}\) such that there exists \(V \in {\mathcal {H}}(S) \oplus {\mathbb {H}}\) with \((U,V) \in R\). By Lemma 7.4 we know that R has a dense domain. Thus we introduce a densely defined operator W such that \(WU=V\). Now, we assume that there exist \(V_1\) and \(V_2\) such that \(WU=V_1\) and \(WU=V_2\). If \((U,V_1) \in R\) and \((U,V_2) \in R\), then \((0, V_1-V_2) \in R\). Since by Lemma 7.4 the relation R is an isometry we get \( \Vert 0\Vert = \Vert V_1-V_2 \Vert \). This implies that \(V_1=V_2\). Therefore W is a densely defined isometry. As in the case of complex Hilbert spaces, it extends to an everywhere defined isometry.

Now, we compute the operators A, B, C and D. Let \(y \in {\mathbb {B}}\) and \(u \in {\mathbb {H}}\). Since \(A^{*}= {\hat{A}}\) we have

$$\begin{aligned} A^*{\left( \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*}{\mathcal {K}}(.,y)u\right) }=(K_S(.,y)- K_S(.,0))u. \end{aligned}$$

On one side, for \(g \in {\mathcal {H}}(S)\), by the reproducing kernel property we have

$$\begin{aligned} \langle A^{*} \Gamma _S R_0 {\mathcal {K}}(.,y), g\rangle _{{\mathcal {H}}(S)}= & {} \langle K_S(.,y)- K_S(.,0), g \rangle _{{\mathcal {H}}(S)}\nonumber \\= & {} \overline{g(y)}- \overline{g(0)}. \end{aligned}$$
(7.17)

On the other side we have that

$$\begin{aligned} \langle A^* \Gamma _S R_0 {\mathcal {K}}(.,y),g \rangle _{{\mathcal {H}}(S)}= \langle \Gamma _S R_0{\mathcal {K}}(.,y), Ag \rangle _{{\mathcal {H}}(S)}. \end{aligned}$$

Now, we set \(Ag= \sqrt{\Gamma _S}h\), with h being not unique. By formula (7.5) and the reproducing kernel property of the space \( {\textbf{H}}_2({\mathbb {B}})\) we have

$$\begin{aligned} \langle \Gamma _S R_0 {\mathcal {K}}(.,y), Ag \rangle _{{\mathcal {H}}(S)}= & {} \langle \Gamma _S R_0 {\mathcal {K}}(.,y), \sqrt{\Gamma _S}h \rangle _{{\mathcal {H}}(S)}\nonumber \\= & {} \langle R_0 {\mathcal {K}}(.,y), \sqrt{\Gamma _S}h \rangle _{{\textbf{H}}_2({\mathbb {B}})}\nonumber \\= & {} \langle {\mathcal {K}}(.,y), R_0^{*} Ag \rangle _{{\textbf{H}}_2({\mathbb {B}})}\nonumber \\= & {} \langle {\mathcal {K}}(.,y), {\mathcal {M}}_{{\mathcal {Q}}_1}Ag \rangle _{{\textbf{H}}_2({\mathbb {B}})}\nonumber \\= & {} \overline{{\mathcal {M}}_{{\mathcal {Q}}_1} Ag}(y). \end{aligned}$$
(7.18)

By putting together (7.17) and (7.18) we get

$$\begin{aligned} ({\mathcal {M}}_{{\mathcal {Q}}_1}Ag)(y)=g(y)-g(0). \end{aligned}$$

By formula (2.12) we get

$$\begin{aligned} (Ag)(y)= (R_0g)(y). \end{aligned}$$

Similarly, we compute \(Bv\), for \(v \in {\mathbb {H}}\). We have that

$$\begin{aligned} B^{*} \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} {\mathcal {K}}(.,y) u=\left( \overline{S(y)}- \overline{S(0)}\right) u. \end{aligned}$$

On one side we get

$$\begin{aligned} \langle v, B^{*} \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} {\mathcal {K}}(.,y)u \rangle _{{\mathbb {H}}}= {\bar{u}}\left( S(y)- S(0)\right) v. \end{aligned}$$
(7.19)

On the other side we have

$$\begin{aligned} \langle v, B^{*} \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} {\mathcal {K}}(.,y) u \rangle _{{\mathbb {H}}}= \langle Bv, \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^* {\mathcal {K}}(.,y) u \rangle _{{\mathcal {H}}(S)}. \end{aligned}$$

Now we set \(Bv:= \sqrt{\Gamma _S} h\). Therefore, by using formula (7.5) and the reproducing kernel of the space \( {\textbf{H}}_2({\mathbb {B}})\) we obtain

$$\begin{aligned} \langle \sqrt{\Gamma _S} h, \Gamma _S {\mathcal {M}}_{{\mathcal {Q}}_1}^{*} {\mathcal {K}}(.,y) u \rangle _{{\mathcal {H}}(S)}= & {} \langle \sqrt{\Gamma _S} h, {\mathcal {M}}_{{\mathcal {Q}}_1}^* {\mathcal {K}}(.,y) u \rangle _{{\textbf{H}}_2({\mathbb {B}})}\nonumber \\= & {} \langle {\mathcal {M}}_{{\mathcal {Q}}_1} \sqrt{\Gamma _S} h, {\mathcal {K}}(.,y) u \rangle _{{\textbf{H}}_2({\mathbb {B}})}\nonumber \\= & {} {\bar{u}} \left( {\mathcal {Q}}_1 \odot _{GCK} Bv\right) (y). \end{aligned}$$
(7.20)

By putting together formula (7.19) and (7.20) we obtain

$$\begin{aligned} (Bv)(y)= {\mathcal {Q}}_1(y)^{-\odot _{GCK}} \odot _{GCK} \left( S(y)-S(0)\right) v. \end{aligned}$$

Now we compute the operator C. To do this we note that

$$\begin{aligned} C^{*}u= {\mathcal {K}}_S(.,0) u, \end{aligned}$$

for every \(u \in {\mathbb {H}}\). Then, for \(f \in {\mathcal {H}}(S)\) we have

$$\begin{aligned} \langle C(f),u \rangle _{{\mathbb {H}}}= & {} \langle f,C^*u \rangle _{{\mathcal {H}}(S)}\\= & {} \langle f,K_S(.,0) u \rangle _{{\mathcal {H}}(S)}\\= & {} {\bar{u}}f(0). \end{aligned}$$

Hence we have \( C(f)=f(0)\). Finally, it is obvious that \(D=S(0)\). \(\square \)

Proof of Theorem 7.2

We observe that the pair (C, A) is closely outer-connected, see (7.15) and (7.16). A generic function \(f \in {\mathcal {H}}(S)\) can be written as the following power series

$$\begin{aligned} f(x)= \sum _{n=0}^\infty {\mathcal {Q}}_n(x) f_n. \end{aligned}$$

The coefficients of f are given by

$$\begin{aligned} f_n:= CA^n f, \qquad n=0,1,2, \ldots \end{aligned}$$

Then we obtain

$$\begin{aligned} f(x)=C \odot _{GCK}(I- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}} \odot _{GCK} f. \end{aligned}$$

Finally, applying this formula to Bv, where \(v \in {\mathbb {H}}\), we get

$$\begin{aligned} \left( S(x)-S(0)\right) v= C \odot _{GCK}(I- {\mathcal {Q}}_1(x)A)^{-\odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x)B)v. \end{aligned}$$

Now, we show the converse. We assume that the function S has the form (7.2) with coefficients defined in (7.3). First of all we show the following formula for x, \(y \in {\mathbb {B}}\)

$$\begin{aligned} 1-S(x)\overline{S(y)}=U(x)(U(y))^{*}-\left( {\mathcal {Q}}_1(x) \odot _{GCK} U(x)\right) \left( U(y)^{*} \odot _{GCK} \overline{{\mathcal {Q}}_1(y)}\right) ,\nonumber \\ \end{aligned}$$
(7.21)

where the function U is defined as

$$\begin{aligned} U(x)=\sum _{n=0}^\infty {\mathcal {Q}}_n(x)CA^n. \end{aligned}$$
(7.22)

Then we have

$$\begin{aligned} 1-S(x)\overline{S(y)}= & {} 1- \left( D +\sum _{n=1}^\infty {\mathcal {Q}}_n(x)CA^{n-1}B\right) \left( D +\sum _{m=1}^\infty {\mathcal {Q}}_m(y)CA^{m-1}B\right) ^{*}\nonumber \\= & {} 1-D D^{*}- \sum _{m=1}^\infty DB^{*} (A^{m-1})^{*} C^{*} \overline{{\mathcal {Q}}_m(y)}- \sum _{n=1}^\infty {\mathcal {Q}}_n(x) C A^{n-1}BD^{*}\nonumber \\{} & {} - \sum _{n,m=1}^\infty {\mathcal {Q}}_n(x)C(A^{n-1})BB^*(A^{m-1})^{*}C^{*} \overline{Q_m(y)}. \end{aligned}$$
(7.23)

Since the operator matrix (7.1) is coisometric we have that

$$\begin{aligned} {\left\{ \begin{array}{ll} I- DD^*=CC^*\\ DB^*=-CA^*\\ BD^{*}=-AC^{*}\\ BB^{*}=I-AA^{*}. \end{array}\right. } \end{aligned}$$

These imply that we can write formula (7.23) in the following way

$$\begin{aligned} 1-S(x)\overline{S(y)}= & {} CC^{*}+ \sum _{m=1}^\infty C(A^{m})^{*} C^* \overline{{\mathcal {Q}}_m(y)}+ \sum _{n=1} {\mathcal {Q}}_n(x) CA^{n}C^{*}\nonumber \\{} & {} - \sum _{m,n=1}^\infty {\mathcal {Q}}_n(x) CA^{n-1} (I-AA^{*}) (A^{m-1})^{*} C^{*} \overline{{\mathcal {Q}}_m(y)}.\nonumber \\ \end{aligned}$$
(7.24)

Now, we observe that

$$\begin{aligned} U(x)(U(y))^{*}= & {} \sum _{m,n=0}^\infty {\mathcal {Q}}_n(x)CA^n(A^*)^mC^* \overline{{\mathcal {Q}}_m(y)}\nonumber \\= & {} \sum _{n=0}^\infty {\mathcal {Q}}_n(x) CA^nC^*+\sum _{m=1,n=0}^\infty {\mathcal {Q}}_n(x)CA^n(A^*)^mC^* \overline{{\mathcal {Q}}_m(y)}\nonumber \\= & {} CC^{*}+ \sum _{n=1}^\infty {\mathcal {Q}}_n(x) CA^nC^*+ \sum _{m=1}^\infty C (A^m)^* C^* \overline{{\mathcal {Q}}_m(y)}\nonumber \\{} & {} +\sum _{m=1,n=1}^\infty {\mathcal {Q}}_n(x)CA^n(A^*)^mC^* \overline{{\mathcal {Q}}_m(y)}, \end{aligned}$$
(7.25)

and

$$\begin{aligned} \left( {\mathcal {Q}}_1(x) \odot _{GCK} U(x)\right) \left( U(y)^{*} \odot _{GCK} \overline{{\mathcal {Q}}_1(y)}\right)= & {} \sum _{m,n=0}^\infty {\mathcal {Q}}_{n+1}(x)CA^n (A^m)^{*} C^{*} \overline{{\mathcal {Q}}_{m+1}(y)}\nonumber \\= & {} \sum _{m,n=1}^\infty {\mathcal {Q}}_{n}(x)CA^{n-1} (A^{m-1})^{*} C^{*} \overline{{\mathcal {Q}}_{m}(y)}. \nonumber \\ \end{aligned}$$
(7.26)

By inserting (7.25) and (7.26) in (7.24) we get the expression (7.21). Now, from (7.21) we obtain that

$$\begin{aligned} K_{{\mathcal {S}}}(x,y)= & {} \sum _{n=0}^\infty {\mathcal {Q}}_n(x) \overline{{\mathcal {Q}}_n(y)}- \sum _{n=0}^\infty (S \odot _{GCK}{\mathcal {Q}}_n)(x) \overline{(S \odot _{GCK}{\mathcal {Q}}_n )(y)}\\= & {} 1- S(x) \overline{S(y)}+\sum _{n=1}^\infty {\mathcal {Q}}_n(x) \overline{{\mathcal {Q}}_n(y)}- \sum _{n=1}^\infty (S \odot _{GCK}{\mathcal {Q}}_n)(x) \overline{(S \odot _{GCK}{\mathcal {Q}}_n )(y)}\\= & {} U(x)(U(y))^{*}-\left( {\mathcal {Q}}_1(x) \odot _{GCK} U(x)\right) \left( U(y)^{*} \odot _{GCK} \overline{{\mathcal {Q}}_1(y)}\right) +\sum _{n=1}^\infty {\mathcal {Q}}_n(x) \overline{{\mathcal {Q}}_n(y)}\\{} & {} - \sum _{n=1}^\infty (S \odot _{GCK}{\mathcal {Q}}_n)(x) \overline{(S \odot _{GCK} {\mathcal {Q}}_n)(y)}. \end{aligned}$$

By multiplying formula (7.21) on the left by \( {\mathcal {Q}}_n(x)\) and on the right by \( \overline{{\mathcal {Q}}_n(y)}\), with respect to the generalized CK-product, we get

$$\begin{aligned} (S\odot _{GCK}{\mathcal {Q}}_n )(x)\left( \overline{S\odot _{GCK} {\mathcal {Q}}_n }\right) (y)= & {} {\mathcal {Q}}_n(x) \overline{{\mathcal {Q}}_n(y)}-\left( {\mathcal {Q}}_n(x)\odot _{GCK} U(x)\right) \\{} & {} \left( (U(y))^{*} \odot _{GCK} \overline{{\mathcal {Q}}_n(y)}\right) \\{} & {} +\left( {\mathcal {Q}}_{n+1}(x) \odot _{GCK} U(x)\right) \\{} & {} \left( U(y)^{*} \odot _{GCK} \overline{{\mathcal {Q}}_{n+1}(y)}\right) . \end{aligned}$$

This implies that

$$\begin{aligned} {\mathcal {K}}_S(x,y)= & {} U(x)(U(y))^{*}-\left( {\mathcal {Q}}_1(x) \odot _{GCK} U(x)\right) \left( U(y)^{*} \odot _{GCK} \overline{{\mathcal {Q}}_1(y)}\right) \nonumber \\{} & {} + \sum _{n=1}^\infty \left( {\mathcal {Q}}_n(x)\odot _{GCK} U(x)\right) \left( (U(y))^{*} \odot _{GCK} \overline{{\mathcal {Q}}_n(y)}\right) \nonumber \\{} & {} -\sum _{n=1}^\infty \left( {\mathcal {Q}}_{n+1}(x) \odot _{GCK} U(x)\right) \left( U(y)^{*} \odot _{GCK} \overline{{\mathcal {Q}}_{n+1}(y)}\right) \nonumber \\= & {} U(x)(U(y))^{*}+ \sum _{n=2}^\infty \left( {\mathcal {Q}}_n(x)\odot _{GCK} U(x)\right) \left( (U(y))^{*} \odot _{GCK} \overline{{\mathcal {Q}}_n(y)}\right) \nonumber \\{} & {} -\sum _{n=1}^\infty \left( {\mathcal {Q}}_{n+1}(x) \odot _{GCK} U(x)\right) \left( U(y)^{*} \odot _{GCK} \overline{{\mathcal {Q}}_{n+1}(y)}\right) \nonumber \\= & {} U(x)(U(y))^{*}. \end{aligned}$$
(7.27)

Therefore \( K_S(x,y)\) is positive definite in \( {\mathbb {B}}\), and by Definition 6.2 the function S is a Schur multiplier.

Now, we have to show the uniqueness claim. Let us consider two closely outer-connected coisometric realizations of S defined in the following way

$$\begin{aligned}{} & {} S_1:\begin{pmatrix} A_1 &{}&{} B_1\\ C_1 &{}&{} D_1 \end{pmatrix}: {\mathcal {H}}_1(S) \oplus {\mathbb {H}} \rightarrow {\mathcal {H}}_1(S)\oplus {\mathbb {H}} \\{} & {} S_2: \begin{pmatrix} A_2 &{}&{} B_2\\ C_2 &{}&{} D_2 \end{pmatrix}: {\mathcal {H}}_2(S) \oplus {\mathbb {H}} \rightarrow {\mathcal {H}}_2(S)\oplus {\mathbb {H}}, \end{aligned}$$

where \( {\mathcal {H}}_1(S)\) and \( {\mathcal {H}}_2(S)\) are right quaternionic Hilbert spaces. In order to show that \(S_1\) and \(S_2\) are equivalent we have to prove that there exists a unitary map \(W: {\mathcal {H}}_1(S) \rightarrow {\mathcal {H}}_2(S)\) such that the following diagram commutes

(Commutative diagram relating the operator matrices of \(S_1\) and \(S_2\) through \(W\).)

This means that we have to show the following equalities

$$\begin{aligned} {\left\{ \begin{array}{ll} WA_1=A_2W\\ WB_1=B_2\\ C_1=C_2W\\ D_1=D_2 \end{array}\right. } \end{aligned}$$
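
Equivalently, these four equalities say that \(W \oplus I_{{\mathbb {H}}}\) intertwines the two operator matrices:

$$\begin{aligned} (W \oplus I_{{\mathbb {H}}}) \begin{pmatrix} A_1 &{}&{} B_1\\ C_1 &{}&{} D_1 \end{pmatrix}= \begin{pmatrix} WA_1 &{}&{} WB_1\\ C_1 &{}&{} D_1 \end{pmatrix}= \begin{pmatrix} A_2W &{}&{} B_2\\ C_2W &{}&{} D_2 \end{pmatrix}= \begin{pmatrix} A_2 &{}&{} B_2\\ C_2 &{}&{} D_2 \end{pmatrix} (W \oplus I_{{\mathbb {H}}}). \end{aligned}$$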

The last relation is obvious, because by Proposition 7.5 we know that \(D_1=D_2=S(0)\). In order to show the other relations, we observe that by (7.27) we have

$$\begin{aligned} U_1(x)(U_1(y))^{*}= U_2(x)(U_2(y))^{*}, \end{aligned}$$

where \(U_1\) and \(U_2\) are defined as in (7.22). It follows that for any m, \(n \in {\mathbb {N}}_0\) we have

$$\begin{aligned} C_1A_1^n(A_1^m)^{*} C_1^{*}=C_2A_2^n(A_2^m)^{*} C_2^{*}. \end{aligned}$$

Since the pairs \((C_1,A_1)\) and \((C_2, A_2)\) are closely outer-connected, the relation

$$\begin{aligned} \left( (A_1^m)^{*} C_1^{*}u, (A_2^m)^{*} C_2^{*} u\right) , \qquad u \in {\mathbb {H}}, \quad m \in {\mathbb {N}}_0 \end{aligned}$$

is a densely defined isometric relation in \({\mathcal {H}}_1(S) \times {\mathcal {H}}_2(S)\) with dense range. Therefore it is the graph of a unitary map \(W: {\mathcal {H}}_1(S) \rightarrow {\mathcal {H}}_2(S)\) such that

$$\begin{aligned} W \left( (A_1^m)^{*} C_1^{*}u \right) =(A_2^m)^{*} C_2^{*} u, \qquad m \in {\mathbb {N}}_0, \quad u \in {\mathbb {H}}. \end{aligned}$$
(7.29)

Taking \(m=0\) in (7.29) gives \(WC_1^{*}u=C_2^{*}u\) for every \(u \in {\mathbb {H}}\); passing to adjoints and using that W is unitary, we get

$$\begin{aligned} C_1=C_2W. \end{aligned}$$
(7.30)

Using (7.29) once more we obtain

$$\begin{aligned} (WA_1^{*})\left( (A_1^{*})^m C_1^{*}u\right){} & {} =W(A_1^{*})^{m+1} C_1^{*}u=(A_2^{*})^{m+1} C_2^{*}u= A_2^{*} WW^{*} (A_2^{*})^m C_2^{*}u\\{} & {} =(A_2^{*} W)\left( (A_1^{*})^m C_1^{*}u\right) , \qquad m \in {\mathbb {N}}_0, \quad u \in {\mathbb {H}}. \end{aligned}$$

Since the pairs \((C_1, A_1)\) and \((C_2,A_2)\) are closely outer-connected, we deduce \(WA_1^{*}=A_2^{*}W\); taking adjoints and using again that W is unitary, we get

$$\begin{aligned} WA_1=A_2W \end{aligned}$$
(7.31)

Now, using (7.30) and (7.31) we get

$$\begin{aligned} S_n= & {} C_1 A_{1}^{n-1}B_1\\= & {} C_2 A_{2}^{n-1}B_2\\= & {} C_1 W^{*}A_{2}^{n-1}B_2\\= & {} C_1 A_{1}^{n-1}W^{*}B_2. \end{aligned}$$

Since the pair \((C_1,A_1)\) is closely outer-connected, this forces \(B_1=W^{*}B_2\), that is, \(WB_1=B_2\). This concludes the proof. \(\square \)

8 Blaschke product: through the GCK-extension

In complex analysis the Blaschke factor is defined as

$$\begin{aligned} b_a(z)= \left( \frac{a-z}{1-z {\bar{a}}} \right) \frac{{\bar{a}}}{|a|}, \qquad a \in {\mathbb {D}}. \end{aligned}$$

These kinds of functions are very important in the study of invariant subspaces and interpolation, see [33, 39]. In [7] Blaschke factors and an interpolation problem were studied in the slice hyperholomorphic setting.
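
We briefly recall two standard properties of the factor \(b_a\): it vanishes at \(z=a\), and it is unimodular on the unit circle. Indeed, for \(|z|=1\) one has

$$\begin{aligned} |1-z {\bar{a}}|=|z| \, |{\bar{z}}- {\bar{a}}|=|z-a|, \qquad \hbox {so that} \qquad |b_a(z)|= \frac{|a-z|}{|1-z {\bar{a}}|} \cdot \frac{|{\bar{a}}|}{|a|}=1. \end{aligned}$$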

Definition 8.1

Let \(a \in {\mathbb {H}}\), \(|a|<1\). The function

$$\begin{aligned} B_a(x)= (1-x {\bar{a}})^{-*}*(a-x) \frac{{\bar{a}}}{|a|}, \end{aligned}$$

is called a slice hyperholomorphic Blaschke factor at a.

Remark 8.2

By the definition of \(*\)-product we have that

$$\begin{aligned} (1- x{\bar{a}})^{-*}=(1- {\bar{x}}{\bar{a}} )(|x|^2 {\bar{a}}^2-2x_0{\bar{a}}+1)^{-1}. \end{aligned}$$
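
This identity can be checked directly: recalling that \((1- x{\bar{a}})^{-*}= \sum _{n=0}^\infty x^n {\bar{a}}^n\) and that every quaternion satisfies \(x^2-2x_0x+|x|^2=0\), we have

$$\begin{aligned} \left( \sum _{n=0}^\infty x^n {\bar{a}}^n\right) \left( 1-2x_0{\bar{a}}+|x|^2 {\bar{a}}^2\right) =1+(x-2x_0) {\bar{a}}+ \sum _{n=2}^\infty \left( x^n-2x_0x^{n-1}+|x|^2x^{n-2}\right) {\bar{a}}^n=1- {\bar{x}} {\bar{a}}, \end{aligned}$$

and it suffices to multiply both sides by \((|x|^2 {\bar{a}}^2-2x_0{\bar{a}}+1)^{-1}\) on the right.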

This implies that we can write the Blaschke factor at a as

$$\begin{aligned} B_a(x)= & {} \left( (1-x{\bar{a}})^{-*}*a-(1-x {\bar{a}})^{-*}*x\right) \frac{{\bar{a}}}{|a|}\nonumber \\= & {} \left[ (1- {\bar{x}} {\bar{a}} )(|x|^2 {\bar{a}}^2-2x_0{\bar{a}}+1)^{-1}a-(1- {\bar{x}} {\bar{a}} )(|x|^2 {\bar{a}}^2-2x_0{\bar{a}}+1)^{-1} *x\right] \frac{{\bar{a}}}{|a|}\nonumber \\= & {} \left[ (1- {\bar{x}} {\bar{a}} )(|x|^2 {\bar{a}}^2-2x_0{\bar{a}}+1)^{-1}a-(x- |x|^2 {\bar{a}} )(|x|^2 {\bar{a}}^2-2x_0{\bar{a}}+1)^{-1}\right] \frac{{\bar{a}}}{|a|}. \nonumber \\ \end{aligned}$$
(8.1)

As in the holomorphic case, also in the slice hyperholomorphic setting the Blaschke factor at a admits a series expansion at the origin.

Proposition 8.3

Let \(a \in {\mathbb {B}}\). Then it holds that

$$\begin{aligned} B_a(x)=|a|+ \sum _{n=0}^\infty x^{n+1} {\bar{a}}^{n+1} \left( |a|- \frac{1}{|a|}\right) . \end{aligned}$$
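
A proof can be obtained exactly as in the proof of Proposition 8.7 below, with \(x^n\) in place of \({\mathcal {Q}}_n(x)\): since \((1- x{\bar{a}})^{-*}= \sum _{n=0}^\infty x^n {\bar{a}}^n\), we have

$$\begin{aligned} B_a(x)= \left( \sum _{n=0}^\infty x^n {\bar{a}}^n\right) *(a- x) \frac{{\bar{a}}}{|a|}= \sum _{n=0}^\infty x^n {\bar{a}}^{n+1} \frac{a}{|a|}- \sum _{n=0}^\infty x^{n+1} {\bar{a}}^{n+1} \frac{1}{|a|}= |a|+ \sum _{n=0}^\infty x^{n+1} {\bar{a}}^{n+1} \left( |a|- \frac{1}{|a|}\right) . \end{aligned}$$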

A regular counterpart of the Blaschke factor is given in [16] for the quaternionic Arveson space. Precisely, it is given by

$$\begin{aligned} {\textbf{B}}_a(x){} & {} = \left( 1- \xi ^{\nu }(a) (\xi ^{\nu }(a))^*\right) ^{\frac{1}{2}} (1- \xi ^{\nu }(a) (\xi ^{\nu }(a))^*)^{-\odot _{CK}} \odot _{CK} (\xi ^\nu (x)- \xi ^\nu (a))\\{} & {} \quad (1- (\xi ^{\nu }(a))^* \xi ^\nu (a))^{- \frac{1}{2}}, \qquad a \in {\mathcal {E}}, \end{aligned}$$

where \( \xi ^\nu \) are the Fueter polynomials defined in (2.2). In this section we introduce and study the Blaschke products in the framework of the Clifford-Appell polynomials.

Definition 8.4

Let \(a \in {\mathbb {H}}\) and \(|a|<1\). The function

$$\begin{aligned} {\mathcal {B}}_a(x)=(1- {\mathcal {Q}}_1(x) {\bar{a}})^{-\odot _{GCK}} \odot _{GCK} (a- {\mathcal {Q}}_{1}(x)) \frac{{\bar{a}}}{|a|} \end{aligned}$$
(8.2)

is called Clifford-Appell-Blaschke factor at a.

This definition leads to the following result.

Proposition 8.5

Let \(a \in {\mathbb {H}}\) and \(|a|<1\). The Clifford-Appell-Blaschke factor \( {\mathcal {B}}_a\) is an axially regular function in \( {\mathbb {B}}\).

Remark 8.6

The Clifford-Appell-Blaschke factor can be obtained as a particular example of a Schur multiplier. Precisely, if we consider

$$\begin{aligned} \begin{pmatrix} A &{}&{} B\\ C &{}&{} D \end{pmatrix}=\begin{pmatrix} {\bar{a}} &{}&{} \sqrt{1-|a|^2}\\ \sqrt{1-|a|^2} &{}&{} -a \end{pmatrix}, \end{aligned}$$

where \(a \in {\mathbb {B}}\), by formula (7.4) we get

$$\begin{aligned} {\mathcal {B}}_a(x){} & {} =-a + (1- {\mathcal {Q}}_1(x) {\bar{a}})^{-\odot _{GCK}}\odot _{GCK}[{\mathcal {Q}}_1(x) (1-|a|^2)]\\{} & {} = (1- {\mathcal {Q}}_1(x) {\bar{a}})^{-\odot _{GCK}} \odot _{GCK} ({\mathcal {Q}}_1(x)-a). \end{aligned}$$
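
Note that, since \(\sqrt{1-|a|^2}\) is real and therefore commutes with quaternions, this coefficient matrix is unitary: a direct computation gives

$$\begin{aligned} \begin{pmatrix} {\bar{a}} &{}&{} \sqrt{1-|a|^2}\\ \sqrt{1-|a|^2} &{}&{} -a \end{pmatrix}\begin{pmatrix} a &{}&{} \sqrt{1-|a|^2}\\ \sqrt{1-|a|^2} &{}&{} -{\bar{a}} \end{pmatrix}=\begin{pmatrix} 1 &{}&{} 0\\ 0 &{}&{} 1 \end{pmatrix}, \end{aligned}$$

and similarly for the product in the reverse order. In particular, the associated realization is coisometric.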

It is interesting to note that our definition leads to a series expansion of the Clifford-Appell-Blaschke factor in terms of Clifford-Appell polynomials.

Proposition 8.7

Let a, \(x \in {\mathbb {B}}\). Then it holds that

$$\begin{aligned} {\mathcal {B}}_a(x)= |a|+ \sum _{n=0}^\infty {\mathcal {Q}}_{n+1}(x) {\bar{a}}^{n+1} \left( |a|- \frac{1}{|a|}\right) . \end{aligned}$$

Proof

We start by observing that

$$\begin{aligned} (1- {\mathcal {Q}}_1(x){\overline{a}})^{- \odot _{GCK}}= & {} GCK[(1- x_0{\bar{a}})^{-1}]\\= & {} GCK \left[ \sum _{n=0}^\infty x_0^n {\bar{a}}^n\right] \\= & {} \sum _{n=0}^\infty {\mathcal {Q}}_n(x) {\bar{a}}^n. \end{aligned}$$

By Definition 8.4 we get

$$\begin{aligned} {\mathcal {B}}_a(x)= & {} \left( \sum _{n=0}^\infty {\mathcal {Q}}_n(x) {\bar{a}}^n\right) \odot _{GCK} (a- {\mathcal {Q}}_1(x)) \frac{{\bar{a}}}{|a|}\\= & {} \sum _{n=0}^\infty \left( {\mathcal {Q}}_n(x) {\bar{a}}^n a- {\mathcal {Q}}_{n+1}(x) {\bar{a}}^n\right) \frac{{\bar{a}}}{|a|}\\= & {} \sum _{n=0}^\infty {\mathcal {Q}}_n(x) {\bar{a}}^{n+1} \frac{a}{|a|}- \sum _{n=0}^\infty {\mathcal {Q}}_{n+1}(x) {\bar{a}}^{n+1} \frac{1}{|a|}\\= & {} |a|+\sum _{n=1}^\infty {\mathcal {Q}}_n(x) {\bar{a}}^{n+1} \frac{a}{|a|}- \sum _{n=0}^\infty {\mathcal {Q}}_{n+1}(x) {\bar{a}}^{n+1} \frac{1}{|a|}\\= & {} |a|+ \sum _{n=0}^\infty {\mathcal {Q}}_{n+1}(x) {\bar{a}}^{n+1} \left( |a|- \frac{1}{|a|}\right) . \end{aligned}$$

This concludes the proof. \(\square \)
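
For instance, at a real point \(x=x_0 \in (-1,1)\) one has \({\mathcal {Q}}_n(x_0)=x_0^n\), and the series can be resummed:

$$\begin{aligned} {\mathcal {B}}_a(x_0)= |a|+ x_0{\bar{a}}(1-x_0{\bar{a}})^{-1} \left( |a|- \frac{1}{|a|}\right) = \left( |a|^2-x_0 {\bar{a}}\right) (1-x_0{\bar{a}})^{-1}\frac{1}{|a|}= (a-x_0)(1-x_0{\bar{a}})^{-1} \frac{{\bar{a}}}{|a|}, \end{aligned}$$

so that on the real axis the Clifford-Appell-Blaschke factor coincides with the slice hyperholomorphic Blaschke factor of Definition 8.1.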

The result above implies the following.

Theorem 8.8

Let \(a \in {\mathbb {H}}\), \(|a|<1\). Then the Clifford-Appell-Blaschke factor \({\mathcal {B}}_a\) maps the unit ball \({\mathbb {B}}\) into itself.

Proof

We have to show that if \(|x| <1\) then \(| {\mathcal {B}}_a(x)|<1\). By Proposition 8.7 and the fact that \( | {\mathcal {Q}}_n(x)| < |x|^n\), we have

$$\begin{aligned} | {\mathcal {B}}_a(x)|< & {} |a|+ \sum _{n=0}^\infty | {\mathcal {Q}}_{n+1}(x)| |a^{n+1}| \left( |a|+ \frac{1}{|a|}\right) \\< & {} |a| + |x| |a| \sum _{n=0}^\infty |x|^n |a|^n \left( |a|+ \frac{1}{|a|}\right) \\= & {} |a|+ \frac{|xa| (1+|a|^2)}{|a|(1-|xa|)}\\= & {} \frac{|a|+|x|}{1-|xa|}. \end{aligned}$$

To prove that \(| {\mathcal {B}}_a(x)| <1\) we have to prove that \(|a|+|x| <1-|a||x|\), which is equivalent to \(|a|+|x| <1+|a||x|\). Taking the square we get

$$\begin{aligned} (|x|^2-1) (1-|a|^2) <0. \end{aligned}$$

The previous inequality follows from \(|x|<1\) and \(|a| <1\). \(\square \)

Theorem 8.9

Let \( {\mathcal {B}}_a\) be a Clifford-Appell-Blaschke factor. The operator

$$\begin{aligned} {\mathcal {M}}_a: f \mapsto {\mathcal {B}}_a \odot _{GCK} f \end{aligned}$$

is an isometry from \( {\textbf{H}}_2({\mathbb {B}})\) into itself.

Proof

We start by considering the functions \(f(x)= {\mathcal {Q}}_u(x) h\) and \(g(x)= {\mathcal {Q}}_v(x) k\), where u, \(v \in {\mathbb {N}}_0\) and h, \(k \in {\mathbb {H}}\). We prove

$$\begin{aligned} \langle {\mathcal {B}}_a \odot _{GCK} f, {\mathcal {B}}_a \odot _{GCK} g \rangle _{{\textbf{H}}_2({\mathbb {B}})}= \delta _{uv} {\bar{k}}h. \end{aligned}$$
(8.3)

By Proposition 8.7 and using f and g defined as above, we have

$$\begin{aligned} ({\mathcal {B}}_a \odot _{GCK} f)(x)={\mathcal {Q}}_u(x)h|a|+ \sum _{n=0}^\infty {\mathcal {Q}}_{n+1+u}(x) {\bar{a}}^{n+1} \left( |a|- \frac{1}{|a|}\right) h \end{aligned}$$

and

$$\begin{aligned} ({\mathcal {B}}_a \odot _{GCK} g)(x)={\mathcal {Q}}_v(x)k|a|+ \sum _{n=0}^\infty {\mathcal {Q}}_{n+1+v}(x) {\bar{a}}^{n+1} \left( |a|- \frac{1}{|a|}\right) k. \end{aligned}$$

We begin by considering the case \(u=v\); we have

$$\begin{aligned} \langle {\mathcal {B}}_a \odot _{GCK} f, {\mathcal {B}}_a \odot _{GCK} g \rangle _{{\textbf{H}}_2({\mathbb {B}})}= & {} {\bar{k}} h \left( |a|^2+\sum _{n=0}^\infty |a|^{2n+2} \left( |a|- \frac{1}{|a|}\right) ^2\right) \\= & {} {\bar{k}} h \left( |a|^2+ \frac{|a|^2}{1-|a|^2}\left( |a|- \frac{1}{|a|}\right) ^2\right) \\= & {} {\bar{k}} h\\= & {} \langle f,g \rangle _{{\textbf{H}}_2({\mathbb {B}})}. \end{aligned}$$
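
The passage from the second to the third line uses the identity

$$\begin{aligned} \frac{|a|^2}{1-|a|^2}\left( |a|- \frac{1}{|a|}\right) ^2= \frac{|a|^2}{1-|a|^2}\cdot \frac{(1-|a|^2)^2}{|a|^2}=1-|a|^2, \end{aligned}$$

so that the quantity in brackets equals \(|a|^2+1-|a|^2=1\).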

Now we consider the case \(u < v\). We have that

$$\begin{aligned} \langle {\mathcal {Q}}_u(x) h|a|, {\mathcal {Q}}_v(x) k |a|\rangle _{{\textbf{H}}_2({\mathbb {B}})}=0 \end{aligned}$$

and

$$\begin{aligned} \left\langle {\mathcal {Q}}_u(x) h|a|, \sum _{n=0}^\infty {\mathcal {Q}}_{n+1+v}(x) {\bar{a}}^{n+1} \left( |a|- \frac{1}{|a|}\right) k\right\rangle _{{\textbf{H}}_2({\mathbb {B}})}=0. \end{aligned}$$

It follows that

$$\begin{aligned}{} & {} \langle {\mathcal {B}}_a \odot _{GCK} f, {\mathcal {B}}_a \odot _{GCK} g \rangle _{{\textbf{H}}_2({\mathbb {B}})}\\= & {} \left\langle \sum _{n=0}^\infty {\mathcal {Q}}_{n+1+u}(x) {\bar{a}}^{n+1} \left( |a| - \frac{1}{|a|}\right) h, {\mathcal {Q}}_{v}(x)|a| k \right\rangle \\{} & {} + \left\langle \sum _{n=0}^\infty {\mathcal {Q}}_{n+1+u}(x) {\bar{a}}^{n+1} \left( |a| - \frac{1}{|a|}\right) h, \sum _{m=0}^\infty {\mathcal {Q}}_{m+1+v}(x) {\bar{a}}^{m+1} \left( |a| - \frac{1}{|a|}\right) k \right\rangle \\= & {} |a| {\bar{k}} {\bar{a}}^{v-u} \left( |a|- \frac{1}{|a|}\right) h\\{} & {} +\left\langle \sum _{m=0}^\infty {\mathcal {Q}}_{m+1+v}(x) {\bar{a}}^{m+1+v-u} \left( |a| - \frac{1}{|a|}\right) h, \sum _{m=0}^\infty {\mathcal {Q}}_{m+1+v}(x) {\bar{a}}^{m+1} \left( |a| - \frac{1}{|a|}\right) k \right\rangle \\= & {} |a| {\bar{k}} {\bar{a}}^{v-u} \left( |a|- \frac{1}{|a|}\right) h+ {\bar{k}} \left( |a|- \frac{1}{|a|}\right) ^2 {\bar{a}}^{v-u} \sum _{m=0}^\infty |a|^{2m+2} h\\= & {} |a| {\bar{k}} {\bar{a}}^{v-u} \left( |a|- \frac{1}{|a|}\right) h+ {\bar{k}} \left( |a|- \frac{1}{|a|}\right) ^2 {\bar{a}}^{v-u} \frac{|a|^2}{1-|a|^2} h\\= & {} 0\\= & {} \langle f,g \rangle _{{\textbf{H}}_2({\mathbb {B}})}. \end{aligned}$$
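
The final cancellation follows from the identities

$$\begin{aligned} |a|\left( |a|- \frac{1}{|a|}\right) =|a|^2-1 \qquad \hbox {and} \qquad \left( |a|- \frac{1}{|a|}\right) ^2 \frac{|a|^2}{1-|a|^2}=1-|a|^2. \end{aligned}$$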

The case \(v<u\) follows by similar arguments. By linearity and continuity, for f, \(g \in {\textbf{H}}_2({\mathbb {B}})\) we get

$$\begin{aligned} \langle {\mathcal {B}}_a \odot _{GCK} f, {\mathcal {B}}_a \odot _{GCK} g \rangle _{{\textbf{H}}_2({\mathbb {B}})}=\langle f,g \rangle _{{\textbf{H}}_2({\mathbb {B}})}. \end{aligned}$$

This concludes the proof. \(\square \)

9 Blaschke factor through the Fueter map

Another way to define the Blaschke factor in the regular setting is to apply the Fueter map to the slice hyperholomorphic Blaschke factor.

Definition 9.1

Let \(a \in {\mathbb {H}}\), \(|a|<1\). Let \(B_a(x)\) be the slice-hyperholomorphic Blaschke factor at a. The Fueter-Blaschke factor at a is defined as \( \Delta B_a(x)=\breve{B}_a(x)\).

Theorem 9.2

Let \(a \in {\mathbb {H}}\) and \(|a|<1\). Then the Fueter-Blaschke factor can be written as

$$\begin{aligned} \breve{B}_a(x)= \Delta B_a(x)=4(1- {\bar{x}} {\bar{a}}) (|x|^2 {\bar{a}}^2+1-2x_0 {\bar{a}})^{-2} (1-|a|^2) \frac{{\bar{a}}^2}{|a|}. \end{aligned}$$

Proof

We apply the Laplace operator in four real variables to the slice hyperholomorphic Blaschke factor, see (8.1). By formula (3.3) with \(c=1\) we get

$$\begin{aligned}{} & {} \Delta [(x- |x|^2 {\bar{a}} )Q_x({\bar{a}})^{-1}]=-4(1- {\bar{x}}{\bar{a}})Q_x({\bar{a}})^{-2} {\bar{a}}, \qquad \nonumber \\{} & {} Q_x({\bar{a}}):=|x|^2 {\bar{a}}^2-2x_0{\bar{a}}+1. \end{aligned}$$
(9.1)

Now, we have to compute

$$\begin{aligned} \Delta [(1- {\bar{x}} {\bar{a}})Q_x({\bar{a}})^{-1}]. \end{aligned}$$

We set

$$\begin{aligned} G(x):= (1- {\bar{x}} {\bar{a}})Q_x({\bar{a}})^{-1}. \end{aligned}$$

We start by differentiating G(x) with respect to \(x_0\); we get

$$\begin{aligned} \frac{\partial G(x)}{\partial x_0}=- {\bar{a}} Q_x({\bar{a}})^{-1}-(1-{\bar{x}} {\bar{a}}) Q_x({\bar{a}})^{-2} (2x_0{\bar{a}}^2-2{\bar{a}}), \end{aligned}$$

and

$$\begin{aligned} \frac{\partial ^2 G(x)}{\partial x_0^2}= & {} 2 Q_x({\bar{a}})^{-2}(2x_0 {\bar{a}}^2-2{\bar{a}}){\bar{a}}+2(1- {\bar{x}} {\bar{a}})Q_x({\bar{a}})^{-3} (2x_0 {\bar{a}}^2-2{\bar{a}})^2\\{} & {} -2(1-{\bar{x}} {\bar{a}})Q_x({\bar{a}})^{-2} {\bar{a}}^2. \end{aligned}$$

Now we perform the analogous computations with respect to the variables \(x_i\), \(1 \le i \le 3\); we get

$$\begin{aligned} \frac{\partial G(x)}{\partial x_i}=e_i {\bar{a}} Q_x({\bar{a}})^{-1}-(1-{\bar{x}}{\bar{a}})Q_x({\bar{a}})^{-2}2x_i {\bar{a}}^2, \end{aligned}$$

and

$$\begin{aligned} \frac{\partial ^2 G(x)}{\partial x_i^2}=-4x_i e_i Q_x({\bar{a}})^{-2} {\bar{a}}^3+8(1-{\bar{x}}{\bar{a}})Q_x({\bar{a}})^{-3}{\bar{a}}^4 x_i^2-2 (1- {\bar{x}} {\bar{a}})Q_x({\bar{a}})^{-2} {\bar{a}}^2. \end{aligned}$$

These computations imply that

$$\begin{aligned} \Delta G(x)= & {} \left( \frac{\partial ^2}{\partial x_0^2}+ \sum _{i=1}^3 \frac{\partial ^2}{\partial x_i^2}\right) G(x)\\= & {} 4 x_0 Q_x({\bar{a}})^{-2} {\bar{a}}^3-4 Q_x({\bar{a}})^{-2}{\bar{a}}^2 +2(1- {\bar{x}} {\bar{a}})Q_x({\bar{a}})^{-3}(4x_0^2 {\bar{a}}^4+4{\bar{a}}^2-8x_0{\bar{a}}^3)\\{} & {} -2(1- {\bar{x}} {\bar{a}})Q_x({\bar{a}})^{-2} {\bar{a}}^2 -4 {\underline{x}} Q_x({\bar{a}})^{-2} {\bar{a}}^3+8(1- {\bar{x}} {\bar{a}})Q_x({\bar{a}})^{-3} | {\underline{x}}|^2 {\bar{a}}^4\\{} & {} -6 (1- {\bar{x}} {\bar{a}}) Q_x({\bar{a}})^{-2} {\bar{a}}^2\\= & {} 4 {\bar{x}} Q_x({\bar{a}})^{-2} {\bar{a}}^3-4 Q_x({\bar{a}})^{-2} {\bar{a}}^2-8(1- {\bar{x}} {\bar{a}})Q_x({\bar{a}})^{-2} {\bar{a}}^2+8|x|^2(1- {\bar{x}} {\bar{a}})Q_x({\bar{a}})^{-3} {\bar{a}}^4\\{} & {} +8(1-{\bar{x}} {\bar{a}}) Q_x({\bar{a}})^{-3} {\bar{a}}^2-16 (1- {\bar{x}} {\bar{a}}) x_0 Q_x({\bar{a}})^{-3} {\bar{a}}^3\\= & {} -12 (1- {\bar{x}} {\bar{a}}) Q_x({\bar{a}})^{-2} {\bar{a}}^2+8(1- {\bar{x}} {\bar{a}}) (|x|^2 {\bar{a}}^2-2 x_0 {\bar{a}}+1) Q_x({\bar{a}})^{-3} {\bar{a}}^2\\= & {} -12 (1- {\bar{x}} {\bar{a}}) Q_x({\bar{a}})^{-2} {\bar{a}}^2+8(1- {\bar{x}} {\bar{a}}) Q_x({\bar{a}})^{-2} {\bar{a}}^2\\= & {} -4(1- {\bar{x}} {\bar{a}}) Q_x({\bar{a}})^{-2} {\bar{a}}^2. \end{aligned}$$

Therefore we have

$$\begin{aligned} \Delta G(x) =-4(1- {\bar{x}} {\bar{a}}) Q_x({\bar{a}})^{-2} {\bar{a}}^2. \end{aligned}$$
(9.2)

Finally, by putting together (9.1) and (9.2) we have

$$\begin{aligned} \breve{B}_a(x)= & {} \Delta B_a(x)\\= & {} 4 \left[ -(1- {\bar{x}} {\bar{a}}) Q_x({\bar{a}})^{-2} {\bar{a}}^2a+(1-{\bar{x}}{\bar{a}}) Q_x({\bar{a}})^{-2}{\bar{a}}\right] \frac{{\bar{a}}}{|a|}\\= & {} 4(1- {\bar{x}} {\bar{a}}) Q_x({\bar{a}})^{-2} {\bar{a}}\left( 1-|a|^2\right) \frac{{\bar{a}}}{|a|}\\= & {} 4(1- {\bar{x}} {\bar{a}}) Q_x({\bar{a}})^{-2} (1- |a|^2) \frac{{\bar{a}}^2}{|a|}. \end{aligned}$$

\(\square \)

Theorem 9.3

Let \(a \in {\mathbb {H}}\). Then we have

$$\begin{aligned} \breve{B}_a(x)=-2 \sum _{n=0}^\infty (n+1)(n+2) {\mathcal {Q}}_n(x) {\bar{a}}^{n+2} \left( |a|- \frac{1}{|a|}\right) . \end{aligned}$$

Proof

By Proposition 8.3 and the fact that \(\Delta (x^n)=-2n (n-1){\mathcal {Q}}_{n-2}(x)\) for \(n\ge 2\) we get

$$\begin{aligned} \breve{B}_a(x)= & {} \Delta B_a(x)\\= & {} \sum _{n=1}^\infty \Delta (x^{n+1}) {\bar{a}}^{n+1} \left( |a|-\frac{1}{|a|}\right) \\= & {} -2 \sum _{n=1}^\infty (n+1)n {\mathcal {Q}}_{n-1}(x) {\bar{a}}^{n+1} \left( |a|- \frac{1}{|a|}\right) \\= & {} -2 \sum _{n=0}^\infty (n+1)(n+2) {\mathcal {Q}}_{n}(x) {\bar{a}}^{n+2} \left( |a|- \frac{1}{|a|}\right) . \end{aligned}$$

\(\square \)
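
As a consistency check, evaluating at \(x=0\) (where \({\bar{x}}=0\), \(Q_x({\bar{a}})=1\) and \({\mathcal {Q}}_n(0)=0\) for \(n\ge 1\)), both Theorem 9.2 and the expansion above give the same value:

$$\begin{aligned} \breve{B}_a(0)=4(1-|a|^2) \frac{{\bar{a}}^2}{|a|}=-4 {\bar{a}}^2\left( |a|- \frac{1}{|a|}\right) . \end{aligned}$$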

Theorem 9.4

Let \(a \in {\mathbb {H}}\) and \(|a|<1\). Then the Fueter-Blaschke factor \(\breve{B}_a(x)\) satisfies the following properties:

  1.

    it maps the unit ball \( {\mathbb {B}}\) into itself.

  2.

    it has a zero at \(x= \frac{{\bar{a}}}{|a|^2}\).

Proof

  1.

    We have to show that if \(|x|<1\) then \(|\breve{B}_a(x)|<1\). By Theorem 9.3 and the fact that \(| {\mathcal {Q}}_n(x)| < |x|^n \) we get

    $$\begin{aligned} |\breve{B}_a(x)|< & {} 2 \sum _{n=1}^\infty (n+1)(n+2) | {\mathcal {Q}}_n(x)| |a|^{n+2} \left( |a|+ \frac{1}{|a|}\right) \\< & {} |a|^2\sum _{n=1}^\infty (n+1)(n+2) |xa|^n \left( |a|+ \frac{1}{|a|}\right) . \end{aligned}$$

    Now from the fact that \( \sum _{n=1}^\infty n^2|xa|^n=- \frac{|xa|(|xa|+1)}{(|ax|-1)^3}\) and \( \sum _{n=1}^\infty n|xa|^n= \frac{|xa|}{(|ax|-1)^2}\) we get

    $$\begin{aligned}{} & {} |a|^2\sum _{n=1}^\infty (n+1)(n+2) |xa|^n \left( |a|+ \frac{1}{|a|}\right) \\{} & {} =\left( - \frac{|a|^2|xa|(1+|xa|)}{(|xa|-1)^3}+3 \frac{|a|^2 |ax|}{(|ax|-1)^2}-2 \frac{|a|^2|xa|}{(|xa|-1)}\right) \left( \frac{|a|^2+1}{|a|}\right) \\{} & {} = \frac{2|x||a|^2(1+|a|^2)(3|x||a|-3-|x|^2|a|^2)}{(|xa|-1)^3}. \end{aligned}$$

    Now, since x, \(a \in {\mathbb {B}}\) we get

    $$\begin{aligned} \frac{2|x||a|^2(1+|a|^2)(3|x||a|-3-|x|^2|a|^2)}{(|xa|-1)^3}< \frac{2(1+|a|^2)|x|^2|a|^2}{(1-|x| |a|)^3}. \end{aligned}$$

    To finish the proof we have to show that

    $$\begin{aligned} \frac{2(1+|a|^2)|x|^2|a|^2}{(1-|x| |a|)^3}<1. \end{aligned}$$

    Since \( \frac{1}{1+|a|^2}<1\) we have to prove that

    $$\begin{aligned} \frac{2|x|^2 |a|^2}{(1-|x||a|)^3} <1. \end{aligned}$$

    This is equivalent to showing the following inequality

    $$\begin{aligned} 3|x| |a| < 1+|x|^3|a|^3+|x|^2|a|^2. \end{aligned}$$

    The previous inequality is verified for all x, \(a \in {\mathbb {B}}\).

  2.

    By Theorem 9.2, to study the zeros of the function \( \breve{B}_a\) we need to study the zeros of the polynomial \(1- {\bar{x}} {\bar{a}}\). These are given by \({\bar{x}}= {\bar{a}}^{-1}= \frac{a}{|a|^2}\), that is, \(x= \frac{{\bar{a}}}{|a|^2}\).

\(\square \)

Remark 9.5

The Fueter-Blaschke factor at a does not satisfy an isometry property like the one shown in Theorem 8.9.