1 Introduction

The algebraic treatment of systems of linear partial differential equations with constant coefficients that was introduced mostly by Palamodov in [13], and to a lesser degree by Ehrenpreis in [8], remained for many years an important theoretical tool that found relatively few applications to the study of specific systems. In a series of papers that began as joint work with Carlos Berenstein (see [1]), the authors showed how to use Palamodov’s ideas to describe and develop a rather powerful theory of functions that satisfy the Cauchy–Fueter system in several variables. The theory of these functions, usually referred to as Cauchy–Fueter regular functions, is a very appropriate analog, in the quaternionic domain, of the theory of several complex variables. The reader interested in these developments can find a full description of this theory in [3], where other systems of differential equations of interest are examined as well.

What made these advances possible was the (at the time relatively recent) development of the theory of Gröbner bases, as well as the introduction of computer packages such as CoCoA, that allowed explicit calculations of the otherwise unwieldy modules that naturally appear when studying the syzygies of the systems under scrutiny. Thus, a new powerful tool became available for the algebraic study of systems of linear partial differential equations with constant coefficients, and the authors analyzed many important situations from this novel point of view, including the Dirac system, [3], as well as holomorphic functions on bicomplex numbers, [5, 6].

Quite recently, while translating Michele Sce’s works in hypercomplex analysis, see [7], we realized that the cases of quaternions, or of bicomplex numbers, were only the tip of the iceberg, and that it was possible to consider other generalizations by looking at large classes of high-dimensional algebras, in which the notions of “holomorphy” or “analyticity” could be understood in a deeper sense than we had originally imagined. From a purely historical point of view, it is worth noting that, at the turn of the last century, mathematicians were interested in generalizing the theory of complex analysis to higher dimensions. In so doing, some researchers took the direction of studying several complex variables, while others went on to the study of analyticity in higher dimensional algebras, a direction that was intensely pursued for several decades.

In this second case, there were two fundamental questions. The first was to classify all the algebras of a given dimension, a task to which the Italian school of algebraists dedicated significant efforts (see e.g. [21]), and the second was to understand how to generalize the notion of holomorphy from the complex case to the case of other algebras. As it turns out, there were essentially two different ways to generalize analyticity. On one hand, one could define the notion of total differentiability that expresses (as we will see later) the fact that a function admits derivatives in the traditional sense of limit of the difference quotient with increment taken in the algebra. On the other hand, one could define a notion of monogenicity, that represents the best generalization of holomorphicity in the sense of the Cauchy–Riemann system.

The Italian school, whose efforts culminated in Sce’s works in the fifties, clearly understood these different approaches, and described what happens in a large class of algebras, many of which remain of great interest today.

In this paper we recall these two notions, and we not only discuss what they mean in the very well known cases of complex and quaternionic variables, but also use them to rethink the notion of holomorphicity in the cases (already studied at least in some of their aspects) of bicomplex or hyperbolic variables. In addition to identifying what the different notions of holomorphicity mean in these cases, we use the algebraic machinery mentioned above to study some less evident properties of these functions. In our final section we tackle two algebras that played a relevant role in the work of the Italian school, but that have never been fully investigated. They are known, in accordance with the classification provided by Scorza in [21], as algebras LXXIX and LXXXI, and we offer a treatment of holomorphicity in those cases.

2 Totally Derivable Functions and Monogenic Functions

The notion of totally derivable functions for functions defined in an algebra of hypercomplex numbers was introduced in the mid-thirties by N. Spampinato, see [26], generalizing what G. Scorza Dragoni had done for bicomplex numbers in [22], and what L. Sobrero had done in the case of bidual numbers, see [25]. In the sequel, we denote by \(\mathscr {A}\) a real or complex algebra, associative, with unit, with basis \(u_1,\ldots ,u_n\), and we set \(u=(u_1,\ldots ,u_n)\).

The following definition is well known:

Definition 2.1

The algebras \(\mathscr {A}'\), \(\mathscr {A}''\) are the first and second regular representation of \(\mathscr {A}\) if their elements are, respectively, order n matrices \(X'\), \(X''\) defined, for any \(x=x_1u_1+\cdots +x_nu_n\in \mathscr {A}\), by the relations

$$\begin{aligned} xu = uX' \end{aligned}$$
(1)
$$\begin{aligned} ux = u(X'')^T. \end{aligned}$$
(2)

To understand the meaning of the definition, consider a function \(y:\, \mathscr {A}\rightarrow \mathscr {A}\) and denote by \(\underline{x}=(x_1,\ldots ,x_n)\), \(\underline{y} =(y_1,\ldots ,y_n)\) the coordinates of x, y, respectively, with respect to the given basis, i.e. \(x=\underline{x} u^T\), \(y=\underline{y} u^T\). The definition of total derivability is as follows:

Definition 2.2

If y is derivable, that is, all the components of \(\underline{y}\) are derivable with respect to the components of \(\underline{x}\), we say that y is right (resp. left) totally derivable if the Jacobian dy/dx belongs to \(\mathscr {A}'\) (resp. the transpose of the Jacobian belongs to \(\mathscr {A}''\)).

Remark 2.3

The notion of right or left total derivability is modeled on the notion of right or left differentiability, in the standard sense. To see this, consider a function y with values in \(\mathscr {A}\). Using the notation above, we can write:

$$\begin{aligned} y(x)=y_1(x)u_1+\cdots +y_n(x)u_n, \end{aligned}$$

where \(x=x_1u_1+\cdots +x_nu_n\). Note that \(x_i\), \(y_i\), \(i=1,\ldots , n\) are real (or complex) and x varies in an open set of \(\mathscr {A}\) when we identify \(\mathscr {A}\) with \(\mathbb {R}^n\) (or \(\mathbb {C}^n\)). Let the functions \(y_\ell \) admit derivatives with respect to \(x_i\) and set

$$\begin{aligned} dx=dx_1u_1+\cdots +dx_nu_n, \qquad dy=dy_1u_1+\cdots +dy_nu_n, \end{aligned}$$

with

$$\begin{aligned} dy_\ell =\dfrac{\partial y_\ell }{\partial x_1}dx_1+\cdots +\dfrac{\partial y_\ell }{\partial x_n}dx_n,\qquad \ell =1,\ldots ,n. \end{aligned}$$

Then the function y is left differentiable or totally derivable on the left if there exists a function z(x) such that

$$\begin{aligned} dy=dx\, z(x), \end{aligned}$$

y is right differentiable or totally derivable on the right if there exists a function z(x) such that

$$\begin{aligned} dy=z(x)\, dx, \end{aligned}$$
(3)

for every dx. By setting \(z(x)=z_1(x)u_1+\cdots +z_n(x)u_n\) and

$$\begin{aligned} u_iu_j =\sum _{\ell =1}^n \gamma _{ij\ell } u_\ell , \end{aligned}$$

then (3) becomes

$$\begin{aligned} \sum _{\ell =1}^n dy_\ell u_\ell = \sum _{i,j,\ell =1}^n \gamma _{ij\ell }z_i dx_j u_\ell , \end{aligned}$$

so that

$$\begin{aligned} dy_\ell =\sum _{i,j=1}^n \gamma _{ij\ell }z_i dx_j, \qquad \ell =1,\ldots , n. \end{aligned}$$

Since \(dy_\ell =\sum _{j=1}^n \dfrac{\partial y_\ell }{\partial x_j}\, dx_j\) and the \(dx_j\) are independent, we get

$$\begin{aligned} \sum _{i=1}^n \gamma _{ij\ell }z_i=\dfrac{\partial y_\ell }{\partial x_j}, \quad j,\ell =1,\ldots , n. \end{aligned}$$
(4)

We deduce that total derivability on the right (3) is equivalent to (4), and this latter expression is equivalent to the fact that the Jacobian matrix \((\partial y_\ell /\partial x_j)\) belongs to \(\mathscr {A}'\).
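To make this concrete, here is a minimal sketch (assuming SymPy; the variable and helper names are ours) that instantiates condition (4) for the algebra \(\mathbb {C}\) with basis \(u_1=1\), \(u_2=i\): eliminating the components of z(x) from (4) recovers the Cauchy–Riemann system.

```python
import sympy as sp

# Structure constants gamma[i][j][l] of C over the basis (u1, u2) = (1, i):
# u_i u_j = sum_l gamma[i][j][l] u_l
n = 2
gamma = [[[0]*n for _ in range(n)] for _ in range(n)]
gamma[0][0][0] = 1    # 1*1 = 1
gamma[0][1][1] = 1    # 1*i = i
gamma[1][0][1] = 1    # i*1 = i
gamma[1][1][0] = -1   # i*i = -1

x1, x2 = sp.symbols('x1 x2', real=True)
xs = (x1, x2)
y = [sp.Function(f'y{l+1}')(x1, x2) for l in range(n)]
z = sp.symbols('z1 z2')

# condition (4): sum_i gamma[i][j][l] z_i = d y_l / d x_j
eqs = [sp.Eq(sum(gamma[i][j][l]*z[i] for i in range(n)), sp.diff(y[l], xs[j]))
       for j in range(n) for l in range(n)]

# the equations with j = 1 give z_l = d y_l / d x_1; substituting into
# the remaining ones leaves exactly the Cauchy-Riemann system
sol = {z[0]: sp.diff(y[0], x1), z[1]: sp.diff(y[1], x1)}
for eq in eqs[n:]:
    print(eq.subs(sol))
```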

As recalled by Sce in [15], see also [7], the notion of total derivability does not depend on the choice of the basis. Indeed, at least when the elements of a basis of the algebra can be chosen to be invertible, this notion corresponds to the derivability of the function y with respect to the hypercomplex variable x, as shown in the next result (probably well known, but for which we could not find any reference).

Proposition 2.4

Let \(\mathscr {A}\) be an algebra with unit, and let \(\{u_1,\ldots , u_n\}\) be a basis for \(\mathscr {A}\) consisting of invertible elements. A function \(y: \ \mathscr {A}\rightarrow \mathscr {A}\) of class \(\mathcal {C}^1\) admits a left (right) derivative with respect to x at a point if and only if it is left (right) totally derivable at that point.

Proof

We give the proof in the case of the right derivative, since the other case is similar. The function y admits a right derivative if and only if

$$\begin{aligned} \lim _{h\rightarrow 0} (y(x+h)-y(x)) h^{-1} \end{aligned}$$

exists and is finite (the limit is independent of how h approaches zero, though obviously we have to make sure that h avoids zero-divisors). Let us assume for simplicity that the basis of \(\mathscr {A}\) is such that \(u_1=1\), something that is always possible by a change of basis, and let us write the multiplication rules of the basis elements in the form

$$\begin{aligned} u_iu_j=\sum _{k=1}^n \gamma _{ij}^k u_k. \end{aligned}$$

We can choose the increment such that \(h=tu_i\), \(t\in \mathbb {R}\), \(i=1,\ldots , n\). Then, by definition,

$$\begin{aligned} \lim _{h\rightarrow 0} (y(x+h)-y(x))h^{-1}= \dfrac{\partial y}{\partial x_i}u_i^{-1}, \qquad i=1,\ldots ,n. \end{aligned}$$
(5)

If the derivative exists, all these values coincide; taking \(i=1\) on the right-hand side of (5) we obtain \(\dfrac{\partial y}{\partial x_1}\), and we deduce

$$\begin{aligned} \dfrac{\partial y}{\partial x_1}u_i =\dfrac{\partial y}{\partial x_i}. \end{aligned}$$

Let us write \(y=\sum _{\ell =1}^n y_\ell u_\ell \), so that

$$\begin{aligned} \sum _{k=1}^n\dfrac{\partial y_k}{\partial x_1}u_{k}u_i=\sum _{\ell =1}^n \dfrac{\partial y_\ell }{\partial x_i} u_\ell \end{aligned}$$

and

$$\begin{aligned} \sum _{k, \ell =1}^n \dfrac{\partial y_k}{\partial x_1}\gamma ^\ell _{k i} u_\ell =\sum _{\ell =1}^n \dfrac{\partial y_\ell }{\partial x_i} u_\ell \end{aligned}$$

from which we deduce

$$\begin{aligned} \dfrac{\partial y_{\ell }}{\partial x_i} =\sum _{k=1}^n \gamma ^\ell _{k i}\dfrac{\partial y_k}{\partial x_1}. \end{aligned}$$
(6)

We now consider the Jacobian matrix \(J=\left[ \dfrac{\partial y_\ell }{\partial x_i}\right] \). Equation (6) shows that y is right derivable if and only if the Jacobian matrix belongs to the first representation; in fact it is of the form

$$\begin{aligned} \dfrac{\partial y_\ell }{\partial x_i} =\sum _{k=1}^n \gamma ^\ell _{k i} z_k, \end{aligned}$$

see [7], Remark 2.1. \(\square \)

Remark 2.5

The previous result shows that the notion of total right or left derivability and that of right or left derivability in the algebra, namely the existence of the limit of a right or left difference quotient, coincide. When an algebra is commutative there is no need to consider the right and the left cases separately. In the particular case of the complex numbers, totally derivable functions coincide with holomorphic functions. In the quaternionic case, total derivability coincides with quaternionic derivability, and in this case the function is affine, namely of the form \(f(q)=qa+b\), \(a,b\in \mathbb {H}\), as proved by Meilikhson [12] in 1948, and later also by Sudbery in [27].

Besides the notion of derivability, the Italian school also considered the notion of monogenicity, which is inspired by the Cauchy–Riemann conditions for holomorphic functions of a complex variable. This notion, in the case of a more general algebra, is as follows:

Definition 2.6

Let \(y=y(x)\) be a function of a variable \(x\in \mathscr {A}\) with values in \(\mathscr {A}\). Let \(u_1,\ldots , u_n\) be a basis of \(\mathscr {A}\) and let us set \(u=(u_1,\ldots ,u_n)\), \(y=\sum _{i=1}^n y_i u_i\), \(x=\sum _{i=1}^n x_i u_i\). The function \(y=y(x)\) is said to be right monogenic if its Jacobian \(J=\left[ \dfrac{\partial y_k}{\partial x_i}\right] \) satisfies

$$\begin{aligned} u J u^T=0 \end{aligned}$$

or left monogenic if

$$\begin{aligned} u J^T u^T=0. \end{aligned}$$

Remark 2.7

We point out that the two conditions of right and left monogenicity do not coincide in a noncommutative algebra. In fact \((uJ u^T)^T\not = u J^T u^T\), in general.

Remark 2.8

In the complex case, monogenicity coincides with holomorphicity in the classical sense. For quaternionic functions, the notion identifies left or right Cauchy–Fueter regular functions, i.e. functions in the kernel of the operator

$$\begin{aligned} \frac{\partial }{\partial \bar{q}}=\frac{\partial }{\partial x_0}+i\frac{\partial }{\partial x_1}+j\frac{\partial }{\partial x_2}+k\frac{\partial }{\partial x_3} \end{aligned}$$

where we denoted a quaternion by \(q=x_0+ix_1+jx_2+kx_3\).
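As a quick check of this description, the following sketch (assuming SymPy; the helpers qmul and dbar are ours) applies this operator, acting by left multiplication, to the Fueter variable \(x_1-ix_0\) and to its square, both of which are classically known to be left regular.

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3', real=True)

def qmul(p, q):
    """Quaternion product of p = (a, b, c, d) ~ a + bi + cj + dk."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def dbar(F):
    """Apply d/dx0 + i d/dx1 + j d/dx2 + k d/dx3 on the left of F."""
    units = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
    derivs = [tuple(sp.diff(comp, v) for comp in F) for v in (x0, x1, x2, x3)]
    terms = [qmul(u, d) for u, d in zip(units, derivs)]
    return tuple(sp.expand(sum(t)) for t in zip(*terms))

zeta = (x1, -x0, 0, 0)            # the Fueter variable x1 - i*x0
print(dbar(zeta))                 # (0, 0, 0, 0): left Cauchy-Fueter regular
print(dbar(qmul(zeta, zeta)))     # (0, 0, 0, 0) as well
```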

In the bicomplex case the notion of monogenicity amounts to the well-known holomorphicity condition given by Scorza Dragoni, later used by Baley Price, and it coincides with total derivability. In the case of hyperbolic numbers this notion coincides with the notion of holomorphicity given in [24].

Remark 2.9

The conditions of total derivability are expressed by \(n^2-n\) differential conditions on the n components \(y_i(x)\) of a function y(x), while the conditions of monogenicity are n in number. Total derivability is basis independent, while monogenicity depends on the chosen basis. In [16], Sce studies the relations between the two notions, according to the type of algebra. An algebra with unit is called solenoidal if every right totally derivable function in the algebra is right monogenic; it is called bisolenoidal if and only if every right totally derivable function is left monogenic, and in this latter case the functions are also right monogenic.

3 Algebraic Analysis

In this section we recall only the basic notions and the techniques that we need in the sequel. For more details we refer the reader to [3].

By R we denote the ring of polynomials in n variables with complex coefficients, i.e. \(R=\mathbb {C} [z_1,\ldots ,z_n].\) We can regard R as the ring of symbols of linear partial differential operators with constant coefficients if we formally replace \(z=(z_1,\ldots ,z_n)\) with \(D=(\partial /\partial x_1,\ldots ,\partial /\partial x_n)\). This formal substitution of derivatives with polynomials, and vice versa, basically corresponds to taking the Fourier transform. This is allowed only if the function space in which we work satisfies suitable conditions, and these conditions will be satisfied in the spaces considered below.

Then, we consider an \(r_1 \times r_0\) matrix \(P=[P_{ij}]\) of elements in R. In the cases of interest for us, such a matrix is the symbol of a matrix of differential operators P(D), and if \({\mathcal {S}}\) is a space of generalized functions, P(D) defines a natural map

$$\begin{aligned} P(D):{\mathcal {S}}^{r_0} \rightarrow {\mathcal {S}}^{r_1}, \end{aligned}$$

whose kernel is a set of functions of interest from the analysis point of view.

In reality, the more general setting in which one should work is that of sheaves of generalized functions, to which we will apply the algebraic theory: if \(\mathcal {S}\) is a sheaf, for suitable choices of \(\mathcal {S}\) we have that P(D) is a sheaf homomorphism whose kernel (again a sheaf) is denoted by \({\mathcal {S}}^P\).

Usually we work with the sheaf \(\mathcal {A}\) of real analytic functions, the sheaf \(\mathcal {B}\) of hyperfunctions, the sheaf \(\mathcal D'\) of Schwartz distributions, the sheaf \(\mathcal E\) of infinitely differentiable functions, and also the sheaf \(\mathcal O\) of holomorphic functions.

A first important algebraic object we are interested in is the R-module

$$\begin{aligned} M=R^{r_0}/P^T R^{r_1} \end{aligned}$$

where \(P^T\) denotes the transpose of P. This module is of crucial importance, due to the following fundamental theorem, which is the foundation of all of algebraic analysis:

Theorem 3.1

Let \(\mathcal {S}\) be a sheaf of generalized functions. Then there is a sheaf isomorphism

$$\begin{aligned} {\mathcal {S}}^P \cong \mathrm{Hom}(M,\mathcal {S}). \end{aligned}$$

This result explains why M is the central object in algebraic analysis: it shows that the study of the solutions of a general system of linear, constant coefficient, partial differential equations can be reduced to the study of all the morphisms from M to \(\mathcal {S}\).

Another way to realize that the module M is the algebraic object which incorporates all the interesting information on the solutions of the system of differential equations \(P(D)f=0\) is via tools from commutative algebra. In fact, by Hilbert’s syzygy theorem, we can always write a finite resolution of the module M:

Theorem 3.2

There exists an integer \(m \le n\) and a finite exact resolution of the module M with free modules as follows:

$$\begin{aligned} 0\longrightarrow R^{r_{m}}\buildrel {P^T_{m-1}}\over \longrightarrow R^{r_{m-1}} \longrightarrow \cdots \buildrel {P^T_1}\over \longrightarrow R^{r_1}\buildrel {P^T}\over \longrightarrow R^{r_0}\longrightarrow M\longrightarrow 0. \end{aligned}$$
(7)

The maps which appear in this resolution are called the syzygies of M. The importance of the result lies in the fact that one can always find a finite resolution, with a natural bound on its length. It is important to note, however, that such a resolution is not unique.

By taking the dual of this finite free resolution through the Hom functor (roughly speaking, one takes the duals of the spaces involved and the transposes of the matrices representing the operators, and reverses the arrows) we obtain:

$$\begin{aligned} 0\longrightarrow R^{r_0}\buildrel {P}\over \longrightarrow R^{r_1}\buildrel {P_1}\over \longrightarrow \cdots \longrightarrow R^{r_{m-1}}\buildrel {P_{m-1}}\over \longrightarrow R^{r_{m}}\longrightarrow 0 \end{aligned}$$
(8)

The complex (8) is not necessarily exact, so that one can consider its cohomology (the measure of how inexact the complex is) by taking the quotients of kernels and images. The quotients one obtains are actually R-modules:

Definition 3.3

The \(\mathrm{Ext}\)–modules of M are defined as:

$$\begin{aligned} \mathrm{Ext}^j(M,R) = H^j(M,R) = \frac{\mathrm{ker}(P_j)}{\mathrm{im}(P_{j-1})}. \end{aligned}$$

Remark 3.4

The maps in the finite free resolution in Hilbert’s theorem, i.e. the syzygies, are not uniquely defined; however, the \(\mathrm{Ext}\)–modules are uniquely determined by M and R, and thus are invariant algebraic objects which contain some analytic information. For example, we have that for every open set U, the following sequence is exact:

$$\begin{aligned} 0\longrightarrow {\mathcal {S}}^P(U) \longrightarrow {\mathcal {S}}^{r_0}(U)\longrightarrow {\mathcal {S}}^{r_1}(U) \longrightarrow \cdots \longrightarrow {\mathcal {S}}^{r_{m-1}}(U)\longrightarrow {\mathcal {S}}^{r_{m}}(U)\longrightarrow 0 \end{aligned}$$

The map \(P_{r+1}(D)\) constructed from the maps appearing in the resolution of an operator P(D) has an analytic interpretation: it gives the compatibility conditions on the datum g of the inhomogeneous system \(P_r(D)f=g\) that ensure the solvability of the system on a convex open set. More generally, we have:

Theorem 3.5

Let U be a convex open (or convex compact) set in \(\mathbb {R}^n\) (or \(\mathbb {C}^n\)). Then the sequence

$$\begin{aligned}&0\longrightarrow \mathcal {S}^P(U ) \longrightarrow \mathcal {S}(U )^{r_0}\buildrel P(D) \over \longrightarrow \mathcal {S} (U )^{r_1}\buildrel P_1(D) \over \longrightarrow \ldots \\&\ldots \buildrel P_{m-1}(D) \over \longrightarrow \mathcal {S} (U )^{r_m}\longrightarrow 0 \end{aligned}$$

is exact.

Another analytic result which can be immediately deduced from the algebraic study of the matrix P is the validity of a Hartogs phenomenon for the functions in \(\mathcal {S}^P\).

Theorem 3.6

Let \(P=[P_{ij}]\) be an \(r_1\times r_0\) matrix of polynomials in n variables with complex coefficients, \(1\le r_0\le r_1\), \(r_1\ge 2\), such that coker(P) is torsion free. Let \(f=(f_1, \ldots ,f_{r_0})\) be a vector of hyperfunctions on \(\mathbb {R}^n\) such that

$$\begin{aligned} \sum _{j=1}^{r_0}P_{ij}(D)f_j=0, \ \ \ \ i=1,\ldots ,r_1, \end{aligned}$$

on \(\mathbb {R}^n\!{\setminus }\! K\), for K a compact convex subset in \(\mathbb {R}^n\). Then there exists a vector \(f^*=(f^*_1,\ldots ,f^*_{r_0})\) of hyperfunctions on \(\mathbb {R}^n\) such that

$$\begin{aligned} f^*_j=f_j \quad \text {on } \mathbb {R}^n{\setminus } K, \quad j=1,\ldots ,r_0 \end{aligned}$$

and

$$\begin{aligned} \sum _{j=1}^{r_0}P_{ij}(D)f^*_j=0, \ \ \ i=1,\ldots ,r_1, \end{aligned}$$

on all of \(\mathbb {R}^n\).

The condition on the cokernel that appears in this last result is equivalent to the vanishing of the Ext-modules Ext\(^i(M,R)\) for \(i=0,1\). This condition was further studied in [1] and amounts to the following simple algebraic characterization:

Proposition 3.7

If P has maximal rank, coker(P) is torsion free if and only if the greatest common divisor of the minors of P of maximal order is 1.
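As a minimal illustration of this criterion (a sketch assuming SymPy), for the Cauchy–Riemann system in one complex variable the symbol matrix is square and its only maximal minor is \(z_1^2+z_2^2\), which is not 1; the cokernel therefore has torsion, consistently with the absence of a Hartogs phenomenon in one complex variable.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

# symbol of the Cauchy-Riemann system in one complex variable
P_CR = sp.Matrix([[z1, -z2],
                  [z2,  z1]])

# the only maximal-order minor is the determinant
print(sp.factor(P_CR.det()))   # z1**2 + z2**2, not 1: coker(P) has torsion
```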

More generally, we have the following result that gives an analytic meaning to the algebraic study of minimal free resolutions and their properties:

Theorem 3.8

Let K be a compact set in \({\mathbb {R}}^n\), let P(D) be the matrix associated to a system such that Ext\(^j(M,R)=0\) for \(j=0,\ldots ,m-1\), where m is the length of a minimal free resolution, and let \(Q(D)=P^T_{m-1}(D)\). Suppose that either

$$\begin{aligned} \dim H^j_K({\mathbb {R}}^n, {\mathcal {B}}^P) < +\infty , \ \ \ j=1,\ldots ,m \end{aligned}$$

or

$$\begin{aligned} \dim H^{m-j}( K, \mathcal{A}^Q) \le \aleph _0, \ \ \ j=0,1,\ldots ,m-1. \end{aligned}$$

Then \(H^j_K({\mathbb {R}}^n, \mathcal{B}^P)\) and \(H^{m-j}( K, \mathcal{A}^Q)\) are respectively an FS-space and a DFS-space, and, for \(j=0,1,\ldots ,m\), they are strong duals of each other.

4 Some Algebras of Order Four

The classification of algebras of order four, over any field, was provided by Scorza (we note that he classified algebras of order 2 and 3 in [19] and [20], respectively). He showed that there are 128 of them, up to isomorphism. In this rather general framework, only some of these algebras were studied from the point of view of function theory. The two most studied algebras of order four are the algebra of quaternions and that of bicomplex numbers. In this section we study in some more detail the case of two algebras that Sce considered of particular interest, namely algebra LXXIX and algebra LXXXI. We will then conclude with a remark on the bicomplex case that may lead to further interesting investigations.

4.1 Algebra LXXIX

Generally speaking, one can consider cyclic algebras of order 2n whose basis elements \(1, j, j^2, \ldots , j^{2n-1}\) satisfy the relation \((1+j^2)^n=0\); by setting \(\omega =1+j^2\), one can further express j through the imaginary unit i, \(\omega \) and their products and powers. In this general case, these algebras are direct products of the algebra of complex numbers and algebras with basis \(1,\omega , \ldots , \omega ^{n-1}\). In this subsection we consider the case \(n=2\), which corresponds to the algebra LXXIX in Scorza’s classification, see [21]:

Definition 4.1

The algebra LXXIX is the associative real algebra, that we shall denote by \(\mathscr {A}_{79}\), generated by \(1,j,j^2,j^3\) satisfying \((1+j^2)^2=0\). Equivalently, it is the associative real algebra generated by \(1,i,\omega , i\omega \) such that \(i^2=-1\), \(\omega ^2=0\), \(i\omega -\omega i=0\).

Remark 4.2

This algebra was studied by Ketchum [9] and Sobrero [25]. It is interesting to note that Sobrero defined this algebra starting from some differential equations in elasticity. He also pointed out that it was Levi-Civita who showed him the second way to describe the algebra, by setting \(\omega = 1+j^2\) and \(i=\dfrac{1}{2}(3j+j^3)\), which is convenient for computations in the algebra.

We observe that, in general, one may consider the change of basis given by relations of the form \(i=\dfrac{1}{2}(3j+j^3)\), \(\omega =a(1+j^2)+b(j+j^3)\), \(i\omega =-b(1+j^2)+a(j+j^3)\) with a, b arbitrary real numbers (not both zero).

Given two elements in \(\mathscr {A}_{79}\), they can be added considering j and its powers as monomials, and multiplied by using the multiplication rules for j: let \(x=x_1+x_2j+x_3j^2+x_4 j^3\), \(x'=x'_1+x'_2j+x'_3j^2+x'_4 j^3\); then

$$\begin{aligned} \begin{aligned} x+x'&=(x_1+x'_1)+(x_2+x'_2)j+(x_3+x'_3)j^2+(x_4 +x'_4)j^3\\ x\, x'&= (x_1x'_1-x_4x'_2-x_3x'_3-x_2x'_4+2x_4x'_4)+(x_2x'_1+x_1x'_2-x_4x'_3-x_3x'_4)j\\&\quad +(x_3x'_1+x_2x'_2-2x_4x'_2+x_1x'_3-2x_3x'_3-2x_2x'_4+3x_4x'_4)j^2\\&\quad +(x_4x'_1+x_3x'_2+x_2x'_3-2x_4x'_3+x_1x'_4-2x_3x'_4)j^3. \end{aligned} \end{aligned}$$

With some standard computations, one can easily show that for any given x the equation \(xx'=1\) is uniquely solvable if and only if the determinant of the matrix of the coefficients, which is equal to \(((x_1-x_3)^2+(x_2-x_4)^2)^2\), is nonzero. We will define the modulus \(\Vert \cdot \Vert \) of an element \(x_1+x_2j+ x_3 j^2+ x_4 j^3\) by

$$\begin{aligned} \Vert x\Vert =((x_1-x_3)^2+(x_2-x_4)^2)^{1/2}. \end{aligned}$$

It is evident that the algebra \(\mathscr {A}_{79}\) contains zero divisors.

In order to deal with the product, it is more convenient to use the Levi Civita basis, so that an element x can be written as \(x=\xi _1+\xi _2i+\xi _3\omega +\xi _4i\omega =z_1+z_2\omega \) and if \(x'=\xi _1'+\xi _2'i+\xi _3'\omega +\xi _4'i\omega =z_1'+z_2'\omega \), with obvious meaning of the symbols, we have

$$\begin{aligned} \begin{aligned} x+x'&=(z_1+z_1')+(z_2+z_2')\omega \\ x\, x'&= (z_1z_1')+(z_1z_2'+z_2z_1')\omega . \end{aligned} \end{aligned}$$

It is interesting to note that

$$\begin{aligned} x^n=z_1^n+nz_1^{n-1}z_2 \omega \end{aligned}$$

so that \(z_2\) appears at most to the first power in any power of the variable x. With this basis, the modulus of x can be rewritten as

$$\begin{aligned} \Vert x\Vert =|z_1| \end{aligned}$$

where \(|z_1|\) denotes the modulus of the complex number \(z_1\). The product \(x\, x'\) is zero if and only if

$$\begin{aligned} z_1z_1'=0 \qquad z_1z_2'+z_2z_1'=0 \end{aligned}$$

which implies that either one of the two elements is zero or both have modulus equal to zero. This is a complete characterization of the zero divisors: they are exactly the elements of the algebra with zero modulus, i.e. the elements of the form \(z_2\omega \), \(z_2\not =0\).
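This arithmetic is easy to model on a computer. The following sketch (a small helper class of ours, in plain Python) represents \(x=z_1+z_2\omega \) as a pair of complex numbers and checks the formula for \(x^n\) together with the zero-divisor characterization.

```python
class A79:
    """x = z1 + z2*w in the Levi-Civita basis, with w**2 = 0."""
    def __init__(self, z1, z2):
        self.z1, self.z2 = complex(z1), complex(z2)

    def __mul__(self, other):
        # (z1 + z2 w)(z1' + z2' w) = z1 z1' + (z1 z2' + z2 z1') w
        return A79(self.z1*other.z1, self.z1*other.z2 + self.z2*other.z1)

    def __pow__(self, n):
        out = A79(1, 0)
        for _ in range(n):
            out = out * self
        return out

    def modulus(self):
        return abs(self.z1)

    def __repr__(self):
        return f"({self.z1}) + ({self.z2})*w"

x, n = A79(2 + 1j, 3 - 2j), 5
print(x**n)                             # x^n
print(x.z1**n, n*x.z1**(n - 1)*x.z2)    # z1^n and n z1^(n-1) z2: they match
p = A79(0, 1)*A79(0, 1j)                # product of two zero-modulus elements
print(p.z1, p.z2)                       # 0j 0j: they are zero divisors
```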

To discuss the notion of total derivability we follow Sobrero’s paper [25] and we go back to the basis \(1,j,j^2,j^3\).

Proposition 4.3

The condition of total derivability in the algebra \(\mathscr {A}_{79}\) is expressed in matrix form as \(P(D)\underline{y}=0\), where y, x are identified with the vectors \(\underline{y}=(y_1, y_2,y_3,y_4)^T\), \(\underline{x}=(x_1,x_2,x_3,x_4)^T\) and

$$\begin{aligned} P(D)=\left[ \begin{matrix} \partial _{x_2} &{}\quad 0 &{}\quad 0&{}\quad \partial _{x_1}\\ \partial _{x_1} &{}\quad -\partial _{x_2} &{}\quad 0&{}\quad 0\\ 0&{}\quad \partial _{x_1} &{}\quad -\partial _{x_2}&{}\quad -2\partial _{x_1}\\ 0&{}\quad 0&{}\quad \partial _{x_1} &{}\quad -\partial _{x_2}\\ \partial _{x_3} &{}\quad 0 &{}\quad 0&{}\quad \partial _{x_2}\\ \partial _{x_2} &{}\quad -\partial _{x_3} &{}\quad 0&{}\quad 0\\ 0&{}\quad \partial _{x_2} &{}\quad -\partial _{x_3}&{}\quad -2\partial _{x_2}\\ 0&{}\quad 0&{}\quad \partial _{x_2} &{}\quad -\partial _{x_3}\\ \partial _{x_4} &{}\quad 0 &{}\quad 0&{}\quad \partial _{x_3}\\ \partial _{x_3} &{}\quad -\partial _{x_4} &{}\quad 0&{}\quad 0\\ 0&{}\quad \partial _{x_3} &{}\quad -\partial _{x_4}&{}-2\partial _{x_3}\\ 0&{}\quad 0&{}\quad \partial _{x_3} &{}\quad -\partial _{x_4}\\ \end{matrix} \right] , \end{aligned}$$
(9)

where we write \(\partial _{x_i}\) instead of \(\partial /\partial x_i\).

Proof

We can write the first representation of the algebra by looking for a matrix \(X'\) with real entries such that \(x u=uX'\). Easy computations show that, denoting the components of x by x, y, w, v,

$$\begin{aligned} X'=\left[ \begin{matrix} x &{}\quad -v &{}\quad -w&{}\quad -y+2v\\ y &{}\quad x&{}\quad -v&{}\quad -w\\ w &{}\quad y-2v &{}\quad x-2w &{}\quad -2y+3v\\ v &{}\quad w&{}\quad y-2v &{}\quad x-2w \end{matrix} \right] . \end{aligned}$$

By imposing that the Jacobian has the form \(X'\), we get the system \(P(D)\underline{y}=0\) as stated. \(\square \)

Remark 4.4

We note that Sobrero did not use the notion of total derivability in his paper. He computed the derivatives along the directions of the elements of the basis, and he observed that having these four derivatives one can construct the derivative along any direction not associated with a zero divisor. The elements j, \(j^2\), \(j^3\) are invertible; in fact, from the relation \(j^4+2j^2+1=0\) we immediately have

$$\begin{aligned} \begin{aligned} j(-j^3-2j)&=1\\ j^2(-j^2-2)&=1 \end{aligned} \end{aligned}$$

which shows that j, \(j^2\) admit inverses. Finally, a simple calculation gives \(j^3(3j+2j^3)=1\), and so \(j^3\) is also invertible, as stated. As we observed, by virtue of Proposition 2.4, this is equivalent to total derivability.

Functions of the form \(y=y_1+y_3j^2+y_4j^3\) are totally derivable (or monogenic, in Sobrero’s terminology) if and only if they satisfy the elastic strain equations. As a general remark, it is interesting to point out that Sobrero became interested in \(\mathscr {A}_{79}\) because of some specific equations that arise naturally in physics. This is not a new phenomenon. Holomorphicity derives its interest from the fact that the Cauchy–Riemann operator factorizes the Laplacian, and as such the theory of holomorphic functions has an intimate connection with the theory of harmonic functions and potential theory. Similarly, the study of analyticity in the hyperbolic setting is connected to the d’Alembert operator and the wave equation. And as Sce pointed out in [18], under suitable conditions one can construct an algebra where holomorphicity identifies the solutions to a specific system of differential equations.

Remark 4.5

The matrix P(D) can be written as three \(4\times 4\) blocks, all of the same form but with respect to different variables. In the computations below we will need the symbol of this matrix, which we write, up to a factor \(-i\), in terms of the variables \(x_1,\ldots , x_4\) as

$$\begin{aligned} P=\left[ \begin{matrix} P(x_1,x_2)\\ P(x_2,x_3)\\ P(x_3,x_4) \end{matrix} \right] \qquad \mathrm{where}\qquad P(a,b)= \left[ \begin{matrix} b &{}\quad 0 &{}\quad 0&{}\quad a\\ a &{}\quad -b &{}\quad 0&{}\quad 0\\ 0&{}\quad a &{}\quad -b&{}\quad -2a\\ 0&{}\quad 0&{}\quad a &{}\quad -b \end{matrix} \right] \end{aligned}$$

We then have:

Proposition 4.6

The module \(M=R^4/ P^T R^{12}\) admits a finite free resolution of the form

$$\begin{aligned} 0\longrightarrow R^{4}(-3) \buildrel {P_2^T}\over \longrightarrow R^{12}(-2)\buildrel {P_1^T}\over \longrightarrow R^{12}(-1)\buildrel {P^T}\over \longrightarrow R^4 \longrightarrow M\longrightarrow 0, \end{aligned}$$

where

$$\begin{aligned} P_1= & {} \left[ \begin{matrix} 0 &{} Q(x_3,x_4) &{} -Q(x_2,x_3)\\ Q(x_3,x_4) &{} 0 &{} -Q(x_1,x_2)\\ Q(x_2,x_3) &{} -Q(x_1,x_2) &{} 0 \end{matrix}\right] \qquad Q(a,b)= \left[ \begin{matrix} a&{}\quad b&{}\quad 0&{}\quad 0\\ -b&{}\quad a&{}\quad -b&{}\quad -a\\ 0&{}\quad 0&{}\quad a&{}\quad -b\\ -b&{}\quad 0&{}\quad 0&{}\quad a \end{matrix}\right] , \\ P_2= & {} \left[ \begin{matrix} R(x_1,x_2)\ \, S(x_2,x_3)\ \, R(x_3,x_4) \end{matrix}\right] , \end{aligned}$$

where

$$\begin{aligned} R(a,b)= \left[ \begin{matrix} -a&{}\quad -b&{}\quad 0&{}\quad -b\\ b&{}\quad -a&{}\quad b&{}\quad 0\\ 0&{}\quad 0&{}\quad -a&{}\quad b\\ b&{}\quad 0\quad &{}0&{}\quad -a \end{matrix}\right] ,\qquad S(a,b)= \left[ \begin{matrix} a&{}\quad b&{}\quad 0&{}\quad a\\ -b&{}\quad a&{}\quad -b&{}\quad 0\\ 0&{}\quad 0&{}\quad a&{}\quad -b\\ -b&{}\quad 0&{}\quad 0&{}\quad a \end{matrix}\right] , \end{aligned}$$

moreover depth\((M)=1=\dim (M)\).

Proof

The minimal free resolution, as well as the depth, can be computed using any software for computations in commutative algebra, such as CoCoA. The explicit form of the syzygies can be checked by direct computations. \(\square \)

Remark 4.7

We note that the vanishing of Ext\(^1(M,R)\) is easily proved since the greatest common divisor of the minors of order 4 of the matrix P is 1. It is interesting to note that, although a minimal free resolution is linear, it is not self-dual. In fact, not only are the matrices \(P^T\) and \(P_2^T\) different, a fact that depends on the above explicit construction, but it may be checked that the modules generated by the rows of \(P^T\) and of \(P_2^T\) are different.
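The gcd computation behind the first claim can be reproduced mechanically. The following sketch (assuming SymPy, with the blocks transcribed from (9)) computes the greatest common divisor of all the \(4\times 4\) minors of P; by the remark above the expected value is 1.

```python
import sympy as sp
from functools import reduce
from itertools import combinations

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')

def block(a, b):
    # symbol of one 4x4 block of the matrix P(D) in (9)
    return sp.Matrix([[b,  0,  0,    a],
                      [a, -b,  0,    0],
                      [0,  a, -b, -2*a],
                      [0,  0,  a,   -b]])

P = sp.Matrix.vstack(block(x1, x2), block(x2, x3), block(x3, x4))

cols = list(range(4))
minors = [P.extract(list(rows), cols).det()
          for rows in combinations(range(12), 4)]
g = reduce(sp.gcd, [m for m in minors if m != 0])
print(g)   # 1, so coker(P) is torsion free by Proposition 3.7
```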

Corollary 4.8

Let U be an open convex set in \(\mathbb {R}^4\). The system \(P(D) \underline{y} =\underline{g}\) has a solution \(\underline{y}\in (\mathcal {S}(U))^{4}\) if and only if \(g\in (\mathcal {S}(U))^{12}\) satisfies \(P_1(D)g=0\). Writing \(\underline{g}=[\underline{g}_1\ \underline{g}_2\ \underline{g}_3]^T\) with \(\underline{g}_i=[g_{i1},g_{i2},g_{i3},g_{i4}]^T\), this is equivalent to

$$\begin{aligned} \begin{aligned} Q(\partial _{x_3},\partial _{x_4})\underline{g}_2-Q(\partial _{x_2},\partial _{x_3})\underline{g}_3&=0\\ Q(\partial _{x_3},\partial _{x_4})\underline{g}_1-Q(\partial _{x_1},\partial _{x_2})\underline{g}_3&=0\\ Q(\partial _{x_2},\partial _{x_3})\underline{g}_1-Q(\partial _{x_1},\partial _{x_2})\underline{g}_2&=0.\\ \end{aligned} \end{aligned}$$
(10)

The system

$$\begin{aligned} \begin{aligned} Q(\partial _{x_3},\partial _{x_4})\underline{g}_2-Q(\partial _{x_2},\partial _{x_3})\underline{g}_3&=\underline{h}_1\\ Q(\partial _{x_3},\partial _{x_4})\underline{g}_1-Q(\partial _{x_1},\partial _{x_2})\underline{g}_3&=\underline{h}_2\\ Q(\partial _{x_2},\partial _{x_3})\underline{g}_1-Q(\partial _{x_1},\partial _{x_2})\underline{g}_2&=\underline{h}_3\\ \end{aligned} \end{aligned}$$
(11)

has a solution in \((\mathcal {S}(U))^{12}\) if and only if

$$\begin{aligned} R(\partial _{x_1},\partial _{x_2})\underline{h}_1+ S(\partial _{x_2},\partial _{x_3})\underline{h}_2+ R(\partial _{x_3},\partial _{x_4})\underline{h}_3=0. \end{aligned}$$

Remark 4.9

As we pointed out, the solutions of the system associated with P(D) can be interpreted in the framework of the study of an elastic plate. From the physical point of view, the compatibility conditions expressed by \(P_2(D)\underline{g}=0\) could be interpreted as conservation laws, see [4], in the case in which \(\underline{g}\) represents, for instance, external forces applied to the plate.

Theorem 3.8 in this framework gives:

Proposition 4.10

Let K be a compact set in \({\mathbb {R}}^4\), and let \(\mathcal {S}^P\) be the sheaf of solutions of \(P(D)\underline{y}=0\) where P(D) is as in (9). Assume that \(H^0_K({\mathbb {R}}^4, \mathcal{B}^P)\) is finite dimensional. Then \(H^0_K({\mathbb {R}}^4, \mathcal{B}^P)\) and \(H^{3}( K, \mathcal{A}^{P_2^T})\) are strong duals of each other.

Proof

By Proposition 4.6, \(\dim (M)=1\), which implies the vanishing of the Ext-modules Ext\(^i(M,R)\) for \(i=0,1,2\). By Theorem 3.8 we obtain the assertion. \(\square \)

By Remark 4.7, since the system is not self-dual, the maps P and \(P^T_2\) are different, so the sheaf \(\mathcal{A}^{P_2^T}\) does not coincide with \(\mathcal{B}^P\).

In the algebra \(\mathscr {A}_{79}\) we can also consider the condition of monogenicity. To write it in real coordinates, we have a very simple matrix representation if we use the basis \(1,i,\omega , i\omega \). Let us consider \(\xi \in \mathscr {A}_{79}\), \(\xi =\xi _1+i\xi _2+\omega \xi _3+i\omega \xi _4\) and a function \(\eta =\eta (\xi )\). As before, we identify \(\xi \), \(\eta \) in \(\mathscr {A}_{79}\) with \(\underline{\xi }=[\xi _1,\xi _2,\xi _3,\xi _4]^T\), \(\underline{\eta }=[\eta _1,\eta _2,\eta _3,\eta _4]^T\) in \(\mathbb {R}^4\), respectively, and we have:

Definition 4.11

A function \(\eta :\ U\subseteq \mathscr {A}_{79}\rightarrow \mathscr {A}_{79}\) of class \(\mathcal {C}^1\) is monogenic if and only if \(\mathsf {P}(D)\underline{\eta }=0\) in U, where

$$\begin{aligned} \mathsf {P}(D)=\left[ \begin{matrix} \partial _{\xi _1} &{}\quad -\partial _{\xi _2} &{}\quad 0&{}\quad 0\\ \partial _{\xi _2} &{}\quad \partial _{\xi _1} &{}\quad 0 &{}\quad 0\\ \partial _{\xi _3} &{}\quad -\partial _{\xi _4}&{}\quad \partial _{\xi _1}&{}\quad -\partial _{\xi _2}\\ \partial _{\xi _4} &{}\quad \partial _{\xi _3}&{}\quad \partial _{\xi _2}&{}\quad \partial _{\xi _1}\\ \end{matrix} \right] , \end{aligned}$$
(12)

where we write \(\partial _{\xi _i}\) instead of \(\partial /\partial \xi _i\).

Remark 4.12

It is clear that this system is not elliptic, since \(\mathscr {A}_{79}\) contains zero divisors, see [18] or the translation [7], Chapter 3, and that the variety of zero divisors is in fact the characteristic variety. Indeed, \(\xi =z+\omega w\) with \(z,w\in \mathbb {C}\), and we already observed that the elements of the form \(\omega w\) are all and the only zero divisors. In fact, the system expressing monogenicity in \(\mathscr {A}_{79}\) is parabolic, as is easily seen from the fact that its characteristic equation \(\det (\mathsf {P})=(\xi _1^2+\xi _2^2)^2\) depends on the two variables \(\xi _1,\xi _2\) only.
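The determinant computation can be checked directly (a sketch assuming SymPy; the matrix is the symbol of (12)):

```python
import sympy as sp

xi1, xi2, xi3, xi4 = sp.symbols('xi1 xi2 xi3 xi4')

# symbol of the monogenicity system (12) in A79
P = sp.Matrix([[xi1, -xi2,   0,    0],
               [xi2,  xi1,   0,    0],
               [xi3, -xi4, xi1, -xi2],
               [xi4,  xi3, xi2,  xi1]])

print(sp.factor(P.det()))   # (xi1**2 + xi2**2)**2: no dependence on xi3, xi4
```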

Proposition 4.13

A function \(y:\ U\subseteq \mathscr {A}_{79}\rightarrow \mathscr {A}_{79}\) of class \(\mathcal {C}^2\) is monogenic if and only if

$$\begin{aligned} y(x)=y(z,w)= F(z,w)+G(z,w)\omega , \end{aligned}$$

where F(z, w) is holomorphic in z, G(z, w) is polyanalytic of order 2 in z and

$$\begin{aligned} \partial _{\bar{w}}F(z,w)+\partial _{\bar{z}}G(z,w)=0. \end{aligned}$$
(13)

Proof

Functions monogenic in \(\mathscr {A}_{79}\) can be written as the sum of two complex valued functions F, G in the form \(F(z,w)+\omega G(z,w)\), where F is holomorphic in z and F, G satisfy \(\partial _{\bar{w}}F+\partial _{\bar{z}}G=0\). We deduce that \(\partial _{\bar{z}}\partial _{\bar{w}}F+\partial ^2_{\bar{z}}G=0\); since F is holomorphic in z, \(\partial _{\bar{w}}\partial _{\bar{z}}F=0\), so that \(\partial ^2_{\bar{z}}G=0\) and G is a polyanalytic function of order 2 in z. \(\square \)
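A quick verification of this characterization on a concrete pair (a sketch assuming SymPy; the choice \(F=z\bar{w}\), \(G=-z\bar{z}\) is ours):

```python
import sympy as sp

xi1, xi2, xi3, xi4 = sp.symbols('xi1 xi2 xi3 xi4', real=True)
z, zb = xi1 + sp.I*xi2, xi1 - sp.I*xi2
w, wb = xi3 + sp.I*xi4, xi3 - sp.I*xi4

def d_zbar(f):   # Wirtinger derivative (1/2)(d/dxi1 + i d/dxi2)
    return sp.expand((sp.diff(f, xi1) + sp.I*sp.diff(f, xi2))/2)

def d_wbar(f):   # Wirtinger derivative (1/2)(d/dxi3 + i d/dxi4)
    return sp.expand((sp.diff(f, xi3) + sp.I*sp.diff(f, xi4))/2)

F = z*wb      # holomorphic in z
G = -z*zb     # polyanalytic of order 2 in z

print(d_zbar(F))                   # 0
print(d_wbar(F) + d_zbar(G))       # 0: condition (13) holds
print(d_zbar(d_zbar(G)))           # 0: G is polyanalytic of order 2
```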

Example 4.14

As an example of function monogenic in \(\mathscr {A}_{79}\) we consider the case in which F, G can be expanded in power series of z:

$$\begin{aligned} F(z,w)=\sum _{n=0}^{+\infty } z^n \alpha _{0n}(w), \qquad G(z,w)= \sum _{n=0}^{+\infty } z^n \alpha _{1n}(w)+\bar{z}\sum _{n=0}^{+\infty } z^n \alpha _{2n}(w) \end{aligned}$$

where, in order to satisfy (13), the functions \(\alpha _{\ell n}(w)\), \(\ell =0,2\), must satisfy

$$\begin{aligned} \alpha _{2n}(w)=-\frac{1}{2} \partial _{\bar{w}}\alpha _{0n}(w). \end{aligned}$$

If additionally we suppose that both \(\alpha _{\ell n}(w)\), \(\ell =0,2\), are analytic in w, recalling that w cannot appear with powers higher than 2, we can take for example \(\alpha _{0n}(w)=a+wb\), \(\alpha _{2n}(w)=b/2\), \(a,b\in \mathbb {C}\).

In his paper [18] Sce raised the question of establishing whether the components of a monogenic function all satisfy a common equation. In general, one can guarantee that such an equation exists of order n, where n is the dimension of the algebra seen as a vector space. In some cases this order can be lower, as in the case of quaternions, where we know that all the components of a monogenic function are harmonic. In the case of \(\mathscr {A}_{79}\) we have:

Corollary 4.15

All the components \(y_\ell \), \(\ell =1,\ldots ,4\), of a monogenic function in \(\mathscr {A}_{79}\) satisfy the bilaplacian equation

$$\begin{aligned} \Delta ^2_{1,2} y_{\ell }=0, \end{aligned}$$

where \(\Delta _{1,2}=\frac{\partial ^2}{\partial \xi _1^2}+\frac{\partial ^2}{\partial \xi _2^2}\).

Proof

The corollary easily follows from the fact that F is holomorphic in z, and thus its components satisfy \(\Delta _{1,2}y_\ell =0\), \(\ell =1,2\), while G is polyanalytic of order 2 in z, so that its components are annihilated by \(\Delta ^2_{1,2}\).

\(\square \)

We end this section by observing that one can consider the system giving the monogenicity in n variables in \(\mathscr {A}_{79}\), but since the variables commute among themselves the resulting minimal free resolution is of Koszul-type and so well known.

4.2 Algebra LXXXI

A variation of the algebra \(\mathscr {A}_{79}\) is the algebra LXXXI in Scorza’s classification. This algebra is noncommutative and is defined as follows:

Definition 4.16

The algebra LXXXI is the real associative algebra generated by \(1,i,\omega , i\omega \) such that \(i^2=-1\), \(\omega ^2=0\), \(i\omega +\omega i=0\).

For simplicity, we will denote this algebra by \(\mathscr {A}_{81}\). Sum and multiplication of \(x=z_1+z_2\omega \), \(x'=z_1'+z_2'\omega \), \(z_\ell ,z'_\ell \in \mathbb {C}\), \(\ell =1,2\) are given by:

$$\begin{aligned} \begin{aligned} x+x'&= (z_1+z_1')+(z_2+z_2')\omega \\ xx'&= z_1z_1'+(z_1z_2'+z_2\bar{z}_1')\omega . \end{aligned} \end{aligned}$$

An element \(x=z_1+z_2\omega =x_1+x_2i+x_3\omega +x_4i\omega \in \mathscr {A}_{81}\) is invertible if and only if there exists \(x'\) with \(x\, x'=1\); such an \(x'\) exists, and is unique, if and only if the corresponding system of 4 real equations in the 4 components of \(x'\) has a nonsingular coefficient matrix. The determinant of such a matrix is \((x_1^2+x_2^2)^2=|z_1|^4\). We conclude that the invertibility of x is equivalent to \(z_1\not =0\). As a consequence, all the elements of the form \(z_2\omega \) with \(z_2\not =0\) are zero divisors. We set \(\Vert x\Vert =|z_1|\).
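A minimal sketch of this noncommutative product (a small helper class of ours, in plain Python):

```python
class A81:
    """x = z1 + z2*w with w**2 = 0 and i*w = -w*i."""
    def __init__(self, z1, z2):
        self.z1, self.z2 = complex(z1), complex(z2)

    def __mul__(self, other):
        # x x' = z1 z1' + (z1 z2' + z2 conj(z1')) w
        return A81(self.z1*other.z1,
                   self.z1*other.z2 + self.z2*other.z1.conjugate())

    def __repr__(self):
        return f"({self.z1}) + ({self.z2})*w"

x, y = A81(1j, 1), A81(2, 1j)
print(x*y)   # (2j) + (1)*w
print(y*x)   # (2j) + (3)*w: the products differ, the algebra is noncommutative
```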

Also in this case, we study total derivability:

Proposition 4.17

The condition of total derivability in the algebra \(\mathscr {A}_{81}\) is expressed in matrix form as \(P(D)\underline{y}=0\), where y, x are identified with the vectors \(\underline{y}=(y_1, y_2,y_3,y_4)^T\), \(\underline{x}=(x_1,x_2,x_3,x_4)^T\) and

$$\begin{aligned} P(D)=\left[ \begin{matrix} \partial _{x_1} &{}-\partial _{x_2} &{}0&{}0\\ \partial _{x_2}&{}\partial _{x_1} &{}0 &{}0\\ 0 &{}0&{}\partial _{x_1}&{}\partial _{x_2} \\ 0 &{}0&{}-\partial _{x_2}&{}\partial _{x_1} \\ \partial _{x_3} &{}0 &{}0&{}0\\ \partial _{x_4} &{}0 &{}0&{}0\\ 0&{}\partial _{x_3} &{}0&{}0\\ 0&{}\partial _{x_4} &{}0&{}0\\ \partial _{x_1} &{}0&{}-\partial _{x_3}&{}0\\ \partial _{x_1} &{}0&{}0 &{}-\partial _{x_4}\\ 0 &{}0&{}\partial _{x_4} &{}\partial _{x_3}\\ \partial _{x_2}&{}0 &{}-\partial _{x_4}&{}0\\ \end{matrix} \right] . \end{aligned}$$
(14)

Proof

We write the first representation of the algebra by looking for a matrix \(X'\) with real entries such that \(x u=uX'\). It turns out that, denoting the components of x by x, y, w, v,

$$\begin{aligned} X'=\left[ \begin{matrix} x &{} -y &{} 0&{} 0\\ y &{}x&{} 0&{} 0\\ w &{}v &{} x &{} -y\\ v &{}-w&{} y &{} x \end{matrix} \right] . \end{aligned}$$

By imposing that the Jacobian has the form \(X'\), we get the system \(P(D)\underline{y}=0\) as stated. \(\square \)

As is well known, in the quaternionic case the notion of total derivability characterizes affine functions, i.e. functions of the form \(f(q)=a+qb\), where a, b are quaternions. Thus one may wonder whether there are nontrivial examples of functions totally derivable in the algebra \(\mathscr {A}_{81}\). The answer is contained in the next result:

Proposition 4.18

Let \(x=x_1+x_2i+x_3\omega +x_4i\omega =z+w\omega \) with \(z=x_1+x_2i\), \(w=x_3+x_4i\), and let \(y=y_1+y_2i+y_3\omega +y_4i\omega =F+G\omega \), \(y=y(x)=y(z,w)\), \(F=y_1+iy_2\), \(G=y_3+iy_4\), be of class \(\mathcal {C}^2\). Then y is totally derivable if and only if

$$\begin{aligned} y=F(z)+G(z,w)\omega \end{aligned}$$

with F holomorphic in z, G anti-holomorphic in z, holomorphic and linear in w and such that \(\partial _{x_2}\mathrm{Re}(F)=\partial _{x_4}\mathrm{Re}(G)\), \(\partial _{x_1}\mathrm{Re}(F)=\partial _{x_3}\mathrm{Re}(G)\).

The conditions are satisfied for example by functions of the form

$$\begin{aligned} y(x)=y(z,w)=a_0+ z a_1 +\left( \sum _{n\ge 0}\bar{z}^n b_n + w a_{1}\right) \omega ,\qquad a_0,a_1,b_n\in \mathbb {R}+i\mathbb {R}, \end{aligned}$$

where the series converge.

Proof

The first four conditions in the matrix (14) are equivalent to the fact that F and G are holomorphic and anti-holomorphic in z, respectively. The next four conditions impose that F does not depend on w; in fact, neither \(y_1\) nor \(y_2\) depends on \(x_3\), \(x_4\). From \(\partial _{x_1} y_1= \partial _{x_3} y_3\) and \(\partial _{x_1} y_1= \partial _{x_4} y_4\) we deduce \(\partial _{x_3} \partial _{x_1} y_1= \partial _{x_3}^2 y_3=0\), since \(\partial _{x_3} \partial _{x_1} y_1=\partial _{x_1} \partial _{x_3} y_1=0\), and, similarly, we deduce that \(\partial _{x_4}^2 y_4=0\), \(\partial _{x_3}\partial _{x_4} y_i=0\), \(i=3,4\). Thus G is linear in \(x_3,x_4\) and so in w and \(\bar{w}\), since \(x_3=\frac{1}{2}(w+\bar{w})\), \(x_4=\frac{1}{2i}(w-\bar{w})\).

Then we have

$$\begin{aligned} \partial _{\bar{w}}G=\frac{1}{2}(\partial _{x_3}+i\partial _{x_4})(y_3+iy_4)=\frac{1}{2}[(\partial _{x_3}y_3-\partial _{x_4}y_4)+i(\partial _{x_4}y_3+\partial _{x_3}y_4)]=0 \end{aligned}$$

in fact the next-to-last condition in (14) is \(\partial _{x_4}y_3+\partial _{x_3}y_4=0\), while \(\partial _{x_1} y_1= \partial _{x_3} y_3\) and \(\partial _{x_1} y_1= \partial _{x_4} y_4\) imply that \(\partial _{x_3}y_3-\partial _{x_4}y_4=0\).

Finally, a function is right totally derivable in \(\mathscr {A}_{81}\) if and only if the last equation is also satisfied.

To see that the conditions characterize a nontrivial class of functions, we consider a function \(y=F+G\omega \) such that F, G are holomorphic in z and \(\bar{z}\), respectively, with G linear in w. We expand F and G in power series as

$$\begin{aligned} F(z)+G(z,w)\omega =\sum _{n\ge 0}z^n a_n +\left( \sum _{n\ge 0}\bar{z}^n b_n +\sum _{n\ge 0}\bar{z}^n w c_n\right) \omega ,\qquad a_n,b_n, c_n\in \mathbb {R}+i\mathbb {R} \end{aligned}$$

Imposing the last condition in (14), with some lengthy but easy computations we obtain that \(c_n=-(n+1) a_{n+1}\), and the statement follows. \(\square \)
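As a sanity check, the following sketch (assuming SymPy; the truncation and the numerical coefficients are ours) verifies that a function of the form given in the statement satisfies all twelve equations of (14).

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', real=True)
z, w = x1 + sp.I*x2, x3 + sp.I*x4

# a truncated instance of the family in Proposition 4.18
a0, a1, b0, b1, b2 = 1 + sp.I, 2 - 3*sp.I, sp.I, 4, 1 - sp.I
F = a0 + z*a1
G = b0 + sp.conjugate(z)*b1 + sp.conjugate(z)**2*b2 + w*a1

y1, y2 = sp.re(sp.expand(F)), sp.im(sp.expand(F))
y3, y4 = sp.re(sp.expand(G)), sp.im(sp.expand(G))

D = sp.diff
system = [D(y1,x1)-D(y2,x2), D(y1,x2)+D(y2,x1),          # rows 1-2 of (14)
          D(y3,x1)+D(y4,x2), -D(y3,x2)+D(y4,x1),         # rows 3-4
          D(y1,x3), D(y1,x4), D(y2,x3), D(y2,x4),        # rows 5-8
          D(y1,x1)-D(y3,x3), D(y1,x1)-D(y4,x4),          # rows 9-10
          D(y3,x4)+D(y4,x3), D(y1,x2)-D(y3,x4)]          # rows 11-12
print({sp.simplify(e) for e in system})                  # {0}
```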

Proposition 4.19

The module \(M=R^4/ P^T R^{12}\) admits a finite free resolution of the form

$$\begin{aligned}&0\longrightarrow R^{2}(-5) \buildrel {P_3^T}\over \longrightarrow R^{2}(-3)\oplus R^{6}(-4) \buildrel {P_2^T}\over \longrightarrow R^{10}(-2)\oplus R^{4}(-3)\\&\quad \buildrel {P_1^T}\over \longrightarrow R^{12}(-1)\buildrel {P^T}\over \longrightarrow R^4 \longrightarrow M\longrightarrow 0. \end{aligned}$$

Moreover, the matrix P has rank 4 and the g.c.d. of the \(4\times 4\) minors is 1, \(\dim (R^4/M)=1\), depth\((R^4/M)=0\).

In view of Theorem 3.6 we immediately have:

Corollary 4.20

Hyperfunction solutions to the system (14) cannot have compact singularities.

Remark 4.21

Rewriting the maps in the minimal free resolution in Proposition 4.19 in terms of some recurring operators, as we did in the case of the resolution for the algebra \(\mathscr {A}_{79}\), does not seem to be an easy task. It also depends on how the matrix P is written, since P can be rewritten in various equivalent ways by exploiting the equalities among some of its entries. By choosing P as above, the map \(P_3^T\) can be written in the form

$$\begin{aligned} P_3^T=\left[ \begin{matrix} 2x_2^2 &{}0&{}x_1&{}x_2&{}x_3&{}x_4&{}-x_4&{}0\\ 0&{} 2x_2^2 &{}-x_2&{}x_1&{}0&{}x_3&{}0&{}-x_4\\ \end{matrix}\right] . \end{aligned}$$

Since Ext\(^i(M,R)=0\), \(i=0,\ldots , 3\) (in fact dim\((R^4/M)=1\)), and Ext\(^4(M,R)\not =0\), we have that the dual of \(H^0(K,(\mathcal {S}^4)^P)\) is \(H^0(\mathbb {R}^4\!\setminus \! K,(\mathcal {S}^6)^{P_3})\).

We now turn to the notion of (left) monogenicity.

Definition 4.22

The condition of monogenicity in the algebra \(\mathscr {A}_{81}\) is expressed in matrix form as \(\mathsf {P}(D)\underline{\eta }=0\), where

$$\begin{aligned} \mathsf {P}(D)=\left[ \begin{matrix} \partial _{x_1} &{}\quad -\partial _{x_2} &{}\quad 0&{}\quad 0\\ \partial _{x_2} &{}\quad \partial _{x_1} &{}\quad 0 &{}\quad 0\\ \partial _{x_3} &{}\quad \partial _{x_4}&{}\quad \partial _{x_1}&{}\quad -\partial _{x_2}\\ \partial _{x_4} &{}\quad -\partial _{x_3}&{}\quad \partial _{x_2}&{}\quad \partial _{x_1}\\ \end{matrix} \right] , \end{aligned}$$
(15)

where we write \(\partial _{x_i}\) instead of \(\partial /\partial x_i\).

Remark 4.23

An interesting feature of this system is that, denoting by \(\mathsf {P}\) the symbol of \(\mathsf {P}(D)\), \(\mathsf {P}\) can be written in the form

$$\begin{aligned} \mathsf {P}=\left[ \begin{matrix} A &{} 0\\ B &{} A\end{matrix}\right] , \qquad A=\left[ \begin{matrix} {x_1} &{}-{x_2}\\ {x_2} &{}{x_1}\end{matrix}\right] ,\quad B=\left[ \begin{matrix} {x_3} &{}{x_4}\\ {x_4} &{}-{x_3}\end{matrix}\right] \end{aligned}$$

and that

$$\begin{aligned} \mathsf {F}=\left[ \begin{matrix} A &{} -B\\ B &{} A\end{matrix}\right] \end{aligned}$$

is the symbol of the matrix corresponding to left monogenicity in the algebra of quaternions \(\mathbb {H}\), a notion that is commonly known as Cauchy–Fueter regularity. This difference has some relevant consequences, for example in the construction of the minimal free resolution in the case of several variables: the resolution contains linear and quadratic maps not only among the first syzygies but also in the subsequent maps, and the last map is linear in each entry from three variables on.

Also in this case, the operator associated with monogenicity is parabolic; in fact we have \(\det (\mathsf {P})=(x_1^2+x_2^2)^2\). This is also clear from the fact that \(\mathscr {A}_{81}\) contains zero divisors.

Proposition 4.24

A function \(y:\ U\subseteq \mathscr {A}_{81}\rightarrow \mathscr {A}_{81}\) of class \(\mathcal {C}^2\) of the form

$$\begin{aligned} y(x)=y(z,w)= F(z,w)+G(z,w)\omega , \end{aligned}$$

is monogenic if and only if F(z, w) is holomorphic in z, G is harmonic in the components of z and \(\partial _{\bar{w}}\overline{F(z,w)}+\partial _{\bar{z}}G(z,w)=0\).

Proof

Functions monogenic in \(\mathscr {A}_{81}\) can be written as the sum of two complex valued functions F, G in the form \(F(z,w)+\omega G(z,w)\). The system (15) is satisfied if and only if F is holomorphic in z and F, G satisfy \(\partial _{\bar{w}}\overline{F}+\partial _{\bar{z}}G=0\). Since F, G are assumed to be of class \(\mathcal {C}^2\) we deduce that

$$\begin{aligned} \partial _{{z}}\partial _{\bar{w}}\overline{F}+\Delta _z G=0 \end{aligned}$$

that is

$$\begin{aligned} \overline{\partial _{{w}}\partial _{\bar{z}}F}+\Delta _z G=0 \end{aligned}$$

so that, F being holomorphic in z, \(\Delta _{{z}}G=0\). \(\square \)
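A concrete instance of this characterization can be checked against the system (15) (a sketch assuming SymPy; the pair \(F=zw\), \(G=-\bar{z}^2/2\) is our choice):

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', real=True)
z, w = x1 + sp.I*x2, x3 + sp.I*x4

F = z*w                        # holomorphic in z
G = -sp.conjugate(z)**2/2      # harmonic in x1, x2

y1, y2 = sp.re(sp.expand(F)), sp.im(sp.expand(F))
y3, y4 = sp.re(sp.expand(G)), sp.im(sp.expand(G))

D = sp.diff
rows = [D(y1,x1)-D(y2,x2),                          # row 1 of (15)
        D(y1,x2)+D(y2,x1),                          # row 2
        D(y1,x3)+D(y2,x4)+D(y3,x1)-D(y4,x2),        # row 3
        D(y1,x4)-D(y2,x3)+D(y3,x2)+D(y4,x1)]        # row 4
print([sp.simplify(r) for r in rows])               # [0, 0, 0, 0]
```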

4.3 Algebra of Bicomplex Numbers

We conclude with a remark that we hope may spur additional work. A very well developed analysis in the algebra of bicomplex numbers is the subject of [2]; a more modern approach that uses the methods of algebraic analysis has also received consideration, and we refer the reader to [10, 11] and the references therein for further information.

As is well known, the algebra \(\mathbb {BC}\) of bicomplex numbers is constructed over the basis \(1,\, i,\, j,\, k\) such that \(i^2=j^2=-1\), \(ij=ji=k\), and so it is a commutative algebra. A bicomplex number is then written as \(x=x_1+x_2i+x_3j+x_4k\).

In this framework the notion of total derivability (without distinguishing left or right since the product is commutative) is the one of interest and it is commonly known in the literature as bicomplex holomorphy. If we rewrite x as \(x=z+jw\) with \(z=x_1+x_2i\), \(w=x_3+x_4i\), we can define three conjugates in \(\mathbb {BC}\):

$$\begin{aligned} x^*&=\overline{z}-\mathbf{j}\overline{w},\\ \tilde{x}&=\overline{z}+\mathbf{j}\overline{w},\\ x^{\dagger }&=z-\mathbf{j}w. \end{aligned}$$

This notation is needed to state the following well-known result:

Theorem 4.25

Let \(U\subseteq \mathbb {BC}\) be an open set and \(F:U\rightarrow \mathbb {BC}\) such that \(F=u+\mathbf{j}v\in C^1(U)\). Then F is bicomplex hyperholomorphic if and only if F satisfies the following three systems of differential equations:

$$\begin{aligned} \frac{\partial F}{\partial x^*}= \frac{\partial F}{\partial x^{\dagger }}= \frac{\partial F}{\partial \tilde{x}}=0. \end{aligned}$$
(16)

What this theorem shows is that even in one variable, the condition of hyperholomorphicity is overdetermined (in other words, unlike what happens in \({\mathbb {C}}\) or in \({\mathbb {H}}\), the condition of holomorphicity is expressed in terms of several differential operators). This has an immediate algebraic consequence, as it can be shown that, in many respects, the theory of hyperholomorphic functions on bicomplex numbers behaves more like the theory of several complex or quaternionic variables. Most of the work that the authors have done in the area of bicomplex numbers takes advantage of this peculiarity. This raises the question of whether it is possible to envision a holomorphicity theory for bicomplex numbers that more closely resembles single variable theories. It is thus interesting to notice that nobody has paid attention, so far, to the notion of monogenicity for bicomplex numbers, which can be easily written, in the variables \(x_1,\ldots , x_4\), as:

$$\begin{aligned} \begin{aligned}&\frac{\partial y_1 }{\partial x_1 }-\frac{\partial y_2 }{\partial x_2 }-\frac{\partial y_3 }{\partial x_3 }+\frac{\partial y_4 }{\partial x_4 }=0\\&\frac{\partial y_1 }{\partial x_2 }+\frac{\partial y_2 }{\partial x_1 }-\frac{\partial y_3 }{\partial x_4 }-\frac{\partial y_4 }{\partial x_3 }=0\\&\frac{\partial y_1 }{\partial x_3 }-\frac{\partial y_2 }{\partial x_4 }+\frac{\partial y_3 }{\partial x_1 }-\frac{\partial y_4 }{\partial x_2 }=0\\&\frac{\partial y_1 }{\partial x_4 }+\frac{\partial y_2 }{\partial x_3 }+\frac{\partial y_3 }{\partial x_2 } + \frac{\partial y_4 }{\partial x_1 }=0. \end{aligned} \end{aligned}$$
(17)

This latter condition does not seem to have been studied in the literature; it corresponds to the single condition \(\frac{\partial F}{\partial x^*}= 0\), and is therefore not overdetermined anymore. Evidently, if the function y(x) is totally derivable, then it is also monogenic, but the converse is not true. It is a curiosity to observe that the system (17) differs from the Cauchy–Fueter system only by a sign, and yet clearly yields a very different space. It will be of interest to develop a theory for this wider class of functions.
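As a quick sanity check (a sketch assuming SymPy; the test function is our choice), the square of the bicomplex variable, which is totally derivable and hence monogenic, satisfies all four equations of (17).

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', real=True)

# components of y = x^2 for x = x1 + x2 i + x3 j + x4 k
# (i^2 = j^2 = -1, k = ij = ji, k^2 = 1)
y1 = x1**2 - x2**2 - x3**2 + x4**2
y2 = 2*(x1*x2 - x3*x4)
y3 = 2*(x1*x3 - x2*x4)
y4 = 2*(x1*x4 + x2*x3)

D = sp.diff
eqs = [D(y1,x1) - D(y2,x2) - D(y3,x3) + D(y4,x4),
       D(y1,x2) + D(y2,x1) - D(y3,x4) - D(y4,x3),
       D(y1,x3) - D(y2,x4) + D(y3,x1) - D(y4,x2),
       D(y1,x4) + D(y2,x3) + D(y3,x2) + D(y4,x1)]
print([sp.simplify(e) for e in eqs])   # [0, 0, 0, 0]
```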