Introduction

The main purpose of the present work is to provide an explicit description of the general solution to a linear system of moment differential equations of the form

$$\begin{aligned} \partial _{m}y(z)=Ay(z), \end{aligned}$$
(1)

where \(A\in {\mathbb {C}}^{n\times n}\) is a constant matrix and \(y(z)=(y_1(z),\ldots ,y_n(z))\) is a vector of unknown functions, for some positive integer \(n\ge 1\). Here, \(m=(m(p))_{p\ge 0}\) is a fixed sequence of moments (see Definition 2), and \(\partial _my(z)=(\partial _m(y_1),\ldots ,\partial _m(y_n))\) stands for the moment derivative of the vector y(z) (see (9)), generalizing the classical derivative operator.

The study of moment differential equations was initiated by W. Balser and M. Yoshino in their seminal work [3], and has attracted the attention of many researchers in recent years due to the versatility of moment differentiation. Recently, several summability results on the formal solutions to moment partial differential equations have been obtained by Michalik [24] and by Michalik et al. [15], and also in the final section of the chapter [28], by Sanz. Also, some advances have been made in the study of moment integro-differential equations [17]. The knowledge of the growth of the coefficients of the formal solutions to moment partial differential equations is also of great interest as a first step towards summability results [16, 30], and also towards the study of the Stokes phenomenon [26].

We also refer to the work [14], where Malek et al. determine summability properties of the solutions of a family of singularly perturbed partial differential equations in the complex domain whose coefficients are \({\mathbb {M}}\)-sums, for some strongly regular sequence \({\mathbb {M}}\), and also the work [9] on multisummability results in Carleman ultraholomorphic classes by means of nonzero proximate orders.

As a matter of fact, the versatility of moment differentiation is such that it reduces to classical differentiation for the moment sequence \(m=(p!)_{p\ge 0}\). Moreover, the operator \(\partial _{m}\) is closely related to the Caputo 1/s-fractional differential operator when fixing the sequence of moments \(m=(\Gamma (1+\frac{p}{s}))_{p\ge 0}\). The theory regarding this concrete choice of the moment sequence is currently being developed due to its applications in many fields of research. Stability properties of systems of fractional differential equations have also been widely studied in the literature. In 1996, Matignon [22] described the stability properties of systems of Caputo fractional differential equations of the form

$$\begin{aligned} {}^C D^{1/s}_{z}y(z)=Ay(z), \end{aligned}$$
(2)

where \({}^C D^{1/s}_{z}\) stands for the Caputo fractional derivative of order 1/s. Such properties are given in terms of the eigenvalues of the matrix A of the system, and also of s. The author determines that the solutions of the previous system are asymptotically stable if the eigenvalues \(\lambda \) of A satisfy \(|\arg (\lambda )|>\frac{\pi }{2s}\). This result is consistent with our asymptotic study of the solutions in the fifth section. This asymptotic behavior of the solutions is due to the appearance of the kernel functions associated with summability, in terms of which the analytic solutions of the system of moment differential equations are constructed. More precisely, we prove (see Theorem 1 and Theorem 2) that \(E(\lambda z)v\) is a solution of (1), where \(\lambda \in \hbox {spec}(A)\) and \(v\in \hbox {ker}(A-\lambda I_n)\) is an associated eigenvector. Here E is an entire function, converging to zero in the sector \(\{z\in {\mathbb {C}}:|\arg (z)|>\pi \omega ({\mathbb {M}})/2\}\), for some positive number \(\omega ({\mathbb {M}})\) related to the moment sequence m (see Definition 2). We observe that both sectors coincide in the framework of Caputo fractional differential operators.

In practice, the linearized problem (2), which is much easier to handle, is usually considered in order to study the solutions of a nonlinear system. This is the case in [32], where the authors study the associated linearized system of the form (2) for a fractional model of the cancer-immune system, obtaining the asymptotic stability of the solutions from the Jacobian matrix of the initial nonlinear system. We also refer to [1], where the authors study stability properties of Caputo fractional differential equations under the action of impulses. Other recent results and applications concerning linear and semilinear systems of fractional differential equations with constant coefficients are also studied in [21, 23]. We also refer to the work by Bonilla et al. [4], where the authors study systems of linear fractional differential equations with constant coefficients, and the statements of Theorem 1 and Theorem 2 are displayed in terms of the exponential matrix in this framework.

Given a strongly regular sequence \({\mathbb {M}}\) admitting a kernel function E for \({\mathbb {M}}\)-summability (see Definition 1 and Definition 2), the properties of the entire function E allow us to construct the general solution of (1) in Theorem 1 (for a diagonalizable matrix A) and Theorem 2 (in the general setting). This general solution, depending on n arbitrary constants, is obtained by appropriate manipulation of E, the eigenvalues and generalized eigenvectors of A, together with the Jordan canonical form associated to A. Once the explicit entire solutions of the system (1) are obtained, we study their asymptotic behavior at infinity with the help of the classical theory of growth of entire functions. This leads us to the growth of the entire solutions of (1) at infinity, both in a global sense (Theorem 5) and along rays approaching infinity (Theorem 7). The main novelty of the present work is twofold. On the one hand, a closed explicit expression for the entire solutions to linear systems of moment differential equations of the form (1) is obtained (Theorem 1 and Theorem 2) in the general framework of moments related to strongly regular sequences \({\mathbb {M}}\) admitting kernel functions for \({\mathbb {M}}\)-summability (see “Strongly Regular Sequences and Generalized Summability” section). On the other hand, we also determine the order and type of growth associated to such entire solutions at infinity, both globally (Theorem 5) and along rays approaching infinity (Theorem 7).

The paper is organized as follows: after fixing some notation (“Notation” section), we recall the main facts on strongly regular sequences and generalized summability (“Strongly Regular Sequences and Generalized Summability” section). The general explicit entire solutions to (1) are constructed in the “Entire Solutions to a Linear System of Moment Differential Equations” section, and their asymptotic behavior at infinity is obtained in the “Asymptotic Study of the Solutions” section, with special attention to the radial growth at infinity of the entire solutions (“On the Radial Growth of the Solutions” section).

Notation

\({\mathcal {R}}\) denotes the Riemann surface of the logarithm. Let \(d\in {\mathbb {R}}\), and \(\theta >0\). We write \(S_d(\theta )\) for the set

$$\begin{aligned} S_d(\theta ):=\left\{ z\in {\mathcal {R}}:|\arg (z)-d|<\frac{\theta }{2}\right\} . \end{aligned}$$

\(I_n\) stands for the identity operator in \({\mathbb {C}}^{n\times n}\) for some \(n\in {\mathbb {N}}:=\{1,2,\ldots \}\). Given a set C, we write \(\#(C)\) for the cardinal number of C.

\(D(z,r)\) stands for the open disc centered at \(z\in {\mathbb {C}}\) with radius \(r>0\).

\({\mathbb {C}}[[z]]\) denotes the set of formal power series in z with complex coefficients. Given an open set \(U\subseteq {\mathbb {C}}\), we write \({\mathcal {O}}(U)\) for the set of holomorphic functions on U.

Strongly Regular Sequences and Generalized Summability

This section is devoted to recalling the main facts about the tools from the theory of generalized summability which are involved in the results of the present work. We mainly focus on the definition and main properties of a strongly regular sequence \({\mathbb {M}}\) and the concept of kernel functions for \({\mathbb {M}}\)-summability, leading to the notion of a sequence of moments, and on other tools associated to generalized summability and generalized differential operators. The main problem under study will be written in terms of these tools and notions.

Strongly Regular Sequences

The notion of strongly regular sequence is due to Thilliez [31], generalizing Gevrey sequences.

Definition 1

Let \({\mathbb {M}}:=(M_p)_{p\ge 0}\) be a sequence of positive real numbers with \(M_0=1\) satisfying the following properties:

  1. (lc)

    \({\mathbb {M}}\) is logarithmically convex, i.e. \(M_{p}^2\le M_{p-1}M_{p+1}\) for every \(p\ge 1\).

  2. (mg)

    \({\mathbb {M}}\) is of moderate growth, i.e. there exists \(A_1>0\) with \(M_{p+q}\le A_1^{p+q}M_pM_q\), for every pair of integers \(p,q\ge 0\).

  3. (snq)

    \({\mathbb {M}}\) is non-quasianalytic, i.e. there exists \(A_2>0\) such that

    $$\begin{aligned} \sum _{q\ge p}\frac{M_q}{(q+1)M_{q+1}}\le A_2\frac{M_p}{M_{p+1}} \end{aligned}$$

    for every \(p\ge 0\).

Any sequence \({\mathbb {M}}\) satisfying the previous properties is known as a strongly regular sequence.

Examples of strongly regular sequences can be found in the literature in many applications. For example, Gevrey sequences, defined by \({\mathbb {M}}_{\alpha }:=(p!^{\alpha })_{p\ge 0}\) for some \(\alpha >0\), are of great importance in the theory of summability of formal solutions to ordinary and partial differential equations (see [2], and the references therein). Given \(\alpha >0\) and \(\beta \in {\mathbb {R}}\), the sequence \({\mathbb {M}}_{\alpha ,\beta }:=(p!^{\alpha }\prod _{m=0}^{p}\log ^{\beta }(e+m))_{p\ge 0}\) is also a strongly regular sequence, the first terms being slightly modified if \(\beta <0\). This modification does not interfere with the spaces of functions related to that sequence. As a matter of fact, the particular case \(\alpha =-\beta =1\) is linked to the \(1+\) level in the study of difference equations [7, 8].

Associated to a strongly regular sequence \({\mathbb {M}}\) one can define the non-decreasing continuous function \(M:[0,\infty )\rightarrow [0,\infty )\) by \(M(0)=0\) and

$$\begin{aligned} M(t):=\sup _{p\ge 0}\log \left( \frac{t^p}{M_p}\right) ,\quad t>0 \end{aligned}$$
(3)

(see [20]). In the work [27], Sanz proves that the order of M, previously used in the theory of meromorphic functions in [5], and defined by

$$\begin{aligned} \rho (M):=\limsup _{r\rightarrow \infty }\max \left\{ 0,\frac{\log (M(r))}{\log (r)}\right\} , \end{aligned}$$

turns out to be a positive real number which provides the inverse of the limit opening of injectivity of the asymptotic Borel map. Namely, the existence of nonzero functions asymptotically flat at the vertex of a sectorial region is guaranteed whenever the opening of the sectorial region is at most \(\pi \omega ({\mathbb {M}})\), with \(\omega ({\mathbb {M}}):=1/\rho (M)\) (see Corollary 3.16 in [11]).
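As an illustration of (3) and of the order \(\rho (M)\), the following sketch (a crude numerical check, not part of the theory above; the choice of the Gevrey sequence \({\mathbb {M}}_{\alpha }\), the truncation bound and the sample radii are assumptions made for the example) evaluates M(t) for \(M_p=(p!)^{\alpha }\) and computes \(\log (M(r))/\log (r)\), which should slowly approach \(\rho (M)=1/\alpha \), consistently with \(\omega ({\mathbb {M}}_{\alpha })=\alpha \).

```python
from math import lgamma, log

def M(t, alpha=2.0, pmax=5000):
    # M(t) = sup_{p >= 0} log(t^p / M_p) for the Gevrey sequence M_p = (p!)^alpha, cf. (3)
    return max(p * log(t) - alpha * lgamma(p + 1) for p in range(pmax))

# log(M(r)) / log(r) should tend (slowly) to rho(M) = 1/alpha as r -> infinity
for r in (1e2, 1e4, 1e6):
    print(r, log(M(r)) / log(r))
```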

Generalized Summability

The following definitions and properties correspond to the theory of generalized summability associated to a strongly regular sequence, extending the classical notion of Gevrey summability and the moment summability methods developed by Balser, which can be found in [2], Sect. 5.5. The generalization of that theory to strongly regular sequences was put forward by Sanz in [27]. We recall in this section some of the main concepts, for the sake of completeness, which will be used in the sequel.

Definition 2

Let \({\mathbb {M}}\) be a strongly regular sequence, with \(0<\omega ({\mathbb {M}})<2\), and define the function M according to (3). We say that \({\mathbb {M}}\) admits a pair of kernel functions for \({\mathbb {M}}\)-summability if there exist two functions e and E satisfying the following conditions:

  1. 1.

    e is holomorphic on the sector \(S_0(\omega ({\mathbb {M}})\pi )\), and e(z)/z is locally uniformly integrable at the origin, meaning that there exists \(t_0>0\) such that for every \(z_0\in S_0(\omega ({\mathbb {M}})\pi )\) there exists \(r_0=r_0(z_0)>0\) with \(D(z_0,r_0)\subseteq S_0(\omega ({\mathbb {M}})\pi )\) and

    $$\begin{aligned} \int \limits _0^{t_0}\sup _{z\in D(z_0,r_0)}|e(t/z)|\frac{dt}{t}<\infty . \end{aligned}$$

    In addition to this, for every fixed \(\delta >0\), there exist \(k_1,k_2>0\) with

    $$\begin{aligned} |e(z)|\le k_1\exp \left( -M\left( \frac{|z|}{k_2}\right) \right) , \qquad z\in S_0(\omega ({\mathbb {M}})\pi -\delta ). \end{aligned}$$
  2. 2.

    \(e(|z|)\in {\mathbb {R}}_{+}\) for every \(z\in S_0(\omega ({\mathbb {M}})\pi )\).

  3. 3.

    E is an entire function, and there exist \(k_3,k_4>0\) with

    $$\begin{aligned} |E(z)|\le k_3\exp \left( M\left( \frac{|z|}{k_4}\right) \right) ,\quad z\in {\mathbb {C}}. \end{aligned}$$

    Moreover, there exists \(\beta >0\) such that for every \(0<\theta <2\pi -\omega ({\mathbb {M}})\pi \) and \(R>0\), there exists \(k_5>0\) with

    $$\begin{aligned} |E(z)|\le k_5|z|^{-\beta },\quad z\in S_{\pi }(\theta )\hbox { with }|z|\ge R. \end{aligned}$$
    (4)
  4. 4.

    E admits the representation

    $$\begin{aligned} E(z)=\sum _{p\ge 0}\frac{z^p}{m(p)},\quad z\in {\mathbb {C}}, \end{aligned}$$
    (5)

    where m is the moment function associated with e

    $$\begin{aligned} m(z):=\int \limits _0^{\infty }t^{z-1}e(t)dt, \end{aligned}$$
    (6)

    for every \(z\in \{\omega \in {\mathbb {C}}:\hbox {Re}(\omega )\ge 0\}\). The function m turns out to be continuous in \(\{z\in {\mathbb {C}}:\hbox {Re}(z)\ge 0\}\) and holomorphic in \(\{z\in {\mathbb {C}}:\hbox {Re}(z)> 0\}\). The sequence \((m(p))_{p\ge 0}\) is known as the sequence of moments associated with the pair of kernel functions.

The existence of a pair of kernel functions for \({\mathbb {M}}\)-summability is not always guaranteed. However, it is guaranteed when departing from a strongly regular sequence admitting a nonzero proximate order in the sense of Lindelöf (see [10, 13]), which is the case for the sequences appearing in applications, as stated in [27]. We also refer to [10], where the authors determine conditions on a strongly regular sequence to admit a nonzero proximate order. In particular, every sequence of the form \({\mathbb {M}}_{\alpha ,\beta }\) for \(\alpha >0\) and \(\beta \in {\mathbb {R}}\) as defined above, and also every Gevrey sequence \({\mathbb {M}}_{\alpha }\) with \(\alpha >0\), admits a nonzero proximate order (see Theorem 3.6 and Example 3.7 in [10]). In addition to this, \(\omega ({\mathbb {M}}_{\alpha ,\beta })=\alpha \) and \(\omega ({\mathbb {M}}_{\alpha })=\alpha \).

Definition 3

A nonzero proximate order \(\rho (t)\) is a nonnegative continuously differentiable function defined in an interval of the form \((c,\infty )\) for some \(c\in {\mathbb {R}}\) such that

$$\begin{aligned} \lim _{r\rightarrow \infty }\rho (r)=\rho , \end{aligned}$$
(7)

for some \(\rho \in {\mathbb {R}}\), and

$$\begin{aligned} \lim _{r\rightarrow \infty }r\rho '(r)\ln (r)=0. \end{aligned}$$
(8)

In the case of \({\mathbb {M}}_{\alpha ,\beta }\) and \({\mathbb {M}}_{\alpha }\) with \(\alpha >0\) and \(\beta \in {\mathbb {R}}\), one has \(\rho =1/\alpha \).

Definition 4

(Definition 4.1, [10]) Let \({\mathbb {M}}=(M_p)_{p\ge 0}\) be a (lc) sequence under the assumption that \(\lim _{p\rightarrow \infty }\frac{M_{p+1}}{M_p}=\infty \). We say \({\mathbb {M}}\) admits a proximate order if there exists a proximate order \(\rho (t)\) and constants A and B such that

$$\begin{aligned} A\le \log (t)(\rho (t)-d_{{\mathbb {M}}}(t))\le B, \end{aligned}$$

for t large enough, where \(d_{{\mathbb {M}}}\) is defined by

$$\begin{aligned} d_{{\mathbb {M}}}(t):=\frac{\log (M(t))}{\log (t)}, \end{aligned}$$

for t large enough.

We refer to [10], Sect. 4, for a deeper study of sequences admitting a nonzero proximate order.

The following result can be found in [27], Proposition 5.8.

Proposition 1

Let \({\mathbb {M}}=(M_p)_{p\ge 0}\) be a strongly regular sequence. Assume that \({\mathbb {M}}\) admits a pair of kernel functions for \({\mathbb {M}}\)-summability, and let \(m=(m(p))_{p\ge 0}\) be the associated moment sequence. Then, \({\mathbb {M}}\) and m are equivalent sequences, i.e. there exist \(c_1,c_2>0\) such that

$$\begin{aligned} c_1^pM_p\le m(p)\le c_2^pM_p,\qquad p\ge 0. \end{aligned}$$

Given a sequence of positive real numbers, or in practice a sequence of moments \(m=(m(p))_{p\ge 0}\), Balser and Yoshino [3] defined the moment derivative operator \(\partial _{m,z}:{\mathbb {C}}[[z]]\rightarrow {\mathbb {C}}[[z]]\) by

$$\begin{aligned} \partial _{m,z}\left( \sum _{p\ge 0}\frac{a_p}{m(p)}z^p\right) =\sum _{p\ge 0}\frac{a_{p+1}}{m(p)}z^p. \end{aligned}$$
(9)

Its definition can be naturally extended to any holomorphic function defined on some neighborhood of the origin by considering the Taylor expansion of the function. This formal operator coincides with the usual derivative for the sequence \(m=(p!)_{p\ge 0}\), and it is closely related to the Caputo 1/k-fractional differential operator \({}^C D^{1/k}_{z}\) when considering the sequence of moments \(m=(\Gamma (1+\frac{p}{k}))_{p\ge 0}\). Indeed, for the previous moment sequence one has that

$$\begin{aligned} ( \partial _{m,z}f)(z^{1/k})={}^C D^{1/k}_{z}(f(z^{1/k})) \end{aligned}$$
(10)

for every \(f\in {\mathbb {C}}[[z]]\). We refer to [25] (see Definition 5 and Remark 1) for further details. In addition to this, the definition of moment differential operators can be extended not only to holomorphic functions near the origin, but also to summable functions. More precisely, given a strongly regular sequence \({\mathbb {M}}\) admitting a nonzero proximate order and a function f defined on a finite sector S of opening larger than \(\pi \omega ({\mathbb {M}})\) and bisecting direction \(d\in {\mathbb {R}}\), which admits the formal power series \({\hat{f}}\in {\mathbb {C}}[[z]]\) as its \({\mathbb {M}}\)-asymptotic expansion in S (observe from Corollary 4.12 in [27] that this function is unique under this property), one can define \(\partial _{m,z}f\) as the unique function admitting \(\partial _{m,z}({\hat{f}})\) as its \({\mathbb {M}}\)-asymptotic expansion in a finite sector of opening larger than \(\pi \omega ({\mathbb {M}})\) and bisecting direction \(d\in {\mathbb {R}}\). We refer to [15] for a more detailed description of this result.
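As a concrete illustration of (9), the following sketch (an illustrative example only; the moment sequence \(m(p)=\Gamma (1+p/k)\), the truncation and the helper names are choices made here, not taken from the text) applies the moment derivative to a truncated power series; for \(k=1\), i.e. \(m=(p!)_{p\ge 0}\), it reduces to the usual derivative.

```python
from math import gamma

def m(p, k=2):
    # moment sequence m(p) = Gamma(1 + p/k); k = 1 recovers m(p) = p! (classical derivative)
    return gamma(1 + p / k)

def moment_derivative(taylor, k=2):
    """Moment derivative (9) acting on a truncated Taylor series f(z) = sum_p c_p z^p.
    Writing c_p = a_p / m(p), the image has Taylor coefficients a_{p+1} / m(p)."""
    a = [c * m(p, k) for p, c in enumerate(taylor)]            # a_p = c_p * m(p)
    return [a[p + 1] / m(p, k) for p in range(len(a) - 1)]     # shift and renormalize

# sanity check with k = 1: the moment derivative is the usual derivative
f = [1.0, 2.0, 3.0, 4.0]             # 1 + 2z + 3z^2 + 4z^3
print(moment_derivative(f, k=1))     # [2.0, 6.0, 12.0], i.e. 2 + 6z + 12z^2
```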

Entire Solutions to a Linear System of Moment Differential Equations

In this section, we state the first main result of the present work. Namely, we give the precise analytic expression of the solutions to a certain linear system of moment differential equations. The asymptotic behavior of such solutions at infinity will be analyzed in a subsequent section.

Let \({\mathbb {M}}\) be a strongly regular sequence admitting a nonzero proximate order. This entails the existence of an associated pair of kernel functions for \({\mathbb {M}}\)-summability, say e and E, satisfying the properties enumerated in Definition 2. Let \(m=(m(p))_{p\ge 0}\) be the sequence of moments associated to the previous kernel functions. Let \(n\ge 1\), and let \(A\in {\mathbb {C}}^{n\times n}\) be an \(n\times n\) matrix with complex coefficients. We consider the moment differential equation

$$\begin{aligned} \partial _my=Ay, \end{aligned}$$
(11)

where \(y=(y_1,\ldots ,y_n)^{T}\) is a vector of unknown functions, with \(y_j=y_j(z)\) for \(1\le j\le n\), and \(\partial _{m}y\) stands for the vector \((\partial _{m}y_1,\ldots , \partial _{m}y_n)^{T}\).

Lemma 1

Let \((z_0,y_0)\in {\mathbb {C}}^{1+n}\). The Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{lcc} \partial _my&{}=Ay\\ y(z_0)&{}=y_0 \end{array} \right. \end{aligned}$$
(12)

admits a unique solution which is an entire function.

Proof

One can set \(z_0=0\) without loss of generality. Indeed, if y is a solution of the equation with Cauchy data located at the origin, then \(y(z-z_0)\) is a solution of the same equation with its Cauchy data located at \(z=z_0\).

We first show that (12) admits a unique formal power series solution. Let us write \(A=(a_{jk})_{1\le j,k\le n}\), and assume that \(y(z)=\sum _{p\ge 0}\tilde{y}_p\frac{z^p}{m(p)}\in {\mathbb {C}}^{n}[[z]]\) is the formal solution of (12). We write \(\tilde{y}_p=(\tilde{y}_{p,1},\ldots ,\tilde{y}_{p,n})\) for every \(p\ge 0\). Then, one has that \(\tilde{y}_{p+1,j}=\sum _{k=1}^{n}a_{jk}\tilde{y}_{p,k}\) for every \(1\le j\le n\) and \(p\ge 0\). Let \(\alpha :=\max _{1\le j,k\le n}|a_{jk}|\), and put \(\left\| \tilde{y}_p\right\| :=\sum _{j=1}^{n}|\tilde{y}_{p,j}|\) for every \(p\ge 0\). For all \(p\ge 0\) one has

$$\begin{aligned} \left\| \tilde{y}_{p+1}\right\| =\sum _{j=1}^{n}|\tilde{y}_{p+1,j}|\le \sum _{j=1}^{n}\sum _{k=1}^{n}|a_{jk}||\tilde{y}_{p,k}|\le \alpha n\left\| \tilde{y}_{p}\right\| . \end{aligned}$$

We define the sequence \((c_p)_{p\ge 0}\) by \(c_0:=\left\| \tilde{y}_0\right\| \), and for every \(p\ge 0\) we put \(c_{p+1}:= \alpha n c_p\). This entails that \(c_p=(\alpha n)^{p}c_0\) for every \(p\ge 0\). In addition to this, it is straightforward to check that \(\left\| \tilde{y}_p\right\| \le c_p\) for every \(p\ge 0\). Therefore, one has that \(|\tilde{y}_{p,j}|\le (\alpha n)^{p}\left\| \tilde{y}_0\right\| \) for all \(1\le j\le n\) and \(p\ge 0\). The series \(\sum _{p\ge 0}(\alpha n |z|)^p\frac{1}{m(p)}\) has infinite radius of convergence. Indeed, it coincides with \(E(\alpha n|z|)\) (see (5)). Therefore, the formal power series y(z) represents an entire function. \(\square \)
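The recursion \(\tilde{y}_{p+1}=A\tilde{y}_p\) appearing in the previous proof can be turned directly into a numerical scheme for the truncated formal solution. The following sketch is only an illustration under assumed choices (the moment sequence \(m(p)=\Gamma (1+p/k)\), the truncation order N and the sample matrix are not taken from the text).

```python
import numpy as np
from math import gamma

def formal_solution_coeffs(A, y0, N):
    """Coefficients y_p of the formal solution y(z) = sum_p y_p z^p / m(p) of (12) with
    y(0) = y0, obtained from the recursion y_{p+1} = A y_p (proof of Lemma 1)."""
    coeffs = [np.asarray(y0, dtype=complex)]
    for _ in range(N):
        coeffs.append(A @ coeffs[-1])
    return coeffs

def evaluate(coeffs, z, k=2):
    # assumed moment sequence m(p) = Gamma(1 + p/k)
    return sum(c * z**p / gamma(1 + p / k) for p, c in enumerate(coeffs))

A = np.array([[0.0, 1.0], [1.0, 0.0]])
print(evaluate(formal_solution_coeffs(A, [1.0, 0.0], N=40), z=0.3))
```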

The proof of the following result follows arguments analogous to those used for the usual derivative. We only sketch it for the sake of completeness.

Lemma 2

The set of solutions to (11) is a subspace of \(({\mathcal {O}}({\mathbb {C}}))^{n}\) of dimension n.

Proof

Let y be a solution to (11), defined at some \(z_{0}\in {\mathbb {C}}\). In view of Lemma 1, one derives that y belongs to \(({\mathcal {O}}({\mathbb {C}}))^n\). It is straightforward to check that any linear combination of two solutions to (11) is also a solution to (11). Let \(\{\omega _1,\ldots ,\omega _n\}\) be a basis of \({\mathbb {C}}^{n}\) and consider the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{lcc} \partial _my&{}=Ay\\ y(z_0)&{}=\omega _j \end{array} \right. \end{aligned}$$
(13)

for every \(1\le j\le n\). Lemma 1 guarantees the existence of a unique entire solution to (13), say \(y_j\), for each \(1\le j\le n\). The set \(\{y_1,\ldots ,y_n\}\) determines a basis of the vector space of solutions to (11). \(\square \)

The properties of the kernel functions described in Definition 2 allow us to obtain an explicit formula for the general solution of (11). First, we find explicit particular solutions to (11).

Lemma 3

Let \(\lambda \in {\mathbb {C}}\) be an eigenvalue of \(A\in {\mathbb {C}}^{n\times n}\) with associated eigenvector \(v\in {\mathbb {C}}^n\). The following properties hold:

  • If \(\lambda \ne 0\), then the function \(y(z)=E(\lambda z)v\) is an entire solution of (11).

  • If \(\lambda =0\), then the constant function \(y(z)=v\) is a solution of (11).

Proof

First, assume that \(\lambda \ne 0\). The holomorphy properties of the kernel function E guarantee that \(y(z)=E(\lambda z)v\) belongs to \(({\mathcal {O}}({\mathbb {C}}))^n\). It is straightforward to check that \(\partial _m(E(\lambda z))=\lambda E(\lambda z)\). Indeed,

$$\begin{aligned} \partial _m(E(\lambda z))=\partial _{m}\left( \sum _{p\ge 0}\frac{(\lambda z)^p}{m(p)}\right) =\sum _{p \ge 0}\frac{\lambda ^{p+1}z^p}{m(p)}=\lambda E(\lambda z). \end{aligned}$$
(14)

In view of (14) one has that

$$\begin{aligned} A y(z)=A(E(\lambda z) v)=E(\lambda z) (A v)=E(\lambda z)\lambda v=\partial _m(E(\lambda z))v=\partial _m(E(\lambda z)v)=\partial _m(y(z)). \end{aligned}$$

This concludes the first part of the proof. Now, assume that \(\lambda =0\). Then, one has that

$$\begin{aligned} Av=0=\partial _{m}(v). \end{aligned}$$

Therefore, the constant vector \(y(z)=v\) provides a solution of (11). \(\square \)

Theorem 1

Let \(A\in {\mathbb {C}}^{n\times n}\) be a diagonalizable matrix. Let \(\{\lambda _j\}_{1\le j\le k}\), for some \(1\le k\le n\), be the set of eigenvalues of A, and let \(\{v_{j,1},\ldots , v_{j,\ell _j}\}\) be a basis of \(\hbox {Ker}(A-\lambda _jI_n)\) for every \(1\le j\le k\), and some \(\ell _j\ge 1\). Then, the general solution of (11) is given by

$$\begin{aligned} y(z)=\sum _{j=1}^{k}\sum _{p=1}^{\ell _j}C_{j,p}E(\lambda _j z)v_{j,p}, \end{aligned}$$
(15)

with \(C_{j,p}\) being arbitrary constants.

Proof

In view of Lemma 3, one obtains that for every \(1\le j\le k\) and all \(1\le p\le \ell _j\) the function \(y_{j,p}(z)=E(\lambda _j z)v_{j,p}\) is an entire solution of (11). The fact that A is diagonalizable guarantees that \(\#\{v_{j,p}:1\le j\le k, 1\le p\le \ell _j\}=n\). It only remains to prove that the set \(\{E(\lambda _j z)v_{j,p}: 1\le j\le k,1\le p\le \ell _j\}\) is linearly independent. This last statement is straightforward to check, taking into account that \(\{v_{j,p}:1\le j\le k, 1\le p\le \ell _j\}\) is a basis of \({\mathbb {C}}^n\). \(\square \)

Remark 1

Observe that the terms of the sum in (15) which correspond to the eigenvalue \(\lambda =0\) are such that \(E(\lambda z)\) is a constant function. This will have consequences regarding the asymptotic behavior of the solution at infinity.
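As a simple numerical companion to Theorem 1, the sketch below (an illustration under assumptions: the kernel E is replaced by a truncated series for the moment sequence \(m(p)=\Gamma (1+p/k)\), and numpy's eigendecomposition is used; none of these choices come from the statement above) assembles the combination (15) for a diagonalizable matrix.

```python
import numpy as np
from math import gamma

def E_trunc(z, k=2, N=80):
    # truncated kernel E(z) = sum_p z^p / m(p) with the assumed moments m(p) = Gamma(1 + p/k)
    return sum(z**p / gamma(1 + p / k) for p in range(N))

def general_solution(A, C, k=2):
    """Return z -> sum_j C_j E(lambda_j z) v_j, as in (15), for a diagonalizable matrix A.
    C collects the arbitrary constants, one per eigenvector returned by np.linalg.eig."""
    lam, V = np.linalg.eig(np.asarray(A, dtype=complex))
    return lambda z: sum(C[j] * E_trunc(lam[j] * z, k) * V[:, j] for j in range(len(lam)))

A = [[0.0, 1.0], [-2.0, -3.0]]        # eigenvalues -1 and -2
y = general_solution(A, C=[1.0, 1.0])
print(y(0.5))
```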

The general case makes use of the Jordan canonical form. For this purpose, we define the following formal power series.

Definition 5

Let \(\lambda \in {\mathbb {C}}\). For every \(h\ge 0\) we define the formal power series

$$\begin{aligned} \Delta _hE(\lambda ,z)=\sum _{p\ge h}{p\atopwithdelims ()h}\frac{\lambda ^{p-h}z^p}{m(p)}. \end{aligned}$$
(16)

Lemma 4

The following statements are direct consequences of the definition of the operator \(\Delta _{h}\):

  1. 1.

    \(\Delta _0E(\lambda , z)=E(\lambda z)\).

  2. 2.

    \(\Delta _1E(\lambda , z)=\partial _{\lambda }(E(\lambda z))\).

  3. 3.

    If \(\lambda =0\), \(\Delta _hE(\lambda , z)=z^{h}/m(h)\).

  4. 4.

    For every \(h\ge 1\) it holds that

    $$\begin{aligned} (\partial _m-\lambda )(\Delta _hE(\lambda , z))=\Delta _{h-1}E(\lambda , z). \end{aligned}$$
    (17)

Proof

We only give details of the proof of the last statement, the other ones being straightforward properties derived from the definition of the operator \(\Delta _h\). Let \(h\ge 1\). In view of (9), it holds that

$$\begin{aligned} \partial _{m}\left( \sum _{p\ge h}{p\atopwithdelims ()h}\frac{\lambda ^{p-h}z^p}{m(p)}\right) =\sum _{p\ge h}{p \atopwithdelims ()h}\frac{\lambda ^{p-h}z^{p-1}}{m(p-1)}. \end{aligned}$$
(18)

On the other hand, one has

$$\begin{aligned} \lambda \Delta _h E(\lambda , z)+\Delta _{h-1}E(\lambda , z)= & {} \lambda \sum _{p\ge h}{p\atopwithdelims ()h} \frac{\lambda ^{p-h}z^p}{m(p)}+\sum _{p\ge h-1}{p\atopwithdelims ()h-1} \frac{\lambda ^{p-(h-1)}z^p}{m(p)} \nonumber \\= & {} \sum _{p\ge h}\left[ {p\atopwithdelims ()h}+{ p \atopwithdelims ()h-1}\right] \frac{\lambda ^{p-h+1}z^p}{m(p)}+\frac{z^{h-1}}{m(h-1)} \nonumber \\= & {} \sum _{p\ge h}{p+1\atopwithdelims ()h} \frac{\lambda ^{p-h+1}z^p}{m(p)}+\frac{z^{h-1}}{m(h-1)}=\sum _{p\ge h}{p \atopwithdelims ()h}\frac{\lambda ^{p-h}z^{p-1}}{m(p-1)}, \end{aligned}$$
(19)

leading to the conclusion. \(\square \)

Lemma 5

For every \(h\ge 0\), and every \(\lambda \in {\mathbb {C}}\), the formal power series \(\Delta _hE(\lambda , z)\) is an entire function.

Proof

Taking into account statements 1. and 3. of Lemma 4, the result is clear for \(\lambda =0\) and also for \(h=0\). Assume that \(\lambda \in {\mathbb {C}}^{\star }\) and let \(h\ge 1\). The statement is clear in view of the identity

$$\begin{aligned} \Delta _hE(\lambda , z)=\left[ \frac{1}{\lambda ^{h}h!}w^{h}\left( \frac{d}{dw}\right) ^{h}E(w)\right] _{w=\lambda z}. \end{aligned}$$

\(\square \)

We now describe the general solution of (11) in the case of a non-diagonalizable matrix A. Let \(p_{A}(z)\) be the characteristic polynomial of A, i.e. the polynomial \(\hbox {det}(A-z I_n)\).

Theorem 2

Let \(\{\lambda _j\}_{1\le j\le k}\) for some \(1\le k\le n\) be the set of eigenvalues of A. Assume that \(\lambda _j\) is an eigenvalue of algebraic multiplicity \(m_j\ge 1\), for every \(1\le j\le k\). Then, the general solution of (11) can be written in the form

$$\begin{aligned} y(z)=\sum _{j=1}^{k}\sum _{p=1}^{m_j}C_{j,p}y_{j,p}(z), \end{aligned}$$
(20)

where \(C_{j,p}\) are arbitrary constants, and \(y_{j,p}\) are entire functions determined from the Jordan decomposition of A.

Proof

Let \(1\le j\le k\) and assume that the algebraic multiplicity of \(\lambda _j\), say \(m_j\), is larger than its geometric multiplicity, say \(\ell _j\ge 1\). Otherwise, one can proceed as in Theorem 1. The functions \(y_{j,1},\ldots ,y_{j,m_j}\) are constructed as follows. We write \(\lambda :=\lambda _j\) for simplicity. Let \(\{v_{j,1},\ldots ,v_{j,\ell _j}\}\) be a basis of \(\hbox {Ker}(A-\lambda _jI_n)\). For every \(1\le p\le \ell _j\) we proceed with a construction analogous to that of a classical Jordan chain associated with \(\lambda _j\) (see [29] as an example of a reference in this direction). We illustrate the procedure for the sake of completeness. Let us write \(\tilde{v}_1:=v_{j,p}\). We put

$$\begin{aligned} y_{j,1}:=E(\lambda z)\tilde{v}_1. \end{aligned}$$

We choose \(\tilde{u}_2\in {\mathbb {C}}^n\) such that

$$\begin{aligned} A\tilde{u}_2=\lambda \tilde{u}_2+\tilde{v}_1, \end{aligned}$$
(21)

i.e. \(\tilde{u}_2\in (A-\lambda I_n)^{-1}(\tilde{v}_1)\). Observe that such a vector exists because the algebraic and geometric multiplicities of \(\lambda \) do not coincide. Therefore, \(\tilde{u}_2\in \ker ((A-\lambda I_n)^2)\). We define

$$\begin{aligned} y_{j,2}:=\Delta _1E(\lambda , z)\tilde{v}_1+E(\lambda z)\tilde{u}_2. \end{aligned}$$

If the construction of the Jordan block is not yet complete, we proceed analogously by choosing \(\tilde{u}_3\in (A-\lambda I_n)^{-1}(\tilde{u}_2)\) (therefore \(\tilde{u}_3\in \ker ((A-\lambda I_n)^3)\)), and defining

$$\begin{aligned} y_{j,3}:=\Delta _2E(\lambda , z)\tilde{v}_1+\Delta _1E(\lambda , z)\tilde{u}_2+E(\lambda z)\tilde{u}_3. \end{aligned}$$

The recursion concludes with

$$\begin{aligned} y_{j,h_{p,j}}:=\Delta _{h_{p,j}-1}E(\lambda , z)\tilde{v}_1+\Delta _{h_{p,j}-2}E(\lambda , z)\tilde{u}_2+\ldots + E(\lambda z)\tilde{u}_{h_{p,j}}, \end{aligned}$$

where \(\Delta _{h}\) is the operator defined in (16), and \(h_{p,j}\ge 2\) is determined by the Jordan decomposition associated to A, and satisfying \(\sum _{j=1}^{k}\sum _{p=1}^{\ell _j}h_{p,j}=n\).

First, observe that

$$\begin{aligned} \partial _{m}(y_{j,1})=\partial _{m}(E(\lambda z))\tilde{v}_1=\lambda E(\lambda z) \tilde{v}_1= E(\lambda z)A\tilde{v}_1=Ay_{j,1}. \end{aligned}$$

Also, in view of the property (17) and (21), one has that

$$\begin{aligned} \partial _m(y_{j,2})=\partial _m(\Delta _1E(\lambda , z)\tilde{v}_1+E(\lambda z)\tilde{u}_2)=\partial _{m}(\Delta _1E(\lambda , z)\tilde{v}_1)+\partial _m(E(\lambda z)\tilde{u}_2) \nonumber \\ =\lambda \Delta _1E(\lambda , z)\tilde{v}_1+ E(\lambda z)\tilde{v}_1+\lambda E(\lambda z)\tilde{u}_2=\Delta _{1}E(\lambda , z)A\tilde{v}_1+E(\lambda z)A\tilde{u}_2=Ay_{j,2}. \end{aligned}$$
(22)

This entails that \(y_{j,2}\) solves (11). One can proceed recursively to obtain for all \(1\le q\le h_{p,j}\) that \(y_{j,q}\) is a solution of (11). Indeed, one has

$$\begin{aligned} \partial _m(y_{j,q}) & = \partial _m(\Delta _{q-1}E(\lambda , z)\tilde{v}_1+\Delta _{q-2}E(\lambda , z)\tilde{u}_2+\ldots +E(\lambda z)\tilde{u}_{q}) \nonumber \\ &= \partial _m(\Delta _{q-1}E(\lambda , z)\tilde{v}_1)+\partial _m(\Delta _{q-2}E(\lambda , z)\tilde{u}_2)+\ldots +\partial _m(E(\lambda z)\tilde{u}_{q}) \nonumber \\ &= \lambda \Delta _{q-1}E(\lambda , z)\tilde{v}_1+\Delta _{q-2}E(\lambda , z)\tilde{v}_1+ \lambda \Delta _{q-2}E(\lambda , z)\tilde{u}_2+\Delta _{q-3}E(\lambda , z)\tilde{u}_2 \nonumber \\ &\quad + \ldots +\lambda \Delta _1E(\lambda , z)\tilde{u}_{q-1}+E(\lambda z)\tilde{u}_{q-1}+\lambda E(\lambda z)\tilde{u}_q \nonumber \\ &= \lambda \Delta _{q-1}E(\lambda , z)\tilde{v}_1+\Delta _{q-2}E(\lambda , z)(\tilde{v}_1+\lambda \tilde{u}_2)+\Delta _{q-3}E(\lambda , z)(\tilde{u}_2+\lambda \tilde{u}_3)\nonumber \\ & \quad +\ldots +\Delta _1E(\lambda , z)(\tilde{u}_{q-2}+\lambda \tilde{u}_{q-1})+E(\lambda z)(\tilde{u}_{q-1}+\lambda \tilde{u}_q) \nonumber \\&= \Delta _{q-1}E(\lambda , z)A\tilde{v}_1+\Delta _{q-2}E(\lambda , z)A \tilde{u}_2+\Delta _{q-3}E(\lambda , z)A\tilde{u}_3+\ldots +\Delta _1E(\lambda , z)A\tilde{u}_{q-1}\nonumber \\& \quad + E(\lambda z)A\tilde{u}_{q}=Ay_{j,q}. \end{aligned}$$
(23)

It is straightforward to check that the set \(\{y_{j,p}:1\le j\le k,1\le p\le m_j\}\) determines a basis of the vector space of solutions to (11), as it consists of n linearly independent functions. Indeed, for every \(1\le j\le k\) and \(1\le p\le \ell _j\) we observe that \(y_{j,1},\ldots ,y_{j,h_{p,j}}\) are obtained by a triangular linear combination of the vectors \(\{\tilde{v}_1,\ldots , \tilde{u}_{h_{p,j}}\}\) of the corresponding Jordan chain, where the diagonal terms are given by \(E(\lambda z)\). This concludes the proof. \(\square \)

In view of Theorem 1 and Theorem 2 one can state an algorithm for the construction of the general solution of (11), by means of the Jordan decomposition of the matrix A and the functions \(\Delta _{h}E(\lambda , z)\) defined in (16). We illustrate the previous theory in several examples.
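Before turning to the examples, we include a computational sketch of this construction (only an illustration: the use of sympy, the truncation of \(\Delta _hE(\lambda ,z)\) and the moment sequence \(m(p)=\Gamma (1+p/k)\) are assumptions made here, not part of the results above).

```python
import sympy as sp

z = sp.symbols('z')

def m(p, k=2):
    # assumed moment sequence m(p) = Gamma(1 + p/k)
    return sp.gamma(1 + sp.Rational(p, k))

def Delta_E(h, lam, k=2, N=30):
    # truncation of Delta_h E(lambda, z) = sum_{p >= h} binom(p, h) lambda^(p-h) z^p / m(p), see (16)
    return sum(sp.binomial(p, h) * lam**(p - h) * z**p / m(p, k) for p in range(h, N))

def jordan_solutions(A, k=2, N=30):
    """Solutions y_{j,q} of (11) built from the Jordan chains of A, following the
    construction in the proof of Theorem 2 (each chain v_1, u_2, ..., u_h satisfies
    A v_1 = lambda v_1 and A u_{i+1} = lambda u_{i+1} + u_i)."""
    A = sp.Matrix(A)
    P, J = A.jordan_form()
    sols, col = [], 0
    for block in J.get_diag_blocks():
        lam, size = block[0, 0], block.shape[0]
        chain = [P[:, col + i] for i in range(size)]
        for q in range(1, size + 1):
            y = sp.zeros(A.rows, 1)
            for i in range(q):
                y += Delta_E(q - 1 - i, lam, k, N) * chain[i]
            sols.append(y)
        col += size
    return sols

sols = jordan_solutions([[2, 1], [0, 2]])   # a single 2x2 Jordan block with eigenvalue 2
```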

Example 1

Let \({\mathbb {M}}_{1}=(p!)_{p\ge 0}\) be the Gevrey sequence of order 1. \({\mathbb {M}}_1\) is a strongly regular sequence which admits a nonzero proximate order. The associated kernel function is \(E(z)=\exp (z)\), and for every \(h\ge 0\) one gets that \(\Delta _{h}E(\lambda , z)=\frac{z^h}{h!}\exp (\lambda z)\). The results of Theorem 1 and Theorem 2 coincide with the classical theory of solutions to linear systems of differential equations.

Example 2

This theory can be applied to fractional differential equations, where the Caputo 1/k-fractional differential operators \({}^C D^{1/k}_{z}\) are involved in the equation. Indeed, the expression (10) relates both problems, the one in terms of Caputo 1/k-fractional derivatives and the moment differential equation with moment sequence \(m=(\Gamma (1+\frac{p}{k}))_{p\ge 0}\). In this case, one observes that

$$\begin{aligned} h!\Delta _h E(\lambda , t^{1/k})=t^{h/k}\left( \frac{d^{h}}{dz^{h}}E_{1/k}\right) (\lambda t^{1/k}), \end{aligned}$$

for all \(h\ge 0\), where \(E_{1/k}(z)\) is the Mittag-Leffler function \(E_{1/k}(z)=\sum _{p\ge 0}\frac{z^p}{\Gamma (1+p/k)}\).

We refer to the work by B. Bonilla, M. Rivero and J. J. Trujillo [4] where the authors study systems of linear fractional differential equations with constant coefficients, and the statements of Theorem 1 and Theorem 2 are displayed in terms of the exponential matrix in this framework.

Asymptotic Study of the Solutions

In this section, we analyze the asymptotic behavior at infinity of the entire solutions of the problem (11), giving rise to stability results of such systems of moment differential equations. We maintain the same assumptions as in the previous section for the elements involved in the system (11).

First, we study the global growth of the entire solutions at infinity in terms of the eigenvalues of the matrix A in (11). For this purpose, we make use of Proposition 4.5 in [12], which remains valid in this more general setting.

Proposition 2

Let \({\mathbb {M}}\) be a strongly regular sequence, and let \(f(z)=\sum _{p\ge 0}a_pz^p\) be an entire function. Then, the following statements are equivalent:

  • There exist \(C_1,C_2>0\) such that \(|f(z)|\le C_1\exp (M(C_2|z|))\), for \(z\in {\mathbb {C}}\).

  • There exist \(D_1,D_2>0\) such that \(|a_p|\le D_1D_2^p/M_p\), for all \(p\ge 0\).

Proposition 3

Let \(A\in {\mathbb {C}}^{n\times n}\) and consider the problem (11). Let \(y(z)=(y_1(z),\ldots ,y_n(z))\) be the solution of any Cauchy problem associated to equation (11). Then,

  • If \(\lambda =0\) is the only eigenvalue of A, then y(z) has polynomial growth at infinity. In addition to this, there exists \(C>0\) such that

    $$\begin{aligned} |y_j(z)|\le C|z|^{n-1},\qquad 1\le j\le n,\quad z\in {\mathbb {C}}. \end{aligned}$$
  • There exist \(C_1,C_2>0\) such that

    $$\begin{aligned} |y_j(z)|\le C_1\exp (M(C_2|z|)),\qquad 1\le j\le n,\quad z\in {\mathbb {C}}, \end{aligned}$$

    where the function \(M(\cdot )\) is defined in (3).

Proof

The first part of the result is a direct consequence of the form of the explicit solution of (11), described in the proof of Theorem 2. If \(A\equiv 0\), then the statement is clear. Otherwise, one has that y(z) is a linear combination of constant vectors multiplied by one of the elements in \(\{E(\lambda z),\Delta _1E(\lambda , z),\ldots ,\Delta _{n-1}E(\lambda , z)\}\), i.e. in \(\{1/m(0),z/m(1),\ldots ,z^{n-1}/m(n-1)\}\). The result follows directly from here.

For the second part of the proof we make use of Proposition 2, and observe that the solution y(z) is a linear combination of constant vectors, each multiplied by one of the elements in \(\cup _{\lambda \in \hbox {spec}(A)}\{E(\lambda z),\Delta _1E(\lambda , z),\ldots ,\Delta _{h-1}E(\lambda , z)\}\), for some \(0\le h\le n-1\). We observe that, given any \(\lambda \in \hbox {spec}(A)\), \(0\le h\le n-1\), and \(z\in {\mathbb {C}}\), one has that \(\Delta _h E(\lambda , z)=\sum _{p\ge 0}a_pz^p\), with

$$\begin{aligned} |a_p|\le {p\atopwithdelims ()h}\frac{|\lambda |^{p-h}}{m(p)}\le |\lambda |^{-h}(2|\lambda |)^{p}/m(p), \end{aligned}$$

for some \(h\ge 0\). At this point, we recall the fact that the sequence \({\mathbb {M}}\) and the sequence of moments m are equivalent (see Proposition 1), which entails that

$$\begin{aligned} |\Delta _h E(\lambda , z)|\le C_{h,1}\exp (M(C_{h,2}|z|)),\qquad z\in {\mathbb {C}}, \end{aligned}$$

for some \(C_{h,1},C_{h,2}>0\). Therefore, \(|y_j(z)|\) is bounded from above by a linear combination of functions of the form \(\exp (M(C_{h}|z|))\), for certain positive constants \(C_{h}\) and every \(z\in {\mathbb {C}}\). Since \(M(\cdot )\) is non-decreasing, all these terms can be grouped into a single one, which yields the statement for some \(C_1>0\) and \(C_2\) equal to the maximum of the constants \(C_h\). \(\square \)

Further information on the growth of the solutions at infinity can be provided in terms of the order and type of the entire solutions. The following definition and results can be found in [18, 19].

Definition 6

Let \(f\in {\mathcal {O}}({\mathbb {C}})\). We write \(M_f(r):=\max \{|f(z)|: |z|=r\}\) for every \(r\ge 0\). The order of f is defined by

$$\begin{aligned} \rho =\rho _f:=\displaystyle \limsup _{r\rightarrow \infty }\frac{\ln ^{+}(\ln ^{+}(M_{f}(r)))}{\ln (r)}. \end{aligned}$$

Given \(f\in {\mathcal {O}}({\mathbb {C}})\) of order \(\rho \in {\mathbb {R}}\), the type of f is defined by

$$\begin{aligned} \sigma =\sigma _f:=\displaystyle \limsup _{r\rightarrow \infty }\frac{\ln ^{+}(M_f(r))}{r^{\rho }}. \end{aligned}$$

Here, \(\ln ^+(\cdot )=\max \{0,\ln (\cdot )\}\).

Example 3

The previous elements measure the growth of an entire function at infinity. In the case of \(f(z)=\exp (\sigma z^{\rho })\), for some \(\rho ,\sigma >0\), one has that

$$\begin{aligned} M_f(r)=\max \{\exp (\sigma r^{\rho }\cos (\rho \theta )):\theta \in {\mathbb {R}}\}= \exp (\sigma r^{\rho }), \end{aligned}$$

and

$$\begin{aligned} \rho _f=\lim _{r\rightarrow \infty }\frac{\ln ^+(\sigma r^{\rho })}{\ln (r)}=\rho ,\qquad \sigma _f=\lim _{r\rightarrow \infty }\frac{\sigma r^{\rho }}{r^{\rho }}=\sigma . \end{aligned}$$

Lemma 6

(Theorem 4.2.1 [6]) Let \(f_1,f_2\in {\mathcal {O}}({\mathbb {C}})\). Then \(\rho _{f_1+f_2}=\max \{\rho _{f_1},\rho _{f_2}\}\).

Lemma 7

(Theorem 4.2.3 [6]) Let \(p\in {\mathbb {C}}[z]\) with \(p\ne 0\), and \(f\in {\mathcal {O}}({\mathbb {C}})\). Then, \(\rho _{pf}=\rho _f\).

The following result is a straightforward application of the definition of the type associated to a function.

Lemma 8

Let \(f,g\in {\mathcal {O}}({\mathbb {C}})\) both of order \(\rho >0\). Then, \(\sigma _{f+g}\le \max \{\sigma _f,\sigma _g\}\). For all \(C\in {\mathbb {R}}^{\star }\), \(\sigma _{Cf}=\sigma _f\).

The study of the order and type associated to an entire function of the form (5) is detailed in [18], Sect. 3. More precisely, one has the following result, whose statement has been adapted to our setting.

Theorem 3

(Theorem 3.3, [18]) Let \({\mathbb {M}}\) be a strongly regular sequence which admits a nonzero proximate order, say \(\rho (t)\rightarrow \rho >0\), for \(t\rightarrow \infty \). Then, the function E(z) defined by (5) is an entire function of order \(\rho \) and type 1.

We also make use of the following result.

Theorem 4

(Theorem 1.10.3, [19]) Let \(f(z)=\sum _{p\ge 0}a_pz^p\) be an entire function. Then,

  1. (i)

    The order \(\rho \ge 0\) of f is given by

    $$\begin{aligned} \rho =\lim \sup _{p\rightarrow \infty }\frac{p\ln (p)}{-\ln |a_p|}. \end{aligned}$$
  2. (ii)

    If \(0<\rho <\infty \), then the type \(\sigma \ge 0\) of f is determined by

    $$\begin{aligned} (\sigma e \rho )^{1/\rho }=\limsup _{p\rightarrow \infty }p^{1/\rho }|a_p|^{1/p}. \end{aligned}$$
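For instance, in the simplest case \(f(z)=\exp (\sigma z)\) (the case \(\rho =1\) of Example 3), one can recover the order and type directly from the coefficients \(a_p=\sigma ^p/p!\). Indeed, by Stirling's formula,

$$\begin{aligned} \frac{p\ln (p)}{-\ln |a_p|}=\frac{p\ln (p)}{\ln (p!)-p\ln (\sigma )}\rightarrow 1,\qquad p\,|a_p|^{1/p}=\frac{p\,\sigma }{(p!)^{1/p}}\rightarrow \sigma e, \end{aligned}$$

so that \(\rho _f=1\) and \(\sigma _f e=\sigma e\), i.e. \(\sigma _f=\sigma \), in agreement with Example 3.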

In view of Theorem 3 and Theorem 4, one can give more information on the growth at infinity of the solutions to (11).

Theorem 5

Let \({\mathbb {M}}\) be a strongly regular sequence which admits a nonzero proximate order, say \(\rho (t)\rightarrow \rho >0\), for \(t\rightarrow \infty \). Let \(y=y(z)\) be a solution of (11). Then,

  1. 1.

    if A admits a nonzero eigenvalue, then y is an entire function of order \(\rho \) and type upper bounded by \(\sigma :=\max \{|\lambda |^{\rho }:\lambda \in \hbox {spec}(A)\}\), or an entire function of order 0.

  2. 2.

    if 0 is the only eigenvalue of A, then y is a polynomial. Therefore its order is zero.

Proof

The second part of the statement is clear from the construction of the general solution of (11). Assume that A admits a nonzero eigenvalue, say \(\lambda \). First, we prove that for every \(h\ge 0\) and every \(\lambda \in {\mathbb {C}}^{\star }\), the entire function \(\Delta _hE(\lambda , z)\) defined in (16) is of order \(\rho \) and type \(|\lambda |^{\rho }\). Let \(h\ge 0\) be an integer and \(\lambda \in {\mathbb {C}}^{\star }\). We observe that

$$\begin{aligned} \lim _{p\rightarrow \infty }\frac{-\ln \left[ {p\atopwithdelims ()h}|\lambda |^{p-h}\right] }{p\ln (p)}=\lim _{p\rightarrow \infty }\frac{-\ln (p^{h}|\lambda |^{p-h}h!)}{p\ln (p)}=0, \end{aligned}$$
(24)

which entails that

$$\begin{aligned} \limsup _{p\rightarrow \infty }\frac{p\ln (p)}{-\ln \left[ {p\atopwithdelims ()h}\frac{|\lambda |^{p-h}}{m(p)}\right] }=\limsup _{p\rightarrow \infty }\frac{p\ln (p)}{-\ln \left[ \frac{1}{m(p)}\right] }=\rho , \end{aligned}$$

in view of Theorem 3 and Theorem 4 applied to E. From Theorem 4 we conclude that \(\Delta _hE(\lambda , z)\) is an entire function of order \(\rho \). On the other hand, we have that

$$\begin{aligned} \lim _{p\rightarrow \infty }\left[ {p\atopwithdelims ()h}|\lambda |^{p-h}\right] ^{1/p}=|\lambda |. \end{aligned}$$

We apply Theorem 3 and Theorem 4 to arrive at

$$\begin{aligned} (e\rho )^{1/\rho }=\lim \sup _{p\rightarrow \infty }p^{1/\rho }\left( \frac{1}{m(p)}\right) ^{1/p}. \end{aligned}$$

This entails that the type of the function \(\Delta _hE(\lambda , z)\), denoted by \(\sigma \), satisfies

$$\begin{aligned} (\sigma e\rho )^{1/\rho }=\limsup _{p\rightarrow \infty }p^{1/\rho }\left( {p\atopwithdelims ()h}\frac{|\lambda |^{p-h}}{m(p)}\right) ^{1/p}=|\lambda |\limsup _{p\rightarrow \infty }p^{1/\rho }\left( \frac{1}{m(p)}\right) ^{1/p}=|\lambda |(e\rho )^{1/\rho }, \end{aligned}$$

which entails that \(\sigma =|\lambda |^{\rho }\).

In view of the form of the solution of (11) given by (20) and applying Lemma 6 and Lemma 7, one gets that the order of y is \(\rho \). Observe that the order falls to zero if the coefficients associated to nonzero eigenvalues in the solution vanish. Lemma 8 allows us to conclude that, in the case that the order is \(\rho \), the type of the solution is at most \(\sigma :=\max \{|\lambda |^{\rho }:\lambda \in \hbox {spec}(A)\}\). \(\square \)

Remark 2

Observe that the concrete type associated to a solution of a Cauchy problem associated to (11) is determined by the Cauchy data. For example, if \(m=(p!)_{p\ge 0}\) and \(A=\hbox {diag}(1,2)\), the solution of the Cauchy problem with \(y(0)=(1,0)\) has order 1 and type equal to 1, whereas the solution of the Cauchy problem with \(y(0)=(0,1)\) has type 2. The same holds for the order. Indeed, consider the matrix \(A=\hbox {diag}(1,0)\). The solution of the Cauchy problem with \(y(0)=(0,1)\) is \(y(z)\equiv (0,1)\), whereas the solution with \(y(0)=(1,0)\) has order and type equal to 1.

On the Radial Growth of the Solutions

The radial growth properties of the kernel function E allow us to give some information on the growth of the solutions to (11) along rays approaching infinity. More precisely, one has the following first result in this direction.

Proposition 4

Let \({\mathbb {M}}\) be a strongly regular sequence which admits a nonzero proximate order \(\rho (t)\rightarrow \rho >0\). Let m be the sequence of moments constructed as in Definition 2. Let \(A\in {\mathbb {C}}^{n\times n}\) be a diagonalizable matrix and consider the Cauchy problem (12), for some Cauchy data \((z_0,y_0)\in {\mathbb {C}}^{1+n}\). Then, the solution \(y=(y_1,\ldots ,y_n)\) of (12) satisfies that for every \(1\le h\le n\)

$$\begin{aligned} |y_h(re^{i\theta })|\le \frac{C}{r^{\beta }},\qquad r\ge R_0 \end{aligned}$$

for some \(C,\beta >0\) and some \(R_0>0\), provided that \(\theta \in {\mathcal {A}}_h\), where \({\mathcal {A}}_h\) is some (possibly empty) set of directions.

Proof

Let (15) be the general solution of (11). For any choice of Cauchy data \((z_0,y_0)\in {\mathbb {C}}^{1+n}\), let \(C^0_{j,p}\in {\mathbb {C}}\) be the constants for \(1\le j\le k\) and \(1\le p\le \ell _j\) such that

$$\begin{aligned} y_h(z)=\sum _{j=1}^{k}\sum _{p=1}^{\ell _j}C^{0}_{j,p}E(\lambda _j z)v_{j,p,h},\quad 1\le h\le n \end{aligned}$$
(25)

determines the unique solution to the Cauchy problem (12). The set \({\mathcal {A}}_h\) is defined by

$$\begin{aligned} {\mathcal {A}}_{h}=\bigcap _{j=1}^{k}\left\{ \theta \in {\mathbb {R}}:\frac{\omega ({\mathbb {M}})\pi }{2}-\arg (\lambda _j)<\theta <2\pi -\frac{\omega ({\mathbb {M}})\pi }{2}-\arg (\lambda _j)\right\} . \end{aligned}$$

Observe that \(\theta \in {\mathcal {A}}_h\) if and only if \(re^{i\theta }\lambda _j\in S_{\pi }(\nu )\), for some \(0<\nu <2\pi -\omega ({\mathbb {M}})\pi \) and every \(1\le j\le k\), which entails the bounds on the kernel function given in (4). \(\square \)
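The set of decay directions can be computed explicitly from the eigenvalues and \(\omega ({\mathbb {M}})\). The following sketch (a numerical illustration under assumptions made here, not part of the statement above; the grid resolution and the reduction to principal arguments are choices for the example) selects the directions \(\theta \) for which every \(\lambda _je^{i\theta }\) lies outside the closed sector \(|\arg (z)|\le \omega ({\mathbb {M}})\pi /2\), so that the bound (4) applies to each term of (25).

```python
import numpy as np

def decay_directions(eigenvalues, omega, n_grid=2000):
    """Directions theta (principal values) with |arg(lambda_j e^{i theta})| > omega*pi/2
    for every eigenvalue lambda_j, i.e. the set A_h of Proposition 4 up to 2*pi-periodicity."""
    thetas = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    keep = np.ones_like(thetas, dtype=bool)
    for lam in eigenvalues:
        phi = thetas + np.angle(lam)
        phi = (phi + np.pi) % (2 * np.pi) - np.pi   # reduce to [-pi, pi)
        keep &= np.abs(phi) > omega * np.pi / 2
    return thetas[keep]

# eigenvalues -1 and -2 with omega = 1/2 (e.g. the Caputo 1/2-fractional setting)
dirs = decay_directions([-1.0, -2.0], omega=0.5)
print(dirs.min(), dirs.max())
```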

The previous result distinguishes certain directions for which the bound established in Proposition 3 has been tightened. The following definition and results can be found in [18, 19]. They will be the key point to determine the growth of the solution to (11) along rays to infinity.

Definition 7

Let \(\rho (t)\) be a proximate order of \(f\in {\mathcal {O}}({\mathbb {C}})\), i.e. \(\rho (t)\) is a proximate order and

$$\begin{aligned} 0<\sigma _f:=\limsup _{r\rightarrow \infty }\frac{\ln (M_f(r))}{r^{\rho (r)}}<\infty . \end{aligned}$$

Then, \(\sigma _f\) is known as the type of f relative to \(\rho \), and

$$\begin{aligned} h_f(\theta )=\limsup _{r\rightarrow \infty }\frac{\ln |f(re^{i\theta })|}{r^{\rho (r)}},\quad \theta \in {\mathbb {R}}, \end{aligned}$$

is known as the generalized indicator of f. We write \(h_{f}^{+}=\max \{0,h_f\}\).

Theorem 6

(Theorem 3.7, [18]) Let \({\mathbb {M}}\) be a strongly regular sequence which admits a proximate order \(\rho (t)\rightarrow \rho >0\). Let E be the associated kernel function defined in (5). Then,

  1. (i)
    $$\begin{aligned} h_E^+(\theta )=\left\{ \begin{array}{ccc} \cos (\rho \theta ) &{} if\quad &{} |\theta |\le \min \{\pi ,\frac{\pi }{2\rho }\}\\ 0 &{} if \quad &{} \rho \ge \frac{1}{2},\quad \min \{\pi ,\frac{\pi }{2\rho }\}\le |\theta |\le \pi \end{array} \right. \end{aligned}$$

    for every \(\theta \in [-\pi ,\pi ]\).

  2. (ii)

    If \(\rho >1\), then \(h_E(\theta )=h^+_E(\theta )\), for \(\theta \in {\mathbb {R}}\).

  3. (iii)

    If \(\rho \le 1/2\), then \(h_E^+(\theta )=\cos (\rho \theta )\), \(|\theta |\le \pi \).

  4. (iv)

    If \(1/2<\rho \le 1\) and there exists \(k\in {\mathbb {N}}\) such that \(m(-k-1)\ne 0\), then \(h_E(\theta )=h_E^+(\theta )\), \(\theta \in {\mathbb {R}}\).

Remark 3

Observe from Theorem 3.1 in [18] that 1/m(z) extends analytically to the whole complex plane.

Lemma 9

Let \(\rho (t)\) be a proximate order of \(f\in {\mathcal {O}}({\mathbb {C}})\). Then, for every \(C\in {\mathbb {C}}^{\star }\) the function \(t\mapsto \rho (|C|t)\) is a proximate order of the function \(t\mapsto f(Ct)\). Moreover, the type of the function \(t\mapsto f(Ct)\) is given by \(\sigma _{f}|C|^{\rho }\).

Proof

It is straightforward to check that the function \({\tilde{\rho }}(t)=\rho (|C| t)\) is a proximate order. Observe that (7) holds because

$$\begin{aligned} \lim _{r\rightarrow \infty }{\tilde{\rho }}(r)=\lim _{r\rightarrow \infty }\rho (r)=\rho . \end{aligned}$$

In addition to this, one has

$$\begin{aligned} \lim _{r\rightarrow \infty }r{\tilde{\rho }}'(r)\ln (r)=\lim _{r\rightarrow \infty }|C|r\rho '(|C|r)\ln (|C|r)\frac{\ln (r)}{\ln (|C|r)}. \end{aligned}$$

Since \(\frac{\ln (r)}{\ln (|C|r)}\rightarrow 1\) as \(r\rightarrow \infty \), we get that the previous limit is zero because \(\rho (t)\) is a proximate order. This entails that (8) holds and \({\tilde{\rho }}\) is a proximate order.

We now prove that \({\tilde{\rho }}(t)\) is a proximate order of \(\tilde{f}(t)=f(Ct)\). First, observe that

$$\begin{aligned} M_{f(Cz)}(r)&=\sup _{|z|=r}\{|f(C z)|\}=\sup _{\theta \in {\mathbb {R}}}\{|f(|C|re^{i(\theta +\arg (C))})|\}\\&=\sup _{\theta \in {\mathbb {R}}}\{|f(|C|re^{i\theta })|\}=\sup _{|z|=|C| r}\{|f(z)|\}=M_{f}(|C|r). \end{aligned}$$

From Definition 6, we have

$$\begin{aligned} \limsup _{r\rightarrow \infty }\frac{\ln (M_{\tilde{f}}(r))}{r^{{\tilde{\rho }}(r)}}=\limsup _{r\rightarrow \infty }\frac{\ln (M_{f}(|C|r))}{r^{\rho (|C|r)}}=\limsup _{r\rightarrow \infty }\frac{\ln (M_{f}(|C|r))}{(|C|r)^{\rho (|C|r)}}|C|^{\rho (|C|r)}. \end{aligned}$$
(26)

From the properties of proximate orders, one has \(|C|^{\rho (|C|r)}\rightarrow |C|^{\rho }\) as \(r\rightarrow \infty \). Therefore, (26) equals \(\sigma _{f}|C|^{\rho }>0\). \(\square \)

Lemma 10

Let \(\lambda \in {\mathbb {C}}^{\star }\). Let \({\mathbb {M}}\) be a strongly regular sequence admitting a nonzero proximate order, say \(\rho (t)\rightarrow \rho >0\), for \(t\rightarrow \infty \). We consider the associated kernel function E described in (5). Then, the generalized indicator of the function \(z\mapsto E(\lambda z)\) is given by

$$\begin{aligned} h_{E(\lambda z)}(\theta )=|\lambda |^{\rho }h_{E}(\theta +\arg (\lambda )). \end{aligned}$$

Proof

In view of Lemma 9, the function \(\rho (|\lambda |t)\) is a proximate order of the entire function \(E(\lambda z)\). We have

$$\begin{aligned} h_{E(\lambda z)}(\theta )=\limsup _{r\rightarrow \infty }\frac{\ln |E(\lambda r e^{i\theta })|}{r^{\rho (|\lambda |r)}}=\limsup _{r\rightarrow \infty }\frac{\ln |E(\lambda r e^{i\theta })|}{(|\lambda |r)^{\rho (|\lambda |r)}}|\lambda |^{\rho (|\lambda |r)} \nonumber \\ =|\lambda |^{\rho }\limsup _{r\rightarrow \infty }\frac{\ln |E(r e^{i(\theta +\arg (\lambda ))})|}{r^{\rho (r)}}=|\lambda |^{\rho }h_{E}(\theta +\arg (\lambda )). \end{aligned}$$
(27)

This concludes the proof. \(\square \)

As a direct application of the previous Lemma, we obtain the explicit form of the generalized indicator of \(E(\lambda z)\).

Corollary 1

Let \({\mathbb {M}}\) be a strongly regular sequence which admits a proximate order \(\rho (t)\rightarrow \rho >0\). Let E be the associated kernel function defined in (5), and \(\lambda \in {\mathbb {C}}^{\star }\). Then,

  1. (i)
    $$\begin{aligned} h_{E(\lambda z)}^+(\theta )=\left\{ \begin{array}{lll} |\lambda |^{\rho }\cos (\rho (\theta +\arg (\lambda ))) &{} if\quad &{} |\theta +\arg (\lambda )|\le \min \{\pi ,\frac{\pi }{2\rho }\}\\ 0 &{} if \quad &{} \rho \ge \frac{1}{2},\quad \min \{\pi ,\frac{\pi }{2\rho }\}\le |\theta +\arg (\lambda )|\le \pi \end{array} \right. \end{aligned}$$
    (28)

    for every \(\theta \in [-\pi -\arg (\lambda ),\pi -\arg (\lambda )]\).

  2. (ii)

    If \(\rho >1\), then \(h_{E(\lambda z)}(\theta )=h^+_{E(\lambda z)}(\theta )\), for \(\theta \in {\mathbb {R}}\).

  3. (iii)

    If \(\rho \le 1/2\), then \(h_{E(\lambda z)}^+(\theta )=|\lambda |^{\rho }\cos (\rho (\theta +\arg (\lambda )))\), \(|\theta +\arg (\lambda )|\le \pi \).

  4. (iv)

    If \(1/2<\rho \le 1\) and there exists \(k\in {\mathbb {N}}\) such that \(m(-k-1)\ne 0\), then \(h_{E(\lambda z)}(\theta )=h_{E(\lambda z)}^+(\theta )\), \(\theta \in {\mathbb {R}}\).

Proof

We observe that \(h^{+}_{E(\lambda z)}(\theta )\) for every \(\theta \in [-\pi -\arg (\lambda ),\pi -\arg (\lambda )]\) is determined by

$$\begin{aligned} h^{+}_{E(\lambda z)}(\theta )&=\max \{h_{E(\lambda z)}(\theta ),0\}\\&=|\lambda |^{\rho }\max \{h_E(\theta +\arg (\lambda )),0\}=|\lambda |^{\rho }h^{+}_{E}(\theta +\arg (\lambda )), \end{aligned}$$

which yields (28).

If \(\rho >1\), then for every \(\theta \in {\mathbb {R}}\)

$$\begin{aligned} h_{E(\lambda z)}(\theta )=|\lambda |^{\rho }h_{E}(\theta +\arg (\lambda ))=|\lambda |^{\rho }h^+_E(\theta +\arg (\lambda ))=\max \{0,|\lambda |^{\rho }h_E(\theta +\arg (\lambda ))\}=h^+_{E(\lambda z)}(\theta ). \end{aligned}$$

If \(\rho \le 1/2\), then for every \(|\theta +\arg (\lambda )|\le \pi \)

$$\begin{aligned} h_{E(\lambda z)}^+(\theta )=|\lambda |^{\rho }h^{+}_{E}(\theta +\arg (\lambda ))=|\lambda |^{\rho }\cos (\rho (\theta +\arg (\lambda ))). \end{aligned}$$

Finally, if \(1/2<\rho \le 1\) and there exists \(k\in {\mathbb {N}}\) such that \(m(-k-1)\ne 0\), then for every \(\theta \in {\mathbb {R}}\) one has

$$\begin{aligned} h_{E(\lambda z)}(\theta )=|\lambda |^{\rho }h_{E}(\theta +\arg (\lambda ))=|\lambda |^{\rho }h^{+}_{E}(\theta +\arg (\lambda ))=h^+_{E(\lambda z)}(\theta ). \end{aligned}$$

\(\square \)

Our final result in this work states an upper bound for the generalized indicator of the solutions to (11).

Theorem 7

Let \({\mathbb {M}}\) be a strongly regular sequence which admits a nonzero proximate order \(\rho (t)\rightarrow \rho >0\). Let m be the sequence of moments constructed as in Definition 2. Assume that \(A\in {\mathbb {C}}^{n\times n}\) is a diagonalizable matrix, and consider the Cauchy problem (12), for some Cauchy data \((z_0,y_0)\in {\mathbb {C}}^{1+n}\). Then, the solution \(y=(y_1,\ldots ,y_n)\) of (12) satisfies for every \(1\le h\le n\) that

$$\begin{aligned} h_{y_{h}}(\theta )\le \max \left\{ |\lambda |^{\rho }h_{E}(\theta +\arg (\lambda )): \lambda \in \hbox {spec}(A)\right\} , \end{aligned}$$

for every \(\theta \in {\mathbb {R}}\).

Proof

Let \(1\le h\le n\). We write \(y_h\) in the form (25), for certain \(C_{j,p}^0,v_{j,p,h}\in {\mathbb {C}}\), which we assume to be nonzero without loss of generality, and where \(\{\lambda _{j}:1\le j\le k\}\) is the set of eigenvalues of A. We can also assume that \(\lambda =0\) is not the only eigenvalue of A. Otherwise, since A is diagonalizable, one would have \(A\equiv 0\).

At this point, we may assume that at least one of the constants \(C_{j,p}^0\) associated to a nonzero eigenvalue differs from zero. Otherwise, the solution of the Cauchy problem is a polynomial.

We observe that for every \(1\le j\le k\) the function \(C^{0}_{j,p}E(\lambda _j z)v_{j,p,h}\) admits \(t\mapsto \rho (|\lambda _j| t)\) as proximate order, with \(\rho (t)\) being a proximate order associated to the kernel function E. In addition to this, the type of \(C^{0}_{j,p}E(\lambda _j z)v_{j,p,h}\) coincides with the type of \(E(\lambda _j z)\), which equals \(|\lambda _j|^{\rho }\).

For every \(r\ge 0\), one has

$$\begin{aligned} M_{y_{h}}(r)= & {} \sup \left\{ \left| \sum _{j=1}^{k}\sum _{p=1}^{\ell _j}C^{0}_{j,p}E(\lambda _j z)v_{j,p,h}\right| :|z|=r\right\} \nonumber \\&\quad \le \left( \sum _{j=1}^{k}\sum _{p=1}^{\ell _j} |C^{0}_{j,p}v_{j,p,h}|\right) \sup \{\max _{1\le j\le k}|E(\lambda _j z)|:|z|=r\}. \end{aligned}$$
(29)

Let us write \(C_1:=\sum _{j=1}^{k}\sum _{p=1}^{\ell _j} |C^{0}_{j,p}v_{j,p,h}|\). We have obtained that

$$\begin{aligned} M_{y_h}(r)\le C_1\max _{1\le j\le k}\{M_{E(\lambda _j z)}(r)\}. \end{aligned}$$

Now, let \(1\le j_0\le k\) be such that \(|\lambda _{j_0}|\ge |\lambda _j|\) for every \(1\le j\le k\). We recall that \(M_{E(\lambda _j z)}(r)=M_E(|\lambda _j|r)\) for every \(r\ge 0\). The monotonicity of \(M_{E}\) entails that \(M_{y_h}(r)\le C_1M_{E(\lambda _{j_0} z)}(r)\). We recall from the proof of Lemma 9 that \(t\mapsto \rho (|\lambda _{j_0}|t)\) is a proximate order of \(E(\lambda _{j_0}z)\). In order to check that it is also a proximate order for \(y_{h}\) we have

$$\begin{aligned} \limsup _{r\rightarrow \infty }\frac{\ln (M_{y_{h}}(r))}{r^{\rho (|\lambda _{j_0}|r)}}\le & {} \limsup _{r\rightarrow \infty }\frac{\ln (C_1M_{E}(|\lambda _{j_0}|r))}{r^{\rho (|\lambda _{j_0}|r)}} =\limsup _{r\rightarrow \infty }\frac{\ln (M_{E}(|\lambda _{j_0}|r))}{r^{\rho (|\lambda _{j_0}|r)}} \nonumber \\= & {} \limsup _{r\rightarrow \infty }\frac{\ln (M_{E}(|\lambda _{j_0}|r))}{(|\lambda _{j_0}|r)^{\rho (|\lambda _{j_0}|r)}}|\lambda _{j_0}|^{\rho (|\lambda _{j_0}|r)}=\sigma _{E}|\lambda _{j_0}|^{\rho }=|\lambda _{j_0}|^{\rho }. \end{aligned}$$
(30)

Therefore, \(\sigma _{y_h}\le |\lambda _{j_0}|^{\rho }\).

Now, we study

$$\begin{aligned} \limsup _{r\rightarrow \infty }\frac{\ln |y_h(re^{i\theta })|}{r^{\rho (|\lambda _{j_0}|r)}}. \end{aligned}$$

Computations analogous to those in the first part of the proof, together with the properties of the \(\limsup \), allow us to bound the previous expression from above by

$$\begin{aligned} \limsup _{r\rightarrow \infty }\frac{\ln (C_1)+\max _{1\le j\le k}\ln |E(\lambda _j r e^{i\theta })|}{r^{\rho (|\lambda _{j_0}|r)}}\le \max _{1\le j\le k}\left\{ \limsup _{r\rightarrow \infty }\frac{\ln |E(\lambda _j r e^{i\theta })|}{r^{\rho (|\lambda _{j_0}|r)}}\right\} . \end{aligned}$$

Taking into account Lemma 10, we arrive at

$$\begin{aligned} \limsup _{r\rightarrow \infty }\frac{\ln |E(\lambda _j r e^{i\theta })|}{r^{\rho (|\lambda _{j_0}|r)}} =\limsup _{r\rightarrow \infty }|\lambda _{j_0}|^{\rho (|\lambda _{j_0}|r)}\frac{\ln |E(\frac{|\lambda _j|}{|\lambda _{j_0}|}|\lambda _{j_0}| r e^{i(\theta +\arg (\lambda _j))})|}{(|\lambda _{j_0}|r)^{\rho (|\lambda _{j_0}|r)}} \nonumber \\ =|\lambda _{j_0}|^{\rho }h_{E(|\lambda _j|/|\lambda _{j_0}|z)}(\theta +\arg (\lambda _j))=|\lambda _j|^{\rho }h_{E}(\theta +\arg (\lambda _j)). \end{aligned}$$
(31)

This entails that

$$\begin{aligned} \limsup _{r\rightarrow \infty }\frac{\ln |y_h(re^{i\theta })|}{r^{\rho (|\lambda _{j_0}|r)}}\le \max _{1\le j\le k}|\lambda _j|^{\rho }h_{E}(\theta +\arg (\lambda _j)), \end{aligned}$$

arriving at the conclusion. \(\square \)

Remark 4

Observe that in the previous proof, we have also obtained a proximate order of the solution \(y_h\) for all \(1\le h\le n\), and also an upper bound for its type \(\sigma _{y_h}\). Indeed, if \(\lambda \) is the eigenvalue of A of largest modulus, then the function \(\rho (|\lambda |t)\) is a proximate order of \(y_h\), and the type of \(y_h\) is bounded from above by \(|\lambda |^{\rho }\).

Remark 5

The exact type and order associated to a solution of a Cauchy problem are determined by the Cauchy data.