1 Introduction

Hyperbolic stochastic partial differential equations arise as models of various phenomena in mathematical physics, economics, molecular biology and many other areas of science, where random fluctuations and uncertainties are incorporated into the equation by white noise or by other singular generalized stochastic processes, such as Poissonian processes or general Lévy processes. In this paper we consider two types of hyperbolic problems for suitable differential operators, acting by the Wick product instead of classical multiplication. This is because we allow random terms to be present in the initial conditions and the right-hand side of the equations, as well as in the coefficients of the involved operators. The presence of all these highly singular random terms leads to singular solutions that do not allow the use of ordinary multiplication, but require its renormalization, also known as the Wick product. The Wick product is known to represent the highest order stochastic approximation of the ordinary product [32], and it has been used in many models together with the Wiener chaos expansion method [18, 19, 26, 27, 29, 30, 40, 41, 42, 47].

Powerful tools for deterministic equations with singular input data are the pseudo-differential calculus and Fourier integral operators (see, e.g., [11, 22, 33, 34]), which have experienced rapid development in recent years (see, for instance, [1, 3, 4, 10, 44] and the references quoted therein). The roots of pseudo-differential operators and Fourier integral operators lie in microlocal analysis (see, e.g., [17, 20]). General approaches to solving deterministic hyperbolic equations are presented, e.g., in [22, 28], and we will rely on these results and their extensions.

In this paper we present techniques for solving singular hyperbolic stochastic partial differential equations, resulting from the synergy of these two, by now classical, powerful methods: chaos expansions and Fourier integral operators.

The first model we will consider is an initial-boundary value problem for a second order, wave-type, differential operator on a bounded open set \(U\subset {\mathbb {R}}^d\) having smooth boundary, and on a time interval \(I=[0,T]\), \(T>0\), namely

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t^2u(t,x;\omega ) -{\mathbf {A}}\lozenge u(t,x;\omega )=f(t,x;\omega ) &{} (t,x;\omega )\in I\times U\times \Omega \\ u(0,x; \omega )=u^0(x;\omega ), (\partial _t u)(0,x;\omega )=u^1(x;\omega ) &{} (x;\omega ) \in U\times \Omega \\ u(t,x;\omega )|_{\partial U}=0, \end{array}\right. } \end{aligned}$$
(1.1)

with a second order differential operator \(\displaystyle {\mathbf {A}}=\sum _{j=1}^d\partial _j \sum _{k=1}^d a_{jk}(t,x;\omega )\partial _k\), having space-time stochastic processes \(a_{jk}\) as coefficients that are continuously differentiable in time, and satisfying suitable ellipticity conditions, which will be specified in detail in Sects. 2 and 3 below. Here, chaos expansions will be used in connection with well-known representations and estimates for an infinite sequence of associated deterministic initial-boundary value problems on the bounded open set \(U\subset {\mathbb {R}}^d\).

The second model on which we will focus is an initial value (that is, Cauchy) problem for a differential hyperbolic operator of order \(m\in {\mathbb {N}}\), which we will study globally on \({\mathbb {R}}^d\), namely,

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathbf {L}}\lozenge u(t,x;\omega )=f(t,x;\omega ) &{} (t,x;\omega )\in I\times {\mathbb {R}}^d\times \Omega \\ (D^k_tu)(0,x; \omega )=u^k(x;\omega ), k=0,\dots ,m-1, &{} (x;\omega )\in {\mathbb {R}}^d\times \Omega , \end{array}\right. } \end{aligned}$$
(1.2)

where \(\displaystyle {\mathbf {L}}=D_t^m+\sum _{j=1}^{m} \sum _{|\alpha |\le j} a_{j\alpha }(t,x;\omega )D_x^\alpha D^{m-j}_t\), \(D_t=-i\partial _t\), \(D_x^\alpha =(-i)^{|\alpha |}\partial ^\alpha _x\), and \(a_{j\alpha }\) being smooth space-time stochastic processes, \(j=1,\dots ,m\), \(\alpha \in {\mathbb {N}}_0^d\) such that \(|\alpha |\le j\) (see Sects. 2 and 4 for the precise hypotheses on \({\mathbf {L}}\), f, and \(u^k\), \(k=0,\dots ,m-1)\). To perform our analysis in this case, we will again use chaos expansions, but this time in connection with the properties of a class of Fourier integral operators, defined through objects globally defined on \({\mathbb {R}}^d\).

Hyperbolic SPDEs via Wiener chaos expansion methods have been studied in [21], but our approach is more powerful and allows more singular input data in the model. The main idea we present in this paper relies on the chaos expansion method (also known as the propagator method): first, one uses the chaos expansion of all stochastic data in the equation to convert the SPDE into an infinite system of deterministic PDEs, then the PDEs are recursively solved, and finally one must sum up these solutions to obtain the chaos expansion form of the solution of the initial SPDE. The crucial point is to prove convergence of the series given by the chaos expansion that defines the solution, and this part relies on obtaining good energy estimates of the PDE solutions, proving their regularity and using estimates on the Wick products. This approach has many advantages, most notably it provides an explicit form of the solution of the SPDE from which one can directly compute the expectation, variance and other moments, and it is convenient also for numerical approximations by truncating the series in the chaos expansion to finite sums.
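
The triangular structure behind the propagator method can be illustrated on a toy Wick-algebraic equation \(a\lozenge u=f\) (this is our own minimal sketch, not a solver used in the paper; the helper names `sub` and `solve_wick` are ours, and multi-indices are truncated to finite tuples):

```python
# Minimal sketch of the propagator method for the toy Wick equation a ◊ u = f.
# Multi-indices are tuples of non-negative integers; chaos expansions are
# dicts mapping multi-indices to real coefficients.

def sub(gamma, beta):
    """Componentwise gamma - beta, or None if some entry would be negative."""
    d = tuple(g - b for g, b in zip(gamma, beta))
    return d if all(x >= 0 for x in d) else None

def solve_wick(a, f):
    """Solve a ◊ u = f for the chaos coefficients of u, assuming a_0 != 0.

    The Wick product couples coefficients triangularly in the order |gamma|:
        sum_{beta + lam = gamma} a_beta * u_lam = f_gamma,
    so u_gamma is determined by f_gamma and coefficients of lower order.
    """
    zero = tuple(0 for _ in next(iter(a)))
    u = {}
    for gamma in sorted(f, key=lambda g: (sum(g), g)):  # increasing order
        s = 0.0
        for beta, a_beta in a.items():
            if beta == zero:
                continue
            lam = sub(gamma, beta)
            if lam is not None and lam in u:
                s += a_beta * u[lam]
        u[gamma] = (f[gamma] - s) / a[zero]
    return u
```

In the PDE setting, the division by \(a_{\mathbf{0}}\) in each step is replaced by solving one deterministic PDE for each coefficient, and the crucial remaining issue is precisely the convergence of the resulting series.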

The second main tool we use in this paper is the SG calculus of Fourier integral operators (further abbreviated as SG-FIOs theory). Applications of the SG-FIOs theory to SG-hyperbolic Cauchy problems were initially given in [12, 14]. Many authors have, since then, expanded the SG-FIOs theory and its applications to the solution of hyperbolic problems in various directions. To mention a few, see, e.g., [10, 44], and the references quoted there and in [4]. In particular, the results in Theorem 2.11 have been applied in [4] to study classes of SG-hyperbolic Cauchy problems, constructing their fundamental solution \(\{E(t,s)\}_{0\leqslant s\leqslant t\leqslant T}\). The existence of the fundamental solution provides, via Duhamel’s formula, existence and uniqueness of the solution to the system, for any given Cauchy data in the weighted Sobolev spaces \(H^{z,\zeta }({\mathbb {R}}^d)\), \((z,\zeta )\in {\mathbb {R}}^2\). In short, the Cauchy problem for a linear SG-hyperbolic operator L, satisfying suitable additional hypotheses (which will be explicitly stated in Sect. 4), can be turned into an equivalent Cauchy problem for a first order system. The fundamental solution operator to such a system then allows one to write the (unique) solution of the original Cauchy problem in terms of SG-FIOs (modulo remainder terms). A remarkable feature, typical for these classes of hyperbolic problems, is the well-posedness with loss of decay/gain of growth at infinity, observed, e.g., in [2, 3, 12]. We also mention that random-field solutions of hyperbolic SPDEs via Fourier integral operator methods have recently been studied in [5, 8], while function-valued solutions for associated semilinear hyperbolic SPDEs have been obtained in [7].

The plan of our exposition is as follows. In the preliminary section (Sect. 2) we provide a basic overview of the notation and recall some results that are required for further reading. We introduce the basic notions of white noise theory, including chaos expansions of generalized stochastic processes, Wick products and stochastic differential operators, and we recall the fundamental notions of pseudo-differential calculus, Fourier integral operators, and weighted Sobolev spaces, within the environment of the so-called SG calculus. In Sect. 3, which contains the first main result of the paper, we prove existence and uniqueness of a local solution to the linear equation (1.1). In Sect. 4, which contains the second main result of the paper, we prove existence and uniqueness of a global solution to Eq. (1.2), and then prove existence and uniqueness of solutions to systems of first order linear hyperbolic SPDEs of the form (1.2). Finally, Sect. 5 contains some examples of applications of the previous results, including the modeling of seismic wave equations and earthquake motions [37, 46], of molecular dynamics [15, 16, 39, 48], and of plasma reactions and dynamics of the inner structure of stars [15, 43].

Although our exposition follows the classical Hida–Kondratiev space approach within classical white noise theory, the chaos expansion method we present can easily be extended to model fractional Brownian motion with a Hurst parameter \(H \in (0, 1)\), fractional Poissonian noise or other fractional versions of stochastic processes. In [19] it was shown that there exists a unitary mapping between Gaussian and Poissonian white noise spaces. Hence, solutions of a stochastic differential equation on the Poissonian white noise space can be obtained by applying this mapping to the solution of the corresponding stochastic differential equation taken on the Gaussian white noise space. Fractional versions of spaces of this type were further studied in [24, 25] and can be related to the Gaussian white noise space by a simple change of the Hermite basis (2.1) to another basis of orthogonal polynomials (e.g. Charlier polynomials for the Poissonian noise).

2 Preliminaries

In this section we recall various notions which we will use in the sequel.

2.1 Chaos Expansions and the Wick Product

Denote by \((\Omega , {\mathcal {F}}, P) \) the Gaussian white noise probability space \((S'({\mathbb {R}}), {\mathcal {B}}, \mu ), \) where \(S'({\mathbb {R}})\) denotes the space of tempered distributions, \({\mathcal {B}}\) the Borel sigma-algebra generated by the weak topology on \(S'({\mathbb {R}})\) and \(\mu \) the Gaussian white noise measure corresponding to the characteristic function

$$\begin{aligned} \int _{S'({\mathbb {R}})} \, e^{{i\langle \omega , \phi \rangle }} d\mu (\omega ) = \exp \left[ -\frac{1}{2} \Vert \phi \Vert ^2_{L^2({\mathbb {R}})}\right] , \quad \quad \phi \in S({\mathbb {R}}), \end{aligned}$$

given by the Bochner–Minlos theorem.

We recall the notions related to \(L^2(\Omega ,\mu )\) (see [19]), where \(\Omega =S'({\mathbb {R}})\) and \(\mu \) is the Gaussian white noise measure. We adopt the notation \({\mathbb {N}}_0=\{0,1,2,\dots \}\), \({\mathbb {N}}={\mathbb {N}}_0\setminus \{0\}=\{1,2,\dots \}\). Define the set of multi-indices \({\mathcal {I}}\) to be \(({\mathbb {N}}_0^{\mathbb {N}})_c\), i.e. the set of sequences of non-negative integers which have only finitely many nonzero components. In particular, we denote by \(\mathbf{0 }=(0,0,0,\ldots )\) the multi-index with all entries equal to zero. The length of a multi-index is \(|\alpha |=\sum _{i=1}^\infty \alpha _i\) for \(\alpha =(\alpha _1,\alpha _2,\ldots )\in {\mathcal {I}}\), and it is always finite. Similarly, \(\alpha !=\prod _{i=1}^\infty \alpha _i!\), and all other operations are also carried out componentwise. We will use the convention that \(\alpha -\beta \) is defined if \(\alpha _n-\beta _n\geqslant 0\) for all \(n\in {\mathbb {N}}\), i.e., if \(\alpha -\beta \geqslant \mathbf{0 }\), and leave \(\alpha -\beta \) undefined if \(\alpha _n<\beta _n\) for some \(n\in {\mathbb {N}}\). We denote by \(h_n\), \(n\in {\mathbb {N}}_0\), the Hermite orthogonal polynomials

$$\begin{aligned} h_n(x)=(-1)^n \, e^\frac{x^2}{2}\,\frac{d^n}{dx^n}\left( e^{-\frac{x^2}{2}}\right) , \end{aligned}$$

and by \(\xi _n\), \(n\in {\mathbb {N}}\), the Hermite functions

$$\begin{aligned} \xi _n(x)=((n-1)! \sqrt{\pi })^{-\frac{1}{2}}e^{-\frac{x^2}{2}}h_{n-1}(x\sqrt{2}). \end{aligned}$$
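
The Hermite polynomials above satisfy the three-term recursion \(h_{n+1}(x)=x\,h_n(x)-n\,h_{n-1}(x)\), which makes both \(h_n\) and \(\xi _n\) easy to evaluate numerically. The following sketch (our own illustration; the helper names are ours) checks the orthonormality of the Hermite functions in \(L^2({\mathbb {R}})\) by quadrature:

```python
import math

def h(n, x):
    """Probabilists' Hermite polynomial h_n, via h_{n+1}(x) = x h_n(x) - n h_{n-1}(x)."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, x * p1 - k * p0
    return p1

def xi(n, x):
    """Hermite function xi_n, n >= 1, exactly as defined in the text."""
    c = (math.factorial(n - 1) * math.sqrt(math.pi)) ** -0.5
    return c * math.exp(-x * x / 2.0) * h(n - 1, x * math.sqrt(2.0))

def l2_inner(f, g, a=-12.0, b=12.0, steps=4000):
    """Midpoint-rule approximation of the L^2(R) inner product (the integrands
    decay like exp(-x^2/2), so truncating to [a, b] is harmless)."""
    dx = (b - a) / steps
    return dx * sum(f(a + (i + 0.5) * dx) * g(a + (i + 0.5) * dx)
                    for i in range(steps))
```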

The Wiener–Itô theorem states that one can define an orthogonal basis \(\{H_\alpha \}_{\alpha \in {\mathcal {I}}}\) of \(L^2(\Omega ,\mu )\), where \(H_\alpha \) are constructed by means of Hermite orthogonal polynomials \(h_n\) and Hermite functions \(\xi _n\),

$$\begin{aligned} H_\alpha (\omega )=\prod _{n=1} ^\infty h_{\alpha _n}(\langle \omega ,\xi _n\rangle ),\quad \alpha =(\alpha _1,\alpha _2,\ldots , \alpha _n,\ldots )\in {\mathcal {I}},\quad \omega \in \Omega =S'({\mathbb {R}}). \end{aligned}$$
(2.1)

Then, every \(F\in L^2(\Omega ,\mu )\) can be represented via the so-called chaos expansion

$$\begin{aligned} F(\omega )=\sum _{\alpha \in {\mathcal {I}}} f_\alpha H_\alpha (\omega ), \quad \omega \in S'({\mathbb {R}}),\quad \sum _{\alpha \in {\mathcal {I}}} |f_\alpha |^2\alpha !<\infty ,\quad f_\alpha \in {\mathbb {R}},\quad \alpha \in {\mathcal {I}}. \end{aligned}$$

Denote by \(\varepsilon _k=(0,0,\ldots , 1, 0,0,\ldots ),\;k\in {\mathbb {N}}\), the multi-index with the entry 1 at the \(k\hbox {th}\) place. Denote by \({\mathcal {H}}_1\) the subspace of \(L^2(\Omega ,\mu )\) spanned by the polynomials \(H_{\varepsilon _k}(\cdot )\), \(k\in {\mathbb {N}}\). All elements of \({\mathcal {H}}_1\) are Gaussian random variables; the most prominent example of an associated Gaussian process is Brownian motion, given by the chaos expansion \(B(t,\omega ) = \sum _{k=1}^\infty \int _0^t \xi _k(s)ds\;H_{\varepsilon _k}(\omega ).\)

Denote by \({\mathcal {H}}_m\) the \(m\hbox {th}\) order chaos space, i.e. the closure of the linear subspace spanned by the orthogonal polynomials \(H_\alpha (\cdot )\) with \(|\alpha |=m\), \(m\in {\mathbb {N}}_0\). Then the Wiener–Itô chaos expansion theorem states that \(L^2(\Omega ,\mu )=\bigoplus _{m=0}^\infty {\mathcal {H}}_m\), where \({\mathcal {H}}_0\) is the set of constants in \(L^2(\Omega ,\mu )\). The expectation of a random variable is its orthogonal projection onto \({\mathcal {H}}_0\); hence it is given by \(E(F(\omega ))=f_{(0,0,\ldots )}\).
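
In terms of the chaos coefficients this gives completely explicit formulas for the first two moments: \(E(F)=f_{\mathbf{0 }}\) and, by orthogonality of the basis (\(\Vert H_\alpha \Vert ^2=\alpha !\)), \(\mathrm {Var}(F)=\sum _{\alpha \ne \mathbf{0 }}\alpha !\,|f_\alpha |^2\). A minimal sketch for truncated expansions (the helper names are ours):

```python
import math

def multi_factorial(alpha):
    """alpha! = prod_n alpha_n! for a multi-index given as a tuple."""
    out = 1
    for a in alpha:
        out *= math.factorial(a)
    return out

def mean(F):
    """E F = f_0: the orthogonal projection onto the constants H_0."""
    zero = tuple(0 for _ in next(iter(F)))
    return F.get(zero, 0.0)

def variance(F):
    """Var F = sum_{alpha != 0} alpha! f_alpha^2, since ||H_alpha||^2 = alpha!."""
    zero = tuple(0 for _ in next(iter(F)))
    return sum(multi_factorial(a) * c * c for a, c in F.items() if a != zero)
```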

It is well-known that the time-derivative of Brownian motion (white noise process) does not exist in the classical sense. However, changing the topology on \(L^2(\Omega ,\mu )\) to a weaker one, T. Hida [18] defined spaces of generalized random variables containing the white noise as a weak derivative of the Brownian motion. We refer to [18, 19, 23] for white noise analysis (as an infinite dimensional analogue of the Schwartz theory of deterministic generalized functions).

Let \((2{\mathbb {N}})^{\alpha }=\prod _{n=1}^\infty (2n)^{\alpha _n},\quad \alpha =(\alpha _1,\alpha _2,\ldots , \alpha _n,\ldots )\in {\mathcal {I}}.\) We will often use the fact that the series \(\sum _{\alpha \in \mathcal I}(2{\mathbb {N}})^{-p\alpha }\) converges for \(p>1\) [19, Proposition 2.3.3]. Define the Banach spaces

$$\begin{aligned} (S)_{1,p} =\{F=\sum _{\alpha \in {\mathcal {I}}}f_\alpha {H_\alpha }\in L^2(\Omega ,\mu ):\; \Vert F\Vert ^2_{(S)_{1,p}} =\sum _{\alpha \in {\mathcal {I}}}(\alpha !)^2 |f_\alpha |^2(2{\mathbb {N}})^{p\alpha }<\infty \},\quad p\in {\mathbb {N}}_0. \end{aligned}$$

Their topological dual spaces are given by

$$\begin{aligned} (S)_{-1,-p} =\{F=\sum _{\alpha \in {\mathcal {I}}}f_\alpha {H_\alpha }:\; \Vert F\Vert ^2_{(S)_{-1,-p}}= \sum _{\alpha \in {\mathcal {I}}}|f_\alpha |^2(2{\mathbb {N}})^{-p\alpha }<\infty \},\quad p\in {\mathbb {N}}_0. \end{aligned}$$

The Kondratiev space of generalized random variables is \((S)_{-1} =\bigcup _{p\in {\mathbb {N}}_0}(S)_{-1,-p}\) endowed with the inductive topology. It is the strong dual of \((S)_{1} =\bigcap _{p\in {\mathbb {N}}_0}(S)_{1,p}\), called the Kondratiev space of test random variables which is endowed with the projective topology. Thus,

$$\begin{aligned} (S)_{1} \subseteq L^2(\Omega ,\mu ) \subseteq (S)_{-1} \end{aligned}$$

forms a Gelfand triplet.
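
The weights \((2{\mathbb {N}})^{\pm p\alpha }\) are easy to experiment with numerically: the sum over \({\mathcal {I}}\) factorizes into geometric series, \(\sum _{\alpha \in {\mathcal {I}}}(2{\mathbb {N}})^{-p\alpha }=\prod _{n\geqslant 1}\big (1-(2n)^{-p}\big )^{-1}\), which converges precisely for \(p>1\). A small sketch comparing a truncated direct sum with the product formula (our own illustration; the helper names are ours):

```python
from itertools import product

def weight(alpha, p):
    """(2N)^{-p*alpha} = prod over n of (2n)^{-p*alpha_n}."""
    w = 1.0
    for n, a in enumerate(alpha, start=1):
        w *= (2.0 * n) ** (-p * a)
    return w

def direct_sum(p, dims, max_order):
    """Truncated direct sum of (2N)^{-p alpha}: alpha has `dims` components,
    each bounded by `max_order` (the tail is a negligible geometric remainder)."""
    return sum(weight(alpha, p)
               for alpha in product(range(max_order + 1), repeat=dims))

def product_formula(p, dims):
    """prod_{n=1}^{dims} 1/(1 - (2n)^{-p}): each factor is a geometric series."""
    out = 1.0
    for n in range(1, dims + 1):
        out *= 1.0 / (1.0 - (2.0 * n) ** -p)
    return out
```

For \(p=1\) the factors equal \(1+\tfrac{1}{2n-1}\), so the infinite product diverges by comparison with the harmonic series; this is consistent with the requirement \(p>1\) in [19, Proposition 2.3.3].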

The time-derivative of Brownian motion exists in the generalized sense and belongs to the Kondratiev space \((S)_{-1,-p}\) for \(p>\frac{5}{12}\) [23, p. 21]. We refer to it as white noise; its formal expansion is given by \(W(t,\omega ) = \sum _{k=1}^\infty \xi _k(t)H_{\varepsilon _k}(\omega ).\)

In [40] we extended the definition of stochastic processes to processes of the chaos expansion form \(U(t,\omega )=\sum _{\alpha \in {\mathcal {I}}}u_\alpha (t) {H_\alpha }(\omega )\), where the coefficients \(u_\alpha \) are elements of some Banach space X. We say that U is an X-valued generalized stochastic process, i.e. \(U(t,\omega )\in X\otimes (S)_{-1}\), if there exists \(p>0\) such that \(\Vert U\Vert _{X\otimes (S)_{-1,-p}}^2=\sum _{\alpha \in {\mathcal {I}}} \Vert u_\alpha \Vert _X^2(2{\mathbb {N}})^{-p\alpha }<\infty \).

The notation \(\otimes \) is used for the completion of a tensor product with respect to the \(\pi \)-topology (see [50]). We note that if one of the spaces involved in the tensor product is nuclear, then the completions with respect to the \(\pi \)- and the \(\varepsilon \)-topology coincide. It is known that \((S)_1\) and \((S)_{-1}\) are nuclear spaces [19, Lemma 2.8.2], thus in all forthcoming identities \(\otimes \) can be equivalently interpreted as the \({\widehat{\otimes }}_\pi \)- or \({\widehat{\otimes }}_\varepsilon \)-completed tensor product. In particular, when dealing with tensor products with \((S)_{1,p}\) and \((S)_{-1,-p}\), we work with the \(\pi \)-topology.

The Wick product of two stochastic processes \(F=\sum _{\alpha \in {\mathcal {I}}}f_\alpha H_\alpha \) and \(G=\sum _{\beta \in {\mathcal {I}}}g_\beta H_\beta \in X\otimes (S)_{-1}\) is given by

$$\begin{aligned} F\lozenge G = \sum _{\gamma \in \mathcal I}\sum _{\alpha +\beta =\gamma }f_\alpha g_\beta H_\gamma = \sum _{\alpha \in {\mathcal {I}}}\sum _{\beta \leqslant \alpha } f_\beta g_{\alpha -\beta } H_\alpha , \end{aligned}$$

and the \(n\hbox {th}\) Wick power is defined by \(F^{\lozenge n}=F^{\lozenge (n-1)}\lozenge F\), \(F^{\lozenge 0}=1\). Note that \(H_{n\varepsilon _k}=H_{\varepsilon _k}^{\lozenge n}\) for \(n\in {\mathbb {N}}_0\), \(k\in {\mathbb {N}}\). The Wick product always exists and results in a new element of \(X\otimes (S)_{-1}\); moreover, it satisfies \(E(F\lozenge G)=E(F)E(G)\). The ordinary product of two generalized stochastic processes does not always exist, and \(E(F\cdot G)=E(F)E(G)\) would hold only if F and G were uncorrelated.
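
Since the Wick product acts as a convolution of chaos coefficients, it is immediate to implement for truncated expansions. A minimal sketch (our own illustration; dict-based multi-indices, helper names ours) which also checks \(E(F\lozenge G)=E(F)E(G)\) and \(H_{\varepsilon _k}^{\lozenge n}=H_{n\varepsilon _k}\):

```python
def wick(F, G):
    """Wick product of chaos expansions stored as dicts multi-index -> coeff:
    (F ◊ G)_gamma = sum over alpha + beta = gamma of f_alpha * g_beta."""
    out = {}
    for a, fa in F.items():
        for b, gb in G.items():
            gamma = tuple(x + y for x, y in zip(a, b))
            out[gamma] = out.get(gamma, 0.0) + fa * gb
    return out

def wick_power(F, n, dim):
    """F^{◊n}, with F^{◊0} = 1 (the constant H_0)."""
    out = {(0,) * dim: 1.0}
    for _ in range(n):
        out = wick(out, F)
    return out
```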

One particularly important choice for the Banach space X is \(X=C^k[0,T]\), \(k\in {\mathbb {N}}\). We proved in [41] that differentiation of a stochastic process can be carried out componentwise in the chaos expansion: since \((S)_{-1}\) is a nuclear space, it holds that \(C^k([0,T],(S)_{-1})=C^k[0,T]\otimes (S)_{-1}\). This means that a stochastic process \(U(t,\omega )\) is k times continuously differentiable if and only if all of its coefficients \(u_\alpha (t)\), \(\alpha \in {\mathcal {I}}\), are in \(C^k[0,T]\).

The same holds for Banach space valued stochastic processes, i.e. elements of \(C^k([0,T],X)\otimes (S)_{-1}\), where X is an arbitrary Banach space. By the nuclearity of \((S)_{-1}\), these processes can be regarded as elements of the tensor product space

$$\begin{aligned} C^k([0,T],X\otimes (S)_{-1})=C^k([0,T],X)\otimes (S)_{-1}=\bigcup _{p=0}^{\infty }C^k([0,T],X)\otimes (S)_{-1,-p}. \end{aligned}$$

In order to solve (1.1) and (1.2) we will choose some special Banach spaces, for example if U is an open subset of \({\mathbb {R}}^d\), then some appropriate choices may be \(X=L^2(U)\), the Sobolev spaces \(X=H_0^1(U)\), \(X=H^{-1}(U)\), \(X=H^{z,\zeta }({\mathbb {R}}^d)\), etc. depending on the input data in the SPDEs.

In general, the function spaces in which we will look for the solutions to (1.1) and (1.2) will be of the form

$$\begin{aligned} L^2(I,G_k)\otimes (S)_{-1}, \quad k\in {\mathbb {Z}}, \end{aligned}$$
(2.2)

or

$$\begin{aligned} \bigcap _{l\ge k\ge 0} C^k(I,G_k)\otimes (S)_{-1},\quad 1\le l\le \infty , \end{aligned}$$
(2.3)

where \(I\subset {\mathbb {R}}\) is an interval of the form [0, T] or \([0,\infty )\), and \(G_k\), \(k=0,1,2,\ldots ,l\), or \(k\in {\mathbb {Z}}_+\), are suitable Hilbert spaces (or Banach spaces) such that

$$\begin{aligned} \cdots \hookrightarrow G_{k+1}\hookrightarrow G_{k}\hookrightarrow \cdots \hookrightarrow G_1 \hookrightarrow G_{0}, \end{aligned}$$

where \(\hookrightarrow \) denotes dense continuous embeddings. We can also consider the topological duals of \(G_j\), \(j\in {\mathbb {Z}}_+\), denoted by \(G_{-j}\), respectively, and write

$$\begin{aligned} G_{0}\hookrightarrow G_{-1} \hookrightarrow G_{-2}\hookrightarrow \cdots \hookrightarrow G_{-k}\hookrightarrow G_{-(k+1)}\hookrightarrow \cdots . \end{aligned}$$

For example, if \(G_0=L^2(U)\), \(G_1=H_0^1(U)\), then \(G_{-1}=H^{-1}(U)\) and they form a Gelfand triple. Hence, one has

$$\begin{aligned} G_1\otimes (S)_{-1} \hookrightarrow G_0\otimes (S)_{-1} \hookrightarrow G_{-1}\otimes (S)_{-1}. \end{aligned}$$

In particular, for the spaces in (2.2) and in (2.3) we have, respectively,

$$\begin{aligned}&L^2(I,G_k)\otimes (S)_{-1} \simeq L^2(I,G_k\otimes (S)_{-1}) \simeq \bigcup _{r=0}^\infty L^2(I,G_k)\otimes (S)_{-1,-r}, \\&C^j(I,G_k)\otimes (S)_{-1} \simeq C^j(I,G_k\otimes (S)_{-1}) \simeq \bigcup _{r=0}^\infty C^j(I,G_k)\otimes (S)_{-1,-r}. \end{aligned}$$

2.2 Stochastic Operators and Differential Operators with Stochastic Coefficients

Let X be a Banach space endowed with the norm \(\Vert \cdot \Vert _X\). Consider \(X\otimes (S)_{-1}\) with elements \(u=\sum _{\alpha \in {\mathcal {I}}}u_\alpha H_\alpha \) such that \(\sum _{\alpha \in {\mathcal {I}}}\Vert u_\alpha \Vert ^2_X(2{\mathbb {N}})^{-p\alpha }<\infty \) for some \(p\geqslant 0\). Let \(D\subset X\) be a dense subset of X endowed with the norm \(\Vert \cdot \Vert _D\), and let \(A_\alpha :D\rightarrow X\), \(\alpha \in {\mathcal {I}}\), be a family of linear operators on this common domain D. Assume that each \(A_\alpha \) is bounded, i.e.,

$$\begin{aligned} \Vert A_\alpha \Vert _{{\mathcal {L}}(D,X)}={\mathrm {sup}}\{\Vert A_\alpha (x)\Vert _X:\; \Vert x\Vert _D\leqslant 1\}<\infty . \end{aligned}$$

In the case \(D=X\), we will write \({\mathcal {L}}(X)\) instead of \({\mathcal {L}}(D,X)\). The family of operators \(A_\alpha \), \(\alpha \in {\mathcal {I}}\), gives rise to a stochastic operator \(\mathbf{A }\lozenge :D\otimes (S)_{-1}\rightarrow X\otimes (S)_{-1}\), which acts in the following manner:

$$\begin{aligned} \mathbf{A }\lozenge u = \sum _{\gamma \in {\mathcal {I}}}\left( \sum _{\beta +\lambda =\gamma } A_\beta (u_\lambda )\right) H_\gamma . \end{aligned}$$

In the next two lemmas we provide two sufficient conditions that ensure that the stochastic operator \(\mathbf{A }\lozenge \) is well-defined. Both conditions rely on \(l^2\)- or \(l^1\)-type bounds with suitable weights. They are equivalent to the fact that the \(A_\alpha \), \(\alpha \in {\mathcal {I}}\), are polynomially bounded, but they provide finer estimates on the stochastic order (Kondratiev weight) of the domain and codomain of \(\mathbf{A }\lozenge \).

Lemma 2.1

If the operators \(A_\alpha \), \(\alpha \in {\mathcal {I}}\), satisfy \(\sum _{\alpha \in \mathcal I}\Vert A_\alpha \Vert ^2_{{\mathcal {L}}(D,X)}(2{\mathbb {N}})^{-r\alpha }<\infty \), for some \(r\geqslant 0\), then \(\mathbf{A }\lozenge \) is well-defined as a mapping \(\mathbf{A }\lozenge : D\otimes (S)_{-1,-p}\rightarrow X\otimes (S)_{-1,-(p+r+m)}\), \(m>1\).

Proof

For \(u\in D\otimes (S)_{-1,-p}\) and \(q=p+r+m\) we have

$$\begin{aligned}\begin{aligned}\sum _{\gamma \in {\mathcal {I}}}\Vert \sum _{\alpha +\beta =\gamma } A_\alpha (u_\beta ) \Vert _X^2 (2{\mathbb {N}})^{-q\gamma }&\leqslant \sum _{\gamma \in {\mathcal {I}}}\Big [\sum _{\alpha +\beta =\gamma } \Vert A_\alpha \Vert _{{\mathcal {L}}(D,X)}\Vert u_\beta \Vert _D\Big ]^2 (2{\mathbb {N}})^{-(p+r+m)\gamma }\\&\leqslant \sum _{\gamma \in {\mathcal {I}}}(2{\mathbb {N}})^{-m\gamma }\left( \sum _{\alpha +\beta =\gamma } \Vert A_\alpha \Vert _{{\mathcal {L}}(D,X)}^2(2{\mathbb {N}})^{-r\gamma }\right) \left( \sum _{\alpha +\beta =\gamma } \Vert u_\beta \Vert _D^2 (2{\mathbb {N}})^{-p\gamma }\right) \\&\leqslant M\left( \sum _{\alpha \in {\mathcal {I}}} \Vert A_\alpha \Vert _{{\mathcal {L}}(D,X)}^2(2{\mathbb {N}})^{-r\alpha }\right) \left( \sum _{\beta \in {\mathcal {I}}} \Vert u_\beta \Vert _D^2 (2{\mathbb {N}})^{-p\beta }\right) <\infty , \end{aligned} \end{aligned}$$

where we used the Cauchy–Schwarz inequality, the estimates \((2{\mathbb {N}})^{-r\gamma }\leqslant (2{\mathbb {N}})^{-r\alpha }\) and \((2{\mathbb {N}})^{-p\gamma }\leqslant (2{\mathbb {N}})^{-p\beta }\) for \(\alpha +\beta =\gamma \), and \(M=\sum _{\gamma \in {\mathcal {I}}}(2{\mathbb {N}})^{-m\gamma }<\infty \), for \(m>1\). \(\square \)

Lemma 2.2

If the operators \(A_\alpha \), \(\alpha \in {\mathcal {I}}\), satisfy \(\sum _{\alpha \in \mathcal I}\Vert A_\alpha \Vert _{{\mathcal {L}}(D,X)}(2{\mathbb {N}})^{-\frac{r}{2}\alpha }<\infty \), for some \(r\geqslant 0\), then \(\mathbf{A }\lozenge \) is well-defined as a mapping \(\mathbf{A }\lozenge : D\otimes (S)_{-1,-r}\rightarrow X\otimes (S)_{-1,-r}\).

Proof

For \(u\in D\otimes (S)_{-1,-r}\), we have by the generalized Minkowski inequality that

$$\begin{aligned} \begin{aligned}&\sum _{\gamma \in {\mathcal {I}}}\Vert \sum _{\alpha +\beta =\gamma } A_\alpha (u_\beta ) \Vert _X^2 (2{\mathbb {N}})^{-r\gamma }\leqslant \sum _{\gamma \in {\mathcal {I}}}\Big [\sum _{\alpha +\beta =\gamma } \Vert A_\alpha \Vert _{{\mathcal {L}}(D,X)}\Vert u_\beta \Vert _D\Big ]^2 (2{\mathbb {N}})^{-r\gamma }\\&\quad \leqslant \sum _{\gamma \in {\mathcal {I}}}\Big [\sum _{\alpha +\beta =\gamma } \Vert A_\alpha \Vert _{{\mathcal {L}}(D,X)}(2{\mathbb {N}})^{-\frac{r}{2}\alpha }\Vert u_\beta \Vert _D (2{\mathbb {N}})^{-\frac{r}{2}\beta }\Big ]^2\\&\quad \leqslant \left( \sum _{\alpha \in {\mathcal {I}}} \Vert A_\alpha \Vert _{{\mathcal {L}}(D,X)}(2{\mathbb {N}})^{-\frac{r}{2}\alpha }\right) ^2\sum _{\beta \in {\mathcal {I}}} \Vert u_\beta \Vert _D^2 (2{\mathbb {N}})^{-r\beta }<\infty . \end{aligned}\end{aligned}$$

\(\square \)

For example, let \(D=H_0^1({\mathbb {R}})\), \(X=L^2({\mathbb {R}})\) and \(A_\alpha = a_\alpha \cdot \partial _x\), where \(a_\alpha \in {\mathbb {R}}\) are scalars such that \(\sum _{\alpha \in \mathcal I}|a_\alpha |^2(2{\mathbb {N}})^{-r\alpha }<\infty \), for some \(r\geqslant 0\). Then \(\Vert A_\alpha \Vert _{{\mathcal {L}}(D,X)}\leqslant |a_\alpha |\), hence for \(u\in H_0^1({\mathbb {R}})\otimes (S)_{-1}\) we have that \(\mathbf{A }\lozenge u(x,\omega )=\sum _{\gamma \in {\mathcal {I}}}\left( \sum _{\alpha +\beta =\gamma }a_\alpha \cdot \partial _x(u_\beta (x))\right) H_\gamma (\omega )\) is a well-defined element of \(L^2({\mathbb {R}})\otimes (S)_{-1}\). A similar example may be constructed with \(D=L^2({\mathbb {R}})\) and \(X=H^{-1}({\mathbb {R}})\). Note that in these examples, we could have written the operator also in the form \(\mathbf{A }=a(\omega )\partial _x\), where \(a(\omega )=\sum _{\alpha \in {\mathcal {I}}}a_\alpha H_\alpha (\omega )\in (S)_{-1,-r}\).

Let us now consider the differential operator that governs Eq. (1.1). Let \(U\subset {\mathbb {R}}^d\), \(D=C^l(I,H_0^1(U))\), \(X=C^l(I,H^{-1}(U))\), \(l\geqslant 0\), and \(\mathbf{A }=\sum _{j=1}^d\partial _j \sum _{k=1}^d a_{jk}(t,x;\omega )\partial _k\), where \(a_{jk}(t,x;\omega )=\sum _{\alpha \in {\mathcal {I}}}a_{jk\alpha }(t,x)H_\alpha (\omega )\in C^l(I,L^\infty (U))\otimes (S)_{-1,-r}\). This operator acts in the following way: for an element \(u=\sum _{\beta \in {\mathcal {I}}}u_\beta H_\beta \in C^l(I,H_0^1(U))\otimes (S)_{-1}\) the action of \(\mathbf{A }\lozenge \) results in

$$\begin{aligned} \mathbf{A }\lozenge u= & {} \sum _{j=1}^d\partial _j \sum _{k=1}^d a_{jk}(t,x;\omega )\lozenge \partial _k u(t,x;\omega )\\= & {} \sum _{\gamma \in {\mathcal {I}}}\left( \sum _{\alpha +\beta =\gamma } \sum _{j=1}^d\partial _j \sum _{k=1}^d a_{jk\alpha }(t,x) \partial _k u_\beta (t,x)\right) H_\gamma (\omega ). \end{aligned}$$

Hence, we may identify the operator \(\mathbf{A }\lozenge \) with the family of operators \(A_\alpha : C^l(I,H_0^1(U))\rightarrow C^l(I,H^{-1}(U))\), \(\alpha \in {\mathcal {I}}\), where

$$\begin{aligned} A_\alpha = \sum _{j=1}^d\partial _j \sum _{k=1}^d a_{jk\alpha }(t,x) \partial _k,\qquad a_{jk\alpha }\in C^l(I,L^\infty (U)), \end{aligned}$$

and \(\Vert A_\alpha \Vert _{{\mathcal {L}}(D,X)} \leqslant \max _{1\leqslant j,k \leqslant d}\Vert a_{jk\alpha }\Vert _{C^l(I,L^\infty (U))}<\infty \), \(\alpha \in {\mathcal {I}}\). Hence,

$$\begin{aligned}&\sum _{\alpha \in {\mathcal {I}}}\Vert A_\alpha \Vert ^2_{{\mathcal {L}}(D,X)}(2{\mathbb {N}})^{-r\alpha }\leqslant \max _{1\leqslant j,k \leqslant d}\sum _{\alpha \in {\mathcal {I}}}\Vert a_{jk\alpha }\Vert ^2_{C^l(I,L^\infty (U))}(2{\mathbb {N}})^{-r\alpha }\\&\quad = \max _{1\leqslant j,k \leqslant d}\Vert a_{jk}\Vert ^2_{C^l(I,L^\infty (U))\otimes (S)_{-1,-r}}<\infty , \end{aligned}$$

and thus the operator \(\mathbf{A }\lozenge \) given by

$$\begin{aligned} \mathbf{A }\lozenge u (t,x;\omega ) = \sum _{\gamma \in {\mathcal {I}}}\left( \sum _{\beta +\lambda =\gamma } (A_\beta u_\lambda )(t,x)\right) H_\gamma (\omega ) \end{aligned}$$

is well-defined by Lemma 2.1.

Lemma 2.3

In particular, if the operator has deterministic coefficients, i.e. if \({{\tilde{\mathbf{A }}}}\) is of the form \({{\tilde{\mathbf{A }}}} =\sum _{j=1}^d\partial _j \sum _{k=1}^d a_{jk}(t,x)\partial _k\), with \(a_{jk}(t,x)\in C^l(I,L^\infty (U))\), then \({{\tilde{\mathbf{A }}}}\lozenge u ={{\tilde{\mathbf{A }}}}\cdot u\), \(u\in C^l(I,H_0^1(U))\otimes (S)_{-1}\).

Proof

Clearly, we may identify \({{\tilde{\mathbf{A }}}}\) with the family of operators given by \(A_{\mathbf{0 }}={{\tilde{\mathbf{A }}}}\) and \(A_\alpha =0\) for \(|\alpha |>0\), thus

$$\begin{aligned} {{\tilde{\mathbf{A }}}}\lozenge u(t,x;\omega )={{\tilde{\mathbf{A }}}}\cdot u(t,x;\omega ) = \sum _{\alpha \in {\mathcal {I}}}{{\tilde{\mathbf{A }}}}u_\alpha (t,x) H_\alpha (\omega ), \end{aligned}$$

for all \(u(t,x;\omega )=\sum _{\alpha \in {\mathcal {I}}}u_\alpha (t,x) H_\alpha (\omega )\) and hence we obtain \({{\tilde{\mathbf{A }}}}:C^l(I,H_0^1(U))\otimes (S)_{-1,-r}\rightarrow C^l(I,H^{-1}(U))\otimes (S)_{-1,-r}\) for all \(r\geqslant 0\). \(\square \)

In Sect. 3 we will assume other types of conditions on the operator \(\mathbf{A }\lozenge \); in particular, if we want better regularity of the solutions, e.g. \(u\in C^l(I,L^2(U))\otimes (S)_{-1}\) instead of \(u\in C^l(I,H^{-1}(U))\otimes (S)_{-1}\), then we must assume that some of its components \(A_\alpha \), \(\alpha \in {\mathcal {I}}\), are differential operators of order one only. Precise conditions will be provided in Sect. 3.

Considering the differential operator \(\mathbf{L }\) that governs Eq. (1.2), we will make special choices for the domain D and range X involving the so-called weighted Sobolev spaces \(H^{z,\zeta }({\mathbb {R}}^d)\) and many other types of spaces that stem from pseudodifferential calculus. These will be introduced in the next section.

2.3 The Global SG Calculus of Pseudodifferential and Fourier Integral Operators

We here recall some basic definitions and facts about the SG calculus of pseudodifferential and Fourier integral operators, following standard material which appeared, e.g., in [4] and elsewhere (sometimes with slightly different notational choices). In the sequel we will often use the so-called Japanese bracket of \(y\in {\mathbb {R}}^d\), given by \(\langle y \rangle =\sqrt{1+|y|^2}\).

The class \(S ^{m,\mu }=S ^{m,\mu }({\mathbb {R}}^{d})\) of SG symbols of order \((m,\mu ) \in {\mathbb {R}}^2\) is given by all the functions \(a(x,\xi ) \in C^\infty ({\mathbb {R}}^d\times {\mathbb {R}}^d)\) with the property that, for any multi-indices \(\alpha ,\beta \in {\mathbb {N}}_0^d\), there exist constants \(C_{\alpha \beta }>0\) such that the conditions

$$\begin{aligned} |D_x^{\alpha } D_\xi ^{\beta } a(x, \xi )| \leqslant C_{\alpha \beta } \langle x\rangle ^{m-|\alpha |}\langle \xi \rangle ^{\mu -|\beta |}, \qquad (x, \xi ) \in {\mathbb {R}}^d \times {\mathbb {R}}^d, \end{aligned}$$
(2.4)

hold (see [11, 31, 38]). For \(m,\mu \in {\mathbb {R}}\), \(\ell \in {\mathbb {N}}_0\),

$$\begin{aligned} |\!|\!|a|\!|\!|^{m,\mu }_\ell = \max _{|\alpha +\beta |\le \ell }\sup _{x,\xi \in {\mathbb {R}}^d}\langle x\rangle ^{-m+|\alpha |} \langle \xi \rangle ^{-\mu +|\beta |} | \partial ^\alpha _x\partial ^\beta _\xi a(x,\xi )|, \quad a\in S^{m,\mu }, \end{aligned}$$

is a family of seminorms, defining the Fréchet topology of \(S^{m,\mu }\). The corresponding classes of pseudodifferential operators \({{\text {Op}}}(S ^{m,\mu })={{\text {Op}}}(S ^{m,\mu }({\mathbb {R}}^d))\) are given by

$$\begin{aligned} ({{\text {Op}}}(a)u)(x)=(a(\cdot ,D)u)(x)=(2\pi )^{-d}\int e^{\mathrm {i}x\xi }a(x,\xi ){\hat{u}}(\xi )\,d\xi , \quad a\in S^{m,\mu }({\mathbb {R}}^d),\ u\in {\mathcal {S}}({\mathbb {R}}^d), \end{aligned}$$
(2.5)

extended by duality to \({\mathcal {S}}^\prime ({\mathbb {R}}^d)\). The operators in (2.5) form a graded algebra with respect to composition, i.e.,

$$\begin{aligned} {{\text {Op}}}(S ^{m_1,\mu _1})\circ {{\text {Op}}}(S ^{m_2,\mu _2}) \subseteq {{\text {Op}}}\left( S ^{m_1+m_2,\mu _1+\mu _2}\right) . \end{aligned}$$

The symbol \(c\in S ^{m_1+m_2,\mu _1+\mu _2}\) of the composed operator \({{\text {Op}}}(a)\circ {{\text {Op}}}(b)\), \(a\in S ^{m_1,\mu _1}\), \(b\in S ^{m_2,\mu _2}\), admits the asymptotic expansion

$$\begin{aligned} c(x,\xi )\sim \sum _{\alpha }\frac{i^{|\alpha |}}{\alpha !}\,D^\alpha _\xi a(x,\xi )\, D^\alpha _x b(x,\xi ), \end{aligned}$$
(2.6)

which implies that the symbol c equals \(a\cdot b\) modulo \(S ^{m_1+m_2-1,\mu _1+\mu _2-1}\).
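The expansion (2.6) can be made concrete on polynomial toy symbols, where it terminates after finitely many terms. The sketch below is an illustration of ours, assuming the SymPy library; since \(D=-\mathrm {i}\partial \), the factor \(i^{|\alpha |}D_\xi ^\alpha D_x^\alpha \) reduces to \((-i)^{|\alpha |}\partial _\xi ^\alpha \partial _x^\alpha \). We compose \(a=x^2\xi \in S^{2,1}\) with \(b=x\xi ^2\in S^{1,2}\):

```python
import sympy as sp

x, xi = sp.symbols('x xi', real=True)
a = x**2 * xi   # toy symbol of order (2, 1)
b = x * xi**2   # toy symbol of order (1, 2)

# Expansion (2.6); with D = -i*d, the alpha-th term is
# (-i)^alpha / alpha! * d_xi^alpha a * d_x^alpha b (finite for polynomials).
c = sum(((-sp.I) ** k / sp.factorial(k)) * sp.diff(a, xi, k) * sp.diff(b, x, k)
        for k in range(3))
c = sp.expand(c)

# The leading term a*b has order (3, 3); the correction c - a*b has order (2, 2),
# i.e. c equals a*b modulo a symbol one order lower in both x and xi.
print(c)                      # equals x**3*xi**3 - I*x**2*xi**2
print(sp.expand(c - a * b))   # the order (2, 2) correction term
```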

Note that

$$\begin{aligned} S ^{-\infty ,-\infty }=S ^{-\infty ,-\infty }({\mathbb {R}}^{d})= \bigcap _{(m,\mu ) \in {\mathbb {R}}^2} S ^{m,\mu } ({\mathbb {R}}^{d}) ={\mathcal {S}}({\mathbb {R}}^{2d}). \end{aligned}$$

For any \(a\in S^{m,\mu }\), \((m,\mu )\in {\mathbb {R}}^2\), \({{\text {Op}}}(a)\) is a linear continuous operator from \({\mathcal {S}}({\mathbb {R}}^d)\) to itself, extending to a linear continuous operator from \({\mathcal {S}}^\prime ({\mathbb {R}}^d)\) to itself, and from \(H^{z,\zeta }({\mathbb {R}}^d)\) to \(H^{z-m,\zeta -\mu }({\mathbb {R}}^d)\), where \(H^{z,\zeta }=H^{z,\zeta }({\mathbb {R}}^d)\), \((z,\zeta ) \in {\mathbb {R}}^2\), denotes the Sobolev–Kato (or weighted Sobolev) space

$$\begin{aligned} H^{z,\zeta }({\mathbb {R}}^d)= \{u \in {\mathcal {S}}^\prime ({\mathbb {R}}^{d}) :\Vert u\Vert _{z,\zeta }= \Vert {\langle \cdot \rangle }^z\langle D \rangle ^\zeta u\Vert _{L^2}< \infty \}, \end{aligned}$$
(2.7)

(here \(\langle D \rangle ^\zeta \) is understood as a pseudodifferential operator) with the naturally induced Hilbert norm. When \(z\ge z^\prime \) and \(\zeta \ge \zeta ^\prime \), the continuous embedding \(H^{z,\zeta }\hookrightarrow H^{z^\prime ,\zeta ^\prime }\) holds true. It is compact when \(z>z^\prime \) and \(\zeta >\zeta ^\prime \). Since \(H^{z,\zeta }=\langle \cdot \rangle ^z\,H^{0,\zeta }=\langle \cdot \rangle ^z\, H^\zeta \), with \(H^\zeta \) the usual Sobolev space of order \(\zeta \in {\mathbb {R}}\), we find \(\zeta >k+\dfrac{d}{2} \Rightarrow H^{z,\zeta }\hookrightarrow C^k({\mathbb {R}}^d)\), \(k\in {\mathbb {N}}_0\). One actually finds

$$\begin{aligned}&\bigcap _{z,\zeta \in {\mathbb {R}}}H^{z,\zeta }({\mathbb {R}}^d)=H^{\infty ,\infty }({\mathbb {R}}^d)={\mathcal {S}}({\mathbb {R}}^d),\nonumber \\&\bigcup _{z,\zeta \in {\mathbb {R}}}H^{z,\zeta }({\mathbb {R}}^d)=H^{-\infty ,-\infty }({\mathbb {R}}^d)={\mathcal {S}}^\prime ({\mathbb {R}}^d), \end{aligned}$$
(2.8)

as well as, for the space of rapidly decreasing distributions (see [6] and [45, Chap. VII, § 5]),

$$\begin{aligned} {\mathcal {S}}^\prime ({\mathbb {R}}^d)_\infty =\bigcap _{z\in {\mathbb {R}}}\bigcup _{\zeta \in {\mathbb {R}}}H^{z,\zeta }({\mathbb {R}}^d). \end{aligned}$$
(2.9)
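Definition (2.7) lends itself to a direct numerical realization in dimension \(d=1\): \(\langle D\rangle ^\zeta \) is a Fourier multiplier, computable with the FFT. The following sketch (a numerical illustration of ours; the function name is ours) evaluates \(\Vert u\Vert _{z,\zeta }\) for a Gaussian, recovering \(\Vert u\Vert _{0,0}=\Vert u\Vert _{L^2}=\pi ^{1/4}\) and a larger value of \(\Vert u\Vert _{1,1}\), consistently with the embedding \(H^{1,1}\hookrightarrow H^{0,0}\):

```python
import numpy as np

def sobolev_kato_norm(u, x, z, zeta):
    """Discretization of ||<.>^z <D>^zeta u||_{L^2} from (2.7) on a uniform grid,
    with <D>^zeta realized as a Fourier multiplier via the FFT."""
    n, dx = len(x), x[1] - x[0]
    xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)              # dual (frequency) grid
    v = np.fft.ifft((1 + xi**2) ** (zeta / 2) * np.fft.fft(u))
    w = (1 + x**2) ** (z / 2) * v
    return np.sqrt(np.sum(np.abs(w) ** 2) * dx)           # Riemann-sum L^2 norm

x = np.linspace(-20, 20, 4096)
u = np.exp(-x**2 / 2)                                      # Gaussian test function
n00 = sobolev_kato_norm(u, x, 0, 0)
n11 = sobolev_kato_norm(u, x, 1, 1)
print(n00, n11)   # n00 = pi**0.25 ~ 1.3313, and n11 > n00
```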

The continuity property of the elements of \({{\text {Op}}}(S^{m,\mu })\) on the scale of spaces \(H^{z,\zeta }({\mathbb {R}}^d)\), \((m,\mu ),(z,\zeta )\in {\mathbb {R}}^2\), is expressed more precisely in the next theorem.

Theorem 2.4

([11, Chap. 3, Theorem 1.1]) Let \(a\in S^{m,\mu }({\mathbb {R}}^d)\), \((m,\mu )\in {\mathbb {R}}^2\). Then, for any \((z,\zeta )\in {\mathbb {R}}^2\), \({{\text {Op}}}(a)\in {\mathcal {L}}(H^{z,\zeta }({\mathbb {R}}^d),H^{z-m,\zeta -\mu }({\mathbb {R}}^d))\), and there exists a constant \(C>0\), depending only on \(d,m,\mu ,z,\zeta \), such that

$$\begin{aligned} \Vert {{\text {Op}}}(a)\Vert _{{\mathscr {L}}(H^{z,\zeta }({\mathbb {R}}^d), H^{z-m,\zeta -\mu }({\mathbb {R}}^d))}\le C|\!|\!|a|\!|\!|_{\left[ \frac{d}{2}\right] +1}^{m,\mu }, \end{aligned}$$
(2.10)

where [t] denotes the integer part of \(t\in {\mathbb {R}}\).

The class \({\mathcal {O}}(m,\mu )\) of the operators of order \((m,\mu )\) is introduced as follows, see, e.g., [11, Chap. 3, § 3].

Definition 2.5

A linear continuous operator \(A:{\mathcal {S}}({\mathbb {R}}^d)\rightarrow {\mathcal {S}}({\mathbb {R}}^d)\) belongs to the class \({\mathcal {O}}(m,\mu )\), \((m,\mu )\in {\mathbb {R}}^2\), of the operators of order \((m,\mu )\) if, for any \((z,\zeta )\in {\mathbb {R}}^2\), it extends to a linear continuous operator \(A_{z,\zeta }:H^{z,\zeta }({\mathbb {R}}^d)\rightarrow H^{z-m,\zeta -\mu }({\mathbb {R}}^d)\). We also define

$$\begin{aligned} {\mathcal {O}}(\infty ,\infty )=\bigcup _{(m,\mu )\in {\mathbb {R}}^2} {\mathcal {O}}(m,\mu ), \quad {\mathcal {O}}(-\infty ,-\infty )=\bigcap _{(m,\mu )\in {\mathbb {R}}^2} {\mathcal {O}}(m,\mu ). \end{aligned}$$

Remark 2.6

  1.

    Trivially, any \(A\in {\mathcal {O}}(m,\mu )\) admits a linear continuous extension \(A_{\infty ,\infty }:{\mathcal {S}}^\prime ({\mathbb {R}}^d)\rightarrow {\mathcal {S}}^\prime ({\mathbb {R}}^d)\). In fact, in view of (2.8), it is enough to set \(A_{\infty ,\infty }|_{H^{z,\zeta }({\mathbb {R}}^d)}= A_{z,\zeta }\).

  2.

    Theorem 2.4 implies \({{\text {Op}}}(S^{m,\mu }({\mathbb {R}}^d))\subset {\mathcal {O}}(m,\mu )\), \((m,\mu )\in {\mathbb {R}}^2\).

  3.

    \({\mathcal {O}}(\infty ,\infty )\) and \({\mathcal {O}}(0,0)\) are algebras under operator multiplication, \({\mathcal {O}}(-\infty ,-\infty )\) is an ideal of both \({\mathcal {O}}(\infty ,\infty )\) and \({\mathcal {O}}(0,0)\), and \({\mathcal {O}}(m_1,\mu _1)\circ {\mathcal {O}}(m_2,\mu _2)\subset {\mathcal {O}}(m_1+m_2,\mu _1+\mu _2)\).

The following characterization of the class \({\mathcal {O}}(-\infty ,-\infty )\) is often useful.

Proposition 2.7

([11, Chap. 3, Prop. 3.4]) The class \({\mathcal {O}}(-\infty ,-\infty )\) coincides with \({{\text {Op}}}(S^{-\infty ,-\infty }({\mathbb {R}}^d))\) and with the class of smoothing operators, that is, the set of all linear continuous operators \(A:{\mathcal {S}}^\prime ({\mathbb {R}}^d)\rightarrow {\mathcal {S}}({\mathbb {R}}^d)\). These, in turn, coincide with the class of linear continuous operators A admitting a Schwartz kernel \(k_A\) belonging to \({\mathcal {S}}({\mathbb {R}}^{2d})\).

An operator \(A={{\text {Op}}}(a)\) and its symbol \(a\in S ^{m,\mu }\) are called elliptic (or \(S ^{m,\mu }\)-elliptic) if there exists \(R\ge 0\) such that

$$\begin{aligned} C\langle x\rangle ^{m} \langle \xi \rangle ^{\mu }\le |a(x,\xi )|,\qquad |x|+|\xi |\ge R, \end{aligned}$$

for some constant \(C>0\). If \(R=0\), \(a^{-1}\) is everywhere well-defined and smooth, and \(a^{-1}\in S ^{-m,-\mu }\). If \(R>0\), then \(a^{-1}\) can be extended to the whole of \({\mathbb {R}}^{2d}\) so that the extension \({\widetilde{a}}_{-1}\) satisfies \({\widetilde{a}}_{-1}\in S ^{-m,-\mu }\). An elliptic SG operator \(A \in {{\text {Op}}}(S ^{m,\mu })\) admits a parametrix \(A_{-1}\in {{\text {Op}}}(S ^{-m,-\mu })\) such that

$$\begin{aligned} A_{-1}A=I + R_1, \quad AA_{-1}= I+ R_2, \end{aligned}$$

for suitable \(R_1, R_2\in {{\text {Op}}}(S^{-\infty ,-\infty })\), where I denotes the identity operator. In such a case, A turns out to be a Fredholm operator on the scale of functional spaces \(H^{z,\zeta }\), \((z,\zeta )\in {\mathbb {R}}^2\).
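As an illustration of ours (assuming SymPy; the toy symbol is our choice), take the elliptic symbol \(a=(1+x^2)(1+\xi ^2)\in S^{2,2}\), for which \(R=0\) and \(a^{-1}\in S^{-2,-2}\) exactly. The first correction term of the composition expansion (2.6) for \({{\text {Op}}}(a)\circ {{\text {Op}}}(a^{-1})\) then has order \((-1,-1)\), which is the first step of the usual iterative parametrix construction:

```python
import sympy as sp

x, xi = sp.symbols('x xi', real=True)
a = (1 + x**2) * (1 + xi**2)   # elliptic toy symbol in S^{2,2} (here R = 0)
ainv = 1 / a                   # its exact inverse, a symbol in S^{-2,-2}

# First two terms of (2.6) for Op(a) o Op(a^{-1}); with D = -i*d the
# alpha-th term reads (-i)^alpha / alpha! * d_xi^alpha a * d_x^alpha a^{-1}.
c0 = sp.simplify(a * ainv)                      # zeroth term: identically 1
c1 = sp.simplify(-sp.I * sp.diff(a, xi) * sp.diff(ainv, x))
print(c0, c1)   # c1 = 4*I*x*xi/((1+x**2)*(1+xi**2)), a symbol of order (-1, -1)
```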

We now recall the class of SG-phase functions. A real valued function \(\varphi \in C^\infty ({\mathbb {R}}^{2d})\) belongs to the class \({\mathfrak {P}}\) of SG-phase functions if it satisfies the following conditions:

  1.

    \(\varphi \in S^{1,1}({\mathbb {R}}^{d})\);

  2.

    \(\langle \varphi '_x(x,\xi )\rangle \asymp \langle \xi \rangle \) as \(|(x,\xi )|\rightarrow \infty \);

  3.

    \(\langle \varphi '_\xi (x,\xi )\rangle \asymp \langle x\rangle \) as \(|(x,\xi )|\rightarrow \infty \).

For any \(a\in S^{m,\mu }\), \((m,\mu )\in {\mathbb {R}}^2\), and \(\varphi \in {\mathfrak {P}}\), the SG FIOs are defined, for \(u\in {\mathcal {S}}({\mathbb {R}}^{d})\), as

$$\begin{aligned} ({{\text {Op}}}_\varphi (a)u)(x)&= (2\pi )^{-d}\int e^{\mathrm {i}\varphi (x,\xi )} a(x,\xi ) {{\widehat{u}}}(\xi )\, d\xi , \end{aligned}$$
(2.11)

and

$$\begin{aligned} ({{\text {Op}}}^*_\varphi (a)u)(x)&= (2\pi )^{-d}\iint e^{\mathrm {i}(x\cdot \xi -\varphi (y,\xi ))} \overline{a(y,\xi )} u(y)\, dyd\xi . \end{aligned}$$
(2.12)

Here the operators \({{\text {Op}}}_\varphi (a)\) and \({{\text {Op}}}_\varphi ^*(a)\) are sometimes called SG FIOs of type I and type II, respectively, with symbol a and (SG-)phase function \(\varphi \). Note that a type II operator satisfies \({{\text {Op}}}^*_\varphi (a)={{\text {Op}}}_\varphi (a)^*\), that is, it is the formal \(L^2\)-adjoint of the type I operator \({{\text {Op}}}_\varphi (a)\).
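The adjoint relation between (2.11) and (2.12) can already be seen at the discrete level: with a common \(x=y\) grid and the same quadrature in \(\xi \), the matrix of the type II operator is exactly the conjugate transpose of the type I matrix. The sketch below is a numerical illustration of ours; the phase is a toy perturbation of \(x\cdot \xi \), chosen for simplicity rather than for literal membership in \({\mathfrak {P}}\) (the adjoint identity holds for any real-valued phase):

```python
import numpy as np

# Discretize the kernels of Op_phi(a) (type I) and Op*_phi(a) (type II) in d = 1
# by a plain quadrature in xi, on a common x = y grid.
phi = lambda x, xi: x * xi + 0.1 * np.sin(x) * xi      # toy real-valued phase
a   = lambda x, xi: 1.0 / (1.0 + x**2 + xi**2)         # toy symbol

x  = np.linspace(-3, 3, 40)
xi = np.linspace(-10, 10, 61)
dxi = xi[1] - xi[0]

X, Y, XI = x[:, None, None], x[None, :, None], xi[None, None, :]
K1 = (dxi / (2 * np.pi)) * np.sum(np.exp(1j * (phi(X, XI) - Y * XI)) * a(X, XI), axis=2)
K2 = (dxi / (2 * np.pi)) * np.sum(np.exp(1j * (X * XI - phi(Y, XI))) * np.conj(a(Y, XI)), axis=2)

# The type II matrix is the conjugate transpose of the type I matrix.
print(np.allclose(K2, K1.conj().T))   # -> True
```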

The following theorem summarizes composition results between SG pseudodifferential operators and SG FIOs of type I that we are going to use in the present paper, see [13] for proofs and composition results with SG FIOs of type II.

Theorem 2.8

([13]) Let \(\varphi \in {\mathfrak {P}}\) and assume \(b\in S^{m_1,\mu _1}({\mathbb {R}}^{d})\), \(a\in S^{m_2,\mu _2}({\mathbb {R}}^{d})\), \((m_j,\mu _j)\in {\mathbb {R}}^2\), \(j=1,2\). Then,

$$\begin{aligned} {{\text {Op}}}(b)\circ {{\text {Op}}}_\varphi (a)&= {{\text {Op}}}_\varphi (c_1+r_1) = {{\text {Op}}}_\varphi (c_1) \mod {{\text {Op}}}(S^{-\infty ,-\infty }({\mathbb {R}}^{d})), \\ {{\text {Op}}}_\varphi (a) \circ {{\text {Op}}}(b)&= {{\text {Op}}}_\varphi (c_2+r_2) = {{\text {Op}}}_\varphi (c_2) \mod {{\text {Op}}}(S^{-\infty ,-\infty }({\mathbb {R}}^{d})), \end{aligned}$$

for some \(c_j\in S^{m_1+m_2,\mu _1+\mu _2}({\mathbb {R}}^{d})\), \(r_j\in S^{-\infty ,-\infty }({\mathbb {R}}^{d})\), \(j=1,2\).

To consider the composition of SG FIOs of type I and type II, some additional hypotheses are needed, leading to the definition of the classes \({\mathfrak {P}}_\delta \) and \({\mathfrak {P}}_\delta (\lambda )\) of regular SG-phase functions (see [13]), which we now recall. Let \(\lambda \in [0,1)\) and \(\delta >0\). A function \(\varphi \in {\mathfrak {P}}\) belongs to the class \({\mathfrak {P}}_\delta (\lambda )\) if it satisfies the following conditions:

  1.

    \(\vert \det (\varphi ''_{x\xi })(x,\xi )\vert \geqslant \delta \), \(\forall (x,\xi )\);

  2.

    the function \(J(x,\xi ):=\varphi (x,\xi )-x\cdot \xi \) is such that

    $$\begin{aligned} \sup _{\substack{x,\xi \in {\mathbb {R}}^d\\ |\alpha +\beta |\leqslant 2}}\frac{|D_\xi ^\alpha D_x^\beta J(x,\xi )|}{\langle x\rangle ^{1-|\beta |}\langle \xi \rangle ^{1-|\alpha |}}\leqslant \lambda . \end{aligned}$$
    (2.13)

If only condition (1) holds, we write \(\varphi \in {\mathfrak {P}}_\delta \). Let \(\ell \in {\mathbb {N}}_0\). In [22] the following seminorms are defined:

$$\begin{aligned} \Vert J\Vert _{2,\ell }&:= \sum _{2\leqslant |\alpha +\beta |\leqslant 2+\ell }\sup _{x,\xi \in {\mathbb {R}}^{d}} \frac{|D_\xi ^\alpha D_x^\beta J(x,\xi )|}{\langle x\rangle ^{1-|\beta |}\langle \xi \rangle ^{1-|\alpha |}},\\ \Vert J\Vert _\ell &:= \sup _{\substack{x,\xi \in {\mathbb {R}}^d\\ |\alpha +\beta |\leqslant 1}}\frac{|D_\xi ^\alpha D_x^\beta J(x,\xi )|}{\langle x\rangle ^{1-|\beta |}\langle \xi \rangle ^{1-|\alpha |}}+\Vert J\Vert _{2,\ell }. \end{aligned}$$

Following [22], we define a subclass of the regular SG phase functions: let \(\lambda \in [0,1)\), \(\delta >0\), \(\ell \in {\mathbb {N}}_0\). A function \(\varphi \) belongs to the class \({\mathfrak {P}}_\delta (\lambda ,\ell )\) if \(\varphi \in {\mathfrak {P}}_\delta (\lambda )\) and \(\Vert J\Vert _\ell \leqslant \lambda \) for the corresponding J.
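For a concrete family, take \(\varphi (x,\xi )=x\cdot \xi +\varepsilon \langle x\rangle \langle \xi \rangle \) in \(d=1\), so that \(J=\varepsilon \langle x\rangle \langle \xi \rangle \). One checks by hand that \(\varphi ''_{x\xi }=1+\varepsilon \tfrac{x}{\langle x\rangle }\tfrac{\xi }{\langle \xi \rangle }\ge 1-\varepsilon \) and that the ratios in (2.13) are bounded by \(\varepsilon \), so \(\varphi \in {\mathfrak {P}}_\delta (\lambda )\) with \(\delta =1-\varepsilon \), \(\lambda =\varepsilon \). The sketch below (a numerical illustration of ours; the grid supremum is only a proxy for the true supremum, and the derivatives are coded by hand) probes two of these bounds:

```python
import numpy as np

eps = 0.05
jb = lambda t: np.sqrt(1.0 + t**2)    # Japanese bracket <t>

x = np.linspace(-50, 50, 801)
X, XI = np.meshgrid(x, x, indexing='ij')

# Condition (1): phi''_{x xi} = 1 + eps*(x/<x>)*(xi/<xi>) >= 1 - eps =: delta.
det = 1.0 + eps * (X / jb(X)) * (XI / jb(XI))

# Two ratios from (2.13) for J = eps*<x><xi>:
ratio0 = eps * jb(X) * jb(XI) / (jb(X) * jb(XI))   # |J| / (<x><xi>), == eps
ratio1 = eps * np.abs(X) / jb(X)                   # |d_x J| / (<x>^0 <xi>^1) <= eps

print(det.min(), ratio0.max(), ratio1.max())   # ~0.95, 0.05, < 0.05
```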

Theorem 2.9

([13]) Let \(\varphi \) be a regular SG phase function and \(a\in S^{m,\mu }({\mathbb {R}}^d)\), \((m,\mu )\in {\mathbb {R}}^2\). Then, \({{\text {Op}}}_\varphi (a)\in {\mathcal {O}}(m,\mu )\).

Remark 2.10

Similarly to Theorem 2.4, given \(a\in S^{m,\mu }({\mathbb {R}}^d)\), \((m,\mu )\in {\mathbb {R}}^2\), and a regular SG phase function \(\varphi \), for any \((z,\zeta )\in {\mathbb {R}}^2\) there exists a constant \(C>0\), depending only on \(d,m,\mu ,z,\zeta \), such that

$$\begin{aligned} \Vert {{\text {Op}}}_\varphi (a)\Vert _{{\mathscr {L}}(H^{z,\zeta }({\mathbb {R}}^d), H^{z-m,\zeta -\mu }({\mathbb {R}}^d))}\le C \cdot p_{dm\mu }( a,\varphi ), \end{aligned}$$

where \(p_{dm\mu }\) depends continuously on a finite collection of seminorms of a and \(\varphi \), as symbols in \(S^{m,\mu }({\mathbb {R}}^d)\) and \(S^{1,1}({\mathbb {R}}^d)\), respectively.

Let \(M\geqslant 2\). The composition of SG FIOs of type I \({{\text {Op}}}_{\varphi _j}(a_j)\), with regular SG-phase functions \(\varphi _j\in {\mathfrak {P}}_\delta (\lambda _j)\) and symbols \(a_j\in S^{m_j,\mu _j}\), \(j=1,\ldots ,M\), was studied in [4]. The result of such a composition is again an SG FIO, with a regular SG-phase function \(\varphi \) given by the so-called multi-product \(\varphi _1\sharp \cdots \sharp \varphi _M\) of the phase functions \(\varphi _j\), \(j=1,\ldots ,M\), and symbol a as in Theorem 2.11 below, a corollary of the main theorem in [4].

Theorem 2.11

([4]) Consider, for \(j=1,2, \dots , M\), \(M\ge 2\), the SG-FIOs of type I \({{\text {Op}}}_{\varphi _j}(a_j)\) with \(a_j\in S^{m_j,\mu _j}({\mathbb {R}}^{d})\), \((m_j,\mu _j)\in {\mathbb {R}}^2\), and \(\varphi _j\in {\mathfrak {P}}_\delta (\lambda _j)\) such that \(\lambda _1+\cdots +\lambda _M\le \lambda \le \frac{1}{4}\) for some sufficiently small \(\lambda >0\). Then, there exists \(a\in S^{m,\mu }({\mathbb {R}}^{d})\), \(m=m_1+\cdots +m_M\), \(\mu =\mu _1+\cdots +\mu _M\), such that, setting \(\phi =\varphi _1\sharp \cdots \sharp \varphi _M\), we have

$$\begin{aligned} {{\text {Op}}}_{\varphi _1}(a_1) \circ \cdots \circ {{\text {Op}}}_{\varphi _M}(a_M)={{\text {Op}}}_{\phi }(a). \end{aligned}$$

Moreover, for any \(\ell \in {\mathbb {N}}_0\) there exist \(\ell ^\prime \in {\mathbb {N}}_0\), \(C_\ell >0\) such that

$$\begin{aligned} |\!|\!|a|\!|\!|_\ell ^{m,\mu } \le C_\ell \prod _{j=1}^M |\!|\!|a_j |\!|\!|_{\ell ^\prime }^{m_j,\mu _j}. \end{aligned}$$
(2.14)

3 Solutions on Bounded Domains

First we consider the local problem (1.1), i.e., the problem on a bounded spatial domain. By decomposing u, f, \(a_{jk}\) into their chaos expansions and using the properties of the Wick product, we find that (1.1) is equivalent to an infinite system of equations:

$$\begin{aligned} (\partial ^2_t-{\mathbf {A}}\lozenge )u(t,x;\omega )&=(\partial ^2_t-{\mathbf {A}}\lozenge )\sum _{\gamma \in {\mathcal {I}}}u_\gamma (t,x)\cdot H_\gamma (\omega )\nonumber \\&=\sum _{\gamma \in {\mathcal {I}}}\left( (\partial ^2_t u_\gamma )(t,x)-\sum _{\beta +\lambda =\gamma }A_\beta (t,x)u_\lambda (t,x)\right) \cdot H_\gamma (\omega ) \nonumber \\&= \sum _{\gamma \in {\mathcal {I}}}f_\gamma (t,x)\cdot H_\gamma (\omega ) \nonumber \\&\Leftrightarrow \quad (\partial ^2_t u_\gamma )(t,x)-\sum _{\beta +\lambda =\gamma }A_\beta (t,x)u_\lambda (t,x)=f_\gamma (t,x), \quad \gamma \in {\mathcal {I}}, \end{aligned}$$
(3.1)

where \(\displaystyle A_\beta (t,x)=\sum _{j=1}^d\partial _j\sum _{k=1}^d a_{jk\beta }(t,x)\partial _k\), and \(a_{jk}(t,x;\omega )=\displaystyle \sum _{\gamma \in {\mathcal {I}}}a_{jk\gamma }(t,x)\cdot H_\gamma (\omega )\).

We also decompose the initial values:

$$\begin{aligned} u^0(x;\omega ) = \sum _{\gamma \in {\mathcal {I}}}u_\gamma ^0(x)H_\gamma (\omega ),\;u^1(x;\omega ) = \sum _{\gamma \in {\mathcal {I}}}u_\gamma ^1(x)H_\gamma (\omega ). \end{aligned}$$

Thus (1.1) reduces to the so called propagator system

$$\begin{aligned}{}[\partial ^2_t -A_{(0,0,0,\ldots )}]u_\gamma (t,x)=f_\gamma (t,x)+\sum _{0\leqslant \lambda <\gamma }A_{\gamma -\lambda }(t,x)u_{\lambda }(t,x), \quad \gamma \in {\mathcal {I}},\qquad \end{aligned}$$
(3.2)

together with the initial and boundary value conditions

$$\begin{aligned} u_\gamma (0,x)=u_\gamma ^0(x),\;\; (\partial _t u_\gamma )(0,x)=u_\gamma ^1(x),\;\; u_\gamma (t,x)|_{\partial U}=0, \quad \gamma \in {\mathcal {I}}. \end{aligned}$$
(3.3)

The initial boundary value problems given by (3.2) together with (3.3) can be solved recursively, by induction over the length of the multiindex \(\gamma \). To this end, we will use the following well-known result, see [28, Vol. 1, Chap. 3, Theorem 8.1].
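The triangular structure of the recursion can be illustrated on a scalar toy model (a sketch of ours, assuming NumPy: \(A_{(0,0,\ldots )}\) is replaced by the constant \(-1\), so the base equation is \(u''+u=\text {rhs}\), the lower-order \(A_\delta \) become small constants, and all names are ours). Each \(u_\gamma \) is computed by induction on \(|\gamma |\), using only previously computed coefficients on the right-hand side, as in (3.2):

```python
import numpy as np
from itertools import product

# Scalar toy analogue of the propagator system (3.2).
A = {(1, 0): 0.1}                                   # toy couplings A_delta
gammas = [g for g in product(range(3), repeat=2) if sum(g) <= 2]
gammas.sort(key=sum)                                # induction on |gamma|

T, dt = 1.0, 1e-3
steps = int(round(T / dt))
u = {}
for g in gammas:
    ug = np.zeros(steps + 1)
    ug[0] = 1.0 if g == (0, 0) else 0.0             # initial data u^0; u^1 = 0
    def rhs(n, g=g):
        # sum over 0 <= lambda < gamma of A_{gamma - lambda} * u_lambda
        s = 0.0
        for lam in gammas:
            d = tuple(gi - li for gi, li in zip(g, lam))
            if lam != g and min(d) >= 0 and d in A:
                s += A[d] * u[lam][n]
        return s
    ug[1] = ug[0] + 0.5 * dt**2 * (-ug[0] + rhs(0))  # Taylor start (u^1 = 0)
    for n in range(1, steps):                        # Stoermer-Verlet stepping
        ug[n + 1] = 2 * ug[n] - ug[n - 1] + dt**2 * (-ug[n] + rhs(n))
    u[g] = ug

# u_(0,0) = cos(t); u_(1,0) solves w'' + w = 0.1*cos(t), i.e. w = 0.05*t*sin(t).
print(u[(0, 0)][-1], u[(1, 0)][-1])   # ~cos(1) = 0.5403, ~0.05*sin(1) = 0.0421
```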

Theorem 3.1

([28]) Let V and H be Hilbert spaces, \(V\subset H\), V dense in H and separable. Let \(\Vert \cdot \Vert _H\) and \(\Vert \cdot \Vert _V\) denote the norms in H and V, respectively, and \([\cdot ,\cdot ]\) the sesquilinear scalar product in H. Identifying H and its antidual, we then have \(V\subset H\subset V^\prime \). If \(v\in V\) and \(f\in V^\prime \), \([f,v]\) denotes their pairing in the antiduality.

Let \(t\in I=[0,T]\), \(T>0\). Assume that \(a(t;u,v)\), \(t\in I\), is a family of continuous sesquilinear forms on V such that

$$\begin{aligned}&\exists c>0\; \forall t\in I\;\forall u,v\in V\; |a(t;u,v)|\le c\,\Vert u\Vert _V\,\Vert v\Vert _V; \end{aligned}$$
(3.4)
$$\begin{aligned}&{\left\{ \begin{array}{ll} \text {the function }I\ni t\mapsto a(t;u,v)\text { is, for any }u,v\in V, \\ \text {once continuously differentiable in }I; \end{array}\right. } \end{aligned}$$
(3.5)
$$\begin{aligned}&{\left\{ \begin{array}{ll} \forall t\in I\;\forall u,v\in V\;a(t;u,v)=\overline{a(t;v,u)},\text { and, for suitable }\beta \in {\mathbb {R}}, \alpha >0, \\ \forall t\in I\;\forall v\in V\;a(t;v,v)+\beta \Vert v\Vert _H^2\ge \alpha \Vert v\Vert _V^2. \end{array}\right. } \end{aligned}$$
(3.6)

Let \(A(t)\in {\mathcal {L}}(V,V^\prime )\) be the operator family defined, for any \(t\in I\), \(u\in V\), by \([A(t)u,v]=a(t;u,v)\) for all \(v\in V\).

Consider the Cauchy problem

$$\begin{aligned} {\left\{ \begin{array}{ll} (\partial ^2_t u)(t) + A(t)u(t)=f(t),\quad t\in I, \\ u(0)=u_0, (\partial _t u)(0)=u_1. \end{array}\right. } \end{aligned}$$
(3.7)

If \(f\in L^2(I,H)\), \(u_0\in V\), \(u_1\in H\), then there exists a unique solution u of (3.7) such that \(u\in L^2(I,V)\), \(\partial _t u\in L^2(I,H)\), \(\partial _t^2 u\in L^2(I,V^\prime )\). Moreover, the energy inequality

$$\begin{aligned} \Vert u(t)\Vert _V^2+\Vert (\partial _t u)(t)\Vert _H^2\le C\left( \Vert u_0\Vert _V^2+\Vert u_1\Vert _H^2+\int _I \Vert f(s)\Vert _H^2\,ds\right) \end{aligned}$$
(3.8)

holds true for any \(t\in I\), for a suitable \(C>0\) depending only on A(t) and T.

In addition to Theorem 3.1, the next result also holds true, see again, e.g., [28, Vol. 1, Chap. 3, Theorem 8.2].

Theorem 3.2

([28]) Assume the same hypotheses and notation of Theorem 3.1. Then, after possibly a modification on a set of measure zero, the solution of (3.7) satisfies

$$\begin{aligned} (u,\partial _t u)\in C(I,V)\times C(I,H). \end{aligned}$$
(3.9)

Moreover, the mapping \((f,u_0,u_1)\mapsto (u,\partial _t u)\), associating the right-hand side and initial data of (3.7) with its solution and first t-derivative, is a well-defined, linear and continuous map

$$\begin{aligned} {\mathcal {T}}_\mathrm {loc}:L^2(I,H)\times V \times H \rightarrow C(I,V)\times C(I,H). \end{aligned}$$

Corollary 3.3

Assume the same hypotheses and notation of Theorem 3.1. Assume also that \(f\in C(I,H)\). Then, after possibly a modification on a set of measure zero, the solution of (3.7) satisfies

$$\begin{aligned} u\in C(I,V)\cap C^1(I,H) \cap C^2(I,V^\prime ). \end{aligned}$$

We now state our first main theorem. The main assumptions are that the component of \({\mathbf {A}}\) associated with the multiindex \((0,0,\ldots )\) is the principal part of \({\mathbf {A}}\) actually governing the equation (assumption (A0)), and that all the subsequent terms of the expansion of \({\mathbf {A}}\) are, at most, operators of order one (assumption (A1)).

Theorem 3.4

Assume that \(U\subset {\mathbb {R}}^d\) is a bounded open subset with smooth boundary, that \(I=[0,T]\), \(T>0\), is a finite time interval, and that the operators \(A_\gamma \), \(\gamma \in {\mathcal {I}}\), appearing in (3.1) satisfy the following two assumptions:

  (A0)

    The coefficients involved in the principal part \(A_{(0,0,\ldots )}=\sum _{j=1}^d\partial _j\sum _{k=1}^d a_{jk(0,0,\ldots )}(t,x)\partial _k\) are \(a_{jk(0,0,\ldots )}\in C^\infty (\overline{I\times U})\) for all \(j,k=1,\dots ,d\). Moreover, the continuous sesquilinear form a(tuv) on \(H^1_0(U)\) defined by

    $$\begin{aligned} a(t;u,v)=\sum _{j,k=1}^d\int _U a_{jk(0,0,\ldots )}(t,x)\cdot (\partial _j u)(x)\cdot \overline{(\partial _k v)(x)}\,dx \end{aligned}$$

    satisfies (3.6) with \(V=H^1_0(U)\), \(H=L^2(U)\), \(\beta =0\), and a suitable \(\alpha >0\);

  (A1)

    The remaining operators \(A_\gamma \) for \(\gamma \ne (0,0,\ldots )\) are of the form

    $$\begin{aligned} \displaystyle A_\gamma =\sum _{j=1}^d\partial _j\sum _{k=1}^d a_{jk\gamma }(t,x)\partial _k \end{aligned}$$

    with \(a_{jk\gamma }\in C^1(I,L^\infty (U))\) such that \(a_{jk\gamma }=-a_{kj\gamma }\), \(j,k=1,\dots ,d\), and there exists \(p\geqslant 0\) such that

    $$\begin{aligned} \sum _{\begin{array}{c} \gamma \in {\mathcal {I}}\\ \gamma \ne (0,0,\ldots ) \end{array}} \Vert A_{\gamma }\Vert _{{\mathcal {L}}(V,H)} (2{\mathbb {N}})^{-\frac{p}{2}\gamma }<\infty . \end{aligned}$$
    (3.10)

Finally, set \(V'=H^{-1}(U)\), and assume that \(u_0\in V\otimes (S)_{-1,-p}\), \(u_1\in H\otimes (S)_{-1,-p}\), \(f\in C(I,H)\otimes (S)_{-1,-p}\cap L^2(I,H)\otimes (S)_{-1,-p}\). Then, (1.1) admits a unique solution \(u\in C(I,V)\otimes (S)_{-1,-p}\cap C^1(I,H)\otimes (S)_{-1,-p} \cap C^2(I,V')\otimes (S)_{-1,-p}\).
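The weight \((2{\mathbb {N}})^{-p\gamma }=\prod _k(2k)^{-p\gamma _k}\) appearing in (3.10) and in the \((S)_{-1,-p}\) norms satisfies \(\sum _{\gamma \in {\mathcal {I}}}(2{\mathbb {N}})^{-p\gamma }=\prod _{k\ge 1}\bigl (1-(2k)^{-p}\bigr )^{-1}<\infty \) precisely when \(p>1\). The following sketch (a numerical illustration of ours, truncated to K modes with entries at most J) compares a truncated sum with the closed product formula:

```python
import math
from itertools import product

p, K, J = 2.0, 6, 6   # weight exponent, number of modes, max entry per mode

def weight(gamma, p):
    # (2N)^{-p*gamma} = prod_k (2k)^{-p*gamma_k}, k = 1, 2, ...
    return math.prod((2 * (k + 1)) ** (-p * g) for k, g in enumerate(gamma))

partial = sum(weight(g, p) for g in product(range(J + 1), repeat=K))
closed = math.prod(1.0 / (1.0 - (2 * k) ** (-p)) for k in range(1, K + 1))
print(partial, closed)   # the truncated sum approaches the product over K modes
```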

Proof

Clearly, the hypotheses imply that \(a(t;u,v)\) also satisfies (3.4) and (3.5), so that Theorems 3.1 and 3.2, as well as Corollary 3.3, can be applied to (3.2). The solution to (3.2) can be written in the Duhamel form

$$\begin{aligned} u_\gamma (t) = E_0(t)u_\gamma ^0+E_1(t)u_\gamma ^1+\int _0^t E(t,s)\left( f_\gamma (s)+\sum _{0\leqslant \lambda <\gamma }A_{\gamma -\lambda }u_\lambda (s)\right) ds, \end{aligned}$$

where \(E_0(t)\), \(E_1(t)\), \(E(t,s)\) depend only on \(A_{(0,0,\ldots )}\), and \(\max \{\Vert E_0(t)\Vert ,\Vert E_1(t)\Vert ,\Vert E(t,s)\Vert \}\leqslant C\) by (3.8). Also, by the regularity of the solutions and the fact that all operators \(A_\gamma \) with \(\gamma \ne (0,0,\ldots )\) are, at most, of order one, for each \(\delta \in {\mathcal {I}}\) there exists a constant \(K_\delta >0\) (namely, \(K_\delta =\Vert A_\delta \Vert _{{\mathcal {L}}(V,H)}\)) such that \(\Vert A_\delta u_\lambda (t)\Vert _{H}\leqslant K_\delta \Vert u_\lambda (t)\Vert _V\), \(t\in I\), for all \(\lambda \in {\mathcal {I}}\). Now, by (3.8), for some other constant \(C>0\), depending only on \(A_{(0,0,\ldots )}\), H and V,

$$\begin{aligned}&\Vert u_\gamma (t)\Vert ^2_V\\&\quad \le \Vert u_\gamma (t)\Vert ^2_V+\Vert (\partial _t u_\gamma )(t)\Vert _H^2\\&\quad \leqslant C\left( \Vert u_\gamma ^0\Vert ^2_V+\Vert u_\gamma ^1\Vert ^2_H+2\int _0^t \Vert f_\gamma (s)\Vert ^2_H\; ds+2\int _0^t\left( \sum _{0\leqslant \lambda <\gamma }K_{\gamma -\lambda }\Vert u_\lambda (s)\Vert _V\right) ^2 ds\right) , \end{aligned}$$

that is,

$$\begin{aligned} \Vert u_\gamma \Vert ^2_{L^2(I,V)}&= \int _0^T \Vert u_\gamma (t)\Vert ^2_V\;dt \\&\leqslant C\left( T\Vert u_\gamma ^0\Vert ^2_V+T\Vert u_\gamma ^1\Vert ^2_H+2\int _0^T\left( \int _0^t \Vert f_\gamma (s)\Vert ^2_H\; ds\right) dt\right. \\&\quad \left. +2\int _0^T\left( \int _0^t\left( \sum _{0\leqslant \lambda <\gamma }K_{\gamma -\lambda }\Vert u_\lambda (s)\Vert _V\right) ^2 ds\right) dt\right) . \end{aligned}$$

Clearly,

$$\begin{aligned} \int _0^T\left( \int _0^t \Vert f_\gamma (s)\Vert ^2_H\; ds\right) dt\leqslant \int _0^T\left( \int _0^T \Vert f_\gamma (s)\Vert ^2_H\; ds\right) dt = T \Vert f_\gamma \Vert ^2_{L^2(I,H)}. \end{aligned}$$

Also, we observe that

$$\begin{aligned} \int _0^T\!\!\left( \int _0^t \!\!\left( \sum _{0{\leqslant }\lambda<\gamma }K_{\gamma -\lambda }\Vert u_\lambda (s)\Vert _V\right) ^2 ds\right) dt\leqslant T \int _0^T\left( \sum _{0{\leqslant }\lambda <\gamma }K_{\gamma -\lambda }\Vert u_\lambda (s)\Vert _V\right) ^2 ds. \end{aligned}$$

Thus, we find

$$\begin{aligned}&\sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma \Vert ^2_{L^2(I,V)}(2{\mathbb {N}})^{-p\gamma }\leqslant CT \Big (\sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma ^0\Vert _V^2(2{\mathbb {N}})^{-p\gamma }+\sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma ^1\Vert _H^2(2{\mathbb {N}})^{-p\gamma }\\&\quad + 2\sum _{\gamma \in {\mathcal {I}}}\Vert f_\gamma \Vert ^2_{L^2(I,H)}(2{\mathbb {N}})^{-p\gamma } \\&\quad +2\sum _{\gamma \in {\mathcal {I}}}\int _0^T\left( \sum _{0\leqslant \lambda <\gamma }K_{\gamma -\lambda }\Vert u_\lambda (s)\Vert _V\right) ^2 ds(2{\mathbb {N}})^{-p\gamma }\Big ). \end{aligned}$$

By the assumptions,

$$\begin{aligned}&M_0=\sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma ^0\Vert _V^2(2{\mathbb {N}})^{-p\gamma }<\infty , \quad M_1=\sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma ^1\Vert _H^2(2{\mathbb {N}})^{-p\gamma }<\infty \\&M_f=\sum _{\gamma \in {\mathcal {I}}}\Vert f_\gamma \Vert ^2_{L^2(I,H)}(2{\mathbb {N}})^{-p\gamma }<\infty . \end{aligned}$$

Also, by the generalized Hölder inequality, we obtain

$$\begin{aligned}&\sum _{\gamma \in {\mathcal {I}}}\int _0^T\left( \sum _{0\leqslant \lambda<\gamma } K_{\gamma -\lambda }\Vert u_\lambda (s)\Vert _V\right) ^2 ds(2{\mathbb {N}})^{-p\gamma } \\&\quad = \int _0^T\sum _{\gamma \in {\mathcal {I}}}\left( \sum _{0\leqslant \lambda <\gamma } K_{\gamma -\lambda }(2{\mathbb {N}})^{-\frac{p(\gamma -\lambda )}{2}}\Vert u_\lambda (s) \Vert _{V}(2{\mathbb {N}})^{-\frac{p\lambda }{2}}\right) ^2ds \\&\quad \leqslant \int _0^T\left( \sum _{\begin{array}{c} \delta \in {\mathcal {I}}\\ \delta \ne (0,0,\ldots ) \end{array}}K_{\delta } (2{\mathbb {N}})^{-\frac{p\delta }{2}}\right) ^2\sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma (s)\Vert ^2_{V}(2{\mathbb {N}})^{-p\gamma } ds\\&\quad = M_A^2\sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma \Vert ^2_{L^2(I,V)}(2{\mathbb {N}})^{-p\gamma }, \end{aligned}$$

where, by assumption (3.10),

$$\begin{aligned} M_A= \sum _{\begin{array}{c} \delta \in {\mathcal {I}}\\ \delta \ne (0,0,\ldots ) \end{array}} K_{\delta } (2{\mathbb {N}})^{-\frac{p\delta }{2}}<\infty . \end{aligned}$$

Thus,

$$\begin{aligned} \sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma \Vert ^2_{L^2(I,V)}(2{\mathbb {N}})^{-p\gamma }\leqslant CT\left( M_0+M_1+2M_f +2M_A^2 \sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma \Vert ^2_{L^2(I,V)}(2{\mathbb {N}})^{-p\gamma }\right) . \end{aligned}$$

Let us choose \({{\tilde{T}}}\) small enough so that \(1-2CM_A^2{{\tilde{T}}}>0\) and let \({{\tilde{I}}}_0=[0,{{\tilde{T}}}]\). Then,

$$\begin{aligned}\Vert u\Vert ^2_{L^2({{\tilde{I}}}_0,V)\otimes (S)_{-1,-p}}\leqslant \frac{C{{\tilde{T}}}(M_0+M_1+2M_f)}{1-2CM_A^2{{\tilde{T}}}}. \end{aligned}$$

We construct solutions on \([{{\tilde{T}}},2{{\tilde{T}}}]\), \([2{{\tilde{T}}},3{{\tilde{T}}}]\) etc. in a similar manner. On \({{\tilde{I}}}_1 = [{{\tilde{T}}},2{{\tilde{T}}}]\) one can rewrite the problem as:

$$\begin{aligned} \left\{ \begin{aligned}{}[\partial ^2_t -A_{(0,0,0,\ldots )}]v_\gamma (t,x)&=f_\gamma (t+{{\tilde{T}}},x)+\sum _{0\leqslant \lambda <\gamma }A_{\gamma -\lambda }(t,x)v_{\lambda }(t,x), \quad \gamma \in {\mathcal {I}},\quad t\in (0,{{\tilde{T}}}],\\ v_\gamma (0,x)&=u_\gamma ({{\tilde{T}}},x),\\ \partial _t v_\gamma (0,x)&=\partial _t u_\gamma ({{\tilde{T}}},x),\\ v_\gamma (t,x)|_{\partial U}&=0. \end{aligned} \right. \end{aligned}$$
(3.11)

Hence, \(v_\gamma (t,x)=u_\gamma (t+{{\tilde{T}}},x)\), \(t\in [0,{{\tilde{T}}}]\), and this implies that \(\Vert u_\gamma \Vert ^2_{L^2({{\tilde{I}}}_1,V)}=\Vert v_\gamma \Vert ^2_{L^2({{\tilde{I}}}_0,V)}.\) Thus, by a similar computation as on \({{\tilde{I}}}_0\), we find

$$\begin{aligned} \Vert u\Vert ^2_{L^2({{\tilde{I}}}_1,V)\otimes (S)_{-1,-p}}=\Vert v\Vert ^2_{L^2({{\tilde{I}}}_0,V)\otimes (S)_{-1,-p}}\leqslant \frac{C{{\tilde{T}}}(M_{0,1}+M_{1,1}+2M_{f,1})}{1-2CM_A^2{{\tilde{T}}}},\end{aligned}$$

where

$$\begin{aligned}&M_{0,1}=\sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma ({{\tilde{T}}},\cdot )\Vert _V^2(2{\mathbb {N}})^{-p\gamma }<\infty , M_{1,1}=\sum _{\gamma \in {\mathcal {I}}}\Vert \partial _t u_\gamma ({{\tilde{T}}},\cdot )\Vert _H^2(2{\mathbb {N}})^{-p\gamma }<\infty ,\\&M_{f,1}=\sum _{\gamma \in {\mathcal {I}}}\Vert f_\gamma \Vert ^2_{L^2({{\tilde{I}}}_1,H)}(2{\mathbb {N}})^{-p\gamma }<\infty . \end{aligned}$$

Note that \(I=[0,T]\) can be covered by finitely many intervals of the form \({{\tilde{I}}}_k=[k{{\tilde{T}}},(k+1){{\tilde{T}}}]\), say by \(r+1\) of them, \(k=0,\ldots ,r\). We construct piecewise solutions on each of these intervals, each one continuously extending the previous one, resulting in a solution on the entire interval [0, T]. Clearly,

$$\begin{aligned} \Vert u\Vert ^2_{L^2(I,V)\otimes (S)_{-1,-p}}&=\sum _{k=0}^r\Vert u\Vert ^2_{L^2({{\tilde{I}}}_k,V) \otimes (S)_{-1,-p}}\\&\leqslant \frac{C{{\tilde{T}}}}{1-2CM_A^2{{\tilde{T}}}} \sum _{k=0}^r\left( M_{0,k}+M_{1,k}+2M_{f,k}\right) \\&= \frac{C{{\tilde{T}}}}{1-2CM_A^2{{\tilde{T}}}} \Big (\Vert u^0\Vert ^2_{V\otimes (S)_{-1,-p}}+\Vert u^1\Vert ^2_{H\otimes (S)_{-1,-p}}\\&\quad +\sum _{k=1}^r( \Vert u(k{{\tilde{T}}},\cdot )\Vert ^2_{V\otimes (S)_{-1,-p}}+\Vert \partial _t u(k{{\tilde{T}}},\cdot )\Vert ^2_{H\otimes (S)_{-1,-p}}) \\&\qquad \qquad \qquad \qquad \qquad + 2\Vert f\Vert ^2_{L^2(I,H)\otimes (S)_{-1,-p}}\Big ). \end{aligned}$$

This establishes that the solution u belongs to \(L^2(I,V)\otimes (S)_{-1,-p}\). After a possible modification on a set of measure zero, as in Corollary 3.3, we obtain that u belongs to \(C(I,V)\otimes (S)_{-1,-p}\cap C^1(I,H)\otimes (S)_{-1,-p} \cap C^2(I,V')\otimes (S)_{-1,-p}\). \(\square \)

The following corollary is a straightforward consequence of the construction of the solution via the system (3.1). It establishes the stochastic unbiasedness of the solution: the expectation of the solution is the unique solution of the deterministic PDE obtained by taking expectations of all the stochastic coefficients and stochastic data.

Corollary 3.5

Assume that all conditions of Theorem 3.4 are valid. Then, the solution u of (1.1) has the expectation \(E(u)=u_{(0,0,\ldots )}\), which is the unique solution of the PDE

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t^2u_{(0,0,\ldots )}(t,x) -E({\mathbf {A}})u_{(0,0,\ldots )}(t,x)=E(f(t,x;\omega )) &{} (t,x)\in I\times U \\ u_{(0,0,\ldots )}(0,x)=E(u^0(x;\omega )), (\partial _t u_{(0,0,\ldots )})(0,x)=E(u^1(x;\omega )) &{} x \in U \\ u_{(0,0,\ldots )}(t,x)|_{\partial U}=0, \end{array}\right. } \end{aligned}$$
(3.12)

where \(E({\mathbf {A}})=A_{(0,0,\ldots )}=\sum _{j=1}^d\partial _j\sum _{k=1}^d a_{jk(0,0,\ldots )}(t,x)\partial _k\), \(E(f)=f_{(0,0,\ldots )}\), \(E(u^0)=u^0_{(0,0,\ldots )}\), \(E(u^1)=u^1_{(0,0,\ldots )}\).
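The identity \(E(u)=u_{(0,0,\ldots )}\) rests on the fact that \(E(H_\gamma )=0\) for \(\gamma \ne (0,0,\ldots )\), since the Fourier–Hermite polynomials of positive order have zero Gaussian mean. A quick Monte Carlo check of this underlying fact (a sketch of ours, in terms of the probabilists' Hermite polynomials \(h_n\)):

```python
import numpy as np

# h_n(Z), Z ~ N(0,1), has zero mean for n >= 1 -- the reason why E(H_gamma) = 0
# for gamma != 0, and hence E(u) = u_(0,0,...) in Corollary 3.5.
rng = np.random.default_rng(0)
Z = rng.standard_normal(1_000_000)

h1 = Z                # h_1(z) = z
h2 = Z**2 - 1         # h_2(z) = z^2 - 1
h3 = Z**3 - 3 * Z     # h_3(z) = z^3 - 3z

print(h1.mean(), h2.mean(), h3.mean())   # all close to 0
```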

4 Solutions on \({\mathbb {R}}^d\)

In this section we treat linear hyperbolic equations of arbitrary order \(m\in {\mathbb {N}}\), and linear hyperbolic systems of first order, whose coefficients are globally defined on the whole space \({\mathbb {R}}^d\).

4.1 Linear Hyperbolic Equations with Polynomially Bounded Coefficients

We first consider linear operators of the form

$$\begin{aligned} {\mathbf {L}}=D_t^m+\displaystyle \sum _{j=1}^m \sum _{|\alpha |\le j} a_{j\alpha }(t,x;\omega ) D_x^\alpha D_t^{m-j}= \sum _{j=0}^m \sum _{|\alpha |\le j} a_{j\alpha }(t,x;\omega ) D_x^\alpha D_t^{m-j}, \end{aligned}$$
(4.1)

where \(m\geqslant 1\), \(a_{m0}(t,x;\omega )\equiv 1\), \(\displaystyle a_{j\alpha }(t,x;\omega )=\sum _{\gamma \in {\mathcal {I}}}a_{j\alpha \gamma }(t,x)\cdot H_\gamma (\omega )\), \(a_{j\alpha \gamma }\in C^\infty ([0,T]\times {\mathbb {R}}^d)\), for \(|\alpha |\le j\), \(j=1,\dots ,m\), \(\gamma \in {\mathcal {I}}\), and, for all \(k\in {\mathbb {N}}_0\), \(\beta \in {\mathbb {N}}_0^d\), \(0\leqslant |\alpha |\leqslant j\), \(1\leqslant j\leqslant m\), \(\gamma \in {\mathcal {I}}\), there exists a constant \(C_{j\alpha \gamma k\beta }>0\) such that

$$\begin{aligned} |\partial ^k_t\partial ^\beta _x a_{j\alpha \gamma }(t,x)|\le C_{j\alpha \gamma k\beta } \langle x\rangle ^{|\alpha |-|\beta |},\quad (t,x)\in [0,T]\times {\mathbb {R}}^d. \end{aligned}$$

The operator \({\mathbf {L}}\) acts on \(\displaystyle u(t,x;\omega )=\sum _{\gamma \in {\mathcal {I}}}u_\gamma (t,x)\cdot H_\gamma (\omega )\) as

$$\begin{aligned} ({\mathbf {L}}\lozenge u)(t,x;\omega )= \sum _{\gamma \in {\mathcal {I}}}\left[ \sum _{\beta +\lambda =\gamma }(L_\beta u_\lambda )(t,x)\right] \cdot H_\gamma (\omega ), \end{aligned}$$

where

$$\begin{aligned} L_{(0,0,\ldots )}&=D_t^{m}+\sum _{j=1}^m \sum _{|\alpha |\le j} a_{j\alpha (0,0,\ldots )}(t,x) D_x^\alpha D_t^{m-j}, \end{aligned}$$
(4.2)
$$\begin{aligned} L_{\beta }&=\sum _{j=1}^m \sum _{|\alpha |\le j} a_{j\alpha \beta }(t,x) D_x^\alpha D_t^{m-j}, \quad \beta \not =(0,0,\ldots ). \end{aligned}$$
(4.3)
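The action above is a discrete convolution over multi-indices: the \(\gamma \)-th coefficient of \({\mathbf {L}}\lozenge u\) collects all products \(L_\beta u_\lambda \) with \(\beta +\lambda =\gamma \). The bookkeeping can be sketched in Python, with multi-indices modelled as fixed-length tuples and the operators \(L_\beta \) replaced by scalar multipliers purely for illustration (all names and values below are hypothetical):

```python
def wick_action(L, u):
    """gamma-coefficient of the Wick action: sum of L[beta]*u[lam] over beta+lam=gamma.

    L, u: dicts mapping multi-indices (fixed-length tuples) to coefficients;
    here the 'operators' L[beta] are plain scalars for illustration only.
    """
    out = {}
    for beta, a in L.items():
        for lam, v in u.items():
            gamma = tuple(b + l for b, l in zip(beta, lam))  # beta + lam, componentwise
            out[gamma] = out.get(gamma, 0) + a * v
    return out

# Toy data: L = L_(0,0) + 2*H_(1,0), u = 3*H_(0,0) + 4*H_(0,1).
L = {(0, 0): 1, (1, 0): 2}
u = {(0, 0): 3, (0, 1): 4}
coeffs = wick_action(L, u)
# coeffs == {(0, 0): 3, (0, 1): 4, (1, 0): 6, (1, 1): 8}
```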

Similarly to Sect. 3, where in (A1) of Theorem 3.4 we assumed that only the principal part of the operator is of second order and the remaining ones are of first order, we assume here that only \(L_{(0,0,\ldots )}\) is a differential operator of order m, while all the other operators \(L_{\beta }\) are of order \(m-1\) at most. Hence, the \(L_{\beta }\) take the form

$$\begin{aligned} L_{\beta }=\sum _{j=1}^m \sum _{|\alpha |\le j-1} a_{j\alpha \beta }(t,x) D_x^\alpha D_t^{m-j}, \quad \beta \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}. \end{aligned}$$
(4.4)

We also assume \(\displaystyle f(t,x;\omega )=\sum \nolimits _{\gamma \in {\mathcal {I}}}f_\gamma (t,x)\cdot H_\gamma (\omega )\), with \(\displaystyle f_\gamma \in \bigcap \nolimits _{k\ge 0}C^k([0,T], H^{s-k,\sigma -k})\), \(s,\sigma \in {\mathbb {R}}\), \(\gamma \in {\mathcal {I}}\).

The hyperbolicity of \({\mathbf {L}}\) means that the symbol \({\mathcal {L}}_m(t,x,\tau ,\xi )\) of the SG-principal part \({\mathcal {L}}_m\) of \({\mathbf {L}}\), defined here below, satisfies

$$\begin{aligned} {\mathcal {L}}_m(t,x,\tau ,\xi ):= \tau ^m-\displaystyle \sum _{j=1}^m\displaystyle \sum _{|\alpha |= j}{a}_{j\alpha (0,0,\ldots )}(t,x)\xi ^\alpha \tau ^{m-j} =\prod _{j=1}^m\left( \tau -\tau _j(t,x,\xi )\right) ,\nonumber \\ \end{aligned}$$
(4.5)

with \(\tau _j(t,x,\xi )\) real-valued, \(\tau _j\in C^\infty ([0,T];S^{1,1})\), \(j=1,\dots ,m\). The latter means that, for any \(\alpha ,\beta \in {\mathbb {N}}_0^d\), \(k\in {\mathbb {N}}_0\), there exists a constant \(C_{j k\alpha \beta }>0\) such that

$$\begin{aligned} |\partial _t^k\partial ^\alpha _x\partial ^\beta _\xi \tau _j(t,x,\xi )|\le C_{j k\alpha \beta }\langle x\rangle ^{1-|\alpha |}\langle \xi \rangle ^{1-|\beta |}, \end{aligned}$$

for \((t,x,\xi )\in [0,T]\times {\mathbb {R}}^{2d}\), \(j=1,\dots ,m\). The real solutions \(\tau _j=\tau _j(t,x,\xi )\), \(j=1,\dots ,m\), of the equation \({\mathcal {L}}_m(t,x,\tau ,\xi )=0\) with respect to \(\tau \) are usually called characteristic roots of the operator \({\mathbf {L}}\).

We will deal with the following three classes of equations of the form (1.2), and corresponding operators \({\mathbf {L}}\):

  1.

    strictly hyperbolic equations, that is, \({\mathcal {L}}_m\) satisfies (4.5) with real-valued, distinct and separated roots \(\tau _j\), \(j=1,\dots ,m\), in the sense that there exists a constant \(C>0\) such that

    $$\begin{aligned} |\tau _j(t,x,\xi )-\tau _k(t,x,\xi )|\geqslant C\langle x\rangle \langle \xi \rangle ,\quad \forall j\ne k,\ (t,x,\xi )\in [0,T]\times {\mathbb {R}}^{2d};\nonumber \\ \end{aligned}$$
    (4.6)
  2.

    hyperbolic equations with (roots of) constant multiplicities, that is, \({\mathcal {L}}_m\) satisfies (4.5) and the real-valued, characteristic roots can be divided into n groups (\(1\leqslant n\leqslant m\)) of distinct and separated roots, in the sense that, possibly after a reordering of the \(\tau _j\), \(j=1,\dots , m\), there exist \(l_1,\ldots ,l_n\in {\mathbb {N}}\) with \(l_1+\cdots +l_n=m\) and n sets

    $$\begin{aligned}&G_1=\{\tau _1=\cdots =\tau _{l_1}\},\quad G_2=\{\tau _{l_1+1}= \cdots =\tau _{l_1+l_2}\},\quad \ldots \\&\quad G_n=\{\tau _{m-l_n+1}=\cdots =\tau _{m}\}, \end{aligned}$$

    satisfying, for a constant \(C>0\),

    $$\begin{aligned}&\tau _j\in G_p,\tau _k\in G_q ,\ p\ne q,\ 1\leqslant p,q\leqslant n \nonumber \\&\quad \Rightarrow |\tau _j(t,x,\xi )-\tau _k(t,x,\xi )|\geqslant C\langle x\rangle \langle \xi \rangle , \quad \forall (t,x,\xi )\in [0,T]\times {\mathbb {R}}^{2d};\nonumber \\ \end{aligned}$$
    (4.7)

    notice that, in the case \(n=1\), we have only one group of m coinciding roots, that is, \({\mathcal {L}}_m\) admits a single real root of multiplicity m, while for \(n=m\) we recover the strictly hyperbolic case; the number \(l=\max _{j=1,\dots ,n}l_j\) is the maximum multiplicity of the roots of \({\mathcal {L}}_m\);

  3.

    hyperbolic equations with involutive roots, that is, \({\mathcal {L}}_m\) satisfies (4.5) with real-valued characteristic roots such that

    $$\begin{aligned}&{[}D_t-{{\text {Op}}}(\tau _j(t)), D_t-{{\text {Op}}}(\tau _k(t))]={{\text {Op}}}(a_{jk}(t))\,(D_t-{{\text {Op}}}(\tau _j(t)))\nonumber \\&+{{\text {Op}}}(b_{jk}(t))\,(D_t-{{\text {Op}}}(\tau _k(t)))+{{\text {Op}}}(c_{jk}(t)), \end{aligned}$$
    (4.8)

    for some \(a_{jk},b_{jk},c_{jk}\in C^\infty ([0,T],S^{0,0})\), \(j,k=1,\dots ,m\).

Remark 4.1

Recall that roots of constant multiplicities are always involutive, while the converse statement is not true in general, as shown, e.g., in [8].

Definition 4.2

We will say that the (linear) operator \({\mathbf {L}}\) in (4.1) and the associated Cauchy problem (1.2) are strictly (SG-)hyperbolic, weakly (SG-)hyperbolic with constant multiplicities, or weakly (SG-)hyperbolic with involutive roots, respectively, if such properties are satisfied by the roots of \({\mathcal {L}}_m\), as explained above.

The next proposition is a key result in the analysis of SG-hyperbolic Cauchy problems by means of the corresponding class of Fourier integral operators. Given a symbol \(\varkappa \in C([0,T]^2; S^{1,1})\), set \(\Delta _{T_0}=\{(s,t)\in [0,T_0]^2:0\le s\le t\le T_0\}\), \(0<T_0\le T\), and consider the eikonal equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t\varphi (t,s,x,\xi )=\varkappa (t,x,\varphi '_x(t,s,x,\xi )),&{} t\in [s,T_0], \\ \varphi (s,s,x,\xi )=x\cdot \xi ,&{} s\in [0,T_0), \end{array}\right. } \end{aligned}$$
(4.9)

with \(0<T_0\leqslant T\). By the theory developed in [1, 12], it is possible to prove that the following proposition holds true.

Proposition 4.3

([1, 12]) For some small enough \(T_0\in (0,T]\), Eq. (4.9) admits a unique solution \(\varphi \in C^1(\Delta _{T_0},S^{1,1}({\mathbb {R}}^d))\), satisfying \(J\in C^1(\Delta _{T_0},S^{1,1}({\mathbb {R}}^d))\) and

$$\begin{aligned} \partial _s\varphi (t,s,x,\xi )=-\varkappa (s,\varphi '_\xi (t,s,x,\xi ),\xi ), \end{aligned}$$
(4.10)

for any \((t,s)\in \Delta _{T_0}\). Moreover, for every \(\ell \in {\mathbb {N}}_0\) there exists \(\delta >0\), \(c_\ell \geqslant 1\) and \({\widetilde{T}}_\ell \in (0,T_0]\) such that \(\varphi (t,s,x,\xi )\in {\mathfrak {P}}_\delta (c_\ell |t-s|)\), with \(\Vert J\Vert _{2,\ell }\leqslant c_\ell |t-s|\) for all \((t,s)\in \Delta _{{\widetilde{T}}_\ell }\).
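For a constant-coefficient symbol, the eikonal equation (4.9) can be solved in closed form, and both (4.9) and (4.10) can then be checked numerically. Below is a stdlib-Python sketch for the hypothetical one-dimensional choice \(\varkappa (t,x,\xi )=c\,\xi \), whose exact solution is \(\varphi (t,s,x,\xi )=x\xi +c(t-s)\xi \); the sample point, step sizes and tolerances are arbitrary:

```python
c = 1.5  # hypothetical constant speed

def kappa(t, x, xi):
    return c * xi

def phi(t, s, x, xi):
    # Exact solution of (4.9) for kappa = c*xi: phi = x*xi + c*(t-s)*xi.
    return x * xi + c * (t - s) * xi

def d(f, var, args, h=1e-6):
    """Central finite difference of f in the variable at position `var`."""
    a_plus = list(args); a_plus[var] += h
    a_minus = list(args); a_minus[var] -= h
    return (f(*a_plus) - f(*a_minus)) / (2 * h)

t, s, x, xi = 0.7, 0.2, 1.3, -0.8
phi_t = d(phi, 0, [t, s, x, xi])
phi_s = d(phi, 1, [t, s, x, xi])
phi_x = d(phi, 2, [t, s, x, xi])
phi_xi = d(phi, 3, [t, s, x, xi])

eikonal_residual = phi_t - kappa(t, x, phi_x)   # equation (4.9)
dual_residual = phi_s + kappa(s, phi_xi, xi)    # equation (4.10)
initial_residual = phi(s, s, x, xi) - x * xi    # phi(s,s,x,xi) = x.xi
```

All three residuals vanish (up to floating-point rounding), as the proposition predicts in this trivial case.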

Remark 4.4

Of course, if additional regularity with respect to \(t\in [0,T]\) is fulfilled by the symbol \(\varkappa \) in the right-hand side of (4.9), this is reflected in a correspondingly increased regularity of the resulting solution \(\varphi \) with respect to \((t,s)\in \Delta _{T_0}\). Since here we are not dealing with problems concerning the t-regularity of the solution, we assume smooth t-dependence of the coefficients of \({\mathbf {L}}\).

In the approach we follow here, which is the same as that used in [14] and elsewhere, a further key result is the next proposition, an adapted version of the so-called Mizohata Lemma of Perfect Factorization, formulated for the SG-hyperbolic operator \(L_{(0,0,\ldots )}\) in the case of roots with constant multiplicities. Of course, it holds true also in the more restrictive case of strict hyperbolicity, which coincides with the situation where \(\displaystyle l=\max _{j=1,\dots ,n}l_j=1\Leftrightarrow n=m\).

Proposition 4.5

([14]) Let \({\mathbf {L}}\) be a hyperbolic operator with constant multiplicities \(l_{j}\), \(j=1,\dots ,n \le m\). Denote by \(\theta _j\in G_j\), \(j=1,\dots ,n\), the distinct real roots of \({\mathcal {L}}_m\) in (4.5). Then, it is possible to factor \(L_{(0,0,\ldots )}\) as

$$\begin{aligned} L_{(0,0,\ldots )} = L_{(0,0,\ldots )n} \cdots L_{(0,0,\ldots )1} + \sum _{j=1}^m {{\text {Op}}}(r_{j}(t)) D_{t}^{m-j}, \end{aligned}$$
(4.11)

with

$$\begin{aligned}&L_{(0,0,\ldots )j}= (D_{t} - {{\text {Op}}}(\theta _{j}(t)))^{l_{j}} + \sum _{k=1}^{l_{j}} {{\text {Op}}}(h_{jk}(t)) \, (D_{t} - {{\text {Op}}}(\theta _{j}(t)))^{l_{j}-k},\end{aligned}$$
(4.12)
$$\begin{aligned}&h_{jk} \in C^\infty ([0,T], S^{k-1, k-1}({\mathbb {R}}^d)), \quad r_{j} \in C^\infty ([0,T],S^{-\infty ,-\infty }({\mathbb {R}}^d)),\nonumber \\&\quad j=1, \dots , n, k = 1, \dots , l_{j}. \end{aligned}$$
(4.13)

Similarly to the local equations considered in the previous Sect. 3, Eq. (1.2) reduces to

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle [L_{(0,0,\ldots )}u_\gamma ](t,x)=f_\gamma (t,x)-\sum _{0\leqslant \lambda <\gamma }(L_{\gamma -\lambda }u_{\lambda })(t,x), \\ (D^j_t u_\gamma )(0,x)=u_\gamma ^j(x),\; j=0,\dots ,m-1,\quad \gamma \in {\mathcal {I}}. \end{array}\right. } \end{aligned}$$
(4.14)

Under the hypotheses of weak SG-hyperbolicity with constant multiplicities or with involutive roots, plus a suitable Levi condition, or of strict SG-hyperbolicity, it is possible to show that the Cauchy problem (4.14) can be solved recursively by induction on the length of the multiindex \(\gamma \). This follows from the next Theorem 4.6, which summarizes some of the main results proved in [1, 4, 8, 12, 14], applied to \(L_{(0,0,\ldots )}\).
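The reason the recursion works is the triangular structure of (4.14): the \(\gamma \)-th equation involves only coefficients \(u_\lambda \) with \(\lambda <\gamma \), which have already been determined. The following Python sketch mimics this mechanism with scalar stand-ins for the operators, an invertible stand-in for \(L_{(0,0,\ldots )}\), and a back-substitution check; none of this replaces the actual pseudo-differential calculus:

```python
from itertools import product

def multi_indices(length, max_order):
    """All multi-indices of given length with total order <= max_order."""
    return [g for g in product(range(max_order + 1), repeat=length)
            if sum(g) <= max_order]

def solve_triangular(L, f, idx):
    """Solve L_0 u_g = f_g - sum_{0 <= lam < g} L_{g-lam} u_lam by induction on |g|.

    L and f map multi-indices (tuples) to scalars; L[idx[0]] is the
    invertible stand-in for the operator L_(0,0,...).
    """
    zero = idx[0]
    u = {}
    for g in sorted(idx, key=sum):
        rhs = f.get(g, 0.0)
        for lam in idx:
            diff = tuple(a - b for a, b in zip(g, lam))
            if lam != g and all(c >= 0 for c in diff):
                rhs -= L.get(diff, 0.0) * u[lam]   # only lam < g contribute
        u[g] = rhs / L[zero]                       # 'invert' L_(0,0,...)
    return u

idx = multi_indices(2, 2)
L = {(0, 0): 2.0, (1, 0): 0.5, (0, 1): -1.0}       # hypothetical scalar 'operators'
f = {g: 1.0 for g in idx}
u = solve_triangular(L, f, idx)

# Back-substitution: sum_{beta+lam=gamma} L_beta u_lam must equal f_gamma.
residuals = {
    g: abs(sum(L.get(tuple(a - b for a, b in zip(g, lam)), 0.0) * u[lam]
               for lam in idx if all(a - b >= 0 for a, b in zip(g, lam))) - f[g])
    for g in idx
}
```

Ordering the indices by total order \(|\gamma |\) guarantees that every \(u_\lambda \) on the right-hand side is available when \(u_\gamma \) is computed, mirroring the induction in the text.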

Theorem 4.6

Let \({\mathbf {L}}\) be SG-hyperbolic of the form in (4.1), either strictly hyperbolic, weakly hyperbolic with constant multiplicities, or weakly hyperbolic with involutive characteristics. In the weakly hyperbolic cases, assume the following Levi condition:

  1.

    if \({\mathbf {L}}\) is hyperbolic with constant multiplicities, the symbol families \(h_{jk}\) from (4.12) satisfy \(h_{jk}\in C^\infty ([0,T], S^{0,0}({\mathbb {R}}^d))\), \(j=1,\dots ,n\), \(k=1,\dots ,l_j\);

  2.

    if \({\mathbf {L}}\) is hyperbolic with involutive roots, \(L_{(0,0,\ldots )}\) can be written in the product form (4.11) with the factors given by (4.12) and the corresponding symbol families \(h_{jk}\) satisfying \(h_{jk}\in C^\infty ([0,T], S^{0,0}({\mathbb {R}}^d))\), \(j=1,\dots ,n\), \(k=1,\dots ,l_j\).

Assume also that \(u_\gamma ^j\in H^{s+m-j-1,\sigma +m-j-1}({\mathbb {R}}^d)\), \(j=0,\dots ,m-1\), and \(\displaystyle g_\gamma \in \bigcap _{k\ge 0}C^k([0,T],H^{s-k,\sigma -k}({\mathbb {R}}^d))\), \(s,\sigma \in {\mathbb {R}}\), \(\gamma \in {\mathcal {I}}\). Then, there exists a time-horizon \(T^\prime \in (0,T]\) such that the Cauchy problems

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle L_{(0,0,\ldots )}u_\gamma (t,x)=g_\gamma (t,x), \\ (D^j_t u_\gamma )(0,x)=u_\gamma ^j(x),\; j=0,\dots ,m-1,\quad \gamma \in {\mathcal {I}}, \end{array}\right. } \end{aligned}$$
(4.15)

admit a unique solution \(\displaystyle u_\gamma \in \bigcap _{k\ge 0} C^k([0,T^\prime ],H^{s+m-l-k,\sigma +m-l-k}({\mathbb {R}}^d))\), where l equals the maximum multiplicity of the roots of \({\mathcal {L}}_m\) in the case of constant multiplicities (in particular, \(l=1\) in the case of strict hyperbolicity), and we set \(l=m\) in the case of weak hyperbolicity with involutive roots. More precisely, there exist operator families \(E_j(t)\), \(j=0,\dots ,m-1\), and \(E(t,s)\), \(t,s\in [0,T^\prime ]\), depending only on \(L_{(0,0,\ldots )}\), such that \(\displaystyle E_j\in \bigcap _{k\ge 0}C^k([0,T^\prime ],{\mathcal {O}}(l-j+k,l-j+k))\), \(j=0,\dots ,m-1\), \(\displaystyle E\in \bigcap _{k\ge 0}C^k([0,T^\prime ]\times [0,T^\prime ],{\mathcal {O}}(l-m+k,l-m+k))\), and

$$\begin{aligned} u_\gamma (t) = \sum _{j=0}^{m-1}E_j(t)u_\gamma ^j+ \int _0^t E(t,\tau ) g_\gamma (\tau )d\tau . \end{aligned}$$
(4.16)

Remark 4.7

  1.

    It follows that, for any \(j\in {\mathbb {N}}_0\),

    $$\begin{aligned} D^j_t u_\gamma \in C([0,T^\prime ], H^{s+m-l-j,\sigma +m-l-j}({\mathbb {R}}^d)), \end{aligned}$$

    and the mapping

    $$\begin{aligned} (g_\gamma ,u_\gamma ^0,\dots ,u_\gamma ^{m-1}) \mapsto (u_\gamma ,D_t u_\gamma , \dots ,D_t^{m}u_\gamma ), \end{aligned}$$

    associating the right-hand side and the initial data of (4.15) to the solution and its first m derivatives with respect to t, is a well-defined, linear and continuous application

    $$\begin{aligned}&{\mathcal {T}}_\mathrm {glob}:\left( \bigcap _{0\le k \le m-1}C^k([0,T^\prime ],H^{s-k,\sigma -k}({\mathbb {R}}^d))\right) \\&\times \left( {\mathop {\times }\limits _{j=0}^{m-1}} H^{s+m-j-1,\sigma +m-j-1}({\mathbb {R}}^d)\right) \rightarrow {\mathop {\times }\limits _{j=0}^{m}}C([0,T^\prime ], H^{s+m-l-j,\sigma +m-l-j}({\mathbb {R}}^d)), \end{aligned}$$

    uniformly with respect to \(\gamma \in {\mathcal {I}}\).

  2.

    The entries of the matrix-valued operator \({\mathcal {T}}_\mathrm {glob}\) can be expressed (modulo smoothing remainders) by means of finite linear combinations of SG-FIOs of the form \({{\text {Op}}}_{\varphi _j(t)}(a_j(t))\), with smooth families of regular phase functions \(\varphi _j(t)\), obtained as solutions of the eikonal equations (4.9) with the real, distinct characteristic roots \(\theta _j\) of \({\mathcal {L}}_m\) in place of \(\varkappa \), and suitable smooth amplitude families \(a_j(t)\), \(j=0,\dots ,n\); see [1, 4, 12, 14]. The continuity of \({\mathcal {T}}_\mathrm {glob}\) then follows by Theorem 2.9 and Remark 2.10. The uniform continuity with respect to \(\gamma \in {\mathcal {I}}\) is an immediate consequence of the fact that such SG-FIOs, as well as \(T^\prime >0\), depend only on \(L_{(0,0,\ldots )}\).

  3.

    By the continuity of \({\mathcal {T}}_\mathrm {glob}\), we obtain an energy-type estimate for the solution \(u_\gamma \) and its first m derivatives with respect to t, namely

    $$\begin{aligned}&\sum _{j=0}^{m}\Vert D_t^ju_\gamma \Vert _{C([0,T^\prime ],H^{s+m-l-j,\sigma +m-l-j}({\mathbb {R}}^d))} \nonumber \\&\le C \left( T^\prime \Vert g_\gamma \Vert _{\bigcap _{0\le k \le m-1}C^k([0,T^\prime ],H^{s-k,\sigma -k}({\mathbb {R}}^d))} +\sum _{j=0}^{m-1} \Vert u_\gamma ^j\Vert _{H^{s+m-j-1,\sigma +m-j-1}({\mathbb {R}}^d)} \right) ,\nonumber \\ \end{aligned}$$
    (4.17)

    for a suitable constant \(C>0\), independent of \(\gamma \in {\mathcal {I}}\).

We can now prove the second main result of the paper, Theorem 4.8.

Theorem 4.8

Assume that \({\mathbf {L}}\) in (1.2) is SG-hyperbolic of order \(m\in {\mathbb {N}}\), either strictly, weakly with constant multiplicities, or weakly with involutive roots, in the sense of Definition 4.2. Let \({\mathcal {L}}_m\) in (4.5) be the SG principal symbol of \({\mathbf {L}}\), in the sense that the remaining operators \(L_\delta \), \(\delta \not =(0,0,\ldots )\), defined in (4.4), are of the form

$$\begin{aligned} L_\delta =\sum _{j=l}^m\sum _{|\alpha |\le j-l} a_{j\alpha \delta }(t,x) D_x^\alpha D_t^{m-j}, \; \delta \not =(0,0,\ldots ), \end{aligned}$$

where l denotes the maximum multiplicity of the distinct characteristic roots of the principal symbol \({\mathcal {L}}_m\) in (4.5) (in particular, \(l=1\) for strictly hyperbolic operators, and \(l=m\) for weakly hyperbolic operators with involutive characteristics). In the weakly hyperbolic cases, assume also the corresponding Levi condition, as in Theorem 4.6.

We also assume that there exists \(r\geqslant 0\) such that

$$\begin{aligned} \sum _{\begin{array}{c} \gamma \in {\mathcal {I}}\\ \gamma \ne (0,0,\ldots ) \end{array}} \Vert L_{\gamma }\Vert _{{\mathcal {L}}(\, \bigcap _{0\le k \le m}C^k([0,T],H^{s+m-l-k,\sigma +m-l-k}({\mathbb {R}}^d)), \, \bigcap _{0\le k \le m}C^k([0,T],H^{s-k,\sigma -k}({\mathbb {R}}^d)) \,)} (2{\mathbb {N}})^{-\frac{r}{2}\gamma }<\infty . \nonumber \\ \end{aligned}$$
(4.18)

Finally, assume, for \((s,\sigma )\in {\mathbb {R}}^2\), \(u^j_0\in H^{s+m-j-1,\sigma +m-j-1}({\mathbb {R}}^d)\otimes (S)_{-1,-r}\), \(j=0,\dots ,m-1\), and \(\displaystyle f\in \bigcap _{k\ge 0} C^k([0,T],H^{s-k,\sigma -k}({\mathbb {R}}^d))\otimes (S)_{-1,-r}\).

Then, there exists a time-horizon \(T^\prime \in (0,T]\) such that the Cauchy problem (1.2) admits a unique solution \(\displaystyle u\in \bigcap _{k=0}^m C^k([0,T^\prime ],H^{s+m-l-k,\sigma +m-l-k}({\mathbb {R}}^d))\otimes (S)_{-1,-r}\).

Proof

By (4.16) in Theorem 4.6, with an argument analogous to the one followed in the proof of Theorem 3.4, we obtain an infinite dimensional system equivalent to (1.2), whose solutions are given by

$$\begin{aligned} u_\gamma (t) = \sum _{j=0}^{m-1}E_j(t)u_\gamma ^j+ \int _0^t E(t,s)\left( f_\gamma (s)-\sum _{0\leqslant \lambda <\gamma }L_{\gamma -\lambda }u_\lambda (s)\right) ds, \quad \gamma \in {\mathcal {I}}, \end{aligned}$$

where \(E_j(t)\), \(j=0,\dots ,m-1\), and \(E(t,s)\) depend only on \(L_{(0,0,\ldots )}\) and satisfy \(\displaystyle E_j\in \bigcap _{k\ge 0}C^k([0,T^\prime ],{\mathcal {O}}(l-j+k,l-j+k))\), \(j=0,\dots ,m-1\), and \(\displaystyle E\in \bigcap _{k\ge 0}C^k([0,T^\prime ]\times [0,T^\prime ],{\mathcal {O}}(l-m+k,l-m+k))\). Notice that, by the regularity of the solutions (cf. Remark 4.7) and the fact that all operators \(L_\gamma \) with \(\gamma \not =(0,0,\ldots )\) are, at most, of order \((m-l,m-l)\), Theorem 2.4 implies that for each \(\delta \in {\mathcal {I}}\) there exists a constant \(K_\delta >0\) such that, for all \(\lambda \in {\mathcal {I}}\), \(k=0,\dots ,m\),

$$\begin{aligned}&\Vert D^k_t L_\delta u_\lambda (t)\Vert _{H^{s-k,\sigma -k}}\\&= \left\| \sum _{p+q=k}\frac{k!}{p!\,q!}(D^p_t L_\delta ) [D_t^q u_\lambda (t)]\right\| _{H^{s+m-l-q-(m-l+p),\sigma +m-l-q-(m-l+p)}} \\&\leqslant \frac{K_\delta }{m+1} \sum _{j=0}^{m}\Vert D^j_t u_\lambda \Vert _{C([0,T^\prime ],H^{s+m-l-j,\sigma +m-l-j})}, \quad t\in [0,T^\prime ], \end{aligned}$$

and

$$\begin{aligned} \sum _{k=0}^m\Vert D^k_t L_\delta u_\lambda \Vert _{C([0,T^\prime ],H^{s-k,\sigma -k})}\leqslant K_\delta \sum _{j=0}^{m}\Vert D^j_t u_\lambda \Vert _{C([0,T^\prime ],H^{s+m-l-j,\sigma +m-l-j})}. \end{aligned}$$

By (4.17), for some other constant \(C>0\), depending only on \(L_{(0,0,\ldots )}\), \(s,\sigma ,d,m,l\),

$$\begin{aligned} \sum _{j=0}^{m}\Vert D_t^j u_\gamma&\Vert _{C([0,T^\prime ], H^{s+m-l-j,\sigma +m-l-j})} \\&\leqslant C \left( T^\prime \left\| f_\gamma - \sum _{0\leqslant \lambda<\gamma }L_{\gamma -\lambda }u_\lambda \right\| _{\bigcap _{0\le k \le m-1}C^k([0,T],H^{s-k,\sigma -k})}\right. \\&\quad \left. +\sum _{j=0}^{m-1} \Vert u_\gamma ^j\Vert _{H^{s+m-j-1,\sigma +m-j-1}} \right) , \\&\leqslant C \left( T^\prime \Vert f_\gamma \Vert _{\bigcap _{0\le k \le m-1}C^k([0,T],H^{s-k,\sigma -k})}\right. \\&\quad \left. + T^\prime \sum _{0\leqslant \lambda <\gamma }\left\| L_{\gamma -\lambda }u_\lambda \right\| _{\bigcap _{0\le k \le m-1}C^k([0,T],H^{s-k,\sigma -k})}\right. \\&\quad \quad \left. +\sum _{j=0}^{m-1} \Vert u_\gamma ^j\Vert _{H^{s+m-j-1,\sigma +m-j-1}} \right) , \end{aligned}$$

Thus, we find (for some new constant \({{\tilde{C}}}>0\))

$$\begin{aligned} \sum _{\gamma \in {\mathcal {I}}}&\left( \sum _{j=0}^{m}\Vert D^j_t u_\gamma \Vert _{C([0,T^\prime ], H^{s+m-l-j,\sigma +m-l-j})}\right) ^2 (2{\mathbb {N}})^{-r\gamma } \\&\leqslant {{\tilde{C}}} \sum _{\gamma \in {\mathcal {I}}} \left[ \sum _{j=0}^{m-1}\Vert u_\gamma ^j\Vert ^2_{H^{s+m-j-1,\sigma +m-j-1}} + {T^\prime }^2\Vert f_\gamma \Vert ^2_{\bigcap _{0\le k \le m-1}C^k([0,T],H^{s-k,\sigma -k})} \right. \\&\quad \quad \left. +{T^\prime }^2 \left( \sum _{0\leqslant \lambda <\gamma } K_{\gamma -\lambda }\Vert u_\lambda \Vert _{\bigcap _{0\le k \le m-1}C^k([0,T],H^{s+m-l-k,\sigma +m-l-k})}\right) ^2 \right] (2{\mathbb {N}})^{-r\gamma }. \end{aligned}$$

By the assumptions,

$$\begin{aligned} M_j= & {} \sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma ^j\Vert ^2_{H^{s+m-j-1, \sigma +m-j-1}}(2{\mathbb {N}})^{-r\gamma }<\infty ,\quad j=0,\dots ,m-1 \\ \Rightarrow M_I= & {} \sum _{j=0}^{m-1} \sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma ^j\Vert ^2_{H^{s+m-j-1, \sigma +m-j-1}}(2{\mathbb {N}})^{-r\gamma }<\infty ,\\ M_f= & {} \sum _{\gamma \in {\mathcal {I}}}\Vert f_\gamma \Vert ^2_{\bigcap _{0\le k \le m-1}C^k([0,T],H^{s-k,\sigma -k})} (2{\mathbb {N}})^{-r\gamma }<\infty . \end{aligned}$$

Also, by straightforward estimates, we obtain

$$\begin{aligned} \sum _{\gamma \in {\mathcal {I}}}&\left( \sum _{0\leqslant \lambda<\gamma } K_{\gamma -\lambda }\Vert u_\lambda \Vert _{\bigcap _{0\le k \le m-1}C^k([0,T],H^{s+m-l-k,\sigma +m-l-k})}\right) ^2(2{\mathbb {N}})^{-r\gamma } \\&=\sum _{\gamma \in {\mathcal {I}}}\left( \sum _{0\leqslant \lambda <\gamma } K_{\gamma -\lambda }(2{\mathbb {N}})^{-\frac{r(\gamma -\lambda )}{2}} \Vert u_\lambda \Vert _{\bigcap _{0\le k \le m-1}C^k([0,T],H^{s+m-l-k,\sigma +m-l-k})}(2{\mathbb {N}})^{-\frac{r\lambda }{2}}\right) ^2 \\&\leqslant \left[ \sum _{\begin{array}{c} \delta \in {\mathcal {I}}\\ \delta \ne (0,0,\ldots ) \end{array}}K_{\delta } (2{\mathbb {N}})^{-\frac{r}{2}\delta }\right] ^2 \sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma \Vert ^2_{\bigcap _{0\le k \le m-1}C^k([0,T],H^{s+m-l-k,\sigma +m-l-k})} (2{\mathbb {N}})^{-r\gamma } \\&\le M_L^2\sum _{\gamma \in {\mathcal {I}}}\Vert u_\gamma \Vert ^2_{\bigcap _{0\le k \le m}C^k([0,T],H^{s+m-l-k,\sigma +m-l-k})} (2{\mathbb {N}})^{-r\gamma }, \end{aligned}$$

where, by assumption (4.18),

$$\begin{aligned} M_L= \sum _{\begin{array}{c} \delta \in {\mathcal {I}}\\ \delta \ne (0,0,\ldots ) \end{array}} K_{\delta } (2{\mathbb {N}})^{-\frac{r}{2}\delta }<\infty . \end{aligned}$$

Thus,

$$\begin{aligned}&\Vert u\Vert ^2_{\bigcap _{0\le k \le m}C^k([0,T^\prime ],H^{s+m-l-k,\sigma +m-l-k})\otimes (S)_{-1,-r}} \\&\quad \le {{\tilde{C}}}(M_I+{T^\prime }^2 M_f)+{{\tilde{C}}}M_L^2 {T^\prime }^2 \Vert u\Vert ^2_{\bigcap _{0\le k \le m}C^k([0,T^\prime ],H^{s+m-l-k,\sigma +m-l-k})\otimes (S)_{-1,-r}}. \end{aligned}$$

By possibly further reducing \(T^\prime >0\), so that \(1-{{\tilde{C}}} M_L^2 {T^\prime }^2 >0\), we find

$$\begin{aligned} \Vert u\Vert ^2_{\bigcap _{0\le k \le m}C^k([0,T^\prime ],H^{s+m-l-k,\sigma +m-l-k})\otimes (S)_{-1,-r}} \leqslant \frac{{{\tilde{C}}}(M_I+{T^\prime }^2 M_f)}{1-{{\tilde{C}}} M_L^2 {T^\prime }^2}, \end{aligned}$$

which gives the claim.\(\square \)
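The weights \((2{\mathbb {N}})^{\pm r\gamma }=\prod _j(2j)^{\pm r\gamma _j}\) used throughout the proof (and in the definition of the spaces \((S)_{-1,-r}\)) are straightforward to evaluate. Summing the geometric series coordinatewise gives the closed product formula \(\sum _\gamma (2{\mathbb {N}})^{-r\gamma }=\prod _{j\ge 1}(1-(2j)^{-r})^{-1}\), which is finite precisely when \(r>1\). A short Python sketch, with arbitrary truncation parameters:

```python
import math

def weight(gamma, r=1.0):
    """(2N)^(-r*gamma) = prod_j (2j)^(-r*gamma_j), with j counted from 1."""
    w = 1.0
    for j, gj in enumerate(gamma, start=1):
        w *= (2.0 * j) ** (-r * gj)
    return w

def total_mass(r, J=200000):
    """Partial product for sum over all gamma of (2N)^(-r*gamma)
    = prod_{j=1..J} 1/(1 - (2j)^(-r)); the full product is finite iff r > 1."""
    return math.prod(1.0 / (1.0 - (2.0 * j) ** (-r)) for j in range(1, J + 1))

w = weight((1, 0, 2), r=1.0)   # (2*1)^(-1) * (2*3)^(-2) = 1/72
mass = total_mass(2.0)         # for r = 2 this is a Wallis-type product, -> pi/2
```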

4.2 Linear Hyperbolic Systems of First Order with Polynomially Bounded Coefficients

We now turn our attention to hyperbolic first order linear systems with coefficients having at most polynomial growth at spatial infinity. Namely, let \({\mathbf {L}}\) now denote the operator

$$\begin{aligned} {\mathbf {L}}(t,D_t;x,D_x) = D_t + \Lambda (t,x,D_x;\omega ) +R(t,x,D_x;\omega ), \end{aligned}$$
(4.19)

where \(\Lambda ={\mathrm {diag}}(\Lambda _1,\dots ,\Lambda _N)\) is a parameter-dependent, (\(N\times N\))-dimensional, diagonal operator matrix, whose entries \(\lambda _j(t,x,D_x;\omega )\), \(j=1,\dots , N\), are pseudo-differential operators with parameter-dependent symbols

$$\begin{aligned} \lambda _j(t,x,\xi ;\omega )= \sum _{\gamma \in {\mathcal {I}}} \lambda _{j\gamma }(t,x,\xi )\cdot H_\gamma (\omega ), \; \lambda _{j\gamma }\in C^\infty ([0,T]; S^{1,1}), \end{aligned}$$

and \(R=(R_{jk})_{j,k=1,\dots ,N}\) is a parameter-dependent, (\(N\times N\))-dimensional operator matrix of pseudo-differential operators with symbols

$$\begin{aligned} r_{jk}(t,x,\xi ;\omega )=\sum _{\gamma \in {\mathcal {I}}} r_{jk\gamma }(t,x,\xi )\cdot H_\gamma (\omega ), \quad r_{jk\gamma }\in C^\infty ([0,T]; S^{0,0}). \end{aligned}$$

We consider the Cauchy problem

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathbf {L}}\lozenge U(t,s,x;\omega ) = F(t,x;\omega ), &{} (t,s)\in \Delta _T, \\ U(s,s,x;\omega ) = G(x;\omega ), &{} s\in [0,T), \end{array}\right. } \end{aligned}$$
(4.20)

on the simplex \(\Delta _{T}:=\{(t,s)\vert \ 0\leqslant s\leqslant t\leqslant T\}\), where, for \(r,\varrho \in {\mathbb {R}}\),

$$\begin{aligned} F(t,x;\omega )= \sum _{\gamma \in {\mathcal {I}}}F_\gamma (t,x)\cdot H_\gamma (\omega ), \quad F_\gamma \in \bigcap _{k\ge 0}C^k([0,T],H^{r-k,\varrho -k}\otimes {\mathbb {R}}^N), \gamma \in {\mathcal {I}}, \\ G(x;\omega )= \sum _{\gamma \in {\mathcal {I}}}G_\gamma (x)\cdot H_\gamma (\omega ),\quad G_\gamma \in H^{r,\varrho }\otimes {\mathbb {R}}^N. \end{aligned}$$

\({\mathbf {L}}\) acts on \(U(t,s,x;\omega )=\displaystyle \sum _{\gamma \in {\mathcal {I}}}U_\gamma (t,s,x)\cdot H_\gamma (\omega )\) as

$$\begin{aligned} {\mathbf {L}}\lozenge U(t,s,x;\omega ) = \sum _{\gamma \in {\mathcal {I}}}\left[ \sum _{\beta +\lambda =\gamma }(L_\beta U_\lambda )(t,s,x)\right] \cdot H_\gamma (\omega ), \end{aligned}$$

where

$$\begin{aligned} L_{(0,0,\ldots )}&=D_t+ {\mathrm {diag}}\left( {{\text {Op}}}(\lambda _{1 (0,0,\ldots )}(t)),\dots ,{{\text {Op}}}(\lambda _{N (0,0,\ldots )}(t))\right) \nonumber \\&\quad + ({{\text {Op}}}(r_{jk (0,0,\ldots )}(t)))_{j,k=1,\dots ,N}, \end{aligned}$$
(4.21)
$$\begin{aligned} L_{\beta }&={\mathrm {diag}}({{\text {Op}}}(\lambda _{1\beta }(t)),\dots ,{{\text {Op}}}(\lambda _{N\beta }(t))) + ({{\text {Op}}}(r_{jk\beta }(t)))_{j,k=1,\dots ,N}, \quad \beta \not =(0,0,\ldots ). \end{aligned}$$
(4.22)

Definition 4.9

The system (4.19) and the associated Cauchy problem (4.20) are called SG-hyperbolic if the symbols \(\lambda _{j (0,0,\ldots )}\in C^\infty ([0,T], S^{1,1}({\mathbb {R}}^d))\), \(j=1,\dots ,N\), appearing in the SG-principal part \(L_{(0,0,\ldots )}\) of \({\mathbf {L}}\) defined in (4.21), are real-valued, and the other symbols appearing in (4.21) and (4.22) satisfy \(\lambda _{j\beta },r_{jk\gamma }\in C^\infty ([0,T],S^{0,0}({\mathbb {R}}^d))\), \(j,k=1,\dots ,N\), \(\beta ,\gamma \in {\mathcal {I}}\), \(\beta \not =(0,0,\ldots )\).

The system (4.19) and the Cauchy problem (4.20) are called strictly (SG-)hyperbolic, weakly (SG-)hyperbolic with constant multiplicities, or weakly (SG-)hyperbolic with involutive characteristics (or SG-involutive), respectively, if such properties are satisfied by the eigenvalues \(\lambda _{j(0,0,\ldots )}\), \(j=1,\dots , N\), in place of the characteristic roots in Definition 4.2.

We proceed as in the previous sections, obtaining a family of Cauchy problems for SG-hyperbolic systems indexed by \(\gamma \in {\mathcal {I}}\), namely,

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle [L_{(0,0,\ldots )}U_\gamma ](t,s,x)=F_\gamma (t,x)- \sum _{0\leqslant \lambda <\gamma }(L_{\gamma -\lambda }U_{\lambda })(t,s,x), \\ U_\gamma (s,s,x)=G_\gamma (x). \end{array}\right. } \end{aligned}$$
(4.23)

Then, the fundamental solution \(E(t,s)\in \displaystyle \bigcap _{k\ge 0}C^k(\Delta _{T^\prime },{\mathcal {O}}(k,k))\), \(s\in [0,T^\prime )\), exists (see [11]), and can, in general, be expressed as a limit of matrices of Fourier integral operators (see [22, 49]; see Section 5 of [4] for the SG case). In view of the properties of the family \(\lambda _{j(0,0,\ldots )}\), \(j=1,\dots , N\), in the three cases considered here \(E(t,s)\) can actually be reduced, modulo smoothing operators, to a finite linear combination of (compositions of) SG Fourier integral operators, see [1, 12,13,14]. The next Theorem 4.10 is the analogue for systems of Theorem 4.6.

Theorem 4.10

Let \({\mathbf {L}}\) be SG-hyperbolic of the form in (4.19), either strictly hyperbolic, weakly hyperbolic with constant multiplicities, or weakly hyperbolic with involutive characteristics, according to Definition 4.9. Assume also that \(G_\gamma \in H^{r,\rho }({\mathbb {R}}^d)\otimes {\mathbb {R}}^N\), and \(\displaystyle W_\gamma \in \bigcap _{k\ge 0}C^k([0,T],H^{r-k,\rho -k}({\mathbb {R}}^d)\otimes {\mathbb {R}}^N)\), \(r,\rho \in {\mathbb {R}}\), \(\gamma \in {\mathcal {I}}\). Then, there exists a time-horizon \(T^\prime \in (0,T]\) such that the Cauchy problems

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle L_{(0,0,\ldots )}U_\gamma (t,s,x)=W_\gamma (t,x), \\ U_\gamma (s,s,x)=G_\gamma (x), \quad \gamma \in {\mathcal {I}}, \end{array}\right. } \end{aligned}$$
(4.24)

admit a unique solution \(\displaystyle U_\gamma \in \bigcap _{k\ge 0} C^k(\Delta _{T^\prime },H^{r-k,\rho -k}({\mathbb {R}}^d)\otimes {\mathbb {R}}^N)\), \(s\in [0,T^\prime )\). More precisely, there exists an operator family \(E(t,s)\), \((t,s)\in \Delta _{T^\prime }\), \(s\in [0,T^\prime )\), depending only on \(L_{(0,0,\ldots )}\), such that \(\displaystyle E\in \bigcap _{k\ge 0}C^k(\Delta _{T^\prime },{\mathcal {O}}(k,k))\), \(s\in [0,T^\prime )\), and

$$\begin{aligned} U_\gamma (t,s) = E(t,s)G_\gamma + i\int _s^t E(t,\tau ) W_\gamma (\tau )d\tau . \end{aligned}$$
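For a scalar, constant-coefficient model the representation above can be checked directly: if \(\lambda \) is a real constant, a natural candidate propagator is \(E(t,s)=e^{-i\lambda (t-s)}\), and the Duhamel formula then solves \(D_tU+\lambda U=W\) with \(D_t=-i\partial _t\). A stdlib-Python sketch with ad hoc numerical parameters (this toy scalar setting is an assumption for illustration, not the operator-valued situation of Theorem 4.10):

```python
import cmath

lam = 2.0                      # hypothetical constant eigenvalue lambda
s, G = 0.0, 1.0 + 0.0j         # initial time and datum

def W(tau):                    # toy right-hand side
    return cmath.exp(1j * tau)

def E(t, tau):                 # propagator of D_t + lam for constant lam
    return cmath.exp(-1j * lam * (t - tau))

def U(t, steps=20000):
    """Duhamel formula U(t) = E(t,s)G + i * int_s^t E(t,tau) W(tau) dtau (trapezoidal rule)."""
    h = (t - s) / steps
    integral = 0.0 + 0.0j
    for i in range(steps + 1):
        tau = s + i * h
        w = 0.5 if i in (0, steps) else 1.0
        integral += w * E(t, tau) * W(tau)
    return E(t, s) * G + 1j * integral * h

# Residual of D_t U + lam*U = W at t0, with D_t = -i d/dt approximated centrally.
t0, h = 0.8, 1e-4
residual = -1j * (U(t0 + h) - U(t0 - h)) / (2 * h) + lam * U(t0) - W(t0)
```

The residual is small up to quadrature and finite-difference error, which is consistent with \(E(t,s)\) annihilating the homogeneous part and the Duhamel integral accounting for the right-hand side.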

Remark 4.11

  1.

    The fundamental solution of (4.24) is a family \(\{E(t,s)\vert (t,s)\in \Delta _{T'}\}\) of operators satisfying

    $$\begin{aligned} {\left\{ \begin{array}{ll} L_{(0,0,\ldots )} E(t,s) = 0, &{} (t,s)\in \Delta _{T'}, \\ E(s,s)=I, &{} s\in [0,T'), \end{array}\right. } \end{aligned}$$

    such that, for any \(k,l\in {\mathbb {N}}_0\), \(\partial ^k_t\partial ^l_sE(t,s)\) belongs to \(C(\Delta _{T^\prime }, {\mathcal {O}}(k+l,k+l))\).

  2.

    It follows that, for any \(j\in {\mathbb {N}}_0\), \(D^j_t U_\gamma \in C([0,T^\prime ],H^{r-j,\rho -j}({\mathbb {R}}^d)\otimes {\mathbb {R}}^N)\), and, for any \(M\in {\mathbb {N}}\), the mapping \((W_\gamma ,G_\gamma )\mapsto (U_\gamma ,D_t U_\gamma , \dots ,D_t^{M}U_\gamma )\), associating the right-hand side and the initial data of (4.24) to the solution and its first M derivatives with respect to t, is a well-defined, linear and continuous application

    $$\begin{aligned}&{\mathcal {T}}^M_\mathrm {glob}:\left( \bigcap _{0\le k \le M-1}C^k([0,T^\prime ],H^{r-k,\rho -k}({\mathbb {R}}^d)\otimes {\mathbb {R}}^N)\right) \\&\quad \times \left( {\mathop {\times }\limits _{j=0}^{M-1}} H^{r-j,\rho -j}({\mathbb {R}}^d)\otimes {\mathbb {R}}^N\right) \rightarrow {\mathop {\times }\limits _{j=0}^{M}}C([0,T^\prime ],H^{r-j,\rho -j}({\mathbb {R}}^d)\otimes {\mathbb {R}}^N), \end{aligned}$$

    uniformly with respect to \(\gamma \in {\mathcal {I}}\).

  3.

    By the continuity of \({\mathcal {T}}_\mathrm {glob}^M\), we obtain an energy-type estimate for the solution \(U_\gamma \) and its first \(M\in {\mathbb {N}}\) derivatives with respect to t, namely

    $$\begin{aligned}&\sum _{j=0}^{M}\Vert D_t^jU_\gamma \Vert _{C([0,T^\prime ],H^{r-j,\rho -j}({\mathbb {R}}^d)\otimes {\mathbb {R}}^N)} \nonumber \\&\quad \le C \left( T^\prime \Vert W_\gamma \Vert _{\bigcap _{0\le k \le M-1}C^k([0,T^\prime ],H^{r-k,\rho -k}({\mathbb {R}}^d)\otimes {\mathbb {R}}^N)} +\Vert G_\gamma \Vert _{H^{r,\rho }({\mathbb {R}}^d)\otimes {\mathbb {R}}^N} \right) ,\nonumber \\ \end{aligned}$$
    (4.25)

    for a suitable constant \(C>0\), independent of \(\gamma \in {\mathcal {I}}\).

  4.

    Theorem 4.6 is a consequence of Theorem 4.10. Indeed, in the three cases considered here, the Cauchy problem (4.15) is turned, by suitable techniques, into an equivalent Cauchy problem of the form (4.24). The Levi conditions on the lower order terms of the involved operator play a crucial role in this reduction (see, e.g., [1, 4, 8, 12, 14, 22, 33,34,35,36]).

We can now state the third main result of the paper, Theorem 4.12 below. Up to a few minor details, the proof is obtained by the same argument employed to prove Theorem 4.8, and is left to the reader.

Theorem 4.12

Assume that \({\mathbf {L}}\) in (4.20) is SG-hyperbolic, either strictly, weakly with constant multiplicities, or weakly with involutive characteristics, in the sense of Definition 4.9. We also assume that, for some \(M\in {\mathbb {N}}_0\), there exists \(p\geqslant 0\) such that

$$\begin{aligned} \sum _{\begin{array}{c} \gamma \in {\mathcal {I}}\\ \gamma \ne (0,0,\ldots ) \end{array}} \Vert L_{\gamma }\Vert _{{\mathcal {L}}\left( \bigcap _{0\le k\le M}C^k([0,T],H^{r-k,\rho -k}({\mathbb {R}}^d)), \, \bigcap _{0\le k\le M}C^k([0,T],H^{r-k,\rho -k}({\mathbb {R}}^d))\right) } (2{\mathbb {N}})^{-\frac{p}{2}\gamma }<\infty .\nonumber \\ \end{aligned}$$
(4.26)

Finally, assume that, for \((r,\rho )\in {\mathbb {R}}^2\), \(G\in (H^{r,\rho }({\mathbb {R}}^d)\otimes {\mathbb {R}}^N)\otimes (S)_{-1,-p}\) and \(\displaystyle F\in \bigcap _{k\ge 0} C^k([0,T],H^{r-k,\rho -k}({\mathbb {R}}^d)\otimes {\mathbb {R}}^N)\otimes (S)_{-1,-p}\).

Then, there exists a time horizon \(T^\prime \in (0,T]\) such that the Cauchy problem (4.20) admits a unique solution \(\displaystyle U\in \bigcap _{k=0}^M C^k(\Delta _{T^\prime },H^{r-k,\rho -k}({\mathbb {R}}^d)\otimes {\mathbb {R}}^N)\otimes (S)_{-1,-p}\), \(s\in [0,T^\prime )\).

As in Corollary 3.5, we may observe that the solution exhibits the unbiasedness property, i.e. its expectation coincides with the solution of the associated PDE obtained by taking expectations of all stochastic elements in (4.20).
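In terms of the chaos expansion, this can be sketched as follows (using that the expectation of a generalized stochastic process equals its zeroth chaos coefficient): for \(\gamma =(0,0,\ldots )\) the propagator equation contains no Wick-coupling terms, so

```latex
E(U)=U_{(0,0,\ldots)},\qquad
L_{(0,0,\ldots)}\,U_{(0,0,\ldots)}=F_{(0,0,\ldots)}=E(F),
```

that is, \(U_{(0,0,\ldots )}\) solves precisely the deterministic Cauchy problem obtained from (4.20) by taking expectations.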

5 Examples

In this concluding section we provide some examples of hyperbolic SPDEs with singularities, together with possible applications of the previously obtained results to linear problems arising in physics, geology, cosmology, engineering and many other areas of science. Examples 5.1 and 5.2 serve as toy models illustrating our solution algorithm; the remaining examples provide motivation for the study of hyperbolic SPDEs and show how randomness may occur, while their full solutions will be part of a forthcoming paper.

Example 5.1

We start with the most prominent example, the wave equation, as the prototype of hyperbolic PDEs. For technical simplicity, we consider the one-dimensional case, where a nice analytic formula for the solutions is known, to serve as a simple illustration of our method.

Consider the wave equation with a random wave speed \(\mathbf{c }(t,x;\omega )\) which has expectation \(E(\mathbf{c })=c_{(0,0,\ldots )}>0\) with no time/space dependence (for instance, stationary processes have constant expectations):

$$\begin{aligned} {\left\{ \begin{array}{ll} u_{tt}(t,x;\omega )=E(\mathbf{c })u_{xx}(t,x;\omega )+\tilde{ \mathbf{c }}(t,x;\omega )\lozenge u(t,x;\omega )+f(t,x;\omega ) &{} (t,x;\omega )\in {\mathbb {R}}\times {\mathbb {R}}\times \Omega \\ u(0,x; \omega )=\phi (x;\omega ),\;\;u_t(0,x; \omega )=\psi (x;\omega ), &{} (x;\omega )\in {\mathbb {R}}\times \Omega , \end{array}\right. }\nonumber \\ \end{aligned}$$
(5.1)

where \(\tilde{\mathbf{c }}=\mathbf{c }-E(\mathbf{c })\) denotes the centered process with zero expectation. By converting (5.1) to an infinite system of PDEs as in (3.2) and (3.3), we arrive, via d’Alembert’s formula, at the family of solutions:

$$\begin{aligned} u_{(0,0,\ldots )}(t,x)= & {} \frac{1}{2}\left( \phi _{(0,0,\ldots )}(x+ct)+ \phi _{(0,0,\ldots )}(x-ct)\right) \\&+\frac{1}{2c}\int _{x-ct}^{x+ct}\psi _{(0,0,\ldots )}(s)ds+ \frac{1}{2c}\int _0^t\int _{x-c(t-s)}^{x+c(t-s)}f_{(0,0,\ldots )}(s,y)\;dy\;ds \end{aligned}$$
$$\begin{aligned}&u_\gamma (t,x) = \frac{1}{2}\left( \phi _{\gamma }(x+ct)+ \phi _{\gamma }(x-ct)\right) \\&\quad +\frac{1}{2c}\int _{x-ct}^{x+ct}\psi _{\gamma }(s)ds\\&\quad + \frac{1}{2c}\int _0^t\int _{x-c(t-s)}^{x+c(t-s)}\left( f_{\gamma }(s,y) +\sum _{\begin{array}{c} \beta +\lambda = \gamma \\ \lambda \ne \gamma \end{array}}c_\beta (s,y) u_\lambda (s,y)\right) dy\;ds,\\&\quad \gamma \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}, \end{aligned}$$

where the \(u_\gamma \) are calculated recursively with respect to the length of the multi-index \(\gamma \), using the previously obtained \(u_\lambda \), \(|\lambda |<|\gamma |\), and for notational convenience we denoted \(\displaystyle c=\sqrt{c_{(0,0,\ldots )}}\).
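The recursion above lends itself directly to numerical evaluation. The following is a minimal sketch (not from the paper; all function names are illustrative) of its two building blocks, the homogeneous d'Alembert term and the Duhamel integral over the characteristic triangle, using composite trapezoidal quadrature:

```python
# Minimal numerical sketch of the d'Alembert/Duhamel recursion (illustrative
# names, not from the paper).
import numpy as np

def _trap(fv, xs):
    """Composite trapezoidal rule for samples fv on the grid xs."""
    return float(np.sum((fv[1:] + fv[:-1]) * np.diff(xs)) / 2.0)

def dalembert_hom(phi, psi, c, t, x, n=400):
    """(1/2)(phi(x+ct) + phi(x-ct)) + (1/(2c)) * int_{x-ct}^{x+ct} psi(s) ds."""
    s = np.linspace(x - c * t, x + c * t, n)
    return 0.5 * (phi(x + c * t) + phi(x - c * t)) + _trap(psi(s), s) / (2.0 * c)

def duhamel(g, c, t, x, nt=201, nx=201):
    """(1/(2c)) * int_0^t int_{x-c(t-s)}^{x+c(t-s)} g(s, y) dy ds."""
    ss = np.linspace(0.0, t, nt)
    inner = np.empty(nt)
    for i, s in enumerate(ss):
        y = np.linspace(x - c * (t - s), x + c * (t - s), nx)
        inner[i] = _trap(g(s, y), y)
    return _trap(inner, ss) / (2.0 * c)

# u_{(0,0,...)} for phi = sin, psi = 0, f = 0; exact solution: sin(x) cos(ct)
u0 = dalembert_hom(np.sin, lambda s: 0.0 * s, c=1.0, t=0.7, x=0.3)
# Duhamel term for the constant source f = 2; exact value: t^2
v = duhamel(lambda s, y: 2.0 + 0.0 * y, c=1.0, t=0.5, x=0.0)
```

Each \(u_\gamma \) is then assembled from one call to each routine, with the Duhamel source \(g(s,y)=f_\gamma (s,y)+\sum c_\beta (s,y)u_\lambda (s,y)\) built from the previously computed coefficients \(u_\lambda \).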

Another wave-like equation, satisfying assumption (A1) in Theorem 3.4, would take the form

$$\begin{aligned} {\left\{ \begin{array}{ll} u_{tt}(t,x;\omega )=E(\mathbf{c })u_{xx}(t,x;\omega )+\tilde{ \mathbf{c }}(t,x;\omega )\lozenge u_x(t,x;\omega )+f(t,x;\omega ) &{} (t,x;\omega )\in {\mathbb {R}}\times {\mathbb {R}}\times \Omega \\ u(0,x; \omega )=\phi (x;\omega ),\;\;u_t(0,x; \omega )=\psi (x;\omega ), &{} (x;\omega )\in {\mathbb {R}}\times \Omega , \end{array}\right. }\nonumber \\ \end{aligned}$$
(5.2)

with the following chaos coefficients of the solution:

$$\begin{aligned}&u_\gamma (t,x) = \frac{1}{2}\left( \phi _{\gamma }(x+ct)+ \phi _{\gamma }(x-ct)\right) \\&\quad +\frac{1}{2c}\int _{x-ct}^{x+ct}\psi _{\gamma }(s)ds\\&\quad + \frac{1}{2c}\int _0^t\int _{x-c(t-s)}^{x+c(t-s)} \left( f_{\gamma }(s,y) +\sum _{\begin{array}{c} \beta +\lambda =\gamma \\ \lambda \ne \gamma \end{array}}c_\beta (s,y) \frac{\partial }{\partial x}u_\lambda (s,y)\right) dy\;ds,\\&\quad \;\;\gamma \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}, \end{aligned}$$

Note that in both (5.1) and (5.2) no second-order x-derivatives appear in the recursion for the solution coefficients: only lower-order operators (zeroth- and first-order derivatives) are involved.

Example 5.2

We consider now the influence of assumption (A1) on the two-dimensional wave equation. Let \(\mathbf{C }=\sum _{j,k=1}^2\partial _j c_{jk}(t,x;\omega )\partial _k\) be an operator such that \(c_{12\alpha }(t,x)=-c_{21\alpha }(t,x)\) and \(c_{11\alpha }(t,x)=c_{22\alpha }(t,x)=0\) for \(\alpha \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}\). Additionally, we assume that \(\mathbf{C }\) has a constant expectation (not depending on space/time) with \(c_{11(0,0,\ldots )}=c_{22(0,0,\ldots )}>0\), while \(c_{12(0,0,\ldots )}=-c_{21(0,0,\ldots )}\). For notational convenience we let \(\displaystyle c=\sqrt{c_{11(0,0,\ldots )}}=\sqrt{c_{22(0,0,\ldots )}}\). The propagator system for the two-dimensional wave equation with this special operator form reads:

$$\begin{aligned}&{\left\{ \begin{array}{ll} \frac{\partial ^2}{\partial t^2}u_{(0,0,\ldots )}(t,x)=c^2\Delta _{xx}u_{(0,0,\ldots )}(t,x)+f_{(0,0,\ldots )}(t,x), &{} (t,x)\in {\mathbb {R}}\times {\mathbb {R}}^2\\ u_{(0,0,\ldots )}(0,x)=\phi _{(0,0,\ldots )}(x),\;\;\frac{\partial }{\partial t}u_{(0,0,\ldots )}(0,x)=\psi _{(0,0,\ldots )}(x), &{} x\in {\mathbb {R}}^2, \end{array}\right. }\nonumber \\ \end{aligned}$$
(5.3)
$$\begin{aligned}&{\left\{ \begin{array}{ll} \frac{\partial ^2}{\partial t^2}u_{\gamma }(t,x)=c^2\Delta _{xx}u_{\gamma }(t,x)+\sum \limits _{\begin{array}{c} \lambda +\beta =\gamma \\ \beta \ne \gamma \end{array}}A_\lambda u_\beta (t,x)+f_{\gamma }(t,x), &{} (t,x)\in {\mathbb {R}}\times {\mathbb {R}}^2\\ u_{\gamma }(0,x)=\phi _{\gamma }(x),\;\;\frac{\partial }{\partial t}u_{\gamma }(0,x)=\psi _{\gamma }(x), &{} x\in {\mathbb {R}}^2, \end{array}\right. } \end{aligned}$$
(5.4)

where

$$\begin{aligned} A_\lambda (t,x)= & {} (c_{11\lambda ,1}(t,x)+c_{21\lambda ,2}(t,x))\frac{\partial }{\partial x_{1}}+ (-c_{21\lambda ,1}(t,x)+c_{22\lambda ,2}(t,x))\frac{\partial }{\partial x_{2}},\nonumber \\&\lambda \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}, \end{aligned}$$
(5.5)

with \(c_{mn\lambda ,i}(t,x)=\frac{\partial }{\partial x_i}c_{mn\lambda }(t,x)\), \(i=1,2\), involves only first order partial x-derivatives. The solution is hence given by

$$\begin{aligned}&u_{(0,0,\ldots )}(t,x)=\frac{1}{2\pi c^2t^2}\int _{B(x,ct)} \frac{ct\phi _{(0,0,\ldots )}(y)+ct^2\psi _{(0,0,\ldots )}(y)+ct\nabla \phi _{(0,0,\ldots )}(y)\cdot (y-x)}{\sqrt{c^2t^2-|y-x|^2}}\;dy\\&\quad + \int _0^t \frac{1}{2\pi c^2(t-s)^2}\int _{B(x,c(t-s))}\frac{c(t-s)^2 f_{(0,0,\ldots )}(y,s)}{\sqrt{c^2(t-s)^2-|y-x|^2}}\;dyds,\\&u_{\gamma }(t,x) =\frac{1}{2\pi c^2t^2}\int _{B(x,ct)} \frac{ct\phi _{\gamma }(y)+ct^2\psi _{\gamma }(y)+ct\nabla \phi _{\gamma }(y)\cdot (y-x)}{\sqrt{c^2t^2-|y-x|^2}}\;dy \\&\quad + \int _0^t \frac{1}{2\pi c^2(t-s)^2}\int _{B(x,c(t-s))}\frac{c(t-s)^2 \left( f_{\gamma }(y,s)+\sum \limits _{\begin{array}{c} \lambda +\beta =\gamma \\ \quad \beta \ne \gamma \end{array}}A_\lambda (y,s) u_\beta (y,s)\right) }{\sqrt{c^2(t-s)^2-|y-x|^2}}\;dyds,\;\; \\&\quad \gamma \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}. \end{aligned}$$
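As a consistency check of the two-dimensional formula, the \(\psi \)-term of Poisson's formula can be evaluated numerically: for constant \(\psi \equiv 1\) (with \(\phi =0\), \(f=0\)) the exact solution of the 2D wave equation is \(u(t,x)=t\). The sketch below is illustrative only (names are ours, not from the paper); polar midpoint quadrature keeps the integrand finite at the boundary of the disc:

```python
# Numerical check (illustrative) of the psi-term of Poisson's formula for
# constant psi == 1, where the exact solution is u(t, x) = t.
import numpy as np

def poisson2d_psi(psi, c, t, x, nr=2000, nth=16):
    """(1/(2 pi c^2 t^2)) * int_{B(x,ct)} c t^2 psi(y) / sqrt(c^2 t^2 - |y-x|^2) dy,
    evaluated in polar coordinates around x (radial midpoints avoid r = ct)."""
    dr = c * t / nr
    r = (np.arange(nr) + 0.5) * dr                       # midpoints in (0, ct)
    th = np.linspace(0.0, 2.0 * np.pi, nth, endpoint=False)
    R, TH = np.meshgrid(r, th, indexing='ij')
    y1 = x[0] + R * np.cos(TH)
    y2 = x[1] + R * np.sin(TH)
    integrand = c * t**2 * psi(y1, y2) / np.sqrt(c**2 * t**2 - R**2)
    # polar area element: r dr dtheta
    val = np.sum(integrand * R) * dr * (2.0 * np.pi / nth)
    return float(val / (2.0 * np.pi * c**2 * t**2))

u = poisson2d_psi(lambda y1, y2: 1.0 + 0.0 * y1, c=2.0, t=0.5, x=(0.0, 0.0))
```

The midpoint rule converges slowly near the square-root singularity at \(|y-x|=ct\), so the check is accurate only to a few digits; it nevertheless confirms the \(c^2(t-s)^2\) scaling under the square roots.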

Remark 5.3

It is important to note that assumption (A1) in Theorem 3.4, as well as assumption (4.4) in Theorems 4.8 and 4.12, are sufficient conditions, but not necessary ones. If one can obtain other suitable estimates on the regularity of the solutions of the PDEs defining the propagator of the system, then the highest order x-derivatives may also be included in the SPDE, and the desired convergence of the chaos expansion will follow by methods similar to those presented here. Such alternative estimates are beyond the scope of this paper and will be presented elsewhere.

Example 5.4

Other similar examples include the stochastic Helmholtz equation with a random wave number, \(u_{tt}(t,x;\omega )-k(\omega )\lozenge u(t,x;\omega )=f(t,x;\omega )\), under suitable assumptions on the expectation of k and its coefficients in line with Theorem 3.4; or its multidimensional counterparts with a time-space dependent wave number \(k(t,x;\omega )\). Note that in the case of a purely random (time- and space-independent) wave speed \(c(\omega )\) and wave number \(k(\omega )\), the solutions obtained by our chaos expansion method coincide with those obtained in [47], where general equations of the form \(P(\omega ,D)\lozenge u(t,x;\omega )=f(t,x;\omega )\) were considered.

Example 5.5

In this example we provide some instances of operators with variable (time- and space-dependent) coefficients that offer insight into the subtle difference between strict and weak hyperbolicity. The examples are based on [8].

  1.

    Let \(A(\omega ),B(\omega )\in (S)_{-1}\) be such that \(E(A)=E(B)=V>0\). The operator

    $$\begin{aligned} {\mathbf {L}}=-\partial ^2_t+A(\omega )(1+|x|^2)\Delta _x-B(\omega )(1+|x|^2),\quad x\in {\mathbb {R}}^d, \end{aligned}$$

    is strictly hyperbolic. It can be rewritten in the form of the wave-operator

    $$\begin{aligned} {\mathbf {L}}=-\Box _M-B(\omega )(1+|x|^2), \end{aligned}$$

    where \(\Box _M=\partial ^2_t-A(\omega )(1+|x|^2)\Delta _x\) is the D’Alembert operator associated with a randomized Riemannian metric M on \({\mathbb {R}}^d\), perturbed by a random polynomially growing potential \(B(\omega )(1+|x|^2)\). Note that the principal part of the operator that governs the SPDE according to (4.2) is given by

    $$\begin{aligned} L_{(0,0,\ldots )} = D_t^2-V(1+|x|^2)(1-\Delta _x),\quad x\in {\mathbb {R}}^d, \end{aligned}$$

    having symbol \(L_{(0,0,\ldots )}(x,\tau ,\xi )=\tau ^2-V\langle x\rangle ^2\langle \xi \rangle ^2\) and roots \(\tau _{\pm }(x,\xi )=\pm \langle x\rangle \langle \xi \rangle \sqrt{V}\), which are real, distinct and separated at every point of \([0,T]\times {\mathbb {R}}^{2d}\).

    Assuming that \(A_\alpha =0\), \(\alpha \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}\), in the chaos expansion of the random variable A, \(L_\gamma \) will involve no second order x-derivatives for \(\gamma \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}\) and the sufficient condition (4.4) will grant the solvability of the equation.

  2.

    Let \(K(t,x;\omega )\) be a stochastic process with nonzero constant expectation \(E(K)=K_{(0,0,\ldots )}\ne 0\). For notational convenience we will write \(k=K_{(0,0,\ldots )}\). The operator

    $$\begin{aligned}&{\mathbf {L}}= (D_t^2-K(\omega )^{\lozenge 2}\langle x\rangle ^2\langle D\rangle ^2)^{\lozenge 2}\\&\quad = \left( D_t^4-2K(\omega )^{\lozenge 2}\langle x\rangle ^2\langle D\rangle ^2 D_t^2+K(\omega )^{\lozenge 4} \langle x\rangle ^4\langle D\rangle ^4 + \mathrm{Op}(p) \right) ,\\&\quad \quad x\in {\mathbb {R}}^d,p\in S^{3,3}({\mathbb {R}}^d) \end{aligned}$$

    has its principal part (expectation) given by

    $$\begin{aligned}&L_{(0,0,\ldots )} = (D_t^2-k^2\langle x\rangle ^2\langle D\rangle ^2)^{2}=\\&\left( D_t^4-2k^{2}\langle x\rangle ^2\langle D\rangle ^2 D_t^2+k^{4} \langle x\rangle ^4\langle D\rangle ^4 + \mathrm{Op}(p) \right) ,\quad x\in {\mathbb {R}}^d,p\in S^{3,3}({\mathbb {R}}^d). \end{aligned}$$

    This is a weakly hyperbolic operator with roots of constant multiplicities (here it has two roots, both of multiplicity 2). Indeed, its symbol is \(L_{(0,0,\ldots )}(x,\tau ,\xi )=(\tau ^2-k^2\langle x\rangle ^2\langle \xi \rangle ^2)^{2}\), with separated roots \(\tau _{\pm }(x,\xi )=\pm k\,\langle x\rangle \langle \xi \rangle \), both of multiplicity two.

    Moreover, if \(K(t,x;\omega )\) has a chaos expansion such that

    $$\begin{aligned}&\sum _{\alpha +\beta =\gamma }\left( \sum _{\lambda +\mu =\alpha }K_\lambda (t,x) K_\mu (t,x)\right) \left( \sum _{\eta +\nu =\beta }K_\eta (t,x) K_\nu (t,x)\right) =0,\\&\quad \gamma \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}, \end{aligned}$$

    then the fourth order x-derivative will disappear from \(L_\gamma \), \(\gamma \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}\), and condition (4.4) will be satisfied.

  3.

    Let \(C_1(t,x;\omega ),C_2(t,x;\omega )\) be stochastic processes with constant expectations \(e_{i}=E(C_i)=C_{i(0,0,\ldots )}\), \(i=1,2\), respectively, assuming that \(e_1\ne 0\) and \(e_2=1\). The operator

    $$\begin{aligned}&{\mathbf {L}}= (D_t+t C_1(\omega )D_{x_1}+C_2(\omega )D_{x_2})\lozenge (D_t-C_1(\omega )(t-2x_2)D_{x_1}),\\&\quad x=(x_1,x_2)\in {\mathbb {R}}^2, \end{aligned}$$

    has its principal part (expectation) given by

    $$\begin{aligned} L_{(0,0,\ldots )} = (D_t+t e_1D_{x_1}+D_{x_2}) (D_t-e_1(t-2x_2)D_{x_1}),\quad x=(x_1,x_2)\in {\mathbb {R}}^2, \end{aligned}$$

    which is a weakly hyperbolic operator with involutive roots of non-constant multiplicities (see [36]). Indeed, its symbol \(L_{(0,0,\ldots )}(\tau ,x,\xi )=(\tau + te_1\xi _1+\xi _2)(\tau -e_1(t-2x_2)\xi _1)\) has two real roots \(\tau _1(t,x,\xi )=-te_1\xi _1-\xi _2\) and \(\tau _2(t,x,\xi )=e_1(t-2x_2)\xi _1\), which are not always separated; in fact, they overlap on the set \(\{(t,x,\xi )\in [0,T]\times {\mathbb {R}}^{4}:\; \xi _2 = 2e_1\xi _1(x_2-t)\}\). Involutiveness follows from the fact that \([D_t+t e_1D_{x_1}+D_{x_2}, D_t-e_1(t-2x_2)D_{x_1}]=-e_1[D_t,(t-2x_2)D_{x_1}]+e_1[t D_{x_1},D_t]-e_1[D_{x_2},(t-2x_2)D_{x_1}]=ie_1D_{x_1}+ie_1D_{x_1}-2ie_1D_{x_1}=0\) (the remaining bracket \([t D_{x_1},(t-2x_2)D_{x_1}]\) vanishes, since its coefficients do not depend on \(x_1\)).

    Moreover, if \(C_1(t,x;\omega ),C_2(t,x;\omega )\) have chaos expansions such that

    $$\begin{aligned}&\sum _{\alpha +\beta =\gamma }c_{1\alpha }(t,x)c_{2\beta }(t,x)=0,\\&\quad \sum _{\alpha +\beta =\gamma }c_{1\alpha }(t,x)c_{1\beta }(t,x)=0, \quad \gamma \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}, \end{aligned}$$

    then the second order x-derivatives will disappear from \(L_\gamma \), \(\gamma \in {\mathcal {I}}\setminus \{(0,0,\ldots )\}\), and condition (4.4) will be satisfied.
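The computations in item 3 can also be verified symbolically. The sketch below (illustrative, using SymPy) checks that the Poisson bracket of the symbols of the two factors vanishes identically, which underlies the involutiveness, and recovers the overlap set \(\xi _2 = 2e_1\xi _1(x_2-t)\):

```python
# Symbolic verification (illustrative) of item 3 of Example 5.5.
import sympy as sp

t, x1, x2, tau, xi1, xi2, e1 = sp.symbols('t x1 x2 tau xi1 xi2 e1', real=True)
p1 = tau + t * e1 * xi1 + xi2          # symbol of D_t + t e1 D_{x1} + D_{x2}
p2 = tau - e1 * (t - 2 * x2) * xi1     # symbol of D_t - e1 (t - 2 x2) D_{x1}

# Poisson bracket {p1, p2} in the variables (t, x1, x2; tau, xi1, xi2)
pb = sp.simplify(
    sp.diff(p1, tau) * sp.diff(p2, t) - sp.diff(p1, t) * sp.diff(p2, tau)
    + sp.diff(p1, xi1) * sp.diff(p2, x1) - sp.diff(p1, x1) * sp.diff(p2, xi1)
    + sp.diff(p1, xi2) * sp.diff(p2, x2) - sp.diff(p1, x2) * sp.diff(p2, xi2))

tau1 = sp.solve(sp.Eq(p1, 0), tau)[0]  # root tau_1 = -t e1 xi1 - xi2
tau2 = sp.solve(sp.Eq(p2, 0), tau)[0]  # root tau_2 = e1 (t - 2 x2) xi1
overlap = sp.solve(sp.Eq(tau1, tau2), xi2)[0]
```

Here `pb` simplifies to zero, and `overlap` equals \(2e_1\xi _1(x_2-t)\), in agreement with the computation in the text.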

The following equations serve as a motivation for the further study of hyperbolic SPDEs. Under additional conditions they may be adapted to our setting with (A1) in a similar manner as in the previous examples.

Example 5.6

The elastic wave equation, which accounts for both longitudinal and transverse motion in three dimensions, has the general form \(\rho u_{tt}=(\lambda +2\mu )\nabla (\nabla \cdot u)-\mu \nabla \times (\nabla \times u)+f\). It describes the propagation of waves in solid elastic materials, e.g. seismic waves in the Earth and ultrasonic waves used to detect flaws in materials [46]. Here u denotes the displacement vector, f the driving force, \(\rho \) the density of the material, and \(\lambda ,\mu \) the Lamé parameters describing the stress-strain relations, i.e. the elastic properties of the medium. In most physical models these parameters are constant, but since they are subject to measurement errors (either instrument errors or reading errors), it is more appropriate to treat them as random variables. The driving force may also incorporate some randomness, both pure randomness and measurement uncertainty, and is hence modeled as a stochastic process. We thus arrive at the stochastic hyperbolic equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \rho (\omega )\lozenge u_{tt}(t,x;\omega )=(\lambda (\omega )+2\mu (\omega ))\lozenge \nabla (\nabla \cdot u(t,x;\omega ))-\mu (\omega )\lozenge \nabla \times (\nabla \times u(t,x;\omega ))+f(t,x;\omega ) &{} (t,x;\omega )\in {\mathbb {R}}\times {\mathbb {R}}^3\times \Omega \\ u(0,x; \omega )=0,\;\;u_t(0,x; \omega )=0, &{} (x;\omega )\in {\mathbb {R}}^3\times \Omega , \end{array}\right. } \end{aligned}$$
(5.6)

where we assumed zero initial displacement and zero initial velocity.

If the material consists of different layers (as, e.g., the Earth’s crust does), then it is more appropriate to consider the coefficients \(\rho ,\lambda ,\mu \) as stochastic processes \(\rho (t,x;\omega ),\lambda (t,x;\omega ),\mu (t,x;\omega )\) depending on the layer and on the time evolution of the system. Theorem 4.8 provides the necessary tools to solve the equation in its most general form.

Example 5.7

Other examples where classical hyperbolic PDEs may be replaced by SPDEs with random coefficients involve the telegrapher’s equations, where voltage (V) and current (I) along a transmission line can be modeled by the wave equation \(u_{xx}(t,x;\omega )=k_1(\omega )\lozenge u_{tt}(t,x;\omega )+k_2(\omega )\lozenge u_t(t,x;\omega )+k_3(\omega )\lozenge u(t,x;\omega )\). Here u stands for either the voltage or the current (both satisfy the same equation), and \(k_1,k_2,k_3\) are random variables depending on the conductance, resistance, inductance and capacitance of the wire material; they incorporate randomness due to measurement errors and to unpredictable inhomogeneities of the material, or they might even be regarded as stochastic processes with time-space dependence.

The telegrapher’s equation is formally equivalent to the so-called hyperbolic heat conduction equation (relativistic heat conduction equation), sometimes used instead of the classical parabolic heat conduction equation to account for the fact that the speed of propagation cannot be infinite and must be bounded by the speed of light in vacuum.

Some other interesting models related to the telegrapher’s equation concerning random planar movements of a particle driven by Poissonian forces of the fluid are given in [37].

Example 5.8

The next example is provided in [15] and concerns the study of the internal structure of the sun. The Solar & Heliospheric Observatory (SOHO) project was run by NASA and ESA, which launched a spacecraft in 1995 with the mission to measure the motion of the sun’s surface. From the pulsating waves around the sun’s surface, scientists can deduce the location of the origin of the shock waves and gain insight into the inner structure of the sun. Assuming that the shock sources are randomly located on a sphere of radius R inside the sun, the dilatation is governed by the Navier equation given by

$$\begin{aligned}&\partial ^2_t u(t,x;\omega )=c(x;\omega )\lozenge \rho (x;\omega ) \lozenge \left( \nabla \cdot \left( \rho (x,\omega )^{\lozenge (-1)}\lozenge \nabla u\right) \right. \\&\quad \left. +\nabla \cdot f(t,x;\omega )\right) ,\quad (t,x,\omega )\in {\mathbb {R}}\times B(0,R)\times \Omega , \end{aligned}$$

where B(0, R) is the ball centered at the origin with radius R, \(c(x;\omega )\) is the speed of wave propagation at position x, \(\rho (x;\omega )\) is the density at position x (we account for measuring errors by letting c and \(\rho \) be random), and \(f(t,x;\omega )\) models the shock that originates at time t at position x.

According to the SOHO website, the spacecraft was meant to operate only until 1998, but it was so successful that ESA and NASA decided to prolong its mission, and it has continued to send solar data to this date.

Finally, we note the very important fact that, in the theory of general relativity, Einstein’s equations can be converted into a symmetric hyperbolic system of equations [43]. Other papers find it advantageous to apply a stochastic approach, treating diffusions on a Lorentzian manifold via stochastic differential equations in the orthonormal frame bundle of the manifold [9]. Hence the results of Theorem 4.12 may be applied in the newly developing field of general relativity, where the space geometry incorporates randomness.