Hyperbolic systems with non-diagonalisable principal part and variable multiplicities, I: well-posedness
Abstract
In this paper we analyse the well-posedness of the Cauchy problem for a rather general class of hyperbolic systems with space-time dependent coefficients and with multiple characteristics of variable multiplicity. First, we establish a well-posedness result in anisotropic Sobolev spaces for systems with upper triangular principal part, under natural conditions on the orders of the lower order terms below the diagonal: the terms below the diagonal at a distance k from it must be of order \(-k\). This setting also allows for a Jordan block structure in the system. Second, we give conditions for a Schur type triangularisation of general systems with variable coefficients, reducing them to a form with upper triangular principal part to which the first result applies. We give explicit details of the appearing conditions and constructions for \(2\times 2\) and \(3\times 3\) systems, complemented by several examples.
Mathematics Subject Classification
35L45 (primary); 46E35 (secondary)

1 Introduction
 (Q1)
Under what structural conditions on the zero order part \(B(t,x,D_x)\) is the Cauchy problem (1) well-posed in \(C^\infty \) or, even better, in suitable scales of Sobolev spaces?
 (Q2)
Under what conditions on the general matrix \(A(t,x,D_x)\) of first order pseudodifferential operators can we reduce it (microlocally) to another system with A satisfying the upper triangular condition (2)?
In the case of \(2\times 2\) systems the questions above have been analysed with the answer to (Q1) given by the following theorem:
Theorem A
Theorem B
The case of (microlocally) diagonalisable systems of any order with fully variable coefficients was considered by Rozenblum [41] under the condition of transversality of the intersecting characteristics. This transversality condition was later removed in [32, 33], still allowing for variable multiplicities, with sharp \(L^p\)-estimates for solutions and further applications to the spectral asymptotics of the corresponding elliptic systems.
Before stating our main results and collecting some necessary basic notions, we give a brief overview of the state of the art for hyperbolic equations and systems. We have a complete understanding of strictly hyperbolic systems, i.e., systems without multiplicities, with \(C^\infty \) coefficients. This starts with the groundbreaking work of Lax [35] and Hörmander [28] and relies heavily on the modern theory of Fourier integral operators (FIO). Well-posedness is here obtained in the space of distributions \({\mathcal {D}}'\). There are also well-posedness results for coefficients of lower regularity with respect to t. For instance, well-posedness with loss of derivatives has been obtained by Colombini and Lerner [9] for second order strictly hyperbolic equations with coefficients log-Lipschitz with respect to t and smooth in x. It is possible to drop the regularity in t further (for instance to Hölder); however, this has to be balanced by stronger regularity in x (Gevrey) and leads to more specific (Gevrey) well-posedness results (see [3, 31] and references therein). Paradifferential techniques have recently been used for this kind of strictly hyperbolic equations by Colombini et al. [6, 7].
The analysis of hyperbolic equations with multiplicities (weakly hyperbolic equations) started with the seminal paper by Colombini et al. [5] in the case of coefficients depending only on time. Profound difficulties in such analysis have been exhibited by Colombini et al. [4, 8], who showed that even the second order wave equation in \({\mathbb {R}}\) with smooth time-dependent propagation speed (but with multiplicity) and smooth Cauchy data need not be well-posed in \({\mathcal {D}}'\). However, such equations turn out to be well-posed in suitable Gevrey classes or spaces of ultradistributions. In the last decades many results were obtained for weakly hyperbolic equations with t-dependent coefficients ([3, 11, 16, 18, 19, 20, 34], to quote only a few). More recently, advances in the theory of weakly hyperbolic systems with t-dependent coefficients have been obtained for systems of any size in the presence of multiplicities, with regular or low-regularity (Hölder) coefficients [16, 22, 23]. In addition, in [17] precise conditions on the lower order terms (Levi conditions) have been formulated to guarantee Gevrey and ultradistributional well-posedness. Previously, very few results were known in the field, for systems of a certain size (\(2\times 2\), \(3\times 3\)) [12, 13] or of a certain form (for instance without lower order terms or with principal part of a certain form) [44].
Weakly hyperbolic equations with x-dependent coefficients were considered for the first time in the celebrated paper by Bronshtein [2]. As shown already in some earlier works by Ivrii, the corresponding Cauchy problem is well-posed under “almost analytic regularity”, namely, if the coefficients and initial data are in suitable Gevrey classes. Bronshtein’s result was extended to (t, x)-dependent scalar equations by Ohya and Tarama [38] and to systems by Kajitani and Yuzawa [31]. The regularity assumptions are always quite strong with respect to x (Gevrey) and not below Hölder in t. See also [10, 37]. Geometrical and microlocal analytic approaches are known for equations or systems under specific assumptions on the characteristics and/or lower order terms. See [29, 30, 33, 36, 39], to quote only a few. Time-dependent coefficients of low regularity (distributional) have been considered in [21].
If there is no question about the domain under consideration, we will abbreviate the symbol and operator classes by \(S^m_{1,0}\) and \(\Psi ^m_{1,0}\), respectively, or simply by \(S^m\) and \(\Psi ^m\).
We also denote by \(C([0,T], S_{1,0}^m(\mathbb R^n \times \mathbb R^n))\) the space of all symbols \(a(t,x,\xi )\in S_{1,0}^m(\mathbb R^n\times \mathbb R^n)\) which are continuous with respect to t. The set of operators associated to the symbols in \(C([0,T], S_{1,0}^m(\mathbb R^n \times \mathbb R^n))\) is denoted by \(C([0,T], \Psi _{1,0}^m(\mathbb R^n \times \mathbb R^n))\).
Again, if there is no question about the domain under consideration, we will abbreviate the symbol and operator classes by \(C S_{1,0}^m\) and \(C\Psi ^m_{1,0}\), respectively, or simply by \(C S^m\) and \(C\Psi ^m\).
Let us give our main result concerning the first question (Q1) for the systems with the principal part A satisfying the upper triangular condition (2). Here, \(f_k\), \(u_k\) and \(u_k^0\), for \(k=1,\ldots ,m\), stand for the components of the vectors f, u and \(u_0\), respectively.
Theorem 1
Remark 1
As stated earlier, we allow A and B to have complex valued symbols as long as the symbols of \(\Lambda \) in (2), i.e. the eigenvalues of \(A(t,x,\xi )\), are real valued.
The main condition of Theorem 1 for the Sobolev well-posedness is that the pseudodifferential operator \(b_{ij}\) below the diagonal (i.e. for \(i>j\)) must be of order \(j-i\). In other words, the terms below the diagonal at a distance k from it must be of order \(-k\).
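To fix ideas, for \(m=3\) this condition allows the entries of B on and above the diagonal to be of order zero, while (writing \(\mathrm {ord}\) for the operator order) the entries below the diagonal must satisfy$$\begin{aligned} \mathrm {ord}(b_{21}) \le -1, \quad \mathrm {ord}(b_{32}) \le -1, \quad \mathrm {ord}(b_{31}) \le -2, \end{aligned}$$i.e. \(b_{21}, b_{32} \in C\Psi ^{-1}\) and \(b_{31} \in C\Psi ^{-2}\), the entry \(b_{31}\) being at distance 2 from the diagonal.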
In Sect. 2 we will prove Theorem 1 after we explain its idea in the cases of \(m=2\) and \(m=3\).
Subsequently, in Sect. 3 we give an answer to the second question (Q2) above in the form of a suitable variable coefficients extension of the Schur triangularisation. For constant matrices such a procedure is well known (see e.g. [1, Theorem 5.4.1]).
Theorem C
(Schur’s triangularisation theorem) Given a (constant) \(m \times m\) matrix A with eigenvalues \(\lambda _1, \ldots , \lambda _m\) in any prescribed order, there is a unitary \(m \times m\) matrix T such that \(R = T^{-1} A T\) is upper triangular with diagonal elements \(r_{ii} = \lambda _i\). Furthermore, if the entries of A and its eigenvalues are all real, T may be chosen to be real orthogonal.
It follows that R can be written as \(D+N\), where \(D={{\mathrm{diag}}}(\lambda _1,\ldots ,\lambda _m)\) and N is a nilpotent upper triangular matrix.
If the matrix A depends on one or several parameters, namely \(A=A(t,x,\xi )\), the situation becomes less clear and it is difficult to give a complete description, in particular together with a prescribed regularity of the involved transformation matrices. The regularity of the matrix A, and the desire to maintain it through the transformation, already puts constraints on the matrix: in general, the eigenvalues can only be expected to be Lipschitz continuous in the parameters even if all the entries depend smoothly on the parameters (see, e.g., [2, 40] and the references therein). In the sequel, we will present some sufficient conditions ensuring the existence of an upper triangularisation for \(A(t,x,\xi )\) which respects its regularity. For example, it will apply to the case when A is a matrix of first order symbols continuous with respect to t, i.e., \(A(t,x,\xi ) \in \big ( C S^1 \big )^{m \times m}\).
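A standard illustration of this loss of regularity is the symmetric family$$\begin{aligned} A(x_1,x_2)= \begin{bmatrix} x_1&\quad x_2 \\ x_2&\quad -x_1 \end{bmatrix}, \qquad \lambda _{\pm }(x_1,x_2)=\pm \sqrt{x_1^2+x_2^2}, \end{aligned}$$whose entries are linear in the parameters while the eigenvalues are Lipschitz continuous but not differentiable at the origin.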
Our main result for this part of the problem is the following theorem.
Theorem 2
Furthermore, there is an expression for the matrix symbol T which will be given in Theorem 6. Also, the assumption (5) can be relaxed, see Remark 6. In Sect. 3 we will prove this result and describe the procedure for obtaining the desired upper triangular form. Moreover, we work out in detail the cases \(m=2\) and \(m=3\), clarifying this Schur triangularisation procedure, and give a number of examples.
The results and techniques of this paper are a natural outgrowth of the paper [27] where the case \(m=2\) was considered and to which the results of the present paper reduce in the case of \(2\times 2\) systems. It is with great sorrow that we remember the untimely departure of our colleague and friend Todor Gramchev who was the inspiration for both [27] and the present paper.
2 Well-posedness in anisotropic Sobolev spaces
This section is devoted to proving the well-posedness of the Cauchy problem (1). For the reader’s convenience we first give a detailed proof in the cases \(m=2\) and \(m=3\). This will guide us in proving Theorem 1. We note that the case \(m=2\) has been studied in [27] and we will briefly review its derivation. However, first we collect a few results about Fourier integral operators that we will need in the sequel.
2.1 Auxiliary remarks
By \(I^m_{1,0}\), we denote the class of Fourier integral operators with amplitudes in \(S^m_{1,0}\). For further information, the reader may consult [15, 42, 43] and the references therein.
If \(a_j\in S^m\), i.e. if the amplitude \(a_j\) in (6) is a symbol of order m, we then write \(G^0_j\in I_{1,0}^{m}\). However, in the above construction of propagators for hyperbolic equations, we have \(a_j\in S^0\), so that \(G^0_j \in I_{1,0}^{0}\).
With that, we can record the following estimate:
Lemma 1
This statement follows from the continuity of \(\lambda _j, \varphi _j, a_j, A_j\) with respect to t and from the \(H^\sigma \)boundedness of nondegenerate Fourier integral operators, see e.g. [15] (there are also surveys on such questions [42, 43]). It is important to note that the constant for the estimate for \(G_j\) does not depend on the initial data of the Cauchy problem; see also Remark 2.
2.2 The case \(m=2\)
The operator \(G_1 \circ b_{12} \circ G_2 \circ b_{21}\) belongs to \( C I_{1,0}^{-1}\) since \(b_{21} \in C \Psi _{1,0}^{-1}\) and \(b_{12} \in C \Psi _{1,0}^{0}\).
Remark 2
Note that the constant \(T^*\) depends only on A and s. Thus, the argument above can be iterated by taking \(u(T^*,x)\) as new initial data. In this way one can cover an arbitrary finite interval [0, T] and obtain a solution in \( C([0,T],H^s)\times C([0,T],H^{s+1}) \).
Remark 3
Since \(a_{12}(t,x,D_x)\) is a first order operator, combining (11) with (13) we easily see that, in order to get Sobolev well-posedness of order s, we need to take initial data \(u_1^0\) and \(u_2^0\) in \(H^s\) and \(H^{s+1}\), respectively, and right-hand sides \(f_1\) and \(f_2\) in \(C([0,T],H^s)\) and \(C([0,T],H^{s+1})\), respectively.
We have therefore proved the following theorem stated for the first time in [27, Theorem 7.2].
Theorem 3
Remark 4
2.3 The case \(m=3\)
2.4 The general case
Theorem 1
Proof
3 Schur decomposition of \(m \times m\) matrices
In this section we investigate how to reduce an \(m\times m\) matrix to upper triangular form. We recall that such a decomposition is well known for constant matrices and goes under the name of Schur’s triangularisation; its statement was given in Theorem C.
One of the difficulties when dealing with variable multiplicities is the loss of regularity in the parameters at the points of multiplicities. In the following, we will assume that A is a matrix of (possibly) complex valued first order symbols, continuous with respect to t, i.e., \(A(t,x,\xi ) \in \big ( C S^1 \big )^{m \times m}\).
We will now develop a parameter-dependent extension of the Schur triangularisation procedure and describe it step by step. Then we will illustrate it for systems of small size, namely for \(m=2\) and \(m=3\).
In the case of \(m=2\) the construction below was introduced in [27] and now we give its general version for systems of any size.
Normal forms of matrices depending on several parameters have a long history and are notoriously involved; for some remarks and related works, we refer the reader to [14, 24, 25, 45].
3.1 First step or Schur step
The first step in our triangularisation follows the construction in the constant case except that we will not get a unitary transformation matrix. For this reason we talk of a Schur step. Throughout this paper \(e_i\) denotes the ith vector of the standard basis of \(\mathbb R^n\) with an appropriate dimension n.
Proposition 1
Proof
First let us note that we can assume that \(j=1\) in (29). If that is not the case, we can exchange the rows 1 and j as well as columns 1 and j to move the jth component of the eigenvector to the first component.
Applying Proposition 1 repeatedly \(m-2\) times to E, we obtain a full Schur transformation of A, that is, a full reduction to upper triangular form. In the next subsection we describe this iteration in detail. This triangularisation procedure is summarised in Theorem 6, where sufficient conditions on the eigenvectors of A are given.
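For a constant matrix, the content of Proposition 1 can be checked numerically. The following minimal sketch (the symmetric test matrix and the use of numpy's eigensolver are our own choices, not part of the paper's construction) builds \(T_1\) with a rescaled eigenvector as first column and verifies that \(T_1^{-1} A T_1\) has first column \((\lambda _1, 0, \ldots , 0)^T\):

```python
import numpy as np

def schur_step(A, lam, h):
    """One Schur step for a constant matrix (cf. Proposition 1):
    given an eigenpair (lam, h) of A with h[0] != 0, the matrix
    T1 = [omega_1  e_2 ... e_m], with omega_1 = h / h[0], satisfies
    (T1^{-1} A T1) e_1 = lam e_1, i.e. the transformed matrix has
    first column (lam, 0, ..., 0)^T."""
    m = A.shape[0]
    T1 = np.eye(m)
    T1[:, 0] = h / h[0]               # rescaled eigenvector as first column
    R = np.linalg.solve(T1, A @ T1)   # T1^{-1} A T1, without forming the inverse
    return R, T1

# Hypothetical symmetric 3x3 example (real eigenpairs guaranteed).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
w, V = np.linalg.eigh(A)
lam, h = w[0], V[:, 0]   # eigenvector proportional to [1, -sqrt(2), 1]: h[0] != 0
R, T1 = schur_step(A, lam, h)
```

Here only the first column is cleared; the trailing \((m-1)\times (m-1)\) block E is in general full and is handled by the subsequent steps.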
3.2 The triangularisation procedure
Remark 5
 Step 1
 By Proposition 1 there exists a matrix \(T_1\) such that$$\begin{aligned} T^{-1}_1 A T_1 = \begin{bmatrix} \lambda _1&\quad a_{12}&\quad \cdots&\quad a_{1m} \\ 0&\quad&\quad&\quad \\ \vdots&\quad&\quad E_{m-1}&\quad \\ 0&\quad&\quad&\quad \end{bmatrix}. \end{aligned}$$The matrix \(T_1\) is given by$$\begin{aligned} T_1 = \begin{bmatrix} \omega _1&\quad e_2&\quad \ldots&\quad e_m \end{bmatrix}, \quad \omega _1 = \begin{bmatrix} \omega _{11}&\quad \ldots&\quad \omega _{1m} \end{bmatrix}^T \end{aligned}$$with$$\begin{aligned} \omega _{1j} = \frac{\left\langle h^{(1)}(t,x,\xi ) \mid e_j \right\rangle }{\left\langle h^{(1)}(t,x,\xi ) \mid e_1 \right\rangle }. \end{aligned}$$In the sequel we make use of the projector \(\Pi _k : \mathbb R^m \rightarrow \mathbb R^{m-k}\), \(0 \le k \le m-1\), defined by$$\begin{aligned} \Pi _k \begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix} = \begin{bmatrix} x_{k+1} \\ \vdots \\ x_m \end{bmatrix}. \end{aligned}$$Note that \(\Pi _0\) is the identity map \(I_m:\mathbb R^m\rightarrow \mathbb R^m\).
 Step 2
 Since \(h_2\) is an eigenvector of A with eigenvalue \(\lambda _2\) we get that \(T_1^{-1}h_2\) is an eigenvector of \(T^{-1}_1A T_1\) with eigenvalue \(\lambda _2\) as well. By the structure of \(T^{-1}_1A T_1\) we easily see that \(h^{(2)} := \Pi _1 T_1^{-1} h_2\) is an eigenvector of \(E_{m-1}\) corresponding to \(\lambda _2\). Arguing as in Remark 5 we assume that$$\begin{aligned} \left\langle \Pi _1 T_1^{-1} h_2 \mid e_1 \right\rangle \ne 0 \quad \forall (t,x,\xi ) \in [0,T] \times {\mathbb {R}}^n \times \{ |\xi | \ge M \}, \end{aligned}$$(33)to be able to apply Proposition 1 to \(E_{m-1}\). We get that there exists an \((m-1) \times (m-1)\) matrix \({\tilde{T}}_2\) such that \({\tilde{T}}_2^{-1}E_{m-1}{\tilde{T}}_2\) is of the form$$\begin{aligned} \begin{bmatrix} \lambda _2&\quad *&\quad \ldots&\quad *\\ 0&\quad&\quad&\quad \\ \vdots&\quad&\quad E_{m-2}&\quad \\ 0&\quad&\quad&\quad \end{bmatrix}, \end{aligned}$$where the first row is obtained from the first row of \(E_{m-1}\). Thus, setting$$\begin{aligned} T_2 = \begin{bmatrix} 1&\quad 0&\quad \ldots&\quad 0 \\ 0&\quad&\quad&\quad \\ \vdots&\quad&\quad {\tilde{T}}_2&\quad \\ 0&\quad&\quad&\quad \end{bmatrix}, \end{aligned}$$we obtain$$\begin{aligned} T_2^{-1}T_1^{-1} A T_1 T_2 = \begin{bmatrix} \lambda _1&\quad *&\quad *&\quad \ldots&\quad *\\ 0&\quad \lambda _2&\quad *&\quad \ldots&\quad *\\ 0&\quad 0&\quad&\quad&\quad \\ \vdots&\quad \vdots&\quad&\quad E_{m-2}&\quad \\ 0&\quad 0&\quad&\quad&\quad \end{bmatrix}. \end{aligned}$$(34)Note that in (34) we write explicitly only the entries most relevant to our triangularisation. To compute the matrix \({\tilde{T}}_2\), we set$$\begin{aligned} \omega _{2j}(t,x,\xi ) := \frac{\left\langle h^{(2)}(t,x,\xi ) \mid e_j \right\rangle }{\left\langle h^{(2)}(t,x,\xi ) \mid e_1 \right\rangle } , \quad j=2,\ldots ,m, \end{aligned}$$where$$\begin{aligned} \omega _2 = \begin{bmatrix} \omega _{22}&\ldots&\omega _{2m} \end{bmatrix}^T, \end{aligned}$$and then$$\begin{aligned} {\tilde{T}}_2 = \begin{bmatrix} \omega _2&e_2&\ldots&e_{m-1} \end{bmatrix}. \end{aligned}$$It is clear that \(T_2\) has the same structure as \(T_1\), i.e., it is defined via a rescaled eigenvector as the first column and an identity matrix (\(I_{m-1}\) for \(T_1\) and \(I_{m-2}\) for \(T_2\)).
 Step k
 By iterating the method \(k-1\) times we can find \(k-1\) matrices \(T_1, T_2, \ldots , T_{k-1}\) of size \(m\times m\) such that$$\begin{aligned}&T_{k-1}^{-1} \cdot \ldots \cdot T_1^{-1} A T_1 \cdot \ldots \cdot T_{k-1} = \\&\qquad \begin{bmatrix} \lambda _1&\quad *&\quad *&\quad \ldots&\quad \ldots&\quad *\\ 0&\quad \ddots&\quad *&\quad \ldots&\quad \ldots&\quad *\\ 0&\quad 0&\quad \lambda _{k-1}&\quad *&\quad \ldots&\quad *\\ 0&\quad 0&\quad 0&\quad&\quad&\quad \\ \vdots&\quad \vdots&\quad \vdots&\quad&\quad E_{m-k+1}&\quad \\ 0&\quad 0&\quad 0&\quad&\quad \end{bmatrix}, \end{aligned}$$where \(E_{m-k+1}\) is an \((m-k+1) \times (m-k+1)\) matrix and the equality holds on \([0,T]\times {\mathbb {R}}^n\times \{|\xi |\ge M\}\). Since \(h_k\) is an eigenvector of A corresponding to \(\lambda _k\), the vector$$\begin{aligned} T_{k-1}^{-1} T_{k-2}^{-1} \cdot \cdots \cdot T_1^{-1} h_k \end{aligned}$$is an eigenvector of$$\begin{aligned} T_{k-1}^{-1} T_{k-2}^{-1} \cdot \cdots \cdot T_1^{-1} A T_1 T_2 \cdot \cdots \cdot T_{k-1} \end{aligned}$$and \(\Pi _{k-1} T_{k-1}^{-1} T_{k-2}^{-1} \cdot \cdots \cdot T_1^{-1} h_k\) an eigenvector of \(E_{m-k+1}\) corresponding to \(\lambda _k\). Thus, to satisfy the assumptions of Proposition 1 and keeping in mind Remark 5, we require that$$\begin{aligned} h^{(k)} := \Pi _{k-1} T_{k-1}^{-1} T_{k-2}^{-1} \cdot \cdots \cdot T_1^{-1} h_k \in \big ( C S^0 \big )^{m-k+1} \end{aligned}$$and$$\begin{aligned} \left\langle h^{(k)}(t,x,\xi ) \mid e_1 \right\rangle \ne 0 \quad \forall (t,x,\xi ) \in [0,T] \times {\mathbb {R}}^n \times \{ |\xi | \ge M \}. \end{aligned}$$(35)It follows that there exists an \((m-k+1) \times (m-k+1)\) transformation matrix \({\tilde{T}}_k\) such that \({\tilde{T}}^{-1}_{k} \ldots {\tilde{T}}_1^{-1}A{\tilde{T}}_{1} \ldots {\tilde{T}}_k\) is of the form$$\begin{aligned} \begin{bmatrix} \lambda _k&\quad *&\quad \ldots&\quad *\\ 0&\quad&\quad&\quad \\ \vdots&\quad&\quad E_{m-k}&\quad \\ 0&\quad&\quad&\quad \end{bmatrix}. \end{aligned}$$The matrix \({\tilde{T}}_k\) is defined by$$\begin{aligned} {\tilde{T}}_k = \begin{bmatrix} \omega _k&\quad e_2&\quad \ldots&\quad e_{m-k+1} \end{bmatrix}, \quad \omega _k = \begin{bmatrix} \omega _{kk}&\quad \ldots&\quad \omega _{km} \end{bmatrix}^T, \end{aligned}$$where$$\begin{aligned} \omega _{kj} = \frac{\left\langle h^{(k)}(t,x,\xi ) \mid e_j \right\rangle }{\left\langle h^{(k)}(t,x,\xi ) \mid e_1 \right\rangle }, \quad j = k, \ldots , m, \end{aligned}$$and we set$$\begin{aligned} T_k = \begin{bmatrix} I_{k-1}&\mathbf {0} \\ \mathbf {0}&{\tilde{T}}_k \end{bmatrix}. \end{aligned}$$
 Step \(m-1\)
 This is the last step as \(E_2\) is a \(2 \times 2\) matrix. We have that$$\begin{aligned} h^{(m-1)} = \Pi _{m-2} T_{m-2}^{-1} \cdot \cdots \cdot T_{1}^{-1} h_{m-1} \in \big ( C S^0 \big )^{2} \end{aligned}$$is an eigenvector of \(E_2\) corresponding to \(\lambda _{m-1}\) and that \({\tilde{T}}_{m-1}\) exists as before if$$\begin{aligned} \left\langle h^{(m-1)}(t,x,\xi ) \mid e_1 \right\rangle \ne 0 \quad \forall (t,x,\xi ) \in [0,T] \times {\mathbb {R}}^n \times \{ |\xi | \ge M \}. \end{aligned}$$(36)The matrix \({\tilde{T}}_{m-1}\) is given by$$\begin{aligned} {\tilde{T}}_{m-1} = \begin{bmatrix} \omega _{m-1}&e_{2} \end{bmatrix} = \begin{bmatrix} \omega _{m-1,m-1}&\quad 0 \\ \omega _{m-1,m}&\quad 1 \end{bmatrix}, \end{aligned}$$where$$\begin{aligned} \omega _{m-1,j} = \frac{\left\langle h^{(m-1)}(t,x,\xi ) \mid e_j \right\rangle }{\left\langle h^{(m-1)}(t,x,\xi ) \mid e_1 \right\rangle } , \quad j =m-1,m, \end{aligned}$$and then$$\begin{aligned} T_{m-1} = \begin{bmatrix} I_{m-2}&\quad \mathbf 0 \\ \mathbf 0&\quad {\tilde{T}}_{m-1} \end{bmatrix}. \end{aligned}$$

\(h_1, \ldots , h_{m-1}\) are the eigenvectors of the matrix A corresponding to the eigenvalues \(\lambda _1, \ldots , \lambda _{m-1}\).
 \(h^{(1)}=h_1\) and$$\begin{aligned} h^{(i)} = \Pi _{i-1}T_{i-1}^{-1} T_{i-2}^{-1} \cdot \,\cdots \, \cdot T_1^{-1} h_i \in \big ( C S^0\big )^{m-i+1} , \end{aligned}$$(37)for \(i=2, \ldots ,m-1\).
 the matrices \(T_k\) are inductively defined as follows: \(T_0 = I_{m}\) and$$\begin{aligned} T_k = \begin{bmatrix} I_{k-1}&\quad \mathbf 0 \\ \mathbf 0&\quad {\tilde{T}}_k \end{bmatrix}, \quad {\tilde{T}}_k = \begin{bmatrix} \omega _{k}&\quad e_{2}&\quad \ldots&\quad e_{m-k+1} \end{bmatrix}, \quad e_i \in \mathbb R^{m-k+1}, \end{aligned}$$where$$\begin{aligned} \omega _{kj} = \frac{\left\langle h^{(k)}(t,x,\xi ) \mid e_j \right\rangle }{\left\langle h^{(k)}(t,x,\xi ) \mid e_1 \right\rangle }, \quad j = k,\ldots ,m. \end{aligned}$$
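In the constant-coefficient case the whole iteration of Steps 1 to \(m-1\) can be condensed into a short numerical sketch; the test matrix, the tolerance and the eigenvector selection below are illustrative assumptions, not the paper's symbolic construction:

```python
import numpy as np

def schur_triangularise(A, tol=1e-10):
    """Iterate the Schur step on a constant m x m matrix: at step k an
    eigenvector of the trailing block is rescaled to build T~_k, embedded
    as T_k = diag(I_{k-1}, T~_k); with T = T_1 ... T_{m-1}, the matrix
    T^{-1} A T is upper triangular."""
    m = A.shape[0]
    T = np.eye(m, dtype=complex)
    R = A.astype(complex)
    for k in range(m - 1):
        w, V = np.linalg.eig(R[k:, k:])          # spectrum of trailing block
        # pick an eigenvector with nonzero first component (cf. condition (35))
        j = next(i for i in range(len(w)) if abs(V[0, i]) > tol)
        Tk = np.eye(m, dtype=complex)
        Tk[k:, k] = V[:, j] / V[0, j]            # rescaled eigenvector as column k
        T = T @ Tk
        R = np.linalg.solve(Tk, R @ Tk)          # T_k^{-1} R T_k
    return R, T

# Hypothetical symmetric test matrix with simple real spectrum.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
R, T = schur_triangularise(A)
```

Each step clears one more column below the diagonal and leaves the previously cleared columns untouched, since \(T_k\) acts as the identity on the first \(k-1\) coordinates.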
Summarising, we can formulate a more precise version of Theorem 2.
Theorem 6
Remark 6
Remark 7
If \(A(t,x,\xi )\) has complex symbols (as allowed in Theorem 1, see also Remark 1) and real eigenvalues, the eigenvalues of the Schur transformed system clearly remain real. The upper triangular entries may still be complex valued symbols.
Remark 8
Theorem 6 is quite general in the sense that the functions \(a_{ij}\) could be complex-valued. In this paper, we are concerned with hyperbolic matrices, i.e. we assume that the eigenvalues \(\lambda _1, \ldots , \lambda _m\) are real. We stress that the Schur transform does not change the hyperbolicity of the matrix, as the eigenvalues of \(T^{-1} A T\) are also \(\lambda _1, \ldots , \lambda _m\).
Remark 9
For our applications in this and future work it is important that the transform T in Theorem 6 keeps the regularity of the original matrix A, i.e. that the elements of the Schur transform \(T^{-1} A T\) are in the same class as the elements of A. Here, we stated everything with \(C S^1\) and \(C S^0\) as that is the regularity considered in this paper. Note that one could replace C with \(C^k\) or \(C^\infty \) and find a matrix T such that the transformed matrix \(T^{-1}AT\) inherits the same regularity with respect to t. In addition, one could also drop the regularity in t to \(L^\infty \) and the triangularisation procedure would still work, preserving boundedness in t through every step.
For the sake of simplicity and the reader’s convenience, in the next subsections we analyse Theorem 6 in the special cases of \(m=2\) and \(m=3\).
3.3 The case \(m=2\)
We now formulate Theorem 6 in the special case \(m=2\). In this way we recover the formulation given in [27].
Theorem 7
Proof
3.3.1 Example
 (i) By direct computations we can easily see that if \(h_1=[h_{11}\,\, h_{12}]^T=e_1\) then the matrix A is automatically in upper triangular form. Indeed,$$\begin{aligned} a_{21}h_{11}+a_{22}h_{12}=\lambda _1 h_{12} \end{aligned}$$implies \(a_{21}=0\). A typical example (already discussed in [27]) is the Jordan block matrix$$\begin{aligned} A=\begin{bmatrix} 0&\quad 1\\ 0&\quad 0 \end{bmatrix}, \end{aligned}$$where \(\lambda _1=0\) is an eigenvalue with eigenvector \(h_1=e_1\).
 (ii) Condition (40) is trivially fulfilled when \(\det A\equiv 0\) and A is of the form$$\begin{aligned} \begin{bmatrix} a&\quad -a\\ a&\quad -a \end{bmatrix}, \end{aligned}$$for \(a=a(t,x,\xi )\). Indeed, also in this case one can take 0 as an eigenvalue with eigenvector \(h_1= [1 \,\, 1]^T\).
3.4 The case \(m=3\)
Thus, we can state
Theorem 8
We end this subsection by discussing some examples of \(3\times 3\) matrices fulfilling the assumptions above on their eigenvalues.
3.4.1 Examples
 (i) If the matrix A has eigenvectors$$\begin{aligned} h_1= \begin{bmatrix} 1\\ 0\\ 1 \end{bmatrix} \quad \text {and} \quad h_2= \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}, \end{aligned}$$then conditions (44) and (45) are easily fulfilled with \(j_1=1\) and \(j_2=2\). Indeed, \(h_{11}=1\) and$$\begin{aligned} h_{22}h_{11}-h_{12}h_{21}=h_{22}h_{11}=1. \end{aligned}$$More generally, to satisfy (44) and (45) it is enough to have two eigenvectors$$\begin{aligned} h_1= \begin{bmatrix} h_{11}\\ h_{12}\\ h_{13} \end{bmatrix} \quad \text {and} \quad h_2= \begin{bmatrix} h_{21}\\ h_{22}\\ h_{23} \end{bmatrix} \end{aligned}$$with \(h_{11}\ne 0\), \(h_{22}\ne 0\) and \(h_{12}=0\).
 (ii) A matrix with eigenvectors$$\begin{aligned} h_1= \begin{bmatrix} 1\\ 0\\ 1 \end{bmatrix} \quad \text {and} \quad h_2= \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} \end{aligned}$$has a special form. Indeed, for \(\lambda _1\) and \(\lambda _2\) the eigenvalues corresponding to \(h_1\) and \(h_2\), respectively, the eigenvector equations give$$\begin{aligned} \begin{aligned} a_{13}&=\lambda _1-a_{11},\\ a_{23}&=-a_{21},\\ a_{33}&=\lambda _1-a_{31}, \end{aligned} \end{aligned}$$and$$\begin{aligned} \begin{aligned} a_{12}&=\lambda _2-a_{11},\\ a_{22}&=\lambda _2-a_{21},\\ a_{32}&=-a_{31}. \end{aligned} \end{aligned}$$Hence$$\begin{aligned} A=\begin{bmatrix} a_{11}&\quad \lambda _2-a_{11}&\quad \lambda _1-a_{11}\\ a_{21}&\quad \lambda _2-a_{21}&\quad -a_{21}\\ a_{31}&\quad -a_{31}&\quad \lambda _1-a_{31} \end{bmatrix}. \end{aligned}$$
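As a sanity check, one can verify numerically that a matrix of this special form has \(h_1\) and \(h_2\) as eigenvectors for \(\lambda _1\) and \(\lambda _2\); the concrete values of \(a_{11}, a_{21}, a_{31}, \lambda _1, \lambda _2\) below are hypothetical:

```python
import numpy as np

# Hypothetical values for the free entries and the eigenvalues.
a11, a21, a31 = 0.7, -1.2, 0.4
lam1, lam2 = 2.0, -1.0

# Matrix of the special form derived from the eigenvector equations:
# a13 = lam1 - a11, a23 = -a21, a33 = lam1 - a31,
# a12 = lam2 - a11, a22 = lam2 - a21, a32 = -a31.
A = np.array([[a11, lam2 - a11, lam1 - a11],
              [a21, lam2 - a21, -a21      ],
              [a31, -a31,       lam1 - a31]])

h1 = np.array([1.0, 0.0, 1.0])   # eigenvector for lam1
h2 = np.array([1.0, 1.0, 0.0])   # eigenvector for lam2
```

The check \(A h_1 = \lambda _1 h_1\), \(A h_2 = \lambda _2 h_2\) holds identically in \(a_{11}, a_{21}, a_{31}\), as the row sums collapse by construction.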
References
 1.Bernstein, D.S.: Matrix Mathematics—Theory, Facts, and Formulas, 2nd edn. Princeton University Press, Princeton (2009)MATHGoogle Scholar
 2.Bronshtein, M.D.: Smoothness of roots of polynomials depending on parameters. Sibirsk. Mat. Zh., 20(3), 493–501, (1979). Sib. Math. J. 20, 347–352 (1980). (English transl)CrossRefGoogle Scholar
 3.Colombini, F., Kinoshita, T.: On the Gevrey well posedness of the Cauchy problem for weakly hyperbolic equations of higher order. J. Differ. Equ. 186, 394–419 (2002). https://doi.org/10.1016/S00220396(02)000098 MathSciNetCrossRefMATHGoogle Scholar
 4.Colombini, F., Spagnolo, S.: An example of a weakly hyperbolic Cauchy problem not well posed in \(C^\infty \). Acta Math. 148, 243–253 (1982). https://doi.org/10.1007/BF02392730 MathSciNetCrossRefMATHGoogle Scholar
 5.Colombini, F., De Giorgi, E., Spagnolo, S.: Sur les équations hyperboliques avec des coefficients qui ne dépendent que du temps. Ann. Sc. Norm. Super. Pisa Cl. Sci. 6, 511–559 (1979)MATHGoogle Scholar
 6.Colombini, F., Del Santo, D., Fanelli, F., Métivier, G.: Timedependent loss of derivatives for hyperbolic operators with non regular coefficients. Commun. Partial Differ. Equ. 38(10), 1791–1817 (2013). https://doi.org/10.1080/03605302.2013.795968 MathSciNetCrossRefMATHGoogle Scholar
 7.Colombini, F., Del Santo, D., Fanelli, F., Métivier, G.: A wellposedness result for hyperbolic operators with Zygmund coefficients. J. Math. Pures Appl. 9(100), 455–475 (2013). https://doi.org/10.1016/j.matpur.2013.01.009 MathSciNetCrossRefMATHGoogle Scholar
 8.Colombini, F., Jannelli, E., Spagnolo, S.: Nonuniqueness in hyperbolic Cauchy problems. Ann. Math. 126, 495–524 (1987). https://doi.org/10.2307/1971359 MathSciNetCrossRefMATHGoogle Scholar
 9.Colombini, F., Lerner, N.: Hyperbolic operators with nonLipschitz coefficients. Duke Math. J. 77(3), 657–698 (1995). https://doi.org/10.1215/S0012709495077217 MathSciNetCrossRefMATHGoogle Scholar
 10.Colombini, F., Nishitani, T.: Second order weakly hyperbolic operators with coefficients sum of powers of functions. Osaka J. Math. 44(1), 121–137 (2007)MathSciNetMATHGoogle Scholar
 11.D’Ancona, P., Kinoshita, T.: On the wellposedness of the Cauchy problem for weakly hyperbolic equations of higher order. Math. Nachr. 278, 1147–1162 (2005). https://doi.org/10.1002/mana.200310299 MathSciNetCrossRefMATHGoogle Scholar
 12.D’Ancona, P., Kinoshita, T., Spagnolo, S.: Weakly hyperbolic systems with Hölder continuous coefficients. J. Differ. Equ. 203(1), 64–81 (2004). https://doi.org/10.1016/j.jde.2004.03.016 CrossRefMATHGoogle Scholar
 13.D’Ancona, P., Kinoshita, T., Spagnolo, S.: On the 2 by 2 weakly hyperbolic systems. Osaka J. Math. 45(4), 921–939 (2008)MathSciNetMATHGoogle Scholar
 14.Dieci, L., Eirola, T.: On smooth decompositions of matrices. SIAM J. Matrix Anal. Appl. 20(3), 800–819 (1999). https://doi.org/10.1137/S0895479897330182 MathSciNetCrossRefMATHGoogle Scholar
 15.Duistermaat, J.J.: Fourier Intergal Operators Progress in Mathematics, vol. 130. Birkhäuser Boston, Inc, Boston (1996)Google Scholar
 16.Garetto, C.: On hyperbolic equations and systems with nonregular time dependent coefficients. J. Differ. Equ. 259(11), 5846–5874 (2015). https://doi.org/10.1016/j.jde.2015.07.011 MathSciNetCrossRefMATHGoogle Scholar
 17.Garetto, C., Jäh, C.: Wellposedness of hyperbolic systems with multiplicities and smooth coefficients. Math. Ann. 369(1–2), 441–485 (2017). https://doi.org/10.1007/s0020801614368 MathSciNetCrossRefMATHGoogle Scholar
 18.Garetto, C., Ruzhansky, M.: Wellposedness of weakly hyperbolic equations with time dependent coefficients. J. Differ. Equ. 253(5), 1317–1340 (2012). https://doi.org/10.1016/j.jde.2012.05.001 MathSciNetCrossRefMATHGoogle Scholar
 19.Garetto, C., Ruzhansky, M.: Weakly hyperbolic equations with nonanalytic coefficients and lower order terms. Math. Ann. 357(2), 401–440 (2013). https://doi.org/10.1007/s0020801309109 MathSciNetCrossRefMATHGoogle Scholar
 20.Garetto, C., Ruzhansky, M.: A note on weakly hyperbolic equations with analytic principal part. J. Math. Anal. Appl. 412(1), 1–14 (2014). https://doi.org/10.1016/j.jmaa.2013.09.011 MathSciNetCrossRefMATHGoogle Scholar
 21.Garetto, C., Ruzhansky, M.: Hyperbolic second order equations with nonregular time dependent coefficients. Arch. Ration. Mech. Anal. 217(1), 113–154 (2015). https://doi.org/10.1007/s0020501408301 MathSciNetCrossRefMATHGoogle Scholar
 22.Garetto, C., Ruzhansky, M.: On hyperbolic systems with time dependent Hölder characteristics. Ann. Mat. Pura Appl. 196(1), 155–164 (2017). https://doi.org/10.1007/s1023101605676 MathSciNetCrossRefMATHGoogle Scholar
 23.Garetto, C., Ruzhansky, M.: On \(C^\infty \) wellposedness of hyperbolic systems with multiplicities. Ann. Mat. Pura Appl. 196(5), 1819–1834 (2017). https://doi.org/10.1007/s1023101706392 MathSciNetCrossRefMATHGoogle Scholar
 24.Gingold, H.: On continuous triangularization of matrix functions. SIAM J. Math. Anal. 10(4), 709–720 (1979). https://doi.org/10.1137/0510065 MathSciNetCrossRefMATHGoogle Scholar
 25.Gingold, H., Hsieh, P.F.: Globally analytic triangularization of a matrix function. Linear Algebra Appl. 169, 75–101 (1992). https://doi.org/10.1016/00243795(92)901727 MathSciNetCrossRefMATHGoogle Scholar
 26.Gramchev, T., Orrú, N.: Cauchy problem for a class of nondiagonalizable hyperbolic systems. Discret. Contin. Dyn. Syst., 533–542 (2011). https://doi.org/10.3934/proc.2011.2011.533
 27. Gramchev, T., Ruzhansky, M.: Cauchy problem for \(2 \times 2\) hyperbolic systems of pseudodifferential equations with nondiagonalisable principal part. Studies in phase space analysis with applications to PDEs. Progr. Nonlinear Differ. Equ. Appl. 84, 129–144 (2013)
 28. Hörmander, L.: The Analysis of Linear Partial Differential Operators, vol. I–IV. Springer, Heidelberg (1985)
 29. Hörmander, L.: Hyperbolic systems with double characteristics. Comm. Pure Appl. Math. 46, 261–301 (1993). https://doi.org/10.1002/cpa.3160460207
 30. Ivrii, V.Ya., Petkov, V.M.: Necessary conditions for the correctness of the Cauchy problem for nonstrictly hyperbolic equations (Russian). Russ. Math. Surv. 29, 3–70 (1974). https://doi.org/10.1070/RM1974v029n05ABEH001295
 31. Kajitani, K., Yuzawa, Y.: The Cauchy problem for hyperbolic systems with Hölder continuous coefficients with respect to the time variable. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 5(4), 465–482 (2006)
 32. Kamotski, I., Ruzhansky, M.: Estimates and spectral asymptotics for systems with multiplicities. Funct. Anal. Appl. 39, 308–310 (2005). https://doi.org/10.1007/s10688-005-0052-2
 33. Kamotski, I., Ruzhansky, M.: Regularity properties, representation of solutions and spectral asymptotics of systems with multiplicities. Comm. Partial Differ. Equ. 32, 1–35 (2007). https://doi.org/10.1080/03605300600856816
 34. Kinoshita, T., Spagnolo, S.: Hyperbolic equations with nonanalytic coefficients. Math. Ann. 336, 551–569 (2006). https://doi.org/10.1007/s00208-006-0009-7
 35. Lax, P.: Asymptotic solutions of oscillatory initial value problems. Duke Math. J. 24, 627–646 (1957). https://doi.org/10.1215/S0012-7094-57-02471-7
 36. Melrose, R.B., Uhlmann, G.A.: Microlocal structure of involutive conical refraction. Duke Math. J. 46, 571–582 (1979). https://doi.org/10.1215/S0012-7094-79-04630-1
 37. Nishitani, T.: On the Cauchy problem for \(D^2_t-D_x a(t, x)^n D_x\). Ann. Univ. Ferrara Sez. VII Sci. Mat. 52(2), 395–430 (2006)
 38. Ohya, Y., Tarama, S.: The Cauchy problem with multiple characteristics in the Gevrey class — Hölder coefficients in \(t\). Hyperbolic Equations and Related Topics. Katata/Kyoto, pp. 273–306 (1984)
 39. Parenti, C., Parmeggiani, A.: On the Cauchy problem for hyperbolic operators with double characteristics. Commun. Partial Differ. Equ. 34, 837–888 (2009). https://doi.org/10.1080/03605300902892360
 40. Parusinski, A., Rainer, A.: Regularity of roots of polynomials. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 16(2), 481–517 (2016)
 41. Rozenblum, G.: Spectral asymptotic behaviour of elliptic systems (Russian). Zap. LOMI 96, 255–271 (1980)
 42. Ruzhansky, M.: Singularities of affine fibrations in the theory of regularity of Fourier integral operators. Russ. Math. Surv. 55(1), 93–161 (2000). https://doi.org/10.1070/RM2000v055n01ABEH000250
 43. Ruzhansky, M.: Regularity theory of Fourier integral operators with complex phases and singularities of affine fibrations. CWI Tract, 131. Stichting Mathematisch Centrum, Centrum voor Wiskunde en Informatica, Amsterdam (2001)
 44. Yuzawa, Y.: The Cauchy problem for hyperbolic systems with Hölder continuous coefficients with respect to time. J. Differ. Equ. 219(2), 363–374 (2005). https://doi.org/10.1016/j.jde.2004.12.006
 45. Wasow, W.: On holomorphically similar matrices. J. Math. Anal. Appl. 4(2), 202–206 (1962). https://doi.org/10.1016/0022-247X(62)90050-1
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.