This chapter is intended to give a brief introduction to, as well as a summary of, the present text. We shall highlight some of the main ideas and methods behind the theory and also aim to provide some background on the central concept of the manuscript: the notion of so-called

$$\displaystyle \begin{aligned} \mathbf{Evolutionary}\ \mathbf{Equations} \end{aligned}$$

dating back to Picard in the seminal paper [82]; see also [84, Chapter 6].

Another expression used to describe the same thing (and in order to distinguish the concept from evolution equations) is that of evo-systems. Before going into detail on what we think of when using the term evolutionary equations, we provide some wider context to (some) solution methods of partial differential equations.

1.1 From ODEs to PDEs

In order to study and understand partial differential equations (PDEs) in general, a natural starting point is to look for methods known from the theory of ordinary differential equations (ODEs) and to apply these to PDEs. The process of getting from a PDE to some ODE is by no means unique or ‘canonical’. That is to say, there might be more than one way of reformulating a PDE into a (generalised) ODE setting (if at all).

The benefits of such a strategy, if it works, are obvious: since solution methods for ODEs are well known and well understood, some intuition from ODEs may be passed on to the solution process for PDEs. One way of directly applying ODE methods to PDEs can be carried out for transport type equations, where the method of characteristics uses the fact that (via the implicit function theorem) some solutions of PDEs correspond to solutions of ODEs. In this section we shall not delve into this direction of PDE theory but refer to the standard literature such as [39] instead.

Another way of using ODE theory for PDEs is summarised by what might be called infinite-dimensional generalisations. In a nutshell, instead of solving a PDE directly, one solves (infinitely many) ODEs instead. For some equations this strategy can be applied by the separation of variables ansatz. Somewhat similarly, one can generalise linear ODEs into an infinite-dimensional setting under the umbrella term evolution equation to signify differential equations involving time. In order to provide some more detail on this strategy we briefly recall how to solve linear ODEs: Let us consider an n × n-matrix A with entries from the field \(\mathbb {K}\) of complex or real numbers, \(\mathbb {C}\) or \(\mathbb {R}\), and address the system of ordinary differential equations

$$\displaystyle \begin{aligned} \begin{cases} u'(t)=Au(t), & t>0,\\ u(0)=u_{0} \end{cases} \end{aligned}$$

for some given initial datum, \(u_{0}\in \mathbb {K}^{n}\). This solution can be computed with the help of the matrix exponential

$$\displaystyle \begin{aligned} \mathrm{e}^{tA}=\sum_{k=0}^{\infty}\frac{(tA)^{k}}{k!}\in\mathbb{K}^{n\times n} \end{aligned}$$

in the form

$$\displaystyle \begin{aligned} u(t)=\mathrm{e}^{tA}u_{0}. \end{aligned}$$

As it turns out, this u is continuously differentiable and u satisfies the above equation. We note in particular that \(\mathrm{e}^{tA}u_{0}\to\mathrm{e}^{0A}u_{0}=u_{0}\) as \(t\to 0\) and that \(\mathrm {e}^{\left (t+s\right )A}=\mathrm {e}^{tA}\mathrm {e}^{sA}\). In a way, to obtain the solution for the system of ordinary differential equations we need to construct \((\mathrm {e}^{tA})_{t\geqslant 0}\), the so-called fundamental solution.
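
To make this concrete, the following sketch (our own toy example, not part of the text; the series is truncated at finitely many terms, which is adequate for matrices of small norm) checks the two properties just mentioned for a small matrix.

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via the defining power series (fine for small norms)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # generator of rotations
u0 = np.array([1.0, 0.0])
t, s = 0.3, 0.5

# semigroup property e^{(t+s)A} = e^{tA} e^{sA}
assert np.allclose(expm((t + s) * A), expm(t * A) @ expm(s * A))

# u(t) = e^{tA} u0 solves u' = Au: compare a difference quotient with A u(t)
h = 1e-6
u = lambda tau: expm(tau * A) @ u0
assert np.allclose((u(t + h) - u(t)) / h, A @ u(t), atol=1e-4)
```
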

In order to have a particular example for the infinite-dimensional generalisation in mind, let us have a look at the heat equation next. This is the prototypical example for an (infinite-dimensional) evolution equation: Let \(\Omega \subseteq \mathbb {R}^d\) be open. Then consider

$$\displaystyle \begin{aligned} \begin{cases} \partial_{t}\theta(t,x)=\Delta\theta(t,x), & (t,x)\in\left(0,\infty\right)\times\Omega,\\ \theta(0,x)=\theta_{0}(x), & x\in\Omega, \end{cases} \end{aligned}$$

where \(\Delta =\sum _{j=1}^{d}\partial _{j}^{2}\) is the usual Laplacian taken with respect to the ‘x-variables’ or ‘spatial variables’, θ 0 is a given initial heat distribution, and θ is the unknown (scalar-valued) heat distribution. The above heat equation is also accompanied by boundary conditions for θ(t, x), which are required to be valid for all t > 0 and x ∈ ∂Ω. For definiteness, we consider homogeneous Dirichlet boundary conditions, that is, θ(t, x) = 0 for all t > 0 and x ∈ ∂Ω, in the following.

In order to mark the considered boundary conditions we shall write ΔD instead of just Δ and look at the heat equation in the form

$$\displaystyle \begin{aligned} u'=\Delta_{\mathrm{D}}u,\quad u(0)=u_0 \end{aligned}$$

with the understanding that u is considered to be a vector-valued function assigning to each time \(t\geqslant 0\) an element of a function space X of functions \(\Omega \to \mathbb {K}\); here we choose X = L 2( Ω). If Ω is bounded, it is possible to diagonalise ΔD, and the corresponding eigenvector expansion leads to infinitely many ODEs of the form

$$\displaystyle \begin{aligned} u^{\prime}_k = \lambda_k u_k,\quad u_k(0) = u_{0,k} \end{aligned}$$

for suitable scalars λ k, \(k\in \mathbb {N}\). The solution sequence (u k)k for these ODEs is the sequence of coefficients of the eigenvector expansion of u.
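
As an illustration (our own toy example, not taken from the text): for Ω = (0, π) the Dirichlet Laplacian has eigenpairs (−k², sin(k·)), so the PDE decouples into the ODEs above, and a finite eigenvector expansion solves the heat equation exactly.

```python
import numpy as np

# Omega = (0, pi) with Dirichlet conditions: eigenpairs of Delta_D are
# (-k^2, sin(k x)), so the heat equation decouples into u_k' = -k^2 u_k.
coeffs0 = {1: 1.0, 3: 0.5}            # theta_0 = sin(x) + 0.5 sin(3x)

def theta(t, x):
    return sum(c * np.exp(-k**2 * t) * np.sin(k * x) for k, c in coeffs0.items())

# check d/dt theta = d^2/dx^2 theta at a sample point via finite differences
t0, x0, h = 0.2, 1.0, 1e-4
dt = (theta(t0 + h, x0) - theta(t0 - h, x0)) / (2 * h)
dxx = (theta(t0, x0 + h) - 2 * theta(t0, x0) + theta(t0, x0 - h)) / h**2
assert abs(dt - dxx) < 1e-4

# homogeneous Dirichlet boundary condition theta(t, 0) = theta(t, pi) = 0
assert abs(theta(t0, 0.0)) < 1e-12 and abs(theta(t0, np.pi)) < 1e-12
```
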

A different infinite-dimensional generalisation of the finite-dimensional setting leads to a solution method valid for all Ω.

This generalisation does not consist in changing the PDE to many ODEs but only to a single one with an infinite-dimensional state space. The method is described best by looking at the fundamental solution in the ODE setting rather than the equation. The idea is to find a fundamental solution with state space X so that we replace the family \((\mathrm {e}^{tA})_{t\geqslant 0}\) of matrices acting on \(\mathbb {K}^n\) by a family \((T(t))_{t\geqslant 0}\) of linear operators in X. This leads to the notion of so-called C 0-semigroups and the fundamental solution of the heat equation is then the (appropriately interpreted) family \((\mathrm {e}^{t\Delta _{\mathrm {D}}})_{t\geqslant 0}\), see [38, 48, 81] for some standard references. More precisely, for X = L 2( Ω) and θ 0 ∈ L 2( Ω), the function \(\theta \colon t\mapsto \mathrm {e}^{t\Delta _{\mathrm {D}}}\theta _{0}\in L_2(\Omega )\) satisfies the above heat equation in a certain generalised sense.
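
For readers who like to experiment, the passage from the matrix exponential to the heat semigroup can be imitated numerically. The following sketch (our own construction, not part of the text's rigorous development) replaces ΔD by a standard finite-difference approximation on (0, 1); the resulting matrix exponential family exhibits exactly the semigroup properties described above.

```python
import numpy as np

# Discretise the Dirichlet Laplacian on (0, 1) with n interior grid points;
# the matrix exponential of L then mimics the semigroup e^{t Delta_D}.
n = 20
h = 1.0 / (n + 1)
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

lam, V = np.linalg.eigh(L)                 # L is symmetric, so diagonalise
T = lambda t: V @ np.diag(np.exp(t * lam)) @ V.T

t, s = 0.01, 0.02
assert np.allclose(T(t + s), T(t) @ T(s))      # semigroup property
assert np.allclose(T(0.0), np.eye(n))          # T(0) = identity
assert np.linalg.norm(T(t), 2) <= 1.0 + 1e-12  # contraction: all eigenvalues < 0
```
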

In general, for equations written in the form u′ = Au for appropriate A, a solution theory, that is, the proof of existence, uniqueness and continuous dependence on the data, is then contained in the construction of the fundamental solution (e.g., the C 0-semigroup) in terms of the ingredients of the equation. This infinite-dimensional generalisation of the ODE case proves to be versatile and has been applied to many different particular PDEs of the form u′ = Au.

Albeit quite successful, the abovementioned theories also have some drawbacks in their application. For particular PDEs either the considered methods are not applicable or their application necessitates more or less involved workarounds.

In the next section, we describe a particular problem for which invoking, for instance, semigroup theory would seem unnatural and not at all straightforward. It follows, however, the general scheme of looking at fundamental solutions in an infinite-dimensional context.

1.2 Time-independent Problems

The construction of fundamental solutions is also a valuable method for obtaining a solution for time-independent problems, see, e.g., [39]. To see this, let us consider Poisson’s equation in \(\mathbb {R}^{3}\): Given \(f\in C_{\mathrm {c}}^\infty (\mathbb {R}^{3})\) we want to find a function \(u\colon \mathbb {R}^{3}\to \mathbb {R}\) with the property that

$$\displaystyle \begin{aligned} -\Delta u(x)=f(x)\quad (x\in\mathbb{R}^{3}). \end{aligned}$$

It can be shown that u given by

$$\displaystyle \begin{aligned} u(x)=\frac{1}{4\pi}\int_{\mathbb{R}^{3}}\frac{1}{\lvert x-y \rvert }f(y)\,\mathrm{d} y \end{aligned}$$

is well-defined, twice continuously differentiable and satisfies Poisson’s equation; cf. Exercise 1.3. Note that \(x\mapsto \frac {1}{4\pi \lvert x \rvert }\) is also referred to as the fundamental solution or Green’s function for Poisson’s equation. The formula presented for u is the convolution with the fundamental solution. The formula used to define u also works for f being merely bounded and measurable with compact support. In this case, however, the pointwise formula of Poisson’s equation cannot be expected to hold anymore, since changing f on a set of measure 0 does not influence the values of u. Thus, only a posteriori estimates under additional conditions on f render u to be twice continuously differentiable (say) with Poisson’s equation holding for all \(x\in \mathbb {R}^{3}\). However, similar to the semigroup setting, it is possible to generalise the meaning of − Δu = f. Then, again, the fundamental solution can be used to construct a solution for Poisson’s equation for more general f.
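
The key computation behind this claim can be sketched as follows (a formal derivation; the careful version is the content of Exercise 1.3):

```latex
\begin{aligned}
u(x) &= \frac{1}{4\pi}\int_{\mathbb{R}^{3}}\frac{f(y)}{\lvert x-y\rvert}\,\mathrm{d}y
      \overset{z:=x-y}{=} \frac{1}{4\pi}\int_{\mathbb{R}^{3}}\frac{f(x-z)}{\lvert z\rvert}\,\mathrm{d}z,\\
\Delta u(x) &= \frac{1}{4\pi}\int_{\mathbb{R}^{3}}\frac{(\Delta f)(x-z)}{\lvert z\rvert}\,\mathrm{d}z
      = \lim_{\varepsilon\to 0}\frac{1}{4\pi}\int_{\lvert z\rvert\geqslant\varepsilon}\frac{(\Delta f)(x-z)}{\lvert z\rvert}\,\mathrm{d}z
      = -f(x),
\end{aligned}
```

where the last equality uses Green's identity on \(\{\lvert z\rvert\geqslant\varepsilon\}\) together with \(\Delta\frac{1}{\lvert z\rvert}=0\) for z ≠ 0; the surface terms over \(\{\lvert z\rvert=\varepsilon\}\) converge to \(-f(x)\) as ε → 0.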

The situation becomes different when we consider a boundary value problem instead of the problem above. More precisely, let \(\Omega \subseteq \mathbb {R}^{3}\) be an open set and let f ∈ L 2( Ω). We then ask whether there exists u ∈ L 2( Ω) such that

$$\displaystyle \begin{aligned} \begin{cases} -\Delta u =f, & \text{ on }\Omega,\\ \quad \;\; u = 0, & \text{ on }\partial\Omega. \end{cases} \end{aligned}$$

Notice that the task of just (mathematically) formulating this equation, let alone establishing a solution theory, is something that needs to be addressed. Indeed, we emphasise that it is unclear as to what Δu is supposed to mean if u ∈ L 2( Ω), only. It turns out that the problem described is not well-posed in general. In particular—depending on the shape of Ω and the norms involved—it might, for instance, lack continuous dependence on the data, f.

In any case, the solution formula that we have used for the case when \(\Omega =\mathbb {R}^{3}\) does not work anymore. Indeed, only particular shapes of Ω permit an explicit construction of a fundamental solution; see [39, Section 2.2]. Despite this, when Ω is merely bounded, it is still possible to construct a solution, u, for the above problem. There are two key ingredients required for this approach. One is a clever application of Riesz’s representation theorem for functionals in Hilbert spaces and the other one involves inventing ‘suitable’ interpretations of Δu in Ω and u = 0 on ∂Ω. Thus, the method of ‘solving’ Poisson’s equation amounts to posing the correct question, which then can be addressed without invoking the fundamental solution. With this in mind, one could argue that the setting makes the problem solvable.
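
Anticipating notions made precise later in the text, the Riesz argument can be sketched as follows; here \(H_{0}^{1}(\Omega)\) denotes the standard Sobolev space encoding u = 0 on ∂Ω in a generalised sense. The ‘weak’ formulation of the boundary value problem reads:

```latex
\text{find } u\in H_{0}^{1}(\Omega)\ \text{ such that }\quad
\int_{\Omega}\operatorname{grad} u\cdot\operatorname{grad} v
  = \int_{\Omega} f\,v
\quad\text{for all } v\in H_{0}^{1}(\Omega).
```

For bounded Ω, Poincaré's inequality implies that the left-hand side defines an inner product on \(H_{0}^{1}(\Omega)\), while \(v\mapsto\int_{\Omega}fv\) is a continuous linear functional; Riesz's representation theorem then yields a unique solution u.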

1.3 Evolutionary Equations

The central aim for evolutionary equations is to combine the rationales from both the C 0-semigroup theory and that from the time-independent case. That is to say, we wish to establish a setting that treats time-independent problems as well as time-dependent problems. At the same time we need to generalise solution concepts. We shall not aim to construct the fundamental solution in either the spatial or the temporal directions. The problem class will comprise problems that can be written in the form

$$\displaystyle \begin{aligned} \left(\partial_{t}M(\partial_{t})+A\right)U=F \end{aligned}$$

where U is the unknown and F the known right-hand side. Furthermore, A is an (unbounded, skew-selfadjoint) operator acting in some Hilbert space that is thought of as modelling spatial coordinates; \(\partial_{t}\) is a realisation of the (time-)derivative operator and \(M(\partial_{t})\) is an analytic, bounded operator-valued function M, which is evaluated at the time derivative. In the course of the next chapters, we shall specify the definitions and how standard problems fit into this problem class. In particular, we will specify the Hilbert spaces modelling space-time in which the above equation is considered.

Before going into greater depth on this approach, we would like to emphasise the key differences and similarities which arise when compared to the derivation of more traditional solution theories that we outlined above.

Since the solution theory for evolutionary equations will also encapsulate time-independent problems, we predominantly focus on inhomogeneous problems. In fact, the choice of Hilbert spaces implies implicit homogeneous initial conditions at t = −∞. However, inhomogeneous initial values at t = 0 will also be considered in this manuscript. In fact, it turns out that these initial value problems can be recast into problems of the above type.

In any case, as we do not want to require the existence of any fundamental solution, we will also need to introduce a generalisation of the concept of a solution. Moreover, we shall see that both \(\partial_{t}\) and A are unbounded operators whereas \(M(\partial_{t})\) is a bounded operator. Thus, we need to make sense of the operator sum of the two unbounded operators \(\partial_{t}M(\partial_{t})\) and A, which, in general, cannot be realised as being onto but rather as having dense range, only.

A post-processing procedure will then ensure that for more regular right-hand sides, F, the solution U will also be more regular. In some cases this will, for instance, amount to U being continuous in the time variable. We shall entirely confine ourselves within the Hilbert space case though. In this sense, the solution theory to be presented will be, in essence, an application of the projection theorem applied in a Hilbert space that combines both spatial and temporal variables.

The operator \(M(\partial_{t})\) is thought of as carrying all the ‘complexity’ of the model. What we mean by complexity will become more apparent when we discuss some examples.

Finally, let us stress that A being ‘skew-selfadjoint’ is a way of implementing first order systems in our abstract setting. In fact, we shall focus on first order equations in both time and space. This is also another change in perspective when compared to classical approaches. As classical treatments might emphasise the importance of the Laplacian (and hence Poisson’s equation) and variants thereof, evolutionary equations rather emphasise Maxwell’s equations as the prototypical PDE. This change of point of view will be illustrated in the following section, where we address some classical examples.

1.4 Particular Examples and the Change of Perspective

Here we will focus on three examples. These examples will also be the first to be readdressed when we discuss the solution theory of evolutionary equations in a later chapter. In order to simplify the current presentation we will not consider boundary value problems but solely concentrate on problems posed on \(\Omega =\mathbb {R}^{3}\). Furthermore, we shall dispense with any initial conditions. For a more detailed account of the derivation of these equations, we refer to the appendix of this manuscript.

Maxwell’s Equations

The prototypical evolutionary equation is the system provided by Maxwell’s equations. Maxwell’s equations consist of two equations describing an electro-magnetic field, (E, H), subject to a given external current, j,

$$\displaystyle \begin{aligned} \partial_{t}\varepsilon E+\sigma E-\operatorname{\mathrm{curl}} H & =j,\\ \partial_{t}\mu H+\operatorname{\mathrm{curl}} E & =0. \end{aligned} $$

We shall detail the properties of the material parameters ε, μ, and σ later on; for a definition of \( \operatorname {\mathrm {curl}}\) see Sect. 6.1. For the time being it is safe to assume that they are non-negative real numbers which additionally satisfy μ(ε + σ) > 0. Now, in the setting of evolutionary equations, we gather the electro-magnetic field into one column vector and obtain

$$\displaystyle \begin{aligned} \left(\partial_{t}\begin{pmatrix} \varepsilon & 0\\ 0 & \mu \end{pmatrix}+\begin{pmatrix} \sigma & 0\\ 0 & 0 \end{pmatrix}+\begin{pmatrix} 0 & -\operatorname{\mathrm{curl}}\\ \operatorname{\mathrm{curl}} & 0 \end{pmatrix}\right)\begin{pmatrix} E\\ H \end{pmatrix}=\begin{pmatrix} j\\ 0 \end{pmatrix}. \end{aligned}$$

We shall see later that we obtain an evolutionary equation by setting

$$\displaystyle \begin{aligned} M(\partial_{t})=\begin{pmatrix} \varepsilon & 0\\ 0 & \mu \end{pmatrix}+\partial_{t}^{-1}\begin{pmatrix} \sigma & 0\\ 0 & 0 \end{pmatrix},\quad A=\begin{pmatrix} 0 & -\operatorname{\mathrm{curl}}\\ \operatorname{\mathrm{curl}} & 0 \end{pmatrix},\quad U=\begin{pmatrix} E\\ H \end{pmatrix},\quad F=\begin{pmatrix} j\\ 0 \end{pmatrix}. \end{aligned}$$

A formulation that fits well into an infinite-dimensional ODE-setting would be, for example,

$$\displaystyle \begin{aligned} \partial_{t}\begin{pmatrix} E\\ H \end{pmatrix}=\begin{pmatrix} \varepsilon & 0\\ 0 & \mu \end{pmatrix}^{-1}\begin{pmatrix} -\sigma & \operatorname{\mathrm{curl}}\\ -\operatorname{\mathrm{curl}} & 0 \end{pmatrix}\begin{pmatrix} E\\ H \end{pmatrix}+\begin{pmatrix} \varepsilon & 0\\ 0 & \mu \end{pmatrix}^{-1}\begin{pmatrix} j\\ 0 \end{pmatrix}, \end{aligned}$$

provided that ε > 0. The inhomogeneous right-hand side \((\frac {1}{\varepsilon } j, 0)\) can then be dealt with by means of the variation of constants formula, which is the incarnation of the convolution of \((\frac {1}{\varepsilon } j, 0)\) with the fundamental solution in this time-dependent situation. Thus, in order to apply, for example, semigroup theory, the main task lies in showing that

$$\displaystyle \begin{aligned} \widetilde A=\begin{pmatrix} \varepsilon & 0\\ 0 & \mu \end{pmatrix}^{-1}\begin{pmatrix} -\sigma & \operatorname{\mathrm{curl}}\\ -\operatorname{\mathrm{curl}} & 0 \end{pmatrix} \end{aligned}$$

gives rise to a suitable interpretation of \((\mathrm {e}^{t\widetilde A})_{t\geqslant 0}\).
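
The variation of constants formula mentioned above can be tested in the finite-dimensional setting; the following sketch (our own example, with a constant right-hand side, where the convolution integral can also be evaluated in closed form) compares the closed form with a direct discretisation of the convolution.

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via its power series (adequate for small norms)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# u' = A u + f with constant f: variation of constants gives
#   u(t) = e^{tA} u0 + int_0^t e^{(t-s)A} f ds = e^{tA} u0 + A^{-1}(e^{tA} - I) f
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
u0 = np.array([1.0, 1.0])
f = np.array([0.5, 1.0])
t = 0.7

closed_form = expm(t * A) @ u0 + np.linalg.solve(A, (expm(t * A) - np.eye(2)) @ f)

# approximate the convolution integral with the trapezoidal rule
s = np.linspace(0.0, t, 2001)
integrand = np.stack([expm((t - si) * A) @ f for si in s])
ds = s[1] - s[0]
conv = ds * (integrand[0] / 2 + integrand[1:-1].sum(axis=0) + integrand[-1] / 2)
numeric = expm(t * A) @ u0 + conv

assert np.allclose(closed_form, numeric, atol=1e-6)
```
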

A different formulation needs to be put in place if ε = 0 everywhere. The situation becomes even more complicated if ε and σ are bounded, non-negative, measurable functions of the spatial variable such that \(\varepsilon +\sigma \geqslant c\) for some c > 0. In the setting of evolutionary equations, this problem, however, can be dealt with. Note that then one cannot expect E to be continuous with respect to the temporal variable unless j is smooth enough.

Wave Equation

We shall discuss the scalar wave equation in a medium where the wave propagation speed is inhomogeneous in different directions of space. This is modelled by finding \(u\colon \mathbb {R}\times \mathbb {R}^{3}\to \mathbb {R}\) such that, given a suitable forcing term \(f\colon \mathbb {R}\times \mathbb {R}^{3}\to \mathbb {R}\) (again we skip initial values here), we have

$$\displaystyle \begin{aligned} \partial_{t}^{2}u-\operatorname{\mathrm{div}} a\operatorname{\mathrm{grad}} u=f, \end{aligned}$$

where \(a=a^{\top }\in \mathbb {R}^{3\times 3}\) is positive definite; that is, \(\left \langle \xi ,a\xi \right \rangle _{\mathbb {R}^{3}}>0\) for all \(\xi \in \mathbb {R}^{3}\setminus \{0\}\). In the context of evolutionary equations, we rewrite this as a first order problem in time and space. For this, we introduce \(v:=\partial_{t}u\) and \(q:=-a\operatorname{\mathrm{grad}} u\) and obtain that

$$\displaystyle \begin{aligned} \begin{cases} \partial_{t}v+\operatorname{\mathrm{div}} q=f,\\ \partial_{t}a^{-1}q+\operatorname{\mathrm{grad}} v=0. \end{cases} \end{aligned}$$

Thus,

$$\displaystyle \begin{aligned} \left(\partial_{t}\begin{pmatrix} 1 & 0\\ 0 & a^{-1} \end{pmatrix}+\begin{pmatrix} 0 & \operatorname{\mathrm{div}}\\ \operatorname{\mathrm{grad}} & 0 \end{pmatrix}\right)\begin{pmatrix} v\\ q \end{pmatrix}=\begin{pmatrix} f\\ 0 \end{pmatrix} \end{aligned}$$

renders the wave equation as an evolutionary equation.

Let us mention briefly that it is also possible to rewrite the wave equation as a first order system in time only. For this, a standard ODE trick is used: one simply introduces the additional variable v = ∂t u and obtains that

$$\displaystyle \begin{aligned} \partial_{t}\begin{pmatrix} u\\ v \end{pmatrix}=\begin{pmatrix} 0 & 1\\ \operatorname{\mathrm{div}} a\operatorname{\mathrm{grad}} & 0 \end{pmatrix}\begin{pmatrix} u\\ v \end{pmatrix}+\begin{pmatrix} 0\\ f \end{pmatrix}. \end{aligned}$$

In this formulation the ‘complexity’ of the model is contained in the operator

$$\displaystyle \begin{aligned} \begin{pmatrix} 0 & 1\\ \operatorname{\mathrm{div}} a\operatorname{\mathrm{grad}} & 0 \end{pmatrix}. \end{aligned}$$

Heat Equation

We have already formulated classical approaches to the heat equation

$$\displaystyle \begin{aligned} \partial_{t}\theta-\operatorname{\mathrm{div}} a\operatorname{\mathrm{grad}}\theta=Q, \end{aligned}$$

in which we have added a heat source Q and a conductivity \(a=a^{\top }\in \mathbb {R}^{3\times 3}\) being positive definite. Here, however, we reformulate the heat equation as a first order system in time and space to end up (again setting \(q:=-a\operatorname{\mathrm{grad}}\theta\)) with

$$\displaystyle \begin{aligned} \begin{cases} \partial_{t}\theta+\operatorname{\mathrm{div}} q=Q,\\ a^{-1}q+\operatorname{\mathrm{grad}}\theta=0. \end{cases} \end{aligned}$$

In the context of evolutionary equations we then have that

$$\displaystyle \begin{aligned} \left(\partial_{t}\begin{pmatrix} 1 & 0\\ 0 & 0 \end{pmatrix}+\begin{pmatrix} 0 & 0\\ 0 & a^{-1} \end{pmatrix}+\begin{pmatrix} 0 & \operatorname{\mathrm{div}}\\ \operatorname{\mathrm{grad}} & 0 \end{pmatrix}\right)\begin{pmatrix} \theta\\ q \end{pmatrix}=\begin{pmatrix} Q\\ 0 \end{pmatrix}. \end{aligned}$$

The advantage of this reformulation is that it becomes easily comparable to the first order formulation of the wave equation outlined above. For instance it is now possible to easily consider mixed type problems of the form

$$\displaystyle \begin{aligned} \left(\partial_{t}\begin{pmatrix} 1 & 0\\ 0 & s\,a^{-1} \end{pmatrix}+\begin{pmatrix} 0 & 0\\ 0 & (1-s)a^{-1} \end{pmatrix}+\begin{pmatrix} 0 & \operatorname{\mathrm{div}}\\ \operatorname{\mathrm{grad}} & 0 \end{pmatrix}\right)\begin{pmatrix} u\\ q \end{pmatrix}=\begin{pmatrix} f\\ 0 \end{pmatrix} \end{aligned}$$

with \(s\colon \mathbb {R}^{3}\to [0,1]\) being an arbitrary measurable function. In fact, in the solution theory for evolutionary equations, this does not amount to any additional complication of the problem. Models of this type are particularly interesting in the context of so-called solid-fluid interaction, where the relations of a solid body and a flow of fluid surrounding it are addressed.

1.5 A Brief Outline of the Course

We now present an overview of the contents of the following chapters.

Basics

In order to properly set the stage, we shall begin with some background of operator theory in Banach and Hilbert spaces. We assume the reader to be acquainted with some knowledge on bounded linear operators, such as the uniform boundedness principle, and basic concepts in the topology of metric spaces, such as density and closure. The most important new material will be the adjoint of an operator, which need not be bounded anymore. In order to deal with this notion, we will consider relations rather than operators as they provide the natural setting for unbounded operators. Having finished this brief detour on operator theory, we will turn to a generalisation of Lebesgue spaces. More precisely, we will survey ideas from Lebesgue’s integration theory for functions attaining values in an infinite-dimensional Banach space.

The Time Derivative

Banach space-valued (or rather Hilbert space-valued) integration theory will play a fundamental role in defining the time derivative as an unbounded, continuously invertible operator in a suitable Hilbert space. In order to obtain continuous invertibility, we have to introduce an exponential weighting function, which is akin to the exponential weight introduced in the space of continuous functions for a proof of the Picard–Lindelöf theorem; that is, the unique existence theorem for solutions for ODEs. It is therefore natural to discuss the application of this operator to ODEs. Hence, in passing, we will present a Hilbert space solution theory for ordinary differential equations. Here, we will also have the opportunity to discuss ordinary differential equations with delay and memory. After this short detour, we will turn back to the time derivative operator and describe its spectrum. For this we introduce the so-called Fourier–Laplace transformation which transforms the time derivative into a multiplication operator. This unitary transformation will additionally serve to define (analytic and bounded) functions of the time derivative. This is absolutely essential for the formulation of evolutionary equations.
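
The effect of the exponential weight can be illustrated numerically. Assuming (as detailed later in the text) the weighted norm \(\lVert f\rVert_{\rho}^{2}=\int\mathrm{e}^{-2\rho t}\lvert f(t)\rvert^{2}\,\mathrm{d}t\) with ρ > 0, the anti-derivative operator, i.e. the inverse of the time derivative, should have operator norm at most 1∕ρ. A sketch (our own toy example with an indicator function):

```python
import numpy as np

# Weighted space L^2_rho: ||f||^2 = int e^{-2 rho t} |f(t)|^2 dt.
# The anti-derivative F(t) = int_{-infty}^t f(s) ds should satisfy
# ||F|| <= (1/rho) ||f|| on this space.
rho = 2.0
t = np.linspace(0.0, 30.0, 300001)    # the weight makes the tail negligible
w = np.exp(-2.0 * rho * t)

f = ((0.0 <= t) & (t <= 1.0)).astype(float)  # f = indicator of [0, 1]
F = np.clip(t, 0.0, 1.0)                     # its anti-derivative

dt = t[1] - t[0]
norm2 = lambda g: np.sum(w * g**2) * dt      # crude Riemann sum for ||.||^2
assert norm2(F) <= (1.0 / rho**2) * norm2(f) + 1e-9
```
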

Evolutionary Equations

Having finished the necessary preliminary work, we will then be in a position to provide the proper justification of the formulation and solution theory for evolutionary equations. We will accompany this solution theory not only with the three leading examples from above, but also with some more sophisticated equations. Amazingly, the considered space-time setting will allow us to discuss (time-)fractional differential equations, partial differential equations with delay terms and even a class of integro-differential equations. Withdrawing the focus on regularity with respect to the temporal variable, we are en passant able to generalise well-posedness conditions from the classical literature. However, we shall stick to the treatment of analytic operator-valued functions M only. Therefore, we will also include some arguments as to why this assumption seems to be physically meaningful. It will turn out that analyticity and causality are intimately related via both the so-called Paley–Wiener theorem and a representation theorem for time translation invariant causal operators.

Initial Value Problems for Evolutionary Equations

As has been outlined above, the focus of evolutionary equations is on inhomogeneous right-hand sides rather than on initial value problems. However, there is also the possibility to treat initial value problems with the approach discussed here. For this, we need to introduce extrapolation spaces. This then enables us to formulate initial value problems as inhomogeneous equations. We have to make a concession on the structure of the problem, however. In fact, we will focus on the case when \(M(\partial _{t})=M_{0}+\partial _{t}^{-1}M_{1}\) for some bounded linear operators M 0, M 1 acting in the spatial variables alone. The initial condition will then read as \((M_{0}U)(0{\scriptstyle +})=M_{0}U_{0}\). Hence, one might argue that the initial condition is only assumed in a rather generalised sense. This is due to the fact that M 0 might be zero. However, for the case A = 0 we will also discuss more classical initial conditions, which amounts to a treatment of so-called differential-algebraic equations in both finite- and infinite-dimensional state spaces.

Properties of Solutions and Inhomogeneous Boundary Value Problems

Turning back to the case when A ≠ 0, we will discuss qualitative properties of solutions of evolutionary equations, one of which is exponential decay. We will identify a subclass of evolutionary equations for which it is comparatively easy to show that if the right-hand side decays exponentially then so too must the solution. If the right-hand side is smooth enough we obtain that U(t), the solution of the evolutionary equation at time t, decays exponentially as t →∞. Furthermore, we will frame inhomogeneous boundary value problems in the setting of evolutionary equations. The method will require a bit more of the regularity theory for evolutionary equations and a definition of suitable boundary values. In particular, we shall present a way of formulating classical inhomogeneous boundary value problems for domains without any boundary regularity.

Properties of the Solution Operator and Extensions

In the final part, we shall have another look at the advantages of the problem formulation. In fact, we will have a look at the notion of homogenisation of differential equations. In the problem formulation presented here, we shall analyse the continuity properties of the solution operator with respect to convergence of the operators \(M(\partial_{t})\) in the weak operator topology. We will address an example for ordinary differential equations (when A = 0) and one for partial differential equations (when A ≠ 0). It will turn out that the respective continuity properties are profoundly different from one another.

Furthermore, we have the occasion to address the notion of ‘maximal regularity’ in the context of evolutionary equations. Maximal regularity was initially coined for parabolic-type problems like the heat equation. It turns out that evolutionary equations have a property similar to maximal regularity if one assumes the block structure of \(M(\partial_{t})\) and A to satisfy certain requirements. These requirements lead to a subclass of evolutionary equations containing classical parabolic type equations. We conclude the body of the text with two extensions of Picard’s theorem. The first of these addresses non-autonomous problems and the second non-linear evolutionary inclusions.

1.6 Comments

The focus presented here on the main notions behind evolutionary equations is mostly in order to properly motivate the theory and highlight the most striking differences in the philosophy. There are other solution concepts (and corresponding general settings) developed for partial differential equations; either time-dependent or without involving time.

There is an abundance of examples and additional concepts for C 0-semigroups for which we refer to the aforementioned standard treatments again. There is also a generalisation to problems that are second order in time, e.g., u″ = Au, where u(0) and u′(0) are given. This gives rise to cosine families of bounded linear operators, which are another way of generalising the fundamental solution concept; see, for example, [107].

The main focus of all of these equations is to address initial value problems, where the (first/second) time derivative of the unknown is explicit.

Another way of writing many PDEs from mathematical physics into a common form uses the notion of Friedrichs systems, see [43, 44]. However, the main focus of Friedrichs systems is on static, that is, time-independent partial differential equations. A time-dependent variant of constant coefficient Friedrichs systems are so-called symmetric-hyperbolic systems, see e.g. [12]. In these cases, whether the authors treat constant coefficients or not, the framework of evolutionary equations adds a profound amount of additional complexity by including the operator \(M(\partial_{t})\).

The treatment of time-dependent problems in space-time settings and addressing corresponding well-posedness properties of a sum of two unbounded operators has also been considered in [26] with elaborate conditions on the operators involved. In their studies, the flexibility introduced by the operator \(M(\partial_{t})\) in our setting is missing; thus the time derivative operator is not thought of as having any variable coefficients attached to it.

Exercises

Exercise 1.1

Let \(\phi \in C(\mathbb {R},\mathbb {R})\). Assume that ϕ(t + s) = ϕ(t)ϕ(s) for all \(t,s\in \mathbb {R}\), ϕ(0) = 1. Show that ϕ(t) = eαt (\(t\in \mathbb {R}\)) for some \(\alpha \in \mathbb {R}\).

Exercise 1.2

Let \(n\in \mathbb {N}\), \(T\colon \mathbb {R}\to \mathbb {R}^{n\times n}\) continuously differentiable such that T(t + s) = T(t)T(s) for all \(t,s\in \mathbb {R}\), T(0) = I. Show that there exists \(A\in \mathbb {R}^{n\times n}\) with the property that T(t) = etA (\(t\in \mathbb {R}\)).

Exercise 1.3

Show that \(x\mapsto u(x)=\frac {1}{4\pi }\int _{\mathbb {R}^{3}}\frac {1}{\lvert x-y \rvert }f(y)\,\mathrm {d} y\) satisfies Poisson’s equation, given \(f\in C_{\mathrm {c}}^\infty (\mathbb {R}^{3})\).

Exercise 1.4

Let \(f\in C_{\mathrm {c}}^\infty (\mathbb {R})\). Define \(u(t,x):=f(t+x)\) for \(x,t\in \mathbb {R}\). Show that u satisfies the differential equation \(\partial_{t}u=\partial_{x}u\) and u(0, x) = f(x) for all \(x\in \mathbb {R}\).

Exercise 1.5

Let X, Y  be Banach spaces, \((T_{n})_{n\in \mathbb {N}}\) be a sequence in L(X, Y ), the set of bounded linear operators. If \(\sup \left \{ \left \Vert T_{n} \right \Vert \,;\, n\in \mathbb {N} \right \}=\infty ,\) show that there is x ∈ X and a strictly increasing sequence \((n_{k})_{k \in \mathbb {N}}\) in \(\mathbb {N}\) such that \(\left \Vert T_{n_{k}}x \right \Vert \to \infty .\)

Exercise 1.6

Let \(n\in \mathbb {N}\). Denote by \(\mathrm {GL}(n;\mathbb {K})\) the set of continuously invertible n × n matrices. Show that \(\mathrm {GL}(n;\mathbb {K})\subseteq \mathbb {K}^{n\times n}\) is open.

Exercise 1.7

Let \(n\in \mathbb {N}\). Show that \(\Phi \colon \mathrm {GL}(n;\mathbb {K})\ni A\mapsto A^{-1}\in \mathbb {K}^{n\times n}\) is continuously differentiable. Compute its derivative Φ′.