Suitable Spaces for Shape Optimization

The differential-geometric structure of certain shape spaces is investigated and applied to the theory of shape optimization problems constrained by partial differential equations and variational inequalities. Furthermore, we define a diffeological structure on a new space of so-called $H^{1/2}$-shapes. This can be seen as a first step towards the formulation of optimization techniques on diffeological spaces. The $H^{1/2}$-shapes are a generalization of smooth shapes and arise naturally in shape optimization problems.


Introduction
Shape optimization is of great importance in a wide range of applications. A lot of real world problems can be reformulated as shape optimization problems which are constrained by variational inequalities (VIs) or partial differential equations (PDEs) which sometimes arise from simplified VIs. Aerodynamic shape optimization [44], acoustic shape optimization [45], optimization of interfaces in transmission problems [17,42], image restoration and segmentation [20], electrochemical machining [19] and inverse modelling of skin structures [41] can be mentioned as examples. The subject of shape optimization is covered by several fundamental monographs, see, for instance, [12,53]. In contrast to a finite dimensional optimization problem, which can be obtained, e.g., by representing shapes as splines, the connection of shape calculus with infinite dimensional spaces [12,25,53] leads to a more flexible approach. In recent work, it has been shown that PDE constrained shape optimization problems can be embedded in the framework of optimization on shape spaces. E.g., in [48], shape optimization is considered as optimization on a Riemannian shape manifold. However, this particular manifold contains only shapes with infinitely differentiable boundaries, which limits the practical applicability.
Questions like "How can shapes be defined?" or "What does the set of all shapes look like?" have been extensively studied in recent decades. Already in 1984, David G. Kendall introduced the notion of a shape space in [27]. Often, a shape space is modelled as a linear (vector) space, which in the simplest case is made up of vectors of landmark positions (cf. [11,27,52]). However, there is a large number of different shape concepts, e.g., plane curves [39,40], surfaces in higher dimensions [3,28,37], boundary contours of objects [16,33,61], multiphase objects [60], characteristic functions of measurable sets [62] and morphologies of images [13]. In a lot of processes in engineering, medical imaging and science, there is great interest in equipping the space of all shapes with a significant metric to distinguish between different shape geometries. In the simplest shape space case (landmark vectors), the distances between shapes can be measured by the Euclidean distance, but in general, the study of shapes and their similarities is a central problem. In order to tackle natural questions like "How different are shapes?", "Can we determine the measure of their difference?" or "Can we infer some information?" mathematically, we have to put a metric on the shape space. There are various types of metrics on shape spaces, e.g., inner metrics [3,39], outer metrics [5,27,39], metamorphosis metrics [23,58], the Wasserstein or Monge-Kantorovich metric on the shape space of probability measures [2,6], the Weil-Petersson metric [31,50], current metrics [14] and metrics based on elastic deformations [16,61]. However, it is a challenging task to model both the shape space and the associated metric. There does not exist a common shape space or shape metric suitable for all applications. Different approaches lead to diverse models, and the suitability of an approach depends on the requirements in a given situation.
In the setting of VI or PDE constrained shape optimization, one has to deal with polygonal shape representations from a computational point of view. This is because finite element (FE) methods are usually used to discretize the models. In [49], an inner product, which is called Steklov-Poincaré metric, and a suitable shape space for the application of FE methods are proposed. The combination of this particular shape space and its associated inner product is an essential step towards applying efficient FE solvers as outlined in [51]. However, so far, this shape space and its properties have not been investigated, and there are a lot of open questions about it. From a theoretical point of view, it is necessary to clarify its structure: if we do not know the structure, there is no chance to get control over the space. One of the main aims of this paper is to show that this shape space has a diffeological structure.
This paper is organized as follows. In Section 2, besides a short overview of basic concepts in shape optimization, the connection of shape calculus with geometric concepts of shape spaces is stated. Section 3 is concerned with the space of so-called $H^{1/2}$-shapes and provides it with a diffeological structure.

Optimization in shape spaces
First, we set up notation and terminology of basic shape optimization concepts (Subsection 2.1). For a detailed introduction into shape calculus, we refer to the monographs [12,53]. Afterwards, shape calculus is combined with geometric concepts of shape spaces (Subsection 2.2).

Basic concepts in shape optimization
One of the main focuses of shape optimization is to investigate shape functionals and solve shape optimization problems. First, we give the definition of a shape functional.
Definition 2.1 (Shape functional). Let D ⊂ R^d, d ∈ N, be a non-empty open set and let A ⊂ {Ω : Ω ⊂ D} denote a set of admissible shapes. A shape functional is a mapping J : A → R, Ω ↦ J(Ω).

Shape optimization problems are often constrained by PDEs or VIs. Elliptic PDEs require boundary conditions, e.g., Dirichlet or Neumann conditions, parabolic problems require Dirichlet or Neumann boundary conditions combined with initial conditions and hyperbolic problems require Cauchy boundary conditions. Some PDEs arise from simplified VIs. Thus, we get shape optimization problems of the following form:

min_Ω J(Ω, y)  (2.2)
s.t. a(y, v − y) + φ_Ω(v) − φ_Ω(y) ≥ ⟨f_Ω, v − y⟩_{V*_Ω × V_Ω} for all v ∈ V_Ω.  (2.3)

Here the set Ω ⊂ R^d is assumed to be open and connected. Moreover, V_Ω denotes a Hilbert space with dual space V*_Ω and ⟨·, ·⟩_{V*_Ω × V_Ω} is the duality pairing. It is assumed that a : V_Ω × V_Ω → R is a symmetric and continuous bilinear form, φ_Ω : V_Ω → R is a proper convex function, y ∈ V_Ω and f_Ω ∈ V*_Ω. For a parabolic problem, a time derivative term has to be added in (2.3) in analogy to [25, Chapter 10].
Remark 2.2. The problem class (2.2)-(2.3) is very challenging because of the necessity to operate in inherently non-linear and non-convex shape spaces. In classical VIs, there is no explicit dependence on the domain; in (2.2)-(2.3) the domain itself is the optimization variable, which adds an unavoidable source of non-linearity and non-convexity due to the non-linear and non-convex nature of shape spaces.
Let D be as in the above definition. Moreover, let {F_t}_{t∈[0,T]} be a family of mappings F_t : D̄ → R^d such that F_0 = id, where D̄ denotes the closure of D and T > 0. This family transforms the domain Ω into new perturbed domains Ω_t := F_t(Ω) = {F_t(x) : x ∈ Ω} with Ω_0 = Ω and the boundary Γ of Ω into new perturbed boundaries Γ_t := F_t(Γ) = {F_t(x) : x ∈ Γ} with Γ_0 = Γ. Such a transformation can be described by the velocity method or by the perturbation of identity. We concentrate on the perturbation of identity, which is defined by F_t(x) := x + tV(x), where V denotes a sufficiently smooth vector field.
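As an illustration, the perturbation of identity can be applied numerically to a discretized boundary. The following Python snippet is a minimal sketch: the node count and the concrete vector field V are our own illustrative choices, not taken from the references.

```python
import numpy as np

# Discretize the unit circle S^1 by N nodes (a polygonal shape boundary).
N = 200
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
c = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # shape (N, 2)

def V(x):
    """A smooth vector field on R^2, chosen purely for illustration."""
    return np.stack([0.3 * np.sin(x[:, 1]), 0.1 * x[:, 0] ** 2], axis=1)

def perturb_identity(x, t):
    """Perturbation of identity: F_t(x) = x + t * V(x)."""
    return x + t * V(x)

c0 = perturb_identity(c, 0.0)   # F_0 = id recovers the original boundary
ct = perturb_identity(c, 0.05)  # a perturbed boundary Gamma_t
```

For t = 0 the original boundary is recovered exactly, while small t > 0 yields a nearby perturbed boundary, matching F_0 = id above.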
When J in (2.1) depends on a solution of a PDE, we call the shape optimization problem PDE constrained. Shape optimization problems of the form (2.2)-(2.3) are called VI constrained. To solve PDE or VI constrained shape optimization problems, we need their shape derivatives.
Definition 2.3 (Shape derivative). The Eulerian derivative of a shape functional J at Ω in direction V ∈ C^k_0(D, R^d) is defined by

DJ(Ω)[V] := lim_{t→0+} (J(Ω_t) − J(Ω)) / t.  (2.4)

If for all directions V ∈ C^k_0(D, R^d) the Eulerian derivative (2.4) exists and the mapping G(Ω) : C^k_0(D, R^d) → R, V ↦ DJ(Ω)[V] is linear and continuous, the expression DJ(Ω)[V] is called the shape derivative of J at Ω in direction V. In this case, J is called shape differentiable of class C^k at Ω.
There are several approaches to prove shape differentiability of shape functionals which depend on a solution of a PDE or VI and to derive the shape derivative of a shape optimization problem. The min-max approach [12], the chain rule approach [53], the Lagrange method of Céa [8] and the rearrangement method [26] have to be mentioned in this context. A nice overview of these approaches is given in [57]. Note that the approach of Céa is frequently used to derive shape derivatives, but it gives no proof of shape differentiability by itself. Indeed, there are cases where the method of Céa fails (cf. [43,56]).
Among other things, the Hadamard Structure Theorem (cf. [53, Theorem 2.27]) states that only the normal part of a vector field on the boundary has an impact on the value of the shape derivative. In many cases, the shape derivative arises in two equivalent notational forms:

DJ_vol(Ω)[V] = ∫_Ω (RV)(x) dx  (volume formulation)  (2.5)
DJ_surf(Ω)[V] = ∫_Γ r(s) ⟨V(s), n(s)⟩ ds  (surface formulation)  (2.6)

Here r ∈ L¹(Γ) and R is a differential operator acting linearly on the vector field V. Recent progress in PDE constrained optimization on shape manifolds is based on the surface formulation (2.6), also called Hadamard-form, as well as intrinsic shape metrics. Major effort in shape calculus has been devoted to such surface expressions (cf. [12,53]), which are often very tedious to derive. Along the way, volume formulations appear as an intermediate step. Recently, it has been shown that this intermediate formulation has numerical advantages, see, for instance, [7,17,21,42]. In [32], practical advantages of volume shape formulations have also been demonstrated: e.g., they require less smoothness assumptions. Furthermore, the derivation as well as the implementation of volume formulations require less manual and programming work. However, volume integral forms of shape derivatives require an outer metric on the domain surrounding the shape boundary. In [49], both points of view are harmonized by deriving a metric from an outer metric. Based on this metric, efficient shape optimization algorithms, which also reduce the analytical effort so far involved in the derivation of shape derivatives, are proposed in [49,51,59]. The next subsection explains how shape calculus and in particular shape derivatives can be combined with geometric concepts of shape spaces. This combination results in efficient optimization techniques in shape spaces.

Shape calculus combined with geometric concepts of shape spaces
As pointed out in [46], shape optimization can be viewed as optimization on Riemannian shape manifolds and the resulting optimization methods can be constructed and analyzed within this framework. This combines algorithmic ideas from [1] with the Riemannian geometrical point of view established in [3]. In this subsection, we analyze the connection of Riemannian geometry on the space of smooth shapes to shape optimization.

The space of smooth shapes
We first concentrate on two-dimensional shapes. In this section, a shape of dimension two is defined as a simply connected and compact set Ω ⊂ R² with C^∞-boundary Γ. Since the boundary of an object or a shape is all that matters, we can think of two-dimensional shapes as the images of simple closed smooth curves in the plane. Such simple closed smooth curves can be represented by embeddings from the unit circle S¹ into the plane R², see, for instance, [30]. Therefore, the set of all embeddings from S¹ into R², denoted by Emb(S¹, R²), represents all simple closed smooth curves in R². However, note that we are only interested in the shape itself and that images are not changed by re-parametrizations. Thus, all simple closed smooth curves which differ only by re-parametrizations can be considered equal to each other because they lead to the same image. Let Diff(S¹) denote the set of all diffeomorphisms from S¹ into itself. This set is a regular Lie group (cf. [29, Chapter VIII, 38.4]) and consists of all the smooth re-parametrizations mentioned above. In [38], the set of all two-dimensional shapes is characterized by

B_e(S¹, R²) := Emb(S¹, R²) / Diff(S¹),  (2.7)

i.e., the orbit space of Emb(S¹, R²) under the action by composition from the right by the Lie group Diff(S¹). A particular point on B_e(S¹, R²) is represented by a curve c : S¹ → R², θ ↦ c(θ) and illustrated in the left picture of Figure 1b. The tangent space is isomorphic to the set of all smooth normal vector fields along c, i.e.,

T_c B_e(S¹, R²) ≅ {h : h = αn, α ∈ C^∞(S¹)},  (2.8)

where n denotes the exterior unit normal field to the shape boundary c such that n(θ) ⊥ c_θ(θ) for all θ ∈ S¹, where c_θ = ∂c/∂θ denotes the circumferential derivative as in [38]. Since we are dealing with parametrized curves, we have to work with the arc length and its derivative. Therefore, we use the notation ds = |c_θ| dθ for integration with respect to arc length and D_s = (1/|c_θ|) ∂/∂θ for the arc length derivative. In [29], it is proven that the shape space B_e(S¹, R²) is a smooth manifold. Can it, moreover, be equipped with a Riemannian structure, i.e., is it even a Riemannian shape manifold?
This question was investigated by Peter W. Michor and David Mumford. They show in [38] that the standard L²-metric on the tangent space is too weak because the geodesic distance it induces is identically zero. This phenomenon is called the vanishing geodesic distance phenomenon. The authors employ a curvature weighted L²-metric as a remedy and prove that the vanishing phenomenon does not occur for this metric. Several Riemannian metrics on this shape space are examined in further publications, e.g., [3,37,39]. All these metrics arise from the L²-metric by putting weights, derivatives or both into it. In this manner, we get three groups of metrics: the almost local metrics, which arise by putting weights into the L²-metric (cf. [4,39]), the Sobolev metrics, which arise by putting derivatives into the L²-metric (cf. [3,39]), and the weighted Sobolev metrics, which arise by putting both weights and derivatives into the L²-metric (cf. [4]). It can be shown that, under special assumptions, none of these metrics induces the phenomenon of vanishing geodesic distance. Listing all of them goes beyond the scope of this paper, but they can be found in the above-mentioned publications. All Riemannian metrics mentioned above are inner metrics. This means that the metric is defined directly on the deformation vector field such that the deformation is prescribed on the shape itself and the ambient space stays fixed.
In the following, we clarify how the above-mentioned inner Riemannian metrics can be defined on the shape space B_e(S¹, R²). The important point to note here is that we want to define an inner metric. This means that we have to define a Riemannian metric on the space Emb(S¹, R²). A Riemannian metric on Emb(S¹, R²) is a family g = (g_c(h, k))_{c ∈ Emb(S¹,R²)} of inner products g_c(h, k), where h and k denote vector fields along c ∈ Emb(S¹, R²). The simplest inner product on the tangent bundle of Emb(S¹, R²) is the L²-metric

g⁰_c(h, k) = ∫_{S¹} ⟨h(θ), k(θ)⟩ |c_θ(θ)| dθ.  (2.9)

This metric is invariant under re-parametrizations, and a tangent vector h ∈ T_c Emb(S¹, R²) has an orthogonal decomposition into a smooth tangential component h^⊤ and a normal component h^⊥ (cf. [38, Section 3, 3.2]). In particular, h^⊥ is an element of the bundle of tangent vectors which are normal to the Diff(S¹)-orbits, denoted by N_c. This normal bundle is well defined and is a smooth vector subbundle of the tangent bundle. In [38], it is outlined how the restriction of the metric g_c to the subbundle N_c gives the quotient metric. The quotient metric induced by the L²-metric is given by

g⁰_c(h, k) = ∫_{S¹} α(θ) β(θ) |c_θ(θ)| dθ,  (2.10)

where h = αn and k = βn denote two elements of the tangent space T_c B_e(S¹, R²) given in (2.8). Unfortunately, in [38], it is shown that this L²-metric induces vanishing geodesic distance, as already mentioned above. For the following discussion, among all the above-mentioned Riemannian metrics, we pick the first Sobolev metric defined in the sequel.
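The quotient L²-metric can be approximated on a polygonal curve. The following Python sketch (our own discretization, not taken from [38]) computes g⁰_c(αn, βn) = ∫_{S¹} αβ |c_θ| dθ with periodic central differences for c_θ; for the unit circle with α = β = 1 it recovers the curve length 2π up to discretization error.

```python
import numpy as np

def l2_quotient_metric(alpha, beta, c):
    """Discrete L2-metric g0_c(alpha n, beta n) = int_{S^1} alpha*beta*|c_theta| dtheta
    for a closed polygonal curve c of shape (N, 2)."""
    N = len(c)
    dtheta = 2.0 * np.pi / N
    # periodic central differences approximate the circumferential derivative c_theta
    c_theta = (np.roll(c, -1, axis=0) - np.roll(c, 1, axis=0)) / (2.0 * dtheta)
    speed = np.linalg.norm(c_theta, axis=1)  # |c_theta|
    return np.sum(alpha * beta * speed) * dtheta

theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
# On the unit circle |c_theta| = 1, so g0(n, n) equals the length 2*pi.
val = l2_quotient_metric(np.ones_like(theta), np.ones_like(theta), circle)
```

The discretization error stems from the central-difference approximation of |c_θ| and vanishes as the node count grows.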
Definition 2.4 (Sobolev metric). Let n ∈ N*. The n-th Sobolev metric is given by

g^n_c(h, k) = ∫_{S¹} ⟨h, k⟩ + A ⟨D^n_s h, D^n_s k⟩ ds,  (2.11)

where A > 0 denotes the metric parameter.
In particular, due to Definition 2.4 and the isomorphism (2.8) of the tangent space T_c B_e(S¹, R²), we can define the first Sobolev metric on B_e(S¹, R²).

Definition 2.5 (Sobolev metric on B_e(S¹, R²)). The first Sobolev metric on B_e(S¹, R²) is given by

g¹_c(h, k) = ∫_{S¹} ((id − A △_c) α)(θ) β(θ) |c_θ(θ)| dθ,  (2.12)

where h = αn, k = βn denote two elements of the tangent space T_c B_e(S¹, R²), A > 0 and △_c denotes the Laplace-Beltrami operator on the curve c.
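The first Sobolev metric in the form g¹(h, k) = ∫ αβ + A (D_s α)(D_s β) ds can likewise be approximated on a polygonal curve. The following Python sketch is our own discretization for illustration; the quadrature and difference stencils are not taken from the references.

```python
import numpy as np

def sobolev_g1(alpha, beta, c, A=0.1):
    """Discrete first Sobolev metric
    g1(alpha n, beta n) = int_{S^1} alpha*beta + A*(D_s alpha)(D_s beta) ds
    on a closed polygonal curve c of shape (N, 2); D_s = (1/|c_theta|) d/dtheta."""
    N = len(c)
    dtheta = 2.0 * np.pi / N
    c_theta = (np.roll(c, -1, axis=0) - np.roll(c, 1, axis=0)) / (2.0 * dtheta)
    speed = np.linalg.norm(c_theta, axis=1)   # |c_theta|
    ds = speed * dtheta                       # arc length element
    Ds = lambda f: (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dtheta) / speed
    return np.sum((alpha * beta + A * Ds(alpha) * Ds(beta)) * ds)

theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
# For alpha = beta = cos on the unit circle: int cos^2 ds = pi and
# D_s cos = -sin gives int A*sin^2 ds = A*pi, so the value is pi*(1 + A).
val_g1 = sobolev_g1(np.cos(theta), np.cos(theta), circle, A=0.1)
```

Raising the derivative order in the second term yields the higher-order Sobolev metrics of Definition 2.4 in the same way.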
An essential operation in Riemannian geometry is the covariant derivative. In differential geometry, it is often written in terms of the Christoffel symbols. In [3], Christoffel symbols associated with the Sobolev metrics are provided. However, in order to provide a relation with shape calculus, another representation of the covariant derivative in terms of the Sobolev metric g 1 is needed. The Riemannian connection provided by the following theorem makes it possible to specify the Riemannian shape Hessian.
Theorem 2.6. The operator L₁ := id − A D²_s is a differential operator on C^∞(S¹, R²) and L₁⁻¹ denotes its inverse operator. The covariant derivative associated with the Sobolev metric g¹ can be expressed in terms of L₁ and L₁⁻¹ as in (2.13), where v = c_θ / |c_θ| denotes the unit tangent vector.
Proof. Let h, k, m be vector fields on R² along c ∈ Emb(S¹, R²). Moreover, d(·)[m] denotes the directional derivative in direction m. By [38], identity (2.18) holds. Since the differential operator D_s is anti-self-adjoint with respect to the L²-metric g⁰, i.e.,

∫_{S¹} ⟨D_s h, k⟩ ds = −∫_{S¹} ⟨h, D_s k⟩ ds,

the corresponding identity for g¹ holds. We proceed analogously to the proof of Theorem 2.1 in [46], which exploits the product rule for Riemannian connections. Thus, we conclude that the covariant derivative associated with g¹ is given by (2.13). □

Remark 2.7. As stated in [39], the inverse operator L₁⁻¹ is an integral operator whose kernel has an expression in terms of the arc length distance between two points on a curve and their unit normal vectors. For the existence of and more details about L₁⁻¹ we refer to [39].
For the sake of completeness it should be mentioned that the shape space B_e(S¹, R²) and its theoretical results can be generalized to higher dimensions. Let M be a compact manifold and let N denote a Riemannian manifold with dim(M) < dim(N). In [37], the space of all submanifolds of type M in N is defined by

B_e(M, N) := Emb(M, N) / Diff(M).

In Figure 1a, the left picture illustrates a three-dimensional shape which is an element of the shape space B_e(S², R³). In contrast, the right shape in this figure is a three-dimensional shape which is not an element of this shape space. Note that the vanishing geodesic distance phenomenon also occurs for the L²-metric in higher dimensions, as verified in [37]. For the definition of the Sobolev metric g¹ in higher dimensions we refer to [3].

Optimization based on Sobolev metrics
We consider the Sobolev metric g¹ on the shape space B_e. The Riemannian connection with respect to this metric, which is given in Theorem 2.6, makes it possible to specify the Riemannian shape Hessian of an optimization problem. First, however, we have to detail the Riemannian shape gradient. Due to the Hadamard Structure Theorem, there exists a scalar distribution r on the boundary Γ of the domain Ω under consideration. If we assume r ∈ L¹(Γ), the shape derivative can be expressed on the boundary Γ of Ω (cf. (2.6)). The distribution r is often called the shape gradient. However, note that gradients always depend on the scalar product chosen on the space under consideration. Thus, it rather means that r is the usual L²-shape gradient. If we want to optimize on a shape manifold, we have to find a representation of the shape gradient with respect to a Riemannian metric defined on the shape manifold under consideration. Such a representation is called Riemannian shape gradient. Writing V|_Γ = αn with α ∈ C^∞(Γ), the shape derivative can be expressed more concisely as

DJ(Ω)[V] = (r, α)_{L²(Γ)} = ∫_Γ r(s) α(s) ds.  (2.21)

In order to get an expression of the Riemannian shape gradient with respect to the Sobolev metric g¹, we look at the isomorphism (2.8). Due to this isomorphism, a tangent vector h ∈ T_Γ B_e is given by h = αn with α ∈ C^∞(Γ). This leads to the following definition.
Definition 2.8 (Riemannian shape gradient with respect to the Sobolev metric). A Riemannian representation of the shape derivative, i.e., the Riemannian shape gradient of a shape differentiable objective function J in terms of the Sobolev metric g¹, is given by

grad J = qn with (id − A △_Γ) q = r,  (2.22)

where r denotes the L²-shape gradient from (2.6) and △_Γ the Laplace-Beltrami operator on Γ, i.e., q satisfies g¹(qn, αn) = (r, α)_{L²(Γ)} for all α ∈ C^∞(Γ).
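For a boundary parametrized by arc length over the unit circle, an equation of the form (id − A △_Γ) q = r can be solved spectrally, since the Laplace-Beltrami operator acts as multiplication by −k² on Fourier modes. The following Python sketch rests on that illustrative assumption (unit-circle boundary, FFT-based solve); it is not the FE-based procedure of the references.

```python
import numpy as np

def riemannian_shape_gradient(r, A=0.1):
    """Solve (id - A * Laplace-Beltrami) q = r on the unit circle via FFT:
    in Fourier space the operator acts as multiplication by 1 + A*k^2."""
    N = len(r)
    k = np.fft.fftfreq(N, d=1.0 / N)   # integer frequencies 0, 1, ..., -1
    q_hat = np.fft.fft(r) / (1.0 + A * k**2)
    return np.fft.ifft(q_hat).real

theta = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
r = np.cos(theta)                      # an L2-gradient density, for illustration
q = riemannian_shape_gradient(r, A=0.5)
```

Since cos consists of the single Fourier mode k = ±1, the solve simply divides by 1 + A, which makes the smoothing effect of the g¹-representation explicit.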
Now, we can specify the Riemannian shape Hessian. It is based on the Riemannian connection ∇ related to the Sobolev metric g¹, which is given in Theorem 2.6. In analogy to [1], we can define the Riemannian shape Hessian as follows:

Definition 2.9 (Riemannian shape Hessian). In the setting above, the Riemannian shape Hessian of a two times shape differentiable objective function J is defined as the linear mapping

Hess J(Ω) : T_Γ B_e → T_Γ B_e, h ↦ ∇_h grad J.  (2.23)

The Riemannian shape gradient and the Riemannian shape Hessian are now defined. These two objects are required to formulate optimization methods in the shape space B_e. E.g., in the setting of PDE constrained shape optimization problems, a Lagrange-Newton method is obtained by applying a Newton method to find stationary points of the Lagrangian of the optimization problem. In contrast to this method, which requires the Hessian in each iteration, quasi-Newton methods only need an approximation of the Hessian. Such an approximation is realized, e.g., by a limited memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) update. However, these methods in (B_e, g¹) are based on surface expressions of shape derivatives. E.g., in a limited-memory BFGS method, a representation of the shape gradient with respect to the Sobolev metric g¹ has to be computed and applied as a Dirichlet boundary condition in the linear elasticity mesh deformation. This involves two operations which are non-standard in FE tools and thus lead to additional coding effort. Explaining all this goes beyond the scope of this paper; we refer to [48] for the limited-memory BFGS method in B_e and more details about the two non-standard operations. However, in Figure 2, the entire optimization algorithm for the limited-memory BFGS case is summarized. Note that this method boils down to a steepest descent method if the computation of the BFGS-update is omitted, and that it then only needs the gradient, but not the Hessian, in each iteration.
Moreover, in order to deform the mesh, we can choose any other elliptic equation instead of the linear elasticity equation.
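Omitting the BFGS-update, the surface-based method reduces to a steepest descent loop: compute the L²-gradient r, determine its g¹-representation q and move the boundary in direction −qn. The following Python sketch is a toy version on a discretized circle, with an FFT solve standing in for the mesh deformation; all concrete choices (node count, step size, functional) are ours. For J(Ω) = vol(Ω) the shape derivative density is r ≡ 1, so the circle shrinks uniformly.

```python
import numpy as np

def descent_step(c, r_of_c, tau=0.01, A=0.1):
    """One steepest-descent step on a discretized closed curve c (N, 2):
    compute the L2-gradient r, its g1-representation q (FFT solve on S^1),
    and move each node by -tau * q * n with n the outer unit normal."""
    N = len(c)
    dtheta = 2.0 * np.pi / N
    c_theta = (np.roll(c, -1, axis=0) - np.roll(c, 1, axis=0)) / (2.0 * dtheta)
    tangent = c_theta / np.linalg.norm(c_theta, axis=1, keepdims=True)
    # outer unit normal of a counterclockwise curve: rotate tangent by -90 deg
    n = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
    r = r_of_c(c)
    k = np.fft.fftfreq(N, d=1.0 / N)
    q = np.fft.ifft(np.fft.fft(r) / (1.0 + A * k**2)).real
    return c - tau * q[:, None] * n

theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
c = np.stack([np.cos(theta), np.sin(theta)], axis=1)
for _ in range(10):
    # r = 1 on Gamma is the shape derivative density of the volume functional
    c = descent_step(c, lambda c: np.ones(len(c)))
```

After ten steps with step size 0.01 the circle remains a circle with radius reduced from 1 to 0.9, which matches the uniform normal flow expected for this functional.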

Optimization based on Steklov-Poincaré metrics
If we consider Sobolev metrics, we have to deal with surface formulations of shape derivatives. An intermediate and equivalent result in the process of deriving these expressions is the volume expression, as already mentioned above. These volume expressions are often preferable to surface forms: they save analytical as well as programming effort, and transforming a volume form into a surface form usually requires additional regularity assumptions. However, in the case of the more attractive volume formulation, the shape manifold B_e and the corresponding inner product g¹ are not appropriate. One possible approach to use volume forms is to consider the Steklov-Poincaré metrics defined in the sequel.
Let Ω ⊂ X ⊂ R^d be a compact domain with C^∞-boundary Γ := ∂Ω, where X denotes a bounded domain with Lipschitz-boundary Γ_out := ∂X. In particular, this means Γ ∈ B_e(S^{d−1}, R^d). We consider the following scalar products, the so-called Steklov-Poincaré metrics (cf. [49]):

Definition 2.10 (Steklov-Poincaré metric). In the setting above, the Steklov-Poincaré metric is given by

g^S : H^{1/2}(Γ) × H^{1/2}(Γ) → R, (α, β) ↦ ∫_Γ α(s) [(S^pr)⁻¹ β](s) ds.  (2.24)

Here S^pr denotes the projected Poincaré-Steklov operator, which is given by

S^pr : H^{−1/2}(Γ) → H^{1/2}(Γ), α ↦ (γ₀U)^T n,

where γ₀ : H¹₀(X, R^d) → H^{1/2}(Γ, R^d) denotes the trace operator and U ∈ H¹₀(X, R^d) solves a(U, V) = ∫_Γ α ⟨γ₀V, n⟩ ds for all V ∈ H¹₀(X, R^d), with a(·, ·) being a symmetric and coercive bilinear form.
In particular, in the setting above, a domain Ω contained in a bounded domain X ⊂ R^d is called compact in X if its closure Ω̄ is compact in X.
Remark 2.12. Note that a Steklov-Poincaré metric depends on the choice of the bilinear form. Thus, different bilinear forms lead to various Steklov-Poincaré metrics.
In the following, we state the connection of B_e equipped with the Steklov-Poincaré metric g^S to shape calculus. As already mentioned, the shape derivative can be expressed as the surface integral (2.6) due to the Hadamard Structure Theorem. Recall that the shape derivative can be written more concisely (cf. (2.21)). Due to the isomorphism (2.8) and expression (2.21), we can state the connection of the shape space B_e with respect to the Steklov-Poincaré metric g^S to shape calculus:

Definition 2.13 (Shape gradient with respect to Steklov-Poincaré metric). Let r denote the (standard) L²-shape gradient given in (2.6). Moreover, let S^pr be the projected Poincaré-Steklov operator and let γ₀ be as in Definition 2.10. A representation h ∈ T_Γ B_e ≅ C^∞(Γ) of the shape gradient in terms of g^S is determined by

g^S(h, α) = (r, α)_{L²(Γ)} for all α ∈ C^∞(Γ).  (2.28)

Now, the shape gradient with respect to the Steklov-Poincaré metric is defined. This enables the formulation of optimization methods in B_e which involve volume formulations of shape derivatives. From (2.28) we get h = S^pr r = (γ₀U)^T n, where U ∈ H¹₀(X, R^d) solves

a(U, V) = DJ_vol(Ω)[V] + DJ_surf(Ω)[V] for all V ∈ H¹₀(X, R^d).  (2.29)

Here J_surf(Ω) denotes the parts of the objective function leading to surface shape derivative expressions, e.g., perimeter regularizations, which are incorporated as a Neumann boundary condition. Parts of the objective function leading to volume shape derivative expressions are denoted by J_vol(Ω). The elliptic operator a(·, ·) can be chosen, e.g., as the weak form of the linear elasticity equation. In this case, it is used as both an inner product and a mesh deformation, leading to only one linear system which has to be solved. However, note that from a theoretical point of view the volume and surface shape derivative formulations have to be equal to each other for all test functions. In order to avoid a discretization error (cf. [49]), DJ_vol[V] is assembled only for test functions V whose support includes Γ, i.e.,

a(U, V) = DJ_vol(Ω)[V] for all V ∈ H¹₀(X, R^d) with supp(V) ∩ Γ ≠ ∅.  (2.30)

We call (2.30) the deformation equation.
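On the discrete level, the deformation equation becomes a linear system: the assembled bilinear form is applied to an assembled volume shape derivative whose entries are kept only for degrees of freedom whose supports touch Γ. The following Python sketch illustrates this with placeholder data; the matrix, right-hand side and boundary index set are hypothetical stand-ins, not an actual FE assembly.

```python
import numpy as np

def solve_deformation_equation(A_mat, dJ_vol, gamma_dofs):
    """Sketch of the discretized deformation equation a(U, V) = DJ_vol[V]:
    A_mat is the assembled (symmetric, coercive) bilinear form, dJ_vol the
    assembled volume shape derivative; the right-hand side is kept only for
    test functions whose support touches Gamma (indices gamma_dofs), in
    order to avoid the discretization error discussed above."""
    b = np.zeros_like(dJ_vol)
    b[gamma_dofs] = dJ_vol[gamma_dofs]
    return np.linalg.solve(A_mat, b)

# Toy stand-ins: a 1D stiffness matrix for a(.,.) and a constant derivative.
n = 8
A_mat = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
dJ_vol = np.ones(n)
U = solve_deformation_equation(A_mat, dJ_vol, gamma_dofs=[3, 4])
```

The resulting vector U plays the role of the deformation field, which in an FE code would be applied directly to the mesh nodes.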
In contrast to Figure 2, which gives the complete optimization algorithm in the case of surface shape derivative expressions, Figure 3 summarizes the entire optimization algorithm in the setting of the Steklov-Poincaré metric and, thus, in the case of volume shape derivative expressions. As in the surface shape derivative case this method boils down to a steepest descent method by omitting the computation of the BFGS-update. For more details about this approach and in particular the implementation details we refer to [49,51].
Remark 2.14. Note that it is not ensured that U ∈ H¹₀(X, R^d) is C^∞. Thus, h = S^pr r = (γ₀U)^T n is not necessarily an element of T_Γ B_e. However, under special assumptions on the coefficients of the second-order partial differential operator and on the right-hand side of the PDE, a weak solution U, which a priori is only H¹₀-regular, is in fact C^∞ (cf. [15, Section 6.3, Theorem 6]).
The algorithm outlined in Figure 3 involves volume formulations of shape derivatives and a corresponding metric, the Steklov-Poincaré metric g^S, which is very attractive from a computational point of view. The computation of a representation of the shape gradient with respect to the chosen inner product of the tangent space is moved into the mesh deformation itself. The elliptic operator is used as both an inner product and a mesh deformation, which leads to only one linear system that has to be solved. However, the shape space B_e, containing only smooth shapes, unnecessarily limits the application of this algorithm. More precisely, numerical investigations have shown that the optimization techniques also work on shapes with kinks in the boundary (cf. [47,49,51]). This means that they are not limited to elements of B_e and another shape space definition is required. Thus, in [49], the definition of smooth shapes is extended to so-called H^{1/2}-shapes. In the next section, it is clarified what we mean by H^{1/2}-shapes. However, only a first attempt at a definition is given in [49]. From a theoretical point of view there are several open questions about this shape space. The most important question is what the structure of this shape space is. If we do not know the structure, there is no chance to get control over the space. Moreover, the definition of this shape space has to be adapted and refined. The next section is concerned with the space of H^{1/2}-shapes and in particular with its structure.
The shape space B^{1/2}

The Steklov-Poincaré metric correlates shape gradients with H¹-deformations. Under special assumptions, these deformations give shapes of class H^{1/2}, which are defined below. As already mentioned above, the shape space B_e unnecessarily limits the application of the methods mentioned in the previous section. In the setting of B_e, shapes can be considered as the images of embeddings. From now on, we have to think of shapes as boundary contours of deforming objects. Therefore, we need another shape space. In this section, we define the space of H^{1/2}-shapes and clarify its structure as a diffeological space. First, we not only define diffeologies and related objects, but also explain the difference between diffeologies and manifolds (Subsection 3.1). Afterwards, the space of H^{1/2}-shapes is defined and we see that it is a diffeological space (Subsection 3.2).

A brief introduction into diffeological spaces
In this subsection, we define diffeologies and related objects. Moreover, we clarify the difference between manifolds and diffeological spaces. For a detailed introduction into diffeological spaces we refer to [24].

Definitions
We start with the definition of a diffeological space and related objects like a diffeology, with which a diffeological space is equipped, and plots, which are the elements of a diffeology. Afterwards, we consider subset and quotient diffeologies. These two objects are required in the main theorem of Subsection 3.2.
Definition 3.1 (Diffeology, diffeological space, plots). Let Y be a non-empty set. A parametrization in Y is a map p : O → Y, where O is an open subset of R^n for some n ∈ N. A diffeology on Y is any set D_Y of parametrizations in Y such that the following three axioms are satisfied:
(i) Covering: Every constant parametrization belongs to D_Y.
(ii) Locality: Let I be an arbitrary set. Moreover, let {p_i : O_i → Y}_{i∈I} be a family of maps in D_Y which agree on the intersections of their domains and which extend to a map p : ⋃_{i∈I} O_i → Y. Then p ∈ D_Y.
(iii) Smooth compatibility: Let p : O → Y be an element of D_Y. Moreover, let O' be an open subset of R^m and F : O' → O a smooth map in the classical sense. Then p ∘ F ∈ D_Y.
A non-empty set Y together with a diffeology D_Y on Y is called a diffeological space and denoted by (Y, D_Y). The parametrizations p ∈ D_Y are called plots of the diffeology D_Y. If a plot p ∈ D_Y is defined on O ⊂ R^n, then n is called the dimension of the plot and p is called an n-plot.
In the literature, there are a lot of examples of diffeologies, e.g., the diffeology of the circle, the square, the set of smooth maps, etc. For those we refer to [24].
Remark 3.2. A diffeology as a structure and a diffeological space as a set equipped with a diffeology are distinguished only formally. Every diffeology on a set contains the underlying set as the set of non-empty 0-plots (cf. [24]).
Next, we want to connect diffeological spaces. This is possible through smooth maps between two diffeological spaces.

Definition 3.3 (Smooth map between diffeological spaces, diffeomorphism). Let (X, D_X) and (Y, D_Y) be two diffeological spaces. A map f : X → Y is called smooth if for each plot p ∈ D_X the composite f ∘ p is a plot of Y, i.e., f ∘ p ∈ D_Y. If f is bijective and both f and its inverse f⁻¹ are smooth, f is called a diffeomorphism. In this case, X is called diffeomorphic to Y.
The stability of diffeologies under almost all set constructions is one of the most striking properties of the class of diffeological spaces, e.g., the subset, quotient, functional or powerset diffeology. In the following, we concentrate on the subset and the quotient diffeology. These concepts are required in the proof of the main theorem in the next subsection.
Subset diffeology. Every subset of a diffeological space carries a natural subset diffeology, which is defined by the pullback of the ambient diffeology by the natural inclusion.
Before we can construct the subset diffeology, we have to clarify the natural inclusion and the pullback. For two sets A, B with A ⊂ B, the (natural) inclusion is given by ι A : A → B, x → x. The pullback is defined as follows: Theorem and Definition 3.4 (Pullback). Let X be a set and (Y, D Y ) be a diffeological space. Moreover, f : X → Y denotes some map.
(i) There exists a coarsest diffeology of X such that f is smooth. This diffeology is called the pullback of the diffeology D Y by f and is denoted by f * (D Y ).
(ii) Let p be a parametrization in X. Then p ∈ f*(D_Y) if and only if f ∘ p ∈ D_Y.
The construction of subset diffeologies is related to so-called inductions.

Definition 3.5 (Induction). Let (X, D_X) and (Y, D_Y) be diffeological spaces. A map f : X → Y is called an induction if f is injective and f*(D_Y) = D_X, i.e., the pullback of the diffeology D_Y by f coincides with D_X.

An illustration of an induction as well as criteria for being an induction can be found in [24, Chapter 1, 1.31]. Now, we are able to define the subset diffeology (cf. [24]).
Theorem and Definition 3.6 (Subset diffeology). Let (X, D X ) be a diffeological space and let A ⊂ X be a subset. Then A carries a unique diffeology D A , called the subset or induced diffeology, such that the inclusion map ι A : A → X becomes an induction, namely, D A = ι * A (D X ). We call (A, D A ) the diffeological subspace of X.
Quotient diffeology. Just as every subset of a diffeological space inherits the subset diffeology, every quotient of a diffeological space carries a natural quotient diffeology, defined by the pushforward of the diffeology of the source space to the quotient by the canonical projection. First, we have to clarify the canonical projection. For a set A and an equivalence relation ∼ on A, the canonical projection is defined as π : A → A/∼, x ↦ [x], where [x] := {x' ∈ A : x ∼ x'} denotes the equivalence class of x with respect to ∼. Moreover, the pushforward has to be defined:
Theorem and Definition 3.7 (Pushforward). Let (X, D_X) be a diffeological space and Y be a set. Moreover, let f : X → Y denote a map.
(i) There exists a finest diffeology on Y such that f is smooth. This diffeology is called the pushforward of the diffeology D_X by f and is denoted by f_*(D_X).
Remark 3.8. If a map f from a diffeological space (X, D_X) into a set Y is surjective, then f_*(D_X) consists precisely of the parametrizations p : U → Y which locally are of the form f ∘ q for plots q ∈ D_X since, by the surjectivity of f, these already contain the constant parametrizations.
The construction of quotient diffeologies is related to so-called subductions.
Definition 3.9 (Subduction). Let (X, D_X) and (Y, D_Y) be diffeological spaces. A map f : X → Y is called a subduction if f is surjective and D_Y = f_*(D_X), where f_*(D_X) denotes the pushforward of the diffeology D_X by f.
The illustration of a subduction as well as the criteria for being a subduction can be found in [24, Chapter 1, 1.48]. Now, we can define the quotient diffeology (cf. [24]).
Theorem and Definition 3.10 (Quotient diffeology). Let (X, D_X) be a diffeological space and ∼ be an equivalence relation on X. Then the quotient set X/∼ carries a unique diffeological structure D_{X/∼}, called the quotient diffeology, such that the canonical projection π : X → X/∼ becomes a subduction, namely, D_{X/∼} = π_*(D_X).
We call (X/∼, D_{X/∼}) the diffeological quotient of X by the relation ∼.
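Since the canonical projection π is surjective by construction, Remark 3.8 specializes to a concrete description of the plots of the quotient diffeology:

```latex
% A parametrization p: U -> X/~ is a plot of D_{X/~} if and only if
% it lifts locally through the projection \pi, i.e., every point of U
% has an open neighborhood V such that
p|_V = \pi \circ q \quad \text{for some plot } q \in D_X
\text{ defined on } V .
```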

Differences between diffeologies and manifolds
Section 3 is concerned with manifolds. Manifolds can be generalized in many ways.
In [55], a summary and comparison of possibilities to generalize smooth manifolds are given. One generalization is a diffeological space, on which we concentrate in this section. In the following, the main differences between manifolds and diffeological spaces are worked out. For simplicity, we concentrate on finite-dimensional manifolds. However, it has to be mentioned that infinite-dimensional manifolds can also be understood as diffeological spaces. This follows, e.g., from [29, Corollary 3.14] or [34]. Given a smooth manifold, there is a natural diffeology on this manifold consisting of all parametrizations which are smooth in the classical sense. This yields the following definition. In order to characterize the diffeological spaces which arise from manifolds, we need the concept of smooth points. The concept of smooth points is quite simple. Let us consider the coordinate axes, e.g., in R^2. All points of the two axes with the exception of the origin are smooth points. Now, we are able to formulate the following theorem:
Theorem 3.14. A diffeological space (X, D_X) is associated with a (not necessarily paracompact or Hausdorff) smooth manifold if and only if each of its points is smooth.
Proof. We have to show the following statements: To (ii): Let (X, D_X) be a diffeological space for which all points are smooth. Then there exists an open cover X = ⋃_{i∈I} U_i and diffeomorphisms f_i from U_i onto open subsets of R^n such that f_j ∘ f_i^{-1} is smooth (in the diffeological sense) for all i, j ∈ I. Due to Remark 3.12, the map f_j ∘ f_i^{-1} is smooth in the classical sense for all i, j ∈ I. Thus, {(U_i, f_i)}_{i∈I} defines a smooth atlas, and a manifold structure on X is defined. Let D be the associated diffeology. A similar argument as above shows that the diffeology D agrees with the original one, D_X.
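To illustrate Theorem 3.14, consider the coordinate cross discussed above, equipped with the subset diffeology inherited from R^2:

```latex
% The coordinate cross
C := \{ (x, y) \in \mathbb{R}^2 : x y = 0 \} ,
% with the subset diffeology of \mathbb{R}^2: every point except the
% origin has a neighborhood in C diffeomorphic to an open interval and
% is therefore smooth, while the origin is not a smooth point. By
% Theorem 3.14, C is not associated with any smooth manifold.
```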

The diffeological shape space
We extend the definition of smooth shapes, which are elements of the shape space B_e, to shapes of class H^{1/2}. In the following, it is clarified what we mean by H^{1/2}-shapes. We would like to recall that a shape in the sense of the shape space B_e is given by the image of an embedding from the unit sphere S^{d-1} into the Euclidean space R^d. In view of our generalization, it has technical advantages to consider so-called Lipschitz shapes, which are defined as follows.
Definition 3.16 (Lipschitz shape). A d-dimensional Lipschitz shape Γ_0 is defined as the boundary Γ_0 = ∂X_0 of a compact Lipschitz domain X_0 ⊂ R^d with X_0 ≠ ∅.
The set X_0 is called a Lipschitz set.
Examples of Lipschitz shapes are illustrated in Figure 4. In contrast, Figure 5 shows examples of non-Lipschitz shapes. General shapes, in our novel terminology, arise from H^1-deformations of a Lipschitz set X_0. These H^1-deformations, evaluated at a Lipschitz shape Γ_0, give deformed shapes Γ if the deformations are injective and continuous. These shapes are called of class H^{1/2} and were first proposed in [49]. The following definitions differ from [49]. This is because of our aim to define the space of H^{1/2}-shapes as a diffeological space which is suitable for the formulation of optimization techniques and their applications.
The space of all d-dimensional H^{1/2}-shapes is given by

B^{1/2}(Γ_0, R^d) := 𝓗^{1/2}(Γ_0, R^d) / ∼,    (3.1)

where

𝓗^{1/2}(Γ_0, R^d) := {w ∈ H^{1/2}(Γ_0, R^d) : w injective and continuous}

and the equivalence relation ∼ is given by

w_1 ∼ w_2 :⟺ w_1(Γ_0) = w_2(Γ_0).

The set 𝓗^{1/2}(Γ_0, R^d) is obviously a subset of the Sobolev-Slobodeckij space H^{1/2}(Γ_0, R^d), which is well-known to be a Banach space (cf. [36, Chapter 3]). Banach spaces are manifolds and, thus, we can equip H^{1/2}(Γ_0, R^d) with the corresponding diffeology. This encourages the following theorem, which provides the space of H^{1/2}-shapes with a diffeological structure. With this theorem we reach one of the main aims of this paper.
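The equivalence relation is designed to identify two deformations precisely when they yield the same deformed shape; assuming this form of ∼, each point of B^{1/2}(Γ_0, R^d) corresponds to exactly one H^{1/2}-shape:

```latex
% Each equivalence class [w] determines a deformed shape via
[w] \longmapsto w(\Gamma_0) ,
% which is well defined (w_1 \sim w_2 implies
% w_1(\Gamma_0) = w_2(\Gamma_0)) and injective (equal images imply
% equivalence). Hence the points of B^{1/2}(\Gamma_0, \mathbb{R}^d)
% are in one-to-one correspondence with H^{1/2}-shapes.
```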
So far, we have defined the space of H^{1/2}-shapes and shown that it is a diffeological space. The appearance of a diffeological space in the context of shape optimization can be seen as a first step or motivation towards the formulation of optimization techniques on diffeological spaces. Note that, so far, there is no theory for shape optimization on diffeological spaces. Of course, properties of the shape space B^{1/2}(Γ_0, R^d) have to be investigated. E.g., an important question is what the tangent space looks like. Tangent spaces and tangent bundles are important in order to state the connection of B^{1/2}(Γ_0, R^d) to shape calculus and, in this way, to be able to formulate optimization algorithms in B^{1/2}(Γ_0, R^d). There are many equivalent ways to define tangent spaces of manifolds, e.g., geometrically via velocities of curves, algebraically via derivations or physically via cotangent spaces (cf. [30]). Many authors have generalized these concepts to diffeological spaces, e.g., [10,18,24,54]. In [54], tangent spaces are defined for diffeological groups by identifying smooth curves using certain states. Tangent spaces and tangent bundles for many diffeological spaces are given in [18]; here, smooth curves and a more intrinsic identification are used. However, in [10], it is pointed out that there are some errors in [18]. In [24], the tangent space to a diffeological space at a point is defined as a subspace of the dual of the space of 1-forms at that point; these tangent spaces are used to define tangent bundles. In [10], two approaches to the tangent space of a general diffeological space at a point are studied: the first is the approach introduced in [18] and the second uses smooth derivations on germs of smooth real-valued functions. Basic facts about these tangent spaces are proven, e.g., locality and that the internal tangent space respects finite products.
Note that the tangent space to B^{1/2}(Γ_0, R^d) as a diffeological space and related objects which are needed in optimization methods, e.g., retractions and vector transports, cannot be deduced or defined so easily. The study of these objects and the formulation of optimization methods on a diffeological space go beyond the scope of this paper and are topics of subsequent work. Moreover, note that the Riemannian structure g^S on B^{1/2}(Γ_0, R^d) has to be investigated in order to define B^{1/2}(Γ_0, R^d) as a Riemannian diffeological space. In general, a diffeological space can be equipped with a Riemannian structure as outlined, e.g., in [35].
Besides the tangent spaces, another open question is which assumptions guarantee that the image of a Lipschitz shape under w ∈ 𝓗^{1/2}(Γ_0, R^d) is again a Lipschitz shape. Of course, the image of a Lipschitz shape under a continuously differentiable function is again a Lipschitz shape, but the requirement that w is a C^1-function is too strong. One idea is to require that w is a bi-Lipschitz function. Unfortunately, the image of a Lipschitz shape under a bi-Lipschitz function is not necessarily a Lipschitz shape, as the example given in [22, Subsection 4.1] shows. In summary, this question is very hard and answering it will require considerable effort. However, this goes beyond the scope of this paper and will be a topic of subsequent work.

Conclusion
The differential-geometric structure of the shape space B_e is applied to the theory of PDE or VI constrained shape optimization problems. In particular, a Riemannian shape gradient and a Riemannian shape Hessian with respect to the Sobolev metric g^1 are defined. The specification of the Riemannian shape Hessian requires the Riemannian connection, which is given and proven for the Sobolev metric. It is outlined that we have to deal with surface formulations of shape derivatives if we consider the Sobolev metric. In order to use the more attractive volume formulations, we considered the Steklov-Poincaré metrics g^S and stated their connection to shape calculus by defining the shape gradient with respect to g^S. The gradients with respect to both g^1 and g^S, together with the Riemannian shape Hessian, open the door to formulating optimization algorithms in B_e. However, the shape space B_e limits the application of optimization techniques. Thus, we extend the definition of smooth shapes to H^{1/2}-shapes and define a novel shape space. It is shown that this space has a diffeological structure. In this context, we clarify the differences between manifolds and diffeological spaces. From a theoretical point of view, a diffeological space is very attractive in shape optimization. It can be supposed that a diffeological structure suffices for many differential-geometric tools used in shape optimization techniques. In particular, objects which are needed in optimization methods, e.g., retractions and vector transports, have to be deduced. Note that these objects cannot be defined so easily; additional work is required to formulate optimization methods on a diffeological space, which remains open for further research and will be addressed in subsequent papers.