Abstract
We formulate variational material modeling in a space-time context. The starting point is the description of the space-time cylinder and the definition of a thermodynamically consistent Hamilton functional which accounts for all boundary conditions on the cylinder surface. From the mechanical perspective, the Hamilton principle then yields thermo-mechanically coupled models by evaluation of the stationarity conditions for all thermodynamic state variables, which are displacements, internal variables, and temperature. As examples, we investigate in this contribution elastic wave propagation, visco-elasticity, elasto-plasticity with hardening, and gradient-enhanced damage. Therein, one key novel aspect is the use of initial and end time velocity conditions for the wave equation, replacing the classical initial conditions for the displacements and the velocities. The motivation is discussed in detail and illustrated with the help of a prototype numerical simulation. From the mathematical perspective, the space-time formulations are stated within suitable function spaces and convex sets. The unified presentation merges engineering and applied mathematics due to their mutual interactions. Specifically, the chosen models are of high interest in many state-of-the-art developments in modeling, and we show the impact of this holistic physical description on space-time Galerkin finite element discretization schemes. Finally, we study a specific discrete realization and show that the resulting system using initial and end time conditions is well-posed.
1 Introduction
Mechanics and mathematics share a common history: often, mechanical problems inspired mathematicians to invent new techniques to successfully describe the observed physical phenomena. A prominent example is the brachistochrone problem stated by J. Bernoulli in 1696, which was the origin of the invention of variational calculus. Therefrom evolved a long-standing fruitful history of interaction between applied mathematics and mechanics in variational material modeling, specifically the famous principles of Lagrange and Hamilton, which date back to 1788 [53] (Méchanique Analytique) and 1834 [36] (On a General Method in Dynamics), respectively. For a concise historical background, we refer to the preface of [57]. A further example of this interaction was the understanding that a thermodynamic state is uniquely defined if and only if, along with displacements and temperature, the current ‘inner state’ is known. This idea, which originated from the pioneering works of Onsager in the 1940s, was adapted in continuum mechanics by introducing internal variables. This, in turn, stimulated a strong and fruitful interrelation with mathematics by carefully analyzing and discovering convexity properties of the free energy density such as rank-1-convexity, poly-convexity, and quasi-convexity, e.g., [4, 5, 19]. Both variational calculus and these convexity properties belong to the mathematical field of analysis. A more recent example is given in the context of numerics: the internal variables allowed the formulation of modern mechanical models, e.g., for visco-elastic and plastic materials, which could be evaluated in a finite-dimensional context as soon as computational power had reached the level necessary for the break-through of the finite element method.
The interplay of mechanical modeling and the development of suitable numerical solvers then established a strong cooperation between mathematics and mechanics, which has continued to develop since the 1980s, e.g., [80].
Mechanical models usually result in nonstationary, nonlinear, coupled partial differential equations, in which several equations interact, and most often demand numerical solutions including physics-based discretizations and physics-based solvers. Moreover, such systems can be subject to inequality constraints (as we will see in this work as well), yielding coupled variational inequality systems (CVIS) [86]. For a holistic numerical analysis of these partial differential equations in space and time, the so-called space-time approach has been known since 1969 from the pioneering works of Oden [66], Argyris and Scharpf [1], and Fried [29]. It has been further developed since then; we refer the reader to the recent overviews [54, 55, 83, 87] and the references cited therein.
In space-time modeling, temporal and spatial coordinates are treated in one common continuum, the so-called space-time cylinder, which then allows for a joint discretization. The terminology ‘cylinder’ is commonly used, e.g., [54], and has to be understood in a larger geometrical sense for all cases in which both the spatial and the temporal geometries are still ‘domains’; for the definition see, for instance, [21]. Specifically, this enables common mathematical models, a common mathematical theory, and a common numerical analysis with corresponding algorithms. It offers the huge advantage of analyzing the properties of the numerical solution of a physical problem in a holistic manner and thus of identifying suitable solution strategies for a particular model. Specifically, similar types of discretizations in space and time (generically implicit, A-stable schemes with numerical stability, e.g., [22]) are of interest. Often, these are Galerkin finite element methods (FEM), either continuous Galerkin (cG), discontinuous Galerkin (dG), or mixtures, as for instance cG in space and dG in time. This allows for flexible discretizations in space and time, and higher-order basis functions can be employed in a natural way. Considering in particular the temporal discretization in terms of FEM, the integral form comprises information on the entire continuous time interval \(I_m\) rather than only at the discrete time points \(t_{m-1}\) and \(t_m\) as for finite differences. In the numerical analysis, this has the advantage that well-known best-approximation (Galerkin orthogonality) and convergence results from FEM theory can be employed, which usually require weaker regularity of the governing functions than finite difference schemes. Based upon these results, FEM-based a priori and a posteriori error estimates can be derived, where the latter enable error-controlled (space-time) adaptivity.
Specifically, when working in optimization [39, 84] or dual-weighted residual error estimation, e.g., [6, 10, 79] and selected chapters in [71], the adjoint equation is needed, which is derived and discretized in a consistent fashion utilizing space-time modeling. The biggest shortcomings of space-time discretizations are often the heavy notation, specifically when dG discretizations are involved, and clearly the computational cost and memory requirements of truly space-time numerical solutions [30, 78].
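As a concrete illustration of temporal dG discretizations: it is well known that the lowest-order scheme dG(0) coincides with the backward Euler update for linear problems (up to quadrature of the right-hand side). A minimal sketch with an illustrative scalar model problem (not from the paper):

```python
import numpy as np

def dg0_solve(lam, T, M, u0=1.0):
    """dG(0) time stepping for the model problem u' = lam*u, u(0) = u0.
    Piecewise-constant trial and test functions on each slab
    I_m = (t_{m-1}, t_m] yield the slab equation
      (U_m - U_{m-1}) - lam*k*U_m = 0,
    which coincides with the backward Euler update."""
    k = T / M
    u = np.empty(M + 1)
    u[0] = u0
    for m in range(M):
        u[m + 1] = u[m] / (1.0 - lam * k)
    return u

lam, T = -2.0, 1.0
errors = [abs(dg0_solve(lam, T, M)[-1] - np.exp(lam * T)) for M in (50, 100, 200)]
# halving the step size roughly halves the error: dG(0) is first order
print(errors[0] / errors[1], errors[1] / errors[2])
```

The observed error ratios close to 2 reflect the first-order convergence of dG(0); higher-order dG(r) schemes follow the same slab-wise construction with polynomial trial functions.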
In the 1970s and 1980s, studies similar to ours, using Hamilton’s principle and Hamilton’s law to model dynamics and elastodynamics in a space-time context, were published [8, 68]; we also refer to the classical textbook [26], in which engineering models are formulated within space-time function sets, and to the research papers [6, 41, 42, 45], where specifically [42, pp. 327–332] provides an exhaustive literature review from the late 1980s viewpoint. However, it appears that space-time approaches were only used for conservative and isothermal systems, or they were derived from the respective strong forms of the balance of linear momentum and, in the case of thermo-mechanics, the heat conduction equation.
Our intention in this contribution is to show that these procedures can be replaced: indeed, as already noticed in various works, a space-time formulation is already present in Hamilton’s principle of stationary action, which is of special interest here. It has recently been demonstrated that an extended Hamilton principle yields the thermo-mechanically coupled model equations from the stationarity conditions for all thermodynamic state variables, which are displacements, internal variables for microstructure evolution, and temperature, cf. [46]. Consequently, the extended Hamilton principle yields the space-time formulation for thermo-mechanics of dissipative continua in a holistic sense from a stationarity principle which, from a mathematical perspective, offers the benefit of providing the correct test functions automatically.
It is worth mentioning that there exist other extremal principles including thermal coupling; examples can be found in [11, 18, 60, 61], for instance. However, all of these approaches share the very same procedure: the time derivatives are discretized first. In a space-time context, this can be interpreted as choosing a special (dG) time approximation in advance of the evaluation of the stationarity. In contrast, the Hamilton principle we use does not need this specification of the test functions; instead, the stationarity conditions are all derived for arbitrary (smooth) test functions. Our approach thus offers the most general formulation. Consequently, the modeling of coupled mechanical continua via this stationarity principle directly results in a holistic space-time formulation for all state variables. It hence serves as a further example of how mechanics and mathematics are overlapping research areas with mutual exchanges. We demonstrate this important understanding by means of four specific mechanical models which cover many possible types of equations in solid mechanics. These are the elastic wave problem and rate-dependent, rate-independent, and gradient-enhanced equation types, which result in ordinary differential equations, differential-algebraic equations, partial differential equations, and variational inequalities, respectively. They all demand their own, specialized space-time formulations. A comparable approach has been presented in [31], which was restricted to systems without internal variables. The previous descriptions can be summarized as novelties in a compact fashion as follows:
1. Analysis of the classical principles of stationary action and virtual work regarding boundary conditions;
2. Derivation of mathematically consistent space-time formulations from an extended Hamilton principle;
3. Detailed explanation of boundary conditions on the end-time cylinder part, including a prototype numerical simulation;
4. Numerical discretization using Galerkin finite elements in space and time, in which the temporal part is based on discontinuous Galerkin (dG) methods. A well-posedness analysis for three time points of the fully discrete elastic wave equation is included as well;
5. Correspondence of classical strong formulations and space-time weak formulations in order to address research communities starting from strong forms and vice versa. These correspondences are explicitly stated in the Appendix for each of our four principal problems to allow a one-to-one comparison.
The outline of this paper is as follows. First, in Sect. 2, we introduce and describe the space-time cylinder and recall the fundamentals of thermodynamics. Specifically, the key notation is introduced as well as all boundary conditions on the space-time cylinder surface. Second, we discuss the boundary conditions for the classical extremal principles of stationary action and virtual work in Sect. 3. In Sect. 4, we set up the notation, recall the extended Hamilton principle, and provide a further roadmap of the paper. In Sect. 5, we introduce a reformulated Hamilton functional which results in a mixed formulation and apply it to elastic wave propagation, for which a numerical simulation result is presented. In Sect. 6, we address visco-elasticity. Next, in Sect. 7, we consider elasto-plasticity with hardening, and in Sect. 8, gradient-enhanced damage modeling is discussed. In Sect. 9, we present a numerical discretization of the gradient-enhanced damage model; here, a numerical regularization of the variational inequality is also proposed. Our work is summarized in Sect. 10, where some future directions are also given.
2 Mathematical and thermodynamic preliminaries
2.1 Notation
Let \(\Omega \subset {\mathbb {R}}^d\) be a bounded domain (open, connected set of points), where \(d=3\) is the spatial dimension. Let \(\partial \Omega \) be a sufficiently smooth boundary such that an outward pointing normal vector \(\varvec{n}\) can be defined. Specifically, \(\partial \Omega =\partial \Omega _\textrm{D}\cup \partial \Omega _\textrm{N}\) and \(\partial \Omega _\textrm{D}\cap \partial \Omega _\textrm{N}=\emptyset \), with \(\partial \Omega _\textrm{D}\) denoting the boundary part with Dirichlet conditions and \(\partial \Omega _\textrm{N}\) denoting the boundary part with Neumann conditions. Moreover, for each governing variable (three in this work), the respective Dirichlet and Neumann boundaries are distinguished by \(\partial \Omega _{\textrm{D},\varvec{u}},\partial \Omega _{\textrm{D},\theta },\partial \Omega _{\textrm{D},\varvec{\alpha }}\) and \(\partial \Omega _{\textrm{N},\varvec{u}},\partial \Omega _{\textrm{N},\theta },\partial \Omega _{\textrm{N},\varvec{\alpha }}\), and they are respectively non-overlapping for each variable. Next, the time interval is denoted by \(I = (0,T)\) with the end time value \(T\in ]0,\infty [\). The closures of \(\Omega \) and I are denoted by \(\bar{\Omega }\) and \(\bar{I}\), respectively. For the mathematical descriptions we often employ \((\varvec{a},\varvec{b}) = \int _{\Omega } \varvec{a}\cdot \varvec{b}\ \textrm{d}{V}\) when \(\varvec{a},\varvec{b}\in {\mathbb {R}}^d\). The same notation is employed for second-order tensors: \((\varvec{A},\varvec{B}) = \int _{\Omega } \varvec{A}: \varvec{B}\ \textrm{d}{V}\) when \(\varvec{A},\varvec{B}\in {\mathbb {R}}^{d\times d}\), where the Frobenius scalar product (double contraction) is defined as \(\varvec{A}:\varvec{B}= \sum _{i,j=1}^d A_{ij} B_{ij}\).
The Frobenius norm is denoted by \(\Vert \varvec{A}\Vert :=\Vert \varvec{A}\Vert _F = \left( \sum _{i,j=1}^d A_{ij}^2 \right) ^{1/2}\) and the Euclidean norm by \(\Vert \varvec{a}\Vert :=\left( \sum _{i=1}^da_i^2\right) ^{1/2}\) for \(\varvec{a}\in {\mathbb {R}}^d\).
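The pointwise products and norms underlying the \((\cdot ,\cdot )\) notation can be sketched numerically (the spatial integration is omitted here); a small NumPy illustration with hypothetical sample values:

```python
import numpy as np

# Sample data for d = 3 (all values are illustrative only)
d = 3
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
A = np.arange(1.0, 10.0).reshape(d, d)
B = np.eye(d)

dot_ab = a @ b                                  # a . b
AB = np.einsum('ij,ij->', A, B)                 # A : B = sum_ij A_ij B_ij
frobenius = np.linalg.norm(A)                   # ||A||_F
euclidean = np.linalg.norm(a)                   # Euclidean norm ||a||

assert np.isclose(AB, np.trace(A))              # A : I = tr(A)
assert np.isclose(frobenius, np.sqrt((A**2).sum()))
```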
2.2 Solution sets: function spaces and convex sets
For the mathematical description of the problem statements in this paper, we work with the usual Hilbert spaces \(L^2(\Omega )\) and \(H^1(\Omega )\) and their time-dependent extensions, namely the Bochner spaces \(L^2(I,V)\) of functions mapping from the time interval I into V with \(L^2\) regularity in time, where \(V:=L^2(\Omega )\) or \(V:=H^1(\Omega )\); see, for instance, [88], [52], or [56, Chapter 1]. Furthermore, for differentiation in normed function spaces, we introduce some notation below and refer to [21] for a compact summary of important concepts such as Gâteaux derivatives and chain rules. Since we frequently deal with variational inequalities (and coupled variational inequality systems (CVIS) [86]), we also need closed convex sets, e.g., denoted by K, in suitable function spaces; see, e.g., [26, 50].
2.3 Definitions
Continuous physical bodies are characterized by a closed set \(\Omega \subset {\mathbb {R}}^d\). Then, material properties are assigned to all spatial points \(\varvec{x}\in \Omega \) such that these spatial points are referred to as material points [21, 40, 73]. The properties of the material points can be both time-independent and time-dependent. The time-independent properties are usually referred to as material parameters, which can be experimentally determined for a specific material. The time-dependent properties, in contrast, characterize the thermodynamic state and are thus referred to as thermodynamic state variables. Examples of internal variables are plastic strains, damage states, volume fractions, concentrations, or combinations of the latter. In this contribution, we investigate dissipative thermo-mechanics, for which the thermodynamic state is uniquely set by the displacements \(\varvec{u}\), the material-specific internal variables \(\varvec{\alpha }\), and the temperature \(\theta \). Then, we arrive at
Definition 1
(Thermodynamic state) The thermodynamic state is defined by the set \(\Lambda :=\{\varvec{u},\varvec{\alpha },\theta \},\)
which provides full thermo-mechanical information including dissipative effects.
The set of state variables needs to be adjusted depending on the material behavior to be described. For instance, for dissipative electro-thermo-mechanics, the electric field \(\varvec{E}\) and the magnetic field \(\varvec{B}\) need to be added. On the other hand, for purely reversible processes, the thermodynamic state reduces to \(\Lambda ^\textrm{e}:=\{\varvec{u},\theta \}\). Then, the physical state is given by evaluation of all elements in \(\Lambda \) for all elements in \(\Omega \), i.e., \(\varvec{u}=\varvec{u}(\varvec{x})\), \(\varvec{\alpha }=\varvec{\alpha }(\varvec{x})\), and \(\theta =\theta (\varvec{x})\), where the spatial domain \(\Omega \ni \varvec{x}\) can be imagined without loss of generality as a circle. However, and as mentioned, the thermodynamic state variables are time-dependent. This property can be included in the schematic picture by extruding the circle in the perpendicular direction, which denotes the time dimension. The resulting ‘geometric’ object is thus a cylinder in the space-time continuum. Its dimensions are the geometric length [m], i.e., the radius of the circle, and the time length [s], i.e., the height of the cylinder, both given in SI units. However, the presented derivations in this work hold true for arbitrary three-dimensional spatial domains with sufficiently smooth boundaries. Thus, we arrive at (see, for instance, [52, 56])
Definition 2
(Space-time cylinder) Let \({\mathbb {R}}^{d+1}\) be the \((d+1)\)-dimensional Euclidean space with points \((\varvec{x},t)\), where \(\varvec{x}=(x_1,\ldots ,x_d)\in {\mathbb {R}}^d\). The space-time cylinder is defined by \(Q:=\Omega \times I\)
with the time interval \(I = (0,T)\), with \(t\in I\), where \(T\in ]0,\infty [\) is the end time point, and \(\Omega \subset {\mathbb {R}}^d\) is the spatial domain.
Of course, for a general three-dimensional spatial domain, the space-time cylinder becomes a four-dimensional object with the coordinates \((x_1,x_2,x_3,t)\) and arbitrary shape. A schematic plot of the space-time cylinder for a two-dimensional spatial domain is given in Fig. 1. This leads to
Definition 3
(Integration over the volume of the space-time cylinder) The integration over the volume of the space-time cylinder for an integrable scalar-valued function \(f:Q\rightarrow {\mathbb {R}}\) with \(f=f(\varvec{x},t)\) is given by \({\mathcal {B}}_Q[f]:=\int _I\int _\Omega f \ \textrm{d}V \ \textrm{d}t.\)
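The nested space-time integration of Definition 3 can be sketched as quadrature over a tensor grid; a minimal illustration for a hypothetical unit-square domain and unit time interval, using the trapezoidal rule (the paper proper uses FEM quadrature):

```python
import numpy as np

def trap(vals, h, axis=0):
    """Composite trapezoidal rule with uniform spacing h along one axis."""
    lo = vals.take(range(vals.shape[axis] - 1), axis=axis)
    hi = vals.take(range(1, vals.shape[axis]), axis=axis)
    return 0.5 * h * (lo + hi).sum(axis=axis)

# Omega = (0,1)^2 and I = (0,1): int_Q f dQ = int_I int_Omega f dV dt
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y, Tg = np.meshgrid(x, x, x, indexing='ij')
f = X * Y * Tg                 # f(x,t) = x1*x2*t, exact integral 1/8

integral = trap(trap(trap(f, h), h), h)   # dV over Omega, then dt over I
assert abs(integral - 0.125) < 1e-10
```

Since the sample integrand is multilinear, the trapezoidal rule integrates it exactly up to rounding; for general integrands, the rule's usual quadrature error applies.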
2.4 State variables
It has been demonstrated in the recent publication [46] that an extended Hamilton principle provides not only the governing equations for the displacements, in analogy to the principle of least action; rather, it provides the governing equations for all thermodynamic state variables, which are the displacements \(\varvec{u}=\varvec{u}(\varvec{x},t)\), the internal variables \(\varvec{\alpha }=\varvec{\alpha }(\varvec{x},t)\), which describe the microstructural state, and the temperature \(\theta =\theta (\varvec{x},t)\). All thermodynamic state variables are functions of space \(\varvec{x}\in \Omega \) and time \(t\in I\); consequently, all state variables are defined on the space-time cylinder: \((\varvec{x},t)\in Q\), \(\varvec{u}: Q \rightarrow {\mathbb {R}}^d\), \(\varvec{\alpha }: Q\rightarrow {\mathbb {R}}\) or \(\varvec{\alpha }: Q\rightarrow {\mathbb {R}}^{d\times d}\), depending on the specific microstructure evolution to be modeled, and \(\theta : Q \rightarrow {\mathbb {R}}\). The internal variables are interpreted in the following derivations as a vectorial quantity; for the other cases, i.e., scalar- or matrix-valued internal variables, the products have to be adapted accordingly: the scalar product \((\bullet )\cdot (\bullet )\) reduces to \((\bullet )(\bullet )\) for scalar-valued internal variables or expands to \((\bullet ):(\bullet )\) for matrix-valued internal variables.
2.5 Space-time boundary conditions and evolution laws
The thermodynamic state \(\Lambda \) is not fixed but time-dependent. To be more precise, it depends both on the externally applied conditions at the space-time point \((\varvec{x},t)\) and on the states at other points in time. Consequently, two quantities are of interest:
1. the boundary conditions and
2. the evolution of \(\Lambda \) within the space-time cylinder.
We will come to item 2 later. For the first item, we return to the figure of the space-time cylinder.
2.5.1 Space-time boundary conditions
First, we introduce
Definition 4
(Boundary of the space-time cylinder) The boundary of the space-time cylinder is defined by \(\partial Q:=\{\partial \Omega \times I\}\cup \{\Omega \times \partial I\}.\)
We see that the lateral surface of the space-time cylinder (carrying the classical boundary conditions of the spatial domain) is defined by \(\{\partial \Omega \times I \}\), whereas the end faces are defined by \(\{\Omega \times \partial I\}\). To capture the entire space-time cylinder, we thus need to define the boundary conditions on all surfaces. Specifically, we notice that \(\partial I = \{0,T\}\) with \(\partial I = \partial I_0 \cup \partial I_T\), wherein the boundary part \(\partial I_0=\{0\}\) constitutes the well-known initial condition. Specific conditions at \(t=T\) (the end time face of the space-time cylinder) are rather unusual in the classical literature and much less (if at all) discussed; we refer the reader to Sect. 3.4 for a further discussion.
We thus specify the integration over the surfaces by
Definition 5
(Integration over the surface of the space-time cylinder) The integration of an integrable scalar-valued function \(f=f(\varvec{x},t)\) over the lateral surface of the space-time cylinder is given by \({\mathcal {B}}_{\partial \Omega }[f]:=\int _I\int _{\partial \Omega } f \ \textrm{d}A \ \textrm{d}t,\)
and the integration of a different integrable scalar-valued function \(p=p (\varvec{x},t)\) over the end faces of the space-time cylinder is given by \({\mathcal {B}}_{\partial I}[p]:=\int _\Omega \int _{\partial I} p \ \textrm{d}s \ \textrm{d}V.\)
Here, \(\textrm{d}A\) indicates the integration over the spatial surface \(\partial \Omega \) and \(\textrm{d}s\) indicates the integration over the temporal surface \(\partial I\). We specify the respective boundary integration for some scalar-valued function g on the Dirichlet and Neumann boundaries for the three governing variables \(\varvec{u}\), \(\theta \) and \(\varvec{\alpha }\) by
and the initial condition
The results of the operators for surface integration, i.e., \({\mathcal {B}}_{\partial \Omega }\) and \({\mathcal {B}}_{\partial I}\), have the same physical units.
The notation for a function \(f=f(\varvec{x},t)\) and its space-time boundary conditions, i.e., spatial boundary conditions as well as temporal conditions, is as follows:
where \(\varvec{x}_\partial = \varvec{x}\in \partial \Omega \). Moreover, \(f^\star \) is also used to indicate quantities which depend on the current material behavior and which thus need to be modeled. Hence, they are treated as given data, which are frozen variables during variations, i.e., \(\delta f^\star \equiv 0\). The space-time cylinder including all boundary conditions is schematically plotted in Fig. 2.
Let us assume that the function f is a traction and thus has the physical SI unit N/m\(^2\). Then, \({\mathcal {B}}_{\partial \Omega }[f]\) has the physical unit N\(\times \)s. The time dimension is, in contrast to the space dimension, scalar-valued, and the fundamental theorem of calculus implies that \(\int _{\partial I}p \ \textrm{d}s \equiv p(\varvec{x},T) - p(\varvec{x},0)\). Consequently, \(\int _\Omega p(\varvec{x},t) \ \textrm{d}V\) already needs to yield the final unit of \({\mathcal {B}}_{\partial I}[p]\), which must be identical to that of \({\mathcal {B}}_{\partial \Omega }[f]\). Thus, p has the physical unit \(\frac{\textrm{kg}}{{\textrm{m}}^3}\frac{\textrm{m}}{\textrm{s}}\), i.e., the unit of a volume-specific linear momentum.
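This unit bookkeeping can be checked mechanically by tracking exponents over the SI base units; a small self-contained sketch (the helper names are ours, introduced only for this illustration):

```python
from collections import Counter

# Represent a physical unit as exponents over the SI base units kg, m, s.
def unit(kg=0, m=0, s=0):
    return Counter(kg=kg, m=m, s=s)

def mul(*units):
    """Multiply units by adding exponents; drop zero exponents."""
    out = Counter()
    for u in units:
        out.update(u)
    return {k: v for k, v in out.items() if v}

newton = unit(kg=1, m=1, s=-2)                    # N = kg*m/s^2
traction = mul(newton, unit(m=-2))                # f in N/m^2
B_lateral = mul(traction, unit(m=2), unit(s=1))   # * dA * dt -> N*s

p = mul(unit(kg=1, m=-3), unit(m=1, s=-1))        # kg/m^3 * m/s
B_endface = mul(p, unit(m=3))                     # * dV

# Both surface operators carry the same unit: N*s == kg*m/s
assert B_lateral == B_endface == {'kg': 1, 'm': 1, 's': -1}
```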
2.5.2 Evolution of \(\Lambda \) within the space-time cylinder
It remains to provide the second quantity of interest: the evolution of \(\Lambda \) in the space-time cylinder, i.e., we need to describe how the thermodynamic state variables evolve over space and time. As we will show in the next section, this can be holistically achieved by an extended Hamilton principle.
2.6 Fundamental balance laws of thermodynamics
The starting point for the derivation of the extended Hamilton principle are the fundamental balance laws of thermodynamics specified via the first and second law of thermodynamics.
The first law of thermodynamics (balance of energy) is given by
with the following energy functionals:
Here, \(\rho \) is the density with the unit \([\rho ] = \text {kg}/\text {m}^3\), \(\Psi \) is the free energy density with the unit \([\Psi ] = \text {J}/\text {m}^3\), \(\varvec{\varepsilon }=\nabla ^\text {sym}\varvec{u}\equiv \frac{1}{2}(\nabla \varvec{u}+ \varvec{u}\nabla )\) is the dimensionless linearized strain, and s is the entropy density with the unit \([s] = \text {J}/(\text {K}\,\text {m}^3)\). Quantities marked with \((\bullet )^\star \) are externally given and thus do not follow from any stationarity condition but need to be modeled. In a practical sense, this means that they are treated as constant when computing a variation. Body forces are denoted by \(\varvec{b}^\star \) with the unit \([\varvec{b}^\star ] = \text {N}/\text {m}^3\), the traction vector is \(\varvec{t}^\star \) with the unit \([\varvec{t}^\star ] = \text {N}/\text {m}^2\), the external heat source is \(h^\star \) with the unit \([h^\star ] = \text {W}/\text {m}^3\), and the heat flux is denoted by \(\varvec{q}^\star \) with the unit \([\varvec{q}^\star ] = \text {W}/\text {m}^2\). Furthermore, \(\varvec{n}\) is the outward pointing unit normal vector. For later use, let us introduce the mechanical stress \(\varvec{\sigma }\), which is related to the free energy density by
with the physical unit \([\varvec{\sigma }]=\text {N}/\text {m}^2\) and the heat flux vector \(\varvec{q}^\star \), which is modeled by Fourier’s law as
with the heat conductivity \(\omega \), having the unit \([\omega ]=\text {W}/(\text {m}\,\text {K})\). Later, from Sect. 4 on, \(\theta ^\star \) will be replaced by \(\theta \) in all stationarity conditions that follow.
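Fourier's law in its usual form, \(\varvec{q} = -\omega \nabla \theta \), can be sketched in one dimension with a finite difference gradient (conductivity value and temperature profile are illustrative, not from the paper):

```python
import numpy as np

omega = 0.5                          # heat conductivity, W/(m K), illustrative
x = np.linspace(0.0, 1.0, 201)       # m
theta = 300.0 + 50.0 * x             # K; linear profile, gradient = 50 K/m

q = -omega * np.gradient(theta, x)   # Fourier's law, 1D: q = -omega * dtheta/dx

# heat flows from the hot end (x = 1) towards the cold end (x = 0)
assert np.allclose(q, -25.0)         # W/m^2, constant for a linear profile
```

The negative sign realizes exactly the convention of Remark 1: heat flows against the temperature gradient, i.e., into the colder region.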
Remark 1
The thermal work \({\mathcal {Q}}\) results as time integration of the thermal power \(\displaystyle {\mathcal {P}}^\textrm{therm} := \int _\Omega h^\star \ \textrm{d}V - \int _{\partial \Omega _{\textrm{N},\theta }}\varvec{n}\cdot \varvec{q}^\star \ \textrm{d}A\). The negative sign ensures an increase of energy in the body when the heat flux vector \(\varvec{q}^\star \) points into the body.
The second law of thermodynamics (balance of entropy) is given by
where \(\dot{{\mathcal {S}}} := \frac{\textrm{d}}{\textrm{dt}}{{\mathcal {S}}}\), with the following temperature-specific energy functionals:
Remark 2
Irreversible processes are characterized by a strictly positive entropy production \(\Delta ^\textrm{s}>0\), while \(\Delta ^\textrm{s}=0\) holds for reversible processes. Moreover, from \(\Delta ^\textrm{s}\ge 0\), it follows that \(\dot{{\mathcal {S}}}\ge 0\), which is the well-known Clausius-Duhem inequality. The division of \(h^\star \) and \(\varvec{q}^\star \) by the temperature ensures (13) to be a one-form (also known by the German term Pfaff’sche Form); e.g., for reversible processes we have path-independence of the integrals in \({\mathcal {S}}(T)={\mathcal {S}}(0) + \int _I{\mathcal {Q}}^\textrm{s}_\textrm{s} \ \textrm{d}t + \int _I{\mathcal {Q}}^\textrm{s}_\textrm{f} \ \textrm{d}t\), i.e., the value of \({\mathcal {S}}(T)\) can be determined by evaluating the antiderivatives of \({\mathcal {Q}}^\textrm{s}_\textrm{s}\) and \({\mathcal {Q}}^\textrm{s}_\textrm{f}\) on I and thus depends only on the values at the beginning and the end of the time interval.
2.7 Free energy and total potentials
Next, we continue with the free energy:
Definition 6
(Free energy) Let us define the free energy as
Then, the first and second law of thermodynamics can be combined by elimination of \(h^\star \) in \({\mathcal {Q}}\) in (10), and rearranging yields
It is worth mentioning that an indefinite time integral is present in the last three terms in (15) due to the definition of the thermal work \({\mathcal {Q}}\), cf. [46] for more details.
Furthermore, we come to
Definition 7
(Total potential) We define the total potential in the space-time cylinder by
where \({\mathcal {B}}_Q[\varvec{b}^\star \cdot \varvec{u}]\) and \( {\mathcal {B}}_{\partial \Omega _{\textrm{N},\varvec{u}}}[\varvec{t}^\star \cdot \varvec{u}]\) are the external contributions to the total potential.
The thermodynamic state of each material point \(\varvec{x}\in \Omega \) in the space-time cylinder is expressed by the momentum vector \(\textrm{d}m \, \dot{\varvec{u}}\), where \(\textrm{d}m\) is the mass of the material point. Then, the evolution of the thermodynamic state can be related to a scalar when integrating the momentum vector in the space-time domain. This scalar is referred to as the action. The position within the spatial domain is uniquely related to the time-dependent displacement vector \(\varvec{u}\). This motivates the definition of the action for each material point by
with the fixed start and end points \(\varvec{u}_0:=\varvec{u}(0)\) and \(\varvec{u}_T:=\varvec{u}(T)\), respectively. Due to the time dependence, the displacement increment can be rewritten as \(\textrm{d}\varvec{u}= \dot{\varvec{u}}\, \textrm{d}t\). Consequently, we obtain for the action of the total body
The action can be interpreted as integration of \(\rho \Vert \dot{\varvec{u}}\Vert ^2\) over the volume of the space-time cylinder, i.e., \({\mathcal {A}}= {\mathcal {B}}_Q[\rho \Vert \dot{\varvec{u}}\Vert ^2]\). Later, we will explicitly work with the velocity variable defined as \(\varvec{v}:= \dot{\varvec{u}}\).
Similarly to the boundary terms for the total potential, we arrive at
Definition 8
(Total action) We define the total action by expanding \({\mathcal {A}}\) by boundary terms at the end faces of the space-time cylinder. This results in
where \(\rho \dot{\varvec{u}}^\star \) is the volume-specific linear momentum with the prescribed velocity field \(\dot{\varvec{u}}^\star \) at the end faces of the space-time cylinder, i.e., at \(t=0\) and \(t=T\), respectively.
Remark 3
We notice that the time boundary integral
includes the difference of both temporal boundaries, namely at \(t=0\) and \(t=T\). Specifically, the possibility of prescribing a condition for \(\dot{\varvec{u}}\) at \(t=T\) is rather unusual, since we prescribe the values of the solution \(\dot{\varvec{u}}\) at the end time; we refer the reader to Sect. 3 and Remark 6. As we will demonstrate, this indeed provides more flexibility for the formulation of boundary conditions. Further details on this alternative understanding are also provided in our discussion in Sect. 3.4.
3 The principle of least action and boundary conditions
The purpose of this section is the mechanical analysis of the standard procedure of extremal principles in mechanics. First, we demonstrate the special requirements on the test functions. Then, we show how modified functionals yield stationarity conditions that also include the boundary/initial conditions in weak form. This observation motivates us to formulate boundary conditions on the entire space-time cylinder (including end time conditions) which do not pose any requirements on the test functions. However, we notice that the entire procedure presented also remains valid when classical functionals are used.
3.1 The classical principle of least action
Let us start with the principle of least action, which is usually stated for systems of rigid particles in standard form as \({\mathcal {A}} = \int _I \left( \frac{1}{2} m {\dot{q}}^2 - \frac{1}{2} d q^2 \right) \textrm{d}t \rightarrow \text {stat},\)
with the coordinate \(q=q(t)\), the mass m, and the positive spring constant d. The stationarity condition reads \(\delta {\mathcal {A}} = \left[ m \dot{q}\, \delta q \right] _0^T - \int _I \left( m \ddot{q} + d q \right) \delta q \ \textrm{d}t = 0,\)
which is fulfilled if and only if
-
1.
the (strong form) differential equation \(m \ddot{q} + d q=0\) holds true and
-
2.
the test function \(\delta q\) vanishes for \(t=0\) and \(t=T\).
For solving the differential equation in Condition 1, two conditions are required, which might be initial or boundary conditions on \(t=\{0,T\}\), i.e., we need to assume information on the position and/or velocity at the start and/or end point. Usually, the position and velocity at \(t=0\) are prescribed, i.e., initial conditions are chosen. These conditions are commonly used since they are immediately plausible in view of temporal causality, where information propagates in the direction of positive time. However, other choices are also admissible. Furthermore, the test function \(\delta q\) needs to fulfill the constraints on the boundary of the time interval in Condition 2. This property is usually neglected in numerical implementations.
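For the reader's convenience, the action of this single-degree-of-freedom system and its variation can be written out explicitly (a textbook reconstruction in standard notation, consistent with Conditions 1 and 2 above):

```latex
\[
  \mathcal{A}[q] \;=\; \int_0^T \Big( \tfrac{1}{2}\, m\, \dot{q}^2 \;-\; \tfrac{1}{2}\, d\, q^2 \Big)\, \mathrm{d}t ,
  \qquad
  \delta \mathcal{A} \;=\; \big[\, m\, \dot{q}\, \delta q \,\big]_0^T \;-\; \int_0^T \big( m\, \ddot{q} + d\, q \big)\, \delta q \, \mathrm{d}t \;=\; 0 .
\]
```

The integral term yields the differential equation of Condition 1, while the boundary term vanishes precisely when the test function satisfies Condition 2.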
3.2 The principle of least action including initial conditions
The fundamental idea is to postulate an extended action functional whose stationarity condition also yields the usual initial condition as discussed above. To achieve this goal, we model the extended action functional by
where \(q_0^\star \) and \(\dot{q}_0^\star \) denote some prescribed initial conditions for the position and velocity, respectively, and \({\tilde{c}}\) is a penalty parameter with the physical unit \([{\tilde{c}}]=\text {kg}/\text {s}\). The stationarity condition is computed as
Then, the following conditions must be fulfilled for stationarity:
-
1.
the differential equation \(m \ddot{q} + d q=0\) holds true,
-
2.
the test function \(\delta q\) vanishes for \(t=T\),
-
3.
the initial position equals the prescribed one by \(q=q_0^\star \) at \(t=0\) and
-
4.
the initial velocity equals the prescribed one by \(\dot{q}=\dot{q}_0^\star \) at \(t=0\).
By using this extended functional, the initial conditions arise naturally in the weak form within the stationarity condition. However, the constraint on the test function at the end time (Condition 2) is still present.
3.3 The principle of virtual work including initial conditions
The principle of virtual work is related to the principle of stationary action in Sect. 3.1 by
For solving this problem, the finite element method can be applied. However, it remains to model the boundary and/or initial conditions. The goal is to propose an extended functional from whose stationarity conditions the boundary and/or initial conditions directly follow in weak form. To this end, we postulate, in analogy to Sect. 3.2, the following extended total potential:
with the initial displacement and velocity fields \(\varvec{u}_0^\star \) and \(\dot{\varvec{u}}_0^\star \), respectively, and a penalization parameter \(c_{\varvec{u}}\) with the physical unit \([c_{\varvec{u}}]=\text {kg}/(\text {m}^3\text {s})\). The stationarity condition is computed as
Let us perform integration by parts in space and time. This gives us
where we made use of the constitutive relation \(\varvec{\sigma }=\partial \Psi /\partial \varvec{\varepsilon }\). Let us collect the terms with identical integrals and those evaluated at the same time points. Furthermore, we make use of Gauss’ theorem. Then, we obtain
Due to the independence of the respective integrals, the terms above must vanish individually. We thus identify the following conditions to hold true for stationarity of \({\mathcal {L}}^\textrm{VW,IC}\):
-
1.
the balance of linear momentum \(\rho \ddot{\varvec{u}}-\nabla \cdot \varvec{\sigma }-\varvec{b}^\star ={\varvec{0}} \ \forall (\varvec{x},t)\in \Omega \times I\);
-
2.
the spatial boundary condition \(\varvec{n}\cdot \varvec{\sigma }=\varvec{t}^\star \ \forall (\varvec{x},t)\in \partial \Omega _{\textrm{N},\varvec{u}}\times I \);
-
3.
the test function \(\delta \varvec{u}\) vanishes for \(t=T\);
-
4.
the initial velocity equals the prescribed velocity \(\dot{\varvec{u}}=\dot{\varvec{u}}_0^\star \ \forall (\varvec{x},t)\in \Omega \times \{0\}\);
-
5.
the initial displacement field equals the prescribed one \(\varvec{u}=\varvec{u}_0^\star \ \forall (\varvec{x},t)\in \Omega \times \{0\}\).
In conclusion, the formulation of extended functionals enables us to obtain boundary and/or initial conditions in weak form. The strategy of extended functionals is thus equivalent to the usual way of modeling the boundary and/or initial conditions. However, the usage of extended functionals provides the advantage of obtaining the weak form from the stationarity condition whose solution in a space-time setting directly fulfills the prescribed conditions. As we will see later, a different extension can be formulated which is free of any constraints on the test functions (such as Condition 3 above); as a consequence, the velocities at the initial time and end time then have to be prescribed, for which a physical interpretation will be given below.
3.4 Boundary term at the end time face of the space-time cylinder
Regarding our goal to derive the entire boundary value problem in a holistic manner from a stationarity principle, it is unsatisfactory that the boundary conditions have to be modeled in addition to the postulation of the functional. Furthermore, restrictions on the test functions are questionable when the stationarity conditions are interpreted in the holistic perspective of the space-time domain. Thus, we aim at formulating an extended functional which is free of these requirements while simultaneously yielding the required boundary conditions in weak form. As we will see in Sect. 4, this goal can be achieved. However, the boundary conditions are then given in terms of temporal derivatives, i.e., the velocities, which need to be known both at the beginning and at the end of the time interval I. Since this is rather unusual compared to classical approaches, we discuss the end time space-time boundary condition in more detail.
As already mentioned, this condition is an end time condition of the temporal part. To the best of our knowledge, an extensive review of the space-time literature, e.g., [6, 41, 54, 55], did not reveal the use of end time conditions. However, discussions can be found, for instance, in [41][p. 340] and [81]. There, the end-time condition is argued to contradict causality (information is transported in the positive time direction) since in this case solutions depend on both the past and the future. This perspective is motivated by interpreting time as a unidirectional coordinate in which physical processes evolve. In this work, we interpret the space-time domain as a holistic four-dimensional object in which time does not have a specified direction. We will see that our formulation as a stationarity problem yields fewer restrictions on the test functions while simultaneously the boundary conditions appear naturally in weak form also on \(\partial I_T = \{T\}\). This, however, does not contradict causality of the time domain since time is considered as a unity. We will furthermore see that only temporal Neumann boundary conditions, i.e., in terms of the velocities, appear at the start and end time faces of the space-time cylinder; see also Remark 6. In turn, classical initial conditions for the function in terms of Dirichlet boundary conditions are no longer necessary here. Interestingly, this condition arises only due to the fact that we start from the viewpoint of a stationarity problem, namely Hamilton's principle. In classical formulations that begin with the strong form, the end time condition is not required, which, in combination with the aforementioned argument of causality, may be one of the reasons why this setting has not been employed in the literature so far.
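The two-point character of velocity conditions can already be illustrated for the oscillator \(m \ddot{q} + d q = 0\) from Sect. 3.1. The following sketch is our own illustration, not taken from the paper, and all parameter values (m, d, T, the prescribed velocities) are assumed: it assembles a central-difference discretization over the whole time interval as one linear system, prescribes \(\dot{q}\) at both \(t=0\) and \(t=T\), and recovers the initial position \(q(0)\) as a computed result.

```python
import numpy as np

# Illustrative sketch (not from the paper): the oscillator m*q'' + d*q = 0,
# solved as a TWO-POINT problem with velocity data at t=0 and t=T instead of
# classical initial conditions q(0), q'(0). All parameter values are assumed.
m, d = 1.0, 4.0                 # mass and spring constant; omega = sqrt(d/m) = 2
T, M = 1.0, 1000                # time horizon and number of time steps
dt = T / M
v0, vT = 1.0, 0.5               # prescribed velocities at both time faces

# central differences in the interior, assembled over the WHOLE interval
# ("holistic" view): m*(q_{k+1} - 2*q_k + q_{k-1})/dt^2 + d*q_k = 0
A = np.zeros((M + 1, M + 1))
r = np.zeros(M + 1)
for k in range(1, M):
    A[k, k - 1] = A[k, k + 1] = m / dt**2
    A[k, k] = -2.0 * m / dt**2 + d
# velocity conditions via one-sided differences at t=0 and t=T
A[0, 0], A[0, 1] = -1.0 / dt, 1.0 / dt; r[0] = v0
A[M, M - 1], A[M, M] = -1.0 / dt, 1.0 / dt; r[M] = vT
q = np.linalg.solve(A, r)

# the initial position q(0) is an OUTCOME of the solve, not a datum; compare
# with the closed form q(t) = a*cos(w t) + b*sin(w t) fixed by q'(0) and q'(T)
w = np.sqrt(d / m)
b = v0 / w
a = (b * w * np.cos(w * T) - vT) / (w * np.sin(w * T))  # requires sin(w*T) != 0
print(q[0], a)  # both approximate the emergent initial position
```

Note that the two-point problem is solvable whenever \(\sin(\omega T) \ne 0\); the closed-form comparison makes explicit that the initial position is uniquely determined by the two velocity data, in line with the holistic reading of the time interval.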
4 The Hamilton principle as guidance through the space-time cylinder
The Hamilton principle can be interpreted as a generalization of the well-known principle of least action for rigid bodies as presented above.
4.1 Extended Hamilton functional and Euler–Lagrange equations
Replacing one of the kinetic energies in (19) by (15) results in the extended Hamilton functional, cf. [46], denoted here as \({\mathcal {H}}^\text {EX}\):
For a holistic representation, we add additional terms to include Dirichlet boundary conditions and initial data in weak form, analogously to Sect. 3.3. These parts are collected in the functional \({\mathcal {H}}^\text {BC}\), defined as
with the penalty constants \(c_{\varvec{u}}\), \(c_{\theta }\) and \(c_{\varvec{\alpha }}\) with the physical units \([c_{\varvec{u}}]=\text {kg}/(\text {m}^3\text {s})\), \([c_{\varvec{\alpha }}]=\text {kg}/(\text {m}\text {s})\) and \([c_{\theta }]=\text {kg}/(\text {m}\text {s}\text {K}^2)\), respectively, and the heat capacity \(\kappa \) with the physical unit \([\kappa ]=\text {J}/(\text {m}^3\text {K})\). The parameter \(\tilde{r}\) and its physical unit depend on the respective model. As demonstrated later, it might be a viscosity (see Sect. 6), a yield stress (see Sect. 7) or an energetic threshold value for microstructure evolution (see Sect. 8). It is worth mentioning that the integrals evaluated at \(t=0\) set the initial conditions for the temperature and the internal variable, whereas the integral \({\mathcal {B}}_{\partial I}[\rho \dot{\varvec{u}}^\star \cdot \varvec{u}]\) sets the initial and end time velocities, which are the temporal derivatives of the displacements. Then, we obtain
which consists of the physically motivated parts in \({\mathcal {H}}^\text {EX}\) and the mathematically motivated parts in \({\mathcal {H}}^\text {BC}\). The weak imposition of Dirichlet values is often used in discontinuous Galerkin (dG) methods, e.g., [23, 76], and goes back to [64]; see also [3]. In Sect. 9, we employ dG in time, where we prescribe initial conditions in an analogous fashion.
The Hamilton functional is complemented by the functional \({\mathcal {C}}\), which accounts for model-specific constraints. More details will be provided later. It is worth mentioning that the boundary term \({\mathcal {B}}_{\partial I}[\rho \dot{\varvec{u}}^\star \cdot \varvec{u}]\) results in boundary conditions at both ends of the time interval, i.e., at \(t=\{0,T\}\). However, other options are possible, as discussed in Sect. 3. These different options do not restrict the derivations which follow. In our opinion, the functional as proposed in (31) yields the physically and mathematically most convincing formulation.
The Hamilton principle postulates stationarity of the Hamilton functional \({\mathcal {H}}\) in (31) with respect to all variables, i.e.
Proposition 1
(Total derivative) Let \({\mathcal {F}}:\Omega \subset X\rightarrow Y\) be a mapping, where \(X:=X_1\times X_2 \times X_3\), here with \(\varvec{u}\in X_1, \varvec{\alpha }\in X_2, \theta \in X_3\), X and Y are normed vector spaces, and \(\Omega \) is open in X. If \({\mathcal {F}}\) is differentiable at a point \(\varvec{a}\in \Omega \), the n (above \(n=3\) with \(\varvec{u}\), \(\varvec{\alpha }\) and \(\theta \)) partial derivatives \(\partial _j {\mathcal {F}}[\varvec{a}](\delta \varvec{a})\) for \(1\le j\le n\) exist, and for all directions \(\delta \varvec{a}=(\delta \varvec{a}_1,\ldots ,\delta \varvec{a}_n)\in X\) it holds that
A proof can be found in [21, Theorem 7.1-2, p. 455]. In (4.1), we identify \({\mathcal {F}}:={\mathcal {H}}\) and \(\varvec{a}=(\varvec{a}_1,\varvec{a}_2,\varvec{a}_3) = (\varvec{u},\varvec{\alpha },\theta )\) and lastly \(\delta \varvec{a}_1:= \delta \varvec{u}\), \(\delta \varvec{a}_2 :=\delta \varvec{\alpha }\) and \(\delta \varvec{a}_3 := \delta \theta \).
Due to the independence of the variations of the displacements, the internal variables, and the temperature, i.e., \(\delta \varvec{u}\), \(\delta \varvec{\alpha }\) and \(\delta \theta \), respectively, the necessary conditions are evaluated independently:
System (32) constitutes the well-known Euler-Lagrange equations which are mathematically the first-order necessary conditions for optimality.
Definition 9
(Gâteaux derivative) Let X and Y be normed vector spaces and \(\Omega \) be open in X. Note that in this paper \(Y={\mathbb {R}}\). Let \({\mathcal {F}}:\Omega \subset X\rightarrow Y\) be a mapping, let \(\varvec{a}\in \Omega \) and let \(\delta \varvec{a}\in X\) be a non-zero vector, and assume that \(\epsilon \in {\mathbb {R}}\mapsto {\mathcal {F}}[\varvec{a}+\epsilon \delta \varvec{a}]\in Y\) is differentiable at \(\epsilon = 0\). Then, \({\mathcal {F}}\) has at \(\varvec{a}\in \Omega \) the Gâteaux derivative in the direction \(\delta \varvec{a}\), i.e., a directional derivative, which is defined by
with \(\delta {\mathcal {F}}[\varvec{a}](\delta \varvec{a}) \in Y\).
Proposition 2
The Gâteaux derivatives of \({\mathcal {H}}\) are computed to be
where \(\delta \varvec{u},\delta \varvec{\alpha },\delta \theta \) are admissible directions in suitable function spaces. The condition (33)\(_{1}\) is also known as the principle of virtual work and is, in mathematical terms, the weak form of the balance of linear momentum. In contrast to classical formulations, we obtain here the boundary conditions for the velocities in weak form at the beginning and end time: \(- \int _I \int _\Omega \rho \dot{\varvec{u}} \cdot \delta \dot{\varvec{u}} \ \textrm{d}V \ \textrm{d}t + \int _{\partial I}\int _\Omega \rho \dot{\varvec{u}}^\star \cdot \delta \varvec{u}\ \textrm{d}V \ \textrm{d}s = -\int _{\partial I}\int _\Omega \rho (\dot{\varvec{u}}-\dot{\varvec{u}}^\star )\cdot \delta \varvec{u}\ \textrm{d}V\ \textrm{d}s + \int _I\int _\Omega \rho \ddot{\varvec{u}}\cdot \delta \varvec{u}\ \textrm{d}V \ \textrm{d}t = -\int _\Omega \rho (\dot{\varvec{u}}-\dot{\varvec{u}}^\star )\cdot \delta \varvec{u}\ \textrm{d}V|_{t=0} - \int _\Omega \rho (\dot{\varvec{u}}-\dot{\varvec{u}}^\star )\cdot \delta \varvec{u}\ \textrm{d}V|_{t=T} + \int _I\int _\Omega \rho \ddot{\varvec{u}}\cdot \delta \varvec{u}\ \textrm{d}V \ \textrm{d}t\). Here, we used mass conservation, i.e., \(\dot{\rho }=0\). Furthermore, we obtain the Dirichlet boundary conditions for the displacements, the internal variable and the temperature on the respective boundaries \(\partial \Omega _{\textrm{D},\varvec{u}}\), \(\partial \Omega _{\textrm{D},\varvec{\alpha }}\) and \(\partial \Omega _{\textrm{D},\theta }\).
For the computation of the variations in (33), we start from (31) and employ Definition 9. Then, we differentiate in the points \(\varvec{a}:=\varvec{u}\), \(\varvec{a}:=\varvec{\alpha }\) and \(\varvec{a}:=\theta \) into the directions \(\delta \varvec{a}=\delta \varvec{u}\), \(\delta \varvec{a}=\delta \varvec{\alpha }\) and \(\delta \varvec{a}=\delta \theta \), subsequently. The first equation is obtained from \(\delta _{\varvec{u}} {\mathcal {H}}[\varvec{u},\varvec{\alpha },\theta ] = \delta _{\varvec{u}}\int _I ({\mathcal {K}}[\varvec{u}] - {\mathcal {G}}[\varvec{u},\varvec{\alpha },\theta ]) \ \textrm{d}t + \delta _{\varvec{u}}{\mathcal {B}}_Q[\varvec{b}^\star \cdot \varvec{u}] + \delta _{\varvec{u}}{\mathcal {B}}_{\partial \Omega _{\textrm{D},\varvec{u}}}[\frac{c_{\varvec{u}}}{2} \Vert \varvec{u}- \varvec{u}^\star \Vert ^2] + \delta _{\varvec{u}}{\mathcal {B}}_{\partial \Omega _{\textrm{N},\varvec{u}}}[\varvec{t}^\star \cdot \varvec{u}] - \delta _{\varvec{u}}{\mathcal {B}}_{\partial I}[\rho \dot{\varvec{u}}^\star \cdot \varvec{u}] \), where we notice that \(\varvec{\varepsilon }:=\varvec{\varepsilon }(\varvec{u})\). The second equation is obtained from \(\delta _{\varvec{\alpha }} {\mathcal {H}}[\varvec{u},\varvec{\alpha },\theta ] = \delta _{\varvec{\alpha }} \big ( \int _I( - {\mathcal {G}}[\varvec{u},\varvec{\alpha },\theta ] - \int _\Omega D \ \textrm{d}V ) \ \textrm{d}t + {\mathcal {B}}_{\partial \Omega _{\textrm{D},\varvec{\alpha }}}[\frac{c_{\varvec{\alpha }}}{2}\Vert \varvec{\alpha }-\varvec{\alpha }^\star \Vert ^2] + \int _\Omega \frac{\tilde{r}}{2} \Vert \varvec{\alpha }-\varvec{\alpha }^\star _0\Vert ^2 \ \textrm{d}V |_{t=0} - {\mathcal {C}}[\varvec{\alpha }] \big ) \) with the definition for the dissipated energy \(D:=\int \Delta ^{\textrm{s},\star }\theta \ \textrm{d}t\) which is modeled as \(D=\varvec{p}^{\textrm{diss},\star }\cdot \varvec{\alpha }\). 
The third equation is obtained from \(\delta _\theta {\mathcal {H}}[\varvec{u},\varvec{\alpha },\theta ]=\delta _\theta \big (\int _I(-{\mathcal {G}}[\varvec{u},\varvec{\alpha },\theta ]- \int _\Omega \int \dot{\theta }s \ \textrm{d}t \ \textrm{d}V - \int _\Omega \int \frac{1}{\theta } \varvec{q}^\star \cdot \nabla \theta \ \textrm{d}t \ \textrm{d}V - \int _\Omega \int \Delta ^{\textrm{s},\star }\theta \ \textrm{d}t \ \textrm{d}V ) \ \textrm{d}t +{\mathcal {B}}_{\partial \Omega _{\textrm{D},\theta }}[\frac{c_\theta }{2}(\theta -\theta _0^\star )^2] + \int _\Omega \frac{\kappa }{2}(\theta -\theta _0^\star )^2\ \textrm{d}V |_{t=0} \big )\) with the definition of the heat capacity \(\kappa := - \theta \partial ^2\Psi /\partial \theta ^2\). Here, we made use of the constitutive equation for the entropy \(s=-\partial \Psi /\partial \theta \) and modeled the entropy production by \(\Delta ^{\textrm{s},\star }=-\partial \Psi /\partial \varvec{\alpha }\cdot \dot{\varvec{\alpha }}/\theta \). For details, we refer to [46].
Remark 4
Modeling the dissipated energy \(D=\int \Delta ^{\textrm{s},\star }\theta \ \textrm{d}t\) independently of \(\Delta ^{\textrm{s},\star }\) violates the fundamental theorem of calculus. However, this is intended: physically observed path-dependence is achieved by doing so. More details on the derivation of (33) are provided in [46].
Proposition 3
(Isothermal and quasi-static problem statement) For the isothermal and quasi-static case, the stationarity conditions in (33) reduce to
where \(\delta \varvec{u}\) and \(\delta \varvec{\alpha }\) are admissible functions from suitable function spaces.
Proof
Follows immediately from Proposition 2 when there are no temperature variations, i.e., \({\delta }\theta \equiv 0\) and no velocity variations, i.e., \({\delta \dot{\varvec{u}}}\equiv {\varvec{0}}\). \(\square \)
The non-conserving forces are usually derived from a so-called dissipation function \(\Delta ^\text {diss}\) by
The non-conserving forces \(\varvec{p}^{\textrm{diss},\star }\) are related to changes in the microstructure of the materials. They need to be modeled depending on the experimentally observed material behavior. Consequently, they are treated as ‘external’ forces which are held constant during variations (indicated by \((\bullet )^\star \)). In the same sense, the displacement-dependent tractions, i.e., \(\varvec{t}^\star =\varvec{t}^\star (\varvec{u})\), are kept frozen during the variation. As a counter-example, in fluid-structure interaction the tractions are included in the variation since they serve as ‘internal’ forces acting within the fluid-structure system.
Let us investigate (34)\(_2\) further: integration by parts of the second integral results in
with the normal vector \(\varvec{n}\) on the surface of the body indicated by \(\partial \Omega = \partial \Omega _{\textrm{D},\varvec{\alpha }}\cup \partial \Omega _{\textrm{N},\varvec{\alpha }}\), \(\partial \Omega _{\textrm{D},\varvec{\alpha }}\cap \partial \Omega _{\textrm{N},\varvec{\alpha }}=\emptyset \), where the surface with Dirichlet boundary conditions is denoted by \(\partial \Omega _{\textrm{D},\varvec{\alpha }}\) and the one with Neumann conditions by \(\partial \Omega _{\textrm{N},\varvec{\alpha }}\), respectively. Since the volume and the surface of the body may be chosen independently, the stationarity condition demands the volume and surface integrals to vanish individually, which is mathematically well-known as the fundamental lemma of the calculus of variations, e.g., [21].Footnote 3 Consequently, we obtain from (34)\(_2\) the strong form
with the Neumann condition
and the Dirichlet and initial conditions
Defining the (extended) thermodynamic driving force by
the local condition (37) can be written in short as
which is (an extended version of) the famous Biot equation [12, 13]. Considering the functional dependency \(\Delta ^\text {diss}=\Delta ^\text {diss}({\dot{\varvec{\alpha }}})\), we recognize that the dissipation function can be alternatively formulated in terms of \(\varvec{p}\) by employing a Legendre transformation: the thermodynamic flux \(\dot{\varvec{\alpha }}\) is replaced by the thermodynamic force \(\varvec{p}\).
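In symbols, the Biot-type relation and the mentioned Legendre transformation may be summarized as follows (a standard reconstruction in the notation of convex analysis; the paper's exact normalization and smoothness assumptions may differ):

```latex
\[
  \boldsymbol{p} \;=\; \frac{\partial \Delta^{\mathrm{diss}}}{\partial \dot{\boldsymbol{\alpha}}} ,
  \qquad
  \big(\Delta^{\mathrm{diss}}\big)^{*}(\boldsymbol{p})
    \;=\; \sup_{\dot{\boldsymbol{\alpha}}} \Big\{ \boldsymbol{p}\cdot\dot{\boldsymbol{\alpha}}
      \;-\; \Delta^{\mathrm{diss}}(\dot{\boldsymbol{\alpha}}) \Big\} ,
  \qquad
  \dot{\boldsymbol{\alpha}} \;=\; \frac{\partial \big(\Delta^{\mathrm{diss}}\big)^{*}}{\partial \boldsymbol{p}} ,
\]
```

i.e., the Legendre(–Fenchel) transform exchanges the roles of the thermodynamic flux \(\dot{\boldsymbol{\alpha}}\) and the thermodynamic force \(\boldsymbol{p}\); for non-smooth dissipation functions, the derivatives are to be read as subdifferentials.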
Proposition 4
The stationarity condition for the temperature in (33)\(_3\) provides a condition that is equivalent to, but not identical with, the usual weak form of the heat conduction equation.
Proof
Let us redefine the test function for the temperature by \(\delta \dot{\tilde{\theta }}=\frac{1}{\theta }\delta \theta \) and the bracket term in (33)\(_3\) as
Then, by fulfilling the Dirichlet boundary conditions and the initial conditions for \(\theta \), (33)\(_3\) can be rewritten as
Volume and surface integrals can be chosen independently. Thus, the two terms have to vanish separately. Then, the surface integral yields the well-known Neumann boundary condition for adiabatic bodies \(\varvec{n}\cdot \varvec{q}^\star = 0\). Furthermore, let us integrate the inner indefinite integral in the first term by parts. This yields
which yields the equivalent expression
such that both terms, again, have to vanish individually. From the first term, we recognize that indeed \(a=0\) and consequently, \(\dot{a}=0\). The condition \(a=0\) constitutes the usual strong form of the heat conduction equation. \(\square \)
Remark 5
The formulations presented here can also be given for a finite deformation setting. However, for improved readability for both communities, i.e., engineering and applied mathematics, we restrict ourselves to the linearized kinematics in this contribution. More details for finite kinematics can be found, e.g., in [46].
4.2 Specific material models
Having the space-time cylinder from Sect. 2, the considerations in Sect. 3, and the Hamilton principle from the current section at hand, we consider in the following four specific material models to illustrate our proposed paradigm of space-time material modeling. The key procedure relies on four ingredients, namely specifying
-
1.
the material-dependent internal variables \(\varvec{\alpha }\) needed to describe the thermodynamic state;
-
2.
the free energy density \(\Psi \);
-
3.
the dissipation function \(\Delta ^\text {diss}\);
-
4.
experimentally observed constraints for the evolution of the internal variables.
These then allow us to model the deformation as well as the microstructure and temperature evolution of a specific material. To be more precise, we consider, as examples, elastic wave propagation, visco-elasticity, elasto-plasticity with hardening, and a gradient-enhanced damage model. Hereby, we cover i) an elastic model including self-heating, ii) rate-dependence (visco-elasticity), iii) rate-independence (elasto-plasticity with hardening), and iv) non-local material modeling (gradient-enhanced damage model). After deriving the individual models, we demonstrate how these models, and thus the variational modeling technique, are related to a mathematical perspective. To be more precise, we showcase how the variational modeling can be interpreted by formulating space-time function spaces and convex sets, which opens the field for a new paradigm to account for space-time weak formulations of variational material modeling.
5 Elastic wave propagation
As first example, we consider elastic wave propagation and a reduced set of thermodynamic state variables. Our plan is as follows: first, we introduce constitutive laws and parameters. Afterwards, we introduce a mixed first-order-in-time system for the displacements. Then, we introduce the function spaces and finally the weak formulation in space-time format. The corresponding strong forms are summarized in the Appendix.
5.1 Modeling
For the elastic wave equation, the reduced set of thermodynamic state variables \(\Lambda ^\textrm{e}:=\{\varvec{u},\theta \}\) is used. We assume an isotropic thermal strain and postulate the free energy as
with the elasticity tensor \({\mathbb {C}}\in {\mathbb {R}}^{d\times d \times d \times d}\) where each component has the unit \([{\mathbb {C}}_{ijkl}]=\text {N}/\text {m}^2\), the thermal expansion coefficient \(\alpha ^\textrm{t}\) with unit \([\alpha ^\textrm{t}]=1/\text {K}\), and the identity matrix \(\varvec{I}\). Then, the term for elastic self-heating reads
Since we deal with an elastic problem with the reduced set of thermodynamic state variables \(\Lambda ^\textrm{e}\), only the stationarity conditions for the displacements in (33)\(_1\) and for the temperature in (33)\(_3\) are considered for the elastic wave propagation.
5.2 Mixed system
For numerical purposes,Footnote 4 it is beneficial to introduce a mixed formulation in which the velocity \(\varvec{v}:=\dot{\varvec{u}}\) is introduced as an independent variable. Such a mixed system fits exactly with the thermodynamical considerations in Sect. 4, and we shall see how the mathematical space-time formulation and the Hamilton principle complement each other in an intriguing way. To begin, the Hamilton functional in (31) has to be reformulated accordingly and we arrive at
Definition 10
(Mixed Hamilton functional) The mixed Hamilton functional \({\mathcal {H}}^\text {m}\) is defined as
where the first two terms \({\mathcal {B}}_Q[\rho \dot{\varvec{u}}\cdot \varvec{v}]-{\mathcal {B}}_Q[\tfrac{1}{2}\rho \Vert \varvec{v}\Vert ^2]\) replace the time integral of the kinetic energy. Consequently, they reduce to the previously used formulation for the unmixed system as \(\int _I{\mathcal {K}}\ \textrm{d}t \equiv {\mathcal {B}}_Q[\tfrac{1}{2}\rho \Vert \dot{\varvec{u}}\Vert ^2]\) when \(\varvec{v}=\dot{\varvec{u}}\). The term \({\mathcal {B}}_Q[\rho \dot{\varvec{u}}\cdot \varvec{v}]\) couples the displacements and the velocity, and the term \({\mathcal {B}}_{\partial I}[\rho \varvec{v}^\star \cdot \varvec{u}]\) accounts for the boundary condition at the end faces of the space-time cylinder, which is now formulated in terms of the prescribed velocity \(\varvec{v}^\star \).
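The claimed reduction can be verified pointwise: inserting \(\varvec{v}=\dot{\varvec{u}}\) into the integrand of the first two terms gives

```latex
\[
  \rho\, \dot{\boldsymbol{u}} \cdot \boldsymbol{v}
    \;-\; \tfrac{1}{2}\rho \Vert \boldsymbol{v} \Vert^2
  \;\Big|_{\boldsymbol{v} = \dot{\boldsymbol{u}}}
  \;=\; \rho \Vert \dot{\boldsymbol{u}} \Vert^2
    \;-\; \tfrac{1}{2}\rho \Vert \dot{\boldsymbol{u}} \Vert^2
  \;=\; \tfrac{1}{2}\rho \Vert \dot{\boldsymbol{u}} \Vert^2 ,
\]
```

so that \({\mathcal {B}}_Q[\rho \dot{\varvec{u}}\cdot \varvec{v}]-{\mathcal {B}}_Q[\tfrac{1}{2}\rho \Vert \varvec{v}\Vert ^2]\) indeed coincides with the time integral of the kinetic energy density \(\tfrac{1}{2}\rho \Vert \dot{\varvec{u}}\Vert ^2\) over the space-time cylinder.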
Consistently, the stationarity of the mixed Hamilton functional \({\mathcal {H}}^\text {m}\) in (46) now reads
Proposition 5
The Gâteaux derivatives of \({\mathcal {H}}^\text {m}\) with respect to the displacements \(\varvec{u}\) and the velocity \(\varvec{v}\) are computed to be
and
Integration by parts of (48) yields
where we used \(\varvec{\sigma }=\partial \Psi /\partial \varvec{\varepsilon }\). We finally notice that the variation of \({\mathcal {H}}^\text {m}\) with respect to the internal variables \(\varvec{\alpha }\) and the temperature \(\theta \) remain the same as those of \({\mathcal {H}}\).
Remark 6
The velocity boundary conditions \(\varvec{v}^\star _0\) and \(\varvec{v}^\star _T\) provoke two implications:
-
1.
The initial displacement field \(\varvec{u}=\varvec{u}(\varvec{x},0)\) is not prescribed but results from the mechanical equilibrium with the spatial boundary conditions \(\varvec{b}^\star |_{t=0}\), \(\varvec{t}^\star |_{t=0}\), and \(\varvec{u}^\star _{\partial \Omega _{\textrm{D},\varvec{u}}}|_{t=0}\). From a numerical perspective, this means that the displacement field and the boundary conditions are also consistent at \(t=0\), as they should be.
-
2.
The displacement field depends on the velocity at the end time \(\varvec{v}^\star _T\). Assuming that the end time point T is chosen such that the system is in mechanical equilibrium, \(\varvec{v}^\star _T\) can be computed from the condition that the acceleration term vanishes: \(\rho \dot{\varvec{v}}={\varvec{0}}\) at \(t=T\). From an application point of view, this means that \(\varvec{v}^\star _T\) is not explicitly prescribed, but replaced by a boundary value problem at \(t=T\) with the condition \(\rho \dot{\varvec{v}}={\varvec{0}}\). Since this end-time problem is coupled to the displacement and velocity fields of the entire time interval I, solving for the fields can only be performed in a holistic way.
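In symbols, the replacement described in item 2 can be written as follows (our paraphrase, combining the condition \(\rho \dot{\varvec{v}}={\varvec{0}}\) with the balance of linear momentum):

```latex
\[
  \rho\, \dot{\boldsymbol{v}} = \boldsymbol{0} \ \ \text{at } t = T
  \quad \Longrightarrow \quad
  - \nabla \cdot \boldsymbol{\sigma} = \boldsymbol{b}^\star
  \ \ \text{in } \Omega \times \{T\} ,
\]
```

i.e., a static boundary value problem at the end time face whose solution is coupled, through the space-time system, to the fields on the whole interval \(I\).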
5.3 Numerical experiment
Let us investigate a numerical experiment for our rather unusual temporal boundary conditions at \(t=\{0,T\}\). To this end, we make use of equations (49) and (50), which simplify for the one-dimensional case and a linear elastic material with \(\sigma =E \varepsilon = E u^\prime \) and Young’s modulus \(E>0\) to the strong form
Here, we assume homogeneous surface tractions, i.e., \(t^\star = 0\). For a numerical solution, we employ a central finite difference scheme for the discretization in space and an implicit Euler scheme for the discretization in time. Then, the differential equations (51)\(_1\) and (51)\(_2\) transform to the algebraic equations
with the spatial increment \(\Delta x\) and the temporal increment \(\Delta t\). Here, the subscript i refers to the node number and the superscript m to the current time point. As an example, we compute a rotating bar (with rotating coordinate system) which is supported at \(x=0\) mm, resulting in \(u(0,t)=0\) mm and \(v(0,t)=0\) mm/s. The body force is given by \(b^\star =b^\star (x)= \rho \omega ^2 x\) with the angular velocity \(\omega =\dot{\omega } t\) for constant angular acceleration \(\dot{\omega }=\text {const}\). We choose 11 spatial elements of equal length, with a total bar length of \(l=2000\) mm. The time interval is 0.5 ms. The material parameters and the spatial and temporal increments used for the numerical experiment are collected in Table 1.
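The holistic space-time solve described above can be sketched in a few lines. The following is an illustrative reimplementation, not the authors' code; since Table 1 is not reproduced here, the material values, the angular acceleration, and the time-step count below are assumed:

```python
import numpy as np

# Sketch of the holistic space-time solve for the 1D rotating bar:
# implicit Euler in time, central differences in space, velocities prescribed
# at BOTH t=0 and t=T. All parameter values are assumed (steel-like numbers).
E, rho = 2.1e5, 7.85e-9        # N/mm^2 and t/mm^3 (assumed values)
L, T = 2000.0, 5.0e-4          # bar length [mm], time horizon [s]
N, M = 12, 40                  # 11 spatial elements, 40 time steps (assumed)
dx, dt = L / (N - 1), T / M
x = np.linspace(0.0, L, N)
om_dot = 1.0e3                 # angular acceleration [1/s^2] (assumed)
vT = 0.5 * x / L               # case 2: linear end-time velocity [mm/s]

n = N * (M + 1)                # index maps into one global vector of (u, v)
iu = lambda i, m: m * N + i
iv = lambda i, m: n + m * N + i
A, r = np.zeros((2 * n, 2 * n)), np.zeros(2 * n)
rows = iter(range(2 * n))
def eq(cols, vals, rhs=0.0):
    k = next(rows)
    A[k, cols] = vals
    r[k] = rhs

for m in range(1, M + 1):
    b = rho * (om_dot * m * dt) ** 2 * x       # body force b* = rho*omega^2*x
    for i in range(1, N):                      # coupling (u^m-u^{m-1})/dt = v^m
        eq([iu(i, m), iu(i, m - 1), iv(i, m)], [1/dt, -1/dt, -1.0])
    for i in range(1, N - 1):                  # momentum balance, interior nodes
        eq([iv(i, m), iv(i, m - 1), iu(i-1, m), iu(i, m), iu(i+1, m)],
           [rho/dt, -rho/dt, -E/dx**2, 2*E/dx**2, -E/dx**2], b[i])
    eq([iu(N-1, m), iu(N-2, m)], [1.0, -1.0])  # traction-free end: u'(L) = 0
for m in range(M + 1):                         # support at x=0: u = v = 0
    eq([iu(0, m)], [1.0]); eq([iv(0, m)], [1.0])
for i in range(1, N):                          # temporal data on BOTH time faces
    eq([iv(i, 0)], [1.0], 0.0)                 # v(x,0) = 0
    eq([iv(i, M)], [1.0], vT[i])               # v(x,T) prescribed

sol = np.linalg.lstsq(A, r, rcond=None)[0]     # one holistic space-time solve
u0 = sol[iu(0, 0):iu(0, 0) + N]                # initial displacements: a RESULT
print("initial tip displacement [mm]:", u0[-1])
```

Counting rows confirms the system is square: the initial displacement field carries exactly the unknowns that the end-time velocity data constrain, which mirrors the determinacy discussion of Remark 6.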
We perform numerical experiments for two cases. The first case is chosen with homogeneous velocities at the beginning and at the end of the time interval I, i.e., \(v(x,0)=v_0^\star (x)=v(x,T)=v_T^\star (x)=0\) mm/s. The resulting displacements and velocities over space and time are plotted in Fig. 3 at the top. The second case also has homogeneous initial velocities, but the spatially linear velocity \(v(x,T)=v_T^\star (x)=0.5 \frac{x}{l}\) mm/s at the end of the time interval. It is obvious that the different boundary conditions for the velocity provoke very different deformation and velocity fields in space and time. For the first case, the initial deformation, which is a computation result and not an initial condition, is almost a quadratic function in space. This initial deformation reduces under the influence of inertia effects and the body force. The ‘back swinging’ then results in the prescribed homogeneous velocity at the end of the time interval.
For the second case, the initial deformation is much more complex. Then, again influenced by inertia and the body force, waves in space and time evolve which reflect at the spatial boundaries and are affected by the space- and time-dependent body force. During the course of time, the initial ‘wavy’ fields homogenize due to the complex interplay until the prescribed linear velocity at the end of the time interval is eventually reached.
It is worth mentioning that the initial deformation field is always an automatic outcome of this space-time formulation of the wave equation as derived in Sect. 5.2, with a well-posedness justification in Sect. 9.6. This alternative formulation might also be much more beneficial for application examples than the classical formulation with initial conditions for both the displacements and the velocities. For instance, path-dependent optimization problems might be solvable in a much easier way using our formulation: when ‘history’ is treated in a holistic way, the dependence on the entire path is always present and thus all variables are intrinsically optimized in this regard.
5.4 Function spaces and weak formulation
In this subsection, we design function spaces and we use them to mathematically specify the previous variations in the weak formulation. Formally, weak forms were already introduced before.
Displacements We begin with the vector-valued displacement variable \(\varvec{u}\). From the strong form in (102), we identify non-homogeneous Dirichlet conditions \(\varvec{u}^*\) on the boundary \(\partial \Omega _{\textrm{D},\varvec{u}}\). To this end, we define for the spatial part (not yet involving time)
where the index 0 indicates that we are dealing with homogeneous Dirichlet boundary conditions.Footnote 5 Specifically, \(V_0^u\) serves as space for the test functions, namely the displacement variations \(\delta \varvec{u}\). By a suitable extension of the Dirichlet boundary data (e.g., [15, Chapter II, §2] or [56, Chapter 2]), the trial space for \(\varvec{u}\) is \(V^u:=\{\varvec{u}^*|_{\partial \Omega _{\textrm{D},\varvec{u}}} + V_0^u\}\). In time, we assume \(L^2\) regularity, as is usually done [26, 52, 56]. This means
Again, the index 0 in \(X^u_0\) indicates that the spatial space \(V_0^u\) has homogeneous Dirichlet boundary data.
Velocities Next, we discuss the vector-valued velocity variable \(\varvec{v}\). Since we have no spatial boundary conditions and due to regularity, we set for the spatial part
which is well-known in this context, e.g., [6]. The time-dependent Sobolev space then reads \(X^v := L^2(I,V^v)\) such that
In the space-time solution, the goal is to determine both variables in a way such that ([6, 33] or [88, Chapter 5])
Here, \((V_0^{u})^*\) is the dual space associated to \(V_0^{u}\). The dual space consists of all linear, bounded mappings from the space \(V_0^{u}\) into the real numbers \({\mathbb {R}}\), i.e., \((V_0^{u})^* := L(V_0^u,{\mathbb {R}})\). For further information on dual spaces, we refer the reader to functional analysis textbooks, e.g., [17, 21, 85].
Temperature It remains to introduce the function spaces for the parabolic heat equation with the scalar-valued temperature variable \(\theta \). For such parabolic evolution problems, we refer the reader again to [52, 56, 88] and we employ for the test space
and for the trial space \(V^{\theta }:=\{\theta ^\star |_{\partial \Omega _{\textrm{D},\theta }} + V_0^{\theta }\}\). In the space-time context, this means
where \((V_0^{\theta })^*\) is the dual space of \(V_0^{\theta }\).
Imposing boundary conditions Before we state the weak formulations, in view of the space-time cylinder geometry (see Sect. 2), let us notice that we treat spatial and temporal boundary conditions differently from now on. We recall that in Sect. 4 the extended Hamilton principle yielded all conditions in a weak form. As described (and well-known) in, for example [76], Dirichlet boundary conditions can be prescribed in a functional framework either in the weak form or by building the (essential) boundary conditions into the function space. In the following, we choose the latter option, in which spatial Dirichlet boundary conditions are built into the function spaces. The temporal conditions will be imposed weakly and will appear explicitly as right-hand side data in the following weak formulations.
Weak formulation We can now state the weak formulation. We define abstract forms for the equations (left-hand side) and the data (right-hand side). Depending on whether the governing partial differential equation is linear or nonlinear, we employ different abstract notations for the left-hand sides. For linear partial differential equations with a function space X, we use the notation \({\mathcal {A}}(w,\delta w)\) with \({\mathcal {A}}:X\times X \rightarrow {\mathbb {R}}\). This is a so-called bilinear form, which is linear in both arguments. Therein, \(w\in X\) is the trial function and \(\delta w \in X\) is the test function. For nonlinear partial differential equations, we use instead the notation \({\mathcal {A}}[w](\delta w)\), which is a so-called semi-linear form, being nonlinear in the first argument and linear in the second argument. The right-hand sides are denoted by \({\mathcal {L}}(\delta w)\) with \({\mathcal {L}}:X\rightarrow {\mathbb {R}}\) and they only depend on problem data, but not on the solution function.
With these definitions and notation, the space-time mixed system weak formulation reads:
Proposition 6
(Weak formulation) Find \(\varvec{v}\in X^v\) and \(\varvec{u}\in X^u\) such thatFootnote 6
where the semi-linear form is given by
and the right-hand side functional is given by
The weak form of the temperature equation reads: find \(\theta \in X^{\theta }\) such that
with
and
Remark 7
In both the displacement-velocity system and the temperature equation, the temporal boundary conditions are prescribed in a weak sense. For the former, we have two conditions, namely on \(\partial I=\{0,T\}\) since the system is second order in time. In the temperature equation, we only have one initial condition on the bottom surface of the space-time cylinder, namely for \(t=0\). We also refer the reader to Sect. 2 and specifically to Sect. 3.4.
Remark 8
Sometimes, the mixed formulation for the elastic wave equation (102) raises difficulties for the design of the correct function spaces. Starting from the strong formulation, it is not immediately clear which variable is determined in which equation. Having our derivations at hand, we automatically arrive at the correct descriptions. In (102), the first equation determines \(\varvec{u}\) (not \(\varvec{v}\)) and the second equation determines \(\varvec{v}\) (not \(\varvec{u}\)). From the Hamilton principle, we obtain immediately the correct variations for each equation in the weak formulation \({\mathcal {A}}_u[\varvec{u},\varvec{v}](\delta \varvec{u},\delta \varvec{v})\) and in particular the first equation determines \(\varvec{u}\in X^u\) and the second equation determines \(\varvec{v}\in X^v\). Specifically, the first equation is tested with \(\delta \varvec{u}\in X^u_0\) and the second equation is tested with \(\delta \varvec{v}\in X^v\) which automatically yields the correct boundary conditions for each equation through their respective function spaces.
6 Visco-elasticity
In this section, we consider our second model. The procedure is as before: first, we introduce the mechanical model. In a second step, based on the extended Hamilton principle, we provide the space-time formulation.
6.1 Modeling
For a visco-elastic material, the internal variable \(\varvec{\alpha }\rightarrow \varvec{\varepsilon }^\textrm{v}\) is introduced by \(\varvec{\varepsilon }= \varvec{\varepsilon }^\textrm{e}+ \varvec{\varepsilon }^\textrm{v}\) where \(\varvec{\varepsilon }^\textrm{e}\) is the elastic part of the total strain \(\varvec{\varepsilon }\). Then, the free energy density reads
again with the elasticity tensor of order four \({\mathbb {C}}\in {\mathbb {R}}^{d\times d\times d\times d}\).
The evolution of viscous strains is rate-dependent. For such a behavior, a dissipation function homogeneous of order two has to be employed, cf. [46], such that
is chosen with the scalar-valued viscosity \(\eta >0\) with the unit \([\eta ]=\text {J}\text {s}/\text {m}^3\).
The evolution of viscous strains is volume preserving which is expressed by \(\varvec{I}:\varvec{\varepsilon }^\textrm{v}=0\) from which the constraint function \(c_{\varvec{\varepsilon }^\textrm{v}}:=\lambda ^\textrm{v}(\varvec{I}:\dot{\varvec{\varepsilon }}^\textrm{v})\) is formulated. Here, we used the identity matrix \(\varvec{I}\) and a Lagrange multiplier \(\lambda ^\textrm{v}\in {\mathbb {R}}\) with the unit \([\lambda ^\textrm{v}]=\text {N}/\text {m}^2\). The constraint force \(\varvec{p}^\textrm{c}_{\varvec{\varepsilon }^\textrm{v}} := \partial c_{\varvec{\varepsilon }^\textrm{v}}/\partial \dot{\varvec{\varepsilon }}^\textrm{v}\) enables us to account for constraints formulated in the rates of the internal variables by defining the constraints functional to be
Then, (33)\(_2\) transforms to
Evaluation of the local condition results in
and where we used
Double contraction of (57) (i.e., taking the Frobenius scalar product, see Sect. 2.1) and using the constraint \(\varvec{I}:\dot{\varvec{\varepsilon }}^\textrm{v}=0\) allows us to compute
Then, (57) reads as
with the stress deviator \(\textrm{dev}\varvec{\sigma }:= \varvec{\sigma }- \frac{1}{3} \textrm{tr}\varvec{\sigma }\, \varvec{I}\).
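The resulting viscous evolution can be sketched numerically. The code below assumes the law \(\eta \,\dot{\varvec{\varepsilon }}^\textrm{v}= \textrm{dev}\,\varvec{\sigma }\) suggested by (58) and uses an explicit Euler update with hypothetical parameters; it illustrates that volume preservation \(\varvec{I}:\varvec{\varepsilon }^\textrm{v}=0\) holds automatically because the deviator is trace-free.

```python
import numpy as np

# Sketch of the viscous strain evolution eta * deps_v/dt = dev(sigma),
# as suggested by (58); explicit Euler, for illustration only.
eta, dt = 100.0, 0.01            # hypothetical viscosity [J s/m^3], step size

def dev(sig):
    """Stress deviator: dev(sigma) = sigma - tr(sigma)/3 * I."""
    return sig - np.trace(sig) / 3.0 * np.eye(3)

sigma = np.array([[3.0, 1.0, 0.0],     # hypothetical (frozen) stress state
                  [1.0, 2.0, 0.5],
                  [0.0, 0.5, 1.0]])
eps_v = np.zeros((3, 3))
for _ in range(10):
    eps_v = eps_v + dt * dev(sigma) / eta

# Volume preservation I : eps_v = tr(eps_v) = 0 holds by construction,
# since the deviator is trace-free:
print("tr(eps_v) =", np.trace(eps_v))
```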
6.2 Function spaces and weak formulation
In an analogous way to Sect. 5.4, we now formulate function spaces and thereby specify mathematically the variations for the weak formulations for visco-elasticity.
Displacements, velocities, temperature According to our derivations in Sect. 6.1 and with the previous introduction of the mixed-order system, we deal with four variables, namely \(\varvec{u},\varvec{v},\varvec{\varepsilon }^\textrm{v}\) and \(\theta \), and we need four function spaces. For the displacement variable \(\varvec{u}\) and its variation (test function) \(\delta \varvec{u}\) we use \(X^u\) and \(X^u_0\) from before; see Sect. 5.4. For the velocity variable \(\varvec{v}\) and its variation \(\delta \varvec{v}\), we employ \(X^v\). For the temperature equation, we have \(\theta \in X^{\theta }\) and \(\delta \theta \in X_0^{\theta }\).
Viscous strain The new variable in comparison to the previous section is the viscous strain for which we define
Proposition 7
(Weak formulation) Find \(\varvec{v}\in X^v\) and \(\varvec{u}\in X^u\) such that
where the semi-linear form is given by
and the right-hand side functional is given by
The weak form of the temperature equation reads: find \(\theta \in X^{\theta }\) such that
with
and
The weak form of the viscous strain reads: find \(\varvec{\varepsilon }^\textrm{v}\in X^{\varepsilon }\) such that
with
and
where we notice from before that the initial viscous strain field is set to \(\varvec{\varepsilon }^{\textrm{v},\star }_0(\varvec{x})={\varvec{0}}\). Here, we inserted the Lagrange parameter \(\lambda ^\textrm{v}=\frac{1}{3}\varvec{\sigma }:\varvec{I}\) such that the stress deviator \(\textrm{dev}\varvec{\sigma }\) appears.
7 Elasto-plasticity with hardening
In this section, we show how to obtain an elasto-plastic model with hardening from our Hamilton principle. In the space-time formulation, the biggest difference to before is that one of the solution variables is subject to an inequality constraint, which requires working with closed convex sets rather than with linear function spaces.
Plasticity is a complex phenomenon, both from the mechanical and the mathematical side [7, 26, 34, 37, 58, 82] (and many references cited therein). In this work, we do not have the most general plasticity formulation in mind [82][Chapter II], but rather plastic materials with hardening. First, we notice that the simplest models are Hencky-type models [38] known as linear-elastic plastic materials, also known as linear-elastic perfectly plastic type, [82][p. 66] and [26][Chapter V]. For further comments on the Hencky model, we refer the reader for instance to [58][p. 240]. The corresponding rate-dependent formulation is known as the Prandtl-Reuss model. These models rather belong to nonlinear elasticity [82][Chapter I, p. 77, Remark 4.2], and can therefore still be described with the help of usual Sobolev spaces. Therein, the displacements still have \(H^1\) regularity and the stresses have \(L^2\) regularity, while in the most general case of plasticity, the stresses still satisfy \(L^2\) regularity, but \(\varvec{u}\) loses \(H^1\) regularity [82][Chapter I, p. 58]. In [82][Chapter I, Section 3.3], several models in the linear-elastic and nonlinear elastic regime are discussed, inter alia models with hardening, while [82][Chapter II] addresses general plasticity models with a final existence result established in [82][Chapter II, Section 8]. Therein, BD (bounded deformation) function spaces [59] are introduced that enlarge the class of admissible trial and test functions to measures and specifically allow for discontinuities in the displacement field. This is also plausible from a mechanical viewpoint due to strain localization.
In order to stay within usual Sobolev spaces while remaining mechanically reasonable, we proceed with plasticity with hardening, where the plastic evolution is given by an additional differential equation. Due to hardening, these models can still be described in classical function spaces.
7.1 Modeling
Similarly to the modeling of visco-elastic materials, the internal variable is a possibly permanent part of the strain denoted by \(\varvec{\varepsilon }^\textrm{p}\in {\mathbb {R}}^{d\times d}_\textrm{sym}\). Consequently, we decompose \(\varvec{\varepsilon }^\textrm{e}=\varvec{\varepsilon }-\varvec{\varepsilon }^\textrm{p}\) which allows to define the free energy density as
with some monotonically increasing hardening potential \(\Psi ^\textrm{h}\) depending on the hardening variable \(\alpha ^\textrm{h}\in {\mathbb {R}}_+\). Thus, the internal variable is in this case \(\varvec{\alpha }=\{\varvec{\varepsilon }^\textrm{p},\alpha ^\textrm{h}\}\). In contrast to the rate-dependent evolution of the microstructure for visco-elastic materials, the plastic strains evolve in a rate-independent way. This demands to modify the ansatz for the dissipation function to be homogeneous of order one (instead of being homogeneous of order two for the rate-dependent evolution). Hence, we use
with some dissipation parameter \(\sigma _\textrm{Y}>0\) which will be specified later. Since the plastic strains evolve in a volume-preserving fashion as the viscous strains do, we introduce the analogous constraint functional in this case. Furthermore, we follow the standard assumption for the kinematic relation between the plastic strain and the hardening variable, i.e., \(\dot{\alpha }^\textrm{h}=\Vert \dot{\varvec{\varepsilon }}^\textrm{p}\Vert \). We thus introduce the constraint functional for plasticity with hardening by
with the constraint forces \(\varvec{p}^\textrm{c}_{\varvec{\varepsilon }^\textrm{p}} := \partial c_{\varvec{\varepsilon }^\textrm{p}}/\partial \dot{\varvec{\varepsilon }}^\textrm{p}\) and \(p^\textrm{c}_{\alpha ^\textrm{h}}:=\partial c_{\varvec{\varepsilon }^\textrm{p}}/\partial \dot{\alpha }^\textrm{h}\), respectively. The constraint forces result from the constraint function \(c_{\varvec{\varepsilon }^\textrm{p}}:= \lambda ^\textrm{p}_1(\dot{\varvec{\varepsilon }}^\textrm{p}:\varvec{I})+\lambda ^\textrm{p}_2 (\dot{\alpha }^\textrm{h}-\Vert \dot{\varvec{\varepsilon }}^\textrm{p}\Vert )\). This is analogous to how the non-conservative forces result from the dissipation function. In the constraint function, two Lagrange parameters \((\lambda ^\textrm{p}_1,\lambda ^\textrm{p}_2)\in {\mathbb {R}}\times {\mathbb {R}}\), both with the unit \([\lambda ^\textrm{p}_1]=[\lambda ^\textrm{p}_2]=\text {N}/\text {m}^2\), appear. Then, the stationarity condition (33)\(_2\) is evaluated with respect to the plastic strain and the hardening variable. This reads
It is worth mentioning that here a subdifferential appears:
The local evolution of (64) results in the differential inclusionFootnote 7 and the algebraic expression
where \(\varvec{\sigma }={\mathbb {C}}:(\varvec{\varepsilon }-\varvec{\varepsilon }^\textrm{p})\) and \(\lambda ^\textrm{p}_2 < 0\) due to the monotonicity of \(\Psi ^\textrm{h}\). If we consider the case of microstructure evolution, i.e., \(\dot{\varvec{\varepsilon }}^\textrm{p}\not = {\varvec{0}}\), and we double contract both sides with the identity tensor \(\varvec{I}\), we can insert the constraint of volume preservation \(\varvec{I}:\dot{\varvec{\varepsilon }}^\textrm{p}=0\) such that the first Lagrange multiplier is computed to be
Then, we can rearrange (66) yielding
where we define the consistency parameter \(\rho ^\textrm{p}:= \frac{\Vert \dot{\varvec{\varepsilon }}^\textrm{p}\Vert }{\sigma _\textrm{Y}-\lambda ^\textrm{p}_2}\) and specifically, we have \(\rho ^\textrm{p}\ge 0\) with the unit \([\rho ^\textrm{p}]=\text {m}^2/\text {N}\text {s}\). Furthermore, from the constraint for the evolution of \(\alpha ^\textrm{h}\), we obtain
which closes the set of evolution equations for the plastic strain and the hardening variable.
The material model for plasticity is not complete yet: an indicator is missing which separates the frozen microstructure from the evolving one. Therefore, it is beneficial to perform the Legendre transformation of the dissipation function as mentioned in Sect. 4: it is formulated as
where we made use that the thermodynamic driving force \(\varvec{p}\) accounting for the constraint of volume preservation is exactly the stress deviator \(\textrm{dev}\varvec{\sigma }\). Let us insert (68) into (70) which gives
Consequently, the Legendre transform of the dissipation function results in an indicator function. We thus define the (von Mises) yield function (see also [82][p. 65])
which serves as condition for the separation of elastic and plastic material behavior: plastic strains are frozen, i.e. \(\dot{\varvec{\varepsilon }}^\textrm{p}={\varvec{0}}\), as long as \(\Phi ^\textrm{p} < 0\), but may evolve, i.e. \(\dot{\varvec{\varepsilon }}^\textrm{p}\not ={\varvec{0}}\), when \(\Phi ^\textrm{p}=0\). We recognize that the dissipation parameter \(\sigma _\textrm{Y}\) is indeed the deviator norm of the initial yield stress (tensor for \(\alpha ^\textrm{h}=0\)) with the unit \([\sigma _\textrm{Y}]=\text {N}/\text {m}^2\) and an appropriate modeling of \(\Psi ^\textrm{h}\) accounts for hardening.
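The separating role of the yield function can be illustrated with the classical elastic-predictor/plastic-corrector logic. The sketch below uses the simplified perfectly plastic variant \(\Phi ^\textrm{p}= \Vert \textrm{dev}\,\varvec{\sigma }\Vert - \sigma _\textrm{Y}\) (no hardening) with a radial return of the trial deviator onto the yield surface; the material parameters are hypothetical.

```python
import numpy as np

# Sketch of the elastic-predictor / plastic-corrector logic implied by the
# yield function: simplified perfectly plastic case
#   Phi_p(sigma) = ||dev sigma|| - sigma_Y   (no hardening),
# with a radial return of the trial deviator onto the yield surface.
mu, sigma_Y = 80.0e3, 250.0       # hypothetical shear modulus and yield stress

def dev(a):
    return a - np.trace(a) / 3.0 * np.eye(3)

def return_map(eps, eps_p):
    """One local stress update: frozen if Phi_p < 0, radial return if not."""
    s_trial = 2.0 * mu * dev(eps - eps_p)          # trial stress deviator
    norm_s = np.linalg.norm(s_trial)               # Frobenius norm
    phi = norm_s - sigma_Y
    if phi <= 0.0:                                 # elastic: eps_p frozen
        return s_trial, eps_p
    n = s_trial / norm_s                           # flow direction
    d_gamma = phi / (2.0 * mu)                     # consistency increment
    return s_trial - 2.0 * mu * d_gamma * n, eps_p + d_gamma * n

eps = np.diag([2e-3, -1e-3, -1e-3])               # volume-preserving strain
s, eps_p = return_map(eps, np.zeros((3, 3)))
print("||dev sigma|| after return:", np.linalg.norm(s))
```

After the corrector step the stress lies exactly on the yield surface, and the plastic strain increment is trace-free, consistent with the volume-preservation constraint.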
It is worth mentioning that an alternative route to obtain the same set of modeling equations is provided by first defining the yield function and then postulating that the plastic strains evolve in shortest direction onto the yield surface with unknown ‘length’ \(\rho ^\textrm{p}\). The shortest direction is provided by \(\partial \Phi ^\textrm{p}/\partial \textrm{dev}\varvec{\sigma }\) due to its orthogonality with \(\partial \textrm{dev}\varvec{\sigma }/\partial p\) with some parameter p which parameterizes the yield surface \(\Phi ^\textrm{p}(\textrm{dev}\varvec{\sigma }(p)) = \Phi ^\textrm{p}(p) = 0\). The benefit of using Hamilton’s principle is easily identified: no driving forces have to be defined that fulfill the constraints automatically. This is a simple task for plasticity modeling but it turns out to be challenging for more complex material behaviors. Using the Hamilton principle, instead, yields equations that fulfill the constraints in terms of the rates of the internal variables identically. Therefrom, corresponding driving forces can be defined which directly account for the constraints. In case of plasticity, these driving forces are given by the stress deviator.
7.2 Differential inclusions versus variational inequalities
In order to prepare the link between the previous section and the next section, we briefly describe the relationship between differential inclusions and variational inequalities. For rigorous developments, we refer the reader to [2], specifically Chapter 0 therein. In both formulations, we have variations \(\delta \varvec{\varepsilon }^\textrm{p}\). However, these variations are restricted. In differential inclusions, we are given a problem statement of the form
For instance, from (66)\(_1\), we have
Now, the variations are directions in the Gâteaux derivative, but due to the differential inclusion, these derivatives are only subdifferentials. These subdifferentials result from a closed convex subset which provides the basis for the variational inequality formulation. Let us now define such a closed convex set, see e.g., [26, 50, 51], denoted by K,
Then, the variations \(\delta \varvec{\varepsilon }^\textrm{p}\) are restricted to convex sets and convex combinations, i.e.,
for \(0\le \epsilon \le 1\). In the weak form, only variations from K are taken into account, i.e., Gâteaux derivatives as defined in Def. 9. This then yields abstract formulations of the form: find \(\varvec{\varepsilon }^\textrm{p}\in K\) such that
in which specifically in the argument of the test function, still the trial function appears; see e.g., [50, Chapter 3]. A concise derivation for the obstacle problem can be found in [86, Section 4.4.3.4].
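The abstract structure 'find a solution in the convex set K such that the inequality holds for all variations in K' can be made concrete with the obstacle problem mentioned above. Below is a minimal projected Gauss-Seidel sketch for the 1D obstacle problem with entirely hypothetical data: the unconstrained Gauss-Seidel update is projected back onto the convex set in every sweep.

```python
import numpy as np

# Minimal sketch of a variational inequality: the 1D obstacle problem
#   find u in K = {v in H^1_0(0,1): v >= psi} with
#   (u', v' - u') >= (g, v - u) for all v in K,
# solved by projected Gauss-Seidel on a finite-difference grid.
n = 50
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
psi = 0.2 - 2.0 * (x - 0.5) ** 2       # hypothetical obstacle
g = -1.0 * np.ones_like(x)             # hypothetical downward load
u = np.maximum(np.zeros_like(x), psi)  # feasible start
u[0] = u[-1] = 0.0

for _ in range(2000):                   # projected Gauss-Seidel sweeps
    for i in range(1, n):
        # unconstrained Gauss-Seidel update for -u'' = g ...
        ui = 0.5 * (u[i - 1] + u[i + 1] + h * h * g[i])
        # ... projected back onto the convex set K
        u[i] = max(ui, psi[i])

print("contact nodes:", int(np.sum(u[1:-1] <= psi[1:-1] + 1e-12)))
```

The iterate stays feasible by construction, and on the contact region the solution coincides with the obstacle, mirroring the complementarity structure of the plastic problem.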
7.3 Function spaces and weak formulation
In this subsection, we derive the space-time weak formulation. The principal variables \(\varvec{u}\) and \(\dot{\varvec{\varepsilon }}^\textrm{p}\) still appear in equations, but the norm of \(\dot{\varvec{\varepsilon }}^\textrm{p}\) is subject to a constraint, yielding complementarity conditions as we have seen just before. We notice that extensive mathematical work can be found in the standard reference [26].
Displacements, velocities, temperature In total, we deal with four variables, namely \(\varvec{u},\varvec{v},\varvec{\varepsilon }^\textrm{p}\) and \(\theta \), and we need four solution sets. For the displacement variable, the velocities, and the temperature there is no change in the function spaces to before and we refer the reader to Sect. 5.4.
Plastic strain For the plastic strain \(\varvec{\varepsilon }^\textrm{p}\), we assume sufficient regularity,Footnote 8 and we define the convex setFootnote 9
where \(\varvec{\sigma }={\mathbb {C}}:(\varvec{\varepsilon }-\varvec{\varepsilon }^\textrm{p})\). Therein, the space \(L^2(\Omega )_{\textrm{sym}}^{d\times d}:=L^2(\Omega ,{\mathbb {R}}^{d\times d}_\textrm{sym})\) means that each component of \(\varvec{\varepsilon }^\textrm{p}\) is an \(L^2\) function and furthermore \(\varvec{\varepsilon }^\textrm{p}\) is symmetric. Due to the inequality constraint \(\Phi ^\textrm{p}(\varvec{\sigma }) \le 0\), the object \(V^p\) is no longer a linear function space, but only a closed convex set.Footnote 10
Weak formulation The weak formulation corresponding to Problem 3 reads:
Proposition 8
(Weak formulation) Find \(\varvec{v}\in X^v\) and \(\varvec{u}\in X^u\) such that
where the semi-linear form is given by
and the right-hand side functional is given by
The weak form of the temperature equation reads: find \(\theta \in X^{\theta }\) such that
with
and
The weak form of the plastic strain reads: find \(\varvec{\varepsilon }^\textrm{p}\in X^{p}\) such that
with
and the right-hand side functional
where we notice from before that \(\varvec{\varepsilon }^{\textrm{p},\star }_0 ={\varvec{0}}\). Furthermore, \(\alpha ^\textrm{h}\in {\mathbb {R}}_+\) is determined via the kinematic condition \(\dot{\alpha }^\textrm{h}=\Vert \dot{\varvec{\varepsilon }}^\textrm{p}\Vert \) with the initial condition \(\alpha ^\textrm{h}_0=0\).
8 Gradient-enhanced damage modeling
As last example, we consider a model for gradient-enhanced damage (see [69, 70]) which is closely related to regularized phase-field fracture; both have been intensively investigated over the last years.Footnote 11 In view of the developments of the current work, the difference to plasticity concerns the inequality constraint. While we deal with a local inequality constraint acting on the evolution of the plastic strain in plasticity, the constraint is of a non-local type in gradient-enhanced damage and acts directly on one of the solution variables.
8.1 Modeling
The evolution of the damage variable \(\varvec{\alpha }\rightarrow d\) is assumed to be rate-independent. We thus define a dissipation function of order one similar to elasto-plasticity as
where \(\mu \ge 0\).
The current damaged state weakens the effective stiffness of the material. In general, different definitions for the damage function \(f=f(d)\) are possible, for instance \(f=(1-d)^2\). However, we propose
Definition 11
(Damage function) We define the exponential damage function as
with the obvious properties \(f(0)=1\) (undamaged), \(\displaystyle \lim _{d\rightarrow \infty } f= 0\) (fully damaged),Footnote 12 and \(f^\prime = -f\). Furthermore, it holds
For the mechanical part of the free energy density, we consequently propose
For increasing damage variable d, the value of the damage function f decreases such that \(\Psi ^\textrm{m}\) is a non-convex function, and therefore several (local) minima exist. It is thus well-known that the characterizing stationarity conditions of the balance of linear momentum lack uniqueness. To render the problem well-posed, we employ the strategy of gradient enhancement. To this end, we make use of the enhancement
which regularizes the model, and where \(\beta >0\) with the unit \([\beta ]=\text {J}/\text {m}\). The total free energy density is thus
from which the stress results to be
Then, the stationarity of the Hamilton functional with respect to the damage variable in (33)\(_2\) specifies with \(\varvec{\alpha }\rightarrow d\) and the definition for the non-conservative force in (35) to
with the subdifferential
Let us investigate the second term in (81) and integrate it by parts:
since \(\delta f = f^\prime \delta d = - f \delta d\). Considering the independence of the volume and the surface of the body and sufficient regularity of all integrands (again mathematically the fundamental lemma of calculus of variations), (81) can be rearranged as
Introducing the (extended) thermodynamic driving force by
the evolution equation (84) is given by the differential inclusion
which is rearranged to
Similarly to the modeling of plasticity, a criterion for damage evolution is required to close the material model. Thus, we perform a Legendre transform for the dissipation function and we obtain
Consequently, damage evolves for \(p^\textrm{d}=r\) and does not evolve for \(p^\textrm{d}<r\) from which we find the unit \([r]=\text {J}/\text {m}^3\).
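As a small numerical illustration of Definition 11: the stated properties \(f(0)=1\), \(\lim _{d\rightarrow \infty } f=0\), and \(f^\prime =-f\) pin down \(f(d)=e^{-d}\), which the sketch below assumes and checks by a finite-difference derivative.

```python
import math

# Sketch of the exponential damage function of Definition 11,
# assumed here as f(d) = exp(-d), with a numerical check of the
# stated properties f(0) = 1, f -> 0 for d -> infinity, f' = -f.
def f(d):
    return math.exp(-d)

def f_prime(d, h=1e-6):
    return (f(d + h) - f(d - h)) / (2.0 * h)   # central difference

print(f(0.0))                     # undamaged: f(0) = 1
print(f(50.0))                    # fully damaged limit: f -> 0
print(f_prime(2.0) + f(2.0))      # property f' = -f, so this is ~0
```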
8.2 Function spaces and weak formulation
In extension to plasticity discussed in Sect. 7.3, the inequality constraint acts now directly on a non-local solution variable, namely the damage function f. From the mechanical viewpoint, we deal with a local constraint in plasticity, while we have now a non-local constraint in gradient-enhanced damage. The function space for the damage variable f reduces to a closed convex set of a suitable function space (here the Sobolev space \(H^1(\Omega )\)), similarly to the obstacle problem [50, 51] and to what we already encountered in Sect. 7.3 for the plastic strain evolution. The Euler-Lagrange equations for the displacements, velocities, and temperature remain the same as before. The resulting system is a coupled variational inequality system (CVIS) [86] with in total four coupled unknowns.
Displacements, velocities, temperature As before, we need four solution sets. Here, for the displacement variable, the velocities, and the temperature there is no change in the function spaces to before and we refer the reader to Sect. 5.4.
Damage function It remains to discuss the setting for the damage function. For the relation between the differential inclusion (84) and the design of convex sets, we refer the reader to Sect. 7.2. Thus, the closed convex set for the damage functionFootnote 13 is defined as follows. First, we assume (see e.g., [49])
resulting in
Consequently, the constraint \(\dot{f} \le 0 \text { a.e. in } I\times \Omega \) is well-defined, and yields the convex set
Weak formulation The weak formulation corresponding to Problem 4 reads:
Proposition 9
(Weak formulation) Find \(\varvec{v}\in X^v\) and \(\varvec{u}\in X^u\) such that
where the semi-linear form is given by
and the right-hand side functional is given by
The weak form of the temperature equation reads: find \(\theta \in X^{\theta }\) such that
with
and
The variational inequality for the damage function reads: find \(f\in X^f\) such that
with
and
As final, abstract monolithic CVIS, we can write
Proposition 10
Find \(\varvec{U}:= (\varvec{u},\varvec{v},\theta ,f)\in X:=X^u\times X^v\times X^{\theta }\times X^f\) such that
where \(\delta \varvec{U}:= (\delta \varvec{u},\delta \varvec{v},\delta \theta ,\delta f)\), and \({\bar{\varvec{U}}} := (0,0,0,f)\in X\), and \(X_0 := X_0^u\times X^v\times X_0^{\theta }\times X^f\). Therein, \({\mathcal {A}}[\varvec{U}](\delta \varvec{U})\) and \({\mathcal {L}}(\delta \varvec{U})\) are composed with the single semi-linear and linear forms from Proposition 9.
9 Numerical regularization and discretization
In this section, we perform a space-time Galerkin discretization. Temporal discretization is based on a discontinuous Galerkin (dG) finite element scheme while spatial discretization is executed with continuous Galerkin (cG) finite elements. This combination is well-known, e.g., [6, 79], since the flexibility of a dG(r) discretization results in implicit, strongly A-stable, time-stepping schemes [22, 44], while for many problems in continuum mechanics classical cG finite elements are employed. Of course, a dG discretization in space is in principle possible, as are cG discretizations in time; the latter, however, suffer from reduced numerical stability since the functions are required to be globally continuous in time. Consequently, our method of choice, as one example, is a dG(r)cG(s) discretization. For more complex choices, the technical realization, i.e., implementation and debugging, remains an open research question.
Let us start from Proposition 10. The other formulations from the other sections arise as adaptations. Our procedure is as follows: first, we regularize the variational inequality by simple penalization, then we semi-discretize in time, and finally, we arrive at the full space-time discretization. The penalization procedure allows us to relax the constraint such that we can work again with linear function spaces rather than convex solution sets (see e.g., [50, 51] or [89]).
9.1 Regularization of the inequality constraint
As we observe in Proposition 10, we deal with a convex set in \(X^f\) only. We now enlarge the solution set by introducing \(X^f_{\gamma }\) with \( X^f_{\gamma } := L^2(I,H^1(\Omega )) \) where we replace the convex set K by the linear function space \(H^1(\Omega )\) in the image space. Consequently, the corresponding weak form becomes an equality: find \(f\in X_{\gamma }^f\) such that
with
and the penalization functional
with a penalization parameter \(\gamma >0\) with the unit \([\gamma ]=\) Js/m\(^3\) and \(\langle x \rangle ^+ = x\) for \(x>0\) and \(\langle x \rangle ^+ = 0\) for \(x\le 0\). This penalization formulation is the same as for phase-field fracture [48][Section 2]. Let us briefly comment on the constraint. According to our derivations in Sect. 8, i.e., (112), we are interested in \(\dot{f}\le 0\) only. The penalization functional is constructed in such a way that it is zero when the constraint is fulfilled and penalizes the weak form when the constraint is violated. For the penalization form \({\mathcal {A}}_{\gamma }[f](\delta f)\), it holds
Finally, the right-hand side is given by
With this, we arrive at the penalized formulation:
Proposition 11
(Penalized formulation) Find \(\varvec{U}:= (\varvec{u},\varvec{v},\theta ,f)\in X:=X^u\times X^v\times X^{\theta }\times X_{\gamma }^f\) such that
where \(\delta \varvec{U}:= (\delta \varvec{u},\delta \varvec{v},\delta \theta ,\delta f)\), and \(X_0 := X_0^u\times X^v\times X_0^{\theta }\times X_{\gamma }^f\) .
Remark 9
We notice that other regularization strategies such as augmented Lagrangians, active set methods, or interior point methods could have been employed as well. In terms of robustness, efficiency, and accuracy, these approaches are usually better suited than simple penalization; see for instance [32, 43, 65]. However, simple penalization provides a direct first approximation of variational inequalities, specifically when we deal with coupled variational inequality systems (CVIS) as in the current work. In future work in which computations will be addressed, we will extend simple penalization to one of the other methods, as already done for similar types of problems in [86] by using augmented Lagrangian formulations or primal-dual active set methods.
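The mechanics of the simple penalization above can be sketched directly. The code below implements the Macaulay bracket \(\langle x \rangle ^+\) and a pointwise penalty contribution for the constraint \(\dot{f}\le 0\); the value of \(\gamma \) is hypothetical, and the function names are illustrative only.

```python
import numpy as np

# Sketch of the simple penalization of the constraint f_dot <= 0 via the
# Macaulay bracket <x>^+ = max(x, 0): the penalty vanishes when the
# constraint holds and grows linearly with gamma when it is violated.
gamma = 1.0e4                     # hypothetical penalization parameter [J s/m^3]

def macaulay(x):
    return np.maximum(x, 0.0)

def penalty_residual(f_dot, delta_f):
    """Pointwise contribution gamma * <f_dot>^+ * delta_f to the weak form."""
    return gamma * macaulay(f_dot) * delta_f

print(penalty_residual(-0.3, 1.0))   # constraint f_dot <= 0 satisfied: 0
print(penalty_residual(0.2, 1.0))    # violated: penalized by gamma * 0.2
```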
9.2 Temporal discretization with discontinuous Galerkin
Based on the penalized formulation in Proposition 11, we now derive a semi-discrete temporal version. Let
be a decomposition of the time interval I with half-open subintervals \(I_m := (t_{m-1},t_m]\)Footnote 14 and the (possibly varying) time step size \(k_m := t_m - t_{m-1}\) for \(m=1,\ldots ,M\). For each function space, we formulate the semi-discrete part:
where k denotes the semi-discretization in time, and \(r_1,r_2,r_3,r_4\in {\mathbb {N}}_0\) denote the respective polynomial degrees of our finite elements in time. We note that \({\tilde{X}}_k^{u,r_1},{\tilde{X}}_k^{v,r_2},{\tilde{X}}_k^{\theta ,r_3}\) and \({\tilde{X}}_k^{f,r_4}\) are not subspaces of their corresponding continuous-level function spaces, since we allow for discontinuities in time. Moreover, specifically for the test function spaces, namely for \(\varvec{u}\) and \(\theta \), we introduce
Next, we introduce jump terms that connect two solutions from two adjacent time intervals.
Definition 12
Let \(\varvec{U}_k := (\varvec{u}_k,\varvec{v}_k,\theta _k,f_k)\in {\tilde{X}}_k^r:= {\tilde{X}}_k^{u,r_1}\times {\tilde{X}}_k^{v,r_2}\times {\tilde{X}}_k^{\theta ,r_3}\times {\tilde{X}}_k^{f,r_4}\). Then, the jump at time point \(t_m\) is defined by
We notice that \(\varvec{U}_{k,m} := \varvec{U}_{k}(t_m)\) where \(t_m\) is the time point with index m.
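The half-open interval convention and the jump terms of Definition 12 can be sketched for a piecewise-constant (dG(0)-type) function in time (a minimal illustration with hypothetical values, not one of the models above):

```python
import numpy as np

time_points = np.array([0.0, 0.1, 0.25, 0.45, 0.7, 1.0])   # t_0, ..., t_M
values = np.array([1.0, 0.8, 0.9, 0.4, 0.4])               # U_k on I_1, ..., I_M

def U(t):
    """Piecewise-constant dG(0) function; t belongs to I_m = (t_{m-1}, t_m]."""
    m = int(np.searchsorted(time_points, t, side="left"))  # half-open convention
    return values[m - 1]

def jump(m):
    """[U]_m := U(t_m^+) - U(t_m^-) at the interior time point t_m."""
    return values[m] - values[m - 1]

assert U(0.1) == 1.0               # t_1 belongs to I_1, hence the left value
assert np.isclose(jump(1), -0.2)   # discontinuity between I_1 and I_2
assert jump(4) == 0.0              # no discontinuity between I_4 and I_5
```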
With the help of the decomposition of I, the jump terms and the semi-discrete spaces, we can now formulate (e.g., [87]) a semi-discrete system:
Proposition 12
Find \(\varvec{U}_k := (\varvec{u}_k,\varvec{v}_k,\theta _k,f_k)\in {\tilde{X}}_k^r:= {\tilde{X}}_k^{u,r_1}\times {\tilde{X}}_k^{v,r_2}\times {\tilde{X}}_k^{\theta ,r_3}\times {\tilde{X}}_k^{f,r_4}\) such that
where \(\delta \varvec{U}_k := (\delta \varvec{u}_k,\delta \varvec{v}_k,\delta \theta _k,\delta f_k)\), and \({\tilde{X}}_{0,k}^r := {\tilde{X}}_{0,k}^{u,r_1}\times {\tilde{X}}_k^{v,r_2}\times {\tilde{X}}_{0,k}^{\theta ,r_3}\times {\tilde{X}}_k^{f,r_4}\). Therein, we have
which are defined in detail as
where \(\Omega (m)\) is defined as the set \(\Omega (m):=\{ \varvec{x}\in \Omega | \; f_{m+1}^-(\varvec{x}) > f_{m}^-(\varvec{x}) \}\). Therein, the spatial parts (terms without time derivatives) are defined as
The right-hand sides are given by:
The above semi-linear form \({\mathcal {A}}[\varvec{U}_k](\delta \varvec{U}_k)\) needs some further explanations. For linear time derivative terms, as is the case for the first three terms, jump terms can be associated in a classical way [44, 79, 87]. However, the term
is nonlinear since \(\Psi _{0,k}\) contains solution variables and \(\dot{f}_k\) is a solution variable, too. Here, we follow [47] in which a careful analysis yields two jump terms, namely
The second unusual modification is concerned with the penalization functional. Since it is also time-dependent, we need to add jump terms here as well. Formally, we follow [48, Section 2]. We recall the penalization functional
The corresponding jump terms are defined on \(\Omega \), but only on the subset where the penalization is active, namely \(f_{m+1}^- > f_{m}^-\), i.e., \(f_{m+1}^-(\varvec{x}) > f_{m}^-(\varvec{x})\) for \(\varvec{x}\in \Omega \), which results in the functional
9.3 Spatial discretization with continuous Galerkin
For the spatial discretization, we intend to work with a classical continuous Galerkin finite element scheme [15, 16, 20, 89]. To this end, the spatial discretization parameter is denoted, as usual, by h. First, we introduce the fully discrete function spaces at each time point \(t_m\), associated with the spatial mesh \({\mathcal {T}}_h^m\). Here, the index m indicates that the spatial mesh can change from time point \(t_{m-1}\) to \(t_m\) when using mesh adaptivity. Consequently, we obtain for \(V^u\) the discrete space \(V_h^{u,s_1,m}\), in which the index u is as before, \(s_1\) denotes the finite element polynomial degree for the spatial discretization, m is the index of the current time point \(t_m\), and the subindex h indicates that we work on the discrete level. The other function spaces are defined accordingly. With this, we have:
Moreover, specifically for the test function spaces, namely for the variations \(\delta \varvec{u}\) and \(\delta \theta \), we introduce
Formally, we then arrive at the fully discrete system
Proposition 13
Find \(\varvec{U}_{kh} := (\varvec{u}_{kh},\varvec{v}_{kh},\theta _{kh},f_{kh})\in {\tilde{X}}_{kh}^{r,s}:= {\tilde{X}}_{kh}^{u,r_1,s_1}\times {\tilde{X}}_{kh}^{v,r_2,s_2}\times {\tilde{X}}_{kh}^{\theta ,r_3,s_3}\times {\tilde{X}}_{kh}^{f,r_4,s_4}\) such that
where \(\delta \varvec{U}_{kh} := (\delta \varvec{u}_{kh},\delta \varvec{v}_{kh},\delta \theta _{kh},\delta f_{kh})\), and \({\tilde{X}}_{0,k,h}^{r,s} := {\tilde{X}}_{0,k,h}^{u,r_1,s_1}\times {\tilde{X}}_{kh}^{v,r_2,s_2}\times {\tilde{X}}_{0,k,h}^{\theta ,r_3,s_3}\times {\tilde{X}}_{k,h}^{f,r_4,s_4}\). Therein, we have
The single terms are given by
Therein, the spatial parts (terms without time derivatives) are defined as
The right-hand sides are given by:
9.4 dG(0) realization in time and interpretation as time-stepping scheme
Finally, starting from Proposition 13, we realize a dG(0) discretization by choosing \(r_1=r_2=r_3=r_4=0\). Due to the discontinuous test functions in time, the global scheme decouples into a sequential approach over the time intervals \(I_m, m=1,\ldots ,M\), which can be interpreted as a time-stepping scheme.
Total system on each \(I_m\) On each time interval \(I_m\), we have
Carrying out time integration Now, we employ constant-in-time test functions, integrate the time derivative terms, and apply the right-sided box rule. The term \(\Psi _{0,kh}\) is approximated in an implicit fashion, namely \(\Psi _{0,kh}:= \Psi _{0,kh,m}:= \Psi _{0,kh,m}^-\). Then, we arrive at
Remark 10
At this point, we emphasize another major advantage of such Galerkin-type formulations in comparison to classical time-stepping schemes based on finite differences. In the Galerkin finite element context, the integrals are resolved by quadrature formulas, and we have the liberty to easily use a different quadrature formula for each integral. This is of interest in error-controlled adaptivity (e.g., [9, 27, 67]) or in cases where singularities are detected that require certain approximation properties in certain time intervals \(I_m\) for certain equations.
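The liberty of choosing a different quadrature formula per integral can be sketched with Gauss-Legendre rules of different orders on one and the same interval \(I_m\) (a minimal illustration with placeholder integrands; `np.polynomial.legendre.leggauss` supplies nodes and weights on \([-1,1]\)):

```python
import numpy as np

def gauss_on_interval(g, a, b, n):
    """n-point Gauss-Legendre quadrature of g on [a, b]."""
    xi, w = np.polynomial.legendre.leggauss(n)    # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * xi + 0.5 * (a + b)        # map to [a, b]
    return 0.5 * (b - a) * np.dot(w, g(t))

a, b = 0.0, 0.5   # one time interval I_m
# a smooth term may be resolved with a low-order rule ...
I_smooth = gauss_on_interval(lambda t: t, a, b, n=1)
# ... while a stiffer term on the very same I_m gets a higher-order rule
I_stiff = gauss_on_interval(lambda t: np.exp(t), a, b, n=4)

assert abs(I_smooth - 0.125) < 1e-14                  # 1-point rule exact for linear g
assert abs(I_stiff - (np.exp(0.5) - 1.0)) < 1e-10     # 4-point rule nearly exact here
```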
Combining time-integrated time derivatives with jump terms Next, we investigate the first three time-integrated terms and the jump terms, while we neglect for a moment the other terms. This enables us to perform the following simplification:
Now, we consider the fourth time derivative term and its two jump terms:
Since the functions are constant in time, it holds \(\Psi _{0,kh}^- = \Psi _{0,kh}\) and \(\delta \theta _{h} = \delta \theta _{kh,m-1}^+ = \delta \theta _{kh,m}^-\), and we obtain for the fourth time derivative term
The fifth term follows in the same fashion and we obtain
Inserting the results into the original system The terms after the last equal sign are now employed in the previous relation yielding
Resolving space-time boundary conditions Next, we explicitly rewrite the temporal boundary term as
Notational setups and final time-discrete system Since on each time interval \(I_m\) the functions are constant in time, we can set \(\varvec{U}_{kh,m} := \varvec{U}_{kh,m}^-\) for the trial functions and their corresponding test functions. Thus, we obtain
Finally, this yields:
Proposition 14
(dG(0) timestepping) Let the initial conditions \(\varvec{v}^\star _0,\theta ^\star _0,f^\star _0\) at \(t_0\) and the end time condition \(\varvec{v}^\star _T\) at \(t_M=T\) be given. For the time point indices \(m=1,\ldots ,M\), the current time step size is \(k_m = t_m - t_{m-1}\), and let the previous time step solution \(\varvec{U}_{kh,m-1}\) be given. Then, find \(\varvec{U}_{kh,m}\in V_h^{u,s_1,m}\times V_h^{v,s_2,m}\times V_h^{\theta ,s_3,m}\times V_h^{f,s_4,m}\) such that
for all test functions \(\delta \varvec{U}_{kh,m}\in V_{0,h}^{u,s_1,m}\times V_h^{v,s_2,m}\times V_{0,h}^{\theta ,s_3,m}\times V_h^{f,s_4,m}\). Therein, the spatial parts are given by
Remark 11
The discretizations of the other problem statements from Sects. 7, 6, and 5 are obtained with the same formal procedure; only the definitions of the semi-linear forms and solution sets change. In Sect. 7, a regularization of the inequality constraint still must be undertaken, which is not necessary for Sects. 6 and 5.
Remark 12
The previous dG(0) discretization is a variant of the well-known implicit, strongly A-stable backward Euler scheme. This correspondence is well-known in the literature for simpler equations, e.g., heat conduction or the wave equation alone; see e.g. [6].
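This correspondence can be verified directly on the scalar model ODE \(\dot{u} + \lambda u = 0\): with constant trial and test functions on \(I_m\) and the jump term at \(t_{m-1}\), the dG(0) equation reads \(k_m\lambda u_m + (u_m - u_{m-1}) = 0\), which is exactly the backward Euler update. A minimal numerical check (our own sketch):

```python
import numpy as np

lam, k, M, u0 = 2.0, 0.1, 50, 1.0

# backward Euler for the model ODE u' + lam*u = 0
u_be = [u0]
for _ in range(M):
    u_be.append(u_be[-1] / (1.0 + lam * k))

# dG(0) on I_m: constant trial/test functions and the jump at t_{m-1} give
#   k*lam*u_m + (u_m - u_{m-1}) = 0, assembled and solved as a 1x1 system
u_dg = [u0]
for _ in range(M):
    A = np.array([[1.0 + lam * k]])
    b = np.array([u_dg[-1]])
    u_dg.append(np.linalg.solve(A, b)[0])

assert np.allclose(u_be, u_dg)   # the two schemes produce identical iterates
```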
9.5 Idea of higher-order schemes such as dG(1)
The starting point for higher-order schemes is (89). For instance, a dG(1) realization with \(r_1=r_2=r_3=r_4=1\) follows the classical idea of the finite element method. For dG(1), i.e., linear polynomials in time, we need two basis functions per time interval rather than only one (the constant function) as for the dG(0) method. Let us recall the temporal part of the dG(0) basis function:
In the dG(1) method, the two basis functions can be obtained with the Newton representation (e.g., [75]) of Lagrange interpolation
These temporal parts need to be multiplied by their spatial parts, i.e., \(\delta \varvec{U}_{kh} = \delta \varvec{U}_{h}\delta \varvec{U}_k\), which is omitted here, but can be found for instance in [87] for the heat equation. These two basis functions are subsequently inserted into (89), which now yields two coupled (spatial) solutions per time interval \(I_m\), namely \(\varvec{U}_{kh,m-1}^+\) and \(\varvec{U}_{kh,m}^-\). The overall solution on \(I_m\) is then given by the linear combination
Consequently, we now must solve spatial systems of double size per \(I_m\), which can be computationally expensive. However, superconvergence effects can be proven for simple model problems at the time points \(t_m\), yielding a temporal convergence order of 3 for the dG(1) scheme; see e.g., [72, Section 7.3].
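For illustration, the equivalence of the Newton representation and the nodal (Lagrange) combination of the two coupled solutions on one interval can be sketched with hypothetical scalar nodal values; the standard nodal basis on \(I_m\) is an assumption of this sketch, and the spatial parts are omitted as in the text:

```python
import numpy as np

tm1, tm = 0.2, 0.5          # t_{m-1}, t_m (hypothetical interval)
k = tm - tm1
Up, Um = 1.3, 0.7           # stand-ins for U_{kh,m-1}^+ and U_{kh,m}^-

def newton_form(t):
    """Newton representation of the linear interpolant on the nodes t_{m-1}, t_m."""
    return Up + (Um - Up) / k * (t - tm1)

def lagrange_form(t):
    """Equivalent nodal (Lagrange) basis combination on I_m."""
    phi_left, phi_right = (tm - t) / k, (t - tm1) / k
    return Up * phi_left + Um * phi_right

t = np.linspace(tm1, tm, 11)
assert np.allclose(newton_form(t), lagrange_form(t))              # same polynomial
assert np.isclose(newton_form(tm1), Up) and np.isclose(newton_form(tm), Um)
```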
9.6 Well-posedness of the time-discrete dG(0) elastic wave system
One key innovation of this work is the prescription of initial and end time velocity conditions as demonstrated in Sect. 5, and specifically summarized in Remark 6. Based on the previous derivations in the current section, with the final result presented in Proposition 14, we now show explicitly that the space-time system is well-posed for \(M=2\) and dG(0) time discretization. Considering the elastic wave equation only, we have from Proposition 14
with
Without loss of generality, let us assume \(\varvec{t}^\star = 0\), and let us re-order and collect:
As a reminder, in the classical sense, we would now prescribe the initial conditions \(\varvec{u}^\star _0\) and \(\varvec{v}^\star _0\) such that for \(m=1\), we set
and from this, we obtain from (90) for \(m=1,\ldots ,M\) sequences \((\varvec{u}_{kh,m})_{m\in {\mathbb {N}}},(\varvec{v}_{kh,m})_{m\in {\mathbb {N}}}\) of discrete solutions. In our new philosophy, we rather prescribe
We notice that displacements are still prescribed on the spatial boundaries, and for this reason there is justified hope that our change in the initial conditions still yields a solvable system. In the following, we study the case \(M=2\), which could be generalized by induction. We assume equal time-step sizes such that \(k:= k_1 = k_2\) and the corresponding time points
For \(M=2\), we have from (90)
and
with the initial time and end time conditions
Let us briefly count the unknowns in this scheme:
We have four equations in (91) and four unknowns, which yields a square linear system.
Remark 13
As a reminder, in the classical setting, we have
and we also solve for four unknowns, namely
Here, it is well-known that this procedure works and the scheme is well-posed.
In our new setting, the actual solution becomes more difficult since the unknowns in (92) cannot be computed in a sequential manner, i.e., given \((\varvec{u}_{kh,m-1},\varvec{v}_{kh,m-1})\), compute \((\varvec{u}_{kh,m},\varvec{v}_{kh,m})\). Rather, we have to take a global view, since the conditions at the initial and end times must be satisfied all at once. For this reason, let us explicitly derive the linear system. To this end, we first order the unknowns on the left-hand side and the known values on the right-hand side. Here, (91) yields
and
As previously introduced in Sect. 9.3, we have
with the finite element representations
with the nodal representations
Let us define now the discrete matrices (see the mathematical finite element literature, e.g., [15, 16], for similar notations of finite element matrices)
and discrete right-hand sides, corresponding to (94),(96), (95),(93), respectively
where we recall that \(\varvec{v}_{kh,0}:= \varvec{v}_0^\star \) and \(\varvec{v}_{kh,2}:= \varvec{v}_T^\star \). Then,
Therein, the rows correspond to (94),(96),(95),(93), respectively. The determinant of this block system is
Here, all blocks are invertible, since they consist of usual mass and stiffness matrices; consequently, the determinant is non-zero and the system has a unique solution. Thus, the solutions
and consequently their finite element representations
exist and our newly proposed discrete space-time formulation of the elastic wave part for \(M=2\) in the dG(0) setting is well-posed. Thus, we have shown:
Proposition 15
Given the data \(\varvec{b}^\star , \varvec{v}^\star _0\) and \(\varvec{v}^\star _T\), the fully discretized elastic wave equation in mixed form (90), with dG(0) in time for \(M=2\) and continuous Galerkin finite elements in space, is well-posed and admits unique finite element solutions \(\varvec{u}_{kh,1}, \varvec{v}_{kh,1}, \varvec{u}_{kh,2},\varvec{v}_{kh,2}\).
Corollary 1
The previous result can be extended via induction to cases \(M\ge 3\).
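The determinant argument above can be probed numerically on a toy problem. The sketch below is our own illustration and does not reproduce the concrete blocks of (97); it assembles 1D P1 mass and stiffness matrices and checks that the mass matrix and the Schur-type combination \(M + k^2 A\), of the kind that arises when eliminating the velocity from a mixed wave system (an assumption for this sketch), are non-singular:

```python
import numpy as np

def assemble_1d_p1(n, L=1.0):
    """P1 mass and stiffness matrices on a uniform mesh of n cells on (0, L),
    homogeneous Dirichlet degrees of freedom eliminated (interior nodes only)."""
    h = L / n
    m = n - 1
    M = np.zeros((m, m)); A = np.zeros((m, m))
    for i in range(m):
        M[i, i] = 2.0 * h / 3.0
        A[i, i] = 2.0 / h
        if i + 1 < m:
            M[i, i + 1] = M[i + 1, i] = h / 6.0
            A[i, i + 1] = A[i + 1, i] = -1.0 / h
    return M, A

M, A = assemble_1d_p1(n=20)
k = 0.05
# both the mass matrix and the Schur-type combination M + k^2 A are
# symmetric positive definite, hence have a strictly positive determinant
for B in (M, M + k**2 * A):
    sign, logdet = np.linalg.slogdet(B)
    assert sign > 0 and np.isfinite(logdet)
```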
9.7 Practical realization
The well-posedness analysis immediately yields a practical realization for the implementation of the elastic wave part. Moreover, the solutions can be explicitly computed by standard procedures, e.g., Gaussian elimination or LU decomposition for such a small \(4\times 4\) block system. This should not be mistaken for a simple, fast solution process, since each block therein consists of a classical finite element matrix, which can be (very) large for (very) fine spatial discretizations with (very) small mesh parameters h. To this end, we obtain from (97) the triangular system
Here, the reader should recall that the right-hand side is modified due to the row modifications as well. For this reason, we denote the right-hand side with a tilde. Now, we can explicitly compute the solution by solving linear equation systems of the form
and finally have the corresponding displacements and velocity
We notice that these steps are typical matrix-vector or matrix-matrix operations as they often arise in similar block systems, for instance in Schur complement computations; e.g., [91]. Moreover, the principal computational cost of solving the four systems backwards is the same as for the classical procedure with initial conditions in \(\varvec{u}\) and \(\varvec{v}\). The only difference is that in the classical setting, we can go from time point \(t_{m-1}\) to time point \(t_m\) and solve the equations, while in our newly proposed setting, we first have to derive the fully-coupled linear equation system, here (97) (for \(M=2\)), and resolve it before solving the actual finite element systems. This approach remains reasonable for moderate \(M\sim 100\) or \(M\sim 1000\) by using symbolic computations from Maple or Wolfram Mathematica. For large M (many time steps), iterative solvers, e.g., [77], or multigrid methods [35] should be employed; in the space-time context, we specifically mention the space-time multigrid solution [30].
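Block Gaussian elimination via a Schur complement, as referenced above, can be sketched generically; the blocks below are random stand-ins for the finite element matrices (not the actual blocks of (97)):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30

def spd(n):
    """Random symmetric positive definite block (stand-in for a FE matrix)."""
    Q = rng.standard_normal((n, n))
    return Q @ Q.T + n * np.eye(n)

# 2x2 block system [[A, B], [C, D]] x = (b1, b2)
A, B = spd(n), rng.standard_normal((n, n))
C, D = rng.standard_normal((n, n)), spd(n)
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

# block elimination: S = D - C A^{-1} B (Schur complement), then back-substitute
AinvB = np.linalg.solve(A, B)
Ainvb1 = np.linalg.solve(A, b1)
S = D - C @ AinvB
x2 = np.linalg.solve(S, b2 - C @ Ainvb1)
x1 = Ainvb1 - AinvB @ x2

# compare against the monolithic solve of the full 2n x 2n system
K = np.block([[A, B], [C, D]])
x_ref = np.linalg.solve(K, np.concatenate([b1, b2]))
assert np.allclose(np.concatenate([x1, x2]), x_ref)
```

Only solves with the diagonal blocks A and the (smaller) Schur complement S are required, which mirrors the backward solves of the \(4\times 4\) block system described in the text.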
10 Conclusions
In this work, we proposed a new paradigm for variational material modeling within a mathematically consistent space-time framework. On the one hand, stationary problems in mechanics have been known since 1696, and space-time descriptions have been discussed mathematically for partial differential equations and variational inequalities since 1968 [26, 52, 56, 88]. On the other hand, only recent advances in thermo-mechanically coupled modeling via extended Hamilton principles allowed us to embed the resulting models into common space-time frameworks. Of special importance is the description of the space-time cylinder and specifically its surface on which boundary and initial conditions are defined. Having the extended Hamilton principle and the space-time cylinder at hand, we demonstrated the power of our approach in terms of four models: the elastic wave problem, visco-elasticity, elasto-plasticity with hardening, and gradient-enhanced damage modeling. For the latter two examples, inequality constraints yield complementarity systems, which mathematically require working in convex sets rather than linear function spaces. Finally, we performed the numerical discretization with a focus on the temporal parts, employing discontinuous Galerkin finite elements. Therein, the regularization of the inequality constraints was based on simple penalization. Since the penalization acts in time, corresponding dG jump terms had to be derived. Moreover, a nonlinear time-derivative term required a careful investigation in terms of its jump terms, too. These details were worked out carefully and, finally, a dG(0) realization with constant basis functions in time was conducted in order to arrive at a practical scheme. The idea of higher-order temporal discretizations was outlined as well.
Finally, we studied the well-posedness of the elastic wave part for \(M=2\) (three time points) in the dG(0) setting, since the prescription of velocity conditions only is unusual and at the same time one of the key innovations of this paper. The practical realization in terms of an algorithmic scheme is also shown for this system and can be adopted as a starting point for an implementation of the full elastic wave equation. A reduced, prototype realization, implementation, and numerical simulation of a scalar-valued wave equation is undertaken in Sect. 5.3.
Notes
For Poisson’s problem in the classical mathematical finite element literature, the derivation of the strong form from the variational formulation is discussed in detail in [16, Chapter 5].
The displacements and velocities are combined in one common semi-linear form to be consistent with the literature, e.g., [6], but we could also have defined \({\mathcal {A}}_u[\varvec{u}](\delta \varvec{u})\) and \({\mathcal {A}}_v[\varvec{v}](\delta \varvec{v})\) separately, i.e.,
$$\begin{aligned}&{\mathcal {A}}_u[\varvec{u}]({\delta {\varvec{u}}}) = \int _I \int _{\Omega } \rho \, {\dot{\varvec{v}}} \cdot {\delta {\varvec{u}}} \ \textrm{d}V \ \textrm{d}t \\&\quad + \int _I \int _{\Omega } \varvec{\sigma }: \nabla ^\textrm{sym}{\delta {\varvec{u}}} \ \textrm{d}V \ \textrm{d}t - \int _{\partial I}\int _\Omega \rho \,\varvec{v}\cdot {\delta {\varvec{u}}} \ \textrm{d}V \ \textrm{d}s,\\&{\mathcal {A}}_v[\varvec{v}]({\delta {\varvec{v}}}) = - \int _I\int _\Omega \rho \, \varvec{v}\cdot {\delta {\varvec{v}}} \ \textrm{d}V \ \textrm{d}t + \int _I\int _\Omega \rho \, {\dot{\varvec{u}}} \cdot {\delta {\varvec{v}}} \ \textrm{d}V \ \textrm{d}t \end{aligned}$$and respectively their right-hand sides.
For mathematical descriptions and terminology of differential inclusions, see, e.g., [2].
For generalizations with plastic strains belonging to spaces of Borel measures, we refer for instance to [58], with specific statements on the spatial solution sets for \(\varvec{u}, \varvec{\varepsilon }, \varvec{\varepsilon }^\textrm{p}\) and \(\varvec{\sigma }\) on p. 240, and for quasi-static small-strain plasticity with vanishing hardening, we refer to [7].
For the correspondence in phase-field fracture, we refer to [62] where d denotes the damage function.
In variational phase-field fracture, we notice that the convex set is defined in an analogous fashion. Therein, usually the inequality constraint \(\dot{f} \le 0\) is discretized in time, e.g., \(\dot{f} \approx f(t_m) - f(t_{m-1}) \le 0\), since in most studies a quasi-static evolution of damage/fracture is considered; see e.g., [63]. There exist, however, dynamic (i.e., second order in time as we deal with in the current study) formulations, e.g., [14], in which the convex set for the damage/fracture variable is defined in a different way. Working with the time-continuous constraint \(\dot{f} \le 0\) was only considered recently in [48, Remark 2.1] and [49].
References
Argyris J, Scharpf D (1969) Finite elements in time and space. Nucl Eng Des 10:456–464
Aubin J, Cellina A (1984) Differential inclusions: set-valued maps and viability theory. Springer, Berlin
Babuska I (1973) The finite element method with Lagrangian multipliers. Numer Math 20:179–192
Ball J, James R (1989) Fine phase mixtures as minimizers of energy. In: Analysis and continuum mechanics. Springer, pp 647–686
Ball JM (2002) Some open problems in elasticity. Geom Mech Dyn 66:3–59
Bangerth W, Geiger M, Rannacher R (2010) Adaptive Galerkin finite element methods for the wave equation. Comput Methods Appl Math 10:3–48
Bartels S, Mielke A, Roubicek T (2012) Quasi-static small-strain plasticity in the limit of vanishing hardening and its numerical approximation. SIAM J Numer Anal 50(2):951–976
Baruch M, Riff R (1982) Hamilton’s principle, Hamilton’s law—6 to the n power correct formulations. AIAA J 20:687–692
Becker R, Rannacher R (2001) An optimal control approach to a posteriori error estimation in finite element methods. In: Acta Numerica, pp 1–102. Cambridge University Press, Cambridge
Besier M, Rannacher R (2012) Goal-oriented space-time adaptivity in the finite element Galerkin method for the computation of nonstationary incompressible flow. Int J Numer Methods Fluids 70:1139–1166
Betsch P, Schiebl M (2020) Generic-based formulation and discretization of initial boundary value problems for finite strain thermoelasticity. Comput Mech 65(2):503–531
Biot M (1955) Variational principles in irreversible thermodynamics with application to viscoelasticity. Phys Rev 97(6):1463
Biot MA (1954) Theory of stress–strain relations in anisotropic viscoelasticity and relaxation phenomena. J Appl Phys 25(11):1385–1391
Bourdin B, Larsen C, Richardson C (2011) A time-discrete model for dynamic fracture based on crack regularization. Int J Fract 168(2):133–143
Braess D (2007) Finite Elemente, 4th revised and extended edn. Springer, Berlin
Brenner SC, Scott LR (2007) The mathematical theory of finite element methods. Texts in applied mathematics, vol 15, 3rd edn. Springer, New York
Brezis H (2011) Functional analysis. Sobolev spaces and partial differential equations. Springer, New York
Canadija M, Mosler J (2011) On the thermomechanical coupling in finite strain plasticity theory with non-linear kinematic hardening by means of incremental energy minimization. Int J Solids Struct 48(7–8):1120–1129
Carstensen C, Hackl K, Mielke A (2002) Non-convex potentials and microstructures in finite-strain plasticity. Proc R Soc Lond Ser A Math Phys Eng Sci 458(2018):299–317
Ciarlet PG (1987) The finite element method for elliptic problems, 2nd printing. North-Holland, Amsterdam
Ciarlet PG (2013) Linear and nonlinear functional analysis with applications. SIAM, Philadelphia
Delfour M, Hager W, Trochu F (1981) Discontinuous Galerkin methods for ordinary differential equations. Math Comp 36:455–473
Di Pietro D, Ern A (2012) Mathematical aspects of discontinuous Galerkin methods. Mathématiques et Applications. Springer, Berlin
Diehl P, Lipton R, Wick T, Tyagi M (2022) A comparative review of peridynamics and phase-field models for engineering fracture mechanics. Comput Mech 66:1–35
Dörfler W, Hochbruck M, Köhler J, Rieder A, Schnaubelt R, Wieners C (2022) Wave phenomena: mathematical analysis and numerical approximation. Oberwolfach seminars. Birkhäuser, Cham
Duvaut G, Lions JL (1976) Inequalities in mechanics and physics. Springer, Berlin
Eriksson K, Estep D, Hansbo P, Johnson C (1995) Introduction to adaptive methods for differential equations. Acta Numer 66:105–158
Evans LC (2010) Partial differential equations. American Mathematical Society, Philadelphia
Fried I (1969) Finite element analysis of time-dependent phenomena. AIAA J 7:1170–1173
Gander MJ, Neumüller M (2016) Analysis of a new space-time parallel multigrid algorithm for parabolic problems. SIAM J Sci Comput 38(4):A2173–A2208
Gay-Balmaz F, Yoshimura H (2018) From Lagrangian mechanics to nonequilibrium thermodynamics: a variational perspective. Entropy 21(1):8
Glowinski R, Tallec PL (1989) Augmented Lagrangian and operator-splitting methods in nonlinear mechanics. SIAM Studies in Applied Mathematics, vol 9. SIAM, Philadelphia
Großmann C, Roos H-G, Stynes M (2007) Numerical treatment of partial differential equations. Springer, Berlin
Gruber P, Knees D, Nesenenko S, Thomas M (2010) Analytical and numerical aspects of time-dependent models with internal variables. ZAMM J Appl Math Mech/Zeitschrift für Angewandte Mathematik und Mechanik 90(10–11):861–902
Hackbusch W (1985) Multi-grid methods and applications. Springer, Berlin
Hamilton W (1834) On a general method in dynamics. Philos Trans R Soc II:247–308
Han W, Reddy D (2012) Plasticity. Springer, New York
Hencky H (1924) Zur Theorie plastischer Deformationen und hierdurch im Material hervorgerufenen Nachspannungen. Z Angew Math Mech 4:323–335
Hinze M, Pinnau R, Ulbrich M, Ulbrich S (2009) Optimization with PDE constraints. Number 23 in mathematical modelling: theory and applications. Springer, Dordrecht
Holzapfel G (2000) Nonlinear solid mechanics: a continuum approach for engineering. Wiley, New York
Hughes TJ, Hulbert GM (1988) Space-time finite element methods for elastodynamics: formulations and error estimates. Comput Methods Appl Mech Eng 66(3):339–363
Hulbert GM, Hughes TJ (1990) Space-time finite element methods for second-order hyperbolic equations. Comput Methods Appl Mech Eng 84(3):327–348
Ito K, Kunisch K (2008) Lagrange multiplier approach to variational problems and applications, volume 15 of advances in design and control. Society for Industrial and Applied Mathematics (SIAM), Philadelphia
Johnson C (1988) Error estimates and adaptive time-step control for a class of one-step methods for stiff ordinary differential equations. SIAM J Numer Anal 25(4):908–926
Johnson C (1993) Discontinuous Galerkin finite element methods for second order hyperbolic problems. Comput Methods Appl Mech Eng 107(1):117–129
Junker P, Balzani D (2021) An extended Hamilton principle as unifying theory for coupled problems and dissipative microstructure evolution. Contin Mech Thermodyn 66:1–26
Khimin D, Roth J, Wick T (2022) Space-time fluid-structure interaction: formulation and dg(0) time discretization. Oslo ECCOMAS proceedings. https://doi.org/10.23967/eccomas.2022.257
Khimin D, Steinbach M, Wick T (2022) Space-time formulation, discretization, and computational performance studies for phase-field fracture optimal control problems. J Comput Phys 66:111554
Khimin D, Steinbach M, Wick T (2023) Space-time mixed system formulation of phase-field fracture optimal control problems. J Optim Theory Appl. https://doi.org/10.1007/s10957-023-02272-7
Kikuchi N, Oden J (1988) Contact problems in elasticity. Studies in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia
Kinderlehrer D, Stampacchia G (2000) An introduction to variational inequalities and their applications. Classics in Applied Mathematics. Society for Industrial and Applied Mathematics
Ladyzhenskaja O, Solonnikov V, Uralceva N (1968) Linear and quasi-linear equations of parabolic type. Translations of mathematical monographs, vol 23. AMS
Lagrange J (1811) Mécanique analytique. Paris
Langer U, Steinbach O (eds) (2019) Space-time methods: application to partial differential equations, volume 25 of Radon series on computational and applied mathematics. de Gruyter, Berlin
Larsson S, Nochetto R, Sauter S, Wieners C (2022) Space-time methods for time-dependent partial differential equations. Oberwolfach Rep 6(1):1–80
Lions JL, Magenes E (1972) Non-homogeneous boundary value problems and applications. Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen Band 181. Springer, Berlin
Mariano PM (ed) (2021) Variational views in mechanics. Adv. Mech. Math., Birkhäuser, Cham
Dal Maso G, DeSimone A, Mora MG (2006) Quasistatic evolution problems for linearly elastic-perfectly plastic materials. Arch Rational Mech Anal 180:237–291
Matthies H, Strang G, Christiansen E (1979) Energy methods in finite element analysis, chapter the saddle-point of a differential program. Wiley, New York, pp 309–318
Miehe C (2002) Strain-driven homogenization of inelastic microstructures and composites based on an incremental variational formulation. Int J Numer Methods Eng 55(11):1285–1322
Miehe C (2011) A multi-field incremental variational framework for gradient-extended standard dissipative solids. J Mech Phys Solids 59(4):898–923
Miehe C, Welschinger F, Hofacker M (2010) Thermodynamically consistent phase-field models of fracture: variational principles and multi-field FE implementations. Int J Numer Methods Eng 83(10):1273–1311
Mikelić A, Wheeler MF, Wick T (2015) A quasi-static phase-field approach to pressurized fractures. Nonlinearity 28(5):1371–1399
Nitsche J (1971) Über ein Variationsprinzip zur Lösung von Dirichlet-Problemen bei Verwendung von Teilräumen, die keinen Randbedingungen unterworfen sind. Abh Math Sem Univ Hamburg 36:9–15
Nocedal J, Wright SJ (2006) Numerical optimization. Springer series in operations research and financial engineering
Oden JT (1969) A general theory of finite elements. II. Applications. Int J Numer Methods Eng 1:247–259
Oden JT (2018) Adaptive multiscale predictive modelling. Acta Numer 27:353–450
Peters D, Izadpanah A (1988) \(hp\)-version finite elements for the space-time domain. Comput Mech 3:73–88
Pham K, Marigo J-J (2010) Approche variationnelle de l’endommagement: I. Les concepts fondamentaux. Comptes Rendus Mécanique 338(4):191–198
Pham K, Marigo J-J (2010) Approche variationnelle de l’endommagement: II. Les modèles à gradient. Comptes Rendus Mécanique 338(4):199–206
Ramm E, Rank E, Rannacher R, Schweizerhof K, Stein E, Wendland W, Wittum G, Wriggers P, Wunderlich W (2003) Error-controlled adaptive finite elements in solid mechanics. Wiley, New York
Rannacher R (2017) Numerik gewöhnlicher Differentialgleichungen. Heidelberg University Publishing
Rannacher R (2017) Probleme der Kontinuumsmechanik und ihre numerische Behandlung. Heidelberg University Publishing
Rannacher R, Suttmeier F-T (1999) A posteriori error estimation and mesh adaptation for finite element models in elasto-plasticity. Comput Methods Appl Mech Eng 176(1–4):333–361
Richter T, Wick T (2017) Einführung in die Numerische Mathematik: Begriffe, Konzepte und zahlreiche Anwendungsbeispiele. Springer, Berlin
Rivière B (2008) Discontinuous Galerkin methods for solving elliptic and parabolic equations: theory and implementation. SIAM
Saad Y (2003) Iterative methods for sparse linear systems. SIAM, Philadelphia
Schafelner A (2021) Space-time finite element methods. PhD thesis, Johannes Kepler University Linz
Schmich M, Vexler B (2008) Adaptivity with dynamic meshes for space-time finite element discretizations of parabolic equations. SIAM J Sci Comput 30(1):369–393
Simo JC, Hughes TJ (2006) Computational inelasticity, vol 7. Springer, Berlin
Stakgold I, Holst M (2011) Green’s functions and boundary value problems. Wiley, New York
Temam R (2018) Mathematical problems in plasticity. Dover, New York
Tezduyar T, Takizawa K (2019) Space-time computations in practical engineering applications: a summary of the 25-year history. Comput Mech 63:747–753
Tröltzsch F (2009) Optimale Steuerung partieller Differentialgleichungen: Theorie, Verfahren und Anwendungen, 2nd edn. Vieweg und Teubner, Wiesbaden
Werner D (2004) Funktionalanalysis. Springer, Berlin
Wick T (2020) Multiphysics phase-field fracture: modeling, adaptive discretizations, and solvers. De Gruyter, Berlin
Wick T (2022) Space-time methods: formulations, discretization, solution, goal-oriented error control and adaptivity, Compact Textbooks in Mathematics. Springer. https://thomaswick.org/links/Wi22_st_book_preprint_Nov_2022.pdf (to appear)
Wloka J (1987) Partial differential equations. Cambridge University Press, Cambridge
Wriggers P (2008) Nonlinear finite element methods. Springer, Berlin
Wu J-Y, Nguyen VP, Thanh Nguyen C, Sutula D, Bordas S, Sinaie S (2019) Phase field modelling of fracture. Advances in Applied Mechanics, vol 53. https://www.sciencedirect.com/science/article/pii/S0065215619300134
Zhang F (ed) (2005) The Schur complement and its applications. Numerical methods and algorithms, vol 4. Springer, Berlin
Acknowledgements
This work is supported by the Deutsche Forschungsgemeinschaft (DFG) under Germany’s Excellence Strategy within the cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453). Moreover, we acknowledge the International Research Training Group 2657 (Deutsche Forschungsgemeinschaft (DFG) - Projektnummer 433082294) in which both authors and their groups investigate modern modeling and discretization space-time methods for high-dimensional problems. The authors furthermore thank V. Meine and D.R. Jantos for their support in the creation of the plots. We thank both reviewers for their numerous questions that helped to significantly improve the manuscript.
Funding
Open Access funding enabled and organized by Projekt DEAL.
A Strong forms
We collect in this appendix the strong form formulations for all models discussed. These are obtained by applying the fundamental lemma of the calculus of variations to the respective weak formulations given above.
1.1 A.1 Strong form of the elastic wave propagation (Sec. 5)
Problem 1
Let \(\varvec{b}^\star :\Omega \times I\rightarrow {\mathbb {R}}^d\) be given volume forces, \(\varvec{t}^\star :\partial \Omega _{\textrm{N},\varvec{u}}\times I\rightarrow {\mathbb {R}}^d\) given surface tractions, \(\varvec{u}^\star :\partial \Omega _{\textrm{D},\varvec{u}}\times I\rightarrow {\mathbb {R}}^d\) spatial Dirichlet boundary data, and \(\varvec{v}^\star :\Omega \times \partial I\rightarrow {\mathbb {R}}^d\) temporal boundary data. Furthermore, let \(\theta ^\star :\partial \Omega _{\textrm{D},\theta }\times I\rightarrow {\mathbb {R}}\) be Dirichlet boundary conditions for the temperature and \(\theta ^\star _0:\Omega \rightarrow {\mathbb {R}}\) be the initial condition for the temperature. The Euler-Lagrange equations for the displacements \(\varvec{u}:\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}^d\) and velocities \(\varvec{v}:\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}^d\) read
with the stress
and the governing equation for the temperature \(\theta :\bar{\Omega }\times \bar{I} \rightarrow {\mathbb {R}}\) reads
where \(\varvec{I}:{\mathbb {C}}:\dot{\varvec{\varepsilon }} = {\mathbb {C}}_{iiop}\dot{\varepsilon }_{op}\) and where the heat flux vector can be modeled using Fourier’s law, i.e., \(\varvec{q}^\star =-\omega \,\nabla \theta \), see (12). Non-adiabatic processes, i.e., \(\varvec{n}\cdot \varvec{q}^\star \not =0\) for \((\varvec{x},t)\in \partial \Omega _{\textrm{N},\theta }\times I\), can easily be accounted for by suitably extending the extended Hamilton functional \({\mathcal {H}}\) in (31). More details are given in [46].
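Since the displayed equations are not reproduced here, the following is a hedged sketch of the standard strong form of linear elastic wave propagation in first-order (displacement-velocity) form, with the temporal boundary data \(\varvec{v}^\star\) prescribed on \(\Omega \times \partial I\) as in the paper's initial and end time velocity conditions; the precise thermo-mechanical coupling terms of the original model may differ:

```latex
% Hedged sketch (not the paper's verbatim system): balance of linear momentum
% in first-order form with linear elastic stress sigma = C : eps(u).
% \varvec is the document's bold-vector macro (Springer class).
\begin{alignat*}{2}
  \rho\,\dot{\varvec{v}} - \nabla \cdot \varvec{\sigma}
    &= \varvec{b}^\star &\quad& \text{in } \Omega \times I,\\
  \dot{\varvec{u}} - \varvec{v}
    &= \varvec{0} &\quad& \text{in } \Omega \times I,\\
  \varvec{u}
    &= \varvec{u}^\star &\quad& \text{on } \partial\Omega_{\textrm{D},\varvec{u}} \times I,\\
  \varvec{\sigma} \cdot \varvec{n}
    &= \varvec{t}^\star &\quad& \text{on } \partial\Omega_{\textrm{N},\varvec{u}} \times I,\\
  \varvec{v}
    &= \varvec{v}^\star &\quad& \text{on } \Omega \times \partial I.
\end{alignat*}
```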
1.2 A.2 Strong form of visco-elasticity (Sec. 6)
Problem 2
Let \(\varvec{b}^\star :\Omega \times I\rightarrow {\mathbb {R}}^d\) be given volume forces, \(\varvec{t}^\star :\partial \Omega _{\textrm{N},\varvec{u}}\times I\rightarrow {\mathbb {R}}^d\) given surface tractions, \(\varvec{u}^\star :\partial \Omega _{\textrm{D},\varvec{u}}\times I\rightarrow {\mathbb {R}}^d\) spatial Dirichlet boundary data, and \(\varvec{v}^\star :\Omega \times \partial I\rightarrow {\mathbb {R}}^d\) temporal boundary data. Furthermore, let \(\theta ^\star :\partial \Omega _{\textrm{D},\theta }\times I\rightarrow {\mathbb {R}}\) be Dirichlet boundary conditions for the temperature, \(\theta ^\star _0:\Omega \rightarrow {\mathbb {R}}\) be the initial condition for temperature, and \(\varvec{\varepsilon }^{\textrm{v},\star }_0:\Omega \rightarrow {\mathbb {R}}^{d\times d}\) the matrix-valued initial condition for the viscous strain. The Euler-Lagrange equations for the displacements \(\varvec{u}:\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}^d\) and the velocities \(\varvec{v}:\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}^d\) read
with the stress
and the governing equation for the temperature \(\theta :\bar{\Omega }\times \bar{I} \rightarrow {\mathbb {R}}\) reads
where again the heat flux vector \(\varvec{q}^\star \) is modeled using Fourier’s law in (12), i.e., \(\varvec{q}^\star =-\omega \,\nabla \theta \). These equations are complemented by the evolution equation for the viscous strain: find \(\varvec{\varepsilon }^\textrm{v}:\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}^{d\times d}\) such that
which all result from the stationarity of the Hamilton functional. Usually, we choose \(\varvec{\varepsilon }^{\textrm{v},\star }_0={\varvec{0}}\).
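As the displayed evolution equation did not survive extraction, the following LaTeX sketch shows a common linear viscous evolution law of the type described; the viscosity parameter \(\eta\) and the deviatoric driving force are assumptions for illustration and need not match the original model:

```latex
% Hedged sketch: a common linear evolution law for the viscous strain with
% viscosity eta; the specific driving force in the original may differ.
\dot{\varvec{\varepsilon}}^{\textrm{v}}
  = \frac{1}{\eta}\,\operatorname{dev}\varvec{\sigma}
  \quad \text{in } \Omega \times I,
\qquad
\varvec{\varepsilon}^{\textrm{v}}(\cdot,0)
  = \varvec{\varepsilon}^{\textrm{v},\star}_0 = \varvec{0}.
```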
1.3 A.3 Strong form of elasto-plasticity with hardening (Sec. 7)
Problem 3
Let \(\varvec{b}^\star :\Omega \times I\rightarrow {\mathbb {R}}^d\) be given volume forces, \(\varvec{t}^\star :\partial \Omega _{\textrm{N},\varvec{u}}\times I\rightarrow {\mathbb {R}}^d\) given surface tractions, \(\varvec{u}^\star :\partial \Omega _{\textrm{D},\varvec{u}}\times I\rightarrow {\mathbb {R}}^d\) spatial Dirichlet boundary data, and \(\varvec{v}^\star :\Omega \times \partial I\rightarrow {\mathbb {R}}^d\) temporal boundary data. Furthermore, let \(\theta ^\star :\partial \Omega _{\textrm{D},\theta }\times I\rightarrow {\mathbb {R}}\) be Dirichlet boundary conditions for the temperature, \(\theta ^\star _0:\Omega \rightarrow {\mathbb {R}}\) be the initial condition for the temperature, and \(\varvec{\varepsilon }^{\textrm{p},\star }_0:\Omega \rightarrow {\mathbb {R}}^{d\times d}\) the matrix-valued initial condition for the plastic strain. The Euler-Lagrange equations for the displacements \(\varvec{u}:\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}^d\) and the velocities \(\varvec{v}:\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}^d\) read
with the stress
and for the temperature \(\theta :\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}\)
where the heat flux vector \(\varvec{q}^\star \) can be modeled by Fourier’s law (12), i.e., \(\varvec{q}^\star =-\omega \,\nabla \theta \). The system of equations is closed by the differential algebraic equations for the plastic strains: find \(\varvec{\varepsilon }^\textrm{p}:\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}^{d\times d}\) such that
As in visco-elasticity, we usually choose \(\varvec{\varepsilon }^{\textrm{p},\star }_0={\varvec{0}}\).
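The differential algebraic structure mentioned above can be sketched as follows; this is the classical associative flow rule with Karush-Kuhn-Tucker conditions, where the yield function \(\Phi\) and plastic multiplier \(\lambda\) are generic placeholders and the hardening contributions of the original model are omitted:

```latex
% Hedged sketch: associative flow rule with plastic multiplier lambda and
% yield function Phi (e.g., of von Mises type); hardening terms omitted.
\dot{\varvec{\varepsilon}}^{\textrm{p}}
  = \lambda\,\frac{\partial \Phi}{\partial \varvec{\sigma}},
\qquad
\lambda \ge 0, \quad \Phi(\varvec{\sigma}) \le 0, \quad
\lambda\,\Phi(\varvec{\sigma}) = 0,
\qquad
\varvec{\varepsilon}^{\textrm{p}}(\cdot,0) = \varvec{0}.
```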
1.4 A.4 Strong form of gradient-enhanced damage modeling (Sec. 8)
Problem 4
Let \(\varvec{b}^\star :\Omega \times I\rightarrow {\mathbb {R}}^d\) be given volume forces, \(\varvec{t}^\star :\partial \Omega _{\textrm{N},\varvec{u}}\times I\rightarrow {\mathbb {R}}^d\) given surface tractions, \(\varvec{u}^\star :\partial \Omega _{\textrm{D},\varvec{u}}\times I\rightarrow {\mathbb {R}}^d\) spatial Dirichlet boundary data, and \(\varvec{v}^\star :\Omega \times \partial I\rightarrow {\mathbb {R}}^d\) temporal boundary data. Furthermore, let \(\theta ^\star :\partial \Omega _{\textrm{D},\theta }\times I\rightarrow {\mathbb {R}}\) be Dirichlet boundary conditions for the temperature, \(\theta ^\star _0:\Omega \rightarrow {\mathbb {R}}\) be the initial condition for the temperature, and \(f^\star _0\) the scalar-valued initial condition for the damage. The Euler-Lagrange equations for the displacements \(\varvec{u}:\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}^d\) and the velocities \(\varvec{v}:\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}^d\) read
with the stress
the temperature \(\theta :\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}\) such that
where the heat flux vector \(\varvec{q}^\star \) is modeled by Fourier’s law (12), i.e., \(\varvec{q}^\star =-\omega \,\nabla \theta \), and the damage function \(f:\bar{\Omega }\times \bar{I}\rightarrow {\mathbb {R}}^+\) satisfies the following complementarity system
From the definition \(f(d)=\exp (-d)\), it follows from \(\dot{d} \ge 0\) that \(\dot{f} \le 0\). Moreover, the complementarity system (112) and the constraint \(\dot{f} \le 0\) are closely related to regularized phase-field fracture (e.g., [86]). There, the constraint is formulated in terms of the damage function f instead, and \(\Phi ^\textrm{d}\) is defined slightly differently [86, Section 4.5.3].
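The sign relation \(\dot{f} \le 0\) follows directly by the chain rule:

```latex
% From f(d) = exp(-d): since f > 0 and \dot{d} >= 0, the rate of f is nonpositive.
\dot{f} = \frac{\textrm{d}}{\textrm{d}t}\exp(-d)
        = -\exp(-d)\,\dot{d}
        = -f\,\dot{d} \;\le\; 0.
```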
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Junker, P., Wick, T. Space-time variational material modeling: a new paradigm demonstrated for thermo-mechanically coupled wave propagation, visco-elasticity, elasto-plasticity with hardening, and gradient-enhanced damage. Comput Mech 73, 365–402 (2024). https://doi.org/10.1007/s00466-023-02371-2