Abstract
Different representations of linear dissipative Hamiltonian and port-Hamiltonian differential–algebraic equation (DAE) systems are presented and compared. Using global geometric and algebraic points of view, translations between the different representations are presented. Characterizations are also derived for when a general DAE system can be transformed into one of these structured representations. Approaches for computing the structural information and the described transformations are derived that can be directly implemented as numerical methods. The results are demonstrated with a large number of examples.
1 Introduction
Since the original introduction of the energy-based modeling concept of port-Hamiltonian (pH) systems in [28, 35], see also [6, 22, 24, 33, 34, 38, 40], various novel definitions and formulations have been made to incorporate, on the one hand, systems defined on manifolds, see [14, 41], and, on the other hand, systems with algebraic constraints defined in the form of differential–algebraic equations (DAEs), see [4, 31]. Constraints in pH systems typically arise in electrical or transport networks, where Kirchhoff’s laws constrain the models at network nodes, as balance equations in chemical engineering, or as holonomic or nonholonomic constraints in mechanical multibody systems. Furthermore, they arise in the interconnection of pH systems when the interface conditions are explicitly formulated and enforced via Lagrange multipliers, see [4, 31, 39–42] for a variety of examples, or the recent survey [32].
In this paper, we recall different model representations of pHDAEs and, respectively, dissipative Hamiltonian DAEs (dHDAEs), and provide a systematic geometric and algebraic theory that also yields a ’translation’ between the different representations. We also discuss approaches for explicitly computing the structural information that can be directly implemented as numerical methods. We mainly restrict ourselves to finite-dimensional linear time-invariant systems without inputs and outputs, but indicate in several places where extensions to general systems are possible. We also discuss only real systems, although most of the results can be formulated for complex problems in a similar way.
For general linear time-invariant homogeneous differential–algebraic equations
where \(E,A\in {\mathbb {R}}^{n,n}\) (without additional geometric and algebraic structures), as well as for their extensions to linear time-varying and nonlinear systems, the theory is well understood [25]. On the other hand, in contrast with general DAEs (1), the DAEs that arise in energy-based modeling have extra structure and symmetries, and it is natural to identify and exploit this extra structure for the purposes of analysis, simulation and control.
In this paper, we will restrict our attention to regular systems, i.e., systems with \(\det (\lambda E-A)\) not identically zero, although many of the results are expected to extend to over- and underdetermined systems; cf. the Conclusions. The structural properties of regular systems are characterized via the (real) Weierstraß canonical form of the matrix pair (E, A), see, e.g., [20]. To determine this canonical form, one computes nonsingular matrices U, W such that
where \({\mathcal {J}}\) is in real Jordan canonical form and \({\mathcal {N}}\) is nilpotent and in Jordan canonical form; the nilpotent part \({\mathcal {N}}\) is associated with the eigenvalue \(\infty \).
An important quantity that will be used throughout the paper is the size \(\nu \) of the largest Jordan block in \({\mathcal {N}}\) (the index of nilpotency), which is called the (differentiation) index of the pair (E, A) as well as of the associated DAE; by convention, \(\nu =0\) if E is invertible.
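As a small illustration (ours, not from the paper), for a pair already in Weierstraß form \(E=\textrm{diag}(I,{\mathcal {N}})\), \(A=\textrm{diag}({\mathcal {J}},I)\), the index is exactly the nilpotency index of \({\mathcal {N}}\), i.e., the smallest k with \({\mathcal {N}}^k=0\). A minimal numpy sketch with a hypothetical 3-by-3 Jordan block:

```python
import numpy as np

def nilpotency_index(N, tol=1e-12):
    """Smallest k with N^k = 0; for a pair in Weierstrass form
    E = diag(I, N), A = diag(J, I) this equals the (differentiation) index."""
    n = N.shape[0]
    P = np.eye(n)
    for k in range(1, n + 1):
        P = P @ N
        if np.linalg.norm(P) <= tol:
            return k
    raise ValueError("matrix is not nilpotent")

# single 3x3 Jordan block at the eigenvalue infinity -> index 3
N = np.diag([1.0, 1.0], k=1)               # 3x3 upper shift matrix
print(nilpotency_index(N))                 # -> 3
print(nilpotency_index(np.zeros((2, 2))))  # -> 1
```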
The goal of this paper is to study in detail the relationships and differences between different classes of (extended) Hamiltonian DAEs, without making any a priori invertibility assumptions. Furthermore, we exploit at the same time a geometric point of view on Hamiltonian DAEs using Lagrange and Dirac structures (as well as maximally monotone subspaces; cf. Sect. 4.2), and an algebraic point of view using (computationally feasible) condensed forms. From a DAE analysis point of view, similar to the result already known for Hamiltonian DAEs of the form (2) in [29], we also show that the index of an (extended) dissipative Hamiltonian DAE can be at most two. Moreover, we will show that index-two algebraic constraints may only arise from singularity of P, i.e., from the Lagrange structure, and thus are strictly linked to singular Hamiltonians.
Another important question that we will answer is the characterization of when a general DAE is equivalent to a structured DAE of the form (2) or (10).
The paper is organized as follows.
In Sect. 2, we discuss dissipative Hamiltonian DAEs and present several examples. The concept of extended Hamiltonian DAEs is discussed in Sect. 3. In Sect. 4, we present the geometric theory of dissipative Hamiltonian DAEs; we introduce Dirac and Lagrange structures as well as maximally monotone subspaces to incorporate dissipation. Section 5 presents different coordinate representations and their relation. To derive algebraic characterizations, in Sect. 6 we present different types of equivalence transformations and corresponding condensed forms for the different classes of (extended dissipative) Hamiltonian DAEs. In Sect. 7, we analyze when general linear DAEs can be represented as (extended dissipative) Hamiltonian systems. In all sections, we present examples that illustrate the properties of the different representations. In several appendices, we present proofs that can be implemented as numerically stable algorithms.
2 Dissipative Hamiltonian DAEs
A well-established representation of DAE systems with symmetry structure is that of dissipative Hamiltonian DAEs (dHDAEs), which has been studied in detail in [4, 29, 30]. These systems have the form
where \(J=-J^\top , R=R^\top \in {\mathbb {R}}^{\ell ,\ell }\), and \(E,Q\in {\mathbb {R}}^{\ell ,n}\) with \(E^\top Q=Q^\top E\). In this representation, the associated Hamilton function (Hamiltonian) is given by the quadratic form
which (as an energy) is typically nonnegative for all z, but more general Hamiltonians also arise in practice. Note that in the ODE case, i.e., if \(E=I\), then this reduces to the standard quadratic Hamiltonian \({\mathcal {H}}^z(z)=\frac{1}{2} z^\top Q z\).
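For concreteness, the structure conditions can be checked numerically. The sketch below assumes the explicit form \(E\dot{z}=(J-R)Qz\) for (2), as used in the cited literature, with matrices of the mass–spring–damper system of Example 9 (the parameter values m, k, d are hypothetical); along solutions one then has \(\frac{d}{dt}{\mathcal {H}}^z(z)=-(Qz)^\top R\,(Qz)\le 0\).

```python
import numpy as np

# Mass-spring-damper in coordinates z = (q, p): H(q, p) = k q^2/2 + p^2/(2m).
# Hypothetical parameters; the dHDAE form E z' = (J - R) Q z is assumed.
m, k, d = 2.0, 3.0, 0.5
E = np.eye(2)
Q = np.diag([k, 1.0 / m])                    # gradient of H is Q z
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = np.diag([0.0, d])

# structure conditions of (2)
assert np.allclose(J, -J.T)                                              # J skew-symmetric
assert np.allclose(R, R.T) and np.all(np.linalg.eigvalsh(R) >= -1e-12)   # R = R^T >= 0
assert np.allclose(E.T @ Q, Q.T @ E)                                     # E^T Q symmetric

# dissipation inequality dH/dt = -(Qz)^T R (Qz) <= 0 at random states
rng = np.random.default_rng(0)
for _ in range(5):
    z = rng.standard_normal(2)
    e = Q @ z                                # effort (gradient of H)
    dH = e @ ((J - R) @ e)                   # = e^T E z' since E = I here
    assert dH <= 1e-12
print("dHDAE structure conditions verified")
```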
Remark 1
The equivalent formulations \(\frac{d}{dt}(Ez)\) and \(E\dot{z}\) are both used in the literature and obviously lead to the same results in the linear constant-coefficient case, but not anymore in the linear time-varying or nonlinear case. Actually, for the structured DAEs considered in this paper, E is the product \(E=KP\) of two matrices K and P, and for generalizations it is appropriate to consider \(K\frac{d}{dt}(Pz)\).
Furthermore, the dissipation term R is typically positive semidefinite, denoted as \(R\ge 0\), but we will first discuss the case \(R=0\), which we call a Hamiltonian DAE (HDAE).
Remark 2
The definition of dHDAEs in (2) can be easily extended to systems with ports, in which case it takes the form
with
and
see [4]. We will call these systems dissipative pHDAE (dpHDAE) systems.
Remark 3
Note that by the symmetry of \(E^\top Q\) the Hamiltonian \({\mathcal {H}}^z(z)=\frac{1}{2} z^\top E^\top Q z\) can be also written as a function of Qz or Ez.
Remark 4
In standard port-Hamiltonian modeling, the matrix R in (2) is always assumed to be symmetric (as well as positive semidefinite). Alternatively, one could start from a general matrix R only satisfying \(R+R^\top \ge 0\). Then, by splitting R into its symmetric and skew-symmetric part, one could add the skew-symmetric part to the skew-symmetric structure matrix J and continue as in (2) with the symmetric part \(\frac{1}{2} (R+R^\top )\) of R.
Structured DAEs of the form (2) arise naturally in all physical domains.
Example 5
The subclass of RLC networks as considered in, e.g., [4, 12, 19] has the form
where the real positive definite diagonal matrices L, C, G describe inductances, capacitances and conductances, respectively. The matrices \(D_C, D_L, D_R, D_S\) are the parts of the incidence matrix corresponding, respectively, to the capacitors, inductors, resistors (conductors) and current sources of the circuit graph, where \(D_S\) has full column rank. Furthermore, V are the node potentials and \(I_L, I_S\) denote the currents through the inductors and sources, respectively. This system has the form (2), with \(E=E^\top \ge 0\), Q equal to the identity matrix, and where J and \(R\) are defined to be the skew-symmetric and symmetric part, respectively, of the matrix on the right-hand side of (5). The Hamiltonian is given by \({\mathcal {H}}(V,I_L)= \frac{1}{2} V^\top D_C C D_C^\top V+ \frac{1}{2} I_L^\top L I_L\), and it does not involve the variables \(I_S\), which are in the kernel of E.
Example 6
Space discretization of the Stokes equation in fluid dynamics, see, e.g., [18], leads to a dissipative Hamiltonian system
where \(A=A^\top \) is a positive semidefinite discretization of the negative Laplace operator, B is a discretized gradient and \(M=M^\top \) is a positive definite mass matrix. The homogeneous system has the form (2), with
The Hamiltonian is given by \({\mathcal {H}}= \frac{1}{2} v^\top _h M v_h\); it does not involve the variables \(p_h\).
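One plausible block realization of this structure (the displayed equation is not reproduced in this text, so the block layout below is our assumption) writes the homogeneous system as \(M\dot{v}_h=-Av_h-Bp_h\), \(0=B^\top v_h\), with \(Q=I\). A sketch with hypothetical small stand-in matrices:

```python
import numpy as np

# hypothetical tiny discretization: 3 velocity and 2 pressure unknowns
A = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])  # discrete Laplacian, spd
B = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])                   # discrete gradient
M = np.eye(3)                                                          # mass matrix

# assumed block realization of (2):  M v' = -A v - B p,  0 = B^T v,  Q = I
Z = np.zeros
E = np.block([[M, Z((3, 2))], [Z((2, 3)), Z((2, 2))]])
J = np.block([[Z((3, 3)), -B], [B.T, Z((2, 2))]])
R = np.block([[A, Z((3, 2))], [Z((2, 3)), Z((2, 2))]])

assert np.allclose(E, E.T) and np.all(np.linalg.eigvalsh(E) >= -1e-12)  # E = E^T >= 0
assert np.allclose(J, -J.T)                                             # J skew-symmetric
assert np.all(np.linalg.eigvalsh(R) >= -1e-12)                          # R >= 0
print("Stokes dHDAE block structure verified")
```

On the constraint manifold \(B^\top v_h=0\), one recovers \(\frac{d}{dt}{\mathcal {H}}=-v_h^\top A v_h\le 0\), consistent with the Hamiltonian not involving \(p_h\).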
Example 7
Space discretization of the Euler equation describing the acoustic wave propagation in a gas pipeline network [15, 16] leads to a DAE
where \(p_h\) is the discretized pressure, \(q_h\) is a discretized flux and \(\lambda \) is a Lagrange multiplier that penalizes the violation of the conservation of mass and momentum at the pipeline nodes.
The homogeneous system has the form (2) with \(Q=I\) and Hamiltonian \( {\mathcal {H}}(p_h,q_h) =\frac{1}{2} ( p_h^\top M_1 p_h+ q_h^\top M_2 q_h)\). The Hamiltonian does not involve the Lagrange multiplier \(\lambda \).
Example 8
Consider a linear mechanical system (with q denoting the vector of position coordinates) \(M\ddot{q} +D\dot{q} + Wq= f\), together with kinematic constraints \(G\dot{q}=0\), see, e.g., [17]. The constraints give rise to constraint forces \(G^\top \lambda \), with \(\lambda \) a vector of Lagrange multipliers. This yields the dynamics \(M\ddot{q} +D\dot{q} + Wq = f+G^\top \lambda \), and the resulting system can be written in first-order form as
Here, \(E=E^\top \) and \(Q=Q^\top \) are commuting matrices, E, W, and \(R=R^\top \) are positive semidefinite, so the homogeneous system is of the form (2). The Hamiltonian is given by \(\frac{1}{2} ( \dot{q}^\top M \dot{q} + q^\top W q)\) (kinetic plus potential energy) and does not involve the Lagrange multiplier \(\lambda \).
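A plausible first-order realization with exactly the stated properties (the displayed equation is not reproduced here, so this block layout is our assumption) uses \(z=(q,\dot{q},\lambda )\), \(E=\textrm{diag}(I,M,0)\), \(Q=\textrm{diag}(W,I,I)\), and the matrices below with hypothetical data:

```python
import numpy as np

# hypothetical data: 2 positions, 1 kinematic constraint
M = np.diag([2.0, 1.0])                    # mass matrix, spd
D = np.diag([0.5, 0.5])                    # damping, psd
W = np.array([[2.0, -1.0], [-1.0, 2.0]])   # stiffness, psd
G = np.array([[1.0, 1.0]])                 # constraint G qdot = 0

Z = np.zeros
# assumed first-order form E z' = (J - R) Q z for z = (q, qdot, lambda):
#   q' = qdot,  M qdot' = -W q - D qdot + G^T lambda,  0 = -G qdot
E = np.block([[np.eye(2), Z((2, 2)), Z((2, 1))],
              [Z((2, 2)), M,         Z((2, 1))],
              [Z((1, 2)), Z((1, 2)), Z((1, 1))]])
Q = np.block([[W,         Z((2, 2)), Z((2, 1))],
              [Z((2, 2)), np.eye(2), Z((2, 1))],
              [Z((1, 2)), Z((1, 2)), np.eye(1)]])
J = np.block([[Z((2, 2)),  np.eye(2), Z((2, 1))],
              [-np.eye(2), Z((2, 2)), G.T],
              [Z((1, 2)),  -G,        Z((1, 1))]])
R = np.block([[Z((2, 2)), Z((2, 2)), Z((2, 1))],
              [Z((2, 2)), D,         Z((2, 1))],
              [Z((1, 2)), Z((1, 2)), Z((1, 1))]])

assert np.allclose(E @ Q, Q @ E)            # E and Q commute
assert np.allclose(E.T @ Q, (E.T @ Q).T)    # E^T Q symmetric
assert np.allclose(J, -J.T)
for S in (E, W, R):                         # E, W, R positive semidefinite
    assert np.all(np.linalg.eigvalsh((S + S.T) / 2) >= -1e-12)
print("constrained mechanical dHDAE structure verified")
```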
As indicated in the presented examples, singularity of E, and thus the presence of algebraic constraints, implies that the Hamiltonian (the total stored energy) \({\mathcal {H}}^z(z)=\frac{1}{2} z^\top E^\top Q z\) does not involve all of the variables contained in the vector z. In fact, in some of the examples, the variables that do not show up in the Hamiltonian are Lagrange multipliers. Conversely, singularity of E may arise as a limiting situation of otherwise regular Hamiltonians, as shown by the following simple example.
Example 9
Consider the model of a standard mass–spring–damper system with model equation
and Hamiltonian \({\mathcal {H}}(q,p) = \frac{1}{2} kq^2 + \frac{p^2}{2\,m}\).
To compute the limit \(m \rightarrow 0\), we rewrite the system in coordinates q and \(v:=\frac{p}{m}\) as
with Hamiltonian \({\mathcal {H}}(q,v)= \frac{1}{2} kq^2 + \frac{1}{2}mv^2\). For \(m \rightarrow 0\), this converges to the DAE system
which is of (differentiation) index one if \(d \ne 0\), and of index two if \(d=0\). The limiting Hamiltonian \({\mathcal {H}}(q,v)= \frac{1}{2} kq^2\) is not a function of v anymore.
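These index claims can be checked numerically. Below is a minimal sketch (ours, not from the paper) of the classical shuffle algorithm for the differentiation index of a regular pair (E, A), applied to the limiting pencil of the system above, written as \(\dot{q}=v\), \(0=-kq-dv\):

```python
import numpy as np

def dae_index(E, A, tol=1e-9):
    """Differentiation index of a regular pencil (E, A) for E x' = A x,
    via the shuffle algorithm: row-compress E, differentiate the algebraic
    rows (i.e., move them from A to E), repeat until E is nonsingular."""
    n = E.shape[0]
    E, A = E.astype(float), A.astype(float)
    for nu in range(n + 1):
        r = np.linalg.matrix_rank(E, tol)
        if r == n:
            return nu
        U = np.linalg.svd(E)[0]            # U^T E has its zero rows last
        Et, At = U.T @ E, U.T @ A
        E = np.vstack([Et[:r], At[r:]])    # shuffled (differentiated) system
        A = np.vstack([At[:r], np.zeros((n - r, n))])
    raise ValueError("pencil appears to be singular")

k = 1.0
E_lim = np.diag([1.0, 0.0])
A = lambda d: np.array([[0.0, 1.0], [-k, -d]])
print(dae_index(E_lim, A(1.0)))  # d != 0 -> index 1
print(dae_index(E_lim, A(0.0)))  # d  = 0 -> index 2
```

The numerical rank decisions make this a sketch rather than a robust method; the condensed forms derived later in the paper are the numerically sound route.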
Alternatively, we can compute the limit for \(k \rightarrow \infty \). For this, we rewrite the system in coordinates \(F:=kq\) and p as
with Hamiltonian \({\mathcal {H}}(F,p)= \frac{1}{2k} F^2 + \frac{1}{2\,m}p^2\). For \(k \rightarrow \infty \), this converges to the DAE system
which is of index two for any d, with Hamiltonian \({\mathcal {H}}(F,p)= \frac{1}{2m}p^2\), which does not involve F.
We may also take the limits \(m \rightarrow 0\) and \(k \rightarrow \infty \) simultaneously, by rewriting the system in the variables F, v, with Hamiltonian \({\mathcal {H}}(F,v)= \frac{1}{2k} F^2 + \frac{1}{2}mv^2\). This leads to the purely algebraic system
with zero Hamiltonian, and having a single solution \(v=0\), \(F=0\), irrespective of the damper.
3 Extended Hamiltonian DAEs
Classically, Hamiltonian systems and also their extension to systems with inputs and outputs, called port-Hamiltonian systems, see, e.g., [13, 22, 28, 35, 36, 38, 40], are defined by a Dirac structure, representing the power-conserving interconnection structure, an energy-dissipation relation, and a Hamilton function (Hamiltonian) capturing the total energy storage in the system. The Dirac structure formalizes the generalized junction structure known from port-based modeling theory. Importantly, the Dirac structure may entail linear constraints on the so-called effort variables, called effort constraints. Through the gradient vector of the Hamilton function, these effort constraints induce algebraic constraints on the state variables; see especially [31, 39] for a discussion of general nonlinear port-Hamiltonian DAEs. In the special case of a linear autonomous system without energy dissipation and without inputs and outputs, and after choosing a basis such that \({\mathcal {X}}={\mathbb {R}}^{n}\), the equations of motion take the DAE form
where the pair of matrices \(K,L\in {\mathbb {R}}^{n,n}\), satisfying \(KL^\top + LK^\top =0\) and \({{\,\textrm{rank}\,}}\left[ \begin{array}{cc} K&L \end{array}\right] = n\), describes the Dirac structure, and \({\mathcal {H}}^x(x) = \frac{1}{2}x^\top Q x\) defines the Hamilton function for some \(Q=Q^\top \). Denoting by \(\ker \) and \(\mathrm{im\,}\) the kernel and image of a linear map or its matrix representation, geometrically the Dirac structure is defined by the subspace \({\mathcal {D}} \subset {{\mathcal {X}}} \times {\mathcal {X}}^*\) given as \(\ker \left[ \begin{array}{cc} K&\quad L \end{array}\right] \), where \({\mathcal {X}}\) is the linear state space, and \({\mathcal {X}}^*\) its dual space.
Remark 10
More precisely, \({\mathcal {D}} \subset \hat{{\mathcal {X}}} \times {\mathcal {X}}^*\), where \(\hat{{\mathcal {X}}}\) is the tangent space to \({\mathcal {X}}\) at \(x \in {\mathcal {X}}\). However, since \({\mathcal {X}}\) is linear, tangent spaces at any \(x\in {\mathcal {X}}\) can be identified with each other and with \({\mathcal {X}}\).
Remark 11
In this paper, we frequently switch between coordinatefree representations with state space \({\mathcal {X}}\) and coordinate representations that are obtained for the case \({\mathcal {X}}={\mathbb {R}}^{n}\) after choosing a basis. Furthermore, in this case it will be tacitly assumed that the dual basis is chosen for the dual space \({\mathcal {X}}^*\). Although this is not the most general setup (from an abstract linear algebraic or functional analytic point of view), it makes the presentation of results much more convenient.
Algebraic constraints occur if the matrix K is singular, and are represented via
Singularity of K often results from the network structure as the following example demonstrates.
Example 12
Consider a general linear LC electrical circuit. Let the circuit graph be determined by an incidence matrix D, defining Kirchhoff’s current laws \(I \in \ker D\) and voltage laws \(V \in \mathrm{im\,}D^\top \). Split the currents I into currents \(I_C\) through the capacitors and currents \(I_L\) through the inductors, and the voltages V into voltages \(V_C\) across the capacitors and voltages \(V_L\) across the inductors. Furthermore, let \(I_C={\dot{q}}_C\) and \(V_L = {\dot{\varphi }}\), with q the vector of charges at the capacitors and \(\varphi \) the flux linkages of the inductors. Split the incidence matrix D accordingly as \(D= \left[ \begin{array}{cc} D_C&\quad D_L \end{array}\right] \). Then, Kirchhoff’s current laws take the form
Furthermore, let F be a maximal annihilator of \(D^\top \), i.e., \(\ker F= \mathrm{im\,}D^\top \). Then, Kirchhoff’s voltage laws are given as \(F \left[ \begin{array}{c} V_C \\ V_L \end{array}\right] = 0\), and after splitting \(F= \left[ \begin{array}{cc} F_C&F_L \end{array}\right] \) accordingly, we have
Writing the linear constitutive equations for the capacitors as \(q=C V_C\) for some positive definite diagonal capacitance matrix C and those for the inductors as \(\varphi =L I_L\) for some positive definite diagonal inductance matrix L, we finally obtain the system of equations
which is in the form (6) with Hamilton function \({\mathcal {H}}(q,\varphi )= \frac{1}{2}q^\top C^{-1} q + \frac{1}{2}\varphi ^\top L^{-1} \varphi \). It is easily checked that singularity of \(K =\left[ \begin{array}{cc} D_C &{}\quad 0 \\ 0 &{}\quad F_L \end{array}\right] \) corresponds to a parallel interconnection of capacitors or a series interconnection of inductors. (Note that since \(\ker F= \mathrm{im\,}D^\top \), the linear space \(\mathrm{im\,}F^\top = \ker D\) is spanned by the cycles of the circuit graph.) See, e.g., [23] for pHDAE modeling of electrical circuits.
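As a numerical illustration (a hypothetical circuit, with one consistent choice of signs and edge orientations on our part), take two capacitors in parallel and one inductor, all connected between a node and ground, so that the reduced incidence matrix is \(D=[D_C\;\;D_L]=[1\;\;1\;\;1]\):

```python
import numpy as np

# hypothetical circuit: two parallel capacitors and one inductor to ground;
# reduced incidence matrix D = [D_C  D_L], maximal annihilator F of D^T
D_C = np.array([[1.0, 1.0]])
D_L = np.array([[1.0]])
F = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])    # ker F = im D^T = span{(1,1,1)}
F_C, F_L = F[:, :2], F[:, 2:]

# kernel representation of the Dirac structure (one consistent sign choice)
K = np.block([[D_C, np.zeros((1, 1))], [np.zeros((2, 2)), F_L]])
L = np.block([[np.zeros((1, 2)), D_L], [F_C, np.zeros((2, 1))]])

assert np.allclose(K @ L.T + L @ K.T, 0)              # generalized skew-symmetry
assert np.linalg.matrix_rank(np.hstack([K, L])) == 3  # rank [K L] = n
# singular K: the parallel capacitors give a Dirac algebraic constraint
assert np.linalg.matrix_rank(K) == 2
assert np.allclose(K @ np.array([1.0, -1.0, 0.0]), 0)  # charge redistribution direction
print("LC-circuit Dirac structure verified")
```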
Motivated by [4], the port-Hamiltonian point of view on DAE systems was extended in [41] by replacing the gradient vector Qx of the Hamiltonian function \({\mathcal {H}}(x)=\frac{1}{2} x^\top Qx\) by a general Lagrangian subspace, called a Lagrange structure in the present paper, since we want to emphasize the similarity with Dirac structures. As we will see, this extension allows us to bridge the gap with the (dissipative) Hamiltonian formulation (4) of Sect. 2.
In the linear homogeneous case without dissipation, one starts with a Dirac structure \({\mathcal {D}} \subset {{\mathcal {X}}} \times {\mathcal {X}}^*\) and a Lagrange structure \({\mathcal {L}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\). The composition of the Dirac structure \({\mathcal {D}}\) and the Lagrange structure \({\mathcal {L}}\), over the shared variables \(e \in {\mathcal {X}}^*\), is then defined as
Substituting \(f={\dot{x}}\), this leads to the coordinatefree definition of the dynamics
In order to obtain a coordinate representation of the dynamics (8), the simplest option (later on in Sect. 5.1 we will discuss another one) is to start from the image representation of the Lagrange structure \({\mathcal {L}}\), defined by a pair of matrices \(P,S\in {\mathbb {R}}^{n,n}\) satisfying \(P^\top S=S^\top P\) and \({{\,\textrm{rank}\,}}\left[ \begin{array}{cc} P^\top&\quad S^\top \end{array}\right] = n\). Taking coordinates x for \({\mathcal {X}}={\mathbb {R}}^{n}\) and dual coordinates e for its dual space \({\mathcal {X}}^*={\mathbb {R}}^{n}\), the image representation of \({\mathcal {L}}\) is given by
for some parameterizing vector z in a space \({\mathcal {Z}}\) (of the same dimension as \({\mathcal {X}}\)). Analogously, we consider a kernel representation of the Dirac structure \({\mathcal {D}}\) given by matrices K, L satisfying \(KL^\top =-LK^\top \) and \({{\,\textrm{rank}\,}}\left[ \begin{array}{cc} K&L \end{array}\right] = n\), such that \({\mathcal {D}}= \{(f,e) \in {\mathcal {X}} \times {\mathcal {X}}^* \mid Kf + Le=0 \}\). (More details are given in Sect. 4.)
Substituting \(f={\dot{x}} = P{\dot{z}}\) and \(e=Sz\), this leads to the coordinate representation
We will call this class extended Hamiltonian differential–algebraic systems (extended HDAEs). If dissipation is incorporated, see Sect. 4.2, the resulting systems are called extended dHDAEs.
Importantly, the presence of algebraic constraints in (10) may arise both by singularity of K (as was already the case for (6)) as well as by singularity of P (as was the case for (2)). This motivated the introduction of the notions of Dirac algebraic constraints (corresponding to singularity of K) and of Lagrange algebraic constraints (corresponding to singularity of P) in [41].
Similar to dHDAE systems (2), the Hamiltonian of the extended HDAE system (10) is specified by the Lagrange structure and is given by
Indeed, one immediately has the energy conservation property
since \((f,e) \in {\mathcal {D}}\) and thus \(e^\top f=0\).
While singularity of K in physical systems modeling typically arises from interconnection due to the network structure, singularity of P often arises as a limiting situation. An elaborate example will be provided later as Example 17.
Remark 13
Note that if K is invertible, then by multiplying (10) with \(K^{-1}\) from the left, we obtain the lossless version of the dHDAE system (2) in Sect. 2 with \(E=P\), \(Q=S\), \(J=-J^\top = -K^{-1} L\) (and \(R=0\)). Thus, the replacement of the Hamiltonian \({\mathcal {H}}(x)=\frac{1}{2} x^\top Q x\) by a Lagrange subspace (9) constitutes a first step toward an overarching formulation of (dissipative) Hamiltonian DAE systems.
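A quick numerical sanity check of this remark (with hypothetical matrices of our choosing): if \(KL^\top + LK^\top =0\) and K is invertible, then \(K^{-1}L\) is skew-symmetric, so (10) indeed reduces to a lossless dHDAE:

```python
import numpy as np

K = np.array([[2.0, 1.0], [0.0, 1.0]])     # invertible, hypothetical
W = np.array([[0.0, 3.0], [-3.0, 0.0]])    # skew-symmetric
L = K @ W                                  # then K L^T + L K^T = 0 by construction

assert np.allclose(K @ L.T + L @ K.T, 0)   # generalized skew-symmetry of (K, L)
Jmat = np.linalg.solve(K, L)               # K^{-1} L
assert np.allclose(Jmat, -Jmat.T)          # skew-symmetric structure matrix
```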
4 Geometric theory of (dissipative) Hamiltonian DAEs
In this section, we take a systematic geometric view on Hamiltonian DAE systems, extending the existing geometric treatment of extended HDAE systems, as already discussed in Sect. 3. We also incorporate the discussed classes of dHDAE systems from Sect. 2.
Consider an ndimensional linear state space \({\mathcal {X}}\) with elements denoted by x. Let \(\hat{{\mathcal {X}}}\) denote the tangent space to \({\mathcal {X}}\) at \(x \in {\mathcal {X}}\), with elements denoted by f and called flow vectors. As mentioned before, since \({\mathcal {X}}\) is linear, tangent spaces at different \(x \in {\mathcal {X}}\) can be identified with each other and with \({\mathcal {X}}\), implying that \(f \in {\mathcal {X}}\) as well. Furthermore, let \({\mathcal {X}}^*\) be the dual space of \({\mathcal {X}}\), with elements denoted by e and called effort vectors.
4.1 Dirac and Lagrange structures
The product space \({\mathcal {X}} \times {\mathcal {X}}^*\) is endowed with the two canonical bilinear forms
represented by the two matrices
where we recognize \(\Pi _-\) as the standard symplectic form on \({\mathcal {X}}\).
Definition 14
A subspace \({\mathcal {D}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) is called a Dirac structure if the bilinear form \(\langle \cdot , \cdot \rangle _+\) is zero on \({\mathcal {D}}\) and moreover \({\mathcal {D}}\) is maximal with respect to this property. A subspace \({\mathcal {L}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) is a Lagrange structure if the bilinear form \(\langle \cdot , \cdot \rangle _-\) is zero on \({\mathcal {L}}\) and moreover \({\mathcal {L}}\) is maximal with respect to this property. A Lagrange structure \({\mathcal {L}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) is called nonnegative if the quadratic form defined by \(\Pi _+\) is nonnegative on \({\mathcal {L}}\).
Remark 15
In this paper, we have chosen the terminology ’Lagrange structure’ instead of the more common terminology ’Lagrangian subspace’ in order to emphasize the similarity to Dirac structures. Also note that the definition of Dirac structures can be extended to manifolds instead of linear state spaces \({\mathcal {X}}\); in this context, Dirac structures on linear spaces are often referred to as constant Dirac structures.
We have the following characterizations of Lagrange and Dirac structures, see, e.g., [13, 40].
Proposition 16
Consider an ndimensional linear state space \({\mathcal {X}}\) and its dual space \({\mathcal {X}}^*\).

(i)
A subspace \({\mathcal {D}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) is a Dirac structure if and only if \({\mathcal {D}}={\mathcal {D}}^{\perp \!\!\!\perp _+}\), where \({}^{\perp \!\!\!\perp _+}\) denotes the orthogonal complement with respect to the bilinear form \(\langle \cdot , \cdot \rangle _+\). Furthermore, \({\mathcal {D}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) is a Dirac structure if and only if \(e^\top f=0\) for all \((f,e) \in {\mathcal {D}}\) and \(\dim {\mathcal {D}}=n\).

(ii)
A subspace \({\mathcal {L}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) is a Lagrange structure if and only if \({\mathcal {L}}={\mathcal {L}}^{\perp \!\!\!\perp _-}\), where \({}^{\perp \!\!\!\perp _-}\) denotes the orthogonal complement with respect to the bilinear form \(\langle \cdot , \cdot \rangle _-\). Any Lagrange structure satisfies \(\dim {\mathcal {L}} =n\).
Dirac and Lagrange structures admit structured coordinate representations, see, e.g., [40, 41]. For this paper, the following representations are most relevant. Using matrices \(K,L \in {\mathbb {R}}^{n,n}\), any Dirac structure \({\mathcal {D}} \subset {{\mathcal {X}}} \times {\mathcal {X}}^*\) admits the kernel/image representation
with K, L satisfying \({{\,\textrm{rank}\,}}\left[ \begin{array}{cc} K&\quad L\end{array}\right] =n\) and the generalized skewsymmetry condition
Conversely any such pair K, L defines a Dirac structure.
Analogously, any Lagrange structure \({\mathcal {L}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) can be represented as
for certain matrices \(S,P \in {\mathbb {R}}^{n,n}\) satisfying \({{\,\textrm{rank}\,}}\left[ \begin{array}{c} P \\ S \end{array}\right] =n\) as well as the generalized symmetry condition
A Lagrange structure is, furthermore, nonnegative if and only if \(S^\top P \ge 0\).
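The two representation conditions can be packaged into small numerical checks (a sketch; the function names are ours, not the paper's):

```python
import numpy as np

def is_dirac_kernel(K, L, tol=1e-10):
    """Kernel representation of a Dirac structure: rank [K L] = n and K L^T + L K^T = 0."""
    n = K.shape[0]
    return (np.linalg.matrix_rank(np.hstack([K, L]), tol) == n
            and np.allclose(K @ L.T + L @ K.T, 0, atol=tol))

def is_lagrange_image(P, S, tol=1e-10):
    """Image representation of a Lagrange structure: rank [P; S] = n and P^T S = S^T P."""
    n = P.shape[0]
    return (np.linalg.matrix_rank(np.vstack([P, S]), tol) == n
            and np.allclose(P.T @ S, S.T @ P, atol=tol))

def is_nonnegative_lagrange(P, S, tol=1e-10):
    """Additionally S^T P >= 0 (symmetric positive semidefinite)."""
    return (is_lagrange_image(P, S, tol)
            and np.all(np.linalg.eigvalsh((S.T @ P + P.T @ S) / 2) >= -tol))

# standard Hamiltonian case: x = z, e = Qz, with Q symmetric psd
assert is_nonnegative_lagrange(np.eye(2), np.diag([2.0, 0.0]))
# singular P (Lagrange algebraic constraint), still a nonnegative Lagrange structure
assert is_nonnegative_lagrange(np.diag([1.0, 0.0]), np.diag([0.0, 1.0]))
# Dirac structure with invertible K and skew-symmetric L
assert is_dirac_kernel(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
```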
As already described in Sect. 3, by using the image representation \(x=Pz\), \(e=Sz\) of the Lagrange structure \({\mathcal {L}}\) and the kernel representation \(Kf + Le=0\) of the Dirac structure \({\mathcal {D}}\) one is led to the representation (10) of the extended HDAE system defined by \({\mathcal {D}}\) and \({\mathcal {L}}\).
The following is a physical example where both K and P turn out to be singular. The singularity of K is due to the presence of kinematic constraints, while the singularity of P is caused by a limiting argument in the energy expression.
Example 17
Consider two masses \(m_1\) and \(m_2\) connected by a spring with spring constant k, where the right mass \(m_2\) is subject to the kinematic constraint \(v_2=0\) (velocity is zero). With positions \(q_1,q_2\) and momenta \(p_1,p_2\), the Hamiltonian is given by
Denoting by \(e_{q1}, e_{q2}\) the spring forces at both ends of the spring and by \(e_{p1}, e_{p2}\) the velocities of the two masses, we obtain the relation
To consider the limit \(k \rightarrow \infty \), meaning that the spring is replaced by rigid connection, we first express the system in different coordinates.
This yields the transformed representation
Taking the limit \(k \rightarrow \infty \) yields the Lagrange structure \({\mathcal {L}}\) in image representation
and the limiting Hamiltonian is just the kinetic energy
The system has a Lagrange algebraic constraint due to the linear dependency in the rows of P. The Dirac structure \({\mathcal {D}}\) is given as
After elimination of the Lagrange multiplier \(\lambda \), this yields
and hence
Finally subtracting the second equation from the first equation, we obtain the HDAE system
Here the first equation is the Lagrange algebraic constraint \({\tilde{z}}_3=0\) (and eventually \({\tilde{z}}_1=0\)) obtained by letting \(k \rightarrow \infty \) (corresponding to singularity of P), and the last equation is the Dirac algebraic constraint \( \frac{{\tilde{z}}_3}{m_2} + \frac{{\tilde{z}}_4}{m_1 + m_2}=0\), i.e., \({\tilde{z}}_4=0\) resulting from the kinematic constraint, leading to singularity of K and resulting in the trivial dynamics \(\dot{{\tilde{z}}}_2 = 0 \).
Remark 18
Instead of using a parametrization of the Lagrange structure \({\mathcal {L}}\), one can also use a parametrization of the Dirac structure \({\mathcal {D}}\)
with \(v \in {\mathcal {V}}\), where \({\mathcal {V}}\) is an ndimensional parameter space. This yields an extended dHDAE system (but now in the parameter vector v) given by
which is the adjoint system of (10). See [26] for a detailed discussion of adjoint systems of DAEs.
4.2 Incorporation of dissipation
As noted in Sect. 4, extended HDAE systems (10), geometrically defined by a Dirac and Lagrange structure, already include HDAE systems (2) without dissipation. Conversely, any HDAE system with K invertible can be rewritten into the form (2) with \(R=0\).
In order to complete the geometric viewpoint toward the inclusion of dissipation (and thus to (2)), we recall the geometric definition of a portHamiltonian system [35, 37, 40]. By replacing the Hamiltonian function by a Lagrange structure as in [41], and specializing to the case without external variables (inputs and outputs), such systems will be called extended dHDAE systems.
Definition 19
Consider a state space \({\mathcal {X}}\) with linear coordinates x and a linear space of resistive flows \({\mathcal {F}}_R\). Furthermore, consider a Dirac structure \({\mathcal {D}}\) on \({\mathcal {X}} \times {\mathcal {F}}_R\), a Lagrange structure \({\mathcal {L}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) and a nonnegative Lagrange structure \({\mathcal {R}} \subset {\mathcal {F}}_R \times {\mathcal {F}}^*_R\). Then, an extended dissipative Hamiltonian DAE (extended dHDAE) system is defined as the tuple \(({\mathcal {X}},{\mathcal {F}}_R,{\mathcal {D}},{\mathcal {L}},{\mathcal {R}})\) with
If \({\mathcal {L}}\) is represented as in (15), i.e.,
then it immediately follows from the properties of the Dirac structure \({\mathcal {D}}\) and the nonnegative Lagrange structure \({\mathcal {R}}\) that the dynamics of the extended dHDAE satisfies
More generally, we will now introduce the notion of a maximally monotone subspace, which subsumes the notions of a Dirac structure \({\mathcal {D}}\) and a nonnegative Lagrange structure \({\mathcal {R}}\).
Definition 20
Consider a linear space \({\mathcal {X}}\). A subspace \({\mathcal {M}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) is called a monotone subspace if
for all \((f,e) \in {\mathcal {M}}\), and it is maximally monotone if additionally \({\mathcal {M}}\) is maximal with respect to this property (i.e., there does not exist a monotone subspace \({\mathcal {M}}' \subset {\mathcal {X}} \times {\mathcal {X}}^*\) with \({\mathcal {M}} \subsetneq {\mathcal {M}}'\)).
Remark 21
The definition of a monotone subspace is a special case of the notion of a monotone relation \({\widetilde{M}}\), which is defined as a subset of \({\mathcal {X}} \times {\mathcal {X}}^*\) satisfying
for all \((f_1,e_1), (f_2,e_2) \in {\widetilde{M}}\). Clearly, if \({\widetilde{M}}\) is a subspace, then (22) reduces to (21). (Maximally) monotone subspaces with a sign change were employed before in [21], using the terminology of ’(maximally) linear dissipative relations.’ Nonlinear port-Hamiltonian systems with respect to a general (maximally) monotone relation were coined incremental port-Hamiltonian systems in [10]; see [9] for further developments.
Obviously, a subspace \({\mathcal {M}}\) is monotone if and only if the quadratic form defined by \(\Pi _+\) is nonnegative on \({\mathcal {M}}\), since \(\langle (f,e), (f,e) \rangle _+ = 2 e^\top f\). This yields
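This characterization can be tested directly: if the columns of \(\left[ \begin{array}{c} F \\ E \end{array}\right] \) span a subspace \({\mathcal {M}}\), then for \((f,e)=(Fc,Ec)\) one has \(2e^\top f = c^\top (E^\top F + F^\top E)c\), so \({\mathcal {M}}\) is monotone if and only if the symmetric matrix \(E^\top F + F^\top E\) is positive semidefinite. A sketch with hypothetical data:

```python
import numpy as np

def is_monotone_span(F, E, tol=1e-10):
    """Subspace spanned by the columns of [F; E]: monotone iff e^T f >= 0 on it,
    i.e., iff E^T F + F^T E is positive semidefinite."""
    Sym = E.T @ F + F.T @ E
    return np.all(np.linalg.eigvalsh(Sym) >= -tol)

W = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew: e = W f gives a Dirac structure
Rres = np.diag([1.0, 0.0])               # psd: e = R f gives a resistive relation
I = np.eye(2)

assert is_monotone_span(I, W)        # Dirac: e^T f = 0, hence monotone
assert is_monotone_span(I, Rres)     # resistive: e^T f = f^T R f >= 0, monotone
assert not is_monotone_span(I, -I)   # e = -f: e^T f = -|f|^2, not monotone
```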
Proposition 22
Consider a state space \({\mathcal {X}}\) with \(\dim {\mathcal {X}}=n\). Then, any monotone subspace of \({\mathcal {X}} \times {\mathcal {X}}^*\) has dimension less than or equal to n, and any maximally monotone subspace of \({\mathcal {X}} \times {\mathcal {X}}^*\) has dimension n. Any maximally monotone subspace \({\mathcal {M}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) can be represented as
for \(M,N\in {\mathbb {R}}^{n,n}\) satisfying \({{\,\textrm{rank}\,}}\left[ \begin{array}{cc} N&\quad M \end{array}\right] =n\) and
Conversely, any subspace defined by M, N satisfying (24) is a maximally monotone subspace.
Proof
The proof follows, since \(\Pi _+\) has n positive and n negative eigenvalues. \(\square \)
Obviously, any Dirac structure \({\mathcal {D}}\) given by a pair of matrices K, L is maximally monotone (take \(M=K\) and \(N=L\)). In a similar way, any nonnegative Lagrange structure \({\mathcal {R}}\) given by a pair of matrices P, S with \(S^\top P \ge 0\) is maximally monotone by taking \(N^\top =P\), \(M^\top =S\).
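Both observations can be checked mechanically. The following NumPy sketch (the helper name `is_max_monotone` and the tolerance handling are our own) tests the two conditions of Proposition 22, here for a Dirac pair \(M=K\), \(N=L\) and a nonnegative Lagrange pair \(N^\top =P\), \(M^\top =S\):

```python
import numpy as np

def is_max_monotone(M, N, tol=1e-10):
    # Conditions of Proposition 22: rank [N  M] = n and
    # M N^T + N M^T positive semidefinite.
    n = M.shape[0]
    full_rank = np.linalg.matrix_rank(np.hstack([N, M]), tol=tol) == n
    W = M @ N.T + N @ M.T                     # symmetric by construction
    return full_rank and bool(np.all(np.linalg.eigvalsh(W) >= -tol))

# Dirac pair: K = I, L skew-symmetric, so M N^T + N M^T = 0
K, L = np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]])
# nonnegative Lagrange pair: S^T P = diag(2, 0) >= 0
P, S = np.eye(2), np.diag([2.0, 0.0])
assert is_max_monotone(K, L) and is_max_monotone(S.T, P.T)
```

Both calls succeed, while a pair with \(MN^\top +NM^\top \) indefinite is rejected.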
Importantly, the composition of two maximally monotone subspaces is again maximally monotone. In order to prove this, we first state the following lemma.
Lemma 23
Let \(A: {\mathcal {F}} \rightarrow {\mathcal {G}}\) be a linear map between two linear spaces \({\mathcal {F}},{\mathcal {G}}\). Let \({\mathcal {M}}_{{\mathcal {G}}} \subset {\mathcal {G}} \times {\mathcal {G}}^*\) be a maximally monotone subspace. Then, the pullback of \({\mathcal {M}}_{{\mathcal {G}}}\) via A, defined as
is maximally monotone. Furthermore, let \({\mathcal {M}}_{{\mathcal {F}}} \subset {\mathcal {F}} \times {\mathcal {F}}^*\) be a maximally monotone subspace. Then, the pushforward of \({\mathcal {M}}_{{\mathcal {F}}}\) via A, defined as
is maximally monotone.
Proof
It is immediately checked that \(b_A ({\mathcal {M}}_{{\mathcal {G}}})\) is monotone. Furthermore,
and thus, \(b_A ({\mathcal {M}}_{{\mathcal {G}}})\) is maximally monotone. The proof to show that \(f_A ({\mathcal {M}}_{{\mathcal {F}}})\) is maximally monotone is analogous. \(\square \)
Using Lemma 23, we can show that maximally monotone subspaces satisfy the following composition property. This same property was recently derived for maximally monotone relations in [9], assuming additional regularity conditions.
Proposition 24
Consider an extended dHDAE system as in (19) and let \({\mathcal {M}}_a\) and \({\mathcal {M}}_b\) be maximally monotone subspaces
with \({\mathcal {E}}={\mathcal {F}}^*, {\mathcal {E}}_a={\mathcal {F}}_a^*, {\mathcal {E}}_b={\mathcal {F}}_b^*\). Define the composition
Then, \({\mathcal {M}}_a \circ {\mathcal {M}}_b \subset {\mathcal {F}}_a \times {\mathcal {F}}_b \times {\mathcal {E}}_a \times {\mathcal {E}}_b\) is again maximally monotone.
Proof
Let \({\mathcal {V}}_a:={\mathcal {F}}\) and \({\mathcal {V}}_b:={\mathcal {F}}\). Define the linear maps
Then, for the maximally monotone subspace
it can be readily checked that
where \({\mathcal {M}}_a \times {\mathcal {M}}_I \times {\mathcal {M}}_b\) is clearly maximally monotone. Then, the proof finishes by applying Lemma 23. \(\square \)
We immediately have the following corollary.
Corollary 25
Consider a Dirac structure \({\mathcal {D}} \subset {\mathcal {X}} \times {\mathcal {F}}_R \times {\mathcal {X}}^* \times {\mathcal {F}}^*_R\), together with a nonnegative Lagrangian subspace \({\mathcal {R}} \subset {\mathcal {F}}_R \times {\mathcal {F}}^*_R\). Then, the composition of \({\mathcal {D}}\) and \({\mathcal {R}}\) defined via
is maximally monotone. In particular, for any \((f,e) \in {\mathcal {D}} \circ {\mathcal {R}}\), one has \(e^\top f \ge 0\).
Remark 26
We conjecture that conversely any maximally monotone subspace \({\mathcal {M}}\) can be generated this way, i.e., as the composition of a certain Dirac structure \({\mathcal {D}}\) and a certain nonnegative Lagrangian subspace \({\mathcal {R}}\).
The presented analysis of maximally monotone subspaces leads to the following geometric definition of an extended dHDAE system, covering both dHDAE systems (2) and extended HDAE systems (10). See [21] for related results (using the terminology of (maximally) dissipative linear relations).
Definition 27
Consider a linear state space \({\mathcal {X}}\) with coordinates x, a maximally monotone subspace \({\mathcal {M}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) and a Lagrange structure \({\mathcal {L}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\). Then, an extended dHDAE system is a system \(({\mathcal {X}},{\mathcal {M}},{\mathcal {L}})\) satisfying
$$\begin{aligned} (-{\dot{x}}(t), e(t)) \in {\mathcal {M}}, \qquad (x(t),e(t)) \in {\mathcal {L}}. \end{aligned}$$
A coordinate representation of an extended dHDAE system is obtained as follows. Consider a coordinate expression (23) of the maximally monotone subspace \({\mathcal {M}}\) (with M, N satisfying (24)). This means that any element \((f,e) \in {\mathcal {M}}\) can be represented as \(f = N^\top v\), \(e = M^\top v\) for some \(v \in {\mathbb {R}}^n\).
for some \(v \in {\mathbb {R}}^n\). Furthermore, any \((x,e) \in {\mathcal {L}}\) can be represented as in (20). Substituting \(f={\dot{x}}=P{\dot{z}}\), this yields
Now construct matrices C, D satisfying
Then, premultiplication by such a maximal annihilator \(\left[ \begin{array}{cc} C&\quad D \end{array}\right] \) eliminates the auxiliary variables v, and one obtains the coordinate representation
Remark 28
The geometric construction of extended dissipative Hamiltonian systems can be immediately generalized to extended dissipative port-Hamiltonian DAE (dpHDAE) systems with external port variables (inputs and outputs), by extending the maximally monotone subspace \({\mathcal {M}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) to a maximally monotone subspace \({\mathcal {M}}_e \subset {\mathcal {X}} \times {\mathcal {X}}^* \times {\mathcal {F}}_P \times {\mathcal {F}}_P^*\), where \({\mathcal {F}}_P \times {\mathcal {F}}_P^*\) is the space of external port variables.
Two particular cases of Definition 27 are of special interest. The first one is where the maximally monotone subspace \({\mathcal {M}}\) is actually a Dirac structure as in (6) with K, L satisfying (14). In this case, one can take \(\left[ \begin{array}{cc} C&\quad D \end{array}\right] =\left[ \begin{array}{cc} K&\quad L \end{array}\right] \), and thus, the extended dHDAE system reduces to the extended HDAE system
The other special case is where the maximally monotone subspace \({\mathcal {M}}\) in (23) is such that M is invertible. In this case, without loss of generality \(M^\top \) can be taken to be the identity matrix, and the maximal annihilator \(\left[ \begin{array}{cc} C&\quad D \end{array}\right] \) can be taken to be of the form \(\left[ \begin{array}{cc} I&\quad D \end{array}\right] \). Hence, \(D=-N^\top \), and thus, (27) reduces to
$$\begin{aligned} P{\dot{z}} = -N^\top Sz. \end{aligned}$$(28)
Furthermore, \(MN^\top + NM^\top \ge 0\) reduces to \(N^\top + N \ge 0\), and hence,
$$\begin{aligned} -N^\top = J-R \end{aligned}$$
with \(J=-J^\top \) and \(R=R^\top \ge 0\). Thus, in this case the extended dpHDAE system takes the familiar form (2) with \(E=P\), \(Q=S\), expressed as
$$\begin{aligned} P{\dot{z}} = (J-R)Sz. \end{aligned}$$
Remark 29
Similar to the theory developed in [41] for HDAE systems (10), the algebraic constraints of the dpHDAE system (27) can be split into two classes: one corresponding to singularity of P (Lagrange algebraic constraints in [41]) and one corresponding to singularity of C. In the case of (10), the second class of algebraic constraints is called Dirac algebraic constraints in [41]; here they correspond instead to the maximally monotone subspace.
Furthermore, mimicking the developments in [41], one can transform algebraic constraints associated with index one belonging to one class into algebraic constraints in the other, by the use of additional state variables (serving as Lagrange multipliers).
5 Representation of DAE systems generated by Dirac and Lagrange structures in the state variables x
The representation (10) of an extended HDAE system as discussed in the previous sections does not use the state variables x of the state space \({\mathcal {X}}\), but instead an equally dimensioned vector \(z\in {\mathcal {Z}}\) parameterizing the Lagrange structure, cf. (9) and (15). In this section, we show how a different DAE representation involving the original state vector \(x \in {\mathcal {X}}\) can be obtained. Furthermore, we discuss in what sense this representation in x is equivalent to the representation (10) involving z.
5.1 A coordinate representation in the original state variables x
Consider a Dirac structure \({\mathcal {D}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\), a Lagrange structure \({\mathcal {L}} \subset {\mathcal {X}} \times {\mathcal {X}}^*\) and the resulting dynamics specified (in coordinate-free form) as \({\mathcal {D}} \circ {\mathcal {L}} \subset {{\mathcal {X}}} \times {\mathcal {X}}\). Let x be coordinates for the state space \({\mathcal {X}}\), and let the Dirac structure be represented by a pair of matrices K, L and the Lagrange structure by a pair of matrices P, S. To derive a coordinate representation employing directly the state vector x, we first consider the combined representations of \({\mathcal {D}}\) and \({\mathcal {L}}\), both in kernel representation, i.e.,
$$\begin{aligned} \left[ \begin{array}{c} K{\dot{x}} \\ -S^\top x \end{array}\right] = \left[ \begin{array}{c} L \\ -P^\top \end{array}\right] e, \end{aligned}$$(29)
where e are dual coordinates for \({\mathcal {X}}^*\). In order to obtain a DAE system only involving x, we need to eliminate the variables e. This can be done by considering a maximal annihilator (left nullspace) \(\left[ \begin{array}{cc} M&\quad N \end{array}\right] \) of \(\left[ \begin{array}{c} L \\ -P^\top \end{array}\right] \), i.e.,
$$\begin{aligned} \left[ \begin{array}{cc} M&\quad N \end{array}\right] \left[ \begin{array}{c} L \\ -P^\top \end{array}\right] = 0, \end{aligned}$$(30)
and thus, in particular,
$$\begin{aligned} ML = NP^\top . \end{aligned}$$(31)
Premultiplication of the equations (29) by \(\left[ \begin{array}{cc} M&\quad N \end{array}\right] \) then eliminates e, and the resulting DAE system is given by
$$\begin{aligned} MK {\dot{x}} = NS^\top x. \end{aligned}$$(32)
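Numerically, a maximal annihilator is obtained from a basis of the left null space of the stacked matrix, for instance via a singular value decomposition. A minimal NumPy sketch (the helper name and the tolerance are our own choices; we stack \(-P^\top \) so that the computed pair satisfies \(ML=NP^\top \)):

```python
import numpy as np

def left_annihilator(B, tol=1e-12):
    # Rows of the result span the left null space of B: result @ B = 0.
    U, s, _ = np.linalg.svd(B)
    r = int(np.sum(s > tol))
    return U[:, r:].T

# With L = I and P = I (in the spirit of Example 34 below, but in
# dimension 2), the annihilator forces M = N, so M L = N P^T holds.
L_, P_ = np.eye(2), np.eye(2)
MN = left_annihilator(np.vstack([L_, -P_.T]))
M_, N_ = MN[:, :2], MN[:, 2:]
assert np.allclose(M_ @ L_, N_ @ P_.T)
```

The same helper can be reused for every annihilator computation appearing in this and the following sections.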
Remark 30
Also for extended dHDAE systems (including dissipation), we can consider, instead of the coordinate representation (27) involving the parameterizing vector z, a representation that uses the original state x. In fact, let as before, cf. (26), \(\left[ \begin{array}{cc} C&\quad D \end{array}\right] \) denote a maximal annihilator of \(\left[ \begin{array}{cc} N&\quad M \end{array}\right] ^\top \), i.e., \(\ker \left[ \begin{array}{cc} C&\quad D \end{array}\right] =\mathrm{im\,}\left[ \begin{array}{cc} N&\quad M \end{array}\right] ^\top \). Then, consider, similar to (29), the stacked matrix
and a maximal annihilator \(\left[ \begin{array}{cc} V&\quad W \end{array}\right] \) to \(\left[ \begin{array}{cc} D^\top&P \end{array}\right] ^\top \), that is, \(\ker \left[ \begin{array}{cc} V&\quad W \end{array}\right] = \mathrm{im\,}[D^\top P]^\top \). Then, premultiplication by \(\left[ \begin{array}{cc} V&\quad W \end{array}\right] \) yields the representation
The analysis performed in the current subsection for (32) can be performed, mutatis mutandis, for (33) as well.
Recall that in the coordinate representation (10), we have the expression \({\mathcal {H}}^z(z)= \frac{1}{2} z^\top S^\top Pz\) for the Hamiltonian. In the representation (32), we do not yet have a Hamiltonian associated with the extended HDAE system. To define such a Hamiltonian \({\mathcal {H}}^x\) in (32), we would need that P is invertible, in which case it is given by
$$\begin{aligned} {\mathcal {H}}^x(x) = \frac{1}{2} x^\top SP^{-1} x. \end{aligned}$$(34)
If P is invertible, then there is a direct relation between the Hamiltonians (11) and (34). In fact, substituting \(x=Pz\), we immediately obtain \({\mathcal {H}}^x(Pz) = \frac{1}{2} z^\top P^\top S z = {\mathcal {H}}^z(z)\).
Alternatively, if S is invertible, then one can use the coenergy (Legendre transform) of \({\mathcal {H}}^x\) given by
$$\begin{aligned} {\mathcal {H}}^e(e) = \frac{1}{2} e^\top PS^{-1} e, \end{aligned}$$
for which \({\mathcal {H}}^e(e) = {\mathcal {H}}^z(z)\) with \(e=Sz\).
Note that if P is invertible, then also M is invertible. This follows, since then \(ML=NP^\top \) implies that the columns of N are in \(\mathrm{im\,}M\), and since \({{\,\textrm{rank}\,}}\left[ \begin{array}{cc} M&\quad N \end{array}\right] =n\), this means M is invertible. The converse that M invertible implies P invertible follows analogously. In a similar fashion, it follows that L is invertible if and only if N is invertible. These observations imply the following simplifications of the representation (32) under additional assumptions.

1.

(a)
If P is invertible then \(N=MLP^{-\top }\) and by multiplying (32) from the left by \(M^{-1}\) we obtain the system
$$\begin{aligned} K {\dot{x}} = LP^{-\top }S^\top x = L \left( SP^{-1}\right) ^\top x = L SP^{-1} x, \end{aligned}$$(35)where the last equality follows from \(S^\top P = P^\top S\). This is exactly the form of a Hamiltonian DAE system in case of a general Dirac structure and a Lagrange structure that is given as the graph of a symmetric matrix \(Q:=SP^{-1}\), see [35, 38,39,40]. Indeed, the Lagrange structure simplifies to the graph of the gradient of the Hamiltonian function \({\mathcal {H}}^x(x)= \frac{1}{2}x^\top SP^{-1}x\).

(b)
If in this case, additionally K is invertible, then we obtain the Poisson formulation of Hamiltonian systems, see, e.g., [2],
$$\begin{aligned} {\dot{x}} = \left( K^{-1} L\right) \left( SP^{-1}\right) x= JQx, \end{aligned}$$(36)with \(J=-J^\top =K^{-1} L\), and \(Q=Q^\top \).


2.

(a)
If L and thus also N are invertible, then by (31) we have \(M=NP^\top L^{-1}\) and multiplying with \(N^{-1}\) from the left we get the DAE
$$\begin{aligned} P^\top J\dot{x}=P^\top L^{-1}K {\dot{x}} = S^\top x \end{aligned}$$(37)with \(J:=-\left( L^{-1}K \right) ^\top = L^{-1}K\).

(b)
If additionally P is invertible, then with \(Q=SP^{-1}= P^{-\top } S^\top \) this may be rewritten as
$$\begin{aligned} J{\dot{x}} = Qx, \end{aligned}$$(38)which is the standard symplectic formulation of a Hamiltonian system in case additionally J is invertible, see, e.g., [2].

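The structural claims of cases 1(b) and 2(b) are easy to verify numerically. In the sketch below the matrices are invented for illustration only; they are chosen so that \(KL^\top =-LK^\top \) and \(S^\top P=P^\top S\) hold with K and P invertible:

```python
import numpy as np

# Invented data with K L^T = -L K^T (Dirac pair) and
# S^T P = P^T S (Lagrange pair), K and P invertible.
K = 2.0 * np.eye(2)
L = np.array([[0.0, 1.0], [-1.0, 0.0]])
P = np.eye(2)
S = np.array([[2.0, 1.0], [1.0, 3.0]])

J = np.linalg.solve(K, L)      # J = K^{-1} L
Q = S @ np.linalg.inv(P)       # Q = S P^{-1}
assert np.allclose(J, -J.T)    # J skew-symmetric: Poisson structure
assert np.allclose(Q, Q.T)     # Q symmetric: quadratic Hamiltonian
```

Skew-symmetry of \(K^{-1}L\) and symmetry of \(SP^{-1}\) follow here directly from the two structural identities, independently of the concrete numbers chosen.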
5.2 Relation between representations (10) and (32)
An immediate question that arises is how the representations (10) and (32) are related. We have already seen that if P is invertible then the relationship is obvious, since in this case \(x=Pz\) defines an ordinary state space transformation. However, if P is not invertible, then the representations are not state space equivalent, as the following simple example demonstrates.
Example 31
For \(P=[0]\), \(S=[1]\), \(K=[1]\), \(L=[0]\), we have that (10) is the singular system \(0 \cdot {\dot{z}} = 0 \cdot z\). On the other hand,
and \({\mathcal {D}} \circ {\mathcal {L}}\) is the origin in \({{\mathcal {X}}} \times {\mathcal {X}}\), defining the degenerate DAE system \({\dot{x}}=x=0\).
However, representations (10) (in the parameterizing z variables) and (32) (in the original state variables x) can be shown to be equivalent in the following generalized sense. First note that for any representation P, S of a Lagrange structure there exist nonsingular matrices V, W such that
This is a direct consequence of Lemma 37 that will be presented in the next section. Setting \(\left[ \begin{array}{c} z_1 \\ z_2 \end{array}\right] = W^{-1} z\) and \(\left[ \begin{array}{c} x_1 \\ x_2 \end{array}\right] =V^{-1}x\), it follows that \(z_1=x_1\) and \(z_2=e_2\). After such a transformation, the system \(KP {\dot{z}} = LSz\) takes the form
If we add to the vector \(\left[ \begin{array}{c} x_1 \\ e_2 \end{array}\right] \) the subvector \(x_2\), and if we consider the equations (40) together with the original Lagrange algebraic constraint \(x_2=0\), then the extended system can be rewritten as
On the other hand, as shown in Sect. 5.1, the extended dHDAE system defined by the Dirac structure \({\mathcal {D}}\) and the Lagrangian structure \({\mathcal {L}}\) in the state space variables x can be expressed as
with \(e_1,e_2\) serving as auxiliary variables. Instead of eliminating \(e_1,e_2\) from these equations, as discussed in Sect. 5.1, we can eliminate only \(e_1\) by premultiplication of (42) by the full row rank matrix
which directly leads to the system (41). This extended equivalence between (10) and (32) is summarized in the following proposition.
Proposition 32
Consider the pHDAE representations (10) and (32) defined by the same Lagrange structure \({\mathcal {L}}\) represented by matrices P, S, and by the same Dirac structure \({\mathcal {D}}\) represented by K, L. Consider a transformation such that P, S and K, L are transformed into the form (39) with corresponding partitioning
where \(x_1=z_1\). Adding to (10) the Lagrange algebraic constraint \(x_2=0\) corresponding to \(x=Pz\), the resulting dHDAE system is given by (41). This system is equivalent to the representation (29) of (32) after elimination of the variables \(e_1\).
Note that the subvector \(e_2=z_2\) can be regarded as the Lagrange multiplier vector corresponding to the constraint \(x_2=0\). As such, \(e_2=z_2\) does not contribute to the expression of the Hamiltonian \({\mathcal {H}}^z(z)\).
Let us illustrate the previous discussion with some further examples.
Example 33
Consider a system in the form (10) with \(K= \left[ \begin{array}{cc} I &{}\quad 0 \\ 0 &{}\quad I \end{array}\right] \), \(L= \left[ \begin{array}{cc} 0 &{}\quad I \\ I &{}\quad 0 \end{array}\right] \), \(P= \left[ \begin{array}{cc} I &{}\quad 0 \\ 0 &{}\quad 0 \end{array}\right] \), \(S= \left[ \begin{array}{cc} I &{}\quad 0 \\ 0 &{}\quad I \end{array}\right] \) which is the general DAE
In order to compute the representation (32), we solve
and with
we get the system
Example 34
Consider the system of the form (10) with \(K= 0\), \(L= I\), \(P= I\), \(S= I\), i.e., \(0 \cdot {\dot{z}} = z\). Solving
yields \(M=I, N=I\), and we obtain the system (32) given by \(0 \cdot {\dot{x}} = x\).
Example 35
Consider the system of the form (10) with \(K= 0\), \(L= I\), \(P= 0\), \(S= I\), i.e., \(0 \cdot {\dot{z}} = z\). Solving
yields \(M=0, N=I\), and thus, the representation (32) is \(0 \cdot {\dot{x}} = x\).
Example 36
Consider the system of the form (10) with \(K= 0\), \(L= \begin{bmatrix}0 &{}\quad I \\ I &{}\quad 0 \end{bmatrix}\), \(P= I\), \(S= I\), i.e., \(0 \cdot {\dot{z}} = \begin{bmatrix}0 &{}\quad I \\ I &{}\quad 0 \end{bmatrix}z\). Solving
yields \(M= I, N= \left[ \begin{array}{cc} 0 &{}\quad I \\ I &{}\quad 0 \end{array}\right] \), and thus, the representation (32) is given by
6 Equivalence transformations and condensed forms
To characterize the properties of extended dHDAEs, we use transformations to condensed forms from which the properties can be read off.
For general DAEs (1) given by matrix pairs (E, A), \(E,A\in {\mathbb {R}}^{\ell ,n}\) (or the representation via matrix pencils \(\lambda E-A\)), we can perform equivalence transformations of the coefficients of the form
with \(U\in {\mathbb {R}}^{\ell ,\ell }\), \(W\in {\mathbb {R}}^{n,n}\) nonsingular. This corresponds to a scaling of the equation with \(U^\top \) and a change of variables \(z= W{{\tilde{z}}}\). Under such transformations, there is a one-to-one relationship between the solution spaces, see [25], and the canonical form is the Weierstraß canonical form.
For structured systems of the form (2), the associated equivalence transformation that preserves the structure is of the form
with \(U\in {\mathbb {R}}^{\ell ,\ell }\), \(W\in {\mathbb {R}}^{n,n}\) nonsingular. A condensed form for this case has been presented in [29].
Finally, for systems of the form (10), the equivalence transformations have the form
where \(U\in {\mathbb {R}}^{\ell ,\ell }\), \(V \in {\mathbb {R}}^{n,n}\), \(W\in {\mathbb {R}}^{m,m}\) are nonsingular.
The geometric interpretation of the set of transformations in (44) is clear: V defines a coordinate transformation on the state space \({\mathcal {X}}\), while \(V^{-\top }\) is the corresponding dual transformation on the dual state space \({\mathcal {X}}^*\). Also note that the combination of V and \(V^{-\top }\) on the product space \({\mathcal {X}} \times {\mathcal {X}}^*\) leaves the canonical bilinear forms defined by the matrices \(\Pi _-\) and \(\Pi _+\) invariant. (In fact, it can be shown that any transformation on \({\mathcal {X}} \times {\mathcal {X}}^*\) that leaves both canonical bilinear forms invariant is necessarily of this form for some invertible V.) Finally, \(U^\top \) is an invertible transformation on the equation space for the kernel representation of the Dirac structure \({\mathcal {D}}\), while W is an invertible transformation on the parametrization space \({\mathcal {Z}}\) for the Lagrange structure \({\mathcal {L}}\).
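One consistent reading of these rules is that the matrix pairs transform as \(K\mapsto U^\top KV\), \(L\mapsto U^\top LV^{-\top }\), \(P\mapsto V^{-1}PW\), \(S\mapsto V^\top SW\) (our notation, inferred from the geometric description above). Under this reading, the defining structural conditions are preserved, as the following NumPy sketch with random data illustrates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
K, L = np.eye(n), A - A.T          # K L^T = -L K^T (Dirac condition)
P, S = np.eye(n), A + A.T          # P^T S = S^T P (Lagrange condition)

# random invertible (here even orthogonal) U, V, W
U, V, W = (np.linalg.qr(rng.standard_normal((n, n)))[0] for _ in range(3))
Kt, Lt = U.T @ K @ V, U.T @ L @ np.linalg.inv(V).T
Pt, St = np.linalg.inv(V) @ P @ W, V.T @ S @ W

assert np.allclose(Kt @ Lt.T, -(Lt @ Kt.T))   # condition preserved
assert np.allclose(Pt.T @ St, St.T @ Pt)      # condition preserved
```

The contragredient pairing of V and \(V^{-\top }\) is what makes the inner factors cancel, so that the structural identities survive the transformation.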
In all three cases, in view of an implementation of the transformations as numerically stable procedures, we are also interested in the case that U, V, W are real orthogonal matrices. We then have \(V^{-1}=V^\top \), and for both pairs (K, L) and (P, S) this is a classical orthogonal equivalence transformation.
Using the described equivalence transformations, we can derive condensed forms for pencils \(\lambda P-S\) with \(P^\top S= S^\top P\) associated with Lagrange subspaces (or isotropic subspaces if the dimension is not n, see, e.g., [29]). Here we slightly modify the representation and also give a constructive proof that can be implemented as a numerically stable algorithm in Appendix A.
Lemma 37
Let \(P,S\in {\mathbb {R}}^{n,m}\) be such that \(P^\top S=S^\top P\). Then, there exist invertible matrices \(V\in {\mathbb {R}}^{n,n}\), \(W\in {\mathbb {R}}^{m,m}\) such that
with \(\left[ \begin{array}{ccc} S_{5,1}&\quad S_{5,2}&\quad S_{5,3}\end{array}\right] \) of full row rank \(n_5\). (Note that block sizes may be zero.) Moreover, if the pencil \(\lambda P-S\) is regular, then the condensed form is unique, except for the order of blocks, and just contains the first four block rows and columns.
Proof
See Appendix A. \(\square \)
Note that the condensed form is in general not unique in the fifth block row, but the block sizes \(m_1,m_2,m_3,m_4\) and the row dimension \(n_5\) are.
Corollary 38
Let \(P,S\in {\mathbb {R}}^{n,m}\) be such that \(P^\top S=S^\top P\). Then, there exist real orthogonal matrices \(V\in {\mathbb {R}}^{n,n}\), \(W\in {\mathbb {R}}^{m,m}\) such that
with \(P_{11},S_{11}\in {\mathbb {R}}^{m_1+m_2,m_1+m_2}\), \(P_{22} \in {\mathbb {R}}^{m_3,m_3}\), \(S_{33} \in {\mathbb {R}}^{m_4,m_4}\) invertible, \(\left[ \begin{array}{cc} S_{41}&\quad S_{42} \end{array}\right] \) of full row rank \(n_5\), and \(P_{11}^\top S_{11}=S_{11}^\top P_{11}\). Here the block sizes \(m_1+m_2\), \(m_3\), \(m_4\), and \(n_5\) are as in (45).
Proof
The proof follows by performing Steps 1. and 2. of the proof of Lemma 37, see Appendix A, which yields
followed by a singular value decomposition \({{\hat{V}}}^\top _3{{\hat{S}}}_{11} {{\hat{W}}}_3= \left[ \begin{array}{cc} {\check{S}}_{11} &{}\quad 0 \\ 0 &{} 0 \end{array}\right] \) with \({\check{S}}_{11}\) nonsingular diagonal and a full rank decomposition \({\check{V}}_4^\top {{\hat{S}}}_{31}{{\hat{W}}}_3 =\left[ \begin{array}{cc} S_{41}&\quad S_{42} \end{array}\right] \). \(\square \)
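The two factorizations used in this proof, the singular value decomposition and a full rank decomposition, are both standard. A minimal sketch of the latter built on the SVD (the helper name and the fixed tolerance are our own choices):

```python
import numpy as np

def full_rank_decomposition(B, tol=1e-12):
    # Factor B = F @ R with F of full column rank and
    # R of full row rank, via the SVD.
    U, s, Vt = np.linalg.svd(B)
    r = int(np.sum(s > tol))
    return U[:, :r], s[:r, None] * Vt[:r]

B = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank one
F, R = full_rank_decomposition(B)
assert np.allclose(F @ R, B) and R.shape == (1, 2)
```

Using the SVD here (rather than, say, an LU factorization) makes the rank decision numerically stable, which matters for determining the block sizes of the condensed forms.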
Corollary 38 shows that the characteristic quantities \(m_1+m_2\), \(m_3\) and \(m_4\), as well as \(n_5\) can be obtained by purely real orthogonal transformations. The quantities \(m_1,m_2\) can then be determined from the real orthogonal staircase form of the symmetric pencil \(\lambda P_{11}^\top S_{11} S_{11}^\top P_{11}\) which has been presented in [8] and implemented as production software in [7].
There is an analogous condensed form for pencils of the form \(\lambda K-L\) satisfying \(KL^\top =-LK^\top \). For the case of regular pairs, this directly follows from the canonical form presented in [11], but again we present the construction so that it can be directly implemented as a numerical method, see Appendix B.
Lemma 39
Let \(K,L\in {\mathbb {R}}^{\ell ,n}\) be such that \(KL^\top =-LK^\top \). Then, there exist invertible matrices \(U\in {\mathbb {R}}^{\ell ,\ell }\), \(V\in {\mathbb {R}}^{n,n}\) such that
with \(\left[ \begin{array}{c} L_{1,5} \\ L_{2,5} \\ L_{3,5}\end{array}\right] \) of full column rank \(n_5\). (Note that block sizes may be zero.) Moreover, if the pencil \(\lambda K-L\) is regular, then the condensed form is unique except for the order of blocks and just contains the first four block rows and columns.
Proof
See Appendix B. \(\square \)
Note again that the form (47) is not unique in general, but the block sizes \(\ell _1,\ell _2,\ell _3,\ell _4\) and the column dimension \(n_5\) are.
Corollary 40
Let \(K,L\in {\mathbb {R}}^{\ell ,n}\) be such that \(LK^\top =-KL^\top \). Then, there exist real orthogonal matrices \(U\in {\mathbb {R}}^{\ell ,\ell }\), \(V\in {\mathbb {R}}^{n,n}\) such that
with \(K_{11},L_{11}\in {\mathbb {R}}^{2\ell _1,2\ell _1}\), \(K_{22} \in {\mathbb {R}}^{\ell _3,\ell _3}\), \(L_{33} \in {\mathbb {R}}^{\ell _4,\ell _4}\) invertible, \(\left[ \begin{array}{c} L_{1,4} \\ L_{2,4} \end{array}\right] \) of full column rank \(n_5\), and \(K_{11}^\top L_{11}=-L_{11}^\top K_{11}\). Here the block sizes \(\ell _1\), \(\ell _3\), \(\ell _4\), and \(n_5\) are as in (47).
Proof
The proof follows by performing Steps 1. and 2. of the proof of Lemma 39, which yields
followed by a singular value decomposition \({{\hat{U}}}^\top _3{{\hat{L}}}_{11} {{\hat{V}}}_4= \left[ \begin{array}{cc} {\check{L}}_{11} &{}\quad 0 \\ 0 &{} 0 \end{array}\right] \) with \({\check{L}}_{11}\) nonsingular diagonal and a full rank decomposition \({{\hat{U}}}_3^\top {{\hat{L}}}_{13}{{\hat{V}}}_3 =\left[ \begin{array}{c} L_{1,4} \\ L_{2,4} \end{array}\right] \). \(\square \)
Corollary 40 shows that the characteristic quantities \(\ell _1\), \(\ell _3\), \(\ell _4\), as well as \(n_5\) can be obtained by purely real orthogonal transformations.
The presented condensed forms can now be used in generating a condensed form for systems of the form (10).
Lemma 41
Consider a system of the form (10) with \(K,P,L,S\in {\mathbb {R}}^{n,n}\) and regular pencil \(\lambda KP-LS\). Then, there exist invertible matrices U, V, W as in (44) such that
where \(S_{11}=S_{11}^\top \) and \(K_{11} L_{11}^\top =-L_{11} K_{11}^\top \).
Proof
Since the pencil \(\lambda KP-LS\) is regular, it is square, and also the pencil \(\lambda P-S\) is regular; otherwise, by Lemma 37 there would be a common kernel of P and S, which would imply that the pencil \(\lambda KP-LS\) is singular.
Thus, by Lemma 37 there exist nonsingular matrices \(W_1, V_1\in {\mathbb {R}}^{n,n}\) such that
with \(n_1= m_1+m_2+m_3\), \(n_2 =m_4\) and \({{\tilde{S}}}_{11}\) symmetric. The regularity of the pencil \(\lambda KPLS\) implies that
has full column rank, and hence, there exist invertible matrices \(U_2\in {\mathbb {R}}^{n,n}\), \({{\tilde{V}}}_2\in {\mathbb {R}}^{n_2,n_2}\), and
such that
With
we then get that
has the desired form with \(U=U_2\), \(V=V_1V_2V_3\), \(W=W_1\) and where \(S_{11}=S_{11}^\top \) and \(K_{11} L_{11}^\top =-L_{11} K_{11}^\top \). \(\square \)
Transforming the system as in (49) and setting
partitioned accordingly, from the first block row of the coefficient matrices we obtain a reduced system given by
with \({\bar{P}}=I_{n_1}\), \({\bar{S}} ={\bar{S}}^\top =S_{11}\), \({\bar{K}}=K_{11}\) and \({\bar{L}}= L_{11}\), together with an equation \(z_2=K_{21} \dot{z}_1\), where \(z_2\) does not contribute to the Hamiltonian \({\mathcal {H}}^z(z)=\frac{1}{2} z^\top P^\top S z\). Note that the second equation is an index two constraint, because it involves the derivative of \(z_1\), see [25]. It arises from the Lagrange structure due to the singularity of P.
An analogous representation can be constructed from the condensed form of Lemma 39.
Lemma 42
Consider a system of the form (10) with \(K,P,L,S\in {\mathbb {R}}^{n,n}\) and regular pencil \(\lambda KP-LS\). Then, there exist invertible matrices U, V, W as in (44) such that
where \(L_{11}=-L_{11}^\top \) and \(P_{11}^\top S_{11}=S_{11}^\top P_{11}\).
Proof
Since the pencil \(\lambda KP-LS\) is square and regular, also the pencil \(\lambda K-L\) is regular; otherwise, by Lemma 39 there would be a common left nullspace of K and L, which would imply that the pencil \(\lambda KP-LS\) is singular.
Thus, by Lemma 39, there exist nonsingular matrices \(U_1, V_1\in {\mathbb {R}}^{n,n}\) such that
with \(n_1= 2\ell _1+\ell _3\), \(n_2 =\ell _4\) and \({{\tilde{L}}}_{11}\) skew-symmetric. The regularity of the pencil \(\lambda KP-LS\) implies that
has full row rank and hence there exist invertible matrices \(W_2\in {\mathbb {R}}^{n,n}\), \({{\tilde{V}}}_2\in {\mathbb {R}}^{n_2,n_2}\), and
such that
With
we then get that
has the desired form with \(W=W_2\), \(V=V_1V_2V_3\), \(U=U_1\) and where \(L_{11}=-L_{11}^\top \) and \(P_{11}^\top S_{11}=S_{11}^\top P_{11}\). \(\square \)
Transforming \(KP\dot{z}=LSz\) as in (51) and setting
partitioned accordingly, from the first block row of the coefficient matrices we obtain a reduced system given by
with \({\bar{P}}=P_{11}\), \({\bar{S}} =S_{11}\), \({\bar{K}}=I_{n_1}\) and \({\bar{L}}= L_{11}=-L_{11}^\top \), together with the algebraic equation \(0=z_2\), so that \(z_2\) does not contribute to the Hamiltonian.
Remark 43
System (50) is a dHDAE of the form (35) in which the Lagrange structure is spanned by the columns of
See also [30, 32] for similar constructions in the context of removing the factor Q in systems of the form (2).
Similarly, System (52) is a pHDAE of the form (2) (with \(R=0\)) and the Dirac structure is spanned by the columns of
We also perform a similar construction for systems of the form (27). Since C and D are chosen to be a maximal annihilator such that \(C N^\top + D M^\top =0\) in (26) and \(MN^\top +NM^\top \ge 0\) with \({{\,\textrm{rank}\,}}\left[ \begin{array}{cc} N&\quad M \end{array}\right] =n\), we can use the same construction as in the proof of Lemma 39 to first transform N and M in such a way that
This implies that we may choose C and D such that
If \(\lambda CP-DS\) is regular, then it follows that the last \(n_2\) rows of CP have full row rank, and hence, altogether we have the following condensed form.
Lemma 44
Consider a system of the form (27) with \(C,P,D,S\in {\mathbb {R}}^{n,n}\), regular pencil \(\lambda CP-DS\), and \(\left[ \begin{array}{cc} C&D\end{array}\right] \) a maximal annihilator as in (26). Then, there exist invertible matrices U, V, W as in (44) such that
where \(C_{11}=C_{11}^\top \) and \(P_{11}^\top S_{11}= S_{11}^\top P_{11}\).
As a consequence, by transforming \(CP\dot{z}=DSz\) as in (53) and setting
partitioned accordingly, from the second block row of the coefficient matrices we obtain \(\dot{z}_2=0\), i.e., \(z_2\) is a constant function and the first block row gives an inhomogeneous reduced system
with \({\bar{P}}=P_{11}\), \({\bar{S}} =S_{11}\), \({\bar{P}}^\top {\bar{S}}={\bar{S}}^\top {\bar{P}}\), \({\bar{C}}=M_{11}^\top \) and \({\bar{D}}= I_{n_1}\). It will then depend on the initial condition for \(z_2\) whether \(z_2=0\), in which case it does not contribute to the Hamiltonian; otherwise, the Hamiltonian is still a quadratic function in \(z_1\) plus some linear and constant terms.
Remark 45
The condensed forms in this section require rank decisions. Even if these are carried out in a numerically stable way using singular value decompositions, they may give wrong decisions in finite precision arithmetic. It is a common strategy, in case of doubt, to assume the worst-case scenario; for the condensed forms here, this would be to assume that the problem is a DAE of index two.
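A typical tolerance-based rank decision looks as follows; the threshold scaling relative to the largest singular value is one common convention, not the only possible choice:

```python
import numpy as np

def numerical_rank(B, rtol=None):
    # Rank decision: singular values below rtol * sigma_max are
    # treated as zero (a common convention, cf. Remark 45).
    s = np.linalg.svd(B, compute_uv=False)
    if s.size == 0 or s[0] == 0.0:
        return 0
    if rtol is None:
        rtol = max(B.shape) * np.finfo(float).eps
    return int(np.sum(s > rtol * s[0]))

assert numerical_rank(np.array([[1.0, 2.0], [2.0, 4.0]])) == 1
assert numerical_rank(np.eye(3)) == 3
```

A singular value just above the threshold may correspond either to a genuinely nonzero value or to rounding noise; it is exactly this ambiguity that motivates the worst-case strategy described above.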
In this section, we have derived structured condensed forms and shown that these can also be used to identify a subsystem which is of one of the wellestablished forms plus an algebraic constraint whose solution does not contribute to the Hamiltonian. In the next section, we analyze, when general DAEs can be transformed to the forms (2) or (10).
7 Representation of DAEs in the form \(KP\dot{z}=LS z\) or \(\frac{d}{dt}(Ez) =(J-R)Qz\)
For general DAE systems \(E\dot{x}=Ax\), it has been characterized in [30] when they are equivalent to a dHDAE system of the form (2). We present here a simplified result for the regular case.
Theorem 46
i) A regular pencil \(L(\lambda )=\lambda {{\hat{E}}}-{{\hat{A}}}\) is equivalent to a pencil of the form \(\lambda E-(J-R)Q\) as in (2) with \(\lambda E-Q\) being regular if and only if the following conditions are satisfied:

1.
The spectrum of \(L(\lambda )\) is contained in the closed left half plane.

2.
The finite nonzero eigenvalues on the imaginary axis are semisimple, and the partial multiplicities of the eigenvalue zero are at most two.

3.
The index of \(L(\lambda )\) is at most two.
ii) A regular pencil \(L(\lambda )=\lambda {{\hat{E}}}-{{\hat{A}}}\) is equivalent to a pencil of the form \(\lambda E-(J-R)\) as in (2) (i.e., with \(Q=I\)) if and only if the following conditions are satisfied:

1.
The spectrum of \(L(\lambda )\) is contained in the closed left half plane.

2.
The finite eigenvalues on the imaginary axis (including zero) are semisimple.

3.
The index of \(L(\lambda )\) is at most two.
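Condition 1. of both parts can be checked from the generalized eigenvalues of the pencil; the semisimplicity and index conditions require additional staircase-type computations that this sketch (function name and tolerance are ours) does not cover:

```python
import numpy as np
from scipy.linalg import eigvals

def finite_spectrum_in_closed_lhp(E, A, tol=1e-10):
    # Condition 1.: all finite eigenvalues of lambda E - A
    # lie in the closed left half plane.
    lam = eigvals(A, E)             # generalized eigenvalues of (A, E)
    lam = lam[np.isfinite(lam)]     # discard infinite eigenvalues
    return bool(np.all(lam.real <= tol))

# E singular, finite spectrum {-1}: condition 1. holds
E = np.diag([1.0, 0.0])
A = np.diag([-1.0, 1.0])
assert finite_spectrum_in_closed_lhp(E, A)
```

Infinite eigenvalues of the pencil are returned as non-finite values by the generalized eigenvalue solver and are filtered out, since only the finite spectrum is constrained by condition 1.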
As a corollary for the case without dissipation, we have the following result.
Corollary 47
A regular pencil \(L(\lambda ) =\lambda {{\hat{E}}} -{{\hat{A}}}\) is equivalent to a pencil of the form \(\lambda E-J\) as in (2) (with \(Q=I,R=0\)) if and only if the following conditions are satisfied:

1.
All finite eigenvalues are on the imaginary axis and semisimple.

2.
The index of \(L(\lambda )\) is at most two.
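The eigenvalue conditions of Corollary 47 can be tested numerically via the generalized eigenvalue problem of the pencil. The following sketch uses an illustrative pencil of our own choosing, with \(E\) symmetric positive definite and \(J\) skew-symmetric, so the conditions are satisfied by construction.

```python
import numpy as np
from scipy.linalg import eig

# Illustrative pencil lambda*E - J (our own example, not one from the text):
# E symmetric positive definite, J skew-symmetric.
E = np.diag([2.0, 1.0])
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Finite eigenvalues of lambda*E - J, i.e., solutions of J v = lambda E v:
eigvals = eig(J, E, right=False)

# Condition 1 of Corollary 47: all finite eigenvalues lie on the imaginary
# axis. (Here they are also pairwise distinct, hence automatically semisimple.)
assert np.allclose(eigvals.real, 0.0, atol=1e-12)
print(np.sort_complex(eigvals))  # a complex-conjugate pair +/- i/sqrt(2)
```

For repeated imaginary eigenvalues, semisimplicity would additionally require comparing geometric and algebraic multiplicities, which this small sketch does not do.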
To study when general regular DAEs of the form (1) can be expressed as extended dHDAEs of the form (10), we first consider a condensed form under orthogonal equivalence.
Theorem 48
Consider a regular pencil \(\lambda E-A\) with \(E,A\in {\mathbb {R}}^{n,n}\) of index at most two. Then, there exist real orthogonal matrices \(U\in {\mathbb {R}}^{n,n}\) and \(V\in {\mathbb {R}}^{n,n}\) such that
with \(A_{14}\in {\mathbb {R}}^{n_1,n_1}\), \(A_{41}\in {\mathbb {R}}^{n_1,n_1}\), \(E_{22}\in {\mathbb {R}}^{n_2,n_2}\), and \(A_{33}\in {\mathbb {R}}^{n_3,n_3}\) invertible.
Proof
The proof is presented in Appendix C. \(\square \)
Transforming the DAE (1) as \(U^\top EV V^\top \dot{x}= U^\top A V V^\top x\) and setting \(V^\top x= [x_1^\top , \ldots , x_4^\top ]^\top \), it follows that \(x_1=0\), \(x_2\) is determined from the implicit ordinary differential equation (note that \(E_{22}\) is invertible)
\(x_3= -A_{33}^{-1} A_{32} x_2\), and \(x_4\) is uniquely determined in terms of \(x_2,\dot{x}_2, x_3\). Initial conditions can be prescribed freely for \(x_2\) only.
Corollary 49
Consider a general regular pencil \(\lambda E-A\) with \(E,A\in {\mathbb {R}}^{n,n}\) that is of index at most two and for which all finite eigenvalues are in the closed left half plane and those on the imaginary axis are semisimple. Then, there exist invertible matrices \(U\in {\mathbb {R}}^{n,n}\) and \(V\in {\mathbb {R}}^{n,n}\) such that
where
Proof
The proof follows by considering the condensed form (55), and using block elimination with the invertible matrices \(A_{33}\), \(A_{14}\), \(A_{41}\), \(E_{22}\) to transform the pencil in (55) to the form
Let \(U_1^\top {{\tilde{E}}}_{11} V_1= \left[ \begin{array}{cc} I_{\hat{n}_1} & 0 \\ 0 & 0 \end{array}\right] \) be the echelon form of \({{\tilde{E}}}_{11}\). We scale the first block row with \(U_1^\top \), the fourth block row with \(V_1^{-1}\), the first block column by \(V_1\), and the fourth block column by \(U_1^{-\top }\), and obtain a form
For any positive definite solution X of the Lyapunov inequality
one can multiply the second block row by X to obtain that \(E_{33}=X\) and \(A_{33} =X{{\hat{A}}}_{33}\) have the desired form, see, e.g., [1, 3]. \(\square \)
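This final step of the proof can be mimicked numerically: for a matrix whose spectrum lies in the open left half plane, a positive definite Lyapunov solution produces the desired dissipative structure. The matrix \(\hat A_{33}\) below is an arbitrary illustrative choice of ours; we solve the Lyapunov equation with right-hand side \(-I\), a strict special case of the inequality.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative matrix with spectrum in the open left half plane (our choice):
A_hat = np.array([[-1.0, 2.0],
                  [0.0, -3.0]])

# Solve the Lyapunov equation  A_hat^T X + X A_hat = -I,
# a strict special case of the Lyapunov inequality  A_hat^T X + X A_hat <= 0.
X = solve_continuous_lyapunov(A_hat.T, -np.eye(2))

# X is symmetric positive definite ...
assert np.allclose(X, X.T)
assert np.all(np.linalg.eigvalsh(X) > 0)

# ... and with E33 = X and A33 = X @ A_hat, the symmetric part of A33 equals
# (X A_hat + A_hat^T X)/2 = -I/2, i.e., A33 = J - R with R positive definite:
A33 = X @ A_hat
sym_part = 0.5 * (A33 + A33.T)
assert np.allclose(sym_part, -0.5 * np.eye(2))
```

The identity used in the last check is exactly the Lyapunov equation: the symmetric part of \(X\hat A_{33}\) is negative (semi)definite whenever \(X\) solves the inequality.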
Note that the transformation to a system of the form (2) can also be achieved in a similar way for singular pencils with zero minimal indices.
If there is no dissipation, i.e., if \(R_{33}=0\), then \(A_{33}\) is skew-symmetric.
In Corollary 49, we have shown that general systems \(E\dot{x}=Ax\) can be transformed to a very special canonical form, and the following remark shows that for the case \(R_{33}=0\) each of the blocks in the canonical form can be expressed as a pencil of the form \(\lambda KP-LS\).
Remark 50
Consider a regular pencil \(\lambda E-A\) in the form (57); then after a permutation one obtains four blocks which can all be written in the form \(\lambda KP-LS\) as in (10).

(1)
We have
$$\begin{aligned} \left[ \begin{array}{cc} I_{\hat{n}_1} & 0 \\ 0 & 0 \end{array}\right] \left[ \begin{array}{c} \dot{z}_1 \\ \dot{z}_5 \end{array}\right] - \left[ \begin{array}{cc} 0 & I_{\hat{n}_1} \\ I_{\hat{n}_1} & 0 \end{array}\right] \left[ \begin{array}{c} z_1 \\ z_5 \end{array}\right] = K_{1}P_{1}\left[ \begin{array}{c} \dot{z}_1 \\ \dot{z}_5 \end{array}\right] - L_{1}S_{1}\left[ \begin{array}{c} z_1 \\ z_5 \end{array}\right] \end{aligned}$$with
$$\begin{aligned} K_1=\left[ \begin{array}{cc} I_{\hat{n}_1} & 0 \\ 0 & I_{\hat{n}_1} \end{array}\right] ,\ L_1= \left[ \begin{array}{cc} 0 & I_{\hat{n}_1} \\ I_{\hat{n}_1} & 0 \end{array}\right] ,\ P_1= \left[ \begin{array}{cc} I_{\hat{n}_1} & 0 \\ 0 & 0 \end{array}\right] , \ S_1= \left[ \begin{array}{cc} I_{\hat{n}_1} & 0 \\ 0 & I_{\hat{n}_1} \end{array}\right] \end{aligned}$$and Hamiltonian \({\mathcal {H}}^z_1= \frac{1}{2} \left[ \begin{array}{c} z_1 \\ z_5\end{array}\right] ^\top \left[ \begin{array}{cc} I_{\hat{n}_1} & 0 \\ 0 & 0 \end{array}\right] \left[ \begin{array}{c} z_1 \\ z_5 \end{array}\right] = \frac{1}{2} z_1^\top z_1\), which is actually 0, since the second equation forces \(z_1=0\). In this case, we can insert the derivative of the second equation into the first (index reduction) and obtain
$$\begin{aligned} \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}\right] \left[ \begin{array}{c} \dot{z}_1 \\ \dot{z}_5 \end{array}\right] = \left[ \begin{array}{cc} 0 & I_{\hat{n}_1} \\ I_{\hat{n}_1} & 0 \end{array}\right] \left[ \begin{array}{c} z_1 \\ z_5 \end{array}\right] \end{aligned}$$without changing the Hamiltonian.

(2)
We have
$$\begin{aligned} \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}\right] \left[ \begin{array}{c} \dot{z}_2 \\ \dot{z}_6 \end{array}\right] - \left[ \begin{array}{cc} 0 & I_{\hat{n}_2} \\ I_{\hat{n}_2} & 0 \end{array}\right] \left[ \begin{array}{c} z_2 \\ z_6 \end{array}\right] = K_{2}P_{2}\left[ \begin{array}{c} \dot{z}_2 \\ \dot{z}_6 \end{array}\right] - L_{2}S_{2}\left[ \begin{array}{c} z_2 \\ z_6 \end{array}\right] \end{aligned}$$with different possibilities of representation, e.g., a)
$$\begin{aligned} K_2=\left[ \begin{array}{cc} I_{\hat{n}_2} & 0 \\ 0 & I_{\hat{n}_2} \end{array}\right] ,\ L_2= \left[ \begin{array}{cc} 0 & I_{\hat{n}_2} \\ I_{\hat{n}_2} & 0 \end{array}\right] ,\ P_2= \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}\right] , \ S_2= \left[ \begin{array}{cc} I_{\hat{n}_2} & 0 \\ 0 & I_{\hat{n}_2} \end{array}\right] \end{aligned}$$and Hamiltonian \({\mathcal {H}}^z_2= \frac{1}{2} \left[ \begin{array}{c} z_2 \\ z_6\end{array}\right] ^\top \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}\right] \left[ \begin{array}{c} z_2 \\ z_6 \end{array}\right] =0\), or b)
$$\begin{aligned} K_2=\left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}\right] ,\ L_2= \left[ \begin{array}{cc} 0 & I_{\hat{n}_2} \\ I_{\hat{n}_2} & 0 \end{array}\right] ,\ P_2= \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}\right] , \ S_2= \left[ \begin{array}{cc} I_{\hat{n}_2} & 0 \\ 0 & I_{\hat{n}_2} \end{array}\right] \end{aligned}$$and Hamiltonian \({\mathcal {H}}^z_2= \frac{1}{2} \left[ \begin{array}{c} z_2 \\ z_6\end{array}\right] ^\top \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}\right] \left[ \begin{array}{c} z_2 \\ z_6 \end{array}\right] =0\).

(3)
We have \(\lambda E_{33}-A_{33}= \lambda K_{3}P_{3}- L_{3}S_{3}\) with \(K_3=I_{\hat{n}_3}\), \(P_3=E_{33}\), \(L_3= A_{33}\), \(S_{3}= I_{\hat{n}_3}\). Here the Hamiltonian is \({\mathcal {H}}^z_3= \frac{1}{2} z_3^\top E_{33} z_3\).

(4)
We have \(\lambda \cdot 0 - I_{\hat{n}_4}= \lambda K_{4}P_{4}- L_{4}S_{4}\) with \(K_4=0\), \(P_4=I_{\hat{n}_4}\), \(L_4=I_{\hat{n}_4}\), \(S_4=I_{\hat{n}_4}\). Here the Hamiltonian is \({\mathcal {H}}^z_4= \frac{1}{2} z_4^\top z_4\).
Note that the presented representations are in no way unique, but if the condensed form is available or computable, and the properties of Theorem 48 hold, then we can express the general DAE in the representation (10) or (2).
This discussion yields the following useful corollary.
Corollary 51
Consider a regular pencil of the form \(\lambda KP-LS\) associated with the dHDAE (10). Then, it has index at most two, and index two can only occur if the system has a singular Lagrange structure.
Proof
Consider the representations in Remark 50. The index-two structure occurs only in the first case, where \(K_1,L_1\) are invertible but the product \(P_1^\top S_1\) is singular. Thus, index two arises only from a singular Lagrange structure. \(\square \)
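The mechanism behind this proof can be checked numerically for the first representation in Remark 50: there \(P_1^\top S_1\) is singular, and since \(A=L_1S_1\) is invertible, the index of the pencil \(\lambda K_1P_1 - L_1S_1\) can be read off from the nilpotency of \(N=A^{-1}E\) with \(E=K_1P_1\) (here \(N\) itself is nilpotent). A sketch with \(\hat{n}_1=1\):

```python
import numpy as np

# First representation of Remark 50 with n1_hat = 1:
n = 1
I = np.eye(n)
Z = np.zeros((n, n))
K1 = np.block([[I, Z], [Z, I]])
P1 = np.block([[I, Z], [Z, Z]])
L1 = np.block([[Z, I], [I, Z]])
S1 = np.block([[I, Z], [Z, I]])

E = K1 @ P1   # = [[1, 0], [0, 0]]
A = L1 @ S1   # = [[0, 1], [1, 0]], invertible

# Singular Lagrange structure: P1^T S1 is singular.
assert abs(np.linalg.det(P1.T @ S1)) < 1e-14

# N = A^{-1} E is nilpotent here, so the index of the pencil equals its
# nilpotency index: N is nonzero but N^2 = 0, i.e., index two.
N = np.linalg.solve(A, E)
assert np.any(N != 0)
assert np.allclose(N @ N, 0.0)
```

In general, the index equals the nilpotency index of the nilpotent part of \(A^{-1}E\); the example is small enough that \(A^{-1}E\) is already nilpotent.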
8 Conclusion and outlook
Different definitions of (extended, dissipative) Hamiltonian or port-Hamiltonian differential–algebraic systems lead to different representations. We have collected all the known representations as well as a few new ones and analyzed them from a geometric as well as an algebraic point of view. The latter leads to condensed forms that can be directly implemented in numerical algorithms to compute the structural properties of the systems. We have also studied the effect that different representations have on the index of the differential–algebraic system as well as on the associated Hamiltonian. In general, it can be seen that certain algebraic constraints do not contribute to the Hamiltonian and can therefore be separated from the system in an appropriate coordinate system. We have also characterized when a general differential–algebraic system can be transformed to the different representations. Several important tasks remain open. These include extensions to the case of non-regular systems, which can be based on the results and methods in Appendices A and B that are already proved for the nonsquare case. Extensions to systems with inputs and outputs, as well as to linear time-varying and nonlinear systems, are currently under consideration.
Availability of data and materials
Not applicable.
References
Achleitner F, Arnold A, Mehrmann V (2021) Hypocoercivity and controllability in linear semi-dissipative ODEs and DAEs. ZAMM Z Angew Math Mech (in press)
Arnol'd VI (2013) Mathematical methods of classical mechanics, vol 60. Springer, New York
Beattie C, Mehrmann V, Van Dooren P (2019) Robust port-Hamiltonian representations of passive systems. Automatica 100:182–186
Beattie C, Mehrmann V, Xu H, Zwart H (2018) Port-Hamiltonian descriptor systems. Math Control Signals Syst 30(17):1–27
Benner P, Byers R, Faßbender H, Mehrmann V, Watkins D (2000) Cholesky-like factorizations of skew-symmetric matrices. Electron Trans Numer Anal 11:85–93
Breedveld PC (2008) Modeling and simulation of dynamic systems using bond graphs. EOLSS Publishers Co. Ltd./UNESCO, Oxford, pp 128–173
Brüll T, Mehrmann V (2007) STCSSP: A FORTRAN 77 routine to compute a structured staircase form for a (skew-)symmetric/(skew-)symmetric matrix pencil. Preprint 31-2007, Institut für Mathematik, TU Berlin
Byers R, Mehrmann V, Xu H (2007) A structured staircase algorithm for skew-symmetric/symmetric pencils. Electron Trans Numer Anal 26:1–13
Camlibel MK, van der Schaft A (2022) Port-Hamiltonian systems and monotonicity. arXiv:2206.09139
Camlibel MK, van der Schaft AJ (2013) Incrementally port-Hamiltonian systems. In: 52nd IEEE conference on decision and control. IEEE, pp 2538–2543
Courant TJ (1990) Dirac manifolds. Trans Am Math Soc 319(2):631–661
Dai L (1989) Singular control systems, vol 118. Lecture Notes in Control and Inform. Sci. Springer-Verlag, Berlin, Heidelberg
Duindam V, Macchelli A, Stramigioli S, Bruyninckx H (2009) Modeling and control of complex physical systems: the port-Hamiltonian approach. Springer-Verlag, Berlin, Heidelberg
Eberard D, Maschke B, Van Der Schaft A (2007) An extension of pseudo-Hamiltonian systems to the thermodynamic space: towards a geometry of nonequilibrium thermodynamics. Rep Math Phys 60(2):175–198
Egger H, Kugler T (2018) Damped wave systems on networks: exponential stability and uniform approximations. Numer Math 138(4):839–867
Egger H, Kugler T, Liljegren-Sailer B, Marheineke N, Mehrmann V (2018) On structure preserving model reduction for damped wave propagation in transport networks. SIAM J Sci Comput 40:A331–A365
Eich-Soellner E, Führer C (1998) Numerical methods in multibody dynamics. Vieweg+Teubner Verlag, Wiesbaden
Emmrich E, Mehrmann V (2013) Operator differential-algebraic equations arising in fluid dynamics. Comput Methods Appl Math 13(4):443–470
Freund RW (2011) The SPRIM algorithm for structure-preserving order reduction of general RLC circuits. In: Benner P, Hinze M, ter Maten EJW (eds) Model reduction for circuit simulation. Springer-Verlag, Dordrecht, pp 25–52
Gantmacher FR (1959) Theory of matrices, vol 1. Chelsea, New York
Gernandt H, Haller FE, Reis T (2021) A linear relation approach to port-Hamiltonian differential-algebraic equations. SIAM J Matrix Anal Appl 42(2):1011–1044
Golo G, van der Schaft AJ, Breedveld PC, Maschke BM (2003) Hamiltonian formulation of bond graphs. In: Rantzer A, Johansson R (eds) Nonlinear and hybrid systems in automotive control. Springer, Heidelberg, pp 351–372
Günther M, Bartel A, Jacob B, Reis T (2021) Dynamic iteration schemes and port-Hamiltonian formulation in coupled differential-algebraic equation circuit simulation. Int J Circ Theor Appl 49(2):430–452
Jacob B, Zwart H (2012) Linear port-Hamiltonian systems on infinite-dimensional spaces. Operator Theory: Advances and Applications. Birkhäuser, Basel
Kunkel P, Mehrmann V (2006) Differential-algebraic equations. Analysis and Numerical Solution. European Mathematical Society, Zürich
Kunkel P, Mehrmann V (2011) Formal adjoints of linear DAE operators and their role in optimal control. Electron J Linear Algebra 22:672–693
Liesen J, Mehrmann V (2015) Linear algebra. Springer Undergraduate Mathematics Series. Springer-Verlag, Cham
Maschke BM, van der Schaft A (1992) Port-controlled Hamiltonian systems: modelling origins and system theoretic properties. IFAC Proc Vol 25(13):359–365
Mehl C, Mehrmann V, Wojtylak M (2018) Linear algebra properties of dissipative Hamiltonian descriptor systems. SIAM J Matrix Anal Appl 39(3):1489–1519
Mehl C, Mehrmann V, Wojtylak M (2021) Distance problems for dissipative Hamiltonian systems and related matrix polynomials. Linear Algebra Appl, pp 335–366
Mehrmann V, Morandin R (2019) Structure-preserving discretization for port-Hamiltonian descriptor systems. In: 58th IEEE conference on decision and control (CDC), Nice, France, pp 6863–6868
Mehrmann V, Unger B (2023) Control of port-Hamiltonian differential-algebraic systems and applications. Acta Numerica (to appear)
Ortega R, van der Schaft AJ, Mareels I, Maschke BM (2001) Putting energy back in control. Control Syst Mag 21:18–33
van der Schaft AJ (2006) Port-Hamiltonian systems: an introductory survey. In: Verona JL, Sanz-Sole M, Verdura J (eds) Proceedings of the International Congress of Mathematicians, vol. III, Invited Lectures. Madrid, Spain, pp 1339–1365
van der Schaft AJ, Maschke BM (1995) The Hamiltonian formulation of energy conserving physical systems with external ports. Arch Elektron Übertragungstech 45:362–371
van der Schaft AJ, Maschke BM (2013) Port-Hamiltonian systems on graphs. SIAM J Control Optim 51:906–937
Scholz L (2017) Condensed forms for linear port-Hamiltonian descriptor systems. Preprint 09-2017, Institut für Mathematik, Technische Universität Berlin
Van der Schaft A (2017) L2-gain and passivity techniques in nonlinear control, 3rd edn. Springer, Cham
van der Schaft A (2013) Port-Hamiltonian differential-algebraic systems. In: Ilchmann A, Reis T (eds) Surveys in Differential-Algebraic Equations I. Differential-Algebraic Equations Forum. Springer-Verlag, Berlin, Heidelberg, pp 173–226
van der Schaft A, Jeltsema D (2014) Port-Hamiltonian systems theory: an introductory overview. Foundations and Trends in Systems and Control 1(2–3):173–378
van der Schaft A, Maschke B (2018) Generalized port-Hamiltonian DAE systems. Systems Control Lett 121:31–37
van der Schaft A, Maschke B (2020) Dirac and Lagrange algebraic constraints in nonlinear port-Hamiltonian systems. Vietnam J Math 48(4):929–939
Acknowledgements
We thank two anonymous referees for very valuable comments and suggestions that helped to substantially improve the paper.
Funding
Open Access funding enabled and organized by Projekt DEAL. Research of the first author was supported by Deutsche Forschungsgemeinschaft (DFG) through Priority Program SPP 1984 within project 790 392: Distributed dynamic control of network security.
Contributions
Both authors did the research together; there is no distinction that can be specified.
Ethics declarations
Ethical approval and consent for participants
Not applicable.
Conflict of interest
There are no competing interests of a financial or personal nature.
Appendix
Appendix A: Proof of Lemma 37
Proof
We present a proof in the form of an algorithmic procedure that can be implemented as a numerical algorithm.
Step 1. Let \(V_1\) and \(W_1\) be real orthogonal matrices such that
where \({{\tilde{P}}}_{11}\in {\mathbb {R}}^{{{\tilde{m}}}_1,{{\tilde{m}}}_1}\) is diagonal with positive diagonal elements. This transformation can be constructed via a singular value decomposition of P and a numerical rank decision.
Then, using the structure (16), it follows that \({{\tilde{P}}}_{11}^\top {{\tilde{S}}}_{11}={{\tilde{S}}}_{11}^\top {{\tilde{P}}}_{11}\) and \({{\tilde{S}}}_{12}=0\).
Step 2. Let \({{\tilde{V}}}_2\), \({{\tilde{W}}}_2\) be real orthogonal matrices such that
where \({{\hat{S}}}_{22}\in {\mathbb {R}}^{m_4,m_4}\) is diagonal with positive diagonal elements. This transformation can be constructed again via a singular value decomposition and a numerical rank decision. Set
and form
with \({{\hat{P}}}_{11}^\top ={{\tilde{P}}}_{11}\).
Step 3. Let
and form
where by the structure (16) now \({\check{S}}_{11}\) is symmetric. Note that although we are working with nonorthogonal transformation matrices in this step, the numerical errors can be controlled, since we are inverting diagonal matrices.
Step 4. Let
be the canonical form of the symmetric matrix \({\check{S}}_{11}\) under congruence, which can be obtained by first computing the spectral decomposition and then scaling the nonsingular diagonal parts by congruence to be \(\pm I\), see, e.g., [27].
Furthermore, let
be a full rank decomposition partitioned accordingly, with \({\check{V}}_3\) real orthogonal. Then, set
and form
which is as claimed. \(\square \)
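Step 4 of this proof can be sketched numerically for the symmetric case: the spectral decomposition of a symmetric matrix is rescaled by congruence so that the nonsingular part becomes \(\pm I\) (Sylvester's canonical form). The small example matrix below is a hypothetical choice of ours.

```python
import numpy as np

def congruence_canonical(S, tol=1e-12):
    """For a symmetric S, return an invertible T with T.T @ S @ T diagonal
    with entries in {+1, -1, 0}: spectral decomposition plus scaling."""
    w, Q = np.linalg.eigh(S)   # spectral decomposition S = Q diag(w) Q.T
    # scale eigenvector columns by 1/sqrt(|eigenvalue|) on the nonsingular part
    scale = np.array([1.0 / np.sqrt(abs(x)) if abs(x) > tol else 1.0 for x in w])
    return Q * scale           # broadcasting scales the columns of Q

S = np.array([[0.0, 2.0],
              [2.0, 0.0]])    # symmetric, eigenvalues -2 and +2
T = congruence_canonical(S)
D = T.T @ S @ T
print(np.round(D))            # diagonal with one entry -1 and one entry +1
```

Since `eigh` returns the eigenvalues in ascending order, the signs appear sorted; a production version would additionally permute the \(+1\), \(-1\), and \(0\) blocks into the order required by the condensed form.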
Appendix B: Proof of Lemma 39
Proof
The proof is similar to that of Lemma 37, just adapted to the different symmetries and transformation structure. The following algorithmic procedure can be directly implemented as a numerical algorithm.
Step 1. Let \(U_1\) and \(V_1\) be real orthogonal matrices such that
where \({{\tilde{K}}}_{11}\in {\mathbb {R}}^{{{\tilde{\ell }}}_1,{{\tilde{\ell }}}_1}\) is diagonal with positive diagonal elements. This transformation can be constructed via a singular value decomposition of K and a numerical rank decision.
Then, using the structure (14), it follows that \({{\tilde{K}}}_{11}^\top {{\tilde{L}}}_{11}={{\tilde{L}}}_{11}^\top {{\tilde{K}}}_{11}\) and \({{\tilde{L}}}_{21}=0\).
Step 2. Let \({{\tilde{U}}}_2\), \({{\tilde{V}}}_2\) be real orthogonal matrices such that
where \({{\hat{L}}}_{22}\in {\mathbb {R}}^{\ell _4,\ell _4}\) is diagonal with positive diagonal elements. This transformation can be constructed again via a singular value decomposition and a numerical rank decision. Set
and form
with \({{\hat{K}}}_{11}={{\tilde{K}}}_{11}\).
Step 3. Let
and form
where by the structure (14) now \({\check{L}}_{11}\) is skew-symmetric. Note that although we are working with nonorthogonal transformation matrices in this step, the numerical errors can be controlled since we are inverting diagonal matrices.
Step 4. Let
be the canonical form of the skew-symmetric matrix \({\check{L}}_{11}\) under congruence which can be obtained by first computing the spectral decomposition and then scaling the nonsingular diagonal parts by congruence to be \(\pm I\). This procedure is implemented in a numerically robust way in [5]. Furthermore, let
be a full rank decomposition partitioned accordingly, with \({\check{V}}_3\) real orthogonal. Then, set
and form
which is as claimed. \(\square \)
Appendix C: Proof of Theorem 48
Proof
We again present a proof that can be implemented as a numerical algorithm.
Step 1. Let \(U_1\) and \(V_1\) be real orthogonal matrices such that
where \({{\tilde{E}}}_{11}\in {\mathbb {R}}^{{{\tilde{n}}}_1,{{\tilde{n}}}_1}\) is diagonal with positive diagonal elements. This transformation can be constructed via a singular value decomposition of E and a numerical rank decision.
Step 2. Let \({{\tilde{U}}}_2\), \({{\tilde{V}}}_2\) be real orthogonal matrices such that
where \({{\hat{A}}}_{22}\in {\mathbb {R}}^{n_3,n_3}\) is diagonal with positive diagonal elements. This transformation can be constructed again via a singular value decomposition and a numerical rank decision. Set
and form
with \({{\hat{E}}}_{11}={{\tilde{E}}}_{11}\).
The regularity of the pencil implies that \(A_{13}\) has full column rank \(n_1={{\tilde{n}}}_1-n_3\) and that \(A_{31}\) has full row rank \(n_1={{\tilde{n}}}_1-n_3\), because otherwise there would be a common right or left nullspace, respectively.
Step 3. Let \({{\tilde{U}}}_{31}\), \({{\tilde{V}}}_{31}\), \({{\tilde{U}}}_{13}\), \({{\tilde{V}}}_{13}\) be real orthogonal matrices of appropriate dimensions such that
where \({{\hat{A}}}_{41}\in {\mathbb {R}}^{n_1,n_1}\) and \({{\hat{A}}}_{14}\in {\mathbb {R}}^{n_1,n_1}\) are diagonal with positive diagonal elements. This transformation can be constructed again via a singular value decomposition and numerical rank decisions. Set
then \(U_3^\top U_2^\top U_1^\top E V_1 V_2 V_3\) and \(U_3^\top U_2^\top U_1^\top A V_1 V_2 V_3\) are as claimed in (55). The invertibility of \(E_{22}\) then follows from the assumption that the pencil has index at most two, see [25]. \(\square \)
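The first two steps of this construction can be sketched numerically: each step is an SVD-based rank revelation applied to one block. The pencil below is a random illustrative example of our own; the sketch stops after Step 2 rather than carrying out the full staircase reduction.

```python
import numpy as np

def svd_rank_split(M, tol=1e-12):
    """Orthogonal U, V and rank r so that U.T @ M @ V has an invertible
    r-by-r diagonal block in the top-left corner and zeros elsewhere."""
    U, s, Vt = np.linalg.svd(M)
    sigma_max = s[0] if s.size and s[0] > 0 else 1.0
    r = int(np.sum(s > tol * sigma_max))
    return U, Vt.T, r

# Illustrative pencil lambda*E - A with rank-deficient E (our own choice):
rng = np.random.default_rng(0)
n = 5
E = rng.standard_normal((n, 3)) @ rng.standard_normal((3, n))   # rank 3
A = rng.standard_normal((n, n))

# Step 1: reveal the rank structure of E.
U1, V1, r = svd_rank_split(E)
E1 = U1.T @ E @ V1      # invertible diagonal block top-left, zeros elsewhere
A1 = U1.T @ A @ V1
assert np.allclose(E1[r:, :], 0.0, atol=1e-10)
assert np.allclose(E1[:, r:], 0.0, atol=1e-10)

# Step 2: compress the trailing block of the transformed A by a second SVD;
# further steps would continue on the resulting off-diagonal blocks.
A22 = A1[r:, r:]
U2, V2, r2 = svd_rank_split(A22)
```

Each step uses only orthogonal transformations, so the conditioning of the pencil is not degraded; only the rank decisions themselves are sensitive, as discussed at the start of this section.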
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Mehrmann, V., van der Schaft, A. Differential–algebraic systems with dissipative Hamiltonian structure. Math. Control Signals Syst. 35, 541–584 (2023). https://doi.org/10.1007/s00498-023-00349-2