Linear passive systems and maximal monotone mappings

This paper deals with a class of dynamical systems obtained by interconnecting linear systems with static set-valued relations. We first show that such an interconnection can be described by a differential inclusion with a maximal monotone set-valued mapping when the underlying linear system is passive and the static relation is maximal monotone. Based on classical results on such differential inclusions, we conclude that these interconnections are well-posed in the sense of existence and uniqueness of solutions. Finally, we investigate conditions which guarantee well-posedness but are weaker than passivity.


Introduction
It is a true pleasure for us to contribute an article to this special issue in honor of Jong-Shi Pang on the occasion of his 60th birthday. In the last decade, we had the privilege to develop a fruitful research collaboration with Jong-Shi on the so-called linear complementarity systems, combining notions/tools from systems theory and mathematical programming. This paper builds upon and expands further some of the ideas that came about from our collaboration with Jong-Shi.
Variational inequalities were introduced by Stampacchia in 1964 [1] as a tool in the study of elliptic partial differential equations, and have since been recognized as instrumental in a large class of optimization and equilibrium problems. Applications range from elastoplasticity to traffic and from electrical networks to mathematical finance; see for instance [2,3]. The role of maximal monotonicity in the context of variational inequalities, as a sufficient condition for well-behavedness, can be compared to the role of convexity in optimization problems. Maximal monotone mappings were introduced in 1961 by Minty [4], who had already earlier applied the notion of monotone relations in an abstract formulation for electrical networks of nonlinear resistors [5]. Extensions to dynamic problems were undertaken in the same decade; intimate connections between semigroups of nonlinear contractions and maximal monotone mappings were established by Crandall and Pazy [6] and further developed by Brézis [7].
The development of the theory of semigroups of nonlinear contractions took place in the classical context of dynamics given by a closed system of (partial) differential equations. Engineers have long appreciated the power of open (input-output) dynamical systems as a device for modeling as well as for analysis. It comes naturally in many applications in the engineering sciences, as well as in biology and economics, to look at a dynamical system as a composite of smaller systems which are connected by the specification of relations between certain variables associated to the subsystems. These variables may be referred to as "inputs" and "outputs", or more generally as "connecting variables", since the suggestion of unidirectionality that comes with the input/output terminology is not always appropriate. Systems equipped with connecting variables in this sense may be simply referred to as "open dynamical systems". Early contributions were made in the 1930s in the field of electrical engineering by, among others, Nyquist and Bode, and the field has received intensive study ever since the pioneering work of Kalman around 1960 and the associated successes in the Apollo space program and in many other applications.
Within the class of open dynamical systems, linear time-invariant systems play a special role as a prime example and as a first breeding ground of ideas that are later developed in wider contexts. More or less similarly, linear complementarity problems [8] take a special position among variational inequalities. Dynamical systems that arise as interconnections of linear time-invariant systems and linear complementarity problems came under investigation in the 1990s under the name "linear complementarity systems" [9,10]. Part of the motivation came from the fact that these systems can be looked at as a particular class of systems with mixed continuous and discrete state variables, also called "multimodal systems" or "hybrid systems". More generally, differential variational inequalities were studied by Pang and Stewart [11]. Linear time-invariant systems together with static relations described by set-valued mappings have been used extensively. An incomplete inventory includes electrical networks with switching elements as in power converters [12-16], linear relay systems [17,18], piecewise linear systems [19], and projected dynamical systems [20,21]; see also [22-24] for further examples and [25,26] for numerical analysis of maximal monotone differential inclusions.
The history of linear time-invariant systems connected to static (nonlinear) relations in fact goes back a long way. This way of describing a dynamical system has been used intensively as a tool in stability analysis within the context of so-called Lur'e systems; see [27] for a survey. The notion of passivity (also known as dissipativity) plays an important role in this theory. The term is used here as a description of a characteristic of an open dynamical system, and is motivated by the notion of stored energy in electrical networks and in many other applications in physics. The term "dissipativity" is used as well in the context of maximal monotone mappings; in fact, in their paper cited above [6], Crandall and Pazy use the term "dissipative set" in place of "maximal monotone mapping". This already indicates that there are strong conceptual relations between the notions of passivity and maximal monotonicity. Indeed, passive complementarity systems present themselves as a natural class of dynamical systems [28].
In this paper, our goal is to establish the well-posedness (in the sense of existence and uniqueness of solutions) for systems that arise as interconnections of passive linear time-invariant systems and maximal monotone mappings. Our proof strategy relies on a reduction to the classical case of a closed dynamical system. To achieve this, we present a new result in the spirit of preservation of maximal monotonicity under certain operations. Such results are known to be often nontrivial; even the question whether the sum of two maximal monotone mappings is again maximal monotone does not have a straightforward answer (cf. [29, Section 12.F]). Moreover we provide a "pole-shifting" technique, which is analogous to a well-known method in the classical theory, to extend the results to a larger class of systems. The well-posedness of interconnections of linear passive systems with maximal monotone mappings has been studied before by Brogliato [30]. In the cited paper, well-posedness is proved under some additional conditions, which were later partially removed in [31,32]. Here we obtain the result without imposing additional conditions. The paper is organized as follows. In Sect. 2, we quickly review tools from convex analysis and systems theory that will be extensively employed in the paper. The class of systems the paper deals with will be introduced in Sect. 3. This will be followed by the main results in Sect. 4. Finally, the paper closes with the conclusions in Sect. 5.

Preliminaries
The following notational conventions will be in force throughout the paper. We denote the set of real numbers by R, the nonnegative real numbers by R_+, n-vectors of real numbers by R^n, and n × m real matrices by R^{n×m}. The sets of locally absolutely continuous, locally integrable, and locally square-integrable functions from R_+ to R^n are denoted, respectively, by AC_loc(R_+, R^n), L_{1,loc}(R_+, R^n), and L_{2,loc}(R_+, R^n).
To denote the scalar product of two vectors x, y ∈ R^n, we sometimes use the notation ⟨x, y⟩ := x^T y, where x^T denotes the transpose of x. The Euclidean norm of a vector x is denoted by ‖x‖ := (x^T x)^{1/2}. For a subspace W of R^n, W^⊥ denotes the orthogonal complement, that is, {y ∈ R^n | ⟨x, y⟩ = 0 for all x ∈ W}.
We say that a (not necessarily symmetric) matrix M ∈ R^{n×n} is positive semi-definite if x^T M x ≥ 0 for all x ∈ R^n. We sometimes write M ⪰ 0 to mean that M is positive semi-definite. Also, we say that M is positive definite if it is positive semi-definite and x^T M x = 0 implies that x = 0.
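Since only the symmetric part of M enters the quadratic form x^T M x, positive semi-definiteness of a not necessarily symmetric matrix can be checked numerically through (M + M^T)/2. A minimal sketch; the matrix and sample size are ours, chosen for illustration:

```python
import numpy as np

# A nonsymmetric matrix can be positive semi-definite in the sense used
# here: x^T M x >= 0 for all x depends only on the symmetric part.
M = np.array([[1.0, 2.0],
              [-2.0, 1.0]])  # nonsymmetric; its symmetric part is the identity

rng = np.random.default_rng(0)
xs = rng.standard_normal((1000, 2))
quad = np.einsum('ni,ij,nj->n', xs, M, xs)  # x^T M x for each sample x
print(quad.min() >= 0)  # True: x^T M x = x^T x for this M
```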

Convex sets
To a large extent, we follow the notation of the book [29] in the context of convex analysis. We quickly recall concepts/notation which are often employed throughout the paper.
Let S ⊆ R^n be a set. We denote its closure, interior, and relative interior by cl(S), int(S), and rint(S), respectively. Its horizon cone S^∞ is defined by S^∞ := {x | there exist x^ν ∈ S and λ^ν ↓ 0 such that λ^ν x^ν → x}. When S is convex, N_S(x) denotes the normal cone to S at x. For a linear map L : R^m → R^n, we denote its kernel and image by ker L and im L, respectively. By L^{-1}(S), we denote the inverse image of the set S under L.
For the sake of completeness, we collect some well-known facts on convex sets in the following proposition.

Maximal monotone set-valued mappings
Let F : R^n ⇒ R^n be a set-valued mapping, that is, F(x) ⊆ R^n for each x ∈ R^n. We define its domain, image, and graph, respectively, by dom(F) := {x | F(x) ≠ ∅}, im(F) := {y | y ∈ F(x) for some x}, and graph(F) := {(x, y) | y ∈ F(x)}. The inverse mapping F^{-1} : R^n ⇒ R^n is defined by F^{-1}(y) := {x | y ∈ F(x)}. Throughout the paper, we are interested in the so-called maximal monotone set-valued mappings. A set-valued mapping F : R^n ⇒ R^n is said to be monotone if ⟨x_1 − x_2, y_1 − y_2⟩ ≥ 0 for all (x_1, y_1), (x_2, y_2) ∈ graph(F). It is said to be maximal monotone if no enlargement of its graph is possible in R^n × R^n without destroying monotonicity. We refer to [7] and [29] for a detailed treatment of maximal monotone mappings. A particular class of maximal monotone mappings is formed by the subgradient mappings associated with (possibly discontinuous) extended-real-valued convex functions. Indeed, it is well known that the subgradient mapping of a proper, lower semicontinuous convex function is maximal monotone [29, Thm. 12.17]. When n = 1, every maximal monotone mapping is such a subgradient mapping [29, Ex. 12.26]. However, not every maximal monotone mapping corresponds to a subgradient mapping in higher dimensions.
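A concrete scalar instance is the relay, the subgradient of |x|: F(x) = {sign(x)} for x ≠ 0 and F(0) = [−1, 1], which is maximal monotone. The monotonicity inequality can be checked directly on sampled graph points; a minimal sketch, with sample points and helper name of our own choosing:

```python
# The relay F(x) = sign(x) for x != 0, F(0) = [-1, 1], is the
# subgradient of |x| and hence maximal monotone.  We verify the
# monotonicity inequality <x1 - x2, y1 - y2> >= 0 on sampled graph
# points (the sampling is ours, for illustration only).
def relay_graph_points():
    pts = [(-2.0, -1.0), (-0.5, -1.0), (0.3, 1.0), (2.0, 1.0)]
    # at x = 0 every y in [-1, 1] belongs to F(0)
    pts += [(0.0, y / 10.0) for y in range(-10, 11)]
    return pts

pts = relay_graph_points()
mono = all((x1 - x2) * (y1 - y2) >= 0
           for (x1, y1) in pts for (x2, y2) in pts)
print(mono)  # True
```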
Typically, verifying monotonicity is much easier than verifying maximal monotonicity. Among the various characterizations of maximal monotonicity (e.g. Minty's classical theorem [29, Thm. 12.12]), the following will be used later: a monotone mapping is maximal monotone if, and only if, it satisfies the following conditions:

Differential inclusions
Differential inclusions will play a major role in the rest of the paper. Consider a differential inclusion of the form

ẋ(t) ∈ −F(x(t)) + u(t), (2)

where x(t), u(t) ∈ R^n and F : R^n ⇒ R^n is a set-valued mapping. We say that a function x ∈ AC_loc(R_+, R^n) is a solution of (2) for the initial condition x_0 and a function u ∈ L_{1,loc}(R_+, R^n) if x(0) = x_0 and (2) is satisfied for almost all t ≥ 0.
In particular, we are interested in differential inclusions with maximal monotone set-valued mappings. The following theorem summarizes the classical existence and uniqueness results for the solutions of such differential inclusions.

Theorem 1 Consider the differential inclusion

ẋ(t) ∈ −F(x(t)) + μx(t) + u(t), (3)

where x(t), u(t) ∈ R^n and F : R^n ⇒ R^n is a maximal monotone set-valued mapping. For each μ ≥ 0, there exists a unique solution of the differential inclusion (3) for each initial condition x_0 ∈ cl(dom(F)) and each locally integrable function u.

In case int(dom(F)) = ∅, we employ a dimension-reduction argument inspired by [29, proof of Thm. 12.41]. Let X be the affine hull of dom(F). Since X is an affine set, there exist a vector ξ ∈ R^n and a subspace W ⊆ R^n such that X = ξ + W. Let T_1 ∈ R^{n×n_1} and T_2 ∈ R^{n×n_2} be matrices whose columns form bases for W and W^⊥, respectively. One can choose these matrices in such a way that the matrix T := [T_1 T_2] satisfies T^T T = I. Define F̂(x̂) := T^T F(T x̂ + ξ) for all x̂ ∈ R^n, and consider the differential inclusion

(d/dt) x̂(t) ∈ −F̂(x̂(t)) + μ x̂(t) + û(t). (4)

Note that x is a solution of (3) for the initial condition x_0 and the function u if and only if x̂(t) := T^T (x(t) − ξ) is a solution of (4) for the initial condition T^T (x_0 − ξ) and the function û(t) := T^T (u(t) + μξ). Therefore, it suffices to prove the claim for the differential inclusion (4). Since dom(F) ≠ ∅, statement 2 of Proposition 2 implies that rint(dom(F)) ≠ ∅. Then, it follows from [29, Thm. 12.43] that F̂ is maximal monotone. Note that dom(F̂) = T^T (dom(F) − ξ). It follows from Proposition 2 that (5) holds for all x ∈ cl(dom(F)). This implies (6) for all x ∈ dom(F). Let x̂ be partitioned accordingly as x̂ = col(x̂_1, x̂_2). It follows from (5) that x̂ ∈ dom(F̂) only if x̂_2 = 0. Define F̂_1 accordingly. Due to (5), there exists ξ̂_1 such that col(ξ̂_1, 0) ∈ rint(dom(F̂)). Then, it follows from [29, Exercise 12.46] that F̂_1 is maximal monotone. Due to (6), we have dom(F̂) = dom(F̂_1) × {0}. Note that by construction int(dom(F̂_1)) is non-empty. Let û be partitioned accordingly as û = col(û_1, û_2).
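Numerically, inclusions of this type are typically integrated with the implicit Euler scheme x_{k+1} = (I + hF)^{-1}(x_k + h u_k), whose single step evaluates the resolvent of F; the resolvent is single-valued and nonexpansive precisely because F is maximal monotone. A sketch for F = ∂|·| (the relay, with μ = 0), whose resolvent is soft-thresholding; step size and input are our choices, not the paper's:

```python
# Implicit Euler for a maximal monotone inclusion xdot in -F(x) + u:
# each step evaluates the resolvent (I + h F)^{-1}.  Here F is the
# subgradient of |x|, so the resolvent is soft-thresholding.
def soft_threshold(v, h):
    """Resolvent (I + h * d|.|)^{-1} evaluated at v."""
    if v > h:
        return v - h
    if v < -h:
        return v + h
    return 0.0

def implicit_euler(x0, u, h, steps):
    x, traj = x0, [x0]
    for k in range(steps):
        x = soft_threshold(x + h * u(k * h), h)
        traj.append(x)
    return traj

# with u = 0 the state reaches the origin in finite time and stays there
traj = implicit_euler(1.0, lambda t: 0.0, h=0.1, steps=20)
print(traj[-1])  # 0.0
```

The finite-time convergence to zero (rather than mere asymptotic decay) is characteristic of the set-valuedness of F at the origin.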

Linear passive systems
Consider the linear system Σ(A, B, C, D) given by

ẋ(t) = Ax(t) + Bz(t), w(t) = Cx(t) + Dz(t), (8)

where x(t) ∈ R^n is the state and (z, w) ∈ R^{m+m} are the external variables. The system (8) is called passive if there exists a nonnegative function V : R^n → R_+ (a storage function) such that

V(x(t_1)) + ∫_{t_1}^{t_2} z^T(t) w(t) dt ≥ V(x(t_2)) (9)

is satisfied for all 0 ≤ t_1 ≤ t_2 and for all trajectories (z, x, w) of (8). The classical Kalman-Yakubovich-Popov lemma states that the system (8) is passive if, and only if, the linear matrix inequalities

K = K^T ⪰ 0, [A^T K + K A, K B − C^T; B^T K − C, −(D + D^T)] ⪯ 0 (10)

admit a solution K. Moreover, V(x) = (1/2) x^T K x defines a storage function in case K is a solution of the linear matrix inequalities (10).
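For a candidate K, the inequalities can be checked numerically by assembling the block matrix and testing negative semi-definiteness of its symmetric part. A sketch assuming the standard KYP form of the inequalities; the one-dimensional example system and the candidate K = I are ours:

```python
import numpy as np

# Passivity check via the KYP linear matrix inequalities: verify that
#   [ A^T K + K A    K B - C^T ]
#   [ B^T K - C    -(D + D^T) ]
# is negative semi-definite for a given K = K^T >= 0.
def is_passive_with(K, A, B, C, D, tol=1e-10):
    lmi = np.block([[A.T @ K + K @ A, K @ B - C.T],
                    [B.T @ K - C, -(D + D.T)]])
    sym = (lmi + lmi.T) / 2
    return bool(np.all(np.linalg.eigvalsh(sym) <= tol))

A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])   # K B = C^T holds with K = I
D = np.array([[0.5]])
print(is_passive_with(np.eye(1), A, B, C, D))  # True
```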
In the following proposition, we summarize some of the consequences of passivity that will be used later. To formulate these consequences, we need to introduce some notation. For a subspace W ⊆ R^n and a linear mapping A ∈ R^{n×n}, we denote the largest A-invariant subspace contained in W by ⟨W | A⟩. It is well known (see e.g. [34])

3 Linear systems coupled to relations
Consider the linear system

ẋ(t) = Ax(t) + Bz(t) + u(t), (11a)
w(t) = Cx(t) + Dz(t), (11b)

where x ∈ R^n is the state, u ∈ R^n is the input, and (z, w) ∈ R^{m+m} are the external variables that satisfy

w(t) ∈ M(−z(t)) (11c)

for a set-valued mapping M : R^m ⇒ R^m. By solving z from the relations (11b) and (11c), we obtain the differential inclusion

ẋ(t) ∈ −H(x(t)) + u(t), (12)

where

H(x) := −Ax + B(M + D)^{-1}(Cx) (13)

and

dom(H) = {x | Cx ∈ im(M + D)}. (14)

In the sequel, we will be interested in the existence and uniqueness of solutions for (12) when the linear system Σ(A, B, C, D) is passive and M is maximal monotone. First, two examples of systems of the form (11) are in order.
Example 1 Consider the diode bridge circuit depicted in Fig. 1. This circuit consists of two linear resistors with resistances R_1 > 0 and R_2 > 0, one linear capacitor with capacitance C > 0, one linear inductor with inductance L > 0, one voltage source u, and four ideal diodes D_i with i = 1, 2, 3, 4. One can derive the governing circuit equations in the form of (11) as follows: Here x_1 is the current through the inductor, x_2 is the voltage across the capacitor, and the inequalities must be understood componentwise.
Circuits with elements described by general monotone characteristics rather than with diodes as in the example above therefore provide examples of linear passive systems coupled to maximal monotone mappings that are not subdifferentials.

Example 2
A simple deterministic queueing model with continuous flows may be constructed as follows. Consider n servers working in parallel for a single user. The cost of using server j is proportional to the queue length associated to this server; this quantity in turn is determined by the load that has been placed on the server previously and on the processing speed of the server, which we will here assume to be constant. Loads and queue lengths cannot be negative. The total load is distributed by the user among the servers according to the Wardrop principle, which means that no load is placed on servers when there are other servers which have lower cost. The total load is chosen by the user as a non-increasing function of the realized cost. Introduce the following notation:

x_j(t)  queue length of server j at time t
y_j(t)  auxiliary variable relating to nonnegativity of queue lengths
e_j(t)  cost of j-th server at time t in excess of realized (i.e. minimal) cost
k_j     positive proportionality constant linking queue length to cost
ℓ_j(t)  load placed on server j at time t
s(t)    total load at time t
a(t)    realized cost at time t
f(·)    constitutive relation linking realized cost to total load.
We can then write the governing equations as follows: The equations (16a) and (16e) together ensure that queue lengths are indeed always nonnegative; the Wardrop principle is expressed by (16f). The relations (16a)-(16d) above can be written in vector form as follows, with K := diag(k_1, . . . , k_n): The relations (16e)-(16g) constitute the negative of a maximal monotone set-valued mapping, while the linear input-output system given by (17) is passive (even conservative) with respect to the storage function x → (1/2) x^T K x. The example can be generalized in several ways, for instance to situations with multiple users.
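The Wardrop load distribution can be mimicked by a crude explicit time-stepping scheme: at each step the load goes to the currently cheapest servers, and queue lengths are clipped at zero. This is an illustrative discretization under assumptions of our own (constant total load, i.e. constant f; all numerical values invented), not the paper's formulation:

```python
# Crude explicit scheme for the queueing model: greedy (Wardrop-like)
# load placement on the cheapest servers, nonnegative queue lengths.
def simulate(k, mu, s, h, steps):
    n = len(k)
    x = [0.0] * n                       # queue lengths
    for _ in range(steps):
        cost = [k[j] * x[j] for j in range(n)]
        cmin = min(cost)
        cheap = [j for j in range(n) if cost[j] <= cmin + 1e-12]
        load = [s / len(cheap) if j in cheap else 0.0 for j in range(n)]
        # queue grows with placed load, shrinks with service rate mu
        x = [max(0.0, x[j] + h * (load[j] - mu[j])) for j in range(n)]
    return x

k, mu = [1.0, 2.0], [0.5, 0.5]          # costs per unit queue, service rates
x = simulate(k, mu, s=2.0, h=0.01, steps=5000)
print(min(x) >= 0.0)                    # True: queues stay nonnegative
# realized costs of the loaded servers approximately equalize (Wardrop)
print(abs(k[0] * x[0] - k[1] * x[1]) < 0.1)  # True
```

The overloaded queues grow without bound, but the realized costs chatter around a common value, which is the discrete shadow of the Wardrop principle (16f).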

Main results
Maximal monotonicity of the set-valued mapping H as defined in (13) will play a key role in our development. The following theorem asserts that H is maximal monotone if the underlying linear system is passive and the set-valued mapping M is maximal monotone.
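In the scalar case the map H can be written in closed form, which makes its monotonicity easy to inspect. A sketch with M the relay (subgradient of |·|) and D > 0, so that (M + D)^{-1} is single-valued; the numerical values are ours and are chosen so that the passivity conditions hold with K = I (A ≤ 0, B = C, D ≥ 0):

```python
# Scalar instance of H(x) = -A x + B (M + D)^{-1}(C x) with M = relay.
A, B, C, D = -1.0, 1.0, 1.0, 0.5

def M_plus_D_inverse(v):
    """Solve v in sign(z) + D*z for z (relay plus linear term)."""
    if v > 1.0:
        return (v - 1.0) / D
    if v < -1.0:
        return (v + 1.0) / D
    return 0.0

def H(x):
    return -A * x + B * M_plus_D_inverse(C * x)

# H is monotone: sampled secant slopes are nonnegative
xs = [i / 10.0 for i in range(-30, 31)]
print(all((H(b) - H(a)) * (b - a) >= 0 for a in xs for b in xs))  # True
```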

Theorem 2 Suppose that

i. Σ(A, B, C, D) is passive with the storage function x → (1/2) x^T x,
ii. M is maximal monotone, and
iii. im C ∩ rint(im(M + D)) ≠ ∅.

Then, the set-valued mapping H defined in (13) is maximal monotone.
Proof The proof is based on the application of Proposition 2 to H .

H is monotone:
Take Then, where This would imply Therefore, it follows from (18) that From Then, it follows from (21) that H is monotone.

There exists a convex set S_H such that S_H ⊆ dom(H) ⊆ cl(S_H):
Let P = (M + D)^{-1}. Since Σ(A, B, C, D) is passive, it follows from (10) that D is positive semi-definite and hence induces a maximal monotone single-valued mapping whose domain is the entire R^m. Then, [29, Cor. 12.44] implies that M + D is maximal monotone and [29, Ex. 12.8] implies that P is maximal monotone. Note that dom(P) = im(M + D). Due to Proposition 2, there exists a convex set S_P such that

S_P ⊆ dom(P) ⊆ cl(S_P). (23)

Moreover, it follows from [29, Thm. 12.41] that one can take S_P = rint(cl(dom(P))). Since dom(H) = C^{-1}(dom(P)), it follows from (23) that

C^{-1}(S_P) ⊆ dom(H) ⊆ C^{-1}(cl(S_P)). (24)

Define S_H = C^{-1}(S_P). Since S_P is convex, so is S_H. It follows from statement 1 of Proposition 1 that S_P = rint(dom(P)). As im C ∩ rint(im(M + D)) ≠ ∅ and rint(im(M + D)) = rint(dom(P)) = S_P, statement 2 of Proposition 1 implies that C^{-1}(cl(S_P)) = cl(C^{-1}(S_P)) = cl(S_H). Consequently, we get from (24) that S_H ⊆ dom(H) ⊆ cl(S_H). Since S_H is convex, so is cl(dom(H)). We know from [29, Ex. 3.12] that for all ξ ∈ dom(H). We claim that

(B P(Cξ))^∞ = (C^T P(Cξ))^∞ (28)

for all ξ ∈ dom(H). To prove this, let ζ_B ∈ (B P(Cξ))^∞ for some ξ ∈ dom(H). Then, there exist sequences ζ_B^ν and λ^ν such that From (29a)-(29c), we know that for all ν

ζ_B^ν = Bη^ν (30)

for some η^ν ∈ P(Cξ). Thus, we get This means that For each ν_1 and ν_2, one gets as M is maximal monotone. This would yield Since D is positive semi-definite due to passivity, we get η^{ν_1} − η^{ν_2} ∈ ker(D + D^T), i.e.
Then, one can find η̄ such that for all ν

η^ν = η̄ + η̃^ν (36)

for some η̃^ν ∈ ker(D + D^T). Note that Bv = C^T v whenever v ∈ ker(D + D^T), due to the second statement of Proposition 3 and K = I. Consequently, ζ_B ∈ (C^T P(Cξ))^∞. The same arguments remain valid if we swap B and C^T. Therefore, (28) holds.

graph(H ) is closed:
Let (x^ν, y^ν) be a convergent sequence in graph(H). Then, for each ν there exists It is enough to show that (ξ, −Aξ + Bζ) ∈ graph(H). To do so, let W be the smallest subspace that contains im(M + D) = dom((M + D)^{-1}). It follows from the maximal monotonicity of (M + D)^{-1} that for each ν holds for any z ∈ W^⊥. Now, let z^ν = z_1^ν + z_2^ν where z_1^ν ∈ ker B ∩ W^⊥ and Note that From (52), we have z_2^ν ∈ (M + D)^{-1}(C x^ν). In view of (51) and (54), it is enough to show that the sequence z_2^ν is bounded. On the contrary, suppose that z_2^ν is unbounded. Without loss of generality, we can assume that the sequence z_2^ν / ‖z_2^ν‖ converges, say to ζ^∞. It follows from (51) and (54) that Thus, we get Due to passivity with K = I and the monotonicity of (M + D)^{-1}, we have By dividing by ‖z_2^ν‖^2 and taking the limit as ν tends to infinity, we obtain Since D is positive semi-definite due to the first statement of Proposition 3, this results in Then, it follows from (57), K = I, and the second statement of Proposition 3 that Let η ∈ im(M + D) and ζ ∈ (M + D)^{-1}(η). From the monotonicity of (M + D)^{-1}, we have Taking the limit as ν tends to infinity, we obtain This means that the hyperplane span({ζ^∞})^⊥ separates the sets im C and im(M + D). Since im C = rint(im C) and im C ∩ rint(im(M + D)) ≠ ∅, it follows from [38, Thm. 11.3] that im C and im(M + D) cannot be properly separated. Therefore, both im C and im(M + D) must be contained in the hyperplane span({ζ^∞})^⊥. Since W is the smallest subspace that contains im(M + D), we get W ⊆ span({ζ^∞})^⊥, which implies ζ^∞ ∈ W^⊥. Together with (57), we get In view of (53) and (55), this yields ζ^∞ = 0. This, however, contradicts (55), which implies ‖ζ^∞‖ = 1. Therefore, z_2^ν must be bounded. Then, it follows from Proposition 2 that H is maximal monotone.
Remark 2 It is well known that maximal monotonicity is preserved under certain operations such as addition [29, Cor. 12.44] and piecewise affine transformations [29, Thm. 12.43]. None of these results immediately implies that the set-valued mapping H of the form (13) is maximal monotone when Σ(A, B, C, D) is passive and M is maximal monotone. As such, Theorem 2 can be considered as a particular result on maximal monotonicity preserving operations.
Well-posedness of systems of the form (11) and their variants has been addressed in several papers [30,31,39-41] for linear passive (or passive-like) systems and maximal monotone mappings. However, the relevant results in these papers require extra conditions on the linear system and/or the maximal monotone mapping. The following theorem provides conditions for the existence and uniqueness of solutions to the differential inclusion (12) when the linear system Σ(A, B, C, D) is passive and the set-valued map M is maximal monotone, without requiring any additional conditions.

Theorem 3 Suppose that

i. Σ(A, B, C, D) is passive with the storage function x → (1/2) x^T K x where K is positive definite,
ii. M is maximal monotone, and
iii. im C ∩ rint(im(M + D)) ≠ ∅.
Then, for each initial condition x 0 such that C x 0 ∈ cl(im(M + D)) and locally integrable function u, the differential inclusion (12) admits a unique solution.
Proof By hypothesis, Σ(A, B, C, D) is passive with a positive definite storage function x → (1/2) x^T K x. By defining x̃ := K^{1/2} x, we can rewrite the differential inclusion (12) as

(d/dt) x̃(t) ∈ −H̃(x̃(t)) + ũ(t), (64)

where H̃(x̃) := K^{1/2} H(K^{-1/2} x̃) and ũ := K^{1/2} u. Clearly, x̃ → K^{-1/2} x̃ is a bijection between the solutions of (64) and those of (12). Furthermore, it can be easily verified that Σ(Ã, B̃, C̃, D), with Ã := K^{1/2} A K^{-1/2}, B̃ := K^{1/2} B, and C̃ := C K^{-1/2}, is passive with the storage function x → (1/2) x^T x. As such, we can assume, without loss of generality, that x → (1/2) x^T x is a positive definite storage function for the system Σ(A, B, C, D).
Then, it follows from Theorem 2 that H is maximal monotone. Therefore, the claim follows from Theorem 1 with μ = 0.
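The coordinate change used in the proof can be verified numerically: if K > 0 solves the KYP inequalities for Σ(A, B, C, D), then (K^{1/2} A K^{-1/2}, K^{1/2} B, C K^{-1/2}, D) should solve them with the identity. A sketch assuming the standard KYP form of the inequalities; the example system and the matrix K are ours:

```python
import numpy as np

# Check that the storage-normalizing coordinate change preserves the
# KYP linear matrix inequalities (K -> I).
def kyp_lmi(K, A, B, C, D):
    m = np.block([[A.T @ K + K @ A, K @ B - C.T],
                  [B.T @ K - C, -(D + D.T)]])
    return (m + m.T) / 2

def is_nsd(m, tol=1e-10):
    return bool(np.all(np.linalg.eigvalsh(m) <= tol))

A = -np.eye(2)
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0]])
K = np.diag([1.0, 4.0])

Kh = np.diag(np.sqrt(np.diag(K)))       # K^{1/2} (K is diagonal here)
Khi = np.linalg.inv(Kh)
At, Bt, Ct = Kh @ A @ Khi, Kh @ B, C @ Khi

print(is_nsd(kyp_lmi(K, A, B, C, D)),
      is_nsd(kyp_lmi(np.eye(2), At, Bt, Ct, D)))  # True True
```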

Remark 4
In order to apply Theorem 3 to Example 1, note that Σ(A, B, C, D) constitutes a passive system, as discussed in the example. Clearly, M is maximal monotone. Finally, it follows from [8, Cor. 3.8.10] that im(M + D) = R_+ × R × R × R_+. As such, condition iii of Theorem 3 is satisfied as well.

Next, we present two extensions of Theorem 3. The first one deals with systems which are not passive themselves but can be made passive by shifting the eigenvalues of the matrix A. Then, the differential inclusion (12) admits a unique solution for each initial condition x_0 such that C x_0 ∈ cl(im(M + D)) and each locally integrable function u.
Proof The proof readily follows from Theorems 2 and 1 with μ = α.
Remark 5 In case D is positive semi-definite and there exists a positive definite matrix K such that K B = C^T, one can always find a positive number α such that Σ(A − αI, B, C, D) is passive. As such, Theorem 2 of [31] can be recovered as a special case of Corollary 1.
The second extension deals with the case of positive semi-definite storage functions. To formulate this result, we need to introduce some nomenclature. For a maximal monotone set-valued mapping F, the element of minimal norm of F(x) will be denoted by F o (x).

Corollary 2 Suppose that
i. Σ(A − αI, B, C, D) is passive for some α ≥ 0,
ii. M is maximal monotone,
iii. im C ∩ rint(im(M + D)) ≠ ∅, and
iv. there exists a positive real number α such that for all w ∈ im(M + D).
Then, the differential inclusion (12) admits a solution for each initial condition x_0 such that C x_0 ∈ cl(im(M + D)) and each locally integrable function u. Moreover, if x and x̃ are two solutions for the same initial condition and the same locally integrable function u, then K x = K x̃.
Proof When K is positive definite, Corollary 1 readily implies the claim. Suppose that K is positive semi-definite but not positive definite. Then, one can change the coordinates in such a way that K = diag(K_1, 0) with K_1 positive definite. Suppose that the matrices A, B, and C are partitioned accordingly to the partition of K. Then, the linear matrix inequalities (10) imply that A_12 = 0, C_2 = 0, and Σ(A_11 − αI, B_1, C_1, D) is passive with the positive definite storage function x_1 → (1/2) x_1^T x_1. Note that in the new coordinates the differential inclusion (12) takes the form (66)-(67). Also note that im C = im C_1 in the new coordinates. Then, it follows from Corollary 1 that the differential inclusion (66) admits a unique solution for each initial condition x_10 and each locally integrable function u_1. Since x_1 is locally absolutely continuous, it follows from (65) that the function t → ((M + D)^{-1})^o(C_1 x_1(t)) is locally integrable. Hence, the differential inclusion (67) admits a solution for each initial condition x_20 and each locally integrable function u_2. This proves the existence of solutions as claimed. The rest follows from the uniqueness of x_1.
In general, checking the existence of an α ≥ 0 such that Σ(A − αI, B, C, D) is passive amounts to checking the feasibility of the matrix inequalities Note that these matrix inequalities do not constitute linear matrix inequalities and cannot be verified easily. However, the particular structure of these matrix inequalities leads to easily verifiable algebraic necessary and sufficient conditions for their feasibility. To present these conditions, we need to introduce some notation. For a matrix A ∈ R^{n×n} and two subspaces V, W ⊆ R^n, we define the family of subspaces T(A, V, W) := {T | A(T ∩ V) ⊆ T and W ⊆ T}. Subspaces satisfying the property above have been studied in geometric linear control theory under the name of conditioned invariant subspaces (see e.g. [34]). It is well known that the family T(A, V, W) is closed under subspace intersection. As such, there always exists a minimal element, say T*(A, V, W), such that T*(A, V, W) ⊆ T for all T ∈ T(A, V, W). Moreover, one can devise a subspace algorithm (see e.g. [34]) which returns the minimal subspace in a finite number of steps for a given triple (A, V, W).
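The minimal element can be computed by a standard subspace recursion. The sketch below assumes the usual conditioned-invariance setting, T ∈ T(A, V, W) iff A(T ∩ V) ⊆ T and W ⊆ T, together with the recursion T_0 = W, T_{k+1} = W + A(T_k ∩ V), which stabilizes in at most n steps; subspaces are represented by column-space bases, and the helper names are ours:

```python
import numpy as np

def colspace(M, tol=1e-10):
    """Orthonormal basis of the column space of M."""
    if M.size == 0:
        return np.zeros((M.shape[0], 0))
    u, s, _ = np.linalg.svd(M, full_matrices=False)
    return u[:, s > tol]

def intersect(M, N, tol=1e-10):
    """Basis of (col M) ∩ (col N) via the kernel of [M, -N]."""
    if M.shape[1] == 0 or N.shape[1] == 0:
        return np.zeros((M.shape[0], 0))
    X = np.hstack([M, -N])
    _, s, vt = np.linalg.svd(X)
    pad = np.concatenate([s, np.zeros(vt.shape[0] - len(s))])
    ker = vt[pad <= tol].T              # kernel basis, columns (a; b) with Ma = Nb
    return colspace(M @ ker[:M.shape[1], :])

def t_star(A, V, W):
    """Minimal T with A(T ∩ V) ⊆ T and col(W) ⊆ T (recursion T_{k+1} = W + A(T_k ∩ V))."""
    T = colspace(W)
    while True:
        Tn = colspace(np.hstack([W, A @ intersect(T, V)]))
        if Tn.shape[1] == T.shape[1]:   # dimensions stabilize -> fixed point
            return Tn
        T = Tn

A = np.array([[0.0, 1.0], [0.0, 0.0]])
T = t_star(A, np.eye(2), np.array([[0.0], [1.0]]))
print(T.shape[1])  # 2: starting from span{e2}, the recursion adds A e2 = e1
```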
The following lemma on positive semi-definite solutions of matrix equations, taken partly from [42], will be needed in the proof of the theorem below.

Lemma 1 If the equation Y K = X , where Y and X are given matrices, has a symmetric and positive semi-definite solution, then the general form of such solutions is
where U is an arbitrary symmetric and positive semi-definite matrix, and Z^− denotes a generalized inverse of the matrix Z, i.e. Z Z^− Z = Z. For the solution as given above, we have

2 ⇒ 1: We first prove that there exists a symmetric positive semi-definite matrix K such that i. K B E = C^T E, ii. ker K is A-invariant, and iii. ker K ⊆ ker C.
Existence of a symmetric and positive semi-definite matrix K satisfying condition (i) follows from [42, Thm. 2.2] together with the relations 2b and 2c. Moreover, [42, Thm. 2.2] implies that any such matrix K must be of the form (69). Since im B E ⊆ T*(A, ker E^T C, im B E) and ker(I − Y^− Y)^T = im Y^T = im B E, there exists a matrix N such that Let U = N^T N. Clearly, U is symmetric and positive semi-definite. Note that Then, it follows from (70) that On the one hand, we have from the definition of T*(A, ker E^T C, im B E). On the other hand, we have from condition 2d. The last two inclusions imply that this choice of U, and hence K, satisfies condition (ii), whereas condition 2e readily implies that (iii) is satisfied as well. The last step of the proof is to show that there exists a real number α ≥ 0 such that To this end, we can assume, without loss of generality, that the matrices A, K, B, C, and D + D^T are of the forms where A_ij ∈ R^{n_i×n_j}, K_1 ∈ R^{n_1×n_1}, B_ij ∈ R^{n_i×m_j}, C_ij ∈ R^{m_i×n_j}, D_1 ∈ R^{m_1×m_1}, n_1 + n_2 = n, m_1 + m_2 = m, and both K_1 and D_1 are symmetric and positive definite matrices. Note that the structure of A and C follows from conditions (ii) and (iii). Also note that condition (i) boils down to K_1 B_12 = C_21^T. Then, we have It follows from the positive definiteness of both K_1 and D_1 that there exists α ≥ 0 such that (80) holds.

Concluding remarks
In this paper, we have shown that the interconnection of a linear system with a static set-valued relation is well-posed, in the sense of existence and uniqueness of solutions, whenever the underlying linear system is passive and the static relation is maximal monotone. Similar well-posedness results have already appeared in the literature, but with extra conditions on the linear systems as well as on the static relations. Removing those extra conditions requires employing a completely different set of arguments (and hence tools). Based on recent characterisations of maximal monotonicity, we have shown that such interconnections can be represented by differential inclusions with maximal monotone set-valued mappings. As such, the classical well-posedness results for such differential inclusions can be applied immediately to the class of systems at hand. As has already been observed in the literature, well-posedness results can be established under requirements on the linear system that are weaker than passivity. One such property is the so-called passivity by pole shifting. As a side result, we have also provided geometric necessary and sufficient conditions for passivity by pole shifting.