Discrete Roesser state models from 2D frequency data

We identify a general, i.e. not necessarily denominator-separable, Roesser model from 2D discrete vector-geometric trajectories generated by a controllable, quarter-plane causal system. Our procedure consists of two steps: the first is the computation of state trajectories from the factorization of constant matrices constructed directly from input-output data. The second is the computation of the state, output, and input matrices of a Roesser model as solutions of a system of linear equations involving the given input-output data and the computed state trajectories.

We want to find matrices A, B, C, D such that (1) holds for the data (2) and some associated state trajectories x_i := col(x_{i,1}, x_{i,2}), i = 1, . . . , N. Such a quadruple (A, B, C, D) will be called an unfalsified Roesser model for the data (2).
Roesser model system identification has been considered previously, see [2,4,14,15,16,17,18,19], and it has been applied in modelling the spatial dynamics of deformable mirrors (see [31]), heat exchangers (see [3]), batch processes controlled by iterative learning control (see [32]), and in image processing (see [18]). Our approach to computing an unfalsified model differs fundamentally from previous work. It is based on an idea pursued in the 2D continuous-time case in [22], and derived from the 1D Loewner framework, see [1,21] (and also [24,25] for analogous approaches to 1D identification based on the factorization of "energy" matrices). Namely, we use the data (2) to compute state trajectories corresponding to it, and subsequently we compute a state representation for the data and such state trajectories by solving linear equations in the unknown matrices A, B, C, D. From a methodological point of view our two-stage procedure is thus analogous to 2D subspace identification algorithms: first compute state trajectories compatible with the data, then fit a state-space model to the input-output and the computed state trajectories. However, our approach is essentially an application of duality, rather than of shift-invariance as in subspace identification: in our procedure, state trajectories are computed by factorizing constant matrices built from the data and its dual, rather than Hankel-type matrices consisting of shifts of the data in the two independent variables. This aspect makes our method conceptually simple, and it helps to reduce the amount of bookkeeping necessary for calculations. Moreover, approaching the problem from a frequency-domain and a duality point of view allows us to avoid imposing restrictive assumptions on the data-generating system, such as the separability-in-the-denominator property required by earlier work on 2D subspace identification such as [2,14,15,19].
We note that the recent publication [18] provides a subspace algorithm for the identification of general, i.e. not necessarily separable-in-the-denominator, Roesser models.
The paper is structured as follows. In section 2 we gather the necessary background material; this section contains several original results in the theory of 2D bilinear and quadratic difference forms, a tool extensively used in our approach. In section 3 we illustrate some original results on duality of Roesser models, including a "pairing" result crucial for our identification procedure. In section 4 we introduce the data matrix and the 2D Stein matrix equation, and in section 5 we show how to compute state trajectories from the data. In section 6 we illustrate our method for identifying Roesser models. Section 7 contains some concluding remarks.
Notation We denote by R^{m×n} (respectively C^{m×n}) the set of all m×n matrices with entries in R (respectively C). C^{•×n} denotes the set of matrices with n columns and an unspecified (finite) number of rows. Given A ∈ C^{m×n}, we denote by A^* its conjugate transpose. If A has full column rank, we denote by A^† a left inverse of A. If A, B are matrices with the same number of columns (or linear maps acting on the same space), col(A, B) is the matrix (map) obtained by stacking A on top of B.
C[z_1^{-1}, z_1, z_2^{-1}, z_2] is the ring of bivariate Laurent polynomials in the indeterminates z_1, z_2 with complex coefficients, and C^{m×n}[z_1^{-1}, z_1, z_2^{-1}, z_2] that of m × n bivariate Laurent polynomial matrices. The ring of m × n Laurent polynomial matrices with real coefficients in the indeterminates ζ_1, ζ_2, η_1, η_2 is defined analogously. We denote by (C^w)^{Z^2} the set consisting of all sequences w : Z^2 → C^w taking their values in C^w, and by ℓ_2(Z^2, C^w) the set of square-summable sequences in (C^w)^{Z^2}. The notation (·, ·) appended to a symbol (e.g. u(·, ·)) is used to denote a trajectory u : Z^2 → C^u. With slight abuse of notation, given λ ∈ C we denote by exp_λ the geometric series whose value at k ∈ Z is exp_λ(k) := λ^k.
We denote by vec the linear map stacking the columns of a matrix into a single vector, and by mat its inverse. The shift operator σ_1 acts on a trajectory w : Z^2 → C^w by (σ_1 w)(k_1, k_2) := w(k_1 + 1, k_2); we define σ_2 analogously. The reverse shift operators σ_1^{-1}, σ_2^{-1} act by decrementing the corresponding argument. A subset B of (C^w)^{Z^2} is called a 2D linear behavior if it is the solution set of a system of linear, constant-coefficient partial difference equations in two independent variables, i.e.
where R is a Laurent polynomial matrix in the indeterminates z_i, i = 1, 2. We call (3) a kernel representation of B, and we denote the set consisting of all 2D linear behaviors with w external variables by L^w_2. In this paper we follow the behavioral and algebraic terminology of [27,28] (see also [29] for refinements of such behavioral concepts and for the corresponding algebraic characterizations). A g × w Laurent polynomial matrix R in the indeterminates z_1, z_2 is called left prime if whenever the equality R = D R' holds for some Laurent polynomial matrices D and R', the matrix D is unimodular. The property of right primeness is defined analogously. The following is a characterization of controllable behaviors (see p. 414 of [28] for the definition).
It follows from Prop. 2 of [28] that a controllable behavior is uniquely identified by its transfer function (see p. 415 of [28] for the definition). Controllability is crucial for the definition of the dual of an L^w_2-behavior, as defined in the next section.

Dual discrete 2D behaviors
Let B ∈ L^w_2 be a controllable 2D system, and let J = J^T ∈ R^{w×w} satisfy J^2 = I_w. Define the J-dual of B as the set B^{⊥J} consisting of all trajectories w' : Z^2 → R^w such that for all w ∈ B ∩ ℓ_2(Z^2, R^w) and w' ∈ B^{⊥J} ∩ ℓ_2(Z^2, R^w) the equality Σ_{(k_1,k_2)∈Z^2} w'(k_1, k_2)^T J w(k_1, k_2) = 0 holds. The definition of orthogonality on the complexification of B is obtained by substituting w'(k_1, k_2)^T with w'(k_1, k_2)^* in the previous equation. B^{⊥J} is also a controllable 2D linear behavior, and its image and kernel representations are related to those of B as stated in the following result.
Proof We first prove the first equality in (5). Without loss of generality (if necessary, changing the latent variable of the image representation by postmultiplying M by a suitable unimodular matrix) we can assume that the image representation is in a suitable normalized form. Now consider the following chain of equalities: where in the last equality we have used the image representation induced by M(z_1, z_2). Now define k'_j := k_j + i_j, j = 1, 2, and observe that k'_j → ±∞ if and only if k_j → ±∞, j = 1, 2.

With these substitutions, we can write the last displayed expression as In turn the last expression can be rewritten as Now use the arbitrariness of the latent variable trajectory to conclude that the first equality in (5) holds. The second equality follows from RM = 0, J^2 = I_w, and standard results in behavioral system theory (see [27]).
In the rest of this paper we assume that m = p, and we denote by J the matrix defined in (6), i.e. J = [0 I_m; I_m 0]. We also assume that the external variable of B is partitioned as w = col(u, y), with u an m-dimensional input variable and y a p-dimensional output variable. It is a matter of straightforward verification to check that Prop. 2 implies the following result.
Corollary 1 Assume that B ∈ L^w_2 is controllable, and let J = J^T be such that J^2 = I_w. Denote the transfer function of B by H(z_1, z_2) and the transfer function of B^{⊥J} accordingly.

We conclude this section by showing how to compute trajectories of the dual system from those of the primal one with the mirroring technique (see also [6,7,26] for the use of such idea in the 1D case). Given the importance of dual trajectories in our identification procedure, such technique is crucial to our approach.
Proposition 3 Let B ∈ L^w_2 be controllable, and J = J^T ∈ R^{w×w} be such that J^2 = I_w. Let w̄ ∈ C^w, and denote by w(·, ·) ∈ B a trajectory whose value at (k_1, k_2) is w̄ λ_1^{k_1} λ_2^{k_2}. This concludes the proof.

Example 1
We show how the result of Prop. 3 can be constructively used for computing trajectories of the dual system from trajectories of the primal one. Consider the case J = [0 I_m; I_m 0]. Partition w̄ as w̄ =: col(ū, ȳ) as in (2), and let w(k_1, k_2) = col(ū, ȳ) λ_1^{k_1} λ_2^{k_2} for some ū, ȳ ∈ C^m and λ_1, λ_2 ∈ C. It follows from Prop. 3 that the mirrored trajectory w' belongs to the dual system. Thus in the case J = [0 I_m; I_m 0], dual trajectories can be computed from primal ones by inspection.
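The "by inspection" computation of Example 1 can be sketched numerically. The fragment below is a hypothetical reconstruction (the explicit formulas of Prop. 3 are not reproduced above): we assume, as in the 1D mirroring technique, that the dual direction is obtained by multiplying the conjugated primal direction by J (which swaps the input and output blocks), and that the dual frequencies are the reciprocals of the conjugated primal ones. The function name and the mirroring rule itself are illustrative assumptions, not the paper's stated formulas.

```python
import numpy as np

def mirror_dual(w_bar, lam1, lam2, m, p):
    """Hypothetical sketch of the mirroring rule of Prop. 3 / Example 1,
    for J = [[0, I_m], [I_m, 0]] (so m == p is required)."""
    assert m == p, "this choice of J requires m = p"
    J = np.block([[np.zeros((m, m)), np.eye(m)],
                  [np.eye(m), np.zeros((m, m))]])
    w_dual = J @ np.conj(w_bar)                   # swap u/y blocks and conjugate
    mu1, mu2 = 1 / np.conj(lam1), 1 / np.conj(lam2)  # mirrored frequencies (assumed)
    return w_dual, mu1, mu2
```

The candidate dual trajectory is then the vector-geometric sequence w_dual µ_1^{k_1} µ_2^{k_2}.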

2D bilinear-and quadratic difference forms
Bilinear and quadratic difference forms (BdF and QdF in the following) have been used in the analysis of stability of 2D discrete systems in [9,10,11,12,23]. We briefly review them here, and introduce some novel results.
To simplify the notation for elements of the ring we define the multi-indices j := (j_1, j_2), i := (i_1, i_2) ∈ Z^2, and the notation ζ := (ζ_1, ζ_2) and η := (η_1, η_2). Every element of the ring can be written in a form in which only finitely many of the coefficient matrices in the expression for Φ(ζ, η) are nonzero. We associate with Φ(ζ, η) a bilinear difference form L_Φ. Without loss of generality, when dealing with QdFs we can assume that Φ(ζ, η) is symmetric, i.e. Φ(ζ, η) = Φ(η, ζ)^*. The increment of a BdF L_Φ in the first direction is defined for all (k_1, k_2) ∈ Z^2; the increment of L_Φ in the second direction is defined analogously. Such increments are themselves BdFs; it follows straightforwardly from the definition that their four-variable representations, denoted with the same symbols with slight abuse of notation, are obtained from Φ(ζ, η). The discrete divergence of a v-BdF L_Φ := col(L_{Φ_1}, L_{Φ_2}) is the BdF defined accordingly, and from the definition the four-variable polynomial representation of the discrete divergence of L_Φ follows. Define the map ∂. It is straightforward to verify that if (9) holds, then ∂(Φ(ζ, η)) = 0_{w_1×w_2}. The following result states that the converse also holds.
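The elided four-variable representations can be written out explicitly. The following is a plausible reconstruction, assuming the standard discrete QdF calculus (the exact formulas of the source are not reproduced above, so signs and normalizations here are our assumption):

```latex
\begin{aligned}
(\nabla_1 L_\Phi)(w_1,w_2)(k_1,k_2) &:= L_\Phi(\sigma_1 w_1,\sigma_1 w_2)(k_1,k_2) - L_\Phi(w_1,w_2)(k_1,k_2),\\
\nabla_1 \Phi(\zeta,\eta) &= \zeta_1\eta_1\,\Phi(\zeta,\eta) - \Phi(\zeta,\eta), \qquad
\nabla_2 \Phi(\zeta,\eta) = \zeta_2\eta_2\,\Phi(\zeta,\eta) - \Phi(\zeta,\eta),\\
\operatorname{div} L_{\Phi} &:= \nabla_1 L_{\Phi_1} + \nabla_2 L_{\Phi_2}
\;\longleftrightarrow\;
(\zeta_1\eta_1 - 1)\,\Phi_1(\zeta,\eta) + (\zeta_2\eta_2 - 1)\,\Phi_2(\zeta,\eta).
\end{aligned}
```

Under this reconstruction, the vanishing of the divergence corresponds to a polynomial identity in the four indeterminates, which is what the map ∂ and condition (9) encode.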
The following statements are equivalent:

The following result states that there are nonzero v-BdFs whose divergence is zero; this is well-known in the continuous case, even considering non-constant fields.
The following three statements are equivalent: Proof The equivalence of statements 1) and 2) follows from the relation (9) between the discrete divergence and its four-variable polynomial representation. The implication 3) =⇒ 2) follows from straightforward verification. The proof of the remaining implication follows readily from such equality.

Dual Roesser representations and pairing
To the best of the author's knowledge, and despite the popularity of Roesser models, the questions of what the dual of a system admitting such a representation is, and of whether such a dual itself admits a Roesser representation, have not been investigated before. In this section we prove some statements related to such issues, which are moreover crucial for our system identification approach. Foremost among them is a "pairing" result, showing how bilinear forms on the external variables of the primal and the dual system are related to bilinear forms on the state variables of the two systems.
We associate to (1) a "backward" Roesser model, described by the equations (10). We call (10) a representation of the dual system to (1), for reasons given in Props. 6 and 7 below.
In the following result we show that the transfer function of the dual system (10) is related to that of (1) by a straightforward transformation.

Proposition 6 Define
then the transfer function of the dual system (10) is Proof Taking the two-variable z-transform of (10) yields the following expression for the transfer function from u' to y': from which the claim follows.
Define J by (6); the result of Prop. 6, the fact that the transfer function uniquely defines a controllable system, and Cor. 1 imply that (10) is a state representation of the J-dual behavior of the external behavior of (1). In the following we find it easier to use a different hybrid representation of such dual, namely that obtained from (10) by replacing the latent variables with their reverse shifts x'_i ← σ_i^{-1} x'_i, i = 1, 2; the equations (10) can then be rewritten as (11). In the following pairing result we describe a relation between the bilinear form induced by −J on B × B^{⊥J} and a discrete divergence of the field generated by the inner product of the state variable x of (1) and the latent variable x' of (11).
Proposition 7 Consider the latent variable representations (1) of B and (11) of B^{⊥J}. Then for all col(u, y, x) satisfying (1) and col(u', y', x') satisfying (11) the following equation holds: Proof It is a matter of straightforward verification to check that the following chain of equalities holds: The claim follows.
We can reformulate the result of Prop. 7 by saying that the bilinear form induced by −J on the external trajectories of the primal and the dual system is the divergence of the field induced by the inner products of the first and second state components of the primal and the dual system. Such relation is of paramount importance for our system identification procedure.

The data matrix and the 2D Stein matrix equation
In the following we assume that, besides the primal data (2), also a set of dual trajectories is available, whose value at (k_1, k_2) ∈ Z^2 is given in (14), where w'^i_{j_1,j_2} = col(u'^i_{j_1,j_2}, y'^i_{j_1,j_2}) ∈ C^{p+m}, j_ℓ = 0, . . . , L'^i_ℓ, ℓ = 1, 2, and µ_{j,i} ∈ C, i = 1, . . . , N, j = 1, 2. Recall from Prop. 3 that such trajectories can also be computed from trajectories obtained from experiments conducted on the primal system; consequently such assumption is not restrictive for practical purposes.
In the rest of the paper we only consider the case of primal and dual data with multiplicity one, i.e. L^i_j = 1 = L'^i_j, j = 1, 2, i = 1, . . . , N in (2), and vector-geometric trajectories w'_i exp_{µ_{1,i}} exp_{µ_{2,i}}, w_i exp_{λ_{1,i}} exp_{λ_{2,i}}, i = 1, . . . , N; in such case the value of the primal and dual trajectories at (k_1, k_2) is In order to compute real models (1), (10), we will also assume that the data sets D := {w_i}_{i=1,...,N} and D' := {w'_i}_{i=1,...,N} are closed under conjugation, i.e. that the conjugate of every trajectory in the set also belongs to it. Such assumption can be made without loss of generality, since it is always possible to close D and D' by adding conjugate trajectories, if necessary.
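Closing the data sets under conjugation is a mechanical preprocessing step. A minimal sketch follows; the representation of each vector-geometric trajectory as a (direction, frequency, frequency) triple is our own convention for illustration, not notation from the paper.

```python
import numpy as np

def close_under_conjugation(data, tol=1e-12):
    """Given vector-geometric data as (w_bar, lam1, lam2) triples, append the
    conjugate of any trajectory whose conjugate is not already in the set."""
    closed = list(data)
    for (w, l1, l2) in data:
        conj_present = any(
            np.allclose(np.conj(w), w2)
            and abs(np.conj(l1) - m1) < tol
            and abs(np.conj(l2) - m2) < tol
            for (w2, m1, m2) in closed)
        if not conj_present:
            closed.append((np.conj(w), np.conj(l1), np.conj(l2)))
    return closed
```

Real trajectories are their own conjugates and are therefore not duplicated.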
The following result (that uses the definition of observability on p. 6 of [30]) implies that if the external trajectories are vector-geometric, also the corresponding state trajectories of the primal system are vector-geometric.
Proposition 8 Let col(x, u, y) satisfy (1). Assume that u = ū exp_{λ_1} exp_{λ_2} and y = ȳ exp_{λ_1} exp_{λ_2} for some ū ∈ C^m and ȳ ∈ C^p, respectively, and some λ_i ∈ C, i = 1, 2. Assume that the representation (1) is observable. Then there exists a unique x̄ ∈ C^{n_1+n_2} such that the state trajectory corresponding to u and y is x = x̄ exp_{λ_1} exp_{λ_2}.
Proof The fact that the state trajectory x is vector-geometric follows in a straightforward way from the second equation in (1) and the fact that u and y are vector-geometric and associated with the same 2D frequency (λ_1, λ_2). The uniqueness of the state trajectory, and consequently of the state direction x̄, follows from the observability of (1).
A result analogous to Prop. 8 also holds true for the dual system represented by (10) and equivalently by (11); we will not state it explicitly here.
We associate to the dual and primal vector-geometric sequences w'_i(·, ·), w_j(·, ·), i, j = 1, . . . , N the data matrix D defined by: In the following result we make explicit the connection between D and the state trajectories corresponding to the data w'_i(·, ·), w_j(·, ·), i, j = 1, . . . , N.
Proposition 9 Let (1) be a state representation of B, and let (11) be a representation of its dual; assume that such representations are observable. Denote the unique state trajectories corresponding to w_i and w'_j by x_i and x'_j, respectively, and the associated state directions by x̄_i =: col(x̄_{i,1}, x̄_{i,2}) and x̄'_j =: col(x̄'_{j,1}, x̄'_{j,2}), i, j = 1, . . . , N. Define for k = 1, 2 the matrices The following equation holds: Proof The claim follows in a straightforward way by applying the pairing equation (12) to the vector-geometric data w_i, w'_j and the associated state trajectories x_i, x'_j.
By analogy with the classical matrix equation X − M X Λ = Q, where M, Λ and Q are given and the unknown is the matrix X, we call (17) the 2D Stein matrix equation, and for future reference we rewrite it as (18), where S_k, k = 1, 2, are the unknowns. Such equation is of paramount importance in our approach to 2D system identification: from it we compute admissible state trajectories associated with the input-output data, and also the state equations corresponding to such variables. The solutions of the linear matrix equation (18) can be parametrized in a straightforward way, as we now show. We first show how to compute one pair of solutions, and then consider the homogeneous matrix equation associated with (18).
Using arguments analogous to those available in the theory of matrix Sylvester equations (see e.g. [13]) it can be shown that if λ^i_k µ^j_k ≠ 1, i, j = 1, . . . , N, k = 1, 2, then for every choice of the right-hand side Q there exists a unique solution X_k to the k-th 1D matrix Stein equation (19), k = 1, 2. If Q = Q_1 + Q_2 and if S_k solves the 1D Stein equation (19) with right-hand side Q_k, k = 1, 2, then summing up the two 1D Stein equations it follows that the pair (S_1, S_2) solves the 2D Stein equation (18). From such discussion it follows that under the assumption λ^i_k µ^j_k ≠ 1, i, j = 1, . . . , N, k = 1, 2, one can compute a particular solution of the 2D matrix Stein equation directly from two 1D Stein equations. In many cases (e.g. in experimental settings where the inputs can be arbitrarily chosen and the system can be started at rest), the frequencies λ^i_k, µ^j_k can be freely chosen in the experiment, and the assumption λ^i_k µ^j_k ≠ 1, i, j = 1, . . . , N, k = 1, 2 is not particularly restrictive. We will assume it is valid for the rest of the paper.
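The construction of a particular solution from two 1D Stein equations can be made concrete. Since equations (18) and (19) are not displayed above, the sketch below assumes a plausible reconstruction of (18) as S_1 − M_1 S_1 Λ_1 + S_2 − M_2 S_2 Λ_2 = Q, consistent with the statement that summing the two 1D equations yields the 2D one; the even split Q/2 + Q/2 is one admissible choice of Q_1 + Q_2 = Q.

```python
import numpy as np

def stein_1d(M, Lam, Q):
    """Solve X - M X Lam = Q by vectorization; uniquely solvable when no
    product of an eigenvalue of M with one of Lam equals 1."""
    N = Q.shape[0]
    # vec(M X Lam) = (Lam^T kron M) vec(X), with column-major vec
    A = np.eye(N * N) - np.kron(Lam.T, M)
    x = np.linalg.solve(A, Q.flatten(order='F'))
    return x.reshape((N, N), order='F')

def stein_2d_particular(M1, Lam1, M2, Lam2, Q):
    """A particular solution pair of the (reconstructed) 2D Stein equation
    S1 - M1 S1 Lam1 + S2 - M2 S2 Lam2 = Q, via two 1D Stein equations."""
    S1 = stein_1d(M1, Lam1, Q / 2)
    S2 = stein_1d(M2, Lam2, Q / 2)
    return S1, S2
```

Summing the two 1D residual identities shows that the returned pair satisfies the assumed 2D equation exactly.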
We now study the homogeneous 2D Stein matrix equation (20). A parametrization of all solution pairs (S_1, S_2) to such equation is straightforward to derive from the following result, whose proof is omitted.
2. The following equality holds for k, j = 1, . . . , N;
3. The following equality holds for k, j = 1, . . . , N.
On the basis of the result of Prop. 10, we characterize all solutions to (18) as follows.

Computation of state trajectories
We can restate the result of Prop. 9 by saying that the matrices S_k, k = 1, 2, defined in (16) are solutions of the 2D Stein matrix equation (23), with M_k, Λ_k ∈ C^{N×N}, k = 1, 2, defined by (16), D ∈ C^{N×N} by (15), and S_k ∈ C^{N×N}, k = 1, 2, being the unknown matrices. Based on such observation, for pedagogical reasons we propose the following preliminary version of a 2D identification procedure, which we refine in the rest of this section and in sect. 6.

Algorithm 1
Input: Primal and dual data as in (14).
Output: An unfalsified Roesser model for the data.
Step 1: Construct the matrix D defined by (15) from the data (14).
Step 2: Compute a pair (S 1 , S 2 ) of solutions to the matrix equation (23).
Step 3: Compute rank-revealing factorizations of the solutions S_1, S_2.
Step 4: Define (24) and solve for A_{ij}, B_i, C_i, i, j = 1, 2, and D in (25).
Step 5: Return A, B, C, D.
We now discuss several issues related to such procedure.
Identifiability: Recall from Prop. 5 that the matrices S_k, k = 1, 2, defined in (16) are not the only solutions of the 2D matrix Stein equation (23): if S'_k, k = 1, 2, satisfy the homogeneous 2D Stein matrix equation (20), then also S_k + S'_k, k = 1, 2, are solutions of (23). The converse also holds: since the matrix equation (23) is linear, every solution pair to it can be written as S_k + S'_k for some pair (S'_1, S'_2) solving (20) and S_k defined in (16). Such non-uniqueness of the solutions to (23) is an unavoidable consequence of the non-invertibility of the divergence operator, or equivalently of the existence of nonzero solution pairs to (20). Consequently, identifiability is not a well-posed question in the identification approach sketched in Algorithm 1.
Complexity: Another issue arising from the non-uniqueness of the solutions to (23) is the fact that Algorithm 1 may compute models having larger state dimension than that of the data-generating system. Note that the sum of the ranks of S_k, k = 1, 2, coming from a generic solution pair computed in Step 2 will generically be N + N = 2N, and consequently presumably higher than the minimal state dimension of a Roesser model of the data-generating system. Thus generically the Roesser model computed in Step 4 would be high-dimensional and impractical for use e.g. in simulation, control, and so forth. In section 5.1 we illustrate a procedure using rank minimization to compute a minimal-complexity model; see also Remark 1, where an alternative approach using Gröbner basis computation is sketched.
Computation of A, B, C, D: Finally, sufficient conditions must be established guaranteeing the existence of solutions A, B, C, D to the system of linear equations (25).
The rest of this section is devoted to modifying Algorithm 1 to address the complexity issue; the solvability of the system of linear equations (25) is considered in section 6. We will show that, with small modifications, our data-matrix-based approach to Roesser model identification offers the opportunity to address in a conceptually simple way the problem of deriving a minimal-complexity unfalsified Roesser model.

Computation of minimal complexity state trajectories
We define the complexity of a model (1) as the dimension n_1 + n_2 of the state variable. Given a controllable, quarter-plane causal behavior B ∈ L^w_2, we define its minimal complexity to be the minimal dimension of the state variable among all possible Roesser representations of B. Thus every B ∈ L^w_2 can be mapped to a point lying on a line n_1 + n_2 = c in N × N, where c is its complexity, see Fig. 1. Note that the complexity of a given model obtained in our framework is related to the total rank rank(S_1) + rank(S_2) of the particular solution to (23) chosen to perform the rank-revealing factorization in Step 3 of Algorithm 1. Assuming the data to be sufficiently informative about the system dynamics, such total rank ranges between the complexity n of the actual data-generating system and 2N, see the previous discussion on complexity. In the light of the characterization (22) in Cor. 2, the problem of computing a minimal total rank solution to (23) can be formulated as a rank minimization problem (see [5]), as we presently show. In order to do this, we need to state a preliminary result.

Proposition 11 The subset C of R^{2N×2N} defined below is convex.
Proof Let α ∈ R, 0 ≤ α ≤ 1; the claim follows in a straightforward way from the equality which we now prove. The following chain of equalities is a direct consequence of the definition of f and the linearity of the mat and vec maps: as was to be proved. From such equality it follows that the set C is convex, which proves the claim.
It follows from Prop. 11 that the optimization problem (26) is in the standard form of a rank minimization problem min rank(X) subject to X ∈ C, where C is a convex set, and can be solved by several algorithms implemented on standard platforms. It goes beyond the scope of the present paper to enter into details about which algorithms to use in order to solve (26), or to discuss important issues such as their numerical accuracy and complexity; the interested reader is referred to the growing literature on the subject. We can now refine Algorithm 1 as follows.
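One standard way to attack a rank minimization problem such as (26) is the nuclear-norm surrogate. The heuristic sketched below alternates singular-value thresholding with projection onto the affine solution set of the 2D Stein equation, here reconstructed as S_1 − M_1 S_1 Λ_1 + S_2 − M_2 S_2 Λ_2 = Q since (18)/(23) are not displayed above; it is an illustrative stand-in for the solvers alluded to in the text, not the method of [5] or [20].

```python
import numpy as np

def svt(S, tau):
    """Singular-value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def min_rank_stein_heuristic(M1, L1, M2, L2, Q, tau=0.1, iters=200):
    """Heuristic low-total-rank solution pair of the (reconstructed) 2D Stein
    equation S1 - M1 S1 L1 + S2 - M2 S2 L2 = Q, via alternating SVT and
    projection onto the affine constraint set."""
    N = Q.shape[0]
    I = np.eye(N * N)
    # Affine constraint A @ [vec S1; vec S2] = vec Q (column-major vec)
    A = np.hstack([I - np.kron(L1.T, M1), I - np.kron(L2.T, M2)])
    b = Q.flatten(order='F')
    Apinv = np.linalg.pinv(A)
    s = Apinv @ b                          # start from the least-norm solution
    for _ in range(iters):
        S1 = svt(s[:N * N].reshape((N, N), order='F'), tau)
        S2 = svt(s[N * N:].reshape((N, N), order='F'), tau)
        s = np.concatenate([S1.flatten(order='F'), S2.flatten(order='F')])
        s = s - Apinv @ (A @ s - b)        # project back onto the affine set
    S1 = s[:N * N].reshape((N, N), order='F')
    S2 = s[N * N:].reshape((N, N), order='F')
    return S1, S2
```

The final projection guarantees feasibility; the rank reduction achieved depends on the threshold tau and is heuristic, as with all nuclear-norm surrogates.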

Algorithm 2
Input: Primal and dual data as in (14).
Output: A minimal complexity unfalsified Roesser model for the data.
Step 1: Construct the matrix D defined by (15) from the data (14).
Step 2: Define C as in Prop. 11, and solve the optimization problem (26).
Step 3: Compute rank-revealing factorizations of the solutions S_1, S_2.
Step 4: Define (27) and solve for A_{ij}, B_i, C_i, i, j = 1, 2, and D in (28).
Step 5: Return A, B, C, D.

Remark 1
In [20] a Gröbner basis approach to solving the rank minimization problem (26) is illustrated. Such approach uses a parametrization similar to that of Prop. 10 in order to transform the problem of finding fixed-rank matrices solving the 2D Sylvester equation (the continuous counterpart of the Stein equation) into a polynomial algebraic problem. In order to compute a minimal complexity Roesser model for the data, beginning with c = 1 we check whether there exist solution pairs (S_1, S_2) to (23) such that rank(S_k) = n_k, k = 1, 2, and n_1 + n_2 = c. If no such solution pair exists, we increment c by 1 and repeat the check. Note that working under the assumption λ^i_k µ^j_k ≠ 1, i, j = 1, . . . , N, k = 1, 2, equation (23) is solved by (S_1, S_2), where S_i, i = 1, 2, are the unique solutions to (19). Consequently, such search ends after at most N steps.
The largest part of the computational effort of such approach is due to the complexity of Gröbner basis calculations, which becomes especially heavy for problems involving more than ten data trajectories. However, such approach has the advantage that a parametrization of all solutions to (23) with a given total rank is obtained; a numerical approach based on rank-minimization algorithms only produces one solution among many. Consequently, such parametrization opens up the possibility of exploring the space of unfalsified models of given complexity, with potential application to 2D data-driven model order reduction (see [24,25] for the 1D case). Such procedure also shares with the one sketched in Algorithm 2 a conceptually appealing simplicity that avoids some difficulties inherent in other approaches based on shift-invariance.

Identification of Roesser models
To set up the system of linear equations (25), (28) we resort once more to the 2D Stein equation (23). Define, analogously with (24), (27), the input-output matrices of the dual data by (29), and assume that rank-revealing factorizations S_k = F'^*_k F_k, k = 1, 2, have been computed. Now rewrite (23) as (30). Denote the j-th column of F_k by f_{k,j}, j = 1, . . . , N, k = 1, 2. Observe that the columns of the matrix col(F_1, F_2) are the values at (0, 0) of the 2D-geometric sequences col(f_{1,j} exp_{λ_1} exp_{λ_2}, f_{2,j} exp_{λ_1} exp_{λ_2}), and those of the corresponding shifted matrices are the values at (0, 0) of the shifted 2D-geometric sequences col(σ_1 (f_{1,j} exp_{λ_1} exp_{λ_2}), σ_2 (f_{2,j} exp_{λ_1} exp_{λ_2})).
The idea underlying our computation of an unfalsified model is to left-invert the matrix of state and input directions appearing in (30), so as to obtain directly from it an unfalsified Roesser model.
The following result gives sufficient conditions under which such a left inverse exists, so that a Roesser model can be computed from (30).
Proposition 12 Let S_1, S_2 ∈ R^{N×N} solve (23), and let S_i = F'^*_i F_i, i = 1, 2, be rank-revealing factorizations. Assume that conditions 1)-3) below hold. Then there exist a left inverse K and a matrix G ∈ C^{p×N} such that K U'^* = 0_{(n_1+n_2)×p} and G [Y'^* U'^*] = [0_{p×m} I_p].
Let K, G satisfy (31), and define Then A, B, C, D define an unfalsified Roesser model for the data.
Proof From assumption 1) it follows that M^* admits a left inverse. From assumption 2) conclude that such a left inverse can be chosen satisfying the first equation in (31). Now multiply both sides of (30) by such a left inverse to conclude that (33) holds, where A and B are defined by the first two equations in (32). Now use assumption 3) to conclude that a matrix G exists such that the second equation in (31) holds. Multiply both sides of (30) by such G and rearrange the terms, also using equation (33), to conclude that (34) holds, where C and D are defined by the last two equations in (32). The fact that A, B, C and D define an unfalsified model for the primal data follows from (33) and (34), defining col(x_{1,j}, x_{2,j}) := col(f_{1,j}, f_{2,j}) exp_{λ_{1,j}} exp_{λ_{2,j}} to be the state trajectory corresponding to the j-th input-output data, j = 1, . . . , N. This concludes the proof.
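The least-squares computation of A, B, C, D from vector-geometric directions can be illustrated on synthetic data. The sketch below assumes the standard Roesser update equations for (1): σ_1 x_1 = A_{11} x_1 + A_{12} x_2 + B_1 u, σ_2 x_2 = A_{21} x_1 + A_{22} x_2 + B_2 u, y = C_1 x_1 + C_2 x_2 + D u; the frequencies and input directions are generated at random, the state directions are obtained as in Prop. 8, and the block matrix [[A, B], [C, D]] is recovered by left inversion in the spirit of Prop. 12.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, m, p, N = 2, 2, 1, 1, 8
n = n1 + n2

# A "true" Roesser model in the assumed form of (1)
A = rng.uniform(-0.3, 0.3, (n, n))
B = rng.uniform(-1, 1, (n, m))
C = rng.uniform(-1, 1, (p, n))
D = rng.uniform(-1, 1, (p, m))

# Vector-geometric experiments: for frequencies (l1, l2) and input direction u,
# the state direction solves (diag(l1 I_n1, l2 I_n2) - A) f = B u (cf. Prop. 8).
lam = rng.uniform(1.5, 2.5, (2, N))        # |lam| > ||A||, so Lam - A invertible
U = rng.uniform(-1, 1, (m, N))
F = np.zeros((n, N))
for j in range(N):
    Lam = np.diag(np.concatenate([lam[0, j] * np.ones(n1),
                                  lam[1, j] * np.ones(n2)]))
    F[:, j] = np.linalg.solve(Lam - A, B @ U[:, j])
Y = C @ F + D @ U

# Recover [[A, B], [C, D]] by left inversion (least squares):
P = np.vstack([F, U])                                  # state and input directions
R = np.vstack([lam[0] * F[:n1], lam[1] * F[n1:], Y])   # shifted states and outputs
X = R @ np.linalg.pinv(P)
```

With N at least n + m generically placed frequency pairs, P has full row rank and the recovered X reproduces the data exactly, i.e. the computed model is unfalsified.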

Remark 2
The sufficient conditions stated in Prop. 12 fall short of being completely satisfactory, since they involve the matrices arising from the factorizations of S_k, k = 1, 2, rather than the matrices S_k themselves or, better still, the input-output data itself. We also do not make any claim about the conservativeness of such sufficient conditions. Deriving tighter conditions expressed only in terms of the input-output data is a pressing issue for further research.

Conclusions
We have presented a novel approach to the identification of unfalsified discrete Roesser models from vector-geometric data, based on the idea of first computing state trajectories compatible with the given input-output trajectories, and then using such trajectories together with the data to compute the A, B, C, D matrices of the Roesser equations. Our procedure is based on new results concerning duality of such models (sect. 3) and on a parametrization of the solutions to the 2D Stein matrix equation (sect. 4). Such results lead to an algorithm for the computation of state trajectories (sect. 5), which can be refined in a straightforward way to one computing minimal-complexity trajectories (sect. 5.1). The 2D Stein equation is exploited once again in order to find an unfalsified model for the input-output data and the computed state trajectories (see sect. 6).
In several preceding publications concerned with linear time-invariant systems, the author and his collaborators put forward an "energy"-based approach to identification. Given the abundance of powerful methods to solve system identification problems for such class of systems, it can be argued that such results amounted to a relatively minor contribution. The author hopes that the application of such ideas to 2D systems as in [22] and in the present paper, may shift the balance of judgment more in his favour. The ideas underlying the approach presented here and in the germane publication [22] can be applied to a wider class of systems, and to more general classes of data than vector-geometric or exponential ones. Their potential lies in the generality of the idea of duality, which we believe can be used to overcome the difficulties (e.g. of bookkeeping) inherent in applying shift-invariance techniques to multidimensional systems, or to bypass them altogether for system classes where such property is not satisfied (e.g. 1D time-varying and nonlinear systems, for which promising results are being obtained as we write).
Limiting ourselves to the class of multidimensional systems considered in this paper, three areas of research are currently being investigated. Firstly, we need to generalize our approach to data other than vector-geometric, through the use of compact-support trajectories and infinite series involving their shifts (as in the 1D case, see [25]). Secondly, we want to identify classes of 2D systems other than the Roesser one that are amenable to identification with duality ideas. The main issue to be addressed is to determine which classes of systems admit a "pairing relation" with their dual that can be expressed as the divergence of a field involving the state trajectories, and if possible to characterize such property algebraically. Moreover, it is important to ascertain whether such divergence is amenable to a computationally straightforward treatment; for example, in Rem. 8, p. 2751 of [22], it has been shown that (continuous-time) Fornasini-Marchesini models admit a pairing relation, but one which does not seem conducive to directly deriving state trajectories from it. Finally, we plan to investigate whether duality relations can be used to compute minimal Roesser models from non-minimal ones, and in the problem of state-space realisation from transfer functions.