
System Identification in the Behavioral Setting

A Structured Low-Rank Approximation Approach

  • Conference paper
Latent Variable Analysis and Signal Separation (LVA/ICA 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9237)

Abstract

System identification is a fast-growing research area that encompasses a broad range of problems and solution methods. It is desirable to have a unifying setting and a few common principles that are sufficient to understand the currently existing identification methods. The behavioral approach to systems and control, put forward in the mid 80s, is such a unifying setting. Until recently, however, the behavioral approach lacked supporting numerical solution methods. In the last 10 years, the structured low-rank approximation setting has been used to fill this gap. In this paper, we summarize recent progress on methods for system identification in the behavioral setting and pose some open problems. First, we show that errors-in-variables and output error system identification problems are equivalent to Hankel structured low-rank approximation. Then, we outline three generic solution approaches: (1) methods based on local optimization, (2) methods based on convex relaxations, and (3) subspace methods. A specific example of a subspace identification method—data-driven impulse response computation—is presented in full detail. In order to achieve the desired unification, the classical ARMAX identification problem should also be formulated as a structured low-rank approximation problem. This is an outstanding open problem.


References

  1. Absil, P.A., Mahony, R., Sepulchre, R.: Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton (2008)

  2. Absil, P.A., Mahony, R., Sepulchre, R., Van Dooren, P.: A Grassmann-Rayleigh quotient iteration for computing invariant subspaces. SIAM Rev. 44(1), 57–73 (2002)

  3. Galassi, M., et al.: GNU Scientific Library Reference Manual. http://www.gnu.org/software/gsl/

  4. Golub, G., Pereyra, V.: Separable nonlinear least squares: the variable projection method and its applications. Inverse Probl. 19, 1–26 (2003)

  5. Liu, Z., Hansson, A., Vandenberghe, L.: Nuclear norm system identification with missing inputs and outputs. Syst. Control Lett. 62, 605–612 (2013)

  6. Markovsky, I.: How effective is the nuclear norm heuristic in solving data approximation problems? In: Proceedings of the 16th IFAC Symposium on System Identification, Brussels, pp. 316–321 (2012)

  7. Markovsky, I.: Low Rank Approximation: Algorithms, Implementation, Applications. Springer, London (2012)

  8. Markovsky, I.: A software package for system identification in the behavioral setting. Control Eng. Pract. 21, 1422–1436 (2013)

  9. Markovsky, I., Usevich, K.: Software for weighted structured low-rank approximation. J. Comput. Appl. Math. 256, 278–292 (2014)

  10. Markovsky, I., Van Huffel, S., Pintelon, R.: Block-Toeplitz/Hankel structured total least squares. SIAM J. Matrix Anal. Appl. 26(4), 1083–1099 (2005)

  11. Markovsky, I., Willems, J.C., Van Huffel, S., De Moor, B.: Exact and Approximate Modeling of Linear Systems: A Behavioral Approach. Monographs on Mathematical Modeling and Computation, vol. 11. SIAM, Philadelphia (2006)

  12. Marquardt, D.: An algorithm for least-squares estimation of nonlinear parameters. SIAM J. Appl. Math. 11, 431–441 (1963)

  13. Söderström, T.: Errors-in-variables methods in system identification. Automatica 43, 939–958 (2007)

  14. Willems, J.C.: From time series to linear system–Part II. Exact modelling. Automatica 22(6), 675–694 (1986)

  15. Willems, J.C.: From time series to linear system–Part I. Finite dimensional linear time invariant systems; Part II. Exact modelling; Part III. Approximate modelling. Automatica 22(5), 561–580 (1986); 22(6), 675–694 (1986); 23(1), 87–115 (1987)

  16. Willems, J.C., Rapisarda, P., Markovsky, I., De Moor, B.: A note on persistency of excitation. Syst. Control Lett. 54(4), 325–329 (2005)


Acknowledgements

The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC Grant agreement number 258581 “Structured low-rank approximation: Theory, algorithms, and applications”.

Author information

Correspondence to Ivan Markovsky.

A Subspace Method for Impulse Response Estimation

Let \(\mathscr {B}\) be a linear time-invariant system of order \(\mathtt {n}\) with lag \(\ell \) and let \(w = (u,y)\) be an input-output partitioning of the variables. In [16], it is shown that, under the following conditions,

  • the data \(w_\mathrm{d}\) is exact, i.e., \(w_\mathrm{d}\in \mathscr {B}\),

  • \(\mathscr {B}\) is controllable,

  • \(u_\mathrm{d}\) is persistently exciting, i.e., \(\mathscr {H}_{\mathtt {n}+\ell +1}(u_\mathrm{d})\) is full rank,

the Hankel matrix \(\mathscr {H}_{t}(w_\mathrm{d})\) with t block-rows, composed from \(w_\mathrm{d}\), spans the space of all t-samples long trajectories of the system \(\mathscr {B}\), i.e.,

$$\begin{aligned} \mathrm{image}\,\mathscr {H}_{t}(w_\mathrm{d}) = \mathscr {B}|_{t}, \end{aligned}$$

where \(\mathscr {B}|_{t}\) denotes the set of trajectories of \(\mathscr {B}\) restricted to t samples.

This implies that there exists a matrix G, such that

$$\begin{aligned} \mathscr {H}_{t}( {y_\mathrm{d}})G = H, \end{aligned}$$

where H is the vector of the first t samples of the impulse response of \(\mathscr {B}\). The problem of computing the impulse response H from the data \(w_\mathrm{d}\) thus reduces to that of finding a particular solution G.
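To make the construction above concrete, the following sketch (not part of the paper) builds a block-Hankel matrix \(\mathscr {H}_{t}(w_\mathrm{d})\) from a finite trajectory and tests persistency of excitation by a rank check. The function names block_hankel and is_persistently_exciting and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def block_hankel(w, t):
    """Block-Hankel matrix H_t(w) with t block-rows.

    w : array of shape (T, q) -- a T-samples-long trajectory with q variables per sample
    t : number of block-rows
    Returns an array of shape (t*q, T - t + 1).
    """
    T, q = w.shape
    cols = T - t + 1
    H = np.empty((t * q, cols))
    for i in range(t):
        # The i-th block-row contains the samples w(i), w(i+1), ..., w(i+cols-1).
        H[i * q:(i + 1) * q, :] = w[i:i + cols, :].T
    return H

def is_persistently_exciting(u, order):
    """u is persistently exciting of the given order iff H_order(u) has full row rank."""
    H = block_hankel(u, order)
    return np.linalg.matrix_rank(H) == H.shape[0]
```

Under the conditions listed above, the column span of block_hankel(w_d, t) is then the set of t-samples-long trajectories of \(\mathscr {B}\).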

Define \(U_\mathrm{p}\), \(U_\mathrm{f}\), \(Y_\mathrm{p}\), \(Y_\mathrm{f}\) as the past/future partitionings

$$ \begin{bmatrix} U_\mathrm{p} \\ U_\mathrm{f} \end{bmatrix} := \mathscr {H}_{\ell + t}(u_\mathrm{d}), \qquad \begin{bmatrix} Y_\mathrm{p} \\ Y_\mathrm{f} \end{bmatrix} := \mathscr {H}_{\ell + t}(y_\mathrm{d}), $$

where

$$ \mathrm{row\,dim}(U_\mathrm{p}) = \mathrm{row\,dim}(Y_\mathrm{p}) = \ell $$

and

$$ \mathrm{row\,dim}(U_\mathrm{f}) = \mathrm{row\,dim}(Y_\mathrm{f}) = t. $$

Then if \(w_\mathrm{d} = (u_\mathrm{d},{y_\mathrm{d}})\) is a trajectory of a controllable linear time-invariant system \(\mathscr {B}\) of order \(\mathtt {n}\) and lag \(\ell \) and if \(u_\mathrm{d}\) is persistently exciting of order \(t+\ell +\mathtt {n}\), the system of equations

$$ \begin{bmatrix} U_\mathrm{p} \\ Y_\mathrm{p} \\ U_\mathrm{f} \end{bmatrix} G = \begin{bmatrix} 0 \\ 0 \\ \begin{bmatrix} I \\ 0 \end{bmatrix} \end{bmatrix} \qquad (**) $$

(zero past input and output, i.e., zero initial conditions, and a pulse input in the future) is solvable for G, and for any particular solution G, the matrix \(Y_{\mathrm{f}} G\) contains the first t samples of the impulse response of \(\mathscr {B}\), i.e.,

$$\begin{aligned} Y_{\mathrm{f}} G = H. \end{aligned}$$

This gives Algorithm 1 for the computation of H.

Algorithm 1 (data-driven impulse response computation)

Input: trajectory \(w_\mathrm{d} = (u_\mathrm{d}, y_\mathrm{d})\), lag \(\ell \), and number of samples t.

1. Form \(U_\mathrm{p}\), \(Y_\mathrm{p}\), \(U_\mathrm{f}\), \(Y_\mathrm{f}\) from \(w_\mathrm{d}\) and compute a solution G of the system of equations (**).

2. Compute \(H = Y_{\mathrm{f}} G\).

Output: the first t samples H of the impulse response of \(\mathscr {B}\).
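The following sketch (an assumption-laden illustration, not the paper's software [8]) implements Algorithm 1 with NumPy, reusing the block_hankel helper defined earlier. A least-squares solve is used, so the same code also serves as the heuristic for noisy data discussed below.

```python
import numpy as np
# Assumes block_hankel from the earlier sketch is in scope.

def impulse_response(u_d, y_d, ell, t):
    """Data-driven computation of the first t impulse response samples (Algorithm 1).

    u_d, y_d : arrays of shape (T, m) and (T, p) -- an input/output trajectory of the system
    ell      : lag of the system
    t        : number of impulse response samples to compute
    Returns H of shape (t, p, m).
    """
    T, m = u_d.shape
    _, p = y_d.shape
    # Past/future partition of the Hankel matrices with ell + t block-rows.
    Hu = block_hankel(u_d, ell + t)
    Hy = block_hankel(y_d, ell + t)
    Up, Uf = Hu[:ell * m, :], Hu[ell * m:, :]
    Yp, Yf = Hy[:ell * p, :], Hy[ell * p:, :]
    # Right-hand side of (**): zero past (zero initial conditions) and a pulse input in the future.
    rhs = np.zeros((ell * m + ell * p + t * m, m))
    rhs[ell * (m + p):ell * (m + p) + m, :] = np.eye(m)
    lhs = np.vstack([Up, Yp, Uf])
    # Least-squares solution; for exact, persistently exciting data it solves (**) exactly.
    G, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    return (Yf @ G).reshape(t, p, m)
```

The reshape assumes the block-row ordering produced by block_hankel; for exact data, any exact solution of (**) yields the same product \(Y_{\mathrm{f}} G = H\).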

Algorithm 1 computes the first t samples of the impulse response; however, the persistency of excitation condition imposes a limitation on how large t can be. This limitation can be avoided by a modification of the algorithm: the impulse response is computed iteratively, L consecutive samples at a time, where L is a user-specified parameter that is small enough to allow the application of Algorithm 1. Then, provided the system is stable, monitoring the decay of H in the course of the computations gives a way to determine how many samples are needed to capture the transient behavior of the system.

In case of noisy data, the system of equations (**) on step 1 of Algorithm 1 has no exact solution. Using a least-squares approximate solution instead turns Algorithm 1 into a heuristic for approximate system identification. The algorithm is a heuristic because the maximum-likelihood estimator requires a structured total least squares solution of (**). The structured total least squares problem, however, is a nonlinear optimization problem [10].
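As a quick numerical sanity check of the sketches above (not from the paper), one can simulate a hypothetical second-order single-input single-output system, apply the data-driven computation to exact data, and compare the result with the impulse response obtained by direct simulation. The system coefficients, data length, and parameter values below are arbitrary illustrative choices.

```python
import numpy as np
# Assumes block_hankel and impulse_response from the sketches above are in scope.

def simulate(u):
    """Simulate y(k) = 1.5 y(k-1) - 0.7 y(k-2) + u(k-1) + 0.5 u(k-2) from zero initial conditions."""
    y = np.zeros_like(u)
    for k in range(len(u)):
        y[k] = (1.5 * (y[k - 1] if k >= 1 else 0.0)
                - 0.7 * (y[k - 2] if k >= 2 else 0.0)
                + 1.0 * (u[k - 1] if k >= 1 else 0.0)
                + 0.5 * (u[k - 2] if k >= 2 else 0.0))
    return y

rng = np.random.default_rng(0)
T, ell, t = 200, 2, 10                       # data length, lag, number of impulse response samples
u_d = rng.standard_normal((T, 1))            # random input, persistently exciting with probability one
y_d = simulate(u_d)                          # exact output data

H_est = impulse_response(u_d, y_d, ell, t)   # data-driven estimate
pulse = np.zeros((t, 1)); pulse[0] = 1.0
H_true = simulate(pulse)                     # true impulse response by direct simulation
print(np.allclose(H_est.ravel(), H_true.ravel(), atol=1e-6))  # expected: True
```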


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Markovsky, I. (2015). System Identification in the Behavioral Setting. In: Vincent, E., Yeredor, A., Koldovský, Z., Tichavský, P. (eds) Latent Variable Analysis and Signal Separation. LVA/ICA 2015. Lecture Notes in Computer Science, vol 9237. Springer, Cham. https://doi.org/10.1007/978-3-319-22482-4_27


  • DOI: https://doi.org/10.1007/978-3-319-22482-4_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-22481-7

  • Online ISBN: 978-3-319-22482-4

