
Nonlinear Modeling Problems

Low-Rank Approximation

Part of the book series: Communications and Control Engineering ((CCE))


Abstract

Applied to nonlinear modeling problems, the maximum-likelihood estimation principle leads to nonconvex optimization problems and yields inconsistent estimators in the errors-in-variables setting. This chapter presents a computationally cheap and statistically consistent estimation method based on a bias correction procedure, called adjusted least squares estimation. The adjusted least squares method is applied to curve fitting (static modeling) and system identification. Section 7.1 presents a general nonlinear data modeling framework. The model class consists of affine varieties with bounded complexity (dimension and degree), and the fitting criteria are algebraic and geometric. Section 7.2 shows that the underlying computational problem is polynomially structured low-rank approximation. In the algebraic fitting method, the approximating matrix is unstructured and the corresponding problem can be solved globally and efficiently. The geometric fitting method aims to solve the polynomially structured low-rank approximation problem, which is nonconvex and has no analytic solution. The equivalence of nonlinear data modeling and low-rank approximation unifies existing curve fitting methods, showing that algebraic fitting is a relaxation of geometric fitting, obtained by removing the structural constraint. Motivated by the fact that the algebraic fitting method is efficient but statistically inconsistent, Sect. 7.3.3 proposes a bias correction procedure. The resulting adjusted least squares method yields a consistent estimator. Simulation results show that it is effective even for small sample sizes. Section 7.4 considers the class of discrete-time, single-input, single-output nonlinear dynamical systems described by a polynomial difference equation, called polynomial time-invariant systems.
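The algebraic fitting method can be sketched in a few lines of Python. The example below (illustrative only, not code from the chapter) fits a conic section to planar data: stacking the monomials of each data point into a matrix turns an exact fit into a rank-deficiency condition, so the algebraic fit is the right singular vector associated with the smallest singular value — an unstructured low-rank approximation solved globally by the SVD. The unit-circle data set is an assumption chosen for illustration.

```python
import numpy as np

def algebraic_conic_fit(x, y):
    """Algebraic fit of a conic  a x^2 + b xy + c y^2 + d x + e y + f = 0.

    The monomials of the data points are stacked into a matrix; an exact
    fit makes this matrix rank deficient, so the algebraic fit is the
    right singular vector associated with its smallest singular value.
    """
    Phi = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(Phi)
    return Vt[-1]  # minimizes ||Phi theta|| subject to ||theta|| = 1

# noisy samples of the unit circle  x^2 + y^2 - 1 = 0
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
x = np.cos(t) + 0.01 * rng.standard_normal(t.size)
y = np.sin(t) + 0.01 * rng.standard_normal(t.size)

theta = algebraic_conic_fit(x, y)
theta = theta / theta[0]   # scale so the x^2 coefficient equals 1
print(np.round(theta, 2))  # approximately [1, 0, 1, 0, 0, -1]
```

The SVD step is what makes the algebraic method globally and efficiently solvable; the geometric fit, by contrast, would require minimizing the orthogonal distances to the conic, a nonconvex problem.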
The identification problem is twofold: (1) find the monomials appearing in the difference equation representation of the system (structure selection), and (2) estimate the coefficients of the equation (parameter estimation). Since the model representation is linear in the parameters, parameter estimation by minimization of the 2-norm of the equation error leads to unstructured low-rank approximation. However, knowledge of the model structure is required, and even with the correct model structure the method is statistically inconsistent. For the structure selection we propose 1-norm regularization, and for the bias correction we use the adjusted least squares method.
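The structure-selection idea can be sketched as follows: because the difference equation is linear in its parameters, stacking candidate monomials into a regressor matrix reduces equation-error minimization to linear least squares, and a 1-norm penalty drives the coefficients of inactive monomials toward zero. The system, its coefficients, the candidate monomial set, and the ISTA solver below are all assumptions made for illustration; the chapter does not prescribe this particular setup.

```python
import numpy as np

# Hypothetical polynomial time-invariant system (illustrative only):
#   y(t) = 0.5 y(t-1) + u(t-1) + 0.1 y(t-1)^2
rng = np.random.default_rng(1)
T = 200
u = rng.uniform(-0.5, 0.5, T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + u[t - 1] + 0.1 * y[t - 1] ** 2

# candidate monomials of degree <= 2 in y(t-1) and u(t-1)
Phi = np.column_stack(
    [y[:-1], u[:-1], y[:-1] ** 2, u[:-1] ** 2, y[:-1] * u[:-1]]
)
target = y[1:]

# ISTA: proximal gradient for  (1/2) ||Phi w - target||^2 + lam ||w||_1
lam = 0.01
step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz constant of the gradient
w = np.zeros(Phi.shape[1])
for _ in range(3000):
    g = w - step * Phi.T @ (Phi @ w - target)                  # gradient step
    w = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold

print(np.round(w, 3))  # large entries flag the active monomials
```

The recovered coefficient vector is large only in the positions of the three monomials that actually enter the difference equation; note that this equation-error estimate is still biased when the data are noisy, which is what the adjusted least squares correction addresses.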

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.

J. von Neumann
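The bias-correction idea behind adjusted least squares can be demonstrated on a toy errors-in-variables problem. The scalar example below is far simpler than the hypersurface fitting treated in the chapter and is an assumption for illustration: naive least squares uses noisy powers of the regressor, whose expectations are inflated by the noise variance, while the adjusted estimator replaces each power with a polynomial in the observation whose expectation equals the true power, removing the bias.

```python
import numpy as np

# Toy errors-in-variables problem: y = theta0 * x^2, with x observed
# through Gaussian noise of known standard deviation sigma.
rng = np.random.default_rng(2)
n, sigma, theta0 = 5000, 0.3, 1.5
x = rng.uniform(-2, 2, n)                 # true (unobserved) regressor
xt = x + sigma * rng.standard_normal(n)   # noisy observation of x
y = theta0 * x**2 + 0.1 * rng.standard_normal(n)

# Naive least squares treats xt as exact and is biased, because
#   E[xt^2] = x^2 + sigma^2   and   E[xt^4] = x^4 + 6 sigma^2 x^2 + 3 sigma^4.
theta_ls = (y @ xt**2) / np.sum(xt**4)

# Adjusted least squares: substitute polynomials in xt whose expectations
# equal x^2 and x^4, which yields a consistent estimator.
theta_als = (y @ (xt**2 - sigma**2)) / np.sum(
    xt**4 - 6 * sigma**2 * xt**2 + 3 * sigma**4
)

print(round(theta_ls, 2), round(theta_als, 2))  # LS underestimates; ALS is near 1.5
```

The same moment-matching construction, carried out for all monomials of the model class, is what turns the efficient but inconsistent algebraic fit into the consistent adjusted least squares estimator.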





Author information


Correspondence to Ivan Markovsky.


Copyright information

© 2019 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Markovsky, I. (2019). Nonlinear Modeling Problems. In: Low-Rank Approximation. Communications and Control Engineering. Springer, Cham. https://doi.org/10.1007/978-3-319-89620-5_7
