Adaptive Kernel Learning Networks with Application to Nonlinear System Identification

  • Haiqing Wang
  • Ping Li
  • Zhihuan Song
  • Steven X. Ding
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4232)


By kernelizing the traditional least-squares identification method, an adaptive kernel learning (AKL) network is proposed for nonlinear process modeling; it uses kernel mapping and a geometric-angle criterion to build the network topology adaptively. The generalization ability of the AKL network is controlled by introducing a regularized optimization function. Two learning strategies are addressed and their corresponding recursive algorithms are derived. Numerical simulations show that this simple AKL network can learn process nonlinearities from very small sample sets and has excellent modeling performance in both deterministic and stochastic environments.
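As a rough illustration of the idea only (the paper's own recursive AKL formulas are not reproduced here), the sketch below grows a kernel network adaptively: a training sample is admitted as a new node only if its feature-space image forms a large enough angle with the span of the existing nodes, and the output weights are then refit by regularized least squares. The class name, the Gaussian kernel, the angle threshold, and the batch-style refit are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

class AngleBasedKernelNetwork:
    """Sketch of an adaptively grown kernel network (illustrative, not the AKL recursions)."""

    def __init__(self, gamma=1.0, min_angle_deg=10.0, reg=1e-3):
        self.gamma = gamma
        self.min_sin2 = np.sin(np.deg2rad(min_angle_deg)) ** 2  # novelty threshold
        self.reg = reg                                          # regularization weight
        self.nodes, self.samples, self.weights = [], [], np.zeros(0)

    def _kvec(self, x):
        # Kernel evaluations between x and the current network nodes.
        return np.array([rbf_kernel(x, c, self.gamma) for c in self.nodes])

    def update(self, x, y):
        """Process one sample (x, y); add a node only if x is 'novel' in feature space."""
        if not self.nodes:
            novel = True
        else:
            K = np.array([[rbf_kernel(a, b, self.gamma) for b in self.nodes]
                          for a in self.nodes])
            k = self._kvec(x)
            kxx = rbf_kernel(x, x, self.gamma)
            # Squared sine of the angle between phi(x) and span{phi(nodes)}.
            proj = k @ np.linalg.solve(K + self.reg * np.eye(len(k)), k)
            novel = (kxx - proj) / kxx > self.min_sin2
        if novel:
            self.nodes.append(np.asarray(x))
        self.samples.append((np.asarray(x), float(y)))
        # Regularized (ridge-type) least-squares refit of the output weights.
        Phi = np.array([[rbf_kernel(xi, c, self.gamma) for c in self.nodes]
                        for xi, _ in self.samples])
        t = np.array([yi for _, yi in self.samples])
        A = Phi.T @ Phi + self.reg * np.eye(len(self.nodes))
        self.weights = np.linalg.solve(A, Phi.T @ t)

    def predict(self, x):
        return float(self._kvec(x) @ self.weights)
```

For a system-identification task one would typically feed regressor vectors such as x_k = [y_{k-1}, u_{k-1}, ...] together with the output y_k through update() sample by sample, then query predict(); the paper replaces the batch refit above with recursive update formulas.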


Keywords: Generalization Ability, High-Dimensional Feature Space, Adaptive Neural Network, Quadratic Cost Function, Nonlinear System Identification

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Haiqing Wang (1, 2)
  • Ping Li (1)
  • Zhihuan Song (1)
  • Steven X. Ding (2)
  1. National Lab of Industrial Control Technology, Zhejiang University, Hangzhou, P.R. China
  2. Inst. Auto. Cont. and Comp. Sys., University of Duisburg-Essen, Duisburg, Germany