Sum and Product Kernel Regularization Networks

  • Petra Kudová
  • Terezie Šámalová
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4029)


We study the problem of learning from examples (i.e., supervised learning) by means of function approximation theory. Approximation problems formulated as regularized minimization problems with kernel-based stabilizers admit a straightforward derivation of the solution, in the form of a linear combination of kernel functions (a one-hidden-layer feed-forward neural network schema). Based on Aronszajn's formulation of the sum of kernels and the product of kernels, we derive new approximation schemas – the Sum Kernel Regularization Network and the Product Kernel Regularization Network. We present concrete applications of the derived schemas, demonstrate their performance in experiments, and compare them to classical solutions. On many tasks our schemas outperform the classical ones.
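The construction described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes the standard regularization-network solution (K + λN·I)c = y from the regularized least-squares setting (cf. Poggio and Smale, ref. 4), a Gaussian kernel as an illustrative choice, and builds sum and product kernels in the sense of Aronszajn, both of which are again valid kernels.

```python
import numpy as np

def gaussian_kernel(width):
    """Gaussian kernel K(x, y) = exp(-||x - y||^2 / width^2) (illustrative choice)."""
    def k(X, Y):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / width ** 2)
    return k

def fit_rn(X, y, kernel, lam):
    """Regularization network: solve (K + lam * N * I) c = y;
    the learned function is f(x) = sum_i c_i K(x, x_i)."""
    N = len(X)
    K = kernel(X, X)
    c = np.linalg.solve(K + lam * N * np.eye(N), y)
    return lambda Xnew: kernel(Xnew, X) @ c

# Aronszajn: the sum and the (pointwise) product of two kernels
# are again kernels, yielding the Sum/Product Kernel RN schemas.
def sum_kernel(k1, k2):
    return lambda X, Y: k1(X, Y) + k2(X, Y)

def product_kernel(k1, k2):
    return lambda X, Y: k1(X, Y) * k2(X, Y)
```

A sum kernel lets the network mix two length scales (e.g. a narrow and a wide Gaussian), while a product kernel composes kernels acting on the same or on different groups of input variables; the fitting step is unchanged, since either combination is simply passed to `fit_rn` as the kernel.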


Keywords: Kernel Function · Product Kernel · Information Society · Reproducing Kernel Hilbert Space · Representer Theorem
These keywords were added by machine and not by the authors.




  1. Cucker, F., Smale, S.: On the mathematical foundations of learning. Bulletin of the American Mathematical Society 39, 1–49 (2001)
  2. Girosi, F.: An equivalence between sparse approximation and support vector machines. Technical report, Massachusetts Institute of Technology, A.I. Memo No. 1606 (1997)
  3. Girosi, F., Jones, M., Poggio, T.: Regularization theory and neural networks architectures. Neural Computation 7, 219–269 (1995)
  4. Poggio, T., Smale, S.: The mathematics of learning: Dealing with data. Notices of the AMS 50, 536–544 (2003)
  5. Kudová, P.: Learning with kernel based regularization networks. In: Information Technologies – Applications and Theory, pp. 83–92 (2005)
  6. Aronszajn, N.: Theory of reproducing kernels. Transactions of the AMS 68, 337–404 (1950)
  7. Šidlofová, T.: Existence and uniqueness of minimization problems with Fourier based stabilizers. In: Proceedings of Compstat, Prague (2004)
  8. Šámalová, T., Kudová, P.: Sum and product kernel networks. Technical report, Institute of Computer Science, AS CR (2005)
  9. Prechelt, L.: PROBEN1 – a set of benchmarks and benchmarking rules for neural network training algorithms. Technical Report 21/94, Universitaet Karlsruhe (1994)
  10. LAPACK: Linear algebra package,
  11. PAPI: Performance application programming interface,

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Petra Kudová 1
  • Terezie Šámalová 1

  1. Institute of Computer Science, Academy of Sciences of CR, Prague 8, Czech Republic
