MaxMinOver Regression: A Simple Incremental Approach for Support Vector Function Approximation

  • Daniel Schneegaß
  • Kai Labusch
  • Thomas Martinetz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4131)


The well-known MinOver algorithm is a simple modification of the perceptron algorithm that yields the maximum-margin classifier without a bias for linearly separable two-class classification problems. In [1] and [2] we presented DoubleMinOver and MaxMinOver as extensions of MinOver, which obtain the maximum-margin solution in the primal formulation and the Support Vector solution in the dual formulation by dememorising non-Support Vectors. Both approaches were augmented with soft margins based on the ν-SVM and the C2-SVM, and the latter was extended to SoftDoubleMaxMinOver [3]. This method finally leads to a Support Vector regression algorithm that is as efficient, and whose implementation is as simple, as the C2-SoftDoubleMaxMinOver classification algorithm.
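To make the abstract concrete, here is a minimal sketch of the underlying MinOver idea that the paper builds on: at each step, the training pattern with the smallest margin ("overlap") is reinforced, perceptron-style. This is not the regression method of the paper; the function name, the fixed iteration count, and the toy data are illustrative assumptions.

```python
import numpy as np

def minover(X, y, n_iter=500):
    """MinOver sketch: repeatedly reinforce the training pattern with
    the smallest margin y_i * <w, x_i>, i.e. the minimal "overlap"."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        margins = y * (X @ w)
        i = int(np.argmin(margins))   # pattern with minimal margin so far
        w += y[i] * X[i]              # perceptron-style Hebbian update
    return w

# Toy linearly separable problem: the learned direction separates both classes.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = minover(X, y)
print(np.all(y * (X @ w) > 0))  # True: all training margins positive
```

The MaxMinOver extension discussed in the abstract additionally "dememorises" patterns that turn out not to be Support Vectors, so that only the Support Vectors contribute to the final solution.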


Keywords: Support Vector Machine, Support Vector, Input Vector, Support Vector Regression, Step Width
(These keywords were added by machine and not by the authors; the process is experimental and the keywords may be updated as the learning algorithm improves.)




References

  1. Martinetz, T., Labusch, K., Schneegass, D.: SoftDoubleMinOver: A simple procedure for maximum margin classification. In: Proc. of the International Conference on Artificial Neural Networks, pp. 301–306 (2005)
  2. Martinetz, T.: MaxMinOver: A simple incremental learning procedure for support vector classification. In: Proc. of the International Joint Conference on Neural Networks, pp. 2065–2070. IEEE Press, Los Alamitos (2004)
  3. Schneegass, D., Martinetz, T., Clausohm, M.: OnlineDoubleMaxMinOver: A simple approximate time and information efficient online support vector classification method. In: Proc. of the European Symposium on Artificial Neural Networks (2006) (in preparation)
  4. Cortes, C., Vapnik, V.: Support-vector networks. Machine Learning 20(3), 273–297 (1995)
  5. Vapnik, V.: The Nature of Statistical Learning Theory. Springer, New York (1995)
  6. LeCun, Y., Jackel, L., Bottou, L., Brunot, A., Cortes, C., Denker, J., Drucker, H., Guyon, I., Muller, U., Sackinger, E., Simard, P., Vapnik, V.: Comparison of learning algorithms for handwritten digit recognition. In: Int. Conf. on Artificial Neural Networks, pp. 53–60 (1995)
  7. Osuna, E., Freund, R., Girosi, F.: Training support vector machines: an application to face detection. In: CVPR 1997, pp. 130–136 (1997)
  8. Schölkopf, B.: Support vector learning (1997)
  9. Friess, T., Cristianini, N., Campbell, C.: The kernel adatron algorithm: a fast and simple learning procedure for support vector machines. In: Proc. 15th International Conference on Machine Learning (1998)
  10. Platt, J.: Fast training of support vector machines using sequential minimal optimization. In: Advances in Kernel Methods – Support Vector Learning, pp. 185–208. MIT Press, Cambridge (1999)
  11. Keerthi, S.S., Shevade, S.K., Bhattacharyya, C., Murthy, K.R.K.: A fast iterative nearest point algorithm for support vector machine classifier design. IEEE Transactions on Neural Networks 11(1), 124–136 (2000)
  12. Li, Y., Long, P.: The relaxed online maximum margin algorithm. Machine Learning 46(1–3), 361–387 (2002)
  13. Cristianini, N., Shawe-Taylor, J.: Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, Cambridge (2000)
  14. Vapnik, V.N.: Statistical Learning Theory. John Wiley & Sons, Inc., New York (1998)
  15. Martinetz, T.: MinOver revisited for incremental support-vector-classification. In: Rasmussen, C.E., Bülthoff, H.H., Schölkopf, B., Giese, M.A. (eds.) DAGM 2004. LNCS, vol. 3175, pp. 187–194. Springer, Heidelberg (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Daniel Schneegaß (1, 2)
  • Kai Labusch (1)
  • Thomas Martinetz (1)
  1. Institute for Neuro- and Bioinformatics, University at Lübeck, Lübeck, Germany
  2. Information & Communications, Learning Systems, Siemens AG, Corporate Technology, Munich, Germany
