Advantages of Unbiased Support Vector Classifiers for Data Mining Applications

  • A. Navia-Vázquez
  • F. Pérez-Cruz
  • A. Artés-Rodríguez
  • A.R. Figueiras-Vidal

Abstract

Many learning algorithms have been used for data mining applications, including Support Vector Classifiers (SVC), which have shown advantages over other approaches because they provide a natural mechanism for implementing Structural Risk Minimization (SRM), yielding machines with good generalization properties. SVC leads to the optimal hyperplane (maximal margin) criterion for separable datasets but, in the nonseparable case, SVC minimizes the L1 norm of the training errors plus a regularizing term that controls the machine complexity. The L1 norm is chosen because it allows the minimization to be solved with a Quadratic Programming (QP) scheme, as in the separable case. However, the L1 norm is not a true “error counting” term of the kind the Empirical Risk Minimization (ERM) inductive principle prescribes, and it therefore leads to a biased solution. This effect is especially severe in low-complexity machines, such as linear classifiers or machines with few nodes (neurons, kernels, basis functions). Since one of the main goals in data mining is explanation, these reduced architectures are of great interest: they form the basis of other techniques such as input selection or rule extraction. Training SVMs as accurately as possible in these situations, i.e., without this bias, is therefore an interesting goal.
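For reference, the two objectives at issue can be written side by side: the standard soft-margin SVC formulation (textbook notation, not reproduced from the paper), followed by the error-counting objective that ERM actually calls for.

```latex
% Soft-margin SVC: the L1 norm of the slacks \xi_i is minimized (QP-solvable)
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \;
  \frac{1}{2}\lVert\mathbf{w}\rVert^2 + C\sum_{i=1}^{n}\xi_i
\quad \text{s.t.} \quad
  y_i\bigl(\mathbf{w}^{\top}\phi(\mathbf{x}_i)+b\bigr) \ge 1-\xi_i,
  \qquad \xi_i \ge 0

% ERM objective: count the misclassifications instead of summing the slacks
\min_{\mathbf{w},\,b} \;
  \frac{1}{2}\lVert\mathbf{w}\rVert^2
  + C\sum_{i=1}^{n}\mathbb{1}\!\left[\,y_i\bigl(\mathbf{w}^{\top}\phi(\mathbf{x}_i)+b\bigr) < 0\,\right]
```

Because a sample with a large slack contributes its full ξ_i to the first objective but only 1 to the second, a few far-off points can pull the QP solution away from the minimum-error hyperplane; this is the bias discussed above.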

We propose here an unbiased implementation of SVC by introducing a more appropriate “error counting” term. This way, the number of classification errors is truly minimized, while the maximal margin solution is obtained in the separable case. QP can no longer be used for solving the new minimization problem, and we apply instead an iterated Weighted Least Squares (WLS) procedure. This modification in the cost function of the Support Vector Machine to solve ERM was not possible up to date given the Quadratic or Linear Programming techniques commonly used, but it is now possible using the iterated WLS formulation. Computer experiments show that the proposed method is superior to the classical approach in the sense that it truly solves the ERM problem.
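As a rough illustration of the iterated WLS machinery, the sketch below trains a linear SVC with the classical L1-slack cost; the function name, the weight cap `max_a`, and the stopping rule are illustrative assumptions, and the paper's unbiased variant replaces the weighting so that the effective cost counts errors rather than summing slacks.

```python
# Minimal IRWLS sketch for a linear soft-margin SVC (classical L1-slack cost).
# Assumptions, not from the paper: function name, weight cap, stopping rule.
import numpy as np

def irwls_linear_svc(X, y, C=1.0, n_iter=50, max_a=1e6, tol=1e-8):
    """X: (n, d) inputs; y: (n,) labels in {-1, +1}. Returns (w, b)."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])   # append a bias column
    beta = np.zeros(d + 1)                 # stacked parameters [w; b]
    reg = np.eye(d + 1)                    # regularize w ...
    reg[-1, -1] = 1e-12                    # ... but only a tiny ridge on b
    for _ in range(n_iter):
        f = Xb @ beta                      # current machine outputs
        xi = np.maximum(0.0, 1.0 - y * f)  # slacks (hinge losses)
        # Weight a_i = 2C/xi_i makes the quadratic term a_i*e_i^2/2, with
        # e_i = y_i - f_i, match the hinge cost C*xi_i at the current
        # solution; margin-satisfying samples get zero weight and drop out.
        # Capping avoids division by a vanishing slack.
        a = np.where(xi > tol,
                     np.minimum(2.0 * C / np.maximum(xi, tol), max_a),
                     0.0)
        A = Xb.T * a                       # equals Xb^T @ diag(a)
        beta_new = np.linalg.solve(A @ Xb + reg, A @ y)  # one WLS solve
        if np.linalg.norm(beta_new - beta) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta[:-1], beta[-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X @ np.array([1.0, -0.5]) + 0.2 > 0, 1.0, -1.0)
    w, b = irwls_linear_svc(X, y, C=10.0)
    print("training error:", np.mean(np.sign(X @ w + b) != y))
```

Each pass re-derives the weights from the current slacks and solves one regularized weighted least-squares problem, so the heavy QP machinery is replaced by a short sequence of linear solves; changing the per-sample weighting is what opens the door to the unbiased, error-counting cost the paper proposes.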

Keywords: support vector classifier · empirical risk minimization · unbiased · data mining · error count · weighted least squares



Copyright information

© Kluwer Academic Publishers 2004

Authors and Affiliations

  • A. Navia-Vázquez (1)
  • F. Pérez-Cruz (1)
  • A. Artés-Rodríguez (1)
  • A.R. Figueiras-Vidal (1)

  1. DTSC, Univ. Carlos III de Madrid, Leganés, Madrid, Spain
