Advantages of Unbiased Support Vector Classifiers for Data Mining Applications
Many learning algorithms have been applied to data mining problems, among them Support Vector Classifiers (SVC), which have shown advantages over other approaches: they provide a natural mechanism for implementing Structural Risk Minimization (SRM), yielding machines with good generalization properties. For separable datasets, SVC leads to the optimal (maximal-margin) hyperplane; in the nonseparable case, SVC minimizes the L1 norm of the training errors plus a regularizing term that controls machine complexity. The L1 norm is chosen because it lets the minimization be solved with a Quadratic Programming (QP) scheme, as in the separable case. However, the L1 norm is not a true “error counting” term of the kind prescribed by the Empirical Risk Minimization (ERM) inductive principle, and it therefore leads to a biased solution. This effect is especially severe in low-complexity machines, such as linear classifiers or machines with few nodes (neurons, kernels, basis functions). Since one of the main goals in data mining is explanation, these reduced architectures are of great interest: they underlie other techniques such as input selection and rule extraction. Training SVMs as accurately as possible in these situations, i.e., without this bias, is therefore an interesting goal.
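The gap between the SVC's L1 slack penalty and a true ERM error count can be made concrete with a small numeric sketch. The margins below are invented for illustration (they are not taken from the paper's experiments); the point is that one far outlier dominates the L1 penalty while contributing only a single unit to the error count:

```python
import numpy as np

def hinge_penalty(margins):
    # L1 norm of the slack variables xi_i = max(0, 1 - y_i * f(x_i))
    # minimized by the standard soft-margin SVC.
    return np.maximum(0.0, 1.0 - margins).sum()

def error_count(margins):
    # The "error counting" term prescribed by the ERM principle:
    # the number of samples with y_i * f(x_i) < 0.
    return int((margins < 0.0).sum())

# Hypothetical margins y_i * f(x_i) for four samples; the last one is
# a far outlier on the wrong side of the boundary.
margins = np.array([2.0, 0.5, -0.2, -8.0])

print(hinge_penalty(margins))  # approx. 0.5 + 1.2 + 9.0, dominated by the outlier
print(error_count(margins))    # exactly 2 misclassified samples
```

Minimizing the first quantity pulls the hyperplane toward the outlier; minimizing the second does not, which is the bias the abstract refers to.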
We propose here an unbiased implementation of SVC that introduces a more appropriate “error counting” term. In this way the number of classification errors is truly minimized, while the maximal-margin solution is still obtained in the separable case. QP can no longer be used to solve the new minimization problem; instead, we apply an iterated Weighted Least Squares (WLS) procedure. Modifying the cost function of the Support Vector Machine so as to solve ERM was not feasible with the Quadratic or Linear Programming techniques commonly used, but it becomes possible with the iterated WLS formulation. Computer experiments show that the proposed method is superior to the classical approach in the sense that it truly solves the ERM problem.