Large-scale linear nonparallel SVMs
Large-scale problems are an active topic in machine learning, and in the era of big data, solving them is both challenging and worthwhile. The standard linear SVM handles large-scale classification effectively, with acceptable training time and good prediction accuracy. Meanwhile, the nonparallel SVM (NPSVM) and the ramp loss nonparallel SVM (RNPSVM) have been shown to outperform SVM on benchmark datasets, which motivates extending NPSVMs to the large-scale setting. In this paper, we propose large-scale linear NPSVMs, solved by the alternating direction method of multipliers (ADMM), to handle large-scale classification problems. ADMM splits a large problem into smaller subproblems, avoiding a single intractable solve and yielding faster training. The primal problems of NPSVM are convex and differentiable, so ADMM can be applied to them directly. The objective functions of RNPSVM, however, are sums of convex and concave terms; they must first be processed by the concave-convex procedure (CCCP), which transforms them into a sequence of convex programs, and ADMM then solves each of these programs in turn. Experiments on large-scale problems verify that the proposed algorithms classify such tasks effectively.
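To illustrate the splitting idea behind ADMM, the following is a minimal, generic sketch (not the paper's NPSVM solver) applied to the classic lasso problem, min 0.5‖Ax − b‖² + λ‖z‖₁ subject to x = z. The function names and parameter values are illustrative assumptions; the update rules are the standard scaled-form ADMM iterations for this problem.

```python
import numpy as np

def soft_threshold(v, k):
    # Element-wise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Scaled-form ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # The x-update is a ridge-like linear system; factor the matrix once.
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # smooth subproblem
        z = soft_threshold(x + u, lam / rho)         # nonsmooth subproblem
        u = u + x - z                                # dual ascent step
    return z

# Toy sparse-recovery example.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = admm_lasso(A, b, lam=0.5)
```

The key point, as in the paper's setting, is that each iteration only requires two easy subproblem solves (here a linear system and a closed-form shrinkage) rather than one large monolithic optimization.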
Keywords: Large-scale; Nonparallel SVM; Ramp loss function; ADMM
This work has been partially supported by grants from National Natural Science Foundation of China (Nos. 61472390, 11271361, 71331005, 11226089 and 91546201), Major International (Regional) Joint Research Project (No. 71110107026) and the Beijing Natural Science Foundation (No. 1162005).
Compliance with ethical standards
Conflict of interest
We declare that we have no conflicts of interest.
This article does not contain any studies with human participants or animals performed by any of the authors.