Minimum Class Variance SVM+ for data classification

  • Regular Article
  • Published in Advances in Data Analysis and Classification

Abstract

In this paper, a new Support Vector Machine Plus (SVM+) type model, called Minimum Class Variance SVM+ (MCVSVM+), is presented. Like SVM+, the proposed model exploits the group information in the training data. We show that MCVSVM+ combines the advantages of SVM+ and of the Minimum Class Variance Support Vector Machine (MCVSVM): it not only utilizes the additional (group) information hidden in the data, as SVM+ does, but also incorporates the class distribution characteristics into its optimization problem, whereas SVM+ takes into consideration only the samples lying on the class boundaries. The experimental results demonstrate the validity and the advantages of the new model compared with the standard SVM, SVM+ and MCVSVM.
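To make the construction concrete, the following is a sketch of the primal problem such a model would solve, assembled from the published MCVSVM primal (Zafeiriou et al. 2007) and the grouped SVM+ primal (Liang et al. 2009); it illustrates the idea and need not match the paper's exact formulation. Here the training set is partitioned into groups \(T_1,\dots,T_t\), \(S_w\) denotes the within-class scatter matrix, and each group \(r\) carries its own correcting function \(\mathbf{w}_r^{\top}\mathbf{x} + d_r\) that models the slacks:

```latex
\begin{aligned}
\min_{\mathbf{w},\,b,\,\{\mathbf{w}_r,\,d_r\}}\quad
  & \frac{1}{2}\,\mathbf{w}^{\top} S_w\,\mathbf{w}
    + \frac{\gamma}{2}\sum_{r=1}^{t}\lVert\mathbf{w}_r\rVert^{2}
    + C\sum_{r=1}^{t}\sum_{i\in T_r}\bigl(\mathbf{w}_r^{\top}\mathbf{x}_i + d_r\bigr)\\
\text{s.t.}\quad
  & y_i\bigl(\mathbf{w}^{\top}\mathbf{x}_i + b\bigr)
    \ge 1 - \bigl(\mathbf{w}_r^{\top}\mathbf{x}_i + d_r\bigr),\qquad
    \mathbf{w}_r^{\top}\mathbf{x}_i + d_r \ge 0,\qquad
    i\in T_r,\; r=1,\dots,t.
\end{aligned}
```

Setting \(S_w = I\) recovers the grouped SVM+ objective, while replacing each correcting function by a free slack \(\xi_i \ge 0\) recovers MCVSVM.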


Notes

  1. http://archive.ics.uci.edu/ml/.
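The experiments compare the models on benchmark datasets from this repository. As a minimal, non-authoritative sketch of how the linear primal above could be solved with an off-the-shelf convex solver, the following uses cvxpy; the helper names, the toy data, and the small ridge added to \(S_w\) are illustrative choices, not taken from the paper:

```python
import numpy as np
import cvxpy as cp

def within_class_scatter(X, y):
    """Within-class scatter S_w: sum over classes of the class-centred
    outer products, plus a small ridge so the quadratic form used
    below is strictly positive definite."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    return 0.5 * (Sw + Sw.T) + 1e-6 * np.eye(d)

def fit_mcvsvm_plus(X, y, groups, C=1.0, gamma=1.0):
    """Solve the hypothesised linear MCVSVM+ primal: each group r gets
    a correcting function xi(x) = w_r . x + d_r that plays the role
    of the slack variables."""
    n, d = X.shape
    Sw = within_class_scatter(X, y)
    gids = np.unique(groups)
    w, b = cp.Variable(d), cp.Variable()
    wr = {r: cp.Variable(d) for r in gids}
    dr = {r: cp.Variable() for r in gids}
    # Group-wise correcting functions evaluated at each training sample.
    slack = cp.hstack([X[i] @ wr[groups[i]] + dr[groups[i]]
                       for i in range(n)])
    margin = cp.multiply(y, X @ w + b)
    obj = cp.Minimize(0.5 * cp.quad_form(w, Sw)
                      + 0.5 * gamma * sum(cp.sum_squares(wr[r]) for r in gids)
                      + C * cp.sum(slack))
    cp.Problem(obj, [margin >= 1 - slack, slack >= 0]).solve()
    return w.value, b.value

# Toy usage: two Gaussian classes, samples split into two groups.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(1, 1, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
groups = np.tile([0, 1], 20)
w, b = fit_mcvsvm_plus(X, y, groups)
print(f"training accuracy: {np.mean(np.sign(X @ w + b) == y):.2f}")
```

A kernelized version would replace the quadratic forms with their Gram-matrix counterparts, as in the dual treatments of MCVSVM and SVM+.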

References

  • Cai F (2011) Advanced learning approaches based on SVM+ methodology. University of Minnesota

  • Cai F, Cherkassky V (2009) SVM+ regression and multi-task learning. In: Proceedings of the International Joint Conference on Neural Networks, Atlanta, GA

  • Cao L (2003) Support vector machines experts for time series forecasting. Neurocomputing 51:321–339

  • Demiriz A, Bennett KP, Breneman CM, Embrechts MJ (2001) Support vector machine regression in chemometrics. In: Computing Science and Statistics: Proceedings of the 33rd Symposium on the Interface, Interface Foundation of North America, Washington, DC

  • Joachims T (1998) Text categorization with support vector machines: learning with many relevant features. In: Machine Learning: ECML-98, pp 137–142

  • Kitagawa G (1996) Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. J Comput Graph Stat 5:1–25

  • Leiva-Murillo JM, Gómez-Chova L, Camps-Valls G (2013) Multitask remote sensing data classification. IEEE Trans Geosci Remote Sens 51(1):151–161

  • Liang L, Cai F, Cherkassky V (2009) Predictive learning with structured (grouped) data. Neural Netw 22:766–773

  • Liang L, Cherkassky V (2007) Learning using structured data: application to fMRI data analysis. In: Proceedings of the International Joint Conference on Neural Networks, Orlando, FL

  • Liang L, Cherkassky V (2008) Connection between SVM+ and multi-task learning. In: Proceedings of the International Joint Conference on Neural Networks, Hong Kong

  • Mika S, Rätsch G, Weston J, Schölkopf B, Smola A, Müller K-R (2003) Constructing descriptive and discriminative nonlinear features: Rayleigh coefficients in kernel feature spaces. IEEE Trans Pattern Anal Mach Intell 25(5):623–628

  • Müller K-R, Mika S, Rätsch G, Tsuda K, Schölkopf B (2001) An introduction to kernel-based learning algorithms. IEEE Trans Neural Netw 12(2):181–201

  • Osuna E, Freund R, Girosi F (1997) Training support vector machines: an application to face detection. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp 130–136

  • Tefas A, Kotropoulos C, Pitas I (2001) Using support vector machines to enhance the performance of elastic graph matching for frontal face authentication. IEEE Trans Pattern Anal Mach Intell 23(7):735–746

  • Vapnik V (1995) The nature of statistical learning theory. Springer, New York

  • Vapnik V (1998) Statistical learning theory. Wiley, New York

  • Vapnik V (2006) Empirical inference science: afterword of 2006. Springer, New York

  • Vapnik V, Vashist A (2009) A new learning paradigm: learning using privileged information. Neural Netw 22:544–557

  • Vapnik V, Vashist A, Pavlovitch N (2009) Learning using hidden information (learning with teacher). In: Proceedings of the International Joint Conference on Neural Networks, IEEE, pp 3188–3195

  • Zafeiriou S, Tefas A, Pitas I (2007) Minimum class variance support vector machines. IEEE Trans Image Process 16(10):2551–2564

  • Zhu WX, Zhong P (2014) A new one-class SVM based on hidden information. Knowl Based Syst 60:35–43

  • Zhu WX, Wang KN, Zhong P (2014) Improving support vector classification by learning group information hidden in the data. ICIC Express Lett Part B Appl 5(3):781–786

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 11171346).

Author information

Correspondence to Ping Zhong.

Cite this article

Zhu, W., Zhong, P. Minimum Class Variance SVM+ for data classification. Adv Data Anal Classif 11, 79–96 (2017). https://doi.org/10.1007/s11634-015-0212-z
