
Fast and Memory-Efficient Import Vector Domain Description

Neural Processing Letters

Abstract

One-class learning is a classical and challenging computational intelligence task, for which the literature offers several effective and powerful solutions. In the kernel-machines realm, examples include Support Vector Domain Description (SVDD) and the recently proposed Import Vector Domain Description (IVDD), which directly delivers the probability that a sample belongs to the class. Here, we propose and discuss two optimization strategies for IVDD that significantly reduce its memory footprint and consequently allow it to scale to datasets larger than the original formulation can handle. First, we approximate the Gaussian kernel with random features and optimize the model in the primal. Second, we propose a Nyström-like approximation of the functional together with a fast-converging and accurate self-consistent algorithm; in particular, we replace the a posteriori sparsity of the original IVDD optimization method with an a priori random selection of landmark samples from the dataset. We find this second approximation superior: compared to the original IVDD with the RBF kernel, it achieves high accuracy while being much faster and granting huge memory savings.
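The first strategy rests on the random-features construction of Rahimi and Recht: sampling frequencies from the Fourier transform of the Gaussian kernel yields an explicit, low-dimensional feature map whose inner products approximate the kernel, so the model can be trained in the primal without ever forming the n × n kernel matrix. The sketch below illustrates the idea; the feature map follows the standard construction, while fit_one_class_primal, its logistic-style loss, and all parameter values are illustrative assumptions, not the paper's actual primal algorithm.

```python
import numpy as np

def random_fourier_features(X, n_features=200, sigma=1.0, rng=None):
    """Explicit map Z such that Z(x) . Z(y) approximates the Gaussian
    kernel exp(-||x - y||^2 / (2 sigma^2)) (Rahimi & Recht)."""
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))  # samples from the kernel spectrum
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)      # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def fit_one_class_primal(Z, lam=1e-3, lr=0.5, n_iter=1000):
    """Hypothetical primal solver: gradient descent on a regularized
    logistic-style loss that pushes the in-class probability of every
    training sample toward 1. Memory cost is O(n * D), not O(n^2)."""
    n, D = Z.shape
    w = np.zeros(D)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(Z @ w)))          # sigmoid membership score
        grad = Z.T @ (p - 1.0) / n + 2.0 * lam * w  # gradient of loss + ridge term
        w -= lr * grad
    return w
```

A new sample is then scored by applying the same sigmoid to its feature map, giving a probability-like membership value in the spirit of IVDD.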

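The second strategy replaces the full kernel expansion with one over m randomly chosen landmark samples, in the spirit of the Nyström method: with K_nm the kernel matrix between data and landmarks and K_mm the matrix among landmarks, the approximation K ≈ K_nm K_mm^{-1} K_nm^T requires only O(nm) memory. The sketch below, again a hedged illustration, shows only this landmark feature map; the paper's self-consistent solver that runs on top of it is not reproduced here.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Pairwise Gaussian kernel between the rows of A and B."""
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def nystrom_map(X, n_landmarks=100, sigma=1.0, rng=None):
    """A priori landmark selection: pick m samples uniformly at random,
    then Phi = K_nm @ K_mm^{-1/2}, so Phi @ Phi.T approximates the full
    n x n kernel matrix while storing only n x m numbers."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    K_mm = rbf(X[idx], X[idx], sigma)
    K_nm = rbf(X, X[idx], sigma)
    vals, vecs = np.linalg.eigh(K_mm + 1e-8 * np.eye(n_landmarks))  # jitter for stability
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return K_nm @ inv_sqrt, idx
```

Because the landmarks fix the model's sparsity pattern up front, training never touches the full kernel matrix, which is the source of the memory savings reported in the abstract.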


Author information

Correspondence to Sergio Decherchi.



Cite this article

Decherchi, S., Cavalli, A. Fast and Memory-Efficient Import Vector Domain Description. Neural Process Lett 52, 511–524 (2020). https://doi.org/10.1007/s11063-020-10243-6
