
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11731)


Abstract

We revisit the kernel minimum enclosing ball problem and show that it can be solved using simple recurrent neural networks. Once solved, the interior of the ball can be characterized in terms of a function over a set of support vectors, and local minima of this function can be thought of as prototypes of the data at hand. For Gaussian kernels, these minima can be found naturally via a mean shift procedure and thus via another recurrent neurocomputing process. Practical results demonstrate that prototypes found this way are descriptive, meaningful, and interpretable.
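For concreteness, here is a minimal NumPy sketch of the two recurrent processes the abstract outlines: Frank-Wolfe iterations for the kernel MEB dual, followed by a weighted Gaussian mean shift over the resulting support vectors. The step-size schedule, the bandwidth sigma, the convergence thresholds, and all function names are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch (assumed details): kernel MEB via Frank-Wolfe, then weighted
# Gaussian mean shift to locate prototypes, as the abstract describes.
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_meb(K, n_iter=500):
    """Frank-Wolfe iterations for the kernel MEB dual. For a Gaussian
    kernel, diag(K) = 1, so the dual reduces to minimizing a^T K a
    over the probability simplex."""
    n = K.shape[0]
    a = np.full(n, 1.0 / n)                 # uniform start on the simplex
    for t in range(n_iter):
        grad = 2.0 * K @ a                  # gradient of a^T K a
        j = np.argmin(grad)                 # best simplex vertex e_j
        eta = 2.0 / (t + 2.0)               # standard Frank-Wolfe step size
        a *= 1.0 - eta                      # convex combination of a and e_j
        a[j] += eta
    return a

def mean_shift_prototypes(X, a, sigma=1.0, n_iter=100, tol=1e-6):
    """Weighted Gaussian mean shift seeded at every data point; fixed
    points are local maxima of f(x) = sum_i a_i k(x, x_i), i.e. the
    prototypes the abstract refers to."""
    Z = X.copy()
    for _ in range(n_iter):
        W = a[None, :] * gaussian_kernel(Z, X, sigma)  # per-seed weights
        Z_new = (W @ X) / W.sum(1, keepdims=True)      # shift to weighted mean
        if np.abs(Z_new - Z).max() < tol:
            Z = Z_new
            break
        Z = Z_new
    protos = []                             # merge near-duplicate fixed points
    for z in Z:
        if not any(np.linalg.norm(z - p) < 1e-3 for p in protos):
            protos.append(z)
    return np.array(protos)

# toy usage: two well-separated Gaussian blobs yield two prototypes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
alpha = kernel_meb(gaussian_kernel(X, X, sigma=1.0))
print(mean_shift_prototypes(X, alpha, sigma=1.0))
```

Both loops are simple fixed-point recurrences over the data, which is what makes a recurrent neurocomputing interpretation possible; the hyperparameters above are chosen only to make the toy example converge.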




Author information

Correspondence to Christian Bauckhage.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Bauckhage, C., Sifa, R., Dong, T. (2019). Prototypes Within Minimum Enclosing Balls. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions. ICANN 2019. Lecture Notes in Computer Science, vol 11731. Springer, Cham. https://doi.org/10.1007/978-3-030-30493-5_36


  • DOI: https://doi.org/10.1007/978-3-030-30493-5_36


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30492-8

  • Online ISBN: 978-3-030-30493-5

  • eBook Packages: Computer Science, Computer Science (R0)
