
Dimension-Adaptive Bounds on Compressive FLD Classification

  • Conference paper
Algorithmic Learning Theory (ALT 2013)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8139)


Abstract

Efficient dimensionality reduction by random projections (RP) has gained popularity, hence the learning guarantees achievable in RP spaces are of great interest. In the finite-dimensional setting, it has been shown for the compressive Fisher Linear Discriminant (FLD) classifier that, for good generalisation, the required target dimension grows only as the log of the number of classes and is not adversely affected by the number of projected data points. However, these bounds depend on the dimensionality d of the original data space. In this paper we give further guarantees that remove d from the bounds under certain regularity conditions on the data density structure. In particular, if the data density does not fill the ambient space, then the error of compressive FLD is independent of the ambient dimension and depends only on a notion of ‘intrinsic dimension’.





Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kabán, A., Durrant, R.J. (2013). Dimension-Adaptive Bounds on Compressive FLD Classification. In: Jain, S., Munos, R., Stephan, F., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2013. Lecture Notes in Computer Science, vol 8139. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40935-6_21


  • DOI: https://doi.org/10.1007/978-3-642-40935-6_21

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-40934-9

  • Online ISBN: 978-3-642-40935-6

  • eBook Packages: Computer Science (R0)
