Abstract
Efficient dimensionality reduction by random projections (RP) is gaining popularity, hence the learning guarantees achievable in RP spaces are of great interest. In the finite-dimensional setting, it has been shown for the compressive Fisher Linear Discriminant (FLD) classifier that, for good generalisation, the required target dimension grows only as the log of the number of classes and is not adversely affected by the number of projected data points. However, these bounds depend on the dimensionality d of the original data space. In this paper we give further guarantees that remove d from the bounds under certain regularity conditions on the structure of the data density. In particular, if the data density does not fill the ambient space, then the error of compressive FLD is independent of the ambient dimension and depends only on a notion of ‘intrinsic dimension’.
© 2013 Springer-Verlag Berlin Heidelberg
Kabán, A., Durrant, R.J. (2013). Dimension-Adaptive Bounds on Compressive FLD Classification. In: Jain, S., Munos, R., Stephan, F., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2013. Lecture Notes in Computer Science(), vol 8139. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40935-6_21
Print ISBN: 978-3-642-40934-9
Online ISBN: 978-3-642-40935-6