Journal of Signal Processing Systems, Volume 79, Issue 2, pp 189–199

Learning Incoherent Subspaces: Classification via Incoherent Dictionary Learning

Abstract

In this article we present the supervised iterative projections and rotations (S-IPR) algorithm, a method for learning discriminative incoherent subspaces from data. We derive S-IPR as a supervised extension of our previously proposed iterative projections and rotations (IPR) algorithm for incoherent dictionary learning, and we employ it to learn incoherent subspaces that model signals belonging to different classes. We test our method as a feature transform for supervised classification, first by visualising transformed features from a synthetic dataset and from the ‘iris’ dataset, then by using the resulting features in a classification experiment.
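The abstract does not spell out the algorithm itself, so the following Python sketch only illustrates the general idea it describes: model each class by its own subspace, then classify a sample by the subspace that reconstructs it best. The helper names fit_class_subspaces and classify_by_residual are hypothetical, and the per-class PCA bases below merely stand in for the incoherent subspaces that S-IPR would learn.

```python
# Minimal sketch (not the authors' S-IPR implementation): each class k is
# modelled by an orthonormal basis U_k, and a sample is assigned to the class
# whose subspace reconstructs it with the smallest residual. Per-class PCA
# bases stand in for the learned incoherent subspaces.
import numpy as np

def fit_class_subspaces(X, y, dim):
    """Fit one orthonormal basis per class via truncated SVD (a PCA stand-in)."""
    bases = {}
    for label in np.unique(y):
        Xc = X[y == label]
        Xc = Xc - Xc.mean(axis=0)          # centre the class data
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        bases[label] = Vt[:dim].T           # columns span the class subspace
    return bases

def classify_by_residual(x, bases):
    """Assign x to the class whose projection leaves the smallest residual."""
    residuals = {
        label: np.linalg.norm(x - U @ (U.T @ x)) for label, U in bases.items()
    }
    return min(residuals, key=residuals.get)

# Toy usage: two Gaussian classes in R^4 with energy in different coordinates.
rng = np.random.default_rng(0)
X0 = rng.normal(0, 1, (50, 4)) * [3, 1, 0.1, 0.1]
X1 = rng.normal(0, 1, (50, 4)) * [0.1, 0.1, 3, 1]
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)
bases = fit_class_subspaces(X, y, dim=2)
print(classify_by_residual(X[0], bases))    # expected: 0
```

In this toy setting the residual rule works because the two classes occupy nearly orthogonal subspaces; the point of learning *incoherent* subspaces, as the paper proposes, is to push the class models toward exactly that kind of mutual separation.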

Keywords

Subspace learning · Dictionary learning · Incoherent subspaces · Supervised classification · Feature transform · Sparse approximation


Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

Centre for Digital Music, Queen Mary University of London, London, UK