Subspace Mapping of Noisy Text Documents

  • Axel J. Soto
  • Marc Strickert
  • Gustavo E. Vazquez
  • Evangelos Milios
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6657)

Abstract

Subspace mapping methods aim at projecting high-dimensional data into a subspace where a specific objective function is optimized. Such dimension reduction allows the removal of collinear and irrelevant variables for creating informative visualizations and task-related data spaces. These specific and generally de-noised subspaces enable machine learning methods to work more efficiently. We present a new and general subspace mapping method, Correlative Matrix Mapping (CMM), and evaluate its abilities for category-driven text organization by assessing neighborhood preservation, class coherence, and classification. This approach is evaluated on the challenging task of processing short and noisy documents.

Keywords

Subspace Mapping · Compressed Document Representation
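The abstract's idea of a category-driven linear subspace mapping, and of checking neighborhood preservation in the mapped space, can be sketched in code. CMM's actual correlative objective is defined in the paper itself; the sketch below substitutes a classic Fisher-discriminant projection as a generic stand-in for a label-aware linear map, and a simple k-nearest-neighbor overlap score as a crude proxy for the rank-based neighborhood-preservation criteria the paper evaluates. All names and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fisher_subspace(X, y, k):
    # Classic Fisher-discriminant projection: maximize between-class scatter
    # relative to within-class scatter. Illustrative stand-in, NOT CMM itself.
    d = X.shape[1]
    mu = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    # Leading eigenvectors of Sw^-1 Sb span the class-separating subspace.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:k]]

def knn_overlap(X, Z, k=10):
    # Fraction of each point's k nearest neighbors shared between the original
    # and the mapped space: a simple neighborhood-preservation proxy.
    def knn(M):
        D = ((M[:, None, :] - M[None, :, :]) ** 2).sum(axis=-1)
        np.fill_diagonal(D, np.inf)  # exclude each point from its own list
        return np.argsort(D, axis=1)[:, :k]
    a, b = knn(X), knn(Z)
    return np.mean([len(set(a[i]) & set(b[i])) / k for i in range(len(X))])

# Toy "documents": 200 points in 20 dims; only dimension 0 carries the label.
rng = np.random.default_rng(0)
n, d = 200, 20
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, d))
X[:, 0] += 3.0 * (y == 1)

A = fisher_subspace(X, y, k=2)   # learned linear map, 20 dims -> 2
Z = X @ A                        # low-dimensional, class-aware embedding
```

Class coherence can then be read off the embedding (e.g., the gap between class means along the leading projected axis versus the within-class spread), and `knn_overlap(X, Z)` reports how much of the original neighborhood structure the mapping retained.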



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Axel J. Soto (1)
  • Marc Strickert (2)
  • Gustavo E. Vazquez (3)
  • Evangelos Milios (1)
  1. Faculty of Computer Science, Dalhousie University, Canada
  2. Institute for Vision and Graphics, Siegen University, Germany
  3. Dept. Computer Science, Univ. Nacional del Sur, Argentina
