Multiple Kernel Learning for Spectral Dimensionality Reduction

  • Diego Hernán Peluffo-Ordóñez
  • Andrés Eduardo Castro-Ospina
  • Juan Carlos Alvarado-Pérez
  • Edgardo Javier Revelo-Fuelagán
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9423)

Abstract

This work introduces a multiple kernel learning (MKL) approach for selecting and combining different spectral methods of dimensionality reduction (DR). From a predefined set of kernels representing conventional spectral DR methods, a generalized kernel is calculated as a linear combination of the kernel matrices. The combination coefficients are estimated via a variable ranking that quantifies how much each kernel contributes to optimizing a variance preservation criterion. All considered kernels are tested within a kernel PCA framework. The experiments are carried out on well-known real and artificial data sets. The performance of the compared DR approaches is quantified by a scaled version of the average agreement rate between K-ary neighborhoods. The proposed MKL approach exploits the representational strengths of each individual method, yielding embeddings that both preserve the structure of the data and give more intelligible visualizations.
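The core pipeline described above — forming a generalized kernel as a linear combination of kernel matrices and embedding it with kernel PCA — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weights here are uniform for simplicity (the paper estimates them via a variable-ranking procedure that is not reproduced), and the two RBF kernels stand in for the kernels derived from conventional spectral DR methods.

```python
import numpy as np

def center_kernel(K):
    # Double-center the kernel matrix, as required by kernel PCA.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def combine_kernels(kernels, alphas):
    # Generalized kernel: a linear combination of kernel matrices.
    return sum(a * K for a, K in zip(alphas, kernels))

def kernel_pca_embedding(K, d=2):
    # Embed into d dimensions using the top eigenpairs of the centered kernel.
    Kc = center_kernel(K)
    w, V = np.linalg.eigh(Kc)            # ascending eigenvalues
    idx = np.argsort(w)[::-1][:d]        # keep the d largest
    w, V = w[idx], V[:, idx]
    return V * np.sqrt(np.maximum(w, 0)) # scale eigenvectors by sqrt(eigenvalue)

# Toy example: two RBF kernels at different bandwidths, uniform weights
# (hypothetical stand-ins for the kernels of the spectral DR methods).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
kernels = [np.exp(-D2 / (2.0 * s**2)) for s in (0.5, 2.0)]
Y = kernel_pca_embedding(combine_kernels(kernels, [0.5, 0.5]), d=2)
print(Y.shape)  # (50, 2)
```

Because the combined kernel is centered before the eigendecomposition, the resulting embedding coordinates have (numerically) zero mean, as expected for a PCA-style projection.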

Keywords

Dimensionality reduction · Generalized kernel · Kernel PCA · Multiple kernel learning

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Diego Hernán Peluffo-Ordóñez (1)
  • Andrés Eduardo Castro-Ospina (2)
  • Juan Carlos Alvarado-Pérez (3, 4)
  • Edgardo Javier Revelo-Fuelagán (5)
  1. Universidad Cooperativa de Colombia – Pasto, Pasto, Colombia
  2. Research Center of the Instituto Tecnológico Metropolitano, Medellín, Colombia
  3. Universidad de Salamanca, Salamanca, Spain
  4. Universidad Mariana, Pasto, Colombia
  5. Universidad de Nariño, Pasto, Colombia