Leaving Local Optima in Unsupervised Kernel Regression

  • Daniel Lückehe
  • Oliver Kramer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8681)

Abstract

Embedding high-dimensional patterns in low-dimensional latent spaces is a challenging task. In this paper, we introduce re-sampling strategies to leave local optima in the data space reconstruction error (DSRE) minimization process of unsupervised kernel regression (UKR). To this end, we concentrate on a hybrid UKR variant that combines iterative solution construction with gradient-descent-based optimization. Patterns with high reconstruction errors are removed from the manifold and re-sampled based on Gaussian sampling. The re-sampling variants differ in the pattern reconstruction errors they consider, the number of re-sampled patterns, and their termination conditions. The re-sampling process with UKR can also improve ISOMAP embeddings. Experiments on typical benchmark data sets illustrate the capabilities of the proposed strategies for leaving local optima.
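To make the re-sampling idea concrete, the following Python sketch illustrates one plausible reading of the abstract under stated assumptions: a Gaussian kernel, leave-one-out Nadaraya-Watson reconstruction for the DSRE (as in UKR formulations such as Klanke and Ritter's), and a greedy acceptance rule. The function names (ukr_reconstruct, resample_worst) and parameters (sigma, k) are illustrative choices, not taken from the paper.

import numpy as np

def gaussian_kernel(d2, bandwidth=1.0):
    # Gaussian kernel on squared distances
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def ukr_reconstruct(X, Y, bandwidth=1.0):
    """Nadaraya-Watson reconstruction of patterns Y from latent points X,
    with the kernel matrix diagonal removed (leave-one-out) to avoid a
    trivial zero-error fit."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = gaussian_kernel(d2, bandwidth)
    np.fill_diagonal(K, 0.0)
    W = K / K.sum(axis=1, keepdims=True)
    return W @ Y

def dsre(X, Y, bandwidth=1.0):
    """Data space reconstruction error per pattern."""
    return ((Y - ukr_reconstruct(X, Y, bandwidth)) ** 2).sum(axis=1)

def resample_worst(X, Y, k=5, sigma=0.1, bandwidth=1.0, rng=None):
    """Re-sample the latent positions of the k worst-reconstructed
    patterns via Gaussian perturbations; keep the move only if the
    summed DSRE improves (hypothetical greedy acceptance rule)."""
    rng = np.random.default_rng() if rng is None else rng
    worst = np.argsort(dsre(X, Y, bandwidth))[-k:]
    X_new = X.copy()
    X_new[worst] += rng.normal(scale=sigma, size=(k, X.shape[1]))
    if dsre(X_new, Y, bandwidth).sum() < dsre(X, Y, bandwidth).sum():
        return X_new
    return X

In the hybrid variant described in the abstract, such a re-sampling step would alternate with gradient descent on the latent positions, so that Gaussian re-sampling perturbs the solution out of a local DSRE optimum and gradient descent then refines it.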

Keywords

Local Optimum · Latent Space · Reconstruction Error · Gaussian Kernel Function · Nonlinear Dimensionality Reduction


References

  1. Bache, K., Lichman, M.: UCI Machine Learning Repository (2013)
  2. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Springer, Berlin (2009)
  3. Hull, J.J.: A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence 16(5), 550–554 (1994)
  4. Klanke, S., Ritter, H.: Variants of unsupervised kernel regression: General cost functions. Neurocomputing 70(7–9), 1289–1303 (2007)
  5. Kramer, O.: Dimensionality reduction by unsupervised k-nearest neighbor regression. In: International Conference on Machine Learning and Applications (ICMLA), pp. 275–278 (2011)
  6. Lee, J.A., Verleysen, M.: Nonlinear Dimensionality Reduction. Springer (2007)
  7. Nadaraya, E.: On estimating regression. Theory of Probability and Its Applications 10, 186–190 (1964)
  8. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer (2000)
  9. Silverman, B.W.: Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability, vol. 26. Chapman and Hall, London (1986)
  10. Tenenbaum, J.B., de Silva, V., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323 (2000)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Daniel Lückehe (1)
  • Oliver Kramer (2)

  1. Department of Geoinformation, Jade University of Applied Sciences, Oldenburg, Germany
  2. Department of Computing Science, University of Oldenburg, Oldenburg, Germany
