Unsupervised Nearest Neighbors with Kernels

  • Conference paper

Part of the Lecture Notes in Computer Science book series (LNAI, volume 7526)

Abstract

In this paper we introduce an extension of unsupervised nearest neighbors for embedding patterns into continuous latent spaces of arbitrary dimensionality with stochastic sampling. Distances in data space are employed as standard deviations for Gaussian sampling in latent space. Neighborhoods are preserved with the nearest neighbor data space reconstruction error. Like the previous unsupervised nearest neighbors (UNN) variants, this approach is an iterative method that constructs a latent embedding by selecting the position with the lowest error. Further, we introduce kernel functions for computing the data space reconstruction error in a feature space, which allows non-linearities to be handled better. Experimental studies show that kernel unsupervised nearest neighbors (KUNN) is an efficient method for embedding high-dimensional patterns.
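The iterative scheme described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration based only on the abstract, not the authors' implementation: patterns are embedded one at a time, candidate latent positions are drawn from a Gaussian whose standard deviation is a data-space distance, and the candidate minimizing a kernelized data space reconstruction error is kept. All function names, the RBF kernel choice, and parameters such as `n_candidates` are assumptions for the sketch; the feature-space error uses the standard kernel-trick expansion of ||φ(y) − (1/K)Σφ(yᵢ)||².

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_dsre(y, neighbors, gamma=1.0):
    # Data space reconstruction error computed in feature space via the
    # kernel trick: ||phi(y) - (1/K) sum_i phi(y_i)||^2
    #   = k(y,y) - (2/K) sum_i k(y,y_i) + (1/K^2) sum_ij k(y_i,y_j)
    K = len(neighbors)
    t1 = rbf(y, y, gamma)
    t2 = -2.0 / K * sum(rbf(y, yi, gamma) for yi in neighbors)
    t3 = sum(rbf(yi, yj, gamma) for yi in neighbors for yj in neighbors) / K**2
    return t1 + t2 + t3

def kunn_embed(Y, q=2, K=3, n_candidates=30, gamma=1.0, seed=0):
    """Embed patterns Y (N x d) into a q-dimensional latent space."""
    rng = np.random.default_rng(seed)
    X = [rng.normal(size=q)]  # latent position of the first pattern
    for n in range(1, len(Y)):
        y = Y[n]
        # data-space distance to the nearest already-embedded pattern,
        # used as standard deviation for Gaussian sampling in latent space
        d = min(np.linalg.norm(y - Y[i]) for i in range(n))
        best_x, best_e = None, np.inf
        for _ in range(n_candidates):
            # sample a candidate latent position near a random embedded point
            center = X[rng.integers(n)]
            x = center + rng.normal(scale=d, size=q)
            # K nearest embedded neighbors of x in latent space
            idx = np.argsort([np.linalg.norm(x - xi) for xi in X])[:min(K, n)]
            e = kernel_dsre(y, [Y[i] for i in idx], gamma)
            if e < best_e:
                best_x, best_e = x, e
        X.append(best_x)
    return np.array(X)
```

As a sanity check, embedding a small random data set should yield one finite q-dimensional latent position per pattern; a brute-force neighbor search is used here where the paper's setting would typically call for spatial data structures.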

Keywords

  • Kernel Function
  • Feature Space
  • Latent Space
  • Data Space
  • Locally Linear Embedding

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kramer, O. (2012). Unsupervised Nearest Neighbors with Kernels. In: Glimm, B., Krüger, A. (eds) KI 2012: Advances in Artificial Intelligence. KI 2012. Lecture Notes in Computer Science (LNAI), vol 7526. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33347-7_9


  • DOI: https://doi.org/10.1007/978-3-642-33347-7_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33346-0

  • Online ISBN: 978-3-642-33347-7

  • eBook Packages: Computer Science (R0)