A Fast and Easy Regression Technique for k-NN Classification Without Using Negative Pairs

  • Conference paper
  • In: Advances in Knowledge Discovery and Data Mining (PAKDD 2017)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10234)


Abstract

This paper proposes an inexpensive way to learn an effective dissimilarity function for k-nearest neighbor (k-NN) classification. Unlike Mahalanobis metric learning methods, which map both query (unlabeled) objects and labeled objects to new coordinates by a single transformation, our method learns a transformation of the labeled objects to new points in the feature space, while query objects are kept at their original coordinates. This method has several advantages over existing distance metric learning methods: (i) In experiments with large document and image datasets, it achieves k-NN classification accuracy better than, or at least comparable to, that of state-of-the-art metric learning methods. (ii) The transformation can be learned efficiently by solving a standard ridge regression problem. On document and image datasets, training is often more than two orders of magnitude faster than the fastest metric learning methods tested. This speed-up also stems from the fact that the proposed method eliminates optimization over “negative” object pairs, i.e., pairs of objects whose class labels differ. (iii) The formulation has a theoretical justification in terms of reducing hubness in the data.
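
To make the method described in the abstract concrete, below is a minimal sketch in Python/NumPy of one plausible instantiation; it is not the authors' released code (see footnote 6). The regression-target construction, in which each labeled object is regressed onto the centroid of its own class so that only same-class ("positive") pairs enter the objective, as well as all function and parameter names, are assumptions made for illustration; the paper's exact formulation may differ. What the sketch does take from the abstract: only the labeled objects are transformed, query objects keep their original coordinates, and the transformation is obtained by solving a standard ridge regression problem in closed form.

```python
import numpy as np

def learn_transform(X, y, lam=1.0):
    """Learn a linear map W for the labeled objects via ridge regression.

    Hypothetical target construction (an assumption, not the paper's
    exact recipe): each labeled object is regressed onto the centroid
    of its own class, so the objective involves only same-class
    ("positive") pairs; no negative pairs are ever formed.
    """
    y = np.asarray(y)
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    T = np.stack([centroids[c] for c in y])  # (n, d) regression targets

    # Standard closed-form ridge solution:
    #   W = argmin_W ||X W - T||_F^2 + lam * ||W||_F^2
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)

def knn_classify(Q, X, y, W, k=5):
    """k-NN classification with an asymmetric dissimilarity: the labeled
    objects X are moved by W, while the queries Q stay where they are."""
    y = np.asarray(y)
    Z = X @ W  # transformed labeled objects
    preds = []
    for q in Q:
        dist = np.linalg.norm(Z - q, axis=1)   # query-to-labeled distances
        nn = y[np.argsort(dist)[:k]]           # labels of the k nearest
        vals, counts = np.unique(nn, return_counts=True)
        preds.append(vals[np.argmax(counts)])  # majority vote
    return np.array(preds)
```

Under these assumptions, training reduces to a single d-by-d linear solve plus matrix products, with no enumeration of object pairs; this is consistent with the abstract's explanation of why the method trains orders of magnitude faster than metric learners that iterate over negative pairs.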


Notes

  1. http://archive.ics.uci.edu/ml/.

  2. Datasets were downloaded from http://www.cad.zju.edu.cn/home/dengcai/Data/TextData.html.

  3. We used the publicly available features from https://zimingzhang.files.wordpress.com/2014/10/cnn-features1.key.

  4. We also conducted experiments in which the feature dimensionality was set to 100. The results are omitted for lack of space, but they show the same trend.

  5. LMNN: https://bitbucket.org/mlcircus/lmnn/downloads,
     ITML: http://www.cs.utexas.edu/~pjain/itml/,
     DML-eig: http://www.albany.edu/~yy298919/software.html.

  6. The code will be made available on our homepage.


Acknowledgments

We thank the anonymous reviewers for their comments and suggestions. This work was partially supported by JSPS Kakenhi Grant No. 15H02749.

Author information

Correspondence to Masashi Shimbo.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Shigeto, Y., Shimbo, M., Matsumoto, Y. (2017). A Fast and Easy Regression Technique for k-NN Classification Without Using Negative Pairs. In: Kim, J., Shim, K., Cao, L., Lee, JG., Lin, X., Moon, YS. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2017. Lecture Notes in Computer Science (LNAI), vol 10234. Springer, Cham. https://doi.org/10.1007/978-3-319-57454-7_2

  • DOI: https://doi.org/10.1007/978-3-319-57454-7_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-57453-0

  • Online ISBN: 978-3-319-57454-7

  • eBook Packages: Computer Science, Computer Science (R0)
