Abstract
This paper proposes an inexpensive way to learn an effective dissimilarity function for k-nearest neighbor (k-NN) classification. Unlike Mahalanobis metric learning methods, which map both query (unlabeled) objects and labeled objects to new coordinates by a single transformation, our method learns a transformation of labeled objects to new points in the feature space while query objects are kept in their original coordinates. This method has several advantages over existing distance metric learning methods: (i) In experiments with large document and image datasets, it achieves k-NN classification accuracy better than or at least comparable to that of state-of-the-art metric learning methods. (ii) The transformation can be learned efficiently by solving a standard ridge regression problem. For document and image datasets, training is often more than two orders of magnitude faster than the fastest metric learning methods tested. This speed-up also stems from the fact that the proposed method eliminates the optimization over "negative" object pairs, i.e., pairs of objects whose class labels differ. (iii) The formulation has a theoretical justification in terms of reducing hubness in data.
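The core computation described in the abstract can be illustrated with a short sketch: a transformation of the labeled objects is obtained in closed form from a ridge regression problem, and k-NN classification then compares the untransformed query against the transformed labeled points. This is a minimal illustration, not the paper's exact formulation; in particular, the choice of regression targets below (each labeled object regressed toward its class mean) is an assumption made for the example, and the paper's actual objective may differ.

```python
import numpy as np

def learn_transform(X, T, lam=1.0):
    """Closed-form ridge regression: find W minimizing
    ||X W - T||^2 + lam * ||W||^2.

    X : (n, d) labeled objects in their original coordinates.
    T : (n, d) target coordinates for each labeled object
        (here, an assumed choice; the paper's targets may differ).
    """
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)          # regularized Gram matrix
    return np.linalg.solve(A, X.T @ T)      # W = (X'X + lam I)^{-1} X'T

def knn_classify(query, X, y, W, k=1):
    """k-NN in which labeled objects are transformed by W while the
    query stays in its original coordinates (the asymmetry described
    in the abstract)."""
    Z = X @ W                               # transformed labeled points
    dists = np.linalg.norm(Z - query, axis=1)
    nearest = np.argsort(dists)[:k]
    vals, counts = np.unique(y[nearest], return_counts=True)
    return vals[np.argmax(counts)]          # majority vote

# Toy data: two well-separated classes; targets are the class means.
X = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
y = np.array([0, 0, 1, 1])
T = np.array([[0.1, 0.0], [0.1, 0.0], [5.1, 5.0], [5.1, 5.0]])

W = learn_transform(X, T, lam=0.1)
print(knn_classify(np.array([0.1, 0.1]), X, y, W))  # class 0
print(knn_classify(np.array([5.0, 5.1]), X, y, W))  # class 1
```

Note that no "negative pairs" (pairs of differently labeled objects) appear anywhere in the objective: the regression only pulls each labeled object toward its target, which is one source of the training-time advantage claimed in the abstract.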
Notes
2. Datasets were downloaded from http://www.cad.zju.edu.cn/home/dengcai/Data/TextData.html.
3. We used the publicly available features from https://zimingzhang.files.wordpress.com/2014/10/cnn-features1.key.
4. We also conducted experiments where the dimensionality of features was set to 100. The results are not presented here due to lack of space, but the same trend was observed.
6. This code will be made available at our homepage.
Acknowledgments
We thank the anonymous reviewers for their comments and suggestions. This work was partially supported by JSPS Kakenhi Grant No. 15H02749.
© 2017 Springer International Publishing AG
Cite this paper
Shigeto, Y., Shimbo, M., Matsumoto, Y. (2017). A Fast and Easy Regression Technique for k-NN Classification Without Using Negative Pairs. In: Kim, J., Shim, K., Cao, L., Lee, JG., Lin, X., Moon, YS. (eds.) Advances in Knowledge Discovery and Data Mining (PAKDD 2017). Lecture Notes in Computer Science, vol. 10234. Springer, Cham. https://doi.org/10.1007/978-3-319-57454-7_2
Print ISBN: 978-3-319-57453-0
Online ISBN: 978-3-319-57454-7