
Computing Optimal Attribute Weight Settings for Nearest Neighbor Algorithms

Chapter in Lazy Learning

Abstract

Nearest neighbor (NN) learning algorithms, examples of the lazy learning paradigm, rely on a distance function to measure the similarity of test examples to the stored training examples. Because some attributes are more discriminative than others, which may be less relevant or entirely irrelevant, attributes should be weighted differently in the distance function. Most previous studies on weight setting for NN learning algorithms are empirical. In this paper we describe our attempt to determine theoretically optimal weights that minimize the predictive error of NN algorithms. Assuming a uniform distribution of examples in a 2-d continuous space, we first derive the average predictive error introduced by a linear classification boundary, and then determine the optimal weight setting for any polygonal classification region. Our theoretical results on optimal attribute weights can serve as a baseline, or lower bound, against which empirical weight-setting methods can be compared.
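To make the setting concrete, the sketch below illustrates (but does not reproduce) the situation studied in the chapter: a weighted Euclidean 1-NN classifier over a 2-d continuous space, with its predictive error estimated by Monte Carlo sampling under a uniform distribution of examples and a linear classification boundary. The function names (`weighted_1nn_predict`, `estimate_error`), the particular boundary, and the weight values are illustrative assumptions, not taken from the chapter; the chapter derives the error analytically rather than by sampling.

```python
import numpy as np

def weighted_1nn_predict(x, X_train, y_train, weights):
    """Classify x by its nearest training example under the weighted
    Euclidean distance d(x, z) = sqrt(sum_i w_i * (x_i - z_i)^2)."""
    diffs = X_train - x                               # shape (n, d)
    dists = np.sqrt((weights * diffs ** 2).sum(axis=1))
    return y_train[np.argmin(dists)]

def estimate_error(weights, boundary, n_train=50, n_test=2000, seed=None):
    """Monte Carlo estimate of the predictive error of weighted 1-NN when the
    true concept is the half-plane {(x1, x2) : boundary(x1, x2) <= 0} and
    examples are drawn uniformly from the unit square (illustrative only)."""
    rng = np.random.default_rng(seed)
    X_train = rng.uniform(size=(n_train, 2))
    y_train = (boundary(X_train[:, 0], X_train[:, 1]) <= 0).astype(int)
    X_test = rng.uniform(size=(n_test, 2))
    y_test = (boundary(X_test[:, 0], X_test[:, 1]) <= 0).astype(int)
    preds = np.array([weighted_1nn_predict(x, X_train, y_train, weights)
                      for x in X_test])
    return float(np.mean(preds != y_test))

# Example: a linear boundary x2 = 0.3 * x1 + 0.4; compare two weight settings.
boundary = lambda x1, x2: x2 - (0.3 * x1 + 0.4)
print(estimate_error(np.array([1.0, 1.0]), boundary, seed=0))  # equal weights
print(estimate_error(np.array([0.2, 1.8]), boundary, seed=0))  # skewed weights
```

Varying the weight vector in this sketch shows how the choice of attribute weights changes the estimated error, which is the quantity the chapter's theoretical analysis minimizes.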



Copyright information

© 1997 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Ling, C.X., Wang, H. (1997). Computing Optimal Attribute Weight Settings for Nearest Neighbor Algorithms. In: Aha, D.W. (eds) Lazy Learning. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-2053-3_10

  • DOI: https://doi.org/10.1007/978-94-017-2053-3_10

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-4860-8

  • Online ISBN: 978-94-017-2053-3
