Learning feature weights for CBR: Global versus local

  • Andrea Bonzano
  • Pádraig Cunningham
  • Barry Smyth
Machine Learning
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1321)

Abstract

k-Nearest Neighbour is a popular case retrieval technique in Case-Based Reasoning, but its accuracy depends strongly on the weights assigned to case features. This problem can be addressed by using Introspective Learning (IL) to discover appropriate values for feature weights. The basic idea of IL is to examine retrievals of similar cases in order to discover which features are important and which are not. The difficulty is that there is a myriad of ways in which weights can be updated based on this kind of analysis: several different cues can trigger a weight change, the weights themselves can be changed in several ways, and there is the added complication that weights can be global or local. In this paper we report an analysis of IL in a CBR system for conflict resolution in Air Traffic Control. We show that local weights are best in this particular domain and we show which update cues are most effective. We also show that overfitting can be a problem with IL and we discuss how it can be avoided.
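The weighted retrieval and introspective update described in the abstract can be sketched as follows. This is a minimal illustration only: the update cue (success or failure of the nearest-neighbour retrieval), the fixed step size `DELTA`, the global-weight scheme and the toy case base are all assumptions for the sake of the example, not the exact scheme evaluated in the paper.

```python
# Sketch of weighted 1-NN retrieval with a simple introspective
# weight-update rule (global weights). Illustrative assumptions:
# binary features, a fixed step size, and success/failure of
# retrieval as the only update cue.

DELTA = 0.1  # illustrative learning step


def weighted_distance(weights, case_a, case_b):
    """Weighted distance over binary features (each mismatch costs its weight)."""
    return sum(w * (a != b) for w, a, b in zip(weights, case_a, case_b))


def retrieve(weights, case_base, probe):
    """Return the (features, solution) pair nearest to the probe."""
    return min(case_base, key=lambda c: weighted_distance(weights, c[0], probe))


def introspective_update(weights, probe, probe_solution, case_base):
    """One IL step: reward features that match the probe when retrieval
    succeeds, punish them when it fails (weights floored at zero)."""
    features, solution = retrieve(weights, case_base, probe)
    sign = 1 if solution == probe_solution else -1
    return [max(w + sign * DELTA, 0.0) if f == p else w
            for w, f, p in zip(weights, features, probe)]


# Tiny worked example (hypothetical cases): two binary features,
# where only the first feature actually predicts the solution.
case_base = [((1, 0), "climb"), ((0, 1), "descend")]
weights = [1.0, 1.0]
for probe, answer in [((1, 1), "climb"), ((0, 0), "descend")]:
    weights = introspective_update(weights, probe, answer, case_base)
```

After these two updates the predictive first feature ends up with a higher weight than the uninformative second one, which is the behaviour IL is meant to produce. A local-weight variant would instead store a separate weight vector per case (or per case class) and update only the weights of the retrieved case.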

References

  1. Atkeson, C.G., Moore, A.W., Schaal, S. (1997) Locally Weighted Learning. To appear in Artificial Intelligence Review.
  2. Bonzano, A., Cunningham, P., Smyth, B. (1997) Using introspective learning to improve retrieval in CBR: A case study in air traffic control. To be presented at the International Conference on Case-Based Reasoning, Providence, Rhode Island, 1997.
  3. Fox, S. & Leake, D. B. (1995) Using Introspective Reasoning to Refine Indexing. Proceedings of the 14th International Joint Conference on Artificial Intelligence, pp. 391–397.
  4. Hansen, L.K., Larsen, J., Fog, T. (1997) Early Stop Criterion from the Bootstrap Ensemble. Proceedings of ICASSP'97, Munich, Germany, April 1997.
  5. Muñoz-Avila, H., Hüllen, J. (1996) Feature Weighting by Explaining Case-Based Planning Episodes. Advances in Case-Based Reasoning (Eds. I. Smith & B. Faltings), Proceedings of the Third European Workshop on Case-Based Reasoning, pp. 280–294, Springer-Verlag.
  6. Ricci, F., Avesani, P. (1995) Learning a Local Similarity Metric for Case-Based Reasoning. Case-Based Reasoning Research and Development (Eds. M. Veloso & A. Aamodt), Proceedings of The 1st International Conference on Case-Based Reasoning, pp. 301–311, Springer-Verlag.
  7. Salzberg, S. L. (1991) A Nearest Hyperrectangle Learning Method. Machine Learning, 6.
  8. Wettschereck, D., & Aha, D. W. (1995) Weighting Features. Case-Based Reasoning Research and Development (Eds. M. Veloso & A. Aamodt), Proceedings of The 1st International Conference on Case-Based Reasoning, pp. 347–358, Springer-Verlag.
  9. Wettschereck, D., Aha, D. W., & Mohri, T. (1997) A review and empirical evaluation of feature weighting methods for a class of lazy learning algorithms. To appear in Artificial Intelligence Review. (Also available on the Web from http://www.aic.nrl.navy.mil/~aha/)

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Andrea Bonzano (1)
  • Pádraig Cunningham (1)
  • Barry Smyth (2)
  1. Artificial Intelligence Group, Trinity College Dublin, Ireland
  2. Department of Computer Science, University College Dublin, Ireland
