Right of Inference: Nearest Rectangle Learning Revisited

  • Byron J. Gao
  • Martin Ester
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4212)

Abstract

In Nearest Rectangle (NR) learning, training instances are generalized into hyperrectangles, and a query is classified according to the class of its nearest rectangle. The method has received little attention since its introduction, mainly because, as a hybrid learner, it gains no accuracy advantage while sacrificing classification time compared to other interpretable eager learners such as decision trees. In this paper, we seek to improve the accuracy of NR learning by controlling the generation of rectangles so that each of them has the right of inference. Rectangles having the right of inference are compact, conservative, and good for making local decisions. Experiments on benchmark datasets validate the effectiveness of the proposed approach.
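To make the classification rule concrete, the following is a minimal sketch of NR classification as described in the abstract, not the authors' implementation: each hyperrectangle is an axis-aligned box with a class label, the distance from a query to a box is the Euclidean distance to the box's closest point (zero if the query lies inside), and the query takes the class of the nearest box. The helper names (`rect_distance`, `nr_classify`) are illustrative assumptions.

```python
import math

def rect_distance(query, lower, upper):
    # Per-dimension gap: 0 if the query coordinate falls inside the
    # interval [lo, hi], otherwise the distance to the nearer edge.
    s = 0.0
    for q, lo, hi in zip(query, lower, upper):
        gap = max(lo - q, 0.0, q - hi)
        s += gap * gap
    return math.sqrt(s)

def nr_classify(query, rectangles):
    # rectangles: list of (lower, upper, label) tuples; return the label
    # of the rectangle at minimum distance from the query.
    return min(rectangles, key=lambda r: rect_distance(query, r[0], r[1]))[2]

# Example: two 2-D rectangles of different classes.
rects = [([0, 0], [2, 2], "A"), ([5, 5], [8, 8], "B")]
print(nr_classify([1, 1], rects))  # inside the first box -> "A"
print(nr_classify([6, 9], rects))  # nearest to the second box -> "B"
```

Note that a query inside a rectangle has distance zero, which is what makes overly large ("non-conservative") rectangles harmful: they capture queries they should not, motivating the paper's control over rectangle generation.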

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Byron J. Gao
  • Martin Ester
  School of Computing Science, Simon Fraser University, Canada
