Classification by Voting Feature Intervals

  • Gülşen Demiröz
  • H. Altay Güvenir
Part II: Regular Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1224)

Abstract

A new classification algorithm called VFI (for Voting Feature Intervals) is proposed. A concept is represented by a set of feature intervals on each feature dimension separately. Each feature participates in classification by distributing real-valued votes among the classes, and the class receiving the highest total vote is declared the predicted class. VFI is compared with the Naive Bayesian Classifier (NBC), which also treats each feature separately. Experiments on real-world datasets show that VFI achieves classification accuracy comparable to, and on some datasets better than, that of NBC. Moreover, VFI is faster than NBC on all of the datasets tested.
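To make the voting scheme concrete, here is a minimal Python/NumPy sketch of a VFI-style classifier. The class name SimpleVFI, the n_intervals parameter, and the equal-width interval construction are illustrative assumptions, not the paper's method: the paper derives interval boundaries from the end points of each class's value range and handles nominal features with point intervals. Only the voting idea itself is taken from the abstract.

```python
import numpy as np

class SimpleVFI:
    """A simplified Voting Feature Intervals sketch.

    Each feature axis is split into equal-width intervals (an assumption
    made here for brevity; the paper builds intervals from class end
    points). Each interval stores one real-valued vote per class, and
    prediction sums the votes of the intervals an instance falls into.
    """

    def __init__(self, n_intervals=10):
        self.n_intervals = n_intervals

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        y = np.asarray(y)
        self.classes_ = np.unique(y)
        class_sizes = np.array([(y == c).sum() for c in self.classes_])
        self.edges_, self.votes_ = [], []
        for f in range(X.shape[1]):
            # Equal-width intervals over the feature's observed range.
            edges = np.linspace(X[:, f].min(), X[:, f].max(),
                                self.n_intervals + 1)
            # Interval index of each training instance, in [0, n_intervals).
            idx = np.digitize(X[:, f], edges[1:-1])
            counts = np.zeros((self.n_intervals, len(self.classes_)))
            for j, c in enumerate(self.classes_):
                counts[:, j] = np.bincount(idx[y == c],
                                           minlength=self.n_intervals)
            # Use relative frequency per class so large classes do not
            # dominate, then normalize each interval's votes to sum to 1.
            rel = counts / np.maximum(class_sizes, 1)
            totals = rel.sum(axis=1, keepdims=True)
            votes = np.divide(rel, totals,
                              out=np.zeros_like(rel), where=totals > 0)
            self.edges_.append(edges)
            self.votes_.append(votes)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        scores = np.zeros((X.shape[0], len(self.classes_)))
        for f, (edges, votes) in enumerate(zip(self.edges_, self.votes_)):
            idx = np.digitize(X[:, f], edges[1:-1])
            idx = np.clip(idx, 0, self.n_intervals - 1)  # guard test outliers
            scores += votes[idx]  # each feature casts its interval's votes
        return self.classes_[np.argmax(scores, axis=1)]
```

With numeric arrays X_train, y_train, X_test, a call such as SimpleVFI(n_intervals=10).fit(X_train, y_train).predict(X_test) illustrates the key design choice: every feature contributes a soft vote vector rather than a single class prediction, so no single feature can decide the outcome alone.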



Copyright information

© Springer-Verlag 1997

Authors and Affiliations

  • Gülşen Demiröz
  • H. Altay Güvenir

  Department of Computer Engineering and Information Science, Bilkent University, Ankara, Turkey
