Neighborhood Random Classification

  • Djamel Abdelkader Zighed
  • Diala Ezzeddine
  • Fabien Rico
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7301)

Abstract

Ensemble methods (EMs) have become increasingly popular in data mining because of their efficiency. These methods generate a set of classifiers using one or more machine learning algorithms (MLAs) and aggregate them into a single classifier, the meta-classifier (MC). Among MLAs, k-Nearest Neighbors (kNN) is one of the best known used in the context of EMs. However, tuning the parameter k can be difficult, a drawback shared by all instance-based MLAs. Here, we propose an approach based on neighborhood graphs as an alternative. Using related graphs such as relative neighborhood graphs (RNGs) or Gabriel graphs (GGs), we obtain a more general approach with fewer arbitrary parameters. Neighborhood graphs have never been introduced into EM approaches before. The results of our algorithm, Neighborhood Random Classification, are very promising: they match the best EM approaches, such as Random Forests or those based on SVMs. In this exploratory and experimental work, we present the methodological approach and extensive comparative results.
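
The full text is not reproduced on this page. As a rough illustration of the idea the abstract describes, and not the authors' implementation, the sketch below classifies a query point by majority vote over its Gabriel-graph neighbors (two points are Gabriel neighbors when no third point falls inside the ball whose diameter joins them), then aggregates many such classifiers built on random subsamples of the training set. The function names and the subsampling parameters (n_classifiers, sample_frac) are illustrative assumptions.

    import numpy as np
    from collections import Counter

    def gabriel_neighbors(X, x):
        # x and X[i] are Gabriel neighbors iff no third point r lies strictly
        # inside the ball with diameter (x, X[i]), i.e. for every other r:
        #   d(x, r)^2 + d(X[i], r)^2 >= d(x, X[i])^2
        d_x = np.sum((X - x) ** 2, axis=1)         # squared distances to the query
        neighbors = []
        for i in range(len(X)):
            d_i = np.sum((X - X[i]) ** 2, axis=1)  # squared distances to X[i]
            inside = (d_x + d_i) < d_x[i]          # points inside the diameter ball
            inside[i] = False
            if not inside.any():
                neighbors.append(i)
        return neighbors

    def gabriel_classify(X, y, x):
        # Majority vote over the Gabriel neighbors of the query point x.
        # The query's nearest training point is always a Gabriel neighbor,
        # so the vote is never empty.
        idx = gabriel_neighbors(X, x)
        return Counter(y[i] for i in idx).most_common(1)[0][0]

    def neighborhood_random_classify(X, y, x, n_classifiers=25,
                                     sample_frac=0.6, seed=None):
        # Ensemble: build each base classifier on a random subsample of the
        # training set, then aggregate the votes (assumed randomization scheme).
        rng = np.random.default_rng(seed)
        n = len(X)
        votes = []
        for _ in range(n_classifiers):
            sub = rng.choice(n, size=max(2, int(sample_frac * n)), replace=False)
            votes.append(gabriel_classify(X[sub], y[sub], x))
        return Counter(votes).most_common(1)[0][0]

    # Toy usage with made-up data: the query lies in the class-1 region.
    X = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.], [2., 3.], [3., 2.]])
    y = np.array([0, 0, 0, 1, 1, 1])
    print(neighborhood_random_classify(X, y, np.array([2.2, 2.4]), seed=0))

Subsampling is one simple way to inject the diversity that ensemble methods rely on; the paper's actual randomization scheme may differ.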

Keywords

Ensemble methods · Neighborhood graphs · Relative neighborhood graphs · Gabriel graphs · k-Nearest Neighbors

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Djamel Abdelkader Zighed (1)
  • Diala Ezzeddine (2)
  • Fabien Rico (2)

  1. Institut des Sciences de l’Homme (ISH - USR 3385), Université de Lyon, Lyon, France
  2. Laboratoire ERIC, Université de Lyon, Bron Cedex, France