Neighborhood Random Classification

  • Djamel A. Zighed
  • Diala Ezzeddine
  • Fabien Rico
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8085)


Ensemble methods (EMs) have become increasingly popular in data mining because of their efficiency. These methods generate a set of classifiers using one or several machine learning algorithms (MLAs) and aggregate them into a single classifier (meta-classifier, MC). Decision trees (DTs), SVMs and k-nearest neighbors (kNN) are among the most widely used in the context of EMs. Here, we propose an approach based on neighborhood graphs as an alternative. Using related graph structures, such as relative neighborhood graphs (RNGs), Gabriel graphs (GGs) or minimum spanning trees (MSTs), we generalize the kNN approach while avoiding arbitrary parameters such as the value of k. Neighborhood graphs have never before been introduced into EM approaches. The results of our algorithm, Neighborhood Random Classification, are very promising: they match the best EM approaches, such as Random Forest or those based on SVMs. In this preliminary, experimental work, we present the methodological approach and extensive comparative results. We also report results on how the neighborhood structure influences classifier efficiency, and we identify some issues that deserve further study.
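To make the parameter-free neighborhood idea concrete, the following sketch classifies a query point by majority vote over its Gabriel-graph neighbors: two points p and q are Gabriel neighbors when the ball having segment pq as its diameter contains no other point. This is a minimal illustration under our own assumptions (the function names `gabriel_neighbors` and `gabriel_classify` are hypothetical), not the authors' implementation.

```python
import numpy as np
from collections import Counter

def gabriel_neighbors(points, query):
    """Indices of training points that are Gabriel-graph neighbors of `query`.

    p and q are Gabriel neighbors iff the ball with diameter [p, q] contains
    no other point r, i.e. d(p,r)^2 + d(q,r)^2 >= d(p,q)^2 for all r.
    """
    d_q = np.sum((points - query) ** 2, axis=1)   # squared distances to query
    neighbors = []
    for i, p in enumerate(points):
        d_pq = d_q[i]                              # squared distance p <-> query
        d_p = np.sum((points - p) ** 2, axis=1)    # squared distances to p
        mask = np.ones(len(points), dtype=bool)
        mask[i] = False                            # exclude p itself
        if np.all(d_p[mask] + d_q[mask] >= d_pq):
            neighbors.append(i)
    return neighbors

def gabriel_classify(points, labels, query):
    """Majority vote over the query's Gabriel neighbors; no k to choose."""
    idx = gabriel_neighbors(points, query)
    return Counter(labels[i] for i in idx).most_common(1)[0][0]
```

Unlike kNN, the number of neighbors here adapts to the local geometry of the data: a query in a dense region may have many Gabriel neighbors, one in a sparse region only a few.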


Keywords: Ensemble methods · Neighborhood graphs · Relative neighborhood graphs · Gabriel graphs · k-Nearest Neighbors





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Djamel A. Zighed (1)
  • Diala Ezzeddine (2)
  • Fabien Rico (2)
  1. Institut des Sciences de l'Homme (ISH - USR 3385), Université de Lyon, Lyon, France
  2. Laboratoire Eric, Université de Lyon, Bron Cedex, France
