Combination of Multiple Nearest Neighbor Classifiers Based on Feature Subset Clustering Method

  • Li-Juan Wang
  • Qiang Hua
  • Xiao-Long Wang
  • Qing-Cai Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3930)


This paper proposes FC-MNNC, a method based on feature subset clustering for combining multiple nearest neighbor classifiers (NNCs) to obtain better performance than a single NNC. In FC-MNNC, component NNCs built on the partitioned feature subsets classify a pattern in parallel and independently, and the final decision is aggregated by the majority voting rule. Two methods are used to partition the feature set: method I uses a genetic algorithm (GA) to cluster features into subsets according to the accuracy of the combined classification, while method II is a transitive closure clustering method based on the pair-wise correlation between features. To demonstrate the performance of FC-MNNC, four UCI databases are selected for the experiments. The experimental results show that: (i) in FC-MNNC, method II does not outperform method I; (ii) FC-MNNC based on method I is more accurate than the standard NNC and than feature selection using GA in an individual classifier; (iii) FC-MNNC based on method I performs no worse than feature subset selection using GA in multiple NNCs; and (iv) FC-MNNC is robust against irrelevant features.
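
To make the combination scheme concrete, below is a minimal sketch (not the authors' code) of how component NNCs, each restricted to its own feature subset, can be fused by majority voting. The feature subsets are assumed to be given, e.g. produced by the GA-based or transitive-closure clustering steps, which are not reproduced here; names such as `fc_mnnc_predict` are illustrative.

```python
# Minimal sketch of combining multiple nearest neighbor classifiers (NNCs)
# over pre-computed feature subsets, with majority voting for the final label.
from collections import Counter

import numpy as np


def nn_predict(train_X, train_y, x):
    """Classify a single pattern with the standard 1-nearest-neighbor rule."""
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(dists)]


def fc_mnnc_predict(train_X, train_y, x, feature_subsets):
    """Each component NNC classifies the pattern on its own feature subset;
    the final decision is taken by majority vote."""
    votes = [
        nn_predict(train_X[:, subset], train_y, x[subset])
        for subset in feature_subsets
    ]
    return Counter(votes).most_common(1)[0][0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 4))
    y = (X[:, 0] + X[:, 2] > 0).astype(int)          # labels depend on features 0 and 2
    subsets = [np.array([0, 1]), np.array([2, 3])]   # hypothetical partition of the feature set
    print(fc_mnnc_predict(X, y, rng.normal(size=4), subsets))
```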


Keywords: Feature Selection · Classification Accuracy · Feature Subset · Individual Classifier · Neighbor Classifier





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Li-Juan Wang 1,2
  • Qiang Hua 2
  • Xiao-Long Wang 1
  • Qing-Cai Chen 1
  1. Department of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
  2. Machine Learning Center, Faculty of Mathematics and Computer Science, Hebei University, Baoding, China
