Cost-Sensitive Classification with k-Nearest Neighbors

  • Zhenxing Qin
  • Alan Tao Wang
  • Chengqi Zhang
  • Shichao Zhang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8041)

Abstract

Cost-sensitive learning algorithms are typically motivated by imbalanced data, such as clinical-diagnosis data with skewed class distributions. While other popular classification methods have been adapted to handle imbalanced data, extending k-Nearest Neighbors (kNN) classification, one of the top-10 data mining algorithms, to be cost-sensitive on imbalanced data has remained an open problem. To fill this gap, this paper studies two simple yet effective cost-sensitive kNN classification approaches, called Direct-CS-kNN and Distance-CS-kNN. In addition, we employ several strategies (smoothing, minimum-cost k value selection, feature selection and ensemble selection) to further improve the performance of Direct-CS-kNN and Distance-CS-kNN. We conduct several groups of experiments on UCI datasets and demonstrate that the proposed cost-sensitive kNN algorithms significantly reduce misclassification cost, often by a large margin, and consistently outperform CS-C4.5 both with and without the additional enhancements.
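
To make the decision rule concrete, here is a minimal Python sketch of how a cost-sensitive kNN prediction might work; it is our illustration, not the authors' implementation. The unweighted vote corresponds to Direct-CS-kNN and the inverse-distance weighting to Distance-CS-kNN; the exact weighting scheme, the function name cs_knn_predict, and the toy data are all illustrative assumptions. The decision rule itself, choosing the class i that minimizes the expected cost sum_j p(j|x)C(i,j), follows Elkan (reference 2 below), and the Laplace smoothing mirrors the smoothing strategy the abstract mentions.

    import numpy as np

    # Cost convention (Elkan, 2001): C[i, j] is the cost of predicting class i
    # when the true class is j. Predict argmin_i sum_j p(j | x) * C[i, j].
    def cs_knn_predict(X_train, y_train, x, C, k=5, distance_weighted=False):
        d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances to x
        nn = np.argsort(d)[:k]                    # indices of the k nearest
        n_classes = C.shape[0]
        # Direct-CS-kNN: one vote per neighbour; Distance-CS-kNN (assumed
        # weighting): closer neighbours vote with weight 1/distance.
        w = 1.0 / (d[nn] + 1e-9) if distance_weighted else np.ones(k)
        votes = np.zeros(n_classes)
        for i, wi in zip(nn, w):
            votes[y_train[i]] += wi
        # Laplace smoothing keeps the probability estimates off 0 and 1.
        p = (votes + 1.0) / (votes.sum() + n_classes)
        return int(np.argmin(C @ p))              # minimum-expected-cost class

    # Hypothetical toy usage: the positive class is rare, and missing it
    # (a false negative) costs ten times a false alarm.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 1.5).astype(int)     # skewed class distribution
    C = np.array([[0.0, 10.0],                    # predict 0, truth 1: cost 10
                  [1.0,  0.0]])                   # predict 1, truth 0: cost 1
    print(cs_knn_predict(X, y, np.array([0.8, 0.8]), C, k=7, distance_weighted=True))

With a cost matrix skewed like this, the minimum-expected-cost rule predicts the rare class even when it holds only a minority of the neighbour votes, which is exactly the behaviour plain majority-vote kNN lacks on imbalanced data.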

References

  1. Domingos, P.: MetaCost: A general method for making classifiers cost-sensitive. In: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 155–164 (1999)
  2. Elkan, C.: The foundations of cost-sensitive learning. In: Nebel, B. (ed.) Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, Seattle, August 4-10, pp. 973–978. Morgan Kaufmann (2001)
  3. Greiner, R., Grove, A.J., Roth, D.: Learning cost-sensitive active classifiers. Artificial Intelligence 139(2), 137–174 (2002)
  4. Kohavi, R., John, G.H.: Wrappers for feature subset selection. Artificial Intelligence 97(1-2), 273–324 (1997)
  5. Kotsiantis, S., Pintelas, P.: A cost sensitive technique for ordinal classification problems. In: Vouros, G.A., Panayiotopoulos, T. (eds.) SETN 2004. LNCS (LNAI), vol. 3025, pp. 220–229. Springer, Heidelberg (2004)
  6. Kotsiantis, S., Kanellopoulos, D., Pintelas, P.: Handling imbalanced datasets: A review. GESTS International Transactions on Computer Science and Engineering 30(1), 25–36 (2006)
  7. Li, J., Li, X., Yao, X.: Cost-sensitive classification with genetic programming. In: The 2005 IEEE Congress on Evolutionary Computation, vol. 3 (2005)
  8. Ling, C.X., Yang, Q., Wang, J., Zhang, S.: Decision trees with minimal costs. In: Brodley, C.E. (ed.) Proceedings of the Twenty-First International Conference on Machine Learning, Banff, Alberta, July 4-8, vol. 69, pp. 69–76. ACM Press (2004)
  9. Margineantu, D.D.: Methods for Cost-Sensitive Learning. PhD thesis, Oregon State University (2001)
  10. Niculescu-Mizil, A., Caruana, R.: Predicting good probabilities with supervised learning. In: Proceedings of the 22nd International Conference on Machine Learning. ACM, New York (2005)
  11. Oza, N.C.: Ensemble Data Mining Methods. NASA Ames Research Center (2000)
  12. Platt, J.C.: Probabilities for SV machines. In: Advances in Neural Information Processing Systems, pp. 61–74 (1999)
  13. Provost, F., Domingos, P.: Tree induction for probability-based ranking. Machine Learning 52, 199–215 (2003)
  14. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo (1993)
  15. Sun, Q., Pfahringer, B.: Bagging ensemble selection. In: Wang, D., Reynolds, M. (eds.) AI 2011. LNCS, vol. 7106, pp. 251–260. Springer, Heidelberg (2011)
  16. Turney, P.: Types of cost in inductive concept learning. In: Workshop on Cost-Sensitive Learning at the Seventeenth International Conference on Machine Learning, Stanford (2000)
  17. Wang, T., Qin, Z., Jin, Z., Zhang, S.: Handling over-fitting in test cost-sensitive decision tree learning by feature selection, smoothing and pruning. Journal of Systems and Software 83(7), 1137–1147 (2010)
  18. Wang, T., Qin, Z., Zhang, S.: Cost-sensitive learning - a survey. International Journal of Data Warehousing and Mining (2010, accepted)
  19. Wettschereck, D., Aha, D.W., Mohri, T.: A review and empirical evaluation of feature weighting methods for a class of lazy learning algorithms. Artificial Intelligence Review 11(1), 273–314 (1997)
  20. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann Publishers (2000)
  21. Wolpert, D.H.: Stacked generalization. Neural Networks 5, 241–259 (1992)
  22. Wu, X., Kumar, V., Quinlan, J.R., Ghosh, J., Yang, Q., Motoda, H., McLachlan, G.J., Ng, A., Liu, B., Yu, P.S., et al.: Top 10 algorithms in data mining. Knowledge and Information Systems 14(1), 1–37 (2008)
  23. Zadrozny, B., Elkan, C.: Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In: Proceedings of the Eighteenth International Conference on Machine Learning, pp. 609–616 (2001)
  24. Zadrozny, B., Elkan, C.: Learning and making decisions when costs and probabilities are both unknown. In: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 204–213. ACM Press, San Francisco (2001)
  25. Zadrozny, B., Elkan, C.: Transforming classifier scores into accurate multiclass probability estimates. In: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 694–699. ACM, New York (2002)
  26. Zadrozny, B.: One-benefit learning: cost-sensitive learning with restricted cost information. In: Proceedings of the First International Workshop on Utility-Based Data Mining, pp. 53–58. ACM Press, Chicago (2005)
  27. Zhang, J., Mani, I.: kNN approach to unbalanced data distributions: A case study involving information extraction. In: Proceedings of the ICML 2003 Workshop on Learning from Imbalanced Datasets (2003)
  28. Zhang, S.: KNN-CF approach: Incorporating certainty factor to kNN classification. IEEE Intelligent Informatics Bulletin 11(1) (2010)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Zhenxing Qin (1)
  • Alan Tao Wang (1)
  • Chengqi Zhang (1)
  • Shichao Zhang (1, 2)

  1. The Centre for QCIS, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia
  2. College of CS&IT, Guangxi Normal University, Guilin, China