
A Cascaded Method to Reduce the Computational Burden of Nearest Neighbor Classifier

  • Conference paper
Proceedings of the First International Conference on Computational Intelligence and Informatics

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 507)

Abstract

Nearest neighbor classifiers demand high computational resources, i.e., time and memory. Researchers in pattern recognition follow two distinct approaches to reduce this computational burden: reducing the reference set (training set), referred to as prototype selection, and reducing the dimensionality, referred to as feature reduction (also called feature extraction or feature selection). In this paper, we cascade the two methods to achieve reduction in both directions. Experiments on benchmark datasets show satisfactory results.
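The chapter itself is paywalled, so the authors' exact cascade is not reproduced here. As a rough sketch of the idea the abstract describes (not the authors' method), the Python example below cascades a sequential-forward-selection stage for feature reduction with a Hart-style condensed nearest neighbor stage for prototype selection, around a plain 1-NN classifier. The stage order, the iris dataset, and every parameter are illustrative assumptions.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def condense(X, y):
    # Hart-style condensing: grow a subset until it classifies every
    # remaining training point correctly with 1-NN, then keep the subset.
    keep = [0]                                     # seed with the first sample
    changed = True
    while changed:                                 # repeat until a clean pass
        changed = False
        for i in range(1, len(X)):
            if i in keep:
                continue
            nn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
            if nn.predict(X[i:i + 1])[0] != y[i]:  # misclassified -> absorb it
                keep.append(i)
                changed = True
    return np.asarray(keep)

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1 (feature reduction): sequential forward selection wrapped
# around a 1-NN classifier, keeping 2 of the 4 features.
sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=1),
                                n_features_to_select=2)
Xs_tr = sfs.fit_transform(X_tr, y_tr)

# Stage 2 (prototype selection): condense the training set in the
# reduced feature space.
idx = condense(Xs_tr, y_tr)

clf = KNeighborsClassifier(n_neighbors=1).fit(Xs_tr[idx], y_tr[idx])
print("prototypes kept:", len(idx), "of", len(X_tr))
print("test accuracy:", clf.score(sfs.transform(X_te), y_te))

Running the feature-reduction stage first lets the condensing loop work in the cheaper low-dimensional space; the opposite order is equally consistent with the abstract, which leaves the ordering unspecified.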



Author information

Corresponding author

Correspondence to R. Raja Kumar.


Copyright information

© 2017 Springer Science+Business Media Singapore

About this paper

Cite this paper

Raja Kumar, R., Viswanath, P., Shobha Bindu, C. (2017). A Cascaded Method to Reduce the Computational Burden of Nearest Neighbor Classifier. In: Satapathy, S., Prasad, V., Rani, B., Udgata, S., Raju, K. (eds) Proceedings of the First International Conference on Computational Intelligence and Informatics. Advances in Intelligent Systems and Computing, vol 507. Springer, Singapore. https://doi.org/10.1007/978-981-10-2471-9_27

  • DOI: https://doi.org/10.1007/978-981-10-2471-9_27

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-2470-2

  • Online ISBN: 978-981-10-2471-9

  • eBook Packages: Engineering, Engineering (R0)
