
An evolutionary approach for achieving scalability with general regression neural networks

Published in: Natural Computing

Abstract

In this paper, we present an approach to overcome the scalability issues associated with instance-based learners. Our system uses evolutionary computational techniques to determine the minimal set of training instances needed to achieve good classification accuracy with an instance-based learner. In this way, instance-based learners need not store all of the available training data, but only those instances required for the desired accuracy. Additionally, we explore the utility of evolving the optimal feature set used by the learner for a given problem, thereby addressing the so-called "curse of dimensionality" associated with computational learning systems. To these ends, we introduce the Evolutionary General Regression Neural Network. This design uses an estimation of distribution algorithm to generate both the optimal training set and the optimal feature set for a general regression neural network. We compare its performance against a standard general regression neural network and an optimized support vector machine across four benchmark classification problems.
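The two ingredients the abstract describes can be sketched compactly: a general regression neural network is a kernel-weighted average of training targets (Specht's formulation), and an estimation of distribution algorithm can evolve a binary mask selecting which training instances the GRNN keeps. The sketch below is illustrative only — the function names, the PBIL-style probability-vector update, and the fitness penalty are assumptions, not the authors' exact algorithm, and feature-mask evolution (which the paper also performs) is omitted for brevity.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN estimate: Gaussian-kernel-weighted average of training targets."""
    # Squared distances between every query point and every training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def pbil_select(X, y, X_val, y_val, gens=30, pop=20, lr=0.2, rng=None):
    """PBIL-style EDA evolving a binary mask over training instances
    (a hypothetical simplification of the paper's EDA)."""
    rng = np.random.default_rng(rng)
    n = len(X)
    p = np.full(n, 0.5)                      # marginal inclusion probabilities
    for _ in range(gens):
        masks = rng.random((pop, n)) < p     # sample candidate subsets
        masks[:, 0] = True                   # guarantee a non-empty subset
        def fitness(m):
            pred = grnn_predict(X[m], y[m], X_val)
            # Reward validation accuracy, lightly penalize subset size.
            return -np.mean((pred - y_val) ** 2) - 0.01 * m.mean()
        scores = np.array([fitness(m) for m in masks])
        best = masks[scores.argmax()]
        p = (1 - lr) * p + lr * best         # shift distribution toward best
    return p > 0.5                           # final instance mask
```

With a small kernel width, the GRNN interpolates its stored instances almost exactly, which is why pruning redundant instances (rather than all of them) can preserve accuracy while shrinking storage.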



Author information


Correspondence to Kenan Casey.


Cite this article

Casey, K., Garrett, A., Gay, J. et al. An evolutionary approach for achieving scalability with general regression neural networks. Nat Comput 8, 133–148 (2009). https://doi.org/10.1007/s11047-007-9052-x
