Forward and Backward Selection in Regression Hybrid Network

  • Conference paper
Multiple Classifier Systems (MCS 2002)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2364)

Abstract

We introduce a Forward Backward and Model Selection algorithm (FBMS) for constructing a hybrid regression network of radial and perceptron hidden units. The algorithm determines whether a radial or a perceptron unit is required in a given region of the input space and, given an error target, it also determines the number of hidden units. The algorithm then applies model selection criteria and prunes unnecessary weights. This results in a final architecture that is often much smaller than an RBF network or an MLP. Results for various data sizes on the Pumadyn data set indicate that the resulting architecture competes with, and often outperforms, the best known results for this data set.
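
To make the procedure described in the abstract concrete, here is a minimal sketch in Python of the forward-growing / backward-pruning idea. It is not the authors' implementation: the candidate-unit proposal scheme, the Gaussian and sigmoidal unit forms, and the use of BIC as the model selection criterion are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code) of a hybrid regression
# network grown by forward selection and pruned by a model selection
# criterion. BIC and the random candidate proposals are illustrative
# stand-ins for the choices left unspecified in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def radial_unit(X, center, width):
    # Gaussian RBF response: exp(-||x - c||^2 / (2 w^2))
    return np.exp(-np.sum((X - center) ** 2, axis=1) / (2.0 * width ** 2))

def perceptron_unit(X, w, b):
    # Sigmoidal ridge response: sigma(x . w + b)
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def design_matrix(X, units):
    # One column per hidden unit, plus a bias column.
    cols = [np.ones(len(X))]
    for kind, params in units:
        cols.append(radial_unit(X, *params) if kind == "rbf"
                    else perceptron_unit(X, *params))
    return np.column_stack(cols)

def fit_output(H, y):
    # Linear output layer fit by least squares; returns weights and MSE.
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w, float(np.mean((H @ w - y) ** 2))

def propose_units(X, n_candidates=20):
    # Random candidates of both kinds; a crude stand-in for the paper's
    # region-driven choice between radial and perceptron units.
    d = X.shape[1]
    for _ in range(n_candidates):
        yield ("rbf", (X[rng.integers(len(X))], 1.0))
        yield ("mlp", (rng.normal(size=d), float(rng.normal())))

def bic(X, y, units):
    # Bayesian information criterion for the current architecture.
    _, mse = fit_output(design_matrix(X, units), y)
    k = len(units) + 1  # output weights, including bias
    return len(X) * np.log(mse + 1e-12) + k * np.log(len(X))

def fbms_sketch(X, y, error_target=0.05, max_units=20):
    units = []
    # Forward pass: greedily add whichever candidate unit (radial or
    # perceptron) most reduces training error, until the target is met.
    while len(units) < max_units:
        _, mse = fit_output(design_matrix(X, units), y)
        if mse <= error_target:
            break
        units.append(min(
            propose_units(X),
            key=lambda u: fit_output(design_matrix(X, units + [u]), y)[1]))
    # Backward pass: drop units as long as doing so improves BIC.
    improved = True
    while improved and units:
        improved = False
        for i in range(len(units)):
            trial = units[:i] + units[i + 1:]
            if bic(X, y, trial) < bic(X, y, units):
                units, improved = trial, True
                break
    w, mse = fit_output(design_matrix(X, units), y)
    return units, w, mse
```

For example, `units, w, mse = fbms_sketch(X, y)` on a toy regression problem returns the selected mix of radial and perceptron units, the fitted output weights, and the final training error; the unit count and unit types are determined by the data rather than fixed in advance, which is the point of the method. The actual FBMS procedure may differ in how candidates are proposed and in which model selection criterion it uses.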

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cohen, S., Intrator, N. (2002). Forward and Backward Selection in Regression Hybrid Network. In: Roli, F., Kittler, J. (eds) Multiple Classifier Systems. MCS 2002. Lecture Notes in Computer Science, vol 2364. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45428-4_10

  • DOI: https://doi.org/10.1007/3-540-45428-4_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43818-2

  • Online ISBN: 978-3-540-45428-1

  • eBook Packages: Springer Book Archive
