Forward and Backward Selection in Regression Hybrid Network
We introduce a Forward Backward and Model Selection (FBMS) algorithm for constructing a hybrid regression network of radial and perceptron hidden units. The algorithm determines whether a radial or a perceptron unit is required in a given region of input space and, given an error target, also determines the number of hidden units. It then applies model selection criteria and prunes unnecessary weights. The resulting architecture is often much smaller than an RBF network or an MLP. Results for various data sizes on the Pumadyn data indicate that the resulting architecture competes with, and often outperforms, the best known results for this data set.
Keywords: Hybrid Network Architecture · SMLP · Clustering · Regularization · Nested Models · Model Selection
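The forward/backward scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual FBMS implementation: the candidate-sampling strategy, the unit parameterizations, the `fbms_sketch` function itself, and the BIC-style pruning criterion are all illustrative assumptions standing in for the paper's model selection criteria.

```python
import numpy as np

def rbf_unit(X, c, s):
    # Radial unit: Gaussian bump centred at c with width s.
    return np.exp(-np.sum((X - c) ** 2, axis=1) / (2 * s ** 2))

def ridge_unit(X, w, b):
    # Perceptron (ridge/projection) unit: tanh of a linear projection.
    return np.tanh(X @ w + b)

def design(X, units):
    # Bias column plus one column per hidden unit.
    cols = [np.ones(len(X))]
    for kind, p in units:
        cols.append(rbf_unit(X, *p) if kind == "rbf" else ridge_unit(X, *p))
    return np.column_stack(cols)

def mse(X, y, units):
    # Training error with least-squares output weights.
    H = design(X, units)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.mean((y - H @ beta) ** 2)

def fbms_sketch(X, y, err_target=1e-3, max_units=10, n_cand=20, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    units = []
    # Forward pass: greedily add whichever candidate unit (radial or
    # perceptron) most reduces training error, until the error target
    # is met or the unit budget is exhausted.
    while len(units) < max_units and mse(X, y, units) > err_target:
        best, best_err = None, np.inf
        for _ in range(n_cand):
            c = X[rng.integers(len(X))]  # centre drawn from the data
            for cand in (("rbf", (c, rng.uniform(0.3, 2.0))),
                         ("ridge", (rng.normal(size=d), rng.normal()))):
                e = mse(X, y, units + [cand])
                if e < best_err:
                    best, best_err = cand, e
        units.append(best)
    # Backward pass: prune any unit whose removal does not worsen a
    # BIC-style criterion (error penalised by model size).
    def bic(u):
        n = len(X)
        return n * np.log(mse(X, y, u) + 1e-12) + (len(u) + 1) * np.log(n)
    improved = True
    while improved and units:
        improved = False
        for i in range(len(units)):
            pruned = units[:i] + units[i + 1:]
            if bic(pruned) <= bic(units):
                units, improved = pruned, True
                break
    return units
```

The key idea the sketch captures is that the forward step lets radial and projection units compete per region of input space, while the backward step trades training error against model size, so the final network can end up smaller than a pure RBF network or MLP fit to the same target.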