Abstract
An ensemble of GM-RVFL networks is applied to the stochastic time series generated from the logistic-kappa map, and the dependence of the generalisation performance on the regularisation method and the weighting scheme is studied. For a single-model predictor, application of the Bayesian evidence scheme is found to lead to superior results. However, when using network committees, under-regularisation can be advantageous, since it leads to a larger model diversity, as a result of which a more substantial decrease of the generalisation ‘error’ can be achieved.
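The committee idea described above can be sketched numerically. The snippet below is an illustrative stand-in only: it uses a noisy logistic map (the exact stochastic logistic-kappa map is not reproduced here), polynomial fits of different degree in place of GM-RVFL networks trained with different regularisation schemes, and a generic softmax-of-negative-validation-error weighting rather than the book's equations (13.31) and (13.33). It shows the one property the abstract relies on: a convex combination of member predictions never has a higher mean-squared error than the weighted average of the members' individual errors, so diverse members can push the committee error below that average.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_logistic_series(n, noise=0.02):
    """Noisy logistic map, a stand-in for the stochastic series in the text."""
    x = np.empty(n)
    x[0] = 0.3
    for t in range(1, n):
        x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1]) + noise * rng.standard_normal()
        x[t] = min(max(x[t], 0.0), 1.0)  # keep the series on [0, 1]
    return x

# One-step-ahead prediction task: predict x[t+1] from x[t].
series = noisy_logistic_series(400)
x, y = series[:-1], series[1:]
x_tr, y_tr = x[:30], y[:30]      # deliberately small training set (cf. footnote 1)
x_val, y_val = x[30:], y[30:]    # large validation set used to fix the weights

# Committee members: polynomial fits of different degree, a crude proxy for
# networks subject to different amounts of regularisation.
degrees = [1, 2, 3, 5]
models = [np.polyfit(x_tr, y_tr, d) for d in degrees]
preds_val = np.column_stack([np.polyval(c, x_val) for c in models])

# Per-member validation MSE.
mse = ((preds_val - y_val[:, None]) ** 2).mean(axis=0)

# Two weighting schemes: uniform, and a performance-based softmax weighting
# (an illustrative choice, not the book's scheme (13.33)).
w_uniform = np.full(len(models), 1.0 / len(models))
w_perf = np.exp(-mse / mse.min())
w_perf /= w_perf.sum()

def committee_mse(weights):
    """MSE of the weighted-average committee prediction on the validation set."""
    return (((preds_val @ weights) - y_val) ** 2).mean()
```

By the ambiguity decomposition, `committee_mse(w)` is bounded above by the weighted average of the member MSEs for any convex weight vector `w`; how far below that bound it falls depends on the diversity of the members, which is the mechanism the abstract appeals to when arguing that under-regularised committees can outperform individually well-regularised models.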
Footnotes
This partitioning of the available data into a small training set and a large cross-validation set is not realistic for practical applications. The small training set was chosen deliberately to expose the effects of overfitting, and the large cross-validation set was used to obtain a reliable estimate of the weighting scheme (13.33), against which the alternative weighting scheme (13.31) and a uniform weighting scheme are compared.
The values that had given good results in the simulations of Chapter 16 were simply reused.
Copyright information
© 1999 Springer-Verlag London Limited
Cite this chapter
Husmeier, D. (1999). Demonstration: Committees of Networks Trained with Different Regularisation Schemes. In: Neural Networks for Conditional Probability Estimation. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0847-4_14
Publisher Name: Springer, London
Print ISBN: 978-1-85233-095-8
Online ISBN: 978-1-4471-0847-4