Maximum Likelihood Topology Preserving Ensembles
Statistical re-sampling techniques have been used extensively and successfully in machine learning to generate classifier and predictor ensembles. It has frequently been shown that combining so-called unstable predictors both stabilizes and improves the performance of the resulting prediction system. In this paper we use re-sampling techniques in the context of a topology preserving map that can be used for scale invariant classification; the map models the residual after feedback with a family of distributions and finds filters that make the residuals most likely under this model. The method is applied to artificial data sets and compared with a similar version based on the Self Organising Map (SOM).
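The combination described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it trains a negative-feedback network whose residual `e = x - Wᵀy` is assumed drawn from an exponential-family density proportional to `exp(-|e|^p)`, giving the maximum likelihood Hebbian update `ΔW ∝ y · sign(e)|e|^{p-1}`, and it combines several such maps trained on bootstrap resamples. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def ml_hebbian_train(X, n_out, p=1.0, eta=0.01, epochs=50, rng=None):
    """Train a negative-feedback network with a maximum likelihood
    Hebbian rule: the residual after feedback, e = x - W^T y, is
    assumed drawn from a density ~ exp(-|e|^p), so the weight update
    is dW = eta * y * sign(e)|e|^(p-1).  p=2 recovers the standard
    (Gaussian-residual) Hebbian rule; p=1 gives the Laplacian rule."""
    rng = np.random.default_rng(rng)
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_out, n_in))
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            y = W @ x                               # feedforward activation
            e = x - W.T @ y                         # residual after feedback
            g = np.sign(e) * np.abs(e) ** (p - 1)   # ML gradient term
            W += eta * np.outer(y, g)
    return W

def bagged_maps(X, n_maps=5, seed=0, **train_kwargs):
    """Bootstrap-resample the data and train one map per resample,
    giving an ensemble of filters in the spirit of bagging."""
    rng = np.random.default_rng(seed)
    maps = []
    for _ in range(n_maps):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        maps.append(ml_hebbian_train(X[idx], **train_kwargs))
    return maps
```

A typical use would be `bagged_maps(X, n_maps=5, n_out=2, p=1.0)`, after which the individual maps' responses can be combined, e.g. by averaging, to classify new points.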