Run-Time Performance Analysis of the Mixture of Experts Model
The Mixture of Experts (ME) model is one of the most popular ensemble methods used in pattern recognition and machine learning. Despite many studies on the theory and applications of the ME model, to our knowledge its training, testing, and evaluation costs have not yet been investigated. After analyzing the ME model in terms of the number of required floating-point operations, this paper presents an experimental comparison between the ME model and the recently proposed Mixture of Random Prototype Experts. Experiments have been performed on selected datasets from the UCI machine learning repository. The experimental results confirm the expected behavior of the two ME models, while highlighting that the latter performs better in terms of both accuracy and run-time performance.
Keywords: Hidden Layer, Expert Model, Online Evaluation, Gating Function, Expert Network
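The analysis counts the floating-point operations needed to evaluate an ME model, whose output is a gating-weighted combination of the individual expert outputs. As a rough illustration of where those operations come from, the sketch below implements a standard softmax-gated mixture of linear experts in NumPy; the function name, the choice of linear experts, and the per-line FLOP estimates are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def me_forward(x, W_gate, expert_weights):
    """Forward pass of a softmax-gated mixture of experts (illustrative sketch).

    x              : (d,) input vector
    W_gate         : (m, d) gating-network weights, one row per expert
    expert_weights : list of m (k, d) expert weight matrices
    Returns the gated combination of the m expert outputs, shape (k,).
    """
    # Gating network: softmax over m linear scores -> mixing coefficients g_i(x)
    scores = W_gate @ x                                  # ~2*m*d FLOPs
    g = np.exp(scores - scores.max())
    g /= g.sum()

    # Each expert i produces its own output y_i = W_i x
    outputs = np.stack([W @ x for W in expert_weights])  # ~2*m*k*d FLOPs

    # Final output is the convex combination sum_i g_i(x) * y_i
    return g @ outputs                                   # ~2*m*k FLOPs

if __name__ == "__main__":
    # Tiny usage example with random weights (dimensions are arbitrary)
    rng = np.random.default_rng(0)
    d, k, m = 8, 3, 4
    y = me_forward(rng.standard_normal(d),
                   rng.standard_normal((m, d)),
                   [rng.standard_normal((k, d)) for _ in range(m)])
    print(y.shape)  # (3,)
```

Counting one multiply and one add per weight, a single evaluation of this sketch costs on the order of 2md + 2mkd + 2mk floating-point operations for m experts, input dimension d, and output dimension k; this per-sample cost is the kind of quantity the run-time analysis in the paper examines.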
- 1. Jacobs, R., Jordan, M., Barto, A.: Task decomposition through competition in a modular connectionist architecture: the what and where vision tasks. Technical report, University of Massachusetts, Amherst, MA (1991)
- 4. Murphy, P.M., Aha, D.W.: UCI Repository of Machine Learning Databases. Dept. of Information and Computer Science, Univ. of California, Irvine (1994)
- 8. Hennessy, J., Patterson, D.: Computer Architecture: A Quantitative Approach. Morgan Kaufmann, San Mateo (1990)