Convergence Problem in GMM Related Robot Learning from Demonstration
Convergence problems can occur in some practical situations when using Gaussian Mixture Model (GMM) based robot Learning from Demonstration (LfD). In theory, Expectation Maximization (EM) is a sound technique for estimating the parameters of a GMM, but it can run into difficulties when used in practice. The contribution of this paper is a more complete analysis of the theoretical problems which arise in a particular experiment. The research question answered in this paper is how a partial solution can be found for such practical problems. Simulation results and practical results from laboratory experiments verify the theoretical analysis. The two issues covered are the effect of repeated sampling and the influence of outliers (abnormal data) on the policy/kernel generation in GMM LfD. Moreover, an analysis of the impact of repeated samples on the CHMM, together with experimental results, is also presented.
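The repeated-sample degeneracy can be illustrated with a minimal sketch (not the paper's implementation): in plain EM for a 1-D GMM, a component that locks onto a block of identical samples drives its variance toward zero, so the likelihood diverges and the updates break down numerically. The data set, initial parameters, and collapse threshold below are illustrative assumptions.

```python
import numpy as np

def em_gmm_1d(x, mu, var, pi, iters=20, collapse_tol=1e-6):
    """Plain EM for a 1-D Gaussian mixture with no variance floor.

    Returns (mu, var, pi, iterations_run, collapsed); stops early if any
    component's variance falls below collapse_tol, i.e. the degenerate
    solution caused by repeated samples.
    """
    x = np.asarray(x, dtype=float)
    for it in range(iters):
        # E-step: responsibilities of each component for each sample
        dens = pi / np.sqrt(2.0 * np.pi * var) \
            * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
        if var.min() < collapse_tol:  # likelihood is diverging
            return mu, var, pi, it + 1, True
    return mu, var, pi, iters, False

rng = np.random.default_rng(0)
# 100 ordinary samples plus 20 identical (repeated) samples at 5.0
x = np.concatenate([rng.normal(0.0, 1.0, 100), np.full(20, 5.0)])
mu, var, pi, n_iter, collapsed = em_gmm_1d(
    x,
    mu=np.array([0.0, 5.0]),
    var=np.array([1.0, 1.0]),
    pi=np.array([0.5, 0.5]),
)
print(collapsed, n_iter)  # the component on the repeated samples degenerates
```

Guarding against the collapse (e.g. a variance floor or regularized covariance, as in scikit-learn's `reg_covar`) masks the symptom but does not remove the underlying problem the paper analyses.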
- 1. Ge, F., Moore, W., Antolovich, M., Gao, J.: Application of learning from demonstration to a mining tunnel inspection robot. In: 2011 First International Conference on Robot, Vision and Signal Processing (RVSP), pp. 32–35. IEEE (2011)
- 2. Archambeau, C., Lee, J.A., Verleysen, M.: On convergence problems of the EM algorithm for finite Gaussian mixtures. In: Proc. 11th European Symposium on Artificial Neural Networks, pp. 99–106 (2003)
- 3. Bilmes, J.: A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. Technical report (1998)
- 6. Movellan, J.R.: Tutorial on Hidden Markov Models (2003), http://mplab.ucsd.edu/tutorials/hmm.pdf