Partially-Hidden Markov Models

  • Emmanuel Ramasso
  • Thierry Denœux
  • Noureddine Zerhouni
Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 164)


This paper addresses the problem of training and inference in Hidden Markov Models (HMMs) when the training data consist of feature vectors together with uncertain and imprecise labels. These "soft" labels represent partial knowledge about the possible states at each time step, and the "softness" is encoded by belief functions. The resulting model, called a Partially-Hidden Markov Model (PHMM), is trained with the Evidential Expectation-Maximisation (E2M) algorithm. When the belief functions are vacuous, the usual HMM is recovered; the model thus includes supervised, unsupervised and semi-supervised learning as special cases.
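As a concrete illustration of the idea, the sketch below shows a forward filtering pass in which each time step carries a plausibility vector over states derived from the soft label, and that vector multiplies the emission terms. This is a minimal reading of the abstract, not the paper's actual algorithm: the function name `soft_forward` and the specific way plausibilities enter the recursion are assumptions for illustration. It does exhibit the special cases the abstract mentions: all-ones (vacuous) plausibilities reduce to the standard HMM forward algorithm, and one-hot (crisp) plausibilities give fully supervised filtering.

```python
import numpy as np

def soft_forward(A, B_obs, pi, pl):
    """Forward pass of an HMM with soft state labels.

    A:     (K, K) transition matrix, rows sum to 1.
    B_obs: (T, K) emission likelihoods b_k(x_t) for each time step.
    pi:    (K,)   initial state distribution.
    pl:    (T, K) plausibility of each state at each time step, in [0, 1].
           pl == 1 everywhere (vacuous) recovers the usual HMM;
           one-hot rows (crisp labels) give supervised filtering.

    Returns the log-likelihood and the normalized forward messages.
    """
    T, K = B_obs.shape
    alpha = np.zeros((T, K))
    # Initial step: weight emissions by the label plausibilities.
    a = pi * B_obs[0] * pl[0]
    c = a.sum()
    log_lik = np.log(c)
    alpha[0] = a / c
    for t in range(1, T):
        # Propagate through the transition matrix, then re-weight.
        a = (alpha[t - 1] @ A) * B_obs[t] * pl[t]
        c = a.sum()
        log_lik += np.log(c)
        alpha[t] = a / c
    return log_lik, alpha
```

With vacuous plausibilities the returned log-likelihood equals that of the ordinary forward algorithm, while setting a row of `pl` to a one-hot vector forbids all other states at that time step.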



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Emmanuel Ramasso¹
  • Thierry Denœux²
  • Noureddine Zerhouni¹
  1. Automatic Control and Micro-Mechatronic Systems Department, FEMTO-ST Institute, UMR CNRS 6174 - UFC / ENSMM / UTBM, Besançon, France
  2. Heudiasyc, UMR CNRS 7253, Centre de Recherches de Royallieu, Université de Technologie de Compiègne, Compiègne Cedex, France