Abstract
This paper addresses training and inference in Hidden Markov Models (HMMs) when the training data consist of feature vectors together with uncertain and imprecise labels. These “soft” labels represent partial knowledge of the possible state at each time step, the “softness” being encoded by belief functions. The resulting model, called a Partially-Hidden Markov Model (PHMM), is trained with the Evidential Expectation-Maximisation (E2M) algorithm. The usual HMM is recovered when the belief functions are vacuous, and the model includes supervised, unsupervised and semi-supervised learning as special cases.
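The key mechanism can be illustrated with a small sketch. In the E2M setting, the E-step of the forward–backward recursion weights each state's emission likelihood by the plausibility of that state under the soft label: a vacuous belief function (all plausibilities equal to one) reduces to the standard HMM forward pass, while a one-hot plausibility vector pins the state as in supervised learning. The function below is a minimal, illustrative implementation of this weighted forward pass, not the authors' code; all names and the discrete-emission setup are assumptions.

```python
import numpy as np

def forward_with_plausibilities(pi, A, B, obs, pl):
    """Forward pass of a discrete-emission HMM in which each time step's
    emission likelihood is multiplied by pl[t, k], the plausibility of
    state k given the soft label at time t (illustrative sketch).

    pi : (K,)   initial state distribution
    A  : (K, K) transition matrix, A[i, j] = P(state j | state i)
    B  : (K, M) emission matrix, B[k, m] = P(symbol m | state k)
    obs: (T,)   observed symbol indices
    pl : (T, K) plausibilities; np.ones((T, K)) = vacuous = standard HMM
    """
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    # Initialisation: prior * emission * label plausibility.
    alpha[0] = pi * B[:, obs[0]] * pl[0]
    for t in range(1, T):
        # Propagate through transitions, then weight by emission and plausibility.
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]] * pl[t]
    # alpha[-1].sum() is the (plausibility-weighted) sequence likelihood.
    return alpha, alpha[-1].sum()
```

With vacuous plausibilities the returned likelihood matches the ordinary HMM forward algorithm; making a label certain at one time step restricts the sum to paths through that state, which is how the model interpolates between unsupervised and supervised learning.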
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Ramasso, E., Denœux, T., Zerhouni, N. (2012). Partially-Hidden Markov Models. In: Denoeux, T., Masson, MH. (eds) Belief Functions: Theory and Applications. Advances in Intelligent and Soft Computing, vol 164. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29461-7_42
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-29460-0
Online ISBN: 978-3-642-29461-7