Partially-Hidden Markov Models
This paper addresses the problem of Hidden Markov Model (HMM) training and inference when the training data consist of feature vectors together with uncertain and imprecise labels. These “soft” labels represent partial knowledge about the possible states at each time step, and the “softness” is encoded by belief functions. The resulting model, called a Partially-Hidden Markov Model (PHMM), is trained with the Evidential Expectation-Maximisation (E2M) algorithm. The usual HMM is recovered when the belief functions are vacuous, and the model includes supervised, unsupervised and semi-supervised learning as special cases.
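To make the mechanism concrete, the sketch below shows one plausible way soft labels can enter the E-step of HMM training: the partial knowledge about the state at each time step is summarised as per-state plausibilities that weight the emission terms in the forward-backward recursions. This is an illustrative assumption rather than the paper's exact formulation; the function name `phmm_e_step` and the interface are hypothetical.

```python
import numpy as np

def phmm_e_step(pi, A, B, pl):
    """One E-step sketch for a PHMM via plausibility-weighted forward-backward.

    pi : (K,)   initial state probabilities
    A  : (K, K) transition matrix, A[i, j] = P(z_t = j | z_{t-1} = i)
    B  : (T, K) emission likelihoods, B[t, k] = p(x_t | z_t = k)
    pl : (T, K) plausibilities of each state given the soft labels
                (an assumed encoding; pl = 1 everywhere is the vacuous case)

    Returns the smoothed state posteriors gamma (T, K) and the
    log-likelihood of the observed data together with the soft labels.
    """
    T, K = B.shape
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    c = np.zeros(T)  # per-step scaling factors, to avoid underflow

    # Forward pass: emission terms are weighted by the label plausibilities.
    alpha[0] = pi * B[0] * pl[0]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t] * pl[t]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    # Backward pass with the same plausibility weighting.
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[t + 1] * pl[t + 1] * beta[t + 1]) / c[t + 1]

    # Smoothed posteriors over states at each time step.
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma, np.log(c).sum()
```

Under this reading, setting `pl` to all ones (vacuous belief functions) reduces the E-step to the standard scaled forward-backward pass of an ordinary HMM; one-hot rows of `pl` correspond to fully supervised time steps, and mixing the two gives the semi-supervised case, matching the special cases listed in the abstract.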