Statistics and Computing, Volume 9, Issue 4, pp 269–278

MML Markov classification of sequential data

  • T. Edgoose
  • L. Allison


General-purpose unsupervised classification programs have typically assumed independence between the observations in the data they analyse. In this paper we report an extension to the MML classifier Snob that enables the program to exploit some of the extra information implicit in ordered datasets (such as time series). Specifically, the data are modelled as if generated by a first-order Markov process with as many states as there are classes of observation. The state of this process at any point in the sequence determines the class from which the corresponding observation is generated; such a model is commonly referred to as a Hidden Markov Model. We present the MML calculation for the expected length of a near-optimal two-part message stating a specific model of this type and the dataset given that model. Such an estimate lets us fairly compare models that differ in the number of classes they specify, which in turn can guide a robust unsupervised search of the model space. The new program, tSnob, is tested against both ‘synthetic’ data and a large ‘real-world’ dataset, and is found to make unbiased estimates of model parameters and to conduct an effective search of the extended model space.

Keywords: Classification · Hidden Markov Modelling · MML · spatial data
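The model-selection idea in the abstract, comparing Hidden Markov Models with different numbers of classes by a two-part message length (model cost plus data-given-model cost), can be sketched as follows. This is a minimal illustration, not the paper's actual Wallace–Freeman MML calculation: the fixed per-parameter cost `bits_per_param` is a crude placeholder assumption, and all model parameters here are supplied by hand rather than estimated.

```python
import math

def hmm_neg_log_likelihood(obs, start, trans, emit):
    """-log P(obs | model) in nats, via the forward algorithm with scaling.

    obs:   sequence of symbol indices
    start: start[k]    = P(state k at t=0)
    trans: trans[j][k] = P(state k at t+1 | state j at t)
    emit:  emit[k][v]  = P(symbol v | state k)
    """
    K = len(start)
    alpha = [start[k] * emit[k][obs[0]] for k in range(K)]
    scale = sum(alpha)
    nll = -math.log(scale)
    alpha = [a / scale for a in alpha]
    for x in obs[1:]:
        alpha = [sum(alpha[j] * trans[j][k] for j in range(K)) * emit[k][x]
                 for k in range(K)]
        scale = sum(alpha)
        nll -= math.log(scale)          # accumulate -log of each scaling factor
        alpha = [a / scale for a in alpha]
    return nll

def two_part_length(obs, start, trans, emit, bits_per_param=6.0):
    """Crude two-part message length in nats: model cost + data cost.

    The per-parameter cost is a fixed placeholder, NOT the adaptive
    precision term used in the MML (Wallace-Freeman) framework.
    """
    K, V = len(start), len(emit[0])
    n_free = (K - 1) + K * (K - 1) + K * (V - 1)   # free probability parameters
    model_cost = n_free * bits_per_param * math.log(2)
    return model_cost + hmm_neg_log_likelihood(obs, start, trans, emit)
```

On a binary sequence with strong serial correlation, a two-state model pays its extra parameter cost and still wins on total length, which is the sense in which the message length "fairly compares" models of different sizes.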





Copyright information

© Kluwer Academic Publishers 1999

