Statistics and Computing

Volume 10, Issue 1, pp 73–83

MML clustering of multi-state, Poisson, von Mises circular and Gaussian distributions

  • Chris S. Wallace
  • David L. Dowe
Article

Abstract

Minimum Message Length (MML) is an invariant Bayesian point estimation technique which is also statistically consistent and efficient. We provide a brief overview of MML inductive inference (Wallace C.S. and Boulton D.M. 1968. Computer Journal, 11: 185–194; Wallace C.S. and Freeman P.R. 1987. J. Royal Statistical Society (Series B), 49: 240–252; Wallace C.S. and Dowe D.L. 1999. Computer Journal, 42(4): 270–283), and how it has both an information-theoretic and a Bayesian interpretation. We then outline how MML is used for statistical parameter estimation, and how the MML mixture modelling program, Snob (Wallace C.S. and Boulton D.M. 1968. Computer Journal, 11: 185–194; Wallace C.S. 1986. In: Proceedings of the Ninth Australian Computer Science Conference (ACSC-9), Vol. 8, Monash University, Australia, pp. 357–366; Wallace C.S. and Dowe D.L. 1994b. In: Zhang C. et al. (Eds.), Proc. 7th Australian Joint Conf. on Artif. Intelligence. World Scientific, Singapore, pp. 37–44. See http://www.csse.monash.edu.au/~dld/Snob.html), uses the message lengths from various parameter estimates to combine parameter estimation with selection of the number of components and estimation of the relative abundances of the components. The message length is (to within a constant) the negative logarithm of the posterior probability (not a posterior density) of the theory. So, the MML theory can also be regarded as the theory with the highest posterior probability. Snob currently assumes that variables are uncorrelated within each component, and permits multi-variate data from Gaussian, discrete multi-category (or multi-state or multinomial), Poisson and von Mises circular distributions, as well as missing data. Additionally, Snob can do fully-parameterised mixture modelling, estimating the latent class assignments in addition to estimating the number of components, the relative abundances of the components and the component parameters. We also report on extensions of Snob for data which have sequential or spatial correlations between observations, or correlations between attributes.
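As a rough illustration of the model-selection principle described above (not the Snob program or the MML87 estimator of the cited papers), the following sketch scores Gaussian mixtures with different numbers of components by a crude two-part code length, parameter assertion cost plus negative log-likelihood in nats, and keeps the shortest. The synthetic data, the BIC-like parameter cost and the use of scikit-learn's GaussianMixture are illustrative assumptions only.

# A minimal sketch, assuming a BIC-like surrogate for the MML code length.
# The true MML87 assertion cost involves priors and the Fisher information;
# here the cost per free parameter is approximated as 0.5*log(n) nats.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 1-D data drawn from two Gaussian components (illustrative only).
X = np.concatenate([rng.normal(-2.0, 1.0, 300),
                    rng.normal(3.0, 0.7, 200)]).reshape(-1, 1)

def approx_message_length(model, X):
    """Crude two-part code length in nats: parameter cost + data cost."""
    n, d = X.shape
    k = model.n_components
    # Free parameters for a diagonal-covariance mixture:
    # k-1 mixing proportions, k*d means, k*d variances.
    n_params = (k - 1) + k * d + k * d
    param_cost = 0.5 * n_params * np.log(n)   # cost of asserting the parameters
    data_cost = -model.score(X) * n           # -log-likelihood of the data
    return param_cost + data_cost

# Fit candidate mixtures and keep the one with the shortest total "message".
best = min(
    (GaussianMixture(n_components=k, covariance_type="diag",
                     random_state=0).fit(X) for k in range(1, 7)),
    key=lambda m: approx_message_length(m, X),
)
print("selected number of components:", best.n_components)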

clustering · mixture modelling · minimum message length · MML · Snob · induction · coding · information theory · statistical inference · machine learning · classification · intrinsic classification · unsupervised learning · numerical taxonomy

References

  1. Barron A.R. and Cover T.M. 1991. Minimum complexity density estimation. IEEE Transactions on Information Theory 37: 1034–1054.
  2. Baxter R.A. and Oliver J.J. 1997. Finding overlapping distributions with MML. Statistics and Computing 10(1): 5–16.
  3. Boulton D.M. 1975. The information criterion for intrinsic classification. Ph.D. Thesis, Dept. Computer Science, Monash University, Australia.
  4. Boulton D.M. and Wallace C.S. 1969. The information content of a multistate distribution. Journal of Theoretical Biology 23: 269–278.
  5. Boulton D.M. and Wallace C.S. 1970. A program for numerical classification. Computer Journal 13: 63–69.
  6. Boulton D.M. and Wallace C.S. 1973a. An information measure for hierarchic classification. The Computer Journal 16: 254–261.
  7. Boulton D.M. and Wallace C.S. 1973b. A comparison between information measure classification. In: Proceedings of ANZAAS Congress, Perth.
  8. Boulton D.M. and Wallace C.S. 1975. An information measure for single-link classification. The Computer Journal 18(3): 236–238.
  9. Chaitin G.J. 1966. On the length of programs for computing finite sequences. Journal of the Association for Computing Machinery 13: 547–549.
  10. Cheeseman P., Self M., Kelly J., Taylor W., Freeman D., and Stutz J. 1988. Bayesian classification. In: Seventh National Conference on Artificial Intelligence, Saint Paul, Minnesota, pp. 607–611.
  11. Conway J.H. and Sloane N.J.A. 1988. Sphere Packings, Lattices and Groups. London, Springer-Verlag.
  12. Dellaportas P., Karlis D., and Xekalaki E. 1997. Bayesian Analysis of Finite Poisson Mixtures. Technical Report No. 32, Department of Statistics, Athens University of Economics and Business, Greece.
  13. Dowe D.L., Allison L., Dix T.I., Hunter L., Wallace C.S., and Edgoose T. 1996. Circular clustering of protein dihedral angles by minimum message length. In: Proc. 1st Pacific Symp. Biocomp., HI, U.S.A., pp. 242–255.
  14. Dowe D.L., Baxter R.A., Oliver J.J., and Wallace C.S. 1998. Point estimation using the Kullback-Leibler loss function and MML. In: Proc. 2nd Pacific Asian Conference on Knowledge Discovery and Data Mining (PAKDD'98), Melbourne, Australia. Springer-Verlag, pp. 87–95.
  15. Dowe D.L. and Korb K.B. 1996. Conceptual difficulties with the efficient market hypothesis: towards a naturalized economics. In: Dowe D.L., Korb K.B., and Oliver J.J. (Eds.), Proceedings of the Information, Statistics and Induction in Science (ISIS) Conference, Melbourne, Australia. World Scientific, pp. 212–223.
  16. Dowe D.L., Oliver J.J., Baxter R.A., and Wallace C.S. 1995. Bayesian estimation of the von Mises concentration parameter. In: Proc. 15th Maximum Entropy Conference, Santa Fe, New Mexico.
  17. Dowe D.L., Oliver J.J., and Wallace C.S. 1996. MML estimation of the parameters of the spherical Fisher distribution. In: Sharma A. et al. (Eds.), Proc. 7th Conf. Algorithmic Learning Theory (ALT'96), LNAI 1160, Sydney, Australia, pp. 213–227.
  18. Dowe D.L. and Wallace C.S. 1997. Resolving the Neyman-Scott problem by minimum message length. In: Proc. Computing Science and Statistics – 28th Symposium on the Interface, Vol. 28, pp. 614–618.
  19. Dowe D.L. and Wallace C.S. 1998. Kolmogorov complexity, minimum message length and inverse learning. In: Proc. 14th Australian Statistical Conference (ASC-14), Gold Coast, Qld., Australia, p. 144.
  20. Edgoose T.C. and Allison L. 1998. Unsupervised Markov classification of sequenced data using MML. In: McDonald C. (Ed.), Proc. 21st Australasian Computer Science Conference (ACSC'98), Singapore. Springer-Verlag, ISBN: 981-3083-90-5, pp. 81–94.
  21. Edgoose T.C., Allison L., and Dowe D.L. 1998. An MML classification of protein structure that knows about angles and sequences. In: Proc. 3rd Pacific Symp. Biocomp. (PSB-98), HI, U.S.A., pp. 585–596.
  22. Edwards R.T. and Dowe D.L. 1998. Single factor analysis in MML mixture modelling. In: Proc. 2nd Pacific Asian Conference on Knowledge Discovery and Data Mining (PAKDD'98), Melbourne, Australia. Springer-Verlag, pp. 96–109.
  23. Everitt B.S. and Hand D.J. 1981. Finite Mixture Distributions. London, Chapman and Hall.
  24. Fisher D.H. 1987. Conceptual clustering, learning from examples, and inference. In: Machine Learning: Proceedings of the Fourth International Workshop. Morgan Kaufmann, pp. 38–49.
  25. Fisher N.I. 1993. Statistical Analysis of Circular Data. Cambridge University Press.
  26. Fraley C. and Raftery A.E. 1998. Mclust: software for model-based clustering and discriminant analysis. Technical Report TR 342, Department of Statistics, University of Washington, U.S.A. Journal of Classification, to appear.
  27. Georgeff M.P. and Wallace C.S. 1984. A general criterion for inductive inference. In: O'Shea T. (Ed.), Advances in Artificial Intelligence: Proc. Sixth European Conference on Artificial Intelligence, Amsterdam. North Holland, pp. 473–482.
  28. Hunt L.A. and Jorgensen M.A. 1999. Mixture model clustering using the multimix program. Australian and New Zealand Journal of Statistics 41(2): 153–171.
  29. Jorgensen M.A. and Hunt L.A. 1996. Mixture modelling clustering of data sets with categorical and continuous variables. In: Dowe D.L., Korb K.B., and Oliver J.J. (Eds.), Proceedings of the Information, Statistics and Induction in Science (ISIS) Conference, Melbourne, Australia. World Scientific, pp. 375–384.
  30. Kearns M., Mansour Y., Ng A.Y., and Ron D. 1997. An experimental and theoretical comparison of model selection methods. Machine Learning 27: 7–50.
  31. Kissane D.W., Bloch S., Dowe D.L., Snyder R.D., Onghena P., McKenzie D.P., and Wallace C.S. 1996. The Melbourne family grief study, I: Perceptions of family functioning in bereavement. American Journal of Psychiatry 153: 650–658.
  32. Mardia K.V. 1972. Statistics of Directional Data. Academic Press.
  33. McLachlan G.J. 1992. Discriminant Analysis and Statistical Pattern Recognition. New York, Wiley.
  34. McLachlan G.J. and Basford K.E. 1998. Mixture Models. New York, Marcel Dekker.
  35. McLachlan G.J. and Krishnan T. 1996. The EM Algorithm and Extensions. New York, Wiley.
  36. McLachlan G.J., Peel D., Basford K.E., and Adams P. 1999. The EMMIX software for the fitting of mixtures of Normal and t-components. Journal of Statistical Software 4.
  37. Neal R.M. 1998. Markov chain sampling methods for Dirichlet process mixture models. Technical Report 9815, Dept. of Statistics and Dept. of Computer Science, University of Toronto, Canada, 17 pp.
  38. Neyman J. and Scott E.L. 1948. Consistent estimates based on partially consistent observations. Econometrica 16: 1–32.
  39. Oliver J.J., Baxter R.A., and Wallace C.S. 1996. Unsupervised learning using MML. In: Proc. 13th International Conf. Machine Learning (ICML 96), San Francisco, CA. Morgan Kaufmann, pp. 364–372.
  40. Oliver J.J. and Dowe D.L. 1996. Minimum message length mixture modelling of spherical von Mises-Fisher distributions. In: Proc. Sydney International Statistical Congress (SISC-96), Sydney, Australia, p. 198.
  41. Patrick J.D. 1991. Snob: A program for discriminating between classes. Technical Report TR 151, Dept. of Computer Science, Monash University, Clayton, Victoria 3168, Australia.
  42. Prior M., Eisenmajer R., Leekam S., Wing L., Gould J., Ong B., and Dowe D.L. 1998. Are there subgroups within the autistic spectrum? A cluster analysis of a group of children with autistic spectrum disorders. J. Child Psychol. Psychiat. 39(6): 893–902.
  43. Rissanen J.J. 1978. Modeling by shortest data description. Automatica 14: 465–471.
  44. Rissanen J.J. 1989. Stochastic Complexity in Statistical Inquiry. Singapore, World Scientific.
  45. Rissanen J.J. and Ristad E.S. 1994. Unsupervised classification with stochastic complexity. In: Bozdogan H. et al. (Eds.), Proc. of the First US/Japan Conf. on the Frontiers of Statistical Modeling: An Informational Approach. Kluwer Academic Publishers, pp. 171–182.
  46. Roeder K. 1994. A graphical technique for determining the number of components in a mixture of normals. Journal of the American Statistical Association 89(426): 487–495.
  47. Schou G. 1978. Estimation of the concentration parameter in von Mises-Fisher distributions. Biometrika 65: 369–377.
  48. Solomonoff R.J. 1964. A formal theory of inductive inference. Information and Control 7: 1–22, 224–254.
  49. Solomonoff R.J. 1995. The discovery of algorithmic probability: A guide for the programming of true creativity. In: Vitanyi P. (Ed.), Computational Learning Theory: EuroCOLT'95. Springer-Verlag, pp. 1–22.
  50. Stutz J. and Cheeseman P. 1994. Autoclass: A Bayesian approach to classification. In: Skilling J. and Sibisi S. (Eds.), Maximum Entropy and Bayesian Methods. Dordrecht, Kluwer Academic.
  51. Titterington D.M., Smith A.F.M., and Makov U.E. 1985. Statistical Analysis of Finite Mixture Distributions. John Wiley and Sons, Inc.
  52. Vapnik V.N. 1995. The Nature of Statistical Learning Theory. Springer.
  53. Viswanathan M. and Wallace C.S. 1999. A note on the comparison of polynomial selection methods. In: Proc. 7th Int. Workshop on Artif. Intelligence and Statistics. Morgan Kaufmann, pp. 169–177.
  54. Viswanathan M., Wallace C.S., Dowe D.L., and Korb K.B. 1999. Finding cutpoints in noisy binary sequences. In: Proc. 12th Australian Joint Conf. on Artif. Intelligence.
  55. Wahba G. 1990. Spline Models for Observational Data. SIAM.
  56. Wallace C.S. 1986. An improved program for classification. In: Proceedings of the Ninth Australian Computer Science Conference (ACSC-9), Vol. 8, Monash University, Australia, pp. 357–366.
  57. Wallace C.S. 1990. Classification by Minimum Message Length inference. In: Goos G. and Hartmanis J. (Eds.), Advances in Computing and Information – ICCI'90. Berlin, Springer-Verlag, pp. 72–81.
  58. Wallace C.S. 1995. Multiple factor analysis by MML estimation. Technical Report 95/218, Dept. of Computer Science, Monash University, Clayton, Victoria 3168, Australia. J. Multiv. Analysis, to appear.
  59. Wallace C.S. 1989. False Oracles and SMML Estimators. In: Dowe D.L., Korb K.B., and Oliver J.J. (Eds.), Proceedings of the Information, Statistics and Induction in Science (ISIS) Conference, Melbourne, Australia. World Scientific, pp. 304–316. Tech. Rept. 89/128, Dept. Comp. Sci., Monash Univ., Australia.
  60. Wallace C.S. 1998. Intrinsic Classification of Spatially-Correlated Data. Computer Journal 41(8): 602–611.
  61. Wallace C.S. and Boulton D.M. 1968. An information measure for classification. Computer Journal 11: 185–194.
  62. Wallace C.S. and Boulton D.M. 1975. An invariant Bayes method for point estimation. Classification Society Bulletin 3(3): 11–34.
  63. Wallace C.S. and Dowe D.L. 1993. MML estimation of the von Mises concentration parameter. Technical Report TR 93/193, Dept. of Comp. Sci., Monash Univ., Clayton 3168, Australia. Aust. and N.Z. J. Stat., prov. accepted.
  64. Wallace C.S. and Dowe D.L. 1994a. Estimation of the von Mises concentration parameter using minimum message length. In: Proc. 12th Australian Statistical Soc. Conf., Monash University, Australia.
  65. Wallace C.S. and Dowe D.L. 1994b. Intrinsic classification by MML – the Snob program. In: Zhang C. et al. (Eds.), Proc. 7th Australian Joint Conf. on Artif. Intelligence. World Scientific, Singapore, pp. 37–44. See http://www.csse.monash.edu.au/~dld/Snob.html.
  66. Wallace C.S. and Dowe D.L. 1996. MML mixture modelling of Multistate, Poisson, von Mises circular and Gaussian distributions. In: Proc. Sydney International Statistical Congress (SISC-96), Sydney, Australia, p. 197.
  67. Wallace C.S. and Dowe D.L. 1997. MML mixture modelling of Multistate, Poisson, von Mises circular and Gaussian distributions. In: Proc. 6th Int. Workshop on Artif. Intelligence and Statistics, pp. 529–536.
  68. Wallace C.S. and Dowe D.L. 1999. Minimum Message Length and Kolmogorov Complexity. Computer Journal (Special issue on Kolmogorov Complexity) 42(4): 270–283.
  69. Wallace C.S. and Freeman P.R. 1987. Estimation and inference by compact coding. J. Royal Statistical Society (Series B) 49: 240–252.
  70. Wallace C.S. and Freeman P.R. 1992. Single factor analysis by MML estimation. Journal of the Royal Statistical Society (Series B) 54: 195–209.

Copyright information

© Kluwer Academic Publishers 2000

Authors and Affiliations

  • Chris S. Wallace (1)
  • David L. Dowe (2)
  1. Computer Science and Software Engineering, Monash University, Clayton, Australia
  2. Computer Science and Software Engineering, Monash University, Clayton, Australia
