Three Examples of Monte-Carlo Markov Chains: At the Interface Between Statistical Computing, Computer Science, and Statistical Mechanics

  • Persi Diaconis
  • Susan Holmes
Part of The IMA Volumes in Mathematics and its Applications book series (IMA, volume 72)

Abstract

The revival of interest in Markov chains is based in part on their recent applicability to real-world problems and in part on their ability to resolve issues in theoretical computer science. This paper presents three examples that illustrate both parts: a Markov chain algorithm for estimating the tails of the bootstrap, which also illustrates the Jerrum-Sinclair theory of approximate counting. The Geyer-Thompson work on Monte Carlo evaluation of maximum likelihood is compared with work on evaluation of the partition function. Finally, the work of Diaconis-Sturmfels on conditional inference is complemented by the work of theoretical computer scientists on approximate computation of the volume of convex polyhedra.
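
As a rough illustration of the third example (conditional inference given sufficient statistics), the Python sketch below runs a standard "+1/−1 on a 2×2 subtable" walk over contingency tables with fixed row and column sums, and uses it to approximate the conditional distribution of the chi-square statistic given the margins. This is a minimal sketch of that general class of chains, not the authors' implementation; the table, chain length, burn-in, and thinning interval are illustrative choices.

```python
import random

def step(table):
    """One move of a symmetric random walk on tables with fixed margins.

    Pick two distinct rows and two distinct columns, then add the pattern
        +1 -1
        -1 +1
    (or its negative, with probability 1/2) to the chosen 2x2 subtable.
    Row and column sums are unchanged; if any entry would go negative the
    chain stays put, so the uniform distribution on the fiber is stationary.
    """
    nrow, ncol = len(table), len(table[0])
    i1, i2 = random.sample(range(nrow), 2)
    j1, j2 = random.sample(range(ncol), 2)
    sign = random.choice([1, -1])
    delta = {(i1, j1): sign, (i1, j2): -sign,
             (i2, j1): -sign, (i2, j2): sign}
    if all(table[i][j] + d >= 0 for (i, j), d in delta.items()):
        for (i, j), d in delta.items():
            table[i][j] += d
    return table

def chi_square(table, expected):
    """Pearson chi-square statistic of a table against expected counts."""
    return sum((table[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(len(table)) for j in range(len(table[0])))

# Toy example (hypothetical data): start from an observed 3x3 table, run the
# walk, and record the chi-square statistic to approximate its conditional
# distribution given the margins.
observed = [[4, 1, 2],
            [2, 3, 3],
            [1, 4, 5]]
rows = [sum(r) for r in observed]
cols = [sum(c) for c in zip(*observed)]
total = sum(rows)
expected = [[r * c / total for c in cols] for r in rows]

table = [row[:] for row in observed]
stats = []
for t in range(20000):
    table = step(table)
    if t >= 5000 and t % 10 == 0:      # burn-in, then thin
        stats.append(chi_square(table, expected))

obs_stat = chi_square(observed, expected)
p_value = sum(s >= obs_stat for s in stats) / len(stats)
print(f"estimated conditional p-value: {p_value:.3f}")
```

The estimated p-value is the fraction of sampled tables whose chi-square statistic is at least as large as the observed one; longer runs and independent restarts would be needed to assess Monte Carlo error.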

Keywords

Markov Chain · Partition Function · Ising Model · Grade Point Average · Convex Polyhedron

References

  1. Barone, P., Frigessi, A. and Piccioni, M. (editors) (1992). Stochastic Models, Statistical Methods, and Algorithms in Image Analysis, Lecture Notes in Statistics, Springer-Verlag.
  2. Broder, A. (1986). How hard is it to marry at random? (On the approximation of the permanent.) Proc. 18th ACM Symp. Th. Comp., 50–58. Erratum in Proc. 20th ACM Symp. Th. Comp., 551.
  3. Chung, F., Graham, R., and Yau, S.T. (1994). Unpublished manuscript.
  4. Diaconis, P. and Efron, B. (1986). Testing for independence in a two-way table: new interpretations of the chi-square statistic (with discussion). Ann. Statist. 13, 845–905.
  5. Diaconis, P. and Gangolli, A. (1994). The number of arrays with given row and column sums. In this volume.
  6. Diaconis, P., Graham, R.L., and Sturmfels, B. (1994). Primitive partition identities. Technical Report No. 9, Dept. of Statistics, Stanford University.
  7. Diaconis, P. and Holmes, S. (1994a). Gray codes for randomization procedures. Technical Report No. 10, Dept. of Statistics, Stanford University.
  8. Diaconis, P. and Holmes, S. (1994b). Random walks for bootstrap tails. Technical Report, Dept. of Mathematics, Harvard University.
  9. Diaconis, P. and Saloff-Coste, L. (1993). Comparison theorems for reversible Markov chains. Ann. Appl. Prob. 3, 696–730.
  10. Diaconis, P. and Sturmfels, B. (1993). Algebraic algorithms for sampling from conditional distributions. Technical Report, Dept. of Mathematics, Harvard University.
  11. Dyer, M. and Frieze, A. (1991). Computing the volume of convex bodies: a case where randomness provably helps. In Probabilistic Combinatorics and its Applications, B. Bollobás (ed.), Proc. Symp. Appl. Math. 44, 123–170, Amer. Math. Soc., Providence.
  12. Efron, B. (1979). Bootstrap methods: another look at the jackknife. Ann. Statist. 7, 1–26.
  13. Efron, B. and Tibshirani, R. (1993). An Introduction to the Bootstrap, Chapman and Hall.
  14. Gangolli, A. (1991). Convergence bounds for Markov chains and applications to sampling. Ph.D. dissertation, Dept. of Computer Science, Stanford University.
  15. Geyer, C. and Thompson, E. (1992). Constrained Monte Carlo maximum likelihood for dependent data. Jour. Roy. Statist. Soc. B 54, 657–699.
  16. Geyer, C. and Thompson, E. (1993). Analysis of relatedness in the California condors, from DNA fingerprints. Mol. Biol. Evol. 10, 571–589.
  17. Gillman, D. (1993). A Chernoff bound for random walks on expander graphs. Preprint.
  18. Hall, P. (1992). The Bootstrap and Edgeworth Expansion, Springer-Verlag.
  19. Höglund, T. (1974). Central limit theorems and statistical inference for finite Markov chains. Zeit. Wahrsch. Verw. Gebiete 29, 123–151.
  20. Jerrum, M. and Sinclair, A. (1989). Approximating the permanent. SIAM J. Comput. 18, 1149–1178.
  21. Jerrum, M. and Sinclair, A. (1993). Polynomial time approximation algorithms for the Ising model. SIAM J. Comput. 22, 1087–1116.
  22. Jerrum, M., Valiant, L. and Vazirani, V. (1986). Random generation of combinatorial structures from a uniform distribution. Theor. Comput. Sci. 43, 169–188.
  23. Kannan, R., Lovász, L. and Simonovits, M. (1994). Unpublished manuscript.
  24. Kannan, R. and Mount, J. (1994). Unpublished manuscript.
  25. Mann, B. (1994). Some formulae for enumeration of contingency tables. Technical Report, Harvard University.
  26. Sinclair, A. (1993). Algorithms for Random Generation and Counting, Birkhäuser, Boston.
  27. Valiant, L. (1979). The complexity of computing the permanent. Theor. Comput. Sci. 8, 189–201.
  28. Welsh, D. (1993). Complexity: Knots, Colourings and Counting, Cambridge University Press.

Copyright information

© Springer Science+Business Media New York 1995

Authors and Affiliations

  • Persi Diaconis (1)
  • Susan Holmes (2)
  1. Dept. of Mathematics, Harvard University, Cambridge, USA
  2. INRA, Unité de Biométrie, Montpellier, France
