
Factorial Markov Random Fields

  • Junhwan Kim
  • Ramin Zabih
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2352)

Abstract

In this paper we propose an extension to the standard Markov Random Field (MRF) model in order to handle layers. Our extension, which we call a Factorial MRF (FMRF), is analogous to the extension from Hidden Markov Models (HMMs) to Factorial HMMs. We present an efficient EM-based algorithm for inference on Factorial MRFs. Our algorithm exploits the fact that the layers are a priori independent and interact only through the observable image. The algorithm iterates between wide inference, i.e., inference within each layer over the entire set of pixels, and deep inference, i.e., inference through the layers at each single pixel. The efficiency of our method is partly due to the use of graph cuts for binary segmentation, which is part of the wide inference step. We show experimental results for both real and synthetic images.
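The wide/deep alternation described above can be sketched in a few dozen lines. Everything below is an illustrative assumption, not the paper's actual model: layers are binary masks painting constant colors over a background, the observation model is a simple squared residual, and the per-layer binary MAP step uses iterated conditional modes (ICM) as a stand-in for the graph-cut step the paper actually relies on (ICM is only a local optimizer, whereas the graph cut is exact for this binary subproblem).

```python
import numpy as np

def composite(masks, colors, bg):
    """Render the image predicted by the current masks: each pixel takes
    the color of the topmost active layer, or the background otherwise."""
    out = np.full(masks[0].shape, bg, dtype=float)
    for mask, color in zip(masks, colors):  # later layers paint on top
        out = np.where(mask == 1, color, out)
    return out

def deep_inference(image, masks, colors, bg):
    """Deep step: per-pixel costs through the layer stack. For layer k,
    score 'off' vs 'on' at every pixel with the other layers held fixed."""
    unaries = []
    for k in range(len(masks)):
        u = np.zeros(image.shape + (2,))
        for s in (0, 1):
            trial = [m.copy() for m in masks]
            trial[k][:] = s  # force layer k into state s everywhere
            u[..., s] = (image - composite(trial, colors, bg)) ** 2
        unaries.append(u)
    return unaries

def wide_inference(mask, unary, beta=0.2, iters=5):
    """Wide step: binary segmentation within one layer on a Potts-style
    energy. The paper solves this exactly with a graph cut; ICM is used
    here only to keep the sketch self-contained."""
    h, w = mask.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                on = total = 0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        on += mask[ny, nx]
                        total += 1
                cost0 = unary[y, x, 0] + beta * on            # neighbors disagreeing if off
                cost1 = unary[y, x, 1] + beta * (total - on)  # neighbors disagreeing if on
                mask[y, x] = int(cost1 < cost0)
    return mask

def factorial_mrf(image, colors, bg=0.0, sweeps=3):
    """Alternate deep inference (through layers, per pixel) and wide
    inference (within each layer, over all pixels)."""
    masks = [np.zeros(image.shape, dtype=int) for _ in colors]
    for _ in range(sweeps):
        unaries = deep_inference(image, masks, colors, bg)
        for k in range(len(masks)):
            masks[k] = wide_inference(masks[k], unaries[k])
    return masks
```

On a noise-free synthetic image (a bright square on a dark background), `factorial_mrf(img, colors=[1.0])` recovers the square as the single layer's mask. The structural point the sketch illustrates is the one from the abstract: because layers are a priori independent, they are coupled only through the unary terms computed in the deep step, so each wide step decomposes into independent binary segmentations.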

Keywords

Grouping and segmentation · Layer representation · Graphical model · Bayesian inference · Markov Random Field · Factorial Hidden Markov Model


References

  1. S. Ayer and H. Sawhney. Layered representation of motion video using robust maximum-likelihood estimation of mixture models and MDL encoding. In Proceedings of the International Conference on Computer Vision, pages 777–784, 1995.
  2. Yuri Boykov and Vladimir Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in computer vision. In Proceedings of the International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, volume 2134 of LNCS, pages 359–374, 2001.
  3. M. Brand, N. Oliver, and A. Pentland. Coupled hidden Markov models for complex action recognition. Technical Report 407, Vision and Modeling Group, MIT Media Lab, November 1996.
  4. A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. R. Statist. Soc. B, 39(1):1–38, 1977.
  5. B. J. Frey. Filling in scenes by propagating probabilities through layers and into appearance models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages I:185–192, 2000.
  6. Zoubin Ghahramani and Michael I. Jordan. Factorial hidden Markov models. In David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors, Advances in Neural Information Processing Systems, volume 8, pages 472–478. The MIT Press, 1996.
  7. D. Greig, B. Porteous, and A. Seheult. Exact maximum a posteriori estimation for binary images. J. R. Statist. Soc. B, 51(2):271–279, 1989.
  8. W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97–109, 1970.
  9. N. Jojic, N. Petrovic, B. Frey, and T. Huang. Transformed hidden Markov models: Estimating mixture models of images and inferring spatial transformations in video sequences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2000.
  10. Michael Jordan, Zoubin Ghahramani, Tommi Jaakkola, and Lawrence Saul. An introduction to variational methods for graphical models. In M. I. Jordan, editor, Learning in Graphical Models. MIT Press, 1999.
  11. Vladimir Kolmogorov and Ramin Zabih. Computing visual correspondence with occlusions using graph cuts. In Proceedings of the International Conference on Computer Vision, pages 508–515, 2001.
  12. S. Z. Li. Markov Random Field Modeling in Computer Vision. Springer-Verlag, Tokyo, 1995.
  13. M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester. Image inpainting. In SIGGRAPH, pages 417–424, 2000.
  14. P. H. S. Torr, R. Szeliski, and P. Anandan. An integrated Bayesian approach to layer extraction from image sequences. PAMI, 23(3):297–303, March 2001.
  15. C. Vogler and D. Metaxas. Parallel hidden Markov models for American Sign Language recognition. In Proceedings of the International Conference on Computer Vision, pages 116–122, 1999.
  16. J. Y. A. Wang and E. H. Adelson. Representing moving images with layers. IEEE Transactions on Image Processing, 3(5):625–638, September 1994.
  17. Y. Weiss. Smoothness in layers: Motion segmentation using nonparametric mixture estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 520–526, 1997.

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Junhwan Kim (1)
  • Ramin Zabih (1)
  1. Computer Science Department, Cornell University, Ithaca
