Pattern Recognition and Image Analysis

Volume 20, Issue 3, pp. 324–334

Variational segmentation algorithms with label frequency constraints

  • D. Kropotov
  • D. Laptev
  • A. Osokin
  • D. Vetrov
Representation, Processing, Analysis, and Understanding of Images


We consider image and signal segmentation problems within the Markov random field (MRF) framework and take label frequency constraints into account. Incorporating these constraints into an MRF leads to an NP-hard optimization problem. To solve this problem, we present a two-step approximation scheme that supports hard, interval, and soft constraints on label frequencies. In the first step, a factorized approximation of the joint distribution is made (only local terms are included); in the second step, the labeling is found by conditional maximization of this factorized distribution. The latter task reduces to an easy-to-solve transportation problem. Based on the proposed two-step approximation scheme, we derive the ELM algorithm for tuning MRF parameters. We demonstrate the efficiency of our approach on toy signals and on the task of automated segmentation of Google Maps.
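The second step of the scheme, conditional maximization of the factorized distribution under label-count constraints, can be illustrated for the hard-constraint case as a small transportation-type linear program. The sketch below is illustrative only (it assumes NumPy/SciPy and hypothetical names such as `constrained_labeling`; the paper's actual formulation and solver may differ):

```python
import numpy as np
from scipy.optimize import linprog

def constrained_labeling(log_q, counts):
    """Maximize sum_i log_q[i, y_i] subject to exact label counts.

    log_q  : (N, K) array of factorized log-probabilities.
    counts : length-K sequence, counts[j] = required number of
             pixels assigned label j (must sum to N).
    """
    n, k = log_q.shape
    assert sum(counts) == n
    # LP over x[i, j] >= 0: each pixel row sums to 1, each label
    # column sums to counts[j].  The constraint matrix of this
    # transportation problem is totally unimodular, so an optimal
    # vertex solution is integral -- a genuine labeling.
    c = -log_q.ravel()                 # linprog minimizes
    a_eq, b_eq = [], []
    for i in range(n):                 # one constraint per pixel
        row = np.zeros(n * k)
        row[i * k:(i + 1) * k] = 1.0
        a_eq.append(row)
        b_eq.append(1.0)
    for j in range(k):                 # one constraint per label
        col = np.zeros(n * k)
        col[j::k] = 1.0
        a_eq.append(col)
        b_eq.append(float(counts[j]))
    res = linprog(c, A_eq=np.vstack(a_eq), b_eq=b_eq, bounds=(0, 1))
    return res.x.reshape(n, k).argmax(axis=1)
```

For example, with factorized probabilities favoring label 0 for the first two pixels, requiring counts (2, 1) keeps that preference, while counts (1, 2) forces the pixel with the weakest preference for label 0 to switch.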

Key words

image segmentation, signal segmentation, Markov random fields, hidden Markov models, discrete optimization, area prior, variational inference, linear programming





Copyright information

© Pleiades Publishing, Ltd. 2010

Authors and Affiliations

  1. Dorodnicyn Computing Centre of the Russian Academy of Sciences, Moscow, Russia
  2. CMC Department, Lomonosov Moscow State University, Moscow, Russia
