Submodular Relaxation for MRFs with High-Order Potentials

  • Anton Osokin
  • Dmitry Vetrov
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7585)

Abstract

In this paper we propose a novel dual decomposition scheme for approximate MAP-inference in Markov random fields with sparse high-order potentials, i.e. potentials that encourage a relatively small number of variable configurations. We construct a Lagrangian dual of the problem in such a way that it can be evaluated efficiently by minimizing a submodular function with a min-cut/max-flow algorithm. We show that this relaxation is equivalent to a specific type of linear program and derive the conditions under which it matches the generally tighter LP relaxation solved in [1]. Unlike the latter, our relaxation has significantly fewer dual variables and is therefore much easier to solve. We demonstrate its faster convergence on several synthetic and real problems.
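The abstract describes the generic dual-decomposition pattern: the energy is split into tractable subproblems coupled through dual variables, and the resulting Lagrangian dual lower bound is maximized by subgradient ascent until the subproblem minimizers agree. The sketch below illustrates this pattern on a toy three-variable binary energy. The brute-force `solve_sub` oracle, the half/half unary split, and the 1/(1+t) step size are illustrative choices of ours; in the paper the subproblems are submodular and would be minimized with a min-cut/max-flow algorithm instead of enumeration.

```python
import itertools

# Toy energy over 3 binary variables: unary terms plus two pairwise terms.
# Each pairwise term is assigned to one subproblem; the unaries are split
# half/half, and dual variables lam[i] shift them (+lam in copy A, -lam in
# copy B) so that the sum of the two subproblem minima is a lower bound on
# the true minimum energy (the Lagrangian dual).
unary = [0.5, -0.2, 0.1]
pair_a = {(0, 1): 1.0}    # subproblem A owns this edge
pair_b = {(1, 2): -0.8}   # subproblem B owns this edge

def solve_sub(unaries, pairs):
    """Brute-force minimizer of a small binary energy (stand-in for the
    min-cut oracle that would solve a submodular subproblem)."""
    best, best_x = float("inf"), None
    for x in itertools.product([0, 1], repeat=len(unaries)):
        e = sum(u * xi for u, xi in zip(unaries, x))
        e += sum(w * x[i] * x[j] for (i, j), w in pairs.items())
        if e < best:
            best, best_x = e, x
    return best, best_x

lam = [0.0] * 3
for t in range(50):
    ua = [u / 2 + l for u, l in zip(unary, lam)]
    ub = [u / 2 - l for u, l in zip(unary, lam)]
    ea, xa = solve_sub(ua, pair_a)
    eb, xb = solve_sub(ub, pair_b)
    dual = ea + eb                       # lower bound on the true minimum
    # Subgradient of the dual w.r.t. lam is the disagreement xa - xb.
    step = 1.0 / (1 + t)
    lam = [l + step * (a - b) for l, a, b in zip(lam, xa, xb)]

# Once the two minimizers agree the bound is tight on this toy instance.
print(dual, xa, xb)
```

For this instance the two subproblems agree after a few iterations and the dual bound reaches the true minimum energy; in general the subgradient method only converges to the optimum of the relaxation, which may leave an integrality gap.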

Keywords

Markov random fields · energy minimization · MAP-inference · dual decomposition · high-order potentials


References

  1. Komodakis, N., Paragios, N.: Beyond pairwise energies: Efficient optimization for higher-order MRFs. In: CVPR (2009)
  2. Kohli, P., Ladický, L., Torr, P.H.S.: Robust higher order potentials for enforcing label consistency. IJCV 82, 302–324 (2009)
  3. Ishikawa, H.: Transformation of general binary MRF minimization to the first order case. IEEE TPAMI 33, 1234–1249 (2011)
  4. Rother, C., Kohli, P., Feng, W., Jia, J.: Minimizing sparse higher order energy functions of discrete variables. In: CVPR (2009)
  5. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE TPAMI 23, 1222–1239 (2001)
  6. Kohli, P., Kumar, M.P., Torr, P.: \(\mathcal{P}^3\) & Beyond: Move making algorithms for solving higher order functions. IEEE TPAMI 31, 1645–1656 (2008)
  7. Delong, A., Osokin, A., Isack, H., Boykov, Y.: Fast approximate energy minimization with label costs. IJCV 96, 1–27 (2012)
  8. Komodakis, N., Paragios, N., Tziritas, G.: MRF energy minimization and beyond via dual decomposition. IEEE TPAMI 33, 531–552 (2010)
  9. Kappes, J.H., Savchynskyy, B., Schnörr, C.: A bundle approach to efficient MAP-inference by Lagrangian relaxation. In: CVPR (2012)
  10. Komodakis, N.: Towards more efficient and effective LP-based algorithms for MRF optimization. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010, Part II. LNCS, vol. 6312, pp. 520–534. Springer, Heidelberg (2010)
  11. Osokin, A., Vetrov, D., Kolmogorov, V.: Submodular decomposition framework for inference in associative Markov networks with global constraints. In: CVPR (2011)
  12. Boros, E., Hammer, P.L.: Pseudo-Boolean optimization. Discrete Appl. Math. 123, 155–225 (2002)
  13. Kolmogorov, V., Wainwright, M.: On the optimality of tree-reweighted max-product message passing. In: UAI (2005)
  14. Kolmogorov, V., Rother, C.: Minimizing non-submodular functions with graph cuts – a review. IEEE TPAMI 29, 1274–1279 (2007)
  15. Christoudias, C., Georgescu, B., Meer, P.: Synergism in low-level vision. In: CVPR (2002)
  16. Boykov, Y., Kolmogorov, V.: An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE TPAMI 26, 1124–1137 (2004)
  17. Kolmogorov, V.: Convergent tree-reweighted message passing for energy minimization. IEEE TPAMI 28, 1568–1583 (2006)
  18. Lewis, A., Overton, M.: Nonsmooth optimization via BFGS. SIAM J. Optimization (2008)
  19. Burke, J., Lewis, A., Overton, M.: A robust gradient sampling algorithm for nonsmooth, nonconvex optimization. SIAM J. Optimization 15, 751–779 (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Anton Osokin (1)
  • Dmitry Vetrov (1)

  1. Moscow State University, Russia
