Automatically Selecting Inference Algorithms for Discrete Energy Minimisation

  • Paul Henderson
  • Vittorio Ferrari
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9909)

Abstract

Minimisation of discrete energies defined over factors is an important problem in computer vision, and a vast number of MAP inference algorithms have been proposed. Different inference algorithms perform better on factor graph models (GMs) from different underlying problem classes, and in general it is difficult to know which algorithm will yield the lowest energy for a given GM. To mitigate this difficulty, survey papers [1, 2, 3] advise the practitioner on which algorithms perform well on which classes of models. We take the next step forward, and present a technique to automatically select the best inference algorithm for an input GM. We validate our method experimentally on an extended version of the OpenGM2 benchmark [3], containing a diverse set of vision problems. On average, our method selects an inference algorithm yielding labellings with 96% of variables the same as the best available algorithm.
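The abstract describes learning to pick an inference algorithm per problem instance. The following is a minimal sketch of that idea, not the authors' actual pipeline: the feature choices (factor/variable ratio, label count, factor order), the candidate algorithm names, and the k-nearest-neighbour selector are all illustrative assumptions.

```python
# Hypothetical per-instance algorithm selection: map simple factor-graph
# statistics to the algorithm that achieved the lowest energy on similar
# benchmark instances, then pick by a k-NN majority vote.
from collections import Counter

def gm_features(num_vars, num_factors, num_labels, max_factor_order):
    """Hand-crafted summary statistics of a factor graph (illustrative choice)."""
    return (num_factors / max(num_vars, 1), num_labels, max_factor_order)

# Training pairs (features, best algorithm), e.g. gathered by running every
# candidate algorithm on benchmark GMs and recording which gave lowest energy.
training = [
    (gm_features(1000, 2000, 2, 2), "graph-cut"),  # binary pairwise models
    (gm_features(1200, 2400, 2, 2), "graph-cut"),
    (gm_features(500, 900, 8, 2), "trws"),         # multi-label pairwise models
    (gm_features(600, 1100, 8, 2), "trws"),
    (gm_features(300, 400, 4, 3), "lbp"),          # higher-order models
    (gm_features(350, 500, 4, 4), "lbp"),
]

def select_algorithm(features, train, k=3):
    """Vote among the k training instances nearest in feature space."""
    nearest = sorted(
        train,
        key=lambda fx: sum((a - b) ** 2 for a, b in zip(fx[0], features)),
    )
    votes = Counter(alg for _, alg in nearest[:k])
    return votes.most_common(1)[0][0]
```

For a new binary pairwise model, `select_algorithm(gm_features(1100, 2100, 2, 2), training)` returns `"graph-cut"`; the paper itself learns a richer selector over many more features and algorithms.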

Keywords

Problem instance · Problem class · Inference algorithm · Stereo matching · Semantic segmentation
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Kolmogorov, V., Rother, C.: Comparison of energy minimization algorithms for highly connected graphs. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3952, pp. 1–15. Springer, Heidelberg (2006)
  2. Szeliski, R., Zabih, R., Scharstein, D., Veksler, O., Kolmogorov, V., Agarwala, A., Tappen, M., Rother, C.: A comparative study of energy minimization methods for Markov random fields with smoothness-based priors. IEEE Trans. PAMI 30(6), 1068–1080 (2008)
  3. Kappes, J., Andres, B., Hamprecht, F., Schnörr, C., Nowozin, S., Batra, D., Kim, S., Kausler, B., Kröger, T., Lellmann, J., Komodakis, N., Savchynskyy, B., Rother, C.: A comparative study of modern inference techniques for structured discrete energy minimization problems. IJCV 115, 1–30 (2015)
  4. Bishop, C.: Pattern Recognition and Machine Learning. Springer, New York (2006)
  5. Kolmogorov, V.: Convergent tree-reweighted message passing for energy minimization. IEEE Trans. PAMI 28(10), 1568–1583 (2006)
  6. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Trans. PAMI 23(11), 1222–1239 (2001)
  7. Greig, D.M., Porteous, B.T., Seheult, A.H.: Exact maximum a posteriori estimation for binary images. J. Roy. Stat. Soc. 51(2), 271–279 (1989)
  8. Rother, C., Kolmogorov, V., Lempitsky, V., Szummer, M.: Optimizing binary MRFs via extended roof duality. In: CVPR (2007)
  9. Storvik, G., Dahl, G.: Lagrangian-based methods for finding MAP solutions for MRF models. IEEE Trans. Image Process. 9, 469–479 (2000)
  10. Guignard, M., Kim, S.: Lagrangean decomposition: a model yielding stronger Lagrangean bounds. Math. Prog. 39, 215–228 (1987)
  11. Komodakis, N., Paragios, N., Tziritas, G.: MRF optimization via dual decomposition: message-passing revisited. In: ICCV, pp. 1–8 (2007)
  12. Kappes, J., Savchynskyy, B., Schnörr, C.: A bundle approach to efficient MAP-inference by Lagrangian relaxation. In: CVPR (2012)
  13. Martins, A.F.T., Figueiredo, M.A.T., Aguiar, P.M.Q., Smith, N.A., Xing, E.P.: AD3: alternating directions dual decomposition for MAP inference in graphical models. JMLR 16, 495–545 (2015)
  14. Andres, B., Kappes, J.H., Köthe, U., Schnörr, C., Hamprecht, F.A.: An empirical comparison of inference algorithms for graphical models with higher order factors using OpenGM. In: Goesele, M., Roth, S., Kuijper, A., Schiele, B., Schindler, K. (eds.) Pattern Recognition. LNCS, vol. 6376, pp. 353–362. Springer, Heidelberg (2010)
  15. Alahari, K., Kohli, P., Torr, P.: Dynamic hybrid algorithms for MAP inference in discrete MRFs. IEEE Trans. PAMI 32(10), 1846–1857 (2010)
  16. Ishikawa, H.: Higher-order clique reduction in binary graph cut. In: CVPR, pp. 2993–3000 (2009)
  17. Ishikawa, H.: Transformation of general binary MRF minimization to the first order case. IEEE Trans. PAMI 33(6), 1234–1249 (2011)
  18. Fix, A., Gruber, A., Boros, E., Zabih, R.: A graph cut algorithm for higher-order Markov random fields. In: ICCV (2011)
  19. Kschischang, F.R., Frey, B.J., Loeliger, H.-A.: Factor graphs and the sum-product algorithm. IEEE Trans. Inf. Theor. 47(2), 498–519 (2001)
  20. Wainwright, M.J., Jaakkola, T.S., Willsky, A.S.: MAP estimation via agreement on (hyper)trees: message-passing and linear-programming approaches. IEEE Trans. Inf. Theor. 51(11), 3697–3717 (2005)
  21. Sontag, D., Meltzer, T., Globerson, A., Weiss, Y., Jaakkola, T.: Tightening LP relaxations for MAP using message-passing. In: UAI, pp. 503–510 (2008)
  22. Doppa, J.R., Kumar, P., Wick, M., Singh, S., Salakhutdinov, R.: ICML 2013 workshop on Inferning (2013). http://inferning.cs.umass.edu/
  23. Guillaumin, M., Van Gool, L., Ferrari, V.: Fast energy minimization using learned state filters. In: CVPR (2013)
  24. Conejo, B., Komodakis, N., Leprince, S., Avouac, J.P.: Inference by learning: speeding-up graphical model optimization via a coarse-to-fine cascade of pruning classifiers. In: NIPS, pp. 1–9 (2014)
  25. Stoyanov, V., Eisner, J.: Fast and accurate prediction via evidence-specific MRF structure. In: ICML Workshop on Inferning (2012)
  26. Roig, G., Boix, X., De Nijs, R., Ramos, S., Kuhnlenz, K., Van Gool, L.: Active MAP inference in CRFs for efficient semantic segmentation. In: ICCV, pp. 2312–2319 (2013)
  27. Jiang, J., Moon, T., Daumé III, H., Eisner, J.: Prioritized asynchronous belief propagation. In: ICML Workshop on Inferning (2013)
  28. Rice, J.R.: The algorithm selection problem. Adv. Comput. 15, 65–118 (1976)
  29. Xu, L., Hutter, F., Hoos, H.H., Leyton-Brown, K.: SATzilla: portfolio-based algorithm selection for SAT. J. Artif. Intell. Res. 32, 565–606 (2008)
  30. Kotthoff, L., Gent, I.P., Miguel, I.: A preliminary evaluation of machine learning in algorithm selection for search problems. In: Symposium on Combinatorial Search (2011)
  31. Lellmann, J., Schnörr, C.: Continuous multiclass labeling approaches and algorithms. SIAM J. Imaging Sci. 4(4), 1049–1096 (2011)
  32. Nowozin, S., Rother, C., Bagon, S., Sharp, T., Yao, B., Kohli, P.: Decision tree fields. In: ICCV (2011)
  33. Gould, S., Fulton, R., Koller, D.: Decomposing a scene into geometric and semantically consistent regions. In: ICCV (2009)
  34. Hoiem, D., Efros, A.A., Hebert, M.: Recovering occlusion boundaries from an image. IJCV 91(3), 328–346 (2011)
  35. Kim, S., Nowozin, S., Kohli, P., Yoo, C.D.: Higher-order correlation clustering for image segmentation. In: NIPS (2011)
  36. Andres, B., Kappes, J.H., Beier, T., Köthe, U., Hamprecht, F.A.: Probabilistic image segmentation with closedness constraints. In: ICCV (2011)
  37. Brandes, U., Delling, D., Gaertler, M., Görke, R., Hoefer, M., Nikoloski, Z., Wagner, D.: On modularity clustering. IEEE Trans. Knowl. Data Eng. 20(2), 172–188 (2008)
  38. Andres, B., Kroeger, T., Briggman, K.L., Denk, W., Korogod, N., Knott, G., Koethe, U., Hamprecht, F.A.: Globally optimal closed-surface segmentation for connectomics. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012, Part III. LNCS, vol. 7574, pp. 778–791. Springer, Heidelberg (2012)
  39. Jaimovich, A., Elidan, G., Margalit, H., Friedman, N.: Towards an integrated protein-protein interaction network: a relational Markov network approach. J. Comp. Biol. 13(2), 145–164 (2006)
  40. Yanover, C., Schueler-Furman, O., Weiss, Y.: Minimizing and learning energy functions for side-chain prediction. J. Comp. Biol. 15(7), 899–911 (2008)
  41. Shotton, J., Winn, J., Rother, C., Criminisi, A.: TextonBoost for image understanding: multi-class object recognition and segmentation by jointly modeling appearance, shape and context. IJCV 81(1), 2–23 (2009)
  42. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes Challenge 2007 (VOC 2007) Results (2007). http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html
  43. Alexe, B., Deselaers, T., Ferrari, V.: What is an object? In: CVPR (2010)
  44. Isack, H., Boykov, Y.: Energy-based geometric multi-model fitting. IJCV 97(2), 123–147 (2012)
  45. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Comm. ACM 24(6), 381–395 (1981)
  46. Rother, C., Kohli, P., Feng, W., Jia, J.: Minimizing sparse higher order energy functions of discrete variables. In: CVPR (2009)
  47. Kohli, P., Kumar, M., Torr, P.: P3 & beyond: solving energies with higher order cliques. In: CVPR (2007)
  48. Kohli, P., Ladicky, L., Torr, P.: Robust higher order potentials for enforcing label consistency. In: CVPR (2008)
  49. Bergtholdt, M., Kappes, J.H., Schnörr, C.: Learning of graphical models and efficient inference for object class recognition. In: Franke, K., Müller, K.-R., Nickolay, B., Schäfer, R. (eds.) DAGM 2006. LNCS, vol. 4174, pp. 273–283. Springer, Heidelberg (2006). doi:10.1007/11861898_28
  50. Komodakis, N., Tziritas, G., Paragios, N.: Performance vs computational efficiency for optimizing single and dynamic MRFs: setting the state of the art with primal-dual strategies. CVIU 112(1), 14–29 (2008)
  51. Besag, J.: On the statistical analysis of dirty pictures. J. Roy. Stat. Soc. 48(3), 259–302 (1986)
  52. Kernighan, B.W., Lin, S.: An efficient heuristic procedure for partitioning graphs. Bell Sys. Tech. J. 49(2), 291–307 (1970)
  53. Sontag, D., Choe, D.K., Li, Y.: Efficiently searching for frustrated cycles in MAP inference. In: UAI, pp. 795–804 (2012)
  54. Criminisi, A., Shotton, J., Konukoglu, E.: Decision forests for classification, regression, density estimation, manifold learning and semi-supervised learning. Technical report MSR-TR-2011-114, Microsoft Research (2011)

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. School of Informatics, University of Edinburgh, Edinburgh, Scotland, UK