On Parameter Learning in CRF-Based Approaches to Object Class Image Segmentation

  • Sebastian Nowozin
  • Peter V. Gehler
  • Christoph H. Lampert
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6316)

Abstract

Recent progress in per-pixel object class labeling of natural images can be attributed to the use of multiple types of image features and sound statistical learning approaches. Among the latter, Conditional Random Fields (CRFs) are prominently used for their ability to represent interactions between random variables. Despite their popularity in computer vision, parameter learning for CRFs has remained difficult, with cross-validation and piecewise training being the most popular approaches.

In this work, we propose a simple yet expressive tree-structured CRF based on a recent hierarchical image segmentation method. Our model combines and weights multiple image features within a hierarchical representation and allows simple and efficient globally-optimal learning of ≈ 10⁵ parameters. The tractability of our model allows us to pose and answer some of the open questions regarding parameter learning as it applies to CRF-based approaches. The key findings for learning CRF models are, from the obvious to the surprising: i) multiple image features always help, ii) the limiting factor for current models is the amount of training data, iii) piecewise training is competitive, and iv) current methods for max-margin training fail for models with many parameters.
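
For orientation, such a model can be read as a standard log-linear CRF over a segmentation tree. The following formulation is a generic sketch using notation introduced here (w_F, φ_F, T); it is not an equation quoted from the paper, whose concrete factors and features are defined in its main body:

p(y \mid x; w) = \frac{1}{Z(x; w)} \exp\Big( \sum_{F \in T} \langle w_F, \phi_F(y_F, x) \rangle \Big), \qquad Z(x; w) = \sum_{y'} \exp\Big( \sum_{F \in T} \langle w_F, \phi_F(y'_F, x) \rangle \Big).

Here F ranges over the nodes and parent-child edges of the segmentation tree T, φ_F(y_F, x) stacks the image features visible to factor F, and w collects the ≈ 10⁵ weights. Because the factor graph is a tree, Z(x; w) and all marginals can be computed exactly by sum-product message passing, so the regularized negative conditional log-likelihood is convex and can be minimized to a global optimum.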

Keywords

Image Region, Conditional Random Field, Factor Graph, Parameter Learning, Segmentation Accuracy

Supplementary material

Electronic Supplementary Material: 978-3-642-15567-3_8_MOESM1_ESM.pdf (138 KB)

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Sebastian Nowozin, Microsoft Research Cambridge, UK
  • Peter V. Gehler, ETH Zurich, Switzerland
  • Christoph H. Lampert, Institute of Science and Technology Austria
