Gates for Handling Occlusion in Bayesian Models of Images: An Initial Study

  • Daniel Oberhoff
  • Dominik Endres
  • Martin A. Giese
  • Marina Kolesnik
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7006)

Abstract

Probabilistic systems for image analysis have enjoyed increasing popularity over the last few decades, yet principled approaches to incorporating occlusion as a feature into such systems remain few [11,10,7]. We present an approach that is strongly influenced by work on noisy-or generative factor models (see, e.g., [3]). We show how the intractability of the hidden-variable posterior in noisy-or models can be (conditionally) lifted by introducing gates on the inputs combined with a sparsifying prior, which allows standard inference procedures to be applied. We demonstrate the feasibility of our approach on a computer vision toy problem.
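
To make the contrast concrete, the sketch below illustrates one plausible reading of the gating idea; it is not the authors' implementation, and all names (lam, leak, gated_lik, ...) are hypothetical. Under the classical noisy-or likelihood, every active cause can explain every pixel, which couples the causes in the posterior; if a gate variable assigns each input pixel to exactly one cause (or to the background), the likelihood factorizes over pixels given the gates, so standard mixture-style inference applies.

```python
# A minimal sketch, NOT the authors' implementation: it contrasts the
# classical noisy-or likelihood with a gated variant in which each pixel
# is explained by exactly one cause. All names and parameter values are
# hypothetical illustrations of the idea described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

D, K = 16, 3                          # number of pixels, number of hidden causes
lam = rng.uniform(0.1, 0.9, (D, K))   # assumed per-pixel, per-cause weights
leak = 0.01                           # background "leak" probability

def noisy_or_lik(x, h):
    """p(x | h) under the classical noisy-or: every active cause k can
    independently turn pixel i on with probability lam[i, k], so the
    causes remain coupled in the posterior."""
    p_on = 1.0 - (1.0 - leak) * np.prod((1.0 - lam) ** h, axis=1)
    return np.prod(np.where(x == 1, p_on, 1.0 - p_on))

def gated_lik(x, h, g):
    """p(x | h, g) with a gate g[i] in {0, ..., K} assigning pixel i to
    one cause (0 = background). Given g, the likelihood factorizes over
    pixels, which is what makes standard inference tractable."""
    p_cause = lam[np.arange(D), g - 1] * h[g - 1]   # discarded where g == 0
    p_on = np.where(g == 0, leak, p_cause)
    return np.prod(np.where(x == 1, p_on, 1.0 - p_on))

h = np.array([1, 0, 1])               # causes 1 and 3 are active
g = rng.choice([0, 1, 3], size=D)     # gates point at background or an active cause
x = (rng.random(D) < 0.5).astype(int) # a random binary "image"
print(noisy_or_lik(x, h), gated_lik(x, h, g))
```

Given the gates, each pixel depends on a single cause, so the per-pixel likelihood terms no longer entangle the hidden causes; a sparsifying prior over the gates and causes then plays the role described in the abstract.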

Keywords

Latent Variable · Mixture Model · Bayesian Model · Dirichlet Process · Input Pixel

References

  1. Bishop, C.M.: Pattern Recognition and Machine Learning. Springer, Heidelberg (2006)
  2. Blei, D.M., Jordan, M.I.: Variational methods for the Dirichlet process. In: Proceedings of the 21st International Conference on Machine Learning (2004)
  3. Courville, A., Eck, D., Bengio, Y.: An infinite factor model hierarchy via a noisy-or mechanism. In: Bengio, Y., Schuurmans, D., Lafferty, J., Williams, C.K.I., Culotta, A. (eds.) Advances in Neural Information Processing Systems, vol. 22, pp. 405–413 (2009)
  4. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: CVPR, vol. 1, pp. 886–893 (2005)
  5. Földiák, P.: Learning invariance from transformation sequences. Neural Computation 3, 194–200 (1991)
  6. Jaakkola, T.S., Jordan, M.I.: Variational probabilistic inference and the QMR-DT network. J. Artif. Intell. Res. 10, 291–322 (1999), http://portal.acm.org/citation.cfm?id=1622859.1622869
  7. Lücke, J., Turner, R., Sahani, M., Henniges, M.: Occlusive components analysis. In: Proceedings of NIPS, vol. 22, pp. 1069–1077 (2009)
  8. Minka, T., Winn, J.: Gates: A graphical notation for mixture models. In: Proceedings of NIPS, vol. 21, pp. 1073–1080 (2008)
  9. Olshausen, B.A., Field, D.J.: Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381(6583), 607–609 (1996)
  10. Roux, N.L., Heess, N., Shotton, J., Winn, J.: Learning a generative model of images by factoring appearance and shape. Tech. rep., Microsoft Research (2010)
  11. Tamminen, T., Lampinen, J.: A Bayesian occlusion model for sequential object matching. In: Proc. British Machine Vision Conference 2004, pp. 547–556 (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Daniel Oberhoff (1)
  • Dominik Endres (2)
  • Martin A. Giese (2)
  • Marina Kolesnik (1)
  1. Fraunhofer FIT-LIFE, Sankt Augustin, Germany
  2. Section for Computational Sensomotorics, Dept. of Cognitive Neurology, University Clinic, CIN, HIH and University of Tübingen, Tübingen, Germany
