A Constrained Semi-supervised Learning Approach to Data Association

  • Hendrik Kück
  • Peter Carbonetto
  • Nando de Freitas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3023)

Abstract

Data association (obtaining correspondences) is a ubiquitous problem in computer vision. It appears when matching image features across multiple images, matching image features to object recognition models, and matching image features to semantic concepts. In this paper, we show how a wide class of data association tasks arising in computer vision can be interpreted as a constrained semi-supervised learning problem. This interpretation opens up room for the development of new, more efficient data association methods. In particular, it leads to the formulation of a new, principled probabilistic model for constrained semi-supervised learning that accounts for uncertainty in the parameters and missing data. By adopting an ingenious data augmentation strategy, it becomes possible to develop an MCMC algorithm in which the high-dimensional variables of the model can be sampled efficiently and directly from their posterior distributions. We demonstrate the new model and algorithm on synthetic data and on the complex problem of matching image features to words in image captions.
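For readers who want a concrete picture of the ingredients named in the abstract, the Python sketch below shows one way such pieces can fit together. It is an illustrative assumption, not the authors' exact model: a linear probit classifier is made conditionally Gaussian through the classic Albert-Chib data augmentation, and the data-association constraint is a multiple-instance-style requirement that every group of features (say, the features of one captioned image) keeps at least one positive label. All function names, priors, and the toy data are invented for the example.

    # A minimal sketch of constrained semi-supervised learning with data
    # augmentation and Gibbs sampling. NOT the authors' exact model: we
    # assume a probit classifier y_i = 1[z_i > 0], z_i ~ N(x_i . w, 1)
    # (Albert-Chib augmentation), plus the constraint that every group
    # (e.g. one captioned image) keeps at least one positive label.
    import numpy as np
    from scipy.stats import norm, truncnorm

    rng = np.random.default_rng(0)

    def sample_z(X, w, y):
        # Latent utilities z_i ~ N(x_i . w, 1), truncated so that
        # sign(z_i) agrees with the current label y_i.
        mu = X @ w
        a = np.where(y == 1, -mu, -np.inf)  # bounds in standard-normal units
        b = np.where(y == 1, np.inf, -mu)
        return mu + truncnorm.rvs(a, b, random_state=rng)

    def sample_w(X, z, tau2=10.0):
        # Conjugate Gaussian draw for the weights given z,
        # under the prior w ~ N(0, tau2 * I).
        d = X.shape[1]
        cov = np.linalg.inv(X.T @ X + np.eye(d) / tau2)
        return rng.multivariate_normal(cov @ X.T @ z, cov)

    def sample_labels(X, w, y, groups):
        # Resample each unknown label from its conditional posterior.
        # If an instance is the only positive left in its group, the
        # constraint forces it to stay positive; otherwise it is a
        # free Bernoulli draw.
        p = norm.cdf(X @ w)  # P(y_i = 1 | w) under the probit link
        for g in groups:
            for i in g:
                if y[g].sum() - y[i] == 0:
                    y[i] = 1
                else:
                    y[i] = int(rng.random() < p[i])
        return y

    # Toy run: three "images" of four feature vectors each, labels unknown.
    X = rng.normal(size=(12, 3))
    groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
    y = np.ones(12, dtype=int)  # feasible start: all labels positive
    w = np.zeros(3)
    for sweep in range(500):    # Gibbs: z | w, y  ->  w | z  ->  y | w
        z = sample_z(X, w, y)
        w = sample_w(X, z)
        y = sample_labels(X, w, y, groups)

The design point worth noticing is the one the abstract highlights: after augmentation, both the high-dimensional latent variables z and the weights w have closed-form conditional posteriors, so they can be sampled directly rather than through accept/reject moves.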

Keywords

Posterior distribution · Data association · Markov chain Monte Carlo algorithm · Statistical machine translation · Annotated images
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Hendrik Kück (1)
  • Peter Carbonetto (1)
  • Nando de Freitas (1)

  1. Dept. of Computer Science, University of British Columbia, Vancouver, Canada
