Synthesizing Real World Stereo Challenges

  • Ralf Haeusler
  • Daniel Kondermann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8142)

Abstract

Synthetic datasets for correspondence algorithm benchmarking have recently gained increasing interest. The primary aim in their creation has commonly been to achieve the highest possible realism for human observers, which is regularly assumed to be the most important design target. But datasets must look realistic to the algorithm, not to the human observer. We therefore challenge the realism hypothesis in favor of posing specific, isolated, and non-photorealistic problems to algorithms. This has three benefits: (i) Images can be created in large numbers at low cost, which addresses the currently largest problem in ground truth generation. (ii) We can combinatorially iterate through the design space to explore the situations of highest relevance to the application; as future stereo algorithms become more robust, the datasets can be modified to increase matching challenges gradually. (iii) By isolating the core problems of stereo methods, we can focus on each of them in turn. Our aim is not to produce a new dataset. Instead, we contribute a new perspective on synthetic vision benchmark generation and show encouraging examples to validate our ideas. We believe that the potential of using synthetic data for evaluation in computer vision has not yet been fully utilized. Our first experiments demonstrate that it is worthwhile to set up purpose-designed datasets, as typical stereo failures can readily be reproduced and thereby better understood. The datasets are made available online [1].


References

  1. Datasets and ground truth for experiments in this paper, http://hci.iwr.uni-heidelberg.de//Benchmarks/document/synthesizing_stereo_challenges/
  2. Baker, S., Scharstein, D., Lewis, J.P., Roth, S., Black, M.J., Szeliski, R.: A database and evaluation methodology for optical flow. International Journal of Computer Vision 92(1), 1–31 (2011)
  3. Butler, D.J., Wulff, J., Stanley, G.B., Black, M.J.: A naturalistic open source movie for optical flow evaluation. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012, Part VI. LNCS, vol. 7577, pp. 611–625. Springer, Heidelberg (2012)
  4. Förstner, W.: 10 pros and cons against performance characterization of vision algorithms. In: Workshop on Performance Characterization of Vision Algorithms, pp. 13–29 (1996)
  5. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3354–3361 (June 2012)
  6. Haeusler, R., Nair, R., Kondermann, D.: Ensemble learning for confidence measures in stereo vision. In: CVPR (to appear, 2013)
  7. Hansen, P., Alismail, H., Rander, P., Browning, B.: Online continuous stereo extrinsic parameter estimation. In: CVPR, pp. 1059–1066. IEEE (2012)
  8. Haralick, R.M.: Performance characterization in computer vision. In: Chetverikov, D., Kropatsch, W.G. (eds.) CAIP 1993. LNCS, vol. 719, pp. 1–9. Springer, Heidelberg (1993)
  9. Hirschmüller, H.: Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 328–341 (2008)
  10. Hirschmüller, H., Gehrig, S.K.: Stereo matching in the presence of sub-pixel calibration errors. In: CVPR, pp. 437–444 (2009)
  11. Hu, X., Mordohai, P.: A quantitative evaluation of confidence measures for stereo vision. IEEE Trans. Pattern Anal. Mach. Intell. 34(11), 2121–2133 (2012)
  12. Julesz, B., Papathomas, T.: Foundations of Cyclopean Perception. MIT Press (2006)
  13. Kajiya, J.T.: The rendering equation. In: Evans, D.C., Athay, R.J. (eds.) SIGGRAPH, pp. 143–150. ACM (1986)
  14. Meister, S., Jähne, B., Kondermann, D.: Outdoor stereo camera system for the generation of real-world benchmark data sets. Optical Engineering 51(02), 021107 (2012)
  15. Meister, S., Kondermann, D.: Real versus realistically rendered scenes for optical flow evaluation. In: 2011 14th ITG Conference on Electronic Media Technology (CEMT), pp. 1–6. IEEE (2011)
  16. Neilson, D., Yang, Y.H.: Evaluation of constructable match cost measures for stereo correspondence using cluster ranking. In: CVPR (2008)
  17. Nilsson, J., Ödblom, A., Fredriksson, J., Zafar, A.: Using augmentation techniques for performance evaluation in automotive safety. In: Furht, B. (ed.) Handbook of Augmented Reality, pp. 631–649. Springer, New York (2011)
  18. Perlin, K.: An image synthesizer. In: Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1985, pp. 287–296. ACM, New York (1985), http://doi.acm.org/10.1145/325334.325247
  19. Pfeiffer, D., Gehrig, S., Schneider, N.: Exploiting the power of stereo confidences. In: CVPR (to appear, 2013)
  20. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision 47(1-3), 7–42 (2002)
  21. Scharstein, D., Szeliski, R.: High-accuracy stereo depth maps using structured light. In: CVPR, vol. 1, pp. 195–202 (2003)
  22. Thacker, N.A., Clark, A.F., Barron, J.L., Beveridge, J.R., Courtney, P., Crum, W.R., Ramesh, V., Clark, C.: Performance characterization in computer vision: A guide to best practices. Computer Vision and Image Understanding 109(3), 305–334 (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Ralf Haeusler (1)
  • Daniel Kondermann (2)
  1. Computer Science Department, The University of Auckland, New Zealand
  2. Heidelberg Collaboratory for Image Processing, Germany
