Stereo Ground Truth with Error Bars

  • Daniel Kondermann
  • Rahul Nair
  • Stephan Meister
  • Wolfgang Mischler
  • Burkhard Güssefeld
  • Katrin Honauer
  • Sabine Hofmann
  • Claus Brenner
  • Bernd Jähne
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9007)

Abstract

Creating stereo ground truth based on real images is a measurement task. Measurements are never perfectly accurate: the depth at each pixel follows an error distribution. A common way to estimate the quality of measurements is error bars. In this paper we describe a methodology to add error bars to images of previously scanned static scenes. The main challenge for stereo ground truth error estimates based on such data is the nonlinear matching of 2D images to 3D points. Our method uses 2D feature quality, 3D point and calibration accuracy, as well as the covariance matrices of bundle adjustments. We sample the reference data error, which is the 3D depth distribution of each point projected into 3D image space. The disparity distribution at each pixel location is then estimated by projecting samples of the reference data error onto the 2D image plane. An analytical Gaussian error propagation is used to validate the results. As proof of concept, we created ground truth for an image sequence with 100 frames. Results show that disparity accuracies well below one pixel can be achieved, albeit with much larger errors at depth discontinuities, mainly caused by uncertain estimates of the camera location.

Supplementary material

336672_1_En_39_MOESM1_ESM.pdf (PDF, 7.5 MB)


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Daniel Kondermann (1)
  • Rahul Nair (1)
  • Stephan Meister (1)
  • Wolfgang Mischler (1)
  • Burkhard Güssefeld (1)
  • Katrin Honauer (1)
  • Sabine Hofmann (2)
  • Claus Brenner (2)
  • Bernd Jähne (1)
  1. Heidelberg Collaboratory for Image Processing at IWR, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
  2. Institute of Cartography and Geoinformatics, Leibniz Universität Hannover, Hanover, Germany
