SyB3R: A Realistic Synthetic Benchmark for 3D Reconstruction from Images

  • Andreas Ley
  • Ronny Hänsch
  • Olaf Hellwich
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9911)


Benchmark datasets are the foundation of experimental evaluation in almost all vision problems. In the context of 3D reconstruction, such datasets are difficult to produce. The field is largely divided between datasets created from real photos, which require elaborate experimental setups, and simple synthetic datasets, which are easy to produce but lack many real-world characteristics. In this work, we seek a middle ground by introducing a framework for the synthetic creation of realistic datasets and their ground truths. We show the benefits of such a purely synthetic approach over real-world datasets and discuss its limitations.
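To illustrate the kind of post-processing a synthetic benchmark of this type applies to clean rendered images, the sketch below degrades an ideal render with two common camera effects: lateral chromatic aberration (approximated here by shifting the red and blue channels in opposite directions) and a Poisson-Gaussian sensor-noise model. This is a minimal illustrative sketch, not the paper's actual camera model; the function names, parameter values (`gain`, `read_noise`, `shift`), and the single-pixel channel shift are all assumptions chosen for clarity.

```python
import numpy as np

def apply_chromatic_aberration(img, shift=1):
    """Crude lateral chromatic aberration: shift the red and blue
    channels of an HxWx3 image in opposite horizontal directions."""
    out = img.copy()
    out[..., 0] = np.roll(img[..., 0], shift, axis=1)   # red channel shifted right
    out[..., 2] = np.roll(img[..., 2], -shift, axis=1)  # blue channel shifted left
    return out

def apply_sensor_noise(img, gain=0.01, read_noise=0.002, rng=None):
    """Poisson-Gaussian noise model: shot noise scales with the signal,
    read noise is additive. Parameter values are illustrative only."""
    if rng is None:
        rng = np.random.default_rng(0)
    shot = rng.poisson(img / gain) * gain               # signal-dependent shot noise
    read = rng.normal(0.0, read_noise, size=img.shape)  # signal-independent read noise
    return np.clip(shot + read, 0.0, 1.0)

# A flat mid-gray "render" in [0, 1] stands in for a path-traced image.
render = np.full((64, 64, 3), 0.5)
degraded = apply_sensor_noise(apply_chromatic_aberration(render))
```

In a real pipeline of this kind, effects such as motion blur, demosaicing, and tone mapping would be chained in the same way, each stage consuming the previous stage's output.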





This paper was supported by a grant (HE 2459/21-1) from the Deutsche Forschungsgemeinschaft (DFG).

Supplementary material

Supplementary material 1: 419982_1_En_15_MOESM1_ESM.pdf (24.2 MB)



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Computer Vision and Remote Sensing Group, Technische Universität Berlin, Berlin, Germany
