Frontiers of Computer Science, Volume 9, Issue 5, pp 691–702

Camera array calibration for light field acquisition

  • Yichao Xu
  • Kazuki Maeno
  • Hajime Nagahara
  • Rin-ichiro Taniguchi
Research Article

Abstract

Light field cameras are becoming popular in computer vision and graphics, and many research and commercial applications have already been proposed. Various types of light field cameras have been developed, the camera array being one way of acquiring a 4D light field image with multiple cameras. Camera calibration is essential, since each application requires the correct projection and ray geometry of the light field. The calibrated parameters are used to rectify a light field image from the images captured by the multiple cameras. Various camera calibration approaches have been proposed for a single camera, multiple cameras, and a moving camera. Although these approaches can be applied to calibrating camera arrays, they are not effective in terms of accuracy and computational cost. Moreover, little attention has been paid to the calibration of light field cameras. In this paper, we propose a calibration method for a camera array and a rectification method for generating a light field image from the captured images. We propose a two-step algorithm consisting of closed-form initialization and nonlinear refinement, which extends Zhang’s well-known method to the camera array. More importantly, we introduce a rigid camera constraint, whereby the cameras are rigidly aligned within the array, and exploit this constraint in our calibration. Using this constraint, we obtained much faster and more accurate calibration results in the experiments.
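
The two-step scheme described above can be pictured concretely. The sketch below is not the authors' implementation; it is a minimal illustration of one way such a pipeline can be organized, assuming OpenCV and SciPy: each camera is first initialized independently with a Zhang-style closed-form calibration (via cv2.calibrateCamera), and the poses are then refined jointly by nonlinear least squares under a rigid-array constraint, i.e., one board pose per view plus a single fixed relative transform per non-reference camera. All function names and the parameterization are illustrative assumptions.

import numpy as np
import cv2
from scipy.optimize import least_squares


def init_per_camera(board_points, corners, image_size):
    # Closed-form-style initialization: Zhang's method as implemented by
    # OpenCV, run independently for one camera over all checkerboard views.
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        board_points, corners, image_size, None, None)
    return K, dist, rvecs, tvecs


def residuals(params, n_cams, board_points, corners, Ks, dists):
    # Reprojection residuals under the rigid-array constraint: one board pose
    # per view (expressed in the reference camera) plus one fixed relative
    # pose per non-reference camera, instead of an independent pose per
    # camera and view. Intrinsics are held fixed here for brevity; a fuller
    # refinement would include them in the parameter vector.
    n_views = len(board_points)
    view_poses = params[:6 * n_views].reshape(n_views, 6)
    rel_poses = params[6 * n_views:].reshape(n_cams - 1, 6)
    errs = []
    for v in range(n_views):
        R_ref, _ = cv2.Rodrigues(view_poses[v, :3])
        t_ref = view_poses[v, 3:]
        for c in range(n_cams):
            if c == 0:
                R, t = R_ref, t_ref
            else:
                R_rel, _ = cv2.Rodrigues(rel_poses[c - 1, :3])
                R = R_rel @ R_ref                      # rigid array offset
                t = R_rel @ t_ref + rel_poses[c - 1, 3:]
            rvec, _ = cv2.Rodrigues(R)
            proj, _ = cv2.projectPoints(board_points[v], rvec,
                                        t.reshape(3, 1), Ks[c], dists[c])
            errs.append(proj.reshape(-1, 2) - corners[c][v].reshape(-1, 2))
    return np.concatenate(errs).ravel()


def calibrate_array(board_points, corners, image_size):
    # corners[c][v]: detected checkerboard corners of view v in camera c.
    n_cams = len(corners)
    Ks, dists, all_rvecs, all_tvecs = [], [], [], []
    for c in range(n_cams):
        K, dist, rvecs, tvecs = init_per_camera(board_points, corners[c],
                                                image_size)
        Ks.append(K)
        dists.append(dist)
        all_rvecs.append(rvecs)
        all_tvecs.append(tvecs)
    # Initialize each camera's relative pose from the first view's
    # per-camera extrinsics; board poses come from the reference camera.
    R0, _ = cv2.Rodrigues(np.asarray(all_rvecs[0][0]))
    t0 = np.asarray(all_tvecs[0][0]).ravel()
    rel_init = []
    for c in range(1, n_cams):
        Rc, _ = cv2.Rodrigues(np.asarray(all_rvecs[c][0]))
        tc = np.asarray(all_tvecs[c][0]).ravel()
        R_rel = Rc @ R0.T                              # camera c w.r.t. camera 0
        t_rel = tc - R_rel @ t0
        r_rel, _ = cv2.Rodrigues(R_rel)
        rel_init.append(np.concatenate([r_rel.ravel(), t_rel]))
    x0 = np.concatenate(
        [np.concatenate([np.asarray(r).ravel(), np.asarray(t).ravel()])
         for r, t in zip(all_rvecs[0], all_tvecs[0])]
        + rel_init)
    sol = least_squares(residuals, x0, method='lm',
                        args=(n_cams, board_points, corners, Ks, dists))
    return Ks, dists, sol.x

The point of the rigid constraint is visible in the parameter count: instead of six pose unknowns per camera per view, the refinement has six per view plus six per non-reference camera, which is the kind of reduction that yields the speed and accuracy gains reported in the abstract. The Levenberg-Marquardt solver (method='lm') stands in for the nonlinear refinement step; the paper's exact cost function and parameterization may differ.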

Keywords

light field, camera array, calibration, rectification, digital refocusing


References

  1. Levoy M, Hanrahan P. Light field rendering. In: Proceedings of the ACM Conference on Computer Graphics. 1996, 31–42
  2. Ng R, Levoy M, Brédif M, Duval G, Horowitz M, Hanrahan P. Light field photography with a hand-held plenoptic camera. Computer Science Technical Report CSTR, 2005
  3. Vaish V, Levoy M, Szeliski R, Zitnick C L, Kang S B. Reconstructing occluded surfaces using synthetic apertures: stereo, focus and robust measures. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2006, 2331–2338
  4. Seitz S M, Curless B, Diebel J, Scharstein D, Szeliski R. A comparison and evaluation of multi-view stereo reconstruction algorithms. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2006, 519–528
  5. Wetzstein G, Roodnick D, Heidrich W, Raskar R. Refractive shape from light field distortion. In: Proceedings of IEEE International Conference on Computer Vision. 2011, 1180–1186
  6. Maeno K, Nagahara H, Shimada A, Taniguchi R. Light field distortion feature for transparent object recognition. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2013, 2786–2793
  7. Wilburn B, Joshi N, Vaish V, Talvala E V, Antunez E, Barth A, Adams A, Levoy M, Horowitz M. High performance imaging using large camera arrays. ACM Transactions on Graphics, 2005, 24(3): 765–776
  8. Zhang Z. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330–1334
  9. Ueshiba T, Tomita F. Plane-based calibration algorithm for multi-camera systems via factorization of homography matrices. In: Proceedings of IEEE International Conference on Computer Vision. 2003, 966–973
  10. Snavely N, Seitz S M, Szeliski R. Modeling the world from internet photo collections. International Journal of Computer Vision, 2008, 80: 189–210
  11. Bok Y, Jeon H G, Kweon I S. Geometric calibration of micro-lens-based light-field cameras using line features. In: Proceedings of European Conference on Computer Vision. 2014, 8694: 47–61
  12. Dansereau D G, Pizarro O, Williams S B. Decoding, calibration and rectification for lenselet-based plenoptic cameras. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2013, 1027–1034
  13. Cho D, Lee M, Kim S, Tai Y W. Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction. In: Proceedings of IEEE International Conference on Computer Vision. 2013
  14. Tsai R Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, 1987, 3: 323–344
  15. Vaish V, Wilburn B, Joshi N, Levoy M. Using plane + parallax for calibrating dense camera arrays. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (1). 2004, 2–9
  16. Svoboda T, Martinec D, Pajdla T. A convenient multi-camera self-calibration for virtual environments. PRESENCE: Teleoperators and Virtual Environments, 2005, 14(4): 407–422
  17. Loop C, Zhang Z. Computing rectifying homographies for stereo vision. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 1999, 125–131
  18. Fusiello A, Trucco E, Verri A. A compact algorithm for rectification of stereo pairs. Machine Vision and Applications, 2000, 12(1): 16–22
  19. Deng K, Wang L, Lin Z, Feng T, Deng Z. Correction and rectification of light fields. Computers & Graphics, 2003, 27(2): 169–177
  20. Fukushima N, Yendo T, Fujii T, Tanimoto M. A novel rectification method for two-dimensional camera array by parallelizing locus of feature points. In: International Workshop on Advanced Image Technology. 2008, B5–1
  21. Heikkila J, Silven O. A four-step camera calibration procedure with implicit image correction. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 1997, 1106–1112
  22. Wei G Q, Ma S D. Implicit and explicit camera calibration: theory and experiments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994, 16(5): 469–480
  23. Levenberg K. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 1944, 2(2): 164–168
  24. Marquardt D W. An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal on Applied Mathematics, 1963, 11(2): 431–441
  25. Ihm I, Park S, Lee R K. Rendering of spherical light fields. In: Proceedings of the 5th Pacific Conference on Computer Graphics and Applications. 1997, 59–68
  26. Georgiev T, Lumsdaine A. Focused plenoptic camera and rendering. Journal of Electronic Imaging, 2010, 19(2)

Copyright information

© Higher Education Press and Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Yichao Xu (1)
  • Kazuki Maeno (1)
  • Hajime Nagahara (1)
  • Rin-ichiro Taniguchi (1)

  1. Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan