Full-Field Surface 3D Shape and Displacement Measurements Using an Unfocused Plenoptic Camera
Full-field surface 3D shape and displacement measurements using a single commercial unfocused plenoptic camera (Lytro Illum) are reported in this work. Before measurement, the unfocused plenoptic camera is calibrated in two consecutive steps: lateral calibration and depth calibration. Each raw image of a checkerboard pattern recorded by the Lytro Illum is first extracted into an array of sub-aperture images (SAIs), and the center sub-aperture images (CSAIs) at diverse poses are used for lateral calibration to determine the intrinsic and extrinsic parameters. The disparity maps between the CSAI and the remaining SAIs at each pose are then determined for depth-parameter estimation in the depth calibration. Furthermore, a newly developed physics-based depth distortion model is established to correct the severe distortion of the depth field. For shape and deformation measurements, raw images of a test sample with a speckle pattern prepared on its surface are captured by the Lytro Illum and extracted into arrays of SAIs. The disparity maps between the CSAI and the target SAIs are obtained using subset-based digital image correlation. Based on the pre-computed intrinsic and depth parameters and the disparity maps, the full-field surface 3D shape and displacement of the test object are finally determined. The effectiveness and accuracy of the proposed approach are evaluated by a set of experiments involving shape reconstruction of a cylinder, in-plane and out-of-plane displacement measurements of a flat plate, and 3D full-field displacement measurement of a cantilever beam. The preliminary results indicate that the proposed method is a promising new approach for full-field surface 3D shape and displacement measurements.
Keywords: Unfocused plenoptic camera · Digital image correlation · Surface 3D shape reconstruction · Full-field displacement measurement · Depth distortion model
This work is supported by the National Natural Science Foundation of China (Grants Nos. 11427802 and 11632010), the Aeronautical Science Foundation of China (Grant No. 2016ZD51034), and the State Key Laboratory of Traction Power of Southwest Jiaotong University (Grant No. TPL1607).
Center sub-aperture image (CSAI). A raw image captured by a plenoptic camera can be extracted into an array of sub-aperture images, which are equivalent to an array of images captured with slight parallaxes; the one at the center is called the center sub-aperture image.
Digital image correlation (DIC), a widely used optical technique for surface profile and deformation measurements.
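As an illustration only (not the paper's implementation), the core of subset-based DIC can be sketched as an exhaustive integer-pixel search that maximizes a correlation score between a reference subset and candidate subsets in the deformed image; a real implementation would add subpixel refinement, e.g. with the IC-GN algorithm. Function names and window sizes below are hypothetical:

```python
import numpy as np

def zncc(f, g):
    """Zero-mean normalized cross-correlation between two equally sized subsets."""
    f = f - f.mean()
    g = g - g.mean()
    return float(np.sum(f * g) / (np.linalg.norm(f) * np.linalg.norm(g)))

def match_subset(ref, tgt, center, half, search):
    """Find the integer-pixel displacement of a square subset centered at
    `center` (half-width `half`) from `ref` to `tgt` by exhaustively
    maximizing ZNCC over a +/- `search` pixel window."""
    y, x = center
    f = ref[y - half:y + half + 1, x - half:x + half + 1]
    best, best_uv = -2.0, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            g = tgt[y + v - half:y + v + half + 1,
                    x + u - half:x + u + half + 1]
            c = zncc(f, g)
            if c > best:
                best, best_uv = c, (u, v)
    return best_uv  # (u, v) displacement in pixels

# Synthetic check: shift a random speckle image by u = 3, v = 2 pixels.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(np.roll(img, 2, axis=0), 3, axis=1)
print(match_subset(img, shifted, center=(32, 32), half=10, search=5))  # (3, 2)
```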
Epipolar image (EPI), a 2D slice of the 4D light field obtained by simultaneously fixing the horizontal angular and spatial coordinates, or the vertical angular and spatial coordinates.
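With the 4D light field stored as an array L(u, v, s, t), an EPI is just a 2D slice. A minimal sketch (array sizes are illustrative, not the Lytro Illum's actual decoded dimensions):

```python
import numpy as np

# 4D grayscale light field L(u, v, s, t): (u, v) angular, (s, t) spatial.
U, V, S, T = 9, 9, 128, 128            # assumed sizes for illustration
lf = np.zeros((U, V, S, T))

# Horizontal EPI: fix the vertical angular (v) and vertical spatial (t)
# coordinates, leaving a 2D slice over (u, s).
v0, t0 = V // 2, T // 2
epi_h = lf[:, v0, :, t0]               # shape (U, S)

# Vertical EPI: fix the horizontal angular (u) and horizontal spatial (s).
u0, s0 = U // 2, S // 2
epi_v = lf[u0, :, s0, :]               # shape (V, T)
print(epi_h.shape, epi_v.shape)        # (9, 128) (9, 128)
```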
Inverse-compositional Gauss-Newton (IC-GN) algorithm, an efficient algorithm for subset matching in DIC.
Microlens array (MLA), an array of microlenses inserted between the main lens and the sensor of a plenoptic camera.
Particle image velocimetry (PIV), an optical method for measuring the velocity field of a flow.
Root mean square (RMS), a statistical value defined as the square root of the mean of the squared values.
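For $n$ samples $x_1, \dots, x_n$, the definition reads:

```latex
\mathrm{RMS} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^{2}}
```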
Region of interest (ROI), a region of the image selected for further measurement; the profile or deformation is measured only within this region.
Sub-aperture image (SAI), an image extracted from the raw image of a plenoptic camera by fixing the angular coordinates.
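Under the simplifying assumption that each microlens covers an axis-aligned p × p pixel block, SAI extraction reduces to strided slicing: the SAI for angular coordinates (u, v) gathers the (u, v)-th pixel under every microlens. Real decoding of a Lytro Illum raw image must additionally handle the hexagonal lenslet grid, rotation, and vignetting; the sketch below is an idealization:

```python
import numpy as np

p = 9                              # assumed angular resolution per microlens
S, T = 32, 32                      # assumed number of microlenses (spatial size)
raw = np.arange(S * p * T * p, dtype=float).reshape(S * p, T * p)

def sub_aperture_image(raw, u, v, p):
    """Extract the SAI for fixed angular coordinates (u, v): take the
    (u, v)-th pixel under every microlens (idealized aligned grid)."""
    return raw[u::p, v::p]

sai = sub_aperture_image(raw, u=4, v=4, p=p)   # center SAI (CSAI) for odd p
print(sai.shape)  # (32, 32)
```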
Zero-mean normalized sum of squared differences (ZNSSD), a correlation criterion for subset matching in DIC that is insensitive to affine changes in subset intensity.
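The standard ZNSSD criterion between a reference subset f and a deformed subset g can be sketched as follows (a minimal illustration, not the paper's code):

```python
import numpy as np

def znssd(f, g):
    """Zero-mean normalized sum of squared differences between two subsets.
    Invariant to affine intensity changes g = a*f + b (a > 0): 0 indicates a
    perfect match, larger values indicate worse matches."""
    fz = f - f.mean()
    gz = g - g.mean()
    return float(np.sum((fz / np.linalg.norm(fz) - gz / np.linalg.norm(gz)) ** 2))

rng = np.random.default_rng(1)
f = rng.random((21, 21))
print(znssd(f, 2.0 * f + 3.0))         # ~0: same pattern under an affine intensity change
print(znssd(f, rng.random((21, 21))))  # > 0 for an unrelated subset
```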