ISVC 2016: Advances in Visual Computing, pp. 858–867
Dual Back-to-Back Kinects for 3-D Reconstruction
Abstract
In this paper, we investigate the use of two Kinects for capturing the 3-D model of a large scene. Traditionally, a single Kinect is slid across the area to obtain a full 3-D model. However, this approach requires a scene with a significant number of prominent features, as well as careful handling of the device. To tackle the problem, we mount two back-to-back Kinects on top of a robot for scanning the environment. This setup requires knowledge of the relative pose between the two Kinects; since they do not have a shared view, calibration by the traditional method is not possible. To solve this problem, we place a dual-face checkerboard (whose front and back patterns are identical) on top of the back-to-back Kinects, and a planar mirror is employed to enable either Kinect to view the same checkerboard. This arrangement creates a shared calibration object between the two sensors, and a mirror-based pose estimation algorithm is applied to solve the Kinect camera calibration problem. Finally, we merge all local object models captured by the Kinects into a combined model with a larger viewing area. Experiments using real measurements of an indoor scene were conducted to show the feasibility of our work.
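The calibration described above rests on two geometric ingredients: the rigid reflection induced by a planar mirror (which maps the virtual, mirrored view of the checkerboard to its true pose) and the relative pose used to merge the two Kinects' point clouds into one frame. A minimal numpy sketch of both steps follows; the function names and the plane parameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def reflection_matrix(n, d):
    """4x4 homogeneous reflection about the mirror plane {x : n.x = d}.

    n is the plane's unit normal and d its distance from the origin.
    The rotation part is the Householder-style map I - 2*n*n^T.
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    S = np.eye(4)
    S[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)
    S[:3, 3] = 2.0 * d * n
    return S

def merge_clouds(pts_a, pts_b, T_ab):
    """Map Kinect B's points into Kinect A's frame and concatenate.

    T_ab is the 4x4 relative pose (B -> A) that the mirror-based
    calibration is meant to recover; pts_a and pts_b are (N, 3) arrays.
    """
    pts_b_h = np.hstack([pts_b, np.ones((len(pts_b), 1))])
    pts_b_in_a = (T_ab @ pts_b_h.T).T[:, :3]
    return np.vstack([pts_a, pts_b_in_a])

# Sanity check: a reflection is an involution (applying it twice
# returns the identity), and points lying on the mirror plane are fixed.
S = reflection_matrix([0.0, 0.0, 1.0], 1.5)
assert np.allclose(S @ S, np.eye(4))
assert np.allclose(S @ np.array([0.0, 0.0, 1.5, 1.0]),
                   [0.0, 0.0, 1.5, 1.0])
```

Given the pose `T_virt` of the checkerboard as observed through the mirror, the true board pose would follow as `S @ T_virt`; with both Kinects' views of the same board so expressed in a common frame, their relative pose `T_ab` can be composed and fed to `merge_clouds`.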
Keywords
Point cloud · Linear method · Camera calibration · Virtual view · Virtual camera
Acknowledgement
This work is supported by a direct grant (Project Code: 4055045) from the Faculty of Engineering of the Chinese University of Hong Kong.