Wide Field of View Kinect Undistortion for Social Navigation Implementation

  • Razali Tomari
  • Yoshinori Kobayashi
  • Yoshinori Kuno
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7432)

Abstract

In planning navigation schemes for social robots, distinguishing between humans and other obstacles is crucial for achieving safe and comfortable motion. A Kinect camera can fulfill this task but unfortunately delivers only a limited field of view (FOV). Recently, a lens that widens the Kinect's FOV has become commercially available from Nyko. However, this lens distorts the RGB-D data, including the depth values. To address this issue, we propose a two-stage undistortion strategy. First, pixel locations in both the RGB and depth images are corrected using an inverse radial distortion model. Next, the depth data are post-filtered using 3D point-cloud analysis to suppress the noise introduced by the undistortion process and to remove ground/ceiling information. Finally, the depth values are rectified using a neural network filter based on laser-assisted training. Experimental results demonstrate the feasibility of the proposed approach for correcting distorted RGB-D data.
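
To make the pipeline concrete, below is a minimal sketch of the first stage (inverse radial undistortion of pixel coordinates) together with a simple ground/ceiling cut on point-cloud heights. It is an illustration only, not the authors' implementation: the distortion coefficients K1 and K2, the principal point (CX, CY), the focal length F, and the height thresholds are all assumed placeholder values, and the paper's neural-network depth rectification is not reproduced here.

```python
# Illustrative sketch of the undistortion pipeline's early steps.
# All camera parameters below are assumed placeholders, NOT values
# reported in the paper.
import numpy as np

K1, K2 = -0.28, 0.07   # assumed radial distortion coefficients
CX, CY = 320.0, 240.0  # assumed principal point for a 640x480 image
F = 525.0              # assumed focal length in pixels

def undistort_pixels(u, v):
    """Correct pixel locations with an inverse radial model:
    p_u = p_d * (1 + k1*r^2 + k2*r^4) in normalized coordinates."""
    xd, yd = (u - CX) / F, (v - CY) / F    # normalized camera coords
    r2 = xd * xd + yd * yd                 # squared radial distance
    scale = 1.0 + K1 * r2 + K2 * r2 * r2   # radial correction factor
    return xd * scale * F + CX, yd * scale * F + CY

def remove_ground_ceiling(points, z_floor=0.05, z_ceil=2.0):
    """Drop 3D points outside an assumed floor/ceiling height band.
    Heights are read from the third column (metres, sensor frame)."""
    z = points[:, 2]
    return points[(z > z_floor) & (z < z_ceil)]

if __name__ == "__main__":
    # Build correction maps for every pixel of a 640x480 frame.
    u, v = np.meshgrid(np.arange(640, dtype=float),
                       np.arange(480, dtype=float))
    u_corr, v_corr = undistort_pixels(u, v)
    print(u_corr.shape, v_corr.shape)  # (480, 640) coordinate maps
```

In practice, the corrected coordinate maps would be used to resample both the RGB and depth images, after which the height-filtered point cloud would feed the learned depth rectification described in the abstract.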

Keywords

Kinect · Fish-Eye Lens · Undistortion · Neural Network

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Razali Tomari (1)
  • Yoshinori Kobayashi (1, 2)
  • Yoshinori Kuno (1)
  1. Graduate School of Science & Engineering, Saitama University, Sakura-Ku, Japan
  2. Japan Science and Technology Agency, PRESTO, Kawaguchi, Japan
