Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

Journal of Mechanical Science and Technology 31, 2997–3003 (2017)

Abstract

This research aims to develop a vision sensor system and a recognition algorithm that enable a humanoid to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots performing manipulation and locomotion tasks must identify objects in the environment, such as those posed in the challenge issued by the United States' Defense Advanced Research Projects Agency (DARPA): doors, valves, drills, debris, uneven terrain, and stairs, among others. To allow a humanoid to undertake such tasks, we construct a camera–laser fusion system and develop an environmental recognition algorithm. A laser distance sensor rotated by a motor is used to obtain 3D point cloud data. We project the 3D point cloud onto the 2D image plane according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system provides output comparable to that of the RGB-D sensors generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized point cloud data according to two geometric characteristics, namely proximity and co-planarity. To assess the feasibility of our system and algorithm, we use the humanoid robot DRC-HUBO; the results are demonstrated in the accompanying video.
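The projection step described above is not spelled out on this page, but it corresponds to the standard pinhole camera model with lens distortion. The following is a minimal sketch, assuming a Brown–Conrady radial–tangential distortion model and placeholder extrinsics R, t from the laser frame to the camera frame; it is an illustration of the general technique, not the authors' implementation.

```python
import numpy as np

def project_points(points_laser, R, t, K, dist):
    """Project 3D laser points (N, 3) onto the 2D image plane.

    Sketch only: assumes a pinhole model with radial (k1, k2) and
    tangential (p1, p2) distortion. R, t map the laser frame into the
    camera frame; K is the 3x3 intrinsic matrix. All parameter values
    are placeholders, not the authors' calibration.
    """
    # Transform from the laser frame into the camera frame.
    pc = points_laser @ R.T + t
    # Keep only points in front of the camera.
    pc = pc[pc[:, 2] > 0]
    # Normalized image coordinates.
    x, y = pc[:, 0] / pc[:, 2], pc[:, 1] / pc[:, 2]
    r2 = x**2 + y**2
    k1, k2, p1, p2 = dist
    # Apply radial and tangential lens distortion.
    radial = 1 + k1 * r2 + k2 * r2**2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    # Apply intrinsics to obtain pixel coordinates.
    u = K[0, 0] * xd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return np.stack([u, v], axis=1), pc[:, 2]  # pixels and depths
```

Projecting every laser point this way yields a per-pixel depth channel registered to the camera image, which is what lets the fusion system stand in for an RGB-D sensor.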
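Likewise, the co-planarity clustering based on random sampling can be illustrated with a generic RANSAC plane fit. The function below and its inlier threshold are illustrative assumptions, not the paper's algorithm; in a pipeline like the one described, such a fit would run within superpixel clusters, merging neighbors that pass both the proximity and the co-planarity tests.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_thresh=0.02, rng=None):
    """Generic RANSAC plane fit illustrating a co-planarity test.

    Sketch only: returns ((normal, d), inlier mask) for the plane
    n.x + d = 0 with the most inliers; the 2 cm threshold is a
    placeholder, not a value from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Sample three distinct points and fit a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # degenerate (collinear) sample, try again
            continue
        n = n / norm
        d = -np.dot(n, p0)
        # Points within the distance threshold count as co-planar.
        inliers = np.abs(points @ n + d) < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```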



Author information

Corresponding author

Correspondence to Jun-Ho Oh.

Additional information

Recommended by Associate Editor Deok Jin Lee

Inho Lee received his B.S., M.S., and Ph.D. degrees in Mechanical Engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 2009, 2011, and 2016, respectively. He worked on the development of humanoid robots through the HUBO, HUBO2, and DRC-HUBO projects. He is currently working at the Institute for Human and Machine Cognition (IHMC) Robotics Lab, and his research interests include motion planning, quadruped and bipedal walking, stabilization control for humanoid robots, manipulation, sensors, actuators, and microprocessor applications.

Jaesung Oh received his B.S. degree in Mechanical and Control Engineering from Handong Global University, Pohang, South Korea, and his M.S. degree in Mechanical Engineering from KAIST in 2013 and 2015, respectively. He is currently a doctoral student in the Department of Mechanical Engineering, KAIST. His research interests include humanoid robots and applications of robotics and control.

Inhyeok Kim received his B.S. degree in Mechanical Engineering from Yonsei University, Seoul, South Korea, in 2004. He received his M.S. and Ph.D. degrees in Mechanical Engineering from KAIST, Daejeon, South Korea, in 2006 and 2013, respectively. He is currently working at Naver Labs, and his research interests include humanoid robots, whole-body inverse kinematics, motor control, and microprocessor applications.

Jun-Ho Oh received his B.S. and M.S. degrees in Mechanical Engineering from Yonsei University, Seoul, South Korea, and his Ph.D. degree in Mechanical Engineering from the University of California, Berkeley, in 1977, 1979, and 1985, respectively. Since 1985, he has been with the Department of Mechanical Engineering, KAIST, where he is currently a Professor and the Director of the Humanoid Robot Research Center. His research interests include humanoid robots, sensors, actuators, and applications of microprocessors. He is a member of the IEEE, KSME, KSPE, and ICROS.


About this article


Cite this article

Lee, I., Oh, J., Kim, I. et al. Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios. J Mech Sci Technol 31, 2997–3003 (2017). https://doi.org/10.1007/s12206-017-0543-0

