Abstract
Visual servoing enables a robot's motion to be controlled from its visual sensors in order to achieve manipulation tasks. In this work we design and implement a robust visual servoing framework for reaching and grasping behaviours on a humanoid service robot with limited control capabilities. Our approach successfully exploits a 5-degree-of-freedom manipulator, overcoming the robot's control limitations while avoiding both singularities and the need for stereo vision techniques. Using a single camera, we combine a marker-less model-based tracker for the target object, pattern tracking for the end-effector to deal with the robot's inaccurate kinematics, and an alternating pose-based visual servoing scheme with eye-in-hand and eye-to-hand configurations, yielding a fully functional grasping system. The overall method outperforms conventional motion planning and simple inverse kinematics techniques for this robotic morphology, demonstrating a 48.8% increase in the grasping success rate.
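The pose-based visual servoing scheme mentioned in the abstract follows the classic formulation in which the pose error between the current and desired end-effector frames, written as a translation plus an axis-angle rotation (θu), is driven exponentially to zero by a proportional velocity command. The following is a minimal NumPy sketch of that standard control law, not the authors' exact implementation; the function names and gain value are illustrative assumptions.

```python
import numpy as np

def rotation_to_theta_u(R):
    """Axis-angle (theta * u) vector of a 3x3 rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    # Off-diagonal differences give the rotation axis direction.
    u = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * u

def pbvs_velocity(t_err, R_err, lam=0.5):
    """Pose-based visual servoing law v = -lambda * e with
    e = (t, theta * u): returns a 6-vector of translational and
    rotational velocity that decays the pose error exponentially."""
    e = np.concatenate([t_err, rotation_to_theta_u(R_err)])
    return -lam * e

# Pure 10 cm translation error along x, no rotation error:
v = pbvs_velocity(np.array([0.1, 0.0, 0.0]), np.eye(3), lam=1.0)
# v = [-0.1, 0, 0, 0, 0, 0]
```

In the eye-to-hand configuration the same error is computed between the tracked end-effector pattern and the grasp pose on the tracked object, with the resulting velocity mapped through the arm Jacobian to the 5 available joints.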
Thanks to Giovanni Claudio for his help with ViSP and with bridging the Pepper robot to ROS.
© 2018 Springer International Publishing AG, part of Springer Nature
Ardón, P., Dragone, M., Erden, M.S. (2018). Reaching and Grasping of Objects by Humanoid Robots Through Visual Servoing. In: Prattichizzo, D., Shinoda, H., Tan, H., Ruffaldi, E., Frisoli, A. (eds) Haptics: Science, Technology, and Applications. EuroHaptics 2018. Lecture Notes in Computer Science(), vol 10894. Springer, Cham. https://doi.org/10.1007/978-3-319-93399-3_31
Print ISBN: 978-3-319-93398-6
Online ISBN: 978-3-319-93399-3