
Two-Stage Picking Method for Piled Shiny Objects

  • Yuya Sato
  • Kensuke Harada (corresponding author)
  • Nobuchika Sakata
  • Weiwei Wan
  • Ixchel G. Ramirez-Alpizar
Conference paper
Part of the Mechanisms and Machine Science book series (volume 73)

Abstract

In this paper, we propose a novel two-stage algorithm for randomized bin-picking. Because it is difficult to detect the pose of a shiny object randomly piled in a bin, we do not attempt to pick objects one by one based on visual information about the pile. Instead, the robot first roughly picks some of the objects from the pile without using visual information and roughly places them onto a working table. The robot then picks the objects from the working table by detecting their 2D positions in an RGB image. We performed experiments with multiple shiny objects of different shapes and weights. Through this experimental study, we show that, by adding just one more step of robot motion, we can realize robust bin-picking for shiny objects.
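As a rough illustration of the two-stage flow described above, the Python sketch below mocks the pipeline. All helper routines (blind_rough_pick, detect_2d_positions, and the mock object list) are hypothetical stand-ins introduced here for illustration only; they are not part of the authors' system, and the sketch only shows the separation between the blind first stage and the vision-guided second stage.

    import random

    def blind_rough_pick(pile):
        # Stage 1 stand-in: grasp a handful of objects from the pile without vision.
        n = min(len(pile), random.randint(1, 3))
        grasped, remaining = pile[:n], pile[n:]
        pile[:] = remaining
        return grasped

    def detect_2d_positions(objects_on_table):
        # Stage 2 stand-in: pretend an RGB image of the table yields each object's (x, y).
        return [(round(random.uniform(0.0, 0.5), 3),
                 round(random.uniform(0.0, 0.5), 3)) for _ in objects_on_table]

    def two_stage_picking(pile):
        picked = []
        while pile:
            on_table = blind_rough_pick(pile)          # rough pick-and-place onto the working table
            for xy in detect_2d_positions(on_table):   # vision-guided pick from the table
                picked.append(xy)
        return picked

    if __name__ == "__main__":
        print(two_stage_picking(list(range(10))))      # 10 mock objects piled in the bin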

Keywords

Robotic bin-picking · Shiny objects · Trajectory planning



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Yuya Sato (1)
  • Kensuke Harada (1, 2), corresponding author
  • Nobuchika Sakata (3)
  • Weiwei Wan (1, 2)
  • Ixchel G. Ramirez-Alpizar (1)
  1. Osaka University, Toyonaka, Japan
  2. National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
  3. Nara Institute of Science and Technology, Ikoma, Japan
