Generation Method of Robot Assembly Motion Considering Physicality Gap Between Humans and Robots

  • Conference paper
Advances in Visual Computing (ISVC 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14362)

Abstract

In this study, we propose a method for simplifying the assignment of robot motion parameters, a task that is otherwise very time consuming. Parts used in a factory differ in shape and size yet are categorized under the same name. With the proposed method, the robot’s grasping point for one part is determined using a human’s optimal grasping point for another part of the same category as a cue. The novelty of our method is that we consider the physicality gap between humans and robots as well as the functions of industrial parts. An example of this gap is the difference in shape between a human hand and a robot hand. A function refers to the role each component of an industrial part plays. Grasping points are determined from the grasping function of the target part using features related to the physicality gap. In an experiment using connecting rods, the average success rate of robot motions generated with the method was 82.8%.
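
As a rough illustration of the idea described above (not the paper’s actual algorithm, whose features and scoring are not reproduced here), the Python sketch below transfers a grasp point between two parts of the same category: it identifies the functional region the human grasped on the source part, then selects a point in the corresponding region of the target part that the robot’s gripper can physically span. The function name, the width-based feasibility test, and the centroid heuristic are all our own assumptions for illustration.

import numpy as np

def transfer_grasp_point(human_grasp, src_regions, tgt_regions,
                         tgt_widths, max_opening=0.06):
    """Map a human grasp on a source part to a robot grasp on a target part.

    human_grasp: (2,) demonstrated grasp point on the source part [m].
    src_regions / tgt_regions: {function label: (N, 2) region points [m]}.
    tgt_widths: {function label: (N,) local part width at each point [m]}.
    max_opening: gripper stroke [m]; stands in for the physicality gap.
    """
    # 1. Identify which functional region the human grasped: the region
    #    containing the point nearest to the demonstrated grasp.
    grasped = min(src_regions, key=lambda lbl: np.min(
        np.linalg.norm(src_regions[lbl] - human_grasp, axis=1)))

    # 2. Candidate robot grasps: points of the same functional region on
    #    the target part whose local width the gripper can span.
    pts = tgt_regions[grasped]
    feasible = pts[tgt_widths[grasped] <= max_opening]
    if feasible.size == 0:
        return None  # the physicality gap rules out this region

    # 3. Among feasible candidates, prefer the point nearest the region
    #    centroid as a crude stand-in for a graspability score.
    centroid = pts.mean(axis=0)
    return feasible[np.argmin(np.linalg.norm(feasible - centroid, axis=1))]

In an actual system, step 3 would be replaced by a learned or geometric graspability measure, and the labeled 2D points would come from functional-region segmentation of depth images rather than being given directly.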

Acknowledgement

This paper is based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).

Author information

Correspondence to Takahiro Suzuki or Manabu Hashimoto.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Suzuki, T., Hashimoto, M. (2023). Generation Method of Robot Assembly Motion Considering Physicality Gap Between Humans and Robots. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2023. Lecture Notes in Computer Science, vol 14362. Springer, Cham. https://doi.org/10.1007/978-3-031-47966-3_30

  • DOI: https://doi.org/10.1007/978-3-031-47966-3_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47965-6

  • Online ISBN: 978-3-031-47966-3

  • eBook Packages: Computer Science, Computer Science (R0)
