Abstract
In this study, we propose a method for simplifying the time-consuming task of assigning robot motion parameters. Parts used in a factory often differ in shape and size yet are categorized under the same name. With the proposed method, the robot’s grasping point for one part is determined using a human’s optimal grasping point for another part in the same category as a cue. The novelty of our method is that we consider both the physicality gap between humans and robots and the functions of industrial parts. An example of this gap is the difference in shape between a human hand and a robot hand. A function refers to the role of each component of an industrial part. Grasping points are determined from the grasping function of a target part using features related to the physicality gap. In an experiment using connecting rods, the average success rate of robot motions generated with the proposed method was 82.8%.
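To make the transfer idea concrete, the following is a minimal, hypothetical Python sketch of the pipeline the abstract describes: a human-annotated grasp point on a source part is mapped to a target part of the same category through matching functional regions, and a simple physicality-gap check (a parallel-jaw gripper opening limit) filters the result. The function names, the region representation, and the feasibility threshold are our own illustrative assumptions, not the authors’ implementation.

    import numpy as np

    # Illustrative sketch only: all names, features, and thresholds below are
    # assumptions for exposition, not the authors' code.

    def transfer_grasp_point(src_grasp, src_regions, tgt_regions):
        """Map a human grasp point from a source part to a target part.

        src_grasp:   (x, y) human-annotated grasp point on the source part.
        src_regions: {function_name: (N, 2) array of points} for the source part.
        tgt_regions: same structure for the target part of the same category.
        """
        # 1. Find which functional region the human grasp point belongs to
        #    (here: nearest region centroid, as a cheap proxy).
        def nearest_region(point, regions):
            return min(regions,
                       key=lambda k: np.linalg.norm(regions[k].mean(axis=0) - point))

        func = nearest_region(np.asarray(src_grasp), src_regions)

        # 2. Express the grasp point in normalized coordinates of that region
        #    and re-project it into the corresponding region of the target part.
        src, tgt = src_regions[func], tgt_regions[func]
        offset = (np.asarray(src_grasp) - src.mean(axis=0)) / (src.std(axis=0) + 1e-9)
        return tgt.mean(axis=0) + offset * tgt.std(axis=0), func

    def gripper_feasible(width_at_point, max_opening=60.0):
        # Physicality-gap check: a human hand tolerates part widths that a
        # parallel-jaw gripper cannot; reject points wider than the gripper
        # opening (millimetres, assumed value).
        return width_at_point <= max_opening

    # Toy example: two connecting rods with "big end" and "small end" regions.
    rng = np.random.default_rng(0)
    src_regions = {"big_end": rng.normal([0, 0], 5, (50, 2)),
                   "small_end": rng.normal([100, 0], 3, (50, 2))}
    tgt_regions = {"big_end": rng.normal([0, 0], 8, (50, 2)),
                   "small_end": rng.normal([140, 0], 4, (50, 2))}

    grasp, func = transfer_grasp_point((99.0, 1.0), src_regions, tgt_regions)
    print(f"transferred grasp near {grasp.round(1)} on region '{func}', "
          f"feasible: {gripper_feasible(width_at_point=35.0)}")

The sketch deliberately reduces region matching to centroid distance and the physicality gap to a single width constraint; the paper’s actual features and matching are richer, but the transfer structure (function region first, gripper feasibility second) follows the abstract.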
Acknowledgement
This paper is based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
Cite this paper
Suzuki, T., Hashimoto, M. (2023). Generation Method of Robot Assembly Motion Considering Physicality Gap Between Humans and Robots. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2023. Lecture Notes in Computer Science, vol 14362. Springer, Cham. https://doi.org/10.1007/978-3-031-47966-3_30