Abstract
Robotic handling of textiles remains a largely unexplored and underdeveloped area of robotics. This is mainly due to the complexity of the required actions, which stems from the properties of textiles, and the difficulty of accurately determining the textile's state. Because planar, non-rigid objects vary considerably in shape and size, we address this challenge with advanced deep learning methods. In this work, we demonstrate a vision-to-motion deep neural network (DNN) trained to straighten a single crumpled corner of a rectangular piece of fabric that was deformed and then flattened in a simulated environment. The network was trained to identify a correct grab point at which to grasp the simulated fabric and a correct drop point to which to move the grasped part of the fabric. For this simplified example, the trained model achieved good results, with an average error of 4.4 mm in determining the grab point position and an average error of 4.2 mm in determining the drop point position. Using the predicted points, the robot performed a smoothing motion that brought the deformed fabric almost to its canonical state.
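The reported figures (4.4 mm for the grab point, 4.2 mm for the drop point) suggest an average Euclidean point-position error. A minimal sketch of such a metric follows; the function name and array layout are our own illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mean_point_error_mm(predicted, ground_truth):
    """Average Euclidean distance (in mm) between predicted and
    ground-truth 2-D points; both inputs are (N, 2) arrays in mm."""
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    # Per-sample Euclidean distance, then the mean over all samples.
    return float(np.linalg.norm(predicted - ground_truth, axis=1).mean())

# Two hypothetical test samples: one off by a 3-4-5 triangle, one exact.
print(mean_point_error_mm([[0, 0], [10, 10]], [[3, 4], [10, 10]]))  # → 2.5
```

Evaluated separately on the predicted grab points and drop points of a test set, a metric of this form would yield the two averages quoted in the abstract.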
Supported by the young researcher grant (PR-11324), the research grant Robot Textile and Fabric Inspection and Manipulation – RTFM (J2-4457), and the program group Automation, Robotics, and Biocybernetics (P2-0076), all funded by the Slovenian Research Agency (ARRS).
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Nimac, P., Gams, A. (2023). Cloth Flattening with Vision-to-Motion Skill Model. In: Petrič, T., Ude, A., Žlajpah, L. (eds) Advances in Service and Industrial Robotics. RAAD 2023. Mechanisms and Machine Science, vol 135. Springer, Cham. https://doi.org/10.1007/978-3-031-32606-6_43
Print ISBN: 978-3-031-32605-9
Online ISBN: 978-3-031-32606-6
eBook Packages: Intelligent Technologies and Robotics (R0)