Cloth Flattening with Vision-to-Motion Skill Model

  • Conference paper
Advances in Service and Industrial Robotics (RAAD 2023)

Part of the book series: Mechanisms and Machine Science, volume 135

Abstract

The handling of textiles by robots remains a largely unexplored and underdeveloped area of robotics. This is mainly due to the complexity of the required actions, which stems from the properties of textiles, and to the difficulty of accurately determining their state. Because of the considerable variability in the shape and size of planar, non-rigid objects, we address this challenge with advanced deep learning methods. In this work, we demonstrate a vision-to-motion DNN (deep neural network) trained to straighten a single crumpled corner of a rectangular piece of fabric that was deformed and then flattened in a simulated environment. The network was trained to identify a correct grab point at which to grasp the simulated fabric and a correct drop point to which to move the grasped part of the fabric. For this simplified example, our trained model achieved good results, with an average error of 4.4 mm in determining the grab point position and an average error of 4.2 mm in determining the drop point position. Using the predicted points, the robot performed a smoothing motion that brought the deformed fabric almost back to its canonical state.
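
The abstract describes the method only at a high level, so the snippet below is a minimal sketch of the general idea rather than the authors' implementation: a small convolutional regressor (written here in PyTorch) that maps a top-down image of the simulated fabric to four normalized coordinates, the grab point (x, y) and the drop point (x, y), trained with a mean-squared-error loss. The network layout, image size, optimizer settings, and all names (GrabDropRegressor, images, targets) are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class GrabDropRegressor(nn.Module):
    """Hypothetical CNN mapping a fabric image to (grab_x, grab_y, drop_x, drop_y)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                # 64 x 4 x 4 feature map
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 4),                      # grab (x, y) and drop (x, y)
        )

    def forward(self, img):
        # img: (batch, 3, H, W) in [0, 1]; output: normalized image coordinates
        return self.head(self.features(img))

# One training step on a dummy batch of simulated images and labelled points.
model = GrabDropRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

images = torch.rand(8, 3, 128, 128)   # placeholder for rendered fabric images
targets = torch.rand(8, 4)            # placeholder grab/drop ground-truth points

optimizer.zero_grad()
loss = loss_fn(model(images), targets)
loss.backward()
optimizer.step()

In such a pipeline, the predicted normalized coordinates would then be mapped from image space to the robot workspace to parameterize the pick-and-place smoothing motion described in the abstract.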

Supported by the young researcher grant (PR-11324), the research grant Robot Textile and Fabric Inspection and Manipulation – RTFM (J2-4457), and the program group Automation, Robotics, and Biocybernetics (P2-0076), all funded by the Slovenian Research Agency (ARRS).

Author information

Corresponding author

Correspondence to Peter Nimac.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Nimac, P., Gams, A. (2023). Cloth Flattening with Vision-to-Motion Skill Model. In: Petrič, T., Ude, A., Žlajpah, L. (eds) Advances in Service and Industrial Robotics. RAAD 2023. Mechanisms and Machine Science, vol 135. Springer, Cham. https://doi.org/10.1007/978-3-031-32606-6_43
