
Towards a Programming-Free Robotic System for Assembly Tasks Using Intuitive Interactions

  • Conference paper in: Social Robotics (ICSR 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13086)


Abstract

Although industrial robots are successfully deployed in many assembly processes, high-mix, low-volume applications remain difficult to automate, as they involve small batches of frequently changing parts. Setting up a robotic system for such tasks requires repeated re-programming by expert users, incurring extra time and cost. In this paper, we present a solution that enables a robot to learn new objects and new tasks from non-expert users without the need for programming. The use case presented here is the assembly of a gearbox mechanism. In the proposed solution, the robot first registers new objects autonomously using a visual exploration routine and trains a deep learning model for object detection accordingly. Second, the user teaches new tasks to the system via visual demonstration in a natural manner. Finally, using multimodal perception from RGB-D (color and depth) cameras and a tactile sensor, the robot executes the taught tasks while adapting to changing configurations; depending on the task requirements, it can also activate human-robot collaboration capabilities. Together, these three modules enable any non-expert user to configure a robot for new applications quickly and intuitively.

This research is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project #A18A2b0046).
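As an illustrative sketch only — the class, method names, and the "object:action" step encoding below are our assumptions, not the authors' implementation — the three modules described in the abstract (object registration, teaching by demonstration, adaptive execution) might fit together as follows:

```python
from typing import Dict, List, Tuple

class AssemblyRobot:
    """Toy sketch of the paper's three modules. All names and data
    structures here are illustrative, not the authors' system."""

    def __init__(self) -> None:
        self.known_objects: set = set()          # names the detector was trained on
        self.tasks: Dict[str, List[str]] = {}    # task name -> demonstrated steps

    def register_object(self, name: str, num_views: int = 12) -> int:
        # Module 1 -- visual exploration: capture several viewpoints of the
        # new part; these images would then train/update the detector.
        images = [f"{name}_view_{i}.png" for i in range(num_views)]
        self.known_objects.add(name)
        return len(images)

    def teach_task(self, task: str, demonstrated_steps: List[str]) -> None:
        # Module 2 -- teaching by demonstration: store the action sequence
        # ("object:action") observed from the user's demonstration.
        unknown = [s for s in demonstrated_steps
                   if s.split(":")[0] not in self.known_objects]
        if unknown:
            raise ValueError(f"unregistered objects in demonstration: {unknown}")
        self.tasks[task] = demonstrated_steps

    def execute_task(self, task: str,
                     scene: Dict[str, Tuple[float, float, float]],
                     ) -> List[Tuple[str, str, Tuple[float, float, float]]]:
        # Module 3 -- adaptive execution: look up each part's pose in the
        # current scene so taught steps follow the parts' new positions.
        plan = []
        for step in self.tasks[task]:
            obj, action = step.split(":")
            plan.append((action, obj, scene[obj]))
        return plan

robot = AssemblyRobot()
robot.register_object("gear")
robot.register_object("shaft")
robot.teach_task("gearbox", ["gear:pick", "shaft:insert"])
plan = robot.execute_task("gearbox",
                          {"gear": (0.1, 0.2, 0.0), "shaft": (0.3, 0.1, 0.0)})
print(plan[0])   # ('pick', 'gear', (0.1, 0.2, 0.0))
```

Note the adaptation point: the demonstrated step sequence is fixed at teaching time, but the part poses are re-queried from the scene at every execution, so the same taught task tolerates rearranged parts.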



Author information


Correspondence to Nicolas Gauthier, Wenyu Liang, Qianli Xu, Fen Fang, Liyuan Li, Yan Wu or Joo Hwee Lim.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Gauthier, N. et al. (2021). Towards a Programming-Free Robotic System for Assembly Tasks Using Intuitive Interactions. In: Li, H., et al. (eds.) Social Robotics. ICSR 2021. Lecture Notes in Computer Science, vol 13086. Springer, Cham. https://doi.org/10.1007/978-3-030-90525-5_18


  • DOI: https://doi.org/10.1007/978-3-030-90525-5_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-90524-8

  • Online ISBN: 978-3-030-90525-5

  • eBook Packages: Computer Science (R0)
