KI - Künstliche Intelligenz

Volume 28, Issue 4, pp 305–313

Technologies for the Fast Set-Up of Automated Assembly Processes

  • Norbert Krüger
  • Aleš Ude
  • Henrik Gordon Petersen
  • Bojan Nemec
  • Lars-Peter Ellekilde
  • Thiusius Rajeeth Savarimuthu
  • Jimmy Alison Rytz
  • Kerstin Fischer
  • Anders Glent Buch
  • Dirk Kraft
  • Wail Mustafa
  • Eren Erdal Aksoy
  • Jeremie Papon
  • Aljaž Kramberger
  • Florentin Wörgötter
Research Project

Abstract

In this article, we describe technologies facilitating the set-up of automated assembly solutions that were developed in the context of the IntellAct project (2011–2014). Establishing such robot solutions currently still requires tedious procedures, which particularly hinders the automation of so-called few-of-a-kind production. As a result, most production of this kind is carried out manually and is therefore often performed in low-wage countries. In the IntellAct project, we have developed a set of methods that facilitate the set-up of a complex automated assembly process, and here we present our work on tele-operation, dexterous grasping, pose estimation and the learning of control strategies. The prototype developed in IntellAct is at technology readiness level 4 (TRL 4, corresponding to 'demonstration in a lab environment').

Keywords

Robotics · Automated assembly · Pose estimation · Robot control

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Norbert Krüger (1)
  • Aleš Ude (3)
  • Henrik Gordon Petersen (1)
  • Bojan Nemec (3)
  • Lars-Peter Ellekilde (1)
  • Thiusius Rajeeth Savarimuthu (1)
  • Jimmy Alison Rytz (1)
  • Kerstin Fischer (2)
  • Anders Glent Buch (1)
  • Dirk Kraft (1)
  • Wail Mustafa (1)
  • Eren Erdal Aksoy (4)
  • Jeremie Papon (4)
  • Aljaž Kramberger (3)
  • Florentin Wörgötter (4)

  1. The Maersk Mc-Kinney Moller Institute (MMMI), University of Southern Denmark, Odense, Denmark
  2. Institute for Design and Communication, University of Southern Denmark, Odense, Denmark
  3. Humanoid and Cognitive Robotics Lab, Department of Automatics, Biocybernetics, and Robotics, Jožef Stefan Institute, Ljubljana, Slovenia
  4. Bernstein Center for Computational Neuroscience (BCCN), Georg-August University Göttingen, Göttingen, Germany