Enabling Robust and Autonomous Material Handling in Logistics Through Applied Deep Learning Algorithms

  • Christian Poss
  • Thomas Irrenhauser
  • Marco Prueglmeier
  • Daniel Goehring
  • Vahid Salehi
  • Firas Zoghlami
Chapter
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1098)

Abstract

In recent years, logistics costs in the automotive industry have risen significantly. One way to reduce these costs is to automate the entire material flow. To cope with flexible industrial requirements and dynamic changes, robots with intelligent perception are necessary. Such a perception algorithm is presented in the following. It consists of three modules: first, all objects in the robot's field of view are detected and their positions are determined; then, the objects relevant to the respective process are selected; finally, the gripping point of the next object to be handled is computed. The integration of the robots shows that combining intelligent modules with pragmatic framework modules makes automation feasible even in a challenging industrial environment.
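The abstract describes a three-stage perception pipeline: detect all visible objects and their positions, select the objects relevant to the current process, and compute a gripping point for the next object to handle. The following Python sketch illustrates only this data flow; all names (Detection, perceive, the dummy detector) are hypothetical stand-ins, not the authors' code, and the detection module is injected as a callable so that any learned detector (e.g. an SSD- or YOLO-style network plus a depth lookup) could fill that role.

    # Hypothetical sketch of the three-module perception pipeline; these
    # names are illustrative and do not come from the chapter's code.
    from dataclasses import dataclass
    from typing import Callable, List, Optional, Set, Tuple

    Point3D = Tuple[float, float, float]

    @dataclass
    class Detection:
        label: str          # object class, e.g. "small_load_carrier"
        position: Point3D   # estimated 3D position in the camera frame
        confidence: float   # detector score in [0, 1]

    def perceive(frame,
                 detector: Callable[[object], List[Detection]],
                 relevant_labels: Set[str]) -> Optional[Point3D]:
        detections = detector(frame)                    # module 1: detect all objects
        relevant = [d for d in detections
                    if d.label in relevant_labels]      # module 2: process-relevant only
        if not relevant:
            return None                                 # nothing to handle in this frame
        target = max(relevant, key=lambda d: d.confidence)
        # Module 3: gripping point; here simply the detected position.
        # A real system would refine it, e.g. with a learned grasp model.
        return target.position

    # Usage with a dummy detector standing in for a trained network:
    dummy = lambda frame: [Detection("box", (0.4, 0.1, 0.9), 0.87)]
    print(perceive(None, dummy, {"box"}))               # -> (0.4, 0.1, 0.9)

Injecting the detector as a parameter keeps the three stages decoupled, which mirrors the modular structure the abstract emphasizes.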

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Christian Poss (1)
  • Thomas Irrenhauser (1)
  • Marco Prueglmeier (1)
  • Daniel Goehring (2)
  • Vahid Salehi (3)
  • Firas Zoghlami (3)
  1. BMW Group, Munich, Germany
  2. Freie Universität Berlin, Berlin, Germany
  3. University of Applied Sciences, Munich, Germany