
Reliable Workspace Monitoring in Safe Human-Robot Environment

  • Amine Abou Moughlbay
  • Héctor Herrero
  • Raquel Pacheco
  • Jose Luis Outón
  • Damien Sallé
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 527)

Abstract

Implementing a reliable vision system for full perception of the human-robot environment is a key issue in flexible collaborative production, especially for frequently changing applications. Such a system facilitates the perception and recognition of human activity and consequently increases the robustness and reactivity of safety strategies in collaborative tasks. This paper presents the implementation of several techniques for workspace monitoring in collaborative human-robot applications. The overall environment is perceived reliably to generate a consistent point cloud, which is then used for human detection and tracking. Additionally, safety strategies on the robotic system (reduced velocity, emergency stop, etc.) are activated when the human-robot distance approaches predefined security thresholds.
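
To make the distance-based strategy concrete, the following is a minimal Python sketch of the threshold logic described above. The function names, threshold values, and data layout are illustrative assumptions, not taken from the paper, which implements its own predefined security thresholds within a ROS-based monitoring system.

    import numpy as np

    # Hypothetical thresholds in metres; the paper predefines security
    # thresholds but their values are not stated in this abstract.
    WARNING_DISTANCE = 1.5   # below this: reduce robot velocity
    CRITICAL_DISTANCE = 0.5  # below this: trigger an emergency stop

    def min_human_robot_distance(human_points, robot_position):
        """Minimum Euclidean distance between a robot reference point and
        the points labelled as human in the fused point cloud.

        human_points   -- (N, 3) array of 3-D points classified as human
        robot_position -- (3,) array, e.g. the robot tool-centre point
        """
        if human_points.size == 0:
            return np.inf  # no human detected in the workspace
        return np.linalg.norm(human_points - robot_position, axis=1).min()

    def select_safety_mode(distance):
        """Map the current human-robot distance to a safety strategy."""
        if distance < CRITICAL_DISTANCE:
            return "emergency_stop"
        if distance < WARNING_DISTANCE:
            return "reduced_velocity"
        return "normal_operation"

    # Example: a human point 1.2 m from the robot triggers reduced velocity.
    human = np.array([[1.2, 0.0, 1.0]])
    robot = np.array([0.0, 0.0, 1.0])
    print(select_safety_mode(min_human_robot_distance(human, robot)))

In a deployed system this check would run on every fused point-cloud frame, so the mode degrades smoothly from normal operation to reduced velocity to a stop as the human approaches.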

Keywords

Workspace monitoring · Human-robot collaboration · Human detection · Point cloud fusion · Safety strategies


Acknowledgments

The research leading to these results has been funded in part by the European Union's Seventh Framework Programme (FP7/2007–2013) under grant agreement #608604 (LIAA: Lean Intelligent Assembly Automation).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Amine Abou Moughlbay (1)
  • Héctor Herrero (1)
  • Raquel Pacheco (1)
  • Jose Luis Outón (1)
  • Damien Sallé (1)

  1. TECNALIA, Industry and Transport Division, Parque Científico y Tecnológico de Gipuzkoa, Donostia-San Sebastián, Spain
