
Precise Measurement of Cargo Boxes for Gantry Robot Palletization in Large Scale Workspaces Using Low-Cost RGB-D Sensors

  • Yaadhav Raaj
  • Suraj Nair
  • Alois Knoll
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10114)

Abstract

This paper presents a novel algorithm for extracting the pose and dimensions of cargo boxes in the large measurement space of a robotic gantry, with sub-centimetre accuracy, using multiple low-cost RGB-D Kinect sensors. This information is used by bin-packing and path-planning software to build up a pallet. The gantry workspace can be up to 10 m in every dimension, and the cameras cannot be placed top-down because the gantry components actuate within this space. Such camera placements make occlusion and sensor noise more likely.
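
For illustration, the multi-sensor measurement can be sketched in Python/NumPy: each Kinect depth image is back-projected with the pinhole model and the per-camera clouds are transformed into a common gantry frame using extrinsic calibration. This is a minimal sketch under assumed intrinsics and function names, not the authors' implementation.

    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        # Back-project a depth image (metres) to an N x 3 cloud with the pinhole model.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

    def fuse_clouds(clouds, extrinsics):
        # Transform each camera's cloud into the gantry frame and concatenate.
        # extrinsics[i] is the 4 x 4 camera-to-gantry transform from offline calibration.
        fused = []
        for pts, T in zip(clouds, extrinsics):
            homog = np.hstack([pts, np.ones((len(pts), 1))])
            fused.append((homog @ T.T)[:, :3])
        return np.vstack(fused)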

The paper also describes the system-integration components: how point-cloud data from multiple cameras is extracted and fused in real time, how primitives and contours are extracted and corrected using RGB image features, and how cargo parameters are recovered from the cluttered cloud and optimized using graph-based segmentation and particle-filter-based techniques. This is achieved with sub-centimetre accuracy despite the occlusion and sensor noise that result from such camera placements and ranges to the cargo.
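
As a rough sketch of the particle-filter optimization mentioned above, box hypotheses (centre, yaw, and dimensions, assumed here to rest on the ground plane) can be perturbed, weighted by a simple surface-fit residual against the fused cloud, and resampled. The cost function, noise schedule, and parameterisation below are placeholders for illustration, not the paper's formulation.

    import numpy as np

    def box_fit_score(points, cx, cy, yaw, L, W, H):
        # Negative mean distance of cloud points to the nearest face plane of a
        # yaw-rotated box of size L x W x H centred at (cx, cy) on the ground (z = 0).
        c, s = np.cos(-yaw), np.sin(-yaw)
        x = c * (points[:, 0] - cx) - s * (points[:, 1] - cy)
        y = s * (points[:, 0] - cx) + c * (points[:, 1] - cy)
        z = points[:, 2]
        dx = np.abs(np.abs(x) - L / 2)
        dy = np.abs(np.abs(y) - W / 2)
        dz = np.minimum(np.abs(z), np.abs(z - H))
        return -np.minimum(np.minimum(dx, dy), dz).mean()

    def refine_box(points, init, n_particles=200, n_iters=30,
                   noise=(0.02, 0.02, 0.05, 0.01, 0.01, 0.01)):
        # Perturb parameter hypotheses, weight them by the fit score, resample,
        # and anneal the search radius; returns the best (cx, cy, yaw, L, W, H).
        particles = np.tile(np.asarray(init, float), (n_particles, 1))
        noise = np.asarray(noise, float)
        for _ in range(n_iters):
            particles = particles + np.random.randn(n_particles, 6) * noise
            particles[:, 3:] = np.maximum(particles[:, 3:], 0.05)  # keep dimensions positive
            scores = np.array([box_fit_score(points, *p) for p in particles])
            weights = np.exp((scores - scores.max()) / 0.005)
            weights /= weights.sum()
            particles = particles[np.random.choice(n_particles, n_particles, p=weights)]
            noise = noise * 0.9
        scores = np.array([box_fit_score(points, *p) for p in particles])
        return particles[np.argmax(scores)]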

Keywords

Point Cloud · Particle Filter · Depth Image · Moving Least Squares · Bilateral Filter
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Notes

Acknowledgement

This work is funded by the Civil Aviation Authority of Singapore (CAAS) under the Aviation Challenge 2 grant.

Supplementary material

416263_1_En_29_MOESM1_ESM.zip (13.9 MB)
Supplementary material 1 (zip 14260 KB)


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. TUM CREATE, Singapore, Singapore
  2. Technische Universität München (TUM), Institut für Informatik, Robotics and Embedded Systems, Munich, Germany
