
Journal of Intelligent & Robotic Systems, Volume 76, Issue 3–4, pp. 401–425

Perception and Grasping of Object Parts from Active Robot Exploration

  • Jacopo Aleotti
  • Dario Lodi Rizzini
  • Stefano Caselli

Abstract

Like humans, robots that need semantic perception and accurate estimation of the environment can increase their knowledge through active interaction with objects. This paper proposes a novel method for 3D object modeling for a robot manipulator with an eye-in-hand laser range sensor. Since the robot can only perceive the environment from a limited viewpoint, it actively manipulates a target object and generates a complete model by accumulation and registration of partial views. Three registration algorithms are investigated and compared in experiments performed in cluttered environments with complex rigid objects made of multiple parts. A data structure based on a proximity graph, which encodes neighborhood relations in range scans, is also introduced to perform efficient range queries. The proposed method for 3D object modeling is applied to perform task-level manipulation. Indeed, once a complete model is available, the object is segmented into its constituent parts and categorized. Object sub-parts that are relevant for the task and that afford a grasping action are identified and selected as candidate regions for grasp planning.
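To illustrate the range-query idea mentioned above, the following is a minimal sketch, not the authors' implementation: a proximity graph is built over a range scan by linking each point to its k nearest neighbours, and a radius query around a seed point is answered by a bounded traversal of the graph instead of a brute-force scan of all points. The helper names, the choice of k, and the use of scipy's cKDTree to bootstrap the graph are illustrative assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    def build_proximity_graph(points, k=8):
        """Adjacency lists linking each point to its k nearest neighbours."""
        tree = cKDTree(points)
        # query k+1 neighbours because the closest match of a point is itself
        _, idx = tree.query(points, k=k + 1)
        return [set(row[1:]) for row in idx]

    def range_query(points, graph, seed, radius):
        """Indices of points within `radius` of points[seed], found by
        growing a region over the proximity graph from the seed."""
        r2 = radius ** 2
        visited, frontier, result = {seed}, [seed], []
        while frontier:
            i = frontier.pop()
            if np.sum((points[i] - points[seed]) ** 2) <= r2:
                result.append(i)
                for j in graph[i]:
                    if j not in visited:
                        visited.add(j)
                        frontier.append(j)
        return result

    if __name__ == "__main__":
        scan = np.random.rand(1000, 3)        # stand-in for a range scan
        graph = build_proximity_graph(scan, k=8)
        near = range_query(scan, graph, seed=0, radius=0.1)
        print(len(near), "points within 0.1 of point 0")

Note that the traversal only reaches points connected to the seed through in-radius neighbours, which is the usual trade-off of graph-based range queries on dense scans: locality is exploited for speed, while isolated in-radius points may be missed.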

Keywords

Active exploration of the environment; Range sensing; Robot manipulation



Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  • Jacopo Aleotti (1)
  • Dario Lodi Rizzini (1)
  • Stefano Caselli (1)

  1. Dipartimento di Ingegneria dell'Informazione, University of Parma, Parma, Italy
