A Vision-Based System for Grasping Novel Objects in Cluttered Environments

  • Ashutosh Saxena
  • Lawson Wong
  • Morgan Quigley
  • Andrew Y. Ng
Part of the Springer Tracts in Advanced Robotics book series (STAR, volume 66)

Summary

We present our vision-based system for grasping novel objects in cluttered environments. Our system can be divided into four components: 1) deciding where to grasp an object, 2) perceiving obstacles, 3) planning an obstacle-free path, and 4) following the path to grasp the object. While most prior work assumes the availability of a detailed 3-d model of the environment, our system focuses on developing algorithms that are robust to the uncertainty and missing data typical of real-world experiments. In this paper, we test our robotic grasping system using our STAIR (STanford AI Robots) platforms in two experiments: grasping novel objects and unloading items from a dishwasher. We also illustrate these ideas in the context of having a robot fetch an object from another room in response to a verbal request.
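To make the four-component pipeline concrete, the following Python sketch shows one way the stages could be composed. All class and method names (GraspPredictor, ObstacleMapper, plan, execute, and so on) are illustrative assumptions for this sketch, not the STAIR system's actual software interfaces.

```python
# A minimal sketch of the four-stage grasping pipeline described above.
# All names here are hypothetical placeholders, not the STAIR system's API.

class GraspingPipeline:
    def __init__(self, grasp_predictor, obstacle_mapper, planner, controller):
        self.grasp_predictor = grasp_predictor  # 1) decide where to grasp the object
        self.obstacle_mapper = obstacle_mapper  # 2) perceive obstacles in the scene
        self.planner = planner                  # 3) plan an obstacle-free arm path
        self.controller = controller            # 4) follow the path and grasp

    def grasp(self, images):
        # Predict a grasp point from camera images; the predictor is assumed
        # to tolerate uncertain or missing 3-d data rather than requiring a
        # detailed model of the environment.
        grasp_pose = self.grasp_predictor.predict(images)
        # Build an obstacle map of the cluttered scene from the same images.
        obstacles = self.obstacle_mapper.perceive(images)
        # Plan a collision-free trajectory of arm configurations to the grasp pose.
        trajectory = self.planner.plan(grasp_pose, obstacles)
        # Execute the trajectory and close the gripper at the grasp pose.
        return self.controller.execute(trajectory)
```

In this decomposition each stage can be developed and tested in isolation, which matches the component-wise evaluation described in the summary.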

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Ashutosh Saxena (1)
  • Lawson Wong (1)
  • Morgan Quigley (1)
  • Andrew Y. Ng (1)

  1. Computer Science Department, Stanford University, Stanford, USA