Learning to Grasp Novel Objects Using Vision

  • Ashutosh Saxena
  • Justin Driemeyer
  • Justin Kearns
  • Chioma Osondu
  • Andrew Y. Ng
Part of the Springer Tracts in Advanced Robotics book series (STAR, volume 39)

Summary

We consider the problem of grasping novel objects, specifically objects being seen for the first time through vision. We present a learning algorithm that predicts, as a function of the images, the position at which to grasp the object. This is done without building or requiring a 3-d model of the object. The algorithm is trained via supervised learning on a set of synthetic images. Using our robotic arm, we demonstrate this approach by successfully grasping a variety of differently shaped objects, such as duct tape, markers, mugs, pens, wine glasses, knife-cutters, jugs, keys, toothbrushes, books, and others, including many object types not seen in the training set.
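To make the idea concrete, the sketch below illustrates the kind of supervised pipeline the summary describes: a classifier trained on image patches from synthetic images labeled with grasp points, then scanned over a new image to pick the most grasp-like location. This is a minimal illustration under assumed simplifications; the feature choice, the function names (e.g. extract_patch_features), and the logistic-regression model are placeholders, not the chapter's actual features or probabilistic model.

```python
# Minimal sketch of learning to predict grasp points from images.
# Trained on synthetic images with labeled grasp positions, then applied
# to an unseen image. All names and features here are illustrative
# assumptions, not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

PATCH = 16  # assumed patch size in pixels

def extract_patch_features(image, y, x):
    """Toy features for the patch centered at (y, x): intensity mean/std
    and mean gradient magnitudes. The paper uses richer filter responses."""
    half = PATCH // 2
    p = image[y - half:y + half, x - half:x + half].astype(float)
    gy, gx = np.gradient(p)
    return np.array([p.mean(), p.std(), np.abs(gy).mean(), np.abs(gx).mean()])

def training_set(images, grasp_points, negatives_per_image=20):
    """Positives at the labeled grasp point of each synthetic image;
    negatives sampled at random locations away from it."""
    rng = np.random.default_rng(0)
    half, X, y = PATCH // 2, [], []
    for img, (py, px) in zip(images, grasp_points):
        X.append(extract_patch_features(img, py, px)); y.append(1)
        for _ in range(negatives_per_image):
            ry = rng.integers(half, img.shape[0] - half)
            rx = rng.integers(half, img.shape[1] - half)
            if abs(ry - py) + abs(rx - px) > PATCH:
                X.append(extract_patch_features(img, ry, rx)); y.append(0)
    return np.array(X), np.array(y)

def predict_grasp(model, image, stride=4):
    """Return the pixel whose patch the model scores most grasp-like."""
    half, best, best_p = PATCH // 2, None, -1.0
    for yy in range(half, image.shape[0] - half, stride):
        for xx in range(half, image.shape[1] - half, stride):
            p = model.predict_proba(
                extract_patch_features(image, yy, xx)[None])[0, 1]
            if p > best_p:
                best_p, best = p, (yy, xx)
    return best, best_p

# Usage with random stand-in "synthetic" data (real training images would
# be rendered scenes with known grasp labels):
rng = np.random.default_rng(1)
imgs = [rng.random((64, 64)) for _ in range(10)]
pts = [(32, 32)] * 10  # pretend each synthetic image is labeled at its center
X, y = training_set(imgs, pts)
model = LogisticRegression(max_iter=1000).fit(X, y)
print(predict_grasp(model, rng.random((64, 64))))
```

The key property the sketch shares with the approach in the summary is that no 3-d model of the object is ever built: the learner maps local image evidence directly to a grasp location, which is what lets it generalize to object types absent from the (synthetic) training set.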

Keywords

Synthetic Data, Real Image, Synthetic Image, Average Absolute Error, Unknown Object

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Ashutosh Saxena¹
  • Justin Driemeyer¹
  • Justin Kearns¹
  • Chioma Osondu¹
  • Andrew Y. Ng¹

  1. Computer Science Department, Stanford University, Stanford
