Estimating Grasping Patterns from Images Using Finetuned Convolutional Neural Networks

Conference paper in: Towards Autonomous Robotic Systems (TAROS 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10965)

Abstract

Identifying suitable grasping patterns for a wide variety of objects is a challenging computer vision task. It plays a vital role in robotics, where a robotic hand is used to grasp different objects. Most work in the area is based on 3D robotic grippers, and a substantial body of work also exists on humanoid robotic hands. However, there is negligible work on estimating grasping patterns from 2D images of objects. In this paper, we propose a novel method to learn grasping patterns from images and data recorded from a dataglove, provided by the TUB Dataset. Our method retrains a pre-trained deep Convolutional Neural Network (CNN), AlexNet, to learn deep features from images that correspond to human grasps. The results show that some interesting grasping patterns are learned. In addition, we use two methods, Support Vector Machines (SVM) and Hotelling's T² test, to demonstrate that the dataset does include distinctive grasps for different objects. The results show promising grasping patterns that resemble actual human grasps.
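
This preview does not spell out the retraining step, but the core idea described in the abstract — swapping AlexNet's 1000-way ImageNet classifier for a regression head that predicts dataglove joint angles from an object image — can be sketched as below. This is a hypothetical illustration, not the authors' implementation: the 22-sensor glove dimension, the frozen convolutional features, the loss, and the PyTorch/torchvision API are all assumptions made for the sketch.

    # Hypothetical sketch only: fine-tune AlexNet to regress dataglove joint
    # angles from object images. Dimensions and hyperparameters are assumed,
    # not taken from the paper.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_GLOVE_SENSORS = 22  # assumption: CyberGlove-style joint-angle vector

    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    # Replace the 1000-way ImageNet classifier with a regression head.
    model.classifier[6] = nn.Linear(4096, NUM_GLOVE_SENSORS)

    # Freeze the convolutional features and train only the fully connected
    # layers -- one plausible fine-tuning regime for a small dataset.
    for p in model.features.parameters():
        p.requires_grad = False

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)
    criterion = nn.MSELoss()  # regression to the recorded glove angles

    def train_step(images, glove_angles):
        """images: (B, 3, 224, 224); glove_angles: (B, NUM_GLOVE_SENSORS)."""
        optimizer.zero_grad()
        loss = criterion(model(images), glove_angles)
        loss.backward()
        optimizer.step()
        return loss.item()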

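The distinctiveness check mentioned in the abstract (SVM classification plus Hotelling's T² test on the glove recordings) can likewise be sketched. The data shapes, the RBF kernel, and the use of NumPy/SciPy/scikit-learn are assumptions for illustration; the paper's exact protocol is not shown in this preview.

    # Hypothetical sketch only: test whether glove recordings for two objects
    # are statistically distinct, mirroring the abstract's SVM + Hotelling's
    # T^2 checks. Shapes and library choices are assumed.
    import numpy as np
    from scipy.stats import f
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def hotelling_t2(a, b):
        """Two-sample Hotelling's T^2 test; a, b are (n_i, p) glove matrices."""
        n1, n2, p = len(a), len(b), a.shape[1]
        diff = a.mean(axis=0) - b.mean(axis=0)
        # Pooled within-group covariance of the two samples.
        pooled = ((n1 - 1) * np.cov(a, rowvar=False)
                  + (n2 - 1) * np.cov(b, rowvar=False)) / (n1 + n2 - 2)
        t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(pooled, diff)
        # Convert T^2 to an F statistic with (p, n1 + n2 - p - 1) dof.
        f_stat = t2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))
        p_value = f.sf(f_stat, p, n1 + n2 - p - 1)
        return t2, p_value

    def svm_separability(X, y):
        """Cross-validated accuracy of an SVM separating object classes
        from glove vectors; X: (n_samples, n_sensors), y: object labels."""
        return cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()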

Notes

  1. In the description of the TUB dataset, 17 subjects are mentioned; however, the actual dataset contains recordings from 18 subjects.

Author information

Correspondence to Ashraf Zia.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Zia, A., Tiddeman, B., Shaw, P. (2018). Estimating Grasping Patterns from Images Using Finetuned Convolutional Neural Networks. In: Giuliani, M., Assaf, T., Giannaccini, M. (eds) Towards Autonomous Robotic Systems. TAROS 2018. Lecture Notes in Computer Science, vol 10965. Springer, Cham. https://doi.org/10.1007/978-3-319-96728-8_6

  • DOI: https://doi.org/10.1007/978-3-319-96728-8_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-96727-1

  • Online ISBN: 978-3-319-96728-8

  • eBook Packages: Computer Science, Computer Science (R0)
