Robotic Grasping and Manipulation Competition @IROS2016: Team Tsinghua

  • Fuchun Sun
  • Huaping Liu
  • Bin Fang
  • Di Guo
  • Tao Kong
  • Chao Yang
  • Yao Huang
  • Mingxuan Jing
  • Junyi Che
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 816)

Abstract

This chapter describes the preparation for and implementation of our entry in the Robotic Grasping and Manipulation Competition @IROS 2016. Our Tsinghua Team participated in both the hand-in-hand and fully-autonomous tracks. The structure of a newly designed gripper and an algorithm for object detection and grasp pose estimation are described. The competition results demonstrate the effectiveness of the strategies used in the competition.

Keywords

Hand-in-hand · Fully-autonomous · Object detection

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Fuchun Sun (1)
  • Huaping Liu (1)
  • Bin Fang (1)
  • Di Guo (1)
  • Tao Kong (1)
  • Chao Yang (1)
  • Yao Huang (1)
  • Mingxuan Jing (1)
  • Junyi Che (1)

  1. Department of Computer Science and Technology, Tsinghua University, Beijing, China