General Object Tip Detection and Pose Estimation for Robot Manipulation

  • Conference paper
Computer Vision Systems (ICVS 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9163)

Abstract

Robot manipulation tasks such as inserting screws or pegs into holes, or automated screwing, require precise tip pose estimation. We propose a novel method to detect the tip of elongated objects and estimate its pose, and we demonstrate that the tip pose can be estimated with millimeter-level accuracy. We adopt a probabilistic, appearance-based object detection framework to detect pegs and bits for electric screwdrivers. Screws are difficult to detect with feature- or appearance-based methods because of their reflective characteristics; to overcome this, we propose a novel adaptation of RANSAC with a parallel-line model. We then employ image moments to detect the tip and estimate its pose. We show that the proposed method allows a robot to perform object insertion using only two pairs of orthogonal views, without visual servoing.
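
As a rough illustration of the moment-based step described above, the sketch below (a minimal example, not the authors' implementation; all function and variable names are hypothetical) shows how the image moments of a segmented binary mask can yield an in-plane tip estimate: the centroid and principal axis follow from the second-order central moments, and the tip candidate is the mask pixel farthest from the centroid along that axis. The two pairs of orthogonal views mentioned in the abstract would then be combined to recover the 3-D pose; this sketch covers only the single-view, image-plane computation.

```python
import numpy as np

def tip_from_moments(mask: np.ndarray):
    """Estimate a 2-D tip position and in-plane orientation from a binary mask.

    mask: 2-D boolean (or 0/1) array of the segmented elongated object.
    Returns ((tip_x, tip_y), theta), with theta the major-axis angle in radians.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("empty mask")

    # Centroid from the zeroth- and first-order moments.
    cx, cy = xs.mean(), ys.mean()

    # Second-order central moments (normalized by the object area).
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()

    # Orientation of the major axis via the standard moment-based formula.
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    axis = np.array([np.cos(theta), np.sin(theta)])

    # Project all mask pixels onto the major axis; the pixel with the largest
    # absolute projection is the tip candidate. Disambiguating the tip from
    # the object's base would require an additional cue (e.g. the grasp side).
    proj = (xs - cx) * axis[0] + (ys - cy) * axis[1]
    i = int(np.argmax(np.abs(proj)))
    return (float(xs[i]), float(ys[i])), float(theta)
```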



Acknowledgments

The research leading to these results has received funding from the European Community's Seventh Framework Programme FP7/2007-2013 (Specific Programme Cooperation, Theme 3, Information and Communication Technologies) under grant agreement no. 610878, 3rd HAND.

Author information

Corresponding author

Correspondence to Dadhichi Shukla.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Shukla, D., Erkent, Ö., Piater, J. (2015). General Object Tip Detection and Pose Estimation for Robot Manipulation. In: Nalpantidis, L., Krüger, V., Eklundh, J.O., Gasteratos, A. (eds) Computer Vision Systems. ICVS 2015. Lecture Notes in Computer Science, vol 9163. Springer, Cham. https://doi.org/10.1007/978-3-319-20904-3_33

  • DOI: https://doi.org/10.1007/978-3-319-20904-3_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-20903-6

  • Online ISBN: 978-3-319-20904-3

  • eBook Packages: Computer Science, Computer Science (R0)
