General Object Tip Detection and Pose Estimation for Robot Manipulation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9163)

Abstract

Robot manipulation tasks such as inserting pegs or screws into holes, or automated screwing, require precise tip pose estimation. We propose a novel method to detect the tip of elongated objects and estimate its pose, and we demonstrate that it achieves millimeter-level accuracy. We adopt a probabilistic, appearance-based object detection framework to detect pegs and bits for electric screwdrivers. Screws are difficult to detect with feature- or appearance-based methods because of their reflective surfaces. To overcome this, we propose a novel adaptation of RANSAC with a parallel-line model. Subsequently, we employ image moments to detect the tip and estimate its pose. We show that the proposed method allows a robot to perform object insertion using only two pairs of orthogonal views, without visual servoing.
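The image-moments step mentioned above can be illustrated with a brief sketch. The following Python snippet is a minimal illustration only, not the paper's implementation: the function name tip_from_moments and the assumption of a binary segmentation mask as input are hypothetical. It computes the object's centroid and principal-axis orientation from second-order central moments and takes the extreme mask pixel along that axis as a tip candidate.

    # Minimal sketch (assumed helper, not the paper's code): estimate the tip of an
    # elongated object from image moments of a binary segmentation mask.
    import cv2
    import numpy as np

    def tip_from_moments(mask):
        """mask: binary uint8 image; nonzero pixels belong to the object."""
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            return None  # empty mask, nothing to estimate
        # Centroid from raw moments.
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # In-plane orientation of the principal axis from central second-order moments.
        theta = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
        axis = np.array([np.cos(theta), np.sin(theta)])
        # Tip candidate: the object pixel farthest from the centroid along the
        # principal axis (either extreme may be the physical tip, depending on the object).
        ys, xs = np.nonzero(mask)
        pts = np.stack([xs, ys], axis=1).astype(float)
        proj = (pts - np.array([cx, cy])) @ axis
        tip = pts[np.argmax(np.abs(proj))]
        return (cx, cy), theta, (tip[0], tip[1])

Such a per-view estimate could then, as the abstract indicates, be combined across two pairs of orthogonal views to obtain the tip pose used for insertion; the details of that combination are in the paper itself.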

Keywords

Pose estimation · Tool tip detection · Peg-in-hole insertion

Acknowledgments

The research leading to these results has received funding from the European Community's Seventh Framework Programme FP7/2007-2013 (Specific Programme Cooperation, Theme 3, Information and Communication Technologies) under grant agreement no. 610878, 3rd HAND.


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Dadhichi Shukla (1)
  • Özgür Erkent (1)
  • Justus Piater (1)

  1. Intelligent and Interactive Systems, Institute of Computer Science, University of Innsbruck, Innsbruck, Austria