In-hand Manipulation for Active Object Recognition

  • Xiang Dou
  • Xinying Xu
  • Huaping Liu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11740)

Abstract

Visual object recognition systems mounted on autonomous mobile agents face the challenge of unconstrained data, but in a human-robot interactive environment they can also accomplish the recognition task by exploring actively. In this paper, we propose an active object recognition method that seeks out more informative observation positions in order to reduce the uncertainty of object recognition. We use Bayes' rule to iteratively accumulate perceptual evidence from images, and actively explore better recognition positions through a control strategy. This resembles the way humans explore actively during cognition, establishing a coupling between perception and action that helps select suitable postures for predicting object labels. Furthermore, we analyze the effect of inhibiting exploratory behavior by comparing against passive strategies. Results on the GERMS dataset confirm that the proposed method learns an effective category-identification strategy, and that active action selection further boosts recognition performance.
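The paper itself does not include code, but the core idea in the abstract, recursive Bayesian accumulation of image evidence combined with a control strategy that picks uncertainty-reducing viewpoints, can be illustrated with a minimal sketch. All names (`bayes_update`, `select_action`), the toy observation models, and the greedy one-step-lookahead action rule are illustrative assumptions, not the authors' actual method:

```python
import numpy as np

def bayes_update(belief, likelihood):
    """One step of Bayes' rule: posterior ∝ likelihood × prior, renormalised."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

def expected_entropy(belief, obs_model):
    """Expected posterior entropy after observing from a viewpoint whose
    observation model is obs_model[c, o] = P(o | class c)."""
    h = 0.0
    for o in range(obs_model.shape[1]):
        p_o = float(belief @ obs_model[:, o])  # predictive probability of o
        if p_o > 0:
            h += p_o * entropy(bayes_update(belief, obs_model[:, o]))
    return h

def select_action(belief, models):
    """Greedy one-step lookahead (an illustrative control strategy):
    move to the viewpoint that minimises expected belief entropy."""
    return min(range(len(models)), key=lambda a: expected_entropy(belief, models[a]))

# Toy setting: 2 object classes, 2 viewpoints.
# Viewpoint 0 is uninformative; viewpoint 1 discriminates the classes well.
models = [
    np.array([[0.5, 0.5], [0.5, 0.5]]),  # P(o | c) seen from viewpoint 0
    np.array([[0.9, 0.1], [0.1, 0.9]]),  # P(o | c) seen from viewpoint 1
]

belief = np.array([0.5, 0.5])                        # uniform prior over classes
action = select_action(belief, models)               # picks the informative viewpoint
belief = bayes_update(belief, models[action][:, 0])  # observe o = 0 from there
```

In this toy run the controller chooses viewpoint 1 and a single observation sharpens the belief from (0.5, 0.5) to (0.9, 0.1); iterating the observe-update-move loop until the belief passes a confidence threshold mirrors the perception-action coupling the abstract describes.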

Keywords

Active object recognition · Bayesian perception · Visual recognition

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 61673238.

References

  1. Weimer, D., Scholz-Reiter, B., Shpitalni, M.: Design of deep convolutional neural network architectures for automated feature extraction in industrial inspection. CIRP Ann. 65(1), 417–420 (2016)
  2. Urmson, C.P., Dolgov, D.A., Nemec, P.: Driving pattern recognition and safety control. U.S. Patent No. 8,634,980, 21 January 2014
  3. Fleck, S., Straßer, W.: Smart camera based monitoring system and its application to assisted living. Proc. IEEE 96, 1698–1714 (2008)
  4. Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
  5. Kendoul, F., Nonami, K., Fantoni, I., Lozano, R.: An adaptive vision-based autopilot for mini flying machines guidance, navigation and control. Auton. Robots 27, 165 (2009)
  6. Malmir, M., Sikka, K., Forster, D., Movellan, J.R., Cottrell, G.: Deep Q-learning for active recognition of GERMS: baseline performance on a standardized dataset for active learning. In: BMVC (2015)
  7. Gibson, J.J.: The Ecological Approach to Visual Perception: Classic Edition. Psychology Press, London (2014)
  8. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics. MIT Press, Cambridge (2005)
  9. Thrun, S.: Probabilistic algorithms in robotics. AI Mag. 21, 93 (2000)
  10. Lloyd, J.: Goal-based learning in tactile robotics. Dissertation, Faculty of Environment and Technology, University of the West of England, Bristol (2016)
  11. Bajcsy, R., Aloimonos, Y., Tsotsos, J.K.: Revisiting active perception. Auton. Robots 42, 177–196 (2018)
  12. Willett, R., Nowak, R., Castro, R.M.: Faster rates in regression via active learning. In: Advances in Neural Information Processing Systems (2006)
  13. Settles, B.: Active Learning. Morgan & Claypool Publishers, San Rafael (2012)
  14. Ward-Cherrier, B., Cramphorn, L., Lepora, N.F.: Tactile manipulation with a TacThumb integrated on the open-hand M2 gripper. IEEE Robot. Autom. Lett. 1, 169–175 (2016)
  15. Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252 (2015)
  16. Vincent, B.T.: A tutorial on Bayesian models of perception. J. Math. Psychol. 66, 103–114 (2015)
  17. Martinez-Hernandez, U., Dodd, T.J., Prescott, T.J.: Feeling the shape: active exploration behaviors for object recognition with a robotic hand. IEEE Trans. Syst. Man Cybern.: Syst. 48, 1–10 (2017)
  18. Oudeyer, P.-Y., Kaplan, F., Hafner, V.V.: Intrinsic motivation systems for autonomous mental development. IEEE Trans. Evol. Comput. 11, 265–286 (2007)
  19. Gottlieb, J., Oudeyer, P.Y., Lopes, M., Baranes, A.: Information-seeking, curiosity, and attention: computational and neural mechanisms. Trends Cogn. Sci. 17, 585–593 (2013)
  20. Fortenberry, B., Chenu, J., Movellan, J.R.: RUBI: a robotic platform for real-time social interaction. In: Proceedings of the International Conference on Development and Learning, ICDL 2004. The Salk Institute, San Diego (2004)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan, China
  2. State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science and Technology, Tsinghua University, Beijing, China