
Virtual plate based controlling strategy of toy play for robot’s communication development in JA space

  • Wei Wang
  • Xiao-Dan Huang
Research Article

Abstract

Toy play is a basic skill for a humanoid robot once it has acquired joint attention (JA) ability. Because this skill supports human-robot interaction and cooperation, it should be realized in order to enhance the robot's ability to communicate with people. In this paper, we study a toy play control strategy in JA space based on a virtual plate, using a serial robot arm with five degrees of freedom (5-DoF). For this purpose, a reachable space of joint attention is first constructed, and the toy play control strategy is then presented in detail, where a virtual plate is used to enhance the toy play effect. To realize the skill more effectively, toy play energy and several constraint relations are analyzed. By comparing audio waveforms in the experiments, the good performance of the toy play strategy is demonstrated.
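
As a rough illustration of the approach summarized in the abstract, the following Python sketch samples the joint space of a hypothetical 5-DoF serial arm, keeps the end-effector positions that fall inside a shared-gaze (JA) cone, and then filters them against a virtual plate. The DH parameters, gaze cone, plate pose, and tolerances are illustrative assumptions, not values or code from the paper.

# Minimal sketch (not the authors' implementation): sampling a reachable
# JA workspace for an assumed 5-DoF serial arm and filtering candidate
# toy-play poses with a virtual plate. All numeric values are placeholders.
import numpy as np

# Hypothetical Denavit-Hartenberg parameters (a, alpha, d) for each joint.
DH = [
    (0.00, np.pi / 2, 0.10),
    (0.20, 0.0,       0.00),
    (0.20, 0.0,       0.00),
    (0.00, np.pi / 2, 0.00),
    (0.00, 0.0,       0.08),
]

def dh_matrix(theta, a, alpha, d):
    # Standard DH homogeneous transform for one joint.
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(thetas):
    # End-effector position for a vector of 5 joint angles.
    T = np.eye(4)
    for theta, (a, alpha, d) in zip(thetas, DH):
        T = T @ dh_matrix(theta, a, alpha, d)
    return T[:3, 3]

def in_ja_cone(p, apex, axis, half_angle):
    # True if point p lies inside the assumed shared-gaze (JA) cone.
    v = p - apex
    n = np.linalg.norm(v)
    if n < 1e-9:
        return False
    return np.arccos(np.clip(np.dot(v / n, axis), -1.0, 1.0)) <= half_angle

def near_virtual_plate(p, point_on_plate, normal, tol=0.02):
    # True if p lies within tol of the virtual plate plane.
    return abs(np.dot(p - point_on_plate, normal)) <= tol

rng = np.random.default_rng(0)
ja_apex = np.array([0.0, 0.0, 0.3])            # assumed gaze origin
ja_axis = np.array([1.0, 0.0, 0.0])            # assumed shared-gaze direction
plate_pt = np.array([0.3, 0.0, 0.1])           # assumed virtual plate pose
plate_n = np.array([0.0, 0.0, 1.0])

reachable_ja_points = []
for _ in range(20000):                          # Monte-Carlo joint-space sampling
    thetas = rng.uniform(-np.pi / 2, np.pi / 2, size=5)
    p = forward_kinematics(thetas)
    if in_ja_cone(p, ja_apex, ja_axis, np.deg2rad(30)) and \
       near_virtual_plate(p, plate_pt, plate_n):
        reachable_ja_points.append(p)

print(f"{len(reachable_ja_points)} sampled poses lie in the JA space "
      "and on the virtual plate")

The retained point set approximates the intersection of the arm's reachable workspace, the JA region, and the virtual plate, which is the kind of region in which toy-play motions would be planned under these assumptions.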

Keywords

Human-robot cooperation · Joint attention (JA) space · Reachable space · Toy play ability · Virtual plate



Acknowledgments

This work was supported by the Hebei Province Natural Science Foundation for Youths (No. F2015402108), the Foundation for Young Scholars of Hebei Educational Committee (No. QN20131152), and the Handan Municipal Science and Technology Projects (No. 1421103054).


Copyright information

© Institute of Automation, Chinese Academy of Sciences and Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. School of Information and Electrical Engineering, Hebei University of Engineering, Handan, China
