
A Gesture Recognition Method Based on Spiking Neural Networks for Cognition Development

  • Dong Niu
  • Dengju Li
  • Rui Yan
  • Huajin Tang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11301)

Abstract

This paper proposes a gesture recognition method based on spiking neural networks (SNNs). The method supports cognitive development by associating recognition results with semantic information about the observed target. First, a single shot multi-box detector (SSD) is used to detect and locate the target object. Then, two SNNs based on the Izhikevich neuron model are used to record the trajectories of planar motion and depth motion. After the data extracted from the SNNs are projected and translated, a self-organizing map (SOM) and a support vector machine (SVM) are applied to perform gesture recognition. Finally, an associative memory model links gestures with semantics to achieve cognition. The experimental results show that the SNNs can effectively memorize the spatio-temporal information of various gestures, and that clustering and classification based on the spike trains produced by the Izhikevich model achieve good results.
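
As a purely illustrative sketch (not code from the paper), the Python snippet below simulates a single Izhikevich neuron of the kind the two SNNs are built from. The parameter values (a, b, c, d), the constant input current, and the function name are generic regular-spiking defaults assumed for this example; the integration is a simple forward-Euler step.

    import numpy as np

    def izhikevich_spike_train(I, T=200.0, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
        """Simulate one Izhikevich neuron driven by input current I (scalar or array).

        Dynamics (forward Euler):
            v' = 0.04*v^2 + 5*v + 140 - u + I
            u' = a*(b*v - u)
            on v >= 30 mV: v <- c, u <- u + d
        Returns the spike times in ms.
        """
        n = int(T / dt)
        I = np.broadcast_to(np.asarray(I, dtype=float), (n,))
        v, u = c, b * c                      # start from the resting state
        spikes = []
        for t in range(n):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I[t])
            u += dt * a * (b * v - u)
            if v >= 30.0:                    # spike threshold reached
                spikes.append(t * dt)        # record spike time
                v, u = c, u + d              # reset membrane variables
        return np.array(spikes)

    # Example: a constant input current produces a regular spike train
    print(izhikevich_spike_train(10.0))

In the described pipeline, the planar and depth trajectories returned by the SSD detector would drive populations of such neurons, and the resulting spike trains are what the SOM and SVM stages operate on; how the trajectories are encoded as input currents is not specified here and the constant-current drive above is only an assumption for the sketch.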

Keywords

Cognitive development · Gesture recognition · Spiking neural network

Acknowledgments

This work was supported by the National Natural Science Foundation of China under grant number 61773271.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Neuromorphic Computing Research Center, College of Computer Science, Sichuan University, Chengdu, China
