Children Activity Descriptions from Visual and Textual Associations

  • Somnuk Phon-Amnuaisuk
  • Ken T. Murata
  • Praphan Pavarangkoon
  • Takamichi Mizuhara
  • Shiqah Hadi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11909)

Abstract

Visual monitoring devices augmented with the ability to describe children’s activities, e.g., whether a child is asleep, awake, crawling, or climbing, open up possibilities for applications that promote children’s safety and well-being. We explore children’s activity description based on an encoder-decoder framework. The correlations between the semantics of an image and its textual description are captured using a convolutional neural network (CNN) and a recurrent neural network (RNN). Encoding semantic information as CNN activation patterns and decoding the textual description with an RNN-based probabilistic language model can produce relevant descriptions, but these often lack precision, because a probabilistic model generates descriptions from the frequency of words conditioned on their contexts. In this work, we explore the effects of adding context, such as domain-specific images and pose information, to the encoder-decoder models.
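For concreteness, the sketch below shows the kind of CNN-RNN encoder-decoder captioner the abstract describes, written in PyTorch. It is a minimal illustration under stated assumptions, not the authors’ implementation: the ResNet-18 backbone, the layer sizes, and the point at which pose or domain context would be injected are all assumptions made for this example.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class EncoderCNN(nn.Module):
        """Encode an image as a fixed-length vector of CNN activations."""
        def __init__(self, embed_size):
            super().__init__()
            # Assumed backbone; in practice pre-trained weights would be loaded.
            backbone = models.resnet18(weights=None)
            # Keep the convolutional stack and average pooling, drop the classifier head.
            self.features = nn.Sequential(*list(backbone.children())[:-1])
            self.fc = nn.Linear(backbone.fc.in_features, embed_size)

        def forward(self, images):
            x = self.features(images).flatten(1)  # (batch, 512)
            return self.fc(x)                     # (batch, embed_size)

    class DecoderRNN(nn.Module):
        """Probabilistic language model: predict each next word given the image context."""
        def __init__(self, embed_size, hidden_size, vocab_size):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_size)
            self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
            self.fc = nn.Linear(hidden_size, vocab_size)

        def forward(self, image_feats, captions):
            # The image embedding acts as the first "token" of the sequence;
            # extra context (e.g. pose features) could be concatenated to
            # image_feats here, which is one way to add the context the
            # abstract mentions.
            inputs = torch.cat([image_feats.unsqueeze(1), self.embed(captions)], dim=1)
            hidden, _ = self.lstm(inputs)
            return self.fc(hidden)  # word logits at every time step

Training minimizes cross-entropy between these logits and the ground-truth captions, and inference emits one word at a time. Because each word is drawn from frequencies conditioned on the preceding context, such a decoder can produce fluent but imprecise descriptions, which is the weakness the added domain-specific images and pose information aim to address.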

Acknowledgments

This publication is an output of the ASEAN IVO (http://www.nict.go.jp/en/asean_ivo/index.html) project “Event Analysis: Applications of computer vision and AI in smart tourism industry” and was financially supported by NICT (http://www.nict.go.jp/en/index.html). We wish to thank the Centre for Innovative Engineering (CIE), Universiti Teknologi Brunei, for its partial financial support of this research. We would also like to thank the anonymous reviewers for their constructive comments and suggestions.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Somnuk Phon-Amnuaisuk (1, 2)
  • Ken T. Murata (3)
  • Praphan Pavarangkoon (3)
  • Takamichi Mizuhara (4)
  • Shiqah Hadi (1, 2)
  1. Media Informatics Special Interest Group, CIE, Universiti Teknologi Brunei, Gadong, Brunei
  2. School of Computing and Informatics, Universiti Teknologi Brunei, Gadong, Brunei
  3. National Institute of Information and Communications Technology, Tokyo, Japan
  4. CLEALINK TECHNOLOGY Co., Ltd., Kyoto, Japan
