Person Search by Queried Description in Vietnamese Natural Language
Surveillance camera systems are now widely deployed, from public places to private houses, producing huge image databases. Recent years have witnessed significant improvements in surveillance video analysis, especially for person detection and tracking. However, finding a person of interest in these databases remains a very challenging problem. Most existing person search methods are based on the assumption that an example image of the person of interest is available; this assumption, however, is not always satisfied in practical situations. Therefore, person search using a natural language description as the query has recently attracted the attention of researchers, although existing work is mainly dedicated to queries in English. In this paper, we propose a person search method with queries in Vietnamese natural language. For this, a Gated Neural Attention - Recurrent Neural Network (GNA-RNN) is employed to learn the affinity between description-image pairs and then to estimate the similarity between the query and the images in the database. To evaluate the effectiveness of the proposed method, extensive experiments have been performed on two datasets: CUHK-PEDES, with descriptions translated into Vietnamese, and our own collected dataset, named VnPersonSearch. The promising experimental results show the great potential of the proposed method.
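At retrieval time, the approach described above reduces to ranking gallery images by a similarity score between the embedded text query and each embedded image. The sketch below illustrates only that ranking step with cosine similarity and toy vectors; the function names and embeddings are hypothetical, and in the actual method the score is produced by the trained GNA-RNN rather than a fixed metric.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_gallery(query_emb, gallery_embs):
    """Rank gallery images by descending similarity to the text query.

    query_emb:    embedding of the Vietnamese description (1-D array)
    gallery_embs: list of image embeddings, one per gallery image
    Returns gallery indices, best match first.
    """
    scores = [cosine_similarity(query_emb, g) for g in gallery_embs]
    return sorted(range(len(scores)), key=lambda i: -scores[i])

# Toy example with 3-D embeddings (real embeddings would come from the
# text and image branches of a trained network such as GNA-RNN).
query = np.array([1.0, 0.0, 0.0])
gallery = [np.array([0.0, 1.0, 0.0]),   # orthogonal -> low score
           np.array([0.9, 0.1, 0.0]),   # close      -> high score
           np.array([0.5, 0.5, 0.0])]
print(rank_gallery(query, gallery))  # best match is image 1
```

Evaluation metrics such as top-1/top-10 accuracy are then computed directly from the position of the ground-truth image in this ranked list.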
Keywords: Person search · Natural language · Vietnamese query · Deep learning
This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 102.01-2017.315.
- 2. Li, S., Xiao, T., Li, H., Zhou, B., Yue, D., Wang, X.: Person search with natural language description. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1970–1979 (2017)
- 3. Yamaguchi, M., Saito, K., Ushiku, Y., Harada, T.: Spatio-temporal person retrieval via natural language queries. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1453–1462 (2017)
- 4. Zhou, T., Chen, M., Yu, J., Terzopoulos, D.: Attention-based natural language person retrieval. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 27–34 (2017)
- 5. Gkioxari, G., Malik, J.: Finding action tubes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 759–768 (2015)
- 6. Nagaraja, V.K., Morariu, V.I., Davis, L.S.: Modeling context between objects for referring expression understanding. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9908, pp. 792–807. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46493-0_48
- 7. Soomro, K., Idrees, H., Shah, M.: Action localization in videos through context walk. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 3280–3288 (2015)
- 8. Reed, S., Akata, Z., Lee, H., Schiele, B.: Learning deep representations of fine-grained visual descriptions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 49–58 (2016)
- 9. Frome, A., Corrado, G.S., Shlens, J., Bengio, S., Dean, J., Mikolov, T.: DeViSE: a deep visual-semantic embedding model. In: Advances in Neural Information Processing Systems, pp. 2121–2129 (2013)
- 10. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. CoRR abs/1512.03385 (2015)
- 11. Nguyen, D.Q., Nguyen, D.Q., Vu, T., Dras, M., Johnson, M.: A fast and accurate Vietnamese word segmenter. In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). European Language Resources Association (ELRA), Miyazaki, Japan, May 2018