Abstract
As deep learning evolves, datasets are growing ever larger, driving up the cost of manual labeling. Zero-shot learning removes the need for costly labeled training data and, with only slight modifications, lets a model predict classes it has never seen. However, the accuracy of zero-shot learning remains relatively low, which makes practical applications challenging. Some recent work improves accuracy by manually annotating features, but this again requires labor-intensive input. To reduce these labor costs, we employ ChatGPT, which generates features automatically without any manual involvement. Importantly, this approach maintains high accuracy, surpassing other automated methods.
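The paper's pipeline is not reproduced here, but the core idea (querying ChatGPT for class descriptions instead of hand-annotating attributes, then matching those descriptions against visual features) can be sketched. The snippet below is a minimal illustration, assuming the OpenAI Python client, the gpt-3.5-turbo chat model, and the open-source clip package with a ViT-B/32 encoder; the prompt wording and the class names are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: generate action-class descriptions with ChatGPT, then
# embed them with CLIP's text encoder for zero-shot matching. Model
# names, prompt wording, and class list are illustrative assumptions.
import torch
import clip
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def describe_action(class_name: str) -> str:
    """Ask ChatGPT for a short visual description of an action class."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical choice of chat model
        messages=[{
            "role": "user",
            "content": f"Describe the human action '{class_name}' in one "
                       "sentence, focusing on visible body movements.",
        }],
    )
    return response.choices[0].message.content


device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Unseen action classes (e.g., drawn from UCF101); the generated
# descriptions stand in for manually annotated attributes.
classes = ["archery", "juggling balls", "playing violin"]
descriptions = [describe_action(c) for c in classes]

with torch.no_grad():
    tokens = clip.tokenize(descriptions, truncate=True).to(device)
    text_features = model.encode_text(tokens)
    text_features /= text_features.norm(dim=-1, keepdim=True)

# A video or frame embedding from CLIP's image encoder can now be
# compared against `text_features` by cosine similarity to classify
# unseen actions without any labeled training examples.
```

Replacing hand-written attribute annotations with generated descriptions is what removes the manual labeling step; in principle, any chat-capable LLM could stand in for ChatGPT here.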
Funding
This work was supported by the Japan Society for the Promotion of Science KAKENHI Grant Numbers JP19K12039 and JP22H03658.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Wu, N., Kera, H., Kawamoto, K. (2024). Zero-Shot Action Recognition with ChatGPT-Based Instruction. In: Xin, B., Kubota, N., Chen, K., Dong, F. (eds) Advanced Computational Intelligence and Intelligent Informatics. IWACIII 2023. Communications in Computer and Information Science, vol 1932. Springer, Singapore. https://doi.org/10.1007/978-981-99-7593-8_3
DOI: https://doi.org/10.1007/978-981-99-7593-8_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-7592-1
Online ISBN: 978-981-99-7593-8
eBook Packages: Computer Science, Computer Science (R0)