Abstract
Can negation be depicted? It has been claimed in various fields, including philosophy, cognitive science, and AI, that depicting negation through visual expressions such as images and pictures is challenging. Recent empirical findings have shown that humans can indeed understand certain images as expressing negation, whereas machine learning models trained on image data do not exhibit this ability. To elucidate the computational abilities underlying the understanding of negation in images, this study first focuses on the image captioning task, specifically the performance of models pre-trained on large linguistic and image datasets in generating text from images. Our experiment demonstrates that a state-of-the-art model achieves some success in generating consistent captions from images, particularly for photographs rather than illustrations. However, when it comes to generating captions containing negation, the model is not as proficient as humans. To further investigate the performance of machine learning models in a more controlled setting, we conducted an additional analysis using a Visual Question Answering (VQA) task, which enables us to specify where in the image the model should focus its attention when answering a question. In this setting, the model’s performance improved. These results shed light on the disparities in attentional focus between humans and machine learning models.
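To make the two experimental settings concrete, the following sketch shows how captions and VQA answers can be obtained from publicly available pre-trained vision-and-language models of the kind discussed in the paper (BLIP for captioning and ViLT fine-tuned on VQA). This is a minimal illustration, not a reproduction of the paper’s protocol: the image path, question, and crop coordinates are hypothetical, and cropping the image to a region of interest is only one simple way to restrict what the model attends to.

```python
# Minimal sketch of the two tasks discussed in the abstract, using
# off-the-shelf Hugging Face checkpoints (BLIP for captioning, ViLT for
# VQA). The image path, question, and crop box are illustrative
# assumptions, not the paper's actual stimuli or procedure.
from PIL import Image
from transformers import (
    BlipForConditionalGeneration,
    BlipProcessor,
    ViltForQuestionAnswering,
    ViltProcessor,
)

image = Image.open("photo.jpg").convert("RGB")  # hypothetical input image

# --- Task 1: image captioning ----------------------------------------
cap_processor = BlipProcessor.from_pretrained(
    "Salesforce/blip-image-captioning-base")
cap_model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")
inputs = cap_processor(images=image, return_tensors="pt")
out = cap_model.generate(**inputs, max_new_tokens=30)
print("caption:", cap_processor.decode(out[0], skip_special_tokens=True))

# --- Task 2: visual question answering --------------------------------
vqa_processor = ViltProcessor.from_pretrained(
    "dandelin/vilt-b32-finetuned-vqa")
vqa_model = ViltForQuestionAnswering.from_pretrained(
    "dandelin/vilt-b32-finetuned-vqa")

# One simple way to control where the model "looks": crop the image to a
# region of interest before asking the question (coordinates are made up).
region = image.crop((100, 50, 400, 350))
question = "Is the man holding an umbrella?"  # answer may be negative
encoding = vqa_processor(region, question, return_tensors="pt")
logits = vqa_model(**encoding).logits
print("answer:", vqa_model.config.id2label[logits.argmax(-1).item()])
```

The contrast between the two calls mirrors the contrast drawn in the abstract: captioning is open-ended generation over the whole image, whereas VQA reduces the output space to short answers about a specified region, isolating the model’s handling of negation from the difficulty of free-form text generation.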
Acknowledgements
All comic images in this paper are from the Manga-109 dataset and are licensed for use: “HighschoolKimengumi vol. 20”, p. 139, © Motoei Niizawa/Shueisha, for illust 1 of Fig. 1; “MoeruOnisan vol. 19”, p. 58, © Tadashi Sato/Shueisha, for illust 2 of Fig. 1; and “OL Lunch”, p. 9, © Yoko Sanri/Shogakukan. The photographs are retrieved from MS-COCO; the COCO image ids are #449681 for Photo 7, #163084 for Photo 2, #51587 for Photo 3, and #65737 for Photo 1.
This study was supported by JSPS KAKENHI Grant Numbers JP20K12782 and JP21K00016, as well as JST CREST Grant Number JPMJCR2114.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Sato, Y., Mineshima, K. (2024). Can Machines and Humans Use Negation When Describing Images? In: Baratgin, J., Jacquet, B., Yama, H. (eds.) Human and Artificial Rationalities. HAR 2023. Lecture Notes in Computer Science, vol. 14522. Springer, Cham. https://doi.org/10.1007/978-3-031-55245-8_3
DOI: https://doi.org/10.1007/978-3-031-55245-8_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-55244-1
Online ISBN: 978-3-031-55245-8