Aesthetic Judgments, Movement Perception and the Neural Architecture of the Visual System
We have developed a deep learning-based AI creativity system that can produce computer-generated artworks, both still images and time-based pieces (videos). In this article, we briefly describe the system and then demonstrate its application in a psychological study of aesthetic experience. We also propose a new hypothesis about a potential interaction between the neural architecture of the two visual pathways and the effect of movement perception on the formation of aesthetic judgments. Specifically, we postulate that perceived movement within the visual scene engages reflexive attention, shifting attentional focus towards the processing of visual change, and thereby affects how information is relayed via the dorsal and ventral streams. We outline a recent pilot study in support of this framework, the first study to investigate the relationship between the two visual streams and aesthetic experience. The study yielded evidence consistent with our hypothesis: time-based artworks showed higher aesthetic appeal at slower playback speeds.
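The playback-speed manipulation in the pilot study can be sketched with ffmpeg's `setpts` video filter, which retimes presentation timestamps. This is a minimal illustration, not the authors' actual pipeline: the helper function name, filenames, and the choice to drop audio are all hypothetical.

```python
def ffmpeg_retime_cmd(src, dst, speed):
    """Build an ffmpeg command that retimes a video to `speed`x playback.

    setpts=PTS/speed stretches timestamps when speed < 1 (slower playback)
    and compresses them when speed > 1 (faster playback). Audio is dropped
    with -an, which suits silent time-based artworks (an assumption here).
    """
    if speed <= 0:
        raise ValueError("speed must be positive")
    return [
        "ffmpeg", "-i", src,
        "-filter:v", f"setpts=PTS/{speed}",  # rescale presentation timestamps
        "-an",                               # no audio track in the output
        dst,
    ]

# Hypothetical half-speed variant of a stimulus video:
print(ffmpeg_retime_cmd("artwork.mp4", "artwork_half_speed.mp4", 0.5))
```

Running the printed command (with ffmpeg installed) would produce a version of the clip at half the original playback speed, the kind of slowed variant compared against the original in the study.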
Keywords: Neuroscience · Brain simulation · Artificial intelligence · Deep learning · Visual pathways · Neural pathways · Neuro-architecture · Aesthetics
We thank our colleague Graeme McCaig, who was instrumental in creating our modified Deep Dream system and invaluable as a mentor to this work. This work was partially supported by SSHRC and NSERC grants.
- DiPaola S, McCaig G, Gabora L (2018) Informing Artificial intelligence generative techniques using cognitive theories of human creativity. Procedia Comput Sci Special Issue: Bio Inspired Cognitive Architectures, 11 pagesGoogle Scholar
- DiPaola S, McCaig R (2016) Using artificial intelligence techniques to emulate the creativity of a portrait painter. In: Proceedings of Electronic Visualization and the Arts. British Computer Society, London. 8 pagesGoogle Scholar
- Goodale MA, Milner AD (1992) Separate visual pathways for perception and action. TINS 15:19–25Google Scholar
- How to speed up/slow down a video (2019). FFMPEG Wiki. Accessed 30 mar 2019Google Scholar
- Jia Y, Shelhamer E, Donahue J, Karayev S, Long J, Girshick R et al (2014) Caffe: convolutional architecture for fast feature embedding. In: Proceedings of the ACM international conference on multimedia, pp 675–678. ACMGoogle Scholar
- Mordvintsev A, Olah C, Tyka M (2015) Online Blog. http://googleresearch.blogspot.ca/2015/06/inceptionism-going-deeper-into-neural.html
- Pelowski M, Leder H, Mitschke V, Specker E, Gerger G, Tinio PPL, Vaporova E, Bieg T, Husslein-Arco A (2018) Capturing aesthetic experiences with installation art: an empirical assessment of emotion, evaluations, and mobile eye tracking in Olafur Eliasson’s “Baroque, Baroque!”. Front Psychol 9:1255. https://doi.org/10.3389/fpsyg.2018.01255CrossRefGoogle Scholar