A Saliency-Based Technique for Advertisement Layout Optimisation to Predict Customers’ Behaviour

  • Conference paper
  • First Online:
Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Abstract

Customer retail environments represent an exciting and challenging context in which to develop and deploy cutting-edge computer vision techniques for more engaging customer experiences. Visual attention plays a critical role in the analysis of customers' behaviour towards the advertising campaigns continuously displayed in shops and retail environments. In this paper, we approach the optimisation of advertisement layout content with the aim of grabbing the audience's visual attention more effectively. We propose a fully automatic method that delivers the most effective layout content configuration, using saliency maps, out of every possible arrangement of a given set of images within a given grid layout. Visual saliency deals with the identification of the most perceptually important regions of an image. We assess the feasibility of saliency maps as a tool for optimising advertisements by considering all possible permutations of the images that compose the advertising campaign. We start by analysing advertising campaigns consisting of a given spatial layout and a certain number of images, and run a deep learning-based saliency model over all permutations. Noticeable differences between global and local saliency maps emerge across different arrangements of the same images, suggesting that each image contributes to the global visual saliency through both its content and its location within the layout. Building on this observation, we employ a set of advertising images to set up a graphical campaign with a given design, and extract relative variance values from the local saliency maps of all permutations. We hypothesise that the inverse of the relative variance can be used as an Effectiveness Score (ES) to identify those layout permutations showing the most balanced spatial distribution of salient pixels. A group of 20 participants ran eye-tracking sessions over the same advertising layouts to validate the proposed method.
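The scoring procedure sketched in the abstract can be illustrated in a few lines of code. This is a minimal sketch under stated assumptions, not the authors' implementation: the deep saliency model is replaced by an arbitrary `saliency_fn` callback, and the function names (`effectiveness_score`, `best_permutation`) and the relative-variance formula (per-cell variance divided by the squared mean) are hypothetical choices for illustration.

```python
import itertools
import numpy as np

def effectiveness_score(saliency_map, grid_shape):
    """ES for one layout: the inverse of the relative variance of the
    per-cell salient-pixel totals (higher = more balanced saliency)."""
    rows, cols = grid_shape
    h, w = saliency_map.shape
    ch, cw = h // rows, w // cols
    cell_sums = np.array([
        saliency_map[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw].sum()
        for r in range(rows) for c in range(cols)
    ], dtype=float)
    mean = cell_sums.mean()
    rel_var = cell_sums.var() / (mean ** 2 + 1e-12)  # relative variance
    return 1.0 / (rel_var + 1e-12)

def best_permutation(images, grid_shape, saliency_fn):
    """Place the images in the grid in every possible order and return
    the permutation with the highest Effectiveness Score."""
    rows, cols = grid_shape
    best_perm, best_es = None, -np.inf
    for perm in itertools.permutations(range(len(images))):
        # assemble the composite layout for this permutation
        row_strips = [
            np.hstack([images[perm[r * cols + c]] for c in range(cols)])
            for r in range(rows)
        ]
        layout = np.vstack(row_strips)
        es = effectiveness_score(saliency_fn(layout), grid_shape)
        if es > best_es:
            best_perm, best_es = perm, es
    return best_perm, best_es
```

Note that exhaustive enumeration costs n! saliency evaluations for n images, which is tractable only for the small grids (a handful of images) considered in the paper.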



Acknowledgment

This research was supported by Innovate UK Smart Grants (39012) - Shoppar: Dynamically Optimised Digital Content.

Author information

Correspondence to Alessandro Bruno.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Bruno, A., Lancette, S., Zhang, J., Moore, M., Ward, V.P., Chang, J. (2021). A Saliency-Based Technique for Advertisement Layout Optimisation to Predict Customers’ Behaviour. In: Del Bimbo, A., et al. Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol 12662. Springer, Cham. https://doi.org/10.1007/978-3-030-68790-8_39

  • DOI: https://doi.org/10.1007/978-3-030-68790-8_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68789-2

  • Online ISBN: 978-3-030-68790-8

  • eBook Packages: Computer Science (R0)
