Abstract
Posters are a widely used and powerful tool for communication. They are highly informative yet are typically viewed for only 3 s, which calls for efficient and effective information delivery. It is therefore important to know where people look when viewing posters. Saliency models can be of great help when an expensive and time-consuming eye-tracking experiment is not an option. However, current datasets for training saliency models mainly cover natural scenes, which makes research on saliency models for posters difficult. To address this problem, we collected 1700 high-quality posters together with eye-tracking data in which each image was viewed by 15 participants. This can serve as groundwork for future research on saliency prediction for posters. Notably, posters are rich in text (e.g., titles, slogans, description paragraphs). These different types of text serve different functions, making some relatively more important than others. Nevertheless, this difference is largely neglected in current studies, where researchers place equal emphasis on all text regions; the problem is especially acute for saliency models for posters. Our further analysis of the eye-tracking results, with a focus on text, offers some insights into this issue.
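As a side note on how eye-tracking data of this kind is typically turned into ground truth for saliency models: a common approach (not specific to this paper, and the parameter values below are illustrative assumptions) is to pool the fixation points of all viewers of an image into a binary fixation map and blur it with a Gaussian whose width roughly matches one degree of visual angle. A minimal sketch:

```python
# Hypothetical sketch of standard ground-truth construction for saliency
# datasets: pooled fixations -> Gaussian-blurred saliency map.
# The coordinates and sigma below are illustrative, not from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixations_to_saliency_map(fixations, height, width, sigma=30.0):
    """Aggregate (x, y) fixation points from all viewers into a binary
    map, then smooth with a Gaussian of the given sigma (in pixels)."""
    fixation_map = np.zeros((height, width), dtype=np.float64)
    for x, y in fixations:
        if 0 <= y < height and 0 <= x < width:
            fixation_map[int(y), int(x)] = 1.0
    saliency = gaussian_filter(fixation_map, sigma=sigma)
    if saliency.max() > 0:
        saliency /= saliency.max()  # normalize to [0, 1]
    return saliency

# Example: fixations pooled from several viewers of one poster
fixations = [(120, 80), (125, 85), (400, 300)]
smap = fixations_to_saliency_map(fixations, height=600, width=800)
```

The resulting continuous map can then be compared against model predictions with standard metrics such as AUC, NSS, or CC.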
© 2020 Springer Nature Switzerland AG
Cite this paper
Fang, Y., Zhu, L., Cao, X., Zhang, L., Li, X. (2020). Visual Saliency: How Text Influences. In: Meiselwitz, G. (eds) Social Computing and Social Media. Design, Ethics, User Behavior, and Social Network Analysis. HCII 2020. Lecture Notes in Computer Science(), vol 12194. Springer, Cham. https://doi.org/10.1007/978-3-030-49570-1_4
Print ISBN: 978-3-030-49569-5
Online ISBN: 978-3-030-49570-1