Visual Saliency Computation, pp. 45–71
Location-Based Visual Saliency Computation
Abstract
This chapter reviews bottom-up visual saliency models for computing location-based saliency. These models can be roughly grouped into three domains: the spatial domain, the transform domain, and the spatiotemporal domain. For each domain, we present the technical details of one or two representative approaches, and briefly introduce follow-up work and other approaches in that domain. Note that this chapter focuses only on bottom-up models for location-based saliency computation. Object-based saliency models are discussed in Chap. 4, while learning-based saliency models, which also consider the influence of top-down factors, are presented in Chaps. 5, 6 and 7.
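To make the transform-domain category concrete, the sketch below implements one well-known representative of that domain, spectral-residual saliency: the saliency map is recovered from the residual between an image's log-amplitude spectrum and its locally smoothed version, combined with the original phase. This is a minimal illustration, not the chapter's own formulation; the function name and the parameter defaults (`smooth_size`, `blur_sigma`) are assumptions chosen for readability.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray, smooth_size=3, blur_sigma=2.5):
    """Transform-domain saliency sketch (spectral residual).

    gray: 2-D float array (grayscale image, values roughly in [0, 1]).
    Returns a saliency map normalized to [0, 1].
    """
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)   # log-amplitude spectrum
    phase = np.angle(f)                  # phase spectrum, kept unchanged
    # Spectral residual: log amplitude minus its local average,
    # which suppresses the statistically "expected" spectrum.
    residual = log_amp - uniform_filter(log_amp, size=smooth_size)
    # Reconstruct in the spatial domain and smooth to obtain the map.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=blur_sigma)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```

On a largely uniform image containing one anomalous patch, the residual concentrates on the patch, so the resulting map assigns it higher saliency than the background, which is the qualitative behavior transform-domain models are designed to exhibit.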
Keywords
Sparse Code · Neural Information Processing System · Saliency Detection · Visual Saliency · Saliency Model