This paper proposes a saliency-aware inter-image color transfer method for image manipulation. Given a source image, candidate images are first retrieved from a group of images of the same semantic category, and their saliency maps are obtained with an existing saliency model. An inter-image color transfer method is then applied to transfer the colors of the high-saliency region in each candidate image to the target object region in the source image, producing a set of manipulated images. Finally, the manipulated image whose saliency map yields the highest weighted F-measure is selected as the final result. Experimental results show that the proposed method both highlights objects effectively and preserves the naturalness of images well, and that it consistently outperforms other image manipulation methods whether the manipulated images are viewed with or without the source image as reference.
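The two core steps of the pipeline described above — transferring color statistics from a candidate's high-saliency region into the source's target region, and scoring each result by an F-measure on its saliency map — can be sketched as below. This is a minimal illustration, not the paper's implementation: it uses a per-channel Reinhard-style mean/std transfer (the cited Reinhard et al. method operates in a decorrelated lαβ color space) and a plain F-beta measure as a simplified stand-in for the weighted F-measure of Margolin et al.; the function names `region_color_transfer` and `f_beta` are this sketch's own.

```python
import numpy as np

def region_color_transfer(source, target_mask, candidate, cand_mask):
    """Reinhard-style statistics transfer restricted to masked regions.

    source, candidate: float arrays in [0, 1] with shape (H, W, 3).
    target_mask, cand_mask: boolean (H, W) masks marking the target object
    region in the source and the high-saliency region in the candidate.
    """
    out = source.copy()
    for c in range(3):
        src_vals = source[..., c][target_mask]
        cand_vals = candidate[..., c][cand_mask]
        mu_s, sigma_s = src_vals.mean(), src_vals.std() + 1e-8
        mu_c, sigma_c = cand_vals.mean(), cand_vals.std()
        # Match the target region's per-channel mean/std to those of the
        # candidate's high-saliency region; pixels outside the mask are kept.
        out[..., c][target_mask] = (src_vals - mu_s) / sigma_s * sigma_c + mu_c
    return np.clip(out, 0.0, 1.0)

def f_beta(saliency, object_mask, beta2=0.3, thresh=0.5):
    """Simplified F-measure between a binarized saliency map and the object
    mask; beta^2 = 0.3 follows common practice in the saliency literature."""
    pred = saliency >= thresh
    tp = np.logical_and(pred, object_mask).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(object_mask.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```

In the full method, one manipulated image would be generated per retrieved candidate, each image's saliency map recomputed with the saliency model, and the image with the highest (weighted) F-measure kept as the final result.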
This work was supported by the National Natural Science Foundation of China under Grant No. 61771301.
Cite this article
Liu, X., Liu, Z., Jiao, Q. et al. Saliency-aware inter-image color transfer for image manipulation. Multimed Tools Appl 78, 21629–21644 (2019). https://doi.org/10.1007/s11042-019-7450-6
Keywords
- Image manipulation
- Inter-image color transfer