Abstract
Depth-image-based rendering (DIBR) is widely used in 3DTV, free-viewpoint video, and interactive 3D graphics applications. Typically, synthetic images generated by DIBR-based systems incorporate various distortions, particularly geometric distortions induced by object dis-occlusion. Ensuring the quality of synthetic images is critical to maintaining adequate system service. However, traditional 2D image quality metrics are ineffective for evaluating synthetic images as they are not sensitive to geometric distortion. In this paper, we propose a novel no-reference image quality assessment method for synthetic images based on convolutional neural networks, introducing local image saliency as prediction weights. Due to the lack of existing training data, we construct a new DIBR synthetic image dataset as part of our contribution. Experiments were conducted on both the public benchmark IRCCyN/IVC DIBR image dataset and our own dataset. Results demonstrate that our proposed metric outperforms traditional 2D image quality metrics and state-of-the-art DIBR-related metrics.
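To make the idea summarized above concrete, the sketch below illustrates saliency-weighted pooling of patch-wise CNN quality scores: a small CNN predicts one quality score per image patch, and per-patch local saliency values act as weights when aggregating patch scores into an image-level score. This is only a minimal illustration of the general scheme described in the abstract, not the authors' implementation; the network depth, patch size, layer widths, and the source of the saliency map are all assumptions made for the example.

```python
# Minimal sketch (illustrative assumptions, not the authors' released code):
# a CNN scores each 32x32 patch of a synthesized view, and patch scores are
# pooled into one image score using local saliency as weights.
import torch
import torch.nn as nn

class PatchQualityCNN(nn.Module):
    """Small CNN mapping a 32x32 grayscale patch to a single quality score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 1),                     # one score per patch
        )

    def forward(self, patches):                    # patches: (N, 1, 32, 32)
        return self.regressor(self.features(patches)).squeeze(-1)

def saliency_weighted_score(patch_scores, patch_saliency, eps=1e-8):
    """Pool per-patch scores into an image-level score, weighted by saliency."""
    weights = patch_saliency / (patch_saliency.sum() + eps)
    return (weights * patch_scores).sum()

if __name__ == "__main__":
    model = PatchQualityCNN().eval()
    patches = torch.rand(16, 1, 32, 32)            # 16 example patches
    saliency = torch.rand(16)                      # per-patch saliency weights
    with torch.no_grad():
        scores = model(patches)                    # per-patch quality scores
        image_score = saliency_weighted_score(scores, saliency)
    print(float(image_score))
```

In practice the per-patch saliency would be taken from a saliency map computed on the synthesized view; the random values above merely stand in for it in this self-contained example.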
Acknowledgements
The authors would like to thank the anonymous reviewers for their valuable comments. They would also like to thank Kai Wang and Jialei Li for their assistance in dataset construction and public release. This work was sponsored by the National Key R&D Program of China (No. 2017YFB1002702) and the National Natural Science Foundation of China (Nos. 61572058, 61472363).
Author information
Xiaochuan Wang received his M.Sc. degree from Beihang University, Beijing, China, in 2012. He is currently pursuing his Ph.D. degree in the State Key Laboratory of Virtual Reality Technology and System, Beihang University, China. His current research interests include mobile graphics, remote rendering, image quality assessment, and multiview video systems.
Xiaohui Liang received his Ph.D. degree in computer science and engineering from Beihang University in 2002. He is currently a professor in the State Key Laboratory of Virtual Reality Technology and System, Beihang University. His research interests include computer graphics, animation, visualization, and virtual reality.
Bailin Yang received his Ph.D. degree from the Department of Computer Science, Zhejiang University, in 2007. He is currently a professor in the Department of Computer and Electronic Engineering of Zhejiang Gongshang University. His research interests are in mobile graphics, real-time rendering, and mobile games.
Frederick W. B. Li received his Ph.D. degree in computer science from City University of Hong Kong in 2001. He is currently an assistant professor at Durham University, UK. Before that, he was an assistant professor at The Hong Kong Polytechnic University and project manager of a Hong Kong Government Innovation and Technology Fund (ITF) project. His research interests include distributed virtual environments, computer graphics, and e-learning systems. Dr. Li has served as a guest editor of special issues of the International Journal of Distance Education Technologies and the Journal of Multimedia. He has also served on the committees of a number of conferences, including as Program Co-Chair of ICWL 2007-08, 2013, and 2015, and IDET 2008-09, and as Workshop Co-Chair of ICWL 2009 and U-Media 2009.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wang, X., Liang, X., Yang, B. et al. No-reference synthetic image quality assessment with convolutional neural network and local image saliency. Comp. Visual Media 5, 193–208 (2019). https://doi.org/10.1007/s41095-019-0131-6