
Signal, Image and Video Processing, Volume 12, Issue 2, pp 355–362

On the use of deep learning for blind image quality assessment

  • Simone Bianco
  • Luigi Celona
  • Paolo Napoletano
  • Raimondo Schettini
Original Paper

Abstract

In this work, we investigate the use of deep learning for distortion-generic blind image quality assessment. We report on different design choices, ranging from the use of features extracted from pre-trained convolutional neural networks (CNNs) as a generic image description, to the use of features extracted from a CNN fine-tuned for the image quality task. Our best proposal, named DeepBIQ, estimates the image quality by average-pooling the scores predicted on multiple subregions of the original image. Experimental results on the LIVE In the Wild Image Quality Challenge Database show that DeepBIQ outperforms the compared state-of-the-art methods, achieving a linear correlation coefficient with human subjective scores of almost 0.91. These results are further confirmed on four benchmark databases of synthetically distorted images: LIVE, CSIQ, TID2008, and TID2013.
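
To make the pooling step described above concrete, the following is a minimal sketch of how a DeepBIQ-style predictor could average per-crop quality scores. The regressor module, the sample_crops helper, the 224-pixel crop size, and the number of crops are hypothetical choices for illustration only, not the authors' implementation.

import torch

def sample_crops(image: torch.Tensor, crop_size: int = 224, n_crops: int = 30) -> torch.Tensor:
    """Randomly sample n_crops square sub-regions from a CHW float tensor.

    Assumes the image is at least crop_size x crop_size; the crop count and
    sampling strategy are illustrative, not the paper's exact protocol.
    """
    _, h, w = image.shape
    crops = []
    for _ in range(n_crops):
        top = torch.randint(0, h - crop_size + 1, (1,)).item()
        left = torch.randint(0, w - crop_size + 1, (1,)).item()
        crops.append(image[:, top:top + crop_size, left:left + crop_size])
    return torch.stack(crops)  # shape: (n_crops, C, crop_size, crop_size)

@torch.no_grad()
def predict_quality(image: torch.Tensor, cnn_regressor: torch.nn.Module) -> float:
    """Estimate a single image quality score by averaging per-crop predictions."""
    crops = sample_crops(image)
    scores = cnn_regressor(crops)   # one predicted quality score per crop
    return scores.mean().item()     # average pooling over the sub-region scores

Here cnn_regressor stands in for any CNN that maps a crop to a scalar quality score (for example, a pre-trained or fine-tuned network with a regression head); the averaging of per-crop predictions is the pooling idea the abstract refers to.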

Keywords

Deep learning · Convolutional neural networks · Transfer learning · Blind image quality assessment · Perceptual image quality


Copyright information

© Springer-Verlag London Ltd. 2017

Authors and Affiliations

  1. Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy
