No-reference image quality assessment based on hybrid model

Abstract

The goal of research on no-reference image quality assessment is to design models that predict the quality of distorted images consistently with human visual perception. Because little prior knowledge of the images is available, this remains a difficult problem. This paper proposes a computational algorithm based on a hybrid model that automatically extracts visual perception features from raw image patches. A convolutional neural network (CNN) and support vector regression (SVR) are combined for this purpose: the CNN is trained as an efficient feature extractor, and the SVR serves as the regression operator. Extensive experiments demonstrate that the proposed method achieves very competitive quality prediction performance.
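
The abstract describes the pipeline only at a high level, so the snippet below is a minimal sketch of a CNN-plus-SVR hybrid of this kind, not the authors' implementation: a small convolutional network turns image patches into feature vectors, and a support vector regressor maps those features to a quality score. The patch size, layer configuration, feature dimension, SVR hyperparameters, and the PatchFeatureCNN class are assumptions made purely for illustration.

```python
# Minimal sketch of a CNN-feature-extractor + SVR pipeline for NR-IQA.
# All architectural choices below are illustrative assumptions, not the
# configuration reported in the paper.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVR


class PatchFeatureCNN(nn.Module):
    """Small CNN used purely as a feature extractor for 32x32 grayscale patches."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5), nn.ReLU(),   # 32x32 -> 28x28
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(),  # 14x14 -> 10x10
            nn.MaxPool2d(2),                              # 10x10 -> 5x5
        )
        self.fc = nn.Linear(64 * 5 * 5, feat_dim)

    def forward(self, x):
        return self.fc(torch.flatten(self.conv(x), 1))


# Stand-ins for normalized image patches and their subjective scores (e.g. DMOS);
# in practice the CNN would first be trained against such scores and its learned
# representation reused as the feature input to the regressor.
patches = torch.randn(200, 1, 32, 32)
scores = np.random.uniform(0.0, 100.0, size=200)

cnn = PatchFeatureCNN().eval()
with torch.no_grad():
    features = cnn(patches).numpy()

# The SVR performs the final regression from CNN features to a quality score.
svr = SVR(kernel="rbf", C=10.0, gamma="scale")
svr.fit(features, scores)
predicted_quality = svr.predict(features[:5])
```

Image-level quality would typically be obtained by pooling patch-level predictions over all patches of an image; whether the paper uses simple averaging or another pooling strategy is not stated in this preview.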

Acknowledgements

This work was supported by the Natural Science Foundation of China (No. 61501334).

Author information

Correspondence to Jia Yan.

About this article

Cite this article

Li, J., Yan, J., Deng, D., et al.: No-reference image quality assessment based on hybrid model. Signal Image Video Process. 11, 985–992 (2017). doi:10.1007/s11760-016-1048-5

Keywords

  • No-reference image quality assessment
  • Convolutional neural network
  • Support vector regression
  • Hybrid model
  • Machine learning