No-reference image quality assessment of multi-level residual feature augmentation

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

No-reference image quality assessment (NR-IQA) has a wide range of application scenarios and plays an important role in digital image processing. In this paper, we propose a multi-level feature augmentation method for NR-IQA that effectively extracts local features to address the diversity of distortions in authentically distorted images. The algorithm introduces an attention enhancement module (AEM) that strengthens regions of interest, so that the extracted features better conform to the human visual system (HVS). We then propose a residual feature augmentation (RFA) module to compensate for the information lost during dimensionality reduction before feature fusion. In addition, the perceptual rules of the quality prediction network are established from the extracted semantic features of the image, and multi-scale features are mapped into the quality prediction network. Extensive experiments show that the model achieves strong performance on both synthetically and authentically distorted image datasets.
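The two modules named in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function names, the ECA-style 1-D convolution used for channel attention, and the pooled residual branch are assumptions based on the abstract's description (attention gating for regions of interest; a residual branch added after dimensionality reduction to offset information loss before fusion). Weights here are random stand-ins for learned parameters.

```python
import numpy as np

def attention_enhancement(feat, k=3):
    """AEM sketch: channel attention in the ECA style.
    Global average pool -> 1-D conv across channels -> sigmoid gate
    -> channel-wise rescaling of the feature map."""
    c, h, w = feat.shape
    pooled = feat.mean(axis=(1, 2))          # (C,) global channel descriptor
    kernel = np.ones(k) / k                  # stand-in for learned 1-D conv weights
    gate = np.convolve(pooled, kernel, mode="same")
    gate = 1.0 / (1.0 + np.exp(-gate))       # sigmoid gate in [0, 1]
    return feat * gate[:, None, None]        # emphasize gated channels

def residual_feature_augmentation(feat, reduced_c):
    """RFA sketch: a 1x1-conv dimensionality reduction loses information,
    so a residual branch pooled from the original feature is projected
    and added back before fusion."""
    c, h, w = feat.shape
    w_reduce = np.random.randn(reduced_c, c) * 0.01   # stand-in 1x1 conv
    reduced = np.einsum("oc,chw->ohw", w_reduce, feat)
    pooled = feat.mean(axis=(1, 2))                   # (C,) pooled context
    residual = (w_reduce @ pooled)[:, None, None]     # project, broadcast over H, W
    return reduced + residual

feat = np.random.rand(16, 8, 8)                       # (C, H, W) toy feature map
out = residual_feature_augmentation(attention_enhancement(feat), reduced_c=8)
print(out.shape)  # (8, 8, 8)
```

In the full model these operations would be applied per level of a multi-scale feature pyramid, with the augmented features then mapped into the quality prediction network.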



Acknowledgements

This work was supported by the project "Research on Calligraphy Culture Inheritance Technology of Ancient Inscriptions Based on Artificial Intelligence", NSFC Project 62076200, the Natural Science Foundation of Shaanxi Province (No. 2021JM-340), and the Science and Technology Project of Weinan (No. 2021ZDYF-GYCX-150).

Author information

Corresponding author

Correspondence to Yuanlin Zheng.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Liu, C., Zheng, Y., Liao, K. et al. No-reference image quality assessment of multi-level residual feature augmentation. SIViP 17, 1275–1283 (2023). https://doi.org/10.1007/s11760-022-02335-8


  • DOI: https://doi.org/10.1007/s11760-022-02335-8
