
No-reference image quality assessment with multi-scale weighted residuals and channel attention mechanism


Abstract

With the rapid development of deep learning, no-reference image quality assessment (NR-IQA) based on convolutional neural networks (CNNs) plays an important role in image processing. Most current CNN-based NR-IQA methods focus primarily on global image features while ignoring detail-rich local features and channel dependencies. In fact, distorted and reference images differ in subtle details, and different channels contribute unequally to IQA. Furthermore, multi-scale feature extraction can fuse detailed information from images at different resolutions, and combining global and local features is critical for effective feature extraction. Accordingly, this paper proposes a multi-scale residual CNN with an attention mechanism (MsRCANet) for NR-IQA. Specifically, a multi-scale residual block first extracts features from distorted images. Then, residual learning with an active weighted mapping strategy and a channel attention mechanism further processes these features to obtain richer information. Finally, a fusion strategy and a fully connected layer are used to predict image quality. Experimental results on four synthetic IQA databases and three in-the-wild IQA databases, together with cross-database validation, show that the proposed method generalizes well and is competitive with state-of-the-art methods.
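To make the pipeline described in the abstract concrete, the following minimal PyTorch sketch illustrates how a squeeze-and-excitation style channel attention module can be combined with a multi-scale residual block whose output is added back to the input through a learnable weight (a simple stand-in for the active weighted mapping strategy). The kernel sizes, channel widths, and module names are illustrative assumptions, not the authors' exact MsRCANet configuration.

```python
# Minimal sketch (assumption): an SE-style channel attention module and a
# multi-scale residual block in the spirit of the MsRCANet description.
# Layer widths, kernel sizes, and the learnable residual scalar are
# illustrative choices, not the authors' exact configuration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global average pooling
        self.fc = nn.Sequential(                      # excitation: two FC layers
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # re-weight channels


class MultiScaleResidualBlock(nn.Module):
    """Parallel 3x3 / 5x5 branches fused, attended, and added back to the
    input through a learnable weight (a stand-in for active weighted mapping)."""

    def __init__(self, channels: int):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.attn = ChannelAttention(channels)
        self.alpha = nn.Parameter(torch.ones(1))      # learnable residual weight
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.act(self.branch3(x)), self.act(self.branch5(x))], dim=1)
        y = self.attn(self.fuse(y))
        return x + self.alpha * y                     # weighted residual connection


if __name__ == "__main__":
    block = MultiScaleResidualBlock(channels=32)
    patch = torch.randn(4, 32, 64, 64)                # a batch of image patches
    print(block(patch).shape)                         # torch.Size([4, 32, 64, 64])
```

In the full model described in the abstract, the outputs of several such blocks would be fused and passed to a fully connected layer that regresses the final quality score.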




Data availability

Enquiries about data availability should be directed to the authors.


Funding

Funding was provided by the National Natural Science Foundation of China (Grant Numbers 61976027, 61572082) and the Liaoning Revitalization Talents Program (Grant Number XLYC2008002).

Author information

Corresponding author

Correspondence to Changzhong Wang.

Ethics declarations

Conflict of interest

The authors have not disclosed any competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, C., Lv, X., Ding, W. et al. No-reference image quality assessment with multi-scale weighted residuals and channel attention mechanism. Soft Comput 26, 13449–13465 (2022). https://doi.org/10.1007/s00500-022-07535-5
