
Classification of image distortion based on the generalized Benford’s law

  • Hussein Al-Bandawi
  • Guang Deng

Abstract

Distortion classification is an important step in blind image quality assessment. In this paper, a new image distortion classification algorithm is presented. Classification is based on features extracted from the distribution of the first digit of the transform coefficients of an image, and the generalized Benford’s law is used to model this distribution. The discrete cosine transform with three different patch sizes and the wavelet transform have been tested. Features such as the first-digit distribution and the fitted model parameters are extracted from each image, and a kernel support vector machine is trained on these features. The LIVE database is used for both training and testing, while four other databases, namely TID2008, CSIQ, the Waterloo exploration database and the McGill calibrated colour image database, are used for validation. Experimental results show that the proposed algorithm outperforms state-of-the-art algorithms in terms of classification accuracy.
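
A minimal sketch of the pipeline described above, assuming 8x8 DCT patches, a least-squares fit of the generalized Benford model P(d) = N log10(1 + 1/(s + d^q)) for d = 1, ..., 9, and an RBF-kernel support vector machine; the patch size, fitting method, kernel choice and all function names are illustrative assumptions rather than the authors' implementation (the paper also evaluates other patch sizes and a wavelet transform):

```python
# Illustrative sketch, not the authors' code: first-digit features from
# block-DCT coefficients, a generalized Benford's law fit, and an RBF SVM.
import numpy as np
from scipy.fft import dctn
from scipy.optimize import curve_fit
from sklearn.svm import SVC


def generalized_benford(d, n, q, s):
    # Generalized Benford's law: P(d) = n * log10(1 + 1/(s + d**q)), d = 1..9.
    return n * np.log10(1.0 + 1.0 / (s + d ** q))


def first_digit_histogram(image, patch=8):
    # Empirical first-digit distribution of block-DCT coefficient magnitudes.
    h, w = image.shape[:2]
    digits = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = dctn(image[i:i + patch, j:j + patch].astype(float), norm='ortho')
            coeffs = np.abs(block.ravel()[1:])   # drop the DC coefficient
            coeffs = coeffs[coeffs >= 1.0]       # keep magnitudes with a leading digit
            if coeffs.size:
                digits.append((coeffs // 10 ** np.floor(np.log10(coeffs))).astype(int))
    digits = np.concatenate(digits)
    hist = np.bincount(digits, minlength=10)[1:10].astype(float)
    return hist / hist.sum()


def benford_features(image):
    # Feature vector: 9-bin first-digit distribution plus fitted (n, q, s).
    p = first_digit_histogram(image)
    d = np.arange(1, 10, dtype=float)
    params, _ = curve_fit(generalized_benford, d, p,
                          p0=[1.5, 1.0, 1.0], bounds=(0.0, np.inf))
    return np.concatenate([p, params])


# Usage sketch: X_train is a list of greyscale images and y_train their
# distortion labels (e.g. JPEG, JPEG2000, blur, white noise).
# clf = SVC(kernel='rbf', C=1.0, gamma='scale')
# clf.fit(np.stack([benford_features(im) for im in X_train]), y_train)
# predictions = clf.predict(np.stack([benford_features(im) for im in X_test]))
```

The feature vector here simply concatenates the 9-bin first-digit histogram with the fitted model parameters; features from several transform configurations could be concatenated in the same way before training the classifier.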

Keywords

Generalized Benford’s law · Image distortion classification

Notes

Acknowledgements

Hussein Al-Bandawi has been supported by the Higher Committee for Education Development in Iraq. The authors thank the reviewers for providing critical and constructive comments.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Engineering, La Trobe University, Bundoora, Australia
