Calibration of Structural Similarity Index Metric to Detect Artefacts in Game Engines

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9972)


Previous studies reveal that Image Quality Metrics (IQMs) can be used to automatically detect the perceptual visibility of artefacts in game engines. Good agreement with human judgements was achieved for shadow acne, peter panning, and Z-fighting deteriorations, with the Structural Similarity Index Metric (SSIM) proving to have the best detection rate. However, this metric generates noticeably worse results for aliasing. With SSIM, artefacts are identified as differences in intensity, contrast, and structure between an image with a deterioration and the corresponding reference. In this work we calibrate SSIM to improve its detection of aliasing artefacts. We compare the results generated by SSIM with reference data created during subjective experiments in which people manually marked the visible local artefacts in screenshots from game engines. In other words, we maximise the agreement between the detection maps created by humans and those computed by SSIM. Cross-validation performed on a large collection of examples revealed that the AUC (area under the curve) in receiver operating characteristic (ROC) analysis improves from 0.92 for the default SSIM parameters to 0.97 for the optimised parameters.
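The pipeline the abstract describes can be sketched as follows: compute a local SSIM map between a distorted frame and its reference, treat low SSIM as evidence of an artefact, and score the map against a human-marked mask with ROC/AUC. This is a minimal illustration using `scikit-image` and `scikit-learn` on synthetic data; the window size is an example of a tunable parameter, not the paper's optimised setting, and the images and mask are stand-ins, not the paper's dataset.

```python
import numpy as np
from skimage.metrics import structural_similarity
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for a reference frame and a distorted frame.
reference = rng.random((64, 64))
distorted = reference.copy()
distorted[20:30, 20:30] += 0.3 * rng.random((10, 10))  # local "artefact"
distorted = distorted.clip(0.0, 1.0)

# Hypothetical human-marked artefact mask (the ground truth from
# the subjective experiments).
human_mask = np.zeros((64, 64), dtype=int)
human_mask[20:30, 20:30] = 1

# Local SSIM map; win_size is one of the parameters a calibration
# could sweep (the value here is illustrative).
_, ssim_map = structural_similarity(
    reference, distorted, data_range=1.0, win_size=7, full=True)

# Low SSIM indicates a suspected artefact, so use (1 - SSIM) as the
# detection score and compare it against the human map via ROC AUC.
auc = roc_auc_score(human_mask.ravel(), (1.0 - ssim_map).ravel())
print(round(auc, 3))
```

A calibration loop would repeat the last two steps over a grid of SSIM parameters and keep the setting that maximises the cross-validated AUC.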



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, Szczecin, Poland
