Rethinking the Defocus Blur Detection Problem and a Real-Time Deep DBD Model

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12355)


Defocus blur detection (DBD) is a classical low-level vision task. It has recently attracted attention focused on designing complex convolutional neural networks (CNNs) that make full use of both low-level features and high-level semantic information. The heavy networks used in these methods lead to low processing speed, making them difficult to apply in real-time applications. In this work, we propose novel perspectives on the DBD problem and design a convenient approach to build a real-time, cost-effective DBD model. First, we observe that semantic information does not always relate to blur detection and can sometimes mislead it. Starting from the essential characteristics of the DBD problem, we propose a data augmentation method that inhibits semantic information and forces the model to learn blur-related features rather than semantic features. A novel self-supervision training objective is proposed to enhance training consistency and stability. Second, by rethinking the relationship between defocus blur detection and saliency detection, we identify two previously ignored but common scenarios, around which we design a hard-mining strategy to enhance the DBD model. Using the proposed techniques, our model, which uses a slightly modified U-Net as its backbone, improves processing speed by more than three times and performs competitively against state-of-the-art methods. An ablation study is also conducted to verify the effectiveness of each part of the proposed methods.
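The abstract's idea of suppressing semantic cues so the model learns blur-related features can be illustrated with a patch-shuffling augmentation: rearranging aligned patches of the image and its blur mask destroys object-level semantics while preserving local blur statistics. This is a minimal, hypothetical sketch of the concept; the paper's actual augmentation scheme may differ.

```python
import numpy as np

def shuffle_patches(image, mask, patch=32, rng=None):
    """Shuffle non-overlapping patches of an image and its blur mask
    with the same random permutation.

    Because image and mask patches move together, the blur labels stay
    correct, but object-level semantic layout is destroyed, so a model
    trained on such pairs must rely on blur cues rather than semantics.
    (Hypothetical augmentation illustrating the idea from the abstract.)
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    gh, gw = h // patch, w // patch          # patch-grid dimensions
    order = rng.permutation(gh * gw)         # one permutation for both arrays
    out_img = np.empty_like(image)
    out_msk = np.empty_like(mask)
    for dst, src in enumerate(order):
        sy, sx = divmod(src, gw)
        dy, dx = divmod(dst, gw)
        out_img[dy*patch:(dy+1)*patch, dx*patch:(dx+1)*patch] = \
            image[sy*patch:(sy+1)*patch, sx*patch:(sx+1)*patch]
        out_msk[dy*patch:(dy+1)*patch, dx*patch:(dx+1)*patch] = \
            mask[sy*patch:(sy+1)*patch, sx*patch:(sx+1)*patch]
    return out_img, out_msk
```

A self-supervised consistency objective in the spirit the abstract describes could then penalize, for example, the discrepancy between the model's prediction on the shuffled image and the correspondingly shuffled prediction on the original image.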


Keywords: Defocus blur detection · Self-supervision · Hard mining



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
  2. MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
