
Deep Decoupling of Defocus and Motion Blur for Dynamic Segmentation

  • Abhijith Punnappurath
  • Yogesh Balaji
  • Mahesh Mohan
  • Ambasamudram Narayanan Rajagopalan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9911)

Abstract

We address the challenging problem of segmenting dynamic objects given a single space-variantly blurred image of a 3D scene captured with a hand-held camera. The blur induced at a particular pixel on a moving object is due to the combined effects of camera motion, the object’s own independent motion during exposure, its relative depth in the scene, and defocus due to the lens settings. We develop a deep convolutional neural network (CNN) to predict, at each pixel, the probabilistic distribution of the composite kernel, which is the convolution of the motion blur and defocus kernels. Based on the defocus component, we segment the image into different depth layers. We then judiciously exploit the motion component of the composite kernels to automatically segment dynamic objects within each depth layer. Jointly handling defocus and motion blur enables us to resolve the depth-motion ambiguity that has been a major limitation of existing segmentation algorithms. Experimental evaluations on synthetic and real data show that our method significantly outperforms contemporary techniques.
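
To make the composite-kernel model concrete: the kernel observed at a pixel is the convolution of a defocus kernel (governed by depth and lens settings) with a motion blur kernel (governed by camera and object motion). The Python sketch below is a hypothetical illustration of that composition only, not the authors' implementation; the pillbox and linear-motion kernel forms, and all sizes, lengths, and angles, are illustrative assumptions, and the CNN that predicts the per-pixel kernel distribution is not reproduced here.

```python
# Illustrative sketch: composite blur kernel = defocus kernel convolved with
# motion blur kernel. Kernel shapes and parameters are hypothetical choices,
# not the paper's settings.
import numpy as np
from scipy.signal import convolve2d


def defocus_kernel(radius):
    """Pillbox (disk) kernel, a common model for defocus blur."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    return disk / disk.sum()


def motion_kernel(length, angle_deg, size=15):
    """Linear motion blur kernel of the given length and direction."""
    kernel = np.zeros((size, size))
    centre = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2.0, length / 2.0, num=4 * length):
        r = int(round(centre + t * np.sin(theta)))
        c = int(round(centre + t * np.cos(theta)))
        if 0 <= r < size and 0 <= c < size:
            kernel[r, c] = 1.0
    return kernel / kernel.sum()


# Composite kernel at one pixel: defocus (depth/lens dependent) convolved
# with motion blur (camera motion + independent object motion dependent).
k_defocus = defocus_kernel(radius=3)
k_motion = motion_kernel(length=7, angle_deg=30)
k_composite = convolve2d(k_defocus, k_motion, mode="full")
k_composite /= k_composite.sum()
```

In this view, segmenting on the defocus component amounts to grouping pixels whose estimated defocus radius agrees (a depth layer), after which residual differences in the motion component within a layer flag independently moving objects.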

Keywords

Segmentation · Neural network · Defocus blur · Motion blur

Supplementary material

Supplementary material 1: 419982_1_En_46_MOESM1_ESM.pdf (PDF, 2.7 MB)

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Abhijith Punnappurath (1)
  • Yogesh Balaji (1)
  • Mahesh Mohan (1)
  • Ambasamudram Narayanan Rajagopalan (1)

  1. Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
