Multi-Temporal Recurrent Neural Networks for Progressive Non-uniform Single Image Deblurring with Incremental Temporal Training

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12351)

Abstract

Blind non-uniform image deblurring for severe blurs induced by large motions remains challenging. The multi-scale (MS) approach has been widely used for deblurring: it first recovers a downsampled version of the original image at a low spatial scale and then further restores the image at higher spatial scales using the result(s) from the lower scale(s). Here, we investigate a novel alternative to MS, called multi-temporal (MT), for non-uniform single image deblurring that exploits time-resolved deblurring datasets from high-speed cameras. The MT approach models a severe blur as a series of small blurs, so it progressively removes small amounts of blur at the original spatial scale instead of restoring the image at different spatial scales. To realize the MT approach, we propose progressive deblurring over iterations and incremental temporal training with temporally augmented training data. Our MT approach, which can be seen as a form of curriculum learning in a broad sense, allows a number of state-of-the-art MS-based deblurring methods to yield improved performance without using the MS approach. We also propose an MT recurrent neural network with recurrent feature maps that outperforms state-of-the-art deblurring methods with the smallest number of parameters.
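The core MT idea described above, removing a small amount of blur per iteration at the original resolution while carrying recurrent feature maps across iterations, and synthesizing blurs of increasing severity by temporally averaging high-speed frames, can be sketched as follows. This is a minimal illustration under assumed interfaces, not the authors' implementation; the class, function names (MTDeblurNet, progressive_deblur, synth_blur), and hyperparameters are hypothetical.

```python
# Minimal sketch of multi-temporal (MT) progressive deblurring (assumed, not the paper's code).
import torch
import torch.nn as nn


class MTDeblurNet(nn.Module):
    """One deblurring step that also carries recurrent feature maps to the next iteration."""

    def __init__(self, channels=32):
        super().__init__()
        self.channels = channels
        # Input: current estimate (3) + original blurry image (3) + recurrent features.
        self.encode = nn.Sequential(
            nn.Conv2d(3 + 3 + channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_image = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, estimate, blurry, features):
        x = torch.cat([estimate, blurry, features], dim=1)
        features = self.encode(x)
        # Predict a residual: each iteration removes only a small amount of blur.
        estimate = estimate + self.to_image(features)
        return estimate, features


def progressive_deblur(net, blurry, num_iterations=6):
    """Apply the same network repeatedly at full resolution (no multi-scale pyramid)."""
    b, _, h, w = blurry.shape
    estimate = blurry
    features = blurry.new_zeros(b, net.channels, h, w)
    for _ in range(num_iterations):
        estimate, features = net(estimate, blurry, features)
    return estimate


def synth_blur(sharp_frames, num_frames):
    """Temporal augmentation: averaging more consecutive high-speed sharp frames
    yields a more severely blurred training input."""
    return torch.stack(sharp_frames[:num_frames]).mean(dim=0)
```

Under this reading, incremental temporal training would start the network on mildly blurred inputs (small `num_frames`) and gradually introduce more severe blurs, matching the curriculum-learning interpretation in the abstract.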

Notes

Acknowledgement

This work was supported partly by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B05035810), the Technology Innovation Program or Industrial Strategic Technology Development Program (10077533, Development of robotic manipulation algorithm for grasping/assembling with the machine learning using visual and tactile sensing information) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea), and a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI18C0316).

Supplementary material

Supplementary material 1: 504443_1_En_20_MOESM1_ESM.pdf (9.4 MB)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Electrical Engineering, UNIST, Ulsan, Republic of Korea