
Journal of Scientific Computing, Volume 78, Issue 3, pp 1488–1525

Variational Models for Joint Subsampling and Reconstruction of Turbulence-Degraded Images

  • Chun Pong Lau
  • Yu Hin Lai
  • Lok Ming Lui

Abstract

Turbulence-degraded image frames are distorted by both turbulent deformations and space–time-varying blurs. To suppress these effects, we propose a multi-frame reconstruction scheme to recover a latent image from the observed distorted image sequence. Recent approaches are commonly based on registering each frame to a reference image, from which the geometric turbulent deformations can be estimated and a sharp image restored. A major challenge is that a clean reference image is usually unavailable, as every turbulence-degraded frame is distorted. A high-quality reference image is crucial for the accurate estimation of geometric deformations and the fusion of frames. Moreover, not every frame in the sequence is useful, so frame selection is both necessary and highly beneficial. In this work, we propose a variational model for the joint subsampling of frames and the extraction of a clear image. A sharp image and a suitable subsample of frames are obtained simultaneously by iteratively decreasing an energy functional. The energy consists of a fidelity term measuring the discrepancy between the extracted image and the subsampled frames, together with regularization terms on the extracted image and the subsample. Different choices of fidelity and regularization terms are explored. By carefully selecting suitable frames and extracting the image, the quality of the reconstructed image is significantly improved. Extensive experiments demonstrate the efficacy of the proposed model. In addition, the extracted subsamples and images can be fed into existing algorithms to produce improved results.
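To make the structure of such a model concrete, a schematic energy of the kind described above (the notation here is illustrative, not the exact formulation of the paper) with latent image $J$, observed frames $I_1,\dots,I_N$, and binary selection weights $w_1,\dots,w_N$ could be written as

$$
E(J, w) \;=\; \sum_{k=1}^{N} w_k \, d\!\left(J, I_k\right) \;+\; \lambda\, R_{\mathrm{img}}(J) \;+\; \mu\, R_{\mathrm{sub}}(w), \qquad w_k \in \{0, 1\},
$$

where $d$ is a fidelity term measuring the discrepancy between the extracted image and a (possibly registered) frame, $R_{\mathrm{img}}$ is a regularizer on the extracted image such as total variation [22], and $R_{\mathrm{sub}}$ is a regularizer on the subsample, e.g. one that controls its size. Alternating between updating $J$ with $w$ fixed and updating $w$ with $J$ fixed then decreases the energy at each iteration.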

Keywords

Turbulence · Turbulent deformation · Multi-frame reconstruction · Frame selection · Image restoration

Mathematics Subject Classification

65D18 · 68U10

Notes

Acknowledgements

The image used for generating the Car sequence is a resized version of the Car image retrieved from the RetargetMe benchmark for image retargeting by Rubinstein, Gutierrez, Sorkine-Hornung and Shamir. The Carfront sequence is retrieved from the project webpage of [1]. The image used for generating the Desert sequence is retrieved from WallpapersWide.com, whereas the image used for generating the Road sequence is by Dave and Les Jacobs for Blend Images and Getty Images. The Building and Chimney sequences were produced by Hirsch and Harmeling from the Max Planck Institute for Biological Cybernetics, and are obtained from the project webpage of [31]. The code for RPCA is from [26]. The optical flow algorithm for the Centroid method is from [14]. The authors would like to thank the above people for allowing them to use the video, pictures and algorithms in their experiments.

References

  1. Anantrasirichai, N., Achim, A., Kingsbury, N.G., Bull, D.R.: Atmospheric turbulence mitigation using complex wavelet-based fusion. IEEE Trans. Image Process. 22(6), 2398–2408 (2013)
  2. Aubailly, M., Vorontsov, M.A., Carhart, G.W., Valley, M.T.: Automated video enhancement from a stream of atmospherically-distorted images: the lucky-region fusion approach. Proc. SPIE 7463, 74630C (2009)
  3. Candès, E.J., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM 58(3), 11 (2011)
  4. Frakes, D.H., Monaco, J.W., Smith, M.J.T.: Suppression of atmospheric turbulence in video using an adaptive control grid interpolation approach. In: 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 3, pp. 1881–1884 (2001). https://doi.org/10.1109/ICASSP.2001.941311
  5. Fried, D.L.: Probability of getting a lucky short-exposure image through turbulence. JOSA 68(12), 1651–1658 (1978)
  6. Furhad, M.H., Tahtali, M., Lambert, A.: Restoring atmospheric-turbulence-degraded images. Appl. Opt. 55(19), 5082–5090 (2016)
  7. Goldstein, T., Osher, S.: The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2(2), 323–343 (2009). https://doi.org/10.1137/080725891
  8. He, R., Wang, Z., Fan, Y., Feng, D.: Atmospheric turbulence mitigation based on turbulence extraction. In: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1442–1446. IEEE (2016)
  9. Hirsch, M., Sra, S., Schölkopf, B., Harmeling, S.: Efficient filter flow for space-variant multiframe blind deconvolution. In: 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 607–614. IEEE (2010)
  10. Hufnagel, R., Stanley, N.: Modulation transfer function associated with image transmission through turbulent media. JOSA 54(1), 52–61 (1964)
  11. Joshi, N., Cohen, M.F.: Seeing Mt. Rainier: lucky imaging for multi-image denoising, sharpening, and haze removal. In: 2010 IEEE International Conference on Computational Photography (ICCP), pp. 1–8 (2010). https://doi.org/10.1109/ICCPHOT.2010.5585096
  12. Li, D., Mersereau, R.M., Simske, S.: Atmospheric turbulence-degraded image restoration using principal components analysis. IEEE Geosci. Remote Sens. Lett. 4(3), 340–344 (2007)
  13. Lin, Z., Chen, M., Ma, Y.: The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv:1009.5055 (2010)
  14. Liu, C.: Beyond pixels: exploring new representations and applications for motion analysis. Ph.D. thesis, Massachusetts Institute of Technology (2009)
  15. Lou, Y., Kang, S.H., Soatto, S., Bertozzi, A.L.: Video stabilization of atmospheric turbulence distortion. Inverse Probl. Imag. 7(3), 839–861 (2013)
  16. Mao, Y., Gilles, J.: Non-rigid geometric distortions correction–application to atmospheric turbulence stabilization. Inverse Probl. Imag. 6, 531–546 (2012)
  17. Meinhardt-Llopis, E., Micheli, M.: Implementation of the centroid method for the correction of turbulence. Image Process. On Line 4, 187–195 (2014)
  18. Micheli, M., Lou, Y., Soatto, S., Bertozzi, A.L.: A linear systems approach to imaging through turbulence. J. Math. Imaging Vis. 48(1), 185–201 (2014)
  19. Pearson, J.E.: Atmospheric turbulence compensation using coherent optical adaptive techniques. Appl. Opt. 15(3), 622–631 (1976)
  20. Roggemann, M.C., Stoudt, C.A., Welsh, B.M.: Image-spectrum signal-to-noise-ratio improvements by statistical frame selection for adaptive-optics imaging through atmospheric turbulence. Opt. Eng. 33(10), 3254–3265 (1994)
  21. Roggemann, M.C., Welsh, B.M., Hunt, B.R.: Imaging Through Turbulence. CRC Press, Boca Raton (1996)
  22. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 60(1–4), 259–268 (1992)
  23. Seitz, S.M., Baker, S.: Filter flow. In: 2009 IEEE 12th International Conference on Computer Vision (ICCV), pp. 143–150. IEEE (2009)
  24. Shan, Q., Jia, J., Agarwala, A.: High-quality motion deblurring from a single image. ACM Trans. Graph. 27(3), 73:1–73:10 (2008). https://doi.org/10.1145/1360612.1360672
  25. Shimizu, M., Yoshimura, S., Tanaka, M., Okutomi, M.: Super-resolution from image sequence under influence of hot-air optical turbulence. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), pp. 1–8. IEEE (2008)
  26. Sobral, A., Bouwmans, T., Zahzah, E.H.: LRSLibrary: low-rank and sparse tools for background modeling and subtraction in videos. In: Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing. CRC Press, Taylor and Francis Group (2015)
  27. Tyson, R.K.: Principles of Adaptive Optics. CRC Press, Boca Raton (2015)
  28. Vorontsov, M.A.: Parallel image processing based on an evolution equation with anisotropic gain: integrated optoelectronic architectures. JOSA A 16(7), 1623–1637 (1999)
  29. Vorontsov, M.A., Carhart, G.W.: Anisoplanatic imaging through turbulent media: image recovery by local information fusion from a set of short-exposure images. JOSA A 18(6), 1312–1324 (2001)
  30. Xie, Y., Zhang, W., Tao, D., Hu, W., Qu, Y., Wang, H.: Removing turbulence effect via hybrid total variation and deformation-guided kernel regression. IEEE Trans. Image Process. 25(10), 4943–4958 (2016)
  31. Zhu, X., Milanfar, P.: Removing atmospheric turbulence via space-invariant deconvolution. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 157–170 (2013)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. 4326 CSS, University of Maryland, College Park, USA
  2. Room 222A, Lady Shaw Building, The Chinese University of Hong Kong, Shatin, Hong Kong
  3. Room 207, Lady Shaw Building, The Chinese University of Hong Kong, Shatin, Hong Kong
