The Visual Computer, Volume 32, Issue 1, pp 99–109

Fog effect for photography using stereo vision

Original Article

Abstract

Fog is an important element in photography, carrying aesthetic, emotional, or compositional meaning. We present a fog-simulation method for photo editing based on binocular stereo vision. Given a stereo pair, we estimate depth by stereo matching and then refine the depth map for the photo-editing task at hand. Depth-aware fog effects are then applied to the base image, with optional user interaction for control. Besides homogeneous fog, we provide three tools for controlling the density of the fog medium, so that various kinds of heterogeneous atmospheric effects can also be simulated. Experiments show that the proposed method achieves more natural-looking results than manually drawn fog; our results closely match the appearance of fog in the real world.
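To make the rendering step concrete, the sketch below shows one way depth-aware fog compositing of this kind can be implemented once a refined depth map is available. It uses the standard single-scattering transmittance model; the function and parameter names (apply_fog, beta, airlight, density) are illustrative assumptions, not the paper's actual tools or code.

    # Minimal sketch of depth-aware fog compositing, assuming a refined
    # depth map is already available (the paper obtains it via stereo
    # matching plus a refinement step not reproduced here).
    import numpy as np

    def apply_fog(image, depth, beta=0.8, airlight=(0.9, 0.9, 0.92), density=None):
        """Composite fog over an image using per-pixel transmittance.

        image    : float array (H, W, 3) in [0, 1], the base photograph
        depth    : float array (H, W), scene depth (larger = farther)
        beta     : scalar extinction coefficient for homogeneous fog
        airlight : RGB colour of the fog / atmospheric light
        density  : optional (H, W) map in [0, 1] that locally scales beta,
                   yielding heterogeneous fog
        """
        if density is None:
            density = np.ones_like(depth)
        # Transmittance from the single-scattering fog model:
        # t(x) = exp(-beta * density(x) * depth(x))
        t = np.exp(-beta * density * depth)[..., None]
        A = np.asarray(airlight, dtype=image.dtype)
        # Foggy pixel = attenuated scene radiance + scattered airlight.
        return image * t + A * (1.0 - t)

In such a formulation, a per-pixel density map is the natural hook for the density-control tools mentioned in the abstract: editing that map locally produces heterogeneous atmospheric effects while the depth term keeps the fog consistent with scene geometry.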

Keywords

Stereo vision · Computational photography · Image-based rendering · Stereo-map refinement · Fog effect

Acknowledgments

The authors thank Simon Hermann for providing a library implementing the iSGM matcher. This project is supported by the China Scholarship Council.

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. University of Auckland, Auckland, New Zealand
  2. Auckland University of Technology, Auckland, New Zealand