Star-Effect Simulation for Photography Using Self-calibrated Stereo Vision

Conference paper, Pacific-Rim Symposium on Image and Video Technology
In: Image and Video Technology, pp. 228–240
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9431)

Abstract

Star effects are an important design factor for night photos. Progress in imaging technology has made it possible to take night photos free-hand, but with such camera settings star effects are not achievable. We present a star-effect simulation method based on self-calibrated stereo vision. Given an uncalibrated stereo pair (i.e. a base image and a match image), which can simply be two photos taken with a mobile phone from approximately the same pose, we follow a standard routine: extract a set of feature-point pairs, calibrate the stereo pair using these pairs, and obtain depth information by stereo matching. We then detect highlight regions in the base image, estimate their luminance using the available depth information, and, finally, render star patterns with an input texture. Experiments show that our results resemble real-world star-effect photos and are more natural than those of existing commercial applications. The paper reports, for the first time, research on automatically simulating photo-realistic star effects.
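The pipeline outlined above (feature matching, self-calibration and rectification, stereo matching, highlight detection, star rendering) can be prototyped with standard tools. Below is a minimal Python/OpenCV sketch of such a pipeline; the feature detector, the brightness threshold, and the disparity-to-star-size mapping are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a star-effect pipeline on an uncalibrated stereo pair.
# All parameters and thresholds are illustrative assumptions.
import cv2
import numpy as np

def star_effect(base_path, match_path, star_texture_path):
    base = cv2.imread(base_path)            # base image of the uncalibrated pair
    match = cv2.imread(match_path)          # match image (roughly the same pose)
    gray_b = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
    gray_m = cv2.cvtColor(match, cv2.COLOR_BGR2GRAY)
    h, w = gray_b.shape

    # 1. Extract feature-point pairs (ORB keypoints, brute-force Hamming matching).
    orb = cv2.ORB_create(2000)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    kp_m, des_m = orb.detectAndCompute(gray_m, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_b, des_m)
    pts_b = np.float32([kp_b[m.queryIdx].pt for m in matches])
    pts_m = np.float32([kp_m[m.trainIdx].pt for m in matches])

    # 2. Self-calibrate: estimate the fundamental matrix and rectify the pair.
    F, mask = cv2.findFundamentalMat(pts_b, pts_m, cv2.FM_RANSAC)
    in_b = pts_b[mask.ravel() == 1]
    in_m = pts_m[mask.ravel() == 1]
    _, H1, H2 = cv2.stereoRectifyUncalibrated(in_b, in_m, F, (w, h))
    rect_b = cv2.warpPerspective(gray_b, H1, (w, h))
    rect_m = cv2.warpPerspective(gray_m, H2, (w, h))

    # 3. Stereo matching (semi-global block matching) to obtain a disparity map.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = sgbm.compute(rect_b, rect_m).astype(np.float32) / 16.0

    # 4. Detect highlight regions in the base image (simple brightness threshold).
    v = cv2.cvtColor(base, cv2.COLOR_BGR2HSV)[:, :, 2]
    highlights = np.argwhere(v > 250)

    # 5. Render a star texture at each highlight; disparity (inverse depth) scales
    #    the pattern so that nearer light sources get larger stars.
    #    (The rectification warp is ignored here for simplicity; a full
    #    implementation would map highlight coordinates through H1.)
    star = cv2.imread(star_texture_path)
    result = base.copy()
    for y, x in highlights:
        d = disparity[y, x]
        if d <= 0:
            continue
        size = int(np.clip(d, 8, 64))       # illustrative disparity-to-size mapping
        patch = cv2.resize(star, (size, size))
        y0, x0 = max(0, y - size // 2), max(0, x - size // 2)
        y1, x1 = min(h, y0 + size), min(w, x0 + size)
        result[y0:y1, x0:x1] = cv2.add(result[y0:y1, x0:x1], patch[:y1 - y0, :x1 - x0])
    return result
```

In practice the star size would be driven by an estimate of the light source's luminance (derived from brightness and depth) rather than by raw disparity, but the structure of the routine stays the same.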

Keywords

Star effect · Computational photography · Stereo vision · Self-calibration

Acknowledgements

This project is supported by the China Scholarship Council.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Department of Computer Science, The University of Auckland, Auckland, New Zealand
  2. School of Engineering, Auckland University of Technology, Auckland, New Zealand