Efficiently Capturing Object Contours for Non-Photorealistic Rendering

  • Jiyoung Park
  • Juneho Yi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4522)

Abstract

Non-photorealistic rendering (NPR) techniques aim to outline the shape of objects while reducing visual clutter such as shadows and inner texture edges. As the first phase of our research, this work presents a structured-light-based approach that efficiently detects depth edges in real-world scenes; depth edges directly represent object contours. We exploit the distortion of the projected light pattern along depth discontinuities to reliably detect depth edges. In practice, however, this distortion may not occur, or may be too small to detect, depending on the distance of the object from the camera and projector. To make the approach practical, we present a novel method that guarantees the occurrence of detectable distortion along depth discontinuities over a continuous range of object locations. Experimental results show that the technique can reliably provide object contours for use in non-photorealistic rendering.
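The abstract gives no implementation details, but the following Python/OpenCV sketch illustrates the general idea of detecting depth edges from pattern distortion in a structured light image. It is not the authors' method: the use of a Gabor quadrature pair, the stripe period, the thresholds, and the function name are all illustrative assumptions about one plausible realization.

```python
import cv2
import numpy as np


def detect_depth_edges(pattern_img, ambient_img,
                       stripe_period=8.0, response_thresh=0.25):
    """Mark pixels where the projected stripe pattern breaks down.

    pattern_img : grayscale frame captured with the stripe pattern projected.
    ambient_img : the same view captured without the pattern.
    stripe_period : approximate stripe period in pixels (setup-dependent guess).
    """
    # Remove scene texture so that mostly the projected pattern remains.
    pattern = cv2.subtract(pattern_img, ambient_img).astype(np.float32)

    # Quadrature pair of Gabor filters tuned to the stripe frequency: the
    # magnitude is high where the pattern is intact and drops where the
    # pattern is distorted or missing, i.e. near depth discontinuities.
    ksize = (31, 31)
    theta = np.pi / 2                      # assumes horizontal stripes
    even = cv2.getGaborKernel(ksize, stripe_period, theta,
                              stripe_period, 0.5, psi=0)
    odd = cv2.getGaborKernel(ksize, stripe_period, theta,
                             stripe_period, 0.5, psi=np.pi / 2)
    re = cv2.filter2D(pattern, cv2.CV_32F, even)
    im = cv2.filter2D(pattern, cv2.CV_32F, odd)
    magnitude = np.sqrt(re ** 2 + im ** 2)
    magnitude /= magnitude.max() + 1e-6

    # Depth edges appear as boundaries of low-response regions.
    weak = (magnitude < response_thresh).astype(np.uint8) * 255
    return cv2.Canny(weak, 50, 150)
```

In this sketch the two captures (with and without the pattern) isolate the projected stripes from scene texture, and the Gabor magnitude serves as a simple measure of how well the expected stripe structure survives at each pixel.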

Keywords

depth edges, structured light, non-photorealistic rendering

Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Jiyoung Park 1
  • Juneho Yi 2
  1. Computer Graphics Research Team, Digital Content Research Division, Electronics and Telecommunications Research Institute, Daejeon 305-700, Korea
  2. School of Information and Communication Engineering, Sungkyunkwan University, Suwon 446-740, Korea
