e & i Elektrotechnik und Informationstechnik, Volume 128, Issue 10, pp 359–365

Content generation for 3D video/TV

  • N. Brosch
  • A. Hosni
  • G. Ramachandran
  • L. He
  • M. Gelautz

Summary

A lack of suitable 3D content currently constitutes a major bottleneck for transferring the recent success of 3D cinema to our home TVs. In this paper we describe a comprehensive 3D content generation system that produces 3D content from either stereoscopic or monoscopic input videos. In particular, we present a method that converts original 2D image sequences to 3D content by adding depth information with only minimal user interaction. Furthermore, we show results of a stereo algorithm that provides the basis for the automatic adaptation of stereoscopic film material for viewing on different types of displays. In this context, we discuss the potential of inpainting techniques for filling in image regions that were originally occluded.
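The pipeline summarized above rests on two steps that are easy to illustrate in isolation: once a depth map has been assigned to a 2D frame, a second view can be synthesized by shifting each pixel horizontally according to its depth (depth-image-based rendering), and the pixels that become visible only in the new view are exactly the disoccluded regions that an inpainting step then has to fill. The following sketch is a minimal NumPy illustration of this general idea, not the authors' implementation; the function name, the linear depth-to-disparity mapping, and the parameter max_disparity are assumptions chosen for the example.

```python
import numpy as np

def render_second_view(frame, depth, max_disparity=32):
    """Warp a single frame into a second (right-eye) view using a per-pixel
    depth map (illustrative depth-image-based rendering, not the paper's method).

    frame : H x W x 3 uint8 array, the original 2D image
    depth : H x W float array in [0, 1], where 1.0 is closest to the camera
    Returns the synthesized view and a boolean mask of disoccluded pixels,
    i.e. the holes that a subsequent inpainting step would have to fill.
    """
    h, w = depth.shape
    # Assumed linear depth-to-disparity mapping: near pixels shift more.
    disparity = np.rint(depth * max_disparity).astype(int)

    view = np.zeros_like(frame)
    zbuf = np.full((h, w), -1, dtype=int)  # disparity of the pixel written so far

    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]        # horizontal shift towards the new viewpoint
            if 0 <= xr < w and disparity[y, x] > zbuf[y, xr]:
                view[y, xr] = frame[y, x]   # nearer pixels win (occlusion handling)
                zbuf[y, xr] = disparity[y, x]

    holes = zbuf < 0                        # disocclusions: no source pixel maps here
    return view, holes
```

A production system would additionally resample at sub-pixel accuracy and fill the returned holes, for instance with the inpainting techniques discussed in the paper; the per-pixel Python loops are kept only for readability.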

Keywords

3D video, stereo vision, 2D-to-3D conversion, inpainting

Erstellen von Inhalten für 3D Video/TV (Content generation for 3D video/TV)

Zusammenfassung (Abstract)

While 3D has successfully found its way into our cinemas, a lack of suitable 3D content has so far hindered the establishment of 3D television in our living rooms. In this work we present a complete system for creating 3D content from stereoscopic or monoscopic source videos. In particular, we consider techniques that convert original 2D image sequences into 3D material by adding depth information with little effort on the part of the user. Furthermore, we show results of a stereo algorithm that provides the basis for the automatic adaptation of stereoscopic film material to different display media. In this context, we demonstrate the potential of inpainting techniques for filling in originally occluded image regions.

Schlüsselwörter (Keywords)

3D video, stereo vision, 2D-to-3D conversion, inpainting



Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  • N. Brosch (1)
  • A. Hosni (2)
  • G. Ramachandran (3)
  • L. He (1)
  • M. Gelautz (4)

  1. Doctoral College on Computational Perception, Vienna University of Technology, Institute for Software Technology and Interactive Systems, Vienna, Austria
  2. Vienna Ph.D. School of Informatics, Vienna University of Technology, Institute for Software Technology and Interactive Systems, Vienna, Austria
  3. Doctoral College on Computational Perception, Vienna University of Technology, Communications and Radio-Frequency Engineering, Vienna, Austria
  4. Institute for Software Technology and Interactive Systems, Vienna University of Technology, Vienna, Austria
