Advanced Occlusion Handling for Virtual Studios

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7425)


Virtual studios typically use a layering method to achieve occlusion. A virtual object can be manually set in the foreground or background layer by a human controller, allowing it to appear in front of or behind an actor. Single-point actor tracking systems have been used in virtual studios to automate occlusions. However, the suitability of single-point tracking diminishes when considering more ambitious applications of an interactive virtual studio. As interaction often occurs at the extremities of the actor’s body, the automated occlusion offered by single-point tracking is insufficient and multiple-point actor tracking is justified. We describe ongoing work towards an automatic occlusion system based on multiple-point skeletal tracking that is compatible with existing virtual studios. We define a set of occlusions required in the virtual studio; describe methods for achieving them; and present our preliminary results.
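The per-pixel occlusion decision the abstract describes can be illustrated with a small sketch. This is not the authors' implementation: it assumes a chroma-keyed actor matte, a per-pixel actor depth map (e.g. derived from a depth sensor or interpolated from tracked skeleton joints), and a virtual-layer depth buffer, all expressed as arrays in camera space. The function name and depth convention (metres from the camera) are hypothetical.

```python
import numpy as np

def composite_with_occlusion(camera_rgb, actor_mask, virtual_rgb,
                             actor_depth, virtual_depth):
    """Depth-based occlusion compositing (illustrative sketch).

    camera_rgb, virtual_rgb : H x W x 3 colour images
    actor_mask              : H x W bool, True where the keyed actor is
    actor_depth, virtual_depth : H x W depths in metres from the camera
                                 (assumed convention: smaller = closer)
    """
    # The actor occludes the virtual layer only where the matte is on
    # AND the actor is closer to the camera than the virtual surface.
    actor_in_front = actor_mask & (actor_depth < virtual_depth)

    out = virtual_rgb.copy()
    out[actor_in_front] = camera_rgb[actor_in_front]
    return out

# Tiny 2x2 example: the actor covers the top row; the virtual surface
# sits at 2 m. Only the top-left actor pixel (1 m) is in front of it.
cam = np.full((2, 2, 3), 255, dtype=np.uint8)   # actor pixels (white)
virt = np.zeros((2, 2, 3), dtype=np.uint8)      # virtual layer (black)
mask = np.array([[True, True], [False, False]])
a_d = np.array([[1.0, 3.0], [1.0, 1.0]])
v_d = np.full((2, 2), 2.0)

result = composite_with_occlusion(cam, mask, virt, a_d, v_d)
```

A single-point tracker would apply one depth value to the whole actor, switching the entire silhouette between layers at once; the per-pixel comparison above is what multiple-point tracking makes possible at the limbs and extremities.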


Keywords: Virtual Object, Video Layer, Background Layer, Occlusion Handling, Television Studio





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. Birmingham City University, Birmingham, UK
