3D representation of videoconference image sequences using VRML 2.0

  • Ioannis Kompatsiaris
  • Michael G. Strintzis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1425)


In this paper a procedure for the visualisation of videoconference image sequences using the Virtual Reality Modeling Language (VRML) 2.0 is described. First, image sequence analysis is performed in order to estimate the shape and motion parameters of the person talking in front of the camera. For this purpose, we propose the K-means with connectivity constraint algorithm, a general segmentation algorithm combining information of various types, such as colour and motion. The algorithm is applied hierarchically to the image sequence: it is first used to separate the background from the foreground object, and then to further segment the foreground object into the head and shoulders regions. Based on this information, personal 3D shape parameters are estimated. The rigid 3D motion of each sub-object is estimated next. Finally, a VRML file is created containing all of the estimated information.
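The segmentation step described above can be sketched as a K-means clustering of pixels on joint colour-and-position features, so that clusters stay spatially coherent. This is only an illustrative approximation of the paper's K-means with connectivity constraint: the spatial weight `lam`, the intensity-based initialisation, and the toy frame are assumptions, not the authors' exact formulation.

```python
# Illustrative sketch: K-means pixel clustering with a spatial-coherence term,
# approximating the "K-means with connectivity constraint" idea.
# `lam` (spatial weight) and the synthetic frame are assumptions for the demo.
import numpy as np

def kmc_segment(image, k=2, lam=0.5, n_iter=20):
    """Cluster pixels on [colour, lam * (y, x)] feature vectors."""
    h, w, c = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys / h, xs / w], axis=-1)          # normalised positions
    feats = np.concatenate([image, lam * coords], axis=-1).reshape(-1, c + 2)

    # Deterministic init: pick k pixels evenly spaced along sorted intensity.
    order = np.argsort(feats[:, :c].sum(1))
    centers = feats[order[np.linspace(0, len(order) - 1, k).astype(int)]].copy()

    for _ in range(n_iter):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                               # nearest-centre assignment
        for j in range(k):                                 # centre update
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(0)
    return labels.reshape(h, w)

# Toy frame: bright "foreground" square on a dark background,
# standing in for the foreground/background separation stage.
frame = np.zeros((32, 32, 3))
frame[8:24, 8:24] = 1.0
seg = kmc_segment(frame, k=2)
```

In the paper's hierarchical scheme, a first pass like this would separate background from foreground; a second pass on the foreground pixels alone would then split head from shoulders.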


Keywords: virtualised reality · model-based image sequence analysis · Virtual Reality Modeling Language




  1. D. Tzovaras, N. Grammalidis, and M. G. Strintzis, “Object-Based Coding of Stereo Image Sequences using Joint 3-D Motion/Disparity Compensation,” IEEE Trans. on Circuits and Systems for Video Technology, vol. 7, Apr. 1997.
  2. K. Aizawa, H. Harashima, and T. Saito, “Model-based analysis-synthesis image coding (MBASIC) system for a person's face,” Signal Processing: Image Communication, vol. 1, pp. 139–152, Oct. 1989.
  3. H. G. Musmann, M. Hötter, and J. Ostermann, “Object-oriented analysis-synthesis coding of moving images,” Signal Processing: Image Communication, vol. 1, pp. 117–138, Oct. 1989.
  4. “Overview of the MPEG-4 Standard,” tech. rep., ISO/IEC JTC1/SC29/WG11 N1730, Stockholm, Jul. 1997.
  5. VRML 2.0 Specification, http://vrml.sgi.com/moving-worlds.
  6. T. Kanade and P. J. Narayanan, “Virtualised reality: Constructing virtual worlds from real scenes,” IEEE Multimedia, pp. 34–46, Jan.–Mar. 1997.
  7. P. E. Eren, C. Toklu, and M. Tekalp, “Object-based video manipulation and composition using 2D meshes in VRML,” in IEEE Workshop on Multimedia Signal Processing, Princeton, New Jersey, USA, pp. 257–261, Jun. 1997.
  8. S. Z. Selim and M. A. Ismail, “K-means-type algorithms,” IEEE Trans. Pattern Anal. and Mach. Intell., vol. 6, pp. 81–87, Jan. 1984.
  9. M. J. T. Reinders, Model Adaptation for Image Coding. Delft University Press, 1995.
  10. A. W. Fitzgibbon, M. Pilu, and R. B. Fisher, “Direct Least Squares Fitting of Ellipses,” in International Conference on Pattern Recognition, Vienna, Austria, Aug. 1996.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Ioannis Kompatsiaris (1)
  • Michael G. Strintzis (1)
  1. Information Processing Laboratory, Electrical and Computer Engineering Department, Aristotle University of Thessaloniki, Thessaloniki, Greece
