
Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions

Lecture Notes in Computer Science, volume 5641, pp. 276–290

Supporting Engagement and Floor Control in Hybrid Meetings

  • Rieks op den Akker (Human Media Interaction Group, University of Twente)
  • Dennis Hofs (Human Media Interaction Group, University of Twente)
  • Hendri Hondorp (Human Media Interaction Group, University of Twente)
  • Harm op den Akker (Human Media Interaction Group, University of Twente)
  • Job Zwiers (Human Media Interaction Group, University of Twente)
  • Anton Nijholt (Human Media Interaction Group, University of Twente)


Abstract

Remote participants in hybrid meetings often have difficulty following what is going on in the (physical) meeting room they are connected with. This paper describes a videoconferencing system for participation in hybrid meetings. The system has been developed as a research vehicle to investigate how technology based on automatic real-time recognition of conversational behavior in meetings can be used to improve engagement and floor control by remote participants. The system uses modules for online speech recognition and real-time visual focus of attention detection, as well as a module that signals who is being addressed by the speaker. A built-in keyword spotter allows an automatic meeting assistant to call the remote participant’s attention when a topic of interest is raised, pointing at the transcription of the fragment to help them catch up.
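The keyword-triggered attention cue described in the abstract can be illustrated with a minimal sketch. The Python fragment below is not part of the paper's system; the names (TranscriptFragment, KeywordSpotter, notify_remote_participant) and the topic-matching logic are assumptions made for illustration. It shows one way an automatic meeting assistant could match recognized speech fragments against a remote participant's topics of interest and point them at the relevant part of the transcript.

```python
# Hypothetical sketch of the keyword-triggered attention cue: an ASR module
# emits transcribed fragments, a keyword spotter checks them against the
# remote participant's topics of interest, and a notification with a pointer
# back into the transcript is pushed to that participant.
# All names here are illustrative, not taken from the paper.

from dataclasses import dataclass


@dataclass
class TranscriptFragment:
    index: int        # position of the fragment in the running transcript
    speaker: str      # identified speaker in the meeting room
    text: str         # recognized speech for this fragment


class KeywordSpotter:
    def __init__(self, topics_of_interest):
        # Lowercased keywords the remote participant asked to be alerted on.
        self.topics = {t.lower() for t in topics_of_interest}

    def matches(self, fragment: TranscriptFragment):
        # Return the topics of interest that occur in this fragment.
        words = set(fragment.text.lower().split())
        return sorted(self.topics & words)


def notify_remote_participant(fragment, hits):
    # Stand-in for the real notification channel (e.g. a visual cue in the
    # videoconferencing client); here it simply prints the alert.
    print(f"Topic {hits} raised by {fragment.speaker}; "
          f"see transcript fragment #{fragment.index}: \"{fragment.text}\"")


if __name__ == "__main__":
    spotter = KeywordSpotter(["budget", "deadline"])
    stream = [
        TranscriptFragment(0, "Alice", "let's review the agenda first"),
        TranscriptFragment(1, "Bob", "the budget for the demo is still open"),
    ]
    for frag in stream:
        hits = spotter.matches(frag)
        if hits:
            notify_remote_participant(frag, hits)
```

Running the sketch alerts the remote participant only on the second fragment, where "budget" is mentioned, and points them at that fragment so they can catch up with the discussion.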