Virtual Environments and Advanced Interfaces
The International Conference on Ubiquitous Computing and Communications (IUCC) addresses challenges across computer science: in systems design and engineering, in systems modelling, in user interface design, and in pervasive and reliable computing solutions and communication services available anytime and anywhere.
Virtual Environments and Advanced Interfaces (VEAI) is a special session within IUCC that aims to promote discussion of ways to enhance human-computer interaction and communication in immersive virtual environments. The workshop brings together specialists addressing challenges in virtual reality (VR), augmented reality (AR) and mixed reality (MR) software and hardware, combined with sensors that enable contextual awareness and capture the user's emotional state or state of mind, allowing the creation of more effective pervasive and affective experiences. This hybrid reality is the merging of real and virtual worlds to produce new environments and visualisations in which physical and digital objects co-exist and interact in real time.
Current research methodologies demonstrate what must be adopted in embedded environments to allow intuitive interaction between users and/or agents and their environment. The emerging technologies used within VEAI build upon rapid research and development advances in a wide range of key areas, including wireless and wearable sensors, mobile and embedded systems, and VR/AR agent technologies. Ubiquitous computing and communications have drawn significant interest from both academia and industry and continue to attract tremendous research attention due to the promising new business opportunities they open in information technology and engineering.
The session's topics of interest include:
Interactive virtual environments
Natural interfaces/multimodal interaction in virtual reality
Context-aware immersive/interactive virtual environments
Virtual human representation in virtual reality
Sound synthesis and design for immersion
User sensing technology
The papers are organized into two main groups according to their applications. The first group comprises three papers on effective visualisation of virtual environments and multimodal interaction in virtual reality. Morgado et al. detail a bot spooler architecture for integrating virtual worlds with learning management systems (LMS), supporting the organisational management of e-learning activities in corporate training. The presented architecture foresees several alternative communication channels between the LMS and virtual worlds, including the spooling of automated clients, or “bots”, and the flexibility to inject code where necessary and possible. Falconer identifies aspects of visualisation and soundscape design that have a high impact on users’ sense of space in virtual simulations, and discusses the use of a phenomenographic method to study how users’ experience affects their response to the simulation. Economou et al. discuss issues related to effective virtual human representation in virtual environments, including avatars, puppeted avatars, replicants and autonomous agents. All papers in this first group highlight directions for future work towards more affective user experiences in virtual environments.
The second group comprises three papers on context-aware immersive and interactive virtual environments using sensing technologies. Luiza et al. address the complexity of understanding the affective processes associated with user task engagement, investigating features extracted from electroencephalogram signals for affective state modelling, so that a system can recognise and react to the user’s emotions. Antoniou et al. present how full utilisation of spatial and overall environmental degrees of freedom can maintain the immersion, engagement and educational impact that web-based virtual patients have brought to contemporary medical education. Their work demonstrates that a careful balance must be maintained between technological capacity and content requirements, driven by user requirements, to achieve a more effective and impactful experiential educational modality. The final paper, by Karchoud et al., presents a single long-life application that adapts itself dynamically to the current situation by detecting occurring events and building contextually described situations, enabling mobile apps to respond to user needs on the fly. The paper proposes a global architecture based on the event-condition-action (ECA) rule paradigm, which transparently manages all the functionality the user may require by adapting dynamically to the situation at hand.
In summary, the papers in this special issue suggest pathways for future research and development, allowing effective understanding of contextual, location-based and biometric information to enable enhanced interaction in virtual environments.