Technological advances related to multimedia frameworks have transformed the ways in which users interact with and access all types of content. More recently, specific solutions related to augmented and mixed reality have also played a role in multimedia frameworks. As a consequence, increasingly powerful and portable devices serve a variety of purposes, including leisure, social relations, education, medicine, and access to information [1]. These transformations in users’ interactions with the surrounding world necessitate evaluating and optimizing both applications and their uses. When we focus on the evaluation of the user experience, we find countless resources that are primarily focused on professional sectors (studies of accessibility and usability) [2]. More recently, it has been found that, in the educational framework, it is necessary to evaluate how to integrate new Information Technologies to improve the participation and motivation of students at all educational levels [3, 4].

On the other hand, digital content, services, systems, and methodologies have been studied over the last several decades to improve and generate new models and methods for accessing content (rules and recommendations), thereby adapting those contents to all types of users and devices [5]. These efforts are dynamic, particularly considering the constant technological revolution that continuously transforms these devices and their capabilities. Substantial effort is devoted to adapting content to mobile devices, since their growing popularity and decreasing cost have afforded them a significant presence in our society. In particular, aspects such as security [6] and adaptation and communication with older users [7] or users with disabilities [8] are perhaps the most developed fields within design and multimedia studies. These aspects are the main disciplines in the effort to generate applications that are accessible to all types of users, with customizable and usable interactions adapted to basic navigation rules. Digital workflows have alleviated problems of navigation and communication, and university students (digital natives) are often able to work more efficiently than many experienced professionals who are unable to use the new technologies.

This special issue focuses on research work related to the design, development, evaluation, and use of new interaction media/applications and their combinations, as well as approaches focused on assessing motivation and the degree of user satisfaction in these interactions. The issue addresses the visualization of complex data, both two-dimensional and three-dimensional [9], advanced interfaces, multimedia uses, and, in general, different approaches to Human–Computer Interaction and Computing that aim to improve the universal access and usability of information through different adaptive techniques [10].

This UAIS special issue follows the successful organization of two international events: the International Workshop on User Experience in e-Learning and Augmented Technologies in Education (UXeLATE 2012), held in conjunction with the 20th ACM Multimedia Conference 2012 in Nara, Japan (http://www.acmmm12.org/), and the invited session titled “Social and Visual Technologies: New trends in the improvement of university education”, programmed in the thematic area of “Virtual, Augmented and Mixed Reality” of the 15th HCI International Conference 2013, held in Las Vegas, USA (http://www.hcii2013.org/). This special issue focuses on the use of new interaction media and applications to improve the accessibility of multimedia and mixed/augmented reality content for all types of users, particularly in the educational sector and in relation to the content mentioned above. Nine papers describing methodologies, experiences, and case studies from the areas of usability and accessibility of multimedia content comprise this special issue.

To start, Fonseca et al. [11] propose a mixed-methods approach to evaluate the motivation and satisfaction of architecture degree students using collaborative and augmented visual technologies, in this specific case Augmented Reality (AR) and social media tools, to present their building projects. Complementing classical quantitative studies based on Likert scales, they propose a new level of student evaluation using the Bipolar Laddering (BLA) test, a qualitative method historically associated with the User Experience (UX) field. Through this mixed approach, they demonstrate that the final results overcome the inherent limitations of quantitative methods, since the qualitative techniques capture users’ subjective emotional responses.

The paper by Mehler et al. [12] provides a theoretical assessment of gestures in the context of authoring image-related hypertexts, centered on the example of the museum information system WikiNect. The authors define gestural writing as a kind of coding in which propositions are expressed solely by means of gestures. They also demonstrate that gestural writing primarily focuses on the perceptual level of image descriptions. By exploring the metaphorical potential of image schemata, their experimental results show how the expressiveness of gestural writing can be extended to reach the conceptual level of image descriptions. The main conclusion of the work is that HCI interface design strives for easy handling, and the most intuitive forms of interaction are known to be iconic and indexical means; the paper provides a starting point for examining this common HCI view from a semiotic perspective.

Ferracani et al. [13] present an interesting initiative that can improve the access and training of emergency medicine operators by adopting natural interaction paradigms in immersive environments. The use of immersive simulations in medical training is extremely useful for confronting emergency operators with scenarios ranging from usual to extreme without exposing the simulation participants to any harm. Their proposal, EMERGENZA (“emergency” in Italian), developed as a “serious game”, allows simulating a first-aid scenario within a configurable virtual environment using interactive 3D graphics. Users navigate and interact with the virtual environment through a natural interface. To evaluate the prototype, the authors chose and tested several heuristics to measure overall system usability. The results show that the adoption of natural interaction in immersive virtual environments receives positive feedback from users.

In the field of augmented technologies used in the educational framework, Sánchez et al. [14] evaluate the implementation of GPS (Global Positioning System) to register virtual information onto real space using AR tools in Architecture and Building Engineering degrees. Using commercial software such as Layar for mobile devices, the authors designed a system to visualize complex 3D models, which are linked to virtual information channels through a database and geo-located in their real position. The basis of the proposal, centered on the current information society, is students’ innate affinity with user-friendly digital devices such as smartphones and tablets. For these reasons, the visualization of educational content in real environments was found to help students evaluate and share their self-generated architectural proposals and improve their spatial skills. The method proposed in this paper aims to improve access to 3D multimedia content on mobile devices and adapt it to all types of users and content. In addition, a usability analysis was carried out to demonstrate the feasibility and effectiveness of this technology in educational settings.

Continuing in the educational framework, García-Peñalvo and Conde [15] focus their research on evaluating the impact of a mobile Personal Learning Environment (PLE) in different educational contexts. The paper presents PLEs and mobile technologies as a solution to support lifelong learning, but also highlights some problems concerning their relationship with institutions. The main contribution of the paper is a service-based framework that makes this type of interaction possible, namely the communication of mobile personal learning environments with institutional learning platforms. The framework has been implemented as an Android solution and tested by students and teachers. The main conclusions can be summarized as follows: from the students’ perspective, and in a controlled context, the opportunity to represent their PLE on a mobile device that includes functionalities and/or information from the institutional Learning Management System (LMS), which can be combined with other tools they use to learn, encourages them to participate in the subjects and helps them to learn. These results show that the definition of a mobile Personal Learning Environment is possible, and its use increases students’ motivation.

Barnache and Hernández-Ibáñez [16] describe and evaluate a case study on the use of virtual worlds as a school tool to engage children in the learning process. The paper describes the results of the interaction of three groups of children within a flexible virtual space that connects schools and museums. The proposed integrated educational space includes not only the exploration of exhibition areas but also telepresence talks by museum personnel, simulations, and educational work in the form of virtual quests, all within a multiuser virtual environment based on OpenSim, simultaneously accessible from the different institutions involved in the experiment. The paper presents results that could serve as a starting point for a future implementation of this platform to connect educational institutions and museums across an entire city. As in other papers in this issue, the naturalness with which the young students interacted with digital content was also notable. It is relevant to note that simply projecting the remote docent’s avatar in the virtual hall onto a screen offered better results in terms of understanding the lecture than a configuration with all students inside the virtual world. This constitutes a simple and effective method for enabling remote talks, even to different classes simultaneously.

The paper by Gonçalves et al. [17] focuses on the improvements that a 2.5D/3D Distributed Control System (DCS) interface provides to users of these systems, which usually operate in 2D, allowing a full view of the entire manufacturing process. The main goal of the paper is to present and discuss how it is possible to increase the quantity and quality of information received at a DCS console on the status of the industrial process to be controlled. This entails the creation of an innovative DCS operator display that meets usability and accessibility principles. The improved information allows the operator to acquire knowledge of the current state of the process and thus make thoughtful and reasoned decisions. In addition, it reduces the operator’s level of anxiety and increases productivity and commitment. The paper also provides a detailed description of the proposed new 3D DCS interface and the main considerations for its implementation, and proposes future research in this field in two areas: the graphical area and the use of different peripherals.

Panchanathan and McDaniel [18] focus on a new Human-Centered Multimedia Computing (HCMC) methodology, a relevant subfield of Human-Centered Computing (HCC), by considering the perspectives of individuals with disabilities. The paper argues that while technological solutions provide significant benefits for the broader population, individuals with disabilities have been largely ignored, often having to force-fit or adapt themselves to available solutions. For this reason, the authors introduce a person-centered approach to HCMC known as Person-Centered Multimedia Computing (PCMC). In the paper, they further enrich the PCMC methodology by incorporating interdisciplinary inspirations that take into account the diverse challenges associated with assistive technology design and deployment. As an example, the authors present several applications highlighting how considerations of technology, adaptation, and policy from a disability perspective can enrich the design of person-centered accessible technologies. This approach has been implemented through their ongoing work on an NSF IGERT project, “Alliance for Person-centered Accessible Technologies” (APAcT), details of which are also provided in the paper.

Finally, the paper by Margetis et al. [19] proposes a new framework to augment educational environments, such as a typical classroom or any studying environment. As seen in most of the papers presented in this issue, pervasive computing environments have permeated current research and practice, augmenting existing environments with digital content. In this context, the paper investigates unobtrusive interaction and the support of active educational or studying activities through appropriate context-sensitive information. The suitability of the proposed interaction technologies and of the overall approach has been demonstrated through three interactive applications integrated in the framework, each one supporting different interaction techniques and addressing different educational activities: SESIL addresses typical classroom activities such as reading and exercise-solving; the AR study desk targets exploratory educational activities, where the learner aims to receive information about a specific topic; and the “Book of Ellie” mainly addresses the needs of younger children and edutainment activities, where learning is achieved through playing. To evaluate the proposal, a user experience evaluation of the three test-bed applications has been carried out, aiming to assess the applicability of the approach and the suitability of each of the proposed technologies to the educational tasks at hand.

In conclusion, this special issue presents new approaches and novel data that can improve the digital skills and abilities of all users of augmented and multimedia technologies. These advanced technologies rely on tangible technologies and full-body interaction with digital content and services in physical environments. The primary goal of these new interactions is to empower collaboration and learning by taking advantage of human abilities to grasp and manipulate physical objects and materials. Using 3D digital models and interfaces, the final users (e.g., students, educators, researchers, professionals) can understand space, ideas, and content more clearly and can quickly improve Universal Access and the usability of content. The papers in this special issue provide examples of new systems, approaches, and evaluation methods that aim to achieve a better comprehension of human interaction with an extended range of multimedia technologies.

The Guest Editors wish to thank the Editor-in-Chief of the International Journal Universal Access in the Information Society, Professor Constantine Stephanidis, for his patience and constant support and help with the process of editing this issue. We would also like to thank all the authors for their contributions and the reviewers for their assessment of the papers. We hope that the readers of the UAIS Journal will find the papers of this special issue interesting.

1 List of reviewers

Francesc Alías, La Salle Campus Barcelona, Universitat Ramon Llull, Spain

Claudio Barradas, Instituto Politécnico de Santarém, Portugal

Yi-Fan Chen, Old Dominion University at Norfolk, Virginia, USA

Nikos Doulamis, National Technical University of Athens, Greece

Mireia Fernández, IN3 Universitat Oberta de Catalunya, Spain

Oscar García, ENTI The Videogame School, Spain

Francisco José García-Peñalvo, Grupo de Investigación en InterAcción y eLearning, Universidad de Salamanca, Spain

Alex García-Alonso, Euskal Herriko Unibertsitatea at Donostia, Spain

Claudio R. Geyer, Universidade Federal do Rio Grande do Sul, Brazil

Renata Gorska, Cracow University of Technology, Poland

Heedong Ko, Image Media Research Center, Korea

Luis Hernández, Universidade da Coruña, Spain

Wolfgang Huerst, Utrecht University, The Netherlands

Rosa Iglesias, IKERLAN Research Center, Spain

Malinka Ivanova, Technical University of Sofia, Bulgaria

Raymond Kosala, Binus University, Indonesia

Pilar Mareca, Universidad Politécnica de Madrid, Spain

Troy McDaniel, Center for Cognitive Ubiquitous Computing at Arizona State University, USA

Alexander Mehler, Goethe-Universität Frankfurt am Main, Germany

Sethuraman Panchanathan, Center for Cognitive Ubiquitous Computing at Arizona State University, USA

Ernest Redondo, Universitat Politècnica de Catalunya, Spain

Jayson Richardson, University of Kentucky at Lexington, USA

Alvaro Rocha, Associação Ibérica de STI, Portugal

Xenophon Zabulis, Institute of Computer Science—Foundation for Research and Technology—Hellas, Greece