Deafness is not simply a barrier of sound, but also a barrier of language. Those born deaf do not use spoken (or written) language as their primary language; they prefer sign language instead [1]. This group faces barriers to education, social services, and employment opportunities because both the written and the voiced forms of spoken languages are inaccessible [25].

Sign language translation and avatar technology have the potential to create better access to both forms of spoken language. This collected volume is based on presentations given at the symposium “Sign Language Translation and Avatar Technology (SLTAT),” held in October 2013 at DePaul University, Chicago, IL, USA. The articles represent many of the major areas of effort currently being pursued in this discipline: machine translation, improved methods for capturing and editing signed corpora to generate animation, systems for improved display of signed animation, and evaluation of signing produced via avatar display.

1 Content of this issue

The first two papers discuss complete systems for machine translation. The first contribution, “A rule triggering system for automatic text-to-sign translation” by Michael Filhol, Mohamed N. Hadjadj, and Benoît Testu, reports on progress in machine translation from French text to LSF (French Sign Language). They describe a machine translation (MT) system supported by a set of production rules derived from corpus analysis of LSF, together with a system that triggers the rules via text processing. In the second paper, “From grammar based MT to post-processed SL representation,” Eleni Efthimiou, Stavroula-Evita Fotinea, Athanasia-Lida Dimou, Theodore Goulas, and Dimitris Kouremenos describe the implementation of a transfer-based machine translation system that translates from written Greek to Greek Sign Language (GSL), which is displayed via an avatar. A post-processing module complements the rule-based MT module and, when required, allows users to modify the output generated by the MT module.

The following two papers consider alternatives for capturing and editing data for corpus building and sign generation. In “Towards an intuitive sign language animation authoring system for the Deaf,” Alexis Heloir and Fabrizio Nunnari describe an online collaborative framework that lets members of the Deaf community author signs for a 3D avatar. They developed a user interface (UI) built on novel input devices and report that the framework allows novices to create signs at nearly the same speed as experts using keyboard and mouse input. Sylvie Gibet, François Lefebvre-Albaret, Ludovic Hamon, Rémi Brun, and Ahmed Turki present an approach to editing captured data to generate new utterances in their paper, “Interactive editing in French Sign Language dedicated to virtual signers: Requirements and challenges.” Their approach places a human operator in the loop for constructing utterances while maintaining constraints based on linguistic rules.

The third pair of papers focuses on the technology of avatar display. The article “KAZOO: A sign language generation platform based on production rules” by Annelies Braffort, Michael Filhol, Maxime Delorme, Laurence Bolot, Annick Choisier, and Cyril Verrecchia introduces a web application for displaying generated sign language via an avatar. The system is grounded in sign language corpus analysis, integrates 3D animation with linguistic modeling, and supports automatic sign production using an abstract linguistic model (AZee). The second paper of the pair, “An automated technique for real-time production of lifelike animations of American Sign Language” by John McDonald et al., considers sublinguistic in addition to linguistic modeling to improve realism and reduce the robotic appearance of avatar displays. It also discusses avatar optimizations that can lower rendering overhead in real-time displays.

The final papers evaluate different aspects of avatar technology for comprehensibility and acceptability among members of the Deaf community. In “Emotion facial expressions in synthesized sign language avatars: A manual evaluation,” Robert Smith and Brian Nolan explore and evaluate the effect of adding facial expressions. They augmented an existing avatar capable of displaying Irish Sign Language (ISL) with seven basic, universal emotions [6] and evaluated the result with members of the Irish Deaf community. In “Building a Swiss German sign language avatar with JASigning and evaluating it among the Deaf community,” Sarah Ebling and John Glauert evaluate aspects of JASigning for use in a system that translates German train announcements of the Swiss Federal Railways (Schweizerische Bundesbahnen, SBB) into Swiss German Sign Language (DSGS). They identify candidate avatar features required for the project and report the results of a focus group with members of the Deaf community, whose feedback informs further improvement of the avatar.