Sign Search and Sign Synthesis Made Easy to End User: The Paradigm of Building a SL Oriented Interface for Accessing and Managing Educational Content

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10278)


Accessibility of electronic content for deaf and hard-of-hearing WWW users depends crucially on the possibility of acquiring information that can be presented in their native sign language (SL), drawn from the vast amounts of text sources constantly being uploaded. Equally crucial is the ability to easily create new electronic content that enables dynamic message exchange, covering various communication needs.

Given that considerable language resources have been created for a number of SLs worldwide during the last decade, integrating a set of deaf accessibility aids, in combination with standard Language Technology (LT) tools for text handling, into the platforms serving everyday tasks may drastically improve access to Web services for deaf and hard-of-hearing (HoH) populations.

In this paper, we present an example of integrating a set of tools that enable written-content accessibility and dynamic student-student/student-teacher interaction via SL, as applied to the official educational content platform of the Greek Ministry of Education for the primary and secondary education levels, exploiting Greek Sign Language (GSL) resources.


Keywords: Web accessibility via SL · SL oriented HCI · Dynamic sign language synthesis · Fingerspelling for search input · Deaf communication · Deaf education · Deaf accessibility tools evaluation



The research leading to these results has received funding from the POLYTROPON project (KRIPIS-GSRT, MIS: 448306) and is based on insights, technologies, and language resources initially developed within the Dicta-Sign project (FP7-ICT, grant agreement no. 231135).



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Institute for Language and Speech Processing / ATHENA RC, Athens, Greece
