Sign Languages of the World

Chapter
Part of the Cognitive Science and Technology book series (CSAT)

Abstract

Sign languages have existed since the dawn of humanity and were likely the first means of communication among early humans. Before people communicated with a spoken vocabulary, it is fair to assume that they communicated through gestures using hand, face, mouth and body movements. Today, however, sign language is predominantly associated with disabilities, whether congenital or acquired through accident. Most users are hearing impaired or mute. There is also a subgroup of hearing children of hearing-impaired parents whose own senses are unaffected but who use sign language because of the needs of the community in which they live.

Keywords

Australian sign language · Historical perspective · Auslan · Finger spelling · Auslan evolution · American sign language · Machine recognition · Glove-based sign language · Neural networks · Fourier descriptors · Hidden Markov model · Mode of interaction


Copyright information

© Springer Science+Business Media Singapore 2014

Authors and Affiliations

School of Electrical, Computer and Telecommunications Engineering, The University of Wollongong, North Wollongong, Australia