Towards Automatic Annotation of Sign Language Dictionary Corpora

  • Marek Hrúz
  • Zdeněk Krňoul
  • Pavel Campr
  • Luděk Müller
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6836)

Abstract

This paper presents a novel automatic categorization of the signs used in sign language dictionaries. The categorization provides additional information about lexical signs recorded as video files. We design a new method for the automatic parameterization of these video files and for the categorization of signs based on the extracted information. The method incorporates advanced image processing for the detection and tracking of the hands and head of the signing person in the input image sequences. For hand tracking we developed an algorithm based on object detection and discriminative probability models. For head tracking we use an active appearance model, a powerful method for detecting and tracking human faces. We specify feasible conditions of the model under which the extracted parameters can be used for a basic categorization of the non-manual component. We present an experiment with automatic categorization that determines the symmetry, location, and contact of the hands, the shape of the mouth, closed eyes, and other features. The main result of the experiment is the categorization of more than 200 signs, together with a discussion of the problems encountered and possible future extensions.
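To illustrate the kind of categorization described above, the following sketch classifies a sign as one-handed, two-handed symmetric, or two-handed asymmetric from tracked 2D hand trajectories. This is a hypothetical illustration, not the paper's method: the function name, thresholds, and mirroring heuristic are all assumptions introduced here for clarity.

```python
import numpy as np

def categorize_symmetry(left, right, move_thresh=5.0, sym_thresh=0.8):
    """Hypothetical symmetry categorization from hand trajectories.

    left, right: (T, 2) arrays of per-frame hand centroids in pixels.
    Returns 'one-handed', 'symmetric', or 'asymmetric'.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)

    # Total path length of each hand; a nearly static hand suggests
    # a one-handed sign.
    def path_length(traj):
        return np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()

    if path_length(left) < move_thresh or path_length(right) < move_thresh:
        return "one-handed"

    # Mirror the left hand about the body midline (approximated as the
    # mean x-coordinate of both hands), then correlate the per-frame
    # displacements of the two hands.
    midline = np.mean(np.concatenate([left[:, 0], right[:, 0]]))
    mirrored = left.copy()
    mirrored[:, 0] = 2 * midline - mirrored[:, 0]

    dl = np.diff(mirrored, axis=0).ravel()
    dr = np.diff(right, axis=0).ravel()
    corr = np.corrcoef(dl, dr)[0, 1]
    return "symmetric" if corr > sym_thresh else "asymmetric"
```

A real system would apply such rules to the hand positions produced by the tracker described in the paper; the thresholds here are placeholders that would need tuning on annotated data.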



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Marek Hrúz (1)
  • Zdeněk Krňoul (1)
  • Pavel Campr (1)
  • Luděk Müller (1)
  1. Department of Cybernetics, University of West Bohemia, Plzeň, Czech Republic
