Speech and Gesture Recognition-Based Robust Language Processing Interface in Noise Environment
We propose and implement a Wearable Personal Station (WPS) and Web-based robust Language Processing Interface (LPI) that integrates speech and sign language (the Korean Standard Sign Language; KSSL). The LPI is an integrated language recognition and processing system that selects the recognition modality best suited to the degree of noise in the current environment. It extends the traditional uni-modal language recognition system, which relies on a single sensory channel, a desktop PC, and a wired network, into an embedded, ubiquitous-oriented, next-generation language processing system. In experiments on 52 sentential recognition models, the uni-modal recognizers achieved average recognition rates of 92.58% (KSSL only) and 93.28% (speech only), while the proposed LPI achieved 95.09%. The LPI's average recognition time was 0.3 seconds.
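The core idea of selecting a recognition modality by noise degree can be sketched as follows. This is a minimal illustration only: the function name, the SNR measure, and the threshold value are assumptions for exposition, not the paper's actual selection algorithm.

```python
# Hypothetical sketch of noise-adaptive modality selection in an LPI-style
# system. The threshold and SNR-based rule are illustrative assumptions.

def select_modality(snr_db: float, threshold_db: float = 15.0) -> str:
    """Pick the recognition channel for the current acoustic noise level.

    In quiet conditions (high signal-to-noise ratio) speech recognition
    is preferred; in noisy conditions the system falls back to
    sign-language (KSSL) gesture recognition, which is unaffected by
    acoustic noise.
    """
    return "speech" if snr_db >= threshold_db else "kssl_gesture"

print(select_modality(25.0))  # quiet environment -> speech
print(select_modality(5.0))   # noisy environment -> kssl_gesture
```

In a real system the decision would be driven by a continuously estimated noise measure rather than a single fixed threshold.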
Keywords: Recognition Rate, Hand Gesture, Hand Gesture Recognition, Average Recognition Rate, Language Recognition