On the Use of Recurrent Neural Networks for Grammar Learning and Word Spotting
In this work we infer from speech the possible allophone sequences and their distribution over time using a recurrent network architecture whose recurrent connections learn phoneme-sequence relationships. We describe experiments in the context of keyword spotting.
Keywords: Hidden Markov Model, Recurrent Neural Network, Word Level Grammatical Inference, Phoneme Sequence
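The abstract's central idea, recurrent connections that capture which phoneme may follow which, can be sketched as a minimal Elman-style network trained to predict the next phoneme in a sequence. This is an illustrative assumption, not the paper's actual system: the toy phoneme inventory, layer sizes, and output-layer-only training below are all hypothetical simplifications.

```python
import numpy as np

# Hypothetical sketch: an Elman-style recurrent net that learns
# next-phoneme prediction. Inventory, sizes, and the output-only
# training rule are illustrative, not taken from the paper.

np.random.seed(0)

PHONEMES = ["k", "ae", "t", "s"]   # toy inventory
V = len(PHONEMES)
H = 8                               # hidden units
IDX = {p: i for i, p in enumerate(PHONEMES)}

Wxh = np.random.randn(H, V) * 0.1   # input -> hidden
Whh = np.random.randn(H, H) * 0.1   # hidden -> hidden (the recurrence)
Why = np.random.randn(V, H) * 0.1   # hidden -> output

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(h, x):
    """One recurrent step: the Whh term carries sequence context forward."""
    h = np.tanh(Wxh @ x + Whh @ h)
    return h, softmax(Why @ h)

def train(seq, epochs=500, lr=0.5):
    """Fit only the readout Why (cross-entropy gradient); the random
    recurrent weights stay fixed, which keeps the sketch short."""
    global Why
    ids = [IDX[p] for p in seq]
    for _ in range(epochs):
        h = np.zeros(H)
        for cur, nxt in zip(ids, ids[1:]):
            h, p = step(h, one_hot(cur))
            p[nxt] -= 1.0                  # dL/dlogits for target `nxt`
            Why -= lr * np.outer(p, h)

def predict(prefix):
    """Most likely next phoneme after the given prefix."""
    h = np.zeros(H)
    p = np.full(V, 1.0 / V)
    for ph in prefix:
        h, p = step(h, one_hot(IDX[ph]))
    return PHONEMES[int(np.argmax(p))]

train(["k", "ae", "t", "s"])
```

After training on the single sequence "k ae t s", the hidden state distinguishes each prefix, so the readout recovers the allowed successor at each position; replacing the fixed recurrence with full backpropagation through time would let the network learn longer-range phoneme constraints.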