Study of Parallelization of the Training for Automatic Speech Recognition
In this work we study the parallelization of the training phase of an automatic speech recognition system based on Hidden Markov Models. The vocabulary is distributed uniformly across the processors, while the Markovian network of the target application is replicated on every processor. The proposed parallel algorithms rely on two communication strategies. In the first, called the regrouping algorithm, communications are delayed until the training of all local sentences is finished. In the second, called the cutting algorithm, packets of optimal size are first determined, and asynchronous communications are then performed after the training of each packet. Experimental results show that good performance can be obtained with the second algorithm.
Keywords: Automatic speech recognition, Markovian modeling, parallel processing
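The two communication strategies described in the abstract can be contrasted with a small schematic simulation. This is not the authors' implementation: `train_sentence`, the dictionary-based accumulators, and the packet size are illustrative assumptions standing in for Baum-Welch statistics gathering and message passing; the point is only that both strategies yield the same global statistics while differing in when, and how often, they communicate.

```python
def train_sentence(sentence):
    """Stand-in for one training pass over a sentence: returns a local
    accumulator update (here, simple symbol counts)."""
    return {w: sentence.count(w) for w in set(sentence)}

def merge(acc, update):
    """Combine an update into an accumulator; in the parallel setting this
    merge across processors is the communication step."""
    for key, val in update.items():
        acc[key] = acc.get(key, 0) + val

def regrouping(local_sentences):
    """Regrouping strategy: train all local sentences first, then perform a
    single (delayed) communication at the end."""
    global_acc, local_acc, comms = {}, {}, 0
    for s in local_sentences:
        merge(local_acc, train_sentence(s))
    merge(global_acc, local_acc)
    comms += 1  # one communication for the whole local workload
    return global_acc, comms

def cutting(local_sentences, packet_size=2):
    """Cutting strategy: group sentences into packets of a chosen size and
    communicate (asynchronously, in the paper) after each packet."""
    global_acc, comms = {}, 0
    for i in range(0, len(local_sentences), packet_size):
        packet_acc = {}
        for s in local_sentences[i:i + packet_size]:
            merge(packet_acc, train_sentence(s))
        merge(global_acc, packet_acc)
        comms += 1  # one communication per packet
    return global_acc, comms

# The local workload of one processor (toy data).
sentences = [["a", "b"], ["b", "c"], ["a"], ["c", "c"]]
acc1, n1 = regrouping(sentences)
acc2, n2 = cutting(sentences, packet_size=2)
print(acc1 == acc2, n1, n2)  # → True 1 2
```

The simulation makes the trade-off visible: regrouping minimizes the number of communications (one per processor) but delays them all to the end, whereas cutting issues more, smaller communications that can be overlapped with computation, which is what the paper's choice of an optimal packet size aims to exploit.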