Temperature controlled PSO on optimizing the DBN parameters for phoneme classification
Speech recognition has become an essential component for communicating with modern gadgets and machines easily through speech. A phoneme classification model for phonemes in Tamil continuous speech is built here by exploiting the power of the deep belief network (DBN), a powerful neural network architecture capable of learning complex problems. Building an efficient DBN, however, depends strongly on several parameters: the number of layers, the number of neurons, the connection weights and the biases. The effect of increasing the number of layers in a DBN for phoneme recognition was studied in our previous experiments. In addition, our earlier work examined a methodology that employed particle swarm optimization (PSO), or its variants second generation PSO (SGPSO) and new method PSO (NMPSO), to optimize the connection weights and biases of the DBN for phoneme classification. Pre-training the DBN with PSO suffered from particle stagnation and took longer to converge; the DBN with SGPSO or NMPSO converged faster but still suffered from particle stagnation, which prevented it from reaching an optimal solution. Here we aim to reduce stagnation of particles in the population, in addition to achieving faster convergence, by proposing an improved PSO, named temperature controlled PSO (TPSO), to optimize the initial connection weights and bias parameters that govern DBN efficiency. TPSO converges faster and optimizes the DBN connection weights and biases better than the existing variants, with reduced stagnation of the population. The TPSO–DBN was designed and applied to a phoneme classification problem for Tamil continuous speech, and it classified phonemes comparatively better, with a classification accuracy of 89.2%.
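To make the optimization step concrete, the sketch below shows standard particle swarm optimization minimizing a toy objective. This is an illustrative baseline only, not the paper's TPSO: the temperature-control mechanism, swarm sizes, and DBN encoding are not specified in this abstract, and the `sphere` objective is a hypothetical stand-in for a DBN reconstruction-error measure over a flattened weight-and-bias vector.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
    """Minimal PSO minimizer (illustrative; not the paper's TPSO).

    In a DBN setting, each particle position would encode the
    network's connection weights and biases as one flat vector,
    and `objective` would score the resulting DBN.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialize particle positions randomly and velocities at zero.
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    # Personal bests start at the initial positions.
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Hypothetical stand-in for a DBN error measure.
def sphere(x):
    return sum(v * v for v in x)

best, best_f = pso(sphere, dim=5)
```

The stagnation problem mentioned above arises when particles' personal bests and the global best stop improving, so the cognitive and social terms vanish; the paper's TPSO introduces a temperature-based control to counteract this, which this plain sketch does not implement.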
Keywords: Phoneme recognition · Particle swarm optimization · Deep belief network · Tamil speech · Phoneme classification · DBN parameter optimization