Tailored MFCCs for Sound Environment Classification in Hearing Aids
Hearing aids must operate at low clock rates to minimize power consumption and maximize battery life. Signal processing on hearing aids is therefore strongly constrained by the small number of instructions per second available on the digital signal processor the device is built around. In this respect, the objective of this paper is to propose a set of approximations that optimize the implementation of standard Mel Frequency Cepstral Coefficient (MFCC) based sound environment classifiers in real hearing aids. After a theoretical analysis of these coefficients and a set of experiments under different classification schemes, we demonstrate that the Discrete Cosine Transform can be removed from the feature extraction process: it does not yield an improvement in terms of error rate, yet it imposes a high computational load. Furthermore, replacing the logarithm with the position of the most significant bit also reduces the computational load considerably while obtaining comparable results in terms of error rate.
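The logarithm replacement mentioned in the abstract can be illustrated with a minimal sketch (this is only an illustration of the underlying idea, not the paper's fixed-point DSP implementation): for a positive integer filterbank energy, the index of the most significant set bit equals floor(log2(x)), so it tracks the base-2 logarithm to within one unit while costing only a bit scan instead of a logarithm evaluation.

```python
import math

def msb_position(x: int) -> int:
    """Index of the most significant set bit of x > 0, i.e. floor(log2(x))."""
    return x.bit_length() - 1

# Compare the cheap MSB approximation against the exact base-2 logarithm
# for a few example (hypothetical) filterbank energy values.
for energy in [1, 7, 64, 1000, 65535]:
    approx = msb_position(energy)    # e.g. 64 -> 6
    exact = math.log2(energy)
    print(f"energy={energy:6d}  msb={approx:2d}  log2={exact:.2f}")
```

Since the approximation error is bounded by one bit, and classifiers downstream are trained on the approximated features themselves, the coarser resolution need not degrade the error rate, which is consistent with the paper's finding.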
This work has been partially funded by the University of Alcalá (CCG2013/EXP-074), the Spanish Ministry of Economy and Competitiveness (TEC2012-38142-C04-02) and the Spanish Ministry of Defense (DN8644-ATREC).