Tailored MFCCs for Sound Environment Classification in Hearing Aids

  • Roberto Gil-Pita
  • Beatriz López-Garrido
  • Manuel Rosa-Zurera
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 315)

Abstract

Hearing aids must operate at low clock rates in order to minimize power consumption and maximize battery life. The implementation of signal processing techniques in hearing aids is therefore strongly constrained by the small number of instructions per second available in the digital signal processor on which the hearing aid is based. In this respect, the objective of this paper is to propose a set of approximations that optimize the implementation of standard Mel Frequency Cepstral Coefficient (MFCC) based sound environment classifiers in real hearing aids. After a theoretical analysis of these coefficients and a set of experiments under different classification schemes, we demonstrate that the Discrete Cosine Transform can be suppressed from the feature extraction process, since its use does not improve the error rate while it imposes a high computational load. Furthermore, using the position of the most significant bit instead of the logarithm also reduces the computational load considerably while achieving comparable results in terms of error rate.
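The most-significant-bit approximation mentioned in the abstract can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation; the helper name `msb_log2` and the toy energy values are ours. For an integer x ≥ 1, the position of the most significant bit equals floor(log2(x)), which serves as a cheap integer substitute for the logarithm applied to the filter-bank energies.

```python
import math

def msb_log2(x: int) -> int:
    """Approximate log2 by the position of the most significant bit.

    For any integer x >= 1 this returns floor(log2(x)), a cheap
    substitute for the logarithm that needs no floating-point unit.
    (The name msb_log2 is ours, not the paper's.)
    """
    return x.bit_length() - 1

# With the DCT suppressed, the feature vector is simply the (approximate)
# log filter-bank energies rather than their cosine transform.
energies = [1, 12, 255, 1024, 40000]          # toy filter-bank energies
features = [msb_log2(e) for e in energies]    # MSB-based "log" features
exact = [math.floor(math.log2(e)) for e in energies]
assert features == exact
```

On a fixed-point DSP, `bit_length` reduces to a count-leading-zeros instruction, which is why this substitution trades almost no classification accuracy for a large saving in instructions per second.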

Acknowledgments

This work has been partially funded by the University of Alcalá (CCG2013/EXP-074), the Spanish Ministry of Economy and Competitiveness (TEC2012-38142-C04-02) and the Spanish Ministry of Defense (DN8644-ATREC).

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Roberto Gil-Pita (1)
  • Beatriz López-Garrido (2)
  • Manuel Rosa-Zurera (1)
  1. Signal Theory and Communications Department, University of Alcalá, Madrid, Spain
  2. Servicio de Salud de Castilla-La Mancha (SESCAM), Castilla-La Mancha, Spain