Chapter

Speech, Audio, Image and Biomedical Signal Processing using Neural Networks

Volume 83 of the series Studies in Computational Intelligence pp 145-167

Speech/Non-Speech Classification in Hearing Aids Driven by Tailored Neural Networks

  • Enrique Alexandre, Department of Signal Theory and Communications, University of Alcalá
  • Lucas Cuadra, Department of Signal Theory and Communications, University of Alcalá
  • Manuel Rosa-Zurera, Department of Signal Theory and Communications, University of Alcalá
  • Francisco López-Ferreras, Department of Signal Theory and Communications, University of Alcalá


This chapter explores the feasibility of using tailored neural networks to automatically classify sounds as either speech or non-speech in hearing aids. These two classes were selected with the aim of improving speech intelligibility and user comfort. Hearing aids on the market face severe constraints on computational complexity and battery life, so a number of trade-offs have to be considered. Tailoring the neural network requires a balance: reducing its computational demands (that is, the number of neurons) without degrading classification performance. Special emphasis is placed on designing the size and complexity of a multilayer perceptron constructed by a growing method. The number of simple operations is evaluated to ensure that it remains below the maximum supported by the computational resources of the hearing aid.
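The interplay between network size and the hearing aid's operation budget can be illustrated with a small sketch. The helper below counts the simple operations (multiplies, additions, activation evaluations) of one forward pass through a single-hidden-layer perceptron and grows the hidden layer until the budget is reached. The layer sizes, the budget value, and the stopping rule are illustrative assumptions, not figures from the chapter, whose growing method also monitors classification performance rather than the budget alone.

```python
# Hypothetical sketch: estimate the per-frame operation count of a small
# MLP speech/non-speech classifier and size it against a hearing-aid budget.
# All numeric values here are assumptions for illustration.

def mlp_operations(n_inputs, n_hidden, n_outputs=1):
    """Count simple operations for one forward pass of a
    single-hidden-layer perceptron."""
    mults = n_inputs * n_hidden + n_hidden * n_outputs
    adds = mults + n_hidden + n_outputs   # accumulations plus bias additions
    activations = n_hidden + n_outputs    # nonlinearity evaluations
    return mults + adds + activations

BUDGET = 2_000  # assumed simple operations available per frame on the DSP

# Grow the hidden layer one neuron at a time and keep the largest size
# whose forward pass still fits within the budget.
largest = max(h for h in range(1, 200)
              if mlp_operations(n_inputs=9, n_hidden=h) <= BUDGET)
cost = mlp_operations(9, largest)  # operations used by that network
```

With 9 input features (e.g. a small set of spectral descriptors), the count grows linearly in the number of hidden neurons, which is why trimming neurons is the natural lever for meeting the computational constraint.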