Automatic Sound Classification for Improving Speech Intelligibility in Hearing Aids Using a Layered Structure

  • Enrique Alexandre
  • Lucas Cuadra
  • Lorena Álvarez
  • Manuel Rosa-Zurera
  • Francisco López-Ferreras
Conference paper

DOI: 10.1007/11875581_37

Part of the Lecture Notes in Computer Science book series (LNCS, volume 4224)
Cite this paper as:
Alexandre E., Cuadra L., Álvarez L., Rosa-Zurera M., López-Ferreras F. (2006) Automatic Sound Classification for Improving Speech Intelligibility in Hearing Aids Using a Layered Structure. In: Corchado E., Yin H., Botti V., Fyfe C. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2006. IDEAL 2006. Lecture Notes in Computer Science, vol 4224. Springer, Berlin, Heidelberg

Abstract

This paper presents some of our first results in the development of an automatic sound classification algorithm for hearing aids. The goal is to classify the input audio signal into four categories: speech in quiet, speech in noise, stationary noise, and non-stationary noise. To make the system more robust, a divide-and-conquer strategy is proposed, resulting in a layered structure. The classification algorithms considered are based on the Fisher linear discriminant and on neural networks. Results are given demonstrating the good behavior of the system compared with a classical approach using a single four-class classifier based on neural networks.
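The layered, divide-and-conquer structure described in the abstract can be sketched as a cascade of binary decisions rather than a single four-way classifier. The sketch below is an illustration only: the specific split order (speech vs. noise first) and the toy decision functions are assumptions, not details taken from the paper, where each binary stage would be a Fisher linear discriminant or a small neural network.

```python
# Hypothetical sketch of a layered sound classifier: a cascade of
# binary decisions producing one of the four categories named in the
# abstract. The split order and decider functions are assumptions.

def classify_layered(features, is_speech, is_quiet, is_stationary):
    """Route a feature vector through a cascade of binary classifiers.

    Each decider is a function (feature dict -> bool); the layered
    structure combines them into one of four final labels.
    """
    if is_speech(features):
        # Layer 2a: distinguish clean speech from speech in noise.
        return "speech in quiet" if is_quiet(features) else "speech in noise"
    # Layer 2b: distinguish stationary from non-stationary noise.
    return "stationary noise" if is_stationary(features) else "non-stationary noise"


# Toy threshold deciders standing in for trained binary classifiers.
example = {"snr_db": 25.0, "speech_likelihood": 0.9, "envelope_variance": 0.1}
label = classify_layered(
    example,
    is_speech=lambda f: f["speech_likelihood"] > 0.5,
    is_quiet=lambda f: f["snr_db"] > 15.0,
    is_stationary=lambda f: f["envelope_variance"] < 0.2,
)
```

A practical advantage of this structure is that each binary stage can use its own feature subset and its own, simpler, decision boundary, which is one motivation for divide-and-conquer designs in low-resource devices such as hearing aids.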


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Enrique Alexandre (1)
  • Lucas Cuadra (1)
  • Lorena Álvarez (1)
  • Manuel Rosa-Zurera (1)
  • Francisco López-Ferreras (1)

  1. Dept. de Teoría de la Señal y Comunicaciones, Escuela Politécnica Superior, Universidad de Alcalá, Alcalá de Henares, Spain
