Articulation Analysis in the Speech of Children with Cleft Lip and Palate

  • H. A. Carvajal-Castaño
  • Juan Rafael Orozco-Arroyave
Conference paper, part of the Lecture Notes in Computer Science book series (LNCS, volume 11896)

Abstract

Hypernasality is a speech deficit that affects children with cleft lip and palate (CLP). It results from insufficient control of the velum, which impairs the regulation of the amount of air passing from the oral to the nasal cavity while speaking. The automatic evaluation of hypernasality could help to monitor speech-language therapies and to design better-targeted exercises. Several articulation features have been used for the automatic detection of hypernasal speech. This paper evaluates the suitability of classical articulation features for the automatic classification of hypernasal and healthy speech recordings. Two databases are considered, with recordings collected under different acoustic conditions and with different audio settings. In addition to evaluating the proposed approach on each database separately, non-parametric statistical tests are performed to assess whether features from the two databases can be merged, with the aim of building more robust systems that can operate under different acoustic conditions. The results show that the proposed approach achieves high sensitivity, indicating that it is suitable for detecting hypernasal speech samples. We believe that promising results could be obtained with this approach in future experiments where the degree of hypernasality is evaluated.
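
To make the described pipeline concrete, the following is a minimal sketch in Python of one plausible realization: per-recording articulation features are compared across the two databases with a non-parametric test (here a Mann-Whitney U test, one common choice) before deciding which features to merge, and a support vector machine then separates hypernasal from healthy recordings. The specific features, test, and classifier are illustrative assumptions, since the abstract does not name them.

    # Illustrative sketch, not the authors' exact method: random matrices stand in
    # for per-recording articulation features (rows = recordings, columns = features).
    import numpy as np
    from scipy.stats import mannwhitneyu              # assumed non-parametric test
    from sklearn.svm import SVC                        # assumed classifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X_db1, y_db1 = rng.normal(size=(60, 12)), rng.integers(0, 2, 60)
    X_db2, y_db2 = rng.normal(size=(40, 12)), rng.integers(0, 2, 40)

    # Feature-wise test of whether the two databases share the same distribution.
    comparable = []
    for j in range(X_db1.shape[1]):
        _, p = mannwhitneyu(X_db1[:, j], X_db2[:, j])
        comparable.append(p > 0.05)      # no significant shift -> candidate for merging

    cols = np.flatnonzero(comparable)
    if cols.size == 0:                   # fallback: keep all features if none passes
        cols = np.arange(X_db1.shape[1])

    # Merge the comparable features from both databases and evaluate a classifier.
    X = np.vstack([X_db1[:, cols], X_db2[:, cols]])
    y = np.concatenate([y_db1, y_db2])
    scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
    print("cross-validated accuracy: %.2f" % scores.mean())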

Keywords

Cleft lip and palate · Hypernasality · Articulation measures · Classification

Acknowledgement

This work was partially funded by CODI at UdeA, grants # PRG2018-23541 and SOS18-2-01_ES84180137.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • H. A. Carvajal-Castaño (1, 2)
  • Juan Rafael Orozco-Arroyave (1, 3), corresponding author
  1. Research Group on Applied Telecommunications - GITA, Electronic Engineering and Telecommunications Department, Faculty of Engineering, Universidad de Antioquia UdeA, Medellín, Colombia
  2. Bioinstrumentation and Clinical Engineering Research Group - GIBIC, Bioengineering Department, Faculty of Engineering, Universidad de Antioquia UdeA, Medellín, Colombia
  3. Pattern Recognition Lab, University of Erlangen-Nürnberg, Erlangen, Germany
