
Correlation-Based Similarity Between Signals for Speaker Verification with Limited Amount of Speech Data

  • N. Dhananjaya
  • B. Yegnanarayana
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4105)

Abstract

In this paper, we present a method for speaker verification with a limited amount (2 to 3 seconds) of speech data. Under this constraint, the use of traditional vocal tract features in conjunction with statistical models becomes difficult. An estimate of the glottal flow derivative signal, which represents the excitation source information, is used for comparing two signals. Speaker verification is performed by computing normalized correlation coefficient values between signal patterns chosen around high SNR regions (corresponding to the instants of significant excitation), without having to extract any further parameters. The high SNR regions are detected by locating peaks in the Hilbert envelope of the LP residual signal. Speaker verification studies are conducted on clean microphone speech (TIMIT) as well as noisy telephone speech (NTIMIT) to illustrate the effectiveness of the proposed method.
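
The procedure summarized above can be sketched in a few steps: estimate the excitation component of the speech signal as the LP residual, take the Hilbert envelope of the residual and pick its peaks as the high SNR regions (near the instants of significant excitation), extract short residual patterns around those peaks, and score two utterances by the normalized correlation between their patterns. The Python sketch below illustrates this pipeline; the LP order, window length, peak-picking threshold and the scoring rule (averaging the best correlation of each test pattern against the reference patterns) are illustrative assumptions, not the authors' exact settings.

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import hilbert, find_peaks

def lp_residual(x, order=12):
    # LP residual via autocorrelation LPC: predict x[n] from the previous
    # 'order' samples and keep the prediction error (excitation estimate).
    x = x - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])            # LP coefficients
    pred = np.convolve(np.concatenate(([0.0], a)), x)[: len(x)]
    return x - pred

def excitation_patterns(x, order=12, half_win=40, min_dist=50):
    # Patterns around high SNR regions: peaks of the Hilbert envelope of the
    # LP residual, i.e. near the instants of significant excitation.
    # half_win, min_dist and the 0.3 height factor are assumed values.
    res = lp_residual(x, order)
    env = np.abs(hilbert(res))                             # Hilbert envelope
    peaks, _ = find_peaks(env, distance=min_dist, height=0.3 * env.max())
    pats = []
    for p in peaks:
        if half_win <= p < len(res) - half_win:
            seg = res[p - half_win : p + half_win]
            pats.append(seg / (np.linalg.norm(seg) + 1e-12))
    return np.array(pats)

def similarity(x_test, x_ref, **kw):
    # One possible scoring rule: for each test pattern take its best
    # normalized correlation with any reference pattern, then average.
    P, Q = excitation_patterns(x_test, **kw), excitation_patterns(x_ref, **kw)
    if len(P) == 0 or len(Q) == 0:
        return 0.0
    corr = np.abs(P @ Q.T)   # unit-norm patterns, so dot products are
                             # normalized correlation coefficients
    return float(corr.max(axis=1).mean())

# Example usage (signals as 1-D NumPy arrays at a common sampling rate):
#   score = similarity(test_utterance, claimant_template)
#   accept the claim if score exceeds a threshold tuned for the target error rate.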

Keywords

Gaussian Mixture Model · Vocal Tract · Speech Data · Equal Error Rate · Speaker Recognition

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • N. Dhananjaya (1)
  • B. Yegnanarayana (1)
  1. Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai, India
