
Automatic Scoring on English Passage Reading Quality

  • Junbo Zhang
  • Fuping Pan
  • Yongyong Yan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7332)

Abstract

In this paper, automatic computer scoring of English passage oral reading was studied. We analyzed the recorded readings with speech recognition technology, extracted a series of features describing pronunciation and fluency, and then mapped these features to scores. In a test of English passage oral reading by 4000 middle school students, the average scoring difference between the machine and a human teacher was 0.66, while the scoring difference between human teachers was 0.57. The experimental results show that the system can be used in practice.
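The abstract does not disclose the actual feature-to-score mapping or feature set. The sketch below is a minimal illustration under assumed conditions: hypothetical pronunciation and fluency features are mapped to a reading score with a simple linear regression, and the machine-human agreement is measured as the mean absolute scoring difference, the kind of statistic reported above. The feature names, the score scale, and the regression model are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: the paper does not publish its scoring model.
# Assumption: each reading is described by a pronunciation feature (e.g. a
# GOP-style confidence) and a fluency feature (e.g. speaking rate), and these
# are mapped to a teacher-style score with a linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: [pronunciation_feature, fluency_feature] per reader
X_train = np.array([[0.82, 3.1], [0.55, 2.4], [0.91, 3.6], [0.40, 2.0]])
y_train = np.array([4.0, 2.5, 4.5, 2.0])          # teacher scores (assumed 0-5 scale)

model = LinearRegression().fit(X_train, y_train)   # feature-to-score mapping

# Score new readings and compare with teacher scores:
X_test = np.array([[0.75, 2.9], [0.60, 2.5]])
teacher_scores = np.array([3.5, 3.0])
machine_scores = model.predict(X_test)

# Average machine-human scoring difference (mean absolute difference),
# analogous to the 0.66 figure quoted in the abstract.
mean_abs_diff = np.mean(np.abs(machine_scores - teacher_scores))
print(f"machine-human mean absolute scoring difference: {mean_abs_diff:.2f}")
```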

Keywords

Automatic Scoring; Pronunciation Quality

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Junbo Zhang (1)
  • Fuping Pan (1)
  • Yongyong Yan (1)
  1. The Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
