Penalty Functions for Evaluation Measures of Unsegmented Speech Retrieval

  • Petra Galuščáková
  • Pavel Pecina
  • Jan Hajič
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7488)

Abstract

This paper deals with the evaluation of information retrieval from unsegmented speech. We focus on Mean Generalized Average Precision, an evaluation measure widely used for unsegmented speech retrieval. This measure is designed to allow a certain tolerance in matching retrieval results (starting points of relevant segments) against a gold-standard relevance assessment. It employs a Penalty Function, which scores non-exact matches in the retrieval results according to their distance from the beginnings of their nearest true relevant segments. However, the choice of the Penalty Function is usually ad hoc and does not necessarily reflect users’ perception of speech retrieval quality. We perform a lab test studying the satisfaction of users of a speech retrieval system in order to empirically estimate the optimal shape of the Penalty Function.
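To make the role of the Penalty Function concrete, the following sketch shows how such a function can assign partial credit to a retrieved start point and how that credit can feed into a Generalized Average Precision score. The linear decay and the 15-second tolerance window are illustrative assumptions for this sketch, not values taken from the paper; the paper's point is precisely that this shape should be estimated empirically rather than chosen ad hoc.

```python
def penalty(distance_s: float, window_s: float = 15.0) -> float:
    """Partial credit in [0, 1] for a retrieved start point that lies
    `distance_s` seconds from the nearest true relevant segment start.
    Linear shape and window size are illustrative assumptions."""
    if distance_s >= window_s:
        return 0.0                       # too far from any relevant start: a miss
    return 1.0 - distance_s / window_s   # linear decay toward the window edge


def generalized_average_precision(retrieved, relevant_starts, window_s=15.0):
    """Average-precision-style score in which each retrieved start point
    earns fractional credit from the Penalty Function instead of a
    binary relevant/non-relevant judgment (a simplified sketch)."""
    credit_sum = 0.0
    gap = 0.0
    for rank, start in enumerate(retrieved, start=1):
        nearest = min(abs(start - r) for r in relevant_starts)
        credit = penalty(nearest, window_s)
        credit_sum += credit
        gap += credit * (credit_sum / rank)  # credit-weighted precision at this rank
    return gap / len(relevant_starts)


# Example: the first result hits a relevant start exactly, the second
# falls outside the tolerance window of both relevant starts.
score = generalized_average_precision([0.0, 100.0], [0.0, 30.0])
```
Changing the shape of `penalty` (e.g. to a steeper or gentler decay) changes how strongly near-misses are rewarded, which is exactly the degree of freedom the user study is meant to pin down.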

Keywords

Penalty Function · Automatic Speech Recognition · Mean Average Precision · Retrieval Result · Information Retrieval System

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Petra Galuščáková (1)
  • Pavel Pecina (1)
  • Jan Hajič (1)
  1. Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University in Prague, Czech Republic