UNED@CL-SR CLEF 2005: Mixing Different Strategies to Retrieve Automatic Speech Transcriptions

  • Fernando López-Ostenero
  • Víctor Peinado
  • Valentín Sama
  • Felisa Verdejo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4022)

Abstract

In this paper we describe UNED’s participation in the CLEF CL-SR 2005 track. First, we explain the strategies we tried for cleaning up the automatic transcriptions. Then, we describe the 84 runs we performed, combining these cleaning strategies with named entity recognition and different pseudo-relevance feedback approaches, in order to study the influence of each method on the retrieval process in both monolingual and cross-lingual environments. We observed that the influence of named entity recognition was greater in the cross-lingual environment, where MAP scores doubled when we took advantage of an entity recognizer. The best pseudo-relevance feedback approach was the one using manual keywords. The effects of the different cleaning strategies were very similar, except for character 3-grams, which obtained poor scores compared with the other approaches.
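One of the cleaning strategies compared in the abstract is character 3-gram indexing of the noisy transcriptions. As a rough illustration only (the function name and whitespace normalization below are our own; the paper does not publish its code), character 3-grams can be extracted like this:

```python
def char_ngrams(text: str, n: int = 3) -> list[str]:
    """Split a transcription into overlapping character n-grams.

    Whitespace is collapsed to single spaces so that n-grams may span
    word boundaries, a common choice when indexing noisy automatic
    speech transcriptions.
    """
    normalized = " ".join(text.lower().split())
    return [normalized[i:i + n] for i in range(len(normalized) - n + 1)]

print(char_ngrams("speech retrieval"))
# ['spe', 'pee', 'eec', 'ech', 'ch ', 'h r', ' re', 'ret', 'etr',
#  'tri', 'rie', 'iev', 'eva', 'val']
```

Indexing such overlapping fragments instead of full words can tolerate transcription errors at the cost of precision, which is consistent with the poorer scores the authors report for this strategy.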

Keywords

Relevance Feedback · Entity Recognition · Conversational Speech · Full Word · Cleaning Strategy

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Fernando López-Ostenero¹
  • Víctor Peinado¹
  • Valentín Sama¹
  • Felisa Verdejo¹
  1. NLP Group, ETSI Informática, UNED, Madrid, Spain