Abstract
For the first interactive Cross-Language Evaluation Forum, the Maryland team focused on comparison of term-for-term gloss translation with full machine translation for the document selection task. The results show that (1) searchers are able to make relevance judgments with translations from either approach, and (2) the machine translation system achieved better effectiveness than the gloss translation strategy that we tried, although the difference is not statistically significant. It was noted that the “somewhat relevant” category was used differently by searchers presented with gloss translations than with machine translations, and some reasons for that difference are suggested. Finally, the results suggest that the F measure used in this evaluation is better suited for use with topics that have many known relevant documents than those with few.
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Wang, J., Oard, D.W. (2002). iCLEF 2001 at Maryland: Comparing Term-for-Term Gloss and MT. In: Peters, C., Braschler, M., Gonzalo, J., Kluck, M. (eds) Evaluation of Cross-Language Information Retrieval Systems. CLEF 2001. Lecture Notes in Computer Science, vol 2406. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45691-0_33
Print ISBN: 978-3-540-44042-0
Online ISBN: 978-3-540-45691-9