iCLEF 2001 at Maryland: Comparing Term-for-Term Gloss and MT

  • Jianqiang Wang
  • Douglas W. Oard
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2406)

Abstract

For the first interactive Cross-Language Evaluation Forum, the Maryland team focused on a comparison of term-for-term gloss translation with full machine translation for the document selection task. The results show that (1) searchers can make relevance judgments with translations from either approach, and (2) the machine translation system achieved better effectiveness than the gloss translation strategy that we tried, although the difference is not statistically significant. We also observed that searchers used the “somewhat relevant” category differently when presented with gloss translations than with machine translations, and we suggest some reasons for that difference. Finally, the results suggest that the F measure used in this evaluation is better suited to topics with many known relevant documents than to those with few.
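
The abstract does not reproduce the measure, but the F measure used for the iCLEF document selection task is presumably van Rijsbergen's unbalanced F, computed over each searcher's relevance judgments for a topic; in the sketch below, P and R denote the precision and recall of those judgments, and the weight α actually used in the evaluation is not given in this abstract:

    F_\alpha = \frac{1}{\dfrac{\alpha}{P} + \dfrac{1 - \alpha}{R}}

Written this way, the sensitivity to the size of the known relevant set is easy to see: on a topic with only two known relevant documents, missing one drops R from 1.0 to 0.5 and can swing F sharply, whereas on a topic with twenty relevant documents the same single miss moves R by only 0.05. This is consistent with the abstract's observation that the measure behaves better on topics with many known relevant documents.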

Keywords

Relevant Document, Query Term, Broad Topic, Relevance Judgment, Judgment Type
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Jianqiang Wang (1)
  • Douglas W. Oard (1)

  1. Human Computer Interaction Laboratory, College of Information Studies and Institute for Advanced Computer Studies, University of Maryland, College Park, USA
