Thomson Legal and Regulatory Experiments at CLEF-2005

  • Isabelle Moulinier
  • Ken Williams
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4022)


For the 2005 Cross-Language Evaluation Forum, Thomson Legal and Regulatory participated in the Hungarian, French, and Portuguese monolingual search tasks as well as French-to-Portuguese bilingual retrieval. Our Hungarian participation focused on comparing the effectiveness of different approaches to morphological stemming. Our French and Portuguese monolingual efforts focused on different approaches to Pseudo-Relevance Feedback (PRF), in particular the evaluation of a scheme for selectively applying PRF only in the cases most likely to produce positive results. Our French-to-Portuguese bilingual effort applies our previous work in query translation to a new pair of languages and uses corpus-based language modeling to support term-by-term translation. We compare our approach to an off-the-shelf machine translation system that translates the query as a whole and find that the latter approach performs better. All experiments were performed using our proprietary search engine. We remain encouraged by the overall success of our efforts, with our main submissions for each of the four tasks performing above the overall CLEF median. However, none of the specific enhancement techniques we attempted in this year's forum showed significant improvements over our initial results.
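The selective-PRF idea described above can be illustrated with a small sketch. This is not the authors' implementation (which runs inside their proprietary engine); it is a minimal, self-contained example assuming a clarity-style query-difficulty score (KL divergence between the retrieved set's language model and the whole collection's) as the gating signal, with hypothetical function names and a hypothetical threshold:

```python
import math
from collections import Counter

def top_terms(docs, k=5):
    """Most frequent terms across the top-ranked feedback documents."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return [t for t, _ in counts.most_common(k)]

def query_clarity(retrieved_docs, collection_docs):
    """Clarity-style score: KL divergence between the unigram language
    model of the retrieved set and that of the whole collection.
    Higher values suggest a focused result set where PRF is more
    likely to help; near-zero values suggest a diffuse, risky one."""
    ret_counts, coll_counts = Counter(), Counter()
    for d in retrieved_docs:
        ret_counts.update(d.split())
    for d in collection_docs:
        coll_counts.update(d.split())
    ret_total = sum(ret_counts.values())
    coll_total = sum(coll_counts.values())
    score = 0.0
    for term, n in ret_counts.items():
        p_ret = n / ret_total
        p_coll = coll_counts[term] / coll_total  # retrieved docs are in the collection
        score += p_ret * math.log(p_ret / p_coll)
    return score

def expand_query(query, retrieved_docs, collection_docs, threshold=0.5, k=3):
    """Apply pseudo-relevance feedback only when the clarity score
    clears the (hypothetical) threshold; otherwise leave the query as-is."""
    if query_clarity(retrieved_docs, collection_docs) < threshold:
        return query  # feedback judged too risky for this query
    expansion = [t for t in top_terms(retrieved_docs, k)
                 if t not in query.split()]
    return " ".join(query.split() + expansion)
```

A focused query whose top documents share vocabulary gets expanded; a query whose top documents look like the collection at large is left untouched, which is the spirit of applying PRF "only in the cases most likely to produce positive results".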







Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Isabelle Moulinier
  • Ken Williams
  1. Thomson Legal and Regulatory, Eagan, USA
