Thomson Legal and Regulatory Experiments at CLEF-2005
For the 2005 Cross-Language Evaluation Forum, Thomson Legal and Regulatory participated in the Hungarian, French, and Portuguese monolingual search tasks as well as French-to-Portuguese bilingual retrieval. Our Hungarian participation focused on comparing the effectiveness of different approaches to morphological stemming. Our French and Portuguese monolingual efforts focused on different approaches to pseudo-relevance feedback (PRF), in particular the evaluation of a scheme for selectively applying PRF only in the cases most likely to produce positive results. Our French-to-Portuguese bilingual effort applied our previous work in query translation to a new pair of languages, using corpus-based language modeling to support term-by-term translation. We compared this approach to an off-the-shelf machine translation system that translates the query as a whole and found the latter to be more effective. All experiments were performed using our proprietary search engine. We remain encouraged by the overall success of our efforts: our main submission for each of the four tasks performed above the overall CLEF median. However, none of the specific enhancement techniques we attempted in this year's forum showed significant improvement over our initial results.
Keywords: Machine Translation, Prediction Rule, Statistical Machine Translation, Query Formulation, Parallel Corpus
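The term-by-term translation approach described in the abstract can be illustrated with a minimal sketch: each source-language query term is mapped to its most probable target-language candidates using a probabilistic translation lexicon estimated from a parallel corpus. The lexicon entries, the `top_k` cutoff, and the function name below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of term-by-term query translation via a probabilistic
# translation lexicon. The lexicon contents and parameters here are
# toy assumptions for illustration only.

def translate_query(query_terms, lexicon, top_k=2):
    """For each source term, keep the top_k target candidates by
    translation probability; untranslatable terms pass through."""
    target_terms = []
    for term in query_terms:
        candidates = lexicon.get(term)
        if not candidates:
            # Out-of-lexicon terms (e.g. proper nouns) are kept as-is.
            target_terms.append(term)
            continue
        ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
        target_terms.extend(t for t, _ in ranked[:top_k])
    return target_terms

# Toy French-to-Portuguese lexicon with P(target | source) estimates.
lexicon = {
    "droit": {"direito": 0.7, "lei": 0.2},
    "européen": {"europeu": 0.9},
}

print(translate_query(["droit", "européen", "CLEF"], lexicon))
# -> ['direito', 'lei', 'europeu', 'CLEF']
```

Keeping more than one candidate per term trades translation precision for recall in the target-language query, which is one reason whole-query machine translation, as the abstract notes, can outperform a term-by-term scheme.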