A Multiple Criteria Approach for Information Retrieval

  • Mohamed Farah
  • Daniel Vanderpooten
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4209)

Abstract

Research in Information Retrieval shows that retrieval performance improves when multiple sources of evidence are combined to produce a ranking of documents. Most current approaches assess document relevance by computing a single score that aggregates the values of several attributes or criteria. We propose a multiple criteria framework whose aggregation mechanism is based on decision rules identifying positive and negative reasons for judging whether one document should be ranked ahead of another. The resulting procedure also handles imprecision in criteria design. Experimental results are reported.
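
The decision-rule mechanism sketched in the abstract compares documents pairwise, collecting positive and negative reasons across the criteria before concluding that one document should be ranked ahead of another, in the spirit of outranking methods. The Python sketch below illustrates that idea only: the criterion names (bm25, pagerank, anchor_text), the indifference/preference thresholds, and the final decision rule are all assumptions made for the example, not the procedure defined in the paper.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    q: float  # indifference threshold: differences this small are treated as noise
    p: float  # preference threshold: differences this large are clear evidence

def outranks(a: dict[str, float], b: dict[str, float],
             criteria: list[Criterion]) -> bool:
    """Decide whether document A should be ranked ahead of document B
    by counting positive and negative reasons across the criteria."""
    positive = negative = 0.0
    for c in criteria:
        diff = a[c.name] - b[c.name]
        if diff > c.p:
            positive += 1.0      # clear positive reason for A
        elif diff > c.q:
            positive += 0.5      # weak positive reason, inside the imprecision band
        elif diff < -c.p:
            negative += 1.0      # clear negative reason: B is clearly better here
        elif diff < -c.q:
            negative += 0.5      # weak negative reason
        # |diff| <= q: indifference, the scores are too close to count either way
    # Illustrative decision rule (an assumption, not the paper's rule):
    # at least one clear positive reason and no negative reason at all.
    return positive >= 1.0 and negative == 0.0

# Hypothetical criteria and scores, purely for demonstration.
criteria = [Criterion("bm25", q=0.05, p=0.15),
            Criterion("pagerank", q=0.01, p=0.05),
            Criterion("anchor_text", q=0.10, p=0.20)]
d1 = {"bm25": 0.82, "pagerank": 0.10, "anchor_text": 0.55}
d2 = {"bm25": 0.60, "pagerank": 0.09, "anchor_text": 0.30}
print(outranks(d1, d2, criteria))  # True: two clear advantages, no objections
```

In this sketch, the indifference threshold q is what absorbs imprecision in criteria design: score differences smaller than q count as ties rather than evidence, so noisy criterion values cannot by themselves flip the relative ranking of two documents.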

Keywords

Information Retrieval · Relevance · Multiple Criteria

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Mohamed Farah (LAMSADE, Université Paris-Dauphine, France)
  • Daniel Vanderpooten (LAMSADE, Université Paris-Dauphine, France)