
Information Retrieval, 12:615

Overview of the Reliable Information Access Workshop

  • Donna Harman
  • Chris Buckley

Abstract

The Reliable Information Access (RIA) Workshop was held in the summer of 2003 with the goal of improving our understanding of information retrieval systems, in particular with regard to the variability of retrieval performance across topics. The workshop ran a massive cross-system failure analysis on 45 of the TREC topics and also performed cross-system experiments on pseudo-relevance feedback. This paper presents an overview of that workshop, along with some preliminary conclusions from these experiments. Although the workshop was held 6 years ago, the issue of improving system performance across all topics is still critical to the field, and this paper, along with the others in this issue, is among the first widely published full papers from the workshop.
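For readers unfamiliar with the technique studied in the workshop's cross-system experiments, the following is a minimal sketch of pseudo-relevance (blind) feedback: the top-ranked documents from an initial retrieval run are assumed relevant, and their most frequent new terms are added to the query. The scoring function, corpus representation, and parameter values here are illustrative assumptions, not the actual methods of any RIA participant system.

```python
from collections import Counter

def score(query_terms, doc_terms):
    # Simple term-overlap score: total occurrences of query terms in the document.
    counts = Counter(doc_terms)
    return sum(counts[t] for t in query_terms)

def expand_query(query, docs, top_docs=2, top_terms=2):
    """Pseudo-relevance feedback: assume the top-ranked documents are
    relevant and append their most frequent unseen terms to the query."""
    # Initial retrieval: rank all documents against the original query.
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    # Pool candidate expansion terms from the top-ranked documents.
    pool = Counter()
    for doc in ranked[:top_docs]:
        pool.update(t for t in doc if t not in query)
    expansion = [t for t, _ in pool.most_common(top_terms)]
    return query + expansion
```

Real systems at the workshop varied exactly these choices (how many feedback documents, how many expansion terms, how terms are weighted), which is what the cross-system feedback experiments compared.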

Keywords

Information retrieval · Relevance feedback · Failure analysis

Notes

Acknowledgements

This research was funded in part by the Advanced Research and Development Activity in Information Technology (ARDA), a U.S. Government entity which sponsors and promotes research of import to the Intelligence Community which includes but is not limited to the CIA, DIA, NSA, NIMA and NRO.


Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. National Institute of Standards and Technology, Gaithersburg, USA
  2. Sabir Research Inc., Gaithersburg, USA
