
Semantic representation and enrichment of information retrieval experimental data

Published in: International Journal on Digital Libraries

Abstract

Experimental evaluation carried out in international large-scale campaigns is a fundamental pillar of the scientific and technological advancement of information retrieval (IR) systems. Such evaluation activities produce a large quantity of scientific and experimental data, which are the foundation for all subsequent scientific production and for the development of new systems. In this work, we discuss how to semantically annotate and interlink these data, with the goal of enhancing their interpretation, sharing, and reuse. We describe the underlying evaluation workflow and propose a Resource Description Framework (RDF) model for its relevant parts. We use expertise retrieval as a case study to demonstrate the benefits of our semantic representation approach. We employ this model as a means for exposing experimental data as linked open data (LOD) on the Web and as a basis for enriching and automatically connecting these data with expertise topics and expert profiles. In this context, we propose a topic-centric approach for expert search, addressing the extraction of expertise topics, their semantic grounding in the LOD cloud, and their connection to IR experimental data. Several methods for expert profiling and expert finding are analysed and evaluated. Our results show that it is possible to construct expert profiles starting from automatically extracted expertise topics and that topic-centric approaches outperform state-of-the-art language modelling approaches for expert finding.
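The language modelling baseline the abstract compares against can be sketched roughly as follows. This is a toy, stdlib-only illustration of a document-centric ("Model 2"-style) expert finding scorer with Dirichlet smoothing, not the authors' actual implementation; all data, names, and parameter values are made up for the example.

```python
from collections import Counter

def doc_lm(doc_tokens, collection_counts, collection_len, mu=50.0):
    """Dirichlet-smoothed unigram language model p(t|d)."""
    counts = Counter(doc_tokens)
    n = len(doc_tokens)
    def p(term):
        cf = collection_counts.get(term, 0) / collection_len
        return (counts.get(term, 0) + mu * cf) / (n + mu)
    return p

def expert_score(query_tokens, candidate_docs, collection):
    """Document-centric expert finding score: sum over the candidate's
    documents of the query likelihood p(q|d), weighted by the
    document-candidate association strength."""
    collection_counts = Counter(t for d in collection for t in d)
    collection_len = sum(collection_counts.values())
    score = 0.0
    for doc_tokens, assoc_weight in candidate_docs:
        p = doc_lm(doc_tokens, collection_counts, collection_len)
        p_query = 1.0
        for term in query_tokens:
            p_query *= p(term)  # query likelihood under this doc's model
        score += assoc_weight * p_query
    return score

# Toy usage: two candidates, each associated with one authored document.
docs = [["semantic", "web", "linked", "data"],
        ["expert", "finding", "language", "model"]]
a = expert_score(["linked", "data"], [(docs[0], 1.0)], docs)
b = expert_score(["linked", "data"], [(docs[1], 1.0)], docs)
assert a > b  # the candidate who wrote about linked data ranks higher
```

Smoothing keeps the score nonzero for candidates whose documents miss a query term, which is why candidate B still receives a small positive score here.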


Notes

  1. http://www.europeana.eu/

  2. http://id.loc.gov/

  3. http://lod.springer.com/

  4. http://data.elsevier.com/

  5. http://data.nature.com/

  6. http://trec.nist.gov/

  7. http://www.clef-initiative.eu/

  8. http://research.nii.ac.jp/ntcir/

  9. https://lucene.apache.org/openrelevance/

  10. http://wice.csse.unimelb.edu.au:15000/evalweb/ireval/

  11. http://www.cs.waikato.ac.nz/ml/weka/

  12. Blank nodes do not have identifiers in the RDF abstract syntax. The blank node identifiers have local scope and are purely an artefact of the serialization. Refer to http://www.w3.org/TR/rdf11-concepts/#section-blank-nodes for more details on blank nodes.
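To make the note concrete, here is a hypothetical Turtle fragment using a blank node; the `ex:` vocabulary is illustrative only, not the paper's actual model.

```turtle
@prefix ex: <http://example.org/ns#> .

# The bracketed term is a blank node: it has no IRI of its own, and any
# _:b0-style label a serializer prints for it is local to this file.
ex:experiment42 ex:reportsMeasure [
    ex:metric "MAP" ;
    ex:value  0.314
] .
```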

  13. http://www.dbpedia.org/

  14. http://dblp.l3s.de/

  15. http://www.w3.org/TR/WD-rdf-syntax-971002/

  16. http://trec.nist.gov/trec_eval/

  17. http://www.w3.org/TR/turtle/

  18. http://www.w3.org/TeamSubmission/n3/

  19. http://www.w3.org/TR/void/

  20. Linked Data: http://linkeddata.org

  21. DBpedia: http://dbpedia.org/

  22. http://dblp.uni-trier.de/

  23. Available at http://arnetminer.org/DBLP_Citation, last accessed July 9, 2013.

  24. A preliminary test on just the publications from the core venues showed that adding quotes around the publication title decreased recall from 80.3% to 70.86%.

  25. http://toinebogers.com/?page_id=660

  26. An approach based on a Semantic Web search engine that uses key-phrase search to find structured data was also considered, restricting the search to the DBpedia domain. The results were disappointing because only a limited number of retrieved results can be analysed, and the relevant DBpedia concept often does not appear among the top results.

  27. http://www.freebase.com/



Acknowledgments

This work has been funded in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 (INSIGHT). The authors would like to thank Barry Coughlan for his suggestions and collaboration.


Corresponding author

Correspondence to Gianmaria Silvello.


About this article


Cite this article

Silvello, G., Bordea, G., Ferro, N. et al. Semantic representation and enrichment of information retrieval experimental data. Int J Digit Libr 18, 145–172 (2017). https://doi.org/10.1007/s00799-016-0172-8

