Adopting Systematic Evaluation Benchmarks in Operational Settings
Evaluation of information systems in commercial and industrial settings differs from academic evaluation of methodology in important ways. These differences stem from the differing organisational priorities of practice and research. Some of these priorities can be adjusted, while others must simply be taken into account, in order to integrate evaluation into an operational development pipeline.