Measuring the match between evaluators and evaluees: cognitive distances between panel members and research groups at the journal level
When research groups are evaluated by an expert panel, it is an open question how the match between the panel and the research groups can be determined. In this paper, we outline two quantitative approaches that determine the cognitive distance between evaluators and evaluees, based on the journals in which they have published. We use example data from four research evaluations carried out between 2009 and 2014 at the University of Antwerp.
While the barycenter approach is based on a journal map, the similarity-adapted publication vector (SAPV) approach is based on the full journal similarity matrix. Both approaches determine an entity’s profile based on the journals in which it has published. Subsequently, we determine the Euclidean distance between the barycenter or SAPV profiles of two entities as an indicator of the cognitive distance between them. Using a bootstrapping approach, we determine confidence intervals for these distances. As such, the present article constitutes a refinement of a previous proposal that operates on the level of Web of Science subject categories.
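To make the two profile constructions and the bootstrapped distances concrete, the following is a minimal Python sketch. It assumes a toy 2-D journal map and a toy journal similarity matrix, and it treats the barycenter as the publication-weighted average of journal map coordinates and the SAPV as the publication-weighted average of the rows of the similarity matrix; the article's exact normalisation and resampling scheme may differ, and the names `barycenter`, `sapv`, and `bootstrap_distance_ci` as well as the data are illustrative only.

```python
import numpy as np

def barycenter(pub_counts, journal_coords):
    """Weighted average of 2-D journal-map coordinates; the weights are the
    entity's publication counts per journal, normalised to sum to one."""
    w = pub_counts / pub_counts.sum()
    return w @ journal_coords                      # shape (2,)

def sapv(pub_counts, sim_matrix):
    """Publication-weighted average of the rows of the full journal
    similarity matrix; yields one component per journal."""
    w = pub_counts / pub_counts.sum()
    return w @ sim_matrix                          # shape (n_journals,)

def bootstrap_distance_ci(pubs_a, pubs_b, profile, n_journals,
                          n_boot=1000, alpha=0.05, seed=0):
    """Resample each entity's publications with replacement, recompute both
    profiles, and return a (1 - alpha) percentile interval for the Euclidean
    distance between them."""
    rng = np.random.default_rng(seed)
    dists = np.empty(n_boot)
    for i in range(n_boot):
        counts_a = np.bincount(rng.choice(pubs_a, size=len(pubs_a)),
                               minlength=n_journals)
        counts_b = np.bincount(rng.choice(pubs_b, size=len(pubs_b)),
                               minlength=n_journals)
        dists[i] = np.linalg.norm(profile(counts_a) - profile(counts_b))
    return np.quantile(dists, [alpha / 2, 1 - alpha / 2])

# Toy example: five journals, a 2-D map and a similarity matrix (illustrative data).
coords = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 1.1], [2.0, 1.5], [1.3, 0.7]])
sim = 0.5 * np.eye(5) + 0.5                        # placeholder similarity values
panel_pubs = np.array([0, 0, 1, 2, 2, 2, 4])       # journal index per publication
group_pubs = np.array([1, 2, 3, 3, 4, 4])

ci_bary = bootstrap_distance_ci(panel_pubs, group_pubs,
                                lambda c: barycenter(c, coords), n_journals=5)
ci_sapv = bootstrap_distance_ci(panel_pubs, group_pubs,
                                lambda c: sapv(c, sim), n_journals=5)
print("barycenter distance CI:", ci_bary)
print("SAPV distance CI:", ci_sapv)
```

Note that the same `profile` callable is swapped between the barycenter and the SAPV construction, so the Euclidean distance and the bootstrap code stay identical for both approaches.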
Keywords: Research evaluation · Barycenter · Similarity-adapted publication vector · Journal overlay map · Matching research expertise · Similarity matrix
The authors thank Ronald Rousseau for stimulating and insightful suggestions related to the topic of the paper and Thomson Reuters for making the journal citation data available. This investigation has been made possible by the financial support of the Flemish government to ECOOM, among others. The opinions in the paper are the authors’ and not necessarily those of the government. We thank the reviewers for their constructive remarks.