Scientometrics, Volume 109, Issue 3, pp 1639–1663

Measuring the match between evaluators and evaluees: cognitive distances between panel members and research groups at the journal level

  • A. I. M. Jakaria Rahman
  • Raf Guns
  • Loet Leydesdorff
  • Tim C. E. Engels

Abstract

When research groups are evaluated by an expert panel, it is an open question how to determine the match between the panel and the research groups. In this paper, we outline two quantitative approaches that determine the cognitive distance between evaluators and evaluees on the basis of the journals in which they have published. We use example data from four research evaluations carried out between 2009 and 2014 at the University of Antwerp.

While the barycenter approach is based on a journal map, the similarity-adapted publication vector (SAPV) approach is based on the full journal similarity matrix. Both approaches determine an entity’s profile based on the journals in which it has published. Subsequently, we determine the Euclidean distance between the barycenter or SAPV profiles of two entities as an indicator of the cognitive distance between them. Using a bootstrapping approach, we determine confidence intervals for these distances. As such, the present article constitutes a refinement of a previous proposal that operates on the level of Web of Science subject categories.
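As a rough illustration of the workflow described above, the sketch below computes a barycenter profile on a two-dimensional journal map, a similarity-adapted publication vector (SAPV) from a journal similarity matrix, the Euclidean distance between two profiles, and a percentile-bootstrap confidence interval for that distance. The function names, the exact SAPV normalisation, and the choice of individual publications as the resampling unit are assumptions made for illustration; they are not taken verbatim from the article.

```python
import numpy as np

def barycenter(pub_counts, coords):
    """Barycenter profile: publication-weighted average of journal map coordinates.

    pub_counts: array (n_journals,) with the entity's publications per journal
    coords:     array (n_journals, 2) with each journal's (x, y) map position
    """
    w = pub_counts / pub_counts.sum()
    return w @ coords                      # shape (2,)

def sapv(pub_counts, sim):
    """Similarity-adapted publication vector (illustrative normalisation).

    sim: (n_journals, n_journals) journal similarity matrix; the publication
    weights are spread over similar journals and renormalised to sum to 1.
    """
    w = pub_counts / pub_counts.sum()
    v = w @ sim
    return v / v.sum()

def distance(profile_a, profile_b):
    """Euclidean distance between two profiles (barycenters or SAPVs)."""
    return np.linalg.norm(profile_a - profile_b)

def bootstrap_ci(journal_ids_a, journal_ids_b, coords, n_boot=1000, alpha=0.05, rng=None):
    """Percentile bootstrap CI for the barycenter distance between two entities.

    journal_ids_*: one journal index per publication (publications are the
    assumed resampling unit).
    """
    rng = np.random.default_rng(rng)
    n_journals = coords.shape[0]
    dists = np.empty(n_boot)
    for i in range(n_boot):
        sample_a = rng.choice(journal_ids_a, size=len(journal_ids_a), replace=True)
        sample_b = rng.choice(journal_ids_b, size=len(journal_ids_b), replace=True)
        counts_a = np.bincount(sample_a, minlength=n_journals)
        counts_b = np.bincount(sample_b, minlength=n_journals)
        dists[i] = distance(barycenter(counts_a, coords), barycenter(counts_b, coords))
    return np.percentile(dists, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

A SAPV-based interval can be obtained in the same way by substituting `sapv(counts, sim)` for `barycenter(counts, coords)` inside the loop.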

Keywords

Research evaluation · Barycenter · Similarity-adapted publication vector · Journal overlay map · Matching research expertise · Similarity matrix

Supplementary material

Supplementary material 1: 11192_2016_2132_MOESM1_ESM.pdf (PDF, 2.4 MB)


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2016

Authors and Affiliations

  1. Center for Research and Development Monitoring (ECOOM), Faculty of Social Sciences, University of Antwerp, Antwerp, Belgium
  2. Amsterdam School of Communication Research (ASCoR), University of Amsterdam, Amsterdam, The Netherlands
  3. Antwerp Maritime Academy, Antwerp, Belgium