Three Aspects of the Research Impact by a Scientist: Measurement Methods and an Empirical Evaluation

Part of the Springer Proceedings in Mathematics & Statistics book series (PROMS, volume 130)

Abstract

Three different approaches to evaluating the research impact of a scientist are considered. Two of them are conventional, scoring the impact via (a) citation metrics and (b) merit metrics. The third relates to the level of results: it involves a taxonomy of the research field, that is, a hierarchy representing its composition. The impact is evaluated according to the taxonomy ranks of the subjects that have emerged or have been crucially transformed due to the results of the scientist under consideration (Mirkin, Control Large Syst Spec Issue 44:292–307, 2013). To aggregate criteria in approaches (a) and (b), we use an in-house automated criteria weighting method oriented towards as tight a representation of the strata as possible (Orlov, Bus Inf 4:24–35, 2014). To compare the approaches empirically, we use publicly available data on about 30 scientists in the areas of data analysis and machine learning. As our taxonomy of the field, we invoke the corresponding part of the ACM Computing Classification System 2012, slightly modified to better reflect the results of the scientists in our sample. The obtained ABC stratifications are rather far from each other. This supports the view that the three approaches (citations, merits, taxonomic rank) capture different aspects of research impact, and, therefore, a good method for scoring research impact should involve all three.
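To make the aggregation step concrete, the following minimal Python sketch illustrates the general idea behind multicriteria ABC stratification: normalize each criterion, aggregate the criteria by a weighted sum, and cut the ranked aggregate scores into three strata A, B, C so that each stratum is as tight as possible. This is not the authors' in-house Linstrat method (Orlov 2014), only an illustrative approximation; all criteria, weights, and data values below are hypothetical.

    # Minimal sketch of multicriteria ABC stratification (not Linstrat):
    # weighted linear aggregation of normalized criteria, then a brute-force
    # search for the pair of cut points giving the tightest three strata.
    from itertools import combinations

    def normalize(values):
        """Rescale one criterion to [0, 1] so criteria are comparable."""
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    def aggregate(criteria, weights):
        """Weighted sum of normalized criteria, one score per scientist."""
        cols = [normalize(c) for c in criteria]
        return [sum(w * col[i] for w, col in zip(weights, cols))
                for i in range(len(cols[0]))]

    def abc_stratify(scores):
        """Cut the descending ranking into A/B/C strata, minimizing the
        total within-stratum spread of aggregate scores ("tightness")."""
        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        best, best_cost = None, float("inf")
        for i, j in combinations(range(1, len(order)), 2):
            parts = [order[:i], order[i:j], order[j:]]
            cost = sum(max(scores[k] for k in p) - min(scores[k] for k in p)
                       for p in parts)
            if cost < best_cost:
                best, best_cost = parts, cost
        return best  # lists of scientist indices for strata A, B, C

    # Hypothetical citation-metric criteria for five scientists:
    criteria = [
        [45, 30, 12, 25, 8],            # h-index
        [60.0, 20.5, 3.1, 18.0, 1.2],   # total citations, thousands
        [40.0, 25.0, 9.0, 22.0, 5.0],   # citations per paper
    ]
    weights = [0.5, 0.3, 0.2]           # hypothetical criteria weights

    a, b, c = abc_stratify(aggregate(criteria, weights))
    print("A:", a, " B:", b, " C:", c)

Under this scheme, a taxonomy-rank score from the third approach could simply be appended as one more criterion column before aggregation, although the paper argues the three aspects are better kept distinct.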

Keywords

Evaluation of research impact · Citation index · Merit metrics · Aggregate criteria · Linstrat method · Multicriteria analysis

Acknowledgements

This work was partially supported by the International Laboratory of Decision Choice and Analysis as part of a project within the Program for Fundamental Research of the National Research University Higher School of Economics, Moscow.

References

  1. Abramo, G., Cicero, T., D'Angelo, C.A.: National peer-review research assessment exercises for the hard sciences can be a complete waste of money: the Italian case. Scientometrics 95(1), 311–324 (2013)
  2. Alberts, B.: Impact factor distortions. Science 340(6134), 787 (2013)
  3. Aragón, A.M.: A measure for the impact of research. Sci. Rep. 3, 1649 (2013). doi:10.1038/srep01649
  4. Bollen, J., Van de Sompel, H., Hagberg, A., Chute, R.: A principal component analysis of 39 scientific impact measures. PLoS ONE 4(6), e6022 (2009)
  5. Brans, J.P., Vincke, P.: A preference ranking organisation method: the PROMETHEE method for MCDM. Manag. Sci. 31(6), 647–656 (1985)
  6. Burgess, A., Davies, U., Doyle, M., Gilbert, A., Heine, C., Howard, C., Jones, S., McKelvey, D., Potter, K., Wright, S.: The Economist Pocket World in Figures: 2007 Edition, 254 pp. The Economist in Association with Profile Books Ltd., London (2006)
  7. Canavan, J., Gillen, A., Shaw, A.: Measuring research impact: developing practical and cost-effective approaches. Evid. Policy 5(2), 167–177 (2009)
  8. Choo, E.U., Schoner, B., Wedley, W.C.: Interpretation of criteria weights in multicriteria decision making. Comput. Ind. Eng. 37(3), 527–541 (1999)
  9. Eisen, J.A., MacCallum, C.J., Neylon, C.: Expert failure: re-evaluating research assessment. PLoS Biol. 11(10), e1001677 (2013). doi:10.1371/journal.pbio.1001677
  10. Engels, T.C., Goos, P., Dexters, N., Spruyt, E.H.: Group size, h-index, and efficiency in publishing in top journals explain expert panel assessments of research group quality and productivity. Res. Eval. 22, 224–236 (2013)
  11. Figueira, J.R., Greco, S., Roy, B., Słowiński, R.: An overview of ELECTRE methods and their recent extensions. J. Multi-Criteria Decis. Anal. 20(1–2), 61–85 (2013)
  12. Fisher, W.D.: Clustering and Aggregation in Economics. The Johns Hopkins Press, Baltimore (1969)
  13. Han, J., Kamber, M., Pei, J.: Data Mining: Concepts and Techniques, 3rd edn. The Morgan Kaufmann Series in Data Management Systems. Morgan Kaufmann, Amsterdam (2011)
  14. Keeney, R.L., Raiffa, H.: Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, New York (1976)
  15. Köksalan, M., Mousseau, V., Özpeynirci, Ö., Özpeynirci, S.B.: An outranking-based approach for assigning alternatives to ordered classes. Nav. Res. Logist. 56(1), 74–85 (2009)
  16. Lee, F.S., Pham, X., Gu, G.: The UK research assessment exercise and the narrowing of UK economics. Camb. J. Econ. 37(4), 693–717 (2013)
  17. Mirkin, B.: Core Concepts in Data Analysis: Correlation, Summarization, Visualization. Springer, London (2011)
  18. Mirkin, B.: On the notion of research impact and its measurement. Control Large Syst. Spec. Issue 44, 292–307 (2013). Institute of Control Problems, Moscow (in Russian)
  19. Mirkin, B., Orlov, M.: Methods for Multicriteria Stratification and Experimental Comparison of Them, p. 31. Preprint WP7/2013/06. Higher School of Economics, Moscow (2013, in Russian)
  20. Ng, W.L.: A simple classifier for multiple criteria ABC analysis. Eur. J. Oper. Res. 177, 344–353 (2007)
  21. Nobel Prize page: http://nobelprize.org/alfred_nobel/will/will-full.html (2014). Accessed 16 Oct 2014
  22. Orlov, M.: An algorithm for deriving a multicriterion stratification. Bus. Inf. 4, 24–35 (2014, in Russian)
  23. Orlov, M., Mirkin, B.: A concept of multicriteria stratification: a definition and solution. Procedia Comput. Sci. 31, 273–280 (2014)
  24. Osterloh, M., Frey, B.S.: Ranking games. Eval. Rev. (2014). doi:10.1177/0193841X14524957
  25. Page, L., Brin, S., Motwani, R., Winograd, T.: The PageRank Citation Ranking: Bringing Order to the Web. Technical Report. Stanford InfoLab (1999)
  26. Ramanathan, R.: Inventory classification with multiple criteria using weighted linear optimization. Comput. Oper. Res. 33, 695–700 (2006)
  27. San Francisco Declaration on Research Assessment (DORA): am.ascb.org/dora/ (2014). Accessed 16 Oct 2014
  28. Sun, Y., Han, J., Zhao, P., Yin, Z., Cheng, H., Wu, T.: RankClus: integrating clustering with ranking for heterogeneous information network analysis. In: Proceedings of EDBT 2009, pp. 565–576 (2009)
  29. The 2012 ACM Computing Classification System. http://www.acm.org/about/class/2012 (2014). Accessed 17 Oct 2014
  30. The Complete University League Guide: http://www.thecompleteuniversityguide.co.uk/league-tables/methodology (2014). Accessed 25 Oct 2014
  31. Thomson Reuters Intellectual Property and Science: http://ip-science.thomsonreuters.com/. Accessed 16 Oct 2014
  32. Van Raan, A.F.: Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups. Scientometrics 67(3), 491–502 (2006)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. School of Computer Science and Information Systems, Birkbeck, University of London, London, UK
  2. Department of Data Analysis and Machine Intelligence, National Research University Higher School of Economics, Moscow, Russian Federation
