
Journal of Classification, Volume 35, Issue 1, pp 5–28

Qualitative Judgement of Research Impact: Domain Taxonomy as a Fundamental Framework for Judgement of the Quality of Research

  • Fionn Murtagh
  • Michael Orlov
  • Boris Mirkin

Abstract

Metric evaluation of research impact has attracted considerable interest in recent times. Although the public at large and administrative bodies are drawn to the idea, scientists and other researchers are much more cautious, insisting that metrics are only an auxiliary instrument to qualitative, peer-based judgement. The goal of this article is to propose the domain taxonomy, a well-established construct, as a tool for directly assessing the scope and quality of research. We first show how taxonomies can be used to analyze the scope and perspectives of a set of research projects or papers. We then define the rank of a research team or individual researcher by those nodes in the hierarchy that have been created or significantly transformed by the researcher's results. An experimental test of the approach in the data analysis domain is described. Although a taxonomy may seem too simplistic a concept to capture all the richness of a research domain, its use and its changes can be made transparent and open to discussion.
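To make the node-based ranking idea concrete, the following minimal Python sketch represents a taxonomy as a tree and ranks a researcher by the most general node their results created or significantly transformed. The class names, the toy taxonomy, and the depth-based rank rule are illustrative assumptions, not the procedure developed in the paper.

```python
# Hypothetical sketch: rank a researcher by the taxonomy nodes their results
# created or significantly transformed. The taxonomy layout, node names, and
# the depth-based rule are illustrative assumptions, not the authors' method.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class TaxonomyNode:
    name: str
    parent: Optional["TaxonomyNode"] = None
    children: List["TaxonomyNode"] = field(default_factory=list)

    def add_child(self, name: str) -> "TaxonomyNode":
        child = TaxonomyNode(name, parent=self)
        self.children.append(child)
        return child

    def depth(self) -> int:
        # Root has depth 0; deeper nodes denote more specific topics.
        return 0 if self.parent is None else 1 + self.parent.depth()


def researcher_rank(affected_nodes: List[TaxonomyNode]) -> int:
    # One plausible convention: the rank is the depth of the most general
    # node the researcher created or transformed, so a smaller number
    # corresponds to a broader contribution.
    return min(node.depth() for node in affected_nodes)


# Toy taxonomy fragment for the data analysis domain (names are illustrative).
root = TaxonomyNode("Data analysis")
clustering = root.add_child("Clustering")
hierarchical = clustering.add_child("Hierarchical clustering")

# A researcher whose results transformed the "Clustering" node outranks one
# whose contribution is confined to the leaf "Hierarchical clustering".
print(researcher_rank([clustering]))    # 1
print(researcher_rank([hierarchical]))  # 2
```

A real assessment would also need a rule for deciding when a node counts as "created or significantly transformed"; the sketch simply takes that set as given.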

Keywords

Research impact · Scientometrics · Stratification · Rank aggregation · Multicriteria decision making · Semantic analysis · Taxonomy


Copyright information

© Classification Society of North America 2018

Authors and Affiliations

  1. University of Derby, Derby, UK
  2. Department of Computing, Goldsmiths, University of London, London, UK
  3. National Research University Higher School of Economics, Moscow, Russia
  4. Birkbeck, University of London, London, UK
