Scientometrics, Volume 110, Issue 1, pp 43–64

Neophilia ranking of scientific journals

Abstract

The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations)—these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists’ incentives to pursue innovative work.

Keywords

Novel science · Novelty · Journal rankings · Citations · Impact factor · Text analysis

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2016

Authors and Affiliations

  1. University of Waterloo, Waterloo, Canada
  2. Stanford University, Stanford, USA
