
A suggested method for the measurement of world-leading research (illustrated with data on economics)

Published in Scientometrics

Abstract

Countries often spend billions on university research. There is growing interest in how to assess whether that money is well spent. Is there an objective way to assess the quality of a nation’s world-leading science? I suggest a method, and illustrate it with data on economics. Of 450 genuinely world-leading journal articles, the UK produced 10% and the rest of Europe slightly more. Interestingly, more than a quarter of these UK articles came from outside the best-known university departments. The proposed methodology could be applied to almost any academic discipline or nation.


Notes

  1. Throughout the paper, I will, in the background, try to bear in mind relative population sizes, and in particular that the UK is less than one tenth the size of North America + Continental Europe + Japan.

  2. I have been particularly influenced by Bruce Charlton’s and Peter Andras’s (2008) long-held view that we should evaluate whether the UK is producing revolutionary science and not merely normal, solid science. Bruce Charlton has pointed out to me that some non-revolutionary papers acquire high numbers of citations. He is plainly right. However, high citation numbers are presumably a necessary, if not a sufficient, condition, and I therefore feel the later exercise is potentially useful.

  3. As one reader put it, this is a highly ‘non-linear’ method. It puts a large weight on the very best articles in a scholarly discipline. But something of this type is required if we are trying to design a criterion for the upper 4* grade in a system, such as RAE 2008, where there are three categories of international excellence. It also recognizes the admittedly inegalitarian skewness in intellectual productivity (a phenomenon sometimes known as Lotka’s Law) whereby a rather small proportion of articles or people produce the majority of the work of lasting impact. I include self-citations because there is a case for leaving them in and they make only a trivial difference for highly cited papers such as those covered here. I do not weight by the source of the citing journal, because doing so would in my judgment be against the spirit of free competition among intellectual ideas. Nevertheless, I remain conscious of the difficulties and sociological influences pointed out by Bornmann and Daniel (2008). (A minimal illustrative sketch of this counting rule is given after these notes.)

  4. See work such as Hamermesh et al. (1982), Oppenheim (1995), Hamermesh and Schmidt (2003), Hamermesh and Pfann (2008), and Goodall (2006, 2009), in which citations are treated as real signals. A particularly early and innovative paper, which remains unpublished, is Smart and Waldfogel (1996). Some defence against possible peer review bias is provided by Oswald and Jalles (2007). However, citations are not free of error, and in the long run it may not be sensible to see citations as unambiguously valuable (the more that citations are emphasized, as I accept they are in this article, the more their signalling value will be eroded). Hudson (2007) identifies some serendipitous influences on citation totals.

  5. I wanted to include an economic history journal, because I think that sub-field is particularly important. But over the period even the Journal of Economic History is comparatively little-cited: the marginal citation count of the 10th most-cited paper in the JEH is 11. So I decided, reluctantly, that I could not quite justify including it alongside the Helpman list. In passing, two high-impact journals, the Journal of Economic Literature and the Journal of Economic Perspectives, are also omitted from the Helpman list, presumably because they are collections of review articles. Two other omitted journals are the newish but increasingly important Journal of Economic Geography and Games and Economic Behaviour. For completeness, I present UK results for these extra journals in an Appendix, but I do not include those data in the text discussion.

  6. This is a tricky thing to do completely accurately (occasionally the addresses of authors can be hard to work out), and it is likely that small errors of counting remain, but it is to be hoped that they are randomly distributed across universities.

  7. It is now believed that this error was made by Evidence Ltd in their work for the Helpman Report, Helpman et al. (2008), but the company has not provided me with enough information to judge its size.

  8. Neil Shephard has suggested to me that ideally the individual papers should be normalized by their year of publication (because a publication in 2001 has had longer to build up cites than one published in 2006). He is right, of course. The reason I do not do so here, and why I use a form of simple averaging, is that I am trying to assess UK economics rather than individual researchers’ work. (An illustrative sketch of such a normalization is also given after these notes.)

  9. Hashem Pesaran has made the point to me that ideally we need to know where the important research was done—rather than simply where the author is when credited in the journal. I would like to be able to do this. But it is not possible, at least for me, to adjust the data in this way.

  10. Thanks go to Richard Blundell for this suggestion.

  11. In fact the ISI database classifies Economics + Business together, and it appears from the Highly Cited Researcher data that the lack of top business researchers in the UK is what really pulls the UK average down to 4%. My calculations suggest that UK economics on its own would rank at about 7%.

  12. Bruce Charlton’s early intuition in discussions with me was, interestingly, that medicine and allied subjects are the UK’s top disciplines.

  13. Here the performance on top-ranked AER papers seems particularly creditable, although, as with some of the UK work, it should perhaps also be recorded that these few papers were quite heavily co-authored with Americans.

  14. In passing, my own instinct is that a research assessment exercise such as the next so-called Research Excellence Framework (REF) in the United Kingdom should not rely in a mechanical way upon bibliometrics. It should have some element of peer review (or at least peer overview).

  15. An economist would argue that it is the marginal productivity of research funding that matters: what an extra pound of funding would add to research output. Public discussion, by contrast, is typically about average productivities.

  16. See also Frey (2003), Starbuck (2005), Ellison (2007), Macdonald and Kam (2007), Oswald (2007), and Wu (2007). I would argue, although I may be biased, that this paper’s bibliometric approach is a way of coping with a serious problem for national scientific evaluation, pointed out in different ways by Starbuck (2005) and Oswald (2007): the elite journals publish many ‘poor’ articles, that is, ones that go on to have no impact. The paper does this by concentrating on within-journal rankings of influential articles.
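
As a concrete illustration of the counting rule described in note 3, the fragment below shows one way a country's share of the most-cited articles in each journal could be computed. It is a minimal sketch, not the paper's own code: the data structure, the field names, and the choice of ten articles per journal (consistent with the threshold discussed in note 5) are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's own code) of the 'non-linear' counting
# rule in note 3: within each journal, keep only the k most-cited articles
# (k = 10, in line with the threshold discussed in note 5), then compute one
# country's share of that elite set. Field names are illustrative assumptions.
from collections import defaultdict

def world_leading_share(articles, country="UK", k=10):
    """articles: list of dicts with 'journal', 'citations' (self-citations
    left in, as in note 3) and 'countries' (author address countries)."""
    by_journal = defaultdict(list)
    for a in articles:
        by_journal[a["journal"]].append(a)

    elite_total, country_hits = 0, 0
    for items in by_journal.values():
        top_k = sorted(items, key=lambda a: a["citations"], reverse=True)[:k]
        elite_total += len(top_k)
        country_hits += sum(1 for a in top_k if country in a["countries"])
    return country_hits / elite_total if elite_total else 0.0
```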

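Similarly, the year-of-publication normalization that note 8 mentions, but the paper deliberately does not apply, might look something like the sketch below: each article's citation count is divided by the mean count of the sampled articles published in the same year. The field names and the mean-based normalization are assumptions, not a description of the paper's method.

```python
# Illustrative sketch of the year-of-publication normalization mentioned in
# note 8 (deliberately not applied in the paper): divide each article's
# citation count by the mean count of all sampled articles published in the
# same year, so that older papers are not favoured merely by age.
from collections import defaultdict

def year_normalized_citations(articles):
    """articles: list of dicts with 'year' and 'citations'; returns copies
    with an added 'norm_citations' field."""
    totals, counts = defaultdict(int), defaultdict(int)
    for a in articles:
        totals[a["year"]] += a["citations"]
        counts[a["year"]] += 1

    normalized = []
    for a in articles:
        mean = totals[a["year"]] / counts[a["year"]]
        normalized.append({**a, "norm_citations": a["citations"] / mean if mean else 0.0})
    return normalized
```
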
References

  • Adams, J. (2005). Early citation counts correlate with accumulated impact. Scientometrics, 63(3), 567–581.

  • Bornmann, L., & Daniel, H.-D. (2008). What do citation counts measure? A review of studies on citing behavior. Journal of Documentation, 64(1), 45–80.

  • Brazier, J., Roberts, J., & Deverill, M. (2002). The estimation of a preference-based measure of health from the SF-36. Journal of Health Economics, 21(2), 271–292.

  • Cardoso, A. R., Guimaraes, P., & Zimmerman, K. F. (2008). Comparing the early research performance of Ph.D. graduates in labor economics in Europe and the USA. IZA Discussion Paper, #3898.

  • Charlton, B. G., & Andras, P. (2008). Down-shifting among top UK scientists? The decline of revolutionary science and the rise of normal science. Medical Hypotheses, 70(3), 465–472.

  • Dreze, J. H., & Estevan, F. (2006). Research and higher education in economics: Can we deliver the Lisbon objectives? Journal of the European Economic Association, 5(2), 271–304.

  • Ellison, G. (2007, July). Is peer review in decline? Working paper, MIT.

  • Frey, B. S. (2003). Publishing as prostitution? Choosing between one’s own ideas and academic success. Public Choice, 116, 205–223.

  • Goodall, A. H. (2006). Should research universities be led by top researchers, and are they? A citations analysis. Journal of Documentation, 62(3), 388–411.

  • Goodall, A. H. (2009). Socrates in the boardroom: Why research universities should be led by top scholars. Princeton: Princeton University Press.

  • Hamermesh, D. S., Johnson, G. E., & Weisbrod, B. A. (1982). Scholarship, citations and salaries: Economic rewards in economics. Southern Economic Journal, 49(2), 472–481.

  • Hamermesh, D. S., & Pfann, G. (2008). Markets for reputation: Evidence on quantity and quality in academe. Working paper, University of Texas.

  • Hamermesh, D. S., & Schmidt, P. (2003). The determinants of Econometric Society fellows elections. Econometrica, 71, 399–407.

  • Helpman, E., et al. (2008). ESRC International benchmarking study of UK economics. Swindon: ESRC.

  • Hudson, J. (2007). Be known by the company you keep: Citations—quality or chance? Scientometrics, 71(2), 231–238.

  • Im, K. S., Pesaran, M. H., & Shin, Y. (2003). Testing for unit roots in heterogeneous panels. Journal of Econometrics, 115(1), 53–74.

  • Macdonald, S., & Kam, J. (2007). Ring a ring o’ roses: Quality journals and gamesmanship in management studies. Journal of Management Studies, 44, 640–655.

  • Machin, S., & Oswald, A. J. (2000). UK economics and the future supply of academic economists. Economic Journal, 110, F334–F349.

  • Neary, J. P., Mirrlees, J. A., & Tirole, J. (2003). Evaluating economics research in Europe: An introduction. Journal of the European Economic Association, 1, 1239–1249.

  • Oppenheim, C. (1995). The correlation between citation counts and the 1992 Research Assessment Exercise Ratings for British library and information science university departments. Journal of Documentation, 51, 18–27.

  • Oswald, A. J. (2007). An examination of the reliability of prestigious scholarly journals: Evidence and implications for decision-makers. Economica, 74, 21–31.

  • Oswald, A. J., & Jalles, J. (2007). Unbiased peer review and the averaging fallacy. Working paper, Warwick University.

  • Smart, S., & Waldfogel, J. (1996). A citation-based test for discrimination at economics and finance journals. Working paper, Indiana University, and NBER paper 5460.

  • Starbuck, W. H. (2005). How much better are the most prestigious journals? The statistics of academic publication. Organization Science, 16, 180–200.

  • Vasilakos, N., Lanot, G., & Worral, T. (2007). Evaluating the performance of UK research in economics. Report sponsored by the Royal Economic Society.

  • Weinberg, B. (2009). An assessment of British science over the 20th century. Economic Journal, 119, F252–F269.

  • Wu, S. (2007). Recent publishing trends at the AER, JPE, and QJE. Applied Economics Letters, 14(1), 59–63.

Acknowledgements

Bruce Charlton of Newcastle University has given me extremely helpful suggestions on this paper. I have also benefited from the comments of the editor Tibor Braun, and of Giuseppe Bertola, Danny Blanchflower, Richard Blundell, Ian Diamond, Jacques Dreze, Amanda Goodall, Steffen Huck, Joao Jalles, Jim Malcomson, Ben Martin, Hashem Pesaran, Neil Shephard, Bill Starbuck, Tony Thirlwall, John van Reenen, Steve Venti, John Vickers, Bruce Weinberg and Tim Worrall. I have deliberately not discussed this paper with the Economics RAE Panel member from Warwick University, Steve Broadberry, and have no way of knowing if the Panel used data like mine. I thank Cornell University and the University of Zurich for visiting professorships in 2008. Thanks go to the ESRC for research support.

Author information

Correspondence to Andrew J. Oswald.

Appendix

Table 3: Further journals and UK articles


Cite this article

Oswald, A.J. A suggested method for the measurement of world-leading research (illustrated with data on economics). Scientometrics 84, 99–113 (2010). https://doi.org/10.1007/s11192-009-0087-x
