Volume 103, Issue 3, pp 849–877

What drives the relevance and reputation of economics journals? An update from a survey among economists

  • Justus Haucap
  • Johannes Muck


Abstract

This paper analyses the interrelationship between a journal's perceived reputation and its relevance for academics' work. Based on a survey of 705 members of the German Economic Association (GEA), we find a strong interrelationship between perceived journal reputation and relevance, with a journal's perceived relevance having a stronger effect on its reputation than vice versa. Moreover, past journal ratings conducted by the Handelsblatt and the GEA directly affect journals' reputation among German economists and indirectly also their perceived relevance, but the effect on reputation is more than twice as large as the effect on perceived relevance. In general, citations have a non-linear impact on perceived journal reputation and relevance. While the number of landmark articles published in a journal (as measured by the so-called H-index) increases the journal's reputation, an increase in the H-index tends to decrease a journal's perceived relevance as long as it is not simultaneously reflected in a higher Handelsblatt and/or GEA rating. This suggests that a journal's relevance is driven by average article quality, while its reputation depends more on truly exceptional articles. We also identify significant differences in views on journal relevance and reputation between age groups.
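The H-index referred to in the abstract (Hirsch 2005) counts a journal's "landmark" articles: it is the largest h such that h of the journal's articles have received at least h citations each. A minimal sketch of the computation (the citation counts below are illustrative, not taken from the paper's data):

```python
def h_index(citations):
    """Largest h such that h items have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` articles all have >= rank citations
        else:
            break
    return h

# Five articles with 10, 8, 5, 4, and 3 citations yield an H-index of 4:
# the four most-cited articles each have at least 4 citations.
print(h_index([10, 8, 5, 4, 3]))
```

Note that the measure is driven entirely by the upper tail of the citation distribution, which is why the paper can separate "truly exceptional articles" (reputation) from average article quality (relevance).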


Keywords

Economic journals · Academic journals · Reputation · Relevance · Rigor · Economists · Fractional response models
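The "fractional response models" keyword refers to the quasi-maximum-likelihood approach of Papke and Wooldridge (1996), which fits a logistic conditional mean to outcomes bounded in [0, 1] (such as the survey's reputation and relevance scores rescaled to the unit interval). A minimal sketch of the Bernoulli quasi-log-likelihood that this estimator maximizes (the data and coefficients below are illustrative, not from the survey):

```python
import math

def quasi_loglik(beta, x, y):
    """Bernoulli quasi-log-likelihood for a fractional logit model:
    sum_i [ y_i*log G(b0 + b1*x_i) + (1 - y_i)*log(1 - G(b0 + b1*x_i)) ],
    where G is the logistic CDF and each y_i may lie anywhere in (0, 1)."""
    b0, b1 = beta
    total = 0.0
    for xi, yi in zip(x, y):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))  # logistic mean
        total += yi * math.log(p) + (1.0 - yi) * math.log(1.0 - p)
    return total

# Illustrative data: fractional outcomes rising in x.
x = [0.0, 1.0, 2.0]
y = [0.2, 0.5, 0.8]

# Coefficients matching the data's logit-linear trend score higher than
# a flat model (beta = (0, 0), i.e. predicted mean 0.5 everywhere).
fitted = (math.log(0.25), math.log(4.0))
assert quasi_loglik(fitted, x, y) > quasi_loglik((0.0, 0.0), x, y)
```

In practice the quasi-log-likelihood is maximized numerically with robust standard errors (e.g. via a GLM with a binomial family and logit link); the closed-form coefficients above just happen to fit these three illustrative points exactly.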

JEL Classification

A11 · A14 · I23 · L82



Acknowledgements

We would like to thank all economists who participated in our survey. Moreover, we thank Elisabeth Flieger, Susanne Schäfers and Olaf Siegert (all of ZBW—Leibniz-Informationszentrum Wirtschaft) for their excellent support in conducting the survey and assembling the data. For comments and very useful discussions we thank Michael Bräuninger, Florian Heiß and two anonymous referees.



Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2015

Authors and Affiliations

  1. Düsseldorf Institute for Competition Economics (DICE), Heinrich-Heine-University of Düsseldorf, Düsseldorf, Germany