
Journal rankings and publication strategy


We study the impact of journal ranking systems on publication outlet choice. We investigate the publication behavior of UK-based scholars registered on IDEAS/RePEc and analyze the publication outcomes of the academic work they upload to the repository. Our estimates suggest that authors strategically choose outlets to maximize their publication scores. Our identification strategy exploits the change in the British ABS journal ranking in 2015: working papers written before the ranking change are significantly less likely to be published in ex-post downgraded journals. The effect cannot be attributed to the overall change in journal quality.


The importance of academic evaluation is on the rise. Performance-based research funding systems use a range of ranking methods as a policy tool, designed to provide an efficient and fair allocation of research funds or to assist in recruitment and promotion decisions (Bajo et al. 2020; Salter et al. 2017). In addition, individual authors refer to journal rankings to guide their citation choices (Drivas and Kremmydas 2020) or to encourage students to read specific papers (Walker et al. 2019).

Journal rankings, such as the Academic Journal Guide published by the Chartered Association of Business Schools in the UK (henceforth the ABS ranking) or the top-five indicator popular in economics, are based on a combination of bibliometric measures, academic tradition and expert opinion, giving rise to a significant degree of heterogeneity between rankings. Furthermore, these evaluations have several drawbacks. Journal rankings are typically coarse, so a small difference in the quality of the journals in which authors publish may have significant implications for their career prospects. Moreover, most rankings are not updated regularly. These features create a lag between changes in journal quality and their recognition in the ranking.

Nonetheless, performance-based research funding systems built on such rankings have been implemented in a number of countries (see Hicks 2012 and Zacharewicz et al. 2019), inspiring an extensive literature on the impact of those schemes on academic scholarship. Existing studies broadly confirm the common-sense intuition that “you get what you incentivize”. Heckman and Moktan (2020) show that the excessive emphasis placed on an academic’s top-five publications in economics in recruitment and tenure decisions incentivizes scholars to pursue follow-up and replication work at the expense of creative, pioneering research. Similarly, Butler (2003), studying the 1993 Australian reform that introduced undifferentiated publication counts, showed that the number of publications in Australia significantly increased while their quality significantly decreased. Quantitative results consistent with this trend have been found in Norway (Bloch and Schneider 2016) and in Poland (Korytkowski and Kulczycki 2019). Hence, “thinking with indicators” has become a central aspect of research activities, as studies for the Netherlands, Austria and the UK also show (Müller and de Rijcke 2017; Salter et al. 2017).

The literature described above analyzes the total impact of a change in evaluation systems on research strategy, yet the response to such a change can be split into two components. The first is the change in research strategy: adjusting research activities to fit a new set of evaluation parameters, including changes in research quality, the quantity of publications or the topical scope. The second is the change in publication strategy only, i.e. the choice of publication outlets, keeping all other aspects of the research fixed. Our key contribution to the literature is that we isolate the pure impact of a change in the evaluation system on the change in publication strategy.

This work examines the impact of the 2015 ABS ranking change on the publication outlet choices of UK-based authors in economics and finance. To isolate the change in publication strategy, we analyze papers uploaded to the IDEAS/RePEc online repository in the years 2010–2014, a narrow window between two subsequent changes to the ABS journal ranking. We show that preprints of UK-based authors uploaded before the 2015 ABS ranking change are less likely to end up being published in the downgraded journals. Our estimates also suggest that this decrease in the share of papers published in that journal category cannot be attributed to a decrease in journal quality.

This paper also contributes to the broader literature on the publication outcomes of papers uploaded to preprint repositories by focusing on all preprints in economics and finance uploaded to a single repository from a single country. This distinguishes our paper from earlier studies, which focused on working papers published in select working paper series (e.g. Baumann and Wohlrabe 2020a), analyzed preprints of papers published in select journals (e.g. Brown and Zimmermann 2017; Wohlrabe and Bürgi 2020), or studied complete repositories in disciplines other than economics and finance (e.g. Larivière et al. 2014).

Do national rankings matter?

We study the impact of the changes to the British ABS ranking on publication outcomes. The ABS ranking is widely used for assessing the reputation of both individual researchers and their institutions (e.g., Salter et al. 2017). Walker et al. (2019) carried out a large-scale survey of UK business academics, collecting responses from 8002 academics at 90 UK business and management schools. The basic descriptive statistics suggest that 67% of researchers use the ABS ranking always or almost every time when preparing a submission. In addition, about 76% (79%) of academics at least occasionally use the ABS list to judge the research outputs of other academics (when assessing a promotion case).

In our analysis, we exploit the plausibly exogenous change in the ABS ranking in 2015 (published in February of that year) relative to the ranking’s previous version from 2010. As we can see in Fig. 1, the ranking experienced a small revolution. In 2015, a new journal grade (4*) was added,Footnote 1 168 journals were upgraded, 42 were downgraded, 590 journals were added and only 579 maintained their grade. Overall, there was substantial grade inflation, due to which a downgrade in 2015 should be perceived as a more significant change than in previous ranking updates.Footnote 2 This makes the 2015 ABS ranking change ideal for our study.
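The bookkeeping behind these counts can be illustrated with a short sketch. The journal names and grades below are invented placeholders, not entries from the actual ABS lists; grades are encoded numerically, with the new 4* grade mapping to the highest value.

```python
# Sketch: classifying journals by their change between two ranking editions.
# Grades are encoded numerically (1-4 in 2010; the 4* grade introduced in
# 2015 would map to 5). The journals and grades here are illustrative only.
abs10 = {"J. Alpha": 3, "J. Beta": 4, "J. Gamma": 2}
abs15 = {"J. Alpha": 3, "J. Beta": 3, "J. Delta": 1}

def classify(old, new):
    """Label each journal as constant/upgraded/downgraded/added/removed."""
    result = {}
    for journal in set(old) | set(new):
        if journal not in old:
            result[journal] = "added"
        elif journal not in new:
            result[journal] = "removed"
        elif new[journal] > old[journal]:
            result[journal] = "upgraded"
        elif new[journal] < old[journal]:
            result[journal] = "downgraded"
        else:
            result[journal] = "constant"
    return result

changes = classify(abs10, abs15)
```

Tallying the labels in `changes` over the full 2010 and 2015 lists would reproduce the upgraded/downgraded/added/constant totals quoted above.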

Fig. 1: Change in ABS rankings in years 2009–2018

Note: The plot displays ranking changes for 712 journals that remained in the ranking throughout the four editions

As shown in Fig. 1, the ABS ranking also changed in 2010; however, that change was less pronounced. In total, 49 journals had their ranking revised, 77 journals were added and one journal was removed. Another change came in 2018, and although none of the journals that were already ranked had their ranking revised, 177 journals were added to the list, which significantly expanded the authors’ choice.

To study the impact of the ranking change on publication outlet choices, we analyze data on academic papers from the upload of a working paper to the IDEAS/RePEc repository through to journal publication.Footnote 3 In our sample, we keep only those papers with at least one UK-based author registered at the repository that were uploaded to IDEAS/RePEc during the 2010–2014 period, i.e. prior to the ABS ranking change in 2015. In total, 11,557 papers authored by 1054 UK-based researchers satisfy all criteria, of which 6,294 papers (54%) were published in an ABS-ranked journal.Footnote 4 This is consistent with Baumann and Wohlrabe (2020a).Footnote 5 We follow those papers through to 2017, i.e. the last year before the subsequent ABS ranking change. In total, our unbalanced panel has 41,143 observations at annual frequency. We provide summary statistics of our data in the Appendix, Table 5.

We begin the analysis by calculating the share of papers published in a particular journal category (constant/downgraded/upgraded) conditional on paper age. We interpret this as the publication probability. Figure 2 shows that the paper’s age (i.e. the time since the first upload) is a key determinant of the publication probability. The figure highlights the strong impact of journal downgrading. The share of papers published in the journals that were downgraded in 2015 significantly drops after the 2015 ABS ranking was published.Footnote 6
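This calculation can be sketched as a simple group-by over a paper-year panel. The toy data below is invented for illustration; the real panel has the structure described above, with one row per paper per year.

```python
import pandas as pd

# Toy paper-year panel: each row is a paper observed at a given age,
# with the journal category it appeared in that year (None if unpublished).
panel = pd.DataFrame({
    "paper":    [1, 1, 2, 2, 3, 3],
    "age":      [1, 2, 1, 2, 1, 2],
    "category": [None, "downgraded", "constant", None, None, "constant"],
})

# Share of papers published in each category, conditional on age,
# interpreted as an empirical publication probability (in percent).
share = (
    panel.assign(
        constant=panel["category"].eq("constant"),
        downgraded=panel["category"].eq("downgraded"),
    )
    .groupby("age")[["constant", "downgraded"]]
    .mean() * 100
)
```

Computing these shares separately for the pre- and post-2015 years and plotting them against age yields the pattern summarized in Fig. 2.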

Fig. 2: Paper age and publication patterns

Note: Age is the time since the paper’s first upload to the repository. The ABS ranking is the one valid for UK-based scholars in a given year, i.e. ABS10 represents the years 2010–2014 and ABS15 the years 2015–2017. Since our sample covers working papers uploaded in the years 2010–2014, there are no papers less than one year old in the years 2015–2017

To explore the link between changes in journal ranking and publication outcomes, we employ the following linear probability model:

$${Outcome}_{p,t}=\alpha +{ABS15}_{t}{\beta }_{1}+{Controls}_{p,t}+{\lambda }_{t}+ {\theta }_{p}+{\varepsilon }_{p,t}$$

where the Outcome variable is a scaled dummy indicating publication in a downgraded/upgraded/unchanged journal in year t. To facilitate the presentation of the results, we set the variable to 100 if a paper is published in that year in the journal category of interest and zero otherwise. This allows us to interpret the regression coefficients as percentage points. ABS15 is a dummy variable (on a zero–one scale) indicating the years after the ranking change, i.e. 2015, 2016 and 2017. Control variables include the number of versions that the paper has had.Footnote 7 \({\lambda }_{t}\) and \({\theta }_{p}\) represent paper age and paper fixed effects,Footnote 8 respectively, where age is defined as the number of years since the paper was first posted on IDEAS/RePEc. Our model is estimated with a fixed-effects estimator, which allows us to control for unobservable paper and time characteristics.
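The fixed-effects estimation can be illustrated with a minimal within-transformation sketch. The six-observation panel below is invented; controls and age effects are omitted for brevity, leaving only the paper fixed effects and the ABS15 dummy.

```python
import pandas as pd

# Minimal within-estimator sketch for the paper fixed-effects model:
# Outcome = alpha + beta1*ABS15 + paper FE + error (age effects and
# other controls omitted; all values below are illustrative).
df = pd.DataFrame({
    "paper":   [1, 1, 1, 2, 2, 2],
    "abs15":   [0, 0, 1, 0, 1, 1],
    "outcome": [0, 0, 0, 0, 100, 100],  # scaled 0/100 publication dummy
})

# Within transformation: demean each variable by paper, which sweeps
# out the paper fixed effects.
demeaned = (
    df[["abs15", "outcome"]]
    - df.groupby("paper")[["abs15", "outcome"]].transform("mean")
)

# OLS slope on the demeaned data recovers beta1 (in percentage points).
x = demeaned["abs15"].to_numpy()
y = demeaned["outcome"].to_numpy()
beta1 = (x @ y) / (x @ x)
```

On real data one would use a dedicated panel estimator with clustered standard errors; the manual demeaning above only illustrates how the within variation identifies the ABS15 coefficient.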

Baseline results are reported in Table 1. The key coefficient of interest is that on ABS15, which tells us how the share of working papers published in a given journal category changed after 2015, controlling for a paper’s age and other covariates. The results suggest that after 2015, UK-based scholars are less likely to publish in the downgraded journals. The share of papers published in that journal category declines by 0.17 percentage points, or around a quarter given the unconditional probability of 0.66%. We observe virtually no change for the two other journal categories. However, once we disaggregate the results, as shown in Table 2, we find that an increased share of papers is published in journals upgraded to ABS 4 after the ranking change. The results remain qualitatively unchanged if we replace paper fixed effects with author fixed effects, as we show in Appendix Table 9.Footnote 9

Table 1 Regression Results: Change in ABS Journal Ranking and Publication Outcomes
Table 2 Regression Results: Change in Journal Ranking and Publication Outcome, a detailed view

We also notice, perhaps unsurprisingly, that papers which go through more revisions are more likely to be published, as indicated by the coefficients on the number-of-versions variables. This is particularly the case for journals that were not downgraded.

When we review the ABS journal categories in more detail, as shown in Table 2, we find that journals downgraded to grades 3 or 2 suffer the most, while upgrades to grade 4 are associated with the highest increases in publication probability. This is to be expected, as publications in journals with grades 3 and 4 are typically crucial for research evaluation. We also observe a small, albeit insignificant, decrease in publication probability in journals upgraded to grade 4*. This grade was only created in 2015, and the decrease is a likely manifestation of increasing global competition for publication in top journals.

Do citation rankings matter?

One may argue that researchers are less likely to submit to the downgraded journals not because of the rankings themselves, but because of the decreasing quality of these journals, which causes the ranking decrease. However, this is evidently not the case. First, the relationship between changes in objective citation-based measures such as the SCImago Journal Rank (SJR indicator)Footnote 10 and ABS ranking changes is weak: the change in mean log values of the SJR indicator between the years 2005–2009 and 2010–2014 has been the same for the downgraded journals and for those that retained their previous rank.Footnote 11

More importantly, as we show in Table 3, we find an inverse relationship between the change in journal quality and the share of papers published by UK-based scholars. We show this by re-estimating our regression model using alternative outcome variables representing the change in journal quality. More specifically, we analyze the change in publication probability in response to a change in journals’ weighted citations, as measured by the change in the average SJR log score between the years 2005–2009 and 2010–2014. All journals are categorized into three groups according to the quartile of the quality-change distribution: top, bottom, and the two middle quartiles combined. The new dependent variable is a scaled dummy variable that takes the value 100 if a journal belongs to a given category and zero otherwise.
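This grouping step can be sketched as follows. The SJR values are randomly generated placeholders, not actual SJR data; only the quartile logic mirrors the construction described above.

```python
import numpy as np
import pandas as pd

# Sketch: classify journals by quartile of their SJR log-score change.
# The values are random placeholders for illustration.
rng = np.random.default_rng(0)
sjr = pd.DataFrame({
    "journal": [f"J{i}" for i in range(8)],
    "log_sjr_0509": rng.random(8),   # mean log SJR, 2005-2009
    "log_sjr_1014": rng.random(8),   # mean log SJR, 2010-2014
})
sjr["change"] = sjr["log_sjr_1014"] - sjr["log_sjr_0509"]

# Quartiles of the quality-change distribution, collapsed to three
# groups: bottom quartile, two middle quartiles, top quartile.
quartile = pd.qcut(sjr["change"], 4, labels=[1, 2, 3, 4]).astype(int)
sjr["group"] = np.select(
    [quartile == 1, quartile == 4], ["bottom", "top"], default="middle"
)
```

Each group indicator, scaled to 0/100, then serves as the dependent variable in the re-estimated regression.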

Table 3 Regression Results: Change in Journal Ranking and Publication Outcome

Results reported in Table 3 show that, for the UK-based authors, the share of the papers published in journals that improved their quality decreases after 2015, while it increases for journals that experienced a decrease in their SJR indicators, although the change is not statistically significant.

A growing SJR score implies growing recognition and hence competition globally, while the pay-offs for the UK-based scholars remain unchanged. The measured change in journal quality is different from journal quality itself. In fact, as we show in the Appendix Table 7, the change in journal quality occurs almost uniformly across all ABS journal ranks.

Unfortunately, our data do not allow us to identify if the observed phenomenon is the outcome of changes in rejection rates that are likely to be associated with changes in citation rankings, or if the results are driven by the UK-based scholars avoiding journals which become more competitive globally.

One may fear some degree of reverse causality in our setting. However, the potential bias can only strengthen our results. Being downgraded discourages UK-based researchers from submitting, thus decreasing competition and ultimately increasing acceptance probability. Similarly, we expect a journal upgrade to increase the number of submissions, thus ultimately decreasing acceptance probability. The size of this bias is, however, limited by the fact that UK-based authors contribute to 14.6% of publications in economics.Footnote 12

Our results may also be mitigated by the perceived gap between subjective rankings and the actual ranking, as recently found by Bryce et al. (2020). The observed impact of the ranking change is also unlikely to be homogeneous. Walker et al. (2019) find in a survey that reliance on the ABS ranking differs across seniority groups and universities. We leave, however, the quantification of these effects for future studies.


Our study confirms that journal rankings are an important tool in shaping publication policy. However, this evaluation framework is often country-specific and requires more frequent and more objective updates. While UK authors respond to changes in journal rankings by directing their papers away from the downgraded journals, they also publish more frequently in journals upgraded to the ABS 4 category. Overall, however, they are less likely to publish in journals with a fast-growing SJR score.

The ABS ranking affects not only UK-based scholars but also institutions abroad that frequently rely on this ranking. Thus, a decision to downgrade a journal has the potential to decrease the number of submitted papers, ultimately affecting overall journal quality.


  1. In the 2010 ABS ranking, the “world class” journals were acknowledged in the list, but did not constitute a separate grade.

  2. See Appendix Table 4 for details.

  3. See Wohlrabe and Gralka (2020) or García-Suaza et al. (2020) for a detailed review of the IDEAS/RePEc database. Note that individual authors do not upload working papers to RePEc themselves. A paper must first be submitted to a working paper series, and the series owner subsequently uploads it to RePEc.

  4. See Appendix for more details on the sample construction.

  5. Baumann and Wohlrabe (2020a) focus on the best-known (and highly selective) working paper series: NBER, CEPR, IZA and CESifo, while our sample includes all papers with UK-based authors. The authors show that approximately 4% of the papers are published as book chapters, while there is no evidence for the remaining 46.5%. They follow up on those papers by analyzing the CVs of a random sample of authors of the missing papers. This manual analysis revealed that 36% of the missing papers were published in a journal under a different title. Unfortunately, we do not have the luxury of replicating this exercise for our sample, and such papers are treated as unpublished in our analysis. We do not believe, however, that this approach introduces systematic bias into our analysis. It should also be noted that Baumann and Wohlrabe (2020b) follow the papers for a longer period of time (19 years vs. 12 years), which leads to a higher average age of published papers in their sample.

  6. The difference is statistically significant for the years after the year of uploading, as shown in Appendix Table 8. Note that due to the sample selection criteria (papers uploaded in the years 2010–2014), we cannot compare the publication outcomes of just-uploaded papers in the first year. Looking beyond the main sample, we observe that the share of papers published in the first year (age = 0) is almost three times as high as the share published in the second year (age = 1) for all journal categories.

  7. Note that a change in a paper’s version number does not always imply a change in its content. A change in the version number may occur in two ways. For some, but not all, working paper series (e.g., NBER or SSRN), each change to the paper is indeed registered. However, the version number also changes if a paper is uploaded to a new repository. In that case, we are not able to check whether the new version is associated with an actual change in the paper’s content.

  8. One would naturally consider adding author fixed effects. However, the set of authors is constant across all versions of each paper in our sample, so author effects are subsumed by the paper fixed effects.

  9. Author fixed effects are included for UK-based scholars only. For papers with multiple UK-based co-authors, each observation is replicated and weighted with weights inverse to the number of UK-based authors. Note that our approach differs from that of Wohlrabe and Bürgi (2020), who, instead of author fixed effects, use measures of the collective reputation of coauthors.

  10. The SJR is a size-independent journal quality indicator that applies a complex algorithm, similar to Google’s PageRank, to Scopus-indexed journals (Mingers and Yang, 2017). The SJR takes into account the quantity of citations, the prestige of the citing journals, and the field: citations from thematically close journals are given more weight.

  11. See Appendix Table 6 for details.

  12. The share of papers with at least one UK-based author published in journals covered in the Econlit database and ranked in the ABS in 2010 and 2015.

  13. Full journal list available at


  • Bajo, E., Barbi, M., & Hillier, D. (2020). Where should I publish to get promoted? A finance journal ranking based on business school promotions. Journal of Banking & Finance, 114, 105780.


  • Baumann, A., & Wohlrabe, K. (2020a). Where have all the working papers gone? Evidence from four major economics working paper series. Scientometrics, 124, 2433–2441.

  • Baumann, A., & Wohlrabe, K. (2020b). Where have all the working papers gone? Evidence from four major economics working paper series. CESifo Working Paper No. 8328.

  • Bloch, C., & Schneider, J. W. (2016). Performance-based funding models and researcher behavior: An analysis of the influence of the Norwegian Publication Indicator at the individual level. Research Evaluation, 25(4), 371–382.


  • Brown, A. J., & Zimmermann, K. F. (2017). Three decades of publishing research in population economics. Journal of Population Economics, 30(1), 11–27.


  • Bryce, C., Dowling, M., & Lucey, B. (2020). The journal quality perception gap. Research Policy, 49(5), 103957.


  • Butler, L. (2003). Explaining Australia’s increased share of ISI publications—the effects of a funding formula based on publication counts. Research policy, 32(1), 143–155.


  • Drivas, K., & Kremmydas, D. (2020). The Matthew effect of a journal’s ranking. Research Policy, 49(4), 103951.


  • García-Suaza, A., Otero, J., & Winkelmann, R. (2020). Predicting early career productivity of PhD economists: Does advisor-match matter? Scientometrics, 122(1), 429–449.


  • Heckman, J. J., & Moktan, S. (2020). Publishing and promotion in economics: the tyranny of the top five. Journal of Economic Literature, 58(2), 419–470.


  • Hicks, D. (2012). Performance-based university research funding systems. Research policy, 41(2), 251–261.


  • Korytkowski, P., & Kulczycki, E. (2019). Examining how country-level science policy shapes publication patterns: the case of Poland. Scientometrics, 119(3), 1519–1543.


  • Larivière, V., Sugimoto, C. R., Macaluso, B., Milojević, S., Cronin, B., & Thelwall, M. (2014). arXiv E-prints and the journal of record: An analysis of roles and relationships. Journal of the Association for Information Science and Technology, 65(6), 1157–1169.


  • Mingers, J., & Yang, L. (2017). Evaluating journal quality: A review of journal citation indicators and ranking in business and management. European Journal of Operational Research, 257(1), 323–337.


  • Müller, R., & de Rijcke, S. (2017). Exploring the epistemic impacts of academic performance indicators in the life sciences. Research Evaluation, 26(3), 157–168.


  • Salter, A., Salandra, R., & Walker, J. (2017). Exploring preferences for impact versus publications among UK business and management academics. Research Policy, 46(10), 1769–1782.


  • Walker, J. T., Fenton, E., Salter, A., & Salandra, R. (2019). What influences business academics’ use of the Association of Business Schools (ABS) list? Evidence from a survey of UK academics. British Journal of Management, 30(3), 730–747.


  • Wohlrabe, K., & Bürgi, C. (2020). Do working papers increase journal citations? Evidence from the top 5 journals in economics. Applied Economics Letters.

  • Wohlrabe, K., & Gralka, S. (2020). Using archetypoid analysis to classify institutions and faculties of economics. Scientometrics, 123, 159–179.


  • Zacharewicz, T., Lepori, B., Reale, E., & Jonkers, K. (2019). Performance-based research funding in EU Member States—a comparative assessment. Science and Public Policy, 46(1), 105–115.




The authors would like to thank the editor Wolfgang Glänzel, two anonymous referees, Tho Pham, Wojciech Charemza and participants at INE PAN seminar and CFE-CMStatistics 2020 conference for their valuable comments and suggestions.


Corresponding author

Correspondence to Piotr Śpiewanowski.



Data collection

To select the sample for the study, we merged data from two large publication databases, Econlit and IDEAS/RePEc.


We have accessed the data from the Econlit database available to members of the American Economic Association. For each journal available in the database,Footnote 13 we have semi-automatically collected data on authorship for all research articles published between 2010 and 2018; in total 294,992 articles across 1,427 journals co-authored by 329,909 scholars.

The Econlit data does not provide authors’ countries, thus we assembled a comprehensive list of UK research institutions from various sources to identify authors with a UK affiliation. In total, we identified 21,552 authors that held a UK affiliation on at least one paper during the sample period.

It has to be noted that the Econlit database has a strong economics focus, thus authors from other subdisciplines included in the ABS ranking are likely to be underrepresented in our sample.


The IDEAS/RePEc database is one of the largest bibliographic economics-focused databases. It indexes around 3 million research items, most of which can be downloaded in full text. Each registered author has a profile which includes both published articles and working papers. It is also possible to identify the same paper placed in several archives, including published versions.

Merged data

The IDEAS/RePEc data, unfortunately, does not allow us to track authors’ affiliations over time. Thus, we resorted to the Econlit database, where we identified all authors that had at least one publication with a UK affiliation in the period 2010–2018. To ensure unique identifiers, we matched author names and publication titles from Econlit with the IDEAS/RePEc data using exact text matching (after removing non-alphanumeric characters and white spaces). As registration with IDEAS/RePEc is voluntary, we matched 1067 unique authors who have co-authored 11,557 papers uploaded to the repository between 2010 and 2017.
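The normalization behind this exact matching can be sketched as follows. The titles below are invented examples, and the lowercasing step is our own simplifying assumption about the cleaning, beyond the character and whitespace removal described above.

```python
import re

def normalize(text: str) -> str:
    """Strip non-alphanumeric characters (including whitespace) so that
    titles from the two databases compare exactly. Lowercasing is an
    assumed additional normalization step, not documented above."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

# Illustrative titles, one from each database.
econlit_title = "Journal Rankings, and Publication Strategy!"
repec_title = "journal rankings and publication strategy"

matched = normalize(econlit_title) == normalize(repec_title)
```

Applying `normalize` to both author names and titles before an exact join gives a conservative match: near-identical records match, while any substantive difference keeps them apart.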

Table 4 Change between ABS10 and ABS15 rankings: Detailed breakdown
Table 5 Summary statistics
Table 6 Relation between SJR indicator change and ABS ranking change
Table 7 Cross tabulation table: ABS 15 ranking and quartile of SJR indicator change
Table 8 Publication patterns and paper age before and after ranking change
Table 9 Replication of Tables 1 and 3 with author fixed effects



Cite this article

Śpiewanowski, P., Talavera, O. Journal rankings and publication strategy. Scientometrics 126, 3227–3242 (2021).



Keywords

  • Journal rankings
  • AJG/ABS list
  • Publication strategy

JEL Classification

  • L51
  • O38
  • I23