Introduction

The importance of academic evaluation is on the rise. Performance-based research funding systems use a range of different ranking methods as a policy tool, designed to provide an efficient and fair allocation of research funds or to assist in recruitment and promotion decisions (Bajo et al. 2020; Salter et al. 2017). In addition, individual authors refer to journal rankings to guide their citation choices (Drivas et al. 2020) or to encourage students to read specific papers (Walker et al. 2019).

Journal rankings, such as the Academic Journal Guide published by the Chartered Association of Business Schools in the UK (henceforth the ABS ranking) or the top-five indicator popular in economics, are based on a combination of bibliometric measures, academic tradition and expert opinion, giving rise to a significant degree of heterogeneity between rankings. Furthermore, these evaluations have several drawbacks. Journal rankings are typically coarse, so a small difference in the quality of the journals in which authors publish may have significant implications for their career prospects. Moreover, most rankings are not updated regularly. These features create a lag between improvements in journal quality and their recognition in the ranking.

Nonetheless, performance-based research funding systems built on such rankings have been implemented in a number of countries (see Hicks 2012 and Zacharewicz et al. 2019), inspiring an extensive literature on the impact of those schemes on academic scholarship. Existing studies broadly confirm the common-sense intuition that “you get what you incentivize”. Heckman et al. (2020) show that the excessive emphasis placed on an academic’s top five publications in economics in recruitment and tenure decisions incentivizes scholars to pursue follow-up and replication work at the expense of creative, pioneering research. Similarly, Butler (2003), studying the 1993 Australian reform that introduced undifferentiated publication counts, has shown that the number of publications in Australia increased significantly while their average quality decreased. Quantitative results consistent with this trend have been found in Norway (Bloch et al. 2016) and in Poland (Korytkowski et al. 2019). Hence, “thinking with indicators” has become a central aspect of research activities, as shown also by studies for the Netherlands, Austria and the UK (Müller et al. 2017; Salter et al. 2017).

The literature described above analyzes the total impact of a change in the evaluation system on research strategy; this response, however, can be split into two components. The first is the change in research strategy proper: adjusting research activities to fit a new set of evaluation parameters, including changes in research quality, the quantity of publications or the topical scope. The second is the change in publication strategy only, i.e. the choice of publication outlets, holding all other aspects of the research fixed. Our key contribution to the literature is that we isolate the pure impact of the change in the evaluation system on publication strategy.

This work examines the impact of the 2015 ABS ranking change on the publication outlet choices of UK-based authors in economics and finance. To isolate the change in publication strategy, we analyze only papers uploaded to the IDEAS/RePEc online repository in the years 2010–2014, a narrow window between two subsequent changes to the ABS journal ranking. We show that preprints of UK-based authors uploaded before the 2015 ABS ranking change are less likely to end up published in the downgraded journals. Our estimates also suggest that this decrease in the share of papers published in that journal category cannot be attributed to a decrease in journal quality.

This paper also contributes to the broader literature on the publication outcomes of papers uploaded to preprint repositories by focusing on all preprints in economics and finance uploaded to a single repository from a single country. This distinguishes our paper from earlier studies, which focused on working papers published in select working paper series (e.g. Bauman et al. 2020a), analyzed preprints of papers published in select journals (e.g. Brown et al. 2017; Wohlrabe et al. 2020), or studied complete repositories in disciplines other than economics and finance (e.g. Larivière et al. 2014).

Do national rankings matter?

We study the impact of the changes to the British ABS ranking on publication outcomes. The ABS ranking is widely used for assessing the reputation of both individual researchers and their institutions (e.g., Salter et al. 2017). Walker et al. (2019) carried out a large-scale survey of UK business academics, collecting responses from 8002 academics at 90 UK business and management schools. Their descriptive statistics suggest that 67% of researchers use the ABS ranking always or almost every time when preparing to submit. In addition, about 76% of academics use the ABS list at least occasionally to judge the research outputs of other academics, and 79% use it when assessing a promotion case.

In our analysis, we exploit the plausibly exogenous change in the ABS ranking in 2015 (published in February of that year) relative to the ranking’s previous version from 2010. As Fig. 1 shows, the ranking experienced a small revolution. In 2015, a new journal grade (4*) was added,Footnote 1 168 journals were upgraded, 42 were downgraded, 590 journals were added and only 579 maintained their grade. Overall, there was substantial grade inflation, so a downgrade in 2015 should be perceived as a more significant change than in previous ranking updates.Footnote 2 This makes the 2015 ABS ranking change ideal for our study.

Fig. 1 Change in ABS rankings in years 2009–2018. Note: The plot displays ranking changes for the 712 journals that remained in the ranking throughout all four editions.

As shown in Fig. 1, the ABS ranking also changed in 2010; however, that change was less pronounced. In total, 49 journals had their grade revised, 77 journals were added and one journal was removed. Another change came in 2018: although none of the journals already ranked had their grade revised, 177 journals were added to the list, significantly expanding authors’ choice of outlets.

To study the impact of the ranking change on publication outlet choices, we analyze data on academic papers from the upload of a working paper to the IDEAS/RePEc repository through to journal publication.Footnote 3 We keep only those papers with at least one UK-based author registered at the repository that were uploaded to IDEAS/RePEc during the 2010–2014 period, i.e. prior to the ABS ranking change in 2015. In total, 11,557 papers authored by 1,054 UK-based researchers satisfy all criteria, of which 6,294 papers (54%) were published in an ABS-ranked journal.Footnote 4 This is consistent with Bauman et al. (2020a).Footnote 5 We follow those papers through to 2017, the last year before the subsequent ABS ranking change. Our unbalanced panel contains 41,143 paper-year observations at annual frequency. We provide summary statistics in the Appendix, Table 5.
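
To make the data construction concrete, the following is a minimal sketch of how such a paper-year panel could be assembled. The input file and column names (upload_year, has_uk_author) are hypothetical illustrations, not the paper’s actual pipeline.

```python
# A minimal sketch of the sample construction, assuming a hypothetical
# file repec_papers.csv with one row per paper and the columns below.
import pandas as pd

papers = pd.read_csv("repec_papers.csv")  # hypothetical input

# Keep papers with at least one UK-based author, uploaded in 2010-2014,
# i.e. before the 2015 ABS ranking change.
sample = papers[
    papers["has_uk_author"] & papers["upload_year"].between(2010, 2014)
]

# Expand into an unbalanced paper-year panel, following each paper to 2017.
panel = (
    sample.assign(year=[list(range(y, 2018)) for y in sample["upload_year"]])
          .explode("year")
)
panel["year"] = panel["year"].astype(int)
panel["age"] = panel["year"] - panel["upload_year"]  # years since first upload
```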

We begin the analysis by calculating the share of papers published in a particular journal category (constant/downgraded/upgraded) conditional on paper age, which we interpret as the publication probability. Figure 2 shows that a paper’s age (i.e. the time since its first upload) is a key determinant of the publication probability. The figure also highlights the strong impact of journal downgrading: the share of papers published in the journals that were downgraded in 2015 drops significantly after the 2015 ABS ranking was published.Footnote 6
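
The conditional shares behind Fig. 2 can be computed along the following lines; `panel` is the hypothetical data frame sketched above, and the publication indicator columns are again assumed names.

```python
# Share of papers published in each journal category, conditional on age,
# expressed in percent (one row per age, matching the structure of Fig. 2).
shares = (
    panel.groupby("age")[
        ["published_constant", "published_downgraded", "published_upgraded"]
    ]
    .mean()
    .mul(100)
)
print(shares)
```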

Fig. 2 Paper age and publication patterns. Note: Age is the time since the paper’s first upload to the repository. The ABS ranking shown is the one valid for UK-based scholars in a given year, i.e. ABS10 covers the years 2010–2014 and ABS15 the years 2015–2017. Since our sample covers working papers uploaded in the years 2010–2014, there are no papers less than one year old in the years 2015–2017.

To explore the link between changes in journal ranking and publication outcomes, we employ the following linear probability model:

$$Outcome_{p,t}=\alpha +\beta_{1}\,ABS15_{t}+Controls_{p,t}+\lambda_{t}+\theta_{p}+\varepsilon_{p,t}$$

where \(Outcome_{p,t}\) is a scaled dummy variable indicating publication in a downgraded/upgraded/unchanged journal in year t. To facilitate the presentation of the results, the variable takes the value 100 if a paper is published in that year in the journal category of interest and zero otherwise, which allows us to interpret the regression coefficients as percentage points. \(ABS15_{t}\) is a zero-one dummy variable indicating the years after the ranking change, i.e. 2015, 2016 and 2017. Control variables include the number of versions that the paper has had.Footnote 7 \(\lambda_{t}\) and \(\theta_{p}\) represent paper-age and paper fixed effects,Footnote 8 respectively, where age is defined as the number of years since the paper was first posted on IDEAS/RePEc. The model is estimated with fixed-effects estimators, which allows us to control for unobservable paper and time characteristics.
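
For illustration, the specification above could be estimated along the following lines. This is a minimal sketch assuming the hypothetical panel constructed earlier, with assumed column names (paper_id, abs15, n_versions, published_downgraded); the use of linearmodels’ PanelOLS is our choice of tool, not necessarily the authors’ own setup.

```python
# A fixed-effects linear probability model on the hypothetical paper-year panel.
from linearmodels.panel import PanelOLS

panel["outcome"] = panel["published_downgraded"] * 100  # scale to pct. points
data = panel.set_index(["paper_id", "year"])            # entity-time index

# EntityEffects absorb the paper fixed effects (theta_p); the C(age) dummies
# play the role of the paper-age fixed effects (lambda_t) in the equation above.
mod = PanelOLS.from_formula(
    "outcome ~ abs15 + n_versions + C(age) + EntityEffects",
    data=data,
    drop_absorbed=True,  # drop any age dummy collinear with the paper effects
)
res = mod.fit(cov_type="clustered", cluster_entity=True)  # SEs clustered by paper
print(res.summary)
```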

Baseline results are reported in Table 1. The key coefficient of interest is that on ABS15, which tells us how the share of working papers published in a given journal category changed after 2015, controlling for a paper’s age and other covariates. The results suggest that since 2015, UK-based scholars have been less likely to publish in the downgraded journals. The share of papers published in that journal category declines by 0.17 percentage points, or around a quarter given the unconditional probability of 0.66%. We observe virtually no change for the two other journal categories. However, once we disaggregate the results, as shown in Table 2, we find that an increased share of papers is published in journals upgraded to ABS4 after the ranking change. The results remain qualitatively unchanged if we replace paper fixed effects with author fixed effects, as we show in the Appendix, Table 9.Footnote 9
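
For concreteness, the “around a quarter” figure follows directly from the two reported numbers:

$$\frac{0.17\ \text{pp}}{0.66\ \text{pp}}\approx 0.26,$$

i.e. the post-2015 decline removes roughly a quarter of the baseline probability of publishing in a downgraded journal in a given year.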

Table 1 Regression Results: Change in ABS Journal Ranking and Publication Outcomes
Table 2 Regression Results: Change in Journal Ranking and Publication Outcome, a detailed view

We also notice, perhaps unsurprisingly, that papers which go through more revisions are more likely to be published, as indicated by the coefficients on the variables counting the number of versions. This is the case particularly for those journals that were not downgraded.

When we review the ABS journal categories in more detail, as shown in Table 2, we find that journals downgraded to grades 3 or 2 suffer the most, while upgrades to grade 4 are associated with the largest increases in publication probability. This is to be expected, as publications in journals with grades 3 and 4 are typically crucial for research evaluation. We also observe a small, albeit insignificant, decrease in publication probability in journals upgraded to grade 4*. This grade was only created in 2015, and the decrease is likely a manifestation of increasing global competition for publication in top journals.

Do citation rankings matter?

One may argue that researchers are less likely to submit to the downgraded journals not because of the rankings but because of the decreasing quality of these journals, which causes the ranking decrease. However, this is evidently not the case. First, the relationship between changes in objective citation-based measures, such as the SCImago Journal Rank (SJR indicator),Footnote 10 and ABS ranking changes is weak: the change in the mean log values of the SJR indicator between the years 2010–2014 and 2015–2019 has been the same for the downgraded journals and those that retained their previous rank.Footnote 11

More importantly, as we show in Table 3, we find an inverse relationship between the change in journal quality and the share of papers published by UK-based scholars. We show this by re-estimating our regression model with alternative outcome variables representing the change in journal quality. More specifically, we analyze the change in publication probability in response to a change in journals’ weighted citations, as measured by the change in the average log SJR score between the years 2005–2009 and 2010–2014. All journals are categorized into three groups according to their position in the quality-change distribution: the top quartile, the bottom quartile, and the two middle quartiles. The new dependent variable is a scaled dummy variable that takes the value 100 if a journal belongs to a given category and zero otherwise.
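
A minimal sketch of this grouping step, again with hypothetical file and column names:

```python
# Categorize journals by the change in their average log SJR score between
# 2005-2009 and 2010-2014: top quartile, bottom quartile, two middle quartiles.
import numpy as np
import pandas as pd

journals = pd.read_csv("sjr_scores.csv")  # hypothetical input
journals["quality_change"] = (
    journals["log_sjr_2010_2014"] - journals["log_sjr_2005_2009"]
)

q25, q75 = journals["quality_change"].quantile([0.25, 0.75])
journals["quality_group"] = np.select(
    [journals["quality_change"] >= q75, journals["quality_change"] <= q25],
    ["top", "bottom"],
    default="middle",
)

# Scaled dummy outcomes: 100 if the journal falls in the group, else 0.
for g in ["top", "middle", "bottom"]:
    journals[f"in_{g}"] = (journals["quality_group"] == g).astype(int) * 100
```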

Table 3 Regression Results: Change in Journal Ranking and Publication Outcome

The results reported in Table 3 show that, for UK-based authors, the share of papers published in journals that improved their quality decreases after 2015, while it increases for journals that experienced a decrease in their SJR indicator, although the latter change is not statistically significant.

A growing SJR score implies growing recognition, and hence growing competition globally, while the pay-offs for UK-based scholars remain unchanged. Note that the measured change in journal quality is distinct from journal quality itself; in fact, as we show in the Appendix, Table 7, the change in journal quality occurs almost uniformly across all ABS journal grades.

Unfortunately, our data do not allow us to identify whether the observed phenomenon is the outcome of changes in rejection rates, which are likely to be associated with changes in citation rankings, or whether the results are driven by UK-based scholars avoiding journals that become more competitive globally.

One may fear a degree of reverse causality in our setting. However, the potential bias works against our findings and can only strengthen our results: being downgraded discourages UK-based researchers from submitting, which decreases competition and ultimately increases the acceptance probability, partially offsetting the drop we measure. Similarly, we expect a journal upgrade to increase the number of submissions, ultimately decreasing the acceptance probability. The size of this bias is, however, limited by the fact that UK-based authors contribute only 14.6% of publications in economics.Footnote 12

Our results may also be attenuated by the perceived gap between subjective rankings and the actual ranking, as recently found by Bryce et al. (2020). The observed impact of the ranking change is also unlikely to be homogeneous: Walker et al. (2019) find in their survey that reliance on the ABS ranking differs across seniority groups and universities. We leave the quantification of these effects to future studies, however.

Conclusions

Our study confirms that journal rankings are an important tool in shaping publication strategies. However, such evaluation frameworks are often country-specific and require more frequent and objective updates. While UK-based authors respond to changes in journal rankings by directing their papers away from the downgraded journals, they also publish more frequently in journals upgraded to the ABS4 category. Overall, however, they are less likely to publish in journals with a fast-growing SJR score.

The ABS ranking affects not only UK-based scholars but also institutions abroad that frequently rely on it. Thus, a decision to downgrade a journal has the potential to decrease the number of submitted papers, ultimately affecting the journal’s overall quality.