Journal rankings and publication strategy

We study the impact of journal ranking systems on publication outlet choice. We investigate the publication behavior of UK-based scholars registered on IDEAS/RePEc and analyze the publication outcomes of the academic work they upload to the repository. Our estimates suggest that authors strategically choose outlets to maximize their publication scores. Our identification strategy exploits the change in the British ABS journal ranking in 2015. Working papers written before the 2015 ABS ranking change are significantly less likely to be published in ex-post downgraded journals, and the effect cannot be attributed to an overall change in journal quality.


Introduction
The importance of academic evaluation is on the rise. Performance-based research funding systems use a range of ranking methods as a policy tool, designed to provide an efficient and fair allocation of research funds or to assist in recruitment and promotion decisions (Bajo et al. 2020; Salter et al. 2017). In addition, individual authors refer to journal rankings to guide their citation choices (Drivas et al. 2020) or to encourage students to read specific papers (Walker et al. 2019).
Journal rankings, such as the Academic Journal Guide published by the Chartered Association of Business Schools in the UK (henceforth the ABS ranking) or the top-five indicator popular in economics, are based on a combination of bibliometric measures, academic tradition and expert opinion, giving rise to a significant degree of heterogeneity between rankings. Furthermore, these evaluations have several drawbacks. Journal rankings are typically coarse, so a small difference in the quality of the journals in which authors publish may have significant implications for their career prospects. Moreover, most rankings are not regularly updated. These features create a lag between changes in journal quality and their reflection in the ranking.
Nonetheless, performance-based research funding systems based on such rankings have been implemented in a number of countries (see Hicks 2012 and Zacharewicz et al. 2019), inspiring an extensive literature on the impact of those schemes on academic scholarship. Existing studies broadly confirm the common-sense intuition that "you get what you incentivize". Heckman et al. (2020) show that the excessive emphasis placed on an academic's top-five publications in economics in recruitment and tenure decisions incentivizes scholars to pursue follow-up and replication work at the expense of creative, pioneering research. Similarly, Butler (2003), in a study of the 1993 Australian reform that introduced undifferentiated publication counts, has shown that the number of publications in Australia significantly increased while their quality significantly decreased. Quantitative results consistent with this trend have been found in Norway (Bloch et al. 2016) and in Poland (Korytkowski et al. 2019). Hence, "thinking with indicators" has become a central aspect of research activities, as shown also by studies for the Netherlands, Austria and the UK (Müller et al. 2017; Salter et al. 2017).
The literature described above analyzes the total impact of a change in evaluation systems on research strategy. This response, however, can be split into two components. The first is the change in research strategy: adjusting research activities to fit a new set of evaluation parameters, including changes in research quality, the quantity of publications or the topical scope. The second is the change in publication strategy alone, i.e. the choice of publication outlets keeping all other aspects of research quality fixed. Our key contribution to the literature is that we isolate the pure impact of the change in evaluation system on the change in publication strategy.
This work examines the impact of the 2015 ABS ranking change on the publication outlet choices of UK-based authors in economics and finance. To isolate the change in publication strategy, we analyze papers uploaded to the IDEAS/RePEc online repository in the years 2010-2014, a narrow window between two subsequent changes to the ABS journal ranking. We show that preprints of UK-based authors uploaded before the 2015 ABS ranking change are less likely to end up being published in the downgraded journals. Our estimates also suggest that this decrease in the share of papers published in that journal category cannot be attributed to a decrease in journal quality. This paper also contributes to the broader literature on publication outcomes of papers uploaded to preprint repositories by focusing on all preprints in economics and finance uploaded to a single repository from a single country. This distinguishes our paper from earlier studies, which focused on working papers published in selected working paper series (e.g. Bauman et al. 2020a), analyzed preprints of papers published in selected journals (e.g. Brown et al. 2017), or studied complete repositories in disciplines other than economics and finance (e.g. Larivière et al. 2014).

Do national rankings matter?
We study the impact of the changes to the British ABS ranking on publication outcomes. The ABS ranking is widely used for assessing the reputation of both individual researchers and their institutions (e.g., Salter et al. 2017). Walker et al. (2019) carried out a large-scale survey of UK business academics, collecting responses from 8002 academics from 90 UK business and management schools. Basic descriptive statistics suggest that 67% of researchers use the ABS ranking always or almost every time when preparing a submission. In addition, about 76% of academics at least occasionally use the ABS list to judge the research outputs of other academics, and 79% do so when assessing a promotion case.
In our analysis, we exploit the plausibly exogenous change in the ABS ranking in 2015 (published in February of that year) relative to the ranking's previous version from 2010. As we can see in Fig. 1, the ranking experienced a small revolution. In 2015, a new journal grade (4*) was added, 168 journals were upgraded, 42 were downgraded, 590 journals were added and only 579 maintained their grade. Overall, there was substantial grade inflation, due to which a ranking decrease in 2015 should be perceived as a more significant change than in previous ranking updates. This makes the 2015 ABS ranking change ideal for our study.
As shown in Fig. 1, the ABS ranking also changed in 2010; however, that change was less pronounced: in total, 49 journals had their ranking revised, 77 journals were added and one journal was removed. Another change came in 2018, and although no already-ranked journal had its ranking revised, 177 journals were added to the list, which significantly expanded the authors' choice.
To study the impact of the ranking change on publication outlet choices, we analyze data on academic papers from the upload of a working paper to the IDEAS/RePEc repository through to journal publication. In our sample, we keep only those papers with at least one UK-based author registered at the repository that were uploaded to IDEAS/RePEc during the 2010-2014 period, i.e. prior to the ABS ranking change in 2015. 11,557 papers authored by 1054 UK-based researchers satisfy all criteria, of which 6,294 papers (54%) were eventually published in a journal. We follow those papers through to 2017, i.e. the last year before the subsequent ABS ranking change. In total, our unbalanced panel has 41,143 observations at annual frequency. We provide summary statistics of our data in the Appendix.

We begin the analysis by calculating the share of papers published in a particular journal category (constant/downgraded/upgraded) conditional on paper age, which we interpret as the publication probability. Figure 2 shows that a paper's age (i.e. the time since the first upload) is a key determinant of the publication probability. The figure also highlights the strong impact of journal downgrading: the share of papers published in journals that were downgraded in 2015 drops significantly after the 2015 ABS ranking was published.

To explore the link between changes in journal ranking and publication outcomes, we employ the following linear probability model:

Outcome_pt = β ABS15_t + γ Controls_pt + t + p + ε_pt

where Outcome_pt is a scaled dummy variable indicating publication in a downgraded/upgraded/unchanged journal in year t. To facilitate the presentation of the results, we set the variable to take the value 100 if a paper is published in that year in the journal category of interest and zero otherwise, which allows us to interpret the regression coefficients as percentage points. ABS15 is a dummy variable (on a zero-one scale) indicating the years after the ranking change, i.e. 2015, 2016 and 2017.
Control variables include the number of versions that the paper has had. The terms t and p represent paper age and paper fixed effects, respectively, where age is defined as the number of years since the paper was first posted on IDEAS/RePEc. The model is estimated using fixed-effects estimators, which allows us to control for unobservable paper and time characteristics.
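To fix ideas, the within (fixed-effects) transformation behind this estimator can be sketched in a few lines of Python. The function, the toy panel and all names below are illustrative and not from the paper; the sketch uses a single regressor (the ABS15 dummy) and paper fixed effects only, omitting the age effects and version controls of the full specification.

```python
from collections import defaultdict

def within_fe_slope(rows):
    """One-regressor fixed-effects (within) estimator.

    rows: list of (paper_id, x, y) observations. Demeans x and y
    within each paper, then computes the pooled OLS slope, so the
    coefficient is identified from within-paper variation only."""
    groups = defaultdict(list)
    for pid, x, y in rows:
        groups[pid].append((x, y))
    num = den = 0.0
    for obs in groups.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Toy panel: y is 100 if the paper appeared in a downgraded journal
# that year and 0 otherwise; x is the ABS15 dummy (0 pre-2015, 1 after).
panel = [
    ("p1", 0, 100), ("p1", 1, 0),  # published pre-2015 in a downgraded journal
    ("p2", 0, 0),   ("p2", 1, 0),  # never published in a downgraded journal
]
print(within_fe_slope(panel))  # -50.0
```

Because the outcome is scaled to 100, the slope reads directly in percentage points: in this toy panel, publication probability in downgraded journals falls by 50 points after the ranking change.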
Baseline results are reported in Table 1. The key coefficient of interest is on ABS15, which tells us how the share of working papers published in a given journal category changed after 2015, controlling for a paper's age and other covariates. The results suggest that since 2015, UK-based scholars have been less likely to publish in the downgraded journals: the share of papers published in that journal category declines by 0.17 percentage points, or around a quarter given the unconditional probability of 0.66%. We observe virtually no change for the two other journal categories. However, once we disaggregate the results, as shown in Table 2, we find that an increased share of papers is published in journals upgraded to ABS 4 after the ranking change. The results remain qualitatively unchanged if we replace paper fixed effects with author fixed effects, as we show in Appendix Table 9. We also note, perhaps unsurprisingly, that papers that go through more revisions are more likely to be published, as indicated by the coefficients on the version-count variables; this is the case particularly for journals that were not downgraded.
When we review the ABS journal categories in more detail, as shown in Table 2, we find that journals downgraded to grades 3 or 2 suffer the most, while upgrades to grade 4 are associated with the highest increases in publication probability. This is to be expected, as publications in journals with grades 3 and 4 are typically crucial for research evaluation. We also observe a small, albeit insignificant, decrease in publication probability in journals upgraded to grade 4*. This grade was only created in 2015, and the decrease is a likely manifestation of increasing global competition for publication in top journals.

Do citation rankings matter?
One may argue that researchers are less likely to submit to the downgraded journals not due to the rankings themselves, but due to a decline in the quality of these journals, which caused the ranking decrease. However, this is evidently not the case. First, the relationship between changes in objective citation-based measures, such as the SCImago Journal Rank (SJR) indicator, and ABS ranking changes is weak: the change in mean log SJR values between the years 2010-2014 and 2005-2009 was the same for the downgraded journals and for those that retained their previous rank. More importantly, as we show in Table 3, we find an inverse relationship between the change in journal quality and the share of papers published by UK-based scholars. We show this by re-estimating our regression model using alternative outcome variables representing the change in journal quality.

Table 1 Regression results: change in ABS journal ranking and publication outcomes. Notes: Dependent variables are publications in journals in the category described in the column title. Constant refers to journals that retained their ABS ranking between 2010 and 2015, while Downgraded and Upgraded refer to journals whose rankings decreased and increased, respectively. The variable takes the value 100 if a paper is published in a journal of that category and zero otherwise. ABS15 is a dummy variable that takes the value one in years in which the 2015 ABS ranking was valid for UK-based scholars, i.e. 2015, 2016 and 2017. Variables 2nd-4th and subsequent version are dummy variables representing the number of working paper versions uploaded to the repository; we cap the number of versions at 4. All specifications include paper age and paper fixed effects and the number-of-versions covariates. All standard errors are clustered at the age level. *** p < 0.001, ** p < 0.01, * p < 0.05.
More specifically, we analyze the change in publication probability in response to a change in journals' weighted citations, as measured by the change in average log SJR score between the years 2005-2009 and 2010-2014. All journals are categorized into three groups according to their position in the quality-change distribution: the top quartile, the bottom quartile and the two middle quartiles. The new dependent variable is a scaled dummy variable that takes the value 100 if a paper is published in a journal of a given category and zero otherwise.
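The grouping step can be sketched as follows; the function name, the hypothetical journals and their SJR values are ours, chosen only to illustrate the quartile split described above.

```python
import math

def sjr_change_groups(journals):
    """journals maps name -> (mean SJR 2005-2009, mean SJR 2010-2014).

    Journals are ranked by the change in mean log SJR and split into
    the bottom quartile, the two middle quartiles and the top quartile."""
    change = {j: math.log(late) - math.log(early)
              for j, (early, late) in journals.items()}
    ranked = sorted(change, key=change.get)   # ascending by quality change
    n = len(ranked)
    groups = {}
    for i, name in enumerate(ranked):
        if i < n / 4:
            groups[name] = "bottom 25%"
        elif i < 3 * n / 4:
            groups[name] = "middle 50%"
        else:
            groups[name] = "top 25%"
    return groups

# Hypothetical journals: A's weighted citations halved, D's doubled.
groups = sjr_change_groups({"A": (1.0, 0.5), "B": (1.0, 1.0),
                            "C": (1.0, 1.1), "D": (1.0, 2.0)})
print(groups["A"], groups["D"])  # bottom 25% top 25%
```

The scaled dependent variable is then built exactly as before: 100 if the publishing journal falls in the category of interest in that year, zero otherwise.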

Results reported in Table 3 show that, for UK-based authors, the share of papers published in journals that improved in quality decreases after 2015, while it increases for journals that experienced a decrease in their SJR indicators, although the latter change is not statistically significant. A growing SJR score implies growing recognition, and hence competition, globally, while the pay-offs for UK-based scholars remain unchanged. Note that the measured change in journal quality is distinct from journal quality itself; in fact, as we show in Appendix Table 7, the change in journal quality occurs almost uniformly across all ABS journal ranks.

Table 3 Notes: Dependent variables are publications in journals in the category described in the column title. Bottom 25% refers to the bottom quartile of the journal distribution ordered by the change in mean log SJR score between the years 2010-2014 and 2005-2009; Middle 50% refers to the two middle quartiles of that distribution and Top 25% to the top quartile. The variable takes the value 100 if a paper is published in a journal of that category and zero otherwise. ABS15 is a dummy variable that takes the value one in years in which the 2015 ABS ranking was valid for UK-based scholars, i.e. 2015, 2016 and 2017. All specifications include paper age and paper fixed effects and the number-of-versions covariates. All standard errors are clustered at the age level. *** p < 0.001, ** p < 0.01, * p < 0.05.
Unfortunately, our data do not allow us to identify whether the observed phenomenon is the outcome of changes in rejection rates that are likely to be associated with changes in citation rankings, or whether the results are driven by UK-based scholars avoiding journals that become more competitive globally.
One may fear reverse causality in our setting. However, any such bias would only strengthen our results. Being downgraded discourages UK-based researchers from submitting, thus decreasing competition and ultimately increasing acceptance probability; similarly, we expect a journal upgrade to increase the number of submissions, ultimately decreasing acceptance probability. The size of this bias is, however, limited by the fact that UK-based authors contribute 14.6% of publications in economics. Our results may also be mitigated by the perceived gap between subjective rankings and the actual ranking, as recently found by Bryce et al. (2020). The observed impact of the ranking change is also unlikely to be homogeneous: Walker et al. (2019) find in a survey that reliance on the ABS ranking differs across seniority groups and universities. We leave, however, the quantification of these effects for future studies.

Conclusions
Our study confirms that journal rankings are an important tool in shaping publication policy. However, this evaluation framework is often country-specific, and it would benefit from more frequent and objective updates. While UK authors respond to changes in journal rankings by directing their papers away from the downgraded journals, they also publish more frequently in journals upgraded to the ABS 4 category. Overall, however, they are less likely to publish in journals with a fast-growing SJR score.
The ABS ranking affects not only UK-based scholars but also institutions abroad that frequently rely on it. Thus, a decision to downgrade a journal has the potential to decrease the number of submitted papers, ultimately affecting overall journal quality.

Econlit
We accessed the data from the Econlit database available to members of the American Economic Association. For each journal available in the database, we semi-automatically collected data on authorship for all research articles published between 2010 and 2018: in total, 294,992 articles across 1,427 journals, co-authored by 329,909 scholars.
The Econlit data does not provide authors' countries, so we assembled a comprehensive list of UK research institutions from various sources to identify authors with a UK affiliation. In total, we identified 21,552 authors who held a UK affiliation on at least one paper during the sample period.
It has to be noted that the Econlit database has a strong economics focus, thus authors from other subdisciplines included in the ABS ranking are likely to be underrepresented in our sample.

Ideas/RePEc
The Ideas/RePEc database is one of the largest economics-focused bibliographic databases. It indexes around 3 million research items, most of which can be downloaded in full text. Each registered author has a profile that includes both published articles and working papers. It is also possible to identify the same paper placed in several archives, including published versions.

Merged data
The Ideas/RePEc data, unfortunately, does not allow us to track authors' affiliations over time. Thus, we resorted to the Econlit database, where we identified all authors with at least one publication with a UK affiliation in the period 2010-2018. To ensure unique identifiers, we matched author names and publication titles from Econlit with the Ideas/RePEc data using exact text matching (after removing non-alphanumeric characters and white space). As registration with Ideas/RePEc is voluntary, we matched 1067 unique authors who co-authored 11,557 papers uploaded to the repository between 2010 and 2017.

Table 9 Replication of Tables 1 and 3 with author fixed effects. Notes: Dependent variables are publications in journals in the category described in the column title. Constant refers to journals that retained their ABS ranking between 2010 and 2015, while Downgraded and Upgraded refer to journals whose rankings decreased and increased, respectively. The variable takes the value 100 if a paper is published in a journal of that category and zero otherwise. ABS15 is a dummy variable that takes the value one in years in which the 2015 ABS ranking was valid for UK-based scholars, i.e. 2015, 2016 and 2017. Variables 2nd-4th and subsequent version are dummy variables representing the number of working paper versions uploaded to the repository; we cap the number of versions at 4. All specifications include paper age and author fixed effects and the number-of-versions covariates. For papers with multiple authors, each observation is replicated and weighted accordingly. All standard errors are clustered at the age level. *** p < 0.001, ** p < 0.01, * p < 0.05.
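The exact text matching used to link the two databases can be sketched as follows. The records are hypothetical, and the lowercasing step is our addition (the paper states only that non-alphanumeric characters and white space are removed).

```python
import re

def normalize(text):
    """Strip whitespace and all other non-alphanumeric characters so
    that formatting differences do not block an exact match.
    Lowercasing is an assumption on our part, not stated in the paper."""
    return re.sub(r"[^0-9a-z]", "", text.lower())

# Hypothetical records for the same author/paper in the two databases:
econlit_key = normalize("Smith, J.") + normalize("Journal Rankings & Publication Strategy!")
repec_key = normalize("smith j") + normalize("Journal  Rankings Publication Strategy")
print(econlit_key == repec_key)  # True
```

Concatenating the normalized author name and title into a single key makes the match exact rather than fuzzy, which avoids false positives at the cost of missing records with genuine spelling differences.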