Introduction

Bibliometrics has become an omnipresent aspect of academic life. It is used to assess the impact or quality of individual papers, researchers, scientific journals, and academic institutions. Before the widespread use of citation databases, peer review was the only formal method by which the quality of research could be evaluated. Bibliometrics, by contrast, is an easily available tool that consumes little time and effort. While peer review remains crucial for the mutual evaluation of researchers’ products (Nicholas et al., 2015), bibliometrics has become essential for the advancement of academic careers. Since science is highly competitive, high-scoring academics are induced to display their scores, further legitimizing the importance of bibliometrics (Wouters, 2014). Pressure to perform well on these scores already burdens junior academics (manifesting, e.g., in students experiencing mental health problems; see Evans et al., 2018), for whom doctoral dissertations in the form of several journal articles rather than one monographic work have become the rule. Before 2016, the share of cumulative dissertations, consisting of a number of stand-alone articles intended for peer review, stood at 34% in law, economics, and the social sciences. For the 2019/2020 cohort, this share had increased to 40% (https://www.nacaps-datenportal.de/indikatoren/C1.html). Academics who do not yet hold an assistant professorship with tenure face even stronger incentives to obtain high bibliometric scores, as their careers crucially depend on publications in top journals (Akerlof, 2020; Heckman & Moktan, 2020; Osterloh & Frey, 2020).

Since publications are now the hard currency of an academic career, the type and degree of publication pressure should significantly impact publication behavior. This is best illustrated by the researchers with the least experience: doctoral students. When they start their academic journey, they may be familiar with the publication requirements but are often unfamiliar with the publication process. Countless accounts by former PhD students can be found online in the form of blog posts or articles. Nature picked up on many negative consequences of publication pressure and discussed them in the focus “PhDs under publication pressure” in Nature Human Behaviour.Footnote 1 Overall, pressure is argued to lead to a prioritization of quantity over quality. Doctoral students under publication pressure are discouraged from pursuing projects deemed to have a low probability of publication. This includes time-intensive projects (Kiai, 2019) such as interdisciplinary (Collyer, 2019) or replication (Stoevenbelt, 2019) projects, as well as innovative (Dietze et al., 2019) or regionally and non-western focused (Mulimani, 2019) projects. When pressure mounts, unethical behavior is incentivized. Former doctoral students argue that pressure incentivizes p-hacking to avoid null results (Head et al., 2015), HARKing to make hypotheses fit the results (Kerr, 1998), slicing work into multiple smaller publishable parts (Neill, 2008), padding the cumulative dissertation with co-authored work, and exploiting degrees of freedom in data collection and analysis (Hobson, 2019; Kiai, 2019; Moradi, 2019; Yeung, 2019). In recent years it has become more common to see publications featuring shared first authorship (Hu, 2009; Lapidow & Scudder, 2019). This is especially the case for PhD students, who have to comply with first-authorship requirements. Competition among peers discourages researchers in general from sharing their research ideas (Landgrave, 2019). Focusing on article production also devalues other important skills, such as teaching and mentoring, research communication, or skills more suited for work in the private sector (Isaacson, 2019; Obradović, 2019). All these factors contribute to degrading work satisfaction during the PhD. Doctoral students are six times as likely to experience depression or anxiety as the general population (Evans et al., 2018). In the worst case, when facing failure to publish, doctoral students may even quit academia (Wu et al., 2019). These problems are partly or entirely connected to publication pressure, and it is at least questionable whether the current system promotes proper self-selection. Compared to other entry-level positions, young people entering academia often have inaccurate expectations of their new job (Ganguli et al., 2022). Unfamiliarity with the publication process is thus a stressor particular to junior scientists.

Our paper aims to describe how researchers in economics and business navigate the publication process. In a survey of academics in Germany, Austria, and the German-speaking part of Switzerland, we collected data on the initial submission to a journal and the final publication of the last successful publication effort. Per respondent, we therefore obtain the journal of initial submission and the journal of publication of the most recent publication. These data enable us to analyze and compare the strategic behavior of academics with different academic positions and levels of publication pressure. The study focuses on strategic behavior with respect to the ranking of the respective journal. Our analysis highlights, in particular, how the behavior of junior and senior scientists differs. The generated insights contribute to the continuing discussion of better practices in the design of doctoral studies, strategic responses of academics, and publishing behavior in general.

In the following section, “Literature review”, developments in bibliometrics are outlined and the literature on the interaction between rankings and author behavior is reviewed. In the “Hypotheses” section, the hypotheses underlying our analysis are derived. The “Data” section provides a summary of the data, including a description of the survey, the peculiarities associated with using rankings based on bibliometrics, and the methodological approach. The “Results” section presents and discusses the estimation results, whose robustness is examined in the “Robustness” section. The “Discussion” section discusses proposals to improve the circumstances for young academics and publication strategies in general, while the “Conclusion” section concludes.

Literature review

Numerous approaches exist for measuring the impact of research. As simple measures are often subject to methodological and practical critique, ever more complex metrics have been introduced (Mingers & Leydesdorff, 2015). An illustrative example of this trend is the h-index, which addresses flaws in using citation counts or impact factors. A researcher’s h-index is the largest number h such that h of their papers have at least h citations each. It thus adds a dimension to the one-dimensional citation count or impact factor but is at the same time more complex to define. Most research output nowadays is consumed online, so it has become important to measure online activity, such as the number of clicks, time spent on websites (which approximates the extent of reading), or online reactions. While these measures are not used in the analysis in the present paper, they create incentive dynamics for academics similar to those of the metrics we use. As online metrics gain popularity and importance, rational researchers aim to boost and display their performance as captured by such metrics. Because of their simplicity, the impact factor and the h-index remain widely used among researchers in economics and business, and governing bodies base their decisions about grants and promotions heavily on these metrics. Hence, when researchers adjust their behavior because of citation metrics, these metrics are the ones most likely to be on their minds.
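
For readers less familiar with the metric, a minimal sketch of the h-index definition given above (Python, purely for illustration; not part of our analysis):

```python
# Minimal sketch: the h-index is the largest h such that h of a
# researcher's papers have at least h citations each.
def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still has at least `rank` citations
        else:
            break
    return h

# Example: five papers with these citation counts yield h = 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```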

While measurements are introduced to satisfy the demand for accountability and transparency, they can also have unintended consequences. Espeland and Sauder (2007) use the term reactivity to describe the phenomenon of people changing their behavior in response to being observed or measured. A vast literature documents the behavioral implications of the incentives created by rankings and scientometrics; an overview is provided in Binswanger (2015). As metrics in science have become important, researchers are inclined to cite and choose topics strategically (Frey, 2003; Horrobin, 1996), for instance by maximizing publications per research idea (Weingart, 2005) and increasing the number of co-authors per manuscript (Card & della Vigna, 2013; Hamermesh, 2013; Wuchty et al., 2007). The literature provides substantial support that rankings produce self-fulfilling prophecies (Espeland & Sauder, 2007; Osterloh & Frey, 2020). The empirical literature on the reaction of researchers to bibliometrics is smaller. In contrast to this study, data are usually drawn from public databases rather than collected by directly surveying scholars. Michels and Schmoch (2014) find that, to attain higher citation rates, researchers shifted submissions from specialized to broader journals and to journals based in the United States with high impact factors. Some case studies analyze the impact of research evaluation programs in different countries. Many authors are concerned that research rankings homogenize research output and displace heterodox approaches (Aistleitner et al., 2018; Bloch, 2010; Corsi et al., 2010; Mingers & Willmott, 2013). Butler (2003) documents that, in response to the design of Australia’s research evaluation policy, researchers at the two universities analyzed increased their publication activity, but in lower-impact journals; the measurements used appear to have changed their publication habits. For the waves of the United Kingdom’s Research Assessment Exercise, Moed (2008) finds that when evaluation focuses on citation counts, researchers increase article production, and when it focuses on quality rather than quantity, they publish more articles in high-impact journals.

The existing literature focuses on the observable published work of researchers. The number of authors on a paper, networks of authors, and the evolution of publication output or outlets can be obtained from databases; some measures only require counting the authors on a publication, while others, like network analysis, require more complex methods. Our survey and analysis contribute to the literature because we collected data on the journal of initial submission and the number of submissions of the most recent successful publication effort. Such data cannot be obtained from public databases but only by surveying authors individually, and hence has so far remained hidden behind the curtains of academic publishing.

Hypotheses

This section presents and motivates the hypotheses tested in the analysis. We aim to analyze to which quality (ranking) of journals economics and business researchers initially submit, in which journals they ultimately publish, how many submissions lie between these two actions, and to what degree publication pressure influences these outcomes.

The researchers in our sample are assumed to behave according to the rational choice approach, in our case by maximizing their career opportunities. They do so by aiming at the highest-ranked publication outlet possible, depending on the subjectively assessed quality of their manuscript and the subjectively felt types of pressure, including time constraints. For some researchers this assumption may be stronger than for others. There are certainly idealistic researchers unwilling to engage in careerism, especially among more senior researchers who enjoy high job security. However, academics from doctoral students up to assistant professors without tenure are required to publish, preferably in top journals, to ensure a successful academic career. And while assistant professors with tenure and full professors enjoy high job security, they still need to publish in top journals to rise to the top or to be of interest to more prestigious employers.

When researchers initially submit their manuscript, they assess its quality and submit it to the highest-ranked journal with a reasonably high probability of acceptance. If the paper is rejected, they are assumed to re-assess the quality of their paper and consider lower-ranked journals for the next submission. We define the ranking difference as follows:

$$\text{Ranking Difference} = \text{Ranking}_{\text{Initial Journal}} - \text{Ranking}_{\text{Publication Journal}}$$

capturing the difference in ranking between the journal of initial submission and the journal of publication. The logic outlined above implies that researchers subsequently choose lower-ranked journals after their paper has been rejected. The more often a researcher has submitted a manuscript, the larger we expect the ranking difference to be.
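
For illustration, consider a manuscript first submitted to a journal with an SJR score of 2.44 and, after one rejection, published in a journal with an SJR score of 2.13 (two submissions in total, values of the kind reported in the Results section). By the definitions above, this yields

$$\text{Ranking Difference} = 2.44 - 2.13 = 0.31, \qquad \frac{0.31}{2} \approx 0.16 \text{ ranking points per submission.}$$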

Hypothesis 1

As the number of submissions increases, the ranking difference between the publication journal and the journal of initial submission is expected to increase.

We define two types of pressure: time pressure and quality pressure. The former describes a researcher’s need to produce a certain number of publications in a given time window, irrespective of the pedigree of the journals they are published in. The latter represents the need to produce publications in high-quality journals. While most academics face a combination of the two types, the dominant type of pressure depends on the academic position and has implications for publishing behavior. When facing only time pressure, researchers may submit to lower-ranked journals initially, and if rejected, they tend to move down the rankings more quickly for the next submissions. If, on the other hand, only top publications count, researchers need to submit to a top journal initially and, after rejection, try again at other top journals, which is associated with a small ranking difference from the initial submission.

Doctoral students usually face a limited time frame during which they are required to produce either a certain number of publications or manuscripts worthy of publication. For them, top publications are less relevant to the advancement of their careers, and thus they mainly face time pressure. Post-doctoral students already have a higher probability of pursuing a professorial position and thus ought to face more quality pressure than doctoral students; it is, however, difficult to assess which type of pressure dominates in this group. Assistant professors without tenure, in contrast, mainly face quality pressure: their probability of getting tenure crucially depends on top publications (Heckman & Moktan, 2020). Lastly, assistant professors with tenure and full professors face a generally lower degree of pressure than the other groups, consisting mainly of quality pressure. Based on these considerations, the following hypotheses will be tested:

Hypothesis 2

For doctoral students, time pressure dominates.

(a) They are expected to submit to lower-ranked journals initially, compared to researchers in more advanced academic positions for whom quality pressure dominates.

(b) The change in ranking per submission from the initial to the publishing journal is expected to be larger than for academics facing mostly quality pressure.

Hypothesis 3

Assistant professors without tenure face the highest degree of quality pressure.

(a) They are expected to initially submit to higher-ranked journals compared to all other academics.

(b) The change in ranking per submission from the initial journal to the journal they finally publish in is expected to be smaller than for all other academic positions.

Hypothesis 4

Assistant professors with tenure and full professors face less quality pressure than assistant professors without tenure and have only little time pressure.

(a) They are expected to initially submit to higher-ranked journals than doctoral students.

(b) The change in ranking per submission from the initial to the publishing journal is expected to be smaller than it is for doctoral students.

Data

This section briefly describes the survey used for our analysis, identifies the relevant variables, and describes the journal ranking data used.

Survey

Our analysis is based on a survey among business and economics scholars in Germany, Austria, and the German-speaking part of Switzerland, carried out between October and November 2021. In total, 26 universities were sampled, and 558 valid observations obtained.Footnote 2 An overview of the characteristics of the responses is provided in Tables 1 and 2.Footnote 3

Table 1 Description of the sample of responding researchers
Table 2 Academic positions among valid responses

Variables

The following variables are used in our analysis:

  • Initial and publication journal ranking: The ranking of the journal of initial submission and journal of publication for the most recent successful publication of each respondent.

  • Number of submissions: The number of submissions for this publication effort (including the initial and publishing submission).Footnote 4

  • Ranking difference and ranking difference per submission: The difference in the respective rankings as described in the previous section and the same number divided by the number of submissions, respectively (see the sketch after this list).

  • Journal strategy: The preference for either first writing a manuscript and then choosing a suitable journal, or first choosing a journal and then writing a suitable manuscript (opportunism).

  • Journal category: The preferred category of journals for initial submission. The survey covered four categories, which were recoded into two: one covering top and leading journals, the other containing lower-ranked journals.

  • Publication pressure: Subjectively felt publication pressure reported by the respondents.

  • Other information on the respondent: Gender (Male), discipline (Economics), respondents who filled out the survey in English and therefore are assumed to be foreign academics (English) and the academic position (Doctoral student, Post-Doc, Ass. Professor without tenure, Ass. Professor with tenure, Professor).
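
A minimal sketch of how the derived ranking variables can be constructed (Python/pandas; the column names and values are illustrative placeholders, not the actual coding of our survey):

```python
import pandas as pd

# Illustrative survey extract: SJR scores of the initial-submission and
# publication journals plus the total number of submissions per respondent.
df = pd.DataFrame({
    "sjr_initial": [2.44, 1.10, 3.80],
    "sjr_publication": [2.13, 1.10, 2.90],
    "n_submissions": [2, 1, 4],
})

# Ranking difference as defined in the Hypotheses section:
# initial-submission score minus publication score.
df["ranking_diff"] = df["sjr_initial"] - df["sjr_publication"]
# ... and the same difference spread over the number of submissions.
df["ranking_diff_per_sub"] = df["ranking_diff"] / df["n_submissions"]
print(df)
```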

For each respondent, data on the most recent successful publication effort was collected. Ideally, a survey would obtain as many publication efforts per researcher as possible so as to capture publishing behavior representative of that researcher. However, asking for more publication efforts would come at the expense of accuracy, unbiasedness of responses, and response rate. There are substantial challenges when respondents have to recollect publications that lie well in the past: since the whole publication process can be of extensive duration, an accurate recollection of each step is unlikely (Huisman & Smits, 2017). More importantly, researchers are more likely to remember memorable publication efforts, such as top publications or dissatisfying review processes, which would bias the sample. By asking for the most recent publication, an essentially random publication was surveyed for each researcher, which should yield a representative sample per academic position.

The distribution of the number of submissions before publication is given in Fig. 1. Anyone who reported having submitted only once must have been successful with the initial submission; the figure thus shows that a large share of respondents succeeded with their initial publication attempt. Since we are interested in strategic behavior after rejection, the main analysis focuses on the remaining ~55% of respondents, who had to choose a new publication outlet after rejection. Of this remaining sample, most respondents submitted between two and five times in total; only 27 scholars had to submit their last successful paper more than five times.

Fig. 1 Distribution of number of submissions

Journal ranking data

Journal ranking data were obtained from the Scimago website. They were merged with the survey data by adjusting the journal names reported in the survey so that they matched the names in the Scimago data. We matched the survey data to the Scimago Journal Rank (SJR) including all subject areas, thereby allowing economics and business scholars to have submitted to journals in other research areas; most submissions in our sample, however, were to journals in economics and business. When journal rankings across different disciplines are analyzed, the rankings are sometimes adjusted to account for differing citation behavior. We chose not to adjust our rankings, as business and economics are reasonably close disciplines and respondents from one discipline often submitted to journals of the other. Additionally, many economics journals are also listed in the business SJR list and vice versa. Some survey respondents submitted or published in journals that were not on the Scimago list. As the survey included many German speakers, the journals not included on Scimago were mostly lesser-known German journals. No ranking data are available for these journals, and they are thus excluded from the analysis.Footnote 5 The SJR ranking, the three-year impact factor (3y IF), and the Hirsch index (h-index) are used for the analysis.
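
The matching step can be sketched as follows (Python/pandas; the file names, column names, and normalization rule are assumptions for illustration only, not our actual files or procedure):

```python
import pandas as pd

# Hypothetical inputs: survey responses with free-text journal names and a
# Scimago export with official titles and SJR scores.
survey = pd.read_csv("survey.csv")      # columns: respondent_id, journal_reported
scimago = pd.read_csv("scimagojr.csv")  # columns: title, sjr

def normalize(name: str) -> str:
    # Harmonize spelling so reported names match Scimago titles.
    return name.strip().lower().replace("&", "and")

survey["title_key"] = survey["journal_reported"].map(normalize)
scimago["title_key"] = scimago["title"].map(normalize)

# A left join keeps unmatched journals visible (sjr = NaN); in our case these
# were mostly lesser-known German journals, which are excluded from the analysis.
merged = survey.merge(scimago[["title_key", "sjr"]], on="title_key", how="left")
```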

The impact factor is obtained by dividing the number of citations received by the works published in a scientific journal by the total number of works published in that journal over a given time frame. It is widely used among scientists, as its ranking fits the structure of highly relevant journals relatively well and it is easy to compute (Garfield, 2006). However, it has also been widely criticized, as it considers neither the importance of citing outlets nor self-citations and is not well suited for comparing disciplines with different citation practices (Kurmis, 2003; Postma, 2007). The h-index for a journal represents the number of articles that amassed at least h citations over a given time frame; we obtained these data through Scopus, where the time frame is set at three years. The h-index for scientific journals de-emphasizes the value of individual highly cited works and instead emphasizes journals that consistently publish highly cited articles (Bornmann & Daniel, 2007). The SCImago Journal Rank is a citation-based measure in which citations are weighted by the prestige of the citing source. It has been recommended as an alternative to impact factors (Falagas et al., 2008), even though it is not as popular as the impact factor or the h-index. Researchers are most likely to adjust their behavior with respect to metrics they are familiar with, which explains the metrics chosen for this exercise. The SJR ranking is chosen as the primary focus, as it best fits the relevant journals in a discipline. Since the SJR accounts for both citations and the importance of the citing source, it is positively correlated with both the 3y IF and the h-index. Table 3 provides the correlations between the SJR and the other metrics, separately for economics and business. In the upper regions of the SJR, the correlation with the impact factor is higher; the fact that this is reflected in our sample indicates that the SJR is a legitimate choice for the analysis. As the table indicates, initial submissions in our sample are more often to higher-ranked journals: the correlation of the SJR with the 3y IF in economics is lower among publication journals than among initial submissions. We argue that business and economics allow a meaningful comparison, as the two disciplines are close to each other.
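
Schematically, the three-year variant of the impact factor takes the form (one common convention; providers differ in the exact handling of the citation window):

$$\text{3y IF}_{t} = \frac{\text{citations received in year } t \text{ by items published in years } t-3 \text{ to } t-1}{\text{number of items published in years } t-3 \text{ to } t-1}$$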

Table 3 Correlations between SJR, three-year impact factor and h-index

Journal ranking data are heavily skewed: a few journals obtain very high scores and exhibit large ranking gaps to the median journal, while journals in the neighbourhood of the median are very close to it in terms of ranking. Figure 2 shows the strongly skewed distribution of the ranking data in our sample. (In addition, Fig. 3 in the Appendix displays the distribution of the ranking difference variable.)

Fig. 2 Distribution of ranking data

For all rankings, most submissions were to lower-ranked journals. Submissions to the best-ranked journals occurred somewhat more often for the journal of initial submission than for the publishing journal. The next subsection covers how the skewness of the data is addressed in the analysis.

Methodology

An ordinary least squares regression strategy was chosen for the analysis. Since the journal ranking data are skewed, the data are log-transformed whenever possible. The ranking difference variables span positive and negative values and can therefore not be log-transformed in a meaningful way. Skewness in this case is due to the few journals with very large ranking scores; by including the ranking score of the journal of initial submission as a regressor, we control for this skewness.
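
A minimal sketch of this estimation strategy (Python/statsmodels, on synthetic stand-in data; variable names and the data-generating process are placeholders rather than our actual coding):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Tiny synthetic stand-in for the survey data frame (placeholder names).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "sjr_initial": rng.lognormal(0.5, 0.6, n),
    "n_submissions": rng.integers(2, 6, n),
    "position": rng.choice(["phd", "postdoc", "assistant", "professor"], n),
})
df["ranking_diff"] = 0.3 * df["n_submissions"] + rng.normal(0, 0.5, n)

# Skewed ranking outcomes are log-transformed; the ranking difference spans
# negative values, so it stays in levels, with the initial journal's score
# included as a regressor to control for the skewness.
m_initial = smf.ols("np.log(sjr_initial) ~ C(position)", data=df).fit()
m_diff = smf.ols("ranking_diff ~ n_submissions + sjr_initial", data=df).fit()
print(m_initial.params, m_diff.params, sep="\n")
```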

Results

Table 4 presents the results of the regression of the ranking difference on the number of submissions. Since we are interested in the choice of journals after rejection, observations implying a successful initial submission (and therefore a ranking difference of zero) were excluded from this analysis. As described in the “Data” section, the skewness of the ranking data automatically leads to large ranking differences whenever the journal of initial submission has a relatively high rank. Not controlling for the rank of the journal of initial submission therefore distorts the coefficient on the number of submissions: that model would imply that a researcher moves down 1.53 SJR ranking points with each submission. When controlling for the rank of the initial journal, the model implies that researchers choose a journal ranked 0.32 SJR points lower than the previous journal with each additional submission. The typical respondent in our sample initially submitted to a journal with an SJR score of 2.44, corresponding to the 134th rank on a compiled list of economics and business journals. They would then typically be declined once, after which they would choose a journal with an SJR of 2.13, ranked 160th on the same list.

Table 4 Number of submissions and ranking difference

These results are in line with Hypothesis 1: academics who were rejected initially, on average, resubmit to lower-ranked journals. This implies that researchers interpret being declined by a journal as a signal to adjust their own subjective evaluation of the paper downwards. They seem to deem the probability of acceptance larger the lower a scientific journal is ranked.

The determinants of the ranking of the journal of initial submission are described in Table 5. As the distribution of the ranking scores is highly skewed, the response was log-transformed. The regression with doctoral students as the reference group is displayed, since only this group exhibits significant differences compared to the other academic positions. The coefficients imply that all other academics initially submit to higher-ranked journals than doctoral students. Professors, on average, submit to journals ranked 35% higher than those of doctoral students, although the effect is not significant at the 5% level. Journals chosen by assistant professors with tenure are estimated to be ranked 75% higher than those chosen by doctoral students, and post-docs and assistant professors without tenure both initially submit to journals ranked 55% higher than those of doctoral students. In our sample, the typical doctoral student initially submits to the 266th-ranked journal on a combined list of economics and business journals; the regression then implies that assistant professors with tenure initially submit 136 ranks higher than doctoral students. These results are in line with Hypotheses 2a and 4a: doctoral students, who face mainly time pressure, initially submit to lower-ranked journals than other academics. The results also indicate that assistant professors without tenure choose higher-ranked journals for their initial submission. The estimated coefficients imply that they also choose better-ranked journals than post-docs and professors, but these coefficients are associated with larger p-values. Contrary to Hypothesis 3a, the coefficient comparing assistant professors with and without tenure implies that those with tenure aim for slightly better-ranked journals. This could be due to habits remaining from their time without tenure, to ambition, or to the smaller sample size for assistant professors. Overall, Hypothesis 3a is only weakly supported by our results. Concerning the other control variables, we find that economists initially submit to journals ranked about twice as high as those of business scholars. This is surprising, since both disciplines are usually part of one faculty and many journals are listed in both disciplines. According to Fabel et al. (2008), their close relationship in the German-speaking region supports a similar standard when evaluating research performance. However, there are non-negligible differences in customs and implicit norms regarding the publishing process. Specifically, the top journals in economics are generally ranked higher than those in business, which could explain why economists initially submit higher. Further potential explanations may be that journal publications are relatively more important in economics, or that business scholars publish more in local journals. The preferred journal quality categories for initial submission are strongly correlated with the ranking of the journal of initial submission. Opportunism, the level of pressure, and the gender (Male) and language (English) indicators yield small and statistically non-significant coefficients. Country effects were not included in this analysis because, in combination with academic positions, they result in reference groups that are too small. When studying country effects but omitting the academic positions, the results indicate that Swiss researchers initially submit to higher-ranked journals than German scholars, who in turn submit to higher-ranked journals than Austrian academics. This aligns with findings by Fabel et al. (2008) on country differences in research productivity. The fact that, apart from the academic position, none of the other individual characteristics has a statistically significant impact indicates potential for improvement at the institutional level.
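
The percentage statements above follow the standard reading of dummy coefficients in a log-linear model; if they are computed in this exact form (an assumption about the rounding convention), a reported difference of 75% corresponds to

$$100\left(e^{\beta}-1\right) = 75 \;\Rightarrow\; \beta = \ln(1.75) \approx 0.56.$$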

Table 5 Determinants of initial journal ranking

Table 6 shows the determinants of the ranking score of the publication journal; the score is again log-transformed. The table displays results with doctoral students and post-doctoral students as reference groups only, as these are the only groups with significant differences compared to the other academic positions. Compared to post-doctoral students and assistant professors without tenure, doctoral students generally publish in journals ranked 27% and 36% lower, respectively, although the associated coefficients are not statistically significant. Statistically significant differences are observed for professors, who publish in journals ranked 42% higher, and for assistant professors with tenure, who publish in journals ranked as much as 79% higher than those of doctoral students. Further, assistant professors with tenure also publish in journals with a 39% higher SJR score than post-doctoral students. The ranking of the publishing journal of the remaining academics is estimated to be slightly higher than that of post-doctoral researchers, albeit with a low degree of statistical confidence. The preferred journal category for initial submission (a dummy variable equal to 1 for a preference for higher-tier journals) is strongly correlated with the publishing journal as well. Economists still generally publish in higher-ranked journals than business scholars. However, while their initial submissions are to journals ranked about twice as high as those of business scholars, their publication outlets are, on average, ranked only 22% higher. This result is particularly interesting, as it hints at the importance of the Top 5 in economics in contrast to the 22 A+ journals in business. The remaining controls are of low magnitude and statistical confidence. When country differences are investigated, the coefficients still indicate that academics in Switzerland publish in higher-ranked journals than academics from Germany and Austria. These coefficients are, however, not statistically significant, hinting at unobserved influences in the publication process, among them random elements.

Table 6 Determinants of publishing journal ranking

The determinants of the ranking difference per submission are explored in Table 7. We hypothesized that scholars mainly under time pressure would exhibit a larger ranking difference per submission than scholars mainly under quality pressure. However, scholars in different academic positions do not differ substantially in the ranking gaps between two subsequent journals. The only weakly statistically significant effect is estimated for assistant professors without tenure compared to post-doctoral students: the former’s SJR ranking difference per submission is 0.54 points larger than that of post-doctoral students. Since this difference is taken as the ranking of the publishing journal minus the ranking of the initial journal, a positive coefficient means that assistant professors without tenure choose journals that are closer in ranking to the previous journal than post-doctoral students do. The coefficients for the other positions are statistically non-significant at the 5% level but all have the expected sign. The results therefore provide weak support for assistant professors without tenure facing mainly quality pressure and thus moving down the rankings only in small steps as they resubmit. However, our data do not support the hypothesis that doctoral students move down the rankings faster than assistant professors with tenure; in fact, the two groups barely differ in their movement through the journal rankings.

Table 7 Determinants of ranking difference per submission

As a robustness check, we examined whether the two disciplines differ in the ranking difference per submission. Overall, the results are inconclusive but tend to indicate that economists move down the rankings faster than business scholars. Additionally, the researcher’s country did not significantly influence the ranking difference per submission between the initial and publishing journal.

Robustness

To investigate the robustness of the results, we conducted the same analyses with two other journal metrics: the 3y IF and the h-index. As described in the “Data” section, the impact factor is a citation-based measure that does not account for the quality of the citing source, whereas the h-index encapsulates the reputation and consistency of journals. Given that the SJR score does not overlap entirely with either of the two measures, different results are expected. For this reason, coefficients and their magnitudes will not be compared across analyses using different metrics; the robustness exercises are instead aimed at bolstering the overall dynamics implied and the significance of the results.

Irrespective of the metric used, Table 8 shows that an increasing number of submissions is associated with a decrease in the respective score. When controlling for the ranking of the journal of initial submission, the coefficient on the number of submissions exhibits a lower t-value for the h-index than for the impact factor or the SJR. This is because the journal ranking implied by the h-index differs considerably from those implied by the impact factor or the SJR. Still, the result remains significant at the 1% level.

Table 8 Number of submissions and ranking difference

Table 9 shows that doctoral students initially submit to lower-ranked journals than all other academic positions, irrespective of which ranking metric is used. Whereas all coefficients on the academic position dummies were associated with high confidence when the SJR was used, with the h-index only the difference relative to assistant professors without tenure remains statistically significant at the 5% level.

Table 9 Determinants of initial journal ranking

Robustness checks for the determinants of the ranking of the journal of publication reveal a similar narrative, as shown in Table 10. The finding that doctoral students publish in lower-ranked journals is supported no matter which metric is used. While this result was significant at the five-percent level for all academic positions when the SJR was used, with the h-index it is only significant when comparing doctoral students with assistant professors with tenure or full professors. The finding that post-doctoral students publish in lower-ranked journals than assistant professors without tenure remains robust using the h-index.

Table 10 Determinants of publishing journal ranking

Table 11 shows no clear evidence that assistant professors without tenure move down the SJR ranking more slowly than other academics. However, all coefficients carry signs in accordance with our expectations. The same holds true, except for assistant professors without tenure, when the 3y IF is used. Using the h-index even produces results contrasting with this intuition: in terms of the h-index, assistant professors without tenure move down the rankings the fastest.

Table 11 Determinants of ranking difference per submission

Which ranking mimics publishing behavior most accurately can be assessed by comparing the proportion of variance explained by the regressions. For the initial submission, the SJR model outperforms the 3y IF and the h-index with respect to R2. For the ranking difference per submission, its R2 is somewhat better in comparison. Regarding the publishing outlets, more variance is explained when using the 3y IF, with the SJR performing slightly worse. While this seems to indicate that, overall, the SJR mimics submission behavior best, the large share of variance that remains unexplained points to the many random elements in the publishing process.

The dynamics obtained in the analysis using the SJR are largely similar when using the 3y IF or the h-index, though the level of confidence tends to be lower with either of these metrics. With the exception of the analysis of the ranking difference per submission, the h-index provides coefficients with a higher level of confidence than the 3y IF. Given that the impact factor does not adjust for the prestige of the citing source, this could be partly explained by authors accounting for reputation and prestige when choosing a journal.

Discussion

Young academics are generally inexperienced with the publication process. Nonetheless, our survey shows that they already face considerable pressure to publish. Taken together, these circumstances can in some cases combine into mental overload during the early stages of the academic career. In the worst case, doctoral students may quit academia or stay on only to develop mental health problems (Levecque et al., 2017). In this discussion, we focus on the possible causes of young academics’ stress, on what this stress implies for their behavior, and on which propositions may be suitable to alleviate some of it.

Stress factors and implications of stress for doctoral students

The PhD is the first step of an academic career, for which publications are important. Compared to a cumulative dissertation, the monograph has the clear disadvantage of not stacking the CV with the publications needed for academia; the cumulative dissertation may therefore be more attractive for most doctoral students. This is exemplified by the increasing number of students who opt for an article-based dissertation, which by design exerts publication pressure. Young academics do not have much experience with the publication process, and we suspect that some doctoral students feel a disconnect between what they expected from their PhD and the daily life they then experience. The focus on publications may also diminish the time students can spend on learning skills for industry, research communication, or teaching; this discrepancy can be another source of dissatisfaction and stress. A mediating factor for all these points could be the dissertation supervisor. When enough time is invested, the supervisor can help with young academics’ uncertainties: they can help in choosing suitable journals, put rejections into perspective, and manage expectations from the get-go. Being supervised in a satisfactory manner is a major determinant of not quitting academia (Mackie & Bates, 2019). However, when doctoral students are not supported by their supervisor (who may feel the need to allocate their time to things deemed more important than their PhD students), their stress rises.

Stress can then translate into different responses. Young academics are discouraged from pursuing projects with a low probability of getting published, especially innovative work. Even though interdisciplinary work is often touted by universities as very important, PhD students are not incentivized to conduct such time-intensive exercises. When realizing a project, they must improve the probability of acceptance of their manuscripts as well as maximize research output for a given level of effort. In the end, when publication requirements are not, or will not be, met, doctoral students may choose to quit academia. The focus on publications as the way to progress through academia therefore selects people who are successful in publishing and tends to crowd out more intrinsically motivated researchers (cf. Haucap & Muck, 2015).

Propositions for improvement

To mitigate the stress stemming from inexperience and mismatched expectations, it is important to align young academics’ expectations with publication realities. Publications are not as important for students who at some point decide to go into industry after the PhD. If given a choice, those students could opt for a monographic dissertation to bypass some publication pressure; or, if they still choose to graduate with a cumulative dissertation, they could take the freedom to submit to lower-ranked journals, where acceptance is more likely. There are also levels to academic positions, as there is competition among universities: the top universities will generally attract the researchers with top publications. Ambitious young academics should therefore already take shots at the top journals during their PhD; if rejected, they should not hesitate to resubmit their work in order to get it published before their deadline. For other students, who aim for a position at smaller, more regional universities, top publications may not even be required (Frey & Briviba, 2023); they are best advised to aim for specialized and well-ranked journals. Some stress can be alleviated simply by becoming familiar with the publication process. This article contributes to this point by showing how academics in different positions resubmit their manuscripts.

In general, we believe that the balance between publications and all the other skills important for academia and industry is currently tipped too far towards publications (Osterloh & Frey, 2021). We therefore argue for universities to adjust their PhD curricula, placing less emphasis on requirements to publish. In analogy to pre-registered studies, supervisors could be given some authority to decide, before a project is started, which projects and papers are acceptable as dissertation products, irrespective of the outcome. This may support the intrinsic motivation of young scholars, as they would no longer necessarily be discouraged from pursuing larger and more time-consuming projects. This change would, however, also require professorial appointment committees to emphasize the content of someone’s publications and not only the number of publications or the ranking of the journals.

Publishing houses could also contribute to improving the current situation. Innovative papers are often rejected because they do not fit the status quo or even directly counter points raised in the reviewers’ own research. A specific proposition to improve the probability of innovative papers being published is qualified random selection applied to papers and even study designs (Osterloh & Frey, 2019). Using this approach, particularly for research designs, would reduce bias against null results and again support intrinsically motivated academics (Kaplan & Irvin, 2015). Decision bodies could even retain flexibility regarding how much they want their subjective judgement to be restricted, using different degrees of partiality and weighting for the lottery (Shaw, 2022). Overall, we acknowledge that pressure per se is not evil and can indeed drive productivity. We believe, however, that more creative research would develop in a system less characterized by publication pressure; the current system overemphasizes the importance of bibliometrics. While some of the proposed solutions carry negative externalities of their own, such as more bias in decisions, we believe these do not outweigh the benefits of reduced publication pressure and a more innovative and open research environment. Additionally, the proposals would directly benefit doctoral students, but also universities, by improving the completion rates of their PhD programs, as well as research at large.

General discussion

Our results imply that researchers generally move down in ranking as they resubmit their manuscript. While we shed light on the determinants of publication behavior, other institutional aspects, such as the role of the department’s ranking or universities’ financial resources, remain to be explored. Taking a rejection as a signal to downgrade the quality of one’s manuscript may, however, be a flawed strategy. The reviewing process has been shown to be noisy and biased (Seidl et al., 2011); in particular, referees often do not agree on which papers should be accepted (Callaham et al., 1998; Ceci & Peters, 1982; Epstein, 2004; Peters & Ceci, 1980). Referees usually spend little time reviewing a manuscript, meaning they must rely on noisy signals to weigh its quality (Ben-Yashar & Nitzan, 2001). For researchers with sufficient time, resubmitting to a suitable journal at a similar level as the previous submission may therefore be the best strategy for maximizing the journal rank. Academia and the number of articles submitted each year are still growing, and a slowdown in the publication process can be observed (Ellison, 2002). The strategies researchers typically choose when trying to publish do not help to reduce the strain on the peer-review system; moreover, each resubmission entails redundant processes on the part of both authors and reviewers. A system in which authors are incentivized to reduce the number of submissions could therefore be expected to yield efficiency gains for research institutions and publishing houses.

Conclusion

While some empirical evidence documents the strategies researchers use to exploit bibliometrics, focusing on metrics regarding the publication journal, the literature is sparse on scholars’ strategies during the entire publication process of a scientific paper. Our survey enables us to study both the journal of initial submission and the publication journal, along with the number of submissions in between. The collection and quantification of these data is, to our knowledge, a novel addition to the literature. The survey shows how academics in different positions submit their papers, which is especially beneficial to young academics not yet experienced in scientific publishing. The results show that authors are induced to first submit to a well-ranked journal; if they are rejected, they choose a lower-ranked journal for resubmission. Researchers therefore try to maximize the ranking score of their manuscript. A researcher’s field or a paper’s topic largely dictates which journals are suitable: the more specialized or niche a topic, the more restricted the set of available publication outlets. Broader research fields or topics allow for smaller ranking steps between submissions and are thus better suited for maximizing journal ranking scores. The result also implies that a researcher with unlimited time would submit to journals one after the other according to their respective ranking. Each submission to a new journal demands resources from people in academia: journals usually have unique guidelines for how an article must be written and formally presented, so authors must invest much time and effort that is unrelated to the content of the article. Further, when articles are accepted for review, a submission draws resources from uncompensated reviewers, and reviewer decisions are arguably noisy and susceptible to bias. On the one hand, maximizing journal ranking scores thus contributes to the documented phenomenon of rankings displacing heterodox approaches. On the other hand, the strategy itself can result in inefficient processes, in which many reviewers assess roughly the same manuscript. Universities can help address the first issue by adjusting their recruitment practices: putting less weight on a candidate’s bibliometric scores and shifting this attention to the content of their important publications should help reduce the effect journal rankings have on heterodox approaches. The problem with inefficient reviewing is that authority over the maximum number of submissions for a given paper lies solely with the authors. If journal rankings were not as important to the academic career, researchers might be more inclined to find a suitable journal from the start.

Most doctoral students in economics and business nowadays are required to produce a certain number of publications in a limited timeframe. Compared to more senior academics, for whom the reputation of the publication outlet is more important, doctoral students face more time pressure. Doctoral students thus initially submit to lower-ranked journals, and ultimately publish in lower-ranked journals, than academics in other positions. The scholarly literature, along with the tendencies in the rest of our results, suggests that researchers need to publish in top journals to attain a professorship. The fact that doctoral students submit to lower-ranked journals could therefore reflect that doctoral students in the German-speaking area are not yet fully committed to an academic career; labor market options outside academia, especially for junior academics in business, may remain attractive. If universities see a problem in too few junior academics remaining in academia, they may have to lower the importance they attribute to top publications in professorial appointments.

Future research may aim to obtain data directly from journal editors to find out how journals choose reviewers. When replicating the survey, researchers could require respondents to report at least the most recent publication effort but then allow for further publication efforts in chronological order to reduce bias. Insightful data could also be gained from a survey obtaining all submission outlets of researchers’ publication efforts. This would allow for a more granular analysis of the issues discussed in this article and might even shed light on the strategic behavior of academics with respect to publishing houses. Finally, a survey similar to the one used in this article could be conducted in other regions. Adjusting the survey to obtain more detailed data on the types of pressure researchers face would be particularly interesting.