Introduction

At some point in the development of a study, researchers need to decide to which journal they will submit the results of their work [1]. This is a delicate decision for junior scientists, and the recent, rapid evolution of the world of scientific publishing makes it difficult even for experienced researchers.

The choice of the journal serves two main purposes. The first is the need to communicate the results to peers interested in the subject of the study. The second is to obtain fair and adequate recognition for a scientific contribution.

Knowledge of the journal-based structure of scientific communication constitutes a basic compass for selecting the journal to which to submit a scientific contribution.

Horizontally, journals are diversified and specialized in specific disciplines, topics, and audiences. Vertically, journals are stratified in terms of prestige and reputation, which are supposed to reflect the novelty and validity of the research they publish [2]. In principle, then, authors should identify a journal along these two dimensions: one read by the scientists most likely to be interested in their results, and with the best reputation possible.

What makes this seemingly simple decision actually complicated?

One problematic aspect is that the journal with the highest reputation may not be read by the audience that best fits the subject of the research, or vice versa. For example, one may need to choose between a journal of medium reputation that matches the topic of the article very well and another of better reputation but less pertinent or more general in character. Moreover, reputation typically correlates with the rejection rate. A rejection implies revision, loss of time, and delayed publication; one should therefore select a journal in which the article has realistic chances of being accepted, although assessing those chances is not an easy matter.

However, the greatest challenge in choosing the appropriate journal derives from the recent evolution of the scientific publishing system. To evaluate accurately the reputation of a journal and its future prospects, it is of pivotal importance to know and understand critically the basic economic mechanisms of the scientific publishing system. Major changes that unfolded in recent years not only disrupted the scientific publishing system, but also hampered the reliability and validity of the bibliometric indicators commonly used as proxies of a journal’s reputation and quality. Whether these indicators will withstand future critical scrutiny is far from certain.

In this frame, it is relevant to consider the recent evolution of the scientific publishing market and to analyze a few data and trends that are essential to help make an informed decision. The next section therefore describes four major phases in scientific publishing business models and their implications. The final section discusses criteria and good practices with respect to the choice of the journal, which is ultimately a key decision for the long-term reputation of those who enter the scientific community.

Evolution in scientific publishing business models and relevant consequences

For more than 200 years, scientific papers published in scientific journals have been a major avenue for making research results public [3]. In the more recent history of scientific publishing, we can identify four partially overlapping phases.

Print (until the mid-1990s)

In the “print phase,” not only were journals printed, but all the other main steps in the review and publication process were also conducted on paper [3]. This was a slow, costly, but also “thoughtful” process: given the burden of sending a printed article, authors tried to select the right journal with extreme care, editors selected the reviewers with similar attention, and the reviewers were strongly reluctant to refuse a review.

In this phase, the market of scientific publishing was fragmented: the largest for-profit publishers, known as the “big five,”Footnote 1 controlled about 20% of the market, and many small- and medium-size publishers were also present. The revenues of for-profit publishers derived from university and institutional libraries’ subscription fees for the right to access the journals’ content, according to the so-called subscription-based business model.

Digitalization and concentration (mid-1990s to mid-2000s)

Although many journals still offer a print version, digitalization and the internet made paper disappear from all the other phases of the review process and increasingly marginal even in publication. Digitalization and the internet progressively transformed scientific journals into digital goods, and digital goods have a cost of reproduction close to zero. This characteristic enabled larger publishers to offer many more journals to potential customers, e.g., university and institutional libraries, without any additional cost, outcompeting smaller scientific publishers and eventually acquiring them [4].

Figure 1 illustrates this cycle, a virtuous one for the largest for-profit publishers. As a result, a major wave of mergers and acquisitions has occurred since the mid-1990s, which in 10 years led to an oligopoly of the big five, controlling over 50% of the market [4].

Fig. 1

Digitalization and the virtuous cycle for the largest for-profit publishers

Open access (mid-2000s–ongoing)

It is important to remark that the scientific publishing industry has one of the highest profit margins of any industry. In 2023, the profit margins of the largest scientific publishing companies were over 40%, more than the most profitable sector in the US economy, regional banking, with a profit margin of around 30%Footnote 2 [4, 5].

Why is this sector so profitable? Consider a standard publishing company for magazines or newspapers, which must pay the salaries of the staff producing the articles and obtain revenues from readers in a competitive market and from advertising. By comparison, scientific publishers do not pay the scientists who write the articles and also act as editors and reviewers, while obtaining revenues from subscription fees and/or the authors’ publication fees (i.e., the “APC” fee, see below). Profitability is also guaranteed by the fact that most journals benefit from limited competition by occupying a specific niche and rank that provides a semi-monopolistic position. Figure 2 illustrates the business models in standard and scientific publishing.

Fig. 2

Comparison of business models in standard and scientific publishing

The huge profits of the industry at the expense of the public purse have provoked the outrage of librarians and academics since the late 1990s [6]. Hence, while digitalization and the internet favored a virtuous cycle for the large publishers, they also enabled and encouraged the emergence of an open-science movement aiming at reducing barriers to knowledge through free access to scientific publications.

The growth of open access (OA) publishing has been staggering, and nowadays most articles are published as OA [7]. However, the label “open access” covers very different models. On the one hand, the open access movement pursued so-called green and diamond OA, which imply no fees for readers or authors; on the other hand, the most successful models are by far gold and hybrid OA, which demand expensive fees paid by authors, according to the so-called article processing charge (APC) business model, sarcastically nicknamed the “author pays the cost” model. Figure 3 illustrates the different types of OA according to four key traits, namely whether (i) the content is free for readers, (ii) publication is free for authors, (iii) the articles undergo a peer review process, and (iv) the authors retain the copyright.

Fig. 3

Open access model characteristics: one term for very different systems. Source: adapted from [8]

Even within the gold open access model we can identify different types of scientific publishers, as illustrated in Fig. 4. Recently established scientific publishers most often obtain revenues exclusively through the gold OA model; they include nonprofit publishers, such as the Public Library of Science, which edits PLOS One, and for-profit publishers such as MDPI and Frontiers. Traditional publishers, including large for-profit publishers as well as nonprofit publishers owned by a learned society,Footnote 3 instead adopt the gold OA model as a complement to subscription fees (the so-called hybrid model). This may lead to the so-called double-dipping problem, in which the public purse pays the publisher twice for the same article: via subscription and via open access fees.

Fig. 4

Gold open access: for-profit and nonprofit publishers

Diversification (mid-2010s–ongoing)

More recently, several parallel processes have been observed.

A worrisome trend lies in the staggering growth of predatory journals and paper mills. Predatory journals follow the APC business model, requiring fees from authors, but do not conduct an adequate peer review process, or any peer review at all [9]. Identifying predatory journals is not always straightforward, as they try to conceal their real nature behind a curtain of apparently legitimate practices [10], and they do so effectively enough that even curated databases have recently been infiltrated [11].

Another threatening phenomenon consists of paper mills: companies that “fabricate” scientific articles and sell co-authorship once the articles have been accepted for publication [12]. A recent study examined the field of biomedicine and estimated that, out of 1.3 million biomedical publications listed in Scimago in 2020, more than 300,000 were likely fake, and that the percentage of fake articles from Russia, Turkey, China, Egypt, and India was between 39% and 48% [13].

The most recent notable process in scientific publishing consists of Transformative Agreements (TAs). In 2015, a white paper of the Max Planck Society argued that “the money already invested in the research publishing system is sufficient to enable a transformation [to open access publishing] that will be sustainable for the future” [14]. Transformative agreements aim at tackling two main problems in scientific publishing, rising subscription fees to read the scholarly literature and rising APC fees to publish in it, by moving publishing away from subscription payments toward fully open access publishing. TA is an umbrella term encompassing several kinds of contracts between institutional consortia and publishers, which may include traditional subscription licenses and APC discounts or waivers covering a certain number of articles in hybrid or fully open journals.

The major initial supporters of TAs were the Max Planck Digital Library and cOAlition S, a consortium of national research agencies and funders from 12 European countries promoting OA publishing. The number of TAs has grown fast since the mid-2010s, especially in central and northern European countries, in part thanks to regulatory obligations such as those enforced by cOAlition S. So far, however, there is little evidence that TAs are truly containing costs. A report to the US Congress critically assessed TAs for accepting and strengthening the current costly and opaque market for journal subscriptions [15], and cOAlition S dropped its support for TAs beyond 2024, concerned that sustained support would make these agreements a permanent fixture and risk replicating the same trends they were designed to alleviate. In addition, since much time and effort are needed for negotiation, many have argued that TAs favor larger publishers, by pushing small publishers to partner with larger ones and authors to publish in journals covered by these agreements.

Business models and their implications

At this point, it is important to highlight some key characteristics of the two main business models in the scientific publishing industry, and how these characteristics affect the behavior of authors, publishers, and journals’ editors.

In the traditional “subscription-based” business model, the revenues of the publishers derive from subscription fees paid by public libraries, typically university and institutional libraries. In this model the customer is the reader, and the reader’s goal is to access high-quality content. Therefore, if publishers aim at increasing profits, they need to promote the quality of their journals. On their side, the editors of the journals are scientists whose goal is to maximize their prestige and reputation, which they can achieve by curating the quality of the content. This model thus creates an alignment of incentives towards quality and a common interest in rigorous, selective peer review.

In the APC model, the revenues derive from the authors who publish articles: in a way, the customer is the author. The authors’ goal is to publish, and the publishers’ profits depend on the volume of articles published: both have a common interest in publishing articles in large numbers and in the shortest time possible.

The different goals and interests of authors, editors, and publishers in the two business models are reflected in journals’ acceptance rates, turnaround times (i.e., the time elapsed from the submission of an article to its acceptance), special issue policies, and growth rates. For example, the acceptance rate of journals owned by MDPI, the largest APC for-profit publisher, is two to three times higher than the acceptance rate of traditional publishers [16]. There are huge differences also regarding the turnaround time. Peer review is a time-consuming process, and in 2022 the turnaround time was estimated at between 134 and 198 days on average for journals from the major traditional publishers and nonprofit OA publishers, all of which experienced longer and longer turnaround times in recent years, among other reasons because of the increasing difficulty of finding reviewers [17]. The picture is very different for journals owned by APC for-profit publishers, which display increasingly fast turnaround times, as evaluated for MDPI (37 days), Frontiers (72 days), and Hindawi (83 days) [16].

A major difference is also observed in the proportion of articles published via standard submissions or special issues. Peer review in special issues can adhere to strict standards and involve independent reviewers selected by the journal editors. In other cases, the peer review is managed by the guest editors, namely the scientists who promote the special issue, invite contributors, and very often publish one or more articles in it themselves. In this latter situation, special issues have pros and cons. On the one hand, from the journals’ point of view they kill two birds with one stone: they attract new submissions and reduce costs by delegating the management of the peer review process to the guest editors. On the other hand, because of some intrinsic endogamy in the process, the peer review is often not as independent and rigorous as it is for standard submissions, thus entailing the risk of accepting below-average contributions. The extent of the use of special issues differs deeply among publishers, in a way consistent with their respective goals. Special issue articles represent only a tiny fraction, namely below 5%, of the publications in traditional and nonprofit OA publishers’ journals. On the contrary, special issue publications represent the vast majority for the major for-profit APC publishers, in the order of 70–80% [16]. For example, in 2023, the International Journal of Molecular Sciences and Sustainability, i.e., the two largest MDPI journals, each published more than 3000 special issues, which means more than 9 per day.Footnote 4 Because of similar practices and traits, it is a debated question whether MDPI should even be considered a predatory publisher [18].

Perhaps the most remarkable difference between the two categories of journals consists in the growth rates displayed in recent years, namely the exponential growth of APC publishers compared with the stable or slowly growing trends of the other publishers. For example, MDPI grew from 17,000 publications in 2015 to 263,000 publications in 2022, becoming the second largest for-profit publisher in the world, surpassing Springer and second only to Elsevier. Such staggering growth is best illustrated by the complete reshuffling of the largest scientific journals that occurred in a few years.

Figure 5 compares the 15 largest scientific journals by number of articles published in a year, respectively in 2010 and in 2022. In 2010, two of them (gray bars in the figure) were nonprofit OA journals, including PLOS One, the most prolific journal, with 6700 articles published that year; the other 13 were subscription-based journals, 11 of which were owned by a learned society (striped bars) and two by a for-profit publisher (black bars). In 2022, all of the 15 largest journals published more articles than the largest journal of 2010. All but three were gold open access journals controlled by for-profit publishers, of which ten were from fast-growing APC publishers: MDPI (eight) and Frontiers (two).

Fig. 5

Largest journals by number of articles published in 2010 and 2022—Source: Scopus-Scimago. Gray: nonprofit OA; striped: learned society, nonprofit, subscription/hybrid; black: for-profit subscription/hybrid; white: for-profit APC

A similar reshuffling has also been observed in the specific field of chemistryFootnote 5: in 2010, all the journals were controlled by publishers following a subscription-based business model, ten of them owned by learned societies, including the largest one, Acta Crystallographica Section E, with 4000 articles, and five by for-profit subscription-based publishers. In 2022, only three journals were controlled by learned societies and 12 by for-profit publishers; the three largest journalsFootnote 6 were all controlled by MDPI and published, respectively, 16,000, 10,000, and 9000 articles.

There is, of course, a tension between the goals of an APC for-profit publisher, whose profit depends on the quantity of articles published, and those of the scientific editors, who aim at improving their own reputation via quality and selectivity. Moreover, while the subscription-based business model guarantees a stable source of revenues regardless of publishing volumes, the APC business model requires journals to maintain large publishing volumes. Hence, editors pursuing quality via selectivity hinder the publishers’ goals, so that in for-profit APC journals the editors are often less experienced academics, or even nonacademics.

In turn, the for-profit APC model seems to fit well Christensen’s definition of disruptive innovations, namely innovations that make it much simpler and more affordable to own and use a product for people who, historically, did not have the skills to be in the market [19, 20].Footnote 7 In this case, people who were not skilled enough to publish in the traditional system now can, by paying… Disruptive innovations typically start in marginal market segments, so that dominant organizations need time to realize how important they are [19]. But in this case they eventually did, as APC turned out to be a much more profitable business model than subscription fees, becoming increasingly popular also among traditional publishers [21].

Signals to consider when selecting a journal

Going back to the key point of the present contribution, we should carefully select journals that have a good reputation and are unlikely to lose it. What signals of quality, reputation, and rigorous ethical standards should we then be aware of? And are there any alarm bells of dubious practices?

Determining a journal’s quality is not simple. First, whereas the reputation of a scientific journal used to be correlated with its age, i.e., the older, the better reputed, in recent years prestigious publishers have established new journals at an increasing pace. These journals convey contrasting reputational signals: a prestigious brand combined with a young age. Second, and arguably most importantly, the bibliometric indicators commonly used to determine a journal’s quality have several pitfalls and are vulnerable to manipulation. Bibliometric indicators are based on the citations received by the articles of a journal; citations are considered a signal of an article’s importance and quality, i.e., a proxy of scientific impact. Especially in the natural sciences, the journal impact factor (IF) is used to estimate the quality and prestige of a journal. The IF measures the average number of citations received in a given year by the articles published in the previous 2 years (or 5 years, for the 5-year IF). However, the IF has several limitations. Most importantly, it weights every citation in the same way and does not take into consideration the characteristics of the citing source. This may be problematic in several regards. For example, it has been debated whether indicators of scientific impact should consider self-citations, namely citations received from articles of the same journal. Clarivate, the company that owns and manages the Web of Science, the best-known database of scientific publications, does include journal self-citations in the computation of the journal impact factor (JIF). On the contrary, Scopus, arguably the second best-known database of scientific publications, controlled by Elsevier, does not consider self-citations in the calculation of the Scimago journal rank (SJR). Scientific journals vary remarkably in their share of self-citations, which can make a big difference. For example, considering the top 20 citing journals of the International Journal of Molecular Sciences, the largest journal in chemistry, 32.5% of citations were self-citations; for the Journal of the American Chemical Society, the figure was 12.4%.Footnote 8 The IF also does not take into account which other journals are the main sources of citations: considering the top 20 citing journals, 70% of the citations to the International Journal of Molecular Sciences were received from the journal itself or other journals from the same publisher (MDPI), and the rest from journals of the other major APC for-profit publisher, Frontiers.
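To make the mechanics concrete, here is a minimal sketch, in Python with entirely made-up numbers, of how a 2-year impact factor changes when journal self-citations are included (as in the JIF) or excluded (as in indicators that discard them):

```python
# Illustrative sketch (all numbers are hypothetical): the 2-year impact
# factor, computed with and without journal self-citations.

def two_year_if(citations_this_year, items_prev_two_years, *, exclude_self=0):
    """Citations received this year to articles from the previous two years,
    divided by the number of citable items published in those two years."""
    return (citations_this_year - exclude_self) / items_prev_two_years

# Hypothetical journal: 12,000 citations in 2023 to 4,000 articles
# published in 2021-2022, of which 3,900 are self-citations (32.5%).
if_with_self = two_year_if(12_000, 4_000)                         # 3.0
if_without_self = two_year_if(12_000, 4_000, exclude_self=3_900)  # 2.025

print(f"IF incl. self-citations: {if_with_self}")
print(f"IF excl. self-citations: {if_without_self}")
```

With a self-citation share like the 32.5% cited above, the two conventions yield substantially different values for the same journal, which is why the choice of indicator matters.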

In general, indicators of impact can be manipulated in many ways [22], and in recent years Clarivate has delisted many journals from the Science Citation Index due to suspicious practices, so that the IF of these journals is no longer reported.Footnote 9 Therefore, when assessing a journal’s impact, scientists should preferably use indicators that do not consider self-citations, and check whether the shares of self-citations and of citations from journals of the same publisher are much higher than for other journals in the same area.
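As a rough illustration of that check, the following Python sketch (with hypothetical journal names and self-citation shares) flags journals whose share stands far above the median of their peers; the factor-of-two threshold is an arbitrary choice for the example:

```python
# Minimal screening sketch (hypothetical data): flag journals whose
# self-citation share is far above the median of peers in the same area.
from statistics import median

# {journal: share of incoming citations that are self-citations}
self_cite_share = {
    "Journal A": 0.325,
    "Journal B": 0.124,
    "Journal C": 0.10,
    "Journal D": 0.09,
    "Journal E": 0.11,
}

def flag_outliers(shares, factor=2.0):
    """Return journals whose self-citation share exceeds `factor`
    times the median share of the group."""
    med = median(shares.values())
    return [j for j, s in shares.items() if s > factor * med]

print(flag_outliers(self_cite_share))  # ['Journal A']
```

Any such flag is only a prompt for closer inspection, not proof of misconduct: some fields legitimately have higher self-citation rates than others.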

Authors should also be wary of journals displaying unusual traits, namely when they combine the two desirable properties of a high acceptance rate and high values of impact indicators, or when they display extreme growth rates. There is in fact a delicate balance between a journal’s reputation, the number of submissions it receives, and its level of selectivity: a better reputation attracts more submissions, but the editors must be highly selective to maintain a good quality journal. Hence, it is very unlikely that a journal can combine good quality with low selectivity. A massive growth in the number of articles published by a journal in a few years may also raise doubts about its rigor, because the delicate balance just mentioned constrains a journal’s growth rate.

Signals of independence in the peer review process are also important. Special issues in many journals are managed not by the editors but by guest editors, who are often authors of one or more articles in the special issue, with co-authors often coming from their own network. Hence, a high proportion of special issues can be considered a warning signal. In a similar way, one may keep an eye on whether the editors and the members of the editorial board publish frequently in the journal, and/or whether there are “serial” authors, i.e., authors publishing many articles a year, or a large proportion of publications from the same country or institution(s), or from countries hosting paper mills.

As a final consideration, it is important to remember that our individual choices are not only relevant for our own career and recognition: collectively, they also shape scientific communication and science itself. Hopefully, an understanding of the mechanisms that are reshaping scientific publishing can inform better individual choices, capable of promoting the common scientific interest.