Introduction

Despite the ongoing evolution of scholarly communication, journal publications, especially those in ‘high impact’ or prestigious journals, still form the core of the scientific communication and reward system. While novel publication and assessment procedures are being developed (CoARA, 2022), journal articles containing prestigious journals’ seals of approval continue to be the main mechanism for sharing knowledge and comprise the prime currency of academic reward (McKiernan et al., 2019; Pontika et al., 2022). Even though the relationship between study quality, research impact and publication venue has long been questioned, researchers remain eager to get their work published in prestigious journals.

However, the reliance on journal publications, especially the competitive impulse to publish in prestigious (“high impact”) journals, has led to a number of problems. Among these is a privileging of sensational, “flashy” findings at the expense of methodological rigor (Brembs, 2018; Köhler et al., 2020; Oswald, 2007). Additionally, the peer-review process at these journals is often highly competitive and can be slow, with somewhat arbitrary decisions on what gets published (Tort et al., 2012). Furthermore, the emphasis on these publications can reduce diversity in the scientific community, as researchers from underrepresented groups may have more difficulty getting their work published in these journals (Demeter, 2019; Maas et al., 2021; Wondimagegn et al., 2023).

Therefore, the gatekeeping mechanisms deciding who or what gets published within the pages of prestigious journals have been the subject of extensive debate and study. Although peer review has long been criticized as biased, slow, and fallible in many ways, it is still considered the best selection mechanism available and is undoubtedly the most commonly used one (Mulligan et al., 2013; Ross-Hellauer et al., 2017). However, some journals have long had alternative publication models in which a selected group of authors can partly or fully bypass peer review processes. While these mechanisms contribute considerable outputs in some of science’s most prestigious outlets, hence prominently impacting both the scholarly literature and academic reward processes, they have only been minimally subjected to empirical analyses.

This study addresses this knowledge gap by studying “Contributed” articles in the Proceedings of the National Academy of Sciences of the United States of America (PNAS). Through this track, members of the US National Academy of Sciences (NAS) were able to “contribute” up to four articles every year via a streamlined process (reduced to two articles per year since the start of 2020), allowing them more control over the review process. Against a backdrop of claims about those articles providing important pathways to publish high-quality research, but also allegations of them constituting mechanisms of elitism, this study assesses the nature and impact of Contributed articles and the background and status of their authors and contributors. In particular, the paper addresses the following questions:

  1. What are the characteristics of PNAS Contributed articles in terms of author background, discipline and peer review duration, and how do they compare to PNAS Direct submissions?

  2. How does the citation profile of PNAS Contributed articles compare to the profile of PNAS Direct submissions and to articles in other prestigious outlets by the same authors?

  3. How do the author contribution roles of contributing authors compare to those of other authors of PNAS articles?

Addressing these questions will contribute to our understanding of special submission pathways and the characteristics of their articles and authors.

Background

Peer review, evolving out of editorial oversight at community- or society-led journals, became a “gold standard” in most journals in the latter half of the 20th Century (Baldwin, 2017). At least as an ideal, a scholar’s status came to be judged irrelevant to the assessment of individual pieces of research. Formalized as the Mertonian norm of universalism (Merton, 1973), judgements of a manuscript’s merits were supposed to be blind to its authors’ status, demographics or background. Nevertheless, despite a host of innovations in peer review procedures and formats, particularly including various forms of anonymization (Horbach & Halffman, 2018), the system has continuously been critiqued for failing to live up to Merton’s universalistic ideal (see, e.g., Jukola, 2016). Multiple journals have therefore struggled to find an appropriate balance between trust-based, efficient publishing on the one hand and preventing favoritism on the other. Relatedly, the extent to which editors should be allowed to publish their work in the journals they work for has been extensively debated. A recent study shows that editors, especially male editors, tend to publish substantially in their own journals (Liu et al., 2023).

PNAS, as the flagship journal of the National Academy of Sciences, provides a favored position to members of its society. Founded in 1914, PNAS aimed to represent a “comprehensive survey of the more important results of the scientific research of this country”, as stated in the first issue by Edwin Bidwell Wilson, inaugural PNAS managing editor (Garfield, 1989). NAS members were able to submit directly and usually forgo peer review. Non-members could submit via an NAS member (“Communicated” submissions), who would then choose the reviewers and oversee editorial processes. In 1995, a “Direct submission” track was created whereby sponsorship from NAS members was no longer necessary. By 2007, Direct submissions accounted for 50% of articles published and 84% of those submitted (Schekman, 2009).

In 2010, PNAS removed the Communicated track, with the main reason stated to be the dwindling number of papers submitted via this track (Schekman, 2009). However, many NAS members and outsiders saw the change as motivated at least in part by an attempt to mitigate accusations of “cronyism” (Kean, 2009). NAS members retained their privilege to “contribute” up to four articles of their own per year (two articles per year since 2020), however. These Contributed articles go through a somewhat different editorial process, which has evolved over the years. Between 2007 and 2016, it was the duty of the contributing member to ensure that submissions were reviewed appropriately before being submitted. They were also required to forward these reviews, along with the reviewers’ names and contact details, to PNAS as a component of their submission process. Beginning in 2017, the peer review process for all Contributed submissions has been managed by the editorial office of PNAS. From April 2014, reviewers had the option to include their names on the article byline in the published paper, a practice that became a compulsory requirement by October 2015. Since January 2021, the process has been for the contributing member to submit the manuscript along with the names of peers who have agreed to review the work. After a brief assessment by the editorial board, the suggested referees are assigned. Upon publication of the manuscript, the names of the contributing NAS member, the handling editor and the referees are mentioned in the article’s byline.

Hence, while giving contributing authors more influence on the editorial process, the system does provide transparency about the actors involved and their roles. Justifying the decision to maintain this member-contributed track, then-Editor-in-Chief Randy Schekman explained that Contributed articles were amongst the journal’s most cited and that this track also incentivized active contribution to running the journal from NAS members (Kean, 2009). The Contributed track has, however, attracted criticisms of cronyism, being “anachronistic” and giving the journal “the appearance of an old boys’ club” (Aldhous, 2014).

Two previous studies have touched upon the issues we investigate. In their scientometric analysis of PNAS papers published across all tracks, Rand and Pfeiffer (2009) found that Contributed papers tended to garner fewer citations overall than Direct submissions, but that the top 10% of Contributed submissions garnered higher levels of citations than the top decile of Direct submissions. They hence concluded that PNAS-Contributed papers “balance an overall lower impact with an increased probability of publishing exceptional papers” (Rand & Pfeiffer, 2009).

A 2016 preprint from Davis (2016) further compared citation performance across PNAS tracks to that point and found general underperformance (9% fewer citations) of Contributed papers when compared to Direct submissions. Papers from Social Sciences had the largest gap (12% fewer citations). However, Davis found that the effect had lessened over the years, from 13.6% (2005) to just 2.2% fewer citations (2014). Davis suggested that this closing of the citation gap was attributable to persistent moves to tighten editorial scrutiny of Contributed papers in the preceding years: “Successive editorial policies placing limits, restrictions, and other qualifications on the publication privileges of NAS members may be responsible for the submission of better performing Contributed papers” (Davis, 2016).

Materials and methods

Publication meta-data on articles published in PNAS from 2007 to 2020 was made available in XML format by the PNAS editorial team. Some information, such as publication dates, author names and DOIs, was available directly from the XML, whereas information on submission type, submission date and acceptance date was extracted from structured text fields. A total of 49,089 articles were published in this period; however, 2,168 were of other types (e.g., communications). Of the remaining 46,921 articles, 35,878 (76.5%) were Direct submissions and 11,043 (23.5%) were Contributed submissions. Article metadata were matched through DOIs to Web of Science (WoS) publication records to allow citation analysis and comparisons to the authors’ oeuvres. All PNAS articles are in the Science Citation Index-Expanded, but we count citations from this index as well as the Social Science Citation Index, the Arts and Humanities Citation Index and the Conference Proceedings Citation Indices. A total of 46,688 (99.5%) records were matched, of which 35,693 (76.5%) were Direct submissions and 10,995 (23.5%) Contributed submissions, the latter including 54 inaugural articles.

PNAS publishes articles in sections according to their broad topic and categorizes them under three broad disciplinary classifications (biological, physical and social sciences). 232 (0.52%) articles were not assigned to any of these categories and are excluded from our analyses. The remaining 46,391 articles were divided between biological sciences (n = 36,050, 77.7%), physical sciences (n = 7,976, 17.2%) and social sciences (n = 2,365, 5.1%).

Matching authors from PNAS to WoS records, and subsequently to their non-PNAS publications, is not trivial. Although one would expect names to be identical, some information is lost during the conversion from journal metadata to registrations in WoS. Using the Damerau-Levenshtein distance combined with the position in the author sequence, we can match 93.8% of names where the sequence position is equal and the Damerau-Levenshtein distance is at most three. A manual inspection at this level found no false positive matches in 100 couplings. The allowed edits (distance ≤ 3) mostly covered changes from local special characters to standard English letters.
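As an illustration, this matching criterion can be sketched as follows. The function below implements the restricted (optimal string alignment) variant of the Damerau-Levenshtein distance; the exact variant, preprocessing and function names used in the study are not specified in the text, so this is a sketch of the stated rule (same author-sequence position, distance at most three) rather than the actual implementation:

```python
def dl_distance(a: str, b: str) -> int:
    """Restricted Damerau-Levenshtein (optimal string alignment)
    distance: the minimum number of insertions, deletions,
    substitutions and adjacent transpositions turning a into b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]


def names_match(pnas_name: str, wos_name: str,
                pnas_pos: int, wos_pos: int, max_dist: int = 3) -> bool:
    """Accept a pairing only when the author occupies the same position
    in both bylines and the name distance is at most max_dist."""
    return pnas_pos == wos_pos and dl_distance(pnas_name, wos_name) <= max_dist
```

A pair like “Müller” versus “Muller” has distance 1 (one substitution of a special character), which is the kind of discrepancy the ≤ 3 threshold is intended to absorb.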

We use the results of D’Angelo and van Eck’s (2020) automatic author disambiguation algorithm to create profiles of publications likely to match those of the actual authors. This algorithm generally produces very high levels of recall (96.0%) and precision (96.1%), which may be even higher for NAS members given their assumed greater-than-average contributions to science. The algorithm prioritizes precision over recall and is consequently more likely to split one author’s full publication set into more than one publication cluster than to merge clusters of different authors’ publications into one. In most cases, the split-off publications will be singletons, with little consequence for the complete profile of an active scientist. However, there is still a risk, higher for Eastern Asian names, of merging the publication sets of authors with similar or near-similar names (D’Angelo & van Eck, 2020), creating overly large publication sets for a single person.

Our author analyses focus on the set of authors who published at least one Contributed submission in the three main sections in the period 2007 to 2020. One thousand seven hundred authors qualify; however, a small number of these have very few publications. For practical reasons (we want to be able to create deciles of articles), we restrict the author analyses to those with at least ten publications in total (including those prior to 2007), leaving 1,640 authors. We manually checked the twenty authors with the most publications to identify potential cases of merged publication profiles. We found three, among them the most prolific profile (1,613 publications), and removed these from the set. The remaining set had a median of 108 publications (mean = 144.6) and a maximum of 1,027 publications for one scientist.

We used the same data for gender inference as in the Leiden Ranking, which applies gender-API and genderize.io to estimate the probability of a name being male or female based on statistical data (Boekhout et al., 2021). These methods generally provide robust estimates for gender inference in most countries except China and South Korea, and previous research has established that the remaining “unknown” category is comparable for men and women in manually established samples (Nielsen et al., 2017; Santamaría & Mihaljević, 2018).

PNAS has registered author contributions for the majority of its articles; however, it does not use a taxonomy for this but allows authors to enter free-form descriptions. For our analysis of author contributions, we filtered the descriptions through a rule-based classification into the 14 contributor roles of the CRediT taxonomy and added two additional roles: advice giving and equal authorship. The first covers cases where an author’s contribution was to provide expert guidance and advice on specific elements of a theory, method or data set; the second covers cases where authors indicated equal workloads between authors. As some descriptions cover multiple roles, our classification rules could likewise assign multiple roles. Most rules used the complete text of descriptions, which was possible as authors commonly used standardized ways of describing roles and often even used the CRediT taxonomy itself. All statements were split by semicolon, typically indicating the work of different authors. 219,400 statements were extracted this way, and 1,707 rules were created to cover all statements. Each rule was used to search statements with front-truncation, e.g. “* wrote the paper” for the role “Writing—original draft”. Of these rules, 224 captured more than two statements with the wildcard, while 1,483 were copied verbatim from statements to capture just one or two statements (0.6% of all statements), i.e. they were essentially manual classifications, e.g. “*contributed concept and design of this study” for the role “Conceptualization”.
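A minimal sketch of this rule-based classification is given below. The six rules shown are illustrative stand-ins for the 1,707 rules actually used, and the role labels are examples; front-truncation (“* wrote the paper”) amounts to testing whether a statement ends with the rule text:

```python
# Illustrative excerpt of a rule set mapping free-form contribution
# statements to CRediT roles plus the two added roles. These rules
# and labels are examples, not the study's full rule set.
RULES = [
    ("wrote the paper", ["Writing - original draft"]),
    ("designed research", ["Conceptualization", "Methodology"]),
    ("performed research", ["Investigation"]),
    ("analyzed data", ["Formal analysis"]),
    ("contributed equally to this work", ["Equal authorship"]),
    ("provided advice on the analysis", ["Advice giving"]),
]


def classify_statement(statement: str) -> list[str]:
    """Front-truncated matching: a rule like '* wrote the paper'
    fires when the statement ends with the rule text. A statement
    may match several rules, yielding multiple roles."""
    text = statement.strip().lower().rstrip(".")
    roles = []
    for pattern, rule_roles in RULES:
        if text.endswith(pattern):
            roles.extend(rule_roles)
    return roles


def classify_contribution_field(field: str) -> list[list[str]]:
    """Split a free-form contribution field on semicolons (typically
    separating author groups) and classify each statement."""
    return [classify_statement(s) for s in field.split(";")]
```

Statements that match no rule come back empty; in the study these were handled by adding verbatim, essentially manual, rules.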

Results

Figure 1a presents the number of days between submission and acceptance of published articles in PNAS, distinguishing between Contributed and Direct submissions. It shows a consistent pattern across all disciplines, though most pronounced in the biological and physical sciences. Direct submissions, on average, spend 121 days in peer review in the biological sciences, 126 in the physical sciences, and 145 in the social sciences, whereas Contributed submissions spend 74.8 days in the biological sciences (Δ = 46.2 days), 80.4 in the physical sciences (Δ = 45.6) and 119 in the social sciences (Δ = 26). Note that PNAS metadata do not systematically provide information on the number or timing of revised versions, so the available data do not allow a more refined breakdown of the time between submission and acceptance.

Fig. 1
figure 1

a Time in days between submission and acceptance of published articles. Vertical lines indicate the mean duration of the review process. b The proportion of Contributed articles relative to Direct submissions over time

Figure 1b presents trends over time in the proportion of PNAS articles appearing as Contributed articles compared to Direct submissions. The figure shows that Contributed articles are least common in the social sciences, with a share of about 15%, while they are more common in the biological and physical sciences, taking up about 25% of articles each year. In the latter two fields, there was a rise in Contributed articles between 2012 and 2017, after which the share of Contributed articles fell back to the level of 2010.

Figure 2 addresses the impact of Contributed articles and Direct submissions, presenting the findings of our citation analyses. Figure 2a shows the median and IQR of normalized citation scores (NCS), controlling for research field and publication year. We collect citations up until the end of 2022, and at most five years since publication, offering comparable citation scores for articles, although with less robust data in the most recent periods. Field normalization utilizes a paper-level field classification system developed by Waltman and van Eck (2012) and used in the Leiden Ranking of universities. The figure indicates only marginal differences between Contributed articles and Direct submissions for the social and biological sciences, although the gap is somewhat more pronounced in the physical sciences. In all cases, Direct submissions receive slightly more citations.
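The normalized citation score can be sketched as follows: each paper's citation count is divided by the mean citation count of all papers in the same field and publication year, so a score of 1.0 corresponds to the field-and-year average. The data layout below is our assumption, and the plain (field, year) keys stand in for the paper-level field classification of Waltman and van Eck (2012):

```python
from collections import defaultdict


def normalized_citation_scores(papers: list[dict]) -> list[float]:
    """papers: dicts with 'field', 'year' and 'citations' keys
    (a hypothetical layout). A paper's NCS is its citation count
    divided by the mean citation count of all papers sharing its
    field and publication year; 1.0 means average impact."""
    totals = defaultdict(lambda: [0, 0])  # (field, year) -> [sum, count]
    for p in papers:
        key = (p["field"], p["year"])
        totals[key][0] += p["citations"]
        totals[key][1] += 1
    means = {key: s / n for key, (s, n) in totals.items()}
    return [p["citations"] / means[(p["field"], p["year"])] for p in papers]
```

For example, a paper with 30 citations in a field-year stratum whose mean is 20 receives an NCS of 1.5.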

Fig. 2
figure 2

a Normalized citation scores for Contributed articles and Direct submissions. Thicker horizontal lines indicate median scores, and color boxes represent interquartile ranges. b Distribution of Contributed articles compared to overall PNAS papers over normalized citation score deciles. Deciles are ranked from lowest to highest NCS. c The proportion of citations to Contributed articles relative to Direct submissions over time

In contrast, Fig. 2b presents the distribution of Contributed articles over citation deciles. The figure emerged from rank-ordering all PNAS articles in terms of normalized citation scores, dividing them into ten equal-sized segments, and showing the distribution of Contributed articles over these segments, relative to the share of Contributed articles in the entire sample. The first decile contains the least-cited articles. This figure shows that Contributed articles are relatively frequent among the group of least-cited articles in all disciplines. Especially in the physical sciences, Contributed articles are relatively uncommon among the most-cited articles. In contrast, the social sciences show a slight overrepresentation of Contributed articles among the highest-cited works. However, one should note the relatively small sample size for the social sciences, making it more vulnerable to minor variations, as well as the quite small effect size (measured in percentage points) in this bin.
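The construction behind Fig. 2b can be sketched as follows, assuming per-article NCS values and Contributed flags as inputs; the function name and the percentage-point output format are ours:

```python
def decile_overrepresentation(ncs: list[float],
                              contributed: list[bool]) -> list[float]:
    """Rank-order papers by NCS, cut them into ten equal-sized
    deciles (lowest-cited first) and report, per decile, the share
    of Contributed papers minus the overall share, in percentage
    points. Positive values mean overrepresentation."""
    order = sorted(range(len(ncs)), key=lambda i: ncs[i])
    n = len(order)
    overall = 100 * sum(contributed) / n
    diffs = []
    for d in range(10):
        segment = order[d * n // 10:(d + 1) * n // 10]
        share = 100 * sum(contributed[i] for i in segment) / len(segment)
        diffs.append(share - overall)
    return diffs
```

By construction, the percentage-point differences across the ten deciles sum to zero when the deciles are exactly equal-sized: overrepresentation among the least-cited papers must be offset elsewhere.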

Figure 2c, meanwhile, presents the relative citation impact of these articles compared to Direct submissions over time. Contributed articles are cited slightly less than Direct submissions, but the difference between both article types is gradually diminishing, reaching near-equal citation impact in 2020. While citation impact is normalized, the shorter citation windows at the late end of the period may be a contributing factor to this observation, because of the lower number of citations per article. However, since this holds for both Direct and Contributed submissions, the impact of this limitation is likely to be small. Note that the citation impact of social science articles is strongly fluctuating, potentially due to the relatively small sample size in this field. This makes the annual distribution sensitive to outliers.

There are several potential explanations for the slight difference in citation impact beyond the quality, visibility and relevance of the work, most importantly collaboration and publication venue. Bornmann (2016), also using a single journal, showed that higher degrees of collaboration, whether counted as numbers of authors, institutions or countries, were associated with higher citation impact while not being associated with higher quality ratings by colleagues. We checked the distributions of the number of co-authors per article, by field and submission type. Medians were identical for the two main submission types across fields, and so were the means in the physical and social sciences. In the biological sciences, Direct articles had on average 0.52 fewer authors per paper (8.14 vs. 8.66). This is a very small difference in collaboration and is unlikely to influence citation impact. If it were to have an influence, we would expect the lower number of authors in Direct submissions to be associated with lower citation impact, i.e. a dampening effect on the observed citation advantage of Direct submissions.

Figure 3 shows the distribution of PNAS-Contributed articles, Direct submissions, and articles in Nature and Science journals among the citation deciles of authors of varying mean normalized citation scores (MNCS). In particular, the figure distributes all authors, based on their MNCS, over ten deciles on the x-axis, with authors with the lowest MNCS in the first decile. Subsequently, the papers of those authors are distributed by their normalized citation score (NCS) deciles on the y-axis, with papers receiving the fewest citations in the first decile. The figure panels then present heatmaps indicating the relative frequency of occurrence of specific article types among these citation deciles: panel (a) for PNAS-Contributed articles, (b) for PNAS Direct submissions, (c) for Nature articles, and (d) for articles in Science.
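The O/E ratios underlying these heatmaps can be sketched as follows. Papers of a given article type are tallied per (author-MNCS-decile, paper-NCS-decile) cell, and each cell is divided by a tenth of that group's typed papers, i.e. the count expected under a uniform spread over NCS deciles. The array layout is our assumption, and for simplicity the expectation is aggregated at the level of author deciles rather than per individual author:

```python
import numpy as np


def oe_heatmap(author_decile, ncs_decile, is_type, n_deciles=10):
    """author_decile[i]: MNCS decile (0-9) of paper i's author;
    ncs_decile[i]: NCS decile (0-9) of paper i; is_type[i]: whether
    paper i belongs to the article type being mapped (e.g. a PNAS
    Contributed paper). Returns an (NCS decile x author decile)
    matrix of observed counts over expected counts, where the
    expectation is a uniform spread of the group's typed papers
    over NCS deciles (a tenth per decile); cells for groups with
    no typed papers are NaN."""
    observed = np.zeros((n_deciles, n_deciles))
    typed_total = np.zeros(n_deciles)
    for a, d, t in zip(author_decile, ncs_decile, is_type):
        if t:
            observed[d, a] += 1
            typed_total[a] += 1
    expected = typed_total / n_deciles
    # Broadcasting divides each column (author decile) by its expectation.
    return observed / np.where(expected > 0, expected, np.nan)
```

An O/E value above 1 in a cell means the article type occurs more often in that NCS decile of the authors' oeuvres than a uniform spread would predict.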

Fig. 3
figure 3

Heatmap of citation impact of a PNAS-Contributed papers, b PNAS Direct submissions, c Nature articles, d Science articles. Authors with a Contributed paper in PNAS are distributed among deciles based on their mean normalized citation score (MNCS) on the x-axis. All papers in Web of Science by these authors are distributed over deciles based on their normalized citation score (NCS) on the y-axis. Color levels show the rate (O/E) of observed (O) manuscripts per decile of the four article types over the expected (E) manuscripts of that type for the author, where the expected number of papers is a tenth of their total number of papers

The figure indicates that PNAS articles are relatively common among the higher-cited articles for all authors and underrepresented among their lowest-cited works. This pattern is substantially stronger for Contributed articles than for Direct submissions and stronger for authors in the lower range of the MNCS spectrum.

A much stronger pattern emerges for Nature and Science articles, however. Articles in these journals almost exclusively occur among the top 10% highest-cited works for all authors, particularly so for authors in the middle range of the MNCS spectrum.

Figure 4a presents the share of research-active NAS members (i.e., those still actively publishing) who, in a given year after their election, publish in PNAS. The figure indicates that men consistently publish multiple Contributed articles per year at higher rates than women, but that in the early years after election, more women than men publish Direct articles in PNAS and single Contributed articles per year. After about seven years, however, men take over in both categories, with the gap in single Contributed articles per year growing particularly substantial many years after election to the NAS.

Fig. 4
figure 4

a The proportion of NAS members, out of the members that have an active publication history (i.e. have not yet stopped publishing), that (i) publish at least one article in PNAS, (ii) publish one Contributed article, and (iii) publish multiple Contributed articles, in a single year, as a function of the years since their election as NAS members. Lines indicate local polynomial regression fits. b The share of authors fulfilling various roles in the research and publication process compared to what could be expected if every co-author contributed equally to all roles. The figure shows panels for contributing authors, first authors, last authors, and all other coauthors. Note that contributing authors ‘overrule’ author positions, i.e. if a contributing author is also first author, they are only included in the top left panel. Blue bars indicate authors taking up a role more often than expected, while red bars indicate authors taking up roles less often than expected. Roles are derived from the CRediT taxonomy with some adaptation

We further note that only a small fraction of active NAS members publishes more than one Contributed article per year, even though for most of the period studied they had the possibility to publish up to four every year. The share of active members publishing Direct submissions or only one Contributed article per year is two to three times as large, indicating that most members do not take full advantage of their Contributed-article privileges.

Figure 4b presents the distribution of coauthors over the various roles described in the CRediT taxonomy (Brand et al., 2015), augmented with an ‘Equal authorship’ category (indicating shared first authorship) and an ‘Advice giving’ category (covering roles like “provided advice on”, “contributed with ideas for”, “provided expertise on”, “assisted with knowledge about”). Bars indicate how often a particular author group performs a particular role relative to what could be expected if every co-author contributed equally to all roles. The figure indicates that contributing authors were particularly likely to contribute to the conceptualization of studies, the writing of first drafts, and supervision. The same holds true for last authors, but they also often contribute to additional roles, particularly project administration and funding acquisition. Hence, contributing authors occupy a distinctive niche in the authorship spectrum, with a profile clearly distinct from that of authors in any particular position of the authorship order.

Discussion

Reflecting on the broader implications of these findings, we first note that PNAS-Contributed articles tend to spend less time in the review process than Direct submissions. The submission procedure for Contributed articles can likely explain part of this: the contributing author provides the names of suitable reviewers who have agreed to review the paper, so the potentially time-consuming phase of finding appropriate referees willing to review the manuscript can be skipped for these articles. Rand and Pfeiffer (2009) pointed out that factors like NAS members being able to select their own reviewers may “soften the challenges of the peer review process” through tactical choice of referees and diminished reviewer anonymity, suggesting that the effect of such factors cannot be ruled out given the findings of generally lower citation performance for Contributed articles. Our findings similarly support this hypothesis. Other mechanisms could be involved too, but our data do not allow further analysis of the underlying mechanisms and potential systemic factors. Regardless of the reasons for the faster turnaround times, the speed of the Contributed-track peer review process might well be one of its most appealing characteristics, especially given the dual pressures to publish quickly to establish priority of findings and to publish in prestigious venues. As some researchers have suggested, the possibility to get articles published in a high-reputation journal at high speed makes the track highly appealing (Aldhous, 2014).

In terms of citation impact, we note that Direct submissions tend to be cited slightly more than Contributed articles. In line with earlier findings (Davis, 2016; Rand & Pfeiffer, 2009), we find in particular that Contributed articles are overrepresented among the group of least-cited PNAS papers. However, in contrast to Rand and Pfeiffer, but in line with the findings of Davis for the period 2009–2014, we also find that in both the biological and physical sciences, Contributed articles underperform amongst the most-cited articles (only the social sciences displayed a small effect in the direction Rand and Pfeiffer reported). This is significant since a major outcome of the Rand and Pfeiffer study was the suggestion that “the benefit of facilitating publication of extremely high-impact Contributed papers could be argued to out-weigh the potential cost of allowing more low-quality papers to also be published”. Our findings, like those of Davis (2016), indicate that no such benefit exists, at least in the physical and biological sciences, the areas in which the large majority of PNAS papers are published. Davis suggested that the closing of the gap in citation impact between Direct and Contributed articles was attributable to persistent moves to tighten editorial scrutiny of Contributed papers in the preceding years. However, another explanatory factor for the discrepancy between Rand and Pfeiffer’s finding and ours could be the lack of field normalization in their analysis.

Through our analysis of data on citations to PNAS papers and the citation data of contributing authors, we show that while PNAS-Contributed articles tend to generally appear in lower per-author citation deciles than Direct PNAS submissions, they are still more likely to appear in the top citation deciles of authors. This indicates that authors ‘use’ the Contributed-articles-track in diverse ways, with especially authors with lower mean normalized citation scores profiting most from articles published as Contributed papers, in terms of citation impact. This finding may be related to the seniority and productivity of authors, which will be further analyzed in the future.

Regarding the contributions made by authors to PNAS articles, we see that the contribution profiles of PNAS first and last authors generally reflect previous research, with first authors heavily involved in investigation, analysis and writing the manuscript and last authors more likely to be involved in funding acquisition, conceptualization and supervision (Larivière et al., 2020). The contributorship profile of NAS members acting as contributing author is distinct from both, however. While they are somewhat more likely than the average to assume a role in supervision and conceptualization (markers of more senior authors), their rates of supervision are much less than for last authors, and they are underrepresented in other senior-level activities like funding acquisition and project administration. This suggests that contributing authors are less likely to be the drivers of the research, and it raises questions of the ways in which this route to publication is employed by authors. This exploratory study cannot delve deeper into these uses to expand on the causes and consequences of this contributorship profile. Follow-up qualitative work to understand the mechanisms at play is certainly desirable.

Scholarly journals have traditionally served many purposes. Apart from providing a platform to distribute knowledge to a specific community and to archive this knowledge for future generations, journals also play a major role in establishing a hierarchy of published results, distributing credit and recognition to their contributors (Fyfe et al., 2015, p. 350; Horbach & Halffman, 2018). Moreover, journals fulfill a role in developing and maintaining scholarly communities, providing a platform to negotiate norms and standards for academic research (Potts et al., 2017). Recent developments in scholarly communication, including the rise of preprints and mega journals crossing disciplinary boundaries, have challenged some of these expectations of traditional journals. PNAS’ model of allowing NAS members to bypass parts of the editorial process could be considered a way of engaging community members. However, the distinct roles contributors take up, and their specific backgrounds, point to a risk of elitism that might in fact jeopardize some of the purposes the journal is expected to serve.

Limitations and future work

Our analyses are subject to a few limitations, which point the way towards areas for further investigation. First, we analyze the number of days between submission and acceptance of published articles in PNAS, but a more refined breakdown could include the timing of submitted revisions. Unfortunately, the public data from PNAS do not provide this information. Similarly, we did not have access to the review reports and editorial decision letters related to published manuscripts; an analysis of these could shed further light on the editorial process and its implications for the final published manuscript. Second, our analyses primarily use data from a single journal. Other journals (e.g., mBio) also have special submission policies for members of specific communities. Future work could study these special tracks at other journals, comparing them with PNAS in a larger study to assess whether any of our findings are specific to the context of the NAS.

Third, a major limitation of our work relates to the analysis of accepted articles only, because we had no access to data about rejected submissions. To provide a more complete perspective on whether Contributed articles face a more lenient review process or have a lesser impact than Direct submissions, future work could study both accepted and rejected manuscripts in both submission tracks. This could increase our understanding of the variations in the review process between the two tracks and might more properly correct for potential differences in the quality of initial submissions to each track. This is especially important since available data suggest that Direct submissions are much more likely to be rejected before or during the review process than Contributed articles (Aldhous, 2014).

Fourth, our analyses are based on quantitative bibliometric data, allowing us to explore patterns related to PNAS Contributed articles, their authors and their impact. To further understand the mechanisms behind the correlations we observe, however, qualitative follow-up research is needed, for example through interviews with contributing authors, journal editors involved in the process, or researchers acting as invited reviewers. Such research could focus, for example, on the rationales NAS members might have for submitting certain works as Contributed papers. Arguably, researchers who have reached the status of NAS member are less in need of high-profile papers to establish their reputation. Nevertheless, some have argued that getting such high-profile papers “with much less grief than the usual high-prestige journals—that’s worth something” (neuroscientist and NAS member Solomon Snyder, quoted in Aldhous, 2014). This might partly be explained by the value that such publications have for NAS members’ co-authors, especially junior researchers under their supervision. Future research involving more qualitative methods could explore such hypotheses.

Conclusions

These analyses help us further understand the implications of this privileged route to publication for NAS members and the epistemic and social aspects of this publication process. We have shown that articles via the contributor track have a distinct profile in terms of contributorship and impact, which implies they are being used in a particular way by a subset of NAS members.

In 2009, former PNAS Editor-in-Chief Randy Schekman justified the Contributed article track by explaining that Contributed articles were amongst the journal’s most cited and that the track incentivized NAS members to contribute actively to running the journal (Kean, 2009). Regarding impact, our results suggest the opposite. Regarding incentives, while that may be so, our finding that these articles have such a distinct contributorship profile raises further questions about whether an expedited pathway into an elite journal is a healthy incentive for journal engagement, considering the multiple side effects it may have for wider academic assessment.