Introduction

Measuring the effect of research funding on researchers’ productivity is a matter of ongoing debate. In recent work, Mariethoz et al. (2021) analysed the levels of funding and the research outputs of 317 researchers in Earth Sciences and Environmental Sciences in Switzerland. They used publicly available data from the Swiss National Science Foundation (SNSF) to identify researchers who had obtained at least one SNSF grant during a ten-year period, together with the funding amounts of their projects (the data are publicly available at http://p3.snf.ch). The authors chose a bibliometric approach (publication and citation counts, M-quotient, number of highly cited articles). Their descriptive analysis used scatter plots of the productivity measures against the funding amounts, together with R² values for each plot, to determine whether there is a correlation. Note that both the output metrics and the funding amounts were averages of the yearly values taken over the defined period.
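To make the criticised setup concrete, the following minimal sketch (in Python, using purely simulated and therefore hypothetical data; it is not the authors’ code) reproduces the kind of analysis described above: yearly funding and publication counts are averaged per researcher, and the R² of a simple linear fit between the two averages is reported.

```python
# Minimal sketch of the descriptive approach: average yearly funding and yearly
# publication counts per researcher, then report the R^2 of the scatter plot's
# linear fit. All data below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_researchers, n_years = 317, 10

# Hypothetical yearly data: funding in CHF and publication counts per researcher-year
funding = rng.gamma(shape=2.0, scale=50_000, size=(n_researchers, n_years))
pubs = rng.poisson(lam=3.0, size=(n_researchers, n_years))

# Aggregate to one value per researcher, as in the criticised analysis
mean_funding = funding.mean(axis=1)
mean_pubs = pubs.mean(axis=1)

# R^2 of a simple linear fit equals the squared Pearson correlation
r = np.corrcoef(mean_funding, mean_pubs)[0, 1]
print(f"R^2 between averaged funding and averaged output: {r**2:.3f}")
```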

Based on their results, Mariethoz et al. make several strong claims about SNSF Division 2 Project Funding and about public research funding in general. They argue that (i) researchers who are “successful in raising funds are not necessarily in a position to be more productive or produce more impactful publications”. This conclusion is based on the weak observed correlations between the averaged productivity measures and the averaged SNSF funding amounts. The article states that (ii) the results may indicate “bias towards funding more senior researchers” and panel members, as the few panel members in the data acquire higher amounts of funding than other researchers, and as a statistically significant correlation between funding and age could be observed. As the authors cannot explain the absence of a strong correlation between funding amounts and productivity, they further conclude that (iii) “the SNSF Division II project funding scheme acts in a manner equivalent to a random lottery with regard to track record”. Finally, the authors state that (iv) “the present analysis emphasises, above all, the need to clarify the evaluation criteria of the SNSF and of funding bodies in general”.

An imaginary carrot?

We acknowledge that Mariethoz et al. (2021) touch upon a very important issue: how does public research funding, which means spending taxpayers’ money, relate to beneficial knowledge gains? We also want to commend them for acknowledging that research practices are very field-specific. Their focus on a single discipline, Earth Sciences and Environmental Sciences, allows a comparable population with “similar excellence criteria”.

It is, however, also crucial to address the limitations of their analysis. First, the data and methods section is incomplete; the analysis cannot be reproduced by other researchers using the same data source. Which portions of the data were excluded? What are the characteristics of the researchers included? The 317 selected researchers were observed over ten years, but the calendar period is unclear. To be included, researchers had to have obtained more than CHF 1000 per year on average over these ten years, yet this threshold is not justified. Furthermore, it is not always clear from the article which statements and conclusions are based on which statistical tests and quantities.

The descriptive data analysis (e.g. scatter plots with R² values) presented in the article is simplistic. The data used by the authors are longitudinal, e.g. yearly funding amounts and yearly numbers of published articles. The authors then aggregated these data by averaging the annual funding amounts and the publication metrics over the ten years. We therefore do not know whether the researchers increased their publication output after acquiring funding or were already highly cited before receiving SNSF funding. The timing is crucial to distinguish pre-grant outputs from post-grant outputs. Relying on aggregated and averaged data renders the conclusions speculative. At the very least, more descriptive statistics on the researchers would be needed. Did they have several grants or just one? What was their role in the project (PI or collaborator)? Was the funding amount adjusted for the number of PIs, or was the total amount of funding assigned to each researcher? This information is crucial to assess whether the sample of researchers used in the analysis is homogeneous or whether the presented correlations could be masked by confounding variables.
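The following toy example (hypothetical numbers, not taken from the data) illustrates why averaging over the whole period hides the timing of outputs: two researchers with identical average yearly output can have entirely different pre- and post-grant trajectories.

```python
# Toy illustration: identical ten-year averages, opposite trajectories around the grant.
import numpy as np

years = np.arange(1, 11)
grant_year = 6  # hypothetical year in which both researchers receive their grant

pubs_a = np.array([5, 5, 5, 5, 5, 1, 1, 1, 1, 1])  # productive mainly before the grant
pubs_b = np.array([1, 1, 1, 1, 1, 5, 5, 5, 5, 5])  # productive mainly after the grant

for name, pubs in [("A", pubs_a), ("B", pubs_b)]:
    pre = pubs[years < grant_year].mean()
    post = pubs[years >= grant_year].mean()
    print(f"Researcher {name}: overall mean {pubs.mean():.1f}, "
          f"pre-grant {pre:.1f}, post-grant {post:.1f}")
```

A scatter plot of averaged funding against averaged output treats both researchers identically, even though only the second pattern would be compatible with funding boosting productivity.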

Another important point is that researchers with small amounts of SNSF funding may have access to funding from other sources. In addition to their home institutions’ resources, they can apply to international funding schemes or receive funding from industry or charitable funders. Therefore, a low level of SNSF funding does not necessarily indicate a lack of resources. We acknowledge that it is difficult to gather complete funding information; however, a clearly defined control group could compensate for this. Information on rejected applications is classified as sensitive and is therefore not publicly available. It is standard in the literature to work with a control group design to estimate the effectiveness of funding (see Heyard & Hottenrott (2021) for a recent review). If rejected applications are not available, information on comparable researchers can be obtained from other data sources (see, for instance, studies by Arora & Gambardella, 2005; Beaudry & Allaoui, 2012; Benavente et al., 2012; Hottenrott & Lawson, 2017; Hottenrott & Thorwarth, 2011; Tahmooresnejad & Beaudry, 2018). In general, conclusions on the effects of funding on outputs can only be drawn with a control group design. A well-known guide is provided by Jaffe (2002), summarising the foundations of program evaluations; guidance which other studies in the field typically follow.

A final aspect of the relationship between funding and productivity, not discussed in Mariethoz et al., is the grant competition as such. Ayoubi et al. (2019) compared the productivity of SNSF Sinergia grantees to the productivity of Sinergia applicants whose projects were rejected. They concluded that participation in the research grant competition was enough to increase the productivity of the researchers, even if they did not obtain the desired grant. All the researchers analysed in Mariethoz et al. entered the competition and were successful at least once; hence, they will tend to be among the more successful and productive scientists in the field. Moreover, within a single field, funding amounts may depend on project characteristics (such as the type of work, duration, and group size). It is thus not surprising that the correlations in the selected sample appear small.

Because the time dependency of the data and confounding variables are not taken into account, we argue that the claims put forward by Mariethoz et al. and summarised above are problematic and not supported by the data. The article also cites only studies that support its line of argument and ignores the much larger stream of research that finds positive correlations between funding and research outputs. Specifically, other studies examined the productivity of SNSF-funded researchers over time and included meaningful control groups, so that the effects of funding could be estimated more validly (Ayoubi et al., 2019; Heyard & Hottenrott, 2021).

SNSF funding, productivity and dissemination: analysis of more than 8'500 researchers

Heyard & Hottenrott (2021) used a propensity score matching procedure to compare the research productivity of cases (SNSF funded researchers) and controls from 2005 to 2019. Note that the funding outcome of this analysis was the funding decision, rather than the funding amount. Research productivity was measured through standard citation and publication counts, the Relative and Field Citation Ratios, and the Altmetric score. To compute the propensity scores, important demographic information on the researchers, i.e. confounding variables, were considered. The evaluation scores of the submitted projects were also included. Such a modelling approach facilitates the estimation of the effect an SNSF grant has on the research productivity. Furthermore, Heyard and Hottenrott took a quantitative multi-method approach. Mixed multivariate regression models were used in addition to the propensity score matching to relate funding to productivity, while taking into account confounding variables. Figure 1 shows the evolution of the number of scientific publications by funding status of the researcher in the year before, as co- or main applicant. From these crude data, one could conclude that a researcher without SNSF funding (in t-1) publishes on average up to two fewer articles than an SNSF grantee (in t-1) at t. However, as discussed in the article, confounding variables play an important role. Such confounders are defined as variables associated with funding success and publication habits: age, institution type, research area, year, gender, and project evaluation scores.

Fig. 1: Trends of the crude yearly publication count, depending on whether a researcher had no SNSF funding (orange), SNSF funding as a co-applicant in a project (green), or SNSF funding as the main applicant (blue) in the previous year. From Heyard and Hottenrott (2021).

After accounting for these confounding variables, the results in Heyard and Hottenrott indicate an effect size of about one additional scientific publication in each of the three years following the funding. These results align with those for public grants in the UK (Hottenrott & Lawson, 2017). A similar effect was observed for preprints. Additionally, a higher average Altmetric score suggests that funded research attracts more public attention than other research. Finally, Heyard and Hottenrott give a comprehensive overview of the relevant literature regarding the potential impact of funding on research outcomes.

A call for more research on research

The literature on how funding relates to research productivity at the individual and group level is still sparse; these are challenging questions to answer. Most studies have significant limitations, such as results that do not generalise to all research areas, the difficulty of constructing proper control groups, and limited access to the required data. But this should not discourage anyone; rather, it underlines that additional work in the larger field of 'Research on Research' is urgently needed. The effectiveness of grant peer review and the role of chance in the funding allocation process, both briefly mentioned in Mariethoz et al. (2021), are further examples. It is well established that peer review of grant proposals has several limitations, and bias against highly innovative and risky proposals is well documented (Cole et al., 1981; Guthrie et al., 2018). Fang & Casadevall (2016) even argue that the current grant allocation system employed by many funders is “in essence a lottery without the benefits of being random” and that the role of chance should be explicitly acknowledged. Previous studies suggested that peer review has difficulties discriminating among applications that are neither clearly competitive nor clearly non-competitive (Klaus & Alamo, 2018; Scheiner & Bouchie, 2013), which is why some argue for using a modified lottery in the evaluation process (Fang & Casadevall, 2016). The SNSF is investigating how lottery elements may usefully be incorporated into its evaluation procedures (Bieri et al., 2021; Heyard et al., 2021).
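To illustrate what such a modified lottery could look like in principle (this is not the SNSF’s actual procedure; the thresholds, scores and budget below are hypothetical), clearly competitive proposals are funded, clearly non-competitive ones are rejected, and the remaining budget is drawn at random from the middle group:

```python
# Illustrative sketch of a modified lottery for proposals in the middle evaluation band.
# Scores, thresholds and budget are hypothetical.
import random

random.seed(42)

# Hypothetical proposals: (identifier, panel score on a 1-6 scale)
proposals = [(f"P{i:02d}", round(random.uniform(1, 6), 1)) for i in range(20)]
budget = 8                                # number of grants that can be funded
fund_threshold, reject_threshold = 5.0, 3.0

clearly_fundable = [p for p in proposals if p[1] >= fund_threshold]
middle_group = [p for p in proposals if reject_threshold <= p[1] < fund_threshold]

# Fund the clearly competitive proposals first (best scores first)
clearly_fundable.sort(key=lambda p: p[1], reverse=True)
funded = clearly_fundable[:budget]

# Allocate any remaining budget at random among the middle group
remaining = budget - len(funded)
if remaining > 0:
    funded += random.sample(middle_group, min(remaining, len(middle_group)))

print("Funded:", sorted(p[0] for p in funded))
```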

In conclusion, more research on research is required to define the best practices in funding allocation and to better understand the value that research funding and research contribute to society beyond publication metrics.