
Attracting research funding is a central aspect of academic achievement. Researchers write and submit proposals, which are peer-reviewed so that panels can evaluate the quality, potential impact, relevance and innovation of the proposed work. It is also essential that researchers have the qualifications to conduct the proposed research. On that basis, one may intuitively think that scientists who can write an excellent research proposal are more likely to produce excellent publications. Indeed, funding attribution based on past scientific success is often advocated (Gross and Bergstrom 2019). Scientists work hard on their research, in the hope that creative ideas and well-designed protocols will be reflected in their publication record, which in turn will enable them to secure future research funds. This vision is widely shared today by recruitment, tenure and promotion committees, which consider grant success a good selection criterion.

Here we investigate whether funding success correlates with research productivity or impact, using grant outcome data of the Swiss National Science Foundation (SNSF) Division II projects in Earth and Environmental Sciences. We focus on a single discipline to obtain a homogeneous population of researchers with similar excellence criteria. This funding scheme is ideal for such an analysis because the eligibility requirements are broad, the main restriction being that an individual can hold at most two projects at the same time as main Principal Investigator (PI). It is designed as the workhorse of Swiss research funding, with a yearly budget of about CHF 500 million, the aim of supporting all scientific areas, and a high success rate of about 50%. It can therefore be assumed that most established researchers regularly submit such projects. The proposal guidelines specify that evaluations are based on two main criteria (Swiss National Science Foundation 2016): (1) the scientific quality of the proposed research project: scientific relevance, topicality and originality, suitability of methods, feasibility; (2) the scientific qualifications of the researchers: scientific track record and ability to conduct the proposed research. Evaluation relies on peer review. Members of the panel then assess the proposal and the reviews, propose a ranking and decide whether funding should be awarded. Panels include established scientists from diverse disciplines.

Here we analyse funding outcomes of SNSF Division II projects awarded during the last ten years in Earth Sciences and Environmental Sciences (data publicly available in the P3 database of the SNSF, http://p3.snf.ch). For each researcher working in a Swiss university or research institution who obtained at least one grant in these disciplines during this period, we record how much funding was secured and for how many years the person was eligible for funding. Only researchers who obtained more than CHF 1000/year on average over the last ten years are considered, resulting in a population of 317 researchers. For the same pool of researchers, we also analyse the publication and citation record as inventoried in the Scopus database (www.scopus.com), using a series of metrics. We consider both raw metrics and metrics normalized by the academic age of the researcher, defined as the number of years since the first paper (Table 1). These include the total number of published papers, the number of citations, the number of highly cited first-author papers, and the H-index (Hirsch 2005), as well as these four quantities divided by the academic age of the researcher. Normalized metrics filter out career-stage biases and may therefore better reflect research impact; examples are the M-quotient (the H-index divided by the number of years since the first paper) and the rate of highly cited first-author papers (defined as papers with over 100 citations). We acknowledge that these metrics do not necessarily reflect the quality of the research produced or the excellence of researchers, and that their meaning varies from one discipline to another. However, one may expect a well-funded scientist to be, on average, more productive than one who is not.
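
To make these definitions concrete, the minimal sketch below computes the age-normalized metrics from a per-researcher record. The record structure, the field names and the reference year are illustrative assumptions for the example, not the actual pipeline used for this analysis.

```python
from dataclasses import dataclass

@dataclass
class Researcher:
    """One researcher's record (field names are illustrative)."""
    first_paper_year: int      # year of first published paper (from Scopus)
    n_papers: int              # total number of published papers
    n_citations: int           # total number of citations
    n_hot_first_author: int    # first-author papers with over 100 citations
    h_index: int               # H-index (Hirsch 2005)
    funding_total: float       # total grant income over the 10-year window, CHF

def normalized_metrics(r: Researcher, reference_year: int) -> dict:
    """Divide raw output metrics by academic age (years since first paper)."""
    academic_age = max(reference_year - r.first_paper_year, 1)
    return {
        "papers_per_year": r.n_papers / academic_age,
        "citations_per_year": r.n_citations / academic_age,
        "hot_papers_per_year": r.n_hot_first_author / academic_age,
        "m_quotient": r.h_index / academic_age,          # H-index / academic age
        "avg_funding_per_year": r.funding_total / 10.0,  # 10-year average
    }
```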

Table 1 Quantitative metrics of research output used in this study

The results of our analysis are displayed in Fig. 1 as scatter-plots of publication metrics against funding outcomes. Each dot represents one of the 317 researchers considered. The correlation of each quantitative research metric with 10-year average grant income is indicated as an R² value on each plot, along with the 5% and 95% confidence intervals of R estimated by a bootstrapping procedure (10'000 samples). Note that all variables except the year of the first published paper are log-normally distributed (see supplementary material). We therefore use their decimal logarithms to estimate correlations. The distributions of the variables and the untransformed data are also available in the supplementary material.
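
As an illustration of this procedure, the sketch below estimates the R² value and the 5%-95% bootstrap confidence interval of the Pearson coefficient for one metric/funding pair. The resampling unit (researchers, drawn with replacement) and the fixed seed are our assumptions for the example.

```python
import numpy as np

def r2_with_bootstrap_ci(x, y, n_boot=10_000, seed=0):
    """R² of the Pearson correlation between x and y, plus the 5% and 95%
    bootstrap percentiles of R. Inputs are paired per-researcher values,
    already log10-transformed (all variables except the year of the first
    paper are roughly log-normal)."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample researchers with replacement
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    r = np.corrcoef(x, y)[0, 1]
    ci_low, ci_high = np.percentile(r_boot, [5.0, 95.0])
    return r**2, (ci_low, ci_high)
```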

Fig. 1

Decimal logarithm of the average funding received by each researcher (computed over the last 10 years) as a function of the logarithm of various metrics of scientific excellence (except the year of the first paper, which is normally distributed). Each dot represents a Swiss researcher in the fields of Earth Science or Environmental Science who was PI on an “Earth and Environmental Sciences” Division II project. Each scatter-plot shows 317 researchers, except the one for the log of the number of highly cited papers per year, which shows only the 168 researchers who have at least one highly cited first-author paper. Titles indicate R² values, with the 5% and 95% confidence intervals of the Pearson correlation coefficient, obtained by bootstrapping, in brackets.

The correlations found are surprisingly low, but this is consistent with studies in other disciplines, notably medicine (Jacob and Lefgren 2011). The strongest relationships in Fig. 1, albeit weak, are with non-normalized metrics that typically increase monotonically with the age of the researcher: the year of the first published paper, the H-index, the total number of citations and the total number of published papers (R² in the range 0.12–0.14). These indicate a weak tendency for slightly more money to go to older and more established researchers. However, the most notable finding of this analysis is the absence of a correlation with the M-quotient (R² = 0.079) and the rate of highly cited papers (R² = 0.0074), indices considered to reflect impactful contributions and used in previous studies (Nicholson and Ioannidis 2012). Like many funding bodies, the SNSF adheres to the DORA declaration (https://sfdora.org), which states that publication and citation numbers alone are insufficient and questionable measures of research quality. However, most researchers would expect some relationship between these metrics and grant funding, because the metrics should reflect, even very imperfectly, a researcher's ability to have ideas and convert them into impactful research. One may argue that the SNSF panels are wise enough to weigh the value of a publication record against more fundamental scientific advances that are not necessarily highly cited. However, this argument does not hold when we consider that project funding rests on an expected correlation between the excellence of a proposal and the impact of the research outcomes, which is one of the criteria for awarding funding. Furthermore, the panels' ability to apply such nuance can be questioned because, in our analysis, raw metrics are more strongly correlated with funding than normalized ones, despite being known to be misleading criteria of research excellence (The PLoS Medicine Editors 2006).

Our data also indicate that panel members are awarded, on average, 2.2 times more funding than other scientists (based on only 7 researchers in our analysis who are also panel members). This does not necessarily mean that panel members are systematically favoured in the evaluation: panel members are typically selected among already well-funded researchers, and conflict-of-interest regulations are strictly enforced at the SNSF. However, since panel members are typically more established researchers, this may be an indication of a bias towards funding more senior researchers, a bias also shown by the statistically significant correlation between funding and age. Furthermore, it may raise questions about a system that particularly reinforces seniority biases.
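
For transparency, a ratio of this kind can be obtained with a simple aggregation of the form sketched below, assuming a table with one row per researcher; the column names are hypothetical.

```python
import pandas as pd

def panel_funding_ratio(df: pd.DataFrame) -> float:
    """Mean yearly funding of panel members divided by that of the others.

    df: one row per researcher, with an 'avg_funding_per_year' column (CHF)
    and a boolean 'is_panel_member' column (both names are illustrative).
    """
    means = df.groupby("is_panel_member")["avg_funding_per_year"].mean()
    return means.loc[True] / means.loc[False]
```

With only 7 panel members in the sample, such a ratio is of course indicative rather than statistically robust.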

The main limitation of our analysis is that data on rejected grants are not available. Our assumption here is that the researchers who received less funding did submit proposals, which were subsequently rejected. While we cannot validate this assumption, it is likely true: the broad eligibility requirements, the absence of restrictions on resubmission and the high success rate leave little rationale for a researcher not to apply for funding over a 10-year period. Furthermore, we only consider researchers who were funded at least once by this scheme, and who are therefore fully aware of the opportunities it offers.

At this stage, we can only speculate on the causes of this lack of correlation. We did not, fortunately, observe any gender bias: the average funding per year is very similar for researchers of both genders (CHF 66'084/year for women and CHF 65'264/year for men). One hypothesis to explain the seemingly random distribution of funding is a deliberate effort to avoid possible biases, such as gender, institution or sub-discipline biases (Severin et al. 2019). If the elimination of these multiple biases dominates the selection process, it may come to the detriment of other criteria such as scientific excellence or value.

In the absence of a definitive explanation, the data seem to indicate that the SNSF Division II project funding scheme acts, with regard to track record, in a manner equivalent to a random lottery. Funding by lottery has been advocated in the past (Adam 2019; Ioannidis 2011) and has been implemented in some instances (Liu et al. 2020), particularly for projects that have passed a first selection threshold. If the outcome of a careful evaluation of projects were equivalent to a random draw, a lottery would offer the advantage of saving the considerable time and energy spent preparing and assessing grants, as well as an easy way of avoiding biases of gender, age or discipline. We acknowledge that the Swiss research landscape performs at a high level in international comparison (Laverde-Rojas and Correa 2019), indicating that its evaluation system produces positive incentives and positive research outcomes. As we discuss below, the present analysis emphasises, above all, the need to clarify the evaluation criteria of the SNSF and of funding bodies in general.

From the point of view of the funding body, if the intention is only to fund excellent projects regardless of the applicant, a transparent alternative to the current evaluation system would be to disregard track records entirely by making applications anonymous (a step in this direction has been taken by the SNSF with the introduction of the SPARK pilot funding scheme). Such a strategy can be called a project-based evaluation philosophy. However, if the intention is to fund excellent scientists and reward them, the track record of the applicants should be taken into account. One solution could be to design evaluation criteria that consider the past achievements of researchers, and to reinforce mechanisms that reward the delivery of results from previously funded projects. This could then be termed a career-based evaluation philosophy. Importantly, in such a system the criteria should apply equally to proposals submitted by evaluation panel members themselves, whose larger funding could otherwise be perceived as undue compensation, especially considering that their work on the panel is already remunerated.

In the authors’ experience, academic evaluation procedures (i.e., hiring, tenure, or academic kudos, which can be seen as the “carrot” motivating scientists) put considerable emphasis on researchers’ ability to raise funding. This emphasis rests on the perception that many funding agencies have chosen a career-based evaluation strategy. However, our analysis seems to contradict this perception, showing that the SNSF has here opted for a project-based evaluation philosophy. Indeed, some researchers have benefitted from high levels of funding over the last 10 years without numerous papers, many citations or high-impact publications. In writing this, we are conscious of the multiple limitations of an analysis based on metrics. We fully support the principles stated in the DORA declaration; however, it seems that here the attribution of funds goes further than DORA in the direction of a project-based philosophy by apparently ignoring the publication record of researchers.

Our main concern is that the attribution philosophy of the SNSF and of funding bodies in general, whether project-based or career-based, is often not well known amongst researchers and career evaluation committees in universities. Funding outcomes have a significant impact on the perception of one’s research qualities and performance, because of the frequent assumption that funding evaluation criteria are career-based. We find it urgent to correct this perception. While the scheme analysed here clearly follows a project-based philosophy (funding outcomes are not significantly correlated with scientific productivity), its outcomes are often used as criteria for hiring decisions. We recommend that funding bodies transparently position themselves as either project-based or career-based, and communicate clearly in those terms. Analyses such as the one presented in this paper should be carried out broadly by funding agencies to determine which incentives they create among scientists and the institutions that hire them. A lack of clarity in the philosophy adopted may lead to unintended and undesirable consequences, such as career advancement based on lottery outcomes.