
Reply to the comment by Heyard et al. titled “Imaginary carrot or effective fertiliser? A rejoinder on funding and productivity”

In their Comment on our previous paper (Mariethoz et al., 2021), Heyard et al. state that our “claims are not supported by the data and analyses reported in the article” and that our “analysis is not reproducible for other researchers using the same data source”. They question both our methodology and our conclusions, comparing them to their own previous work and conclusions, which are different. We contest these statements and explain below why they are unfounded.

A central criticism of our analysis is that we use time-integrated data. We agree that our study does not consider the temporality of funding and research outcomes, i.e. we do not differentiate pre-grant and post-grant outputs. First, attributing a publication to a specific grant is difficult, if not highly artificial; such temporality is therefore hard to establish. Second, and more importantly, the temporality of funding and publications does not matter for our conclusions, which focus on establishing whether or not a correlation between funding and outcomes exists. More concretely, we can envision two interpretations for the observed lack of correlation: (1) funding does not result in more research outcomes, or (2) research findings and associated publications do not result in increased funding success. Our analysis cannot and does not intend to distinguish between these two interpretations, but in either case there is cause for questioning, which is the aim of our article.

Regarding reproducibility, our sources are public data clearly indicated in our paper (the SNSF P3 and Scopus databases). The criteria for selecting researchers, grants, and the funding period are clearly stated, such that anyone can replicate our findings and figures. The only information we did not disclose is the names and affiliations of the researchers, but even those could be recovered by replicating our methodology. Heyard et al. argue that “the 317 selected researchers were observed over ten years, but the calendar period is unclear”. Our article text as well as the caption of Figure 1 mention “computed over the last 10 years”, which corresponds to the period 2010–2019 (the 10 years before our paper was submitted). While we could have indicated this more precisely (i.e. from 1.1.2010 to 31.12.2019), we do not believe that this level of accuracy would have any impact on the conclusions, given that funding decisions are announced twice a year and the publication date of a paper is considered at a 1-year granularity.

In their comment, Heyard et al. write that “to be included, researchers had to have obtained more than CHF1000/year on average over these ten years, but this is not justified”. We take the opportunity to do so here. We made the deliberate choice to consider only academics who were active in research during the studied period (as opposed to people who have left academia, moved outside Switzerland, retired, etc.). Most grant amounts in our study range between CHF 150′000 and over 1′500′000. In practice, to fall under the CHF 1000/year limit, a researcher must have held only one grant that ended a few months after the beginning of the studied period. This rule removed only a single researcher, which is unlikely to affect our conclusions. This point is discussed further later in this reply, as it constitutes a central criticism of the work of Heyard and Hottenrott (2020).

Heyard et al. request additional details on how we selected the researchers in our analysis. We only considered as grant holders people listed as “Responsible applicant” of SNSF Division II grants in the P3 database (i.e. the Principal Investigator, as stated in our paper). This therefore excludes co-PIs and collaborators, without further adjustment of the amounts, ensuring uniform treatment of all researchers.

Heyard et al. also qualify our study as “simplistic” because the data are longitudinal averages and the statistical methods we use involve standard correlations. Our analysis is simple and straightforward, owing to the nature of our data and our intention to present clear, reproducible and interpretable results. Heyard et al. use additional data from the SNSF on unsuccessful applicants, which are not publicly available. In our situation, it is not possible to compare granted and non-granted researchers, and hence difficult to define a control group. Because we restrict our analysis to a small and controlled dataset from a specific scientific domain (317 samples), a complex model with multiple covariates is not required to test for correlation and would not be statistically robust. We understand that the adjective “simplistic” is to be taken in comparison with the model used in the study of Heyard and Hottenrott (2020), which would be overly complex in our case. For instance, they pair funded researchers with unfunded ones of similar characteristics using nearest-neighbor matching, which requires a very large number of samples, as well as data on rejected grants that are unavailable to us. We argue that such a complex model lacks interpretability, in addition to the conceptual flaws and intractable simplifications that are pointed out later in this reply.
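To illustrate the kind of simple, interpretable test at stake here, the following is a minimal sketch on synthetic data; the variable names, distributions and sample values are assumptions for illustration only, not the published analysis:

```python
# Sketch of a rank-correlation test between time-integrated funding and a
# research-output metric, for a sample of 317 researchers (synthetic data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 317  # number of researchers in the study

# Hypothetical data: time-integrated funding (CHF scale spanning roughly
# 150'000 to over 1'500'000) and an output metric drawn independently,
# so no correlation exists by construction.
funding = rng.lognormal(mean=13.0, sigma=1.0, size=n)
output_metric = rng.normal(loc=20.0, scale=5.0, size=n)

rho, p_value = spearmanr(funding, output_metric)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
# With independent synthetic data, rho is expected to be close to zero.
```

A rank correlation such as Spearman's rho is robust to the order-of-magnitude spread in grant amounts, which is one reason a simple test of this kind remains statistically meaningful on a small sample.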

Heyard et al. next criticize our work for not considering other possible sources of funding available to researchers, although they recognize the significant methodological difficulties that doing so would involve. We acknowledge this but maintain that most researchers would still apply for SNSF Division II projects for the reasons mentioned in our paper: “the broad eligibility requirements, the absence of restrictions on resubmission and the high success rates mean that there is little rationale for a researcher not to apply for funding over a 10-year period. Furthermore, we only consider researchers that were funded at least once with this scheme, and who are therefore fully aware of the opportunities offered”. This is supported by our empirical experience that most researchers in the Swiss geoscience landscape use SNSF Division II projects as a central source of funding.

The authors regret that our paper does not discuss the “fertilizer” effect of grant writing and point to the study by Ayoubi et al. (2019) focusing on SNSF Sinergia grants. Notwithstanding the fact that the Sinergia scheme is inherently collaborative, unlike Division II grants, and the important body of literature pointing to a potential waste of resources related to grant writing (e.g. Kaplan et al., 2008; Fortin & Currie, 2013; Herbert et al., 2013; Pier et al., 2018), this is not the discussion we engage in in our paper. As mentioned in the first and last paragraphs of our paper (and reiterated in the first paragraphs of this Reply), we focus on the weight given to grant income by hiring committees when they shortlist and appoint applicants. In our opinion, there are deep misunderstandings in the community of Swiss geoscientists regarding an expected link between funding success and research excellence, which we find important to demystify.

The authors then claim that we “ignore a larger stream of research that finds positive correlations between funding and research outputs”, but mostly cite their own work. It was not our intention to exclude a particular school of thought, especially since our article was discussed with the SNSF (the authors’ employer) prior to submission, and suggested peer-reviewed references were included.

The section of the Comment titled “SNSF funding, productivity and dissemination: analysis of 8′527 researchers” discusses the authors’ own unpublished work (Heyard & Hottenrott, 2020), pointing out that their conclusions, obtained with a different model and a different dataset, contradict ours. While their study differs from ours in several respects, we wish to point out a number of shortcomings that struck us:

  1. The study focuses excessively on the number of papers published by a researcher, which is not a reliable indicator of research quality. As we mention in our study, considering only the number of papers has been shown to be misleading, and using a multitude of career-integrated metrics is a more reliable measure of a researcher’s excellence (if such metrics can be considered reliable at all, which can be debated). Furthermore, increasing the number of published papers is not, in our opinion, the goal of a taxpayer-supported funding agency, especially in the context of the current unsustainable increase in the number of published papers and the emergence of predatory publishers (Butler, 2013).

  2. One main shortcoming of Heyard and Hottenrott (2020) is the possibility of a bias in the data used. It is mentioned on page 10 that 12% of the researchers were removed from the dataset because they did not have a unique ID in the Dimensions database, creating a potential sampling bias. Furthermore, it is indicated on page 14 that of the 8′527 remaining researchers, 1′583 did not publish any peer-reviewed papers in the preceding five years. This represents 18.5% of the individuals considered in the study who may not be research-active or could have left academia. Such potential biases could be sufficient to explain the main finding of one additional yearly publication for grant holders, which corresponds to a 20% increase given that researchers publish on average 4.9 papers per year.

  3. Considering only a binary measure of whether researchers are funded or not can be seen as simplistic, because it hides large disparities in funding levels. As mentioned above, grant amounts vary by more than one order of magnitude. Pooling together individuals who receive grants of CHF 150′000 and over CHF 1′500′000 is a drastic transformation of the data that, in our opinion, makes the rest of the analysis dubious.

  4. In light of the above, the evidence supporting the conclusions of Heyard and Hottenrott (2020) appears questionable, and it is their study rather than ours that is not supported by data. The fact that their analysis relies on undisclosed data means that it is not reproducible.

The last section of the Comment (“A call for more research on research”) provides some general perspectives on future research needs, which in our opinion are not directly related to the main focus of our paper. While we agree on the general need for more research, we refer the reader to a recent and much more comprehensive review of these questions by De Peuter and Conix (2021).

To conclude, we also want to point out that the first two authors of the Comment are directly employed by the SNSF. We wish to clarify that our intention is not to be critical of the SNSF, but to rectify perceptions that can lead to biases in academic hiring and promotion committees in Earth and Environmental Sciences.

References

  1. Ayoubi, C., Pezzoni, M., & Visentin, F. (2019). The important thing is not to win, it is to take part: What if scientists benefit from participating in research grant competitions? Research Policy, 48(1), 84–97.


  2. Butler, D. (2013). Investigating journals: The dark side of publishing. Nature, 495(7442), 433–435.


  3. De Peuter, S., & Conix, S. (2021). The modified lottery: Formalizing the intrinsic randomness of research funding. Accountability in Research, 1–22. https://doi.org/10.1080/08989621.2021.1927727

  4. Fortin, J. M., & Currie, D. J. (2013). Big science vs. little science: How scientific impact scales with funding. PLoS ONE, 8(6). https://doi.org/10.1371/journal.pone.0065263

  5. Herbert, D. L., Barnett, A. G., Clarke, P., & Graves, N. (2013). On the time spent preparing grant proposals: An observational study of Australian researchers. BMJ Open, 3(5), e002800. https://doi.org/10.1136/bmjopen-2013-002800

  6. Heyard, R., & Hottenrott, H. (2020). The impact of research funding on knowledge creation and dissemination: A study of SNSF research grants. arXiv preprint arXiv:2011.11274. http://arxiv.org/abs/2011.11274

  7. Kaplan, D., Lacetera, N., & Kaplan, C. (2008). Sample size and precision in NIH peer review. PLoS ONE, 3(7), e2761. https://doi.org/10.1371/journal.pone.0002761

  8. Mariethoz, G., Herman, F., & Dreiss, A. (2021). The imaginary carrot: No correlation between raising funds and research productivity in geosciences. Scientometrics, 126(3), 2401–2407.


  9. Pier, E. L., Brauer, M., Filut, A., Kaatz, A., Raclaw, J., Nathan, M. J., Ford, C. E., & Carnes, M. (2018). Low agreement among reviewers evaluating the same NIH grant applications. Proceedings of the National Academy of Sciences, 115(12), 2952.


Author information

Corresponding author

Correspondence to Gregoire Mariethoz.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Mariethoz, G., Herman, F. & Dreiss, A. Reply to the comment by Heyard et al. titled “Imaginary carrot or effective fertiliser? A rejoinder on funding and productivity”. Scientometrics (2021). https://doi.org/10.1007/s11192-021-04131-6
