1 Introduction

Agriculture and its related activities contribute 23% of global anthropogenic greenhouse gas emissions (IPCC 2019), and significant emissions reductions are needed from the sector to meet the target of limiting global warming to 2 degrees Celsius as set out in the Paris Climate Agreement (Wollenberg et al. 2016). Efforts to enhance ambition in the lead-up to the 26th Conference of the Parties (COP) of the United Nations Framework Convention on Climate Change (UNFCCC), to keep warming well below 2 degrees and to strive toward 1.5 degrees, would require even greater ambition within the sector. At the same time, the sector is the source of livelihoods for those dependent on the world's more than 475 million small farms (Lowder et al. 2014). These small-scale farmers are among the most vulnerable to the impacts of climate change, and actions are needed to enable them to cope with these impacts (Loboguerrero et al. 2018). Both mitigation and adaptation actions are urgent, as the world has seen an increase in hunger since 2014 (FAO 2018). In this context, the Agricultural Research for Development (AR4D) community needs to step up efforts to innovate in the face of climate change and to inform decision-making so as to ensure large-scale uptake of innovations (Dinesh et al. 2018; Steiner et al. 2020; Vermeulen et al. 2012b). Science–policy engagement has become a crucial tool for researchers working on agriculture and climate change to inform decision-making and enhance the impact of their work (Dinesh et al. 2018; UNEP 2017).

Research on science–policy engagement in the context of environmental change has identified ways to improve the efficacy of these efforts (Cash et al. 2003; Clark et al. 2016a; Holmes and Clark 2008; Kristjanson et al. 2009). Many of these lessons are drawn from successful case studies, and empirical studies are still emerging (Dunn and Laing 2017; Van Enst et al. 2014). So far, lessons have not been generated systematically from failures (Turnhout et al. 2020; Wyborn et al. 2019), even though failure can be a powerful tool to facilitate innovation; as Thomas Watson said, “the way to succeed is to double your failure rate” (von Stamm 2018). Lesson learning from failure has been found to drive innovation in various contexts (Danner and Coopersmith 2015; Heath 2009; Knott and Posen 2005; von Stamm 2018), including telecommunications (Baumard and Starbuck 2005), information technology (Gupta et al. 2019), policy-making (Dunlop 2017), pharmaceuticals (Khanna et al. 2016), and microfinance (Woolcock 1999). Despite these advances, our understanding of failures and of lesson learning from failures remains limited (McGrath 2011), especially in the case of science–policy engagement for climate action in agriculture. There is an opportunity to address this knowledge gap while applying the lessons generated to improve the efficacy of science–policy engagement efforts and thus accelerate climate action.

Science–policy engagement scholars have identified challenges involved in the engagement process (e.g., Laing and Wallis 2016; Neßhöver et al. 2013; Sarkki et al. 2014; Talwar et al. 2011; Van Enst et al. 2014). However, many of these insights emerge from studying successful cases; while successes are recorded and reported, failures often remain undetected or are neglected (McGrath 2011; Rajkotia 2018; Vinck 2017). At the same time, studies in science–policy engagement show that current approaches to informing policy processes are not always delivering sufficient results (Hoppe et al. 2013; Kirchhoff et al. 2013; Strydom et al. 2010; van Kerkhoff and Lebel 2006), and there is a need to shift to fundamentally different approaches. Scholars have noted that failures in science–policy engagement are inevitable (Armitage et al. 2015; Lawton 2007; Wyborn et al. 2019), yet no effort has been undertaken to systematically generate lessons from these failures. In this context, this paper aims to generate lessons from the unsuccessful science–policy engagement efforts and challenges of the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS).

CCAFS is an international research program with a focus on outcome-oriented research (Thornton et al. 2017; Vermeulen et al. 2012a), working with over 700 partner organizations at the local, sub-national, national, regional, and global levels to improve the livelihoods of small-scale farmers in the face of climate change. Outcome delivery is a key criterion for measuring the performance of projects. CCAFS interprets outcomes as changes in the policies and practices of non-research partners (Dinesh et al. 2018; Earl et al. 2001); this includes informing the policies and practices of governments, international organizations, the private sector, and non-governmental and farmer organizations. Examples include informing national policy and associated investments in Cambodia through participatory scenarios, and the provision of climate services to farmers in Senegal (Westermann et al. 2018). The program’s performance in delivering such outcomes is monitored through annual reporting processes. CCAFS’ emphasis on outcome delivery, and on science–policy engagement as a tool to achieve outcomes, makes it a good case to study in the context of the emerging literature on Science–Policy Interface Organizations (SPIORGs) (Sarkki et al. 2019), within the wider literature on boundary organizations (Guston 2001).

CCAFS management is open to learning from its experiences, and past studies have generated lessons from the program’s science–policy engagement efforts (Cramer et al. 2018; Dinesh et al. 2018; Zougmoré et al. 2019); however, even within CCAFS, lessons from unsuccessful efforts and challenges are yet to be studied systematically. Studying failure within organizations is difficult because of psychological and organizational barriers (Cannon and Edmondson 2001). However, as a research program with a mandate for “lesson learning”, CCAFS is open to learning from its failures. In the present paper, we have endeavored to combine insider perspectives from two of the authors associated with the program with outsider perspectives from the other co-authors. The research questions we answer, in the context of facilitating change for climate action in agriculture through science–policy engagement, are: What challenges and failures are faced? And what strategies can be adopted to overcome them? To answer these questions, we first developed an explanatory framework based on the literature, consisting of factors which could potentially explain failure in science–policy engagement efforts. We then used this framework as the basis for a survey administered to CCAFS’ project leaders and coordinators. We analyzed the survey results to identify challenges and failures in the CCAFS context and developed an approach to “fail intelligently.” Thereafter, we conducted interviews with CCAFS management to validate our findings. Thus, in addition to contributing to the literature on science–policy engagement, failure management, and AR4D, the present paper will help researchers develop science–policy engagement strategies which are more resilient to challenges and failures.

2 Explanatory framework

We understand unsuccessful science–policy engagement efforts, or failures, as instances where the expected outcomes from efforts are not achieved, i.e., where goals are unmet (Kunert 2018; Leoncini 2017). In the context of CCAFS, this means efforts to drive changes in the policies and practices of non-research partners are unsuccessful. Failures arise as a result of challenges or “fail factors” which may be faced in the science–policy engagement process, and we consider these challenges or “fail factors” to be independent variables, with “failure of science–policy engagement efforts to achieve expected results” as the dependent variable. While several challenges may be experienced in policy-engagement processes, “fail factors” are differentiated by their direct link to the dependent variable. From the perspective of executing science–policy engagement efforts, the existence of these “fail factors” may be considered “early warning signs” (Leoncini 2017) that the expected outcome may not be achieved. The explanatory framework (Table 1) is envisaged as a context-specific tool to analyze failures (Edmondson 2011), with the proposed fail factors hypothesized based on the literature on science–policy engagement. Three of the hypothesized fail factors are a reversal of the success factors identified by Cash et al. (2003), wherein the principles of credibility, salience, and legitimacy are considered key to successful science–policy engagement. We in turn assume that the absence of these success factors could lead efforts to fail. We complemented the Cash principles with additional factors, including the role of intermediaries, power dynamics, and institutional capacity, as these feature prominently in the literature on science–policy engagement efforts.

Table 1 Explanatory framework for failures in science–policy engagement efforts

3 Methods

Learning from failures within organizations is difficult, and even learning organizations struggle due to the challenges involved (Cannon and Edmondson 2005). These challenges include technical ones, due to a lack of understanding of processes to learn from failure, as well as social challenges which stem from psychological reactions to failure. In this context, learning from failure, although important, is a challenging endeavor. To overcome the technical challenges, we drew on the literature on failure management. To address the social challenges, we tried to create a safe and open environment for researchers to share the challenges and failures they have faced. This was done in several ways. First, while two of the authors are associated with CCAFS, the other authors are external to the program, ensuring greater objectivity. The survey was sent out by the second author, who is an academic and not directly involved in CCAFS, which helped ensure that respondents were not at risk of evaluation by the program’s management. Findings from the survey were processed anonymously and are presented at an aggregate level. From the responses, it is not possible to deduce the identity of individual respondents, nor is it possible to relate the responses to the performance of individuals. Despite this, there was substantial non-response, which might signal that talking or writing about failure is a delicate matter within CCAFS, as in other contexts.

In order to understand failures faced in CCAFS science–policy engagement efforts, we conducted a literature review and developed an explanatory framework (Table 1). We then used the explanatory framework to design a survey (Annex 1), which was administered to the Leaders and Coordinators of CCAFS projects. The objective of this survey was to validate the explanatory factors identified, gain further insights on how these factors affect science–policy engagement efforts, and identify additional explanatory factors. CCAFS had a portfolio of 54 ongoing research projects at the time of this study, and because failures are not formally reported and projects with explicit failures could not be identified in advance, we took an open-ended approach and sent the survey to the Leaders and Coordinators of all these projects. In addition to the current portfolio, we also contacted the Project Leaders and Coordinators of completed projects, to ensure that prior experiences were also captured. The survey was sent to a total of 156 recipients and we received 24 complete responses, which form the basis of our analysis. The response rate is fairly low compared to the average survey response rate of 52.7% (SD 20.4) in organizational research (Baruch and Holtom 2008), which illustrates the difficulty associated with studying failure. Thirteen recipients who attempted to answer the survey did not complete it, possibly due to uncertainty about issues related to failure (as explained by one respondent) or concern about disclosing these experiences. While we recognize that the sample is too small to be statistically significant, the insights gained are useful for interpretative qualitative research capturing experiences from the CCAFS context.
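As a quick arithmetic check on the figures reported above (using only the numbers already stated: 156 recipients, 24 complete responses, and the Baruch and Holtom benchmark), the response rate and its distance from the organizational-research benchmark work out as follows:

```python
# Response-rate arithmetic for the survey figures reported above.
# All numbers come from the text; this is only an illustrative check.
recipients = 156        # survey recipients
complete = 24           # complete responses used in the analysis

rate = complete / recipients * 100
print(f"Complete-response rate: {rate:.1f}%")  # ~15.4%

# Benchmark from organizational research (Baruch and Holtom 2008)
benchmark_mean, benchmark_sd = 52.7, 20.4
z = (rate - benchmark_mean) / benchmark_sd
print(f"Standard deviations from the benchmark mean: {z:.1f}")  # ~-1.8
```

At roughly 1.8 standard deviations below the benchmark mean, the rate underlines how delicate a topic failure is for respondents.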
To address the challenges associated with studying failure, the responses were anonymized, enabling respondents to share their challenges and unsuccessful science–policy engagement experiences frankly. The results were analyzed thematically (Guest et al. 2011); common themes were identified using an inductive approach and are presented in the “Results” section. Thereafter, the Leaders of CCAFS’ four flagship research programs (priorities and policies for CSA; climate-smart technologies and practices; low emissions development; and climate services and safety nets), together with the program’s Director and Head of Global Policy Research, were interviewed using a semi-structured approach (see Annex 2) to gather further insights. Based on these insights, we further refined the explanatory factors consisting of challenges in science–policy engagement efforts, and generated lessons to fail intelligently and to improve the efficacy of efforts. It must be noted that the survey respondents and interviewees do not represent research users; while it is important to capture the perspectives of users, in the present study we endeavored to gain greater granularity about the issues faced by researchers and perspectives on knowledge production.

4 Results

4.1 Knowledge generated is not perceived as credible

Forty-two percent of the respondents associated their challenges with the first type of factor listed in Table 1: demonstrating credibility to partners. For these respondents, the issues varied, ranging from time constraints in building credibility to the complexity and uncertainty involved in research outputs, which undermine efforts to build credibility. A lack of quantitative data to support engagement efforts, and of capacity to conduct the analyses required by decision-makers, also affected efforts to build credibility (Table 2).

Table 2 Inductive categorization of challenges encountered in demonstrating credibility

4.2 Knowledge generated is not salient

A majority of the respondents (63%) associated their challenges with research goals, questions, and results not being salient to the needs of decision-makers, the second category of explanatory factors from Table 1. Respondents encountered a number of challenges (Table 3), including a lack of sufficient conversations and dialogue with decision-makers, differences between the timelines of research and decision-making, difficulty retaining decision-makers’ attention, misunderstandings with decision-makers, and non-technical factors needed to inform decisions. Science–policy engagement can often be a long process, and it is important that salience is retained across this process even when other factors change. A respondent noted, “Main challenge here is with respect to continuous changes of government staff. When the project starts, the goals are aligned but once people move or leaders change, those goals, all of a sudden, become not very well aligned.” This points to the need for adaptive strategies to establish and retain salience in the engagement process. Such strategies included setting up science–policy multi-stakeholder platforms, getting results validated by decision-makers, applying quicker methods, being flexible to changes in the decision-making process, and developing a coherent theory of change and network mapping.

Table 3 Inductive categorization of challenges encountered in achieving salience

4.3 Knowledge generated is not legitimate

Only 22% of the respondents found their challenges to be related to decision-makers not finding the research legitimate, the third explanatory factor from our framework (Table 1). In the instances where decision-makers raised issues of legitimacy, this was due to the complexity of the research, a theoretical rather than practical orientation, a lack of sufficient information (including other views), conflicts of interest, and existing prejudices, for example, on gender roles.

In the climate change context, communicating uncertainty can be a factor which informs the perceived legitimacy of knowledge, and respondents undertook a number of efforts to communicate uncertainty effectively. These included participatory processes to engage stakeholders and make them aware of uncertainties, convening roundtables with decision-makers, developing multiple scenarios, and tailored approaches to supporting decision-making. Overall, a fair and balanced approach where researchers are upfront about the limitations and uncertainties associated with their research was the dominant strategy, and existence of different communication channels was crucial.

4.4 Engagement process lacked appropriate intermediaries

A lack of appropriate intermediaries was not found to be a problem, as a majority of the respondents (73%) relied on intermediaries, including knowledge brokers and boundary organizations, in their engagement efforts. These included Non-Governmental Organizations (NGOs), United Nations (UN) agencies, private sector consultancies, national research institutes, and government agencies (Table 4). In addition to institutions, the role of thought leaders and champions was crucial in several instances. These are individuals who are well connected and respected in decision-making processes and are able to connect researchers to these processes. Referring to one such thought leader, a respondent said, “He seems to have links to everyone. He invited CCAFS to participate in a working group that was going to consolidate efforts on adaptation tracking tools.” Of the respondents that did not use intermediaries, only two indicated that using intermediaries could have been valuable.

Table 4 Types of intermediaries engaged and their roles

4.5 Adverse power dynamics

Adverse power dynamics were a key factor affecting the science–policy engagement process, observed by 70% of the respondents, and the role of researchers within these dynamics influences the success or failure of efforts (Table 5). Respondents differed in how they viewed the role of researchers in such power dynamics. While some believed that researchers should remain distant and neutral, others believed that researchers should engage actively; as one respondent remarked, “when power dynamics are in play, you play within them. Scientists and science are not outside of political action, we’re in the middle of it, if not necessarily central to it.” Power dynamics are often not within the control of researchers, and in some instances the science community is not considered a political heavyweight; such factors are also taken into account when developing strategies to navigate these dynamics. Many approaches were taken to navigate the power dynamics encountered, including remaining neutral and evidence-based, providing quid pro quo support to help advance goals, engaging in political processes, and identifying champions who can help navigate the power dynamics. Overall, researchers need to be extremely cautious while engaging in such power dynamics, engaging proactively but respectfully.

Table 5 Inductive categorization of challenges in navigating adverse power dynamics in science–policy engagement

4.6 Lack of institutional capacity

Most of the respondents (78%) found that their impact partners had adequate capacity to absorb research findings. Where capacity gaps existed, these related to insufficient technical staff, limited capacity to achieve scale with initiatives, and a lack of understanding of the technology requirements and funding models needed for effective implementation.

4.7 Inductively derived fail factors in science–policy engagement

In addition to exploring the hypothetical fail factors of our explanatory framework, we posed an open-ended question on the top three reasons why science–policy engagement efforts failed to achieve expected outcomes, and respondents provided a number of different reasons, which we list as empirical fail factors (Table 6). Where these fail factors add further contextual detail to the hypothetical fail factors in our explanatory framework (Table 1), we have indicated this; other factors outside the explanatory framework are also listed. Our assumption that lack of salience is a key fail factor is validated, but the survey results show the nuance involved: while in some cases research results did not address the needs of decision-makers, in other cases there was a lack of demand for science-based solutions among decision-makers. Similarly, while we hypothesized institutional capacity gaps among partners to be a fail factor, we find that these gaps also extend to CCAFS researchers, manifesting as limited capacity for engagement and communications and for forming and maintaining partnerships. Differences in organizational cultures are also a key manifestation, emerging from a lack of capacity among both researchers and partners to adapt to the culture of the other. The main additional fail factor we identified concerns funding uncertainties which affected science–policy engagement efforts.

Table 6 Empirical fail factors in science–policy engagement efforts as a specification or addition to hypothetical fail factors

4.8 Fail factors contextualized in examples

The above sections draw on the experiences of project leaders and coordinators. Through additional interviews with CCAFS management, we identified concrete examples of failed science–policy engagement efforts, which help contextualize the above results. These examples are summarized in Table 7, together with the fail factors which led to the efforts failing. Notably, adverse power dynamics and lack of institutional capacity are the two predominant fail factors identified from the CCAFS management’s perspective. This may be because, in these examples, CCAFS management took a very proactive role through its portfolio management function to ensure that research results were salient, credible, and legitimate, complemented by support to form and develop partnerships. However, adverse power dynamics often affected the outcome, and in other cases the research partners chosen lacked the capacity or skills necessary to realize the outcome.

Table 7 Examples of failed science–policy engagement efforts identified through interviews with CCAFS management

5 Discussion

The results provide detailed empirical insights into failed science–policy interactions, a hitherto underexposed field of study. Experiences with failure were derived from reports of interviewees and, therefore, might to some extent be idiosyncratic. Nevertheless, they do provide new insights into challenges and failures which go beyond the factors currently identified in the literature. These insights drawn from unsuccessful efforts not only show “what not to do” but also how lessons can be generated systematically and how management can adapt to emerging failures in science–policy engagement efforts. In this section, we discuss the implications of the results for research and practice of science–policy engagement efforts.

5.1 Credibility, salience, and legitimacy

The three principles of enhancing credibility, salience, and legitimacy (Cash et al. 2003) have formed the basis of efforts to improve the efficacy of science–policy engagement. We hypothesized that the absence of these principles could lead efforts to fail. The results show that lack of credibility was not an important fail factor for respondents. While this is an important finding, science–policy engagement is context specific, and the specific contexts within which respondents operate could have influenced this. A previous study on the success factors of CCAFS science–policy engagement efforts (Dinesh et al. 2018) found that the credibility of the CGIAR and its researchers was a key success factor. This may point to a broader perceived credibility of the organization and explain why a lack of credibility was not faced by most respondents. From the respondents who did face this issue, we gain lessons useful for strengthening science–policy engagement efforts: spending time and effort to build credibility, addressing complexity and uncertainty, and producing case studies and quantitative data which can support engagement efforts.

Lack of salience, on the other hand, was found to be a key fail factor. However, this fail factor arises not only when efforts by researchers and research managers to make outputs salient prove insufficient but also when there is a lack of demand for salient knowledge. CCAFS has an emphasis on generating evidence salient to the needs of decision-makers (Dinesh et al. 2018; Zougmoré et al. 2019), and this emphasis has enabled the program to deliver successes recorded in the literature (Westermann et al. 2018). However, there are areas where this can be further strengthened, for example, by improving dialogue on problem definition/problem structuring to make results more relevant (Funtowicz and Ravetz 1997; van der Hel 2016), aligning the timelines of research and decision-making, accommodating changes in decision-makers, and communicating and engaging better. Development of salient knowledge needs to start from true interaction with next users (i.e., the immediate next users of research rather than ultimate beneficiaries), as opposed to retrofitting existing knowledge and tools to needs, since retrofitting creates path dependence (Interview-C 2019). In engaging next users, care must be taken to address criticisms of such engagement approaches, including the costs versus benefits and adverse power dynamics (Oliver et al. 2019; Turnhout et al. 2020; Wyborn et al. 2019).

Lack of legitimacy was also not validated as a fail factor by the respondents of our survey; this too may be a context-specific feature of CCAFS, where good practices around ensuring legitimacy have been noted in the literature (Vervoort et al. 2013; Zougmoré et al. 2019). We also considered issues around communicating uncertainty in relation to legitimacy, and found that a number of actions were taken to communicate uncertainty in a fair and balanced manner. However, as noted by Sarkki et al. (2015), management of uncertainty is considered important in relation to all three principles, and responses in relation to credibility also show the relationship between communicating uncertainty and the credibility of research outputs and institutions. The relationship between communicating uncertainty and salience has been studied by others (Bromley-Trujillo and Karch 2019); communicating uncertainty is therefore relevant to all three principles.

5.2 Institutional arrangements and capacity

Appropriate institutional arrangements and capacity are key to ensuring that knowledge leads to changes on the ground (Múnera and van Kerkhoff 2019). In this context, lack of institutional capacity among partner organizations was identified as a fail factor (Table 1). The results show that it is not only the absorptive capacity of partners that needs to be enhanced but also the capacity of researchers to do outcome-oriented research and engagement activities. The role of partnerships is central to delivering outcomes, including partnerships with boundary organizations, development agencies, government agencies, farmer organizations, etc., and a lack of suitable partnerships or the non-performance of partnerships has caused efforts to fail. In the Honduras case, for example, developing different and more in-country partnerships could have been effective (Interview-B 2019). This stems from a lack of capacity to develop and manage suitable partnerships. Although CCAFS has an emphasis on partnerships at the programmatic level, failure arises from the lack of the right partnerships in specific contexts. While this is difficult to pre-empt, as the performance of partners may change over time, adaptive management, which enables revisiting partnerships in response to needs, could be an effective strategy. Skills to develop partnerships also need to be fostered, as these tend to be different from research skills. As noted in the Mali case, skills to develop partnerships may have been absent, resulting in efforts not succeeding (Interview-C 2019). Models of partnerships tested in other contexts can also offer inspiration for CCAFS partnership-building efforts (Dentoni et al. 2018).

While CCAFS has been successful in leveraging the potential of science–policy engagement to achieve development outcomes (Dinesh et al. 2018; Thornton et al. 2017; Westermann et al. 2018), the degree to which the principles adopted at a programmatic level are operationalized varies: while some projects have taken these on board to deliver outcomes, other projects and efforts do not have effective science–policy engagement and communications strategies in place. CCAFS as a program advocates dedicating a third of research effort to engagement and communications (Dinesh et al. 2018); however, a key reason for failure was that researchers did not have sufficient time to dedicate to engagement and communications activities, for example, in the cases from Mali and South East Asia. Effective implementation of programmatic priorities, including through resource allocation and capacity building, can help overcome this to a certain extent.

Limited institutional capacity on the part of decision-makers has been identified as a fail factor. CCAFS does make efforts to build capacity, including an emphasis on institutional strengthening (CCAFS 2017); however, capacity gaps still exist among decision-makers. Strengthening efforts to build capacity is needed, but capacity is to some extent the result of the political and knowledge system, and a concerted effort beyond a single program or institution is needed to build the capacity of decision-makers to respond to the challenges of climate change. Research and decision-making are two entirely different cultures, and while science–policy engagement offers a way of addressing these differences, deep cultural differences can cause efforts to fail. For example, the timeframes that the two communities operate to are entirely different (Sarkki et al. 2014) and often impossible to reconcile. Knowledge brokers and translators can help bridge these differences, but a fundamental revisiting of organizational cultures is needed if both communities are to be seamlessly integrated in an ongoing science–policy engagement effort. In the examples from Kenya and the UNFCCC, although efforts failed to achieve the expected outcomes in the expected timeframe, these outcomes were realized in later years because of the political and institutional factors involved.

Much emphasis has been put on co-production of knowledge and social learning to engage decision-makers; however, in the contexts in which CCAFS works, high turnover of decision-makers was observed as a key challenge and a cause of failure. This points toward the need for engagement processes to go beyond individuals and to be institutionalized to ensure longevity. However, weak institutional structures may deter the implementation of such efforts in some contexts. Moreover, recent work by Turnhout et al. (2020) shows that in order for co-production to be transformative, it needs to address unequal power relations.

5.3 Navigating power dynamics

Power dynamics play an important role in linking knowledge to action (Clark et al. 2016b; Turnhout et al. 2020; van Kerkhoff and Lebel 2006). In science–policy engagement efforts, researchers move beyond the knowledge production process and enter the political realm, where power dynamics are crucial and how they are navigated can determine whether the expected outcome is achieved. Different approaches to engagement may be pursued, with varying implications for the empowerment of the stakeholders involved (van Kerkhoff and Lebel 2006). The power of researchers engaging in decision-making processes varies: for researchers participating in a process set by authorities, their power may not extend beyond defining problems, whereas when there is formal organization-level engagement, researchers are more powerful, although still not in a position to challenge the power of decision-makers (van Kerkhoff and Lebel 2006). This relative power that researchers hold in the engagement process can cause efforts to succeed or fail. For example, while working with APEC, an integration approach (van Kerkhoff and Lebel 2006) was adopted to set a shared agenda; however, because political priorities were at play and researchers were not a powerful enough player, these efforts failed to realize the expected outcomes. This was also true in the cases of informing decisions of the Nigerian Government and of USAID, where changes in government and subsequent leadership played a key role in defining priorities, and researchers were not powerful enough to challenge this. These power relations can be reversed in co-production processes established by researchers themselves, wherein researchers tend to hold more power and there is a need to ensure that other stakeholders are empowered (Turnhout et al. 2020; Wyborn et al. 2019).

In addition to the power play between researchers and decision-makers, a further dynamic observed was the role of competing researchers and research groups. There is often competition among research groups for “their results” to inform decisions and to have the ear of decision-makers. Such competition can be an external factor affecting the success or failure of engagement efforts. In the CCAFS context, such competition was observed not only from other research institutions but also within the same organization (Interview-B 2019).

5.4 Funding uncertainties

The role of funding organizations and funding commitments in determining the priority accorded to science–policy engagement has been noted (Arnott et al. 2020; Sarkki et al. 2019). This crucial role also emerged during our study: specifically, we found that annual changes to funding make it difficult for researchers to plan and execute multi-year engagement strategies, and this has been an important fail factor. While adaptive planning on the part of researchers can mitigate this to some extent, large-scale changes to funding beyond the control of researchers can be detrimental. This can only be addressed through multi-year commitments and certainty from donors, which maximize the potential to address challenges. Funding uncertainties extend beyond funding for engagement to include funding for implementing science-based decisions; uncertain funding to implement and scale science-based solutions has also been identified as a cause of failure. However, while funding uncertainties have been a fail factor, they should not be used as an excuse for more fundamental problems of project design and implementation (Interview-F 2019). Examples are emerging of researchers grouping together to address the challenge of scarce resources (Sarkki et al. 2019), and similar models may also benefit CCAFS and other organizations.

6 Failing intelligently at the interface between science and policy

While we have identified the key causes of failure of science–policy engagement efforts in the context of climate action in agriculture, some failure is inevitable, as studies in other sectors have shown. Therefore, rather than endeavoring to avoid failures entirely, a conscious effort to fail intelligently is more desirable. Such an approach will enable researchers to improve the efficacy of their science–policy engagement efforts. Intelligent failure arises from thoughtfully planned actions, executed effectively at a modest scale, in areas where lessons can be generated from failure (Sitkin 1992). This involves taking cognizance of failures, learning from them, and developing a culture of failing intelligently in order to improve and innovate (Cannon and Edmondson 2005). In relation to science–policy engagement efforts in the context of climate action in agriculture, we propose the following steps to fail intelligently (Fig. 1). These steps are inspired by Cannon and Edmondson (2005) and apply their generic principles to science–policy engagement efforts:

  1. Plan for failures: At the design stage, take cognizance of failures which may be experienced in the science–policy engagement process and develop strategies to overcome them. The fail factors identified in the present paper offer researchers insights into the potential challenges which may be faced and can enable the development of appropriate mitigation plans.

  2. Minimize risks: Where there is a possibility of failure, ensure that risks are minimal in terms of the resources and time expended on science–policy engagement efforts.

  3. Design efforts intelligently for generating lessons, in success or failure: Design science–policy engagement strategies so that, in the event that they fail, they generate lessons which enable researchers to navigate similar challenges in the future, for example, by identifying early warning signs of failure (Leoncini 2017).

  4. Make failures visible: Record failures carefully and foster a culture where failures are admitted early and understood to be part of experimentation and innovation. This can be the most difficult step as it requires a change in organizational culture.

  5. Learn from failures: Actively generate lessons from failures to improve the efficacy of science–policy engagement efforts.

    Fig. 1 Steps for failing intelligently in science–policy engagement for climate action in agriculture

It must be noted that the applicability of these steps is context dependent, and in a highly competitive environment some steps may be easier to implement than others. For example, in the CCAFS case, we noted that failure is a delicate subject overall, and most program participants were not willing to share their experiences in our survey; in such a context, step 4 would be the most challenging to implement. In most contexts, the right incentives and support from management will be crucial to empower researchers to learn from their failures in science–policy engagement.

7 Conclusions

We provide empirical insights into the challenges and failures faced in science–policy engagement efforts for climate action in the agricultural sector. By analyzing failures rather than successes, we provide a perspective which has until now not been reflected in the literature on science–policy engagement. For the literature on failure management, we provide insights from applying failure management concepts in the science–policy engagement context. Specifically, we have identified fail factors which can be addressed to improve the efficacy of science–policy engagement processes: lack of salience in research results, lack of institutional capacity, adverse power dynamics, and funding uncertainties. Various dimensions of these fail factors and their relationship to the literature have been discussed, enabling future research and practice. Future research can shed light on the context-specific performance of these fail factors as well as identify additional ones. Efforts to capture user perspectives on the failure of science–policy engagement efforts will also be valuable. However, research efforts should transcend disciplinary boundaries to offer fresh insights to address pressing knowledge needs.

To address the fail factors identified, we propose, first, that capacity-building efforts be undertaken, both within the research community and among decision-makers, to build buy-in for science-based solutions; priority should be accorded to building the capacity of expert intermediaries and boundary spanners. Second, better matching of the demand for and supply of knowledge is needed, for example, through the production of synthesis outputs in formats which are useful for decision-makers; platforms which facilitate this matching can also play an important role. Third, to address the power imbalances faced by researchers, efforts are needed to strengthen the position of researchers through their technical expertise and clear communications. However, the knowledge system operates at different scales, and it is necessary to be cognizant of this diversity (Warghade 2015). The principles of research funding, with their heavy emphasis on success, need to be revisited so that failure is seen as possible, acceptable, and indeed valuable. Finally, an understanding of which factors fall beyond the sphere of influence of any given project is also valuable for those involved in that project. Even though external factors cannot be steered, they can still be adapted to; moreover, individual researchers and projects can work actively to extend their sphere of influence and bring factors that start as external within reach, something that may be especially feasible for projects supported over longer periods of time.

Our findings point toward redefining the role of the researcher (Turnhout et al. 2013). A researcher is no longer only a generator of knowledge, but a policy entrepreneur who identifies and accesses windows of opportunity, and as with all forms of entrepreneurship, both successes and failures can be faced along this path. However, as in entrepreneurship, intelligent failure (Edmondson 2011) can enable researchers to learn from failures, generate lessons for the wider community, and apply adaptive management strategies to succeed. How learning from failures is integrated is key, as failures often go unreported; a shift is therefore needed in our approach to research and research management, toward one that values failures for their lesson-learning function.

In order to address the challenges of adaptation, mitigation, and food security, it is essential that knowledge-sharing mechanisms within the agricultural sector are improved. This requires wider changes toward a knowledge system that is conducive to science–policy interfaces (Felt et al. 2016). In the absence of such a change, even improved efforts by the research community will continue to deliver suboptimal results. Learning from failures can not only help improve practice at the level of individual researchers but also address wider issues within the knowledge and political systems.