Background

Policy dilemmas cross conventional academic boundaries. The academic response to the challenge of informing decision-making in such a context has been twofold: providing ready access to relevant scientific evidence with systematic reviews, or research syntheses, that include studies from different social, economic and geographic contexts, and draw on multiple academic disciplines; and building teams of academics and other stakeholders to address policy dilemmas by working in unconventional ways (see Box 1 for definitions). Indeed, most policy dilemmas raise many scientific questions across a range of disciplines [1]. Early systematic reviews in environmental science were largely academic endeavours, and in these circumstances the validity of the work can be undermined by lack of consensus about review questions, specifically the choice of outcomes and the analysis of contextual variables [2]. Since then, involving stakeholders in the production of systematic reviews has been seen as critical [3]. In addition, a few systematic reviewers have broadened their analysis to address both impact and the explanations and meaning of impact [4], both change and reasons for change [5], and to develop a theory of change [6]. These much-needed methodological advances have important implications for the delivery of services. In the health sector, these implications are well illustrated by systematic reviews addressing the problems of patients offered an effective, but long and demanding, treatment for tuberculosis (TB). These reviews expose differences between the world of research and the wider world that research is meant to serve (see Box 2).

Box 1 Definitions of key terms that describe the process and products of systematically reviewing policy-relevant research
Box 2 The mismatch between the worlds of research and implementation: an example from health

Currently, the content of systematic reviews consists largely of evaluations of programmes, sometimes adapted by researchers in the field specifically to enable rigorous evaluation, with studies stripped of their organisational and socio-political context during the review process. Consequently, the synthesised findings of these primary studies, with their high internal validity, offer persuasive evidence of impact for policy decision-making. Yet the partial picture this evidence presents largely ignores the policy context, with the risk that evidence-informed policy decisions subsequently stall and programmes fail to deliver better policy outcomes. This situation is illustrated in Fig. 1.

Fig. 1

Typical limitations of knowledge transfer between the worlds of policy and research: Research-based information about the effects of services flows from where it is collected (bottom right), typically from practice arenas where data are framed by research tools and analysed to maximise the internal validity of primary studies (bottom left), and is then synthesised to emphasise average effects, with an assessment of the degree of heterogeneity of studies and judgements about the generalisability of findings. Subsequently, summaries of syntheses are presented to panels, such as guideline groups, making policy decisions (top right). The flow of information from policies to guide research-based practice is interrupted during implementation efforts, where evidence maximising external validity is required for systems issues, to complement evidence addressing practice issues (middle right)

If systematic reviews are to address real world problems that are situated in complex systems, there is a need for systematic review designs that span academic disciplines; new ways of working to construct those designs; and methods to interpret the findings. This need is for transdisciplinary research methods—ways of working that cut across and beyond academic disciplines.

This paper offers some solutions to the challenge facing systematic reviews in environmental science, namely the need for a ‘balance… between a reductionist approach that simplifies the question but may limit both the quantity of information available and the applicability of its conclusions, and a holistic approach in which the question contains so much complexity that no studies have attempted to address it’ [2]. In doing so it also draws on other sectors where systematic reviews were introduced to policy decision making earlier.

Transdisciplinary methods

Here we offer three different transdisciplinary methods for producing systematic reviews: combining concepts from across and beyond academic disciplines in conceptual frameworks for systematic reviews; communication methods for working with people from across and beyond academic disciplines; and models for structuring findings to take into account contextual influences.

Conceptual frameworks to span boundaries

As systematic reviews are increasingly commissioned by policy organisations, rather than initiated by curious and reflective practitioners, the scope of individual questions addressed has broadened. For instance, a review investigating the impact of agricultural interventions on the nutritional status of children included studies from social science, agriculture, psychology, nutrition, economics and physiology [12]. The review was structured by a theory-of-change conceptual framework with components that included participation in educational programmes and adoption of technology, leading to changes in diet from home produce or to enhanced household income and food purchases, and from these to improved nutritional uptake and health status. The theory of change was used instead of a traditional systematic review (SR) ‘PI/ECO’ (population, intervention/exposure, comparison, outcomes) structure to define the components of the review and to drive it. The approach made a large and complex review manageable and coherent, while accommodating the individual packets of evidence, which were quite different in terms of question, research evidence, discipline and context.
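To make this concrete, the sketch below (our own, purely illustrative; the component names are hypothetical paraphrases rather than details taken from [12]) represents such a theory of change as an ordered chain of causal links, each carrying its own packet of evidence:

```python
# Illustrative sketch (not from [12]): a theory-of-change conceptual
# framework represented as an ordered chain of causal links, each
# holding the 'packet' of evidence synthesised for that step.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Link:
    """One step in the causal chain, with its own evidence base."""
    from_component: str
    to_component: str
    studies: List[str] = field(default_factory=list)   # included study IDs
    disciplines: Set[str] = field(default_factory=set)

# Hypothetical chain loosely echoing the agriculture-nutrition example
chain = [
    Link("participation in educational programmes", "adoption of technology"),
    Link("adoption of technology", "changed diet / enhanced household income"),
    Link("changed diet / enhanced household income", "improved nutritional uptake"),
    Link("improved nutritional uptake", "improved child health status"),
]

# Evidence is extracted and synthesised link by link, so each packet can
# differ in question, study design, discipline and context while the
# chain keeps the review as a whole coherent.
chain[0].studies.append("StudyA2010")        # hypothetical study ID
chain[0].disciplines.add("education")

for link in chain:
    print(f"{link.from_component} -> {link.to_component}: "
          f"{len(link.studies)} studies, disciplines={sorted(link.disciplines)}")
```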

In contrast, when policy questions seek to develop understanding rather than measure the effects of an intervention, a conceptual framework may be the output of a review rather than its driver. A review analysing qualitative studies about human well-being and protected terrestrial areas, such as national parks and forests [4], produced a conceptual framework that combined dimensions of well-being (health, social capital, economic capital and environmental capital) and of governance (regulation, enforcement, participatory management and empowerment) against a backdrop of human rights. This framework presented a coherent set of findings from very disparate studies spanning economics, education, epidemiology, environmental science, anthropology, law, history and public health.
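Where the framework is an output rather than a driver, it can still be operationalised during synthesis as a simple charting matrix. The sketch below is our own illustration of the two axes described in [4]; the charting function and the example entries are hypothetical, not findings from the review itself:

```python
# Illustrative sketch (not the authors' tooling): charting coded
# qualitative findings against the two axes of the framework in [4] -
# dimensions of well-being crossed with dimensions of governance.
from collections import defaultdict

WELLBEING = ["health", "social capital", "economic capital", "environmental capital"]
GOVERNANCE = ["regulation", "enforcement", "participatory management", "empowerment"]

# (well-being dimension, governance dimension) -> list of coded findings
framework = defaultdict(list)

def chart(wellbeing: str, governance: str, study: str, finding: str) -> None:
    """File a coded finding in the framework cell it belongs to."""
    assert wellbeing in WELLBEING and governance in GOVERNANCE
    framework[(wellbeing, governance)].append((study, finding))

# Hypothetical example entries
chart("economic capital", "participatory management", "StudyX",
      "co-managed forest access sustained household incomes")
chart("health", "enforcement", "StudyY",
      "strict exclusion reduced access to medicinal plants")

for (w, g), findings in framework.items():
    print(f"{w} x {g}: {len(findings)} finding(s)")
```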

Although conceptual models are hardly new, they may be underused. A recent mapping review of over 1000 studies examining the links between conservation activities and human health and well-being found very few well-articulated, detailed theories of change, despite the sometimes long and complex chains of possible interactions being researched [13].

Communication methods for shaping review questions and conceptual frameworks

The construction of review questions and the use of conceptual frameworks in systematic reviews require collaborative teams that span academic and social systems and that think critically and creatively together by managing conflict well [3, 14]. Although there is widespread support for involving stakeholders when conducting systematic reviews [15], current guidance is directed more towards whom to engage than how to work with them creatively to shape the review. Insights about such social interactions have emerged from insider research [16, 17] and from reflective practice on the early stages of the systematic review process, when questions are refined and reviews addressing broad issues are framed [18]. From this insider research and reflective practice, we now recognise parallels between shaping reviews and two other forms of creative thinking: qualitative analysis and non-directive counselling [18]. While the former examines observations for patterns and meaning to make sense of data, the latter refrains from interpretation or explanation but encourages others to talk freely and discover patterns and meaning themselves, to make sense of their own experience. Non-directive counselling was originally developed to help individuals address personal problems [19]; its core element of active (or reflective) listening has since been developed and applied to support creative problem solving by groups [23]. The non-directive counselling approach has helped interdisciplinary review teams (inclusive of stakeholders) to shape a conceptual framework for their review that accommodates both the interests of the review funder and the framings of existing relevant studies [18]. As stepwise processes for qualitative analysis and non-directive counselling have been clarified, shared and incorporated into textbooks and training programmes (Box 3), we see an opportunity to clarify and practise their application to shaping systematic reviews.

Box 3 Thinking and communication processes analogous to developing a question or conceptual framework for systematic reviewing [18]

However, the active listening at the heart of non-directive counselling brings risks. Systematic reviewers working closely with stakeholders who bring direct experience and strong interests risk losing their critical distance. Moreover, examining, comparing and reconciling the ideas, opinions and perspectives of different stakeholders through mutual challenge and constructive conflict [25] may be particularly difficult to achieve when there is an imbalance in power or money, as in commissioned systematic reviews.

Models for structuring findings to take into account contextual influences

Considering the needs of multiple stakeholders is not only for the beginning of a review: there are also opportunities towards the end, when interpreting emerging findings. Typically, users of systematic reviews want to know how relevant the findings are to their own situation, or to the populations for whom they make decisions. The principle of globalising the evidence but localising the decision [26] can be supported by carefully describing the characteristics of the included studies, or by carefully delineating the factors that might be important in contextualising the evidence, and then making sure these are systematically extracted and summarised. For example, subgroups may be distinguished by their place of residence, religion, occupation, gender, race/ethnicity, education, socioeconomic status, and social networks and capital [27]. This approach, with its mnemonic PROGRESS, for capturing social determinants of health, has been integrated into guidance for pre-specifying subgroup analyses in systematic reviews [28, 29]. The method is well suited to public health because it provides a framework for epidemiological analyses.
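As a hedged illustration of how such pre-specification might look in practice (our own sketch; the study records are hypothetical and this is not the extraction tooling of [27] or the guidance in [28, 29]), PROGRESS characteristics can be extracted systematically from each included study:

```python
# Illustrative sketch: recording which PROGRESS characteristics each
# included study reports, so subgroup analyses can be pre-specified
# and findings contextualised.
PROGRESS = [
    "place of residence", "religion", "occupation", "gender",
    "race/ethnicity", "education", "socioeconomic status",
    "social capital",
]

def extract_progress(study: dict) -> dict:
    """Return the PROGRESS characteristics a study reports (None if unreported)."""
    return {factor: study.get(factor) for factor in PROGRESS}

# Hypothetical extraction records for two included studies
studies = [
    {"id": "Trial1", "place of residence": "rural", "gender": "women only"},
    {"id": "Trial2", "place of residence": "urban", "socioeconomic status": "low income"},
]

for s in studies:
    coded = extract_progress(s)
    reported = {k: v for k, v in coded.items() if v is not None}
    print(s["id"], reported)
```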

However, the PROGRESS determinants of health ignore the inner layers of individual risk factors (such as genetics, physical impairment or lifestyle factors) that feature in biology and behavioural science. They also ignore the outer layers of ecological or geological factors central to environmental science. No mention is made of intersectionality theory from sociology, which addresses how social identities overlap or intersect [30], perhaps because multiplying subgroup analyses reduces statistical power in epidemiology [31]. Lastly, PROGRESS ignores dynamics arising from: interactions between the multiple layers; the life course (age); life transitions (moving home, employment or school, or leaving prison, hospital or a significant relationship); historical changes (conflicts, mass migrations, (post)colonialism); or geological or climate changes (natural disasters).
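The statistical point can be illustrated with a back-of-envelope calculation (our own sketch, not the analysis in [31]): holding the total sample fixed, each additional intersecting subgroup shrinks the per-group sample size and therefore the power to detect the same standardised effect.

```python
# Back-of-envelope illustration: splitting a fixed total sample into
# ever more subgroups erodes the power to detect the same effect.
from scipy.stats import norm

def power_two_sample(d: float, n_per_group: float, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample comparison (normal approximation)."""
    z_crit = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(z_crit - d * (n_per_group / 2) ** 0.5)

total_n = 2000          # hypothetical total participants across both arms
effect = 0.2            # standardised mean difference of interest
for n_subgroups in (1, 2, 4, 8):
    n_per_group = total_n / (2 * n_subgroups)   # even split, two arms per subgroup
    print(f"{n_subgroups} subgroup(s): power ~ {power_two_sample(effect, n_per_group):.2f}")
```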

A more flexible approach to investigating contextual influences or inequalities may be found in the work of Bronfenbrenner [32, 33], who conceptualised children’s lives as being shaped by environmental factors acting and interacting in a set of nested structures, from within families (at the micro level) to within their historical context (at the macro level). This has been applied to systematic reviews of research [34] and policy [35] addressing children’s rights in post-conflict areas. The potential for applying frameworks such as Bronfenbrenner’s to other systematic reviews is suggested by the various adaptations of similar ecological frameworks found in primary research elsewhere, such as environmental science [36], migration studies [37] and violence [38]. We illustrate that potential in Fig. 2 by visually summarising the findings of a systematic review of qualitative studies of microfinance [39].

Fig. 2

An ecological model of women’s engagement with microfinance programmes (adapted from a ‘pathways to peace’ framework [35] by the EPPI-Centre to present the key contextual issues influencing the outcomes of microfinance programmes [39]). To complement evidence presented along a causal pathway or programme theory of change, which focuses primarily on the programme design and the internal validity of evidence at each causal link, evidence can be presented within an ecological framework representing participants’ social context, to facilitate analysis of external validity for implementation decisions

Ecological models not only offer a framework for making sense of review findings; because they provide a way to navigate the complexity of people’s life circumstances, they also help identify stakeholders who can assist with shaping the review or interpreting its findings. An ecological framework can be immensely beneficial when researching context-sensitive topics such as children’s lives, gender and the broader social, cultural and natural environments.
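As a purely illustrative sketch of how such a framework might be operationalised during synthesis (our own, loosely following Bronfenbrenner’s nested levels rather than any published coding scheme, with hypothetical microfinance-style findings), review findings can be tagged by the ecological level at which each contextual influence operates:

```python
# Illustrative sketch: tagging review findings by the nested ecological
# level at which each contextual influence operates, from the individual
# outwards (levels adapted loosely from Bronfenbrenner [32, 33]).
LEVELS = [
    "individual",      # e.g. health, skills, confidence
    "micro",           # family and household relationships
    "meso",            # community and local institutions
    "exo",             # services, markets, programme providers
    "macro",           # laws, norms, economy, (post)conflict setting
    "chrono",          # changes over time and the life course
]

# Hypothetical coded findings: (finding text, ecological level)
findings = [
    ("control of loan use within the household", "micro"),
    ("availability of local savings groups", "meso"),
    ("national regulation of lenders", "macro"),
]

# Group findings by level so each layer of context is reported together.
by_level = {level: [] for level in LEVELS}
for text, level in findings:
    by_level[level].append(text)

for level in LEVELS:
    if by_level[level]:
        print(f"{level}: {by_level[level]}")
```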

Practical challenges and ultimate benefits

Transdisciplinary working when conducting systematic reviews is not easy. The challenges arise when working with contrasting paradigms and with epistemological, ontological and methodological differences. Our own experience tells us it requires time and effort to adapt to unfamiliar information resources, terminology, communication styles and research methods. Guidance is available from a systematic review which found that transdisciplinary research is enhanced by team leaders with good ideas and vision, contacts, good interpersonal skills, humility, familiarity with the disciplines, and the opportunity to choose their team members and keep them all on board, and by team members with maturity, flexibility and personal commitment [40]. Grounding the unfamiliar in social and cultural contexts recognisable to the particular review team can encourage respect for different ideologies and paradigms, and a better understanding and appreciation of disciplinary diversity. Transdisciplinary research is also helped by the physical proximity of team members, the internet and email as a supporting platform, and an institutionally conducive environment. Constructive working practices include: developing a common goal and shared vision; having clarity about, and rotation of, roles; good communication and constructive comments among team members; and, importantly, a collaborative ethos of openness and sharing in learning with and from distinct disciplines.

Ideally, such teams synthesise more complete evidence more coherently, and align reviews more closely with stakeholder interests, leading to more compelling evidence. For these reasons, commissioned systematic reviews, which tend to be both complex and time-pressured, require care not only in drafting the substantive content of the terms of reference for the conduct of the review, but also in selecting a team of reviewers well motivated to take on transdisciplinary work. A track record in project management, a typical requirement in requests for proposals, does little to reveal the capacity of the leader for the critical tasks of forming a team, holding it together and resolving different points of view. Further, transdisciplinary reviews attract different stakeholders who may be driven by disparate motivations. Generally, academics tend to be comfortable ‘producing knowledge’, partly because the academic structures in which they are situated reward them for doing so. Non-academics, on the other hand, are rewarded for ‘getting things done’ and seeking practical results and impacts, which may lead to different approaches and motivations in larger and more diverse teams. Once again, the ability of a team leader to manage any resulting tension in teams with academic and non-academic members is critical to the successful outcome of the review. Indeed, producing knowledge combined with getting things done underpins good transdisciplinary research, which is commonly assessed in terms of relevance, credibility, legitimacy and effectiveness in problem solving or social change [41].

Despite these challenges, transdisciplinary working, with academics and other stakeholders, has led to growing numbers of systematic reviews that address policy questions. It has also made possible the adaptation of review methods for new fields, and the sharing of knowledge between experienced reviewers and novice teams who bring subject expertise, building reviewing capacity and producing learning that is empowering and reflects both the local and the global.

Conclusions

Systematic methods for answering important questions from existing literature are well developed. These methods need to be complemented by clearer methods that emphasise the thinking and debate needed to develop questions, shape reviews and interpret emerging findings. Such work requires crossing academic and policy boundaries, and exploring how concepts, definitions and language differ. Communication methods analogous to collective qualitative analysis or non-directive counselling look promising for refining questions and constructing conceptual frameworks collectively. Ecological models, in turn, look promising for understanding the context of research findings and addressing the big questions about social change.