Keywords

In each of the methods chapters in this book we have considered how exactly you can use a method: what you need to do, what you need to collect, how you create a map or model, and how you analyse it. We have tried to be agnostic throughout about whether the use of methods should be based on quantitative data, formal evidence, qualitative data, or participatory workshops and processes (though some bias towards participatory processes may have slipped in, since this is what we are most experienced in). In practice, any of these approaches can be used, and they are often combined and overlap with one another.

The fact that these types of information for building maps overlap means it is worth trying to be clear about what they are. Figure 9.1 attempts to do this in a simple way, roughly defining the four types. We can see some of the overlaps: participatory processes can also collect qualitative data, and evidence can be based on quantitative and/or qualitative data. Arguably, evidence can also be based on participatory processes, but it is rarely framed in this way.

Fig. 9.1 Types of information for building system maps, and their overlaps. (A Venn diagram of four overlapping sets, from left to right: Participatory, Qualitative data, Existing evidence, and Quantitative data. The overlaps, from left to right, are labelled 'workshop recordings and transcripts', 'formal studies using qualitative data', and 'formal studies using quantitative data'.) Source: authors' creation

This chapter takes a step back and seeks to reflect on the question of what types of data and evidence we can use to build maps, to identify some of the key pros and cons of each, and to consider how we might go about using them. Chapter 10 goes into detail on how to run systems mapping workshops, so we won't cover that here. Rather, we first put forward a defence of the sometimes-critiqued use of stakeholder opinion to build maps in a participatory mode. We follow this with a more practical consideration of how we can use more traditional data (qualitative and quantitative) and existing evidence to develop maps, including issues we need to take care around. We conclude with a few remarks on combining different types of data and evidence, and on the question of how to choose methods in different data availability contexts.

Defending the Use of a Participatory Process to Build and Use Your Map

One of the most common questions we get when running workshops or presenting on systems mapping is 'what evidence do you use to back up the model?' or, closely related, 'how do you validate the model?' These are important questions, but ones that people from diverse backgrounds, who have used different modelling types, approach in a variety of ways. Even the technical term 'validation' can have different meanings, nuances, and interpretations in different modelling domains. It is not always clear whether the questioner thinks qualitative and workshop-generated data can be used to build a system map, but we often sense that these questions rest on a belief that they cannot, or that quantitative data would be preferable. The assumption appears to be that we need quantitative data or scientific evidence to trust or draw value from our maps and models. This reflects a strong belief held by some researchers that only quantitatively validated models, of any type, are valid or useful.

We must be upfront about a fundamental belief that many systems mapping practitioners hold. That is, when working in a genuine complex adaptive system, it is highly unlikely we will have access to the breadth or depth of quantitative data or evidence we need to formally validate every node and edge in a system map, or to validate simulation outputs from something like System Dynamics. Even where we do have some data or evidence, it is likely to be patchy, with systematic reasons determining the areas we have data and evidence for, and those we do not. The reasons for absent or patchy evidence are multiple. Many of the important components in a system are social or behavioural: actual practice on the ground, and the perceptions and tacit, local knowledge that determine what people do. Data is not generally collected on many of these things; indeed, it is difficult and expensive to do so. Moreover, the time needed to collect data to validate on-the-ground knowledge is prohibitive in contexts in which decisions need to be made with reasonable haste, that is, in policy-making contexts. Genuine complex adaptive systems are also open: causal connections may come from many different domains, and we cannot always determine in advance what is relevant. In these systems, things are also always changing; focusing on onerous data collection may make us reluctant to update maps with new knowledge. Therefore, the task of validating a model against quantitative data or evidence will always be an uphill battle, if it is possible at all.

This should not mean we abandon any hope of using systems mapping, or that we build narrow models that only include things we have quantitative data or evidence on. Indeed, many of the methods in this book work best, relative to other methods, in data-poor contexts. Vitally, we believe there is still huge potential value in maps that are not underpinned directly by quantitative evidence. There are many situations in which we need to make decisions quickly, on the ground, in data-poor settings, but in which we suspect that system interconnections are present and likely to be important. Thus, we still want to think through system interconnections thoroughly, even without data. The pragmatic response here is to use participatory approaches to illustrate that system effects might indeed be important, to help us raise awareness of this, and to start to think things through. We are likely to make better decisions with a map like this than without, so it is still worth doing. We should also note that these methods have an important function in generating new questions; for example, what sorts of system effects might be present, and which might we need to take account of? How do we adjust our strategies to take account of these possibilities? What extra data do we need to gather to see what is actually happening? These questions are often hugely valuable and may directly inform new data collection processes.

So, rather than unfairly critiquing or abandoning systems mapping efforts in data-poor contexts, we should proceed with the right amount of caution and, when working in a participatory mode with stakeholders in the system, emphasise the value of their beliefs in constructing useful models. More than simply emphasising the value of participatory models, in contexts where there is long-standing acceptance of, and deference to, idealised standards of what science is (i.e. falsifiability, empiricism, etc.), we may need to be robust and ambitious in our advocacy of these forms of information.

The main value of maps and models built in participatory ways lies in their use as discussion and thinking tools. In reality, all models ultimately have this purpose (they are there to improve the quality of our thinking and discussion), even if we become overly focused on forecasting or related pursuits. Note, we do not intend to dismiss forecasting based on validated models as an activity, but rather to make clear that it is not the only thing we can do with models.

Maps built in a participatory fashion help us to surface and explore people's mental models of a system or issue. By building them together, we surface assumptions and beliefs as a group that might otherwise stay hidden or undiscussed. The maps become 'boundary objects' (see Star & Griesemer, 1989) around which stakeholders and researchers can learn. They can help build consensus and capacity to make decisions around an issue, but also locate disagreements and help us work through them or, where they cannot be resolved, preserve and represent both positions. Maps, and the researchers working on them, become 'interested amateurs' (Dennett, 2014; Johnson, 2015) in the system at hand: actors or objects that can be critiqued and improved by participants without the need to offend other stakeholders and their opinions. These types of use and value can also come from the analysis and outputs of participatory maps, not just their construction; analysis should always be seen as a participatory and iterative activity too.

In any applied situation, we need to fully understand the context and needs of stakeholders, users, and clients of our systems mapping research. These, along with our purpose and aims, will shape much of our work. A deep understanding of these contexts and needs is only likely to come from serious and iterative interaction with these people. This means, even where we are building maps from data and evidence, and perhaps are enjoying the opportunity to ‘geek-out’ on more formal methods, we still need to place this in a human context of use and value in which there will be some element of participation in a map’s use, even if we don’t really acknowledge it.

This is as far as we will go into the debate about the validity and value of participatory models. Others have covered this ground before (though these debates are often found in discipline-specific spaces, so more general lessons can be missed). For example, Voinov and Bousquet (2010), in an influential paper, outline and define core concepts for modelling with stakeholders, provide a detailed defence of drawing value from the process of modelling rather than the results, and identify principles for modelling with stakeholders; Voinov et al. (2016) update and build on this, with a review of the topics covered in 2010; Hurlbert and Gupta (2015) consider when participation can be useful in research more generally, and what it can achieve in different contexts (many of these arguments apply to participatory modelling too); Prell et al. (2007) provide an in-depth consideration of why we might use participatory modelling, and how we can do it; finally, Voinov et al. (2018) provide a detailed assessment of the process of choosing different participatory modelling approaches. If you are going to be using systems mapping, or any modelling, in a participatory mode, we suggest you become familiar with some of this literature and the arguments in it.

Using Qualitative Data to Build Your Map

It is not uncommon to see system maps built from qualitative data; that is, data which may have been collected in interviews or focus groups, but where the map was built not during that interaction but afterwards, from the recording, transcript, or analysis of it. In this approach, the researcher converts the textual data into the boxes and connections of a systems map. We are looking for assertions in the data about what factors exist and how they are connected. These assertions may be put quite simply (i.e. an interviewee says X is caused by Y) or may need to be extracted from participants' descriptions and narratives of processes and actors within a system.

One of the key advantages of using qualitative data is that it is typically both rich and broad. It can go into a high level of detail and description but can also cover the full spectrum of relevant issues and domains in a system. This means that in theory its coverage of a system can be both broad and detailed, though probably only where we have a lot of data. A second key advantage, shared with participatory processes, is that if we are collecting the data ourselves in an interview or focus group, we can adapt our questions and prompts as we see fit, guiding the collection process so that it meets our mapping needs.

There are disadvantages to using qualitative data to build system maps. Foremost, it places a lot of power and responsibility in the judgement of the researcher or modeller who translates data into a map. This is rarely a straightforward process, especially when we are using data that was already collected, or was collected with additional purposes in mind. There will be many dozens of decisions about how to create factors and connections that meet the 'rules' of the method while also reflecting what was said. There are also often disagreements or contradictions in what different stakeholders have said in interviews, which the researcher will need to resolve or preserve. Finally, when we combine the accounts of different interviewees, we end up with a composite map that no individual actually described, or has had the chance to react to and comment on. This is not necessarily a problem, but it is an important point to reflect on: is this map valid; does it reflect the mental models of our participants?

There are several software options that can help us develop system maps from qualitative data. Almost all the software we mention in this book can be used to build a map, but of value here are those which let us connect or tag qualitative data to a map in useful ways. Below, we describe three well-used qualitative data analysis packages, and one purpose-built application, which allow us to do this. However, there are important constraints on their functionality, particularly around exporting maps in a usable data format, and none are free to use.

  • NVivo (version R1): this version of NVivo has three types of map visualisation which have similarities to systems maps. There are simple 'mind maps' which you can create to brainstorm ideas, but which do not connect to the qualitative data or codes in your analysis. Next are 'project maps', which visualise a range of default nodes (e.g. documents, codes) and relationships between them (e.g. 'contained within', 'occurs at same time as'). Importantly, you can create custom relationships, which you could use to represent causal influence. Finally, there are 'concept maps', the most flexible visualisation mode in the software, allowing you to draw many types of nodes, relationships, and annotations, and connect these to your qualitative data and analysis. This is the visualisation mode in NVivo you will most likely want to use. Unfortunately, you cannot export any of these visualisations in a standard network data format (e.g. gml, json), or in a markup language from which you could extract the network data (e.g. XML); see the workaround sketched after this list. This is commercial software, but many academic institutions have general licences.

  • Atlas.ti: fundamental to the design and operation of Atlas.ti is a network structure and visualisation of your qualitative data and analysis. This works in a similar way to ‘project maps’ in NVivo, showing default types of nodes and relationships, with the option to manually add more. Because the network structure is central to the design of the software, and not a visualisation ‘add-on’, when you add relationships and nodes, these are added to all your other analysis in the software. Again, you cannot easily export the network data; however, you can export XPS files (a Microsoft Windows file type, similar to PDF), which advanced Windows users may be able to extract network data from. This is commercial software, but many academic institutions have general licences.

  • MAXQDA: this software has a 'maps' visualisation function, which allows you to view relationships between default node types in a similar mode to those above. You could in theory create codes which represent factors in a system and connect these to generate your map, but this seems a rather clunky way of generating a system map. You cannot export the network data. This is commercial software, but many academic institutions have general licences.

  • Causal Map (https://causalmap.app/): finally, it is worth mentioning this purpose-built software for building causal maps from qualitative data. It is still in development, but a beta version is available. We have not used it ourselves, but it appears to have all the functionality we could want for developing, analysing, and exporting maps. It is not free, though there is a free trial; prices start at £490 a year, and an R package is forthcoming.
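Given these export limitations, one pragmatic workaround is to keep your own record of the causal assertions you code from transcripts, and to build and export the network outside the qualitative analysis software. Below is a minimal sketch of this idea in Python, using the networkx library; the assertions and transcript references are invented for illustration.

```python
# A minimal sketch: turn causal assertions coded from transcripts into a
# directed graph, then export it in a standard network format (GML).
import networkx as nx

# (cause, effect, where the assertion was made) -- invented examples
coded_assertions = [
    ("funding cuts", "staff turnover", "interview_03, line 112"),
    ("staff turnover", "service quality", "focus_group_01, 00:41:20"),
    ("service quality", "public trust", "interview_07, line 58"),
]

graph = nx.DiGraph()
for cause, effect, source in coded_assertions:
    graph.add_edge(cause, effect, source=source)

nx.write_gml(graph, "qualitative_map.gml")  # readable by most network tools
```

Keeping the transcript reference on each edge preserves the link back to the qualitative data, which none of the packages above let you export directly.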

Using Existing Evidence to Build Your Map

Before we consider quantitative data, it is worth reflecting on how we might use existing evidence to inform a map. By ‘existing evidence’ we mean any study or analysis which has already been done based on any type of data. This might include peer-reviewed academic studies, or grey literature from reputable sources. We would need to define some inclusion criteria, but we would expect that it would include similar sources to those which make it into systematic reviews, meta-analyses, or rapid evidence reviews. In this mode, we are in effect doing a systematic review or evidence review focused on building a causal description of a system.

One of the main advantages of doing this will be that we have strong evidential support for the map we create. Depending on how strict the inclusion criteria we set are, we will be able to make pretty strong assertions about the validity of our map if it is backed up by peer-reviewed studies. Given the formalism of the studies and analysis we will likely use, and the pervasive use of concepts such as variables and causal relationships, there will also be less of a need for a modeller or researcher to make lots of decisions in translating individual pieces of evidence into nodes and edges in a map; this conversion should be more straightforward than the equivalent process for qualitative data.

However, there will still be some researcher judgement involved, and again the issue of combining pieces of evidence which perhaps contradict each other. More important, though, will be the issue of coverage of a system. In many systems, there simply will not be much relevant evidence we can use. Where there is evidence, there will likely be systematic reasons for what is covered and what is not, with corresponding risks of systematic bias in the resulting map. We will need to be careful about areas of a system that are not covered by existing studies, and how we can account for them. This means this approach is only really applicable in domains where there is a strong tradition of evidence gathering in formal studies of all aspects of the system, or where we are only hoping to use evidence to inform part of a map, not all of it.

The process of using evidence in this way will start with the early stages of a more traditional evidence review or systematic review. We recommend you look at some of the guidance for these approaches in the domains in which you are working, and think about how you could use them to gather and classify evidence in your system. For systematic reviews, Petticrew and Roberts (2005) is accessible and thorough; for rapid evidence reviews, you may find Crawford et al. (2015) useful, though it is applied to health care, so may need translating for other contexts; finally, for evidence mapping, try O'Leary et al. (2017) for inspiration. You will likely not need to complete the full process for these approaches, just enough to collect and classify the evidence relevant to you. From here, the process involves translating each piece of evidence into the nodes and edges it can underpin, and adding these to your map. As you progress, it will likely be useful to have a version of your map annotated in some way to record and visualise what evidence underpins which parts. You might also want to use this in the analysis of the map, for example, to choose your focus (perhaps on the parts with less evidence).
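To make this concrete, here is a minimal sketch (in Python, using networkx) of what such an annotated map might look like; the connections and study names are invented for illustration.

```python
# A sketch of an evidence-annotated map: attach supporting studies to each
# connection, then rank connections by evidential support (weakest first)
# to help choose where further evidence gathering or analysis should focus.
import networkx as nx

graph = nx.DiGraph()
graph.add_edge("active travel", "air quality", studies=["Jones 2019", "Lee 2021"])
graph.add_edge("air quality", "respiratory health", studies=["WHO 2018"])
graph.add_edge("congestion", "active travel", studies=[])  # asserted, not yet evidenced

for u, v, data in sorted(graph.edges(data=True), key=lambda e: len(e[2]["studies"])):
    print(f"{u} -> {v}: {len(data['studies'])} supporting studies")
```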

Using Quantitative Data to Build Your Map

There are two different ways to use quantitative data directly in building your map. Both rely on the idea that the variables we have in a dataset are appropriate to use directly as the boxes in a system map. This is not a silly assumption but should be checked and thought through—you may want to consider whether the variables are at similar scales and whether there are likely to be lots of important factors not present in your data.

In the first mode, you may have quantitative data of any form (e.g. time series with data points for multiple variables through time, or cross-sectional data at only one time point) that allows you to analyse the statistical and/or causal relationship(s) between variables. This will likely use traditional statistical approaches (e.g. regression) or causal inference methods (e.g. difference-in-differences; see Cunningham, 2021, for an introduction) to establish which variables we might find an association or causal relationship between. When you find a relationship between variables, you can add a connection between them. Depending on the method you use, the analysis may tell you the direction of the arrow, but in many cases you will have to use other knowledge or theory, or leave the connection undirected. You will need to decide on, and use, a threshold at which a relationship is 'significant' enough to draw a connection between two variables.
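As a concrete illustration of this first mode, the sketch below (in Python, using statsmodels and networkx) fits a simple regression to each pair of variables in a hypothetical dataset and draws an undirected edge wherever the association passes a significance threshold; the file name and the 0.05 cut-off are assumptions for illustration, not recommendations.

```python
# A minimal sketch: draw an undirected edge between two variables whenever
# a simple OLS regression finds a 'significant' association between them.
from itertools import combinations

import pandas as pd
import statsmodels.api as sm
import networkx as nx

df = pd.read_csv("system_variables.csv")  # hypothetical data, one column per variable

graph = nx.Graph()  # undirected: plain regression cannot tell us arrow direction
graph.add_nodes_from(df.columns)

for x, y in combinations(df.columns, 2):
    model = sm.OLS(df[y], sm.add_constant(df[x])).fit()
    p_value = float(model.pvalues[x])
    if p_value < 0.05:  # our chosen threshold for drawing a connection
        graph.add_edge(x, y, p_value=round(p_value, 4))
```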

In the second mode, if you have time series data across a wide range of variables you think may be relevant in your system, you can analyse these as a whole and extract causal networks directly, using one or several of a variety of methods for 'estimating networks'. These methods rely on a range of measures, such as 'conditional independence'. This is an important but tricky concept; take, for example, two variables: a child's height and the number of words they know. These may appear to be related, but it is actually the child's age which has a strong influence on them both; they are not causally related. So, a simple two-variable analysis may make them appear related, but if we include age, we will conclude they are 'conditionally independent'. This process of examining two variables while conditioning on others is useful for constructing networks from data. This mode of using quantitative data is less widespread, and still in its infancy in the social sciences, but has the potential to be powerful and quick to use.
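To make the idea of conditional independence concrete, here is a toy numerical sketch (in Python, using numpy) with simulated data: height and vocabulary are each driven by age, so they correlate strongly on their own but become almost uncorrelated once age is regressed out.

```python
# Toy illustration: height and vocabulary both depend on age, not on
# each other, so they are conditionally independent given age.
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(2, 10, 1000)                   # children aged 2-10
height = 80 + 6 * age + rng.normal(0, 5, 1000)   # cm, grows with age
vocab = 500 * age + rng.normal(0, 400, 1000)     # words known, grows with age

print(np.corrcoef(height, vocab)[0, 1])          # strong raw correlation

def residuals(y, x):
    """Remove the linear effect of x from y (i.e. condition on x)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Correlation of the residuals (a simple partial correlation): near zero.
print(np.corrcoef(residuals(height, age), residuals(vocab, age))[0, 1])
```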

The main advantage of using quantitative data in either of these ways is that we have a direct and fully transparent connection between the map and the information it is based on. At the 'information-to-map' stage of the process, less researcher judgement is required than in any other mode. We have conducted the analysis, perhaps collected the data, and need to make little, if any, interpretation to convert quantitative data into nodes and edges. Nodes will be the variables we have in our data, and connections will be drawn where our analysis meets some threshold (likely defined using existing norms and standards) suggesting an association or causal relationship. However, we must not forget that a lot of researcher judgement is involved earlier in the process, when we curate datasets and choose methods. Quantitative datasets are often treated as objective truth, but we must always keep in mind that they are constructed and collected based on judgements, can contain errors, and offer only a snapshot of some part of a system.

The most salient disadvantages, or risks, of using quantitative data to directly inform maps are similar to those for existing evidence. It restricts our map to only those things which are both quantifiable and for which we have data. As we have repeatedly stated, this is often a serious restriction; we do not typically have a wealth of (good, or even usable) data on a system, and where we do, there will be biases in what it includes and excludes. When using quantitative data, we must have ways to account for the aspects of a system for which data is difficult or impossible to collect and use. We should also be aware of the choices we make about which system to look at, when these are driven, at least in part, by what data we think is available. We must also be careful with our language, describing the arrows in our map only in the terms the statistical or causal inference method licenses. So, for example, if we use a traditional statistical method, which does not allow us to talk about cause, we should only describe connections as showing relationships or associations, rather than causes.

Health warnings noted, let's think about how you might actually use quantitative data. To underpin the inclusion of individual nodes and connections, you will first need to get hold of the data, and then use one, or several, of the many statistical and causal inference methods that exist. There are dozens, if not hundreds, of these methods (too many to mention here). They can be grouped in different ways, from those that only assert association, such as standard linear regression, through to causal inference methods such as difference-in-differences and instrumental variables (again, see Cunningham, 2021, for an introduction). Methods can also be grouped by the types of data they can be used on. If you have cross-sectional data, then 'structural equation models' are one of the most obvious approaches; there is a range of methods within this broad umbrella, and for an introduction we recommend the materials on the UK National Centre for Research Methods website (https://www.ncrm.ac.uk/resources/online/all/?id=10416). If you have time series data, then 'Granger causality' approaches are among the most popular (Granger, 1980). Finally, if you have panel data (i.e. longitudinal data where the observations are from the same subjects in each time period), you can use causal inference methods such as difference-in-differences.
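For instance, a minimal Granger-causality check with statsmodels might look like the sketch below; the file name, variable names, and lag length are illustrative assumptions.

```python
# Test whether 'policy_spend' Granger-causes 'air_quality': does the history
# of spend improve predictions of air quality beyond air quality's own past?
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("time_series.csv")  # hypothetical data, one row per time period

# statsmodels tests whether the second column Granger-causes the first.
results = grangercausalitytests(df[["air_quality", "policy_spend"]], maxlag=4)

for lag, (tests, _) in results.items():
    f_stat, p_value, _, _ = tests["ssr_ftest"]
    print(f"lag {lag}: p = {p_value:.4f}")  # a small p suggests a directed edge
```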

It is not within the remit of this book, nor do we have space, to introduce these methods. If you are not familiar with them already, we strongly recommend you develop a basic understanding of the range of methods and their pros and cons, and then dive deeper into the ones you think you might want to use. To help you start on this learning journey, we recommend the following: Cunningham (2021) for an accessible introduction, Pearl et al. (2016) for a more technical but short introduction, and Peters et al. (2017) for a more traditional textbook. Once you have the data and the method(s), it is a simple task to create your map (i.e. draw edges between nodes) when you find causal relationships. You can do this step by step on each two-variable set you have, slowly building the map up.

If you are lucky enough to have lots of time series data relevant to your system, there are some emerging methods which make the process more streamlined. These methods are referred to with different terms, but the most common are 'network estimation' or 'causal discovery'. The first step, again, will be getting hold of data. You will need to collect as much time series data as possible for your system, aiming to cover as many domains as possible and to maximise the number of time periods in the data. The methods themselves include basic correlation thresholding approaches, Granger causality approaches, statistical structure learning approaches including causal graphical models (i.e. Bayesian networks) and structural causal models, inner composition alignment, and convergent cross mapping, to name a few. They all have different pros and cons revolving around what data they can use, what they can assert about causality, and what types of maps they give you (i.e. directed or not, weighted or not, cyclic or acyclic). Ospina-Forero et al. (2020) provide an excellent overview of these and other methods in the context of the sustainable development goals (SDGs). The SDGs are an interesting way to frame systems because the scale of measurement (typically national) and the good data collection around them mean they are often amenable to this type of analysis. Systems defined at a national level, or at other scales where a lot of data is collected and available, may be among the most appropriate for developing quantitative data-driven system maps. Most of these methods will produce maps directly for you; the 'final' task thus becomes combining these and/or converting them into a form compliant with whichever systems mapping method you are hoping to use.
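To give a flavour of the simplest of these, correlation thresholding, the sketch below (in Python, using pandas and networkx) builds a network by keeping only strongly correlated pairs of indicators; the dataset and the 0.7 threshold are assumptions, and the more robust methods cited above should be preferred in real applications.

```python
# A minimal 'network estimation' sketch: connect indicators whose pairwise
# correlation exceeds a threshold, then export the map in GML format.
import pandas as pd
import networkx as nx

df = pd.read_csv("sdg_indicators.csv")  # hypothetical: columns = indicators, rows = years

corr = df.corr()  # pairwise Pearson correlations
graph = nx.Graph()
graph.add_nodes_from(corr.columns)

for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if abs(corr.loc[a, b]) >= 0.7:  # keep only strong associations
            graph.add_edge(a, b, weight=float(round(corr.loc[a, b], 2)))

nx.write_gml(graph, "estimated_map.gml")  # import into a systems-mapping tool
```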

Using Different Types of Data and Evidence in Practice

Though we have outlined the use of different types of data and evidence in turn, we will often want to combine them. Indeed, given enough time and resources, it is hard to think of a reason why we would not combine them wherever possible. In a similar fashion to the combination of methods, described in Chap. 11, combining data sources and evidence will help us approach a topic more holistically and give us helpful points around which to cross-compare, triangulate, and iterate. Different skills are needed to use different types of information, and this should not be underestimated as a barrier: using quantitative data is technically demanding, whereas running participatory processes requires strong facilitation and communication skills.

Perhaps the most obvious combination is to develop a map in participatory mode, and then refine it with further information from data sources and existing evidence (as is sometimes done with Bayesian Belief Networks). These could be used to annotate a map, validate it, or inform quantification decisions (i.e. what the conditional probabilities are in a Bayesian Belief Network, or what the equations are in a System Dynamics model). A second combination we see as holding great value is using qualitative data and participatory processes to address the potential gaps in quantitative data and existing evidence. This replicates the logic of much mixed-methods research in using the strengths of some approaches to cover the weaknesses of others. Combining sources in any way will involve additional work in map visualisation and may create parallel streams in your analysis and use of maps.

As we often do, we want to finish with a plea for creativity. It is tempting to think of certain types of data as working better with certain methods. For example, you might think quantitative data will work best with Bayesian Belief Networks or System Dynamics because of their quantitative and more formal approach. However, we believe it is important to keep an open mind here: any of the methods can be, and have been, used with different data sources, evidence, and processes. Moreover, some of the room for innovation, and potential new insights, may lie in more unexpected and creative combinations. It may be interesting to ask: what might a Rich Picture of quantitative data look like? How might a Theory of Change based on evidence differ from a Theory of Change built from the mental models of a policy team? Any combination of method and information source is valid, as long as we are aware of the potential gaps and omissions in the information we use, and either address them directly or adjust our aims and claims accordingly.