BBNs, as with all networks, are made up of nodes (which here represent variables, factors, or outcomes in a system) and edges (which represent the causal relations between these nodes). So far, so familiar, and like many of the methods in this book. What sets BBNs apart is that each node has a set of defined states (e.g. on or off, high or low, present or not present) and associated likelihoods of being in each of those states. These likelihoods depend, in probabilistic fashion, on the states of the nodes they are connected to, that is, the nodes from which arrows point into them. In the language of probability, nodes are ‘conditionally dependent’ on the states of the nodes they have a causal relationship with. So, a BBN is the network plus the collection of conditional probabilities (usually shown in simple tables or plots annotated onto the network diagram) denoting the likelihood of each node taking its different states. The last key point to mention is that BBNs are acyclic; that is, they do not have any cycles or feedbacks, so following the arrows can never lead back to a node already visited. This is an important distinction from methods which do allow cycles, such as Causal Loop Diagrams and Participatory System Maps.
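To make this concrete, here is a minimal sketch of how these two ingredients, the directed acyclic network and the conditional probabilities, might be written down in Python; all node names and numbers here are hypothetical, not drawn from any particular tool or example.

```python
# A minimal sketch of a BBN's two ingredients: a directed acyclic
# graph and the conditional probabilities attached to each node.
# All node names and numbers are hypothetical.

# Edges point from cause (parent) to effect (child).
edges = [("rainfall", "river_flow")]

# Each node has a small set of discrete states.
states = {
    "rainfall": ["high", "low"],
    "river_flow": ["good", "poor"],
}

# A root node gets an unconditional distribution over its states...
priors = {"rainfall": {"high": 0.6, "low": 0.4}}

# ...while a child node is 'conditionally dependent': it gets one
# distribution per state of its parent(s).
cpt_river_flow = {
    "high": {"good": 0.8, "poor": 0.2},  # P(river_flow | rainfall=high)
    "low":  {"good": 0.3, "poor": 0.7},  # P(river_flow | rainfall=low)
}
```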
It is useful to make this more tangible quickly, so let’s look at an example. Figure 7.1 shows a simple BBN of the effects of ‘rainfall’ and ‘forest cover’ in a river catchment, and the links through to different outcomes, such as ‘angling potential’ and ‘farmer income’. In this simple BBN, you can see the acyclic structure and the focus on outcomes we or others might care about.
Figure 7.2 shows the same BBN, but this time with the states of each node displayed alongside the probability distribution across those states. This allows us to see, for example, the likelihood of all the other nodes taking specific values if ‘rainfall’ and ‘forest cover’ are both high.
We could explore different scenarios by setting the states of ‘rainfall’ and ‘forest cover’ differently and seeing how this affects the rest of the map. This is a common way of using BBNs: setting certain node states given our observations (or hypothetical scenarios we are interested in) and seeing what this implies about the probability of states in other nodes. With ‘rainfall’ and ‘forest cover’, we are setting values at the ‘top’ of the network (sometimes referred to as ‘root nodes’ or ‘parent nodes’, i.e. nodes with no arrows coming into them) and looking causally ‘down’, but it can be done the other way round too: setting the states of outcomes (sometimes referred to as ‘leaf nodes’ or ‘child nodes’) and looking ‘up’ the network to see what might have contributed to that outcome (see the sketch below). These are the two main types of insight the analysis of BBNs can provide: (i) assessing the probability of achieving outcomes, and (ii) quantifying the impacts on outcomes of changes elsewhere in the system. As with all the methods in the book, there is also huge potential value in the process of building a BBN, to generate discussion and learning about the topic.
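Both directions of reasoning can be shown in a few lines of Python, continuing the hypothetical rainfall and river flow sketch from above (the numbers remain illustrative, not taken from the figures). Looking ‘down’ is just the law of total probability over the parent’s states; looking ‘up’ is Bayes’ rule.

```python
# Continuing the hypothetical rainfall -> river_flow sketch above.
priors = {"rainfall": {"high": 0.6, "low": 0.4}}
cpt = {
    "high": {"good": 0.8, "poor": 0.2},
    "low":  {"good": 0.3, "poor": 0.7},
}

# Looking 'down': the probability of each river_flow state, averaging
# over our belief about rainfall (law of total probability).
def p_flow(flow_state):
    return sum(p_r * cpt[r][flow_state] for r, p_r in priors["rainfall"].items())

print(p_flow("good"))  # 0.6*0.8 + 0.4*0.3 = 0.60

# Looking 'up': we observe poor flow; Bayes' rule inverts the arrow
# and updates our belief about rainfall in light of that evidence.
def p_rain_given_flow(rain_state, flow_state):
    joint = priors["rainfall"][rain_state] * cpt[rain_state][flow_state]
    return joint / p_flow(flow_state)

print(p_rain_given_flow("low", "poor"))  # 0.28 / 0.40 = 0.70
```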
These examples give us a quick sense of what BBNs are about, but they show only the ‘results’ of the BBN; this is what appears in the node tables. What is not shown are the conditional probability tables that underpin these outputs. Table 7.1 shows what one of these tables might look like for one of the factors in this BBN, ‘reservoir storage’. It lists the different states of the parent node of ‘reservoir storage’, which is ‘river flow’, and the resulting probabilities of ‘reservoir storage’ taking each of its states (i.e. if ‘river flow’ is good, then there is a 90% chance ‘reservoir storage’ is good, and a 10% chance it is medium).
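In code, a conditional probability table is simply a lookup from the parent’s states to a distribution over the node’s states. Here is a sketch of Table 7.1; only the ‘good’ row’s numbers (0.9 / 0.1) are quoted in the text, so the other row is a hypothetical placeholder.

```python
# P(reservoir_storage | river_flow) as a lookup from the parent's
# state to a distribution over the node's states. Only the 'good'
# row (0.9 / 0.1) is quoted in the text; the 'poor' row is hypothetical.
cpt_reservoir_storage = {
    "good": {"good": 0.9, "medium": 0.1},
    "poor": {"good": 0.2, "medium": 0.8},  # hypothetical
}

# Reading the table: if river flow is good, there is a 90% chance
# reservoir storage is good and a 10% chance it is medium.
print(cpt_reservoir_storage["good"])
```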
Table 7.1 An example conditional probability table based on the reservoir storage node in the BBN in Figs. 7.1 and 7.2

BBNs have some well-known and often-criticised constraints. First, they are acyclic; that is, there cannot be any feedback loops, of any length, in the network. This constraint is imposed so that the calculations work. In complex systems, it is rare for there to be no feedback loops, and where there are feedback loops, these are often powerful drivers of dynamics in the system. It is possible to partially represent feedback loops by including multiple nodes for the same thing, but for different time points (we show an example of this below). BBNs that use this approach are often called ‘dynamic’ BBNs. A second constraint on BBNs developed with expert input is that most nodes cannot have more than two or three parent nodes (i.e. incoming connections), and nodes should not have more than a handful of states. This constraint is normally imposed so that the conditional probability tables, which are a key component of what is elicited from stakeholders to build the BBN, do not become unworkably large. By way of illustration, imagine a node with two states and two parents, each with two states themselves: this will require a 4 × 4 table (2 × 2 = 4 rows, one per combination of parent states, with two columns identifying the parent states and two holding the probabilities). However, a node with three states and three parents, each with three states themselves, will need a 6 × 27 table (3 × 3 × 3 = 27 rows, with three parent-state columns and three probability columns). Imagine filling that in with stakeholders, cell by cell, with potentially important discussions at each step, and doing this for every node in the map. Combined, these two constraints mean that the underlying network in a BBN tends to end up being a relatively simplified model of reality compared to some of the other systems mapping methods in this book. This is not necessarily a problem, but it is a constraint we should be aware of.
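The arithmetic behind those table sizes is easy to check; a short sketch, assuming the layout described above (one column per parent plus one column per node state, and one row per combination of parent states):

```python
from math import prod

def cpt_shape(n_node_states, parent_state_counts):
    """Columns x rows of a CPT laid out with one column per parent,
    one column per node state, and one row per combination of
    parent states."""
    rows = prod(parent_state_counts)
    cols = len(parent_state_counts) + n_node_states
    return cols, rows

print(cpt_shape(2, [2, 2]))     # (4, 4)  -> the 4 x 4 table
print(cpt_shape(3, [3, 3, 3]))  # (6, 27) -> the 6 x 27 table
```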
These constraints often draw ire from researchers and practitioners who want to represent whole systems and take a complex systems worldview (including from us in the past!). However, one of the typically misunderstood, or simply missed, nuances of BBNs is that the use of conditional probabilities means we can still capture some elements of the wider system in the analysis and in the discussion around constructing maps, even if they are not in the network explicitly. To demonstrate, consider Table 7.2. Here, we can see the probability of an outcome occurring given the states of two interventions. Even when we have both interventions ‘on’, there is still a 0.1 probability the outcome does not happen; conversely, when neither intervention is ‘on’, there is still a 0.2 probability that the outcome does happen. These non-zero probabilities represent ‘everything else going on in the system’. They are often an important point of discussion in the elicitation process and allow us to capture influences on the outcome from factors we do not formally put in the network.
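Table 7.2 translates directly into code; the point is that the wider system lives in the probabilities themselves. Only the both-‘on’ and both-‘off’ figures are quoted in the text, so the mixed rows below are hypothetical.

```python
# P(outcome = happens | intervention_1, intervention_2), as in Table 7.2.
# The both-'on' and both-'off' rows use the probabilities quoted in the
# text; the mixed rows are hypothetical.
p_outcome_happens = {
    ("on",  "on"):  0.9,  # 0.1 chance it still fails: the wider system
    ("on",  "off"): 0.6,  # hypothetical
    ("off", "on"):  0.5,  # hypothetical
    ("off", "off"): 0.2,  # the outcome can still happen without us
}
```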
Table 7.2 Simple hypothetical conditional probability table for two interventions and an outcome

You may be wondering why BBNs are Bayesian. They are referred to as ‘Bayesian’ because they use the underlying logic of Bayesian statistics (which provides a way to update probabilities in the light of new data or evidence), not because they were developed by Thomas Bayes himself. Bayesian statistics, simply put, is a field within statistics that revolves around the idea of probability expressing an expectation of likelihood based on prior knowledge or personal belief. This probability may be updated as new information arrives about factors we believe to influence the event. In a sense, this operationalises how our belief about a particular probability should change rationally as we learn more. This is in contrast to the frequentist view of probability, which revolves around the idea that probability relates to the relative frequency of an event. We do not want to get into the large and ongoing debates within and between these two schools of thought. However, it is important to recognise that BBNs take that Bayesian idea of probability and implement it through the network structure and conditional probability tables; the parent nodes of any node hold the prior information we are using to update our beliefs about that node. We can also use new information about the states of child nodes to update our beliefs about parent nodes using Bayesian inference.
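Concretely, the update rule at work is Bayes’ theorem. For a parent state x and an observed child state y:

P(parent = x | child = y) = P(child = y | parent = x) × P(parent = x) / P(child = y)

Here the likelihood P(child = y | parent = x) is read straight from the child’s conditional probability table, the prior P(parent = x) is our existing belief about the parent, and the denominator is obtained by summing the numerator over all parent states.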
There is a lot of variety in how BBNs are built, either directly from data or through participatory processes with experts and stakeholders. However, the object that is produced and the analysis that is done tend to be consistent. There are extensions, such as dynamic BBNs (as mentioned above) and hybrid BBNs (which allow us to include continuous variables as well as the categorical variables we have described above). Where there is more variety is in the terminology and jargon associated with BBNs. Ironically, given the formalism of the method, this is one of the methods with the highest number of different names and, less surprisingly, some of the most opaque technical language.
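As a flavour of the dynamic BBN idea, a feedback (say, between storage and abstraction) can be unrolled into time-indexed copies of each node so that every arrow points forward in time; the node names in this sketch are hypothetical.

```python
# A feedback loop (storage <-> abstraction, say) cannot appear in a
# BBN directly, but it can be unrolled over time slices: each node
# gets one copy per time step and every arrow points forward in time,
# so the graph stays acyclic. Node names here are hypothetical.
dynamic_edges = [
    ("storage_t0", "abstraction_t0"),
    ("abstraction_t0", "storage_t1"),  # the feedback, one step later
    ("storage_t1", "abstraction_t1"),
    ("abstraction_t1", "storage_t2"),
]
```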
You may see BBNs referred to as any of the following: ‘Bayesian networks’, ‘probability networks’, ‘dependency models’, ‘influence diagrams’, ‘directed graphical models’, or ‘causal probabilistic models’. We have also seen them referred to as ‘Theory of Change maps’ because of their similarities with these types of diagrams, that is, a focus on connections between inputs and outcomes, and a tendency to produce simple maps that do not include feedbacks or many causal influences. This plethora of terms seems to reflect the widespread use of BBNs in different domains rather than large differences in how they are used. Some of the key technical terms you may bump into include ‘prior probability distribution’, or ‘prior’, which refers to the probability distribution that represents our best guess about the probability of a node taking each of its states before new or additional evidence or data is taken into account; and ‘posterior probability distribution’, or ‘posterior’, which is the conditional probability we assign after taking into account new evidence or data. Note that the probability distributions assigned to the states of root nodes are prior probabilities because they have no inputs on which their state is conditional.