# Adaptive risk consensus models: simulations and applications


## Abstract

A simulation framework that implements adaptive agent–agent interaction is developed, such that agent behaviour typical of complex adaptive systems is observed. Within this framework, agents monitor the state of the system they inhabit, and adapt their actions so as to optimise a local utility. No central control is present. The context for state is intended to be very general, but is interpreted as risk state, in which optimisation implies a minimisation of risk. Three adaptive interaction modes are proposed. In each, there is a trade-off between simplicity and effectiveness. Additionally a fourth ‘counter-adaptive’ mode is proposed to model situations of a prolonged high risk state. Corresponding ‘real’ examples from recent events are proposed.

## Keywords

Beta distribution · Consensus · Adaptive · Risk · Convergence · Simulation · Game theory · Dynamical system · Utility function

## JEL Classification

C15 · C51 · G32

## 1 Introduction

This paper concerns the *state* of a system, measured on a continuous scale in the range (0,1). In particular, state may be interpreted as *risk state*, especially in the context of financial risk. A *risk state* with value at or near zero is interpreted as risk free, and a *risk state* with value at or near one indicates maximum risk. Therefore, within the context of risk, the new elements introduced in order to simulate an adaptive system are:

1. An assessment by agents of the risk state of the system.
2. A predictive measure of an optimal way to achieve a target risk state.
3. An attempt to reduce risk by mutual agreement.

A *complex adaptive system* (*CAS*) may be thought of as ‘many agents working in parallel to accomplish a goal’.

The agents in this analysis are capable of only benign interaction. In an interaction between two agents, one can influence the *risk state* of the other, who is able to resist that influence. Both influence and resistance are purely mechanical. There is no concept of reasoning, planning or wanting an individual goal as in the *BDI* model of Bratman ([1]).

### 1.1 Structure of this paper

The previous work on *CAS*s has concentrated on particular modelling concepts, and consequent behaviour typical of complex systems (principally emergent behaviour) has been noted. Some of those approaches are discussed in Sect. 2. The adaptive framework presented here is intended to be more general in that the only assumptions concern the mechanics of how agents interact. The context is totally abstract and is technically irrelevant to the discussion. This generality is the major contribution of this paper. However, the context of *risk* helps to make the discussion easier to grasp. The bases of agent structure and agent–agent interactions are summarised in Sect. 3. The section that follows (4) gives details of three modes with which to implement an adaptive property for agent–agent interactions, and also one ‘counter-adaptive’ mode. Some results of applying those modes are presented in Sect. 5. Section 6.1 has details of instances where the models in this paper may be applied to financial or economic events.

## 2 Previous work on complex adaptive systems

In this section, we summarise prior research on *CAS*s, including closely related work on discrete dynamical systems, Markov, Chaos and Game Theory models.

Work on *CAS*s has been proceeding in earnest from the early 1990s onwards. The elements within a complex system are summarised in Rzevski and Skobelev [13]. Brownlee [2] gives an account of the beginnings of research into the topic, including the contributions of Holland and Gell-Mann.

Holland [8] extends those ideas to cover the points that are most relevant to an adaptive complex system, namely performance assessment and rule-definition. The way to approach those points is not unique, but the principal features include the idea of a *replacement rule*. In any agent–agent interaction within a multi-step process, a subset of the system is replaced by a successor subset that results from an optimising calculation.

### 2.1 Discrete dynamical systems

The class of *discrete dynamical systems* comprises spatial models, characterised by neighbourhood dependence. Agent interactions take place among agents that are ‘near’ to one another, as measured by some metric. Many such *CAS* models use cellular automata, in which a neighbourhood is often defined within a grid, and the effect of an interaction is to replace one or more agents by newly created agents. The governing replacement rules are well-defined and are usually rigid. Wolfram [14] made an early link between cellular automata and complexity in a general overview of cellular automata. The general principle of the use of replacement rules in this context is that a function *R* replaces an agent, \(X_t\), that is within the system at time *t*, with a *different* agent \(X_{t+1}\) at the time step \(t \rightarrow t+1\). The replacement involves a set of *J* agents \(\{Y_j \}_{j \in J}\) that have an immediate connection with \(X_t\). The replacement may be expressed as:

\(X_{t+1} = R(X_t, \{Y_j \}_{j \in J})\)

Wolfram shows how simple replacement rules result in emergent behaviour. The basic cellular automata model has immutable rules, which result in non-adaptive behaviour. In addition, there is no predictive element. To make it adaptive, the model can be extended to a more general case in which the network topology is variable. The system configuration at any given time is a function of states of the nodes and the topology of the network. In the context of discrete dynamical systems, *rule-definition* is defined by the network topology, and *performance assessment* is defined by the calculations for the rewrite events.
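The replacement rule \(X_{t+1} = R(X_t, \{Y_j \}_{j \in J})\) can be sketched for the simplest case: a one-dimensional elementary cellular automaton, where the connected set \(\{Y_j\}\) is the two immediate neighbours. The choice of Rule 110 and a periodic boundary are illustrative assumptions, not part of the paper's framework.

```python
# Sketch of the replacement rule X_{t+1} = R(X_t, {Y_j}) for an elementary
# cellular automaton. Rule 110 and the periodic boundary are illustrative
# choices only.

def replace(rule: int, left: int, centre: int, right: int) -> int:
    """R: the new state of a cell from its own state and its two neighbours."""
    index = (left << 2) | (centre << 1) | right  # neighbourhood code 0..7
    return (rule >> index) & 1                   # look up the rule's bit

def step(rule: int, cells: list[int]) -> list[int]:
    """Apply R to every cell simultaneously (periodic boundary)."""
    n = len(cells)
    return [replace(rule, cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 10 + [1] + [0] * 10   # a single live cell
for _ in range(5):
    cells = step(110, cells)
print(sum(cells))  # number of live cells after 5 steps
```

Iterating `step` from a simple initial row is exactly the setting in which Wolfram observed emergent behaviour from rigid rules.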

### 2.2 Markov models

Markov processes rely on probabilities of a change of state. A simple example may be found in Holland [7]. Kiefer and Larson [9] provide a more algebraic treatment in the context of credit default. The basis of a Markov model is a discrete set of states, with a probability of transition from one state to another. Again, there is no predictive element. With *J* states, denote the probability that an agent counterparty *X* is in state \(j\; (0 \le j \le J)\) at time *t* by \(P(X_t = j)\). Let the conditional probability of a transition from state *j* at time *t* to state \(k \; (0 \le k \le J)\) at time \(t+1\) be \(p_{jk} = P(X_{t+1} = k \mid X_t = j)\). Then, the probabilities \(p_{jk}\) define a transition matrix *P*. When all possible states at time *t* for the Markov process are organised in a vector \(S_t\), the probabilistic evolution of the Markov process can be represented by the equation

\(S_{t+1} = {{\varvec{P}}} S_t\).

This recurrence relation leads to an explicit expression for \(S_t\) in terms of an initial vector \(S_0\):

\(S_t = {{\varvec{P}}}^t S_0\).

Such an equation, which defines a future state explicitly in terms of an initial state, is fundamental to Markov processes. The difficulty in usage is to determine the elements of the transition matrix *P*.
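The evolution \(S_t = {{\varvec{P}}}^t S_0\) is easily computed numerically. The 3-state transition matrix below is an illustrative assumption; its columns index the source state, so each column sums to 1, matching \(S_{t+1} = {{\varvec{P}}} S_t\).

```python
import numpy as np

# Sketch: evolving a Markov state vector via S_t = P^t S_0.
# Columns of P are source states, so each column sums to 1.
P = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.7, 0.3],
              [0.0, 0.1, 0.7]])

S0 = np.array([1.0, 0.0, 0.0])          # start in state 0 with certainty
S5 = np.linalg.matrix_power(P, 5) @ S0  # S_5 = P^5 S_0
print(S5, S5.sum())                     # the probabilities still sum to 1
```

In practice the hard part, as noted above, is estimating the entries of *P* from data rather than computing the matrix power.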

### 2.3 Game theory models

A game in this setting has four components:

- *Players* (*agents* in complex systems)
- *Actions* available at each interaction point (rules governing interactions)
- *Pay-offs* (utilities that minimise loss or maximise gain)
- *Information* about the system or parts of it at each interaction

Several representations for games are in common use, the most common being *Normal form*. For *Normal form*, the pay-off is specified by any function that associates a pay-off for each player with every possible combination of actions. Thus, for two agents *X* and *Y*, if there are *K* available strategies \(\{c_1, c_2, \ldots , c_K\}\), and *X* chooses first and takes strategy *i*, leaving *Y* to choose second and take strategy *j* \((0 \le i,j \le K)\), then the corresponding pay-off is entered into the *i*th row and the *j*th column of a matrix *M* as a pair of functions \(\{m_i^X, m_j^Y \}\). Very often these two functions are constants. The *Normal form* representation can become very cumbersome if the set of available pay-offs is large, and also if there are more than two agents in an interaction. Usually, only two agents interact in the context of complex systems. The essential requirement of a pay-off function ensures that the system is adaptive, but emergent behaviour is not always clear and is usually not considered explicitly.
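A *Normal form* matrix with constant pay-offs can be sketched as a lookup table of pairs \(\{m_i^X, m_j^Y\}\). The Prisoner's Dilemma numbers below are a standard textbook illustration, not taken from the paper.

```python
# Sketch of a Normal-form pay-off matrix for two agents X and Y.
# Entry (i, j) holds the pair {m_i^X, m_j^Y} when X plays i and Y plays j.
# The Prisoner's Dilemma values are illustrative only.
strategies = ["cooperate", "defect"]
M = {("cooperate", "cooperate"): (-1, -1),
     ("cooperate", "defect"):    (-3,  0),
     ("defect",    "cooperate"): ( 0, -3),
     ("defect",    "defect"):    (-2, -2)}

def best_response_Y(x_choice: str) -> str:
    """Y's pay-off-maximising reply to a fixed choice by X."""
    return max(strategies, key=lambda y: M[(x_choice, y)][1])

print(best_response_Y("cooperate"))  # "defect"
```

Even this 2×2 table hints at why the representation becomes cumbersome: the table grows as \(K^2\) for two players, and exponentially with more players.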

### 2.4 Chaos models

Chaos theory originates from work by Lorenz [10], and refers to unpredictable behaviour in a deterministic (rules-driven) system.

An account of the mathematical basis of chaos theory may be found in Chen and Moiola [6]. Consider a set of states at time *t*, \({{\varvec{S}}}(t) = \{{\hat{S}}_1, {\hat{S}}_2, \ldots , {\hat{S}}_n |t\}\), with a transformation *T* that acts on those states:

\({\hat{S}}(t+1) = {{\varvec{T}}}({\hat{S}}(t))\).

Although an individual path \(\{{\hat{S}}(1), {\hat{S}}(2), \ldots \}\) may be unpredictable, that path is often bounded. The adaptive modes in Sect. 3 of this paper also have the same property. Chaotic systems thus defined exhibit emergence and adaptivity, both depending on rules specified by *T*. Chaotic systems can be considered as a superset of complex systems because an important component of the latter is *self-organisation*: individual agents cooperate to achieve a goal. However, a characteristic of chaotic systems is that small differences in initial conditions can produce widely diverging outcomes. This does not necessarily happen in complex systems, due to moderation by other agents. Chaotic systems do not distinguish between a *state*, and an *agent* that has a property *state*. Neither do they have a predictive element.
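The sensitivity to initial conditions described above can be sketched with the logistic map, a standard chaotic illustration (it is not part of the paper's framework): \({\hat{S}}(t+1) = r\,{\hat{S}}(t)(1 - {\hat{S}}(t))\) with \(r = 4\).

```python
# Sensitivity to initial conditions, illustrated with the logistic map
# S(t+1) = r S(t)(1 - S(t)). A standard example, not from the paper.
def trajectory(s0: float, r: float = 4.0, steps: int = 40) -> list[float]:
    path = [s0]
    for _ in range(steps):
        path.append(r * path[-1] * (1 - path[-1]))
    return path

a = trajectory(0.200000)
b = trajectory(0.200001)   # a tiny perturbation of the start state
print(abs(a[-1] - b[-1]))  # the two paths end up far apart
```

Note that each path remains bounded in (0,1) even though the two trajectories diverge: this is the 'unpredictable but bounded' property mentioned above.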

## 3 Adaptive agent interaction

In this section, we describe four agent interaction modes within the mathematical framework for complexity described in Mitic [11]. All of them extend the concepts introduced in that paper since they incorporate the idea that agents can adapt their behaviour so as to achieve particular goals. Our approach is consistent with the idea of replacement that was mentioned in Sect. 2: Agents are replaced by amended versions of those agents. This section also contains a discussion of convergence of the state of a group of agents, and includes a convergence proof.

We call the first mode *passive adaptive* (*PA*) interaction: the outcome of such an interaction is a pair of agents that passively accept the average pre-interaction state. We call the second mode *weakly active adaptive* (*WAA*) interaction: agents in a group interact actively so as to revert to the mean state of the group. In the third mode, *strongly active adaptive* (*SAA*), active interaction is more general: agents aim towards a particular goal. Both active interaction modes incorporate the idea that agents within a system monitor the system, and adapt their behaviour by calculating how best to achieve their goal. Additionally, a fourth mode is proposed that incorporates an element acting in the opposite direction. We call it *counter-adaptive* (*CA*). It is used to model situations where opposing parties cannot or will not compromise. Before describing each, we summarise the base complexity model.

### 3.1 Comparison with other models

The interaction modes presented here differ from the models summarised in Sect. 2 in three respects:

- They do not rely on a topology (such as a network, neighbourhood, etc.) of the system.
- Their time evolution depends purely on rules governing agent-pair interactions.
- They are not subject to any system control.

### 3.2 Agents and agent interaction: summary

A shortened version of the underlying complexity framework appears in Mitic [12]. Its key points are summarised here. An agent *X* has a state \({\hat{S}}_X\) in the range (0,1), and the agent itself is modelled using a Beta function \(\beta (a,b)\), where the Beta parameters *a* and *b* are in the range (1,999). Those parameters define the agent’s resistance to change and its state.

#### 3.2.1 Definitions

An agent *X* is a triple as in Eq. 1, where \(I \in (0,1)\) specifies the influence of an agent (on another agent), and *N* is an alphanumeric term that holds a name for *X*. When referring to *X*’s Beta distribution only, we write \(X \sim {\beta (a,b)}\). Similarly, when referring to *X*’s Beta distribution with its influence parameter, we write \(X \sim {(\beta (a,b), I)}\). The state of an agent *X* is denoted by \({\hat{S}}_X\), or by \({\hat{S}}_X(t)\) in cases where time dependence is required. *State* is given by the expected value of *X*’s Beta distribution:

\({\hat{S}}_X = \frac{a}{a+b}\)
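The agent triple and its state can be sketched directly. The state formula \(a/(a+b)\) is the mean of the Beta distribution, and is consistent with the state-transition parameters of Sect. 3.2.3; the class layout itself is an illustrative assumption.

```python
from dataclasses import dataclass

# Sketch of the agent triple (beta(a, b), I, N). The state a / (a + b)
# is the Beta mean; the class layout is illustrative only.
@dataclass
class Agent:
    a: float      # Beta parameter, in (1, 999)
    b: float      # Beta parameter, in (1, 999)
    I: float      # influence, in (0, 1)
    N: str = ""   # name

    def state(self) -> float:
        """S_hat = E[beta(a, b)] = a / (a + b), always in (0, 1)."""
        return self.a / (self.a + self.b)

x = Agent(a=2.0, b=8.0, I=0.5, N="X")
print(x.state())  # 0.2
```

Because *a* and *b* are both positive, the state is automatically confined to (0,1), matching the risk-state scale of Sect. 1.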

#### 3.2.2 Agent interaction

The result of an interaction between agents *X* and *Y* is another agent \(X^*\) that replaces *X* and is termed the *resultant* of the interaction. Agent *Y* remains unchanged. The interaction is denoted by \(X^* = \left\langle X,Y \right\rangle\). The Beta and influence parameters of the resultant are calculated from the *a*-values, *b*-values and *I*-values for *X* and *Y*. See the full explanation in Mitic [11]. Note that the brace operator is not symmetric: \(\left\langle X,Y \right\rangle \ne \left\langle Y,X \right\rangle\).

That interaction mode is non-adaptive. In the interaction models that follow, the base interaction mode will be extended to incorporate increasingly complicated adaptive components in order to fulfil different requirements. In general, adaptive interactions will be denoted by double braces. For example, \(Z = \left\langle \left\langle X,Y|M \right\rangle \right\rangle\), where *M* is one of the three adaptive modes *PA*, *WAA* or *SAA*. The notation should be interpreted as ‘*X* and *Y* interact using adaptive mode *M* to produce resultant *Z*’. *Z* may be either a single agent, or a pair of agents, depending on the mode used. An alternative adaptive mode was hinted at in Wolfram [14], and it most closely resembles the *SAA* mode in that it involves a target state.

#### 3.2.3 State transition

To transform an agent *X* with Beta parameters (*a*, *b*) to an agent \(X^*\) with target state *s*, a new agent is defined with new Beta parameters \(({\bar{a}}, {\bar{b}}) = (a, a (\frac{1}{s} - 1))\). The influence parameter *I* is passed to the new agent unchanged. Thus:

\(X^* \sim {(\beta ({\bar{a}}, {\bar{b}}), I)}\)

The convergence proof in Sect. 3.3 uses the concept of a *group agent*: a single agent that represents a set of agents. Therefore, the proof will be deferred until after a discussion of the *group agent* in Sect. 3.2.4.
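The state transition above can be verified directly: the choice \({\bar{b}} = a(\frac{1}{s}-1)\) makes the new mean \(\frac{a}{a + a(\frac{1}{s}-1)} = s\). A minimal sketch (function name is illustrative):

```python
# Sketch of the state transition of Sect. 3.2.3: to move an agent with Beta
# parameters (a, b) to target state s, set the new parameters to
# (a, a(1/s - 1)). The new mean a / (a + a(1/s - 1)) then equals s exactly.
def retarget(a: float, b: float, s: float) -> tuple[float, float]:
    return (a, a * (1.0 / s - 1.0))

a_new, b_new = retarget(a=5.0, b=5.0, s=0.25)
print(a_new / (a_new + b_new))  # recovers the target state 0.25
```

Note that the *a* parameter (and hence the agent's resistance behaviour) is preserved; only *b* is adjusted to hit the target state.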

#### 3.2.4 Group agent

In a multi-agent system, we can consider that those agents behave collectively as though they were a single agent. More specifically, the states of individual agents can be combined into a single state that represents them all.

A *group agent*, (sometimes shortened to *Group*) \(G_n\), is nothing more than a set of *n* agents:

\(G_n = \{X_1, X_2, \ldots , X_n\}\).

The Beta and influence parameters of a *group agent* are derived from the *a*-values, *b*-values and *I*-values of the agents in \(G_n\). Therefore, if an agent \(X_i\) has parameters \(a_i\), \(b_i\) and \(I_i\), the parameters of the *group agent* are combinations of the \(a_i\), \(b_i\) and \(I_i\) (Eq. 5). The name of the *group agent* can be set independently of the names of the members of the *group agent*. Alternatively, it can be derived from the names of those members, although naively combining names can become cumbersome very quickly. Supposing that the name of \(G_n\) is \(N_G\), the terms in Eq. 5 define the *group agent* as in Eq. 6.
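A group agent can be sketched as follows. Eq. 5 is not reproduced in this extract, so the particular combination used below (simple means of the members' *a*-, *b*- and *I*-values) is an assumption for illustration only; the paper's Eq. 5 may combine them differently.

```python
# Sketch of a group agent G_n = {X_1, ..., X_n}, each member a tuple
# (a_i, b_i, I_i). The simple-mean combination is an assumed stand-in
# for the paper's Eq. 5.
def group_agent(members: list[tuple[float, float, float]],
                name: str = "G") -> tuple[float, float, float, str]:
    n = len(members)
    a_g = sum(m[0] for m in members) / n
    b_g = sum(m[1] for m in members) / n
    i_g = sum(m[2] for m in members) / n
    return (a_g, b_g, i_g, name)

members = [(2.0, 8.0, 0.5), (4.0, 4.0, 0.3), (6.0, 2.0, 0.7)]
a_g, b_g, i_g, _ = group_agent(members)
print(a_g / (a_g + b_g))  # the group state
```

Whatever the combination rule, the group agent has the same triple structure as an individual agent, so any interaction mode can treat it as a single counterparty.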

### 3.3 State change: convergence proof

This proof concerns a group of agents *G* who interact multiple times and thereby influence each other’s states. We show that the state of all agents converges to the state of the group.

Consider two agents *X* and *Y* that interact using an adaptive mode *M*, which can be any of the three modes defined in Sect. 3. The agent interaction results in two new agents \(X^\prime\) and \(Y^\prime\):

#### 3.3.1 Convergence of the state of a single agent

Assume that a target state *s* has been specified or calculated. Next, extend the notation for state to incorporate a time dependency: let \({\hat{S}}_X(t)\) be the state of *X* at time *t*. Further, assume that the initial state \({\hat{S}}_X(0)\) is a known constant. Then, by construction, the argument considers *n* successive similar interactions, which introduce similar-size numbers \(\alpha _2, \alpha _3, \ldots , \alpha _n\) and error terms with means 0 and variances \(\sigma _2, \sigma _3, \ldots , \sigma _n\), to derive a bound on \({\hat{S}}_X(n)\). Choose *n* such that \(\delta < \alpha _{[n]}\). Therefore, for sufficiently large *n*, and since *s* and \(\delta\) are all constants, the right-hand side of Eq. 12 can be wrapped into a constant \(\epsilon\) for all positive values of *t*. Therefore, \({\hat{S}}_X(t) \rightarrow s\) as *t* becomes large. Thus, the risk state of agent *X* tends to the target risk state. Similarly, if *Y* is substituted for *X* in the argument from Eq. 8 onwards, there is a similar result: \({\hat{S}}_Y(t) \rightarrow s\) as *t* becomes large. Therefore, the risk state of *Y* also tends to the target risk state.

#### 3.3.2 Convergence of the state of a group of agents

Let \(G_n = \{X_1, X_2, \ldots , X_n \}\) be a *Group* of *n* agents. The *group state* at time *t*, denoted by \({\hat{S}}_{G,n}(t)\), is determined by the Beta parameters, \(a^\prime\) and \(b^\prime\), of the *group agent*’s Beta distribution, defined in Eqs. 5 and 6. We assert that the state of the *group agent* at time *t* is a linear combination of the states of the members of the *Group*. So, in terms of normalised weights \(w_{X_i}(t)\) on the state of each agent \(X_i\):

\({\hat{S}}_{G,n}(t) = \sum _{i=1}^{n} w_{X_i}(t) \, {\hat{S}}_{X_i}(t)\)

This is the *Group* analogy of Eq. 13, which is for a single agent. So as *t* becomes large, \({\hat{S}}_{G,n}(t) \rightarrow s\). That is, the *group state* tends to the same limit as the state of the members of the group.

## 4 Interaction modes

The three adaptive interaction modes and also the counter-adaptive mode, introduced at the start of Sect. 3, are presented in this section. The adaptive modes are discussed in increasing order of complexity: *passive adaptive*, followed by *weakly active adaptive*, and lastly *strongly active adaptive*. They are followed by a subsection on consensus failure: the *counter-adaptive* mode.

### 4.1 The passive adaptive (*PA*) mode

The *PA* mode is simple in that complicated monitoring of the state of the system and prediction are absent. Agents *X* and *Y* negotiate, and always agree to ‘meet mid-way’. The steps are summarised in Algorithm *ALGO PA*.

*ALGO PA*: Passive adaptive interaction mode (Inputs *X*, *Y*; Outputs \(X^\prime , Y^\prime\))

1. Calculate resultants:
   \(X^* = \left\langle X,Y \right\rangle\) (with Beta parameters \((a_{X^*},b_{X^*})\))
   \(Y^* = \left\langle Y,X \right\rangle\) (with Beta parameters \((a_{Y^*},b_{Y^*})\))
2. Calculate their states \({\hat{S}}_{X^*}\) and \({\hat{S}}_{Y^*}\)
3. Calculate the mean state, \(m = \frac{{\hat{S}}_{X^*}+{\hat{S}}_{Y^*}}{2}\)
4. Define new agents \(X^\prime\) and \(Y^\prime\), both with state equal to the mean value (see Eq. 4), with Beta parameters:
   \((a_{X^*}, a_{X^*} (\frac{1}{m}-1))\)
   \((a_{Y^*}, a_{Y^*} (\frac{1}{m}-1))\)

Shifting the states of *X* and *Y* to the mean state *m* in *ALGO PA* constitutes adaptation from one to the other, effectively by compromise. The agents put in their bids, and the final result for both is the mean bid. There is an implied utility function for assessing the state of the system with respect to an agent *X*: the measured distance of the original state of *X* from the mean, \(|{\hat{S}}_X - m|\). As such this mode is unsophisticated, but its simplicity results in very fast convergence to consensus (see the results in Sect. 5).
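Steps 2-4 of *ALGO PA* can be sketched as follows. The non-adaptive resultant \(\left\langle X,Y \right\rangle\) of step 1 is defined in Mitic [11] and is not reproduced in this extract, so the sketch takes the resultants' Beta parameters as inputs; names and numbers are illustrative.

```python
# Sketch of steps 2-4 of ALGO PA. Inputs are the Beta parameters of the
# step-1 resultants X* and Y* (the <X,Y> operator itself is in Mitic [11]).
def passive_adaptive(ax: float, bx: float, ay: float, by: float):
    sx = ax / (ax + bx)                 # state of X*
    sy = ay / (ay + by)                 # state of Y*
    m = (sx + sy) / 2.0                 # mean state: the agreed mid-way point
    x_new = (ax, ax * (1.0 / m - 1.0))  # Eq. 4 re-targeting to state m
    y_new = (ay, ay * (1.0 / m - 1.0))
    return x_new, y_new

(xa, xb), (ya, yb) = passive_adaptive(2.0, 8.0, 8.0, 2.0)
print(xa / (xa + xb), ya / (ya + yb))  # both states equal the mean, 0.5
```

Because both outputs are re-targeted to the same state *m*, a single *PA* interaction already puts the pair in exact agreement, which is why convergence in this mode is so fast.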

### 4.2 The weakly active adaptive (*WAA*) mode

In the *WAA* mode, an agent *X* monitors the Group (*G*) and a decision is made to either retain its existing state, or accept the state of *G*, or the state of the resultant \(\left\langle X,Y \right\rangle\). A different utility function is used: the distance comparison, *d*, in *ALGO WAA*, below.

*ALGO WAA*: Weakly adaptive interaction mode (Inputs *X*, *Y*; Output \(X^\prime\))

1. Calculate the non-adaptive resultant \(X^* = \left\langle X,Y \right\rangle\)
2. Calculate the states \({\hat{S}}_X\), \({\hat{S}}_Y\), \({\hat{S}}_{X^*}\) and \({\hat{S}}_G\)
3. Calculate \(d = \min (|{\hat{S}}_X-{\hat{S}}_G|, |{\hat{S}}_Y-{\hat{S}}_G|, |{\hat{S}}_{X^*}-{\hat{S}}_G|)\)
4. If \(d = |{\hat{S}}_X-{\hat{S}}_G|\), set \(X^\prime = X\) (so that *X* is unchanged); otherwise, set \(X^\prime\) to the candidate whose state attains the minimum *d*

This mode may be classified as *adaptive* because the state of the group is accounted for. Agent *X* assesses the state of the group as well as the state of the other agent that takes part in each of its agent–agent interactions.
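The selection step of *ALGO WAA* can be sketched on states alone. The resultant state (from \(\left\langle X,Y \right\rangle\)) is taken as an input because the non-adaptive interaction is defined in Mitic [11]; and since step 4 is truncated in this extract, the sketch assumes that \(X^\prime\) adopts the state of whichever candidate lies closest to the group state.

```python
# Sketch of the WAA decision: the utility is distance to the group state.
# s_xstar is the state of the step-1 resultant <X,Y>, supplied as an input.
# The assumption (step 4 is truncated in the source) is that X' takes the
# state of the candidate nearest the group state s_g.
def weakly_active_adaptive(s_x: float, s_y: float,
                           s_xstar: float, s_g: float) -> float:
    """Return the state chosen for X'."""
    candidates = [s_x, s_y, s_xstar]
    return min(candidates, key=lambda s: abs(s - s_g))

print(weakly_active_adaptive(s_x=0.2, s_y=0.9, s_xstar=0.55, s_g=0.5))  # 0.55
```

When *X* itself is already the nearest candidate, the function returns \({\hat{S}}_X\) unchanged, matching the 'retain its existing state' option described above.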

### 4.3 The strongly active adaptive (*SAA*) mode

In the *SAA* mode, the two agents in an interaction both aim to achieve a target state. The target may not be a state of consensus. Instead it may be a mutually beneficial state, such as a recovery from an adverse shock. Shocks and recoveries were examined in Wolfram [14] using a crude recovery mechanism: at each interaction in the ‘recovery’ mode, agents are forced to move towards the target by a predetermined amount. The result of a single adaptive interaction between two agents *X* and *Y* is a tuple \(\{X^\prime ,Y^\prime \}\), and we denote the *strongly active adaptive* interaction by \(\{X^\prime ,Y^\prime \} = \left\langle \left\langle X,Y \right\rangle \right\rangle\). In contrast to the non-adaptive interaction, both *X* and *Y* are affected and both are returned. Algorithm *ALGO SAA* shows the steps in the calculation. The algorithm uses a utility function \(U(X, s, \tau , T, m)\), where *s* is a target state for agent *X*, \(\tau\) is the duration remaining to some pre-defined target time *T*, and *m* is a constant factor that determines how the interaction proceeds (explained below Eq. 17). This utility function can be interpreted as a cost saving associated with the interaction, and the intention is to maximise it at every interaction.

The term \(|s - {\hat{S}}_X|\) is a risk measure indicating the deviation of the current state of *X* from the target state. The ratio of exponentials is a time penalty, such that as time advances towards *T*, the cost saving decreases. Therefore, it is advantageous to agree quickly. The factor *m* is a constant that determines the decisions that an agent could take within the interaction \(\left\langle \left\langle X,Y \right\rangle \right\rangle\). It is set to one of the values listed in *ALGO SAA*, depending on the outcome of three utility function calculations. The choice is to either abandon the interaction, use a predicted state, or ignore the prediction in favour of a *WAA* alternative. The details of algorithm *ALGO SAA* are in the tableau that follows. The essential stages are a prediction of a future state, followed by utility calculations, and lastly a decision about which of a set of options to choose.

*ALGO SAA*: Strongly adaptive interaction mode (Inputs *X*, *Y*; Outputs \(X^\prime , Y^\prime\))

1. Define agents \(X_P\) and \(Y_P\) corresponding to predicted states for *X* and *Y*
2. Calculate the agents that result from non-adaptive interactions:
   \(X^\prime = \left\langle X,Y\right\rangle\)
   \(Y^\prime = \left\langle Y,X\right\rangle\)
3. Calculate the utilities \(U(\sharp )\) for \(\sharp = X, X^\prime , X_P\)
4. Choose the option ‘no change’, ‘use prediction’ or ‘cooperate’ for \(X^\prime\) based on the following criteria:
   If \(U(X) > \max (U(X^\prime ), U(X_P))\), set and output \(U(X)\)
   If \(U(X_P) > \max (U(X), U(X^\prime ))\), set and output \(U(X_P)\)
   If \(U(X^\prime ) > \max (U(X), U(X_P))\), set and output \(U(X^\prime )\)
5. If a time limit has been set, force a move towards the target
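Eq. 17 is not reproduced in this extract, so the following sketch assembles a utility with the shape described in the text: the deviation \(|s - {\hat{S}}_X|\) multiplied by a ratio of exponentials that decays as the remaining time \(\tau\) runs down. The decay rate `lam` and the exact form of the penalty are assumptions.

```python
import math

# Sketch of the SAA utility U(X, s, tau, T, m). The time penalty
# exp(lam * tau) / exp(lam * T) = exp(-lam * (T - tau)) is an ASSUMED form
# of the 'ratio of exponentials' described in the text (Eq. 17 itself is
# not reproduced in this extract); lam is an assumed decay rate.
def utility(state: float, s: float, tau: float, T: float,
            m: float, lam: float = 1.0) -> float:
    deviation = abs(s - state)  # risk measure: distance to the target state
    time_penalty = math.exp(lam * tau) / math.exp(lam * T)
    return m * deviation * time_penalty

# With less time remaining (smaller tau) the saving shrinks,
# so it pays the agents to agree early.
early = utility(state=0.9, s=0.5, tau=8.0, T=10.0, m=1.0)
late = utility(state=0.9, s=0.5, tau=1.0, T=10.0, m=1.0)
print(early > late)  # True
```

Under this form, an agent already at the target (zero deviation) has nothing to gain from further interaction, and the penalty makes procrastination strictly costly, which is the incentive structure the text describes.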

### 4.4 Time to convergence

Convergence of the state of an agent *X* is measured by searching for a sequence of *r* successive state differences, measured at times \(\{t-r+1, \ldots , t-1, t\}\), all of which are less than a small maximum *l*. Equation 18 is the required *r*-term conjunction.

#### 4.4.1 Rate of convergence calculation

Denote the state of the group *G* at time *t* by \(D_t\), with \(D_0\) set to the value of the initial state \({\hat{S}}_{G,n}(0)\). Then, using the same notation as in Sect. 3.3.2, the time taken to traverse a proportion *p* of the distance from the initial state to the target state *s* is given by a solution to Eq. 21, below. The value of *p* could be in the region of 0.01, for example.

### 4.5 Consensus failure: the *counter-adaptive* mode

In the *CA* mode, an attempt is made to reach consensus, but that attempt is thwarted multiple times. The result implies increased risk. The details are in *ALGO CA*. In that algorithm, the first step indicates an attempt at consensus, since the state of the group agent will be intermediate to the states of the two inputs *X* and *Y*. The subsequent steps define agents that have marked biases towards the original inputs, thereby undoing much of the consensus.

This does not imply that an agent cannot influence another agent sufficiently to reverse its view. That can happen, but simulations indicate that such cases are rare.

The Beta parameters for any agent *Z* are denoted by \((a_Z,b_Z)\).

*ALGO CA*: Counter-adaptive interaction mode (Inputs *X*, *Y*; Outputs \(X^\prime , Y^\prime\))

1. Derive the group agent \(G = \{X,Y\}\)
2. Calculate Beta parameters for new agents \({\bar{X}}\) and \({\bar{Y}}\):
   \(a_{{\bar{X}}} = r a_X + (1-r) a_G\)
   \(b_{{\bar{X}}} = r b_X + (1-r) b_G\)
   \(a_{{\bar{Y}}} = r a_Y + (1-r) a_G\)
   \(b_{{\bar{Y}}} = r b_Y + (1-r) b_G\)
3. Replace: \(a_{{\bar{Z}}} \rightarrow 1.02 \; a_{{\bar{Z}}}\) if \({\hat{S}}_X>{\hat{S}}_G\) and \(a_{{\bar{Z}}} \rightarrow 0.98 \; a_{{\bar{Z}}}\) if \({\hat{S}}_X \le {\hat{S}}_G\), first when \(Z = {\bar{X}}\), then when \(Z = {\bar{Y}}\)
4. Output agents \(X^\prime\) and \(Y^\prime\) with Beta parameters \((a_{{\bar{X}}}, b_{{\bar{X}}})\) and \((a_{{\bar{Y}}}, b_{{\bar{Y}}})\) respectively

The penultimate step is a mechanism for avoiding quick agreement and for breaking an agreement if it has been made. The 2% change in the Beta *a*-parameters pulls the output agents away from any consensus point.
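*ALGO CA* can be sketched as follows. The mixing weight *r* and the group-agent parameters \((a_G, b_G)\) are taken as inputs, since the group combination (Eq. 5) is not reproduced in this extract; the 2% nudge of step 3 is applied as described.

```python
# Sketch of ALGO CA. Agents are (a, b) Beta-parameter pairs; g is the
# group agent's pair (its derivation, Eq. 5, is not reproduced here)
# and r is the mixing weight from step 2.
def counter_adaptive(x: tuple[float, float], y: tuple[float, float],
                     g: tuple[float, float], r: float):
    (a_x, b_x), (a_y, b_y), (a_g, b_g) = x, y, g
    s_x = a_x / (a_x + b_x)
    s_g = a_g / (a_g + b_g)
    # Step 2: pull each agent part-way towards the group agent
    a_xb, b_xb = r * a_x + (1 - r) * a_g, r * b_x + (1 - r) * b_g
    a_yb, b_yb = r * a_y + (1 - r) * a_g, r * b_y + (1 - r) * b_g
    # Step 3: the 2% counter-adaptive nudge away from the consensus point
    factor = 1.02 if s_x > s_g else 0.98
    a_xb *= factor
    a_yb *= factor
    return (a_xb, b_xb), (a_yb, b_yb)

x_new, y_new = counter_adaptive((8.0, 2.0), (2.0, 8.0), (5.0, 5.0), r=0.5)
print(x_new[0] / sum(x_new), y_new[0] / sum(y_new))
```

In this example the step-2 mixing pulls both states towards the group state 0.5, while the step-3 nudge keeps each output on its own side of the consensus point, which is the 'thwarted consensus' behaviour the mode is designed to produce.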

## 5 Simulation results

We summarise simulation results for the three adaptive interaction modes and the counter-adaptive mode *CA* of Sect. 4 by showing representative traces of either the group state or the states of individual agents over time. In all cases, there are 10 agents in the group, and an assessment is made of the time taken to converge (using Eq. 21).

In the illustrations that follow, the horizontal axes are labelled *time*. In this context, *time* should be interpreted as *number of interactions*. At each agent–agent interaction, the state of some part of the system changes, but the elapsed time between those interactions is variable. Therefore, although the axes are graduated linearly, those graduations represent nonlinear absolute time intervals.

### 5.1 Passive adaptive convergence results

### 5.2 Weakly active adaptive convergence results


### 5.3 Strongly active adaptive convergence results

In the *SAA* mode, traces of the group state replace traces for individual agents because it is more instructive to see convergence to different limits. Figure 3 shows three sets of paths. Each shows three independent simulations. The blue and red sets show convergence of extreme states to a consensus state 0.5 (representing medium risk). The green sets show much faster convergence to the same value for agents that are already near the consensus point. Compared to the *WAA* mode, convergence times in the range 70-100 for the extreme initial states are not excessive. Agents in these groups have an incentive to agree, as measured by their utility functions. Only one agent in Fig. 3, indicated by the arrow, has not converged by time 100.

### 5.4 *Counter-adaptive results*

The *CA* mode was discussed in Sect. 4.5. The aim was to make only a limited effort to achieve consensus. The three traces in Fig. 5 present a different view to preceding illustrations. Each one represents a pair of protagonists and shows the **difference** in their states after each interaction. Exact consensus is therefore indicated by a zero difference, but the difference is not absolute, and can be negative. Initially, their states are far apart, and approach zero despite the *counter agreement* mechanism. The interpretation is that some progress is made in trying to reach a compromise, but not enough. The red trace, for example, never reaches the zero line within the 250 interactions shown. The other two just about get there but then diverge away from the zero line. The blue trace does this several times.

The *CA* mode models cases of prolonged high risk or conflict.

During the course of the simulation there is some move to consensus. The difference between the states of the two agents narrows, as shown by a downward drift of the traces. That type of trace is typical: it occurs in approximately 98% of runs of this simulation. In those cases, an agent does not succeed in reversing the view of its ‘opponent’, and the simulation trace remains positive. However, with sufficient persuasion (and perhaps some coercion in practice!) an agent can influence another agent sufficiently well that their states are reversed. The influenced agent has become convinced of the opposite viewpoint. That results in a negative difference, and the trace would dip below the ‘State = 0’ axis.

## 6 Discussion

A general comment applies to all results in Sect. 5. *Adaptive* implies convergence, so one impact of any emergent behaviour is predictable: it is convergence. At issue is the speed of convergence, and that depends on the detailed nature of the agent interaction in the model. In *PA* mode, convergence is too rapid: agents simply agree halfway without other considerations. In *WAA* mode, convergence is arguably too slow. Agents have the option to ignore their environment, and then act in a selfish manner. Compromise becomes a secondary issue. The *SAA* mode seeks to find a compromise in which agents can choose whether or not to cooperate using an objective utility function. If they act rationally, they choose a path that minimises risk. The implementation of the *SAA* mode is subject to the weights placed on the available choices by the utility function. A mode-dependent change in the difference between an agent’s current risk state and a target risk state in Eq. 17 would potentially affect the result of the utility function significantly. A further issue is that in real life, agents do not always act rationally. Irrationality in the form of a stochastic term is built into the model, but it is hard to assess whether or not that is a sufficient model of irrationality.

### 6.1 Applications

We now consider some applications of the theoretical models proposed. They concentrate on financial and economic cases, where the concept of risk is very pertinent.

*Brexit* Immediately following the result of the Brexit referendum on 23 June 2016, the pound sterling fell sharply against the euro and other currencies. Subsequently, the GBPEUR exchange rate has reacted to Brexit-related events, and we use it as a measure of financial market sentiment towards the progress of Brexit negotiations. Figure 6 shows a plot of the GBPEUR exchange rate against ‘Day number’, which counts working days only. Day 1 is 1 June 2016, and the referendum day (23 June 2016) is numbered 17. It is indicated on the figure by the left-hand vertical line. The GBPEUR plot is volatile, but the overall trend is linearly downwards (shown by the fitted best-fit line). It corresponds to the *SAA* model (as in Fig. 3), and represents a positive, albeit slow, attempt to progress. Prior to the proposed ‘leaving’ date (29 March 2019, indicated by the middle vertical line), the increase in the GBPEUR rate is a response to an anticipated settlement. The period leading to the impending deadline corresponds to the time-limited *SAA* model. In March 2019, the deadline was extended to 31 October 2019 (the right-hand vertical line), and the GBPEUR rate slumped for months afterwards. As the revised deadline approached, the up-trending GBPEUR rate indicated a second time-limited *SAA* phase. Shortly before the second deadline, a further extension was agreed.

*Global warming* Global warming has been much discussed in recent years, and, with increasing interest from financial regulators, is soon likely to affect financial products [3]. The report by Cook et al. [4] shows illustrations of how consensus on the existence or otherwise of global warming developed between the years 1985 and 2011. The measurement metric used was the annual number of publications of various types that either endorsed the existence of global warming, or rejected it, or expressed no view either way. Figure 7 shows a view of the Cook data in a way that resembles the plots in Fig. 1. The plots should be interpreted as: ‘With a long period to achieve a nominal 25-year target, few papers were published. Nearer the target, the number of publications increases’. Both plots show a gradual drift to consensus (that global warming is a significant problem) that fits the *PA* mode.

*Consumer and business confidence* Confidence in the economy can be measured by two OECD indices: the consumer confidence index (CCI) and the business confidence index (BCI). The CCI is based on the monthly OECD Consumer Confidence Survey of 5000 households. The BCI is also survey-based and provides information on future developments, orders and stocks of finished goods. The month-by-month difference between them indicates whether or not businesses and consumers agree. The combined G7 surveys are a case where they do not. The difference ‘BCI-CCI’ is shown in Fig. 8 and corresponds to the *CA* mode. Volatility about the zero line indicates a high incidence of opposed opinions, in which businesses and consumers ‘swap’ opinions frequently. The illustration in Sect. 4.5 shows more consistent patterns of disagreement with minimal ‘opinion swaps’. The differences shown in Fig. 8 can be tested statistically by calculating the best linear fit (also shown). A *t*-test for the correlation coefficient *r* shows that the disagreement is highly significant (\(r = 0.163, t = 4.05, p < 0.01\)). One has to be careful when drawing conclusions from plots alone in this context. The equivalent plot for the US indices looks very similar, but the equivalent calculation shows that the disagreement is not significant (\(r = 0.044, t = 1.17, p = 0.122\)).
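The significance test quoted above is the standard *t*-test for a correlation coefficient, \(t = r\sqrt{n-2}/\sqrt{1-r^2}\) with \(n-2\) degrees of freedom. A minimal sketch follows; the sample size `n` used in the example call is an assumption, as it is not stated in the text.

```python
import math

def corr_t_stat(r, n):
    """t statistic for testing H0: rho = 0, given sample correlation r
    over n paired observations (degrees of freedom: n - 2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)

# Hypothetical sample size; the text reports r = 0.163, t = 4.05 for the
# G7 'BCI-CCI' series, which is consistent with a series of roughly 600 months.
t_g7 = corr_t_stat(0.163, 602)
```

Comparing the resulting statistic with a Student-*t* critical value at \(n-2\) degrees of freedom gives the quoted *p* values.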

## 7 Conclusion

The ‘adaptive’ property, along with the other complex-system properties (no central control, self-organisation, nonlinearity and emergence), has enabled the modelling of applicable ‘real’ situations by defining only what happens when a pair of agents interact. No other assumptions are made. The results of the simulations are broadly in line with what was expected. The more agents cooperate, the faster they can reach consensus. The three adaptive modes considered, in the order *passive*, then *weakly adaptive* and lastly *strongly adaptive*, indicate an increasingly urgent need to achieve consensus. In most cases, participants eventually realise that they cannot continue to disagree indefinitely. The simulations also show that once consensus has been reached, it is reasonably solid in most cases. There is always some subsequent deviation from the consensus point, but it is unlikely to be significant.

Although particular real situations can be associated with one or more of the models considered, a strict calibration would be difficult because individual interactions would be hard to identify in practice. A possible way forward is to formally link ‘number of interactions’ to ‘elapsed time’, which could be measured.

## Notes

### Compliance with ethical standards

### Conflict of interest

The author declares that he has no conflict of interest.

### Human and animal rights

This article does not contain any studies with human participants or animals performed by the author.

## References

- 1. Bratman ME (1987) Intention, plans, and practical reason. Center for the Study of Language and Information. ISBN 9781575861920 (1999 reprint)
- 2. Brownlee J (2007) Complex adaptive systems. Technical report 070302A. Swinburne Research Bank. https://researchbank.swinburne.edu.au
- 3. Carney M (2019) A new horizon. In: European Commission conference: a global approach to sustainable finance. https://www.bankofengland.co.uk/news/speeches
- 4. Cook J, Nuccitelli D, Green S (2013) Quantifying the consensus on anthropogenic global warming in the scientific literature. Environ Res Lett 8(024024):1–7. https://doi.org/10.1088/1748-9326/8/2/024024
- 5. Farooqui A, Niazi M (2016) Game theory models for communication between agents: a review. Complex Adapt Syst Model. https://doi.org/10.1186/s40294-016-0026
- 6. Chen G, Moiola J (1994) An overview of bifurcation, chaos and nonlinear dynamics in control systems. J Frankl Inst 331(6):819–858
- 7. Holland J (2014) Signals and boundaries: building blocks for complex dynamical systems. MIT Press, Cambridge
- 8. Holland JH (1995) Hidden order: how adaptation builds complexity. MIT Press, Cambridge
- 9. Kiefer NM, Larson CE (2004) Testing simple Markov structures for credit rating transitions. OCC Economics Working Paper 2003-4. https://www.occ.treas.gov/publications/
- 10. Lorenz E (1963) Deterministic nonperiodic flow. J Atmos Sci 20(2):130–142
- 11. Mitic P (2018a) A complexity framework for consensus and conflict. Int J Des Nat Ecodyn 13(3):281–293. https://doi.org/10.2495/DNE-V13-N3-281-293
- 12. Mitic P (2018b) Systemic shock propagation in a complex system. WP 2018-08, Proceedings of DySES Paris. http://www.labex-refi.com/publications/working-papers/labex-refi-working-paper-series-2018/
- 13. Rzevski G, Skobelev P (2014) Managing complexity. WIT Press, Southampton
- 14. Wolfram S (1984) Cellular automata as models of complexity. Nature 311:419–424

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.