1 Introduction

Usually the dynamic properties of dynamical models are analysed by conducting simulation experiments. Sometimes, however, properties can also be found, as a kind of prediction, by mathematical calculation, without performing simulations. Examples of properties that can be explored in such a manner are:

  • whether some values for the variables exist for which no change occurs (stationary points or equilibria), and how such values may depend on the values of the parameters of the model and/or the initial values for the variables

  • whether certain variables in the model converge to some limit value (equilibria) and how this may depend on the values of the parameters of the model and/or the initial values for the variables

  • whether or not certain variables will show monotonically increasing or decreasing values over time (monotonicity)

  • whether situations occur in which no convergence takes place but in the end a specific sequence of values is repeated all the time (limit cycle)

Fig. 1 Conceptual representation of an example model

Mathematical techniques addressing such questions have been developed, starting with Poincaré [12, 13]; see also [3, 9, 11], and [7] for a historical perspective. Properties found in such an analytic mathematical manner can be used for verification of the model, by checking them against the values observed in simulation experiments. If one of these properties is not fulfilled, then there is some error in the implementation of the model. This particular use of mathematical analysis is the focus of this paper. Some methods to analyse such properties of temporal-causal network models will be described and illustrated for some example models, including a Hebbian learning model and a model for dynamic connection strengths in social networks. The properties analysed by the methods discussed cover equilibria, increasing or decreasing trends, and recurring patterns (limit cycles).

To get the idea, first the general setup is discussed in Sect. 2. This is illustrated in Sect. 3 by an analysis of a simple example as discussed in [17], Section 2.4.1, using sum and identity combination functions. In simulations it is observed for this example model that when a constant stimulus level occurs in the world, for each state its activation value increases from 0 to some value that is then kept forever, until the stimulus disappears: an equilibrium state. In subsequent sections three more general examples of this type of analysis for which equilibrium states occur are addressed: for a scaled sum combination function (Sect. 4), for Hebbian learning (Sect. 5), and for dynamic networks based on the homophily principle (Sect. 6). In Sect. 7 the analysis is discussed for a case in which no equilibrium state occurs, but instead a limit cycle pattern emerges.

Fig. 2 Simulation example for the model depicted in Fig. 1 using identity and sum combination functions for all states

2 How to verify a temporal-causal network model by mathematical analysis

A stationary point of a state occurs at some point in time if at this time point no change occurs: the graph is horizontal at that point. Stationary points are usually maxima or minima (peaks or dips), but sometimes other stationary points may occur as well. An equilibrium occurs when no change occurs for any of the states. From the difference or differential equations describing the dynamics of a model it can be analysed when stationary points or equilibria occur. Moreover, it can be determined whether a certain state is increasing or decreasing when it is not in a stationary point or equilibrium. First these notions are defined; for example, see [3, 9, 11–13].

Definition

(increase, decrease, stationary point and equilibrium) Let Y be a state.

  • Y has a stationary point at t if \(\mathbf{d}Y(t)/\mathbf{d}t = 0\)

  • Y is increasing at t if \(\mathbf{d}Y(t)/\mathbf{d}t > 0\)

  • Y is decreasing at t if \(\mathbf{d}Y(t)/\mathbf{d}t < 0\)

The model is in equilibrium at t if every state Y of the model has a stationary point at t.
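In a simulated trace these notions can be checked via the sign of the finite-difference approximation of \(\mathbf{d}Y(t)/\mathbf{d}t\). A minimal Python sketch (the function name and tolerance are illustrative assumptions, not part of the model description):

```python
def classify_steps(trace, eps=1e-6):
    """Label each step of a simulated trace of one state Y as
    'increasing', 'decreasing', or 'stationary', based on the sign of
    the finite difference Y(t + dt) - Y(t), an approximation of dY/dt."""
    labels = []
    for y_now, y_next in zip(trace, trace[1:]):
        diff = y_next - y_now
        if abs(diff) < eps:
            labels.append('stationary')
        elif diff > 0:
            labels.append('increasing')
        else:
            labels.append('decreasing')
    return labels
```

Note that, as discussed below, simulated stationary points are only approximately stationary, so the tolerance eps has to be chosen in accordance with the step size.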

To illustrate these notions, consider the example from [17], with conceptual representation depicted here in Fig. 1, and an example simulation shown in Fig. 2.

The systematic transformation from a conceptual representation of a temporal-causal model (as depicted in Fig. 1) into a numerical representation of this temporal-causal model works as follows [17]:

  • At each time point t each state Y in the model has a real number value in the interval [0, 1], denoted by \(Y(t)\)

  • At each time point t each state X connected to state Y has an impact on Y defined as \(\mathbf{impact}_{X,Y}(t) = {\upomega }_{X,Y}X(t)\), where \({\upomega }_{X,Y}\) is the weight of the connection from X to Y

  • The aggregated impact of multiple states \(X_{i}\) on Y at t is determined using a combination function \(\mathbf{c}_{Y}(\ldots )\):

    $$\begin{aligned} \mathbf{aggimpact}_{Y}(t)= & {} \mathbf{c}_{Y}(\mathbf{impact}_{X{_{1}},Y}(t), {\ldots }, \mathbf{impact}_{X{_{k}},Y}(t))\\= & {} \mathbf{c}_{Y}({\upomega }_{X{_{1}},Y}X_{1}(t),{\ldots }, {\upomega }_{X{_{k}},Y}X_{k}(t)) \end{aligned}$$

    where \(X_{i}\) are the states with connections to state Y

  • The effect of \(\mathbf{aggimpact}_{Y}(t)\) on Y is exerted over time gradually, depending on speed factor \({\upeta }_{Y}\):

    $$\begin{aligned} Y(t + \Delta t) = Y(t) + {\upeta }_{Y}[\mathbf{aggimpact}_{Y}(t) - Y(t)] \Delta t \end{aligned}$$

    or

    $$\begin{aligned} \mathbf{d}Y(t)/\mathbf{d}t = {\upeta }_{Y}[\mathbf{aggimpact}_{Y}(t) - Y(t)] \end{aligned}$$
  • Thus, the following difference and differential equation for Y are obtained:

    $$\begin{aligned} Y(t + \Delta t)= & {} Y(t) \\&+\, {\upeta }_{Y } [\mathbf{c}_{Y}({\upomega }_{X{_{1}},Y}X_{1}(t),{\ldots }, {\upomega }_{X{_{k}},Y}X_{k}(t)) - Y(t)] \Delta t\\ \mathbf{d}Y(t)/\mathbf{d}t= & {} {\upeta }_{Y }[\mathbf{c}_{Y}({\upomega }_{X{_{1}},Y}X_{1}(t),{\ldots }, {\upomega }_{X{_{k}},Y}X_{k}(t)) -Y(t)] \end{aligned}$$

For more details, see [17].
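The difference equation above translates directly into an Euler-style simulation loop. The following is a minimal sketch, not the implementation of [17]; the encoding of the network as dictionaries omega, eta, and c is an assumption made for illustration:

```python
def simulate(states, omega, eta, c, initial, dt=0.5, steps=100):
    """Euler simulation of Y(t + dt) = Y(t) + eta_Y [c_Y(...) - Y(t)] dt.

    omega: dict mapping (X, Y) to the weight of the connection from X to Y
    eta:   dict mapping each state to its speed factor
    c:     dict mapping each state to its combination function, applied to
           the list of single impacts omega_{X,Y} * X(t)
    """
    values = dict(initial)
    trace = [dict(values)]
    for _ in range(steps):
        new_values = {}
        for Y in states:
            impacts = [omega[(X, Y)] * values[X]
                       for X in states if (X, Y) in omega]
            if impacts:
                agg = c[Y](impacts)                      # aggimpact_Y(t)
                new_values[Y] = values[Y] + eta[Y] * (agg - values[Y]) * dt
            else:
                new_values[Y] = values[Y]                # e.g. world state ws_s
        values = new_values
        trace.append(dict(values))
    return trace
```

For the model of Fig. 1 with sum combination functions, weight 0.5 on the two connections toward ps\(_{a}\) and weight 1 elsewhere, such a loop reproduces the behaviour of Fig. 2: with a constant stimulus, all states approach the value 1.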

Combination functions used in this simple example are the sum function and the identity function; all connections have weight 1, except the two connections toward ps\(_{a}\), which have weight 0.5.

In Fig. 2 it can be seen that as a result of the stimulus all states are increasing until time point 35, after which they start to decrease as the stimulus disappears. Just before time point 35 all states are almost stationary. If the stimulus is not taken away after this time point, this trend continues and an equilibrium state is approximated. The question then is whether these observations based on one or more simulation experiments are in agreement with a mathematical analysis. If they are in agreement with the mathematical analysis, then this provides some evidence that the implemented model is correct. If they turn out not to be in agreement, then this indicates that probably something is wrong, and further inspection and correction have to be initiated.

From the differential equation for a temporal-causal network model more specific criteria can be derived:

$$\begin{aligned} \mathbf{d}Y(t)/\mathbf{d}t = {\upeta }_{Y }[\mathbf{aggimpact}_{Y}(t) - Y(t)] \end{aligned}$$

with

$$\begin{aligned} \mathbf{aggimpact}_{Y}(t ) = \mathbf{c}_{Y}({\upomega }_{X{_{1}},Y}X_{1}(t),{\ldots }, {\upomega }_{X{_{k}},Y}X_{k}(t)) \end{aligned}$$

and \(X_{1},\ldots , X_{k}\) the states connected toward Y.

For example, it can be concluded that

$$\begin{aligned} \mathbf{d}Y(t)/\mathbf{d}t > 0\Leftrightarrow & {} {\upeta }_{Y }[\mathbf{aggimpact}_{Y}(t) - Y(t)] > 0\\\Leftrightarrow & {} \mathbf{aggimpact}_{Y}(t) > Y(t) \\\Leftrightarrow & {} \mathbf{c}_{Y}({\upomega }_{X{_{1}},Y}X_{1}(t),{\ldots }, {\upomega }_{X{_{k}},Y}X_{k}(t))> Y(t) \end{aligned}$$

In this manner the following criteria can be found.

2.1 Criteria for a temporal-causal network model: increase, decrease, stationary point and equilibrium

Let Y be a state and \(X_{1}, \ldots , X_{k}\) the states connected toward Y. Then the following hold

  • Y is increasing at t if and only if \(\mathbf{c}_{Y}({\upomega }_{X{_{1}},Y}X_{1}(t),{\ldots }, {\upomega }_{X{_{k}},Y}X_{k}(t)) > Y(t)\)

  • Y has a stationary point at t if and only if \(\mathbf{c}_{Y}({\upomega }_{X{_{1}},Y}X_{1}(t),{\ldots }, {\upomega }_{X{_{k}},Y}X_{k}(t)) = Y(t)\)

  • Y is decreasing at t if and only if \(\mathbf{c}_{Y}({\upomega }_{X{_{1}},Y}X_{1}(t),{\ldots }, {\upomega }_{X{_{k}},Y}X_{k}(t)) < Y(t)\)

  • The model is in equilibrium at t if and only if the stationary point criterion holds for every state Y at t

These criteria can be used to verify (the implementation of) the model based on inspection of stationary points or equilibria in the following two different manners. Note that in a given simulation the stationary points that are identified are usually only approximately stationary; how closely they are approximated depends on different aspects, for example on the step size, or on how long the simulation is run.

2.2 Verification by checking the criteria through substitution values from a simulation in the criteria

  1. Generate a simulation

  2. For a number of states Y identify stationary points with their time points t and state values \(Y(t)\)

  3. For each of these stationary points for a state Y at time t identify the values \(X_{1}(t),\ldots ,X_{k}(t)\) at that time of the states \(X_{1},\ldots ,X_{k}\) connected toward Y

  4. Substitute all these values \(Y(t)\) and \(X_{1}(t)\), ..., \(X_{k}(t)\) in the criterion \(\mathbf{c}_{Y}({\upomega }_{X{_{1}},Y}X_{1}(t),\ldots ,{\upomega }_{X{_{k}},Y}X_{k}(t)) = Y(t)\)

  5. If the equation holds (for example, with an accuracy \(<10^{-2}\)), then this test succeeds, otherwise it fails

  6. If this test fails, then it has to be explored where the error can be found
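The substitution test of steps 4–5 amounts to a single numerical comparison. A Python sketch (the function name and default tolerance are illustrative assumptions):

```python
def stationary_criterion_holds(c_Y, weighted_impacts, y_value, tol=1e-2):
    """Steps 4-5: substitute the observed values in the criterion
    c_Y(omega_{X1,Y} X_1(t), ..., omega_{Xk,Y} X_k(t)) = Y(t)
    and test whether it holds up to the accuracy tol."""
    return abs(c_Y(weighted_impacts) - y_value) < tol
```

For a sum combination function, c_Y is simply the built-in sum applied to the list of weighted impacts \({\upomega }_{X{_{i}},Y}X_{i}(t)\) observed at the (approximately) stationary time point.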

This verification method can be illustrated for the example of Figs. 1 and 2 as follows. For example, consider state ps\(_{a}\) with numerical representation

$$\begin{aligned} \mathrm{ps}_{a}(t+\Delta t)= & {} \mathrm{ps}_{a}(t) \\&+\,{\upeta }_{\mathrm{ps}_{a}} [{\upomega }_{\mathrm{responding}} \mathrm{srs}_{s}(t)\\&+\, {\upomega }_{\mathrm{amplifying}} \mathrm{srs}_{e}(t) -\mathrm{ps}_{a}(t) ] \Delta t \end{aligned}$$

The equation expressing that state ps\(_{a}\) is stationary at time t is

$$\begin{aligned} {\upomega }_{\mathrm{responding}} \mathrm{srs}_{s}(t)+{\upomega }_{\mathrm{amplifying}} \mathrm{srs}_{e}(t) = \mathrm{ps}_{a}(t) \end{aligned}$$

At time point \(t = 35\) (where all states are close to stationary) the following values occur: ps\(_{a}\)(35) \(=\) 0.99903, srs\(_{s}\)(35) \(=\) 1.00000 and srs\(_{e}\)(35) \(=\) 0.99863; moreover \({\upomega }_{\mathrm{responding}}={\upomega }_{\mathrm{amplifying}}\) \(=\) 0.5. All these values can be substituted in the above equation:

$$\begin{aligned}&0.5 \times 1.00000 + 0.5 \times 0.99863 = 0.999315 \approx 0.99903 \end{aligned}$$

It turns out that the equation is fulfilled with accuracy \(<10^{-3}\). This gives some evidence that the model as implemented indeed does what it was meant to do. If this is done for all other states, similar outcomes are found, which gives still more evidence. The step size \(\Delta t\) for the simulation here was 0.5, which is not even particularly small. For still more accurate results it is advisable to choose a smaller step size. So, having the equations for stationary points for all states provides a means to verify the implemented model against the model description. The equations for stationary points themselves can easily be obtained from the model description in a systematic manner.

Note that this method works without having to solve the equations, only substitution takes place; therefore, it works for any choice of combination function. Moreover, note that the method also works when there is no equilibrium but the values of the states fluctuate all the time, according to a recurring pattern (a limit cycle). In such cases for each state there are maxima (peaks) and minima (dips) which also are stationary. The method can be applied to such a type of stationary points as well; here it is still more important to choose a small step size as each stationary point occurs at just one time point. In Sect. 7 it will be discussed how the approach can be applied to such limit cycles.
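For fluctuating traces, the peaks and dips that serve as stationary points can be read off from sign changes of the finite difference. A small sketch, under the assumption that the step size is small enough for each extremum to be visible in the trace:

```python
def stationary_indices(trace):
    """Return the indices of local maxima and minima (peaks and dips)
    of a simulated trace of one state: points where the finite
    difference changes sign, i.e. the approximately stationary points."""
    indices = []
    for i in range(1, len(trace) - 1):
        d_prev = trace[i] - trace[i - 1]
        d_next = trace[i + 1] - trace[i]
        if d_prev * d_next < 0:          # rising then falling, or vice versa
            indices.append(i)
    return indices
```

The values at the returned indices (together with the values of the connected states at the same time points) can then be fed into the substitution test described above.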

There is another possible method that is sometimes proposed; it applies to the case of an equilibrium (where all states have a stationary point simultaneously), and is based on first solving the equations for the equilibrium values. This can provide explicit expressions for equilibrium values in terms of the parameters of the model. Such expressions can be used to predict equilibrium values for specific simulations, based on the choice of parameter values. This method provides more than the previous method, but a major drawback is that it cannot be applied in all situations; for example, it cannot be applied when logistic combination functions are used. However, in some cases it still can be useful. The method goes as follows.

2.3 Verification by solving the equilibrium equations and comparing predicted equilibrium values to equilibrium values in a simulation

  1. Consider the equilibrium equations for all states Y:

    $$\begin{aligned} \mathbf{c}_{Y}({\upomega }_{X{_{1}},Y}X_{1}(t),{\ldots }, {\upomega }_{X{_{k}},Y}X_{k}(t))= Y(t) \end{aligned}$$

  2. Leave out the t and denote the values as constants:

    $$\begin{aligned} \mathbf{c}_{Y}({\upomega }_{X{_{1}},Y} {\underline{\mathbf{X}}} _{1}, {\ldots }, {\upomega }_{X{_{k}},Y}\underline{\mathbf{X}} _{k})= {\underline{\mathbf{Y}}} \end{aligned}$$

    An equilibrium is a solution \({\underline{\mathbf{X}}}_{1},{\ldots }, {\underline{\mathbf{X}}}_{n}\) of the following set of n equilibrium equations in the n states \(X_{1},{\ldots }, X_{n}\) of the model:

    $$\begin{aligned} \mathbf{c}_{X1}({\upomega }_{X{_{1}},X{_{1}}} {\underline{\mathbf{X}}} _{1},&\ldots , {\upomega }_{X{_{n}},X{_{1}}} {\underline{\mathbf{X}}} _{n}) = {\underline{\mathbf{X}}} _{1} \\&\ldots \\ \mathbf{c}_{Xn}({\upomega }_{X_{1},X_{n}} {\underline{\mathbf{X}}} _{1},&\ldots , {\upomega }_{X{_{n}},X{_{n}}} {\underline{\mathbf{X}}} _{n}) = {\underline{\mathbf{X}}} _{n} \end{aligned}$$

  3. Solve these equations mathematically in an explicit analytical form: for each state \(X_{i}\) a mathematical formula \({\underline{\mathbf{X}}}_{i} = {\ldots }\) in terms of the parameters of the model (connection weights and parameters in the combination function \(\mathbf{c}_{X{_{i}}}(\ldots )\), such as the steepness \(\sigma \) and threshold \(\tau \) in a logistic sum combination function); more than one solution is possible

  4. Generate a simulation

  5. Identify equilibrium values in this simulation

  6. If for all states Y the predicted value \({\underline{\mathbf{Y}}}\) from a solution of the equilibrium equations equals the value for Y obtained from the simulation (for example, with an accuracy \(<10^{-2}\)), then this test succeeds, otherwise it fails

  7. If this test fails, then it has to be explored where the error can be found

In Sect. 3 it will be illustrated how this method works for the example depicted in Figs. 1 and 2. In general, whether or not the equilibrium equations can be solved in an explicit analytical manner strongly depends on the form of the combination functions \(\mathbf{c}_{Y}({\ldots })\). In a number of specific cases explicit analytical solutions can be found. Three examples of this are addressed in subsequent sections:

  • for a (scaled) sum combination function (Sects. 3 and 4)

  • for Hebbian learning (Sect. 5)

  • for dynamic networks based on the homophily principle (Sect. 6)

However, there are also many cases in which an explicit analytical solution cannot be determined, for example, when logistic combination functions are used. In such cases equilibria can only be determined either by solving the equations by some numerical approximation method, or by observing the behaviour of the model in simulation experiments. In the latter case, however, verification is not possible, as then only simulation results are available. An additional drawback is that in such cases specific values for the parameters of the model have to be chosen, whereas in the case of an explicit analytical solution a more generic expression can be obtained which depends, as a function, on the parameter values. For example, for the cases described in Sects. 3–6 expressions can be found for the equilibrium values in terms of the connection weights (for which no specific values are needed beforehand).
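To illustrate the numerical route for a case without an analytical solution, consider a single state with a self-connection of weight 1 and a logistic combination function. The plain (unnormalized) logistic used below and the choice of fixed-point iteration are illustrative assumptions; any standard root-finding method would do equally well:

```python
import math

def logistic(v, sigma, tau):
    """A simple logistic combination function with steepness sigma and
    threshold tau (illustrative; not the normalized variant of [17])."""
    return 1.0 / (1.0 + math.exp(-sigma * (v - tau)))

def fixed_point(f, x0, tol=1e-10, max_iter=10000):
    """Iterate x := f(x) until the update falls below tol; when the
    iteration converges, this approximates a solution of f(x) = x,
    i.e. an equilibrium value."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

For example, fixed_point(lambda v: logistic(v, 8.0, 0.5), 0.9) returns a value satisfying the equilibrium equation up to numerical accuracy; note that this only yields a number for one specific choice of \(\sigma \) and \(\tau \), not a generic expression in the parameters.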

3 Mathematical analysis for equilibrium states: an example

Are there cases in which the types of behaviour considered above can be predicted without running a simulation? In particular, can equilibrium values be predicted, and how do they depend on the specific values of the parameters of the model (e.g. connection weights, speed factors)? Below, these questions are answered for a relatively simple example. Indeed it will turn out that in this case it is possible to predict the equilibrium values from the connection weights (the equilibrium values turn out to be independent of the speed factors, as long as these are nonzero). As a first step, consider the sensor state ss\(_{s}\).

LP \(_{\mathrm{ss}_{s}}\) Sensing a stimulus: determining values for state ss \(_{s}\)

$$\begin{aligned} \mathbf{d}\mathrm{ss}_{s}(t)/\mathbf{d}t = {\upeta }_{{\mathrm{ss}}_{s}} [{\upomega }_{\mathrm{sensing}} \mathrm{ws}_{s}(t) - \mathrm{ss}_{s}(t)] \end{aligned}$$

Having an equilibrium value means that no change occurs at t: \(\mathbf{d}\mathrm{ss}_{s}(t)/\mathbf{d}t = 0\). As it is assumed that \({\upeta }_{{\mathrm{ss}}_{s}}\) is nonzero, this is equivalent to the following equilibrium equation for state ss\(_{s}\), with \(\underline{\mathbf{ws}}_{s}\) and \(\underline{\mathbf{ss}}_{s}\) the equilibrium values for the two states ws\(_{s}\) and ss\(_{s}\).

$$\begin{aligned} {\upomega }_{\mathrm{sensing}} {\underline{\mathbf{ws}}}_{s} = {\underline{\mathbf{ss}}}_{s} \end{aligned}$$

In a similar manner this can be done for the other states, resulting in the following equations:

$$\begin{aligned} \begin{array}{l@{\quad }l} \mathbf{Equilibrium} &{} \mathbf{Equilibrium}\\ \mathbf{of}\,\mathbf{state} &{} \mathbf{criterion}\\ \mathrm{ss}_{s} &{} {\upomega }_{\mathrm{sensing}} \underline{\mathbf{ws}}_{s} = \underline{\mathbf{ss}}_{s}\\ \mathrm{srs}_{s} &{} {\upomega }_{\mathrm{representing}} \underline{\mathbf{ss}}_{s }= \underline{\mathbf{srs}}_{s}\\ \mathrm{ps}_{a} &{} {\upomega }_{\mathrm{responding}} \underline{\mathbf{srs}}_{s}\!+\!{\upomega }_{\mathrm{amplifying}} \underline{\mathbf{srs}}_{e} = \underline{\mathbf{ps}}_{a}\\ \mathrm{srs}_{e} &{} {\upomega }_{\mathrm{predicting}} \underline{\mathbf{ps}}_{a} = \underline{\mathbf{srs}}_{e}\\ \mathrm{es}_{a} &{} {\upomega }_{\mathrm{executing}} \underline{\mathbf{ps}}_{a} = \underline{\mathbf{es}}_{a}\\ \end{array} \end{aligned}$$

These are five equations with six unknowns \(\underline{\mathbf{ws}}_{s}\), \(\underline{\mathbf{ss}}_{s}\), \(\underline{\mathbf{srs}}_{s}\), \(\underline{\mathbf{ps}}_{a}\), \(\underline{\mathbf{srs}}_{e}\), \(\underline{\mathbf{es}}_{a}\); however, the variable \(\underline{\mathbf{ws}}_{s}\) can be considered given, as it indicates the external stimulus. So the five equations can be used to find expressions for the equilibrium values of the five other states in terms of the connection weights and \(\underline{\mathbf{ws}}_{s}\). Note that for the sake of simplicity it is assumed here that \({\upomega }_{\mathrm{amplifying}}\) and \({\upomega }_{\mathrm{predicting}}\) are not both 1. Then the equations can be solved in an explicit analytical manner as follows. First the first two equations are used to express \(\underline{\mathbf{ss}}_{s}\) and \(\underline{\mathbf{srs}}_{s}\) in the externally given value \(\underline{\mathbf{ws}}_{s}\):

$$\begin{aligned}&\underline{\mathbf{ss}}_{s }={\upomega }_{\mathrm{sensing}} \underline{\mathbf{ws}}_{s}\\&\underline{\mathbf{srs}}_{s}={\upomega }_{\mathrm{representing}} \underline{\mathbf{ss}}_{s }={\upomega }_{\mathrm{representing}} {\upomega }_{\mathrm{sensing} }\underline{\mathbf{ws}}_{s} \end{aligned}$$

Moreover, the third and fourth equation can be solved as follows:

$$\begin{aligned}&{\upomega }_{\mathrm{responding}} \underline{\mathbf{srs}}_{s}+{\upomega }_{\mathrm{amplifying}} \underline{\mathbf{srs}}_{e} = \underline{\mathbf{ps}}_{a}\\&{\upomega }_{\mathrm{predicting}} \underline{\mathbf{ps}}_{a} = \underline{\mathbf{srs}}_{e} \end{aligned}$$

Substitute \({\upomega }_{\mathrm{predicting}}\) \(\underline{\mathbf{ps}}_{a}\) for \(\underline{\mathbf{srs}}_{e}\) in the third equation, resulting in the following equation in \(\underline{\mathbf{ps}}_{a}\) and \(\underline{\mathbf{srs}}_{s}\):

$$\begin{aligned} {\upomega }_{\mathrm{responding}} \underline{\mathbf{srs}}_{s}+{\upomega }_{\mathrm{amplifying}} {\upomega }_{\mathrm{predicting}} \underline{\mathbf{ps}}_{a} = \underline{\mathbf{ps}}_{a} \end{aligned}$$

This can be used to express \(\underline{\mathbf{ps}}_{a}\) in \(\underline{\mathbf{srs}}_{s}\), and subsequently in \(\underline{\mathbf{ws}}_{s}\):

$$\begin{aligned} {\upomega }_{\mathrm{responding}} \underline{\mathbf{srs}}_{s} = (1 - {\upomega }_{\mathrm{amplifying}} {\upomega }_{\mathrm{predicting}}) \underline{\mathbf{ps}}_{a} \end{aligned}$$
$$\begin{aligned} \underline{\mathbf{ps}}_{a}= & {} {\upomega }_{\mathrm{responding}} \underline{\mathbf{srs}}_{s} / (1 - {\upomega }_{\mathrm{amplifying}} {\upomega }_{\mathrm{predicting}})\\= & {} {\upomega }_{\mathrm{responding}} {\upomega }_{\mathrm{representing}} \\&\times \, {\upomega }_{\mathrm{sensing} }\underline{\mathbf{ws}}_{s }/(1 - {\upomega }_{\mathrm{amplifying}} {\upomega }_{\mathrm{predicting}}) \end{aligned}$$

Moreover, by the fourth equation it is found

$$\begin{aligned} \underline{\mathbf{srs}}_{e }= & {} {\upomega }_{\mathrm{predicting}} \underline{\mathbf{ps}}_{a}={\upomega }_{\mathrm{predicting}} {\upomega }_{\mathrm{responding}} {\upomega }_{\mathrm{representing}} \\&\times \; {\upomega }_{\mathrm{sensing}}\underline{\mathbf{ws}}_{s }/(1 - {\upomega }_{\mathrm{amplifying}} {\upomega }_{\mathrm{predicting}}) \end{aligned}$$

Based on these, the fifth equation can be used to get an expression for \(\underline{\mathbf{es}}_{a}\):

$$\begin{aligned} \underline{\mathbf{es}}_{a}= & {} {\upomega }_{\mathrm{executing}} \underline{\mathbf{ps}}_{a}={\upomega }_{\mathrm{executing}} {\upomega }_{\mathrm{responding}} {\upomega }_{\mathrm{representing}} \\&\times \; {\upomega }_{\mathrm{sensing}} \underline{\mathbf{ws}}_{s } / (1 - {\upomega }_{\mathrm{amplifying}} {\upomega }_{\mathrm{predicting}}) \end{aligned}$$

Summarizing, all equilibrium values have been expressed in terms of the external state \(\underline{\mathbf{ws}}_{s}\) and the connection weights:

$$\begin{aligned} \underline{\mathbf{ss}}_{s }= & {} {\upomega }_{\mathrm{sensing}} \underline{\mathbf{ws}}_{s} \\ \underline{\mathbf{srs}}_{s}= & {} {\upomega }_{\mathrm{representing}} {\upomega }_{\mathrm{sensing}}\underline{\mathbf{ws}}_{s}\\ \underline{\mathbf{ps}}_{a}= & {} {\upomega }_{\mathrm{responding}} {\upomega }_{\mathrm{representing}} \\&\times \; {\upomega }_{\mathrm{sensing}} \underline{\mathbf{ws}}_{s }{/}(1 - {\upomega }_{\mathrm{amplifying}} {\upomega }_{\mathrm{predicting}})\\ \underline{\mathbf{srs}}_{e }= & {} {\upomega }_{\mathrm{predicting}} {\upomega }_{\mathrm{responding}} {\upomega }_{\mathrm{representing}} \\&\times \; {\upomega }_{\mathrm{sensing}} \underline{\mathbf{ws}}_{s }{/}(1 - {\upomega }_{\mathrm{amplifying}} {\upomega }_{\mathrm{predicting}})\\ \underline{\mathbf{es}}_{a}= & {} {\upomega }_{\mathrm{executing}} {\upomega }_{\mathrm{responding}} {\upomega }_{\mathrm{representing}} \\&\times \; {\upomega }_{\mathrm{sensing} }\underline{\mathbf{ws}}_{s }{/}(1 - {\upomega }_{\mathrm{amplifying}} {\upomega }_{\mathrm{predicting}}) \end{aligned}$$

For example, if the external stimulus \(\underline{\mathbf{ws}}_{s}\) has level 1 this becomes:

$$\begin{aligned} \underline{\mathbf{ss}}_{s }= & {} {\upomega }_{\mathrm{sensing}}\\ \underline{\mathbf{srs}}_{s}= & {} {\upomega }_{\mathrm{representing}} \, {\upomega }_{\mathrm{sensing}}\\ \underline{\mathbf{ps}}_{a}= & {} {\upomega }_{\mathrm{responding}} \, {\upomega }_{\mathrm{representing}} \, {\upomega }_{\mathrm{sensing}}{/}\\&(1 - {\upomega }_{\mathrm{amplifying}} \, {\upomega }_{\mathrm{predicting}})\\ \underline{\mathbf{srs}}_{e }= & {} {\upomega }_{\mathrm{predicting}} \, {\upomega }_{\mathrm{responding}} \, {\upomega }_{\mathrm{representing}} \\&\times \, {\upomega }_{\mathrm{sensing} }{/}(1 - {\upomega }_{\mathrm{amplifying}} \, {\upomega }_{\mathrm{predicting}})\\ \underline{\mathbf{es}}_{a }= & {} {\upomega }_{\mathrm{executing}} \, {\upomega }_{\mathrm{responding}} \, {\upomega }_{\mathrm{representing}} \\&\times \, {\upomega }_{\mathrm{sensing}}{/}(1 - {\upomega }_{\mathrm{amplifying}} \, {\upomega }_{\mathrm{predicting}}) \end{aligned}$$

Moreover, if all connection weights are 1, except that \({\upomega }_{\mathrm{responding}} = 0.5\) and \({\upomega }_{\mathrm{amplifying}} = 0.5\), as in the example simulation shown in [17], Section 2.4.1, the values become:

$$\begin{aligned} \underline{\mathbf{ss}}_{s }= & {} 1\\ \underline{\mathbf{srs}}_{s}= & {} 1\\ \underline{\mathbf{ps}}_{a}= & {} 0.5/0.5 = 1\\ \underline{\mathbf{srs}}_{e }= & {} 0.5/0.5 = 1\\ \underline{\mathbf{es}}_{a }= & {} 0.5/0.5 = 1 \end{aligned}$$

Indeed, in the example simulation in [17], Section 2.4.1, Fig. 11 it can be seen that all values go to 1. The solution of the equilibrium equations in terms of the connection weights can be used to predict that when the connection weights have different values, these equilibrium values will turn out different as well. Recall that the case in which both \({\upomega }_{\mathrm{amplifying}} = 1\) and \({\upomega }_{\mathrm{predicting}} = 1\) was excluded. In that case the combined third and fourth equation becomes trivial, as \(\underline{\mathbf{ps}}_{a}\) drops out of the equation:

$$\begin{aligned}&{\upomega }_{\mathrm{responding}} \underline{\mathbf{srs}}_{s}+{\upomega }_{\mathrm{amplifying}} {\upomega }_{\mathrm{predicting}} \underline{\mathbf{ps}}_{a} = \underline{\mathbf{ps}}_{a} \\&{\upomega }_{\mathrm{responding}} \underline{\mathbf{srs}}_{s} + \underline{\mathbf{ps}}_{a} = \underline{\mathbf{ps}}_{a}\\&{\upomega }_{\mathrm{responding}} \underline{\mathbf{srs}}_{s} = 0\\&\underline{\mathbf{srs}}_{s} = 0 \end{aligned}$$

Here in the last step it is assumed that \({\upomega }_{\mathrm{responding}} > 0\). As a consequence, by the first two equations also \(\underline{\mathbf{ss}}_{s}\) and \(\underline{\mathbf{ws}}_{s}\) are 0, and by the fourth and fifth equation so are the values for the other states. It turns out that in this case there can only be an equilibrium if there is no stimulus at all. As soon as there is a nonzero stimulus in this case with \({\upomega }_{\mathrm{amplifying}} = 1\) and \({\upomega }_{\mathrm{predicting}} = 1\), the values of ps\(_{a}\), srs\(_{e}\) and es\(_{a}\) keep increasing indefinitely (and in particular do not stay within the interval [0, 1]), as can be seen from simulations. Note that the additional assumption \({\upomega }_{\mathrm{responding}} > 0\) was made. If, in contrast, \({\upomega }_{\mathrm{responding}} = 0\), then still more possibilities for equilibria are available. For example, in that case \(\underline{\mathbf{ps}}_{a}\) and \(\underline{\mathbf{srs}}_{e}\) can have any value, as long as they are equal due to the fourth equation; this value is independent of the values of \(\underline{\mathbf{ws}}_{s}\), \(\underline{\mathbf{ss}}_{s}\) and \(\underline{\mathbf{srs}}_{s}\), as there is no nonzero connection between these parts of the graph. So, this would not be a very relevant case.
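As a quick numerical check of the closed-form solution above (under the assumption \({\upomega }_{\mathrm{amplifying}}\,{\upomega }_{\mathrm{predicting}} \ne 1\)), the expressions can be evaluated directly. A Python sketch, with illustrative parameter names:

```python
def equilibrium_values(ws, sensing, representing, responding,
                       amplifying, predicting, executing):
    """Evaluate the explicit equilibrium expressions derived above
    (assumes amplifying * predicting != 1)."""
    ss = sensing * ws
    srs_s = representing * ss
    ps = responding * srs_s / (1.0 - amplifying * predicting)
    srs_e = predicting * ps
    es = executing * ps
    return {'ss': ss, 'srs_s': srs_s, 'ps': ps, 'srs_e': srs_e, 'es': es}
```

With the example weights (all 1 except \({\upomega }_{\mathrm{responding}} = {\upomega }_{\mathrm{amplifying}} = 0.5\)) and stimulus level 1, every state gets equilibrium value 1, matching the simulation outcome reported above.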

The analysis above can also be done to find out whether or not the activation level of a state is increasing. As a first step, again consider the sensor state ss\(_{s}\).

LP \(_{\mathrm{ss}_{s}}\) Sensing a stimulus: determining values for state ss \(_{s}\)

$$\begin{aligned}&{} \mathbf{d}\mathrm{ss}_{s}(t)/\mathbf{d}t = {\upeta }_{\mathrm{ss}_{s}} [{\upomega }_{\mathrm{sensing}} \mathrm{ws}_{s}(t) - \mathrm{ss}_{s}(t) ]\\&\mathrm{ss}_{s}(t+\Delta t) = \mathrm{ss}_{s}(t)+{\upeta }_{\mathrm{ss}_{s}} [{\upomega }_{\mathrm{sensing}} \mathrm{ws}_{s}(t) - \mathrm{ss}_{s}(t) ] \Delta t \end{aligned}$$

That the activation value increases means

$$\begin{aligned} \mathbf{d}\mathrm{ss}_{s}(t)/\mathbf{d}t > 0 \quad \mathrm{or} \quad \mathrm{ss}_{s}(t+\Delta t) > \mathrm{ss}_{s}(t) \end{aligned}$$

This is equivalent to:

$$\begin{aligned} {\upomega }_{\mathrm{sensing}} \mathrm{ws}_{s}(t) - \mathrm{ss}_{s}(t) > 0 \end{aligned}$$

This in turn is equivalent to the criterion that the impact on ss\(_{s}\) is higher than the current activation value:

$$\begin{aligned} {\upomega }_{\mathrm{sensing}} \mathrm{ws}_{s}(t) > \mathrm{ss}_{s}(t) \end{aligned}$$

For example, when ws\(_{s}(t) = 1\) and \({\upomega }_{\mathrm{sensing}} = 1\), the criterion \({\upomega }_{\mathrm{sensing}}\) ws\(_{s}(t) >\) ss\(_{s}(t)\) indicates that the activation of state ss\(_{s}\) will increase as long as it has not yet reached the value 1. This gives the additional information that the equilibrium value 1 of sensor state ss\(_{s}\) is attracting: the value moves in that direction as long as it has not been reached.

In a similar manner this can be done for the other states, thus obtaining the following criteria:

$$\begin{aligned} \begin{array}{ll} \mathbf{State} &{} \quad \mathbf{is}\,\mathbf{increasing}\,\mathbf{if}\,\mathbf{and}\,\mathbf{only}\,\mathbf{if} \\ \mathrm{ss}_{s} &{} \quad {\upomega }_{\mathrm{sensing}} \mathrm{ws}_{s}(t)> \mathrm{ss}_{s}(t)\\ \mathrm{srs}_{s} &{} \quad {\upomega }_{\mathrm{representing}} \mathrm{ss}_{s}(t) > \mathrm{srs}_{s}(t)\\ \mathrm{ps}_{a} &{} \quad {\upomega }_{\mathrm{responding}} \mathrm{srs}_{s}(t)+{\upomega }_{\mathrm{amplifying}} \mathrm{srs}_{e}(t) > \mathrm{ps}_{a}(t)\\ \mathrm{srs}_{e} &{} \quad {\upomega }_{\mathrm{predicting}} \mathrm{ps}_{a}(t)> \mathrm{srs}_{e}(t)\\ \mathrm{es}_{a} &{} \quad {\upomega }_{\mathrm{executing}} \mathrm{ps}_{a}(t)> \mathrm{es}_{a}(t)\\ \end{array} \end{aligned}$$

4 Mathematical analysis for equilibrium states: scaled sum combination function

The approach described above can easily be applied for the case of a scaled sum combination function \(\mathbf{c}_{i}(\ldots )\) for each state \(X_{i}\); such a scaled sum function \(\mathbf{ssum}_{\lambda _{i}}(\ldots )\) with scaling factor \(\lambda _{i}\) is defined as

$$\begin{aligned} \mathbf{ssum}_{\lambda {_{i}}}(V_{1}, \ldots , V_{k}) = (V_{1 }+ \cdots + V_{k})/\lambda _{i} \end{aligned}$$

Suppose the differential equation for some state \(X_{i}\) connected to states \(X_{j}\) is given by

$$\begin{aligned} \mathbf{d}X_{i }/\mathbf{d}t={\upeta }_{i} [{\mathbf{aggimpact}}_{i}(X_{1}, \ldots , X_{k}) - X_{i} ] \end{aligned}$$

where

$$\begin{aligned} {\mathbf{aggimpact}}_{i}(X_{1}, \ldots , X_{k})= & {} \mathrm{c}_{i}({\upomega }_{1,i}X_{1}, \ldots , {\upomega }_{k,i} X_{k}) \\= & {} \mathbf{ssum}_{\lambda _{i}}({\upomega }_{1,i}X_{1}, \ldots , {\upomega }_{k,i} X_{k}) \\= & {} ({\upomega }_{1,i} X_{1}+ \cdots + {\upomega }_{k,i} X_{k})/\lambda _{i} \end{aligned}$$

with \({\upomega }_{j,i}\) the specific weights for the connections from \(X_{j}\) to \(X_{i}\). In this case the following holds:

$$\begin{aligned}&\mathrm{Increasing} \, X_{i}: X_{i}(t +\Delta t)> X_{i}(t) \\&\quad \Leftrightarrow ({\upomega }_{1,i} X_{1} (t)+ \cdots + {\upomega }_{k,i} X_{k}(t))/\lambda _{i }> X_{i}(t) \end{aligned}$$
$$\begin{aligned}&\mathrm{Equilibrium\, of} \, X_{i}: X_{i}(t +\Delta t)= X_{i}(t) \\&\quad \Leftrightarrow ({\upomega }_{1,i} X_{1}(t)+ \cdots + {\upomega }_{k,i} X_{k}(t))/\lambda _{i }=X_{i}(t) \end{aligned}$$
$$\begin{aligned}&\mathrm{Decreasing} \, X_{i}: X_{i}(t +\Delta t) < X_{i}(t) \\&\quad \Leftrightarrow ({\upomega }_{1,i} X_{1}(t)+ \cdots + {\upomega }_{k,i} X_{k}(t))/\lambda _{i }< X_{i}(t) \end{aligned}$$

In particular, the equilibrium equations for the states \(X_{i}\) are

$$\begin{aligned} ({\upomega }_{1,1} {\underline{\mathbf{X}}} _{\mathbf{1}}+&\cdots + {\upomega }_{k,1} {\underline{\mathbf{X}}} _{\mathbf{k}})/\lambda _{1}= {\underline{\mathbf{X}}} _{\mathbf{1}}\\&\cdots \\ ({\upomega }_{1,k} {\underline{\mathbf{X}}} _{\mathbf{1}}+&\cdots + {\upomega }_{k,k} {\underline{\mathbf{X}}} _{\mathbf{k}})/\lambda _{k }= {\underline{\mathbf{X}}} _{\mathbf{k}} \end{aligned}$$

This means that in an equilibrium state the value \({\underline{\mathbf{X}}}_{i}\) for a state \(X_{i}\) may be a weighted average of the equilibrium values \({\underline{\mathbf{X}}}_{j}\) for the states \(X_{j}\), in particular when

$$\begin{aligned} \lambda _{i}={\upomega }_{1,i} + \cdots + {\upomega }_{k,i} \end{aligned}$$

Note that at least one solution always exists: the zero solution in which all values \({\underline{\mathbf{X}}}_{i}\) are 0. But it is usually more interesting to know whether nonzero solutions exist.

The equilibrium equations are equivalent to

$$\begin{aligned} {\upomega }_{1,1} {\underline{\mathbf{X}}} _{\mathbf{1}}+&\cdots + {\upomega }_{k,1} {\underline{\mathbf{X}}} _{\mathbf{k}}=\lambda _{1} {\underline{\mathbf{X}}} _{\mathbf{1}}\\&\cdots \\ {\upomega }_{1,k} {\underline{\mathbf{X}}} _{\mathbf{1}}+&\cdots + {\upomega }_{k,k} {\underline{\mathbf{X}}} _{\mathbf{k}}=\lambda _{k } {\underline{\mathbf{X}}} _{\mathbf{k}} \end{aligned}$$

or

$$\begin{aligned}&({\upomega }_{1,1} - \lambda _{1}) {\underline{\mathbf{X}}} _{\mathbf{1}}+{\upomega }_{2,1} {\underline{\mathbf{X}}} _{\mathbf{2} }+ \cdots + {\upomega }_{k,1} {\underline{\mathbf{X}}} _{\mathbf{k}} = 0\\&\qquad \qquad \qquad .....\\&{\upomega }_{1,i} {\underline{\mathbf{X}}} _{\mathbf{1}}+ \cdots + {\upomega }_{i-1,i} {\underline{\mathbf{X}}} _{\varvec{i-1}}+ ({\upomega }_{i,i} - \lambda _{i}) {\underline{\mathbf{X}}} _{\mathbf{i}} \\&\quad +\,{\upomega }_{i+1,i} {\underline{\mathbf{X}}} _{\varvec{i+1} }+ \cdots + {\upomega }_{k,i} {\underline{\mathbf{X}}} _{\mathbf{k}} = 0\\&\qquad \qquad \qquad .....\\&{\upomega }_{1,k} {\underline{\mathbf{X}}} _{\mathbf{1} }+ \cdots + {\upomega }_{k-1,k} {\underline{\mathbf{X}}} _{\mathbf{k-1}} + ({\upomega }_{k,k} - \lambda _{k}) {\underline{\mathbf{X}}} _{\mathbf{k}} = 0 \end{aligned}$$

In general these linear equilibrium equations can be solved analytically, which in principle provides symbolic expressions for the equilibrium values of \(X_{j}\) in terms of the connection weights \({\upomega }_{j,i}\) and the scaling factors \(\lambda _{i}\). However, for more than two states (\(k > 2\)) such expressions tend to become increasingly complex; how complex depends on how many of the \({\upomega }_{j,i}\) are nonzero, i.e. on how many connections exist between the states. For example, if every state has only one incoming and one outgoing connection (a cascade or loop), the equations can easily be solved. In some cases no nonzero solution exists. This happens, for example, when the parameter values make two of the equations contradict each other, as in \(X_{1} - 2X_{2} = 0\) and \(X_{1} - 3X_{2} = 0\), which together only admit \(X_{1} = X_{2} = 0\).
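As an illustration, the linear equilibrium equations can also be solved numerically by computing the nullspace of the coefficient matrix. The following sketch does this for a small hypothetical 3-state network; the weight matrix `W` and the choice of scaling factors are illustrative, not taken from a specific model in this paper.

```python
import numpy as np

# Hypothetical 3-state network: W[j, i] = omega_{j,i}, the weight of the
# connection from X_j to X_i; scaling factors lam_i = sum of incoming weights.
W = np.array([[0.0, 0.8, 0.2],
              [0.5, 0.0, 0.8],
              [0.5, 0.2, 0.0]])
lam = W.sum(axis=0)

# The equilibrium equations read (W^T - diag(lam)) X = 0;
# the solutions form the nullspace of A, found here via SVD.
A = W.T - np.diag(lam)
_, s, vt = np.linalg.svd(A)
null_vecs = vt[s < 1e-10]   # right singular vectors with (near-)zero singular value

# With lam_i equal to the sum of incoming weights, the all-equal vector
# (1, 1, 1) solves the equations, in line with the analysis below.
print(null_vecs)
```

Here the nullspace is one-dimensional and spanned by the all-equal vector, so every equilibrium has all states at one common value.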

In some cases properties of the equilibrium values can be derived. For strongly connected temporal-causal network models based on scaled sum functions, with as scaling factor the sum of the weights of the incoming connections, it can be derived that all states have the same equilibrium value.

Definition 1

A network is called strongly connected if for every two nodes A and B there is a directed path from A to B and vice versa.

Lemma 1

Let a temporal-causal network model be given based on scaled sum functions:

$$\begin{aligned} \mathbf{d}Y/\mathbf{d}t={\upeta }_{Y} [\Sigma _{X,{\upomega }_{X,Y} >0} {\upomega }_{X,Y }X /\Sigma _{ X,{\upomega }_{X,Y} >0} {\upomega }_{X,Y} - Y] \end{aligned}$$

Then the following hold.

  1. (a)

    If for some state Y at time t for all states X connected toward Y it holds \(X(t)\) \(\ge Y(t),\) then \(Y(t)\) is increasing at \(t: \mathbf{d} Y(t)/\mathbf{d}t \ge 0\); if for all states X connected toward Y it holds \( X(t) \le Y(t),\) then \(Y(t)\) is decreasing at \(t: \mathbf{d}Y(t)/\mathbf{d}t \le 0\).

  2. (b)

    If for some state Y at time t for all states X connected toward Y it holds \(X(t)\) \(\ge Y(t),\) and at least one state X connected toward Y exists with \(X(t) > Y(t)\) then \(Y(t)\) is strictly increasing at \(t{:}\,\mathbf{d}Y(t)/\mathbf{d}t > 0\). If for some state Y at time t for all states X connected toward Y it holds \( X(t) \le Y(t),\) and at least one state X connected toward Y exists with \(X(t) < Y(t)\) then \(Y(t)\) is strictly decreasing at \(t:\mathbf{d}Y(t)/\mathbf{d}t < 0\).

Proof of Lemma 1

  1. (a)

    From the differential equation for \(Y(t)\)

    $$\begin{aligned} \mathbf{d}Y/\mathbf{d}t= & {} {\upeta }_{Y} [\Sigma _{X,{\upomega }_{X,Y} >0} {\upomega }_{X,Y }X / \Sigma _{ X,{\upomega }_{X,Y} >0} {\upomega }_{X,Y} - Y]\\= & {} {\upeta }_{Y} [\Sigma _{X,{\upomega }_{X,Y} >0} {\upomega }_{X,Y }X \\&- \; \Sigma _{ X,{\upomega }_{X,Y} >0} {\upomega }_{X,Y} Y] /\Sigma _{ X,{\upomega }_{X,Y} >0} {\upomega }_{X,Y}\\= & {} {\upeta }_{Y} [\Sigma _{X,{\upomega }_{X,Y} >0} {\upomega }_{X,Y}(X -Y) ] / \Sigma _{ X,{\upomega }_{X,Y} >0} {\upomega }_{X,Y} \end{aligned}$$

    it follows, using that \(X(t) \ge Y(t)\) for all states X connected toward Y (so each term \({\upomega }_{X,Y}(X - Y) \ge 0\)), that \(\mathbf{d}Y(t)/\mathbf{d}t \ge 0\), so \(Y(t)\) is increasing at t. The decreasing case is analogous.

  2. (b)

    In this case at least one term \({\upomega }_{X,Y}(X - Y)\) is strictly positive, so it follows that \(\mathbf{d}Y(t)/\mathbf{d}t > 0\) and \(Y(t)\) is strictly increasing. The decreasing case is analogous.\(\square \)

Theorem 1

(convergence to one value) Let a strongly connected temporal-causal network model be given based on scaled sum functions:

$$\begin{aligned} \mathbf{d}Y/\mathbf{d}t={\upeta }_{Y} [\Sigma _{X,{\upomega }_{X,Y} >0} {\upomega }_{X,Y }X /\Sigma _{ X,{\upomega }_{X,Y} >0} {\upomega }_{X,Y} - Y] \end{aligned}$$

Then for all states X and Y the equilibrium values \({\underline{\mathbf{X}}}\) and \({\underline{\mathbf{Y}}}\) are equal: \({\underline{\mathbf{X}}} = {\underline{\mathbf{Y}}}\). Moreover, this equilibrium state is attracting.

Proof of Theorem 1

Take a state Y with highest value \({\underline{\mathbf{Y}}}\). Then for all states X it holds \({\underline{\mathbf{X}}} \le {\underline{\mathbf{Y}}}\). Suppose for some state X connected toward Y it holds \({\underline{\mathbf{X}}} < {\underline{\mathbf{Y}}}\). Take a time point t and assume \(Z(t) = \underline{\mathbf{Z}}\) for all states Z. Now apply Lemma 1(b) to state Y. It follows that \(\mathbf{d}Y(t)/\mathbf{d}t < 0\), so \(Y(t)\) is not stationary at this value \({\underline{\mathbf{Y}}}\). This contradicts that \({\underline{\mathbf{Y}}}\) is an equilibrium value for state Y. Therefore, the assumption that for some state X connected toward Y it holds \({\underline{\mathbf{X}}} < {\underline{\mathbf{Y}}}\) cannot be true. This shows that \({\underline{\mathbf{X}}} = {\underline{\mathbf{Y}}}\) for all states X connected toward Y. This argument can now be repeated for the states connected toward these states X. By iteration, due to the strong connectivity assumption, every other state in the network is reached; it follows that all states X in the temporal-causal network model have the same equilibrium value \({\underline{\mathbf{X}}} = {\underline{\mathbf{Y}}}\). From Lemma 1(b) it follows that such an equilibrium state is attracting: if the value of any state deviates, it will move toward the equilibrium value. \(\square \)
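A quick numerical check of Theorem 1 can be obtained by Euler integration of a small strongly connected scaled sum network; the weights, speed factor, and initial values below are illustrative choices, and the spread between the states shrinks to (near) zero as predicted.

```python
import numpy as np

# Illustrative strongly connected 3-state network; W[j, i] = omega_{j,i}.
W = np.array([[0.0, 0.8, 0.2],
              [0.5, 0.0, 0.8],
              [0.5, 0.2, 0.0]])
lam = W.sum(axis=0)            # scaling factor = sum of incoming weights
eta, dt = 0.5, 0.1
X = np.array([0.1, 0.9, 0.4])  # arbitrary distinct initial values

# Euler integration of dY/dt = eta [ sum_j omega_{j,Y} X_j / lam_Y - Y ]
for _ in range(2000):
    aggimpact = (W.T @ X) / lam
    X = X + eta * (aggimpact - X) * dt

print(X.max() - X.min())       # spread between the states: (near) zero
```

The common limit value itself depends on the initial values and the weights; only its existence and uniqueness follow from the theorem.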

5 Mathematical analysis for equilibrium states: Hebbian learning

From the difference or differential equation it can also be analysed when a Hebbian adaptation process (e.g. [2, 4–6, 8, 15, 16]) has an equilibrium and when it increases or decreases. More specifically, assume the following dynamic model (also see [5]) for Hebbian learning of the strength \({\upomega }\) of a connection from a state \(X_{1}\) to a state \(X_{2}\), with maximal connection strength 1, learning rate \({\upeta }> 0\), and extinction rate \(\zeta \ge 0\) (here \(X_{1}(t)\) and \(X_{2}(t)\) denote the activation levels of the states \(X_{1}\) and \(X_{2}\) at time t; sometimes the t is left out and simply \(X_{i}\) is written)

$$\begin{aligned}&{\upomega }(t +\Delta t) = {\upomega }(t) + [{\upeta }X_{1}(t)X_{2}(t) (1-{\upomega }(t))-\zeta {\upomega }(t)] \Delta t \\&{} \mathbf{d}{\upomega }(t)/\mathbf{d}t = {\upeta }X_{1} X_{2} (1-{\upomega }(t))-\zeta {\upomega }(t) \end{aligned}$$

Note that also for the states \(X_{1}\) and \(X_{2}\) equations may be given, but here the focus is on \({\upomega }\).

From the expressions for \({\upomega }\) it can be analysed when each of the following cases occurs:

$$\begin{aligned} \mathrm{Increasing} \; {\upomega }:&\mathbf{d}{\upomega }(t)/\mathbf{d}t > 0 \\&\Leftrightarrow {\upeta }X_{1} X_{2} (1-{\upomega }(t))-\zeta {\upomega }(t) > 0 \end{aligned}$$
$$\begin{aligned} \mathrm{Equilibrium\, of} \; {\upomega }:&\mathbf{d}{\upomega }(t)/\mathbf{d}t = 0 \\&\Leftrightarrow {\upeta }X_{1} X_{2} (1-{\upomega }(t))-\zeta {\upomega }(t) = 0 \end{aligned}$$
$$\begin{aligned} \mathrm{Decreasing} \; {\upomega }:&\mathbf{d}{\upomega }(t)/\mathbf{d}t < 0 \\&\Leftrightarrow {\upeta }X_{1} X_{2} (1-{\upomega }(t))-\zeta {\upomega }(t) < 0 \end{aligned}$$

5.1 Analysis of increase, decrease or equilibrium for Hebbian learning without extinction

To keep things simple for a first analysis, consider the special case that there is no extinction (\(\zeta = 0\)). This easily leads to the following criteria

$$\begin{aligned} \mathrm{Increasing} \; {\upomega }:&{\upeta }X_{1} X_{2} (1-{\upomega }(t)) > 0 \\&\Leftrightarrow {\upomega }(t) < 1\, \mathrm{and\, both }\, X_{1} > 0 \quad \mathrm{and} \\&X_{2} > 0 \end{aligned}$$
$$\begin{aligned} \mathrm{Equilibrium\,of} \; {\upomega }:&{\upeta }X_{1} X_{2} (1-{\upomega }(t)) = 0\\&\Leftrightarrow {\upomega }(t) = 1 \quad \mathrm{or} \quad X_{1} = 0 \quad \mathrm{or} \quad X_{2} = 0 \end{aligned}$$
$$\begin{aligned} \mathrm{Decreasing} \; {\upomega }:&{\upeta }X_{1} X_{2} (1-{\upomega }(t)) < 0 \\&\mathrm{this \, is\, never\, the\, case,\, as\, always } \, X_{i }\ge 0 \\&\mathrm{and} \quad {\upomega }(t) \le 1 \end{aligned}$$

So, when there is no extinction, the only equilibrium under ongoing activation is \({\upomega }= 1\): as long as this value has not been reached yet and both \(X_{1} > 0\) and \(X_{2} > 0\), the value of \({\upomega }\) increases, so this equilibrium is attracting. Note that when \(X_{1}= 0\) or \(X_{2}= 0\), \({\upomega }\) is also in equilibrium: no (further) learning takes place, and the value of \({\upomega }\) stays the same regardless of which value it has, so in this case any value is an equilibrium value. In simulations this can indeed be observed: as long as both \(X_{1} >0\) and \(X_{2} >0\) the value of \({\upomega }\) keeps increasing until it reaches 1, but if \(X_{1}= 0\) or \(X_{2}=0\) then \({\upomega }\) always stays the same.
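Both cases are easy to reproduce numerically. The following minimal Euler sketch (the parameter values and activation levels are illustrative) shows \({\upomega }\) climbing toward 1 for constant positive activations, and staying put when an activation is 0.

```python
# Hebbian learning without extinction (zeta = 0); eta, dt and the
# activation levels are illustrative choices.
eta, dt = 0.4, 0.1

# Case 1: both activations positive -> omega increases toward 1
omega = 0.0
for _ in range(500):
    omega += eta * 1.0 * 1.0 * (1 - omega) * dt
print(omega)          # close to 1

# Case 2: one activation is 0 -> omega keeps its current value
omega2 = 0.37
for _ in range(500):
    omega2 += eta * 0.0 * 1.0 * (1 - omega2) * dt
print(omega2)         # unchanged: still 0.37
```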

5.2 Analysis of increase, decrease or equilibrium for Hebbian learning with extinction

As a next step this analysis is extended to the case with extinction, \(\zeta > 0\). This requires slightly more work; for convenience the t is left out of the expressions.

$$\begin{aligned} \begin{array}{lll} \mathbf{Increasing} \; {\upomega }: &{} {\upeta }X_{1} X_{2} (1-{\upomega })-\zeta {\upomega }>0\\ &{} \Leftrightarrow {\upeta }X_{1} X_{2} - {\upeta }X_{1} X_{2} {\upomega }-\zeta {\upomega }> 0\\ &{} \Leftrightarrow {\upeta }X_{1} X_{2} - (\zeta +{\upeta }X_{1} X_{2}) {\upomega }> 0\\ &{} \Leftrightarrow (\zeta +{\upeta }X_{1} X_{2}) {\upomega }< {\upeta }X_{1} X_{2}\\ &{} \Leftrightarrow {\upomega }< \frac{{\upeta }X_{1} X_{2}}{\zeta +{\upeta }X_{1} X_{2}}\\ &{} \Leftrightarrow {\upomega }< \frac{1}{1 +\zeta /({\upeta }X_{1} X_{2} )} \\ &{} (\mathrm{when\,both }\; X_{1} > 0 \quad \mathrm{and} \quad X_{2} > 0)\\ \end{array} \end{aligned}$$

Note that when \(X_{1} = 0\) or \(X_{2} = 0\), the value of \({\upomega }\) is never increasing. Similarly the following criteria can be found.

$$\begin{aligned} \mathbf{Equilibrium \ of} \; {\varvec{{\upomega }}}:&{\upeta }{X_{1}} {X_{2}} (1- {\upomega })-\zeta {\upomega }= 0 \\&\quad \Leftrightarrow {\upomega }= \frac{{\upeta }X_{1} X_{2}}{\zeta +{\upeta }X_{1} X_{2} }\\&\quad \Leftrightarrow {\upomega }= \frac{1}{1 +\zeta /({\upeta }X_{1} X_{2})}\\&\quad (\mathrm{when\, both} \; X_{1} > 0 \quad \mathrm{and} \quad X_{2} > 0)\\&{\upeta }X_{1} X_{2} (1 - {\upomega })-\zeta {\upomega }= 0 \\&\quad \Leftrightarrow {\upomega }= 0\\&\quad (\mathrm{when} \; X_{1} = 0\quad \mathrm{or} \quad X_{2} = 0, \quad \mathrm{and}\\&\quad \zeta > 0) \end{aligned}$$
$$\begin{aligned} \mathbf{Decreasing} \; {\varvec{{\upomega }}}:&{\upeta }X_{1} X_{2} (1 -{\upomega })-\zeta {\upomega }< 0 \\&\quad \Leftrightarrow {\upomega }> \frac{{\upeta }X_{1} X_{2}}{\zeta +{\upeta }X_{1} X_{2} } \\&\quad \Leftrightarrow {\upomega }> \frac{1}{1 +\zeta /({\upeta }X_{1} X_{2} )} \\&\quad (\mathrm{when\,both} \; X_{1} > 0 \quad \mathrm{and} \quad X_{2} > 0) \\&{\upeta }X_{1} X_{2} (1 - {\upomega })-\zeta {\upomega }< 0 \\&\quad \Leftrightarrow \mathrm{always} \\&\quad (\mathrm{when} \; X_{1} = 0 \quad \mathrm{or} \quad X_{2} = 0, \quad \mathrm{and} \\&\quad \zeta > 0, {\upomega }> 0) \end{aligned}$$

In this more general case with extinction, depending on the values of \(X_{1}\) and \(X_{2}\) there may be a positive equilibrium value (when both \(X_{1} > 0\) and \(X_{2} > 0\)) but when \(\zeta > 0\) this value is \(< 1\). Also 0 is an equilibrium value (when \(X_{1} = 0\) or \(X_{2} = 0\)). This looks similar to the case without extinction. Moreover, as before, the value of \({\upomega }\) increases when it is under the positive equilibrium value and it decreases when it is above this value (it is an attracting equilibrium); for example patterns, see Figs. 3 and 4.

Fig. 3
figure 3

Hebbian learning for \({\upeta }= 0.4\), \(\zeta = 0.08\), \(\Delta t = 0.1\), and activation levels \(X_{1}= 1\) and \(X_{2}= 1\); equilibrium value 0.83 (dotted line)

Fig. 4
figure 4

Hebbian learning for \({\upeta }= 0.4\), \(\zeta = 0.08\), \(\Delta t = 0.1\), and activation levels \(X_{1} = 0.6\) and \(X_{2} = 0.6\); equilibrium value 0.64 (dotted line)

Note that this positive equilibrium value (indicated by the dotted line) is lower than 1. It may be close to 1, but when \(\zeta > 0\) it never equals 1. In fact this equilibrium value is maximal when both \(X_{1} = 1\) and \(X_{2} = 1\), in which case it is

$$\begin{aligned} \frac{1}{1 +\zeta /{\upeta }} \end{aligned}$$

For example, for \({\upeta }= 0.4\), \(\zeta = 0.02\), and \(X_{1} = 1\) and \(X_{2} = 1\), the positive equilibrium value for \({\upomega }\) is about 0.95. Another example is \({\upeta }= 0.4\), \(\zeta = 0.08\), and \(X_{1}= 1\) and \(X_{2} = 1\), in which case the equilibrium value is 0.83. The graphs in Fig. 3 show what happens below this equilibrium and above it. If for the same settings for \({\upeta }\) and \(\zeta \) the activation levels are lower (\(X_{1} = 0.6\) and \(X_{2} = 0.6\)), then the equilibrium value is lower too (0.64), and the learning is much slower, as is shown in Fig. 4.

So, it is found that the positive equilibrium value occurs for \(X_{1} > 0\) and \(X_{2} >0\), and in that case this equilibrium is attracting. In contrast, the equilibrium value 0 does not occur for \(X_{1} >0\) and \(X_{2} > 0\), but it does occur for \(X_{1} = 0\) or \(X_{2}= 0\), in which case no positive equilibrium value occurs. In this case pure extinction occurs: \({\upomega }\) is attracted by the equilibrium value 0; this pattern is different from the case without extinction. For an example of such a pure extinction process, see Fig. 5. Note that, given the lower value of the extinction rate \(\zeta \), the extinction process takes a much longer time than the learning process.
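These attraction claims can be checked numerically. The sketch below (an illustration, reusing the parameter values from the examples above) computes the predicted equilibrium \({\upeta }X_{1}X_{2}/(\zeta + {\upeta }X_{1}X_{2})\) and verifies that Euler simulations starting below and above it both converge to it.

```python
eta, zeta, dt = 0.4, 0.08, 0.1

def equilibrium(x1, x2):
    # positive equilibrium value derived in the analysis above
    return eta * x1 * x2 / (zeta + eta * x1 * x2)

def run(omega, x1, x2, steps=5000):
    # Euler integration of d omega/dt = eta X1 X2 (1 - omega) - zeta omega
    for _ in range(steps):
        omega += (eta * x1 * x2 * (1 - omega) - zeta * omega) * dt
    return omega

eq = equilibrium(1.0, 1.0)                 # about 0.83 for these values
print(eq, run(0.1, 1.0, 1.0), run(0.99, 1.0, 1.0))
print(equilibrium(0.6, 0.6))               # about 0.64 for these values
```

Starting from 0.1 (below) and from 0.99 (above), both runs end at the same value, confirming that the positive equilibrium is attracting.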

5.3 How much activation of \(X_{1 }\) and \(X_{2 }\) is needed to let \({\upomega }\) increase?

From a different angle, another question that can be addressed is, for a given value of \({\upomega }\), how high the product \(X_{1} X_{2}\) should be to make \({\upomega }\) increase. This can be determined in a similar manner as follows:

$$\begin{aligned} \mathbf{Increasing} \; {\upomega }:&{\upomega }< \frac{1}{1 +\zeta /({\upeta }X_{1} X_{2})} \\&\; \Leftrightarrow (1 +\zeta /({\upeta }X_{1} X_{2})){\upomega }< 1\\&\; \Leftrightarrow 1 +\zeta /({\upeta }X_{1} X_{2})< 1/{\upomega }\\&\; \Leftrightarrow \zeta /({\upeta }X_{1} X_{2}) < 1/{\upomega }- 1 = (1 - {\upomega })/{\upomega }\\&\; \Leftrightarrow 1/(X_{1} X_{2}) < \frac{{\upeta }}{\zeta }(1 - {\upomega })/{\upomega }\\&\; \Leftrightarrow X_{1} X_{2} > \frac{\zeta }{{\upeta }}{\upomega }/(1 - {\upomega }) \end{aligned}$$

So, for activation levels \(X_{1}\) and \(X_{2 }\) with \(X_{1} X_{2 }> \frac{\zeta }{{\upeta }} {\upomega }/(1 - {\upomega })\), further learning takes place, and below this value extinction dominates and will decrease the level of \({\upomega }\).
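A small numerical check of this threshold (parameter values illustrative, matching the earlier examples): just above the threshold product the derivative of \({\upomega }\) is positive, just below it negative.

```python
eta, zeta = 0.4, 0.08

def d_omega(x1x2, omega):
    # right-hand side of d omega/dt = eta X1 X2 (1 - omega) - zeta omega
    return eta * x1x2 * (1 - omega) - zeta * omega

omega = 0.5
threshold = (zeta / eta) * omega / (1 - omega)   # = 0.2 for these values
print(threshold)
print(d_omega(threshold + 0.01, omega) > 0)   # learning dominates: True
print(d_omega(threshold - 0.01, omega) < 0)   # extinction dominates: True
```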

Fig. 5
figure 5

Pure extinction for \({\upeta }= 0.4\), \(\zeta = 0.08\), \(\Delta t = 0.1\), and activation levels \(X_{1} = X_{2} = 0\); equilibrium value 0

6 Mathematical analysis for equilibrium states: dynamic network connections

The connections between agents in a social network may change over time based on the homophily principle: the closer the states of the interacting agents, the stronger their mutual connections will become. This principle may be formalized by the following general template

$$\begin{aligned} \mathbf{d}{\upomega }_{A,B}/ \mathbf{d}t = {\upeta }_{A,B} [\mathrm{c}_{A,B}(X_{A}, X_{B}{,{\upomega }}_{A,B}) - {\upomega }_{A,B }] \end{aligned}$$

for some combination function c\(_{A,B}(V_{1}, V_{2}, W)\) for which it is assumed that c\(_{A,B}(V_{1}, V_{2}, 0) \ge 0\) and c\(_{A,B}(V_{1}, V_{2}, 1) \le 1\).

The example used in this section is

$$\begin{aligned} \mathrm{c}_{A,B}(V_{1}, V_{2}, W) = W + (\tau _{A,B}^{2} - (V_{1}- V_{2})^{2}) W (1-W) \end{aligned}$$

In this case

$$\begin{aligned} \mathbf{d}{\upomega }_{A,B}/ \mathbf{d}t = {\upeta }_{A,B} (\tau _{A,B}^{2} - (X_{A}-X_{B})^{2}) {\upomega }_{A,B} (1-{\upomega }_{A,B}) \end{aligned}$$

In this section it is analysed which equilibrium values \(\underline{{\varvec{{\upomega }}}}_{A,B}\) can occur for \({\upomega }_{A,B}(t)\) and when \({\upomega }_{A,B}(t)\) is increasing or decreasing.

The standard approach is to derive an inequality or equation from the differential equation by putting \(\mathbf{d}{\upomega }_{A,B}(t)/ \mathbf{d}t = 0\), \(\mathbf{d}{\upomega }_{A,B}(t)/ \mathbf{d}t > 0\) or \(\mathbf{d}{\upomega }_{A,B}(t)/ \mathbf{d}t < 0\). For this case this provides

$$\begin{aligned} \mathrm{Increasing} \; {\upomega }_{A,B}&\quad \mathbf{d}{\upomega }_{A,B}(t)/ \mathbf{d}t> 0 \\&\Leftrightarrow {\upeta }_{A,B} (\tau _{A,B}^{2} - (X_{A}-X_{B})^{2}) \\&\quad \times \; {\upomega }_{A,B} (1-{\upomega }_{A,B}) > 0 \end{aligned}$$
$$\begin{aligned} \mathrm{Equilibrium\,of} \; {\upomega }_{A,B}&\quad \mathbf{d}{\upomega }_{A,B}(t)/ \mathbf{d}t = 0 \\&\Leftrightarrow {\upeta }_{A,B} (\tau _{A,B}^{2} - (X_{A}-X_{B})^{2}) \\&\quad \times \; {\upomega }_{A,B} (1-{\upomega }_{A,B}) =0 \end{aligned}$$
$$\begin{aligned} \mathrm{Decreasing} \; {\upomega }_{A,B}&\quad \mathbf{d}{\upomega }_{A,B}(t)/ \mathbf{d}t< 0 \\&\Leftrightarrow {\upeta }_{A,B} (\tau _{A,B}^{2} - (X_{A}-X_{B})^{2}) \\&\quad \times \; {\upomega }_{A,B} (1-{\upomega }_{A,B}) < 0 \end{aligned}$$

For \({\upomega }_{A,B} = 0\) or \({\upomega }_{A,B} = 1\) the equilibrium condition is fulfilled. This means that \(\underline{\mathbf{{\upomega }}}_{A,B} = 0\) and \(\underline{\mathbf{{\upomega }}}_{A,B} = 1\) are equilibrium values. Now assume \(0< {\upomega }_{A,B} < 1\). Then \({\upomega }_{A,B} (1-{\upomega }_{A,B}) > 0\), so this factor can be divided out, and the same applies to \({\upeta }_{A,B} > 0\); this results in:

$$\begin{aligned} \mathrm{Increasing} \; {\upomega }_{A,B}&\quad \tau _{A,B}^{2} - (X_{A}-X_{B})^{2}> 0 \\&\quad \Leftrightarrow \vert X_{A}-X_{B} \vert < \tau _{A,B} \end{aligned}$$
$$\begin{aligned} \mathrm{Equilibrium\,of} \; {\upomega }_{A,B}&\quad \tau _{A,B}^{2} - (X_{A}-X_{B})^{2} = 0 \\&\quad \Leftrightarrow \vert X_{A}-X_{B} \vert =\tau _{A,B} \end{aligned}$$
$$\begin{aligned} \mathrm{Decreasing} \; {\upomega }_{A,B}&\quad \tau _{A,B}^{2} - (X_{A}-X_{B})^{2} < 0 \\&\quad \Leftrightarrow \vert X_{A}-X_{B} \vert > \tau _{A,B} \end{aligned}$$

This shows that when \(\vert X_{A}-X_{B} \vert < \tau _{A,B}\) the connection keeps becoming stronger until \({\upomega }_{A,B}\) reaches its equilibrium at 1. Similarly, when \(\vert X_{A}-X_{B} \vert > \tau _{A,B}\) the connection keeps becoming weaker until \({\upomega }_{A,B}\) reaches its equilibrium at 0. This implies that the equilibria \(\underline{{{\varvec{{\upomega }}}}}_{A,B} = 0\) and \(\underline{{\varvec{{\upomega }}}}_{A,B} = 1\) can both be attracting, but under different circumstances concerning the values of \(X_{A}\) and \(X_{B}\).

In exceptional situations it could be the case that \(\vert X_{A}-X_{B} \vert =\tau _{A,B}\), in which case \({\upomega }_{A,B}\) is also in equilibrium, whatever value it has. So in principle the equilibrium equation has three types of solutions

$$\begin{aligned}&{\underline{\varvec{{\upomega }}}}_{A,B} =0 \quad \text{ or } \; {\underline{\varvec{{\upomega }}}}_{A,B} =\text{1 } \quad \mathrm{or } \\&\vert {\underline{\mathbf {X}}}_A -{\underline{\mathbf {X}}}_B \vert =\tau _{A,B} \quad \text{ and } \quad {\underline{\varvec{{\upomega }}}}_{A,B} \; \text{ has } \text{ any } \text{ value } \end{aligned}$$
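This case analysis can be illustrated with a small Euler simulation in which the agent states are held constant (all values below are illustrative): when the states are within the tipping distance \(\tau _{A,B}\) the connection grows to 1, otherwise it decays to 0.

```python
eta, tau, dt = 1.0, 0.2, 0.1

def run(xa, xb, omega=0.5, steps=3000):
    # Euler integration of
    # d omega/dt = eta (tau^2 - (xa - xb)^2) omega (1 - omega)
    for _ in range(steps):
        omega += eta * (tau**2 - (xa - xb)**2) * omega * (1 - omega) * dt
    return omega

close = run(0.5, 0.6)    # |XA - XB| = 0.1 < tau: omega -> 1
far = run(0.2, 0.7)      # |XA - XB| = 0.5 > tau: omega -> 0
print(close, far)
```

Note that the factor \({\upomega }(1-{\upomega })\) keeps the trajectory inside [0, 1]: the approach to a boundary slows down and the boundary is never crossed.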

The analysis above can also be done for similar but slightly more complex variants of the model, of which the quadratic variant is described in [14]:

$$\begin{aligned} \text{ c }_{A,B} ( {V_{1} ,V_{2} ,W})= & {} W + \text{ Pos }({\upeta }_{A,B} (\tau _{A,B} -\vert V_{1} -V_{2} \vert ))(1-W) \\&\quad - \;\text{ Pos }(-{\upeta }_{A,B} (\tau _{A,B} -\vert V_{1} -V_{2} \vert ))W \\ \text{ c }_{A,B} ( {V_{1} ,V_{2} ,W})= & {} W + \text{ Pos }({\upeta }_{A,B} (\tau _{A,B}^{2}-(V_{1} -V_{2} )^{2}))(1-W)\\&\quad - \;\text{ Pos }(-{\upeta }_{A,B} (\tau _{A,B}^{2}-(V_{1} -V_{2} )^{2}))W \\ \text{ c }_{A,B} ( {V_{1} ,V_{2} ,W})= & {} W + \text{ Pos }({\upeta }_{A,B} (0.5 - 1/(1 +\text{ e }^{-\sigma _{A,B}(\vert V_{1} -V_{2} \vert -\tau _{A,B})})))(1-W) \\&\quad - \; \text{ Pos }(-{\upeta }_{A,B} (0.5 - 1/(1 +\text{ e }^{-\sigma _{A,B}(\vert V_{1} -V_{2} \vert -\tau _{A,B})})))W \end{aligned}$$

where Pos\((x) = (\vert x\vert +x)/2\), which returns x when x is positive and 0 otherwise. These models make the approach of \({\upomega }\) to the boundaries 0 and 1 of the interval [0, 1] slow, so that \({\upomega }\) does not cross these boundaries, while departing from the neighbourhood of these boundaries is not slowed down. In [14] an analysis and example simulations can be found using the second, quadratic model. As part of that analysis it is also shown there that different equilibrium values \({\underline{\mathbf{X}}}_{A}\) and \({\underline{\mathbf{X}}}_{B}\) have a distance of at least \(\tau _{A,B}\), which implies that at most \(1/\tau _{A,B}\) clusters can emerge.
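A direct transcription of the quadratic Pos-based variant (with illustrative values for \({\upeta }_{A,B}\) and \(\tau _{A,B}\)) shows how one application of the combination function moves W up or down while keeping it inside [0, 1]:

```python
def pos(x):
    # Pos(x) = (|x| + x)/2: returns x when x is positive, 0 otherwise
    return (abs(x) + x) / 2

def c(v1, v2, w, eta=1.0, tau=0.2):
    # the quadratic Pos-based combination function
    d = eta * (tau**2 - (v1 - v2)**2)
    return w + pos(d) * (1 - w) - pos(-d) * w

up = c(0.5, 0.6, 0.9)    # small difference: W increases, stays below 1
down = c(0.2, 0.7, 0.1)  # large difference: W decreases, stays above 0
print(up, down)
```

Because the upward term carries the factor \((1-W)\) and the downward term the factor \(W\), the update shrinks near the boundaries, which is exactly the slow-approach behaviour described above.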

Fig. 6
figure 6

Simple example model incorporating suppression of sensing

7 Mathematical analysis for behaviour ending up in a limit cycle pattern

Sometimes the values of the states of a model do not end up in an equilibrium value, but instead keep on fluctuating, and after some time they do so according to a repeated pattern, called a limit cycle; for example, see [3, 9, 11–13]. The example model shown in Figs. 1 and 2 can be extended to show such behaviour; see Fig. 6. In this case it is assumed that action a directs the person (e.g. his or her gaze) away from the stimulus s, so that after (full) execution of a, stimulus s is not sensed anymore. This type of behaviour can occur as a form of emotion regulation to down-regulate a stressful emotion triggered by s. The effect is as follows. The presence of stimulus s leads to high activation levels of the sensor state and sensory representation for s, and subsequently of the preparation state and execution state of action a. But then the action leads to its effect in the world, which is suppression of the sensor state for s. As a consequence the sensor state and sensory representation for s, and also the preparation state and execution state of action a, get low activation levels. The effect is that there is no suppression of sensing the stimulus anymore and, therefore, all activation levels become high again; and so this goes on, forever (see also Fig. 7). At a longer timescale this type of pattern may also occur in so-called on-again-off-again relationships. This type of behaviour can be achieved by the following additions to the example model (see Fig. 6):

  • a connection from the execution state es\(_{a}\) of a to the world state ws\(_{e}\) for effect e of action a

  • a connection from this world state ws\(_{e}\) for e to the sensor state ss\(_{s}\) of s

  • a combination function for the sensor state ss\(_{s}\) of s modelling that ws\(_{e}\) suppresses the sensing of s

Fig. 7
figure 7

Example simulation showing a limit cycle

Table 1 Overview of the outcomes of a mathematical analysis for stationary points in a limit cycle

The aggregation used for ss\(_{s}\) is modelled by the following combination function c \(_{\mathrm{ss}_{s}}\)(\(V_{1},V_{2}\)), where \(V_{1 }\) refers to the impact \({\upomega }_{\mathrm{ws}{_{s}},\mathrm{ss}{_{s}}}\) ws\(_{s}(t)\) from ws\(_{s}\) on ss\(_{s}\) and \(V_{2 }\) to the impact \({\upomega }_{\mathrm{ws}{_{e}},\mathrm{ss}{_{s}}}\)ws\(_{e}(t)\) from ws\(_{e}\) on ss\(_{s}\):

$$\begin{aligned} \text{ c }_{\hbox {ss}{_{s}}} ( {V_{1}, V_{2} })=V_{1} ( {1+V_{2}}) \end{aligned}$$

Since the connection weight \({\upomega }_{\mathrm{ws}{_{e}},\mathrm{ss}{_{s}}}\) is chosen negative (it is a suppressing link), for example \(-1\), this function makes the sensing of stimulus s decrease with the extent ws\(_{e}(t)\) of avoidance; e.g. for weight \(-1\), sensing of s becomes 0 when avoidance e is 1, and \(V_{1}\) when avoidance e is 0. According to this combination function the difference and differential equation for ss\(_{s}\) are as follows:

$$\begin{aligned}&\hbox {ss}_{s} ( {t+\Delta t})=\hbox {ss}_{s} ( t)\\&\quad +{\upeta }_{\hbox {ss}_{s}} [{\upomega }_{\hbox {ws}_{s}, \hbox {ss}_{s}} \hbox {ws}_{s} ( t) (1 +{\upomega }_{\hbox {ws}_{e}, \hbox {ss}_{s}} \hbox {ws}_{e} ( t))-\hbox {ss}_{s}(t)]\Delta t \end{aligned}$$
$$\begin{aligned} \mathrm{\mathbf{d}}\hbox {ss}_{s} /\mathrm{\mathbf{d}}t= & {} {\upeta }_{\hbox {ss}_{s}} [{\upomega }_{\hbox {ws}_{s},\hbox {ss}_{s}} \hbox {ws}_{s} ( t)(1 +{\upomega }_{\hbox {ws}_{e},\hbox {ss}_{s}} \hbox {ws}_{e} (t))-\hbox {ss}_{s} ( t)] \end{aligned}$$

The combination function for each state with only one incoming connection is the identity function, except for es\(_{a}\), for which the advanced logistic function \(\mathbf{alogistic}_{\sigma ,\tau }({\ldots })\) is used. The combination function for ps\(_{a}\) is also the advanced logistic function \(\mathbf{alogistic}_{\sigma ,\tau }({\ldots })\).

In Fig. 7 an example simulation with the model depicted in Fig. 6 clearly shows how a limit cycle pattern emerges, with period 18.5.

Here all connection weights are 1, except the weight of the suppressing connection from ws\(_{e}\) to ss\(_{s}\), which is \(-1\). Moreover, the steepness \(\sigma \) and threshold \(\tau \) for ps\(_{a}\) are 4 and 0.9, respectively, and for es\(_{a}\) they are 40 and 0.7. The step size \(\Delta t\) was 0.1; the speed factors \({\upeta }\) for es\(_{a}\) and ws\(_{e}\) were 0.4, and for the other (internal) states \({\upeta }\) was 1.

For this simulation an analysis of the stationary points has been performed for the maxima and minima in the final stage for all states. Recall from Sect. 2 that the equation expressing that a state Y is stationary at time t is

$$\begin{aligned} \mathrm{\mathbf{aggimpact}}_Y(t)=Y( t) \end{aligned}$$

which is equivalent to

$$\begin{aligned} {\mathbf{c}}_Y ({\upomega }_{X{_1},Y} X_{1}(t),\ldots ,{\upomega }_{X{_{k}},Y} X_k(t))=Y( t) \end{aligned}$$

For example, for state ps\(_{a}\) the combination function is the sum function, so the aggregated impact is

$$\begin{aligned} {\mathbf{aggimpact}}_Y (t)={\upomega }_{\mathrm{responding}} \hbox {srs}_s ( t)+{\upomega }_{\mathrm{amplifying}} \hbox {srs}_e ( t) \end{aligned}$$

Then the stationary point equation expressing that state ps\(_{a}\) is stationary at time t is

$$\begin{aligned} {\upomega }_{\mathrm{responding}} \hbox {srs}_s ( t)+{\upomega }_{\mathrm{amplifying}} \hbox {srs}_e (t)=\hbox {ps}_a ( t) \end{aligned}$$

It is this equation that has been checked for the minima and maxima of each of the states in the final stage of the simulation. The results are shown in Table 1. Here, both for the maxima and for the minima, the first row shows the time points at which the stationary points occur. The next row (state value) shows the values of the right-hand side of the above equation, followed by rows (aggregated impact) showing the left-hand sides of this equation, and then a row with the absolute deviation between the values in the two rows above it.

It turns out that the stationary point equations are fulfilled with an average absolute deviation over all states and stationary points of 0.002 and a maximal absolute deviation of 0.006, both of which are \(<10^{-2}\). This provides evidence that the implemented model is correct with respect to the model description. In Table 1 the more specific numbers are shown for the different states. For the maxima the average absolute deviation is 0.00226, and the maximal absolute deviation is 0.00595 (which occurs for state ws\(_{e}\)). For the minima the average absolute deviation is 0.00204, and the maximal absolute deviation is 0.00480 (again for state ws\(_{e}\)). Taking minima and maxima together, the overall average absolute deviation is 0.00215, and the maximal absolute deviation is 0.00595 (for the maxima of state ws\(_{e}\)).
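The verification recipe used here is generic: locate the extrema in a simulated trace and check the stationary point equation \(\mathbf{aggimpact}_Y(t) = Y(t)\) there. The sketch below demonstrates it on a toy state driven by a periodic impact; the driving signal and parameters are invented for illustration and are not the model of Fig. 6.

```python
import numpy as np

# Toy state Y with dY/dt = eta [impact(t) - Y]; at each extremum of Y the
# stationary point equation impact(t) = Y(t) should hold up to
# discretization error.
eta, dt = 1.0, 0.01
steps = 5000
t = np.arange(steps) * dt
impact = 0.5 + 0.4 * np.sin(t)          # stands in for aggimpact_Y(t)

Y = np.zeros(steps)
for i in range(steps - 1):
    Y[i + 1] = Y[i] + eta * (impact[i] - Y[i]) * dt

# locate interior extrema of Y and record |aggimpact - Y| there
devs = [abs(impact[i] - Y[i])
        for i in range(1, steps - 1)
        if (Y[i] - Y[i - 1]) * (Y[i + 1] - Y[i]) < 0]
print(max(devs))   # small: the stationary point equation is satisfied
```

The deviations shrink with \(\Delta t\); a systematically large deviation at the extrema would instead point at an implementation error.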

As another type of example of the emergence of limit cycle behaviour, consider that in a realistic context stimuli can be present for some time, but may also be absent for certain periods, for example, according to day/night rhythms. As an example, for Hebbian learning with activations based on stimuli that return from time to time, an analysis can be made of when there is enough stimulation over time to achieve or maintain a value for the weight \({\upomega }\) of some connection. See the pattern in Fig. 8, where the upper graph shows the levels of both \(X_{1}\) and \(X_{2}\) (alternating between 0 and 1) and the lower graph shows how, due to these activation periods, the periods of learning (\(d_{1}= 5\) time units) and pure extinction (\(d_{0}= 15\) time units) alternate. It turns out that there is a form of convergence, not to one specific value of \({\upomega }\), but to a recurring pattern that repeats itself: a specific case of a limit cycle, in this case induced by environmental fluctuations.
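A minimal Euler sketch of this alternating pattern (using the parameter values of Fig. 8; the exact band of the recurring pattern depends on discretization details such as how the switching is implemented):

```python
# Hebbian learning with periodic stimulation: X1 = X2 = 1 for d1 = 5 time
# units, then 0 for d0 = 15 time units; eta = 0.2, zeta = 0.04 as in Fig. 8.
eta, zeta, dt = 0.2, 0.04, 0.1
steps_on, steps_period = 50, 200        # 5 resp. 20 time units at dt = 0.1

omega, trace = 0.8, []
for i in range(20000):                  # 100 full periods
    x = 1.0 if (i % steps_period) < steps_on else 0.0
    omega += (eta * x * x * (1 - omega) - zeta * omega) * dt
    trace.append(omega)

late = trace[-2000:]                    # the last 10 periods
print(min(late), max(late))             # a recurring band, not one value
# the pattern repeats with the period of the stimulus:
print(abs(trace[-1] - trace[-1 - steps_period]) < 1e-9)
```

The band found here (roughly 0.38 to 0.70) is close to the \({\upomega }_{\mathrm{min}} = 0.39\) and \({\upomega }_{\mathrm{max}} = 0.72\) reported in Fig. 8; small differences can come from how the on/off switching and integration are implemented.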

Fig. 8
figure 8

Limit cycle for \(d_{1} = 5\) (learning), \(d_{0} = 15\) (pure extinction), and \({\upeta }= 0.2\), \(\zeta = 0.04\); equilibrium value 0.83, \({\upomega }_{\mathrm{max}}= 0.72\), \({\upomega }_{\mathrm{min}}= 0.39\) (dotted lines)

8 Discussion

In this paper it was discussed how mathematical analysis can be used to find certain properties of a model. An advantage is that this is done without performing simulations, so it can serve as an additional source of knowledge, independent of a specific implementation of the model. By comparing properties found by mathematical analysis with properties observed in simulation experiments, some form of verification can be done. If a discrepancy is found, for example in the sense that the mathematical analysis predicts a certain property but some simulation does not satisfy this property, this can be a reason to inspect the implementation of the model carefully (and/or to check whether the mathematical analysis is correct). Having such an option can be fruitful during the development process of a model, as acquiring empirical data for validation of a model may be more difficult or may take a longer time.

The techniques used for such mathematical analysis were adopted from [3, 9, 11–13]. In this literature many more techniques can be found than those covered in the current paper, for example, for the convergence speed toward attracting equilibria (e.g. [10]), but also for other types of properties. For example, there is underlying theory that proves the existence of certain patterns, such as the theorems of Poincaré (1881–1882) and Bendixson (1901) stating that under certain circumstances for two-dimensional systems (described by only two differential equations) limit cycles will occur. These are beyond the scope of this paper.

Mathematical analysis is not always easy or feasible. For example, linear equilibrium equations (obtained, for example, when using scaled sum combination functions) can in principle be solved analytically in a generic form, thereby obtaining expressions for the equilibrium values in terms of the parameters of the model, but equilibrium equations involving logistic functions cannot be solved in such a manner. Nevertheless, for such cases specific instances can often be solved. Moreover, as discussed in Sect. 2, verification of a model does not depend on finding explicit analytical solutions of the equilibrium equations. For verification it is already sufficient if the equilibrium equations have been identified, which is always possible from the difference or differential equations. Then for each simulation trace the observed equilibrium values can be substituted in these equations, thus checking whether they satisfy them. Therefore, in general, mathematical analysis can still add value in addition to systematic simulation experiments. However, a limitation is that although verification is always possible, prediction is not. For prediction without any simulation, explicit analytical solutions of the equilibrium equations need to be found, and in many realistic models this is not feasible.