Introduction

Developing computer systems that benefit from complex adaptive systems concepts is a significant and challenging task in computer science and engineering. Such a system allows its elements to interact, react, adapt and even evolve autonomously (Kharchenko et al. 2017). Implementing these capabilities correctly would bring closer the dream of making computers more independent and intelligent than humans. However, achieving this dream is constrained by hardware and software limits as well as safety requirements. For example, robots intended to serve humans must be developed with attention to their ability to analyse the surrounding environment and understand commands, in addition to the safety of the humans being served (Siciliano and Khatib 2019). The first step to implementing a complex adaptive system is modelling it correctly, in order to deal with its complexity and avoid failures. One of the most powerful design tools in computer engineering is formal methods based on theorem proving, owing to the strong assurance they provide of the absence of bugs.

In this paper, we present an approach to correctly develop a formal model of a complex adaptive system while ensuring homogeneity and correctness. In particular, we focus on developing a standard model that serves as a starting point in the process of developing any complex adaptive system. This model includes the essential concepts of complex adaptive systems and can be enriched by adding details depending on the specific requirements of the system.

Formal methods are techniques based on mathematical language used for specifying and verifying systems. Using formal techniques can greatly increase our understanding of a system by uncovering inconsistencies, ambiguities, and deficiencies that may be harder to detect using standard methods. The formal method used here for system-level modelling and analysis is Event-B, which is based on set-theoretic notation. Event-B uses refinement to represent the different abstraction levels of the system while performing proofs to guarantee the consistency of refinements. This method has been used effectively in different systems where no bugs were detected. For instance, the driverless Meteor metro line in Paris worked correctly, and no bugs were identified after the formal proofs nor during functional validation (Boudi et al. 2019). This achievement encouraged Alstom and Siemens Transportation Systems to use this method to develop future metros (Vistbakka and Troubitsyna 2018). In addition, several approaches have been proposed that use Event-B to build correct systems (Wakrime et al. 2018a, b). Therefore, we also use Event-B to develop the concepts presented in this paper.

In many works, researchers have applied formal methods to verify the correctness of complex systems, using different formalisms such as the process algebra CSP (Bartels and Kleine 2011), SOTA (Abeywickrama and Zambonelli 2012), MAPE-K (Iglesia and Weyns 2015), SMARTOS (Giese 2016), and LTL (Sadraddini and Belta 2017). These works showed that applying formal methods greatly improves the security and correctness of complex systems. Unlike these works, which present ready-made formal models, the correct-by-construction approach proposed here provides a systematic way to guide engineers in using a formal method to develop any complex adaptive system. Furthermore, this approach relies on the basics of first-order predicate logic and set theory, which makes it easy and convenient for users from different scientific backgrounds.

In order to illustrate the proposed approach, we present a case study in which an air traffic control system is modelled as a complex adaptive system. The presented approach is a generalization of the one proposed in our previous work (Jarrar and Balouki 2018). The previous work dealt with the application of Event-B to verify and analyze an air traffic control system, whereas this work presents a more general approach that can be applied to any complex adaptive system.

The rest of the paper is structured as follows. Section 2 presents the state of the art and related works, and gives some background on the formal methods and validation tools used in this paper. In Sect. 3, we summarize the proposed approach in five steps and then present the main contribution in terms of a methodology for modelling each concept of a Complex Adaptive System: agent, reaction, interaction, evolution, and adaptation. In Sect. 4, we illustrate the use of our approach through a case study of an air traffic control (ATC) system. Section 5 focuses on the validation and verification of the model generated in the case study to ensure the validity of our approach. Finally, we conclude in Sect. 6.

Background

In this section, we specify the technical terms and concepts related to complex adaptive systems and the formal method Event-B. In addition, we review related work to position our contribution.

Complex adaptive systems

Nowadays, computing systems are becoming more and more complex and adaptive, drawing inspiration from natural complex adaptive systems. Many concepts were inspired by observing systems such as social insect colonies, embryo development and the immune system. Conversely, researchers have attempted to simulate many other natural systems, such as the biosphere and the human brain, as complex adaptive systems. Furthermore, human beings, the stock exchange, the business world and social groups are also considered complex adaptive systems.

Recently, the scientific community has opted for an adaptive approach to the development of computing systems. Integrating this approach provides a set of benefits such as autonomous adaptation and evolution of the system elements, called agents. Adaptation of an agent is based on detecting a problem that occurs due to an environmental change and then adopting a suitable strategy to avoid chaos. Evolution, in contrast, relies on monitoring the state of the system to identify and strengthen the component behaviours that allow the system to evolve, while weakening the behaviours that negatively influence evolution. Figure 1 below gives an overview of the complex adaptive system structure:

Fig. 1: Structure of complex adaptive systems

Formal method: Event-B

To overcome the high risk of failure in complex systems, it is extremely important to perform preliminary modelling based on a theory. The main purpose of such theories is to explain, predict, and understand some characteristics of the system before construction. Theories of this kind are used in different fields of science, such as Maxwell's equations in electrical engineering and the theory of material strength in civil engineering. In computer engineering, formal methods are less widely adopted; as a result, engineers often build computing systems without relying on any theory (Clayton and Radcliffe 2018). These systems often have bugs that may cost millions to fix. Hence, the use of formal methods is highly recommended for specifying and verifying software in order to provide strong guarantees of the absence of bugs.

Event-B is a system-level formal modelling method that provides a set of features such as analysis and a correct-by-construction approach supported by theorem proving (Vistbakka and Troubitsyna 2018; Abrial 2010). A model in Event-B is introduced as a set of successive models made of two parts: the context (the static part) and the machine (the dynamic part). The context includes the definition of carrier sets, constants, and axioms. The machine is made up of variables that describe the state of the system. The system states are constrained by mathematical formulas called invariants. State transitions are performed by events. Events are made up of actions, which are the elementary operations that change variable values. Events may have necessary conditions for triggering, called guards.
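To make these notions concrete, the following is a minimal, purely illustrative Event-B context and machine (it is not one of the models developed later in this paper); the identifiers ctx0, m0, capacity and active are hypothetical:

    CONTEXT ctx0
    SETS
        AGENTS                          // a carrier set
    CONSTANTS
        capacity
    AXIOMS
        axm1: capacity ∈ ℕ1             // a constant constrained by an axiom
    END

    MACHINE m0
    SEES ctx0
    VARIABLES
        active                          // the set of currently active agents
    INVARIANTS
        inv1: active ⊆ AGENTS
        inv2: card(active) ≤ capacity
    EVENTS
        INITIALISATION ≙
            BEGIN
                act1: active ≔ ∅
            END
        Activate ≙                      // a guarded event changing the state
            ANY a WHERE
                grd1: a ∈ AGENTS ∖ active
                grd2: card(active) < capacity
            THEN
                act1: active ≔ active ∪ {a}
            END
    END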

Event-B supports a very practical modelling technique: refinement. Refinement relies on creating an abstract model including the basic elements of the system; after that, more details are added to build successive models that are more and more concrete. This technique makes modelling easier than trying to model the whole system at once: we focus on a limited number of requirements at each step, provided the refinement strategy is chosen carefully. Figure 2 below illustrates this modelling process:

Fig. 2: Process of modelling

In order to guarantee that this process is performed correctly, a set of proofs called proof obligations must be discharged. These proofs ensure the consistency of models and the preservation of invariants, as well as the non-contradiction of refinements. The most important of these is the invariant preservation proof, which guarantees that every event preserves a given invariant. More formally, let I be an invariant, A the set of axioms, c the constants, s the carrier sets, and v and v' respectively the before and after values of the variables. The invariant preservation proof obligation is formalized in (1).

$$A(c, s) \wedge I(v, c, s) \vdash I'(v', c, s)$$
(1)

Almost all proofs are discharged automatically using an Eclipse-based platform called Rodin (2017). For the remaining few, a set of predicates may be added to guide the interactive prover.

Related works

Recent works about complex adaptive systems

Given that the complexity of this kind of system can grow explosively with the number of components, their engineering must be carried out with great care in order to minimize the error rate and avoid the chaos that such errors may generate. The majority of these errors are caused by issues related to the design phase and to the use of standard methods to develop such complex systems.

To overcome these problems, many works have addressed methods to model and study the behaviour of these systems before the construction phase. The most widely used approach to model a complex adaptive system is agent-based modelling. This approach has been defined by Grimm et al. (2005) as a class of computer models for simulating the actions and interactions of autonomous agents (individual or collective entities such as organizations or groups) in order to evaluate their effects on the system as a whole. It combines elements of game theory, complex systems, emergence, computational sociology, multi-agent systems and evolutionary programming. Monte Carlo methods are used to introduce randomness.

This method has been used in many works to simulate the behaviour of different systems, such as retail electrical energy markets with demand response (Dehghanpour et al. 2018), the development of urban crime (Groff et al. 2019), and the emergence of a bio-jet fuel supply chain in Brazil (Moncada et al. 2019). These works present graphical and numerical results of relatively simple agent-based simulations driven by mathematical models. However, the verification of complex behaviour is constrained by the performance of the machine used for simulation and always tests a finite set of states; therefore, such verification cannot fully ensure the absence of failures.

Formal methods for complex adaptive systems

The use of formal methods in the development of complex systems is highly recommended due to the strong assurance of the absence of bugs that they provide. However, using them requires a very clear understanding of the system behaviour, which makes their use in the case of complex adaptive systems a difficult task. Therefore, only a few works attempt to use formal methods to model and verify complex adaptive systems. The authors of Bartels and Kleine (2011) study the specification and verification of adaptive systems, seen as a subclass of reactive systems, using the process algebra CSP. This allows them to use a set of CSP tools for verification, such as CSP-Prover and ProB. They also introduce an approach for the implementation of a specified system. In Abeywickrama and Zambonelli (2012), the authors present model checking of goal-oriented requirements for self-adaptive systems. To this end, they show how to perform a goal-level model-checking analysis of complex adaptive systems using SOTA (State of the Affairs), a general goal-oriented modelling framework. The effectiveness of their method is demonstrated by transforming the conceptual SOTA model into an operational one. Rouff et al. (2012) give an overview of an approach for the verification of adaptive software systems; this approach combines stabilization science, high-performance computing simulations, compositional verification, traditional verification techniques, and operational monitors. In Iglesia and Weyns (2015), the authors propose a promising approach to designing software systems that must deal with dynamic operating conditions through self-adaptation realized by MAPE-K (Monitor-Analyze-Plan-Execute plus Knowledge) loops. They validate their approach using formally specified MAPE-K templates that encode design expertise for a family of self-adaptive systems. The authors of Giese (2016) also present formal models and analysis for self-adaptive cyber-physical systems. They analyze the challenges facing self-adaptive cyber-physical systems and outline their results through a generic approach, based on extensions of graph transformation systems, called SMARTOS. Sadraddini and Belta (2017) is one of the latest works in this domain, in which the authors use formal methods for the adaptive control of dynamical systems. They develop a method to control discrete-time systems with initially unknown parameters from LTL (linear temporal logic) specifications, and show how to compute adaptive control strategies for finite and infinite systems using formal methods.

Another approach has been presented by Niazi and Hussain (2010), who proposed a formal agent-based simulation framework (FABS). This framework is based on a formal specification describing the wireless sensor networks used to sense the environment of a complex adaptive system. The approach is applied to a boids model of self-organized flocking of animals monitored by a random deployment of proximity sensors. Zafar (2016) presented a formal specification of a take-off procedure using the Vienna Development Method Specification Language (VDM-SL) for complex adaptive systems modelling, in order to overcome communication failures that cause delays and collisions. The presented model is developed by a series of refinements and made up of (i) a static part that includes the graph-based model, aircrafts, controllers, taxiways and runways, and (ii) a dynamic part that describes the take-off algorithms. The same formal method, together with graph theory, is used by Zafar and Afzaal (2017) for modelling wireless sensor and actor networks (WSANs) in an earthquake disaster mitigation and management system. Although these approaches are formally well presented, they remain ad hoc solutions for specific cases of complex adaptive systems and wireless sensor networks. Our approach, in contrast, is more general: it can be applied to the modelling of any complex adaptive system and is independent of the technology used, which extends its applicability.

Unlike the works presented in this section, which provide ready-made formal models, our approach offers a systematic, standard way to guide the generation of formal models while guaranteeing the homogeneity of models and the use of basic mathematical concepts. For this study, we use the formal method Event-B, which provides a set of tools that can be used for the verification of models, such as proof obligations and model checking through animation. These tools ensure the correctness of the models, and thus the resulting implemented system is correct by construction.

Proposed approach

The purpose of this section is to present our contribution, namely a formal approach to model complex adaptive computing systems using Event-B.

Summary of approach

Our approach is as follows:

  1. Filtering the desired concepts needed in the case study. During this stage, we specify among the possible concepts those which interest us (reaction, interaction, adaptation, and evolution).

  2. Filtering and formalizing the related requirements from the requirement document. These requirements will be inserted in step 4.

  3. Adapting the standard model to the system requirements by renaming the elements of the model and adding appropriate properties and types. This step generates an abstract typical initial model.

  4. Refining the model as much as needed by formalizing and adding the remaining typical requirements, as well as discharging the proof obligations. At the end of this step, we obtain a typical concrete model.

  5. Constructing the system according to the concrete typical model, which is guaranteed to be correct by construction.

Figure 3 below illustrates the outline of the proposed approach:

Fig. 3: Proposed approach

Formal approach to model complex adaptive computing systems using Event-B

Agent

Agents can be defined as the smallest elements of a complex adaptive system; these agents have the capability to interact with each other in order to adapt, evolve and solve problems (Niazi 2017). The main benefit of using agents is that they can act separately and independently, and that agents can easily be added or removed without stopping the system. Thus, we propose the following Definition 1 for an agent:

Definition 1

An agent is the smallest element of a complex adaptive system that can act independently.

These agents are defined using a set of properties, such as types and strategies, that determine their behaviour in certain circumstances. Although these properties may seem very simple, the large number of agents in the same CAS, together with the variety of their properties, explosively increases the difficulty of predicting system behaviour. Moreover, a simple change of a single property of a given agent may cause completely new, complex and potentially novel behaviours of the system. Therefore, properly defining the agents and their properties is the most critical part of modelling a complex adaptive system (Akram and Niazi 2018).

For this purpose, we start with the simple formalization shown in Listings 1 and 2, which captures the agents' main properties:

Listing 1
Listing 2

We define a set AGENTS that refers to all possible agents that may exist in the system currently, in the past or even in the future; therefore, the AGENTS set can be modelled as a carrier set and can be seen as a type for variables.

The location of an agent can affect many aspects of how it operates, which makes formalizing it extremely important (Roundy et al. 2018). We introduce a constant LOCATIONS denoting the natural coordinate space ℕ³, which formalizes altitude, longitude and latitude. Then, we link agents with their locations by means of a total function from the set AGENTS to the set LOCATIONS.
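Since the listing images are not reproduced here, a sketch of what Listings 1 and 2 may contain, written in Event-B notation and using the names given in the text (the context and machine names are assumptions), is:

    CONTEXT cas_ctx0
    SETS
        AGENTS                              // all agents: past, present and future
    CONSTANTS
        LOCATIONS
    AXIOMS
        axm1: LOCATIONS = ℕ × ℕ × ℕ         // altitude, longitude, latitude
    END

    MACHINE cas_m0
    SEES cas_ctx0
    VARIABLES
        AgentLocation
    INVARIANTS
        inv1: AgentLocation ∈ AGENTS → LOCATIONS   // every agent has exactly one location
        // INITIALISATION and further events omitted in this sketch
    END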

When one or more agents share the same characteristics, it is possible to associate a type with these agents (Abar et al. 2017). Groups of agents of the same type are defined as subsets of the AGENTS set. For instance, to define three types of agents we use the formalism in Listing 3:

Listing 3

The partition predicate is an easy way to enumerate sets. Mathematically, the partition predicate is defined in (2):

$$\mathit{partition}(S, x, y) \Leftrightarrow x \cup y = S \wedge x \cap y = \emptyset$$
(2)

where x and y are two subsets of a set S.

Each agent has a number of properties that describe it, such as colour, speed, size, etc. These properties may change during the system lifetime in order to adapt, react or evolve (Mittal and Risco-Martín 2017). Therefore, we propose modelling each of these properties as a total function from the set of agents to the set that includes all the possible values of that property. This function can be denoted AgentPropertyi (this notation is necessary when the type of property values is not primitive). For instance, in order to describe the colour of an agent, we propose a function named AgentColour defined from the set AGENTS to the set COLOURS, which includes all possible colours that an agent may have. This approach simplifies the modification of the colour of an agent: only the value of AgentColour(agent) has to be modified (Burns et al. 2017). In general, for a given property i, we define the set of all its possible values in the context as a carrier set denoted PROPERTYi, as shown in Listing 4. Then, we define the total function in Listing 5 below:

Listing 4
Listing 5

Reaction

Due to the unpredictable change of the environment, each agent should have a strategy, in terms of stimulus/response pairs, that indicates what to do in which circumstances (Matheson and Thompson-Schill 2019). A stimulus occurs when a factor (the effect of the environment) that affects agent behaviour changes, such as temperature, wind speed or light. Sometimes, the agent must react to a certain stimulus in order to keep functioning correctly (to survive) (Durniak et al. 2017). Therefore, for such stimuli, we should assign a response to guarantee an appropriate reaction. The appropriate response for a given stimulus is up to the engineer to determine, since this is a system-specific concern: each problem has its own stimulus/response logic. The state diagram in Fig. 4 describes the reaction process:

Fig. 4: Reaction process

We propose in this paper an abstract formalization of this stimulus/response combination according to the following Definition 2:

Definition 2

The reaction is adopting an appropriate response for an external stimulus.

The approach proposed in this paper to formalize the strategy relies on two sets: STIMULUS and RESPONSES. Similarly to the AGENTS and PROPERTYi sets, STIMULUS and RESPONSES refer to all possible stimuli and responses; hence their formalization in the B language is similar to the previous ones.

To formalize the association of each stimulus of an agent with an appropriate response, we introduce the total function in Listing 6:

Listing 6

In order to formalize the fact that agents in a CAS are able to react, we use an event that occurs each time a stimulus appears. In this event, for an agent affected by a given stimulus, an appropriate response takes place and reconfigures the agent's properties. In other words, the agent reacts to the stimulus.

The reaction event can be presented as follows in Listing 7:

Listing 7

PropertyiResponse is a function that determines, for a given agent, the appropriate new value of property i based on a specific response. This function is defined as follows in Listing 8:

Listing 8
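Putting Listings 6–8 together, a sketch of the reaction machinery for one property i could read as follows (guard and action labels are assumptions):

    INVARIANTS
        inv3: AppropriateResponse ∈ AGENTS × STIMULUS → RESPONSES
        inv4: PropertyiResponse ∈ AGENTS × RESPONSES → PROPERTYi

    EVENTS
        Reacting ≙
            ANY agent, stimulus WHERE
                grd1: agent ∈ AGENTS
                grd2: stimulus ∈ STIMULUS
            THEN
                // apply the response associated with the stimulus to property i
                act1: AgentPropertyi(agent) ≔
                          PropertyiResponse(agent ↦ AppropriateResponse(agent ↦ stimulus))
            END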

Interaction

In order to develop their responses to stimuli, agents should consistently enrich their knowledge base. The most efficient way to build this knowledge base is through interaction/communication and the sharing of experiences (Conklin et al. 2019; Sadri et al. 2019). The interaction between two agents can be described as illustrated in Fig. 5:

Fig. 5: Interaction process

Interaction in a complex adaptive system is constrained by either the physical location or the logical localization of agents. The interaction of agents can therefore be defined as follows:

Definition 3

Interaction is the exchange of information between agents in a complex adaptive system.

An agent primarily interacts in a direct way with the agents surrounding it (its neighbors). Hence, we should associate with each agent a number of neighbors, which at the same time defines the structure of the whole system (note that in some cases other properties may be added, such as the distance between two agents). This can be formalized by means of the function in Listing 9:

Listing 9

Here, ℙ is the power set function; ℙ(AGENTS) refers to the set of all subsets of AGENTS.

The interaction of agents implies that each agent has a set of data called the agent's knowledge (Grillitsch et al. 2019). This knowledge helps the agents to make decisions, share knowledge, adapt and evolve. The details of this concept are system-specific, hence we present only an abstract formalization of it in terms of a set of data associated with each agent. Listing 10 below presents the knowledge function as a total function from the set AGENTS to the power set of DATA.

Listing 10

where DATA is a carrier set of data.

Having presented the basic elements of interaction (neighbors and knowledge), we can now formalize the interaction between neighboring agents in terms of an event that occurs each time two agents interact or exchange data in order to learn (note that an interaction in which an agent physically affects its neighbor is treated in our approach as a stimulus, addressed in the previous section). The agent interaction event is formalized in Listing 11 as follows:

Listing 11
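A sketch combining the neighbors and knowledge functions with the interaction event (labels and parameter names are assumptions) could be:

    INVARIANTS
        inv5: neighbors ∈ AGENTS → ℙ(AGENTS)
        inv6: knowledge ∈ AGENTS → ℙ(DATA)

    EVENTS
        Interacting ≙
            ANY a1, a2, d WHERE
                grd1: a1 ∈ AGENTS ∧ a2 ∈ AGENTS
                grd2: a2 ∈ neighbors(a1)           // only neighbors can interact directly
                grd3: d ⊆ knowledge(a1)            // the data shared by a1
            THEN
                act1: knowledge(a2) ≔ knowledge(a2) ∪ d
            END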

Adaptation

This concept is inspired by biological adaptation, which is defined as the process of adjusting to a certain environment in order to survive. In the same way, the adaptation of an agent in a complex adaptive system is the change of its strategy in order to survive and guarantee the correct performance of its tasks (Aldrich et al. 2019). The state diagram in Fig. 6 illustrates the adaptation process in a complex adaptive system:

Fig. 6: Adaptation process

We can define adaptation as follows in Definition 4:

Definition 4

Adaptation of an agent is changing its strategy to conform to a change of environment.

Adaptation is necessary when the state of an agent deteriorates because of an environmental change; this means that a number of specific stimuli can be observed indicating this change of environment. Therefore, a new strategy should be adopted to fit the new environment. This can be formalized as the total function in Listing 12, which determines for each environment (set of stimuli) the strategy that should be adopted:

Listing 12

In order to formalize the adaptation of an agent, we present the following event, which triggers for a given agent when some stimuli are observed (Rozantsev et al. 2019). To make the event more accurate, we add two guards: the first ensures that the current strategy is not the adapted one (grd3), while the second ensures that the agent's state has deteriorated (grd4). When all the guards are satisfied, the current strategy is changed by changing AppropriateResponse. This is formalized as shown in Listing 13 below:

Listing 13
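A sketch of the AdaptedStrategy function and of an adaptation event consistent with this description could read as follows; the deterioration test in grd4 is kept abstract here through a hypothetical boolean function Deteriorated, and all labels are assumptions:

    INVARIANTS
        inv7: AdaptedStrategy ∈ ℙ(STIMULUS) → (AGENTS × STIMULUS → RESPONSES)
        inv8: Deteriorated ∈ AGENTS → BOOL         // hypothetical abstraction of state deterioration

    EVENTS
        Adapting ≙
            ANY agent, env WHERE
                grd1: agent ∈ AGENTS
                grd2: env ⊆ STIMULUS                               // observed stimuli describing the new environment
                grd3: AppropriateResponse ≠ AdaptedStrategy(env)   // current strategy is not the adapted one
                grd4: Deteriorated(agent) = TRUE                   // the agent state has deteriorated
            THEN
                act1: AppropriateResponse ≔ AdaptedStrategy(env)
            END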

Evolving

The evolution concept, as stated in Definition 5, is also inspired by evolutionary theory in biology, where evolution can be defined as an improvement in the heritable biological characteristics of a population over a long period of time across successive generations (Hall and Strickberger 2008).

Definition 5

Evolution in a complex adaptive system is developing gradually by improving the agents' strategies.

This evolution can be measured by the improvement of the state of the whole system. However, it is claimed in the complex adaptive system literature that no agent can see the state of the whole system; still, each agent can see its neighbors. Thus, we assume that agents are able to measure the state of the subsystem formed by their neighbors (the agent itself included). This ability helps the agent to observe the effect of a change in its strategy on the subsystem and to decide whether the change helps the system to improve or makes it deteriorate. In addition to the ability to measure the state of the neighbors' subsystem, an agent should be able to memorize the before-change state, in order to compare it with the after-change state, and also to memorize the old strategy, in order to reuse it if needed (in the case of deterioration).

Figure 7 summarizes the proposed evolving approach:

Fig. 7: Evolution process

The proposed evolution process consists of three phases:

  • Evolution attempt starting: in this phase, the agent proposes a random new strategy by changing at least one stimulus/response combination and memorizes the previous strategy alongside the neighbors’ subsystem state.

  • Deciding phase: during this phase, the agent measures the current state of the neighbours’ subsystem and then compares it with the previous state in order to decide if the change is improving or deteriorating the system. This phase should occur after a certain time from the first one to give the agent time to observe the after-change state of the subsystem.

  • Approving/disapproving phase: lastly, the agent may approve the change of strategy and evolve or disapprove changes and readopt the old strategy. If the evolution is approved then the agent will share this experience with others so that they may try the same thing. Otherwise, the agent readopts the old strategy that was previously memorized during the evolution attempt starting phase.

Notice that this process is based on the generation of random strategies; thus, the system should start with a predefined acceptable strategy. Formally, this can be defined in the initialization event (the first event that triggers at the beginning of the system lifetime).

To translate our proposed evolution process into Event-B, we need to start with the basic elements and abilities that an agent needs in order to evolve. Firstly, we introduce the total function Previous_strategy, defined from the AGENTS set to (AGENTS × STIMULUS → RESPONSES), which allows the agent to memorize its previous strategy. Secondly, we introduce the total function Previous_state, defined from AGENTS to the natural numbers, which represents the before-change state of the neighbours' subsystem as a natural number; this number increases if the subsystem improves and decreases if it deteriorates. Finally, we present the State function, which formalizes the assumption that an agent is able to measure the state of a subsystem; it is defined from the power set of AGENTS to the natural numbers. The formalization of the functions used during evolution is presented in Listing 14 below:

Listing 14

The first event that triggers during the evolution attempt starting phase is the memorization event, which creates a backup of the data before the evolution attempt begins. The second event in this phase allows the agent to change its strategy. However, during modelling, we found that if two neighbours attempt to evolve at the same time, the process may fail: the after-change state observed by an agent would then not reflect only the effect of the change in its own strategy, but the combined effect of both changes. To avoid this problem, we add a new function whose value is 1 if an agent is currently in the process of evolution and 0 otherwise. This function can be formalized as shown in Listing 15 below:

Listing 15

This function helps us to ensure, before starting an evolution attempt, that none of the agent's neighbours is currently attempting to evolve.

Additionally, the process of evolution is related to the concept of time. Our formalization of this concept is inspired by the time pattern proposed by Cansell et al. (2007), in which time is formalized as a natural number that increases over time. The value of this variable records the moment the memorization event was last triggered, which is the moment the evolution attempt started. The memorization and strategy-changing events can thus be formalized as shown in Listings 16 and 17:

Listing 16
Listing 17
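A sketch of the evolution bookkeeping and of the two phase-one events could look as follows; the names Evolving, time and attempt_time are assumptions, and the new strategy is simply chosen non-deterministically:

    INVARIANTS
        inv9:  Previous_strategy ∈ AGENTS → (AGENTS × STIMULUS → RESPONSES)
        inv10: Previous_state ∈ AGENTS → ℕ
        inv11: State ∈ ℙ(AGENTS) → ℕ
        inv12: Evolving ∈ AGENTS → 0‥1              // 1 while an evolution attempt is in progress
        inv13: time ∈ ℕ ∧ attempt_time ∈ AGENTS → ℕ

    EVENTS
        Memorization ≙
            ANY agent WHERE
                grd1: agent ∈ AGENTS
                grd2: Evolving(agent) = 0
                grd3: ∀n · n ∈ neighbors(agent) ⇒ Evolving(n) = 0   // no neighbour is evolving
            THEN
                act1: Previous_strategy(agent) ≔ AppropriateResponse
                act2: Previous_state(agent) ≔ State(neighbors(agent) ∪ {agent})
                act3: Evolving(agent) ≔ 1
                act4: attempt_time(agent) ≔ time
            END

        Change_strategy ≙
            ANY agent, new_strategy WHERE
                grd1: Evolving(agent) = 1
                grd2: new_strategy ∈ AGENTS × STIMULUS → RESPONSES
                grd3: new_strategy ≠ AppropriateResponse
            THEN
                act1: AppropriateResponse ≔ new_strategy
            END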

One last point should be mentioned about this first phase: since the strategy function takes the agent as a parameter, it can access all of the agent's properties and, most importantly, its knowledge. This allows the agent to evolve while taking its previous experience into account, without repeating the same attempt several times.

The decision as to whether the new strategy helps the system to evolve or makes it deteriorate is made after a certain time (denoted separation_time), so as to allow the agent to observe the effect of the new strategy (Son et al. 2019). After that, the agent measures the state of the neighbours' subsystem, compares it with the previous state, and then decides whether the new strategy should be approved or disapproved. For this purpose, we present the two events in Listings 18 and 19:

Listing 18
Listing 19

In both events, the second guard ensures that the agent is in the evolution process, while the third guard guarantees the minimum separation time. The decision about the strategy is made by means of the fourth guard, which compares the current and previous states of the subsystem. If the subsystem is improving, the approving event triggers, which shares this experience (the new strategy helps the agent to evolve) and terminates the evolution process. As can be noticed, the agent should be able to translate the experience into data that can be added to the current knowledge of its neighbours. Therefore, we introduce in Listing 20 the StrategyToKnowledge function, which translates a strategy and a decision into data:

Listing 20

In the approving event, the StrategyToKnowledge function takes Approved as its second parameter, while in the disapproving event it takes Disapproved. Sharing the experience gives the new strategy a high priority in future evolution attempts of the neighbours when it led to evolution, and makes it very likely to be avoided when it led to deterioration.
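A sketch of the two decision events, consistent with the guards described above, could be the following; a carrier set DECISIONS = {Approved, Disapproved}, the names Evolving, time and attempt_time carried over from the previous sketch, and all labels are assumptions:

    INVARIANTS
        inv14: StrategyToKnowledge ∈ (AGENTS × STIMULUS → RESPONSES) × DECISIONS → DATA

    EVENTS
        Approving ≙
            ANY agent WHERE
                grd1: agent ∈ AGENTS
                grd2: Evolving(agent) = 1
                grd3: time ≥ attempt_time(agent) + separation_time
                grd4: State(neighbors(agent) ∪ {agent}) > Previous_state(agent)   // the subsystem improved
            THEN
                act1: knowledge(agent) ≔ knowledge(agent)
                          ∪ {StrategyToKnowledge(AppropriateResponse ↦ Approved)}
                act2: Evolving(agent) ≔ 0
            END

        Disapproving ≙
            ANY agent WHERE
                grd1: agent ∈ AGENTS
                grd2: Evolving(agent) = 1
                grd3: time ≥ attempt_time(agent) + separation_time
                grd4: State(neighbors(agent) ∪ {agent}) ≤ Previous_state(agent)   // no improvement
            THEN
                act1: AppropriateResponse ≔ Previous_strategy(agent)              // readopt the old strategy
                act2: knowledge(agent) ≔ knowledge(agent)
                          ∪ {StrategyToKnowledge(AppropriateResponse ↦ Disapproved)}
                act3: Evolving(agent) ≔ 0
            END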

Case study

This section presents a case study to illustrate our contributions and to provide a means for understanding our formal approach with greater clarity.

Air traffic control in an airport vicinity

Air traffic control (ATC) is a service currently provided by controllers located in the control tower. The main responsibility of the controllers is to organize and expedite the air traffic flow while preventing collisions and minimizing delays (Fact Sheet 2010). This service relies on the structure of the communication network between the aircrafts and the control tower, illustrated in Fig. 8 below:

Fig. 8: Air traffic control structure

Therefore, if the control tower suffers a technical failure or a natural disaster, the aircrafts will not be able to react autonomously. For this reason, we propose a complex adaptive system structure for the ATC system that makes aircrafts able to react to different kinds of external stimuli such as headwinds, crosswinds and tailwinds. Aircrafts should also be able to interact and exchange data over a network in order to build a more general view of the surrounding environment. Based on this view, the aircrafts may need to adapt by changing their way of flying and reacting (their strategy), which greatly improves their ability to carry out their tasks. This interaction and adaptation allow the aircrafts to build a rich knowledge base that permits them to attempt to evolve. However, automated evolution is never guaranteed, which raises the risk of deterioration; therefore, aircrafts should be able to observe the result of their attempts in order to prevent this risk.

The purpose of this case study is to present a brief example of how to apply the proposed approach to the air traffic control system. This work is based on our previous work in this domain (Blok et al. 2018; Jarrar and Balouki 2018a, b; Jarrar et al. 2017).

Requirement document

Since the main purpose of this work is to guide the first step of correctly modelling complex adaptive systems, we need to reorganize and reformulate the requirement document in a more formal way. Abrial (2010) proposes presenting the requirement document in a more suitable form along two axes: Fun, which refers to the functional requirements of the system, and Env, which refers to environment assumptions and non-functional requirements.

The requirement document presented here focuses on the future vision of the most prominent organizations in the domain of airspace systems: the International Civil Aviation Organization (ICAO), the Federal Aviation Administration (FAA), and the National Aeronautics and Space Administration (NASA) (In Focus 2018; Fact Sheet 2010; Department of Transportation Federal Aviation Administration 2017; National Aeronautics and Space Administration NASA 2009). Their vision focuses on making the aircrafts smart and minimizing human participation as much as possible in order to avoid errors. The proposed requirement document is as follows:

Fun 1: The aircrafts of the system are intelligent and able to react independently.

Fun 2: Aircrafts are either fixed wings or rotorcrafts.

Fun 3: Each aircraft has a manufacturer.

Fun 4: Aircrafts always have a vertical angle and a direction angle as well as a linear speed.

Figure 9 illustrates the vertical and direction angle of an aircraft:

Fig. 9: Direction and vertical angles

Env 1: Different types of winds may be faced during flights: headwinds, crosswinds and tailwinds.

Fun 5: Aircrafts are able to react in the case of headwinds, crosswinds and tailwinds.

Fun 6: Aircrafts are able to interact and exchange data over a network.

Fun 7: Aircrafts should keep a minimum separation distance to prevent collisions.

Fun 8: Each aircraft has its own knowledge base.

Env 2: The environment of the system is unpredictably and frequently changeable.

Fun 9: Aircrafts are able to adapt in case of environment changes.

Fun 10: Aircrafts are equipped with a strategy that allows them only to evolve and prevents them from deteriorating.

Formalization of air traffic control

Formalizing agents

In the air traffic control system, the agents are the aircrafts, and they have different properties. In this case study, we propose the following properties: type, manufacturer, speed, vertical angle, directional angle and location (Fun 4).

We introduce two types of aircrafts: fixed-wing and rotorcraft (Fun 2). We also introduce the following manufacturers: Airbus, Boeing, Bombardier, Cessna, and Cirrus (Fun 3).

The context of the system is formalized as follows in Listing 21:

Listing 21

In the machine, we present the set of functions that formalize the agent–property associations (Jarrar et al. 2017). This is formalized as shown in Listing 22:

Listing 22
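A sketch of what the ATC context and the agent–property functions could look like in Event-B, using the types and manufacturers named above (identifier spellings and labels are assumptions), is:

    CONTEXT atc_ctx0
    SETS
        AIRCRAFTS, MANUFACTURERS
    CONSTANTS
        FixedWings, Rotorcrafts,
        Airbus, Boeing, Bombardier, Cessna, Cirrus,
        LOCATIONS
    AXIOMS
        axm1: partition(AIRCRAFTS, FixedWings, Rotorcrafts)                                      // Fun 2
        axm2: partition(MANUFACTURERS, {Airbus}, {Boeing}, {Bombardier}, {Cessna}, {Cirrus})     // Fun 3
        axm3: LOCATIONS = ℕ × ℕ × ℕ
    END

    // In the machine (Listing 22), one total function per property (Fun 4):
    INVARIANTS
        inv1: AircraftManufacturer ∈ AIRCRAFTS → MANUFACTURERS
        inv2: AircraftSpeed ∈ AIRCRAFTS → ℕ
        inv3: AircraftVerticalAngle ∈ AIRCRAFTS → ℤ
        inv4: AircraftDirectionalAngle ∈ AIRCRAFTS → ℤ
        inv5: AircraftLocation ∈ AIRCRAFTS → LOCATIONS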

Formalizing reaction of aircrafts

In this section, we present an example of how to model aircraft reaction in the case of strong wind. Wind is one of the main elements that affect an aircraft's flight; therefore it is one of the most important stimuli to consider when modelling aircraft reaction. Headwind is preferred by pilots for landing and take-off because it requires less runway and results in a low ground speed at touchdown. On the other hand, crosswinds and tailwinds are more difficult to deal with (Env 1), and aircrafts should adapt to the situation by changing speed, angles or even location (Fun 5). To summarise, the wind may affect three properties: speed, vertical angle, and directional angle (iFACTS 2018).

To formalize the aircrafts' reaction, the first elements to be defined are STIMULUS and RESPONSES. All possible stimuli and responses are formalized as shown in Listing 23 below:

Listing 23

In the machine, the AppropriateResponse function should be added, along with the functions that allow the adaptation of the angular and location properties: LocationResponse, VerticalAngleResponse, and DirectionalAngleResponse. These functions are presented as invariants in Listing 24:

Listing 24

Finally, the reacting event can be formalized as shown in Listing 25:

Listing 25
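A sketch of the wind stimuli, the response functions of Listing 24 and a reacting event in the spirit of Listing 25 could read as follows; the individual stimulus/response value names and labels are assumptions:

    // Context: wind stimuli and possible responses (Env 1)
    SETS
        STIMULUS, RESPONSES
    AXIOMS
        axm4: partition(STIMULUS, {Headwind}, {Crosswind}, {Tailwind})
        axm5: partition(RESPONSES, {ChangeLocation}, {ChangeVerticalAngle}, {ChangeDirectionalAngle})

    // Machine: invariants in the spirit of Listing 24
    INVARIANTS
        inv6: AppropriateResponse ∈ AIRCRAFTS × STIMULUS → RESPONSES
        inv7: LocationResponse ∈ AIRCRAFTS × RESPONSES → LOCATIONS
        inv8: VerticalAngleResponse ∈ AIRCRAFTS × RESPONSES → ℤ
        inv9: DirectionalAngleResponse ∈ AIRCRAFTS × RESPONSES → ℤ

    EVENTS
        Aircraft_Reacting ≙
            ANY aircraft, stimulus WHERE
                grd1: aircraft ∈ AIRCRAFTS
                grd2: stimulus ∈ STIMULUS
            THEN
                act1: AircraftLocation(aircraft) ≔
                          LocationResponse(aircraft ↦ AppropriateResponse(aircraft ↦ stimulus))
                act2: AircraftVerticalAngle(aircraft) ≔
                          VerticalAngleResponse(aircraft ↦ AppropriateResponse(aircraft ↦ stimulus))
                act3: AircraftDirectionalAngle(aircraft) ≔
                          DirectionalAngleResponse(aircraft ↦ AppropriateResponse(aircraft ↦ stimulus))
            END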

Formalizing interaction between aircrafts

The first thing needed to guarantee the interaction between aircrafts is a model of the structure of the aircraft network (Fun 6); this can be formalized by means of the neighbors function. Besides, we introduce the knowledge function, which formalizes the knowledge base of each aircraft (Fun 8). In air traffic management systems, aircrafts should always maintain a minimum distance between them, which makes the distance between aircrafts an important factor in the system as well (In Focus 2018). Therefore, we present an additional function formalizing the distance between two aircrafts. This is formalized in Listing 26 as follows:

Listing 26

The knowledge of an aircraft is built by exchanging data over the network with its neighbors. This is formalized as shown in Listing 27 below:

Listing 27

Finally, the distance and location functions can be used to guarantee a minimum separation distance between aircrafts flying in the airport airspace (Fun 7). If two aircrafts keep this distance, collisions as well as wake turbulence are strongly avoided. The minimum distance is fixed and denoted by the constant Min_distance. To ensure that the minimum distance is kept, the invariant presented in Listing 28 must be preserved:

Listing 28
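A sketch of the network, knowledge, distance and minimum-separation formalization could read as follows (DATA and Min_distance are assumed to be declared in the context; labels are assumptions):

    INVARIANTS
        inv10: neighbors ∈ AIRCRAFTS → ℙ(AIRCRAFTS)
        inv11: knowledge ∈ AIRCRAFTS → ℙ(DATA)
        inv12: distance ∈ AIRCRAFTS × AIRCRAFTS → ℕ
        // minimum separation (Fun 7), in the spirit of Listing 28
        inv13: ∀a1, a2 · a1 ∈ AIRCRAFTS ∧ a2 ∈ AIRCRAFTS ∧ a1 ≠ a2
                   ⇒ distance(a1 ↦ a2) ≥ Min_distance

    EVENTS
        Aircraft_Interacting ≙                    // exchange of data over the network (Listing 27)
            ANY a1, a2, d WHERE
                grd1: a1 ∈ AIRCRAFTS ∧ a2 ∈ AIRCRAFTS
                grd2: a2 ∈ neighbors(a1)
                grd3: d ⊆ knowledge(a1)
            THEN
                act1: knowledge(a2) ≔ knowledge(a2) ∪ d
            END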

Formalizing adaptation of aircrafts

In order for an aircraft to adapt (Fun 9), it is necessary to define, for each change of environment (ℙ(STIMULUS)), a strategy (AIRCRAFTS × STIMULUS → RESPONSES) that is better suited to the new environment. This is illustrated in Listing 29 below:

Listing 29

The event responsible for adapting aircrafts is denoted Aircraft_Adapting. This event is constrained by the condition that the current state is worse than the previous one, and by the condition that the current strategy is not the adapted one. In this case, the Aircraft_Adapting event presented in Listing 30 switches the current strategy to the adapted one defined by the AdaptedStrategy function.

Listing 30

Formalizing the evolution process

Before starting the formalization of the evolution process (Fun 10), we introduce the Previous_strategy and Previous_state functions in order to back up the aircraft state, since we cannot know in advance whether an evolution attempt will succeed. Besides, we introduce the State function, which formalizes a capability that an aircraft can easily have: collecting information about its surrounding aircrafts and evaluating the state of the neighbours' subsystem. A total function is also needed to recognize whether an aircraft is in the process of an evolution attempt; this is necessary to avoid multiple evolution attempts taking place at the same time. The functions needed for the aircrafts' evolution are presented in Listing 31 below:

Listing 31

The first event that triggers in the process of evolution is the Memorization event presented in Listing 32. This event backs up the before-change state and strategy, as well as the time at which the process started.

Listing 32

The next event is called Change_strategy; it changes the strategy of an aircraft that has no neighbor in the process of evolution, as shown in Listing 33. If two neighbors changed their strategies at the same time, the effect of each strategy would not be clear, since these two aircrafts share some neighbors that would be affected by both strategy changes. Therefore, we allow a change of strategy only for aircrafts that have no neighbor in the process of evolution.

Listing 33

After a certain time, denoted separation_time, the aircraft may decide whether the change should be approved, based on the before-change and after-change states. If the after-change state is better than the before-change state, the Approving_change event triggers; otherwise, the Disapproving_change event triggers. When Approving_change triggers, it enriches the knowledge of the evolving aircraft, and this knowledge will be shared with its neighbors by means of interaction; it then terminates the process of evolution by associating the value zero with the evolving state, as shown in Listing 34 (Department of Transportation Federal Aviation Administration 2017). In the same way, if Disapproving_change triggers, the previous strategy backed up during the memorization event is readopted, as shown in Listing 35. The knowledge of the aircraft is also enriched, to avoid trying the same strategy over and over. These two events are formalized as follows:

Listing 34
Listing 35

Before starting the proof verification, one last event should be added: the INITIALISATION event. It is an event without guards that describes what happens at the beginning; in general, it initializes all the variables of the proposed model. Listing 36 below presents the initialization of all the variables:

Listing 36
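A sketch of part of such an INITIALISATION event, showing the common Event-B idiom of initializing total functions to constant values (the chosen values and the variable names carried over from the earlier sketches are purely illustrative), could be:

    INITIALISATION ≙
        BEGIN
            act1: AircraftSpeed ≔ AIRCRAFTS × {0}          // every aircraft starts with speed 0
            act2: AircraftVerticalAngle ≔ AIRCRAFTS × {0}
            act3: AircraftDirectionalAngle ≔ AIRCRAFTS × {0}
            act4: knowledge ≔ AIRCRAFTS × {∅}              // empty knowledge base for each aircraft
            act5: Evolving ≔ AIRCRAFTS × {0}               // no aircraft is attempting to evolve
            act6: time ≔ 0
            // remaining variables (locations, strategies, ...) are initialized similarly
        END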

Verification and validation

In this section, we are interested in experiments to validate our work. To do this, we perform verification using proof obligations and verification using the ProB animator.

Verification using proof obligations

In order to guarantee the correctness of the model proposed in the case study, we use the Rodin platform to discharge the proof obligations. Table 1 below presents statistics about the established proofs:

Table 1 Rodin report

This table is generated automatically by Rodin and presents the number of proofs generated, including manual (6) and automatic (109) proofs. Most of these proofs are invariant preservation verifications, which guarantee that all the invariants remain satisfied during the system lifetime. Therefore, a system built from the proposed model will be correct by construction.

Verification using ProB animator

The validation of invariant preservation guarantees that the states before and after every event always satisfy all the invariants, which is a very important validation requirement. Still, this validation is not enough to ensure correctness: in some cases, we may arrive at a state where no guard is satisfied. In such a state, all events are prevented from triggering since their guards are not satisfied. This is why another type of validation is needed: deadlock freedom.

In order to guarantee deadlock freedom, two methods are available: proving the preservation of one additional invariant, and model checking through animation (Bozga et al. 2019). The first method is based on adding an invariant consisting of the disjunction of the guards of all the model's events. This method guarantees deadlock freedom by ensuring that at least one guard is satisfied no matter what the state of the system is. Listing 37 illustrates the deadlock-freedom invariant:

Listing 37
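In general, for a machine with events whose guards are G_1, …, G_n over parameters p_1, …, p_n and variables v, such a deadlock-freedom invariant has the form (this is the generic pattern, not the exact content of Listing 37):

$$DLF \mathrel{\hat{=}} (\exists p_1 \cdot G_1(p_1, v)) \vee (\exists p_2 \cdot G_2(p_2, v)) \vee \cdots \vee (\exists p_n \cdot G_n(p_n, v))$$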

As may be noticed, this invariant is overloaded and contains a significant number of predicates, which complicates the invariant preservation proof. Therefore, the second method, based on model checking through animation, is more suitable for this kind of verification. In almost all cases, if there are deadlocks in the model, they will be revealed by animating the model several times with a large number of steps. For this task, we use the ProB animator. ProB is a plug-in of the Rodin platform that allows the animation, constraint solving and model checking of models written in the B language (Körner and Bendisposto 2018). We used this tool to animate our model over one hundred steps, and the model was always free of deadlocks. Figure 10 illustrates the result of ProB model checking, with a total of 14,568 transitions visiting 4779 nodes. During this check, no deadlock or invariant violation was detected, which strongly supports deadlock freedom in our model. All experiments were conducted on a 64-bit PC running Windows 7 Professional, with an Intel Core i7 2.13 GHz processor with 4 cores and 4 GB of RAM.

Fig. 10: Deadlock-freedom verification screenshot

To conclude, our approach allows the creation of a model that is verified by proof obligations and animation; as a result, a system constructed from this model will be correct by construction. Our approach therefore provides a systematic way to build highly verified formal models of complex adaptive computing systems using the Event-B method.

Conclusion

We have introduced the steps and reasoning involved in the construction of a model of complex adaptive systems using the Event-B formal method. The main contribution is a modelling methodology that can be used to develop any complex adaptive system. We cover the most important concepts proposed by the most well-known works in this domain, which facilitates the development of such complex systems. According to the requirements document, the developers of a system may add further refinements, including their specific requirements, to build a customized model suitable for their case. To illustrate the proposed approach, we have presented a use case of an air traffic control system. The requirement document of this use case was built based on the descriptions of several organizations (FAA, ICAO, and NASA). After applying the approach, a model that includes all these requirements, in addition to the complex adaptive systems concepts, was produced. The validation of the model was guaranteed by two types of verification: proof obligations using Rodin and deadlock freedom using ProB. Therefore, the correctness of the model generated for the case study is guaranteed, which demonstrates the validity of the proposed approach.

Due to the huge number of concepts and characteristics of complex adaptive systems, presenting a methodology that formalizes them all in a single work is not an easy task. Therefore, in future work, we intend to improve the proposed approach by considering the remaining complex adaptive systems concepts, such as populations (collections of agents or strategies), self-similarity, interaction patterns, selection of strategies, emergence and self-organization. Moreover, combining this approach with other verification methods such as model checking would further minimize the probability of failures. Furthermore, we aim to develop complete recommendations for other standardizations such as QoS and RM-ODP (Jarrar et al. 2017; Jarrar and Al 2019).