As previously mentioned, RoboChart models are endowed with a formal semantics. Our formalisation relies on the UTP framework, but we use CSP as a front end to the UTP to support early validation via model checking. In Sect. 4.1, we briefly introduce CSP and describe the operators used in our semantics. In Sect. 4.2, we provide an overview of our semantics, and in Sect. 4.3, we discuss its formalisation as a function from RoboChart to CSP models.
CSP
Communicating Sequential Processes (CSP) [45, 85] is one of a large family of specification notations for concurrent systems referred to as process algebras. This family includes notations such as CCS [64], the Pi-Calculus [65], and ACP [8]. CSP is distinctive in its denotational nature: CCS focuses on an operational semantics, and ACP on an algebraic semantics. The denotational models of CSP give rise to notions of refinement that are particularly useful for verifying properties and establishing the correctness of implementations.
The central constructs of CSP are processes and channels. Processes specify patterns of interaction, including aspects such as deterministic and nondeterministic choice, deadlock, and termination. Processes can be defined by the parallel composition of other processes, with interaction occurring through channels. Communications between parallel processes are instantaneous, atomic events and can carry values.
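To make the notation concrete, the toy script below (not part of the RoboChart semantics) illustrates channels, prefixing, parallel composition, and hiding in the machine-readable dialect of CSP (CSPM) accepted by tools such as FDR; all names are invented for illustration.

-- Two one-place copiers composed into a pipeline.
channel left, mid, right : {0..3}      -- channels carrying small integer values

COPY1 = left?x -> mid!x -> COPY1       -- input on left, output on mid, repeat
COPY2 = mid?x -> right!x -> COPY2      -- input on mid, output on right, repeat

-- Synchronise the copiers on mid and hide it: externally, PIPE behaves like
-- a two-place buffer from left to right.
PIPE = (COPY1 [| {| mid |} |] COPY2) \ {| mid |}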
Table 2 gives the CSP operators used in our semantics. For each operator, it provides its symbol, name, and an informal description of its behaviour. Section 4.2 presents a number of examples of CSP usage; we explain each example as it appears.
Overview
The structure of our CSP semantics is sketched in Fig. 11. The semantics of modules, controllers, and state machines is given by processes. A module process is defined by the parallel composition of the processes for its controllers interacting according to the connections in the module, as well as a memory process recording the variables (and constants) of the robotic platform.
While the semantics presented in Sect. 4.3 produces a single CSP process definition, in examples we use process declarations to modularise it and improve readability. For example, the semantic function \([\![\_]\!]_{\mathscr {M}}\) presented in Sect. 4.3 specifies the definition below of the process Foraging for the module of our running example (Fig. 5), but not the declaration shown that names it. Here, we name this process and others to facilitate presentation. The process names that we use match applications of the semantic functions described in Sect. 4.3. The resulting declarations also help to optimise both the generation and the analysis of the models. Therefore, we implement them in RoboTool; this is further discussed in Sect. 5.1.
The fact that the semantics presented in Sect. 4.3 generates a single process is not detrimental to compositionality. Each semantic function is defined compositionally, that is, purely in terms of the semantics of the components of the RoboChart element that it defines. Furthermore, compositionality of refinement follows directly from compositionality of the CSP operators. For instance, if a state machine S of a controller C is refined by another machine S’, then S can be replaced with S’ to form a controller C’ that is a refinement of C.
Foraging defines our example module as the parallel composition of the processes for its controllers, namely ObstacleAvoidance and ForagingC, and a memory process \(Memory\_Foraging\) for the platform ForagingRP. Synchronous connections between controllers are captured by a parallel composition (\({|}{[}\ldots {]}{|}\)) with the events involved in the connections hidden (\(\backslash \)) because these connections are not visible. Since, in our example, the controllers are not connected directly, in the composition of the controller processes the parallelism is an interleaving (\({|}{|}{|}\)) and there is no hiding.
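The display with this definition is not reproduced here. Purely for orientation, a simplified sketch in machine-readable CSP (CSPM) follows; the channel declarations, the renamings discussed below, and the precise synchronisation set are elided or approximated, so this illustrates the shape of the process rather than its actual definition.

-- Hypothetical sketch: controllers in parallel with the platform memory.
Foraging =
  ( (ObstacleAvoidance ||| ForagingC)               -- unconnected controllers interleave
      [| {| set_FRP_dist, set_Ext_FC_dist |} |]     -- set events for the platform variable dist
    Memory_Foraging(0) )
  \ {| set_Ext_FC_dist, set_FRP_nest |}             -- internal propagation and constant setting hidden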
Asynchronous connections, if present, are realised via a buffer modelled by another process composed in parallel with the composition of the controller processes. Interactions with the buffer are hidden.
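Such a buffer can be sketched in CSPM as follows; the channel names and value type are placeholders of ours, not the names generated by the semantics.

nametype Value = {0..3}                    -- placeholder value set
channel async_in, async_out : Value        -- illustrative input and output channels

BufferEmpty   = async_in?x -> BufferFull(x)
BufferFull(v) =   async_in?y  -> BufferFull(y)   -- a new input overwrites the stored value
               [] async_out!v -> BufferEmpty     -- the stored value is output and the buffer empties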
When referring to RoboChart elements we use sans serif font, and use italics for CSP terms. For example, ObstacleAvoidance is a RoboChart state machine, and ObstacleAvoidance is the CSP process that specifies its semantics. Table 3 describes the names of elements (processes and channels) in our semantics.
The memory model of RoboChart is hierarchical, with a memory for the robotic platform at the top of the hierarchy, memories for the controllers at the next tier, and finally the memories for the state machines under those for their controllers. The memory for a robotic platform records its variables for sharing between controllers (and their state machines). In our example, the process \(Memory\_Foraging(0)\) defined below models a shared memory recording the platform variable dist. The value of the constant nest is defined using the channel \(set\_FRP\_nest\). Since a specific value is not determined in the RoboChart model, the communication accepts any possible value (and introduces a nondeterminism when \(set\_FRP\_nest\) is hidden).
Table 3 Summary of naming conventions in RoboChart semantics

The memory processes for a robotic platform or controller are slightly different from those for state machines. A process for a platform or controller not only accepts updates to the memory variables, but also propagates updates down the hierarchy to the memory of the state machines that require the updated variables. The memory of a state machine caches the variables it requires, so that the model of the machine itself is independent of the location of the variables that it uses, that is, the particular controller or robotic platform that provides the variables that it requires.
The memory of a robotic platform is modelled by a recursive process that, at each step, accepts, for each variable v, updates to v through a channel \(set\_v\), and, for each controller \(C_i\) that requires v, propagates the new value through channels \(set\_Ext\_C_i\_v\). There are no get channels, since, as mentioned above, the values of the variables are cached in the memory processes for the state machines that use them.
The memory process for the platform in our example, namely, \(Memory\_Foraging(dist)\), is as follows. It accepts a value x for dist through \(set\_FRP\_dist\), and propagates it to ForagingC through \(set\_Ext\_FC\_dist\). ObstacleAvoidance does not require dist, so no additional propagation is needed.
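That display is not reproduced here; a CSPM sketch consistent with this description, with assumed channel declarations and with the treatment of the constant nest simplified, is as follows.

nametype Value = {0..3}                                        -- placeholder value set
channel set_FRP_dist, set_Ext_FC_dist, set_FRP_nest : Value

-- At each step, accept an update to dist and propagate it to ForagingC; the
-- parameter holds the current value, and there is no get channel.
Memory_Foraging(dist) =
  set_FRP_dist?x -> set_Ext_FC_dist!x -> Memory_Foraging(x)

-- The loose constant nest is fixed once by accepting any value on set_FRP_nest
-- (a nondeterministic choice once that channel is hidden); the placement of this
-- communication, and the name of this wrapper, are assumptions of the sketch.
Memory_Foraging_Init = set_FRP_nest?n -> Memory_Foraging(0)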
The parameter defines the initial value of dist.
In the definition of a module process, the platform-memory process is composed in parallel with the processes for the controllers synchronising on the set channels. In our example, \(Memory\_Foraging(0)\) is composed in parallel with the parallel composition of the processes ObstacleAvoidance and ForagingC. They synchronise on \(set\_FRP\) and \(set\_Ext\_FC\) events for dist. The set channels used to update the variables are visible, since they represent changes to attributes of the platform, but the set channels used to define the values of the constants and the \(set\_Ext\) channels used just to define the internal propagation protocol are hidden.
Write access to a memory higher up in the hierarchy is accomplished by renaming (\([\![\ldots \leftarrow \ldots ]\!]\)) the set channels of a controller (or machine) process, when it is composed to define a module (or controller). In our example, the channel \(set\_FC\_dist\) used by the controller process ForagingC (defined below) to update the variable dist is renamed to \(set\_FRP\_dist\), resulting in a process that interacts directly with \(Memory\_Foraging\).
The two channels \(set\_FC\_dist\) and \(set\_FRP\_dist\) for the same variable dist represent assignments in different contexts. The channel \(set\_FRP\_dist\) represents assignment to dist as a provided variable of the module with platform ForagingRP (abbreviated to FRP). In the process for the controller ForagingC (abbreviated here to FC), however, we do not use \(set\_FRP\_dist\) to avoid dependence between the semantics of the controller and that of the module where it is used. This allows independent definition and, therefore, analysis of the controller. On the other hand, when defining the module, the two channels are identified (via renaming and synchronisation) to guarantee that when the controller assigns a value to dist, it is captured by the module.
Renaming is also used to deal with connections between a controller and the platform. A controller event is uniquely identified in the semantics by a qualified name determined by the controller. If such an event is connected to an event of the platform, a renaming to the platform event models the connection. For example, the event transferred of the controller ForagingC, whose qualified name in our example is \(FC\_transferred\), is renamed to \(FRP\_transferred\), which is the qualified name of the event transferred of the platform. Similar renamings are applied to ObstacleAvoidance, but are omitted in the sketch above for simplicity.
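In CSPM, such renamings are written with the double-bracket operator; a sketch for ForagingC within the module, using the channel and event names from the text (the name of the renamed process is ours), is:

-- Writes to dist and the controller's transferred event are identified with the platform's.
ForagingC_renamed =
  ForagingC [[ set_FC_dist    <- set_FRP_dist,
               FC_transferred <- FRP_transferred ]]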
The visible interactions of a module, represented by visible CSP events of the module process, correspond to updates to platform variables via set channels, to events of the platform when they are accepted by a controller, and to calls to, and returns from, the platform operations. In our example, they are events that use the channel \(set\_FRP\_dist\), the channels named after the events of ForagingRP (namely, \(FRP\_collected\), \(FRP\_stored\), and so on), and the channels named after the operations in the interfaces GraspI and MovementI, provided by ForagingRP, with suffix Call or Ret to indicate an operation call or return (Fig. 2).
The semantics of a controller is the parallel composition of the processes for its state machines interacting according to their connections, and a memory process for the controller variables. This semantics is similar to that of a module, but the components are processes for state machines (Fig. 11) and the controller memory.
For instance, the process below for ForagingC is the parallel composition of a process that interleaves the behaviours of its state machines DTP and PositionEstimation, and its memory process \(Memory\_ForagingC\).
Here, the machine processes are composed in interleaving because they do not interact directly via RoboChart events, only through the shared variable position. As in the definition of a module, renamings deal with the association of local and shared variables and with connections of events. In the example, renamings are applied to the processes that model the individual state machines to map local (machine) updates of the shared variable position to global (controller) updates, and to associate connected events of the machines. For instance, while the semantics of DTP uses the channel \(set\_DTP\_position\) to write to position, the composition of DTP in the model of the controller renames this channel to \(set\_FC\_position\), thus allowing DTP to interact directly with the controller’s memory process.
As in a module process, the interleaving of the state machine processes is composed in parallel with the memory process, synchronising on the events that update and propagate changes to the variables. In addition, the channels used to update variables in the controller memory (\(set\_FC\_position\) in our example) and the channels used to propagate changes to the state machine memories are hidden.
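Putting the last three paragraphs together, a simplified CSPM sketch of the controller process is given below; the synchronisation and hiding sets are reduced to the channels mentioned in the text, and analogous renamings for PositionEstimation and for connected events are elided.

-- Hypothetical, simplified sketch of the controller process.
ForagingC =
  ( ( DTP [[ set_DTP_position <- set_FC_position ]]   -- DTP's writes go to the controller memory
      ||| PositionEstimation )                         -- the machines interleave
      [| {| set_FC_position,
            set_Ext_DTP_dist,
            set_Ext_PositionEstimation_dist |} |]
    Memory_ForagingC )
  \ {| set_FC_position, set_Ext_DTP_dist, set_Ext_PositionEstimation_dist |}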
The memory process for a controller is similar to that of a robotic platform, except that it must not only receive updates and propagate changes, but also relay propagations (of updates from the robotic platform to the state machines). In our example, the memory process for the controller ForagingC is shown below.
The required constant nest is set at the start. Since the controller declares a variable position and requires the variable dist, it behaves differently for each of these variables. The variable position is treated similarly to dist in \(Memory\_Foraging\): updates are accepted and propagated. The required variable dist, on the other hand, is not updated directly here. \(Memory\_ForagingC\) simply relays values propagated by the robotic platform, received through the channel \(set\_Ext\_FC\_dist\), to any state machine that requires that value. Both machines of ForagingC require dist, so the value received is propagated through the channel \(set\_Ext\_DTP\_dist\) to DTP and through \(set\_Ext\_PositionEstimation\_dist\) to the state machine PositionEstimation. The order chosen in the semantics for propagation is arbitrary.
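A CSPM sketch along these lines is given below; the channel names set_FC_nest and set_Ext_DTP_position are not given in the text and, like the initial parameter values, are assumptions of the sketch.

nametype Value = {0..3}   -- placeholder value set
channel set_FC_nest, set_FC_position, set_Ext_DTP_position,
        set_Ext_FC_dist, set_Ext_DTP_dist, set_Ext_PositionEstimation_dist : Value

Memory_ForagingC =
  set_FC_nest?n ->                                    -- fix the required constant once, at the start
  let
    Memory(position, dist) =
        set_FC_position?x ->                          -- an update to the controller variable position ...
          set_Ext_DTP_position!x -> Memory(x, dist)   -- ... is propagated to the machine(s) that require it
     [] set_Ext_FC_dist?x ->                          -- a value of dist propagated by the platform ...
          set_Ext_DTP_dist!x ->
          set_Ext_PositionEstimation_dist!x ->        -- ... is relayed to both machines, in some order
          Memory(position, x)
  within Memory(0, 0)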
Below, we show the process for the machine DTP. It is a parallel composition of two processes. One of them models the behaviour of the state machine. It is itself defined by the parallel composition of a process Init, which describes the transition to the initial state, with the parallelism of processes modelling the states.
\(Memory\_DTP\) models the state machine memory; it records its local variables (and constant) and caches any required variables. RoboEvents is the set of all CSP events representing visible interactions of the machine, namely RoboChart events, accesses to shared variables, and operation calls and returns. All CSP events (\(\Sigma \)), except those in RoboEvents, are hidden. \(Memory\_DTP\) synchronises with the process that defines the behaviour of the machine on the set and get channels of all variables, and the events of the machine. The events are renamed to remove transition identifiers, and the get and set channels are hidden, except for the set events of the shared variables: dist and position in our example.
CSP events are used to model the control flow defined by entering and exiting states via the channels enter, entered, exit, and exited. They model the beginning and end of these phases: entering or exiting a state is only completed when the entry and exit actions are finished. This has an impact on availability of transitions; for instance, the transitions of a state are only possible once that state is entered. So, we use the channel enter to start the entering stage, and entered to signal its completion; similarly for exit and exited. Each channel takes two parameters: the component that has requested the action to start and the target of the request. For instance, in Init above, DTP itself requests that the state Exploring is entered using the event enter.DTP.Exploring. This event synchronises with the event enter?s.Exploring offered by the process ExploringR, which models the state Exploring. The synchronisation instantiates s as DTP.
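To make the control-flow events concrete, a CSPM sketch of the channel declarations and of Init follows; the identifier type ID and its values are assumptions, and the behaviour after initialisation is elided.

datatype ID = DTP | Exploring | GoToNest | WaitForTransfer    -- assumed identifiers
channel enter, entered, exit, exited : ID . ID                 -- requester . target

-- Init asks for the initial state Exploring to be entered and waits for completion.
Init = enter.DTP.Exploring -> entered.DTP.Exploring -> SKIP

-- On the other side, the process for Exploring accepts the request from any source s,
-- for example:  ExploringR = enter?s!Exploring -> ...   (here s is instantiated as DTP)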
A process that models a state does so compositionally, capturing only information about the state itself, irrespective of the context (composite state or state machine) where it occurs. In general, a process for a state S may have two components: processes \(S\_main\), modelling the behaviours of S, and \(S\_ch\), capturing the behaviours of the children states, if any. In this view, a state is potentially itself an independent software component that we can consider separately in verification.
\(S\_main\) has the form sketched below.
This process uses the identifier SID of S, and, for a composite state, the identifier SSID of the state targeted by its initial junction. \(S\_main\) accepts communications over enter that request entry to S, executes its entry action, requests activation of SSID, waits for it to be completed, that is, for that state to be entered, acknowledges entry to S, and executes its during action while offering a choice of events that trigger transitions. If a transition is taken, the during action is interrupted (\(\triangle \)). To cater for a during action that terminates, the process during is followed by the deadlocked process STOP. This ensures that the interruptions arising from the transitions are not discarded by the termination, and remain available for as long as the state is active.
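The display with this sketch is not reproduced here; schematically, in CSPM-style notation, S_main has a shape along the following lines, where SID and SSID stand for the identifiers mentioned above, entry and during for the action processes, and the two groups of transition processes are left abstract.

-- Hypothetical shape of S_main for a composite state S.
S_main =
  enter?o!SID ->                                -- some context o requests entry to S
  ( entry ;                                     -- execute S's entry action
    enter.SID.SSID -> entered.SID.SSID ->       -- activate the initial substate and wait for it
    entered.o.SID ->                            -- acknowledge that entering S has completed
    ( (during ; STOP)                           -- during action; STOP keeps the interrupts on offer
      /\ (transitions_of_S [] all_other_transitions_S) ) )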
The transitions offered are those of S (modelled by processes denoted by \(transitions\_of\_S\) in the sketch above) and all possible transitions, that is, transitions with all possible valid triggers from and to all possible states, whether in the diagram or not, except those from S and its substates (modelled by processes denoted by \(all\_other\_transitions\_S\) above). These are all the transitions that, if taken, lead to an exit of S, such as those of an ancestor of S. The transitions of the substates of S are the only ones not considered here, because they cannot lead to an exit of S and, therefore, to an interruption of the during action, since there are no inter-level transitions.
The role of the processes in \(all\_other\_transitions\_S\) is to capture the possibility of a transition of an ancestor state of S interrupting its execution without losing compositionality in the model. We do not consider specifically the transitions of the states where S occurs. The definition of \(S\_main\) does not depend on the particular transitions of the ancestor states of S: it accepts any transitions, not only its own, except those of its substates. Since there are no inter-level transitions, the transitions in the substates of S are dealt with separately, in the processes for these substates themselves. As explained, these transitions are not modelled in either \(transitions\_of\_S\) or \(all\_other\_transitions\_S\).
For illustration, Fig. 12 presents part of a diagram showing a composite state S that is itself part of a composite state PS, with transitions numbered. Transitions like (3), from S, are modelled in \(transitions\_of\_S\); transitions like (4), (5), (6), (7), and (8) are modelled in \(all\_other\_transitions\_S\); and those like (1) and (2), between substates of S, are modelled in neither \(transitions\_of\_S\) nor \(all\_other\_transitions\_S\).
Of course, as part of the behaviour of S, transitions from states that are not related to S, that is, neither substates nor ancestors of S, are never taken. If S is the current state, those transitions are not actually available. Moreover, transitions that are not in the diagram are obviously also never taken. These transitions in S that cannot be taken are restricted as part of defining the process for the parent state PS, as explained later in this section. The definition of S, however, does not depend on the identification of the transitions of its ancestors or even of the machine as a whole.
The transitions modelled in \(transitions\_of\_S\) and \(all\_other\_transitions\_S\) are offered in choice (\(\Box \)). In our example, the process Exploring for the state of the same name accepts the event collected, which is associated with its own transition, but also all other possible transitions whether in the diagram or not.
The semantics of a transition with identifier tid and trigger e?x, receiving a value x, with a guard g, and with an action tact, possibly defined in terms of x, from the state S to another state R, with identifier RID, is captured by a process T of the form indicated below.
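The display with T is not reproduced here; a CSPM-style sketch of its shape, consistent with the description in the following paragraphs, is given below. The guard g is handled by the memory process, the transition identifier tid carried by the trigger is elided, and for a simple state the exit of a child is absent.

-- Hypothetical shape of the process T for a transition from S (identifier SID)
-- to R (identifier RID), with trigger e?x, exit action eact, and transition action tact.
T =
  e?x ->                                     -- the trigger (also tagged with tid in the actual model)
  exit.SID.SID ->                            -- S decides to exit itself
  exit.SID?s -> exited.SID.s ->              -- ask the active child s (if S is composite) to exit
  ( eact ;                                   -- S's exit action
    exited.SID.SID ->                        -- exiting of S is now complete
    ( tact ;                                 -- the transition action (its use of x is elided)
      enter.SID.RID -> entered.SID.RID ->    -- request and await entry to the target R
      S_main ) )                             -- S is inactive again, available to be re-entered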
T is composed in choice with similar processes for each transition of S to define the process indicated in the sketch of \(S\_main\) by \(transitions\_of\_S\).
In T, if e occurs, then exiting of S is indicated by exit.SID.SID and exited.SID.SID. In between, if S is a composite state, we have the request from S for its active child state (with identifier s) to exit and, after the confirmation via exited.SID.s, the execution of the exit action eact of S. Afterwards, the transition action tact is executed, and then there is a request for R to be entered using enter and entered, before recursing back to \(S\_main\). This recursion makes the behaviour of S available again; it can once again be requested to enter. The guard g is not modelled in T, but in the memory process for the machine discussed later.
Since, in the context of a transition for a state S, the decision to exit S comes from that state itself, in exit.SID.SID and exited.SID.SID both identifiers are that of S, that is, SID. In general, however, these channels are used to accept any requests to, and acknowledge, exit from the state. So, we need two identifiers. For instance, when S, as a parent state, asks a child state to exit, the values of these identifiers are different as shown in the sketch T itself.
The events exit.SID.SID and exited.SID.SID enable validation of a state machine by analysis of its internal control flow. We can use exited events to analyse, for example, the time spent in a state, by considering a version of the state machine process where such events are visible. This kind of validation is discussed later in Sect. 6.3.2. (We observe, however, that in the overall semantics such events are not visible, and so properties specified in terms of these events are not expected to be preserved by refinement. For example, in a refinement, states may even be removed or added as long as the externally observable behaviour is correct.)
The processes \(T_O\) in \(all\_other\_transitions\_S\) are similar. In this case, the request to exit comes from another state as. In addition, no account of the transition action is given, and no new state is entered, because the interruption here is associated with transitions other than those of S and of its substates. The control flow for these other transitions is handled in the processes for their source state, if any, as further discussed below.
In specifying the semantics of a parent state PS, from the state process S, we need to define a more restricted process \(S\_R\) that excludes transitions from a sibling state of S. (A sibling is a state that has the same parent.) This is because it is the model for the parent state that specifies the transitions available between its children. So, the availability of transitions in S that are actually controlled by one of its sibling states needs to be blocked. In our example diagram sketch in Fig. 12, the process \(S\_R\) still captures the transitions (3), (6), (7), and (8), like \(S\_main\), but not (4) and (5), which are for its sibling states, and still not (1) and (2). Since we define \(S\_R\) as a component of the process for its parent state (if any), we still have a compositional definition. In our example, for instance, the model for DTP uses the restricted version of the state processes: \(Exploring\_R\), \(GoToNest\_R\), and so on.
In general, a process \(S\_R\) is defined as follows.
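The display is not reproduced here; in CSPM terms, the definition amounts to synchronising S_main with SKIP on a difference of event sets, as sketched below (the set names spell out the α-sets introduced next).

-- Hypothetical rendering: block, in S_main, the events of transitions in
-- all_other_transitions_S that are not among the transitions captured by PS.
S_R = S_main [| diff(alpha_all_other_transitions_S, alpha_all_transitions_PS) |] SKIP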
Here, we use \(\alpha all\_other\_transitions\_S\) to denote the set of events for the transitions captured by the process denoted by \(all\_other\_transitions\_S\) in \(S\_main\). Similarly, \(\alpha all\_transitions\_PS\) contains the events for the transitions of PS, including those whose semantics is captured in \(transitions\_of\_PS\) and \(all\_other\_transitions\_PS\).
The parallelism with SKIP blocks the transitions in the synchronisation set. To explain this definition, we observe that PS itself captures transitions as indicated in Fig. 12 for any state. So, in this example, the set of transitions modelled by \(all\_transitions\_PS\) includes (6), (7), and (8). These are the transitions not to or from a substate of PS. In \(S\_R\), we block the transitions captured by \(all\_other\_transitions\_S\), that is, those that are not from S or its substates, except for those that are also captured by PS. These are exactly the transitions that are not from S but are from a substate of PS. In Fig. 12, \(S\_R\) captures the transitions like (3), (6), (7), and (8). The transitions like (4) and (5), originating from a sibling state of S in PS, are blocked. They are captured by the processes for those sibling states.
This restricted process \(S\_R\) is used to complement the model of PS, defined by the process \(PS\_main\) of the form sketched above, with a model \(PS\_{ch}\) of its children. For the example in Fig. 12, \(PS\_{ch}\) is defined in terms of \(S\_R\), \(U\_R\), \(V\_R\) and \(W\_R\). In general, for a state S, if S does not have any substates, like Exploring, the overall model S of S is just the process \(S\_main\). If, on the other hand, S does have substates, its model is a parallel composition between \(S\_main\) and another process \(S\_ch\) that models the children (Fig. 11). The transitions captured by \(S\_ch\) are all those captured by \(S\_main\), plus those among the children (but not those among further nested substates that might exist) of S.
The parallelism between \(S\_main\) and \(S\_ch\) captures the transitions modelled in \(S\_main\) and the additional ones in \(S\_ch\). Synchronisation is required on the transitions captured by both processes, namely, those of \(S\_main\). For S in Fig. 12, this parallelism captures transitions like all those shown: (1) to (8). The behaviour for transitions like (1) and (2) is defined solely by \(S\_ch\), while that for the other transitions requires agreement between the two processes. The specific behaviour of a transition like (3) is defined in \(S\_main\), by a process like T above. In \(S\_ch\), that transition is enabled without further restrictions. It is considered in \(S\_ch\) only because it is allowed by the definitions of \(S_1\) and \(S_2\), which are independent of the context defined by S as their parent state.
\(S\_main\) and \(S\_ch\) also synchronise on the flow events enter, entered, exit, and exited that target a child state, but are not from another child state. This ensures that entering and exiting of the children states that are not requested by another child are requested by \(S\_main\).
\(S\_ch\) is a parallel composition of the restricted processes for S’s children synchronising on the flow events. Use of the restricted processes ensures that the model of each child does not affect the transitions from its sibling states. Synchronisation on the flow events captures the sequential flow among the children states.
\(Memory\_DTP(P,source,dist,position)\), the memory process for the state machine DTP, is below.
While a state machine can write directly to the memory of the controller, it does not read values directly. The state machine memory process keeps a copy of the required variables and accepts updates that keep their values synchronised through the use of \(set\_Ext\) events. Our example above is for a process that manages four variables: the local variables P and source, and the required variables dist and position. For P and source, we have set and get channels. For dist and position, we additionally have \(set\_Ext\) channels used by the controller to update the variables. The value of the required constant nest is defined as in the memory processes for platforms and controllers, but, unlike in those cases, the value chosen is passed to the local recursive process Memory, where it is offered through a get event. We note that \(Memory\_DTP\) is agnostic to the particular controller that uses this machine and that may actually hold the values of the required variables.
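A much-simplified CSPM sketch of such a memory is given below; the channel and transition names are assumed, the variable source and several get and set branches are elided, and only the guard of the transition out of GoToNest (discussed next) is shown.

nametype Value = {0..5}                  -- placeholder value range
datatype TID = t1 | t2 | t3              -- placeholder transition identifiers
channel set_DTP_nest, get_DTP_nest, set_DTP_P, get_DTP_P,
        set_Ext_DTP_dist, get_DTP_dist, set_DTP_position : Value
channel internal : TID

Memory_DTP(P, source, dist, position) =
  set_DTP_nest?nest ->                            -- fix the loose required constant once
  let
    Memory(P, dist, position) =
        get_DTP_P!P          -> Memory(P, dist, position)
     [] set_DTP_P?x          -> Memory(x, dist, position)
     [] get_DTP_dist!dist    -> Memory(P, dist, position)     -- cached copy of the required variable
     [] set_Ext_DTP_dist?x   -> Memory(P, x, position)        -- cache update propagated by the controller
     [] set_DTP_position?x   -> Memory(P, dist, x)            -- write to the shared variable
     [] get_DTP_nest!nest    -> Memory(P, dist, position)     -- the constant is offered via get
     [] (dist > P) & internal.t2 -> Memory(P, dist, position) -- guard of the transition out of GoToNest
  within Memory(P, dist, position)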
A state machine memory process also models the evaluation of guards; this is captured by extra processes combined in the choice above. For instance, the transition from GoToNest to WaitForTransfer in Fig. 3 has a guard [dist > P]. As explained, this transition is modelled as part of the semantics of the state GoToNest by the process below, where we do not model the guard; since the transition is not associated with an event, we use a channel internal for its trigger.
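That display is not reproduced here; a CSPM sketch matching the description that follows, with GTN and WFT as the assumed identifiers of GoToNest and WaitForTransfer, t2 as the transition identifier, and a process name chosen by us, is:

-- Hypothetical sketch of the transition process; the guard [dist > P] is not modelled
-- here: it is evaluated by Memory_DTP, which controls the availability of internal.t2.
GoToNest_t2 =
  internal.t2 ->                           -- the trigger-less transition, identified by t2
  exit.GTN.GTN -> exited.GTN.GTN ->        -- GoToNest exits itself; no substates or exit action
  enter.GTN.WFT -> entered.GTN.WFT ->      -- request and await entry to WaitForTransfer
  GoToNest                                 -- back to the inactive behaviour of GoToNest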
This process waits for a synchronisation on the channel internal, which is later hidden. The parameter of internal, t2, is the identifier of the transition. Next, the process indicates that the state is being exited using the channel exit; the parameters are the identifier GTN of GoToNest and indicate that this state is exiting itself. Since there are no substates and no exit actions, the exiting terminates immediately, which is indicated by the channel exited, and the entering of the state WaitForTransfer with identifier WFT is requested using the channel enter. Finally, the transition waits on the channel entered for the state WaitForTransfer to finish entering and recurses to the process GoToNest, which models GoToNest when inactive.
As illustrated, a transition process does not evaluate any guard. This is done by the machine memory process. In our example, \(Memory\_DTP\) includes a choice \( [dist > P]~ \& ~ internal.t2 \rightarrow Memory\_DTP(\ldots )\). The guard is re-evaluated on every memory update, and the result restricts the occurrence of the transition in the semantics of GoToNest by making the communication on internal.t2 available or not.
The process S for a state gives an independent and compositional account of its behaviour. A state in a hierarchical machine can potentially model a significant component of the system [15]. Our approach facilitates the definition of a refinement technique for state machines like that in [67] for SysML, and enables compositional verification of states. For instance, if a state S is shown to be refined by a state T, any state machine that contains S is guaranteed to be refined by the same state machine with T substituted for S. Compositionality is particularly relevant for us in the long term, as we are interested in refinement techniques to support proof of correctness of software.
The models just described can be automatically generated by our RoboTool presented in Sect. 5.1. For that, RoboTool implements the transformation rules that formalise our semantics, which we describe next.
Formalisation
The semantics of RoboChart is formalised by a set of functions from RoboChart to CSP models. The main function \([\![\_]\!]_{\mathscr {M}}\) is for modules; it is shown in Fig. 13. It takes a value of the type Module (see Fig. 7) as an argument, used in a
clause to determine its robotic platform (rp), controllers (ctrls), connections (cons), the set of asynchronous connections not involving the platform (asyncs), and the set of events of asynchronous connections (evasyncs). The set asyncs of asynchronous connections is defined by a set comprehension as the set of all connections c of the module m, such that c is asynchronous (c.async) and none of its ends (c.from and c.to) is the platform. The set RoboticPlatform is the syntactic category of models of robotic platforms defined in the metamodel. The set asyncs is used to calculate the set evasyncs of identifiers for the events associated with asynchronous connections. This set includes the identifier of the source (c.efrom) and target (c.eto) events of the connections. These unique identifiers are determined by the function eventId.
Set comprehensions are often used in our semantics to define sets succinctly. The notation for set comprehensions we use is that adopted in Z. For example,
, where x is a variable name, T is a set, P is a predicate, and f is a function, denotes the set of values obtained by applying the function f to all values x in T such that P(x) is true. Both the predicate and function parts of a set comprehension can be omitted, in which case they are interpreted as the value true and the identity function, respectively.
The result of \([\![\_]\!]_{\mathscr {M}}\) is a value of type CSPProcess that represents a CSP process. This value defines a process via the constructor functions hiding, exception, generalisedParallel, and replicatedInterleave, as well as several functions described later: buffer, modMemory, and so on. Definitions such as that of \([\![\_]\!]_{\mathscr {M}}\) in Fig. 13 are hard to read due to the usage of CSP constructors as functions; for this reason, we present our semantic functions here in a more readable style as rules. For the definition of \([\![\_]\!]_{\mathscr {M}}\), for instance, we have Rule 1 instead.
In each rule definition, we identify the name of the function, its parameters, and return type in the header, and specify the function in the body of the rule. Our meta-notation is straightforward, and we underline its terms, which are used to specify the CSP processes (for example, the
clauses), and use standard mathematical italic font for CSP terms (for example, Skip).
The result of applying \([\![\_]\!]_{\mathscr {M}}\) to our running example, that is, the module Foraging in Fig. 5, is the process Foraging defined in the previous section. It is worth mentioning, however, that, for clarity, we have made some simplifications to the Foraging definition. First, since there are no asynchronous interactions between controllers in Foraging, the replicated interleaving over the connections in asyncs in Rule 1 is over the empty set, and omitted. Additionally, since none of the controllers in this example terminates, the interruption \(\varTheta _{\{end\}}\) and hiding \(\backslash \{end\}\) are redundant and also omitted. We explain these elements of Rule 1 in more detail next.
The semantic function buffer takes an asynchronous connection c as argument and defines a process that models a buffer of size one. It always accepts an input, possibly overriding a previously buffered value, and provides an output, if available. The input and output channels match those for c in
; they associate the buffer specifically with c. The rule that defines buffer and some other rules omitted here are in [96], and are implemented in RoboTool, presented in Sect. 5.1.
In Rule 1, we have one buffer for each asynchronous connection in asyncs. The buffers are combined in interleaving. If asyncs is empty, the interleaving reduces to Skip, the process that terminates immediately without any interaction. The interleaving is in parallel with the processes for the platform memory and for the controllers, synchronising on the events in evasyncs.
The function modMemory takes a module and defines a memory process for it, that is, a process to hold its platform's variables. The set
includes the channels used for interaction with this memory process, that is, those used for communication with the controllers.
The function
(Rule 2) takes a module, a sequence of controllers, and a set of connections, and defines the parallel process that composes the controller processes. Finally,
takes a module and determines the set of channels used internally by that module’s process.
The parallelisms and hiding of the CSP events identified by
in Rule 1 formalise the account of a module process in the previous section. To deal with termination, however, that process is composed with an exception operator that leads to Skip whenever the end event occurs. An exception \(P ~\varTheta _{A} Q\) is a process that behaves like P until an event in the set A occurs, when it behaves like Q.
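In the machine-readable CSP accepted by FDR, this exception (throw) operator is written [| A |>; the termination treatment just described can be sketched as follows, where the wrapper name is ours.

channel end                                    -- termination-signalling event

-- Behave as P until end occurs, then terminate; end itself is hidden.
Terminating(P) = (P [| {end} |> SKIP) \ {end}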
The parallel composition of controllers in a sequence
is calculated recursively as shown in Rule 2. It constructs the parallel process that composes the semantics of the first controller (
) renamed according to its connections, and the process for the remaining controllers (
) defined via a recursion. The renaming applied to a controller process is given by
, which defines the process for the controller
, and the renaming to capture the connections
. The parallel processes synchronise on the events in
, which contains the connection events common to the first and the remaining controllers after renaming. They are determined by the function
, which considers the renaming effected by
.
The definition of
, omitted here, uses the function
, which gives the semantics of a controller
. It is defined similarly to the semantics of a module as shown in Rule 3. The memory process
is composed in parallel with
, which is the parallel composition of the processes for
’s state machines, synchronising on the events in
. These include the writing events (set) for local variables (identified by
) and the writing events (\(set\_Ext\)) for the required variables (
). Similarly, the memory and controller processes also synchronise on the events in
, which include the writing events (set) for the local and required constants. The parallel composition does not include buffers because connections between machines are always synchronous. The set events of the local variables (collected in
) and local constants (in
), and the \(set\_Ext\) events of the required variables (in
) are then hidden, with termination accounted for as in the semantics of modules: by capturing the event end through an exception \(\varTheta \) and terminating.
The memory process for a controller is specified by Rule 4; it differs from that for a robotic platform in that it accepts \(set\_Ext\) events for each required variable, which are used to propagate updates to shared variables. Since a controller memory is never read directly, this process does not accept get events.
The function
defines a parameterised recursive process
that at each step reads a value through one of two types of channels: set for local variables and \(set\_Ext\) for required variables. The parameters
are the names
of both local variables (
) and required variables (
). For each of them, depending on their nature (local or required),
offers, in a choice, a different communication. For a variable v in
, it accepts a value through the channel
; here
specifies the qualified name of
. Next,
propagates sequentially (
) the received value x to all machines m in the controller that require v using the channel
. The sequence of such machines is determined by
; the order in this sequence is arbitrary. The name vid(v, m) is the qualified name of v in the machine
. Finally,
recurses with argument
as x (
). For a variable v in
, the memory process accepts through
a value being propagated from a platform, and propagates it to the machines, as for local variables. The controller memory is defined by an interleaving
of set events for constants of the controller, followed by the instantiation of
with arguments
that define the initial values
for the variables
in the memory.
The definition of
is similar to that of
in Rule 2, and omitted here. It uses the semantic function for a state machine, specified by Rule 5. The definition in Rule 5 follows the pattern in Rules 1 and 3: the semantics of the component (state machine, here) is composed in parallel with a memory process defined by
.
There are two main differences here. First, the semantics of a machine
is itself the parallel composition of an initialisation process
and the parallel composition of the state processes defined by
. The arguments here are a sequence containing the nodes of
that are a state, as opposed to a junction, and
itself, which defines how those states are used. Second, the memory process
accepts not only get, set, and \(set\_Ext\) events, but also triggers of the transitions of the machine to support the evaluation of guards. So, the synchronisation set includes get and set events for the machine variables (and constants), local and required, defined by
, and the transition trigger events, defined by
, including events of the special channel internal used for transitions without trigger. The memory process
is defined in Rule 12. We rename the transition trigger events (
) to remove the transition identifiers, and hide the events (in
) that use the channels internal, get, or set for local variables or constants.
The initialisation process
defines the semantics of the sequences of transitions from the initial junction to a state. We note that, from the initial junction, there can be several transitions to other junctions before a state is reached. The function
is specified in Rule 6.
The initialisation and states processes synchronise on the set
, which contains enter, entered, exit, and exited events. This synchronisation plays two roles. First, it allows
to request machine states to be entered. For that, it uses events
and
, where
is the machine identifier, and
is a state of the machine. Second, the synchronisation blocks the possibility of any other machine or states outside
requesting states of
to be entered or exited. For that,
includes events whose first parameter
is any identifier in SIDS, but not in the set
of identifiers for the states of
. SIDS includes the machine identifier itself, and that caters for requests from
. By including the identifiers of all other machines and of states not in
, which are not used in
, the synchronisation blocks them. They are allowed by
because the semantics of the states is agnostic to the context, including the machine and any sibling states, that can request their entering and exiting. The second parameter
of the events of
is an identifier in
, since it is the states of
that can be requested to be entered and exited.
Composition of states is defined by Rule 6; it takes a sequence of states
and their node container
. In Rule 5, the node container is the state machine. In Rule 8, where
is used to define the semantics of a composite state, the container is that state.
The definition of
follows a pattern similar to that used for controllers (Rule 2) . The restricted semantics (defined later by Rule 10) of the first state of
(
) is composed in parallel with the composition of the semantics of the rest of the sequence of states (
) calculated recursively, synchronising on the set
of their common flow events. This set is calculated similarly to the set
in Rule 2, and consists of the intersection of the set of events obtained by applying the function
to the first state, and the union of the sets obtained by applying
to the rest of the states.
The function
is given by Rule 7. It takes a state
and a node container
as parameters, and returns a value of type ChannelSet that represents a set of CSP events. The channel set returned by the function
contains the enter, entered, exit and exited events that represent requests or acknowledgements from
or to
. One of the two parameters
of each event is the identifier
of
, and the other is the identifier of one of the children of its container
, that is,
itself or one of its siblings. These events allow
to request any of its siblings to enter or exit, and any of the sibling states to request
to enter or exit.
Rule 6 uses a restricted semantics of states, which is also used to give the semantics of substates. We first present the semantics of (composite) states in Rule 8, and then the restricted semantics of a state in a container in Rule 10. The semantics of simple states is similar and simpler.
Rule 8 defines
as a parallelism of two processes. The first, Inactive, models the intrinsic behaviours of the state, that is, its actions and transitions. The second parallel process is
; it accounts for the semantics of the children of
in a similar fashion as the semantics of state machines, controllers, and modules handle the semantics of their components. The parallel processes synchronise on the events in the set
, defined in Rule 9 by the function
, with the flow events (those in
) hidden. The set
is used to restrict the flow and trigger events to ensure we have a compositional semantics for states as explained before.
The first parallel process Inactive specifies the initialisation of
using enter . The communication on enter is a request originating from any other state, with identifier o in
, for
to be entered. The definition of
ensures that the request can come from any state (or machine), but not
. A request from
itself is handled by the process for the transitions from
. After the request, the initialisation proceeds to the second process, Activating, which takes o as argument.
Activating(o) executes the entry-action process of
, that is,
, requests initialisation of the machine in
, if any, using
, indicates that
has finished entering using entered, and executes the during-action process
, while offering the possibility of interruption by a transition process. For a during action, its semantics (
) is composed with Stop. The transition processes are offered in external choice; there are two groups: (1) transitions that start in
and (2) transitions that can interrupt
if present in an ancestor state, including those without trigger, modelled using semantic internal events.
A transition t in the first group
is given semantics by
. Besides t, this function takes as arguments the source of t, which is
here, a boolean indicating whether that source is an initial node, which is false here, and processes
and
as parameters. These model the source state of t, when inactive, in the case of
, and after entering has been requested, in the case of
. If t exits the state,
proceeds to
. If t is a self-transition,
proceeds to
. In Rule 8, these arguments are the processes Inactive and Activating. The semantics of transitions is given by Rule 11.
The second group contains all possible transitions that could appear in any ancestor state. In the semantics, the set of all trigger events contains all pairs
, where e is any event, that is, an element of
, which includes the internal events, and
is any identifier from a set TIDS of valid transition identifiers. Those for the transitions of
and its substates, determined by the function
, cannot identify transitions of ancestors of
. So, we only need to consider transitions with identifiers in the set
. The name of the trigger event is that determined by
. Parameters y of typed events, that is, those for which
is not
, are modelled as parameters of the CSP channel
. After the trigger event, the state can be exited, as defined by the process
.
The process
waits for a synchronisation on the channel exit, requests and waits for the active substate, if any, to exit, using the process
, executes its exit-action process
, and indicates completion of the exiting process through exited. After executing the process
, Activating(o) recurses to Inactive so that
accepts new requests to be entered.
The synchronisation set between Inactive and the composition of substates is given by Rule 9. It specifies the events used by a parent state
to interact with its children. This set includes all the pairs of events e and transition identifiers t, that is, trigger events of the semantics, except those for transitions of the substates of
, defined by
. Additionally,
includes the flow events enter, entered, exit, and exited, where the first parameter
does not identify a child of
: it is in
, and the second parameter
identifies one of those substates. As illustrated in the previous section, the result is that requests to enter or exit a substate by a non-sibling state can come only from the parent state.
Rule 10 gives the restricted semantics of a state
, which captures its behaviour when used in a given node container (machine or parent state)
. It is discussed in Sect. 4.2 and used in Rule 6.
As explained previously, the semantics of a state is compositional and offers not only its own transitions, but also any transition that could possibly belong to one of its ancestors to account for the behaviour of composite states. When composing the process for a state
into the semantics of a container
, information about the identifiers of the transitions of the sibling states becomes available. Since exit from a state cannot be requested due to such a transition, we block the events in the process for
for transitions with those identifiers. This is achieved by synchronising that process with Skip on all possible transitions not from or in
, defined by
, except those corresponding to actual transitions in
: in the set
of transitions from
and its siblings.
The set of all transitions wholly contained within a state
is given by
; it determines the transitions between the substates of
(at any depth), but not the transitions starting at
itself. With the set
, on the other hand, we get exactly the transitions that start at
. These functions are used to define the set of identifiers
of the transitions from and within
, which is used to specify the set
. In the definition of
, the event of a transition
of
is obtained by
as defined in the metamodel.
The semantics of transitions is given by Rule 11. It takes a transition
, a node container
, a boolean value
, and processes
and
. This function is called recursively to cover all the transitions in a flow starting in a state or initial junction and finishing in a state. The parameters
and
record the starting point of the flow, that is, the source of the first transition, and whether or not it is an initial junction. If the source is a state,
and
model it:
, when it is inactive, and
, after entering has been requested.
Since it is called recursively, Rule 11 distinguishes the possible starting points
of a transition: a state, an initial junction, or just a junction (that is not initial). In each case, it produces a slightly different process.
If it is a state, the process consists of a communication given by the semantics
of the trigger (potentially non-existent, leading to an event that uses the channel internal), and a process that exits the state and moves on to the target of the transition. This process consists of a communication on exit to indicate that exiting of
has started,
to request and wait for the active child, if any, to exit, the process
for the exit action, an indication through exited that the exiting is finished, the process
for the transition action, and, finally,
, a process that captures the execution of the target. For example, if that target is a state, it is a request to enter that state. If it is a junction, we have a recursive call to deal with the transitions from that junction, and so
has the same parameters declared in Rule 11 itself.
The process when the source is an initial junction is slightly simpler, as there is no trigger, and no state to exit. In this case, the process accepts a synchronisation on the channel internal (corresponding to the empty trigger) with the identifier of the transition (
) as a parameter, followed by the execution of the transition action and
. Here, the second argument is the parent of
. This argument is used to request the target state to be entered, and if the source is an initial node, that request comes from the parent state (or machine). The semantics when the source is a regular junction is similar to that of initial junctions, except that the parameters
and
of the rule are passed on directly to
.
The parameters
and
are used by the function
to identify whether the sequence of transitions whose semantics is being defined starts in an initial state, and whether that sequence forms a cycle returning to the initial state. In the first case, the sequence of transitions does not lead to exiting a state (unlike regular transitions), and must lead to the execution of the remaining behaviours (for instance, executing the during action) of the state that contains the initial junction. In the second case, the sequence of transitions must lead to the behaviour specified by the second process
given as parameter. This is needed to guarantee that after a cycle, the source state is not waiting to be activated, as it has already been entered by the final transition of the cycle. The parameters
and
are the continuation processes used to build the mutually recursive processes of a state (Rule 8).
The memory process of a machine is defined by Rule 12. Like Rule 4, this rule defines a recursive parameterised process (
). At each step it offers, in choice,
and
events for each local variable of the state machine (
), events
,
and
for each required variable (
),
events for each local and required constant (
), and processes
offering the events of each of the transitions t of the machine. The choices involving events
and
result in recursive calls where the parameter for the variable is replaced with the received value, and all other choices lead to recursive calls with unchanged parameters.
Like in Rule 4, the values of the loose (local and required) constants \(c_1, c_2, \ldots \), that is, those whose values are not determined in the machine, are read via set channels. Here, however, as defined by the function
(omitted) they are read in an (arbitrary) order \(set\_c_1?c_1 \rightarrow set\_c_2?c_2 \rightarrow \ldots \). This defines a scope where the names \(c_1, c_2, \ldots \) are defined. In addition, in a let-expression, the constants whose values are defined in the machine are declared locally with their values. All constant names are used as arguments in \(Memory(\ldots ,c_1,c_2,\ldots )\) to define the values of the constants in the call to the process Memory. Unlike the platform and controller memory processes, here we need to record the values of the constants as parameters to make them available via get channels to the machine.
As previously said, state machines can also be used to define operations. In this case, the semantics of the operation is a parameterised process that takes the arguments of the operation as parameters. Apart from that, the process that defines the semantics is exactly that of a state machine presented above (Rule 5).
State machines may contain actions, which are composed of statements. Statements may contain expressions, and these expressions must be evaluated before the statement is executed. Since variables are recorded in a memory process, the values used in a statement must be first read from the memory, and a local context must be created where the statement can be executed. Rule 13 specifies this behaviour. It takes a statement
, and uses a function
to construct a process that reads a set of variables from the memory creating a local context, and executes the statement in that context. The set of variables is calculated using the function
, and the basic semantics of a statement is given by the function
. The semantics for statements, expressions, triggers, and so on is standard and omitted here, but is in [96].
Next, we present RoboTool, which implements the semantic rules above to calculate, fully automatically, a CSP model for a RoboChart diagram.