## Abstract

We enrich the operational semantics of a simple process calculus with ACP-style communication with a concurrency relation, so that for every process expression there exists an associated notion of *just path*. We then present sufficient conditions on the communication function and the syntax of process expressions that facilitate the formulation of justness on the level of labels rather than on individual transitions, taking a designated set of signals into account. This paves the way for the formulation of liveness properties under justness assumptions in the modal \(\mu \)-calculus and their verification on process specifications with the mCRL2 toolset.

## Introduction

In recent work, van Glabbeek and coauthors suggest that the liveness property for Peterson’s mutual exclusion algorithm [17], stating that any process that wants to enter the critical section will eventually enter it, cannot be analysed in CCS and related formalisms [4, 7]. This article is the result of our attempt to understand the formal underpinning of this suggestion and its ramifications. In particular, we address the question whether it also implies that the liveness property for Peterson’s algorithm cannot be convincingly established by means of a verification with the mCRL2 toolset [2], which has a process-algebra based specification formalism. Before we discuss our contributions, we briefly recap the arguments presented in [4].

### Recap of the arguments in [4]

The authors of [4] note that every process-algebraic specification of a distributed algorithm or system includes unrealistic finite or infinite computations in which some component never makes progress. Since such unrealistic computations typically violate liveness properties, their mere existence is in the way of a proof that all realistic computations do satisfy these properties. Unrealistic computations are then often excluded from consideration by imposing additional assumptions such as *progress* and *fairness* (see [8] for a comprehensive overview of such assumptions).

For the analysis of implementations of so-called *fair schedulers*—of which Peterson’s algorithm is an example—one should, however, take care that the fairness assumptions are not too strong, since fair schedulers are, themselves, intended to realise fairness in a system. Van Glabbeek and Höfner [7] have proposed *justness* as a criterion that is just strong enough to exclude unrealistic computations of fair schedulers, but not too strong:

> Once a transition is enabled that stems from a set of parallel components, one (or more) of these components eventually partake in a transition. [8]

It turns out, however, that the proposed notion of justness, when formalised in the context of CCS, still does not exclude certain unrealistic (or at least: unintended) computations of Peterson’s algorithm, and some of these computations have liveness violations. The culprit is that in a process-algebraic specification shared variables are components (processes) themselves, and hence reading the value of a shared variable is modelled as an interaction of the component that reads and the component that models the variable. Hence, an infinite computation in which one component continuously wants to assign a new value to the variable, but never actually does, can, nevertheless, be just because another component time and again reads the value of the variable. Yet, in the context of Peterson’s algorithm, reading the value of a variable should not be considered to really affect the component corresponding to that variable.

To counteract this problem, it is proposed in [4] to extend the syntax and semantics of CCS with a so-called *signal emission operator*, providing an alternative mechanism to communicate information about the state of a component (e.g., a variable) to other components. Although adding this operator does not increase the absolute expressiveness of the calculus, it does facilitate a refined definition of justness. In this refined definition, the reading of a signal is given special treatment by which computations such as the one described above are not considered just, and thus excluded from consideration. Assuming the refined definition of justness, it is proved in [4] that the specification of Peterson’s algorithm in CCS extended with the signal emission operator satisfies the liveness property.

### Our contributions

The signal emission operator is a non-standard process-algebraic construction. It is not part of the specification formalism of mCRL2, nor, to the best of our knowledge, of the specification formalism of any other process-algebra based automated verification tool. The question arises whether the addition of such an operator is essential. If so, a non-trivial overhaul of established verification tools is called for. Our first contribution is to show that it is not, if one is willing to pay a small price: there is no general formal definition of justness for the entire calculus; the formal definition must be tuned to the process expression under consideration. When aiming for an automated verification, this is indeed a negligible price, since one is just interested in the process expression that models the system under verification.

Semantically, the signal emission operator simply adds a self-loop labelled with a *signal* to the state representing the process expression to which it is applied. A signal is just a special type of label, so the self-loop can easily be specified by other means (e.g., using recursion) if a particular subset of the set of labels is designated as signals. Because the choice of an appropriate set of signals depends on how those labels are used in the process expression at hand, the formal definition of justness needs to be specific for a particular process expression.

In the absence of tools supporting the verification under justness of specifications such as Peterson’s algorithm, establishing that a specification meets a property remains a manual activity. This is problematic, as the complexity of a typical specification easily leads to cases being missed in the analysis. Therefore, to conduct a convincing automated verification of a property of an algorithm, we not only need to specify the algorithm in a process-algebra based formalism; we also need to formulate the property in a suitable modal logic. Moreover, in the verification of the property, justness has to be taken into account. It is unclear, however, whether this can be achieved without changing the verification algorithms that are used to evaluate the validity of a modal-logic formula with respect to the labelled transition system associated with the process expression. A complication is, for instance, that the definition of justness refers to a notion of *component*, which naturally exists at the level of the syntactic representation of the system (i.e., the process expression), but not at the labelled transition-system level.

Our second contribution is derived from the observation that with the ACP-style communication mechanism [1] of mCRL2, which is more general than the communication mechanism of CCS, Peterson’s algorithm can be specified in such a way that justness can be defined referring to labels rather than to components. The idea is to achieve a partitioning of the set of labels that reflects the component structure of the process expression. It is then possible to reformulate justness referring to labels, rather than to components. We generalise the observation regarding Peterson’s algorithm and formulate general syntactic conditions that ensure that such a partitioning is possible.

Our third contribution is a template modal \(\mu \)-calculus formula that expresses a typical liveness property, asserting that on all just paths, an action, say *a*, is eventually followed by another, say *b*. This template formula can easily be instantiated by a user wishing to carry out a liveness verification of an algorithm, and only requires information concerning which actions are designated as signals. As a result, standard, off-the-shelf tooling such as mCRL2 can be used to automatically verify liveness properties of algorithms such as Peterson’s. In case such verifications fail, evidence [3, 18] can be provided, helping the user to pinpoint the root cause.

This paper is organised as follows. In Sect. 2, following [9], we take the notion of labelled transition system with concurrency (LTSC) as technical starting point, and present a definition of justness for it. In Sect. 3 we present a process calculus that is very similar to CCS, except that it has the more general ACP-style communication mechanism. Inspired by the LTSC-semantics that van Glabbeek gives for CCS and its extension with signals in [9], we propose an LTSC-semantics for the process calculus. Then, in Sect. 4 we recapitulate in more detail the argument presented in [4] that Peterson’s algorithm cannot be rendered in the process calculus in such a way that all unrealistic paths are excluded by assuming justness. In Sect. 5 we then include a semantic treatment of special labels that take the role of signals. In Sect. 6, we define when an LTSC admits a label-based treatment of justness, proposing a subclass of LTSCs that have a *concurrency-consistent labelling*. In Sect. 7, we present sufficient conditions on process expressions ensuring that the associated LTSC has a concurrency-consistent labelling. Process expressions satisfying these syntactic conditions are amenable to verifications that take justness into account. In Sect. 8 we formalise a general liveness property under justness assumptions for an LTSC that has a concurrency-consistent labelling. In Sect. 9 we comment on the actual verification of the liveness property for Peterson’s algorithm with the mCRL2 toolset. In Sect. 10 we present some conclusions.

## Justness

We recap the definition of labelled transition system with concurrency and the associated notion of just path from [9].

We presuppose disjoint sets \(\mathcal {A}\) and \(\mathcal {S}\) of *actions* and *signals*, respectively, and let \(\mathcal {L}=\mathcal {A}\cup \mathcal {S}\); elements of \(\mathcal {L}\) are generally referred to as *labels*. A *labelled transition system* (LTS) is a tuple \(( St , Tr , src , target ,{\ell })\) with \( St \) and \( Tr \) sets of *states* and *transitions*, respectively, \( src , target : Tr \rightarrow St \) and \({\ell }: Tr \rightarrow \mathcal {L}\).

We call a transition \(t\in Tr \) a *signal transition* if its label is a signal and it does not change state, i.e., if \({\ell }(t)\in \mathcal {S}\) and \( src (t)= target (t)\); otherwise, *t* is called an *action transition*.
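To make the distinction concrete, here is a minimal Python sketch; the triple encoding of transitions and the label names are ours, not part of the formalism:

```python
# Transitions are (source, label, target) triples; SIGNALS plays the role of
# the designated set S of signal labels. All names here are illustrative.
SIGNALS = {"free"}

def is_signal_transition(src, label, tgt):
    """A signal transition carries a signal label AND leaves the state unchanged."""
    return label in SIGNALS and src == tgt

assert is_signal_transition("s", "free", "s")
assert not is_signal_transition("s", "free", "t")    # changes state: action transition
assert not is_signal_transition("s", "work", "s")    # action label: action transition
```

The second assertion illustrates the point of Remark 1 below: a transition that is merely labelled with a signal, but changes state, counts as an action transition.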

### Remark 1

Van Glabbeek mentions in [9] that signal transitions are not supposed to change state, but does not include it as an explicit requirement. Rather, in his work, it is a consequence of the operational semantics of the process calculi under consideration that transitions labelled with signals indeed never change state. The syntax and operational semantics of our process calculus will, by design, admit process specifications that give rise to transitions labelled with signals that do change state. We prefer that such transitions are not treated as signal transitions in the notion of justness. To this end it is convenient to include the requirement explicitly.

Signal transitions are disregarded in the definition of the notion of path. A *path* in a transition system \(( St , Tr , src , target ,{\ell })\) is a finite or infinite alternating sequence \(s_0t_1s_1t_2s_2\cdots \) of states and action transitions, starting with a state and if it is finite also ending with a state, satisfying \( src (t_i)=s_{i-1}\) and \( target (t_i) = s_{i}\) for all relevant *i*. We say that a state \(s'\) is *reachable* from a state *s* if there exists a path that starts with *s* and ends with \(s'\). We say that a transition *t* is *reachable* from a state *s* if there exists a state \(s'\) that is reachable from *s* and \( src (t)=s'\).
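The restriction of paths, and hence of reachability, to action transitions can be sketched as follows (transitions are again encoded as triples; labels are illustrative):

```python
from collections import deque

SIGNALS = {"sig"}   # the designated set S (illustrative)

def is_action(tr):
    """Signal self-loops are disregarded; everything else is an action transition."""
    src, lbl, tgt = tr
    return not (lbl in SIGNALS and src == tgt)

def reachable(transitions, start):
    """States reachable from `start` via paths of action transitions."""
    seen, todo = {start}, deque([start])
    while todo:
        s = todo.popleft()
        for src, _lbl, tgt in filter(is_action, transitions):
            if src == s and tgt not in seen:
                seen.add(tgt)
                todo.append(tgt)
    return seen

trs = [("s0", "a", "s1"), ("s1", "sig", "s1"), ("s1", "b", "s2")]
assert reachable(trs, "s0") == {"s0", "s1", "s2"}
```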

Labelled transition systems abstract entirely from the notion of component. For the definition of justness, the notion of component is relevant, at least to the extent that it should be possible to determine that, whenever some transition is enabled, eventually the component (or set of components) from which the transition stems, makes progress. For the formalisation of justness, it turns out to be sufficient to consider labelled transition systems enriched with a concurrency relation on transitions [9]. We first give the formal definition of labelled transition system with concurrency; the requirements on the concurrency relation are explained after the definition.

### Definition 2

A *labelled transition system with concurrency* (LTSC) is a tuple \(( St , Tr , src , target ,{\ell },\mathbin {\smile })\) consisting of an LTS \(( St , Tr , src , target ,{\ell })\) and a *concurrency relation* \({\mathbin {\smile }}\subseteq Tr \times Tr \) such that

1. \(\mathbin {\smile }\) is irreflexive on action transitions (i.e., if *t* is an action transition, then \(t\not \smile t\)), and
2. if *t* is an action transition and \(\pi \) is a path from \( src (t)\) to \(s\in St \) such that \(t\mathbin {\smile }v\) for all transitions *v* occurring on \(\pi \), then there is an action transition *u* such that \( src (u)=s\), \({\ell }(u)={\ell }(t)\) and \(t\not \smile u\).

Intuitively, transitions are *concurrent* if they stem from different (sets of) components, and they *interfere* if they have a component in common. It is then natural to require that the concurrency relation on transitions is irreflexive: a transition cannot be concurrent with itself. Furthermore, if some component (or set of components) can perform some activity, represented by a transition *t* in the labelled transition system, then after executing transitions concurrent with *t*—which, by assumption, then stem from different components than *t*—it should still be possible for the component to perform that same activity. The activity can be represented by a different transition *u* in the labelled transition system, but this transition should not be concurrent with *t* (it should interfere with *t*, i.e., \(t\not \smile u\)) and should have the same label.

As explained in [9], justness is a *completeness criterion*: it is used to specify which paths should be considered representing a complete computation of the system. For completeness one wants to distinguish between so-called *blocking* actions and *non-blocking* actions. Intuitively, a blocking action is not entirely under the control of the system that is being specified; it may depend on interaction with the environment. A non-blocking action is thought to be completely under control of the system. A complete computation may end in a state in which only blocking actions are enabled, but not in a state in which non-blocking actions are enabled. The definition of justness takes a set of blocking actions as parameter.

### Definition 3

Let \(\mathcal {B}\subseteq \mathcal {A}\) be a set of *blocking actions*. A path \(\pi \) in an LTSC is \(\mathcal {B}\)-*just* if for every action transition *t* with \({\ell }(t)\notin \mathcal {B}\) and \( src (t)\in \pi \), a transition *u* occurs in the suffix of \(\pi \) starting at \( src (t)\) such that \(t\not \smile u\).

The example below illustrates the concept of justness.

### Example 4

Consider a situation in which Alice drinks coffee and eats a croissant in a small cafe, and Bob is engaged in a series of phone calls. The situation can be modelled by the following LTSC:

Suppose that all labels in the above LTSC are non-blocking actions. In case all actions only interfere with themselves, the infinite path consisting of only \( phone \) transitions from state \(s_0\) is not \(\emptyset \)-just, since the \( coffee \) transition is enabled in \(s_0\) but no interfering transition is ever taken on this path. In case the \( phone \) transitions in \(s_0\), \(s_1\) and \(s_2\) *do* interfere with the \( coffee \) transition and the \( croissant \) transition—for instance because Bob is also the waiter who serves Alice, preferring to make phone calls instead of taking her orders—then the same infinite path *is* \(\emptyset \)-just.
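Assuming the elided figure has \( coffee \) leading from \(s_0\) to \(s_1\), \( croissant \) from \(s_1\) to \(s_2\), and \( phone \) self-loops at \(s_0\), \(s_1\) and \(s_2\), the two scenarios can be replayed in a small Python sketch. The justness check is simplified to purely cyclic paths, on which every listed transition recurs infinitely often:

```python
TRANS = {  # id: (source, label, target) — assumed rendering of the example LTSC
    "t_cof": ("s0", "coffee",    "s1"),
    "t_cro": ("s1", "croissant", "s2"),
    "t_ph0": ("s0", "phone", "s0"),
    "t_ph1": ("s1", "phone", "s1"),
    "t_ph2": ("s2", "phone", "s2"),
}

def just(cycle, interferes, blocking=frozenset()):
    """B-justness restricted to a purely cyclic path: every enabled
    non-blocking transition must be interfered with by some transition
    on the cycle (which then occurs in every suffix)."""
    states_on_path = {TRANS[i][0] for i in cycle}
    for t, (src, lbl, _) in TRANS.items():
        if lbl in blocking or src not in states_on_path:
            continue
        if not any(interferes(t, u) for u in cycle):
            return False
    return True

phones_forever = ["t_ph0"]               # Bob phones at s0, forever
self_only = lambda t, u: t == u          # every action interferes only with itself
assert not just(phones_forever, self_only)   # coffee stays enabled: not just

# Bob doubles as the waiter: phone calls interfere with coffee and croissant.
waiter = lambda t, u: t == u or "ph" in t or "ph" in u
assert just(phones_forever, waiter)          # now the same path is just
```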

## Process calculus

In [4], the authors claim that information exchanged through signals is essential for the characterisation of just paths in the context of Peterson’s algorithm; without signals, paths representing unrealistic executions of Peterson’s algorithm are considered just. In [4], justifications for the claim are presented in the context of CCS. First, a version of CCS without signals is considered, Peterson’s algorithm is modelled, and then it is shown that justness does not exclude all unrealistic computations. Then, Peterson’s algorithm is modelled in a variant of CCS with signals, and it is shown that the corresponding notion of justness works well for Peterson’s algorithm. We retrace their steps and in this section introduce a very simple process calculus to specify LTSCs that, as we show in the next section, indeed illustrates the phenomenon observed by the authors. In Sect. 5, we shall also introduce signals, but without changing the syntax of the calculus.

A special feature of our calculus, compared to CCS as considered in [4, 9], is that it includes an ACP-style communication mechanism [1]: we presuppose a binary *communication function* on the set of labels \(\mathcal {L}\), i.e., a partial function \(\gamma :\mathcal {L}\times \mathcal {L}\rightharpoonup \mathcal {L}\) that is

- *commutative*: \(\gamma (\lambda _1,\lambda _2)\) is defined if, and only if, \(\gamma (\lambda _2,\lambda _1)\) is defined, and if both are defined, then we have \(\gamma (\lambda _1,\lambda _2)=\gamma (\lambda _2,\lambda _1)\); and
- *associative*: \(\gamma (\lambda _1,\gamma (\lambda _2,\lambda _3))\) is defined if, and only if, \(\gamma (\gamma (\lambda _1,\lambda _2),\lambda _3)\) is defined, and if both are defined, then we have \(\gamma (\lambda _1,\gamma (\lambda _2,\lambda _3))=\gamma (\gamma (\lambda _1,\lambda _2),\lambda _3)\).

This communication function defines which actions may communicate, and what is the result of that communication. Thus, communication transitions are not all labelled with the same action, as they are in CCS (in CCS all transitions that are the result of communications are labelled with \(\tau \)). The advantage is that transitions that involve multiple components can be labelled such that from the label it can be determined which components are involved.
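When the communication function is given by a finite table, the two requirements on \(\gamma \) lend themselves to a direct check. The sketch below (with entries borrowed from Example 5, but otherwise illustrative) encodes undefinedness as `None`:

```python
# A partial communication function as a dict on ordered label pairs.
GAMMA = {("coffee_s", "coffee_r"): "coffee",
         ("coffee_r", "coffee_s"): "coffee"}

def gamma(a, b):
    return GAMMA.get((a, b))            # None encodes "undefined"

def commutative(labels):
    return all(gamma(a, b) == gamma(b, a) for a in labels for b in labels)

def associative(labels):
    for a in labels:
        for b in labels:
            for c in labels:
                bc, ab = gamma(b, c), gamma(a, b)
                left = gamma(a, bc) if bc is not None else None
                right = gamma(ab, c) if ab is not None else None
                if left != right:       # catches defined-vs-undefined mismatches too
                    return False
    return True

LABELS = {"coffee_s", "coffee_r", "coffee"}
assert commutative(LABELS) and associative(LABELS)
```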

We proceed to introduce the syntax of our process calculus and associate an LTSC with it. The LTSC we get is in line with the LTSC that van Glabbeek associates with CCS in [9], though our way of defining it deviates somewhat from van Glabbeek’s in [9], as we shall explain below. For now, we presuppose that the set of signals is empty, i.e., \(\mathcal {L}=\mathcal {A}\). (In Sect. 5, we shall consider the general case in which the set of signals \(\mathcal {S}\) is not empty and adapt the structural operational semantics accordingly.) For the purpose of recursion, we also presuppose a set \(\mathcal {I}\) of *agent identifiers*. The set \(\mathcal {P}\) of *process expressions* is generated by the following grammar (with *A* ranging over \(\mathcal {I}\), \(\lambda \) ranging over \(\mathcal {L}\), and *H* ranging over subsets of \(\mathcal {L}\)):

\(P \mathrel {{:}{:}{=}} \mathbf {0} \mid {\lambda }.P \mid P\mathbin {+}P \mid P\mathbin {\Vert }P \mid \partial _{H}(P) \mid A\)

The constructs \(\mathbf {0}\), \({\lambda }.\) and \(\mathbin {+}\) are familiar from basic CCS, respectively denoting inaction, action prefix and non-deterministic choice. The construct \(\mathbin {\Vert }\) stands for ACP-style parallel composition. It represents the arbitrary interleaving of the behaviours of its components, and additionally allows its components to execute communication steps in accordance with the communication function \(\gamma \): if the left component of the parallel composition can execute label \(\lambda _1\) and the right component can execute label \(\lambda _2\) and \(\gamma (\lambda _1,\lambda _2)\) is defined, then the parallel composition can execute \(\gamma (\lambda _1,\lambda _2)\). The process calculus includes the encapsulation operator \(\partial _{H}\) (similar to the restriction operator in CCS) by which the execution of certain labels can be blocked, and thus communication between components can be enforced. The behaviour of the agent identifiers is defined through a *recursive specification* *E*, which is a set of defining equations \(A\mathrel {\overset{\text {def}}{=}}P\) with *P* a process expression, including precisely one such equation for every \(A\in \mathcal {I}\).

We now proceed to associate an LTSC with our process calculus. The set of states \( St \) of this LTSC is the set of process expressions \(\mathcal {P}\), as usual. To define a suitable set \( Tr \) of transitions, as in [9], we take the collection of derivations in a formal proof system based on the structural operational semantics of the process calculus. We deviate from [9] in how we define the concurrency relation. In [9], van Glabbeek inductively associates a set of *synchrons* with a derivation, which can be thought of as extracting from the derivation all the required component information necessary to define a concurrency relation. We prefer to annotate the transition relation defined by the structural operational semantics with component information directly.

First, we associate with a process expression *P* its *static component architecture*, which is determined by the top-level occurrences of \(\mathbin {\Vert }\) and \(\partial _{H}\) in *P*. Let \(\mathcal {C}=\{\textsc {l},\textsc {r}\}\); we shall refer to a component in a process expression *P* as a sequence in \(\mathcal {C}^{*}\) (the empty sequence will be denoted by \(\epsilon \)). We recursively associate with every process expression *P* a set of *components*\(\mathcal {C}(P)\subseteq \mathcal {C}^{*}\) as follows:

- if \(P=\mathbf {0}\), \(P={\lambda }.P'\) (for some \(\lambda \in \mathcal {L}\)), \(P=P_1\mathbin {+}P_2\), or \(P=A\) (for some \(A\in \mathcal {I}\)), then \(\mathcal {C}(P)=\{\epsilon \}\);
- \(\mathcal {C}(P_1\mathbin {\Vert }P_2)=\textsc {l}\mathbin {\vartriangleright }\mathcal {C}(P_1)\cup \textsc {r}\mathbin {\vartriangleright }\mathcal {C}(P_2)\), and \(\mathcal {C}(\partial _{H}(P))=\mathcal {C}(P)\).

(If \(X\subseteq \mathcal {C}^{*}\), then \(\textsc {l}\mathbin {\vartriangleright }{}X=\{\textsc {l}\sigma \mid \sigma \in X\}\) and \(\textsc {r}\mathbin {\vartriangleright }{}X=\{\textsc {r}\sigma \mid \sigma \in X\}\).) Note that every \(\sigma \in \mathcal {C}(P)\) uniquely identifies a component of *P*: we denote this component by \({P}\mid _{\sigma }\).
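A possible rendering of \(\mathcal {C}(P)\) in Python, with process expressions as nested tuples and components as strings over \(\{\textsc {l},\textsc {r}\}\) (the empty string standing for \(\epsilon \)). The bracketing of the Peterson expression in the usage example is an assumption of ours, chosen to be consistent with the component names \(\textsc {l}\), \(\textsc {r}\textsc {l}\) and \(\textsc {r}\textsc {r}\textsc {l}\) used in Sect. 4:

```python
def components(P):
    """C(P): the static component architecture of expression P."""
    kind = P[0]
    if kind == "par":                          # split into l... and r... components
        _, P1, P2 = P
        return ({"l" + s for s in components(P1)}
                | {"r" + s for s in components(P2)})
    if kind == "enc":                          # encapsulation is transparent
        return components(P[2])
    return {""}                                # 0, prefix, +, and identifiers

# Assumed (right-nested) bracketing of the Peterson expression:
Pet = ("enc", "H", ("par", ("id", "procA"),
                    ("par", ("id", "procB"),
                     ("par", ("id", "RA"),
                      ("par", ("id", "RB"), ("id", "T"))))))
assert components(Pet) == {"l", "rl", "rrl", "rrrl", "rrrr"}
```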

We keep track of which components contribute to a transition in the structural operational semantics for our process calculus, presented in Table 1. It defines a transition relation \(\mathrel {\overset{\lambda ,\alpha }{\longrightarrow }}\) on process expressions, which is not only endowed with a label \(\lambda \in \mathcal {L}\), but also with a set \(\alpha \subseteq \mathcal {C}^{*}\) of components.

The rule \(\textsc {(Pref)}\) expresses that a prefix \({\lambda }.P\) can do a \(\lambda \)-labelled transition to *P*; furthermore, \({\lambda }.P\) is by itself a component. So the set of components associated with the transition is \(\{\epsilon \}\). The rules \((\textsc {Sum}\text {-}\textsc {l})^{}\) and \((\textsc {Sum}\text {-}\textsc {r})^{}\) express that a non-deterministic choice \(P\mathbin {+}Q\) can execute a \(\lambda \)-labelled transition from *P* or from *Q*. Also \(P\mathbin {+}Q\) is by itself a component, denoted by \(\epsilon \). So the set of components associated with the transition is \(\{\epsilon \}\).

The rules \((\textsc {Par}\text {-}\textsc {l})\), \((\textsc {Par}\text {-}\textsc {r})\) and \(\textsc {(Comm)}\) express, respectively, that a parallel composition \(P\mathbin {\Vert }Q\) can execute a transition of the components of *P*, a transition of the components of *Q*, or execute a transition in which both components of *P* and *Q* are involved. In the latter case, the communication function \(\gamma \) must be defined on the labels of the transitions of *P* and *Q* and the combined transition is labelled with the result of applying the communication function to these labels. In the case of an application of \((\textsc {Par}\text {-}\textsc {l})\) or \((\textsc {Par}\text {-}\textsc {r})\), the sets of components involved in the resulting transitions need to be updated by prefixing all components suitably with \(\textsc {l}\) or \(\textsc {r}\), respectively. In the case of an application of \(\textsc {(Comm)}\), the involved components of *P* are prefixed with \(\textsc {l}{}\), and the involved components of *Q* are prefixed with \(\textsc {r}{}\). Finally, the rule \(\textsc {(Enc)}\) expresses that \(\partial _{H}\) blocks transitions labelled with \(\lambda \in H\); the set of components is simply inherited.
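The rules of Table 1 can be prototyped as a recursive function computing, for a process expression, its outgoing transitions together with their component annotations. The encoding below is ours (guarded recursion is assumed, so identifiers are simply unfolded), and the Alice/Bob defining equations in the usage example are assumptions consistent with the labels of Example 5:

```python
def transitions(P, gamma, defs):
    """All derivable (label, components, target) triples for expression P.

    P is a nested tuple: ("nil",), ("pref", l, P'), ("sum", P1, P2),
    ("par", P1, P2), ("enc", H, P'), ("id", "A"). gamma is a dict on
    ordered label pairs; defs maps identifiers to defining expressions.
    """
    kind = P[0]
    if kind == "nil":
        return set()
    if kind == "pref":                                    # (Pref)
        _, lbl, Q = P
        return {(lbl, frozenset({""}), Q)}
    if kind == "sum":                                     # (Sum-L), (Sum-R)
        _, P1, P2 = P
        return {(l, frozenset({""}), Q)                   # the choice is one component
                for l, _, Q in transitions(P1, gamma, defs)
                             | transitions(P2, gamma, defs)}
    if kind == "par":
        _, P1, P2 = P
        ts1, ts2 = transitions(P1, gamma, defs), transitions(P2, gamma, defs)
        out = set()
        for l, a, Q in ts1:                               # (Par-L): prefix with l
            out.add((l, frozenset("l" + c for c in a), ("par", Q, P2)))
        for l, a, Q in ts2:                               # (Par-R): prefix with r
            out.add((l, frozenset("r" + c for c in a), ("par", P1, Q)))
        for l1, a1, Q1 in ts1:                            # (Comm): gamma defined
            for l2, a2, Q2 in ts2:
                if (l1, l2) in gamma:
                    out.add((gamma[(l1, l2)],
                             frozenset("l" + c for c in a1)
                             | frozenset("r" + c for c in a2),
                             ("par", Q1, Q2)))
        return out
    if kind == "enc":                                     # (Enc): block labels in H
        _, H, Q = P
        return {(l, a, ("enc", H, R))
                for l, a, R in transitions(Q, gamma, defs) if l not in H}
    return transitions(defs[P[1]], gamma, defs)           # identifier: unfold once

# Usage (assumed defining equations, in the spirit of Example 5):
g = {("coffee_s", "coffee_r"): "coffee", ("coffee_r", "coffee_s"): "coffee"}
alice = ("pref", "coffee_s", ("pref", "croissant_s", ("nil",)))
defs = {"Bob": ("pref", "coffee_r", ("id", "Bob"))}
ts = transitions(("par", alice, ("id", "Bob")), g, defs)
summary = {(l, tuple(sorted(a))) for l, a, _ in ts}
assert ("coffee", ("l", "r")) in summary       # communication involves both sides
assert ("coffee_s", ("l",)) in summary         # Alice moving alone
```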

The example below illustrates the operational rules, and how they can be used to construct derivations.

### Example 5

The recursive specification given below models the second situation of Example 4, i.e., the situation in which Alice orders coffee and a croissant, and Bob is her waiter.

Assume that \(\gamma \) is a communication function satisfying

Then we can derive the following transition with conclusion \( Bob \mathrel {\overset{ coffee _r,\{\epsilon \}}{\longrightarrow }} Bob \), with source process \( Bob \), target process \( Bob \), and label \( coffee _r\):

In a similar vein, we can derive a transition that has as conclusion \( Alice \mathrel {\overset{ coffee _s,\{\epsilon \}}{\longrightarrow }} { croissant _s}.\mathbf {0}\), and which allows us to derive a transition witnessing the communication that can take place between Alice and Bob:

The above derivation shows that both Alice and Bob contribute equally to the transition that results in Alice drinking a cup of coffee.

Now we let \( Tr \) be the set of all derivations that can be constructed using the structural operational rules in Table 1, and we define \( src \), \( target \) and \({\ell }\) by stipulating that if \(t\in Tr \) is a derivation and \(P\mathrel {\overset{\lambda ,\alpha }{\longrightarrow }} P'\) is its conclusion, then \( src (t)=P\), \( target (t)=P'\) and \({\ell }(t)=\lambda \). Furthermore, we write \( comp (t)\) to denote the set of components \(\alpha \) contributing to *t*.

It remains to define the concurrency relation \(\mathbin {\smile }\). We define that transitions *t* and *u* are concurrent (notation: \(t\mathbin {\smile }u\)) if \( comp (t)\cap comp (u)=\emptyset \), i.e., if none of the components contributing to *t* are contributing to *u*.
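In code, with component sets represented as sets of strings over \(\{\textsc {l},\textsc {r}\}\), the relation is simply disjointness (the example sets are illustrative):

```python
def concurrent(comp_t, comp_u):
    """t and u are concurrent iff no component contributes to both."""
    return comp_t.isdisjoint(comp_u)

assert concurrent({"l"}, {"rl"})                    # distinct components
assert not concurrent({"l", "rrl"}, {"rl", "rrl"})  # rrl contributes to both
assert not concurrent({"l"}, {"l"})                 # never concurrent with itself
```

The third assertion previews the irreflexivity argument in Proposition 7: since every transition has a non-empty component set, no transition is concurrent with itself.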

### Lemma 6

For all transitions *t* and *v*, if \( src (t)= src (v)\) and \(t\mathbin {\smile }v\), then there exists a transition *u* with \( src (u)= target (v)\), \({\ell }(u)={\ell }(t)\) and \( comp (u)= comp (t)\).

### Proof

By induction on *v*; see Lemma 44 in “Appendix A” for details. \(\square \)

### Proposition 7

The structure \(( St , Tr , src , target ,{\ell },\mathbin {\smile })\), with components and \(\mathbin {\smile }\) as defined above, is an LTSC.

### Proof

From the rules in Table 1 it is immediate that whenever \(P\mathrel {\overset{\lambda ,\alpha }{\longrightarrow }}P'\), then \(\alpha \ne \emptyset \). So for every \(t\in Tr \) we have that \( comp (t)\cap comp (t)=\alpha \ne \emptyset \). It follows that \(t\not \smile t\), and hence \(\mathbin {\smile }\) is irreflexive. That \(\mathbin {\smile }\) also satisfies the second requirement of Definition 2 follows with a straightforward induction on the length of \(\pi \), using Lemma 6. \(\square \)

## Modelling Peterson’s algorithm

Peterson’s algorithm for mutual exclusion provides a classical solution to enable two processes to use a shared resource in a mutually exclusive manner. In the algorithm, the shared resource is referred to as the *critical section*. The algorithm ensures that at all times only one of the two processes is in the critical section. A desired liveness property of a mutual exclusion algorithm is that whenever one of the two processes wishes to enter the critical section, then it will eventually do so. In this section, we shall discuss how Peterson’s algorithm can be modelled in the process calculus introduced in the previous section. Then, we shall recap the argument, already presented in [4], that the notion of justness associated with the process calculus is too weak to exclude all unrealistic paths violating the liveness property. In the next section, we shall refine the definition of justness in order to facilitate an exhaustive verification under this notion of justness of the aforementioned liveness property using the mCRL2 toolset.

Peterson’s algorithm is shown in Fig. 1. Processes *A* and *B* communicate via shared variables. By setting Boolean variables \( readyA \) and \( readyB \), respectively, they signal to the other process their wish to enter the critical section. In addition, a shared variable \( turn \) is used to keep track of whose turn it is to enter the critical section next; the idea is that a process, before entering its critical section, courteously always first grants access to the other process. This way of using \( turn \) is essential for ensuring both deadlock freedom and mutual exclusion.

In a message-passing process calculus, global variables are modelled as separate processes with which other processes can interact. Processes modelling a variable keep track of the value of the variable and can communicate with other processes in read and write operations. In our model, to read a variable, the variable that is being read performs an action \( s\_rd _{ var }^{ val }\) and the process that reads the variable performs an action \( r\_rd _{ var }^{ val }\). Together they communicate to a transition labelled with \( rd _{ var }^{ val }\). A similar communication, labelled with \( asgn _{ var }^{ val }\), is defined to write to a variable. To cover all the interactions with variables in Peterson’s algorithm we define the communication function \(\gamma \) in such a way that it satisfies the following equations and is undefined otherwise:

We model the behaviour of the three variables \( readyA \), \( readyB \) and \( turn \) with process identifiers \( RA ^b\), \( RB ^b\) and \( T ^t\) (with the superscripts referring to the current value of the variable), defined by the following equations:

Our specification uses labels \(\mathbf {noncritA}\), \(\mathbf {noncritB}\), \(\mathbf {critA}\), \(\mathbf {critB}\), to represent exiting the noncritical and critical sections, respectively. Process identifiers \( procA \) and \( procB \) model the behaviour of processes \( A \) and \( B \). They are defined by the following equations [using the abbreviation \({(\lambda _1+\lambda _2)}.P\) for \({\lambda _1}.P\mathbin {+}{\lambda _2}.P\)]:

Together, the process definitions form the recursive specification \(E_{ Pet }\) consisting of eight process identifiers: \( procA \), \( procB \), \( RA ^{ true }\), \( RA ^{ false }\), \( RB ^{ true }\), \( RB ^{ false }\), \( T ^{A}\) and \( T ^{B}\). With the set *H* defined by

we can now specify Peterson’s algorithm with the process expression

### Remark 8

Our specification of Peterson’s algorithm is almost identical to the CCS model presented in [4]. The difference is in how communication is defined. CCS presupposes a standard communication function by which an action *a* can communicate with its co-named action \({\bar{a}}\), resulting in a special action \(\tau \). In our setting, the exact same behaviour as defined by the specification in [4] would be obtained by using, instead of the communication function \(\gamma \) defined above, a communication function \(\gamma _{\text {CCS}}\) defined by

To get an appropriate notion of just path starting from \( Pet \), we define the set of blocking actions.

Let \(\pi \) denote the unique path starting with \( Pet \) such that if all states are omitted from it then we obtain the following sequence of labels:

The path \(\pi \) violates the liveness criterion as process *A* wants to enter the critical section but is never able to, waiting to write to the variable \( readyA \). It is deemed unrealistic, as process *B* reading \( readyA \) intuitively cannot prevent process *A* from writing it. To assess whether \(\pi \) is just we need to examine whether for every action transition *t* with \({\ell }(t)\notin \mathcal {B}\) and \( src (t) \in \pi \), a transition *u* occurs in the suffix of \(\pi \) starting at \( src (t)\) such that \(t\not \smile u\). The only component of interest here is *procA* as all other components partake in infinitely many transitions. Let *t* denote some transition labelled with \( asgn _{ RA }^{true}\), with \( src (t) \in \pi \). There always exists a transition *u* labelled with \( rd _{ RA }^{false}\) in the suffix of \(\pi \) starting at \( src (t)\). The components partaking in *t* are \(\textsc {l}\) and \(\textsc {r}\textsc {r}\textsc {l}\) and the components partaking in *u* are \(\textsc {r}\textsc {l}\) and \(\textsc {r}\textsc {r}\textsc {l}\). Hence, due to the overlap, \(t\not \smile u\); the path violating the liveness property is just.
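The overlap argument can be spelled out directly on the component sets given above:

```python
# Components (as strings over {l, r}) partaking in the two transitions:
comp_t = {"l", "rrl"}    # procA (l) and the readyA process (rrl) partake in t
comp_u = {"rl", "rrl"}   # procB (rl) and the readyA process (rrl) partake in u

assert comp_t & comp_u == {"rrl"}        # shared component: u interferes with t
assert not comp_t.isdisjoint(comp_u)     # hence t and u are not concurrent
```

Because such an interfering *u* recurs forever on \(\pi \), the justness requirement for *t* is (vacuously) satisfied, even though process *A* never gets to write.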

A more refined definition of the concurrency relation is needed to specify that certain interactions, such as reading a variable, do not interfere with other interactions with the same component. This requires distinguishing between components contributing passively to a transition and components really affected by a transition.

## Signals

In the previous section it was observed that the specification of Peterson’s algorithm in the proposed process calculus does not yield the appropriate notion of just path, at least not with the given semantics. The culprit is a combination of two aspects. First, shared variables need to be modelled as separate processes. Second, the process calculus does not offer a facility to distinguish between the activities of reading and writing a variable while, intuitively, if some component reads the value of a variable then this should not prevent another process from writing a new value to it.

The solution proposed in [4] is to extend the syntax of CCS with a *signal emission operator*, in order to treat signals differently in the definition of the concurrency relation. A separate set \(\mathcal {S}\) of signals is presupposed, and the signal emission operator adds a \(\lambda \)-labelled self-loop to a state if it can emit signal \(\lambda \in \mathcal {S}\). Variables, modelled as processes, then emit their values in the form of signals, and reading the value of a variable can then be treated as not affecting the variable. As a consequence, paths on which some component wants to write to a variable but never succeeds because the variable is perpetually read by some other component are not considered just.

Adding a signal emission operator solves the problem uniformly: with every process expression of the process calculus an appropriate notion of just path is associated: if a component only contributes to a transition by emitting a signal, then this contribution is considered passive. A disadvantage of the solution, however, is that it requires an addition to the syntax of the calculus. As a consequence, standard verification technology such as the mCRL2 toolset, which does not include a signal emission operator, cannot be used to perform verifications taking justness into account.

Here we opt for a different solution, which does not require an addition to the syntax of the process calculus. Instead, it suffices to distinguish a separate set of signals \(\mathcal {S}\) and tune the notion of justness to take signals into account. We need to modify the structural operational semantics, giving signals a special status: a transition that is labelled with a signal and does not change state is considered to be a signal transition. This modification of the structural operational semantics is only necessary to get an appropriate definition of the concurrency relation. In Sects. 6 and 7, we shall propose sufficient conditions on a process expression (and the underlying recursive specification) that ensure that all transitions labelled with signals are indeed signal transitions. This, in combination with the use of an appropriate communication function that preserves component information, will eventually obviate the need for explicitly defining a concurrency relation on transitions, because it can be deduced from the labelling.

Henceforth we allow \(\mathcal {S}\) to be non-empty. The syntax of the process calculus [see (1) on p. 6] remains the same. In the structural operational semantics, however, we distinguish between components contributing actively and components contributing passively to a transition. A component contributes passively to a transition if another component reads one of its signals, i.e., the component participates with a transition that is labelled with a signal and this transition does not change the state of the component. The modified structural operational semantics in Table 2 defines a transition relation \(\mathrel {\overset{\lambda ,\alpha ,\varsigma }{\longrightarrow }}\) on process expressions, which is endowed with a label \(\lambda \in \mathcal {L}\), a set \(\alpha \subseteq \mathcal {C}^{*}\) of *active components* and a set \(\varsigma \subseteq \mathcal {C}^{*}\) of *signalling components*.

Note that \({\lambda }.P\ne P\), and therefore a transition emanating from a prefix always changes state. Thus, according to the rule \(\textsc {(Pref)}\), the transition from a prefix has an active component \(\epsilon \) and no signalling components.

If an identifier *A* is the source of a transition that has *A* also as its target, and this transition is labelled with a signal, then this transition has a signalling component \(\epsilon \) and no active components; otherwise, the transition has an active component \(\epsilon \) and no signalling components.

Due to the presence of recursion, it may also happen that \(P\mathbin {+}Q\) is both the source and the target of a transition, and if such a transition is labelled with a signal, then we want to treat it as a signal transition. This is reflected in \((\textsc {Sum}\text {-}\textsc {l})^{}\) and \((\textsc {Sum}\text {-}\textsc {r})^{}\) by distinguishing whether the target of the transition equals \(P\mathbin {+}Q\) and the transition is labelled with a signal: if so, then the transition has no active components and a signalling component \(\epsilon \); otherwise, the transition has an active component \(\epsilon \) and no signalling components.

In an application of \((\textsc {Par}\text {-}\textsc {l})\), both the active and signalling components of the premise are prefixed with an \(\textsc {l}\); in an application of \((\textsc {Par}\text {-}\textsc {r})\), they are prefixed with an \(\textsc {r}\); in an application of \(\textsc {(Comm)}\) the components of the left premise are prefixed with an \(\textsc {l}\) and those of the right premise are prefixed with an \(\textsc {r}\). In an application of \(\textsc {(Enc)}\), both the sets of active and signalling components are simply inherited from the premise.
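The relabelling of component sets performed by these rules can be sketched as follows. This is a hypothetical Python encoding, not taken from the paper: static components are strings over \(\{\textsc {l},\textsc {r}\}\), with \(\epsilon \) encoded as the empty string.

```python
# Sketch (not from the paper): how (Par-l), (Par-r) and (Comm) combine the
# active (alpha) and signalling (varsigma) component sets of their premises.
# Components are encoded as strings over {'l', 'r'}; epsilon is "".

def prefix(p, comps):
    """Prefix every component in comps with p, as in (Par-l)/(Par-r)."""
    return {p + c for c in comps}

def comm(alpha_l, sigma_l, alpha_r, sigma_r):
    """(Comm): components of the left premise get 'l', right premise 'r'."""
    alpha = prefix("l", alpha_l) | prefix("r", alpha_r)
    sigma = prefix("l", sigma_l) | prefix("r", sigma_r)
    return alpha, sigma

# A communication in which the right component merely emits a signal:
alpha, sigma = comm({""}, set(), set(), {""})
print(sorted(alpha), sorted(sigma))  # ['l'] ['r']
```

The left component ends up as the only active component, the right one as the only signalling component, mirroring Example 9 below.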

### Example 9

Consider the recursive specification of Peterson’s algorithm—and in particular the specification of \( RA ^{ false }\)—given in the previous section. Suppose that \( s\_rd _{ RA }^ false \in \mathcal {S}\) but \( rd _{ RA }^ false , r\_rd _{ RA }^ false \notin \mathcal {S}\). Then we have the following (fragment of a) derivation:

The component \( RA ^ false \) contributes a signal transition, and hence does not actively contribute to the communication. As a consequence, the path we identified earlier as constituting a liveness violation of Peterson’s algorithm is, with the revised semantics, no longer just.

We now associate a revised LTSC with our process calculus as follows. Its set of states \( St \) is again the set of process expressions. Its set of transitions \( Tr \) is the set of all derivations in accordance with the new structural operational semantics in Table 2. Again, if \(t\in Tr \) is a derivation with conclusion \(P\mathrel {\overset{\lambda ,\alpha ,\varsigma }{\longrightarrow }}P'\), then \( src (t)=P\), \( target (t)=P'\) and \({\ell }(t)=\lambda \). We define the concurrency relation using a refined notion of component, in which we distinguish between *necessary participants* and *affected* components. The set of necessary participants of a transition *t*, denoted by \( npc (t)\), is defined as

\( npc (t)=\alpha \cup \varsigma ,\)
and the set of *affected components* of *t*, denoted by \( afc (t)\), is defined as

\( afc (t)=\alpha .\)
We define that transitions *t* and *u* are concurrent (notation: \(t\mathrel {\smile }u\)) if none of the components necessary for *t* are affected by *u*, i.e., if \( npc (t)\cap afc (u)=\emptyset \).
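This definition can be sketched directly on component sets. The following is a hypothetical Python encoding (not from the paper); the component strings are those of the Peterson example.

```python
# Sketch (not from the paper): t ⌣ u iff npc(t) ∩ afc(u) = ∅.

def concurrent(npc_t: set, afc_u: set) -> bool:
    """Transitions are concurrent when no necessary participant of t
    is affected by u."""
    return npc_t.isdisjoint(afc_u)

# Component sets as in the Peterson example: t writes readyA, u reads it.
npc_t = {"l", "rrl"}   # necessary participants of t (alpha ∪ varsigma)
afc_u = {"rl"}         # affected components of u (only the reader is active)
print(concurrent(npc_t, afc_u))   # True: u does not affect t
print(concurrent(npc_t, npc_t))   # False on overlapping sets
```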

To satisfy the requirement on \(\mathrel {\smile }\) that it is irreflexive on action transitions, it is important that the set of affected components \( afc (t)\) of an action transition *t* is non-empty, for otherwise \( npc (t)\cap afc (t)=\emptyset \). The following example illustrates that we need to formulate some mild restrictions on the communication function for this.

### Example 10

Consider the recursive specification consisting of the following two defining equations:

and suppose that \(\gamma \) is a communication function satisfying

Furthermore, suppose that \(\lambda _1,\lambda _2\in \mathcal {S}\), while \(\lambda _3\in \mathcal {A}\). Then we have the following derivation:

Since \(\lambda _3\in \mathcal {A}\), this derivation is an action transition, but the set of affected components is empty. The culprit in this example is that communication between the two signals \(\lambda _1\) and \(\lambda _2\) results in an action \(\lambda _3\).

We can exclude the situation as described in the preceding example by requiring that the communication of two signals never results in an action. It is convenient and natural to also require the converse: the communication of an action with another label should never result in a signal.

### Definition 11

A communication function \(\gamma \) is *signal-respecting* if, for all \(\lambda _1,\lambda _2\in \mathcal {L}\), we have \(\gamma (\lambda _1,\lambda _2)\in \mathcal {S}\) if, and only if, \(\lambda _1,\lambda _2\in \mathcal {S}\).
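This condition is mechanically checkable. A minimal sketch, assuming a hypothetical encoding of \(\gamma \) as a partial function given by a dict on pairs of labels:

```python
# Sketch (not from the paper): check that gamma is signal-respecting,
# i.e. gamma(l1, l2) is a signal iff both l1 and l2 are signals.

def signal_respecting(gamma: dict, signals: set) -> bool:
    return all(
        (res in signals) == (l1 in signals and l2 in signals)
        for (l1, l2), res in gamma.items()
    )

# The communication of Example 10: two signals yielding an action.
signals = {"lambda1", "lambda2"}
gamma_bad = {("lambda1", "lambda2"): "lambda3"}   # lambda3 is an action
print(signal_respecting(gamma_bad, signals))  # False
```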

### Lemma 12

If the communication function \(\gamma \) is signal-respecting, then a transition *t* is a signal transition if, and only if, \( afc (t)=\emptyset \).

### Proof

By induction on *t*; see Lemma 45 in “Appendix A” for details. \(\square \)

In the following corollary, which is an immediate consequence of the preceding lemma, we establish that \(\mathrel {\smile }\) satisfies condition 1 of Definition 2.

### Corollary 13

If the communication function \(\gamma \) is signal-respecting, then \(\mathrel {\smile }\) is irreflexive on action transitions, i.e., for all action transitions *t* we have \(\lnot (t\mathrel {\smile }t)\).

### Proof

Let *t* be an action transition. Then, by Lemma 12, \( afc (t)\ne \emptyset \). Since \( afc (t)\subseteq npc (t)\), it follows that \( npc (t)\cap afc (t)\ne \emptyset \), and hence \(\lnot (t\mathrel {\smile }t)\). \(\square \)

### Lemma 14

For all transitions *t* and *v*, if \( src (t)= src (v)\) and \( npc (t)\cap afc (v)=\emptyset \), then there exists a transition *u* with \( src (u)= target (v)\), \({\ell }(u)={\ell }(t)\) and \( npc (u)= npc (t)\). If \(\gamma \) is signal-respecting and *t* is an action transition, then so is *u*.

### Proof

By induction on *v*; see Lemma 46 in “Appendix A” for details. \(\square \)

It follows from the preceding lemma that the relation \(\mathrel {\smile }\) associated with our process calculus satisfies condition 2 of Definition 2, as established in the following corollary.

### Corollary 15

If \(\gamma \) is signal-respecting, *t* is an action transition and \(\pi \) is a path from \( src (t)\) to some process expression *P* such that \(t\mathrel {\smile }v\) for all transitions *v* occurring on \(\pi \), then there is an action transition *u* such that \( src (u)=P\), \({\ell }(u)={\ell }(t)\) and \( npc (u)= npc (t)\).

### Proof

Straightforward induction on the length of \(\pi \) using Lemma 14. \(\square \)

From Corollaries 13 and 15 we get the following proposition.

### Proposition 16

Let \(\gamma \) be signal-respecting, let \( St =\mathcal {P}\), let \( Tr \) be the set of all derivations of transitions in accordance with the operational semantics, stipulating that if \(t\in Tr \) is a derivation with conclusion \(P\mathrel {\overset{\lambda ,\alpha ,\varsigma }{\longrightarrow }}P'\), then \( src (t)=P\), \( target (t)=P'\), \({\ell }(t)=\lambda \), \( npc (t)=\alpha \cup \varsigma \) and \( afc (t)=\alpha \), and defining \(\mathrel {\smile }\) by \(t\mathrel {\smile }u\) if, and only if, \( npc (t)\cap afc (u)=\emptyset \). Then

\(\mathbf {P}=( St , Tr , src , target ,{\ell },\mathrel {\smile })\) is an LTSC.

### Example 17

Returning to the running example of Peterson’s algorithm, we reconsider the path that violates liveness. First, we define the set of signals and check whether the communication function is signal-respecting.

It is easy to see that the communication function \(\gamma \), defined in Eq. (2) on p. 9, is signal-respecting. Taking the LTSC as defined in Proposition 16, we re-examine the liveness-violating path \(\pi \) presented at the end of Sect. 4, which gives rise to the following sequence of labels:

Let *t* and *u* be any two transitions with labels \( asgn _{ RA }^{true}\) and \( rd _{ RA }^{false}\), respectively. Then \( npc (t) = \{\textsc {l},\textsc {r}\textsc {r}\textsc {l}\}\) and \( afc (u) = \{\textsc {r}\textsc {l}\}\). Therefore \( npc (t)\cap afc (u)=\emptyset \) and thus \(t\mathrel {\smile }u\). We conclude that there is a transition *t* with \( src (t) \in \pi \) for which there does not exist a transition *v* in the suffix of \(\pi \) starting at \( src (t)\) such that \(\lnot (t\mathrel {\smile }v)\). The path \(\pi \) is therefore not just and can be ruled out. Note that this does not constitute a proof of liveness: we have only reasoned about a single path. To prove liveness we need to prove that there does not exist another liveness-violating path that is just.

## Concurrency-consistent labelling

The semantics we associated with our process calculus in the previous section enables reasoning about just paths without the need for additional operators in the language. This allows one to manually analyse, e.g., the required liveness property of Peterson’s algorithm in a standard process algebra, by reasoning directly about the relevant just paths in the LTSC under analysis. Our aim, however, is to facilitate the automated verification of liveness properties for just paths, using toolsets such as mCRL2. Such toolsets are based on labelled transition systems without a concurrency relation. Moreover, in these toolsets, properties need to be expressed in a modal logic that has modalities that refer to labels, and not to individual transitions.

Our specification of Peterson’s algorithm is such that it allows a characterisation of its just paths in terms of labels rather than referring to individual transitions in the LTSC. This is possible, because the labelling of transitions reachable from \( Pet \) is consistent with the concurrency relation on those transitions.

In this section, we formally define when an LTSC has a concurrency-consistent labelling, and we prove that LTSCs with a concurrency-consistent labelling allow a characterisation of just paths in terms of labels instead of individual transitions. In the next section, we shall provide a sufficient syntactic criterion on specifications in our process calculus that ensures that the associated LTSC has a concurrency-consistent labelling, and we argue that our specification of Peterson’s algorithm satisfies this syntactic criterion.

### Definition 18

An LTSC has a *concurrency-consistent labelling* if for every \(t \in Tr \), \({\ell }(t)\in \mathcal {S}\) implies \( src (t)= target (t)\), and there exists a binary relation \(\mathrel {\smile }\) on the set of labels \(\mathcal {L}\) such that for all transitions \(t,u\in Tr \) we have that \(t\mathrel {\smile }u\) if, and only if, \({\ell }(t)\mathrel {\smile }{\ell }(u)\).

Clearly, there is no harm in the overloading of the symbol \(\mathrel {\smile }\). In an LTSC with a concurrency-consistent labelling the relation \(\mathrel {\smile }\) on \(\mathcal {L}\) is uniquely determined by the relation \(\mathrel {\smile }\) on \( Tr \). Furthermore, it will be clear from the context whether we mean the relation on transitions or the relation on labels. For an LTSC with a concurrency-consistent labelling, we can reformulate the notion of \(\mathcal {B}\)-justness referring to labels instead of transitions. A label \(\lambda \in \mathcal {L}\) is *enabled* in a state \(s\in St \) if there is a transition *t* with \( src (t)=s\) and \({\ell }(t)=\lambda \). An action \(\lambda \in \mathcal {A}\) is *eliminated* on a path \(\pi \) if there is a transition *t* on \(\pi \) such that \(\lnot (\lambda \mathrel {\smile }{\ell }(t))\). In an LTSC with a concurrency-consistent labelling, action transitions are not labelled by signals, so a non-blocking action transition is labelled by an element of the complement \(\overline{\mathcal {B}}=\mathcal {A}\backslash \mathcal {B}\) of \(\mathcal {B}\) relative to \(\mathcal {A}\).

### Proposition 19

Let \(\mathcal {B}\subseteq \mathcal {A}\) be a set of blocking actions. If an LTSC has a concurrency-consistent labelling, then a path \(\pi \) is \(\mathcal {B}\)-just if, and only if, for every state *s* on \(\pi \) and every \(\lambda \in \overline{\mathcal {B}}\) enabled in *s*, \(\lambda \) is eliminated in the suffix of \(\pi \) starting at *s*.

### Proof

Let \(\pi \) be a path in an LTSC with a concurrency-consistent labelling.

To prove the implication from left to right, suppose that \(\pi \) is \(\mathcal {B}\)-just and suppose that \(\lambda \in \overline{\mathcal {B}}\) is enabled in some state *s* on \(\pi \). Then there is an action transition *t* with \( src (t)=s\) and \({\ell }(t)=\lambda \), so, by \(\mathcal {B}\)-justness, a transition *u* occurs in the suffix of \(\pi \) starting at \( src (t)=s\) such that \(\lnot (t\mathrel {\smile }u)\). Since the LTSC has a concurrency-consistent labelling, it follows that \(\lnot (\lambda \mathrel {\smile }{\ell }(u))\), and hence \(\lambda \) is eliminated on the suffix of \(\pi \) starting at *s*.

To prove the implication from right to left, let *t* be an action transition such that \({\ell }(t)\notin \mathcal {B}\) and \( src (t)\in \pi \). Then \({\ell }(t)\) is enabled in \( src (t)\) and, since *t* is an action transition and the LTSC has a concurrency-consistent labelling, it follows that \({\ell }(t)\in \overline{\mathcal {B}}\), so \({\ell }(t)\) is eliminated in the suffix of \(\pi \) starting at \( src (t)\). So there is a transition *u* in the suffix of \(\pi \) starting at \( src (t)\) such that \(\lnot ({\ell }(t)\mathrel {\smile }{\ell }(u))\). Hence, since the LTSC has a concurrency-consistent labelling, \(\lnot (t\mathrel {\smile }u)\), confirming that \(\pi \) is \(\mathcal {B}\)-just. \(\square \)
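For paths of "lasso" shape (a finite prefix followed by a cycle repeated forever), the label-based criterion of Proposition 19 is directly checkable. The following is a minimal sketch under that assumption, in a hypothetical Python encoding (not from the paper); `conc` plays the role of the concurrency relation \(\mathrel {\smile }\) on labels.

```python
# Sketch (not from the paper): B-justness of a lasso path via Proposition 19.
# labels[i] is the label of the transition taken from state i; states
# cycle_start .. len(labels)-1 form the cycle that repeats forever.
# enabled[i] is the set of non-blocking action labels enabled in state i.

def b_just_lasso(labels, enabled, cycle_start, conc) -> bool:
    for i, lams in enumerate(enabled):
        # labels occurring in the suffix of the path starting at state i
        occurs = set(labels[i:]) | set(labels[cycle_start:])
        for lam in lams:
            if all(conc(lam, mu) for mu in occurs):
                return False  # lam stays enabled but is never eliminated
    return True

# Toy version of the Peterson path: the write stays enabled while the
# variable is read forever, and write ⌣ read in the revised semantics.
labels, enabled = ["read"], [{"write"}]
conc = lambda lam, mu: {lam, mu} == {"write", "read"}
print(b_just_lasso(labels, enabled, 0, conc))  # False: the path is not just
```

The toy run mirrors Example 17: since the perpetual reads are concurrent with the enabled write, the write is never eliminated and the path is rejected as unjust.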

## Specifying an LTSC with concurrency-consistent labelling

The LTSC \(\mathbf {P}\) associated with the process calculus in Sect. 5 does not have a concurrency-consistent labelling, simply because there exist process expressions (e.g., \({\lambda }.\mathbf {0}\) with \(\lambda \in \mathcal {S}\)) that give rise to state-changing transitions labelled with signals. In automated verification, however, we are often only interested in the restriction of \(\mathbf {P}\) to the set of process expressions reachable from some initial process expression; for example, when verifying Peterson’s algorithm we are only interested in states and transitions reachable from \( Pet \). We shall now first formally define the LTSC associated with a process expression *P*, and then formulate sufficient syntactic conditions that guarantee that this LTSC has a concurrency-consistent labelling.

### Definition 20

Let *P* be a process expression. The *LTSC associated with P* has as set of states the set of all process expressions reachable from *P* in \(\mathbf {P}\), as set of transitions the set of all transitions reachable from *P*, and functions \( src \), \( target \), \({\ell }\) and relation \(\mathrel {\smile }\) obtained by restricting those of \(\mathbf {P}\) to the set of transitions reachable from *P*.

In Sect. 5, the concurrency relation on transitions was derived from assignments \( npc : Tr \rightarrow 2^{\mathcal {C}^{*}}\) and \( afc : Tr \rightarrow 2^{\mathcal {C}^{*}}\) of necessary participants and affected components to individual transitions. It is convenient to formulate sufficient conditions in terms of assignments \( npc _{\ell }:\mathcal {L}\rightarrow 2^{\mathcal {C}^{*}}\) and \( afc _{\ell }:\mathcal {L}\rightarrow 2^{\mathcal {C}^{*}}\) of necessary and affected components to labels, respectively, satisfying for every transition *t*

\( npc _{\ell }({\ell }(t))= npc (t)\) (4)

\( afc _{\ell }({\ell }(t))= afc (t).\) (5)
It is not possible to satisfy these equations in general: an appropriate assignment of components to labels largely depends on the process expression under consideration. Moreover, it may not even be possible to define \( npc _{\ell }:\mathcal {L}\rightarrow 2^{\mathcal {C}^{*}}\) and \( afc _{\ell }:\mathcal {L}\rightarrow 2^{\mathcal {C}^{*}}\) in such a way that the equations above are satisfied for all reachable transitions.

### Example 21

Consider the specification \( Pet \) of Peterson’s algorithm presented in Sect. 4, and consider the state reached from \( Pet \) by first executing \(\mathbf {noncritA}\) and then executing \(\mathbf {noncritB}\). In that state, two transitions are enabled: let us denote by *t* the transition corresponding to the activity of process *A* assigning the value \( true \) to the variable \( readyA \) (this is statement \(\ell _2\) in Fig. 1) and let us denote by *u* the transition corresponding to the activity of process *B* assigning the value \( true \) to the variable \( readyB \) (this is statement \(m_2\) in Fig. 1). Then \( npc (t)=\{\textsc {l},\textsc {r}\textsc {r}\textsc {l}\}\) and \( npc (u)=\{\textsc {r}\textsc {l},\textsc {r}\textsc {r}\textsc {r}\textsc {l}\}\). Now observe that, in the context of the CCS communication function \(\gamma _{\text {CCS}}\), defined in Eq. (3) on p. 10, we have that \({\ell }(t)={\ell }(u)=\tau \), and hence it is not possible to define a mapping \( npc _{\ell }:\mathcal {L}\rightarrow 2^{\mathcal {C}^{*}}\) satisfying (4). Note that with the communication function \(\gamma \), defined in Eq. (2) on p. 9 the problem disappears, since *t* and *u* have distinct labels \( asgn ^ true _ RA \) and \( asgn ^ true _ RB \), respectively.
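The obstruction of Example 21 can be made concrete. A sketch in a hypothetical encoding (not from the paper): with \(\gamma _{\text {CCS}}\) both transitions carry the label \(\tau \), so a single value \( npc _{\ell }(\tau )\) would have to equal two different sets.

```python
# Sketch (not from the paper): under gamma_CCS, t and u are both labelled
# tau but have different necessary participants, so no function npc_l
# from labels to component sets can satisfy npc_l(l(t)) = npc(t) for both.

npc = {"t": {"l", "rrl"}, "u": {"rl", "rrrl"}}
label = {"t": "tau", "u": "tau"}

consistent = all(
    npc[t1] == npc[t2]
    for t1 in npc for t2 in npc
    if label[t1] == label[t2]
)
print(consistent)  # False: npc_l(tau) cannot be defined
```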

The goal in this section is to formulate sufficient conditions on the communication function \(\gamma \) and a process expression *P* that allow us to define \( npc _{\ell }\) and \( afc _{\ell }\) satisfying (4) and (5) for all transitions *t* reachable from *P*. Furthermore, we show that our specification of Peterson’s algorithm satisfies these restrictions.

We first formulate some basic requirements on \( npc _{\ell }\) and \( afc _{\ell }\), expressing that the set of affected components associated with a label is included in the set of necessary components, and that signals do not have affected components.

### Definition 22

Let \(C\subseteq \mathcal {C}^{*}\) be a finite set of static components. A *C*-assignment is a pair \(( npc _{\ell }, afc _{\ell })\) of mappings \( npc _{\ell }, afc _{\ell }:\mathcal {L}\rightarrow 2^{C}\) such that

- 1.
\( afc _{\ell }(\lambda )\subseteq npc _{\ell }(\lambda )\) for all \(\lambda \in \mathcal {L}\); and

- 2.
\( afc _{\ell }(\lambda )=\emptyset \) for all \(\lambda \in \mathcal {S}\).
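Both conditions are easy to check mechanically. A minimal sketch, assuming a hypothetical encoding of \( npc _{\ell }\) and \( afc _{\ell }\) as dicts from labels to sets of components (the example values are illustrative, not the paper's full assignment):

```python
# Sketch (not from the paper): check the two C-assignment conditions
# of Definition 22.

def is_c_assignment(npc_l: dict, afc_l: dict, signals: set) -> bool:
    subset_ok = all(afc_l[lab] <= npc_l[lab] for lab in npc_l)   # condition 1
    signal_ok = all(afc_l[lab] == set()                          # condition 2
                    for lab in npc_l if lab in signals)
    return subset_ok and signal_ok

# Illustrative fragment: a signal emitted by the variable component, and an
# assignment in which both writer and variable are affected.
npc_l = {"s_rd_RA_false": {"rrl"}, "asgn_RA_true": {"l", "rrl"}}
afc_l = {"s_rd_RA_false": set(),  "asgn_RA_true": {"l", "rrl"}}
print(is_c_assignment(npc_l, afc_l, {"s_rd_RA_false"}))  # True
```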

In the following example, we define a \(\mathcal {C}( Pet )\)-assignment for our specification of Peterson’s algorithm.

### Example 23

Recall that the set of components \(\mathcal {C}( Pet )\) associated with \( Pet \) is

To define the mappings \( npc _{\ell }, afc _{\ell }: \mathcal {L}\rightarrow 2^{\mathcal {C}(\textit{Pet})}\) it is convenient to first associate with every component \(\sigma \in \mathcal {C}( Pet )\) a set of labels \(\mathcal {L}_{\sigma }\subseteq \mathcal {L}\). We have

Now we can define, for all \(\sigma \in \mathcal {C}( Pet )\) and all \(\lambda \in \mathcal {L}_{\sigma }\):

On the other elements of \(\mathcal {L}\), the results of communications, \( npc _{\ell }\) and \( afc _{\ell }\) are defined as follows:

It is easy to verify that \(( npc _{\ell }, afc _{\ell })\) satisfies the requirements of Definition 22 and hence is a \(\mathcal {C}( Pet )\)-assignment.

We could now proceed to prove directly that the \(\mathcal {C}( Pet )\)-assignment in the preceding example satisfies Eqs. (4) and (5) and conclude that the LTSC associated with \( Pet \) has a concurrency-consistent labelling. We prefer to proceed more generally, however, and define a subclass of process expressions together with assumptions on the underlying recursive specification that guarantee that an assignment satisfying Eqs. (4) and (5) exists. It will be easy to verify that \( Pet \) is a process expression in the subclass, and that the recursive specification \(E_{ Pet }\) satisfies the assumptions, from which it will follow that the \(\mathcal {C}( Pet )\)-assignment above indeed satisfies Eqs. (4) and (5). In fact, it can be checked automatically whether a process expression is in the subclass and the underlying recursive specification satisfies the assumptions.

We consider parallel compositions of sequential components. These sequential components should have disjoint alphabets and respect the use of signals. Moreover, the communication function should support a consistent assignment of components to labels. Below, we shall first formulate sufficient conditions on a sequential process expression and its underlying sequential recursive specification that ensure that transitions labelled with signals do not change state in the LTSC associated with the process expression. Then, we associate with every sequential process expression its (reachable) alphabet and its (reachable) action alphabet, so that we can formulate the requirement that the alphabets of components are disjoint. And finally we shall define when an assignment is consistent with a communication function.

*Sequential components*

The set of *sequential process expressions* is generated by the following grammar (with *A* ranging over \(\mathcal {I}\) and \(\lambda \) ranging over \(\mathcal {L}\)):

By a *sequential recursive specification* *E* we mean a set of defining equations

with \(S_A\) a sequential process expression, including precisely one such equation for every \(A\in \mathcal {I}\).

A sequential process expression *S* is *syntactically guarded* if all occurrences of process identifiers in *S* are within the scope of an action prefix. A sequential recursive specification *E* is *syntactically guarded* if for every defining equation \(A{\mathop {=}\limits ^{\text {def}}}S_A\) in *E* it holds that \(S_A\) is syntactically guarded.
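Syntactic guardedness is decidable by a single walk over the expression. A sketch over a tiny hypothetical AST (not from the paper): \(\mathbf {0}\) as `None`, prefixes as `("pre", label, S)`, choice as `("+", S1, S2)`, and identifiers as strings.

```python
# Sketch (not from the paper): every identifier occurrence must lie in the
# scope of a prefix.

def guarded(expr, under_prefix=False) -> bool:
    if expr is None:                       # the terminated process 0
        return True
    if isinstance(expr, str):              # a process identifier A
        return under_prefix
    op, *args = expr
    if op == "pre":                        # ("pre", label, S)
        return guarded(args[1], True)
    if op == "+":                          # ("+", S1, S2)
        return guarded(args[0], under_prefix) and guarded(args[1], under_prefix)
    raise ValueError(f"unknown operator {op!r}")

print(guarded(("pre", "a", "A")))   # True: A is guarded by the prefix a
print(guarded(("+", "A", None)))    # False: A occurs unguarded
```

A specification is then syntactically guarded when `guarded` holds for the right-hand side of every defining equation.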

*Respect for signals*

Let *E* be a sequential recursive specification, and let us denote, for all \(A\in \mathcal {I}\), by \(S_A\) the right-hand side of the defining equation for *A* in *E*. We say that \(A\in \mathcal {I}\) is *signalling* if \(S_A\) has a subexpression \({\lambda }.A\) with \(\lambda \in \mathcal {S}\). A process identifier \(A\in \mathcal {I}\) is *signal-respecting* if

- 1.
for every subexpression \({\lambda }.S'\) of \(S_A\) with \(\lambda \in \mathcal {S}\) it holds that \(S'=A\) and the occurrence of the subexpression is not in the scope of another prefix, and

- 2.
for every subexpression \(S_1\mathbin {+}S_2\) of \(S_A\) it holds that \(S_1\) and \(S_2\) are not signalling process identifiers.

*E* is *signal-respecting* if it is syntactically guarded and all process identifiers in \(\mathcal {I}\) are signal-respecting. A sequential process expression *S* is *signal-respecting* with respect to a signal-respecting sequential recursive specification *E* if *S* does not have subexpressions of the form \({\lambda }.S'\) with \(\lambda \in \mathcal {S}\), and for every subexpression \(S_1\mathbin {+}S_2\) it holds that \(S_1\) and \(S_2\) are not signalling process identifiers.

### Example 24

It is straightforward to check that \(E_{ Pet }\) is a syntactically guarded sequential recursive specification and that it is signal-respecting.

### Lemma 25

Let *E* be a signal-respecting recursive specification and let *t* be a transition such that \( src (t)\) is a signal-respecting sequential process expression. Then \( target (t)\) is again a signal-respecting sequential process expression, and *t* is a signal transition if, and only if, \({\ell }(t)\in \mathcal {S}\).

### Proof

To establish that \( target (t)\) is again a signal-respecting sequential process expression, we first note that if \(A{\mathop {=}\limits ^{\text {def}}}S_A\) is the equation in *E* defining some process identifier *A*, and \(S_A\mathrel {\overset{\lambda ,\alpha ,\varsigma }{\longrightarrow }}S'\), then \(S'\) is signal-respecting. For by syntactic guardedness, \(S'\) is a subexpression of \(S_A\), by the first requirement satisfied by signal-respecting process identifiers \(S'\) cannot have subexpressions of the form \({\lambda }.S''\) with \(\lambda \in \mathcal {S}\), and by the second requirement satisfied by signal-respecting process identifiers, whenever \(S_1\mathbin {+}S_2\) is a subexpression of \(S'\), then \(S_1\) and \(S_2\) cannot be signalling process identifiers. We can now argue that \( target (t)\) is a signal-respecting sequential process expression with a straightforward induction on the structure of \( src (t)\).

It remains to show that *t* is a signal transition if, and only if, \({\ell }(t)\in \mathcal {S}\).

For the implication from left to right, note that if *t* is a signal transition, then, by definition, \({\ell }(t)\in \mathcal {S}\).

For the converse implication, suppose that \({\ell }(t)\in \mathcal {S}\); we need to establish that \( src (t)= target (t)\). To this end, we first establish with induction on the structure of *S* that if *S* is a signal-respecting process expression, \(\lambda \in \mathcal {S}\) and \(S\mathrel {\overset{\lambda ,\alpha ,\varsigma }{\longrightarrow }}S'\), then \(S=A\) for some process identifier *A*. Clearly, *S* cannot be \(\mathbf {0}\). Furthermore, since signal-respecting sequential process expressions do not have subexpressions of the form \({\lambda }.S''\) with \(\lambda \in \mathcal {S}\), we cannot have that \(S={\lambda }.S''\) for some process expression \(S''\). Note that if \(S=S_1\mathbin {+}S_2\), then either \(S_1\mathrel {\overset{\lambda ,\alpha ,\varsigma }{\longrightarrow }}S'\) or \(S_2\mathrel {\overset{\lambda ,\alpha ,\varsigma }{\longrightarrow }}S'\), so by the induction hypothesis either \(S_1\) or \(S_2\) would be a signalling process identifier, contradicting the assumption that for every subexpression \(S_1\mathbin {+}S_2\) of *S* it holds that \(S_1\) and \(S_2\) are not signalling process identifiers. It follows that \(S=A\) for some (signalling) process identifier *A*. Hence, assuming that \((A{\mathop {=}\limits ^{\text {def}}}S_A)\in E\), *t* has a subderivation \(t'\) with \( src (t')= S_A\) and \({\ell }(t')\in \mathcal {S}\). From the first requirement satisfied by signal-respecting process identifiers it now follows that \( target (t)= target (t')=A\). \(\square \)

*Alphabet*

We also wish to associate with each sequential process expression *S* its *alphabet* \(\mathcal {L}(S)\) and its *action alphabet* \(\mathcal {A}(S)\), the sets of labels of transitions and action transitions reachable from *S*, respectively. To this end, we first define \(\mathcal {L}(A)\) for all process identifiers defined in *E*, using two auxiliary notions. First, we associate with every sequential process expression *S* its non-recursive alphabet \(\mathcal {L}'(S)\) inductively by: \(\mathcal {L}'(\mathbf {0})=\emptyset \), \(\mathcal {L}'(A)=\emptyset \) for all \(A\in \mathcal {I}\), \(\mathcal {L}'({\lambda }.S)=\{\lambda \}\cup \mathcal {L}'(S)\), and \(\mathcal {L}'(S_1\mathbin {+}S_2)=\mathcal {L}'(S_1)\cup \mathcal {L}'(S_2)\). Second, we define on \(\mathcal {I}\) a binary relation \(\mathbin {\triangleright }\) by \(A\mathbin {\triangleright }A'\) if \(A{\mathop {=}\limits ^{\text {def}}}S\) in *E* and \(A'\) occurs in *S*, and denote by \(\mathbin {\triangleright ^{*}}\) the reflexive-transitive closure of \(\mathbin {\triangleright }\). Then we can define the alphabet \(\mathcal {L}(A)\) of *A* by

\(\mathcal {L}(A)=\bigcup \{\mathcal {L}'(S_{A'})\mid A\mathbin {\triangleright ^{*}}A'\}.\)
Now, we inductively extend \(\mathcal {L}(\_)\) to all sequential process expressions by defining \(\mathcal {L}(\mathbf {0})=\emptyset \), \(\mathcal {L}({\lambda }.S)=\{\lambda \}\cup \mathcal {L}(S)\), and \(\mathcal {L}(S_1\mathbin {+}S_2)=\mathcal {L}(S_1)\cup \mathcal {L}(S_2)\). Furthermore, we define \(\mathcal {A}(S)=\mathcal {L}(S)\cap \mathcal {A}\).
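The alphabet of an identifier can be computed by collecting the non-recursive alphabets of all identifiers reachable via \(\mathbin {\triangleright ^{*}}\). A sketch in a hypothetical encoding (not from the paper): a specification maps each identifier to the pair of its non-recursive alphabet \(\mathcal {L}'(S_A)\) and the identifiers occurring in \(S_A\).

```python
# Sketch (not from the paper): L(A) as the union of L'(S_A') over all A'
# with A |>* A', computing |>* by a standard reachability search.

def alphabet(spec: dict, a: str) -> set:
    seen, stack, labels = set(), [a], set()
    while stack:
        b = stack.pop()
        if b in seen:
            continue
        seen.add(b)
        lbls, idents = spec[b]
        labels |= set(lbls)
        stack.extend(idents)   # all b' with b |> b'
    return labels

# Toy specification: A = l1.B, B = l2.A + l3.0
spec = {"A": ({"l1"}, {"B"}), "B": ({"l2", "l3"}, {"A"})}
print(sorted(alphabet(spec, "A")))  # ['l1', 'l2', 'l3']
```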

### Lemma 26

Let *E* be a sequential recursive specification and let *S* be a sequential process expression over *E*. If \(S'\) is a sequential process expression reachable from *S*, then \(\mathcal {L}(S')\subseteq \mathcal {L}(S)\) and \(\mathcal {A}(S')\subseteq \mathcal {A}(S)\).

### Proof

We first consider the special case that there is a transition *t* with \( src (t)=S\) and \( target (t)=S'\) and prove with induction on *t* that \(\mathcal {L}(S')\subseteq \mathcal {L}(S)\).

If the last rule applied in *t* is \(\textsc {(Pref)}\), then we have \(S={\lambda }.{S'}\) and hence \(\mathcal {L}(S')\subseteq \{\lambda \}\cup \mathcal {L}(S')=\mathcal {L}(S)\).

If the last rule applied in *t* is (Sum-l), then there exist \(S_1\) and \(S_2\) such that \(S=S_1\mathbin {+}S_2\), and *t* has a subderivation \(t'\) with \( src (t')=S_1\) and \( target (t')=S'\). By the induction hypothesis we have that \(\mathcal {L}(S') \subseteq \mathcal {L}(S_1) \subseteq \mathcal {L}(S_1)\cup \mathcal {L}(S_2) =\mathcal {L}(S)\).

If the last rule applied in *t* is (Sum-r), then there exist \(S_1\) and \(S_2\) such that \(S=S_1\mathbin {+}S_2\), and *t* has a subderivation \(t'\) with \( src (t')=S_2\) and \( target (t')=S'\). By the induction hypothesis we have that \(\mathcal {L}(S') \subseteq \mathcal {L}(S_2) \subseteq \mathcal {L}(S_1)\cup \mathcal {L}(S_2) =\mathcal {L}(S)\).

If the last rule applied in *t* is \(\textsc {(Rec)}^{}\), then \(S=A\) for some process identifier \(A\in \mathcal {I}\) with defining equation \((A{\mathop {=}\limits ^{\text {def}}}S_A)\in E\), and *t* has a subderivation \(t'\) with \( src (t')=S_A\) and \( target (t')=S'\). By the induction hypothesis, \(\mathcal {L}(S')\subseteq \mathcal {L}(S_A)\); it therefore remains to show that \(\mathcal {L}(S_A)\subseteq \mathcal {L}(A)\). We have:

[In the second equality we have used that \(A\mathbin {\triangleright ^{*}}A''\) for all \(A''\) with \(A'\mathbin {\triangleright ^{*}}A''\) for some \(A'\) such that \(A\mathbin {\triangleright }A'\). In the third equality we have used the definition of \(\mathcal {L}(A)\).]

Now, if \(S'\) is reachable from *S*, then the statement of the lemma follows with a straightforward induction on the number of transitions in a path from *S* to \(S'\). Furthermore, it is then immediate from the definition of action alphabet that \(\mathcal {A}(S')\subseteq \mathcal {A}(S)\). \(\square \)

*Parallel-sequential processes*

Presupposing a signal-respecting sequential recursive specification *E*, a *parallel-sequential* process expression over *E* is a process expression generated by the following grammar (with *S* ranging over sequential process expressions and \(H\subseteq \mathcal {L}\)):

\(P\mathbin {{:}{:}{=}}S\mid P\parallel P\mid \partial _{H}(P)\)

### Lemma 27

Let *E* be a sequential recursive specification and let *P* be a parallel-sequential process expression over *E*. If \(P'\) is reachable from *P*, then \(\mathcal {C}(P')=\mathcal {C}(P)\) and \(P'|_{\sigma }\) is reachable from \({P}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P)\).

### Proof

With induction on *t* it can be established that if *t* is a transition such that \( src (t)=P\) and \( target (t)=P'\), then \(\mathcal {C}(P')=\mathcal {C}(P)\) and \({P'}\mid _{\sigma }={P}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P)\). The details are worked out in “Appendix B” (see Lemma 47).

Then, if \(P'\) is reachable from *P*, the statement of the lemma follows with a straightforward induction on the number of transitions in a path from *P* to \(P'\). \(\square \)

Since a communication function \(\gamma \) is required to be commutative and associative, it induces a partial function \({\overline{\gamma }}:\mathcal {M}_f(\mathcal {L})\rightharpoonup \mathcal {L}\), where \({{\mathcal {M}}}_f(\mathcal {L})\) denotes the set of all finite multisets over \(\mathcal {L}\). We define \({\overline{\gamma }}([\lambda _0,\dots ,\lambda _n])\) with induction on *n* as follows:

- 1.
If \(n=0\), then \({\overline{\gamma }}([\lambda _0,\dots ,\lambda _n])=\lambda _0\).

- 2.
If \(n=1\), then \({\overline{\gamma }}([\lambda _0,\dots ,\lambda _n])= \gamma (\lambda _0,\lambda _n)\) if \(\gamma (\lambda _0,\lambda _n)\) is defined, and undefined otherwise.

- 3.
If \(n\ge 2\), then \({\overline{\gamma }}([\lambda _0,\dots ,\lambda _{n}])=\gamma ({\overline{\gamma }}([\lambda _0,\dots ,\lambda _{n-1}]),\lambda _{n})\) if both \({\overline{\gamma }}([\lambda _0,\dots ,\lambda _{n-1}])\) and \(\gamma ({\overline{\gamma }}([\lambda _0,\dots ,\lambda _{n-1}]),\lambda _{n})\) are defined, and undefined otherwise.
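The extension of \(\gamma \) to finite multisets is simply a fold of the binary function, with undefinedness propagating; commutativity and associativity make the result independent of the order in which elements are drawn from the multiset. A minimal Python sketch, using a made-up handshaking table for \(\gamma \) (the labels are illustrative):

```python
from functools import reduce

# Partial binary communication function, given as a symmetric table:
# gamma(r, s) = gamma(s, r) = c; all other pairs are undefined.
GAMMA = {("r", "s"): "c"}

def gamma(l1, l2):
    """Commutative lookup; returns None when gamma is undefined."""
    return GAMMA.get(tuple(sorted((l1, l2))))

def gamma_bar(multiset):
    """gamma-bar on a non-empty finite multiset, given as a list:
    fold gamma over the elements; None propagates undefinedness."""
    def step(acc, lam):
        return None if acc is None else gamma(acc, lam)
    return reduce(step, multiset[1:], multiset[0])
```

The split property used below, \(\gamma ({\overline{\gamma }}([\lambda _0,\dots ,\lambda _k]),{\overline{\gamma }}([\lambda _{k+1},\dots ,\lambda _n]))={\overline{\gamma }}([\lambda _0,\dots ,\lambda _n])\), can be spot-checked on this table.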

It is straightforward to prove, with induction on \(n\ge 1\), that \(\gamma ({\overline{\gamma }}([\lambda _0,\dots ,\lambda _k]),{\overline{\gamma }}([\lambda _{k+1},\dots ,\lambda _n]))={\overline{\gamma }}([\lambda _0,\dots ,\lambda _n])\) for all \(\lambda _0,\dots ,\lambda _n\) and all \(0\le k < n\); we shall use this fact in the proof of the next lemma, which relates the transitions of a parallel-sequential process with the transitions of its components.

### Lemma 28

Let *t* be a transition, let \( npc (t)=\{\sigma _0,\dots ,\sigma _n\}\), and suppose that \( src (t)\) is a parallel-sequential process expression. Then *t* has subderivations \(t_0,\dots ,t_n\) such that \( src (t_i)\) is a sequential process expression and \( src (t_i)={ src (t)}\mid _{\sigma _i}\) for all \(0\le i \le n\), and \({\ell }(t)={\overline{\gamma }}([{\ell }(t_0),\dots ,{\ell }(t_n)])\) (where \([{\ell }(t_0),\dots ,{\ell }(t_n)]\) denotes the multiset over \(\mathcal {L}\) consisting of \({\ell }(t_0),\dots ,{\ell }(t_n)\)).

### Proof

We proceed by induction on *t*.

If the last rule applied in *t* is \(\textsc {(Pref)}\), \((\textsc {Sum}\text {-}\textsc {l})^{}\), \((\textsc {Sum}\text {-}\textsc {r})^{}\), or \(\textsc {(Rec)}^{}\), then \( npc (t)=\{\epsilon \}\), \( src (t)={ src (t)}\mid _{\epsilon }\), and \({\ell }(t)={\overline{\gamma }}([{\ell }(t)])\). Moreover, from the syntax definition of parallel-sequential processes it is clear that \( src (t)\) is a sequential process expression.

If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {l})\), then *t* has a subderivation \(t'\) such that \( npc (t)=\textsc {l}\mathbin {\vartriangleright } npc (t')\), so there exist static components \(\sigma _0',\dots ,\sigma _n'\) such that \(\sigma _i=\textsc {l}\sigma _i'\) for all \(0\le i \le n\). By the induction hypothesis, \(t'\), and hence *t*, has subderivations \(t_0,\dots ,t_n\) such that \( src (t_i)\) is a sequential process expression, \( src (t_i)={ src (t')}\mid _{\sigma _i'}={ src (t)}\mid _{\sigma _i}\) for all \(0\le i\le n\), and \({\ell }(t)={\ell }(t')={\overline{\gamma }}([{\ell }(t_0),\dots ,{\ell }(t_n)])\).

If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {r})\), then the proof proceeds analogously.

If the last rule applied in *t* is \(\textsc {(Comm)}\), then *t* has subderivations \(t'\) and \(t''\) such that \( npc (t)=\textsc {l}\mathbin {\vartriangleright } npc (t')\cup \textsc {r}\mathbin {\vartriangleright } npc (t'')\). Since \( npc (t')\) and \( npc (t'')\) cannot be empty, we have that \(n\ge 1\) and there exist static components \(\sigma _0',\dots ,\sigma _n'\) and a \(0\le k< n\) such that \( npc (t')=\{\sigma _0',\dots ,\sigma _k'\}\) and \( npc (t'')=\{\sigma _{k+1}',\dots ,\sigma _n'\}\). By the induction hypothesis, \(t'\) and \(t''\), and hence *t*, have subderivations \(t_0,\dots ,t_n\) such that \( src (t_i)\) is a sequential process expression for all \(0\le i \le n\), \( src (t_i)={ src (t')}\mid _{\sigma _i'}={ src (t)}\mid _{\sigma _i}\) for all \(0\le i \le k\), \({\ell }(t')={\overline{\gamma }}([{\ell }(t_0),\dots ,{\ell }(t_k)])\), \( src (t_i)={ src (t'')}\mid _{\sigma _i'}={ src (t)}\mid _{\sigma _i}\) for all \(k < i \le n\), and \({\ell }(t'')={\overline{\gamma }}([{\ell }(t_{k+1}),\dots ,{\ell }(t_n)])\). Furthermore, we have that

Finally, if the last rule applied in *t* is \(\textsc {(Enc)}\), then *t* has a subderivation \(t'\) with \( npc (t')=\{\sigma _0,\dots ,\sigma _n\}\) and \({\ell }(t')={\ell }(t)\), so it follows immediately by the induction hypothesis that there exist subderivations \(t_0,\dots ,t_n\) of \(t'\) and hence of *t* such that \( src (t_i)={ src (t')}\mid _{\sigma _i}={ src (t)}\mid _{\sigma _i}\) and \({\ell }(t)={\ell }(t')={\overline{\gamma }}([{\ell }(t_0),\dots ,{\ell }(t_n)])\). \(\square \)

### Definition 29

Let \(C\subseteq \mathcal {C}^{*}\) be a finite set of static components. A *C*-assignment \(( npc _{\ell }, afc _{\ell })\) is *consistent* with a communication function \(\gamma \) if it satisfies, for all \(\lambda _1,\lambda _2,\lambda _3\in \mathcal {L}\) such that \(\gamma (\lambda _1,\lambda _2)=\lambda _3\):

- 1.
\( npc _{\ell }(\lambda _1)\cup npc _{\ell }(\lambda _2)= npc _{\ell }(\lambda _3)\); and

- 2.
\( afc _{\ell }(\lambda _1)\cup afc _{\ell }(\lambda _2)= afc _{\ell }(\lambda _3)\).
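Both conditions can be checked mechanically by iterating over the defined pairs of \(\gamma \). The following Python sketch does so for a hypothetical two-component fragment; all labels and the concrete assignment are invented for illustration and do not come from the article's Peterson specification.

```python
def consistent(gamma_table, npc, afc):
    """Check Definition 29 for a C-assignment (npc, afc).
    gamma_table maps pairs (l1, l2) to l3 whenever gamma(l1, l2) = l3."""
    for (l1, l2), l3 in gamma_table.items():
        if npc[l1] | npc[l2] != npc[l3]:   # condition 1: npc sets unite
            return False
        if afc[l1] | afc[l2] != afc[l3]:   # condition 2: afc sets unite
            return False
    return True

# Hypothetical fragment: a read action of component "A" synchronising with
# a send action of component "B" into a communication carried by both.
gamma_table = {("readA", "sendB"): "commAB"}
npc = {"readA": {"A"}, "sendB": {"B"}, "commAB": {"A", "B"}}
afc = {"readA": {"A"}, "sendB": {"B"}, "commAB": {"A", "B"}}
```

With this assignment `consistent(gamma_table, npc, afc)` holds, while, say, shrinking `afc["commAB"]` to `{"A"}` violates condition 2.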

### Example 30

Consider the specification of Peterson’s algorithm; it is straightforward to verify that the \(\mathcal {C}( Pet )\)-assignment \(( npc _{\ell }, afc _{\ell })\) presented in Example 23 is consistent with the communication function \(\gamma \). Consider, by way of example, the equation

which is part of the definition of \(\gamma \). We confirm as follows that indeed the conditions of Definition 29 are satisfied:

If \( npc _{\ell }:\mathcal {L}\rightarrow 2^C\) associates with every label a subset of components in *C* and \(C'\subseteq C\), then we denote by \(\mathcal {L}(C')\) the *alphabet* of \(C'\), i.e.,

and by \(\mathcal {A}(C')\) the *action alphabet* of \(C'\), i.e.,

Note that by condition 2 of Definition 22 we have \(\mathcal {A}(C')\subseteq \mathcal {A}\).

### Theorem 31

Let *E* be a signal-respecting sequential recursive specification, let *P* be a parallel-sequential process expression over *E*, and let \(( npc _{\ell }, afc _{\ell })\) be a \(\mathcal {C}(P)\)-assignment. If \(\mathcal {L}({P}\mid _{\sigma })\subseteq \mathcal {L}(\{\sigma \})\) and \(\mathcal {A}({P}\mid _{\sigma })\subseteq \mathcal {A}(\{\sigma \})\) for all \(\sigma \in \mathcal {C}(P)\) and \(\gamma \) is signal-respecting and consistent with \(( npc _{\ell }, afc _{\ell })\), then \(( npc _{\ell }, afc _{\ell })\) satisfies the requirements (4) and (5) for every transition *t* reachable from *P*.

### Proof

Let *t* be a transition reachable from *P*. Then \( src (t)=P'\) for some parallel-sequential process expression \(P'\) reachable from *P*. By Lemma 27 we have \(\mathcal {C}(P')=\mathcal {C}(P)\), and \({P'}\mid _{\sigma }\) is reachable from \({P}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P)\). So, without loss of generality, we may assume that \( src (t)=P\).

Let \( npc (t)=\{\sigma _0,\dots ,\sigma _n\}\). By Lemma 28, *t* has subderivations \(t_0,\dots ,t_n\) such that \( src (t_i)\) is a sequential process expression and \( src (t_i)={ src (t)}\mid _{\sigma _i}\) for \(0 \le i \le n\), and \({\ell }(t)={\overline{\gamma }}([{\ell }(t_0),\dots ,{\ell }(t_n)])\). Since, for all \(0\le i \le n\), \({\ell }(t_i)\in \mathcal {L}({P}\mid _{\sigma _i}) \subseteq \mathcal {L}(\{\sigma _i\})\), we have \( npc _{\ell }({\ell }(t_i))=\{\sigma _i\}\), and hence, by condition 1 of Definition 29,

Since *E* is a signal-respecting recursive specification and \( src (t_i)\) is a signal-respecting sequential process expression, by Lemma 25, \(t_i\) is a signal transition if, and only if, \({\ell }(t_i)\in \mathcal {S}\). Since, on the one hand, \( afc _{\ell }({\ell }(t_i))=\emptyset \) for all \({\ell }(t_i)\in \mathcal {S}\), and, on the other hand, \(\mathcal {L}({P}\mid _{\sigma })\cap \mathcal {A}\subseteq \mathcal {A}(\{\sigma \})\), we have

This completes the proof of the theorem. \(\square \)

If *E*, *P*, \(\gamma \) and \(( npc _{\ell }, afc _{\ell })\) satisfy the requirements of the preceding theorem, then the relation on labels defined by: \(\lambda _1\) is concurrent with \(\lambda _2\) if, and only if, \( npc _{\ell }(\lambda _1)\cap afc _{\ell }(\lambda _2)=\emptyset \), satisfies the requirements of Definition 18. So we get the following corollary.

### Corollary 32

Let *E* be a signal-respecting sequential recursive specification, let *P* be a parallel-sequential process expression over *E*, and let \(( npc _{\ell }, afc _{\ell })\) be a \(\mathcal {C}(P)\)-assignment such that \(\mathcal {L}({P}\mid _{\sigma })\subseteq \mathcal {L}(\{\sigma \})\) and \(\mathcal {A}({P}\mid _{\sigma })\subseteq \mathcal {A}(\{\sigma \})\) for all \(\sigma \in \mathcal {C}(P)\) and \(\gamma \) is signal-respecting and consistent with \(( npc _{\ell }, afc _{\ell })\). Then the LTSC associated with *P* has a concurrency-consistent labelling.

### Example 33

In Example 30 we have established that all the conditions of Corollary 32 are satisfied for \(E_{ Pet }\), \(\gamma \), \( Pet \) and the \(\mathcal {C}( Pet )\)-assignment \(( npc _{\ell }, afc _{\ell })\) defined in Example 23, so the LTSC associated with \( Pet \) has a concurrency-consistent labelling.

## Expressing liveness

A mathematically rigorous method for establishing the correctness of a (finite model of a) system is by means of *model checking*. Given a process expression specifying a system, the behaviours of that system can be scrutinised by verifying which requirements, expressed in a modal logic, hold true and which ones fail to hold. Among the modal logics that can be used to express such requirements is the modal \(\mu \)-calculus. This is one of the most expressive logics available, subsuming logics such as LTL, CTL and CTL\(^{*}\), and it is typically used in toolsets for analysing labelled transition systems, such as the mCRL2 toolset [2] and CADP [6]. We introduce this logic in Sect. 8.1.

Liveness requirements typically assert that (conditionally or unconditionally) something good must inevitably happen. Phrasing such properties in the modal \(\mu \)-calculus is rather standard, but it is less clear whether the logic permits expressing liveness properties restricted to just paths only. This is partly due to the fact that justness is a predicate on paths, whereas the modal \(\mu \)-calculus is a state-based formalism, and partly due to the ‘dynamic’ nature of justness, which checks along a path for enabledness of actions and their future elimination. In particular this dynamic nature rules out a ‘static’ encoding such as the one presented in [5] for dealing with fairness, as it assumes an *a priori* fixed—i.e., static—collection of constraints that need to hold infinitely often for a path to be fair.

We show that liveness requirements of the form ‘along every just path, every *a* action is inevitably followed by a *b* action’ can indeed be expressed in the modal \(\mu \)-calculus. Other path-based properties can be defined along the same lines. We discuss the liveness property in Sect. 8.2.

### The modal \(\mu \)-calculus

The modal \(\mu \)-calculus can be viewed as a fixed point extension of *Hennessy–Milner Logic* (HML) [13]. In HML one can characterise the capabilities of a state to execute actions using modal operators \([{\_}]\_\) and \(\langle {\_} \rangle \_\); essentially, this permits reasoning about the transitions emanating from a state. Fixed points add the power of recursion to these basic facilities; intuitively, this allows one to reason about finite or infinite sequences or trees of transitions and the capabilities of the states visited along such sequences or trees. The resulting logic, i.e., HML with fixed points, is referred to as the modal \(\mu \)-calculus (\(\hbox {L}_\mu \)). For an in-depth treatment of this logic, we refer to, e.g., [14].

Our syntax of the modal \(\mu \)-calculus is given in the context of a set of recursion variables \(\mathcal {V}\), in addition to a finite set of labels \(\mathcal {L}\). The set \(\Phi \) of formulas of \(\hbox {L}_\mu \) is generated by the following grammar (with *X* ranging over the set of variables \(\mathcal {V}\), and \(\lambda \) ranging over the finite set of labels \(\mathcal {L}\)):

\(\phi \mathbin {{:}{:}{=}} \bot \mid \top \mid X \mid \phi \wedge \phi \mid \phi \vee \phi \mid [{\lambda }]\phi \mid \langle {\lambda } \rangle \phi \mid \mu {X}.\,{\phi } \mid \nu {X}.\,{\phi }\)

The binding precedence of the operators is as usual, with the fixed point operators binding weakest. We permit ourselves to write \(\bigwedge \nolimits _{\lambda \in A} \phi (\lambda )\) and \(\bigvee \nolimits _{\lambda \in A} \phi (\lambda )\) for a set of actions *A*, as generalisations of the binary conjunction and disjunction.

Let a finite LTSC over \(\mathcal {L}\) with a concurrency-consistent labelling be given. We proceed to give a denotational semantics for our logic by associating every formula \(\varphi \) with the subset \(\llbracket {\varphi }\rrbracket _{\vartheta }\subseteq St \) of states in which it holds; since formulas may contain free variables, \(\llbracket {\varphi }\rrbracket _{\vartheta }\) is relative to an assignment \(\vartheta \) that provides an interpretation of recursion variables \(X \in \mathcal {V}\) as subsets of \( St \). We define \(\llbracket {\cdot }\rrbracket _{\vartheta }\) recursively as follows:

Note that the structure \((2^ St ,\subseteq )\) is a complete lattice. For a monotone endofunction \({{\mathcal {T}}} : 2^ St \rightarrow 2^ St \), we write \(\mu {{\mathcal {F}}}.\,{{\mathcal {T}}}({{\mathcal {F}}})\) and \(\nu {{\mathcal {F}}}.\,{{\mathcal {T}}}({{\mathcal {F}}})\) to denote the least and greatest fixed points of \({{\mathcal {T}}}\), respectively. The interpretation of a formula \(\varphi \) is *independent* of the valuation \(\vartheta \) in case it contains no unbound recursion variables (i.e., all occurrences of a recursion variable are within the scope of a corresponding least or greatest fixed point operator). We simply write \(\llbracket {\varphi }\rrbracket _{}\) when referring to the semantics of such a formula, as it yields the same set of states for every possible environment \(\vartheta \) used to interpret \(\varphi \).
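On a finite state space the fixed points can be computed by plain iteration from \(\emptyset \) (for \(\mu \)) or \( St \) (for \(\nu \)) until stabilisation, which is sound because the semantic operators are monotone. The following Python sketch implements this denotational semantics; the tuple-based formula and LTS encodings are our own illustrative choices, not mCRL2 syntax.

```python
def sem(phi, lts, env):
    """Set of states of a finite LTS satisfying formula phi.
    lts: dict state -> list of (label, successor); env: variable -> set of states.
    Formulas: ("tt",), ("ff",), ("var", X), ("and"/"or", p, q),
    ("dia"/"box", label, p), ("mu"/"nu", X, p)."""
    states = set(lts)
    op = phi[0]
    if op == "tt":
        return states
    if op == "ff":
        return set()
    if op == "var":
        return env[phi[1]]
    if op == "and":
        return sem(phi[1], lts, env) & sem(phi[2], lts, env)
    if op == "or":
        return sem(phi[1], lts, env) | sem(phi[2], lts, env)
    if op == "dia":   # <label> phi: some label-successor satisfies phi
        tgt = sem(phi[2], lts, env)
        return {s for s in states
                if any(l == phi[1] and s2 in tgt for l, s2 in lts[s])}
    if op == "box":   # [label] phi: all label-successors satisfy phi
        tgt = sem(phi[2], lts, env)
        return {s for s in states
                if all(l != phi[1] or s2 in tgt for l, s2 in lts[s])}
    if op in ("mu", "nu"):  # iterate from bottom/top until stable
        X, body = phi[1], phi[2]
        approx = set() if op == "mu" else set(states)
        while True:
            nxt = sem(body, lts, {**env, X: approx})
            if nxt == approx:
                return approx
            approx = nxt
```

For instance, encoding the two formulas of Example 34 below as `("nu", "X", …)` and `("mu", "X", …)` and evaluating them on a small three-state LTS reproduces the behaviour described there.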

### Example 34

Greatest fixed point formulas typically characterise invariant properties, whereas least fixed point formulas characterise liveness properties. For instance, the \(\hbox {L}_\mu \) formula \(\nu {X}.\,{\langle {a} \rangle X \wedge [{b}]\bot }\) asserts the existence of an infinite *a*-path along which no *b*-action can be executed; this is an invariant property along the path. On the other hand, the formula \(\mu {X}.\,{\langle {a} \rangle X \vee \langle {b} \rangle \top }\) asserts that there is a finite path of *a*-labelled transitions, leading to a state in which a *b*-labelled transition is enabled.

### Expressing liveness along just paths

We consider liveness properties of the kind ‘whenever some non-blocking action \(\mathbf {a}\) happens, then inevitably also \(\mathbf {b}\) happens’; this property will be referred to as \(\mathbf {a}\)-\(\mathbf {b}\)-*liveness* and a state is said to satisfy \(\mathbf {a}\)-\(\mathbf {b}\)-liveness exactly when all paths emanating from that state satisfy \(\mathbf {a}\)-\(\mathbf {b}\)-liveness. An \(\hbox {L}_\mu \) formula that asserts that this property holds along all paths in a given (deadlock-free) LTS is the following

Restricting \(\mathbf {a}\)-\(\mathbf {b}\)-liveness to *just* paths requires that somehow the concept of justness is woven into this formula. We explain in several steps how this can be achieved.

In order to facilitate our reasoning, we consider the dual problem of characterising an \(\mathbf {a}\)-\(\mathbf {b}\)-liveness violation along *some* just path. While this problem is technically equally difficult, it is conceptually simpler since we are now only concerned with constructing a formula that describes the *existence* of a just path. Notice that a just path constitutes a violation to \(\mathbf {a}\)-\(\mathbf {b}\)-liveness precisely when (1) this path has a suffix starting at a state \(s'\), reached by an \(\mathbf {a}\)-labelled transition, along which action \(\mathbf {b}\) never takes place and (2) the path is just.

Our approach to characterising states that admit a violating path (should one exist) is based on the following observation. In our setting, any just path can be prefixed by an arbitrary finite path, resulting in a new just path (see Proposition 35 below). This means that we can characterise states that admit a just, \(\mathbf {b}\)-free path. Given any such state, we can characterise the states reaching it via a path ending with an \(\mathbf {a}\)-labelled transition.

For the remainder of this section we fix a finite LTSC with a concurrency-consistent labelling. The justness rephrasing of Proposition 19 requires one to reason about the enabled actions of a state. Let \(\textsf {En}(s)\) be the set of enabled non-blocking actions: \( \textsf {En}(s) = \{ \lambda \in \overline{\mathcal {B}}\mid \exists t \in Tr : src (t) = s \text { and } {\ell }(t) = \lambda \}\).
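Computed over an explicit transition relation, \(\textsf {En}(s)\) is a simple filter. A Python sketch, with an illustrative triple-based encoding of transitions and a parameter for the set \(\mathcal {B}\) of blocking actions:

```python
def enabled(transitions, blocking, s):
    """En(s): labels of non-blocking transitions leaving state s.
    transitions: iterable of (src, label, target) triples;
    blocking: the set B of blocking actions."""
    return {lab for (src, lab, _tgt) in transitions
            if src == s and lab not in blocking}
```
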

### Proposition 35

Let \(\pi \) be a \(\mathcal {B}\)-just path and let \(s_0 t_1 s_1 \dots t_n \pi \) be a path (i.e., the target of \(t_n\) is the starting state of \(\pi \)). Then the path \(s_0 t_1 s_1 \dots t_n \pi \) is \(\mathcal {B}\)-just.

### Proof

Let \(\pi ' = s_0 t_1 s_1 t_2 \dots t_n \pi \) be a path such that \(\pi \) is \(\mathcal {B}\)-just, and let \(s_\pi \) be the starting state of \(\pi \). Suppose *s* is a state on \(\pi '\) and \(\lambda \in \textsf {En}(s)\). We distinguish two cases.

Case 1: *s* does not occur in the prefix \(s_0 t_1 s_1 t_2 \dots t_n\). Then *s* occurs in \(\pi \) and, since \(\pi \) is \(\mathcal {B}\)-just, \(\lambda \) is eliminated in the suffix of \(\pi \) (and therefore also in the suffix of \(\pi '\)) starting in *s*.

Case 2: *s* occurs in the prefix \(s_0 t_1 s_1 t_2 \dots t_n\). Towards a contradiction, assume that \(\lambda \) is not eliminated in the suffix of \(\pi '\) starting in *s*. Let *t* be the transition such that \({\ell }(t) = \lambda \) and \( src (t) = s\). Since \(\lambda \) is not eliminated in the suffix of \(\pi '\) starting in *s* and \(s_\pi \) is reachable from *s*, by condition 2 of Definition 2 there must be an action transition *u* such that \( src (u) = s_\pi \) and \({\ell }(t) = \lambda = {\ell }(u)\). But then \(\lambda \in \textsf {En}(s_\pi )\) and, since \(\pi \) is \(\mathcal {B}\)-just, \(\lambda \) is eliminated in \(\pi \). Contradiction. Consequently, \(\lambda \) is eliminated in the suffix of \(\pi '\) starting in *s*. \(\square \)

The suffixes of a just path are again just. This is formalised by the following proposition.

### Proposition 36

Let \(\pi = s_0 t_1 s_1 t_2 \dots \) be a finite or infinite path. If \(\pi \) is \(\mathcal {B}\)-just then also any suffix of \(\pi \) is \(\mathcal {B}\)-just.

### Proof

Let \(\pi \) be a \(\mathcal {B}\)-just path and let \(\pi '\) be a suffix of \(\pi \). Pick some state *s* in \(\pi '\) and an action \(\lambda \in \textsf {En}(s)\). Since *s* is in \(\pi '\), *s* is also in \(\pi \). Consequently, \(\lambda \) must be eliminated by some action in the suffix of \(\pi \) starting at *s*. Since *s* is in \(\pi '\), the suffix of \(\pi \) starting at *s* is also a suffix of \(\pi '\); hence \(\lambda \) is eliminated in the suffix of \(\pi '\) starting at *s*, and \(\pi '\) is \(\mathcal {B}\)-just. \(\square \)

We next lift the notion of just path to the level of states: a *state* is just whenever it is the start of a just path. Note that we are interested in characterising states that admit a just path that constitutes an \(\mathbf {a}\)-\(\mathbf {b}\)-liveness violation; such paths must have suffixes that are devoid of \(\mathbf {b}\)-actions. For this reason, we parameterise the notion of a just state with a set of actions *K* that limits the set of actions allowed to occur along the just paths.

### Definition 37

Let \(K \subseteq \mathcal {L}\) be a non-empty set of actions. We define \(\mathcal {J}_{}(K)\) as follows:

As we explained at the beginning of this section, we tackle our problem in two steps. First we show that formula \(\textsf {invariant}\), see Table 3, characterises the states that admit a just path along which no \(\mathbf {b}\)-action ever happens; i.e., those are essentially the states in the set \(\mathcal {J}_{}(\mathcal {A}\backslash \{\mathbf {b}\})\). Then we continue by characterising states that have a just path in which an \(\mathbf {a}\)-action is never followed by a \(\mathbf {b}\)-action; that is, we show that the states that admit an \(\mathbf {a}\)-\(\mathbf {b}\)-liveness violation are exactly those satisfying formula \(\textsf {violate}\) of Table 3.

Before we prove our claim that \(\textsf {invariant}\) exactly characterises states admitting just, \(\mathbf {b}\)-free paths, we first prove an auxiliary lemma, which claims that \(\textsf {elim}(\lambda )\) captures exactly those states that have a \(\mathbf {b}\)-free path that eliminates action \(\lambda \) and leads to a state in the set represented by *Y*.

### Lemma 38

For all environments \(\vartheta \), states \(s \in St \), actions \(\lambda \in \mathcal {A}\) and sets \(\mathcal {F} \subseteq St \), we have \(s \in \llbracket {\textsf {elim} (\lambda )}\rrbracket _{\vartheta [Y {:=} \mathcal {F}]}\) iff a state in \(\mathcal {F}\) can be reached from *s* via a finite \(\mathbf {b}\)-free path ending with an action that eliminates \(\lambda \).

### Proof

The formula \(\textsf {elim}(\lambda )\) is a standard least fixed point construct in the modal \(\mu \)-calculus; we refer to textbook proofs for the stated correspondence. \(\square \)
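The least fixed point underlying \(\textsf {elim}(\lambda )\) can be computed as a backward reachability iteration: collect the states from which a finite \(\mathbf {b}\)-free path, ending with a transition whose label eliminates \(\lambda \), reaches the target set. A Python sketch, where the set \(\#\lambda \) of eliminating labels is passed in explicitly and the transition encoding is illustrative:

```python
def elim_states(transitions, eliminates, b, F):
    """States from which a finite b-free path, ending with a transition
    whose label is in 'eliminates' (the set # lambda), reaches a state in F.
    transitions: iterable of (src, label, target) triples."""
    result = set()
    while True:  # least fixed point by iteration from the empty set
        new = set(result)
        for (s, l, t) in transitions:
            if l == b:
                continue                    # only b-free steps allowed
            if l in eliminates and t in F:  # last step eliminates lambda
                new.add(s)
            elif t in result:               # prefix of an eliminating path
                new.add(s)
        if new == result:
            return result
        result = new
```
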

We continue by substantiating the claim that \(\textsf {invariant}\) characterises the states admitting just, \(\mathbf {b}\)-free paths. For the sake of conciseness, let \(\mathcal {J}\) be a shorthand for \(\mathcal {J}_{}(\mathcal {A}\backslash \{\mathbf {b}\})\). The following lemma states that \(\textsf {invariant}\) exactly characterises the set of states \(\mathcal {J}\).

### Lemma 39

For all \(s \in St \) we have \(s \in \mathcal {J}\) iff \(s \in \llbracket {\textsf {invariant} }\rrbracket _{}\).

### Proof

Let \(\vartheta \) be an arbitrary environment. We first show, by proving mutual set inclusion, that \(\mathcal {J}\) is a fixed point of the transformer \(T_{\textsf {invariant}}\) defined below:

Pick an arbitrary state \(s \in \mathcal {J}\). Let \(\pi \) be a path that witnesses \(s \in \mathcal {J}\). Pick an arbitrary action \(\lambda \in \mathcal {A}\) and assume that \(\lambda \in \textsf {En}(s)\). We must show that \(s \in \llbracket {\textsf {elim}(\lambda )}\rrbracket _{\vartheta [Y {:=} \mathcal {J}]}\) holds. From the fact that \(\pi \) witnesses \(s \in \mathcal {J}\), we obtain that there must be some transition *t* on \(\pi \) that eliminates \(\lambda \), i.e., \(\ell (t)\in \#\lambda \), and, by Proposition 36, \( target (t) \in \mathcal {J}\). Hence we can conclude the desired \(s \in \llbracket {\textsf {elim}(\lambda )}\rrbracket _{\vartheta [Y {:=} \mathcal {J}]}\).

Conversely, pick a state \(s \in T_{\textsf {invariant}}(\mathcal {J})\). Suppose \(\textsf {En}(s) = \emptyset \). Then state *s* itself is a just path and hence \(s \in \mathcal {J}\). Next, suppose \(\textsf {En}(s) \ne \emptyset \) and let \(\lambda \in \textsf {En}(s)\). Then also \(s \in \llbracket {\textsf {elim}(\lambda )}\rrbracket _{\vartheta [Y {:=} \mathcal {J}]}\). By Lemma 38, there must be some \(\mathbf {b}\)-free finite path \(s = s_0\,t_0\,s_1\,t_1\dots t_j\,s_{j+1}\) such that transition \(t_j\) eliminates \(\lambda \) and \(s_{j+1} \in \mathcal {J}\). By Proposition 35, the path witnessing \(s_{j+1} \in \mathcal {J}\), prefixed with \(s_0\,t_0\,s_1\,t_1\dots t_j\), is then a just path witnessing \(s \in \mathcal {J}\).

We conclude that, indeed, \(\mathcal {J}\) is a fixed point of \(T_{\textsf {invariant}}\). We next show that \(\mathcal {J}\) is the greatest fixed point of \(T_{\textsf {invariant}}\); that is, for any \(\mathcal {F}\) satisfying \(T_{\textsf {invariant}}(\mathcal {F}) = \mathcal {F}\), we have \(\mathcal {F} \subseteq \mathcal {J}\). Let \(\mathcal {F}\) be a fixed point of \(T_{\textsf {invariant}}\), and choose \(s \in \mathcal {F}\). Our aim is to show that \(s \in \mathcal {J}\). First, observe that since \(\mathcal {F}\) is a fixed point of \(T_{\textsf {invariant}}\) and \(s \in \mathcal {F}\), we can conclude \(s \in T_{\textsf {invariant}}(\mathcal {F})\).

We construct a just, \(\mathbf {b}\)-free path starting in state *s* by eliminating all actions enabled in *s* in an arbitrary but fixed order as follows. Let *L* denote the set of enabled actions in *s*. In case \(L = \emptyset \), the state *s* itself witnesses \(s \in \mathcal {J}\) and we are done. Otherwise, fix a total ordering < on *L*. Pick the least action \(\lambda \in L\). Since \(s \in T_{\textsf {invariant}}(\mathcal {F})\), also \(s \in \llbracket {\textsf {elim}(\lambda )}\rrbracket _{\vartheta [Y {:=} \mathcal {F}]}\) holds. Consequently, by Lemma 38, there is a finite \(\mathbf {b}\)-free path \(s_0\,t_0\,s_1\,t_1\dots t_j\,s_{\lambda }\) such that transition \(t_j\) eliminates \(\lambda \) and \(s_{\lambda } \in \mathcal {F}\). Denote the set of enabled actions in \(s_{\lambda }\) by \(L_\lambda \). Note that \(L_\lambda \) contains at least those actions of *L* that were not eliminated on some path from *s* to \(s_{\lambda }\) [it may, however, contain actions that were already eliminated on *some* path from *s* to \(s_{\lambda }\), but, by Corollary 15, these actions were then not eliminated on *all* paths from *s* to \(s_{\lambda }\) witnessing \(s \in \llbracket {\textsf {elim}(\lambda )}\rrbracket _{\vartheta [Y {:=} \mathcal {F}]}\).] We now repeat this construction by choosing the least \(\lambda ' \in \{\lambda '' \in L_\lambda \cap L \mid \lambda < \lambda '' \}\), leading to a state \(s_{\lambda '}\), *etcetera*, until we have constructed a finite path that eliminates all obligations in *L* and ends in a state \(s' \in \mathcal {F}\). Note that this construction terminates since \(|L| \le |\mathcal {L}| < \infty \).

This means that for any state \(s \in \mathcal {F}\), we can construct a finite path to another state in \(\mathcal {F}\) such that all actions from \(\textsf {En}(s)\) are eliminated on that path. Since this holds invariantly for all states in \(\mathcal {F}\), this construction can be repeated to yield a finite \(\mathbf {b}\)-free just path or (in case it can be continued indefinitely) an infinite \(\mathbf {b}\)-free just path starting in *s*. Hence, \(s \in \mathcal {J}\) and therefore \(\mathcal {F} \subseteq \mathcal {J}\). \(\square \)

We illustrate the correspondence between \(\textsf {invariant}\) and \(\mathcal {J}\) on the example we provided earlier.

### Example 40

Reconsider Example 4, in which Alice drinks coffee and subsequently eats a croissant, Bob is engaged in a series of phone calls, and none of their activities interfere; see the following LTSC:

Suppose we claim that whenever Alice orders coffee, she eventually also orders a croissant. A counterexample to such a claim consists of a just path that contains a \( coffee \) event but is free of \( croissant \) actions following this \( coffee \) event. A state admits such a violating, \( croissant \)-free path iff it satisfies formula \(\textsf {invariant}\).

We argue that in this case, \(s_1\) does not satisfy formula \(\textsf {invariant}\). To this end, we first show that \(s_1\) does not satisfy \(\textsf {elim}( croissant )\). Notice that the set \(\#{ croissant } \backslash \{ croissant \}\) is the empty set, while the set \(\mathcal {A}\backslash (\#{ croissant } \cup \{ croissant \})\) is the set \(\{ coffee , phone \}\). Formula \(\textsf {elim}( croissant )\) therefore effectively holds in \(s_1\) iff formula \({\langle { phone } \rangle Q}\) holds in state \(s_1\). Due to the self-loop, this is the case exactly when state \(s_1\) satisfies \(\textsf {elim}( croissant )\). Since this chain of reasoning must be continued indefinitely and we are looking for the least solution to *Q*, we must conclude that \(s_1\) does not satisfy \(\textsf {elim}( croissant )\). As an immediate consequence we find that \(s_1\) also does not satisfy \(\textsf {invariant}\) since \( croissant \) is one of the enabled actions in that state. Observe that this is in line with the fact that \(s_1 \notin \mathcal {J}({\{ coffee , phone \}})\).

We now return to the original problem of characterising those states that have a just path that violates \(\mathbf {a}\)-\(\mathbf {b}\)-liveness. So far, we have established that formula \(\textsf {invariant}\) characterises those states that admit a \(\mathbf {b}\)-free, just path. A state that admits a path violating \(\mathbf {a}\)-\(\mathbf {b}\)-liveness is therefore one that admits a finite path that, via an \(\mathbf {a}\)-labelled transition, leads to a state satisfying \(\textsf {invariant}\). Given the similarities with the formula for \(\textsf {elim}\), we claim, without further proof, that formula \(\textsf {violate}\) indeed describes the set of states that admit an \(\mathbf {a}\)-\(\mathbf {b}\)-liveness violating just path.

### Theorem 41

Let a finite LTSC with a concurrency-consistent labelling be given. Then all just paths starting in state \(s \in St \) satisfy \(\mathbf {a}\)-\(\mathbf {b}\)-liveness if and only if \(s \notin \llbracket {\textsf {violate} }\rrbracket _{}\).

### Example 42

We continue our previous example, showing that the claim that whenever Alice orders coffee, she eventually also orders a croissant indeed holds true in state \(s_0\).

We find that \(s_0\) satisfies \(\textsf {violate}\) if, and only if, it satisfies \(\langle { coffee } \rangle \textsf {invariant}\), \(\langle { coffee } \rangle \textsf {violate}\), \(\langle { phone } \rangle \textsf {violate}\), or \(\langle { croissant } \rangle \textsf {violate}\). Notice that there is no \( croissant \) action enabled in \(s_0\), so \(s_0\) cannot satisfy \(\langle { croissant } \rangle \textsf {violate}\). In order for \(s_0\) to satisfy \(\langle { phone } \rangle \textsf {violate}\), we require \(s_0\) to again satisfy \(\textsf {violate}\). As before, such a cyclic chain of reasoning does not permit us to conclude that \(s_0\) satisfies \(\textsf {violate}\). Therefore, the only way to show that \(s_0\) satisfies \(\textsf {violate}\) is to show that \(s_0\) satisfies \(\langle { coffee } \rangle \textsf {invariant}\). But, as we may conclude from our previous example, this fails as well, since \(s_1\) does not satisfy \(\textsf {invariant}\). We can therefore conclude that state \(s_0\) does not satisfy \(\textsf {violate}\). Since the LTSC has a concurrency-consistent labelling, we may conclude by Theorem 41 that our liveness claim holds and Alice enjoys a croissant after drinking coffee.

### Example 43

In Example 30, we concluded that all the conditions of Corollary 32 are satisfied for \(E_{ Pet }\), \(\gamma \), \( Pet \) and the \(\mathcal {C}( Pet )\)-assignment \(( npc _{\ell }, afc _{\ell })\), so the LTSC associated with \( Pet \) has a concurrency-consistent labelling. By Theorem 41, we can therefore conclude that the formula in Table 3, with \(\mathcal {B}= \{\mathbf {noncritA},\mathbf {noncritB}\}\), \({\mathbf {a}}=\mathbf {noncritA}\) and \({\mathbf {b}}=\mathbf {critA}\), expresses \(\mathbf {noncritA}\)-\(\mathbf {critA}\)-liveness.

## Automated liveness analysis in mCRL2

A complete mCRL2 specification of Peterson’s algorithm is listed in “Appendix C”. The recursive specification \(E_ Pet \) presented in Sect. 4 served as the starting point and the reader will easily recognise it under the mCRL2 keyword **proc**. That the mCRL2 specification looks somewhat more involved than the specification presented in Sect. 4 is because we have used some convenient extra features of mCRL2. Before we comment on these extra features, we emphasise that the use of these features is by no means essential. We could have also verified liveness for all just paths with the mCRL2 toolset with a specification that almost literally corresponds to the one presented in Sect. 4.

In an mCRL2 specification, labels can be parameterised with data, defined by means of an algebraic specification. In our specification we have included an enumerated type of which the elements correspond to the labels of Peterson’s specification. This allows us to define, in a natural way, the functions \( npc _{\ell }\) and \( afc _{\ell }\) as mappings npc and afc, respectively, on the Label datatype. We then define a predicate interfere(a,a’) that evaluates to true if, and only if, \( npc _{\ell }(\texttt {a})\cap afc _{\ell }(\texttt {a'})\ne \emptyset \) using the mappings npc and afc. In a similar vein, a predicate blocking(a) defines whether a is blocking or not.
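The shape of these predicates can be transcribed directly; the following Python sketch does so with placeholder label names and placeholder entries for npc and afc — the actual data is part of the specification in "Appendix C".

```python
# Illustrative transcription of the interfere predicate on labels.
# The label names and the npc/afc entries are placeholders, not the real data.
npc = {"a_critA": {"compA"}, "a_critB": {"compB"}}
afc = {"a_critA": {"compA"}, "a_critB": {"compB"}}

def interfere(a, a2):
    """True iff the npc set of `a` meets the afc set of `a2` (non-empty intersection)."""
    return bool(npc[a] & afc[a2])

print(interfere("a_critA", "a_critA"))  # -> True
print(interfere("a_critA", "a_critB"))  # -> False
```

In the mCRL2 specification itself, interfere is defined as a mapping on the Label sort in exactly this intersection-non-emptiness style.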

The correspondence between labels and the data values representing them is achieved by turning the labels of \( Pet \) into *multi-actions*, ‘labelling’ the original actions with a parameterised action \(\texttt {label(<action>)}\), where \(\texttt {<action>}\) identifies the original action. For instance, we represent the label \(\mathbf {critA}\) using data value a_critA. In the equation defining procA, we have, instead of the occurrence of \(\mathbf {critA}\) appearing in \( procA \) in Sect. 4, a multi-action critA|label(a_critA). We can then choose to either hide the labels of the form \(\texttt {label(<label>)}\), or hide the labels representing those in the specification of Peterson’s algorithm in Sect. 4. The former allows us to generate a labelled transition system that is identical to that associated with \( Pet \); the latter yields a labelled transition system in which transitions are labelled with actions of the form \(\texttt {label(<label>)}\).

The toolset accepts the first-order modal \(\mu \)-calculus of [12], which generalises the logic \(\hbox {L}_\mu \). With the labels available as a datatype and using the predicates interfere and blocking, we can express liveness for all just paths as an almost direct instantiation (with \(\mathbf {noncritA}\) for \({\mathbf {a}}\) and \(\mathbf {critA}\) for \({\mathbf {b}}\)) of the formula in Table 3. The formula we have used to verify that the mCRL2 specification of Peterson’s algorithm satisfies the required liveness property is listed in “Appendix D”. The extra features of mCRL2 described above facilitate writing the generalised disjunctions and conjunctions as existential and universal quantifications. Note, however, that, since the quantifications are over finite sets, they can be replaced by finite disjunctions and conjunctions.
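The last remark can be made concrete: because the quantification ranges over a finite label type, an existential quantification is just a finite disjunction, and a universal quantification a finite conjunction. A small Python sketch, with an illustrative three-element label type and an illustrative predicate:

```python
from enum import Enum

class Label(Enum):  # a finite, enumerated label type (illustrative)
    COFFEE = 1
    PHONE = 2
    CROISSANT = 3

def enabled(a):     # illustrative predicate on labels
    return a is not Label.CROISSANT

# An existential quantification over a finite type ...
exists = any(enabled(a) for a in Label)
# ... equals the explicit finite disjunction:
disjunction = (enabled(Label.COFFEE) or enabled(Label.PHONE)
               or enabled(Label.CROISSANT))
assert exists == disjunction

# Dually, universal quantification equals the finite conjunction:
forall = all(enabled(a) for a in Label)
conjunction = (enabled(Label.COFFEE) and enabled(Label.PHONE)
               and enabled(Label.CROISSANT))
assert forall == conjunction
```

This is why the quantified mCRL2 formula and the formula in Table 3 denote the same property over a finite label set.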

Verifying whether the mCRL2 specification of Peterson’s algorithm satisfies \(\mathbf {noncritA}\)-\(\mathbf {critA}\)-liveness requires under half a second using the toolset and results in an affirmative verdict (see Footnote 2). This once more confirms the manual correctness proof of [4]. If we modify the specification of the mapping afc by including c_ReadyA in afc(a_read_readyA), c_ReadyB in afc(a_read_readyB), and c_Turn in both afc(a_read_turnA) and afc(a_read_turnB), then the toolset produces the counterexample shown in Fig. 2. Note that the modification corresponds to not treating these actions as signals and that the counterexample represents the non-just path discussed in Sect. 4.

## Conclusions

To facilitate the automated verification of liveness properties, we have proposed a notion of concurrency-consistent labelling for labelled transition systems with concurrency, together with a formulation of justness in terms of states and actions. We have presented sufficient conditions on a process specification in a calculus with ACP-style communication that guarantee that the associated labelled transition system with concurrency has a concurrency-consistent labelling. Moreover, for LTSCs with a concurrency-consistent labelling we have shown how to formalise a liveness property under justness assumptions in the modal \(\mu \)-calculus.

We have built on the firm foundation laid by van Glabbeek in [9], but had to slightly deviate from it to enable a special treatment of signal transitions in a regular process calculus. Furthermore, we essentially relied on the ACP-style communication mechanism in our calculus.

As an example of our theory, we have shown that Peterson’s mutual exclusion algorithm can be specified in such a way that the associated LTSC has a concurrency-consistent labelling. Using the mCRL2 toolset we were able to verify that the specification satisfies the required liveness property for all just paths. We conjecture that similar specifications can be realised for the generalisation of Peterson’s algorithm to *N* processes [17], and for Lamport’s bakery algorithm [15]; it remains to confirm liveness properties for all just paths for these specifications with the mCRL2 toolset.

We see several directions in which our current work can be extended. For example, it would be useful to automate the verification of the syntactic conditions that guarantee that a specification induces an LTSC that has a concurrency-consistent labelling. A more challenging task is to identify to which extent the fragment of the process calculus can be extended without losing the guarantee that the LTSCs associated with expressions in that fragment have a concurrency-consistent labelling. We believe it may even be possible to check sufficient conditions for the LTSC to have a concurrency-consistent labelling by phrasing appropriate modal \(\mu \)-calculus formulas. Finally, an open issue in the context of justness is the definition of behavioural equivalences, such as component-preserving variants of strong bisimilarity [16] or divergence-preserving branching bisimilarity [10]. The latter is a particularly interesting starting point because it deals with abstraction and is the coarsest congruence included in branching bisimilarity that distinguishes livelock from deadlock and is compatible with parallel composition [11].

## Notes

1. The notion of derivation with respect to a set of derivation rules can be defined inductively as usual; we omit it here.
2. The mCRL2 sources can be found in the *academic* example directory of the mCRL2 repository, which can be obtained from https://github.com/mCRL2org/mCRL2, revision b45856d9.

## References

1. Bergstra, J.A., Klop, J.W.: Algebra of communicating processes with abstraction. Theor. Comput. Sci. **37**, 77–121 (1985)
2. Bunte, O., Groote, J.F., Keiren, J.J.A., Laveaux, M., Neele, T., de Vink, E.P., Wesselink, W., Wijs, A., Willemse, T.A.C.: The mCRL2 toolset for analysing concurrent systems—improvements in expressivity and usability. In: TACAS (2), volume 11428 of Lecture Notes in Computer Science, pp. 21–39. Springer (2019)
3. Cranen, S., Luttik, B., Willemse, T.A.C.: Evidence for fixpoint logic. In: CSL, volume 41 of LIPIcs, pp. 78–93. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2015)
4. Dyseryn, V., van Glabbeek, R.J., Höfner, P.: Analysing mutual exclusion using process algebra with signals. In: Peters, K., Tini, S. (eds.) Proceedings Combined 24th International Workshop on Expressiveness in Concurrency and 14th Workshop on Structural Operational Semantics, EXPRESS/SOS 2017, Berlin, Germany, 4th September 2017, volume 255 of EPTCS, pp. 18–34 (2017)
5. Emerson, E.A., Lei, C.-L.: Modalities for model checking: branching time logic strikes back. Sci. Comput. Program. **8**(3), 275–306 (1987)
6. Garavel, H., Lang, F., Mateescu, R., Serwe, W.: CADP 2011: a toolbox for the construction and analysis of distributed processes. STTT **15**(2), 89–107 (2013)
7. van Glabbeek, R.J., Höfner, P.: CCS: it’s not fair!—fair schedulers cannot be implemented in CCS-like languages even under progress and certain fairness assumptions. Acta Inf. **52**(2–3), 175–205 (2015)
8. van Glabbeek, R.J., Höfner, P.: Progress, justness, and fairness. ACM Comput. Surv. **52**(4), 69:1–69:38 (2019)
9. van Glabbeek, R.J.: Justness—a completeness criterion for capturing liveness properties (extended abstract). In: FoSSaCS, volume 11425 of Lecture Notes in Computer Science, pp. 505–522. Springer (2019)
10. van Glabbeek, R.J., Luttik, B., Trcka, N.: Branching bisimilarity with explicit divergence. Fundam. Inform. **93**(4), 371–392 (2009)
11. van Glabbeek, R.J., Luttik, B., Trcka, N.: Computation tree logic with deadlock detection. Log. Methods Comput. Sci. **5**(4) (2009)
12. Groote, J.F., Willemse, T.A.C.: Model-checking processes with data. Sci. Comput. Program. **56**(3), 251–273 (2005)
13. Hennessy, M., Milner, R.: Algebraic laws for nondeterminism and concurrency. J. ACM **32**(1), 137–161 (1985)
14. Kozen, D.: Results on the propositional \(\mu \)-calculus. Theoret. Comput. Sci. **27**(3), 333–354 (1982)
15. Lamport, L.: A new solution of Dijkstra’s concurrent programming problem. Commun. ACM **17**(8), 453–455 (1974)
16. Park, D.: Concurrency and automata on infinite sequences. In: Deussen, P. (ed.) Theoretical Computer Science. Lecture Notes in Computer Science, vol. 104, pp. 167–183. Springer, Berlin (1981)
17. Peterson, G.L.: Myths about the mutual exclusion problem. Inf. Process. Lett. **12**(3), 115–116 (1981)
18. Wesselink, W., Willemse, T.A.C.: Evidence extraction from parameterised Boolean equation systems. In: ARQNL@IJCAR, volume 2095 of CEUR Workshop Proceedings, pp. 86–100. CEUR-WS.org (2018)

## Acknowledgements

We thank the anonymous reviewers for their elaborate reviews and good suggestions. We thank Rob van Glabbeek for interesting discussions on the topic of this paper and for being a source of inspiration to us over the years.


## Appendices

### Appendix A: Detailed proofs of lemmas in Sects. 3 and 5

In this “Appendix” we present elaborate proofs of Lemmas 6, 12 and 14, restated below as Lemmas 44, 45 and 46, respectively.

### Lemma 44

For all transitions *t* and *v*, if \( src (t)= src (v)\) and *t* and *v* are concurrent, then there exists a transition *u* with \( src (u)= target (v)\), \({\ell }(u)={\ell }(t)\) and \( comp (u)= comp (t)\).

### Proof

Let \(P= src (t)= src (v)\) and suppose that *t* and *v* are concurrent, hence \( comp (t)\cap comp (v)=\emptyset \). We prove with induction on *v* that there exists a transition *u* with \( src (u)= target (v)\), \({\ell }(u)={\ell }(t)\) and \( comp (u)= comp (t)\).

If the last rule applied in *v* is \(\textsc {(Pref)}\), \((\textsc {Sum}\text {-}\textsc {l})^{}\), \((\textsc {Sum}\text {-}\textsc {r})^{}\) or \(\textsc {(Rec)}^{}\), then \( comp (v)=\{\epsilon \}\), and, due to the syntactic form of *P*, the last rule applied in *t* must also be one of these rules, so \( comp (t)=\{\epsilon \}\). Thus, we find that \( comp (t)\cap comp (v)=\{\epsilon \}\), contradicting the assumption of the lemma.

Suppose that the last rule applied in *v* is \((\textsc {Par}\text {-}\textsc {l})\). Then there exist \(P_1\) and \(P_2\) such that \(P=P_1\mathbin {\Vert }P_2\), and a subderivation \(v'\) of *v* such that \( src (v')=P_1\), \({\ell }(v)={\ell }(v')\), \( target (v)= target (v')\mathbin {\Vert }P_2\) and \( comp (v)=\textsc {l}\mathbin {\vartriangleright } comp (v')\). From the syntactic shape of \( src (t)= src (v)=P_1\mathbin {\Vert }P_2\) we conclude that the last rule applied in *t* must be \((\textsc {Par}\text {-}\textsc {l})\), \((\textsc {Par}\text {-}\textsc {r})\) or \(\textsc {(Comm)}\). We distinguish these three cases:

- If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {l})\), then *t* has a subderivation \(t'\) with \( src (t')=P_1\) and \({\ell }(t')={\ell }(t)\). Since \( comp (t)=\textsc {l}\mathbin {\vartriangleright } comp (t')\) and \( comp (v)=\textsc {l}\mathbin {\vartriangleright } comp (v')\), and \( comp (t)\cap comp (v)=\emptyset \), we have that \( comp (t')\cap comp (v')=\emptyset \), so \(t'\) and \(v'\) are concurrent. Hence, by the induction hypothesis, there exists a transition \(u'\) with \( src (u')= target (v')\), \({\ell }(u')={\ell }(t')\) and \( comp (u')= comp (t')\). We can now construct from \(u'\) with an application of \((\textsc {Par}\text {-}\textsc {l})\) a derivation *u* with \( src (u)= target (v')\mathbin {\Vert }P_2 = target (v)\), \({\ell }(u)={\ell }(u')={\ell }(t')={\ell }(t)\) and \( comp (u)=\textsc {l}\mathbin {\vartriangleright } comp (u')=\textsc {l}\mathbin {\vartriangleright } comp (t')= comp (t)\).
- If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {r})\), then *t* has a subderivation \(t'\) with \( src (t')=P_2\) and \({\ell }(t')={\ell }(t)\). Then with an application of \((\textsc {Par}\text {-}\textsc {r})\) we can construct from \(t'\) a derivation *u* with \( src (u)= target (v')\mathbin {\Vert }P_2= target (v)\), \({\ell }(u)={\ell }(t')={\ell }(t)\) and \( comp (u)=\textsc {r}\mathbin {\vartriangleright } comp (t')= comp (t)\).
- If the last rule applied in *t* is \(\textsc {(Comm)}\), then *t* has subderivations \(t_1\) and \(t_2\) with \( src (t_1)=P_1\), \( src (t_2)=P_2\), and \(\gamma ({\ell }(t_1),{\ell }(t_2))={\ell }(t)\). From \( comp (t)=\textsc {l}\mathbin {\vartriangleright } comp (t_1)\cup \textsc {r}\mathbin {\vartriangleright } comp (t_2)\) and \( comp (t)\cap comp (v)=\emptyset \), we conclude that \( comp (t_1)\cap comp (v')=\emptyset \), so \(t_1\) and \(v'\) are concurrent. Hence, by the induction hypothesis, there exists a derivation \(u_1\) with \( src (u_1)= target (v')\), \({\ell }(u_1)={\ell }(t_1)\) and \( comp (u_1)= comp (t_1)\). From \(u_1\) and \(t_2\) we can now, with an application of \(\textsc {(Comm)}\), construct a derivation *u* with \( src (u)= src (u_1)\mathbin {\Vert } src (t_2)= target (v')\mathbin {\Vert }P_2= target (v)\), \({\ell }(u)=\gamma ({\ell }(u_1),{\ell }(t_2))=\gamma ({\ell }(t_1),{\ell }(t_2))={\ell }(t)\) and \( comp (u)=\textsc {l}\mathbin {\vartriangleright } comp (u_1)\cup \textsc {r}\mathbin {\vartriangleright } comp (t_2) =\textsc {l}\mathbin {\vartriangleright } comp (t_1)\cup \textsc {r}\mathbin {\vartriangleright } comp (t_2) = comp (t)\).
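The case distinctions above repeatedly exploit that the prefixing operations \(\textsc {l}\mathbin {\vartriangleright }\) and \(\textsc {r}\mathbin {\vartriangleright }\) preserve and reflect disjointness of component sets. A small Python sketch of this algebra, encoding components as strings over {'l','r'} — an encoding chosen purely for illustration:

```python
# Components of a parallel composition encoded as strings over {'l', 'r'}.
# prefix('l', C) models the operation  l |> C  =  { l·sigma | sigma in C }.
def prefix(side, comps):
    return {side + sigma for sigma in comps}

c1 = {"", "l"}   # illustrative component sets
c2 = {"", "r"}

# Prefixing with the same side preserves and reflects disjointness:
# l |> C1 and l |> C2 intersect exactly in l |> (C1 ∩ C2).
assert (prefix("l", c1) & prefix("l", c2)) == prefix("l", c1 & c2)

# Prefixing with different sides always yields disjoint sets, which is why
# components contributed via (Par-l) never clash with those via (Par-r).
assert prefix("l", c1).isdisjoint(prefix("r", c2))
```

These two facts justify, for instance, concluding \( comp (t')\cap comp (v')=\emptyset \) from \( comp (t)\cap comp (v)=\emptyset \) in the \((\textsc {Par}\text {-}\textsc {l})\) case.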

If the last rule applied in *v* is \((\textsc {Par}\text {-}\textsc {r})\), then the argument is symmetric to the argument for the case that the last rule applied in *v* is \((\textsc {Par}\text {-}\textsc {l})\).

Suppose that the last rule applied in *v* is \(\textsc {(Comm)}\). Then there exist subderivations \(v_1\) and \(v_2\) of *v* with \( src (v)= src (v_1)\mathbin {\Vert } src (v_2)\), \({\ell }(v)=\gamma ({\ell }(v_1),{\ell }(v_2))\), \( target (v)= target (v_1)\mathbin {\Vert } target (v_2)\) and \( comp (v)=\textsc {l}\mathbin {\vartriangleright } comp (v_1)\cup \textsc {r}\mathbin {\vartriangleright } comp (v_2)\). From the syntactic shape of \( src (t)= src (v)= src (v_1)\mathbin {\Vert } src (v_2)\), we conclude that the last rule applied in *t* must be \((\textsc {Par}\text {-}\textsc {l})\), \((\textsc {Par}\text {-}\textsc {r})\) or \(\textsc {(Comm)}\). We distinguish these three cases:

- If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {l})\), then *t* has a subderivation \(t'\) with \( src (t')= src (v_1)\) and \({\ell }(t')={\ell }(t)\). Since \( comp (t)=\textsc {l}\mathbin {\vartriangleright } comp (t')\), \( comp (v)=\textsc {l}\mathbin {\vartriangleright } comp (v_1)\cup \textsc {r}\mathbin {\vartriangleright } comp (v_2)\), and \( comp (t)\cap comp (v)=\emptyset \), we have that \( comp (t')\cap comp (v_1)=\emptyset \), so \(t'\) and \(v_1\) are concurrent. Hence, by the induction hypothesis, there exists a transition \(u'\) with \( src (u')= target (v_1)\), \({\ell }(u')={\ell }(t')\) and \( comp (u')= comp (t')\). We can now construct from \(u'\) with an application of \((\textsc {Par}\text {-}\textsc {l})\) a derivation *u* with \( src (u)= src (u')\mathbin {\Vert } target (v_2)= target (v_1)\mathbin {\Vert } target (v_2)= target (v)\), \({\ell }(u)={\ell }(u')={\ell }(t')={\ell }(t)\) and \( comp (u)=\textsc {l}\mathbin {\vartriangleright } comp (u')=\textsc {l}\mathbin {\vartriangleright } comp (t')= comp (t)\).
- If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {r})\), then the argument is similar to the argument in the previous case, using the induction hypothesis for \(v_2\) instead.
- If the last rule applied in *t* is \(\textsc {(Comm)}\), then *t* has subderivations \(t_1\) and \(t_2\) with \( src (t_1)= src (v_1)\), \( src (t_2)= src (v_2)\), \(\gamma ({\ell }(t_1),{\ell }(t_2))={\ell }(t)\). Since \( comp (t)=\textsc {l}\mathbin {\vartriangleright } comp (t_1)\cup \textsc {r}\mathbin {\vartriangleright } comp (t_2)\) and \( comp (v)=\textsc {l}\mathbin {\vartriangleright } comp (v_1)\cup \textsc {r}\mathbin {\vartriangleright } comp (v_2)\), and \( comp (t)\cap comp (v)=\emptyset \), we have that \( comp (t_1)\cap comp (v_1)=\emptyset \) and \( comp (t_2)\cap comp (v_2)=\emptyset \). Hence, by the induction hypothesis, there exist transitions \(u_1\) and \(u_2\) with \( src (u_1)= target (v_1)\), \( src (u_2)= target (v_2)\), \({\ell }(u_1)={\ell }(t_1)\), \({\ell }(u_2)={\ell }(t_2)\), \( comp (u_1)= comp (t_1)\) and \( comp (u_2)= comp (t_2)\). We can now construct from \(u_1\) and \(u_2\) with an application of \(\textsc {(Comm)}\) a derivation *u* with \( src (u)= src (u_1)\mathbin {\Vert } src (u_2)= target (v_1)\mathbin {\Vert } target (v_2)= target (v)\), \({\ell }(u)=\gamma ({\ell }(u_1),{\ell }(u_2))=\gamma ({\ell }(t_1),{\ell }(t_2))={\ell }(t)\), and \( comp (u)=\textsc {l}\mathbin {\vartriangleright } comp (u_1)\cup \textsc {r}\mathbin {\vartriangleright } comp (u_2)=\textsc {l}\mathbin {\vartriangleright } comp (t_1)\cup \textsc {r}\mathbin {\vartriangleright } comp (t_2)= comp (t)\).

Suppose that the last rule applied in *v* is (Enc). Then there exists a subderivation \(v'\) with \( src (v)=\partial _{H}( src (v'))\) for some \(H\subseteq \mathcal {L}\), \({\ell }(v')={\ell }(v)\not \in H\) and \( comp (v)= comp (v')\). From the syntactic shape of \( src (t)= src (v)=\partial _{H}( src (v'))\) it follows that the last rule applied in *t* must be (Enc) too. So *t* has a subderivation \(t'\) with \( src (t')= src (v')\) and \({\ell }(t')={\ell }(t)\). Since \( comp (t)= comp (t')\) and \( comp (v)= comp (v')\), from \( comp (t)\cap comp (v)=\emptyset \) it follows that \( comp (t')\cap comp (v')=\emptyset \). Hence, by the induction hypothesis, there exists \(u'\) with \( src (u')= target (v')\), \({\ell }(u')={\ell }(t')\) and \( comp (u')= comp (t')\). With an application of (Enc) we can now construct from \(u'\) a derivation *u* with \( src (u)=\partial _{H}( src (u'))=\partial _{H}( target (v'))= target (v)\), \({\ell }(u)={\ell }(u')={\ell }(t')={\ell }(t)\) and \( comp (u)= comp (u')= comp (t')= comp (t)\). \(\square \)

### Lemma 45

If the communication function \(\gamma \) is signal-respecting, then a transition *t* is a signal transition if, and only if, \( afc (t)=\emptyset \).

### Proof

We prove with induction on the derivation *t* that *t* is a signal transition if, and only if, \( afc (t)=\emptyset \).

If the last rule applied in *t* is \(\textsc {(Pref)}\), then \( src (t)\ne target (t)\), so *t* is not a signal transition and \( afc (t)\ne \emptyset \).

If the last rule applied in *t* is \((\textsc {Sum}\text {-}\textsc {l})^{}\), \((\textsc {Sum}\text {-}\textsc {r})^{}\) or \(\textsc {(Rec)}^{}\), then *t* is a signal transition if, and only if, \({\ell }(t)\in \mathcal {S}\) and \( src (t)= target (t)\), if, and only if, \( afc (t)=\emptyset \).

If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {l})\), then *t* has a subderivation \(t'\) such that, for some process expression *P*, \( src (t)= src (t')\mathbin {\Vert }P\), \( target (t)= target (t')\mathbin {\Vert }P\) and \({\ell }(t)={\ell }(t')\). On the one hand, if *t* is a signal transition, then from \( src (t)= target (t)\) it follows that \( src (t')= target (t')\) and \({\ell }(t')={\ell }(t)\in \mathcal {S}\), so \(t'\) is a signal transition too. By the induction hypothesis, \( afc (t')=\emptyset \), and, since \( afc (t)=\textsc {l}\mathbin {\vartriangleright } afc (t')\), it follows that \( afc (t)=\emptyset \). On the other hand, if \( afc (t)=\emptyset \), then \( afc (t')=\emptyset \). So by the induction hypothesis it follows that \(t'\) is a signal transition. So \( src (t')= target (t')\) and hence \( src (t)= target (t)\), and therefore *t* is a signal transition too.

If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {r})\), then the argument is similar to the argument in the previous case.

If the last rule applied in *t* is \(\textsc {(Comm)}\), then there exist subderivations \(t_1\) and \(t_2\) such that \( src (t)= src (t_1)\mathbin {\Vert }{} src (t_2)\), \( target (t)= target (t_1)\mathbin {\Vert }{} target (t_2)\), and \(\gamma ({\ell }(t_1),{\ell }(t_2))={\ell }(t)\). On the one hand, if *t* is a signal transition, then from \( src (t)= target (t)\) it follows that \( src (t_1)= target (t_1)\) and \( src (t_2)= target (t_2)\) and, since \(\gamma \) is signal-respecting, from \({\ell }(t)\in \mathcal {S}\) it follows that \({\ell }(t_1),{\ell }(t_2)\in \mathcal {S}\). Hence both \(t_1\) and \(t_2\) are signal transitions too. By the induction hypothesis, \( afc (t_1)= afc (t_2)=\emptyset \), and since \( afc (t)=\textsc {l}\mathbin {\vartriangleright } afc (t_1)\cup \textsc {r}\mathbin {\vartriangleright } afc (t_2)\), it follows that \( afc (t)=\emptyset \). On the other hand, if \( afc (t)=\emptyset \), then since \( afc (t)=\textsc {l}\mathbin {\vartriangleright } afc (t_1)\cup \textsc {r}\mathbin {\vartriangleright } afc (t_2)\), it follows that \( afc (t_1)= afc (t_2)=\emptyset \), so, by the induction hypothesis, \(t_1\) and \(t_2\) are signal transitions. Hence, \( src (t_1)= target (t_1)\), \( src (t_2)= target (t_2)\) and \({\ell }(t_1),{\ell }(t_2)\in \mathcal {S}\). It follows that \( src (t)= src (t_1)\mathbin {\Vert } src (t_2)= target (t_1)\mathbin {\Vert } target (t_2)= target (t)\) and, since \(\gamma \) is signal-respecting, \({\ell }(t)=\gamma ({\ell }(t_1),{\ell }(t_2))\in \mathcal {S}\), so *t* is a signal transition.

If the last rule applied in *t* is (Enc), then there exists a subderivation \(t'\) such that \( src (t)=\partial _{H}( src (t'))\) for some \(H\subseteq \mathcal {L}\), \({\ell }(t')={\ell }(t)\) and \( target (t)=\partial _{H}( target (t'))\). Furthermore, note that \( afc (t')= afc (t)\). On the one hand, if *t* is a signal transition, then \(t'\) is a signal transition too, so by the induction hypothesis, \( afc (t')=\emptyset \). Since \( afc (t)= afc (t')\), it follows that \( afc (t)=\emptyset \). On the other hand, if \( afc (t)=\emptyset \), then \( afc (t')=\emptyset \). So, by the induction hypothesis, \(t'\) is a signal transition, and hence \( src (t')= target (t')\) and \({\ell }(t')\in \mathcal {S}\). It follows that \( src (t)= target (t)\) and \({\ell }(t)\in \mathcal {S}\), so *t* is a signal transition. \(\square \)

### Lemma 46

For all transitions *t* and *v*, if \( src (t)= src (v)\) and \( npc (t)\cap afc (v)=\emptyset \), then there exists a transition *u* with \( src (u)= target (v)\), \({\ell }(u)={\ell }(t)\) and \( npc (u)= npc (t)\). If \(\gamma \) is signal-respecting and *t* is an action transition, then so is *u*.

### Proof

Let \(P= src (t)= src (v)\) and suppose that \( npc (t)\cap afc (v)=\emptyset \). We prove with induction on *v* that there exists *u* with \( src (u)= target (v)\), \({\ell }(u)={\ell }(t)\) and \( npc (u)= npc (t)\).

If the last rule applied in *v* is \(\textsc {(Pref)}\), then there exist \(\lambda \) and \(P'\) such that \(P={\lambda }.P'\) and \( afc (v)=\{\epsilon \}\). Due to the syntactic form of *P*, the last rule applied in *t* must also be \(\textsc {(Pref)}\) and therefore \( npc (t)=\{\epsilon \}\). Thus, we find that \( npc (t)\cap afc (v)=\{\epsilon \}\), contradicting the assumption of the lemma.

If the last rule applied in *v* is \((\textsc {Sum}\text {-}\textsc {l})^{}\) or \((\textsc {Sum}\text {-}\textsc {r})^{}\), then \(P=P_1\mathbin {+}P_2\), so the last rule applied in *t* is also \((\textsc {Sum}\text {-}\textsc {l})^{}\) or \((\textsc {Sum}\text {-}\textsc {r})^{}\). Since \( npc (t)=\{\epsilon \}\) and \( npc (t)\cap afc (v)=\emptyset \), we have that \( afc (v)\ne \{\epsilon \}\), and hence \( afc (v)=\emptyset \). So, by Lemma 12, *v* is a signal transition, and hence \({\ell }(v)\in \mathcal {S}\) and \( src (v)= target (v)\). Then we have that \( src (t)= src (v)= target (v)\), so we can take \(u=t\) to satisfy the requirements of the lemma. Clearly, if *t* is an action transition, then so is *u*.

If the last rule applied in *v* is \(\textsc {(Rec)}^{}\), then \(P=A\), so also the last rule applied in *t* is \(\textsc {(Rec)}^{}\). Since \( npc (t)=\{\epsilon \}\) and \( npc (t)\cap afc (v)=\emptyset \), we have that \( afc (v)=\emptyset \). So, by Lemma 12, *v* is a signal transition, and hence \( src (v)= target (v)\) and \({\ell }(v)\in \mathcal {S}\). Then we have that \( src (t)= src (v)= target (v)\), so we can take \(u=t\) to satisfy the requirements of the lemma. Clearly, if *t* is an action transition, then *u* is an action transition too.

Suppose that the last rule applied in *v* is \((\textsc {Par}\text {-}\textsc {l})\). Then there exist \(P_1\) and \(P_2\) such that \(P=P_1\mathbin {\Vert }P_2\), and a subderivation \(v'\) of *v* such that \( src (v')=P_1\), \({\ell }(v)={\ell }(v')\), \( target (v)= target (v')\mathbin {\Vert }P_2\) and \( afc (v)=\textsc {l}\mathbin {\vartriangleright } afc (v')\). From the syntactic shape of \( src (t)= src (v)=P_1\mathbin {\Vert }P_2\) we conclude that the last rule applied in *t* must be \((\textsc {Par}\text {-}\textsc {l})\), \((\textsc {Par}\text {-}\textsc {r})\) or \(\textsc {(Comm)}\). We distinguish these three cases:

- If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {l})\), then *t* has a subderivation \(t'\) with \( src (t')=P_1\) and \({\ell }(t')={\ell }(t)\). Since \( npc (t)=\textsc {l}\mathbin {\vartriangleright } npc (t')\) and \( afc (v)=\textsc {l}\mathbin {\vartriangleright } afc (v')\), it follows from \( npc (t)\cap afc (v)=\emptyset \) that \( npc (t')\cap afc (v')=\emptyset \). Hence, by the induction hypothesis, there exists a transition \(u'\) with \( src (u')= target (v')\), \({\ell }(u')={\ell }(t')\) and \( npc (u')= npc (t')\). We can now construct from \(u'\) with an application of \((\textsc {Par}\text {-}\textsc {l})\) a derivation *u* with \( src (u)= target (v')\mathbin {\Vert }P_2 = target (v)\), \({\ell }(u)={\ell }(u')={\ell }(t')={\ell }(t)\) and \( npc (u)=\textsc {l}\mathbin {\vartriangleright } npc (u') =\textsc {l}\mathbin {\vartriangleright } npc (t')= npc (t)\). It remains to argue that if \(\gamma \) is signal-respecting and *t* is an action transition, then *u* is an action transition too. To this end, first note that if \(\gamma \) is signal-respecting and *t* is an action transition, then, by Lemma 12, \( afc (t)\ne \emptyset \), say \(\textsc {l}\sigma \in afc (t)\) for some \(\sigma \in \mathcal {C}^{*}\). Then, since \( afc (t)=\textsc {l}\mathbin {\vartriangleright } afc (t')\), it follows that \(\sigma \in afc (t')\), so \(t'\) is an action transition too. Then, by the induction hypothesis, also \(u'\) is an action transition, so there exists \(\sigma '\in afc (u')\), and hence \(\textsc {l}\sigma '\in afc (u)\), which therefore is also an action transition.
- If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {r})\), then *t* has a subderivation \(t'\) with \( src (t')=P_2\) and \({\ell }(t')={\ell }(t)\). Then with an application of \((\textsc {Par}\text {-}\textsc {r})\) we can construct from \(t'\) a derivation *u* with \( src (u)= target (v')\mathbin {\Vert }P_2= target (v)\), \({\ell }(u)={\ell }(t')={\ell }(t)\) and \( npc (u)=\textsc {r}\mathbin {\vartriangleright } npc (t')= npc (t)\). If *t* is an action transition, then, by Lemma 12, \( afc (t)\ne \emptyset \), and hence there exists \(\sigma \in \mathcal {C}^{*}\) such that \(\textsc {r}\sigma \in afc (t)\). From the construction of *u* it is then easy to see that also \(\textsc {r}\sigma \in afc (u)\), proving that *u* is an action transition too.
- If the last rule applied in *t* is \(\textsc {(Comm)}\), then *t* has subderivations \(t_1\) and \(t_2\) with \( src (t_1)=P_1\), \( src (t_2)=P_2\), and \(\gamma ({\ell }(t_1),{\ell }(t_2))={\ell }(t)\). From \( npc (t)=\textsc {l}\mathbin {\vartriangleright } npc (t_1)\cup \textsc {r}\mathbin {\vartriangleright } npc (t_2)\) and \( npc (t)\cap afc (v)=\emptyset \) we conclude that \( npc (t_1)\cap afc (v')=\emptyset \), so by the induction hypothesis there exists a derivation \(u_1\) with \( src (u_1)= target (v')\), \({\ell }(u_1)={\ell }(t_1)\) and \( npc (u_1)= npc (t_1)\). From \(u_1\) and \(t_2\) we can now, with an application of \(\textsc {(Comm)}\), construct a derivation *u* with \( src (u)= src (u_1)\mathbin {\Vert } src (t_2)= target (v')\mathbin {\Vert }P_2= target (v)\), \({\ell }(u)=\gamma ({\ell }(u_1),{\ell }(t_2))=\gamma ({\ell }(t_1),{\ell }(t_2))={\ell }(t)\) and \( npc (u)=\textsc {l}\mathbin {\vartriangleright } npc (u_1)\cup \textsc {r}\mathbin {\vartriangleright } npc (t_2) =\textsc {l}\mathbin {\vartriangleright } npc (t_1)\cup \textsc {r}\mathbin {\vartriangleright } npc (t_2) = npc (t)\). If \(\gamma \) is signal-respecting and *t* is an action transition, then, by Lemma 12, \( afc (t)\ne \emptyset \). Hence, since \( afc (t)=\textsc {l}\mathbin {\vartriangleright } afc (t_1)\cup \textsc {r}\mathbin {\vartriangleright } afc (t_2)\), we have that \( afc (t_1)\ne \emptyset \) or \( afc (t_2)\ne \emptyset \). In the first case, \(t_1\) is an action transition, so we get from the induction hypothesis that \(u_1\) is an action transition. It follows that \( afc (u_1)\ne \emptyset \), and hence \( afc (u)\ne \emptyset \), so *u* is an action transition. In the second case, we simply get that \(\textsc {r}\mathbin {\vartriangleright } afc (t_2)\subseteq afc (u)\), so \( afc (u)\ne \emptyset \) and therefore *u* is an action transition.

If the last rule applied in *v* is \((\textsc {Par}\text {-}\textsc {r})\), then the argument is symmetric to the argument for the case that the last rule applied in *v* is \((\textsc {Par}\text {-}\textsc {l})\).

Suppose that the last rule applied in *v* is \(\textsc {(Comm)}\). Then there exist subderivations \(v_1\) and \(v_2\) of *v* with \( src (v)= src (v_1)\mathbin {\Vert } src (v_2)\), \({\ell }(v)=\gamma ({\ell }(v_1),{\ell }(v_2))\) and \( afc (v)=\textsc {l}\mathbin {\vartriangleright } afc (v_1)\cup \textsc {r}\mathbin {\vartriangleright } afc (v_2)\). From the syntactic shape of \( src (t)= src (v)= src (v_1)\mathbin {\Vert } src (v_2)\), we conclude that the last rule applied in *t* must be \((\textsc {Par}\text {-}\textsc {l})\), \((\textsc {Par}\text {-}\textsc {r})\) or \(\textsc {(Comm)}\). We distinguish these three cases:

If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {l})\), then *t* has a subderivation \(t'\) with \( src (t')= src (v_1)\) and \({\ell }(t')={\ell }(t)\). Since \( npc (t)=\textsc {l}\mathbin {\vartriangleright } npc (t')\) and \( afc (v)=\textsc {l}\mathbin {\vartriangleright } afc (v_1)\cup \textsc {r}\mathbin {\vartriangleright } afc (v_2)\), it follows from \( npc (t)\cap afc (v)=\emptyset \) that \( npc (t')\cap afc (v_1)=\emptyset \). So, by the induction hypothesis, there exists a transition \(u'\) with \( src (u')= target (v_1)\), \({\ell }(u')={\ell }(t')\) and \( npc (u')= npc (t')\). We can now construct from \(u'\) with an application of \((\textsc {Par}\text {-}\textsc {l})\) a derivation *u* with \( src (u)= src (u')\mathbin {\Vert } target (v_2)= target (v_1)\mathbin {\Vert } target (v_2)= target (v)\), \({\ell }(u)={\ell }(u')={\ell }(t')={\ell }(t)\) and \( npc (u)=\textsc {l}\mathbin {\vartriangleright } npc (u')=\textsc {l}\mathbin {\vartriangleright } npc (t')= npc (t)\). If \(\gamma \) is signal-respecting and *t* is an action transition, then, by Lemma 12, we have that \( afc (t)\neq \emptyset \), and hence \( afc (t')\neq \emptyset \). So \(t'\) is an action transition, and hence, by the induction hypothesis, \(u'\) is an action transition. Therefore, by construction, *u* is an action transition too.

If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {r})\), then the argument is similar to the argument in the previous case, using the induction hypothesis for \(v_2\) instead.

If the last rule applied in *t* is \(\textsc {(Comm)}\), then *t* has subderivations \(t_1\) and \(t_2\) with \( src (t_1)= src (v_1)\), \( src (t_2)= src (v_2)\), and \(\gamma ({\ell }(t_1),{\ell }(t_2))={\ell }(t)\). Since \( npc (t)=\textsc {l}\mathbin {\vartriangleright } npc (t_1)\cup \textsc {r}\mathbin {\vartriangleright } npc (t_2)\) and \( afc (v)=\textsc {l}\mathbin {\vartriangleright } afc (v_1)\cup \textsc {r}\mathbin {\vartriangleright } afc (v_2)\), it follows from \( npc (t)\cap afc (v)=\emptyset \) that \( npc (t_1)\cap afc (v_1)=\emptyset \) and \( npc (t_2)\cap afc (v_2)=\emptyset \). Hence, by the induction hypothesis, there exist transitions \(u_1\) and \(u_2\) with \( src (u_1)= target (v_1)\), \( src (u_2)= target (v_2)\), \({\ell }(u_1)={\ell }(t_1)\), \({\ell }(u_2)={\ell }(t_2)\), \( npc (u_1)= npc (t_1)\) and \( npc (u_2)= npc (t_2)\). We can now construct from \(u_1\) and \(u_2\) with an application of \(\textsc {(Comm)}\) a derivation *u* with \( src (u)= src (u_1)\mathbin {\Vert } src (u_2)= target (v_1)\mathbin {\Vert } target (v_2)= target (v)\), \({\ell }(u)=\gamma ({\ell }(u_1),{\ell }(u_2))=\gamma ({\ell }(t_1),{\ell }(t_2))={\ell }(t)\), and \( npc (u)=\textsc {l}\mathbin {\vartriangleright } npc (u_1)\cup \textsc {r}\mathbin {\vartriangleright } npc (u_2)=\textsc {l}\mathbin {\vartriangleright } npc (t_1)\cup \textsc {r}\mathbin {\vartriangleright } npc (t_2)= npc (t)\). If \(\gamma \) is signal-respecting and *t* is an action transition, then, by Lemma 12, \( afc (t)\neq \emptyset \). Hence, since \( afc (t)=\textsc {l}\mathbin {\vartriangleright } afc (t_1)\cup \textsc {r}\mathbin {\vartriangleright } afc (t_2)\), we have that \( afc (t_1)\neq \emptyset \) or \( afc (t_2)\neq \emptyset \). In the first case, \(t_1\) is an action transition, so we get from the induction hypothesis that \(u_1\) is an action transition. It follows that \( afc (u_1)\neq \emptyset \), and hence \( afc (u)\neq \emptyset \), so *u* is an action transition. In the second case, \(t_2\) is an action transition, so we get from the induction hypothesis that \(u_2\) is an action transition. It follows that \( afc (u_2)\neq \emptyset \), and hence \( afc (u)\neq \emptyset \), so *u* is an action transition.

Suppose that the last rule applied in *v* is (Enc). Then there exists a subderivation \(v'\) with \( src (v)=\partial _{H}( src (v'))\) for some \(H\subseteq \mathcal {L}\), \({\ell }(v')={\ell }(v)\not \in H\) and \( npc (v)= npc (v')\). From the syntactic shape of \( src (t)= src (v)=\partial _{H}( src (v'))\) it follows that the last rule applied in *t* must be (Enc) too. So *t* has a subderivation \(t'\) with \( src (t')= src (v')\) and \({\ell }(t')={\ell }(t)\). Since \( npc (t)= npc (t')\) and \( afc (v)= afc (v')\), from \( npc (t)\cap afc (v)=\emptyset \) it follows that \( npc (t')\cap afc (v')=\emptyset \). Hence, by the induction hypothesis, there exists \(u'\) with \( src (u')= target (v')\), \({\ell }(u')={\ell }(t')\) and \( npc (u')= npc (t')\). With an application of (Enc) we can now construct from \(u'\) a derivation *u* with \( src (u)=\partial _{H}( src (u'))=\partial _{H}( target (v'))= target (v)\), \({\ell }(u)={\ell }(u')={\ell }(t')={\ell }(t)\) and \( npc (u)= npc (u')= npc (t')= npc (t)\).

If \(\gamma \) is signal-respecting and *t* is an action transition, then, by Lemma 12 and \( afc (t')= afc (t)\), \(t'\) is an action transition too, so by the induction hypothesis also \(u'\) is an action transition, and thus it follows, by Lemma 12 and \( afc (u)= afc (u')\), that *u* is an action transition. \(\square \)

### B Detailed proof of a lemma in Sect. 7

In this appendix, we present a detailed proof of Lemma 27, restated below as Lemma 47.

### Lemma 47

Let *E* be a sequential recursive specification and let *P* be a parallel-sequential process expression over *E*. If \(P'\) is reachable from *P*, then \(\mathcal {C}(P')=\mathcal {C}(P)\) and \({P'}\mid _{\sigma }\) is reachable from \({P}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P)\).

### Proof

Let us first consider the special case that there is a transition *t* such that \( src (t)=P\) and \( target (t)=P'\). With induction on *t* we establish that \(\mathcal {C}(P')=\mathcal {C}(P)\) and that \({P'}\mid _{\sigma }\) is reachable from \({P}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P)\).

If the last rule applied in *t* is \(\textsc {(Pref)}\), (Sum-l), (Sum-r) or \(\textsc {(Rec)}^{}\), then *P* is a sequential process expression and, since *E* is a sequential recursive specification, so is \(P'\). It follows that \(\mathcal {C}(P')=\{\epsilon \}=\mathcal {C}(P)\) and \({P'}\mid _{\epsilon }=P'\) is reachable from \(P={P}\mid _{\epsilon }\).

If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {l})\), then there exist \(P_1\), \(P_1'\) and \(P_2\) such that \(P=P_1\mathbin {\Vert }P_2\), \(P'=P_1'\mathbin {\Vert }P_2\), and *t* has a subderivation \(t'\) with \( src (t')=P_1\) and \( target (t')=P_1'\). By the induction hypothesis, \(\mathcal {C}(P_1)=\mathcal {C}(P_1')\) and \({P_1'}\mid _{\sigma }\) is reachable from \({P_1}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P_1)\). It follows that \(\mathcal {C}(P) =\textsc {l}\mathbin {\vartriangleright }\mathcal {C}(P_1)\cup \textsc {r}\mathbin {\vartriangleright }\mathcal {C}(P_2) =\textsc {l}\mathbin {\vartriangleright }\mathcal {C}(P_1')\cup \textsc {r}\mathbin {\vartriangleright }\mathcal {C}(P_2) =\mathcal {C}(P')\). Moreover, since \({P_1'}\mid _{\sigma }\) is reachable from \({P_1}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P_1)\), and \({P'}\mid _{\sigma }={P}\mid _{\sigma }\) for all \(\sigma \in \textsc {r}\mathbin {\vartriangleright }\mathcal {C}(P_2)\), it also follows that \({P'}\mid _{\sigma }\) is reachable from \({P}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P)\).

If the last rule applied in *t* is \((\textsc {Par}\text {-}\textsc {r})\), then the argument is analogous to the argument in the case that the last rule applied is \((\textsc {Par}\text {-}\textsc {l})\).

If the last rule applied in *t* is \(\textsc {(Comm)}\), then there exist \(P_1\), \(P_1'\), \(P_2\) and \(P_2'\) such that \(P=P_1\mathbin {\Vert }P_2\), \(P'=P_1'\mathbin {\Vert }P_2'\), and *t* has subderivations \(t_1\) and \(t_2\) with \( src (t_1)=P_1\), \( target (t_1)=P_1'\), \( src (t_2)=P_2\) and \( target (t_2)=P_2'\). By the induction hypothesis, \(\mathcal {C}(P_1)=\mathcal {C}(P_1')\), \(\mathcal {C}(P_2)=\mathcal {C}(P_2')\), \({P_1'}\mid _{\sigma }\) is reachable from \({P_1}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P_1)\), and \({P_2'}\mid _{\sigma }\) is reachable from \({P_2}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P_2)\). It follows that \(\mathcal {C}(P) =\textsc {l}\mathbin {\vartriangleright }\mathcal {C}(P_1)\cup \textsc {r}\mathbin {\vartriangleright }\mathcal {C}(P_2) =\textsc {l}\mathbin {\vartriangleright }\mathcal {C}(P_1')\cup \textsc {r}\mathbin {\vartriangleright }\mathcal {C}(P_2') =\mathcal {C}(P')\). Moreover, since \({P_1'}\mid _{\sigma }\) is reachable from \({P_1}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P_1)\) and \({P_2'}\mid _{\sigma }\) is reachable from \({P_2}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P_2)\), it also follows that \({P'}\mid _{\sigma }\) is reachable from \({P}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P)\).
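To make the component-set equations used in these cases concrete, here is a small worked example; the sequential process expressions \(Q_1\), \(Q_2\) and \(Q_3\) are hypothetical, and by the base case each has component set \(\{\epsilon \}\):

```latex
% Hypothetical example: computing the component set of a nested
% parallel composition from the equations used in the proof above.
\begin{align*}
\mathcal{C}\bigl(Q_1 \mathbin{\Vert} (Q_2 \mathbin{\Vert} Q_3)\bigr)
  &= \textsc{l}\mathbin{\vartriangleright}\mathcal{C}(Q_1)
     \cup \textsc{r}\mathbin{\vartriangleright}
          \mathcal{C}(Q_2 \mathbin{\Vert} Q_3) \\
  &= \textsc{l}\mathbin{\vartriangleright}\{\epsilon\}
     \cup \textsc{r}\mathbin{\vartriangleright}
          \bigl(\textsc{l}\mathbin{\vartriangleright}\{\epsilon\}
          \cup \textsc{r}\mathbin{\vartriangleright}\{\epsilon\}\bigr) \\
  &= \{\textsc{l}\} \cup \{\textsc{r}\textsc{l},\, \textsc{r}\textsc{r}\}
   = \{\textsc{l},\, \textsc{r}\textsc{l},\, \textsc{r}\textsc{r}\}.
\end{align*}
```

Each string in the resulting set addresses one parallel component, so the three projections of the composition are \(Q_1\), \(Q_2\) and \(Q_3\), respectively.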

If the last rule applied in *t* is (Enc), then there exist \(P_1\) and \(P_1'\) such that \(P=\partial _{H}(P_1)\) and \(P'=\partial _{H}(P_1')\), and *t* has a subderivation \(t'\) with \( src (t')=P_1\) and \( target (t')=P_1'\). By the induction hypothesis, \(\mathcal {C}(P)=\mathcal {C}(P_1)=\mathcal {C}(P_1')=\mathcal {C}(P')\), and \({P'}\mid _{\sigma }={P_1'}\mid _{\sigma }\) is reachable from \({P_1}\mid _{\sigma }={P}\mid _{\sigma }\) for all \(\sigma \in \mathcal {C}(P)\).

Now, if \(P'\) is reachable from *P*, then the statement of the lemma follows with a straightforward induction on the number of transitions in a path from *P* to \(P'\). \(\square \)
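For completeness, the induction on the number of transitions can be sketched as follows (our elaboration, writing \(\rightarrow \) for the transition relation): given a path \(P=P_0\rightarrow P_1\rightarrow \cdots \rightarrow P_n=P'\), the single-transition case established above yields, for every \(0\le i<n\) and every \(\sigma \in \mathcal {C}(P)\),

```latex
% Step case of the path induction, using the single-transition result.
\mathcal{C}(P_{i+1}) = \mathcal{C}(P_i)
\qquad\text{and}\qquad
{P_{i+1}}\mid_{\sigma}\ \text{is reachable from}\ {P_i}\mid_{\sigma},
```

so \(\mathcal {C}(P')=\mathcal {C}(P)\), and \({P'}\mid _{\sigma }\) is reachable from \({P}\mid _{\sigma }\) by transitivity of reachability.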

### C mCRL2 specification of Peterson’s algorithm

### D Formula expressing liveness for all just paths

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Bouwman, M., Luttik, B. & Willemse, T. Off-the-shelf automated analysis of liveness properties for just paths.
*Acta Informatica* **57**, 551–590 (2020). https://doi.org/10.1007/s00236-020-00371-w
