Justness: A Completeness Criterion for Capturing Liveness Properties

This paper poses that transition systems constitute a good model of distributed systems only in combination with a criterion telling which paths model complete runs of the represented systems. Among such criteria, progress is too weak to capture relevant liveness properties, and fairness is often too strong; for typical applications we advocate the intermediate criterion of justness. Previously, we proposed a definition of justness in terms of an asymmetric concurrency relation between transitions. Here we define such a concurrency relation for the transition systems associated to the process algebra CCS, as well as its extensions with broadcast communication and signals, thereby making these process algebras suitable for capturing liveness properties requiring justness.


1 Introduction
Transition systems are a common model for distributed systems. They consist of sets of states, also called processes, and transitions, each transition going from a source state to a target state. A given distributed system D corresponds to a state P in a transition system T, the initial state of D. The other states of D are the processes in T that are reachable from P by following the transitions. A run of D corresponds with a path in T: a finite or infinite alternating sequence of states and transitions, starting with P, such that each transition goes from the state before it to the state after it. Whereas each finite path in T starting from P models a partial run of D, i.e., an initial segment of a (complete) run, typically not each path models a run. Therefore a transition system constitutes a good model of distributed systems only in combination with what we here call a completeness criterion: a selection of a subset of all paths as complete paths, modelling runs of the represented system.
A liveness property says that "something [good] must happen" eventually [22]. Such a property holds for a distributed system if the [good] thing happens in each of its possible runs. One of the ways to formalise this in terms of transition systems is to postulate a set G of good states, and say that the liveness property G holds for the process P if all complete paths starting in P pass through a state of G [18]. Without a completeness criterion the concept of a liveness property appears to be meaningless.
Example 1 The transition system on the right models Cataline eating a croissant in Paris. It abstracts from all activity in the world except the eating of that croissant, and thus has two states only (the states of the world before and after this event) and one transition t. We depict states by circles and transitions by arrows between them. An initial state is indicated by a short arrow without a source state. A possible liveness property says that the croissant will be eaten. It corresponds with the set of states G consisting of state 2 only. The states of G are indicated by shading. The depicted transition system has three paths starting with state 1: 1, 1t and 1t2. The path 1t2 models the run in which Cataline finishes the croissant. The path 1 models a run in which Cataline never starts eating the croissant, and the path 1t models a run in which she starts eating it, but never finishes.

arXiv:1909.00286v2 [cs.LO] 25 Aug 2021
The liveness property G holds only when using a completeness criterion that disqualifies the paths 1 and 1t, saying that they do not model actual runs of the system, thus leaving 1t2 as the sole complete path. ¶ The transitions of transition systems can be understood to model atomic actions that can be performed by the represented systems. Although we allow these actions to be instantaneous or durational, in the remainder of this paper we adopt the assumption that "atomic actions always terminate" [30]. This is a partial completeness criterion. It rules out the path 1t in Example 1. We build this assumption into the definition of a path by henceforth requiring that finite paths end with a state.
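As an illustration, Example 1 can be rendered as a tiny transition system in Python; the encoding below (transitions as triples, a completeness criterion as a predicate on paths) is our own sketch, not the paper's formalisation.

```python
# Example 1 as a minimal labelled transition system (encoding ours).
# The single transition t is a (source, label, target) triple.
t = (1, "eat", 2)
transitions = [t]

def finite_paths(start):
    """All finite paths from `start`, as alternating state/transition tuples.
    Finite paths end with a state ("atomic actions always terminate")."""
    paths = [(start,)]
    frontier = [(start,)]
    while frontier:
        path = frontier.pop()
        for tr in transitions:
            if tr[0] == path[-1]:
                ext = path + (tr, tr[2])
                paths.append(ext)
                frontier.append(ext)
    return paths

# A completeness criterion is a predicate selecting the complete paths;
# here: paths that cannot be extended any further.
def complete(path):
    return not any(tr[0] == path[-1] for tr in transitions)

# Of the paths 1 and 1 t 2, only the latter is complete under this
# criterion; the path 1 t is already excluded by construction.
assert [p for p in finite_paths(1) if complete(p)] == [(1, t, 2)]
```

The predicate `complete` is just one possible criterion; the paper's point is precisely that different choices of such a predicate yield different liveness properties.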
Progress The most widely employed completeness criterion is progress. In the context of closed systems, having no run-time interactions with the environment, it is the assumption that a run will never get stuck in a state with outgoing transitions. This rules out the path 1 in Example 1, as t is outgoing. When adopting progress as completeness criterion, the liveness property G holds for the system modelled in Example 1.
Progress is assumed in almost all work on process algebra that deals with liveness properties, mostly implicitly. Milner makes an explicit progress assumption for the process algebra CCS in [25]. A progress assumption is built into the temporal logics LTL [31], CTL [8] and CTL* [9], namely by disallowing states without outgoing transitions and evaluating temporal formulas by quantifying over infinite paths only. In [21] the 'multiprogramming axiom' is a progress assumption, whereas in [1] progress is assumed as a 'fundamental liveness property'.

Definition 1 ([18]) Completeness criterion F is stronger than completeness criterion H iff F rules out (as incomplete) at least all paths that are ruled out by H.
As we argue in [11,16,18], a progress assumption as above is too strong in the context of reactive systems. There, a transition typically represents an interaction between the distributed system being modelled and its environment. In many cases a transition can occur only if both the modelled system and the environment are ready to engage in it. We therefore distinguish blocking and non-blocking transitions. A transition is non-blocking if the environment cannot or will not block it, so that its execution is entirely under the control of the system under consideration. A blocking transition on the other hand may fail to occur because the environment is not ready for it. The same was done earlier in the setting of Petri nets [33], where blocking and non-blocking transitions are called cold and hot, respectively.
In [11,16,18] we work with transition systems that are equipped with a partitioning of the transitions into blocking and non-blocking ones, and reformulate the progress assumption as follows: a (transition) system in a state that admits a non-blocking transition will eventually progress, i.e., perform a transition.
In other words, a run will never get stuck in a state with outgoing non-blocking transitions. In Example 1, when adopting progress as our completeness criterion, we assume that Cataline actually wants to eat the croissant, and does not willingly remain in State 1 forever. When that assumption is unwarranted, one would model her behaviour by a transition system different from that of Example 1. However, she may still be stuck in State 1 by lack of any croissant to eat. If we want to model the capability of the environment to withhold a croissant, we classify t as a blocking transition, and the liveness property G does not hold. If we abstract from a possible shortage of croissants, t is deemed a non-blocking transition, and, when assuming progress, G holds.
As an alternative approach to a dogmatic division of transitions in a transition system, one could shift the status of transitions to the progress property, and speak of B-progress when B is the set of blocking transitions. In that approach, G holds for State 1 of Example 1 under the assumption of B-progress when t ∉ B, but not when t ∈ B.
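The parametrised notion of B-progress can be sketched as a predicate on finite paths; the encoding (transitions as triples, B as a set of labels) is our own illustration.

```python
# B-progress as a predicate on finite paths (encoding ours): a finite path
# may be complete only if its last state admits no transition whose label
# lies outside the blocking set B.
transitions = [(1, "eat", 2)]                 # Example 1: the transition t

def b_progressing(path, B):
    last = path[-1]
    return not any(src == last and lbl not in B
                   for (src, lbl, tgt) in transitions)

# If the environment may withhold the croissant, "eat" is blocking, and the
# path consisting of state 1 alone may count as complete:
assert b_progressing((1,), B={"eat"})
# If "eat" is non-blocking, that path is ruled out as incomplete:
assert not b_progressing((1,), B=set())
assert b_progressing((1, (1, "eat", 2), 2), B=set())
```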
To properly capture reactive systems, we work with labelled transition systems, where each transition is labelled with the action that occurs (or fact that is revealed) when taking this transition. The labelling is typically used to describe how the transition synchronises with, and thus is dependent on, the environment. Whether a transition is blocking is then completely determined by its label. Hence we work with sets B of blocking actions and regard a transition as blocking iff it is labelled by an action in B.
Justness Justness is a completeness criterion proposed in [11,16,18]. It strengthens progress. It can be argued that once one adopts progress it makes sense to go a step further and adopt even justness.
Example 2 The transition system on the right models Alice making an unending sequence of phone calls in London. There is no interaction of any kind between Alice and Cataline. Yet, we may choose to abstract from all activity in the world except the eating of the croissant by Cataline, and the making of calls by Alice. This yields the combined transition system on the bottom right. Even when taking the transition t to be non-blocking, progress is not a strong enough completeness criterion to ensure that Cataline will ever eat the croissant, for the infinite path that loops in the first state is complete. Nevertheless, as nothing stops Cataline from making progress, in reality t will occur. [18] ¶ This example is not a contrived corner case, but a rather typical illustration of an issue that is central to the study of distributed systems. Other illustrations of this phenomenon occur in [11, Section 9.1], [17, Section 10], [12, Section 1.4], [13] and [7, Section 4]. The criterion of justness aims to ensure the liveness property occurring in these examples. In [18] it is formulated as follows: Once a non-blocking transition is enabled that stems from a set of parallel components, one (or more) of these components will eventually partake in a transition.
In Example 2, t is a non-blocking transition enabled in the initial state. It stems from the single parallel component Cataline of the distributed system under consideration. Justness therefore requires that Cataline must partake in a transition. This can only be t, as all other transitions involve component Alice only. Hence justness says that t must occur. The infinite path starting in the initial state and not containing t is ruled out as unjust, and thereby incomplete. Unlike progress, the concept of justness as formulated above is in need of some formalisation, i.e., to formally define a component, to make precise for concrete transition systems what it means for a transition to stem from a set of components, and to define when a component partakes in a transition.
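The combined system of Example 2 can be sketched as the interleaving product of the two component transition systems; the encoding and the helper names are our own, not the paper's.

```python
# Example 2 (encoding ours): the combined system is the interleaving product
# of Cataline's one-transition LTS with Alice's one-state call loop.
cataline = {(1, "eat", 2)}            # croissant uneaten -> eaten
alice    = {("L", "call", "L")}       # an unending sequence of calls

def states(lts):
    return {s for (s, _, _) in lts} | {s for (_, _, s) in lts}

def product(lts1, lts2):
    """Interleaving parallel composition; there is no synchronisation,
    since Alice and Cataline do not interact."""
    trs = {((s, q), a, (s2, q)) for (s, a, s2) in lts1 for q in states(lts2)}
    trs |= {((s, q), a, (s, q2)) for (q, a, q2) in lts2 for s in states(lts1)}
    return trs

combined = product(cataline, alice)
initial = (1, "L")
# Both "eat" and "call" are enabled initially; the infinite path that loops
# on the "call" transition makes progress in every state, so progress alone
# never forces "eat" to occur -- justness is needed for that.
assert {a for (s, a, _) in combined if s == initial} == {"eat", "call"}
```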
A formalisation of justness for the labelled transition system generated by the process algebra AWN, the Algebra for Wireless Networks [10], was provided in [11]. In the same vein, [16] offered a formalisation for the labelled transition systems generated by CCS, the Calculus of Communicating Systems [25], and its extension ABC, the Algebra of Broadcast Communication [16], a variant of CBS, the Calculus of Broadcasting Systems [32]. The same was done for CCS extended with signals in [7]. The formalisations of [16,7] coinductively define B-justness, where B ranges over sets of actions that are deemed to be blocking, as a family of predicates on paths, and proceed by a case distinction on the operators in the language. Although these definitions do capture the concept of justness formulated above, it is not easy to see why.
A more syntax-independent, and perhaps more convincing, formalisation of justness occurred in [18]. There it is defined directly on transition systems that are equipped with a, possibly asymmetric, concurrency relation between transitions. However, the concurrency relation itself is defined only for the transition system generated by a fragment of CCS, and the generalisation to full CCS, and other process algebras, is non-trivial.
It is the purpose of this paper to make the definition of justness from [18] available to a large range of process algebras by defining the concurrency relation for CCS, for ABC, and for the extension of CCS with signals used in [7]. We do this in a precise (Section 6) as well as in an approximate way (Section 7), and show that both approaches lead to the same concept of justness (Section 9). Moreover, in all cases we establish a closure property on the concurrency relation ensuring that justness is a meaningful notion. We show that for all these process algebras justness is feasible. Here feasibility is a requirement on completeness criteria advocated in [1,23,18]. Finally, we establish agreement between the formalisation of justness from [18] and the present paper, and the original coinductive ones from [16] and [7].
Fairness Fairness assumptions are special kinds of completeness criteria. They postulate that if certain activities can happen often enough, they will in fact happen.
Example 3 Suppose Bart stands behind a bar and wants to order a beer. But by lack of any formal queueing protocol many other customers get their beer before Bart does. This situation can be modelled as a transition system where in each state in which Bart is not served yet there is an outgoing transition modelling that Bart gets served, but there are also outgoing transitions modelling that someone else gets served instead. The essence of fairness is the assumption that Bart will get his beer eventually. Fairness rules out as unfair, and thereby incomplete, any path in which Bart could have gotten a beer any time, but never will. ¶ Fairness comes in two flavours: weak and strong fairness. Weak fairness merely rules out paths in which some task is enabled in each state, yet never occurs. Strong fairness also rules out paths in which some task is enabled infinitely often, yet never occurs. Here a task is an appropriate set of transitions, in Example 3 all transitions giving Bart a beer. In Example 3 the liveness property that Bart will get a beer holds under the assumption of weak fairness, and thus certainly when assuming strong fairness. It does not hold when merely assuming justness, let alone when merely assuming progress. Our survey paper [18] proposes a unifying definition of strong and weak fairness, parametrised by the definition of a task. Many notions of fairness found in the literature are cast as instances of this definition, differing only in how to define tasks. The same paper also offers a taxonomy of completeness criteria, ordered by strength (cf. Definition 1). This taxonomy contains the criteria progress and justness, as well as all these fairness criteria. Besides strong and weak fairness we also consider a form of fairness even weaker than weak fairness, requiring a task to be enabled in each state on a path, as well as "during each transition". 
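Weak and strong fairness can be illustrated on lasso-shaped paths, i.e., a cycle of transitions repeated forever; the encoding below, including the task name T_bart, is our own sketch of Example 3.

```python
# Weak and strong fairness over lasso paths cycle^omega (encoding ours).
# A task is a set of transitions; here the task "Bart gets a beer".
other = (0, "other", 0)               # someone else is served
bart  = (0, "bart", 1)                # Bart is served
transitions = [other, bart]
T_bart = {bart}

def task_enabled(task, state):
    return any(tr in task for tr in transitions if tr[0] == state)

def weakly_fair(cycle, task):
    """Unfair iff the task is enabled in every state of the repeated cycle
    yet no transition of the cycle belongs to it."""
    if any(tr in task for tr in cycle):
        return True
    return not all(task_enabled(task, tr[0]) for tr in cycle)

def strongly_fair(cycle, task):
    """Unfair iff the task is enabled in some state visited infinitely
    often (a state of the cycle) yet no cycle transition belongs to it."""
    if any(tr in task for tr in cycle):
        return True
    return not any(task_enabled(task, tr[0]) for tr in cycle)

# Serving others forever is ruled out as unfair, weakly and strongly:
assert not weakly_fair([other], T_bart)
assert not strongly_fair([other], T_bart)
assert weakly_fair([other, bart], T_bart)
```

In this example every state of the cycle enables the task, so weak fairness already suffices; strong fairness only differs on cycles where the task is enabled in some but not all states.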
We are not aware of any completeness criterion occurring in the literature that is not progress, justness or one of these forms of fairness, apart from the weakest possible completeness criterion, declaring all paths complete and thereby ensuring almost no liveness properties.
In [18] we argue that fairness assumptions are by default unwarranted. In real-world situations akin to Example 3 there is in fact no guarantee that Bart will ever get a beer. This is in contrast to justness, which by default is warranted. One could argue that a formalisation of justness is not necessary to arrive at a model of concurrency in which Cataline will eat her croissant, as fairness is an alternative to justness that accomplishes the same goal. But here we reject that argument on grounds that fairness tends to rule out as incomplete more paths than necessary. As argued in [13], this can lead to false guarantees about the satisfaction of certain liveness properties, e.g., Bart getting a beer in Example 3.
Reading guide In Section 2, following [18], we present labelled transition systems with a concurrency relation satisfying some closure property, and define justness as a predicate on paths in any such transition system. Again following [18], we also propose an optional characterisation of the concurrency relation in terms of more primitive notions of the necessary and affected components of a transition.
In Section 3 we show liberal conditions under which B-justness meets the requirement of feasibility. In Section 4 we recall the unifying definition of fairness from [18], and show how progress can be cast as a particular fairness property. In spite of this we continue to see progress as a completeness criterion different from fairness. We cannot cast justness as a fairness property.
Section 5 recalls the syntax and semantics of CCS, and its extensions ABC and CCSS with broadcast communication and signals, respectively. It also proposes a simplification of the operational semantics of CCSS by encoding signal emissions as transitions, and recalls an alternative presentation of ABC that avoids negative premises in the operational semantics.
In Section 6 we associate labelled transition systems with a concurrency relation to each of the five process algebras from Section 5, and show that they satisfy the closure property of Section 2. The concurrency relation is defined in terms of synchrons, novel particles out of which transitions are seen to be composed. Sections 2 and 6 together constitute a definition of justness valid for the five process algebras of Section 5. For CCS the concurrency relation is symmetric, but for the other four process algebras it is not. The alternative presentations of CCSS and ABC feature indicator transitions that do not model state changes of the represented system. Instead, an indicator transition reveals a property of its source state P, which is also its target state, for instance that process P is emitting a signal, or that it cannot receive a given broadcast. Indicator transitions need to be excepted from the justness requirement.
Section 7 revisits the component-based characterisation of the concurrency relation contemplated in Section 2, and proposes two alternative concepts of system components associated to a transition, with for each a classification of components as necessary and/or affected. The dynamic components give rise to the exact same concurrency relation as defined in terms of synchrons in Section 6, whereas the static components yield an underapproximation: a strictly smaller concurrency relation. However, only the static components satisfy a closure property proposed in [18].
Section 8 provides two computational interpretations of CCS and its extensions, the default one corresponding to the concurrency relation of Section 6, and thus the dynamic concurrency relation of Section 7; the other corresponding to the static concurrency relation of Section 7. We also provide a natural sublanguage on which the two concurrency relations coincide. Section 9 shows that the dynamic and static concurrency relations give rise to the very same concept of justness. Hence, for the study of justness we may use whichever of these concurrency relations is the most convenient. Using this, in Section 10 we apply the results of Section 3 to show that B-justness is feasible for full CCS [24] and its extensions with broadcast communication or signals.
Section 11 shows that the concurrency relation of Section 6 agrees with the one defined earlier in [16] on pairs of transitions for which both are defined. Yet, the concurrency relation from [16] was defined only for transitions with the same source, and hence is not suitable for our formalisation of justness.
In Sections 12 and 13 we establish that the concept of justness based on a concurrency relation between transitions, as proposed in [18] and applied to CCS and its extension in the present paper, coincides with the original coinductively defined concepts of justness from [16] and [7]. Section 14 summarises, and reviews related and future work.

2 Labelled transition systems with concurrency
We start with the formal definitions of a labelled transition system, a path, and the completeness criterion progress, which is parametrised by the choice of a collection B of blocking actions. Then we define the completeness criterion justness on labelled transition systems upgraded with a concurrency relation.
Definition 2 A labelled transition system (LTS) is a tuple (S, Tr, source, target, ℓ) with S and Tr sets (of states and transitions), source, target : Tr → S and ℓ : Tr → L, for some set of transition labels L.
Here we work with LTSs labelled over a structured set of labels (L, Act, Rec), where Rec ⊆ Act ⊆ L.
In [7] and in Sections 5.3-5.5 one encounters LTSs T enriched with indicators, revealing facts about states. While these are naturally modelled as unary predicates on the states of T, it is technically possible to model them as ordinary transitions t, satisfying source(t) = target(t) [3]. This is formalised by declaring a set of actions Act ⊆ L. Transitions t model the occurrence of an action ℓ(t) if ℓ(t) ∈ Act, or the revelation of the fact ℓ(t) otherwise. Indicator transitions are largely ignored in the definitions below.
Rec ⊆ Act is the set of receptive actions. Sets B ⊆ Act of blocking actions must always contain Rec. In CCS and most other process algebras Rec = ∅ and Act = L. Let Tr• := {t ∈ Tr | ℓ(t) ∈ Act \ Rec} be the set of transitions that are neither indicator transitions nor receptive.
Definition 3 A path in a labelled transition system (S, Tr, source, target, ℓ) is an alternating sequence s_0 t_1 s_1 t_2 s_2 ··· of states and non-indicator transitions, starting with a state and either being infinite or ending with a state, such that source(t_i) = s_{i−1} and target(t_i) = s_i for all relevant i.
Definition 4 A completeness criterion is a unary predicate on the paths in a labelled transition system.

Definition 5 A labelled transition system with concurrency (LTSC) is a tuple (S, Tr, source, target, ℓ, •) consisting of an LTS (S, Tr, source, target, ℓ) and a concurrency relation • ⊆ Tr• × Tr, such that:

¬(t • t) for all t ∈ Tr•, (1)

if t ∈ Tr• and π is a path from source(t) to s ∈ S such that t • v for all transitions v occurring in π, then there is a u ∈ Tr• such that source(u) = s, ℓ(u) = ℓ(t) and ¬(t • u). (2)
Informally, t • v means that the transition v does not interfere with t, in the sense that it does not affect any resources that are needed by t, so that in a state where t and v are both possible, after doing v one can still do (a future variant u of) t. In many transition systems • is a symmetric relation, denoted ⌣. The transition relation in a labelled transition system is often defined as a relation Tr ⊆ S × L × S. This approach is not suitable here, as we will encounter multiple transitions with the same source, target and label that ought to be distinguished based on their concurrency relations with other transitions.
Definition 6 A path π in an LTSC is B-just, for Rec ⊆ B ⊆ Act, if for each suffix π′ of π and for each transition t ∈ Tr•_¬B := {t ∈ Tr• | ℓ(t) ∉ B} enabled in the starting state of π′, a transition u with ¬(t • u) occurs in π′. Informally, justness requires that once a non-blocking non-indicator transition t is enabled, sooner or later a transition u will occur that interferes with it, possibly t itself.
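The justness requirement of Definition 6 can be sketched for lasso-shaped paths, a finite prefix followed by a repeated cycle; the encoding, the helper names b_just, enabled and conc, and the use of component sets below are our own illustration, not the paper's formalisation.

```python
# Definition 6 on lasso paths prefix + cycle^omega (encoding ours):
# conc(t, u) stands for the paper's t • u, "u does not interfere with t".
def b_just(prefix, cycle, enabled, conc, blocking=frozenset()):
    """B-just iff every non-blocking transition enabled along the path is
    eventually interfered with by a transition occurring in the path."""
    path = list(prefix) + list(cycle)
    if not cycle:
        return True                      # nothing repeats; kept simple here
    for i in range(len(path)):
        later = path[i:] + list(cycle)   # one unrolling covers all of 'forever'
        for t in enabled(path[i][0]):
            if t[1] in blocking:
                continue
            if all(conc(t, u) for u in later):
                return False             # t stays enabled and is never served
    return True

# Example 2 again: transitions carry the components they stem from,
# and u interferes with t iff the two share a component.
call0 = (0, "call", 0, {"Alice"}); eat = (0, "eat", 1, {"Cataline"})
call1 = (1, "call", 1, {"Alice"})
trs = [call0, eat, call1]
enabled = lambda s: [t for t in trs if t[0] == s]
conc = lambda t, u: not (t[3] & u[3])

assert not b_just([], [call0], enabled, conc)   # Cataline starves: unjust
assert b_just([eat], [call1], enabled, conc)    # croissant first: just
```

One unrolling of the cycle suffices in the check, since any transition occurring in the infinite suffix already occurs in the suffix's prefix or in a single pass through the cycle.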
Note that, for any Rec ⊆ B ⊆ Act, B-justness is a completeness criterion stronger than B-progress.
In reasonable extensions of • to Tr × Tr, indicator transitions t would satisfy t • t, meaning that execution of t in no way affects any resources needed to execute t again. It therefore makes no sense to impose closure property (2), or the justness requirement, on indicator transitions (see Example 9).
Components Instead of introducing • as a primitive, it is possible to obtain it as a notion derived from two functions npc : Tr• → P(C) and afc : Tr → P(C), for a given set C of components. These functions could then be added as primitives to the definition of an LTS. They are based on the idea that a process represents a system built from parallel components. Each transition is obtained as a synchronisation of activities from some of these components. Now npc(t) describes the (nonempty) set of components that are necessary participants in the execution of t, whereas afc(t) describes the components that are affected by the execution of t. The concurrency relation is then defined by saying that u interferes with t if and only if a necessary participant in t is affected by u, i.e., t • u iff npc(t) ∩ afc(u) = ∅.
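The derived relation, and in particular its asymmetry, can be sketched on a hypothetical broadcast scenario; the transition names b and c and the component sets below are our own choices.

```python
# The component-based definition (encoding ours): u interferes with t iff a
# necessary participant of t is affected by u; conc(t, u) is the paper's t • u.
def conc(t, u, npc, afc):
    return not (npc[t] & afc[u])

# A hypothetical broadcast scenario illustrating asymmetry:
#   b: component A broadcasts and B receives -- only A is a necessary
#      participant (receipt may not be blocked), but both are affected;
#   c: an internal step of B alone.
npc = {"b": {"A"}, "c": {"B"}}
afc = {"b": {"A", "B"}, "c": {"B"}}

assert conc("b", "c", npc, afc)        # c does not interfere with b ...
assert not conc("c", "b", npc, afc)    # ... yet b interferes with c
```

This asymmetry is exactly the reason the paper needs a possibly asymmetric concurrency relation for the process algebras with broadcast communication and signals.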
Most material in this section stems from [18]. However, there Tr• = Tr, so that • is irreflexive, i.e., npc(t) ∩ afc(t) ≠ ∅ for all t ∈ Tr. Moreover, a fixed set B is postulated, so that the notions of progress and justness are not explicitly parametrised with the choice of B. Furthermore, closure property (2) is new here; it is the weakest closure property that supports Theorem 1 and Proposition 1 below. In [18] only the model in which • is derived from functions npc and afc comes with a closure property:

if t ∈ Tr• and v ∈ Tr with source(t) = source(v) and npc(t) ∩ afc(v) = ∅, then there is a u ∈ Tr• with source(u) = target(v), ℓ(u) = ℓ(t) and npc(u) = npc(t). (3)

Trivially (3) implies (2).

3 Feasibility
An important requirement on completeness criteria is that any finite path can be extended into a complete path. This requirement was proposed by Apt, Francez & Katz in [1] and called feasibility. It also appears in Lamport [23] under the name machine closure. The theorem below lists conditions under which B-justness is feasible. Its proof is a variant of a similar theorem from [18], showing conditions under which notions of strong and weak fairness are feasible.

Theorem 1 If, in an LTSC with set B of blocking actions, only countably many transitions from Tr•_¬B are enabled in each state, then B-justness is feasible.

Proof: We present an algorithm for extending any given finite path π_0 into a B-just path π. The extension uses transitions from Tr•_¬B only. We build an N×N-matrix, with column i for the (to be constructed) prefix π_i of π, for i ≥ 0. Column i will list the transitions from Tr•_¬B enabled in the last state of π_i, leaving most slots empty if there are only finitely many. An entry in the matrix is either (still) empty, filled in with a transition, or crossed out. Let f : N → N×N be an enumeration of the entries in this matrix.
At the beginning only π_0 is known, and all columns of the matrix are empty. At each step i ≥ 0 we fill in column i, extend the path π_i into π_{i+1} if possible by appending one transition (and its target state), and cross out some transitions occurring in the matrix. As an invariant, we maintain that a transition t occurring in column k is already crossed out when reaching step i > k iff a transition u occurs in the extension of π_k into π_i such that ¬(t • u). At each step i ≥ 0 we proceed as follows. Since π_i is known, we fill in column i by listing all transitions from Tr•_¬B enabled in the last state of π_i. We take n to be the smallest value such that entry f(n) ∈ N×N is already filled in, say with t ∈ Tr•_¬B, but not yet crossed out. If such an n does not exist, the algorithm terminates, with output π_i. Let k be the column in which f(n) appears. By our invariant, all transitions v occurring in the extension of π_k into π_i satisfy t • v. By (2) there is a transition u ∈ Tr•_¬B enabled in the last state of π_i such that ¬(t • u). We now extend π_i into π_{i+1} by appending u to it, while crossing out all entries t′ in the matrix for which ¬(t′ • u), including f(n), which is the entry in column k representing the transition t. This maintains our invariant. Obviously, π_i is a prefix of π_{i+1}, for i ≥ 0. The desired path π is the limit of all the π_i. It is B-just, using the invariant, because each transition t ∈ Tr•_¬B that is enabled in a state of π either is interfered with by a transition occurring in π_0, or will appear in the matrix, which acts like a priority queue, and be eventually crossed out. □

It is possible to strengthen Theorem 1 somewhat by calling two transitions t and t′ equivalent if t • u ⇔ t′ • u for all u ∈ Tr•. An equivalence class of transitions is enabled iff one of its elements is.
Corollary 1 If, in an LTSC with set B of blocking actions, only countably many equivalence classes of transitions from Tr•_¬B are enabled in each state, then B-justness is feasible.

Proof: The proof is the same as the one above, except that the matrix now contains equivalence classes of enabled transitions. □
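For a finite LTS the construction in the proof of Theorem 1 degenerates to a simple queue-based scheduler; the bounded sketch below is our own encoding, and it assumes that closure property (2) holds, i.e., that an enabled transition interfering with the queue's oldest pending entry always exists.

```python
# A bounded sketch of the construction in the proof of Theorem 1, for a
# finite LTS (encoding ours). Pending non-blocking transitions enter a FIFO
# queue (playing the role of the matrix); serving the oldest one relies on
# closure property (2): an enabled interfering transition is assumed to exist.
def just_scheduler(state, trs, conc, steps):
    path, pending = [], []
    for _ in range(steps):
        for t in trs:                              # fill in the next column
            if t[0] == state and t not in pending:
                pending.append(t)
        if not pending:
            break
        t = pending[0]                             # oldest uncrossed entry
        u = next(v for v in trs if v[0] == state and not conc(t, v))
        path.append(u)
        pending = [p for p in pending if conc(p, u)]   # cross out entries
        state = u[2]
    return path

call0 = (0, "call", 0, {"Alice"}); eat = (0, "eat", 1, {"Cataline"})
call1 = (1, "call", 1, {"Alice"})
conc = lambda t, u: not (t[3] & u[3])

run = just_scheduler(0, [call0, eat, call1], conc, steps=5)
assert run[:2] == [call0, eat]         # Cataline is served after one call
```

The queue acts as the proof's priority mechanism: an enabled transition cannot stay pending forever, so in the limit the produced path is just.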

4 Fairness
To formalise fairness we use LTSs (S, Tr, source, target, ℓ, T) that are augmented with a set T ⊆ P(Tr•) of tasks T ⊆ Tr•, each being a set of transitions. The concept of J-fairness from [18] is defined only for LTSCs (S, Tr, source, target, ℓ, •, T) augmented with such a T.

Definition 7 ([18]) For an augmented LTS, a task T ∈ T is said to occur in a path π if π contains a transition t ∈ T; it is B-enabled in a state s if s is the source of a transition t ∈ T with ℓ(t) ∉ B. It is said to be relentlessly B-enabled on π, if each suffix of π contains a state in which it is B-enabled. It is perpetually B-enabled on π, if it is B-enabled in every state of π. [It is said to be continuously B-enabled on π, if it is B-enabled in every state and during every transition of π.] A path π in T is strongly B-fair if, for every suffix π′ of π, each task that is relentlessly B-enabled on π′ occurs in π′. A path π in T is weakly B-fair if, for every suffix π′ of π, each task that is perpetually B-enabled on π′ occurs in π′. [A path π in T is J-B-fair if, for every suffix π′ of π, each task that is continuously B-enabled on π′ occurs in π′.] When the set B is defined once and for all or clear from context, we may omit the parameter B. This was the situation in [18].
In [18] many notions of fairness occurring in the literature were cast as instances of this definition. For each of them the set of tasks T was derived, in different ways, from some other structure present in the model of distributed systems from the literature. In fact, [18] considers 7 ways to construct the collection T , and speaks of fairness of actions, transitions, instructions, synchronisations, components, groups of components and events. This yields 21 notions of fairness. To compare them, each is defined formally on a fragment of CCS, and the 21 fairness notions (together with progress, justness, and a few concepts of fairness found in the literature that are not instances of Definition 7) are ordered by strength by placing them in a lattice.
Progress can be cast as a fairness notion in the sense of Definition 7 by taking T to be the collection of only one task, namely Tr•. Clearly weak, strong and J-fairness all coincide for this T. Likewise, the trivial completeness criterion, declaring all paths complete, coincides with weak, strong and J-fairness when taking T = ∅. Nevertheless, it would be confusing to address these completeness criteria as fairness assumptions.
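The reduction of progress to weak fairness with the single task Tr• can be illustrated on paths given as a last reached state plus a (possibly empty) cycle; the encoding is our own, and an empty cycle represents a path that halts.

```python
# Progress cast as a fairness notion (encoding ours): the single task
# tr_bullet contains every non-blocking transition.
other = (0, "other", 0); eat = (0, "eat", 1)
transitions = [other, eat]
tr_bullet = set(transitions)           # here every transition is non-blocking

def weakly_fair(last_state, cycle, task):
    if any(tr in task for tr in cycle):
        return True
    states = {last_state} | {tr[0] for tr in cycle}
    perpetually_enabled = all(
        any(tr in task and tr[0] == s for tr in transitions) for s in states)
    return not perpetually_enabled

# Halting in state 0, where a non-blocking transition is enabled, violates
# weak fairness for tr_bullet -- exactly a progress violation:
assert not weakly_fair(0, [], tr_bullet)
# Halting in state 1, which has no outgoing transitions, is fine:
assert weakly_fair(1, [], tr_bullet)
# Any infinite path takes transitions from tr_bullet, hence is fair:
assert weakly_fair(0, [other], tr_bullet)
```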
We do not see how justness can be cast as a fairness notion in the sense of Definition 7. However, we now show that there exists a form of fairness according to Definition 7 that is at least as strong as justness.
Proposition 1 Let T_t := {u ∈ Tr• | ¬(t • u)} be the task of interfering with t, for each t ∈ Tr•, and take T := {T_t | t ∈ Tr•}. Given this T and B ⊆ Act, any path that is strongly or weakly B-fair is certainly B-just.
Proof: Any path that is strongly B-fair is certainly weakly B-fair. This follows trivially from the definitions, for any choice of T. (Likewise, any path that is weakly B-fair is certainly J-B-fair.) Suppose π is weakly B-fair. We show it is B-just. Towards a contradiction, suppose that t ∈ Tr•_¬B is enabled in the first state s of a suffix π′ of π, i.e., source(t) = s, but all transitions v occurring in π′ satisfy t • v. Closure property (2) guarantees that for every state s′ of π′ there is a u ∈ Tr• such that source(u) = s′, ℓ(u) = ℓ(t) and ¬(t • u). Hence the task T_t is perpetually B-enabled on π′. By weak B-fairness T_t must occur in π′, meaning that π′ contains a transition u ∈ Tr• with ¬(t • u). This contradicts the assumptions. □

5 CCS and its extensions with broadcast communication and signals
This section presents five process algebras: Milner's Calculus of Communicating Systems (CCS) [25], its extensions ABC with broadcast communication [16] and CCSS with signals [7], an alternative presentation of CCSS where signal emissions are encoded as transitions, and an alternative presentation of ABC that avoids negative premises in favour of discard transitions.

5.1 CCS
CCS [25] is parametrised with sets A of agent identifiers and Ch of (handshake communication) names; each A ∈ A comes with a defining equation A def= P, with P being a CCS expression as defined below.
The set C̄h := {c̄ | c ∈ Ch} is the set of co-names, and Act := Ch ∪ C̄h ∪ {τ} the set of actions, where τ is a special internal action. A relabelling is a function f : Ch → Ch; it extends to Act by taking f(c̄) to be the co-name of f(c), and f(τ) := τ. The set P_CCS of CCS expressions or processes is the smallest set including:

0 (inaction);
α.P for α ∈ Act and P ∈ P_CCS (action prefixing);
P + Q for P, Q ∈ P_CCS (choice);
P|Q for P, Q ∈ P_CCS (parallel composition);
P\L for L ⊆ Ch and P ∈ P_CCS (restriction);
P[f] for f a relabelling and P ∈ P_CCS (relabelling);
A for A ∈ A (agent identifier).

One often abbreviates α.0 by α, and P\{c} by P\c. The traditional semantics of CCS is given by the labelled transition relation → ⊆ P_CCS × Act × P_CCS, where transitions P --α--> Q are derived from the rules of Table 1. Here L̄ := {c̄ | c ∈ L}. The process α.P performs the action α first and subsequently acts as P. The process P + Q may act as either P or Q, depending on which of the processes is able to act at all. The parallel composition P|Q executes an action from P, an action from Q, or, in the case where P and Q can perform complementary actions c and c̄, a synchronisation resulting in the internal action τ. The process P\L inhibits execution of the actions from L and their complements. The relabelling P[f] acts like process P with each label α replaced by f(α). Finally, the rule for agent identifiers says that an agent A has the same transitions as the body P of its defining equation.
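The rules of Table 1 can be sketched as a small interpreter; the encoding is our own, not the paper's formalisation: processes are tuples, co-names are written "~c", and τ is written "tau".

```python
# A small interpreter for the CCS rules of Table 1 (encoding ours).
# transitions(P, defs) yields all pairs (alpha, P') with P --alpha--> P'.
def co(a):
    return a[1:] if a.startswith("~") else "~" + a

def transitions(p, defs):
    kind = p[0]
    if kind == "pre":                          # alpha.P --alpha--> P
        yield (p[1], p[2])
    elif kind == "sum":                        # P+Q acts as P or as Q
        yield from transitions(p[1], defs)
        yield from transitions(p[2], defs)
    elif kind == "par":                        # interleaving and handshake
        l, r = p[1], p[2]
        lt = list(transitions(l, defs))
        rt = list(transitions(r, defs))
        for a, l2 in lt:
            yield (a, ("par", l2, r))
        for a, r2 in rt:
            yield (a, ("par", l, r2))
        for a, l2 in lt:                       # c meets ~c: synchronise to tau
            for b, r2 in rt:
                if a != "tau" and b == co(a):
                    yield ("tau", ("par", l2, r2))
    elif kind == "res":                        # P\L inhibits L and co-names
        L, q = p[1], p[2]
        for a, q2 in transitions(q, defs):
            if a == "tau" or (a not in L and co(a) not in L):
                yield (a, ("res", L, q2))
    elif kind == "rel":                        # P[f], with f(~c) = ~f(c)
        f, q = p[1], p[2]
        fa = lambda a: a if a == "tau" else (
            co(f(co(a))) if a.startswith("~") else f(a))
        for a, q2 in transitions(q, defs):
            yield (fa(a), ("rel", f, q2))
    elif kind == "id":                         # A behaves as its body
        yield from transitions(defs[p[1]], defs)

nil = ("nil",)
# (c.0 | ~c.0)\{c}: only the synchronisation tau remains.
P = ("res", {"c"}, ("par", ("pre", "c", nil), ("pre", "~c", nil)))
assert [a for a, _ in transitions(P, {})] == ["tau"]
```

The restriction in the example blocks both c and ~c individually, so the handshake τ is the sole remaining transition, as the prose above describes.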

ABC-The Algebra of Broadcast Communication
The Algebra of Broadcast Communication (ABC) [16] is parametrised with sets A of agent identifiers, B of broadcast names and Ch of handshake communication names; each A ∈ A comes with a defining equation A def= P with P being a guarded ABC expression as defined below. The collections B! and B? of broadcast and receive actions are given by B♯ := {b♯ | b ∈ B} for ♯ ∈ {!, ?}, and the set of actions is Act := B! ∪̇ B? ∪̇ Ch ∪̇ C̄h ∪̇ {τ}. The set P_ABC of ABC expressions is defined exactly as P_CCS. An expression is guarded if each agent identifier occurs within the scope of a prefixing operator. The structural operational semantics of ABC is the same as the one for CCS (see Table 1) but augmented with the rules for broadcast communication in Table 2. ABC is CCS augmented with a formalism for broadcast communication taken from the Calculus of Broadcasting Systems (CBS) [32]. The syntax without the broadcast and receive actions, and all rules except (BRO-L), (BRO-C) and (BRO-R), are taken verbatim from CCS. However, the rules now cover the larger name spaces; (ACT), for example, also allows labels of broadcast and receive actions. The rule (BRO-C), in the absence of rules like (PAR-L) and (PAR-R) for the label b!, implements a form of broadcast communication where any broadcast b! performed by a component in a parallel composition is guaranteed to be received by every other component that is ready to do so, i.e., in a state that admits a b?-transition. In order to ensure associativity of the parallel composition, one also needs this rule for components receiving at the same time (♯1 = ♯2 = ?). The rules (BRO-L) and (BRO-R) are added to make broadcast communication non-blocking: without them a component could be delayed in performing a broadcast simply because one of the other components is not ready to receive it.
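The combined effect of (BRO-L), (BRO-C) and (BRO-R) on a flat parallel composition can be illustrated as follows. The encoding of each component as a tuple of pending prefixes and the helper broadcast_steps are hypothetical devices of this sketch, not notation from ABC; the point illustrated is that a broadcast synchronises with every ready receiver while bypassing all other components.

```python
# Hypothetical flat-system sketch: 'b!' broadcasts b, 'b?' receives it.

def broadcast_steps(components):
    """All b!-steps of the composition: the broadcaster advances, and so does
    every other component whose next action is the matching receive b?; the
    broadcast is non-blocking, so components not ready to receive it simply
    stay where they are."""
    steps = []
    for i, comp in enumerate(components):
        if comp and comp[0].endswith('!'):
            b = comp[0][:-1]
            new = []
            for j, other in enumerate(components):
                if j == i:
                    new.append(other[1:])              # broadcaster moves
                elif other and other[0] == b + '?':
                    new.append(other[1:])              # ready receiver moves
                else:
                    new.append(other)                  # everyone else stays
            steps.append((b + '!', new))
    return steps

# One broadcaster, one ready receiver, one component not listening for b:
system = [('b!', 'c!'), ('b?', 'd?'), ('e?',)]
print(broadcast_steps(system))
# [('b!', [('c!',), ('d?',), ('e?',)])]
```

Note that the third component is untouched, reflecting the non-blocking reading of broadcasts that (BRO-L) and (BRO-R) provide.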

CCS with signals
CCS with signals (CCSS) [7] is CCS extended with a signalling operator Pˆs. Informally, Pˆs emits the signal s, to be read by another process. Pˆs could for instance be a traffic light emitting the signal red. The reading of the signal emitted by Pˆs does not interfere with any transition of P, such as jumping to green. Formally, CCS is extended with a set S of signals, ranged over by s and r. In CCSS the set of actions is defined as Act := Ch ∪̇ C̄h ∪̇ S ∪̇ {τ}, so that reading a signal s counts as an action s. As before, a relabelling f extends to Act by f(c̄) := f(c)‾ and f(τ) := τ. The set P_CCSS of CCSS expressions is defined just as P_CCS, but now also Pˆs is a process for P ∈ P_CCSS and s ∈ S, and restriction also covers signals.
The semantics of CCSS is given by the labelled transition relation → ⊆ P_CCSS × Act × P_CCSS and a predicate on P_CCSS × S, both derived from the rules of CCS (Table 1), where η and α range over Act, together with the rules of Table 3. The predicate P s indicates that process P emits the signal s, whereas a transition P −s→ P′ indicates that P reads the signal s and thereby turns into P′. The first rule of Table 3 is the base case, showing that a process Pˆs emits the signal s. The second rule models the fact that signalling cannot prevent a process from making progress; after having taken an action, the signalling process loses its ability to emit the signal. The two rules in the middle of Table 3 state that the reading of a signal by one component of a parallel composition, together with the emission of the same signal by another component, results in an internal transition τ, similar to the case of handshake communication. Note that the component emitting the signal does not change through this interaction. All the other rules of Table 3 lift the emission of s by a subprocess to the overall process.
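The two middle rules of Table 3 can be illustrated by a small sketch. The encoding below is hypothetical: emits plays the role of the emission predicate, and the suffix '?' is my own marker for a read-prefix (in CCSS, reading s is simply a transition labelled s). The point illustrated is that the emitting side is left unchanged by the resulting τ-step.

```python
# Hypothetical sketch of signal emission and reading in a parallel composition.

def emits(p):
    """The signals currently emitted by p, for a fragment with prefixing,
    signalling and parallel composition (the emission predicate of Table 3)."""
    kind = p[0]
    if kind == 'signal':                  # P^s emits s, plus whatever P emits
        return {p[2]} | emits(p[1])
    if kind == 'par':
        return emits(p[1]) | emits(p[2])
    return set()

def read_steps(p):
    """tau-steps in which one component reads ('s?') a signal that the other
    component emits; note the emitting side does not change."""
    if p[0] != 'par':
        return []
    _, q, r = p
    steps = []
    if q[0] == 'prefix' and q[1].endswith('?') and q[1][:-1] in emits(r):
        steps.append(('tau', ('par', q[2], r)))
    if r[0] == 'prefix' and r[1].endswith('?') and r[1][:-1] in emits(q):
        steps.append(('tau', ('par', q, r[2])))
    return steps

# A traffic light emitting 'red', and an observer reading that signal:
light    = ('signal', ('prefix', 'go', ('nil',)), 'red')
observer = ('prefix', 'red?', ('prefix', 'stop', ('nil',)))
print(read_steps(('par', observer, light)))
# [('tau', ('par', ('prefix', 'stop', ('nil',)), light))]
```

After the τ-step the observer has moved on, while the light still occurs unchanged in the target, mirroring the remark that the emitting component does not change through this interaction.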

Encoding signal emissions as transitions
A more compact presentation of CCSS can be obtained by encoding a signal emission P s as a transition P −s̄→ P; this is done in [3]. The price to be paid for the resulting simplification of the operational semantics is that the new transitions P −s̄→ P should not be counted in the definition of justness, since they do not model changes in the state of the represented system.
In this presentation of CCSS the set of transition labels is defined as L := Act ∪̇ S̄, where S̄ := {s̄ | s ∈ S} contains the signal emissions; a relabelling f extends to L by f(s̄) := f(s)‾ and f(τ) := τ. The semantics is given by the labelled transition relation → ⊆ P_CCSS × L × P_CCSS derived from the rules of CCS (Table 1), where now η and ℓ range over L, α over Act, c over Ch ∪̇ S and L ⊆ Ch ∪̇ S, augmented with the rules of Table 4.

Using indicator transitions to avoid negative premises in ABC
Finally, we present an alternative operational semantics ABCd of ABC that avoids negative premises. The price to be paid is the introduction of indicator transitions, which indicate when a process does not admit a receive action.4 To this end, let B: := {b: | b ∈ B} be the set of broadcast discards, and L := B: ∪̇ Act the set of transition labels, with Act as in Section 5.2. The semantics is given by the labelled transition relation → ⊆ P_ABC × L × P_ABC derived from the rules of CCS (Table 1), where now η ranges over Ch ∪̇ C̄h ∪̇ {τ}, α over Act and ℓ over L, and moreover L ⊆ Ch, augmented with the rules of Table 5.
Proof: A straightforward induction on derivability of transitions.

An LTSC for CCS and its extensions
The forthcoming material applies to each of the process algebras from Section 5. Let P be the set of processes or expressions in the appropriate language.
We associate an LTS as in Definition 2 with these languages by taking S to be the set P of processes, and Tr the set of derivations t of transitions P −ℓ→ Q with P, Q ∈ P. Of course source(t) = P, target(t) = Q and ℓ(t) = ℓ. A derivation of a formula ϕ (either a transition P −ℓ→ Q or a predicate P s) is a well-founded tree with its nodes labelled by formulas, such that the root has label ϕ, and if µ is the label of a node and K is the sequence of labels of the children of this node, then K/µ is an instance of a rule of Tables 1-5. 4 A process P admits an action α ∈ Act if there exists a transition P −α→ Q.

Figure 1: Indicators and transitions
In classical process algebra, and in Section 5, a transition is a formula of the form P −α→ Q. The CCS process P = A|(c̄ + τ) for instance, where the agent identifier A has the defining equation A def= c.A, has 3 outgoing transitions: P −c→ P, P −c̄→ A|0 and P −τ→ A|0. The last of these transitions can be derived in two different ways: through a synchronisation between c and c̄, or through the τ from the right component. Here we distinguish these two τ-transitions, so that we can say that one of them is concurrent with the c-transition whereas the other is not. This is the reason that as the transitions in the sense of Section 2 we take the derivations of transitions in the sense of Section 5. When confusion is unlikely, we will use the word "transition" for an element of Tr, which is a derivation of a formula P −α→ Q. Likewise, an "indicator" is a derivation of a formula P s.
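The distinction drawn here, derivations rather than transitions, can be made concrete in a short sketch. Everything below (the tuple encoding of processes and the proof-tree representation of derivations) is a hypothetical illustration; it reproduces the example P = A|(c̄+τ) and finds two distinct derivations of the one transition P −τ→ A|0.

```python
# Hypothetical sketch: enumerate derivations (proof trees), not transitions.

def co(a):
    return a[1:] if a.startswith('~') else '~' + a

def derivations(p, env):
    """All triples (proof, label, target); 'proof' records the rules used,
    so that distinct derivations of one transition stay apart."""
    kind = p[0]
    if kind == 'nil':
        return []
    if kind == 'prefix':                           # rule (ACT)
        _, a, q = p
        return [(('act', a), a, q)]
    if kind == 'sum':                              # (SUM-L), (SUM-R)
        _, q, r = p
        return ([(('sum-l', d), a, t) for d, a, t in derivations(q, env)] +
                [(('sum-r', d), a, t) for d, a, t in derivations(r, env)])
    if kind == 'par':                              # (PAR-L), (PAR-R), (COMM)
        _, q, r = p
        dq, dr = derivations(q, env), derivations(r, env)
        out  = [(('par-l', d), a, ('par', t, r)) for d, a, t in dq]
        out += [(('par-r', d), a, ('par', q, t)) for d, a, t in dr]
        out += [(('comm', d1, d2), 'tau', ('par', t1, t2))
                for d1, a, t1 in dq for d2, b, t2 in dr
                if a != 'tau' and b == co(a)]
        return out
    if kind == 'agent':                            # unfold A def= P (guarded)
        return [(('rec', p[1], d), a, t)
                for d, a, t in derivations(env[p[1]], env)]
    raise ValueError(kind)

env = {'A': ('prefix', 'c', ('agent', 'A'))}               # A def= c.A
P = ('par', ('agent', 'A'),
     ('sum', ('prefix', '~c', ('nil',)), ('prefix', 'tau', ('nil',))))

taus = [(proof, tgt) for proof, lab, tgt in derivations(P, env) if lab == 'tau']
print(len(taus))                   # 2: same label and same target, ...
print(taus[0][1] == taus[1][1])    # True
print(taus[0][0] != taus[1][0])    # ... but two distinct proofs
```

One proof goes through (PAR-R) and the τ-prefix, the other through (COMM); taking derivations as the transitions of the LTS keeps them apart.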
In this paper we distinguish five kinds of formula derivations, as indicated in Figure 1. Class I contains the indicators, and Classes II-V the transitions. Classes II, III and IV contain the transitions with labels in S̄, B: and B?, respectively, and Class V contains all others. Recall that L = Act ∪̇ S̄ for the version of CCSS from Section 5.4 and L = Act ∪̇ B: for ABCd. So Classes II and III contain the indicator transitions, whereas the transitions in Classes IV and V model action occurrences. The latter were collected in the set Tr• in Section 4. Class IV is inhabited only for ABC or ABCd, and Class III only for ABCd. Class I is inhabited only for the version of CCSS from Section 5.3, and Class II only for the one from Section 5.4.
By Definition 3 a path contains transitions from Tr• only. To properly define the concept of justness (cf. Definition 6) we need a concurrency relation of type • ⊆ Tr• × Tr•. However, at no extra cost we extend it to • ⊆ Tr• × Tr; in fact t • u will always hold when u ∈ Tr\Tr•. Although not required by Definition 5, for some purposes it turns out to be handy to extend the type of the concurrency relation further, to • ⊆ Tr s• × Tr, where Tr s• contains all derivations from Classes I, II and V; see Section 11.
We take Rec := B? in ABC and ABCd: broadcast receipts can always be blocked by the environment, namely by not broadcasting the requested message. For CCS and CCSS we take Rec := ∅, thus allowing environments that can always participate in certain handshakes, and/or always emit certain signals.
Following [16], we give a name to any derivation of a transition. The unique derivation of the transition α.P −α→ P using the rule (ACT) is called α→P. The derivation obtained by application of (COMM) or (BRO-C) on the derivations t and u of the premises of that rule is called t|u. The derivation obtained by application of (PAR-L) or (BRO-L) on the derivation t of the (positive) premise of that rule, and using process Q at the right of |, is t|Q. In the same way, (PAR-R) and (BRO-R) yield P|t, (SUM-L) and (SUM-R) yield t+P and P+t, and (RES), (REL) and (REC) yield t\L, t[f] and A:t. For CCSS as in Section 5.3 we also name each indicator ξ ∉ Tr. The unique derivation of the formula (Pˆs) s using the first rule of Table 3 is called P→s. The other rules of Table 3 yield derivations tˆr, ξ+Q, P+ξ, ξ|Q, ξ|t, t|ξ, P|ξ, ξˆr, ξ\L, ξ[f] and A:ξ, where ξ is the derivation of the indicator premise, and t of the transition premise, of the rule. The derivations resulting from the rules from Section 5.4 are named the same as the ones from Section 5.3, but now they are all derivations of transitions t ∈ Tr; in particular P→s is now the unique derivation of the transition Pˆs −s̄→ Pˆs using the first rule of Table 4.
The derivations obtained by application of the rules of Table 5 are called b:0, b:α.P, t + u, t|u and A:t, where t and u are the derivations of the premises of these rules.
An argument ι ∈ Arg is applied componentwise to a set Σ of synchrons: ι(Σ) := {ις | ς ∈ Σ}. The set ς(P) of synchrons of a CCS, ABC or CCSS process P is inductively defined by:
  ς(0) := ∅,
  ς(α.P) := {α→P},
  ς(P+Q) := +L(ς(P)) ∪ +R(ς(Q)),
  ς(P|Q) := |L(ς(P)) ∪ |R(ς(Q)),
  ς(P\L) := \L(ς(P)),
  ς(P[f]) := [f](ς(P)),
  ς(A) := A:(ς(P)) for A def= P,
  ς(Pˆs) := {P→s} ∪ ˆs(ς(P)).
Thus, a synchron of a process Q can be seen as a path in the parse tree of Q to an unguarded subexpression α.P or Pˆs of Q, except that recursion A def= P gets unfolded in the construction of such a path. Here a subexpression of Q occurs unguarded if it does not lie within a subexpression β.R of Q.
For ABCd we amend the clauses for inaction and prefixing accordingly, to account for the discard transitions. The set of synchrons ς(t) of a derivation t of a transition P −ℓ→ Q or formula P s is defined analogously; thus, a synchron of t represents a path in the proof tree t from its root to a leaf. Note that we use the symbol ς both as a variable ranging over synchrons and as the name of two functions; context disambiguates.
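The view of synchrons as paths in the parse tree can be sketched for the core operators. The argument names '+L', '+R', '|L', '|R' follow the text, while the tuple encoding of processes and synchrons is an assumption of this sketch.

```python
# Hypothetical sketch: synchrons of a process as paths in its parse tree,
# each path ending at an unguarded prefix subexpression.

def synchrons(p):
    kind = p[0]
    if kind == 'nil':
        return set()
    if kind == 'prefix':                  # the path ends at an unguarded a.P
        return {()}
    if kind == 'sum':
        return ({('+L',) + s for s in synchrons(p[1])} |
                {('+R',) + s for s in synchrons(p[2])})
    if kind == 'par':
        return ({('|L',) + s for s in synchrons(p[1])} |
                {('|R',) + s for s in synchrons(p[2])})
    raise ValueError(kind)

# (d.0 + e.0) | c.0 has three synchrons, one per unguarded prefix:
P = ('par', ('sum', ('prefix', 'd', ('nil',)), ('prefix', 'e', ('nil',))),
     ('prefix', 'c', ('nil',)))
print(sorted(synchrons(P)))
# [('|L', '+L'), ('|L', '+R'), ('|R',)]
```

Each tuple lists the arguments passed on the way down; recursion, restriction, relabelling and signalling would add further arguments in the same style.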
Lemma 2 If t is a derivation of P −ℓ→ Q or P s, then ς(t) ⊆ ς(P).
Proof: A trivial structural induction on t. □
Each transition derivation can be seen as the synchronisation of one or more synchrons. We proceed to formalise the concepts "future variant" and "concurrent" that occur above, by defining two binary relations ; ⊆ Tr s• × Tr s• and • ⊆ Tr s• × Tr such that the following properties hold:
(4) The relation ; is reflexive and transitive.
(5) If t ; t′ and t • v, then t′ • v.
(6) If t • v with source(t) = source(v), then there is a t′ ∈ Tr s• with source(t′) = target(v) and t ; t′.
(7) If t ; t′ then ℓ(t′) = ℓ(t), and when moreover t ∈ Tr• (that is, ℓ(t) ∈ Act\Rec), then t • t′.
Here source(t) := P and ℓ(t) := s̄ if t is the derivation of a formula P s. With t • v we mean that the possible occurrence of t is unaffected by the occurrence of v. Although for CCS the relation • is symmetric (and Tr s• = Tr), for ABC and CCSS it is not.
Proposition 2 Assume (4)-(6). If t ∈ Tr s• and π is a path from source(t) to P ∈ P such that t • v for all transitions v occurring in π, then there is a t′ ∈ Tr s• such that source(t′) = P and t ; t′.
Proof: By induction on the length of π. The induction base is trivial: take t′ := t and apply the reflexivity of ;. For the induction step, assume t ∈ Tr s• and let π be a path from source(t) whose last transition v′ has source(v′) = P′ and target(v′) = P, such that t • v for all transitions v occurring in π. By induction, there is a t′ ∈ Tr s• such that source(t′) = P′ and t ; t′. By (5), t′ • v′. By (6), there is a t′′ ∈ Tr s• such that source(t′′) = P and t′ ; t′′. Now apply the transitivity of ;. □
Corollary 3 Assume (4)-(7). If t ∈ Tr• and π is a path from source(t) to P ∈ P such that t • v for all transitions v occurring in π, then there is a t′ ∈ Tr• such that source(t′) = P, ℓ(t′) = ℓ(t) and t • t′.
Proof: By assumption, there are ς†, υ† with ς ; ς† ⌣d υ† ; υ. W.l.o.g. we choose ς† and υ† such that either ς† = ς or υ† = υ. The synchrons ς and υ describe paths in the parse tree of P, so the first symbol where they differ must be op_L versus op_R for some binary operator op. The possibility that op = + quickly leads to a contradiction, so ς ⌣d υ. □
Necessary and active synchrons. All synchrons of the form σ(α→P) are active; their execution causes a transition α.P −α→ P in the relevant component of the represented system. Synchrons σ(P→s) and σ(b:) are passive; they do not cause any state change. Let aς(t) denote the set of active synchrons of a derivation t. It follows that Tr• (see Section 4) is the set of transitions t ∈ Tr with aς(t) ≠ ∅. Whether a synchron ς ∈ ς(t) of a transition or indicator t is necessary for t to occur is defined only for t ∈ Tr s•. If t is the derivation of a broadcast transition, i.e., ℓ(t) = b! for some b ∈ B, then exactly one synchron υ ∈ ς(t) is of the form σ(b!→P); only this synchron is necessary. Moreover, if ς, υ ∈ nς(t) ∪ aς(t) with ς ≠ υ, then ς ⌣d υ.
Proof: A trivial structural induction on t.
By Lemmas 4 and 5 this implies that the relation ; between the necessary synchrons of t and those of t′ is a bijection.

Proposition 5 If t ; t′ then ℓ(t′) = ℓ(t).
Proof: If t ; t′ then the derivation t′ must be obtainable from t by reducing some subterms of the form u + Q, P + u, A:u or uˆr to u, and/or changing some receptive or discarding partners in a broadcast communication. Given rules (SUM-L), (SUM-R), (REC), etc., this does not alter the label of the derivation. □
Propositions 3-5 establish the required properties (4), (5) and (7). It remains to establish (6); see Proposition 6.
Definition 13 A set Σ ⊆ ς (P) of synchrons of P is P-consistent if there is a derivation t ∈ Tr s• with nς (t) = Σ and source(t) = P.
Proof: Let t ∈ Tr s• be such that nς(t) = Σ and source(t) = P. So Σ ≠ ∅. Each synchron ς ∈ Σ represents a path in the derivation t from its root to a leaf. If ς ; ς′ then ς′ represents a version of the same path in which certain nodes of t, labelled +Q′, P′+, A: orˆr, are marked as being deleted by ς′. Since Σ′ ⊆ ς(Q), this marking of deleted nodes is consistent, in the sense that no node of t is deleted according to one element of Σ′ but kept according to another. In fact, whether a node of t is marked as deleted depends entirely on the syntactic shape of Q. In view of the rules (SUM-L), (SUM-R), (REC), etc., actually deleting the indicated nodes from the derivation t yields another derivation t′, with nς(t′) = Σ′ and source(t′) = Q. Depending on the syntactic shapes of P and Q, some receptive or discarding partners in broadcast communications may have been altered between t and t′ as well. □
Write ς •d u, for ς a synchron and u ∈ Tr, if ς ⌣d υ for all υ ∈ aς(u).

Proposition 6
If t ∈ Tr s• , v ∈ Tr and t • v with source(t) = source(v) then there is a derivation t ∈ Tr s• with source(t ) = target(v) and t ; t .

Components
This section proposes two concepts of system components associated to a transition, each with a classification of components as necessary and/or affected. We then define a concurrency relation in terms of these components, closely mirroring Definition 12 in Section 6, which defined the concurrency relation • in terms of synchrons. The dynamic components give rise to the exact same concurrency relation • from Definition 12, whereas the static components yield a strictly smaller concurrency relation •s. However, only the static components satisfy closure property (3). Finally, we present three alternative versions of •s that all give rise to the same concept of justness.

Dynamic components
A (dynamic) component is either the empty string ε or any string σι with σ ∈ Arg* and ι ∈ Arg a static argument. Each synchron ς can be uniquely written as γς′ with γ a component and ς′ a synchron with only dynamic arguments. The dynamic component C(ς) of such a synchron ς is defined to be γ.
The set of dynamic components COMP(P) of a process P is defined as {C(ς ) | ς ∈ ς (P)}.
Example 7 In Example 4, t_d, t_e ∈ Tr• with source(t_d) = source(t_e) and NC(t_d) ∩ AC(t_e) = ∅. Yet, there is no u ∈ Tr• with source(u) = target(t_e), ℓ(u) = ℓ(t_d) = d and NC(u) = NC(t_d). In fact, the unique u ∈ Tr• with source(u) = target(t_e) and ℓ(u) = d is t′_d. However,

Static components
A static component is a string σ ∈ Arg * of static arguments. Let C be the set of static components. The static component c(ς ) of a synchron ς is defined to be the largest prefix γ of ς that is a static component. The set of static components comp(P) of a process P is defined as {c(ς ) | ς ∈ ς (P)}. The set of static components comp(t) of a derivation t is defined as {c(ς ) | ς ∈ ς (t)}. The set of necessary static components npc(t) of a derivation t is defined as {c(ς ) | ς ∈ nς (t)}. The set of affected static components afc(t) of a derivation t is defined as {c(ς ) | ς ∈ aς (t)}.
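The extraction of static components, and the resulting static concurrency check of Definition 15, can be sketched as follows. The helper statically_concurrent leans on Lemma 9, reading concurrency of static components of one process as inequality; that simplification, like the tuple encoding of synchrons, is an assumption of this sketch.

```python
# Hypothetical sketch: a synchron is a tuple of arguments; its static
# component is the largest prefix built from the static arguments |L, |R.
STATIC = {'|L', '|R'}

def static_component(synchron):
    prefix = []
    for arg in synchron:
        if arg not in STATIC:          # a dynamic argument ends the prefix
            break
        prefix.append(arg)
    return tuple(prefix)

print(static_component(('|L', '+R', '|L')))   # ('|L',)

def statically_concurrent(necessary, affected):
    """t statically unaffected by u (Definition 15, simplified): every
    necessary static component of t differs from every affected static
    component of u."""
    return all(static_component(s) != static_component(a)
               for s in necessary for a in affected)

# Opposite sides of a parallel composition are concurrent ...
print(statically_concurrent({('|L', '+L')}, {('|R',)}))       # True
# ... but opposite sides of a choice share a static component, so are not:
print(statically_concurrent({('|L', '+L')}, {('|L', '+R')}))  # False
```

The second check mirrors the later discussion of the computational interpretation: derivations stemming from opposite sides of a +-operator are co-located, hence not statically concurrent.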
The following lemma shows how the relations ; and ⌣ from Section 7.1 simplify when applied to static components.
Proof: The first statement and direction "if" of the second are trivial. So let γ, δ ∈ C with γ ⌣ δ. Then γ ; γ′ ⌣d δ′ ; δ for some components γ′ and δ′. Thus, using insights from the proof of Lemma 5, γ = static(γ) = static(γ′) ⌣d static(δ′) = static(δ) = δ. □
The next result says that any two different static components of the same process are concurrent.
Proof: "Only if" is trivial. "If" follows by a straightforward structural induction on P.
We now define a static concurrency relation •s between derivations in terms of their static components, in the same way that the (dynamic) concurrency relation • is characterised (by Lemma 7) in terms of their dynamic components:
Definition 15 Derivation t ∈ Tr s• is statically unaffected by u, notation t •s u, iff ∀γ ∈ npc(t). ∀δ ∈ afc(u). γ ⌣ δ.
The following shows that • s is strictly contained in • .

Proposition 7
If t • s u then t • u.

Proposition 8
If t ∈ Tr s• , v ∈ Tr and t • s v with source(t) = source(v) then there is a derivation u ∈ Tr s• with source(u) = target(v) and t ≡ u.

Another compatible definition of the static concurrency relation
The concurrency relation • c between transitions defined in terms of static components according to the template in [18], recalled in Section 2, is not identical to the concurrency relation • s of Definition 15 when restricted to Tr • × Tr.
The following shows that • s is strictly included in • c .

Proposition 9
If t • s u then t • c u.
Proof: This follows immediately from the irreflexivity of ⌣ ⊆ C × C (Lemma 9). □
Nevertheless, we show that for the study of justness it makes no difference whether justness is defined using the concurrency relation •s or •c.

Lemma 11
If t ∈ Tr s• and π is a path from source(t) to a state P such that t •s v for all transitions v on π, then there is a derivation t′ ∈ Tr s• with source(t′) = P and t ≡ t′.
Proof: This is a corollary of Proposition 8, obtained by a simple induction on the length of π, using the reflexivity and transitivity of ≡, and that t •s v and t ≡ t′ imply t′ •s v. □
Proof: "Only if" is immediate from Definition 6 and Proposition 9. "If": Let π be •s-B-just, let π′ be a suffix of π starting in a state P, and let t ∈ Tr•¬B with source(t) = P. By Definition 6 a transition u with ¬(t •s u) occurs in π′. W.l.o.g. we take u to be the first such transition in π′. It suffices to show that ¬(t •c u). For all transitions v in π′ between P and source(u) we have t •s v. Hence, by Lemma 11, there is a t′ ∈ Tr s• with source(t′) = source(u) and t ≡ t′. Moreover, ¬(t •s u) implies ¬(t′ •s u), which implies ¬(t′ •c u) by Lemma 10, which implies ¬(t •c u). □
The above proof shows that, for the study of justness, we need to know whether two transitions t ∈ Tr• and u ∈ Tr are related by the static concurrency relation or not only when there is a t′ ∈ Tr s• with t ≡ t′ and source(t′) = source(u). And restricted to such pairs (t, u) the relations •s and •c coincide.

Computational interpretations
The classical computational interpretation of CCS and related languages aligns with the (dynamic) concurrency relation • of Sections 6 and 7.1, rather than the static concurrency relation • s of Section 7.2. This is illustrated by the transitions t d and t e of Example 4, which are generally regarded as concurrent. This computational interpretation also aligns with the semantics of CCS in terms of event structures and Petri nets, where concurrency is made more explicit [36,20].
Below, we first define a sublanguage of CCS with broadcast communication and/or signals on which the static and dynamic concurrency relations coincide-so it does not include the process P of Example 4.
Using this, we propose an alternative computational interpretation of CCS and its extensions that aligns with the static concurrency relation.
The underlying intuition is that each transition occurs at some location, that concurrent transitions occur at different locations, and that a choice made through the +-operator necessarily needs to be made locally, so that one cannot have two truly parallel actions d and e for which the execution of either one resolves the same choice. In Example 4, the transitions t_τ and t_d stem from opposite sides of a +-operator, and therefore should be co-located. The same holds for t_τ and t_e; consequently t_d should be co-located with t_e, and not concurrent with it. This intuition is loosely inspired by [14,15]. Similarly, a signal can be emitted only locally, and for convenience we treat recursion in the same vein, so that all dynamic operators can be applied to sequential processes only.

Definition 18
The dynamically sequential fragment of CCS with broadcast communication and/or signals is given by the context-free grammar where S is the sort of (initially) sequential processes, and P the sort of parallel processes. Defining equations for agent identifiers should have the form A def = S.
This language is crafted in such a way that in a synchron a dynamic argument will never precede a parallel composition argument |L or |R. As a consequence, we obtain, for synchrons ς and υ, that ς ; υ ⇔ ς = υ, and that ς ⌣ υ ⇔ ς ⌣d υ ⇔ C(ς) ⌣d C(υ) ⇔ c(ς) ⌣ c(υ). Hence, on this fragment, the concurrency relations • and •s coincide. Next, we introduce a new unary operator sq, that turns a parallel process into a sequential one. Thus "| sq(P)" can be added to the line for S ::= in the context-free grammar above. Its operational rules, in which κ ranges over the labels b: ∈ B: and s̄ ∈ S̄, are such that sq changes its argument as little as possible. However, the argument sq is now added to synchrons, counting as dynamic, and Definition 10 of ⌣d is upgraded by the requirement that the argument sq does not occur in σ1, with ⌣ redefined to equal ⌣d. Consequently, the only effect of sq is that any concurrency between outgoing transitions of its argument is removed. The process sq(d.R|e.S), for instance, behaves exactly like d.(R|e.S) + e.(d.R|S).
On this extension of the dynamically sequential fragment of CCS with broadcast communication and/or signals we still have that ς ⌣ υ ⇔ ς ⌣d υ ⇔ C(ς) ⌣d C(υ) ⇔ c(ς) ⌣ c(υ), and consequently • and •s coincide.
Finally, we propose a language that has the same syntax as CCS, possibly extended with broadcast communication and/or signals, but is technically a sublanguage of the language proposed above, because whenever the operators + orˆs are applied to parallel arguments, P + Q is taken to be an abbreviation of sq(P) + sq(Q), and Pˆs of sq(P)ˆs. Likewise, A def= P can be seen as an abbreviation of A def= sq(P). This language can be seen as an alternative computational interpretation of CCS (plus extensions) that aligns with the static concurrency relation •s.
Interestingly, the operational Petri net semantics of [6] follows the static computational interpretation above, whereas its modification in [28,29] follows the classical (dynamic) interpretation of concurrency.

The dynamic and static accounts of justness agree
We now show that the concurrency relations • and •s (and thus also the three variants of •s studied in Sections 7.3 and 7.4, among which •c) give rise to the same concept of justness. Each derivation t ∈ Tr has only finitely many synchrons, and each synchron contains finitely many dynamic arguments. Let d(t) be the sum, over ς ∈ nς(t), of the number of dynamic arguments in ς.
Proof: "Only if" is immediate from Definition 6 and Proposition 7.
"If": Let π be •s-B-just, let π′ be a suffix of π starting in a state P, and let t ∈ Tr•¬B with source(t) = P. By induction on d(t) we find a transition u in π′ such that ¬(t • u).
By Definition 6 a transition u′ with ¬(t •s u′) occurs in π′. W.l.o.g. we take u′ ∈ Tr to be the first such transition in π′. By Lemma 11 there is a derivation t′ ∈ Tr s• with source(t′) = source(u′) and t ≡ t′. So ¬(t′ •s u′). Hence there are ς ∈ nς(t′) and υ ∈ aς(u′) with c(ς) = c(υ), by Lemma 10. In case ¬(t • u′) we are done. So suppose t • u′; then ς ⌣ υ, so ς ⌣d υ by Lemmas 2 and 3. Thus ς has the form σ1|Dς2 and υ the form σ1|Eυ2 with {D, E} = {L, R}. Since c(ς) = c(υ), a dynamic operator must occur in σ1. So by Definition 14 ς@υ contains fewer dynamic arguments than ς, and hence ς@u′ contains fewer dynamic arguments than ς. (Here we use that u′ ∈ Tr•, since aς(u′) ≠ ∅.) Moreover, for ς′ ∈ nς(t′) with ς′ ≠ ς, ς′@u′ contains at most as many dynamic arguments as ς′.
The proof of Proposition 6 finds a t′′ ∈ Tr s• with source(t′′) = target(u′) and t′ ; t′′, such that nς(t′′) = {ς′@u′ | ς′ ∈ nς(t′)}. It follows that d(t′′) < d(t). By Proposition 5, ℓ(t′′) = ℓ(t′) = ℓ(t), and so t′′ ∈ Tr•¬B. By the induction hypothesis we find a transition u occurring in π′ past the (first) occurrence of source(t′′), such that ¬(t′′ • u). Now ¬(t • u), using Proposition 4. □

Justness is feasible even with infinitary choice
A straightforward induction on the length of derivations shows that for each process P ∈ P in any of the languages of Section 5 there are only countably many derivations t ∈ Tr• with source(t) = P. Consequently, Theorem 1 says that, for any set B ⊆ Act with Rec ⊆ B, B-justness is feasible. However, the standard version of CCS [24] features the infinitary choice operator ∑_{i∈I} P_i for any index set I, which was omitted in Section 5 (and in many of the references). Its operational rule derives ∑_{i∈I} P_i −α→ P′ from the premise P_j −α→ P′, for any j ∈ I. The work reported here can be straightforwardly extended with this infinitary choice operator. Instead of +L and +R it gives rise to dynamic arguments ∑j appearing in synchrons. But then we have processes P with uncountably many outgoing transitions, so that Theorem 1 no longer applies. Nevertheless, B-justness is feasible, as follows from Corollary 1 in conjunction with Theorem 2. For if t ≡ t′ (defined in Section 7.2), then t •s u ⇔ t′ •s u for all u ∈ Tr•. As an ≡-equivalence class is completely determined by a finite set of static components, and the set C of static components is countable, so is the collection of ≡-equivalence classes of transitions, and thus the set of equivalence classes used in Corollary 1.

An inductive characterisation of the concurrency relations • d and • s
As a variant of Definition 12 in Section 6, write t •d u if ∀ς ∈ nς(t). ∀υ ∈ aς(u). ς ⌣d υ. By Lemmas 2 and 3, when source(t) = source(u) we have t • u ⇔ t •d u.
The idea of an asymmetric concurrency relation • is not new here. A similar relation, here called •[16], appeared in [16]. That relation was defined only between derivations t and u with source(t) = source(u). Here we show that •[16] agrees with our •, in the sense that t •[16] u ⇔ t • u for all t ∈ Tr• and u ∈ Tr with source(t) = source(u). [16] dealt with ABC only, so there Tr s• = Tr•. In order to prove this, we give an inductive characterisation of •d. This effort also yields an inductive characterisation of •s, which will be used in Section 12 to provide a coinductive characterisation of justness, in the spirit of the definitions of justness from [11,16,7,3].

Proposition 11
The relation •d is the smallest relation •x ⊆ Tr s• × Tr such that:
Proof: It is straightforward to check that •d satisfies all the properties listed in Proposition 11, so the smallest relation •x is contained in •d. For the other direction we prove by structural induction on t that if t •d u′ then t •x u′ can be derived by the rules of Proposition 11.
• If aς(u′) = ∅, i.e., u′ is the derivation of an indicator transition, then t •d u′; correspondingly, t •x u′ by the last requirement of Proposition 11. So below assume that aς(u′) ≠ ∅.
• If t has the form α→P or P→s, then t •d u′ for no u′, so there is nothing to show.
• Let t = t′[f]. Then all synchrons of t start with [f], so for t •d u′ to hold, all active synchrons of u′ must start with [f] as well. In fact u′ must have the form u[f] with t′ •d u. By induction t′ •x u, and hence t •x u′ by the seventh requirement of Proposition 11.
• The cases t = t′\L, t = A:t′, t = t′ˆr, t = t′+P and t = P+t′ proceed in the same way.
• Let t = t′|P. We make a further case distinction on u′.
- Let u′ = Q|u. Then always t •d u′, and indeed t •x u′ by the first requirement on •x.
- Let u′ = u|Q. Then all synchrons of t and all active synchrons of u′ start with |L, and stripping those off shows that t′ •d u. By induction t′ •x u, and hence t •x u′ by the third requirement.
- Let u′ = u|w. Then all synchrons of t and some active synchrons of u′ start with |L, and stripping those off shows that t′ •d u. By induction t′ •x u, and hence t •x u′ by the fourth requirement on •x.
- If u′ has any other shape, then t •d u′ fails, so there is nothing to show.
• The case t = P|t′ proceeds symmetrically.
• Let t = t′|v. We make a further case distinction on u′.
- Let u′ = Q|u. First consider the case ℓ(t′) ∈ B!. Then ℓ(v) ∈ B? ∪̇ B: and all necessary synchrons of t start with |L. Since all active synchrons of u′ start with |R, we have t •d u′. Accordingly, t •x u′ by the second requirement on •x. In case ℓ(t′) ∉ B!, some necessary synchrons of t and all active synchrons of u′ start with |R, and stripping those off shows that v •d u. By induction v •x u, and hence t •x u′ by the fourth requirement of Proposition 11 (reversing the roles of t′ and v).
- The case u′ = u|Q proceeds symmetrically.
- Let u′ = u|w. First consider the case ℓ(t′) ∈ B!. Then ℓ(v) ∈ B? ∪̇ B:, and all necessary synchrons of t start with |L. Stripping those off shows that t′ •d u. By induction t′ •x u, and hence t •x u′ by the fifth requirement on •x. The case ℓ(v) ∈ B! proceeds symmetrically. Otherwise, we obtain t′ •d u and v •d w. By induction t′ •x u and v •x w, and hence t •x u′ by the sixth requirement of Proposition 11.
- If u′ has any other shape, then t •d u′ fails, so there is nothing to show. □
The main reason for defining • as a relation of type Tr s• × Tr, instead of merely Tr• × Tr, which is all we need in Definition 6, is that in order to derive t • u with t ∈ Tr• from the rules of Proposition 11, we sometimes need a judgement t′ • u with t′ ∈ Tr s•\Tr•.5 The relation •[16] was defined in [16, Definition C.4] for ABC. Its definition is almost the same as that of •x in Proposition 11, but simplified because there are no indicators or indicator transitions, and adding the requirement that source(t) = source(u).

Proof: A trivial structural induction on t. □
In spite of this agreement between •[16] and •, the former is not suitable as an alternative for the latter for the purposes of this paper, because our formalisation of justness depends on judgements t • u for transitions t and u with source(t) ≠ source(u).

Proposition 12
The relation •s from Section 7.2 is the smallest relation •x ⊆ Tr s• × Tr such that, for L ⊆ Ch ∪̇ S and relabellings f, requirements analogous to those of Proposition 11 hold, and moreover t •x ξ for any derivation ξ of an indicator transition,
for arbitrary t, u, v, w, P, Q and R, where t and v are derivations of indicators or transitions, u and w are derivations of non-indicator transitions, P, R ∈ P are expressions, and Q is either an expression or the derivation of an indicator or indicator transition, provided that the composed derivations exist.
Proof: A trivial adaptation of the proof of the previous proposition. □

A coinductive characterisation of justness
In this section we show that the •-based concept of justness defined in this paper coincides, for CCS and ABC, with a coinductively defined concept of justness originating from [16].
To obtain agreement between our •-based and coinductive definitions for CCSS, we first extend the •-based concept of B-justness to the case where also CCSS signal emissions from S may appear in B. Since this extension is unsuitable as a completeness criterion, and hence should not be confused with the proper concept of justness, we did not treat it from the start of the paper, and give it another name: B-sigjustness. This extension is needed because in the coinductive definition, some cases of proper B-justness depend on cases of C-sigjustness, where C involves signal emissions.

An extension of the notion of justness dealing with emissions
Definition 19 A path π is B-sigjust, for B? ⊆ B ⊆ Act ∪ S, if for each suffix π′ of π, and for each derivation t ∈ Tr^s•_¬B enabled in the starting state of π′, a transition u with t • u occurs in π′.

B-sigjustness corresponds with what is called B̄ ∩ S-signalling and B \ S-justness in [7]. Here we save double work by collapsing the similar concepts of signalling and justness from [7]. Note that a path is B-just in the sense of Definition 6, where B? ⊆ B ⊆ Act, iff it is B ∪ S-sigjust according to Definition 19. Proposition 10 and Theorem 2 extend from justness to sigjustness, with the same proofs. However, sigjustness is unsuitable as a completeness criterion, because it fails the requirement of feasibility.

Example 9 The process 0ˆs has only one path π, and π has no transitions. This path is not ∅-sigjust, since a transition labelled s is enabled in its only state. So π cannot be extended into an ∅-sigjust path. Changing the definition of a path to allow indicator transitions does not help; this allows an infinite path π′ containing the transition labelled s infinitely often. But as these occurrences are concurrent with each other, also π′ fails to be ∅-sigjust. ¶
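Example 9 can be replayed in a small executable sketch. All interfaces here are hypothetical, not from the paper: a finite path is a list [s0, u1, s1, ...], enabled(s) lists the derivations enabled in state s, and interferes(t, u) plays the role of the judgement t • u, i.e. it holds exactly when u discharges the obligation of t.

```python
def sigjust_empty_B(path, enabled, interferes):
    """Check emptyset-sigjustness of a finite path [s0, u1, s1, ...]:
    every derivation enabled at some state must be discharged by a
    transition occurring later in the path."""
    for i in range(0, len(path), 2):
        later = path[i + 1::2]                   # transitions past state i
        for t in enabled(path[i]):
            if not any(interferes(t, u) for u in later):
                return False
    return True

# Example 9: the one-state path of 0^s perpetually enables the emission s,
# and no transition ever occurs that could discharge it.
assert not sigjust_empty_B(['0^s'], lambda s: ['s'], lambda t, u: False)
```

Since every extension of this path keeps the emission enabled, and occurrences of the indicator transition never discharge it, no extension is ∅-sigjust either: the failure of feasibility noted above.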

A coinductive definition of justness
To state our coinductive definition of justness, we need to define the notion of the decomposition of a path starting from a process with a leading static operator. Any derivation t ∈ Tr of a transition with source(t) = P|Q has one of the shapes
- u|Q, with target(t) = target(u)|Q,
- u|v, with target(t) = target(u)|target(v),
- or P|v, with target(t) = P|target(v).
Let a path of a process P be a path as in Definition 3 starting with P. Now the decomposition of a path π of P|Q into paths π1 and π2 of P and Q, respectively, is obtained by concatenating all left-projections of the states and transitions of π into a path of P and all right-projections into a path of Q, notation π π1|π2. Here it could be that π is infinite, yet either π1 or π2 (but not both) is finite.
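The decomposition above can be sketched in a few lines of code, under a toy encoding that is not from the paper: states of P|Q are pairs (p, q), and each transition of π is tagged 'L' (shape u|Q), 'R' (shape P|v) or 'B' (a synchronisation u|v, carried as the pair (u, v)).

```python
def decompose(path):
    """Split a path of P|Q, given as [(p0, q0), (tag, t), (p1, q1), ...],
    into its left projection pi1 (a path of P) and right projection pi2
    (a path of Q)."""
    pi1, pi2 = [path[0][0]], [path[0][1]]
    for i in range(1, len(path), 2):
        tag, t = path[i]
        p, q = path[i + 1]
        if tag == 'L':            # u|Q: only the left component moves
            pi1 += [t, p]
        elif tag == 'R':          # P|v: only the right component moves
            pi2 += [t, q]
        else:                     # u|v: both move; project u left, v right
            pi1 += [t[0], p]
            pi2 += [t[1], q]
    return pi1, pi2
```

An infinite path in which, from some point onwards, only 'L' steps occur projects to a finite pi2, matching the remark that one of the two projections (but not both) may be finite.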
The decomposition π′ of a path π of P[f] is the path obtained by leaving out the outermost [f] of all states and transitions in π, notation π π′[f]. In the same way one defines the decomposition of a path of P\c. The following coinductive definition of the family of B-justness predicates on paths, with one family member for each choice of a set B of blocking actions, stems from [16, Appendix E].⁶ To make this definition apply to CCSS, as well as CCS, ABC and ABCd, read "sigjust" for "just" throughout, and allow B? ⊆ B ⊆ Act ∪ S; here a state P admits an action s ∈ S iff P −s→ P′ for some P′, or P emits the signal s.

⁶ To be precise, the notion of Y-justness from [16] translates to Y ∪ B?-justness as it occurs here. Furthermore, [16] restricts to the case that Y ⊆ Ch.
Intuitively, justness is a completeness criterion, telling which paths can actually occur as runs of the represented system. A path is B-just if it can occur in an environment that may block the actions in B. In this light, the first, third, fourth and fifth requirements above are intuitively plausible. The second requirement says first of all that if π π1|π2 and π can occur in an environment that may block the actions in B, then π1 and π2 must be able to occur in such an environment as well, or in environments blocking less. The last clause in this requirement prevents a C-just path of P and a D-just path of Q from composing into a B-just path of P|Q when C contains an action c and D the complementary action c̄ (except when τ ∈ B). The reason is that no environment (except one that can block τ-actions) can block both actions for their respective components, as nothing can prevent them from synchronising with each other.
The fifth requirement helps in characterising processes of the form b + (A|b) and a.(A|b), with A def= a.A. Here, the first transition 'gets rid' of the choice or of the leading action a, respectively, and this requirement reduces the justness of paths of such processes to the justness of their suffixes.

Example 10
To illustrate Definition 20 consider the unique infinite path of the process Alice|Cataline of Example 2 in which the transition t does not occur. Taking the empty set of blocking actions, we ask whether this path is ∅-just. If it were, then by the second requirement of Definition 20 the projection of this path on the process Cataline would need to be ∅-just as well. This is the path 1 (without any transitions) in Example 1. It is not ∅-just by the first requirement of Definition 20, because its last state 1 admits a transition. ¶

Agreement between the concurrency-based and coinductive definitions of justness
We now establish that the concept of justness from Definition 20 agrees with the concept of justness defined earlier in this paper. The below applies to CCSS by reading Tr^s• for Tr• and "sigjust" for "just".
• Let π be a •s-B-just path. It follows immediately from Definition 6 that its last state admits no transitions t ∈ Tr•_¬B.
• Let π be a •s-B-just path of a process P|Q. There is a unique decomposition π π1|π2 of π into a path π1 of P and a path π2 of Q. Let C′ be the set of actions α such that there is a t ∈ Tr• with s := source(t) ∈ π1 and ℓ(t) = α, but no transition u with t •s u occurs in π1 past the occurrence of s.
Take C := C′ ∪ B?. Then π1 is •s-C-just. In fact, C is the smallest set B? ⊆ X ⊆ Act such that π1 is X-just. Likewise, let D be the smallest set such that B? ⊆ D ⊆ Act and π2 is D-just. It remains to be shown that C, D ⊆ B and τ ∈ B ∨ C ∩ D = ∅. Let α ∈ C′. Then there is a state P′|Q′ in π and a t ∈ Tr• with P′ = source(t) ∈ π1 and ℓ(t) = α, but no transition u with t •s u occurs in π1 past the occurrence of P′. We claim that α ∈ B.
First assume α ∉ B!. Then t|Q′ ∈ Tr• with P′|Q′ = source(t|Q′) ∈ π and ℓ(t|Q′) = α. Suppose, towards a contradiction, that α ∉ B. Then, using the •s-B-justness of π, a transition t† must occur in π past the occurrence of P′|Q′, such that t|Q′ •s t†. Since t† occurs in π, source(t†) has the form P′′|Q′′. By Proposition 12, t† must have the form u|v or u|Q′′, with t •s u. Hence a transition u with t •s u occurs in π1 past the occurrence of P′, a contradiction. So α ∈ B.
Now assume α = b! ∈ B!. Then either t|Q′ ∈ Tr• with ℓ(t|Q′) = α, or there is a derivation v with ℓ(v) = b? or ℓ(v) = b: and ℓ(t|v) = b! = α. In both cases the argument proceeds as above. It follows that C ⊆ B. By symmetry also D ⊆ B.
Let c ∈ C and c̄ ∈ D. Then there are states P1|Q1 and P2|Q2 in π and t1, t2 ∈ Tr• with
- P1 = source(t1) ∈ π1 and ℓ(t1) = c, but no u with t1 •s u occurs in π1 past P1, and
- Q2 = source(t2) ∈ π2 and ℓ(t2) = c̄, but no w with t2 •s w occurs in π2 past Q2.
Assume that either P1|Q1 = P2|Q2 or the state P2|Q2 occurs in π past the state P1|Q1; the other case will follow by symmetry. By Lemma 11 there is a t′1 ∈ Tr• with source(t′1) = P2 and t′1 ≡ t1. So ℓ(t′1) = c. Now t′1|t2 ∈ Tr• and source(t′1|t2) = P2|Q2. Moreover, ℓ(t′1|t2) = τ. Assume that τ ∉ B. Then, using the •s-B-justness of π, a transition t† must occur in π past the occurrence of P2|Q2, such that t′1|t2 •s t†. By Proposition 12, t† must have the form P′|v or u|v or u|Q′, with t′1 •s u or t2 •s v. Again we obtain a contradiction.
• Let π be a •s-B-just path of a process P\L. Let π′ be the decomposition of π. We have to show that π′ is •s-(B ∪ {c, c̄ ∈ Act | c ∈ L})-just. So assume t ∈ Tr• with ℓ(t) ∉ B ∪ {c, c̄ ∈ Act | c ∈ L} and P′ := source(t) ∈ π′. Then P′\L = source(t\L) ∈ π and ℓ(t\L) ∉ B. By the •s-B-justness of π, a transition t† must occur in π past the occurrence of P′\L, such that t\L •s t†. Now t† must have the form u\L, and by Proposition 12 t •s u. So a transition u occurs in π′ past the occurrence of P′, such that t •s u.
• The case that π is a •s-B-just path of a process P[f] goes likewise.
• Finally, it follows directly from Definition 6 that each suffix of a •s-B-just path is •s-B-just.
"If": Let t ∈ Tr•_¬B with s := source(t) ∈ π for a path π that is B-just in the sense of Definition 20. We have to show that a transition t† with t •s t† occurs in π past the occurrence of s. Using the last requirement of Definition 20 we may assume, without loss of generality, that s is the first state of π. We proceed by structural induction on t.
• Let t have the form α→P or P→r or P + u or u + Q or A:u or tˆr. Then npc(t) = {ε}. Using the first requirement of Definition 20, s cannot be the last state of π, for it admits a transition t with ℓ(t) ∉ B. Since s has the form α.P or P + Q or A or Pˆr, the first transition v of π satisfies afc(v) = {ε}, and thus t •s v.
• Let t have the form u|v. Then s has the form P|Q, with P := source(u) and Q := source(v). By the second requirement of Definition 20, π π1|π2, with π1 a C-just path of P and π2 a D-just path of Q, for some C, D ⊆ B such that τ ∈ B ∨ C ∩ D = ∅.
- Let ℓ(t) = τ, say with ℓ(u) = c ∈ Ch ∪ C̄h. Then ℓ(v) = c̄. Since t ∈ Tr•_¬B, τ = ℓ(t) ∉ B. So either c ∉ C or c̄ ∉ D; by symmetry assume the former.
- Let ℓ(u) = ℓ(t) = b! with b ∈ B. (The case ℓ(v) = b! follows by symmetry.) Then b! ∉ B ⊇ C.
So in all relevant cases u ∈ Tr•_¬C. By induction, a transition u† with u •s u† occurs in π1. Consequently, a transition t† of the form u†|Q or u†|v† occurs in π. By Proposition 12 t • t†.
• Let t have the form u|Q. Then s has the form P|Q, with P := source(u). By the second requirement of Definition 20, π π 1 |π 2 , with π 1 a C-just path of P and π 2 a D-just path of Q, for some C, D ⊆ B.
Since ℓ(u) = ℓ(t) ∉ B ⊇ C, we have u ∈ Tr•_¬C. The argument proceeds as above.
• The case that t has the form P|v follows by symmetry.
• Let t have the form u\L. Then s has the form P\L, with P := source(u). Moreover ℓ(u) = ℓ(t) ∉ B ∪ {c, c̄ ∈ Act | c ∈ L}. By the third requirement of Definition 20, the decomposition π′ of π is (B ∪ {c, c̄ ∈ Act | c ∈ L})-just. So by induction, a transition u† with u •s u† occurs in π′. Consequently, a transition t† = u†\L occurs in π. By Proposition 12 t • t†.
• Let t have the form u[f]. Then s has the form P[f], with P := source(u). Moreover ℓ(u) ∉ f⁻¹(B). By the fourth requirement of Definition 20, the decomposition π′ of π is f⁻¹(B)-just. So by induction, a transition u† with u •s u† occurs in π′. Consequently, a transition t† = u†[f] occurs in π. By Proposition 12 t • t†. □

Justness on abstract paths
By Definition 3, a path is an alternating sequence of states and non-indicator transitions. These non-indicator transitions are, in the LTS for CCS and its extensions constructed in Section 6, actually derivations of transitions P −α→ Q according to the structural operational semantics of these languages. Now define an abstract path to be an alternating sequence of states and actual transitions P −α→ Q.

Definition 21 Let t ↦ t̄ be the function that takes a derivation t ∈ Tr• into the transition P −α→ Q derived by t. Given a path π = s0 t1 s1 t2 s2 ···, let π̄ := s0 t̄1 s1 t̄2 s2 ···. An abstract path is such an object π̄.
The concept of justness naturally lifts from paths to abstract paths:

Definition 22 An abstract path ρ is B-just iff there exists a B-just (concrete) path π such that ρ = π̄.
This definition fits with the intuition that a path is just iff it models a run that can actually occur.
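Definitions 21 and 22 can be mimicked in a few lines, assuming a hypothetical encoding of a derivation as a tuple whose last three components are the derived transition (source, action, target); the remaining components record how it was derived and are forgotten by the abstraction.

```python
def abstract_transition(derivation):
    """Map a derivation t to the transition (P, alpha, Q) it derives."""
    src, act, tgt = derivation[-3:]
    return (src, act, tgt)

def abstract_path(path):
    """Pointwise abstraction of a concrete path [s0, t1, s1, ...]:
    states are kept, derivations are replaced by derived transitions."""
    return [x if i % 2 == 0 else abstract_transition(x)
            for i, x in enumerate(path)]
```

Two distinct derivations of the same transition are identified by abstract_transition, which is why an abstract path may be the image of several concrete paths, a fact exploited by Definition 22.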
The following variant of Definition 6 defines • s -B-justness directly on abstract paths.

Definition 23
An abstract path ρ is •s-B-just, for B? ⊆ B ⊆ Act, if for each derivation t ∈ Tr•_¬B with P := source(t) ∈ ρ, there is a u ∈ Tr with t •s u such that ū occurs in ρ past the occurrence of P.
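For finite abstract paths, this condition can be checked directly. The sketch below uses hypothetical interfaces enabled(state), label(t) and conc(t, u), where conc stands in for the (possibly asymmetric) concurrency relation, so that a later transition u discharges the obligation of t exactly when t and u are not concurrent.

```python
def is_just(path, B, enabled, label, conc):
    """B-justness of a finite abstract path [s0, u1, s1, ...]: every
    non-blocked derivation enabled at some state must eventually be
    interfered with (i.e. not be concurrent with) a later transition."""
    for i in range(0, len(path), 2):
        later = path[i + 1::2]                    # transitions past state i
        for t in enabled(path[i]):
            if label(t) in B:
                continue                          # blocked actions are exempt
            if not any(not conc(t, u) for u in later):
                return False
    return True
```

On a finite path this also enforces the first requirement of Definition 20: a non-blocked derivation enabled in the last state has no later transitions, so the path is rejected.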
We proceed to show that Definitions 22 and 23 agree.

Proposition 13
An abstract path is •s-B-just in the sense of Definition 23 iff it is B-just in the sense of Definition 22, i.e., iff it has the form π̄ for a concrete path π that is •s-B-just in the sense of Definition 17.
Proof: "If": Let π be a concrete path that is •s-B-just. Then by Definition 23 π̄ is •s-B-just.
"Only if": Let ρ be an abstract path that is •s-B-just in the sense of Definition 23. We present an algorithm for constructing a concrete path π that is •s-B-just in the sense of Definition 17, such that ρ = π̄. Loosely following the idea behind the proof of Theorem 1, we build an N×N-matrix with a column for each of the states P0, P1, P2, ... of ρ. The column of Pi lists the transitions from Tr•_¬B enabled in state Pi, leaving most slots empty if there are only finitely many.⁷ Incrementally, we construct prefixes πi of π. As an invariant, we maintain that π̄i is the prefix of ρ with i transitions. So πi ends in state Pi. An entry in the matrix is either empty, filled in with a transition, or crossed out. Let f: N → N×N be an enumeration of the entries in this matrix.
At the beginning, take π0 to be the path consisting of the first state P0 of ρ only. At each step i ≥ 0 we extend the path πi into πj for some j > i, if possible, thereby skipping over all πh with i < h < j, and cross out some transitions occurring in the matrix. As an invariant, we maintain that a transition t occurring in the k-th column is already crossed out when reaching step i > k iff a transition u occurs in the extension of πk into πi such that t •s u. Furthermore, when reaching step i, no entry in a column ≥ i is already crossed out. At each step i ≥ 0 we proceed as follows. We take n ∈ N to be the smallest value such that entry f(n) = (k, m) ∈ N×N, with k a column number, satisfies k ≤ i and is filled in, say with t ∈ Tr•_¬B, but not yet crossed out. If such an n does not exist, just extend πi with an arbitrary transition u such that ū is the next transition of ρ; if ρ ends in Pi, the algorithm terminates, with output πi. By our invariant, no transition v occurring in the extension of πk into πi satisfies t •s v. By Lemma 11 there is a t′ ∈ Tr•_¬B with source(t′) = Pi and t′ ≡ t. Since ρ is •s-B-just, there is a u ∈ Tr with t′ •s u such that ū occurs in ρ past the occurrence of Pi. So also t •s u. Now extend πi into πj, for j > i, such that πj ends with the transition u. Cross out all entries in the matrix, in columns up to j, necessary to maintain the invariant above. This includes entry f(n).
The desired path π is the limit of all the πi. It is •s-B-just, using the invariant, because each transition t ∈ Tr•_¬B that is enabled in a state of π appears in the matrix, which acts like a priority queue, and is eventually crossed out. □

Corollary 5 Let ρ be an abstract path. If ρ is B-just then it is C-just for any C ⊇ B.
If ρ is C-just as well as D-just, then it is C ∩ D-just.
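The bookkeeping in the proof of Proposition 13 can be recast, very roughly, as a FIFO queue of pending obligations: each non-blocked derivation enabled along the way becomes an obligation, and at every step the oldest open obligation is discharged, so none is postponed forever. The oracle step(state, t), which returns a transition discharging t together with the next state, stands in for Lemma 11 combined with the •s-B-justness of ρ; it is a hypothetical interface, as is the whole encoding.

```python
from collections import deque

def build_just_path_prefix(start, enabled, step, n_steps):
    """Construct a prefix (n_steps transitions) of a just path, discharging
    enabled derivations in FIFO order, as the matrix in the proof does via
    its enumeration f."""
    path, queue, state = [start], deque(), start
    for _ in range(n_steps):
        queue.extend((state, t) for t in enabled(state))
        if not queue:
            break                       # nothing enabled: path is complete
        _src, t = queue.popleft()       # oldest pending obligation
        u, state = step(state, t)       # a transition that discharges t
        path += [u, state]
    return path
```

The queue plays the role of the enumeration f of matrix entries: because obligations are served in order of creation, every entry is eventually crossed out, which is what makes the limit path just.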
In fact the collection of sets B such that a given abstract path ρ is B-just is closed under arbitrary intersection, and thus there is a least set Bρ such that ρ is B-just. Actions α ∈ Bρ \ B? are called ρ-enabled [17]. An action α is ρ-enabled iff there is a suffix ρ′ of ρ and a derivation t ∈ Tr• with ℓ(t) = α, enabled in the starting state of ρ′, such that no v ∈ Tr with t •s v has its image v̄ occurring in ρ′. As a consequence of Definition 6, the same closure properties apply to justness on concrete paths, but for abstract paths these results are much less trivial. We now show that the concepts of justness on abstract paths from Definitions 22 and 23 both agree with the original definition of justness from [16]. This requires lifting the definition of decomposition from concrete to abstract paths.
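The least set Bρ can be computed by brute force for a toy justness predicate, illustrating the closure property: enumerate all B for which the path is B-just, and intersect them. The predicate passed in below is a stand-in argument, not the paper's definition of justness.

```python
from itertools import chain, combinations

def least_blocking_set(actions, just):
    """Return the intersection of all B ⊆ actions with just(B); when the
    family of such B is closed under intersection, this is the least
    element B_rho."""
    subsets = chain.from_iterable(
        combinations(actions, r) for r in range(len(actions) + 1))
    family = [frozenset(B) for B in subsets if just(frozenset(B))]
    if not family:
        return None                    # the path is just for no B at all
    b_rho = frozenset(actions)
    for B in family:
        b_rho &= B
    assert just(b_rho), "family not closed under intersection"
    return b_rho
```

For instance, with the upward-closed toy predicate "B contains a", the family of just sets is {{a}, {a, b}} and the intersection recovers the least element {a}, mirroring the monotonicity stated in Corollary 5.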
Definition 24 An abstract path ρ of a process P|Q can be decomposed into abstract paths ρ1 of P and ρ2 of Q, notation ρ ∈ ρ1|ρ2, if there exist paths π, π1 and π2 such that π π1|π2, ρ = π̄ and ρi = π̄i. Likewise, an abstract path ρ of P[f] can be decomposed into an abstract path ρ′ if there are π and π′ with π π′[f], ρ = π̄ and ρ′ = π̄′. The decomposition of an abstract path of P\L is defined likewise.
In [16,Section 4.3] the decomposition of an abstract path was defined in a different style, but using [16, Observation E.3] the resulting notion of decomposition is the same.
In [16, Definition 4.1] B-justness was defined directly on abstract paths. The definition is the same as the one for concrete paths (see Definition 20), but reading "abstract path" for "path";⁸ see also Footnote 6. The following theorem says that that definition agrees with Definition 22 above.
Theorem 4 An abstract path is B-just in the sense of Definition 20 iff it is B-just in the sense of Definition 22, i.e., iff it has the form π̄ for a concrete path π that is B-just in the sense of Definition 20.

All definitions and results in this section apply equally well to "sigjustness" in the role of "justness", allowing B? ⊆ B ⊆ Act ∪ S. In this form B-sigjustness is the same as B-justness as defined in [3, Definition 4].⁹ Finally, we compare our definitions of justness to the one in [7].
Proposition 14 An abstract path is Y-signalling as defined in [7] iff it is Ȳ ∪ Act-sigjust according to Definition 20.

Proof: Call an abstract path ρ B-sigjust† iff B = Ȳ ∪ Act for an Y ⊆ S such that ρ is Y-signalling as defined in [7]. Then trivially B-sigjustness† satisfies the five conditions of Definition 20. □

Proposition 15 An abstract path is Y-just as defined in [7] iff it is B-sigjust according to Definition 20 for some B with Y = B ∩ Act, which is the case iff it is Y ∪ S-sigjust.
Proof: "If": Call an abstract path Y-just† iff it is B-sigjust by Definition 20 for some B with Y = B ∩ Act. Then trivially Y-justness† satisfies the five conditions of [7, Definition 3]. The second condition (for paths starting in P|Q) uses Proposition 14 in conjunction with the first statement of Corollary 5.
"Only if": It suffices to show that each abstract path ρ that is Y-just as defined in [7] is also •s-(Y ∪ S)-sigjust according to Definition 23. So for each t ∈ Tr• with ℓ(t) ∈ Act \ Y and s := source(t) ∈ ρ for such a path ρ, we have to find a transition t† with t •s t† such that t̄† occurs in ρ past the occurrence of s. The proof of this statement is similar to direction "If" of the proof of Theorem 3. Using the last requirement of [7, Definition 3] we may assume, without loss of generality, that s is the first state of ρ. We proceed by structural induction on t. The only case that deviates from the proof of Theorem 3 is where t has the form u|v. In this case s has the form P|Q, with P := source(u) and Q := source(v). By the second requirement of [7, Definition 3], ρ ρ1|ρ2, with ρ1 an X-just and X′-signalling abstract path of P and ρ2 a Z-just and Z′-signalling abstract path of Q, for some X, Z ⊆ Y and X′, Z′ ⊆ S, such that (when τ ∉ Y) X ∩ Z̄ = ∅, X′ ∩ Z = ∅ and X ∩ Z′ = ∅.

⁹ Following [16, 17], [3] restricts to the case that B ⊆ Ch ∪ C̄h ∪ S.
So in all but one of the relevant cases u ∈ Tr• with ℓ(u) ∈ Act \ X. By induction, there is a transition u† with u •s u† such that ū† occurs in ρ1. Consequently, there is a transition t† of the form u†|Q or u†|v† such that t̄† occurs in ρ. By Proposition 12 t • t†.
In the remaining case ℓ(v) = s̄ ∈ S̄ and ρ2 is Z′-signalling with s ∉ Z′. Using Proposition 14, Theorems 4 and 3, and Proposition 13, ρ2 is •s-(Z̄′ ∪ Act)-sigjust in the sense of Definition 23, so there is a v† ∈ Tr with v •s v† such that v̄† occurs in ρ2. Consequently, there is a transition t† of the form P′|v† or u†|v† such that t̄† occurs in ρ. By Proposition 12 t • t†. □

Corollary 6 An abstract path is B-just as defined in [7] iff it is B-just according to Definition 6.
Corollary 7 An abstract path is B-sigjust according to Definition 20 iff it is B̄ ∩ S-signalling as well as B ∩ Act-just as defined in [7].
Proof: "Only if": Let ρ be B-sigjust according to Definition 20. By Corollary 5 it is also B ∪ Act-sigjust, so by Proposition 14 it is B̄ ∩ S-signalling. By Proposition 15 ρ is B ∩ Act-just.
"If": Let ρ be B̄ ∩ S-signalling as well as B ∩ Act-just as defined in [7]. By Proposition 14 it is B ∪ Act-sigjust. By Proposition 15 it is B ∪ S-sigjust. So by Corollary 5 it is B-sigjust. □

In [16, 7, 3] a(n abstract) path is called just (without a predicate B) iff it is B-just for some B ⊆ B? ∪ Ch ∪ C̄h ∪ S̄, which by Corollary 5 is the case iff it is B? ∪ Ch ∪ C̄h ∪ S̄-just. This amounts to making a default choice for the set B of blocking actions, in which CCS handshake synchronisations c and c̄ as well as broadcast receive and signal read actions can always be blocked by the environment (namely by withholding a synchronisation partner, or failing to broadcast or to emit a signal). Using this definition it follows that an abstract path is just as defined in [16, 7, 3] iff it is just as defined in [18] and the current paper, using Definition 6 with any of the five concurrency relations •, •s, •s, •c or •c and taking B := B? ∪ Ch ∪ C̄h ∪ S̄.

Conclusion
We advocate justness as a reasonable completeness criterion for formalising liveness properties when modelling distributed systems by means of transition systems. In [18] we proposed a definition of justness in terms of a, possibly asymmetric, concurrency relation between transitions. The current paper defines such a concurrency relation for the transition systems associated to CCS, as well as its extensions with broadcast communication or signals, thereby making the definition of justness from [18] available to these languages. In fact, we provided five versions of the concurrency relation, and showed that they all give rise to the same concept of justness. We expect that this style of definition will carry over to many other process algebras. We have shown that justness satisfies the criterion of feasibility, and proved that our formalisation agrees with previous coinductive formalisations of justness for these languages.
Concurrency relations between transitions in transition systems have been studied in [35]. Our concurrency relation • follows the same computational intuition. However, in [35] transitions are classified as concurrent or not only when they have the same source, whereas, as a basis for the definition of justness, here we compare transitions with different sources. Apart from that, our concurrency relation is more general in that it satisfies fewer closure properties, and moreover is allowed to be asymmetric.
Concurrency is represented explicitly in models like Petri nets [33], event structures [36], or asynchronous transition systems [34,2,37]. We believe that the semantics of CCS in terms of such models agrees with its semantics in terms of labelled transition systems with a concurrency relation as given here. However, formalising such a claim requires a choice of an adequate justness-preserving semantic equivalence defined on the compared models. Development of such semantic equivalences is a topic for future research [19].