On CSP and the Algebraic Theory of Effects



We consider CSP from the point of view of the algebraic theory of effects, which classifies operations as effect constructors or effect deconstructors; it also provides a link with functional programming, being a refinement of Moggi’s seminal monadic point of view. There is a natural algebraic theory of the constructors whose free algebra functor is Moggi’s monad; we illustrate this by characterising free and initial algebras in terms of two versions of the stable failures model of CSP, one more general than the other. Deconstructors are dealt with as homomorphisms to (possibly non-free) algebras. One can view CSP’s action and choice operators as constructors and the rest, such as concealment and concurrency, as deconstructors. Carrying this programme out results in taking deterministic external choice as constructor rather than general external choice. However, binary deconstructors, such as the CSP concurrency operator, provide unresolved difficulties. We conclude by presenting a combination of CSP with Moggi’s computational λ-calculus, in which the operators, including concurrency, are polymorphic. While the paper mainly concerns CSP, it ought to be possible to carry over similar ideas to other process calculi.

15.1 Introduction

We examine Hoare’s CSP [9, 13, 29] from the point of view of the algebraic theory of effects [14, 22, 23, 25], a refinement of Moggi’s seminal “monads as notions of computation” [3, 18, 19]. This is a natural exercise as the algebraic nature of both points to a possibility of commonality. In the algebraic theory of effects, not all operations have the same character. Some are effect constructors: they create the effects at hand; some are effect deconstructors: they respond to effects created. For example, raising an exception creates an effect (the exception raised), whereas exception-handling responds to effects (exceptions that have been raised). It may therefore be interesting, and even useful, to classify CSP operators as constructors or deconstructors. Considering CSP and the algebraic theory of effects together also raises the possibility of combining CSP with functional programming in a principled way, as Moggi’s monadic approach provides a framework for the combination of computational effects with functional programming. More generally, although we mainly consider CSP, a similar exercise could be undertaken for other process calculi, as they have a broadly similar algebraic character.

The theory of algebraic effects starts with the observation that effect constructors generally satisfy natural equations, and Moggi’s monad T is precisely the free algebra monad for these equations (an exception is the continuations monad, which is of a different character). Effect deconstructors are treated as homomorphisms from the free algebra to another algebra, perhaps with the same carrier as the free algebra but with different operations. These operations can be given by combinations of effect constructors and previously defined deconstructors. The situation is much like that of primitive recursive definitions, although we will not present a formal definitional scheme.

We mainly consider that part of CSP containing action, internal and external choice, deadlock, relabelling, concealment, concurrency and interleaving, but not, for example, recursion (we do, albeit briefly, consider the extension with termination and sequencing). The evident constructors are then action prefix, and the two kinds of choice, internal and external, the latter together with deadlock. The evident deconstructors are relabelling, concealment, concurrency and interleaving. There is, however, a fly in the ointment, as pointed out in [25]. Parallel operators, such as CSP’s concurrency and interleaving, are naturally binary, and respond to effects in both arguments. However, the homomorphic approach to deconstructors, as sketched above, applies only to unary deconstructors, although it is possible to extend it to accommodate parameters and simultaneous definitions. Nonetheless, the natural definitions of concurrency and interleaving do not fall within the homomorphic approach, even in the extended sense. This problem has nothing to do with CSP: it applies to all examples of parallelism of which we are aware.

Even worse, when we try to carry out the above analysis for CSP, it seems that the homomorphic approach cannot handle concealment. The difficulty is caused by the fact that concealment does not commute with external choice. Fortunately this difficulty can be overcome by changing the effect constructors: we remove external choice and action prefix and replace them by the deterministic external choice operator (a1 → P(a1) ∣ … ∣ an → P(an)), where the ai are all different. Binary external choice then becomes a deconstructor.

With that we can carry out the program of analysis, finding only the expected difficulty in dealing with concurrency and interleaving. However, it must be admitted that the n-ary operators are somewhat clumsy to work with, and it is at least a priori odd to take binary external choice as a deconstructor. On the other hand, in [13] Section 1.1.3 Hoare writes:

The definition of choice can readily be extended to more than two alternatives, e.g.,
$$(x \rightarrow P\mid y \rightarrow Q\mid \ldots \mid z \rightarrow R)$$
Note that the choice symbol ∣ is not an operator on processes; it would be syntactically incorrect to write P ∣ Q, for processes P and Q. The reason for this rule is that we want to avoid giving a meaning to
$$(x \rightarrow P\mid x \rightarrow Q)$$
which appears to offer a choice of first event, but actually fails to do so.

which might be read as offering some support to a treatment that takes deterministic external choice as a primitive (here = constructor), rather than general external choice. On our side, we count it as a strength of the algebraic theory of effects that it classifies effect-specific operations and places constraints on them: they either belong to the basic theory or must be defined according to a scheme that admits inductive proofs.

Turning to the combination with functional programming, consider Moggi’s computational λ-calculus. Just as one accommodates imperative programming within functional programming by treating commands as expressions of type unit, so it is natural to treat our selection of CSP terms as expressions of type empty, as they do not terminate normally, only in deadlock. For process languages such as ACP [4, 5], which do have the possibility of normal termination, or CSP with such a termination construct, one switches to regarding process terms as expressions of type unit, when a sequencing operator is also available.

As we have constructors for every T(X), it is natural to treat them as polymorphic constructs, rather than just as process combinators. For example, one could have a binary construction for internal choice, with typing rule:
$$\frac{M\! :\! \sigma \quad \quad N\! :\! \sigma } {M \sqcap N\! :\! \sigma }$$
It is natural to continue this theme for the deconstructors, as in:
$$\frac{M\! :\! \sigma } {M\setminus a\! :\! \sigma }\quad \quad \quad \frac{M\! :\! \sigma \quad \quad N\! :\! \tau } {M\!\mid \mid \!N\! :\! \sigma\times\tau }$$
where the thought behind the last rule is that M and N are evaluated concurrently, terminating normally only if they both do, when the pair of results returned individually by each is returned.

In the case of CSP a functional programming language CSPM incorporating CSP processes has been given by Scattergood [31]; it is used by most existing CSP tools, including the Failures Divergences Refinement Checker (FDR), see [28]. Scattergood’s CSPM differs from our proposal in several respects. Most significantly, processes are not treated on a par with other expressions: in particular, they cannot be passed as arguments to functions, and CSP constructors and deconstructors are only available for processes. It remains to be seen if such differences are of practical relevance.

In Section 15.3 we take deadlock, action, binary internal and external choice as the constructors. We show, in Theorem 1, that, with the standard equational theory, the initial algebra is the “finitary part” of the original Brookes-Hoare-Roscoe failures model [9], which is known to be isomorphic to the finitary, divergence- and \(\surd\)-free part of the failures/divergences model, as well as the finitary, divergence- and \(\surd\)-free part of the stable failures model, both of which are described in [29]. In Section 15.4 we go on to consider effect deconstructors, arriving at the difficulty with concealment and illustrating the problems with parallel operators in the (simpler) context of Milner’s synchronisation trees. A reader interested in the problem of dealing with parallel operators algebraically need only read this part, together with [25].

We then backtrack in Section 15.5, making a different choice of constructors, as discussed above, and giving another characterisation of the finitary failures model as an initial algebra in Theorem 3. With that, we can carry out our programme, failing only where expected: with the binary deconstructors. In Section 15.6 we add a zero for the internal choice operator to our algebra; this can be interpreted as divergence in the stable failures model, and permits the introduction of a useful additional deterministic external choice constructor. Armed with this tool, in Section 15.7, we look at the combination of CSP and functional programming, following the lines hinted at above. In order to give a denotational semantics we need, in Theorem 7, to characterise the free algebras rather than just the initial one.

As remarked above, termination and sequencing are accommodated within functional programming via the type unit; in Section 15.7.1 we therefore also give a brief treatment of our fragment of CSP extended with termination and sequencing, modelling it in the free algebra over the one-point set.

Section 15.8 contains a brief discussion of the general question of combining process calculi, or parallelism with a global store, with functional programming. The case of CSP considered here is just one example of the many such possible combinations. Throughout this paper we do not consider recursion; this enables us to work within the category of sets. A more complete treatment would deal with recursion working within, say, the category of ω-cpos (i.e., partial orders with lubs of increasing ω-sequences) and continuous functions (i.e., monotone functions preserving lubs of increasing ω-sequences). This is discussed further in Section 15.8. The appendix gives a short presentation of Moggi’s computational λ-calculus.

15.2 Technical Preliminaries

We give a brief sketch of finitary equational theories and their free algebra monads. For a fuller explanation see, e.g., [8, 2]. A finitary equational theory Th is derived from a given set of axioms, written using a signature Σ consisting of a set of operation symbols op : n, together with their arities n ≥ 0. One forms terms t from the signature and variables, and the axioms then consist of equations t = u between terms; there is a natural equational logic for deducing consequences of the axioms; and the theory consists of all the equations derivable from the axioms. A ground equation is one where both terms are closed, meaning that they contain no variables.

For example, we might consider the fragment of CSP with signature □ : 2, Stop : 0 and the following axioms for a semilattice (the first three axioms) with a zero (the last):
$$\begin{array}{lc} \mbox{ Associativity} &(x\square y)\square z = x\square (y\square z) \\ \mbox{ Commutativity}& x\square y = y\square x \\ \mbox{ Idempotence} & x\square x = x \\ \mbox{ Zero} & x\square \mathtt{Stop} = x\end{array}$$
A Σ-algebra is a structure \(\mathcal{A} = (X,{(\mathrm{{op}}_{\mathcal{A}}\! :\! {X}^{n} \rightarrow X)}_{\mathrm{op}:n\in \Sigma })\); we say that X is the carrier of \(\mathcal{A}\) and the \(\mathrm{{op}}_{\mathcal{A}}\) are its operations. We may omit the subscript on operations when the algebra is understood. When we are thinking of an algebra as an algebra of processes, we may say “operator” rather than “operation.” A homomorphism between two algebras is a map between their carriers respecting their operations; we therefore have a category of Σ-algebras.

Given such a Σ-algebra, every term t has a denotation [​[t]​](ρ), an element of the carrier, given an assignment ρ of elements of the carrier to every variable; we often confuse terms with their denotation. The algebra satisfies an equation t = u if t and u have the same denotation for every such assignment. If \(\mathcal{A}\) satisfies all the axioms of a theory Th, it is called a Th-algebra; the Th-algebras form a subcategory of the category of Σ-algebras. Any equation provable from the axioms of a theory Th is satisfied by any Th-algebra. We say that a theory Th is (ground) equationally complete with respect to a Th-algebra if a (ground) equation is provable from Th if, and only if, it is satisfied by the Th-algebra.

Any finitary equational theory Th determines a free algebra monad TTh on the category of sets, as well as operations
$$\mathrm{{op}}_{X}\! :\! {T}_{\mathrm{Th}}{(X)}^{n} \rightarrow{T}_{\mathrm{ Th}}(X)$$
for any set X and op : n ∈ Σ, such that \(({T}_{\mathrm{Th}}(X),(\mathrm{op}_{X}: {T}_{\mathrm{Th}}(X)^{n} \rightarrow {T}_{\mathrm{Th}}(X))_{\mathrm{op}:n\in \Sigma })\) is the free Th-algebra over X. Although TTh(X) is officially just a set, the carrier of the free algebra, we may also use TTh(X) to denote the free algebra itself. In the above example the monad is the finite powerset monad:
$$\mathcal{F}(X) =\{ u \subseteq X\mid u\mbox{ is finite}\}$$
with \({\square }_{X}\) and \({\mathtt{Stop}}_{X}\) being union and the empty set, respectively.
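To make the free-algebra reading concrete, the example can be transcribed directly: terms over the signature □ : 2, Stop : 0 are evaluated in the finite powerset, with □ interpreted as union and Stop as the empty set. The following sketch (Python; the tuple encoding of terms is ours, not part of the text) checks one consequence of the axioms:

```python
# Terms encoded as nested tuples (a hypothetical encoding of ours):
# ("var", x), ("stop",), ("box", t, u).  The free algebra of the
# semilattice-with-zero theory over X is the finite powerset F(X).

def denote(t):
    """Denotation of a term in the free algebra F(X): a frozenset of variables."""
    tag = t[0]
    if tag == "var":                 # a variable x denotes {x}
        return frozenset([t[1]])
    if tag == "stop":                # Stop denotes the empty set
        return frozenset()
    if tag == "box":                 # [] denotes union
        return denote(t[1]) | denote(t[2])
    raise ValueError(f"unknown term: {tag}")

# (x [] Stop) [] x and x denote the same set, as the zero and
# idempotence axioms predict:
lhs = ("box", ("box", ("var", "x"), ("stop",)), ("var", "x"))
rhs = ("var", "x")
assert denote(lhs) == denote(rhs)
```

Two terms are provably equal in the theory exactly when they denote the same finite set, which is what makes the finite powerset the free algebra here.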

15.3 A First Attempt at Analysing CSP

We consider the fragment of CSP with deadlock, action prefix, internal and external choice, relabelling and concealment, and concurrency and interleaving. Working over a fixed alphabet A of actions, we consider the following operation symbols:

Deadlock
$$\mathtt{Stop}\! :\! 0$$
Action
$$a \rightarrow -\! :\! 1\quad \quad (a \in A)$$
Internal and External Choice
$$\sqcap,\square \! :\! 2$$
Relabelling and Concealment
$$f(-),-\setminus a\! :\! 1$$
for any relabelling function f : A → A and action a. If A is infinite, this makes the syntax infinitary; as that causes us no problems, we do not avoid it.
Concurrency and Interleaving
$$\mid \mid,\mid \mid \mid \;\! :\! 2$$
The signature of our (first) equational theory CSP ( □ ) for CSP only has operation symbols for the subset of these operators, which are naturally thought of as constructors, namely deadlock, action and internal and external choice. Its axioms are those given by de Nicola in [10]. They are largely very natural and modular, and are as follows:
  • □, Stop is a semilattice with a zero (i.e., the above axioms for a semilattice with a zero).

  • ⊓ is a semilattice (i.e., the axioms stating the associativity, commutativity and idempotence of ⊓).

  • □ and ⊓ distribute over each other:
    $$x\square (y \sqcap z) = (x\square y) \sqcap(x\square z)\quad \quad x \sqcap(y\square z) = (x \sqcap y)\square (x \sqcap z)$$
  • Actions distribute over ⊓:
    $$a \rightarrow(x \sqcap y) = a \rightarrow x \sqcap a \rightarrow y$$
    $$a \rightarrow x\square a \rightarrow y = a \rightarrow x \sqcap a \rightarrow y$$

All these axioms are mathematically natural except the last, which involves a relationship between three different operators.

We adopt some useful standard notational abbreviations. For n ≥ 1 we write \({\sqcap }_{i=1}^{n}{t}_{i}\) to abbreviate \({t}_{1} \sqcap \ldots \sqcap {t}_{n}\), intending t1 when n = 1. We assume that parentheses associate to the left; however, as ⊓ is associative, the choice does not matter. As ⊓ is a semilattice, we can even index over nonempty finite sets, as in \({\sqcap }_{i\in I}{t}_{i}\), assuming some standard ordering of the ti without repetitions. As □ is a semilattice with a zero, we can adopt analogous notations \({\square }_{i=1}^{n}{t}_{i}\) and \({\square }_{i\in I}{t}_{i}\), but now also allowing n to be 0 and I to be ∅.

As ⊓ is a semilattice we can define a partial order for which it is the greatest lower bound by writing t ⊑ u as an abbreviation for t ⊓ u = t; then, as □ distributes over ⊓, it is monotone with respect to ⊑: that is, if x ⊑ x′ and y ⊑ y′ then x □ y ⊑ x′ □ y′. (We mean all these in a formal sense, for example, that if t ⊑ u and u ⊑ v are provable, so is t ⊑ v, etc.) We note the following, which is equivalent to the distributivity of ⊓ over □, given that ⊓ and □ are semilattices, and the other distributivity, that □ distributes over ⊓:
$$x \sqcap(y\square z) = x \sqcap(y\square z) \sqcap(x\square y)$$
The equation can also be written as x ⊓ (y □ z) ⊑ x □ y. Using this one can derive another helpful equation:
$$(x\square a \rightarrow z) \sqcap(y\square a \rightarrow w) = (x\square a \rightarrow(z \sqcap w)) \sqcap(y\square a \rightarrow(z \sqcap w))$$

We next rehearse the original refusal sets model of CSP, restricted to finite processes without divergence; this provides a convenient context for identifying the initial model of CSP ( □ ) in terms of failures.

A failure (pair) is a pair (w, W) with \(w \in {A}^{{\ast}}\) and \(W {\subseteq }_{\mathrm{fin}}A\). For every set F of failure pairs, we define its set of traces to be
$$\mathrm{{tr}}_{F} =\{ w\mid \,(w,\emptyset ) \in F\}$$
and for every w ∈ trF we define its set of futures to be:
$$\mathrm{{fut}}_{F}(w) =\{ a\mid wa \in \mathrm{{ tr}}_{F}\}$$
With that, a refusal set F (aka a failure set) is a set of failure pairs, satisfying the following conditions:
  1. ε ∈ trF

  2. wa ∈ trF ⇒ w ∈ trF

  3. (w, W) ∈ F ∧ V ⊆ W ⇒ (w, V ) ∈ F

  4. (w, W) ∈ F ∧ a ∉ futF(w) ⇒ (w, W ∪ {a}) ∈ F

A refusal set is finitary if its set of traces is finite.
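For a finite alphabet, the four closure conditions can be checked mechanically. The following sketch (Python; the representation of traces as tuples and refusals as frozensets, and all function names, are ours) verifies them for a candidate finite set of failure pairs:

```python
from itertools import combinations

def subsets(s):
    """All subsets of a finite set, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def traces(F):
    return {w for (w, W) in F if W == frozenset()}

def is_refusal_set(F, A):
    """Check conditions 1-4 of the text for a finite set F of failure
    pairs over a finite alphabet A."""
    tr = traces(F)
    if () not in tr:                                        # 1. eps in tr_F
        return False
    if any(w[:-1] not in tr for w in tr if w):              # 2. traces prefix-closed
        return False
    fut = {w: {a for a in A if w + (a,) in tr} for w in tr}
    for (w, W) in F:
        if any((w, V) not in F for V in subsets(W)):        # 3. refusals down-closed
            return False
        if any((w, W | {a}) not in F for a in A - fut[w]):  # 4. saturation
            return False
    return True

A = frozenset({"a", "b"})
Stop = frozenset(((), W) for W in subsets(A))
assert is_refusal_set(Stop, A)
assert not is_refusal_set(frozenset({((), frozenset())}), A)  # not saturated
```

Note that condition 3 (with V = ∅) guarantees that every (w, W) ∈ F has w ∈ trF, so futF(w) is defined wherever condition 4 consults it.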

The collection of finitary refusal sets can be turned into a CSP ( □ )-algebra \({\mathcal{R}\!}_{f}\) by the following standard definitions of the operators:
$$\begin{array}{lcl} {\mathtt{Stop}}_{{\mathcal{R}\!}_{f}} & =&\{(\epsilon,W)\mid W {\subseteq }_{\mathrm{fin}}A\} \\ a {\rightarrow }_{{\mathcal{R}\!}_{f}}F & =&\{(\epsilon,W)\mid a\notin W\} \cup \{ (aw,W)\mid (w,W) \in F\} \\ F {\sqcap }_{{\mathcal{R}\!}_{f}}G & =&F \cup G \\ F{\square }_{{\mathcal{R}\!}_{f}}G & =&\{(\epsilon,W)\mid (\epsilon,W) \in F \cap G\} \cup \{ (w,W)\mid w\neq \epsilon,\ (w,W) \in F \cup G\}\end{array}$$
The other CSP operation symbols also have standard interpretations over the collection of finitary refusal sets:
$$\begin{array}{lcl} f(F)& =&\{(f(w),W)\mid (w,{f}^{-1}(W) \cap \mathrm{{ fut}}_{F}(w)) \in F\} \\ F\setminus a & =&\{(w\setminus a,W)\mid (w,W \cup \{ a\}) \in F\} \\ F\mid \mid G & =&\{(w,W \cup V )\mid (w,W) \in F,\ (w,V ) \in G\} \\ F\mid \mid \mid G & =&\{(w,W)\mid (u,W) \in F,\ (v,W) \in G,\ w \in u\!\mid \!v\} \end{array}$$
with the evident action of f on sequences and sets of actions, and where w∖a is obtained from w by removing all occurrences of a, and where u ∣ v is the set of interleavings of u and v.
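Over a small finite alphabet these set-comprehension definitions can be transcribed essentially verbatim and experimented with. The sketch below (Python; all names are ours, and the operators are restricted to the two-letter alphabet {a, b}) checks two small instances: a → Stop ∣∣ b → Stop = Stop, and (a → Stop □ b → Stop)∖a = Stop ⊓ (b → Stop):

```python
from itertools import combinations

A = frozenset({"a", "b"})              # a tiny illustrative alphabet

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

STOP = frozenset(((), W) for W in subsets(A))

def prefix(a, F):                      # a -> F
    return frozenset({((), W) for W in subsets(A) if a not in W}
                     | {((a,) + w, W) for (w, W) in F})

def intern(F, G):                      # F |~| G  (internal choice)
    return F | G

def extern(F, G):                      # F [] G   (external choice)
    return frozenset({(w, W) for (w, W) in F & G if w == ()}
                     | {(w, W) for (w, W) in F | G if w != ()})

def conceal(F, a):                     # F \ a    (finite F only)
    # (w\a, W) for each (w, W u {a}) in F; the refusals V in F with a in V
    # arise from exactly the candidates W = V and W = V \ {a}.
    return frozenset((tuple(x for x in w if x != a), W)
                     for (w, V) in F if a in V
                     for W in (V, V - {a}))

def par(F, G):                         # F || G   (concurrency)
    return frozenset((w, W | V) for (w, W) in F for (v, V) in G if w == v)

P = extern(prefix("a", STOP), prefix("b", STOP))      # a -> Stop [] b -> Stop
assert par(prefix("a", STOP), prefix("b", STOP)) == STOP   # deadlock
assert conceal(P, "a") == intern(STOP, prefix("b", STOP))
```

The last assertion is an instance of Eq. 15.3 below: concealing a in a → Stop □ b → Stop yields Stop ⊓ (b → Stop), not the pointwise concealment of the two branches.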

Lemma 1.

Let F be a finitary refusal set. Then for every w ∈ trF there are V1,…,Vn ⊆ futF(w), including futF(w), such that (w,W) ∈ F iff W ∩ Vi = ∅ for some i ∈ {1,…,n}.


The closure conditions imply that (w, W) is in F iff (w, W ∩ futF(w)) is. Thus we only need to be concerned about pairs (w, W) with W ⊆ futF(w). Now, as futF(w) is finite, for any relevant (w, W) ∈ F, of which there are finitely many, we can take one of the Vi to be futF(w) ∖ W, and we obtain finitely many such sets. As (w, ∅) ∈ F, these include futF(w).

Lemma 2.

All finitary refusal sets are definable by closed CSP (□) terms.


Let F be a finitary refusal set. We proceed by induction on the length of the longest trace in F. By the previous lemma there are sets V1, …, Vn, including futF(ε), such that (ε, W) ∈ F iff W ∩ Vi = ∅ for some i ∈ {1, …, n}. Define Fa, for a ∈ futF(ε), by:
$${F}_{a} =\{ (w,W)\mid (aw,W) \in F\}$$
Then it is not hard to see that each Fa is a finitary refusal set, and that
$$F = {\sqcap }_{i}{\square }_{a\in {V }_{i}}a \rightarrow{F}_{a}$$
As the longest trace in Fa is strictly shorter than the longest one in F, the proof concludes, employing the induction hypothesis.
We next recall some material from de Nicola [10]. Let \(\mathcal{L}\) be a collection of sets; we say it is saturated if whenever \(L \subseteq L^{\prime} \subseteq \bigcup \mathcal{L}\), for \(L \in \mathcal{L}\), then \(L^{\prime} \in \mathcal{L}\). Then a closed CSP ( □ )-term t is in normal form if it is of the form:
$${\sqcap }_{L\in \mathcal{L}}{\square }_{a\in L}a \rightarrow{t}_{a}$$
where \(\mathcal{L}\) is a finite non-empty saturated collection of finite sets of actions and each term ta is in normal form. Note that the concept of normal form is defined recursively.

Proposition 1.

CSP (□) is ground equationally complete with respect to \({\mathcal{R}\!}_{f}\).


Every term is provably equal in CSP ( □ ) to a term in normal form. For the proof, follow that of Proposition A6 in [10]; alternatively, it is a straightforward induction in which Eqs. 15.1 and 15.2 are helpful. Further, it is an immediate consequence of Lemma 4.8 in [10] that if two normal forms have the same denotation in \({\mathcal{R}\!}_{f}\) then they are identical (and Lemma 6 below establishes a more general result). The result then follows.

Theorem 1.

The finitary refusal sets algebra \({\mathcal{R}\!}_{f}\) is the initial CSP (□) algebra.


Let the initial such algebra be I. There is a unique homomorphism \(h\! :\!\mathrm{ I} \rightarrow {\mathcal{R}\!}_{f}\). By Lemma 2, h is a surjection. By the previous proposition, \({\mathcal{R}\!}_{f}\) is complete for equations between closed terms, and so h is an injection. So h is an isomorphism, completing the proof.

15.4 Effect Deconstructors

In the algebraic theory of effects, the semantics of effect deconstructors, such as exception handlers, is given using homomorphisms from free algebras. In this case we are interested in \({T}_{\mathrm{CSP}\ (\square )}(\emptyset )\). This is the initial CSP ( □ ) algebra, \({\mathcal{R}\!}_{f}\), so given a CSP ( □ ) algebra:
$$\mathcal{A} = ({T}_{\mathrm{CSP}\ (\square )}(\emptyset ),{\sqcap }_{\mathcal{A}},{\mathtt{Stop}}_{\mathcal{A}},(a {\rightarrow }_{\mathcal{A}}),{\square }_{\mathcal{A}})$$
there is a unique homomorphism:
$$h\! :\! {\mathcal{R}\!}_{f} \rightarrow \mathcal{A}$$

Relabelling We now seek to define \(f(-)\! :\! {T}_{\mathrm{CSP}\ (\square )}(\emptyset ) \rightarrow {T}_{\mathrm{CSP}\ (\square )}(\emptyset )\) homomorphically. Define an algebra Rl on \({T}_{\mathrm{CSP}\ (\square )}(\emptyset )\) by putting, for refusal sets F, G:

$${ \mathtt{Stop}}_{Rl} ={ \mathtt{Stop}}_{{\mathcal{R}\!}_{f}}$$
$$(a {\rightarrow }_{Rl}F) = (f(a) {\rightarrow }_{{\mathcal{R}\!}_{f}}F)$$
$$F {\sqcap }_{Rl}G = F {\sqcap }_{{\mathcal{R}\!}_{f}}G\quad \quad F{\square }_{Rl}G = F{\square }_{{\mathcal{R}\!}_{f}}G$$
One has to verify this gives a CSP ( □ )-algebra, which amounts to verifying that the two action equations hold, for example that, for all F, G:
$$a {\rightarrow }_{Rl}(F {\sqcap }_{Rl}G) = (a {\rightarrow }_{Rl}F) {\sqcap }_{Rl}(a {\rightarrow }_{Rl}G)$$
which is equivalent to:
$$f(a) {\rightarrow }_{{\mathcal{R}\!}_{f}}(F {\sqcap }_{{\mathcal{R}\!}_{f}}G) = (f(a) {\rightarrow }_{{\mathcal{R}\!}_{f}}F) {\sqcap }_{{\mathcal{R}\!}_{f}}(f(a) {\rightarrow }_{{\mathcal{R}\!}_{f}}G)$$
We therefore have a unique homomorphism
$${\mathcal{R}\!}_{f}\stackrel{{h}_{Rl}}{ \rightarrow }Rl$$
and so the following equations hold over the algebra \({\mathcal{R}\!}_{f}\):
$${h}_{Rl}(\mathtt{Stop}) = \mathtt{Stop}$$
$${h}_{Rl}(a \rightarrow F) = f(a) \rightarrow{h}_{Rl}(F)$$
$${h}_{Rl}(F \sqcap G) = {h}_{Rl}(F) \sqcap{h}_{Rl}(G)\quad \quad {h}_{Rl}(F\square G) = {h}_{Rl}(F)\square {h}_{Rl}(G)$$
Informally one can use these equations to define hRl by a “principle of equational recursion,” but one must remember to verify that the implicit algebra obeys the required equations.
We use hRl to interpret relabelling. We then immediately recover the familiar CSP laws:
$$f(\mathtt{Stop}) = \mathtt{Stop}$$
$$f(a \rightarrow x) = f(a) \rightarrow f(x)$$
$$f(x \sqcap y) = f(x) \sqcap f(y)\quad \quad f(x\square y) = f(x)\square f(y)$$
which we now see to be restatements of the homomorphism of relabelling.
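Viewed operationally, the homomorphic definition of hRl is just structural recursion over closed CSP ( □ ) terms, one clause per equation. A sketch (Python; the tuple encoding of terms is ours):

```python
# Closed CSP([]) syntax as hypothetical tuples: ("stop",), ("act", a, t)
# for action prefix, ("intern", t, u) for |~|, ("extern", t, u) for [].
# Each clause below is one of the four homomorphism equations for h_Rl.

def relabel(f, t):
    tag = t[0]
    if tag == "stop":
        return ("stop",)                                    # h(Stop) = Stop
    if tag == "act":
        return ("act", f(t[1]), relabel(f, t[2]))           # h(a -> F) = f(a) -> h(F)
    if tag == "intern":
        return ("intern", relabel(f, t[1]), relabel(f, t[2]))
    if tag == "extern":
        return ("extern", relabel(f, t[1]), relabel(f, t[2]))
    raise ValueError(f"unknown term: {tag}")

swap = {"a": "b", "b": "a"}.get
t = ("extern", ("act", "a", ("stop",)), ("act", "b", ("stop",)))
assert relabel(swap, t) == \
    ("extern", ("act", "b", ("stop",)), ("act", "a", ("stop",)))
```

The point of the algebraic discipline is that such a recursion is legitimate only after one checks that the clauses define a CSP ( □ )-algebra, exactly as verified for Rl above.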
Concealment There is a difficulty here. We do not have that
$$(F\square G)\setminus a = F\setminus a\,\square G\setminus a$$
but rather have the following two equations (taken from [10]):
$$((a \rightarrow F)\square G)\setminus a = F\setminus a \sqcap((F\square G)\setminus a)$$
$$\left ({\square }_{i=1}^{n}{a}_{ i}{F}_{i}\right )\setminus a = {\square }_{i=1}^{n}{a}_{ i}({F}_{i}\setminus a)$$
where no ai is a. Furthermore, there is no direct definition of concealment via an equational recursion, i.e., there is no suitable choice of algebra, \({\square }_{\mathcal{A}}\) etc. For, if there were, we would have:
$$(F\square G)\setminus a = F\setminus a\,{\square }_{\mathcal{A}}G\setminus a$$
So if a does not occur in any trace of F′ or G′ we would have:
$$\begin{array}{lcl} F^{\prime}\,{\square }_{\mathcal{A}}G^{\prime}& =&F^{\prime}\setminus a\,{\square }_{\mathcal{A}}G^{\prime}\setminus a \\ & =&(F^{\prime}\square G^{\prime})\setminus a \\ & =&F^{\prime}\square G^{\prime} \end{array}$$
but, returning to Eq. 15.5, a certainly does not occur in any trace of F∖a or G∖a and so we would have:
$$\begin{array}{lcl} (F\square G)\setminus a& =&F\setminus a{\square }_{\mathcal{A}}G\setminus a \\ & =&F\setminus a{\square }_{{\mathcal{R}\!}_{f}}G\setminus a \end{array}$$
which is false. It is conceivable that although there is no direct homomorphic definition of concealment, there may be an indirect one where other functions (possibly with parameters; see below) are defined homomorphically and concealment is definable as a combination of those.

15.4.1 Concurrency Operators

Before trying to recover from the difficulty with concealment, we look at a further difficulty, that of accommodating binary deconstructors, particularly parallel operators. We begin with a simple example in a strong bisimulation context, but rather than a concurrency operator in the style of CCS we consider one analogous to CSP’s ∣∣.

We take as signature a unary action prefix, a.−, for a ∈ A, a nullary NIL and a binary sum +. The axioms are that + is a semilattice with zero NIL; the initial algebra is then that of finite synchronisation trees ST. Every synchronisation tree τ has a finite depth and can be written as
$$\sum\limits_{i=1}^{n}{a}_{ i}.{\tau }_{i}$$
for some n ≥ 0, where the τi are also synchronisation trees (of strictly smaller depth), and where no pair (ai, τi) occurs twice. The order of writing the summands makes no difference to the tree denoted.
One can define a binary synchronisation operator ∣∣ on synchronisation trees τ = ∑iai.τi and τ′ = ∑jbj.τ′j by induction on the depth of τ (or τ′):
$$\tau \mid \mid \tau^{\prime} = \sum\limits_{{a}_{i}\,=\,{b}_{j}}{a}_{i}.({\tau }_{i}\mid \mid {\tau^{\prime}}_{j})$$
Looking for an equational recursive definition of ∣∣, one may try a “mutual (parametric) equational recursive definition” of ∣∣ and a certain family ∣∣a with x, y, z varying over ST:
$$\begin{array}{ccl} \mathtt{NIL}\,\mid \mid z & =&\mathtt{NIL}\, \\ (x + y)\mid \mid z& =&(x\mid \mid z) + (y\mid \mid z) \\ a.x\mid \mid z & =&x\mid {\mid }^{a}z\end{array}$$
$$\begin{array}{ccl} z\mid {\mid }^{a}\mathtt{NIL}\, & =&\mathtt{NIL}\, \\ z\mid {\mid }^{a}(x + y)& =&(z\mid {\mid }^{a}x) + (z\mid {\mid }^{a}y) \\ z\mid {\mid }^{a}b.x & =&\left \{\begin{array}{ll} a.(z\mid \mid x)&(\mbox{ if }b = a)\\ \mathtt{NIL } \, &(\mbox{ if } b\neq a) \end{array} \right. \end{array}$$
Unfortunately, this definition attempt is not an equational recursion. Mutual (parametric) equational recursions are single ones to an algebra on a product. Here we wish a map : ST → ST × ST. Informally we would write such clauses as:
$$\langle (x + y)\mid \mid z,\;z\mid {\mid }^{a}(x + y)\rangle =\langle (x\mid \mid z) + (y\mid \mid z),\;(z\mid {\mid }^{a}x) + (z\mid {\mid }^{a}y)\rangle$$
with the recursion variables, here x, y, on the left for ∣∣ and on the right for ∣∣a. However:
$$\langle a.x\mid \mid z,\;z\mid {\mid }^{a}b.x\rangle = \left \{\begin{array}{ll} \langle x\mid {\mid }^{a}z,\;a.(z\mid \mid x)\rangle &(\mbox{ if }b = a) \\ \langle x\mid {\mid }^{a}z,\;\mathtt{NIL}\,\rangle &(\mbox{ if }b\neq a) \end{array} \right.$$
does not respect this discipline: the recursion variable, here x, (twice) switches places with the parameter z.
We are therefore caught in a dilemma. One can show, by induction on the depth of synchronisation trees, that the above definitions, viewed as equations for ∣∣ and ∣∣a, have a unique solution: the expected synchronisation operator ∣∣, and the functions ∣∣a defined on synchronisation trees τ and τ′ = ∑jbj.τ′j by:
$$\tau \mid {\mid }^{a}\tau^{\prime} =\sum\limits_{{b}_{j}\,=\,a}a.(\tau \mid \mid {\tau^{\prime}}_{j})$$
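Both the direct definition of ∣∣ and the family ∣∣a are easy to realise on finite synchronisation trees, and one can then check that the clauses of the attempted mutual recursion do hold of this unique solution, even though they do not constitute an equational recursion. A sketch (Python; encoding trees, hypothetically, as finite sets of (action, subtree) pairs):

```python
# Synchronisation trees as frozensets of (action, subtree) pairs; NIL is
# the empty tree.  par is the direct definition of ||, par_a the family ||^a.

NIL = frozenset()

def act(a, t):
    return frozenset({(a, t)})

def plus(t, u):
    return t | u

def par(t, u):
    """tau || tau': synchronise on common actions."""
    return frozenset((a, par(x, y)) for (a, x) in t for (b, y) in u if a == b)

def par_a(t, a, u):
    """tau ||^a tau': sum of a.(tau || tau'_j) over the a-branches of tau'."""
    return frozenset((a, par(t, y)) for (b, y) in u if b == a)

x = act("a", act("b", NIL))
z = plus(act("a", NIL), act("b", NIL))
# The problematic clause  a.x || z = x ||^a z  holds of the solutions:
assert par(act("a", x), z) == par_a(x, "a", z)
# and || gives the expected synchronisation:
assert par(plus(act("a", NIL), act("b", NIL)), act("a", NIL)) == act("a", NIL)
```

The dilemma in the text is thus not about existence or uniqueness of ∣∣ and ∣∣a, which the assertions illustrate, but about the definitional format in which they arise.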
So we have a correct definition that is not in equational recursion format. We must therefore find either of the following:
  • A different correct definition in the equational recursion format

  • Another algebraic format into which the correct definition fits

When we come to the CSP parallel operator we do not even get as far as we did with synchronisation trees. The problem is like that with concealment: the distributive equation:
$$(F\square F^{\prime})\mid \mid G = (F\mid \mid G)\square (F^{\prime}\mid \mid G)$$
does not hold. One can show that there is no definition of ∣∣ analogous to the above one for synchronisation trees, i.e., there is no suitable choice of algebra, \({\square }_{\mathcal{A}}\) etc, and functions ∣∣a. The reason is that there is no binary operator \({\square }_{\mathcal{A}}\) on (finitary) failure sets such that, for all F, F′ and G, we have:
$$(F\square F^{\prime})\mid \mid G = (F\mid \mid G)\,{\square }_{\mathcal{A}}\,(F^{\prime}\mid \mid G)$$
For suppose, for the sake of contradiction, that there is such an operator. Then, fixing F and F′, choose G such that F∣∣G = F, F′∣∣G = F′ and (F□F′)∣∣G = F□F′. Then, substituting into the above equation, we obtain that \(F\,{\square }_{\mathcal{A}}\,F^{\prime} = F\square F^{\prime}\), and so the above equation yields distributivity, which, in fact, does not hold. As in the case of concealment, there may nonetheless be an indirect definition of ∣∣.

A similar difficulty obtains for the CSP interleaving operator. It too does not commute with □, and it too does not have any direct definition (the argument is like that for the concurrency operator but a little simpler, taking G = Stop). As in the case of the concurrency operator, there may be an indirect definition.

15.5 Another Choice of CSP Effect Constructors

Equations 15.3 and 15.4 do not immediately suggest a recursive definition of concealment. However, one can show that, for distinct actions ai (i = 1, …, n), the following equation holds between refusal sets:
$$\left ({\square }_{i=1}^{n}{a}_{ i} \rightarrow{F}_{i}\right )\setminus {a}_{j} = \left ({F}_{j}\setminus {a}_{j}\right ) \sqcap \Bigg(\left ({F}_{j}\setminus {a}_{j}\right )\square {\square }_{i\neq j}{a}_{i} \rightarrow \left ({F}_{i}\setminus {a}_{j}\right )\Bigg)$$
where 1 ≤ jn. Taken together with Eq. (15.4), this suggests a recursive definition in terms of deterministic external choice. We therefore now change our choice of constructors, replacing binary external choice, action prefix and deadlock by deterministic external choice.

So as our second signature for CSP we take a binary operation symbol ⊓ of internal choice and, for any deterministic action sequence \(\vec{a}\) (i.e., any sequence of actions ai (i = 1, …, n), with the ai all different and n ≥ 0), an n-ary operation symbol \({\square }_{\vec{a}}\) of deterministic external choice. We write \({\square }_{\vec{a}}({t}_{1},\ldots,{t}_{n})\) as \({\square }_{i=1}^{n}{a}_{i}{t}_{i}\), although it is more usual to use Hoare’s notation \(({a}_{1} \rightarrow{t}_{1}\mid \ldots \mid {a}_{n} \rightarrow{t}_{n})\); we also use Stop to abbreviate \({\square }_{\vec{a}}()\) when \(\vec{a}\) is empty.

We have the usual semilattice axioms for ⊓. Deterministic external choice is commutative, in the sense that:
$${\square }_{i}{a}_{i}{x}_{i} = {\square }_{i}{a}_{\pi (i)}{x}_{\pi (i)}$$
for any permutation π of {1, …, n}. Given this, we are justified in writing deterministic external choices over finite, possibly empty, sets of actions, \({\square }_{a\in I}a{t}_{a}\), assuming some standard ordering of pairs (a, ta) without repetitions.
For the next axiom it is convenient to write \(({a}_{1} \rightarrow {t}_{1})\;\square \;{\square }_{i=2}^{n}{a}_{i}{t}_{i}\) for \({\square }_{i=1}^{n}{a}_{i}{t}_{i}\) (for n ≥ 1). The axiom states that deterministic external choice distributes over internal choice:
$$({a}_{1} \rightarrow(x\sqcap y))\;\square \;{\square }_{i=2}^{n}{a}_{ i}{x}_{i} = \left (({a}_{1} \rightarrow x)\;\square \;{\square }_{i=2}^{n}{a}_{ i}{x}_{i}\right )\;\,\sqcap \;\,\left (({a}_{1} \rightarrow y)\;\square \;{\square }_{i=2}^{n}{a}_{ i}{x}_{i}\right )$$
This implies that deterministic external choice is monotone with respect to ⊑.
We can regard a possibly nondeterministic external choice \({\square }_{i}{a}_{i}{t}_{i}\), in which the \({a}_{i}\) need not all be different, as an abbreviation for a deterministic one, via:
$$\begin{array}{rcl}{ \square }_{i}{a}_{i}{t}_{i}& =& {\square }_{b\in \{{a}_{1},\ldots,{a}_{n}\}}b \rightarrow \left ({\sqcap }_{{a}_{i}=b}{t}_{i}\right )\end{array}$$
With that convention we may also write \(({a}_{1} \rightarrow {t}_{1})\;\square \;{\square }_{i=2}^{n}{a}_{i}{t}_{i}\) even when \({a}_{1}\) is some \({a}_{i}\), for i > 1. We can now write our final axiom:
$$\left ({\square }_{i}{a}_{i}{x}_{i}\right )\; \sqcap \;\left (({b}_{1} \rightarrow{y}_{1})\square {\square }_{j=2}^{n}{b}_{ j}{y}_{j}\right )\;\, \sqsubseteq \;\, ({b}_{1} \rightarrow{y}_{1})\;\square \;{\square }_{i}{a}_{i}{x}_{i}$$
Restricting the external choice \(({b}_{1} \rightarrow {y}_{1})\;\square \;{\square }_{j=2}^{n}{b}_{j}{y}_{j}\) to be deterministic gives an equivalent axiom, as does restricting \({\square }_{i}{a}_{i}{x}_{i}\) (in the presence of the others).
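As a concrete illustration of the convention for regarding a nondeterministic external choice as a deterministic one, the grouping of repeated actions can be computed directly. The following is a minimal Python sketch (not part of the formal development), assuming terms are opaque strings and an internal choice of a set of terms is represented as a frozenset, which is faithful since ⊓ is associative, commutative and idempotent:

```python
def to_deterministic(branches):
    """Turn a possibly nondeterministic external choice, given as a list of
    (action, term) pairs, into a deterministic one: each action occurs once,
    with the terms of repeated actions collected under internal choice."""
    grouped = {}
    for a, t in branches:
        grouped.setdefault(a, set()).add(t)
    # standard ordering of the pairs (a, t_a), without repetitions
    return sorted((a, frozenset(ts)) for a, ts in grouped.items())

# (a -> t1) [] (b -> t2) [] (a -> t3)  =  a -> (t1 |~| t3) [] b -> t2
print(to_deterministic([("a", "t1"), ("b", "t2"), ("a", "t3")]))
```

The sorted list of pairs plays the role of the "standard ordering without repetitions" assumed in the text.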
Let us call this equational theory CSP(∣). The finitary refusal sets form a CSP(∣)-algebra \({\mathcal{R}}_{\mathit{df }}\) with the evident definitions:
$$\begin{array}{lcl} F {\sqcap }_{{\mathcal{R}}_{\mathit{df }}}G & =&F \cup G \\ {\left ({\square }_{\vec{a}}\right )}_{{\mathcal{R}}_{\mathit{df }}}({F}_{1},\ldots,{F}_{n})& =&\{(\epsilon,W)\mid W \cap \{ {a}_{1},\ldots,{a}_{n}\} = \emptyset \} \\ & &\cup \:\{ ({a}_{i}w,W)\mid (w,W) \in{F}_{i}\} \end{array}$$
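These two clauses can be animated directly. The sketch below, in Python, fixes a two-element alphabet and represents a finitary refusal set as a set of (trace, refusal) pairs; the alphabet and this representation are assumptions of the sketch only:

```python
from itertools import combinations

A = {"a", "b"}  # a small fixed alphabet, assumed just for this sketch

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def int_choice(F, G):
    """F |~| G: internal choice is union of refusal sets."""
    return F | G

def det_choice(branches):
    """([]_a)(F_1, ..., F_n) for distinct actions: initially refuse any set W
    disjoint from the offered actions, then behave as the chosen branch."""
    offered = {a for a, _ in branches}
    initial = {("", W) for W in subsets(A) if not (W & offered)}
    after = {(a + w, W) for a, F in branches for (w, W) in F}
    return initial | after

Stop = det_choice([])          # empty choice: immediate deadlock
P = det_choice([("a", Stop)])  # a -> Stop
print(("", frozenset({"a"})) in P)   # a cannot be refused initially
print(("a", frozenset(A)) in P)      # after a, everything is refused
```

Note that Stop comes out as the set of all pairs (ε, W), exactly as in Section 15.3.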

Theorem 2.

The finitary refusal sets algebra\({\mathcal{R}}_{\mathit{df }}\)is complete for equations between closed CSP (∣) terms.


De Nicola’s normal form can be regarded as written in the signature of CSP(∣), and a straightforward induction proves that every CSP(∣) term can be reduced to such a normal form using the above axioms. But two such normal forms have the same denotation whether they are regarded as CSP ( □ ) or as CSP(∣) terms, and in the former case, by Lemma 4.8 of [10], they are identical.

Theorem 3.

The finitary refusal sets algebra\({\mathcal{R}}_{\mathit{df }}\)is the initial CSP (∣) algebra.


Following the proof of Lemma 2 we see that every finitary refusal set is definable by a closed CSP(∣) term. With that, initiality follows from the above completeness theorem, as in the proof of Theorem 1.

Turning to the deconstructors, relabelling again has a straightforward homomorphic definition: given a relabelling function \(f\colon A \rightarrow A\), \({h}_{Rl}\colon {T}_{\mathrm{CSP}(\mid )}(\emptyset ) \rightarrow {T}_{\mathrm{CSP}(\mid )}(\emptyset )\) is defined homomorphically by:
$${h}_{Rl}(F \sqcap G) = {h}_{Rl}(F) \sqcap{h}_{Rl}(G)$$
$${h}_{Rl}\left ({\square }_{i}{a}_{i}{F}_{i}\right ) = {\square }_{i}f({a}_{i}){h}_{Rl}({F}_{i})$$
As always one has to check that the implied algebra satisfies the equations, here those of CSP(∣).
There is also now a natural homomorphic definition of concealment, − ∖ a, but, perhaps surprisingly, one needs to assume that □ is available. For every \(a \in A\) one defines \({h}_{a}\colon {T}_{\mathrm{CSP}(\mid )}(\emptyset ) \rightarrow {T}_{\mathrm{CSP}(\mid )}(\emptyset )\) homomorphically by:
$${h}_{a}(F \sqcap G) = {h}_{a}(F) \sqcap{h}_{a}(G)$$
$${ h}_{a}\left ({\square }_{i=1}^{n}{a}_{ i}{F}_{i}\right ) = \left \{\begin{array}{ll} {h}_{a}({F}_{j}) \sqcap({h}_{a}({F}_{j})\square {\square }_{i\neq j}{a}_{i}{h}_{a}({F}_{i}))&(\mbox{ if }a = {a}_{j},\,j \in \{ 1,\ldots,n\}) \\ {\square }_{i=1}^{n}{a}_{i}{h}_{a}({F}_{i}) &(\mbox{ if }a\neq \mbox{ any}\;{a}_{i}) \end{array} \right.$$
Verifying that the implicit algebra satisfies the required equations is quite a bit of work. We record the result, but omit the calculations:
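To see the recursion of h<sub>a</sub> at work, here is a Python sketch on closed CSP(∣) terms represented as nested tuples; the encodings ("sqcap", t, u) and ("det", branches) for the constructors, and ("box", t, u) for the general external choice assumed available, are choices made only for this illustration:

```python
def conceal(a, t):
    """The homomorphism h_a: hide action a in the closed term t."""
    if t[0] == "sqcap":
        return ("sqcap", conceal(a, t[1]), conceal(a, t[2]))
    if t[0] == "box":
        return ("box", conceal(a, t[1]), conceal(a, t[2]))
    branches = t[1]                       # ("det", ((a1, t1), ..., (an, tn)))
    hidden = [u for (b, u) in branches if b == a]
    rest = tuple((b, conceal(a, u)) for (b, u) in branches if b != a)
    if not hidden:                        # a is not among the a_i
        return ("det", rest)
    hj = conceal(a, hidden[0])            # a = a_j: h_a(F_j) |~| (h_a(F_j) [] ...)
    return ("sqcap", hj, ("box", hj, ("det", rest)))

Stop = ("det", ())
t = ("det", (("a", Stop), ("b", Stop)))   # a -> Stop [] b -> Stop
print(conceal("a", t))                    # Stop |~| (Stop [] b -> Stop)
```

The appearance of "box" on the right-hand side is exactly the point noted above: □ must be available to express the result of hiding.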

Proposition 2.

For each a ∈ A, one can define a CSP (∣)-algebra Con on \({T}_{\mathrm{CSP}(\mid )}(\emptyset )\) by:

$$F {\sqcap }_{{\it { Con}}}G = F \sqcap G$$

\({({\square }_{\vec{a}})}_{\mathit{Con}}({F}_{1},\ldots,{F}_{n}) = \left \{\begin{array}{ll} {F}_{j} \sqcap({F}_{j}\;\square \;{\square }_{i\neq j}{a}_{i}{F}_{i})&(\mbox{ if }a = {a}_{j}) \\ {\square }_{i}{a}_{i}{F}_{i} &(\mbox{ if }a\neq \mbox{ any}\;{a}_{i}) \end{array} \right.\)

The operator □ is, of course, no longer available as a constructor. However, it can alternatively be treated as a binary deconstructor. While its treatment as such is no more successful than our treatment of parallel operators, it is also no less successful. We define it simultaneously with (n + 1)-ary functions \({\square }^{{a}_{1}\ldots {a}_{n}}\) on \({T}_{\mathrm{CSP}(\mid )}(\emptyset )\), for n ≥ 0, where the \({a}_{i}\) are all distinct. That we are defining infinitely many functions simultaneously arises from dealing with the infinitely many deterministic choice operators (there would be infinitely many even if we considered them as parameterised on the a’s). However, we anticipate that this will cause no real difficulty, given that we have overcome the difficulty of dealing with binary deconstructors.

Here are the required definitions:
$$\begin{array}{rcl} (F \sqcap F^{\prime})\square G& =& (F\square G) \sqcap(F^{\prime}\square G) \\ \left ({\square }_{i}{a}_{i}{F}_{i}\right )\square G& =& ({F}_{1},\ldots,{F}_{n}){\square }^{{a}_{1}\ldots {a}_{n} }G \\ ({F}_{1},\ldots,{F}_{n}){\square }^{{a}_{1}\ldots {a}_{n} }(G \sqcap G^{\prime})& =& \left (({F}_{1},\ldots,{F}_{n}){\square }^{{a}_{1}\ldots {a}_{n} }G\right ) \sqcap \left (({F}_{1},\ldots,{F}_{n}){\square }^{{a}_{1}\ldots {a}_{n} }G^{\prime}\right )\\ \left ({F}_{1},\ldots,{F}_{n}\right ){\square }^{{a}_{1}\ldots {a}_{n} }\left ({\square }_{j}{b}_{j}{G}_{j}\right )& =& \left ({a}_{1} \rightarrow{F}_{1}\right )\square \left (\ldots \left (\left ({a}_{n} \rightarrow{F}_{n}\right )\square {\square }_{j}{b}_{j}{G}_{j}\right )\ldots \right ) \end{array}$$
where, in the last equation, the notational convention \(({a}_{1} \rightarrow {t}_{1})\;\square \;{\square }_{i=2}^{n}{a}_{i}{t}_{i}\) is used n times. It is clear that □ together with the functions
$${\square }^{{a}_{1}\ldots {a}_{n} }\! :\! {T}_{\mathrm{CSP}(\mid )}{(\emptyset )}^{n+1} \rightarrow{T}_{\mathrm{ CSP}(\mid )}(\emptyset )$$
defined by:
$${\square }^{{a}_{1}\ldots {a}_{n} }({F}_{1},\ldots,{F}_{n},G) = \left ({\square }_{i}{a}_{i}{F}_{i}\right )\square G$$
satisfy the equations, and, using the fact that all finitary refusal sets are definable by normal forms, one sees that they are the unique such functions.
We can treat the CSP parallel operator ∣∣ in a similar vein, following the pattern given above for parallel merge operators in the case of synchronisation trees. We define it simultaneously with (n + 1)-ary functions \(\mid {\mid }^{{a}_{1}\ldots {a}_{n}}\) on \({T}_{\mathrm{CSP}(\mid )}(\emptyset )\), for n ≥ 0, where the \({a}_{i}\) are all distinct:
$$\begin{array}{rcl} (F \sqcap F^{\prime})\mid \mid G& =& (F\mid \mid G) \sqcap(F^{\prime}\mid \mid G) \\ \left ({\square }_{i}{a}_{i}{F}_{i}\right )\mid \mid G& =& ({F}_{1},\ldots,{F}_{n})\mid {\mid }^{{a}_{1}\ldots {a}_{n} }G \\ \left ({F}_{1},\ldots,{F}_{n}\right )\mid {\mid }^{{a}_{1}\ldots {a}_{n} }(G \sqcap G^{\prime})& =& \left (({F}_{1},\ldots,{F}_{n})\mid {\mid }^{{a}_{1}\ldots {a}_{n} }G\right ) \sqcap \left (({F}_{1},\ldots,{F}_{n})\mid {\mid }^{{a}_{1}\ldots {a}_{n} }G^{\prime}\right )\\ \left ({F}_{1},\ldots,{F}_{n}\right )\mid {\mid }^{{a}_{1}\ldots {a}_{n} }\left ({\square }_{j}{b}_{j}{G}_{j}\right )& =& {\square }_{{a}_{i}={b}_{j}}{a}_{i}({F}_{i}\mid \mid {G}_{j}) \end{array}$$
Much as before, ∣∣ together with the functions \(\mid {\mid }^{{a}_{1}\ldots {a}_{n}}\! :\! {T}_{\mathrm{CSP}(\mid )}{(\emptyset )}^{n+1} \rightarrow{T}_{\mathrm{CSP}(\mid )}(\emptyset )\) defined by:
$$\mid {\mid }^{{a}_{1}\ldots {a}_{n} }\left ({F}_{1},\ldots,{F}_{n},G\right ) = \left ({\square }_{i}{a}_{i}{F}_{i}\right )\mid \mid G$$
are the unique functions satisfying the equations.
Finally, we consider the CSP interleaving operator ∣∣∣. We define this by following an idea, exemplified in the ACP literature [4, 5], of splitting an associative operation into several parts. Here we split ∣∣∣ into a left interleaving operator ∣∣∣l and a right interleaving operator ∣∣∣r so that:
$$F\mid \mid \mid G = (F\mid \mid {\mid }^{l}G)\square (F\mid \mid {\mid }^{r}G)$$
In ACP the parallel operator is split into three parts: a left merge, a right merge (defined in terms of the left merge), and a communication merge; in a subtheory, PA, there is no communication, and the parallel operator, now an interleaving one, is split into left and right parts [5]. The idea of splitting an associative operation into several operations can be found in a much wider context [11] where the split into two or three parts is axiomatised by the respective notions of dendriform dialgebra and trialgebra.
Our left and right interleaving are defined by the following “binary deconstructor” equations:
$$\begin{array}{rcl} (F \sqcap F^{\prime})\mid \mid {\mid }^{l}G& =& (F\mid \mid {\mid }^{l}G) \sqcap(F^{\prime}\mid \mid {\mid }^{l}G) \\ \left ({\square }_{i=1}^{n}{a}_{ i}{F}_{i}\right )\mid \mid {\mid }^{l}G& =& {\square }_{ i}{a}_{i}(({F}_{i}\mid \mid {\mid }^{l}G)\square ({F}_{ i}\mid \mid {\mid }^{r}G)) \\ G\mid \mid {\mid }^{r}(F \sqcap F^{\prime})& =& (G\mid \mid {\mid }^{r}F) \sqcap(G\mid \mid {\mid }^{r}F^{\prime}) \\ G\mid \mid {\mid }^{r}\left ({\square }_{ i=1}^{n}{a}_{ i}{F}_{i}\right )& =& {\square }_{i}{a}_{i}((G\mid \mid {\mid }^{l}{F}_{ i})\square (G\mid \mid {\mid }^{r}{F}_{ i}))\end{array}$$
As may be expected, these equations also have unique solutions, now given by:
$$\begin{array}{lcl} F\mid \mid {\mid }^{l}G & =&\{(\epsilon,W)\mid (\epsilon,W) \in F\} \cup \{ (w,W)\mid (u,W) \in F,\ (v,W) \in G,\ w \in u\!{\mid }^{l}\!v\} \\ F\mid \mid {\mid }^{r}G& =&\{(\epsilon,W)\mid (\epsilon,W) \in G\} \cup \{ (w,W)\mid (u,W) \in F,\ (v,W) \in G,\ w \in u\!{\mid }^{r}\!v\}\end{array}$$
where \(u{\mid }^{l}v\) is the set of interleavings of u and v which begin with a letter of u, and \(u{\mid }^{r}v\) is defined analogously. It is interesting to note that:
$$F\mid \mid {\mid }^{l}(G \sqcap G^{\prime}) = (F\mid \mid {\mid }^{l}G) \sqcap(F\mid \mid {\mid }^{l}G^{\prime})$$
and similarly for ∣∣∣r.
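The trace sets u∣ˡv and the closed form for ∣∣∣ˡ are easy to compute for finite data. Here is a Python sketch, with traces as strings and failure pairs as (trace, refusal) tuples; these representation choices are made only for this illustration:

```python
def interleavings(u, v):
    """All interleavings of the traces u and v."""
    if not u or not v:
        return {u + v}
    return ({u[0] + w for w in interleavings(u[1:], v)} |
            {v[0] + w for w in interleavings(u, v[1:])})

def left_il(u, v):
    """u |^l v: the interleavings of u and v that begin with a letter of u."""
    return {u[0] + w for w in interleavings(u[1:], v)} if u else set()

def left_merge(F, G):
    """F |||^l G, following the closed form above: keep F's empty-trace
    failures, and combine failures (u, W) of F and (v, W) of G along u |^l v."""
    return ({(w, W) for (w, W) in F if w == ""} |
            {(w, W) for (u, W) in F for (v, V) in G if V == W
             for w in left_il(u, v)})

print(sorted(left_il("ab", "c")))   # the interleavings starting with 'a'
```

Splitting `interleavings` by which argument contributes the first letter mirrors the split of ∣∣∣ into ∣∣∣ˡ and ∣∣∣ʳ.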

15.6 Adding divergence

The treatment of CSP presented thus far dealt with finite divergence-free processes only. There are several ways to extend the refusal sets model of Section 15.3 to infinite processes with divergence. The best-known model is the failures/divergences model of [13], further elaborated in [29]. A characteristic property of this model is that divergence, i.e., an infinite sequence of internal actions, is modelled as Chaos, a process that satisfies the equation:
$$\begin{array}{rcl} {\it { Chaos}}\square x& =& {\it { Chaos}} \sqcap x = {\it { Chaos}}\end{array}$$
So after Chaos no further process activity is discernible.

An alternative extension is the stable failures model proposed in [6], and also elaborated in [29]. This model equates processes that allow the same observations, where actions and deadlock are considered observable, but divergence does not give rise to any observations. A failure pair (w, W), now allowing W to be infinite, records an observation in which w represents a sequence of actions being observed, and W represents the observation of deadlock under the assumption that the environment in which the observed process runs allows only the (inter)actions in the set W. Such an observation can be made if, after engaging in the sequence of visible actions w, the observed process reaches a state in which no further internal actions are possible, nor any actions from the set W. Besides failure pairs, traces are also observable, and thus the observable behaviour of a process is given by a pair (T, F), where T is a set of traces and F is a set of failure pairs. Unlike in the model \({\mathcal{R}\!}_{f}\) of Section 15.3, the traces are not determined by the failure pairs. In fact, in a process that can diverge in every state, the set of failure pairs is empty, yet the set of traces conveys important information.

In the remainder of this paper we add a constant Ω to the signature of CSP that is a zero for the semilattice generated by ⊓. This will greatly facilitate the forthcoming development. Intuitively, one may think of Ω as divergence in the stable failures model.

With respect to the equational theory CSP ( □ ) of Section 15.3 we thus add the constant Ω and the single axiom:
$$\begin{array}{rcl} x \sqcap\Omega= x& &\end{array}$$
thereby obtaining the theory CSP ( □, Ω). We note two useful derived equations:
$$\begin{array}{rcl} x \sqcap(\Omega \square y)& =& x \sqcap(x\square y) \\ (\Omega \square x) \sqcap(\Omega \square y)& =& (\Omega \square x)\square (\Omega \square y)\end{array}$$
Semantically, a process is now given by a pair (T, F), where T is a set of traces and F is a set of failure pairs that satisfy the following conditions:
  1. ε ∈ T

  2. wa ∈ T ⇒ w ∈ T

  3. (w, W) ∈ F ⇒ w ∈ T

  4. (w, W) ∈ F ∧ V ⊆ W ⇒ (w, V) ∈ F

  5. (w, W) ∈ F ∧ ∀a ∈ V. wa ∉ T ⇒ (w, W ∪ V) ∈ F (where V ⊆ A)
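Over a finite alphabet these conditions are decidable, and checking them mechanically is a useful sanity check on candidate processes. A Python sketch follows; conditions 4 and 5 are checked via single-element removals and the largest blocked set, which suffices in the presence of downward closure:

```python
def is_process(T, F, A):
    """Check conditions 1-5 for a pair (T, F): T a set of traces (strings),
    F a set of failure pairs (trace, frozenset), over finite alphabet A."""
    if "" not in T:                                   # 1: the empty trace
        return False
    if any(w[:-1] not in T for w in T if w):          # 2: prefix closure
        return False
    if any(w not in T for (w, _) in F):               # 3: failures lie on traces
        return False
    for (w, W) in F:
        if any((w, W - {a}) not in F for a in W):     # 4: subsets of refusals
            return False
        blocked = frozenset(a for a in A if w + a not in T)
        if (w, W | blocked) not in F:                 # 5: add all blocked actions
            return False
    return True

A = {"a"}
Stop = ({""}, {("", frozenset()), ("", frozenset(A))})
print(is_process(*Stop, A))
```

Note that the pair ({ε}, ∅), with no failure pairs at all, also passes the check; this is the process that diverges immediately, anticipating the role of Ω below.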

The two components of such a pair P are denoted \({T}_{P}\) and \({F}_{P}\), respectively, and for \(w \in {T}_{P}\) we define \(\mathrm{fut}_{P}(w) := \{ a \in A\mid wa \in {T}_{P}\}\). We can define the CSP operators on processes by setting
$$P \mathrm{op} Q = (P \mathrm{{op}}_{\mathcal{T}} Q,P \mathrm{{op}}_{\mathcal{R}} Q)$$
where \(\mathrm{{op}}_{\mathcal{T}}\) is given by:
$$\begin{array}{lcl} {\mathtt{Stop}}_{\mathcal{T}} & =&\{\epsilon \} \\ a {\rightarrow }_{\mathcal{T}}P & =&\{\epsilon \} \cup \{ aw\mid w \in{T}_{P}\} \\ P {\sqcap }_{\mathcal{T}}Q & =&{T}_{P} \cup{T}_{Q} \\ P{\square }_{\mathcal{T}}Q & =&{T}_{P} \cup{T}_{Q} \\ {f}_{\mathcal{T}}(P) & =&\{f(w)\mid w \in{T}_{P}\} \\ P{\setminus }_{\mathcal{T}}a & =&\{w\setminus a\mid w \in{T}_{P}\} \\ P\mid {\mid }_{\mathcal{T}}Q & =&\{w\mid w \in{T}_{P},\ w \in{T}_{Q}\} \\ P\mid \mid {\mid }_{\mathcal{T}}Q & =&\{w\mid u \in{T}_{P},\ \nu \in{T}_{Q},\ w \in u\!\mid \!\nu\} \end{array}$$
and \(\mathrm{{op}}_{\mathcal{R}}\) is given as \(\mathrm{{op}}_{{\mathcal{R}\!}_{f}}\) was in Section 15.3, but without the restriction to finite sets W in defining Stop. For the new process Ω we set
$${\Omega }_{\mathcal{T}} =\{ \epsilon \}\qquad \mbox{ and}\qquad {\Omega }_{\mathcal{R}} = \emptyset $$
This also makes the collection of processes into a CSP ( □, Ω)-algebra \(\mathcal{F}\).
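A few of the trace-part clauses can be sketched concretely; the following Python fragment, with traces as strings, is illustrative only:

```python
def prefix_T(a, TP):
    """a ->_T P : the empty trace, plus a prepended to each trace of P."""
    return {""} | {a + w for w in TP}

def hide_T(TP, a):
    """P \\_T a : erase every occurrence of a."""
    return {w.replace(a, "") for w in TP}

def par_T(TP, TQ):
    """P ||_T Q : full synchronisation, so the common traces."""
    return TP & TQ

T1 = prefix_T("a", prefix_T("b", {""}))   # traces of a -> b -> Stop
print(sorted(T1))                         # ['', 'a', 'ab']
print(sorted(hide_T(T1, "a")))            # ['', 'b']
```

Each clause is a plain set transformation, which is why the trace component can carry information (e.g. under divergence) even when the failure component is empty.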

A process P is called finitary if TP is finite. The finitary processes evidently form a subalgebra of \(\mathcal{F}\); we call it \({\mathcal{F}\!\!\!}_{f}\).

Lemma 3.

Let P be a finitary process. Then, for every \(w \in {T}_{P}\) there is an n ≥ 0 and \({V}_{1},\ldots,{V}_{n} \subseteq \mathrm{fut}_{P}(w)\) such that \((w,W) \in {F}_{P}\) iff \(W \cap {V}_{i} = \emptyset \) for some \(i \in \{ 1,\ldots,n\}\).


Closure conditions 4 and 5 above imply that (w, W) ∈ F<sub>P</sub> if, and only if, (w, W ∩ fut<sub>P</sub>(w)) ∈ F<sub>P</sub>. Thus we need only be concerned with pairs (w, W) such that W ⊆ fut<sub>P</sub>(w). Now, as fut<sub>P</sub>(w) is finite, there are only finitely many such relevant pairs (w, W) ∈ F<sub>P</sub>, and for each of them we can take V<sub>i</sub> to be fut<sub>P</sub>(w) ∖ W, obtaining finitely many such sets.

Note that it may happen that n = 0, in contrast with the case of Lemma 1.
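For a finitary process the sets V<sub>i</sub> of Lemma 3 can be computed outright by taking V<sub>i</sub> = fut<sub>P</sub>(w) ∖ W for each relevant failure pair, exactly as in the proof. A Python sketch, applied to the example a → Stop over the alphabet {a, b} (the example data are our own):

```python
def lemma3_sets(T, F, A, w):
    """The V_1, ..., V_n of Lemma 3 at trace w:
    (w, W) in F  iff  W meets no V_i."""
    fut = frozenset(a for a in A if w + a in T)
    # only pairs with W inside fut_P(w) are relevant (conditions 4 and 5)
    return {fut - W for (u, W) in F if u == w and W <= fut}

# the process a -> Stop over alphabet {a, b}
A = {"a", "b"}
T = {"", "a"}
F = {("", frozenset()), ("", frozenset({"b"})),
     ("a", frozenset()), ("a", frozenset({"a"})),
     ("a", frozenset({"b"})), ("a", frozenset({"a", "b"}))}
print(lemma3_sets(T, F, A, ""))   # a single set V_1 = {a}
```

Indeed (ε, W) ∈ F exactly when W misses a, i.e., when W ∩ V₁ = ∅; and at trace a the single set is ∅, so every W qualifies.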

Lemma 4.

All finitary processes are definable by closed CSP (□,Ω) terms.


Let P be a finitary process. We proceed by induction on the length of the longest trace in \({T}_{P}\). By the previous lemma there are sets \({V}_{1},\ldots,{V}_{n}\), for some n ≥ 0, such that \((\epsilon,W) \in {F}_{P}\) iff \(W \cap {V}_{i} = \emptyset \) for some \(i \in \{ 1,\ldots,n\}\). Define \({T}_{a}\) and \({F}_{a}\), for \(a \in {T}_{P}\), by:
$${T}_{a} =\{ w\mid aw \in{T}_{P}\}\qquad {F}_{a} =\{ (w,W)\mid (aw,W) \in{F}_{P}\}$$
Then it is not hard to see that each Pa : = (Ta, Fa) is a finitary process, and that
$$P = \left ({\sqcap }_{i}{\square }_{a\in {V }_{i}}a \rightarrow{P}_{a}\right )\ \sqcap \ \left (\Omega \square {\square }_{a\in {T}_{P}}a \rightarrow{P}_{a}\right )$$
As the longest trace in Ta is strictly shorter than the longest one in TP, the proof concludes, employing the induction hypothesis.
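The proof is effective: from a finitary process one can rebuild a defining term. The following Python sketch does so, with terms as nested tuples and ("boxOmega", d) an encoding, assumed only here, of Ω □ d for a deterministic choice d:

```python
def define_term(T, F, A):
    """Reconstruct a closed CSP([], Omega) term denoting the finitary
    process (T, F), following the proof of Lemma 4."""
    fut = sorted(a for a in A if a in T)   # the length-one traces
    sub = {a: define_term({w[1:] for w in T if w and w[0] == a},
                          {(w[1:], W) for (w, W) in F if w and w[0] == a}, A)
           for a in fut}
    det = lambda acts: ("det", tuple((a, sub[a]) for a in sorted(acts)))
    Vs = sorted({frozenset(fut) - W for (u, W) in F
                 if u == "" and W <= frozenset(fut)}, key=sorted)
    omega = ("boxOmega", det(fut))         # Omega [] []_{a in fut} a -> P_a
    if not Vs:                             # no stable failures at the root
        return omega
    left = det(Vs[0])
    for V in Vs[1:]:                       # internal choice over the V_i
        left = ("sqcap", left, det(V))
    return ("sqcap", left, omega)

Stop = ({""}, {("", frozenset()), ("", frozenset({"a"}))})
print(define_term(*Stop, {"a"}))           # Stop |~| (Omega [] Stop)
```

The recursion terminates because the longest trace strictly shrinks at each step, mirroring the induction in the proof.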

Proposition 3.

CSP (□,Ω) is ground equationally complete with respect to both\(\mathcal{F}\)and\({\mathcal{F}\!\!\!}_{f}\).


This time we recursively define a normal form as a CSP ( □, Ω)-term of the form
$${\sqcap }_{L\in \mathcal{L}}{\square }_{a\in L}a \rightarrow{t}_{a}\qquad \mbox{ or}\qquad \Omega \square {\square }_{a\in K}a \rightarrow{t}_{a}$$
where \(\mathcal{L}\) is a finite non-empty saturated collection of finite sets of actions, K is a finite set of actions, and each term t<sub>a</sub> is in normal form. Every term is provably equal in CSP ( □, Ω) to a term in normal form; the proof proceeds as for Proposition 1, but now also using the derived equations (15.14). Next, by Lemma 6 below, if two normal forms have the same denotation in \(\mathcal{F}\) then they are identical. So the result follows for \(\mathcal{F}\), and then for \({\mathcal{F}\!\!\!}_{f}\) too, as all closed terms denote finitary processes.

Theorem 4.

The algebra\({\mathcal{F}\!\!\!}_{f}\)of finitary processes is the initial CSP (□,Ω) algebra.


Let the initial such algebra be I. There is a unique homomorphism \(h\! :\!\mathrm{ I} \rightarrow {\mathcal{F}\!\!\!}_{f}\). By Lemma 4, h is a surjection. By the previous proposition, \({\mathcal{F}\!\!\!}_{f}\) is complete for equations between closed terms, and so h is an injection. Hence h is an isomorphism, completing the proof.

As in Section 15.5, in order to deal with deconstructors, particularly hiding, we replace external choice by deterministic external choice. The availability of Ω permits useful additional such operators. The equational theory CSP( |, Ω) has as signature the binary operation symbol ⊓ and, for any deterministic action sequence \(\vec{a}\), the n-ary operation symbols \({\square }_{\vec{a}}\) (as in Section 15.5), as well as the new n-ary operation symbols \({\square }_{\vec{a}}^{\Omega }\), for n ≥ 0, which denote a deterministic external choice with Ω as one of the summands. We adopt conventions for \({\square }_{\vec{a}}^{\Omega }\) analogous to those previously introduced for \({\square }_{\vec{a}}\). We write \({\square }_{\vec{a}}^{\Omega }({t}_{1},\ldots,{t}_{n})\) as \(\Omega \square {\square }_{i=1}^{n}{a}_{i}{t}_{i}\). We also write \(\Omega \square ({c}_{1} \rightarrow {t}_{1})\square {\square }_{j=2}^{n}{c}_{j}{t}_{j}\) for \(\Omega \square {\square }_{j=1}^{n}{c}_{j}{t}_{j}\), so that the \({c}_{j}\) (\(j = 1,\ldots,n\)) must all be distinct.

The first three groups of axioms of CSP( |, Ω) are:
  • ⊓, Ω is a semilattice with a zero (here Ω is the 0-ary case of \({\square }_{\vec{a}}^{\Omega }\))

  • Both deterministic external choice operators \({\square }_{\vec{a}}\) and \({\square }_{\vec{a}}^{\Omega }\) are commutative, as explained in Section 15.5

  • Both deterministic external choice operators distribute over internal choice, as explained in Section 15.5

Given commutativity, we are, as before, justified in writing deterministic external choices \({\square }_{a\in I}a \rightarrow {t}_{a}\) or \(\Omega \square {\square }_{a\in I}a \rightarrow {t}_{a}\) over finite, possibly empty, sets of actions I, assuming some standard ordering of pairs (a, t<sub>a</sub>) without repetitions. Next, using the convention analogous to (15.6), we can then also understand \(\Omega \square {\square }_{j=1}^{n}{c}_{j}{t}_{j}\), and so also \(\Omega \square ({c}_{1} \rightarrow {t}_{1})\square {\square }_{j=2}^{n}{c}_{j}{t}_{j}\), even when the \({c}_{j}\) are not all distinct. With these conventions established, we can now state the final group of axioms. These are all variants of Axiom (15.7) of Section 15.5, allowing each of the two deterministic external choices to have an Ω-summand:
$$\left (\Omega \square {\square }_{i}{a}_{i}{x}_{i}\right )\; \sqcap \;\left (\Omega \square ({b}_{1} \rightarrow{y}_{1})\square {\square }_{j=2}^{n}{b}_{ j}{y}_{j}\right )\;\, \sqsubseteq \;\, \Omega \square ({b}_{1} \rightarrow{y}_{1})\square {\square }_{i}{a}_{i}{x}_{i}$$
$$\left (\Omega \square {\square }_{i}{a}_{i}{x}_{i}\right )\; \sqcap \;\left (({b}_{1} \rightarrow{y}_{1})\square {\square }_{j=2}^{n}{b}_{ j}{y}_{j}\right )\;\, \sqsubseteq \;\, \Omega \square ({b}_{1} \rightarrow{y}_{1})\square {\square }_{i}{a}_{i}{x}_{i}$$
$$\left ({\square }_{i}{a}_{i}{x}_{i}\right )\; \sqcap \;\left (\Omega \square ({b}_{1} \rightarrow{y}_{1})\square {\square }_{j=2}^{n}{b}_{ j}{y}_{j}\right )\;\, \sqsubseteq \;\, ({b}_{1} \rightarrow{y}_{1})\square {\square }_{i}{a}_{i}{x}_{i}$$
$$\left ({\square }_{i}{a}_{i}{x}_{i}\right )\; \sqcap \;\left (({b}_{1} \rightarrow{y}_{1})\square {\square }_{j=2}^{n}{b}_{ j}{y}_{j}\right )\;\, \sqsubseteq \;\, ({b}_{1} \rightarrow{y}_{1})\square {\square }_{i}{a}_{i}{x}_{i}$$
As in the case of Axiom (15.7), restricting any of these choices to be deterministic results in an axiom of equivalent power. We note two useful derived equations:
$$\begin{array}{rcl} {\square }_{i}{a}_{i}{x}_{i} \sqcap \left (\Omega \square {\square }_{j}{b}_{j}{y}_{j}\right )& =& {\square }_{i}{a}_{i}{x}_{i} \sqcap \left ({\square }_{i}{a}_{i}{x}_{i}\square {\square }_{j}{b}_{j}{y}_{j}\right ) \\ \left (\Omega \square {\square }_{i}{a}_{i}{x}_{i}\right ) \sqcap \left (\Omega \square {\square }_{j}{b}_{j}{y}_{j}\right )& =& \left (\Omega \square {\square }_{i}{a}_{i}{x}_{i}\right )\square {\square }_{j}{b}_{j}{y}_{j}\end{array}$$
where two further notational conventions are used: \(({\square }_{i=1}^{m}{a}_{i}{t}_{i})\square ({\square }_{j=1}^{n}{b}_{j}{t^{\prime}}_{j})\) stands for \({\square }_{k=1}^{m+n}{c}_{k}{t^{\prime\prime}}_{k}\), where \({c}_{k} = {a}_{k}\) and \({t^{\prime\prime}}_{k} = {t}_{k}\), for \(k = 1,\ldots,m\), and \({c}_{k} = {b}_{k-m}\) and \({t^{\prime\prime}}_{k} = {t^{\prime}}_{k-m}\), for \(k = m+1,\ldots,m+n\); and \((\Omega \square {\square }_{i=1}^{m}{a}_{i}{t}_{i})\square ({\square }_{j=1}^{n}{b}_{j}{t^{\prime}}_{j})\) is understood analogously. In fact, the first three axioms of (15.18) are also derivable from (15.19), in the presence of the other axioms, and thus may be replaced by (15.19).
The collection of processes is turned into a CSP( |, Ω)-algebra \({\mathcal{F}\!\!}_{d}\) as before, writing:
$$P \mathrm{{op}}_{{\mathcal{F}\!\!}_{d}} Q = (P \mathrm{{op}}_{{\mathcal{T}}_{d}} Q,P \mathrm{{op}}_{{\mathcal{R}}_{d}} Q)$$
and defining \(\mathrm{{op}}_{{\mathcal{T}}_{d}}\) and \(\mathrm{{op}}_{{\mathcal{R}}_{d}}\) in the evident way:
$$\begin{array}{lcl} P {\sqcap }_{{\mathcal{T}}_{d}}Q & =&{T}_{P} \cup{T}_{Q} \\ {\left ({\square }_{\vec{a}}\right )}_{{\mathcal{T}}_{d}}\left ({P}_{1},\ldots,{P}_{n}\right ) & =&\{\epsilon \} \cup \{ {a}_{i}w\mid w \in{T}_{{P}_{i}}\} \\ {\left ({\square }_{\vec{a}}^{\Omega }\right )}_{{\mathcal{T}}_{d}}\left ({P}_{1},\ldots,{P}_{n}\right ) & =&\{\epsilon \} \cup \{ {a}_{i}w\mid w \in{T}_{{P}_{i}}\} \\ {\left ({\square }_{\vec{a}}^{\Omega }\right )}_{{\mathcal{R}}_{d}}\left ({P}_{1},\ldots,{P}_{n}\right )& =&\{\left ({a}_{i}w,W\right )\mid (w,W) \in{F}_{{P}_{i}}\} \end{array}$$
with \({\sqcap }_{{\mathcal{R}}_{d}}\) and \({({\square }_{\vec{a}})}_{{\mathcal{R}}_{d}}\) given just as in Section 15.5. Exactly as in Section 15.5, but now using the derived equations (15.19), we obtain:

Theorem 5.

The algebra\({\mathcal{F}\!\!}_{d}\)is complete for equations between closed CSP (|,Ω) terms.

Theorem 6.

The finitary subalgebra\({\mathcal{F}\!\!}_{\mathit{df }}\)of\({\mathcal{F}\!\!}_{d}\)is the initial CSP (|,Ω) algebra.

Turning to the deconstructors, relabelling and concealment can again be treated homomorphically. For relabelling by f one simply adds the equation:
$${h}_{Rl}\left (\Omega \square {\square }_{i}{a}_{i}{F}_{i}\right ) = \Omega \square {\square }_{i}f({a}_{i}){h}_{Rl}({F}_{i})$$
to the treatment in Section 15.5, and checks that the implied algebra satisfies the equations. Pleasingly, the treatment of concealment can be simplified in such a way that the deconstructor □ is no longer needed. For every \(a \in A\) one defines \({h}_{a}\colon {T}_{\mathrm{CSP}(\mid,\Omega )}(\emptyset ) \rightarrow {T}_{\mathrm{CSP}(\mid,\Omega )}(\emptyset )\) homomorphically by:
$${h}_{a}(P \sqcap Q) = {h}_{a}(P) \sqcap{h}_{a}(Q)$$
$${ h}_{a}\left ({\square }_{i=1}^{n}{a}_{ i}{P}_{i}\right ) = \left \{\begin{array}{ll} {h}_{a}({P}_{j}) \sqcap(\Omega \square {\square }_{i\neq j}{a}_{i}{h}_{a}({P}_{i}))&(\mbox{ if }a = {a}_{j},\,j \in \{ 1,\ldots,n\}) \\ {\square }_{i=1}^{n}{a}_{i}{h}_{a}({P}_{i}) &(\mbox{ if }a\neq \mbox{ any}\;{a}_{i}) \end{array} \right.$$
$${ h}_{a}\left (\Omega \square {\square }_{i=1}^{n}{a}_{ i}{P}_{i}\right ) = \left \{\begin{array}{ll} {h}_{a}({P}_{j}) \sqcap(\Omega \square {\square }_{i\neq j}{a}_{i}{h}_{a}({P}_{i}))&(\mbox{ if }a = {a}_{j},\,j \in \{ 1,\ldots,n\}) \\ \Omega \square {\square }_{i=1}^{n}{a}_{i}{h}_{a}({P}_{i}) &(\mbox{ if }a\neq \mbox{ any}\;{a}_{i}) \end{array} \right.$$
Note the use of the new form of deterministic choice here. One has again to verify that the implicit algebra satisfies the required equations. The treatment of the binary deconstructors □, ∣∣ and ∣∣∣ is also a straightforward adaptation of the treatment in Section 15.5. For □ one adds a further auxiliary operator \({\square }^{\Omega,{a}_{1}\ldots {a}_{n}}\) and the equations:
$$\begin{array}{rcl} (\Omega \square {\square }_{i}{a}_{i}{P}_{i})\square Q& =& ({P}_{1},\ldots,{P}_{n}){\square }^{\Omega,{a}_{1}\ldots {a}_{n} }Q \\ ({P}_{1},\ldots,{P}_{n}){\square }^{\Omega,{a}_{1}\ldots {a}_{n} }(Q \sqcap Q^{\prime})& =& \begin{array}{@{}l@{}} \left (\left ({P}_{1},\ldots,{P}_{n}\right ){\square }^{\Omega,{a}_{1}\ldots {a}_{n}}Q\right ) \\ \sqcap \mbox{ }\left (\left ({P}_{1},\ldots,{P}_{n}\right ){\square }^{\Omega,{a}_{1}\ldots {a}_{n}}Q^{\prime}\right )\end{array} \\ \left ({P}_{1},\ldots,{P}_{n}\right ){\square }^{\Omega,{a}_{1}\ldots {a}_{n} }\left ({\square }_{j}{b}_{j}{Q}_{j}\right )& =& \left (\Omega \square {\square }_{i}{a}_{i}{P}_{i}\right )\square {\square }_{j}{b}_{j}{Q}_{j} \\ ({P}_{1},\ldots,{P}_{n}){\square }^{\Omega,{a}_{1}\ldots {a}_{n} }\left (\Omega \square {\square }_{j}{b}_{j}{Q}_{j}\right )& =& \left (\Omega \square {\square }_{i}{a}_{i}{P}_{i}\right )\square {\square }_{j}{b}_{j}{Q}_{j} \\ \left ({P}_{1},\ldots,{P}_{n}\right ){\square }^{{a}_{1}\ldots {a}_{n} }\left (\Omega \square {\square }_{j}{b}_{j}{Q}_{j}\right )& =& \left (\Omega \square {\square }_{i}{a}_{i}{P}_{i}\right )\square {\square }_{j}{b}_{j}{Q}_{j} \\ \end{array}$$
For ∣∣ one adds the auxiliary operator \(\mid {\mid }^{\Omega,{a}_{1}\ldots {a}_{n}}\) and the equations:
$$\begin{array}{rcl} \left (\Omega \square {\square }_{i}{a}_{i}{P}_{i}\right )\mid \mid Q& =& \left ({P}_{1},\ldots,{P}_{n}\right )\mid {\mid }^{\Omega,{a}_{1}\ldots {a}_{n} }Q \\ \left ({P}_{1},\ldots,{P}_{n}\right )\mid {\mid }^{\Omega,{a}_{1}\ldots {a}_{n} }\left (Q \sqcap Q^{\prime}\right )& =& \left (\left ({P}_{1},\ldots,{P}_{n}\right )\mid {\mid }^{\Omega,{a}_{1}\ldots {a}_{n}}Q\right ) \sqcap \left (\left ({P}_{1},\ldots,{P}_{n}\right )\mid {\mid }^{\Omega,{a}_{1}\ldots {a}_{n}}Q^{\prime}\right ) \\ \left ({P}_{1},\ldots,{P}_{n}\right )\mid {\mid }^{\Omega,{a}_{1}\ldots {a}_{n} }\left ({\square }_{j}{b}_{j}{Q}_{j}\right )& =& \Omega \square {\square }_{{a}_{i}={b}_{j}}{a}_{i}\left ({P}_{i}\mid \mid {Q}_{j}\right ) \\ \left ({P}_{1},\ldots,{P}_{n}\right )\mid {\mid }^{\Omega,{a}_{1}\ldots {a}_{n} }\left (\Omega \square {\square }_{j}{b}_{j}{Q}_{j}\right )& =& \Omega \square {\square }_{{a}_{i}={b}_{j}}{a}_{i}\left ({P}_{i}\mid \mid {Q}_{j}\right ) \\ \left ({P}_{1},\ldots,{P}_{n}\right )\mid {\mid }^{{a}_{1}\ldots {a}_{n} }\left (\Omega \square {\square }_{j}{b}_{j}{Q}_{j}\right )& =& \Omega \square {\square }_{{a}_{i}={b}_{j}}{a}_{i}\left ({P}_{i}\mid \mid {Q}_{j}\right ) \\ \end{array}$$
Finally, for ∣∣∣ one simply adds extra equations:
$$\begin{array}{rcl} \left (\Omega \square {\square }_{i=1}^{n}{a}_{ i}{P}_{i}\right )\mid \mid {\mid }^{l}Q& =& \Omega \square {\square }_{ i}{a}_{i}\left (\left ({P}_{i}\mid \mid {\mid }^{l}Q\right )\square \left ({P}_{ i}\mid \mid {\mid }^{r}Q\right )\right ) \\ Q\mid \mid {\mid }^{r}\left (\Omega \square {\square }_{ i=1}^{n}{a}_{ i}{P}_{i}\right )& =& \Omega \square {\square }_{i}{a}_{i}\left (\left (Q\mid \mid {\mid }^{l}{P}_{ i}\right )\square \left (Q\mid \mid {\mid }^{r}{P}_{ i}\right )\right ) \\ \end{array}$$

15.7 Combining CSP and Functional Programming

To combine CSP with functional programming, specifically the computational λ-calculus, we use the monad TCSP( |, Ω) for the denotational semantics. As remarked above, CSP processes then become terms of type empty. However, as the constructors are polymorphic, it is natural to go further and look for polymorphic versions of the deconstructors. We therefore add polymorphic constructs to λc as follows:

Constructors


$$\frac{M\! :\! \sigma \quad N\! :\! \sigma } {M \sqcap N\! :\! \sigma } \quad \quad \frac{M\! :\! \sigma } {a \rightarrow M\! :\! \sigma }\quad \quad \Omega \! :\! \sigma $$

Unary Deconstructors

$$\frac{M\! :\! \sigma } {f(M)\! :\! \sigma }\quad \quad \quad \frac{M\! :\! \sigma } {M\setminus a\! :\! \sigma }$$
for any relabelling function f and any \(a \in A\). (One should really restrict the allowable relabelling functions in order to keep the syntax finitary.)

Binary Deconstructors

$$\frac{M\! :\! \sigma \quad N\! :\! \sigma } {M\square N\! :\! \sigma } \quad \quad \quad \frac{M\! :\! \sigma \quad N\! :\! \tau } {M\mid \mid N\! :\! \sigma\times\tau }\quad \quad \quad \frac{M\! :\! \sigma \quad N\! :\! \tau } {M\mid \mid \mid N\! :\! \sigma\times\tau }$$
The idea of the two parallel constructs is to evaluate the two terms in parallel and then return the pair of the two values produced. We did not include syntax for the two deterministic choice constructors as they are definable from a → − and Ω with the aid of the □ deconstructor.
For the denotational semantics, the semantics of types is given as usual using the monad TCSP( |, Ω), which we know exists by the general considerations of Section 15.2. These general considerations also yield a semantics for the constructors. For example, for every set X we have the map:
$${\sqcap }_{X}\! :\! {T}_{\mathrm{CSP}(\vert,\Omega )}{(X)}^{2} \rightarrow{T}_{\mathrm{ CSP}(\vert,\Omega )}(X)$$
which we can use for \(X = [\![\sigma ]\!]\) to interpret terms \(M \sqcap N\colon \sigma \).
The homomorphic point of view also leads to an interpretation of the unary deconstructors, but using free algebras rather than just the initial one. For example, for relabelling by f we need a function:
$${h}_{Rl}\! :\! {T}_{\mathrm{CSP}(\vert,\Omega )}(X) \rightarrow{T}_{\mathrm{CSP}(\vert,\Omega )}(X)$$
We obtain this as the unique homomorphism extending the unit \({\eta }_{X}\colon X \rightarrow {T}_{\mathrm{CSP}(\vert,\Omega )}(X)\), equipping \({T}_{\mathrm{CSP}(\vert,\Omega )}(X)\) with the algebra structure
$$\mathcal{A} = \left ({T}_{\mathrm{CSP}(\vert,\Omega )}(X),{\sqcap }_{\mathcal{A}},{\square }_{\mathcal{A}},{\square }_{\mathcal{A}}^{\Omega }\right )$$
where, for x, yTCSP( |, Ω)(X),
$$x {\sqcap }_{\mathcal{A}}y = x {\sqcap }_{X}y$$

\({\left ({\square }_{\vec{a}}\right )}_{\mathcal{A}}\left ({x}_{1},\ldots,{x}_{n}\right ) ={ \left ({\square }_{f(\vec{a})}\right )}_{X}({x}_{1},\ldots,{x}_{n})\)


\({\left ({\square }_{\vec{a}}^{\Omega }\right )}_{\mathcal{A}}({x}_{1},\ldots,{x}_{n}) ={ \left ({\square }_{f(\vec{a})}^{\Omega }\right )}_{X}\left ({x}_{1},\ldots,{x}_{n}\right )\)

Concealment − ∖ a can be treated analogously, but now following the treatment in the case of \({\mathcal{F}\!\!}_{\mathit{df }}\), and defining \(\mathcal{A}\) by:
$$x {\sqcap }_{\mathcal{A}}y = x {\sqcap }_{X}y$$
for \(x,y \in {T}_{\mathrm{CSP}(\vert,\Omega )}(X)\),

\({({\square }_{a})}_{\mathcal{A}}({x}_{1},\ldots,{x}_{n}) = \left \{\begin{array}{ll} {x}_{j} \sqcap(\Omega \square {\square }_{i\neq j}{a}_{i}{x}_{i})&(\mbox{ if }a = {a}_{j},\,\mbox{ where}\,1 \leq j \leq n) \\ {\square }_{i=1}^{n}{a}_{i}{x}_{i} &(\mbox{ if }a\neq \mbox{ any}\;{a}_{i})\end{array} \right.\)


\({({\square }_{a}^{\Omega })}_{\mathcal{A}}({x}_{1},\ldots,{x}_{n}) = \left \{\begin{array}{ll} {x}_{j} \sqcap(\Omega \square {\square }_{i\neq j}{a}_{i}{x}_{i})&(\mbox{ if }a = {a}_{j},\,\mbox{ where}\,1 \leq j \leq n) \\ \Omega \square {\square }_{i=1}^{n}{a}_{i}{x}_{i} &(\mbox{ if }a\neq \mbox{ any}\;{a}_{i})\end{array} \right.\)

We here again make use of the deterministic choice operator made available by the presence of Ω.

However, we cannot, of course, carry this on to binary deconstructors as we have no general algebraic treatment of them. We proceed instead by giving a concrete definition of them (and the other constructors and deconstructors). That is, we give an explicit description of the free CSP( |, Ω)-algebra on a set X and define our operators in terms of that representation.

An X-trace is a pair (w, x), where \(w \in {A}^{{\ast}}\) and \(x \in X\); it is generally more suggestive to write (w, x) as wx. For any relabelling function f, we set \(f(wx) = f(w)x\), and, for any \(a \in A\), we set \((wx)\setminus a = (w\setminus a)x\). An X-process is a pair (T, F) with T a set of traces as well as X-traces, and F a set of failure pairs, satisfying the same five conditions as in Section 15.6, together with:

\(wx \in T\ \Rightarrow\ w \in T\)  (for \(x \in X\))

The CSP operators are defined on X-processes exactly as before, except that the two parallel operators now have more general types:
$$\mid {\mid }_{X,Y },\mid \mid {\mid }_{X,Y }\,\! :\! {T}_{\mathrm{CSP}(\vert,\Omega )}(X) \times{T}_{\mathrm{CSP}(\vert,\Omega )}(Y ) \rightarrow{T}_{\mathrm{CSP}(\vert,\Omega )}(X \times Y )$$
We take \(\mathrm{fut}_{P}(w) := \{a \in A\mid wa \in {T}_{P}\}\), as before.
$$\begin{array}{llllllllllllllllllll} {\Omega }_{\mathcal{T} (X)} \ =\ &\{\epsilon \} \\ {\Omega }_{\mathcal{R}(X)} \ =\ &\emptyset\\ {\mathtt{Stop}}_{\mathcal{T} (X)} \ =\ &\{\epsilon \} \\ {\mathtt{Stop}}_{\mathcal{R}(X)} \ =\ &\{(\epsilon,W)\mid W \subseteq A\} \\ a {\rightarrow }_{\mathcal{T} (X)}P\ =\ &\{\epsilon \} \cup \{ aw\mid w \in{T}_{P}\} \\ a {\rightarrow }_{\mathcal{R}(X)}P\ =\ &\{(\epsilon,W)\mid a\notin W\} \cup \{ (aw,W)\mid (w,W) \in{F}_{P}\} \\ P {\sqcap }_{\mathcal{T} (X)}Q \ =\ &{T}_{P} \cup{T}_{Q} \\ P {\sqcap }_{\mathcal{R}(X)}Q\ =\ &{F}_{P} \cup{F}_{Q} \\ P{\square }_{\mathcal{T} (X)}Q \ =\ &{T}_{P} \cup{T}_{Q} \\ P{\square }_{\mathcal{R}(X)}Q \ =\ &\begin{array}{l} \{(\epsilon,W)\mid (\epsilon,W) \in{F}_{P} \cap{F}_{Q}\} \\ \cup \:\{ (w,W)\mid w\neq \epsilon,\ (w,W) \in{F}_{P} \cup{F}_{Q}\} \end{array} \\ {f}_{\mathcal{T} (X)}(P) \ =\ &\{f(w)\mid w \in{T}_{P}\} \\ {f}_{\mathcal{R}(X)}(P) \ =\ &\{(f(w),W)\mid (w,{f}^{-1}(W) \cap \mathrm{{ fut}}_{P}(w)) \in{F}_{P}\} \\ P{\setminus }_{\mathcal{T} (X)}a \ =\ &\{w\setminus a\mid w \in{T}_{P}\} \\ P{\setminus }_{\mathcal{R}(X)}a \ =\ &\{(w\setminus a,W)\mid (w,W \cup \{ a\}) \in{F}_{P}\} \\ P\mid {\mid }_{\mathcal{T} (X,Y )}Q \ =\ &\{w\mid w \in{T}_{P} \cap{T}_{Q} \cap{A}^{{\ast}}\}\cup \{ w(x,y)\mid wx \in{T}_{P},\,wy \in{T}_{Q}\} \\ P\mid {\mid }_{\mathcal{R}(X,Y )}Q \ =\ &\{(w,W \cup V )\mid (w,W) \in{F}_{P},\ (w,V ) \in{F}_{Q}\} \\ P\mid \mid {\mid }_{\mathcal{T} (X,Y )}Q \ =\ &\begin{array}{l} \{w\mid u \in{T}_{P} \cap{A}^{{\ast}},\ v \in{T}_{Q} \cap{A}^{{\ast}},\ w \in u\!\mid \!v\} \\ \cup \:\{ w(x,y)\mid ux \in{T}_{P},\ vy \in{T}_{Q},\ w \in u\!\mid \!v\} \end{array} \\ P\mid \mid {\mid }_{\mathcal{R}(X,Y )}Q \ =\ &\{(w,W)\mid (u,W) \in{F}_{P},\,(v,W) \in{F}_{Q},\,w \in u\!\mid \!v\} \end{array}$$
Here, much as before, we write \(P \mathrm{{op}}_{\mathcal{F}(X)} Q = (P \mathrm{{op}}_{\mathcal{T} (X)} Q,P \mathrm{{op}}_{\mathcal{R}(X)} Q)\) when defining the CSP operators on X-processes. The X-processes also form the carrier of a CSP( |, Ω)-algebra \({\mathcal{F}\!\!}_{d}(X)\), with the operators defined as follows:
$$\begin{array}{@{}l@{~=~}l@{}} P {\sqcap }_{{\mathcal{T}}_{d}(X)}Q \ =\ &{T}_{P} \cup{T}_{Q} \\ P {\sqcap }_{{\mathcal{R}}_{d}(X)}Q \ =\ &{F}_{P} \cup{F}_{Q} \\ {\left ({\square }_{\vec{a}}^{\Omega }\right )}_{{\mathcal{T}}_{d}(X)}\left ({P}_{1},\ldots,{P}_{n}\right )\ =\ &\{\epsilon \} \cup \{ {a}_{i}w\mid w \in{T}_{{P}_{i}}\} \\ {\left ({\square }_{\vec{a}}^{\Omega }\right )}_{{\mathcal{R}}_{d}(X)}\left ({P}_{1},\ldots,{P}_{n}\right )\ =\ &\{\left ({a}_{i}w,W\right )\mid \left (w,W\right ) \in{F}_{{P}_{i}}\} \\ {\left ({\square }_{\vec{a}}\right )}_{{\mathcal{T}}_{d}(X)}\left ({P}_{1},\ldots,{P}_{n}\right ) \ =\ &\{\epsilon \} \cup \{ {a}_{i}w\mid w \in{T}_{{P}_{i}}\} \\ {\left ({\square }_{\vec{a}}\right )}_{{\mathcal{R}}_{d}(X)}\left ({P}_{1},\ldots,{P}_{n}\right ) \ =\ &\{\left (\epsilon,W\right )\mid W \cap \{ {a}_{1},\ldots,{a}_{n}\} = \emptyset \}\cup \mbox{ } \\ \{\left ({a}_{i}w,W\right )\mid \left (w,W\right ) \in{F}_{{P}_{i}}\} \end{array}$$
The finitary X-processes are those with a finite set of traces and X-traces; they form the carrier of a CSP( |, Ω)-algebra \({\mathcal{F}\!\!}_{\mathit{df }}(X)\).
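To make the representation concrete, here is a small Python sketch (an illustration only, not part of the development) of finitary X-processes as pairs of a trace set and a failure set, with a few of the constructors defined above. The alphabet `ALPHABET`, the `("ret", x)` end-marker encoding an X-trace wx, and all function names are our own hypothetical encoding choices.

```python
from itertools import combinations

ALPHABET = frozenset({"a", "b"})  # assumed finite action set A

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# An X-process is (T, F): T a frozenset of traces (tuples of actions,
# optionally ending in a ("ret", x) marker encoding the X-trace wx),
# F a frozenset of (trace, refused-action-set) failure pairs.

def eta(x):
    # unit: [[x]] = ({x}, {}), closed under the condition wx in T => w in T
    return (frozenset({(), (("ret", x),)}), frozenset())

def omega():
    # divergence: just the empty trace, no failures
    return (frozenset({()}), frozenset())

def stop():
    # deadlock: refuses every set of actions at the empty trace
    return (frozenset({()}),
            frozenset(((), W) for W in powerset(ALPHABET)))

def prefix(a, P):  # a -> P
    T, F = P
    return (frozenset({()}) | frozenset((a,) + w for w in T),
            frozenset(((), W) for W in powerset(ALPHABET) if a not in W)
            | frozenset(((a,) + w, W) for (w, W) in F))

def internal_choice(P, Q):  # P ⊓ Q: union in both components
    return (P[0] | Q[0], P[1] | Q[1])

def external_choice(P, Q):  # P □ Q: failures intersect at the empty trace
    (TP, FP), (TQ, FQ) = P, Q
    return (TP | TQ,
            frozenset((w, W) for (w, W) in FP & FQ if w == ())
            | frozenset((w, W) for (w, W) in FP | FQ if w != ()))
```

On this representation one can check laws such as Stop □ P = P and the idempotence of ⊓ by direct computation.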
We now show that \({\mathcal{F}\!\!}_{\mathit{df }}(X)\) is the free CSP( |, Ω)-algebra over X. As is well known, the free algebra of a theory Th over a set X is the same as the initial algebra of the theory Th+ obtained by extending Th with constants \(\underline{x}\) for each \(x \in X\) but without changing the axioms. The unit map \(\eta\! :\! X \rightarrow {T}_{\mathrm{Th}}(X)\) sends \(x \in X\) to the denotation of \(\underline{x}\) in the initial algebra. We therefore show that \({\mathcal{F}\!\!}_{\mathit{df }}(X)\), extended to a CSP( |, Ω)+-algebra by taking
$$[\![\underline{x}]\!] = (\{x\},\emptyset )\qquad \ (\mbox{ for $x \in X$})$$
is the initial CSP( |, Ω)+-algebra. We begin by looking at definability.

Lemma 5.

The finitary X-processes are those definable by closed CSP( |, Ω)+-terms.


The proof goes just as the one for Lemma 4, using that Lemma 3 applies just as well to finitary X-processes, but this time we have \(P = {\sqcap }_{i}{\square }_{a\in {V }_{i}}a \rightarrow{P}_{a}\ \sqcap \ \left (\Omega \square {\square }_{a\in {T}_{P}}a \rightarrow{P}_{a}\right )\ \sqcap \ {\sqcap }_{x\in {T}_{P}}\underline{x}\)

Next, we say that a closed CSP( |, Ω)+-term t is in normal form if it has one of the following two forms:
$${\sqcap }_{L\in \mathcal{L}}{\square }_{a\in L}a{t}_{a} \sqcap {\sqcap }_{x\in J}\underline{x}\qquad \mbox{ or}\qquad \left (\Omega \square {\square }_{a\in K}a{t}_{a}\right ) \sqcap {\sqcap }_{x\in J}\underline{x}$$
where, as appropriate, \(\mathcal{L}\) is a finite non-empty saturated collection of finite sets of actions, \(J \subseteq_{\mathrm{fin}} X\), \(K \subseteq_{\mathrm{fin}} A\), and each term ta is in normal form.

Lemma 6.

Two normal forms are identical if they have the same denotation in\({\mathcal{F}\!\!}_{\mathit{df }}(X)\).


Consider two normal forms with the same denotation in \({\mathcal{F}\!\!}_{\mathit{df }}(X)\), say (T, F). As \((\epsilon,\emptyset ) \in F\) iff (T, F) is the denotation of a normal form of the first form (rather than the second), both normal forms must be of the same form. Thus, there are two cases to consider, the first of which concerns two forms:
$${\sqcap }_{L\in \mathcal{L}}{\square }_{a\in L}a{t}_{a} \sqcap {\sqcap }_{x\in J}\underline{x}\quad \quad \quad {\sqcap }_{L^{\prime}\in \mathcal{L^{\prime}}}{\square }_{a^{\prime}\in L^{\prime}}a^{\prime}{t^{\prime}}_{a^{\prime}} \sqcap {\sqcap }_{x\in J^{\prime}}\underline{x}$$
We argue by induction on the sum of the sizes of the two normal forms. We evidently have that J = J′. Next, if \(a \in \bigcup\nolimits \mathcal{L}\) then \(a \in T\) and so \(a \in \bigcup\nolimits \mathcal{L^{\prime}}\); we therefore have that \(\bigcup\nolimits \mathcal{L}\subseteq \bigcup\nolimits \mathcal{L^{\prime}}\). Now, if \(L \in \mathcal{L}\), then \((\epsilon,(\bigcup\nolimits \mathcal{L^{\prime}})\setminus L) \in F\); so for some \(L^{\prime} \in \mathcal{L^{\prime}}\) we have \(L^{\prime} \cap((\bigcup\nolimits \mathcal{L^{\prime}})\setminus L) = \emptyset \), and so \(L^{\prime}\subseteq L\). As \(\mathcal{L^{\prime}}\) is saturated, it follows by the previous remark that \(L \in \mathcal{L^{\prime}}\). So we have the inclusion \(\mathcal{L}\subseteq \mathcal{L^{\prime}}\) and then, arguing symmetrically, equality.

Finally, the denotations of \({t}_{a}\) and \({t^{\prime}}_{a}\), for \(a \in \bigcup\nolimits \mathcal{L} = \bigcup\nolimits \mathcal{L^{\prime}}\), are the same, as they are determined by T and F, being \(\{w\mid aw \in T\}\) and \(\{(w,W)\mid (aw,W) \in F\}\), and the argument concludes, using the inductive hypothesis.

The other case concerns normal forms:
$$\left (\Omega \square {\square }_{a\in K}a{t}_{a}\right ) \sqcap {\sqcap }_{x\in J}\underline{x}\quad \quad \quad \left (\Omega \square {\square }_{a^{\prime}\in K^{\prime}}a^{\prime}{t^{\prime}}_{a^{\prime}}\right ) \sqcap {\sqcap }_{x\in J^{\prime}}\underline{x}$$
Much as before we find J = J′, K = K′, and \({t}_{a} = {t^{\prime}}_{a}\) for \(a \in K\).

Lemma 7.

CSP( |, Ω)+ is ground complete with respect to \({\mathcal{F}\!\!}_{\mathit{df }}(X)\).


As before, a straightforward induction shows that every term has a normal form, and then completeness follows by Lemma 6.

Theorem 7.

The algebra\({\mathcal{F}\!\!}_{\mathit{df }}(X)\)is the free CSP (|,Ω)-algebra over X.


It follows from Lemmas 5 and 7 that \({{\mathcal{F}\!\!}_{\mathit{df }}(X)}^{+}\) is the initial CSP( |, Ω)+-algebra.

As with any finitary equational theory, CSP( |, Ω) is equationally complete with respect to \({\mathcal{F}\!\!}_{\mathit{df }}(X)\) when X is infinite. It is not difficult to go a little further and show that this also holds when X is only required to be non-empty, and, even, if A is infinite, when it is empty.

Now that we have an explicit representation of the free CSP( |, Ω)-monad in terms of X-processes, we indicate how to use it to give the semantics of the computational λ-calculus. First we need the structure of the monad. As we know from the above, the unit ηX​ : ​ XTCSP( |, Ω)(X) is the map x↦({x}, ∅). Next, we need the homomorphic extension \({g}^{\dag }\! :\! {\mathcal{F}\!\!}_{\mathit{df }}(X) \rightarrow {\mathcal{F}\!\!}_{\mathit{df }}(Y )\) of a given map \(g\! :\! X \rightarrow {\mathcal{F}\!\!}_{\mathit{df }}(Y )\), i.e., the unique such homomorphism satisfying \({g}^{\dag }\circ {\eta }_{X} = g\).
This is given by:
$$\begin{array}{cc} {\left ({g}^{\dag }\left (P\right )\right )}_{ \mathcal{T}} =\{ \nu\mid \nu \in{T}_{P} \cap{A}^{{\ast}}\}\cup \{ \nu w\mid \nu x \in{T}_{ P},\ w \in g{\left (x\right )}_{\mathcal{T}}\} & \\ {\left ({g}^{\dag }\left (P\right )\right )}_{ \mathcal{R}} =\{ \left (\nu,V \right )\mid \left (\nu,V \right ) \in{F}_{P}\} \cup \{\left (\nu w,W\right )\mid \nu x \in{T}_{P},\ \left (w,W\right ) \in g{\left (x\right )}_{\mathcal{R}}\}& \end{array}$$
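The displayed equations for g† can be transcribed directly on the trace representation. The sketch below (again using our own hypothetical `("ret", x)` end-marker for X-traces, and our own function names) keeps the plain traces and failures of P and, at each X-trace νx, grafts on the traces and failures of g(x) prefixed by ν.

```python
def is_ret(t):
    # does trace t end in a value marker, i.e., encode an X-trace nu·x?
    return len(t) > 0 and isinstance(t[-1], tuple) and t[-1][0] == "ret"

def eta(x):
    # the unit: the X-trace x (with its empty-trace prefix), no failures
    return (frozenset({(), (("ret", x),)}), frozenset())

def kleisli_ext(g, P):
    # g†(P), per the displayed equations for (g†(P))_T and (g†(P))_R
    TP, FP = P
    T, F = set(t for t in TP if not is_ret(t)), set(FP)
    for t in TP:
        if is_ret(t):
            nu, x = t[:-1], t[-1][1]
            Tg, Fg = g(x)
            T |= {nu + w for w in Tg}
            F |= {(nu + w, W) for (w, W) in Fg}
    return (frozenset(T), frozenset(F))
```

The monad laws η† = id and g† ∘ η = g can then be checked on examples.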
As regards the constructors and deconstructors, we have already given explicit representations of them as functions over (finitary) X-processes. We have also already given homomorphic treatments of the unary deconstructors. We finally give treatments of the binary deconstructors as unique solutions to equations, along similar lines to their treatment in the case of \({\mathcal{F}\!\!}_{\mathit{df }}\). Observe that:
$$\begin{array}{cc} {\left ({\square }_{\vec{a}}\right )}_{X}({P}_{1},\ldots,{P}_{n}) = {a}_{1}{P}_{1}{\square }_{X}{a}_{2}{P}_{2}{\square }_{X}\ldots {\square }_{X}{a}_{n}{P}_{n} & \\ {\left ({\square }_{\vec{a}}^{\Omega }\right )}_{X}({P}_{1},\ldots,{P}_{n}) = \Omega {\square }_{X}{a}_{1}{P}_{1}{\square }_{X}{a}_{2}{P}_{2}{\square }_{X}\ldots {\square }_{X}{a}_{n}{P}_{n}& \end{array}$$
Using this, one finds that \({\square }_{X}\), \({\square }_{X}^{{a}_{1}\ldots {a}_{n}}\) and \({\square }_{X}^{\Omega,{a}_{1}\ldots {a}_{n}}\), the latter two defined as in Eq. 15.9, are the unique functions which satisfy the evident analogues of Eq. 15.8 together with, making another use of the form of external choice made available by Ω:
$$\eta (x)\square P = \eta (x) {\sqcap }_{X}(\Omega \square P)$$
$$({P}_{1},\ldots,{P}_{n}){\square }^{{a}_{1}\ldots {a}_{n} }\eta (x) ={ \left ({\square }_{\vec{a}}^{\Omega }\right )}_{X}({P}_{1},\ldots,{P}_{n}) {\sqcap }_{X}\eta (x)$$
$$({P}_{1},\ldots,{P}_{n}){\square }^{\Omega,{a}_{1}\ldots {a}_{n} }\eta (x) ={ \left ({\square }_{\vec{a}}^{\Omega }\right )}_{X}({P}_{1},\ldots,{P}_{n}) {\sqcap }_{X}\eta (x)$$
As regards concurrency, we define
$$\mid {\mid }_{X,Y }\! :\! {T}_{\mathrm{CSP}(\vert,\Omega )}(X) \times{T}_{\mathrm{CSP}(\vert,\Omega )}(Y ) \rightarrow{T}_{\mathrm{CSP}(\vert,\Omega )}(X \times Y )$$
together with functions
$$\mid {\mid }_{X,Y }^{{a}_{1}\ldots {a}_{n} }\! :\! {T}_{\mathrm{CSP}(\vert,\Omega )}{(X)}^{n} \times{T}_{\mathrm{ CSP}(\vert,\Omega )}(Y ) \rightarrow{T}_{\mathrm{CSP}(\vert,\Omega )}(X \times Y )$$
$$\mid {\mid }_{X,Y }^{\Omega,{a}_{1}\ldots {a}_{n} }\! :\! {T}_{\mathrm{CSP}(\vert,\Omega )}{(X)}^{n} \times{T}_{\mathrm{ CSP}(\vert,\Omega )}(Y ) \rightarrow{T}_{\mathrm{CSP}(\vert,\Omega )}(X \times Y )$$
$$\mid {\mid }_{X,Y }^{x}\! :\! {T}_{\mathrm{ CSP}(\vert,\Omega )}(Y ) \rightarrow{T}_{\mathrm{CSP}(\vert,\Omega )}(X \times Y )$$
where the \({a}_{i} \in A\) are all different, and \(x \in X\), by the analogues of Eq. 15.10 above, together with:
$$\begin{array}{lcl} \eta (x)\mid \mid Q & =&\mid {\mid }^{x}(Q)\\ && \\ \mid {\mid }^{x}(P \sqcap Q) & =&\mid {\mid }^{x}(P)\, \sqcap \,\mid {\mid }^{x}(Q) \\ \mid {\mid }^{x}({\square }_{i=1}^{n}{a}_{i}{P}_{i}) & =&\Omega\\ \mid {\mid }^{x}(\Omega \square {\square }_{i=1}^{n}{a}_{i}{P}_{i}) & =&\Omega\\ \mid {\mid }^{x}(\eta (y)) & =&\eta ((x,y))\\ && \\ ({P}_{1},\ldots,{P}_{n})\mid {\mid }^{{a}_{1}\ldots {a}_{n}}\eta (x) & =&\Omega\\ ({P}_{1},\ldots,{P}_{n})\mid {\mid }^{\Omega,{a}_{1}\ldots {a}_{n}}\eta (x)& =&\Omega \end{array}$$
Much as before, the equations have a unique solution, with the ∣∣ component being ∣∣X, Y.
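The equations for ∣∣x can be read as a structural recursion over (normal-form) terms. The following sketch uses a hypothetical tuple encoding of closed terms, with `("eta", x)` for η(x), `("sqcap", t, u)` for t ⊓ u, and `("ext", bindings, guarded)` for an external choice of action-prefixed terms, Ω-guarded when `guarded` is true; Ω itself is the empty Ω-guarded choice.

```python
OMEGA = ("ext", (), True)  # Omega encoded as the empty Omega-guarded choice

def par_x(x, t):
    # the unary deconstructor ||^x applied to a term t, per the equations above
    tag = t[0]
    if tag == "sqcap":
        # ||^x(P ⊓ Q) = ||^x(P) ⊓ ||^x(Q)
        return ("sqcap", par_x(x, t[1]), par_x(x, t[2]))
    if tag == "ext":
        # ||^x(□ a_i P_i) = Ω and ||^x(Ω □ □ a_i P_i) = Ω
        return OMEGA
    if tag == "eta":
        # ||^x(η(y)) = η((x, y))
        return ("eta", (x, t[1]))
    raise ValueError(tag)

def par(p, q):
    # η(x) || Q = ||^x(Q); only the η case of the left argument is sketched here
    assert p[0] == "eta"
    return par_x(p[1], q)
```

This illustrates how the binary deconstructor is pinned down, case by case, on the constructors of its arguments.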
As regards interleaving, we define
$$\mid \mid {\mid }_{X,Y }^{l},\mid \mid {\mid }_{ X,Y }^{r}\! :\! {T}_{\mathrm{ CSP}(\vert,\Omega )}(X) \times{T}_{\mathrm{CSP}(\vert,\Omega )}(Y ) \rightarrow{T}_{\mathrm{CSP}(\vert,\Omega )}(X \times Y )$$
$$\begin{array}{lcl} P\mid \mid {\mid }_{{\mathcal{T}}_{\mathit{df }}(X,Y )}^{l}Q & =&\{\epsilon \} \cup \{ w\mid u \in{T}_{P} \cap{A}^{{\ast}},\ v \in{T}_{Q} \cap{A}^{{\ast}},\ w \in u\!{\mid }^{l}\!v\} \cup \mbox{ } \\ & &\{w(x,y)\mid ux \in{T}_{P},\ vy \in{T}_{Q},\ w \in u{\mid }^{l}v \vee(u = v = w = \epsilon )\}\\ & & \\ P\mid \mid {\mid }_{{\mathcal{R}}_{\mathit{df }}(X,Y )}^{l}Q& =&\{(\epsilon,W)\mid (\epsilon,W) \in{F}_{P}\} \cup \mbox{ } \\ & &\{(w,W)\mid (u,W) \in{F}_{P},\ (v,W) \in{F}_{Q},\ w \in u{\mid }^{l}v\}\\ & & \\ P\mid \mid {\mid }_{X,Y }^{r}Q & =&Q\mid \mid {\mid }_{Y,X}^{l}P \end{array}$$
One has that:
$$P\mid \mid {\mid }_{X,Y }Q = P\mid \mid {\mid }_{X,Y }^{l}Q\,\square \,P\mid \mid {\mid }_{ X,Y }^{r}Q$$
and that ∣∣∣X, Yl, ∣∣∣X, Yr are components of the unique solutions to the analogues of Eq. 15.11 above, together with:
$$\begin{array}{lcl} \eta (x)\mid \mid {\mid }^{l}Q & =&\mid \mid {\mid }^{l,x}(Q)\\ & & \\ \mid \mid {\mid }^{l,x}(P \sqcap Q) & =&\mid \mid {\mid }^{l,x}(P)\, \sqcap \,\mid \mid {\mid }^{l,x}(Q) \\ \mid \mid {\mid }^{l,x}({\square }_{i=1}^{n}{a}_{i}{P}_{i}) & =&\Omega\\ \mid \mid {\mid }^{l,x}(\Omega \square {\square }_{i=1}^{n}{a}_{i}{P}_{i})& =&\Omega\\ \mid \mid {\mid }^{l,x}(\eta (y)) & =&\eta (x,y) \end{array}$$
and corresponding equations for ∣∣∣r and ∣∣∣r, y.
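At the level of individual traces, the decomposition of ∣∣∣ into its left and right parts reflects a simple combinatorial fact, which the following sketch (function names ours) checks: every interleaving of two traces, at least one of which is non-empty, begins with a step of one or the other.

```python
def interleavings(u, v):
    # the set u|v of all interleavings of traces u and v (tuples of actions)
    if not u:
        return {v}
    if not v:
        return {u}
    return ({(u[0],) + w for w in interleavings(u[1:], v)}
            | {(v[0],) + w for w in interleavings(u, v[1:])})

def left_interleavings(u, v):
    # u|^l v: those interleavings whose first action is taken from u
    return {(u[0],) + w for w in interleavings(u[1:], v)} if u else set()
```

For non-empty u and v one then has u∣v = u∣lv ∪ v∣lu, mirroring the equation P∣∣∣Q = P∣∣∣lQ □ P∣∣∣rQ.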
It would be interesting to check more completely which of the usual laws, as found in, e.g., [9, 10, 13], the CSP operators obey at the level of free CSP( |, Ω)-algebras. Note that some adjustments need to be made due to varying types. For example, ∣∣ is commutative, which here means that the following equation holds:
$${T}_{\mathrm{CSP}(\vert,\Omega )}({\gamma }_{X,Y })(P\mid {\mid }_{X,Y }Q) = Q\mid {\mid }_{Y,X}P$$
where γ​ : ​ X ×YY ×X is the commutativity map (x, y)↦(y, x).

15.7.1 Termination

As remarked in the introduction, termination and sequencing are available in a standard way for terms of type unit. Syntactically, we regard SKIP as an abbreviation for ∗, and M; N as one for (λx​ : ​unit. N)(M), where x does not occur free in N; semantically, we have a corresponding element of, and binary operator over, the free CSP( |, Ω)-algebra on the one-point set.

Let us use these ideas to treat CSP extended with termination and sequencing. We work with the finitary \(\{\surd\}\)-processes representation of \({T}_{\mathrm{CSP}(\vert,\Omega )}(\{\surd\})\). Then, following the above prescription, termination and sequencing are given by:
$$\mathtt{SKIP}\, =\{ \surd\}\quad \quad \quad \quad P;Q = {(x \in \{\surd\}\mapsto Q)}^{\dag }(P)$$
For general reasons, termination and sequencing, so-defined, form a monoid and sequencing commutes with all constructors in its first argument. For example, we have that:
$${\square }_{i=1}^{n}{a}_{ i}({P}_{i};Q) = \left ({\square }_{i=1}^{n}{a}_{ i}{P}_{i}\right );Q$$
Composition further commutes with ⊓ in its second argument.
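Continuing the Python sketch of the trace representation (the `("ret", ...)` end-marker and the names being our own encoding, with `TICK` standing for √), termination and sequencing can be transcribed as follows: SKIP is the unit at the one-point set, and P;Q is the Kleisli extension of the constant map, as in the displayed definition.

```python
from itertools import combinations

ALPHABET = frozenset({"a"})   # assumed action set
TICK = "tick"                 # stands for the termination value √

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_ret(t):
    return len(t) > 0 and isinstance(t[-1], tuple) and t[-1][0] == "ret"

def eta(x):
    return (frozenset({(), (("ret", x),)}), frozenset())

SKIP = eta(TICK)

def kleisli_ext(g, P):
    # g†(P): graft g(x) onto each X-trace nu·x of P
    TP, FP = P
    T, F = set(t for t in TP if not is_ret(t)), set(FP)
    for t in TP:
        if is_ret(t):
            nu, x = t[:-1], t[-1][1]
            Tg, Fg = g(x)
            T |= {nu + w for w in Tg}
            F |= {(nu + w, W) for (w, W) in Fg}
    return (frozenset(T), frozenset(F))

def seq(P, Q):
    # P;Q = (x in {√} ↦ Q)†(P)
    return kleisli_ext(lambda _: Q, P)

def prefix(a, P):  # a -> P
    T, F = P
    return (frozenset({()}) | frozenset((a,) + w for w in T),
            frozenset(((), W) for W in powerset(ALPHABET) if a not in W)
            | frozenset(((a,) + w, W) for (w, W) in F))
```

On this sketch the monoid laws for SKIP and ;, and the commutation of sequencing with action prefix in its first argument, hold by computation.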

The deconstructors are defined as above except that in the case of the concurrency operators one has to adjust \(\mid {\mid }_{ \{\surd\},\{\surd\}}\) and \(\mid \mid {\mid }_{ \{\surd\},\{\surd\}}\) so that they remain within the world of the \(\{\surd\}\)-processes; this can be done by postcomposing them with the evident bijection between \(\{\surd\}\times \{\surd\}\)-processes and \(\{\surd\}\)-processes, and all this restricts to the finitary processes. Alternatively one can directly consider these adjusted operators as deconstructors over the (finitary) \(\{\surd\}\)-processes.

The \(\{\surd\}\)-processes are essentially the elements of the stable failures model of [29]. More precisely, one can define a bijection from Roscoe’s model to our \(\{\surd\}\)-processes by setting θ(T, F) = (T, F′) where
$$F^{\prime} =\{ (w,W) \in{A}^{{\ast}}\times \mathcal{P}(A)\mid (w,W \cup \{\surd\}) \in F\}$$
The inverse of θ sends F′ to the set:
$$\begin{array}{cc} \{(w,W),(w,W \cup \{\surd\})\mid (w,W) \in F^{\prime}\} \cup \mbox{ } & \\ \{(w,W)\mid w\surd\in T \wedge W \subseteq A\} \cup \{ (w\surd,W)\mid w\surd\in T \wedge W \subseteq A \cup \{\surd\}\}& \end{array}$$
and is a homomorphism between all our operators, whether constructors, deconstructors, termination, or sequencing (suitably defined), and the corresponding ones defined for Roscoe’s model.
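The bijection θ and its inverse can be transcribed on the tuple encoding used above; the sketch below (with `TICK` for √ and a small assumed alphabet, names ours) is a finite illustration only, and the round trip θ ∘ θ-1 is checked on two examples.

```python
from itertools import combinations

TICK = "tick"             # stands for √
A = frozenset({"a"})      # assumed action alphabet (√ not included)

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_ret(t):
    return len(t) > 0 and isinstance(t[-1], tuple) and t[-1][0] == "ret"

def theta(P):
    # from Roscoe-style (T, F), with √ occurring in traces and refusals,
    # to a {√}-process: traces w√ become X-traces; F' keeps (w, W) with
    # (w, W ∪ {√}) in F, for w and W not mentioning √
    T, F = P
    Tq = frozenset(t[:-1] + (("ret", TICK),) if t and t[-1] == TICK else t
                   for t in T)
    Fq = frozenset((w, W - {TICK})
                   for (w, W) in F if TICK not in w and TICK in W)
    return (Tq, Fq)

def theta_inv(Q):
    # the inverse, following the displayed set: each failure (w, W) of Q
    # yields (w, W) and (w, W ∪ {√}); terminating traces refuse everything
    Tq, Fq = Q
    T = frozenset(t[:-1] + (TICK,) if is_ret(t) else t for t in Tq)
    F = set()
    for (w, W) in Fq:
        F.add((w, W))
        F.add((w, W | {TICK}))
    for t in Tq:
        if is_ret(t):
            w = t[:-1]
            F |= {(w, W) for W in powerset(A)}
            F |= {(w + (TICK,), W) for W in powerset(A | {TICK})}
    return (T, frozenset(F))
```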

15.8 Discussion

We have shown the possibility of a principled combination of CSP and functional programming from the viewpoint of the algebraic theory of effects. The main missing ingredient is an algebraic treatment of binary deconstructors, although we were able to partially circumvent that by giving explicit definitions of them. Also missing are a logic for proving properties of these deconstructors, an operational semantics, and a treatment that includes recursion.

As regards a logic, it may prove possible to adapt the logical ideas of [24, 25] to handle binary deconstructors; the main proof principle would then be that of computation induction, that if a proposition holds for all “values” (i.e., elements of a given set X) and if it holds for the applications of each constructor to any given “computations” (i.e., elements of T(X)) for which it is assumed to hold, then it holds for all computations. We do not anticipate any difficulty in giving an operational semantics for the above combination of the computational λ-calculus and CSP and proving an adequacy theorem.

To treat recursion algebraically, one passes from equational theories to inequational theories Th (inequations have the form tu, for terms t, u in a given signature Σ); inequational theories can include equations, regarding an equation as two evident inequations. There is a natural inequational logic for deducing consequences of the axioms: one simply drops symmetry from the logic for equations [7]. Then Σ-algebras and Th-algebras are taken in the category of ω-cpos and continuous functions, a free algebra monad always exists, just as in the case of sets, and the logic is complete for the class of such algebras. One includes a divergence constant Ω in the signature and the axiom
$$\Omega\leq x$$
so that Th-algebras always have a least element. Recursive definitions are then modelled by least fixed-points in the usual way. See [14, 21] for some further explanations.
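To illustrate in the simplest component: in the traces part of the model, a recursive definition such as X = a → X is modelled by iterating its defining map from the least element Ω, the approximants forming an increasing chain whose limit is the least fixed point. A small Python sketch (function names ours):

```python
def prefix_traces(a, T):
    # the traces component of a -> P
    return frozenset({()}) | frozenset((a,) + w for w in T)

def kleene(f, bottom, n):
    # the n-th Kleene approximant f^n(bottom) of the least fixed point of f
    t = bottom
    for _ in range(n):
        t = f(t)
    return t

OMEGA_T = frozenset({()})           # traces of the divergence constant Omega
step = lambda T: prefix_traces("a", T)   # the body of X = a -> X
```

Each approximant adds one more unfolding; since Ω ≤ x, the chain is increasing.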

The three classical powerdomains: convex (aka Plotkin), lower (aka Hoare) and upper (aka Smyth) provide a useful illustration of these ideas [12, 14]. One takes as signature a binary operation symbol ⊓, to retain notational consistency with the present paper (a more neutral symbol, such as ∪, is normally used instead), and the constant Ω; one takes the theory to be that ⊓ is a semilattice (meaning, as before, that associativity, commutativity and idempotence hold) and that, as given above, Ω is the least element with respect to the ordering ≤. This gives an algebraic account of the convex powerdomain.

If one adds that Ω is the zero of the semilattice (which is equivalent, in the present context, to the inequation \(x \leq x \sqcap y\)) one obtains instead an algebraic account of the lower powerdomain. One then further has the notationally counterintuitive facts that \(x \leq y\) is equivalent to \(y \sqsubseteq x\), with ⊑ defined as in Section 15.3, and that \(x \sqcap y\) is the supremum of x and y with respect to ≤; in models, ≤ typically corresponds to subset. It would be more natural in this case to use the dual order to ⊑ and to write ⊔ instead of ⊓, when we would be dealing with a join-semilattice with a least element whose order coincides with ≤.

If one adds instead that \(x \sqcap y \leq x\), one obtains an algebraic account of the upper powerdomain. One now has that \(x \leq y\) is equivalent in this context to \(x \sqsubseteq y\), that \(x \sqcap y\) is the greatest lower bound of x and y, and that \(x \sqcap \Omega= \Omega \) (but this latter fact is not equivalent in inequational logic to \(x \sqcap y \leq x\)); in models, ≤ typically corresponds to superset. The notations ⊓ and ⊑ are therefore more intuitive in the upper case, and there one has a meet-semilattice with a least element whose order coincides with ≤.

It will be clear from these considerations that the stable failures model fits into the pattern of the lower powerdomain and that the failures/divergences model fits into the pattern of the upper powerdomain. In the case of the stable failures model it is natural, in the light of the above considerations, to take Th to be CSP( |, Ω) together with the axiom Ωx. The X-processes with countably many traces presumably form the free algebra over X, considered as a discrete ω-cpo; one should also characterise more general cases than discrete ω-cpos.

One should also investigate whether a fragment of the failures/divergences model forms the initial model of an appropriate theory, and look at the free models of such a theory. The theory might well be found by analogy with our work on the stable failures model, substituting (15.12) for (15.13) and, perhaps, using the mixed-choice constructor, defined below, to overcome any difficulties with the deconstructors. One would expect the initial model to contain only finitely-generable processes, meaning those which, at any trace, either branch finitely or diverge (and see the discussion in [29]).

Our initial division of our selection of CSP operators into constructors and deconstructors was natural, although it turned out that a somewhat different division, with “restricted” constructors, resulted in what seemed to be a better analysis (we were not able to rule out the possibility that there are alternative, indirect, definitions of the deconstructors with the original choice of constructors). One of these restricted constructors was a deterministic choice operator making use of the divergence constant Ω. There should surely, however, also be a development without divergence that allows the interpretation of the combination of CSP and functional programming.

We were, however, not able to do this using CSP( | ): the free algebra does not seem to support a suitable definition of concealment, whether defined directly or via a homomorphism. For example, a straightforward extension of the homomorphic treatment of concealment, in the case of the initial algebra (cf. Section 15.5), would give
$$(a.\underline{x}\square b.\mathtt{Stop})\setminus a = \underline{x} \sqcap(\underline{x}\square b.\mathtt{Stop})$$
However, our approach requires the right-hand side to be equivalent to a term built from constructors only, but no natural candidates came forward: all choices that came to mind led to unwanted identifications.
We conjecture that, taking instead, as constructor, a mixed-choice operator of the form:
$${\square }_{i}{\alpha }_{i}.{x}_{i}$$
where each αi is either an action or τ, would lead to a satisfactory theory. This new operator is given by the equation:
$${\square }_{i}{\alpha }_{i}.{x}_{i} = {\sqcap }_{{\alpha }_{i}=\tau }{x}_{i}\; \sqcap \left ({\square }_{{\alpha }_{i}=\tau }{x}_{i}\;\square {\square }_{{\alpha }_{i}\neq \tau }{\alpha }_{i}.{x}_{i}\right )$$
and there is a homomorphic relationship with concealment:
$$\left ({\square }_{i}{\alpha }_{i}.{x}_{i}\right )\backslash a = {\square }_{i}({\alpha }_{i}\backslash a).({x}_{i}\backslash a)$$
(with the evident understanding of αia). Note that in the stable failures model we have the equation:
$${\square }_{i}{\alpha }_{i}.{x}_{i} = {\sqcap }_{{\alpha }_{i}=\tau }{x}_{i}\; \sqcap \left (\Omega \square {\square }_{{\alpha }_{i}\neq \tau }{\alpha }_{i}.{x}_{i}\right )$$
which is presumably why the deterministic choice operator available in the presence of Ω played so central a rôle there.

In a different direction, one might also ask whether there is some problem if we alternatively take an extended set of operators as constructors. For example, why not add relabelling with its equations to the axioms? As the axioms inductively determine relabelling on the finitary refusal sets model, that would still be the initial algebra, and the same holds if we add any of the other operators we have taken as deconstructors.

However, the X-refusal sets would no longer be the free algebra, as there would be extra elements, such as f(x) for \(x \in X\), where f is a relabelling function. We would also get some undesired equations holding between terms of the computational λ-calculus. For any n-ary constructor op and evaluation context E[ − ], one has in the monadic semantics:
$$E[\mathrm{op}({M}_{1},\ldots,{M}_{n})] =\mathrm{ op}(E[{M}_{1}],\ldots,E[{M}_{n}])$$
So one would have E[f(M)] = f(E[M]) if one took relabelling as a constructor, and, as another example, one would have E[M∣∣N] = E[M]∣∣E[N] if one took the concurrency operator as a constructor.

It will be clear to the reader that, in principle, one can investigate other process calculi and their combination with functional programming in a similar way. For example, for Milner’s CCS [17] one could take action prefix (with names, conames and τ) together with NIL and the sum operator as constructors, and as axioms that we have a semilattice with a zero, for strong bisimulation, together with the usual τ-laws, if we additionally wish to consider weak bisimulation. The deconstructors would be renaming, hiding, and parallel, and all should have suitable polymorphic versions in the functional programming context. Other process calculi such as the π-calculus [30, 33], or even the stochastic π-calculus [26, 16], might be dealt with similarly. In much the same way, one could combine parallelism with a global store with functional programming, following the algebraic account of the resumptions monad [14, 1] where the constructors are the two standard ones for global store [22], a nondeterministic choice operation, and a unary “suspension” operation.

A well-known feature of the monadic approach [14] is that it is often possible to combine different effects in a modular way. For example, the global side-effects monad is \({(S \times -)}^{S}\), where S is a suitable set of states. A common combination of it with another monad T is the monad \(T{(S \times -)}^{S}\). So, taking T = TCSP(∣), for example, we get a combination of CSP with global side-effects.
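As a sketch of such a combination, take the nondeterminism part of T to be finite powerset: a computation over X is then a function from states to a finite set of (state, value) pairs. The operation names below are our own illustrative choices.

```python
# Computations in the combined monad T(S x -)^S, with T = finite powerset:
# functions from a state s to a frozenset of (final state, value) pairs.

def unit(x):
    return lambda s: frozenset({(s, x)})

def bind(m, f):
    # sequencing: run m, then f on each resulting (state, value) pair
    return lambda s: frozenset(p for (s1, x) in m(s) for p in f(x)(s1))

def choice(m, n):
    # internal choice, lifted pointwise to the state-reading level
    return lambda s: m(s) | n(s)

def get():
    # read the current state
    return lambda s: frozenset({(s, s)})

def put(s1):
    # overwrite the state
    return lambda s: frozenset({(s1, ())})
```

For instance, internally choosing between two writes and then reading yields both possible outcomes.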

As another example, given a monoid M, one has the M-action monad M × − which supports a unary M-action effect constructor m. −, parameterised by elements m of the monoid. One might use this monad to model the passage of time, taking M to be, for example, the monoid ℕ of the natural numbers under addition. A suitable combination of this monad with ones for CSP may yield helpful analyses of timed CSP [20, 27], with Wait n; − given by the ℕ-action effect constructor n. −. We therefore have a very rich space of possible combinations of process calculi, functional programming and other effects, and we hope that some of these prove useful.
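A minimal sketch of the ℕ-action monad ℕ × −, with a wait constructor for the passage of time (the names are ours):

```python
# The M-action monad M x (-) for M = (N, +, 0): a computation is a pair
# (elapsed time, value).

def unit(x):
    return (0, x)              # 0 is the monoid identity

def bind(m, f):
    n, x = m
    n2, y = f(x)
    return (n + n2, y)         # delays accumulate by the monoid operation

def wait(n, m):
    # the N-action effect constructor n.-: delay the computation m by n units
    n0, x = m
    return (n + n0, x)
```

Sequencing two delayed computations adds their delays, as expected of Wait.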

Finally, we note that there is no general account of how the equations used in the algebraic theory of effects arise. In such cases as global state, nondeterminism or probability, there are natural axioms and monads already available, and it is encouraging that the two are equivalent [22, 14]. One could investigate using operational methods and behavioural equivalences to determine the equations, and it would be interesting to do so. Another approach is the use of “test algebras” [32, 15]. In the case of process calculi one naturally uses operational methods; however, the resulting axioms may not be very modular, or very natural mathematically, and, all in all, in this respect the situation is not satisfactory.

15.9 Appendix: The Computational λ-Calculus

In this appendix, we sketch (a slight variant of) the syntax and semantics of Moggi’s computational λ-calculus, or λc-calculus [18, 19]. It has types given by:
$$\sigma::= \mathtt{b}\,\mid \mathtt{unit}\,\mid \sigma\times\sigma \mid \mathtt{empty}\,\mid \sigma\rightarrow\sigma $$
where b ranges over a given set of base types, e.g., nat ; the type construction Tσ may be defined to be unit → σ. The terms of the λc-calculus are given by:
$$M ::= x\mid g(M)\mid {\ast}\mid \mathtt{in}\,M\mid (M,M)\mid \mathtt{fst}\,M\mid \mathtt{snd}\,M\mid \lambda x\! :\! \sigma.M\mid MM$$
where g ranges over given unary function symbols of given types σ → τ, such as 0​ : ​unit → nat or succ​ : ​nat → nat, if we want the natural numbers, or op​ : ​ T(σ) × ⋯ × T(σ) → T(σ) for some operation symbol from a theory for which T is the free algebra monad. There are standard notions of free and bound variables and of closed terms and substitution; there are also standard typing rules for judgements Γ ⊢ M​ : ​ σ, that the term M has type σ in the context Γ (contexts have the form Γ = x1 : σ1, …, xn : σn), including:
$$\frac{\Gamma\vdash M\! :\! \mathtt{empty}\,} {\Gamma\vdash \mathtt{in}\,M\! :\! \sigma }$$

A λc-model (on the category of sets; Moggi worked more generally) consists of a monad T, together with enough information to interpret basic types and the given function symbols. So there is a given set [​[b]​] to interpret each basic type b, and then every type σ receives an interpretation as a set [​[σ]​]; for example, [​[empty ]​] = ∅. There is also given a map [​[σ]​] → T([​[τ]​]) to interpret every given unary function symbol g​ : ​ σ → τ. A term Γ ⊢ M​ : ​ σ of type σ in context Γ is modelled by a map [​[M]​]​ : ​ [​[Γ]​] → T[​[σ]​] (where [​[x1​ : ​ σ1, …, xn​ : ​ σn]​] = [​[σ1]​] × ⋯ × [​[σn]​]). For example, if Γ ⊢ in M​ : ​ σ then [​[in M]​] = 0[​[σ]​] ∘ [​[M]​] (where, for any set X, 0X is the unique map from ∅ to X).

We define values and evaluation contexts. Values can be thought of as (syntax for) completed computations, and are defined by:
$$V ::= x\mid {\ast}\mid (V,V )\mid \mathtt{in}\,V \mid \lambda x\! :\! \sigma.M$$
together with clauses such as:
$$V ::= 0\mid \mathtt{succ}\,(V )$$
depending on the choice of basic types and given function symbols. We may then define evaluation contexts by:
$$E ::= [-]\mid \mathtt{in}\,E\mid (E,M)\mid (V,E)\mid EM\mid V E\mid \mathtt{fst}\,(E)\mid \mathtt{snd}\,(E)$$
together with clauses such as:
$$E ::= \mathtt{succ}\,(E)$$
depending on the choice of basic types and given function symbols. We write E[M] for the term obtained by replacing the ‘hole’ [ − ] in an evaluation context E by a term M. The computational thought behind evaluation contexts is that in a program of the form E[M], the first computational step arises within M.


References

  1. Abadi, M., Plotkin, G.D.: A model of cooperative threads. In: Shao, Z., Pierce, B.C. (eds.), Proc. POPL 2009, pp. 29–40. ACM Press (2009)
  2. Abramsky, S., Gabbay, D.M., Maibaum, T.S.E. (eds.): Handbook of Logic in Computer Science (Vol. 1), Background: Mathematical Structures. Oxford University Press (1995)
  3. Benton, N., Hughes, J., Moggi, E.: Monads and effects. Proc. APPSEM 2000, LNCS 2395, pp. 42–122. Springer (2002)
  4. Bergstra, J.A., Klop, J.W.: Algebra of communicating processes with abstraction. Theor. Comput. Sci. 37, pp. 77–121 (1985)
  5. Bergstra, J.A., Klop, J.W.: Algebra of communicating processes. In: de Bakker, J.W., Hazewinkel, M., Lenstra, J.K. (eds.), Proc. of the CWI Symp. Math. and Comp. Sci., pp. 89–138. North-Holland (1986)
  6. Bergstra, J.A., Klop, J.W., Olderog, E.-R.: Failures without chaos: a new process semantics for fair abstraction. In: Wirsing, M. (ed.), Proc. of the 3rd IFIP WG 2.2 Working Conference on Formal Description of Programming Concepts, pp. 77–103. North-Holland (1987)
  7. Bloom, S.L.: Varieties of ordered algebras. J. Comput. Syst. Sci. 13(2):200–212 (1976)
  8. Borceux, F.: Handbook of Categorical Algebra 2, Encyclopedia of Mathematics and Its Applications 51. Cambridge University Press (1994)
  9. Brookes, S.D., Hoare, C.A.R., Roscoe, A.W.: A theory of communicating sequential processes. J. ACM 31(3):560–599 (1984)
  10. De Nicola, R.: Two complete axiom systems for a theory of communicating sequential processes. Inform. Control 64, pp. 136–172 (1985)
  11. Ebrahimi-Fard, K., Guo, L.: Rota-Baxter algebras and dendriform algebras. J. Pure Appl. Algebra 212(2):320–339 (2008)
  12. Gierz, G., Hofmann, K.H., Keimel, K., Lawson, J.D., Mislove, M., Scott, D.S.: Continuous Lattices and Domains, Encyclopedia of Mathematics and its Applications 93. Cambridge University Press (2003)
  13. Hoare, C.A.R.: Communicating Sequential Processes. Prentice-Hall (1985)
  14. Hyland, J.M.E., Plotkin, G.D., Power, A.J.: Combining effects: sum and tensor. In: Artemov, S., Mislove, M. (eds.), Clifford Lectures and the Mathematical Foundations of Programming Semantics. Theor. Comput. Sci. 357(1–3):70–99 (2006)
  15. Keimel, K., Plotkin, G.D.: Predicate transformers for extended probability and non-determinism. Math. Struct. Comput. Sci. 19(3):501–539. Cambridge University Press (2009)
  16. Klin, B., Sassone, V.: Structural operational semantics for stochastic process calculi. In: Amadio, R.M. (ed.), Proc. 11th FoSSaCS, LNCS 4962, pp. 428–442. Springer (2008)
  17. Milner, A.J.R.G.: A Calculus of Communicating Systems. Springer (1980)
  18. Moggi, E.: Computational lambda-calculus and monads. Proc. 3rd LICS, pp. 14–23. IEEE Press (1989)
  19. Moggi, E.: Notions of computation and monads. Inf. Comput. 93(1):55–92 (1991)
  20. Ouaknine, J., Schneider, S.: Timed CSP: a retrospective. Proceedings of the Workshop "Essays on Algebraic Process Calculi" (APC 25), Electr. Notes Theor. Comput. Sci. 162, pp. 273–276 (2006)
  21. Plotkin, G.D.: Some varieties of equational logic. In: Futatsugi, K., Jouannaud, J.-P., Meseguer, J. (eds.), Essays Dedicated to Joseph A. Goguen, LNCS 4060, pp. 150–156. Springer (2006)
  22. Plotkin, G.D., Power, A.J.: Notions of computation determine monads. Proc. 5th FoSSaCS, LNCS 2303, pp. 342–356. Springer (2002)
  23. Plotkin, G.D., Power, A.J.: Computational effects and operations: an overview. In: Escardó, M., Jung, A. (eds.), Proc. Workshop on Domains VI, Electr. Notes Theor. Comput. Sci. 73, pp. 149–163. Elsevier (2004)
  24. Plotkin, G.D., Pretnar, M.: A logic for algebraic effects. Proc. 23rd LICS, pp. 118–129. IEEE Press (2008)
  25. Plotkin, G.D., Pretnar, M.: Handlers of algebraic effects. Proc. 18th ESOP, pp. 80–94 (2009)
  26. Priami, C.: Stochastic pi-calculus. Comput. J. 38(7):578–589 (1995)
  27. Reed, G.M., Roscoe, A.W.: The timed failures-stability model for CSP. Theor. Comput. Sci. 211(1–2):85–127 (1999)
  28. Roscoe, A.W.: Model-checking CSP. In: Roscoe, A.W. (ed.), A Classical Mind: Essays in Honour of C.A.R. Hoare, pp. 353–337. Prentice-Hall (1994)
  29. Roscoe, A.W.: The Theory and Practice of Concurrency. Prentice-Hall (1998)
  30. Sangiorgi, D., Walker, D.: The π-Calculus: A Theory of Mobile Processes. Cambridge University Press (2003)
  31. Scattergood, B.: The Semantics and Implementation of Machine-Readable CSP. D.Phil. Thesis, Oxford University (1998)
  32. Schröder, M., Simpson, A.: Probabilistic observations and valuations (extended abstract). Electr. Notes Theor. Comput. Sci. 155, pp. 605–615 (2006)
  33. Stark, I.: Free-algebra models for the pi-calculus. Theor. Comput. Sci. 390(2–3):248–270 (2008)

Copyright information

© Springer London 2010

Authors and Affiliations

  1. NICTA, Sydney, Australia
  2. University of New South Wales, Sydney, Australia
  3. Laboratory for the Foundations of Computer Science, School of Informatics, University of Edinburgh, Edinburgh, UK
