Translating between models of concurrency

Hoare’s Communicating Sequential Processes (CSP) (Hoare in Communicating Sequential Processes, Prentice-Hall Inc, Upper Saddle River, 1985) admits a rich universe of semantic models closely related to the van Glabbeek spectrum. In this paper we study finite observational models, of which at least six have been studied for CSP, namely traces, stable failures, revivals, acceptances, refusal testing and finite linear observations (Roscoe in Understanding concurrent systems. Texts in computer science, Springer, Berlin, 2010). (Others are known.) We show how to use the relatively recently-introduced priority operator (Roscoe in Understanding concurrent systems. Texts in Computer Science, Springer, Berlin, 2010) to transform refinement questions in these models into trace refinement (language inclusion) tests. Furthermore, we are able to generalise this to any (rational) finite observational model. As well as being of theoretical interest, this is of practical significance since the state-of-the-art refinement checking tool FDR4 (Gibson-Robinson et al. in Int J Softw Tools Technol Transf 18(2):149–167, 2016) currently only supports two such models. In particular we study how it is possible to check refinement in a discrete version of the Timed Failures model that supports Timed CSP.


Introduction
In this paper we re-examine part of the Linear-Time spectrum that forms part of the field of study of van Glabbeek in [31,30], specifically the part characterised by finite linear observations.
A number of different forms of process calculus have been developed for the modelling of concurrent programs, including Hoare's Communicating Sequential Processes (CSP) [8], Milner's Calculus of Communicating Systems (CCS) [12], and the π-calculus [13]. Unlike the latter two, CSP's semantics are traditionally given in behavioural semantic models coarser than bisimulation, normally ones that depend on linear observations only. Thus, while the immediate range of options possible from a state can be observed, only one of them can be followed in a linear observation, and so branching behaviour is not recorded.
In this paper, we study finite linear-time observational models for CSP; that is, models where all observations considered can be determined in a finite time by an experimenter who can see the visible events a process communicates and the sets of events it can offer in any stable state. While the experimenter can run the process arbitrarily often, he or she can only record the results of individual finite executions. Thus each behaviour recorded can be deduced from a single finite execution: the sequence of visible events observed, together with the sets of events accepted in stable states during and immediately after this trace. The representation in the model is determined purely by the set of these linear behaviours that it is possible to observe of the process being examined.
At least six such models have been actively considered for CSP, but the state-of-the-art refinement checking tool, FDR4 [5,6], currently only supports two, namely traces and stable failures. FDR4 also supports the (divergence-strict) failures-divergences model, which is not finite observational.
The question we address in this paper supposes that we have an automated proof tool such as FDR that answers questions about how a process is represented in model A, and asks under what circumstances it is possible to answer questions posed in model B, especially the core property of refinement.
It seems intuitive that if model A records more details than model B, then by looking carefully at how A codes the details recorded in B, the above ought to be possible. We will later see some techniques for achieving this. However it does not intuitively seem likely that we can do the reverse. Surprisingly, however, we find it can be done by the use of process operators for which the coarser model B is not compositional. Sometimes we can use such operators to transform observable features of behaviour that B does not see into ones that it does.
The operator we choose in the world of CSP is the relatively new priority operator. While simple to define in operational semantics, this is only compositional over the finest possible finite-linear-observation model of CSP. Priority is not part of "standard" CSP, but is implemented in the model checker FDR4 and greatly extends the expressive power of the notation.
We present first a construction which produces a context C such that refinement questions in the well-known stable failures model correspond to trace refinement questions under the application of C. We then generalise this to show (Theorem 1) that a similar construction is possible not only for the six models which have been studied, but also for any sensible finite observational model (where 'sensible' means that the model can be recognised by a finite-memory computer, in a sense which we shall make precise). In fact we can seemingly handle any equivalence determined in a compact way by finitary observations, even when it is not a congruence.
We first briefly describe the language of CSP. We next give an informal description of our construction for the stable failures model. To prove the result in full generality, we first give a formal definition of a finite observational model, and of the notion of rationality. We then describe our general construction. In a case study we consider the discrete version of the Timed Failures model of Timed CSP, a closely related notation which already depends on priority thanks to its need to enforce the principle of maximal progress. For that we show not only how model shifting can obtain exactly what is needed but also show how Timed Failures checking can be reduced to its relative Refusal Testing. Finally we discuss performance and optimisation issues.
The present paper is a revised and extended version of [11], with the main additions being the study of Timed CSP and the model translation options available there, plus a description of how to include CSP termination (✓).

The CSP language
We provide a brief outline of the language, largely taken from [20]; the reader is encouraged to consult [21] for a more comprehensive treatment.
Throughout, Σ is taken to be a finite nonempty set of communications that are visible and can only happen when the observing environment permits via handshaken communication. The actions of every process are taken from Σ ∪ {τ}, where τ is the invisible internal action that cannot be prevented by the environment. We extend this to Σ ∪ {τ, ✓} if we want the language to allow the successful termination process SKIP and sequential compositions as described below.
✓ is different from other events, because it is observable but not controllable: in that sense it is between a regular Σ event and τ. It only ever appears at the end of traces and from a state which has refusal set Σ and acceptance set {✓}, although that state is not stable in the usual sense. It thus complicates matters a little, so the reader might prefer to ignore it when first studying this paper. We will later contemplate a second event with special semantics: tock, signifying the passage of time.
The constant processes of our core version of CSP are:
• STOP which does nothing: a representation of deadlock.
• div which performs (only) an infinite sequence of internal τ actions: a representation of divergence or livelock.
• CHAOS which can do anything except diverge, though this absence of divergence is unimportant when studying finite behaviour models.
• SKIP which terminates successfully.
The prefixing operator introduces communication:
• a → P communicates the event a before behaving like P.
There are two main forms of binary choice between a pair of processes:
• P ⊓ Q lets the process decide to behave like P or like Q: this is nondeterministic or internal choice.
• P ✷ Q offers the environment the choice between the initial Σ-events of P and Q. If the one selected is unambiguous then it continues to behave like the one chosen; if it is an initial event of both then the subsequent behaviour is nondeterministic. The occurrence of τ in one of P and Q does not resolve the choice (unlike the + of CCS). This is external choice.
A further form of binary choice is the asymmetric P ✄ Q, sometimes called sliding choice. This offers any initial visible action of P from an unstable (in the combination) state and can (until such an action happens) perform a τ action to Q. It can be re-written in terms of prefix, external choice and hiding. It represents a convenient shorthand way of creating processes in which visible actions happen from an unstable state, so this is not an operator one is likely to use much for building practical systems, rather a tool for analysing how systems can behave. As discussed in [21], to give a full treatment of CSP in any model finer than stable failures, it is necessary to contemplate processes that have visible actions performed from unstable states.
We only have a single parallel operator in our core language since all the usual ones of CSP can be defined in terms of it as discussed in Chapter 2 etc. of [21].
• P ∥_X Q runs P and Q in parallel, allowing each of them to perform any action in Σ \ X independently, whereas actions in X must be synchronised between the two.
There are two operators that change the nature of a process's communications.
• P \ X, for X ⊆ Σ, hides X by turning all P's X-actions into τs.
• Sequential composition P ; Q allows P to run until it terminates successfully (✓). P's ✓ is turned into τ and then Q is started. So if P and Q respectively have traces sˆ⟨✓⟩ and t, then P ; Q has the trace sˆt.

• P[[R]] applies the renaming relation R ⊆ Σ × Σ to P: whenever P can perform an event a, the renamed process can perform any b with a R b.
There is another operator that allows one process to follow another:
• P Θ_{a:A} Q behaves like P until an event in the set A occurs, at which point P is shut down and Q is started. This is the throw operator, and it is important for establishing clean expressivity results.
The final CSP construct is recursion: this can be single or mutual (including mutual recursions over infinite parameter spaces), can be defined by systems of equations or (in the case of single recursion) inline via the notation µ p.P, for a term P that may include the free process identifier p. Recursion can be interpreted operationally as having a τ-action corresponding to a single unwinding. Denotationally, we regard P as a function on the space of denotations, and interpret µ p.P as the least (or sometimes provably unique) fixed point of this function.
We also make use of the interleaving operator |||, which allows processes to perform actions independently and is equivalent to ∥_∅, and the process RUN_X, which always offers every element of the set X and is defined by RUN_X = ?x : X → RUN_X. This completes our list of operators other than priority. While others, for example △ (interrupt), are sometimes used, they are all expressible in terms of the above (see Chapter 9 of [21]).
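As a concrete illustration of the trace rule for sequential composition given above, the following Python sketch (our own toy encoding, not part of any CSP tool; the marker `TICK` stands for ✓) computes the traces of P ; Q from the trace sets of P and Q:

```python
# Sketch (ours): the trace rule for P ; Q on prefix-closed sets of tuples.
TICK = 'tick'  # stands for the termination signal

def seq_traces(tp, tq):
    """traces(P ; Q): the non-terminated traces of P, plus s^t for every
    terminated trace s^<tick> of P and every trace t of Q (tick is hidden)."""
    result = {s for s in tp if TICK not in s}
    result |= {s[:-1] + t for s in tp if s and s[-1] == TICK for t in tq}
    return result

# P = a -> SKIP and Q = b -> STOP
tp = {(), ('a',), ('a', TICK)}
tq = {(), ('b',)}
print(sorted(seq_traces(tp, tq)))  # [(), ('a',), ('a', 'b')]
```

Note that the ✓ of P itself never appears in a trace of P ; Q, matching the rule that it is turned into a τ.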

Priority
The priority operator is introduced and discussed in detail in Chapter 20 of [21] as well as [23]. It allows us to specify an ordering on the set of visible events Σ, and prevents lower-priority events from occurring whenever a higher-priority event or τ is available.
The operator described in [21] as implemented in FDR4 [5] is parametrised by three arguments: a process P, a partial order ≤ on the event set Σ, and a subset X ⊆ Σ of events that can occur when a τ is available. We require that all elements of X are maximal with respect to ≤, and additionally require that if a is any event incomparable to τ, then a is also maximal. Failing to respect these principles means that the operator might undermine some basic principles of CSP. Writing initials(P) ⊆ Σ ∪ {τ} for the set of events that P can immediately perform, and extending ≤ to a partial order on Σ ∪ {τ} by adding y ≤ τ for all y ∈ Σ \ X, we define the operational semantics of prioritise as follows: whenever P −x→ P′ and there is no y ∈ initials(P) with x < y, then prioritise(P, ≤, X) −x→ prioritise(P′, ≤, X).

prioritise makes enormous contributions to the expressive power of CSP as explained in [23], meaning that CSP+prioritise can be considered a universal language for a much wider class of operational semantics than the CSP-like class described in [22,21].
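The pruning effect of prioritise on a labelled transition system can be sketched as follows. This is our own toy encoding (dictionaries of transitions, `'tau'` for the internal action, an explicit set of (lower, higher) pairs for ≤), not FDR's implementation:

```python
# Sketch (ours): an action x of a state survives prioritisation iff no
# strictly higher action is simultaneously enabled, where tau counts as
# higher than every event outside X.
TAU = 'tau'

def prioritise(lts, order, X):
    """lts: state -> set of (event, next_state).
    order: set of (lo, hi) pairs over visible events (taken as given, no
    transitive closure is computed in this sketch)."""
    def dominated(x, enabled):
        for y in enabled:
            if (x, y) in order:
                return True
            if y == TAU and x != TAU and x not in X:
                return True
        return False
    return {s: {(e, t) for e, t in trans
                if not dominated(e, {f for f, _ in trans})}
            for s, trans in lts.items()}

# State 0 offers a, a' and a tau; with a' < a and a' outside X, only a and
# tau survive in the prioritised process.
lts = {0: {('a', 1), ("a'", 1), (TAU, 2)}, 1: set(), 2: set()}
pri = prioritise(lts, {("a'", 'a')}, {'a'})
print(pri[0] == {('a', 1), (TAU, 2)})  # True
```

This is exactly the gating used later to make primed "refusal" events available only in stable states.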
It should not therefore be surprising that prioritise is not compositional over denotational finite observation models other than the most precise model, as we will discuss below.So we think of it as an optional addition to CSP rather than an integral part of it; when we refer below to particular types of observation as giving rise to valid models for CSP, we will mean CSP without priority.
If we admit successful termination ✓, then it must have the same priority as τ.

Example: the stable failures model
We introduce our model shifting construction using the stable failures model: we will produce a context C such that for any processes P, Q, we have that Q refines P in the stable failures model if and only if C[Q] refines C[P] in the traces model.

The traces and failures models
The traces model T is familiar from both process algebra and automata theory, and represents a process by the set of (finite) strings of events it is able to accept. Thus each process is associated (for fixed alphabet Σ) to a subset of Σ*, the set of finite words over Σ (plus words of the form wˆ⟨✓⟩ if we allow SKIP and sequential composition). The stable failures model F also records sets X of events that the process is able to stably refuse after a trace s (that is, the process is able after trace s to be in a state where no τ events are possible, and where the set of initial events is disjoint from X). Thus a process is associated to a subset of Σ* × (P(Σ) ∪ {•}), where • represents the absence of a recorded refusal set. We would add the symbol ✓ to this set when including termination. Note that recording • does not imply that there is no refusal to observe, simply that we have not observed stability. The observation of the refusal ∅ implies that the process can be stable after the present trace, whereas observing • does not.
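Both representations can be computed directly from a small acyclic transition system. The following sketch (our own helper names; traces bounded in length so the enumeration is finite) shows the standard distinction: P = a → STOP and Q = (a → STOP) ⊓ STOP have the same traces but different stable failures:

```python
# Sketch (ours): traces and stable failures of a finite acyclic LTS,
# represented as state -> set of (event, next_state), with 'tau' internal.
from itertools import chain, combinations

TAU = 'tau'

def states_after(lts, s0, trace):
    """States reachable after performing `trace`, closing under taus."""
    def tau_closure(S):
        S, frontier = set(S), set(S)
        while frontier:
            nxt = {t for s in frontier for e, t in lts[s] if e == TAU} - S
            S |= nxt
            frontier = nxt
        return S
    cur = tau_closure({s0})
    for a in trace:
        cur = tau_closure({t for s in cur for e, t in lts[s] if e == a})
    return cur

def traces(lts, s0, sigma, maxlen=3):
    found, frontier = {()}, [()]
    while frontier:
        t = frontier.pop()
        if len(t) >= maxlen:
            continue
        for a in sigma:
            u = t + (a,)
            if u not in found and states_after(lts, s0, u):
                found.add(u)
                frontier.append(u)
    return found

def failures(lts, s0, sigma, maxlen=3):
    fs = set()
    for t in traces(lts, s0, sigma, maxlen):
        for s in states_after(lts, s0, t):
            if all(e != TAU for e, _ in lts[s]):      # a stable state
                rest = sorted(sigma - {e for e, _ in lts[s]})
                for r in chain.from_iterable(combinations(rest, n)
                                             for n in range(len(rest) + 1)):
                    fs.add((t, frozenset(r)))
    return fs

# P = a -> STOP;  Q = (a -> STOP) |~| STOP, the internal choice via two taus.
P = {0: {('a', 1)}, 1: set()}
Q = {0: {(TAU, 1), (TAU, 2)}, 1: {('a', 3)}, 2: set(), 3: set()}
sigma = {'a'}
print(traces(P, 0, sigma) == traces(Q, 0, sigma))        # True
print(((), frozenset({'a'})) in failures(Q, 0, sigma))   # True: Q may refuse a
print(((), frozenset({'a'})) in failures(P, 0, sigma))   # False
```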
In any model M, we say that Q M-refines P, and write P ⊑_M Q, if the set associated to Q is a subset of that corresponding to P.
Because ✓ can be seen, but happens automatically, we need to distinguish a process like SKIP, which must terminate, from one that can but may not, like STOP ⊓ SKIP. After all, if these are substituted for P in P ; Q we get processes equivalent to Q and STOP ⊓ Q. However the state that accepts ✓ can be thought of as being able to refuse the rest of the visible events Σ, since it can terminate all by itself.

Model shifting for the stable failures model
We first consider this without ✓. The construction is as follows:

Lemma 1 For each finite alphabet Σ there exists a context C (over an expanded alphabet) such that for any processes P and Q we have that P ⊑_F Q if and only if C[P] ⊑_T C[Q].

Proof
Step 1: We use priority to produce a process (over an expanded alphabet) that can communicate an event x′ if and only if the original process P is able to stably refuse x. This is done by expanding the alphabet Σ to Σ ∪ Σ′ (where Σ′ contains a corresponding primed event x′ for every event x ∈ Σ), and prioritising with respect to the partial order which prioritises each x over the corresponding x′ and makes τ incomparable to x and greater than x′.
We must also introduce an event stab to signify the observation of stability (i.e. no τ is possible in this state) without requiring any refusals to be possible. This is necessary in order to be able to record an empty refusal set. The priority order ≤₁ is then the above (i.e. x′ < x for all x ∈ Σ) extended by making stab less than only τ and independent of all x and x′.
We can now fire up these new events by forming prioritise(P ||| RUN_{Σ′ ∪ {stab}}, ≤₁, Σ). This process has a state ξ′ for each state ξ of P, where ξ′ has the same unprimed events (and corresponding transitions) as ξ. Furthermore ξ′ can communicate x′ just when ξ is stable and can refuse x, and stab just when ξ is stable.
Step 2: We now recall that the definition of the stable failures model only allows a refusal set to be recorded at the end of a trace, and is not interested in (so does not record) what happens after the refusal set.
We gain this effect by using a regulator process to prevent a primed event (or stab) from being followed by an unprimed event. Let Reg = (?x : Σ → Reg) ✷ (?y : Σ′ ∪ {stab} → RUN_{Σ′ ∪ {stab}}) and define C by synchronising the process of Step 1 with Reg on the whole expanded alphabet. A trace of C[P] consists of: firstly, a trace s of P; followed by, if P can after s be in a stable state, then for some such state σ₀ any string formed from the events that can be refused in σ₀, together with stab. The lemma clearly follows.
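The effect of the context can be mimicked denotationally: given the traces and stable failures of a process, the sketch below (our own rendering; decoration strings truncated at length 2 to keep the set finite) produces the trace set of C[P] described in the proof, and shows how a failures distinction becomes a trace distinction:

```python
# Sketch (ours): traces(C[P]) from a denotational stable-failures description.
from itertools import product

def shifted_traces(tr, fails, declen=2):
    """tr: prefix-closed set of tuples over Sigma.
    fails: set of (trace, frozenset-of-refused-events) pairs.
    Each failure (s, X) contributes s followed by any short string over the
    primed refusals {x' : x in X} and 'stab'."""
    out = set(tr)
    for s, X in fails:
        decs = sorted({x + "'" for x in X} | {'stab'})
        for n in range(1, declen + 1):
            for w in product(decs, repeat=n):
                out.add(s + w)
    return out

# P = a -> STOP and Q = (a -> STOP) |~| STOP are trace-equivalent, but Q has
# the extra failure ((), {a}); after shifting this becomes a trace difference.
tr = {(), ('a',)}
fP = {((), frozenset()), (('a',), frozenset()), (('a',), frozenset({'a'}))}
fQ = fP | {((), frozenset({'a'}))}
CP, CQ = shifted_traces(tr, fP), shifted_traces(tr, fQ)
print(("a'",) in CQ, ("a'",) in CP)  # True False
print(CP <= CQ)                      # True: mirrors F(P) being a subset of F(Q)
```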
It is clear that any such context must involve an operator that is not compositional over traces, for otherwise trace equivalence of P and Q would imply trace equivalence of C[P] and C[Q], so that P ⊑_T Q would be equivalent to P ⊑_F Q, and this is not true for general P and Q (consider for instance P = a → STOP, Q = (a → STOP) ⊓ STOP). It follows that only contexts which, like ours, involve priority or some operator with similar status can achieve this.
Adding ✓ to the model causes a few issues with the above. For one thing it creates a refusal (namely of everything except ✓) from what could be an unstable state, namely a state that can perform ✓ and perhaps also a τ. And secondly we need to find an effective way of making processes show their refusal of ✓, and their refusal of all events other than ✓, when respectively appropriate. One way of doing these things is to add to the state space so that termination goes through multiple stages. Create a new event term and consider P ; term → SKIP. This performs any behaviour of P except that all ✓s of P become τs and lead to term → SKIP. That of course is a stable state. If we now (treating term as a member of Σ) apply C as defined above, this will be able to perform term′ in any stable state that cannot terminate, and will perform every a′ event other than term′ every time it reaches the state term → SKIP. Thus if we define the context for termination by applying C to P ; term → SKIP, we get exactly the decorated traces we might have expected from the stable failures representation of P except that instead of having an event ✓′ we have term′.

Semantic models
In order to generalise this construction to arbitrary finite observational semantic models, we must give formal definitions not only of particular models but of the very notion of a finite observational model.

Finite observations
We consider only models arising from finite linear observations. Intuitively, we postulate that we are able to observe the process performing a finite number of visible actions, and that where the process was stable (unable to perform a τ) immediately before an action, we are able to observe the acceptance set of actions it was willing to perform. Note that there cannot be two separate stable states before visible event b without another visible event c between them, even though it is possible to have many visible events between stable states. Thus it makes no sense to record two separate refusals or acceptance sets between consecutive visible events. Similarly it does not make sense to record both an acceptance and a refusal, since observing an acceptance set means that recording a refusal conveys no extra information: if acceptance A is observed then no other is seen before the next visible event, and observable refusals are exactly those disjoint from A.
We are unable to finitely observe instability: the most we are able to record from an action in an unstable state is that we did not observe stability.Thus in any context where we can observe stability we can also fail to observe it by simply not looking.
We take models to be defined over finite alphabets Σ, and take an arbitrary linear ordering on each finite Σ to be alphabetical.
The most precise finite observational model is that considering all finite linear observations, and is denoted FL:

Definition 1 The set of finite linear observations over an alphabet Σ is the set of sequences ⟨A₀, a₁, A₁, . . . , aₙ, Aₙ⟩ with each aᵢ ∈ Σ and each Aᵢ ∈ P(Σ) ∪ {•}, where the aᵢ are interpreted as a sequence of communicated events, and the Aᵢ denote stable acceptance sets, or in the case of • failure to observe stability. Let the set of such observations corresponding to a process P be denoted FL_Σ(P). This needs to be extended to encompass final ✓s if we want to include termination.
(Sometimes we will drop the Σ and just write FL(P).) More formally, FL(P) can be defined inductively (where X ∪ • := • for any set X); see Section 11.1.1 of [21] for further details.
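For a finite acyclic transition system, FL(P) can be enumerated directly. The sketch below is our own encoding: before each event (and at the end) the observer records either •, or, if the current state is stable, its acceptance set:

```python
# Sketch (ours): enumerate FL(P) for a finite acyclic LTS,
# state -> set of (event, next_state), with 'tau' internal and '•' for
# failure to observe stability.
TAU = 'tau'
DOT = '•'

def fl(lts, s):
    """All finite linear observations from state s; '•' may always replace
    an acceptance, giving the downward closure for free."""
    obs = set()
    for e, t in lts[s]:
        if e == TAU:                 # observations survive internal moves
            obs |= fl(lts, t)
    stable = all(e != TAU for e, _ in lts[s])
    accs = [DOT] + ([frozenset(e for e, _ in lts[s])] if stable else [])
    for A in accs:
        obs.add((A,))
        for e, t in lts[s]:
            if e != TAU:
                obs |= {(A, e) + rest for rest in fl(lts, t)}
    return obs

# P = (a -> STOP) |~| STOP
P = {0: {(TAU, 1), (TAU, 2)}, 1: {('a', 3)}, 2: set(), 3: set()}
print((frozenset(),) in fl(P, 0))               # True: STOP branch is stable
print((frozenset({'a'}), 'a', DOT) in fl(P, 0)) # True
print((DOT, 'a', frozenset({'a'})) in fl(P, 0)) # False: after a, only {} offered
```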
Observe that FL has a natural partial order corresponding to extension (where αˆ⟨•⟩ˆβ and αˆ⟨A⟩ are both extended by αˆ⟨A⟩ˆβ for any set A and any α and β). Note that for any process P we have that FL(P) is downwards-closed with respect to this partial order.
The definition of priority over FL (accommodating final ✓s) is as follows. With ≤ extended to the whole of Σ ∪ {τ} by making all elements not in X less than τ, prioritise(P, ≤, X) consists of those observations ⟨Z₀, b₁, Z₁, . . . , bₙ, Zₙ⟩ for which there is some ⟨A₀, b₁, A₁, . . . , bₙ, Aₙ⟩ in FL(P) where for each i one of the following holds:
• bᵢ is maximal under ≤ and Aᵢ₋₁ = • (so there is no condition on Zᵢ₋₁ except that it exists).
• bᵢ is not maximal under ≤ and Aᵢ₋₁ ≠ • and Zᵢ₋₁ is not • and neither does Aᵢ₋₁ contain any c > bᵢ.
• and in each case Zᵢ is determined from Aᵢ and ≤.

This is not possible for the other studied finite behaviour models of CSP: the statement that it is for refusal testing RT in [21] is not true, though it is possible for some partial orders ≤, including those needed for maximal progress in timed modelling of the sort we will see later.

Finite observational models
We consider precisely the models which are derivable from the observations of F L, which are well-defined in the sense that they are compositional over CSP syntax (other than priority), and which respect extension of the alphabet Σ.
Definition 2 A finite observational pre-model M consists, for each (finite) alphabet Σ, of a set of observations obs_Σ(M), together with a relation M_Σ ⊆ FL_Σ × obs_Σ(M). The representation of a process P in M_Σ is denoted M_Σ(P), and is given by M_Σ(P) = {y ∈ obs_Σ(M) : ∃ x ∈ FL_Σ(P). (x, y) ∈ M_Σ}. For processes P and Q over alphabet Σ, if we have M_Σ(Q) ⊆ M_Σ(P) then we say Q M-refines P, and write P ⊑_M Q.
(As before we will sometimes drop the Σ).
Note that this definition is less general than if we had defined a pre-model to be any equivalence relation on P(FL_Σ). For example, the equivalence relating sets of the same cardinality has no corresponding pre-model. Definition 2 agrees with that sketched in [21].
Without loss of generality, M_Σ does not identify any elements of obs_Σ(M); that is, we have M_Σ⁻¹(x) = M_Σ⁻¹(y) only if x = y (otherwise quotient by this equivalence relation). Subject to this assumption, M_Σ induces a partial order on obs_Σ(M):

Definition 3 The partial order induced by M_Σ on obs_Σ(M) is given by: x ≤ y if and only if for all b ∈ M_Σ⁻¹(y) there exists a ∈ M_Σ⁻¹(x) with a ≤ b.

Observe that for any process P it follows from this definition that M(P) is downwards-closed with respect to this partial order (since FL(P) is downwards-closed).
Definition 4 A pre-model M is compositional if for all CSP operators ⊕, say of arity k, and for all processes P₁, . . . , Pₖ and Q₁, . . . , Qₖ such that M(Pᵢ) = M(Qᵢ) for all i, we have M(⊕(P₁, . . . , Pₖ)) = M(⊕(Q₁, . . . , Qₖ)).

This means that the operator defined on processes in obs(M) by taking the pushforward of ⊕ along M is well-defined: for any sets X₁, . . . , Xₖ ⊆ obs(M) which correspond to the images of CSP processes, take processes P₁, . . . , Pₖ such that Xᵢ = M(Pᵢ), and let the pushforward of ⊕ applied to X₁, . . . , Xₖ be M(⊕(P₁, . . . , Pₖ)). Definition 4 says that the result of this does not depend on the choice of the Pᵢ.

Note that it is not necessary to require the equivalent of Definition 4 for recursion in the definition of a model, because of the following lemma, which shows that least fixed point recursion is automatically well-defined (and formalises some arguments given in [21]):

Lemma 2 Let M be a compositional pre-model. Let C₁, C₂ be CSP contexts, such that for any process P we have M(C₁[P]) = M(C₂[P]). Let the least fixed points of C₁ and C₂ (viewed as functions on P(FL) under the subset order) be P₁ and P₂ respectively. Then M(P₁) = M(P₂).

Proof
Using the fact that CSP contexts induce Scott-continuous functions on P(FL) (see [8], Section 2.8.2), the Kleene fixed point theorem gives that Pⱼ is the union of the iterates Cⱼⁿ(⊥) for j = 1, 2. Any x ∈ M(P₁) is in the union taken up to some finite N, and since finite unions correspond to internal choice, and ⊥ to the process div, we have that the unions up to N of C₁ and C₂ agree under M by compositionality. Hence x ∈ M(P₂), so M(P₁) ⊆ M(P₂). Similarly M(P₂) ⊆ M(P₁).
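The Kleene iteration in this proof can be illustrated at the level of trace sets for the recursion µ p.(a → p). The function F below (our own rendering) is the trace semantics of the body, and ⊥ is the trace set of div; any given observation of the fixed point appears at some finite stage, which is what lets compositionality lift to least fixed points:

```python
# Sketch (ours): Kleene iteration for mu p.(a -> p) on trace sets.
def F(X):
    """Trace function of the body a -> p: the empty trace, plus a prefixed
    onto every trace of the argument."""
    return {()} | {('a',) + t for t in X}

X = {()}            # bottom: the traces of div
stages = [X]
for _ in range(5):  # iterate towards the least fixed point
    X = F(X)
    stages.append(X)

print(('a', 'a', 'a') in stages[3])  # True: reached at a finite stage
print(all(s <= X for s in stages))   # True: the iterates form a chain
```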
In this setting, we now describe the five main finite observational models coarser than FL: traces, stable failures, revivals, acceptances and refusal testing.

The traces model
The coarsest model measures only the traces of a process; that is, the sequences of events it is able to accept.This corresponds to the language of the process viewed as a nondeterministic finite automaton (NFA).
Definition 7 The traces model, T, is given by the relation trace_Σ, where trace_Σ relates the observation ⟨A₀, a₁, A₁, . . . , aₙ, Aₙ⟩ to the string a₁ . . . aₙ.
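Concretely, the trace relation just discards the acceptance information. A one-line sketch (our own tuple encoding, alternating acceptance-or-• with events):

```python
# Sketch (ours): project a finite linear observation onto its trace.
DOT = '•'

def trace_of(obs):
    """Map <A0, a1, A1, ..., an, An> to the string a1...an: the events sit
    at the odd positions of the tuple."""
    return tuple(obs[i] for i in range(1, len(obs), 2))

print(trace_of((DOT, 'a', frozenset({'b'}), 'b', DOT)))  # ('a', 'b')
```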

Failures
The traces model gives us information about what a process is allowed to do, but it in some sense tells us nothing about what it is required to do.In particular, the process STOP trace-refines any other process.
In order to specify liveness properties, we can incorporate some information about the events the process is allowed to refuse, beginning with the stable failures model. Intuitively, this captures traces s, together with the sets of events the process is allowed to stably refuse after s.

Definition 8
The stable failures model, F, is given by the relation fail_Σ, where fail_Σ relates the observation ⟨A₀, . . . , aₙ, Aₙ⟩ to all pairs (a₁ . . . aₙ, X), for all X ⊆ Σ \ Aₙ if Aₙ ≠ •, and for X = • otherwise.
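The fail relation can be sketched as a function from a finite linear observation to its set of failures; the helper below (our own) enumerates the subsets of the complement of the final acceptance set, or records • when stability was not observed at the end:

```python
# Sketch (ours): the fail relation applied to a single observation.
from itertools import chain, combinations

DOT = '•'

def failures_of(obs, sigma):
    """All (trace, refusal) pairs related to obs: refusals are the subsets
    of sigma disjoint from the final acceptance, or '•' if it was '•'."""
    trace = tuple(obs[i] for i in range(1, len(obs), 2))
    A = obs[-1]
    if A == DOT:
        return {(trace, DOT)}
    rest = sorted(sigma - A)
    subs = chain.from_iterable(combinations(rest, n)
                               for n in range(len(rest) + 1))
    return {(trace, frozenset(X)) for X in subs}

sigma = {'a', 'b'}
print(failures_of((DOT, 'a', frozenset({'b'})), sigma) ==
      {(('a',), frozenset()), (('a',), frozenset({'a'}))})  # True
```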

Revivals
The next coarsest model, first introduced in [20], is the revivals model. Intuitively this captures traces s, together with sets X that can be stably refused after s, and events a (if any) that can then be accepted.

Definition 9
The revivals model, R, is given by the relation rev_Σ, where rev_Σ relates a finite linear observation to all triples consisting of: its initial trace; a stable refusal that could have been observed, or • if the original observation did not observe stability; and optionally a single further event that can then be accepted.

Acceptances
All the models considered up to now refer only to sets of refusals, which in particular are closed under subsets.The next model, acceptances (also known as 'ready sets'), refines the previous three and also considers the precise sets of events that can be stably accepted at the ends of traces.
Definition 10 The acceptances model, A, is given by the relation acc_Σ, where acc_Σ relates the observation ⟨A₀, a₁, . . . , aₙ, Aₙ⟩ to the pair (a₁ . . . aₙ, Aₙ).
It is convenient to note here that, just as we were able to use a′ as a cipher for the refusal of a when model shifting, we can introduce a second one a′′ as a cipher for stable acceptance of a: it is performed (without changing the state) just when a′ is stably refused. We will apply this idea and discuss it further below.

Refusal testing
The final model we consider is that of refusal testing, first introduced in [16]. This refines F and R by considering an entire history of events and stable refusal sets. It is incomparable to A, because it does not capture precise acceptance sets.

Definition 11
The refusal testing model, RT, is given by the relation rt_Σ, where rt_Σ relates the observation ⟨A₀, . . . , aₙ, Aₙ⟩ to ⟨X₀, . . . , aₙ, Xₙ⟩, for all Xᵢ with Xᵢ ⊆ Σ \ Aᵢ if Aᵢ ≠ •, and Xᵢ = • otherwise.

The correct way to handle ✓, if needed, in any of these models is to add it to the respective transformation in exactly the same way we did for stable failures. This is to be expected because ✓ only ever happens at the end of traces. Clearly we will need to use term′′ as a cipher for ✓′′ in appropriate cases.

Rational models
We will later on wish to consider only models M for which the correspondence between FL-observations and M-observations is decidable by a finite memory computer. We will interpret this notion as saying that the relation M_Σ corresponds to the language accepted by some finite state automaton. In order to do this, we must first decide how to convert elements of FL_Σ to words in a language. We do this in the obvious way (the reasons for using fresh variables to represent the Aᵢ will become apparent in Section 5).

Definition 12
The canonical encoding of FL_Σ is over the alphabet Ξ := Σ ∪ Σ′′ ∪ Sym, where Σ′′ := {a′′ : a ∈ Σ} and Sym = {⟨, ⟩, ',', •}. It is given by the representation in Definition 1, where sets Aᵢ are expressed by listing the elements of Σ′′ corresponding to the members of Aᵢ in alphabetical order. We denote this encoding by φ_Σ : FL_Σ → Ξ*.
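The encoding φ_Σ can be sketched as follows; our rendering keeps only the • symbol of Sym, spelling acceptance sets as alphabetically ordered double-primed events (the exact punctuation does not matter for the automaton constructions):

```python
# Sketch (ours): the canonical word encoding of a finite linear observation.
DOT = '•'

def phi(obs):
    """Events are kept as they are; each acceptance set is spelled out as
    its members, double-primed, in alphabetical order; '•' is one symbol."""
    word = []
    for i, item in enumerate(obs):
        if i % 2 == 1:       # odd positions hold events
            word.append(item)
        elif item == DOT:
            word.append(DOT)
        else:                # an acceptance set, listed alphabetically
            word.extend(sorted(a + "''" for a in item))
    return tuple(word)

print(phi((frozenset({'b', 'a'}), 'a', DOT)))  # ("a''", "b''", 'a', '•')
```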
We now define a model to be rational (borrowing a term from automata theory) if its defining relation can be recognised (when suitably encoded) by some nondeterministic finite automaton.
Definition 13 A model M is rational if for every alphabet Σ, there is some finite alphabet Θ and a map ψ_Σ : obs_Σ(M) → Θ*, such that there is a (nondeterministic) finite automaton A recognising {(φ_Σ(x), ψ_Σ(y)) : (x, y) ∈ M_Σ}, and such that ψ_Σ is order-reflecting (that is, ψ_Σ(x) ≤ ψ_Σ(y) only if x ≤ y) with respect to the prefix partial order on Θ* and the partial order induced by M_Σ on obs_Σ(M).
What does it mean for an automaton to 'recognise' a relation? Note that recognisability in the sense of Definition 14 is easily shown to be equivalent to the common notion of recognisability by a finite state transducer given for instance in [29], but the above definition is more convenient for our purposes. Note also that FL itself (viewing FL_Σ as the diagonal relation) is trivially rational.

Lemma 3
The models T , F , R, A and RT are rational.

Proof
By inspection of Definitions 7-11. We take Θ = Σ ∪ Σ′ ∪ Σ′′ ∪ Sym, with Σ′′ and the expression of acceptance sets as in the canonical encoding of FL, and refusal sets expressed in the corresponding way over Σ′ := {a′ : a ∈ Σ}.
Note that not all relations are rational. For instance, the 'counting relation' mapping each finite linear observation to its length is clearly not rational. We do not know whether the additional constraint of being a finite observational model necessarily implies rationality; however, no irrational models are known. We therefore tentatively conjecture that every finite observational model is rational.

Model shifting
We now come to the main substance of this paper: we prove results on 'model shifting', showing that there exist contexts allowing us to pass between different semantic models and the basic traces model.The main result is Theorem 1, which shows that this is possible for any rational model.

Model shifting for F L
We begin by proving the result for the finest model, FL. We show that there exists a context C_FL such that for any process P, the finite linear observations of P correspond to the traces of C_FL[P].
Lemma 4 (Model shifting for FL) For every alphabet Σ, there exists a context C_FL over alphabet T := Σ ∪ Σ′ ∪ Σ′′ ∪ {done}, and an order-reflecting map π : FL_Σ → T* (with respect to the extension partial order on FL_Σ and the prefix partial order on T*) such that for any process P over Σ we have T(C_FL[P]) = pref(π(FL(P))) (where pref(X) is the prefix-closure of the set X).

Proof
We will use the unprimed alphabet Σ to denote communicated events from the original trace, and the double-primed alphabet Σ′′ to denote (members of) stable acceptances. Σ′ will be used in an intermediate step to denote refusals, and done will be used to distinguish ∅ (representing an empty acceptance set) from • (representing a failure to observe anything).
Step 1: We first produce a process which is able to communicate events x′ just when the original process can stably refuse the corresponding x. Define the partial order ≤₁ = {x′ <₁ x : x ∈ Σ}, which prevents refusal events when the corresponding event can occur.
Let the context C₁ be given by C₁[P] = prioritise(P ||| RUN_{Σ′}, ≤₁, Σ). Note that the third argument prevents primed events from occurring in unstable states.
Step 2: We now similarly introduce acceptance events, which can happen in stable states when the corresponding refusal cannot. The crucial difference between a and a″ is that a in general changes the underlying process state, whereas a″ leaves it alone: a″ means that P can perform a from its present stable state, but does not explore what happens when it does.
Similarly define the partial order ≤₂ = {x″ <₂ x′ : x ∈ Σ}, which prevents acceptance events when the corresponding refusal is possible. Let the context C₂ be defined as follows.
Step 3: We now ensure that an acceptance set inferred from a trace is a complete set accepted by the process under examination. This is most straightforwardly done by employing a regulator process, which can either accept an unprimed event, or accept the alphabetically first refusal or acceptance event followed by a refusal or acceptance for each subsequent event in turn. In the latter case it then communicates a done event and returns to its original state. It has thus recorded the complete set of events accepted by P's present state.
The done event is necessary in order to distinguish between a terminal ∅, which can have a done after the last event, and a terminal •, which cannot (observe that ∅ cannot occur other than at the end). Along the way, we hide the refusal events.
Let a and z denote the alphabetically first and last events respectively (by which we mean with respect to a fixed but arbitrary linear order on Σ), and let succ x denote the alphabetical successor of x. Define the processes as follows. A little care is required here. We can prevent acceptances from being 'skipped over' by prioritising the double-primed events in alphabetical order, but we also have to prevent acceptances from ending early, i.e. prevent an unprimed event from happening prematurely.
The most obvious solution is to prioritise acceptance events over unprimed events. This does not work, however, because the prioritise operator forces all events that can be performed in unstable states to be maximal in the order, and in the LTS representing the underlying process any event can happen as an alternative to τ.
We instead use the event done to mark the end of an acceptance set, and use a regulator process to prevent a double-primed event from being followed by an unprimed event without an intervening done. This also distinguishes between ∅, represented by a done between two unprimed events, and •, represented by consecutive unprimed events.
Define the partial order which prevents jumps in acceptance sets and allows done only in stable states where no double-primed events are left to be communicated, and let the context C₃ be defined accordingly. We now define the regulator process DREG, which prevents sequences of double-primed events not concluded by done, and then define the context C_FL by synchronising with DREG on Σ ∪ Σ″ ∪ {done}.
Step 4: We now complete the proof by defining the function π inductively as follows, where without loss of generality the x_i are listed in alphabetical order.
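The shape of the encoding can be illustrated concretely. The following Python sketch (our own illustration, not CSP; the data representation of FL observations is an assumption made for the example) shows how an acceptance set becomes a run of double-primed events closed by done, while • contributes nothing:

```python
# Illustrative Python sketch (not CSP) of the map pi from Lemma 4.
# An FL observation alternates acceptance information and events:
#     [A0, a1, A1, a2, ..., an, An]
# where each Ai is either a set of events (a stable acceptance) or
# None, standing for the null observation written as a bullet above.
BULLET = None

def pi(obs, done="done"):
    """Encode an FL observation as a trace: an acceptance {x1,...,xk}
    (listed alphabetically) becomes x1'' ... xk'' done, a bullet
    contributes nothing, and ordinary events pass through unchanged."""
    trace = []
    for i, item in enumerate(obs):
        if i % 2 == 0:                       # acceptance slot
            if item is not BULLET:
                trace += [x + "''" for x in sorted(item)]
                trace.append(done)           # marks the set as complete
        else:                                # event slot
            trace.append(item)
    return tuple(trace)

# <bullet, a, {a,b}>: a happened from an unstable state, then we
# observed the stable acceptance {a, b}:
assert pi([BULLET, "a", {"a", "b"}]) == ("a", "a''", "b''", "done")
# done distinguishes a terminal empty acceptance from a terminal bullet:
assert pi([BULLET, "a", set()]) == ("a", "done")
assert pi([BULLET, "a", BULLET]) == ("a",)
```

Note how extending an observation only ever appends to the encoded trace, which is the intuition behind π being order-reflecting.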
It is clear that this is order-reflecting, and by the construction above it satisfies the required equation. This result allows us to translate questions of FL-refinement into questions of trace refinement under C_FL, as follows.
Corollary 1 For C_FL as in Lemma 4, and for any processes P and Q, we have P ⊑_FL Q if and only if C_FL[P] ⊑_T C_FL[Q].
If FL(Q) ⊆ FL(P), then pref(π(FL(Q))) ⊆ pref(π(FL(P))) immediately. Conversely, suppose there exists x ∈ FL(Q) \ FL(P). Then since FL(P) is downwards-closed, we have x ≰ y for all y ∈ FL(P). Since π is order-reflecting, we have correspondingly π(x) ≰ π(y) for all y ∈ FL(P). Hence π(x) ∉ pref(π(FL(P))), so pref(π(FL(Q))) ⊈ pref(π(FL(P))).

Model shifting for rational observational models
We now have essentially all we need to prove the main theorem. We formally record the well-known fact that any Nondeterministic Finite Automaton (NFA) can be implemented as a CSP process (up to prefix-closure, since trace-sets are prefix-closed but regular languages need not be): Lemma 5 (Implementation for NFA) Let A = (Σ, Q, δ, q₀, F) be a (nondeterministic) finite automaton. Then there exists a CSP process P_A such that pref(L(A)) = pref(T(P_A)).
See Chapter 7 of [18] for the proof.
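The gap between regular languages and prefix-closed trace sets can be seen concretely. The following Python sketch is our own illustration (not the CSP construction of [18]): it computes the bounded language of a small NFA and its prefix-closure:

```python
from itertools import product

def nfa_step(delta, states, a):
    """One step of the subset construction, for an NFA given as a dict
    mapping (state, event) to a set of successor states."""
    return {q2 for q in states for q2 in delta.get((q, a), ())}

def language(delta, q0, F, sigma, n):
    """All words of length <= n accepted by the NFA."""
    out = set()
    for k in range(n + 1):
        for w in product(sigma, repeat=k):
            states = {q0}
            for a in w:
                states = nfa_step(delta, states, a)
            if states & F:
                out.add(w)
    return out

def pref(words):
    """Prefix-closure of a set of words."""
    return {w[:i] for w in words for i in range(len(w) + 1)}

# NFA for the regular language a*b over {a, b}:
delta = {(0, "a"): {0}, (0, "b"): {1}}
L = language(delta, 0, {1}, "ab", 3)
# L itself is not prefix-closed -- ("a",) is a prefix but not a member --
# so a CSP process P_A can only match A up to prefix-closure:
assert ("a",) not in L and ("a",) in pref(L)
assert ("a", "b") in L
```

In the CSP implementation, one recursive process per NFA state (offering each event for which a transition exists) yields exactly pref(L(A)) as its trace set.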
Theorem 1 (Model shifting for rational models) For every rational model M, there exists a context C_M such that for any process P we have T(C_M[P]) = pref(ψ(M(P))).

Proof
Let A be the automaton recognising (φ × ψ)(M) (as in Definition 13), and let P_A be the corresponding process from Lemma 5. We first apply Lemma 4 to produce a process whose traces correspond to the finite linear observations of the original process, prefixed with left: let C_FL be the context from Lemma 4, and let the context C₁ be defined as follows. We now compose in parallel with P_A, to produce a process whose traces correspond to the M-observations of the original process. Let C₂ be defined as follows. Then the traces of C₂[X] are precisely the prefixes of the images under ψ of the observations corresponding to X, as required.

By the same argument as for Corollary 1, we have
Corollary 2 For any rational model M, let C_M be as in Theorem 1. Then for any processes P and Q, we have P ⊑_M Q if and only if C_M[P] ⊑_T C_M[Q].

Implementation
We demonstrate the technique by implementing contexts with the property of Corollary 2; source code may be found at [1].
For the sake of efficiency we work directly, rather than using the general construction of Theorem 1. The context C₁ introduces refusal events and a stab event, which can occur only when the corresponding normal events can be refused. This implements the refusal testing model, and the context C_F, which allows only normal events optionally followed by some refusals (and stab), implements the stable failures model. This is however suboptimal over large alphabets, in the typical situation where most events are refused most of the time. FDR4's inbuilt failures refinement checking encodes refusals in terms of minimal acceptance sets (checking that each such acceptance of the specification is a superset of one of the implementation's). Minimal acceptances are typically smaller than maximal refusal sets.
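The point about minimal acceptances being a more compact encoding than refusal sets over large alphabets can be seen in a few lines of Python (our own sketch, not FDR4's actual data structures):

```python
def minimal_acceptances(accs):
    """Discard any acceptance set that has a proper subset present."""
    return {A for A in accs if not any(B < A for B in accs)}

sigma = {f"e{i}" for i in range(100)}
# A state that stably offers just e0, just e1, or both together:
accs = {frozenset({"e0"}), frozenset({"e1"}), frozenset({"e0", "e1"})}
mins = minimal_acceptances(accs)
# Two singleton minimal acceptances suffice, whereas the corresponding
# maximal refusal sets each contain 99 of the 100 events:
assert mins == {frozenset({"e0"}), frozenset({"e1"})}
assert len(sigma - {"e0"}) == 99
```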
For models based on acceptance sets rather than refusal sets, we have to consider the whole collection of them rather than just the minimal ones. We can introduce a second extra copy of Σ, namely Σ″, where a″ will mean that the current stable state of P can accept a, but the communication of a″ will not change the state of P. We can do this essentially by applying the previous construction (creating a′) to itself, using an order ≤″ under which a″ <″ a′ on the process C(P). Here Reg″ is a process that initially will accept a Σ event, or a′ or a″ for the alphabetically first member a of Σ. In either of the latter cases it will insist on getting each subsequent member of Σ in one of these two forms until it has pieced together the complete acceptance set. Thus as soon as the present state of P is recognised as stable, Reg″ establishes its complete acceptance set before permitting P to carry on further if desired. (For the acceptances model, with only acceptances at the ends of traces, there is no need to do so.) As an alternative, Reg″ could communicate an event such as done when it gets to the end of the list of events, which would enable us to hide the refusal events Σ′.
Similar constructions with slightly different restrictions on the permissible sequences of events produce efficient processes for the revivals and refusal testing models.We will generalise this below.

Testing
We test this implementation by constructing processes which are first distinguished by the stable failures, revivals, refusal testing and acceptances models respectively (the latter two being also distinguished by the finite linear observations model). The processes, and the models which do and do not distinguish them, are shown in Table 1 (recall the precision hierarchy of models: T ≤ F ≤ R ≤ {A, RT} ≤ FL). The correct results are obtained when these checks are run in FDR4 with the implementation described above.
Table 1: Tests distinguishing levels of the model precision hierarchy. △ is the interrupt operator; see [21] for details.

Performance
We assess the performance of our simulation by running those examples from Table 1 of [7] which involve refinement checks (as opposed to deadlock- or divergence-freedom assertions), and comparing the timings for our construction against the time taken by FDR4's inbuilt failures refinement check (since F is the only model for which we have a point of comparison between a direct implementation and the methods developed in this paper). Results are shown in Table 2, for both the original and revised contexts described above; the performance of the FL check is also shown. As may be seen, performance is somewhat worse but not catastrophically so. Note however that these processes involve rather small alphabets; performance is expected to be worse for larger alphabets.

Example: Conflict detection
We now illustrate the usefulness of richer semantic models than just traces and stable failures by giving a sample application of the revivals model. Suppose that we have a process P consisting of the parallel composition of two subprocesses Q and R. The stable failures model is able to detect when P can refuse all the events of their shared alphabet, or deadlock in the case when they are synchronised on the whole alphabet. However, it is unable to distinguish between the two possible causes of this: it may be that one of the arguments is able to refuse the entire shared alphabet, or it may be that each accepts some events from the shared alphabet but the acceptances of Q and R are disjoint. We refer to the latter situation as a 'conflict'. The absence of conflict (and similar situations) is at the core of a number of useful ways of proving deadlock-freedom for networks of processes running in parallel [24].
The revivals model can be used to detect conflicts. For a process P = Q [X‖Y] R, we introduce a fresh event a to represent a generic event from the shared alphabet, and form the process Q′ over the alphabet X′ = X ∪ {a}, and similarly for R′ and Y′. Conflicts of P now correspond to revivals (s, X ∩ Y, a), where s is a trace not containing a.
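The conflict condition itself is simple set arithmetic over the stable acceptance sets of the two components; the following Python sketch (our own illustration, not the CSP encoding) tests it directly:

```python
def is_conflict(acc_q, acc_r, shared):
    """A conflict: each side offers something from the shared alphabet,
    but the two offers have no event in common, so neither refuses the
    shared alphabet outright yet no shared event can be agreed."""
    offer_q, offer_r = acc_q & shared, acc_r & shared
    return bool(offer_q) and bool(offer_r) and not (offer_q & offer_r)

shared = {"a", "b"}
assert is_conflict({"a"}, {"b"}, shared)            # disjoint offers
assert not is_conflict(set(), {"b"}, shared)        # Q refuses all of shared
assert not is_conflict({"a"}, {"a", "b"}, shared)   # agreement possible
```

The stable failures model sees only that the composition refuses the shared alphabet in both of the first two cases; distinguishing them needs the extra precision of revivals.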
Timed Failures and Timed CSP
Timed CSP is a notation which adds a WAIT t construct to CSP and reinterprets how processes behave in a timed context: not only does it constrain the order in which things happen, but also when they happen. Introduced in [26], it has been widely used and studied [28,27,4]. WAIT t behaves like SKIP except that termination takes place exactly t time units after it starts. Timed CSP introduced and uses the vital principle of maximal progress, namely that no action that is not waiting for some other party's agreement is delayed: such actions do not sit waiting while time passes. That principle fundamentally changes the nature of its semantic models.
Consider how the hiding operator is defined. It is perfectly legitimate to have a process P that offers the initial visible events a and b for an indefinite length of time, say P = a → P1 ✷ b → P2. However P \ {a} cannot perform the initial b at any time other than the very beginning, because the a has become a τ. So P \ X only uses those behaviours of P which refuse X whenever time is passing.
Timed CSP was originally described on the basis of continuous (non-negative real) time values. The basic unit of semantic discourse is a timed failure: the coupling of a timed trace (a sequence of events with non-strictly increasing times) and a timed refusal, which is the union of a suitably finitary collection of products of a half-open time interval [t1, t2) (containing t1 but not t2) and a set of events. Thus the refusal set changes only finitely often in a finite time, coinciding with the fact that a process can only perform finitely many actions in this time. This continuous model of time takes it well outside the finitary world that model checking finds comfortable. However, it has long been known that restricting the t in WAIT t statements to integers makes it susceptible to a much more finitary analysis by region graphs [9]. The latter represents a technique remote from the core algorithms of FDR, so it has never been implemented for CSP, though it has for other notations [10]. In [15,14], Joel Ouaknine made the following important discoveries:
• It makes sense to interpret Timed CSP with integer WAIT over the positive integers as time domain.
• The technique of digitisation (effectively a uniform mapping of general times to integers) provides a natural mapping between these two representations.
• Properties that are closed under inverse digitisation can be decided over continuous Timed CSP by analysis over Discrete Timed CSP, and these include many practically important specifications.
• It is in principle possible to interpret Discrete Timed CSP in a modified (by the addition of two new operators) tock-CSP (a dialect developed by Roscoe in the early 1990s for reasoning about timed systems in FDR), and therefore in principle it is possible to reason about continuous Timed CSP in FDR. The definition of Timed CSP hiding over LTSs involves prioritising τ and ✓ over tock.
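One common formulation of digitisation rounds each time stamp with respect to a parameter ε (conventions vary over whether the comparison is ≤ or <; the sketch below, in Python rather than CSP, follows the ≤ form and is our own illustration):

```python
import math

def digitise(t, eps):
    """[t]_eps: round t down when its fractional part is at most eps,
    and up otherwise.  Varying eps over [0, 1] gives the family of
    digitisations used in the closure conditions."""
    frac = t - math.floor(t)
    return math.floor(t) if frac <= eps else math.ceil(t)

# Digitising a timed trace applies the map pointwise to time stamps,
# preserving their (non-strict) ordering:
timed_trace = [(0.2, "a"), (1.7, "b"), (2.0, "c")]
assert [(digitise(t, 0.5), e) for t, e in timed_trace] == \
       [(0, "a"), (2, "b"), (2, "c")]
# eps = 0 rounds every time with a nonzero fractional part upwards:
assert digitise(1.1, 0) == 2
```

A property is closed under inverse digitisation when membership of every digitisation of a behaviour implies membership of the behaviour itself, which is what licenses deciding it over the integer time domain.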
This was implemented as described in [2], originally in the context of the last versions of FDR2, and Timed CSP continues to be supported in FDR4. There is an important thing missing from these implementations, however, namely refinement checking in the Timed Failures model, the details of which we describe below. That means that although it is possible to check properties of complete Timed CSP systems, there is no satisfactory compositional theory for (Discrete) Timed CSP. For example, one cannot automate the reasoning that if C[P, Q] (a term in Timed CSP) satisfies SPEC, and P ⊑ P′ and Q ⊑ Q′, then C[P′, Q′] satisfies SPEC, because FDR does not give us a means of checking the necessary refinements.
The purpose of this section is to show how Timed Failures refinement can be reduced to things FDR can do, filling this hole. Given the methods described in this paper to date, it is natural to try model shifting, and we do this below. There is another option, offered to us by late versions of FDR2, namely reduction to the Refusal Testing model, which is implemented in those but not (at the time of writing) later versions of FDR. We will discuss these in turn.

A summary of Discrete Timed Failures
The Discrete Timed Failures model D consists, in one presentation, of sequences of the form (s_0, X_0, tock, s_1, X_1, tock, ..., s_{n−1}, X_{n−1}, tock, s_n, X_n) where each s_i is a member of Σ*, each X_i is a subset of Σ, and tock ∉ Σ. Since tock never happens from an unstable state, there is no need to allow the possibility of • before tock, as discussed above for other models, and it would be misleading to do so. We do however allow • for X_n.
What this means, of course, is that the trace s_0 occurs, after which the process reaches a stable state where tock occurs, and this is repeated for the other s_i and X_i until, after the last tock, the trace s_n is performed followed by the refusal X_n (not including tock) or potentially instability. Recall that we apply the principle of maximal progress, so that tock only happens from a stable state: this means that if, after behaviour ...s_n, stability is not observable, then tock can never happen and we have reached an error state. It is, however, convenient to have this type of error state in our model, because badly constructed systems can behave like this.
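As a concrete data sketch (Python, our own illustration rather than CSP), a discrete timed failure can be held as a list of (trace, refusal) pairs, one per time unit, and flattened into the sequence form used above:

```python
BULLET = None   # the unobserved final refusal

def flatten(dtf):
    """Turn [(s0, X0), ..., (sn, Xn)] into the observation
    (s0, X0, tock, s1, X1, tock, ..., sn, Xn).  Only the final
    refusal may be a bullet, since tock requires stability."""
    out = []
    for i, (s, X) in enumerate(dtf):
        assert X is not BULLET or i == len(dtf) - 1
        out.extend(s)
        out.append(frozenset(X) if X is not BULLET else BULLET)
        if i < len(dtf) - 1:
            out.append("tock")
    return out

# One time unit in which a happens then {b} is refused, then a second
# unit in which b happens and stability is never observed:
obs = flatten([(("a",), {"b"}), (("b",), BULLET)])
assert obs == ["a", frozenset({"b"}), "tock", "b", BULLET]
```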
Like other CSP models, it has healthiness conditions: in other words, properties that the representation of any real process must satisfy. These are analogous to those of related untimed models, such as prefix closure, subset closure on refusal sets, and the certain refusal of impossible events. A property that it inherits from continuous Timed CSP is no instantaneous withdrawal, meaning that if, following behaviour β, it is impossible for a process to refuse a leading up to the next tock, then the process must still have the possibility of performing a after β⟨tock⟩. This amounts to the statement that the passage of time as represented by tock is not directly visible to the processes concerned, and is much discussed in the continuous context in [17,25].
D is a rational model, since it can be obtained from the standard representation of RT by the rational transduction which deletes all refusal sets preceding events other than tock (and replaces non-terminal occurrences of • by ∅, since tock can only occur in stable states). Hence by Theorem 1 it can be model shifted: there exists a context C_D such that trace refinement under C_D is equivalent to refinement in D.
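The transduction itself is easy to express over a list representation of RT observations that alternates refusal information and events (a Python sketch, our own illustration):

```python
BULLET = None   # refusal information not observed

def rt_to_d(rt):
    """rt alternates refusal slots and events: [X0, a1, X1, ..., an, Xn].
    Drop every refusal that precedes an event other than tock, keep
    refusals before tock (turning a non-terminal bullet into the empty
    set, as tock needs stability), and keep the final slot verbatim."""
    out = []
    for i, item in enumerate(rt):
        if i % 2 == 1:                 # an event: always kept
            out.append(item)
        elif i == len(rt) - 1:         # the final refusal slot
            out.append(item)
        elif rt[i + 1] == "tock":      # refusal observed as time passes
            out.append(frozenset() if item is BULLET else frozenset(item))
    return out

# Refusals before a and b are discarded; the one before tock survives:
rt = [BULLET, "a", {"b"}, "b", {"a", "b"}, "tock", BULLET]
assert rt_to_d(rt) == ["a", "b", frozenset({"a", "b"}), "tock", BULLET]
```

Since this relation between RT observations and D observations is computed by a finite-state device reading the two sequences together, it is rational in the sense of Definition 14.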
The operational semantics of Discrete Timed CSP processes, under the transformation described and implemented in [2], have the property that tock is available in every stable state and in no unstable state.

Model shifting Timed Failures
We can capture this through model shifting by introducing a primed copy a′ of each a ∈ Σ and using the following construct, involving a regulator which ensures that an ordinary event cannot follow a refusal flag. This means that its traces consist of pairs of traces over Σ and traces over Σ′ (either of which can be empty), interspersed with a tock between consecutive pairs.
and a′ < a (as well as the implicit a′ < τ) for each a ∈ Σ.
We have assumed here that any prioritisation needed to ensure maximal progress has already been applied before this, so that the LTS being operated on here has the correct behaviour under a normal interpretation.
Note that this regulator allows only refusal events and tock after refusal events a′, thus forcing the decorated traces (namely combinations of real events and the a′ ones signifying refusals) to follow exactly the structure set out for timed failures above. Thus, aside from the exact structure of the model, we have followed the same procedure as that used for the stable failures model above.
Note the following.
• Events in Σ cannot follow refusal (primed) events; only other primed events or tock can.
• There would have been no harm in using the stab event seen for stable failures (perhaps most elegantly so that it can only happen as the last event), but for Timed CSP processes this would make no difference to the equivalence or refinement relations induced. This is because stab would be possible after a trace if and only if tock is.
• Timed Failures refinement between two Timed CSP processes is decided by traces refinement between the decorated and regulated transformed processes as defined above. Thus it does not matter (when using them for this purpose) that the regulator adds in further refusals not possible for the original process (namely, after any a′ event, the regulated process refuses the whole of Σ).

Reducing Timed Failures to refusal testing
In effect the Timed Failures model is the refusal testing model with all refusal sets that precede a non-tock event ignored. This seems difficult to arrange, not least because when observing P refusing something, we cannot stop it performing any action other than tock without affecting the refusal itself. We can again solve this by use of a regulator process, which allows any Σ event to happen at any time from an unstable state and carry on, or from a stable state allows any Σ event leading to the divergent process DIV, or tock after which the regulator just carries on.
REGP is thus a three-state process: in the initial state all the Σ events can happen, as can a τ taking it to the second, stable, state. The third state is DIV. Note how unstable states, and the fact that DIV is refinement-maximal in the model, are crucial in making this construction work. As before, this regulator is synchronised with P to perform the transformation.
So we have defined a projection Π_TF(P) = P ∥ REGP, with REGP synchronised with P as above. This and MS_TF(P) are thus faithful representations of the timed failures semantics of P in two different models. They can be used for comparisons under refinement in these models.
Because Π T F simply records behaviours with refusals before members of Σ from all processes, we notice that in general

Case study: Timed Sliding Window Protocol
The sliding window protocol has long been used as a case study with FDR: it is well known and reasonably easy to understand, at least in an untimed setting. It is a development of the alternating bit protocol in which the messages in a fixed-length window on the input stream are simultaneously available for transmission and acknowledgement across an erroneous medium which, in our version, can lose and duplicate messages but not re-order them. We have re-interpreted this in Timed CSP with the following features:
• There is a parameter W which defines the width of the window. Because the windows held by the sender and receiver processes may be out of step, we need to define B = 2W to be the bound on the amount of buffering the system can provide.
• In common with other CSP codings of this protocol, we need to make the indexing space of places in the input and output streams finite by replacing the non-negative integers by integers modulo some N which must be at least 2W (though there is no requirement that B and N are the same). This is sufficient to ensure that acknowledgement tags never get confused as referring to the wrong message.
• Round robin sending of message components from unacknowledged items in the current window: this clearly has a bearing on the timing behaviour of the transmission and acknowledgements that the system exhibits.
• The occurrence of errors is limited by a parameter which forces them to be spaced: at least K time units must pass between consecutive ones.
To achieve this elegantly we have used the controlled error model [18] in which errors are triggered by events that can be restricted by external regulators, and then lazily abstracted.It turns out that lazy abstraction (originally proposed in [18]) needs reformulating in Timed CSP.We will detail this below.
Clearly it would be possible to use different error assumptions.
• We have assumed for simplicity that all ordinary actions take one time unit to complete.
• Where a message is duplicated, we need to assume that the duplicate is available reasonably quickly, say within 2 time units of the original send. If it can be deferred indefinitely this causes subtle errors, in the sense that deferred duplication can prevent the system from settling sufficiently.
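The modular window arithmetic behind the N ≥ 2W requirement can be sketched in Python (the parameter names follow the text; the helper function is our own):

```python
W = 4            # window width
B = 2 * W        # bound on the system's buffering
N = 2 * W        # sequence tags are taken modulo N, with N >= 2W

def in_window(tag, base, width=W, modulus=N):
    """Is sequence tag `tag` inside the window [base, base + width)
    when tags are taken modulo `modulus`?"""
    return (tag - base) % modulus < width

assert in_window(3, 0)      # inside the sender's initial window [0, 4)
assert not in_window(4, 0)  # just beyond it
assert in_window(0, 5)      # wrap-around: 0 = 8 mod 8 lies in [5, 9)
# Because the sender's and receiver's windows are never more than W
# apart, N >= 2W ensures a tag in one window cannot be mistaken for
# the same tag belonging to an earlier message.
```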
We can create a Timed Failures specification in CSP which says, following established models for regular CSP, that the resulting system is a buffer bounded by B (so it never contains more than B items), but is only obliged to input when it has nothing in it. Whenever it is nonempty it is obliged to output, but these two obligations do not kick in before some parameter D time units from the previous external communication.
This is slightly trickier than we might think, because of the way in which the implementation process can entirely legitimately change its behaviour over time. So in an interval where it can legitimately accept or refuse an input left.1, at one point it can refuse to communicate it, while later accepting it after time has passed.
In hand-coded tock-CSP this can be expressed as follows. This says that if we have not yet reached the point where offers must be made (i.e. k < n) then it can perform permitted actions but can also (expressed via [> or sliding choice) refuse them and wait for time to pass.
In Timed CSP a completely equivalent specification can be divided into three separate parts: one to control the buffer behaviour, one to handle what the specification says about when offers must be made as opposed to may be made, and a final one to control nondeterminism by creating the most nondeterministic timed process on a given alphabet. The last of these is notably trickier than in the untimed world, because where a process has the choice, over a period, to accept or refuse an event b, it is not sufficient for it to make the choice once and for all. So we have

  TCHAOS(A) = let onestep = ([] x:A @ x -> onestep) [> WAIT(1)
              within onestep; TCHAOS(A)

In Timed CSP lazy abstraction needs to be formulated with this revised Chaos definition:

  LAbs(A)(P) = (P [|A|] TCHAOS(A)) \ A

noting that the passage of time (tock) is implicitly synchronised here as well as A, and priority of τ over tock will also apply.
In the main part of the buffer specification we do not create this style of nondeterminism, but instead use two variants of the externally visible events: one that will be made nondeterministic by the above, and one that will not. The above always allows the nondeterministic variants of the events, and allows the "deterministic" ones when they should be offered, if sufficient time has passed since the last visible event. Thus left is only offered deterministically when the buffer is empty, no matter how long since the last event.
The choice over whether the available offers must be made, implemented by allowing the deterministic versions of events, is made by a further regulator process. The model-shifted check visited 41,779,778 states and 107,648,549 transitions in 81.64 seconds (on ply 261). The following are the statistics from the same check simplified to a no-model-shifting traces check, which does not find the problem, and so passes.
Visited 15,413,107 states and 36,428,632 transitions in 19.71 seconds (on ply 186). The smaller state count here is probably mainly because the normalised specification in this final case is significantly smaller, as the count-down to forcing an offer is irrelevant to traces.
It is noteworthy that the relative overhead of model shifting is here less than reported earlier for the untimed case. We expect this is because the unshifted checks in the timed case already involve (timed) prioritisation before model shifting is applied.
The experiments in this section were performed on a MacBook with a 2.7GHz Intel Core i7 processor.

Conclusions
We have seen how the expressive power of CSP, particularly when extended by priority, allows seemingly any finite-behaviour model of CSP to be reduced to traces. Indeed, this extends to any finitely expressed rules for what can be observed within finite linear behaviours, whether the resulting equivalence is compositional or not.
This considerably extends the range of what can be done with a tool like FDR. The final section shows an alternative approach, namely reducing a less discerning model to a more discerning one without priority. This worked well for reducing timed failures to refusal testing, but other reductions (for example ones involving both acceptances and refusal sets) do not always seem to be so efficient. For example, reducing a refusal-sets process to the acceptances model seems unnecessarily complex since, for instance, the process CHAOS needs exponentially many acceptance sets where a single maximal refusal suffices.
We discovered that it is entirely practical to use this technique to reason about large systems.Furthermore the authors have found that the debugging feedback that FDR gives to model shifting checks is very understandable and usable.
In particular the authors were pleased to find that the results of this paper make automated reasoning about Timed CSP practical.They have already found it most informative about the expressive power of the notation.It seems possible that, as with untimed CSP, the availability of automated refinement checking will bring about enrichments in the notations of Timed CSP that help it in expressing practical systems and specifications.
Model shifting means that it is far easier to experiment with automated verification in a variety of semantic models, so it will only very occasionally be necessary for a new one to be directly supported.
We believe that similar considerations will apply to classes of models that include infinite observations such as divergences and infinite traces, where these can be extended to incorporate refusals and acceptances as part of such observations. In such cases we imagine that model shifting will take care of the aspects of infinite behaviours that are present in their finite prefixes, and that the ways that infinitary aspects are handled will follow one of the three traces models available in CSP. These are:
• finite traces (used in the present paper);
• divergence-strict finite and infinite traces, where as soon as an observation is made that can be followed by immediate divergence, we deem all continuations to be in the process model whether or not the process itself can perform them operationally; and finally
• the same with full divergence strictness replaced by the weak divergence strictness discussed in [19] (here an infinite behaviour with infinitely many divergent prefixes is added as above).
Thus it should be possible to handle virtually the entire hierarchy of models described in [21] in terms of variants on traces and model shifting.This will be the subject of future research.
The renaming operator P[[R]] applies a renaming relation R ⊆ Σ × Σ to P: if (a, b) ∈ R and P can perform a, then P[[R]] can perform b. The domain of R must include all visible events used by P. Renaming by the relation {(a, b)} is denoted [[a/b]].

Definition 14
For alphabets Σ and T, a relation R ⊆ Σ* × T* is recognised by an automaton A just when: (i) the event-set of A is left.Σ ∪ right.T, and (ii) for any s ∈ Σ*, t ∈ T*, we have s R t if and only if there is some interleaving of left.s and right.t accepted by A.
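Condition (ii) can be checked by a direct search over interleavings. The Python sketch below is our own illustration, using a toy recogniser (standing in for the automaton A) that accepts exactly the identity relation:

```python
def accepts_identity(word):
    """Toy recogniser: accepts exactly interleavings of the shape
    left.x right.x left.y right.y ..., i.e. the identity relation."""
    if len(word) % 2:
        return False
    return all(word[k][0] == "left" and word[k + 1] == ("right", word[k][1])
               for k in range(0, len(word), 2))

def recognises(accepts, s, t):
    """Does some interleaving of left.s and right.t satisfy `accepts`?
    We search all ways of merging the two tagged sequences in order."""
    def go(i, j, word):
        if i == len(s) and j == len(t):
            return accepts(word)
        if i < len(s) and go(i + 1, j, word + (("left", s[i]),)):
            return True
        if j < len(t) and go(i, j + 1, word + (("right", t[j]),)):
            return True
        return False
    return go(0, 0, ())

assert recognises(accepts_identity, "ab", "ab")
assert not recognises(accepts_identity, "ab", "ba")
```

A genuinely rational relation would replace `accepts` by membership in the regular language of a finite automaton over left.Σ ∪ right.T, as in the definition.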

Table 2: Experimental results comparing the performance of our construction with FDR4's inbuilt failures refinement check. |S| is the number of states, |∆| is the number of transitions, and T is the time (in seconds); all state and transition counts are in millions.
It follows that if we can create a context C[•] in CSP such that C[P] contains precisely the behaviours of P that should not be forgotten, then we can state that C[P] ⊑_RT C[Q] if and only if P ⊑_TF Q.