A formal verification technique for behavioural model-to-model transformations

In Model-Driven Software Engineering, models and model transformations are the primary artifacts when developing a software system. In such a workflow, model transformations are used to incrementally transform initial abstract models into concrete models containing all relevant system details. Over the years, various formal methods have been proposed and further developed to determine the functional correctness of models of concurrent systems. However, the formal verification of model transformations has so far not received as much attention. In this article, we propose a formal verification technique to determine that formalisations of such transformations in the form of rule systems are guaranteed to preserve functional properties, regardless of the models they are applied on. This work extends our earlier work in various ways. Compared to our earlier approaches, the current technique involves only up to n individual checks, with n the number of rules in the rule system, whereas previously, up to 2^n − 1 checks were required. Furthermore, a full correctness proof for the technique is presented, based on a formal proof conducted with the Coq proof assistant. Finally, we report on two sets of conducted experiments. In the first set, we compared traditional model checking with transformation verification, and in the second set, we compared the verification technique presented in this article with its previous version.


Introduction
It is a well-known fact that concurrent systems are very hard to develop correctly. In order to support the development process, over the years, a whole range of formal methods have been constructed to determine the functional correctness of system models [BH04]. Over time, these techniques have greatly improved, but the analysis of complex models is still time-consuming, and often beyond what is currently possible.
To get a stronger grip on the development process, model-driven development has been proposed [KWB05]. In this approach, models are constructed iteratively, by defining model transformations that can be viewed as functions applicable on models: they are applied on models, producing new models. Using such transformations, an abstract initial model can be gradually transformed into a very detailed model describing all aspects of the system. If one can determine that the transformations are correct, then it is guaranteed that a correct initial model will be transformed into a correct final model.
Correspondence and offprint requests to: S. Putter and A. Wijs. E-mails: s.m.j.d.putter@tue.nl and a.j.wijs@tue.nl

Most model transformation verification techniques are focussed on determining that a given transformation applied on a given model produces a correct new model, but in order to show that a transformation is correct in general, one would have to determine this for all possible input models. There are some techniques that can do this [ACL+15, RW13], but it is often far from trivial to show that these are correct.
This work is an extension of [PW16], where we formally proved the correctness of a formal transformation verification technique proposed in [WE13, Wij13] and implemented in the tool Refiner [WE14]. It is applicable on models with a semantics that can be captured by Labelled Transition Systems (LTSs). Transformations are formally defined as LTS transformation rules. Correctness of transformations is interpreted as the preservation of properties. Given a property ϕ written in a fragment of the μ-calculus [MW14] and a system of transformation rules, Refiner checks whether the rule system preserves ϕ for all possible inputs. This is done by first hiding all behaviour irrelevant for ϕ [MW14] and then checking whether the rules replace parts of the input LTSs by new parts that are branching bisimilar to the old ones. Branching bisimilarity preserves safety properties and a subset of liveness properties involving inevitable reachability [vGW96].
Figure 1 provides an overview of the transformation verification workflow in Refiner. Given as input is a rule system consisting of n LTS transformation rules, where each rule r_i consists of a left pattern L_i, describing which component behaviour is subject to transformation, and a right pattern R_i, defining the behaviour produced by a transformation of the corresponding left pattern behaviour. If such a rule system were to be applied on an input model, Refiner would identify the possible matches of the left patterns of the rules on the behaviour of the components in the model, and subsequently apply transformation on those matches, thereby replacing the existing behaviour with copies of the corresponding right patterns.
In order to verify that a rule system will preserve a property ϕ for any model it is applied on, Refiner combines the left patterns on the one hand, and the right patterns on the other hand, and produces the state spaces of both these combinations, as these can be interpreted as models themselves. In practice, Refiner actually checks whether patterns are dependent on each other, in the sense that their behaviour needs to synchronise at some point, and groups the rules together into sets of dependent rules. In this example, there is only one such group.
Next, property-based hiding [MW14] is performed, given a property ϕ to check. Finally, the resulting abstract state spaces are compared using a branching bisimulation checking algorithm. Only if the combinations of both the left and the right patterns satisfy ϕ will the outcome of this check be positive. When no property is considered, the technique checks for full semantics preservation, i.e., it does not apply property-based hiding. This is useful, for instance, when refactoring models.
The technique has been successfully applied to reason very efficiently about model transformations; speedups of five orders of magnitude have been measured w.r.t. traditional model checking of the models produced by a transformation [Wij13]. However, as the technique is theoretically very involved, its absolute correctness, i.e., whether it returns true iff a given rule system is property preserving for all possible input models, has been an open question since it was constructed. In [PW16], we first addressed the correctness of the transformation verification technique. After finding and fixing two issues, the verification technique was proven correct.
Contributions. This work is an extension of [PW16], in which the formal correctness of the transformation verification technique from [Wij13] is addressed. In the current article, first of all, we have extended the expressiveness of transformation rules by distinguishing between glue-states that allow incoming transitions entering the LTS patterns and glue-states that allow outgoing transitions leaving them. By doing so, the verification technique is able to handle more cases.
Second of all, we present a new proof that shows that the required number of bisimulation checks when verifying an LTS transformation rule system can be reduced from 2^n − 1 per set of dependent transformation rules (where n is the upper bound of the number of rules in such a set) to only one per set of dependent rules. This proof is presented in greater detail than the one given previously [PW16]. The proof is based on a formal proof conducted with the Coq proof assistant, version 8.6 (December 2016). The Coq formalisation is available online.

Structure of the article. Related work is discussed in Sect. 2. Section 3 presents the notions for and analysis of the application of a rule system consisting of only a single transformation rule. A correctness proof is presented. This section can be viewed as a first step towards discussing the complete technique, applicable on rule systems consisting of multiple rules. Next, in Sect. 4, the complete technique is presented; the discussion is continued by considering networks of concurrent process LTSs, and systems of transformation rules. Again, we give a proof of correctness.
After that, we present experimental results in Sect. 5, by which we demonstrate the effectiveness of the analysis technique, compared to the more traditional approach of model checking the models again once they have been altered by a model transformation. Finally, Sect. 6 contains our conclusions and pointers to future work.

Related work
Papers on incremental model checking (IMC) propose how to reuse model checking results of safety properties for a given input model after it has been altered [SS94, Swa96]. We also consider verifying models that are subject to changes. However, we focus on analysing transformation specifications, i.e., the changes themselves, allowing us to determine whether a change always preserves correctness, independent of the input model. Furthermore, our technique can also check the preservation of (a subset of) liveness properties.
In the context of dynamic graph algorithms [EGI97], reachability is an unbounded problem [RR96, SS94], i.e., it cannot be determined solely based on the changes. Thanks to our criteria, this is not an issue in our context.
In [Sah07], an incremental algorithm is presented for updating bisimulation relations based on changes applied on a graph. Their goal is to efficiently maintain a bisimulation, whereas our goal is to assess whether bisimulations are guaranteed to remain after a transformation has been applied, without considering the whole relation. As is the case for the IMC techniques, this algorithm works only for a given input graph, while we aim to prove correctness of the transformation specification itself, regardless of the input.
In refinement checking [AL91, KLG07], supported by tools such as Rodin [ABH+10], FDR3 and Csp-Casl-Prover [KR08], it is usually checked that one model refines another. This is very similar to our approach, but refinements are defined in terms of what the new model will be, as opposed to how the new model can be obtained from the old one, i.e., model transformations are not represented as artifacts independent of the models they can be applied on. This makes the technique not directly suitable for verifying definitions of model transformations, as opposed to the models they produce.
The Bart tool allows automatically refining B components to B0 implementations. Similar to our setting, it treats refinement rules as user-definable artifacts and performs pattern matching to do the refining. Constraints are checked to ensure that the resulting system will be correct. Other work related to B, e.g., [Lan96], is on strictly refining existing functionalities. Approaches described in, e.g., [BGL05, CCGT09, GL12, HKR+10] prove that a transformation preserves the semantics of any input model, by showing that the transformed model will be strongly or weakly bisimilar to the original. Contrary to our work, in all these approaches, no cases can be handled where transformations alter the semantics in a way that does not invalidate the functional property of interest. Furthermore, by using branching bisimilarity as opposed to strong or weak bisimilarity, our technique also supports a subset of liveness properties.
Several techniques, such as those described in [KN07, NK08, VP03], perform individual checks for each concrete model. As such, the transformation itself is not verified, but verification is done each time the transformation is applied in a concrete situation. Our technique verifies the transformation definition once, after which the verification result is relevant for each application of that transformation.
Monotonically adding functionality, as opposed to refining, is addressed in, e.g., [BE04]. The focus is on updating property formulae; it could be interesting to see if this is applicable in our setting to update properties.
In some works, e.g., [SMR11, GGL+06], theorem proving is used to verify the preservation of behavioural semantics. The use of theorem provers requires expert knowledge and high effort [SMR11]. In contrast, our equivalence checking approach is more lightweight, automated, and allows the construction of counterexamples, which help developers identify issues with the transformations.
In [BCE+07], transformation rules for Open Nets are verified on the preservation of dynamic semantics. Open Nets are a reactive extension of Petri Nets. The technique is comparable to our technique with two main exceptions. First, they consider weak bisimilarity for the comparison of rule patterns, which preserves a strictly smaller fragment of the μ-calculus than branching bisimilarity [MW14]. Second, their technique does not allow transforming the communication interfaces between components. Our approach allows this, and checks whether the components remain 'compatible'.
Finally, in [SLC+14], transformations expressed in the DSLTrans language are checked for correspondence between source and target models. DSLTrans uses a symbolic model checker to verify properties that can be derived from the meta-models. The state space captures the evolution of the input model. In contrast, our approach considers the state spaces of combinations of transformation rules, which represent the potential behaviour described by those rules. An interesting pointer for future work is whether those two approaches can be combined.

Verifying single LTS transformations
This section introduces the main concepts related to the transformation of Labelled Transition Systems (LTSs), and explains how a single transformation rule can be analysed to guarantee that it preserves the branching structure of all LTSs it can be applied on.

LTS transformation and LTS equivalence
We use LTSs as in Definition 3.1 to reason about the potential behaviour of processes.
Action labels in A_G are denoted by a, b, c, etc. In addition, there is the special action label τ to represent internal, or hidden, system steps. A transition (s, a, s′) ∈ T_G, or s −a→_G s′ for short, denotes that LTS G can move from state s to state s′ by performing the a-action. For the reflexive transitive closure of −a→_G, we use −a→*_G. Note that transitions are uniquely identifiable by the combination of their source state, action label, and target state. This property is sometimes called the extensionality of LTSs [Win90]. This does not limit the applicability of our technique, as a system that is not extensional can be rewritten into an extensional one by introducing separate target states for each transition with an equivalent label. Finally, we only consider LTSs that are weakly connected, meaning that the undirected version of an LTS is a single connected component (from each state, there is a path to each other state). This implies that each state in an LTS is reachable from at least one initial state. It would be irrelevant to consider states that are unreachable, since a system could never end up in such a state, starting from an initial state.
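To make these notions concrete, the following is a minimal Python sketch, entirely our own illustration (not code from the article or from Refiner). Encoding the transition relation as a set of (source, label, target) triples makes extensionality automatic, and weak connectedness can be checked on the undirected version of the transition relation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LTS:
    """An LTS (S, A, T, I): states, action labels, transitions, initial states.
    Transitions are triples (s, a, s'); storing them in a (frozen)set makes
    extensionality automatic: each triple occurs at most once."""
    states: frozenset
    labels: frozenset
    transitions: frozenset
    initial: frozenset

def is_weakly_connected(g: LTS) -> bool:
    """Check that the undirected version of g is a single connected component."""
    if not g.states:
        return True
    adj = {s: set() for s in g.states}
    for (s, _, t) in g.transitions:
        adj[s].add(t)
        adj[t].add(s)
    seen, stack = set(), [next(iter(g.states))]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(adj[s] - seen)
    return seen == g.states
```

A sequence 1 −a→ 2 −a→ 3 is weakly connected; adding an isolated state breaks the check.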
We allow LTSs to be transformed by means of formally defined transformation rules. Transformation rules are defined as follows.

Definition 3.2 (Transformation rule)
A transformation rule r = (L, R) consists of two pattern LTSs: a left pattern L and a right pattern R. The two pattern LTSs are annotated with a (possibly empty) set of exit-states. The states in S_L ∩ S_R are called the glue-states. The initial (glue-)states of a pattern LTS, also called the in-states, represent the states at which the pattern may be entered. The exit-states are glue-states that represent the states from which the pattern may be left. It is possible for a glue-state to be both an in-state and an exit-state. Figure 2 shows an example of a transformation rule r = (L, R) transforming a sequence of two a-transitions into a τ-transition followed by two a′-transitions. The initial states, i.e., the in-states of L and R, are indicated by an incoming arrow. The exit-states are represented by a square. Furthermore, all glue-states (i.e., the in- and exit-states) are coloured grey.
These LTS patterns expect only incoming transitions at state 1, as this is an in-state. At exit-state 3, only outgoing transitions are expected. Our previous formalisation [PW16] cannot express these subtleties, as it did not distinguish between in-states and exit-states. In Sect. 3.2, we show that, due to the in-states and exit-states, the formalisation presented in this article is able to tell that, when the a′-transitions are relabelled to a-transitions, this transformation rule is correct, while the previous formalisation cannot.
When applying a transformation rule to an LTS, the changes are applied relative to the glue-states. To reason about the application of a transformation rule, we first define the notion of an LTS morphism.
An LTS morphism f : G_0 → G_1 consists of a function f_S : S_{G_0} → S_{G_1} between states and a function f_T : T_{G_0} → T_{G_1} between transitions which preserve source states, target states, and transition labels, i.e., for all (s, a, s′) ∈ T_{G_0}, we have f_T((s, a, s′)) = (f_S(s), a, f_S(s′)). It should be noted that for extensional LTSs, there is never a need to explicitly indicate how transitions are mapped by an LTS morphism f. In extensional LTSs, no two transitions have the same source state, label, and target state. This ensures that given a function f_S : S_{G_0} → S_{G_1}, an LTS morphism f is implied by it, since no two transitions in G_0 can be mapped to the same transition in G_1, i.e., a function f_T : T_{G_0} → T_{G_1} is implied. Because of that, with slight abuse of notation, we directly reason about LTS morphisms f as mappings between LTS states in the remainder of this article.
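The observation that a state mapping induces the transition mapping for extensional LTSs can be illustrated with a small hypothetical helper (our own, not part of the article's formalisation): given a state mapping f_S as a dictionary, the implied transition mapping f_T is derived, and None is returned if f_S is not a morphism.

```python
def lift_morphism(f_s, trans0, trans1):
    """Derive the transition mapping implied by a state mapping f_s.
    Possible for extensional LTSs, where no two transitions share the same
    (source, label, target) triple. Returns None if f_s is not a morphism,
    i.e., if some transition's image is missing from trans1."""
    f_t = {}
    for (s, a, t) in trans0:
        image = (f_s[s], a, f_s[t])
        if image not in trans1:
            return None
        f_t[(s, a, t)] = image
    return f_t
```
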
A match is a behaviour preserving morphism of a pattern LTS P in an LTS G, defined via a category of LTSs [Win90]. The first match condition expresses that an initial state may only be matched on by exit-states. This is a reasonable assumption, as all reachable behaviour starts at initial states. A consequence of the condition is that initial states may not be removed by a transformation.
The remaining two conditions make sure that a match may not cause removal of transitions that are not explicitly present in P. The first of these ensures that if a match of the pattern LTS is entered at some state s ∈ S_G, then the state matching on s must be an in-state. Similarly, the second states that if a match of the pattern LTS is left at some state s ∈ S_G, then the state matching on s must be an exit-state.
An LTS G is transformed to an LTS T(G) according to Definition 3.5. For clarity, we refer with p, p′, … to states in a left pattern LTS, with q, q′, … to states in a right pattern LTS, with s, s′, … to states in an input LTS, and with t, t′, … to states in an output LTS.

Definition 3.5 (LTS transformation)
Let G = (S_G, A_G, T_G, I_G) be an LTS and let r = (L, R) be a transformation rule with match m : L → G. Moreover, consider match m̂ : R → T(G), with ∀q ∈ S_L ∩ S_R . m̂(q) = m(q) and ∀q ∈ S_R \ S_L . m̂(q) ∉ S_G, by which m̂ defines the new states being introduced by the transformation. The transformation of LTS G, via rule r with matches m, m̂, is defined as follows: the part of G matched on by m is removed, except for the glue-states, and replaced by a copy of R via m̂.

Given a match, an LTS transformation thus replaces states and transitions matched by L by a copy of R, yielding LTS T(G). An application of a transformation rule is shown in Fig. 3. Again, the initial states are indicated by an incoming arrow. In the middle of Fig. 3, the transformation rule r = (L, R) is shown (presented earlier in Fig. 2), which is applied on LTS G, resulting in LTS T(G). The states are numbered such that matches can be identified by the state label, i.e., a state ĩ is matched onto state i. Note that such a match satisfies the conditions of Definition 3.4: state 1̃ is not an exit-state, but state 1 does not have unmatched outgoing transitions; state 3̃ is not an in-state, but there are no unmatched incoming transitions to state 3; and finally, state 3 has unmatched outgoing transitions, but this is allowed, since 3̃ is an exit-state.
On the other hand, states 1̃, 2̃ and 3̃ of L do not match on states 2, 3, and 4, respectively, as this would violate condition 2 of Definition 3.4. Namely, transition 3 −b→_G 5 is unmatched, since state 5 is unmatched, but state 2̃ is not an exit-state.
Since, in general, L may have several matches on G, we assume that transformations are confluent, i.e., that they are guaranteed to terminate and lead to a unique T(G). Confluence of LTS transformations can be checked efficiently [Wij15]. By assuming confluence, we can focus on having a single match when verifying transformation rules, since the transformations of individual matches do not influence each other.
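Under this single-match assumption, the application of a rule (Definition 3.5) can be sketched as follows. This is our own illustration: the function name, the dictionary encoding of the match, and the "new_" prefix for fresh states are hypothetical choices, not notation from the article.

```python
def apply_rule(g_states, g_trans, l_states, l_trans, r_states, r_trans,
               glue, match):
    """Apply rule r = (L, R) to G at a single match (sketch of Definition 3.5).
    'match' maps L-states to G-states; glue-states keep their image, while
    R-only states are given fresh names not occurring in G."""
    # m_hat agrees with match on the glue-states ...
    m_hat = {q: match[q] for q in glue}
    # ... and introduces a fresh state for every R-only state
    for q in r_states - glue:
        m_hat[q] = "new_" + q
    # everything matched on by L, except the glue-states, is removed
    removed_states = {match[p] for p in l_states - glue}
    removed_trans = {(match[s], a, match[t]) for (s, a, t) in l_trans}
    states = (g_states - removed_states) | set(m_hat.values())
    trans = ((g_trans - removed_trans)
             | {(m_hat[s], a, m_hat[t]) for (s, a, t) in r_trans})
    return states, trans
```

Applying the rule of Fig. 2 to an input LTS consisting of exactly its left pattern replaces the middle state by fresh copies of the R-only states, while the glue-states survive.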
To compare LTSs, we use the branching bisimulation equivalence relation [vGW96], as presented in Definition 3.6. Branching bisimulation supports abstraction from actions and is sensitive to internal actions and the branching structure of an LTS. We require abstraction from actions for the verification of abstraction and refinement transformations, such that input and output models can be compared on the same abstraction level.

Definition 3.6 (Branching bisimulation) A binary relation B between two LTSs G_1 and G_2 is a branching bisimulation iff s B t implies:
1. if s −a→ s′, then either a = τ and s′ B t, or there are states t̂, t′ such that t −τ→* t̂ −a→ t′ with s B t̂ and s′ B t′;
2. symmetrically, if t −a→ t′, then either a = τ and s B t′, or there are states ŝ, s′ such that s −τ→* ŝ −a→ s′ with ŝ B t and s′ B t′.
Two states s, t are branching bisimilar, denoted s ↔b t, iff there is a branching bisimulation B such that s B t. Two sets of states S_1 and S_2 are called branching bisimilar, denoted S_1 ↔b S_2, iff for every s_1 ∈ S_1 there is an s_2 ∈ S_2 with s_1 ↔b s_2, and vice versa. We say that two LTSs G_1 and G_2 are branching bisimilar, denoted G_1 ↔b G_2, iff I_{G_1} ↔b I_{G_2}.
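Definition 3.6 can be turned into a naive decision procedure by computing a greatest fixpoint: start from the full relation over the states and repeatedly remove pairs that violate the transfer condition. The following sketch is our own illustration; it is far less efficient than the O(t · log(s + a)) algorithm used in practice, but it follows the definition directly.

```python
TAU = "tau"

def branching_bisimilar(states, trans, s, t):
    """Naive greatest-fixpoint check of s <->b t (sketch of Definition 3.6)."""
    def tau_reach(x):
        # states reachable from x via tau-steps only (reflexive-transitive)
        seen, stack = {x}, [x]
        while stack:
            y = stack.pop()
            for (p, a, q) in trans:
                if p == y and a == TAU and q not in seen:
                    seen.add(q)
                    stack.append(q)
        return seen

    reach = {x: tau_reach(x) for x in states}
    out = {x: [(a, q) for (p, a, q) in trans if p == x] for x in states}
    R = {(u, v) for u in states for v in states}

    def ok(u, v):
        # every step of u must be matched by v, modulo inert tau-steps
        for (a, u2) in out[u]:
            if a == TAU and (u2, v) in R:
                continue  # inert tau-step: u2 still related to v
            if any((u, vh) in R and (u2, v2) in R
                   for vh in reach[v] for (b, v2) in out[vh] if b == a):
                continue  # matched after a tau-path to vh
            return False
        return True

    changed = True
    while changed:
        changed = False
        for pair in sorted(R):
            if pair in R and not ok(*pair):
                # bisimulations are symmetric, so remove both orientations
                R.discard(pair)
                R.discard((pair[1], pair[0]))
                changed = True
    return (s, t) in R
```

For example, an inert τ-step before an a-transition is branching bisimilar to the a-transition alone, while a τ-step that discards an alternative b-branch is not.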

Analysing a transformation rule
The basis of the transformation verification procedure is to check whether the two patterns making up a transformation rule are equivalent, while respecting that these patterns represent embeddings in larger systems. We want to be able to verify the transformation's side effects on both the matched states and the states connected to these matched states. To make this explicit, we extend the left and right patterns of a transformation rule r = (L, R) according to Definition 3.7. The resulting so-called κ-extended transformation rule is defined as r_κ = (L_κ, R_κ), and is specifically used for the purpose of analysing r; it does not replace r.
In the κ-extended version of a pattern LTS P, a new state named κ is introduced, which is connected to the original states by new transitions labelled σ_p for p ∈ I_P, and ε_p for p ∈ E_P. Furthermore, for all p ∈ E_P and p′ ∈ I_P, a γ_{p,p′}-transition from p to p′ is introduced. The set of initial states of the κ-extended LTS consists of the states in {κ} ∪ E_P. The in-states p ∈ I_P do not need to be added to the set of initial states, as they are always reachable via a σ-transition.

Definition 3.7 (κ-extension of a pattern LTS)
The pattern LTS P extended with a κ-state, and σ-, ε-, and γ-transitions is defined as:

P_κ = (S_P ∪ {κ}, A_P ∪ {σ_p | p ∈ I_P} ∪ {ε_p | p ∈ E_P} ∪ {γ_{p,p′} | p ∈ E_P, p′ ∈ I_P}, T_P ∪ {(κ, σ_p, p) | p ∈ I_P} ∪ {(p, ε_p, κ) | p ∈ E_P} ∪ {(p, γ_{p,p′}, p′) | p ∈ E_P, p′ ∈ I_P}, {κ} ∪ E_P)

with E_{P_κ} = E_P, and where the σ_p, ε_p and γ_{p,p′} are unique labels that are not the silent τ-label.
The κ-extension of an LTS pattern P can be seen as an abstraction of the LTSs it is matched on, in which we indicate how the behaviour described by P can be embedded in a larger LTS G. The introduced κ-state represents the unmatched (and thus unaffected) states in G. The σ-transitions go from the κ-state to the in-states and represent transitions that enter the part of G matched on by P. The ε-transitions go from exit-states to the κ-state. They represent transitions in G that leave the part matched on by P. The γ-transitions go from exit-states to in-states, representing transitions connecting states that are matched on by P, while the transition itself is not matched on. The σ-, ε-, and γ-transitions are uniquely identified by their corresponding glue-states. This ensures that side effects on unmatched states become visible.
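The construction of Definition 3.7 can be sketched as follows. The label-naming scheme ("sigma_1", "eps_3", "gamma_3_1") is our own illustrative choice; any scheme works, as long as the labels are uniquely determined by the corresponding glue-states and distinct from τ.

```python
KAPPA = "kappa"

def kappa_extend(states, trans, in_states, exit_states):
    """Construct the kappa-extension of a pattern LTS (sketch of Def. 3.7)."""
    new_trans = set(trans)
    for p in in_states:                       # sigma: kappa -> in-state
        new_trans.add((KAPPA, "sigma_" + p, p))
    for p in exit_states:                     # epsilon: exit-state -> kappa
        new_trans.add((p, "eps_" + p, KAPPA))
    for p in exit_states:                     # gamma: exit-state -> in-state
        for q in in_states:
            new_trans.add((p, "gamma_" + p + "_" + q, q))
    # initial states: the kappa-state plus the exit-states;
    # in-states are reachable via sigma-transitions
    initial = {KAPPA} | set(exit_states)
    return set(states) | {KAPPA}, new_trans, initial
```

For the left pattern of Fig. 2 (in-state 1, exit-state 3), this adds κ −σ_1→ 1, 3 −ε_3→ κ and 3 −γ_{3,1}→ 1.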
If the original L and R are branching bisimilar, then one cannot in general conclude that input and output LTSs on which the rule r = (L, R) is applicable are branching bisimilar as well. For instance, consider the transformation rule in Fig. 4, which swaps a- and b-transitions. Without the κ-extensions, the LTS patterns are branching bisimilar. However, this would not capture the fact that patterns should be interpreted as possible embeddings in larger LTSs. These larger LTSs may not be branching bisimilar, because glue-states 2 and 3 could be mapped to states with different outgoing transitions, apart from the behaviour described in the LTS patterns (states 2 and 3 are exit-states). However, due to the introduced κ-state and in particular the ε-transitions, a comparison of the κ-extended patterns is able to determine that the rule does not guarantee branching bisimilarity between input and output LTSs.

Figure 5 shows that the verification approach discussed in this article is able to perform a more fine-grained analysis compared to the approach in previous work [PW16]. The κ-extension of the transformation rule in Fig. 2, but now with a′ replaced by a, is shown in Fig. 5a, b using the approach presented in this article and the approach in previous work, respectively. In the latter case, the notions of in-state and exit-state are not used; instead, both types of states are treated in the same way, as glue-states.
The approach discussed in this article determines that the left and right κ-extended pattern LTSs are branching bisimilar, as shown in Fig. 5a. The branching bisimulation relation between the left and right κ-extended pattern LTSs is indicated with dashed lines. The introduction of the τ-transition does not break branching bisimilarity, since no behaviour is lost.
However, the approach in [PW16] reports a counter-example, as shown in Fig. 5b. Since the approach in [PW16] does not distinguish between in-states and exit-states, the semantics of the transformation rule is slightly different; each glue-state is allowed to be matched on states with incoming transitions, outgoing transitions, and both in- and outgoing transitions. Therefore, any correct verification technique would have to consider the possibility that the glue-states are matched on states that have additional in- and/or outgoing transitions, and therefore, the extra τ-transition in R_κ could mean that unmatched outgoing transitions are disabled when the τ-transition is followed. By adding the notions of in-state and exit-state, we can restrict the applicability of transformation rules and thereby provide more information to the verification technique.
The analysis. In the verification of a transformation rule r = (L, R), the aim is to determine whether r is sound for any LTS G on which r is applicable. The verification proceeds as follows:
1. Construct the κ-extended pattern LTSs L_κ and R_κ according to Definition 3.7.
2. Determine whether L_κ and R_κ are branching bisimilar.
If L_κ and R_κ are branching bisimilar, then r is branching-structure preserving for all inputs it is applicable on. Otherwise, r may preserve the branching structure of some LTSs, but it is definitely not branching-structure preserving for all possible inputs it is applicable on.

Time complexity of the analysis. Consider a transformation rule r. Let g be the number of glue-states defined in the pattern LTSs of r. Furthermore, let s, t and a be the largest number of states, transitions and action labels, respectively, in the pattern LTSs of r.
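The two-step analysis can be sketched end to end as follows. This is our own self-contained illustration: it κ-extends both patterns (with tags keeping the non-glue states of L and R apart, while σ-, ε-, and γ-labels for the same glue-states coincide), computes branching bisimilarity with a naive fixpoint rather than an efficient algorithm, and compares the sets of initial states.

```python
TAU = "tau"

def largest_branching_bisim(states, trans):
    """Greatest fixpoint of the branching bisimulation condition (naive)."""
    def tau_reach(x):
        seen, stack = {x}, [x]
        while stack:
            y = stack.pop()
            for (p, a, q) in trans:
                if p == y and a == TAU and q not in seen:
                    seen.add(q)
                    stack.append(q)
        return seen

    reach = {x: tau_reach(x) for x in states}
    out = {x: [(a, q) for (p, a, q) in trans if p == x] for x in states}
    R = {(u, v) for u in states for v in states}

    def ok(u, v):
        for (a, u2) in out[u]:
            if a == TAU and (u2, v) in R:
                continue  # inert tau-step
            if any((u, vh) in R and (u2, v2) in R
                   for vh in reach[v] for (b, v2) in out[vh] if b == a):
                continue  # matched after a tau-path
            return False
        return True

    changed = True
    while changed:
        changed = False
        for pair in sorted(R):
            if pair in R and not ok(*pair):
                R.discard(pair)
                R.discard((pair[1], pair[0]))
                changed = True
    return R

def kappa_extend(pattern, tag):
    """Step 1: kappa-extension; 'tag' keeps L- and R-states apart."""
    states, trans, ins, exits = pattern
    k = tag + ".kappa"
    st = {tag + "." + s for s in states} | {k}
    tr = {(tag + "." + s, a, tag + "." + t) for (s, a, t) in trans}
    tr |= {(k, "sigma_" + p, tag + "." + p) for p in ins}
    tr |= {(tag + "." + p, "eps_" + p, k) for p in exits}
    tr |= {(tag + "." + p, "gamma_" + p + "_" + q, tag + "." + q)
           for p in exits for q in ins}
    return st, tr, {k} | {tag + "." + p for p in exits}

def verify_rule(left, right):
    """Steps 1 and 2: kappa-extend both patterns, then compare the
    sets of initial states for branching bisimilarity."""
    sl, tl, il = kappa_extend(left, "L")
    sr, tr, ir = kappa_extend(right, "R")
    R = largest_branching_bisim(sl | sr, tl | tr)
    return (all(any((p, q) in R for q in ir) for p in il) and
            all(any((p, q) in R for p in il) for q in ir))
```

On the rule of Fig. 2 with a′ relabelled to a, the check succeeds; on the a/b-swap rule of Fig. 4, the ε-transitions expose the difference and the check fails.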
In the first step of the verification of a rule r, a κ-state is added, and for each glue-state, one σ-, ε-, and/or γ-transition is added. The number of states added is constant, the number of transitions added is O(g), and the number of action labels added is O(g). Hence, the running time of step 1 is O(g).
In the second step, it is checked whether L_κ and R_κ are branching bisimilar. Branching bisimilarity checking can be performed in O(t · log(s + a)) [GW16]. Therefore, the time complexity of the final step of the analysis is O((t + g) · log((s + 1) + (a + g))).

Correctness of the verification
In this section, we prove the correctness of the analysis algorithm presented in the previous section. First, we introduce two lemmas that express properties of left and right κ-extended pattern LTSs that are branching bisimilar. Next, we prove the soundness of the approach in Proposition 3.1. Finally, the completeness of the approach is proven in Proposition 3.2.
Fig. 5. The approach presented in this article (Fig. 5a) is able to determine that the transformation rule shown in Fig. 2 (where a′ has been relabelled to a) guarantees that the input and output LTSs are branching bisimilar; this is an improvement over our previous formalisation [PW16] (Fig. 5b), which reports a counter-example, as it does not distinguish between in- and exit-states. a The κ-extension of the transformation rule shown in Fig. 2, relabelled with a′ : a, using the formalisation in this article; the left and right κ-extended pattern LTSs are branching bisimilar. b The κ-extension of the transformation rule shown in Fig. 2, relabelled with a′ : a, where in-states and exit-states are not distinguished from each other; the left and right κ-extended pattern LTSs are not branching bisimilar, since in R_κ, the possibility of performing an ε_1-transition is lost once the τ-transition from state 1 to state 6 is taken.

Recall that glue-states are not removed by transformation and that the κ-state represents unmatched states, and therefore also represents states that are not removed. When comparing LTS patterns by checking for branching bisimilarity, it is desirable that these states are related to themselves, as illustrated in the previous example. Lemma 3.1 shows that it is indeed the case that κ-extension achieves this: if two κ-extended pattern LTSs L_κ, R_κ are branching bisimilar, then the κ-state, the in-states, and the exit-states, i.e., the initial states of the κ-extended LTS patterns, are related to themselves.
Proof The proof follows from the fact that the σ- and ε-transitions are uniquely constructed for a specific glue-state. Consider a state p ∈ {κ} ∪ I_L ∪ E_L. By Definition 3.7, we have p ∈ I_{L_κ}. Since L_κ and R_κ are branching bisimilar, there is a state q ∈ S_{R_κ} such that p ↔b q. We perform a case distinction on p ∈ {κ} ∪ I_L ∪ E_L. In each case, we show that there is a transition labelled with a σ or ε between p and some p′ ∈ S_{L_κ} such that the action label uniquely identifies the states p and p′. For convenience, let us refer to this unique label as α and say we have a transition p −α→_{L_κ} p′. As p ↔b q, we can apply Definition 3.6 to show that q simulates p. As the unique labels are not allowed to be the silent action τ, the only remaining case indicates that there are states q̂ ∈ S_{R_κ} and q′ ∈ S_{R_κ} such that q −τ→*_{R_κ} q̂ −α→_{R_κ} q′ with p ↔b q̂ and p′ ↔b q′. There is only one transition labelled α in both L_κ and R_κ, and it occurs as p −α→ p′. It follows that q̂ = p and q′ = p′. Consequently, we have p ↔b p and p′ ↔b p′. We now discuss the case distinction in full detail:
• p = κ. By Definition 3.1, I_L ≠ ∅, so there is a state p′ ∈ I_L. This means that there is a transition κ −σ_{p′}→_{L_κ} p′, where σ_{p′} ≠ τ and σ_{p′} uniquely occurs on κ −σ_{p′}→ p′ in both L_κ and R_κ (Definition 3.7). Hence, since σ_{p′} ≠ τ, by Definition 3.6, there are states q̂ ∈ S_{R_κ} and q′ ∈ S_{R_κ} such that q −τ→*_{R_κ} q̂ −σ_{p′}→_{R_κ} q′ with κ ↔b q̂. The σ_{p′}-transition in both L_κ and R_κ is strictly present as κ −σ_{p′}→ p′. It follows that q̂ = κ; therefore, we have κ ↔b κ.
• p ∈ I_L. Then there is a transition κ −σ_p→_{L_κ} p, where σ_p ≠ τ and σ_p uniquely occurs from κ to p in both L_κ and R_κ (Definition 3.7). In the previous case, we established that κ ↔b κ. Because σ_p ≠ τ, by Definition 3.6, there are states q̂ ∈ S_{R_κ} and q′ ∈ S_{R_κ} such that κ −τ→*_{R_κ} q̂ −σ_p→_{R_κ} q′ with p ↔b q′. The σ_p-transition in L_κ and R_κ only goes from κ to p. It follows that q′ = p, and thus p ↔b p.
• p ∈ E_L. By Definition 3.7, we have p ∈ E_{L_κ}, with an observable action ε_p that uniquely occurs on a transition from p to κ in both L_κ and R_κ. Moreover, since L_κ ↔b R_κ, there is a state q ∈ I_{R_κ} such that p ↔b q. Therefore, by ε_p ≠ τ and Definition 3.6, there are states q̂ ∈ S_{R_κ} and q′ ∈ S_{R_κ} such that q −τ→*_{R_κ} q̂ −ε_p→_{R_κ} q′ with p ↔b q̂ and κ ↔b q′. By Definition 3.7, there is only one transition labelled ε_p in L_κ and R_κ, which goes from p to κ. It follows that q̂ = p, and hence p ↔b p.
Exit-states are the states where the embedding of an LTS pattern may be left. A transition leaving the embedding is represented in the κ-extended pattern by an ε-transition. Should an arbitrary state q ∈ S_R be related to an exit-state p ∈ E_L, then there must exist a τ-path from q to p, since otherwise state q cannot simulate the ε-transition from p. Lemma 3.2 shows that, indeed, such a τ-path from q to p exists.
Intuitively, the proof follows from the fact that action ε_p uniquely occurs on a transition from p to κ. Since in Rκ any state q branching bisimilar to p must be able to perform this transition directly, or be able to reach such a transition via a τ-path, we must either have that q = p or that from q, p can be reached via a τ-path. Next, we discuss the proof in full detail.
Let state p ∈ E_L and state q ∈ S_Rκ be such that q ↔b p. By Definition 3.7, the ε_p-transition occurs in Rκ only as p −ε_p→ κ; therefore, we must either have q = p or q must be able to reach p via τ-transitions. It follows that q −τ→*_Rκ p, and since such a path cannot pass through κ, by structural induction, q −τ→*_R p.

Definition 3.8 introduces a mapping that formally defines how a κ-extended LTS pattern is related to the LTS that is matched on. The fact that the κ-state represents all states that are not matched on is made explicit by this mapping.

Definition 3.8 (Mapping of κ-extended LTS)
Consider an LTS G and a pattern LTS P with corresponding match m : P → G. We say that a state p ∈ S_Pκ is mapped to a state s ∈ S_G, denoted by m_κ(p) = s, iff either p ≠ κ and m(p) = s, or p = κ and there is no state in S_P matching on s (i.e., ¬∃x ∈ S_P. m(x) = s).
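To make the mapping concrete, the condition of Definition 3.8 can be sketched as a small predicate. This is an illustrative sketch only; the function name, the dictionary encoding of the match, and the sentinel value for κ are our own assumptions, not part of the formal development.

```python
# A minimal sketch of the mapping of Definition 3.8; all names are ours.
KAPPA = "kappa"  # the extra glue-state introduced by the kappa-extension

def kappa_map(m, p, s):
    """Return True iff the kappa-extended match maps pattern state p to state s.
    m is the match, given as a dict from pattern states to LTS states."""
    if p != KAPPA:
        return m.get(p) == s          # ordinary states follow the match m
    return s not in m.values()        # kappa covers exactly the unmatched states
```

For instance, with m = {"p1": "s1"}, the κ-state is mapped onto every state of G that no pattern state matches on, and onto no matched state.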

Soundness of the analysis.
A transformation rule r preserves the branching structure of all LTSs it is applicable on if the κ-extended patterns of r are branching bisimilar. This is expressed in Proposition 3.1.
Proposition 3.1 is a special case of Proposition 4.1, discussed in the next section, which considers the transformation of concurrent systems. Its proof is derived from the Coq proof of Proposition 4.1 in order to explain the transformation verification technique in a simpler and more intuitive setting. Lemmas 3.1 and 3.2 have been formalised in Coq.

Proposition 3.1 Let G be an LTS, and let r be a transformation rule with matches m : L → G and m̂ : R → T(G) such that Definition 3.5 is satisfied. If Lκ ↔b Rκ, then these two patterns exhibit branching bisimilar behaviour even when they are embedded into a larger LTS; therefore, the behaviours of the original and the transformed system (G and T(G), respectively) are branching bisimilar.

Proof By definition, G ↔b T(G) means that there must exist a branching bisimulation relation C relating the states in I_G and I_T(G). Let B be a branching bisimulation relation demonstrating that Lκ ↔b Rκ. We define C as follows: s C t iff there are states p ∈ S_Lκ and q ∈ S_Rκ such that p B q, m_κ(p) = s, m̂_κ(q) = t, and (p = κ ∨ q = κ) ⇒ s = t (1). States that are touched by the transformation are related via the corresponding matches m and m̂ and the branching bisimulation relation B. States that are left untouched by the transformation are represented by the κ-state, for which it holds that κ B κ (Lemma 3.1). The mappings m_κ and m̂_κ map the κ-state onto all states that are not matched on. To ensure that untouched states are related to themselves, we require s = t whenever either s or t is mapped on by a κ-state.
We now prove that C is a branching bisimulation relation by showing that the initial states of G and T(G) are related, and that Definition 3.6 holds for C. For the latter, we only discuss one of the two symmetric cases.
• C relates the initial states of G and T(G). Since we have I_G = I_T(G), we only have to show ∀s ∈ I_G. ∃t ∈ I_T(G). s C t. Taking s again for t, we have to show that s C s. State s is either matched on or not matched on:
– s is matched on by m, i.e., ∃p ∈ S_L. m(p) = s. We have p ∈ E_L, since initial states may only be matched on by exit-states (first condition of Definition 3.4). By Lemma 3.1, it follows that p B p. Hence, we have s C s.
– s is not matched on by m, i.e., ¬∃p ∈ S_L. m(p) = s. By definition, we have m_κ(κ) = s. By Lemma 3.1, it follows that κ B κ. Therefore, we have s C s.
In both cases it holds that s C s.
• Let s C t, and let s −a→_G s′ be a transition. By (1), there are states p ∈ S_Lκ and q ∈ S_Rκ such that p B q, m_κ(p) = s, m̂_κ(q) = t, and (p = κ ∨ q = κ) ⇒ s = t. Furthermore, by the definition of m_κ, there is a state p′ ∈ S_Lκ such that m_κ(p′) = s′. The transition s −a→_G s′ is either matched on by a transition p −a→_L p′ or not matched on:
1. There exists a transition p −a→_L p′ matching on s −a→_G s′ in L. Since p B q, by Definition 3.6, we have the following two cases: either a = τ with p′ B q, in which case, since m_κ(p′) = s′ and m̂_κ(q) = t, we have s′ C t; or there are states q̂ ∈ S_Rκ and q′ ∈ S_Rκ such that q −τ→*_Rκ q̂ −a→_Rκ q′ with p B q̂ and p′ B q′. Transitions from and to κ-states are not matched on by m and m̂, i.e., only transitions in T_L (T_R) match on transitions in T_G (T_T(G)). Hence, the states p, p′, q, q̂, and q′ cannot be κ-states. It follows that s C m̂_κ(q̂) and s′ C m̂_κ(q′), since the matching states are not κ, m_κ(p) = s, and m_κ(p′) = s′. Finally, as m̂_κ(q) = t, we have t −τ→*_T(G) m̂_κ(q̂) −a→_T(G) m̂_κ(q′).
2. There is no transition in L matching on s −a→_G s′. State s is either matched on by a state p, i.e., m(p) = s, or not matched on. If m(p) = s, we must have p ≠ κ, and since there is no transition matching s −a→_G s′, it follows from the second matching condition (Definition 3.4) that p ∈ E_L. It then follows from p B q and Lemma 3.2 that q −τ→*_R p. Moreover, since m̂ is an embedding, the transition is preserved in T(G), and we have t −τ→*_T(G) s −a→_T(G) s′. Since there is no transition in L matching s −a→_G s′, the state p′ must be either a κ-state or an in-state, i.e., p′ ∈ I_L ∪ {κ}. For p ∈ E_L and p′ ∈ I_L ∪ {κ}, it follows from Lemma 3.1 that p B p and p′ B p′, respectively. Hence, we have s C t and s′ C s′.
This case is symmetric to the previous case.

Completeness of the analysis.
Completeness is an important factor in verification: a complete analysis technique will not report false negatives. The next proposition expresses that our analysis technique is complete. In the context of this work, completeness means that the analysis will always report that the left and right κ-extended pattern LTSs of a transformation rule r are branching bisimilar if the input LTS G and the output LTS T(G) produced by applying r on G are branching bisimilar for any given input LTS G and any given matching. The proof for Proposition 3.2 is derived from the Coq proof of Proposition 4.2 in order to explain the transformation verification technique in a simpler and more intuitive setting.
We want to stress that the analysis considers all possible input LTSs. Because of this, it may be that the analysis reports that a transformation rule does not preserve a given property in general, while the property may still hold after transformation of some specific input LTS. Consider, for instance, a transformation rule r that is not property preserving according to the analysis. There may still be an input LTS G1 with a match m1 such that G1 ↔b T(G1). However, it is guaranteed that there also exists an LTS G2 for r with a corresponding match m2 for which G2 and T(G2) are not branching bisimilar.
Proposition 3.2 Consider a transformation rule r = (L, R). Let 𝒢 be the set of all LTSs, and let r_G be the set of all possible match pairs corresponding to a transformation of an LTS G, where r_G consists of tuples of the form (m : L → G, m̂ : R → T(G)). The following holds:

(∀G ∈ 𝒢. ∀(m, m̂) ∈ r_G. G ↔b T(G)) ⇒ Lκ ↔b Rκ.

Verifying sets of dependent LTS transformations
In this section, we extend the setting by considering sets of interacting process LTSs in so-called networks of LTSs [Lan06], or LTS networks. Transformations can now affect multiple LTSs in an input network, and the analysis of transformations is more involved, since changes to process-local behaviour may affect system-global properties. Finally, we prove the correctness of the technique based on the complete Coq proof. From the proof it follows that, per set of related transformation rules, only a single bisimulation check is required in order to verify a system of transformation rules.

LTS networks and their transformation
An LTS network (Definition 4.1) describes a system consisting of a finite number of concurrent process LTSs and a set of synchronisation laws which define the possible interaction between the processes. The explicit behaviour of an LTS network is defined by its system LTS (Definition 4.2). We write 1..n for the set of integers ranging from 1 to n. A vector v of size n contains n elements indexed from 1 to n. For all i ∈ 1..n, v_i represents the i-th element of vector v.

Definition 4.1 (LTS network) An LTS network N of size n is a pair (Π, V), where
• Π is a vector of n concurrent LTSs. For each i ∈ 1..n, we write Π_i = (S_i, A_i, T_i, I_i), and s −a→_i s′ as shorthand for s −a→_{Π_i} s′.
• V is a finite set of synchronisation laws. A synchronisation law is a tuple (v, a), where v is a vector of size n, called the synchronisation vector, describing synchronising action labels, and a is an action label representing the result of successful synchronisation. We have ∀i ∈ 1..n. v_i ∈ A_i ∪ {•}, where • is a special symbol denoting that Π_i performs no action.
Definition 4.2 (System LTS) The system LTS G_N of an LTS network N = (Π, V) has a transition s −a→_N s′ iff there is a synchronisation law (v, a) ∈ V such that s_i −v_i→_i s′_i for all i ∈ Ac(v), and s′_i = s_i for all i with v_i = •.
The system LTS is obtained by combining the transitions of the processes in Π according to the synchronisation laws in V. Whenever we want to make explicit that a transition s −a→_N s′ is enabled by a synchronisation law (v, a), we write s −v,a→_N s′. We refer to s −a→_N s′ and s −v,a→_N s′ transitions as global transitions, and we refer to transitions s_i −v_i→_i s′_i (i ∈ Ac(v)) as the (process-)local transitions. If it is clear from the context whether a transition is global or local, then "global" or "local" is omitted.
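As an illustration of how the system LTS emerges from the synchronisation laws, the following sketch enumerates its reachable states and global transitions. It is a simplified reconstruction under our own encoding (processes as dicts of transitions, • encoded as None); it is not taken from the article's formal definitions.

```python
from itertools import product

BULLET = None  # encodes the special symbol '•' (process performs no action)

def system_lts(inits, trans, laws):
    """Explore the reachable part of the system LTS of an LTS network.
    inits: one initial state per process; trans: per process, a dict
    state -> list of (action, successor); laws: list of (vector, result)."""
    n = len(inits)
    init = tuple(inits)
    states, edges, todo = {init}, set(), [init]
    while todo:
        s = todo.pop()
        for vec, a in laws:
            # for every process, collect the successors allowed by this law
            options = []
            for i in range(n):
                if vec[i] is BULLET:
                    options.append([s[i]])  # inactive process: state unchanged
                else:
                    options.append([t for (b, t) in trans[i].get(s[i], []) if b == vec[i]])
            for succ in product(*options):  # all active parties must move together
                t = tuple(succ)
                edges.add((s, a, t))
                if t not in states:
                    states.add(t)
                    todo.append(t)
    return states, edges
```

For a two-process law such as (⟨a, a⟩, a), both processes must take an a-step simultaneously, which is exactly the product taken over the per-process options.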
The LTS network model subsumes most hiding, renaming, cutting, and parallel composition operators present in process algebras, but also more expressive operators such as m-among-n synchronisation [LM13]. For instance, hiding can be applied by replacing the a component in a law by τ. A transition of a process LTS is cut if it is blocked with respect to the behaviour of the whole system (the system LTS), i.e., there is no synchronisation law involving the transition's action label at the position of the process LTS.
Figure 6 shows an LTS network N = (Π, V) with two processes and three synchronisation laws (left) and its system LTS G_N (right). To construct the system LTS, first, the initial states of Π_1 and Π_2 are combined to form the initial state of G_N. Then, the outgoing transitions of the initial states of Π_1 and Π_2 are combined using the synchronisation laws, leading to new states in G_N, and so on.
Law (⟨a, a⟩, a) specifies that the process LTSs can synchronise on their a-transitions, resulting in a-transitions in the system LTS. The other laws specify that b- and d-transitions can synchronise, resulting in e-transitions, and that c-transitions can be fired independently. Note that, in fact, the b- and d-transitions in Π_1 and Π_2 are never able to synchronise.
The set of indices of processes participating in a synchronisation law (v, a) is defined as Ac(v) = {i ∈ 1..n | v_i ≠ •}. Branching bisimilarity is a congruence for the construction of the system LTS of LTS networks if the synchronisation laws do not synchronise, rename, or cut τ-transitions [Lan06]. In this article, we only consider LTS networks satisfying these properties.

Transformation of an LTS network.
A system of transformation rules, or rule system for short, allows the transformation of LTS networks. A rule system transforms multiple processes and may introduce new synchronisation laws. A rule system is defined as follows.
Definition 4.3 (Rule system) A rule system (R, V′, V̂) consists of a vector of transformation rules R, a set of synchronisation laws V′ that must be present in the networks that the rule system is applied on, and a set of synchronisation laws V̂ introduced in the network resulting from a transformation. The i-th left and right pattern LTSs of R are denoted by L_i and R_i, respectively.
Intuitively, a rule system describes how a concurrent system is modified to create a transformed concurrent system. A rule system is designed with a specific result in mind. Therefore, it is desirable that a rule system is confluent, such that transformation rules can be applied in any order, eventually leading to the same result. Checking confluence can be done efficiently [Wij15]. In the remainder of this article, we only consider confluent rule systems.
The transformation of an LTS network N = (Π, V) of size n given a rule system (R, V′, V̂) is achieved via a set m̄ of pairs of matches. Each element (m, m̂) ∈ m̄ corresponds to the application of some rule in R to a process in Π. The match m : L_j → Π_i matches the left pattern LTS (L_j) of the j-th transformation rule in R onto the i-th process LTS (Π_i) of LTS network N. Similarly, the match m̂ : R_j → T(Π_i) matches the right pattern LTS (R_j) of the j-th rule in R onto the transformed process LTS (T(Π_i)) of the transformation of network N.
One transformation rule can match on many processes, and one process can be matched on by many transformation rules. However, for the sake of simplicity, the transformation of N is separated into several transformation steps. Since we assume that rule systems are confluent, the order of the transformation steps is irrelevant for the final result.
A transformation step transforms a network N of size n given a vector M consisting of n match pairs taken from m̄ ∪ {δ}, where δ is a dummy match pair corresponding to a dummy transformation rule that leaves the target process unchanged (i.e., T(Π_i) = Π_i for some process LTS Π_i). The dummy transformation rule consists of a single state and no transitions in both its left and right patterns. The left and right matches of the i-th match pair M_i are referred to as m_i and m̂_i, respectively. For each index i ∈ 1..n, the matches of a match pair M_i match on process LTS Π_i, i.e., we have m_i : L_i → Π_i and m̂_i : R_i → T(Π_i).
Each match pair (m_i, m̂_i) ∈ M corresponds to a rule r ∈ R ∪ {δ}. Hence, the match pair vector M defines a partial mapping between processes in Π and rules in R. With abuse of notation, we shall use M as a partial mapping to project the synchronisation laws of the rule system onto Π according to the matches in M. We write M(i) = j to indicate that m_i and m̂_i are matches for the j-th transformation rule of R. If M_i = δ, then we write M(i) = ∗ to indicate that Π_i is not mapped to a rule in R. This mapping describes a projection from synchronisation vectors of the rule system onto synchronisation vectors of the network N on which the transformation step is applied. This projection of synchronisation vectors is formally defined as follows.

Definition 4.4 (Projection of a synchronisation vector) Let f : 1..n → 1..m with n, m ∈ ℕ be a partial mapping. For i ∈ 1..n, we write f(i) = ∗ to indicate that i is not mapped by f. A synchronisation vector v of size m can be projected iff for all j ∈ Ac(v) there exists an i ∈ 1..n such that f(i) = j. This condition ensures that all active indices of v are represented in the projected vector. The projected synchronisation vector, denoted v↾f, is a vector of size n with elements i ∈ 1..n defined as (v↾f)_i = v_{f(i)} if f(i) ≠ ∗, and (v↾f)_i = • otherwise.

Let f be a partial mapping. Given a synchronisation law (v, a), the projected synchronisation law is written as (v↾f, a). The projection of a set of synchronisation laws V̂ is defined as V̂↾f = {(v↾f, a) | (v, a) ∈ V̂ and v can be projected}. For a vector of matches M, the transformation step is formalised in Definition 4.5. The transformed LTS network T_M(N) consists of the transformed process LTSs and the original set of synchronisation laws V updated with the projection of V̂.
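The projection of a synchronisation vector can be paraphrased operationally as follows. The sketch below uses 0-based indices and encodes both • and ∗ as None-like values; these encoding choices are ours.

```python
BULLET = None  # encodes '•', the "no action" symbol

def project_vector(v, f, n):
    """Project a synchronisation vector v of size m onto a network of size n
    along the partial mapping f (given as a dict i -> j; absent keys are '*').
    Returns the projected vector, or None if v cannot be projected."""
    active = {j for j, label in enumerate(v) if label is not BULLET}
    if not active <= set(f.values()):
        return None  # an active index of v is not represented in the network
    return tuple(v[f[i]] if i in f else BULLET for i in range(n))
```

For example, projecting the vector ⟨a, b⟩ onto a three-process network where the first and third processes are mapped to the two rules yields ⟨a, •, b⟩.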
Before the transformation step defined by M can be applied, it must be ensured that the input network N = (Π, V) contains the corresponding projection of the set of synchronisation laws V′ of the applied rule system, i.e., we must check V′↾M ⊆ V. If this is the case, then the rule system is applicable, and the transformed network T_M(N) receives the set of synchronisation laws V ∪ V̂↾M. That is, the projected laws of the set of synchronisation laws V̂ are introduced to the network in the transformation step. Each process LTS Π_i (i ∈ 1..n) is transformed to an LTS T(Π_i) by applying the matches m_i : S_{L_i} → S_i and m̂_i : S_{R_i} → S_{T(Π_i)}. Note that every process LTS is matched on, since the dummy rule δ matches on every process.

Definition 4.5 (LTS network transformation step) Let N = (Π, V) be an LTS network of size n, and let (R, V′, V̂) be a rule system. Let M be a vector of size n of match pairs (m_i, m̂_i) such that V′↾M ⊆ V. For all i ∈ 1..n, let T_{m_i} denote the LTS transformation of Π_i using application matches m_i and m̂_i according to Definition 3.5.
The application of a transformation step defined by M on LTS network N is defined as T_M(N) = (⟨T_{m_1}(Π_1), ..., T_{m_n}(Π_n)⟩, V ∪ V̂↾M). The exhaustive application of a rule system on a network N is denoted by T(N). In most of our examples, M is trivial and there is only a single transformation step. In these examples, the definition of M is omitted and the transformed network is referred to as T(N). Figure 7 presents a transformation sequence that is the result of the application of a rule system (see Fig. 7b) on a network N. The system LTSs of the input and output networks are exactly the same; this LTS is shown in Fig. 7c. The a- and c-transitions and the b- and d-transitions can never synchronise. Furthermore, the d- and c-transitions in Π_1 are cut. The synchronisation of the e-transitions, resulting in h-transitions, is the only reachable behaviour in the system LTS.
The rule system removes the a-, b-transition sequence and the d-, c-transition sequence. The system LTS of the network it is applied on remains unchanged, because in the described situation synchronisation of both the a- and d-transitions is impossible, and the b- and c-transitions are otherwise unreachable. Transformations like this are useful to gain insight into the reachable behaviour of local process LTSs.
Figure 7a shows the exhaustive application of the rule system on a network N = (Π, V). The first transformation step applies the match pair vector M1, which contains the left matches m1_1 : ..., and the second transformation step applies the match pair vector M2, which contains the left matches m2_1 : ...; the result is the final transformed network.

Analysing transformations of an LTS network
In a rule system, transformation rules can be dependent on each other regarding the behaviour they affect.
In particular, the rules may refer to actions that require synchronisation according to some law, either in the network being transformed or in the network resulting from the transformation. Since, in general, it is not known a priori whether or not those synchronisations can actually happen (see Fig. 6, the a-transitions versus the b- and d-transitions), a full analysis of such rules must consider both successful and unsuccessful synchronisation.
To achieve this, dependent rules must be analysed together as combinations of LTS patterns, as shown in Fig. 1. To this end, LTS patterns are combined into an LTS network, called a pattern network P = (Π̄, W), with Π̄ a vector of pattern LTSs and W a set of synchronisation laws. In particular, the left and right pattern networks of a rule system (R, V′, V̂) are defined as L = (⟨L_1, ..., L_|R|⟩, V′) and R = (⟨R_1, ..., R_|R|⟩, V′ ∪ V̂). For the analysis of these pattern networks, we define in Definition 4.6 the κ-extended pattern network, consisting of the combination of the κ-extended LTS patterns and an extension of the synchronisation laws with κ-synchronisation laws W_κ. The left and right κ-extended pattern networks are denoted Lκ and Rκ and, for the purpose of equivalence checking, must use the same set of κ-synchronisation laws W_κ.

Definition 4.6 (κ-extended pattern network) Given a pattern network P = (Π̄, W) of size n, its κ-extended pattern network is defined as Pκ = (⟨Π̄_1κ, ..., Π̄_nκ⟩, W ∪ W_κ), where W_κ is the set of κ-synchronisation laws. Verifying a rule system must account for all possible ways of entering or leaving the pattern networks. Therefore, the set of κ-synchronisation laws W_κ describes all possible combinations of synchronisations between σ-, ε-, and γ-actions.
Figure 8 shows a rule system in which the two rules are dependent. Again, the states are numbered such that matches can be identified by the state label, i.e., a state ĩ is matched onto state i. The two transformation rules depicted in Fig. 8a introduce a new dependency between two (possibly) independent systems. The corresponding κ-extended pattern networks are given in Fig. 8b. The κ-synchronisation laws allow σ-, ε-, and γ-actions to be performed both independently and synchronised. The synchronisations of σ1- and σ2-transitions, σ1- and ε2-transitions, and σ1- and γ2,2-transitions are displayed as the σσ-transition, the σε-transition, and the σγ-transition, respectively. Figure 8c presents the branching bisimulation check performed on the two κ-extended pattern networks. The check concludes that the two networks are not branching bisimilar. In particular, when the second process (Rκ_2) leaves the pattern LTS at state ⟨4, 2⟩ via the ε2-transition, the a-transition can no longer be mimicked. The same would occur in any application of the rule system at any state matched by ⟨4, 2⟩ that has a transition to an unmatched state.
A rule system may consist of multiple classes of dependent rules, where synchronisation is contained within a class. There is no synchronisation defined between the classes, i.e., the classes are independent of each other in terms of synchronising behaviour. These independent classes can be analysed separately. Given a rule system (R, V′, V̂), the potential synchronisation between the behaviour in transformation rules in R is characterised by the direct dependency relation D: transformation rule R_i is related via D to rule R_j iff both rules participate in a synchronisation law (v, a) ∈ V′ ∪ V̂. The relation considering directly and indirectly dependent rules, called the dependency relation, is defined by the transitive closure of D, i.e., D⁺. The D⁺ relation can be used to construct a partition D of transformation rule indices into classes of indices referring to dependent rules. Each class can be analysed independently. We call these classes dependency sets.
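The partition into dependency sets can be computed with a standard union-find pass over the synchronisation laws, merging the active rule indices of each law. This is a sketch under our own encoding (• as None, 0-based rule indices), not the article's algorithm.

```python
def dependency_sets(num_rules, laws):
    """Partition rule indices 0..num_rules-1 into dependency sets:
    rules are directly dependent iff both are active in some law, and the
    partition is induced by the transitive closure of that relation."""
    parent = list(range(num_rules))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for vec, _ in laws:
        active = [i for i, label in enumerate(vec) if label is not None]
        for i in active[1:]:
            parent[find(active[0])] = find(i)  # merge all active parties
    classes = {}
    for i in range(num_rules):
        classes.setdefault(find(i), set()).add(i)
    return sorted(classes.values(), key=min)
```

A law with the first two rules active places those rules in the same dependency set, while a third, non-participating rule remains in a class of its own.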
To define the projection of a rule system (R, V′, V̂) along a dependency set P ∈ D, we use the ordering (P, <) to obtain a vector mapping the domain 1..|P| to the rules in R; we write P(i) = j iff the i-th element of P (with i ∈ 1..|P|) refers to transformation rule R_j (with j ∈ 1..|R|). The projection of a rule system along a dependency set P is defined as follows.
Definition 4.7 (Projection of a rule system) Let (R, V′, V̂) be a rule system with a partition D of dependency sets. The projection of the rule system along a dependency set P ∈ D is a rule system (R_P, V′↾P, V̂↾P), with R_P a vector of size |P| such that for all i ∈ 1..|P| we have (R_P)_i = R_{P(i)}. The left and right pattern networks of a projected rule system are denoted as LP and RP.
An analysis of the pattern networks is only sufficient if all relevant behaviour is described in those networks. Furthermore, the effect of the matches (i.e., the application of the rule system) must be taken into consideration with respect to both the projection of the sets of synchronisation laws V′ and V̂, and the completeness of the transformation of synchronising transitions. To ensure the soundness of the transformation verification approach, one analysis condition and four application conditions must be satisfied.
In Sections 4.2.1 and 4.2.2, the analysis of a rule system and the application of a rule system, respectively, are discussed further. The analysis of a rule system consists of the verification of the pattern networks and the analysis condition. The analysis of the application of a rule system constitutes the verification of the application conditions. Both sections present an analysis algorithm and a time complexity analysis.

Analysis of a rule system
In the analysis of a rule system, the left and right pattern networks are checked for branching bisimilarity. To guarantee the soundness of this check, an analysis condition must apply. We first describe the analysis condition. Then, the algorithm for the analysis of a rule system is presented. Finally, this section is concluded with a run time analysis. Consider a rule system (R, V′, V̂). The analysis condition, referred to as ANC1, requires that the rule system is complete with respect to the synchronisation laws in V′. That is, all the action labels described by the laws in V′ must be transformed by the associated transformation rule. This ensures that any behaviour described in V′ and affected by the rule system is explicitly visible in the pattern networks. The symmetric condition involving the R_i and V′ ∪ V̂ applies as well.
Figure 9 shows how the application of a rule system that does not satisfy ANC1 affects the transformation verification. The rule system, shown in Fig. 9a, has a synchronisation law (⟨a, a⟩, a) ∈ V′. However, transformation rule R_2 does not contain any a-transitions, i.e., ANC1 is not satisfied. As a result, the effects of the transformation of the a-transition by rule R_1 are not visible in the κ-extended pattern networks presented in Fig. 9b. Figure 9d shows that an input network N exists (Fig. 9c) such that the input network N and the output network T(N) are not branching bisimilar. If rule R_2 contained the a-loop, then the a-transition would not have been cut, and Lκ and Rκ would no longer be bisimilar. Hence, it is vital that labels considered by synchronisation laws in V′ are also present in the transformation rules, i.e., rule systems must adhere to ANC1.
The analysis. In the verification of a rule system, the aim is to determine whether it is sound for any network N on which it is applicable. Before analysing the transformation rules with branching bisimulation checks, it is checked whether the rule system is confluent and satisfies ANC1. Verification of a rule system (R, V′, V̂) proceeds as follows:

[Fig. 9: a A rule system (R, V′, V̂) that does not satisfy ANC1. b Lκ and Rκ are equivalent, since a-transitions are not considered by rule 2. c An input network N = (Π, V). d The system LTSs before (G_N) and after (G_T(N)) transformation are not branching bisimilar.]

1. Check whether in the rule system no τ-transitions can be synchronised, renamed, or cut, and whether ANC1 is satisfied.
If not, report which check failed and stop.
2. Check whether the rules in the rule system are confluent. If not, report that the rule system is not confluent and stop.
3. For each rule in R, construct the κ-extended pattern LTSs according to Definition 3.7.
4. Construct the set of dependency sets D.
5. For each class (dependency set) P ∈ D, determine whether Lκ,P ↔b Rκ,P holds, i.e., whether the κ-extended pattern networks projected along P are branching bisimilar.
If all steps produce positive results, then the rule system is branching-structure preserving for all inputs it is applicable on. Otherwise, it may preserve the branching structure of some LTS networks, but it certainly is not branching-structure preserving for all possible inputs it is applicable on.
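The five verification steps can be outlined as a driver routine. In the sketch below, all of the individual checks (τ-admissibility plus ANC1, confluence, κ-extension, dependency-set construction, and branching bisimilarity) are passed in as callables, since their implementations are beyond this outline; the function and parameter names are our own.

```python
def verify_rule_system(rules, laws_expected, laws_introduced,
                       tau_ok, confluent, kappa_extend,
                       dependency_sets, branching_bisimilar):
    """Return None if the rule system is branching-structure preserving for
    all applicable inputs, otherwise a string describing the failed step."""
    all_laws = laws_expected | laws_introduced
    if not tau_ok(all_laws):                         # step 1
        return "tau-transitions synchronised, renamed, or cut (or ANC1 fails)"
    if not confluent(rules):                         # step 2
        return "rule system is not confluent"
    extended = [kappa_extend(r) for r in rules]      # step 3: (left, right) pairs
    for P in dependency_sets(len(rules), all_laws):  # steps 4 and 5
        left = [extended[i][0] for i in P]
        right = [extended[i][1] for i in P]
        if not branching_bisimilar(left, right):     # one check per class
            return f"dependency set {sorted(P)} not branching bisimilar"
    return None
```

Note that the loop performs exactly one bisimulation check per dependency set, reflecting the improvement over the exponential number of checks of earlier work.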

Time complexity of the analysis.
In the first step of the verification of a rule system, each check requires the verification of a condition on each synchronisation law in V′, V̂, or both. Each condition can be checked in linear time. Hence, the running time of step 1 is O(|V′ ∪ V̂|).
In the second step, it is checked whether the rule system is confluent. Confluence checking of transformations of LTSs has a running time that depends on s and t, with s and t the largest number of states and transitions in an LTS pattern of a rule in the rule system, respectively [Wij15].
In the third step, for each transformation rule R_i, the left and right κ-extended pattern LTSs are built, resulting in Lκ_i and Rκ_i, respectively. The pattern LTSs must only be extended once. Therefore, step 3 has time complexity O(|R| · g), with g the largest number of glue-states appearing in an LTS pattern of a rule in the rule system. The fourth step constructs the dependency sets by analysing the synchronisation laws in V′ ∪ V̂. This can be done in O(|V′ ∪ V̂|) time.
In the fifth and last step, for each dependency set P ∈ D, the pattern networks Lκ,P and Rκ,P are constructed, and it is verified whether Lκ,P and Rκ,P are branching bisimilar. Hence, |D| bisimulation checks are performed. Let s, t, and a be the largest number of states, transitions, and action labels, respectively, appearing in the κ-extended pattern networks of the rule system. Branching bisimilarity checking can be performed in O(t · log(s + a)) [GW16]. Therefore, the time complexity of the final step of the analysis is O(|D| · t · log(s + a)).
The running time of steps 3-5 together therefore amounts to O(|D| · t · log(s + a) + |V′ ∪ V̂| + |R| · g). In contrast with previous work, the analysis presented here only requires a single bisimulation check per dependency set P ∈ D (versus 2^|P| − 1 checks in previous work [WE13]). This improvement is made possible by the new correctness proof presented in Sect. 4.3.

Analysis of the application of a rule system
The analysis presented in the previous section is not enough to guarantee the soundness of the transformation verification technique. There are four more conditions that need to be taken into account when the rule system is applied on an input LTS network. We first describe these four application conditions. Then, the algorithm for the analysis of the application of a rule system is presented. Finally, this section is concluded with a run time analysis.
Consider a rule system (R, V′, V̂) and an LTS network N = (Π, V) of size n on which the rule system is applied, subject to a set of match pairs m̄.
The first condition, APC1, concerns the completeness of the transformation of synchronising transitions when applying the rule system on network N. To prevent breaking branching bisimilarity due to a mixture of old and new synchronising behaviour, we require that old synchronising behaviour is completely transformed. A rule transforming synchronising transitions (with a minimum of two synchronising parties) must be applicable on all equivalent synchronising transitions. More precisely, for each active action label v_j (j ∈ 1..|R|) of a law (v, a) ∈ V′ that synchronises with another action label (i.e., {j} ⊂ Ac(v)), we must have that if a process Π_i (i ∈ 1..n) is matched on by L_j, then all v_j-transitions in Π_i are transformed, i.e., for all v_j-transitions in Π_i, there exists a match pair (m, _) such that m : L_j → Π_i matches a v_j-transition in L_j on that v_j-transition in Π_i.
We write "_" to indicate that the second element of the match pair is not relevant. The symmetric condition involving the R_j and V′ ∪ V̂ applies as well. Together with ANC1, APC1 ensures that synchronising transitions with a particular label in the input network are either all transformed, or none are transformed. This is shown in Sect. 4.3 in Lemma 4.9.
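Condition APC1 can be checked syntactically by comparing, per process and per rule, the set of matched transitions with the set of all transitions carrying the synchronising label. The data layout below (dicts keyed by process and rule indices) is our own simplification.

```python
def check_apc1(laws, matched_trans, all_trans):
    """Return True iff every process matched on by a rule j has ALL of its
    v_j-labelled transitions transformed, for each law with >= 2 active parties.
    matched_trans[(i, j)]: transitions of process i matched via rule j
    all_trans[i][label]:   all transitions of process i with that label"""
    for vec, _ in laws:
        active = [j for j, label in enumerate(vec) if label is not None]
        if len(active) < 2:
            continue  # no synchronisation partner: APC1 does not constrain it
        for j in active:
            label = vec[j]
            for (i, k), matched in matched_trans.items():
                if k != j:
                    continue  # process i is matched via a different rule
                if not all_trans[i].get(label, set()) <= matched:
                    return False  # an untransformed synchronising transition
    return True
```

In a situation like that of Fig. 10, an a-transition left untransformed in a matched process would show up as an element of all_trans missing from the matched set, so the check returns False.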
Figure 10 shows a transformation that satisfies ANC1 but does not adhere to APC1. The rule system, presented in Fig. 10a, transforms a-transitions to c-transitions. The first transformation rule transforms an a-transition to a c-transition iff there is a b-loop at the state from which the a-transition is performed. The second transformation rule transforms a-loops to c-loops. If the rule system is applied on network N, presented in Fig. 10c, then the transition 1 −a→_1 2 is not transformed. Therefore, the transformation does not satisfy APC1. The laws of the rule system describe that the synchronisation of two a-transitions results in an a-transition (i.e., (⟨a, a⟩, a) ∈ V′), and that the b-loop is performed independently of other processes (i.e., (⟨b, •⟩, b) ∈ V′). A new synchronisation law (⟨c, c⟩, a) is added such that the synchronisation of two c-transitions results in an a-transition again. This makes old and new synchronising behaviour comparable. As shown in Fig. 10b, the branching bisimulation check cannot distinguish between the left and right κ-extended pattern networks.
However, if the rule system is applied on the LTS network N given in Fig. 10c, then it turns out that G_N and G_T(N) are not branching bisimilar (see Fig. 10d). The transformed network can no longer perform the ⟨2, 3⟩ −a→ ⟨1, 3⟩ transition. Transition 2 −a→ 1 in process Π_1 has not been transformed, while the a-loop in Π_2 has been transformed to a c-loop in T(Π_2). Hence, there is no a-transition available anymore with which the 2 −a→_{T(Π_1)} 1 transition can synchronise.
The second condition, APC2, prevents that projections of new synchronisation laws in V̂ are defined over actions already present in the processes of an input network. Otherwise, an LTS network could be altered without actually defining any transformation rules. Formally, if the left LTS pattern L_j (j ∈ 1..|R|) of the j-th rule in R is matched on the i-th process (i ∈ 1..n) in Π, then the v_j of laws in V̂ may not be defined over actions in A_i.

[Fig. 10 (continued): c An input network N = (Π, V). d The system LTSs before (G_N) and after (G_T(N)) transformation are not branching bisimilar; the transformed model is no longer able to synchronise the a-transition performed at state 2 that was not transformed, since the loop at state 3 has been transformed to a c-loop.]

As an example, consider a rule system (R, V′, V̂) with V′ = ∅ and V̂ = {(⟨a⟩, b)}. Say R contains a single transformation rule that transforms an a-loop with L_1 = R_1. Note that rule R_1 does not change the LTSs it is applied on, and thus Lκ = Rκ. Furthermore, APC1 and ANC1 are satisfied for both V′ and V̂. If the rule system is applied on a network N = (Π, {(⟨a⟩, a)}) with Π = ⟨L_1⟩, then we obtain the network T(N) = (Π, {(⟨a⟩, a), (⟨a⟩, b)}). Clearly, the system LTSs of N and T(N) are not branching bisimilar; T(N) can perform both an a-loop and a b-loop, whereas N can only perform the a-loop. Condition APC2 does not allow the application of the rule system on N, as (⟨a⟩, b) ∈ V̂ involves label a, which is present in A_1.
The third and fourth condition concern how the set of laws V (of the network N on which the rule system is applied) is related to the set of laws V′ (that the rule system expects). Consider a set of match pairs m describing the transformation of N as defined by the rule system. The application of the matches in m is distributed over a sequence of transformation steps. Let M be a vector of match pairs defining a single transformation step in the sequence. For each transformation step M, it is required that APC3 and APC4 hold.
The third condition expresses that the set of synchronisation laws V of network N must contain all the synchronisation laws in V′ that the rule system expects.
An application of a rule system on an LTS network N for which APC3 does not hold is given in Fig. 11. The corresponding match pair vector is M = (m_1 : L_1 → Π_1, m̂_1 : R_1 → T(Π_1)). Condition APC3 does not hold, since the law (⟨a⟩, c) ∈ V′ of the rule system presented in Fig. 11a is not included in the set of laws of the input network N, shown in Fig. 11c. The analysis condition ANC1 and the application conditions APC1 and APC2 do hold.
The rule system transforms a-transitions to d-transitions. The local a-transitions result in global c-transitions due to law (⟨a⟩, c) ∈ V′. To ensure that the behaviour remains equivalent, a new synchronisation law (⟨d⟩, c) ∈ V̂ is introduced such that, like the a-transitions, the d-transitions result in global c-transitions. As shown in Fig. 11b, the left and right κ-extended pattern networks are branching bisimilar, as expected.
However, when the rule system is applied on input network N, the transformation of the a-transition in Π_1 (now a d-transition) is not cut, due to the introduction of the law (⟨d⟩, c) ∈ V̂. The system LTSs before (G_N) and after (G_T(N)) transformation are given in Fig. 11d. Since V′_M ⊈ V, an analysis of the rule system does not take into account that in Π_1 the a-transition is cut. Therefore, the analysis cannot give any guarantees for the input network N.
If application condition APC3 is to be satisfied, then either network N must include the law (⟨a⟩, c) ∈ V′, or the law must be removed from V′ in the rule system. In the former case, the a-transition in Π_1 is not cut, and the system LTSs G_N and G_T(N) are branching bisimilar. In the latter case, the a-transition in L_1 is cut as well, and it follows that L^κ and R^κ are not branching bisimilar. Condition APC3 ensures that, with respect to the transitions described in the transformation rules, both the rule system and the input network cut the same transitions.
The fourth condition ensures that the rule system is aware of all the synchronisation laws in V that affect the rules in R. That is, besides the projection of synchronisation laws in V′, no other synchronisation laws in V may involve behaviour described by the rules in R.
The symmetric condition, involving V \ V̂ and A_R^M(i), applies as well.
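This requirement can be sketched as a predicate over the laws the rule system is unaware of. A minimal Python sketch, assuming a hypothetical representation: laws are pairs of an action vector (indexed by process position, with None for inactive components) and a result action, and rule_actions[i] stands in for the set of rule actions projected onto process i (the A_R^M(i) above).

```python
def check_apc4(V, V_expected, rule_actions):
    """APC4 (sketch): laws of the input network that the rule system is
    not aware of (V minus V_expected) may not involve any action that is
    described by a transformation rule."""
    for vec, _result in V - V_expected:      # laws unknown to the rule system
        for i, action in enumerate(vec):
            if action is not None and action in rule_actions[i]:
                return False
    return True
```

Replaying the example of Fig. 12: the unknown law (⟨a, c⟩, e) is rejected as soon as one of its components is a rule action.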
A transformation that does not satisfy APC4 is presented in Fig. 12. Condition APC4 is not satisfied because the law (⟨a, c⟩, e) ∈ V \ V′ of input network N, shown in Fig. 12c, contains behaviour that influences the transformation rules of the rule system, shown in Fig. 12a. The transformation does satisfy conditions ANC1, APC1, APC2, and APC3. (Fig. 12d shows the system LTSs before (G_N) and after (G_T(N)) transformation; they are not branching bisimilar, as the transformed model is unable to synchronise the b-transition of Π_2 that was not transformed, because the a-loop in Π_1 has been transformed to an f-loop.)
The rule system transforms a-loops to f-loops and b-loops to g-loops. In an attempt to preserve the semantics, the f- and g-actions, like the a- and b-actions, are forced to synchronise, resulting in d-actions. As a result, the left and right κ-extended pattern networks, presented in Fig. 12b, are branching bisimilar.
However, if the rule system is applied on network N, then the possibility of synchronising the a- and c-loops is lost. It follows that G_N can perform an e-loop while G_T(N) cannot (see Fig. 12d). Hence, the two system LTSs are not branching bisimilar.
If APC4 is to be satisfied, then the rule system must contain the synchronisation law (⟨a, c⟩, e) ∈ V′. Additionally, due to ANC1, the b-transition must then be present in L_2. It then becomes visible, when comparing the κ-extended pattern networks L^κ and R^κ, that the possibility of performing an e-loop is lost in R^κ.
Note that for a confluent rule system, all transformation sequences have the same end result. Therefore, it is sufficient that these conditions hold for a single transformation sequence from the input network to the final output network.
The analysis. For the application of a rule system, the aim is to determine whether a verified rule system is sound for a network N on which it is applied. Before transformation of a network N, it is checked whether the application conditions APC1, APC2, APC3, and APC4 are satisfied. Checking applicability of a rule system (R, V′, V̂) on an input network N = (Π, V) is performed as follows:
1. Calculate the maximum set of match pairs m.
2. Check whether APC1 and APC2 hold for all (m, m̂) ∈ m.
3. Distribute the match pairs in m over a sequence of transformation steps defined by M_1, ..., M_k with k ∈ ℕ.
4. Check whether APC3 and APC4 are satisfied with respect to each M_i (i ∈ 1..k).
If all steps return positive results, then the rule system is applicable on N.
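The control flow of these four steps can be sketched as follows; the four callables are mere placeholders for the computations described above, not Refiner's actual implementation.

```python
def applicable(maximal_matches, check_apc1_apc2, distribute, check_apc3_apc4):
    """Four-step applicability check (sketch with placeholder callables)."""
    matches = maximal_matches()                       # step 1: match pairs m
    if not all(check_apc1_apc2(m) for m in matches):  # step 2: APC1 and APC2
        return False
    steps = distribute(matches)                       # step 3: steps M_1..M_k
    return all(check_apc3_apc4(M) for M in steps)     # step 4: APC3 and APC4
```

For instance, with trivially succeeding checks, `applicable(lambda: ["m1"], lambda m: True, lambda ms: [tuple(ms)], lambda M: True)` returns True; making any check fail yields False.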

Time complexity of the analysis.
To check the applicability of a rule system, a set of matches is required. Say that n is the size of the input network, m is the size of the set of matches, and t is the largest number of transitions in an LTS pattern of a rule in the rule system.
In the first step, the maximum set of matches is calculated. In general, the graph matching problem [DP06] is NP-complete. However, it has been shown in [DP06] that if the graphs have a root, all states are reachable from that root, and each state has a bounded number b of outgoing transitions, then the complexity is independent of the size of the input graph, instead depending only on b and the number of transitions t in the left pattern of the transformation rule. The complexity is then O(Σ_{i=0}^{t} b^i). Since our LTSs are weakly connected, they meet these requirements.
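The bound O(Σ_{i=0}^{t} b^i) is a geometric sum, i.e., O(b^t) for b > 1. A small, purely illustrative Python helper makes the growth concrete:

```python
def match_bound(b, t):
    """Work bound for rooted matching with out-degree bound b and a left
    pattern with t transitions: the sum of b**i for i = 0..t."""
    return sum(b ** i for i in range(t + 1))
```

For example, match_bound(2, 3) evaluates to 1 + 2 + 4 + 8 = 15, independent of the size of the input graph.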
In the second analysis step, application conditions APC1 and APC2 are verified. When APC1 is checked, each law (v̂, a) ∈ V̂ is inspected for every match. When APC2 is checked, for each match and each law (v̂, a) ∈ V̂, it is checked whether for all i ∈ 1..n it holds that v̂_i is not an action of the corresponding matched process. Per match and law, this takes O(n) lookups, so the second step requires O(n · m · |V̂|) operations in total. The third step distributes the matches in m over a sequence of match pairs. The set is traversed as a vector of match pairs of size n that contains as many pairs from m as possible. The time complexity is O(m²).
The fourth and final step verifies whether APC3 and APC4 hold. The check for both conditions needs to iterate over the vectors of match pairs M_i (i ∈ 1..k) and the indices of all synchronisation laws in V and V′. The number of matches in a match vector is limited to the number of matches in the set m. Therefore, this step has a running time of O(n · m · (|V| + |V′|)). The running time of steps 2–4 together therefore amounts to O(n · m · (|V| + |V′| + |V̂|) + m²).

Correctness of the verification
In this section, we prove the correctness of the rule system verification as described in the previous section. We prove the soundness of the rule system verification in Proposition 4.1. The completeness of the verification approach is shown in Proposition 4.2.
To simplify the proofs, we strengthen condition APC1 such that the correctness proof can be formulated on a single transformation step instead of a sequence. Application condition APC1 is formulated over a set of matches. However, since rule systems are confluent, there is always a single result LTS after a series of applications of a rule system. Therefore, we may consider a 'merged' match without influencing the correctness of the verification technique over confluent sequences.
In line with this simplification, we assume that the rule system has n rules, and that rule R_i (i ∈ 1..n) matches on Π_i in the LTS network that the rule system is applied on. For a single transformation step, the rules in R can be reordered according to this assumption, with an appropriate projection of the rule system. For confluent rule systems, the result can be lifted to confluent sequences of transformation steps, and the strengthened condition APC1′ can be weakened again to APC1.
In this case, where R_i matches on Π_i, we do not have to consider the projection of synchronisation laws, since V′ = V′_M for a set of synchronisation laws V′ and vector of match pairs M. Hence, we simply write V′ instead of V′_M.
To prove soundness of the technique, we show that from a bisimulation relation B between L^κ and R^κ, a bisimulation relation C can be constructed between an arbitrary N and the corresponding T(N). For this purpose, we first need to prove some lemmas. For clarity, we refer with p, p̂, p′, ... to states in a left pattern network, with q, q̂, q′, ... to states in a right pattern network, with s, ŝ, s′, ... to states in an input network, and with t, t̂, t′, ... to states in an output network.
The κ-extended pattern networks can be seen as an abstraction of the input networks. In a κ-extended pattern network, individual processes can only leave the κ-state via σ-transitions and only enter the κ-state via ε-transitions. Hence, for all transitions enabled by laws in the original (non-κ-extended) pattern network, the processes that are in the κ-state before such a transition is executed are still in the κ-state after the transition has been followed. This property is formalised in Lemmas 4.1 and 4.2.
Lemma 4.1 Consider a pattern network P = (Π̄, W) of size n. Then, for every law (v, a) ∈ W, all states p, p′ ∈ S_{P^κ} with p −v,a→_{P^κ} p′, and all i ∈ 1..n, we have p_i = κ ⟺ p′_i = κ.

Proof Let (v, a) ∈ W and p, p′ ∈ S_{P^κ} such that p −v,a→_{P^κ} p′. Let i ∈ 1..n. We distinguish two cases: p_i = κ or p′_i = κ. We only discuss the first case; the proof for the other case is symmetric.

Say p_i = κ. By Definition 3.7, σ-transitions can only be performed from a κ-state. Similarly, a process can only enter a κ-state by performing an ε-transition. Hence, since (v, a) ∈ W and p_i = κ, we must have v_i = •. It follows from Definition 4.2 that p_i = p′_i. Hence, we have p′_i = κ. Since the proof for the case p′_i = κ is symmetric, it follows that p_i = κ ⟺ p′_i = κ.

Lemma 4.1 can be applied inductively to obtain a similar result for τ-paths. Synchronisation laws with τ as result action are, by Definition 4.6, never κ-synchronisation laws. Therefore, every process that is in a κ-state before a sequence of τ-transitions is still in the κ-state after the sequence of τ-transitions, as shown by Lemma 4.2.

Lemma 4.2 Consider a pattern network P = (Π̄, W) of size n. Then, for all states p, p′ ∈ S_{P^κ} with p −τ→*_{P^κ} p′ and all i ∈ 1..n, we have p_i = κ ⟺ p′_i = κ.

Proof Consider states p, p′ ∈ S_{P^κ} such that p −τ→*_{P^κ} p′. The use of τ-actions is not allowed in laws in W^κ; hence, it follows that any law (v, τ) involved in the path is a member of W. Therefore, we have p −τ→*_P p′. The remainder of the proof follows directly from Lemma 4.1 and structural induction on p −τ→*_P p′.

Due to the κ-laws, branching bisimulation relations between L^κ and R^κ preserve κ-states, i.e., when two state vectors are related, any κ-states present in one vector are also present in the other vector at the same positions, and vice versa. This is expressed in Lemma 4.3.

Lemma 4.3 Consider two pattern networks L and R of size n. Then, for all states p ∈ S_{L^κ} and q ∈ S_{R^κ} with p ↔b q and all i ∈ 1..n, we have p_i = κ ⟺ q_i = κ.

Proof Consider states p ∈ S_{L^κ} and q ∈ S_{R^κ} with p ↔b q. For each i ∈ 1..n, we can distinguish two cases: p_i = κ or q_i = κ. We only discuss the first case; the proof for the other case is symmetric.

Say p_i = κ. By Definition 3.1, there is at least one state p̂ ∈ I_{L_i}, and furthermore, according to Definition 3.7, there is a law enabling a transition p −μ→_{L^κ} p′ for some p′ with p′_i = p̂ (by Definition 4.6). Since p ↔b q and μ ≠ τ, we have q −τ→*_{R^κ} q̂ −μ→_{R^κ} q′. It follows that q̂_i −σ_{p̂}→ q′_i. Since σ-transitions can only be performed from κ-states, we have q̂_i = κ. Finally, from Lemma 4.2 it follows that q_i = κ.
As κ-extended pattern networks form an abstraction of the matched input network, it is desirable that those states representing states not removed by the transformation are related to themselves. In the κ-extended left and right pattern networks, the glue-states and the κ-states represent the states that are kept. As shown in Lemma 4.4, this can be directly lifted to the network-global level when state vectors only contain exit- and κ-states. However, this cannot be guaranteed for state vectors that also contain in-states, due to the lack of a unique transition (such as the σ-, ε-, and γ-transitions) leaving those in-states. If a state vector p consists of in-, out- and κ-states, then p may be related to a different state vector q via a τ-path originating from an in-state p_i contained in p. When matches on initial states are restricted to exit-states, Lemma 4.4 is sufficient to show that the initial states of the input and output networks of a transformation are related.
Proof Consider a state p ∈ I_{L^κ}. We will construct a state p′ and a synchronisation law (v^κ, μ) ∈ V^κ such that p −v^κ,μ→ p′. We construct v^κ and p′ component-wise for all i ∈ 1..|R|, distinguishing whether p_i = κ; by Definition 3.1, there exists an x ∈ I_{L_i}. Let μ be the unique result action corresponding to v^κ. Since for all i ∈ 1..|R| either p_i = κ or p_i ∈ E_{L_i}, there is a transition p_i −v^κ_i→ p′_i. It follows that there is a transition p −v^κ,μ→ p′. By Definition 3.6, there is a state q ∈ S_{R^κ} such that p ↔b q. Furthermore, since (v^κ, μ) ∈ V^κ, we have μ ≠ τ. Hence, there are states q̂ and q′ with q −τ→*_{R^κ} q̂ −μ→_{R^κ} q′ such that p ↔b q̂ and p′ ↔b q′. We show that p = q̂, from which it follows that p ↔b p. Consider an i ∈ 1..n. The σ-transitions only leave from κ-states (in which case p_i = κ), and the ε_{p_i}-transitions only leave from the state p_i; i.e., each of the p_i −v^κ_i→ p′_i transitions carries a unique label identifying the states connected by the transition. Both p_i and q̂_i can perform the v^κ_i-transition directly. It follows that p_i = q̂_i. Therefore, p = q̂, and p ↔b q̂ can be rewritten to p ↔b p.

To formally define how a κ-extended network relates to an input network, we introduce the mapping of state vectors as presented in Definition 4.8. Similar to matches for a single rule, the mapping of a state vector of a κ-extended pattern network defines how a state vector of the pattern is mapped to a state vector of an LTS network.

Definition 4.8 (State vector mapping) Consider an LTS network N = (Π, V) and a pattern network P = (Π̄, W) of size n with corresponding matches m_i : Π̄_i → Π_i for all i ∈ 1..n. We say a state vector p ∈ S_{P^κ} is mapped to a state vector s ∈ S_N, denoted by p ⊑ s, iff for all i ∈ 1..n: if p_i ≠ κ, then s_i = m_i(p_i), and if p_i = κ, then s_i is not matched by m_i.

By referring to matches of the individual vector elements, a state vector is mapped onto another state vector. Since the κ-state represents unmatched states, the mapping relates the κ-state to all unmatched states. Hence, for every state s ∈ S_N there is a state p ∈ S_{P^κ} that maps on state s (Lemma 4.5).
Since κ-states represent all unmatched states, we need to construct states that specify explicitly which unmatched state is represented at the moment. The state s′ := s[p_i | P(i)] denotes the state constructed from states s and p such that, for all i ∈ 1..n, if predicate P(i) holds, we have s′_i = p_i, and if not, we have s′_i = s_i. For example, the state s′ := s[m_i(p_i) | p_i ≠ κ] is produced from matches of p, where for all i ∈ 1..n, s′_i = m_i(p_i) if p_i ≠ κ, and s′_i = s_i if p_i = κ. With the exception of transitions enabled by W^κ, the input network is able to simulate the behaviour of the κ-extended network. The transitions enabled by κ-laws form an abstraction of all transitions that may possibly enter or leave states of the input network matched by the glue-states of the pattern network. That is, for laws (v, a) ∈ W, the mapping preserves the branching structure of the pattern network. In the following, we formalise this in a number of lemmas.
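The substitution notation s[p_i | P(i)] is a plain component-wise update. The following Python sketch, with state vectors as tuples and the predicate as a function on indices, mirrors the definition above.

```python
def substitute(s, p, pred):
    """The state s[p_i | P(i)]: component i is taken from p when the
    predicate holds for index i, and from s otherwise."""
    return tuple(p[i] if pred(i) else s[i] for i in range(len(s)))
```

For example, substituting only at index 1 of (1, 2, 3) with components from (9, 8, 7) yields (1, 8, 3).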
The state vector mapping preserves the branching structure of the pattern network. Similar to a match, the state vector mapping (Definition 4.8) preserves the branching structure of the pattern network for the set of matching laws. Before proving this claim, we first show that the state vector mapping is complete, i.e., the mapping relation maps to all states of any input network. More precisely, for each (vector) state s in the input network, there is a (vector) state in the κ-extended pattern network that is mapped on s. This is formally proven in Lemma 4.5.

Lemma 4.5 Consider an LTS network N = (Π, V) and a pattern network P = (Π̄, W) of size n with W ⊆ V and corresponding matches m_i : Π̄_i → Π_i for all i ∈ 1..n. Then, ∀s ∈ S_N. ∃p ∈ S_{P^κ}. p ⊑ s.

Proof Let s be a state in S_N. Following the definition of state vector mapping (Definition 4.8), we construct a p ∈ S_{P^κ} such that p ⊑ s. For each i ∈ 1..n, if there is an x ∈ S_{Π̄_i} with m_i(x) = s_i, then we take p_i = x; otherwise, we take p_i = κ. By construction, it holds that p ⊑ s. Finally, by Definition 3.7, x, κ ∈ S_{Π̄^κ_i}, and thus p ∈ S_{P^κ}.

Lemmas 4.6 and 4.7 express that the state vector mapping preserves the branching structure of the pattern network. Lemma 4.6 states that for each transition in the pattern network there is a corresponding transition in the mapped network. Lemma 4.7 extends this to sequences of τ-transitions. In Lemma 4.6 and Lemma 4.7, the end state of the transition, respectively the sequence of transitions, in the input network is identified. In short, for vector states p and s, if p ⊑ s, then for each index i: firstly, transitions taken in pattern Π̄_i can be mimicked by transitions in process Π_i from the mapped state, leading to a state mapped by the target state in Π̄_i; and secondly, if the state p_i is a κ-state, then no transitions from the corresponding state s_i are taken.

Lemma 4.6 Consider an LTS network N = (Π, V) and a pattern network P = (Π̄, W) of size n with W ⊆ V (APC3) and corresponding matches m_i : Π̄_i → Π_i for all i ∈ 1..n. Then, for each transition p −v,a→_{P^κ} p′ with (v, a) ∈ W and each s ∈ S_N with p ⊑ s, there is a transition s −v,a→_N s′ with p′ ⊑ s′.

Proof Since W ⊆ V (APC3) and (v, a) ∈ W, we must have (v, a) ∈ V. Consider an i ∈ 1..n. We distinguish two cases: either p_i ≠ κ, and the local transition of the pattern is mimicked by the matched process from the mapped state, or p_i = κ, and no transition of process i is taken. In conclusion, we have (v, a) ∈ V and a transition s −v,a→_N s′ with p′ ⊑ s′.

Lemma 4.7 Consider an LTS network N = (Π, V) and a pattern network P = (Π̄, W) of size n with W ⊆ V (APC3) and corresponding matches m_i : Π̄_i → Π_i for all i ∈ 1..n. Then, for each τ-path p −τ→*_{P^κ} p′ and each s ∈ S_N with p ⊑ s, there is a τ-path s −τ→*_N s′ with p′ ⊑ s′.

Proof Let p, p′ ∈ S_{P^κ} and s ∈ S_N such that p −τ→*_{P^κ} p′ and p ⊑ s. Since τ result actions are not allowed in W^κ, any law (v, τ) in the sequence of τ-steps must be a member of W. The proof now follows directly from Lemma 4.6 and structural induction on p −τ→*_{P^κ} p′.
When two states s and t from the input and the transformed network, respectively, are related by a branching bisimulation relation, and from s a certain transition is enabled, then from t it must be possible to simulate this behaviour (and vice versa). Even when this transition is not matched by the transformation rule system, it may still be the case that t is matched by a state that is not a glue-state. In this case, t cannot simulate s directly; thus, there must be a τ-path to some other state that is able to directly simulate s. The existence of such a state is proven in Lemma 4.8.
In Lemma 4.8, we show that, in the presence of a state vector mapping and a witness showing that there is a transition leaving the pattern, all active glue-states (all glue-states involved in that transition) or κ-states are reachable by related states. More precisely, when a state p, mapped to a state from which the pattern is left, is related to a state q, then there is a τ-path from q to a state q̂ such that p ↔b q̂, and there is a state q′ which is the corresponding entry-point of a given unmatched transition that leaves q̂. This follows from two facts. First, the κ-synchronisation laws have synchronisation vectors uniquely identifying the active states within q. Second, due to the matching conditions (Definition 3.4), a matched state must be a glue-state when there is a transition leaving or entering the corresponding matched state.

Lemma 4.8 Let N = (Π, V) be an LTS network of size n, and let (R, V′, V̂) be a rule system applicable on N such that L^κ ↔b R^κ. Let M be a vector of match pairs of size n such that m_i : L_i → Π_i and m̂_i : R_i → T(Π_i) for all i ∈ 1..n. Then, for every law (v, a) ∈ V, states s, s′ ∈ S_N with s −v,a→ s′, and states p, p′ ∈ S_{L^κ}, q ∈ S_{R^κ} with p ↔b q, p ⊑ s, and p′ ⊑ s′, there exist q̂, q′ ∈ S_{R^κ} such that q −τ→*_{R^κ} q̂ with p ↔b q̂, p′ ↔b q′, ∀i ∈ Ac(v). q̂_i = p_i ∧ q′_i = p′_i, and ∀i ∈ 1..n \ Ac(v). q̂_i = q′_i.

Proof Consider a synchronisation law (v, a) ∈ V and states s, s′ ∈ S_N such that s −v,a→ s′. Let p, p′ ∈ S_{L^κ} and q ∈ S_{R^κ} be states such that p ↔b q, p ⊑ s, and p′ ⊑ s′. A κ-extended pattern network explicitly models transitions that enter and leave the embedding of the pattern network. However, it does not model the situation where the matched network moves between two unmatched states. Therefore, we have to perform a case distinction: either the transition of the input network N is represented by one of the κ-synchronisations, or the transition of the input network has no representation in L^κ. In the former case, we build the corresponding κ-synchronisation law (v^κ, μ) and obtain the required states by applying the branching bisimulation definition on p ↔b q and p −μ→ p′. In the latter case, we show that p = p′ and take q for both q̂ and q′, from which the proof will follow.
We distinguish the two aforementioned cases:

C1 There exists an i ∈ Ac(v) such that (p_i ∈ E_{L_i} ∧ p′_i ∈ I_{L_i}), (p_i ∈ E_{L_i} ∧ p′_i = κ), or (p_i = κ ∧ p′_i ∈ I_{L_i}). The three cases correspond to the situations where the γ-, ε-, and σ-transitions, respectively, are introduced in κ-extended pattern networks. We will construct a synchronisation law (v^κ, μ) ∈ V^κ enabling a transition p −μ→_{L^κ} p′ that represents the LTS pattern network abstraction of s −v,a→_N s′. We construct v^κ component-wise for all i ∈ 1..n. Let μ be the unique result action corresponding to v^κ. Since there are no matching transitions, it follows that (v^κ, μ) indeed enables the transition p −μ→_{L^κ} p′. Furthermore, by Definition 4.6, we have μ ≠ τ.

Since p ↔b q and μ ≠ τ, by Definition 3.6, we have q −τ→*_{R^κ} q̂ −μ→_{R^κ} q′ with p ↔b q̂ and p′ ↔b q′. What remains to be shown is 1) ∀i ∈ Ac(v). q̂_i = p_i ∧ q′_i = p′_i, and 2) ∀i ∈ 1..n \ Ac(v). q̂_i = q′_i:

1) Consider an i ∈ Ac(v). We distinguish two cases:
• i ∈ Ac(v^κ). Because μ is unique in V^κ and does not occur in V ∪ V′, the transition q̂ −μ→_{R^κ} q′ is enabled by (v^κ, μ) ∈ V^κ. Recall that p_i ∈ E_{L_i} ∨ p_i = κ and p′_i ∈ I_{L_i} ∨ p′_i = κ (since i ∈ Ac(v)). The v^κ_i-transition is only present between p_i and p′_i, in both L^κ and R^κ (by Definition 3.7). Therefore, we must have q̂_i = p_i and q′_i = p′_i.
• i ∉ Ac(v^κ). It follows that p_i = p′_i and q̂_i = q′_i (Definition 4.2). Recall that p_i ∈ E_{L_i} ∨ p_i = κ and p′_i ∈ I_{L_i} ∨ p′_i = κ (since i ∈ Ac(v)). However, since i ∉ Ac(v^κ), only the case where both p_i and p′_i are κ-states remains. By applying Lemma 4.3 on p ↔b q̂ and p′ ↔b q′, it follows that q̂_i = κ and q′_i = κ. Hence, q̂_i = p_i and q′_i = p′_i.

2) Consider an i ∈ 1..n \ Ac(v). Since i ∉ Ac(v), we must have i ∉ Ac(v^κ) (by construction of v^κ). It follows from Definition 4.2 that q̂_i = q′_i.

C2 For all i ∈ Ac(v) it holds that ¬(p_i ∈ E_{L_i} ∧ p′_i ∈ I_{L_i}), ¬(p_i ∈ E_{L_i} ∧ p′_i = κ), and ¬(p_i = κ ∧ p′_i ∈ I_{L_i}). We take q for both q̂ and q′, which leads to p ↔b q̂. We first show that p = p′, from which it follows that p′ ↔b q′. The proof is then completed by showing ∀i ∈ Ac(v). q̂_i = p_i; the remainder follows from q̂ = q′ = q.

We show that p = p′. Consider an i ∈ 1..n. If both p_i = κ and p′_i = κ, we trivially have p_i = p′_i. Now consider the opposite case: p_i ≠ κ or p′_i ≠ κ. Assume for a contradiction that i ∈ Ac(v). Then, by Definition 3.4, we must have p_i ∈ E_{L_i} ∨ p_i = κ and p′_i ∈ I_{L_i} ∨ p′_i = κ. Since p_i ≠ κ or p′_i ≠ κ, only the following three cases remain: (p_i ∈ E_{L_i} ∧ p′_i ∈ I_{L_i}), (p_i ∈ E_{L_i} ∧ p′_i = κ), and (p_i = κ ∧ p′_i ∈ I_{L_i}). These three cases contradict the assumptions of case C2. Hence, we must have i ∉ Ac(v). It now follows that s_i = s′_i. Finally, we have p_i = p′_i because m_i is an injection. In conclusion, p = p′. What remains to be shown is ∀i ∈ Ac(v). q̂_i = p_i. Consider an i ∈ Ac(v). Again, by Definition 3.4, we must have p_i ∈ E_{L_i} ∨ p_i = κ and p′_i ∈ I_{L_i} ∨ p′_i = κ. Based on this, we distinguish three cases: p_i = κ; p_i ∈ E_{L_i} ∧ p′_i = κ; and p_i ∈ E_{L_i} ∧ p′_i ∈ I_{L_i}. In the first case, it follows from p ↔b q and Lemma 4.3 that p_i = κ = q_i. The latter two cases each contradict one of the three assumptions of case C2, and the proof follows by contradiction.

In both C1 and C2 there exist q̂, q′ ∈ S_{R^κ} such that q −τ→*_{R^κ} q̂ with p ↔b q̂, p′ ↔b q′, ∀i ∈ Ac(v). q̂_i = p_i ∧ q′_i = p′_i, and ∀i ∈ 1..n \ Ac(v). q̂_i = q′_i.
Completeness of transition transformation. Due to the application conditions, either all process-local transitions participating in a synchronising global transition are transformed, or no process-local transitions are transformed at all. For confluent rule systems, the order of applying rules is irrelevant, and application of the transformation rules can lead to only one output network. This fact allows us to simplify the correctness proof of the verification technique by strengthening condition APC1 such that the correctness proof (Proposition 4.1) can be formulated on a single transformation step instead of a sequence.

Recall that APC1 requires that a rule transforming synchronising transitions labelled with some action a must be applicable on all a-transitions within the corresponding LTS. Since confluent rule systems have only a single possible output network, we may consider a 'merged' match consisting of all the matches in the sequence that produces the output network in a single transformation step. A transformation sequence resulting in the final output network cannot be distinguished from the application of the 'merged' match. Therefore, for proving the correctness of the technique, APC1 may be strengthened accordingly. In contrast with APC1, which requires the existence of a match that transforms synchronising transitions for all equivalent transitions, the strengthened condition APC1′ requires that a single match transforms all equivalent synchronising transitions. Indeed, this means that APC1′ requires a 'merged' match transforming all synchronising transitions in one step. Conditions APC1′ and ANC1 ensure that global transitions in the input network are always fully transformed or not transformed at all. This leads to Lemma 4.9, which states the following: if a transition in a network enabled by (v, a) ∈ V has a match on a local transition (say on v_i for some i ∈ 1..n), then for all j ∈ 1..n, the participating local v_j-transitions must be matched as well, i.e., all local transitions participating in the global transition must be matched. From this it follows that a global transition is either fully transformed or not transformed at all.

Lemma 4.9 Consider an LTS network N = (Π, V) of size n and a rule system (R, V′, V̂) such that APC1′ and ANC1 are satisfied. Let P = (Π̄, W) be a representative for the left and right pattern network, and let m_i : Π̄_i → Π_i (i ∈ 1..n) be the matches specifying the embedding of P in N.

Proof Consider an i ∈ 1..n. We distinguish three cases:
• v_i ≠ • ∧ Ac(v) = {i}. Law (v, a) constitutes independent behaviour, and the proof follows from the premises.
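The 'fully transformed or not at all' property that Lemma 4.9 establishes can be phrased as a simple predicate over a law vector and a per-process matching flag. A minimal Python sketch, under the hypothetical encoding that the inactive action • is None and matched[i] records whether the local v_i-transition of process i is matched:

```python
def fully_matched(law_vector, matched):
    """Among the positions active in the law (components different from
    None), either all local transitions are matched or none is."""
    active = [matched[i] for i, v in enumerate(law_vector) if v is not None]
    return all(active) or not any(active)
```

For the law (⟨a, •, a⟩, ...), matching both active positions or neither satisfies the predicate; matching only one violates it.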
To prove the proposition, we have to show that C is a bisimulation relation. This requires proving that C relates the initial states of N and T_M(N), and that C satisfies Definition 3.6.

• C relates the initial states of N and T_M(N). We have I_N = I_{T_M(N)}. Hence, it suffices to show ∀s ∈ I_N. s C s.
Take p ∈ S_{L^κ} and q ∈ S_{R^κ} such that p B q, p ⊑ s, q ⊑ t, and ∀i ∈ 1..n. (p_i = κ ∨ q_i = κ) ⇒ s_i = t_i (2). Furthermore, by Lemma 4.5, there is a state p′ ∈ S_{L^κ} with p′ ⊑ s′. We distinguish two cases. We shall first establish that (v, a) ∈ V′, after which we can apply Lemma 4.9 to obtain the corresponding a-transition in L^κ. Assume for a contradiction that (v, a) ∉ V′. Since there is an i ∈ Ac(v) with p_i −v_i→_{L^κ_i} p′_i, the v_i action must be a member of the actions of L_i, i.e., v_i ∈ A_{L_i}. However, since i ∈ Ac(v) and (v, a) ∈ V \ V′, by APC4, it must hold that v_i ∉ A_{L_i}. Hence, by contradiction, we have (v, a) ∈ V′. Now, by Lemma 4.9, there is a transition p −a→_{L^κ} p′ enabled by (v, a). Since p B q, by Definition 3.6, we have the following two cases:
• a = τ and p′ B q. To show s′ C t, all ingredients but one are there. In particular, we still need to show that ∀i ∈ 1..n. (p′_i = κ ∨ q_i = κ) ⇒ s′_i = t_i. Consider an i ∈ 1..n with p′_i = κ ∨ q_i = κ. Since V^κ and V are disjoint, we must have (v, a) ∉ V^κ. If q_i = κ, then by Lemma 4.3, p_i = κ. Since p_i = κ or p′_i = κ, and (v, a) ∉ V^κ, it follows that i ∉ Ac(v). Hence, by Definition 4.2, s′_i = s_i. Finally, by (2), s_i = t_i.
• q −τ→*_{R^κ} q̂ −a→_{R^κ} q′ with p B q̂ and p′ B q′. From Lemma 4.7, it follows that there is a state t̂ ∈ S_{T_M(N)} with q̂ ⊑ t̂, a τ-path t −τ→*_{T_M(N)} t̂, and ∀i ∈ 1..n. q_i = κ ⇒ t̂_i = t_i (3). Furthermore, by Lemma 4.6, we have a state t′ ∈ S_{T_M(N)} with q′ ⊑ t′, a transition t̂ −a→_{T_M(N)} t′, and ∀i ∈ 1..n. q′_i = κ ⇒ t′_i = t̂_i (4). What remains to be shown is that s C t̂ and s′ C t′. For the relevant indices i, it follows from (2) and (3) that s_i = t_i = t̂_i. In conclusion, s C t̂.
• Similarly, for s′ C t′, all that is left to show is ∀i ∈ 1..n. (p′_i = κ ∨ q′_i = κ) ⇒ s′_i = t′_i. Consider an i ∈ 1..n with p′_i = κ ∨ q′_i = κ. By Lemma 4.1, q̂_i = κ iff q′_i = κ. Hence, p′_i = κ ∨ q̂_i = κ. From s′ C t̂, it follows that s′_i = t̂_i. Finally, by Lemma 4.3, p′_i = κ iff q′_i = κ, and it follows from (4) that t′_i = t̂_i. Therefore, s′_i = t̂_i = t′_i. In conclusion, s′ C t′.

¬1. Because case 1 does not hold, the transition is not matched by the rule system. Therefore, for all states p, p′ ∈ S_{L^κ} with p ⊑ s and p′ ⊑ s′, there is no transition p −a→_{L^κ} p′. Moreover, by Definitions 3.7 and 3.4, for each i ∈ 1..n, state p_i is either a κ-state or an exit-state (in E_{L_i}), and state p′_i is either a κ-state or an in-state (in I_{L_i}) (5). By applying Lemma 4.8, we get states q̂, q′ ∈ S_{R^κ} such that there is a τ-path q −τ→*_{R^κ} q̂ with related states p B q̂ and p′ B q′, and the states have the following two properties: ∀i ∈ Ac(v). q̂_i = p_i ∧ q′_i = p′_i (6), and ∀i ∈ 1..n \ Ac(v). q̂_i = q′_i (7). From Lemma 4.7 it follows that there is a state t̂ ∈ S_{T_M(N)} with q̂ ⊑ t̂, a τ-path t −τ→*_{T_M(N)} t̂, and ∀i ∈ 1..n. q_i = κ ⇒ t̂_i = t_i (8). We construct a state t′ from t̂ and q′.

For a rule system that is not property preserving, there may be an input network N with a vector of matches M such that N ↔b T(N). However, it is guaranteed that there also exists an LTS network N′ and a vector of matches M′ such that N′ and T(N′) are not branching bisimilar.
Proposition 4.2 Consider a rule system (R, V′, V̂). Let M be the set of all LTS networks, and let N be the set of all possible vectors M of n match pairs defining a transformation step using the rule system for an LTS network N = (Π, V) ∈ M of size n. Such a vector M consists of tuples (m_i, m̂_i), with m_i : L_i → Π_i and m̂_i : R_i → T(Π_i), respectively. The following holds: if N ↔b T_M(N) for all N ∈ M and M ∈ N, then it follows that L^κ ↔b R^κ.

Experiments
Refiner is implemented in Python 3 and can be run from the command line. It is platform-independent, and allows performing behavioural transformations of LTS networks, and checking semantics and property preservation. It integrates with the action-based, explicit-state model checking toolsets Cadp [GLMS11] and mCRL2 [CGK+13]. These tools can be used to specify and verify concurrent systems. Refiner uses the mCRL2 tool LtsCompare to perform bisimilarity comparisons with an implementation of the Groote-Wijs algorithm [GW16].
For the experiments presented in this section, Refiner was compiled using Nuitka to obtain a free performance improvement. We ran Refiner on the standard machines of the DAS-5 cluster [BEdL+16], which have an Intel Haswell E5-2630-v3 2.4 GHz processor and 64 GB memory, running CentOS Linux 7.2. Each experiment was allowed to run for at most 80 hours and was aborted in case the machine ran out of memory.
We have performed two types of experiments. The first setup compares traditional model checking with the transformation verification approach presented in this work. The results are reported in Sect. 5.1. The second setup aims to compare the previous transformation verification algorithm (Refiner v1) with the algorithm presented in this paper (Refiner v2). Those results are discussed in Sect. 5.2.

Comparing traditional model checking and transformation verification
The goal of this experimental setup is to compare the running time of transformation verification with that of traditional model checking. For model checking, we have selected the explicit-state model checker Cadp. For transformation verification, we use Refiner with the algorithms presented in this article.
We have selected a set of base models for verification and transformation. Each base model, say N1, was transformed using Refiner, resulting in a new model N2. For two cases, we applied two different rule systems on the base model; the resulting models are called N2A and N2B. Another two cases were transformed even further, resulting in a model N3.
Each of the models is verified for the absence of deadlocks. Likewise, the rule systems are verified for the preservation of the absence of deadlocks, i.e., the rule systems may not introduce new deadlocks.
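The property checked here, absence of deadlocks, amounts to asking whether any reachable state has no outgoing transitions. The following minimal sketch illustrates this check on an explicit-state LTS; the dict-of-successor-lists encoding and the function name are illustrative assumptions, not Refiner's internal representation.

```python
# Toy deadlock detector over an explicit-state LTS. The LTS is encoded as a
# dict mapping each state to a list of (action, target) pairs; this encoding
# is an assumption made for illustration only.
def deadlock_states(lts, initial):
    """Return the reachable states that have no outgoing transitions."""
    seen, stack, deadlocks = {initial}, [initial], []
    while stack:
        s = stack.pop()
        succs = lts.get(s, [])
        if not succs:          # no outgoing transitions: a deadlock state
            deadlocks.append(s)
        for _action, target in succs:
            if target not in seen:
                seen.add(target)
                stack.append(target)
    return deadlocks

# State 2 has no successors, so it is a reachable deadlock.
lts = {0: [("a", 1)], 1: [("b", 0), ("c", 2)], 2: []}
```

A rule system preserves absence of deadlocks when, for every deadlock-free input model, the output model is deadlock-free as well; the point of the technique in this article is to establish this once, on the rules, rather than re-running such a state-space search per model.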
Each base model was verified using Cadp and the verification time was measured. The rule systems were applied and verified by Refiner, and both the transformation and verification times were measured. After each transformation, the resulting model was verified again using Cadp.
For Cadp, only the execution time of the tool can be measured. Hence, to make a fair comparison, we measured the execution time of the Refiner v1 tool in the same way. For both tools, we measured the wall-clock time (i.e., the real elapsed time) using the Unix time command. Each model was subjected to one or two transformations of the following types: (1) adding internal computations, (2) adding support for lossy channels by introducing the Alternating Bit Protocol (the ABP case), and (3) breaking down broadcasts, i.e., synchronisations involving more than two parties, into combinations of two-party synchronisations (the broadcast and the HAVi leader election cases).

Discussion of results. Table 1 presents the experimental results. The first column indicates the name of a test model. For each model, the number at the end of the name reflects the order in which the models were obtained, i.e., original models are indexed by '1'. Models resulting from the application of a transformation to the corresponding original model are indexed by '2', '2A', or '2B'. The '2A'- and '2B'-models are independently obtained via two different transformations from the corresponding '1'-model. Models indexed by '3' are likewise the result of transforming the corresponding '2'-model.

The second and third columns display metrics obtained from verifying each test model using Cadp: the number of states the state space consists of (second column), and the running time (in seconds) to generate and verify it using Cadp (third column).
The fourth and fifth columns show the running time (in seconds) of applying and verifying the rule system using Refiner, respectively. The running time of the transformation is the time required to obtain a particular model by applying a rule system. Because the base models are not the result of the application of a rule system, there are no transformation and verification times for them; this is indicated with "n.a.". Note that Refiner does not actually check the state spaces of the models indexed by '2' and '3', but instead reasons about their correctness by verifying the applied transformation rules.
Finally, the sixth column provides the outcome of the verification for each case, where ✓ indicates that the system satisfies the property and ✗ indicates that it does not.
One notable result is the time needed to obtain DES 2. The network of DES 1 contains one particularly large LTS, consisting of more than four million states, making the transformation at least as costly as verifying DES 1. In fact, it is even slower, but this is due to the fact that Cadp reads compressed LTS files while Refiner does not; hence, the latter requires more time to read the input network.
The experiments demonstrate that preservation checking with Refiner is many orders of magnitude faster than verifying the property again, provided the state space is of reasonable size. This is not surprising, as the check only considers the applied change, not the resulting state space. In our benchmark set of examples, the changes can be verified practically instantaneously, resulting in most verification tasks finishing in 0.17 or 0.18 seconds on our test machines. If one were to compare Refiner's running times with those of other model checkers, the conclusion would be the same.
The usual workflow of verifying a model and verifying and applying the corresponding transformations is as follows. First, the initial model (version 1 in Table 1) is verified, using a model checker such as Cadp. Then, instead of applying a transformation and verifying the resulting model again, one can verify the transformation itself. If the transformation does not preserve the desired property, both its application and the verification of the resulting model (version 2A in Table 1) can be avoided. In case verification of the transformation produces a positive result, the transformation can be safely applied without having to verify the resulting model (versions 2, 2B and 3 in Table 1).

Refiner v1 Versus Refiner v2
The goal of this experiment is to compare the running times of the previous version of the algorithm (Refiner v1) and the algorithm presented in this article (Refiner v2). For this, we have generated a scalable set of rule systems that model the transformation of a token ring.
We measured the time both Refiner versions spent building and verifying the state space. The state space construction and verification algorithms are the only difference between Refiner v1 and v2; therefore, the elements that are identical in both versions are excluded from the measurements. Although state space generation and verification dominate the running time, for the small models, including tasks such as reading the input would introduce unnecessary noise. By removing this noise, we are able to observe the direct impact the new algorithm has on the performance of the tool.
For time measurement, we used the Python method time.time(). This is sufficiently accurate, even for the smaller models, as it can measure differences of less than a hundredth of a second between the Refiner versions.
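The measurement can be sketched as follows; the helper name and the dummy workload standing in for a bisimilarity check are illustrative assumptions, not Refiner code.

```python
import time

def timed(check):
    """Run a check and return its result plus the elapsed wall-clock time,
    measured with time.time() as in the experiments."""
    start = time.time()
    result = check()
    return result, time.time() - start

# A dummy workload stands in for a state-space construction/verification step.
result, elapsed = timed(lambda: sum(range(100_000)))
```

Timing only the call itself, rather than the whole process, is what excludes the input-reading noise mentioned above from the measurements.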
We ran both Refiner v1 and v2 in quiet mode to limit the time spent writing messages to the standard output. Refiner v1 needs to check all κ-extended pattern networks of subsets of the set of transformation rules. Refiner v1 can distribute these checks over several cores to increase performance. For completeness' sake, the Refiner v1 experiments were run with both a single thread and multiple threads (eight in the case of a standard DAS-5 machine). The former allows a better comparison of the theoretical performance improvements, as Refiner v2 only uses a single thread. The latter allows a more practical comparison, showing the typical performance of Refiner v1 in its common use.
The largest check performed by Refiner v1 considers the entire set of transformation rules, when the left and right κ-extended pattern networks are checked for branching bisimilarity. This largest check is equivalent to the check proposed in this work and performed by Refiner v2. This follows from the improved theoretical results presented in the current article, which prove that only this largest check is required. Hence, the expectation is that Refiner v1 will never perform better than Refiner v2.
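The difference in the number of checks can be made concrete with a small counting sketch. For a set of n dependent rules, Refiner v1 performs one check per non-empty subset, i.e., 2^n − 1 checks, of which the check on the full set is the largest; Refiner v2 performs only that single check. The rule names below are hypothetical.

```python
from itertools import combinations

def v1_checks(rules):
    # Refiner v1: one bisimilarity check per non-empty subset of a set of
    # dependent transformation rules (2**n - 1 checks in total).
    return [set(c) for size in range(1, len(rules) + 1)
                   for c in combinations(rules, size)]

def v2_checks(rules):
    # Refiner v2: a single check on the full set of dependent rules, which
    # coincides with the largest check performed by v1.
    return [set(rules)]

rules = ["r1", "r2", "r3", "r4"]
```

Since the v1 check on the full set is exactly the v2 check, v2's running time is a lower bound on v1's, matching the expectation stated above.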

Invocation of tools.
All generated rule systems were verified for full semantic preservation using single-threaded Refiner v1, multi-threaded Refiner v1, and Refiner v2. For reproducibility of the experiment, we explain the commands used for this experiment below.
For Refiner v1 using a single thread, the following command was used:

refiner -q -t 1 -c <rule_system>

The argument -q indicates that Refiner should run in quiet mode, i.e., no messages are sent to the standard output. The maximum number of threads is set using the -t argument; here, -t 1 expresses that only a single thread is used. The argument -c <rule_system> tells Refiner to verify the rule system <rule_system>. In this experimental setup, the models are named gen i with i ∈ 2..n. For Refiner v1 using multiple threads, we used the command:

refiner -q -c <rule_system>

Without the -t argument, Refiner creates a thread for each core of the machine and distributes the checks over these threads. On a standard DAS-5 machine, eight threads are used. The remaining arguments are the same as for the single-threaded variant. Refiner v2 is invoked using the following command:

refiner -q -c2 -c <rule_system>

The results clearly show that the algorithm presented in this article (Refiner v2) outperforms both the single- and multi-threaded variants of the previous version of the algorithm (Refiner v1). As mentioned before, this is no surprise, as the largest check performed by Refiner v1 is the same as the check performed by Refiner v2. The extra checks that Refiner v1 performs consider the projected rule systems of all subsets of dependent transformation rules. These projected rule systems become exponentially smaller as the size of the subset decreases, thereby also decreasing the size of the state space analysed in each check. The decreasing size of these extra checks explains why the running time of Refiner v1 is only a few factors larger than that of Refiner v2.
A last observation we can make based on Table 2 is that the running time ratio of Refiner v1 to v2 seems to increase. To investigate whether this is a trend, we have plotted the ratios between the different Refiner versions in Fig. 14. The horizontal axis depicts the number of rules in the generated rule system, and the vertical axis indicates the ratio. Although the number of data points is limited, the graph gives us some insight into the practical running time improvements.
The ratio of Refiner v1 with a single thread to Refiner v1 with eight threads is shown as the continuous line, where the diamonds indicate the data points. The ratio shows a general decline towards 1 as n grows, i.e., for large n, the benefit of the extra threads becomes negligible. This is unexpected, as more cores should be able to verify more checks in the dependency set simultaneously. Upon further inspection, we found that the utilisation of the cores was not efficient. Refiner performs smaller checks before larger checks; hence, the largest check is performed last. Thus, in the worst case, the remaining cores are idle while this final check is performed.
For the same reason, there is a sudden decline in the ratio from a token ring transformation with three rules to one with four rules. At three rules, there are exactly eight checks, so the eight cores are utilised optimally, whereas at four rules, sixteen checks need to be performed, but the cores are poorly utilised as the larger checks are performed last. Finally, at two rules, there are only four checks while there are eight cores available; as not all cores can be put to use, only a small performance gain is obtained. We chose not to optimise the distribution of checks over the available cores for Refiner v1, as Refiner v2 is by definition more efficient.
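The poor core utilisation can be illustrated with a small scheduling simulation. The cost model below, where a check on a subset of size k costs 2^k, is a hypothetical stand-in for the exponentially growing check sizes; it is not measured data, and the function names are ours.

```python
import heapq
from itertools import combinations

def makespan(job_costs, workers):
    """Finish time when jobs are handed out in list order, each to the
    earliest-available worker (greedy list scheduling)."""
    pool = [0.0] * workers           # next-free time per worker
    heapq.heapify(pool)
    finish = 0.0
    for cost in job_costs:
        start = heapq.heappop(pool)  # earliest-available worker takes the job
        finish = max(finish, start + cost)
        heapq.heappush(pool, start + cost)
    return finish

n = 4                                # number of dependent rules
# Refiner v1 runs smaller checks first, so the largest check is scheduled last.
costs = sorted(2 ** len(s) for size in range(1, n + 1)
                           for s in combinations(range(n), size))
```

Under this model, one worker needs 80 time units while eight workers still need 20 rather than the ideal 10, because the final, largest check runs with little overlap; this is the effect observed for Refiner v1.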
The dashed line shows the ratio of the single-threaded Refiner v1 to Refiner v2, where the data points are indicated with squares. The ratio increases as n grows. Due to the limited number of data points, we cannot estimate the trend function. The running-time analysis predicts an exponential trend, but this is not visible in the data.
The dotted line shows the ratio of Refiner v1 running eight threads to Refiner v2, where the data points are indicated with triangles. This ratio increases as n grows, similarly to the ratio between the single-threaded Refiner v1 and Refiner v2. As n grows, the data points move towards the dashed line (the single-threaded Refiner v1 to Refiner v2 ratio). This is expected, as the difference between the single-threaded and multi-threaded Refiner v1 decreases as n grows, as indicated by the continuous line. At three rules, there are exactly as many cores as there are checks for Refiner v1; hence, at this point, the performance of the multi-threaded Refiner v1 is equivalent to that of Refiner v2. However, at two rules, there are more cores than checks, yet Refiner v2 still performs better than Refiner v1. As the checks are extremely small for rule systems consisting of two rules, it is likely that the overhead of the threads has a visible impact on the performance of Refiner v1.

Conclusions
We discussed the correctness of an LTS transformation verification technique. The aim of the technique is to verify whether a given LTS transformation system preserves a property ϕ, written in a fragment of the μ-calculus, for all possible input models formalised as LTS networks. It does this by determining whether the transformation system is guaranteed to transform an input network into one that is branching bisimilar to it, ignoring the behaviour not relevant for ϕ.
We demonstrated the efficiency of the verification technique compared to model checking the entire model again after it has been transformed. Many orders of magnitude of speedup can be achieved through model transformation verification.
We improved upon our previous work by reducing the number of required bisimulation checks from 2^n − 1 per set of dependent transformation rules to one per set of dependent rules. Experimentally, we demonstrated that our new verification algorithm outperforms the previous one, even if the latter uses eight threads and the new one only a single thread.
Furthermore, the expressiveness of transformation rules was extended by distinguishing between glue-states that allow incoming or outgoing transitions that enter or leave the LTS pattern, respectively. This work presents a proof of these results, which has been verified in Coq.
The property preservation check is limited to rule systems that adhere to the applicability and admissibility conditions. Input networks must be admissible as well. Furthermore, the application of a rule system on an input network must satisfy application conditions APC1 to APC4.
Even when a transformation does not preserve a given property, it may still be possible that said property holds for the output model of a specific instance of the transformation.Nevertheless, transformations that are property-preserving can be reused without the need for additional verification.

Future work.
In earlier work, we used branching bisimulation with explicit divergence [vGW96, WE13], which preserves τ-loops and therefore liveness properties. In future work, we would like to prove that the technique is also correct for this flavour of bisimulation. Moreover, we would like to investigate the practical limitations of the technique's preconditions in industrial-sized transformation systems.
In [Wij13], the technique from [WE13] has been extended to explicitly consider the communication interfaces between components, thereby removing the completeness condition ANC1 regarding synchronising behaviour being transformed (see Sect. 4.1). We wish to prove that this extension is also correct.
Finally, our framework can be extended in a number of ways to reason about additional aspects of concurrent systems. For instance, in line with the encoding proposed in [Wij07], timing information could be included in the LTSs to design timed systems and express transformations of timed behaviour. This would also introduce the possibility to analyse the impact a transformation has on the performance of a system under transformation [WF05], by means of timed branching bisimulation checking [FPW05]. The capability to reason about system performance could be further strengthened by also introducing probabilities on the LTS transitions [BK08]. Existing tools, such as PRISM [KNP11] and extensions [BESW10], could then be employed to conduct the analysis of such systems. An interesting challenge is then how to involve these probabilities in the verification of transformations as well.

Fig. 2. A transformation rule. A transformation rule r = (L, R) is applicable on an LTS G iff a match m : L → G exists according to Definition 3.4. Given a state s ∈ S_G of an input LTS and a state p ∈ S_P, we write m(p) = s to indicate that state s is matched on by state p via match m. The set m(S') = {m(s) ∈ S_G | s ∈ S'} is the image of a set of states S' ⊆ S_P through match m on an LTS G.

Fig. 3. Application of a transformation rule. Definition 3.4 (Match) A pattern LTS P = (S_P, A_P, T_P, I_P) with a set of exit-states E_P has a match m : P → G on an LTS G = (S_G, A_G, T_G, I_G) iff m is an injective LTS morphism and, for all p ∈ S_P, s ∈ S_G:

2. There is no transition matching s --a-->_G s' in L, i.e., ¬(p --a-->_L p'). Thus, both s and s' are not removed by the transformation. We distinguish two cases: State s is not matched on by m. Therefore, we must have m_κ(p) = s with p = κ. It now follows from (1) that s = t. Hence, t --a-->_T(G) s', and by reflexivity of τ-->*, t τ-->*_T(G) t --a-->_T(G) s'. We have s C t; thus, what remains to be shown is s' C s'. Since s' is not matched on, it follows from Definition 3.4 that state s' is either not matched on or matched on by an in-state. In the former case we have p' = κ, and in the latter case we have p' ∈ I_L. In both cases we can apply Lemma 3.1 to obtain p B p'. It follows that s' C s'.
T_N and S_N are the smallest relation and set, respectively, satisfying I_N ⊆ S_N and, for all s ∈ S_N, (v̄, a) ∈ V:

M_1 contains the left matches m^1_1 : L_1 → Π_1 and m^1_2 : L_1 → Π_2 with m^1_2 = {4 → 9, 5 → 10, 6 → 7}, and M_2 contains the left matches m^2_1 : L_2 → Π_1 and m^2_2 : L_2 → Π_2 with m^2_1 = {4 → 2, 5 → 5, 6 → 6} and m^2_2 = {4 → 7, 5 → 8, 6 → 9}. a A transformation sequence resulting from the application of the rule system on N = (Π, V) with match pair vectors M_1 and M_2. b Rule system (R, V′, V) removes the a-, b-transition sequence and the d-, c-transition sequence; the LTSs of the networks the system is applied on remain unchanged, as the a- and d-transitions can never synchronise and the b- and c-transitions are therefore unreachable. c The system LTSs of the networks in the transformation sequence; since the a- and c-transitions and the b- and d-transitions can never synchronise, and the d- and c-transitions in Π_1 are cut, the h-transitions are the only reachable behaviour.

Fig. 8. A rule system and its κ-extended pattern networks and bisimulation checks. a A rule system (R, V′, V). b The corresponding κ-extended pattern networks. c Bisimulation check; the ε_2-transition ensures that L_κ and R_κ are not branching bisimilar.

Fig. 10. Rule system and input network N with matches m(ĩ) = ĩ and m(i) = i that do not satisfy APC1; although L_κ ↔_b R_κ, the system LTS of the input network, G_N, is not bisimilar to the system LTS of the output network, G_T(N). a Rule system (R, V′, V) transforms a-transitions into c-transitions; synchronisation of c-transitions results in an a-transition. b L_κ and R_κ are equivalent, since the synchronisation of two c-transitions results in an a-transition again. c An input network N = (Π, V). d The system LTSs before (G_N) and after (G_T(N)) transformation are not branching bisimilar; the transformed model is no longer able to synchronise the a-transition performed at state 2, which was not transformed, since the loop at state 3 has been transformed into a c-loop.

Fig. 11. Rule system and input network N with matches m(ĩ) = ĩ and m(i) = i that do not satisfy APC3; although L_κ ↔_b R_κ, the system LTS of the input network, G_N, is not bisimilar to the system LTS of the output network, G_T(N). a Rule system (R, V′, V) transforms a-transitions into d-transitions; the synchronisation laws specify that both the local a- and d-transitions result in a global c-transition. b L_κ and R_κ are equivalent, since for both the left and right patterns the process-local transitions result in a c-transition. c Input network N = (Π, V). d The system LTSs before (G_N) and after (G_T(N)) transformation are not branching bisimilar; since (ā, c) ∈ V \ V′ (i.e., APC3 is violated), the a-transition is cut in Π_1, while the d-transition is not cut in T(Π_1) due to the introduction of (d̄, c).

Fig. 12. Rule system and input network N with matches m(ĩ) = ĩ and m(i) = i that do not satisfy APC4; although L_κ ↔_b R_κ, the system LTS of the input network, G_N, is not bisimilar to the system LTS of the output network, G_T(N). a Rule system (R, V′, V) transforms a-loops into f-loops and b-loops into g-loops; like the synchronisation of a- and b-labels, the synchronisation of f- and g-labels results in a d-transition. b L_κ and R_κ are equivalent, since the transformed loops synchronise into a d-loop again. c Input network N = (Π, V). d The system LTSs before (G_N) and after (G_T(N)) transformation are not branching bisimilar; the transformed model is unable to synchronise the b-transition of Π_2, which was not transformed, because the a-loop in Π_1 has been transformed into an f-loop.

Lemma 4.1 Consider a pattern network P = (Π̄, W) of size n. Then,

Fig. 14. Ratios between the Refiner versions when analysing the transformation of a token ring of size n

...and states p, p′ ∈ S_Pκ such that p --(v̄,a)-->_Pκ p′, and s ∈ S_N with p = s. Take s′ = s[m_i(p′_i) | p′_i ≠ κ]. By construction, we have s′ ∈ S_N and ∀i ∈ 1..n. p′_i = κ ⇒ s′_i = s_i. Furthermore, Lemma 4.1 ensures that p̂_i = κ ⟺ p_i = κ for all i ∈ 1..n. Therefore, for all i ∈ 1..n, it holds that ŝ_i ∈ S_i. Moreover, by Definition 4.2, we have p̂_i = p_i and p̂_i ∈ S_Π̄κ_i. If p̂_i = κ, then by construction of s′ we have ŝ_i = s_i. If p̂_i ≠ κ, then m_i(p̂_i) = ŝ_i and m_i(p_i) = s_i. Finally, since p̂_i = p_i and m_i is injective (Definition 3.4), m_i(p̂_i) = m_i(p_i), i.e., ŝ_i = s_i. Since a match (Definition 3.4) is an embedding, we conclude that ŝ_i --v_i--> ... Take an arbitrary state s ∈ I_N; then by Lemma 4.5, there is a state p ∈ S_Lκ with p = s. Since s ∈ I_N, it follows from Definition 3.4 that ∀i ∈ 1..n. p_i ∈ E_L_i ∨ p_i = κ, i.e., p ∈ I_Lκ. By Lemma 4.4, we have p B p. It follows that s C s.

Table 1. Experimental results: verification of various models using Cadp and Refiner