Linear parallel algorithms to compute strong and branching bisimilarity

We present the first parallel algorithms that decide strong and branching bisimilarity in linear time. More precisely, if a transition system has n states, m transitions and |Act| action labels, we introduce an algorithm that decides strong bisimilarity in O(n + |Act|) time on max(n, m) processors and an algorithm that decides branching bisimilarity in O(n + |Act|) time using up to max(n^2, m, |Act| · n) processors.


Introduction
The notion of bisimilarity for Kripke structures and labelled transition systems (LTSs) is commonly used to define behavioural equivalence. Deciding this equivalence is essential for modelling and verifying discrete event systems [1,2]. Kanellakis and Smolka proposed a partition refinement algorithm for obtaining the bisimilarity relation for Kripke structures [3]. The proposed algorithm has a run time complexity of O(nm) where n is the number of states and m is the number of transitions of the input. Later, a more sophisticated refinement algorithm running in O(m log n) steps was proposed by Paige and Tarjan [4].
Communicated by Gwen Salaun.
In recent years, the increase in the speed of sequential chip designs has stagnated due to a multitude of factors such as energy consumption and heat generation. In contrast, parallel devices, such as graphics processing units (GPUs), keep increasing rapidly in computational power. In order to profit from the acceleration of these devices, we require algorithms with massive parallelism. The article 'There's plenty of room at the Top: What will drive computer performance after Moore's law' by Leiserson et al. [5] indicates that the advance in computational performance will come from software and algorithms that can employ hardware structures with a massive number of simple, parallel processors, such as GPUs. In this article, we propose two such algorithms to decide strong and branching bisimilarity.
For strong bisimilarity, we improve on the best-known theoretical time complexity for parallel bisimulation algorithms using a higher degree of parallelism. The proposed algorithm improves the run time complexity to O(n) on max(n, m) processors, and we base it on the sequential algorithm of Kanellakis and Smolka [3]. This time complexity matches the theoretical lower bound for parallel partition refinement algorithms given in [6]. The larger number of processors used in this algorithm favours the increasingly parallel design of contemporary and future hardware. In addition, the algorithm is optimal w.r.t. the sequential Kanellakis-Smolka algorithm, meaning that overall, it does not perform more work than its sequential counterpart.
We first present our algorithm on Kripke structures where transitions are unlabelled. However, as labelled transition systems (LTSs) are commonly used, and labels are not straightforward to incorporate in an efficient way (cf. for instance [7]), we discuss how to extend our algorithm to take action labels into account. This leads to an algorithm with a run time complexity of O(n + |Act|) on max(n, m) processors, with Act the set of action labels.
Our algorithm has been designed for and can be analysed with the Concurrent Read Concurrent Write (CRCW) PRAM model, following notations from [8]. This model extends the normal RAM model, allowing multiple processors to work with shared memory. In the CRCW PRAM model, parallel algorithms can be described in a straightforward and elegant way. In reality, no device exists that completely adheres to this PRAM model. Still, with recent advancements, hardware gets better and better at approximating the model since the number of parallel threads keeps growing. We demonstrate this by translating the PRAM algorithm to GPU code. We straightforwardly implemented our algorithm that decides strong bisimilarity in CUDA and experimented with an NVIDIA Titan RTX, a state-of-the-art GPU, showing that our algorithm performs mostly in line with what our PRAM algorithm predicts.
The present article is an extended version of our conference paper [9]. We have extended that work with a new algorithm that decides branching bisimilarity. Algorithms deciding branching bisimilarity typically propagate information over τ-paths sequentially, which can take O(n) time. We opt to use up to n^2 processors to propagate this information in constant time. In this way, we manage to keep a linear time bound on the algorithm. As far as we know, this is the first algorithm deciding branching bisimilarity that uses extra parallelism to propagate this information. We explain this new algorithm and provide correctness proofs.
The article is structured as follows: In Sect. 2, we recall the necessary preliminaries on the CRCW PRAM model and state the partition refinement problems this article focuses on. In Sect. 3, we propose a parallel algorithm to compute strong bisimulation for Kripke structures, which is also called the relational coarsest partition problem (RCPP). In this section, we also prove the correctness of the algorithm and provide a complexity analysis. In Sect. 4, we discuss the details of an adjustment to the algorithm that deals with multiple action labels, thereby supporting LTSs, which forms the bisimulation coarsest refinement problem (BCRP). In this section, we also provide a complexity and correctness argument, and the results of experiments on a GPU implementation. In Sect. 5, we discuss the modification of the algorithm to compute branching bisimilarity (branching-BCRP). We discuss the details of the modifications, prove them correct and give the complete algorithm. In Sect. 6, we discuss related work. Lastly, in Sect. 7, we draw conclusions and discuss future work.

The PRAM model
A parallel random access machine (PRAM) is a natural extension of the normal random access machine (RAM) in which an arbitrary number of parallel processors can access the memory. Following the definitions of [8], we use a version of a PRAM that allows concurrent reads and concurrent writes (CRCW PRAM). It differs from the model introduced in [10], which permits concurrent reads of the same memory address, but not concurrent writes to it.
A CRCW PRAM consists of a sequence of numbered processors P_0, P_1, . . . . These processors have all the natural instructions of a normal RAM, such as addition, subtraction and conditional branching based on the equality and less-than operators. There is an unbounded amount of common memory that the processors can access, with instructions to read from and write to it. In addition, a processor P_i has an instruction to obtain its unique index i.
All the processors have the same program and run in a synchronised way in a single-instruction, multiple-data (SIMD) fashion. In other words, all processors execute the program in lock step. Parallelism can be achieved by distributing the data elements over the processors and having the processors apply the program instructions to 'their' data elements.
We assume that one arbitrary write will succeed whenever a concurrent write happens to the same memory cell. This is called the arbitrary CRCW PRAM.
A parallel program for a PRAM is called optimal w.r.t. a sequential algorithm if the total work done by the program does not exceed the work done by the sequential algorithm. More precisely, if T is the parallel run time and P the number of processors used, then the algorithm is optimal w.r.t. a sequential algorithm running in S steps if P · T ∈ O(S).
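As a worked instance of this criterion (our own illustration, additionally assuming m ≥ n so that max(n, m) = m), the parallel strong bisimilarity algorithm of Sect. 3 runs in T = O(n) on P = max(n, m) processors:

```latex
P \cdot T \;=\; \max(n, m) \cdot \mathcal{O}(n)
        \;=\; m \cdot \mathcal{O}(n)
        \;\subseteq\; \mathcal{O}(nm),
```

and O(nm) is the running time S of the sequential Kanellakis-Smolka algorithm, so P · T ∈ O(S) and the optimality criterion is met.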

Strong bisimulation
To formalise concurrent system behaviour, we use LTSs.

Definition 2.1 (Labelled transition system) A labelled transition system (LTS) is a three-tuple A = (S, Act, →), where S is a finite set of states, Act a finite set of action labels and → ⊆ S × Act × S the transition relation.
Let A = (S, Act, →) be an LTS. Then, for any two states s, t ∈ S and a ∈ Act, we write s −a→ t iff (s, a, t) ∈ →. Kripke structures differ from LTSs in the fact that the states are labelled as opposed to the transitions. In the current article, for convenience, instead of using Kripke structures where appropriate, we reason about LTSs with a single action label, i.e. |Act| = 1. Computing the coarsest partition of such an LTS can be done in the same way as for Kripke structures, apart from the fact that in the latter case, a different initial partition is computed that is based on the state labels (see, for instance, [11]).

Definition 2.2 (Strong bisimulation) On an LTS A = (S, Act, →), a relation R ⊆ S × S is called a strong bisimulation relation if and only if it is symmetric and for all s, t ∈ S with s R t and for all a ∈ Act with s −a→ s′, there is a t′ ∈ S such that t −a→ t′ and s′ R t′.

Whenever we refer to bisimulation, we mean strong bisimulation. Two states s, t ∈ S in an LTS A are called bisimilar, denoted by s ↔ t, iff there is a bisimulation relation R for A that relates s and t.
A partition π of a finite set of states S is a set of subsets that are pairwise disjoint and whose union is equal to S, i.e. ⋃_{B∈π} B = S. Every element B ∈ π of this partition π is called a block.
We call partition π′ a refinement of π iff for every block B′ ∈ π′ there is a block B ∈ π such that B′ ⊆ B. We say a partition π of a finite set S induces the relation R_π = {(s, t) | ∃B ∈ π. s ∈ B and t ∈ B}. This is an equivalence relation of which the blocks of π are the equivalence classes.
Given an LTS A = (S, Act, →) and two states s, t ∈ S, we say that s reaches t with action a ∈ Act iff s −a→ t. A state s reaches a set U ⊆ S with an action a iff there is a state t ∈ U such that s reaches t with action a.
A set of states V ⊆ S is called stable under a set of states U ⊆ S iff for all actions a either all states in V reach U with a, or no state in V reaches U with a. A partition π is stable under a set of states U iff each block B ∈ π is stable under U. The partition π is called stable iff it is stable under all its own blocks B ∈ π.

Fact 2.3 [4] Stability is inherited under refinement, i.e. given a partition π of S and a refinement π′ of π, if π is stable under U ⊆ S, then π′ is also stable under U.
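These stability conditions translate directly into code. The following is our own sketch for the single-label case (the helper names `reaches` and `stable_under` are ours, not the article's):

```python
def reaches(s, U, transitions):
    """True iff state s has a transition into the set of states U."""
    return any((s, t) in transitions for t in U)

def stable_under(V, U, transitions):
    """V is stable under U iff either all states of V reach U, or none do."""
    hits = [reaches(s, U, transitions) for s in V]
    return all(hits) or not any(hits)

def is_stable(partition, transitions):
    """A partition is stable iff every block is stable under every block."""
    return all(stable_under(V, U, transitions)
               for V in partition for U in partition)
```

For instance, the block {0, 1} is stable under {2} if both 0 and 2 have a transition into {2}, but not if only one of them does.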

Problems
The main problem we focus on in this work is called the bisimulation coarsest refinement problem (BCRP). It is defined as follows:
Input: An LTS M = (S, Act, →).
Output: The partition π of S which is the coarsest partition, i.e. has the smallest number of blocks, whose induced relation is a bisimulation relation.
In a Kripke structure, the transition relation forms a single binary relation, since the transitions are unlabelled. This is also the case when an LTS has a single action label. In that case, the problem is called the relational coarsest partition problem (RCPP) [3,4,12]. This problem is defined as follows:
Input: A set S, a binary relation → ⊆ S × S and an initial partition π0.
Output: The partition π which is the coarsest refinement of π0 whose induced relation is a bisimulation relation.
It is known that BCRP is not significantly harder than RCPP as there are intuitive translations from LTSs to Kripke structures [13,Dfn. 4.1]. However, some non-trivial modifications can speed up the algorithm in some cases; hence, we discuss both problems separately.
In Sect. 3, we discuss the basic parallel algorithm for RCPP, and in Sect. 4, we discuss the modifications required to efficiently solve the BCRP problem for LTSs with multiple action labels.

Relational coarsest partition problem
In this section, we discuss a sequential algorithm based on the one of Kanellakis and Smolka [3] for RCPP (Sect. 3.1). This is the basic algorithm that we adapt to the parallel PRAM algorithm (Sect. 3.2). The algorithm starts with an input partition π 0 and refines all blocks until a stable partition is reached. This stable partition will be the coarsest refinement that defines a bisimulation relation.

The sequential algorithm
The sequential algorithm, Algorithm 1, works as follows. Given are a set S, a transition relation → ⊆ S × S, and an initial partition π 0 of S. Initially, we mark the partition as not necessarily stable under all blocks by putting these blocks in a set Waiting. In any iteration of the algorithm, if a block B of the current partition is not in Waiting, then the current partition is stable under B. If Waiting is empty, the partition is stable under all its blocks, and the partition represents the required bisimulation.
As long as some blocks are in Waiting (line 3), a single block B ∈ π is taken from this set (line 4), and we split the current partition such that it becomes stable under B. Therefore, we refer to this block as the splitter. The set S′ = {s ∈ S | ∃t ∈ B. s → t} is the reverse image of B (line 6). This set consists of all states that can reach B, and we use S′ to define our new blocks. All blocks B′ that have a non-empty intersection with S′, i.e. B′ ∩ S′ ≠ ∅, and are not a subset of S′, i.e. B′ ∩ S′ ≠ B′ (line 7), are split into the subset of states in S′ and the subset of states that are not in S′ (lines 9-10). These two new blocks are also added to the set of Waiting blocks (lines 11-12). The number of states is finite, and blocks can be split only a finite number of times. Hence, blocks are only finitely often put in Waiting, so the algorithm is guaranteed to terminate.
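The loop just described can be rendered as a short sequential sketch. This is our own Python transcription of the idea behind Algorithm 1, not the article's code:

```python
def refine(transitions, initial_partition):
    """transitions: set of (s, t) pairs; initial_partition: iterable of
    state sets covering S. Returns the coarsest stable refinement."""
    partition = {frozenset(b) for b in initial_partition}
    waiting = set(partition)      # blocks the partition may not be stable under
    while waiting:
        splitter = waiting.pop()  # take one splitter block B
        # reverse image of the splitter: all states that can reach B
        pre = {s for (s, t) in transitions if t in splitter}
        for block in list(partition):
            inside, outside = block & pre, block - pre
            if inside and outside:            # unstable under the splitter
                partition.remove(block)
                partition |= {frozenset(inside), frozenset(outside)}
                waiting.discard(block)        # both halves must be reconsidered
                waiting |= {frozenset(inside), frozenset(outside)}
    return partition
```

For example, with transitions 0 → 1, 1 → 1, 2 → 3 and initial partition {{0, 1, 2, 3}}, the loop separates the deadlock state 3, then the state 2 that can only reach it, leaving {0, 1} as one block of bisimilar states.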

The PRAM algorithm
Next, we describe a PRAM algorithm to solve RCPP that is based on the sequential algorithm given in Algorithm 1.

Block representation
Given an LTS A = (S, Act, →) with |Act| = 1 and |S| = n states, we assume that the states are labelled with unique indices 0, . . . , n − 1. A partition π in the PRAM algorithm is represented by assigning a block label from a set of block labels L B to every state. The number of blocks can never be larger than the number of states; hence, we use the indices of the states as block labels: L B = S. We exploit this in the PRAM algorithm to efficiently select a new block label whenever a new block is created. We select the block label of a new block by electing one of its states to be the leader of that block and using the index of that state as the block label. By doing so, we maintain an invariant that the leader of a block is also a member of the block.
In a partition π, whenever a block B ∈ π is split into two blocks B′ and B′′, the leader s of B, which is part of B′, becomes the leader of B′, and for B′′, a new state t ∈ B′′ is elected to be the leader of this new block. Since the new leader is not part of any other block, the label of t is fresh with respect to the block labels that are used for the other blocks. This method of using state leaders to represent subsets was first proposed in [14,15].
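The arbitrary-write leader election behind this representation can be mimicked sequentially. In this sketch of ours, the shuffle stands in for the nondeterministic choice of which concurrent write survives; it is not the article's code:

```python
import random

def elect_leaders(block_of):
    """block_of: dict mapping each state index to its block label.
    Every state 'writes' its own index to its block's leader cell; an
    arbitrary write survives, so the leader is always a block member."""
    leader = {}
    for s in random.sample(sorted(block_of), len(block_of)):
        leader[block_of[s]] = s  # last writer wins, as in the arbitrary CRCW PRAM
    return leader
```

Whichever write wins, the elected leader is a member of its own block, which is exactly the invariant the algorithm maintains.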

Data structures
The common memory contains the following information:
1. n : N, the number of states of the input.
2. m : N, the number of transitions of the input relation.
3. The input, a fixed numbered list of transitions. For every index 0 ≤ i < m of a transition, a source source_i ∈ S and target target_i ∈ S are given, representing the transition source_i → target_i.
4. C : L_B ∪ {⊥}, the label of the current block that is used as a splitter; ⊥ indicates that no splitter has been selected.
5. The following is stored in lists of size n, for each state with index i:
(a) mark_i : B, a mark indicating whether state i is able to reach the splitter.
(b) block_i : L_B, the block of which state i is a member.
6. The following is stored in lists of size n, for each potential block with block label i:
(a) new_leader_i : L_B, the leader of the new block when a split is performed.
(b) waiting_i : B, indicating whether π might not be stable w.r.t. the block, so the block should be checked as a splitter.
As input, we assume that each state with index i has an input variable I_i ∈ L_B that is the initial block label. In other words, the values of the I_i variables together encode π0. Using this input, the initial values of the block_i variables are calculated to conform to our block representation with leaders. Furthermore, in the initialisation, executed on n processors, waiting_i = false for all i that are not used as a block label, and true otherwise.

The algorithm
We provide our first PRAM algorithm in Algorithm 2. The PRAM is started with max(n, m) processors. These processors are used for transitions, states and blocks.
The algorithm performs initialisation (lines 1-6) using n processors, where each block selects a new leader (lines 3-4). In line 3, we exploit the feature of a CRCW PRAM that only one write will succeed when multiple processors try to write to the same memory location. In line 4, the write that has succeeded is the same for each block, ensuring that the leader is one of its own states. Next, the algorithm sets the initial block to waiting. Subsequently, the algorithm enters a single loop that we explain in three separate parts.
Splitter selection (lines 8-14), using n processors. Every variable mark_i is set to false. After this, every processor with index i will check waiting_i. If block i is marked waiting, the processor tries to write i in the variable C. If multiple write accesses to C happen concurrently in this iteration, then according to the arbitrary PRAM model (see Sect. 2), only one process j will succeed in writing, setting C := j as the splitter in this iteration.

Mark states (lines 15-17), using m processors. Every processor i is responsible for the transition s_i → t_i and checks whether t_i (target_i) is in the current block C (line 15). If this is the case, the processor writes true to mark_{source_i}, where source_i is s_i. This mark now indicates that s_i reaches block C.

Splitting blocks (lines 18-26), using n processors. Every state compares its own mark with the mark of the leader of its block. The states whose mark differs from their leader's split off: one of them is elected, via a concurrent write to new_leader, as the leader of the new block, the splitting states adopt its index as their new block label, and the affected blocks are marked waiting.

The algorithm repeats execution of the while-loop until no blocks are marked waiting.
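Putting the three phases together, the loop can be simulated sequentially. This is our own Python rendering of the structure of Algorithm 2: each `for` loop stands in for the n or m parallel processors, and the order of writes models the arbitrary-write semantics.

```python
def parallel_rcpp(n, transitions, initial_block):
    """Sequential simulation of the PRAM loop. initial_block assigns each
    state a block label that is itself a member state of that block."""
    block = list(initial_block)
    waiting = [False] * n
    for i in range(n):
        waiting[block[i]] = True                  # labels in use start waiting
    while any(waiting):
        # Splitter selection: the surviving concurrent write to C is
        # arbitrary; here we simply pick the lowest waiting index.
        C = waiting.index(True)
        waiting[C] = False
        # Mark states: one conceptual processor per transition.
        mark = [False] * n
        for (s, t) in transitions:
            if block[t] == C:
                mark[s] = True
        # Splitting: states whose mark differs from their leader's split off.
        new_leader = [None] * n
        for i in range(n):
            if mark[i] != mark[block[i]]:
                new_leader[block[i]] = i          # arbitrary concurrent write
        for i in range(n):
            b = block[i]
            if new_leader[b] is not None:         # block b is being split
                if mark[i] != mark[b]:
                    block[i] = new_leader[b]
                waiting[b] = True                 # both halves become waiting
                waiting[new_leader[b]] = True
    return block
```

On the example with transitions 0 → 1, 1 → 1, 2 → 3 and one initial block, the simulation ends with 0 and 1 sharing a block label and 2 and 3 each in their own block, matching the sequential algorithm.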

Correctness
The block_i list in the common memory at the start of iteration k defines a partition π_k in which the states with equal block labels form the blocks: π_k = {{s ∈ S | block_s = ℓ} | ℓ ∈ L_B and block_s = ℓ for some s ∈ S}. A run of the program produces a sequence π_0, π_1, . . . of partitions. Partition π_k is a refinement of every partition π_0, π_1, . . . , π_{k−1}, since blocks are only split and never merged. A partition π induces a relation in which the blocks are the equivalence classes. For an input partition π_0, we write ↔_{π_0} for the relation induced by the coarsest refinement of π_0 that is a bisimulation relation.
We now prove that Algorithm 2 indeed solves RCPP. We first introduce Lemma 3.1, which is an invariant throughout the execution and expresses that states which are related by ↔_{π_0} are never split into different blocks. This lemma implies that if a refinement forms a bisimulation relation, it is the coarsest.

Lemma 3.1 Let S be the input set of states, → ⊆ S × S the input relation and π0 the input partition. Let π1, π2, . . . be the sequence of partitions produced by Algorithm 2. Then for all initial blocks B ∈ π0, states s, t ∈ B and iterations k ∈ N: if s ↔_{π_0} t, then s and t are in the same block of π_k.

Proof This is proved by induction on k. In the base case, π0, this is true by default. Now assume for a particular k ∈ N that the property holds. We know that the partition π_{k+1} is obtained by splitting with respect to a block C ∈ π_k. For two states s, t ∈ S with s ↔_{π_0} t, we know that s and t are in the same block in π_k. In the case that both s and t do not reach C, then mark_s = mark_t = false. Since they were in the same block, they will be in the same block in π_{k+1}. Now consider the case that at least one of the states is able to reach C. Without loss of generality, say that s is able to reach C. Then there is a transition s → s′ with s′ ∈ C. By Definition 2.2, there exists t′ ∈ S such that t → t′ and s′ ↔_{π_0} t′. By the induction hypothesis, since s′ ↔_{π_0} t′, the states s′ and t′ must be in the same block in π_k, i.e. t′ is in C. This witnesses that t is also able to reach C, and we must have mark_s = mark_t = true. Since the states s and t are both marked and are in the same block in π_k, they will remain in the same block in π_{k+1}.

Lemma 3.2 Let S be the input set of states with → ⊆ S × S, L_B = S the block labels, and π_n the partition stored in the memory after the termination of Algorithm 2. Then the relation induced by π_n is a bisimulation relation.
Proof Since the program finished, we know that for all block indices i ∈ L B we have waiting i = false. For a block index i ∈ L B , waiting i is set to false if the partition π k , after iteration k, is stable under the block with index i and set to true if it is split. So, by Fact 2.3, we know π n is stable under every block B, hence stable. Next, we prove that a stable partition is a bisimulation relation.
We show that the relation R induced by π_n is a bisimulation relation. Assume states s, t ∈ S with s R t are in block B ∈ π_n. Consider a transition s → s′ with s′ ∈ S. State s′ is in some block B′ ∈ π_n, and since the partition is stable under block B′, and s is able to reach B′, by the definition of stability, we know that t is also able to reach B′. Therefore, there must be a state t′ ∈ B′ such that t → t′ and s′ R t′. Finally, by the fact that R is an equivalence relation, we know that R is also symmetric; therefore, it is a bisimulation relation.

Theorem 3.3 Algorithm 2 computes the coarsest refinement of the input partition π0 that induces a bisimulation relation.

Proof By Lemma 3.2, the resulting partition is a bisimulation relation. Lemma 3.1 implies that it is the coarsest refinement which is a bisimulation.

Complexity analysis
Every step in the body of the while-loop can be executed in constant time. So the asymptotic complexity of this algorithm is given by the number of iterations.

Theorem 3.4 RCPP on an input with m transitions and n states is solved by Algorithm 2 in O(n) time using max(n, m) CRCW PRAM processors.
Proof In iteration k ∈ N of the algorithm, let us call the total number of blocks N_k ∈ N and the number of blocks marked as waiting U_k ∈ N. Initially, N_0 = U_0 = |π0|. In every iteration k, a number of blocks l_k ∈ N is split, resulting in l_k new blocks, so the new total number of blocks at the end of iteration k is N_{k+1} = N_k + l_k.

First, the current block C of iteration k, which was marked as waiting, is no longer waiting, which causes the number of waiting blocks to decrease by one. In this iteration k, the set of l_k blocks B_1, . . . , B_{l_k} is split, resulting in l_k newly created blocks. These l_k new blocks are all marked as waiting. A number l′_k ≤ l_k of the blocks B_1, . . . , B_{l_k} were not yet marked as waiting and are now set to waiting. The current block C is possibly one of these l′_k blocks which were not marked as waiting and are now waiting again. The total number of blocks marked as waiting at the end of iteration k is therefore U_{k+1} = U_k − 1 + l_k + l′_k. Since l′_k ≤ l_k, after z iterations we have U_z ≤ U_0 − z + 2(N_z − N_0). The number of waiting blocks is always non-negative, i.e. U_k ≥ 0, and the total number of blocks can never be larger than the number of states, i.e. N_k ≤ n, so the total number of iterations z is bounded by z ≤ 2N_z − |π0| ≤ 2n − |π0|.

Bisimulation coarsest refinement problem
In this section, we extend our algorithm to the bisimulation coarsest refinement problem (BCRP), i.e. to LTSs with multiple action labels.
Solving BCRP can in principle be done by translating an LTS to a Kripke structure, for instance by using the method described in [16]. This translation introduces a new state for every transition, resulting in a Kripke structure with n + m states. If the number of transitions is significantly larger than the number of states, then the number of iterations of our algorithm increases undesirably.

The PRAM algorithm
Instead of introducing more states, we introduce multiple marks per state, but in total we have no more than m marks. For each state s, we use a mark variable for each different outgoing action label relevant for s, i.e. for each a for which there is a transition s −a→ t to some state t. Each state may have a different set of outgoing action labels and thus a different set of marks. Therefore, we first perform a preprocessing procedure in which we group together states as blocks in the initial partition that have the same set of outgoing action labels. This is valid, since two bisimilar states must have the same outgoing actions. Since in the sequence of produced partitions each partition is a refinement of the previous one, the algorithm has the invariant that two states of the same block have the same set of action labels.

For the extended algorithm, we need to maintain extra information in addition to the information needed for Algorithm 2:
1. mark, a list of Booleans containing, for each state, one mark per distinct outgoing action label of that state.
2. For each transition with index i, order_i : N, the rank of its action label among the distinct outgoing action labels of its source state.
3. off(s) : N, the offset to access the beginning of the sublist of the marks of the state s in mark. The positions mark_{off(s)} up to mark_{off(s+1)} contain the sublist of marks for state s. For example, if state s has outgoing transitions with 3 distinct action labels, we know that off(s + 1) = off(s) + 3, and we have 3 marks for state s. We write mark_{off(s)+order_i} to access the mark for transition i which has source state s.
4. mark_length : N, the length of the mark list. This allows us to reset all marks in constant time using mark_length processors. This number is not larger than the number of transitions (mark_length ≤ m).
5. In a list of size n, we store for each state s ∈ S a variable split_s : B. This indicates if the state will be split off from its block.
With this extra information, we can modify Algorithm 2 to work with labels. The new version is given in Algorithm 3. The changes involve the following:
1. Lines 7-9: Reset the mark list.
2. Line 11: Reset the split list.
3. Line 17: When marking the transitions, we do this for the correct action label, using order_i. Note the indexing into mark: it involves the offset for the state source_i and order_i.
4. Lines 19-21: We tag a state to be split when its marks differ for any action from those of the block leader.
5. Line 24: If a state was tagged to be split in the previous step, it should split from its leader.
6. Line 29: If any block was split, the partition may not be stable w.r.t. the splitter.
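The split test of lines 19-21 relies on the invariant that all states of a block share the same set of outgoing action labels, so a state's mark sublist and its leader's have the same length and the same label order. A sequential sketch of this test (our own Python, not the article's code):

```python
def tag_splits(n, block, off, mark):
    """block[s]: leader label of s's block; off: offsets into the flat mark
    list; mark[off[s] + j]: whether state s reaches the splitter with its
    j-th distinct outgoing action label. Returns the split tags."""
    split = [False] * n
    for s in range(n):                        # one conceptual processor per mark
        leader = block[s]
        for j in range(off[s + 1] - off[s]):  # same length as the leader's sublist
            if mark[off[s] + j] != mark[off[leader] + j]:
                split[s] = True               # s disagrees with its leader
    return split
```

A leader never disagrees with itself, so leaders are never tagged; this preserves the invariant that the leader stays in the remaining part of its block.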
To use Algorithm 3, two preprocessing steps are required. First, we need to partition the states w.r.t. their set of outgoing action labels. This can be done with a modified version of Algorithm 2, which performs one iteration for each action label. For the second preprocessing step, we need to gather the extra information that is needed in Algorithm 3. This is done by sorting the action labels and subsequently performing some parallel segmented (prefix) sums [17]. In total, the preprocessing takes O(|Act| + log m) time. For details on how this is implemented, see Appendix A.
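For intuition, both preprocessing steps can be condensed into plain sequential code. This is our own sketch of what is computed, not how; the article instead uses a modified Algorithm 2 plus parallel sorting and segmented prefix sums to stay within the stated bound:

```python
def preprocess(n, transitions):
    """transitions: list of (source, action, target) triples. Returns the
    initial block labels and the off, order and mark_length data."""
    labels_of = [set() for _ in range(n)]
    for (s, a, t) in transitions:
        labels_of[s].add(a)
    # Initial partition: states with equal sets of outgoing labels share a
    # block; the first such state serves as the block's leader/label.
    leader_of = {}
    for s in range(n):
        leader_of.setdefault(frozenset(labels_of[s]), s)
    block = [leader_of[frozenset(labels_of[s])] for s in range(n)]
    # off: prefix sums over the number of distinct outgoing labels per state.
    off = [0] * (n + 1)
    for s in range(n):
        off[s + 1] = off[s] + len(labels_of[s])
    mark_length = off[n]                      # bounded by m
    # order[i]: rank of transition i's label among its source's labels.
    rank = [{a: r for r, a in enumerate(sorted(labels_of[s]))}
            for s in range(n)]
    order = [rank[s][a] for (s, a, t) in transitions]
    return block, off, order, mark_length
```

For example, with three states where state 0 has outgoing labels {a, b}, state 1 has {a} and state 2 has none, each state ends up in its own initial block, and the flat mark list has length 3.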

Complexity and correctness
The correctness of this algorithm follows directly from the arguments in Sect. 3.3.

Algorithm 3
The algorithm for BCRP; the highlighted lines differ from Algorithm 2.

We can modify the proof of Lemma 3.1 for it to still hold in the setting with labels. The marks for each action label are the same for two bisimilar states. This means bisimilar states have the same value for split. Therefore, they will always be split into the same block together. Thus, bisimilar states will always be in the same block.
The proof of Lemma 3.2 still works in the setting with labels, proving that the partition π is stable with respect to all blocks B ∈ π, and thereby that π induces a bisimulation relation.
Thus, these two facts imply that π is the coarsest partition inducing a bisimulation relation.
For Algorithm 3, we need to prove that it takes a linear number of steps to construct the final partition. This is subtle, as an iteration of the algorithm does not necessarily remove a block from the waiting set.

Theorem 4.2 BCRP on an input with m transitions and n states is solved by Algorithm 3 in O(n + |Act|) time using max(n, m) CRCW PRAM processors.
Proof The total preprocessing takes O(|Act| + log m) steps, after which the while-loop is executed on a partition π0 which is the result of the preprocessing on the partition {S}. Every iteration of the while-loop is executed in constant time. Using the structure of the proof of Theorem 3.4, we derive a bound on the number of iterations.
At the start of iteration k ∈ N, the total number of blocks and waiting blocks are N_k, U_k ∈ N; initially U_0 = N_0 = |π0|. In iteration k, a number l_k of blocks is split into two blocks, resulting in l_k new blocks, meaning that N_{k+1} = N_k + l_k. All new l_k blocks are marked as waiting, and a number l′_k ≤ l_k of the old blocks that are split were not waiting at the start of iteration k and are now marked as waiting. If l_k = l′_k = 0, no blocks are split and the current block C is no longer marked as waiting. We indicate this with a variable c_k: c_k = 1 if l_k = 0, and c_k = 0 otherwise. The total number of iterations up to iteration k in which no block is split is given by Σ_{i=0}^{k−1} c_i. The number of iterations in which at least one block is split is given by k − Σ_{i=0}^{k−1} c_i. If in an iteration at least one block is split, the total number of blocks at the end of that iteration is strictly higher than at the beginning; hence, for all k ∈ N, the number of iterations with a split is at most N_k − |π0| ≤ n − |π0|. We derive an upper bound for the number of iterations in which no blocks are split using the total number of waiting blocks. In iteration k, the number of waiting blocks changes as U_{k+1} = U_k − c_k + l_k + l′_k. Since the sum of newly created blocks Σ_{i=0}^{k−1} l_i = N_k − |π0| and l′_i ≤ l_i, the number of blocks marked as waiting is bounded by U_k ≤ |π0| − Σ_{i=0}^{k−1} c_i + 2(N_k − |π0|). As U_k ≥ 0, it follows that Σ_{i=0}^{k−1} c_i ≤ 2N_k − |π0| ≤ 2n − |π0|, so in total the while-loop performs at most 3n iterations. With the time for preprocessing this makes the run time complexity O(n + |Act| + log m). Since the number of transitions m is bounded by |Act| · n^2, this simplifies to O(n + |Act|).

Experimental results
In this subsection, we discuss the results of our implementation of Algorithm 3. Note that this implementation is not optimised for the specific hardware it runs on, since the goal of this article is to provide a generic parallel algorithm. This implementation is purely a proof of concept, to show that our algorithm can be mapped to contemporary hardware and to understand how the algorithm scales with the size of the input.
The implementation targets GPUs since a GPU closely resembles a PRAM and supports a large amount of parallelism. The algorithm is implemented in CUDA C++ version 11.1 with use of the Thrust library. As input, we chose all benchmarks of the VLTS benchmark suite for which the implementation produces a result within 10 minutes. The VLTS benchmarks are LTSs that have been obtained from various case studies derived from real concurrent system models.
The experiments were run on an NVIDIA Titan RTX with 24 GB memory and 72 Streaming Multiprocessors, each supporting up to 1024 threads in flight. Although this GPU supports 73,728 threads in flight, it is very common to launch a GPU program with one or even several orders of magnitude more threads, in particular to achieve load balancing between the Streaming Multiprocessors and to hide memory latencies. In fact, the performance of a GPU program usually relies on that many threads being launched. Table 1 shows the results of the experiments we conducted. The |Act| column corresponds to the number of different action labels. The |Blocks| column indicates the number of different blocks at the end of the algorithm, where each block contains only bisimilar states. With #It, we refer to the number of while-loop iterations that were executed (see Algorithm 3). The number of states and transitions can be derived from the benchmark name: in a benchmark named 'X_N_M', N · 1000 is the number of states and M · 1000 is the number of transitions. The T_pre column gives the preprocessing times in milliseconds, which include the memory transfers to the GPU, sorting the transitions and the initial partitioning. The T_alg column gives the times of the core algorithm, in milliseconds. T_BCRP is the sum of the preprocessing and algorithm times, in milliseconds. We have not included the loading times for the files and the first CUDA API call that initialises the device. We ran each benchmark 10 times and took the averages. The standard deviation of the total times varied between 0% and 3% of the average; thus, 10 runs are sufficient. All the times are rounded with respect to the standard error of the mean.
We see that the bound proved in Sect. 4.2 (k ≤ 3n) is indeed respected: #It/n is at most 2.20, and in most cases well below that. The number of iterations is tightly related to the number of blocks in the final partition; the #It/|Blocks| column varies between 1.00 and 2.53. This can be understood by the fact that each iteration either splits one or more blocks or marks a block as non-waiting, and every block is waiting at least once. This also means that for certain LTSs the algorithm scales better than linearly in n. The preprocessing often takes about the same amount of time (a few milliseconds). Exceptions are the cases with a large number of actions and/or transitions, for instance Vasy_25_25 or Vasy_574_13561.
Concerning the run times, it is not the case that each iteration takes the same amount of time: a GPU is not a perfect PRAM machine. There are two key differences. Firstly, we suspect that the algorithm is memory bound, since it performs a limited amount of computation. The memory accesses are irregular, i.e. effectively random, which caches can partially compensate for, but for sufficiently large n and m the caches cannot contain all the data. This means that as the LTSs become larger, memory accesses become relatively slower. Secondly, at a certain moment the maximum number of threads that the GPU can run in parallel is reached, and adding more threads means more run time. These two effects are best seen in the T_alg/#It column, which corresponds to the time per iteration. The values are around 0.02 up to 300,000 transitions, but for larger numbers of states and transitions the time per iteration increases.

Experimental comparison
We compared our implementation (BCRP) with an implementation of the algorithm by Lee and Rajasekaran (LR) [12] on GPUs, and the optimised GPU implementation by Wijs based on signature-based bisimilarity checking [18], with multi-way splitting (Wms) and with single-way splitting (Wss) [14]. Multi-way splitting indicates that a block is split into multiple blocks at once, which is achieved by computing a signature for each state in every partition refinement iteration, and splitting each block into sets of states, each containing all the states with the same signature. The signature of a state is derived from the labels of the blocks that this state can reach in the current partition. Note that we are not including comparisons with CPU bisimulation checking tools; the fact that those tools run on completely different hardware makes a comparison problematic, and such a comparison does not serve the purpose of evaluating the feasibility of implementing Algorithm 3. Optimising our implementation to make it competitive with CPU tools is planned for future work.
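To illustrate the signature-based multi-way splitting used by Wms, the following is a minimal sequential Python sketch (our own illustration, not the GPU implementation of [14]; the function name and data layout are assumptions): a signature is computed per state, and every block is split by grouping states with equal signatures.

```python
def refine_by_signature(states, transitions, blocks):
    """One multi-way refinement step: group the states of every block
    by their signature, i.e. the set of (action, target-block) pairs."""
    sigs = {s: frozenset() for s in states}
    for src, act, tgt in transitions:
        sigs[src] |= {(act, blocks[tgt])}
    seen, new_blocks, next_id = {}, {}, 0
    for s in states:
        key = (blocks[s], sigs[s])  # split only within the old block
        if key not in seen:
            seen[key] = next_id
            next_id += 1
        new_blocks[s] = seen[key]
    return new_blocks
```

Iterating this step until the partition stabilises yields strong bisimilarity; Wms performs each step with one thread per state/transition.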
The running times of the different algorithms can be found in Table 2. Similarly to our previous benchmarks, the algorithms were run 10 times on the same machine using the same VLTS benchmark suite with a time-out of 10 minutes. In some cases, the nondeterministic behaviour of the algorithms Wms and Wss led to high variations between runs. In cases where the standard error of the mean was more than 5% of the mean value, we have added the standard error in Table 2 in parentheses. Furthermore, all the results are rounded with respect to the standard error of the mean. As a preprocessing step for the LR, Wms and Wss algorithms, the input LTSs need to be sorted. We did not include this in the times, nor the reading of files and the first CUDA API call (which initialises the GPU). This comparison confirms the expectation that our algorithm in all cases (except the small LTS Cwi_1_2) outperforms LR, and that LR is not suitable for massively parallel devices such as GPUs.
Furthermore, the comparison demonstrates that in most cases our algorithm (BCRP) outperforms Wss. In some benchmarks (Cwi_1_2, Cwi_214_684, Cwi_2165_8723 and Cwi_2416_17605) Wss is more than twice as fast, but in 16 other cases our algorithm is more than twice as fast. The last comparison shows that our algorithm does not outperform Wms. Wms employs multi-way splitting which is known to be very effective in practice. Furthermore, contrary to our implementation, Wms is optimised for GPUs while the focus of the current work is to improve the theoretical bounds and describe a general algorithm.
In order to better understand the difference in performance between Wms and our algorithm, we analysed the complexity of Wms [14]. In general this algorithm is quadratic in time, and the linearity claim in [14] depends on the assumption that the fan-out of 'practical' transition systems is bounded, i.e. every state has no more than c outgoing transitions for c a (low) constant. We designed the family of LTSs Fan_out_n to test this assumption. This LTS has n states, 3n − 3 transitions and a maximum out-degree of n transitions. Fig. 2 shows the results of calculating the bisimulation equivalence classes for Fan_out_n with Wms and BCRP. It is clear that the run time for Wms increases quadratically as the number of states grows linearly, already becoming untenable for a small number of states. On the other hand, in conformance with our analysis, our algorithm scales linearly.
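The text above only fixes the sizes of Fan_out_n; the following is a hypothetical Python construction (the labels a, b, c and the back-edges are our own choice, not necessarily the construction used in the experiments) that matches the stated counts: n states, 3n − 3 transitions and a root whose out-degree grows linearly with n.

```python
def fan_out(n):
    """Hypothetical LTS with n states and 3n-3 transitions whose root
    has out-degree n-1, violating the bounded fan-out assumption."""
    states = list(range(n))
    transitions = [(0, 'a', i) for i in range(1, n)]   # root fans out
    transitions += [(i, 'b', 0) for i in range(1, n)]  # back edges
    transitions += [(i, 'c', 0) for i in range(1, n)]
    return states, transitions
```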

Branching bisimilarity
When systems get more complex, abstractions are an essential tool to gain insight into the behaviour of the system. Often a special action label τ is used to model internal behaviour in LTSs. Branching bisimilarity, as introduced by Van Glabbeek and Weijland [19], is an equivalence that takes these internal actions into account. A symmetric relation R ⊆ S × S is a branching bisimulation iff for all states s, t with s R t and every transition s −a→ s′, either a = τ and s′ R t, or there are states t′, t″ such that t ⇒ t′ −a→ t″ with s R t′ and s′ R t″. Branching bisimilarity is the coarsest branching bisimulation relation, and we denote it as ↔_b.
The problem we address in this section is deciding branching bisimilarity. We define the branching bisimulation refinement problem (branching-BCRP) as follows:

Input: An LTS M = (S, Act, →) with a dedicated internal action τ ∈ Act.
Output: The partition π of S which is the coarsest partition, i.e. has the smallest number of blocks, that forms a branching bisimulation relation.
This section explains how to adapt our parallel algorithm for strong bisimilarity (BCRP) to decide branching bisimilarity (branching-BCRP) in O(n + |Act|) parallel time using max(n^2, m, |Act|n) parallel processors.
The section is structured as follows: In Sect. 5.1, we recall some preliminaries specific to branching bisimulations and establish the notation we use throughout this section. Next, in Sect. 5.2, we explain key ingredients from the sequential algorithm by Groote and Vaandrager [20]. After that, in Sect. 5.3, we explain the key modifications in a parallel setting. Then, in Sect. 5.4, we give the details of the algorithm. Lastly, in Sect. 5.5, we prove the correctness and complexity bounds of the parallel algorithm.

Preliminaries on branching bisimulation
For the next section, we assume that in any LTS the internal τ-action is available, so τ ∈ Act. Fig. 3 shows an LTS at the top with all states in one block. We group branching bisimilar states at the bottom, where we denote the branching bisimilarity equivalence classes B_1 up to B_7.
Next, we define bottom states, which are states that do not have an outgoing τ-transition to another state in the same block.

Definition 5.2 (Bottom state [20]) Let A = (S, Act, →) be an LTS, π a partition of S, and s ∈ B ∈ π where B is a block. We call s a bottom state iff there is no s′ ∈ B such that s −τ→ s′.

In Fig. 3, at the top, states s_5, s_6 and s_7 are bottom states. In the situation at the bottom, every state except s_3 is a bottom state.

Definition 5.3 (Silent τ-transition) Let A = (S, Act, →) be an LTS and π a partition of S. We call a τ-transition silent when both the source and target state are in the same block. We call ⇒_π the transitive and reflexive closure of the silent τ-transitions with respect to π.

In Fig. 3, we indicate the transitive τ-closure for the two different partitions. To improve readability, we do not show the reflexive part.
The following lemma (Lemma 5.5) expresses that not only the first state t and the last state t′ on a silent τ-path t −τ→ · · · −τ→ t′ are related, but that every intermediate state is related to t.

In order to talk about the directly reachable non-silent transitions of a state, we define the direct markings of a state.

Definition 5.6 (Direct markings) Let A = (S, Act, →) be an LTS, and π a partition of S. Let s ∈ S and B′ ∈ π. We define the direct markings of s to B′, denoted dir(s, B′), as the set of actions a ∈ Act for which there is a non-silent transition s −a→ s′ with s′ ∈ B′.

For branching bisimulations, the stability condition changes slightly. Given an LTS A = (S, Act, →) and π a partition of S, a block B is called stable modulo branching bisimulation with respect to a block B′ ∈ π iff for all actions a ∈ Act either for all states s ∈ B there is an s′ ∈ B such that s ⇒_π s′ and a ∈ dir(s′, B′), or for all states s ∈ B, a ∉ dir(s, B′). A partition π is again called stable under a block B′ iff each block B ∈ π is stable with respect to B′. The partition π is called stable iff it is stable under all its own blocks B′ ∈ π.
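As an illustration, the direct markings dir(s, B′) can be computed per state as in the following sequential Python sketch (the dictionary-based encoding of blocks and the label 'tau' are our own; the silentness test follows Definition 5.3):

```python
def dir_marks(s, target_block, transitions, block_of):
    """Actions with which state s directly reaches target_block via a
    non-silent transition (tau inside s's own block is excluded)."""
    marks = set()
    for src, act, tgt in transitions:
        if src == s and block_of[tgt] == target_block:
            silent = (act == 'tau' and block_of[src] == block_of[tgt])
            if not silent:
                marks.add(act)
    return marks
```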

The sequential algorithm
In this section, we explain the key ideas used in the sequential Groote-Vaandrager algorithm [20] to decide branching bisimilarity. In this algorithm, similar to the sequential Kanellakis-Smolka algorithm explained in Sect. 3.1, a partition is maintained and iteratively refined based on behaviour that distinguishes states. For strong bisimulation, a block is split into the set of states that can directly reach a splitter block. However, in the Groote-Vaandrager algorithm, states are also split off if they can reach the splitter by first performing a sequence of silent τ-actions. In [20], this initial silent path is called stuttering, and it comes from the related problem of deciding stuttering equivalence on Kripke structures [21].
Another subtle but essential difference between the two algorithms is that stability modulo branching bisimulation is not necessarily preserved under refinement, i.e. Fact 2.3 may not hold. This is also observed in [20, Lemma 5.2.3] and is addressed by always considering all blocks as potential splitters.
In summary, when solving branching-BCRP there are two main modifications we need to address with respect to the parallel algorithm for BCRP:

1. Modification 1: a different splitting strategy that takes initial τ-paths into account.
2. Modification 2: after refinement of a partition π into π′, check that blocks that are stable w.r.t. π are still stable for π′.

The parallel algorithm
This section addresses the necessary modifications in the parallel setting such that we obtain a linear parallel algorithm to decide branching bisimilarity. In the strong bisimilarity algorithm, we choose a block as a splitter. We check which actions can reach this splitter for each state by going over all transitions and storing a mark for each action label. For branching bisimulation we also store a marking for each action label, but we must take the τ-paths into account. Therefore, to address Modification 1, we need to propagate the markings over the τ-paths. For example, in Fig. 3 any marking that state s_5 has can potentially propagate to states s_1, s_2, s_3 and s_4.
This method of propagating information backwards over τ -paths is also used in the sequential algorithms [11,20]. Likewise, the parallel algorithm [14] for branching bisimilarity propagates information from the bottom states backwards over the silent τ -paths. However, this approach is inherently sequential since, in the worst case, we propagate state information over long silent paths before we can calculate a refinement. For a better asymptotic complexity, we calculate a transitive closure in the style of [22,23].
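A closure computation in the style of [22,23] can be sketched by repeated boolean matrix squaring: after O(log n) squarings, silent paths of any length are covered, and each squaring is a shallow parallel step given enough processors (the parallel versions in [22,23] achieve the bounds cited above; here the sketch is sequential Python for clarity).

```python
def silent_closure(n, silent_edges):
    """Reflexive-transitive closure of the silent tau-edges by repeated
    boolean matrix squaring; ceil(log2 n) squarings suffice."""
    reach = [[i == j for j in range(n)] for i in range(n)]
    for s, t in silent_edges:
        reach[s][t] = True
    steps = 1
    while steps < n:  # each squaring doubles the covered path length
        reach = [[any(reach[i][k] and reach[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
        steps *= 2
    return reach
```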
In our parallel algorithm, we maintain the relation ⇒_π, the transitive and reflexive closure of all silent τ-transitions in π. This closure changes throughout the algorithm, but we can update the information in constant parallel time: Lemma 5.9 states that if π refines into π′, we only need to check the source and target of a silent τ-path to see whether it remains silent in π′.
For Modification 2 regarding stability, we use the fact that a block not in the waiting list can only make the partition unstable if a state has become newly bottom, as Lemma 5.8 shows.
A state that is non-bottom can become bottom only once, and this happens when a silent τ-transition becomes non-silent. For example, in Fig. 3 the transition s_1 −τ→ s_2 is non-silent in the bottom picture; thus, s_1 became a bottom state at some point.
Finally, we add a further modification. Throughout the algorithm, we ensure that each block has a bottom state as a leader. This is well defined since we consider LTSs with a finite number of states, and in the preprocessing phase, we remove τ -loops, and therefore, each block has at least one bottom state. This modification makes it easier to compare states since bottom states do not get extra marks via silent τ -steps.

Modification 1: splitting
We split blocks by considering all actions a state can reach via the silent transitive τ-closure. A similar approach is used in [20], where the set of states that can reach a certain block with a specific action label after an initial inert τ-path is considered. We initially mark all states with the actions they can take directly to the splitter block. Afterwards, we distribute all the marks via the silent transitive τ-closure. This results in a similar set of states as in [20], but not with respect to a specific action label: it is with respect to all action labels. Distributing the marks naively would mean we hand out up to |Act|n^2 marks, since there can be |Act| markings per state and up to n^2 elements in the transitive τ-closure. Thus, we opt to distribute only one element per state if a state should split, distributing at most n^2 elements.
For a partition π of S, we consider a block B ∈ π that acts as a splitter, and we let B_{s_b} denote any block in π that has the bottom state s_b ∈ S as its leader. We define:

PreSplit_π(B_{s_b}, B) = {s ∈ B_{s_b} | dir(s, B) ⊄ dir(s_b, B)} ∪ {s ∈ B_{s_b} | s is a bottom state and dir(s_b, B) ⊄ dir(s, B)}
Split_π(B_{s_b}, B) = {s ∈ B_{s_b} | there is an s′ ∈ PreSplit_π(B_{s_b}, B) such that s ⇒_π s′}

Informally, PreSplit_π(B_{s_b}, B) contains all the states of B_{s_b} that have a behaviour with respect to B that s_b cannot mimic, and it also contains the bottom states that cannot mimic the behaviour of s_b with respect to B. The first means that the state can reach B with an action a which s_b cannot. The second means that s is a bottom state and s_b can reach B with an action a which s cannot. The set Split_π(B_{s_b}, B) contains all states that have silent τ-paths to states that are in PreSplit_π. These states are also necessarily not branching bisimilar to s_b.
The states in the set Split_π(B_{s_b}, B) are exactly the states that should split from the block B_{s_b} into a new block. Based on this function we derive a new partition Ref(π, B) in which every block B_{s_b} is split into Split_π(B_{s_b}, B) and the states not in Split_π(B_{s_b}, B). From the initial partition π_0 = {S}, we iteratively refine the current partition π by finding a splitter B for which Ref(π, B) ≠ π and continue with the partition π′ = Ref(π, B). The correctness relies on the fact that Ref(π, B) never splits states that are branching bisimilar, and the fact that if the partition π does not induce the relation ↔_b then there still is a splitter B such that Ref(π, B) ≠ π.
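Under these definitions, computing one split can be sketched as follows (sequential Python for illustration; the helpers `dir_of`, returning the direct markings of a state towards the splitter, and `closure`, a set of pairs encoding ⇒_π, are our own assumed encodings):

```python
def split_states(block, leader, splitter, dir_of, closure, bottoms):
    """States of `block` to be split off w.r.t. `splitter`: PreSplit
    holds states whose behaviour differs from the bottom-state leader;
    Split adds every state that reaches PreSplit via silent steps."""
    lead_dir = dir_of(leader, splitter)
    presplit = {s for s in block if not dir_of(s, splitter) <= lead_dir}
    presplit |= {s for s in block
                 if s in bottoms and not lead_dir <= dir_of(s, splitter)}
    return {s for s in block
            if any((s, p) in closure for p in presplit)}
```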
In Sect. 5.5.1, we will prove that this splitting procedure is correct.

Modification 2: stability
The key property (Fact 2.3) does not hold for stability modulo branching bisimilarity: refinement can cause blocks to lose stability. See [20, Remark 3.4], where a counterexample is given. In the Groote-Vaandrager algorithm, this is resolved by reconsidering all blocks whenever new bottom states are created [20, Lemma 3.3]. This strategy would result in a quadratic worst-case algorithm in our parallel setting.
The following lemma expresses when a block can act as a splitter. We observe that only the last condition of Lemma 5.7 is not inherited under refinement.

Lemma 5.7 Let A = (S, →, Act) be an LTS, π a partition of S, and blocks B, B′ ∈ π. The block B′ is a splitter of B (Split_π(B, B′) ≠ ∅) iff there is an a ∈ Act such that:

1. there is a state s ∈ B with a ∈ dir(s′, B′) for some s′ with s ⇒_π s′;
2. a ≠ τ or B ≠ B′; and
3. there is a bottom state s_b ∈ B with a ∉ dir(s_b, B′).

In particular, if Split_π(B, B′) is non-empty, then PreSplit_π(B, B′) is as well.
For the (⇒) part, we make a case distinction on the two sets that define PreSplit_π(B, B′), and it follows that we find a state and a bottom state for which the three conditions of this lemma hold. For the (⇐) part, we make a case distinction on whether the bottom state in condition 3 of this lemma is the block leader of B. In both cases, we can show that there are states in PreSplit_π(B, B′).
In order to maintain our invariant that, if a block is not marked as waiting, the partition is stable modulo branching bisimulation under this block of states, we perform extra steps for each new bottom state to check whether there is a block that is now a splitter according to Lemma 5.7. Since a bottom state never becomes a non-bottom state again, this is done at most once for each state.
The following lemma is a slight modification of Lemma 5.7. However, here we specify that a new bottom state must witness that the partition is no longer stable with respect to a block. We define bottom(π) as the set of all states that are bottom in the partition π.

Lemma 5.8 Let A = (S, →, Act) be an LTS. Let π be a partition of S and π′ a refinement of π, where both are refined by ↔_b. Now suppose B′ ∈ π and Ref(π, B′) = π, i.e. π is stable with respect to B′. If for some block B ∈ π′ we find that B′ is a splitter of B (Split_π′(B, B′) ≠ ∅), then the bottom state witnessing this by Lemma 5.7 is a new bottom state, i.e. it lies in bottom(π′) \ bottom(π).

Proof By Lemma 5.7 there are an action a ∈ Act, a state s ∈ B with a ∈ dir(s′, B′) for some s′ with s ⇒_π′ s′, and a bottom state s_b ∈ B with a ∉ dir(s_b, B′). We now need to show that s_b ∈ bottom(π′) \ bottom(π). We assume to the contrary that s_b is not part of this set and show that this leads to a contradiction.
If s_b ∉ bottom(π′) \ bottom(π), this must mean that s_b ∈ bottom(π). Since π′ refines π, we must have a C ∈ π such that B ⊆ C. Now s_b ∈ C is a bottom state of C in π that cannot reach B′ with a, while s ∈ C can. Thus, we have found states s_b and s to which Lemma 5.7 applies, and we know that Split_π(C, B′) ≠ ∅. This means that Ref(π, B′) ≠ π, which is a contradiction. □
This lemma implies that a block B′ which is not in the waiting set is a new splitter for the partition iff there is a newly created bottom state in a block B that is unable to reach this block B′, while there is some state in B that can reach B′. A procedure to find these blocks B′ for a newly created bottom state s_b ∈ B ∈ π can be run in parallel (see Algorithm 7).
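The check behind Algorithm 7 can be illustrated by the following sequential Python sketch (the helper `reach_after_silent`, returning the (action, target block) pairs a state can realise, possibly after silent steps, is an assumption of ours, not part of the actual data structure):

```python
def find_new_splitters(new_bottom, block, reach_after_silent):
    """Blocks that witness instability at a new bottom state: some state
    of `block` can realise (a, B'), but the new bottom state cannot."""
    block_can = set()
    for s in block:  # in the parallel version, one processor per state
        block_can |= set(reach_after_silent(s))
    bottom_can = set(reach_after_silent(new_bottom))
    return {B2 for (a, B2) in block_can - bottom_can}
```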

Algorithm
Based on the modifications mentioned in Sect. 5.3 and the parallel algorithm for strong bisimilarity, we construct an algorithm that decides branching bisimulation in O(n + |Act|) time in parallel on a PRAM with max(n^2, m, |Act|n) processors. In Algorithm 4, we give the main structure of our algorithm. The algorithm is similar to the parallel algorithm for strong bisimilarity. We add the modifications mentioned in Sect. 5.3 as subroutines, which we give in Algorithms 5, 6 and 7.
In this subsection, we first define the additional data we store in memory. Next, we discuss preprocessing. Finally, we discuss the algorithm in more detail and give an example iteration.

Data structure
The main memory contains the following extra data for our algorithm for branching bisimulation:

• same_marks_i : B, specifying whether the state s_i has the same direct markings as its leader.
• An array of booleans of size m such that for each index 0 ≤ i < m of a transition i the boolean value silent_i : B indicates whether the transition is silent.
• We use t : N to indicate the number of transitive τ-transitions.
• For every index 0 ≤ i < t of a transitive τ-transition (s ⇒_π s′):
  - t_source_i : S, the source state (s);
  - t_target_i : S, the target state (s′);
  - t_silent_i : B, which is true iff the transitive τ-transition is silent.
• We need to adjust the mark_i : B array to not only contain the marks for directly reachable actions, but also actions that can be reached after performing τ-steps. This is needed since states with the same reachable actions can be in the same block and must be able to compare marks with each other. Each state always has a mark for the τ-action, regardless of whether it has outgoing τ-transitions. For example, state s_3 of Fig. 3 has markings for τ, a and b. Additionally, for each mark we now keep track of:
  - mark_source_i : S, the state to which this mark belongs;
  - mark_order_i : N, the order of the mark. For example, state s_3 of Fig. 3 has order 0, 1 and 2 for its τ, a and b marks, respectively.
• In the algorithm we use order_i : N such that a transition can set the correct mark of a state. Since mark_i is different, we need to update this array as well.
• An array that has for each block B and action a ∈ Act the boolean value reachable_{B,a} : B. This is used for storing whether the current new bottom state under inspection can reach block B with a non-silent a-action. We note that we can maximally have n blocks; thus, this array has size n · |Act|.

Preprocessing
First, we calculate the transitive τ-closure, which we can view as a pairwise shortest path computation. We can compute this using n^2 processors in O(n) time [22,23]. If we consider the transitive τ-closure as transitions, we can end up with up to n^2 of these transitions. Thus, we need n^2 processors to go over them in O(1) time. In real-life examples of LTSs, especially when only considering the silent τ-steps, the number of these transitions is much smaller than n^2.
As in Groote and Vaandrager [20], we first remove τ-cycles, since all states on a cycle end up in the same block of the final partition. We use the transitive τ-closure to remove the τ-cycles. We start a processor for each state s, and each state iterates over every other state s′ and checks whether both s ⇒_π s′ and s′ ⇒_π s hold; note that π = {S} here. We store the lowest state number s_l for which this holds, and use it to replace all references to s with s_l. All states on a cycle share one such number, namely the lowest, so this is consistent. In this way, we remove all τ-cycles in O(n) time using O(n) processors.

Fig. 4 An example LTS and its derived preprocessing information. The variables action_switch_i, order_i and nr_marks_source_i are auxiliary, and we explain them in Sect. 1 of the Appendix.
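The τ-cycle elimination just described can be sketched as follows (sequential Python, with the outer loop standing in for one PRAM processor per state):

```python
def collapse_tau_cycles(n, reach):
    """reach[s][t] is the reflexive-transitive tau-closure; every state
    on a tau-cycle is represented by the lowest state on that cycle."""
    rep = list(range(n))
    for s in range(n):          # one PRAM processor per state
        for t in range(n):
            if reach[s][t] and reach[t][s]:  # s and t lie on a common cycle
                rep[s] = min(rep[s], t)
    return rep
```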
As opposed to the original algorithm, we now need to group states on the actions they can take directly and the actions they can take after performing silent τ-steps. Therefore, we again use the base algorithm to split the set of states into blocks based on the actions they can take, and additionally let the transitive τ-closure distribute the marks along silent τ-paths. We describe this procedure in Algorithm 8 in the Appendix. Afterwards, we set all the blocks of this new partition to waiting, i.e. waiting_block_i := true. Since we loop over all the actions and execute the loop body in constant time, this step takes O(|Act|) time.
Initially, we mark all states as non-bottom (bottom_i := false), all non-τ-transitions as non-silent, and all τ- and transitive τ-transitions as silent (silent_i := true, t_silent_i := true). In the first partition, not every (transitive) τ-transition is actually silent anymore, and we need to know which states are bottom. We use the procedure SetSilentAndBottom to determine this. To elect bottom states as leaders, we also execute ElectBottomLeaders. Both procedures can be found in Algorithm 5. Finally, we set all the values new_bottom_i to false, since we do not have to fix stability for these bottom states: every block is already waiting.
We can now remove any transitive τ-transitions that are no longer silent. This step is not necessary for correctness, but we expect it to significantly reduce the number of transitive τ-transitions and therefore improve the run time of the algorithm. Filtering these transitions takes O(log t) time when using a parallel scan to put the elements in a consecutive array again. Note that we can write O(log t) as O(log n), since t ≤ n^2.
Finally, we must preprocess the mark and order arrays differently. In the original preprocessing we only had a mark for each possible direct action. We now also consider actions that a state can reach using silent τ-steps; thus, a state can have a mark for an action that it cannot take directly. Since a bottom state only takes direct actions, and we always pick a bottom state as a block leader, we let each state look at its block leader to determine the needed marks. Each state also needs a mark for the τ-action. For more details, see Sect. 1 of the Appendix. At most, this preprocessing step takes O(log n + |Act|) time on max(n^2, m, |Act|n) processors.
Combining all the bounds on the complexity of the complete preprocessing procedure, we end up with O(n + |Act|) time using max(n^2, m, |Act|n) processors in the worst case.

Algorithm
In Fig. 5, an iteration of Algorithm 4 is performed on the example LTS from Fig. 3, starting at Line 13, after a splitter C has been selected.
Following Algorithm 4, the first lines up to Line 13 reset the variables and select a splitter. In the procedure CheckStability, the condition from Lemma 5.8 is used to determine whether there is a new bottom state that causes a split. If this procedure finds a splitter C, we use it. If it does not yield a splitter, a splitter is selected uniformly among the blocks for which the partition is not yet considered stable (Lines 9-10).
In Lines 13-15, the direct markings are set for each transition reaching the currently selected splitter C, after which the procedure DetermineSplits, given in Algorithm 6, marks the states that are supposed to split. For each state s ∈ B_{s_b} ∈ π, the variable split_i is set (Algorithm 6, Lines 14-16) to indicate whether the state is in PreSplit_π(B_{s_b}, C). Finally, the transitive τ-closure is followed to also set the split_i variable for the states that are in the set Split_π(B_{s_b}, C).
In Lines 17-25, the new partition Ref(π, C) is calculated. This is done in the same way as in the algorithm for strong bisimilarity. The silent (transitive) τ-transitions are then updated by checking the conditions from Lemma 5.9. With all τ-steps now correctly marked silent, we can calculate the new bottom states, and the procedure ElectBottomLeaders ensures that for each new block the leader is a bottom state. After finding the new leaders for split blocks, we set the newly split blocks to waiting.

Complexity and correctness
First, we prove that the splitting procedure of Sect. 5.3.1 is correct. Next, the correctness of the algorithm can be shown in a similar fashion as for the previous algorithm: by first showing that the resulting partition induces a branching bisimulation relation, and secondly showing that it is the coarsest partition forming a branching bisimulation relation. Lastly, we prove the complexity of the algorithm.

Correct splitting
To establish that we have a valid splitting procedure, we need to prove that branching bisimilar states never end up in different blocks and that if we cannot split blocks anymore, the partition induces a branching bisimulation relation. We prove these two properties in the remainder of this subsection, but we first prove two auxiliary lemmas.
Firstly, the following auxiliary lemma shows that updating the silent transitive closure ⇒_π by only checking whether states are still in the same block is correct with the given splitting procedure.

Lemma 5.9
Let A = (S, →, Act) be an LTS where S contains no τ-cycles, π a partition of S, s, s′ ∈ S two states, C ∈ π a splitter, and π′ the partition given by π′ = Ref(π, C). Then s ⇒_π′ s′ if and only if s ⇒_π s′ and s and s′ are in the same block B′ ∈ π′.
Proof Given a partition π of an LTS A = (S, →, Act), two states s, s′ ∈ S and a partition π′ = Ref(π, C), as mentioned in the lemma, we prove both directions.

(⇒) If s ⇒_π′ s′ then, since there is a silent path, we know that s and s′ are in the same block B′ ∈ π′. Since π′ is a refinement of π, there is a block B ∈ π such that B′ ⊆ B, and hence s ⇒_π s′.

(⇐) Assuming s ⇒_π s′ and that there is a block B′ ∈ π′ such that s, s′ ∈ B′, we show that s ⇒_π′ s′. Since π′ refines π, there is a block B_{s_b} ∈ π that contains s and s′ and has the state s_b as leader. There is a set of states s_1, . . . , s_n ∈ B_{s_b} that witnesses the silent path s −τ→ s_1 −τ→ · · · −τ→ s_n −τ→ s′. The block B_{s_b} will either remain the same in π′ or be split into Split_π(B_{s_b}, C) and the remainder. If s, s′ ∈ Split_π(B_{s_b}, C) then by the transitive closure s_1, . . . , s_n ∈ Split_π(B_{s_b}, C), since every s_1, . . . , s_n reaches s′ with inert τ-steps. If s, s′ ∉ Split_π(B_{s_b}, C) then also s_1, . . . , s_n ∉ Split_π(B_{s_b}, C), since if for any 1 ≤ i ≤ n we had s_i ∈ Split_π(B_{s_b}, C) then, since s ⇒_π s_i, it would hold that s ∈ Split_π(B_{s_b}, C). Thus, we conclude that s_1, . . . , s_n ∈ B′, witnessing s ⇒_π′ s′. □

Secondly, the next auxiliary lemma shows that two branching bisimilar states can always reach branching bisimilar states within a block via silent τ-paths.

Lemma 5.10 Let A = (S, Act, →) be an LTS. Given a partition π of S such that ↔_b is a refinement of π, and two states s, t in the same block B ∈ π: if s ↔_b t and s ⇒_π s′, then there is a state t′ ∈ B such that t ⇒_π t′ and s′ ↔_b t′.
Proof We prove this by induction on the length i of the silent path from s to s′.

Induction Hypothesis: If s ↔_b t, s_0 −τ→ s_1 −τ→ · · · −τ→ s_i with s_0 = s, and s_0, s_1, . . . , s_i ∈ B, then there is a state t′ ∈ B such that t ⇒_π t′ and s_i ↔_b t′.

For i = 0, we have s_i = s_0 = s; thus, we can take t′ to be t, and the induction hypothesis holds. Now assume that the induction hypothesis holds for i; we prove that it holds for i + 1. We have s_i −τ→ s_{i+1}, and we want to find a t″ such that t ⇒_π t″ and s_{i+1} ↔_b t″. From the induction hypothesis, we know that there is a t′ such that t ⇒_π t′ and s_i ↔_b t′. We now have two cases, following the definition of ↔_b applied to the transition s_i −τ→ s_{i+1}: either s_{i+1} ↔_b t′, in which case we take t″ = t′, or there are states t_1, t″ with t′ ⇒ t_1 −τ→ t″ such that s_i ↔_b t_1 and s_{i+1} ↔_b t″. Because s_{i+1} ∈ B, we know that t″ ∈ B also holds, and since ↔_b refines π, the intermediate states lie in B as well, so t ⇒_π t″. Thus, we have t ⇒_π t″ and s_{i+1} ↔_b t″, which proves the induction hypothesis for i + 1.

Thus, by mathematical induction, the lemma holds. □
We now prove that we never move two branching bisimilar states to different blocks.

Lemma 5.11
Let A = (S, Act, →) be an LTS where S contains no τ-cycles. Given a partition π which is refined by ↔_b, and two states s, t ∈ S in a block B ∈ π: then for all B′ ∈ π, if s ↔_b t, either s, t ∈ Split_π(B, B′) or s, t ∉ Split_π(B, B′).
Proof Let s_b be the leader state of B; this allows us to write B_{s_b} for B. Consider a partition π and two branching bisimilar states s, t ∈ B_{s_b} ∈ π, i.e. s ↔_b t, as in the lemma. Assume that s ∈ Split_π(B_{s_b}, B′) for some B′ ∈ π; we will show that t ∈ Split_π(B_{s_b}, B′). First we distinguish on whether s ∈ PreSplit_π(B_{s_b}, B′).
In the first case, we assume that s ∈ PreSplit_π(B_{s_b}, B′), and show that this implies t ∈ Split_π(B_{s_b}, B′). Assuming s ∈ PreSplit_π(B_{s_b}, B′), we can distinguish the following two cases: • If it is the case that dir(s, B′) ⊄ dir(s_b, B′), this means there is an action a ∈ Act such that s −a→ s′ for some s′ ∈ B′, while s_b has no a-transition into B′. From the definition of dir, we know that it is not the case that a = τ and s′ ↔_b s; thus, we can ignore the first case of the definition of ↔_b. Following the second case, since s ↔_b t there is a sequence t −τ→ · · · −τ→ t′ such that t ↔_b t′, t′ −a→ t″ and t″ ↔_b s′. Since ↔_b refines π, we know that t′ ∈ B_{s_b} and t″ ∈ B′, and, using Lemma 5.5, t ⇒_π t′. Therefore, dir(t′, B′) ⊄ dir(s_b, B′), so t′ ∈ PreSplit_π(B_{s_b}, B′) holds, and hence t ∈ Split_π(B_{s_b}, B′).
• For the other case, where s is a bottom state and dir(s_b, B′) ⊄ dir(s, B′), notice that there is at least one bottom state t′ ∈ B_{s_b} such that t ⇒_π t′. By Lemma 5.10, as s ↔_b t there has to be an s′ ∈ B_{s_b} such that s ⇒_π s′ and s′ ↔_b t′. Since s is a bottom state, it has to be the case that s′ = s, and hence s ↔_b t′. By the definition of branching bisimilarity, and because both states are bottom states, it has to be the case that dir(t′, B′) ⊆ dir(s, B′): for every action a ∈ Act, if t′ −a→ t″ for some t″ ∈ B′, then there is an s″ ∈ B′ such that s −a→ s″ and t″ ↔_b s″. Since dir(s_b, B′) ⊄ dir(s, B′) and dir(t′, B′) ⊆ dir(s, B′), we know that also dir(s_b, B′) ⊄ dir(t′, B′), and thus t′ ∈ PreSplit_π(B_{s_b}, B′), meaning t ∈ Split_π(B_{s_b}, B′).
This concludes that if s ∈ PreSplit_π(B_{s_b}, B'), then t ∈ Split_π(B_{s_b}, B').
In the second case, we assume s ∉ PreSplit_π(B_{s_b}, B') but s is part of the split, i.e. s ∈ Split_π(B_{s_b}, B'), and show that t ∈ Split_π(B_{s_b}, B'). In this case there is a state s' ∈ B_{s_b} such that s' ∈ PreSplit_π(B_{s_b}, B') and s ⇒_π s'. According to Lemma 5.10, there is a t' ∈ B_{s_b} such that t ⇒_π t' and t' ↔_b s'. Since s' ∈ PreSplit_π(B_{s_b}, B'), by the previous argumentation we know that t' ∈ Split_π(B_{s_b}, B'), and since t ⇒_π t', also t ∈ Split_π(B_{s_b}, B').
We are now ready to conclude that if s ∈ Split_π(B_{s_b}, B') then also t ∈ Split_π(B_{s_b}, B'). Since s and t were chosen arbitrarily, it also holds that if t ∈ Split_π(B_{s_b}, B') then s ∈ Split_π(B_{s_b}, B'). This means that either both s, t ∈ Split_π(B_{s_b}, B'), or both s, t ∉ Split_π(B_{s_b}, B'), proving the lemma.
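To make the splitting notions used in this proof concrete, the following is a minimal sequential Python sketch (not the parallel algorithm of the paper). The LTS representation and the helper names dir_, pre_split and split are assumptions for illustration: dir(s, B') collects the actions leading from s into the splitter block B' (ignoring inert τ-steps), PreSplit marks the states that disagree with the leader s_b, and Split closes PreSplit under inert τ-paths (⇒_π) inside the block.

```python
# Sequential sketch (illustration only, not the parallel algorithm).
# An LTS is a list of transitions (source, action, target); blocks are sets
# of states, and block_of maps each state to its current block identifier.

TAU = "tau"

def dir_(s, Bp, trans, block_of):
    """Actions a with s -a-> s' for some s' in Bp, excluding inert tau-steps
    (tau-transitions that stay inside s's own block)."""
    return {a for (src, a, dst) in trans
            if src == s and dst in Bp
            and not (a == TAU and block_of[dst] == block_of[s])}

def bottom(s, B, trans):
    """s is a bottom state: no tau-transition staying inside its own block B."""
    return not any(src == s and a == TAU and dst in B
                   for (src, a, dst) in trans)

def pre_split(B, Bp, sb, trans, block_of):
    """States of B that disagree with the leader sb w.r.t. splitter Bp."""
    marked = set()
    for s in B:
        if not dir_(s, Bp, trans, block_of) <= dir_(sb, Bp, trans, block_of):
            marked.add(s)  # s can reach Bp with an action the leader lacks
        elif bottom(s, B, trans) and \
                not dir_(sb, Bp, trans, block_of) <= dir_(s, Bp, trans, block_of):
            marked.add(s)  # bottom state missing an action of the leader
    return marked

def split(B, Bp, sb, trans, block_of):
    """Close PreSplit under inert tau-steps: s is in Split if s =>_pi s'
    for some s' in PreSplit (staying within block B)."""
    marked = set(pre_split(B, Bp, sb, trans, block_of))
    changed = True
    while changed:
        changed = False
        for (src, a, dst) in trans:
            if a == TAU and src in B and dst in marked and src not in marked:
                marked.add(src)
                changed = True
    return marked
```

For instance, in a block {0, 1} with leader 0 and splitter block {2}, a state 1 that can do an action into the splitter which the leader cannot mimic ends up in the split, together with every state that reaches it via inert τ-steps.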
Lastly, we show that if the current partition does not yet induce a branching bisimulation, there is still a block on which our splitting procedure refines the partition.

Lemma 5.12
Let A = (S, Act, →) be an LTS, and π a partition of S. If for all blocks B ∈ π it holds that Ref(π, B) = π, then the relation R_π induced by the partition π is a branching bisimulation.
Proof Let π be a partition of S for which, for every B ∈ π, it holds that Ref(π, B) = π, as in the lemma. Assume two states s, t ∈ S are related, s R_π t, i.e. there is a block B_{s_b} ∈ π with leader s_b ∈ S such that s, t ∈ B_{s_b}. Consider any action a ∈ Act and s' ∈ B' such that s −a→ s'. If a = τ and B' = B_{s_b}, we know that s', t ∈ B_{s_b}, and hence s' R_π t. If this is not the case, then a ∈ dir(s, B'). We know that since Ref(π, B') = π, also Split_π(B_{s_b}, B') = PreSplit_π(B_{s_b}, B') = ∅. This means that s ∉ PreSplit_π(B_{s_b}, B'), and hence, dir(s, B') ⊆ dir(s_b, B'). Additionally, we know that there is at least one bottom state t' ∈ B_{s_b} such that t ⇒_π t', and thus, t R_π t'. Since t' is a bottom state and also t' ∉ PreSplit_π(B_{s_b}, B'), we know that dir(t', B') ⊇ dir(s_b, B') ⊇ dir(s, B'). This means a ∈ dir(t', B'), so there is a state t'' ∈ B' such that t' −a→ t'' and s' R_π t''. Since R_π is also symmetric, it follows that R_π forms a branching bisimulation.

Correctness of algorithm
Theorem 5.13 Let A = (S, Act, →) be an LTS that contains no τ-loops. The partition π resulting from executing Algorithm 4 with input A forms the coarsest partition inducing a branching bisimulation, solving branching-BCRP.
Proof At the end of the iteration, for all block labels i ∈ N with i ≤ |S|, we have waiting_i = false. Since the extra stability procedure of Algorithm 7 was executed for all new bottom states, by Lemma 5.8 we know that the partition is stable modulo branching bisimulation. By Lemma 5.12, this means that the relation induced by π is a branching bisimulation.
Finally, by Lemma 5.11, two branching bisimilar states are never split into different blocks. This means that π is the coarsest partition inducing a branching bisimulation, solving branching-BCRP.

Complexity of algorithm
Theorem 5.14 The algorithm for branching bisimulation terminates in O(n + |Act|) parallel time on max(n², m, |Act|n) processors.
Proof As indicated in the preprocessing section, the preprocessing takes O(n + |Act|) time using max(n², m, |Act|n) processors in the worst case.
In every iteration of the algorithm, either there is a splitter that causes the number of blocks to increase, or a block is removed from waiting. This means that the proof of Theorem 4.2 still holds with our modifications, so there are at most 3n iterations of the main algorithm. Except for the procedure to maintain stability, every part of the algorithm runs in constant time. The total running time is therefore the number of iterations plus the number of calls to CheckStability that do not result in a splitter. The stability check happens at most once for every new bottom state, since a bottom state can never become non-bottom again.
We use max(n, m, t, mark_length) processors. We observe that the size t of the transitive closure is at most n², and mark_length ≤ |Act|n. Therefore, we need max(n², m, |Act|n) processors.
Thus, the main part of the algorithm runs in O(n) time using max(n², m, |Act|n) processors.
Therefore, the total complexity of preprocessing and running the algorithm is O(n + |Act|) using max(n², m, |Act|n) processors.

Related work
Deciding bisimilarity is P-complete [24], which suggests that bisimilarity is an inherently sequential problem. This fact has not withheld the community from searching for efficient parallel algorithms for deciding bisimilarity of Kripke structures. In particular, Lee and Rajasekaran [12,25] proposed a parallel algorithm based on the Paige-Tarjan algorithm that works in O(n log n) time using (m/log n) log log n Concurrent Read Concurrent Write (CRCW) processors, and one using only (m/n) log n Concurrent Read Exclusive Write (CREW) processors. Jeong et al. [26] presented a linear-time parallel algorithm, but it is probabilistic in the sense that it has a nonzero chance of producing an incorrect result.
Furthermore, Wijs [14] presented a GPU implementation of an algorithm to solve the strong and branching bisimulation partition refinement problems. However, as shown in Sect. 4.3, it is only linear in the number of states if the fan-out of the LTS is bounded, and can otherwise have quadratic complexity. Nevertheless, the multi-way splitting Wijs employs is effective in practice.
In a distributed setting, Blom and Orzan studied algorithms for refinement [18]. These algorithms use message passing to communicate between different workers in a network and rely on a small number of processors. Therefore, they are very different in nature from our algorithm. The algorithms of Blom and Orzan [18] were extended and optimised for branching bisimulation [27].
Parallel partition refinement algorithms are also studied in the context of minimising deterministic finite automata (DFAs) [28]. DFAs can be seen as restricted LTSs in which the transition relation is deterministic and there is an initial partition distinguishing accepting and non-accepting states. This means our algorithm also works in the setting of DFAs. To our knowledge, it is the first linear-time parallel algorithm for DFA minimisation.
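As a hypothetical illustration of this correspondence, the sketch below treats a DFA as such a restricted LTS: the transition function supplies the labelled transitions, and the initial partition separates accepting from non-accepting states. A naive sequential signature-based refinement (in the style of Moore's algorithm, not the parallel algorithm of this paper; the name minimise_dfa is our own) then computes the equivalence classes of the minimal DFA.

```python
# Naive sequential DFA minimisation via partition refinement (sketch).
# A DFA is viewed as an LTS whose transition relation is deterministic,
# refined from the initial partition {accepting, non-accepting}.

def minimise_dfa(states, alphabet, delta, accepting):
    # delta: dict (state, letter) -> state, total and deterministic
    # initial partition: accepting vs. non-accepting states
    block = {s: (s in accepting) for s in states}
    while True:
        # signature of a state: its own block plus, per letter, the block
        # reached by that letter -- states with equal signatures stay together
        sig = {s: (block[s], tuple(block[delta[(s, a)]] for a in alphabet))
               for s in states}
        if len(set(sig.values())) == len(set(block.values())):
            return sig  # stable: equal values <=> bisimilar (equivalent) states
        block = sig     # refine and repeat
```

States mapped to the same value form one state of the minimal automaton; the loop terminates because the number of blocks strictly increases until stability.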
The sequential algorithm introduced in [11] requires time O(m log n) compared to the original Groote-Vaandrager algorithm that requires O(mn) time [20]. The techniques in [11] are complex, using among others the principle of the smaller half [4]. These are therefore less suitable when aiming for maximal parallelism.
Recent work [6] provides a linear lower bound for the class of parallel partition refinement algorithms, matching the run time of our algorithm. The most efficient algorithms that decide bisimilarity use partition refinement; however, there is a possibility that other techniques give rise to algorithms that are faster than O(n).
Examples of such different techniques can be found in [29,30,31]. In [30], a linear-time sequential algorithm is given to decide bisimilarity in the restricted setting of deterministic automata with only a single action label. In the same setting, a parallel version of this algorithm is proposed in [31] with a time complexity of O(log n). The run time complexity of these algorithms is fundamentally lower than that of partition refinement algorithms. However, we do not think this approach can be generalised to a setting with multiple action labels or non-determinism.
The sequential algorithm proposed in [29] finds and merges bisimilar states in a way different from partition refinement. However, its run time complexity is worse than that of partition refinement algorithms.

Conclusions and future work
In this work, we proposed linear-time PRAM algorithms to decide strong and branching bisimilarity. We implemented the strong bisimilarity algorithm in CUDA and conducted experiments that show the potential of computing bisimulation in practice in linear time on contemporary hardware. Further advances in parallel hardware will make this even more feasible.
Other work suggests that no significant improvement on the time complexity O(n) is possible using partition refinement [6]. On the other hand, the sequential run time of O((m + n) log n) of the Paige-Tarjan algorithm suggests there might be room for improvement in the number of processors needed. For future work, it would be interesting to see whether a linear algorithm deciding bisimilarity is feasible using fewer processors, for instance (m/n) log n. Another future direction is to optimise these algorithms for available highly parallel hardware, such as GPUs. Optimising these algorithms should give better insight into whether using an abundance of processors performs well in practice, or whether other trade-offs should be made. For instance, data locality and synchronisation play an important role in GPU performance but are not taken into account in the PRAM model.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix A: Preprocessing for strong bisimulation
To use Algorithm 3, we need to perform two preprocessing steps. First, we need to partition the states w.r.t. their set of outgoing action labels. This can be done with an altered version of Algorithm 2. Instead of splitting on a block, we split on an action a ∈ Act: we visit all transitions and mark the source state of every transition with label a. Each block can then be split into two blocks: a block containing the states that have a as an outgoing action label, and a block with the states that do not.
For the second preprocessing step, we need to gather the extra information that is needed in Algorithm 3. Only a_i is part of the input; the other information, such as order and offset, needs to be calculated. We start our preprocessing by sorting the transitions by (source_i, a_i), which can be done in O(log m) time with m processors, for instance using a parallel merge sort [32]. In order to calculate order_i and nr_marks, we first calculate action_switch_i for each transition i, which is done as follows:
1: if i ≤ m then
2:   if i = 0 or source_i ≠ source_{i−1} or a_i = a_{i−1} then action_switch_i := 0;
3:   else action_switch_i := 1;
4:   end if
5: end if
See Fig. 6 for an example. Now, order_i can be calculated with a parallel segmented inclusive scan [17] of action_switch. A parallel segmented sum can be performed on action_switch to calculate nr_marks, where we make sure to set nr_marks_s to 0 if state s has no outgoing transitions. Finally, off_s, for the mark offsets, can be constructed as a list and calculated by applying a parallel exclusive scan on nr_marks. Calculating action_switch takes O(1) time on m processors, and a parallel (segmented) sum/scan takes O(log m) time [17].
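The data flow of this second preprocessing step can be mimicked sequentially as follows. This is a sketch only: the parallel version computes action_switch in O(1) time on m processors and replaces the loops by (segmented) scans running in O(log m) time; the list-based encoding and the function name preprocess are assumptions for illustration.

```python
# Sequential emulation of the second preprocessing step (sketch).
# transitions: list of (source, action, target) triples.

def preprocess(transitions):
    # sort by (source, action); done with a parallel merge sort in the paper
    trans = sorted(transitions, key=lambda t: (t[0], t[1]))
    m = len(trans)

    # action_switch[i] = 1 iff transition i has the same source as i-1 but a
    # different action label (a "switch" to the source's next action)
    action_switch = [0] * m
    for i in range(1, m):
        if trans[i][0] == trans[i - 1][0] and trans[i][1] != trans[i - 1][1]:
            action_switch[i] = 1

    # order[i]: segmented inclusive scan of action_switch, one segment per
    # source, i.e. the index of transition i's action among its source's actions
    order = [0] * m
    for i in range(1, m):
        if trans[i][0] == trans[i - 1][0]:
            order[i] = order[i - 1] + action_switch[i]

    # nr_marks[s]: number of distinct outgoing action labels of s (the last
    # order value of s's segment plus one); 0 if s has no outgoing transitions
    n = 1 + max((max(src, tgt) for src, a, tgt in trans), default=0)
    nr_marks = [0] * n
    for i in range(m):
        src = trans[i][0]
        nr_marks[src] = max(nr_marks[src], order[i] + 1)

    # off[s]: exclusive prefix sum of nr_marks, the mark offset per state
    off = [0] * n
    for s in range(1, n):
        off[s] = off[s - 1] + nr_marks[s - 1]

    return trans, action_switch, order, nr_marks, off
```

In the parallel setting, the two prefix-sum loops correspond to a segmented inclusive scan and an exclusive scan, respectively, each O(log m) on m processors.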
In total, the preprocessing takes O(|Act| + log m) time using m processors.

Jan Martens is a PhD student in formal system analysis at Eindhoven University of Technology. His research interest is in theoretical computer science and parallel algorithms.
Jan Friso Groote (1965) has been a full professor in formal system analysis at Eindhoven University of Technology since 1998. His research interest is in modelling and analysing the behaviour of programmed systems with the purpose of increasing their quality.
Lars B. van den Haak is a PhD student in formal system analysis at Eindhoven University of Technology. His research interest is in formal verification, parallel software, and programming languages.
Pieter Hijma worked as an assistant professor at the Vrije Universiteit Amsterdam and now works in industry in the area of Open Source Hardware (OSH). His research focuses on the primary interface between humans and computers, namely the programming language, with a strong focus on parallel programming.

Anton Wijs is an assistant professor at Eindhoven University of Technology. His research interest is in model checking, particularly in improving the algorithms used for model checking.