Abstract
Parity games have important practical applications in formal verification and synthesis, especially for solving the model-checking problem of the modal mu-calculus. They are also interesting from a theoretical perspective: they are widely believed to admit a polynomial solution, but so far no such algorithm is known. In recent years, a number of new algorithms and improvements to existing algorithms have been proposed. We implement a new and easy-to-extend tool Oink, which is a high-performance implementation of modern parity game algorithms. We further present a comprehensive empirical evaluation of modern parity game algorithms and solvers, both on real-world benchmarks and randomly generated games. Our experiments show that our new tool Oink outperforms the current state-of-the-art.
T. van Dijk—Supported by FWF, NFN Grant S11408-N23 (RiSE).
1 Introduction
Parity games are turn-based games played on a finite graph. Two players Odd and Even play an infinite game by moving a token along the edges of the graph. Each vertex is labeled with a natural number called its priority, and the winner of the game is determined by the parity of the highest priority that is encountered infinitely often. Player Odd wins if this parity is odd; otherwise, player Even wins.
Parity games are interesting both for their practical applications and for complexity theoretic reasons. Their study has been motivated by their relation to many problems in formal verification and synthesis that can be reduced to the problem of solving parity games, as parity games capture the expressive power of nested least and greatest fixpoint operators [17]. In particular, deciding the winner of a parity game is polynomial-time equivalent to checking nonemptiness of nondeterministic parity tree automata [33], and to the explicit model-checking problem of the modal \(\mu \)-calculus [14, 15, 23, 32].
Parity games are interesting in complexity theory, as the problem of determining the winner of a parity game is known to lie in \(\text {UP}\cap \text {coUP}\) [26], as well as in \(\text {NP}\,\cap \,\text {coNP}\) [15]. This problem is therefore unlikely to be NP-complete and it is widely believed that a polynomial solution may exist. Despite much effort, no such algorithm has yet been found.
Motivated by recent publications with both novel approaches and improvements to known algorithms, we implement a number of modern solvers in our new tool Oink, which aims to provide a high-performance implementation of parity game solvers. Oink is designed as a library that integrates with other tools and can easily be extended. We use Oink to provide a modern empirical evaluation of parity game solvers based on both real-world benchmarks and randomly generated games.
In past publications, new and improved algorithms are often tested against the implementation of Zielonka’s algorithm in the PGSolver tool [19]. However, various recent publications [1, 34, 40] suggest that much better performance can be obtained. We combine a number of improvements from the literature [34, 40, 41] and propose additional optimizations. We show that our implementation of Zielonka’s algorithm outperforms PGSolver by several orders of magnitude.
We describe Oink in Sect. 3 and provide accessible descriptions of the implemented state-of-the-art algorithms in Sects. 4–7. We implement the strategy improvement algorithm (Sect. 4), both the small progress measures and the recently proposed quasi-polynomial progress measures algorithms (Sect. 5), the well-known Zielonka algorithm (Sect. 6) as well as a number of related algorithms from the priority promotion family (Sect. 7). We also propose an alternative multi-core implementation of strategy improvement.
2 Preliminaries
Parity games are two-player turn-based infinite-duration games over a finite directed graph \(G=(V,E)\), where every vertex belongs to exactly one of two players called player Even and player Odd, and where every vertex is assigned a natural number called the priority. Starting from some initial vertex, a play of both players is an infinite path in G where the owner of each vertex determines the next move. The winner of such an infinite play is determined by the parity of the highest priority that occurs infinitely often along the play.
More formally, a parity game is a tuple \(\mathcal {G}=(V_\Diamond ,V_\Box ,E,\textsf {pr})\) where \(V=V_\Diamond \cup V_\Box \) is a set of vertices partitioned into the set \(V_\Diamond \) controlled by player Even and the set \(V_\Box \) controlled by player Odd, and \(E\subseteq V\times V\) is a total relation describing all possible moves, i.e., every vertex has at least one successor. We also write E(u) for all successors of u and \(u\rightarrow v\) for \(v\in E(u)\). The function \(\textsf {pr}:V\rightarrow \{0,1,\dots ,d\}\) assigns to each vertex a priority, where d is the highest priority in the game.
We write \(\textsf {pr}(v)\) for the priority of a vertex v, \(\textsf {pr}(V)\) for the highest priority of a set of vertices V, and \(\textsf {pr}(\mathcal {G})\) for the highest priority in the game \(\mathcal {G}\). Furthermore, we write \(\textsf {pr}^{-1}(i)\) for all vertices with priority i. A path \(\pi =v_0 v_1 \dots \) is a sequence of vertices consistent with E, i.e., \(v_i \rightarrow v_{i+1}\) for all successive vertices. A play is an infinite path. We write \(\textsf {pr}(\pi )\) for the highest priority in \(\pi \) if \(\pi \) is finite, or the highest priority that occurs infinitely often if \(\pi \) is infinite. Player Even wins a play \(\pi \) if \(\textsf {pr}(\pi )\) is even; player Odd wins if \(\textsf {pr}(\pi )\) is odd.
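To make the definitions concrete, the following Python sketch represents a parity game with owner, priority, and successor maps, and determines the winner of an eventually-periodic play from the highest priority on its cycle. All names are our own illustrative choices, not code from Oink:

```python
EVEN, ODD = 0, 1

class ParityGame:
    """A parity game: every vertex has an owner, a priority, and at
    least one successor (E is a total relation)."""
    def __init__(self):
        self.owner = {}   # vertex -> EVEN or ODD
        self.pr = {}      # vertex -> natural number priority
        self.succ = {}    # vertex -> non-empty list of successors

    def add(self, v, owner, priority, successors):
        self.owner[v] = owner
        self.pr[v] = priority
        self.succ[v] = list(successors)

def winner_of_play(game, prefix, cycle):
    """Winner of the eventually-periodic play prefix.cycle^omega: the
    parity of the highest priority occurring infinitely often, which is
    the highest priority on the cycle (the prefix occurs only finitely)."""
    top = max(game.pr[v] for v in cycle)
    return EVEN if top % 2 == 0 else ODD
```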
A strategy \(\sigma :V\rightarrow V\) is a partial function that assigns to each vertex in its domain a single successor in E, i.e., \(\sigma \subseteq E\). By a strategy of player \(\alpha \) we refer to \(\sigma \) restricted to the vertices controlled by player \(\alpha \). A player wins a vertex if they have a strategy such that all plays consistent with this strategy are winning for that player. A fundamental result for parity games is that they are memoryless determined [13], i.e., each vertex is winning for exactly one player, and both players have a memoryless strategy for each of their winning vertices.
Algorithms for solving parity games frequently employ (variations of) attractor computation. Given a set of vertices A, the attractor of A for a player \(\alpha \) represents those vertices from which \(\alpha \) can force the play toward A. Writing \(V_\alpha \) for the vertices controlled by player \(\alpha \), the \(\alpha \)-attractor of A in the game \(\mathcal {G}\) is

\(\textit{Attr}^{\,\mathcal {G}}_\alpha (A) \;=\; \mu Z.\; A \,\cup \, \{\, v\in V_\alpha \mid E(v)\cap Z\ne \emptyset \,\} \,\cup \, \{\, v\in V_{\overline{\alpha }} \mid E(v)\subseteq Z \,\}\)
Informally, we compute the \(\alpha \)-attractor of A by iteratively adding to A all vertices of player \(\alpha \) that have a successor in A and all vertices of player \({\overline{\alpha }}\) that have no successor outside A.
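This iterative computation can be sketched as a backward search that counts, for each opponent vertex, the remaining edges that escape the attractor; the counter trick mirrors the optimization attributed to Verver in Sect. 6. A hypothetical Python sketch, not Oink's C++ code:

```python
from collections import deque

def attractor(owner, succ, alpha, A):
    """Compute the alpha-attractor of vertex set A by a backward search.
    Each opponent vertex keeps a counter of remaining "escape" edges, so
    every edge is inspected at most once."""
    pred = {v: [] for v in succ}
    for u, ws in succ.items():
        for w in ws:
            pred[w].append(u)
    escapes = {v: len(succ[v]) for v in succ}
    Z = set(A)
    queue = deque(A)
    while queue:
        w = queue.popleft()
        for u in pred[w]:
            if u in Z:
                continue
            if owner[u] == alpha:
                Z.add(u)                # alpha has a move into Z
                queue.append(u)
            else:
                escapes[u] -= 1         # one fewer move avoiding Z
                if escapes[u] == 0:     # all moves lead into Z
                    Z.add(u)
                    queue.append(u)
    return Z
```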
2.1 Solvers
We briefly introduce several approaches to solving parity games. These approaches can be roughly divided into two categories.
First, several algorithms iteratively perform local updates to vertices until a fixed point is reached. Each vertex is equipped with some measure which records the best play either player knows they can achieve from that vertex so far. By updating measures based on the successors, they play the game backwards. The final measures indicate the winning player of each vertex and typically a winning strategy for one or both players. The strategy improvement algorithm (Sect. 4) and the progress measures algorithms (Sect. 5) fall into this category.
Second, several algorithms employ attractor computation to partition the game into regions that share a certain property. This partition is refined until the winners of some or all vertices can be identified, as well as the strategy for the winning player(s). The recursive Zielonka algorithm (Sect. 6) and the recently proposed priority promotion algorithms (Sect. 7) fall into this category.
2.2 Empirical Evaluation
Our goal in the empirical study is threefold. First, we aim to compare modern algorithms and treat them fairly. We therefore need to establish that our implementation is competitive with existing work. Second, we compare the algorithms that are implemented in Oink directly. Third, as two algorithms have a parallel implementation, we also study the obtained parallel speedup and the parallel overhead when going from a sequential to a multicore implementation.
We use the parity game benchmarks from model checking and equivalence checking proposed by Keiren [31] that are publicly available online. This is a total of 313 model checking and 216 equivalence checking games. We also consider different classes of random games, in part because the literature on parity games tends to favor studying the behavior of algorithms on random games. We include three classes of self-loop-free random games generated using PGSolver with a fixed number of vertices:

- low out-degree random games (randomgame N N 1 2 x) with \(N \in \{\, 100, 200, 500, 1000, 2000, 5000, 10000, 20000 \,\}\)
- fully random games (randomgame N N 1 N x) with \(N \in \{\, 100, 500, 1000, 2000, 4000 \,\}\)
- low-degree steady games (steadygame N 1 4 1 4) with \(N \in \{\, 100, 200, 500, 1000, 2000, 5000, 10000, 20000 \,\}\)

We generate 20 games for each parameter N, 420 random games in total. We include low-degree games, since the solvers may behave differently on games where all vertices have few edges.
We present the evaluation in two ways. We compare runtimes of algorithms and penalize algorithms that do not finish within the timeout of 15 min by a factor \(2\times \) (PAR2), i.e., we assume that their runtime is \(2\times \) the timeout. This may still be quite optimistic. Compared to a timeout of 10 min, only a few more games could be solved in 15 min. We also generate so-called cactus plots (often used to compare solvers in the SAT community) that show how many games (X) a solver can individually solve within a given amount of time (Y seconds).
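For concreteness, the PAR2 score of a list of runtimes can be computed as follows (our own sketch; we represent a run that hit the timeout as None):

```python
def par2_score(runtimes, timeout=900.0):
    """PAR2: every timeout is charged 2x the timeout (900 s = 15 min);
    finished runs contribute their actual runtime."""
    return sum(2 * timeout if t is None else t for t in runtimes)
```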
All experimental scripts and log files are available online via http://www.github.com/trolando/oink-experiments. The experiments were performed on a cluster of Dell PowerEdge M610 servers with two Xeon E5520 processors and 24 GB internal memory each. The tools were compiled with gcc 5.4.0.
3 Oink
We study modern parity game algorithms using our research tool named Oink. Oink is written in C++ and is publicly available under a permissive license via https://www.github.com/trolando/oink. Oink is easy to extend, as new solvers subclass the Solver class and only require a few extra lines in solvers.cpp.
Apart from implementing the full solvers described below, Oink also implements several preprocessors similar to other parity game solvers. We base our choices mainly on the practical considerations and observations by Friedmann and Lange [19] and by Verver [41]. We always reorder the vertices by priority and renumber the priorities from 0 to eliminate gaps (not the same as compression). The former is beneficial for the attractor-based algorithms in Sects. 6 and 7. The latter may reduce the amount of memory required for the measures-based algorithms in Sects. 4 and 5.
The following preprocessors are optional. Oink can perform priority inflation and priority compression, as described in [19]. We implement self-loop solving and winner-controlled winning cycle detection, as proposed in [41]. Winner-controlled winning cycle detection is a prerequisite for the strategy improvement algorithm of Sect. 4 but is optional for the other algorithms. We trivially solve games with only a single parity. Finally, we also implement SCC decomposition, which repeatedly solves a bottom SCC of the game until the full game is solved.
The correctness of an algorithm does not imply that its implementation is correct. Although a formal proof of the implementations would be preferable, we also implement a fast solution verifier and verify all obtained solutions.
4 Strategy Improvement
Strategy improvement is a technique where each player iteratively improves their strategies until they are optimal. Strategy improvement algorithms were first explored for parity games by Vöge and Jurdziński [42] and have subsequently been improved in [7, 16, 20, 35, 38]. Recently, parallel implementations have been studied for the GPU [17, 24, 36]. Fearnley [17] also implements their parallel algorithm for multi-core CPUs. Our treatment in this section is mostly based on [17, 35].
In the strategy improvement algorithm, player Even has a strategy \(\sigma \) and player Odd has a strategy \(\tau \) for all their vertices. They improve their strategies until a fixed point is reached, at which point the game is solved. Instead of choosing a successor, player Even may also end the play. In the specific algorithm here, player Even delays selecting a strategy for a vertex until there is a favorable continuation. As \(\sigma \) and \(\tau \) cover all vertices, they induce a fixed path from each vertex. This path is either infinite (a play) or ends at a vertex of player Even. The strategy is evaluated by computing a valuation for each vertex based on the current paths. Strategies are improved by choosing the most favorable successor.
The valuation used in e.g. [17, 35] assigns to (infinite) plays the value \(\top \) and to (finite) paths a function L(p) that records for each priority p how often it occurs in the path. To determine the best move for each player, a total order \(\sqsubset \) compares valuations as follows. For non\(\top \) valuations \(L_1\) and \(L_2\), \(L_1 \sqsubset L_2\) iff there exists a highest priority z that is different in \(L_1\) and \(L_2\), i.e., \(z = \max \,\{\, z \mid L_1(z) \ne L_2(z) \,\}\), and either \(L_1(z)<L_2(z)\) if z is even, or \(L_1(z)>L_2(z)\) if z is odd. Furthermore, \(L\sqsubset \top \) for any \(L\ne \top \). If \(L_1 \sqsubset L_2\), then \(L_2\) is more favorable for player Even and \(L_1\) is more favorable for player Odd.
Intuitively, player Even likes plays where higher even priorities occur more often. Furthermore, player Even will end the play unless the highest priority in the continuation is even. Thus infinite paths are won by player Even and the valuation \(\top \) represents this. Player Even will always play to \(\top \) and player Odd will always try to avoid \(\top \). This assumes that no winner-controlled winning cycles exist where player Odd wins, which can be guaranteed using a preprocessing step that removes these cycles.
For every strategy \(\sigma \) of player Even, a socalled best response \(\tau \) of player Odd minimizes the valuation of each position. Player Even always plays against this best response. In each iteration, after player Odd computes their best response, player Even computes all switchable edges based on the current valuation L and the current strategy \(\sigma \). An edge (u, v) is switchable if \(L_v \sqsupset L_{\sigma (u)}\). Not all switchable edges need to be selected for the improved strategy, but as argued in [17], the greedy allswitches rule that simply selects the best edge (maximal in \(\sqsubset \)) for every position performs well in practice.
There are different methods to compute the best response of player Odd. We refer again to [17] for a more indepth discussion. Player Odd can compute their best response by repeatedly switching all Oddswitchable edges, i.e., edges (u, v) s.t. \(L_v \sqsubset L_{\tau (u)}\) and \(L_v\) is minimal in \(\sqsubset \) of all successors of u.
The players thus improve their strategies as in Algorithm 1, where \(\text {All}_\text {Odd}\) and \(\text {All}_\text {Even}\) compute all switchable edges as described above. The initial strategy for player Even is to always end the play. Player Odd computes their best response, starting from a random \(\tau \). Player Even then improves their strategy once. They improve their strategies until a fixed point is reached. Then all vertices with valuation \(\top \) are won by player Even with strategy \(\sigma \) and all other vertices are won by player Odd with strategy \(\tau \) [17]. We extend the algorithm given in [17] at line 8 by marking vertices with valuation \(\top \) as “won” after player Odd computes their best response. We no longer need to consider them for \(\text {All}_\text {Odd}\) and \(\text {All}_\text {Even}\), as player Odd was unable to break the infinite play and thus these vertices are won by player Even.
The valuations can be computed in different ways. Fearnley implements a parallel algorithm that uses list ranking in two steps. The first step computes an Euler tour of the game restricted to the chosen strategies \(\sigma \) and \(\tau \), resulting in a list. The second step uses a three-step parallel reduction algorithm to sum all values of the list. The list is divided into sublists which are each summed independently in the first sweep, all subresults are then propagated in the second sweep, and the final values are computed in the third sweep. See further [17].
We propose an alternative parallel algorithm to compute the valuation. We start from each Even vertex where the path ends and perform updates along a recursive backward search, processing predecessors in parallel using task parallelism. Any vertex that is not visited has valuation \(\top \). See Algorithm 2. This algorithm is implemented in Oink using the high-performance work-stealing framework Lace [12]. When updating the valuations in L, we first sweep twice over all vertices to initialize L for each vertex to \(\top \) and to add to \(\texttt {into}(v)\) all vertices that have their strategy to v. We also implement computing switchable edges in parallel via a straightforward binary reduction using Lace.
Empirical Evaluation. We compare the performance of Oink with the sequential and parallel solvers (1 or 8 threads) by Fearnley [17] and the “optstratimprov” solver in PGSolver. We disable optional preprocessing in all solvers. We only consider games without winner-controlled winning cycles: 289 model checking, 182 equivalence checking and 279 random games, 750 games in total.
See Table 1. We observe that PGSolver is vastly outperformed by Oink and the sequential solver of Fearnley. PGSolver timed out for 160 games, whereas psi and parsi-seq only timed out for 1 and 5 games, respectively. We observe similar parallel speedup for the parallel solvers, although Fearnley’s solver has more overhead when going from the sequential version to the parallel version with 1 thread. This might be due to the extra work to produce the Euler tour and to perform list ranking. The speedup we obtain with Oink is not very impressive, but the vast majority of the games are solved within 1 s already. Furthermore, psi and parsi-seq are fairly close in performance. This is not a surprise, as their implementations are similar; the main difference is that Fearnley uses a forward search and we use a backward search. Hence, Oink is faster, but not by a large margin. Finally, we remark that Fearnley reports excellent results for list ranking on GPUs, whereas our algorithm is designed for a multi-core architecture.
5 Progress Measures
Progress measures is a technique that assigns to each vertex a monotonically increasing measure. The measure of each vertex is lifted based on the measures of its successors. By lifting vertices, players Even and Odd essentially play the game backwards. The measure represents a statistic of the best play so far from the vertex, without storing the plays explicitly.
While progress measures have been used elsewhere, they were introduced for parity games by Jurdziński [27]. Several improvements to the original algorithm are due to Verver [41] and Gazda and Willemse [22]. A number of parallel implementations have been proposed for the PlayStation 3 [6], for multi-core architectures [25, 37] and for GPUs [8, 24]. Furthermore, Chatterjee et al. proposed an implementation using BDDs [10]. Different types of progress measures were introduced after the recent breakthrough of a quasi-polynomial time algorithm due to Calude et al. [9], which resulted in the progress measures algorithms by Jurdziński et al. [28] and by Fearnley et al. [18]. This section studies small progress measures [27] and quasi-polynomial progress measures [18].
5.1 Small Progress Measures
The original small progress measures algorithm is due to Jurdziński [27]. We rely on the operational interpretation by Gazda and Willemse [22] and propose the cap-and-carryover mechanism to further understand the algorithm.
Progress measures record how favorable the game is for one of the players. W.l.o.g. we assume even progress measures. Given the highest priority d, define \(\mathbb {M}^\diamond \subseteq \mathbb {N}^d\cup \{\top \}\) to be the largest set containing \(\top \) (\(\top \notin \mathbb {N}^d\)) and only those d-tuples with 0 (denoted as \(\_\)) on odd positions. An even progress measure \(m\in \mathbb {N}^d\) essentially records for a vertex v how often each even priority p is encountered along the most optimal play (starting at v) so far, until a higher priority is encountered, i.e., until p no longer dominates. Such a prefix of the play is called a p-dominated stretch. Suppose that the sequence of priorities for a given play \(\pi \) is 0 0 1 0 2 1 2 0 2 3 2 1 4 2 6 5 6 2 0 1, then \(m=\{\, 2 \_ 3 \_ 1 \_ 2 \,\}\), since the play starts with a 0-dominated stretch containing two 0s, with a 2-dominated stretch containing three 2s, with a 4-dominated stretch containing one 4, and with a 6-dominated stretch containing two 6s. Furthermore, the special measure \(\top \) represents that the vertex can be won by player Even.
A total order \(\sqsubset \) compares measures as follows. For non-\(\top \) measures \(m_1\) and \(m_2\), \(m_1 \sqsubset m_2\) iff there exists a highest priority \(z = \max \,\{\, z \mid m_1(z) \ne m_2(z) \,\}\) and \(m_1(z)<m_2(z)\). Furthermore, \(m\sqsubset \top \) for all \(m\ne \top \). We define a derived ordering \(\sqsubset _p\) by restricting z to priorities \(\ge p\). For example, \(\{\, 2 \_ 3 \_ 1 \_ 2 \,\} \sqsubset \{\, 0 \_ 0 \_ 2 \_ 2 \,\}\), since the highest priority with a different value is 4; restricted to priorities \(\ge 6\), however, neither measure is below the other, as both record two 6s.
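Representing a measure as a tuple indexed by priority (and \(\top \) as a sentinel), the orders \(\sqsubset \) and \(\sqsubset _p\) can be sketched in Python as:

```python
TOP = "TOP"  # measure of a vertex won by player Even; maximal in the order

def less(m1, m2, p=0):
    """m1 below m2 w.r.t. the derived order restricted to priorities >= p
    (p=0 gives the full order): the highest differing priority decides,
    and there a smaller count means a smaller measure. Measures are
    tuples indexed by priority; TOP is above every tuple."""
    if m1 == TOP:
        return False
    if m2 == TOP:
        return True
    for z in range(len(m1) - 1, p - 1, -1):  # scan from the highest priority
        if m1[z] != m2[z]:
            return m1[z] < m2[z]
    return False                             # equal on priorities >= p
```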
To compute the progress measure for vertex v when playing to vertex w, given current measures \(\rho :V\rightarrow \mathbb {M}^\diamond \), we define \(\text {Prog}(\rho , v, w)\) as the \(\sqsubseteq \)-least measure \(m\in \mathbb {M}^\diamond \) such that \(m \sqsupseteq _{\textsf {pr}(v)} \rho (w)\) if \(\textsf {pr}(v)\) is odd, and \(m \sqsupset _{\textsf {pr}(v)} \rho (w)\) if \(\textsf {pr}(v)\) is even.
Prog computes the measure of the play obtained by playing from v to the play recorded in \(\rho (w)\). By choosing the lowest measure m according to \(\sqsupseteq _{\textsf {pr}(v)}\), we ensure that all m(p) for \(p <\textsf {pr}(v)\) are set to 0. The inequality is strict for even priorities \(\textsf {pr}(v)\) to ensure that \(m(\textsf {pr}(v))\) increases.
Player Even wants to achieve the highest measure, whereas player Odd wants to achieve the lowest measure. We define \(\text {Lift}(\rho , v)\) as \(\rho [\, v \mapsto \max \,\{\, \rho (v), m \,\}\,]\), where m is the \(\sqsubset \)-maximal value of \(\text {Prog}(\rho , v, w)\) over the successors \(w\in E(v)\) if v belongs to player Even, and the \(\sqsubset \)-minimal value if v belongs to player Odd.
By definition, the Lift operation increases measures monotonically. For the specific algorithm described here, we also observe that \(\text {Prog}(\rho ,v,w)\sqsupseteq _{\textsf {pr}(v)}\rho (w)\) and therefore Lift would even monotonically increase \(\rho \) without taking the maximum of the current measure and the best updated successor measure in a lifting procedure that starts with \(\rho =V\mapsto 0\).
If we iteratively lift vertices from \(\rho =V\mapsto 0\) using \(\mathrm {Lift}\), eventually some vertex may have a measure m such that m(p) for some p is higher than the number of vertices with priority p, i.e., \(m(p) > \left| V_p\right| \), where \(V_p\) denotes the vertices with priority p. In this case, we know that m represents a play that visits at least one vertex with priority p twice and thus contains a cycle dominated by p. Furthermore, player Odd cannot escape from this cycle except by playing to a higher losing priority. This follows from the fact that if player Odd could escape from the cycle, then it would not lift to this measure. The option to play to the higher losing priority is not considered because a measure to a higher priority is \(\sqsupset \) a measure that records a cycle.
We need a mechanism to let player Odd play to the next higher priority if it is forced into a cycle. However, we cannot let just any vertex play to a higher priority when its measure records a cycle, since some other vertex may escape to a lower (but still higher) priority. Therefore we need a mechanism that finds the lowest escape for player Odd. Small progress measures achieves this using a cap-and-carryover mechanism. \(\mathbb {M}^\diamond \) is restricted such that values for each even priority p may not be higher than \(\left| V_p\right| \). When this cap is reached, \(\text {Prog}\) will naturally find a next higher m by increasing the value of higher priorities and eventually reach \(\top \). For example, if we have two vertices of priority 2 and two vertices of priority 4 in a game and there is a self-loop of priority 2, measures increase as follows: \(\{ 0 \_ 2 \_ 0 \}\), \(\{ 0 \_ 0 \_ 1 \}\), \(\{ 0 \_ 1 \_ 1 \}\), \(\{ 0 \_ 2 \_ 1 \}\), \(\{ 0 \_ 0 \_ 2 \}\), \(\{ 0 \_ 1 \_ 2 \}\), \(\{ 0 \_ 2 \_ 2 \}\), \(\top \).
Thus all vertices involved in a cycle will find their measures slowly rising until the measure of some vertex controlled by Odd becomes equal to the measure when playing to a vertex that is not rising. This is the lowest escape. If no such escape is found, then the measures rise until \(\top \) and these vertices are won by player Even. The slowly increasing measures no longer follow the operational interpretation described above, but can be understood as player Odd looking for the lowest escape.
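The carry-over in the example above can be reproduced with a small successor function on capped measures. This is an illustrative sketch of our own (tuples indexed by priority; we assume the positions below p are already zero, as Prog guarantees):

```python
TOP = "TOP"  # the maximal measure: the vertex is won by player Even

def bump(m, p, cap):
    """Least capped measure strictly above m at even priority p: try to
    increase m[p]; when a value hits its cap, carry over into the next
    higher even priority and reset the lower ones; overflowing at the
    top yields TOP."""
    m = list(m)
    for z in range(p, len(m), 2):        # even priorities p, p+2, ...
        if m[z] < cap[z]:
            m[z] += 1
            for lower in range(p, z, 2): # carried-over positions reset to 0
                m[lower] = 0
            return tuple(m)
    return TOP                           # every cap reached: lift to TOP
```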
We refer to [27] for the proof that the fixed point of applying the above lifting operation solves the parity game, such that vertices with measure \(\top \) are won by player Even and all other vertices are won by player Odd with a strategy that chooses the successor for which \(\text {Prog}(\rho , v, w)\) is the lowest.
We implement three known improvements. Improvements 2 and 3 are also implemented by PGSolver [19].

1. When a vertex with some even priority p is raised to \(\top \), the cap of p may be lowered. The reason is that if a play records priority p \(\left| V_p\right| \) times, it either contains a vertex now won by player Even or a cycle of priority p [41].
2. Small progress measures only computes the strategy for player Odd according to measures for player Even. We compute both even and odd measures simultaneously to compute the strategy for both players.
3. In addition, we occasionally halt the lifting procedure to perform an attractor computation for player Even to the set of even-liftable vertices. Any vertices not in this set are won by player Odd. We can immediately lift these vertices to \(\top \) in the odd measures. We perform this analysis also for odd-liftable vertices to potentially lift vertices to \(\top \) in the even measures.
5.2 Quasipolynomial Progress Measures
We only briefly and informally describe the idea of the algorithm by Fearnley et al. [18]. (Even) measures are k-tuples \(M\in (\mathbb {N}\cup \{\bot \})^k\cup \{\top \}\), which record that the optimal play consists of consecutive stretches that are dominated by vertices with even priority. For example, in the path \(1\underline{2}131\underline{4}23\underline{2}15\underline{6}3\underline{2}1\underline{2}\), all vertices are dominated by the underlined vertices of even priority that delimit the stretches. k is chosen such that there are fewer than \(2^k\) vertices with even priority in the game. An 8-tuple \(\{\, 2 \; 2\; 4\; \bot \; 5\; \bot \; 6\; \bot \,\}\) denotes a play with consecutive stretches of 1, 2, 4, 16 and 64 even vertices, where the first dominating vertex of the stretch recorded at position i has priority M(i) and may actually be odd instead of even. If the first dominating vertex has an odd priority, then player Even must reach a higher priority before continuing to build a play where they have more dominating even vertices than are in the game. If player Even can visit more dominating even vertices than are in the game, then at least one of these is visited twice and therefore player Even knows that they can win and lifts to \(\top \).
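An illustrative reading of such a k-tuple (a sketch of our own, not part of the algorithm): each non-\(\bot \) position i witnesses a consecutive stretch of \(2^i\) dominated vertices of even priority, so the example tuple witnesses stretches of 1, 2, 4, 16 and 64 even vertices:

```python
BOT = None  # the "bottom" entry: an unused position in the k-tuple

def stretch_lengths(M):
    """Lengths of the dominated stretches witnessed by a k-tuple measure:
    position i, when used, stands for a stretch of 2**i even vertices."""
    return [2 ** i for i, v in enumerate(M) if v is not BOT]
```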
5.3 Empirical Evaluation
We compare our implementation of small progress measures and quasi-polynomial progress measures to the small progress measures implementation of pbespgsolve that comes with the mCRL2 model checker [11, 41] and the implementation of small progress measures in PGSolver [19]. Unfortunately, the solver used in [18] contains proprietary source code and cannot be compiled and compared. For this comparison, we disabled optional preprocessing, i.e., removing self-loops and winner-controlled winning cycles and solving single-parity games.
See Table 2. Although Fearnley et al. [18] say that the QPT solver is mainly interesting for the theoretical result rather than practical performance, we observe that qpt outperforms the other solvers for random games. Oink is faster than PGSolver, especially for model checking and equivalence checking.
6 Zielonka’s Recursive Algorithm
The algorithm by Zielonka [43] is a recursive solver that, despite its relatively bad theoretical complexity, is known to outperform other algorithms in practice [19]. Furthermore, tight bounds are known for various classes of games [21].
Zielonka’s recursive algorithm is based on attractor computation. At each step, given the current subgame \(\mathcal {G}\), the algorithm removes the attractor \(A=\textit{Attr}^{\,\mathcal {G}}_\alpha (\textsf {pr}^{-1}(p))\), i.e., all vertices attracted to the vertices of the current highest priority p for player \(\alpha =p \bmod 2\), and recursively computes the winning regions of the remaining subgame \(\mathcal {G}\setminus A\). If the opponent \({\overline{\alpha }}\) can attract vertices in A to \(W_{\overline{\alpha }}\), then \({\overline{\alpha }}\) wins and the solution for the remainder is computed recursively. Otherwise, \(\alpha \) wins A and no further recursion is necessary. The strategies for both players are trivially obtained during attractor computation and by assigning to winning p-vertices in A any strategy to vertices in \(W_\alpha \cup A\).
Zielonka’s original algorithm has been extended and improved over the years. In his thesis, Verver [41] improves the partitioning of the game after computing A by extending A with the attractors of the next highest vertices if they are of the same parity. The original algorithm always recomputes the solution of the remaining subgame if \(W_{\overline{\alpha }}\) is non-empty, even if no vertices are attracted to \(W_{\overline{\alpha }}\). Liu et al. show that this recomputation is not necessary [34]. See Algorithm 3 for the recursive algorithm with these modifications. Other extensions that we do not consider here are the subexponential algorithm [29] and the big steps algorithm [39], which have been reported to perform slower than ordinary Zielonka [19]. Also, variations using BDDs have been proposed [2, 30].
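The recursion, including the observation of Liu et al. that the second recursive call can be skipped when the opponent attracts nothing beyond its own winning region, can be sketched as follows. This is a readable Python rendition under our own naming with a naive attractor; Oink's C++ implementation is organized differently:

```python
EVEN, ODD = 0, 1

def attr(owner, succ, sub, alpha, A):
    """alpha-attractor of A within the subgame 'sub' (naive fixpoint)."""
    Z = set(A)
    changed = True
    while changed:
        changed = False
        for v in sub - Z:
            moves = [w for w in succ[v] if w in sub]
            if owner[v] == alpha:
                ok = any(w in Z for w in moves)   # alpha can move into Z
            else:
                ok = all(w in Z for w in moves)   # opponent cannot avoid Z
            if ok:
                Z.add(v)
                changed = True
    return Z

def zielonka(owner, pr, succ, sub=None):
    """Zielonka's recursive algorithm; returns (W_even, W_odd)."""
    if sub is None:
        sub = set(succ)
    if not sub:
        return set(), set()
    p = max(pr[v] for v in sub)              # highest priority in the subgame
    alpha = p % 2                            # its player
    A = attr(owner, succ, sub, alpha, {v for v in sub if pr[v] == p})
    W = list(zielonka(owner, pr, succ, sub - A))   # [W_even, W_odd]
    B = attr(owner, succ, sub, 1 - alpha, W[1 - alpha])
    if B == W[1 - alpha]:                    # opponent attracts nothing from A:
        W[alpha] = sub - W[1 - alpha]        # skip the second recursion
        return W[0], W[1]
    W2 = list(zielonka(owner, pr, succ, sub - B))
    W2[1 - alpha] |= B
    return W2[0], W2[1]
```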
Although the implementation of the recursive algorithm in PGSolver [19] is typically used for comparisons in the literature, improved implementations have been proposed by Verver [41], Di Stasio et al. [40], Liu et al. [34], and Arcucci et al. [1]. Verver suggests to record the number of remaining “escaping” edges for each vertex during attractor computation, to reduce the complexity of attractor computation at the cost of an extra integer per vertex. Di Stasio et al. avoid creating copies of the game for recursive operations by recording which vertices are removed in a special array. Recently, Arcucci et al. extended the implementation in [40] with a multi-core implementation of attractor computation [1].
The implementation in Oink is based upon the ideas described above. Furthermore, we improve the implementation using the following techniques.

– Instead of creating copies of the “removed” array [40] for each recursive step, we use a single “region” array that stores, for each vertex, the index r of the call to attr that attracted it. This value is initially \(\bot \) for all vertices and is reset to \(\bot \) for vertices that are returned to the subgame (line 10). We record the initial r at each depth and can thus derive that all vertices with a value \(\ge r\) or \(\bot \) are part of the subgame.
– As a preprocessing step, we order all vertices by priority. We can then quickly obtain the highest vertex of each subgame.
– We eliminate recursion by using an explicit stack.
– We implement an alternative lock-free attractor, relying on the work-stealing library Lace [12], which provides fine-grained load balancing.
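The first two techniques can be sketched as follows (illustrative names and layout, not Oink’s actual code): a single region array records which attractor call claimed each vertex, and a priority-sorted vertex order yields the highest vertex of a subgame by a linear scan from the front.

```cpp
#include <cassert>
#include <vector>

constexpr int BOT = -1;  // vertex currently belongs to no region

struct Regions {
    // region[v] = index of the attr call that attracted v, or BOT
    std::vector<int> region;
    explicit Regions(int n) : region(n, BOT) {}

    // A vertex belongs to the subgame of a recursive call whose first
    // attractor computation had index r0 iff its value is BOT or >= r0.
    bool in_subgame(int v, int r0) const {
        return region[v] == BOT || region[v] >= r0;
    }
};

// With vertices pre-sorted by priority (descending), the highest vertex
// of the current subgame is the first sorted vertex still in it.
int highest_of_subgame(const Regions& rs,
                       const std::vector<int>& by_prio_desc, int r0) {
    for (int v : by_prio_desc)
        if (rs.in_subgame(v, r0)) return v;
    return -1;  // subgame is empty
}
```

This avoids both copying per-recursion arrays and rescanning priorities from scratch at every recursive step.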
In the interest of space, we cannot describe the multi-core attractor in depth. The implementation is fairly straightforward: we implement the attractor recursively, and the work-stealing framework runs the recursive operations in parallel. As is typical for lock-free algorithms, we rely on the compare-and-swap operation for safe communication between threads. The attractor uses this operation when manipulating the number of escaping edges and to “claim” a vertex by setting its value in the region array from \(\bot \) to r.
Empirical Evaluation. We compare our implementation of Zielonka’s recursive algorithm, with and without the optimizations of Algorithm 3, to PGSolver, to Verver’s implementation pbespgsolve [11, 41], and to SPGSolver [1, 40]. Unfortunately, the Java version of SPGSolver (all three variations) suffers from severe performance degradation for unknown reasons; we therefore used the C++ implementation from their online repository instead. The multi-core version of the SPGSolver tool relies on async tasks provided by C++11. As in the previous sections, we disable the optional preprocessors that solve single-parity games, remove self-loops, and solve winner-controlled winning cycles.
See Table 3. The results show that the implementation in Oink outperforms PGSolver by several orders of magnitude on all benchmark types; PGSolver timed out for 83 of the 949 games. The solvers spgseq and pbeszlk are faster than Oink on the model checking and equivalence checking games, but are significantly outperformed on random games. We also observe severe performance degradation for spgmc on random games. Our parallel implementation of Zielonka’s algorithm also does not scale well. Finally, there is no significant difference between the optimized and unoptimized versions of Zielonka’s algorithm, except on random games.
7 Priority Promotion
In recent work, a new family of algorithms based on priority promotion has been proposed [5]. Priority promotion starts from a decomposition of the game similar to that of Zielonka’s recursive algorithm. It is based on the insight that a recursive decomposition via attractor computation yields regions with a specific property related to the highest priority p in the region, called its measure: all plays that stay in the region are won by the player \(\alpha \) who wins the parity of p. The other player \({\overline{\alpha }}\) has three options: lose the game by staying in the region, leave the region by playing to an \(\alpha \)-region with a higher measure, or leave the region to a lower region of either player via a vertex with priority p. The goal of \(\alpha \) is to find “closed” \(\alpha \)-regions, from which \({\overline{\alpha }}\) cannot escape to lower regions. In such a region, player \({\overline{\alpha }}\) either loses, or leaves to a higher \(\alpha \)-region, which may or may not itself be closed. The measure of a closed \(\alpha \)-region is then “promoted” to the measure of the lowest higher region to which \({\overline{\alpha }}\) can escape, and the attractor-based decomposition is recomputed for all lower regions, since the promoted region may now attract from regions with a measure between its original and its promoted measure. When player \({\overline{\alpha }}\) cannot escape from an \(\alpha \)-region even to a higher \(\alpha \)-region, player \(\alpha \) wins all vertices in the region.
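The promotion step can be illustrated with a toy sketch of the bookkeeping; the data layout and names are ours, not the pseudocode of [5]. A closed \(\alpha \)-region of measure p is promoted to the lowest higher-measure region to which \({\overline{\alpha }}\) can escape, and in the basic PP policy all other lower regions are reset:

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <vector>

// promote: p is the measure of the closed alpha-region; escape_measures
// (non-empty) holds the measures of the higher alpha-regions to which the
// opponent can escape; region_of maps each vertex to the measure of its
// current region, with -1 meaning "no region / reset".
int promote(int p, const std::vector<int>& escape_measures,
            std::map<int, int>& region_of) {
    // promoted measure q: lowest higher region reachable by the opponent
    int q = *std::min_element(escape_measures.begin(), escape_measures.end());
    for (auto& [v, m] : region_of) {
        if (m == p) m = q;        // the closed region takes measure q
        else if (m < q) m = -1;   // basic PP: reset all lower regions
    }
    return q;
}
```

The reset of the regions strictly below q is exactly what the PP+, RR, and DP variants discussed next try to avoid or delay.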
Priority promotion was proposed in [5] and improved in [3, 4]. The original PP algorithm [5] forgets all progress (“resets”) in lower regions after promotion. The PP+ algorithm [3] only resets lower regions of player \({\overline{\alpha }}\). The RR algorithm [4] only resets some lower regions of player \({\overline{\alpha }}\). The DP algorithm [3] uses a heuristic to delay certain promotions to avoid resets. We implement all four algorithms and also combine the DP algorithm, which is based on PP+, with the RR algorithm.
Empirical Evaluation. We compare our implementations of the five variations of priority promotion in Oink. As we do not compare with other solvers, we enable the optional preprocessors that solve single-parity games, remove self-loops, and solve winner-controlled winning cycles.
See Table 4. Overall, the simplest solver pp performs just as well as the more complex solvers. The variations are motivated by crafted families of games that require an exponential number of promotions. The pp solver may be the most vulnerable to these crafted families, but on practical games and random games there is no significant difference.
8 Conclusions
See Table 5 for a comparison of the five main sequential algorithms in Oink, including the preprocessing steps that remove winner-controlled winning cycles and self-loops and solve single-parity games. The results show that the zlk and pp solvers have similar performance and outperform the other solvers. See also Fig. 1 for a cactus plot of these five solvers.
Priority promotion is a powerful and attractive idea, as promoting closed \(\alpha \)-regions is similar to cap-and-carryover in small progress measures. Attractor computation finds such regions directly, whereas value iteration algorithms may require many iterations. We confirm the observations in [5] that the algorithm performs well, although it is not faster than Zielonka’s algorithm.
In this work, we studied modern parity game algorithms using a new tool named Oink, which is publicly available via https://www.github.com/trolando/oink. We implemented a number of modern algorithms and provided a comprehensive description of them, introducing cap-and-carryover to understand small progress measures. We proposed improvements to strategy improvement and to Zielonka’s algorithm. We presented an empirical evaluation of Oink, comparing its performance with state-of-the-art solvers, especially the popular PGSolver tool. The results demonstrate that Oink is competitive with other implementations and in fact outperforms PGSolver for all algorithms, especially Zielonka’s recursive algorithm. This result is particularly interesting considering that many publications compare the performance of novel ideas to Zielonka’s algorithm in PGSolver.
References
Arcucci, R., Marotta, U., Murano, A., Sorrentino, L.: Parallel parity games: a multicore attractor for the Zielonka recursive algorithm. In: ICCS, Procedia Computer Science, vol. 108, pp. 525–534. Elsevier (2017)
Bakera, M., Edelkamp, S., Kissmann, P., Renner, C.D.: Solving \(\mu \)-calculus parity games by symbolic planning. In: Peled, D.A., Wooldridge, M.J. (eds.) MoChArt 2008. LNCS (LNAI), vol. 5348, pp. 15–33. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00431-5_2
Benerecetti, M., Dell’Erba, D., Mogavero, F.: A delayed promotion policy for parity games. In: GandALF 2016. EPTCS, vol. 226, pp. 30–45 (2016)
Benerecetti, M., Dell’Erba, D., Mogavero, F.: Improving priority promotion for parity games. In: Bloem, R., Arbel, E. (eds.) HVC 2016. LNCS, vol. 10028, pp. 117–133. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49052-6_8
Benerecetti, M., Dell’Erba, D., Mogavero, F.: Solving parity games via priority promotion. In: Chaudhuri, S., Farzan, A. (eds.) CAV 2016. LNCS, vol. 9780, pp. 270–290. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41540-6_15
van der Berg, F.: Solving parity games on the PlayStation 3. In: Twente Student Conference (2010)
Björklund, H., Vorobyov, S.G.: A combinatorial strongly subexponential strategy improvement algorithm for mean payoff games. Discret. Appl. Math. 155(2), 210–229 (2007)
Bootsma, P.: Speeding up the small progress measures algorithm for parity games using the GPU. Master’s thesis, Eindhoven University of Technology (2013)
Calude, C.S., Jain, S., Khoussainov, B., Li, W., Stephan, F.: Deciding parity games in quasipolynomial time. In: STOC, pp. 252–263. ACM (2017)
Chatterjee, K., Dvorák, W., Henzinger, M., Loitzenbauer, V.: Improved set-based symbolic algorithms for parity games. In: CSL, LIPIcs, vol. 82, pp. 18:1–18:21. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2017)
Cranen, S., Groote, J.F., Keiren, J.J.A., Stappers, F.P.M., de Vink, E.P., Wesselink, W., Willemse, T.A.C.: An overview of the mCRL2 toolset and its recent advances. In: Piterman, N., Smolka, S.A. (eds.) TACAS 2013. LNCS, vol. 7795, pp. 199–213. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36742-7_15
van Dijk, T., van de Pol, J.C.: Lace: non-blocking split deque for work-stealing. In: Lopes, L., et al. (eds.) Euro-Par 2014. LNCS, vol. 8806, pp. 206–217. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-14313-2_18
Emerson, E.A., Jutla, C.S.: Tree automata, \(\mu \)-calculus and determinacy (extended abstract). In: FOCS, pp. 368–377. IEEE Computer Society (1991)
Emerson, E.A., Jutla, C.S., Sistla, A.P.: On model-checking for fragments of \(\mu \)-calculus. In: Courcoubetis, C. (ed.) CAV 1993. LNCS, vol. 697, pp. 385–396. Springer, Heidelberg (1993). https://doi.org/10.1007/3-540-56922-7_32
Emerson, E.A., Jutla, C.S., Sistla, A.P.: On model checking for the \(\mu \)-calculus and its fragments. Theor. Comput. Sci. 258(1–2), 491–522 (2001)
Fearnley, J.: Non-oblivious strategy improvement. In: Clarke, E.M., Voronkov, A. (eds.) LPAR 2010. LNCS (LNAI), vol. 6355, pp. 212–230. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-17511-4_13
Fearnley, J.: Efficient parallel strategy improvement for parity games. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10427, pp. 137–154. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63390-9_8
Fearnley, J., Jain, S., Schewe, S., Stephan, F., Wojtczak, D.: An ordered approach to solving parity games in quasi polynomial time and quasi linear space. In: SPIN, pp. 112–121. ACM (2017)
Friedmann, O., Lange, M.: Solving parity games in practice. In: Liu, Z., Ravn, A.P. (eds.) ATVA 2009. LNCS, vol. 5799, pp. 182–196. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04761-9_15
Friedmann, O., Lange, M.: Local strategy improvement for parity game solving. In: GandALF. EPTCS, vol. 25, pp. 118–131 (2010)
Gazda, M., Willemse, T.A.C.: Zielonka’s recursive algorithm: dull, weak and solitaire games and tighter bounds. In: GandALF. EPTCS, vol. 119, pp. 7–20 (2013)
Gazda, M., Willemse, T.A.C.: Improvement in small progress measures. In: GandALF. EPTCS, vol. 193, pp. 158–171 (2015)
Grädel, E., Thomas, W., Wilke, T. (eds.): Automata, Logics, and Infinite Games: A Guide to Current Research. LNCS, vol. 2500. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-36387-4
Hoffmann, P., Luttenberger, M.: Solving parity games on the GPU. In: Van Hung, D., Ogawa, M. (eds.) ATVA 2013. LNCS, vol. 8172, pp. 455–459. Springer, Cham (2013). https://doi.org/10.1007/978-3-319-02444-8_34
Huth, M., Kuo, J.H.P., Piterman, N.: Concurrent small progress measures. In: Eder, K., Lourenço, J., Shehory, O. (eds.) HVC 2011. LNCS, vol. 7261, pp. 130–144. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34188-5_13
Jurdzinski, M.: Deciding the winner in parity games is in UP \(\cap \) coUP. Inf. Process. Lett. 68(3), 119–124 (1998)
Jurdziński, M.: Small progress measures for solving parity games. In: Reichel, H., Tison, S. (eds.) STACS 2000. LNCS, vol. 1770, pp. 290–301. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-46541-3_24
Jurdzinski, M., Lazic, R.: Succinct progress measures for solving parity games. In: LICS, pp. 1–9. IEEE Computer Society (2017)
Jurdzinski, M., Paterson, M., Zwick, U.: A deterministic subexponential algorithm for solving parity games. SIAM J. Comput. 38(4), 1519–1532 (2008)
Kant, G., van de Pol, J.: Generating and solving symbolic parity games. In: GRAPHITE. EPTCS, vol. 159, pp. 2–14 (2014)
Keiren, J.J.A.: Benchmarks for parity games. In: Dastani, M., Sirjani, M. (eds.) FSEN 2015. LNCS, vol. 9392, pp. 127–142. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24644-4_9
Kozen, D.: Results on the propositional \(\mu \)-calculus. Theor. Comput. Sci. 27, 333–354 (1983)
Kupferman, O., Vardi, M.Y.: Weak alternating automata and tree automata emptiness. In: STOC, pp. 224–233. ACM (1998)
Liu, Y., Duan, Z., Tian, C.: An improved recursive algorithm for parity games. In: TASE, pp. 154–161. IEEE Computer Society (2014)
Luttenberger, M.: Strategy iteration using nondeterministic strategies for solving parity games. CoRR abs/0806.2923 (2008)
Meyer, P.J., Luttenberger, M.: Solving mean-payoff games on the GPU. In: Artho, C., Legay, A., Peled, D. (eds.) ATVA 2016. LNCS, vol. 9938, pp. 262–267. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46520-3_17
van de Pol, J., Weber, M.: A multicore solver for parity games. Electr. Notes Theor. Comput. Sci. 220(2), 19–34 (2008)
Schewe, S.: An optimal strategy improvement algorithm for solving parity and payoff games. In: Kaminski, M., Martini, S. (eds.) CSL 2008. LNCS, vol. 5213, pp. 369–384. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-87531-4_27
Schewe, S.: Solving parity games in big steps. J. Comput. Syst. Sci. 84, 243–262 (2017)
Di Stasio, A., Murano, A., Prignano, V., Sorrentino, L.: Solving parity games in Scala. In: Lanese, I., Madelaine, E. (eds.) FACS 2014. LNCS, vol. 8997, pp. 145–161. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-15317-9_9
Verver, M.: Practical improvements to parity game solving. Master’s thesis, University of Twente (2013)
Vöge, J., Jurdziński, M.: A discrete strategy improvement algorithm for solving parity games. In: Emerson, E.A., Sistla, A.P. (eds.) CAV 2000. LNCS, vol. 1855, pp. 202–215. Springer, Heidelberg (2000). https://doi.org/10.1007/10722167_18
Zielonka, W.: Infinite games on finitely coloured graphs with applications to automata on infinite trees. Theor. Comput. Sci. 200(1–2), 135–183 (1998)
Acknowledgements
We thank Tim Willemse and John Fearnley for their helpful comments and Jaco van de Pol for the use of their computer cluster.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2018 The Author(s)
About this paper
Cite this paper
van Dijk, T. (2018). Oink: An Implementation and Evaluation of Modern Parity Game Solvers. In: Beyer, D., Huisman, M. (eds) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2018. Lecture Notes in Computer Science, vol 10805. Springer, Cham. https://doi.org/10.1007/978-3-319-89960-2_16
DOI: https://doi.org/10.1007/978-3-319-89960-2_16
Print ISBN: 978-3-319-89959-6
Online ISBN: 978-3-319-89960-2