Abstract
This paper (1) summarizes the history of the RERS challenge for the analysis and verification of reactive systems, its profile and intentions, its relation to other competitions, and, in particular, its evolution due to the feedback of participants, and (2) presents the most recent development concerning the synthesis of hard benchmark problems. In particular, the second part proposes a way to tailor benchmarks according to the depths to which programs have to be investigated in order to find all errors. This gives benchmark designers a method to challenge contributors who try to perform well by excessive guessing.
Introduction
Competitions and challenges have provided a valuable contribution to the development of verification and analysis tools, and numerous events of this kind have evolved over the last decades [4, 6, 8, 29, 32, 36, 43]. The approaches followed by these, in many cases recurring, events vary from off-site to on-site, with or without concrete resource constraints, from solution orientation to tool orientation, from known benchmark problems to problems with unknown true properties to controlled, generated benchmarks, and from qualitative/human evaluation to automated evaluation processes, etc. (cf. Sect. 2 and [5]).
The RERS challenge is characterized by its property-oriented benchmark generation: benchmarks are automatically generated in a “requirements-driven” fashion. More precisely, the starting point of the benchmark generation process is a set of desired structural properties, here formulated in LTL, which are successively transformed via Büchi automata that characterize all satisfying executions to modal transition systems and, with a few more steps, transformed to code of various implementation languages (cf. Sect. 2.4). This construction principle aims at benchmarks that closely resemble realistic code, but can be flexibly tailored in their degree of difficulty. Originally, we considered size, the number of arithmetic operations, and the data structures used as measures of intricacy. Over the years, the importance of controlling the length of shortest counterexamples as a means for scaling the difficulty of the verification task (in contrast to the complexity of the benchmark problem) became more and more apparent.
This paper consists of two parts: The first part (Sect. 2) summarizes the history of RERS, its profile and intentions, its relation to other competitions, and, in particular, its evolution due to the feedback of participants. This also comprises a discussion of experienced ‘oddities’, both at RERS and in relation to other competitions, as well as ways to overcome them. The second part (Sect. 3) presents our most recent development concerning the control of counterexample lengths. The proposed tailoring of benchmarks, which focuses on the depths to which programs have to be investigated in order to find all errors, gives benchmark designers a way to challenge contributors who claim satisfaction without sufficient evidence.
The RERS challenge
The Rigorous Examination of Reactive Systems challenge (RERS) is a verification challenge that focuses on temporal and reachability properties of reactive systems. RERS was founded in 2010 and has had annual instances since 2012. The challenge was designed to explore, evaluate, and compare the capabilities of state-of-the-art software verification tools and techniques. Areas of interest include but are not limited to static analysis [52], model checking [9, 14, 25], symbolic execution [38], and (learning-based) testing [62].
The key idea of RERS is to use generated, realistic problems of scalable complexity on which participants have to check sets of properties.

Automatic generation of benchmarks with known properties provides new problems each year that are (a) previously unknown to participants, and (b) for which the correct verdicts for properties are not known to the participants during the challenge—preventing “performance tuning” of participating verification tools towards a high score on the basis of known expected results or known characteristics of benchmarks.

Realism of benchmarks (in contrast to typical randomly generated benchmarks) is achieved in a requirements-driven fashion: programs are generated according to characteristic temporal patterns resembling the structure of real code.

Scalability of difficulty is the basis for detailed performance profiling of participating tools.
In this section, we provide a brief motivation for RERS, give an overview of its history, sketch some of the scientific contributions on the automatic generation of benchmarks that were facilitated through RERS, and briefly describe how different ranking methods in RERS enable detailed profiling of tools.
Remark
Parts of this section have been published before in papers or on the RERS website. We provide pointers to more detailed accounts where it is appropriate and possible. The focus of this section is on providing a general overview.
Rigorous examination of reactive systems
The motivation of RERS is to enable profiling of the principal capabilities and limitations of tools and approaches. The RERS challenge is therefore “free-style”, i.e., without time or resource limitations, and encourages the combination of methods and tools. Strict time or resource limitations in combination with previously known solutions encourage tools to be tweaked for certain training sets, which could give a false impression of their capabilities. Such limitations also lead participants to abandon time-consuming problems. Our focus on principal capabilities instead of defined and identical resources is reflected by making RERS a challenge instead of a competition. We only provide the tasks and collect the results from participants. Solutions are computed by them in any way they want. The main goals of RERS are:

1.
encourage the combination of methods from different (and usually disconnected) research fields for better software verification results,

2.
provide a complete framework for an automated challenge organization that covers the process from generating differently tailored tasks that reveal the strengths and weaknesses of specific approaches to an automated result comparison (excluding the computation of results itself),

3.
initiate a discussion about better benchmark generation, reaching out across the usual community barriers to provide benchmarks useful for testing and comparing a wide variety of tools, and

4.
collect (additional) interesting syntactical features that should be supported by benchmark generators.
There exists no other software verification challenge with a profile similar to that of RERS: while (1) is a quite generic goal that is pursued by a number of verification competitions, goals (2)–(4) are unique to RERS. Nevertheless, RERS shares some intentions and characteristics with SV-COMP, MCC, and VerifyThis.
The Software Verification Competition [8] (SV-COMP) is also concerned with reachability properties and features a few verification tasks concerning termination and memory safety. In direct comparison, SV-COMP does not allow the manual combination of tools and directly addresses tool developers. In contrast to RERS, it has time and resource limitations and does not feature certificate-like achievements (cf. Sect. 2.5), but it has developed a detailed ranking system for the comparison of tools and tries to prevent guessing by imposing high penalties on mistakes. An important difference to SV-COMP is that RERS features benchmarks that are generated automatically for each challenge iteration, ensuring that all results to the verification tasks are unknown to participants. Over time, the RERS benchmark generator contributed problems to the SV-COMP benchmark repository.
Another competition concerned with the verification of parallel systems in combination with LTL properties is the Model Checking Contest [43] (MCC). Participants have to analyze Petri nets as abstract models and check LTL and CTL formulas, the size of the state space, reachability, and various upper bounds. The benchmark consists of a large set of known models and a small set of unknown models that were collected among the participants.
In contrast to RERS, MCC participants submit tools that have to adhere to resource restrictions, rather than answers to problems. Moreover, the correct answers to the used verification tasks are not always known, and a majority-vote-based approach to correctness is used.^{Footnote 1} This may well penalize outstanding approaches that are, e.g., unique in identifying the correct result. This problem is overcome for RERS due to its property-oriented benchmark generation. We were happy to hear that MCC started using some verification tasks of RERS to partially overcome this problem.
Finally, VerifyThis [29] features program verification challenges. Participants get a fixed amount of time to work on a number of challenges, namely to prove the functional correctness of a number of non-trivial algorithms. That competition focuses on the use of interactive or semi-interactive tools. Similar to RERS, VerifyThis encourages the use of a mixture of tools; however, submissions are judged by a jury. In direct comparison, RERS participants submit results that can be checked and ranked automatically; only the “best approach award” involves a jury judgment.
Genesis (from ZULU to RERS)
The idea for RERS arose in 2010 after participating in the ZULU automata learning competition [28]. The ZULU competition had some very exciting and some rather frustrating aspects. The competition was based on randomly generated automata, the participating learning tools competed in a black-box scenario, and questionnaires (sets of words for which participants had to decide language membership) were the basis for ranking tools.
On the one hand, ZULU had an incredibly engaging training and competition mode: contestants could generate new training problems in a push-button fashion, and the ranking of tools on all benchmark instances was instantly visible. Improvements to algorithms translated into almost instant gratification, fueling a month-long race for the win.
The mode of ranking performance by counting correct answers in questionnaires, on the other hand, did not serve well for differentiating tools and in some cases even favored learning algorithms that were already known to perform badly on real problems: less precise models produced better predictions for certain distributions of words in questionnaires. Moreover, algorithms could be (and were) tuned towards the structural properties of a randomly generated benchmark. It turned out that this tuning was often counterproductive for inferring models of real systems.
The RERS initiative aimed at developing an engaging challenge, or a set of challenges (cf. Sect. 2.3), in the area of formal methods that would overcome the perceived weaknesses of ZULU. As a consequence, RERS is based on generated benchmarks (cf. Sect. 2.4), and one of the long-term goals of RERS is making the generation of new benchmarks accessible to participants. At the same time, the approach to benchmark generation in RERS aims at generating benchmarks that have realistic properties, resulting in relevant performance profiles of tools. This aim is also supported by RERS providing multiple modes of ranking and rating, tailored to profile contributions according to their capabilities and limitations (cf. Sect. 2.5).
Tracks and history
After an initial workshop in 2010, RERS has held yearly challenges since 2012 with a constantly evolving set of tracks and verification challenges. Since 2012, a total of 49 people from 16 different research groups have participated in RERS.^{Footnote 2} Table 1 provides a comprehensive overview that is detailed in the remainder of this section.
Sequential Programs RERS started in 2012 with sequential benchmark programs in two tracks (LTL and Reachability) that correspond to the type of property that has to be analyzed. Sequential benchmark programs are made available as Java and C programs. Since 2014, there have been three categories in each track that represent the syntactical features included in the benchmark programs belonging to the respective category.

Plain. The programs contain only assignments, with the exception of some scattered summation, subtraction, and remainder operations in the reachability problems.

Arithmetic. The programs frequently contain summation, subtraction, multiplication, division, and remainder operations.

Data structures. Arrays and operations on arrays are added (other data structures are planned for the future).
In each category, small, medium-sized, and large programs are generated for a challenge benchmark.
Starting in 2020, LTL properties will be controlled for minimal depth of counterexamples (presented in this paper), enabling an additional dimension in which complexity can be scaled.
Parallel Programs Since 2016, RERS features benchmarks that contain parallel systems, which are made available as labeled transition systems (LTSs), Promela [24] code, and Petri nets [20, 53]. The parallel track started with LTL properties and was tentatively extended to CTL properties in 2018. As a new addition in 2019, CTL properties were supported as a full track for the category of parallel programs (e.g., Petri nets) and were thereby put on par with our support for LTL model checking tasks.
Experimental Tracks In several years, RERS had experimental tracks that did not (yet) result in permanent additions to the challenge.

In 2013, RERS featured grey-box and black-box problems in addition to the (default) white-box problems. The additional problems were intended to encourage the participation of black-box approaches and facilitated the integration of white-box and black-box techniques.

In 2015, RERS was co-located with the International Conference on Runtime Verification (RV) [6] and featured monitoring problems for which traces were provided.

In 2019, for the first time in the history of RERS, the challenge featured benchmark programs that are based on real-world models [32]. The corresponding challenge tracks were based on a cooperation with ASML, a large Dutch semiconductor company that provided the underlying models. Properties that participants could analyze for these systems ranged from reachability queries over LTL formulas to CTL properties (omitted in Table 1).
A detailed history and description of all past tracks and all sets of challenge problems can be found on the RERS website^{Footnote 3} along with properties and expected verdicts.
Synthesis of benchmarks with known properties
RERS relies on generated benchmark problems of scalable complexity and with known properties. The motivation for this, as stated above, is to enable detailed profiling of tools. The RERS benchmark generation technology combines scalable complexity with known properties, two goals that appear conflicting at first glance: it is impossible to automatically decide properties on problems that are too complex for current tools to analyze. Other competitions (e.g. MCC) solve this by determining verdicts that ought to be accepted as correct by majority vote. This, of course, has the drawback that a high performance of few tools, resulting in uncommon but correct verdicts on some problems, leads to a competition ranking that is inversely correlated to performance. We observed this firsthand in the ZULU competition.
Motivated by this experience, we have developed a generic method and toolboxes for generating benchmark problems of scalable complexity with known properties. A frequent argument against the use of generated benchmarks is the potential threat to the validity of profiling results that arises from their artificial nature. We address this threat in RERS by using sets of LTL properties for inducing structure or actual industrial system models at the core of our benchmark synthesis.
In this section, we provide a brief overview of the generic method, using the generation of sequential benchmark problems as a concrete example. Detailed accounts of concrete toolboxes for different classes of benchmark problems can be found in the papers listed in Table 2.
General Approach Our general approach to the generation of benchmarks that we use in RERS is sketched in Fig. 1 and exists in two variants, property-based benchmark generation and model-based benchmark generation. Both variants follow the same high-level pattern. The process is divided into two phases. In the first phase (upper half of both subfigures), benchmark properties are established on a small model. In the second phase, models are expanded by semantics-preserving transformations that increase complexity at the model level and are then translated into code. Code generation can add another dimension of complexity by encoding the behavior specified in the model through different language features (e.g., using arithmetic expressions or data structures).
Property-based Benchmark Generation In this variant (left of Fig. 1), we start the generation process by randomly choosing and then instantiating LTL property specification patterns [19], which we partition into a small defining set used in the subsequent synthesis step and a larger set of additional properties whose validity is later checked on the synthesized model via model checking. Typically, we generate around 100 properties, about ten of which can be defining, in order to still allow for automated synthesis.
Our current implementation uses LTL2Buchi [23] and the Spot library [18] for translating the LTL specification into a Büchi automaton. The resulting intermediate Büchi automaton is then transformed into a concrete reactive system model (a Mealy machine) that represents all words/paths satisfying the defining properties. The construction of this Mealy machine is randomized and can be customized in various dimensions, e.g., the size of the model, the size of the input and output alphabets, the density of the transition graph etc., while guaranteeing that all defining properties remain valid.
Model-based Benchmark Generation In this variant, we start from a reactive system model. Such models were provided by ASML in 2019 [32]. Properties and verdicts can then be computed from these models in two different ways (right of Fig. 1). Generated properties can be model-checked as in the property-driven approach; this was, e.g., done for LTL properties in the industrial track of RERS 2019. Alternatively, properties can be computed from the models directly, as was done for CTL properties in the industrial track of RERS 2019.
Model Expansion and Code Generation In the second phase of generating sequential benchmark problems, Mealy machines are enlarged via randomized property-oriented expansion (POE) [60] and by introducing unreachable states. Both transformations are incremental and can be stopped at any moment, e.g., when a certain threshold of states is reached. The transformation from Mealy machines to programs interprets Mealy machines as simple loops of guarded commands, whose guards precisely check for the correct state identification, and replaces the simple guard structure with a complex, semantically equivalent decision structure.
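The guarded-command interpretation of a Mealy machine can be sketched in a few lines. The machine below is invented for illustration and is far smaller than generated benchmarks; the sketch is in Python rather than in the generated C or Java:

```python
# Illustrative sketch: a tiny Mealy machine executed as a simple loop of
# guarded commands, as in the first step of code generation described above.
# The transition table is hypothetical, not taken from the RERS generator.
MEALY = {
    # (state, input) -> (output, successor state)
    (1, "a"): ("x", 2),
    (1, "b"): ("y", 1),
    (2, "a"): ("y", 1),
    (2, "b"): ("x", 2),
}

def run(inputs):
    """Execute the machine: each loop iteration evaluates guards of the
    form 'state == s and symbol == i' and fires the matching command."""
    state, outputs = 1, []
    for symbol in inputs:
        for (s, i), (out, succ) in MEALY.items():
            if state == s and symbol == i:  # guard
                outputs.append(out)         # command: emit output,
                state = succ                # move to successor state
                break
        else:
            raise ValueError(f"undefined input {symbol!r} in state {state}")
    return outputs

print(run("aab"))  # → ['x', 'y', 'y']
```

The benchmark generator then obfuscates this transparent guard structure into a semantically equivalent but complex decision structure.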
As a final step, we employ data-flow analysis, transformation, and code motion techniques [12, 39,40,41,42, 49, 58, 68] to randomly elaborate the program model structure along both the logical and the control structure, delocalizing information and obtaining quite general while-program-like structures [2].
Generation of Parallel Benchmarks We have also applied our property-preserving generation process to obtain parallel systems in various formats like (Nested-Unit) Petri nets [20, 53], Promela [24] code, or simply graphs in DOT^{Footnote 4}. This also happens in two conceptual steps: first, we synthesize an interesting core model from an LTL specification in the same way as for the sequential case, and then we decompose this core model into parallel components in a property-preserving fashion. Key to this decomposition was a new notion of modal contracts [65], which allows us to generate parallel systems with arbitrarily many components.
Basing property preservation on modal refinement [46] instead of language inclusion guarantees that not only linear-time properties are preserved, but branching-time properties as well. This allows us to use the generated parallel models not only for the reachability and the LTL model checking tracks, but also for the CTL model checking and bisimulation checking tracks [66], the latter being planned as a future addition.
Ranking
RERS has a three-dimensional reward structure that consists of a competition-based ranking on the total number of points, achievements for solving problems without submitting wrong answers, and an evaluation-based award for the most original idea or a good combination of methods. The computation of scores and the modes of ranking (per track, per category) have evolved slightly over the years. Adaptations were made to arrive at a more detailed, relevant, and valid profiling of participating approaches.
Competition-based Ranking The competition-based ranking was established to facilitate competition and to serve as a direct comparative evaluation of the capabilities of tools. Participants are free to opt out of this ranking and to only aim at obtaining achievements. For the ranking, a score for the performance of every participating tool is computed, and tools are ranked based on these scores. Positive points are awarded for correct verdicts. Incorrect verdicts lead to penalties whose magnitude was a major point of discussion over the years, leading to frequent changes.
The negative impact of incorrect verdicts on a tool’s score in the competition-based ranking was originally quite small: in 2012 it was just \(1\) point, and from 2013 to 2015 it was \(2\) points. In 2016, there was a drastic change that made the penalty exponential in the number n of errors (\(2^n\)). This change turned out to be too drastic, and we have therefore been using quadratic penalties (\(n^2\)) since 2018.
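The growth of the three penalty schemes can be contrasted with a small computation. This is an illustrative sketch of only the penalty term; actual RERS scores additionally include the points awarded for correct answers:

```python
# Illustrative comparison of the penalty schemes for n wrong answers:
# linear (2013-2015, 2 points each), exponential (2016, 2^n),
# and quadratic (since 2018, n^2).
def penalty_linear(n, per_error=2):
    return per_error * n

def penalty_exponential(n):
    return 2 ** n if n > 0 else 0  # no penalty for zero errors

def penalty_quadratic(n):
    return n ** 2

for n in (1, 3, 5, 10):
    print(n, penalty_linear(n), penalty_exponential(n), penalty_quadratic(n))
```

Already at ten wrong answers the exponential scheme deducts 1024 points versus 100 for the quadratic one, which illustrates why the former was perceived as too drastic.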
For RERS 2019, the points for correct answers were also refined (from previously one point per correct answer) to two points for verifiable LTL properties or unreachable errors and one point for refutable LTL properties or reachable errors, accounting for the fact that showing the existence of counterexamples and errors is usually easier than proving their absence.
Achievements To honor the accomplishments of verification tools and methods relative only to the complexity of the set of benchmark problems, and without the pressure of losing a competition despite good results, RERS introduced achievements for different nuances of difficulty.
For every category there are three achievements: bronze, silver and gold. An achievement is only awarded if no wrong answers are given in the respective category. For tracks on CTL properties, a participant needs to answer 12 out of 20 properties correctly in order to “solve” an individual problem. If there are n problems within such a track, then a participant needs to answer \(\frac{1}{3} \cdot n\cdot 12\) properties correctly for a bronze award, \(\frac{2}{3} \cdot n\cdot 12\) for silver and \(n \cdot 12\) for gold.
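As a worked example, the thresholds for a hypothetical CTL track with five problems can be computed as follows (the rounding up for totals not divisible by three is our assumption; the formulas above divide evenly in this example):

```python
# Achievement thresholds for a hypothetical CTL track with n problems,
# each offering 20 properties of which 12 must be answered correctly.
from math import ceil

def thresholds(n):
    """Return (bronze, silver, gold) score thresholds for n problems."""
    total = n * 12  # gold requires all n problems solved
    return ceil(total / 3), ceil(2 * total / 3), total

print(thresholds(5))  # → (20, 40, 60)
```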
For the remaining tracks of RERS, proving the absence of a property violation is typically much harder than showing such a violation. Taking this into account, achievements are awarded for reaching a threshold of points that is equal to the number of counterexamples that exist for the corresponding group of benchmark instances, provided that no wrong answer is given. Counterexamples are paths reaching an error function for the Reachability track and paths violating LTL properties for the LTL tracks. Only the highest achievement for every category is actually awarded, and the thresholds for every category are calculated as follows:
The participant’s achievement score within a category is computed from all submitted results (verified or falsified). Let \(a_t(C) = n\) be the achievement score of tool t for category C, where n is the number of correct (i.e., reported) verdicts for category C. Now let, e.g.,
$$\begin{aligned} a_t(C) \ge \frac{1}{3} \cdot x(C), \end{aligned}$$
where \(x(C)\) denotes the number of counterexamples that exist for category C. Then participant t is awarded the Bronze Achievement in category C. It is possible to receive six achievements in the sequential tracks: one for each category (Plain, Arithmetic, Data Structures) in the Reachability and LTL track, respectively. In the parallel tracks, an overall of six achievements can be obtained by participating tools: for small, medium, and large problems in the LTL and CTL tracks.
Since achievements are awarded on a per-participant basis, there may be multiple gold medalists in some category in any particular year of RERS.
Evaluation-based Award To honor creativity and cross-fertilization between different research areas, RERS features jury-based awards. For these awards, category winners are chosen based on the employed (combination of) methods, which need not have scored highest. Submitted descriptions of approaches and solutions are reviewed and ranked by the challenge organizers. Due to the possible variety of methods, there may be several winners in this category.
Impact
In the ten years since its inception, RERS has had an impact in different dimensions.
Scientific Contributions. First of all, RERS has facilitated a number of scientific advances by challenge participants. Some examples are presented in [1, 7, 10, 11, 15,16,17, 30, 34, 37, 44, 45, 47, 48, 50, 51, 56, 57, 59, 69,70,71].
Benchmark Generation. Organizing RERS required the generation of benchmarks. Over the past decade, we have developed multiple approaches for generating scalable and realistic benchmarks with known properties. Benchmark generation required the integration of a diverse set of formal methods, and RERS benchmarks have been integrated by other verification competitions (e.g., SV-COMP) into their sets of benchmark programs.
Combination of Methods. Over the years, RERS has facilitated a number of promising combinations of methods, e.g. [44]. In the latest instance, participants of RERS 2019 notably used diverse combinations of tools to produce their answers to the given verification tasks. As an example of this diversity, one of the participating teams combined verification based on grey-box fuzzing with traditional compiler-based interval analysis. Another team employed three different available verification tools to generate their submission and thereby profiled and utilized the individual strengths and weaknesses of these tools.
In summary, one can argue that instead of submitting an executable tool that computes a single verdict automatically, as commonly required in verification competitions such as SV-COMP or MCC, participants of RERS make use of the freedom from resource constraints by employing an entire toolkit to solve the given verification tasks. RERS allows the manual comparison of tool outputs and gives room for a final human judgment on the verdicts of a bouquet of verifiers and approaches, whereas other competitions enforce completely automated decisions by tools. This plethora of approaches provides evidence that RERS achieves one of its main goals, namely to motivate the comparison of different approaches and technologies (see Sect. 2.1).
Guaranteeing hardness of benchmarks
In this section, we sketch our most recent approach to tailoring benchmark problems according to hardness: the generation of benchmark problems that are known to have no evidence for a counterexample shorter than a given threshold, but that are also guaranteed to have such evidence within an additionally provided upper bound. This allows the production of benchmarks with a designed distribution of the depths to which the programs have to be investigated in order to find all errors. In particular, this gives benchmark designers a methodology for challenging contributors who claim satisfaction without a proper proof.
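The underlying hardness notion can be illustrated with a simple check on an explicit transition system: a breadth-first search determines the length of the shortest error path, which must lie in the interval (m, n]. This is a didactic sketch over a hand-made graph, not the generation procedure developed in the remainder of this section:

```python
# Illustrative sketch: check that the shortest counterexample (here, the
# shortest path to an error state) lies in the interval (m, n].
from collections import deque

def shortest_error_depth(succ, start, errors):
    """BFS depth of the closest error state, or None if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        state, depth = queue.popleft()
        if state in errors:
            return depth
        for nxt in succ.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None

def is_mn_hard(succ, start, errors, m, n):
    """The task qualifies iff its shortest counterexample is longer
    than m but at most n."""
    d = shortest_error_depth(succ, start, errors)
    return d is not None and m < d <= n

# Hand-made system: the error state 3 is reachable in 3 steps at best.
succ = {0: [1], 1: [2, 0], 2: [3], 3: []}
print(is_mn_hard(succ, 0, {3}, 2, 4))  # → True
```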
Preliminaries
Fundamental to our approach are the notions related to words and languages:
Definition 1
(Words) Given a finite alphabet \(\varSigma \), a word over \(\varSigma \) is a (possibly empty or infinite) sequence of symbols from \(\varSigma \). Given an integer \(n \in \mathbb {N}\) and a finite word \(w= \sigma _1\sigma _2\ldots \sigma _n\), \(|w|\) denotes the length n of w. Any infinite word w has length \(|w| = \infty \). Given any word \(w=\sigma _1\sigma _2\ldots \) and any integer \(i \in \mathbb {N}\) such that \(i \le |w|\), \(w_{\le i}\) denotes the prefix of w of length i.
Definition 2
(Languages) Given a finite alphabet \(\varSigma \), a language (over \(\varSigma \)) is a set of words over \(\varSigma \). For a given \(n \in \mathbb {N}\), the language \(\varSigma ^n\) consists of all words \(w= \sigma _1\sigma _2\ldots \sigma _n\) of length \(|w| = n\) such that \(\sigma _i \in \varSigma \) for all \(i \in \{1, \ldots , n\}\).
For any \(n \in \mathbb {N}\), we define \(\varSigma ^{\le n} :=\bigcup _{i = 1}^{n} \varSigma ^i\), and additionally \(\varSigma ^* :=\bigcup _{i \in \mathbb {N}} \varSigma ^i\). A language L is finite iff \(|L| \in \mathbb {N}\) and infinite otherwise. \(\varSigma ^\omega \) denotes the infinite language that contains all infinite words over \(\varSigma \). Moreover, L is a language of finite words iff \(L \subseteq \varSigma ^*\), and a language of infinite words iff \(L \subseteq \varSigma ^\omega \). The concatenation of symbols extends naturally to languages: Given a language \(L \subseteq \varSigma ^*\) and any language \(L'\), we have
$$\begin{aligned} L \cdot L' :=\{ w w' \mid w \in L,\ w' \in L' \}. \end{aligned}$$
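For small alphabets, the languages \(\varSigma ^n\) and \(\varSigma ^{\le n}\) can be enumerated directly, which makes the definitions concrete (a plain illustration, unrelated to the benchmark machinery):

```python
# Enumerate Sigma^n and Sigma^{<=n} for a small alphabet.
from itertools import product

def sigma_n(alphabet, n):
    """All words of length exactly n over the given alphabet."""
    return {"".join(w) for w in product(alphabet, repeat=n)}

def sigma_le_n(alphabet, n):
    """Union of Sigma^1 .. Sigma^n (the definition starts the union
    at i = 1, so the empty word is not included)."""
    words = set()
    for i in range(1, n + 1):
        words |= sigma_n(alphabet, i)
    return words

print(len(sigma_n("ab", 3)), len(sigma_le_n("ab", 3)))  # → 8 14
```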
Our approach to benchmark generation (cf. Sect. 3.3) is based on the automatic generation of Büchi automata [13].
Definition 3
(Büchi Automaton) Let \(B = (S, \varSigma , \varDelta , s_0, F)\) be a finite automaton with a set S of states and an alphabet \(\varSigma \). State \(s_0 \in S\) represents the initial state and \(F \subseteq S\) a set of accepting states. The relation \(\varDelta \subseteq (S \times \varSigma \times S)\) represents transitions between states in S. We also write \(p {\mathop {\rightarrow }\limits ^{\sigma }} q\) to denote \((p,\sigma ,q) \in \varDelta \).
A path p in B is a sequence of transitions \(u_i {\mathop {\rightarrow }\limits ^{\sigma _i}} u_{i+1}\) with i ranging from 1 to either a fixed integer n or infinity. Path p spells the word \(w = \sigma _1\sigma _2 \ldots \) .
Given these definitions, B is called a Büchi automaton if it adheres to Büchi acceptance, meaning that it accepts infinite words \(w \in \varSigma ^{\omega }\) based on the following criteria:

1.
There exists a path p in B that starts in \(s_0\) and spells w

2.
This path p visits a state in F infinitely often
The set \(\mathcal {L}(B) :=\{ w \in \varSigma ^{\omega } \mid B \text { accepts } w \}\) defines the language of B.
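A concrete way to exercise this definition is a non-emptiness check: \(\mathcal {L}(B) \ne \emptyset \) iff some accepting state is reachable from \(s_0\) and lies on a cycle. The sketch below uses plain graph reachability rather than the nested-DFS algorithm employed by production model checkers:

```python
# Illustrative Büchi non-emptiness check on an explicit transition list.
def reachable(delta, sources):
    """All states reachable from any state in 'sources' via delta."""
    seen, stack = set(sources), list(sources)
    while stack:
        p = stack.pop()
        for (q, _sym, r) in delta:
            if q == p and r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def nonempty(delta, s0, accepting):
    """L(B) is non-empty iff some accepting state f is reachable from s0
    and f is reachable from one of its own successors (f lies on a cycle)."""
    for f in accepting & reachable(delta, {s0}):
        successors = {r for (q, _sym, r) in delta if q == f}
        if f in reachable(delta, successors):
            return True
    return False

# Hand-made automaton for "infinitely many a's" over {a, b}:
delta = [(0, "a", 1), (0, "b", 0), (1, "a", 1), (1, "b", 0)]
print(nonempty(delta, 0, {1}))  # → True
```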
The following definitions specify (propositional) linear temporal logic (LTL) [54] which we use to specify properties and as a basis for synthesizing Büchi automata. In essence, LTL is an extension of propositional logic that includes additional temporal operators. Its syntax is defined as follows [3]:
Definition 4
(Syntax of Linear Temporal Logic) Let AP be a set of atomic propositions and \(a \in \text {AP}\). The syntax of propositional linear temporal logic (LTL) is defined by the following grammar in Backus–Naur form:
$$\begin{aligned} \varphi \ {:}{:}{=}\ \top \ \mid \ a \ \mid \ (\varphi \wedge \varphi ) \ \mid \ \lnot \varphi \ \mid \ {\mathbf {X}}\,\varphi \ \mid \ (\varphi ~{\mathbf {U}}~\varphi ) \end{aligned}$$
The operator \({\mathbf {X}}\) (or “next”) describes behavior that has to hold at the next time step. A formula \((\varphi _1 ~{\mathbf {U}}~\varphi _2)\) describes that \(\varphi _2\) has to occur eventually and that \(\varphi _1\) has to hold until \(\varphi _2\) occurs in a sequence. The formal semantics of LTL is based on a satisfaction relation between infinite words and LTL formulas [3]:
Definition 5
(Semantics of LTL) Let AP be an alphabet of atomic propositions and let \((2^{\text {AP}})^{\omega }\) denote infinite sequences over sets \(A \subseteq \text {AP}\). For any sequence \(w = (A_1, A_2, \ldots ) \in (2^{\text {AP}})^{\omega }\) and any \(i \in \mathbb {N}\), let \(w_i = A_i\) be the ith element of w and \(w_{\ge i} = (A_i, A_{i+1}, \ldots )\) be the suffix of w starting at index i.
Then the satisfaction relation \(\models \) \(\subseteq ((2^{\text {AP}})^{\omega } \times \text {LTL})\) is defined as the relation that adheres to the following rules:

\(w \models \top \) always holds
\(w \models a\) iff \(a \in A_1\)
\(w \models (\varphi \wedge \psi )\) iff \(w \models \varphi \) and \(w \models \psi \)
\(w \models \lnot \varphi \) iff \(w \not \models \varphi \)
\(w \models {\mathbf {X}}(\varphi )\) iff \(w_{\ge 2} \models \varphi \)
\(w \models (\varphi ~{\mathbf {U}}~\psi )\) iff there exists \(i \ge 1\) such that \(w_{\ge i} \models \psi \) and \(w_{\ge j} \models \varphi \) for all \(1 \le j < i\)
where \(w \in (2^{\text {AP}})^{\omega }\) and \(\varphi , \psi \in \text {LTL}\).
Given a language \(L \subseteq \varSigma ^\omega \), we define
\(L \models \varphi \; :\Leftrightarrow \; w \models \varphi \text { for all } w \in L\)
and given a Büchi automaton B, we further define
\(B \models \varphi \; :\Leftrightarrow \; \mathcal {L}(B) \models \varphi \)
For any \(\varphi \in \mathord {\text {LTL}}\), the semantics \(\llbracket {\varphi }\rrbracket \) of \(\varphi \) is given by
\(\llbracket {\varphi }\rrbracket \; := \; \{ w \in (2^{\text {AP}})^{\omega } \mid w \models \varphi \}\)
Büchi automata are strictly more expressive than LTL [72]. One can synthesize a Büchi automaton B from an LTL property \(\varphi \) such that \(\mathcal {L}(B) = \llbracket {\varphi }\rrbracket \) holds [55].
Using the basic set of operators in Definition 4, abbreviations for commonly described constraints can be introduced. Popular ones include \({\mathbf {F}}(\varphi ) :=(\top ~{\mathbf {U}}~\varphi )\) which expresses that \(\varphi \) will eventually become true and its dual operator \({\mathbf {G}}(\varphi ) :=\lnot {\mathbf {F}}(\lnot \varphi )\) which claims that \(\varphi \) is always true. A later example also utilizes the weak-until operator \((\varphi ~{\mathbf {W}}~\psi ) :=(\varphi ~{\mathbf {U}}~\psi ) \vee {\mathbf {G}}(\varphi )\).
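The semantics of Definition 5, including the derived operators F and G, can be evaluated directly on ultimately periodic words, since such a word has only finitely many distinct suffixes. The following sketch is our own illustration (the tuple encoding of formulas is an assumption, not the paper's representation); it computes satisfaction sets bottom-up and treats U as a least fixpoint:

```python
def holds(formula, stem, loop):
    """Evaluate an LTL formula on the ultimately periodic word stem . loop^omega.
    Letters are sets of atomic propositions.  Formulas are nested tuples:
    ('top',), ('ap', a), ('not', f), ('and', f, g), ('X', f), ('U', f, g)."""
    assert loop, "the loop part must be non-empty"
    word = list(stem) + list(loop)
    k, n = len(stem), len(word)
    nxt = lambda i: i + 1 if i + 1 < n else k   # wrap back into the loop

    def sat(f):
        """Set of positions of the lasso where subformula f holds."""
        op = f[0]
        if op == 'top':
            return set(range(n))
        if op == 'ap':
            return {i for i in range(n) if f[1] in word[i]}
        if op == 'not':
            return set(range(n)) - sat(f[1])
        if op == 'and':
            return sat(f[1]) & sat(f[2])
        if op == 'X':
            s = sat(f[1])
            return {i for i in range(n) if nxt(i) in s}
        if op == 'U':            # least fixpoint of  psi  or  (phi and X Z)
            sphi, s = sat(f[1]), set(sat(f[2]))
            changed = True
            while changed:
                changed = False
                for i in range(n):
                    if i not in s and i in sphi and nxt(i) in s:
                        s.add(i)
                        changed = True
            return s
        raise ValueError(f'unknown operator {op!r}')

    return 0 in sat(formula)

# Derived operators as introduced in the text.
def F(f): return ('U', ('top',), f)
def G(f): return ('not', F(('not', f)))
```

For example, \({\mathbf {G}}{\mathbf {F}}(b)\) holds on the lasso \(\{a\}\,(\{b\})^\omega \), whereas \({\mathbf {G}}(a)\) does not.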
In the following, we introduce our approach to specifying languages that violate a given verification property \(\varphi \in \mathord {\text {LTL}}\), in a way that guarantees a minimal length for all counterexamples that witness this violation.
Guaranteeing deep LTL counterexamples
In this section, we show how to construct (m, n]-hard verification tasks. Here, hardness is based on an integer interval (m, n] of prefix lengths with the following meaning: looking at prefixes of words \(w \in L\) of length at most m does not suffice to explain the property violation, but there exists such a violating prefix of length at most n. In other words, every prefix of length at most m can be extended to a word that satisfies \(\varphi \), but this is not the case for all prefixes of length up to n. We aim for verification tasks \((L,\varphi )\) such that

1.
\(L \subseteq \varSigma ^\omega \) and

2.
\(\varphi \) is an LTL formula satisfying that

3.
\((L,\varphi )\) is (m, n]-hard.
In the following, we only synthesize reactive programs and LTL properties for reasoning about nonterminating paths. Our construction then works by deriving a maximal sublanguage \(L' \subseteq L\) that is (m, n]-hard w.r.t. \(\varphi \) (see Sect. 3.3 for our realization based on Büchi automata). In general, \(L'\) may well be empty, a phenomenon that we deal with in a heuristic fashion.
The following notion of violating prefix is important:
Definition 6
(Violating Prefix) Let \(w \in \varSigma ^*\). Then w violates \(\varphi \) iff the following holds:
\(\forall w' \in \varSigma ^\omega :\; ww' \not \models \varphi \)
An infinite word \(w \in \varSigma ^\omega \) k-violates \(\varphi \) iff its prefix \(w_{\le k}\) violates \(\varphi \). A language \(L' \subseteq \varSigma ^\omega \) k-violates \(\varphi \) iff there exists a word \(w \in L'\) such that w k-violates \(\varphi \).
Intuitively speaking, a finite word violates \(\varphi \) if it cannot be extended to a word that satisfies \(\varphi \). The following lemma follows straightforwardly:
Lemma 1
(Monotonicity) If a word \(w \in \varSigma ^\omega \) k-violates \(\varphi \), then for all \(k' \in \mathbb {N}\) with \(k' \ge k\), w also \(k'\)-violates \(\varphi \).
This monotonicity property allows us to specify (m, n]-hardness simply based on the boundaries of this integer interval.
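The notion of a violating prefix can be checked effectively when \(\varphi \) is represented by a Büchi automaton for \(\llbracket {\varphi }\rrbracket \): a finite word violates \(\varphi \) iff no run on it ends in a state from which an accepting cycle is still reachable, i.e., the word is not a prefix of any accepted infinite word. A minimal sketch in Python (the representation and names are ours, not the paper's implementation):

```python
def violates(delta, init, accepting, prefix):
    """Check Definition 6 against an NBA B with L(B) = [[phi]]:
    the finite word `prefix` violates phi iff it cannot be extended
    to a word accepted by B.
    delta maps (state, letter) -> set of successor states."""
    # Letters with outgoing transitions; absent pairs lead nowhere.
    alphabet = {a for (_, a) in delta}
    succs = lambda q: {p for a in alphabet for p in delta.get((q, a), set())}

    def reach(states):
        seen, stack = set(), list(states)
        while stack:
            q = stack.pop()
            if q not in seen:
                seen.add(q)
                stack.extend(succs(q))
        return seen

    # Accepting states on a cycle: from these an accepting run can continue.
    acc_on_cycle = {f for f in accepting if f in reach(succs(f))}

    # Run the automaton (as a set of states) over the finite prefix.
    current = {init}
    for a in prefix:
        current = {p for q in current for p in delta.get((q, a), set())}

    # The prefix violates phi iff no reached state can still reach an
    # accepting cycle, i.e. the prefix extends to no word in [[phi]].
    return not any(reach({q}) & acc_on_cycle for q in current)
```

For the automaton of \({\mathbf {G}}(a)\), for instance, any prefix containing a letter other than a is violating, in line with the monotonicity lemma.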
Definition 7
(Hardness) A language \(L' \subseteq \varSigma ^\omega \) is called (m, n]-hard w.r.t. \(\varphi \) iff the following hold:

1.
\(L'\) does not m-violate \(\varphi \)

2.
\(L'\) n-violates \(\varphi \)
Based on this hardness definition, we can derive a constructive approach to generating the maximal sublanguage of L that is (m, n]-hard w.r.t. \(\varphi \): we simply construct the maximal sublanguage \(L_\varphi ^m\) of L that does not m-violate \(\varphi \) and then check whether or not \(L_\varphi ^m\) n-violates \(\varphi \). If it does, \((L_\varphi ^m,\varphi )\) is an (m, n]-hard verification task. Otherwise, we know that no (m, n]-hard verification task exists for L and \(\varphi \), and we continue by heuristically modifying the parameters.
The remainder of this section is therefore dedicated to the construction of \(L_\varphi ^m\) and the subsequent check whether it n-violates \(\varphi \).
Definition 8
(Violating Prefixes) Let \(L' \subseteq \varSigma ^\omega \) and \(k \in \mathbb {N}\). We denote the set of prefixes of \(L'\) with length at most k by
\(L'_{\le k} \; := \; \{ w_{\le j} \mid w \in L' ,\; j \le k \}\)
Given a \(\varphi \in \mathord {\text {LTL}}\), we call
\(\text {VP}(L,\varphi ,k) \; := \; L_{\le k} {\setminus } \llbracket {\varphi }\rrbracket _{\le k}\)
the violating prefixes of \(\varphi \) in L with length at most k.
The following lemma is straightforward to prove:
Lemma 2
Let \(k \in \mathbb {N}\). Then \(\text {VP}(L,\varphi ,k)\) consists of all words \(w \in L_{\le k}\) that violate \(\varphi \).
The following theorems follow straightforwardly from Lemmas 1 and 2:
Theorem 1
\( L_\varphi ^m = L {\setminus } (\text {VP}(L,\varphi ,m)\varSigma ^\omega ) \)
and
Theorem 2
\((L_\varphi ^m, \varphi )\) is an (m, n]-hard verification task iff \(L_\varphi ^m\) n-violates \(\varphi \).
Complementation of Büchi automata is a very expensive operation. The following theorem guarantees that this operation can be avoided and instead replaced by one that executes in quadratic time:
Theorem 3
\( L_\varphi ^m = L \cap (\llbracket {\varphi }\rrbracket _{\le m}\varSigma ^\omega ) \)
Proof
We show the two inclusions between \(L_\varphi ^m\) and \(L' :=L \cap (\llbracket {\varphi }\rrbracket _{\le m}\varSigma ^\omega )\).
Every word \(w \in L'\) lies in \(\llbracket {\varphi }\rrbracket _{\le m}\varSigma ^\omega \), which excludes that it m-violates \(\varphi \). Thus we have, as desired, \(L \cap (\llbracket {\varphi }\rrbracket _{\le m}\varSigma ^\omega ) \subseteq L_\varphi ^m\).
For the converse inclusion let \(w \in L_\varphi ^m\). According to Def. 6, this means that there exists a word \(w' \in \varSigma ^\omega \) such that \(w_{\le m}w'\) satisfies \(\varphi \) which yields \(w_{\le m} \in \llbracket {\varphi }\rrbracket _{\le m}\) and therefore in particular \(w \in \llbracket {\varphi }\rrbracket _{\le m}\varSigma ^\omega \). On the other hand, \(L_\varphi ^m \subseteq L\). Together this guarantees that \(w \in L'\). \(\square \)
The next section presents the Büchi-automaton-based realization of \( L_\varphi ^m\) in the way that it is used for our RERS benchmarks.
Realization based on Büchi automata
RERS’ benchmark generation follows the idea of requirement-driven system generation. More precisely, the starting point for RERS benchmarks is a set of structural LTL properties \(\varPhi \) which are meant to impose realistic benchmark structures. Thus, the initial languages L we consider in the rest of this paper are of the form \(L = \llbracket {\varPhi }\rrbracket \), and the goal is to construct \(L^m_\varphi = \llbracket {\varPhi }\rrbracket ^m_\varphi \). According to Theorem 3 this means that we have to compute
\(\llbracket {\varPhi }\rrbracket \cap (\llbracket {\varphi }\rrbracket _{\le m}\varSigma ^\omega )\)
This can be done by means of wellknown technology for Büchi automata as follows:

1.
Compute \(L = \llbracket {\varPhi }\rrbracket \) and \(\llbracket {\varphi }\rrbracket \). We use the Spot library [18] for this purpose. Please note that we need to constrain the construction of \(L = \llbracket {\varPhi }\rrbracket \) such that all transitions within the resulting Büchi automaton are labeled with a single atomic proposition. This can be accomplished by enforcing a corresponding invariant \(\varOmega \) in LTL (cf. [35]).

2.
Concatenate the prefix tree of depth m for \(\llbracket {\varphi }\rrbracket \) with \(\varSigma ^\omega \) to obtain a Büchi automaton for \(\llbracket { \varphi }\rrbracket _{\le m}\varSigma ^\omega \). Essentially, this means adding an accepting \(\varSigma ^\omega \)-loop at each leaf of this prefix tree.

3.
Compute the intersection of the two Büchi automata constructed in steps 1 and 2. This is again accomplished using the Spot library.

4.
Heuristically minimize the Büchi automaton that results from step 3, again based on the Spot library. This is important for the scalability of later transformation steps in our overall approach, and it helps to obfuscate the tree expansion in step 2.
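Step 2 of the construction above can be illustrated in isolation. The sketch below is our own simplification (it uses an explicit subset construction instead of Spot, and all names are assumptions): it unrolls the viable prefixes of length m of an automaton's language into a tree and attaches an accepting self-loop over the whole alphabet at every leaf, yielding an automaton for \(\llbracket {\varphi }\rrbracket _{\le m}\varSigma ^\omega \):

```python
def prefix_tree_automaton(delta, init, accepting, alphabet, m):
    """From an NBA for [[phi]], build a deterministic automaton for
    [[phi]]_{<=m} . Sigma^omega: a prefix tree of depth m whose leaves
    carry accepting self-loops on every letter.  Tree states are the
    letter tuples spelling the prefix; delta maps (state, letter) to a
    set of NBA successor states."""
    succs = lambda q: {p for a in alphabet for p in delta.get((q, a), set())}

    def reach(states):
        seen, stack = set(), list(states)
        while stack:
            q = stack.pop()
            if q not in seen:
                seen.add(q)
                stack.extend(succs(q))
        return seen

    # "Live" NBA states: those from which an accepting cycle is reachable.
    acc_on_cycle = {f for f in accepting if f in reach(succs(f))}
    live = lambda S: bool(reach(S) & acc_on_cycle)

    trans, frontier = {}, [((), frozenset({init}))]
    for _ in range(m):
        nxt = []
        for path, S in frontier:
            for a in alphabet:
                T = frozenset(p for q in S for p in delta.get((q, a), set()))
                if live(T):   # keep only prefixes extendable to a model of phi
                    trans[(path, a)] = path + (a,)
                    nxt.append((path + (a,), T))
        frontier = nxt
    # Accepting self-loop over the whole alphabet at every leaf (depth m).
    for path, _ in frontier:
        for a in alphabet:
            trans[(path, a)] = path
    return trans, (), {path for path, _ in frontier}
```

For the automaton of \({\mathbf {G}}(a)\) over \(\{a,b\}\) and \(m=2\), the tree consists of the single branch “aa”, whose leaf then accepts arbitrary continuations.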
In order to be sure that \( (L', \varphi ) \) is indeed an (m, n]-hard verification task, it remains to be shown that \(L'\) n-violates \(\varphi \) (cf. Def. 7). This can be done simply by means of an emptiness check for
\(L' {\setminus } (\llbracket {\varphi }\rrbracket _{\le n}\varSigma ^\omega )\)
If this check fails, i.e., the language is non-empty, we are guaranteed to have a violating prefix that is longer than m but not longer than n. Otherwise, we know that no (m, n]-hard verification task exists for \(\llbracket {\varPhi }\rrbracket \) and \(\varphi \), and we continue by heuristically modifying the parameters.
Example
The following example illustrates each step of realizing
for \(m=2\),
and
where \(\varOmega \) is the above-mentioned invariant that ensures that every Büchi automaton transition is labeled with exactly one atomic proposition (cf. [35]). In order to ease readability when this invariant is enforced, we abbreviate n transitions labeled \(b_1, \ldots , b_n\) that share their source and target by a single transition labeled “\(b_1 \mid \ldots \mid b_n\)”.
Figures 2 and 3 display the Büchi automata for \(\llbracket {\varPhi }\rrbracket \) and \(\llbracket {\varphi }\rrbracket \), respectively, whereas Fig. 4 shows the Büchi automaton for the language that guarantees that there are no violating prefixes of length less than or equal to m (cf. step 2 above). As \(\llbracket {\varphi }\rrbracket \) and \(\llbracket { \varphi }\rrbracket _{\le m}\varSigma ^\omega \) do not feature the singleton invariant, they are an exception to our simplified representation. Spot [18], the library that we use for Büchi synthesis, uses a BDD representation for Büchi automaton transitions. Therefore, all labels within Figs. 3 and 4 have to be interpreted as BDDs and not in our simplified manner presented above. Note that because \(B'\) (Fig. 4) is afterwards intersected with B (Fig. 2), the self-loop at state 3 of the former does not need to specify the exact alphabet \(\varSigma \). The Büchi automaton \(B_{res}\) shown in Fig. 5 already specifies the desired language (cf. step 3 above), but it needs to be minimized to obfuscate the tree expansion step and to achieve scalability of subsequent transformations (cf. Fig. 6).
Experiments
To provide an impression of the scalability of our approach, we analyzed its execution time and the resulting numbers of states with \(\varPhi \) and \(\varphi \) given as in the previous section, but with increasing lower hardness bounds. This means in particular that B (cf. Fig. 2) and \(B_\varphi \) (cf. Fig. 3) remain unchanged throughout our experiments.
The first column of Table 3 shows the hardness bound m, which was set to 2 during the discussion of the previous section, while columns two and three summarize the number of states of the resulting automata before minimization (corresponding to Fig. 5) and after heuristic minimization (corresponding to Fig. 6). The fourth column provides the wall-clock execution time for computing the final heuristically minimized Büchi automata, as well as the proportion of execution time that is needed for that minimization.
As one can see, the numbers are strictly increasing. This seems to indicate that the corresponding languages are continuously changing, or more precisely, continuously strictly decreasing. This is, however, not guaranteed, because the four-step construction via Spot may well produce two different Büchi automata for the same language: there is no canonicity. Thus, to be sure that one has a valid (m, n]-hard verification task, one still has to check whether the languages for n and m are indeed different. In the considered cases, this could always be verified.
Our C++ implementation utilizes the Spot library [18] for synthesizing, modifying, and optimizing Büchi automata. The execution times in Table 3 are based on our implementation executed on a machine running Arch Linux (kernel 5.5.13-arch2-1) and featuring an AMD Ryzen 3950X processor with 32 GiB of RAM.
Conclusion and perspective
We have summarized the history of the RERS challenge for the analysis and verification of reactive systems and its objectives in two parts. In the first part, its profile and intentions, its relation to other competitions, and, in particular, its evolution due to the feedback of participants were discussed. This comprised, in particular, the discussion of ‘oddities’ like overtuning: some participants tweak their tools to the sometimes concretely known solutions of the competitions’ benchmarks, which leads to scores that have little to do with the tools’ performance in realistic scenarios. This way, even winning a competition is not necessarily a recommendation for potential users.
The second part presents our latest development with regard to the overtuning problem: the automatic synthesis of benchmark problems with tailored difficulty in a ‘requirement-driven’ fashion. More precisely, since the beginning, the starting point of the RERS benchmark generation has been a set of desired structural properties, here formulated in LTL, which are successively transformed via Büchi automata that characterize all satisfying executions to Modal Transition Systems and, with a few more steps, to code of various implementation languages (cf. Sect. 2). This way, RERS aims at benchmarks that closely resemble realistic code, but can be flexibly tailored in their degree of difficulty.
The contribution of the second part is a way to tailor benchmarks according to the depths to which programs have to be investigated in order to find all errors. This approach gives benchmark designers a method to challenge contributors that try to perform well by excessive guessing, e.g., based on ‘inappropriate’ side knowledge. Combined with our traditional way of benchmark tailoring concerning the code/model size, the amount of arithmetic, and the data structures used as measures of intricacy, RERS provides benchmark designers with a very powerful engine that we plan to make available as open source.
It should be noted that the ideas presented in this paper are not only applicable to the generation of benchmarks that feature sequential programs. Rather, they can also be applied during the generation of parallel benchmark problems. This means that we can provide not only SV-COMP and similar competitions with tailored benchmark problems, but also competitions like MCC.
Notes
This approach is indeed quite common, e.g., in the SAT-solving community.
See http://www.rers-challenge.org/<challengeyear>/index.php?page=results for comprehensive lists of participants and results.
References
Apel, S., Beyer, D., Friedberger, K., Raimondi, F., von Rhein, A.: Domain types: abstractdomain selection based on variable usage. In: Bertacco, V., Legay, A. (eds.) Hardware and Software: Verification and Testing, pp. 262–278. Springer, Cham (2013)
Apt, K.R., Olderog, E.R.: Verification of Sequential and Concurrent Programs. Texts and Monographs in Computer Science. Springer (1991). https://doi.org/10.1007/978-1-4757-4376-0
Baier, C., Katoen, J.P., Larsen, K.G.: Principles of Model Checking. MIT Press, Cambridge (2008)
Barrett, C., de Moura, L., Stump, A.: SMT-COMP: satisfiability modulo theories competition. In: Etessami, K., Rajamani, S.K. (eds.) Computer Aided Verification, pp. 20–23. Springer, Berlin (2005)
Bartocci, E., Beyer, D., Black, P.E., Fedyukovich, G., Garavel, H., Hartmanns, A., Huisman, M., Kordon, F., Nagele, J., Sighireanu, M., Steffen, B., Suda, M., Sutcliffe, G., Weber, T., Yamada, A.: Toolympics 2019: an overview of competitions in formal methods. In: Beyer, D., Huisman, M., Kordon, F., Steffen, B. (eds.) Tools and Algorithms for the Construction and Analysis of Systems, pp. 3–24. Springer, Cham (2019)
Bartocci, E., Falcone, Y., Bonakdarpour, B., Colombo, C., Decker, N., Havelund, K., Joshi, Y., Klaedtke, F., Milewicz, R., Reger, G., et al.: First International Competition on Runtime Verification: Rules, Benchmarks, Tools, and Final Results of CRV 2014. STTT pp. 1–40 (2017)
Bauer, O., Geske, M., Isberner, M.: Analyzing program behavior through active automata learning. Int. J. Softw. Tools Technol. Transf. 16(5), 531–542 (2014)
Beyer, D.: Competition on software verification. TACAS. LNCS, vol. 7214, pp. 504–524. Springer, Berlin (2012)
Beyer, D.: Status report on software verification. In: Proceedings of the TACAS, LNCS 8413, pp. 373–388. Springer (2014). https://doi.org/10.1007/978-3-642-54862-8_25
Beyer, D., Stahlbauer, A.: BDD-based software model checking with CPAchecker. In: Kučera, A., Henzinger, T.A., Nešetřil, J., Vojnar, T., Antoš, D. (eds.) Mathematical and Engineering Methods in Computer Science, pp. 1–11. Springer, Berlin (2013)
Beyer, D., Stahlbauer, A.: BDD-based software verification. Applications to event-condition-action systems. Int. J. Softw. Tools Technol. Transf. 16(5), 507–518 (2014)
Briggs, P., Cooper, K.D.: Effective partial redundancy elimination. In: Proceedings of the ACM SIGPLAN’94 Conference on Programming Language Design and Implementation (PLDI), pp. 159–170 (1994). https://doi.org/10.1145/773473.178257
Büchi, J.R.: Symposium on decision problems: On a decision method in restricted second order arithmetic. In: Logic, Methodology and Philosophy of Science, Studies in Logic and the Foundations of Mathematics, vol. 44, pp. 1–11. Elsevier (1966). https://doi.org/10.1016/S0049-237X(09)70564-6
Clarke, E.M., Grumberg, O., Peled, D.: Model Checking. MIT Press, Cambridge (1999)
Decker, N., Pirogov, A.: Flat model checking for counting LTL using quantifier-free Presburger arithmetic. In: Enea, C., Piskac, R. (eds.) Verification, Model Checking, and Abstract Interpretation, pp. 513–534. Springer, Cham (2019)
Dietsch, D., Heizmann, M., Langenfeld, V., Podelski, A.: Fairness modulo theory: a new approach to ltl software model checking. In: Kroening, D., Păsăreanu, C.S. (eds.) Computer Aided Verification, pp. 49–66. Springer, Cham (2015)
Duan, Z., Tian, C., Duan, Z.: Verifying temporal properties of c programs via lazy abstraction. In: Duan, Z., Ong, L. (eds.) Formal Methods and Software Engineering, pp. 122–139. Springer, Cham (2017)
Duret-Lutz, A., Lewkowicz, A., Fauchille, A., Michaud, T., Renault, E., Xu, L.: Spot 2.0—a framework for LTL and \(\omega \)-automata manipulation. In: Proceedings of the 14th International Symposium on Automated Technology for Verification and Analysis (ATVA’16), Lecture Notes in Computer Science, vol. 9938, pp. 122–129. Springer (2016). https://doi.org/10.1007/978-3-319-46520-3_8
Dwyer, M.B., Avrunin, G.S., Corbett, J.C.: Patterns in property specifications for finitestate verification. In: Proceedings of the 21st International Conference on Software Engineering (IEEE Cat. No.99CB37002), pp. 411–420 (1999). https://doi.org/10.1145/302405.302672
Garavel, H.: Nested-unit Petri nets. J. Log. Algebraic Methods Program. 104, 60–85 (2019). https://doi.org/10.1016/j.jlamp.2018.11.005
Geske, M., Isberner, M., Steffen, B.: Rigorous examination of reactive systems. In: Bartocci, E., Majumdar, R. (eds.) Runtime Verification, pp. 423–429. Springer, Cham (2015)
Geske, M., Jasper, M., Steffen, B., Howar, F., Schordan, M., van de Pol, J.: RERS 2016: parallel and sequential benchmarks with focus on LTL verification. In: ISoLA. LNCS, vol 9953, pp. 787–803. Springer (2016)
Giannakopoulou, D., Lerda, F.: From states to transitions: improving translation of LTL formulae to Büchi automata. In: Peled, D.A., Vardi, M.Y. (eds.) Formal Techniques for Networked and Distributed Systems—FORTE 2002, pp. 308–326. Springer, Berlin (2002)
Holzmann, G.: The SPIN Model Checker: Primer and Reference Manual, 1st edn. AddisonWesley Professional, Boston (2011)
Holzmann, G.J., Smith, M.H.: Software model checking: extracting verification models from source code. Softw. Test. Verif. Reliab. (2001). https://doi.org/10.1002/stvr.228
Howar, F., Isberner, M., Merten, M., Steffen, B., Beyer, D.: The RERS grey-box challenge 2012: analysis of event-condition-action systems. In: Margaria, T., Steffen, B. (eds.) Leveraging Applications of Formal Methods, Verification and Validation. Technologies for Mastering Change, pp. 608–614. Springer, Berlin (2012)
Howar, F., Isberner, M., Merten, M., Steffen, B., Beyer, D., Păsăreanu, C.: Rigorous examination of reactive systems. The RERS challenges 2012 and 2013. STTT 16(5), 457–464 (2014)
Howar, F., Steffen, B., Merten, M.: From ZULU to RERS. In: Margaria, T., Steffen, B. (eds.) Leveraging Applications of Formal Methods, Verification, and Validation, pp. 687–704. Springer, Berlin (2010)
Huisman, M., Klebanov, V., Monahan, R.: VerifyThis 2012. STTT 17(6), 647–657 (2015)
Jasper, M.: Counterexampleguided prefix refinement analysis for program verification. In: Lamprecht, A.L. (ed.) Leveraging Applications of Formal Methods, Verification, and Validation, pp. 143–155. Springer, Cham (2016)
Jasper, M., Fecke, M., Steffen, B., Schordan, M., Meijer, J., Pol, J.v.d., Howar, F., Siegel, S.F.: The RERS 2017 challenge and workshop (invited paper). In: Proceedings of the 24th ACM SIGSOFT International SPIN Symposium on Model Checking of Software, SPIN 2017, pp. 11–20. ACM (2017). https://doi.org/10.1145/3092282.3098206
Jasper, M., Mues, M., Murtovi, A., Schlüter, M., Howar, F., Steffen, B., Schordan, M., Hendriks, D., Schiffelers, R., Kuppens, H., Vaandrager, F.W.: RERS 2019: combining synthesis with real-world models. In: Beyer, D., Huisman, M., Kordon, F., Steffen, B. (eds.) Tools and Algorithms for the Construction and Analysis of Systems, pp. 101–115. Springer, Cham (2019)
Jasper, M., Mues, M., Schlüter, M., Steffen, B., Howar, F.: RERS 2018: CTL, LTL, and reachability. In: Margaria, T., Steffen, B. (eds.) Leveraging Applications of Formal Methods, Verification and Validation. Verification, pp. 433–447. Springer, Cham (2018)
Jasper, M., Schordan, M.: Multi-core model checking of large-scale reactive systems using different state representations. In: ISoLA. LNCS, vol 9952, pp. 212–226. Springer (2016)
Jasper, M., Steffen, B.: Synthesizing subtle bugs with known witnesses. In: Margaria, T., Steffen, B. (eds.) Leveraging Applications of Formal Methods, Verification and Validation. Verification, pp. 235–257. Springer, Cham (2018)
Järvisalo, M., Le Berre, D., Roussel, O., Simon, L.: The international SAT solver competitions. AI Mag. 33(1), 89–92 (2012). https://doi.org/10.1609/aimag.v33i1.2395
Kant, G., Laarman, A., Meijer, J., van de Pol, J., Blom, S., van Dijk, T.: LTSmin: high-performance language-independent model checking. In: Baier, C., Tinelli, C. (eds.) Tools and Algorithms for the Construction and Analysis of Systems, pp. 692–707. Springer, Berlin (2015)
King, J.C.: Symbolic execution and program testing. Commun. ACM 19(7), 385–394 (1976). https://doi.org/10.1145/360248.360252
Knoop, J., Rüthing, O., Steffen, B.: Lazy code motion. In: Proceedings of the ACM SIGPLAN’92 Conference on Programming Language Design and Implementation (PLDI), pp. 224–234. ACM (1992). https://doi.org/10.1145/143095.143136
Knoop, J., Rüthing, O., Steffen, B.: Lazy strength reduction. J. Program. Lang. 1, 71–91 (1993)
Knoop, J., Rüthing, O., Steffen, B.: Partial dead code elimination. In: Proceedings of the ACM SIGPLAN’94 Conference on Programming Language Design and Implementation (PLDI), pp. 147–158. ACM (1994). https://doi.org/10.1145/178243.178256
Knoop, J., Rüthing, O., Steffen, B.: Expansionbased removal of semantic partial redundancies. In: Compiler Construction, 8th International Conference, CC’99, Held as Part of the European Joint Conferences on the Theory and Practice of Software, ETAPS’99, Amsterdam, The Netherlands, 22–28 March, 1999, Proceedings, LNCS, vol. 1575, pp. 91–106. Springer (1999). https://doi.org/10.1007/b72146
Kordon, F., Linard, A., Buchs, D., Colange, M., Evangelista, S., Lampka, K., Lohmann, N., PaviotAdet, E., ThierryMieg, Y., Wimmel, H.: Report on the model checking contest at petri nets 2011. In: Transactions on Petri Nets and Other Models of Concurrency VI. LNCS, vol 7400, pp. 169–196. Springer (2012)
Lang, F., Mateescu, R., Mazzanti, F.: Compositional verification of concurrent systems by combining bisimulations. In: ter Beek, M.H., McIver, A., Oliveira, J.N. (eds.) Formal Methods—The Next 30 Years, pp. 196–213. Springer, Cham (2019)
Lang, F., Mateescu, R., Mazzanti, F.: Sharp congruences adequate with temporal logics combining weak and strong modalities. In: Biere, A., Parker, D. (eds.) Tools and Algorithms for the Construction and Analysis of Systems, pp. 57–76. Springer, Cham (2020)
Larsen, K.G.: Modal specifications. In: CAV. LNCS, vol 407, pp. 232–246. Springer (1989)
Meijer, J.: Efficient Learning and Analysis of System Behavior. Ph.D. thesis, University of Twente, Netherlands (2019). https://doi.org/10.3990/1.9789036548441
Meijer, J., van de Pol, J.: Sound blackbox checking in the learnlib. In: Dutle, A., Muñoz, C., Narkawicz, A. (eds.) NASA Formal Methods, pp. 349–366. Springer, Cham (2018)
Morel, E., Renvoise, C.: Global optimization by suppression of partial redundancies. Commun. ACM 22(2), 96–103 (1979). https://doi.org/10.1145/359060.359069
Morse, J.: Expressive and Efficient Bounded Model Checking of Concurrent Software. Ph.D. thesis, University of Southampton (2015). http://eprints.soton.ac.uk/id/eprint/379284
Morse, J., Cordeiro, L., Nicole, D., Fischer, B.: Applying symbolic bounded model checking to the 2012 RERS Greybox challenge. Int. J. Softw. Tools Technol. Transf. 16(5), 519–529 (2014)
Nielson, F., Nielson, H.R., Hankin, C.: Principles of Program Analysis. Springer, Berlin (1999)
Peterson, J.L.: Petri Net Theory and the Modeling of Systems. Prentice Hall PTR, Hoboken (1981)
Pnueli, A.: The temporal logic of programs. In: 18th Annual Symposium on Foundations of Computer Science (SFCS 1977), pp. 46–57 (1977). https://doi.org/10.1109/SFCS.1977.32
Pnueli, A., Rosner, R.: On the synthesis of a reactive module. In: Proceedings of the 16th ACM SIGPLANSIGACT Symposium on Principles of Programming Languages, POPL ’89, pp. 179–190. ACM (1989). https://doi.org/10.1145/75277.75293
van de Pol, J., Meijer, J.: Synchronous or Alternating?, pp. 417–430. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-22348-9_24
van de Pol, J., Ruys, T.C., te Brinke, S.: Thoughtful brute-force attack of the RERS 2012 and 2013 challenges. Int. J. Softw. Tools Technol. Transf. 16(5), 481–491 (2014). https://doi.org/10.1007/s10009-014-0324-3
Rosen, B.K., Wegman, M.N., Zadeck, F.K.: Global value numbers and redundant computations. In: Proceedings of the 15th ACM SIGPLANSIGACT Symposium on Principles of Programming Languages. ACM (1988). https://doi.org/10.1145/73560.73562
Schordan, M., Prantl, A.: Combining static analysis and state transition graphs for verification of event-condition-action systems in the RERS 2012 and 2013 challenges. Int. J. Softw. Tools Technol. Transf. 16(5), 493–505 (2014). https://doi.org/10.1007/s10009-014-0338-x
Steffen, B.: Propertyoriented expansion. In: Cousot, R., Schmidt, D.A. (eds.) Static Analysis, pp. 22–41. Springer, Berlin (1996)
Steffen, B., Howar, F., Isberner, M., Naujokat, S., Margaria, T.: Tailored generation of concurrent benchmarks. STTT 16(5), 543–558 (2014)
Steffen, B., Howar, F., Merten, M.: Introduction to Active Automata Learning from a Practical Perspective, pp. 256–296. Springer, Berlin (2011). https://doi.org/10.1007/978-3-642-21455-4_8
Steffen, B., Isberner, M., Naujokat, S., Margaria, T., Geske, M.: Propertydriven benchmark generation. In: Bartocci, E., Ramakrishnan, C.R. (eds.) Model Checking Software, pp. 341–357. Springer, Berlin (2013)
Steffen, B., Isberner, M., Naujokat, S., Margaria, T., Geske, M.: Propertydriven benchmark generation: synthesizing programs of realistic structure. STTT 16(5), 465–479 (2014)
Steffen, B., Jasper, M.: Propertypreserving parallel decomposition. In: Models, Algorithms, Logics and Tools. LNCS, vol. 10460, pp. 125–145. Springer (2017)
Steffen, B., Jasper, M.: Generating Hard Benchmark Problems for Weak Bisimulation, pp. 126–145. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31514-6_8
Steffen, B., Jasper, M., Meijer, J., van de Pol, J.: Propertypreserving generation of tailored benchmark petri nets. In: 17th International Conference on Application of Concurrency to System Design (ACSD), pp. 1–8 (2017). https://doi.org/10.1109/ACSD.2017.24
Steffen, B., Knoop, J.: Finite Constants: Characterizations of a New Decidable Set of Constants. In: Kreczmar, A., Mirkowska, G. (eds.) Mathematical Foundations of Computer Science (MFCS’89), LNCS, vol. 379, pp. 481–491. Springer (1989). https://doi.org/10.1007/3-540-51486-4_94
Wang, M., Tian, C., Duan, Z.: Full regular temporal property verification as dynamic program execution. In: IEEE/ACM 39th International Conference on Software Engineering Companion (ICSEC), pp. 226–228 (2017). https://doi.org/10.1109/ICSEC.2017.98
Wang, M., Tian, C., Zhang, N., Duan, Z.: Verifying full regular temporal properties of programs via dynamic program execution. IEEE Trans. Reliab. 68(3), 1101–1116 (2019). https://doi.org/10.1109/TR.2018.2876333
Wang, M., Tian, C., Zhang, N., Duan, Z., Yao, C.: Translating XdC programs to MSVL programs. Theor. Comput. Sci. 809, 430–465 (2020). https://doi.org/10.1016/j.tcs.2019.12.038
Wolper, P.: Temporal logic can be more expressive. Inf. Control 56(1), 72–99 (1983). https://doi.org/10.1016/S0019-9958(83)80051-5
Funding
Open Access funding enabled and organized by Projekt DEAL.
Author information
Authors and Affiliations
Corresponding author
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Howar, F., Jasper, M., Mues, M. et al. The RERS challenge: towards controllable and scalable benchmark synthesis. Int J Softw Tools Technol Transfer 23, 917–930 (2021). https://doi.org/10.1007/s10009-021-00617-z
DOI: https://doi.org/10.1007/s10009-021-00617-z
Keywords
Benchmark generation
Verification competitions
Error witnesses
Temporal logic
LTL synthesis
Büchi automata
Modal contracts
Parallel decomposition
Model checking
Bisimulation checking