Minimal-Time Synthesis for Parametric Timed Automata

Parametric timed automata (PTA) extend timed automata by allowing parameters in clock constraints. Such a formalism is for instance useful when reasoning about unknown delays in a timed system. Using existing techniques, a user can synthesize the parameter constraints that allow the system to reach a specified goal location, regardless of how much time has passed for the internal clocks. We focus on synthesizing parameters such that not only is the goal location reached, but we also address the following questions: what is the minimal time to reach the goal location, and for which parameter values can we achieve this? We analyse the problem and present an algorithm that solves it. We also discuss and provide solutions for minimizing a specific parameter value while still reaching the goal. We empirically study the performance of these algorithms on a benchmark set for PTAs and show that minimal-time reachability synthesis is more efficient to compute than the standard synthesis algorithm for reachability.


Introduction
Verification of real-time systems involving hard timing constraints and concurrency is of utmost importance, and is now recognized in standards such as DO-178C, which allows formal methods without addressing specific process requirements. Model checking is a popular model-based technique that formally verifies whether a model satisfies a property. Parametric timed model checking significantly enhances model checking by allowing its application earlier in the design phase, when timing constants may not be known yet. In addition, it is possible to verify systems in the presence of uncertainty, e.g., when some periods are known only with limited precision. This is the case of Thales' FMTV challenge 2014, where the system was characterized by uncertain but constant periods, which rules out the use of non-parametric timed model checking.
Several tools support parameters, such as HyTech [HHWT95] (parametric hybrid automata), Romeo [LRST09] (parametric time Petri nets), IMITATOR [AFKS12] (parametric timed automata), PSyHCoS [ALS+13] (parametric stateful timed CSP), or Symrob (robustness for timed automata) [San15]. In addition, several tools support the larger class of hybrid automata, such as PHAVer [Fre08] or SpaceEx [FLGD+11], and, while not explicitly supporting parameters, can encode them. Recently, a growing number of analyses and techniques were proposed to analyze parametric timed models (mainly PTAs), such as SMT-based techniques [KP12], integer hull abstractions [JLR15], corner-point abstractions [BBLS15], distributed verification [ACN15], NDFS-based synthesis [NPvdP18], machine learning [AL17,LSGA17], etc. However, despite some case studies informally shared between these works, there is a lack of a common basis to compare new tools and techniques in a fair manner. Without a stable list of publicly available benchmarks, it is difficult to assess the efficiency of a new algorithm.
Contribution. We present here a library of benchmarks containing academic and industrial case studies collected in the past few years from academic papers and industrial collaborations. In addition, a focus is made on (possibly toy) examples known to be unsolvable using current state-of-the-art techniques, with the hope of encouraging the development of new techniques to solve them. Benchmarks are available online in the IMITATOR input format, and are distributed under the GNU General Public License.
Related libraries. The library most related to ours is that by Chen et al., which proposes a suite of benchmarks for hybrid systems [CSBM+15]. However, it aims at analyzing hybrid systems, which are strictly more expressive than PTAs in theory, and incomparable in practice, as most hybrid systems do not feature timing parameters. In addition, that benchmark suite focuses only on reachability properties. Finally, and most importantly, it does not focus on parameters, and its benchmarks are non-parametric. In contrast, our library focuses on parametric timed benchmarks, with various types of properties.
Another interesting library is that by Hoxha, Abbas, and Fainekos [HAF14], which offers Matlab/Simulink models of automotive systems. However, it does not aim specifically at parametric timed model checking; two of our benchmarks partially originate from the aforementioned library [HAF14].

IMITATOR parametric timed automata
Parametric timed automata extend finite-state automata with clocks, i.e., real-valued variables evolving at the same rate. Clocks can be reset along transitions, and can be compared to constants or parameters (integer- or rational-valued) along transitions ("guards") or in locations ("invariants"). IMITATOR parametric timed automata extend PTAs [AHV93] with some useful features such as synchronization between components, stopwatches (i.e., the ability to stop the elapsing of some clocks [CL00]), presence of parametric linear terms in guards, invariants and resets, shared global rational-valued variables, etc.

Fig. 1: Examples of PTAs
Example 1. Consider the PTA in Fig. 1a, containing two locations l0 and l1, two clocks x and y, and one parameter p. The self-loop on l0 can be taken whenever x = p holds, and resets x, i.e., it can be taken every p time units. In addition, initially, as x = y = 0 and clocks evolve at the same rate, the transition guarded by y = 1 ∧ x = 0 cannot be taken. Observe that, if p = 1, then the transition to l1 can be taken after exactly one loop on l0. If p = 1/2, then the transition to l1 can be taken after exactly two loops. In fact, the set of valuations for which l1 is reachable is exactly {p = 1/n | n ∈ N, n ≥ 1}.

The benchmark library

Categories
Our benchmarks are classified into three main categories:
1. academic benchmarks, studied in a range of papers: a typical example is the Fischer mutual exclusion protocol;
2. industrial case studies, which correspond to a concrete problem solved (or not) in an industrial environment;
3. examples famous for being unsolvable using state-of-the-art techniques; for some of them, a solution may be computed by hand, but existing automated techniques are not capable of computing it. This is the case of the PTA in Fig. 1a: a human can very easily solve it, while (to the best of our knowledge) no tool is able to compute this result automatically.
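The example of Fig. 1a from the third category is only hard symbolically: for any single concrete valuation of p, reachability of l1 is easy to decide by stepping through the loop. The following sketch is purely illustrative (it is not a tool API, and the function name and loop bound are our own); it exploits the observation from Example 1 that the guard y = 1 and x = 0 can only hold at the instant of a reset.

```python
from fractions import Fraction

def reaches_l1(p: Fraction, max_loops: int = 10_000) -> bool:
    """Concrete-valuation check for the PTA of Fig. 1a (illustration only).

    The self-loop on l0 fires every p time units and resets x; the edge to
    l1 needs y = 1 and x = 0, which can only hold at the instant of a reset.
    Hence l1 is reachable iff some number of loops lands exactly on time 1.
    """
    if p <= 0:
        return False
    t = Fraction(0)            # value of y (never reset), i.e., global time
    for _ in range(max_loops):
        t += p                 # take the self-loop; x is reset at time t
        if t == 1:
            return True        # guard "y = 1 and x = 0" holds right after the reset
        if t > 1:
            return False       # y only grows, so we have overshot for good
    return False
```

Running this over a grid of valuations recovers individual points of the constraint {p = 1/n | n ≥ 1}, but, as noted above, no automated technique synthesizes the full symbolic constraint.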
Remark 1. Our library contains a fourth category: education benchmarks, which consist of generally simple case studies that can be used for teaching. This category contains toy examples such as coffee machines. We omit this category from this paper, as these benchmarks are generally of limited interest performance-wise.
In addition, we use the following classification criteria:
- number of variables: clocks, parameters, locations, automata;
- whether the benchmark (in the provided version) is easily scalable, i.e., whether one can generate a large number of instances; for example, protocols often depend on the number of participants, and can therefore be scaled accordingly;
- presence of shared rational-valued variables;
- presence of stopwatches;
- presence of location invariants, as some works (e.g., [AHV93,ALR18a]) exclude them;
- whether the benchmark meets the L/U assumption.
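The criteria above can be thought of as a per-benchmark record. The sketch below is one possible encoding; the field names and the sample values are our own illustration, not a schema prescribed by the library or the actual figures from Table 1.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkEntry:
    """Hypothetical record mirroring the classification criteria above."""
    name: str
    automata: int
    clocks: int
    parameters: int
    locations: int
    scalable: bool        # can instances be generated, e.g., per number of processes?
    discrete_vars: bool   # shared rational-valued variables
    stopwatches: bool
    invariants: bool
    lu_pta: bool          # meets the L/U assumption

# Purely illustrative values:
entry = BenchmarkEntry("Fischer", automata=3, clocks=3, parameters=2,
                       locations=12, scalable=True, discrete_vars=False,
                       stopwatches=False, invariants=True, lu_pta=False)
```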

Properties
We consider the four following main properties:
- reachability / safety: synthesize parameter valuations for which a given state of the system (generally a location, but possibly a constraint on variables) must be reachable / avoided (see e.g., [JLR15]);
- optimal reachability: same as reachability, but with an optimization criterion: some parameters (or the time) should be minimized or maximized;
- unavoidability: synthesize parameter valuations for which all runs must always eventually reach a given state (see e.g., [JLR15]);
- robustness: synthesize parameter valuations preserving the discrete behavior (untimed language) w.r.t. a given valuation (see e.g., [ACEF09,San15]).
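To make the first two property kinds concrete on the PTA of Fig. 1a, the sketch below performs a naive "synthesis" over a finite candidate set of valuations, using the closed-form characterization from Example 1 (l1 is reachable iff 1 is a positive integer multiple of p). Real tools such as IMITATOR work symbolically over all rationals; this enumeration is only an illustration, and the function name is our own.

```python
from fractions import Fraction

def goal_reachable(p: Fraction) -> bool:
    """Toy goal predicate for Fig. 1a: l1 is reachable iff 1/p is a
    positive integer (illustrative, not a tool API)."""
    return p > 0 and (1 / p).denominator == 1

candidates = [Fraction(1), Fraction(1, 2), Fraction(2, 5), Fraction(3)]

# Reachability synthesis keeps the valuations for which the goal CAN be
# reached; safety synthesis keeps the complement, where it is avoided.
reachable = [p for p in candidates if goal_reachable(p)]
safe = [p for p in candidates if not goal_reachable(p)]

# Parameter-minimization flavor of optimal reachability: among the
# reachable valuations, pick the one minimizing p.
best = min(reachable)
```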
In addition, we include some recent case studies of parametric timed pattern matching ("PTPM" hereafter), i.e., deciding for which part of a log and for which parameter values a parametric property holds on that log [AHW18]. Finally, a few more case studies have ad-hoc properties (liveness, properties expressed using observers [ABBL98,And13], etc.), denoted "Misc." later on.

Presentation
The benchmark library comes in the form of a Web page that classifies models and is available at https://www.imitator.fr/library.html.
The library is made of a set of benchmarks. Each benchmark may have different models: for example, Flip-flop comes with three models, with 2, 5 and 12 parameters respectively. Similarly, some Fischer benchmarks come with several models, each of them corresponding to a different number of processes. Finally, each model comes with one or more properties. For example, for Fischer, one can either run safety synthesis, or evaluate the robustness of a given reference parameter valuation.
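The benchmark / model / property nesting described above can be pictured as a small hierarchy. The snapshot below is hypothetical (the model names and property assignments are illustrative, not the library's actual contents); it only shows how the counts reported for the library aggregate over the levels.

```python
# Hypothetical benchmark -> models -> properties nesting (illustrative names).
library = {
    "Flip-flop": {
        "flipflop-2param": ["reachability"],
        "flipflop-5param": ["reachability"],
        "flipflop-12param": ["reachability"],
    },
    "Fischer": {
        "fischer-3proc": ["safety", "robustness"],
        "fischer-4proc": ["safety", "robustness"],
    },
}

# Counts aggregate over the levels, as in the library's own statistics.
n_benchmarks = len(library)
n_models = sum(len(models) for models in library.values())
n_properties = sum(len(props)
                   for models in library.values()
                   for props in models.values())
```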
The first version of the library contains 34 benchmarks with 80 different models and 122 properties.

Performance
We present a selection of the library in Table 1. Not all benchmarks are given; in addition, most benchmarks come with several models and several properties, omitted here for space concerns. We give, from left to right, the number of automata, clocks, parameters and discrete variables, whether the model is an L/U-PTA, a U-PTA or a regular PTA, whether it features invariants and stopwatches, the kind of property, and a computation time on an Intel i7-7500U CPU @ 2.70GHz with 8 GiB of memory running Linux Mint 18. "T.O." denotes time-out (after 300 s). "?" denotes unsolvable, because no such algorithm is implemented in existing tools. "HS" denotes time-out but human-solvable: e.g., for Fischer, the correctness constraint is known independently of the number of processes, but tools may fail to compute it. This is also the case of the toy PTAs in Figs. 1a and 1b.
Despite the time-out, some case studies come with a partial result: either because IMITATOR runs reachability synthesis ("EFsynth" [JLR15]), which can output a partial result when interrupted before completion, or because some other methods can output some valuations. For example, for ProdCons, IMITATOR is unable to synthesize a constraint; however, in the original work [KP12], some punctual (non-symbolic) valuations are given.
Robustness case studies are not part of Table 1, but are included in the online library.

Perspectives
Syntax. So far, all benchmarks use the IMITATOR input format; in addition, if a benchmark comes from another model checker (e.g., a HyTech or Uppaal model), it also comes with its native syntax. In the near future, we plan to propose a translation to Uppaal timed automata; however, some information will be lost, as Uppaal does not allow parameters and supports stopwatches only in a limited manner. A future work will be to propose other syntaxes, or a normalized syntax for parametric timed model checking benchmarks.

Contributions and versioning. The library is aimed at being enriched with future benchmarks. Furthermore, it is collaborative and open to any willing contributor. A versioning system will be set up for the addition (or modification) of benchmarks in the future.

Table 1: A selection from the benchmark library