Abstract
To decide whether a set of states is reachable in a hybrid system, overapproximative symbolic successor computations can be used, where the symbolic representation of state sets as well as the successor computations have several parameters which determine the efficiency and the precision of the computations. Naturally, faster computations come with less precision and more spurious counterexamples. To remove a spurious counterexample, the only possibility offered by current tools is to reduce the error by restarting the complete search with different parameters. In this paper we propose a CEGAR approach that takes as input a user-defined ordered list of search configurations, which are used to dynamically refine the search tree along potentially spurious counterexamples. Dedicated data structures allow us to extract as much useful information as possible from previous computations in order to reduce the refinement overhead.
This work was supported by the German research council (DFG) in the context of the HyPro project and the DFG Research Training Group 2236 UnRAVeL.
1 Introduction
As the correct behavior of hybrid systems with mixed discrete-continuous behavior is often safety critical, a lot of effort has been put into the development and implementation of techniques for their analysis. In this paper we focus on techniques for proving unreachability of a given set of unsafe states. Besides methods based on theorem proving [11, 21, 25], logical encoding [13, 15, 22, 26] and validated simulation [12, 28], flowpipe-construction-based methods [2, 7, 9, 17,18,19,20, 27] show increasing performance and usability. These methods overapproximate the set of states that are reachable in a hybrid system from a given set of initial states by executing an iterative forward reachability analysis algorithm. The result is a sequence of state sets whose union contains all system paths starting in any initial state (usually for a bounded time duration and a bounded number of discrete steps, unless a fixed point could be detected).
If the resulting overapproximation does not intersect with the unsafe state set then the verification task is successfully completed. However, if the intersection is not empty, due to the overapproximation the results are not conclusive. In this case the only possibility for achieving a conclusive answer is to change some analysis parameters to reduce the approximation error. As a smaller error typically comes with a higher computational effort, the choice of suitable parameters by the user can be a tedious task.
Most tools do not support the dynamic change of those parameters, thus after a modification of the parameters the user has to restart the whole computation. One of the few tools implementing some hard-coded dynamic parameter adaptations is the STC mode [16] of SpaceEx [17], which dynamically adapts the time-step size during reachability analysis to detect the enabledness of discrete events more precisely. Another parameter (the degree of Taylor approximations) is dynamically adapted in the Flow\(^*\) tool [9]. The method [5], also implemented in SpaceEx, uses cheap (but stronger overapproximating) computations to detect potentially unsafe paths and uses this information to guide more precise (and more time-consuming) computations. In [6] the authors present a method to automatically derive template directions when using template polyhedra as a state set representation in a CEGAR refinement fashion during analysis. As a last example, in [24] the authors use model abstraction to hide model details and apply model refinement if potential counterexamples are detected; after each refinement, the approach makes use of previous reachability analysis results and adapts them for the refined model, instead of a complete restart.
However, none of the available tools supports the dynamic adjustment of several parameters by a more elaborate strategy, which is either defined by the user or chosen from a predefined set. In this paper we propose such an approach, provide an implementation based on the HyPro [27] programming library, present some use cases to demonstrate its applicability and advantages, and discuss ideas for further extensions and improvements. Our main contributions are:

the definition of search strategies to specify the dynamic adjustment of parameter configurations;

the formalization of a general reachability analysis algorithm with dynamic configuration adjustment following a search strategy, where dynamic means that adjustments are triggered during the analysis process in a fully automated manner only for parts of the search where they are needed to achieve conclusive analysis results;

the identification of information, collected during reachability analysis, which can be reused after a parameter adjustment to reduce the computational effort of forthcoming analysis steps;

a datatype to store information about previously completed analysis steps, including information about reusability, and supporting dynamic parameter adjustments according to a given strategy;

the implementation of the reachability analysis algorithm using dynamic parameter adjustment and supporting information reuse;

the evaluation of our method on some case studies.
Outline. In Sect. 2 we recall some preliminaries on flowpipe-construction-based reachability analysis, before presenting our algorithm for the dynamic adjustment of parameter configurations in Sect. 3. In Sect. 4 we provide some experimental results and conclude the paper in Sect. 5.
2 Preliminaries
In this work we develop a method to dynamically adjust the parameters of a verification method for autonomous linear hybrid systems whose continuous dynamics can be described by ordinary differential equations (ODEs) of the form \(\dot{x}(t)=A\cdot x(t)\), but our approach can be naturally extended to methods for nonautonomous hybrid systems with external input or nonlinear dynamics.
Hybrid automata [3] are one of the modeling formalisms for hybrid systems. Similarly to discrete transition systems, nodes (called locations or control modes) model the discrete part of the state space (e.g. the states of a discrete controller) and transitions between the nodes (called jumps) labeled with guards and reset functions model discrete state changes. To model the continuous dynamics between discrete state changes, flows in the form of ordinary differential equation (ODE) systems, and invariants in the form of predicates over the model variables are attached to the locations. The ODEs specify the evolution of the continuous quantities over time (called the flowpipe), where the control is forced to leave the current location before its invariant gets violated. Initial predicates attached to the locations specify the initial states.
A state \(\sigma _{} = (\ell _{}, \nu _{})\) of a hybrid automaton consists of a location l and a variable valuation \(\nu \). A region is a set of states \((\ell _{},P_{})=\{\ell _{}\}\times P_{}\). A path \(\pi \) of a hybrid automaton is a sequence \(\pi _{}^{} = \sigma _{0} {\mathop {\rightarrow }\limits ^{t_0}} \sigma _{1} {\mathop {\rightarrow }\limits ^{e_1}} \sigma _{2} {\mathop {\rightarrow }\limits ^{t_2}} \ldots \) of time steps \(\sigma _{i} {\mathop {\rightarrow }\limits ^{t_i}} \sigma _{i+1}\) of duration \(t_i\) and discrete steps \(\sigma _{k} {\mathop {\rightarrow }\limits ^{e_k}} \sigma _{k+1}\) following a jump, where \(\sigma _{0}=(\ell _{0},\nu _{0})\) is an initial state. A state is called reachable if there exists a path leading to it.
Flowpipe-construction-based reachability analysis aims at determining the states that are reachable in (a model of) a hybrid system, in order to show that certain unsafe states cannot be reached. Since the reachability problem for hybrid systems is in general undecidable, these methods usually overapproximate the set of states that are reachable along paths with a bounded number of jumps (called the jump depth) J and a bounded time duration T (called the time horizon) between two jumps. We explain the basic ideas needed to understand our contributions; for further reading we refer to, e.g., [8, 23].
Starting from an initial region \((\ell _{0},V_{0})\), the analysis overapproximates flowpipes and jump successors iteratively. Due to nondeterminism, this generates a tree, whose nodes \(n_i\) are either unprocessed leaves storing a tuple \((\pi _{i}^{};\ \ell _{i},V_{i};\ \bot )\), or processed inner nodes storing \((\pi _{i}^{};\ \ell _{i},V_{i};\ V_{i,0},\ldots ,V_{i,k_i})\).
The pair \((\ell _{i},V_{i})\) is the node’s initial region, which is \((\ell _{0},V_{0})\) for the root. By \(\pi _{i}^{}=I_{i,0},e_{i,0},\ldots ,I_{i,d_i},e_{i,d_i}\), with \(I_{i,l}\) being intervals and \(e_{i,l}\) being jumps, we encode a set \(\{\sigma _{0}{\mathop {\rightarrow }\limits ^{t_0}}\sigma _{0}'{\mathop {\rightarrow }\limits ^{e_{i,0}}} \sigma _{1}\ldots {\mathop {\rightarrow }\limits ^{e_{i,d_i}}} \sigma _{d_i+1}\mid \sigma _0\in (\ell _{0},V_{0}),t_l\in I_{i,l}\}\) of paths along which \((\ell _{i},V_{i})\) is reachable.
To process a node \((\pi _{i}^{};\ \ell _{i},V_{i};\ \bot )\), we divide the time horizon [0, T] into segments \([t_{i,0},t_{i,1}],{\ldots },[t_{i,k_i},t_{i,k_i+1}]\) with \(t_{i,0}=0\) and \(t_{i,k_i+1}=T\), and for each segment \([t_{i,j},t_{i,j+1}]\) we compute an overapproximation \(V_{i,j}\) of the states reachable from \(V_{i}\) in \(\ell _{i}\) within time \([t_{i,j},t_{i,j+1}]\). I.e., \(R_i=\cup _{j=0}^{k_i}V_{i,j}\) contains all valuations reachable in location \(\ell _{i}\) from \(V_{i}\) within time T. The segmentation is usually homogeneous, meaning that the time-step size \(t_{i,j+1}-t_{i,j}\) is constant, but there are also approaches for dynamic adaptations.
The processing is completed by computing for each flowpipe segment \(V_{i,j}\) and each jump e from \(\ell _{i}\) to some \(\ell _{i}'\) an overapproximation \(V_{i,j}^e\) of the valuations reachable from \(V_{i,j}\) by executing e. To store the jump successors, either we add a child node \((\pi _{i}^{},[t_{i,j},t_{i,j+1}],e;\ \ell _{i}',V_{i,j}^e;\ \bot )\) to \(n_i\) for each \(V_{i,j}^e\not =\emptyset \), or we aggregate successors along a jump e into a single child node \((\pi _{i}^{},[t_{i,j},t_{i,j'}],e;\ \ell _{i}',R_{i}^e;\ \bot )\) with \(V_{i,l}^e=\emptyset \) for all \(l\notin [j,j'-1]\) and \(\cup _{e}\cup _{j''\in [j,j'-1]}V_{i,j''}^e\subseteq R_i^e\), or we cluster successors along a jump into a fixed number of child nodes (see Fig. 3).
For illustration purposes, above we stored all flowpipe segments \(V_{i,j}\) in the nodes. In practice they are too numerous and if they contain no unsafe states then they are deleted. In the following, we assume that each node stores a tuple \((\pi _{i}^{};\ \ell _{i},V_{i};\ p)\), where the flag \(p\) is 1 for processed nodes and 0 otherwise. (For a simple reachability analysis, we need to store neither the path nor the processed flag, but we will make use of the information stored in them later on. Furthermore, we could even delete the initial regions of processed nodes, however, besides counterexample and further output generation, they might be also useful for fixedpoint detection.)
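The node tuples \((\pi _{i}^{};\ \ell _{i},V_{i};\ p)\) described above can be written down as a small data structure. The following Python sketch is purely illustrative (field names are ours, not those of the HyPro implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One search-tree node (pi; l, V; p) as described above."""
    path: list            # pi: alternating time intervals and jump labels
    location: str         # l: current location of the node's initial region
    initial_set: object   # V: set of valuations the node starts from
    processed: bool = False       # p: True once flowpipe/jump successors exist
    children: list = field(default_factory=list)  # jump-successor nodes
```

For example, a jump successor would be stored as a child `Node` whose `path` extends the parent's path by the time interval and jump label of the transition taken.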
State set representations are one of the core components of the above analysis procedure. In addition to storing state sets, these datatypes need to provide certain (overapproximative) operations (union, intersection, linear transformation, Minkowski sum etc.) on state sets. Besides geometric representations (e.g., boxes/hyperrectangles, oriented rectangular hulls, convex polyhedra, template polyhedra, orthogonal polyhedra, zonotopes, ellipsoids), also symbolic representations (e.g., support functions or Taylor models) can be used for this purpose. The variety of representations is rooted in the general trade-off between computational effort and precision: faster computations often come at the cost of precision loss and, vice versa, more precise computations require higher computational effort. The representations also differ in their size, i.e., the required memory consumption, which further influences the computational costs of operations on these representations.
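As a minimal illustration of such a representation, the sketch below implements boxes with some of the required operations. This is our own simplified code, not the HyPro datatypes; note that for boxes, intersection and Minkowski sum are exact, while the union is overapproximated by the bounding box of both operands.

```python
class Box:
    """Axis-aligned box: one closed interval (lo, hi) per dimension."""
    def __init__(self, intervals):
        self.intervals = list(intervals)   # [(lo, hi), ...]

    def intersect(self, other):
        # Exact for boxes; returns None if the intersection is empty.
        res = [(max(a, c), min(b, d))
               for (a, b), (c, d) in zip(self.intervals, other.intervals)]
        return None if any(lo > hi for lo, hi in res) else Box(res)

    def hull_union(self, other):
        # Overapproximative union: smallest box containing both operands.
        return Box([(min(a, c), max(b, d))
                    for (a, b), (c, d) in zip(self.intervals, other.intervals)])

    def minkowski_sum(self, other):
        # Exact for boxes: interval-wise addition of the bounds.
        return Box([(a + c, b + d)
                    for (a, b), (c, d) in zip(self.intervals, other.intervals)])
```

The same interface would be provided, with different cost/precision trade-offs, by polyhedra, zonotopes or support functions.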
3 CEGAR-Based Reachability Analysis
If potential reachability of an unsafe state is detected by overapproximative computations then, in order to achieve a conclusive verification result, we need to reduce the overapproximation error to an extent that allows us to determine that the counterexample is spurious.
Search parameters, parameter configurations and search strategies. The size of the overapproximation error depends on various search parameters, which influence not only the precision but also the computational effort of the performed analysis:

1.
State set representation: The choice of the state set representation has a very strong influence on both the error and the running time of the computations. For example, boxes are very efficient but introduce large overapproximations, whereas convex polyhedra are in general more precise but computationally much more expensive (see Fig. 1).

2.
Reductions: Some of the state set representations can grow in representation size during the computations. For example, during the analysis we need to compute the Minkowski sum \(A\oplus B=\{a+b\mid a\in A\wedge b\in B\}\) of two state sets A and B. Figure 2(a) shows a 2-dimensional example to illustrate how the representation size of a polytope P in the vertex representation (storing the vertices of the polytope) increases from 4 to 6 when building the Minkowski sum with a box. Another source of growing representation sizes are large numerators and/or denominators when using rationals to describe, for instance, coefficients of vectors. When the size of a representation gets too large we can try to reduce it at the cost of additional overapproximation. Thus the precision and cost also depend on whether such reductions take place.

3.
Time-step size: The time-step size for the flowpipe construction can be constant or dynamically adapted. In the constant case it directly determines the number of flowpipe segments that need to be overapproximated and for which jump successors need to be computed. In the case of dynamic adaptation, the adaptation heuristics determines the number of segments and thus the computational effort. In both cases, smaller time-step sizes often lead to more precise computations at the cost of higher computational effort, as more segments are computed (see Fig. 2(b)).

4.
Aggregation and clustering: The precision is higher if no aggregation takes place or if the number of clusters increases (see Fig. 3). However, completely switching off both aggregation and clustering often leads to practically intractable computational costs. Allowing a larger number of clusters can improve the precision at a manageable increase in running time, but the number of clusters should be chosen carefully, considering also the size of the time steps (as they determine the number of flowpipe segments and thus the number of state sets to be clustered).

5.
Splitting initial sets: Large initial state sets might be challenging for the reachability analysis. If the algorithm cannot find a conclusive answer, we can split the initial set into several subsets and apply reachability analysis to each of the subsets. Besides enabling/disabling initial state set splitting, the splitting heuristics is also relevant for the precision. In general, fewer initial state sets are less precise but cheaper to compute with. Furthermore, it may also be relevant where the splitting takes place.
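The representation-size growth under Minkowski sum mentioned in item 2 can be reproduced in a few lines. The sketch below (our own illustrative code) computes the Minkowski sum of two polytopes in vertex representation as the convex hull of all pairwise vertex sums: summing a 4-vertex parallelogram with a 4-vertex box yields a polytope with 6 vertices.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_sum(P, Q):
    """V-representation Minkowski sum: hull of all pairwise vertex sums."""
    return convex_hull([(p[0]+q[0], p[1]+q[1]) for p in P for q in Q])

P   = [(0, 0), (2, 0), (3, 1), (1, 1)]   # a parallelogram, 4 vertices
box = [(0, 0), (1, 0), (1, 1), (0, 1)]   # the unit box, 4 vertices
S = minkowski_sum(P, box)                # hexagon: 6 vertices
```

Here the representation size grows from 4 to 6 because the sum inherits the edge directions of both operands, and only the two horizontal directions are shared.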
Most flowpipe-construction-based tools allow the user to define a search parameter configuration, fixing values for the above-listed search parameters. Aside from a few exceptions mentioned in the introduction, this configuration remains constant during the whole analysis. Whenever an unsafe state is detected to be potentially reachable, the user can restart the analysis with a different parameter configuration to reduce the overapproximation error.
As the executions with different parameter configurations are completely independent, potentially useful information from previous search processes gets lost. To enable the exploitation of such information, we propose an approach to build a connection between executions with different parameter configurations.
Instead of a single configuration, we propose to define an ordered sequence \(c_0,\ldots ,c_n\) of search parameter configurations, which we call a search strategy, where the position of a parameter configuration within a search strategy is called its refinement level. Configurations at higher refinement levels should typically lead to more precise computations, but this is not a soundness requirement.
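A search strategy can thus be written down as a plain ordered list of configurations. The sketch below uses hypothetical field and representation names purely for illustration; the concrete parameters in our evaluation are listed in Table 1.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    representation: str   # e.g. "box", "support_function", "polytope"
    time_step: float      # fixed time-step size for flowpipe construction
    aggregation: bool     # aggregate jump successors into one child node?

# A search strategy: the position in the list is the refinement level.
# Later levels are typically (but need not be) more precise.
strategy = [
    Config("box", 0.1, True),                # level 0: fast, coarse
    Config("support_function", 0.01, True),  # level 1: more precise
    Config("polytope", 0.001, False),        # level 2: most precise
]
```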
Dynamic configuration adaptation. We start the analysis with the first configuration in the search strategy, i.e. the one at refinement level 0. If the analysis with this configuration can prove safety then the process is completed.
Otherwise, if the reachability computation detects a (potentially spurious) counterexample then the search with the current configuration is paused; note that at this point there might be unprocessed nodes whose successors were not yet computed. Now, our goal is to exclude the detected counterexample by doing as few computations as possible using configurations at higher refinement levels and, if we succeed, process those yet unprocessed nodes further at refinement level 0. For the first counterexample this means intuitively recomputing reachability only along the counterexample path with the configuration at refinement level 1; we say that we refine the path. Note that the result of a path refinement can be a tree, e.g. if the refinement switched off aggregation. If the counterexample could be excluded by the path refinement, then we switch back to the previous refinement level to process the remaining, yet unprocessed nodes. Otherwise, if the counterexample could not be excluded then we get another, refined counterexample; in this case we recursively try to exclude this counterexample by switching to the configuration at the second refinement level etc.
Let us first clarify what we mean by refining a counterexample path. We define a counterexample to be a path in the search tree. If the configuration that produced the counterexample used aggregation, then refining the path means determining the flowpipes and the jump successors for the given sequence of locations (as stored in the nodes on the path) and jumps (as stored on the edges) with the configuration at the next-higher refinement level. However, if the previous configuration did not aggregate then we need to determine only a subset of the jump successors, namely those whose time point is covered by the counterexample.
Now let us discuss what it means to refine a path by doing as few computations as possible. If we find a counterexample at a refinement level i then we need a refinement for the whole path at level \(i+1\). However, another counterexample detected previously at level i might share a prefix with the current one; if the previous counterexample has already been refined then we need to refine only the not-yet-refined postfix of the current counterexample.
The analysis at refinement level 0 and each path refinement computation generates a search tree. To reduce the computational effort as much as possible, we have to exchange information between these search trees. For example, for a given counterexample found at refinement level i we need to know whether a prefix of it was already refined at level \(i+1\). To allow such information exchange, we could store each search tree separately and extract information from the trees when needed by traversing them. This option requires the least management overhead during reachability computations, but it has major drawbacks in terms of the computational costs for tree traversal. Alternatively, we could store each search tree separately but additionally store refinement relations between their nodes, allowing us to relate paths and retrieve information more easily. However, we would have high costs for setting up and storing all node relations. Instead, we decided to collect all information in a single refinement tree. Tree updates require a careful management of the refinement nodes and their successors, but the advantage is that information about previous searches is more easily accessible.
In the following we first discuss how nodes of the refinement tree are processed and how paths in the refinement tree are refined, and finally we explain our dynamic parameter refinement algorithm.
The algorithm. Each refinement tree node \(n_{i}^{}\) is a kind of “metanode” that contains an ordered sequence \((n_{i}^{0},\ldots ,n_{i}^{u_i})\) with \(0\le u_i\le n\), where \(n+1\) is the size of the search strategy, and each entry \(n_{i}^{j}\) has the form \((\pi _{}^{};\ \ell _{},V_{};\ p)\) as explained in Sect. 2.
Assume for simplicity that the model has a single initial region \((\ell _{0},X_0)\), and let \(V_{0,i}\) represent \(X_0\) according to the state set representation of refinement level i. The refinement tree is initialized with a root node \(n_{0}^{}=(n_{0}^{0},\ldots ,n_{0}^{n})\) with \(n_{0}^{i}=(\epsilon ;\ \ell _{0}, V_{0,i};\ 0)\).
We additionally introduce a task list which is initialized to contain \((n_{0}^{};0;\epsilon )\) only. Elements \((n_{i}^{};j;\pi )\) in the task list store the fact that we need to compute successors for the jth entry of the refinement node \(n_{i}^{}\) at level j. If \(\pi =\epsilon \) then we are not refining and we need to consider all the successors for further computations; otherwise we are at a refinement level \(j>0\) and only the successors along the counterexample path \(\pi \) need to be considered.
We remove and process elements from the task list one by one. Assume we consider the task list element \((n_{i}^{};j;\pi ')\) with \(n_{i}^{j}=(\pi _{}^{};\ \ell _{},V_{};\ p)\).
If \(p=0\) then we overapproximate the flowpipe starting from \(V_{}{}\) in \(\ell _{}\) for the time horizon T, using the configuration at level j in the search strategy.
If the computed flowpipe segments contain no bad states and the jump depth J is not yet reached then we compute also the jump successors. Depending on the clustering/aggregation settings at level j, this yields a set of jump successor regions \(R_1,\ldots ,R_m\) with \(R_k=(\ell _{k},V_{k})\) over time intervals \(I_1,\ldots ,I_m\) along jumps \(e_1,\ldots ,e_m\). If the number of children \(m'\) of \(n_{i}^{}\) is less than m then we add \(m-m'\) new children; if \(m'>0\) then we add to the newly created children as many dummy entries (containing empty sets) as the other children have, in order to bring all children to the same refinement level. After that, we select for each \(k=1,\ldots ,m\) a different child \(\hat{n}_k\) of \(n_{i}^{}\) and append \((\pi _{}^{},I_k,e_k;\ \ell _{k},V_{k};\ 0)\) to the child’s entry sequence (see Fig. 4). If \(m'>m\) then we add to all not selected children (to which no new entry was added) a dummy entry. Finally, we set \(p\) to 1.
If the node could be processed without discovering any bad states (or if \(p\) was already 1 and thus processing was not needed) then we update the task list as follows:

If \(\pi '=\epsilon \) then we have to process all successor nodes at the level \(j'\) determined by the number of entries E in each of the nodes \(\hat{n}_k\). We add \((\hat{n}_k;E;\epsilon )\) to the task list for all \(k=1,\ldots ,m\).

Otherwise, if \(\pi '=I,e,\pi ''\) then we add \((\hat{n}_{k};j;\pi '')\) for all \(k=1,\ldots ,m\) for which \(I_k\cap I\not =\emptyset \) and \(e_k=e\).
Note that if \(\pi '=\epsilon \) but \(j>0\) then we have just succeeded to refine a spurious counterexample from level \(j-1\) to a safe path at level j and can continue further successor computations using a lower-level configuration. This switch to a lower level happens because the children \(\hat{n}_k\) of \(n_{i}^{}\) have fewer than j entries in their queues. Now the processing is completed and the next element from the task list can be approached.
If during processing \((n_{i}^{};j;\pi ')\) with \(n_{i}^{j}=(\pi _{}^{};\ \ell _{},V_{};\ p)\) the computed flowpipe had a nonempty intersection with the set of unsafe states then we have found a counterexample at level j. If \(j=n\) then the highest refinement level has been reached and the algorithm terminates without a conclusive answer. Otherwise, if \(j<n\), we repeat the computations along the counterexample path with a higher-level configuration (see Fig. 5). This is implemented by adding \((n_{0}^{};j+1;\pi ,\pi ')\) to the task list.
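The task-list processing described above can be condensed into the following much-simplified toy sketch. Here `flowpipe_is_safe` and `successors` are hypothetical stand-ins for the actual flowpipe and jump-successor computations, dummy entries and time intervals are omitted, and paths are reduced to sequences of jump labels; the actual implementation in HyPro differs in these details.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Entry:
    path: list                 # pi: jump labels leading to this node
    processed: bool = False    # p

@dataclass
class MetaNode:
    entries: list = field(default_factory=list)   # one entry per level
    children: dict = field(default_factory=dict)  # jump label -> MetaNode

def analyze(root, strategy, flowpipe_is_safe, successors):
    """Task-list loop sketch; returns True iff safety could be proven."""
    tasks = deque([(root, 0, None)])  # (meta-node, level j, path pi' or None)
    while tasks:
        node, j, path = tasks.popleft()
        while len(node.entries) <= j:  # lift the node to refinement level j
            prev = node.entries[-1].path if node.entries else []
            node.entries.append(Entry(path=list(prev)))
        entry = node.entries[j]
        if not entry.processed:
            if not flowpipe_is_safe(node, strategy[j]):
                if j + 1 == len(strategy):
                    return False       # highest level reached: inconclusive
                # refine the counterexample path, starting from the root
                tasks.append((root, j + 1, list(entry.path)))
                continue
            for jump in successors(node, strategy[j]):
                child = node.children.setdefault(jump, MetaNode())
                child.entries.append(Entry(path=entry.path + [jump]))
            entry.processed = True
        for jump, child in node.children.items():
            if path is None:           # normal search: follow all successors
                tasks.append((child, len(child.entries) - 1, None))
            elif path and path[0] == jump:  # refinement: follow the path only
                tasks.append((child, j, path[1:]))
    return True

# Toy system (hypothetical): one jump "e1"; its target looks unsafe under
# the coarse level-0 configuration but is proven safe at level 1.
def flowpipe_is_safe(node, config):
    return config == "fine" or node.entries[0].path != ["e1"]

def successors(node, config):
    return ["e1"] if not node.entries[0].path else []
```

On this toy system, `analyze(MetaNode(), ["coarse", "fine"], flowpipe_is_safe, successors)` first finds a spurious counterexample at level 0, refines it away at level 1, and proves safety.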
The main structure of the algorithm is shown in Algorithm 1.1.
3.1 Incrementality
The efficiency of the presented approach can be further improved by implementing incrementality: already available bookkeeping and additional information gained throughout the computation can be exploited to speed up later refinements.
For example, the presented approach already keeps track of time intervals where jumps were enabled, i.e. the time intervals during which the intersection of a state set and the guard condition was nonempty. Assume we process \((n;i;\pi ')\) at level i with \(n_i=(\pi ;\ell _{},V;p)\) being the ith entry in n. Let I be the union of all the time intervals for all flowpipe segments for which a nonempty jump successor was computed along a jump e. Later, when processing \((\hat{n};j;\hat{\pi }')\) at level \(j>i\) with \(\hat{n}_j=(\hat{\pi };\ell _{},\hat{V};\hat{p})\) being the jth entry in \(\hat{n}\), if the path set encoded by \(\hat{\pi }\) is included in the path set encoded by \(\pi \) then we need to compute jump successors along e only for flowpipe segments over time intervals that have a nonempty intersection with I.
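This reuse of jump-enabledness timings amounts to a simple interval filter; a sketch with hypothetical helper names (closed time intervals represented as `(lo, hi)` pairs):

```python
def overlaps(a, b):
    """Nonempty-intersection test for closed intervals (lo, hi)."""
    return a[0] <= b[1] and b[0] <= a[1]

def segments_to_check(segments, enabled_time):
    """Keep only flowpipe segments whose time interval meets one of the
    time intervals where the jump was already known to be enabled at a
    coarser level; guard intersections for all other segments can be
    skipped during refinement."""
    return [s for s in segments if any(overlaps(s, I) for I in enabled_time)]

# The jump was enabled during [0.3, 0.5] at the coarser level i, so at
# level j > i only the segments touching that interval must be checked:
segs = [(0.0, 0.1), (0.1, 0.2), (0.2, 0.3), (0.3, 0.4), (0.4, 0.5)]
candidates = segments_to_check(segs, [(0.3, 0.5)])
```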
Similarly, if \((\ell _{},V)\) contains no unsafe states but \((\ell _{},\hat{V})\) does then we know that the latter counterexample is spurious if the path set encoded by \(\hat{\pi }\) is included in the path set encoded by \(\pi \).
A similar observation holds for flowpipe segments: if a segment in the flowpipe of \((\ell _{},V)\) is empty, which happens when the invariant is violated, then we know that the same segment of the flowpipe from \((\ell _{},\hat{V})\) will also be empty.
4 Experimental Results
In order to show the general applicability of our approach we have conducted several experiments with an implementation of the method presented in Sect. 3. We have used our implementation to verify safety of several well-known benchmarks using different strategies (see Table 1). All experiments were carried out on an Intel Core i7 (\(4\times 4\) GHz) CPU with 16 GB RAM. Results for the used strategies can be found in Table 2.
Benchmarks. Different benchmarks from the area of hybrid systems verification were selected: The well-known bouncing ball benchmark models the height and velocity of a falling ball bouncing off the ground. The added set of bad states constrains the height of the ball after the first of 4 bounces. This benchmark already exhibits most of the properties that more challenging benchmarks cover, while being simple enough to serve as a sanity check for our method.
The 5D switching system [10] is an artificially created model with 5 locations and 5 variables which shows more complex dynamics and is well-suited to show the differences in overapproximation error between the used state set representations. We added a set of bad states in the last location, where the system’s trajectories converge to a certain point.
The navigation benchmark [14] models the velocity and position of a point mass moving through cells on a two-dimensional plane (we used variations of instances 9 and 11). Each cell (location) exhibits different dynamics influencing the acceleration of the mass. The goal is to show that a set of good states can potentially be reached while a set of bad states will always be avoided (see Fig. 6(b)). The initial position of the mass is chosen from a set, such that this benchmark exhibits nondeterminism in the discrete transitions, which results in a more complex search tree.
The platoon benchmark [1, 4] models a vehicle platoon of three cars where two controlled cars follow the first one while keeping the distance \(e_i\) between each other within a certain threshold (see Fig. 6(a)). This benchmark was chosen as it combines a higher-dimensional state space with more complex dynamics.
Strategies. During the development of our approach we tested several strategies with varying parameters: (a) the state set representation, (b) the time-step size and (c) aggregation settings. In general, other parameters (e.g. initial set splitting) could also be considered, but our prototype does not yet support these. For this evaluation we selected six strategies \(s_{0},\ldots ,s_{5}\) which mostly vary (a) and (b) (see Table 1). Changing aggregation settings has proven challenging for the tree update mechanism, and the exponential blow-up in the number of tree nodes rendered this method ineffective in practice. Furthermore, with disabled aggregation the largest precision gain can be observed for boxes, while for all other tested state set representations the effect is negligible. Note that our prototype implements the general case, in which time-step sizes are not necessarily monotonically decreasing or multiples of each other; this implies refinement starting from the root node.
Comparison. We compare our refinement algorithm (1) with a classic approach where no refinement is performed. To achieve this, we specify only a single strategy element for our algorithm. We give results for (2) the fastest successful setting (of the respective strategy), which an experienced user would choose, and for (3) the setting with the highest precision level, which a conservative user would select. The three entries per cell in Table 2 show the running times for our dynamic approach (gray), the fastest successful setting and the conservative approach. The numbers in brackets show the number of nodes in the search tree; for refinement strategies we give the number of nodes for each refinement level.
Observations. The results in Table 2 show that our method is in general competitive with classical approaches, as the running times with dynamic refinement are in the same order of magnitude as with the fastest setting, and in some cases our method is even faster. From the results we can draw several conclusions:

Our implementation currently supports reusing information about guard intersection timings (see Sect. 3.1), while other information, such as time intervals where a state set is fully contained in the set defined by the invariant of a location, is not yet used. Keeping track of this reduced information already noticeably influences the running times, as costly intersection operations for transition guards can be avoided for most computed segments, and the running times can compete with the optimal setting. This shows that the additional cost of precomputing parts of the search tree can be compensated in terms of running time when information is properly reused.

The length of the counterexample plays a significant role: in the bouncing ball benchmark the set of bad states is reachable after one discrete transition and from then on never again, while in the 5D switching system the set of bad states is reached in the last reachable location, which causes a refinement of the whole tree, so that a recovery to a lower refinement level is not possible. In the platoon benchmark, stepping back to a lower refinement level does not provide any advantage, as an intersection with the set of bad states occurs before transition timings can be recorded (see Fig. 6(a)). To overcome this problem, a future implementation should allow for additional entry points for refinement in order to reduce the length of the refinement path (see Sect. 5).

The shape of the search tree influences the effectiveness of our approach. As the navigation benchmark is the only benchmark in our set whose search tree naturally branches due to multiple outgoing transitions per location, the effect of partial refinement can be observed especially well for this benchmark: whole subtrees can be cut off and shown to be unreachable at higher refinement levels, which reduces the number of nodes. The presented method is most effective for systems exhibiting nondeterminism, which is reflected in a strongly branching search tree.

Coarse analysis allows for fast exploration of the search tree, at the cost of possibly computing more nodes. We observe that for models with nondeterminism the number of nodes at the highest required refinement level is lower than with the classical approach. Together with the running times, this confirms our assumption that investing effort in the selective, partial refinement of single branches pays off in terms of overall computational effort.
In conclusion, we expect that a strategy in which a coarse analysis precedes a fine-grained setting (e.g. strategy \(s_{3}\)), allowing enabled transitions to be detected quickly and the search to recover fast after the removal of a spurious counterexample, shows good results on average.
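The overall coarse-to-fine refinement scheme can be summarized in a short sketch. This is our own illustration of the idea, not the paper's implementation: the strategy is an ordered list of configurations, and each potentially spurious counterexample path is re-analyzed with the next, more precise configuration until the bad states become unreachable or the strategy is exhausted.

```python
# Illustrative sketch of strategy-driven counterexample refinement.
# `analyze` stands in for flowpipe construction along one path and is a
# parameter here, since the real computation is out of scope.
def refine(path, strategy, analyze):
    """Try configurations coarse-to-fine until the counterexample vanishes."""
    for level, config in enumerate(strategy):
        if not analyze(path, config):      # bad states no longer reachable
            return ("safe", level)          # spurious counterexample removed
    return ("potentially unsafe", len(strategy) - 1)

# Toy run: the coarse box analyses still reach the bad states, the finer
# setting proves them unreachable (mocked by a threshold on the step size).
strategy = [("box", 0.1), ("box", 0.01), ("support_function", 0.001)]
reaches_bad = lambda path, cfg: cfg[1] > 0.05   # mock analysis result
print(refine(["l0", "l1"], strategy, reaches_bad))  # ('safe', 1)
```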
5 Conclusion
We presented a reachability analysis algorithm with dynamic configuration adjustment, which refines search configurations to obtain conclusive results while exploiting as much information as possible from previous computations to keep the computational effort low. We plan to continue our work in several directions:
Incrementality. Our current implementation reuses information from previous refinement levels about the time intervals of jump enabledness. We will also implement the reuse of information about whether an invariant is definitely satisfied or definitely violated (i.e., whether the flowpipe segment for a time interval was fully contained in or fully outside the invariant set).
Additional parameters. The current implementation supports three parameters in search strategies: time-step size, state set representation, and aggregation/clustering settings. We aim to extend our search strategies with the adjustment of further parameters.
Dynamic strategy synthesis. The automatic derivation of strategies for partial path refinement, using information about a counterexample such as the Hausdorff distance between the set of bad states and the state set intersecting it, could be further investigated.
Parameter synthesis. With little modification, our approach can also be used to synthesize the coarsest parameter setting that still allows safety to be verified. This can be achieved with strategies whose parameter settings decrease in precision, stopping the analysis when a bad state is potentially reachable.
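A minimal sketch of this synthesis variant, under our own naming conventions: settings are tried from most to least precise, and the last setting that still proves safety is returned.

```python
# Hypothetical sketch: find the coarsest setting that still verifies safety.
# `is_safe` stands in for a full reachability analysis with that setting.
def coarsest_safe_setting(settings_precise_to_coarse, is_safe):
    coarsest = None
    for setting in settings_precise_to_coarse:
        if is_safe(setting):
            coarsest = setting   # still verifiable: try an even coarser one
        else:
            break                # bad states potentially reachable: stop
    return coarsest

# Toy check with time-step sizes: 0.001 and 0.01 prove safety, 0.1 does not.
print(coarsest_safe_setting([0.001, 0.01, 0.1], lambda dt: dt < 0.05))  # 0.01
```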
Partial path refinement. Partial refinement of counterexamples, for example restricted to a suffix, could improve the effectiveness of the approach (if the refinement of the suffix renders a bad state unreachable).
Conditional strategies. We defined search strategies to be ordered sequences of parameter configurations, which are used one after the other for refinements. Introducing trees of configurations with conditional branching would allow even more powerful strategies where the characteristics of the system or runtime information (like previous refinement times, state set sizes, number of sets aggregated etc.) can be used to determine which branch to take for the next refinement.
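Such a conditional strategy tree could be sketched as follows; this is our own hypothetical design, not an existing feature, with all names illustrative. Each node holds a configuration and a predicate on runtime statistics that selects the branch for the next refinement.

```python
# Hypothetical sketch of a conditional strategy tree: the next refinement
# configuration is chosen based on runtime information (e.g. the time the
# previous refinement took).
class StrategyNode:
    def __init__(self, config, children=None, select=None):
        self.config = config
        self.children = children or []          # candidate next configurations
        self.select = select or (lambda stats: 0)  # branch chooser

    def next_node(self, stats):
        if not self.children:
            return None                          # strategy exhausted
        return self.children[self.select(stats)]

# Branch on observed refinement time: if the last run was slow, take the
# cheaper of the two finer settings, otherwise the more precise one.
root = StrategyNode(
    {"dt": 0.1},
    children=[StrategyNode({"dt": 0.01}), StrategyNode({"dt": 0.05})],
    select=lambda stats: 1 if stats["last_time_s"] > 10 else 0)
print(root.next_node({"last_time_s": 2}).config)    # {'dt': 0.01}
print(root.next_node({"last_time_s": 60}).config)   # {'dt': 0.05}
```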
References
Althoff, M., Bak, S., Cattaruzza, D., Chen, X., Frehse, G., Ray, R., Schupp, S.: ARCH-COMP17 category report: continuous and hybrid systems with linear continuous dynamics. In: Proceedings of ARCH 2017, pp. 143–159 (2017)
Althoff, M., Dolan, J.M.: Online verification of automated road vehicles using reachability analysis. IEEE Trans. Robot. 30(4), 903–918 (2014)
Alur, R., Courcoubetis, C., Halbwachs, N., Henzinger, T., Ho, P.H., Nicollin, X., Olivero, A., Sifakis, J., Yovine, S.: The algorithmic analysis of hybrid systems. Theoret. Comput. Sci. 138(1), 3–34 (1995)
Ben Makhlouf, I., Kowalewski, S., Chávez Grunewald, M., Abel, D.: Safety assessment of networked vehicle platoon controllers - practical experiences with available tools. In: Proceedings of ADHS 2009 (2009)
Bogomolov, S., Donzé, A., Frehse, G., Grosu, R., Johnson, T.T., Ladan, H., Podelski, A., Wehrle, M.: Guided search for hybrid systems based on coarse-grained space abstractions. STTT 18(4), 449–467 (2016)
Bogomolov, S., Frehse, G., Giacobbe, M., Henzinger, T.A.: Counterexample-guided refinement of template polyhedra. In: Legay, A., Margaria, T. (eds.) TACAS 2017. LNCS, vol. 10205, pp. 589–606. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54577-5_34
Bouissou, O., Chapoutot, A., Mimram, S.: Computing flowpipe of nonlinear hybrid systems with numerical methods. CoRR abs/1306.2305 (2013)
Chen, X.: Reachability Analysis of Non-linear Hybrid Systems Using Taylor Models. Ph.D. thesis, RWTH Aachen University, Germany (2015)
Chen, X., Ábrahám, E., Sankaranarayanan, S.: Flow*: an analyzer for non-linear hybrid systems. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 258–263. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39799-8_18
Chen, X., Schupp, S., Makhlouf, I.B., Ábrahám, E., Frehse, G., Kowalewski, S.: A benchmark suite for hybrid systems reachability analysis. In: Havelund, K., Holzmann, G., Joshi, R. (eds.) NFM 2015. LNCS, vol. 9058, pp. 408–414. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-17524-9_29
Collins, P., Bresolin, D., Geretti, L., Villa, T.: Computing the evolution of hybrid systems using rigorous function calculus. In: Proceedings of ADHS 2012, pp. 284–290. IFAC-PapersOnLine (2012)
Duggirala, P.S., Mitra, S., Viswanathan, M., Potok, M.: C2E2: a verification tool for stateflow models. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 68–82. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46681-0_5
Eggers, A.: Direct handling of ordinary differential equations in constraint-solving-based analysis of hybrid systems. Ph.D. thesis, Universität Oldenburg, Germany (2014)
Fehnker, A., Ivančić, F.: Benchmarks for hybrid systems verification. In: Alur, R., Pappas, G.J. (eds.) HSCC 2004. LNCS, vol. 2993, pp. 326–341. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24743-2_22
Fränzle, M., Herde, C., Ratschan, S., Schubert, T., Teige, T.: Efficient solving of large nonlinear arithmetic constraint systems with complex Boolean structure. J. Satisf. Boolean Model. Comput. 1, 209–236 (2007)
Frehse, G., Kateja, R., Le Guernic, C.: Flowpipe approximation and clustering in space-time. In: Proceedings of HSCC 2013, pp. 203–212. ACM (2013)
Frehse, G., Le Guernic, C., Donzé, A., Cotton, S., Ray, R., Lebeltel, O., Ripado, R., Girard, A., Dang, T., Maler, O.: SpaceEx: scalable verification of hybrid systems. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 379–395. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_30
Hagemann, W., Möhlmann, E., Rakow, A.: Verifying a PI controller using SoapBox and Stabhyli: experiences on establishing properties for a steering controller. In: Proceedings of ARCH 2014. EPiC Series in Computer Science, vol. 34, pp. 115–125. EasyChair (2014)
HyCreate. http://stanleybak.com/projects/hycreate/hycreate.html
HyReach. https://embedded.rwth-aachen.de/doku.php?id=en:tools:hyreach
Immler, F.: Tool presentation: Isabelle/HOL for reachability analysis of continuous systems. In: Frehse, G., Althoff, M. (eds.) ARCH14-15. 1st and 2nd International Workshop on Applied veRification for Continuous and Hybrid Systems. EPiC Series in Computer Science, vol. 34, pp. 180–187. EasyChair (2015)
Kong, S., Gao, S., Chen, W., Clarke, E.: dReach: \(\delta\)-reachability analysis for hybrid systems. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 200–205. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46681-0_15
Le Guernic, C.: Reachability analysis of hybrid systems with linear continuous dynamics. Ph.D. thesis, Université Joseph Fourier - Grenoble I, France (2009)
Nellen, J., Driessen, K., Neuhäußer, M., Ábrahám, E., Wolters, B.: Two CEGAR-based approaches for the safety verification of PLC-controlled plants. Inf. Syst. Front. 18(5), 927–952 (2016)
Platzer, A., Quesel, J.D.: KeYmaera: a hybrid theorem prover for hybrid systems (system description). In: Armando, A., Baumgartner, P., Dowek, G. (eds.) IJCAR 2008. LNCS (LNAI), vol. 5195, pp. 171–178. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-71070-7_15
Ratschan, S., She, Z.: Safety verification of hybrid systems by constraint propagation based abstraction refinement. In: Morari, M., Thiele, L. (eds.) HSCC 2005. LNCS, vol. 3414, pp. 573–589. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-31954-2_37
Schupp, S., Ábrahám, E., Makhlouf, I.B., Kowalewski, S.: HyPro: a C++ library of state set representations for hybrid systems reachability analysis. In: Barrett, C., Davies, M., Kahsai, T. (eds.) NFM 2017. LNCS, vol. 10227, pp. 288–294. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-57288-8_20
Taha, W., et al.: Acumen: an open-source testbed for cyber-physical systems research. In: Mandler, B., et al. (eds.) IoT360 2015. LNICST, vol. 169, pp. 118–130. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-47063-4_11
© 2018 The Author(s)
Schupp, S., Ábrahám, E. (2018). Efficient Dynamic Error Reduction for Hybrid Systems Reachability Analysis. In: Beyer, D., Huisman, M. (eds.) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2018. Lecture Notes in Computer Science, vol. 10806. Springer, Cham. https://doi.org/10.1007/978-3-319-89963-3_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-89962-6
Online ISBN: 978-3-319-89963-3