1 Introduction

Abstraction-based controller synthesis (ABCS) is a general procedure for automatic synthesis of controllers for continuous-time nonlinear dynamical systems against temporal specifications. ABCS works by first abstracting a time-sampled version of the continuous dynamics of the open-loop system by a symbolic finite state model. Then, it computes a finite-state controller for the symbolic model using algorithms from automata-theoretic reactive synthesis. When the time-sampled system and the symbolic model satisfy a certain refinement relation, the abstract controller can be refined to a controller for the original continuous-time system while guaranteeing its time-sampled behavior satisfies the temporal specification. Since its introduction about 15 years ago, much research has gone into better theoretical understanding of the basic method and extensions [11, 18, 29, 33, 35, 39], into scalable tools [26, 27, 31, 37], and into demonstrating its applicability to nontrivial control problems [1, 5, 32, 38].

In its most common form, the abstraction of the continuous-time dynamical system is computed by fixing a parameter \(\tau \) for the sampling time and a parameter \(\eta \) for the state space, and then representing the abstract state space as a set of hypercubes, each of diameter \(\eta \). The hypercubes partition the continuous concrete state space. The abstract transition relation adds a transition between two hypercubes if there exists some state in the first hypercube and some control input that can reach some state of the second by following the original dynamics for time \(\tau \). The transition relation is nondeterministic due to (a) the possibility of having continuous transitions starting at two different points in one hypercube but ending in different hypercubes, and (b) the presence of external disturbances causing a deviation of the system trajectories from their nominal paths. When restricted to a compact region of interest, the resulting finite-state abstract system describes a two-player game between controller and disturbance, and reactive synthesis techniques are used to algorithmically compute a controller (or show that no such controller exists for the given abstraction) for members of a broad class of temporal specifications against the disturbance. One can show that the abstract transition system is in a feedback refinement relation (FRR) with the original dynamics [35]. This ensures that when the abstract controller is applied to the original system, the time-sampled behaviors satisfy the temporal specifications.

The success of ABCS depends on the choice of \(\eta \) and \(\tau \). Increasing \(\eta \) (and \(\tau \)) results in a smaller state space and symbolic model, but more nondeterminism. Thus, there is a tradeoff between computational tractability and successful synthesis. We have recently shown that one can explore this space of tradeoffs by maintaining multiple abstraction layers of varying granularity (i.e., abstract models constructed from progressively larger \(\eta \) and \(\tau \)) [25]. The multi-layered synthesis approach tries to find a controller for the coarsest abstraction whenever feasible, but adaptively considers finer abstractions when necessary. However, the bottleneck of our approach [25] is that the abstract transition system of every layer needs to be fully computed before synthesis begins. This is expensive and wasteful. The cost of abstraction grows as \(O((\frac{1}{\eta })^n)\), where n is the dimension, and much of the abstract state space may simply be irrelevant to ABCS, for example, if a controller was already found at a coarser level for a sub-region.

In [23], we applied the paradigm of lazy abstraction [21] to multi-layered synthesis for safety specifications. Lazy abstraction is a technique to systematically and efficiently explore large state spaces through abstraction and refinement, and is the basis for successful model checkers for software, hardware, and timed systems [3, 4, 22, 40]. Instead of computing all the abstract transitions for the entire system in each layer, the algorithm selectively chooses which portions to compute transitions for, avoiding doing so for portions that have already been solved by synthesis. This co-dependence of the two major computational components of ABCS is conceptually appealing and yields significant performance benefits.

This paper gives a concise presentation of the underlying principles of lazy ABCS enabling synthesis w.r.t. safety and reachability specifications. Notably, the extension from single-layered to multi-layered and lazy ABCS is somewhat nontrivial, for the following reasons.

(I) Lack of FRR Between Abstractions. An efficient multi-layered controller synthesis algorithm uses coarse grid cells almost everywhere in the state space and only resorts to finer grid cells where the trajectory needs to be precise. While this idea is conceptually simple, the implementation is challenging as the computation of such a multi-resolution controller domain via established abstraction-refinement techniques (as in, e.g., [12]), requires one to run the fixed-point algorithms of reactive synthesis over a common game graph representation connecting abstract states of different coarseness. However, to construct the latter, a simulation relation must exist between any two abstraction layers. Unfortunately, this is not the case in our setting: each layer uses a different sampling time and, while each layer is an abstraction (at a different time scale) of the original system, layers may not have any FRR between themselves. Therefore, we can only run iterations of fixed-points within a particular abstraction layer, but not for combinations of them.

We therefore introduce novel fixed-point algorithms for safety and reach-avoid specifications. Our algorithms save and re-load the results of one fixed-point iteration to and from the lowest (finest) abstraction layer, which we denote as layer 1. This enables arbitrary switching of layers between any two sequential iterations while reusing work from other layers. We use this mechanism to design efficient switching protocols which ensure that synthesis is done mostly over coarse abstractions while fine layers are only used if needed.

(II) Forward Abstraction and Backward Synthesis. One key principle of lazy abstraction is that the abstraction is computed in the direction of the search. However, in ABCS, the abstract transition relation can only be computed forward, since it involves simulating the ODE of the dynamical system forward up to the sampling time. While an ODE can also be solved backwards in time, backward computations of reachable sets using numerical methods may lead to high numerical errors [30, Remark 1]. Forward abstraction conflicts with symbolic reactive synthesis algorithms, which work backward by iterating controllable predecessor operators. For reachability specifications, we solve this problem by keeping a set of frontier states, and proving that in the backward controllable predecessor computation, all transitions that need to be considered arise out of these frontier states. Thus, we can construct the abstract transitions lazily by computing the finer abstract transitions only for the frontier.

(III) Proof of Soundness and Relative Completeness. The proof of correctness for common lazy abstraction techniques uses the property that there is a simulation relation between any two abstraction layers [10, 20]. As this property does not hold in our setting (see (I)), our proofs of soundness and completeness w.r.t. the finest layer only use (a) FRRs between each abstraction layer and the concrete system to argue about the correctness of a controller in a sub-space, and combine this with (b) an argument about the structure of ranking functions, which are obtained from the fixed-point iterations and combine the individual controllers.

Related Work. Our work is an extension and consolidation of several similar attempts at using multiple abstractions of varying granularity in the context of controller synthesis, including our own prior work [23, 25]. Similar ideas were explored in the context of linear dynamical systems [2, 15], which enabled the use of polytopic approximations. For unperturbed systems [2, 8, 16, 19, 34, 36], one can implement an efficient forward search-based synthesis technique, thus easily enabling lazy abstraction (see (II) above). For nonlinear systems satisfying a stability property, [7, 8, 16] show a multi-resolution algorithm. It is implemented in the tool CoSyMA [31]. For perturbed nonlinear systems, [13, 14] show a successive decomposition technique based on a linear approximation of given polynomial dynamics. Recently, Nilsson et al. [6, 33] presented an abstraction-refinement technique for perturbed nonlinear systems which shares with our approach the idea of using particular candidate states for local refinement. However, the approaches differ by the way abstractions are constructed. The approach in [33] identifies all adjacent cells which are reachable using a particular input, splitting these cells for finer abstraction computation. This is analogous to established abstraction-refinement techniques for solving two-player games [12]. On the other hand, our method computes reachable sets for particular sampling times that vary across layers, resulting in a more delicate abstraction-refinement loop. Recently, Hussien and Tabuada [41] developed an orthogonal scalable abstraction method which explores the input space (instead of the state space) lazily.

Fig. 1. An illustration of the lazy ABCS algorithms for safety (top) and reach-avoid (bottom) specifications. In both scenarios, the solid black regions are the unsafe states which need to be avoided. In the reach-avoid problem, the system additionally has to reach the red target square (T) at the left of Pic. 1. Both figures show the sequence of synthesis stages across three abstraction layers: \(l=1\) (Pics. 4, 7), \(l=2\) (Pics. 3, 6), and \(l=3\) (Pics. 2, 5) for safety; and \(l=1\) (Pics. 4, 5), \(l=2\) (Pics. 3, 6), and \(l=3\) (Pics. 2, 7) for reach-avoid. In both rows, Pic. 8 indicates the domains of the resulting controllers with different granularity: \(l=1\) (yellow), \(l=2\) (green), and \(l=3\) (orange). The red regions represent winning states, and the blue regions represent states determined as winning in the present synthesis stage. Cyan regions represent “potentially losing” states in the safety synthesis. We set the parameter \(m=2\) for reach-avoid synthesis. The gridded regions in different layers represent the states where the transitions have been computed; the large ungridded space in \(l=2\) and \(l=1\) signifies the computational savings of the lazy abstraction approach.

Informal Overview. We illustrate our approach via solving a safety control problem and a reach-avoid control problem depicted in Fig. 1. In reactive synthesis, these problems are solved using a maximal and minimal fixed point computation, respectively [28]. Thus, for safety, one starts with the maximal set of safe states and iteratively shrinks the latter until the remaining set, called the winning state set, does not change. That is, for all states in the winning state set, there is a control action which ensures that the system remains within this set for one step. For reachability, one starts with the set of target states as the winning ones and iteratively enlarges this set by adding all states which allow the system to surely reach the current winning state set, until no more states can be added. These differences in the underlying characteristics of the fixed points require different switching protocols when multiple abstraction layers are used.

The purpose of Fig. 1 is only to convey the basic idea of our algorithms in a visual and lucid way, without paying attention to the details of the underlying dynamics of the system. In our example, we use three layers of abstraction \(S_1\), \(S_2\) and \(S_3\) with the parameters \((\eta ,\tau )\), \((2\eta ,2\tau )\) and \((4\eta ,4\tau )\). We refer to the steps of Fig. 1 as Pic. #.

For the safety control problem (the top figure in Fig. 1), we assume that a set of unsafe states is given (the black box in the left of Pic. 1). These need to be avoided by the system. For lazy ABCS, we first fully compute the abstract transition relation of the coarsest abstraction \(S_3\), and find the states from where the unsafe states can be avoided for at least one time step of length \(4\tau \) (blue region in Pic. 2). Normally, for a single-layered algorithm, the complement of the blue states would immediately be discarded as losing states. However, in the multi-layered approach, we treat these states as potentially losing (cyan regions), and proceed to \(S_2\) (Pic. 3) to determine if some of these potentially losing states can avoid the unsafe states with the help of a more fine-grained controller.

However, we cannot perform any safety analysis on \(S_2\) yet, as the abstract transitions of \(S_2\) have not been computed. Instead of computing all of them, as in a non-lazy approach, we only locally explore the outgoing transitions of the potentially losing states in \(S_2\). Then, we compute the subset of the potentially losing states in \(S_2\) that can avoid the unsafe states for at least one time step (of length \(2\tau \) in this case). These states are represented by the blue region in Pic. 3 and are thus not discarded as losing states in this iteration. Then we move to \(S_1\) with the rest of the potentially losing states and continue similarly. The remaining potentially losing states at the end of the computation in \(S_1\) are surely losing—relative to the finest abstraction \(S_1\)—and are permanently discarded. This concludes one “round” of exploration.

We restart the process from \(S_3\). This time, the goal is to avoid reaching the unsafe states for at least two time steps of available lengths. This is effectively done by inflating the unsafe region with the discarded states from previous stages (black regions in Pics. 5, 6, and 7). The procedure stops when the combined winning regions across all layers do not change for two successive iterations.

In the end, the multi-layered safety controller is obtained as a collection of the safety controllers synthesized in different abstraction layers in the last round of fixed-point computations. The resulting safety controller domain is depicted in Pic. 8.

Now consider the reach-avoid control problem in Fig. 1 (bottom). The target set is shown in red, and the states to be avoided in black. We start by computing the abstract transition system completely for the coarsest layer and solve the reachability fixed point at this layer until convergence using under-approximations of the target and of the safe states. The winning region is marked in blue (Pic. 2); note that the approximation of the bad states “cuts off” the possibility to reach the winning states from the states on the right. We store the representation of this winning region in the finest layer as the set \(\varUpsilon _1\).

Intuitively, we run the reachability fixed point until convergence to enlarge the winning state set as much as possible using large cells. This is in contrast to the previous algorithm for safety control in which we performed just one iteration at each level. For safety, each iteration of the fixed-point shrinks the winning state set. Hence, running \(\mathsf {Safe} \) until convergence would only keep those coarse cells which form an invariant set by themselves. Running one iteration of \(\mathsf {Safe} \) at a time instead has the effect that clusters of finer cells which can be controlled to be safe by a suitable controller in the corresponding layer are considered safe in the coarser layers in future iterations. This allows the use of coarser control actions in larger parts of the state space (see Fig. 1 in [23] for an illustrative example of this phenomenon).

To further extend the winning state set \(\varUpsilon _1\) for reach-avoid control, we proceed to the next finer layer \(l=2\) with the new target region (red) being the projection of \(\varUpsilon _1\) to \(l=2\). As in safety control, all the safe states in the complement of \(\varUpsilon _l\) are potentially within the winning state set. The abstract transitions at layer \(l=2\) have not been computed at this point. We only compute the abstract transitions for the frontier states: these are all the cells that might contain layer 2 cells that can reach the current winning region within m steps (for some parameter m chosen in the implementation). The frontier is indicated for layer 2 by the small gridded part in Pic. 3.

We continue the backward reachability algorithm on this partially computed transition system by running the fixed-point for m steps. The projection of the resulting states to the finest layer is added to \(\varUpsilon _1\). In our example (Pic. 3), we reach a fixed-point just after 1 iteration implying that no more layer 2 (or layer 3) cells can be added to the winning region.

We now move to layer 1, compute a new frontier (the gridded part in Pic. 4), and run the reachability fixed point on \(\varUpsilon _1\) for m steps. We add the resulting winning states to \(\varUpsilon _1\) (the blue region in Pic. 4). At this point, we could keep exploring and synthesizing in layer 1, but in the interest of efficiency we want to give the coarser layers a chance to progress. This is the reason to only compute m steps of the reachability fixed point in any one iteration. Unfortunately, for our example, the attempt to go coarser fails as no new layer 2 cells can be added yet (see Pic. 3). We therefore fall back to layer 1 and make progress for m more steps (Pic. 5). At this point, the attempt to go coarser is successful (Pic. 6) as the right side of the small passage was reached.

We continue this movement across layers until synthesis converges in the finest layer. In Pic. 8, the orange, green and yellow colored regions are the controller domains obtained using \(l=3\), \(l=2\) and \(l=1\), respectively. Observe that we avoid computing transitions for a significant portion of layers 1 and 2 (the ungridded space in Pics. 5, 6, respectively).

2 Control Systems and Multi-layered ABCS

We recall the theory of feedback refinement relations (FRR) [35] and multi-layered ABCS [25].

Notation. We use the symbols \(\mathbb {N}\), \(\mathbb {R}\), \(\mathbb {R}_{>0}\), \(\mathbb {Z}\), and \(\mathbb {Z}_{>0}\) to denote the sets of natural numbers, reals, positive reals, integers, and positive integers, respectively. Given \(a,b\in \mathbb {R}\) with \(a\le b\), we write [a, b] for the closed interval and write \([a;b]=[a,b]\cap \mathbb {Z}\) as its discrete counterpart. Given a vector \(a\in \mathbb {R}^n\), we denote by \(a_{i}\) its i-th element, for \(i\in [1;n]\). We write \(\llbracket a,b \,\rrbracket \) for the closed hyper-interval \([a_1,b_1]\times \cdots \times [a_n,b_n]\). We define the relations \(<,\le ,\ge ,>\) on vectors in \(\mathbb {R}^n\) component-wise. For a set W, we write \(W^*\) and \(W^\omega \) for the sets of finite and infinite sequences over W, respectively. We define \(W^\infty = W^* \cup W^\omega \). We define \(\mathrm {dom}(w) = [0;|w|-1]\) if \(w\in W^*\), and \(\mathrm {dom}(w) = \mathbb {N}\) if \(w\in W^\omega \). For \(k\in \mathrm {dom}(w)\) we write w(k) for the k-th symbol of w.

2.1 Abstraction-Based Controller Synthesis

Systems. A system \(S=(X,U,F)\) consists of a state space X, an input space U, and a transition function \(F:X\times U\rightarrow 2^{X}\). A system S is finite if X and U are finite. A trajectory \(\xi \in X^\infty \) is a maximal sequence of states compatible with F: for all \(1\le k < |\xi |\) there exists \(u\in U\) s.t. \(\xi (k)\in F(\xi (k-1),u)\), and if \(|\xi | < \infty \) then \(F(\xi (|\xi |-1),u)= \emptyset \) for all \(u\in U\). For \(D\subseteq X\), a D-trajectory is a trajectory \(\xi \) with \(\xi (0)\in D\). The behavior \(\mathcal {B}(S, D)\) of a system \(S=(X,U,F)\) w.r.t. \(D\subseteq X\) consists of all D-trajectories; when \(D=X\), we simply write \(\mathcal {B}(S)\).

Controllers and Closed Loop Systems. A controller \(C=(D,U,G)\) for a system \(S=(X,U,F)\) consists of a controller domain \(D\subseteq X\), a space of inputs U, and a control map \(G:D\rightarrow 2^{U}\setminus \{\emptyset \}\) mapping states in its domain to non-empty sets of control inputs. The closed loop system formed by interconnecting S and \(C\) in feedback is defined by the system \(S^{cl}=(X,U, F^{cl})\) with \(F^{cl}:X\times U\rightarrow 2^{X}\) s.t. \(x'\in F^{cl}(x,u)\) iff \(x\in D\) and \(u\in G(x)\) and \(x'\in F(x,u)\), or \(x\notin D\) and \(x'\in F(x,u)\).

Control Problem. We consider specifications given as \(\omega \)-regular languages whose atomic predicates are interpreted as sets of states. Given a specification \(\psi \), a system S, and an interpretation of the predicates as sets of states of S, we write \(\langle \![\psi ]\!\rangle _S\subseteq \mathcal {B}(S)\) for the set of behaviors of S satisfying \(\psi \). The pair \({( S,\psi )}\) is called a control problem on S for \(\psi \). A controller \(C= (D, U, G)\) for S solves \({( S,\psi )}\) if \(\mathcal {B}(S^{cl}, D)\subseteq \langle \![\psi ]\!\rangle _S\). The set of all controllers solving \({( S,\psi )}\) is denoted by \(\mathcal {C}(S, \psi )\).

Feedback Refinement Relations. Let \(S_i=(X_i,U_i,F_i)\), \(i\in \{1,2\}\), be two systems with \(U_2\subseteq U_1\). A feedback refinement relation (FRR) from \(S_1\) to \(S_2\) is a relation \(Q\subseteq X_1\times X_2\) s.t. for all \(x_1\in X_1\) there is some \(x_2\in X_2\) such that \(Q(x_1,x_2)\) and for all \((x_1,x_2)\in Q\), we have (i) \(U_{S_2}(x_2)\subseteq U_{S_1}(x_1)\), and (ii) \(u\in U_{S_2}(x_2) \Rightarrow Q(F_1(x_1,u))\subseteq F_2(x_2,u)\) where \(U_{S_i}(x):=\{u\in U_i\mid F_i(x,u)\ne \emptyset \}\). We write \(S_1\preccurlyeq _{Q} S_2\) if Q is an FRR from \(S_1\) to \(S_2\).

Abstraction-Based Controller Synthesis (ABCS). Consider two systems \(S_1\) and \(S_2\), with \(S_1\preccurlyeq _{Q} S_2\). Let \(C=(D,U_2,G)\) be a controller for \(S_2\). Then, as shown in [35], \(C\) can be refined into a controller for \(S_1\), defined by \(C\circ Q=(\widetilde{D},U_1,\widetilde{G})\) with \(\widetilde{D}= Q^{-1}(D)\), and \(\widetilde{G}(x_1)=\{u\in U_1\mid \exists x_2\in Q(x_1)\;.\;u\in G(x_2)\}\) for all \(x_1\in \widetilde{D}\). This implies soundness of ABCS.
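For finite systems, both the FRR conditions and the refinement \(C\circ Q\) are directly computable. The following Python sketch is illustrative only; the encoding of systems as nested dictionaries and all function names are our assumptions, not part of any tool.

```python
def enabled(F, x):
    """Inputs u with F(x, u) non-empty, i.e., the enabled inputs U_S(x)."""
    return {u for u, succ in F[x].items() if succ}

def is_frr(F1, F2, Q):
    """Check whether Q (a set of pairs (x1, x2)) is an FRR from S1 to S2.

    F1, F2 encode finite systems as dicts: state -> input -> set of successor states."""
    if any(not any((x1, x2) in Q for x2 in F2) for x1 in F1):
        return False                                   # some concrete state is unrelated
    for (x1, x2) in Q:
        if not enabled(F2, x2) <= enabled(F1, x1):     # condition (i)
            return False
        for u in enabled(F2, x2):                      # condition (ii): Q(F1(x1,u)) inside F2(x2,u)
            image = {y2 for y1 in F1[x1][u] for (z1, y2) in Q if z1 == y1}
            if not image <= F2[x2][u]:
                return False
    return True

def refine(D, G, Q):
    """Refined controller C o Q: domain Q^{-1}(D), inputs of all related abstract states."""
    refined = {}
    for (x1, x2) in Q:
        if x2 in D:
            refined.setdefault(x1, set()).update(G[x2])
    return refined
```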

Proposition 1

([35], Def. VI.2, Thm. VI.3). Let \(S_1\preccurlyeq _{Q} S_2\) and \(C\in \mathcal {C}(S_2,\psi )\) for a specification \(\psi \). If for all \(\xi _1\in \mathcal {B}(S_1)\) and \(\xi _2\in \mathcal {B}(S_2)\) with \(\mathrm {dom}(\xi _1)=\mathrm {dom}(\xi _2)\) and \((\xi _1(k), \xi _2(k))\in Q\) for all \(k\in \mathrm {dom}(\xi _1)\) it holds that \(\xi _2\in \langle \![\psi ]\!\rangle _{S_2}\Rightarrow \xi _1\in \langle \![\psi ]\!\rangle _{S_1}\), then \(C\circ Q \in \mathcal {C}(S_1, \psi )\).

2.2 ABCS for Continuous Control Systems

We now recall how ABCS can be applied to continuous-time systems by delineating the abstraction procedure [35].

Continuous-Time Control Systems. A control system \(\varSigma = (X, U, W, f)\) consists of a state space \(X=\mathbb {R}^n\), a non-empty input space \(U\subseteq \mathbb {R}^m\), a compact disturbance set \(W\subset \mathbb {R}^n\) with \(0\in W\), and a function \(f:X\times U\rightarrow X\) s.t. \(f(\cdot ,u)\) is locally Lipschitz for all \(u\in U\). Given an initial state \(x_0\in X\), a parameter \(\tau >0\), and a constant input trajectory \(\mu _u:[0,\tau ]\rightarrow U\) which maps every \(t\in [0,\tau ]\) to the same \(u\in U\), a trajectory of \(\varSigma \) on \([0,\tau ]\) is an absolutely continuous function \(\xi :[0,\tau ]\rightarrow X\) s.t. \(\xi (0) = x_0\) and \(\xi (\cdot )\) fulfills the following differential inclusion for almost every \(t\in [0,\tau ]\):

$$\begin{aligned} \dot{\xi }(t)\in f(\xi (t),\mu _u(t))+W = f(\xi (t),u) + W. \end{aligned}$$
(1)

We collect all such solutions in the set \({\text {Sol}}_f(x_0,\tau ,u)\).

Time-Sampled System. Given a time sampling parameter \(\tau >0\), we define the time-sampled system \(\overrightarrow{S}_{{}}(\varSigma ,\tau )=(X,U,\overrightarrow{F}_{{}})\) associated with \(\varSigma \), where X, U are as in \(\varSigma \), and the transition function \(\overrightarrow{F}_{{}}:X\times U\rightarrow 2^{X}\) is defined as follows. For all \(x\in X\) and \(u \in U\), we have \(x'\in \overrightarrow{F}_{{}}(x,u)\) iff there exists a solution \(\xi \in {\text {Sol}}_f(x,\tau ,u)\) s.t. \(\xi (\tau )=x'\).

Covers. A cover \(\widehat{X}\) of the state space X is a set of non-empty, closed hyper-intervals \(\llbracket a,b \,\rrbracket \), called cells, such that every \(x\in X\) belongs to some cell in \(\widehat{X}\). Given a grid parameter \(\eta \in (\mathbb {R}_{>0})^n\), we say that a point \(c\in Y\) is \(\eta \)-grid-aligned if there is \(k\in \mathbb {Z}^n\) s.t. for each \(i\in \{ 1,\ldots ,n \}\), \(c_i = \alpha _i + k_i\eta _i - \frac{\eta _i}{2}\). Further, a cell \(\llbracket a,b \,\rrbracket \) is \(\eta \)-grid-aligned if there is a \(\eta \)-grid-aligned point c s.t. \(a = c - \frac{\eta }{2}\) and \(b = c + \frac{\eta }{2}\); such cells define sets of diameter \(\eta \) whose center-points are \(\eta \)-grid-aligned.
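For illustration, mapping a concrete point to its \(\eta \)-grid-aligned cell (and back to the cell's bounds) is plain integer arithmetic. The sketch below uses a 0-based cell index anchored at the lower corner \(\alpha \) of the region of interest; the indexing convention and function names are ours.

```python
import math

def cell_index(x, alpha, eta):
    """Index k of the eta-grid-aligned cell containing point x (component-wise)."""
    return tuple(math.floor((xi - ai) / ei) for xi, ai, ei in zip(x, alpha, eta))

def cell_bounds(k, alpha, eta):
    """Closed hyper-interval [a, b] of the cell with index k; its center is eta-grid-aligned."""
    a = tuple(ai + ki * ei for ai, ki, ei in zip(alpha, k, eta))
    b = tuple(ai + ei for ai, ei in zip(a, eta))
    return a, b

# Example: a 2D grid anchored at alpha = (0, 0) with eta = (0.5, 0.5).
k = cell_index((1.2, 0.7), (0.0, 0.0), (0.5, 0.5))   # -> (2, 1)
print(cell_bounds(k, (0.0, 0.0), (0.5, 0.5)))        # -> ((1.0, 0.5), (1.5, 1.0))
```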

Abstract Systems. An abstract system \(\widehat{S}_{{}}(\varSigma ,\tau ,\eta )=(\widehat{X}_{{}},\widehat{U}{},\widehat{F}_{{}})\) for a control system \(\varSigma \), a time sampling parameter \(\tau > 0\), and a grid parameter \(\eta \in (\mathbb {R}_{>0})^n\) consists of an abstract state space \(\widehat{X}_{{}}\), a finite abstract input space \(\widehat{U}\subseteq U\), and an abstract transition function \(\widehat{F}_{{}}:\widehat{X}_{{}}\times \widehat{U}{}\rightarrow 2^{\widehat{X}_{{}}}\). To ensure that \(\widehat{S}_{{}}\) is finite, we consider a compact region of interest \(Y=\llbracket \alpha , \beta \,\rrbracket \subseteq X\) with \(\alpha ,\beta \in \mathbb {R}^n\) s.t. \(\beta - \alpha \) is an integer multiple of \(\eta \). Then we define \(\widehat{X}_{{}}=\widehat{Y}\cup \widehat{X}'\) s.t. \(\widehat{Y}\) is the finite set of \(\eta \)-grid-aligned cells covering Y and \(\widehat{X}'\) is a finite set of large unbounded cells covering the (unbounded) region \(X\setminus Y\). We define \(\widehat{F}_{{}}\) based on the dynamics of \(\varSigma \) only within Y. That is, for all \(\widehat{x}\in \widehat{Y}_{{}}\), \(\widehat{x}'\in \widehat{X}_{{}}\), and \(u\in \widehat{U}{}\) we require

$$\begin{aligned} \widehat{x}'\in \widehat{F}_{{}}(\widehat{x},u) \ \text{ if } \ \exists \xi \in \cup _{x\in \widehat{x}}{\text {Sol}}_f(x,\tau ,u) \;.\; \xi (\tau ) \in \widehat{x}'. \end{aligned}$$
(2)

For all states \(\widehat{x}\in (\widehat{X}_{{}}\setminus \widehat{Y}_{{}})\) we have that \(\widehat{F}_{{}}(\widehat{x},u)=\emptyset \) for all \(u\in \widehat{U}\). We extend \(\widehat{F}_{{}}\) to sets of abstract states \(\varUpsilon \subseteq \widehat{X}_{{}}\) by defining \(\widehat{F}_{{}}(\varUpsilon ,u) := \bigcup _{\widehat{x}\in \varUpsilon } \widehat{F}_{{}}(\widehat{x},u)\).
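In tools, (2) is realized by over-approximating the reachable set \(\cup _{x\in \widehat{x}}{\text {Sol}}_f(x,\tau ,u)\) by a hyper-rectangle (e.g., via a growth bound as in [37]) and collecting every cell that intersects it. The sketch below treats that over-approximation as an assumed black box `reach_box` and only shows the cell collection; all names are illustrative.

```python
import math
from itertools import product

def cells_intersecting(lo, hi, alpha, eta, shape):
    """All cell indices of a grid with `shape` cells per dimension, anchored at alpha,
    that intersect the hyper-rectangle [lo, hi]."""
    lo_idx = [max(0, math.floor((l - a) / e)) for l, a, e in zip(lo, alpha, eta)]
    hi_idx = [min(s - 1, math.floor((h - a) / e)) for h, a, e, s in zip(hi, alpha, eta, shape)]
    if any(l > h for l, h in zip(lo_idx, hi_idx)):
        return set()                     # rectangle lies outside the region of interest
    return set(product(*[range(l, h + 1) for l, h in zip(lo_idx, hi_idx)]))

def abstract_transitions(cell_lo, cell_hi, inputs, reach_box, alpha, eta, shape):
    """Abstract transitions of one cell: input -> set of successor cell indices, cf. (2).

    reach_box(lo, hi, u) is assumed to return a hyper-rectangle over-approximating all
    solutions starting in [lo, hi] under constant input u for one sampling period."""
    return {u: cells_intersecting(*reach_box(cell_lo, cell_hi, u), alpha, eta, shape)
            for u in inputs}
```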

While \(\widehat{X}_{{}}\) is not a partition of the state space X, notice that cells only overlap at the boundary and one can define a deterministic function that resolves the resulting non-determinism by consistently mapping such boundary states to a unique cell covering it. The composition of \(\widehat{X}_{{}}\) with this function defines a partition. To avoid notational clutter, we shall simply treat \(\widehat{X}\) as a partition.

Control Problem. It was shown in [35], Thm. III.5 that the relation \(\widehat{Q}_{{}}\subseteq X\times \widehat{X}_{{}}\), defined by all tuples \((x,\widehat{x})\in \widehat{Q}_{{}}\) for which \(x\in \widehat{x}\), is an FRR between \(\overrightarrow{S}_{{}}\) and \(\widehat{S}_{{}}\), i.e., \(\overrightarrow{S}_{{}}\preccurlyeq _{\widehat{Q}_{{}}}\widehat{S}_{{}}\). Hence, we can apply ABCS as described in Sect. 2.1 by computing a controller C for \(\widehat{S}_{{}}\) which can then be refined to a controller for \(\overrightarrow{S}_{{}}\) under the pre-conditions of Proposition 1.

More concretely, we consider safety and reachability control problems for the continuous-time system \(\varSigma \), which are defined by a set of static obstacles \({\mathsf {O}}\subset X\) which should be avoided and a set of goal states \({\mathsf {G}}\subseteq X\) which should be reached, respectively. Additionally, when constructing \(\widehat{S}_{{}}\), we used a compact region of interest \(Y\subseteq X\) to ensure finiteness of \(\widehat{S}_{{}}\), allowing us to apply tools from reactive synthesis [28] to compute C. This implies that C is only valid within Y. We therefore interpret Y as a global safety requirement and synthesize a controller which keeps the system within Y while implementing the specification. This interpretation leads to a safety and a reach-avoid control problem, w.r.t. a safe set \(R=Y\setminus {\mathsf {O}}\) and target set \(T={\mathsf {G}}\cap R\). As \(R{}\) and \(T{}\) can be interpreted as predicates over the state space X of \(\overrightarrow{S}_{{}}\), this directly defines the control problems \({( \overrightarrow{S}_{{}},\psi _\mathrm {safe} )}\) and \({( \overrightarrow{S}_{{}},\psi _\mathrm {reach} )}\) via

$$\begin{aligned} \langle \![\psi _\mathrm {safe}]\!\rangle _{\overrightarrow{S}_{{}}}&=\{\xi \in \mathcal {B}(\overrightarrow{S}_{{}})\mid \forall k\in \mathrm {dom}(\xi )\;.\;\xi (k)\in R{}\},\\ \langle \![\psi _\mathrm {reach}]\!\rangle _{\overrightarrow{S}_{{}}}&=\{\xi \in \mathcal {B}(\overrightarrow{S}_{{}})\mid \exists k\in \mathrm {dom}(\xi )\;.\;\xi (k)\in T{}\wedge \forall k'\le k\;.\;\xi (k')\in R{}\}, \end{aligned}$$
(3)

for safety and reach-avoid control, respectively. Intuitively, a controller \(C\in \mathcal {C}(\overrightarrow{S}_{{}},\psi )\) applied to \(\varSigma \) is a sample-and-hold controller, which ensures that the specification holds on all closed-loop trajectories at sampling instances.

To compute \(C\in \mathcal {C}(\overrightarrow{S}_{{}},\psi )\) via ABCS as described in Sect. 2.1 we need to ensure that the pre-conditions of Proposition 1 hold. This is achieved by under-approximating the safe and target sets by abstract state sets

$$\begin{aligned} \widehat{R}_{}=\{\widehat{x}\in \widehat{X}_{{}}\mid \widehat{x}\subseteq R{}\},~\text {and}~ \widehat{T}_{}=\{\widehat{x}\in \widehat{X}_{{}}\mid \widehat{x}\subseteq T{}\}, \end{aligned}$$
(4)

and defining \(\langle \![\psi _\mathrm {safe}]\!\rangle _{\widehat{S}_{{}}}\) and \(\langle \![\psi _\mathrm {reach}]\!\rangle _{\widehat{S}_{{}}}\) via (3) by substituting \(\overrightarrow{S}_{{}}\) with \(\widehat{S}_{{}}\), R with \(\widehat{R}_{}\) and T with \(\widehat{T}_{}\). With this, it immediately follows from Proposition 1 that \(C\in \mathcal {C}{( \widehat{S}_{{}},\psi )}\) can be refined to the controller \(C\circ Q\in \mathcal {C}{{( \overrightarrow{S}_{{}},\psi )}}\).

2.3 Multi-layered ABCS

We now recall how ABCS can be performed over multiple abstraction layers [25]. The goal of multi-layered ABCS is to construct an abstract controller C which uses coarse grid cells in as large a part of the state space as possible, and only resorts to finer grid cells where the control action needs to be precise. In particular, the domain of this abstract controller C must not be smaller than the domain of any controller \(C'\) constructed for the finest layer, i.e., C must be relatively complete w.r.t. the finest layer. In addition, C should be refinable into a controller implementing \(\psi \) on \(\varSigma \), as in classical ABCS (see Proposition 1).

The computation of such a multi-resolution controller via established abstraction-refinement techniques (as in, e.g., [12]) requires a common transition system connecting states of different coarseness but with the same time step. To construct the latter, an FRR between any two abstraction layers must exist, which is not the case in our setting. We therefore cannot compute a single multi-resolution controller C. Instead, we synthesize a set \(\mathbf {C}\) of single-layered controllers, each for a different coarseness and with a different domain, and refine each of those controllers separately, using the associated FRR. The resulting refined controller is a sample-and-hold controller which selects the current input value \(u\in \widehat{U}\subseteq U\) and the duration \(\tau _l\) for which this input should be applied to \(\varSigma \). This construction is formalized in the remainder of this section.

Multi-layered Systems. Given a grid parameter \(\underline{\eta }\), a time sampling parameter \(\underline{\tau }\), and \(L \in \mathbb {Z}_{>0}\), define \({\eta _{l}} = 2^{l-1}\underline{\eta }\) and \({\tau _{l}} = 2^{l-1}\underline{\tau }\) for all \({l\in [1;L]} \). For a control system \(\varSigma \) and a subset \(Y\subseteq X\) with \(Y=\llbracket \alpha ,\beta \,\rrbracket \) s.t. \(\beta -\alpha =k{\eta _{L}}\) for some \(k\in \mathbb {Z}^n\), \(\widehat{Y}_l\) is the \({\eta _{l}}\)-grid-aligned cover of Y. This induces the sequences of time-sampled and abstract systems

$$\begin{aligned} \overrightarrow{\mathbf {S}}:=\{ \overrightarrow{S}_{{l}}(\varSigma ,{\tau _{l}}) \}_{l\in [1;L]} \quad \text {and}\quad \widehat{\mathbf {S}}:=\{ \widehat{S}_{{l}}(\varSigma ,{\tau _{l}},{\eta _{l}}) \}_{l\in [1;L]}, \end{aligned}$$
(5)

respectively, where \(\overrightarrow{S}_{{l}}:=(X,U,\overrightarrow{F}_{{l}})\) and \(\widehat{S}_{{l}}:=(\widehat{X}_{{l}},\widehat{U}{},\widehat{F}_{{l}})\). If \(\varSigma \), \(\tau \), and \(\eta \) are clear from the context, we omit them in \(\overrightarrow{S}_{{l}}\) and \(\widehat{S}_{{l}}\).

Our multi-layered synthesis algorithm relies on the fact that the sequence \(\widehat{\mathbf {S}}{}\) of abstract transition systems is monotone, formalized by the following assumption.

Assumption 1

Let \(\widehat{S}_{{l}}\) and \(\widehat{S}_{{m}}\) be two abstract systems with \(m,{l\in [1;L]} \), \(l<m\). Then \(\widehat{Q}_{{m}}(\widehat{Q}^{-1}_{{l}}(\widehat{F}_{{l}}(\widehat{x},u)))\subseteq \widehat{F}_{{m}}(\widehat{x}',u)\) for all \(u\in \widehat{U}\) if \(\widehat{x}\in \widehat{X}_{{l}}\) and \(\widehat{x}'\in \widehat{X}_{{m}}\) with \(\widehat{x}\subseteq \widehat{x}'\).

As the exact computation of \(\cup _{x\in \widehat{x}}{\text {Sol}}_f(x,\tau ,u)\) in (2) is expensive (if not impossible), a numerical over-approximation is usually computed. Assumption 1 states that this approximation must be monotone in the granularity of the discretization. This is fulfilled by many numerical methods, e.g., those based on decomposition functions for mixed-monotone systems [11] or on growth bounds [37]; our implementation uses the latter.

Induced Relations. It trivially follows from our construction that, for all \({l\in [1;L]} \), we have \(\overrightarrow{S}_{{l}}\preccurlyeq _{\widehat{Q}_{{l}}}\widehat{S}_{{l}}\), where \(\widehat{Q}_{{l}} \subseteq X\times \widehat{X}_{{l}}\) is the FRR induced by \(\widehat{X}_{{l}}\). The set of relations \(\{\widehat{Q}_{{l}}\}_{l\in [1;L]} \) induces transformers \(\widehat{R}_{ll'}\subseteq \widehat{X}_{{l}}\times \widehat{X}_{{l'}}\) for \(l,{l'\in [1;L]} \) between abstract states of different layers such that

$$\begin{aligned} {\widehat{x}\in \widehat{R}_{ll'}(\widehat{x}')}\Leftrightarrow {\widehat{x}\in \widehat{Q}_{{l}}(\widehat{Q}^{-1}_{{l'}}(\widehat{x}')).} \end{aligned}$$
(6)

However, the relation \(\widehat{R}_{ll'}\) is generally not an FRR between the layers due to different time sampling parameters used in different layers (see [25], Rem. 1). This means that \(\widehat{S}_{{l+1}}\) cannot be directly constructed from \(\widehat{S}_{{l}}\), unlike in usual abstraction refinement algorithms [10, 12, 21].
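With the dyadic layering \({\eta _{l}} = 2^{l-1}\underline{\eta }\), the transformer \(\widehat{R}_{ll'}\) of (6) reduces to integer arithmetic on cell indices. A minimal, index-based sketch (illustrative names and encoding):

```python
from itertools import product

def to_coarser(idx, l, l_target):
    """R_hat_{l_target, l} for l_target >= l: the unique coarser cell covering `idx`."""
    f = 2 ** (l_target - l)
    return {tuple(i // f for i in idx)}

def to_finer(idx, l, l_target):
    """R_hat_{l_target, l} for l_target <= l: all finer cells covered by `idx`."""
    f = 2 ** (l - l_target)
    return set(product(*[range(i * f, (i + 1) * f) for i in idx]))

def transformer(idx, l, l_target):
    """Cells of layer l_target related to the layer-l cell `idx`, cf. Eq. (6)."""
    return to_coarser(idx, l, l_target) if l_target >= l else to_finer(idx, l, l_target)
```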

Multi-layered Controllers. Given a multi-layered abstract system \(\widehat{\mathbf {S}}\) and some \(P\in \mathbb {N}\), a multi-layered controller is a set \(\mathbf {C}=\{ C^{p} \}_{{p\in [1;P]}}\) of single-layer controllers \(C^{p}=(D^{p}, \widehat{U},G^{p})\). Then \(\mathbf {C}\) is a controller for \(\widehat{\mathbf {S}}\) if for all \({p\in [1;P]} \) there exists a unique \(l_p \in [1;L]\) s.t. \(C^{p}\) is a controller for \(\widehat{S}_{{l_p}}\), i.e., \(D^{p}\subseteq \widehat{X}_{{l_p}}\). The number P need not be related to L; we allow multiple controllers for the same layer and no controller for some layers.

The quantizer induced by \(\mathbf {C}\) is a map \(\mathbf {Q}:X\rightarrow 2^{\widehat{\mathbf{X }}}\) with \(\widehat{\mathbf{X }}=\bigcup _{l\in [1;L]} \widehat{X}_{{l}}\) s.t. for all \(x\in X\) it holds that \(\widehat{x}\in \mathbf {Q}(x)\) iff there exists \(p\in [1;P]\) s.t. \(\widehat{x}\in \widehat{Q}_{{l_p}}(x)\cap D^{p}\) and no \(p'\in [1;P]\) s.t. \(l_{p'}>l_p\) and \(\widehat{Q}_{{l_{p'}}}(x)\cap D^{p'}\ne \emptyset \). In words, \(\mathbf {Q}\) maps states \(x\in X\) to the coarsest abstract state \(\widehat{x}\) that is both related to x and is in the domain \(D^{p}\) of some \(C^{p}\in \mathbf {C}\). We define \(\mathbf {D}=\{\widehat{x}\in \widehat{\mathbf{X }}\mid \exists x\in X\;.\;\widehat{x}\in \mathbf {Q}(x)\}\) as the effective domain of \(\mathbf {C}\) and \(D=\{x\in X\mid \mathbf {Q}(x)\ne \emptyset \}\) as its projection to X.
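The quantizer can be realized by relating x to its cell in each layer, coarsest first, and returning the first cell that lies in a controller domain. A sketch assuming, for simplicity, one controller domain per layer; names and encoding are ours:

```python
import math

def layer_cell(x, alpha, eta1, l):
    """Index of the layer-l cell containing x, with eta_l = 2**(l-1) * eta1."""
    return tuple(math.floor((xi - ai) / (e * 2 ** (l - 1)))
                 for xi, ai, e in zip(x, alpha, eta1))

def quantize(x, domains, alpha, eta1):
    """Coarsest abstract cell related to x that lies in a controller domain.

    domains: dict l -> set of layer-l cell indices (one controller domain per layer)."""
    for l in sorted(domains, reverse=True):          # try the coarsest layer first
        cell = layer_cell(x, alpha, eta1, l)
        if cell in domains[l]:
            return l, cell
    return None                                      # x is outside the effective domain
```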

Multi-layered Closed Loops. The abstract multi-layered closed loop system formed by interconnecting \(\widehat{\mathbf {S}}\) and \(\mathbf {C}\) in feedback is defined by the system \(\widehat{\mathbf {S}}^{cl}=(\widehat{\mathbf{X }}, \widehat{U}{}, \widehat{\mathbf {F}}^{cl})\) with \(\widehat{\mathbf {F}}^{cl}:\widehat{\mathbf{X }}\times \widehat{U}{}\rightarrow 2^{\widehat{\mathbf{X }}}\) s.t. \(\widehat{x}'\in \widehat{\mathbf {F}}^{cl}(\widehat{x},\widehat{u})\) iff (i) there exists \(p\in [1;P]\) s.t. \(\widehat{x}\in \mathbf {D}\cap D^{p}\), \(\widehat{u}\in G^{p}(\widehat{x})\) and there exists \(\widehat{x}''\in \widehat{F}_{{l_p}}(\widehat{x},\widehat{u})\) s.t. either \(\widehat{x}'\in \mathbf {Q}(\widehat{Q}^{-1}_{{l_p}}(\widehat{x}''))\), or \(\widehat{x}'=\widehat{x}''\) and \(\widehat{Q}^{-1}_{{l_p}}(\widehat{x}'')\not \subseteq D\), or (ii) \(\widehat{x}\in \widehat{X}_{{l}}\), \(\widehat{Q}^{-1}_{{l}}(\widehat{x})\not \subseteq D\) and \(\widehat{x}'\in \widehat{F}_{{l}}(\widehat{x},\widehat{u})\). This results in the time-sampled closed loop system \(\overrightarrow{\mathbf {S}}^{cl}=(X,U, \overrightarrow{\mathbf {F}}^{cl})\) with \(\overrightarrow{\mathbf {F}}^{cl}:X\times U\rightarrow 2^{X}\) s.t. \(x'\in \overrightarrow{\mathbf {F}}^{cl}(x,u)\) iff (i) \(x\in D\) and there exists \(p\in [1;P]\) and \(\widehat{x}\in \mathbf {Q}(x)\cap D^{p}\) s.t. \(u\in G^{p}(\widehat{x})\) and \(x'\in \overrightarrow{F}_{{l_p}}(x,u)\), or (ii) \(x\notin D\) and \(x'\in \overrightarrow{F}_{{l}}(x,u)\) for some \({l\in [1;L]} \).

Multi-layered Behaviors. Slightly abusing notation, we define the behaviors \(\mathcal {B}(\overrightarrow{\mathbf {S}})\) and \(\mathcal {B}(\widehat{\mathbf {S}})\) via the construction for systems S in Sect. 2.1 by interpreting the sequences \(\overrightarrow{\mathbf {S}}\) and \(\widehat{\mathbf {S}}\) as systems \(\widehat{\mathbf {S}}=(\widehat{\mathbf{X }},\widehat{U},\widehat{\mathbf {F}})\) and \(\overrightarrow{\mathbf {S}}=(X,U,\overrightarrow{\mathbf {F}})\), s.t. 

$$\begin{aligned} \textstyle \widehat{\mathbf {F}}(\widehat{x},\widehat{u})=\bigcup _{{l\in [1;L]}}\widehat{R}_{ll'}(\widehat{F}_{{l'}}(\widehat{x},\widehat{u})),~\text {and}\quad \textstyle \overrightarrow{\mathbf {F}}(x,u)=\bigcup _{{l\in [1;L]}}\overrightarrow{F}_{{l}}(x,u), \end{aligned}$$
(7)

where \(\widehat{x}\) is in \(\widehat{X}_{{l'}}\). Intuitively, the resulting behavior \(\mathcal {B}(\widehat{\mathbf {S}})\) contains trajectories with non-uniform state size; in every time step the system can switch to a different layer using the available transition functions \(\widehat{F}_{{l}}\). For \(\mathcal {B}(\overrightarrow{\mathbf {S}})\) this results in trajectories with non-uniform sampling time; in every time step a transition of any duration \({\tau _{l}}\) can be chosen, which corresponds to some \(\overrightarrow{F}_{{l}}\). For the closed loops \(\overrightarrow{\mathbf {S}}^{cl}\) and \(\widehat{\mathbf {S}}^{cl}\) those behaviors are restricted to follow the switching pattern induced by \(\mathbf {C}\), i.e., always apply the input chosen by the coarsest available controller. The resulting behaviors \(\mathcal {B}(\overrightarrow{\mathbf {S}}^{cl})\) and \(\mathcal {B}(\widehat{\mathbf {S}}^{cl})\) are formally defined as in Sect. 2.1 via \(\overrightarrow{\mathbf {S}}^{cl}\) and \(\widehat{\mathbf {S}}^{cl}\).

Soundness of Multi-layered ABCS. As shown in [25], the soundness property of ABCS stated in Proposition 1 transfers to the multi-layered setting.

Proposition 2

([25], Cor. 1). Let \(\mathbf {C}\) be a multi-layered controller for the abstract multi-layered system \(\widehat{\mathbf {S}}\) with effective domains \(\mathbf {D}\subseteq \widehat{\mathbf{X }}\) and \(D\subseteq X\) inducing the closed loop systems \(\overrightarrow{\mathbf {S}}^{cl}\) and \(\widehat{\mathbf {S}}^{cl}\), respectively. Further, let \(\mathbf {C}\in \mathcal {C}(\widehat{\mathbf {S}}, \psi )\) for a specification \(\psi \) with associated behaviors \(\langle \![\psi ]\!\rangle _{\widehat{\mathbf {S}}}\subseteq \mathcal {B}(\widehat{\mathbf {S}})\) and \(\langle \![\psi ]\!\rangle _{\overrightarrow{\mathbf {S}}}\subseteq \mathcal {B}(\overrightarrow{\mathbf {S}})\). Suppose that for all \(\xi \in \mathcal {B}(\overrightarrow{\mathbf {S}})\) and \(\widehat{\xi }\in \mathcal {B}(\widehat{\mathbf {S}})\) with (i) \(\mathrm {dom}(\xi )=\mathrm {dom}(\widehat{\xi })\) and (ii) \((\xi (k),\widehat{\xi }(k))\in \mathbf {Q}\) for all \(k\in \mathrm {dom}(\xi )\), it holds that (iii) \(\widehat{\xi }\in \langle \![\psi ]\!\rangle _{\widehat{\mathbf {S}}}\Rightarrow \xi \in \langle \![\psi ]\!\rangle _{\overrightarrow{\mathbf {S}}}\). Then \(\mathcal {B}(\overrightarrow{\mathbf {S}}^{cl},\mathbf {D})\subseteq \langle \![\psi ]\!\rangle _{\overrightarrow{\mathbf {S}}}\), i.e., the time-sampled multi-layered closed loop \(\overrightarrow{\mathbf {S}}^{cl}\) fulfills specification \(\psi \).

Control Problem. Consider the safety and reach-avoid control problems defined over \(\varSigma \) in Sect. 2.2. As \(R{}\) and \(T{}\) can be interpreted as predicates over the state space X of \(\overrightarrow{\mathbf {S}}{}\), this directly defines the control problems \({( \overrightarrow{\mathbf {S}}{},\psi _\mathrm {safe} )}\) and \({( \overrightarrow{\mathbf {S}}{},\psi _\mathrm {reach} )}\) via (3) by substituting \(\overrightarrow{S}_{{}}\) with \(\overrightarrow{\mathbf {S}}{}\).

To solve \({( \overrightarrow{\mathbf {S}}{},\psi )}\) via multi-layered ABCS we need to ensure that the pre-conditions of Proposition 2 hold. This is achieved by under-approximating the safe and target sets by a set \(\{ \widehat{R}_{l} \}_{l\in [1;L]} \) and \(\{ \widehat{T}_{l} \}_{l\in [1;L]} \) defined via (4) for every \({l\in [1;L]} \). Then \(\langle \![\psi _\mathrm {safe}]\!\rangle _{\widehat{\mathbf {S}}{}}\) and \(\langle \![\psi _\mathrm {reach}]\!\rangle _{\widehat{\mathbf {S}}{}}\) can be defined via (3) by substituting \(\overrightarrow{S}_{{}}\) with \(\widehat{\mathbf {S}}{}\), R with \(\widehat{R}_{\lambda (\xi (k))}\) and T with \(\widehat{T}_{\lambda (\xi (k))}\), where \(\lambda (\widehat{x})\) returns the \({l\in [1;L]} \) to which \(\widehat{x}\) belongs, i.e., for \(\widehat{x}\in \widehat{X}_{{l}}\) we have \(\lambda (\widehat{x})=l\). We collect all multi-layered controllers \(\mathbf {C}\) for which \(\mathcal {B}(\widehat{\mathbf {S}}^{cl},\mathbf {D})\subseteq \langle \![\psi ]\!\rangle _{\widehat{\mathbf {S}}}\) in \(\mathcal {C}{( \widehat{\mathbf {S}}{},\psi )}\). With this, it immediately follows from Proposition 2 that \(\mathbf {C}\in \mathcal {C}{( \widehat{\mathbf {S}}{},\psi )}\) also solves \({( \overrightarrow{\mathbf {S}}{},\psi )}\) via the construction of the time-sampled closed loop system \(\overrightarrow{\mathbf {S}}^{cl}\).

A multi-layered controller \(\mathbf {C}\in \mathcal {C}(\widehat{\mathbf {S}}{},\psi )\) is typically not unique; there can be many different control strategies implementing the same specification. However, the largest possible controller domain for a particular abstraction layer l always exists and is unique. In this paper we will compute a sound controller \(\mathbf {C}\in \mathcal {C}(\widehat{\mathbf {S}}{},\psi )\) with a maximal domain w.r.t. the lowest layer \(l=1\). Formally, for any sound layer 1 controller \(\widetilde{C}={( \widetilde{D},U,\widetilde{G} )}\in \mathcal {C}(\widehat{S}_{{1}},\psi )\) it must hold that \(\widetilde{D}\) is contained in the projection \(D_1=\widehat{Q}_{{1}}(D)\) of the effective domain of \(\mathbf {C}\) to layer 1. We call such controllers \(\mathbf {C}\) complete w.r.t. layer 1. On top of that, for faster computation we ensure that cells within its controller domain are only refined if needed.

3 Controller Synthesis

Our synthesis of an abstract multi-layered controller \(\mathbf {C}\in \mathcal {C}(\widehat{\mathbf {S}}{},\psi )\) has three main ingredients. First, we use the usual fixed-point algorithms from reactive synthesis [28] to compute the maximal set of winning states (i.e., states which can be controlled to fulfill the specification) and deduce an abstract controller (Sect. 3.1). Second, we allow switching between abstraction layers during these fixed-point computations by saving and re-loading intermediate results of fixed-point computations from and to the lowest layer (Sect. 3.2). Third, through the use of frontiers, we compute abstractions lazily by only computing abstract transitions in parts of the state space currently explored by the fixed-point algorithm (Sect. 3.3). We prove that frontiers always over-approximate the set of states possibly added to the winning region in the corresponding synthesis step.

3.1 Fixed-Point Algorithms for Single-Layered ABCS

We first recall the standard algorithms to construct a controller C solving the safety and reach-avoid control problems \({( \widehat{S}_{{}},\psi _\mathrm {safe} )}\) and \({( \widehat{S}_{{}},\psi _\mathrm {reach} )}\) over the finite abstract system \(\widehat{S}_{{}}(\varSigma , \tau ,\eta ) = (\widehat{X}_{{}},\widehat{U}{},\widehat{F}_{{}})\). The key to this synthesis is the controllable predecessor operator \(\mathrm {CPre}_{\widehat{S}_{{}}}:2^{\widehat{X}_{{}}}\rightarrow 2^{\widehat{X}_{{}}}\), defined for a set \(\varUpsilon \subseteq \widehat{X}_{{}}\) by

$$\begin{aligned} \mathrm {CPre}_{\widehat{S}_{{}}}(\varUpsilon ) := \{ \widehat{x}\in \widehat{X}_{{}} \mid \exists \widehat{u}\in \widehat{U}{}\;.\;\widehat{F}_{{}}(\widehat{x},\widehat{u}) \subseteq \varUpsilon \}. \end{aligned}$$
(8)

\({( \widehat{S}_{{}},\psi _\mathrm {safe} )}\) and \({( \widehat{S}_{{}},\psi _\mathrm {reach} )}\) are solved by iterating this operator.
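On a finite abstract system stored as a successor map, (8) is a one-line set computation; a sketch in an illustrative encoding (we additionally exclude blocked inputs with empty successor sets, which the cells outside Y have for every input):

```python
def cpre(F, upsilon):
    """Controllable predecessor (8): cells with some input whose non-empty set of
    successors lies entirely inside `upsilon`.

    F: dict mapping cell -> dict mapping input -> set of successor cells."""
    return {x for x, by_u in F.items()
            if any(succ and succ <= upsilon for succ in by_u.values())}
```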

Safety Control. Given a safety control problem \({( \widehat{S}_{{}},\psi _\mathrm {safe} )}\) associated with \(\widehat{R}_{}\subseteq \widehat{Y}_{{}}\), one iteratively computes the sets

$$\begin{aligned} W^0 = \widehat{R}_{} \text { and } W^{i+1} = \mathrm {CPre}_{\widehat{S}_{{}}}(W^i) \cap \widehat{R}_{} \end{aligned}$$
(9)

until an iteration \(N\in \mathbb {N}\) with \(W^N = W^{N+1}\) is reached. From this algorithm, we can extract a safety controller \(C = (D,\widehat{U}{},G)\) where \(D=W^N\) and

$$\begin{aligned} {\widehat{u}\in G^{}(\widehat{x})}\Rightarrow {\widehat{F}_{{}}(\widehat{x},\widehat{u})\subseteq D} \end{aligned}$$
(10)

for all \(\widehat{x}\in D\). Note that \(C\in \mathcal {C}(\widehat{S}_{{}},\psi _\mathrm {safe})\).

We denote by \(\mathsf {Safe} _\infty (\widehat{R}_{},\widehat{S}_{{}})\) the procedure implementing this iterative computation until convergence. We also use a version of \(\mathsf {Safe} \) which runs only one step of (9). Formally, the algorithm \(\mathsf {Safe} (\widehat{R}_{},\widehat{S}_{{}})\) returns the set \(W^1\) (the result of the first iteration of (9)). One can obtain \(\mathsf {Safe} _\infty (\widehat{R}_{},\widehat{S}_{{}})\) by chaining \(\mathsf {Safe} \) until convergence, i.e., given \(W^1\) computed by \(\mathsf {Safe} (\widehat{R}_{},\widehat{S}_{{}})\), one obtains \(W^{2}\) from \(\mathsf {Safe} (W^1,\widehat{S}_{{}})\), and so on. In Sect. 3.2, we will use such chaining to switch layers after every iteration within our multi-resolution safety fixed-point.
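Iterating (9) and extracting a controller via (10) then looks as follows; a sketch in the same illustrative encoding, where `F` is the abstract successor map:

```python
def safe_step(F, safe, w):
    """One step of (9): CPre(W^i) intersected with R_hat, for W^i = w and R_hat = safe."""
    return {x for x in safe
            if any(succ and succ <= w for succ in F[x].values())}

def safe_fixpoint(F, safe):
    """Safe_inf: chain safe_step until convergence and return the winning set W^N."""
    w = set(safe)
    while True:
        w_next = safe_step(F, safe, w)
        if w_next == w:
            return w
        w = w_next

def safe_controller(F, w):
    """Controller map (10): for each winning cell, the inputs keeping all successors in W^N."""
    return {x: {u for u, succ in F[x].items() if succ and succ <= w} for x in w}
```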

Reach-Avoid Control. Given a reach-avoid control problem \({( \widehat{S}_{{}},\psi _\mathrm {reach} )}\) for \(\widehat{R}_{},\widehat{T}_{}\subseteq \widehat{Y}_{{}}\), one iteratively computes the sets

$$\begin{aligned} W^0 = \widehat{T}_{} \text { and } W^{i+1} = \left( \mathrm {CPre}_{\widehat{S}_{{}}}(W^i)\cap \widehat{R}_{}\right) \cup \widehat{T}_{} \end{aligned}$$
(11)

until some iteration \(N\in \mathbb {N}\) is reached where \(W^N = W^{N+1}\). We extract the reachability controller \(C = (D,\widehat{U}{},G)\) with \(D= W^N\) and

$$\begin{aligned} G(\widehat{x}) = {\left\{ \begin{array}{ll} \{\widehat{u}\in \widehat{U}{}\mid \widehat{F}_{{}}(\widehat{x},\widehat{u}) \subseteq W^{i*}\}, &{} \widehat{x}\in D\setminus \widehat{T}_{}\\ \widehat{U}, &{}\text {else,} \end{array}\right. } \end{aligned}$$
(12)

where \(i^* = \min (\{i \mid \widehat{x}\in W^i\setminus \widehat{T}_{}\})- 1\).

Note that the safety part of the specification is taken care of by intersecting the controllable predecessors with \(\widehat{R}_{}\) in (11). So, intuitively, the fixed-point in (11) iteratively enlarges the target state set while always remaining within the safety constraint. We define the procedure implementing the iterative computation of (11) until convergence by \(\mathsf {Reach} _\infty (\widehat{T}_{},\widehat{R}_{},\widehat{S}_{{}})\). We will also use a version \(\mathsf {Reach} _m\) which runs m steps of (11) for a parameter \(m\in \mathbb {Z}_{>0}\). Here, we can again obtain \(\mathsf {Reach} _\infty (\widehat{T}_{},\widehat{R}_{},\widehat{S}_{{}})\) by chaining \(\mathsf {Reach} _m\) computations, i.e., given \(W^m\) computed by \(\mathsf {Reach} _m(\widehat{T}_{},\widehat{R}_{},\widehat{S}_{{}})\), one obtains \(W^{2m}\) from \(\mathsf {Reach} _m(W^m,\widehat{R}_{},\widehat{S}_{{}})\), if no fixed-point is reached beforehand.
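The m-step variant of (11) and the ranking-based controller extraction (12) can be sketched in the same style; the recorded rank plays the role of i* (illustrative only, and simplified in that ranks are local to one call when the computation is chained):

```python
def reach(F, target, safe, m=None):
    """Iterate (11) starting from W^0 = target, for at most m steps (m=None: to convergence).

    Returns (W, rank) where rank[x] is the iteration at which x entered W (0 on the target)."""
    w = set(target)
    rank = {x: 0 for x in w}
    i = 0
    while m is None or i < m:
        i += 1
        new = {x for x in safe - w
               if any(succ and succ <= w for succ in F[x].values())}
        if not new:
            break                                  # fixed point reached
        for x in new:
            rank[x] = i
        w |= new
    return w, rank

def reach_controller(F, w, rank, target):
    """Controller (12): outside the target, choose inputs whose successors all have
    strictly smaller rank, i.e., lie in W^{i*}."""
    ctrl = {}
    for x in w:
        if x in target:
            ctrl[x] = set(F[x])                    # any input on the target
        else:
            lower = {y for y in w if rank[y] < rank[x]}
            ctrl[x] = {u for u, succ in F[x].items() if succ and succ <= lower}
    return ctrl
```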

3.2 Multi-resolution Fixed-Points for Multi-layered ABCS

Next, we present a controller synthesis algorithm which computes a multi-layered abstract controller \(\mathbf {C}\) solving the safety and reach-avoid control problems \({( \widehat{\mathbf {S}}{},\psi _\mathrm {safe} )}\) and \({( \widehat{\mathbf {S}}{},\psi _\mathrm {reach} )}\) over a sequence of L abstract systems \(\widehat{\mathbf {S}}:=\{\widehat{S}_{{l}}\}_{l\in [1;L]} \). Here, synthesis will perform the iterative computations \(\mathsf {Safe} \) and \(\mathsf {Reach} \) from Sect. 3.1 at each layer, but also switch between abstraction layers during this computation. To avoid notational clutter, we write \(\mathsf {Safe} (\cdot {}, l)\), \(\mathsf {Reach} _{\cdot }(\cdot , \cdot , l)\) to refer to \(\mathsf {Safe} (\cdot , \widehat{S}_{{l}})\), \(\mathsf {Reach} _\cdot (\cdot ,\cdot , \widehat{S}_{{l}})\) within this procedure.

The core idea that enables switching between layers during successive steps of the fixed-point iterations is the saving and re-loading of the computed winning states to and from the lowest layer \(l=1\) (indicated in green in the subsequently discussed algorithms). This projection is formalized by the operator

$$\begin{aligned} \varGamma ^\downarrow _{ll'}(\varUpsilon _{l'}) = {\left\{ \begin{array}{ll} \widehat{R}_{ll'}(\varUpsilon _{l'}), &{} l \le l' \\ \{\hat{x} \in \widehat{X}_{{l}}\mid \widehat{R}_{l'l}(\hat{x})\subseteq \varUpsilon _{l'}\}, &{} l > l' \end{array}\right. } \end{aligned}$$
(13)

where \(l,l'\in [1;L]\) and \(\varUpsilon _{l'}\subseteq \widehat{X}_{{l'}}\). The operation \(\varGamma ^\downarrow _{ll'}(\varUpsilon _{l'}) \subseteq \widehat{X}_{{l}}\) under-approximates a set \(\varUpsilon _{l'} \subseteq \widehat{X}_{{l'}}\) with one in layer l.
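With the dyadic cell indexing used above, (13) again becomes integer arithmetic; note the asymmetry: projecting to a finer layer takes all covered cells, while projecting to a coarser layer keeps a cell only if it is entirely covered (an under-approximation). A sketch:

```python
from itertools import product

def _covered(coarse, factor):
    """All finer cell indices covered by a coarse cell index, for a refinement `factor`."""
    return product(*[range(i * factor, (i + 1) * factor) for i in coarse])

def gamma_down(cells, l_src, l_tgt):
    """Gamma-down_{l_tgt, l_src} of (13) on cell indices of the dyadic layering.

    l_tgt <= l_src: every finer cell covered by some cell in `cells`.
    l_tgt >  l_src: only coarser cells all of whose finer cells are in `cells`."""
    if l_tgt <= l_src:
        f = 2 ** (l_src - l_tgt)
        return {fine for c in cells for fine in _covered(c, f)}
    f = 2 ** (l_tgt - l_src)
    candidates = {tuple(i // f for i in c) for c in cells}
    return {c for c in candidates if all(fine in cells for fine in _covered(c, f))}
```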

In this section, we shall assume that each \(\widehat{F}_{{l}}\) is pre-computed for all states within \(\widehat{R}_{l}\) in every \({l\in [1;L]}\). In Sect. 3.3, we shall compute \(\widehat{F}_{{l}}\) lazily.

Safety Control. We consider the computation of a multi-layered safety controller \(\mathbf {C}\in \mathcal {C}(\widehat{\mathbf {S}}{},\psi _\mathrm {safe})\) by the iterative function \(\mathsf {SafeIteration} \) in Alg. 1, assuming that \(\widehat{\mathbf {S}}\) is pre-computed. We refer to this scenario by the wrapper function \(\mathsf {EagerSafe} {}(\widehat{R}_{1},L)\), which calls the iterative algorithm \(\mathsf {SafeIteration} \) with parameters \((\widehat{R}_{1},\emptyset , L, \emptyset )\). For the moment, assume that the \(\mathsf {ComputeTransitions} \) method called in Line 1 does nothing (i.e., the gray lines of Alg. 1 are ignored in the execution).

When initialized with \(\mathsf {SafeIteration} {}(\widehat{R}_{1},\emptyset ,L,\emptyset )\), Alg. 1 performs the following computations. It starts in layer \(l=L\) with an outer recursion count \(i=1\) (not shown in Alg. 1) and reduces l, one step at a time, until \(l=1\) is reached. Upon reaching \(l=1\), it starts over again from layer L with recursion count \(i+1\) and a new safe set \(\varUpsilon \). In every such iteration i, one step of the safety fixed-point is performed for every layer and the resulting set is stored in the layer 1 map \(\varUpsilon \subseteq \widehat{X}_{{1}}\), whereas \(\varPsi \subseteq \widehat{X}_{{1}}\) keeps the knowledge of the previous iteration. If the finest layer (\(l=1\)) is reached and we have \(\varPsi =\varUpsilon \), the algorithm terminates. Otherwise \(\varUpsilon \) is copied to \(\varPsi \), \(\varUpsilon \) and \(\mathbf {C}\) are reset to \(\emptyset \), and \(\mathsf {SafeIteration} {}\) starts a new iteration (see Line 10).

After \(\mathsf {SafeIteration} \) has terminated, it returns a multi-layered controller \(\mathbf {C}=\{C^{l}\}_{l\in [1;L]} \) (with one controller per layer) which only contains the domains of the respective controllers \(C^{l}\) (see Line 3 in Alg. 1). The control maps \(G^{l}\) are computed afterward by choosing one input \(\widehat{u}\in \widehat{U}\) for every \(\widehat{x}\in D^{l}\) s.t.

$$\begin{aligned} {\widehat{u}=G^{l}(\widehat{x})}\Rightarrow {\widehat{F}_{{l}}(\widehat{x},\widehat{u})\subseteq \varGamma ^\downarrow _{l1}(\varPsi )}. \end{aligned}$$
(14)

As stated before, the main ingredient for the multi-resolution fixed-point is that states encountered for layer l in iteration i are saved to the lowest layer 1 (Line 4, green) and “loaded” back to the respective layer l in iteration \(i+1\) (Line 2, green). This has the effect that a state \(\widehat{x}\in \widehat{X}_{{l}}\) with \(l>1\), which was not contained in W computed in layer l and iteration i via Line 2, might be included in \(\varGamma ^\downarrow _{l1}(\varPsi )\) loaded in the next iteration \(i+1\) when re-computing Line 2 for l. This happens if all states \(x\in \widehat{x}\) were added to \(\varUpsilon \) by some layer \(l'<l\) in iteration i.

Due to the effect described above, the map W encountered in Line 2 for a particular layer l throughout different iterations i might not be monotonically shrinking. However, the latter is true for layer 1, which implies that \(\mathsf {EagerSafe} {}(\widehat{R}_{1},L)\) is sound and complete w.r.t. layer 1 as formalized by Theorem 1.

Alg. 1: \(\mathsf {SafeIteration} \)
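The listing of Alg. 1 is not reproduced here; as a reading aid, the following Python fragment sketches its control flow as described above. It is our illustrative reconstruction, not the authors' pseudo-code; `gamma_down`, `safe_step`, and `compute_transitions` are assumed helpers behaving like (13), \(\mathsf {Safe} (\cdot ,\widehat{S}_{{l}})\) (one iteration of (9) in layer l), and Alg. 3, respectively.

```python
def safe_iteration(psi, upsilon, l, ctrl, L, gamma_down, safe_step, compute_transitions):
    """Illustrative reconstruction of the SafeIteration control flow.

    psi, upsilon : layer-1 cell sets (previous-round result / current-round accumulator)
    ctrl         : list of (layer, controller domain) pairs collected in this round
    compute_transitions is a no-op in the eager variant EagerSafe."""
    frontier = gamma_down(psi, 1, l) - gamma_down(upsilon, 1, l)
    compute_transitions(frontier, l)                 # lazy abstraction (gray lines)
    w = safe_step(gamma_down(psi, 1, l), l)          # one Safe step in layer l
    ctrl = ctrl + [(l, w)]                           # remember the domain for layer l
    upsilon = upsilon | gamma_down(w, l, 1)          # save the result to layer 1
    if l > 1:                                        # continue with the next finer layer
        return safe_iteration(psi, upsilon, l - 1, ctrl, L,
                              gamma_down, safe_step, compute_transitions)
    if upsilon == psi:                               # converged in the finest layer
        return upsilon, ctrl
    return safe_iteration(upsilon, set(), L, [], L,  # start the next round from layer L
                          gamma_down, safe_step, compute_transitions)
```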

Theorem 1

([23]). \(\mathsf {EagerSafe} {}\) is sound and complete w.r.t. layer 1.

It is important to mention that the algorithm \(\mathsf {EagerSafe} {}\) is presented only to make a smoother transition to lazy ABCS for safety (to be presented in the next section). In practice, \(\mathsf {EagerSafe} {}\) itself is of little algorithmic value as it is always slower than \(\mathsf {Safe} _\infty (\cdot , \widehat{S}_{{1}})\), but produces the same result. This is because in \(\mathsf {EagerSafe} {}\), the fixed-point computation in the finest layer does not use the coarser layers' winning domain in any meaningful way. So the computation in all the layers—except in \(\widehat{S}_{{1}}\)—goes to waste.

Reach-Avoid Control. We consider the computation of an abstract multi-layered reach-avoid controller \(\mathbf {C}\in \mathcal {C}(\widehat{\mathbf {S}}{},\psi _\mathrm {reach})\) by the iterative function \(\mathsf {ReachIteration} \) in Alg. 2, assuming that \(\widehat{\mathbf {S}}\) is pre-computed. We refer to this scenario by the wrapper function \(\mathsf {EagerReach}(\widehat{T}_{1},\widehat{R}_{1},L)\), which calls \(\mathsf {ReachIteration} \) with parameters \((\widehat{T}_{1}, \widehat{R}_{1}, L, \emptyset )\). Assume in this section that \(\mathsf {ComputeTransitions} \) and \(\mathsf {ExpandAbstraction} _m\) do not modify anything (i.e., the gray lines of Alg. 2 are ignored in the execution).

Alg. 2: \(\mathsf {ReachIteration} _m\)

The recursive procedure \(\mathsf {ReachIteration} _m\) in Alg. 2 implements the switching protocol informally discussed in Sect. 1. Lines 1–12 implement the fixed-point computation at the coarsest layer \(\widehat{S}_{{L}}\) by iterating the fixed-point over \(\widehat{S}_{{L}}\) until convergence (line 3). Afterward, \(\mathsf {ReachIteration} _m\) recursively calls itself (line 9) to see if the set of winning states (W) can be extended by a lower abstraction layer. Lines 12–28 implement the fixed-point computations in layers \(l < L\) by iterating the fixed-point over \(\widehat{S}_{{l}}\) for m steps (line 14) for a given fixed parameter \(m>0\). If the analysis already reaches a fixed point, then, as in the first case, the algorithm \(\mathsf {ReachIteration} _m\) recursively calls itself (line 21) to check if further states can be added in a lower layer. If no fixed-point is reached in line 14, more states could be added in the current layer by running \(\mathsf {Reach} \) for more than m steps. However, this might not be efficient (see the example in Sect. 1). The algorithm therefore attempts to go coarser when recursively calling itself (line 25) to expand the fixed-point in a coarser layer instead. Intuitively, this is possible if states added by lower layer fixed-point computations have now “bridged” a region where precise control was needed and can now be used to enable control in coarser layers again. This also shows the intuition behind the parameter m. If we set it to \(m=1\), the algorithm might attempt to go coarser before this “bridging” is completed. The parameter m can therefore be used as a tuning parameter to adjust the frequency of such attempts and is only needed in layers \(l<L\). The algorithm terminates if a fixed-point is reached in the lowest layer (line 7 and line 9). In this case the layer 1 winning state set \(\varUpsilon \) and the multi-layered controller \(\mathbf {C}\) are returned.
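Analogously, the control flow of Alg. 2 as described above can be sketched as follows. This is again our illustrative reconstruction, not the authors' pseudo-code, and it omits the exact line structure; `gamma_down` is assumed to behave like (13), `reach_steps(target, safe_l, l, steps)` to run (11) in layer l (steps=None meaning until convergence), and `expand_abstraction` to combine \(\mathsf {ExpandAbstraction} _m\) with Alg. 3 (a no-op in \(\mathsf {EagerReach}_m\)).

```python
def reach_iteration(upsilon, safe, l, ctrl, L, m, gamma_down,
                    reach_steps, expand_abstraction):
    """Illustrative reconstruction of the ReachIteration_m control flow.

    upsilon : layer-1 set of winning cells found so far (initially the target)
    safe    : layer-1 under-approximation of the safe set
    reach_steps returns (winning cells of layer l, converged flag)."""
    expand_abstraction(upsilon, safe, l)                # lazily build needed transitions
    steps = None if l == L else m                       # coarsest layer: run to convergence
    w, converged = reach_steps(gamma_down(upsilon, 1, l),
                               gamma_down(safe, 1, l), l, steps)
    ctrl = ctrl + [(l, w)]
    upsilon = upsilon | gamma_down(w, l, 1)             # save progress to layer 1
    if converged:
        if l == 1:                                      # fixed point in the finest layer
            return upsilon, ctrl
        return reach_iteration(upsilon, safe, l - 1, ctrl, L, m, gamma_down,
                               reach_steps, expand_abstraction)   # try a finer layer
    return reach_iteration(upsilon, safe, l + 1, ctrl, L, m, gamma_down,
                           reach_steps, expand_abstraction)       # give coarser layers a chance
```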

It was shown in [25] that this switching protocol ensures that \(\mathsf {EagerReach}_m\) is sound and complete w.r.t. layer 1.

Theorem 2 ([25]). \(\mathsf {EagerReach}_m\) is sound and complete w.r.t. layer 1.


3.3 Lazy Exploration Within Multi-layered ABCS

We now consider the case where the multi-layered abstractions \(\widehat{\mathbf {S}}\) are computed lazily. Given the multi-resolution fixed-points discussed in the previous section, this requires a tight over-approximation of the region of the state space that might be explored by \(\mathsf {Reach} \) or \(\mathsf {Safe} \) in the current layer, called the frontier. Abstract transitions are then constructed only for frontier states in the currently considered layer l, via Alg. 3. As already discussed in Sect. 1, the computation of frontier states differs for safety and reachability objectives; one lazy step is sketched schematically below.
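
The sketch below shows the generic shape of such a step; compute_frontier, compute_transitions, and run_fixpoint are assumed primitives standing in for the operations of Alg. 1–3, not the literal implementation.

```python
# One schematic lazy synthesis step in layer l: construct abstract transitions
# only for the frontier, then run the fixed-point of the current objective.
# compute_frontier, compute_transitions and run_fixpoint are assumed primitives.

def lazy_step(l, psi, upsilon, objective):
    frontier = compute_frontier(l, psi, upsilon, objective)  # differs for safety vs. reachability
    compute_transitions(l, frontier)                         # lazy exploration (Alg. 3)
    return run_fixpoint(l, objective)                        # Safe or Reach in layer l
```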

Safety Control. We now consider the lazy computation of a multi-layered safety controller \(\mathbf {C}\in \mathcal {C}(\widehat{\mathbf {S}},\psi _\mathrm {safe})\). We refer to this scenario by the wrapper function \(\mathsf {LazySafe} (\widehat{R}_{1},L)\) which simply calls \(\mathsf {SafeIteration} {}(\widehat{R}_{1},\emptyset ,L,\emptyset )\).

This time, Line 1 of Alg. 1 is used to explore transitions. The frontier cells at layer l are given by \(\mathcal {F}_l=\varGamma ^\downarrow _{l1}(\varPsi )\setminus \varGamma ^\downarrow _{l1}(\varUpsilon )\). The call to \(\mathsf {ComputeTransitions} {}\) in Alg. 3 updates the abstract transitions for the frontier cells. In the first iteration of \(\mathsf {SafeIteration} {}(\widehat{R}_{1},\emptyset ,L,\emptyset )\), we have \(\varPsi =\widehat{R}_{1}\) and \(\varUpsilon =\emptyset \). Thus, \(\mathcal {F}_L=\varGamma ^\downarrow _{L1}(\widehat{R}_{1})=\widehat{R}_{L}\), and hence, for layer L, all transitions for states inside the safe set are pre-computed in the first iteration of Alg. 1. In lower layers \(l<L\), the frontier \(\mathcal {F}_l\) contains all states which are (i) not marked unsafe by all layers in the previous iteration, i.e., are in \(\varGamma ^\downarrow _{l1}(\varPsi )\), but (ii) cannot stay safe for i time-steps in any layer \(l'>l\), i.e., are not in \(\varGamma ^\downarrow _{l1}(\varUpsilon )\). Hence, \(\mathcal {F}_l\) forms a small boundary around the set W computed in the previous iteration of \(\mathsf {Safe} \) in layer \(l+1\) (see Sect. 1 for an illustrative example of this construction).
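
Since the safety frontier is just a difference of two under-approximations, it can be computed directly from the layer-1 sets \(\varPsi \) and \(\varUpsilon \). A small sketch, where gamma_down is an assumed implementation of \(\varGamma ^\downarrow _{l1}\) on sets of layer-1 cells:

```python
# Safety frontier of layer l:  F_l = Gamma_down_{l1}(Psi) \ Gamma_down_{l1}(Upsilon).
# gamma_down(l, S) is an assumed implementation of the under-approximation operator,
# returning the layer-l cells entirely covered by the layer-1 set S.

def safety_frontier(l, psi, upsilon):
    candidates = gamma_down(l, psi)        # not yet excluded in the previous iteration
    settled = gamma_down(l, upsilon)       # already kept safe by some coarser layer
    return candidates - settled            # boundary around W from layer l+1
```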

It has been shown in [23] that all states which need to be checked for safety in layer l of iteration i are indeed explored by this frontier construction. This implies that Theorem 1 directly transfers from \(\mathsf {EagerSafe} \) to \(\mathsf {LazySafe} \).

Theorem 3. \(\mathsf {LazySafe} \) is sound and complete w.r.t. layer 1.

Reach-Avoid Control. We now consider the lazy computation of a multi-layered reach-avoid controller \(\mathbf {C}\in \mathcal {C}(\widehat{\mathbf {S}},\psi _\mathrm {reach})\). We refer to this scenario by the wrapper function \(\mathsf {LazyReach} _m(\widehat{T}_{1},\widehat{R}_{1},L)\) which calls \(\mathsf {ReachIteration} _{m}(\widehat{T}_{1},\widehat{R}_{1},L,\emptyset )\).

In the first iteration of \(\mathsf {ReachIteration} _{m}\) we have the same situation as in \(\mathsf {LazySafe} \): since \(\varPsi =\widehat{R}_{1}\), line 2 of Alg. 2 pre-computes all transitions for states inside the safe set, and \(\mathsf {ComputeTransitions} {}\) performs no further computations for layer L in later iterations. For \(l<L\), however, the situation is different. As \(\mathsf {Reach} \) computes a least fixed-point, it iteratively enlarges the winning set starting from the target \(\widehat{T}_{1}\) (given when \(\mathsf {ReachIteration} \) is initialized). Computing transitions for all not-yet-explored states in every iteration would therefore be very wasteful (see the example in Sect. 1). Instead, \(\mathsf {ExpandAbstraction} _m\) determines an over-approximation of the frontier states in the following manner: it optimistically computes the predecessors (not the controllable predecessors!) of the already-obtained set \(\varUpsilon \) by (i) using (coarse) auxiliary abstractions for this computation and (ii) applying a cooperative predecessor operator.

This requires a set of auxiliary systems, given by

$$\begin{aligned} \widehat{\mathbf {A}}= \{\widehat{A}^L_{{l}}\}_{l=1}^L,\qquad \widehat{A}^L_{{l}}:=\widehat{S}_{{}}(\varSigma ,\tau _l,\eta _L) = (\widehat{X}_{{L}},\widehat{U},\widehat{F}_{{l}}^L). \end{aligned}$$
(15)

The abstract system \(\widehat{A}^L_{{l}}\) induced by \(\varSigma \) captures the \(\tau _l\)-duration transitions over the coarsest layer's state space \(\widehat{X}_{{L}}\). Using \(\tau _l\) instead of \(\tau _L\) is important: with \(\tau _L\), there might be “holes” between the computed frontier and the current target \(\varUpsilon \) which cannot be bridged by the shorter-duration control actions of layer l. This would render \(\mathsf {LazyReach} _{m}\) incomplete. Also note that \(\mathsf {ExpandAbstraction} _m\) does not restrict attention to the safe set. This is because \(\widehat{R}_{l}\supseteq \widehat{R}_{L}\), and when this inclusion is strict, safe states in layer l which are possibly winning but are covered by an obstacle in layer L (see Fig. 1) can also be explored.

For \(\varUpsilon \subseteq \widehat{X}_{{L}}\) and \(l\in [1;L]\), we define the cooperative predecessor operator

$$\begin{aligned} \mathrm {Pre}_{\widehat{A}^L_{{l}}}(\varUpsilon )= \{ \widehat{x}\in \widehat{X}_{{L}} \mid \exists \widehat{u}\in \widehat{U}{}\;.\;\widehat{F}_{{l}}^L(\widehat{x},\widehat{u})\cap \varUpsilon \ne \emptyset \}. \end{aligned}$$
(16)

This operator is defined in analogy to the controllable predecessor operator in (8). In \(\mathsf {ExpandAbstraction} _m\) we apply the cooperative predecessor operator m times, i.e.,

$$\begin{aligned}&\mathrm {Pre}_{\widehat{A}^L_{{l}}}^1(\varUpsilon ) = \mathrm {Pre}_{\widehat{A}^L_{{l}}}(\varUpsilon )~\text {and}~ \nonumber \\&\mathrm {Pre}_{\widehat{A}^L_{{l}}}^{j+1}(\varUpsilon ) = \mathrm {Pre}_{\widehat{A}^L_{{l}}}^{j}(\varUpsilon ) \cup \mathrm {Pre}_{\widehat{A}^L_{{l}}}(\mathrm {Pre}_{\widehat{A}^L_{{l}}}^j(\varUpsilon )). \end{aligned}$$
(17)
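
A minimal sketch of (16) and (17), assuming the auxiliary transition function \(\widehat{F}_{{l}}^L\) is available as a Python dict mapping (cell, input) pairs to sets of successor cells; this representation is an assumption made purely for illustration.

```python
# Cooperative predecessor (16) and its m-fold iteration (17) on the auxiliary
# abstraction A^L_l.  F is assumed to be a dict: (cell, input) -> set of successor cells.

def pre(F, target):
    # All cells x with SOME input u whose successors intersect the target.
    return {x for (x, u), succs in F.items() if succs & target}

def pre_iter(F, target, m):
    result = pre(F, target)                # Pre^1
    for _ in range(m - 1):
        result = result | pre(F, result)   # Pre^{j+1} = Pre^j united with Pre(Pre^j)
    return result
```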

Calling \(\mathsf {ExpandAbstraction} _m\) with parameters \(\varUpsilon \subseteq \widehat{X}_{{1}}\) and \(l<L\) applies \(\mathrm {Pre}_{\widehat{A}^L_{{l}}}^m\) to the over-approximation of \(\varUpsilon \) by abstract states in layer L. This over-approximation is the dual of the under-approximation operator \(\varGamma ^\downarrow _{ll'}\):

$$\begin{aligned} \varGamma ^{\uparrow }_{ll'}(\varUpsilon _{l'}) := \widehat{X}_{{l}}\setminus \varGamma ^\downarrow _{ll'}\big (\widehat{X}_{{l'}}\setminus \varUpsilon _{l'}\big ), \end{aligned}$$
(18)

where \(l,l'\in [1;L]\) and \(\varUpsilon _{l'}\subseteq \widehat{X}_{{l'}}\). Finally, m controls the size of the frontier set and determines the maximum progress that can be made in a single backwards synthesis run in a layer \(l < L\).
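
Under the same assumptions as the previous sketch, \(\mathsf {ExpandAbstraction} _m\) can then be read as the composition of the dual over-approximation (18) with the iterated cooperative predecessor. Here X_L and X_1 denote the full cell sets of layers L and 1, and gamma_down_L1 implements \(\varGamma ^\downarrow _{L1}\); this is our reading of the construction, not the literal MASCOT code.

```python
# Sketch of ExpandAbstraction_m for a layer l < L: over-approximate the layer-1
# winning set Upsilon by layer-L cells via the dual operator (18), then apply the
# cooperative predecessor of the auxiliary abstraction A^L_l  m times.

def gamma_up(gamma_down, X_coarse, X_fine, S_fine):
    # Dual of the under-approximation: coarse cells that intersect S_fine.
    return X_coarse - gamma_down(X_fine - S_fine)

def expand_abstraction(m, F_aux, upsilon, X_L, X_1, gamma_down_L1):
    coarse_target = gamma_up(gamma_down_L1, X_L, X_1, upsilon)
    return pre_iter(F_aux, coarse_target, m)   # over-approximated frontier (layer-L cells)
```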

It can be shown that all states which might be added to the winning state set in the current iteration are indeed explored by this frontier construction, implying that \(\mathsf {LazyReach} _m(\widehat{T}_{1},\widehat{R}_{1},L)\) is sound and complete w.r.t. layer 1. In other words, Theorem 2 transfers from \(\mathsf {EagerReach} _m\) to \(\mathsf {LazyReach} _{m}\) (see the extended version [24] for the proof).

Theorem 4. \(\mathsf {LazyReach} _m\) is sound and complete w.r.t. layer 1.

4 Experimental Evaluation

We have implemented our algorithms in the MASCOT tool; we present a brief evaluation below.

4.1 Reach-Avoid Control Problem for a Unicycle

We use a nonlinear kinematic system model commonly known as the unicycle model, specified as

$$\begin{aligned} \dot{x}_1 \in u_1\cos (x_3) + W_1\quad \dot{x}_2 \in u_1\sin (x_3) + W_2\quad \dot{x}_3 = u_2 \end{aligned}$$

where \(x_1\) and \(x_2\) are state variables representing the 2D Cartesian coordinates, \(x_3\) is a state variable representing the angular displacement, \(u_1\) and \(u_2\) are control inputs that influence the linear and angular velocities, respectively, and \(W_1\), \(W_2\) are the perturbation bounds in the respective dimensions, given by \(W_1 = W_2 = [-0.05, 0.05]\). The perturbations render this deceptively simple problem computationally intensive. We run controller synthesis experiments for the unicycle inside a two-dimensional space with obstacles and a designated target area, as shown in Fig. 2. We use three layers for the multi-layered algorithms \(\mathsf {EagerReach} \) and \(\mathsf {LazyReach} \). All experiments presented in this subsection were performed on an Intel Core i5 3.40 GHz processor.
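
For reference, the nominal unicycle vector field with the stated disturbance bound can be written down directly. The following NumPy sketch is for illustration only; in particular, the Euler step shown is not how the abstraction is computed, which requires validated over-approximations of reachable sets.

```python
import numpy as np

# Nominal unicycle dynamics: x = (x1, x2, x3) is planar position plus heading,
# u = (u1, u2) are the linear and angular velocity inputs.  The disturbances
# W1 = W2 = [-0.05, 0.05] enter additively in the x1 and x2 equations.
W_BOUND = 0.05

def unicycle_rhs(x, u):
    x1, x2, x3 = x
    u1, u2 = u
    return np.array([u1 * np.cos(x3),    # dx1/dt, up to a disturbance in [-0.05, 0.05]
                     u1 * np.sin(x3),    # dx2/dt, up to a disturbance in [-0.05, 0.05]
                     u2])                # dx3/dt

# Illustrative forward-Euler step of the nominal flow over a sampling time tau.
def euler_step(x, u, tau):
    return np.asarray(x, dtype=float) + tau * unicycle_rhs(x, u)
```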

Algorithm Comparison. Table 1 compares the \(\mathsf {Reach} \), \(\mathsf {EagerReach} _2\), and \(\mathsf {LazyReach} _{2}\) algorithms. The projection onto the state space of the transitions constructed by \(\mathsf {LazyReach} _{2}\) in the finest abstraction is shown in Fig. 2b. The corresponding visualization for \(\mathsf {EagerReach} _{2}\) would show all of the uncolored space covered in red. The savings of \(\mathsf {LazyReach} _{2}\) over \(\mathsf {EagerReach} _2\) can mostly be attributed to this difference.

Fig. 2. (a) Solution of the unicycle reach-avoid problem by \(\mathsf {LazyReach} _{2}\). (b) Cells of the finest layer (\(l=1\)) for which transitions were computed during \(\mathsf {LazyReach} _{2}\) are marked in red. For \(\mathsf {EagerReach} _{2}\), all uncolored cells would also be red. (Color figure online)

Table 1. Comparison of running times (in seconds) of reachability algorithms on the perturbed unicycle system.
Fig. 3. Runtime with increasing number of obstacles.

Varying State Space Complexity. We investigate how the lazy algorithm and the multi-layered baseline perform as the structure of the state space changes, which we vary by placing o identical obstacles in the open area of the state space. The runtimes for \(\mathsf {EagerReach} _2\) and \(\mathsf {LazyReach} _{2}\) are plotted in Fig. 3. We observe that \(\mathsf {LazyReach} _{2}\) runs fast when there are few obstacles, because it constructs the finest-layer abstraction only for the immediate surroundings of those obstacles. By \(o = 20\), \(\mathsf {LazyReach} _{2}\) explores the entire state space in the finest layer, and its performance is slightly worse than that of \(\mathsf {EagerReach} _2\) (due to additional bookkeeping). The general decreasing trend in the abstraction construction runtime for \(\mathsf {EagerReach} _2\) is because transitions outgoing from obstacle states are not computed.

Fig. 4. Run-time comparison of \(\mathsf {LazySafe} {}\) and \(\mathsf {EagerSafe} {}\) on the DC-DC boost converter example. \(L > 4\) is not used for \(\mathsf {EagerSafe} {}\) since coarser layers fail to produce a non-empty winning set. The same is true for \(L > 7\) in \(\mathsf {LazySafe} {}\).

4.2 Safety Control Problem for a DC-DC Boost Converter [23]

We evaluate our safety algorithm on a benchmark DC-DC boost converter example from [17, 31, 37]. The system \(\varSigma \) is a second-order differential inclusion \(\dot{X}(t) \in A_pX(t) + b + W\) with two switching modes \(p\in \{ 1,2 \}\), where

$$\begin{aligned} b = \begin{bmatrix} \frac{v_s}{x_l}\\ 0 \end{bmatrix}, A_1 = \begin{bmatrix} -\frac{r_l}{x_l} &{} 0\\ 0 &{} -\frac{1}{x_c}\frac{r_0}{r_0+r_c} \end{bmatrix}, A_2 = \begin{bmatrix} -\frac{1}{x_l}(r_l+\frac{r_0r_c}{r_0+r_c}) &{} \frac{1}{5}(-\frac{1}{x_l}\frac{r_0}{r_0+r_c})\\ 5\frac{r_0}{r_0+r_c}\frac{1}{x_c} &{} -\frac{1}{x_c}\frac{1}{r_0+r_c} \end{bmatrix}, \end{aligned}$$

with \(r_0 = 1\), \(v_s = 1\), \(r_l = 0.05\), \(r_c = 0.5r_l\), \(x_l = 3\), \(x_c = 70\) and \(W = [-0.001, 0.001]\times [-0.001, 0.001]\). A more detailed physical description of the model can be found in [17]. The safety control problem that we consider is \(\langle \varSigma ,\psi _\mathrm {safe}\rangle \), where \(\psi _\mathrm {safe}= always( [1.15,1.55]\times [5.45,5.85])\). We evaluate the performance of our \(\mathsf {LazySafe} {}\) algorithm on this benchmark and compare it to \(\mathsf {EagerSafe} {}\) and a single-layered baseline. For \(\mathsf {LazySafe} {}\) and \(\mathsf {EagerSafe} {}\), we vary the number of layers used. The results are presented in Fig. 4. In all experiments, the finest layer is the same and is parameterized by \(\eta _1 = [0.0005, 0.0005]\) and \(\tau _1 = 0.0625\). The grid parameters and sampling times of successive layers each differ by a factor of 2.
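
For convenience, the system data stated above can be transcribed directly. The following NumPy sketch only reproduces the matrices and parameters of the benchmark, not the abstraction or synthesis.

```python
import numpy as np

# DC-DC boost converter data as stated above: dX(t) in A_p X(t) + b + W, p in {1, 2}.
r0, vs, rl = 1.0, 1.0, 0.05
rc = 0.5 * rl
xl, xc = 3.0, 70.0

b = np.array([vs / xl, 0.0])

A1 = np.array([[-rl / xl, 0.0],
               [0.0, -(1.0 / xc) * (r0 / (r0 + rc))]])

A2 = np.array([[-(1.0 / xl) * (rl + r0 * rc / (r0 + rc)),
                (1.0 / 5.0) * (-(1.0 / xl) * (r0 / (r0 + rc)))],
               [5.0 * (r0 / (r0 + rc)) * (1.0 / xc),
                -(1.0 / xc) * (1.0 / (r0 + rc))]])

W = np.array([[-0.001, 0.001],   # componentwise disturbance interval
              [-0.001, 0.001]])

def rhs(p, x):
    """Nominal right-hand side for switching mode p in {1, 2} (disturbance excluded)."""
    A = A1 if p == 1 else A2
    return A @ x + b
```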

From Fig. 4, we see that \(\mathsf {LazySafe} {}\) becomes significantly faster than both \(\mathsf {EagerSafe} {}\) and the single-layered baseline as L increases. For the single-layered case (\(L=1\)), both \(\mathsf {LazySafe} {}\) and \(\mathsf {EagerSafe} {}\) take slightly more time than the baseline due to the extra bookkeeping in the multi-layered algorithms. In Fig. 5, we visualize the domain of the constructed transitions and the synthesized controllers in each layer for \(\mathsf {LazySafe} {}(\cdot ,6)\). The safe set is mostly covered by cells of the two coarsest layers; this is responsible for the computational savings over \(\mathsf {LazySafe} {}(\cdot ,1)\).

In contrast to the reach-avoid control problem for the unicycle, synthesis takes significantly longer than abstraction in this example. It is difficult to pinpoint the reason for this difference, because the two systems are not directly comparable and the abstraction parameters are very different. Still, we highlight two suspected reasons: (a) abstraction is faster because of the lower dimension and smaller control input space of the boost converter, and (b) the smaller sampling time in the finest layer of the boost converter abstraction (0.0625 s as compared to 0.225 s for the unicycle) results in slower convergence of the fixed-point iteration.

Fig. 5. Domain of the computed transitions (union of red and black region) and the synthesized controllers (black region) for the DC-DC boost converter example, computed by \(\mathsf {LazySafe} {}(\cdot ,6)\). (Color figure online)

5 Conclusion

ABCS is an exciting new development in the field of formal synthesis of cyber-physical systems. We have summarized a multi-resolution approach to ABCS. Fruitful avenues for future work include designing scalable and robust tools, and combining basic algorithmic techniques with structural heuristics or orthogonal techniques (e.g. those based on data-driven exploration).