A key motivator behind abstraction-based control synthesis is that computing the game iterations from Eqs. (2) and (3) exactly is often intractable for high-dimensional nonlinear dynamics, and termination is not guaranteed. Quantizing (or “abstracting”) continuous interfaces into finite counterparts ensures that each predicate operation of the game terminates in finite time, but at the cost of the solution’s precision. Finer quantization incurs a smaller loss of precision but can cause the memory and computation required to store and manipulate the symbolic representation to exceed machine resources.
This section first introduces the notion of interface abstraction as a refinement relation. We then define the notion of a quantizer and show that it generalizes many existing quantizers in the abstraction-based control literature. Finally, we show how these quantizers can be injected anywhere in the control synthesis pipeline to alleviate computational bottlenecks.
4.1 Theory of Abstract Interfaces
While a controller synthesis algorithm can analyze a simpler model of the dynamics, the results have no meaning unless they can be extrapolated back to the original system dynamics. The following interface refinement condition formalizes when this extrapolation is possible.
Definition 6
(Interface Refinement [22]). Let \(F(i, o)\) and \(\hat{F}(\hat{i}, \hat{o})\) be interfaces. \(\hat{F}\) is an abstraction of \(F\) if and only if \(i \equiv \hat{i}\), \(o \equiv \hat{o}\), and
$$\begin{aligned} {\texttt {nb}}(\hat{F})&\Rightarrow {\texttt {nb}}(F) \end{aligned}$$
(11)
$$\begin{aligned} \left( {\texttt {nb}}(\hat{F}) \wedge F\right)&\Rightarrow \hat{F} \end{aligned}$$
(12)
are valid formulas. This relationship is denoted by \(\hat{F} \preceq F\).
Definition 6 imposes two main requirements between a concrete and an abstract interface. Equation (11) encodes the condition that if \(\hat{F}\) accepts an input, then \(F\) must also accept it; that is, the abstract interface is more aggressive about rejecting invalid inputs. Equation (12) requires that if both interfaces accept an input, then the abstract output set is a superset of the concrete output set. The abstract interface is thus a conservative representation of the concrete interface: the abstraction accepts fewer inputs and exhibits more non-deterministic outputs. If both interfaces are sink interfaces, then \(\hat{F} \preceq F\) reduces to \(\hat{F} \subseteq F\) when \(F, \hat{F}\) are interpreted as sets. If both are source interfaces, then the set containment direction is flipped and \(\hat{F} \preceq F\) reduces to \(F \subseteq \hat{F}\).
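To make the definition concrete, the sketch below checks Eqs. (11) and (12) on two toy Boolean interfaces using the `dd` BDD package. The interfaces, their non-blocking predicates, and the helper `refines` are our own illustrative constructions (the `dd` calls are assumed), not the paper’s implementation.

```python
# A minimal sketch of checking Definition 6 on Boolean predicates with
# the `dd` BDD package (assumed API); the interfaces below are toy
# examples, not the paper's models.
from dd.autoref import BDD

bdd = BDD()
bdd.declare('i', 'o')
i, o = bdd.var('i'), bdd.var('o')

# Concrete interface F: never blocks, output equals input.
F = (i & o) | (~i & ~o)
nb_F = bdd.true

# Abstract interface Fhat: never blocks, any output is possible.
Fhat = bdd.true
nb_Fhat = bdd.true

def refines(Fhat, nb_Fhat, F, nb_F):
    """Return True if Fhat is an abstraction of F (Eqs. (11) and (12))."""
    eq11 = (~nb_Fhat | nb_F) == bdd.true        # nb(Fhat) => nb(F)
    eq12 = (~(nb_Fhat & F) | Fhat) == bdd.true  # nb(Fhat) /\ F => Fhat
    return eq11 and eq12

assert refines(Fhat, nb_Fhat, F, nb_F)  # Fhat abstracts F, i.e., Fhat ⪯ F
```

Here the abstraction accepts the same inputs but allows strictly more outputs, so both implications hold.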
The refinement relation satisfies the reflexivity, transitivity, and antisymmetry properties required of a partial order [22] and is depicted in Fig. 4. This order has a bottom element \(\bot \), which is a universal abstraction. Conveniently, \(\bot \) signifies both Boolean false and the bottom of the partial order; this interface blocks for every potential input. In contrast, Boolean \(\top \) plays no special role in the partial order: while \(\top \) exhibits totally non-deterministic outputs, it accepts every input. A blocking input is considered “worse” than non-deterministic outputs in the refinement order. The refinement relation \(\preceq \) encodes a direction of conservatism such that any reasoning done over the abstract models is sound and generalizes to the concrete model.
Theorem 1
(Informal Substitutability Result [22]). For any input allowed by the abstract model, the output behaviors exhibited by the abstract model contain the output behaviors exhibited by the concrete model.
If a property on outputs has been established for an abstract interface, then it still holds if the abstract interface is replaced with the concrete one. Informally, the abstract interface is more conservative, so if a property holds with the abstraction, then it must also hold for the true system. All of the aforementioned interface operators preserve the refinement relation of Definition 6, in the sense that they are monotone with respect to the refinement partial order.
Theorem 2
(Composition Preserves Refinement [22]). Let \(\hat{A} \preceq A\) and \(\hat{B} \preceq B \). If the composition is well defined, then \({\texttt {comp}}(\hat{A}, \hat{B}) \preceq {\texttt {comp}}(A,B)\).
Theorem 3
(Output Hiding Preserves Refinement [22]). If \(A \preceq B\), then \({\texttt {\textit{ohide}}}(w,A) \preceq {\texttt {\textit{ohide}}}(w,B)\) for any variable w.
Theorem 4
(Input Hiding Preserves Refinement). If A, B are both sink interfaces and \(A \preceq B\), then \({\texttt {ihide}}(w, A) \preceq {\texttt {ihide}}(w, B)\) for any variable w.
Proofs for Theorems 2 and 3 are provided in [22]; Theorem 4’s proof is simple and is omitted. One can think of interface composition and variable hiding as moving horizontally (with respect to the refinement order) through the space of all interfaces. The synthesis pipeline encodes one such path, and the monotonicity of these operators yields guarantees about the path’s end point. Composite operators such as \({{\texttt {cpre}}}(\cdot )\) chain together multiple incremental steps. Furthermore, since the composition of monotone operators is itself monotone, any composite constructed from these parts is also monotone. In contrast, the coarsening and refinement operators introduced later in Definitions 8 and 10, respectively, are used to move vertically and construct abstractions. The “direction” of a new composite operator can easily be established through simple reasoning about the cumulative directions of its constituent operators.
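As an illustration of how such composites inherit monotonicity, the sketch below chains a cpre-like operator out of the primitive operators. This is purely schematic: `comp`, `ohide`, and `ihide` are passed in as placeholders for the toolbox’s primitives, and the precise form of \({{\texttt {cpre}}}(\cdot )\) in Eqs. (2) and (3) may differ from this assembly.

```python
# Schematic only: one plausible way a cpre-like composite could be chained
# from the primitive operators, with monotonicity inherited step by step
# from Theorems 2-4. The operator implementations are supplied by the
# caller; nothing here names the toolbox's actual API.
def cpre_like(F, Z, next_state_vars, control_vars, comp, ohide, ihide):
    A = comp(F, Z)            # compose dynamics with the winning set (Thm. 2)
    for x in next_state_vars:
        A = ohide(x, A)       # hide next-state outputs (Thm. 3)
    for u in control_vars:
        A = ihide(u, A)       # hide control inputs of the resulting sink (Thm. 4)
    return A                  # a chain of monotone steps is monotone
```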
4.2 Dynamically Coarsening Interfaces
In practice, the sequence of interfaces \(Z_i\) generated during synthesis grows in complexity. This occurs even if the dynamics \(F\) and the target/safe sets have compact representations (e.g., few nodes when using BDDs). Coarsening \(F\) and \(Z_i\) combats this growth in complexity by effectively reducing the amount of information passed between iterations of the fixed point procedure.
Spatial discretization or coarsening is achieved by use of a quantizer interface that implicitly aggregates points in a space into a partition or cover.
Definition 7
A quantizer \(Q(i, o)\) is any interface that abstracts the identity interface \((i == o)\) associated with the signature \((i, o)\).
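For instance (a toy example of ours, not drawn from the paper), over the domain \(\{0, 1, 2, 3\}\) the relation
$$Q(i, o) \;\equiv\; \left(\lfloor i/2 \rfloor = \lfloor o/2 \rfloor \right)$$
never blocks and relates each input to both points of its two-element bin, so it satisfies Eqs. (11) and (12) with respect to the identity interface \((i == o)\). It is therefore a quantizer that aggregates \(\{0,1\}\) and \(\{2,3\}\) into blocks.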
Quantizers decrease the complexity of the system representation and make synthesis more computationally tractable. A coarsening operator abstracts an interface by connecting it in series with a quantizer. Coarsening reduces the number of non-blocking inputs and increases the output non-determinism.
Definition 8
(Input/Output Coarsening). Given an interface \(F(i, o)\) and an input quantizer \(Q(\hat{i}, i)\), input coarsening yields an interface with signature \((\hat{i}, o)\).
$$\begin{aligned} {\texttt {icoarsen}}(F, Q(\hat{i}, i))&= {\texttt {\textit{ohide}}}(i, {\texttt {comp}}(Q(\hat{i}, i), F) ) \end{aligned}$$
(13)
Similarly, given an output quantizer \(Q(o, \hat{o})\), output coarsening yields an interface with signature \((i, \hat{o})\).
$$\begin{aligned} {\texttt {ocoarsen}}(F, Q(o, \hat{o}))&= {\texttt {\textit{ohide}}}(o, {\texttt {comp}}( F, Q(o, \hat{o}))) \end{aligned}$$
(14)
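The following sketch mirrors Eqs. (13) and (14) over BDD predicates. The helpers `comp` and `ohide` are deliberately simplified stand-ins (conjunction and existential quantification) and the `dd` calls are assumed; the paper’s full interface composition also tracks non-blocking conditions.

```python
# A sketch of icoarsen / ocoarsen from Eqs. (13) and (14). `comp` and
# `ohide` are simplified stand-ins, not the paper's full operators.
from dd.autoref import BDD

bdd = BDD()
bdd.declare('ihat', 'i', 'o', 'ohat')

def comp(A, B):
    # simplified series composition: conjoin the two relations
    return A & B

def ohide(var, A):
    # hide a variable by existential quantification
    return bdd.exist([var], A)

def icoarsen(F, Q_in):
    # Eq. (13): Q_in relates the coarse input ihat to the original input i
    return ohide('i', comp(Q_in, F))

def ocoarsen(F, Q_out):
    # Eq. (14): Q_out relates the original output o to the coarse output ohat
    return ohide('o', comp(F, Q_out))

# Toy instance: F is the identity (i == o) and the input quantizer forgets
# i entirely, relating every ihat to both values of i.
i, o = bdd.var('i'), bdd.var('o')
F = (i & o) | (~i & ~o)
Q_in = bdd.true
assert icoarsen(F, Q_in) == bdd.true   # every ihat now allows any output o
```

The toy instance shows the intended direction of conservatism: after input coarsening, the interface accepts every coarse input but exhibits maximal output non-determinism.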
Figure 5 depicts how coarsening reduces the information required to encode a finite interface. It leverages a variable-precision quantizer, whose implementation is described in the extended version [1].
The corollary below shows that quantizers can be seamlessly integrated into the synthesis pipeline while preserving the refinement order. It follows readily from Theorems 2 and 3 and the definition of a quantizer.
Corollary 1
Input and output coarsening operations (13) and (14) are monotone operations with respect to the interface refinement order \(\preceq \).
It is difficult to know a priori where a specific problem instance lies along the spectrum between mathematical precision and computational efficiency. It is therefore desirable to coarsen dynamically in response to runtime conditions rather than statically beforehand. Coarsening heuristics for reach games include:
-
Downsampling with progress [7]: Initially use coarser system dynamics to rapidly identify a coarse reach basin, then use finer dynamics to construct a more granular set whenever the coarse iteration “stalls”. In [7] only the sets \(Z_i\) are coarsened during synthesis; we enable the dynamics \(F\) to be coarsened as well.
-
Greedy quantization: Selectively coarsen along individual dimensions by checking at runtime which dimension, when coarsened, would cause \(Z_i\) to shrink the least. This reward can be evaluated cheaply in practice because coarsening is computationally cheaper than composition. For BDDs, the winning region can be coarsened until its number of nodes falls below a desired threshold. Figure 6 shows this heuristic being applied to reduce memory usage at the expense of answer fidelity. Reaching a fixed point is not guaranteed while quantizers may be inserted dynamically into the synthesis pipeline, but is guaranteed once quantizers are always applied at a fixed precision. A sketch of this heuristic appears after this list.
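Below is a sketch of the greedy quantization heuristic referenced above. The helpers `coarsen`, `size`, and `volume` are hypothetical placeholders (for example, dropping one bit of precision along a dimension, the BDD node count, and the number of winning states, respectively); they stand in for the toolbox’s own primitives rather than naming its API.

```python
# A sketch of greedy coarsening of the winning region Z. `coarsen(Z, d)`,
# `size(Z)`, and `volume(Z)` are hypothetical helpers supplied by the
# caller; they do not name the toolbox API.
def greedy_coarsen(Z, dims, coarsen, size, volume, node_limit):
    """Coarsen Z one dimension at a time until its representation is small enough."""
    while size(Z) > node_limit:
        # Tentatively coarsen along each candidate dimension.
        candidates = {d: coarsen(Z, d) for d in dims}
        # Keep the candidate whose winning region shrinks the least.
        best = max(candidates, key=lambda d: volume(candidates[d]))
        if size(candidates[best]) >= size(Z):
            break  # coarsening no longer reduces the representation
        Z = candidates[best]
    return Z
```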
The most common quantizer in the literature never blocks and only increases non-determinism (such quantizers are called “strict” in [18, 19]). If a quantizer is interpreted as a partition or cover, this requirement means that the union of its cells must cover the entire space. Definition 7 relaxes that requirement so the union may cover only a subset of the space. It also hints at other variants, such as interfaces that do not increase output non-determinism but instead block for more inputs.