1 Introduction

Hard timing constraints, where deadlines should always be met, have been widely used in real-time systems to ensure system safety. However, with the rapid increase of system functional and architectural complexity, hard deadlines have become increasingly pessimistic and often lead to infeasible designs or over-provisioning of system resources [16, 20, 21, 32]. The concept of weakly-hard systems was thus proposed to relax hard timing constraints by allowing occasional deadline misses [2, 11]. This is motivated by the fact that many system functions, such as some control tasks, have a certain degree of robustness and can in fact tolerate some deadline misses, as long as those misses are bounded and dependably controlled. In recent years, considerable effort has been devoted to the research of weakly-hard systems, including schedulability analysis [1, 2, 5, 12,13,14, 19, 25, 28, 30], opportunistic control for energy saving [18], control stability analysis and optimization [8, 10, 22, 23, 26], and control-schedule co-design under possible deadline misses [3, 6, 27]. Compared with hard deadlines, weakly-hard constraints more accurately capture the timing requirements of system functions that tolerate deadline misses, and significantly improve system feasibility and flexibility [16, 20]. Compared with soft deadlines, where any number of deadline misses is allowed, weakly-hard constraints can still provide deterministic guarantees on system safety, stability, performance, and other properties under formal analysis [17, 29].

A common type of weakly-hard model is the (m, K) constraint, which specifies that among any K consecutive task executions, at most m instances may violate their deadlines [2]. The high-level structure of an (m, K)-constrained weakly-hard system is presented in Fig. 1. Given a sampled-data system \(\dot{x} = f(x,u)\) with a sampling period \(\delta > 0\), the system samples the state x at time \(t = i\delta \) for \(i=0,1,2,\dots \), and computes the control input u with function \(\pi (x)\). If the computation completes within the given deadline, the system applies u to influence the plant's dynamics. Otherwise, the system stops the computation and applies zero control input. As stated above, the system should ensure that the control input is successfully computed and applied within the deadline at least \(K{-}m\) times over any K consecutive sampling periods.
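To make the constraint concrete, the following minimal sketch (not part of SAW; the function name and encoding are our own) checks whether a recorded sequence of deadline outcomes satisfies an (m, K) constraint:

```python
def satisfies_mk(misses, m, K):
    """Check an (m, K) weakly-hard constraint: among ANY K consecutive
    executions, at most m deadline misses may occur.

    misses: sequence of 0/1 flags, where 1 marks a missed deadline.
    """
    if len(misses) < K:
        return sum(misses) <= m
    window = sum(misses[:K])            # misses in the first window
    if window > m:
        return False
    for i in range(K, len(misses)):     # slide the window one step at a time
        window += misses[i] - misses[i - K]
        if window > m:
            return False
    return True
```

For instance, with (m, K) = (1, 3), the pattern 0, 1, 0, 0, 1 is admissible, while 1, 1, 0 is not.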

Fig. 1. A weakly-hard system with perfect sensors and actuators.

For such weakly-hard systems, a natural and critical question is whether the system remains safe when deadline misses are allowed under a given (m, K) constraint. There is only limited prior work in this area, while nominal systems have been adequately studied [4, 9, 15, 31]. In [8], a weakly-hard system with linear dynamics is modeled as a hybrid automaton, and the reachability of the generated hybrid automaton is then verified by the tool SpaceEx [9]. In [7], the behavior of a linear weakly-hard system is transformed into a program, to which program verification techniques such as abstract interpretation and SMT solvers can be applied.

In our previous work [17], the safety of nonlinear weakly-hard systems was considered for the first time. Our approach tries to derive a safe initial set for any given (m, K) constraint, that is, starting from any initial state within such a set, the system will always stay within the same safe state set under the given weakly-hard constraint. Specifically, we first convert the infinite-time safety problem into a finite one by finding a set satisfying both local safety and inductiveness. The computation of such a valid set relies heavily on the estimation of the system state evolution, for which two key assumptions are made: 1) the system is exponentially stable under nominal cases without any deadline misses, which makes the system state contract with a constant decay rate; 2) the system dynamics are Lipschitz continuous, which helps bound the expansion under a deadline miss. Based on these two assumptions, we can abstract the safety verification problem as a one-dimensional problem and use linear programming (LP) to solve it, which we call one-dimension abstraction in the rest of the paper.

In practice, however, the assumptions in [17] are often hard to satisfy and the parameters of exponential stability are difficult to obtain. In addition, while the scalar abstraction provides high efficiency, experiments demonstrate that the estimation is often overly conservative. In this paper, we go one step further and present a new tool SAW for infinite-time safety verification of nonlinear weakly-hard systems that makes no particular assumption on exponential stability or Lipschitz bounds, and aims to be less conservative than the scalar abstraction. Formally, the problem solved by this tool is described as follows:

Problem 1

Given an (m, K)-constrained weakly-hard system with nonlinear dynamics \(\dot{x}=f(x,u)\), sampling period \(\delta \), and safe set X, find a safe initial set \(X_0\) such that from any state \(x(0) \in X_0\), the system will always stay inside X.

To solve this problem, we first discretize the safe state set X into grids. We then try to find the grid set that satisfies both local safety and inductiveness. For each property, we build a directed graph, where each node corresponds to a grid and each directed edge represents the mapping between grids with respect to reachability. We can then leverage graph theory to construct the safe initial set. Experimental results demonstrate that our tool is effective for general nonlinear systems.

Fig. 2. The schematic diagram of SAW.

2 Algorithms and Tool Design

The schematic diagram of our tool SAW is shown in Fig. 2. The input is a model file that specifies the system dynamics, sampling period, safe region and other parameters, and a configuration file for Flow* [4] (which is set by default but can also be customized). Given this input, the tool works as follows (shown in Algorithm 1). The safe state set X is first uniformly partitioned into small grids \(\varGamma = \{v_1,v_2,\ldots ,v_{p^d}\}\), where \(X = v_1 \cup v_2 \cup \cdots \cup v_{p^d}\), \(v_i \cap v_j = \emptyset \) (\(\forall i\ne j\)), d is the dimension of the state space, and p is the number of partitions in each dimension (Line 1 in Algorithm 1). The tool then tries to find the grids that satisfy local safety. It first invokes a reachability graph constructor to build a one-step reachability graph \(G_1\) that describes how the system evolves in one sampling step (Line 2). Then, a dynamic programming (DP) based approach finds the largest set \(\varGamma _S = \{v_{s_1}, v_{s_2}, \ldots , v_{s_n}\}\) from which the system will not leave the safe region. The K-step reachability graph \(G_K\) is also built during the DP process based on \(G_1\) (Line 3). After that, the tool searches for the largest subset \(\varGamma _I\) of \(\varGamma _S\) that satisfies inductiveness, using a reverse search algorithm (Line 4). The algorithm outputs \(\varGamma _I\) as the target set \(X_0\) (Line 5).
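The uniform partitioning in Line 1 can be sketched as follows (a simplified stand-alone version; SAW's actual data structures may differ). Each grid is represented by its lower and upper corners:

```python
import itertools

def partition_safe_set(low, high, p):
    """Uniformly split the d-dimensional box [low, high] into p^d
    axis-aligned grids, returned as (lower_corner, upper_corner) pairs."""
    d = len(low)
    width = [(high[k] - low[k]) / p for k in range(d)]
    grids = []
    # Enumerate all p^d index tuples (i_1, ..., i_d).
    for idx in itertools.product(range(p), repeat=d):
        lo = tuple(low[k] + idx[k] * width[k] for k in range(d))
        hi = tuple(low[k] + (idx[k] + 1) * width[k] for k in range(d))
        grids.append((lo, hi))
    return grids
```

For the two-dimensional safe set \([-3, 3] \times [-3, 3]\) with \(p = 4\), this yields 16 grids.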


The key functions of the tool are the reachability graph constructor, DP-based local safety set search, and reverse inductiveness set search. In the following sections, we introduce these three functions in detail.


2.1 Reachability Graph Construction

Integrating the dynamic system equations is often the most time-consuming part of tracing the evolution of the states. In this function, we use Flow* to obtain a valid over-approximation of the reachable set (represented as flowpipes) starting from every grid after a sampling period \(\delta \). Given a positive integer n, the graph constructed from the reachable sets after n sampling periods, \(n \cdot \delta \), is called an n-step graph \(G_n\). Since the reachability of all the grids in any sampling step is independent under our grid assumption, we first build \(G_1\) and then reuse \(G_1\) to construct \(G_K\) later, without redundant computation of reachable sets.

The one-step graph is built with Algorithm 2. We consider deadline miss and deadline meet separately, corresponding to two categories of edges (Line 3). For a grid v, if the one-step reachable set \(R_1(v)\) intersects with the unsafe set \(X^c\), then v is considered an unsafe grid and we let its reachable grid set be \(\emptyset \). Otherwise, if \(R_1(v)\) intersects with another grid \(v'\) under the deadline miss/meet event e, then we add a directed edge \((v,e,v')\) from v to \(v'\) with label e. The number of outgoing edges for each grid node v is bounded by \(p^d\). Assuming that Flow* computes the flowpipe for one internal time step \(\epsilon \) in O(1), the overall time complexity is \(O(|\varGamma | \cdot p^d \cdot \delta / \epsilon )\).
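The edge construction can be sketched as below. Here `reach` is a hypothetical stand-in for the Flow*-based flowpipe computation and `intersects` is an assumed overlap test; this illustrates the graph-building logic, not SAW's actual implementation:

```python
def build_one_step_graph(grids, reach, intersects, unsafe):
    """Construct the one-step graph G_1 as a list of labeled edges.

    grids:        list of grid cells
    reach(v, e):  over-approximate one-step reachable set from cell v
                  under event e (0 = deadline met, 1 = deadline missed)
    intersects(a, b): True if the two sets overlap
    unsafe:       the complement X^c of the safe set
    """
    edges = []
    for v in grids:
        for e in (0, 1):                       # deadline meet / miss
            R = reach(v, e)
            if intersects(R, unsafe):
                continue                       # unsafe grid: reachable set is empty
            for v2 in grids:
                if intersects(R, v2):
                    edges.append((v, e, v2))   # directed edge from v to v2
    return edges
```

For a quick sanity check, one can model grids as one-dimensional intervals and let a deadline miss enlarge the reachable interval.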


The K-step graph \(G_K\) is built for finding the grid set that satisfies local safety and inductiveness. To avoid redundant computation of reachable sets, we construct \(G_K\) from \(G_1\) by traversing K-length paths, as a by-product of the local safety set search procedure.

2.2 DP-Based Local Safety Set Search

We propose a bottom-up dynamic programming approach that considers all possible paths, utilizing the overlapping-subproblems property (Algorithm 3). Let \(\text {DP}(v, n, k)\) denote the reachable grid set at step K derived from a grid v at step \(k \le K\) with \(n \le m\) deadline misses so far. To be consistent with Algorithm 2, this set is empty if and only if it does not satisfy local safety. We need to derive \(\text {DP}(v, 0, 0)\). The base case at step K is straightforward: \(\forall v \in \varGamma , n \in [0, m]\), \(\text {DP}(v, n, K) = \{v\}\). The transition is defined as:

$$\begin{aligned} \forall k \in [0, K - 1]:\ \text {DP}(v, n, k) = \bigcup \limits _{\forall v', e: (v, e, v') \in E_1, n + e \le m} \text {DP}(v', n + e, k + 1). \end{aligned}$$

If any set on the right-hand side is empty, or there is no outgoing edge from v for an event e such that \(n + e \le m\), we let \(\text {DP}(v, n, k) = \emptyset \). Finally, we have \(\varGamma _S = \{v \mid \text {DP}(v, 0, 0) \ne \emptyset \}\) and \(E_K = \{(v, v') \mid v' \in \text {DP}(v, 0, 0)\}\).

We use bitsets to implement the set unions, which yields up to a 64-fold speedup on a 64-bit architecture. The time complexity is \(O(|\varGamma |^2 / bits \cdot p^d \cdot K^2 + |\varGamma |^2)\), where bits is the machine word size. The \(|\varGamma |^2\) term comes from constructing \(G_K\).
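As an illustration of the recurrence (without the bitset optimization), the DP can be sketched in plain Python. We assume the labeled edges of \(G_1\) as produced by Algorithm 2, and read the emptiness condition adversarially: whenever the miss budget still allows an event, that event must lead to safe successors:

```python
def local_safety_dp(grids, edges, m, K):
    """Compute Gamma_S and the K-step edges E_K from the one-step edges.

    edges: labeled edges (v, e, v') of G_1, with e = 1 for a deadline miss.
    DP[(v, n, k)] is the grid set reachable at step K from v at step k
    with n misses so far; the empty set encodes a local-safety violation.
    """
    succ = {}                                    # (v, e) -> successor grids
    for v, e, v2 in edges:
        succ.setdefault((v, e), []).append(v2)

    DP = {(v, n, K): {v} for v in grids for n in range(m + 1)}
    for k in range(K - 1, -1, -1):               # bottom-up over steps
        for v in grids:
            for n in range(m + 1):
                targets, safe = set(), True
                for e in (0, 1):
                    if n + e > m:
                        continue                 # a further miss cannot occur
                    nxts = succ.get((v, e), [])
                    if not nxts or any(not DP[(v2, n + e, k + 1)] for v2 in nxts):
                        safe = False             # some admissible event is unsafe
                        break
                    for v2 in nxts:
                        targets |= DP[(v2, n + e, k + 1)]
                DP[(v, n, k)] = targets if safe else set()

    gamma_s = [v for v in grids if DP[(v, 0, 0)]]
    E_K = [(v, v2) for v in gamma_s for v2 in DP[(v, 0, 0)]]
    return gamma_s, E_K
```

For example, with grids {a, b}, edges a→a (meet), a→b (miss), b→b (meet), and no safe miss-successor from b, only a satisfies local safety for (m, K) = (1, 2).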


2.3 Reverse Inductiveness Set Search

To find the grid set \(\varGamma _I \subseteq \varGamma _S\) that satisfies inductiveness, we propose a reverse search algorithm (Algorithm 4). Basically, instead of directly searching for \(\varGamma _I\), we obtain \(\varGamma _I\) by removing every grid v in \(\varGamma _S\) from which there exists a path reaching \(\varGamma _U = \varGamma - \varGamma _S\). Specifically, Algorithm 4 starts by initializing \(\varGamma _U = \varGamma - \varGamma _S\) (Line 1). \(\varGamma _U\) then iteratively absorbs the grids that can reach \(\varGamma _U\) in K sampling periods, until a fixed point is reached (Lines 2–3). Finally, \(\varGamma _I = \varGamma - \varGamma _U\) is the largest set that satisfies inductiveness. The search is implemented as a breadth-first search (BFS) on the reversed graph of \(G_K\), and the time complexity is \(O(|\varGamma |^2)\).
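The reverse search can be sketched as a BFS over the reversed K-step graph (again an illustration of the algorithm, not SAW's implementation):

```python
from collections import deque

def inductive_set(grids, gamma_s, edges_K):
    """Compute Gamma_I from Gamma_S and the K-step edges (v, v') of G_K."""
    reverse = {v: [] for v in grids}       # v' -> grids that reach v' in K steps
    for v, v2 in edges_K:
        reverse[v2].append(v)
    unsafe = set(grids) - set(gamma_s)     # Gamma_U = Gamma - Gamma_S
    queue = deque(unsafe)
    while queue:                           # absorb grids that can reach Gamma_U
        u = queue.popleft()
        for v in reverse[u]:
            if v not in unsafe:
                unsafe.add(v)
                queue.append(v)
    return set(grids) - unsafe             # Gamma_I, the fixed point
```

For example, if grid b in \(\varGamma _S\) has a K-step edge into \(\varGamma _U\), both b and any grid reaching b are removed.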

3 Example Usage

Example 1

Consider the following linear control system from  [17]:

$$\begin{aligned} \begin{bmatrix} \dot{x_1} \\ \dot{x_2} \end{bmatrix}=\begin{bmatrix} 0 &{} 1 \\ 0 &{} -0.1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}+u ,\ \ \text {where}\ \ u=\begin{bmatrix} 0 &{} 0 \\ -0.375 &{} -1.15 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}. \end{aligned}$$

\(\delta = 0.2\) and \(step\_size = 0.01\). The initial state set is \(x_1 \in [-1, 1]\) and \(x_2 \in [-1, 1]\). The safe state set is \(x_1 \in [-3, 3]\) and \(x_2 \in [-3, 3]\). Following the input format shown in Listing 1.1, we prepare the model file as Listing 1.2.


Then, we run our program with the model file.


To further ease the use of our tool, we also pre-compiled it for the x86_64 Linux environment. In this environment, users do not need to compile the tool and can directly invoke saw_linux_x86_64 instead of saw (which is only available after manually compiling the tool).


The program output is shown in Listing 1.3. Line 6 shows the number of edges of \(G_1\). Lines 8–10 provide information on \(G_K\), including the numbers of edges and nodes. Line 12 prints the safe initial set \(X_0\). Our tool then determines whether the given initial set is safe by checking whether it is a subset of \(X_0\).

Table 1. Benchmark setting. ODE denotes the ordinary differential equation of the example, \(\pi \) denotes the control law, and \(\delta \) is the discrete control stepsize.

4 Experiments

We implemented a prototype of SAW integrated with Flow*. In this section, we first compare our tool with the one-dimension abstraction [17] on the full benchmarks from [17] (#1–#4), as well as on additional examples with no guarantee of exponential stability from related work (#5 and #6) [24]. Table 1 shows the benchmark settings, including the (m, K) constraint set for each benchmark. Then, we show how different parameter settings affect the verification results of our tool. All experiments were run on a desktop with a 6-core 3.60 GHz Intel Core i7 processor.

Table 2. Experimental results. ExpParam denotes the parameters of the exponential stability, where “N/A” means that either the system is not exponentially stable or the parameters are not available. Initial state set denotes the set that needs to be verified. The last two columns denote the verification results of the one-dimension abstraction  [17] and SAW, respectively. “—” means that no safe initial set \(X_0\) is found by the tool. p represents the partition number for each dimension in SAW. Time (in seconds) represents the execution time of SAW.

4.1 Comparison with One-Dimension Abstraction

Table 2 shows the experimental results. It is worth noting that the one-dimension abstraction cannot find the safe initial set in most cases from [17]. In fact, it only works effectively for a limited set of (m, K) constraints, e.g., when no consecutive deadline misses are allowed. For general (m, K) constraints, the one-dimension abstraction performs much worse due to its over-conservativeness. Furthermore, without exponential stability, the one-dimension abstraction based approach is not applicable to benchmarks #5 and #6. Note that for benchmark #2, the one-dimension abstraction obtains a non-empty safe initial set \(X_0\) which, however, does not contain the given initial state set. Thus we use “No” instead of “—” to represent this result. Conversely, for every example, our tool computes a feasible \(X_0\) that contains the initial state set (showing that the initial state set is safe), which we denote as “Yes”.

4.2 Impact of (m, K), Granularity, and Stepsize

(m, K). We take benchmark #1 (Example 1 in Sect. 3) as an example and run our tool under different (m, K) values. Figures 3a, 3b, and 3c demonstrate that, for this example, the size of the local safety region \(\varGamma _S\) shrinks when K gets larger, while the size of the inductiveness region \(\varGamma _I\) grows. \(\varGamma _S\) becomes the same as \(\varGamma _I\) when K is sufficiently large, in which case m is the primary parameter that influences the size of \(\varGamma _I\).

Granularity. We take benchmark #3 as an example, and run our tool with different partition granularities. The results (Figs. 3d, 3e, 3f) show that \(\varGamma _I\) grows when p gets larger. The choice of p has significant impact on the result (e.g., the user-defined initial state set cannot be verified when \(p = 15\)).

Stepsize. We take benchmark #5 as an example, and run our tool with different stepsizes of Flow*. With the same granularity \(p = 100\), we get the safe initial state set \(\varGamma _I = [-1.56, 1.32]\) when \(step\_size = 0.1\), but \(\varGamma _I\) is empty when \(step\_size = 0.3\). The computation times are 4.713 s and 1.835 s, respectively. Thus, we can see that there is a trade-off between the computational efficiency and the accuracy.

Fig. 3. Results under different (m, K) values (3a, 3b, 3c) and different granularities (3d, 3e, 3f). The green solid region is \(\varGamma _I\). The slashed region is \(\varGamma _S\). The blue rectangle is the initial state set that needs to be verified. (Color figure online)

5 Conclusion

In this paper, we present a new tool SAW to compute a tight estimation of the safe initial set for infinite-time safety verification of general nonlinear weakly-hard systems. The tool first discretizes the safe state set into grids. By constructing a reachability graph for the grids based on existing tools, it leverages graph theory and dynamic programming techniques to compute the safe initial set. We demonstrate that our tool significantly outperforms the state-of-the-art one-dimension abstraction approach, and analyze how different constraints and parameters affect the results. Future work includes further speedup of the reachability graph construction via parallel computing.