1 Introduction

During the last three decades, a popular approach for specifying real-time systems has been based on Timed Automata (TAs) [1]. TAs are powerful for designing real-time models via explicit clocks, where real-time constraints are captured by explicitly setting/resetting clock variables. A number of automatic verification tools for TAs have proven successful [2,3,4,5]. Industrial case studies show that requirements for real-time systems are often structured into phases, which are then composed sequentially, in parallel, or alternatively [6, 7]. However, TAs lack high-level compositional patterns for hierarchical design; moreover, users often need to manipulate clock variables manually, with carefully calculated clock constraints. The process is tedious and error-prone.

There have been translation-based approaches to building verification support for compositional timed-process representations. For example, Timed Communicating Sequential Processes (TCSP), Timed Communicating Object-Z (TCOZ) and Statechart-based hierarchical Timed Automata are well suited for presenting compositional models of complex real-time systems. Prior works [8, 9] systematically translate TCSP/TCOZ/Statechart models into flat TAs so that the model checker Uppaal [3] can be applied. However, there are two possible insufficiencies: the expressive power is limited to that of finite-state automata; and there is always a gap between the verified logic and the actual code implementation.

In this work, we investigate an alternative approach to verifying real-time systems. We propose a novel temporal specification language, Timed Effects (TimEffs), which enables compositional verification via a Hoare-style forward verifier and a term rewriting system (TRS). More specifically, we specify system behaviors in the form of TimEffs, which integrate Kleene algebra with dependent values and arithmetic constraints, bringing real-time abstractions into traditional linear temporal logics. For example, one safety propertyFootnote 1 can be expressed in TimEffs as follows. Here \( \wedge \) connects the arithmetic formula and the timed trace; the operator \( {\mathrel {{\#}}} \) binds time variables to traces (here t is a time bound); \( \_ \) is a wildcard matching any event; the Kleene star \( \star \) denotes trace repetition. The formula \( {\mathrm {\Phi }} \) corresponds to a formula in metric temporal logic (MTL). Furthermore, the time bounds can be dependent on the program inputs, as shown in Fig. 1.

Fig. 1. Value-dependent specification.

Function addNSugar takes a parameter n, representing the portions of sugar to add. When n=0, it raises an event to mark the end of the process. Otherwise, it adds one portion of sugar by calling addOneSugar(), then recursively calls addNSugar with parameter n-1. The use of timeout is standard [11]: it executes a block of code e after the specified time d. Therefore, the time spent on adding one portion of sugar is more than one time unit. Note that \( \epsilon {\mathrel {{\#}}}t \) refers to an empty trace which takes time t. Both preconditions require no arithmetic constraints and place no temporal constraints on the history traces. The postcondition of addNSugar(n) indicates that the method generates a finite trace which takes no less than n time units to finish.
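Based on this description, the recursive structure of addNSugar can be sketched executably. This is our own minimal model, not the paper's \( C^{t} \) code: the event label AddSugar, the trace-as-list representation, and the fixed lower bound of 1 time unit per portion are assumptions; the label EndSugar follows the paper's naming.

```python
# Minimal model of Fig. 1's addNSugar: a trace is a list of (event, duration)
# pairs. "AddSugar" is an assumed label; "EndSugar" follows the paper.
def add_one_sugar():
    # adding one portion takes more than one time unit; 1 is the lower bound
    return [("AddSugar", 1)]

def add_n_sugar(n):
    if n == 0:
        return [("EndSugar", 0)]          # raise the end-of-process event
    # add one portion, then recurse with n-1
    return add_one_sugar() + add_n_sugar(n - 1)
```

The total duration of add_n_sugar(n) is then at least n time units, matching the lower bound stated by the postcondition.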

Although these examples are simple, they show the benefits of deploying value-dependent time bounds, which are beyond the capability of TAs. Essentially, TimEffs define symbolic TAs, which stand for a (possibly infinite) set of concrete transition systems. Moreover, we deploy a Hoare-style forward verifier to soundly reason about behaviors at the source level, with respect to a well-defined operational semantics. This approach provides direct (as opposed to techniques that require manual, remote modeling processes) and modular verification – where modules can be replaced by their already verified properties – for real-time systems, which is not possible with any existing technique. Furthermore, we develop a novel TRS, inspired by Antimirov and Mosses' algorithmFootnote 2 [12], but solving language inclusions between the more expressive TimEffs. In short, the main contributions of this work are:

  1.

    Language Abstraction: we formally define a core language \( C^{t} \) by giving its syntax and operational semantics, generalizing real-time systems with mutable variables and timed behavioral patterns, e.g., delay, timeout, deadline.

  2.

    Novel Specification: we propose TimEffs, defining their syntax and semantics, and gaining expressive power beyond traditional linear temporal logics.

  3.

    Forward Verifier: we establish a sound effect system to reason about temporal behaviors of given programs. The verifier triggers the back-end solver TRS.

  4.

    Efficient TRS: we present the rewriting rules to (dis)prove the inclusion relations between the actual behaviors and the given specifications, both in TimEffs.

  5.

    Implementation and Evaluation: we prototype the automated verification system, prove its soundness, report on case studies and experimental results.

Fig. 2. System Overview.

2 Overview

An overview of our automated verification system is given in Fig. 2. The system consists of a forward verifier and a TRS, i.e., the rounded boxes. The input of the forward verifier is a \( C^{t} \) program annotated with temporal specifications written in TimEffs. The input of the TRS is a pair of effects, LHS and RHS, referring to the inclusion LHS \( \sqsubseteq \) RHSFootnote 3 to be checked (LHS and RHS refer to the left- and right-hand-side effects respectively). The forward verifier calls the TRS to solve proof obligations. Next, we use Fig. 3, which simulates a coffee machine that dynamically adds sugar based on a user-supplied number, to highlight our main methodology.

2.1 TimEffs. We define Hoare-triple-style specifications for each function, which leads to a compositional verification strategy where static checking can be done locally. The precondition of makeCoffee specifies that the input value n is non-negative, and requires that, before entering this function, the history trace must contain the required event at its tail. The verification fails if the precondition is not satisfied at the call sites. Line 17 sets a five-time-unit deadline (i.e., at most five portions of sugar per coffee) while calling addNSugar (defined in Fig. 1); it then emits an event with a deadline, indicating that the coffee-pouring process takes no more than four time units. The precondition of main requires no arithmetic constraints (expressed as true) and an empty history trace. The postcondition of main specifies that before the final event happens there is no occurrence of the forbidden event (! indicates the absence of events), and that the whole process takes no more than nine time units to hit the final event.

Fig. 3. To make coffee with three portions of sugar within nine time units.

TimEffs support further features such as disjunctions, guards, parallelism, assertions, etc. (cf. Sec. 3.3), providing detailed information on branching properties (different arithmetic conditions on the inputs lead to different effects) and required history traces (by defining prior effects in the precondition). These capabilities are beyond traditional timed verification and cannot be fully captured by any prior works [2,3,4,5, 8, 9]. Nevertheless, the increase in expressive power needs support from finer-grained reasoning and a more sophisticated back-end solver, discharged by our forward verifier and TRS.

Fig. 4. The forward verification examples (t1 and t2 are fresh time variables).

2.2 Forward Verification. Fig. 4 demonstrates the forward verification of functions addOneSugar and addNSugar, defined in Fig. 1. The effects states are captured as conditioned event sequences (cf. Sec. 4.1). To facilitate the illustration, we label the steps (1) to (11) and mark the deployed forward rules (cf. Sec. 4.1). The initial states (1) and (4) are obtained from the preconditions by the \( \small [FV\text {-}Meth] \) rule. States (5), (7) and (10) are obtained by \( \small [FV\text {-}Cond] \), which enforces the conditional constraints on the effects states and unions the effects accumulated from the two branches. State (6) is obtained by \( \small [FV\text {-}Event] \), which concatenates an event to the current effects. The intermediate states (8) and (9) are obtained by \( \small [FV\text {-}Call] \). Before each function call, \( \small [FV\text {-}Call] \) invokes the TRS to check whether the current effects states satisfy the callee's precondition. If not, the verification fails; otherwise, it concatenates the callee's postcondition to the current states (the precondition check for step (8) is omitted here).

State (2) is obtained by \( \small [FV\text {-}Timeout] \), which adds a lower time-bound to an empty trace. After these state transformations, steps (3) and (11) invoke the TRS to check the inclusions between the final effects and the declared postconditions.

2.3 The TRS. With TimEffs as the specification language, and the forward verifier reasoning about the actual behaviors, we are interested in the following verification problem: given a program \( \mathcal {P} \) and a temporal specification \( {\mathrm {\Phi }}^{\prime } \), does the inclusion \( {\mathrm {\Phi }}^{\mathcal {P}} \sqsubseteq {\mathrm {\Phi }}^{\prime } \) hold? Typically, checking the inclusion/entailment between the concrete program effects \( {\mathrm {\Phi }}^{\mathcal {P}} \) and the expected property \( {\mathrm {\Phi }}^{\prime } \) proves that the program \( \mathcal {P} \) will never lead to unsafe traces which violate \( {\mathrm {\Phi }}^{\prime } \).

Our TRS is an extension of Antimirov and Mosses' algorithm [12], which decides inclusions between two regular expressions (REs) through an iterated process of checking inclusions of their partial derivatives [13]. There are two basic rules: \({{ [Disprove] }}\) infers false from trivially inconsistent inclusions; and \({{ [Unfold] }}\) applies Definition 2 to generate new inclusions.

Definition 1 (Derivative)

Given any formal language S over an alphabet \(\varSigma \) and any string \(u{\in } \varSigma ^{*}\), the derivative of S with respect to u is defined as:

\({u^{\text {-}1}S{=}\{w{\in } \varSigma ^{*}\mid uw{\in } S\}}\).

Definition 2 (REs Inclusion)

For REs \( r \) and \( s \), \( r \sqsubseteq s \Leftrightarrow \forall A {\in } \varSigma .\ A^{\text {-}1}r \sqsubseteq A^{\text {-}1}s \).

Definition 3

(TimEffs Inclusion). For TimEffs \( {\mathrm {\Phi }}_1 \) and \( {\mathrm {\Phi }}_2 \),

Similarly, Definition 3 unfolds the inclusions between TimEffs, where the derivative is taken w.r.t. an event with time bound t. Termination of the rewriting is guaranteed because the set of derivatives to be considered is finite, and possible cycles are detected using memoization (cf. Table 5) [14]. Next, we use Table 1 to demonstrate how the TRS automatically proves that the final effects of main satisfy its postcondition (shown at step (11) in Fig. 4). We mark the deployed rewriting rules (cf. Sec. 5).

In Table 1, the first step renames the time variables to avoid name clashes between the antecedent and the consequent. The next step splits the proof tree into two branches, according to the different arithmetic constraints, by rule \(\texttt {[LHS\text {-}OR]}\). In the first branch, a step eliminates the event ES from the head of both sides, by rule \(\texttt {[UNFOLD]}\); the branch is then proved, because the consequent evidently contains \( \epsilon \) when tR=0. In the second branch, a step eliminates a time duration \( \epsilon {\mathrel {{\#}}}\texttt{t2} \) from both sides; accordingly, rule \(\texttt {[UNFOLD]}\) subtracts the duration from the consequent's bound, i.e., (tR-t2). Similarly, the next step eliminates \({\texttt {ES}{\mathrel {{\#}}}\texttt {tL}}\) from both sides, adding the corresponding equality to the unification constraints. The final step discharges the remaining arithmetic obligationFootnote 4; therefore, the proof succeeds.

Table 1. An inclusion proving example. \( (I) \) is the right-hand-side sub-tree of the main rewriting proof tree. (ES stands for the event EndSugar.)
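The derivative-based procedure behind Definitions 1 and 2, together with the rules \(\texttt {[UNFOLD]}\), \(\texttt {[DISPROVE]}\) and \(\texttt {[REOCCUR]}\) used above, can be sketched for plain REs as follows. This is a minimal sketch with our own tuple encoding and simplifier; TimEffs additionally carry arithmetic constraints and time bounds.

```python
# REs as tuples: ("0",)=bottom, ("1",)=epsilon, ("a", c)=literal c,
# ("seq", r, s), ("or", r, s), ("star", r).
def nullable(r):
    t = r[0]
    if t in ("0", "a"): return False
    if t in ("1", "star"): return True
    if t == "seq": return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])            # "or"

def simp(r):  # light simplification so derivatives stay finite
    t = r[0]
    if t in ("0", "1", "a"): return r
    if t == "star": return ("star", simp(r[1]))
    a, b = simp(r[1]), simp(r[2])
    if t == "seq":
        if ("0",) in (a, b): return ("0",)
        if a == ("1",): return b
        if b == ("1",): return a
        return ("seq", a, b)
    if a == ("0",) or a == b: return b                 # "or"
    if b == ("0",): return a
    return ("or", a, b)

def deriv(r, c):  # Brzozowski derivative c^-1 r (Definition 1)
    t = r[0]
    if t in ("0", "1"): return ("0",)
    if t == "a": return ("1",) if r[1] == c else ("0",)
    if t == "or": return simp(("or", deriv(r[1], c), deriv(r[2], c)))
    if t == "star": return simp(("seq", deriv(r[1], c), r))
    d = ("seq", deriv(r[1], c), r[2])
    return simp(("or", d, deriv(r[2], c)) if nullable(r[1]) else d)

def alphabet(r):
    if r[0] == "a": return {r[1]}
    return set().union(set(), *(alphabet(x) for x in r[1:] if isinstance(x, tuple)))

def included(r, s, hyps=frozenset()):
    if (r, s) in hyps: return True                     # [REOCCUR]
    if nullable(r) and not nullable(s): return False   # [DISPROVE]
    hyps = hyps | {(r, s)}                             # memoize the goal
    return all(included(deriv(r, c), deriv(s, c), hyps)  # [UNFOLD]
               for c in alphabet(r) | alphabet(s))
```

For instance, this check validates A⋆ ⊑ (A|B)⋆ and refutes the converse, mirroring the proof/disproof structure of Table 1 without its arithmetic side conditions.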

2.4 Verifying Fischer's Mutual Exclusion Protocol. Fig. 5 presents

Fig. 5. Fischer's mutual exclusion algorithm.

the classical Fischer's mutual exclusion protocol in \( C^{t} \). Global variables x and cs indicate ‘which process attempted to access the critical section most recently’ and ‘the number of processes accessing the critical section’ respectively. The main procedure is a parallel composition of three processes, where d and e are two constants. Each process attempts to enter the critical section when x is -1, i.e., when no other process is currently attempting. Once a process is active (i.e., reaches line 6), it sets x to its identity number i within d time units, captured by the deadline construct. Then it idles for e time units, captured by the delay construct, and checks whether x still equals i. If so, it safely enters the critical section; otherwise, it restarts from the beginning. The quantitative timing constraint \(\texttt {d<e}\) plays an important role in this algorithm to guarantee mutual exclusion. One way to prove mutual exclusion is to show that \(\texttt {cs}{\le }1\) always holds. Alternatively, using event temporal logic, we can show that an occurrence of Critical always implies that the next event is Exit. We show in Sec. 6 that our prototype system can verify such algorithms symbolically.

3 Language and Specifications

3.1 The Target Language

We define the core language \( {{ C^{t} }} \) in Fig. 6, which is built on C syntax and provides support for timed behavioral patterns.

Fig. 6. A core first-order imperative language with timed constructs via implicit clocks.

Here, \( c \) and \( b \) stand for integer and Boolean constants, and \( mn \) and \( x \) are meta-variables, drawn from var (the countably infinite set of arbitrary distinct identifiers). A program \( \mathcal {P} \) comprises a list of global variable initializations \( \alpha ^* \) and a list of method declarations \( {meth^*} \). We use the \( * \) superscript to denote a finite list of items; for example, \( {x^*} \) refers to a list of variables \( x_1, ..., x_n \). Each method \( meth \) has a name \( mn \) and an expression-oriented body \( e \), and is associated with a precondition \( {\mathrm {\Phi }}_{pre} \) and a postcondition \( {\mathrm {\Phi }}_{post} \) (the specification syntax is given in Fig. 7). \( C^{t} \) allows each iterative loop to be optimized to an equivalent tail-recursive method, where mutation on parameters is made visible to the caller.

Expressions comprise: values \( v \); guarded processes \( [v]e \), where if \( v \) is true it behaves as \( e \), and otherwise it idles until \( v \) becomes true; method calls \( mn({v^*}) \); sequential composition \( e_1;e_2 \); parallel composition \( e_1 || e_2 \), where \( e_1 \) and \( e_2 \) may communicate via shared variables; conditionals \( {if}\ v\ e_1\ e_2 \); and event-raising expressions, where the event comes from the finite set of event labels \( \varSigma \). Without loss of generality, events can be further parametrized with one value \( v \) and a set of assignments \( \alpha ^* \) that update the mutable variables. Moreover, a number of timed constructs can be used to capture common real-time system behaviors; these are explained via the operational semantics rules in Sec. 3.2.

3.2 Operational Semantics of \( C^{t} \)

To build the semantics of the system model, we define the notion of a configuration in Definition 4, to capture the global system state during system execution.

Definition 4 (System configuration)

A system configuration \( \zeta \) is a pair \((\mathcal {S}, e)\) where \(\mathcal {S}\) is a variable valuation function (or a stack) and e is an expression.

A transition of the system is of the form \(\zeta \xrightarrow []{l} \zeta ^\prime \), where \(\zeta \) and \(\zeta ^\prime \) are the system configurations before and after the transition respectively. Transition labels \( l \) include: d, denoting a non-negative integer; \(\tau \), denoting an invisible event; and labels denoting observable events. For example, \(\zeta \xrightarrow []{\text {d}} \zeta ^\prime \) denotes a d time-units elapse. Next, we present the firing rules associated with the timed constructs.

Process \(\texttt { delay} [v]\) idles for exactly \( t \) time units, where \( t \) is the value of \( v \). Rule \( [delay_1] \) states that the process may idle for any amount of time, provided it is less than or equal to \( t \); rule \( [delay_2] \) states that the process terminates immediately when \( t \) becomes \( 0 \).

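Assuming discrete integer-valued time, the two rules can be read operationally as the following minimal sketch (the tuple representation of processes is ours, covering only this fragment of \( C^{t} \)):

```python
# [delay1]: up to t time units may elapse; [delay2]: delay 0 terminates
# immediately via an invisible (tau) transition.
# A process here is just ("delay", t) or ("done",).
def step_time(proc, d):
    kind, *rest = proc
    if kind == "delay" and 0 <= d <= rest[0]:
        return ("delay", rest[0] - d)      # remaining delay shrinks by d
    return None                            # no d-labelled transition

def step_tau(proc):
    if proc == ("delay", 0):
        return ("done",)                   # terminate once t reaches 0
    return None
```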

In \(e_1\ \texttt{timeout} [v]\ e_2\), the first observable event of \(e_1\) shall occur before t time units have elapsed; otherwise, \( e_2 \) takes over control after exactly t time units. Note that the usage of timeout in Fig. 1 is a special case where \( e_1 \) never starts by default.


Process \(\texttt{deadline}\ [v] \ e\) behaves exactly as \( e \), except that it must terminate before \( t \) time units. The guarded process \( [v]e \) behaves as \( e \) when \( v \) is true; otherwise it idles until \( v \) becomes true. Process \(e_1\ \texttt{interrupt} [v] \ e_2\) behaves as \(e_1\) until \( t \) time units have elapsed, and then \(e_2\) takes over. We leave the remaining rules to [16].

Fig. 7. Syntax of TimEffs.

3.3 The Specification Language

We embed TimEffs specifications into the Hoare-style verification system, using \( {\mathrm {\Phi }}_{pre} \) and \( {\mathrm {\Phi }}_{post} \) to capture the temporal pre/post conditions. As shown in Fig. 7, TimEffs are constructed from a conditioned event sequence \( {\pi }\wedge {\theta } \) or an effects disjunction \( {\mathrm {\Phi }}_1 \vee {\mathrm {\Phi }}_2\). Timed sequences comprise nil (\(\bot \)); the empty trace \( \epsilon \); a single event \( ev \); concatenation \( {\theta }_1\cdot {\theta }_2 \); disjunction \( {\theta }_1\vee {\theta }_2 \); parallel composition \( {\theta }_1 || {\theta }_2 \); and a block waiting for a constraint to be satisfied, \( \pi ?{\theta } \). We introduce a new operator \( {\mathrel {{\#}}} \): \( {\theta }{\mathrel {{\#}}}t \) represents the trace \( {\theta } \) taking \( t \) time units to complete, where \( t \) is a real-time term. A timed sequence can also be constructed by \( {\theta }^\star \), representing zero or more repetitions of the trace \( {\theta } \). For single events, \( (v, \alpha ^*) \) stands for an observable event, parameterized by \( v \) and the assignment operations \( \alpha ^* \); \( \tau (\pi ) \) is an invisible event, parameterized with a pure formula \( \pi \)Footnote 5.

Events can also be negated, referring to all events not carrying a given label, or a wildcard \( \_ \), which matches all events. We use \( {\pi } \) to denote a pure formula which captures the (Presburger) arithmetic conditions on terms and program parameters. We use \( {bop(}{t_1, t_2}{)} \) to represent binary atomic formulas over terms (including \( {=}, \ {>}, \ {<}, \ {\ge }\ \) and \( {\le } \)). Terms consist of constant integer values \( c \); integer variables \( x \); and simple computations of terms, \( t_1{\mathtt{+}}t_2 \) and \( t_1\text {-}t_2 \).
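The grammar of Fig. 7 maps naturally onto an algebraic datatype. Below is a sketch of one possible encoding in Python dataclasses; all constructor names are ours, with the paper's notation in the comments.

```python
from dataclasses import dataclass
from typing import Union

Term = Union[int, str, tuple]   # c | x | ("+", t1, t2) | ("-", t1, t2)
Pi = tuple                      # pure formulas, e.g. ("<=", t1, t2)

@dataclass(frozen=True)
class Bot: pass                              # nil (bottom)
@dataclass(frozen=True)
class Emp: pass                              # empty trace (epsilon)
@dataclass(frozen=True)
class Event:
    label: str                               # single event ev
@dataclass(frozen=True)
class Seq:
    left: "Theta"; right: "Theta"            # theta1 . theta2
@dataclass(frozen=True)
class Or:
    left: "Theta"; right: "Theta"            # theta1 \/ theta2
@dataclass(frozen=True)
class Par:
    left: "Theta"; right: "Theta"            # theta1 || theta2
@dataclass(frozen=True)
class Guard:
    cond: Pi; body: "Theta"                  # pi ? theta
@dataclass(frozen=True)
class Timed:
    body: "Theta"; bound: Term               # theta # t
@dataclass(frozen=True)
class Star:
    body: "Theta"                            # theta*

Theta = Union[Bot, Emp, Event, Seq, Or, Par, Guard, Timed, Star]
Effects = list    # a TimEffs formula: a list of (Pi, Theta) disjuncts
```

For example, the postcondition of addNSugar can be written as the single disjunct `(("<=", 0, "n"), Seq(Star(Timed(Event("AddSugar"), "t")), Event("EndSugar")))`, where the labels are illustrative.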

3.4 Semantic Model of Timed Effects

Let \( d, \mathcal {S}, \varphi {\models } {\mathrm {\Phi }} \) denote the model relation: under a stack \( \mathcal {S} \), a concrete execution trace \( \varphi \) takes \( d \) time units to complete, and together they satisfy the specification \( {\mathrm {\Phi }} \).

Fig. 8. Semantics of TimEffs.

To define the model, \( var \) is the set of program variables and \( val \) is the set of primitive values; \( d \), \( \mathcal {S} \) and \( \varphi \) are drawn from the following concrete domains: \( d \): \( \mathbb {N} \); \( \mathcal {S} \): \( var {\rightarrow } val \); and \( \varphi \): lists of events. As shown in Fig. 8, \(\mathrel {\texttt {++}}\) appends event sequences; [] denotes the empty sequence; \( [ev] \) represents the singleton sequence containing event \( ev \); and \( \llbracket {\pi } \rrbracket _\mathcal {S}{=} {True} \) states that \( \pi \) holds on the stack \( \mathcal {S} \). Notice that simple events, i.e., those without \( {\mathrel {{\#}}} \), are taken to happen instantaneously.

3.5 Expressiveness. TimEffs are similar to metric temporal logic (MTL), which is derived from LTL by indexing the temporal modal operators with sets of non-negative real numbers. As shown in Table 2, we are able to encode MTL operators into TimEffs, making specifications more intuitive and readable. The basic modal operators are: \( \square \) for “globally”; \( \Diamond \) for “finally”; \( \bigcirc \) for “next”; \( \mathcal {U} \) for “until”; and their past-time reversed versions: \( \overleftarrow{\square } \); \( \overleftarrow{\Diamond } \); \( {\ominus } \) for “previous”; and \( \mathcal {S} \) for “since”. In MTL, \( I \) is a time interval with concrete upper/lower bounds, whereas in TimEffs the bounds can be symbolic and dependent on program inputs.

Table 2. Examples of converting MTL formulae into TimEffs, with t\({\in } I\) applied.

4 Automated Forward Verification

4.1 Forward Rules

Forward rules are Hoare-style triples \( \mathcal {S}\vdash \{ \varPi , \varTheta \} \ e\ \{ \varPi ^\prime , \varTheta ^\prime \} \), where \(\mathcal {S}\) is the stack environment, and \( \{ \varPi , \varTheta \} \) and \( \{ \varPi ^\prime , \varTheta ^\prime \} \) are program states, i.e., disjunctions of conditioned event sequences \( {\pi }\wedge {\theta } \). The meaning of the transition is: \( \{ \varPi ^\prime , \varTheta ^\prime \} = \mathop {{\bigcup }}_{i{=}0}^{|\{ \varPi , \varTheta \}| \text {-} 1} \{ \varPi _i^\prime , \varTheta _i^\prime \}\ where \ {{ (\pi _i{\wedge }{\theta }_i) \in \{ \varPi , \varTheta \} }} \ and \vdash \{ \pi _i, {\theta }_i \} \ e\ \{ \varPi _i^\prime , \varTheta _i^\prime \} \)Footnote 6.

We present here the rules for the time-related constructs and leave the remaining rules to [16]. Rule \( \small [FV\text {-} Delay ] \) creates a trace \( \epsilon {\mathrel {{\#}}}t \), where \( t \) is fresh, and concatenates it to the current program state, together with the additional constraint \( t{=}v \). Rule \( \small [FV\text {-} Deadline ] \) computes the effects of \( e \) and adds an upper time bound to the results. Rule \( \small [FV\text {-} Timeout ] \) computes the effects of \( e_1 \) and \( e_2 \) from the starting state \( \{ \pi , \epsilon \} \). The final state is a union of the possible effects with the corresponding time bounds and arithmetic constraints. Note that \( hd(\varTheta _1) \) and \( tl(\varTheta _1) \) return the event head (cf. Definition 6) and the tail of \( \varTheta _1 \) respectively.

$$\begin{aligned}\begin{gathered} {{ \frac{ \begin{matrix} {[FV\text {-}Delay]}\\ {\theta }^\prime = {\theta }\cdot (\epsilon {\mathrel {{\#}}}t) \quad (t\ is \ fresh ) \end{matrix} }{\mathcal {S}\vdash \{ \pi , {\theta } \} \ \texttt{delay} [v] \ \{ \pi {\wedge } ( t {=}v), {\theta }^\prime \}} }} \ \ {{ \frac{ \begin{matrix} {[FV\text {-} Deadline ]}\\ \mathcal {S}\vdash \{ \pi , \epsilon \}\ e \ \{ \varPi _1, \varTheta _1\} \quad (t\ is\ fresh ) \end{matrix} }{\mathcal {S}\vdash \{ \pi , {\theta } \} \ \texttt{deadline} [v] \ e \ \{ \varPi _1 {\wedge } (t{\le }v), {\theta }\cdot (\varTheta _1{\mathrel {{\#}}}t) \} } }} \\ {{ \frac{ \begin{matrix} {[FV\text {-} Timeout ]}\\ \mathcal {S}\vdash \{\pi , \epsilon \}\ e_1 \ \{ \varPi _1, \varTheta _1\} \qquad \quad \mathcal {S}\vdash \{ \pi , \epsilon \}\ e_2 \ \{ \varPi _2, \varTheta _2 \} \qquad \quad {(t_1, t_2\ are \ fresh )} \\ \{ \varPi _f, \varTheta _f \} = \{ \varPi _1 {\wedge } t_1{<}v, ( hd(\varTheta _1){\mathrel {{\#}}}t_1) \cdot tl(\varTheta _1) \} \cup \{ \varPi _2 {\wedge } t_2{=}v, (\epsilon {\mathrel {{\#}}}t_2) \cdot \varTheta _2 \} \end{matrix} }{\mathcal {S}\vdash \{ \pi , {\theta } \} \ e_1\ \texttt{timeout} [v] \ e_2 \ \{ \varPi _f, {\theta }\cdot \varTheta _f \} } }} \\ {{ \frac{ \begin{matrix} {[FV\text {-} Interrupt ]}\\ \mathcal {S}\vdash \{\pi , \epsilon \}\ e_1 \ \{\varPi , \varTheta \} \qquad \varDelta = \mathop {{\bigcup }}_{i{=}0}^{|\{ \varPi , \varTheta \}| \text {-} 1} \aleph ^{ Interrupt(v, \pi _i) }_{ Interleave }({\theta }_i, \epsilon ) \qquad \mathcal {S}\vdash \{ \varDelta \}\ e_2 \ \{ \varPi ^\prime , \varTheta ^\prime \} \\ \end{matrix} }{{\mathcal {S}\vdash } \{ \pi , {\theta } \} \ e_1\ \texttt{interrupt} [v] \ e_2 \ \{ \varPi ^\prime , {\theta }\cdot \varTheta ^\prime \} } }} \end{gathered}\end{aligned}$$

\([FV\text {-} Interrupt ]\) computes the interruption interleavings of \( e_1 \)'s effects, which over-approximate all the possibilities: for example, for a trace of two events, an interruption at time \( t \) creates three possibilities. The rule then continues to compute the effects of \( e_2 \); lastly, it prepends the original history \( {\theta } \) to the final results. Algorithm 1 presents the interleaving algorithm for interruptions, where \({{\mathrel {\texttt {+}}}}\) unions program states (cf. Definition 7 and Definition 8 for the fst and D functions).
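The prefix-based over-approximation used by \([FV\text {-} Interrupt ]\) can be sketched on plain event lists, a simplification of Algorithm 1 that ignores time bounds and arithmetic constraints: the interrupt may fire after any prefix of \( e_1 \)'s trace.

```python
# A trace of k events yields k+1 interleavings, each ending with the
# interrupt event (the over-approximation of all interruption points).
def interrupt_interleavings(trace, intr):
    return [trace[:i] + [intr] for i in range(len(trace) + 1)]
```

For a two-event trace ["A", "B"] and interrupt event "I", this yields [["I"], ["A", "I"], ["A", "B", "I"]], i.e., the three possibilities mentioned above.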

Theorem 1 (Soundness of Forward Rules)

Given any system configuration \( \zeta {=} (\mathcal {S}, e) \), by applying the operational semantics rules, if \( (\mathcal {S}, e) {\rightarrow ^*} (\mathcal {S}^\prime , v) \) has execution time \( d \) and produces event sequence \( \varphi \); and for any history effect \( \pi {\wedge } {\theta } \), such that \( d_1, \mathcal {S}, \varphi _1 {\models } (\pi {\wedge } {\theta }) \), and the forward verifier reasons \( {\mathcal {S}}{\vdash } \{ \pi , {\theta } \} e \{ \varPi , \varTheta \} \), then \( \exists (\pi ^\prime {\wedge }{\theta }^\prime ) \in \{ \varPi , \varTheta \} \) such that \( (d_1{\mathrel {\texttt {+}}} d), \mathcal {S}^\prime , (\varphi _1 {\mathrel {\texttt {++}}} \varphi ) {\models } (\pi ^\prime {\wedge } {\theta }^\prime ) \).

(\( \zeta {\xrightarrow []{}^*} \zeta ^\prime \) denotes the reflexive, transitive closure of \( \zeta \xrightarrow []{} \zeta ^\prime \).)

Proof

See the technical report [16].

5 Temporal Verification via a TRS

The TRS is an automated entailment checker which proves language inclusions between TimEffs. It is triggered prior to function calls, for precondition checking, and at the end of verifying a function, for postcondition checking.

Given two effects \( {\mathrm {\Phi }}_1 \) and \( {\mathrm {\Phi }}_2 \), the TRS decides whether the inclusion \( {\mathrm {\Phi }}_1 \sqsubseteq {\mathrm {\Phi }}_2 \) is valid. During the effects rewriting process, the inclusions are in the form \( \varGamma \vdash {\mathrm {\Phi }}_1 \sqsubseteq ^{ {\mathrm {\Phi }}} {\mathrm {\Phi }}_2 \), a shorthand for \( \varGamma \vdash {\mathrm {\Phi }}\cdot {\mathrm {\Phi }}_1 \sqsubseteq {\mathrm {\Phi }}\cdot {\mathrm {\Phi }}_2 \). To prove such inclusions is to check whether all the possible timed traces in the antecedent \( {\mathrm {\Phi }}_1 \) are legitimately allowed in the timed traces described by the consequent \( {\mathrm {\Phi }}_2 \). Here \( \varGamma \) is the proof context, i.e., a set of effects inclusion hypotheses; and \( {\mathrm {\Phi }} \) is the history effects from the antecedent that have been used to match the effects from the consequent. The checking is initially invoked with \( \varGamma {=}\emptyset \) and \( {\mathrm {\Phi }}{=} True \wedge \epsilon \).

Effects Disjunctions. An inclusion with a disjunctive antecedent succeeds if both disjunctions entail the consequent. An inclusion with a disjunctive consequent succeeds if the antecedent entails either of the disjunctions.

$$\begin{aligned}\begin{gathered} \frac{ \begin{array}{c} {{{ \varGamma \vdash {\mathrm {\Phi }}_1 \sqsubseteq {\mathrm {\Phi }}\qquad \varGamma \vdash {\mathrm {\Phi }}_2 \sqsubseteq {\mathrm {\Phi }} }}} \end{array}}{\varGamma \vdash {\mathrm {\Phi }}_1 \vee {\mathrm {\Phi }}_2 \sqsubseteq {\mathrm {\Phi }}}\ {{ [LHS\text {-}OR] }} \ \ \frac{ \begin{array}{c} {{{ \varGamma \vdash {\mathrm {\Phi }}\sqsubseteq {\mathrm {\Phi }}_1 \quad or \quad \varGamma \vdash {\mathrm {\Phi }}\sqsubseteq {\mathrm {\Phi }}_2 }}} \end{array}}{\varGamma \vdash {\mathrm {\Phi }}\sqsubseteq {\mathrm {\Phi }}_1 \vee {\mathrm {\Phi }}_2}\ {{ [RHS\text {-}OR] }} \end{gathered}\end{aligned}$$

Now the inclusions are disjunction-free formulas. Next, we provide the definitions and key implementations of the auxiliary functions Nullable, First and Derivative. Intuitively, the Nullable function \( \delta _{{\pi }}({\theta }) \) returns a Boolean value indicating whether \( {\pi } {\wedge } {\theta } \) contains the empty trace; the First function \( fst_{{\pi }}( {\theta }) \) computes the set of initial heads, denoted \( h \), of \( {\pi } {\wedge } {\theta } \); and the Derivative function \( D^{\pi }_{h}({\theta }) \) computes the next-state effects after eliminating the head \( h \) from the current effects \( {\pi } \wedge {\theta } \).

Definition 5

(Nullable)Footnote 7. Given any \( {\mathrm {\Phi }}{=} \pi \wedge {\theta } \), \( \delta _{{\pi }}({\theta }):bool {=} {\left\{ \begin{array}{ll} {{ true }} &{} {{ if\ \epsilon \in \llbracket {\pi } {\wedge } {\theta }\rrbracket }}\\ {{ false }} &{} {{ if\ \epsilon \notin \llbracket {\pi } {\wedge } {\theta }\rrbracket }} \end{array}\right. } \)

$$\begin{aligned}\begin{gathered} {{ \delta _{\pi }(\bot ) {=}\delta _{\pi }(ev) {=} false }} \quad {{ \delta _{\pi }( \epsilon ) {=} \delta _{\pi }({{\theta }^\star } ) {=} true }} \quad \ {{ \delta _{\pi }(\pi ^\prime ?{\theta }){=} \delta _{\pi }({\theta }) }} \quad \ {{ \delta _{\pi }({\theta }_1 {\vee } {\theta }_2) {=} \delta _{\pi }({\theta }_1) {\vee } \delta _{\pi }({\theta }_2) }} \\ {{ \delta _{\pi }({\theta }_1\cdot {\theta }_2) {=} \delta _{\pi }({\theta }_1) {\wedge } \delta _{\pi }({\theta }_2) }} \quad \ {{ \delta _{\pi }({\theta }_1 || {\theta }_2) {=} \delta _{\pi }({\theta }_1) {\wedge } \delta _{\pi }({\theta }_2) }} \quad \ {{ \delta _{\pi }({\theta }{\mathrel {{\#}}}t) {=} SAT ({\pi } {\wedge }(t {=} 0) ) \wedge \delta _{{\pi }}({\theta }) }} \end{gathered}\end{aligned}$$
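Definition 5 admits a direct executable reading. The sketch below uses our own tuple encoding of a fragment of timed sequences; the sat parameter stands in for the satisfiability check \( SAT ({\pi } {\wedge }(t {=} 0)) \), which the actual system discharges with an SMT solver.

```python
def nullable(pi, theta, sat=lambda f: True):
    """delta_pi(theta): does pi /\\ theta contain the empty trace?"""
    tag = theta[0]
    if tag in ("bot", "ev"):   return False
    if tag in ("eps", "star"): return True
    if tag == "guard":         return nullable(pi, theta[2], sat)    # pi'?theta
    if tag == "or":
        return nullable(pi, theta[1], sat) or nullable(pi, theta[2], sat)
    if tag in ("seq", "par"):
        return nullable(pi, theta[1], sat) and nullable(pi, theta[2], sat)
    if tag == "timed":                                               # theta # t
        return sat(("and", pi, ("=", theta[2], 0))) and nullable(pi, theta[1], sat)
    raise ValueError(f"unknown constructor: {tag}")
```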

Definition 6 (Heads)

If \( h \) is a head of \( \pi \wedge {\theta } \), then there exist \( \pi ^\prime \) and \( {\theta }^\prime \) such that \( \pi \wedge {\theta } \) = \( \pi ^\prime \wedge (h \cdot {\theta }^\prime ) \). A head can be \( t \), denoting a pure time passing; \( ev \), denoting an instant event passing; or \( ev{\mathrel {{\#}}}t \), denoting an event passing which takes time \( t \).

Definition 7 (First)

Given any \( {\mathrm {\Phi }}{=} \pi \wedge {\theta } \), \( fst_{{\pi }}({\theta }) \) returns a set of heads, i.e., the set of initial elements derivable from the effects \( \pi \wedge {\theta } \), where \( {t^{\prime }\ is\ fresh} \):


Definition 8

(TimEffs Partial Derivative). Given any \( {\mathrm {\Phi }}{=}\pi \wedge {\theta } \), the partial derivative \( D^{\pi }_{h}( {\theta }) \) computes the effects for the left quotient \( h^{\text {-}1} (\pi \wedge {\theta }) \), cf. Definition 1.


Notice that the derivative of a parallel composition makes use of the Parallel Derivative \( \bar{\bar{D}}^{\pi }_{h}({\theta }) \), defined as follows: \( \bar{\bar{D}}^{\pi }_{h}( {\theta }) {=} {\left\{ \begin{array}{ll} {{ \pi {\wedge } {\theta } }} &{} {{ if\ D^{\pi }_{h}({\theta }) = (False {\wedge } \bot ) }}\\ {{ D^{\pi }_{h}( {\theta }) }} &{} {{ otherwise }} \end{array}\right. } \)

5.1 Rewriting Rules. Given the auxiliary functions defined above, we now discuss the key rewriting rules deployed in effects inclusion proofs.


Axiom rules \(\texttt {[Bot\text {-}LHS]}\) and \(\texttt {[Bot\text {-}RHS]}\) are analogous to standard propositional logic: \( \bot \) (referring to false) entails any effects, while no non-false effects entail \( \bot \). \(\texttt {[DISPROVE]}\) disproves an inclusion when the antecedent is nullable while the consequent is not.

We use two rules to prove an inclusion: (i) \(\texttt {[PROVE]}\) is used when the antecedent has no head; and (ii) \(\texttt {[REOCCUR]}\) proves an inclusion when there exist inclusion hypotheses in the proof context \( \varGamma \), which are able to soundly prove the current goal. \(\texttt {[UNFOLD]}\) is the inductive step of unfolding the inclusions. The proof of the original inclusion succeeds if all the derivative inclusions succeed.
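The interplay of these rules can be sketched as a recursive procedure. The following OCaml fragment is our own simplification, restricted to an event-only fragment (no arithmetic contexts, no time bounds); it shows how \(\texttt {[DISPROVE]}\), \(\texttt {[PROVE]}\), \(\texttt {[REOCCUR]}\) and \(\texttt {[UNFOLD]}\) fit together:

```ocaml
(* Simplified sketch of the TRS inclusion check over an event-only trace
   fragment.  [DISPROVE] rejects on a nullability mismatch, [PROVE] closes
   a goal whose antecedent has no head, [REOCCUR] discharges goals already
   in the hypothesis context gamma, and [UNFOLD] recurses on partial
   derivatives.  All names are illustrative. *)
type re = Bot | Emp | Ev of string
        | Seq of re * re | Disj of re * re | Star of re

let rec nullable = function
  | Bot | Ev _ -> false
  | Emp | Star _ -> true
  | Seq (a, b) -> nullable a && nullable b
  | Disj (a, b) -> nullable a || nullable b

let rec first = function                    (* heads of the antecedent *)
  | Bot | Emp -> []
  | Ev e -> [ e ]
  | Star a -> first a
  | Disj (a, b) -> first a @ first b
  | Seq (a, b) -> if nullable a then first a @ first b else first a

let rec norm = function                     (* light simplification, so that
                                               the REOCCUR check can fire *)
  | Seq (a, b) ->
      (match norm a, norm b with
       | Bot, _ | _, Bot -> Bot
       | Emp, b' -> b'
       | a', Emp -> a'
       | a', b' -> Seq (a', b'))
  | Disj (a, b) ->
      (match norm a, norm b with
       | Bot, b' -> b'
       | a', Bot -> a'
       | a', b' -> if a' = b' then a' else Disj (a', b'))
  | r -> r

let rec deriv h = function                  (* partial derivative w.r.t. h *)
  | Bot | Emp -> Bot
  | Ev e -> if e = h then Emp else Bot
  | Star a -> Seq (deriv h a, Star a)
  | Disj (a, b) -> Disj (deriv h a, deriv h b)
  | Seq (a, b) ->
      if nullable a then Disj (Seq (deriv h a, b), deriv h b)
      else Seq (deriv h a, b)

let rec incl gamma lhs rhs =
  if List.mem (lhs, rhs) gamma then true                  (* [REOCCUR]  *)
  else if nullable lhs && not (nullable rhs) then false   (* [DISPROVE] *)
  else match first lhs with
    | [] -> true                                          (* [PROVE]    *)
    | heads ->                                            (* [UNFOLD]   *)
        let gamma' = (lhs, rhs) :: gamma in
        List.for_all
          (fun h -> incl gamma' (norm (deriv h lhs)) (norm (deriv h rhs)))
          heads
```

For instance, `incl [] (Star (Ev "A")) (Star (Disj (Ev "A", Ev "B")))` succeeds via \(\texttt {[REOCCUR]}\) after one unfolding, since the derivative goal coincides with the recorded hypothesis.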

[Figure omitted]

Theorem 2 (Termination of the TRS)

The TRS is terminating.

Proof

See the technical report [16].

Theorem 3 (Soundness of the TRS)

Given an inclusion \( {\mathrm {\Phi }}_1 \sqsubseteq {\mathrm {\Phi }}_2 \), if the TRS returns \( TRUE \) with a proof, then \( {\mathrm {\Phi }}_1 \sqsubseteq {\mathrm {\Phi }}_2 \) is valid.

Proof

See the technical report [16].

6 Implementation and Evaluation

To show feasibility, we prototype our automated verification system in OCaml (\(\thicksim \)5k LOC) and prove soundness for both the forward verifier and the TRS. We set up two experiments to evaluate our implementation: i) functionality validation by verifying symbolic timed programs; and ii) a comparison with PAT [17] and Uppaal [3] on the real-life Fischer's mutual exclusion algorithm. Experiments are done on a MacBook with a 2.6 GHz 6-Core Intel i7 processor. The source code and the evaluation benchmark are openly accessible from [18].

6.1 Experimental Results for Symbolic Timed Models. We manually annotate TimEffs specifications for a set of synthetic examples (about 54 programs) to test the main contributions, including: computing effects from symbolic timed programs written in \( C^{t} \); and inclusion checking for TimEffs with parallel composition, the block waiting operator, and shared global variables.

Table 3 presents the evaluation results for another 16 \( C^{t} \) programs (Footnote 8), whose annotated temporal specifications contain valid and invalid properties in a 1:1 ratio. The table records: No., index of the program; LOC, lines of code; Forward(ms), effects computation time; \({\mathrel {{\#}}}\)Prop(✓), number of valid properties; Avg-Prove(ms), average proving time for the valid properties; \({\mathrel {{\#}}}\)Prop(✗), number of invalid properties; Avg-Dis(ms), average disproving time for the invalid properties; \({\mathrel {{\#}}}\)AskZ3, number of Z3 queries throughout the experiments.

Table 3. Experimental Results for Manually Constructed Synthetic Examples.

Observations: i) the proving/disproving time increases with the effects computation time, because a larger Forward(ms) indicates higher complexity w.r.t. the timed constructs, which complicates the inclusion checking; ii) as the number of Z3 queries per property (\({\mathrel {{\#}}}\)AskZ3/(\({\mathrel {{\#}}}\)Prop(✓)+\({\mathrel {{\#}}}\)Prop(✗))) goes up, so does the proving/disproving time. Besides, we notice that iii) the disproving times for invalid properties are consistently lower than the proving times, regardless of the program's complexity, which is as expected in a TRS.

6.2 Verifying Fischer's Mutual Exclusion Algorithm. As shown in Table 4, the data in columns PAT(s) and Uppaal(s) are drawn from prior work [19]; they indicate the time to prove Fischer's mutual exclusion w.r.t. the number of processes (\({\mathrel {{\#}}}\)Proc) in PAT and Uppaal respectively. For our system, based on the implementation presented in Fig. 5, we are able to prove the mutual exclusion properties given the arithmetic constraint \(\texttt {d<e}\). Moreover, the system disproves mutual exclusion when d \(\le \)e. We record the proving (Prove(s)) and disproving (Disprove(s)) times and the number of unique Z3 queries (\({\mathrel {{\#}}}\)AskZ3-u).

Table 4. Comparison with PAT via verifying Fischer’s mutual exclusion algorithm

Observations: i) automata-based model checkers (both PAT and Uppaal) are highly efficient when given concrete values for the constants d and e; however, ii) our proposal is able to symbolically prove the algorithm given only the constraints on d and e, which cannot be achieved by existing model checkers; and iii) our verification time largely depends on the number of Z3 queries, which our implementation reduces by keeping a table of already-queried constraints.
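The constraint-caching optimisation mentioned in observation iii) can be sketched as follows; the `solve` callback stands in for an actual Z3 query, and all names are illustrative, not the paper's actual code:

```ocaml
(* Memoise solver queries: wrap a solver callback so that each
   syntactically distinct constraint triggers at most one real query.
   Returns the cached query function and a counter of real solver calls. *)
let cached_sat (solve : string -> bool) : (string -> bool) * (unit -> int) =
  let table : (string, bool) Hashtbl.t = Hashtbl.create 64 in
  let real_calls = ref 0 in
  let query pi =
    match Hashtbl.find_opt table pi with
    | Some result -> result              (* cache hit: no solver call   *)
    | None ->
        incr real_calls;                 (* cache miss: query the solver *)
        let result = solve pi in
        Hashtbl.add table pi result;
        result
  in
  (query, fun () -> !real_calls)
```

Replaying the same constraint then costs a hash lookup rather than a solver call, which matters because the number of Z3 queries dominates the verification time.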

6.3 Case Study: Prove it when Reoccur. Termination of the TRS is guaranteed because the set of derivatives to be considered is finite, and possible cycles are detected using memoization [14], as demonstrated in Table 5. In step , in order to eliminate the first event B, \( \texttt { A}^\star {\mathrel {{\#}}}\texttt { tR} \) has to be reduced to \( \epsilon \); therefore the RHS time constraint is strengthened to . Looking at the sub-tree \( (I) \), in step , tL and tR are split into \( \texttt { tL}^1\texttt { {+}tL}^2 \) and \( \texttt { tR}^1\texttt { {+}tR}^2 \). Then in step , \( \texttt { A} {\mathrel {{\#}}}\texttt { tL}^1 \) together with \( \texttt { A} {\mathrel {{\#}}}\texttt { tR}^1 \) are eliminated, unifying \( \texttt { tL}^1 \) and \( \texttt { tR}^1 \) by adding the side constraint . In step , we observe that the proposition is isomorphic to one from a previous step, marked using . Hence we apply the rule \(\texttt {[REOCCUR]}\) to prove it, with a successful side-constraints entailment.

Table 5. The reoccurrence proving example. \( (I) \) is the left hand side sub-tree of the main rewriting proof tree.

6.4 Discussion. Our implementation is the first to prove inclusions of symbolic TAs, which is significant because it overcomes the following main limitations of traditional timed model checking: i) TAs cannot be used to specify/verify incompletely specified systems (i.e., systems whose timing constants are not yet known) and hence cannot be used in early design phases; ii) verifying a system with a set of timing constants usually requires enumerating all of them if they are supposed to be integer-valued; and iii) TAs cannot be used to verify systems whose timing constants range over a real-valued dense interval.

7 Related Work

7.1 Verification Framework. This work is most similar to [20], which also deploys a forward verifier and a TRS for extended regular expressions. The differences are: i) [20] targets general-purpose sequential programs without shared variables, whereas this work targets time-critical programs in the presence of concurrency and global shared states; ii) the dependent values in [20] denote the number of repetitions of a trace, whereas in this work they abstract real-time bounds; iii) in this work, the TRS supports inclusion checking for the block waiting operator \(\pi ?\) and the concurrent composition \( || \). These are essential in timed verification (or, more generally, in distributed systems), and are not supported by [20] or any other TRS-related work.

7.2 Specifications and Real-Time Verification. Apart from compositional modelling for real-time systems based on timed-process algebras, such as Timed CSP [8] and CCS\( \text {+} \)Time [21], there have been a number of translation-based approaches to building verification support for timed-process algebras. For example, in [8], Timed CSP is translated to TAs so that the model checker Uppaal [3] can be applied. However, all the translation-based approaches share a common problem: the overhead introduced by the complex translation makes them particularly inefficient when disproving properties. We are of the opinion that the goal of verifying real-time systems, in particular safety-critical systems, is to check logical temporal properties, which can be done without constructing the whole reachability graph or the full power of model checking. We consider our approach simpler, as it is based directly on constraint-solving techniques, and it can be fairly efficient in verifying systems consisting of many components, as it avoids exploring the whole state space [20, 22].

This work draws similarities to Real-Time Maude [23], which complements timed automata with more expressive object-oriented specifications.

7.3 Clock Manipulation and Zone-Based Bisimulation. The concept of implicit clocks has also been used in time Petri nets and implemented in several model checking engines, e.g., [24]. On the other hand, to make model checking with explicit clocks more efficient, [25,26,27,28] work on dynamically deleting or merging clocks. Our work also draws connections with region/zone-based bisimulations [29], which are broadly used in reasoning about timed automata.

8 Conclusion

This work provides an alternative approach for verifying real-time systems, where temporal behaviors are reasoned about at the source level and the specification expressiveness goes beyond traditional Timed Automata. We define the novel effects logic TimEffs to capture real-time behavioral patterns and temporal properties. We demonstrate how to build an axiomatic semantics (or rather an effects system) for \( {{ C^{t} }} \) via timed-trace processing functions. We use this semantic model to enable a Hoare-style forward verifier, which computes the program effects constructively. We present an effects inclusion checker – the TRS – to efficiently prove the annotated temporal properties. We prototype the verification system and show its feasibility. To the best of our knowledge, our work proposes the first algebraic TRS for solving inclusion relations between timed specifications.

Limitations And Future Work. Our TRS is incomplete, meaning there exist valid inclusions which will be disproved in our system. That is mainly because of insufficient unification in favour of achieving automation. We also foresee the possibilities of adding other logics into our existing trace-based temporal logic, such as separation logic for verifying heap-manipulating distributed programs.