Hoare Logics for Time Bounds
Abstract
We study three different Hoare logics for reasoning about time bounds of imperative programs and formalize them in Isabelle/HOL: a classical Hoare-like logic due to Nielson, a logic with potentials due to Carbonneaux et al. and a separation logic following work by Atkey, Charguéraud and Pottier. These logics are formally shown to be sound and complete. Verification condition generators are developed and shown sound and complete as well. We also consider variants of the systems where we abstract from multiplicative constants in the running-time bounds, thus supporting a big-O style of reasoning. Finally we compare the expressive power of the three systems.
Keywords
Hoare logic · Algorithm analysis · Program verification
1 Introduction
This paper is about Hoare logics for proving upper bounds on running times and about the formalized (in a theorem prover) study of their meta theory. The paper is not about the automatic analysis of running times but about fundamental questions like soundness and completeness of logics and of verification condition generators (VCGs). The need for such a study becomes apparent when browsing the related literature (e.g. [1, 6, 7]): (formalized) soundness results are of course provided, but completeness of logics and VCGs is missing.
We study multiple different Hoare logics because we are interested in different aspects of the logics. One aspect is the difference between precise upper bounds and order-of-magnitude upper bounds that abstract from multiplicative constants. In the latter case we speak of “big-O style” logics.
A second aspect is modularity. We would like to combine verified results about subprograms in order to show correctness and running time for larger programs. Therefore we also study a separation logic for running time analysis.

Based on the simple imperative language IMP (Sect. 2), we formalize three logics for time bounds from the literature (Sect. 3); we show their soundness and completeness w.r.t. IMP’s semantics, discuss specific weaknesses and strengths and study their interrelations (Sect. 4).

The first logic we study is a big-O style logic due to Nielson [23] (Sect. 3.1). We improve, formalize and verify this logic and extend it with a VCG whose soundness and completeness we also verify.

In Sect. 3.2 we formalize a quantitative Hoare logic following ideas by Carbonneaux et al. [4, 6] and extend their work as follows: we prove completeness of the logic and design a sound and complete VCG. Additionally we extend the logic to a big-O style logic.

Following ideas of Atkey [1] and Charguéraud and Pottier [7] we formalize a logic similar to separation logic (Sect. 3.3) for reasoning about concrete running times. We formally prove soundness and completeness.

All proofs have been formalized in Isabelle/HOL [18, 19] and are available online [9].
2 Basics
We consider the simple deterministic imperative language IMP. Its formalization is standard and can be found elsewhere [18]. IMP’s commands are built up from SKIP, assignment, sequential composition, conditional and While-loop. Program states are functions from variables to values. By default c is a command and s a state. Evaluation of a boolean or arithmetic expression e in state s is denoted by \([\![e]\!]_s\).
We have defined a big-step semantics that counts the consumed time during execution: SKIP, assignment and evaluation of boolean expressions require one time unit. The precise definition of the semantics is routine. We write \((c,s) \Rightarrow _t s'\) to mean that starting command c in state s terminates after time t in state \(s'\).
Given a pair (c, s), \({\downarrow } (c,s)\) means that the computation of c starting from s terminates, \({\downarrow }_S (c,s)\) then denotes the final state, and \({\downarrow _T} (c,s)\) the execution time.
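To make the cost model concrete, here is a small executable sketch of such a time-counting big-step semantics in Python. The tuple encoding of commands and the dict encoding of states are ours, not the paper's Isabelle formalization; only the cost model (one unit per SKIP, assignment and guard evaluation) is taken from the text.

```python
# A minimal sketch of IMP's big-step semantics instrumented with a
# time counter: SKIP, assignment and each boolean-guard evaluation
# cost one time unit.  Commands are tuples; states are dicts.

def big_step(c, s):
    """Return (final_state, time) for command c started in state s."""
    tag = c[0]
    if tag == 'SKIP':
        return s, 1
    if tag == 'ASSIGN':                      # ('ASSIGN', x, aexp)
        _, x, a = c
        return {**s, x: a(s)}, 1
    if tag == 'SEQ':                         # ('SEQ', c1, c2)
        _, c1, c2 = c
        s1, t1 = big_step(c1, s)
        s2, t2 = big_step(c2, s1)
        return s2, t1 + t2
    if tag == 'IF':                          # ('IF', bexp, c1, c2)
        _, b, c1, c2 = c
        s1, t = big_step(c1 if b(s) else c2, s)
        return s1, 1 + t                     # 1 unit for evaluating b
    if tag == 'WHILE':                       # ('WHILE', bexp, body)
        _, b, body = c
        if not b(s):
            return s, 1                      # final guard evaluation
        s1, t1 = big_step(body, s)
        s2, t2 = big_step(c, s1)
        return s2, 1 + t1 + t2

# Example: x := x + 1; WHILE x > 0 DO x := x - 1
prog = ('SEQ',
        ('ASSIGN', 'x', lambda s: s['x'] + 1),
        ('WHILE', lambda s: s['x'] > 0,
                  ('ASSIGN', 'x', lambda s: s['x'] - 1)))
```

Started with x = 2, the program runs for 8 time units: one for the initial assignment, two per loop iteration (guard plus body) and one for the final guard test.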
3 Hoare Logics for Time Bounds
In this section we study and extend three different Hoare logics: a classical one based on [23], one using potentials [4] and one based on separation logic with time credits [1].
3.1 Nielson Style
Riis Nielson and Nielson [23] present a Hoare logic to prove the “order of magnitude of the execution time” of a program (which we call “big-O style”). They reason about triples of the form \(\{P\} c \{e \Downarrow Q\}\) where P and Q are assertions and e is a time bound. The intuition is the following: if the execution of command c is started in a state satisfying P then it terminates in a state satisfying Q after O(e) time units, i.e. the execution time has order of magnitude e. Note that e is evaluated in the state before executing c.
Throughout the paper we rely on what is called a shallow embedding of assertions and time bounds: there is no concrete syntactic representation of assertions and time bounds but they are merely functions in HOL, our ambient logic. They map states to truth values and natural numbers.
A complication in reasoning about execution time comes from the fact that one needs to combine time bounds that refer to different points in the execution, for example when adding time bounds in a sequential composition. This difficulty can be overcome with logical variables that enable us to transport time bounds from the pre-state to the post-state of a command. We formalize logical variables by modelling assertions as functions of two states, the state of the logical variables (typically l) and the state of the program variables (typically s).
Now we discuss the rules in Fig. 1. Rules Skip, Assign, If and conseq are straightforward. Note that 1 is the time bound \(\lambda s. 1\) and \(+\) is lifted to time bounds pointwise. The notation s[a / x] is short for “s with x mapped to \([\![a]\!]_s\)”.
Now consider rule Seq. Given \(\{P\} c_1 \{e_1 \Downarrow Q\}\) and \(\{Q\} c_2 \{e_2 \Downarrow R\}\) one may want to conclude \(\{P\} c_1 ; c_2 \{e_1 + e_2 \Downarrow R\}\). Unfortunately, \(e_1 + e_2\) does not lead to the correct result, as \(c_1\) could have altered variables \(e_2\) depends on. In order to adapt \(e_2\) for the changes that occur in \(c_1\), we use a shifted time bound \(e_2'\), and leave as a proof goal to show that the value of \(e_2'\) in the pre-state is an upper bound on \(e_2\) in the post-state of \(c_1\). Rule Seq relates \(e_2'\) and \(e_2\) through a fresh logical variable u that is equated with the value of \(e_2'\) in the pre-state of \(c_1\). The time bound e in the conclusion must be an upper bound of \(e_1 + e_2'\).
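The idea of shifting a time bound backward is easiest to see for the simplest case, an assignment: there the shifted bound is obtained by substituting the assigned expression. A hedged Python sketch, with time bounds as functions on states (the encoding is ours):

```python
# Hedged sketch: with time bounds as functions of the state, the
# "shifted" bound e2' for an assignment x := a is obtained by
# substitution, so that e2'(pre-state) = e2(post-state).

def pull_back_assign(e2, x, a):
    """Return e2' with e2'(s) = e2(s[x := a(s)])."""
    return lambda s: e2({**s, x: a(s)})

e2 = lambda s: s['y'] + 3                               # bound on c2
e2p = pull_back_assign(e2, 'y', lambda s: 2 * s['y'])   # c1 is y := 2*y

s = {'y': 5}
post = {**s, 'y': 2 * s['y']}
assert e2p(s) == e2(post) == 13
```

For compound commands \(c_1\) this substitution has to be iterated, which is exactly what the VCG of this section mechanizes.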
In the const rule, the time bound can be reduced by a constant factor. Note that we split up Nielson’s \(cons_e\) rule into conseq and const.
Our rule While is a simplification of the one in [23]. The latter is an extension with time of the “standard” While-rule for total correctness, where a variable decreases with each loop iteration. However, once you have time, you no longer need that variable, so we removed it. The key constraint in rule While is \(e \ge 1 + e' + e''\). It can be explained by unfolding the loop once: the time e to execute the whole loop must be an upper bound for the time \(e''\) to execute the loop body plus the time \(e'\) to execute the remaining loop iterations; the \(1+\) accounts for the evaluation of b. The time \(e'\) for the remaining loop iterations is obtained from e by (intuitively) an application of rule Seq: in the first premise a fresh logical variable u is used to pull e back over c, resulting in \(e'\). The rest of rule While is standard.
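The constraint \(e \ge 1 + e' + e''\) can be checked pointwise for a concrete loop. A hedged sketch for the countdown loop WHILE x > 0 DO x := x - 1, using the cost model of Sect. 2; the particular bounds chosen here are ours:

```python
# Numeric check of the key While constraint e >= 1 + e' + e'' for
# WHILE x > 0 DO x := x - 1 (one unit per guard evaluation and
# per assignment, as in the paper's cost model).

b   = lambda s: s['x'] > 0
e   = lambda s: 2 * s['x'] + 1          # time for the whole loop
epp = lambda s: 1                       # e'': bound for one body run
# e' pulls e back over the body x := x - 1:
ep  = lambda s: e({**s, 'x': s['x'] - 1})

ok = all(e(s) >= 1 + ep(s) + epp(s)
         for s in ({'x': n} for n in range(1, 100)) if b(s))
```

Here the constraint holds with equality: unfolding one iteration costs one guard evaluation plus one body run, leaving exactly the time for the remaining iterations.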
Soundness of the calculus can be shown by induction on the derivation of \(\vdash _1\{P\} c \{e \Downarrow Q\}\):
Theorem 1
(Soundness of \(\vdash _1\) ). \(\vdash _1\{P\} c \{e \Downarrow Q\} \Longrightarrow \ \models _1\{P\} c \{e \Downarrow Q\}\)
Lemma 1
\(finite (support\ Q) \Longrightarrow \ \vdash _1\{wp\ c \ Q\} c \{\lambda s.\ {\downarrow }_T (c,s) \Downarrow Q\}\)
As we need fresh logical variables for rules Seq and While, we assume that the set of logical variables Q depends on is finite.
It is instructive to observe that for this proof only the Hoare rules Skip to conseq are needed; neither const nor \(conseq_K\) is used. Lemma 1 thus expresses that it is always possible to derive a triple with the precise execution time as a time bound. Only as a last step are an abstraction of multiplicative constants and an over-approximation of the time bound necessary. This shows that for every valid triple one can first deduce a correct upper bound for the running time and only get rid of a multiplicative constant in a final application of the const rule. In the end, Lemma 1 implies completeness:
Theorem 2
(Completeness of \(\vdash _1\) ).
\(finite\ (support\ Q) \Longrightarrow \ \models _1\{P\} c \{e \Downarrow Q\} \Longrightarrow \ \vdash _1\{P\} c \{e \Downarrow Q\}\)
In particular, by soundness, we can now apply the above observation about the shape of derivations of valid triples to provable ones: in any derivation one can pull out all applications of const and combine them into a single one at the very root of the proof tree. We will observe the very same principle when studying the quantitative Hoare logic in Sect. 3.2.
Verification Condition Generator. Showing validity of \(\{P\} c \{e \Downarrow Q\}\) now boils down to applying the correctly instantiated rules of the Hoare logic and proving their side conditions. The former is a mechanical task, which is routinely automated by a verification condition generator, while the latter is left to an automatic or interactive theorem prover.
We design a VCG that collects the side conditions for an annotated program. While for classical Hoare logic it suffices to annotate a loop with an invariant I, for reasoning about execution time we introduce two more annotations for the following reason.
Consider rule Seq in Fig. 1. When applying the rule to a proof goal \(\vdash _1\{P\} c_1;c_2 \{e \Downarrow R\}\) we need to instantiate the variables P, Q, \(e_1\), \(e_2\), and \(e_2'\). As for classical Hoare logic, Q is chosen to be the weakest precondition of \(c_2\) w.r.t. R, which can be calculated if the loops in \(c_2\) are annotated with invariants. (Analogously, P is the weakest precondition of \(c_1\) w.r.t. Q.) Similarly, when the loops in \(c_1\) and \(c_2\) are annotated with time bounds E, the time bounds \(e_1\) and \(e_2\) can be constructed. Finally, \(e_2'\) can be determined if the evolution of \(e_2\) through \(c_1\) is known. For straight-line programs this can be deduced; only for loops does a state transformer S have to be annotated. An annotated loop then has the form \( \{I,S,E\}\ \text {WHILE} \ b\ \text {DO}\ C\) where I is the invariant and S and E are as above.
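The following Python sketch illustrates how such a VCG can compute the time bound of an annotated command and how the S annotation is used to pull bounds backward through loops. It is a deliberate simplification: logical variables, the invariant I and the Conseq construct are omitted, and the tuple encoding is ours, not the paper's.

```python
# Hedged sketch of the "time" computation of a VCG over annotated
# commands.  A loop {I,S,E} WHILE b DO C contributes its annotated
# bound E; S over-approximates the loop's effect on the state and is
# used to pull bounds backward through the loop.

def time(C):
    tag = C[0]
    if tag in ('SKIP', 'ASSIGN'):
        return lambda s: 1
    if tag == 'SEQ':                           # ('SEQ', C1, C2)
        _, C1, C2 = C
        t1, t2 = time(C1), time(C2)
        sh = shift(C1, t2)                     # t2 pulled back over C1
        return lambda s: t1(s) + sh(s)
    if tag == 'IF':                            # ('IF', b, C1, C2)
        _, b, C1, C2 = C
        t1, t2 = time(C1), time(C2)
        return lambda s: 1 + (t1(s) if b(s) else t2(s))
    if tag == 'WHILE':                         # ('WHILE', I, S, E, b, body)
        _, I, S, E, b, body = C
        return E

def shift(C, e):
    """Pull the bound e backward through C, using S at loops."""
    tag = C[0]
    if tag == 'SKIP':
        return e
    if tag == 'ASSIGN':                        # ('ASSIGN', x, a)
        _, x, a = C
        return lambda s: e({**s, x: a(s)})
    if tag == 'SEQ':
        _, C1, C2 = C
        return shift(C1, shift(C2, e))
    if tag == 'IF':
        _, b, C1, C2 = C
        e1, e2 = shift(C1, e), shift(C2, e)
        return lambda s: max(e1(s), e2(s))     # sound upper bound
    if tag == 'WHILE':
        _, I, S, E, b, body = C
        return lambda s: e(S(s))               # apply annotated transformer

# y := y + 1 ; SKIP takes 2 units from any state:
C = ('SEQ', ('ASSIGN', 'y', lambda s: s['y'] + 1), ('SKIP',))
```

The real VCG additionally emits the side conditions (e.g. that E really bounds the loop and that S is a correct abstraction) as verification conditions.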
For our completeness proof of the VCG we also need annotations that correspond to applications of rule \(conseq_K\) and record information that cannot be inferred automatically. For that purpose we introduce a new annotated command \(\textit{Conseq}\ \{P',Q,e'\}\ C\) where \(P'\), Q and \(e'\) are as in rule \(conseq_K\).
We use capital letters, e.g. C, to denote annotated commands; \(\overline{C}\) is the version of C stripped of all annotations.
Theorem 3
(Soundness of vc). Let C and Q involve only finitely many logical variables. Then \(vc\; C\; Q\) together with \(\exists k.\forall l\; s.\ P\;l\;s \longrightarrow \textit{pre}\; C\; Q\; l\; s \wedge time\; C\; s \le k \cdot e\; s\) imply \(\vdash _1\{P\}\ \overline{C} \{e \Downarrow Q\}\).
That is, for proving \(\vdash _1\{P\}\ \overline{C} \{e \Downarrow Q\}\) one has to show the verification conditions, that P implies the weakest precondition (as computed by pre) and that the running time (as computed by time) is in the order of magnitude of e.
Now we come to the raison d’être of the stronger consequence rule \(conseq_K\): the completeness proof of our VCG. The other proofs in this section only require the derived rules conseq and const. Our completeness proof of the VCG builds annotated programs that contain a \(\textit{Conseq}\) construct for every Seq and While rule. The annotations of \(\textit{Conseq}\) enable us to adapt the logical state; without this adaptation we failed to generate true verification conditions.
Theorem 4
(Completeness of vc). If \(\vdash _1\{P\}\ c \{e \Downarrow Q\}\) then there is a C such that \(\overline{C} = c\), \(vc\ C\ Q\) is true and \(\exists k.\forall l\, s.\ P\;l\;s \longrightarrow \textit{pre}\;C\;Q\;l\;s \wedge time\; C\; s \le k \cdot e\; s\).
That is, if a triple \(\vdash _1\{P\}\ c \{e \Downarrow Q\}\) is provable then c can be annotated such that the verification conditions are true, P implies the weakest precondition (as computed by pre) and the running time (as computed by time) is in the order of magnitude of e.
Annotating loops with a correct S is troublesome, as it captures the semantics of the whole loop. Luckily S only needs to be correct for “interesting” variables, i.e. variables that occur in time bounds that need to be pulled backward through the loop body. Often these variables are not modified by a command. We implemented an optimized VCG that keeps track of which variables are of interest and requires S to be correct only on those; we also showed its soundness and completeness. Further details can be found in the formalization.
3.2 Quantitative Hoare Logic
The main idea by Carbonneaux et al. [4] is to generalize predicates (\(\text {state} \Rightarrow \mathbb {B}\)) in Hoare triples to potentials (\(\text {state} \Rightarrow \mathbb {N}_{\infty }\)). That is, Hoare triples are now of the form \(\{P\}c\{Q\}\) where P and Q are potentials. The resulting logic does not need logical variables. We prove soundness and completeness of that logic and design a sound and complete VCG. Then we extend the logic and VCG to big-O style reasoning.
The rules in Fig. 3 define the Hoare logic \(\vdash _2\) corresponding to \(\models _2\). Note that \(P \ge Q\) is short for \(\forall s.\ P\ s \ge Q\ s\).
Rules Skip, Assign and If are straightforward; the 1 time unit added to the prepotential pays for, respectively, SKIP, assignment and the evaluation of the boolean expression. The conseq rule also looks familiar, only that \(\longrightarrow \) has been replaced by \(\ge \). You can think of a bigger potential implying a smaller one; also remember that False corresponds to \(\infty \).
For the While rule, assume one can derive that the potential I together with a true guard b before the execution of c yields a post-potential one more than the invariant I (the plus one pays for the upcoming evaluation of the guard, which costs 1). Then one can conclude that, starting the loop with potential \(I+\mathbf 1 \) (again the plus one pays for the evaluation of the guard), the loop terminates with potential I, and the negation of the guard holds in the final state. Although this rule resembles the While rule for partial correctness, the decreasing potential actually also ensures termination.
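Validity in this logic says that the pre-potential covers the measured running time plus the post-potential. A hedged Python check of \(\models _2\{I+\mathbf 1 \}\ \text {WHILE}\ b\ \text {DO}\ c\ \{I\}\) for a countdown loop; the invariant potential chosen here is ours:

```python
# Check quantitative validity for WHILE x > 0 DO x := x - 1 with the
# invariant potential I(s) = 2*x: the pre-potential I + 1 must cover
# the measured running time plus the post-potential.

def run_loop(s):
    t = 0
    while True:
        t += 1                      # guard evaluation
        if s['x'] <= 0:
            return s, t
        s = {**s, 'x': s['x'] - 1}  # assignment, 1 unit
        t += 1

I = lambda s: 2 * s['x']
ok = True
for n in range(50):
    s0 = {'x': n}
    sf, t = run_loop(s0)
    ok = ok and I(s0) + 1 >= t + I(sf)
```

For this loop the inequality is tight: each iteration consumes exactly the two units of potential released by decrementing x.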
Theorem 5
(Soundness of \(\vdash _2\) ). \(\vdash _2\{P\} c \{Q\} \Longrightarrow \ \models _2\{P\} c \{Q\}\)
Lemma 2
\(\vdash _2\{wp\ c \ Q\} c \{Q\}\)
As usual, completeness follows easily from this lemma:
Theorem 6
(Completeness of \(\vdash _2\) ). \(\models _2\{P\} c \{Q\} \Longrightarrow \ \vdash _2\{P\} c \{Q\}\)
Verification Condition Generator. The simpler Seq rule (compared to \(\vdash _1\)) leads to a more compact VCG. Loops are simply annotated with invariants, which now are potentials. No \(\textit{Conseq}\) annotations are required.
Function \(\textit{pre}\ C\ Q\) determines the weakest prepotential of an annotated program C and postpotential Q. Its definition is by recursion on annotated commands and refines our earlier \(\textit{pre}\) on predicates.
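A hedged Python sketch of this recursion, with potentials as functions from states to numbers and loops annotated with an invariant potential; the encoding and the example are ours, not the paper's Isabelle definitions:

```python
# Hedged sketch of the weakest-prepotential computation pre C Q for
# the quantitative VCG.  (In the paper potentials map to extended
# naturals, with False corresponding to infinity; plain numbers
# suffice for this illustration.)

def pre(C, Q):
    tag = C[0]
    if tag == 'SKIP':
        return lambda s: Q(s) + 1
    if tag == 'ASSIGN':                       # ('ASSIGN', x, a)
        _, x, a = C
        return lambda s: Q({**s, x: a(s)}) + 1
    if tag == 'SEQ':                          # ('SEQ', C1, C2)
        _, C1, C2 = C
        return pre(C1, pre(C2, Q))
    if tag == 'IF':                           # ('IF', b, C1, C2)
        _, b, C1, C2 = C
        p1, p2 = pre(C1, Q), pre(C2, Q)
        return lambda s: 1 + (p1(s) if b(s) else p2(s))
    if tag == 'WHILE':                        # ('WHILE', I, b, body)
        _, I, b, body = C
        return lambda s: I(s) + 1             # invariant potential + guard

# SKIP ; x := x + 1 needs potential Q(post-state) + 2:
C = ('SEQ', ('SKIP',), ('ASSIGN', 'x', lambda s: s['x'] + 1))
Q = lambda s: s['x']
```

Note how the Seq case is simply function composition, which is exactly why this VCG is more compact than the one for Nielson's logic.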
Lemma 3
(Soundness of vc). If we can show the verification conditions \(vc\ C\ Q\) and that we have at least as much potential as the needed prepotential (\(P \ge \textit{pre}\ C\ Q\)) then we can derive \(\vdash _2\{P\} \overline{C} \{Q\}\).
Completeness of the VCG can be paraphrased like this: if we can derive the Hoare triple \(\vdash _2\{P\} c \{Q\}\), we can find an annotation for c such that the verification conditions are true and P “implies” the prepotential.
Lemma 4
(Completeness of vc ).
\(\vdash _2\{P\} c \{Q\} \Longrightarrow \exists C.\ \overline{C} = c \ \wedge \ vc\; C\; Q \ \wedge \ P \ge \textit{pre}\; C\; Q\)
Theorem 7
(Soundness of \(\vdash _{2'}\) ). \(\vdash _{2'}\{P\} c \{Q\} \Longrightarrow \ \models _{2'}\{P\} c \{Q\}\)
For the completeness proof, nothing changes. We reuse the same wp and the proof of \(\vdash _{2'}\{wp\;c\;Q\}c\{Q\}\) is identical to that of Lemma 2 because we extended the Hoare rules, but not the command language. In particular this means that the new const rule is not used in this proof. The same principle as in Sect. 3.1 applies: the const rule is only used once at the end when showing completeness from \(\vdash _{2'}\{wp\;c\;Q\}c\{Q\}\):
Theorem 8
(Completeness of \(\vdash _{2'}\) ). \(\models _{2'}\{P\} c \{Q\} \Longrightarrow \ \vdash _{2'}\{P\} c \{Q\}\)
The soundness and completeness proofs need only be adapted marginally; only some algebraic lemmas about ediv are needed.
To summarize this section: we have shown how to generalize conditions to potentials, thus obtaining a compositional Hoare logic; we have extended the Hoare logic to bigO style reasoning and have adapted the calculus and proofs; we also have established sound and complete VCGs for both logics.
One drawback of the quantitative Hoare logic is that it is not modular. Imagine two independent programs \(c_1\) and \(c_2\) that are run one after the other. When reasoning about the subprogram \(c_1\) we need to specify a postpotential that is then used for the following program \(c_2\). If we change \(c_2\), resulting in a changed time consumption, the analysis for \(c_1\) has to be redone as well. What we would actually like to do is reason about \(c_1\) and \(c_2\) locally and then combine the results in a final step. Separation logic addresses this issue.
3.3 Separation Logic with Time Credits
Our last logic follows the idea by Atkey [1] to use separation logic in order to reason about the resource consumption of programs. This logic generalizes the quantitative Hoare logic.
The principle of “local reasoning” is addressed by separation logic for disjoint heap areas; Atkey [1] uses separation logic with time credits to reason about the amortised execution time of (imperative) programs.
In this section we follow his ideas and design a Hoare logic based on separation logic. As IMP does not have a heap to reason about, but we want to compare this logic with the two logics already described, we treat the state of a program as a kind of heap: a partial state ps is a map from variable names to values, \(\textit{dom}\ ps\) is the domain of ps, we call \(ps_1\) and \(ps_2\) disjoint (\(ps_1 \bot ps_2\)) if their domains are, and we can add two disjoint partial states to form their union (\(ps_1 + ps_2\)).
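These definitions can be sketched directly with Python dictionaries, including a (naive, exponential) separating conjunction on pairs of a partial state and a number of time credits. The encoding is ours and purely illustrative:

```python
# Hedged sketch of the partial-state "heap" of Sect. 3.3.  Assertions
# are predicates on pairs (ps, n) of a partial state and n time
# credits; * splits both the state and the credits.

def disjoint(ps1, ps2):
    return ps1.keys().isdisjoint(ps2.keys())

def plus(ps1, ps2):
    assert disjoint(ps1, ps2)
    return {**ps1, **ps2}

def _subsets(keys):
    out = [set()]
    for k in keys:
        out += [s | {k} for s in out]
    return out

def sep_conj(P, Q):
    """(P * Q)(ps, n): split ps and n so that P and Q hold apart."""
    def R(ps, n):
        for sub in _subsets(ps.keys()):
            p1 = {k: v for k, v in ps.items() if k in sub}
            p2 = {k: v for k, v in ps.items() if k not in sub}
            if any(P(p1, m) and Q(p2, n - m) for m in range(n + 1)):
                return True
        return False
    return R

# "$k" asserts ownership of exactly k time credits and no variables:
dollar = lambda k: (lambda ps, n: ps == {} and n == k)
owns_x = lambda ps, n: set(ps) == {'x'} and n == 0
P = sep_conj(owns_x, dollar(2))
```

Time credits thus behave like any other separable resource: a triple can own exactly the credits it needs, leaving the rest to a frame.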
Proving soundness and completeness follows the same lines as for the quantitative Hoare logic, only complicated by the reasoning about partial states.
Theorem 9
(Soundness of \(\vdash _3\) ). \(\vdash _3\{P\} c \{Q\} \Longrightarrow \ \models _3\{P\} c \{Q\}\)
Theorem 10
(Completeness of \(\vdash _3\) ). \(\models _3\{P\} c \{Q\} \Longrightarrow \ \vdash _3\{P\} c \{Q\}\)
Big-O style. Similar to the last subsection’s system, we extend the separation-logic-based Hoare logic to big-O style reasoning. We again generalize our notion of validity (now \(\models _{3'}\)) and add a similar const rule to obtain the Hoare logic \(\vdash _{3'}\). Proving soundness and completeness of this new Hoare logic follows the same lines as in the previous subsection. Similarly we come up with a simple VCG: somewhat unorthodoxly for separation logic, we use a backward style, and we do not provide annotations for abstracting from multiplicative constants, as one final abstraction at the outermost position suffices to ensure completeness.
Nielson’s approach of incorporating abstraction from multiplicative constants directly into the Hoare logic, in order to reason about the order of magnitude of the running time of programs, shows weaknesses and complicates matters. Our theoretical results show that it is always possible to reason about the exact running time and abstract away multiplicative constants in a last step.
4 Discussion
In this section we discuss the interrelations between the Hoare logics described in the last section.
First we compare the expressive power of the logics. Nielson’s logic \(\models _1\) and the quantitative Hoare logic \(\models _{2'}\), both big-O style logics, are equivalent in the following sense:
Lemma 6
\(\models _1\{\lfloor P \rfloor _{\mathbb {B}} \}c\{\lambda s.\ \lfloor P\ s - Q\ ({\downarrow }_S(c,s)) \rfloor _{\mathbb {N}} \Downarrow \lfloor Q \rfloor _{\mathbb {B}} \} \Longrightarrow \ \models _{2'}\{P\}c\{Q\}\) where \(\lfloor P \rfloor _{\mathbb {B}}\; s \equiv P\ s < \infty \) and \(\lfloor . \rfloor _{\mathbb {N}}\) is the coercion from \(\mathbb {N}_{\infty }\) to \(\mathbb {N}\), assuming the argument is finite.
Validity of a triple in the quantitative Hoare logic can thus be reduced to validity of a transformed triple in Nielson’s logic. In the other direction this is only possible for assertions P and Q that do not depend on the state of the logical variables:
Lemma 7
\(\models _{2'}\{{\Uparrow }P + e\}c\{{\Uparrow }Q\} \Longrightarrow \ \models _1\{P\}c\{e \Downarrow Q\}\) where \({\Uparrow }P\; s \equiv (\forall l. {\uparrow } P\ l\ s)\)
The quantitative logics support amortised resource analysis. On the face of it, Nielson’s logic does not, but Lemma 6 tells us that in theory it actually does. However, automatic tools for resource analysis are mainly based on the potential method, for example [5, 12].
Furthermore, as the third system based on separation logic talks about partial states, in general it cannot be simulated by any of the other systems. This can only be done for assertions that act on complete states:
Lemma 8
\(\models _{2'}\{\lfloor P \rfloor \}c\{\lfloor Q \rfloor \} \Longrightarrow \ \models _{3'}\{P\}c\{Q\}\), when P is only true for complete partial states, with \(\lfloor P \rfloor\ s \equiv \inf \{n\in \mathbb {N} \mid P (\lfloor s \rfloor ,n)\}\) and \(\lfloor s \rfloor \) the partial state that is defined everywhere and returns the same results as the total state s.
On the other hand any triple in the quantitative Hoare logic \(\models _{2'}\) can be embedded into the separation logic \(\models _{3'}\):
Lemma 9
\(\models _{3'}\{\lfloor P \rfloor \}c\{\lfloor Q \rfloor \} \Longrightarrow \ \models _{2'}\{P\}c\{Q\}\), where \(\lfloor P \rfloor (ps,n) \equiv (\forall s.\ n \ge P \lfloor ps \rfloor ^s)\) and \(\lfloor ps \rfloor ^s\) is the extension of the partial state ps by the state s to a total state.
With the simplification that the intervals between l and r are always powers of two, we can easily show the running time to be in the order of magnitude of \(1 + \log x\). Note that we can get rid of multiplicative constants, but not additive ones!
For showing \(\vdash _1\{\lambda l\ s.\ (\exists k.\ 1 + s\ ''x'' = 2 ^ k)\} c \{\lambda s.\ \log (s\ ''x'') + 1 \Downarrow \lambda l\ s.\ True \}\) we provide the following annotations for the while loop: \(I_1\) = \(\lambda l\ s.\ s\ ''l'' \ge 0 \wedge (\exists k.\ s\ ''r'' - s\ ''l'' = 2 ^ k ) \), \(E_1\) = \(\lambda s.\ 1 + 5 \cdot \log (s\ ''r'' - s\ ''l'')\) and \(S_1\) = \(\lambda s.\ s\); then we use our optimized VCG and prove the remaining proof obligations.
For showing \(\vdash _{2'}\{\lambda s.\ \uparrow (\exists k.\ 1 + s\ ''x'' = 2 ^ k) + (\log ( s\ ''x'') + 1)\} c \{\lambda \_.\ 0 \}\), we annotate the while loop with the potential \(I_{2'}\) = \(\lambda s.\ \uparrow ( s\ ''l'' \ge 0 \wedge (\exists k.\ s\ ''r'' - s\ ''l'' = 2 ^ k) ) + 5 \cdot \log (s\ ''r'' - s\ ''l'')\).
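Since the concrete program c is only given in the formalization, here is a hedged simulation under the assumption that the loop halves the interval r - l in each iteration, with one time unit per guard evaluation and per assignment as in Sect. 2. It empirically confirms the annotated bound \(1 + 5 \cdot \log (r - l)\):

```python
# Hedged simulation of the logarithmic example: we assume a loop that
# halves r - l each iteration (the exact program is in the paper's
# formalization) and count time as in the paper's cost model.

def run(l, r):
    t = 0
    while True:
        t += 1                       # guard evaluation
        if r - l <= 1:
            return t
        r = (l + r) // 2             # assignment, 1 unit
        t += 1

# For r - l = 2^k the loop runs k iterations, so the total time is
# 2*k + 1, which is within the annotated bound 1 + 5*k.
ok = all(run(0, 2 ** k) <= 1 + 5 * k for k in range(12))
```

The slack between 2k + 1 and 1 + 5k is exactly the multiplicative constant that the big-O style const rule lets us discard.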
Let us now compare the VCGs. Our VCG for Nielson’s logic requires the annotation of loops with invariants I, running-time bounds E and state transformers S. In contrast, the annotations required by the VCG for the quantitative Hoare logic are uniformly potentials. In the above example one can see that the annotated potential \(I_{2'}\) contains exactly the same information as \(I_1\) and \(E_1\) together in the Nielson approach. The additional \(1 +\) in \(E_1\) is needed because \(E_1\) describes the running time of the whole loop, whereas \(I_{2'}\) describes the running time from after evaluating the loop guard. Only more practical experience can tell whether it is better to work with separate I, E and S or with a combined invariant potential.
In addition our annotated commands for Nielson’s system may require annotations of the form \(\textit{Conseq}\ \{P',Q,e'\}\), whereas for the quantitative Hoare logic we managed to reduce this to \(Const\ \{k\}\) annotations. It would be desirable to reduce the \(\textit{Conseq}\) annotations similarly.
5 Related Work
Riis Nielson [21, 22] was the first to study Hoare logics for running time analysis of programs. She proved soundness and completeness of her systems (on paper) which are based on a deep embedding of her assertion language. We base our formalization on the system given in [23] where assertions are just predicates, i.e. functions. However, our inference system differs from hers in several respects and our mechanized proofs in Isabelle/HOL are completely independent. Moreover we provide a VCG and prove it sound and complete.
Possibly the first example of a resource analysis logic based on potentials is due to Hofmann and Jost [11]. The idea of generalising predicates to potentials in order to form a “quantitative Hoare logic” we borrowed from [4]: Carbonneaux et al. design a quantitative logic in order to reason about stack-space usage of C programs. They also formally show soundness of their logic in Coq. They employ their logic for reasoning about other resource bounds and use it as the underlying logic for an automatic tool for resource bound analysis [5, 6]. In a draft version of his dissertation [3] Carbonneaux complements his tool-focused work with a theoretical treatment of an “Invariant Logic”. The relation to our logics of Sect. 3.2 should be studied in more detail.
Atkey [1] proposed to use separation logic with time credits to reason about the amortised running time of programs; he formalized his logic and its soundness in Coq. Similar ideas were used by Hoffmann et al. [10] to prove lock-freedom of concurrent programs, and by Charguéraud and Pottier [7] to verify the amortised running time of the Union-Find data structure in Coq. Guéneau et al. [8] recently extended their framework to also obtain big-O results for the running time of programs. None of these works includes verified VCGs.
There is also some related work that extends to probabilistic programs. Kaminski et al. [13] reason about the expected running time of probabilistic programs and show that their approach corresponds to Nielson’s logic when restricted to deterministic algorithms. Ngo et al. [16] extend the idea of working with potentials to reasoning about the expected running time of probabilistic programs.
For a formal treatment of program logics, [17] is a good entry point. Basic concepts as well as formalizations of Hoare logics that our work builds on can be found in [18].
6 Conclusion
In this paper we have studied three Hoare logics for reasoning about the running time of programs in a simple imperative language. We have formalized and verified their meta theory in Isabelle/HOL.
Further investigation is required in order to simplify the VCG for Nielson’s logic and avoid the Conseq construct while preserving completeness of the VCG. Extending IMP with more language features is a natural next step. Adding recursive procedures should be easy (following [17]) whereas probabilistic choice (following [20]) is much more challenging and interesting. Not only is the meta theory of probabilistic programs nontrivial but even very small programs can be surprisingly hard to analyze. Although we view our work primarily as foundational, we expect that it could become a viable basis for the verification of small probabilistic programs.
Data Availability Statement and Acknowledgments
The formal proof development is available online [9]. We thank Peter Lammich for his initial help with setting up the separation logic.
References
1. Atkey, R.: Amortised resource analysis with separation logic. In: Gordon, A.D. (ed.) ESOP 2010. LNCS, vol. 6012, pp. 85–103. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-11957-6_6
2. Calcagno, C., O’Hearn, P.W., Yang, H.: Local action and abstract separation logic. In: Logic in Computer Science, LICS 2007, pp. 366–378. IEEE (2007)
3. Carbonneaux, Q.: Modular and certified resource-bound analyses. Ph.D. dissertation, Yale University (2017). http://cs.yale.edu/homes/qcar/diss/
4. Carbonneaux, Q., Hoffmann, J., Ramananandro, T., Shao, Z.: End-to-end verification of stack-space bounds for C programs. In: O’Boyle, M.F.P., Pingali, K. (eds.) Conference on Programming Language Design and Implementation, PLDI 2014, pp. 270–281. ACM (2014)
5. Carbonneaux, Q., Hoffmann, J., Reps, T., Shao, Z.: Automated resource analysis with Coq proof objects. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10427, pp. 64–85. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63390-9_4
6. Carbonneaux, Q., Hoffmann, J., Shao, Z.: Compositional certified resource bounds. In: Grove, D., Blackburn, S. (eds.) Conference on Programming Language Design and Implementation, PLDI 2015, pp. 467–478. ACM (2015)
7. Charguéraud, A., Pottier, F.: Verifying the correctness and amortized complexity of a union-find implementation in separation logic with time credits. J. Autom. Reasoning, accepted for publication
8. Guéneau, A., Charguéraud, A., Pottier, F.: A fistful of dollars: formalizing asymptotic complexity claims via deductive program verification. In: European Symposium on Programming (ESOP) (2018)
9. Haslbeck, M.P.L., Nipkow, T.: Hoare logics for time bounds. Archive of Formal Proofs, February 2018. https://www.isa-afp.org/entries/Hoare_Time.html. Formal proof development
10. Hoffmann, J., Marmar, M., Shao, Z.: Quantitative reasoning for proving lock-freedom. In: Logic in Computer Science, LICS 2013, pp. 124–133. IEEE (2013)
11. Hofmann, M., Jost, S.: Type-based amortised heap-space analysis. In: Sestoft, P. (ed.) ESOP 2006. LNCS, vol. 3924, pp. 22–37. Springer, Heidelberg (2006). https://doi.org/10.1007/11693024_3
12. Hofmann, M., Rodriguez, D.: Automatic type inference for amortised heap-space analysis. In: Felleisen, M., Gardner, P. (eds.) ESOP 2013. LNCS, vol. 7792, pp. 593–613. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-37036-6_32
13. Kaminski, B.L., Katoen, J.-P., Matheja, C., Olmedo, F.: Weakest precondition reasoning for expected run-times of probabilistic programs. In: Thiemann, P. (ed.) ESOP 2016. LNCS, vol. 9632, pp. 364–389. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49498-1_15
14. Klein, G., Kolanski, R., Boyton, A.: Separation algebra. Archive of Formal Proofs, May 2012. http://isa-afp.org/entries/Separation_Algebra.html. Formal proof development
15. Kleymann, T.: Hoare logic and auxiliary variables. Formal Aspects Comput. 11(5), 541–566 (1999)
16. Ngo, V.C., Carbonneaux, Q., Hoffmann, J.: Bounded expectations: resource analysis for probabilistic programs. In: Conference on Programming Language Design and Implementation, PLDI 2018 (2018)
17. Nipkow, T.: Hoare logics in Isabelle/HOL. In: Schwichtenberg, H., Steinbrüggen, R. (eds.) Proof and System-Reliability, pp. 341–367. Kluwer (2002)
18. Nipkow, T., Klein, G.: Concrete Semantics: With Isabelle/HOL. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10542-0
19. Nipkow, T., Wenzel, M., Paulson, L.C. (eds.): Isabelle/HOL—A Proof Assistant for Higher-Order Logic. LNCS, vol. 2283. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45949-9
20. Olmedo, F., Kaminski, B.L., Katoen, J.-P., Matheja, C.: Reasoning about recursive probabilistic programs. In: Logic in Computer Science, LICS 2016, pp. 672–681. ACM (2016)
21. Riis Nielson, H.: Hoare logic's for run-time analysis of programs. Ph.D. thesis, University of Edinburgh (1984)
22. Riis Nielson, H.: A Hoare-like proof system for analysing the computation time of programs. Sci. Comput. Program. 9(2), 107–136 (1987)
23. Riis Nielson, H., Nielson, F.: Semantics with Applications: An Appetizer. Springer (2007). https://doi.org/10.1007/978-1-84628-692-6
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.