2.1 Time Credits
A small number of axioms, presented in Fig. 1, govern time credits. The assertion \(\$ n\) denotes n time credits. The splitting axiom, a logical equivalence \(\$ (m+n) \equiv \$ m \mathrel {*} \$ n\), means that time credits can be split and combined. Because Iris is an affine logic, it is implicitly understood that time credits cannot be duplicated, but can be thrown away.
The axiom \(\mathrm {timeless}(\$ n)\) means that time credits are independent of Iris’ step-indexing. In practice, this allows an Iris invariant that involves time credits to be acquired without causing a “later” modality to appear [12, §5.7]. The reader can safely ignore this detail.
The last axiom, a Hoare triple, means that every computation step requires and consumes one time credit. As in Iris, the postconditions of our Hoare triples are \(\lambda \)-abstractions: they take as a parameter the return value of the term. At this point, \( tick \) can be thought of as a pseudo-instruction that has no runtime effect and is implicitly inserted in front of every computation step.
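Writing \(\$ n\) for n time credits, the three axioms just discussed can be summarized as follows; this is a reconstruction from the surrounding prose, and Fig. 1 remains the authoritative statement:

$$\begin{aligned} \$ (m+n) \equiv \$ m \mathrel {*} \$ n \qquad \mathrm {timeless}(\$ n) \qquad \{\, \$ 1 \,\}\; tick \,(v)\;\{\, \lambda w.\; w = v \,\} \end{aligned}$$

In the last axiom, \( tick \,(v)\) simply returns its argument \(v\); its only observable role in the logic is to consume one credit.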
Time credits can be used to express worst-case time complexity guarantees. For instance, a sorting algorithm could have the following specification:
$$\begin{aligned} \{\, a \mapsto xs \mathrel {*} \$ (6n\log n) \,\}\;\; sort \,(a)\;\; \{\, \lambda ().\; \exists ys.\; a \mapsto ys \mathrel {*} \langle ys \text { is a sorted permutation of } xs \rangle \,\} \end{aligned}$$
Here, \(a \mapsto xs\) asserts the existence and unique ownership of an array at address \(a\), holding the sequence of elements \(xs\). This Hoare triple guarantees not only that the function call \( sort \,(a)\) runs safely and has the effect of sorting the array at address \(a\), but also that \( sort \,(a)\) runs in at most \(6n\log n\) time steps, where \(n\) is the length of the sequence \(xs\), that is, the length of the array. Indeed, only \(6n\log n\) time credits are provided in the precondition, so the algorithm does not have permission to run for a greater number of steps.
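When a caller holds more credits than the sort requires, the splitting axiom lets the surplus be framed off and retained; for instance:

$$\begin{aligned} \$ (6n\log n + k) \;\equiv \; \$ (6n\log n) \mathrel {*} \$ k \end{aligned}$$

The first conjunct pays for the call, while the remaining \(\$ k\) stays in the caller’s hands.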
2.2 Time Receipts
In contrast with time credits, time receipts are a new concept, a contribution of this paper. We distinguish two forms of time receipts. The most basic form, exclusive time receipts, is the dual of time credits, in the sense that every computation step produces one time receipt. The second form, persistent time receipts, exhibits slightly different properties. Inspired by Clochard et al. [5], we show that time receipts can be used to prove that certain undesirable events, such as integer overflows, cannot occur unless a program is allowed to execute for a very, very long time—typically centuries. In the following, we explain that exclusive time receipts allow reconstructing Clochard et al.’s “one-time” integers [5, §3.2], which are so named because they are not duplicable, whereas persistent time receipts allow reconstructing their “peano” integers [5, §3.2], which are so named because they do not support unrestricted addition.
Exclusive time receipts. The assertion \(⧗ n\) denotes n time receipts. Like time credits, these time receipts are “exclusive”, by which we mean that they are not duplicable. The basic laws that govern exclusive time receipts appear in Fig. 2. They are the same laws that govern time credits, with two differences. The first difference is that time receipts are the dual of time credits: the specification of \( tick \), in this case, states that every computation step produces one time receipt.Footnote 1 The second difference lies in the last axiom of Fig. 2, which has no analogue in Fig. 1, and which we explain below.
In practice, how do we expect time receipts to be exploited? They can be used to prove lower bounds on the execution time of a program: if the Hoare triple \(\{\, \mathit {True} \,\}\; p \;\{\, \lambda ().\; ⧗ n \,\}\) holds, then the execution of the program p cannot terminate in less than n steps. Inspired by Clochard et al. [5], we note that time receipts can also be used to prove that certain undesirable events cannot occur in a feasible time. This is done as follows. Let \(N\) be a fixed integer, chosen large enough that a modern processor cannot possibly execute \(N\) operations in a feasible time.Footnote 2 The last axiom of Fig. 2, \(⧗ N \vdash \mathit {False}\), states that \(N\) time receipts imply a contradiction.Footnote 3 This axiom informally means that we won’t compute for \(N\) time steps, because we cannot, or because we promise not to do such a thing. A consequence of this axiom is that \(⧗ n\) implies \(n < N\): that is, if we have observed n time steps, then n must be small.
Adopting this axiom weakens the guarantee offered by the program logic. A Hoare triple \(\{P\}\; p \;\{Q\}\) no longer implies that the program p is forever safe. Instead, it means that p is \((N-1)\)-safe: the execution of p cannot go wrong until at least \(N-1\) steps have been taken. Because \(N\) is very large, for many practical purposes, this is good enough.
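To get a sense of scale, here is a quick back-of-the-envelope computation (ours, not the paper’s) of how long executing \(N = 2^{63}\) steps would take, under the assumed rate of one billion steps per second:

```python
# Hypothetical sanity check: how long does it take to execute N = 2^63
# computation steps at an assumed rate of 10^9 steps per second?
N = 2 ** 63
steps_per_second = 10 ** 9            # assumed processor speed
seconds = N / steps_per_second
years = seconds / (365 * 24 * 3600)   # ignoring leap years
print(round(years))                   # prints 292
```

At roughly three centuries of uninterrupted computation, exceeding \(N\) steps is indeed infeasible on a single modern processor.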
How can this axiom be exploited in practice? We hinted above that it can be used to prove the absence of certain integer overflows. Suppose that we wish to use signed w-bit machine integers as a representation of mathematical integers. (For instance, let w be 64.) Whenever we perform an arithmetic operation, such as an addition, we must prove that no overflow can occur. This is reflected in the specification of the addition of two machine integers:
$$\begin{aligned} \{\, \iota (x_1) = n_1 \mathrel {*} \iota (x_2) = n_2 \mathrel {*} -2^{w-1} \le n_1+n_2 < 2^{w-1} \,\}\;\; add \,(x_1,x_2)\;\; \{\, \lambda x.\; \iota (x) = n_1+n_2 \,\} \end{aligned}$$
Here, the variables \(x_i\) denote machine integers, while the auxiliary variables \(n_i\) denote mathematical integers, and the function \(\iota \) is the injection of machine integers into mathematical integers. The conjunct \(-2^{w-1} \le n_1+n_2 < 2^{w-1}\) in the precondition represents an obligation to prove that no overflow can occur.
Suppose now that the machine integers \(x_1\) and \(x_2\) represent the lengths of two disjoint linked lists that we wish to concatenate. To construct each of these lists, we must have spent a certain amount of time: as proofs of this work, let us assume that the assertions \(⧗ n_1\) and \(⧗ n_2\) are at hand, where \(n_i\) stands for \(\iota (x_i)\). Let us further assume that the word size \(w\) is sufficiently large that it takes a very long time to count up to the largest machine integer. That is, let us make the following assumption:
$$\begin{aligned} N\le 2^{w-1} \end{aligned}$$
(large word size assumption)
(E.g., with \(N=2^{63}\) and \(w=64\), this holds.) Then, we can prove that the addition of \(x_1\) and \(x_2\) is permitted. This goes as follows. From the separating conjunction \(⧗ n_1 \mathrel {*} ⧗ n_2\), we get \(⧗ (n_1+n_2)\). The existence of these time receipts allows us to deduce \(0 \le n_1+n_2 < N\), which implies \(0 \le n_1+n_2 < 2^{w-1}\). Thus, the precondition of the addition operation \( add \,(x_1,x_2)\) is met.
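Writing \(⧗ n\) for n exclusive time receipts, the chain of reasoning just described can be displayed as follows (a sketch, combining the axioms of Fig. 2 with the large word size assumption):

$$\begin{aligned} ⧗ n_1 \mathrel {*} ⧗ n_2 \;\equiv \; ⧗ (n_1+n_2) \qquad ⧗ (n_1+n_2) \;\Rightarrow \; n_1+n_2 < N \le 2^{w-1} \end{aligned}$$

Together with \(0 \le n_1, n_2\), this establishes \(-2^{w-1} \le n_1+n_2 < 2^{w-1}\), the side condition demanded by the specification of \( add \).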
In summary, we have just verified that the addition of two machine integers satisfies the following alternative specification:
$$\begin{aligned} \{\, ⧗ \iota (x_1) \mathrel {*} ⧗ \iota (x_2) \,\}\;\; add \,(x_1,x_2)\;\; \{\, \lambda x.\; \iota (x) = \iota (x_1)+\iota (x_2) \mathrel {*} ⧗ \iota (x) \,\} \end{aligned}$$
This can be made more readable and more abstract by defining a “clock” to be a machine integer x accompanied with \(\iota (x)\) time receipts:
$$\begin{aligned} clock (x) \;\triangleq \; ⧗ \iota (x) \end{aligned}$$
Then, the above specification of addition can be reformulated as follows:
$$\begin{aligned} \{\, clock (x_1) \mathrel {*} clock (x_2) \,\}\;\; add \,(x_1,x_2)\;\; \{\, \lambda x.\; \iota (x) = \iota (x_1)+\iota (x_2) \mathrel {*} clock (x) \,\} \end{aligned}$$
In other words, clocks support unrestricted addition, without any risk of overflow. However, because time receipts cannot be duplicated, neither can clocks: \( clock (x)\) does not entail \( clock (x) \,\mathrel {*}\, clock (x)\). In other words, a clock is uniquely owned. One can think of a clock x as a hard-earned integer: the owner of this clock has spent x units of time to obtain it.
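A short derivation shows why duplicating a clock is impossible: assuming the definition \( clock (x) \triangleq ⧗ \iota (x)\) suggested above, a duplicated clock would amount to duplicated exclusive receipts,

$$\begin{aligned} clock (x) \mathrel {*} clock (x) \;\equiv \; ⧗ \iota (x) \mathrel {*} ⧗ \iota (x) \;\equiv \; ⧗ (2\,\iota (x)) \end{aligned}$$

and \(⧗ \iota (x)\) does not entail \(⧗ (2\,\iota (x))\): receipts can be obtained only by actually spending time.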
Clocks are a reconstruction of Clochard et al.’s “one-time integers” [5], which support unrestricted addition, but cannot be duplicated. Whereas Clochard et al. view one-time integers as a primitive concept, and offer a direct paper proof of their soundness, we have just reconstructed them in terms of a more elementary notion, namely time receipts, and in the setting of a more powerful program logic, whose soundness is machine-checked, namely Iris.
Persistent time receipts. In addition to exclusive time receipts, it is useful to introduce a persistent form of time receipts.Footnote 4 The axioms that govern both exclusive and persistent time receipts appear in Fig. 3.
We write \(⧖ n\) for a persistent receipt, a witness that at least n units of time have elapsed. (We avoid the terminology “n persistent time receipts”, in the plural form, because persistent time receipts are not additive. We view \(⧖ n\) as one receipt whose face value is n.) This assertion is persistent, which in Iris terminology means that once it holds, it holds forever. This implies, in particular, that it is duplicable: \(⧖ n \equiv ⧖ n \mathrel {*} ⧖ n\). It is created just by observing the existence of n exclusive time receipts, as stated by the following axiom, also listed in Fig. 3: \(⧗ n \vdash ⧗ n \mathrel {*} ⧖ n\). Intuitively, someone who has access to the assertion \(⧖ n\) is someone who knows that n units of work have been performed, even though they have not necessarily “personally” performed that work. Because this knowledge is not exclusive, the conjunction \(⧖ n_1 \mathrel {*} ⧖ n_2\) does not entail \(⧖ (n_1+n_2)\). Instead, we have the following axiom, also listed in Fig. 3: \(⧖ n_1 \mathrel {*} ⧖ n_2 \equiv ⧖ \max (n_1,n_2)\).
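For instance, writing \(⧖ n\) for a persistent receipt of face value n, two witnesses that at least 3 and at least 5 units of time have elapsed jointly witness no more than 5:

$$\begin{aligned} ⧖ 3 \mathrel {*} ⧖ 5 \;\equiv \; ⧖ \max (3,5) \;\equiv \; ⧖ 5 \end{aligned}$$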
More subtly, the specification of \( tick \) in Fig. 3 is stronger than the one in Fig. 2. According to this strengthened specification, \( tick \) does not just produce an exclusive receipt \(⧗ 1\). In addition to that, if a persistent time receipt \(⧖ n\) is at hand, then \( tick \) is able to increment it and to produce a new persistent receipt \(⧖ (n+1)\), thus reflecting the informal idea that a new unit of time has just been spent. A user who does not wish to make use of this feature can pick \(n=0\) and recover the specification of \( tick \) in Fig. 2 as a special case.
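Writing \(⧖ n\) for a persistent receipt of face value n, instantiating the strengthened specification with \(n=0\) gives (a sketch):

$$\begin{aligned} \{\, ⧖ 0 \,\}\; tick \,(v)\;\{\, \lambda w.\; w = v \mathrel {*} ⧗ 1 \mathrel {*} ⧖ 1 \,\} \end{aligned}$$

Since a persistent receipt of face value 0 carries no information and is freely available, and since \(⧖ 1\) can be discarded, this indeed subsumes the specification of Fig. 2.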
Finally, because \(⧖ n\) means that n steps have been taken, and because we promise never to reach \(N\) steps, we adopt the axiom \(⧖ N \vdash \mathit {False}\), also listed in Fig. 3. It implies the earlier axiom \(⧗ N \vdash \mathit {False}\), which is therefore not explicitly shown in Fig. 3.
In practice, how are persistent time receipts exploited? By analogy with clocks, let us define a predicate for a machine integer x accompanied with \(\iota (x)\) persistent time receipts:
$$\begin{aligned} snapclock (x) \;\triangleq \; ⧖ \iota (x) \end{aligned}$$
By construction, this predicate is persistent, therefore duplicable:
$$\begin{aligned} snapclock (x) \;\equiv \; snapclock (x) \mathrel {*} snapclock (x) \end{aligned}$$
We refer to this concept as a “snapclock”, as it is not a clock, but can be thought of as a snapshot of some clock. Thanks to the axiom \(⧗ n \vdash ⧗ n \mathrel {*} ⧖ n\), we have:
$$\begin{aligned} clock (x) \;\vdash \; clock (x) \mathrel {*} snapclock (x) \end{aligned}$$
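This entailment follows by unfolding the definitions of both predicates and applying the axiom relating exclusive and persistent receipts:

$$\begin{aligned} clock (x) \;\equiv \; ⧗ \iota (x) \;\vdash \; ⧗ \iota (x) \mathrel {*} ⧖ \iota (x) \;\equiv \; clock (x) \mathrel {*} snapclock (x) \end{aligned}$$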
Furthermore, snapclocks have the valuable property that, by performing just one step of extra work, a snapclock can be incremented, yielding a new snapclock that is greater by one. That is, the following Hoare triple holds:
$$\begin{aligned} \{\, snapclock (x) \,\}\;\; x + 1 \;\;\{\, \lambda x'.\; \iota (x') = \iota (x)+1 \mathrel {*} snapclock (x') \,\} \end{aligned}$$
The proof is not difficult. Unfolding \( snapclock (x)\) in the precondition yields \(⧖ n\), where \(\iota (x)=n\). As per the strengthened specification of \( tick \), the execution of \( tick \) then yields \(⧗ 1 \mathrel {*} ⧖ (n+1)\). As in the case of clocks, the assertion \(⧖ (n+1)\) implies \(0 \le n+1 < 2^{w-1}\), which means that no overflow can occur. Finally, \(⧗ 1\) is thrown away and \(⧖ (n+1)\) is used to justify \( snapclock (x')\) in the postcondition.
Adding two arbitrary snapclocks \(x_1\) and \(x_2\) is illegal: from the sole assumption \( snapclock (x_1) \,\mathrel {*}\, snapclock (x_2)\), one cannot prove that the addition of \(x_1\) and \(x_2\) won’t cause an overflow, and one cannot prove that its result is a valid snapclock. However, snapclocks do support a restricted form of addition. The addition of two snapclocks \(x_1\) and \(x_2\) is safe, and produces a valid snapclock x, provided it is known ahead of time that its result is less than some preexisting snapclock y:
$$\begin{aligned} \{\, snapclock (x_1) \mathrel {*} snapclock (x_2) \mathrel {*} snapclock (y) \mathrel {*} \iota (x_1)+\iota (x_2) \le \iota (y) \,\}\;\; add \,(x_1,x_2)\;\; \{\, \lambda x.\; \iota (x) = \iota (x_1)+\iota (x_2) \mathrel {*} snapclock (x) \,\} \end{aligned}$$
Snapclocks are a reconstruction of Clochard et al.’s “peano integers” [5], which are so named because they do not support unrestricted addition. Clocks and snapclocks represent different compromises: whereas clocks support addition but not duplication, snapclocks support duplication but not addition. They are useful in different scenarios: as a rule of thumb, if an integer counter is involved in the implementation of a mutable data structure, then one should attempt to view it as a clock; if it is involved in the implementation of a persistent data structure, then one should attempt to view it as a snapclock.