Time Credits and Time Receipts in Iris
Abstract
We present a machine-checked extension of the program logic Iris with time credits and time receipts, two dual means of reasoning about time. Whereas time credits are used to establish an upper bound on a program’s execution time, time receipts can be used to establish a lower bound. More strikingly, time receipts can be used to prove that certain undesirable events—such as integer overflows—cannot occur until a very long time has elapsed. We present several machine-checked applications of time credits and time receipts, including an application where both concepts are exploited.
1 Introduction
A program logic, such as Hoare logic or Separation Logic, is a set of deduction rules that can be used to reason about the behavior of a program. To this day, considerable effort has been invested in developing ever-more-powerful program logics that control the extensional behavior of programs, that is, logics that guarantee that a program safely computes a valid final result. A lesser effort has been devoted to logics that allow reasoning not just about safety and functional correctness, but also about intensional aspects of a program’s behavior, such as its time consumption and space usage.
In this paper, we are interested in narrowing the gap between these lines of work. We present a formal study of two mechanisms by which a standard program logic can be extended with means of reasoning about time. As a starting point, we take Iris [11, 12, 13, 14], a powerful evolution of Concurrent Separation Logic [3]. We extend Iris with two elementary time-related concepts, namely time credits [1, 4, 9] and time receipts.
Time credits and time receipts are independent concepts: it makes sense to extend a program logic with either of them in isolation or with both of them simultaneously. They are dual concepts: every computation step consumes one time credit and produces one time receipt. They are purely static: they do not exist at runtime. We view them as Iris assertions. Thus, they can appear in the correctness statements that we formulate about programs and in the proofs of these statements.
Time credits can be used to establish an upper bound on the execution time of a program. Dually, time receipts can be used to establish a lower bound, and (as explained shortly) can be used to prove that certain undesirable events cannot occur until a very long time has elapsed.
Until now, time credits have been presented as an ad hoc extension of some fixed flavor of Separation Logic [1, 4, 9]. In contrast, we propose a construction which in principle allows time credits to be introduced on top of an arbitrary “base logic”, provided this base logic is a sufficiently rich variety of Separation Logic. In order to make our definitions and proofs more concrete, we use Iris as the base logic. Our construction involves composing the base logic with a program transformation that inserts a \( tick ()\) instruction in front of every computation step. As far as a user of the composite logic is concerned, the \( tick ()\) instruction and the assertion \(\$1\), which represents one time credit, are abstract: the only fact to which the user has access is the Hoare triple \(\{\$1\}\; tick ()\;\{\lambda \_.\, \mathit{True}\}\), which states that “\( tick ()\) consumes one time credit”.
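To make the role of \( tick ()\) concrete, here is a toy executable model of the translation, written in Python purely for illustration. The global counter, the function names, and the example expression are our own inventions for this sketch, not the paper’s HeapLang definitions.

```python
# Toy model of the tick translation and of time credits, for intuition only.
# A global counter plays the role of the supply of time credits; tick()
# consumes one credit and fails when the supply is exhausted.

class OutOfCredits(Exception):
    pass

credits = 0

def tick():
    """Consume one time credit: a model of '{$1} tick() {True}'."""
    global credits
    if credits == 0:
        raise OutOfCredits
    credits -= 1

def translated():
    # Translation of the source expression (1 + 2) * 3, which takes two
    # computation steps: tick() is inserted in front of each step.
    tick()
    a = 1 + 2
    tick()
    return a * 3

def run_with_budget(n, f):
    """Initialize the credit supply with n, then run the translated code."""
    global credits
    credits = n
    return f()
```

With a budget of two credits the translated expression runs to completion; with a budget of one, the second step finds the supply exhausted and fails, mirroring the fact that an execution cannot take more steps than it holds credits for.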
There are two reasons why we choose Iris [12] as the base logic. First, in the proof of soundness of the composite logic, we must exhibit concrete definitions of \( tick \) and \(\$1\) such that the triple \(\{\$1\}\; tick ()\;\{\lambda \_.\, \mathit{True}\}\) holds. Several features of Iris, such as ghost state and shared invariants, play a key role in this construction. Second, at the user level, the power of Iris can also play a crucial role. To illustrate this, we present the first machine-checked reconstruction of Okasaki’s debits [19] in terms of time credits. The construction makes crucial use of both time credits and Iris’ ghost monotonic state and shared invariants.
Time receipts are a new concept, a contribution of this paper. To extend a base logic with time receipts, we follow the exact same route as above: we compose the base logic with the same program transformation as above, which we refer to as “the tick translation”. In the eyes of a user of the composite logic, the \( tick ()\) instruction and the assertion \(⧗ 1\), which represents one time receipt, are again abstract: this time, the only published fact about \( tick \) is the triple \(\{\mathit{True}\}\; tick ()\;\{\lambda \_.\, ⧗ 1\}\), which states that “\( tick ()\) produces one time receipt”.
Thus far, the symmetry between time credits and time receipts seems perfect: whereas time credits allow establishing an upper bound on the cost of a program fragment, time receipts allow establishing a lower bound. This raises a pragmatic question, though: why invest effort, time and money into a formal proof that a piece of code is slow? What might be the point of such an endeavor? Taking inspiration from Clochard et al. [5], we answer this question by turning slowness into a quality. If there is a certain point at which a process might fail, then by showing that this process is slow, we can show that failure is far away in the future. More specifically, Clochard et al. propose two abstract types of integer counters, dubbed “one-time” integers and “peano” integers, and provide a paper proof that these counters cannot overflow in a feasible time: that is, it would take infeasible time (say, centuries) for an execution to reach a point where overflow actually occurs. To reflect this idea, we abandon the symmetry between time credits and time receipts and publish a fact about time receipts which has no counterpart on the time-credit side. This fact is an implication: \(⧗ N \Rightarrow \mathit{False}\), that is, “\(N\) time receipts imply \(\mathit{False}\)”. The global parameter \(N\) can be adjusted so as to represent one’s idea of a running time that is infeasible, perhaps due to physical limitations, perhaps due to assumptions about the conditions in which the software is operated. In this paper, we explain what it means for the composite program logic to remain sound in the presence of this axiom, and provide a formal proof that Iris, extended with time receipts, is indeed sound. Furthermore, we verify that Clochard et al.’s ad hoc concepts of “one-time” integers and “peano” integers can be reconstructed in terms of time receipts, a more fundamental concept.
Finally, to demonstrate the combined use of time credits and receipts, we present a proof of the Union-Find data structure, where credits are used to express an amortized time complexity bound and receipts are used to prove that a node’s integer rank cannot overflow, even if it is stored in very few bits.
In summary, the contributions of this paper are as follows:
1. A way of extending an off-the-shelf program logic with time credits and/or receipts, by composition with a program transformation.
2. Extensions of Iris with time credits and receipts, accompanied with machine-checked proofs of soundness.
3. A machine-checked reconstruction of Okasaki’s debits as a library in Iris with time credits.
4. A machine-checked reconstruction of Clochard et al.’s “one-time” integers and “peano” integers in Iris with time receipts.
5. A machine-checked verification of Union-Find in Iris with time credits and receipts, offering both an amortized complexity bound and a safety guarantee despite the use of machine integers of very limited width.
All of the results reported in this paper have been checked in Coq [17].
2 A User’s Overview of Time Credits and Time Receipts
2.1 Time Credits
The axiom \(\mathit{timeless}(\$ n)\) means that time credits are independent of Iris’ step-indexing. In practice, this allows an Iris invariant that involves time credits to be acquired without causing a “later” modality to appear [12, §5.7]. The reader can safely ignore this detail.
The last axiom, a Hoare triple, means that every computation step requires and consumes one time credit. As in Iris, the postconditions of our Hoare triples are \(\lambda \)-abstractions: they take as a parameter the return value of the term. At this point, \( tick ()\) can be thought of as a pseudo-instruction that has no runtime effect and is implicitly inserted in front of every computation step.
2.2 Time Receipts
In contrast with time credits, time receipts are a new concept, a contribution of this paper. We distinguish two forms of time receipts. The most basic form, exclusive time receipts, is the dual of time credits, in the sense that every computation step produces one time receipt. The second form, persistent time receipts, exhibits slightly different properties. Inspired by Clochard et al. [5], we show that time receipts can be used to prove that certain undesirable events, such as integer overflows, cannot occur unless a program is allowed to execute for a very, very long time—typically centuries. In the following, we explain that exclusive time receipts allow reconstructing Clochard et al.’s “one-time” integers [5, §3.2], which are so named because they are not duplicable, whereas persistent time receipts allow reconstructing their “peano” integers [5, §3.2], which are so named because they do not support unrestricted addition.
Exclusive time receipts. The assertion \(⧗ n\) denotes n time receipts. Like time credits, these time receipts are “exclusive”, by which we mean that they are not duplicable. The basic laws that govern exclusive time receipts appear in Fig. 2. They are the same laws that govern time credits, with two differences. The first difference is that time receipts are the dual of time credits: the specification of \( tick \), in this case, states that every computation step produces one time receipt.^{1} The second difference lies in the last axiom of Fig. 2, which has no analogue in Fig. 1, and which we explain below.
In practice, how do we expect time receipts to be exploited? They can be used to prove lower bounds on the execution time of a program: if the Hoare triple \(\{\mathit{True}\}\;p\;\{\lambda \_.\, ⧗ n\}\) holds, then the execution of the program p cannot terminate in less than n steps. Inspired by Clochard et al. [5], we note that time receipts can also be used to prove that certain undesirable events cannot occur in a feasible time. This is done as follows. Let \(N\) be a fixed integer, chosen large enough that a modern processor cannot possibly execute \(N\) operations in a feasible time.^{2} The last axiom of Fig. 2, \(⧗ N \Rightarrow \mathit{False}\), states that \(N\) time receipts imply a contradiction.^{3} This axiom informally means that we won’t compute for \(N\) time steps, because we cannot, or because we promise not to do such a thing. A consequence of this axiom is that \(⧗ n\) implies \(n < N\): that is, if we have observed n time steps, then n must be small.
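As a sanity check of the feasibility argument, the following sketch estimates how long a processor would need to take \(N = 2^{63}\) steps. The concrete figures (the choice of N, the step rate) are our own illustration, not taken from the paper.

```python
# Back-of-the-envelope check (our own figures, not the paper's) that a
# budget of N = 2^63 steps is infeasible: at 10^9 steps per second, a
# single thread needs centuries to take N steps.

N_STEPS = 2.0 ** 63              # N = 2^63, about 9.2e18 steps
STEPS_PER_SECOND = 1e9           # an optimistic 10^9 steps per second
SECONDS_PER_YEAR = 31_557_600.0  # one Julian year

years_to_exhaust = N_STEPS / STEPS_PER_SECOND / SECONDS_PER_YEAR
# roughly 292 years: reaching N steps would take centuries
```

Under these assumptions, an execution cannot reach \(N\) steps within any realistic operating lifetime, which is what justifies adopting the axiom.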
Adopting this axiom weakens the guarantee offered by the program logic. A Hoare triple \(\{P\}\;p\;\{\varPhi \}\) no longer implies that the program p is forever safe. Instead, it means that p is \((N-1)\)-safe: the execution of p cannot go wrong until at least \(N-1\) steps have been taken. Because \(N\) is very large, for many practical purposes, this is good enough.
Persistent time receipts. In addition to exclusive time receipts, it is useful to introduce a persistent form of time receipts.^{4} The axioms that govern both exclusive and persistent time receipts appear in Fig. 3.
We write \(⧖ n\) for a persistent receipt, a witness that at least n units of time have elapsed. (We avoid the terminology “n persistent time receipts”, in the plural form, because persistent time receipts are not additive. We view \(⧖ n\) as one receipt whose face value is n.) This assertion is persistent, which in Iris terminology means that once it holds, it holds forever. This implies, in particular, that it is duplicable: \(⧖ n \equiv ⧖ n \mathbin {*} ⧖ n\). It is created just by observing the existence of n exclusive time receipts, as stated by the following axiom, also listed in Fig. 3: \(⧗ n \vdash ⧗ n \mathbin {*} ⧖ n\). Intuitively, someone who has access to the assertion \(⧖ n\) is someone who knows that n units of work have been performed, even though they have not necessarily “personally” performed that work. Because this knowledge is not exclusive, the conjunction \(⧖ n_1 \mathbin {*} ⧖ n_2\) does not entail \(⧖ (n_1+n_2)\). Instead, we have the following axiom, also listed in Fig. 3: \(⧖ n_1 \mathbin {*} ⧖ n_2 \equiv ⧖ \max (n_1,n_2)\).
More subtly, the specification of \( tick \) in Fig. 3 is stronger than the one in Fig. 2. According to this strengthened specification, \( tick ()\) does not just produce an exclusive receipt \(⧗ 1\). In addition to that, if a persistent time receipt \(⧖ n\) is at hand, then \( tick ()\) is able to increment it and to produce a new persistent receipt \(⧖ (n+1)\), thus reflecting the informal idea that a new unit of time has just been spent. A user who does not wish to make use of this feature can pick \(n=0\) and recover the specification of \( tick \) in Fig. 2 as a special case.
Finally, because \(⧖ n\) means that n steps have been taken, and because we promise never to reach \(N\) steps, we adopt the axiom \(⧖ N \Rightarrow \mathit{False}\), also listed in Fig. 3. It implies the earlier axiom \(⧗ N \Rightarrow \mathit{False}\), which is therefore not explicitly shown in Fig. 3.
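The algebra of these receipt laws can be mimicked with plain integer arithmetic. The following toy model (our own encoding, not part of the paper’s Coq development) treats exclusive receipts as additive counts, persistent receipts as witnesses that combine by taking a maximum, and reaching the bound N as a contradiction:

```python
# Toy model (our own encoding) of the receipt algebra of Fig. 3.

class Contradiction(Exception):
    pass

N = 1_000  # stands for the global parameter N (unrealistically small here)

def combine_exclusive(n1, n2):
    # exclusive receipts are additive: combining n1 and n2 yields n1 + n2
    return n1 + n2

def combine_persistent(n1, n2):
    # persistent receipts are witnesses and do not add up: combining
    # witnesses of face values n1 and n2 yields max(n1, n2)
    return max(n1, n2)

def check(n):
    # the axiom "N receipts imply False": from any receipt we learn that
    # its value is strictly below N
    if n >= N:
        raise Contradiction("N steps reached")
    return n
```

The model is deliberately naive, but it captures why two persistent witnesses of 3 and 5 elapsed units together witness only 5 units, whereas two exclusive bundles of 3 and 5 receipts together account for 8 steps.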
3 HeapLang and the Tick Translation
In the next section (Sect. 4), we extend Iris with time credits, yielding a new program logic Iris\(^{\mathord {\$}}\). We do this without modifying Iris. Instead, we compose Iris with a program transformation, the “tick translation”, which inserts \( tick ()\) instructions into the code in front of every computation step. In the construction of Iris\(^{\mathord {⧗}}\), our extension of Iris with time receipts, the tick translation is exploited in a similar way (Sect. 5). In this section, we define the tick translation and state some of its properties.
Iris is a generic program logic: it can be instantiated with an arbitrary calculus for which a small-step operational semantics is available [12]. Ideally, our extension of Iris should take place at this generic level, so that it, too, can be instantiated for an arbitrary calculus. Unfortunately, it seems difficult to define the tick translation and to prove it correct in a generic manner. For this reason, we choose to work in the setting of HeapLang [12], an untyped \(\lambda \)-calculus equipped with Booleans, signed machine integers, products, sums, recursive functions, references, and shared-memory concurrency. The three standard operations on mutable references, namely allocation, reading, and writing, are available. A compare-and-set operation and an operation for spawning a new thread are also provided. As the syntax and operational semantics of HeapLang are standard and very much irrelevant in this paper, we omit them. They appear in our online repository [17].
As far as the end user is concerned, \( tick \) remains abstract (Sect. 2). Yet, in our constructions of Iris\(^{\mathord {\$}}\) and Iris\(^{\mathord {⧗}}\), we must provide a concrete implementation of it in HeapLang. This implementation, named \( tick _ c \), appears in Fig. 4. A global integer counter \( c \) stores the number of computation steps that the program is still allowed to take. The call \( tick _ c ()\) decrements this counter if it holds a nonzero value, and otherwise invokes \( oops ()\).
At this point, the memory location \( c \) and the value \( oops \) are parameters.
We stress that \( tick _ c \) plays a role only in the proofs of soundness of Iris\(^{\mathord {\$}}\) and Iris\(^{\mathord {⧗}}\). It is never actually executed, nor is it shown to the end user.
Once \( tick \) is instantiated with \( tick _ c \), one can prove that the translation is correct in the following sense: the translated code takes the same computation steps as the source code and additionally keeps track of how many steps are taken. More specifically, if the source code can make n computation steps, and if \( c \) is initialized with a value m that is sufficiently large (that is, \(m \ge n\)), then the translated code can make n computation steps as well, and \( c \) is decremented from m to \(m-n\) in the process.
Lemma 1
In this statement, the metavariable \(T\) stands for a thread pool, while \(\sigma \) stands for a heap. The relation \(\rightarrow \) is HeapLang’s “thread-pool reduction”. For the sake of brevity, we write just \(\langle e \rangle \) for \(\langle e \rangle _{ tick _ c }\), that is, for the translation of the expression \(e\), where \( tick \) is instantiated with \( tick _ c \). This notation is implicitly dependent on the parameters \( c \) and \( oops \).
The above lemma holds for every choice of \( oops \). Indeed, because the counter \( c \) initially holds the value m, and because we have \(m\ge n\), the counter is never about to fall below zero, so \( oops \) is never invoked.
The next lemma also holds for every choice of \( oops \). It states that if the translated program is safe and if the counter \( c \) has not yet reached zero then the source program is not just about to crash.
Lemma 2
(Immediate Safety Preservation). Assume \( c \) is fresh for \(e\). Let \(m > 0\). If the configuration \((\langle e \rangle , \sigma [c \mapsto m])\) is safe, then either \(e\) is a value or the configuration \((e, \sigma )\) is reducible.
By combining Lemmas 1 and 2 and by contraposition, we find that safety is preserved backwards, as follows: if, when the counter \( c \) is initialized with m, the translated program \(\langle e \rangle \) is safe, then the source program \(e\) is m-safe.
Lemma 3
(Safety Preservation). If for every location \( c \) the configuration \((\langle e \rangle , \sigma [c \mapsto m])\) is safe, then the configuration \((e, \sigma )\) is m-safe.
4 Iris with Time Credits
The authors of Iris [12] have used Coq both to check that Iris is sound and to offer an implementation of Iris that can be used to carry out proofs of programs. The two are tied: if a triple \(\{P\}\;p\;\{\varPhi \}\) can be established by applying the proof rules of Iris, then one gets a self-contained Coq proof that the program p is safe.
In this section, we temporarily focus on time credits and explain how we extend Iris with time credits, yielding a new program logic Iris\(^{\mathord {\$}}\). The new logic is defined in Coq and still offers an end-to-end guarantee: if \(\{\$ k\}\;p\;\{\varPhi \}\) can be established in Coq by applying the proof rules of Iris\(^{\mathord {\$}}\), then one has proved in Coq that p is safe and runs in at most k steps.
To define Iris\(^{\mathord {\$}}\), we compose Iris with the tick translation. We are then able to argue that, because this program transformation is operationally correct (that is, it faithfully accounts for the passing of time), and because Iris is sound (that is, it faithfully approximates the behavior of programs), the result of the composition is a sound program logic that is able to reason about time.
In the following, we view the interface of time credits as explicitly parameterized over \(\mathord {\$}\) and \( tick \). Thus, we write “\( TCIntf \;(\mathord {\$})\; tick \)” for the separating conjunction of all items in Fig. 1 except the declarations of \(\mathord {\$}\) and \( tick \).
We require the end user, who wishes to perform proofs of programs in Iris\(^{\mathord {\$}}\), to work with Iris\(^{\mathord {\$}}\) triples, which are defined as follows:
Definition 1
Thus, an Iris\(^{\mathord {\$}}\) triple is in reality an Iris triple about the instrumented expression \(\langle e \rangle \). While proving this Iris triple, the end user is given an abstract view of the predicate \(\mathord {\$}\) and the instruction \( tick \). He does not have access to their concrete definitions, but does have access to the laws that govern them.
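Concretely, Definition 1 can be rendered along the following lines; this is our approximate transcription of the quantifier structure, and the precise Coq statement appears in the repository [17]:

```latex
\{\,P\,\}\; e \;\{\,\varPhi\,\}_{\$}
  \;\triangleq\;
  \forall\, (\mathord{\$}),\; tick.\quad
  \mathit{TCIntf}\;(\mathord{\$})\;( tick )
  \;\mathrel{-\!\!*}\;
  \{\,P\,\}\; \langle e \rangle_{ tick } \;\{\,\varPhi\,\}
```

That is, the user must prove the Iris triple about \(\langle e \rangle \) uniformly in the credit predicate and in \( tick \), knowing only the laws packaged in \( TCIntf \).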
We prove that Iris\(^{\mathord {\$}}\) is sound in the following sense:
Theorem 1
If \(\{\$ n\}\;e\;\{\varPhi \}\) holds, then the machine configuration \((e, \varnothing )\), where \(\varnothing \) is the empty heap, is safe and terminates in at most n steps.
The proof is carried out in three steps:
1. we provide a concrete definition of \( tick \);
2. we provide a concrete definition of \(\mathord {\$}\) and prove that \( TCIntf \;(\mathord {\$})\; tick _ c \) can be established (Lemma 6);
3. this yields the Iris triple \(\{\$ n\}\;\langle e \rangle \;\{\varPhi \}\); from this and from the correctness of the tick translation, we deduce that e cannot crash or run for more than n steps.
With these concrete choices of \( tick \) and \( oops \), the translation transforms an out-of-time-budget condition into a hard crash. Because Iris forbids crashes, Iris\(^{\mathord {\$}}\), which is the composition of the translation with Iris, will forbid out-of-time-budget conditions, as desired.
For technical reasons, we need two more lemmas about the translation, whose proofs rely on the fact that \( oops \) is instantiated with \( crash \). They are slightly modified or strengthened variants of Lemmas 2 and 3. First, if the source code can take one step, then the translated code, supplied with zero budget, crashes. Second, if the translated code, supplied with a runtime budget of m, does not crash, then the source code terminates in at most m steps.
Lemma 4
(Credit Exhaustion). Suppose the configuration \((T, \sigma )\) is reducible. Then, for all \( c \), the configuration \((\langle T \rangle , \sigma [c \mapsto 0])\) is unsafe.
Lemma 5
(Safety Preservation, Strengthened). If for every location \( c \) the configuration \((\langle e \rangle , \sigma [c \mapsto m])\) is safe, then \((e, \sigma )\) is safe and terminates in at most \(m\) steps.
Step 2. Our second step, roughly, is to exhibit a definition of \(\mathord {\$}\) such that \( TCIntf \;(\mathord {\$})\; tick _ c \) is satisfied. That is, we would like to prove something along the lines of: “\(\exists \mathord {\$}.\; TCIntf \;(\mathord {\$})\; tick _ c \) holds”. However, these informal sentences do not quite make sense. This formula is not an ordinary proposition: it is an Iris assertion, of type \( iProp \). Thus, it does not make sense to say that this formula “is true” in an absolute manner. Instead, we prove in Iris that we can make this assertion true by performing a view shift, that is, a number of operations that have no runtime effect, such as allocating a ghost location and imposing an invariant that ties this ghost state with the physical state of the counter \( c \). This is stated as follows:
Lemma 6
In this statement, on the left-hand side of the view shift symbol, we find the “points-to” assertion \( c \mapsto n\), which represents the unique ownership of the memory location \( c \) and the assumption that its initial value is n. This assertion no longer appears on the right-hand side of the view shift. This reflects the fact that, when the view shift takes place, it becomes impossible to access \( c \) directly; the only way of accessing it is via the operation \( tick _ c \).
On the right-hand side of the view shift symbol, beyond the existential quantifier, we find a conjunction of the assertion \( TCIntf \;(\mathord {\$})\; tick _ c \), which means that the laws of time credits are satisfied, and \(\$ n\), which means that there are initially n time credits in existence.
In the interest of space, we provide only a brief summary of the proof of Lemma 6; the reader is referred to the extended version of this paper [18, Appendix A] for more details. In short, the assertion \(\$ 1\) is defined in such a way that it represents an exclusive contribution of one unit to the current value of the global counter c. In other words, we install the following invariant: at all times, the current value of c is (at least) the sum of all time credits in existence. Thus, the assertion \(\$ 1\) guarantees that c is nonzero, and can be viewed as a permission to decrement c by one. This allows us to prove that the specification of \( tick \) in Fig. 1 is satisfied by our concrete implementation \( tick _ c \). In particular, \( tick _ c \) cannot cause a crash: indeed, under the precondition \(\$ 1\), c is not in danger of falling below zero, and \( oops \) is not executed—it is in fact dead code.
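The following Python sketch (ours, not the Coq proof) mimics this invariant: a physical counter sits alongside a ghost count of the credits in existence, and owning a credit is what justifies decrementing the counter, keeping the \( oops \) branch dead:

```python
# Toy model of the invariant behind Time Credit Initialization: the
# physical counter c always holds at least the number of time credits in
# existence, so owning one credit is a permission to decrement c.

class State:
    def __init__(self, n):
        # initialization: setting c to n mints n credits at once,
        # establishing the invariant c >= credits
        self.c = n        # physical counter
        self.credits = n  # ghost state: credits currently in existence

    def invariant_holds(self):
        return self.c >= self.credits

    def tick(self):
        # called under the precondition that one credit is owned
        assert self.credits > 0, "precondition: one time credit"
        assert self.c > 0, "invariant: c cannot fall below zero, no oops()"
        self.c -= 1        # decrement the physical counter
        self.credits -= 1  # the credit is consumed
```

The point of the model is that the second assertion never fires as long as the first one holds: the credit count bounds the counter from below, which is exactly why the crashing branch is unreachable.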
Step 3. In the last reasoning step, we complete the proof of Theorem 1. The proof is roughly as follows. Suppose the end user has established \(\{\$ n\}\;e\;\{\varPhi \}\). By Safety Preservation, Strengthened (Lemma 5), to prove that \((e, \varnothing )\) is safe and runs in at most n steps, it suffices to show (for an arbitrary location \( c \)) that the translated expression \(\langle e \rangle \), executed in the initial heap \(\varnothing [c \mapsto n]\), is safe. To do so, beginning with this initial heap, we perform Time Credit Initialization, that is, we execute the view shift whose statement appears in Lemma 6. This yields an abstract predicate \(\mathord {\$}\) as well as the assertions \( TCIntf \;(\mathord {\$})\; tick _ c \) and \(\$ n\). At this point, we unfold the Iris\(^{\mathord {\$}}\) triple \(\{\$ n\}\;e\;\{\varPhi \}\), yielding an implication (see Definition 1), and apply it to \(\mathord {\$}\), to \( tick _ c \), and to the hypothesis \( TCIntf \;(\mathord {\$})\; tick _ c \). This yields the Iris triple \(\{\$ n\}\;\langle e \rangle \;\{\varPhi \}\). Because we have \(\$ n\) at hand and because Iris is sound [12], this implies that \(\langle e \rangle \) is safe. This concludes the proof.
This last step is, we believe, where the modularity of our approach shines. Iris’ soundness theorem is reused as a black box, without change. In fact, any program logic other than Iris could be used as a basis for our construction, as long as it is expressive enough to prove Time Credit Initialization (Lemma 6). The last ingredient, Safety Preservation, Strengthened (Lemma 5), involves only the operational semantics of HeapLang, and is independent of Iris.
This was just an informal account of our proof. For further details, the reader is referred to the online repository [17].
5 Iris with Time Receipts
In this section, we extend Iris with time receipts and prove the soundness of the new logic, dubbed Iris\(^{\mathord {⧗}}\). To do so, we follow the scheme established in the previous section (Sect. 4), and compose Iris with the tick translation.
From here on, let us view the interface of time receipts as parameterized over \(⧗ \), \(⧖ \), and \( tick \). Thus, we write “\( TRIntf \;(⧗ )\;(⧖ )\; tick \)” for the separating conjunction of all items in Fig. 3 except the declarations of \(⧗ \), \(⧖ \), and \( tick \).
As in the case of credits, the user is given an abstract view of time receipts:
Definition 2
Theorem 2
If \(\{\mathit{True}\}\;e\;\{\varPhi \}\) holds, then the machine configuration \((e, \varnothing )\) is \((N-1)\)-safe.
As indicated earlier, we assume that the end user is interested in proving that crashes cannot occur until a very long time has elapsed, which is why we state the theorem in this way.^{6} Whereas an Iris triple \(\{\mathit{True}\}\;e\;\{\varPhi \}\) guarantees that \(e\) is safe, the Iris\(^{\mathord {⧗}}\) triple \(\{\mathit{True}\}\;e\;\{\varPhi \}\) guarantees only that it takes at least \(N-1\) steps of computation for \(e\) to crash. In this statement, \(N\) is the global parameter that appears in the axiom \(⧖ N \Rightarrow \mathit{False}\) (Fig. 3). Compared with Iris, Iris\(^{\mathord {⧗}}\) provides a weaker safety guarantee, but offers additional reasoning principles, leading to increased convenience and modularity.
The proof again consists of three steps:
1. provide a concrete definition of \( tick \);
2. provide concrete definitions of \(⧗ \) and \(⧖ \) and prove that the interface of time receipts can be established (Lemma 7);
3. from \(\{\mathit{True}\}\;e\;\{\varPhi \}\), deduce that \((e, \varnothing )\) is \((N-1)\)-safe.
Step 2. The next step is to prove that we are able to establish the time receipt interface. We prove the following:
Lemma 7
Step 3. In the last reasoning step, we complete the proof of Theorem 2. Suppose the end user has established \(\{\mathit{True}\}\;e\;\{\varPhi \}\). By Safety Preservation (Lemma 3), to prove that \((e, \varnothing )\) is \((N-1)\)-safe, it suffices to show (for an arbitrary location \( c \)) that \(\langle e \rangle \), executed in the initial heap \(\varnothing [c \mapsto N-1]\), is safe. To do so, beginning with this initial heap, we perform Time Receipt Initialization, that is, we execute the view shift whose statement appears in Lemma 7. This yields two abstract predicates \(⧗ \) and \(⧖ \) as well as the assertion that the interface of time receipts is satisfied. At this point, we unfold \(\{\mathit{True}\}\;e\;\{\varPhi \}\) (see Definition 2), yielding an implication, and apply this implication, yielding the Iris triple \(\{\mathit{True}\}\;\langle e \rangle \;\{\varPhi \}\). Because Iris is sound [12], this implies that \(\langle e \rangle \) is safe. This concludes the proof. For further detail, the reader is again referred to our online repository [17].
6 Marrying Time Credits and Time Receipts
Definition 3
Theorem 3
If \(\{\$ n\}\;e\;\{\varPhi \}\) holds, then the machine configuration \((e, \varnothing )\) is \((N-1)\)-safe. If furthermore \(n < N\) holds, then this machine configuration terminates in at most n steps.
Iris\(^{\mathord {\$⧗}}\) allows exploiting time credits to prove time complexity bounds and, at the same time, exploiting time receipts to prove the absence of certain integer overflows. Our verification of Union-Find (Sect. 8) illustrates these two aspects.
Guéneau et al. [7] use time credits to reason about asymptotic complexity, that is, about the manner in which a program’s complexity grows as the size of its input grows towards infinity. Does such asymptotic reasoning make sense in Iris\(^{\mathord {\$⧗}}\), where no program is ever executed for \(N\) time steps or beyond? It seems to be the case that if a program p satisfies the triple \(\{\$ n\}\;p\;\{\varPhi \}\), then it also satisfies the stronger triple \(\{\$ \min (n,N)\}\;p\;\{\varPhi \}\), therefore also satisfies \(\{\$ N\}\;p\;\{\varPhi \}\). Can one therefore conclude that p has “constant time complexity”? We believe not. Provided \(N\) is considered a parameter, as opposed to a constant, one cannot claim that “\(N\) is O(1)”, so \(\{\$ N\}\;p\;\{\varPhi \}\) does not imply that “p runs in constant time”. In other words, a universal quantification on \(N\) should come after the existential quantifier that is implicit in the O notation. We have not yet attempted to implement this idea; this remains a topic for further investigation.
7 Application: Thunks in Iris\(^{\mathord {\$}}\)
In this section, we illustrate the power of Iris\(^{\mathord {\$}}\) by constructing an implementation of thunks as a library in Iris\(^{\mathord {\$}}\). A thunk, also known as a suspension, is a very simple data structure that represents a suspended computation. There are two operations on thunks, namely \( create \), which constructs a new thunk, and \( force \), which demands the result of a thunk. A thunk memoizes its result, so that even if it is forced multiple times, the computation only takes place once.
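For intuition, here is a minimal sequential memoizing thunk in Python. This is only a sketch of the behavior described above; the paper’s version is written in HeapLang and additionally tracks debits.

```python
# A minimal sequential thunk with memoization (a sketch, not the paper's
# HeapLang code): create wraps a suspended computation, force runs it at
# most once and caches the result.

class Thunk:
    def __init__(self, f):
        self._f = f          # the suspended computation
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:
            self._value = self._f()  # run the computation, once
            self._f = None           # drop the closure
            self._forced = True
        return self._value           # every later call is constant time
```

Forcing the thunk twice runs the suspended computation once; this run-at-most-once behavior is what makes sharing multiple pointers to a thunk compatible with an amortized cost analysis.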
Okasaki [19] proposes a methodology for reasoning about the amortized time complexity of computations that involve shared thunks. For every thunk, he keeps track of a debit, which can be thought of as an amount of credit that one must still pay before one is allowed to force this thunk. A ghost operation, \( pay \), changes one’s view of a thunk, by reducing the debit associated with this thunk. \( force \) can be applied only to a zero-debit thunk, and has amortized cost O(1). Indeed, if this thunk has been forced already, then \( force \) really requires constant time; and if this thunk is being forced for the first time, then the cost of performing the suspended computation must have been paid for in advance, possibly in several installments, via \( pay \). This discipline is sound even in the presence of sharing, that is, of multiple pointers to a thunk. Indeed, whereas duplicating a credit is unsound, duplicating a debit leads to an overapproximation of the true cost, hence is sound. Danielsson [6] formulates Okasaki’s ideas as a type system, which he proves sound in Agda. Pilkiewicz and Pottier [20] reconstruct this type discipline in the setting of a lower-level type system, equipped with basic notions of time credits, hidden state, and monotonic state. Unfortunately, their type system is presented in an informal manner and does not come with a proof of type soundness.
We reproduce Pilkiewicz and Pottier's construction in the formal setting of Iris\(^{\mathord {\$}}\). Indeed, Iris\(^{\mathord {\$}}\) offers all of the necessary ingredients, namely time credits, hidden state (invariants, in Iris terminology) and monotonic state (a special case of Iris' ghost state). Our reconstruction is carried out inside Coq [17].
7.1 Concurrency and Reentrancy
One new problem that arises here is that Okasaki's analysis, which is valid in a sequential setting, potentially becomes invalid in a concurrent setting. Suppose we wish to allow multiple threads to safely share access to a thunk. A natural, simple-minded approach would be to equip every thunk with a lock and allow competition over this lock. Then, unfortunately, forcing would become a blocking operation: one thread could waste time waiting for another thread to finish forcing. In fact, in the absence of a fairness assumption about the scheduler, an unbounded amount of time could be wasted in this way. This appears to invalidate the property that \( force \) has amortized cost O(1).
Technically, the manner in which this problem manifests itself in Iris\(^{\mathord {\$}}\) is in the specification of locks. Whereas in Iris a spin lock can be implemented and proved correct with respect to a simple and well-understood specification [2], in Iris\(^{\mathord {\$}}\), it cannot. The \( lock ()\) method contains a potentially infinite loop: therefore, no finite amount of time credits is sufficient to prove that \( lock ()\) is safe. This issue is discussed in greater depth later on (Sect. 9).
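For concreteness, the acquire loop in question can be sketched as follows (our own Rust sketch, not the HeapLang code verified in [2]; the type and method names are ours). The loop retries a compare-and-set until it succeeds, so no finite credit budget bounds its cost:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// A minimal spin lock. The point is the shape of `lock`: a potentially
// unbounded retry loop, each failed CAS consuming one computation step.
pub struct SpinLock {
    locked: AtomicBool,
}

impl SpinLock {
    pub const fn new() -> SpinLock {
        SpinLock { locked: AtomicBool::new(false) }
    }

    pub fn lock(&self) {
        // Potentially infinite loop: nothing bounds the number of failures.
        while self
            .locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    pub fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}
```

Since each iteration of the loop costs at least one time credit, no precondition of the form "n credits" can pay for `lock()` in the worst case; this is exactly why the spin-lock specification is lost in the credit-counting logic.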
A distinct yet related problem is reentrancy. Arguably, an implementation of thunks should guarantee that a suspended computation is evaluated at most once. This guarantee seems particularly useful when the computation has a side effect: the user can then rely on the fact that this side effect occurs at most once. However, this property does not naturally hold: in the presence of heap-allocated mutable state, it is possible to construct an ill-behaved "reentrant" thunk which, when forced, attempts to recursively force itself. Thus, something must be done to dynamically reject or statically prevent reentrancy. In Pilkiewicz and Pottier's code [20], reentrancy is detected at runtime, thanks to a three-color scheme, and causes a fatal runtime failure. In a concurrent system where each thunk is equipped with a lock, reentrancy is also detected at runtime, and turned into deadlock; but we have explained earlier why we wish to avoid locks.
7.2 Implementation of Thunks
A simple implementation of thunks in HeapLang appears in Fig. 5. A thunk can be in one of two states: white \(f\) and black \(v\). A white thunk is unevaluated: the function \(f\) represents a suspended computation. A black thunk is evaluated: the value \(v\) is the result of the computation that has been performed already. Two colors are sufficient: because our static discipline rules out reentrancy, there is no need for a third color, whose purpose would be to dynamically detect an attempt to force a thunk that is already being forced.
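The two-color implementation can be transcribed, for illustration, into Rust (our own sketch, mirroring the description above rather than Fig. 5 verbatim; the names `White`, `Black`, `create`, `force` follow the prose):

```rust
use std::cell::RefCell;

// Two-color thunk: White holds the suspended computation, Black the result.
enum State<T> {
    White(Box<dyn FnOnce() -> T>),
    Black(T),
}

pub struct Thunk<T> {
    // The Option lets `force` move the one-shot closure out before running it.
    state: RefCell<Option<State<T>>>,
}

impl<T: Clone> Thunk<T> {
    pub fn create(f: impl FnOnce() -> T + 'static) -> Thunk<T> {
        Thunk { state: RefCell::new(Some(State::White(Box::new(f)))) }
    }

    pub fn force(&self) -> T {
        let mut slot = self.state.borrow_mut();
        match slot.take().expect("state is always Some outside force") {
            State::Black(v) => {
                // Already evaluated: return the memoized result.
                *slot = Some(State::Black(v.clone()));
                v
            }
            State::White(f) => {
                let v = f(); // the suspended computation runs at most once
                *slot = Some(State::Black(v.clone()));
                v
            }
        }
    }
}
```

Incidentally, `RefCell` turns reentrancy into a dynamic error: a suspended computation that forces its own thunk panics on the nested `borrow_mut`, playing the role of the runtime check that the static discipline below renders unnecessary.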
7.3 Specification of Thunks in Iris\(^{\mathord {\$}}\)
Our specification of thunks appears in Fig. 6. It declares an abstract thunk predicate, which asserts that \( t \) is a valid thunk, that the debt associated with this thunk is n, and that this thunk (once forced) produces a value that satisfies the postcondition \(\varPhi \). The number n, a debit, is the number of credits that remain to be paid before this thunk can be forced. The postcondition \(\varPhi \) is chosen by the user when a thunk is created. It must be duplicable (this is required in the specification of \( force \)) because \( force \) can be invoked several times and we must guarantee, every time, that the result \(v\) satisfies \(\varPhi \,v\).
The second axiom states that the thunk predicate is a persistent assertion. This means that a valid thunk, once created, remains a valid thunk forever. Among other things, it is permitted to create two pointers to a single thunk and to reason independently about each of these pointers.
The third axiom states that the thunk predicate is covariant in its parameter n: overestimating a debt still leads to a correct analysis of a program's worst-case time complexity.
Next, the specification declares an abstract assertion, of which the user receives exactly one copy. We refer to it as "the thunderbolt".
The next item in Fig. 6 is the specification of \( create \). It is higher-order: the precondition of \( create \) contains a specification of the function f that is passed as an argument to \( create \). This axiom states that, if f represents a computation of cost n, then \( create \;f\) produces an n-debit thunk. The cost of creation itself is 3 credits. This specification is somewhat simplistic, as it does not allow the function f to have a nontrivial precondition. It is possible to offer a richer specification; we eschew it in favor of simplicity.
Next comes the specification of \( force \). Only a 0-debit thunk can be forced. The result is a value \(v\) that satisfies \(\varPhi \). The (amortized) cost of forcing is 11 credits. The thunderbolt appears in the pre- and postconditions of \( force \), forbidding any concurrent attempts to force a thunk.
The last axiom in Fig. 6 corresponds to \( pay \). It is a view shift, a ghost operation. By paying k credits, one turns an n-debit thunk into an \((n-k)\)-debit thunk. At runtime, nothing happens: it is the same thunk before and after the payment. Yet, after the view shift, we have a new view of the number of debits associated with this thunk. Here, paying requires the thunderbolt. It should be possible to remove this requirement; we have not yet attempted to do so.
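Since debits and \( pay \) are purely ghost, they have no runtime counterpart. Purely as an illustration of the protocol, one can mimic it with a dynamically checked counter (our own sketch, not part of the paper's development): `pay` decrements the counter, and `force` refuses to run unless the debit has reached zero.

```rust
use std::cell::Cell;

// Runtime-checked model of the debit discipline. In Iris$ the debit is
// ghost state and `pay` does nothing at runtime; here we make both
// observable so the protocol can be exercised.
pub struct DThunk {
    debit: Cell<u32>,          // remaining debits (ghost in the logic)
    result: Cell<Option<i32>>, // None = white, Some(v) = black
    compute: fn() -> i32,      // the suspended computation
}

impl DThunk {
    // create f n: a thunk whose suspended computation costs n, hence n debits.
    pub fn create(compute: fn() -> i32, debit: u32) -> DThunk {
        DThunk { debit: Cell::new(debit), result: Cell::new(None), compute }
    }

    // pay k: discharge k debits, possibly in several installments.
    pub fn pay(&self, k: u32) {
        self.debit.set(self.debit.get().saturating_sub(k));
    }

    // force is permitted only on a zero-debit thunk, as in Fig. 6.
    pub fn force(&self) -> i32 {
        assert_eq!(self.debit.get(), 0, "forcing a thunk with unpaid debits");
        match self.result.get() {
            Some(v) => v,
            None => {
                let v = (self.compute)();
                self.result.set(Some(v));
                v
            }
        }
    }
}
```

In the logic, the `assert` is unnecessary: the triple for \( force \) simply does not apply to a thunk whose debit is nonzero, so the check is enforced at proof time rather than at run time.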
7.4 Proof of Thunks in Iris\(^{\mathord {\$}}\)
After implementing thunks in HeapLang (Sect. 7.2) and expressing their specification in Iris\(^{\mathord {\$}}\) (Sect. 7.3), it remains to prove that this specification can be established. We sketch the key ideas of this proof.
The invariant that we associate with a thunk states that, at any time:

either the thunk is white, in which case we have at hand the credits that have been paid so far;

or the thunk is black, in which case we have no credits at hand, as they have been spent already.
The final aspect that remains to be explained is our use of an Iris "non-atomic invariant". Indeed, in this proof, we cannot rely on Iris' primitive invariants. A primitive invariant can be acquired only for the duration of an atomic instruction [12]. In our implementation of thunks (Fig. 5), however, we need a "critical section" that encompasses several instructions. That is, we must acquire the invariant before dereferencing \( t \), and (in the case where this thunk is white) we cannot release it until we have marked this thunk black. Fortunately, Iris provides a library of "non-atomic invariants" for this very purpose. (This library is used in the RustBelt project [10] to implement Rust's type Cell.) This library offers separate ghost operations for acquiring and releasing an invariant. Acquiring an invariant consumes a unique token, which is recovered when the invariant is released: this guarantees that an invariant cannot be acquired twice, or in other words, that two threads cannot be in a critical section at the same time. The unique token involved in this protocol is the one that we expose to the end user as "the thunderbolt".
8 Application: Union-Find in Iris\(^{\mathord {\$}⧗}\)
As an illustration of the use of both time credits and time receipts, we formally verify the functional correctness and time complexity of an implementation of the Union-Find data structure. Our proof [17] is based on Charguéraud and Pottier's work [4]. We port their code from OCaml to HeapLang, and port their proof from Separation Logic with Time Credits to Iris\(^{\mathord {\$}⧗}\). At this point, the proof exploits just Iris\(^{\mathord {\$}}\), a subset of Iris\(^{\mathord {\$}⧗}\). The mathematical analysis of Union-Find, which represents a large part of the proof, is unchanged. Our contribution lies in the fact that we modify the data structure to represent ranks as machine integers instead of unbounded integers, and exploit time receipts in Iris\(^{\mathord {\$}⧗}\) to establish the absence of overflow. We equip HeapLang with signed machine integers whose bit width is a parameter \(w\). Under the hypothesis \(\log \log N < w - 1\), we are able to prove that, even though the code uses limited-width machine integers, no overflow can occur in a feasible time. If for instance \(N\) is \(2^{63}\), then this condition boils down to \(w\ge 7\): ranks can be stored in just 7 bits without risking overflow.
The conjunct \(\mathord {\$}(44\,\alpha (|D |)+152)\) in the precondition indicates that \( union \) has time complexity \(O(\alpha (n))\), where \(\alpha \) is an inverse of Ackermann's function and n is the number of nodes in the data structure. This is an amortized bound; the abstract predicate that describes the data structure also contains a certain number of time credits, known as the potential of the data structure, which are used to justify \( union \) operations whose actual cost exceeds the advertised cost. The constants 44 and 152 differ from those found in Charguéraud and Pottier's specification [4] because Iris\(^{\mathord {\$}}\) counts every computation step, whereas they count only function calls. Abstracting these constants by using O notation, as proposed by Guéneau et al. [7], would be desirable, but we have not attempted to do so yet.
The main novelty, with respect to Charguéraud and Pottier's specification, is the hypothesis \(\log \log N < w - 1\), which is required to prove that no overflow can occur when the rank of a node is incremented. In our proof, \(N\) and \(w\) are parameters; once their values are chosen, this hypothesis is easily discharged, once and for all. In the absence of time receipts, we would have to publish the hypothesis \(\log \log n < w - 1\), where n is the cardinality of D, forcing every (direct and indirect) user of the data structure to keep track of this requirement.
For the proof to go through, we store n time receipts in the data structure: that is, we include the conjunct \(⧗\,n\), where n stands for \(|D |\), in the definition of the data structure invariant. The operation of creating a new node takes at least one step, therefore produces one new time receipt, which is used to prove that the invariant is preserved by this operation. At any point, then, from the invariant, and from the basic laws of time receipts, we can deduce that \(n < N\) holds. Furthermore, it is easy to show that a rank is at most \(\log n\). Therefore, a rank is at most \(\log N\). In combination with the hypothesis \(\log \log N < w - 1\), this suffices to prove that a rank is at most \(2^{w-1} - 1\), the largest signed machine integer, and therefore that no overflow can occur in the computation of a rank.
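The logarithmic rank bound is easy to observe on a standard union-by-rank implementation (our own Rust sketch, not the verified HeapLang code): a root reaches rank r only by absorbing another root of rank r−1, so a rank-r root commands at least \(2^r\) nodes, and ranks stay below \(\log _2 n\).

```rust
// Minimal union-find with union by rank and path compression.
// Ranks are logarithmic in the number of nodes, so very few bits suffice.
pub struct UnionFind {
    parent: Vec<usize>,
    rank: Vec<u8>, // 8 bits already cover any feasible number of nodes
}

impl UnionFind {
    pub fn new(n: usize) -> UnionFind {
        UnionFind { parent: (0..n).collect(), rank: vec![0; n] }
    }

    pub fn find(&mut self, i: usize) -> usize {
        let p = self.parent[i];
        if p == i {
            return i;
        }
        let root = self.find(p);
        self.parent[i] = root; // path compression
        root
    }

    pub fn union(&mut self, i: usize, j: usize) {
        let (ri, rj) = (self.find(i), self.find(j));
        if ri == rj {
            return;
        }
        // Attach the lower-ranked root below the higher-ranked one.
        let (hi, lo) = if self.rank[ri] >= self.rank[rj] { (ri, rj) } else { (rj, ri) };
        self.parent[lo] = hi;
        // A root's rank grows only when two equal-rank roots merge,
        // so a rank-r root commands at least 2^r nodes.
        if self.rank[hi] == self.rank[lo] {
            self.rank[hi] += 1;
        }
    }
}
```

With at most \(N = 2^{63}\) nodes, ranks never exceed 63, which is exactly \(2^{w-1}-1\) for \(w = 7\); the `u8` field above is therefore already generous.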
Clochard et al. [5, §2] already present Union-Find as a motivating example among several others. They write that "there is obviously no danger of arithmetic overflow here, since [ranks] are only obtained by successive increments by one". This argument would be formalized in their system by representing ranks as either "one-time" or "peano" integers (in our terminology, clocks or snapclocks). This argument could be expressed in Iris\(^{\mathord {\$}⧗}\), but would lead to requiring \(\log N < w - 1\). In contrast, we use a more refined argument: we note that ranks are logarithmic in n, the number of nodes, and that n itself can never overflow. This leads us to the much weaker requirement \(\log \log N < w - 1\), which means that a rank can be stored in very few bits. We believe that this argument cannot be expressed in Clochard et al.'s system.
9 Discussion
One feature of Iris and HeapLang that deserves further discussion is concurrency. Iris is an evolution of Concurrent Separation Logic, and HeapLang has shared-memory concurrency. How does this impact our reasoning about time? At a purely formal level, this does not have any impact: Theorems 1, 2, 3 and their proofs are essentially oblivious to the absence or presence of concurrency in the programming language. At a more informal level, though, this impacts our interpretation of the real-world meaning of these theorems. Whereas in a sequential setting a "number of computation steps" can be equated (up to a constant factor) with "time", in a concurrent setting, a "number of computation steps" is referred to as "work", and is related to "time" only up to a factor of p, the number of processors. In short, our system measures work, not time. The number of available processors should be taken into account when choosing a specific value of \(N\): this value must be so large that \(N\) computation steps are infeasible even by p processors. With this in mind, we believe that our system can still be used to prove properties that have physical relevance.
In short, our new program logics, Iris\(^{\mathord {\$}}\), Iris\(^{⧗}\), and Iris\(^{\mathord {\$}⧗}\), tolerate concurrency. Yet, is it fair to say that they have "good support" for reasoning about concurrent programs? We believe not yet, and this is an area for future research. The main open issue is that we do not at this time have good support for reasoning about the time complexity of programs that perform busy-waiting on some resource. The root of the difficulty, already mentioned during the presentation of thunks (Sect. 7.1), is that one thread can fail to make progress, due to interference with another thread. A retry is then necessary, wasting time. In a spin lock, for instance, the "compare-and-set" (CAS) instruction that attempts to acquire the lock can fail. There is no bound on the number of attempts that are required until the lock is eventually acquired. Thus, in Iris\(^{\mathord {\$}}\), we are currently unable to assign any specification to the \( lock \) method of a spin lock.
In the future, we wish to take inspiration from Hoffmann, Marmar and Shao [9], who use time credits in Concurrent Separation Logic to establish the lock-freedom of several concurrent data structures. The key idea is to formalize the informal argument that "failure of a thread to make progress is caused by successful progress in another thread". Hoffmann et al. set up a "quantitative compensation scheme", that is, a protocol by which successful progress in one thread (say, a successful CAS operation) must transmit a number of time credits to every thread that has encountered a corresponding failure and therefore must retry. Quite interestingly, this protocol is not hardwired into the reasoning rule for CAS. In fact, CAS itself is not primitive; it is encoded in terms of an atomic { ...} construct. The protocol is set up by the user, by exploiting the basic tools of Concurrent Separation Logic, including shared invariants. Thus, it should be possible in Iris\(^{\mathord {\$}}\) to reproduce Hoffmann et al.'s reasoning and to assign useful specifications to certain lock-free data structures. Furthermore, we believe that, under a fairness assumption, it should be possible to assign Iris\(^{\mathord {\$}}\) specifications also to coarse-grained data structures, which involve locks. Roughly speaking, under a fair scheduler, the maximum time spent waiting for a lock is the maximum number of threads that may compete for this lock, multiplied by the maximum cost of a critical section protected by this lock. Whether and how this can be formalized is a topic of future research.
The axiom stating that \(N\) time receipts imply a contradiction comes with a few caveats that should be mentioned. The same caveats apply to Clochard et al.'s system [5], and are known to them.
One caveat is that it is possible in theory to use this axiom to write and justify surprising programs. For instance, in Iris\(^{⧗}\), the loop "\( for \;i=1\; to \;N\; do \;()\; done \)" satisfies the triple \(\{\mathsf {True}\}\;\ldots \;\{\mathsf {False}\}\): that is, it is possible to prove that this loop "never ends". As a consequence, this loop also satisfies every specification of the form \(\{\mathsf {True}\}\;\ldots \;\{\varPhi \}\). On the face of it, this loop would appear to be a valid solution to every programming assignment! In practice, it is up to the user to exhibit taste and to refrain from exploiting such a paradox. In reality, the situation is no worse than that in plain Iris, a logic of partial correctness, where the infinite loop "\( while \; true \; do \;()\; done \)" also satisfies \(\{\mathsf {True}\}\;\ldots \;\{\mathsf {False}\}\).
Another important caveat is that the compiler must in principle be instructed never to optimize ticks away. If, for instance, the compiler were allowed to recognize that the loop "\( for \;i=1\; to \;N\; do \;()\; done \)" does nothing, and to replace this loop with a no-op, then this loop, which according to Iris\(^{⧗}\) "never ends", would in reality end immediately. We would thereby be in danger of proving that a source program cannot crash unless it is allowed to run for centuries, whereas in reality the corresponding compiled program does crash in a short time. In practice, this danger can be avoided by actually instrumenting the source code with \( tick ()\) instructions and by presenting \( tick \) to the compiler as an unknown external function, which cannot be optimized away. However, this seems a pity, as it disables many compiler optimizations.
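One concrete way to make \( tick \) opaque is to give it an observable side effect (a Rust sketch of ours, under the assumption that a global counter suffices to inhibit the optimizer; the paper leaves the mechanism abstract):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A globally observable effect: the optimizer may not delete calls to
// `tick`, hence may not delete the loop that contains them either.
static STEPS: AtomicU64 = AtomicU64::new(0);

#[inline(never)]
pub fn tick() {
    STEPS.fetch_add(1, Ordering::Relaxed);
}

pub fn instrumented_loop(n: u64) {
    for _ in 1..=n {
        tick(); // inserted by the tick translation at each step
    }
}
```

Because every iteration now performs an observable store, the loop can no longer be replaced with a no-op; the price, as noted above, is that loop optimizations are indeed disabled.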
We believe that, despite these pitfalls, time receipts can be a useful tool. We hope that, in the future, better ways of avoiding these pitfalls will be discovered.
10 Related Work
Time credits in an affine Separation Logic are not a new concept. Atkey [1] introduces them in the setting of Separation Logic. Pilkiewicz and Pottier [20] exploit them in an informal reconstruction of Danielsson's type discipline for lazy thunks [6], which itself is inspired by Okasaki's work [19]. Several authors subsequently exploit time credits in machine-checked proofs of correctness and time complexity of algorithms and data structures [4, 7, 22]. Hoffmann, Marmar and Shao [9], whose work was discussed earlier in this paper (Sect. 9), use time credits in Concurrent Separation Logic to prove that several concurrent data structure implementations are lock-free.
At a metatheoretic level, Charguéraud and Pottier [4] provide a machine-checked proof of soundness of a Separation Logic with time credits. Haslbeck and Nipkow [8] compare three program logics that can provide worst-case time complexity guarantees, including Separation Logic with time credits.
To the best of our knowledge, affine (exclusive and persistent) time receipts are new, and the axiom that \(N\) time receipts imply a contradiction is new as well. It is inspired by Clochard et al.'s idea that "programs cannot run for centuries" [5], but distills this idea into a simpler form.
Our implementation of thunks and our reconstruction of Okasaki's debits [19] in terms of credits are inspired by earlier work [6, 20]. Although Okasaki's analysis assumes a sequential setting, we adapt it to a concurrent setting by explicitly forbidding concurrent operations on thunks; to do so, we rely on Iris' non-atomic invariants. In contrast, Danielsson [6] views thunks as a primitive construct in an otherwise pure language. He equips the language with a type discipline, where the type Thunk, which is indexed with a debit, forms a monad, and he provides a direct proof of type soundness. The manner in which Danielsson inserts \( tick \) instructions into programs is a precursor of our tick translation; this idea can in fact be traced at least as far back as Moran and Sands [16]. Pilkiewicz and Pottier [20] sketch an encoding of debits in terms of credits. Because they work in a sequential setting, they are able to install a shared invariant by exploiting the anti-frame rule [21], whereas we use Iris' non-atomic invariants for this purpose. The anti-frame rule does not rule out reentrancy, so they must detect it at runtime, whereas in our case both concurrency and reentrancy are ruled out by our use of non-atomic invariants.
Madhavan et al. [15] present an automated system that infers and verifies resource bounds for higher-order functional programs with thunks (and, more generally, with memoization tables). They transform the source program to an instrumented form where the state is explicit and can be described by monotone assertions. For instance, it is possible to assert that a thunk has been forced already (which guarantees that forcing it again has constant cost). This seems analogous, in Okasaki's terminology, to asserting that a thunk has zero debits, also a monotone assertion. We presently do not know whether Madhavan et al.'s system could be encoded into a lower-level program logic such as Iris\(^{\mathord {\$}}\); it would be interesting to find out.
11 Conclusion
We have presented two mechanisms, namely time credits and time receipts, by which Iris, a state-of-the-art concurrent program logic, can be extended with means of reasoning about time. We have established soundness theorems that state precisely what guarantees are offered by the extended program logics Iris\(^{\mathord {\$}}\), Iris\(^{⧗}\), and Iris\(^{\mathord {\$}⧗}\). We have defined these new logics modularly, by composing Iris with a program transformation. The three proofs follow a similar pattern: the soundness theorem of Iris is composed with a simulation lemma about the tick translation. We have illustrated the power of the new logics by reconstructing Okasaki's debit-based analysis of thunks, by reconstructing Clochard et al.'s technique for proving the absence of certain integer overflows, and by presenting an analysis of Union-Find that exploits both time credits and time receipts.
One limitation of our work is that all of our metatheoretic results are specific to HeapLang, and would have to be reproduced, following the same pattern, if one wished to instantiate these logics for another programming language. It would be desirable to make our statements and proofs generic. In future work, we would also like to better understand what can be proved about the time complexity of concurrent programs that involve waiting. Can the time spent waiting be bounded? What specification can one give to a lock, or to a thunk that is protected by a lock? A fairness hypothesis about the scheduler seems to be required, but it is not clear yet how to state and exploit such a hypothesis. Hoffmann, Marmar and Shao [9] have carried out pioneering work in this area, but have dealt only with lock-free data structures and only with situations where the number of competing threads is fixed. It would be interesting to transpose their work into Iris\(^{\mathord {\$}}\) and to develop it further.
Footnotes
 1.
For now, we discuss time credits and time receipts separately, which is why we have different specifications for \( tick \) in either case. They are combined in Sect. 6.
 2.
For a specific example, let \(N\) be \(2^{63}\). Clochard et al. note that, even at the rate of one billion operations per second, it takes more than 292 years to execute \(2^{63}\) operations. On a 64-bit machine, \(2^{63}\) is also the maximum representable signed integer, plus one.
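This figure is easy to re-derive (a quick sanity check of the arithmetic, ours):

```rust
// Time needed to execute `steps` computation steps at `steps_per_second`.
// For 2^63 steps at 10^9 steps/s this comes to roughly 292.3 years.
pub fn years(steps: f64, steps_per_second: f64) -> f64 {
    let seconds = steps / steps_per_second;
    seconds / (365.25 * 24.0 * 3600.0) // Julian years
}
```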
 3.
The view-shift connective, written \(\mathrel {\Rrightarrow }\), denotes a transition that can involve a side effect on ghost state.
 4.
Instead of viewing persistent time receipts as a primitive concept, one could define them as a library on top of exclusive time receipts. Unfortunately, this construction leads to slightly weaker laws, which is why we prefer to view them as primitive.
 5.
If HeapLang used left-to-right evaluation, the definition of the translation would be slightly different, but the lemmas that we prove would be the same.
 6.
If the user instead wishes to establish a lower bound on a program’s execution time, this is possible as well.
 7.
In fact, it is not essential that \( loop ()\) diverges. What matters is that \( loop \) satisfy the Iris triple \(\{\mathsf {True}\}\; loop ()\;\{\mathsf {False}\}\). A fatal runtime error that Iris does not rule out would work just as well, as it satisfies the same specification.
 8.
Even though the interface provides only one \( tick \) function, it gets instantiated in the soundness theorem with different implementations depending on whether there are more than N time credits or not.
 9.
Therefore, a suspended computation cannot force any thunk. This is admittedly a very severe restriction, which rules out many useful applications of thunks. In fact, we have implemented a more flexible discipline, where thunks can be grouped in multiple "regions" and there is one token per region instead of a single global token. This discipline allows concurrent or reentrant operations on provably distinct thunks, yet can still be proven sound.
 10.
This definition of \(R'\) and \(V'\) has free variables x, y, z, therefore in reality must appear inside the postcondition. Here, it is presented separately, for greater readability.
References
 1.Atkey, R.: Amortised resource analysis with separation logic. Log. Methods Comput. Sci. 7(2:17) (2011). http://bentnib.org/amortisedseplogicjournal.pdf
 2.Birkedal, L.: Lecture 11: CAS and spin locks, November 2017. https://irisproject.org/tutorialpdfs/lecture11casspinlock.pdf
 3.Brookes, S., O’Hearn, P.W.: Concurrent separation logic. SIGLOG News 3(3), 47–65 (2016). http://siglog.hosting.acm.org/wpcontent/uploads/2016/07/siglognews9.pdf#page=49
 4.Charguéraud, A., Pottier, F.: Verifying the correctness and amortized complexity of a union-find implementation in separation logic with time credits. J. Autom. Reason. (2017). http://gallium.inria.fr/~fpottier/publis/chargueraudpottierufsltc.pdf
 5.Clochard, M., Filliâtre, J.C., Paskevich, A.: How to avoid proving the absence of integer overflows. In: Gurfinkel, A., Seshia, S.A. (eds.) VSTTE 2015. LNCS, vol. 9593, pp. 94–109. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-29613-5_6. https://hal.inria.fr/hal-01162661
 6.Danielsson, N.A.: Lightweight semi-formal time complexity analysis for purely functional data structures. In: Principles of Programming Languages (POPL) (2008). http://www.cse.chalmers.se/~nad/publications/danielssonpopl2008.pdf
 7.Guéneau, A., Charguéraud, A., Pottier, F.: A fistful of dollars: formalizing asymptotic complexity claims via deductive program verification. In: Ahmed, A. (ed.) ESOP 2018. LNCS, vol. 10801, pp. 533–560. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-89884-1_19. http://gallium.inria.fr/~fpottier/publis/gueneauchargeraudpottieresop2018.pdf
 8.Haslbeck, M.P.L., Nipkow, T.: Hoare logics for time bounds: a study in meta theory. In: Beyer, D., Huisman, M. (eds.) TACAS 2018. LNCS, vol. 10805, pp. 155–171. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-89960-2_9. https://www21.in.tum.de/~nipkow/pubs/tacas18.pdf
 9.Hoffmann, J., Marmar, M., Shao, Z.: Quantitative reasoning for proving lock-freedom. In: Logic in Computer Science (LICS), pp. 124–133 (2013). http://www.cs.cmu.edu/~janh/papers/lockfree2013.pdf
 10.Jung, R., Jourdan, J.H., Krebbers, R., Dreyer, D.: RustBelt: securing the foundations of the Rust programming language. PACMPL 2(POPL), 66:1–66:34 (2018). https://people.mpisws.org/~dreyer/papers/rustbelt/paper.pdf
 11.Jung, R., Krebbers, R., Birkedal, L., Dreyer, D.: Higher-order ghost state. In: International Conference on Functional Programming (ICFP), pp. 256–269 (2016). http://irisproject.org/pdfs/2016icfpiris2final.pdf
 12.Jung, R., Krebbers, R., Jourdan, J.H., Bizjak, A., Birkedal, L., Dreyer, D.: Iris from the ground up: a modular foundation for higherorder concurrent separation logic. J. Funct. Program. 28, e20 (2018). https://people.mpisws.org/~dreyer/papers/irisgroundup/paper.pdf
 13.Jung, R., et al.: Iris: monoids and invariants as an orthogonal basis for concurrent reasoning. In: Principles of Programming Languages (POPL), pp. 637–650 (2015). http://plv.mpisws.org/iris/paper.pdf
 14.Krebbers, R., Jung, R., Bizjak, A., Jourdan, J.H., Dreyer, D., Birkedal, L.: The essence of higher-order concurrent separation logic. In: Yang, H. (ed.) ESOP 2017. LNCS, vol. 10201, pp. 696–723. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54434-1_26. http://irisproject.org/pdfs/2017esopiris3final.pdf
 15.Madhavan, R., Kulal, S., Kuncak, V.: Contract-based resource verification for higher-order functions with memoization. In: Principles of Programming Languages (POPL), pp. 330–343 (2017). http://lara.epfl.ch/~kandhada/orbpopl17.pdf
 16.Moran, A., Sands, D.: Improvement in a lazy context: an operational theory for call-by-need. In: Principles of Programming Languages (POPL), pp. 43–56 (1999). http://www.cse.chalmers.se/~dave/papers/cbneedtheory.pdf
 17.Mével, G., Jourdan, J.H., Pottier, F.: Time credits and time receipts in Iris – Coq proofs, October 2018. https://gitlab.inria.fr/gmevel/iristimeproofs
 18.Mével, G., Jourdan, J.H., Pottier, F.: Time credits and time receipts in Iris – extended version (2019). https://jhjourdan.mketjh.fr/pdf/mevel2019time.pdf
 19.Okasaki, C.: Purely Functional Data Structures. Cambridge University Press, Cambridge (1999). http://www.cambridge.org/us/catalogue/catalogue.asp?isbn=0521663504
 20.Pilkiewicz, A., Pottier, F.: The essence of monotonic state. In: Types in Language Design and Implementation (TLDI) (2011). http://gallium.inria.fr/~fpottier/publis/pilkiewiczpottiermonotonicity.pdf
 21.Pottier, F.: Hiding local state in direct style: a higher-order anti-frame rule. In: Logic in Computer Science (LICS), pp. 331–340 (2008). http://gallium.inria.fr/~fpottier/publis/fpottierantiframe2008.pdf
 22.Zhan, B., Haslbeck, M.P.L.: Verifying asymptotic time complexity of imperative programs in Isabelle. In: Galmiche, D., Schulz, S., Sebastiani, R. (eds.) IJCAR 2018. LNCS (LNAI), vol. 10900, pp. 532–548. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-94205-6_35. arXiv:1802.01336
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.