1 Introduction

Since its inception, denotational semantics has grown into a very wide subject. Its developments now cover numerous programming languages or paradigms, using approaches that range from the extensionality of domain semantics [24] (recording the input-output behaviour) to the intensionality of game semantics [1, 17] (recording execution traces, formalized as plays in a 2-player game between the program (“Player”) and its execution environment (“Opponent”)). Denotational semantics has had significant influence on the theory of programming languages, with contributions ranging from program logics or reasoning principles to new language constructs and verification algorithms.

Most denotational models are qualitative in nature, meaning that they ignore efficiency of programs in terms of time, or other resources such as power or bandwidth. To our knowledge, the first denotational model to cover time was Ghica’s slot games [13], an extension of Ghica and Murawski’s fully abstract model for a higher-order language with concurrency and shared state [14]. Slot games exploit the intensionality of game semantics and represent time via special moves called tokens matching the ticks of a clock. They are fully abstract w.r.t. the notion of observation in Sands’ operational theory of improvement [26].

More recently, there has been a growing interest in capturing quantitative aspects denotationally. Laird et al. constructed [18] an enrichment of the relational model of Linear Logic [11], using weights from a resource semiring given as parameter. This way, they capture in a single framework several notions of resources for extensions of PCF, ranging from time to probabilistic weights. Two type systems with similar parametrizations were introduced simultaneously by, on the one hand, Ghica and Smith [15] and, on the other hand, Brunel, Gaboardi et al. [4]; the latter with a quantitative realizability denotational model.

In this paper, we give a resource-sensitive denotational model for \(\mathcal {R}\)-IPA, an affine higher-order programming language with concurrency, shared state, and with a primitive for resource consumption. With respect to slot games our model differs in that our resource analysis accounts for the fact that resource consumption may combine differently in parallel and sequentially – simply put, we mean to express that \(\mathbf {wait}(1) \parallel \mathbf {wait}(1)\) may terminate in 1 s, rather than 2. We also take inspiration from weighted relational models [18] in that our construction is parametrized by an algebraic structure representing resources and their usage. Our resource bimonoids \(\langle \mathcal {R}, 0, ;, \parallel , \le \rangle \) differ however significantly from their resource semiring \(\langle \mathcal {R}, 0, 1, +, \cdot \rangle \): while ;  matches \(\cdot \), \(\parallel \) is a new operation expressing the consumption of resources in parallel. We have no counterpart for the \(+\), which agglomerates distinct non-deterministically co-existing executions leading to the same value: instead our model keeps them separate.
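For concreteness, the time instance of such a resource bimonoid can be sketched in a few lines of Python (our illustration, not part of the formal development), taking \(;\) as addition and \(\parallel\) as maximum:

```python
# Minimal sketch of the time instance of a resource bimonoid
# <R, 0, ;, ||, <=>: resources are non-negative reals, sequential
# composition adds durations, parallel composition takes the maximum.

def seq(a, b):
    """alpha ; beta -- consume alpha, then beta."""
    return a + b

def par(a, b):
    """alpha || beta -- consume alpha and beta in parallel."""
    return max(a, b)

# wait(1) || wait(1) may terminate in 1 second, not 2:
assert par(1, 1) == 1
# whereas sequentially the two waits add up:
assert seq(1, 1) == 2
```

The contrast between the two operations is exactly the point made above: parallel consumption is not the sum of its components.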

Capturing parallel resource usage is technically challenging, as it can only be attempted relying on a representation of execution where parallelism is explicit. Accordingly, our model belongs to the family of concurrent or asynchronous game semantics pioneered by Abramsky and Melliès [2], pushed by Melliès [20] and later with Mimram [22], and by Faggian and Piccolo [12]; actively developed in the past 10 years prompted by the introduction of a more general framework by Rideau and Winskel [7, 25]. In particular, our model is a refinement of the (qualitative) truly concurrent interpretation of affine IPA described in [5]. Our methodology to record resource usage is inspired by game semantics for first-order logic [3, 19] where moves carry first-order terms from a signature – instead here they carry explicit functions, i.e. terms up to a congruence (it is also reminiscent of Melliès’ construction of the free dialogue category over a category [21]).

As in [5] we chose to interpret an affine language: this lets us focus on the key phenomena which are already at play, avoiding the technical hindrance caused by replication. As suggested by recent experience with concurrent games [6, 10], we expect the developments presented here to extend transparently in the presence of symmetry [8, 9]; this would allow us to move to the general (non-affine) setting.

Outline. We start Sect. 2 by introducing the language \(\mathcal {R}\)-IPA. We equip it first with an interleaving semantics and sketch its interpretation in slot games. We then present resource bimonoids, give a new parallel operational semantics, and hint at our truly concurrent games model. In Sect. 3, we construct this model and prove its soundness. Finally in Sect. 4, we show adequacy for an operational semantics specialized to time, noting first that the general parallel operational semantics is too coarse w.r.t. our model.

2 From \(\mathcal {R}\)-IPA to \(\mathcal {R}\)-Strategies

2.1 Affine IPA

Terms and Types. We start by introducing the basic language under study, affine Idealized Parallel Algol (IPA). It is an affine variant of the language studied in [14], a call-by-name concurrent higher-order language with shared state. Its types are given by the following grammar:

$$ A, B\,{:}{:}{=}~\mathbf {com}\mid \mathbf {bool}\mid \mathbf {mem}_W \mid \mathbf {mem}_R \mid A \multimap B $$

Here, \(\mathbf {mem}_W\) is the type of writeable references and \(\mathbf {mem}_R\) is the type of readable references; the distinction is necessary in this affine setting as it allows accesses to a given state to be shared among subprocesses; this will make more sense with the typing rules in the next paragraph. In the sequel, non-functional types are called ground types (for which we use notation \(\mathbb {X}\)). We define terms directly along with their typing rules in Fig. 1. Contexts are simply lists \(x_1 : A_1, \dots , x_n : A_n\) of variable declarations (in which each variable occurs at most once), and the exchange rule is kept implicit. Weakening is not a rule but is admissible. We comment on a few aspects of these rules.

Fig. 1. Typing rules for affine IPA

Firstly, observe that the reference constructor \(\mathbf {new}\,x,y\,\mathbf {in}\,M\) binds two variables x and y, one with a write permission and the other with a read permission. In this way, the permissions of a shared state can be distributed in different components of e.g. an application or a parallel composition, causing interferences despite the affine aspect of the language. Secondly, the assignment command, \(M := \mathbf {tt}\), seems quite restrictive. Yet, the language is affine, so a variable can only be written to once, and, as we choose to initialize it to \(\mathbf {ff}\), the only useful thing to write is \(\mathbf {tt}\). Finally, many rules seem restrictive in that they apply only at ground type \(\mathbb {X}\). More general rules can be defined as syntactic sugar; for instance we give (all other constructs extend similarly): \(M;_{A\multimap B} N = \lambda x^A.\left( \,M;_B\,(N\,x)\right) \).

Operational Semantics. We fix a countable set \(\mathsf {L}\) of memory locations. Each location \(\ell \) comes with two associated variable names \(\ell _W\) and \(\ell _R\) distinct from other variable names. Usually, stores are partial maps from \(\mathsf {L}\) to \(\{\mathbf {tt}, \mathbf {ff}\}\). Instead, we find it more convenient to introduce the notion of state of a memory location. A state corresponds to a history of memory actions (reads or writes) and follows the state diagram of Fig. 2 (ignoring for now the annotations with \(\alpha , \beta \)). We write \((\mathsf {M},\le _\mathsf {M})\) for the induced set of states and accessibility relation on it. For each \(m\in \mathsf {M}\), its set of available actions is \(\mathrm {act}(m)=\{W, R\}\setminus m\) (the letters not occurring in m, annotations being ignored); and its value (in \(\{\mathbf {tt}, \mathbf {ff}\}\)) is \(\mathbf {tt}\) iff W occurs in m.
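As an illustration (a sketch under our own encoding of states as strings of past actions, ignoring the resource annotations of Fig. 2), the available actions and the value of a state can be computed as follows:

```python
# Hypothetical encoding (ours) of the states of a memory location as
# strings listing past actions, each of W and R occurring at most once.
STATES = ["", "W", "R", "WR", "RW"]

def act(m):
    """Available actions act(m): the letters of {W, R} not in m."""
    return {"W", "R"} - set(m)

def value(m):
    """Current value: tt (True) iff a write W occurred in m."""
    return "W" in m

assert act("") == {"W", "R"} and act("RW") == set()
assert value("W") and not value("R")
```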

Fig. 2. State diagram

Finally, a store is a partial map \(s : \mathsf {L}\rightarrow \mathsf {M}\) with finite domain, mapping each memory location to its current state. To each store corresponds a typing context

$$ \varOmega (s) = \{\ell _X : \mathbf {mem}_X \mid \ell \in \mathrm {dom}(s) ~ \& ~ X\in \mathrm {act}(s(\ell ))\}. $$

The operational semantics operates on configurations defined as pairs \(\langle M, s \rangle \) with s a store and \(\varGamma \vdash M : A\) a term whose free variables are all memory locations with \(\varGamma \subseteq \varOmega (s)\). This property will be preserved by our rather standard small-step, call-by-name operational semantics. We refrain for now from giving the details, they will appear in Sect. 2.2 in the presence of resources.
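The typing context of a store can be sketched in code as well (ours, with each location's state again encoded as the string of its past actions):

```python
# Sketch (ours) of the typing context Omega(s) of a store s: one
# declaration l_X : mem_X per still-available action X of each location.

def omega(s):
    return {f"{loc}_{X}": f"mem_{X}"
            for loc, m in s.items()
            for X in {"W", "R"} - set(m)}

# a fresh location contributes both permissions, an exhausted one none:
assert omega({"l": ""}) == {"l_W": "mem_W", "l_R": "mem_R"}
assert omega({"l": "WR"}) == {}
```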

2.2 Interleaving Cost Semantics, and \(\mathcal {R}\)-IPA

Ghica and Murawski [14] have constructed a fully abstract (for may-equivalence) model for (non-affine) IPA, relying on an extension of Hyland-Ong games [17].

Their model takes an interleaving view of the execution of concurrent programs: a program is represented by the set of all its possible executions, as decided non-deterministically by the scheduler. In game semantics, this is captured by lifting the standard requirement that the two players alternate. For instance, Fig. 3 shows a play in the interpretation of the program \(x : \mathbf {com}, y : \mathbf {bool}\vdash x \parallel y : \mathbf {bool}\). The diagram is read from top to bottom, chronologically. Each line comprises one computational event (“move”), annotated with “−” if due to the execution environment (“Opponent”) and with “\(+\)” if due to the program (“Player”); each move corresponds to a certain type component, under which it is placed. With the first move \(\mathbf {q}^-\), the environment initiates the computation. Player then plays \(\mathbf {run}^+\), triggering the evaluation of x. In standard game semantics, the control would then go back to the execution environment – Player would be stuck until Opponent plays. Here instead, due to parallelism Player can play a second move \(\mathbf {q}^+\) immediately. At this point of execution, x and y are both running in parallel. Only when they have both returned (moves \(\mathbf {done}^-\) and \(\mathbf {tt}^-\)) is Player able to respond \(\mathbf {tt}^+\), terminating the computation. The full interpretation of \(x : \mathbf {com}, y : \mathbf {bool}\vdash x \parallel y : \mathbf {bool}\), its strategy, comprises numerous plays like that, one for each interleaving.

Fig. 3. A non-alternating play

As often in denotational semantics, Ghica and Murawski’s model is invariant under reduction: if \(\langle M, s \rangle \rightarrow \langle M', s' \rangle \), both have the same denotation. The model adequately describes the result of computation, but not its cost in terms, for instance, of time. Of course this cost is not yet specified: one must, for instance, define a cost model assigning a cost to all basic operations (e.g. memory operations, function calls, etc). In this paper we instead enrich the language with a primitive for resource consumption – cost models can then be captured by inserting this primitive concomitantly with the costly operations (see for example [18]).

Fig. 4. Typing \(\mathbf {consume}\)

\(\mathcal {R}\)-IPA. Consider a set \(\mathcal {R}\) of resources. The language \(\mathcal {R}\)-IPA is obtained by adding to affine IPA a new construct, \(\mathbf {consume}(\alpha )\), typed as in Fig. 4. When evaluated, \(\mathbf {consume}(\alpha )\) triggers the consumption of resource \(\alpha \). Time consumption will be a running example throughout the paper. In that case, we will consider the non-negative reals \(\mathbb {R}_+\) as set \(\mathcal {R}\), and for \(t\in \mathbb {R}_+\) we will use \(\mathbf {wait}(t)\) as a synonym for \(\mathbf {consume}(t)\).

Fig. 5. Operational semantics: basic rules

To equip \(\mathcal {R}\)-IPA with an operational semantics we need operations on \(\mathcal {R}\); they are introduced throughout this section. First we have \(0 \in \mathcal {R}\), the null resource; if \(\alpha ,\beta \in \mathcal {R}\), we have some \( \alpha ;\,\beta \in \mathcal {R}\), the resource taken by consuming \(\alpha \), then \(\beta \) – for \(\mathcal {R}= \mathbb {R}_+\), this is simply addition. To evaluate \(\mathcal {R}\)-IPA, the configurations are now triples \(\langle M, s, \alpha \rangle \) with \(\alpha \in \mathcal {R}\) tracking resources already spent. With that, we give in Fig. 5 the basic operational rules. The only rule affecting current resources is that for \(\mathbf {consume}(\beta )\); the others leave them unchanged. However, note that we store the current state of resources when performing memory operations, explaining the annotations in Fig. 2. These annotations do not impact the operational behaviour, but will be helpful in relating with the game semantics in Sect. 3. As usual, these rules apply within call-by-name evaluation contexts – we omit the details here but they will appear for our final operational semantics.

Slot Games. In [13], Ghica extends Ghica and Murawski’s model to slot games in order to capture resource consumption. Slot games introduce a new action called a token, representing an atomic resource consumption – n successive tokens representing the consumption of n resource units. A model of \(\mathbb {N}_+\)-IPA using slot games would have for instance the play in Fig. 6 in the interpretation of

$$ H = (\mathbf {wait}(1);\,x;\,\mathbf {wait}(2)) \parallel (\mathbf {wait}(2);\,y;\,\mathbf {wait}(1)) $$

in context \(x : \mathbf {com}, y : \mathbf {bool}\), among many others. Note that, in examples, we use a more liberal typing rule for ‘;’ allowing \(y^\mathbf {bool};\,z^\mathbf {com}: \mathbf {bool}\) to avoid clutter: it is easily encoded. Following the methodology of game semantics, composing the interpretation of H with strategies for values would yield, by composition, a strategy whose only maximal play answers the initial question after six tokens, reflecting the overall 6 time units (say “seconds”) that have to pass in total before we see the result (3 in each thread). This seems wasteful, but it is indeed an adequate computational analysis, because both slot games and the operational semantics given so far implicitly assume a sequential operational model, i.e. that both threads compete to be scheduled on a single processor. Let us now question that assumption.

Fig. 6. A play with tokens

Parallel Resource Consumption. With a truly concurrent evaluation in mind, we should be able to prove that the program above may terminate in 3 s rather than 6, as nothing prevents the threads from evaluating in parallel. Before we update the operational semantics to express that, we enrich our resource structure so that it can express the effect of consuming resources in parallel.

We now introduce the full algebraic structure we require for resources.

Definition 1

A resource bimonoid is \(\langle \mathcal {R}, 0, ;, \parallel , \le \rangle \) where \(\langle \mathcal {R}, 0, ;, \le \rangle \) is an ordered monoid, \(\langle \mathcal {R}, 0, \parallel , \le \rangle \) is an ordered commutative monoid, 0 is bottom for \(\le \), and \(\parallel \) is idempotent, i.e. it satisfies \(\alpha \parallel \alpha = \alpha \).

A resource bimonoid is in particular a concurrent monoid in the sense of e.g. [16] (though we take \(\le \) in the opposite direction: we read \(\alpha \le _\mathcal {R}\alpha '\) as “\(\alpha \) is better/more efficient than \(\alpha '\)”). Our idempotence assumption is rather strong, as it entails that \(\alpha \parallel \beta \) is the supremum of \(\alpha , \beta \in \mathcal {R}\). This lets us recover a number of simple laws, e.g. \(\alpha \parallel \beta \le \alpha ;\,\beta \), or the exchange rule \((\alpha ;\,\beta ) \parallel (\alpha ';\,\beta ') \le (\alpha \parallel \alpha ');\,(\beta \parallel \beta ')\). Idempotence, which would not be needed for a purely functional language, is used crucially in our interpretation of state.
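These laws are easy to test numerically on the time instance; a small check (our illustration, with \(;\) as addition and \(\parallel\) as maximum) of idempotence, \(\alpha \parallel \beta \le \alpha;\beta\) and the exchange rule:

```python
import itertools
import random

# Numerical check (ours) of the bimonoid laws on the time bimonoid,
# where ';' is addition and '||' is the maximum.

def seq(a, b):
    return a + b

def par(a, b):
    return max(a, b)

random.seed(0)
samples = [random.uniform(0, 10) for _ in range(6)]

for a, b, a2, b2 in itertools.product(samples, repeat=4):
    assert par(a, a) == a                     # idempotence
    assert par(a, b) <= seq(a, b)             # alpha || beta <= alpha ; beta
    # exchange: (a ; b) || (a2 ; b2) <= (a || a2) ; (b || b2)
    assert par(seq(a, b), seq(a2, b2)) <= seq(par(a, a2), par(b, b2))
```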

Our leading examples are \(\langle \mathbb {N}_+, 0, +, \max , \le \rangle \) and \(\langle \mathbb {R}_+, 0, +, \max , \le \rangle \) – we call the latter the time bimonoid. Others are the permission bimonoid \(\langle \mathcal {P}(P), \emptyset , \cup , \cup ,\) \(\subseteq \rangle \) for some set P of permissions: if reaching a state requires certain permissions, it does not matter whether these have been requested sequentially or in parallel; the bimonoid of parametrized time \(\langle \mathcal {M}, 0, ;, \parallel , \le \rangle \) with \(\mathcal {M}\) the monotone functions from positive reals to positive reals, 0 the constantly-zero function, \(\parallel \) the pointwise maximum, and \((f;\,g)(x) = f(x) + g(x + f(x))\): it tracks time consumption in a context where the time taken by \(\mathbf {consume}(\alpha )\) might grow over time.
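The parametrized-time bimonoid can be sketched as follows (our illustration; f and g are arbitrary monotone sample functions), checking numerically that 0 is a unit and that \(;\) is associative:

```python
# Sketch (ours) of the parametrized-time bimonoid: resources are
# monotone functions on non-negative reals, (f ; g)(x) = f(x) + g(x + f(x)),
# '||' the pointwise maximum, 0 the constantly-zero function.

def seq(f, g):
    return lambda x: f(x) + g(x + f(x))

def par(f, g):
    return lambda x: max(f(x), g(x))

zero = lambda x: 0.0

f = lambda x: x      # sample monotone resource: cost grows with time
g = lambda x: 1.0    # sample constant one-second wait

for x in (0.0, 0.5, 2.0):
    assert seq(zero, f)(x) == f(x) == seq(f, zero)(x)    # 0 is a unit for ';'
    assert seq(seq(f, g), f)(x) == seq(f, seq(g, f))(x)  # ';' is associative
```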

Besides time-based bimonoids, it would be appealing to cover resources such as power, bandwidth or heap space. Those, however, clearly fail idempotence of \(\parallel \), and are therefore not covered. It is not clear how to extend our model to them.

Fig. 7. Rules for parallel reduction

Parallel Operational Semantics. Let us fix a resource bimonoid \(\mathcal {R}\). To express parallel resource consumption, we use the many-step parallel reductions defined in Fig. 7, with call-by-name evaluation contexts given by

The rule for parallel composition carries some restrictions regarding memory: M and N can only reduce concurrently if they do not access the same memory cells. This is achieved by requiring that the partial operation \(s\uparrow s'\) – that intuitively corresponds to “merging” two memory stores s and \(s'\) whenever there are no conflicts – is defined. More formally, the partial order \(\le _\mathsf {M}\) on memory states induces a partial order (also written \(\le _\mathsf {M}\)) on stores, defined by \(s \le _\mathsf {M}s'\) iff \(\mathrm {dom}(s) \subseteq \mathrm {dom}(s')\) and for all \(\ell \in \mathrm {dom}(s)\) we have \(s(\ell ) \le _\mathsf {M}s'(\ell )\). This order is a cpo in which \(s'\) and \(s''\) are compatible (i.e. have an upper bound) iff for all \(\ell \in \mathrm {dom}(s') \cap \mathrm {dom}(s'')\), \(s'(\ell ) \le _\mathsf {M}s''(\ell )\) or \(s''(\ell ) \le _\mathsf {M}s'(\ell )\) – so there has been no interference going to \(s'\) and \(s''\) from their last common ancestor. When compatible, \(s' \uparrow s''\) is their least upper bound; it is undefined otherwise.
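A sketch of the merge (our illustration, under a hypothetical encoding of memory states as strings of past actions, ordered by prefix):

```python
# Sketch (ours) of the merge of stores: states are strings of past
# actions, s <= s' on a location iff its history is a prefix.

def merge(s1, s2):
    """Least upper bound of compatible stores; None on interference."""
    out = dict(s1)
    for loc, m2 in s2.items():
        m1 = out.get(loc, "")
        if m1.startswith(m2):
            out[loc] = m1            # m2 <= m1: keep the longer history
        elif m2.startswith(m1):
            out[loc] = m2            # m1 <= m2
        else:
            return None              # incomparable histories: interference
    return out

assert merge({"l": "W"}, {"l": ""}) == {"l": "W"}   # only one side acted
assert merge({"l": "W"}, {"l": "R"}) is None        # conflicting accesses
```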

For \(\vdash M : \mathbf {com}\), we set \(M \Downarrow _\alpha \) if \(\langle M, \emptyset , 0 \rangle \rightrightarrows \langle \mathbf {skip}, s, \alpha \rangle \). For instance, instantiating the rules with the time bimonoid, we have

$$ \left( \mathbf {wait}(1);\,\mathbf {wait}(2)\right) \parallel \left( \mathbf {wait}(2);\,\mathbf {wait}(1)\right) \Downarrow _3 $$
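For closed, state-free terms built from \(\mathbf{wait}\), ‘;’ and \(\parallel\), the cost of the best parallel execution is just the evaluation of the term in the time bimonoid; a sketch (our own small AST encoding, not from the paper):

```python
# Sketch (ours): fold a wait/seq/par term through the time bimonoid;
# terms are encoded as nested tuples.

def cost(t):
    tag = t[0]
    if tag == "wait":
        return t[1]
    if tag == "seq":
        return cost(t[1]) + cost(t[2])
    if tag == "par":
        return max(cost(t[1]), cost(t[2]))
    raise ValueError(f"unknown constructor: {tag}")

lhs = ("seq", ("wait", 1), ("wait", 2))
rhs = ("seq", ("wait", 2), ("wait", 1))
assert cost(("par", lhs, rhs)) == 3   # (1;2) || (2;1) = 3, not 6
```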

2.3 Non-interleaving Semantics

To capture this parallel resource usage semantically, we build on the games model for affine IPA presented in [5]. Rather than presenting programs as collections of sequences of moves expressing all observable sequences of computational actions, this model adopts a truly concurrent view using collections of partially ordered plays. For each Player move, the order specifies its causal dependencies, i.e. the Opponent moves that need to have happened before. For instance, ignoring the subscripts, Fig. 8 displays a typical partially ordered play in the strategy for the term H of Sect. 2.2. One partially ordered play does not fully specify a sequential execution: that in Fig. 8 stands for many sequential executions, one of which is in Fig. 3. Behaviours expressed by partially ordered plays are deterministic up to choices of the scheduler irrelevant for the eventual result. Because \(\mathcal {R}\)-IPA is non-deterministic (via concurrency and shared state), our strategies will be sets of such partial orders.

Fig. 8. A parallel \(\mathcal {R}\)-play

To express resources, we leverage the causal information and indicate, in each partially ordered play and for each positive move, an \(\mathcal {R}\)-expression representing its additional cost as a function of the cost of its negative dependencies. Figure 8 displays such an \(\mathcal {R}\)-play: each Opponent move introduces a fresh variable, which can be used in annotations for Player moves. As we will see further on, once applied to strategies for the values \(\mathbf {skip}\) and \(\mathbf {tt}\) (with no additional cost), this \(\mathcal {R}\)-play will answer the initial Opponent move \(\mathbf {q}^-_\mathsf {x}\) at cost \(\alpha = (1;\,2) \parallel (2;\,1) =_{\mathbb {R}_+} 3\), as prescribed by the more efficient parallel operational semantics.

We now go on to define formally our semantics.

3 Concurrent Game Semantics of IPA

3.1 Arenas and \(\mathcal {R}\)-Strategies

Arenas. We first introduce arenas, the semantic representation of types in our model. As in [5], an arena will be a certain kind of event structure [27].

Definition 2

An event structure comprises \((E, \le _E, \mathrel {\#}_E)\) where E is a set of events, \(\le _E\) is a partial order called causal dependency, and \(\mathrel {\#}_E\) is an irreflexive symmetric binary relation called conflict, subject to the two axioms:

$$ \begin{array}{l} \forall e\in E, [e]_E = \{e'\in E\mid e'\le _E e\}\text { is finite}\\ \forall e_1\,\mathrel {\#}_E\,e_2, \forall e_1 \le _E e'_1, e'_1\,\mathrel {\#}_E\,e_2 \end{array} $$

We will use some vocabulary and notations from event structures. A configuration \(x \subseteq E\) is a down-closed, consistent (i.e. for all \(e, e' \in x\), \(\lnot (e\,\mathrel {\#}_E\,e')\)) finite set of events. We write \(\mathscr {C}(E)\) for the set of configurations of E. We write \(e \rightarrowtriangle _E e'\) for immediate causality, i.e. \(e <_E e'\) with nothing in between – this is the relation represented in diagrams such as Fig. 8. A conflict \(e_1\,\mathrel {\#}_E\,e_2\) is minimal if for all \(e'_1\,<_E\,e_1\), \(\lnot {(e'_1\,\mathrel {\#}_E\,e_2)}\) and symmetrically. We write \(e_1 \sim _E e_2\) to indicate that \(e_1\) and \(e_2\) are in minimal conflict.
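To make the definitions concrete, here is a small enumeration (our illustration) of the configurations of a three-event event structure in which b causally depends on a and conflicts with c:

```python
from itertools import chain, combinations

# Small enumeration (ours) of the configurations (down-closed,
# conflict-free finite sets) of a toy event structure.
events = ["a", "b", "c"]
below = {"a": set(), "b": {"a"}, "c": set()}   # strict causal dependency
conflict = {("b", "c"), ("c", "b")}            # irreflexive, symmetric

def is_configuration(x):
    down_closed = all(below[e] <= x for e in x)
    consistent = all((e1, e2) not in conflict for e1 in x for e2 in x)
    return down_closed and consistent

configs = [set(s) for s in chain.from_iterable(
               combinations(events, k) for k in range(len(events) + 1))
           if is_configuration(set(s))]

assert {"a", "b"} in configs             # down-closed and conflict-free
assert {"b"} not in configs              # not down-closed: b needs a
assert {"a", "b", "c"} not in configs    # b and c are in conflict
```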

With this, we now define arenas.

Definition 3

An arena is \((A, \le _A, \mathrel {\#}_A, {{\,\mathrm{pol}\,}}_A)\), an event structure along with a polarity function \({{\,\mathrm{pol}\,}}_A : A \longrightarrow \{-, +\}\) subject to: (1) \(\le _A\) is forest-shaped, (2) \(\rightarrowtriangle _A\) is alternating: if \(a_1 \rightarrowtriangle _A a_2\), then \({{\,\mathrm{pol}\,}}_A(a_1) \ne {{\,\mathrm{pol}\,}}_A(a_2)\), and (3) it is race-free, i.e. if \(a_1 \sim _A a_2\), then \({{\,\mathrm{pol}\,}}_A(a_1) = {{\,\mathrm{pol}\,}}_A(a_2)\).

Arenas present the computational actions available on a type, following a call-by-name evaluation strategy. For instance, the observable actions of a closed term on \(\mathbf {com}\) are that it can be run, and it may terminate, leading to the arena with \(\mathbf {run}^-\) immediately followed by \(\mathbf {done}^+\). Likewise, a boolean can be evaluated, and can terminate on \(\mathbf {tt}^+\) or \(\mathbf {ff}^+\), yielding the arena on the right of Fig. 9 (when drawing arenas, immediate causality is written with a dotted line, from top to bottom). We present some simple arena constructions. The empty arena, written 1, has no events. If A is an arena, then its dual \(A^\perp \) has the same components, but polarity reversed. The parallel composition of A and B, written \(A\parallel B\), has as events the tagged disjoint union \(\{1\}\times A \cup \{2\}\times B\), and all other components inherited. For \(x_A \in \mathscr {C}(A)\) and \(x_B \in \mathscr {C}(B)\), we also write \(x_A \parallel x_B \in \mathscr {C}(A\parallel B)\). Figure 9 displays the arena \(\mathbf {com}^\perp \parallel \mathbf {bool}^\perp \parallel \mathbf {bool}\).

Fig. 9. An arena for a sequent

\(\mathcal {R}\)-Augmentations. As hinted before, \(\mathcal {R}\)-strategies will be collections of partially ordered plays with resource annotations in \(\mathcal {R}\), called \(\mathcal {R}\)-augmentations.

Definition 4

An augmentation [5] on arena A is a finite partial order \(\mathbbm {q}= (|\mathbbm {q}|, \le _\mathbbm {q})\) such that \(\mathscr {C}(\mathbbm {q}) \subseteq \mathscr {C}(A)\) (concerning configurations, augmentations are considered as event structures with empty conflict), which is courteous, in the sense that for all \(a_1 \rightarrowtriangle _\mathbbm {q}a_2\), if \({{\,\mathrm{pol}\,}}_A(a_1) = +\) or \({{\,\mathrm{pol}\,}}_A(a_2) = -\), then \(a_1 \rightarrowtriangle _A a_2\).

An \(\mathcal {R}\)-augmentation also has (with \([a]_\mathbbm {q}^- = \{a'\le _\mathbbm {q}a\mid {{\,\mathrm{pol}\,}}_A(a') = -\}\))

$$ \lambda _\mathbbm {q}: \left( a \in |\mathbbm {q}|\right) \quad \longrightarrow \quad \left( \mathcal {R}^{[a]_\mathbbm {q}^-} \rightarrow \mathcal {R}\right) $$

such that if \({{\,\mathrm{pol}\,}}_A(a) = -\), then \(\lambda _\mathbbm {q}(a)(\rho ) = \rho _a\), the projection on a of \(\rho \in \mathcal {R}^{[a]_\mathbbm {q}^-}\), and for all \(a\in |\mathbbm {q}|\), \(\lambda _\mathbbm {q}(a)\) is monotone w.r.t. all of its variables.

We write \(\mathcal {R}\text {-}\mathrm {Aug}(A)\) for the set of \(\mathcal {R}\)-augmentations on A.

If \(\mathbbm {q}, \mathbbm {q}'\in \mathcal {R}\text {-}\mathrm {Aug}(A)\), \(\mathbbm {q}\) is rigidly embedded in \(\mathbbm {q}'\), or a prefix of \(\mathbbm {q}'\), written \(\mathbbm {q}\hookrightarrow \mathbbm {q}'\), if \(|\mathbbm {q}| \in \mathscr {C}(\mathbbm {q}')\), for all \(a, a' \in |\mathbbm {q}|\), \(a \le _\mathbbm {q}a'\) iff \(a \le _{\mathbbm {q}'} a'\), and for all \(a\in |\mathbbm {q}|\), \(\lambda _\mathbbm {q}(a) = \lambda _{\mathbbm {q}'}(a)\). The \(\mathcal {R}\)-plays of Sect. 2.3 are formalized as \(\mathcal {R}\)-augmentations: Fig. 8 presents an \(\mathcal {R}\)-augmentation on the arena of Fig. 9. The functional dependency in the annotation of positive events is represented using the free variables introduced alongside negative events; however, this is only a symbolic representation: formally, the annotation is a function for each positive event. In the model of \(\mathcal {R}\)-IPA, we will only use the particular case where the annotations of positive events depend only on the annotations of their immediate predecessors.
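The symbolic annotations of an \(\mathcal{R}\)-play such as Fig. 8 can be evaluated as sketched below (our encoding, in the time bimonoid, with hypothetical move names): each positive move's function is applied to the costs of its negative dependencies, and Opponent's answers propagate costs as when composing with cost-free values:

```python
# Sketch (ours) of evaluating the annotations of an R-play like Fig. 8
# in the time bimonoid, for H = (wait(1); x; wait(2)) || (wait(2); y; wait(1)).

def seq(a, b):
    return a + b

par = max

ann = {
    "run+": lambda env: seq(env["q-"], 1),         # wait(1) before calling x
    "q+":   lambda env: seq(env["q-"], 2),         # wait(2) before calling y
    "tt+":  lambda env: par(seq(env["done-"], 2),  # wait(2) after x returns
                            seq(env["tt-"], 1)),   # wait(1) after y returns
}

env = {"q-": 0}                    # initial Opponent question at cost 0
env["done-"] = ann["run+"](env)    # x = skip answers at no extra cost
env["tt-"] = ann["q+"](env)        # y = tt answers at no extra cost
assert ann["tt+"](env) == 3        # (1;2) || (2;1) = 3
```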

\(\mathcal {R}\)-Strategies. We start by defining \(\mathcal {R}\)-strategies on arenas.

Definition 5

An \(\mathcal {R}\)-strategy on A is a non-empty prefix-closed set of \(\mathcal {R}\)-augmentations \(\sigma \subseteq \mathcal {R}\text {-}\mathrm {Aug}(A)\) which is receptive [5]: for \(\mathbbm {q}\in \sigma \) such that \(|\mathbbm {q}|\) extends with \(a^-\in A\) (i.e. \({{\,\mathrm{pol}\,}}(a) = -\), \(a\not \in |\mathbbm {q}|\), and \(|\mathbbm {q}|\cup \{a\} \in \mathscr {C}(A)\)), there is \(\mathbbm {q}\hookrightarrow \mathbbm {q}' \in \sigma \) such that \(|\mathbbm {q}'| = |\mathbbm {q}|\cup \{a\}\).

If \(\sigma \) is an \(\mathcal {R}\)-strategy on arena A, we write \(\sigma : A\).

Observe that \(\mathcal {R}\)-strategies are fully described by their maximal augmentations, i.e. augmentations that are the prefix of no other augmentation in the strategy. Our interpretation of \(\mathsf {new}\) will use the \(\mathcal {R}\)-strategy \(\mathsf {cell}: \llbracket \mathbf {mem}_W \rrbracket \parallel \llbracket \mathbf {mem}_R \rrbracket \) (with arenas presented in Fig. 10), comprising all the \(\mathcal {R}\)-augmentations rigidly included in either of the two from Fig. 11. These two match the race when reading and writing simultaneously: if both the write move and \(\mathbf {r}^-\) are played, the read may return \(\mathbf {tt}\) or \(\mathbf {ff}\), but it can only return \(\mathbf {tt}\) in the presence of the write.

Fig. 10. \(\llbracket \mathbf {mem}_W \rrbracket \) and \(\llbracket \mathbf {mem}_R \rrbracket \)

Fig. 11. Maximal \(\mathcal {R}\)-augmentations of \(\mathsf {cell}\)

3.2 Interpretation of \(\mathcal {R}\)-IPA

Categorical Structure. In order to define the interpretation of terms of \(\mathcal {R}\)-IPA as \(\mathcal {R}\)-strategies, a key step is to show how to form a category of \(\mathcal {R}\)-strategies. To do that, we follow the standard idea of considering \(\mathcal {R}\)-strategies from A to B to be simply \(\mathcal {R}\)-strategies on the compound arena \(A^\perp \parallel B\). As usual, our first example of an \(\mathcal {R}\)-strategy between arenas is the copycat \(\mathcal {R}\)-strategy.

Definition 6

Let A be an arena. We define a partial order on \(A^\perp \parallel A\):

where \((-)^+\) denotes the transitive closure of a relation. Note that if \(a \in A^\perp \parallel A\) is positive, it has a unique immediate predecessor \(\mathrm {pred}(a) \in A^\perp \parallel A\) for this order.

If \(x\parallel y \in \mathscr {C}(A^\perp \parallel A)\) is down-closed for this order (write \(\le _{x, y}\) for its restriction to \(x\parallel y\)), we define an \(\mathcal {R}\)-augmentation \(\mathbbm {q}_{x,y} = (x\parallel y, \le _{x, y}, \lambda _{x,y})\) where

$$ \lambda _{x, y} : \left( a\in x\parallel y\right) \quad \longrightarrow \quad \left( \mathcal {R}^{[a]_{x\parallel y}^-}\rightarrow \mathcal {R}\right) $$

with \(\lambda _{x,y}(a^-)(\rho ) = \rho _a\), and \(\lambda _{x,y}(a^+)(\rho ) = \rho _{\mathrm {pred}(a)}\). Then, the copycat \(\mathcal {R}\)-strategy on A comprises all the \(\mathbbm {q}_{x, y}\) for \(x\parallel y \in \mathscr {C}(A^\perp \parallel A)\) down-closed for this order.

We first define interactions of \(\mathcal {R}\)-augmentations, extending [5].

Definition 7

We say that \(\mathbbm {q}\in \mathcal {R}\text {-}\mathrm {Aug}(A^\perp \parallel B)\), and \(\mathbbm {p}\in \mathcal {R}\text {-}\mathrm {Aug}(B^\perp \parallel C)\) are causally compatible if \(|\mathbbm {q}| = x_A \parallel x_B\), \(|\mathbbm {p}| = x_B \parallel x_C\), and the preorder \(\le _{\mathbbm {p}\circledast \mathbbm {q}}\) on \(x_A \parallel x_B \parallel x_C\) defined as \(\left( \le _\mathbbm {q}\cup \le _\mathbbm {p}\right) ^+\) is a partial order.

Say \(e \in x_A \parallel x_B \parallel x_C\) is negative if it is negative in \(A^\perp \parallel C\). We define

$$ \lambda _{\mathbbm {p}\circledast \mathbbm {q}} : (e\in x_A\parallel x_B \parallel x_C) \quad \longrightarrow \quad \left( \mathcal {R}^{[e]_{\mathbbm {p}\circledast \mathbbm {q}}^-}\rightarrow \mathcal {R}\right) $$

as follows, by well-founded induction on \(<_{\mathbbm {p}\circledast \mathbbm {q}}\), for \(\rho \in \mathcal {R}^{[e]_{\mathbbm {p}\circledast \mathbbm {q}}^-}\):

$$ \lambda _{\mathbbm {p}\circledast \mathbbm {q}}(e)(\rho ) = \left\{ \begin{array}{ll} \lambda _\mathbbm {p}(e)\left( \langle \lambda _{\mathbbm {p}\circledast \mathbbm {q}}(e')(\rho ) \mid e'\in [e]_\mathbbm {p}^- \rangle \right) &{}~~\text {if }{{\,\mathrm{pol}\,}}_{B^\perp \parallel C}(e) = +,\\ \lambda _\mathbbm {q}(e)\left( \langle \lambda _{\mathbbm {p}\circledast \mathbbm {q}}(e')(\rho ) \mid e' \in [e]_\mathbbm {q}^- \rangle \right) &{}~~\text {if }{{\,\mathrm{pol}\,}}_{A^\perp \parallel B}(e) = +,\\ \rho _e&{}~~\text {otherwise},~{i.e.}\, e\text { negative} \end{array} \right. $$

The interaction \(\mathbbm {p}\circledast \mathbbm {q}\) of compatible \(\mathbbm {q}, \mathbbm {p}\) is \((x_A \parallel x_B \parallel x_C, \le _{\mathbbm {p}\circledast \mathbbm {q}}, \lambda _{\mathbbm {p}\circledast \mathbbm {q}})\).

If \(\sigma : A^\perp \parallel B\) and \(\tau : B^\perp \parallel C\), we write \(\tau \circledast \sigma \) for the set comprising all \(\mathbbm {p}\circledast \mathbbm {q}\) such that \(\mathbbm {p}\in \tau \) and \(\mathbbm {q}\in \sigma \) are causally compatible. For \(\mathbbm {q}\in \sigma \) and \(\mathbbm {p}\in \tau \) causally compatible with \(|\mathbbm {p}\circledast \mathbbm {q}| = x_A \parallel x_B \parallel x_C\), their composition is \(\mathbbm {p}\odot \mathbbm {q}= (x_A \parallel x_C, \le _{\mathbbm {p}\odot \mathbbm {q}}, \lambda _{\mathbbm {p}\odot \mathbbm {q}})\) where \(\le _{\mathbbm {p}\odot \mathbbm {q}}\) and \(\lambda _{\mathbbm {p}\odot \mathbbm {q}}\) are the restrictions of \(\le _{\mathbbm {p}\circledast \mathbbm {q}}\) and \(\lambda _{\mathbbm {p}\circledast \mathbbm {q}}\). Finally, the composition of \(\sigma : A^\perp \parallel B\) and \(\tau : B^\perp \parallel C\) is the set comprising all \(\mathbbm {p}\odot \mathbbm {q}\) for \(\mathbbm {q}\in \sigma \) and \(\mathbbm {p}\in \tau \) causally compatible.
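The inductive clauses defining \(\lambda _{\mathbbm {p}\circledast \mathbbm {q}}\) amount to a single pass along any topological ordering of the (acyclic) causal order of the interaction; a sketch (our encoding, with hypothetical event names):

```python
# Sketch (ours): compute interaction annotations by well-founded
# induction, each event evaluated by whichever side has it positive,
# negatives taking their externally given cost.

def interact(order, owner, funs, rho):
    """order: events in topological order; owner[e] is 'q', 'p', or None
    (None: negative in the composite); funs[e] maps computed costs to R;
    rho: costs of the composite's negative events."""
    val = {}
    for e in order:
        if owner[e] is None:
            val[e] = rho[e]          # negative: cost given by Opponent
        else:
            val[e] = funs[e](val)    # positive on one side: its function
    return val

# toy interaction: q forwards the question at +1 and answers at +2,
# then p adds one more unit before answering the composite
order = ["q0", "q1", "d1", "d0"]
owner = {"q0": None, "q1": "p", "d1": "q", "d0": "p"}
funs = {"q1": lambda v: v["q0"] + 1,
        "d1": lambda v: v["q1"] + 2,
        "d0": lambda v: v["d1"] + 1}
assert interact(order, owner, funs, {"q0": 0})["d0"] == 4
```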

Fig. 12. Example of interaction and composition between \(\mathbb {R}_+\)-augmentations

Fig. 12 displays an example composition of \(\mathbb {R}_+\)-augmentations, with the underlying interaction shown in gray. The reader may check that the variant of the left \(\mathbb {R}_+\)-augmentation with replaced with is causally compatible with the other augmentation in Fig. 11, with composition .

We also have a tensor operation: on arenas, \(A\otimes B\) is simply a synonym for \(A\parallel B\). If \(\mathbbm {q}_1 \in \mathcal {R}\text {-}\mathrm {Aug}(A_1^\perp \parallel B_1)\) and \(\mathbbm {q}_2 \in \mathcal {R}\text {-}\mathrm {Aug}(A_2^\perp \parallel B_2)\), their tensor product \(\mathbbm {q}_1 \otimes \mathbbm {q}_2 \in \mathcal {R}\text {-}\mathrm {Aug}((A_1 \otimes A_2)^\perp \parallel (B_1\otimes B_2))\) is defined in the obvious way. This is lifted to \(\mathcal {R}\)-strategies element-wise. As is common when constructing basic categories of games and strategies, we have:

Proposition 1

There is a compact closed category \(\mathcal {R}\text {-}\mathsf {Strat}\) having arenas as objects, and as morphisms, \(\mathcal {R}\)-strategies between them.

Negative Arenas and \(\mathcal {R}\)-Strategies. As a compact closed category, \(\mathcal {R}\text {-}\mathsf {Strat}\) is a model of the linear \(\lambda \)-calculus. However, we will (as usual for call-by-name) instead interpret \(\mathcal {R}\)-IPA in a sub-category of negative arenas and strategies, in which the empty arena 1 is terminal, providing the interpretation of weakening. We will stay very brief here, as this proceeds exactly as in [5].

A partial order with polarities is negative if all its minimal events are. This applies in particular to arenas and \(\mathcal {R}\)-augmentations. An \(\mathcal {R}\)-strategy is negative if all its \(\mathcal {R}\)-augmentations are. A negative \(\mathcal {R}\)-augmentation \(\mathbbm {q}\in \mathcal {R}\text {-}\mathrm {Aug}(A)\) is well-threaded if for all \(a \in |\mathbbm {q}|\), \([a]_\mathbbm {q}\) has exactly one minimal event; an \(\mathcal {R}\)-strategy is well-threaded iff all its \(\mathcal {R}\)-augmentations are. We have:

Proposition 2

Negative arenas and negative well-threaded \(\mathcal {R}\)-strategies form a cartesian symmetric monoidal closed category \(\mathcal {R}\text {-}\mathsf {Strat}_-\), with 1 terminal.

We also write for morphisms in \(\mathcal {R}\text {-}\mathsf {Strat}_-\).

The closure of \(\mathcal {R}\text {-}\mathsf {Strat}\) does not transport to \(\mathcal {R}\text {-}\mathsf {Strat}_-\), as \(A^\perp \parallel B\) is never negative if A is non-empty; we therefore replace it with a negative version. Here we describe only a restricted case of the general construction in [5], which is however sufficient for the types of \(\mathcal {R}\)-IPA. If A, B are negative arenas and B is well-opened, i.e. it has exactly one minimal event b, we form \(A\multimap B\) as having all components as in \(A^\perp \parallel B\), with additional dependencies \(\{((2, b), (1, a))\mid a\in A\}\).
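The construction of \(A\multimap B\) can be made concrete on finite arenas. The sketch below (our own representation, not the paper's) builds the event set, dependencies and polarities: A-side polarities are reversed as in \(A^\perp \), and the added dependencies from B's unique minimal event to every A-event make the result negative again:

```python
# Illustrative sketch of forming A ⊸ B from negative A and well-opened B.
# Events are tagged (1, a) for a in A and (2, b) for b in B.

def linear_arrow(events_A, deps_A, pol_A, events_B, deps_B, pol_B, b0):
    """b0: the unique minimal event of the well-opened arena B."""
    events = [(1, a) for a in events_A] + [(2, b) for b in events_B]
    deps = ({((1, a1), (1, a2)) for (a1, a2) in deps_A}
            | {((2, b1), (2, b2)) for (b1, b2) in deps_B}
            # the new dependencies {((2, b0), (1, a)) | a in A}:
            | {((2, b0), (1, a)) for a in events_A})
    # dualize polarities on the A side (A^⊥), keep them on the B side
    pol = {(1, a): "-" if pol_A[a] == "+" else "+" for a in events_A}
    pol.update({(2, b): pol_B[b] for b in events_B})
    return events, deps, pol

# com ⊸ com, with arena com having run⁻ below done⁺ (assumed shape)
ev, deps, pol = linear_arrow(["run", "done"], {("run", "done")},
                             {"run": "-", "done": "+"},
                             ["run", "done"], {("run", "done")},
                             {"run": "-", "done": "+"}, "run")
minimal = [e for e in ev if not any(tgt == e for (_, tgt) in deps)]
print(minimal)  # [(2, 'run')]: the only minimal event, and it is negative
```

The check at the end confirms that the resulting arena is negative, as required for \(\mathcal {R}\text {-}\mathsf {Strat}_-\).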

Fig. 13. Maximal \(\mathcal {R}\)-augmentations of \(\mathcal {R}\)-strategies used in the interpretation

Using the compact closed structure of \(\mathcal {R}\text {-}\mathsf {Strat}\) it is easy to build a copycat \(\mathcal {R}\)-strategy , and to associate to any some providing the monoidal closure. The cartesian product of A and B is with components the same as \(A\parallel B\), except for \((1, a) \mathrel {\#}(2, b)\) for all \(a\in A, b\in B\). We write for the projections, and for the pairing of , and .

Interpretation of \(\mathcal {R}\)-IPA. We set , \(\llbracket \mathbf {bool} \rrbracket \) as in the right-hand side of Fig. 9, \(\llbracket \mathbf {mem}_W \rrbracket \) and \(\llbracket \mathbf {mem}_R \rrbracket \) as in Fig. 10, and \(\llbracket A\multimap B \rrbracket = \llbracket A \rrbracket \multimap \llbracket B \rrbracket \) as expected. Contexts \(\varGamma = x_1 : A_1, \dots , x_n : A_n\) are interpreted as \(\llbracket \varGamma \rrbracket = \otimes _{1\le i \le n} \llbracket A_i \rrbracket \). Terms \(\varGamma \vdash M : A\) are interpreted as follows: \(\llbracket \bot \rrbracket \) is the diverging \(\mathcal {R}\)-strategy (no player move), \(\llbracket \mathbf {consume}(\alpha ) \rrbracket \) has only maximal \(\mathcal {R}\)-augmentation , \(\llbracket \mathbf {skip} \rrbracket \) is \(\llbracket \mathbf {consume}(0) \rrbracket \), and and are interpreted similarly with the adequate constant \(\mathcal {R}\)-strategies. The rest of the interpretation is given on the left, using the two obvious isos and ; the \(\mathcal {R}\)-strategy \(\mathsf {cell}\) introduced in Fig. 11; and additional \(\mathcal {R}\)-strategies with typical \(\mathcal {R}\)-augmentations in Fig. 13. We omit the (standard) clauses for the \(\lambda \)-calculus.

3.3 Soundness

Now that we have defined the game semantics of \(\mathcal {R}\)-IPA, we set out to prove that it is sound with respect to the operational semantics given in Sect. 2.2.

We first introduce a useful notation. For any type A, \(\llbracket A \rrbracket \) has a unique minimal event; write for the arena without this minimal event. Likewise, if \(\varGamma \vdash M : A\), then by construction, \(\llbracket M \rrbracket : \llbracket \varGamma \rrbracket ^\perp \parallel \llbracket A \rrbracket \) is a negative \(\mathcal {R}\)-strategy whose augmentations all share the same minimal event \(\mathbf {q}^-_{\mathsf {x}}\) where \(\mathbf {q}^-\) is minimal in A. For \(\alpha \in \mathcal {R}\), write for \(\llbracket M \rrbracket \) without \(\mathbf {q}^-_{\mathsf {x}}\), with \(\mathsf {x}\) replaced by \(\alpha \). Then we have – one may think of as “M started with consumed resource \(\alpha \)”.

Naively, one may expect soundness to state that for all \(\vdash M : \mathbf {com}\), if \(M \Downarrow _\alpha \), then . However, whereas the resource annotations in the semantics are always as good as permitted by the causal constraints, derivations in the operational semantics may be sub-optimal. For instance, we may derive \(M \Downarrow _\alpha \) without using the parallel rule at all. So our statement is:

Theorem 1

If \(\vdash M : \mathbf {com}\) with \(M\Downarrow _\alpha \), there is \(\beta \le _\mathcal {R}\alpha \) s.t. .

Our proof methodology is standard: we replay operational derivations as augmentations in the denotational semantics. Stating the invariant to be proved by induction on operational derivations requires some technology.

If s is a store, then write \(\mathsf {cell}_s : \llbracket \varOmega (s) \rrbracket \) for the memory strategy for store s. It is defined as \(\otimes _{\ell \in \mathrm {dom}(s)} \mathsf {cell}_{s(\ell )}\) where \(\mathsf {cell}_{\varepsilon } = \mathsf {cell}\), \(\mathsf {cell}_{R^\alpha }\) is the \(\mathcal {R}\)-strategy with only maximal \(\mathcal {R}\)-augmentation , \(\mathsf {cell}_{W^\alpha }\) has maximal \(\mathcal {R}\)-augmentation , and the empty \(\mathcal {R}\)-strategy for the other cases. If \(s \le _\mathsf {M}s'\), then \(s'\) can be obtained from s using memory operations and there is a matching \(\mathcal {R}\)-augmentation \(\mathbbm {q}_{s\rhd s'} \in \mathsf {cell}_s\) defined location-wise in the obvious way.

Now, if is a \(\mathcal {R}\)-strategy and \(\mathbbm {q}\in \sigma \) with moves only in \(\llbracket \varOmega (s) \rrbracket ^\perp \) is causally compatible with \(\mathbbm {q}_{s\rhd s'}\), we define the residual of \(\sigma \) after \(\mathbbm {q}\):

If \(\mathbbm {p}\in \sigma \) with \(\mathbbm {q}\hookrightarrow \mathbbm {p}\), we first write \(\mathbbm {p}' = \mathbbm {p}/(\mathbbm {q}\circledast \mathbbm {q}_{s\rhd s'})\) for the \(\mathcal {R}\)-augmentation with \(|\mathbbm {p}'| = |\mathbbm {p}|\setminus |\mathbbm {q}|\), and with causal order the restriction of that of \(\mathbbm {p}\). For \(e\in |\mathbbm {p}'|\), we set \(\lambda _{\mathbbm {p}'}(e)\) to be \(\lambda _\mathbbm {p}(e)\) whose arguments corresponding to negative events \(e'\) in \(\mathbbm {q}\) are instantiated with \(\lambda _{\mathbbm {q}\circledast \mathbbm {q}_{s\rhd s'}}(e') \in \mathcal {R}\). With that, we set \(\sigma / (\mathbbm {q}\circledast \mathbbm {q}_{s\rhd s'})\) as comprising all \(\mathbbm {p}/ (\mathbbm {q}\circledast \mathbbm {q}_{s\rhd s'})\) for \(\mathbbm {p}\in \sigma \) with \(\mathbbm {q}\hookrightarrow \mathbbm {p}\).

Informally, this means that, considering some \(\mathbbm {q}\) which represents a scheduling of the memory operations turning s into \(s'\), we extract from \(\sigma \) its behavior after the execution of these memory operations. Finally, we generalize \(\le _\mathcal {R}\) to \(\mathcal {R}\)-augmentations by setting \(\mathbbm {q}\le _\mathcal {R}\mathbbm {q}'\) iff they have the same underlying partial order and for all \(e\in |\mathbbm {q}|\), \(\lambda _\mathbbm {q}(e) \le _\mathcal {R}\lambda _{\mathbbm {q}'}(e)\). With that, we can finally state:

Lemma 1

Let \(\varOmega (s) \vdash M : A\), \(\langle M, s_1, \alpha \rangle \rightrightarrows \langle M', s'_1 \uplus s'_2, \alpha ' \rangle \) with \(\mathrm {dom}(s_1) = \mathrm {dom}(s'_1)\), and all resource annotations in \(s_1\) lower than \(\alpha \). Then, there is with events in \(\llbracket \varOmega (s) \rrbracket \), causally compatible with \(\mathbbm {q}_{s_1\rhd s'_1}\), and a function

preserving \(\hookrightarrow \) and s.t. for all , \(\varphi (\mathbbm {p}\circledast \mathbbm {q}_{s'_2}) \le _\mathcal {R}\mathbbm {p}\odot \mathbbm {q}_{s'_2}\).

This is proved by induction on the operational semantics – the critical cases are: assignment and dereferencing, exploiting that if \(\alpha \le _\mathcal {R}\beta \), then \(\alpha \parallel \beta = \beta \) (which boils down to idempotence); and parallel composition, where compatibility of \(s'\) and \(s''\) entails that the corresponding augmentations of \(\mathsf {cell}_s\) are compatible.

Lemma 1, instantiated with \(\langle M, \emptyset , 0 \rangle \rightrightarrows \langle \mathbf {skip}, s, \alpha \rangle \), yields soundness.

Non-adequacy. Our model is not adequate. To see why, consider:

Our model predicts that this may evaluate to in 3 s (see Fig. 12) and to in 4 s. However, the operational semantics can only evaluate it (both to and ) in 4 s. Intuitively, the reason is that the causal shapes implicit in the reduction \(\rightrightarrows \) are all series-parallel (generated with sequential and parallel composition), whereas the interaction in Fig. 12 is not.
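The gap can be made precise: by a classical result (Valdes, Tarjan and Lawler), series-parallel partial orders are exactly those containing no induced "N"-shaped subposet, so any interaction whose causal shape contains an N is unreachable by \(\rightrightarrows \). The following brute-force checker (our own illustration, practical only for small posets) tests this criterion:

```python
from itertools import permutations

def is_series_parallel(elems, le_pairs):
    """le_pairs: the full (transitive) strict-order relation as pairs.
    A finite poset is series-parallel iff it has no induced N subposet."""
    le = lambda x, y: (x, y) in le_pairs or x == y
    lt = lambda x, y: le(x, y) and x != y
    inc = lambda x, y: not le(x, y) and not le(y, x)
    for a, b, c, d in permutations(elems, 4):
        # induced "N": a < c, b < c, b < d, all other pairs incomparable
        if lt(a, c) and lt(b, c) and lt(b, d) \
           and inc(a, b) and inc(a, d) and inc(c, d):
            return False
    return True

n_poset = {("a", "c"), ("b", "c"), ("b", "d")}                     # the "N"
diamond = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("a", "d")}  # fork-join
print(is_series_parallel("abcd", n_poset))   # False: not series-parallel
print(is_series_parallel("abcd", diamond))   # True: a; (b ∥ c); d
```

The fork-join diamond is expressible as sequential and parallel composition, while the N poset is not, which is exactly the kind of causal shape our model tracks but \(\rightrightarrows \) cannot realize.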

Our causal semantic approach thus yields a finer resource analysis than the parallel operational semantics achieves. The operational semantics, rather than our model, is to blame for non-adequacy: indeed, we now show that for \(\mathcal {R}= \mathbb {R}_+\) our model is adequate w.r.t. an operational semantics specialized for time.

4 Adequacy for Time

For time, we may refine the operational semantics by adding the following rule

$$ \langle \mathbf {wait}(t_1 + t_2), s, t_0 \rangle \rightarrow \langle \mathbf {wait}(t_2), s, t_0 + t_1 \rangle $$

using which the program above evaluates to in 3 s. It is clear that the soundness theorem of the previous section is retained.

We first focus on adequacy for first-order programs without abstraction or application, written \(\varOmega (s) \vdash _1 M : \mathbf {com}\). For any \(t_0\in \mathbb {R}_+\) there is \(\langle M, s, t_0 \rangle \rightrightarrows \langle M', s\uplus s', t_0 \rangle \) where and \(M'\) is in canonical form: it cannot be decomposed as \(C[\mathbf {skip};\,N]\), \(C[\mathbf {skip}\parallel N]\), \(C[N \parallel \mathbf {skip}]\), , , \(C[\mathbf {wait}(0)]\) and \(C[\mathbf {new}\,x,y\,\mathbf {in}\,N]\) for C[] an evaluation context.

Consider \(\varOmega (s) \vdash _1 M : \mathbf {com}\), and with a top element \(\mathbf {done}^+_{t_{\mathsf {f}}}\) in , the result, i.e. \(\mathbbm {q}\) describes an interaction between and the memory leading to a successful evaluation to \(\mathbf {done}\) at time \(t_{\mathsf {f}}\). To prove adequacy, we must extract from it a derivation from \(\langle M, s, t_0 \rangle \), at time \(t_{\mathsf {f}}\).

Apart from the top \(\mathbf {done}^+_{t_{\mathsf {f}}}\), \(\mathbbm {q}\) only records memory operations, which we must replicate operationally in the adequate order. A minimal operation with timing t is either the top \(\mathbf {done}^+_t\) if it is the only event in \(\mathbbm {q}\), or a prefix corresponding to a memory operation (for instance, in the augmentations of Fig. 14, the only minimal operation has timing 2). If \(t=t_0\), this operation should be performed immediately. If \(t>t_0\) we need to spend time to trigger it – it is then critical to spend time on all available \(\mathbf {wait}\)s in parallel:

Lemma 2

For \(\varOmega (s) \vdash _1 M : \mathbf {com}\) in canonical form, \(t_0 \in \mathbb {R}_+\), with result \(\mathbf {done}^+_{t_{\mathsf {f}}}\), if all minimal operations have timing strictly greater than \(t_0\),

$$ \langle M, s, t_0 \rangle \rightrightarrows \langle M', s, t_0 + t \rangle $$

for some \(t>0\) and \(M'\) only differing from M by having smaller annotations in \(\mathbf {wait}\) commands and at least one \(\mathbf {wait}\) changed to \(\mathbf {skip}\).

Furthermore, there is \(\mathbbm {q}\le _\mathcal {R}\mathbbm {q}'\) with with result \(\mathbf {done}^+_{t_{\mathsf {f}}}\).

Fig. 14. Spending time adequately (where \(\mathbf {test}\,M = \mathbf {if}\,M\,\mathbf {skip}\,\bot \))

Proof

As M is in canonical form, all delays in minimal operations are caused by \(\mathbf {wait}(t)\) commands in head position (i.e. such that \(M = C[\mathbf {wait}(t)]\)). Let \(t_{\mathrm {min}}\) be the minimal time appearing in those \(\mathbf {wait}(-)\) commands in head position. Using our new rule and parallel composition, we subtract \(t_{\mathrm {min}}\) from all such instances of \(\mathbf {wait}(-)\); then transform the resulting occurrences of \(\mathbf {wait}(0)\) into \(\mathbf {skip}\).

A representative example is displayed in Fig. 14. In the second step, though is available immediately, we must wait to get the right result.
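This proof step admits a direct operational reading. The sketch below (our own term representation as nested tuples; not the paper's notation) finds the head-position waits, subtracts the minimum from each of them using the rule \(\mathbf {wait}(t_1+t_2) \rightarrow \mathbf {wait}(t_2)\), and turns the resulting \(\mathbf {wait}(0)\)s into \(\mathbf {skip}\):

```python
# Illustrative sketch of the Lemma 2 step on terms built from
# ("wait", t), ("seq", M, N), ("par", M, N) and "skip".

def head_waits(m):
    """Delays of wait commands in head position (evaluation contexts:
    left of a sequencing, either side of a parallel composition)."""
    if isinstance(m, tuple) and m[0] == "wait":
        return [m[1]]
    if isinstance(m, tuple) and m[0] == "seq":
        return head_waits(m[1])
    if isinstance(m, tuple) and m[0] == "par":
        return head_waits(m[1]) + head_waits(m[2])
    return []

def spend(m, t):
    """Subtract t from every head-position wait; wait(0) becomes skip."""
    if isinstance(m, tuple) and m[0] == "wait":
        r = m[1] - t
        return "skip" if r == 0 else ("wait", r)
    if isinstance(m, tuple) and m[0] == "seq":
        return ("seq", spend(m[1], t), m[2])   # only the head is affected
    if isinstance(m, tuple) and m[0] == "par":
        return ("par", spend(m[1], t), spend(m[2], t))
    return m

# wait(1) ∥ (wait(2); wait(3)): spend t_min = 1 on both branches at once
m = ("par", ("wait", 1.0), ("seq", ("wait", 2.0), ("wait", 3.0)))
print(spend(m, min(head_waits(m))))
# ('par', 'skip', ('seq', ('wait', 1.0), ('wait', 3.0)))
```

Note that `spend` advances both parallel branches by the same amount, which is exactly why spending time on all available waits in parallel is critical.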

With that we can prove the key lemma towards adequacy.

Lemma 3

Let \(\varOmega (s) \vdash _1 M : \mathbf {com}\), \(t_0 \in \mathbb {R}_+\), and with result \(\mathbf {done}_{t_{\mathsf {f}}}^+\) in . Then, there is \(\langle M, s, t_0 \rangle \rightrightarrows \langle \mathbf {skip}, - , t_{\mathsf {f}} \rangle \).

Proof

By induction on the size of M. First, we convert M to canonical form. If all minimal operations in have timing strictly greater than \(t_0\), we apply Lemma 2 and conclude by induction hypothesis.

Otherwise, at least one minimal operation has timing \(t_0\). If it is the result \(\mathbf {done}^+_{t_0}\) in , then M is the constant \(\mathbf {skip}\). Otherwise, it is a memory operation, say \(\mathbbm {p}\hookrightarrow \mathbbm {q}\) with and write also \(s' = s[\ell \mapsto s(\ell ).R^{t_0}]\). It follows then by an induction on M that for some C[], with

so \(\langle M, s, t_0 \rangle \rightrightarrows \langle C[b], s', t_0 \rangle \rightrightarrows \langle \mathbf {skip}, - , t_{\mathsf {f}} \rangle \) by induction hypothesis.

Adequacy follows for higher-order programs: in general, any \(\vdash M : \mathbf {com}\) can be \(\beta \)-reduced to a first-order term \(M'\), leaving the interpretation unchanged. By Church–Rosser, \(M'\) behaves like M operationally, up to weak bisimulation. Hence:

Theorem 2

Let \(\vdash M : \mathbf {com}\). For any \(t\in \mathbb {R}_+\), if then \(M\Downarrow _t\).

5 Conclusion

It would be interesting to compare our model with structures used in timing analysis; for instance, [23] relies on a concurrent generalization of control flow graphs that is reminiscent of event structures. In future work, we also plan to investigate whether our annotated model construction could be used for other purposes, such as symbolic execution or abstract interpretation.