## Abstract

We present a framework for game semantics based on concurrent games, that keeps track of *resources* as data modified throughout execution but not affecting its control flow. Our leading example is *time*, yet the construction is in fact parametrized by a *resource bimonoid* \(\mathcal {R}\), an algebraic structure expressing resources and the effect of their consumption either sequentially or in parallel. Relying on our construction, we give a sound resource-sensitive denotation to \(\mathcal {R}\)-IPA, an affine higher-order concurrent programming language with shared state and a primitive for resource consumption in \(\mathcal {R}\). Compared with general operational semantics parametrized by \(\mathcal {R}\), our resource analysis turns out to be finer, leading to non-adequacy. Yet, our model is not degenerate as adequacy holds for an operational semantics specialized to time.

In regard to earlier semantic frameworks for tracking resources, the main novelty of our work is that it is based on a non-interleaving semantics, and as such accounts for *parallel* use of resources accurately.

Supported by project Elica (ANR-14-CE25-0005) and Labex MiLyon (ANR-10-LABX-0070) of Université de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007), operated by the French National Research Agency (ANR).


## 1 Introduction

Since its inception, *denotational semantics* has grown into a very wide subject. Its developments now cover numerous programming languages or paradigms, using approaches that range from the extensionality of *domain semantics* [24] (recording the input-output behaviour) to the intensionality of *game semantics* [1, 17] (recording execution traces, formalized as *plays* in a 2-players game between the program (“Player”) and its execution environment (“Opponent”)). Denotational semantics has had significant influence on the theory of programming languages, with contributions ranging from program logics or reasoning principles to new language constructs and verification algorithms.

Most denotational models are *qualitative* in nature, meaning that they ignore efficiency of programs in terms of time, or other resources such as power or bandwidth. To our knowledge, the first denotational model to cover time was Ghica’s *slot games* [13], an extension of Ghica and Murawski’s fully abstract model for a higher-order language with concurrency and shared state [14]. Slot games exploit the intensionality of game semantics and represent time via special moves called *tokens* matching the *ticks* of a clock. They are fully abstract *w.r.t.* the notion of observation in Sands’ operational theory of *improvement* [26].

More recently, there has been a growing interest in capturing quantitative aspects denotationally. Laird *et al.* constructed [18] an enrichment of the relational model of Linear Logic [11], using weights from a *resource semiring* given as parameter. This way, they capture in a single framework several notions of resources for extensions of PCF, ranging from time to probabilistic weights. Two type systems with similar parametrizations were introduced simultaneously by, on the one hand, Ghica and Smith [15] and, on the other hand, Brunel, Gaboardi *et al.* [4]; the latter with a quantitative realizability denotational model.

In this paper, we give a resource-sensitive denotational model for \(\mathcal {R}\)*-IPA*, an affine higher-order programming language with concurrency, shared state, and with a primitive for resource consumption. With respect to slot games our model differs in that our resource analysis accounts for the fact that resource consumption may combine differently in parallel and sequentially – simply put, we mean to express that \(\mathbf {wait}(1) \parallel \mathbf {wait}(1)\) may terminate in 1 s, rather than 2. We also take inspiration from weighted relational models [18] in that our construction is parametrized by an algebraic structure representing resources and their usage. Our *resource bimonoids* \(\langle \mathcal {R}, 0, ;, \parallel , \le \rangle \) differ however significantly from their resource semiring \(\langle \mathcal {R}, 0, 1, +, \cdot \rangle \): while ; matches \(\cdot \), \(\parallel \) is a new operation expressing the consumption of resources in parallel. We have no counterpart for the \(+\), which agglomerates distinct non-deterministically co-existing executions leading to the same value: instead our model keeps them separate.

Capturing parallel resource usage is technically challenging, as it can only be attempted relying on a representation of execution where parallelism is explicit. Accordingly, our model belongs to the family of *concurrent* or *asynchronous* game semantics pioneered by Abramsky and Melliès [2], pushed forward by Melliès [20], later with Mimram [22], and by Faggian and Piccolo [12]; it has been actively developed in the past ten years, prompted by the introduction of a more general framework by Rideau and Winskel [7, 25]. In particular, our model is a refinement of the (qualitative) truly concurrent interpretation of *affine IPA* described in [5]. Our methodology to record resource usage is inspired by game semantics for first-order logic [3, 19] where moves carry first-order terms from a signature – instead here they carry explicit *functions*, *i.e.* terms up to a congruence (it is also reminiscent of Melliès’ construction of the free dialogue category over a category [21]).

As in [5], we choose to interpret an affine language: this lets us focus on the key phenomena, which are already at play, avoiding the technical hindrance caused by replication. As suggested by recent experience with concurrent games [6, 10], we expect the developments presented here to extend transparently in the presence of *symmetry* [8, 9]; this would allow us to move to the general (non-affine) setting.

*Outline.* We start Sect. 2 by introducing the language \(\mathcal {R}\)-IPA. We equip it first with an interleaving semantics and sketch its interpretation in slot games. We then present resource bimonoids, give a new parallel operational semantics, and hint at our truly concurrent games model. In Sect. 3, we construct this model and prove its soundness. Finally in Sect. 4, we show adequacy for an operational semantics specialized to time, noting first that the general parallel operational semantics is too coarse *w.r.t.* our model.

## 2 From \(\mathcal {R}\)-IPA to \(\mathcal {R}\)-Strategies

### 2.1 Affine IPA

*Terms and Types.* We start by introducing the basic language under study, *affine Idealized Parallel Algol* (IPA). It is an affine variant of the language studied in [14], a call-by-name concurrent higher-order language with shared state. Its **types** are given by the following grammar:

Here, \(\mathbf {mem}_W\) is the type of *writeable* references and \(\mathbf {mem}_R\) is the type of *readable* references; the distinction is necessary in this affine setting as it allows accesses to a given state to be shared among subprocesses; this should make more sense in the next paragraph with the typing rules. In the sequel, non-functional types are called **ground types** (for which we use notation \(\mathbb {X}\)). We define terms directly along with their typing rules in Fig. 1. **Contexts** are simply lists \(x_1 : A_1, \dots , x_n : A_n\) of variable declarations (in which each variable occurs at most once), and the exchange rule is kept implicit. Weakening is not a rule but is admissible. We comment on a few aspects of these rules.

Firstly, observe that the reference constructor \(\mathbf {new}\,x,y\,\mathbf {in}\,M\) binds two variables *x* and *y*, one with a write permission and the other with a read permission. In this way, the permissions of a shared state can be distributed in different components of *e.g.* an application or a parallel composition, causing interferences despite the affine aspect of the language. Secondly, the assignment command seems quite restrictive. Yet, the language is affine, so a variable can only be written to once, and, as we choose to initialize it to \(\mathbf {ff}\), the only useful thing to write is \(\mathbf {tt}\). Finally, many rules seem restrictive in that they apply only at ground type \(\mathbb {X}\). More general rules can be defined as syntactic sugar; for instance we give (all other constructs extend similarly): \(M;_{A\multimap B} N = \lambda x^A.\left( \,M;_B\,(N\,x)\right) \).

*Operational Semantics.* We fix a countable set \(\mathsf {L}\) of **memory locations**. Each location \(\ell \) comes with two associated variable names \(\ell _W\) and \(\ell _R\) distinct from other variable names. Usually, stores are partial maps from \(\mathsf {L}\) to \(\{\mathbf {ff}, \mathbf {tt}\}\). Instead, we find it more convenient to introduce the notion of **state** of a memory location. A state corresponds to a history of memory actions (reads or writes) and follows the *state diagram* of Fig. 2 (ignoring for now the annotations with \(\alpha , \beta \)). We write \((\mathsf {M},\le _\mathsf {M})\) for the induced set of states and accessibility relation on it. For each \(m\in \mathsf {M}\), its set of **available actions** is \(\mathrm {act}(m)=\{W, R\}\setminus m\) (the letters not occurring in *m*, annotations being ignored); and its **value** (in \(\{\mathbf {ff}, \mathbf {tt}\}\)) is \(\mathbf {tt}\) iff *W* occurs in *m*.

Finally, a **store** is a partial map \(s : \mathsf {L}\rightarrow \mathsf {M}\) with finite domain, mapping each memory location to its current state. To each store corresponds a *typing context*

The operational semantics operates on **configurations** defined as pairs \(\langle M, s \rangle \) with *s* a store and \(\varGamma \vdash M : A\) a term whose free variables are all memory locations with \(\varGamma \subseteq \varOmega (s)\). This property will be preserved by our rather standard small-step, call-by-name operational semantics. We refrain for now from giving the details, they will appear in Sect. 2.2 in the presence of resources.

### 2.2 Interleaving Cost Semantics, and \(\mathcal {R}\)-IPA

Ghica and Murawski [14] have constructed a *fully abstract* (for may-equivalence) model for (non-affine) IPA, relying on an extension of Hyland-Ong games [17].

Their model takes an *interleaving* view of the execution of concurrent programs: a program is represented by the set of all its possible executions, as decided non-deterministically by the scheduler. In game semantics, this is captured by lifting the standard requirement that the two players alternate. For instance, Fig. 3 shows a *play* in the interpretation of the program \(x : \mathbf {com}, y : \mathbf {bool}\vdash x \parallel y : \mathbf {bool}\). The diagram is read from top to bottom, chronologically. Each line comprises one computational event (“move”), annotated with “−” if due to the execution environment (“Opponent”) and with “\(+\)” if due to the program (“Player”); each move corresponds to a certain type component, under which it is placed. With the first move \(\mathbf {q}^-\), the environment initiates the computation. Player then plays \(\mathbf {run}^+\), triggering the evaluation of *x*. In standard game semantics, the control would then go back to the execution environment – Player would be stuck until Opponent plays. Here instead, due to parallelism Player can play a second move \(\mathbf {q}^+\) immediately. At this point of execution, *x* and *y* are both running in parallel. Only when they have both returned (moves \(\mathbf {done}^-\) and \(\mathbf {tt}^-\)) is Player able to respond \(\mathbf {tt}^+\), terminating the computation. The full interpretation of \(x : \mathbf {com}, y : \mathbf {bool}\vdash x \parallel y : \mathbf {bool}\), its *strategy*, comprises numerous plays like this one, one for each interleaving.

As often in denotational semantics, Ghica and Murawski’s model is invariant under reduction: if \(\langle M, s \rangle \rightarrow \langle M', s' \rangle \), both have the same denotation. The model adequately describes the result of computation, but not its *cost* in terms, for instance, of time. Of course this cost is not yet specified: one must, for instance, define a *cost model* assigning a cost to all basic operations (*e.g.* memory operations, function calls, *etc*). In this paper we instead enrich the language with a primitive for *resource consumption* – cost models can then be captured by inserting this primitive concomitantly with the costly operations (see for example [18]).

\(\mathcal {R}\)*-IPA.* Consider a set \(\mathcal {R}\) of **resources**. The language \(\mathcal {R}\)-IPA is obtained by adding to affine IPA a new construction, \(\mathbf {consume}(\alpha )\), typed as in Fig. 4. When evaluated, \(\mathbf {consume}(\alpha )\) triggers the consumption of resource \(\alpha \). Time consumption will be a running example throughout the paper. In that case, we will consider the non-negative reals \(\mathbb {R}_+\) as set \(\mathcal {R}\), and for \(t\in \mathbb {R}_+\) we will use \(\mathbf {wait}(t)\) as a synonym for \(\mathbf {consume}(t)\).

To equip \(\mathcal {R}\)-IPA with an operational semantics we need operations on \(\mathcal {R}\); these are introduced throughout this section. First we have \(0 \in \mathcal {R}\), the null resource; if \(\alpha ,\beta \in \mathcal {R}\), we have some \( \alpha ;\,\beta \in \mathcal {R}\), the resource taken by consuming \(\alpha \), then \(\beta \) – for \(\mathcal {R}= \mathbb {R}_+\), this is simply addition. To evaluate \(\mathcal {R}\)-IPA, the **configurations** are now triples \(\langle M, s, \alpha \rangle \) with \(\alpha \in \mathcal {R}\) tracking resources already spent. With that, we give in Fig. 5 the basic operational rules. The only rule affecting current resources is that for \(\mathbf {consume}(\beta )\), the others leave it unchanged. However note that we store the current state of resources when performing memory operations, explaining the annotations in Fig. 2. These annotations do not impact the operational behaviour, but will be helpful in relating with the game semantics in Sect. 3. As usual, these rules apply within call-by-name evaluation contexts – we omit the details here but they will appear for our final operational semantics.

*Slot Games.* In [13], Ghica extends Ghica and Murawski’s model to *slot games* in order to capture resource consumption. Slot games introduce a new action called a *token*, representing an atomic resource consumption; *n* successive tokens represent the consumption of *n* units. A model of \(\mathbb {N}_+\)-IPA using slot games would have for instance the play in Fig. 6 in the interpretation of

in context \(x : \mathbf {com}, y : \mathbf {bool}\), among many others. Note that, in examples, we use a more liberal typing rule for ‘; ’ allowing \(y^\mathbf {bool};\,z^\mathbf {com}: \mathbf {bool}\) to avoid clutter: it can be encoded as . Following the methodology of game semantics, composition would yield for this term a strategy with a single maximal play, whose tokens reflect the overall 6 time units (say “seconds”) that have to pass in total before we see the result (3 in each thread). This seems wasteful, but it is indeed an adequate computational analysis, because both slot games and the operational semantics given so far implicitly assume a sequential operational model, *i.e.* that both threads compete to be scheduled on a *single* processor. Let us now question that assumption.

*Parallel Resource Consumption.* With a truly concurrent evaluation in mind, we should be able to prove that the program above may terminate in 3 s, rather than 6; as nothing prevents the threads from evaluating in parallel. Before we update the operational semantics to express that, we enrich our resource structure to allow it to express the effect of consuming resources in parallel.
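For the time interpretation – where sequential consumption adds durations and parallel consumption takes their maximum, as made precise by the bimonoid structure introduced next – the gap between the two operational models can be checked by a two-line computation. The following Python sketch is purely illustrative (the function names are ours, not part of the model):

```python
# Illustrative sketch: sequential vs. truly parallel cost of two threads,
# each performing 3 seconds' worth of waits, under the time interpretation
# where ";" is addition and "||" is maximum.

def seq(*costs):
    """Sequential consumption: durations accumulate."""
    return sum(costs)

def par(*costs):
    """Parallel consumption: threads make progress simultaneously."""
    return max(costs)

interleaved = seq(3, 3)     # single-processor model, as in slot games: 6
truly_parallel = par(3, 3)  # both threads run at once: 3
print(interleaved, truly_parallel)
```

On a single processor the two threads' waits accumulate to 6 seconds, while a truly parallel evaluation needs only 3, which is exactly the discrepancy the parallel operational semantics is meant to capture.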

We now introduce the full algebraic structure we require for resources.

### Definition 1

A **resource bimonoid** is \(\langle \mathcal {R}, 0, ;, \parallel , \le \rangle \) where \(\langle \mathcal {R}, 0, ;, \le \rangle \) is an ordered monoid, \(\langle \mathcal {R}, 0, \parallel , \le \rangle \) is an ordered commutative monoid, 0 is bottom for \(\le \), and \(\parallel \) is **idempotent**, *i.e.* it satisfies \(\alpha \parallel \alpha = \alpha \).

A resource bimonoid is in particular a *concurrent monoid* in the sense of *e.g.* [16] (though we take \(\le \) in the opposite direction: we read \(\alpha \le _\mathcal {R}\alpha '\) as “\(\alpha \) is *better*/*more efficient* than \(\alpha '\)”). Our idempotence assumption is rather strong, as it entails that \(\alpha \parallel \beta \) is the supremum of \(\alpha , \beta \in \mathcal {R}\). This allows us to recover a number of simple laws, *e.g.* \(\alpha \parallel \beta \le \alpha ;\,\beta \), or the exchange rule \((\alpha ;\,\beta ) \parallel (\alpha ';\,\beta ') \le (\alpha \parallel \alpha ');\,(\beta \parallel \beta ')\). Idempotence, which would not be needed for a purely functional language, is used crucially in our interpretation of state.
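These laws are easy to check concretely on the time bimonoid \(\langle \mathbb {R}_+, 0, +, \max , \le \rangle \). The following sketch (our own sanity check, not part of the development) verifies idempotence, \(\alpha \parallel \beta \le \alpha ;\,\beta \), and the exchange rule on sample values:

```python
# Sanity check of the derived bimonoid laws on the time bimonoid:
# ";" is +, "||" is max, 0 is the null resource, <= is the usual order.
import itertools

seq = lambda a, b: a + b
par = lambda a, b: max(a, b)

samples = [0, 0.5, 1, 2, 3.5]

for a in samples:
    assert par(a, a) == a                       # idempotence of ||
for a, b in itertools.product(samples, repeat=2):
    assert par(a, b) <= seq(a, b)               # a || b  <=  a ; b
for a, b, a2, b2 in itertools.product(samples, repeat=4):
    # exchange: (a ; b) || (a2 ; b2)  <=  (a || a2) ; (b || b2)
    assert par(seq(a, b), seq(a2, b2)) <= seq(par(a, a2), par(b, b2))
```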

Our leading examples are \(\langle \mathbb {N}_+, 0, +, \max , \le \rangle \) and \(\langle \mathbb {R}_+, 0, +, \max , \le \rangle \) – we call the latter the *time bimonoid*. Others are the *permission bimonoid* \(\langle \mathcal {P}(P), \emptyset , \cup , \cup , \subseteq \rangle \) for some set *P* of *permissions*: if reaching a state requires certain permissions, it does not matter whether these have been requested sequentially or in parallel; and the bimonoid of *parametrized time* \(\langle \mathcal {M}, 0, ;, \parallel , \le \rangle \) with \(\mathcal {M}\) the monotone functions from positive reals to positive reals, 0 the constantly-zero function, \(\parallel \) the pointwise maximum, and \((f;\,g)(x) = f(x) + g(x + f(x))\): it tracks time consumption in a context where the time taken by \(\mathbf {consume}(\alpha )\) might grow over time.
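The parametrized-time operations can be made concrete. The sketch below (our own illustration; `const` and `slowdown` are example resources, not from the paper) checks that on constant functions \(;\) and \(\parallel \) degenerate to the ordinary time bimonoid, and shows a time-dependent cost:

```python
# Parametrized time: resources are monotone functions R+ -> R+.
# (f ; g)(x) = f(x) + g(x + f(x));  (f || g)(x) = max(f(x), g(x)).

def pseq(f, g):
    # g starts once f is done, i.e. at time x + f(x)
    return lambda x: f(x) + g(x + f(x))

def ppar(f, g):
    # both run from time x; the slower one determines the cost
    return lambda x: max(f(x), g(x))

const = lambda t: (lambda x: t)   # cost independent of the starting time
slowdown = lambda x: 1 + 0.1 * x  # cost grows as time passes

# On constant costs, ";" is addition and "||" is maximum:
assert pseq(const(1), const(2))(0) == 3
assert ppar(const(1), const(2))(5) == 2

# With a time-dependent cost, when an operation is scheduled matters:
total = pseq(const(10), slowdown)(0)  # slowdown starts at time 10
print(total)                          # 10 + (1 + 0.1 * 10)
```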

Besides time-based bimonoids, it would be appealing to cover resources such as *power*, *bandwidth* or *heap space*. Those, however, clearly fail idempotence of \(\parallel \), and are therefore not covered; it is not clear how to extend our model to them.

*Parallel Operational Semantics.* Let us fix a resource bimonoid \(\mathcal {R}\). To express parallel resource consumption, we use the many-step *parallel reductions* defined in Fig. 7, with **call-by-name evaluation contexts** given by

The rule for parallel composition carries some restrictions regarding memory: *M* and *N* can only reduce concurrently if they do not access the same memory cells. This is achieved by requiring that the *partial* operation \(s\uparrow s'\) – which intuitively corresponds to “merging” two memory stores *s* and \(s'\) whenever there are no conflicts – is defined. More formally, the partial order \(\le _\mathsf {M}\) on memory states induces a partial order (also written \(\le _\mathsf {M}\)) on stores, defined by \(s \le _\mathsf {M}s'\) iff \(\mathrm {dom}(s) \subseteq \mathrm {dom}(s')\) and for all \(\ell \in \mathrm {dom}(s)\) we have \(s(\ell ) \le _\mathsf {M}s'(\ell )\). This order is a cpo in which \(s'\) and \(s''\) are *compatible* (*i.e.* have an upper bound) iff for all \(\ell \in \mathrm {dom}(s') \cap \mathrm {dom}(s'')\), \(s'(\ell ) \le _\mathsf {M}s''(\ell )\) or \(s''(\ell ) \le _\mathsf {M}s'(\ell )\) – so that no interference occurred in reaching \(s'\) and \(s''\) from their last common ancestor. When \(s'\) and \(s''\) are compatible, \(s' \uparrow s''\) is their lub; otherwise it is undefined.
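A minimal executable reading of \(s \uparrow s'\), under the assumption that a memory state is encoded as the string of actions performed so far (e.g. `""`, `"W"`, `"WR"`, `"RW"`) ordered by prefix – our own encoding of the state diagram of Fig. 2, annotations ignored:

```python
# Sketch: merging two stores into their lub, or detecting interference.
# A store maps locations to states; states are action histories ordered
# by prefix (this encoding is ours, not the paper's).

def state_leq(m, m2):
    """m <=_M m2 iff m2 extends the history m."""
    return m2.startswith(m)

def merge(s, s2):
    """Return the lub s ^ s2 of two compatible stores, else None."""
    out = dict(s)
    for loc, m2 in s2.items():
        m = out.get(loc, "")
        if state_leq(m, m2):
            out[loc] = m2      # s2 is ahead on this location: take its state
        elif state_leq(m2, m):
            pass               # s is ahead: keep its state
        else:
            return None        # incomparable histories: interference
    return out

assert merge({"l": "W"}, {"l": "WR"}) == {"l": "WR"}   # compatible
assert merge({"l": "W"}, {"l": "R"}) is None           # read/write race
```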

For \(\vdash M : \mathbf {com}\), we set \(M \Downarrow _\alpha \) if \(\langle M, \emptyset , 0 \rangle \rightrightarrows \langle \mathbf {skip}, s, \alpha \rangle \). For instance, instantiating the rules with the time bimonoid, we have

### 2.3 Non-interleaving Semantics

To capture this parallel resource usage semantically, we build on the games model for affine IPA presented in [5]. Rather than presenting programs as collections of *sequences* of moves expressing all observable sequences of computational actions, this model adopts a *truly concurrent* view using collections of *partially ordered* plays. For each Player move, the order specifies its *causal dependencies*, *i.e.* the Opponent moves that need to have happened before. For instance, ignoring the subscripts, Fig. 8 displays a typical partially ordered play in the strategy for the term *H* of Sect. 2.2. One partially ordered play does not fully specify a sequential execution: that in Fig. 8 stands for *many* sequential executions, one of which is in Fig. 3. Behaviours expressed by partially ordered plays are deterministic *up to* choices of the scheduler irrelevant for the eventual result. Because \(\mathcal {R}\)-IPA is non-deterministic (via concurrency and shared state), our strategies will be *sets* of such partial orders.

To express resources, we leverage the causal information and indicate, in each partially ordered play and for each positive move, an \(\mathcal {R}\)-expression representing its *additional cost* as a function of the cost of its negative dependencies. Figure 8 displays such an \(\mathcal {R}\)*-play*: each Opponent move introduces a fresh variable, which can be used in annotations for Player moves. As we will see further on, once applied to strategies for values \(\mathbf {skip}\) and \(\mathbf {tt}\) (with no additional cost), this \(\mathcal {R}\)-play will answer the initial Opponent move \(\mathbf {q}^-_\mathsf {x}\) with a final move annotated \(\alpha = (1;\,2) \parallel (2;\,1) =_{\mathbb {R}_+} 3\), as prescribed by the more efficient parallel operational semantics.

We now go on to define formally our semantics.

## 3 Concurrent Game Semantics of IPA

### 3.1 Arenas and \(\mathcal {R}\)-Strategies

*Arenas.* We first introduce *arenas*, the semantic representation of types in our model. As in [5], an arena will be a certain kind of *event structure* [27].

### Definition 2

An **event structure** comprises \((E, \le _E, \mathrel {\#}_E)\) where *E* is a set of *events*, \(\le _E\) is a partial order called *causal dependency*, and \(\mathrel {\#}_E\) is an irreflexive symmetric binary relation called *conflict*, subject to the two axioms:

We will use some vocabulary and notations from event structures. A **configuration** \(x \subseteq E\) is a down-closed, consistent (*i.e.* for all \(e, e' \in x\), \(\lnot (e\,\mathrel {\#}_E\,e')\)) finite set of events. We write \(\mathscr {C}(E)\) for the set of configurations of *E*. We say that *e* is an **immediate cause** of \(e'\) iff \(e <_E e'\) with nothing in between – this is the relation represented in diagrams such as Fig. 8. A conflict \(e_1\,\mathrel {\#}_E\,e_2\) is **minimal** if for all \(e'_1\,<_E\,e_1\), \(\lnot {(e'_1\,\mathrel {\#}_E\,e_2)}\) and symmetrically. We write \(e_1 \sim _E e_2\) to indicate that \(e_1\) and \(e_2\) are in minimal conflict.
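These definitions are small enough to execute directly. The following sketch (representation and names are ours) implements the configuration test on a finite event structure, using the arena of booleans described in Sect. 2.2 as example:

```python
# Sketch: a finite event structure with an explicit configuration test.
from itertools import combinations

class EventStructure:
    def __init__(self, events, deps, conflicts):
        self.events = set(events)
        self.deps = set(deps)                      # (e, e') meaning e <= e'
        self.conflicts = {frozenset(c) for c in conflicts}

    def below(self, e):
        """Causal history of e: reflexive-transitive closure of deps."""
        down, frontier = {e}, {e}
        while frontier:
            frontier = {d for (d, x) in self.deps if x in frontier} - down
            down |= frontier
        return down

    def is_configuration(self, x):
        """Down-closed and conflict-free finite set of events."""
        x = set(x)
        down_closed = all(self.below(e) <= x for e in x)
        consistent = all(frozenset((e, e2)) not in self.conflicts
                         for e, e2 in combinations(x, 2))
        return down_closed and consistent

# The events of bool: q below tt and ff, with tt and ff in conflict.
bool_es = EventStructure({"q", "tt", "ff"},
                         {("q", "tt"), ("q", "ff")},
                         [("tt", "ff")])
assert bool_es.is_configuration({"q", "tt"})
assert not bool_es.is_configuration({"tt"})             # not down-closed
assert not bool_es.is_configuration({"q", "tt", "ff"})  # inconsistent
```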

With this, we now define arenas.

### Definition 3

An **arena** is \((A, \le _A, \mathrel {\#}_A, {{\,\mathrm{pol}\,}}_A)\), an event structure along with a **polarity function** \({{\,\mathrm{pol}\,}}_A : A \longrightarrow \{-, +\}\) subject to: *(1)* \(\le _A\) is forest-shaped, *(2)* immediate causality is alternating: if \(a_1\) is an immediate cause of \(a_2\), then \({{\,\mathrm{pol}\,}}_A(a_1) \ne {{\,\mathrm{pol}\,}}_A(a_2)\), and *(3)* it is race-free, *i.e.* if \(a_1 \sim _A a_2\), then \({{\,\mathrm{pol}\,}}_A(a_1) = {{\,\mathrm{pol}\,}}_A(a_2)\).

Arenas present the computational actions available on a type, following a call-by-name evaluation strategy. For instance, the observable actions of a closed term on \(\mathbf {com}\) are that it can be run, and it may terminate, leading to the arena with a negative move \(\mathbf {run}\) immediately followed by a positive move \(\mathbf {done}\). Likewise, a boolean can be evaluated, and can terminate on \(\mathbf {tt}\) or \(\mathbf {ff}\), yielding the arena on the right of Fig. 9 (when drawing arenas, immediate causality is written with a dotted line, from top to bottom). We present some simple arena constructions. The **empty arena**, written 1, has no events. If *A* is an arena, then its **dual** \(A^\perp \) has the same components, but polarity reversed. The **parallel composition** of *A* and *B*, written \(A\parallel B\), has as events the tagged disjoint union \(\{1\}\times A \cup \{2\}\times B\), and all other components inherited. For \(x_A \in \mathscr {C}(A)\) and \(x_B \in \mathscr {C}(B)\), we also write \(x_A \parallel x_B \in \mathscr {C}(A\parallel B)\). Figure 9 displays the arena \(\mathbf {com}^\perp \parallel \mathbf {bool}^\perp \parallel \mathbf {bool}\).
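The dual and parallel-composition constructions can be sketched on a naive representation where an arena is a pair of a polarity map and a set of immediate-dependency pairs (conflict is omitted; the encoding is ours, not the paper's):

```python
# Sketch of arena constructions: dual flips polarities, parallel composition
# tags events with 1 or 2 and inherits all structure componentwise.

def dual(arena):
    pol, deps = arena
    return ({e: "-" if p == "+" else "+" for e, p in pol.items()}, set(deps))

def parallel(a, b):
    (pa, da), (pb, db) = a, b
    pol = {**{(1, e): p for e, p in pa.items()},
           **{(2, e): p for e, p in pb.items()}}
    deps = ({((1, x), (1, y)) for x, y in da}
            | {((2, x), (2, y)) for x, y in db})
    return (pol, deps)

# The arenas for com and bool, as described above.
com = ({"run": "-", "done": "+"}, {("run", "done")})
boolean = ({"q": "-", "tt": "+", "ff": "+"}, {("q", "tt"), ("q", "ff")})

# com^⊥ ∥ bool: polarities are flipped in the left component only.
both = parallel(dual(com), boolean)
assert both[0][(1, "run")] == "+" and both[0][(2, "q")] == "-"
```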

\(\mathcal {R}\)*-Augmentations.* As hinted before, \(\mathcal {R}\)-strategies will be collections of partially ordered plays with resource annotations in \(\mathcal {R}\), called \(\mathcal {R}\)*-augmentations*.

### Definition 4

An **augmentation** [5] on arena *A* is a finite partial order \(\mathbbm {q}= (|\mathbbm {q}|, \le _\mathbbm {q})\) such that \(\mathscr {C}(\mathbbm {q}) \subseteq \mathscr {C}(A)\) (concerning configurations, augmentations are considered as event structures with empty conflict), which is **courteous**, in the sense that for every immediate causal link from \(a_1\) to \(a_2\) in \(\mathbbm {q}\), if \({{\,\mathrm{pol}\,}}_A(a_1) = +\) or \({{\,\mathrm{pol}\,}}_A(a_2) = -\), then the link is already an immediate causal link in *A*.

An \(\mathcal {R}\)**-augmentation** also has (with \([a]_\mathbbm {q}^- = \{a'\le _\mathbbm {q}a\mid {{\,\mathrm{pol}\,}}_A(a') = -\}\))

such that if \({{\,\mathrm{pol}\,}}_A(a) = -\), then \(\lambda _\mathbbm {q}(a)(\rho ) = \rho _a\), the projection on *a* of \(\rho \in \mathcal {R}^{[a]_\mathbbm {q}^-}\), and for all \(a\in |\mathbbm {q}|\), \(\lambda _\mathbbm {q}(a)\) is monotone *w.r.t.* all of its variables.

We write \(\mathcal {R}\text {-}\mathrm {Aug}(A)\) for the set of \(\mathcal {R}\)-augmentations on *A*.

If \(\mathbbm {q}, \mathbbm {q}'\in \mathcal {R}\text {-}\mathrm {Aug}(A)\), \(\mathbbm {q}\) is **rigidly embedded** in \(\mathbbm {q}'\), or a **prefix** of \(\mathbbm {q}'\), written \(\mathbbm {q}\hookrightarrow \mathbbm {q}'\), if \(|\mathbbm {q}| \in \mathscr {C}(\mathbbm {q}')\), for all \(a, a' \in |\mathbbm {q}|\), \(a \le _\mathbbm {q}a'\) iff \(a \le _{\mathbbm {q}'} a'\), and for all \(a\in |\mathbbm {q}|\), \(\lambda _\mathbbm {q}(a) = \lambda _{\mathbbm {q}'}(a)\). The \(\mathcal {R}\)*-plays* of Sect. 2.3 are formalized as \(\mathcal {R}\)-augmentations: Fig. 8 presents an \(\mathcal {R}\)-augmentation on the arena of Fig. 9. The functional dependency in the annotation of positive events is represented by using the free variables introduced alongside negative events, however this is only a symbolic representation: the formal annotation is a function for each positive event. In the model of \(\mathcal {R}\)-IPA, we will only use the particular case where the annotations of positive events only depend on the annotations of their immediate predecessors.
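This evaluation of annotations over the causal order can be sketched executably. The play below is our own reconstruction of the \(\mathcal {R}\)-play discussed in Sect. 2.3 (event names, the dependency structure, and the environment costs are assumptions made for illustration); the final annotation reproduces \((1;\,2) \parallel (2;\,1) = 3\):

```python
# Sketch: each positive event carries a function of the costs of its
# immediate predecessors; negative events receive their cost externally.

def evaluate(events, preds, annot, env):
    """Compute the cost of every event of an R-play."""
    cost = {}
    def get(e):
        if e not in cost:
            if e in env:                     # negative: environment-supplied
                cost[e] = env[e]
            else:                            # positive: apply the annotation
                cost[e] = annot[e](*(get(p) for p in preds[e]))
        return cost[e]
    for e in events:
        get(e)
    return cost

events = ["q-", "run+", "done-", "q+", "tt-", "tt+"]
preds = {"run+": ["q-"], "q+": ["q-"], "tt+": ["done-", "tt-"]}
annot = {
    "run+": lambda x: x + 1,                 # wait(1) before calling x
    "q+":   lambda x: x + 2,                 # wait(2) before querying y
    "tt+":  lambda d, b: max(d + 2, b + 1),  # wait(2) resp. wait(1), joined
}
# Composed with cost-free values, each negative answer arrives at the cost
# of the call that triggered it.
cost = evaluate(events, preds, annot, {"q-": 0, "done-": 1, "tt-": 2})
assert cost["tt+"] == 3                      # (1;2) || (2;1) = 3
```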

\(\mathcal {R}\)*-Strategies.* We start by defining \(\mathcal {R}\)-strategies on arenas.

### Definition 5

An \(\mathcal {R}\)**-strategy** on *A* is a non-empty prefix-closed set of \(\mathcal {R}\)-augmentations \(\sigma \subseteq \mathcal {R}\text {-}\mathrm {Aug}(A)\) which is **receptive** [5]: for \(\mathbbm {q}\in \sigma \) such that \(|\mathbbm {q}|\) extends with \(a^-\in A\) (*i.e.* \({{\,\mathrm{pol}\,}}(a) = -\), \(a\not \in |\mathbbm {q}|\), and \(|\mathbbm {q}|\cup \{a\} \in \mathscr {C}(A)\)), there is \(\mathbbm {q}\hookrightarrow \mathbbm {q}' \in \sigma \) such that \(|\mathbbm {q}'| = |\mathbbm {q}|\cup \{a\}\).

If \(\sigma \) is an \(\mathcal {R}\)-strategy on arena *A*, we write \(\sigma : A\).

Observe that \(\mathcal {R}\)-strategies are fully described by their *maximal* augmentations, *i.e.* augmentations that are the prefix of no other augmentation in the strategy. Our interpretation of \(\mathsf {new}\) will use the \(\mathcal {R}\)-strategy \(\mathsf {cell}: \llbracket \mathbf {mem}_W \rrbracket \parallel \llbracket \mathbf {mem}_R \rrbracket \) (with arenas presented in Fig. 10), comprising all the \(\mathcal {R}\)-augmentations rigidly included in either of the two from Fig. 11. These two match the race when reading and writing simultaneously: if both the write and the read \(\mathbf {r}^-\) are played, the read may return \(\mathbf {tt}\) or \(\mathbf {ff}\), but it can only return \(\mathbf {tt}\) if the write has occurred.

### 3.2 Interpretation of \(\mathcal {R}\)-IPA

*Categorical Structure.* In order to define the interpretation of terms of \(\mathcal {R}\)-IPA as \(\mathcal {R}\)-strategies, a key step is to show how to form a *category* of \(\mathcal {R}\)-strategies. To do that we follow the standard idea of considering \(\mathcal {R}\)**-strategies from** *A* **to** *B* to be simply \(\mathcal {R}\)-strategies on the compound arena \(A^\perp \parallel B\). As usual, our first example of an \(\mathcal {R}\)-strategy between arenas is the *copycat* \(\mathcal {R}\)*-strategy*.

### Definition 6

Let *A* be an arena. We define a partial order on \(A^\perp \parallel A\):

where \((-)^+\) denotes the transitive closure of a relation. Note that if \(a \in A^\perp \parallel A\) is positive, it has a unique immediate predecessor \(\mathrm {pred}(a) \in A^\perp \parallel A\) for this order.

If \(x\parallel y \in \mathscr {C}(A^\perp \parallel A)\) is down-closed for this order (write \(\le _{x, y}\) for its restriction to \(x\parallel y\)), we define an \(\mathcal {R}\)-augmentation \(\mathbbm {q}_{x,y} = (x\parallel y, \le _{x, y}, \lambda _{x,y})\) where

with \(\lambda _{x,y}(a^-)(\rho ) = \rho _a\), and \(\lambda _{x,y}(a^+)(\rho ) = \rho _{\mathrm {pred}(a)}\). Then, the **copycat** \(\mathcal {R}\)-strategy on *A* comprises all \(\mathbbm {q}_{x, y}\) for \(x\parallel y \in \mathscr {C}(A^\perp \parallel A)\) down-closed for this order.

We first define *interactions* of \(\mathcal {R}\)-augmentations, extending [5].

### Definition 7

We say that \(\mathbbm {q}\in \mathcal {R}\text {-}\mathrm {Aug}(A^\perp \parallel B)\), and \(\mathbbm {p}\in \mathcal {R}\text {-}\mathrm {Aug}(B^\perp \parallel C)\) are **causally compatible** if \(|\mathbbm {q}| = x_A \parallel x_B\), \(|\mathbbm {p}| = x_B \parallel x_C\), and the preorder \(\le _{\mathbbm {p}\circledast \mathbbm {q}}\) on \(x_A \parallel x_B \parallel x_C\) defined as \(\left( \le _\mathbbm {q}\cup \le _\mathbbm {p}\right) ^+\) is a partial order.

Say \(e \in x_A \parallel x_B \parallel x_C\) is *negative* if it is negative in \(A^\perp \parallel C\). We define

as follows, by well-founded induction on \(<_{\mathbbm {p}\circledast \mathbbm {q}}\), for \(\rho \in \mathcal {R}^{[e]_{\mathbbm {p}\circledast \mathbbm {q}}^-}\):

The **interaction** \(\mathbbm {p}\circledast \mathbbm {q}\) of compatible \(\mathbbm {q}, \mathbbm {p}\) is \((x_A \parallel x_B \parallel x_C, \le _{\mathbbm {p}\circledast \mathbbm {q}}, \lambda _{\mathbbm {p}\circledast \mathbbm {q}})\).

If \(\sigma : A^\perp \parallel B\) and \(\tau : B^\perp \parallel C\), we write \(\tau \circledast \sigma \) for the set comprising all \(\mathbbm {p}\circledast \mathbbm {q}\) such that \(\mathbbm {p}\in \tau \) and \(\mathbbm {q}\in \sigma \) are causally compatible. For \(\mathbbm {q}\in \sigma \) and \(\mathbbm {p}\in \tau \) causally compatible with \(|\mathbbm {p}\circledast \mathbbm {q}| = x_A \parallel x_B \parallel x_C\), their **composition** is \(\mathbbm {p}\odot \mathbbm {q}= (x_A \parallel x_C, \le _{\mathbbm {p}\odot \mathbbm {q}}, \lambda _{\mathbbm {p}\odot \mathbbm {q}})\) where \(\le _{\mathbbm {p}\odot \mathbbm {q}}\) and \(\lambda _{\mathbbm {p}\odot \mathbbm {q}}\) are the restrictions of \(\le _{\mathbbm {p}\circledast \mathbbm {q}}\) and \(\lambda _{\mathbbm {p}\circledast \mathbbm {q}}\). Finally, the **composition** \(\tau \odot \sigma \) of \(\sigma : A^\perp \parallel B\) and \(\tau : B^\perp \parallel C\) is the set comprising all \(\mathbbm {p}\odot \mathbbm {q}\) for \(\mathbbm {q}\in \sigma \) and \(\mathbbm {p}\in \tau \) causally compatible.

In Fig. 12, we display an example composition between \(\mathbb {R}_+\)-augmentations – with also in gray the underlying interaction. The reader may check that the variant of the left \(\mathbb {R}_+\)-augmentation with replaced with is causally compatible with the other augmentation in Fig. 11, with composition .

We also have a tensor operation: on arenas, \(A\otimes B\) is simply a synonym for \(A\parallel B\). If \(\mathbbm {q}_1 \in \mathcal {R}\text {-}\mathrm {Aug}(A_1^\perp \parallel B_1)\) and \(\mathbbm {q}_2 \in \mathcal {R}\text {-}\mathrm {Aug}(A_2^\perp \parallel B_2)\), their **tensor product** \(\mathbbm {q}_1 \otimes \mathbbm {q}_2 \in \mathcal {R}\text {-}\mathrm {Aug}((A_1 \otimes A_2)^\perp \parallel (B_1\otimes B_2))\) is defined in the obvious way. This is lifted to \(\mathcal {R}\)-strategies element-wise. As is common when constructing basic categories of games and strategies, we have:

### Proposition 1

There is a compact closed category \(\mathcal {R}\text {-}\mathsf {Strat}\) having arenas as objects, and as morphisms, \(\mathcal {R}\)-strategies between them.

*Negative Arenas and* \(\mathcal {R}\)*-Strategies.* As a compact closed category, \(\mathcal {R}\text {-}\mathsf {Strat}\) is a model of the linear \(\lambda \)-calculus. However, we will (as usual for call-by-name) instead interpret \(\mathcal {R}\)-IPA in a sub-category of *negative* arenas and strategies, in which the empty arena 1 is terminal, providing the interpretation of weakening. We will stay very brief here, as this proceeds exactly as in [5].

A partial order with polarities is **negative** if all its minimal events are. This applies in particular to arenas and \(\mathcal {R}\)-augmentations. An \(\mathcal {R}\)-strategy is **negative** if all its \(\mathcal {R}\)-augmentations are. A negative \(\mathcal {R}\)-augmentation \(\mathbbm {q}\in \mathcal {R}\text {-}\mathrm {Aug}(A)\) is **well-threaded** if for all \(a \in |\mathbbm {q}|\), \([a]_\mathbbm {q}\) has exactly one minimal event; an \(\mathcal {R}\)-strategy is **well-threaded** iff all its \(\mathcal {R}\)-augmentations are. We have:

### Proposition 2

*Negative arenas* and *negative well-threaded* \(\mathcal {R}\)*-strategies* form a cartesian symmetric monoidal closed category \(\mathcal {R}\text {-}\mathsf {Strat}_-\), with 1 terminal.

We also write for morphisms in \(\mathcal {R}\text {-}\mathsf {Strat}_-\).

The closure of \(\mathcal {R}\text {-}\mathsf {Strat}\) does not transport to \(\mathcal {R}\text {-}\mathsf {Strat}_-\) as \(A^\perp \parallel B\) is never negative if *A* is non-empty, thus we replace it with a negative version. Here we describe only a restricted case of the general construction in [5], which is however sufficient for the types of \(\mathcal {R}\)-IPA. If *A*, *B* are negative arenas and *B* is **well-opened**, *i.e.* it has exactly one minimal event *b*, we form \(A\multimap B\) as having all components as in \(A^\perp \parallel B\), with additional dependencies \(\{((2, b), (1, a))\mid a\in A\}\).

Using the compact closed structure of \(\mathcal {R}\text {-}\mathsf {Strat}\) it is easy to build a copycat \(\mathcal {R}\)-strategy , and to associate to any some providing the monoidal closure. The cartesian product of *A* and *B* is with components the same as \(A\parallel B\), except for \((1, a) \mathrel {\#}(2, b)\) for all \(a\in A, b\in B\). We write for the projections, and for the pairing of , and .

*Interpretation of* \(\mathcal {R}\)*-IPA.* We set , \(\llbracket \mathbf {bool} \rrbracket \) as in the right-hand side of Fig. 9, \(\llbracket \mathbf {mem}_W \rrbracket \) and \(\llbracket \mathbf {mem}_R \rrbracket \) as in Fig. 10, and \(\llbracket A\multimap B \rrbracket = \llbracket A \rrbracket \multimap \llbracket B \rrbracket \) as expected. Contexts \(\varGamma = x_1 : A_1, \dots , x_n : A_n\) are interpreted as \(\llbracket \varGamma \rrbracket = \otimes _{1\le i \le n} \llbracket A_i \rrbracket \). Terms \(\varGamma \vdash M : A\) are interpreted as follows: \(\llbracket \bot \rrbracket \) is the diverging \(\mathcal {R}\)-strategy (no player move), \(\llbracket \mathbf {consume}(\alpha ) \rrbracket \) has only maximal \(\mathcal {R}\)-augmentation , \(\llbracket \mathbf {skip} \rrbracket \) is \(\llbracket \mathbf {consume}(0) \rrbracket \), and and are interpreted similarly with the adequate constant \(\mathcal {R}\)-strategies. The rest of the interpretation is given on the left, using the two obvious isos and ; the \(\mathcal {R}\)-strategy \(\mathsf {cell}\) introduced in Fig. 11; and additional \(\mathcal {R}\)-strategies with typical \(\mathcal {R}\)-augmentations in Fig. 13. We omit the (standard) clauses for the \(\lambda \)-calculus.

### 3.3 Soundness

Now that we have defined the game semantics of \(\mathcal {R}\)-IPA, we set to prove that it is sound with respect to the operational semantics given in Sect. 2.2.

We first introduce a useful notation. For any type *A*, \(\llbracket A \rrbracket \) has a unique minimal event; write for the arena without this minimal event. Likewise, if \(\varGamma \vdash M : A\), then by construction, \(\llbracket M \rrbracket : \llbracket \varGamma \rrbracket ^\perp \parallel \llbracket A \rrbracket \) is a negative \(\mathcal {R}\)-strategy whose augmentations all share the same minimal event \(\mathbf {q}^-_{\mathsf {x}}\) where \(\mathbf {q}^-\) is minimal in *A*. For \(\alpha \in \mathcal {R}\), write for \(\llbracket M \rrbracket \) without \(\mathbf {q}^-_{\mathsf {x}}\), with \(\mathsf {x}\) replaced by \(\alpha \). Then we have – one may think of as “*M* started with consumed resource \(\alpha \)”.

Naively, one may expect soundness to state that for all \(\vdash M : \mathbf {com}\), if \(M \Downarrow _\alpha \), then . However, whereas the resource annotations in the semantics are always as good as permitted by the causal constraints, derivations in the operational semantics may be sub-optimal. For instance, we may derive \(M \Downarrow _\alpha \) without using the parallel rule at all. So our statement is:

### Theorem 1

If \(\vdash M : \mathbf {com}\) with \(M\Downarrow _\alpha \), there is \(\beta \le _\mathcal {R}\alpha \) *s.t.* .

Our proof methodology is standard: we replay operational derivations as augmentations in the denotational semantics. Stating the invariant, which is then proved by induction on operational derivations, requires some machinery.

If *s* is a store, then write \(\mathsf {cell}_s : \llbracket \varOmega (s) \rrbracket \) for the memory strategy for store *s*. It is defined as \(\otimes _{\ell \in \mathrm {dom}(s)} \mathsf {cell}_{s(\ell )}\) where \(\mathsf {cell}_{\varepsilon } = \mathsf {cell}\), \(\mathsf {cell}_{R^\alpha }\) is the \(\mathcal {R}\)-strategy with only maximal \(\mathcal {R}\)-augmentation , \(\mathsf {cell}_{W^\alpha }\) has maximal \(\mathcal {R}\)-augmentation , and the empty \(\mathcal {R}\)-strategy for the other cases. If \(s \le _\mathsf {M}s'\), then \(s'\) can be obtained from *s* using memory operations and there is a matching \(\mathcal {R}\)-augmentation \(\mathbbm {q}_{s\rhd s'} \in \mathsf {cell}_s\) defined location-wise in the obvious way.

Now, if is an \(\mathcal {R}\)-strategy and \(\mathbbm {q}\in \sigma \) with moves only in \(\llbracket \varOmega (s) \rrbracket ^\perp \) is causally compatible with \(\mathbbm {q}_{s\rhd s'}\), we define the **residual** of \(\sigma \) after \(\mathbbm {q}\):

If \(\mathbbm {p}\in \sigma \) with \(\mathbbm {q}\hookrightarrow \mathbbm {p}\), we first write \(\mathbbm {p}' = \mathbbm {p}/(\mathbbm {q}\circledast \mathbbm {q}_{s\rhd s'})\) for the \(\mathcal {R}\)-augmentation with \(|\mathbbm {p}'| = |\mathbbm {p}|\setminus |\mathbbm {q}|\), and with causal order the restriction of that of \(\mathbbm {p}\). For \(e\in |\mathbbm {p}'|\), we set \(\lambda _{\mathbbm {p}'}(e)\) to be \(\lambda _\mathbbm {p}(e)\) with the arguments corresponding to negative events \(e'\) in \(\mathbbm {q}\) instantiated with \(\lambda _{\mathbbm {q}\circledast \mathbbm {q}_{s\rhd s'}}(e') \in \mathcal {R}\). With that, we set \(\sigma / (\mathbbm {q}\circledast \mathbbm {q}_{s\rhd s'})\) to comprise all \(\mathbbm {p}/ (\mathbbm {q}\circledast \mathbbm {q}_{s\rhd s'})\) for \(\mathbbm {p}\in \sigma \) with \(\mathbbm {q}\hookrightarrow \mathbbm {p}\).

Informally, this means that, considering some \(\mathbbm {q}\) which represents a scheduling of the memory operations turning *s* into \(s'\), we extract from \(\sigma \) its behavior after the execution of these memory operations. Finally, we generalize \(\le _\mathcal {R}\) to \(\mathcal {R}\)-augmentations by setting \(\mathbbm {q}\le _\mathcal {R}\mathbbm {q}'\) iff they have the same underlying partial order and for all \(e\in |\mathbbm {q}|\), \(\lambda _\mathbbm {q}(e) \le _\mathcal {R}\lambda _{\mathbbm {q}'}(e)\). With that, we can finally state:

### Lemma 1

Let \(\varOmega (s) \vdash M : A\), \(\langle M, s_1, \alpha \rangle \rightrightarrows \langle M', s'_1 \uplus s'_2, \alpha ' \rangle \) with \(\mathrm {dom}(s_1) = \mathrm {dom}(s'_1)\), and all resource annotations in \(s_1\) lower than \(\alpha \). Then, there is with events in \(\llbracket \varOmega (s) \rrbracket \), causally compatible with \(\mathbbm {q}_{s_1\rhd s'_1}\), and a function

preserving \(\hookrightarrow \) and *s.t.* for all , \(\varphi (\mathbbm {p}\circledast \mathbbm {q}_{s'_2}) \le _\mathcal {R}\mathbbm {p}\odot \mathbbm {q}_{s'_2}\).

This is proved by induction on the operational semantics. The critical cases are: assignment and dereferencing, which exploit the fact that if \(\alpha \le _\mathcal {R}\beta \), then \(\alpha \parallel \beta = \beta \) (which boils down to idempotence); and parallel composition, where compatibility of \(s'\) and \(s''\) entails that the corresponding augmentations of \(\mathsf {cell}_s\) are compatible.
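The idempotence used here can be checked concretely for the time bimonoid \(\mathbb{R}_+\), under the reading (our gloss of the construction) that sequential consumption adds delays while parallel consumption takes their maximum:

```python
# Time bimonoid R_+ (a sketch): sequential consumption adds,
# parallel consumption takes the max, and 0 is the unit for both.
seq = lambda a, b: a + b
par = lambda a, b: max(a, b)

# Fact used in the soundness proof: if α ≤ β then α ∥ β = β.
alpha, beta = 1.5, 4.0
assert alpha <= beta and par(alpha, beta) == beta

# The two compositions interact as expected on a small schedule:
# (wait 1; wait 2) in parallel with wait 4 completes at max(1+2, 4) = 4.
assert par(seq(1, 2), 4) == 4
print("idempotence and a small schedule check out")
```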

Lemma 1, instantiated with \(\langle M, \emptyset , 0 \rangle \rightrightarrows \langle \mathbf {skip}, s, \alpha \rangle \), yields soundness.

*Non-adequacy.* Our model is not adequate. To see why, consider:

Our model predicts that this may evaluate to in 3 s (see Fig. 12) and to in 4 s. However, the operational semantics can only evaluate it (both to and ) in 4 s. Intuitively, the reason is that the causal shapes implicit in the reduction \(\rightrightarrows \) are all series-parallel (generated with sequential and parallel composition), whereas the interaction in Fig. 12 is not.
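The program and timings above live in figures we cannot reproduce here, but the series-parallel gap can be illustrated with a hypothetical N-shaped dependency pattern of our own (toy events and durations, not the paper's example). The N-poset \(\{a < c,\ b < c,\ b < d\}\) is the classic non-series-parallel shape: any schedule built only from `;` (+) and `∥` (max) must add dependencies, and so can only over-approximate the true completion time.

```python
# Hypothetical N-shaped pattern (a < c, b < c, b < d) with toy durations.
dur = {"a": 3.0, "b": 1.0, "c": 1.0, "d": 3.0}

# Causal model: completion = longest delay-weighted path in the DAG.
t_c = max(dur["a"], dur["b"]) + dur["c"]   # c after both a and b: 4.0
t_d = dur["b"] + dur["d"]                  # d after b only:        4.0
causal_time = max(t_c, t_d)                # 4.0

# Two series-parallel schedules respecting the same dependencies:
sp1 = max(dur["a"], dur["b"]) + max(dur["c"], dur["d"])  # (a∥b);(c∥d) = 6.0
sp2 = max(dur["a"], dur["b"] + dur["d"]) + dur["c"]      # (a∥(b;d));c = 5.0

print(causal_time, sp1, sp2)   # 4.0 6.0 5.0
assert causal_time < min(sp1, sp2)
```

Every series-parallel bracketing forces either `c` to wait for `d` or `d` to wait for `a`, so the causal (longest-path) time of 4.0 is strictly better than any series-parallel schedule, mirroring the 3 s versus 4 s gap in the paper's example.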

Our causal semantic approach thus yields a finer resource analysis than the parallel operational semantics achieves. It is the operational semantics, rather than our model, that is to blame for non-adequacy: indeed, we now show that for \(\mathcal {R}= \mathbb {R}_+\) our model is adequate *w.r.t.* an operational semantics specialized to time.

## 4 Adequacy for Time

For time, we may refine the operational semantics by adding the following rule

using which the program above evaluates to in 3 s. It is clear that the soundness theorem of the previous section is retained.

We first focus on adequacy for first-order programs without abstraction or application, written \(\varOmega (s) \vdash _1 M : \mathbf {com}\). For any \(t_0\in \mathbb {R}_+\) there is \(\langle M, s, t_0 \rangle \rightrightarrows \langle M', s\uplus s', t_0 \rangle \) where and \(M'\) is in **canonical form**: it cannot be decomposed as \(C[\mathbf {skip};\,N]\), \(C[\mathbf {skip}\parallel N]\), \(C[N \parallel \mathbf {skip}]\), , , \(C[\mathbf {wait}(0)]\) or \(C[\mathbf {new}\,x,y\,\mathbf {in}\,N]\) for *C*[ ] an evaluation context.
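The administrative reductions toward canonical form can be sketched by a toy normalizer (ours, covering only the `skip`/`wait` cases listed above; the memory-related cases involve constructs elided from this extract):

```python
# Toy normalizer: C[skip; N] → C[N], C[skip ∥ N] → C[N],
# C[N ∥ skip] → C[N], C[wait(0)] → C[skip], applied bottom-up.
# Terms: "skip", ("wait", t), ("seq", M, N), ("par", M, N).

def canon(m):
    if isinstance(m, tuple):
        if m[0] == "wait":
            return "skip" if m[1] == 0 else m
        op, l, r = m[0], canon(m[1]), canon(m[2])
        if l == "skip":                  # skip; N  and  skip ∥ N
            return r
        if r == "skip" and op == "par":  # N ∥ skip (skip on the right of
            return l                     # ';' is deliberately not reduced)
        return (op, l, r)
    return m

term = ("seq", ("wait", 0), ("par", ("wait", 2), "skip"))
print(canon(term))   # ('wait', 2)
```

Note that only `skip` on the left of `;` is simplified, matching the listed decompositions.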

Consider \(\varOmega (s) \vdash _1 M : \mathbf {com}\), and with a top element \(\mathbf {done}^+_{t_{\mathsf {f}}}\) in , the **result** – *i.e.* \(\mathbbm {q}\) describes an interaction between and the memory leading to a successful evaluation to \(\mathbf {done}\) at time \(t_{\mathsf {f}}\). To prove adequacy, we must extract from it a derivation from \(\langle M, s, t_0 \rangle \) reaching \(\mathbf {skip}\) at time \(t_{\mathsf {f}}\).

Apart from the top \(\mathbf {done}^+_{t_{\mathsf {f}}}\), \(\mathbbm {q}\) only records memory operations, which we must replicate operationally in the adequate order. A **minimal operation with timing** *t* is either the top \(\mathbf {done}^+_t\) if it is the only event in \(\mathbbm {q}\), or a prefix corresponding to a memory operation (for instance, in the augmentations of Fig. 14, the only minimal operation has timing 2). If \(t=t_0\), this operation should be performed immediately. If \(t>t_0\), we need to spend time to trigger it – it is then critical to spend time on *all available* \(\mathbf {wait}\)*s in parallel*:

### Lemma 2

For \(\varOmega (s) \vdash _1 M : \mathbf {com}\) in canonical form, \(t_0 \in \mathbb {R}_+\), with result \(\mathbf {done}^+_{t_{\mathsf {f}}}\), if all minimal operations have timing strictly greater than \(t_0\),

for some \(t>0\) and \(M'\) only differing from *M* by having smaller annotations in \(\mathbf {wait}\) commands and at least one \(\mathbf {wait}\) changed to \(\mathbf {skip}\).

Furthermore, there is \(\mathbbm {q}\le _\mathcal {R}\mathbbm {q}'\) with with result \(\mathbf {done}^+_{t_{\mathsf {f}}}\).

### Proof

As *M* is in canonical form, all delays in minimal operations are due to \(\mathbf {wait}(t)\) commands in head position (*i.e.* such that \(M = C[\mathbf {wait}(t)]\)). Let \(t_{\mathrm {min}}\) be the minimal time appearing in those \(\mathbf {wait}(-)\) commands in head position. Using our new rule and parallel composition, we subtract \(t_{\mathrm {min}}\) from all such instances of \(\mathbf {wait}(-)\); we then transform the resulting occurrences of \(\mathbf {wait}(0)\) into \(\mathbf {skip}\).
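This proof step can be rendered concretely (our own sketch, with the head-position waits abstracted as a list of remaining times):

```python
# Lemma 2 step (sketch): subtract the least head-position wait time
# from all head-position waits in one parallel step, and promote any
# resulting wait(0) to skip.

def advance(head_waits):
    """head_waits: list of remaining wait-times in head position."""
    t_min = min(head_waits)
    rest = [t - t_min for t in head_waits]
    return t_min, ["skip" if t == 0 else ("wait", t) for t in rest]

elapsed, heads = advance([2.0, 3.0, 2.0])
print(elapsed, heads)   # 2.0 ['skip', ('wait', 1.0), 'skip']
```

As the lemma states, at least one `wait` becomes `skip`, and all others shrink, so the induction measure decreases.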

A representative example is displayed in Fig. 14. In the second step, though is available immediately, we must wait to get the right result.

With that we can prove the key lemma towards adequacy.

### Lemma 3

Let \(\varOmega (s) \vdash _1 M : \mathbf {com}\), \(t_0 \in \mathbb {R}_+\), and with result \(\mathbf {done}_{t_{\mathsf {f}}}^+\) in . Then, there is \(\langle M, s, t_0 \rangle \rightrightarrows \langle \mathbf {skip}, - , t_{\mathsf {f}} \rangle \).

### Proof

By induction on the size of *M*. First, we convert *M* to canonical form. If all minimal operations in have timing strictly greater than \(t_0\), we apply Lemma 2 and conclude by induction hypothesis.

Otherwise, at least one minimal operation has timing \(t_0\). If it is the result \(\mathbf {done}^+_{t_0}\) in , then *M* is the constant \(\mathbf {skip}\). Otherwise, it is a memory operation, say \(\mathbbm {p}\hookrightarrow \mathbbm {q}\) with ; write also \(s' = s[\ell \mapsto s(\ell ).R^{t_0}]\). It then follows by induction on *M* that for some *C*[ ], with

so \(\langle M, s, t_0 \rangle \rightrightarrows \langle C[b], s', t_0 \rangle \rightrightarrows \langle \mathbf {skip}, - , t_{\mathsf {f}} \rangle \) by induction hypothesis.

Adequacy follows for higher-order programs: in general, any \(\vdash M : \mathbf {com}\) can be \(\beta \)-reduced to a first-order term \(M'\), leaving the interpretation unchanged. By Church–Rosser, \(M'\) behaves like *M* operationally, up to weak bisimulation. Hence:

### Theorem 2

Let \(\vdash M : \mathbf {com}\). For any \(t\in \mathbb {R}_+\), if then \(M\Downarrow _t\).

## 5 Conclusion

It would be interesting to compare our model with structures used in timing analysis; for instance, [23] relies on a concurrent generalization of control flow graphs that is reminiscent of event structures. In future work, we also plan to investigate whether our annotated model construction could serve other purposes, such as symbolic execution or abstract interpretation.

## References

Abramsky, S., Jagadeesan, R., Malacaria, P.: Full abstraction for PCF. Inf. Comput. **163**(2), 409–470 (2000). https://doi.org/10.1006/inco.2000.2930

Abramsky, S., Melliès, P.: Concurrent games and full completeness. In: 14th Annual IEEE Symposium on Logic in Computer Science, Trento, Italy, 2–5 July 1999, pp. 431–442 (1999). https://doi.org/10.1109/LICS.1999.782638

Alcolei, A., Clairambault, P., Hyland, M., Winskel, G.: The true concurrency of Herbrand’s theorem. In: 27th EACSL Annual Conference on Computer Science Logic, CSL 2018, Birmingham, UK, 4–7 September 2018, pp. 5:1–5:22 (2018). https://doi.org/10.4230/LIPIcs.CSL.2018.5

Brunel, A., Gaboardi, M., Mazza, D., Zdancewic, S.: A core quantitative coeffect calculus. In: Shao, Z. (ed.) ESOP 2014. LNCS, vol. 8410, pp. 351–370. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-54833-8_19

Castellan, S., Clairambault, P.: Causality vs. interleavings in concurrent game semantics. In: Desharnais, J., Jagadeesan, R. (eds.) 27th International Conference on Concurrency Theory, CONCUR 2016, Québec City, Canada, 23–26 August 2016. LIPIcs, vol. 59, pp. 32:1–32:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2016). https://doi.org/10.4230/LIPIcs.CONCUR.2016.32

Castellan, S., Clairambault, P., Paquet, H., Winskel, G.: The concurrent game semantics of probabilistic PCF. In: Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2018, Oxford, UK, 09–12 July 2018, pp. 215–224 (2018). https://doi.org/10.1145/3209108.3209187

Castellan, S., Clairambault, P., Rideau, S., Winskel, G.: Games and strategies as event structures. Logical Methods Comput. Sci. **13**(3) (2017). https://doi.org/10.23638/LMCS-13(3:35)2017

Castellan, S., Clairambault, P., Winskel, G.: The parallel intensionally fully abstract games model of PCF. In: 30th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2015, Kyoto, Japan, 6–10 July 2015, pp. 232–243 (2015). https://doi.org/10.1109/LICS.2015.31

Castellan, S., Clairambault, P., Winskel, G.: Thin games with symmetry and concurrent Hyland-Ong games. Logical Methods Comput. Sci. (to appear, 2019)

Clairambault, P., de Visme, M., Winskel, G.: Game semantics for quantum programming. PACMPL **3**(POPL), 32:1–32:29 (2019). https://doi.org/10.1145/3290345

Ehrhard, T.: The Scott model of linear logic is the extensional collapse of its relational model. Theor. Comput. Sci. **424**, 20–45 (2012). https://doi.org/10.1016/j.tcs.2011.11.027

Faggian, C., Piccolo, M.: Partial orders, event structures and linear strategies. In: Curien, P.-L. (ed.) TLCA 2009. LNCS, vol. 5608, pp. 95–111. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02273-9_9

Ghica, D.R.: Slot games: a quantitative model of computation. In: Proceedings of the 32nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2005, Long Beach, California, USA, 12–14 January 2005, pp. 85–97 (2005). https://doi.org/10.1145/1040305.1040313

Ghica, D.R., Murawski, A.S.: Angelic semantics of fine-grained concurrency. Ann. Pure Appl. Logic **151**(2–3), 89–114 (2008). https://doi.org/10.1016/j.apal.2007.10.005

Ghica, D.R., Smith, A.I.: Bounded linear types in a resource semiring. In: Shao, Z. (ed.) ESOP 2014. LNCS, vol. 8410, pp. 331–350. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-54833-8_18

Hoare, T., Möller, B., Struth, G., Wehrman, I.: Concurrent Kleene algebra and its foundations. J. Log. Algebr. Program. **80**(6), 266–296 (2011). https://doi.org/10.1016/j.jlap.2011.04.005

Hyland, J.M.E., Ong, C.L.: On full abstraction for PCF: I, II, and III. Inf. Comput. **163**(2), 285–408 (2000). https://doi.org/10.1006/inco.2000.2917

Laird, J., Manzonetto, G., McCusker, G., Pagani, M.: Weighted relational models of typed lambda-calculi. In: 28th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), New Orleans, USA, Proceedings, pp. 301–310 (2013)

Laurent, O.: Game semantics for first-order logic. Logical Methods Comput. Sci. **6**(4) (2010). https://doi.org/10.2168/LMCS-6(4:3)2010

Melliès, P.: Asynchronous games 4: a fully complete model of propositional linear logic. In: 20th IEEE Symposium on Logic in Computer Science (LICS 2005), Chicago, IL, USA, 26–29 June 2005, Proceedings, pp. 386–395 (2005). https://doi.org/10.1109/LICS.2005.6

Melliès, P.: Game semantics in string diagrams. In: Proceedings of the 27th Annual IEEE Symposium on Logic in Computer Science, LICS 2012, Dubrovnik, Croatia, 25–28 June 2012, pp. 481–490 (2012). https://doi.org/10.1109/LICS.2012.58

Melliès, P.-A., Mimram, S.: Asynchronous games: innocence without alternation. In: Caires, L., Vasconcelos, V.T. (eds.) CONCUR 2007. LNCS, vol. 4703, pp. 395–411. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-74407-8_27

Mittermayr, R., Blieberger, J.: Timing analysis of concurrent programs. In: Vardanega, T. (ed.) 12th International Workshop on Worst-Case Execution Time Analysis, WCET 2012, Pisa, Italy, 10 July 2012. OASICS, vol. 23, pp. 59–68. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2012). https://doi.org/10.4230/OASIcs.WCET.2012.59

Plotkin, G.D.: Post-graduate lecture notes in advanced domain theory (incorporating the “Pisa notes”). Department of Computer Science, University of Edinburgh (1981)

Rideau, S., Winskel, G.: Concurrent strategies. In: Proceedings of the 26th Annual IEEE Symposium on Logic in Computer Science, LICS 2011, Toronto, Ontario, Canada, 21–24 June 2011, pp. 409–418 (2011). https://doi.org/10.1109/LICS.2011.13

Sands, D.: Operational theories of improvement in functional languages (extended abstract). In: Heldal, R., Holst, C.K., Wadler, P. (eds.) Functional Programming, Glasgow 1991, pp. 298–311. Springer, London (1991). https://doi.org/10.1007/978-1-4471-3196-0_24

Winskel, G.: Event structures. In: Brauer, W., Reisig, W., Rozenberg, G. (eds.) ACPN 1986. LNCS, vol. 255, pp. 325–392. Springer, Heidelberg (1987). https://doi.org/10.1007/3-540-17906-2_31


## Rights and permissions

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## Copyright information

© 2019 The Author(s)

## About this paper

### Cite this paper

Alcolei, A., Clairambault, P., Laurent, O. (2019). Resource-Tracking Concurrent Games. In: Bojańczyk, M., Simpson, A. (eds) Foundations of Software Science and Computation Structures. FoSSaCS 2019. Lecture Notes in Computer Science(), vol 11425. Springer, Cham. https://doi.org/10.1007/978-3-030-17127-8_2


Print ISBN: 978-3-030-17126-1

Online ISBN: 978-3-030-17127-8
