1 Introduction

Proofs up to commuting conversions. In the sequent calculus of Linear Logic, the order between rules need not always matter: allowed reorderings are expressed by commuting conversions. These conversions are necessary for confluence of cut-elimination, as they mitigate the sequentiality of the sequent calculus. The real proof object is often seen as an equivalence class of proofs modulo commuting conversions. The problem of providing a canonical representation of proofs up to those commuting conversions is as old as Linear Logic itself, and has proved challenging. The traditional solution interprets a proof by a graphical representation called a proof net and dates back to Girard [17]. Girard’s solution is only satisfactory in the multiplicative-exponential fragment of Linear Logic. For additives, a well-known solution is due to Hughes and van Glabbeek [22], where proofs are reduced to their set of axiom linkings. However, the correctness criterion relies on the difficult toggling condition.

Proof nets tend to be based on specific representations such as graphs or sets of linkings. Denotational semantics has not managed to provide a semantic counterpart to proof nets, which would be a model where every element is the interpretation of a proof (full completeness) and whose equational theory coincides with commuting conversions (injectivity). We believe this is because denotational semantics views conversions as extensional principles, hence models proofs with extensional objects (relations, functions) too far from the syntax.

Fig. 1. Examples of causal invariants

(figure: the two equivalent proofs of the sequent discussed below)

Conversions essentially state that the order between rules applied to different premises does not matter, as evidenced in the two equivalent proofs of the sequent depicted on the right. These two proofs are equal in extensional models of Linear Logic because they have the same extensional behaviour. Unfortunately, characterising the image of the interpretation proved to be a difficult task in extensional models. The first fully complete models used game semantics, and are due to Abramsky and Melliès (MALL) [1] and Melliès (Full LL) [24]. However, their models use an extensional quotient on strategies to satisfy the conversions, blurring the concrete nature of strategies.

The true concurrency of conversions. Recent work [5] highlights an interpretation of Linear Logic as communicating processes. Rules become actions whose polarity (input or output) is tied to the polarity of the connective (negative or positive), and cut-elimination becomes communication. In this interpretation, each assumption in the context is assigned a channel on which the proof communicates. Interestingly, commuting conversions can be read as asynchronous permutations. For instance, the conversion mentioned above becomes the equation in the syntax of Wadler [27]:

$$(1) \quad u[\mathtt{inl}].\, v[\mathtt{inl}].\, [u \leftrightarrow v] \equiv v[\mathtt{inl}].\, u[\mathtt{inl}].\, [u\leftrightarrow v] \ \triangleright \ u: X^\perp \oplus X^\perp , v: X \oplus X,$$

where \(u[\mathtt{inl}]\) corresponds to a \( \oplus _1\)-introduction rule on (the assumption corresponding to) u, and \([u \leftrightarrow v]\) is the counterpart to an axiom between the hypotheses corresponding to u and v. It then becomes natural to consider that the canonical object representing these two proofs should be a concurrent process issuing the two outputs in parallel. A notion of causality emerges from this interpretation, where a rule depends on a previous rule below in the tree when these two rules cannot be permuted using the commuting conversions. This leads us to causal models to make this dependency explicit. For instance, the two processes in (1) can be represented as the partial order depicted in Fig. 1a, where dependency between rules is marked with \( \rightarrowtail \).

In the presence of &, a derivation stands for several executions (slices), given by the different premises of a &-rule (whose process equivalent is \(u{.}{} \mathtt{case}\, (P, Q)\), representing pattern matching on an incoming message). The identity on \(X \oplus Y\), corresponding to the proof

is interpreted by the event structure depicted in Fig. 1b. Event structures [28] combine a partial order, representing causality, with a conflict relation representing when two events cannot belong to the same execution (here, the same slice). Conflict here is indicated with \( \sim \) and separates the slices. The &-introduction becomes two conflicting events.

Fig. 2. Representations of \( \mathtt {or}\)

Conjunctive and disjunctive causalities. Consider the process on the context \(u: (X \oplus X)^\perp , v: (Y \oplus Y)^\perp , w: (X \otimes Y) \oplus (X \otimes Y)\) implementing disjunction:

$$\begin{aligned} \mathtt {or} = u{.}{} \mathtt{case}\,\left( \begin{aligned}&v{.}{} \mathtt{case}\, (w[\mathtt{inl}].\, P, w[\mathtt{inl}].\, P),\\&v{.}{} \mathtt{case}\, (w[\mathtt{inl}].\, P, w[\mathtt{inr}].\, P) \end{aligned}\right) \ \text {where} \ P = w[x].\, ([u \leftrightarrow w] \mid [v \leftrightarrow x]). \end{aligned}$$

Cuts of \(\mathtt {or}\) against a proof starting with \(u[\mathtt{inl}]\) or \(v[\mathtt{inl}]\) answer on w after reduction:

$$(\nu u) (\mathtt {or}\mid u[\mathtt{inl}]) \rightarrow ^* w[\mathtt{inl}] . v{.}{} \mathtt{case}\, (P, P) \quad (\nu v) (\mathtt {or}\mid v[\mathtt{inl}]) \rightarrow ^* w[\mathtt{inl}] . u{.}{} \mathtt{case}\, (P, P)$$

where \((\nu u)(P\mid Q)\) is the process counterpart to logical cuts. This operational behaviour is related to the parallel or, evaluating its arguments in parallel and returning true as soon as one returns true. Due to this intensional behaviour, the interpretation of \(\mathtt {or}\) in prime event structures is nondeterministic (Fig. 2a), as causality in event structures is conjunctive (an event may only occur after all its predecessors have occurred). By moving to general event structures, however, we can make the disjunctive causality explicit and recover determinism (Fig. 2b).

Contributions and outline. Drawing inspiration from the interpretation of proofs in terms of processes, we build a fully complete and injective model of unit-free Multiplicative Additive Linear Logic with MIX (MALL\(^-\)), interpreting proofs as general event structures living in a submodel of the model introduced by [6]. Moreover, our model captures the dependency between rules, which makes sequentialisation a local operation, unlike in proof nets, and has a more uniform acyclicity condition than [22].

We first recall the syntax of MALL\(^-\) and its reading in terms of processes in Sect. 2. Then, in Sect. 3, we present a slight variation on the model of [6], where we call the (pre)strategies causal structures, by analogy with proof structures. Each proof tree can be seen as a (sequential) causal structure. However, the space of causal structures is too broad and there are many causal structures which do not correspond to any proof. A major obstacle to sequentialisation is the presence of deadlocks. In Sect. 4, we introduce a condition on causal structures, ensuring deadlock-free composition, inspired by the interaction between ⅋ and \( \otimes \) in Linear Logic. Acyclic causal structures may still explore the game only partially, contrary to proofs, which must explore it exhaustively; hence in Sect. 5, we introduce further conditions on causal structures, ensuring a strong sequentialisation theorem (Theorem 2): we call them causal nets. In Sect. 6, we define causal invariants as maximal causal nets. Every causal net embeds in a unique causal invariant; and a particular proof P embeds inside a unique causal invariant which forms its denotation \( \llbracket P \rrbracket \). Moreover, two proofs embed in the same causal invariant if and only if they are convertible (Theorem 4). Finally, we show how to equip causal invariants with the structure of a \( *\)-autonomous category with products and deduce that they form a fully complete model of MALL\(^-\) (Theorem 6) for which the interpretation is injective.

The proofs are available in the technical report [7].


2 MALL\(^-\) and Its Commuting Conversions

In this section, we introduce MALL\(^-\) formulas and proofs as well as the standard commuting conversions and cut elimination for this logic. As mentioned in the introduction, we use a process-like presentation of proofs following [27]. This highlights the communicating aspect of proofs which is an essential intuition for the model; and it offers a concise visualisation of proofs and conversions.

Formulas. We define the formulas of MALL\(^-\) by the grammar \(A, B \,{:}{:}{=}\, X \mid X^\perp \mid A \otimes B \mid A ⅋ B \mid A \oplus B \mid A \mathbin {\&} B\), where X and \(X^\perp \) are atomic formulas (or literals) belonging to a set \(\mathbb A\). Formulas come with the standard notion of duality \((\cdot )^\perp \) given by the De Morgan rules: \(\otimes \) is dual to ⅋, and \(\oplus \) to &. An environment is a partial mapping of names to formulas, instead of a multiset of formulas – names disambiguate which assumption a rule acts on.

Proofs as processes. We see proofs of MALL\(^-\) (with MIX) as typing derivations for a variant of the \(\pi \)-calculus [27]. The (untyped) syntax for the processes is as follows:

$$ \begin{array}{llrlr} P, Q &{} {:}{:}{=}\, u(v).\, P \mid u[v].\, (P \mid Q) &{} \text {(multiplicatives)} \\ &{}\mid \, u{.}{} \mathtt{case}\,(P, Q) \mid u[\mathtt{inl}].\, P \mid u[\mathtt{inr}].\, P &{}\text {(additives)}\\ &{} \mid {\,}\, [u\leftrightarrow v] \mid (\nu u) (P\mid Q) \mid \,\, (P\mid Q)&{}\text {(logical and mix)} \\ \end{array}$$

u(v).P denotes an input of v on channel u (used in ⅋-introduction), while \(u[v]{.}(P \mid Q)\) denotes the output of a fresh channel v along channel u (used in \( \otimes \)-introduction). The term \([u \leftrightarrow v]\) is a link, forwarding messages received on u to v and conversely; it corresponds to axioms. \((\nu u) (P\mid Q)\) represents a restriction of u in P and Q and corresponds to cuts; \(u{.}{} \mathtt{case}\,(P, Q)\) is an input branching representing &-introductions, which interacts with selections, either \(u[\mathtt{inl}].\, R\) or \(u[\mathtt{inr}].\, R\). In \((\nu u) (P\mid Q)\), u is bound in both P and Q; in \(u(v).\, P\), v is bound in P; and in \(u[v].\, (P\mid Q)\), v is only bound in Q.

We now define MALL\(^-\) proofs as typing derivations for processes. The inference rules, recalled in Fig. 3, are from [27]. The links (axioms) are restricted to literals – for composite types, one can use the usual \(\eta \)-expansion laws. There is a straightforward bijection between standard (\( \eta \)-expanded) proofs of MALL\(^-\) and typing derivations.

Fig. 3. Typing rules for MALL\(^-\) (above) and contexts (below)

Commutation rules and cut elimination. We now explain the valid commutation rules in our calculus. We consider contexts \(C\left[ []_1, \ldots , []_n\right] \) with several holes to accommodate &, which has two branches. Contexts are defined in Fig. 3, and are assigned a type \( \varGamma _1, \ldots , \varGamma _n \vdash \varDelta \): it intuitively means that if we plug proofs of \( \varGamma _i\) in the holes, we get back a proof of \( \varDelta \). We use the notation \(C[P_i]_i\) for \(C[P_1, \ldots , P_n]\) when \((P_i)\) is a family of processes. Commuting conversion is the smallest congruence \(\equiv \) satisfying all well-typed instances of the rule \(C[D[P_{i, j}]_j]_i \equiv D[C[P_{i, j}]_i]_j\) for C and D two contexts. For instance \(a[\mathtt{inl}].\, b{.}{} \mathtt{case}\,(P, Q) \equiv b{.}{} \mathtt{case}\,(a[\mathtt{inl}].\, P, a[\mathtt{inl}].\, Q)\). Figure 4 gives the reduction rules \(P \rightarrow Q\). The first four rules are the principal cut rules and describe the interaction of two dual terms, while the last one allows cuts to move inside contexts.
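As an illustration, the conversion scheme can be mechanised on a toy encoding of processes. The sketch below (the tuple encoding and function names are ours, not the paper's) applies the sample instance \(a[\mathtt{inl}].\, b{.}{} \mathtt{case}\,(P, Q) \equiv b{.}{} \mathtt{case}\,(a[\mathtt{inl}].\, P, a[\mathtt{inl}].\, Q)\) as a left-to-right rewrite:

```python
# Processes as nested tuples; one instance of the commuting-conversion
# scheme C[D[...]] == D[C[...]]: permuting prefixes on distinct channels.

def inl(u, p):
    """Selection prefix u[inl].p"""
    return ("inl", u, p)

def case(u, p, q):
    """Branching u.case(p, q)"""
    return ("case", u, p, q)

def permute(proc):
    """a[inl].b.case(P, Q)  ->  b.case(a[inl].P, a[inl].Q),
    provided the two prefixes act on distinct channels."""
    if proc[0] == "inl":
        _, a, cont = proc
        if cont[0] == "case" and cont[1] != a:
            _, b, p, q = cont
            return case(b, inl(a, p), inl(a, q))
    return proc

lhs = inl("a", case("b", "P", "Q"))
print(permute(lhs))
# -> ('case', 'b', ('inl', 'a', 'P'), ('inl', 'a', 'Q'))
```

Both sides of the rewrite are well-typed in the same environment, which is what makes the conversion a congruence instance rather than a directed computation step.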

3 Concurrent Games Based on General Event Structures

This section introduces a slight variation on the model of [6]. In Sect. 3.1, we define games as prime event structures with polarities, which are used to interpret formulas. We then introduce general event structures in Sect. 3.2, which are used to define causal structures.

Fig. 4. Cut elimination in MALL\(^-\)

3.1 Games as Prime Event Structures with Polarities

Definition of games. Prime event structures [28] (simply event structures in the rest of the paper) are a causal model of nondeterministic and concurrent computation. We use here prime event structures with binary conflict. An event structure is a triple \((E, \le _E , \#_E)\) where \((E, \le _E)\) is a partial order and \(\#_E\) is an irreflexive symmetric relation (representing conflict) satisfying: (1) if \(e \in E\), then \([e] := \{ e' \in E \mid e' \le _E e \}\) is finite; and (2) if \(e \,\#_E \, e'\) and \(e \le _E e''\) then \(e'' \,\#_E\, e'\). We often omit the E subscripts when clear from the context.

A configuration of E is a downclosed subset of E which does not contain two conflicting events. We write \( \mathscr {C} (E)\) for the set of finite configurations of E. For any \(e \in E\), [e] is a configuration, and so is \([e) := [e] \setminus \{e\}\). We write \( \rightarrowtail \) for the immediate causal relation of E, defined as \(e < e'\) with no event in between. Similarly, a conflict \(e \,\#\, e'\) is minimal, denoted \(e \sim e'\), when \([e] \cup [e')\) and \([e) \cup [e']\) are configurations. When drawing event structures, only \( \rightarrowtail \) and \( \sim \) are represented. We write \(\max (E)\) for the set of maximal events of E for \( \le _E\). An event e is maximal in x when it has no successor for \( \le _E\) in x. We write \(\max _E\, x\) for the maximal events of a configuration \(x \in \mathscr {C} (E)\).
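To make the definition concrete, here is a small executable sketch (our own encoding, not part of the paper): an event structure is given by its events, a map of immediate causes, and a set of binary conflicts, from which finite configurations are computed by brute force.

```python
from itertools import chain, combinations

class EventStructure:
    """Prime event structure with binary conflict: events, immediate
    causes (pred), and an irreflexive symmetric conflict relation."""
    def __init__(self, events, pred, conflict):
        self.events = set(events)
        self.pred = {e: set(pred.get(e, ())) for e in events}
        self.conflict = {frozenset(c) for c in conflict}

    def down(self, e):
        """[e]: the finite causal history of e."""
        h, stack = {e}, [e]
        while stack:
            for p in self.pred[stack.pop()]:
                if p not in h:
                    h.add(p)
                    stack.append(p)
        return h

    def is_configuration(self, x):
        """Down-closed and conflict-free subset."""
        x = set(x)
        return (all(self.pred[e] <= x for e in x) and
                not any(frozenset((e, f)) in self.conflict
                        for e in x for f in x))

    def configurations(self):
        """All finite configurations (exponential; tiny examples only)."""
        subsets = chain.from_iterable(combinations(self.events, k)
                                      for k in range(len(self.events) + 1))
        return [set(s) for s in subsets if self.is_configuration(s)]

# Fig. 1b, roughly: a cell with two conflicting events inl and inr.
E = EventStructure({"inl", "inr"}, {}, [("inl", "inr")])
print(len(E.configurations()))  # -> 3
```

On the event structure of Fig. 1b the three configurations are \(\varvec{\emptyset }\), {inl} and {inr}: the conflict separates the two slices.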

An event structure E is confusion-free when (1) for all \(e \sim e'\) we have \([e) = [e')\), and (2) if \(e \sim e'\) and \(e' \sim e''\) then \(e = e''\) or \(e \sim e''\). As a result, the reflexive closure of \( \sim \) is an equivalence relation whose equivalence classes \(\mathfrak a\) are called cells.

Definition 1

A game is a confusion-free event structure A along with an assignment \( pol :A \rightarrow \{-, +\}\) such that cells contain events of the same polarity, and a function \( atom \) mapping every maximal event of A to an atom in \(\mathbb A\). Events with polarity − (resp. \(+\)) are negative (resp. positive).

Events of a game are usually called moves. The restriction imposes branching to be polarised (i.e. belonging to a player). A game is rooted when any two minimal events are in conflict. Single types are interpreted by rooted games, while contexts are interpreted by arbitrary games. When introducing moves of a game, we will indicate their polarity in exponent, e.g. “let \(a^+ \in A\)” stands for assuming a positive move of A.

Interpretation of formulas. To interpret formulas, we make use of standard constructions on prime event structures. The event structure \(a \cdot E\) is E prefixed with a, i.e. \(E \cup \{a\}\) where all events of E depend on a. The parallel composition of E and \(E'\) represents parallel executions of E and \(E'\) without interference:

Definition 2

The parallel composition of event structures \(A_0\) and \(A_1\) is the event structure \(A_0 \parallel A_1 = (\{0 \} \times A_0\, \cup \, \{1\} \times A_1, \le _{A_0 \parallel A_1}, \#_{A_0 \parallel A_1})\) with \((i, a) \le _{A_0 \parallel A_1} (j, a')\) iff \(i = j\) and \(a \le _{A_i} a'\); and \((i, a) \,\#_{A_0 \parallel A_1} \, (j, a')\) when \(i = j\) and \(a\,\#_{A_j}\, a'\).

The sum \(E+F\) of two event structures is the nondeterministic analogue of parallel composition.

Definition 3

The sum \(A_0 + A_1\) of the two event structures \(A_0\) and \(A_1\) has the same partial order as \(A_0 \parallel A_1\), and conflict relation \((i, a) \,\#_{A_0 + A_1}\, (j, a')\) iff \(i \ne j\) or \(i = j\) and \(a\,\#_{A_j}\, a'\).
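Definitions 2 and 3 can be sketched on a finite representation of event structures as tuples (events, pred, conflict), with configurations computed by brute force (the encoding is ours):

```python
from itertools import chain, combinations

def configs(events, pred, conflict):
    """Down-closed, conflict-free subsets."""
    subsets = chain.from_iterable(combinations(events, k)
                                  for k in range(len(events) + 1))
    return [set(s) for s in subsets
            if all(pred.get(e, set()) <= set(s) for e in s)
            and not any({e, f} in conflict for e in s for f in s)]

def parallel(A0, A1):
    """Definition 2: disjoint union, the two sides do not interact."""
    tag = lambda i, A: (
        {(i, a) for a in A[0]},
        {(i, a): {(i, p) for p in A[1].get(a, set())} for a in A[0]},
        [{(i, a), (i, b)} for c in A[2] for a, b in [tuple(c)]])
    e0, p0, c0 = tag(0, A0)
    e1, p1, c1 = tag(1, A1)
    return (e0 | e1, {**p0, **p1}, c0 + c1)

def ssum(A0, A1):
    """Definition 3: same order as ||, plus conflict across the sides."""
    e, p, c = parallel(A0, A1)
    cross = [{(0, a), (1, b)} for a in A0[0] for b in A1[0]]
    return (e, p, c + cross)

X = ({"x"}, {}, [])
Y = ({"y"}, {}, [])
print(len(configs(*parallel(X, Y))), len(configs(*ssum(X, Y))))  # -> 4 3
```

The counts make the difference visible: in the parallel composition both singletons and the pair are configurations, while in the sum the cross-conflict forbids the pair, leaving only \(\varvec{\emptyset }\) and the two singletons.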

Prefixing, parallel composition and sum of event structures extend to games. The dual of a game A, obtained by reversing the polarity labelling, is written \(A^\perp \). Given \(x \in \mathscr {C} (A)\), we define A / x (“A after x”) as the subgame of A comprising the events \(a \in A \setminus x\) not in conflict with events in x.

Interpretation of formulas. The interpretation of the atom X is the game with a single positive event, simply written X, with \( atom (X) = X\); the interpretation of \(X^\perp \) is the dual game with a single negative event, written simply \(X^\perp \) in diagrams. For composite formulas, we let (where send, inl and inr are simply labels):

$$\llbracket A \otimes B \rrbracket = \mathtt {send}^+ \cdot (\llbracket A \rrbracket \parallel \llbracket B \rrbracket ) \qquad \llbracket A ⅋ B \rrbracket = \mathtt {send}^- \cdot (\llbracket A \rrbracket \parallel \llbracket B \rrbracket )$$

$$\llbracket A \oplus B \rrbracket = \mathtt {inl}^+ \cdot \llbracket A \rrbracket + \mathtt {inr}^+ \cdot \llbracket B \rrbracket \qquad \llbracket A \mathbin {\&} B \rrbracket = \mathtt {inl}^- \cdot \llbracket A \rrbracket + \mathtt {inr}^- \cdot \llbracket B \rrbracket $$

Parallel composition is used to interpret contexts: \( \llbracket u_1 : A_1, \ldots , u_n : A_n \rrbracket = \llbracket A_1 \rrbracket \parallel \ldots \parallel \llbracket A_n \rrbracket \). The interpretation commutes with duality: \( \llbracket A^\perp \rrbracket = \llbracket A \rrbracket ^\perp \).

In diagrams, we write moves of a context following the syntactic convention: for instance \(u[\mathtt{inl}]\) denotes the minimal inl move of the u component. For tensors and pars, we use the notation u[v] and u(v) to make explicit the variables we use in the rest of the diagram, instead of \(\texttt {send}^+\) and \(\texttt {send}^-\) respectively. For atoms, we use u : X and \(u: X^\perp \).

3.2 Causal Structures as Deterministic General Event Structures

As we discussed in Sect. 1, prime event structures cannot express disjunctive causalities deterministically, hence fail to account for the determinism of LL. Our notion of causal structure is based on general event structures, which allow more complex causal patterns. We use a slight variation on the definition of deterministic general event structures given by [6], to ensure that composition is well-defined without further assumptions.

Instead of using the more concrete representation of general event structures in terms of a set of events and an enabling relation, we use the following formulation in terms of sets of configurations, which is more convenient for mathematical reasoning: being plain sets of configurations, causal structures can be reasoned about with simple set-theoretic arguments.

Definition 4

A causal structure (abbreviated as causal struct) on a game A is a subset \( \sigma \subseteq \mathscr {C} (A)\) containing \(\varvec{\emptyset }\) and satisfying the following conditions:

  • Coincidence-freeness If \(e, e' \in x \in \sigma \) then there exists \(y \in \sigma \) with \(y \subseteq x\) and \(y \cap \{e, e'\}\) is a singleton.

  • Determinism: for \(x, y \in \sigma \) such that \(x \cup y\) does not contain any minimal negative conflict, we have \(x \cup y \in \sigma \).

(diagram: events a, b and c, with c disjunctively caused by a or b)

Configurations of prime event structures satisfy a further axiom, stability, which ensures the absence of disjunctive causalities. When \( \sigma \) is a causal struct on A, we write \( \sigma : A\). We draw causal structs as regular event structures, using \( \rightarrowtail \) and \( \sim \). To indicate disjunctive causalities, we annotate joins with or. This convention is not expressive enough to draw all causal structs, but it suffices for the examples in this paper. As an example, on \(A = a \parallel b \parallel c\), the diagram on the right denotes the causal struct \( \sigma = \{x \in \mathscr {C} (A) \mid c \in x \Rightarrow x \cap \{a, b\} \ne \varvec{\emptyset } \}\).

A minimal event of \( \sigma : A\) is an event \(a \in A\) with \(\{a\} \in \sigma \). An event \(a \in x \in \sigma \) is maximal in x when \(x \setminus \{a\} \in \sigma \). A prime configuration of \(a \in A\) is a configuration \(x \in \sigma \) such that a is its unique maximal event. Because of disjunctive causalities, an event \(a \in A\) can have several distinct prime configurations in \( \sigma \) (unlike in event structures). In the previous example, since c can be caused by either a or b, it has two prime configurations: \(\{a, c\}\) and \(\{b, c\}\). We write \(\max \sigma \) for the set of maximal configurations of \( \sigma \), i.e. those configurations that cannot be extended further.
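A minimal sketch of Definition 4 on this running example (conflicts are ignored since the game \(a \parallel b \parallel c\) has none, so determinism degenerates to closure under union; the encoding is ours):

```python
from itertools import chain, combinations

def powerset(s):
    return [frozenset(t) for t in
            chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))]

# The causal struct on a || b || c where c waits for a OR b.
events = {"a", "b", "c"}
sigma = {x for x in powerset(events) if "c" not in x or (x & {"a", "b"})}

# Determinism (the game has no conflict): sigma is closed under unions.
assert all((x | y) in sigma for x in sigma for y in sigma)

def prime_configurations(e, sigma):
    """Configurations whose unique maximal event is e."""
    return [x for x in sigma
            if e in x and (x - {e}) in sigma
            and all((x - {f}) not in sigma for f in x - {e})]

# c has two prime configurations, one per disjunctive cause.
print(sorted(map(sorted, prime_configurations("c", sigma))))
# -> [['a', 'c'], ['b', 'c']]
```

The two prime configurations of c are exactly the two minimal ways of justifying it, which a prime event structure could only represent by duplicating c into two conflicting copies.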

Even though causality is less clear in general event structures than in prime event structures, we give here a notion of immediate causal dependence that will be central to define acyclic causal structs. Given a causal struct \( \sigma : A\) and \(x \in \sigma \), we define a relation \( \rightarrowtail _x\) on x as follows: \(a \rightarrowtail _x a'\) when there exists a prime configuration y of \(a'\) such that \(x \cup y \in \sigma \) and a is maximal in \(y \setminus \{a'\}\). This notion is compatible with the drawing above: we have \(a \rightarrowtail _{\varvec{\emptyset }} c\) and \(b \rightarrowtail _{\varvec{\emptyset }} c\), as c has two prime configurations \(\{a, c\}\) and \(\{b, c\}\). Causality needs to be contextual, since different slices can implement different causal patterns. Parallel composition and prefixing extend to causal structs:

$$\begin{aligned} \begin{aligned} \sigma \parallel \tau = \{ x \parallel y \in \mathscr {C} (A \parallel B) \mid (x, y) \in \sigma \times \tau \}\quad \qquad a \cdot \sigma = \{ x \in \mathscr {C} (a \cdot A) \mid x \cap A \in \sigma \}. \end{aligned} \end{aligned}$$

Categorical setting. Causal structs can be composed using the definitions of [6]. Consider \( \sigma : A^\perp \parallel B\) and \( \tau : B^\perp \parallel C\). A synchronised configuration is a configuration \(x \in \mathscr {C} (A \parallel B \parallel C)\) such that \(x \cap (A \parallel B) \in \sigma \) and \( x \cap (B \parallel C) \in \tau \). A synchronised configuration x is reachable when there exists a sequence (covering chain) of synchronised configurations \(x_0 = \varvec{\emptyset } \subseteq x_1 \subseteq \ldots \subseteq x_n = x\) such that \(x_{i+1}\setminus x_i\) is a singleton. The reachable configurations are used to define the interaction \( \tau \circledast \sigma \), and then after hiding, the composition \( \tau \odot \sigma \):

$$\tau \circledast \sigma = \{ x \text { synchronised and reachable} \} \qquad \tau \odot \sigma = \{ x \cap (A \parallel C) \mid x \in \tau \circledast \sigma \}.$$
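The interaction-then-hiding recipe above can be sketched executably; events are plain labels partitioned between the three games, and causal structs are sets of frozensets (the encoding is ours):

```python
def compose(sigma, tau, A, B, C):
    """sigma over A || B and tau over B || C, as sets of frozensets."""
    ok = lambda x: (frozenset(e for e in x if e in A | B) in sigma and
                    frozenset(e for e in x if e in B | C) in tau)
    # Reachable synchronised configurations: grow one event at a time.
    reached, frontier = {frozenset()}, [frozenset()]
    while frontier:
        x = frontier.pop()
        for e in (A | B | C) - x:
            y = x | {e}
            if y not in reached and ok(y):
                reached.add(y)
                frontier.append(y)
    # Hiding: project away the shared game B.
    return {frozenset(e for e in x if e in A | C) for x in reached}

# Tiny example: sigma forwards A:a to B:b; tau forwards B:b to C:c.
A, B, C = {"A:a"}, {"B:b"}, {"C:c"}
sigma = {frozenset(), frozenset({"A:a"}), frozenset({"A:a", "B:b"})}
tau = {frozenset(), frozenset({"B:b"}), frozenset({"B:b", "C:c"})}
print(sorted(map(sorted, compose(sigma, tau, A, B, C))))
# -> [[], ['A:a'], ['A:a', 'C:c']]
```

The composite forwards A:a all the way to C:c, with the intermediate traffic on B hidden, as expected of a cut between two forwarders.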

Unlike in [6], our determinism is strong enough for \( \tau \odot \sigma \) to be a causal struct.

Lemma 1

If \( \sigma : A^\perp \parallel B\) and \( \tau :B^\perp \parallel C\) are causal structs then \( \tau \odot \sigma \) is a causal struct.

Composition of causal structs will be used to interpret cuts between proofs of Linear Logic. In concurrent game semantics, composition has a natural identity, the asynchronous copycat [25], playing on the game \(A^\perp \parallel A\) and forwarding negative moves on one side to the positive occurrence on the other side. Following [6], we define \( c\!\!\!\!c\,_A = \{ x \parallel y \in \mathscr {C} (A^\perp \parallel A) \mid x \cap y \subseteq ^- x \text { and } x \cap y \subseteq ^- y \}\), where \(x \subseteq ^p y\) means \(x \subseteq y\) and \( pol (y \setminus x) \subseteq \{p\}\).

However, in general copycat is not an identity on all causal structs: only \( \sigma \subseteq \ c\!\!\!\!c\,_A \odot \sigma \) holds. Indeed, copycat represents an asynchronous buffer, and causal structs which expect messages to be transmitted synchronously may be affected by composition with copycat. We call asynchronous the causal structs that satisfy the equality. From [6], we know that asynchronous causal structs form a compact-closed category.

The syntactic tree. The syntactic tree of a derivation \(P \vdash \varGamma \) can be read as a causal struct \( Tr (P)\) on \( \llbracket \varGamma \rrbracket \), which will be the basis for our interpretation. It is defined by induction:

(figure: the inductive definition of \( Tr (P)\))

We use the same convention as in the diagrams above: for instance, u[v] means the initial \(\mathtt {send}\) move of the u component. An example of this construction is given in Fig. 5a. Note that it is not asynchronous.

4 Acyclicity of Causal Structures

The space of causal structs is unfortunately too broad to provide a notion of causal nets, due in particular to the presence of deadlocks during composition. As a first step towards defining causal nets, we introduce in this section a condition on causal structs inspired by the tensor rule in Linear Logic. In Sect. 4.1, we propose a notion of communication between actions, based on causality. In Sect. 4.2, we introduce a notion of acyclicity which is shown to be stable under composition and to ensure deadlock-free composition.

4.1 Communication in Causal Structures

The tensor rule of Linear Logic says that after a tensor u[v], the proof splits into two independent subproofs, one handling u and the other v. This syntactic condition ensures that there is no communication between u and v. More precisely, we want to prevent any dependence between subsequent actions on u and subsequent actions on v. Indeed, such a causal dependence could create a deadlock when facing a par rule u(v), which is allowed to put arbitrary dependences between such subsequent actions.

Communication in MLL. Let us start with the case of MLL, which corresponds to the case where games do not have conflicts. Consider the following three causal structs:

(figure: the causal structs \( \sigma _1\), \( \sigma _2\) and \( \sigma _3\))

The causal structs \( \sigma _1\) and \( \sigma _2\) play on the game , while \( \sigma _3\) plays on the game . The causal structs \( \sigma _2\) and \( \sigma _3\) are very close to proof nets, and it is easy to see that \( \sigma _2\) represents a correct proof net while \( \sigma _3\) does not. In particular, there exists a proof P such that \( Tr (P) \subseteq \sigma _2\), but there is no such proof Q for \( \sigma _3\). Clearly, \( \sigma _3\) should not be acyclic. But should \( \sigma _2\) be? After all, it is sequentialisable. However, in all sequentialisations of \( \sigma _2\), the par rule v(z) is applied before the tensor u[w], and this dependency is not reflected by \( \sigma _2\). Since our goal is exactly to compute these implicit dependencies, we will only consider \( \sigma _1\) to be acyclic, by using a stronger sequentialisation criterion:

Definition 5

A causal struct \( \sigma : A\) is strongly sequentialisable when for all \(x \in \sigma \), there exists a proof P with \(x \in Tr (P)\) and \( Tr (P) \subseteq \sigma \).

To understand the difference between \( \sigma _1\) and \( \sigma _2\), we need to look at causal chains. In both \( \sigma _1\) and \( \sigma _2\), we can go from \(u:X^\perp \) to \(w:Y^\perp \) by following immediate causal links in any direction, but observe that in \( \sigma _1\) they must all cross an event below u[w] (namely v(z) or u[w]). This prompts us to define a notion of communication outside a configuration x:

Definition 6

Given \( \sigma : A\) and \(x \in \sigma \), we say that \(a, a' \in A \setminus x\) communicate outside x (written \(a \leftrightsquigarrow _{x, \sigma } a'\)) when there exists a chain \(a, a_1, \ldots , a_n, a'\) where all the \(a_i \in A \setminus x\) and consecutive elements are related by the symmetric closure of \( \rightarrowtail _x\).

Communication in MALL. In the presence of additives, immediate causality is not the only vector of communication. Consider the following causal struct \( \sigma _4\), playing on a context where A is irrelevant:

(figure: the causal struct \( \sigma _4\))

This pattern is not strongly sequentialisable: the tensor u[w] must always go after the &-introduction on v, since we need this information to know whether v should go with u or with w when splitting the context. Yet, it is not possible to find a communication path from one side to the other by following purely causal links without crossing u[w]. There is however a path that uses both immediate causality and minimal conflict. This means that we should identify events in minimal conflict, since they represent the same rule (a &-introduction). Concretely, this means lifting the previous definition to the level of cells. Given a causal struct \( \sigma : A\) and \(x \in \sigma \), along with two cells \(\mathfrak a, \mathfrak a'\) of A / x, we define \(\mathfrak a \rightarrowtail _x \mathfrak a'\) when there exist \(a \in \mathfrak a\) and \(a' \in \mathfrak a'\) such that \(a \rightarrowtail _x a'\); and \(\mathfrak a \leftrightsquigarrow _{x, \sigma } \mathfrak a'\) when there exists a chain \(\mathfrak a, \mathfrak a_1, \ldots , \mathfrak a_n, \mathfrak a'\) of cells not intersecting x, with consecutive cells related by the symmetric closure of \( \rightarrowtail _x\). For instance, the two cells which are successors of the tensor u[w] in \( \sigma _4\) communicate outside the configuration \(\{u[w]\}\) by going through the cell \(\{v(\mathtt {inl}), v(\mathtt {inr})\}\).

4.2 Definition of Acyclicity on Causal Structures

Since games are trees, two events \(a, a'\) are either incomparable or have a meet \(a \wedge a'\). If \(a \wedge a'\) is defined and positive, we say that a and \(a'\) have positive meet: they are on two distinct branches of a tensor. If \(a \wedge a'\) is undefined, or defined and negative, we say that a and \(a'\) have negative meet. When the meet is undefined, a and \(a'\) are events of different components of the context; we consider the meet to be negative in this case, since components of a context are related by an implicit par.

These definitions are easily extended to cells. The meet \(\mathfrak a \wedge \mathfrak a'\) of two cells \(\mathfrak a\) and \(\mathfrak a'\) of A is the meet \(a \wedge a'\) for \(a \in \mathfrak a\) and \(a' \in \mathfrak a'\): by confusion-freeness, it does not matter which ones are chosen. Similarly, we say that \(\mathfrak a\) and \(\mathfrak a'\) have positive meet if \(\mathfrak a \wedge \mathfrak a'\) is defined and positive; and have negative meet otherwise. These definitions formalise the idea of “the two sides of a tensor”, and allow us to define acyclicity.

Definition 7

A causal struct \( \sigma : A\) is acyclic when for all \(x \in \sigma \), for any cells \(\mathfrak a, \mathfrak a'\) not intersecting x and with positive meet, if \(\mathfrak a \leftrightsquigarrow _{x, \sigma } \mathfrak a'\) then \(\mathfrak a \wedge \mathfrak a' \not \in x\).

This captures the desired intuition: if \(\mathfrak a\) and \(\mathfrak a'\) are on two sides of a tensor a (ie. have positive meet), and there is a communication path outside x relating them, then a must also be outside x (and implicitly, the communication path must be going through a).
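Definitions 6 and 7 can be sketched in the conflict-free (MLL) case, where cells are single events; for simplicity, the contextual causal edges are supplied directly rather than derived from \( \sigma \) (all simplifications are ours):

```python
def meet(a, b, parent):
    """The meet in the forest, or None when a and b share no tree."""
    anc = set()
    while a is not None:
        anc.add(a)
        a = parent.get(a)
    while b is not None:
        if b in anc:
            return b
        b = parent.get(b)
    return None

def communicate(a, b, x, edges):
    """A chain of causal edges, followed in either direction, relating
    a to b through events outside x."""
    sym = set(edges) | {(t, s) for s, t in edges}
    seen, stack = {a}, [a]
    while stack:
        u = stack.pop()
        if u == b:
            return True
        for s, t in sym:
            if s == u and t not in x and t not in seen:
                seen.add(t)
                stack.append(t)
    return False

def acyclic(configurations, events, edges, parent, pol):
    """Definition 7: two communicating events on distinct branches of a
    tensor force that tensor outside the configuration."""
    for x in configurations:
        for a in events - x:
            for b in events - x:
                m = meet(a, b, parent)
                if (m is not None and pol[m] == "+" and m in x
                        and communicate(a, b, x, edges)):
                    return False
    return True

# t is a tensor (positive) with branches u and w.
parent = {"u": "t", "w": "t"}
pol = {"t": "+", "u": "-", "w": "-"}
events, xs = {"t", "u", "w"}, [frozenset(), frozenset({"t"})]
print(acyclic(xs, events, {("u", "w")}, parent, pol),  # causal link across t
      acyclic(xs, events, set(), parent, pol))         # no such link
# -> False True
```

The first call fails exactly as in the discussion of \( \sigma _3\): a causal link between the two branches of a tensor, found outside \(\{t\}\), witnesses a cycle.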

Reasoning on the interaction of acyclic strategies proved to be challenging. We prove that acyclic strategies compose and that their interaction is deadlock-free, when composition is on a rooted game B. This crucial assumption arises from the fact that in Linear Logic, cuts are on formulas. It entails that for any \(b, b' \in B\), \(b \wedge b'\) is defined, hence must be positive either from the point of view of \( \sigma \) or of \( \tau \).

Theorem 1

For acyclic causal structs \( \sigma : A^\perp \parallel B\) and \( \tau : B^\perp \parallel C\), (1) their interaction is deadlock-free: \( \tau \circledast \sigma = ( \sigma \parallel C) \cap (A \parallel \tau )\); and (2) the causal struct \( \tau \odot \sigma \) is acyclic.

As a result, acyclic and asynchronous causal structs form a category. We believe this intermediate category is interesting in its own right, since it generalises the deadlock-freeness argument of Linear Logic without assuming other constraints coming from Linear Logic, such as linearity. In the next section, we study further restrictions on acyclic causal structs which guarantee strong sequentialisability.

5 Causal Nets and Sequentialisation

We are now ready to introduce causal nets. In Sect. 5.1, we give their definition by restricting acyclic causal structs, and in Sect. 5.2 we prove that causal nets are strongly sequentialisable.

5.1 Causal Nets: Totality and Well-Linking Causal Structs

To ensure that our causal structs are strongly sequentialisable, acyclicity is not enough. First, we need to require causal structs to respect the linearity discipline of Linear Logic:

Definition 8

A causal struct \( \sigma : A\) is total when (1) for \(x \in \sigma \), if x is maximal in \( \sigma \), then it is maximal in \( \mathscr {C} (A)\); and (2) for \(x \in \sigma \) and \(a^-\in A\setminus x\) such that \(x \cup \{a\} \in \sigma \), whenever \(a \sim a'\) we also have \(x \cup \{a'\} \in \sigma \).

The first condition forces a causal struct to play until there are no moves left to play, and the second forces a causal struct to be receptive to all Opponent choices, not just a subset.
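Definition 8 can be sketched as an executable check, assuming the game is presented by its finite configurations, polarities, and cells (the encoding is ours):

```python
def total(sigma, game_configs, pol, cells):
    """Definition 8 on finite data: sigma and game_configs are sets of
    frozensets, pol maps events to '+'/'-', cells partition the events."""
    def maximal(x, family):
        return not any(x < y for y in family)
    # (1) configurations maximal in sigma are maximal in the whole game
    cond1 = all(maximal(x, game_configs) for x in sigma if maximal(x, sigma))
    # (2) receptivity: if one negative event of a cell extends x,
    # every event of that cell must extend x too
    cond2 = all(x | {b} in sigma
                for x in sigma
                for cell in cells
                for a in cell - x
                if pol[a] == "-" and x | {a} in sigma
                for b in cell - x)
    return cond1 and cond2

# A single negative cell {l, r} (a case / &-introduction).
game = {frozenset(), frozenset({"l"}), frozenset({"r"})}
cells, pol = [{"l", "r"}], {"l": "-", "r": "-"}
receptive = {frozenset(), frozenset({"l"}), frozenset({"r"})}
partial = {frozenset(), frozenset({"l"})}  # ignores the second branch
print(total(receptive, game, pol, cells), total(partial, game, pol, cells))
# -> True False
```

The second causal struct fails receptivity: it accepts the inl branch of the Opponent cell but has no answer for inr.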

Our last condition constrains axiom links. A linking of a game A is a pair \((x, \ell )\) of a configuration \(x \in \max \mathscr {C} (A)\) and a bijection \(\ell : (\max _A x)^- \simeq (\max _A x)^+\) preserving the \( atom \) labelling.

Definition 9

A total causal struct \( \sigma : A\) is well-linking when for each \(x \in \max (\sigma ) \), there exists a linking \(\ell _x\) of x, such that if y is a prime configuration of \(\ell _x(e)\) in x, then \(\max (y \setminus \{\ell _x(e)\}) = \{e\}\).

This ensures that every positive atom has a unique predecessor which is a negative atom.

Definition 10

A causal net is an acyclic, total and well-linking causal struct.

A causal net \( \sigma : A\) induces a set of linkings of A, \(\textsf {link}( \sigma ) := \{ \ell _x \mid x \in \max \sigma \}\). The mapping \(\textsf {link}(\cdot )\) maps causal nets to the proof nets of [22].

5.2 Strong Sequentialisation of Causal Nets

Our proof of sequentialisation relies on an induction on causal nets. To this end, we provide an inductive deconstruction of parallel proofs. Consider a causal net \( \sigma : A\) and a minimal event \(a \in \sigma \) which is not an atom. We write A / a for \(A/\{a\}\). Observe that if , it is easy to see that there exists a context such that . Given a causal struct \( \sigma : A\), we define the causal struct \( \sigma /a = \{ x \in \mathscr {C} (A/a) \mid x \cup \{a\} \in \sigma \} : A/a\).
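On the finite set-of-configurations encoding used above, the residual \( \sigma /a\) can be computed directly from the configurations of \( \sigma \) containing a; this is a minimal sketch under that assumed encoding:

```python
def residual(sigma, a):
    """sigma/a = { x in C(A/a) | x ∪ {a} ∈ sigma }, computed from the
    configurations of sigma that contain the minimal event a."""
    return {frozenset(x - {a}) for x in sigma if a in x}
```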

Lemma 2

\( \sigma /a\) is a causal net on A / a.

When a is positive, we can further decompose \( \sigma /a\) into disjoint parts thanks to acyclicity. Write \(\mathfrak a_1, \ldots , \mathfrak a_n\) for the minimal cells of A / a and consider, for \(n \ge k > 0\), \(A_k = \{a' \in A/a \mid \text {cell}(a') \leftrightsquigarrow _{\{a\}, \sigma } \mathfrak a_k \}\). \(A_k\) contains the events of A / a which \( \sigma \) connects to the k-th successor of a. We also define the set \(A_0 = A/a \setminus \bigcup _{1 \le k \le n} A_k\) of events connected to no successor of a (this can happen with MIX).

Each subset inherits a game structure from A / a. By acyclicity of \( \sigma \), the \(A_k\) are pairwise disjoint, so \(A/a \cong A_0 \parallel \ldots \parallel A_n\). For \(0 \le k \le n\), define \( \sigma _k = \mathscr {C} (A_k) \cap \sigma /a\).

Lemma 3

\( \sigma _k\) is a causal net on \(A_k\) and we have \( \sigma /a = \sigma _0 \parallel \ldots \parallel \sigma _n\).
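The decomposition of Lemma 3 can be sketched on the finite encoding: given the partition \(A_0, \ldots , A_n\) of the events of A / a (which we assume precomputed from the communication relation), each \( \sigma _k\) collects the configurations lying inside \(A_k\), and their parallel composition recovers \( \sigma /a\). Names and encoding are illustrative assumptions:

```python
def components(sigma_res, parts):
    """sigma_k = C(A_k) ∩ sigma/a: the configurations of sigma/a whose
    events all lie in the part A_k.  `parts` is a list of frozensets,
    assumed to be the partition induced by the communication relation."""
    return [{x for x in sigma_res if x <= part} for part in parts]

def parallel(comps):
    """Parallel composition sigma_0 ∥ ... ∥ sigma_n: all unions of one
    configuration from each component."""
    out = {frozenset()}
    for comp in comps:
        out = {x | y for x in out for y in comp}
    return out
```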

This formalises the intuition that after a tensor, an acyclic causal net must be a parallel composition of proofs (following the syntactic shape of the tensor rule of Linear Logic). From this result, we show by induction that any causal net is strongly sequentialisable.

Theorem 2

If \( \sigma :A\) is a causal net, then \( \sigma \) is strongly sequentialisable.

We believe sequentialisation without MIX requires causal nets to be connected: two cells with negative meet always communicate outside any configuration they are absent from. We leave this direction for future work.

6 Causal Invariants and Completeness

Causal nets are naturally ordered by inclusion. When \( \sigma \subseteq \tau \), we can regard \( \tau \) as a less sequential implementation of \( \sigma \). Two causal nets which are upper-bounded by a common causal net should represent the same proof, with varying degrees of sequentiality. Causal nets which are maximal for inclusion (among causal nets) are hence the most parallel implementations of a given behaviour, and capture our intuition of causal invariants.

Definition 11

A causal invariant is a causal net \( \sigma : A\) maximal for inclusion.

6.1 Causal Invariants as Maximal Causal Nets

We start by characterising when two causal nets are upper-bounded for inclusion:

Proposition 1

Given two causal nets \( \sigma , \tau : A\), the following are equivalent:

  1. 1.

    there exists a causal net \( \upsilon : A\) such that \( \sigma \subseteq \upsilon \) and \( \tau \subseteq \upsilon \),

  2. 2.

    the set \( \sigma \vee \tau = \{ x \cup y \mid x \in \sigma , y \in \tau , x \cup y \in \mathscr {C} (A) \}\) is a causal net on A,

  3. 3.

    \(\textsf {link}(\sigma )= \textsf {link}(\tau )\).

In this case we write \( \sigma \uparrow \tau \) and \( \sigma \vee \tau \) is the least upper bound of \( \sigma \) and \( \tau \) for \(\subseteq \).
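Item 2 of Proposition 1 gives a direct way to compute the least upper bound on the finite encoding; the sketch below assumes the configurations of A are available as a set:

```python
def join(sigma, tau, configs_A):
    """sigma ∨ tau = { x ∪ y | x ∈ sigma, y ∈ tau, x ∪ y ∈ C(A) }
    (Proposition 1(2)); configs_A is the set of configurations of A."""
    return {x | y for x in sigma for y in tau if (x | y) in configs_A}
```

By Proposition 1, the result is a causal net exactly when \( \sigma \uparrow \tau \); otherwise the returned set need not be one.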

It is a direct consequence of Proposition 1 that any causal net \( \sigma \) is included in a unique causal invariant \( \sigma ^ \uparrow : A\), defined as: \( \sigma ^ \uparrow = \bigvee _{ \sigma \subseteq \tau } \tau \), where \( \tau \) ranges over causal nets.

Lemma 4

For \( \sigma , \tau : A\) causal nets, \( \sigma \uparrow \tau \) iff \( \sigma ^ \uparrow = \tau ^ \uparrow \). Moreover, if \( \sigma \) and \( \tau \) are causal invariants, \( \sigma \uparrow \tau \) if and only if \( \sigma = \tau \).

Fig. 5.
figure 5

Interpreting \(P = u(u').\, v(v').\, w[w'].\, ([u \leftrightarrow w] \mid ([w' \leftrightarrow v'] \mid [u' \leftrightarrow v]))\) in the context

The interpretation of a proof is simply defined as . Figure 5c illustrates the construction on a proof of MLL+mix. The interpretation features a disjunctive causality, as the tensor can be introduced as soon as one of the two pars has been.

Defining \(\textsf {link}(P) = \textsf {link}( Tr (P))\), we have from Lemma 4: \(\textsf {link}(P) = \textsf {link}(Q)\) if and only if . This implies that our model has the same equational theory as the proof nets of [22]. Such proof nets are already complete:

Theorem 3

([22]). For P, Q two proofs of \( \varGamma \), we have \(P \equiv Q\) iff \(\textsf {link}(P) = \textsf {link}(Q)\).

As a corollary, we get:

Theorem 4

For cut-free proofs P, Q we have \(P \equiv Q\) iff .

The technical report [7] also provides an inductive proof not using the result of [22]. A consequence of this result, together with strong sequentialisation, is an equality justifying our terminology of “causal completeness”: for instance, it implies that the minimal events of correspond exactly to the possible rules in P that can be pushed to the front using the commuting conversions.

6.2 The Category of Causal Invariants

So far we have focused on the static aspect of proofs. Can we integrate their dynamics as well? In this section, we show that causal invariants organise themselves into a category. First, we show that causal nets are stable under composition:

Lemma 5

If \( \sigma : A^\perp \parallel B\) and \( \tau : B^\perp \parallel C\) are causal nets, then so is \( \tau \odot \sigma \).

Note that totality requires acyclicity (and deadlock-freedom) to be stable under composition. However, causal invariants are not stable under composition: \( \tau \odot \sigma \) might not be maximal, even if \( \tau \) and \( \sigma \) are. Indeed, during the interaction, some branches of \( \tau \) will not be explored by \( \sigma \) and vice versa, which can lead to new allowed reorderings. However, we can always embed \( \tau \odot \sigma \) into \(( \tau \odot \sigma )^ \uparrow \):

Lemma 6

Rooted games and causal invariants form a category CInv, where the composition of \( \sigma : A^\perp \parallel B\) and \( \tau : B^\perp \parallel C\) is \(( \tau \odot \sigma )^ \uparrow \) and the identity on A is .

Note that the empty game is an object of \(\mathbf {CInv}\), as we need a monoidal unit.

Monoidal-closed structure. Given two games A and B we define \(A \otimes B\) as \(\mathtt {send}^+ \cdot (A \parallel B)\), and 1 as the empty game. There are obvious isomorphisms \(A \otimes 1 \cong A\) and \(A \otimes (B \otimes C) \cong (A \otimes B) \otimes C\) in \(\mathbf {CInv}\). We now show how to compute directly the functorial action of \( \otimes \), without resorting to \({}^ \uparrow \). Consider \( \sigma \in \mathbf {CInv}(A, B)\) and \( \tau \in \mathbf {CInv}(C, D)\). Given \(x \in \mathscr {C} ((A \otimes C)^\perp \parallel (B \otimes D))\), we define \(x \langle \sigma \rangle = x \cap (A^\perp \parallel B)\) and \(x \langle \tau \rangle = x \cap (C^\perp \parallel D)\). If \(x \langle \sigma \rangle \in \sigma \) and \(x \langle \tau \rangle \in \tau \), we say that x is connected when there exist cells \(\mathfrak a, \mathfrak b, \mathfrak c\) and \(\mathfrak d\) of A, B, C and D respectively such that \(\mathfrak a \leftrightsquigarrow _{x \langle \sigma \rangle , \sigma } \mathfrak c\) and \(\mathfrak b \leftrightsquigarrow _{x \langle \tau \rangle , \tau } \mathfrak d\). We define:

$$ \sigma \otimes \tau = \left\{ \begin{aligned}&x \in \mathscr {C} ((A \otimes C)^\perp \parallel (B \otimes D))\ \text {such that}: \\&\quad (1)\ x \langle \sigma \rangle \in \sigma \text { and } x \langle \tau \rangle \in \tau \\&\quad (2)\ \text {if }x \text { is connected and contains } \mathtt {send}^+, \text { then } \mathtt {send}^- \in x\ \end{aligned}\right\} $$

In (2), \(\mathtt {send}^-\) refers to the minimal move of \((A \otimes C)^\perp \) and \(\mathtt {send}^+\) to the one of \(B \otimes D\). (2) ensures that \( \sigma \otimes \tau \) is acyclic.
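The two conditions defining \( \sigma \otimes \tau \) are a filter on candidate configurations; the following sketch spells this out on the finite encoding, with the splitting function and the connectedness predicate passed in as stubs (both are assumptions of ours standing in for the restriction maps and the cell-communication test):

```python
def tensor(sigma, tau, configs, split, connected, send_pos, send_neg):
    """sigma ⊗ tau, computed directly.  Keep the configurations x whose
    restrictions x⟨sigma⟩, x⟨tau⟩ lie in sigma and tau (condition 1),
    and such that a connected x containing send+ also contains send-
    (condition 2, which ensures acyclicity).

    configs   -- candidate configurations (frozensets)
    split     -- function x -> (x⟨sigma⟩, x⟨tau⟩)
    connected -- predicate on x (the cell-communication test)
    """
    out = set()
    for x in configs:
        xs, xt = split(x)
        if xs not in sigma or xt not in tau:
            continue                      # condition (1) fails
        if connected(x) and send_pos in x and send_neg not in x:
            continue                      # condition (2) fails
        out.add(x)
    return out
```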

Lemma 7

The tensor product defines a symmetric monoidal structure on CInv.

Define , \(\bot = 1 = \varvec{\emptyset }\) and .

Lemma 8

We have a bijection between causal invariants on \(A \parallel B \parallel C\) and on . As a result, there is an adjunction \(A \otimes \_ \dashv A \multimap \_\).

Lemma 8 implies that \(\mathbf {CInv}((A \multimap \bot ) \multimap \bot ) \simeq \mathbf {CInv}(A)\), and \(\mathbf {CInv}\) is \(*\)-autonomous.

Cartesian products. Given two games A, B in \(\mathbf {CInv}\), we define their product . We show how to construct the pairing of two causal invariants concretely. Given \( \sigma \in \mathbf {CInv}(A, B)\) and \( \tau \in \mathbf {CInv}(A, C)\), we define the common behaviour of \( \sigma \) and \( \tau \) on A to be those \(x \in \mathscr {C} (A^\perp ) \cap \sigma \cap \tau \) such that for all \(\mathfrak a, \mathfrak a'\) outside of x with positive meet, \(\mathfrak a\leftrightsquigarrow _{x, \sigma } \mathfrak a'\) iff \(\mathfrak a\leftrightsquigarrow _{x, \tau } \mathfrak a'\). We write \( \sigma \cap _A \tau \) for the set of common behaviours of \( \sigma \) and \( \tau \) and define: \(\langle \sigma , \tau \rangle = (L^- \cdot \sigma ) \cup (R^- \cdot \tau ) \cup ( \sigma \cap _A \tau )\). The projections are defined using copycat: (and similarly for \( \pi _2\)).
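The pairing formula can be sketched on the finite encoding by representing the \(L^-\) and \(R^-\) prefixes as tag events added to each configuration; the tags and the precomputed set of common behaviours are illustrative assumptions:

```python
def pairing(sigma, tau, common):
    """⟨sigma, tau⟩ = (L⁻·sigma) ∪ (R⁻·tau) ∪ (sigma ∩_A tau).
    `common` is the precomputed set of common behaviours sigma ∩_A tau;
    prefixing by L⁻/R⁻ is encoded by adding a tag event 'L-' / 'R-'."""
    left = {x | {'L-'} for x in sigma}
    right = {x | {'R-'} for x in tau}
    return left | right | common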

Theorem 5

CInv has products. As it is also \( *\)-autonomous, it is a model of MALL.

It is easy to see that the interpretation of MALL\(^-\) in \(\mathbf {CInv}\) following the categorical structure is the same as , but it is computed compositionally, without resorting to the \(^\uparrow \) operator. We deduce that our interpretation is invariant under cut-elimination: if \(P \rightarrow Q\), then . Putting the pieces together, we obtain the final result.

Theorem 6

CInv is an injective and fully complete model of MALL\(^-\).

7 Extensions and Related Work

The model provides a representation of proofs which retains only the necessary sequentiality. We study the phenomenon in Linear Logic, but commuting conversions of additives arise in other languages, e.g. in functional languages with sums and products, where proof nets do not necessarily exist. Having an abstract representation of which reorderings are allowed could prove useful (reasoning on the possible commuting conversions in a language with sum types is notoriously difficult).

Extensions. Exponentials are difficult to add, as their conversions are not as canonical as those of MALL. Cyclic proofs [2] could be accommodated via recursive event structures.

Adding multiplicative units while keeping determinism is difficult, as their commuting conversions are subtle (e.g. proof equivalence for MLL is PSPACE-complete [18]) and exhibit apparent nondeterminism. For instance, the following proofs are convertible in MLL:

$$\small a().\, b[] \mid c[] \equiv a().\, (b[] \mid c[]) \equiv b[] \mid a().\, c[] \triangleright a: \bot , b:1, c:1 $$

where \(a().\, P\) is the process counterpart to introduction of \(\bot \) and a[] of 1. Intuitively, b[] and c[] can be performed at the start, but as soon as one is performed, the other has to wait for the input on a. This cannot be modelled inside deterministic general event structures, as it is only deterministic against an environment that will emit on b. In contrast, proofs of MALL\(^-\) remain deterministic even if their environment is not total.

We would also be interested in recasting multifocusing [9] in our setting by defining a class of focussed causal nets, where there is no concurrency between positive and negative events, and showing that sequentialisation always gives a focussed proof.

Related work. The first fully complete model of MALL\(^-\) is based on closure operators [1], later extended to full Linear Logic [24]. True concurrency is used to define innocence, on which the full completeness result rests. However, their model does not take advantage of concurrency to account for permutations, as strategies are sequential. This investigation has been extended to concurrent strategies by Mimram and Melliès [25, 26]. De Carvalho showed that the relational model is injective for MELL [11]. In another direction, [4] provides a fully complete model for MALL without game semantics, by using a glueing construction on the model of hypercoherences. [21] explores proof nets for a weaker theory of commuting conversions for MALL.

The idea of having intermediate representations between proof nets and proofs has been studied by Faggian and coauthors using l-nets [10, 13,14,15,16], leading to an analysis similar to ours: they define a space of causal nets as partial orders and compare different versions of proofs with varying degrees of parallelism. Our work recasts this idea using event structures and adds the notion of causal completeness: keeping jumps that cannot be undone by a permutation, which naturally leads to stepping outside partial orders, as well as the notion of full completeness: which causal nets can be strongly sequentialised?

The notion of dependency between logical rules has also been studied in [3] in the case of MLL. From a proof net R, they build a partial order which we believe is closely related to where P is a sequentialisation of R. Indeed, in the case of MLL without MIX, a partial order is enough to capture the dependency between rules. The work [12] shows that permutation rules of Linear Logic, understood as asynchronous optimisations on processes, are included in the observational equivalence. [19] studies mutual embeddings between polarised proof nets [23] and the control \(\pi \)-calculus [20]. In another direction, we have recently built a fully-abstract, concurrent game semantics model of the synchronous session \(\pi \)-calculus [8]. The difficulty there was to understand name passing and the synchrony of the \(\pi \)-calculus, which is the dual of our objective here: understanding the asynchrony behind the conversions of MALL\(^-\).