1 Introduction

The last 15 years have witnessed great strides in program verification [7, 27, 39, 43, 44, 46]. One major area of focus has been concurrent programs following Concurrent Separation Logic (CSL) [40]. The key rule of CSL is Parallel:

$$\begin{aligned} \frac{\{P_1\}~c_1~\{Q_1\} \quad \quad \{P_2\}~c_2~\{Q_2\}}{\{P_1 \star P_2\}~c_1 || c_2~\{Q_1 \star Q_2\}} \end{aligned}$$

In this rule, we write \(c_1 || c_2\) to indicate the parallel execution of commands \(c_1\) and \(c_2\). The separating conjunction \(\star \) indicates that the resources used by the threads are disjoint in some useful way, i.e. that there are no dangerous races. Many subsequent program logics [18, 20, 30, 31, 45] have introduced increasingly sophisticated notions of “resource disjointness” for the Parallel rule.

Fractional permissions (also called “shares”) are a relatively simple enhancement to separation logic’s original notion of disjointness [4]. Rather than owning a resource (e.g. a memory cell) entirely, a thread is permitted to own a part/fraction of that resource. The more of a resource a thread owns, the more actions it is permitted to take, a mapping called a policy. In this paper we will use the original policy of Bornat [4] to keep the examples straightforward: non-zero ownership of a memory cell permits reading while full ownership also permits writing. More modern logics allow for a variety of more flexible share policies [13, 28, 42], but our techniques still apply. Fractional permissions are less expressive than the “protocol-based” notions of disjointness used in program logics such as FCSL [38, 44], Iris [30], and TaDa [16], but are well-suited for common concurrent programming patterns such as read sharing and so have been incorporated into many program logics and verification tools [19, 26, 28, 31, 36, 41].

Since fractionals are simpler and more uniform than protocol-based logics, they are amenable to automation [26, 33]. However, previous techniques had difficulty with the inductive predicates common in SL proofs. We introduce predicate multiplication, a concise method for specifying the fractional sharing of complex predicates, writing \(\pi \cdot P\) to indicate that we own the \(\pi \)-share of the arbitrary predicate P, e.g. \(0.5 \cdot \mathsf {tree}(x)\) indicates that there is a tree rooted at x and that we own half of each of the nodes in the tree. If set up properly, predicate multiplication handles inductive predicates smoothly and is well-suited for automation because:

  • Section 3 it distributes with bientailments—e.g. \(\pi \cdot (P \wedge Q) \dashv \vdash (\pi \cdot P) \wedge (\pi \cdot Q)\)—enabling rewriting techniques and both forwards and backwards reasoning;

  • Section 4 it works smoothly with the inference process of biabduction [10]; and

  • Section 5 the side conditions required for bientailments and biabduction can be verified directly in the object logic, leveraging existing entailment checkers.

There has been significant work in recent years on tool support for protocol-based approaches [15, 19, 29, 30, 48], but these tools require significant user input and provide essentially no inference. Fractional permissions and protocol-based approaches are thus complementary: fractionals can handle large amounts of relatively simple concurrent code with minimal user guidance, while protocol-based approaches are useful for reasoning about the implementations of fine-grained concurrent data structures whose correctness argument is more sophisticated.

In addition to Sects. 3, 4 and 5, the rest of this paper is organized as follows.

  • Section 2 We give the technical background necessary for our work.

  • Section 6 We document ShareInfer [1], a tool that uses the logical tools developed in Sects. 3, 4 and 5 to infer frames and antiframes and check the necessary side conditions. We benchmark ShareInfer with 27 selected examples.

  • Section 7 We introduce a scaling separation algebra that allows us to construct predicate multiplication on an abstract structure in a compositional way. We show that such a model can be constructed from Dockins et al.’s tree shares [21]. The key technical proofs in Sects. 5 and 7 have been verified in Coq [1].

  • Section 8 We prove that there are no useful share models that simultaneously satisfy disjointness and two distributivity axioms. Consequently, at least one axiom has to be removed; we choose left distributivity. We also prove that the failure of two-sided distributivity forces a side condition on a key proof rule for predicate multiplication.

  • Section 9 We discuss related work before delivering our conclusion.

2 Technical Preliminaries

Share Models. An (additive) share model \((\mathcal {S},\oplus )\) is a partial commutative monoid with a bottom/empty element \(\mathcal {E}\) and top/full element \(\mathcal {F}\). On the rationals in [0, 1], \(\oplus \) is partial addition, \(\mathcal {E}\) is 0, and \(\mathcal {F}\) is 1. We also require the existence of complements \(\overline{\pi }\) satisfying \(\pi \,\oplus \, \overline{\pi } = \mathcal {F}\); in \(\mathbb {Q}\), \(\overline{\pi } \,{\mathop {=}\limits ^\mathrm{{def}}}\, 1 - \pi \).

Separation Logic. Our base separation logic has the following connectives:

$$\begin{aligned} P,Q,\text { etc.} ~~ {\mathop {=}\limits ^\mathrm{{def}}}~~ \langle F \rangle ~ | ~ P \wedge Q ~ | ~ P \vee Q ~ | ~ \lnot P ~ | ~ P \star Q ~ | ~ \forall x.P ~ | ~ \exists x. P ~ | ~\mu X.P ~ | ~ \,\,e_1 {\mathop {\mapsto }\limits ^{\pi }} e_2\,\, \end{aligned}$$

Pure facts F are put in angle brackets, e.g. \(\langle even (12) \rangle \). Pure facts force the empty heap, i.e. the usual separation logic \(\mathsf {emp}\) predicate is just a macro for \(\langle \top \rangle \). Our propositional fragment has (classical) conjunction \(\wedge \), disjunction \(\vee \), negation \(\lnot \), and the separating conjunction \(\star \). We have both universal \(\forall \) and existential \(\exists \) quantifiers, which can be impredicative if desired. To construct recursive predicates we have the usual Tarski least fixpoint \(\mu \). The fractional points-to \(\,\,e_1 {\mathop {\mapsto }\limits ^{\pi }} e_2\,\,\) means we own the \(\pi \)-fraction of the memory cell pointed to by \(e_1\), whose contents is \(e_2\), and nothing more. To distinguish points-to from \(\mathsf {emp}\) we require that \(\pi \) be non-\(\mathcal {E}\). For notational convenience we sometimes elide the full share \(\mathcal {F}\) over a fractional maps-to, writing just \(e_1 \mapsto e_2\). The connection of \(\oplus \) to the fractional maps-to predicate is given by the bi-entailment:

$$\begin{aligned} \,\,e_1 {\mathop {\mapsto }\limits ^{\pi _1 \oplus \pi _2}} e_2\,\, ~~ \dashv \vdash ~~ \,\,e_1 {\mathop {\mapsto }\limits ^{\pi _1}} e_2\,\, \star \,\,e_1 {\mathop {\mapsto }\limits ^{\pi _2}} e_2\,\, \end{aligned}$$

Fig. 1.

This heap satisfies \(\mathsf {tree}(\texttt {root},0.3)\) despite being a DAG

Disjointness. Although intuitive, the rationals are not a good model for shares in SL. Consider this definition for \(\pi \)-fractional trees rooted at x:

$$\begin{aligned} \mathsf {tree}(x, \pi ) ~~ {\mathop {=}\limits ^\mathrm{{def}}}~~ \langle x = \mathsf {null} \rangle \vee \exists d,l,r.~x \,{\mathop {\mapsto }\limits ^{\pi }}\, (d,l,r) \star \mathsf {tree}(l, \pi ) \star \mathsf {tree}(r, \pi ) \end{aligned}$$
(1)

This \(\mathsf {tree}\) predicate is obtained directly from the standard recursive predicate for binary trees by asserting only \(\pi \) ownership of the root and recursively doing the same for the left and right substructures, so at first glance it looks straightforward. The problem is that when \(\pi \in (0,0.5]\), \(\mathsf {tree}\) can also describe some non-tree directed acyclic graphs, as in Fig. 1. Fractional \(\mathsf {tree}\)s are a little too easy to introduce and thus unexpectedly painful to eliminate.
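
The counterexample can be made concrete with a small executable sketch (ours, not the paper's formal semantics): checking \(\mathsf {tree}(x,\pi )\) greedily consumes a \(\pi \)-share of each visited cell, so with \(\pi \le 0.5\) a node shared between the two subtrees of a DAG can be consumed twice without exceeding the full share 1.

```python
# A sketch (our own illustration) of the Fig. 1 problem with rational shares.

def consume_tree(heap, avail, x, pi):
    """Check tree(x, pi) by subtracting pi from each visited cell's share."""
    if x is None:                      # the <x = null> base case
        return True
    d, l, r = heap[x]
    if avail[x] < pi:                  # not enough share left at address x
        return False
    avail[x] -= pi                     # claim the pi-share of x |-> (d, l, r)
    return consume_tree(heap, avail, l, pi) and consume_tree(heap, avail, r, pi)

# A DAG: both children of root point to the same node n, as in Fig. 1.
heap = {'root': (0, 'n', 'n'), 'n': (1, None, None)}

assert consume_tree(heap, {a: 1.0 for a in heap}, 'root', 0.3)      # DAG passes at 0.3
assert not consume_tree(heap, {a: 1.0 for a in heap}, 'root', 0.6)  # fails: 0.6 + 0.6 > 1
```

With \(\pi = 0.3\) the shared node n is charged \(0.3 + 0.3 \le 1\), so the \(\star \)-split succeeds even though the heap is not a tree.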

To prevent the deformation of recursive structures shown in Fig. 1, we want to recover the “disjointness” property of basic SL: \(\,\,e {\mathop {\mapsto }\limits ^{}} e_1\,\, \star \,\,e {\mathop {\mapsto }\limits ^{}} e_2\,\, \dashv \vdash \bot \). Disjointness can be specified either as an inference rule in separation logic [41] or as an algebraic rule on the share model [21] as follows:

$$\begin{aligned} \forall a, b.~~ a \oplus a = b ~ \Rightarrow ~ a = \mathcal {E} \end{aligned}$$
(2)

In other words, a nonempty share \(\pi \) cannot join with itself. In Sect. 3 we will see how disjointness enables the distribution of predicate multiplication over \(\star \) and in Sect. 4 we will see how disjointness enables antiframe inference during biabduction.

Tree Shares. Dockins et al. [21] proposed “tree shares” as a share model satisfying disjointness. For this paper the details of the model are not critical so we provide only a brief overview. A tree share \(\tau \in \mathbb {T}\) is a binary tree with Boolean leaves, i.e. \(\tau ~{:}{:}{=}~ \circ ~|~ \bullet ~|~ (\tau , \tau )\), where \(\circ \) is the empty share \(\mathcal {E}\) and \(\bullet \) is the full share \(\mathcal {F}\). There are two “half” shares, \((\bullet , \circ )\) and \((\circ , \bullet )\), and four “quarter” shares, e.g. \(((\bullet , \circ ), \circ )\). Trees must be in canonical form, i.e. the most compact representation under the congruence \(\cong \) generated by \((\circ , \circ ) \cong \circ \) and \((\bullet , \bullet ) \cong \bullet \).

Union \(\sqcup \), intersection \(\sqcap \), and complement \(\bar{\cdot }\) are the basic operations on tree shares; they operate leafwise after unfolding the operands under \(\cong \) into the same shape, e.g. \((\bullet , \circ ) \sqcup (\circ , (\bullet , \circ )) = (\bullet , (\bullet , \circ ))\).

The structure \((\mathbb {T}, \sqcup , \sqcap , \bar{\cdot })\) forms a countable atomless Boolean algebra and thus enjoys decidable existential and first-order theories with precisely known complexity bounds [34]. The join operator \(\oplus \) on trees is defined by \(\tau _1 \oplus \tau _2 = \tau _3 \,{\mathop {=}\limits ^\mathrm{{def}}}\, \tau _1 \sqcup \tau _2 = \tau _3 \wedge \tau _1 \sqcap \tau _2 = \circ \). Due to their good metatheoretic and computational properties, a variety of program logics [24, 25] and verification tools [3, 26, 33, 47] have used tree shares (or other isomorphic structures [19]).
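
To make the definitions concrete, here is a small executable sketch of the tree-share operations (the representation and function names are ours, not taken from the paper's Coq development): leaves are booleans (False = \(\circ \), True = \(\bullet \)) and internal nodes are pairs, kept in canonical form.

```python
# A sketch (ours) of the tree-share model of Dockins et al.

def canon(t):
    """Most compact form: collapse (x, x) for equal boolean leaves, recursively."""
    if isinstance(t, bool):
        return t
    l, r = canon(t[0]), canon(t[1])
    return l if isinstance(l, bool) and l == r else (l, r)

def leafwise(op, a, b):
    """Apply a boolean op leafwise after unfolding operands to a common shape."""
    if isinstance(a, bool) and isinstance(b, bool):
        return op(a, b)
    a = (a, a) if isinstance(a, bool) else a
    b = (b, b) if isinstance(b, bool) else b
    return canon((leafwise(op, a[0], b[0]), leafwise(op, a[1], b[1])))

def union(a, b): return leafwise(lambda x, y: x or y, a, b)
def inter(a, b): return leafwise(lambda x, y: x and y, a, b)

def compl(t):
    """Leafwise complement."""
    return (not t) if isinstance(t, bool) else (compl(t[0]), compl(t[1]))

def join(a, b):
    """t1 + t2: defined iff t1 /\ t2 = o, and then equal to t1 \/ t2."""
    return union(a, b) if inter(a, b) is False else None   # None = undefined

E, F = False, True
L, R = (True, False), (False, True)          # the two half shares

assert join(L, R) == F and join(L, compl(L)) == F
assert join(L, L) is None                    # disjointness: L + L is undefined
assert union((True, False), (False, (True, False))) == (True, (True, False))
```

The last assertion is the unfolding example above; the `join(L, L) is None` check is exactly the disjointness axiom (2).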

3 Predicate Multiplication

The additive structure of share models is relatively well-understood [21, 33, 34]. The focus of this paper is exploring the benefits and consequences of incorporating a multiplicative operator \(\otimes \) into a share model. The simplest motivation for multiplication is computationally dividing some share \(\pi \) of a resource “in half”; the two halves of the resource are then given to separate threads for parallel processing. When shares themselves are rationals, \(\otimes \) is just ordinary multiplication, e.g. we can divide \(0.6 = (0.5 \otimes 0.6) \oplus (0.5 \otimes 0.6)\). Defining a notion of multiplication on a share model that satisfies disjointness is somewhat trickier, but we can do so with tree shares as follows. Define \(\tau _1 \otimes \tau _2\) to be the operation that replaces each \(\bullet \) in \(\tau _2\) with a copy of \(\tau _1\), e.g. \((\bullet , \circ ) \otimes (\circ , \bullet ) = (\circ , (\bullet , \circ ))\).

The structure \((\mathbb {T}, \oplus , \otimes )\) is a kind of “near-semiring.” The \(\otimes \) operator is associative, has identity \(\mathcal {F}\) and null point \(\mathcal {E}\), and is right distributive, i.e. \((a \oplus b) \otimes c = (a \otimes c) \oplus (b \otimes c)\). It is not commutative, does not distribute on the left, and does not have inverses. It is hard to do better: adding axioms like multiplicative inverses forces any model satisfying disjointness (\(\forall a,b.~a \oplus a = b ~ \Rightarrow ~ a = \mathcal {E}\)) to have no more than two elements (Sect. 8).
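
These algebraic facts can be spot-checked executably; the following self-contained sketch (ours, not the paper's development) implements \(\otimes \) on tree shares, with False = \(\circ \), True = \(\bullet \), pairs as nodes:

```python
# A sketch (ours) of tree-share multiplication and its (non-)laws.

def canon(t):
    """Collapse (x, x) for equal boolean leaves, recursively."""
    if isinstance(t, bool):
        return t
    l, r = canon(t[0]), canon(t[1])
    return l if isinstance(l, bool) and l == r else (l, r)

def leafwise(op, a, b):
    """Apply a boolean op leafwise after unfolding operands to a common shape."""
    if isinstance(a, bool) and isinstance(b, bool):
        return op(a, b)
    a = (a, a) if isinstance(a, bool) else a
    b = (b, b) if isinstance(b, bool) else b
    return canon((leafwise(op, a[0], b[0]), leafwise(op, a[1], b[1])))

def join(a, b):
    """Partial +: union, defined only when the intersection is empty."""
    assert leafwise(lambda x, y: x and y, a, b) is False, "join undefined"
    return leafwise(lambda x, y: x or y, a, b)

def mult(a, b):
    """a (x) b: replace each full leaf of b by a copy of a."""
    if isinstance(b, bool):
        return a if b else False
    return canon((mult(a, b[0]), mult(a, b[1])))

E, F = False, True
L, R = (True, False), (False, True)             # the two half shares

assert mult(F, L) == L and mult(L, F) == L      # F is the identity
assert mult(L, R) != mult(R, L)                 # not commutative
# right distributivity: (L + R) (x) L = (L (x) L) + (R (x) L)
assert mult(join(L, R), L) == join(mult(L, L), mult(R, L))
# left distributivity fails: L (x) (L + R) != (L (x) L) + (L (x) R)
assert mult(L, join(L, R)) != join(mult(L, L), mult(L, R))
```

The last two assertions witness right distributivity and the failure of left distributivity on the half shares, foreshadowing the impossibility result of Sect. 8.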

Now consider the toy program in Fig. 2. Starting from the tree rooted at x, the program itself is dead simple. First (line 3) we check whether x is null, i.e. whether we have reached a leaf; if so, we return. If not, we split into parallel threads (lines 4–6 and 7–9) that do some processing on the root data in both branches. In the toy example, the processing just prints out the root data (lines 4 and 7); the print command is unimportant: what is important is that we somehow access some of the data in the tree. After processing the root, both parallel branches call the processTree function recursively on the left x->l (lines 5 and 8) and right x->r (lines 6 and 9) branches, respectively. After both parallel processes have terminated, the function returns (line 10). The program is simple, so we would like its verification to be equally simple.

Fig. 2.

The parallel processTree function, written in a C-like language
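
For concreteness, the program just described can be rendered in Python with threads (our own rendering of Fig. 2's C-like code; the dict node representation and the list accumulator standing in for the print calls are ours):

```python
import threading

def process_tree(x, out):
    """Visit a tree in parallel; nodes are dicts {'d': .., 'l': .., 'r': ..} or None."""
    if x is None:                     # line 3: reached a leaf
        return
    def branch(child):
        out.append(x['d'])            # lines 4/7: read the root data
        process_tree(child, out)      # lines 5-6/8-9: recursive calls
    t1 = threading.Thread(target=branch, args=(x['l'],))
    t2 = threading.Thread(target=branch, args=(x['r'],))
    t1.start(); t2.start()            # run the two branches in parallel
    t1.join(); t2.join()              # line 10: both branches have terminated

leaf = {'d': 2, 'l': None, 'r': None}
root = {'d': 1, 'l': leaf, 'r': None}
out = []
process_tree(root, out)
assert sorted(out) == [1, 1, 2, 2]    # each visited node's data is read by both branches
```

Both branches only read `x['d']`, which is why a non-\(\mathcal {E}\) fraction of each cell suffices for each thread.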

Predicate multiplication is the tool that leads to a simple proof. Specifically, we would like to verify that processTree has the specification:

$$\begin{aligned} \forall \pi .~~ \{\pi \cdot \mathsf {tree}(x)\} ~~ \texttt {processTree}(x) ~~ \{\pi \cdot \mathsf {tree}(x)\} \end{aligned}$$

Here \(\mathsf {tree}(x) \,{\mathop {=}\limits ^\mathrm{{def}}}\, \langle x = \mathsf {null} \rangle \vee \exists d,l,r.~x \mapsto (d,l,r) \star \mathsf {tree}(l) \star \mathsf {tree}(r)\) is exactly the usual definition of binary trees in separation logic. Predicate multiplication has allowed us to isolate the fractional ownership from the definition; compare with Eq. (1) above. Our precondition and postcondition both say that x is a pointer to a heap-represented \(\pi \)-owned \(\mathsf {tree}\). Critically, we want to ensure that our \(\pi \)-share at the end of the program is equal to the \(\pi \)-share at the beginning. This way if our initial caller had full \(\mathcal {F}\) ownership before calling processTree, he will have full ownership afterwards (allowing him to e.g. deallocate the tree).

The intuition behind the proof is simple. First, in line 3 we check whether \(\mathtt {x}\) is null; if so we are in the base case of the \(\mathsf {tree}\) definition and can simply return. If not, we can eliminate the left disjunct, split the \(\star \)-separated bits into disjoint subtrees \(\mathtt {l}\) and \(\mathtt {r}\), and then divide the ownership of those bits into two “halves”. Let \(\mathcal {L}\,{\mathop {=}\limits ^\mathrm{{def}}}\,(\bullet , \circ )\) and \(\mathcal {R}\,{\mathop {=}\limits ^\mathrm{{def}}}\,(\circ , \bullet )\). When we start the parallel computation on lines 4 and 7 we want to pass the left branch of the computation the \(\mathcal {L}\otimes \pi \)-share of the spatial resources, and the right branch the \(\mathcal {R}\otimes \pi \)-share. In both branches we then need to show that we can read from the data cell, which under the simple policy we use in this paper boils down to making sure that the product of two non-\(\mathcal {E}\) shares cannot be \(\mathcal {E}\). This is a basic property of reasonable share models with multiplication. In the remainder of the parallel code (lines 5–6 and 8–9) we make the recursive calls, which is done by simply instantiating \(\pi \) with \(\mathcal {L}\otimes \pi \) and \(\mathcal {R}\otimes \pi \) in the recursive specification (as well as \(\mathtt {l}\) and \(\mathtt {r}\) for x). The second half of the proof, after the parallel call, is pleasantly symmetric to the first: we fold the original tree predicate back together by merging the two halves \(\mathcal {L}\otimes \pi \) and \(\mathcal {R}\otimes \pi \) back into \(\pi \). Consequently, we arrive at the postcondition \(\pi \cdot \textsf {tree}(x)\), which is identical to the precondition.

3.1 Proof Rules for Predicate Multiplication

In Fig. 4 we give the formal verification of processTree, which follows the informal argument very closely. Before we go through it, let us consider the reason for this close alignment: the key rules for reasoning about predicate multiplication are bidirectional. These rules are given in Fig. 3. The non-spatial rules are all straightforward and follow the basic pattern that predicate multiplication both pushes into and pulls out of the operators of our logic without meaningful side conditions. The DotPure rule means that predicate multiplication ignores pure facts, too. Complicating the picture slightly, predicate multiplication pushes into implication \(\Rightarrow \) but does not pull out of it. Combining DotImpl with DotPure we get a one-way rule for negation: \(\pi \cdot (\lnot P) \vdash \lnot (\pi \cdot P)\). We will explain why we cannot get both directions in Sects. 5.1 and 8.

Fig. 3.

Distributivity of the scaling operator over pure and spatial connectives

Most of the spatial rules are also simple. Recall that \(\mathsf {emp} \,{\mathop {=}\limits ^\mathrm{{def}}}\, \langle \top \rangle \), so DotPure yields \(\pi \cdot \mathsf {emp} \dashv \vdash \mathsf {emp}\). The DotFull rule says that \(\mathcal {F}\) is the scalar identity on predicates, just as it is the multiplicative identity on the share model itself. The DotDot rule allows us to “collapse” repeated predicate multiplication using share multiplication; we will shortly see how we use it to verify the recursive calls to processTree. Similarly, the DotMapsTo rule shows how predicate multiplication combines with basic maps-to by multiplying the associated shares together. All three rules are bidirectional and require no side conditions.

While the last two rules are both bidirectional, they both have side conditions. The DotPlus rule shows how predicate multiplication distributes over \(\oplus \). The \(\vdash \) direction does not require a side condition, but for the \(\dashv \) direction we require that P be precise in the usual separation logic sense. Precision will be discussed in Sect. 5.2; for now a simple counterexample shows why it is necessary:

$$\begin{aligned} \mathcal {L}\cdot (x \mapsto a \vee (x+1) \mapsto b) \star \mathcal {R}\cdot (x \mapsto a \vee (x+1) \mapsto b) ~~ \not \vdash ~~ \mathcal {F}\cdot (x \mapsto a \vee (x+1) \mapsto b) \end{aligned}$$

The premise is also consistent with \(\,\,x {\mathop {\mapsto }\limits ^{\mathcal {L}}} a\,\, \star \,\,(x+1) {\mathop {\mapsto }\limits ^{\mathcal {R}}} b\,\,\).

The DotStar rule shows how predicate multiplication distributes into and out of the separating conjunction \(\star \). It is also bidirectional. Crucially, the \(\dashv \) direction fails on non-disjoint share models like \(\mathbb {Q}\), which is the “deeper reason” for the deformation of recursive structures illustrated in Fig. 1. On disjoint share models like \(\mathbb {T}\), we get equational reasoning \(\dashv \vdash \) subject to the side condition of uniformity. Informally, \(P \vdash \textsf {uniform}(\pi ')\) asserts that any heap satisfying P has the permission \(\pi '\) uniformly at each of its defined addresses. In Sect. 8 we explain why we cannot admit this rule without a side condition.

In the meantime, let us argue that most predicates used in practice in separation logic are uniform. First, every SL predicate defined in non-fractional settings, such as \(\mathsf {tree}(x)\), is \(\mathcal {F}\)-uniform. Second, P is a \(\pi \)-uniform predicate if and only if \(\pi ' \cdot P\) is \((\pi ' \otimes \pi )\)-uniform. Third, the \(\star \)-conjunction of two \(\pi \)-uniform predicates is also \(\pi \)-uniform. Since a significant motivation for predicate multiplication is to allow standard SL predicates to be used in fractional settings, these already cover many common cases in practice. It is useful to consider examples of non-uniform predicates for contrast. Here are three (we elide the base cases):

$$\begin{aligned} \begin{array}{lcl} \mathsf {slist}(x) &{} \dashv \vdash &{} \exists d,n. \big ((\langle d = 17 \rangle \star \,\,x {\mathop {\mapsto }\limits ^{\mathcal {L}}} (d,n)\,\,) \vee (\langle d \ne 17 \rangle \star \,\,x {\mathop {\mapsto }\limits ^{\mathcal {R}}} (d,n)\,\,)\big ) \star \mathsf {slist}(n) \\ \mathsf {dlist}(x) &{} \dashv \vdash &{} \exists d,n. x \mapsto d,n \star \mathcal {L}\cdot \mathsf {dlist}(n) \\ \mathsf {dtree}(x) &{} \dashv \vdash &{} \exists d,l,r. x \mapsto d,l,r \star \mathcal {L}\cdot \mathsf {dtree}(l) \star \mathcal {R}\cdot \mathsf {dtree}(r) \end{array} \end{aligned}$$

The \(\mathsf {slist}(x)\) predicate owns different amounts of permissions at different memory cells depending on the value of those cells. The \(\mathsf {dlist}(x)\) predicate owns decreasing amounts of the list, e.g. the first cell is owned more than the second, which is owned more than the third. The \(\mathsf {dtree}(x)\) predicate is even stranger, owning different amounts of different branches of the tree, essentially depending on the path to the root. None of these predicates mix well with DotStar, but perhaps they are not needed to verify many programs in practice, either. In Sects. 5.1 and 5.2 we will discuss how to prove that predicates are precise and uniform. In Sect. 5.4 we will demonstrate these techniques by applying them to two examples.

Fig. 4.

Reasoning with the scaling operator \(\pi \cdot P\).

3.2 Verification of processTree Using Predicate Multiplication

We now explain how the proof of \(\texttt {processTree}\) is carried out in Fig. 4 using the scaling rules of Fig. 3. In line 2, we unfold the definition of the predicate \(\textsf {tree}(x)\), which consists of one base case and one inductive case. We reach line 3 by pushing \(\pi \) inward using the rules \(\textsc {DotPure}\), \(\textsc {DotDisj}\), \(\textsc {DotExis}\), \(\textsc {DotMapsTo}\) and \(\textsc {DotStar}\). To use \(\textsc {DotStar}\) we must prove that \(\mathsf {tree}(x)\) is \(\mathcal {F}\)-uniform, which we show how to do in Sect. 5.4. We prove this lemma once and use it many times.

The base case \(\mathtt {x} = \texttt {null}\) is handled in lines 4–5 by applying the rule \(\textsc {DotPure}\), i.e., \(\langle \mathtt {x} = \texttt {null} \rangle \vdash \pi \cdot \langle \mathtt {x} = \texttt {null} \rangle \), and then \(\textsc {DotPos}\), \(\pi \cdot \langle \mathtt {x} = \texttt {null} \rangle \vdash \pi \cdot \mathsf {tree}(x)\). For the inductive case, we first apply \(\textsc {DotFull}\) in line 7 and then replace \(\mathcal {F}\) with \(\mathcal {L}\oplus \mathcal {R}\) (recall that \(\mathcal {R}\) is \(\mathcal {L}\)’s complement). On line 9 we use \(\textsc {DotPlus}\) to translate the split on shares with \(\oplus \) into a split on heaps with \(\star \).

We show only one parallel process; the other is a mirror image. Line 10 gives the precondition from the Parallel rule, and then in lines 11 and 12 we continue to “push in” the predicate multiplication. To verify the code in lines 13–14 just requires Frame. Notice that we need the DotDot rule to “collapse” the two uses of predicate multiplication into one so that we can apply the recursive specification (with the new \(\pi '\) in the recursive precondition equal to \(\mathcal {L}\otimes \pi \)).

Having taken the predicate completely apart, it is now necessary to put Humpty Dumpty back together again. This is why it is vital that all of our proof rules are bidirectional: without this we would not be able to reach the final postcondition \(\pi \cdot \mathsf {tree}(x)\). The final wrinkle is that for line 19 we must prove the precision of the \(\mathsf {tree}(x)\) predicate. We show how to do so by example in Sect. 5.4; typically in a verification this is proved once per predicate as a lemma.

4 Bi-abductive Inference with Fractional Permissions

Biabduction is a separation logic inference process that helps increase the scalability of verification for sizable programs [22, 49]; in recent years it has been the focus of substantial research for (sequential) separation logic [8, 10, 11, 32]. Biabduction aims to infer the missing information in an incomplete separation logic entailment. More precisely, given an incomplete entailment \(A \star [??] \vdash B \star [??]\), we would like to find predicates for the two missing pieces [??] that complete the entailment in a nontrivial manner. The first piece is called the antiframe and the second the inference frame. The standard approach consists of two sequential subroutines, namely abductive inference and frame inference, which construct the antiframe and the frame respectively. Our task in this section is to show how to upgrade these routines to handle fractional permissions so that biabduction extends to concurrent programs. As we will see, disjointness plays a crucial role in antiframe inference.

4.1 Fractional Residue Computation

Consider the fractional points-to biabduction problem over the rationals:

$$\begin{aligned} \,\,x {\mathop {\mapsto }\limits ^{\pi _1}} y\,\, \star [??] ~~ \vdash ~~ \,\,x {\mathop {\mapsto }\limits ^{\pi _2}} y\,\, \star [??] \end{aligned}$$

There are three cases to consider, namely \(\pi _1 = \pi _2\), \(\pi _1 < \pi _2\), and \(\pi _1 > \pi _2\). In the first case, both the (minimal) antiframe \(F_\text {a}\) and frame \(F_\text {f}\) are \(\mathsf {emp}\); in the second case \(F_\text {f} = \mathsf {emp}\) and the antiframe is the points-to with the remaining \(\pi _2 - \pi _1\) share; and the last case is symmetric, with \(F_\text {a} = \mathsf {emp}\) and the frame holding the remaining \(\pi _1 - \pi _2\) share. Here we straightforwardly compute the residue permission using rational subtraction. In general, one can attempt to define subtraction \(\ominus \) from a share model \(\langle \mathcal {S},\oplus \rangle \) as \(a \ominus b = c \,{\mathop {=}\limits ^\mathrm{{def}}}\, b \oplus c = a\). However, this definition is partial, whereas we want subtraction to be a total function so that the residue is always computable efficiently. A solution to this issue is to relax the requirements for \(\ominus \), asking only that it satisfy the following two properties:

$$\begin{aligned} C_1: a \oplus (b \ominus a) = b \oplus (a \ominus b) \quad \quad \quad C_2: a \ll b \oplus c \Rightarrow a \ominus b \ll c \end{aligned}$$

where \(a \ll b \,{\mathop {=}\limits ^\mathrm{{def}}}\, \exists c.~a \oplus c = b\). The condition \(C_1\) provides a convenient way to compute the fractional residue in both the frame and antiframe, while \(C_2\) asserts that \(a \ominus b\) is effectively the minimal element that when joined with b becomes greater than a. In the rationals \(\mathbb {Q}\), \(a \ominus b \,{\mathop {=}\limits ^\mathrm{{def}}}\, if (a>b)~ then ~a-b~ else ~0\). On tree shares \(\mathbb {T}\), \(a \ominus b {\mathop {=}\limits ^\mathrm{{def}}}a \sqcap \overline{b}\). Recalling that the case \(\pi _1 = \pi _2\) is simple (both the antiframe and frame are just \(\mathsf {emp}\)), if \(\pi _1 \ne \pi _2\) we can compute the fractional antiframe and inference frame uniquely using \(\ominus \):

$$\begin{aligned} F_\text {a} = \,\,x {\mathop {\mapsto }\limits ^{\pi _2 \ominus \pi _1}} y\,\, \quad \quad \quad F_\text {f} = \,\,x {\mathop {\mapsto }\limits ^{\pi _1 \ominus \pi _2}} y\,\, \end{aligned}$$

Generally, the following rule helps compute the residue of predicate P:

Using \(C_1\) and \(C_2\) it is easy to prove that the residue is minimal w.r.t. \(\ll \), i.e.:

$$\begin{aligned} \pi _1 \oplus a = \pi _2 \oplus b \Rightarrow \pi _2 \ominus \pi _1 \ll a \wedge \pi _1 \ominus \pi _2 \ll b \end{aligned}$$
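
On the rationals, \(C_1\), \(C_2\), and this minimality property can be checked exhaustively over a small grid of shares. The following sketch (ours; `sub` and `lleq` are our names for \(\ominus \) and \(\ll \)) does so with exact arithmetic:

```python
from fractions import Fraction as Q

def sub(a, b):
    """a (-) b on rational shares: a - b if a > b, else 0 (a total function)."""
    return a - b if a > b else Q(0)

def lleq(a, b):
    """a << b: some share c satisfies a + c = b, i.e. a <= b within [0, 1]."""
    return a <= b

shares = [Q(n, 4) for n in range(5)]                   # 0, 1/4, 1/2, 3/4, 1
for a in shares:
    for b in shares:
        # C1: a + (b (-) a) = b + (a (-) b); both sides equal max(a, b)
        assert a + sub(b, a) == b + sub(a, b) == max(a, b)
        for c in shares:
            # C2: a << b + c implies a (-) b << c (when b + c is defined)
            if b + c <= 1 and lleq(a, b + c):
                assert lleq(sub(a, b), c)
        # minimality: pi1 + x = pi2 + y implies pi2 (-) pi1 << x and pi1 (-) pi2 << y
        for x in shares:
            for y in shares:
                if a + x <= 1 and a + x == b + y:
                    assert lleq(sub(b, a), x) and lleq(sub(a, b), y)
```

A finite grid is of course not a proof, but it makes the role of \(C_1\) (residue computation) and \(C_2\) (minimality) easy to see.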

4.2 Extension of Predicate Axioms

To support reasoning about recursive data structures such as lists or trees, the assertion language is enriched with the corresponding inductive predicates. To derive properties of inductive predicates, verification tools often contain a list of predicate axioms/facts and use them to aid the verification process [9, 32]. These facts are represented as entailment rules \(A \vdash B\) that can be classified into “folding” and “unfolding” rules that manipulate the representation of inductive predicates. For example, some axioms for the tree predicate are:

$$\begin{aligned} (F_1):~ \langle x = \mathsf {null} \rangle \vdash \mathsf {tree}(x) \quad \quad \quad (F_2):~ x \mapsto (d,l,r) \star \mathsf {tree}(l) \star \mathsf {tree}(r) \vdash \mathsf {tree}(x) \end{aligned}$$

We want to transform these axioms into fractional forms. The key ingredient is the \(\textsc {DotPos}\) rule from Fig. 3, which lifts an entailment to its fractional form, i.e. \((P \vdash Q) \Rightarrow (\pi \cdot P \vdash \pi \cdot Q)\). Using this and the other scaling rules from Fig. 3, we can upgrade the folding/unfolding rules into corresponding fractional forms:

As our scaling rules are bidirectional, they can be applied both in the antecedent and in the consequent to produce a smooth transformation to fractional axioms. Also, recall that our \(\textsc {DotStar}\) rule \(\pi \cdot (P \star Q) \dashv \vdash \pi \cdot P \star \pi \cdot Q\) has a side condition that both P and Q are \(\pi '\)-uniform. This condition is trivially satisfied in the transformation, as standard predicates (i.e. those without permissions) are automatically \(\mathcal {F}\)-uniform. Furthermore, the precision and uniformity properties transfer directly to fractional forms by the following rules:

$$\begin{aligned} \text {precise}(\pi \cdot P) \Leftrightarrow \text {precise}(P) \quad \quad \quad P \vdash \textsf {uniform}(\pi ) \Leftrightarrow \pi ' \cdot P \vdash \textsf {uniform}(\pi '\otimes \pi ) \end{aligned}$$

4.3 Abductive Inference and Frame Inference

To construct the antiframe, Calcagno et al. [10] presented a general framework for antiframe inference which contains rules of the form:

where \(\text {Cond}\) is the side condition, together with consequents \((H,H')\), heap formulas \((\varDelta ,\varDelta ')\), and antiframes \((M,M')\). In principle, the abduction algorithm gradually matches fragments of the consequent with the antecedent, deriving sound equalities among variables while applying various folding and unfolding rules for recursive predicates on both sides of the entailment. Ideally, the remaining unmatched fragments of the consequent are returned to form the antiframe. During the process, certain conditions need to be maintained, e.g., satisfiability of the antecedent or a minimal choice of antiframe. After finding the antiframe, frame inference is invoked to construct the inference frame. In principle, the old antecedent is first combined with the antiframe to form a new antecedent whose fragments are matched against the consequent. Eventually, the remaining unmatched fragments of the antecedent are returned to construct the inference frame.

Fig. 5.

An example of biabduction with fractional permissions

The discussion of fractional residue computation in Sect. 4.1 and the extension of recursive predicate rules in Sect. 4.2 ensure a smooth upgrade of the biabduction algorithm to fractional form. We demonstrate this intuition using the example in Fig. 5. The partial consequent is a fractional \(\textsf {tree}(x)\) predicate with permission \(\pi _3\), while the partial antecedent is the star conjunction of a fractional maps-to predicate for address x with permission \(\pi _1\), a fractional \(\textsf {tree}(x_1)\) predicate with permission \(\pi _2\), and a null pointer \(x_2\). Following the spirit of Calcagno et al. [10], the steps in both subroutines include applying the folding and unfolding rules for the predicate \(\textsf {tree}\) and then matching the corresponding pairs of fragments from the antecedent and consequent. The upgraded part is reflected in the use of the two new rules \(\textsc {Msub}\) and \(\textsc {Psub}\) to compute the fractional residues, as well as a more general system of folding and unfolding rules for the predicate \(\textsf {tree}\). We are then able to compute the antiframe and the inference frame respectively.

Antiframe Inference and Disjointness. Consider the following abduction problem:

Using the folding rule \(F_2\), we can identify the antiframe as \(\mathsf {tree}(x_2)\). Now suppose we have a rational permission \(\pi \in \mathbb {Q}\) distributed everywhere, i.e.:

A naïve solution is to let the antiframe be \(\pi \cdot \mathsf {tree}(x_2)\). However, in \(\mathbb {Q}\) this choice is unsound due to the deformation of recursive structures illustrated in Fig. 1: if the antiframe is \(\pi \cdot \mathsf {tree}(x_2)\), the left hand side can be a DAG, even though the right hand side must be a tree. However, in disjoint share models like \(\mathbb {T}\), choosing \(\pi \cdot \mathsf {tree}(x_2)\) for the antiframe is correct and the entailment holds. As is often the case, things are straightforward once the definitions are correct.

5 A Proof Theory for Fractional Permissions

Our main objective in this section is to show how to discharge the uniformity and precision side conditions required by the DotStar and DotPlus rules. To handle recursive predicates like \(\mathsf {tree}(x)\) we develop a set of novel modal-logic-based proof rules to carry out induction in the heap. To allow tools to leverage existing entailment checkers, all of these techniques are carried out in the object logic itself, rather than in the metalogic. Thus, in Sect. 5, we do not assume a concrete model for our object logic (in Sect. 7 we will develop a model).

First we discuss new proof rules for predicate multiplication and fractional maps-to (Sect. 5.1), precision (Sect. 5.2), and induction over fractional heaps (Sect. 5.3). We then conclude (Sect. 5.4) with two examples of proving real properties using our proof theory: that \(\mathsf {tree}(x)\) is \(\mathcal {F}\)-uniform and that \(\mathsf {list}(x)\) is precise. Some of the theorems have delicate proofs, so all of them have been verified in Coq [1].

5.1 Proof Theory for Predicate Multiplication and Fractional Maps-To

In Sect. 3 we presented the key rules that someone who wants to verify programs using predicate multiplication is likely to find convenient. On page 13 we present a series of additional rules, mostly used to establish the “uniform” and “precise” side conditions necessary in our proofs.

Figure 6 is the simplest group, giving basic facts about the fractional points-to predicate. Only \(\mapsto \textsc {inversion}\) is not immediate from the nonfractional case. It says that it is impossible to have two fractional maps-tos of the same address with two different values. We need this fact to prove, e.g., that predicates with existentials such as \(\mathsf {tree}\) are precise.

Fig. 6. Proof theory for fractional maps-to

Fig. 7. Uniformity and precision for predicate multiplication

Fig. 8. Proof theory for precision

Fig. 9. Proof theory for substructural induction

Proving the side conditions for DotPlus and DotStar. Figure 7 contains some rules for establishing that P is \(\pi \)-uniform (i.e. \(P \vdash \mathsf {uniform}(\pi )\)) and that P is precise. Since uniformity is a simple property, the rules are easy to state:

To use predicate multiplication we must discharge two kinds of side conditions: uniformity and precision. The \(\mathsf {uniform}\textsc {/}\mathsf {emp}\) rule tells us that \(\mathsf {emp}\) is \(\pi \)-uniform for all \(\pi \); the conclusion (all defined heap locations are held with share \(\pi \)) is vacuously true. The \(\mathsf {uniform}\textsc {Dot}\) rule tells us that if P is \(\pi \)-uniform then when we multiply P by a fraction \(\pi '\) the result is \((\pi ' \! \otimes \pi )\)-uniform. The \(\mapsto \mathsf {uniform}\) rule tells us that points-to is uniform. The \(\mathsf {uniform}\star \) rule is more interesting. The \(\dashv \) direction follows from \(\mathsf {uniform}\textsc {/}\mathsf {emp}\) and the \(\star \mathsf {emp}\) rule (\(P \star \mathsf {emp} \dashv \vdash P\)). The \(\vdash \) direction is not automatic but is very useful: one consequence is that from \(P \vdash \mathsf {uniform}(\pi )\) and \(Q \vdash \mathsf {uniform}(\pi )\) we can prove \(P \star Q \vdash \mathsf {uniform}(\pi )\). The \(\vdash \) direction follows from disjointness but fails over non-disjoint models such as the rationals \(\mathbb {Q}\).
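To see why the \(\vdash \) direction fails over \(\mathbb {Q}\), consider two heaps that each hold a half share of the same cell: their join holds the full share, so it is not \(\frac{1}{2}\)-uniform. The following Python sketch (our own illustration, not part of the paper's Coq development) models rational-share heaps and checks this concretely:

```python
from fractions import Fraction

def join(h1, h2):
    """Join two rational-share heaps (addr -> (share, value)).
    Returns None when the heaps do not combine (share sum exceeds 1,
    or the values disagree)."""
    h = dict(h1)
    for addr, (s2, v2) in h2.items():
        if addr in h:
            s1, v1 = h[addr]
            if v1 != v2 or s1 + s2 > 1:
                return None
            h[addr] = (s1 + s2, v1)
        else:
            h[addr] = (s2, v2)
    return h

def uniform(pi, h):
    """h |= uniform(pi): every defined location is held with share pi."""
    return all(s == pi for (s, _) in h.values())

half = Fraction(1, 2)
h1 = {"x": (half, 7)}   # x held with share 1/2, value 7
h2 = {"x": (half, 7)}   # another half share of the same cell
h = join(h1, h2)        # in Q this join is defined: the full cell
```

Both `h1` and `h2` are \(\frac{1}{2}\)-uniform, yet their join holds the full share of x and so is not. In a disjoint share model the join of two half shares of the same cell is simply undefined, which is exactly what rescues the \(\vdash \) direction.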

The \(\mapsto \textsc {precise}\) rule tells us that points-tos are precise. The DotPrecise rule is a partial solution to proving precision. It states that \(\pi \cdot P\) is precise if and only if P is precise. We will next show how to prove that P itself is precise.

5.2 Proof Theory for Proving that Predicates Are Precise

Proving that a predicate is \(\pi \)-uniform is relatively straightforward using the proof rules presented so far. However, proving that a predicate is precise is not as pleasant. Traditionally precision is defined (and checked for concrete predicates) in the metalogic [40] using the following definition:

$$ \text {precise}(P) ~~ {\mathop {=}\limits ^\mathrm{{def}}}~~ \forall h, h_1, h_2.~ h_1 \subseteq h \Rightarrow h_2 \subseteq h \Rightarrow (h_1 \,\models \, P) \Rightarrow (h_2 \,\models \, P) \Rightarrow h_1 = h_2 \qquad (3) $$

Here we write \(h_1 \subseteq h_2\) to mean that \(h_1\) is a subheap of \(h_2\), i.e. \(\exists h'. h_1 \oplus h' = h_2\), where \(\oplus \) is the joining operation on the underlying separation algebra [21]. Essentially precision is a kind of uniqueness property: if a predicate P is precise then it can be true on at most one subheap of any given heap.

Rather than checking precision in the metalogic, we wish to do so in the object logic. We give a proof theory that lets us do so in Fig. 8. Among other advantages, proving precision in the object logic lets tools build on existing separation logic entailment checkers to prove the precision of recursive predicates. The core idea is simple: we define a new object logic operator “\(\mathsf {precisely}(P)\)” that captures the notion of precision relativized to the current heap; essentially it is a partially applied version of the definition of \(\text {precise}(P)\) in Eq. (3):

$$ h \,\models \, \mathsf {precisely}(P) ~~ {\mathop {=}\limits ^\mathrm{{def}}}~~ \forall h_1, h_2.~ h_1 \subseteq h \Rightarrow h_2 \subseteq h \Rightarrow (h_1 \,\models \, P) \Rightarrow (h_2 \,\models \, P) \Rightarrow h_1 = h_2 \qquad (4) $$

Although we have given \(\mathsf {precisely}\)’s model to aid intuition, we emphasize that in Sect. 5 all of our proofs take place in the object logic; we never unfold \(\mathsf {precisely}\)’s definition. Note that \(\mathsf {precisely}\) is also generally weaker than the typical notion of precision. For example, the predicate \(\,\,x {\mathop {\mapsto }\limits ^{}} 7\,\, \vee \,\,y {\mathop {\mapsto }\limits ^{}} 7\,\,\) is not precise; however the entailment \(\,\,z {\mathop {\mapsto }\limits ^{}} 8\,\, \vdash \mathsf {precisely}(\,\,x {\mathop {\mapsto }\limits ^{}} 7\,\, \vee \,\,y {\mathop {\mapsto }\limits ^{}} 7\,\,)\) is provable from Fig. 8.
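On finite heaps both notions can be checked by brute force. The following Python sketch (an illustration we add here, using simple non-fractional heaps modeled as address-to-value dictionaries and predicates as Boolean functions on heaps) enumerates subheaps to decide the properties behind Eqs. (3) and (4):

```python
from itertools import combinations

def subheaps(h):
    """All subheaps of h (every subset of its address/value pairs)."""
    items = sorted(h.items())
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            yield dict(combo)

def precisely(P, h):
    """h |= precisely(P): at most one subheap of h satisfies P (cf. Eq. 4)."""
    sats = [s for s in subheaps(h) if P(s)]
    return all(s1 == s2 for s1 in sats for s2 in sats)

def precise(P, heaps):
    """precise(P) over a finite universe of heaps (cf. Eq. 3)."""
    return all(precisely(P, h) for h in heaps)

# x |-> 7 \/ y |-> 7, as a predicate on exact heaps
P = lambda h: h == {"x": 7} or h == {"y": 7}
```

This replays the example from the text: P is not precise, because the heap holding both x and y has two distinct subheaps satisfying P; yet the heap satisfying \(z \mapsto 8\) contains no subheap satisfying P at all, so \(\mathsf {precisely}(P)\) holds there vacuously.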

That said, the two notions are closely connected, as witnessed by the \(\mathsf {precisely}\textsc {Precise}\) rule. We also give an introduction rule \(\mathsf {precisely}\textsc {Right}\) and an elimination rule \(\mathsf {precisely}\textsc {Left}\) that connect precision to an “antidistribution” of \(\star \) over \(\wedge \).

We also give a number of rules showing how \(\mathsf {precisely}\) combines with the connectives of our logic. The rules for propositional conjunction \(\wedge \) and separating conjunction \(\star \) follow well-understood patterns, with the addition of an arbitrary premise context G being the key feature. The rule for disjunction \(\vee \) is a little trickier, with an additional premise that forces the disjunction to be exclusive rather than inclusive. An example of such an exclusive disjunction is in the standard definition of the \(\mathsf {tree}\) predicate, where the first disjunct \(\langle x = \mathtt {null} \rangle \) is fundamentally incompatible with the second disjunct \(\exists d,l,r. x \mapsto d,l,r \star \ldots \) since \(\mapsto \) does not allow the address to be \(\mathtt {null}\) (by rule \(\mapsto \mathtt {null}\) from Fig. 6). The rules for universal quantification \(\forall \) and existential quantification \(\exists \) are essentially generalizations of the rules for the traditional conjunction \(\wedge \) and disjunction \(\vee \).

It is now straightforward to prove the precision of simple predicates such as \(\langle x = \mathtt {null} \rangle \vee (\exists y. x \mapsto y \star y \mapsto 0)\). Finding and proving the key lemmas that enable the proof of the precision of recursive predicates remains a little subtle.

5.3 Proof Theory for Induction over the Finiteness of the Heap

Recursive predicates such as \(\mathsf {list}(x)\) and \(\mathsf {tree}(x)\) are common in SL. However, proving properties of such predicates, such as proving that \(\mathsf {list}(x)\) is precise, is a little tricky since the \(\mu \textsc {FoldUnfold}\) rule provided by the Tarski fixed point does not automatically provide an induction principle. Generally speaking such properties follow by some kind of induction argument, either over auxiliary parameters (e.g. if we augment trees to have the form \(\mathsf {tree}(x,\tau )\), where \(\tau \) is an inductively-defined type in the metalogic) or over the finiteness of the heap itself. Both arguments usually occur in the metalogic rather than the object logic.

We have two contributions to make for proving inductive properties. First, we show how to do induction over the heap in a fractional setting. Intuitively this is more complicated than in the non-fractional case because there are infinite sequences of strictly smaller subheaps. That is, for a given initial heap \(h_0\), there are infinite sequences \(h_1\), \(h_2\), ... such that \(h_0 \supsetneq h_1 \supsetneq h_2 \supsetneq \ldots \). The disjointness property does not fundamentally change this issue, so we illustrate with an example whose shares are in \(\mathbb {Q}\). The heap \(h_0\) satisfying \(\,\,x {\mathop {\mapsto }\limits ^{1}} y\,\,\) is strictly larger than the heap \(h_1\) satisfying \(\,\,x {\mathop {\mapsto }\limits ^{\frac{1}{2}}} y\,\,\), which is strictly larger than the heap \(h_2\) satisfying \(\,\,x {\mathop {\mapsto }\limits ^{\frac{1}{4}}} y\,\,\); in general \(h_i\) satisfies \(\,\,x {\mathop {\mapsto }\limits ^{\frac{1}{2^i}}} y\,\,\). Since our sequence is infinite, we cannot use it as the basis for an induction argument. The solution is to require that the heaps decrease by at least some constant size c. If each subsequent heap must shrink by at least e.g. \(c=0.25\) of a memory cell, then the sequence must be finite, just as in the non-fractional case (which corresponds to \(c=\mathcal {F}\)). More sophisticated approaches are conceivable (e.g. limits) but they are not easy to automate and we did not find any practical examples that require such methods.
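The contrast can be seen concretely. In the sketch below (our illustration, tracking the rational share of a single cell), halving the share yields arbitrarily long strictly decreasing chains, while requiring every step to shed at least \(c = \frac{1}{4}\) caps the chain length at four:

```python
from fractions import Fraction

def halving_chain(n):
    """First n shares of the infinite chain 1, 1/2, 1/4, ... for one cell."""
    return [Fraction(1, 2 ** i) for i in range(n)]

def longest_chain(start, c):
    """Length of the longest strictly decreasing chain from share `start`
    when every step must shed at least a c-sized piece of the cell."""
    steps, share = 0, start
    while share >= c:    # a further c-sized piece can still be removed
        share -= c
        steps += 1
    return steps

chain = halving_chain(10)   # strictly decreasing, but never reaches zero
```

Halving never terminates, so it cannot ground an induction; the c-bounded chain always does, matching the requirement that each step shed at least a constant-sized piece.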

Our second contribution is the development of a proof theory in the object logic that can carry out these kinds of induction proofs in a relatively straightforward way. The proof rules that let us do so are given in Fig. 9. Once good lemmas are identified, we find doing induction proofs over the finite heap formally in the object logic simpler than doing the same proofs in the metalogic.

The key to our induction rules is two new operators: “within” \(\circledcirc \) and “shrinking” \(\rhd _\pi \). Essentially \(\rhd _\pi P\) is used as an induction guard, preventing us from applying our induction hypothesis P until we are on a \(\pi \)-smaller subheap. When \(\pi = \mathcal {F}\) we sometimes write just \(\rhd P\). Semantically, if h satisfies \(\rhd _\pi P\) then P is true on all strict subheaps of h that are smaller by at least a \(\pi \)-piece. Accordingly, the key elimination rule \(\rhd _\pi \!\star \) should seem natural: it verifies that the induction guard is satisfied and unlocks the underlying hypothesis. To start an induction proof of an arbitrary goal \(\top \vdash P\), we use the rule W to introduce an induction hypothesis, resulting in the new entailment goal \(\rhd _\pi P \vdash P\).

Some definitions, such as \(\mathsf {list}(x)\), have only one “recursive call”; others, such as \(\mathsf {tree}(x)\), have more than one. Moreover, sometimes we wish to apply our inductive hypothesis immediately after satisfying the guard, whereas other times it is convenient to satisfy the guard somewhat before we need the inductive hypothesis. To handle both of these issues we use the “within” operator \(\circledcirc \) such that \(h \,\models \, \circledcirc P\) means P is true on all subheaps of h, which is the intuition behind the rule \(\circledcirc \!\star \). To apply our induction hypothesis somewhat after meeting its guard (or if we wish to apply it more than once) we use the \(\rhd _\pi \!\circledcirc \) rule to add the \(\circledcirc \) modality before eliminating the guard. We will see an example of this shortly.

5.4 Using Our Proof Theory

We now turn to two examples of using our proof theory from page 13 to demonstrate that the rule set is strong and flexible enough to prove real properties.

Proving that \(\mathsf {tree}(x)\) is \(\mathcal {F}\)-uniform. Our logical rules for induction and uniformity are able to establish the uniformity of predicates in a fairly simple way. Here we focus on the \(\mathsf {tree}(x)\) predicate because it is a little harder due to the two recursive “calls” in its unfolding. For convenience, we will write \(\mathsf {u}(\pi )\) instead of \(\mathsf {uniform}(\pi )\).

Our initial proof goal is \(\mathsf {tree}(x) \vdash \mathsf {u}(\mathcal {F})\). Standard natural deduction arguments then reach the goal \(\top \vdash \forall x. \mathsf {tree}(x) \Rightarrow \mathsf {u}(\mathcal {F})\), after which we apply the W rule (\(\pi = \mathcal {F}\) is convenient) to start the induction, adding the hypothesis \(\rhd \forall x. \mathsf {tree}(x) \Rightarrow \mathsf {u}(\mathcal {F})\), which we strengthen with the \(\rhd _\pi \!\circledcirc \) rule to reach \(\rhd \circledcirc \,\forall x. \mathsf {tree}(x) \Rightarrow \mathsf {u}(\mathcal {F})\). Natural deduction from there reaches

$$ \big (\langle x = \mathtt {null} \rangle \vee \exists d,l,r. x \mapsto (d,l,r) \star \mathsf {tree}(l) \star \mathsf {tree}(r)\big )\wedge \big (\rhd \,\circledcirc \forall x. \mathsf {tree}(x) \Rightarrow \mathsf {u}(\mathcal {F})\big ) \vdash \mathsf {u}(\mathcal {F}) $$

The proof breaks into two cases. The first reduces to \(\langle x = \mathtt {null} \rangle \wedge (\rhd \cdots ) \vdash \mathsf {u}(\mathcal {F})\), which follows from the \(\mathsf {uniform}\textsc {/}\mathsf {emp}\) rule. The second case reduces to \(\big (x \mapsto (d,l,r) \star \mathsf {tree}(l) \star \mathsf {tree}(r)\big )\wedge \big (\rhd \,\circledcirc \forall x. \mathsf {tree}(x) \Rightarrow \mathsf {u}(\mathcal {F})\big ) \vdash \mathsf {u}(\mathcal {F})\). Then the \(\mathsf {uniform}\star \) rule gives

$$ \big (x \mapsto (d,l,r) \star (\mathsf {tree}(l) \star \mathsf {tree}(r))\big )\wedge \big (\rhd \circledcirc \forall x. \mathsf {tree}(x) \Rightarrow \mathsf {u}(\mathcal {F})\big ) \vdash \mathsf {u}(\mathcal {F}) \star \mathsf {u}(\mathcal {F}) $$

We now can cut with the \(\rhd _\pi \!\star \) rule to meet the inductive guard since \(x \mapsto (d,l,r) \vdash \mathsf {uniform}(\mathcal {F}) \wedge \lnot \mathsf {emp}\) due to the rules \(\mapsto \!\mathsf {uniform}\) and \(\mapsto \!\mathsf {emp}\). Our remaining goal is thus

$$ \big (x \mapsto (d,l,r) \wedge \rhd \cdots \big ) \star \big ((\mathsf {tree}(l) \star \mathsf {tree}(r)) \wedge \circledcirc \forall x. \mathsf {tree}(x) \Rightarrow \mathsf {u}(\mathcal {F})\big ) \vdash \mathsf {u}(\mathcal {F}) \star \mathsf {u}(\mathcal {F}) $$

We split over \(\star \). The first goal is \(x \mapsto (d,l,r) \wedge \rhd \cdots \vdash \mathsf {u}(\mathcal {F})\), which follows from \(\mapsto \!\mathsf {u}\). The second goal is \(\big (\mathsf {tree}(l) \star \mathsf {tree}(r)\big ) \wedge \big (\circledcirc \forall x. \mathsf {tree}(x) \Rightarrow \mathsf {u}(\mathcal {F})\big ) \vdash \mathsf {u}(\mathcal {F})\). We apply \(\circledcirc \!\star \) to distribute the inductive hypothesis into the \(\star \), and \(\mathsf {uniform}\star \) to split the right hand side, yielding

$$ \big (\mathsf {tree}(l) \wedge \circledcirc \forall x. \mathsf {tree}(x) \!\Rightarrow \!\mathsf {u}(\mathcal {F})\big ) \! \star \! \big (\mathsf {tree}(r) \wedge \circledcirc \forall x. \mathsf {tree}(x) \!\Rightarrow \!\mathsf {u}(\mathcal {F})\big ) \! \! \vdash \! \! \mathsf {u}(\mathcal {F}) \star \! \mathsf {u}(\mathcal {F}) $$

We again split over \(\star \) to reach two essentially identical cases. We apply rule T to remove the \(\circledcirc \) and then reach e.g. \(\forall x. \mathsf {tree}(x) \Rightarrow \mathsf {u}(\mathcal {F}) \vdash \mathsf {tree}(l) \Rightarrow \mathsf {u}(\mathcal {F})\), which is immediate. Further details on this proof can be found in the full paper [2].

Proving that \(\mathsf {list}(x)\) is precise. Precision is more complex than \(\pi \)-uniformity, so it is harder to prove. We will use the simpler \(\mathsf {list}(x)\) as an example; the additional tricks we need to prove that \(\mathsf {tree}(x)\) is precise are applications of the \(\rhd _\pi \!\circledcirc \) and \(\circledcirc \!\star \) rules in the same manner as in the proof that \(\mathsf {tree}(x)\) is \(\mathcal {F}\)-uniform. We have proved that both \(\mathsf {list}(x)\) and \(\mathsf {tree}(x)\) are precise using our proof rules in Coq [1].

Fig. 10. Key lemmas we use to prove recursive predicates precise

In Fig. 10 we give four key lemmas used in our proof. All four are derived (with a little cleverness) from the proof rules given in Fig. 8. We sketch the proof as follows. To prove \(\text {precise}(\mathsf {list}(x))\) we first use the \(\mathsf {precisely}\textsc {Precise}\) rule to transform the goal into \(\top \vdash \mathsf {precisely}(\mathsf {list}(x))\). We cannot immediately apply rule W, however, since without a concrete \(\star \)-separated conjunct outside the \(\mathsf {precisely}\), we cannot dismiss the inductive guard with the \(\rhd _\pi \!\star \) rule. Accordingly, we next use lemma (A) and standard natural deduction to reach the goal \(\top \vdash \forall x. (\mathsf {list}(x) \star \top ) \Rightarrow \mathsf {precisely}(\mathsf {list}(x))\), after which we apply rule W with \(\pi = \mathcal {F}\).

Afterwards we do some standard natural deduction steps yielding the goal

We are now in a position to apply lemma (B) to break up the conjunction. We now have three goals. The first goal is that \(\langle x = \mathtt {null} \rangle \) is precise, which follows from the fact that \(\mathsf {emp}\) is precise, which in turn can be proved using the rule \(\mathsf {precisely}\textsc {Right}\). The third goal is that the two branches of the disjunction are mutually incompatible, which follows from \(\langle x = \mathtt {null} \rangle \) being incompatible with maps-to using rule \(\mapsto \!\mathtt {null}\). The second (and last remaining) goal needs to use lemma (C) twice to break up the existentials. Two of the three new goals are to show that the two existentials are uniquely determined, which follow from \(\mapsto \!\textsc {inversion}\), leaving the goal

We now cut with lemma (D), using rule \(\mapsto \!\textsc {precise}\) to prove its premise, yielding

We now use the \(\rhd _\pi \!\star \) rule to defeat the inductive guard. The rest is straightforward. Further details on this proof can be found in the full paper [2].

6 The ShareInfer Fractional Biabduction Engine

Having described our logical machinery in Sects. 3, 4 and 5, we now demonstrate that our techniques are well-suited to automation by documenting our \(\textsf {ShareInfer}\) prototype [1]. Our tool is capable of checking whether a user-defined recursive predicate such as \(\textsf {list}\) or \(\textsf {tree}\) is uniform and/or precise and then conducting biabductive inference over a separation logic entailment containing said predicates.

To check uniformity, the tool first uses heuristics to guess a potential tree share candidate \(\pi \) and then applies proof rules in Figs. 7 and 6 to derive the goal \(\textsf {uniform}(\pi )\). To support more flexibility, our tool also allows users to specify the candidate share \(\pi \) manually. To check precision, the tool maneuvers over the proof rules in Figs. 6 and 8 to achieve the desired goal. In both cases, recursive predicates are handled with the rules in Fig. 9. \(\textsf {ShareInfer}\) returns either Yes, No or Unknown together with a human-readable proof of its claim.

Fig. 11. Evaluation of our proof systems using ShareInfer

For biabduction, \(\textsf {ShareInfer}\) automatically checks precision and uniformity whenever it encounters a new recursive predicate. If the check returns Yes, the tool unlocks the corresponding rule, i.e., \(\textsc {DotPlus}\) for precision and \(\textsc {DotStar}\) for uniformity. \(\textsf {ShareInfer}\) then matches fragments between the consequent and antecedent while applying folding and unfolding rules for recursive predicates to construct the antiframe and inference frame respectively. For instance, here is the biabduction problem contained in file \(\textsf {bi\_tree2}\) (see Fig. 11):

ShareInfer returns antiframe \(\mathcal {L}\cdot \textsf {tree}(d)\) and inference frame .

ShareInfer is around 2.5k LOC of Java. We benchmarked it with 27 selected examples from three categories: precision, uniformity, and biabduction. The benchmark was conducted on a machine with a 3.4 GHz processor and 16 GB of memory. Our results are given in Fig. 11. Despite the complexity of our proof rules, our performance is reasonable: \(\textsf {ShareInfer}\) took only 75.9 ms to run the entire example set, or around 2.8 ms per example. Our benchmark is small, but this performance indicates that more sophisticated separation logic verifiers such as HIP/SLEEK [14] or Infer [9] may be able to use our techniques at scale.

7 Building a Model for Our Logic

Our task now is to provide a model for our proof theories. We present our models in several parts. In Sect. 7.1 we begin with a brief review of Cancellative Separation Algebras (CSAs). In Sect. 7.2 we explain what we need from our fractional share models. In Sect. 7.3 we develop an extension to CSAs called “Scaling Separation Algebras” (SSAs), and in Sect. 7.4 we show how to build SSAs compositionally. In Sect. 7.5 we develop the machinery necessary to support our rules for object-level induction over the heap. We have verified in Coq [1] that the models in Sect. 7.1 support the rules in Fig. 8, the models in Sect. 7.3 support the rules in Figs. 3 and 7, and the models in Sect. 7.5 support the rules in Fig. 9.

7.1 Cancellative Separation Algebras

A Separation Algebra (SA) is a set H with an associative, commutative partial operation \(\oplus \). Separation algebras can have a single unit or multiple units; we use \( identity (x)\) to indicate that x is a unit. A Cancellative SA \(\langle H,\oplus \rangle \) further requires that \(a \oplus b_1 = c \Rightarrow a \oplus b_2 = c \Rightarrow b_1 = b_2\). We can define a partial order on H using \(\oplus \) by \(h_1 \subseteq h_2 \,{\mathop {=}\limits ^\mathrm{{def}}}\, \exists h'. h_1 \oplus h' = h_2\). Calcagno et al. [12] showed that CSAs can model separation logic with the definitions

$$ h \,\models \, \mathsf {emp} ~~ {\mathop {=}\limits ^\mathrm{{def}}}~~ identity (h) \qquad \quad h \,\models \, P \star Q ~~ {\mathop {=}\limits ^\mathrm{{def}}}~~ \exists h_1, h_2.~ h_1 \oplus h_2 = h \wedge (h_1 \,\models \, P) \wedge (h_2 \,\models \, Q) $$
The standard definition of \(\text {precise}(P)\) was given as Eq. (3) in Sect. 5.2, together with the definition for our new \(\mathsf {precisely}(P)\) operator in Eq. (4). What is difficult here is finding a set of axioms (Fig. 8) and derivable lemmas (e.g. Fig. 10) that are strong enough to be useful in the object-level inductive proofs. Once the axioms are found, proving them from the model given is straightforward. Cancellation is not necessary to model basic separation logic [18], but we need it to prove the introduction \(\mathsf {precisely}\textsc {Right}\) and elimination rules \(\mathsf {precisely}\textsc {Left}\) for our new operator.
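As a concrete (and, we stress, merely illustrative) instance, heaps as finite partial maps from addresses to values, with join as disjoint-domain union, form a CSA, and the semantic clauses for \(\mathsf {emp}\) and \(\star \) become executable. A Python sketch:

```python
from itertools import combinations

def join(h1, h2):
    """h1 (+) h2: union of heaps, defined only on disjoint domains."""
    if set(h1) & set(h2):
        return None                 # overlapping addresses: undefined
    return {**h1, **h2}

def sat_emp(h):
    """h |= emp: the empty heap is the (single) identity."""
    return h == {}

def sat_star(P, Q, h):
    """h |= P * Q: some split h1 (+) h2 = h with h1 |= P and h2 |= Q."""
    items = sorted(h.items())
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            h1 = dict(combo)
            h2 = {k: v for k, v in items if k not in h1}
            if P(h1) and Q(h2):
                return True
    return False

maps_to = lambda a, v: (lambda h: h == {a: v})   # a |-> v as a predicate
```

Cancellativity holds here because join is disjoint union: from \(a \oplus b_1 = a \oplus b_2\) the non-a portions must coincide.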

7.2 Fractional Share Algebras

A fractional share algebra \(\langle S,\oplus ,\otimes ,\mathcal {E},\mathcal {F} \rangle \) (FSA) is a set S with two operations: partial addition \(\oplus \) and total multiplication \(\otimes \). The substructure \(\langle S,\oplus \rangle \) is a CSA with the single unit \(\mathcal {E}\). For the reasons discussed in Sect. 2 we require that \(\oplus \) satisfies the disjointness axiom \(a \oplus a = b \Rightarrow a = \mathcal {E}\). Furthermore, we require the existence of a top element \(\mathcal {F}\), representing complete ownership, and assume that each element \(s \in S\) has a complement \(\overline{s}\) such that \(s \oplus \overline{s} = \mathcal {F}\).

Often (e.g. in the fractional \(\mapsto \) operator) we wish to restrict ourselves to the “positive shares” \(S^{+} \,{\mathop {=}\limits ^\mathrm{{def}}}\, S \setminus \{\mathcal {E}\}\). To emphasize that a share is positive we often use the metavariable \(\pi \) rather than s. \(\oplus \) is still associative, commutative, and cancellative; every element other than \(\mathcal {F}\) still has a complement. To enjoy a partial order on \(S^{+}\) and other SA- or CSA-like structures that lack identities (sometimes called “permission algebras”) we define \(\pi _1 \subseteq \pi _2 \,{\mathop {=}\limits ^\mathrm{{def}}}\, (\exists \pi '. \pi _1 \oplus \pi ' = \pi _2) \vee (\pi _1 = \pi _2)\).

For the multiplicative structure we require that \(\langle S,\otimes ,\mathcal {F} \rangle \) be a monoid, i.e. that \(\otimes \) is associative and has identity \(\mathcal {F}\). Since we restrict maps-tos and the permission scaling operator to be positive, we want \(\langle S^{+}, \otimes , \mathcal {F} \rangle \) to be a submonoid. Accordingly, when \(\{\pi _1,\pi _2\} \subset S^{+}\), we require that \(\pi _1 \otimes \pi _2 \ne \mathcal {E}\). Finally, we require that \(\otimes \) distributes over \(\oplus \) on the right, that is \((s_1 \oplus s_2) \otimes s_3 = (s_1 \otimes s_3) \oplus (s_2 \otimes s_3)\); and that \(\otimes \) is cancellative on the right given a positive left multiplicand, i.e. \(\pi \otimes s_1 = \pi \otimes s_2 \Rightarrow s_1 = s_2\).

The tree share model we present in Sect. 2 satisfies all of the above axioms, so we have a nontrivial model. As we will see shortly, it would be very convenient if we could assume that \(\otimes \) also distributed on the left, or if we had multiplicative inverses on the left rather than merely cancellation on the right. However, we will see in Sect. 8.2 that both assumptions are untenable.
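To make the tree share model of Sect. 2 concrete, the following Python sketch (our own rendering of the standard tree-share operations, not the paper's Coq development) represents a share as a Boolean leaf or a pair of subtrees, kept in the canonical form that folds pairs of identical leaves. We orient \(\otimes \) so that the right-distributivity axiom above holds: \(a \otimes b\) substitutes a for every full leaf of b.

```python
# Tree shares: True = full leaf, False = empty leaf, (l, r) = split node.
E, F = False, True

def canon(t):
    """Canonical form: fold nodes whose children are identical leaves."""
    if isinstance(t, bool):
        return t
    l, r = canon(t[0]), canon(t[1])
    if isinstance(l, bool) and l == r:
        return l
    return (l, r)

def join(a, b):
    """a (+) b: pointwise union, undefined (None) on overlapping full leaves."""
    if isinstance(a, bool) and isinstance(b, bool):
        return None if (a and b) else (a or b)
    a = (a, a) if isinstance(a, bool) else a   # unfold a leaf to match shapes
    b = (b, b) if isinstance(b, bool) else b
    l, r = join(a[0], b[0]), join(a[1], b[1])
    return None if (l is None or r is None) else canon((l, r))

def mult(a, b):
    """a (x) b: substitute a for every full leaf of b."""
    if isinstance(b, bool):
        return a if b else E
    return canon((mult(a, b[0]), mult(a, b[1])))

def comp(t):
    """Complement: flip every leaf."""
    if isinstance(t, bool):
        return not t
    return canon((comp(t[0]), comp(t[1])))

L, R = (True, False), (False, True)   # left and right half shares
```

Small experiments confirm the axioms discussed above: \(\oplus \) is disjoint, \(\mathcal {F}\) is a two-sided identity for \(\otimes \), right distributivity holds, and left distributivity already fails on the halves L and R.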

7.3 Scaling Separation Algebra

A scaling separation algebra (SSA) is \(\langle H,S,\oplus _H,\oplus _S,\otimes _S,\mathcal {E},\mathcal {F}, mul , force \rangle \), where \(\langle H,\oplus _H \rangle \) is a CSA for heaps and \(\langle S,\oplus _S,\otimes _S,\mathcal {E},\mathcal {F} \rangle \) is a FSA for shares. Intuitively, \( mul (\pi ,h_1)\) multiplies every share inside \(h_1\) by \(\pi \) and returns the result \(h_2\). The multiplication is on the left, so for each original share \(\pi '\) in \(h_1\), the resulting share in \(h_2\) is \(\pi \otimes _S \pi '\). Recall that the informal meaning of \(\pi \cdot P\) is that we have a \(\pi \)-fraction of predicate P. Formally this notion relies on a little trick:

$$\begin{aligned} h \,\models \, \pi \cdot P ~~ {\mathop {=}\limits ^\mathrm{{def}}}~~ \exists h'.~ mul (\pi , h') = h \wedge h' \,\models \, P \end{aligned}$$
(5)

A heap h contains a \(\pi \)-fraction of P if there is a bigger heap \(h'\) satisfying P, and multiplying that bigger heap \(h'\) by the scalar \(\pi \) gets back to the smaller heap h.

The simpler \( force (\pi ,h_1)\) overwrites all shares in \(h_1\) with the constant share \(\pi \) to reach the resulting heap \(h_2\). We use \( force \) to define the \(\mathsf {uniform}\) predicate as \(h \,\models \, \mathsf {uniform}(\pi ) \,{\mathop {=}\limits ^\mathrm{{def}}}\, force (\pi ,h) = h\). A heap h is \(\pi \)-uniform when setting all the shares in h to \(\pi \) gets you back to h, i.e., the shares must have been \(\pi \) to begin with.
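Both operators are easy to prototype. In the sketch below (our illustration; we use rational shares purely for readability, although the paper's disjoint model is tree shares) a heap maps addresses to share/value pairs, and Eq. (5) becomes a search for the scaled-up witness heap \(h'\):

```python
from fractions import Fraction

def mul(pi, h):
    """Multiply every share in h by pi (on the left)."""
    return {a: (pi * s, v) for a, (s, v) in h.items()}

def force(pi, h):
    """Overwrite every share in h with the constant share pi."""
    return {a: (pi, v) for a, (s, v) in h.items()}

def uniform(pi, h):
    """h |= uniform(pi), via the definition force(pi, h) = h."""
    return force(pi, h) == h

def sat_scaled(pi, P, h):
    """h |= pi . P (Eq. 5): some h' with mul(pi, h') = h and h' |= P.
    With rational shares h' can be recovered by dividing each share by pi."""
    h_prime = {a: (s / pi, v) for a, (s, v) in h.items()}
    return mul(pi, h_prime) == h and P(h_prime)

# x |-1-> 7 as a predicate, and the heap holding half of it
full_cell = lambda h: h == {"x": (Fraction(1), 7)}
half_heap = {"x": (Fraction(1, 2), 7)}
```

Note that the division trick relies on \(\mathbb {Q}\) having multiplicative inverses; disjoint models lack them (Sect. 8.2), which is why the formal definition in Eq. (5) uses an existential rather than an inverse.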

Fig. 12. The 14 additional axioms for scaling separation algebras beyond those inherited from cancellative separation algebras

We need to understand how all of the ingredients in an SSA relate to each other to prove the core logical rules on page 13. We distill the various relationships we need to model our logic in Fig. 12. Although there are a goodly number of them, most are reasonably intuitive.

Axioms \(S_1\) through \(S_4\) describe how \( force \) and \( mul \) compose with each other. Axioms \(S_5\), \(S_9\), and \(S_{10}\) give conditions under which \( force \) and \( mul \) are identity functions: when either is applied to empty heaps, and when \( mul \) is applied to the multiplicative identity on shares \(\mathcal {F}\). Axioms \(S_6\) and \(S_{12}\) relate heap order with forcing the full share \(\mathcal {F}\) and multiplication by an arbitrary share \(\pi \). Axiom \(S_7\) says that \( force \) is order-preserving. Axiom \(S_8\) is how the disjointness axiom on shares is expressed on heaps: when two \(\pi \)-uniform heaps are joined, the result is \(\pi \)-uniform. Axiom \(S_{11}\) says that \( mul \) is injective on heaps. Axiom \(S_{13}\) is delicate. In the \(\Rightarrow \) direction, it states that \( mul \) preserves the share model’s join structure on heaps. In the \(\Leftarrow \) direction, \(S_{13}\) is similar to axiom \(S_8\), saying that the share model’s join structure must be preserved. Taking both directions together, \(S_{13}\) translates the right distributivity of \(\otimes _S\) over \(\oplus _S\) into heaps. The final axiom \(S_{14}\) is a bit of a compromise. We wish we could satisfy

$$\begin{aligned} S_{14}'. \qquad a \oplus _H b = c ~~ \Leftrightarrow ~~ mul (\pi ,a) \oplus _H mul (\pi ,b) = mul (\pi ,c) \end{aligned}$$

\(S_{14}'\) is a kind of dual for \(S_{13}\): it would translate a left distributivity property of \(\otimes _S\) over \(\oplus _S\) in the share model into heaps. Unfortunately, as we will see in Sect. 8.2, the disjointness of \(\oplus _S\) is incompatible with simultaneously supporting both left and right distributivity. Accordingly, \(S_{14}\) weakens \(S_{14}'\) so that it only holds when a and b are \(\pi '\)-uniform (which by \(S_8\) forces c to be \(\pi '\)-uniform). We also wish we could satisfy \(S_{15}'\): \(\forall \pi , a.~\exists b.~ mul (\pi ,b) = a\), which corresponds to left multiplicative inverses, but again (Sect. 8.2) disjointness makes this untenable.

7.4 Compositionality of Scaling Separation Algebras

Despite their complex axiomatization, we gain two advantages from developing SSAs rather than directly proving our logical axioms on a concrete model. First, they give us a precise understanding of exactly which operations and properties (\(S_1\)–\(S_{14}\)) are used to prove the logical axioms. Second, following Dockins et al. [21] we can build up large SSAs compositionally from smaller SSAs.

To do so cleanly it will be convenient to consider a slight variant of SSAs, “Weak SSAs” that allow, but do not require, the existence of identity elements in the underlying CSA model. A WSSA satisfies exactly the same axioms as an SSA, except that we use the weaker \(\subseteq _H\) definition we defined for permission algebras, i.e. \(a_1 \subseteq _H a_2 \,{\mathop {=}\limits ^\mathrm{{def}}}\, (\exists a'. a_1 \oplus _H a' = a_2) \vee (a_1 = a_2)\). Note that \(S_5\) and \(S_9\) are vacuously true when the CSA does not have identity elements. We need identity elements to prove the logical axioms from the model; we only use WSSAs to gain compositionality as we construct a suitable final SSA. Keeping the share components \(\langle S,\oplus _S,\otimes _S,\mathcal {E},\mathcal {F} \rangle \) constant, we give three SSA constructors to get a flavor for what we can do with the remaining components \(\langle H,\oplus _H, force , mul \rangle \).

Example 1

(Shares). The share model \(\langle S, \oplus _S \rangle \) is an SSA, and the positive (non-\(\mathcal {E}\)) shares \(\langle S^{+}, \oplus \rangle \) are a WSSA, with \( force _S(\pi ,\pi ') {\mathop {=}\limits ^\mathrm{{def}}}\pi \) and \( mul _S(\pi ,\pi ') {\mathop {=}\limits ^\mathrm{{def}}}\pi \otimes \pi '\).

Example 2

(Semiproduct). Let \(\langle A,\oplus _A, force _A, mul _A \rangle \) be an SSA/WSSA, and B be a set. Define \((a_1,b_1) \oplus _{A \times B} (a_2,b_2) = (a_3,b_3) {\mathop {=}\limits ^\mathrm{{def}}}a_1 \oplus _A a_2 = a_3 \wedge b_1 = b_2 = b_3\), \( force _{A \times B}(\pi ,(a,b)) {\mathop {=}\limits ^\mathrm{{def}}}( force _A(\pi ,a),b)\), and \( mul _{A \times B}(\pi ,(a,b)) {\mathop {=}\limits ^\mathrm{{def}}}( mul _A(\pi ,a),b)\). Then \(\langle A \times B, \oplus _{A \times B}, force _{A \times B} , mul _{A \times B} \rangle \) is an SSA/WSSA.

Example 3

(Finite partial map). Let A be a set and \(\langle B,\oplus _B, force _B, mul _B \rangle \) be an SSA/WSSA. Define \(f \oplus _{A {\mathop {\rightharpoonup }\limits ^{\mathsf {fin}}} B} g = h\) pointwise [21]. Define \( force _{A {\mathop {\rightharpoonup }\limits ^{\mathsf {fin}}} B}(\pi ,f) {\mathop {=}\limits ^\mathrm{{def}}}\lambda x. force _B(\pi ,f(x))\) and likewise define \( mul _{A {\mathop {\rightharpoonup }\limits ^{\mathsf {fin}}} B}(\pi ,f) {\mathop {=}\limits ^\mathrm{{def}}}\lambda x. mul _B(\pi , f(x))\). The structure \(\langle A {\mathop {\rightharpoonup }\limits ^{\mathsf {fin}}} B,{\oplus \!}_{A {\mathop {\rightharpoonup }\limits ^{\mathsf {fin}}} B}, force\! _{A {\mathop {\rightharpoonup }\limits ^{\mathsf {fin}}} B}, mul\! _{A {\mathop {\rightharpoonup }\limits ^{\mathsf {fin}}} B} \rangle \) is an SSA.

Using these constructors, \(A {\mathop {\rightharpoonup }\limits ^{\mathsf {fin}}} (S^{+},V)\), i.e. finite partial maps from addresses to pairs of positive shares and values, is an SSA and thus can support a model for our logic. We also support other standard constructions e.g. sum types \(+\).
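Example 3 is mechanical enough to prototype directly: given the base operations of any SSA/WSSA, the finite-partial-map constructor lifts them pointwise. A Python sketch (our illustration, instantiated for concreteness with rational shares playing the role of the base SSA of Example 1):

```python
from fractions import Fraction

# Base SSA on rational shares (cf. Example 1, with Q standing in for S):
# force_S(pi, s) = pi and mul_S(pi, s) = pi * s.
def join_S(s1, s2):
    return s1 + s2 if s1 + s2 <= 1 else None
force_S = lambda pi, s: pi
mul_S = lambda pi, s: pi * s

def lift(join_B, force_B, mul_B):
    """Finite-partial-map constructor (Example 3): lift base ops pointwise."""
    def join(f, g):
        h = dict(f)
        for k, v in g.items():
            if k in h:
                j = join_B(h[k], v)    # overlapping keys join in B
                if j is None:
                    return None        # undefined pointwise => undefined
                h[k] = j
            else:
                h[k] = v
        return h
    force = lambda pi, f: {k: force_B(pi, v) for k, v in f.items()}
    mul = lambda pi, f: {k: mul_B(pi, v) for k, v in f.items()}
    return join, force, mul

join_H, force_H, mul_H = lift(join_S, force_S, mul_S)
```

The lifted `force_H` and `mul_H` act pointwise on every binding, and the lifted join is undefined exactly when some shared key fails to join in the base structure.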

7.5 Model for Inductive Logic

What remains is to give the model that yields the inductive logic in Fig. 9. The key induction guard modal \(\rhd _\pi \) operator is defined as follows:

$$ \begin{array}{lcl} h_1 ~ S_\pi ~ h_4 &{} ~~ {\mathop {=}\limits ^\mathrm{{def}}}~~ &{} \exists h_2, h_3.~ h_1 \supseteq _H h_2 \wedge h_3 \oplus _H h_4 = h_2 \wedge (h_3 \,\models \, \mathsf {uniform}(\pi ) \wedge \lnot \mathsf {emp})\\ h \,\models \, \rhd _\pi P &{} ~~ {\mathop {=}\limits ^\mathrm{{def}}}~~ &{} \forall h'.~ (h ~ S_\pi ~ h') \Rightarrow (h' \,\models \, P) \end{array} $$

In other words, \(\rhd _\pi \) is a (boxy) modal operator over the relation \(S_\pi \), which relates a heap \(h_1\) to all strict subheaps that are smaller by at least a \(\pi \)-piece. The model is a little subtle in order to enable the rules \(\rhd _\pi \!\circledcirc \) and \(\circledcirc \!\rhd _\pi \), which let us handle multiple recursive calls and simplify the engineering. The within operator \(\circledcirc \) is much simpler to model:

$$ h_1 ~ W ~ h_2 ~~ {\mathop {=}\limits ^\mathrm{{def}}}~~ h_1 \supseteq _H h_2 \qquad \qquad h \,\models \, \circledcirc P ~~ {\mathop {=}\limits ^\mathrm{{def}}}~~ \forall h'.~ (h ~ W ~ h') \Rightarrow (h' \,\models \, P) $$

All of the rules in Fig. 9 follow from these definitions except for rule W. To prove this rule, we require that the heap model have an additional operator. The “\(\pi \)-quantum”, written \(|h|_\pi \), gives the number of times a non-empty \(\pi \)-sized piece can be taken out of h. For disjoint shares, the number of times is no more than the number of defined memory locations in h. We require two facts for \(|h|_\pi \). First, that \(h_1 \subseteq _H h_2 \Rightarrow |h_1|_\pi \le |h_2|_\pi \), i.e. that subheaps do not have larger \(\pi \)-quanta than their parent. Second, that \(h_1 \oplus _H h_2 = h_3 \Rightarrow (h_2 \,\models \, \mathsf {uniform}(\pi ) \wedge \lnot \mathsf {emp}) \Rightarrow |h_3|_\pi > |h_1|_\pi \), i.e. that taking out a \(\pi \)-piece strictly decreases the number of \(\pi \)-quanta. Given this setup, rule W follows immediately by induction on \(|h|_\pi \). The rules that require the longest proofs in the model are \(\rhd _\pi \!\circledcirc \) and \(\circledcirc \!\rhd _\pi \).
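The \(\pi \)-quantum is only axiomatized above; to make it concrete, here is one way it might be realized in a toy rational model (our illustration, not the paper's construction): a cell holding share a yields at most \(\lfloor a/\pi \rfloor \) single-cell \(\pi \)-pieces, and \(|h|_\pi \) sums these over the heap. Both required facts are then easy to observe on examples.

```python
from fractions import Fraction

def quantum(h, pi):
    """|h|_pi in the rational toy model: each cell with share a contributes
    floor(a / pi) single-cell pi-pieces (Fraction // Fraction is an int)."""
    return sum(int(a // pi) for (a, _v) in h.values())

def remove_piece(h, x, pi):
    """Take one single-cell pi-piece out of h at address x; the cell
    disappears if its share is exhausted."""
    a, v = h[x]
    rest = a - pi
    h2 = {y: av for y, av in h.items() if y != x}
    if rest > 0:
        h2[x] = (rest, v)
    return h2
```

On \(h = \{1 \mapsto (\tfrac{3}{4},v),\ 2 \mapsto (\tfrac{1}{2},w)\}\) with \(\pi = \tfrac{1}{4}\), the quantum is 5; removing a \(\pi \)-piece strictly decreases it, and subheaps never have a larger quantum than their parent, matching the two required facts.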

8 Lower Bounds on Predicate Multiplication

In Sect. 7 we gave a model for the logical axioms we presented in Fig. 3 and on page 13. Our goal here is to show that it is difficult to do better, e.g. by having a premise-free DotStar rule or a bidirectional DotImpl rule. In Sect. 8.1 we show that these logical rules force properties on the share model. In Sect. 8.2 we show that disjointness restricts the class of share models: there are no non-trivial models that have left inverses or that satisfy both left and right distributivity.

8.1 Predicate Multiplication’s Axioms Force Share Model Properties

The SSA structures we gave in Sect. 7.3 are good for building models that enable the rules for predicate multiplication from Fig. 3. However, since they impose intermediate algebraic and logical signatures between the concrete model and the rules for predicate multiplication, they are not good for showing that we cannot do better. Accordingly, here we disintermediate and focus on the concrete model \(A {\mathop {\rightharpoonup }\limits ^{\mathsf {fin}}} (S^{+},V)\), that is, finite partial maps from addresses to pairs of positive shares and values. The join operation on heaps operates pointwise [21], with \((\pi _1,v_1) \oplus (\pi _2,v_2) = (\pi _3, v_3) {\mathop {=}\limits ^\mathrm{{def}}}\pi _1 \oplus _S \pi _2 = \pi _3 \wedge v_1 = v_2 = v_3\), from which we derive the usual SA model for \(\star \) and \(\mathsf {emp}\) (Sect. 7.1). We define \(h \,\models \, \,\,x {\mathop {\mapsto }\limits ^{\pi }} y\,\, \,{\mathop {=}\limits ^\mathrm{{def}}}\, dom (h) = \{x\} \wedge h(x) = (\pi ,y)\). We define scalar multiplication over heaps \(\otimes _H\) pointwise as well, with \(\pi _1 \otimes (\pi _2, v) \,{\mathop {=}\limits ^\mathrm{{def}}}\, (\pi _1 \otimes _S \pi _2, v)\), and then define predicate multiplication by \(h \,\models \, \pi \cdot P \,{\mathop {=}\limits ^\mathrm{{def}}}\, \exists h'.~ \pi \otimes _H h' = h \wedge h' \,\models \, P\). All of the above definitions are standard except for \(\otimes _H\), which strikes us as the only choice (up to commutativity), and predicate multiplication itself.
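As a sanity check, these definitions can be executed in a toy rational model (again our illustration, with \(\otimes _S\) as ordinary multiplication): \(\otimes _H\) acts pointwise on shares, and for the maps-to predicate we can verify the DotMapsTo-style equivalence \(\pi \cdot (x {\mathop {\mapsto }\limits ^{\sigma }} y) \dashv \vdash x {\mathop {\mapsto }\limits ^{\pi \otimes \sigma }} y\) on concrete heaps. Note the checker inverts \(\otimes _H\) by dividing shares, a shortcut available only because rationals have multiplicative inverses, exactly the structure Sect. 8.2 argues a disjoint share model cannot have.

```python
from fractions import Fraction

def mul_heap(pi, h):
    """Scalar multiplication over heaps: multiply each share pointwise."""
    return {x: (pi * a, v) for x, (a, v) in h.items()}

def maps_to(x, pi, y):
    """Semantics of x |->pi y: the predicate holding of exactly one heap."""
    return lambda h: h == {x: (pi, y)}

def scale(pi, pred):
    """h |= pi . P  iff  exists h' with mul_heap(pi, h') = h and h' |= P.
    We find the witness h' by dividing each share by pi -- this relies on
    rational inverses and would not work in a disjoint share model."""
    def holds(h):
        h_prime = {x: (a / pi, v) for x, (a, v) in h.items()}
        return pred(h_prime)
    return holds
```

For example, the heap \(\{10 \mapsto (\tfrac{1}{8}, y)\}\) satisfies both \(\tfrac{1}{2} \cdot (10 {\mathop {\mapsto }\limits ^{1/4}} y)\) and \(10 {\mathop {\mapsto }\limits ^{1/8}} y\), as the equivalence predicts.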

By Sect. 7 we already know that this model satisfies the rules for predicate multiplication, given the assumptions on the share model from Sect. 7.2. What is interesting is that we can prove the other direction: if we assume that the key logical rules from Fig. 3 hold, they force axioms on the share model. The key correspondences are: DotFull forces that \(\mathcal {F}\) is the left identity of \(\otimes _S\); DotMapsTo forces that \(\mathcal {F}\) is the right identity of \(\otimes _S\); DotDot forces the associativity of \(\otimes _S\); the \(\dashv \) direction of DotConj forces the right cancellativity of \(\otimes _S\) (as do DotImpl and the \(\dashv \) direction of DotUniv); and DotPlus forces the right distributivity of \(\otimes _S\) over \(\oplus _S\).

The following rules force left distributivity of \(\otimes _S\) over \(\oplus _S\) and left \(\otimes _S\) inverses:

The \(\dashv \) direction of \(\textsc {DotStar}'\) also forces that \(\oplus _S\) satisfies disjointness; this is the key reason that we cannot use rationals \(\langle (0,1],+,\times \rangle \). Clearly the side-condition-free \(\textsc {DotStar}'\) rule is preferable to the DotStar in Fig. 3, and it would also be preferable to have bidirectionality for predicate multiplication over implication and negation. Unfortunately, as we will see shortly, the disjointness of \(\oplus _S\) places strong multiplicative algebraic constraints on the share model. These constraints are the reason we cannot support the \(\textsc {DotImpl}'\) rule and why we require the \(\pi '\)-uniformity side condition in our DotStar rule.
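The failure of disjointness for rationals can be seen in a few lines (a toy rational model of our own, for illustration): taking \(P = x {\mathop {\mapsto }\limits ^{\mathcal {F}}} v\), the assertion \(P \star P\) is unsatisfiable, yet \((\tfrac{1}{2} \cdot P) \star (\tfrac{1}{2} \cdot P)\) is satisfied by the fully owned cell, so the \(\dashv \) direction of \(\textsc {DotStar}'\) fails.

```python
from fractions import Fraction

# Toy rational share model (illustration only): shares are rationals in
# (0,1]; join is addition when the sum stays within the full share 1.
FULL = Fraction(1)
EMPTY = Fraction(0)

def join_share(a, b):
    s = a + b
    return s if s <= FULL else None

# Disjointness demands: a (+) a defined  ==>  a = EMPTY. Rationals violate
# it, since any non-empty share <= 1/2 joins with itself.
half = Fraction(1, 2)
assert join_share(half, half) == FULL and half != EMPTY

# Consequence for DotStar': with P = x |->FULL v, P * P is unsatisfiable,
# because the full share does not join with itself ...
assert join_share(FULL, FULL) is None

# ... yet (half . P) * (half . P) holds of the single-cell heap carrying
# FULL, split into two half-owned copies. Hence (half . P) * (half . P)
# does not entail half . (P * P) in the rational model.
assert join_share(half * FULL, half * FULL) == FULL
```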

8.2 Disjointness in a Multiplicative Setting

Our goal now is to explore the algebraic consequences of the disjointness property in a multiplicative setting. Suppose \(\langle S,\oplus \rangle \) is a CSA with a single unit \(\mathcal {E}\), top element \(\mathcal {F}\), and \(\oplus \)-complements \(\overline{s}\). Suppose further that shares satisfy the disjointness property \(a \oplus a = b \Rightarrow a = \mathcal {E}\). For the multiplicative structure, assume \(\langle S,\otimes ,\mathcal {F} \rangle \) is a monoid (i.e. the axioms forced by the DotDot, DotMapsTo, and DotFull rules). It is undesirable for a share model if multiplying two positive shares (e.g. the ability to read a memory cell) results in the empty permission, so we assume that whenever \(\pi _1\) and \(\pi _2\) are non-\(\mathcal {E}\), their product \(\pi _1 \otimes \pi _2 \ne \mathcal {E}\).

Now add left or right distributivity. We choose right distributivity \((s_1 \oplus s_2) \otimes s_3 = (s_1 \otimes s_3) \oplus (s_2 \otimes s_3)\); the situation is mirrored with left. Let us show that we cannot have a left inverse for any \(\pi \ne \mathcal {F}\). We proceed by contradiction: suppose \(\pi \ne \mathcal {F}\) and there exists \(\pi ^{-1}\) such that \(\pi ^{-1} \otimes \pi = \mathcal {F}\). Then

$$ \pi = \mathcal {F}\otimes \pi = (\pi ^{-1} \oplus \overline{\pi ^{-1}}) \otimes \pi = (\pi ^{-1} \otimes \pi ) \oplus (\overline{\pi ^{-1}} \otimes \pi ) = \mathcal {F}\oplus (\overline{\pi ^{-1}} \otimes \pi ) $$

Let \(e = \overline{\pi ^{-1}} \otimes \pi \). Now \(\pi = \mathcal {F}\oplus e = (\overline{e} \oplus e) \oplus e\), which by associativity and disjointness forces \(e = \mathcal {E}\), which in turn forces \(\pi = \mathcal {F}\), a contradiction.

Now suppose that instead of adding multiplicative inverses we have both left and right distributivity. First we prove (Lemma 1) that for arbitrary \(s \in S\), \(s \otimes \overline{s} = \overline{s} \otimes s\). We calculate:

$$ (s \otimes s) \oplus (s \otimes \overline{s}) = s \otimes (s \oplus \overline{s}) = s \otimes \mathcal {F}= s = \mathcal {F}\otimes s = (s \oplus \overline{s}) \otimes s = (s \otimes s) \oplus (\overline{s} \otimes s) $$

Lemma 1 follows by the cancellativity of \(\oplus \) between the far left and the far right.

Now we show (Lemma 2) that \(s \otimes \overline{s} = \mathcal {E}\). We calculate:

$$\begin{aligned} \mathcal {F}= \mathcal {F}\otimes \mathcal {F}= (s \oplus \overline{s}) \otimes (s \oplus \overline{s})&= (s \otimes s) \oplus (s \otimes \overline{s}) \oplus (\overline{s} \otimes s) \oplus (\overline{s} \otimes \overline{s})\\&=(s \otimes s) \oplus \underline{(s \otimes \overline{s}) \oplus (s \otimes \overline{s})} \oplus (\overline{s} \otimes \overline{s}) \end{aligned}$$

The final equality is by Lemma 1. The underlined portion implies \(s \otimes \overline{s} = \mathcal {E}\) by disjointness. The upshot of Lemma 2, together with our requirement that the product of two positive shares be positive, is that we can have no more than the two elements \(\mathcal {E}\) and \(\mathcal {F}\) in our share model. Since the entire motivation for fractional share models is to allow ownership between \(\mathcal {E}\) and \(\mathcal {F}\), we must choose either left or right distributivity; we choose right since we are able to prove that the \(\pi '\)-uniformity side condition enables the bidirectional DotStar.
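The conclusion can be checked mechanically on the maximal model: the sketch below (a hypothetical encoding of our own, with symbols `E` and `F` for \(\mathcal {E}\) and \(\mathcal {F}\)) brute-forces the two-element share model and confirms it satisfies disjointness, the monoid laws, positivity of products, and both distributivities at once, consistent with Lemma 2's bound that no larger disjoint model can have both.

```python
from itertools import product

E, F = 'E', 'F'
S = [E, F]

def add(a, b):
    """Partial join: E is the unit; F (+) F is undefined (None)."""
    if a == E:
        return b
    if b == E:
        return a
    return None  # F (+) F undefined

def mul(a, b):
    """Multiplication: F is the unit and E is absorbing."""
    return E if E in (a, b) else F

# Disjointness: a (+) a defined implies a = E.
assert all(add(a, a) is None or a == E for a in S)

# Monoid laws for mul with unit F, and positivity of products.
assert all(mul(F, a) == a == mul(a, F) for a in S)
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a, b, c in product(S, repeat=3))
assert mul(F, F) != E  # product of positive shares stays positive

# Both distributivities hold wherever the joins involved are defined.
for a, b, c in product(S, repeat=3):
    if add(a, b) is not None:
        assert mul(add(a, b), c) == add(mul(a, c), mul(b, c))  # right
        assert mul(c, add(a, b)) == add(mul(c, a), mul(c, b))  # left
```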

9 Related Work

Fractional permissions are used to reason about resource ownership in concurrent programs. The well-known rational model \(\langle [0,1],+ \rangle \) of Boyland [5] was introduced to reason about join-fork programs. This structure has the disjointness problem mentioned in Sect. 2, first noticed by Bornat et al. [4], as well as other problems discussed in Sects. 3, 4, and [2]. Boyland [6] extended the framework to scale permissions uniformly over arbitrary predicates with multiplication, defining \(\pi \cdot P\) as "multiply each permission \(\pi '\) in P by \(\pi \)". However, his framework does not fit into SL and his scaling rules are not bidirectional. Jacobs and Piessens [28] also used rationals to scale permissions \(\pi \cdot P\) in SL but only obtained one direction of \(\textsc {DotStar}\) and \(\textsc {DotPlus}\). Dinsdale-Young et al. [20] used a different kind of scaling permission: rational assertions \([A]_\pi ^r\) indicating that a thread with permission \(\pi \) can execute the action A over the shared region r.

There are other flavors of permission besides rationals. Bornat et al. [4] introduced integer counting permissions \(\langle \mathbb {Z},+,0 \rangle \) to reason about semaphores and combined rationals and integers into a hybrid permission model. Heule et al. [23] allowed permissions to be either concretely rational or abstractly read-only, lowering the nuisance of detailed accounting. A more general read-only permission was proposed by Charguéraud and Pottier [13], which transforms a predicate P into a read-only mode \(\mathsf {RO}(P)\) that can be duplicated/merged with the bi-entailment \(\mathsf {RO}(P) \dashv \vdash \mathsf {RO}(P) \star \mathsf {RO}(P)\). Their permissions distribute pleasantly over disjunction and existential quantification but only work one way over \(\star \), i.e., \(\mathsf {RO}(H_1 \star H_2) \vdash \mathsf {RO}(H_1) \star \mathsf {RO}(H_2)\). Parkinson [41] proposed subsets of the natural numbers \(\langle \mathcal {P}(\mathbb {N}),\uplus \rangle \) as shares to fix the disjointness problem. Compared to tree shares, Parkinson's model is less practical computationally and does not have an obvious multiplicative structure.

Protocol-based logics like FCSL [38] and Iris [30] have been very successful at reasoning about fine-grained concurrent programs, but their high expressivity results in a heavyweight logic. Automation (e.g. the kind of inference we do in Sect. 4) has been hard to come by. We believe that fractional permissions and protocol-based logics are, in a meaningful sense, complementary rather than competitors.

Verification tools often implement rational permissions because of their simplicity. For example, VeriFast [29] uses rationals to verify programs with locks and semaphores. It also allows a simple but restricted form of scaling permissions which can be applied uniformly over standard predicates. On the other hand, HIP/SLEEK [31] uses rationals to model "threads as resources" so that the ownership of a thread and its resources can be transferred. Chalice [36] has rational permissions to verify properties of multi-threaded, object-based programs such as the absence of data races and deadlocks. Viper [37] has an expressive intermediate language that supports both rational and abstract permissions. However, a number of verification tools have chosen tree shares due to their better metatheoretical properties. VST [3] is equipped with tree share permissions and an extensive tree share library. HIP/SLEEK uses tree shares to verify barrier structures [26] and has its own complete share solver [33, 35] that reduces tree share formulae to Boolean formulae handled by Z3 [17]. Lastly, tree share permissions are featured in Heap-Hop [47] to reason about asynchronous communication.

10 Conclusion

We presented a separation logic proof framework to reason about resource sharing using fractional permissions in concurrent verification. We support sophisticated verification tasks such as inductive predicates, proving predicates precise, and biabduction. We wrote ShareInfer to gauge how our theories could be automated. We developed scaling separation algebras as compositional models for our logic. We investigated why our logic cannot support certain desirable properties.