An Assertion-Based Program Logic for Probabilistic Programs
Abstract
We present Ellora, a sound and relatively complete assertion-based program logic, and demonstrate its expressivity by verifying several classical examples of randomized algorithms using an implementation in the EasyCrypt proof assistant. Ellora features new proof rules for loops and adversarial code, and supports richer assertions than existing program logics. We also show that Ellora allows convenient reasoning about complex probabilistic concepts by developing a new program logic for probabilistic independence and distribution laws, and then smoothly embedding it into Ellora.
Keywords
Investigation Program · EasyCrypt · Ellora · Proof Rules · Adversarial Code
1 Introduction
The most mature systems for deductive verification of randomized algorithms are expectation-based techniques; seminal examples include PPDL [28] and pGCL [34]. These approaches reason about expectations, functions E from states to real numbers,^{1} propagating them backwards through a program until they are transformed into a mathematical function of the input. Expectation-based systems are both theoretically elegant [16, 23, 24, 35] and practically useful; implementations have verified numerous randomized algorithms [19, 21]. However, properties involving multiple probabilities or expected values can be cumbersome to verify—each expectation must be analyzed separately.
An alternative approach envisioned by Ramshaw [37] is to work with predicates over distributions. A direct comparison with expectation-based techniques is difficult, as the approaches are quite different. In broad strokes, assertion-based systems can verify richer properties in one shot and have specifications that are arguably more intuitive, especially for reasoning about loops, while expectation-based approaches can transform expectations mechanically and can reason about nondeterminism. However, the comparison is not very meaningful for an even simpler reason: existing assertion-based systems such as [8, 18, 38] are not as well developed as their expectation-based counterparts.

Restrictive Assertions. Existing probabilistic program logics do not support reasoning about expected values, only probabilities. As a result, many properties about average-case behavior are not even expressible.

Inconvenient Reasoning for Loops. The Hoare logic rule for deterministic loops does not directly generalize to probabilistic programs. Existing assertion-based systems either forbid loops, or impose complex semantic side conditions to control which assertions can be used as loop invariants. Such side conditions are restrictive and difficult to establish.

No Support for External or Adversarial Code. A strength of expectation-based techniques is reasoning about programs combining probabilities and nondeterminism. In contrast, Morgan and McIver [30] argue that assertion-based techniques cannot support compositional reasoning for such a combination. For many applications, including cryptography, we would still like to reason about a commonly encountered special case: programs using external or adversarial code. Many security properties in cryptography boil down to analyzing such programs, but existing program logics do not support adversarial code.

Few Concrete Implementations. There are by now several independent implementations of expectation-based techniques, capable of verifying interesting probabilistic programs. In contrast, there are only scattered implementations of probabilistic program logics.
 1. Can assertion-based approaches achieve similar expressivity?
 2. Are there situations where assertion-based approaches are more suitable?
In this paper, we give positive evidence for both of these points.^{2} Towards the first point, we give a new assertion-based logic Ellora for probabilistic programs, overcoming limitations in existing probabilistic program logics. Ellora supports a rich set of assertions that can express concepts like expected values and probabilistic independence, and novel proof rules for verifying loops and adversarial code. We prove that Ellora is sound and relatively complete.
Towards the second point, we evaluate Ellora in two ways. First, we define a new logic for proving probabilistic independence and distribution law properties—which are difficult to capture with expectation-based approaches—and then embed it into Ellora. This sublogic is more narrowly focused than Ellora, but supports more concise reasoning for the target assertions. Our embedding demonstrates that the assertion-based approach can be flexibly integrated with intuitive, special-purpose reasoning principles. To further support this claim, we also provide an embedding of the Union Bound logic, a program logic for reasoning about accuracy bounds [4]. Then, we develop a full-featured implementation of Ellora in the EasyCrypt theorem prover and exercise the logic by mechanically verifying a series of complex randomized algorithms. Our results suggest that the assertion-based approach can indeed be practically viable.
Abstract Logic. To ease the presentation, we present Ellora in two stages. First, we consider an abstract version of the logic where assertions are general predicates over distributions, with no compact syntax. Our abstract logic makes two contributions: new reasoning principles for loops, and for adversarial code. Our loop rules admit, as invariants:

arbitrary assertions for certainly terminating loops, i.e. loops that terminate within a bounded number of iterations;

topologically closed assertions for almost surely terminating loops, i.e. loops terminating with probability 1;

downwards closed assertions for arbitrary loops.
The definition of topologically closed assertion is reminiscent of Ramshaw [37]; the stronger notion of downwards closed assertion appears to be new.
Besides broadening the class of loops that can be analyzed, our rules often enable simpler proofs. For instance, if the loop is certainly terminating, then there is no need to prove semantic side-conditions. Likewise, there is no need to consider the termination behavior of the loop when the invariant is downwards and topologically closed. For example, in many applications in cryptography, the target property is that a “bad” event has low probability: \(\Pr {[E]} \le k\). In our framework this assertion is downwards and topologically closed, so it can be a loop invariant regardless of the termination behavior.
Reasoning About Adversaries. Existing assertion-based logics cannot reason about probabilistic programs with adversarial code. Adversaries are special probabilistic procedures consisting of an interface listing the concrete procedures that an adversary can call (oracles), along with restrictions like how many calls an adversary may make. Adversaries are useful in cryptography, where security notions are described using experiments in which adversaries interact with a challenger, and in game theory and mechanism design, where adversaries can represent strategic agents. Adversaries can also model inputs to online algorithms.
We provide proof rules for reasoning about adversary calls. Our rules are significantly more general than previously considered rules for reasoning about adversaries. For instance, the adversary rule used by [4] is restricted to adversaries that cannot make oracle calls.
Metatheory. We show soundness and relative completeness of the core abstract logic, with mechanized proofs in the Coq proof assistant.
Concrete Logic. While the abstract logic is conceptually clean, it is inconvenient for practical formal verification—the assertions are too general and the rules involve semantic side-conditions. To address these issues, we flesh out a concrete version of Ellora. Assertions are described by a grammar modeling a two-level assertion language. The first level contains state predicates—deterministic assertions about a single memory—while the second layer contains probabilistic predicates constructed from probabilities and expected values over discrete distributions. While the concrete assertions are theoretically less expressive than their counterparts in the abstract logic, they can already encode common properties and notions from existing proofs, like probabilities, expected values, distribution laws and probabilistic independence. Our assertions can express theorems from probability theory, enabling sophisticated reasoning about probabilistic concepts.
Furthermore, we leverage the concrete syntax to simplify verification.

We develop an automated procedure for generating preconditions of non-looping commands, inspired by expectation-based systems.

We give syntactic conditions for the closedness and termination properties required for soundness of the loop rules.
Embeddings. We propose a simple program logic for proving probabilistic independence. This logic is designed to reason about independence in a lightweight way, as is common in paper proofs. We prove that the logic can be embedded into Ellora, and is therefore sound. Furthermore, we prove an embedding of the Union Bound logic [4].
2 Mathematical Preliminaries
As is standard, we will model randomized computations using subdistributions.
Definition 1
A subdistribution over a set A is defined by a mass function \(\mu : A \rightarrow [0,1]\) that gives the probability of the unitary events \(a \in A\). This mass function must be s.t. the weight \(|\mu | \mathrel {{\mathop {=}\limits ^{\scriptscriptstyle \triangle }}}\sum _{a\in A} \mu (a)\) is well-defined and \(|\mu | \le 1\). In particular, the support \(\mathrm{supp}(\mu ) \mathrel {{\mathop {=}\limits ^{\scriptscriptstyle \triangle }}}\{ a \in A \mid \mu (a) \ne 0 \}\) is discrete.^{3} The name “subdistribution” emphasizes that the total probability may be strictly less than 1. When the weight \(|\mu |\) is equal to 1, we call \(\mu \) a distribution. We let \(\mathbf {SDist}(A)\) denote the set of subdistributions over A. The probability of an event E(x) w.r.t. a subdistribution \(\mu \), written \(\Pr _{x \sim \mu } [E(x)]\), is defined as \(\sum _{x \in A \mid E(x)} \mu (x)\).
Simple examples of subdistributions include the null subdistribution \(\mathbf {0}\), which maps each element of the underlying space to 0; and the Dirac distribution centered on x, written \({{\delta }^{}_{x}}\), which maps x to 1 and all other elements to 0. The following standard construction gives a monadic structure to subdistributions.
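To make these definitions concrete, the following sketch (our illustration only, not part of the formal development) models finite-support subdistributions as dictionaries of exact rationals; the names `weight`, `support`, `pr`, `dirac` and `null` are ours:

```python
from fractions import Fraction

def weight(mu):
    """Total probability |mu| = sum of the mass function."""
    return sum(mu.values())

def support(mu):
    """supp(mu): the elements with nonzero mass."""
    return {a for a, p in mu.items() if p != 0}

def pr(mu, event):
    """Pr_{x ~ mu}[event(x)]: the mass of the elements satisfying the event."""
    return sum(p for a, p in mu.items() if event(a))

def dirac(x):
    """Dirac distribution centered on x."""
    return {x: Fraction(1)}

null = {}  # the null subdistribution: every element has mass 0

coin = {0: Fraction(1, 2), 1: Fraction(1, 2)}
assert weight(coin) == 1                         # a (full) distribution
assert pr(coin, lambda x: x == 1) == Fraction(1, 2)
assert weight(null) == 0                         # total probability below 1 is allowed
```
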
Definition 2
Let \(\mu \in \mathbf {SDist}(A)\) and \(f : A \rightarrow \mathbf {SDist}(B)\). The monadic bind \(\mathbb {E}_{a \sim \mu } [{f}] \in \mathbf {SDist}(B)\) is defined by \(\mathbb {E}_{a \sim \mu } [{f}] (b) \mathrel {{\mathop {=}\limits ^{\scriptscriptstyle \triangle }}}\sum _{a \in A} \mu (a) \cdot f(a)(b)\).
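A minimal sketch of the standard monadic bind on finite-support subdistributions, in the same toy encoding (all names are ours):

```python
from fractions import Fraction

def bind(mu, f):
    """Monadic bind: run the probabilistic continuation f on each outcome of mu
    and average the results: (bind mu f)(b) = sum_a mu(a) * f(a)(b)."""
    out = {}
    for a, p in mu.items():
        for b, q in f(a).items():
            out[b] = out.get(b, Fraction(0)) + p * q
    return out

coin = {0: Fraction(1, 2), 1: Fraction(1, 2)}
# Flip a coin, then flip again only if the first flip came up 1.
two = bind(coin, lambda x: coin if x == 1 else {0: Fraction(1)})
assert two == {0: Fraction(3, 4), 1: Fraction(1, 4)}
```

Total mass can only shrink under bind, so subdistributions are closed under this construction.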
We will need two constructions to model branching statements.
Definition 3
Let \(\mu _1,\mu _2\in \mathbf {SDist}(A)\) such that \(\mu _1+\mu _2\le 1\). Then \(\mu _1+\mu _2\) is the subdistribution \(\mu \) such that \(\mu (a)=\mu _1(a)+\mu _2(a)\) for every \(a\in A\).
Definition 4
Let \(E \subseteq A\) and \(\mu \in \mathbf {SDist}(A)\). Then the restriction \({\mu }_{ {E}}\) of \(\mu \) to E is the subdistribution such that \({\mu }_{ {E}} (a)= \mu (a)\) if \(a\in E\) and 0 otherwise.
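Definitions 3 and 4 can be illustrated in the same toy encoding; `add` and `restrict` are our names for the sum and restriction constructions:

```python
from fractions import Fraction

def restrict(mu, event):
    """mu|_E: keep the mass on E, zero elsewhere."""
    return {a: p for a, p in mu.items() if event(a)}

def add(mu1, mu2):
    """Pointwise sum; a subdistribution only when the total mass stays <= 1."""
    out = dict(mu1)
    for a, p in mu2.items():
        out[a] = out.get(a, Fraction(0)) + p
    return out

die = {i: Fraction(1, 6) for i in range(1, 7)}
even = restrict(die, lambda x: x % 2 == 0)
odd = restrict(die, lambda x: x % 2 == 1)
# The two restrictions recombine to the original distribution, mirroring how
# the semantics of a conditional splits and merges the probabilistic state.
assert add(even, odd) == die
assert sum(even.values()) == Fraction(1, 2)
```
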
Subdistributions are partially ordered under the pointwise order.
Definition 5
Let \(\mu _1,\mu _2\in \mathbf {SDist}(A)\). We say \(\mu _1\le \mu _2\) if \(\mu _1(a) \le \mu _2(a)\) for every \(a\in A\), and we say \(\mu _1 = \mu _2\) if \(\mu _1(a) = \mu _2(a)\) for every \(a\in A\).
We use the following lemma when reasoning about the semantics of loops.
Lemma 1
If \(\mu _1\le \mu _2\) and \(|\mu _1|=1\), then \(\mu _1=\mu _2\) and \(|\mu _2|=1\).
Subdistributions are stable under pointwise limits.
Definition 6
A sequence \((\mu _n)_{n\in \mathbb {N}}\) of subdistributions over A converges to the subdistribution \(\mu _\infty \), written \(\lim _{n\rightarrow \infty }\mu _n = \mu _\infty \), if \(\lim _{n\rightarrow \infty }\mu _n(a) = \mu _\infty (a)\) for every \(a\in A\).
Lemma 2
Any bounded increasing real sequence has a limit; the same is true of subdistributions.
Lemma 3
Let \((\mu _n)_{n\in \mathbb {N}}\) be an increasing sequence of subdistributions in \(\mathbf {SDist}(A)\). Then, this sequence converges to a subdistribution \(\mu _\infty \) and \(\mu _n \le \mu _\infty \) for every \(n\in \mathbb {N}\). In particular, for any event E, we have \(\Pr _{x \sim \mu _n} [E]\le \Pr _{x \sim \mu _\infty } [E]\) for every \(n\in \mathbb {N}\).
3 Programs and Assertions
Now, we introduce our core programming language and its denotational semantics.
Semantics. The denotational semantics of programs is adapted from the seminal work of [27] and interprets programs as subdistribution transformers. We view states as type-preserving mappings from variables to values; we write \(\mathbf {State}\) for the set of states and \(\mathbf {SDist}(\mathbf {State})\) for the set of probabilistic states. For each procedure name \(f \in \mathcal {I}\cup \mathcal {A}\), we assume a set \(\mathcal {X}^{\mathfrak {L}}_{f} \subseteq \mathcal {X}\) of local variables s.t. \(\mathcal {X}^{\mathfrak {L}}_{f}\) are pairwise disjoint. The other variables \(\mathcal {X}\setminus \bigcup _f \mathcal {X}^{\mathfrak {L}}_{f}\) are global variables.
To define the interpretation of expressions and distribution expressions, we let \(\llbracket e \rrbracket _m\) denote the interpretation of expression e with respect to state m, and \(\llbracket e \rrbracket _\mu \) denote the interpretation of e with respect to an initial subdistribution \(\mu \) over states, defined by the clause \(\llbracket e \rrbracket _\mu (v) \mathrel {{\mathop {=}\limits ^{\scriptscriptstyle \triangle }}}\Pr _{m \sim \mu } [\llbracket e \rrbracket _m = v]\). Likewise, we define the semantics of commands in two stages: first interpreted in a single input memory, then interpreted in an input subdistribution over memories.
Definition 7
The semantics of commands are given in Fig. 1.

The semantics \(\llbracket s \rrbracket _m\) of a statement s in initial state m is a subdistribution over states.

The (lifted) semantics \(\llbracket s \rrbracket _\mu \) of a statement s in initial subdistribution \(\mu \) over states is a subdistribution over states.
We briefly comment on loops. The semantics of a loop \({\mathbf {while}}\,\, e \,\, {\mathbf {do}}\,\, c \) is defined as the limit of its lower approximations, where the nth lower approximation of \(\llbracket {\mathbf {while}}\,\, e \,\, {\mathbf {do}}\,\, c \rrbracket _\mu \) is \({(\llbracket ({\mathbf {if}}\,\, e\,\, {\mathbf {then}}\,\, c)^n \rrbracket _\mu )}_{ {\lnot e}}\), where \({\mathbf {if}}\,\, e\,\, {\mathbf {then}}\,\, s\) is shorthand for \({\mathbf {if}}\,\, e\,\, {\mathbf {then}}\,\, s\,\, {\mathbf {else}}\,\, {\mathbf {skip}}\) and \(c^n\) is the n-fold composition \(c;\cdots ;c\). Since the sequence is increasing, the limit is well-defined by Lemma 3. In contrast, the nth approximations \(\llbracket ({\mathbf {if}}\,\, e\,\, {\mathbf {then}}\,\, c)^n \rrbracket _\mu \) may not converge, since they are not necessarily increasing. However, in the special case where the output distribution has weight 1, the nth lower approximations and the nth approximations have the same limit.
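The limit construction can be observed concretely. The sketch below (our own illustration; the loop `while (x = 0) do x <$ coin` and all function names are ours) computes the nth lower approximations of a coin-flipping loop and shows their weights increasing toward 1:

```python
from fractions import Fraction

def coin():
    return {0: Fraction(1, 2), 1: Fraction(1, 2)}

def guarded_iter(mu):
    """One iteration of `if (x = 0) then x <$ coin`, on a distribution over x."""
    out = {}
    for x, p in mu.items():
        branch = coin() if x == 0 else {x: Fraction(1)}
        for y, q in branch.items():
            out[y] = out.get(y, Fraction(0)) + p * q
    return out

def lower_approx(n):
    """n-th lower approximation of `while (x = 0) do x <$ coin` from x = 0:
    run n guarded iterations, then keep only the mass where the guard is false."""
    mu = {0: Fraction(1)}
    for _ in range(n):
        mu = guarded_iter(mu)
    return {x: p for x, p in mu.items() if x != 0}

# The weights 1 - 2^-n increase toward 1: the loop is almost surely terminating
# and its semantics is the limit, the Dirac distribution on x = 1.
assert [sum(lower_approx(n).values()) for n in (1, 2, 3)] == \
       [Fraction(1, 2), Fraction(3, 4), Fraction(7, 8)]
```
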
Lemma 4
If \(|\llbracket {\mathbf {while}}\,\, e \,\, {\mathbf {do}}\,\, c \rrbracket _\mu | = 1\), then the nth lower approximations and the nth approximations of the loop converge to the same limit.
This follows by Lemma 1, since lower approximations are below approximations, so the limit of their weights (and the weight of their limit) is 1. It will be useful to identify programs that terminate with probability 1.
Definition 8
(Lossless). A statement s is lossless if for every subdistribution \(\mu \), \(|\llbracket s \rrbracket _\mu | = |\mu |\), where \(|\mu |\) is the total probability of \(\mu \). Programs that are not lossless are called lossy.
Informally, a program is lossless if all probabilistic assignments sample from full distributions rather than subdistributions, there are no \({\mathbf {abort}}\) instructions, and the program is almost surely terminating, i.e. infinite traces have probability zero. Note that if we restrict the language to sample from full distributions, then losslessness coincides with almost sure termination.
Another important class of loops are loops with a uniform upper bound on the number of iterations. Formally, we say that a loop \({\mathbf {while}}\,\, e \,\, {\mathbf {do}}\,\, s \) is certainly terminating if there exists k such that for every subdistribution \(\mu \), we have \(\llbracket {\mathbf {while}}\,\, e \,\, {\mathbf {do}}\,\, s \rrbracket _\mu = \llbracket ({\mathbf {if}}\,\, e\,\, {\mathbf {then}}\,\, s)^k \rrbracket _\mu \). Note that certain termination of a loop does not entail losslessness—the output distribution of the loop may not have weight 1, for instance, if the loop samples from a subdistribution or if the loop aborts with positive probability.
Semantics of Procedure Calls and Adversaries. The semantics of internal procedure calls is straightforward. Associated to each procedure name \(f \in \mathcal {I}\), we assume a designated input variable \({f}_{{\mathbf {arg}}} \in \mathcal {X}^{\mathfrak {L}}_{f}\), a piece of code \({f}_{{\mathbf {body}}}\) that executes the function call, and a result expression \({f}_{{\mathbf {res}}}\). A function call \(x := {\mathbf {call}}~f(e)\) is then equivalent to \({f}_{{\mathbf {arg}}} := e;~{f}_{{\mathbf {body}}};~x := {f}_{{\mathbf {res}}}\). Procedures are subject to wellformedness criteria: procedures should only use local variables in their scope and after initializing them, and should not perform recursive calls.
External procedure calls, also known as adversary calls, are a bit more involved. Each name \(a \in \mathcal {A}\) is parametrized by a set \({a}_{{\mathbf {ocl}}} \subseteq \mathcal {I}\) of internal procedures which the adversary may call, a designated input variable \({a}_{{\mathbf {arg}}} \in \mathcal {X}^{\mathfrak {L}}_{a}\), a (unspecified) piece of code \({a}_{{\mathbf {body}}}\) that executes the function call, and a result expression \({a}_{{\mathbf {res}}}\). We assume that adversarial code can only access its local variables in \(\mathcal {X}^{\mathfrak {L}}_{a}\) and can only make calls to procedures in \({a}_{{\mathbf {ocl}}}\). It is possible to impose more restrictions on adversaries—say, that they are lossless—but for simplicity we do not impose additional assumptions on adversaries here.
4 Proof System
In this section we introduce a program logic for proving properties of probabilistic programs. The logic is abstract—assertions are arbitrary predicates on subdistributions—but the metatheoretic properties are clearest in this setting. In the following section, we will give a concrete version suitable for practical use.
Assertions and Closedness Conditions. We use predicates on state distributions.
Definition 9
(Assertions). The set \(\mathsf {Assn}\) of assertions is defined as \(\mathcal {P}(\mathbf {SDist}(\mathbf {State}))\). We write \(\eta (\mu )\) for \(\mu \in \eta \).
Given an assertion \(\eta \) and an event \(E \subseteq \mathbf {State}\), we let \( {\eta }_{ {E}}(\mu )\mathrel {{\mathop {=}\limits ^{\scriptscriptstyle \triangle }}}\eta ({\mu }_{ {E}}) . \) This assertion holds exactly when \(\eta \) is true on the portion of the subdistribution satisfying E. Finally, given an assertion \(\eta \) and a function F from \(\mathbf {SDist}(\mathbf {State})\) to \(\mathbf {SDist}(\mathbf {State})\), we define \( \eta [F] \mathrel {{\mathop {=}\limits ^{\scriptscriptstyle \triangle }}}\lambda \mu .\, \eta (F(\mu )) . \) Intuitively, \(\eta [F]\) is true in a subdistribution \(\mu \) exactly when \(\eta \) holds on \(F(\mu )\).
Now, we can define the closedness properties of assertions. These properties will be critical to our rules for \({\mathbf {while}}\) loops.
Definition 10

u-closed if for every increasing sequence of subdistributions \((\mu _n)_{n\in \mathbb {N}}\) such that \(\eta _n(\mu _n)\) holds for all \(n\in \mathbb {N}\), we have \(\eta _\infty (\lim _{n\rightarrow \infty }\mu _n)\);

t-closed if for every converging sequence of subdistributions \((\mu _n)_{n\in \mathbb {N}}\) such that \(\eta _n(\mu _n)\) holds for all \(n\in \mathbb {N}\), we have \(\eta _\infty (\lim _{n\rightarrow \infty }\mu _n)\);

d-closed if it is t-closed and downward closed, that is, for all subdistributions \(\mu \le \mu '\), \(\eta _\infty (\mu ')\) implies \(\eta _\infty (\mu )\).
When \((\eta _n)_n\) is constant and equal to \(\eta \), we say that \(\eta \) is u-/t-/d-closed.
Note that t-closedness implies u-closedness, but the converse does not hold. Moreover, u-closed, t-closed and d-closed assertions are closed under arbitrary intersections and finite unions, or, in logical terms, under finite boolean combinations, universal quantification over arbitrary sets, and existential quantification over finite sets.
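A small numeric illustration of why closedness matters (our own example, not from the paper): a strict probability bound can hold at every stage of a converging sequence yet fail at the limit, so it is not t-closed, while its non-strict counterpart survives the limit:

```python
from fractions import Fraction

# Distributions over {0, 1} converging pointwise to the Dirac on 1.
def mu(n):
    return {1: 1 - Fraction(1, n), 0: Fraction(1, n)}

def pr_one(m):
    """Pr[x = 1] under the distribution m."""
    return m.get(1, Fraction(0))

# The strict assertion  Pr[x = 1] < 1  holds at every stage ...
assert all(pr_one(mu(n)) < 1 for n in range(1, 100))

limit = {1: Fraction(1)}          # pointwise limit of the sequence
assert not (pr_one(limit) < 1)    # ... but breaks at the limit: not t-closed
assert pr_one(limit) <= 1         # the non-strict version is preserved
```

This is exactly the failure mode the loop rules guard against: an invariant that holds at every lower approximation must still hold at their limit, the loop's semantics.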
Intuitively, an assertion is separated from a set of variables X if every two subdistributions that agree on the variables outside X either both satisfy the assertion, or both refute the assertion.
Judgments and Proof Rules. Judgments are of the form \(\{\eta \}\,s\,\{\eta '\}\), where the assertions \(\eta \) and \(\eta '\) are drawn from \(\mathsf {Assn}\).
Definition 11
A judgment \(\{\eta \}\,s\,\{\eta '\}\) is valid, written \(\models \{\eta \}\,s\,\{\eta '\}\), if \(\eta '(\llbracket s \rrbracket _\mu )\) holds for every interpretation of adversarial procedures and every probabilistic state \(\mu \) such that \(\eta (\mu )\).
Figure 2 describes the structural and basic rules of the proof system. Validity of judgments is preserved under standard structural rules, like the rule of consequence [Conseq]. As usual, the rule of consequence allows weakening the postcondition and strengthening the precondition; in our system, this rule serves as the interface between the program logic and mathematical theorems from probability theory. The [Exists] rule is helpful for dealing with existentially quantified preconditions.
The rules for \({\mathbf {skip}}\), assignments, random samplings and sequences are all straightforward. The rule for \({\mathbf {abort}}\) requires \(\square {\bot }\) to hold after execution; this assertion uniquely characterizes the resulting null subdistribution. The rules for assignments and random samplings are semantic.
The rule [Cond] for conditionals requires that the postcondition must be of the form \(\eta _1\,\oplus \,\eta _2\); this reflects the semantics of conditionals, which splits the initial probabilistic state depending on the guard, runs both branches, and recombines the resulting two probabilistic states.
The next two rules ([Split] and [Frame]) are useful for local reasoning. The [Split] rule reflects the additivity of the semantics and combines the pre- and postconditions using the \(\oplus \) operator. The [Frame] rule asserts that lossless statements preserve assertions that are not influenced by modified variables.
The rule [Call] for internal procedures is as expected, replacing the procedure call f with its definition.
Figure 3 presents the rules for loops. We consider four rules specialized to the termination behavior. The [While] rule is the most general rule, as it deals with arbitrary loops. For simplicity, we explain the rule in the special case where the family of assertions is constant, i.e. we have \(\eta _n=\eta \) and \(\eta '_n=\eta '\). Informally, \(\eta \) is the loop invariant and \(\eta '\) is an auxiliary assertion used to prove the invariant. We require that \(\eta \) is u-closed, since the semantics of a loop is defined as the limit of its lower approximations. Moreover, the first premise ensures that starting from \(\eta \), one guarded iteration of the loop establishes \(\eta '\); the second premise ensures that restricting to \(\lnot e\) a probabilistic state \(\mu '\) satisfying \(\eta '\) yields a probabilistic state \(\mu \) satisfying \(\eta \). It is possible to give an alternative formulation where the second premise is replaced by the logical constraint \({\eta '}_{ {\lnot e}}\implies \eta \). As usual, the postcondition of the loop is the conjunction of the invariant with the negation of the guard (more precisely, in our setting, that the guard has probability 0).
The [WhileAST] rule deals with lossless loops. For simplicity, we explain the rule in the special case where the family of assertions is constant, i.e. we have \(\eta _n=\eta \). In this case, we know that lower approximations and approximations have the same limit, so we can directly prove an invariant that holds after one guarded iteration of the loop. On the other hand, we must now require that \(\eta \) satisfies the stronger property of t-closedness.
The [WhileD] rule handles arbitrary loops with a d-closed invariant; intuitively, restricting a subdistribution that satisfies a downwards closed assertion \(\eta \) yields a subdistribution which also satisfies \(\eta \).
The [WhileCT] rule deals with certainly terminating loops. In this case, there is no requirement on the assertions.
We briefly compare the rules from a verification perspective. If the assertion is d-closed, then the rule [WhileD] is easiest to use, since there is no need to prove any termination requirement. Alternatively, if we can prove certain termination of the loop, then the rule [WhileCT] is the best choice, since it does not impose any condition on assertions. When the loop is lossless, there is no need to introduce an auxiliary assertion \(\eta '\), which simplifies the proof goal. Note however that it might still be beneficial to use the [While] rule, even for lossless loops, because of the weaker requirement that the invariant is u-closed rather than t-closed.
Finally, Fig. 4 gives the adversary rule for general adversaries. It is highly similar to the general rule [WhileD] for loops, since the adversary may make an arbitrary sequence of calls to the oracles in \({a}_{{\mathbf {ocl}}}\) and may not be lossless. Intuitively, \(\eta \) plays the role of the invariant: it must be d-closed and it must be preserved by every oracle call with arbitrary arguments. If this holds, then \(\eta \) is also preserved by the adversary call. Some framing conditions are required, similar to those of the [Frame] rule: the invariant must not be influenced by the state writable by the external procedures.
Soundness and Relative Completeness. Our proof system is sound with respect to the semantics.
Theorem 1
(Soundness). Every judgment \(\{\eta \}\,s\,\{\eta '\}\) provable using the rules of our logic is valid.
Completeness of the logic follows from the next lemma, whose proof makes an essential use of the [While] rule. In the sequel, we use \({\mathbf {1}}_{\mu }\) to denote the characteristic function of a probabilistic state \(\mu \), an assertion stating that the current state is equal to \(\mu \).
Lemma 5
For every statement s and probabilistic state \(\mu \), the judgment \(\{{\mathbf {1}}_{\mu }\}\,s\,\{{\mathbf {1}}_{\llbracket s \rrbracket _\mu }\}\) is derivable.
Proof
By induction on the structure of s.

\(s = {\mathbf {abort}}\), \(s = {\mathbf {skip}}\), deterministic assignments and random samplings are trivial;
 \(s = s_1;s_2\): we have to prove \(\{{\mathbf {1}}_{\mu }\}\,s_1;s_2\,\{{\mathbf {1}}_{\llbracket s_1;s_2 \rrbracket _\mu }\}\). We apply the [Seq] rule with intermediate assertion \({\mathbf {1}}_{\llbracket s_1 \rrbracket _\mu }\); both premises can be directly proved using the induction hypothesis;
 \(s = {\mathbf {if}}\,\, e\,\, {\mathbf {then}}\,\, s_1\,\, {\mathbf {else}}\,\, s_2\): we have to prove \(\{{\mathbf {1}}_{\mu }\}\,s\,\{{\mathbf {1}}_{\llbracket s \rrbracket _\mu }\}\). We apply the [Conseq] rule so as to apply the [Cond] rule with \({\mathbf {1}}_{{\mu }_{ {e}}}\) and \({\mathbf {1}}_{{\mu }_{ {\lnot e}}}\). Both premises can be proved by an application of the [Conseq] rule followed by an application of the induction hypothesis.
 \(s = {\mathbf {while}}\,\, e \,\, {\mathbf {do}}\,\, c \): we have to prove \(\{{\mathbf {1}}_{\mu }\}\,s\,\{{\mathbf {1}}_{\llbracket s \rrbracket _\mu }\}\). We first apply the [While] rule, taking for \((\eta _n)_{n}\) the characteristic functions of the successive approximations of the loop semantics. For the first premise we apply the same process as in the conditional case: we apply the [Conseq] and [Cond] rules and we conclude using the induction hypothesis (and the [Skip] rule). For the second premise we follow the same process but we conclude using the [Abort] rule instead of the induction hypothesis. Finally, we conclude since \(\mathsf {uclosed}((\eta _n)_{n\in \mathbb {N}^\infty })\). \(\square \)
The abstract logic is also relatively complete. This property will be less important for our purposes, but it serves as a basic sanity check.
Theorem 2
(Relative completeness). Every valid judgment is derivable.
Proof
Consider a valid judgment \(\{\eta \}\,s\,\{\eta '\}\). Let \(\mu \) be a probabilistic state such that \(\eta (\mu )\). By Lemma 5, the judgment \(\{{\mathbf {1}}_{\mu }\}\,s\,\{{\mathbf {1}}_{\llbracket s \rrbracket _\mu }\}\) is derivable. Using the validity of the judgment and [Conseq], we have \(\{{\mathbf {1}}_{\mu } \wedge \eta (\mu )\}\,s\,\{\eta '\}\). Using the [Exists] and [Conseq] rules, we conclude \(\{\eta \}\,s\,\{\eta '\}\) as required. \(\square \)
The side-conditions in the loop rules (e.g., \(\mathsf {uclosed}\)/\(\mathsf {tclosed}\)/\(\mathsf {dclosed}\) and the weight conditions) are difficult to prove, since they are semantic properties. Next, we present a concrete version of the logic with easy-to-check, syntactic sufficient conditions.
5 A Concrete Program Logic
To give a more practical version of the logic, we begin by fixing a concrete syntax for assertions.
The interpretation of the concrete syntax is as expected. The interpretation of probabilistic assertions is relative to a valuation \(\rho \) which maps logical variables to values, and is an element of \(\mathsf {Assn}\). The definition of the interpretation is straightforward; the only interesting case is the expectation \(\mathbb {E}_{} [{\tilde{e}}]\), which is defined by \(\mathbb {E}_{m \sim \mu } [{\llbracket \tilde{e} \rrbracket ^{\rho }_{m}}]\), where \(\llbracket \tilde{e} \rrbracket ^{\rho }_{m}\) is the interpretation of the state expression \(\tilde{e}\) in the memory m and valuation \(\rho \). The interpretation of state expressions is a mapping from memories to values, which can be lifted to a mapping from distributions over memories to distributions over values. The definition of this interpretation is likewise straightforward; the most interesting case is for the expectation construct \(\mathbb {E}_{{v} \sim {g}} [{\tilde{e}}]\). We present the full interpretations in the supplemental materials.

the probability that \(\phi \) holds in some probabilistic state is represented by the probabilistic expression \(\Pr [\phi ] \mathrel {{\mathop {=}\limits ^{\scriptscriptstyle \triangle }}}\mathbb {E}_{} [{{\mathbf {1}}_{\phi }}]\);
 probabilistic independence of state expressions \(\tilde{e}_1\), ..., \(\tilde{e}_n\) is modeled by a dedicated probabilistic assertion, defined by the clause^{4}$$\forall v_1 \ldots v_n,~ \Pr [\top ]^{n-1} \Pr [\bigwedge _{i=1\ldots n} \tilde{e}_i = v_i] = \prod _{i=1\ldots n}\Pr [\tilde{e}_i = v_i] ; $$

the fact that a distribution is proper is modeled by the probabilistic assertion \(\mathcal {L}\mathrel {{\mathop {=}\limits ^{\scriptscriptstyle \triangle }}}\Pr [\top ] = 1\);
 a state expression \(\tilde{e}\) distributed according to a law g is modeled by the probabilistic assertion$$ \tilde{e} \sim g \mathrel {{\mathop {=}\limits ^{\scriptscriptstyle \triangle }}}\forall w,~ \Pr [\tilde{e}= w] = \mathbb {E}_{} [{\mathbb {E}_{{v} \sim {g}} [{{\mathbf {1}}_{v = w}}]}] .$$The inner expectation computes the probability that v drawn from g is equal to a fixed w; the outer expectation weights the inner probability by the probability of each value of w.
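The independence clause above can be checked directly on a finite subdistribution. In the sketch below (our own encoding; `independent` is our name), the \(\Pr [\top ]^{n-1}\) factor is what keeps the clause stable under scaling a subdistribution:

```python
from fractions import Fraction
from itertools import product

def independent(mu, n):
    """Check, on a subdistribution over n-tuples, the clause
    forall v,  Pr[T]^(n-1) * Pr[/\ x_i = v_i] = prod_i Pr[x_i = v_i]."""
    top = sum(mu.values())                      # Pr[T], the total weight
    values = [set(t[i] for t in mu) for i in range(n)]
    for v in product(*values):
        joint = sum(p for t, p in mu.items() if t == v)
        marg = Fraction(1)
        for i in range(n):
            marg *= sum(p for t, p in mu.items() if t[i] == v[i])
        if top ** (n - 1) * joint != marg:
            return False
    return True

# Two independent fair coins -- still independent after scaling the whole
# subdistribution to weight 1/2, thanks to the Pr[T] factor.
coins = {(a, b): Fraction(1, 4) for a in (0, 1) for b in (0, 1)}
half = {t: p / 2 for t, p in coins.items()}
xor = {(0, 0): Fraction(1, 2), (1, 1): Fraction(1, 2)}  # perfectly correlated
assert independent(coins, 2) and independent(half, 2)
assert not independent(xor, 2)
```
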
We can easily define the \(\square \) operator from the previous section in our new syntax: \(\square {\phi } \mathrel {{\mathop {=}\limits ^{\scriptscriptstyle \triangle }}}\Pr [\lnot \phi ] = 0\).
The rule for assignment is the usual rule from Hoare logic, replacing the program variable x by its corresponding expression e in the precondition. The replacement \(\eta [x := e]\) is done recursively on the probabilistic assertion \(\eta \); for instance for expectations, it is defined by \( \mathbb {E}_{} [{\tilde{e}}][x := e] \mathrel {{\mathop {=}\limits ^{\scriptscriptstyle \triangle }}}\mathbb {E}_{} [{\tilde{e}[x := e]}] , \) where \(\tilde{e}[x := e]\) is the syntactic substitution.
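The substitution-based rule can be checked on a toy expression language (entirely our own construction): the precondition \(\mathbb {E}[x][x := x + y] = \mathbb {E}[x + y]\), evaluated before the assignment, agrees with \(\mathbb {E}[x]\) evaluated after it:

```python
from fractions import Fraction

# Tiny expression language: ('var', name) | ('const', c) | ('add', e1, e2).
def subst(expr, x, e):
    """Syntactic substitution expr[x := e], as used by the assignment rule."""
    tag = expr[0]
    if tag == 'var':
        return e if expr[1] == x else expr
    if tag == 'const':
        return expr
    return ('add', subst(expr[1], x, e), subst(expr[2], x, e))

def eval_expr(expr, m):
    tag = expr[0]
    if tag == 'var':
        return m[expr[1]]
    if tag == 'const':
        return expr[1]
    return eval_expr(expr[1], m) + eval_expr(expr[2], m)

def expect(mu, expr):
    """E[expr] over a subdistribution on memories, given as (memory, mass) pairs."""
    return sum(p * eval_expr(expr, m) for m, p in mu)

# Program: x := x + y, run on a two-point distribution over memories.
mu = [({'x': 1, 'y': 2}, Fraction(1, 2)), ({'x': 3, 'y': 0}, Fraction(1, 2))]
after = [(dict(m, x=m['x'] + m['y']), p) for m, p in mu]

post = ('var', 'x')                                          # E[x], after
pre = subst(post, 'x', ('add', ('var', 'x'), ('var', 'y')))  # E[x + y], before
assert expect(mu, pre) == expect(after, post) == Fraction(3)
```
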
While t-closedness is a semantic condition (cf. Definition 10), there are simple syntactic conditions that guarantee it. For instance, assertions that carry a non-strict comparison \(\bowtie \mathbin {\in } \{\le ,\ge ,=\}\) between two bounded probabilistic expressions are t-closed; the assertion stating probabilistic independence of a set of expressions is t-closed.
Precondition Calculus. With a concrete syntax for assertions, we are also able to incorporate syntactic reasoning principles. One classic tool is Morgan and McIver’s greatest pre-expectation, which we take as inspiration for a precondition calculus for the loop-free fragment of Ellora. Given an assertion \(\eta \) and a loop-free statement s, we mechanically construct an assertion \(\eta ^*\) that, as a precondition of s, guarantees \(\eta \) as a postcondition. The basic idea is to replace each expectation expression p inside \(\eta \) by an expression \(p^*\) that has the same denotation before running s as p has after running s. This process yields an assertion \(\eta ^*\) that, interpreted before running s, is logically equivalent to \(\eta \) interpreted after running s.
Theorem 1
For every loop-free statement s and assertion \(\eta \), the judgment \(\{\eta ^*\}\,s\,\{\eta \}\) is valid.
6 Case Studies: Embedding Lightweight Logics
While Ellora is suitable for general-purpose reasoning about probabilistic programs, in practice humans typically use more special-purpose proof techniques—often targeting just a single, specific kind of property, like probabilistic independence—when proving probabilistic assertions. When these techniques apply, they can be a convenient and powerful tool.
To capture this intuitive style of reasoning, researchers have considered lightweight program logics whose assertions and proof rules are tailored to a specific proof technique. We demonstrate how to integrate these tools in an assertion-based logic by introducing and embedding a new logic for reasoning about independence and distribution laws, both useful properties when analyzing randomized algorithms. We crucially rely on the rich assertions of Ellora; it is not clear how to extend expectation-based approaches to support similar lightweight reasoning. Then, we show how to embed the union bound logic [4] for proving accuracy bounds.
6.1 Law and Independence Logic
We begin by describing the law and independence logic IL, a proof system with intuitive rules that are easy to apply and amenable to automation. For simplicity, we only consider programs that sample from the binomial distribution and have deterministic control flow; for lack of space, we also omit procedure calls.
Definition 12
The assertion \({\mathsf {det}}(e)\) states that e is deterministic in the current distribution, i.e., there is at most one element in the support of its interpretation. The independence assertion over a set of expressions E states that the expressions in E are independent, as formalized in the previous section. The assertion \(e \sim {\mathrm {B}}(m,p)\) states that e is distributed according to a binomial distribution with parameter m (where m can be an expression) and constant probability p, i.e., the probability that \(e=k\) equals the probability that exactly k of m independent coin flips return heads, using a biased coin that returns heads with probability p.
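The binomial law assertion can be checked concretely on a small example. The following Python sketch (ours; the parameter values are arbitrary) builds the distribution of the sum of m biased coin flips by exact probability propagation and confirms that it matches the binomial probability mass function, which is what \(e \sim {\mathrm {B}}(m,p)\) asserts of e.

```python
from fractions import Fraction
from math import comb

p = Fraction(1, 3)  # bias of the coin (example value)
m = 5               # number of flips (example value)

# Distribution of the sum of m independent biased coin flips, built by
# propagating exact probabilities flip by flip.
dist = {0: Fraction(1)}
for _ in range(m):
    nxt = {}
    for k, q in dist.items():
        nxt[k] = nxt.get(k, Fraction(0)) + q * (1 - p)    # tails: sum stays k
        nxt[k + 1] = nxt.get(k + 1, Fraction(0)) + q * p  # heads: sum is k+1
    dist = nxt

# e ~ B(m, p) means Pr[e = k] = C(m, k) p^k (1 - p)^(m - k) for every k.
for k in range(m + 1):
    assert dist[k] == comb(m, k) * p**k * (1 - p)**(m - k)
```

Using exact rationals avoids any floating-point slack in the comparison.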
Definition 13
Judgments of the logic are of the form \(\left\{ \xi \right\} \;\,s\,\;\left\{ \xi '\right\} \), where \(\xi \) and \(\xi '\) are IL-assertions. A judgment is valid if it is derivable from the rules of Fig. 9; the structural rules and the rule for sequential composition are similar to those from Sect. 4 and are omitted.
The rule [IL-Assgn] for deterministic assignments is as in Sect. 4. The rule [IL-Sample] for random assignments yields as postcondition that the variable x and a set of expressions E are independent, assuming that E is independent before the sampling, and moreover that x follows the law of the distribution it is sampled from. The rule [IL-Cond] for conditionals requires that the guard is deterministic and that each of the branches satisfies the specification; if the guard is not deterministic, there are simple examples where the rule is unsound. The rule [IL-While] for loops requires that the loop terminates certainly and has a deterministic guard. Note that the requirement of certain termination could be avoided by restricting the structural rules so that a statement s has deterministic control flow whenever \(\left\{ \xi \right\} \;s\;\left\{ \xi '\right\} \) is derivable.
We now turn to the embedding. The embedding of IL assertions into general assertions is immediate, except for \({\mathsf {det}}(e)\) which is translated as \(\square {e}\vee \square {\lnot e}\). We let \(\overline{\xi }\) denote the translation of \(\xi \).
Theorem 2
(Embedding and soundness of IL logic). If \(\left\{ \xi \right\} \;\,s\,\;\left\{ \xi '\right\} \) is derivable in the IL logic, then \(\{\overline{\xi }\}\,s\,\{\overline{\xi '}\}\) is derivable in (the syntactic variant of) Ellora. As a consequence, every derivable judgment \(\left\{ \xi \right\} \;s\;\left\{ \xi '\right\} \) is valid.
Proof sketch
The first premise follows from the rule for random assignment and structural rules. The second premise follows from the rule for deterministic assignment and the rule of consequence, applying axioms about sums of binomial distributions.
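The axiom about sums of binomial distributions invoked above is the standard fact that independent binomials with the same bias add: if \(x \sim {\mathrm {B}}(m,p)\) and \(y \sim {\mathrm {B}}(n,p)\) are independent, then \(x + y \sim {\mathrm {B}}(m+n,p)\). A quick exact check in Python (our own sketch, with arbitrary example parameters) via convolution of the two probability mass functions:

```python
from fractions import Fraction
from math import comb

def binom_pmf(n, p):
    """Exact pmf of B(n, p) as a dict k -> Pr[k]."""
    return {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

p = Fraction(2, 5)
a, b = binom_pmf(3, p), binom_pmf(4, p)

# Convolution: the distribution of x + y for independent x and y.
conv = {}
for i, pi in a.items():
    for j, pj in b.items():
        conv[i + j] = conv.get(i + j, Fraction(0)) + pi * pj

# B(3, p) + B(4, p) = B(7, p), term by term.
assert conv == binom_pmf(7, p)
```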
We briefly comment on several limitations of IL. First, IL is restricted to programs with deterministic control flow, but this restriction could be partially relaxed by enriching IL with assertions for conditional independence. Such assertions are already expressible in the logic of Ellora; adding conditional independence would significantly broaden the scope of the IL proof system and open the possibility of relying on axiomatizations of conditional independence (e.g., based on graphoids [36]). Second, the logic only supports sampling from binomial distributions. It is possible to enrich the language of assertions with clauses \(c \sim g\) where g can model other distributions, such as the uniform distribution or the Laplace distribution; the main design challenge is finding a core set of useful facts about these distributions. Enriching the logic and automating the analysis are interesting avenues for further work.
6.2 Embedding the Union Bound Logic
The program logic aHL [4] was recently introduced for estimating the accuracy of randomized computations. One main application of aHL is proving accuracy of randomized algorithms, both in the offline and online settings, i.e., with adversary calls. aHL is based on the union bound, a basic tool from probability theory, and has judgments of the form \(\vdash _{\beta } \{\varPhi \}\;s\;\{\varPsi \}\), where s is a statement, \(\varPhi \) and \(\varPsi \) are first-order formulae over program variables, and \(\beta \) is a probability, i.e. \(\beta \in {[0,1]}\). A judgment \(\vdash _{\beta } \{\varPhi \}\;s\;\{\varPsi \}\) is valid if for every memory m such that \(\varPhi (m)\), the probability of \(\lnot \varPsi \) in the output distribution of s on m is upper bounded by \(\beta \).
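The union bound underlying aHL's sequencing rule simply adds failure probabilities: if each step fails with probability at most \(\beta _i\), the composition fails with probability at most \(\sum _i \beta _i\). A small exact check in Python (our own sketch; the sample space and "bad" events are invented for illustration):

```python
from fractions import Fraction
from itertools import product

# Sample space: two fair dice, each outcome with probability 1/36.
omega = {(i, j): Fraction(1, 36) for i, j in product(range(1, 7), repeat=2)}

def prob(event):
    """Exact probability of an event (a predicate on outcomes)."""
    return sum(p for w, p in omega.items() if event(w))

# Two "bad" events, one per step of a hypothetical two-step program.
beta1 = prob(lambda w: w[0] == 1)                  # first step fails
beta2 = prob(lambda w: w[1] == 1)                  # second step fails
bad = prob(lambda w: w[0] == 1 or w[1] == 1)       # composition fails

# Union bound: Pr[bad1 or bad2] <= beta1 + beta2.
assert bad <= beta1 + beta2
```

Here the bound is not tight (11/36 versus 12/36), which is typical: the union bound ignores overlap between failure events.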
aHL has a simple embedding into Ellora.
Theorem 3
(Embedding of aHL). If \(\vdash _{\beta } \{\varPhi \}\;s\;\{\varPsi \}\) is derivable in aHL, then \(\{\square {\varPhi }\}\,s\,\{\mathbb {E}_{} [{{\mathbf {1}}_{\lnot \varPsi }}] \le \beta \}\) is derivable in Ellora.
7 Case Studies: Verifying Randomized Algorithms
In this section, we demonstrate Ellora on a selection of examples; further examples appear in the supplemental material. Together, they exhibit a wide variety of proof techniques and reasoning principles available in Ellora's implementation.
Hypercube Routing. We begin with the hypercube routing algorithm [41, 42]. Consider a network topology (the hypercube) where each node is labeled by a bitstring of length D, and two nodes are connected by an edge if and only if their labels differ in exactly one bit position.
We assume that initially, packet i is at node i. Then, we initialize the random intermediate destinations \(\rho \). The remaining loop encodes the evaluation of the routing strategy iterated T times. A map variable logs whether an edge is already used by a packet; it is emptied at the beginning of each iteration. For each packet, we try to move it across one edge along the path to its intermediate destination. A helper function returns the next edge to follow, according to the bit-fixing scheme. If the packet can progress (its edge is not used), then its current position is updated and the edge is marked as used.
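The bit-fixing scheme can be sketched in a few lines of Python (our own code; the paper's function and variable names differ): nodes are D-bit integers, an edge flips one bit, and each hop fixes the most significant bit in which the current node and the destination still differ.

```python
D = 4  # hypercube dimension (example value)

def next_edge(cur, dest):
    """Bit-fixing: flip the most significant bit where cur and dest differ;
    return cur unchanged if the packet has already arrived."""
    for b in reversed(range(D)):
        if (cur ^ dest) >> b & 1:
            return cur ^ (1 << b)
    return cur

# Route a packet from node 0000 to node 1011, one edge at a time.
path = [0b0000]
while path[-1] != 0b1011:
    path.append(next_edge(path[-1], 0b1011))

# Each hop traverses a hypercube edge (exactly one bit flips), and the
# route length equals the Hamming distance between source and destination.
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(path, path[1:]))
```

In the algorithm itself, this step is applied once per packet per iteration, and the hop is taken only if the corresponding edge is not already marked as used.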
Lemma 4
Proving this lemma can be done using the Fundamental Lemma of Game-Playing and by bounding the probability of bad in the program from Fig. 15; we focus on the latter. Here we apply the [Adv] rule of Ellora with an invariant over the size of the map H, i.e., the number of adversary calls. Intuitively, the invariant says that at each call to the oracle, the probability that bad has been set before and that the number of adversary calls is less than k is bounded by a polynomial in k.
8 Implementation and Mechanization
Table 1. Benchmarks.

Example           LC   FPLC
hypercube        100   1140
coupon            27    184
vertex-cover      30     61
pairwise-indep    30    231
private-sums      22     80
poly-id-test      22     32
random-walk       16     42
dice-sampling     10     64
matrix-prod-test  20     75
We used the implementation to verify many examples from the literature, including all the programs presented in Sect. 7 as well as the additional examples in Table 1 (such as the polynomial identity test, private running sums, and properties of random walks). The verified proofs bear a strong resemblance to the existing paper proofs. Independently of this work, Ellora has been used to formalize the main theorem about a randomized gossip-based protocol for distributed systems [26, Theorem 2.1]. Some libraries developed in the scope of Ellora have been incorporated into the main branch of EasyCrypt, including a general library on probabilistic independence.
A New Library for Probabilistic Independence. In order to support assertions of the concrete program logic, we enhanced the standard libraries of EasyCrypt, notably the ones dealing with big operators and subdistributions. Like all EasyCrypt libraries, they are written in a foundational style, i.e. they are defined instead of axiomatized. A large part of our libraries are proved formally from first principles. However, some results, such as concentration bounds, are currently declared as axioms.
Our formalization of probabilistic independence deserves special mention. We formalized two different (but logically equivalent) notions of independence. The first is in terms of products of probabilities and is based on heterogeneous lists. Since Ellora (like EasyCrypt) has no support for heterogeneous lists, we use an encoding based on second-order predicates. The second definition is more abstract, in terms of product and marginal distributions. While the first definition is easier to use when reasoning about randomized algorithms, the second is better suited for proving mathematical facts. We prove the two definitions equivalent and formalize a collection of related theorems.
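For the two-variable discrete case, the two notions can be compared in a few lines of Python (our own toy encoding, far simpler than the EasyCrypt formalization): one checks the product of probabilities event by event, the other compares the joint distribution against the product of its marginals.

```python
from fractions import Fraction

def marginals(joint):
    """Marginal distributions of a joint over pairs (a, b)."""
    mx, my = {}, {}
    for (a, b), p in joint.items():
        mx[a] = mx.get(a, Fraction(0)) + p
        my[b] = my.get(b, Fraction(0)) + p
    return mx, my

def indep_pointwise(joint):
    """Notion 1: Pr[x = a, y = b] = Pr[x = a] * Pr[y = b] for all a, b."""
    mx, my = marginals(joint)
    return all(joint.get((a, b), Fraction(0)) == mx[a] * my[b]
               for a in mx for b in my)

def indep_product(joint):
    """Notion 2: the joint equals the product of its marginals."""
    mx, my = marginals(joint)
    product = {(a, b): mx[a] * my[b] for a in mx for b in my}
    trim = lambda d: {k: v for k, v in d.items() if v != 0}
    return trim(joint) == trim(product)

# An independent joint (built as a product) and a correlated one.
px = {0: Fraction(1, 4), 1: Fraction(3, 4)}
py = {0: Fraction(1, 3), 1: Fraction(2, 3)}
good = {(a, b): px[a] * py[b] for a in px for b in py}
bad = {(0, 0): Fraction(1, 2), (1, 1): Fraction(1, 2)}

# Both notions agree on both examples.
assert indep_pointwise(good) and indep_product(good)
assert not indep_pointwise(bad) and not indep_product(bad)
```

The formalized versions generalize this to arbitrary finite families of expressions, which is where the heterogeneous-list encoding becomes necessary.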
Mechanized MetaTheory. The proofs of soundness and relative completeness of the abstract logic, without adversary calls, and the syntactical termination arguments have been mechanized in the Coq proof assistant. The development is available in supplemental material.
9 Related Work
More on Assertion-Based Techniques. The earliest assertion-based system is due to Ramshaw [37], who proposes a program logic where assertions can be formulas involving frequencies, essentially probabilities on subdistributions. Ramshaw's logic allows assertions to be combined with operators like \(\oplus \), similar to our approach. [18] presents a Hoare-style logic with general assertions on the distribution, allowing expected values and probabilities; however, his while rule is based on a semantic condition on the guarded loop body, which is less desirable for verification because it requires reasoning about the semantics of programs. [8] give decidability results for a probabilistic Hoare logic without while loops. We are not aware of any existing system that supports assertions about general expected values; existing works also restrict to Boolean distributions. [38] formalize a Hoare logic for probabilistic programs but, unlike our work, their assertions are interpreted on distributions rather than subdistributions. For conditionals, their semantics rescales the distribution of states that enter each branch. However, their assertion language is limited and they impose strong restrictions on loops.
Other Approaches. Researchers have proposed many other approaches to verify probabilistic programs. For instance, verification of Markov transition systems goes back to at least [17, 40]; our condition for ensuring almost-sure termination in loops is directly inspired by their work. Automated methods include model checking (see, e.g., [1, 25, 29]) and abstract interpretation (see, e.g., [12, 32]). Techniques for reasoning about higher-order (functional) probabilistic languages are an active subject of research (see, e.g., [7, 13, 14]). For analyzing probabilistic loops in particular, there are tools for reasoning about running time, and there are automated systems for synthesizing invariants [3, 11]. [9, 10] use a martingale method to compute the expected time of the coupon collector process for \(N=5\); fixing N lets them focus on a program where the outer while loop is fully unrolled. Martingales are also used by [15] for analyzing probabilistic termination. Finally, there are approaches involving symbolic execution: [39] use a mix of static and dynamic analysis to check probabilistic programs from the approximate computing literature.
10 Conclusion and Perspectives
We introduced an expressive program logic for probabilistic programs, and showed that assertionbased systems are suited for practical verification of probabilistic programs. Owing to their richer assertions, program logics are a more suitable foundation for specialized reasoning principles than expectationbased systems. As evidence, our program logic can be smoothly extended with custom reasoning for probabilistic independence and union bounds. Future work includes proving better accuracy bounds for differentially private algorithms, and exploring further integration of Ellora into EasyCrypt.
Footnotes
 1.
Treating a program as a function from input states s to output distributions \(\mu (s)\), the expected value of E on \(\mu (s)\) is an expectation.
 2.
Note that we do not give mathematically precise formulations of these points; as we are interested in the practical verification of probabilistic programs, a purely theoretical answer would not address our concerns.
 3.
We work with discrete distributions to keep measuretheoretic technicalities to a minimum, though we do not see obstacles to generalizing to the continuous setting.
 4.
The term \(\Pr [\top ]^{n - 1}\) is necessary since we work with subdistributions.
 5.
Recall that the number of nodes in a hypercube of dimension D is \(2^D\), so each node can be identified by a number in \([1,2^D]\).
Notes
Acknowledgments
We thank the reviewers for their helpful comments. This work benefited from discussions with Dexter Kozen, Annabelle McIver, and Carroll Morgan. This work was partially supported by ERC Grant #679127, and NSF grant 1718220.
Supplementary material
References
 1. Baier, C.: Probabilistic model checking. In: Dependable Software Systems Engineering, NATO Science for Peace and Security Series - D: Information and Communication Security, vol. 45, pp. 1–23. IOS Press (2016). https://doi.org/10.3233/97816149962791
 2. Barthe, G., Dupressoir, F., Grégoire, B., Kunz, C., Schmidt, B., Strub, P.-Y.: EasyCrypt: a tutorial. In: Aldini, A., Lopez, J., Martinelli, F. (eds.) FOSAD 2012/2013. LNCS, vol. 8604, pp. 146–166. Springer, Cham (2014). https://doi.org/10.1007/9783319100821_6
 3. Barthe, G., Espitau, T., Ferrer Fioriti, L.M., Hsu, J.: Synthesizing probabilistic invariants via Doob's decomposition. In: International Conference on Computer Aided Verification (CAV), Toronto, Ontario (2016). https://arxiv.org/abs/1605.02765
 4. Barthe, G., Gaboardi, M., Grégoire, B., Hsu, J., Strub, P.-Y.: A program logic for union bounds. In: International Colloquium on Automata, Languages and Programming (ICALP), Rome, Italy (2016). http://arxiv.org/abs/1602.05681
 5. Barthe, G., Grégoire, B., Heraud, S., Béguelin, S.Z.: Computer-aided security proofs for the working cryptographer. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 71–90. Springer, Heidelberg (2011). https://doi.org/10.1007/9783642227929_5
 6. Bellare, M., Rogaway, P.: The security of triple encryption and a framework for code-based game-playing proofs. In: IACR International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), Saint Petersburg, Russia, pp. 409–426 (2006). https://doi.org/10.1007/11761679_25
 7. Bizjak, A., Birkedal, L.: Step-indexed logical relations for probability. In: Pitts, A. (ed.) FoSSaCS 2015. LNCS, vol. 9034, pp. 279–294. Springer, Heidelberg (2015). https://doi.org/10.1007/9783662466780_18
 8. Chadha, R., Cruz-Filipe, L., Mateus, P., Sernadas, A.: Reasoning about probabilistic sequential programs. Theoretical Computer Science 379(1–2), 142–165 (2007)
 9. Chakarov, A., Sankaranarayanan, S.: Probabilistic program analysis with martingales. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 511–526. Springer, Heidelberg (2013). https://doi.org/10.1007/9783642397998_34
 10. Chakarov, A., Sankaranarayanan, S.: Expectation invariants for probabilistic program loops as fixed points. In: Müller-Olm, M., Seidl, H. (eds.) SAS 2014. LNCS, vol. 8723, pp. 85–100. Springer, Cham (2014). https://doi.org/10.1007/9783319109367_6
 11. Chatterjee, K., Fu, H., Novotný, P., Hasheminezhad, R.: Algorithmic analysis of qualitative and quantitative termination problems for affine probabilistic programs. In: ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), Saint Petersburg, Florida, pp. 327–342 (2016). https://doi.org/10.1145/2837614.2837639
 12. Cousot, P., Monerau, M.: Probabilistic abstract interpretation. In: Seidl, H. (ed.) ESOP 2012. LNCS, vol. 7211, pp. 169–193. Springer, Heidelberg (2012). https://doi.org/10.1007/9783642288692_9
 13. Crubillé, R., Dal Lago, U.: On probabilistic applicative bisimulation and call-by-value \(\lambda \)-calculi. In: Shao, Z. (ed.) ESOP 2014. LNCS, vol. 8410, pp. 209–228. Springer, Heidelberg (2014). https://doi.org/10.1007/9783642548338_12
 14. Dal Lago, U., Sangiorgi, D., Alberti, M.: On coinductive equivalences for higher-order probabilistic functional programs. In: ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), San Diego, California, pp. 297–308 (2014). https://arxiv.org/abs/1311.1722
 15. Fioriti, L.M.F., Hermanns, H.: Probabilistic termination: soundness, completeness, and compositionality. In: ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), Mumbai, India, pp. 489–501 (2015)
 16. Gretz, F., Katoen, J.-P., McIver, A.: Operational versus weakest pre-expectation semantics for the probabilistic guarded command language. Perform. Eval. 73, 110–132 (2014)
 17. Hart, S., Sharir, M., Pnueli, A.: Termination of probabilistic concurrent programs. ACM Trans. Program. Lang. Syst. 5(3), 356–380 (1983)
 18. den Hartog, J.: Probabilistic extensions of semantical models. Ph.D. thesis, Vrije Universiteit Amsterdam (2002)
 19. Hurd, J.: Formal verification of probabilistic algorithms. Technical report UCAM-CL-TR-566, University of Cambridge, Computer Laboratory (2003)
 20. Hurd, J.: Verification of the Miller-Rabin probabilistic primality test. J. Log. Algebr. Program. 56(1–2), 3–21 (2003). https://doi.org/10.1016/S15678326(02)00065–6
 21. Hurd, J., McIver, A., Morgan, C.: Probabilistic guarded commands mechanized in HOL. Theor. Comput. Sci. 346(1), 96–112 (2005)
 22. Impagliazzo, R., Rudich, S.: Limits on the provable consequences of one-way permutations. In: ACM SIGACT Symposium on Theory of Computing (STOC), Seattle, Washington, pp. 44–61 (1989). https://doi.org/10.1145/73007.73012
 23. Kaminski, B.L., Katoen, J.-P., Matheja, C.: Inferring covariances for probabilistic programs. In: Agha, G., Van Houdt, B. (eds.) QEST 2016. LNCS, vol. 9826, pp. 191–206. Springer, Cham (2016). https://doi.org/10.1007/9783319434254_14
 24. Kaminski, B.L., Katoen, J.-P., Matheja, C., Olmedo, F.: Weakest precondition reasoning for expected runtimes of probabilistic programs. In: European Symposium on Programming (ESOP), Eindhoven, The Netherlands, January 2016
 25. Katoen, J.-P.: The probabilistic model-checking landscape. In: IEEE Symposium on Logic in Computer Science (LICS), New York (2016)
 26. Kempe, D., Dobra, A., Gehrke, J.: Gossip-based computation of aggregate information. In: Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, pp. 482–491 (2003). https://doi.org/10.1109/SFCS.2003.1238221
 27. Kozen, D.: Semantics of probabilistic programs. J. Comput. Syst. Sci. 22, 328–350 (1981). https://www.sciencedirect.com/science/article/pii/0022000081900362
 28. Kozen, D.: A probabilistic PDL. J. Comput. Syst. Sci. 30(2), 162–178 (1985)
 29. Kwiatkowska, M., Norman, G., Parker, D.: PRISM 4.0: verification of probabilistic real-time systems. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 585–591. Springer, Heidelberg (2011). https://doi.org/10.1007/9783642221101_47
 30. McIver, A., Morgan, C.: Abstraction, Refinement, and Proof for Probabilistic Systems. Monographs in Computer Science. Springer, New York (2005)
 31. McIver, A., Morgan, C., Kaminski, B.L., Katoen, J.-P.: A new rule for almost-certain termination. In: Proceedings of the ACM on Programming Languages 1(POPL) (2018). https://arxiv.org/abs/1612.01091. Appeared at ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), Los Angeles, California
 32. Monniaux, D.: Abstract interpretation of probabilistic semantics. In: Palsberg, J. (ed.) SAS 2000. LNCS, vol. 1824, pp. 322–339. Springer, Heidelberg (2000). https://doi.org/10.1007/9783540450993_17
 33. Morgan, C.: Proof rules for probabilistic loops. In: BCS-FACS Conference on Refinement, Bath, England (1996)
 34. Morgan, C., McIver, A., Seidel, K.: Probabilistic predicate transformers. ACM Trans. Program. Lang. Syst. 18(3), 325–353 (1996)
 35. Olmedo, F., Kaminski, B.L., Katoen, J.-P., Matheja, C.: Reasoning about recursive probabilistic programs. In: IEEE Symposium on Logic in Computer Science (LICS), New York, pp. 672–681 (2016)
 36. Pearl, J., Paz, A.: Graphoids: graph-based logic for reasoning about relevance relations. In: ECAI, pp. 357–363 (1986)
 37. Ramshaw, L.H.: Formalizing the Analysis of Algorithms. Ph.D. thesis, Computer Science (1979)
 38. Rand, R., Zdancewic, S.: VPHL: a verified partial-correctness logic for probabilistic programs. In: Conference on the Mathematical Foundations of Programming Semantics (MFPS), Nijmegen, The Netherlands (2015)
 39. Sampson, A., Panchekha, P., Mytkowicz, T., McKinley, K.S., Grossman, D., Ceze, L.: Expressing and verifying probabilistic assertions. In: ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), Edinburgh, Scotland, p. 14 (2014)
 40. Sharir, M., Pnueli, A., Hart, S.: Verification of probabilistic programs. SIAM J. Comput. 13(2), 292–314 (1984)
 41. Valiant, L.G.: A scheme for fast parallel communication. SIAM J. Comput. 11(2), 350–361 (1982)
 42. Valiant, L.G., Brebner, G.J.: Universal schemes for parallel communication. In: ACM SIGACT Symposium on Theory of Computing (STOC), Milwaukee, Wisconsin, pp. 263–277 (1981). https://doi.org/10.1145/800076.802479
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.