Constraint-Based Synthesis of Coupling Proofs
Abstract
Proof by coupling is a classical technique for proving properties about pairs of randomized algorithms by carefully relating (or coupling) two probabilistic executions. In this paper, we show how to automatically construct such proofs for probabilistic programs. First, we present f-coupled postconditions, an abstraction describing two correlated program executions. Second, we show how properties of f-coupled postconditions can imply various probabilistic properties of the original programs. Third, we demonstrate how to reduce the proof-search problem to a purely logical synthesis problem of the form \(\exists f.\ \forall X.\ \varphi \), making probabilistic reasoning unnecessary. We develop a prototype implementation to automatically build coupling proofs for probabilistic properties, including uniformity and independence of program expressions.
1 Introduction
In this paper, we aim to automatically synthesize coupling proofs for probabilistic programs and properties. Originally designed for proving properties comparing two probabilistic programs—so-called relational properties—a coupling proof describes how to correlate two executions of the given programs, simulating both programs with a single probabilistic program. By reasoning about this combined, coupled process, we can often give simpler proofs of probabilistic properties for the original pair of programs.
A number of recent works have leveraged this idea to verify relational properties of randomized algorithms, including differential privacy [8, 10, 12], security of cryptographic protocols [9], convergence of Markov chains [11], robustness of machine learning algorithms [7], and more. Recently, Barthe et al. [6] showed how to reduce certain non-relational properties—which describe a single probabilistic program—to relational properties of two programs, by duplicating the original program or by sequentially composing it with itself.
While coupling proofs can simplify reasoning about probabilistic properties, they are not so easy to use; most existing proofs are carried out manually in relational program logics using interactive theorem provers. In a nutshell, the main challenge in a coupling proof is to select a correlation for each pair of corresponding sampling instructions, aiming to induce a particular relation between the outputs of the coupled process; this relation then implies the desired relational property. Just like finding inductive invariants in proofs for deterministic programs, picking suitable couplings in proofs can require substantial ingenuity.
To ease this task, we recently showed how to cast the search for coupling proofs as a program synthesis problem [1], giving a way to automatically find sophisticated proofs of differential privacy previously beyond the reach of automated verification. In the present paper, we build on this idea and present a general technique for constructing coupling proofs, targeting uniformity and probabilistic independence properties. Both are fundamental properties in the analysis of randomized algorithms, either in their own right or as prerequisites to proving more sophisticated guarantees; uniformity states that a randomized expression takes on all values in a finite range with equal probability, while probabilistic independence states that two probabilistic expressions are somehow uncorrelated—learning the value of one reveals no additional information about the value of the other.
Our techniques are inspired by the automated proofs of differential privacy we considered previously [1], but the present setting raises new technical challenges.

Non-lockstep execution. To prove differential privacy, the behavior of a single program is compared on two related inputs. To take advantage of the identical program structure, previous work restricted attention to synchronizing proofs, where the two executions can be analyzed assuming they follow the same control flow path. In contrast, coupling proofs for uniformity and independence often require relating two programs with different shapes, possibly following completely different control flows [6].
To overcome this challenge, we take a different approach. Instead of incrementally finding couplings for corresponding pairs of sampling instructions—requiring the executions to be tightly synchronized—we first lift all sampling instructions to the front of the program and pick a coupling once and for all. The remaining execution of both programs can then be encoded separately, with no need for lockstep synchronization (at least for loop-free programs—looping programs require a more careful treatment).

Richer space of couplings. The heart of a coupling proof is selecting—among multiple possible options—a particular correlation for each pair of random sampling instructions. Random sampling in differentially private programs typically uses highly domain-specific distributions, like the Laplace distribution, which support a small number of useful couplings. Our prior work leveraged this feature to encode a collection of primitive couplings into the synthesis system. However, this is no longer possible when programs sample from distributions supporting richer couplings, like the uniform distribution. Since our approach coalesces all sampling instructions at the beginning of the program (more generally, at the head of the loop), we also need to find couplings for products of distributions.
We address this problem in two ways. First, we allow couplings of two sampling instructions to be specified by an injective function f from one range to another. Then, we impose requirements—encoded as standard logical constraints—to ensure that f indeed represents a coupling; we call such couplings f-couplings.
More general class of properties. Finally, we consider a broad class of properties rather than just differential privacy. While we focus on uniformity and independence for concreteness, our approach can establish general equalities between products of probabilities, i.e., probabilistic properties of the form
$$\begin{aligned} \prod _{i = 1}^m\Pr [ e_i \in E_i ] = \prod _{j = 1}^n\Pr [ e_j' \in E_j' ], \end{aligned}$$
where \(e_i\) and \(e_j'\) are program expressions in the first and second programs respectively, and \(E_i\) and \(E_j'\) are predicates. As an example, we automatically establish a key step in the proof of Bertrand’s Ballot theorem [20].

Proof technique: We introduce f-coupled postconditions, a form of postcondition for two probabilistic programs where random sampling instructions in the two programs are correlated by a function f. Using f-coupled postconditions, we present proof rules for establishing uniformity and independence of program variables, fundamental properties in the analysis of randomized algorithms (Sect. 3).

Reduction to constraint-based synthesis: We demonstrate how to automatically find coupling proofs by transforming our proof rules into logical constraints of the form \(\exists f.\ \forall X.\ \varphi \)—a synthesis problem. A satisfiable constraint shows the existence of a function f—essentially, a compact encoding of a coupling proof—implying the target property (Sect. 4).

Extension to looping programs: We extend our technique to reason about loops, by requiring synchronization at the loop head and finding a coupled invariant (Sect. 5).

Implementation and evaluation: We implement our technique and evaluate it on several case studies, automatically constructing coupling proofs for interesting properties of a variety of algorithms (Sect. 6).
We conclude by comparing our technique with related approaches (Sect. 7).
2 Overview and Illustration
2.1 Introducing f-Couplings
A Simple Example. We begin by illustrating f-couplings over two identical Bernoulli distributions, denoted by the following probability mass functions: \(\mu _1(x) = \mu _2(x) = 0.5\) for all \(x \in \mathbb {B}\) (where \(\mathbb {B} = \{ true , false \}\)). In other words, the distribution \(\mu _i\) returns \( true \) with probability 0.5, and \( false \) with probability 0.5.
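This condition can be checked mechanically. The following sketch (in Python, with exact rational arithmetic; the helper name `is_f_coupling` is ours) tests the defining requirement of an f-coupling—an injective f with \(\mu _1(b) \leqslant \mu _2(f(b))\) for all b, formalized in Sect. 3.1—for two bijections on \(\mathbb {B}\):

```python
from fractions import Fraction

def is_f_coupling(mu1, mu2, f):
    """Strassen-style condition: f injective with mu1(b) <= mu2(f(b))."""
    image = [f(b) for b in mu1]
    injective = len(image) == len(set(image))
    return injective and all(mu1[b] <= mu2[f(b)] for b in mu1)

half = Fraction(1, 2)
bern = {True: half, False: half}   # the fair Bernoulli distribution

# Both the identity and negation are bijections on {true, false}, and
# each induces an f-coupling of bern(0.5) with itself.
assert is_f_coupling(bern, bern, lambda b: b)        # identity coupling
assert is_f_coupling(bern, bern, lambda b: not b)    # negation coupling
```

A skewed distribution fails the check: if \(\mu _1( true ) = 3/4\) but \(\mu _2( true ) = 1/2\), the identity is not an f-coupling.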
2.2 Simulating a Fair Coin
Now, let’s use f-couplings to prove more interesting properties. Consider the program fairCoin in Fig. 1; the program simulates a fair coin by flipping a possibly biased coin that returns \( true \) with probability \(p \in (0,1)\), where p is a program parameter. Our goal is to prove that for any p, the output of the program is a uniform distribution—it simulates a fair coin. We consider two separate copies of fairCoin generating distributions \(\mu _1\) and \(\mu _2\) over the returned value x for the same bias p, and we construct a coupling showing \(\mu _1( true ) = \mu _2( false )\), that is, heads and tails have equal probability.
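Before building the coupling proof, the target property itself can be sanity-checked by direct computation, outside the coupling framework. Assuming fairCoin has the shape suggested by the loop analyzed below—repeatedly sample \(x, y \sim \mathsf {bern}(p) \times \mathsf {bern}(p)\) while \(x = y\), then return x—the iterations are independent, so the output distribution is one iteration's distribution conditioned on exiting:

```python
from fractions import Fraction

def fair_coin_output(p):
    """Exact output distribution of fairCoin: sample (x, y) ~ bern(p)^2,
    repeat while x == y, return x. Since iterations are i.i.d., the output
    distribution is the one-iteration distribution conditioned on x != y."""
    # joint probabilities of the four (x, y) outcomes of one iteration
    pr = {(a, b): (p if a else 1 - p) * (p if b else 1 - p)
          for a in (True, False) for b in (True, False)}
    exit_mass = pr[(True, False)] + pr[(False, True)]  # prob. the loop exits
    return {True:  pr[(True, False)] / exit_mass,
            False: pr[(False, True)] / exit_mass}

# For any bias p, the output is the uniform distribution on booleans:
# p(1-p) / (2 p(1-p)) = 1/2 exactly.
for p in (Fraction(1, 3), Fraction(9, 10)):
    mu = fair_coin_output(p)
    assert mu[True] == mu[False] == Fraction(1, 2)
```

The coupling proof below establishes the same fact symbolically, without computing these probabilities at all.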
Constructing f-Couplings. At first glance, it is unclear how to construct an f-coupling; unlike the distributions in our simple example, we do not have a concrete description of \(\mu _1\) and \(\mu _2\) as uniform distributions (indeed, this is what we are trying to establish). The key insight is that we do not need to construct our coupling in one shot. Instead, we can specify a coupling for the concrete, primitive sampling instructions in the body of the loop—which we know sample from \(\mathsf {bern}(p)\)—and then extend it to an f-coupling for the whole loop and \(\mu _1, \mu _2\).
Analyzing the Loop. To extend an \(f_{body}\)-coupling on loop bodies to the entire loop, it suffices to check a synchronization condition: the coupling from \(f_{body}\) must ensure that the loop guards are equal, so the two executions synchronize at the loop head. This holds in our case: every time the first program executes the statement \(x, y \sim \mathsf {bern}(p) \times \mathsf {bern}(p)\), we can think of x, y as nondeterministically set to some values (a, b), and the corresponding variables in the second program as set to \(f_{ swap }(a,b) = (b,a)\). The loop guards in the two programs are equivalent under this choice, since \(a = b\) is equivalent to \(b = a\), hence we can analyze the loops in lockstep. In general, couplings enable us to relate samples from a pair of probabilistic assignments as if they were selected nondeterministically, often avoiding quantitative reasoning about probabilities.
Note that our approach does not need to construct \(f_{loop}\) concretely—this function may be highly complex. Instead, we only need to show that \(\varPsi _{f_{ loop }}\) (or some overapproximation) lies inside the target relation in Formula 1.
Achieving Automation. Observe that once we have fixed an \(f_{ body }\)-coupling for the sampling instructions inside the loop body, checking that the \(f_{ loop }\)-coupling satisfies the conditions for uniformity (Formula 1) is essentially a program verification problem. Therefore, we can cast the problem of constructing a coupling proof as a logical problem of the form \(\exists f.\ \forall X.\ \varphi \), where f is the coupling function we need to discover and \(\varphi \) is a constraint ensuring that (i) f indeed represents an f-coupling, and (ii) the f-coupling implies uniformity. Thus, we can use established synthesis and verification techniques to solve the resulting constraints (see, e.g., [2, 13, 27]).
3 A Proof Rule for Coupling Proofs
In this section, we develop a technique for constructing couplings and formalize proof rules for establishing uniformity and independence properties over program variables. We begin with background on probability distributions and couplings.
3.1 Distributions and Couplings
Distributions. A function \(\mu : B\rightarrow [0,1]\) defines a distribution over a countable set \(B\) if \(\sum _{b\in B} \mu (b) = 1\). We will often write \(\mu (A)\) for a subset \(A \subseteq B\) to mean \(\sum _{x \in A} \mu (x)\). We write \( dist (B)\) for the set of all distributions over \(B\).
An important fact is that an injective function \(f: B_1 \rightarrow B_2\) where \(\mu _1(b) \leqslant \mu _2(f(b))\) for all \(b\in B_1\) induces a coupling between \(\mu _1\) and \(\mu _2\); this follows from a general theorem by Strassen [28], see also [23]. We write \(\mu _1 \leftrightsquigarrow ^{f} \mu _2\) for \(\mu _1 \leftrightsquigarrow ^{\varPsi _f} \mu _2\), where \(\varPsi _f = \{(b_1, f(b_1)) \mid b_1 \in B_1\}\). The existence of a coupling can imply various useful properties about the two distributions. The following general fact will be the most important for our purposes—couplings can prove equalities between probabilities.
Proposition 1
Let \(E_1 \subseteq B_1\) and \(E_2 \subseteq B_2\) be two events, and let \(\varPsi _= \triangleq \{(b_1, b_2) \mid b_1 \in E_1 \iff b_2 \in E_2\}\). If \(\mu _1 \leftrightsquigarrow ^{\varPsi _=} \mu _2\), then \(\mu _1(E_1) = \mu _2(E_2)\).
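Proposition 1 can be illustrated concretely. The toy instance below (ours, not from the paper) builds the joint distribution induced by a shift bijection on a four-element uniform distribution, checks that its support lies in \(\varPsi _=\), and confirms the resulting equality of event probabilities:

```python
from fractions import Fraction

B = [0, 1, 2, 3]
mu1 = {b: Fraction(1, 4) for b in B}     # uniform on B
mu2 = dict(mu1)
E1, E2 = {0, 1}, {2, 3}

f = lambda b: (b + 2) % 4                # a bijection shifting by 2
joint = {(b, f(b)): mu1[b] for b in B}   # joint dist. of the induced coupling

# the marginals of the joint distribution recover mu1 and mu2
for b in B:
    assert sum(p for (b1, _), p in joint.items() if b1 == b) == mu1[b]
    assert sum(p for (_, b2), p in joint.items() if b2 == b) == mu2[b]

# the support lies in Psi_= : b1 in E1 iff b2 in E2 ...
assert all((b1 in E1) == (b2 in E2) for (b1, b2) in joint)
# ... so, by Proposition 1, the event probabilities agree
assert sum(mu1[b] for b in E1) == sum(mu2[b] for b in E2)
```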
3.2 Program Model
Our program model uses an imperative language with probabilistic assignments, where we can draw a random value from primitive distributions. We first consider the easier case of loop-free programs; we treat looping programs in Sect. 5.
We make a few simplifying assumptions. First, distribution expressions only mention input variables \(V^I\); e.g., in the example above, \(\mathsf {bern}(p)\), we have \(p\in V^I\). Second, all programs are in static single assignment (SSA) form, where each variable is assigned only once, and are well-typed. These assumptions are relatively minor; they can be verified using existing tools, or lifted entirely at the cost of slightly more complexity in our encoding.
Semantics. A state s of a program \(P\) is a valuation of all of its variables, represented as a map from variables to values, e.g., s(x) is the value of \(x\in V\) in s. We extend this mapping to expressions: \(s( exp )\) is the valuation of \( exp \) in s, and \(s( dexp )\) is the probability distribution defined by \( dexp \) in s.
We use S to denote the set of all possible program states. As is standard [24], we can give a semantics of \(P\) as a function \(\llbracket P\rrbracket : S \rightarrow dist (S)\) from states to distributions over states. For an output distribution \(\mu = \llbracket P\rrbracket (s)\), we will abuse notation and write, e.g., \(\mu (x = y)\) to denote the probability of the event that the program returns a state s where \(s(x = y) = true \).
Self-Composition. We will sometimes need to simulate two separate executions of a program with a single probabilistic program. Given a program \(P\), we use \(P_i\) to denote a program identical to \(P\) but with all variables tagged with the subscript i. We can then define the self-composition: given a program \(P\), the program \(P_1; P_2\) first executes \(P_1\), and then executes the (separate) copy \(P_2\).
3.3 Coupled Postconditions
We are now ready to present the f-coupled postcondition, an operator for approximating the outputs of two coupled programs.
f-Coupled Postcondition. We rewrite programs so that all probabilistic assignments are combined into a single probabilistic assignment to a vector of variables appearing at the beginning of the program, i.e., an assignment of the form \(\varvec{v} \sim dexp \) in P and \(\varvec{v}' \sim dexp '\) in \(P'\), where \(\varvec{v},\varvec{v}'\) are vectors of variables. For instance, we can combine \(x \sim \mathsf {bern}(0.5); y\sim \mathsf {bern}(0.5)\) into the single statement \(x,y \sim \mathsf {bern}(0.5)\times \mathsf {bern}(0.5)\).
Example 1
Consider the simple program P defined as \(x \sim \mathsf {bern}(0.5); x = \lnot x\) and let \(f_\lnot (x) = \lnot x\). Then, \(\mathsf {cpost}(P,P,Q,f_\lnot )\) is \(\{(s,s') \mid s(x) = \lnot s'(x)\}\).
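The coupled postcondition of Example 1 can be computed by brute-force enumeration, treating the coupled samples nondeterministically (a minimal sketch; the function names are ours):

```python
def run_P(sample):
    """Deterministic remainder of P after the lifted sampling: x = not x."""
    x = sample
    x = not x
    return x

f_neg = lambda b: not b   # the coupling function f_not

# Enumerate all f-coupled executions: the first copy samples a, and the
# second copy's sample is forced to the coupled value f_neg(a).
coupled_outputs = {(run_P(a), run_P(f_neg(a))) for a in (True, False)}

# every coupled pair of final states satisfies s(x) = not s'(x)
assert all(s1 == (not s2) for (s1, s2) in coupled_outputs)
```

Only the pairs \((false, true)\) and \((true, false)\) are reachable, matching \(\{(s,s') \mid s(x) = \lnot s'(x)\}\).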
The main soundness theorem shows there is a probabilistic coupling of the output distributions with support contained in the coupled postcondition (we defer all proofs to the full version of this paper).
Theorem 1
Let programs P and \(P'\) be of the form \(\varvec{v} \sim dexp ; P_D\) and \(\varvec{v}' \sim dexp '; P'_D\), for deterministic programs \(P_D, P'_D\). Given a function \(f : B \rightarrow B'\) satisfying Formula 2, for every \((s,s') \in S \times S'\) we have \(\llbracket P\rrbracket (s) \leftrightsquigarrow ^{\varPsi } \llbracket P'\rrbracket (s')\), where \(\varPsi = \mathsf {cpost}(P, P', (s,s'), f)\).
3.4 Proof Rules for Uniformity and Independence
We are now ready to demonstrate how to establish uniformity and independence of program variables using f-coupled postconditions. We will continue to assume that random sampling commands have been lifted to the front of each program, and that f satisfies Formula 2.
Uniformity. Consider a program \(P\) and a variable \(v^* \in V\) with finite, nonempty domain \(B\). Let \(\mu = \llbracket P\rrbracket (s)\) for some state \(s\in S\). We say that variable \(v^*\) is uniformly distributed in \(\mu \) if \(\mu (v^* = b) = \frac{1}{|B|}\) for every \(b\in B\).
The following theorem connects uniformity with f-coupled postconditions.
Theorem 2
The intuition is that in the two f-coupled copies of P, the first \(v^*\) is equal to b exactly when the second \(v^*\) is equal to \(b'\). Hence, the probability of returning b in the first copy and \(b'\) in the second copy are the same. Repeating for every pair of values \(b,b'\), we conclude that \(v^*\) is uniformly distributed.
Example 2
Independence. We now present a proof rule for independence. Consider a program P and two variables \(v^*,w^* \in V\) with domains \(B\) and \(B'\), respectively. Let \(\mu = \llbracket P\rrbracket (s)\) for some state \(s\in S\). We say that \(v^*,w^*\) are probabilistically independent in \(\mu \) if \(\mu (v^* = b\wedge w^* = b') = \mu (v^* = b) \cdot \mu (w^* = b')\) for every \(b\in B\) and \(b' \in B'\).
The following theorem connects independence with f-coupled postconditions. We will self-compose two tagged copies of P, called \(P_1\) and \(P_2\).
Theorem 3
The idea is that under the coupling, the probability of P returning \(v^* = b\wedge w^* = b'\) is the same as the probability of \(P_1\) returning \(v^* = b\) and \(P_2\) returning \(w^* = b'\), for all values \(b,b'\). Since \(P_1\) and \(P_2\) are two independent executions of P by construction, this establishes independence of \(v^*\) and \(w^*\).
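For intuition, the definition of independence can be checked exhaustively on a toy program (not one of our benchmarks) in which w is the exclusive-or of v with an independent fair coin:

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)

# output distribution of: v ~ bern(1/2); u ~ bern(1/2); w = v xor u
mu = {}
for v, u in product((True, False), repeat=2):
    w = (v != u)
    mu[(v, w)] = mu.get((v, w), Fraction(0)) + half * half

marg_v = lambda b: sum(p for (v_, _), p in mu.items() if v_ == b)
marg_w = lambda b: sum(p for (_, w_), p in mu.items() if w_ == b)

# v and w are independent: Pr[v = b, w = b'] = Pr[v = b] * Pr[w = b']
for b, b2 in product((True, False), repeat=2):
    assert mu.get((b, b2), Fraction(0)) == marg_v(b) * marg_w(b2)
```

Our proof rule establishes exactly this product equation, but via a coupling of the self-composed copies rather than by computing \(\mu \).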
4 Constraint-Based Formulation of Proof Rules
In Sect. 3, we formalized the problem of constructing a coupling proof using f-coupled postconditions. We now automatically find such proofs by posing the problem as a constraint, where a solution gives a function f establishing our desired property.
4.1 Generating Logical and Probabilistic Constraints
As expected, \(\mathsf {enc}\) reflects the strongest postcondition \(\mathsf {post}\).
Lemma 1
Let P be a program and let \(\rho \) be any assignment of the variables. An assignment \(\rho '\) agreeing with \(\rho \) on all input variables \(V^I\) satisfies the constraint \(\mathsf {enc}(P)[\rho '/V]\) precisely when \(\mathsf {post}(P, \{\rho \}) = \{ \rho ' \}\), treating \(\rho ,\rho '\) as program states.
Note that this is a second-order formula, as it quantifies over the uninterpreted function f. The left side of the implication in Formula 3 encodes an f-coupled execution of P and \(P_1\), starting from equal initial states. The right side of this implication encodes the conditions for uniformity, as in Theorem 2.
Formula 4 ensures that there is an f-coupling between \( dexp \) and \( dexp _1\) for any initial state; recall that \( dexp \) may mention input variables \(V^I\). The constraint \( dexp \leftrightsquigarrow ^{f} dexp _1\) is not a standard logical constraint—intuitively, it is satisfied if \( dexp \leftrightsquigarrow ^{f} dexp _1\) holds for some interpretation of f, \( dexp \), and \( dexp _1\).
Example 3
Example 4
Theorem 4
(Uniformity constraints). Fix a program P and variable \(v^* \in V\). Let \(\varphi \) be the uniformity constraints in Formulas 3 and 4. If \(\varphi \) is valid, then \(v^*\) is uniformly distributed in \(\llbracket P\rrbracket (s)\) for all \(s \in S\).
Theorem 5
(Independence constraints). Fix a program P and two variables \(v^*,w^* \in V\). Let \(\varphi \) be the independence constraints from Formulas 5 and 6. If \(\varphi \) is valid, then \(v^*,w^*\) are independent in \(\llbracket P\rrbracket (s)\) for all \(s \in S\).
4.2 Constraint Transformation
To solve our constraints, we transform them into the form \(\exists f.\ \forall X.\ \varphi \), where \(\varphi \) is a first-order formula. Such formulas can be viewed as synthesis problems, and are often solvable automatically using standard techniques.
We perform our transformation in two steps. First, we transform our constraint into the form \(\exists f.\ \forall X.\ \varphi _p\), where \(\varphi _p\) still contains the coupling constraint. Then, we replace the coupling constraint with a first-order formula by logically encoding primitive distributions as uninterpreted functions.
where \(g(a,a',\cdot )\) is the function obtained by partially applying g to \(a, a'\).
Note that if we cannot encode the definition of the distribution in our first-order theory (e.g., if it requires nonlinear constraints), or if we do not have a concrete description of the distribution, we can simply elide the last two constraints and under-constrain h and \(h'\). In Sect. 6 we use this feature to prove properties of a program encoding a Bayesian network, where the primitive distributions are unknown program parameters.
Theorem 6
(Transformation soundness). Let \(\varphi \) be the constraints generated for some program P. Let \(\varphi '\) be the result of applying the above transformations to \(\varphi \). If \(\varphi '\) is valid, then \(\varphi \) is valid.
Constraint Solving. After performing these transformations, we finally arrive at constraints of the form \(\exists f.\ \forall X.\ \varphi \), where \(\varphi \) is a first-order formula. These exactly match constraint-based program synthesis problems. In Sect. 6, we use SMT solvers and enumerative synthesis to handle these constraints.
5 Dealing with Loops
So far, we have only considered loop-free programs. In this section, we extend our approach to programs with loops.
Intuitively, the set I is the least inductive invariant for the two coupled programs running with synchronized loops. Theorem 1, which establishes that fcoupled postconditions result in couplings over output distributions, naturally extends to a setting with loops.
The first three constraints encode the definition of \(\mathsf {cpost}\); the last two ensure that f constructs a coupling and that the invariant implies the uniformity condition when the loop terminates. Using the technique presented in Sect. 4.2, we can transform these constraints into the form \(\exists f, I.\ \forall X.\ \varphi \); that is, in addition to discovering the function f, we also need to discover the invariant I.
Proving independence in looping programs poses additional challenges, as directly applying the self-composition construction from Sect. 3 requires relating a single loop with two loops. When the number of loop iterations is deterministic, however, we may simulate two sequentially composed loops with a single loop that interleaves their iterations (a synchronized or cross product [4, 29]), reducing the synthesis problem to finding a coupling between two loops.
6 Implementation and Evaluation
Implementation. To solve formulas of the form \(\exists f.\ \forall X.\ \varphi \), we implemented a simple solver using a guess-and-check loop: we iterate through various interpretations of f, insert them into the formula, and check whether the resulting formula is valid. In the simplest case, we are searching for a function f from n-tuples to n-tuples. For instance, in Sect. 2.2, we discovered the function \(f(x,y) = (y,x)\). Our implementation is parameterized by a grammar defining an infinite set of interpretations of f, which involves permuting the arguments (as above), conditionals, and other basic operations (e.g., negation for Boolean variables). For checking validity of \(\forall X.\ \varphi \) given f, we use the Z3 SMT solver [19] for loop-free programs. For loops, we use an existing constrained Horn clause solver based on the MathSAT SMT solver [18].
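The guess-and-check loop can be sketched for the fairCoin example of Sect. 2.2; the candidate grammar and the verification condition below are deliberately simplified stand-ins for the implementation's (which checks validity with Z3 rather than by enumeration):

```python
from itertools import product

BOOLS = (True, False)

# a tiny grammar of candidate couplings f : B^2 -> B^2 (ours, for illustration)
candidates = {
    "id":   lambda a, b: (a, b),
    "swap": lambda a, b: (b, a),
    "neg":  lambda a, b: (not a, not b),
}

def valid_for_fair_coin(f):
    """Verification condition for fairCoin: under the coupling
    (x2, y2) = f(x1, y1), the loop guards x = y must agree in both
    copies, and on exit the outputs satisfy x1 = true <=> x2 = false."""
    for a, b in product(BOOLS, repeat=2):
        a2, b2 = f(a, b)
        if (a == b) != (a2 == b2):    # guards must synchronize
            return False
        if a != b and a != (not a2):  # exit relation: x1 <=> not x2
            return False
    return True

# guess-and-check: try candidates in order, keep the first that verifies
found = next(name for name, f in candidates.items() if valid_for_fair_coin(f))
# f_swap(a, b) = (b, a) is discovered, as in Sect. 2.2
```

The identity fails the exit relation, so the loop returns the swap function; negating both samples happens to verify as well, illustrating that valid couplings need not be unique.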
Benchmarks and Results. As a set of case studies for our approach, we use 5 different programs collected from the literature and presented in Fig. 2. For these programs, we prove uniformity, (conditional) independence properties, and other probabilistic equalities. For instance, we use our implementation to prove a main lemma for the Ballot theorem [20], encoded as the program ballot.
Figure 3 shows the time and number of loop iterations required by our implementation to discover a coupling proof. The small number of iterations and the short solving times demonstrate the simplicity of the discovered proofs. For instance, the ballot theorem was proved in 3 s and only 4 iterations, while the fairCoin example (illustrated in Sect. 2.2) required only two iterations and 1.4 s. In all cases, the size of the synthesized function f, measured as the depth of its AST, is no more than 4. We describe these programs and properties in a bit more detail below.
The second program fairDie gives a different construction for simulating a roll of a fair die given fair coin flips. Three fair coins are repeatedly flipped as long as they are all equal; the returned triple is the binary representation of a number in \(\{ 1, \dots , 6 \}\), the result of the simulated roll. The synthesized coupling is a bijection on triples of booleans \(\mathbb {B} \times \mathbb {B} \times \mathbb {B}\); fixing any two possible output triples \((b_1, b_2, b_3)\) and \((b_1', b_2', b_3')\) of distinct booleans, the coupling maps \((b_1, b_2, b_3) \mapsto (b_1', b_2', b_3')\) and vice versa, leaving all other triples unchanged.
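The uniformity claim for fairDie can be confirmed by direct conditioning, analogous to fairCoin: since iterations are independent, the output distribution is the one-iteration distribution conditioned on the three coins not all being equal (a sanity check, separate from the coupling proof):

```python
from fractions import Fraction
from itertools import product

eighth = Fraction(1, 8)
triples = list(product((True, False), repeat=3))   # 8 equally likely flips

# condition on the loop exiting: the three coins are not all equal
accepted = [t for t in triples if len(set(t)) > 1]
mu = {t: eighth / (len(accepted) * eighth) for t in accepted}

# six surviving triples, each with probability exactly 1/6: a fair die
assert len(mu) == 6
assert all(p == Fraction(1, 6) for p in mu.values())
```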
Case Studies: Independence (noisySum, bayes). In the next two programs, our approach synthesizes coupling proofs of independence and conditional independence of program variables in the output distribution. The first program, noisySum, is a stylized program inspired by privacy-preserving algorithms that sum a series of noisy samples; for giving accuracy guarantees, it is often important to show that the noisy draws are probabilistically independent. We show that any pair of samples is independent.
The second program, bayes, models a simple Bayesian network with three independent variables x, y, z and two dependent variables w and \(w'\), computed from (x, y) and (y, z) respectively. We want to show that w and \(w'\) are independent conditioned on any value of y; intuitively, w and \(w'\) only depend on each other through the value of y, and are independent otherwise. We use a constraint encoding similar to the encoding for showing independence to find a coupling proof of this fact. Note that the distributions \(\mu , \mu ', \mu ''\) of x, y, z are unknown parameters, and the functions f and g are also uninterpreted. This illustrates the advantage of using a constraintbased technique—we can encode unknown distributions and operations as uninterpreted functions.
Case Studies: Probabilistic Equalities (ballot). As we mentioned in Sect. 1, our approach extends naturally to proving general probabilistic equalities beyond uniformity and independence. To illustrate, we consider a lemma used to prove Bertrand’s Ballot theorem [20]. Roughly speaking, this theorem considers counting ballots one-by-one in an election where there are \(n_A\) votes cast for candidate A and \(n_B\) votes cast for candidate B, where \(n_A, n_B\) are parameters. If \(n_A > n_B\) (so A is the winner) and votes are counted in a uniformly random order, the Ballot theorem states that the probability that A leads throughout the whole counting process—without any ties—is precisely \((n_A - n_B) / (n_A + n_B)\).
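The theorem's closed form can be checked by brute force on small instances (a sanity check, not part of the coupling proof):

```python
from fractions import Fraction
from itertools import permutations

def ballot_prob(n_a, n_b):
    """Exact probability that A stays strictly ahead throughout a
    uniformly random counting order of n_a votes for A, n_b for B."""
    votes = [1] * n_a + [-1] * n_b      # +1 = vote for A, -1 = vote for B
    orders = list(permutations(votes))  # all counting orders (with repeats
                                        # of equal votes; ratio is unaffected)
    def leads_throughout(order):
        tally = 0
        for v in order:
            tally += v
            if tally <= 0:              # a tie or B ahead: A did not lead
                return False
        return True

    return Fraction(sum(map(leads_throughout, orders)), len(orders))

# matches the closed form (n_a - n_b) / (n_a + n_b) on small instances
for n_a, n_b in [(2, 1), (3, 1), (3, 2), (4, 2)]:
    assert ballot_prob(n_a, n_b) == Fraction(n_a - n_b, n_a + n_b)
```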
7 Related Work
Probabilistic programs have been a longstanding target of formal verification. We compare with two of the most well-developed lines of research: probabilistic model checking and deductive verification via program logics or expectations.
Probabilistic Model Checking. Model checking has proven to be a powerful tool for verifying probabilistic programs, capable of automated proofs for various probabilistic properties (typically encoded in probabilistic temporal logics); there are now numerous mature implementations (see, e.g., [21] or [3, Chap. 10] for more details). In comparison, our approach has the advantage of being fully constraint-based. This gives it a number of unique features: (i) it applies to programs with unknown inputs and variables over infinite domains; (ii) it applies to programs sampling from distributions with parameters, or even ones sampling from unknown distributions modeled as uninterpreted functions in first-order logic; (iii) it applies to distributions over infinite domains; and (iv) the generated coupling proofs are compact. At the same time, our approach is specialized to the coupling proof technique and is likely to be more incomplete.
Deductive Verification. Compared to general deductive verification systems for probabilistic programs, like program logics [5, 14, 22, 26] or techniques reasoning by pre-expectations [25], the main benefit of our technique is automation—deductive verification typically requires an interactive theorem prover to manipulate complex probabilistic invariants. In general, the coupling proof method limits reasoning about probabilities and distributions to just the random sampling commands; in the rest of the program, the proof can avoid quantitative reasoning entirely. As a result, our system can work with non-probabilistic invariants and achieve full automation. Our approach also smoothly handles properties involving the probabilities of multiple events, like probabilistic independence, unlike techniques that analyze probabilistic events one-by-one.
Acknowledgements
We thank Samuel Drews, Calvin Smith, and the anonymous reviewers for their helpful comments. Justin Hsu was partially supported by ERC grant #679127 and NSF grant #1637532. Aws Albarghouthi was supported by NSF grants #1566015, #1704117, and #1652140.
References
 1.Albarghouthi, A., Hsu, J.: Synthesizing coupling proofs of differential privacy. Proc. ACM Programm. Lang. 2(POPL), 58:1–58:30 (2018). http://doi.acm.org/10.1145/3158146Google Scholar
 2. Alur, R., Bodik, R., Juniwal, G., Martin, M.M., Raghothaman, M., Seshia, S.A., Singh, R., Solar-Lezama, A., Torlak, E., Udupa, A.: Syntax-guided synthesis. In: Formal Methods in Computer-Aided Design (FMCAD), Portland, Oregon, pp. 1–8. IEEE (2013)
 3. Baier, C., Katoen, J.-P., Larsen, K.G.: Principles of Model Checking. MIT Press, Cambridge (2008)
 4. Barthe, G., Crespo, J.M., Kunz, C.: Relational verification using product programs. In: Butler, M., Schulte, W. (eds.) FM 2011. LNCS, vol. 6664, pp. 200–214. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21437-0_17
 5. Barthe, G., Espitau, T., Gaboardi, M., Grégoire, B., Hsu, J., Strub, P.-Y.: A program logic for probabilistic programs. In: European Symposium on Programming (ESOP), Thessaloniki, Greece (2018, to appear). https://justinh.su/files/papers/ellora.pdf
 6. Barthe, G., Espitau, T., Grégoire, B., Hsu, J., Strub, P.-Y.: Proving uniformity and independence by self-composition and coupling. In: International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR), Maun, Botswana. EPiC Series in Computing, vol. 46, pp. 385–403 (2017). https://arxiv.org/abs/1701.06477
 7. Barthe, G., Espitau, T., Grégoire, B., Hsu, J., Strub, P.-Y.: Proving expected sensitivity of probabilistic programs. Proc. ACM Program. Lang. 2(POPL), 57:1–57:29 (2018). http://doi.acm.org/10.1145/3158145
 8. Barthe, G., Fong, N., Gaboardi, M., Grégoire, B., Hsu, J., Strub, P.-Y.: Advanced probabilistic couplings for differential privacy. In: ACM SIGSAC Conference on Computer and Communications Security (CCS), Vienna, Austria (2016). https://arxiv.org/abs/1606.07143
 9. Barthe, G., Fournet, C., Grégoire, B., Strub, P.-Y., Swamy, N., Zanella-Béguelin, S.: Probabilistic relational verification for cryptographic implementations. In: ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), San Diego, California, pp. 193–206 (2014). https://research.microsoft.com/en-us/um/people/nswamy/papers/rfstar.pdf
 10. Barthe, G., Gaboardi, M., Grégoire, B., Hsu, J., Strub, P.-Y.: Proving differential privacy via probabilistic couplings. In: IEEE Symposium on Logic in Computer Science (LICS), New York, pp. 749–758 (2016). http://arxiv.org/abs/1601.05047
 11. Barthe, G., Grégoire, B., Hsu, J., Strub, P.-Y.: Coupling proofs are probabilistic product programs. In: ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), Paris, France, pp. 161–174 (2017). http://arxiv.org/abs/1607.03455
 12. Barthe, G., Köpf, B., Olmedo, F., Zanella-Béguelin, S.: Probabilistic relational reasoning for differential privacy. ACM Trans. Program. Lang. Syst. 35(3), 9 (2013). http://software.imdea.org/~bkoepf/papers/toplas13.pdf
 13. Beyene, T., Chaudhuri, S., Popeea, C., Rybalchenko, A.: A constraint-based approach to solving games on infinite graphs. In: ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), San Diego, California, pp. 221–233 (2014)
 14. Chadha, R., Cruz-Filipe, L., Mateus, P., Sernadas, A.: Reasoning about probabilistic sequential programs. Theor. Comput. Sci. 379(1), 142–165 (2007)
 15. Chatterjee, K., Fu, H., Goharshady, A.K.: Termination analysis of probabilistic programs through Positivstellensatz's. In: Chaudhuri, S., Farzan, A. (eds.) CAV 2016. LNCS, vol. 9779, pp. 3–22. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41528-4_1
 16. Chatterjee, K., Fu, H., Novotný, P., Hasheminezhad, R.: Algorithmic analysis of qualitative and quantitative termination problems for affine probabilistic programs. In: ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), Saint Petersburg, Florida, pp. 327–342 (2016). https://doi.acm.org/10.1145/2837614.2837639
 17. Chatterjee, K., Novotný, P., Žikelić, Đ.: Stochastic invariants for probabilistic termination. In: ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), Paris, France, pp. 145–160 (2017). https://doi.acm.org/10.1145/3009837.3009873
 18. Cimatti, A., Griggio, A., Schaafsma, B.J., Sebastiani, R.: The MathSAT5 SMT solver. In: Piterman, N., Smolka, S.A. (eds.) TACAS 2013. LNCS, vol. 7795, pp. 93–107. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36742-7_7
 19. de Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78800-3_24
 20. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 1, 3rd edn. Wiley, Hoboken (1968)
 21. Forejt, V., Kwiatkowska, M., Norman, G., Parker, D.: Automated verification techniques for probabilistic systems. In: Bernardo, M., Issarny, V. (eds.) SFM 2011. LNCS, vol. 6659, pp. 53–113. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21455-4_3
 22. den Hartog, J.: Probabilistic extensions of semantical models. Ph.D. thesis, Vrije Universiteit Amsterdam (2002)
 23.Hsu, J.: Probabilistic Couplings for Probabilistic Reasoning. Ph.D. thesis, University of Pennsylvania (2017). https://arxiv.org/abs/1710.09951
 24. Kozen, D.: Semantics of probabilistic programs. J. Comput. Syst. Sci. 22(3), 328–350 (1981). https://www.sciencedirect.com/science/article/pii/0022000081900362
 25. Morgan, C., McIver, A., Seidel, K.: Probabilistic predicate transformers. ACM Trans. Program. Lang. Syst. 18(3), 325–353 (1996). dl.acm.org/ft_gateway.cfm?id=229547
 26. Rand, R., Zdancewic, S.: VPHL: a verified partial-correctness logic for probabilistic programs. In: Conference on the Mathematical Foundations of Programming Semantics (MFPS), Nijmegen, The Netherlands (2015)
 27. Solar-Lezama, A., Tancau, L., Bodík, R., Seshia, S.A., Saraswat, V.A.: Combinatorial sketching for finite programs. In: International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), San Jose, California, pp. 404–415 (2006). http://doi.acm.org/10.1145/1168857.1168907
 28.Strassen, V.: The existence of probability measures with given marginals. Annals Math. Stat. 423–439 (1965). https://projecteuclid.org/euclid.aoms/1177700153
 29. Zaks, A., Pnueli, A.: CoVaC: compiler validation by program analysis of the cross-product. In: Cuellar, J., Maibaum, T., Sere, K. (eds.) FM 2008. LNCS, vol. 5014, pp. 35–51. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-68237-0_5
Copyright information
Open Access. This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.