Provable security provides a firm mathematical foundation for reasoning about cryptography. A variety of definition styles have been proposed to reason about security in different settings. For example, simulation-based definitions [16, 28] capture the security notions in Multi-Party Computation (MPC), and game-based definitions [7, 39] formalise the security of primitives like encryption and commitments.

Security proofs are now a cornerstone of modern cryptography. Provable security has greatly increased the level of rigour of security statements; however, proofs of these statements often present informal or incomplete arguments. In fact, many proofs are still considered to be unverifiable [7, 30]. Formal methods offer one way to establish far higher levels of rigour in proofs, and tools have been developed to formally reason about cryptography and obtain machine-checked proofs of security statements. Formalisation of cryptography is a maturing area of research; the EasyCrypt framework [2] has captured proofs of low-level cryptographic primitives [34] as well as MPC [29] and Universal Composability [17]. Moreover, CryptHOL [6] has also been used for fundamental primitives [6, 13] and MPC protocols [11, 12] as well as Constructive Cryptography [33]. Other tools for reasoning about cryptographic proofs in the context of our work include FCF [36], which provides a shallow embedding in Coq for reasoning about cryptography, and CertiCrypt [1], a deep embedding in Coq in which the first (and, before this work, only) formalisation of \(\varSigma \)-protocols was made [5].

In this work we consider two fundamental cryptographic primitives, namely \(\varSigma \)-protocols and commitment schemes, and their connection. Commitment schemes allow a party to commit to a message and keep it hidden until they choose to reveal it at a later time. In particular, commitment schemes are used to hold parties accountable for the messages they send, ensuring they do not cheat when participating in protocols. To this end, commitments are often used to extend protocols secure in the semi-honest model (where parties are assumed to follow the protocol) to be secure in the malicious setting (where corrupted parties may arbitrarily deviate from the protocol).

\(\varSigma \)-protocols allow a party, the prover, to convince a verifier that they possess some knowledge. More formally, we consider a relation R and say w is a witness to the relation with respect to public input x if \((x, w) \in R\). A \(\varSigma \)-protocol allows the prover to convince the verifier that the prover knows w for some given x without revealing anything else about w itself. Like commitment schemes, \(\varSigma \)-protocols aid the enforcement of honest behaviour from potentially malicious parties. For example, the witness (and proof of knowledge of the witness) can provide a guarantee that the party is authorised to perform certain actions, or access certain sensitive information.

The two primitives are strongly linked; Damgård [23] showed how \(\varSigma \)-protocols can be used to construct commitment schemes. So every \(\varSigma \)-protocol yields a corresponding commitment scheme.

\(\varSigma \)-protocols provide a basis for full zero-knowledge, even though the verifier is assumed to be honest for the zero-knowledge property to hold (honest verifier zero-knowledge). The honesty assumption on the verifier can be removed by making the verifier commit to the challenge first; Hazay and Lindell present a generic construction for \(\varSigma \)-protocols [31]. Moreover, the Fiat-Shamir transform [27] can be used to convert a \(\varSigma \)-protocol into a non-interactive proof of knowledge.

Our formalisation is done using the CryptHOL framework inside Isabelle/HOL. We have chosen CryptHOL for three reasons: First, it provides the expressiveness and rigour of higher-order logic. Second, we believe the resulting formalisations are easy to read, even for non-experts in formal methods; this is something we feel is important. Third, it supports different styles of security definitions.

The ability to reason about different types of security definition is important as the security of commitment schemes is expressed using game-based definitions whereas \(\varSigma \)-protocols’ security definitions contain a flavour of the simulation-based proof method. Therefore our work draws on the originally designed application of CryptHOL (game-based proofs) [6] as well as more recently considered applications (simulation-based proofs) [12].

Contributions By leveraging the expressiveness and modularity of CryptHOL and Isabelle we develop a framework for formally reasoning about the security proofs of commitment schemes and \(\varSigma \)-protocols. To the best of our knowledge this is the first formalisation that links the two primitives.

  1. We formalise a framework for reasoning about the security of commitment schemes and \(\varSigma \)-protocols in a general manner. This provides an abstract basis for others to use, as well as lending weight to the notion that CryptHOL is an appropriate framework for cryptography.

  2. We provide clarity to the definition of \(\varSigma \)-protocols. We show that the standard textbook definition [31] and Damgård’s [23] are too weak. While this is, in theory, known in the cryptographic community (Cramer’s original definitions, in his PhD thesis [21], are sufficient), we do not believe it is widely known, as the modern literature diverges from Cramer’s definitions.

  3. We demonstrate how our general frameworks can be instantiated by proving security of well-known examples of both primitives. In detail, we formalise the \(\varSigma \)-protocols of Schnorr, Chaum-Pedersen, and Okamoto, and the commitment schemes of Rivest and Pedersen.

  4. We prove the construction of commitment schemes from \(\varSigma \)-protocols [23] secure at an abstract level. That is, the construction works for any \(\varSigma \)-protocol. Consequently the proof effort for any instantiation of the construction lies only in proving that the underlying \(\varSigma \)-protocol is secure. The commitment scheme result then comes in a matter of lines of proof. At an estimate this halves the proof effort as, in our experience, proofs of commitment schemes’ security are similar in length (and effort) to proofs of \(\varSigma \)-protocols. In particular, for every new \(\varSigma \)-protocol proven secure in our framework we get a proof of a new commitment scheme being secure for free. For example, security for the Pedersen commitment scheme needs about 20 proof lines compared to a few hundred in previous work [13].

  5. We formalise the AND and OR compound statement construction of two \(\varSigma \)-protocols. Here we generalise the proof to arbitrary boolean algebras. The construction from the literature [22], given over bitstrings, is one instance of our result.

This paper extends and improves the conference paper [13] as follows:

  • We additionally formalise and prove secure the Rivest commitment scheme and the Okamoto \(\varSigma \)-protocol. The Rivest commitment scheme uses a trusted initialiser who distributes correlated randomness to both parties. Formalising this result shows that our framework can cope with different structures of commitment scheme.

  • We formalise the generic construction of commitment schemes from \(\varSigma \)-protocols. In [13], only the instantiated results were formalised.

  • The formalisation of compound statements of two \(\varSigma \)-protocols is new to this work.

Outline Figure 1 outlines the work we present in this paper. Solid arrows represent proofs of concrete commitment schemes or \(\varSigma \)-protocols; the arrows end at the instantiated framework. The double arrow represents our formalisation of the general construction of commitment schemes from \(\varSigma \)-protocols, and the corresponding commitment schemes from our instantiated \(\varSigma \)-protocols whose security statements come for free due to the general proof. We highlight one of these in particular with the dotted arrow: the instantiation of the Schnorr \(\varSigma \)-protocol under the general construction yields the Pedersen commitment scheme, a result we formalised from scratch in [13] but which comes in a matter of lines of proof here.

Fig. 1 The diagram outlines our formalisation in this paper

In Sect. 2 we introduce the relevant background on \(\varSigma \)-protocols, commitment schemes, and CryptHOL. Section 3 outlines the general method of formalising cryptographic primitives in CryptHOL. In Sects. 4 and 7 we introduce our formalisation of \(\varSigma \)-protocols and commitment schemes respectively. We show how they can be instantiated for the Schnorr \(\varSigma \)-protocol in Sect. 5, compound statements of \(\varSigma \)-protocol relations in Sect. 6, and for the general proof of commitment schemes from \(\varSigma \)-protocols in Sect. 8. We show in Sect. 9 how the security of the Pedersen commitment scheme follows from the general proof. We discuss related work in Sect. 12 and detail how, during our formalisation, we came across discrepancies in the definitions of \(\varSigma \)-protocols and how we resolved these. Finally we conclude in Sect. 13.

The security definitions presented in Sect. 2.1 are the traditional paper-based definitions of commitment schemes and \(\varSigma \)-protocols; all definitions and statements given in the rest of the paper have been checked by the proof assistant Isabelle/HOL.


\(\varSigma \)-Protocols and Commitment Schemes

Commitment schemes and \(\varSigma \)-protocols are two-party protocols considered to be fundamental building blocks in modern cryptography. Commitment schemes allow a party to commit to a message and reveal it at a later time. This is a powerful construction that is widely used, for example in MPC, where commitments are used as a tool to convert semi-honest protocols to protocols secure in the stronger malicious model. \(\varSigma \)-protocols allow a prover to convince a verifier of some knowledge they possess and are a direct building block for Zero-Knowledge proofs. The major limitation of \(\varSigma \)-protocols is that they do not account for a cheating verifier; it is assumed that the verifier follows the protocol exactly. This is analogous to the semi-honest model considered in simulation-based proofs.

\(\varSigma \)-Protocols

Cramer [21] introduced the abstract notion of a \(\varSigma \)-protocol, coined the term \(\varSigma \)-protocol, and gave the definitions of the properties we consider here. He also developed a rich theory of \(\varSigma \)-protocols that goes beyond what we formalise in this work. Schnorr introduced the first efficient \(\varSigma \)-protocol [38], the protocol we formalise in Sect. 5. Our presentation of \(\varSigma \)-protocols follows Damgård [23], Hazay and Lindell [31] and Cramer [21].

A \(\varSigma \)-protocol is considered with respect to a relation R. If \((h,w) \in R\) then h can be considered an instance of a computational problem where w is the witness, or solution, to the problem. For example, consider the discrete log relation over a group G with generator g. We say w is a witness to \(h \in G\) if the following relation holds.

$$\begin{aligned} (h,w) \in R_{DL} \iff h = g ^ w \end{aligned}$$

The discrete log relation is widely used in cryptography as for certain groups (e.g. \(\mathbb {Z}_p^{*}\) and elliptic curves over finite fields) it is considered a hard relation, meaning that it is computationally infeasible to obtain the witness w from \(h = g ^ w\).
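For concreteness, membership in \(R_{DL}\) can be sketched as a simple test. The tiny parameters p = 23 and g = 5 below, and the function name in_R_DL, are ours and purely illustrative; real instances use cryptographically large groups.

```python
# Toy check of the discrete-log relation R_DL over the multiplicative
# group Z_p^*. Parameters are illustrative only, NOT secure.
p = 23  # small prime; Z_23^* is cyclic of order 22
g = 5   # a generator of Z_23^*

def in_R_DL(h: int, w: int) -> bool:
    """(h, w) is in R_DL iff h = g^w in Z_p^*."""
    return h == pow(g, w, p)

h = pow(g, 7, p)        # h = g^7 mod p, so w = 7 is a witness for h
assert in_R_DL(h, 7)    # the witness is accepted
assert not in_R_DL(h, 8)
```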

Any relation, R, gives rise to a language \(L_R = \{h. \; \exists w. \; (h,w) \in R\}\) that consists of statements in R.

A \(\varSigma \)-protocol is a three move protocol run between a Prover (P) and a Verifier (V) where h is common public input to both P and V and w is a private input to P such that \((h,w) \in R\).

Informal Definition 2

A \(\varSigma \)-protocol has the following three part form:

P → V: a (the initial message, computed using randomness r)
V → P: e (the challenge)
P → V: z (the response)

That is: first the Prover sends an initial message a. Here r denotes the randomness used to create a and is kept by the Prover. In a typical workflow r will be sampled by the prover (as part of the \( initial \) phase) before a is created. Second, the Verifier sends a challenge e and finally the Prover sends a response z, from which the Verifier decides if it will accept or reject the proof.

A conversation for an execution of a \(\varSigma \)-protocol is the transcript of the protocol: (a, e, z). The conversation is said to be accepting if the tuple corresponds to the outputs of the three moves in the protocol and the verifier accepts the response z.
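The three moves can be sketched as one honest run of the Schnorr protocol (the protocol we formalise in Sect. 5) for the discrete log relation. The toy group parameters below are ours and illustrative only.

```python
import random

# One honest run of the Schnorr sigma-protocol for the discrete-log
# relation, in the order-q subgroup of Z_23^*. Toy parameters, not secure.
q, p, g = 11, 23, 4   # p = 2q + 1; g = 4 generates the order-q subgroup
w = 7                 # the prover's private witness
h = pow(g, w, p)      # public input with (h, w) in R_DL

# Move 1: prover samples randomness r and sends a = g^r (initial phase)
r = random.randrange(q)
a = pow(g, r, p)

# Move 2: verifier sends a random challenge e
e = random.randrange(q)

# Move 3: prover responds with z = r + e*w mod q
z = (r + e * w) % q

# The verifier accepts iff g^z = a * h^e, which always holds here
assert pow(g, z, p) == (a * pow(h, e, p)) % p
```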

There are three properties that are required for a protocol of the above form to be a \(\varSigma \)-protocol.

Informal Definition 3

Assume a protocol, \(\pi \), of the above form run between P and V. Then \(\pi \) is a \(\varSigma \)-protocol for a relation R if the following properties hold:

  • Completeness: if P and V follow the protocol on public input h and private input w such that \((h,w) \in R\), then V always accepts.

  • Special soundness: there exists an adversary, \(\mathcal {A}\), such that when given a pair of accepting conversations (on public input h), (a, e, z) and \((a,e',z')\) where \(e \ne e'\), it can compute w such that \((h,w) \in R\).

  • Honest verifier Zero-Knowledge (HVZK): The following conditions must hold.

    1. There exists a polynomial-time simulator S that on input h (public input) and e (a challenge) outputs an accepting conversation (a, e, z) with the same probability distribution as the real conversations between P and V on input (h, w). That is, for all h and w such that \((h,w) \in R\) and every e we have

      $$\begin{aligned} \{S(h,e)\} = \{\langle {P(h,w), V(h,e)}\rangle \} \end{aligned}$$

      where \(\{S(h,e)\}\) is the output distribution of the simulator and \(\{\langle {P(h,w), V(h,e)}\rangle \}\) denotes the distribution of the output transcript of an execution of the protocol between P and V.

    2. For \(h \notin L_R\) the simulator S(h, e) must nevertheless output an accepting conversation (a, e, z).

A \(\varSigma \)-protocol is said to be complete if the Verifier accepts in the final stage whenever the protocol is executed honestly. The intuition for the special soundness property is that if a Prover can respond correctly to two different challenges then it can also compute the witness; hence a prover who does not know the witness cannot convince the Verifier. The HVZK property ensures that no information about the witness is leaked during the execution of the protocol. The first condition resembles definitions from Multi-Party Computation (MPC) where the real view (the real conversation generated by the Prover and Verifier) can be simulated without the private input (the witness). Condition 2 ensures that the OR construction of \(\varSigma \)-protocols satisfies completeness (Sect. 6.1).
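To illustrate special soundness, the extractor for the Schnorr protocol can be sketched as follows: given two accepting conversations with the same first message a but distinct challenges, the witness falls out by arithmetic mod q. The toy parameters and the helper name extract are ours.

```python
# Special-soundness extractor sketch for the Schnorr protocol: from two
# accepting conversations (a, e1, z1) and (a, e2, z2) with e1 != e2,
# recover w = (z1 - z2) / (e1 - e2) mod q. Toy parameters, not secure.
q, p, g = 11, 23, 4     # g generates the order-q subgroup of Z_23^*
w = 7
h = pow(g, w, p)        # public input with (h, w) in R_DL

r = 5                   # prover randomness, shared by both transcripts
a = pow(g, r, p)
e1, e2 = 3, 9
z1 = (r + e1 * w) % q   # accepting response to challenge e1
z2 = (r + e2 * w) % q   # accepting response to challenge e2

def extract(e1, z1, e2, z2):
    """Compute the witness from two accepting transcripts sharing a."""
    return ((z1 - z2) * pow((e1 - e2) % q, -1, q)) % q

assert extract(e1, z1, e2, z2) == w   # the witness is recovered
```

Modular inverse via three-argument pow with exponent -1 requires Python 3.8 or later.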

Commitment Schemes

Commitment schemes were first introduced by Blum [8] and Even [26]. The problem Blum proposed was that of coin flipping by telephone: how do Alice and Bob flip a coin via telephone? Blum proposed commitments to solve this problem: Alice first guesses the outcome of the coin flip and commits to her guess. Bob then flips the coin and reveals the result, upon which Alice reveals the value she committed to so Bob can verify her call matches her commitment. If Alice’s call matches the coin flip she wins.

Informal Definition 4

A commitment scheme has the following three part form:

  1. Key generation: \((ck,vk) \leftarrow key\). The algorithm key outputs a pair of keys that are sent to the committer and verifier respectively.

  2. Commitment phase: \((c,d) \leftarrow com(ck, m)\). The algorithm com takes as input the commitment key ck and the message m to be committed and outputs the commitment c and an opening value d, which is sent to V in the verification phase. C sends c to V.

  3. Verification phase: \(b \leftarrow ver(vk,c,m,d)\). The algorithm ver takes the verification key, commitment, original message and opening value as input and outputs a boolean depending on whether the verification is successful.

The three properties we want from a commitment scheme are correctness, hiding and binding.

Informal Definition 5

(Correctness) A commitment scheme is said to be correct if, when the protocol is run honestly between C and V, V always accepts in the verification phase for all messages that can be committed.

Fig. 2 The hiding game played between the committer (the challenger) and the adversary (the verifier)

To define the hiding and binding properties we consider security games that are played between an adversary and a benign challenger. Games are used to tame the complexity of security proofs [39]. The security games we consider can be seen as pseudo-protocols played between the committer and the verifier, where one of the parties is controlled by an adversary and the other is the challenger. Consider the hiding game depicted in Fig. 2. Here the committer is the challenger and the verifier the adversary; the keys are distributed and the adversary is asked to output two messages of its choosing and send them to the committer, upon which the committer picks one at random and constructs its commitment. The adversary is then required to output its guess as to which message was committed and wins the game if it guesses correctly. More generally, the definition of security with respect to a security game is tied to an event E (in the hiding game this is \(b = b'\)); security requires that the probability that E occurs is close to some target probability (\(\frac{1}{2}\) for the hiding property). The difference between the probability of the event E occurring and the target probability is called the advantage of the adversary. Intuitively, security is achieved if this advantage is small.

The game-based approach allows the cryptographer to be more formal in their reasoning about security properties. In particular, it affords the opportunity to provide more rigorous proofs of security. Informally, a proof is structured as follows: let \(G_0,\ldots ,G_n\) be a sequence of games where \(G_0\) is the original security game and \(G_n\) is a game where the target probability is met. In the proof one shows that each value \(|Pr[G_i] - Pr[G_{i+1}]|\) is small and thus, by the triangle inequality, \(|Pr[G_0] - Pr[G_n]| \le \sum _{i=0}^{n-1} |Pr[G_i] - Pr[G_{i+1}]|\) is also small.

We note that all the definitions here are actually parameterised by a security parameter and it must be shown that the advantage approaches zero faster than any inverse polynomial grows; that is, the advantage is a negligible function. In our presentation here we omit the security parameter and refer only to the advantages of adversaries. Intuitively, the security parameter gives a measure of the level of security of the protocol: a higher security parameter means a higher level of security. Practically this is realised by, for example, the size of the group or field over which the protocol is considered.

To define the hiding property we consider the algorithm which plays out the hiding game from Fig. 2. Informally the algorithm, \( hid \text {-} game \), is as follows:

  1. \((ck,vk) \leftarrow key\)

  2. \((m_0,m_1) \leftarrow \mathcal {A}(vk)\)

  3. \(b \xleftarrow {\$} \{0,1\}\)

  4. \((c,d) \leftarrow com(ck,m_b) \)

  5. \(b' \leftarrow \mathcal {A}(c)\)

  6. \( return \; b = b'\)

The notation \(\xleftarrow {\$}\) denotes uniform sampling while we use \(\xleftarrow {}\) to denote assignment. We define this game formally in (11).
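The steps above can be played out directly; in the sketch below (ours, using the toy Pedersen-style scheme) the adversary is a pair of functions, and the naive adversary we plug in guesses the bit at random, so it wins with probability about 1/2.

```python
import random

# The hiding game, using a toy Pedersen-style scheme. Not secure.
q, p, g = 11, 23, 4

def key():
    x = random.randrange(1, q)
    h = pow(g, x, p)
    return h, h

def com(ck, m):
    d = random.randrange(q)
    return (pow(g, d, p) * pow(ck, m, p)) % p, d

def hid_game(adv_choose, adv_guess):
    ck, vk = key()
    m0, m1 = adv_choose(vk)           # step 2: adversary picks m0, m1
    b = random.randrange(2)           # step 3: challenger samples b
    c, d = com(ck, m1 if b else m0)   # step 4: commit to m_b
    return adv_guess(c) == b          # steps 5-6: adversary guesses b

wins = sum(hid_game(lambda vk: (3, 8),
                    lambda c: random.randrange(2))
           for _ in range(10_000))
assert 0.45 < wins / 10_000 < 0.55   # approx 1/2: no advantage
```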

Informal Definition 6

(Hiding) The hiding advantage is defined for all polynomial-time adversaries, \(\mathcal {A}\), as

$$\begin{aligned} hid \text {-} adv (\mathcal {A}) = |Pr[ hid \text {-} game (\mathcal {A}) = 1] - \frac{1}{2}| \end{aligned}$$

The scheme is said to be perfectly hiding if for all adversaries, \(\mathcal {A}\), we have

$$\begin{aligned} hid \text {-} adv (\mathcal {A}) = 0. \end{aligned}$$

The scheme is said to be computationally hiding if for all computationally bounded adversaries, \(\mathcal {A}\), the advantage value \( hid \text {-} adv (\mathcal {A})\) is negligible.

Fig. 3 The binding game played between the challenger (the verifier) and the adversary (the committer)

Analogously to the hiding property we define the binding property with respect to the binding game which is depicted in Fig. 3. The informal algorithm for playing the binding game is as follows:

  1. \((ck,vk) \leftarrow key\)

  2. \((c,m,d,m',d') \leftarrow \mathcal {A}(ck)\)

  3. \( checks \; m \ne m'\)

  4. \(b \leftarrow ver (vk,c,m,d)\)

  5. \(b' \leftarrow ver (vk,c,m',d')\)

  6. \( return (b' \wedge b)\)

Intuitively the challenger asks the adversary to output two messages \((m,m')\) and corresponding opening values \((d,d')\) for the same commitment c. If the adversary can achieve this such that both messages (and corresponding opening values) verify then the adversary (the committer) is not bound to the original message they commit to.
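The binding game can be sketched the same way. The naive adversary below (ours, for illustration) commits honestly and then tries to open the same commitment to a second message with the same opening value; for a Pedersen-style scheme such a double opening never verifies unless the adversary can solve discrete log.

```python
import random

# The binding game for a toy Pedersen-style scheme. Not secure.
q, p, g = 11, 23, 4

def key():
    x = random.randrange(1, q)
    h = pow(g, x, p)
    return h, h

def ver(vk, c, m, d):
    return c == (pow(g, d, p) * pow(vk, m, p)) % p

def bind_game(adv):
    ck, vk = key()
    c, m, d, m2, d2 = adv(ck)      # adversary outputs two openings of c
    if m == m2:                    # the check m != m' fails: no win
        return False
    return ver(vk, c, m, d) and ver(vk, c, m2, d2)

def naive_adv(ck):
    d = random.randrange(q)
    c = (pow(g, d, p) * pow(ck, 3, p)) % p   # honest commitment to m = 3
    return c, 3, d, 8, d                     # also claim it opens as m' = 8

assert not bind_game(naive_adv)   # the second opening never verifies
```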

Informal Definition 7

(Binding) The binding advantage is defined for all polynomial-time adversaries, \(\mathcal {A}\), as

$$\begin{aligned} bind \text {-} adv (\mathcal {A}) = Pr[ bind \text {-} game (\mathcal {A}) = 1] \end{aligned}$$

The scheme is said to be perfectly binding if for all adversaries, \(\mathcal {A}\), we have

$$\begin{aligned} bind \text {-} adv (\mathcal {A}) = 0. \end{aligned}$$

The scheme is said to be computationally binding if for all computationally bounded adversaries, \(\mathcal {A}\), the advantage \( bind \text {-} adv (\mathcal {A})\) is negligible.

We return to our coin flipping example to give some intuition regarding these properties. In the example Alice is the committer and Bob the verifier. Firstly, we want the scheme to be correct: if both parties run the commitment protocol in the prescribed way then the verifier will always be convinced in the verification phase. Secondly, we do not want Bob to be able to learn anything about Alice’s call (what she commits to) from the commitment itself; that is, we want the commitment to be hiding. Finally, we do not want Alice to be able to decommit to a different call of the coin flip from the one she committed to; that is, we want her commitment to be binding.

CryptHOL and Isabelle Background

In this section we introduce Isabelle/HOL and CryptHOL highlighting the parts important to our work. For more detail on CryptHOL see [6, 32].


Isabelle/HOL is an interactive theorem prover that implements Higher Order Logic (HOL). HOL is built on simple type theory, where types are interpreted as sets of elements and terms are elements of the set corresponding to their type. In this section we highlight some of the basic notions and notations we use in this paper; for a more comprehensive overview we point the reader to [35].

The notations we use in this paper closely resemble the syntax of Isabelle/HOL (Isabelle). For function application we write f(x, y) in an uncurried form for ease of reading instead of \(f \; x \; y\) as in the sources. To indicate that term t has type \(\tau \) we write \(t \; {:}{:} \; \tau \). Isabelle uses the symbol \(\Rightarrow \) for the function type, so \(`a \Rightarrow `b\) is the type of functions that take an input of type ‘a and output an element of type ‘b. The type variable ‘a denotes an abstract type. Isabelle provides a sum type \(`a + `b\) that allows for the combination of elements of two different types into a new type. The two constructors, inject left and inject right, are \(Inl \; {:}{:} \; `a \Rightarrow `a + `b\) and \(Inr \; {:}{:} \; `b \Rightarrow `a + `b\).

The implication arrow \(\longrightarrow \) is used to separate assumptions from conclusions inside a HOL statement. In HOL a function may be nameless; that is, \(\lambda x. \; s(x)\) is the function that maps every value w to the result of s where x is replaced by w. In the situation where s does not depend on x, the underscore \(\_\) replaces x in our notation. Pairs have the type \(`a \times `b\); the projections of the first and second elements are written \( fst \) and \( snd \) respectively.

One technical aspect of Isabelle we use heavily is its module system, called locales. At a technical level, locales allow the user to prove theorems abstractly, relative to given assumptions. These theorems can be reused in situations where the assumptions themselves are theorems. For example, we use locales to parametrise over cyclic groups as well as to fix parameters and assumptions. The locale system also allows us to modularise our proofs in a natural way; to do this we use the \(\mathbf {sublocale}\) command. Sublocales are a form of interpretation of locales; in our case they allow us to work with an instance of a locale. For example, we may wish to prove a particular protocol is indeed a \(\varSigma \)-protocol with respect to our formal definition of a \(\varSigma \)-protocol. When constructing this instance we must prove that all assumptions of the original locale are met. We expand on this in Sect. 3.1 where we outline the structure of our formalisation.


CryptHOL [6] is a framework for reasoning about reduction-based security arguments that is embedded inside the Isabelle/HOL theorem prover. At a high level it allows the user to formally reason about game-based cryptographic proofs by writing probabilistic programs and reason about relationships between them.

CryptHOL, like much of modern cryptography, is based on probability theory. Probabilistic programs in CryptHOL are shallowly embedded as subprobability mass functions of type \( spmf \) using Isabelle’s library for discrete distributions. These can be thought of as probability mass functions with the exception that they do not have to sum to one—we can lose some probability mass. This allows us to model failure events and assertions. When a subprobability mass function does sum to one, we say it is lossless—if so, we can consider the subprobability mass function (spmf) to be a probability mass function (pmf).

HOL functions cannot in themselves provide effects like probabilistic choice, therefore all such effects are modeled using monads. A monad consists of a (polymorphic) type constructor, in this case \( spmf \), and two (polymorphic) operations, \( return \; {:}{:} \; \alpha \Rightarrow \alpha \; spmf \) and \( bind \; {:}{:} \; \alpha \; spmf \Rightarrow (\alpha \Rightarrow \beta \; spmf ) \Rightarrow \beta \; spmf \). The \( return \) operation embeds an effect-free value into the world of effects, and the \( bind \) operation composes components inside the monad.
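To make these operations concrete, here is a minimal model of spmfs (our own, in Python rather than Isabelle) as dictionaries from values to probability masses; a missing key carries zero mass, so mass may sum to less than one.

```python
# Minimal dictionary model of subprobability mass functions.
def ret(x):
    """return: the point distribution placing all mass on x."""
    return {x: 1.0}

def bind(p, f):
    """bind: sample x from p, then continue with the spmf f(x)."""
    out = {}
    for x, px in p.items():
        for y, qy in f(x).items():
            out[y] = out.get(y, 0.0) + px * qy
    return out

def uniform(xs):
    """Uniform distribution over a finite set, as a lossless spmf."""
    return {x: 1.0 / len(xs) for x in xs}

# Left identity law: binding a return is just function application
assert bind(ret(2), lambda x: uniform([x, x + 1])) == {2: 0.5, 3: 0.5}
```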

We now introduce the parts of CryptHOL that are relevant for this paper.

Writing probabilistic programs Probabilistic programs can be encoded as sequences of functions that compute over values drawn from spmfs. CryptHOL provides some easy-to-read do notation, like in Haskell, to write probabilistic programs, where \(\mathbf {do} \{ x \leftarrow p; \; f(x)\}\) is the probabilistic program that samples x from the distribution p and returns the \( spmf \) produced by f when given x. We can also return an \( spmf \) using the monad operation \( return \).

To illustrate these operators, consider the random experiment with two urns shown in Fig. 4. The first urn contains one white ball \(W_1\) and two black ones \(B_2\) and \(B_3\). The second urn contains one black ball \(B_1\) and one red ball \(R_1\). In the experiment, first choose one of the urns uniformly at random, then draw one ball from the chosen urn and look at the ball’s colour. This experiment can be formalized as follows:

$$\begin{aligned} X =&do \ \{\\&U \xleftarrow {\$} \{ urn _1, urn _2\}; \\&Ball \xleftarrow {\$} if \, U = urn _1\, then \,\{W_1,B_2,B_3\}\, else \,\{B_1,R_1\} \\&return ( colour ( Ball )) \} \end{aligned}$$

Syntactically, this probabilistic program constructs the decision tree shown in Fig. 4: first the urn is chosen, then a ball is drawn from the urn, and finally its colour is determined. In the last step, the three different black balls \(B_1\) to \(B_3\) yield the same observation, namely black, so their probabilities are summed. Semantically, this probabilistic program denotes just the probability distribution in the last line. So, in the logic, one can prove that this program is equal to the distribution that assigns the colours white, black, and red the probabilities \(\frac{1}{6}\), \(\frac{7}{12}\) and \(\frac{1}{4}\). This shallow embedding of probabilistic programs induces a rich theory of identities that our proofs can exploit.
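The urn program and the claimed probabilities can be replayed in a dictionary model of spmfs (our own illustration, using exact fractions).

```python
from fractions import Fraction

# The urn experiment in a dictionary model of spmfs: bind composes the
# choice of urn, the draw of a ball, and the colour observation.
def bind(p, f):
    out = {}
    for x, px in p.items():
        for y, qy in f(x).items():
            out[y] = out.get(y, 0) + px * qy
    return out

def uniform(xs):
    return {x: Fraction(1, len(xs)) for x in xs}

colour = {'W1': 'white', 'B1': 'black', 'B2': 'black',
          'B3': 'black', 'R1': 'red'}

X = bind(uniform(['urn1', 'urn2']),
         lambda u: bind(uniform(['W1', 'B2', 'B3'] if u == 'urn1'
                                else ['B1', 'R1']),
                        lambda ball: {colour[ball]: Fraction(1)}))

# The resulting distribution matches the probabilities in the text
assert X == {'white': Fraction(1, 6),
             'black': Fraction(7, 12),
             'red': Fraction(1, 4)}
```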

Fig. 4 Urn example

The following probabilistic program, \( completeness \text {-} game \), is used in our formalisation of the completeness property of \(\varSigma \)-protocols, given in Sect. 4. Here \( init \) and \( response \) are the probabilistic programs that define the two steps of a \(\varSigma \)-protocol completed by the Prover and \( check \) is the function that the verifier uses to validate the response. To define the \( completeness \text {-} game \), \( init \) and \( response \) are sampled as in a real execution of the protocol, and the distribution (\( spmf \)) of \( check \) is returned. Note, as \( check \) is deterministic we must return its output as a probability distribution.

$$\begin{aligned} \begin{array}{l} completeness \text {-} game (h,w,e)= do \; \{ \\ \quad (r,a) \leftarrow init (h,w); \\ \quad z\leftarrow response (r,w,e); \\ \quad return ( check (h,a,e,z)) \} \\ \end{array} \end{aligned}$$
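Instantiated for a toy Schnorr protocol, the same game can be sketched in Python; the concrete bodies of init, response and check, and the group parameters, are our illustrative choices.

```python
import random

# completeness-game instantiated for a toy Schnorr protocol in the
# order-q subgroup of Z_23^*. Toy parameters, not secure.
q, p, g = 11, 23, 4

def init(h, w):
    """Prover's first step: sample randomness r, send a = g^r."""
    r = random.randrange(q)
    return r, pow(g, r, p)

def response(r, w, e):
    """Prover's reply to challenge e: z = r + e*w mod q."""
    return (r + e * w) % q

def check(h, a, e, z):
    """Verifier's (deterministic) test: g^z = a * h^e."""
    return pow(g, z, p) == (a * pow(h, e, p)) % p

def completeness_game(h, w, e):
    r, a = init(h, w)
    z = response(r, w, e)
    return check(h, a, e, z)

w = 7
h = pow(g, w, p)
assert all(completeness_game(h, w, e) for e in range(q))   # V always accepts
```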

We note that \( bind \) is commutative; that is, assuming no data dependencies, one can bind spmfs in any order. In particular, the ordering of a sequence of independent samplings is irrelevant.

Under \( bind \) we also have that constant elements cancel. In particular if p is lossless (its probability mass sums to one), then

$$\begin{aligned} bind (p, \lambda \_. \; q) = q. \end{aligned}$$

Our proofs of security are mainly completed by manipulating the appropriate probabilistic programs. While the proofs that each manipulation is valid are not always accessible to non-experts, the effect of each manipulation can be easily seen and recognised as they are explicitly written in the do notation.

Assertions Making assertions inside probabilistic programs is sometimes useful. For example we must ensure that the adversary in the hiding game (Eq. 11) outputs two valid messages for the game to proceed. The monad for subprobabilities has an element, \(\bot \), that accounts for failure meaning the current part of the probabilistic program is aborted. This is captured by assertion statements

$$\begin{aligned} assert (b) = if \; b \; then \; return(\_) \; else \; \bot \end{aligned}$$

where, if b holds, the probabilistic program continues and otherwise it fails. Here \((\_)\) is the only element of the unit type; returning this element continues execution of the program with no effect. Assertions are often used in conjunction with the \( TRY \; p \; ELSE \; q\) construct, which distributes the probability mass not assigned by p to the distribution according to q. Picking up on our example of the hiding game: if the adversary fails to output two valid messages, the assertion fails and the \( ELSE \) branch is invoked, resulting in the adversary’s output being a coin flip, meaning it does not win the resulting security game.

Assertions are not a necessity to our formalisation as the assumptions could be made explicitly in the theorem statements, for example in any statement of the hiding property we could assume all messages outputted by the adversary (\(\mathcal {A}_1\)) are valid:

$$\begin{aligned} \forall vk. \; (m_0, m_1) \in set \text {-} spmf (\mathcal {A}_1(vk)) \longrightarrow valid \text {-} msg (m_0) \wedge valid\text {-} msg (m_1). \end{aligned}$$

Assertions, however, generally make the formalisation neater and more readable.

Sampling Sampling from sets is important in cryptography. CryptHOL provides an operation \( uniform \) which returns a uniform distribution over a finite set. We use two cases of this function extensively: by \( samp \text {-} uniform (q)\), where q is a natural, we denote the uniform sampling from the set \(\{0,\ldots ,q - 1\}\) and by coin we denote the uniform sampling from the set \(\{True, False\}\)—a coin flip.

The monad operations give rise to another function, \( map {:}{:} (\alpha \Rightarrow \beta ) \Rightarrow \alpha \; spmf \Rightarrow \beta \; spmf \).

$$\begin{aligned} map (f, p) = bind (p, (\lambda x. \; return (f (x)))) \end{aligned}$$

The map function can be thought of as the post-processing of sampled values. It is from this level of abstraction that we are able to reason about the equivalence of distributions and thus complete major steps in our proofs. For example, we can apply one-time pad lemmas. Below is the statement of the one-time pad for addition in the finite group \(\mathbb {Z}_q\).

$$\begin{aligned} map ((\lambda b. \; (y + b) \; mod \; q), ( samp \text {-} uniform (q))) = samp \text {-} uniform (q) \end{aligned}$$
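This one-time pad lemma can be checked directly for a concrete q by enumerating both distributions (a Python sketch of ours, using exact rationals; `map_spmf` plays the role of \( map \)):

```python
from fractions import Fraction

def uniform(q):
    # samp-uniform(q): uniform over {0, ..., q - 1}
    return {i: Fraction(1, q) for i in range(q)}

def map_spmf(f, p):
    # post-process sampled values, merging colliding outcomes
    out = {}
    for x, px in p.items():
        out[f(x)] = out.get(f(x), 0) + px
    return out

q, y = 7, 3
# Adding a constant to a uniform sample leaves it uniform
assert map_spmf(lambda b: (y + b) % q, uniform(q)) == uniform(q)
```

The shift \(b \mapsto (y + b) \; mod \; q\) is a bijection on \(\{0,\ldots ,q-1\}\), so it merely permutes the outcomes of the uniform distribution.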

Probabilities Security definitions are based on explicit probabilities of events occurring. In CryptHOL the expression \(\mathcal {P}[Q = x]\) denotes the subprobability mass the spmf Q assigns to the event x. In our proofs, reasoning at this level is often the last step; much of the proof effort is in showing properties of the probabilistic programs over which the probabilities are defined.

Negligible functions To reason about security in the asymptotic case we must consider negligible functions. These are formalised as part of CryptHOL in the canonical way. A function \(f \; {:}{:} \; nat \Rightarrow real\) is said to be negligible if

$$\begin{aligned} \forall c > 0. \; f \in o(\lambda x. \; inverse (x^c)) \end{aligned}$$

where o is the little o notation. We discuss the use of such functions in our proofs in Sect. 10.
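For example, \(f(x) = 2^{-x}\) is negligible whereas \(g(x) = inverse (x^2)\) is not (standard facts, included here only for illustration):

$$\begin{aligned} \forall c> 0. \; \lim _{x \rightarrow \infty } \frac{2^{-x}}{ inverse (x^c)} = \lim _{x \rightarrow \infty } \frac{x^c}{2^{x}} = 0, \qquad \text {but for } c = 3{:} \quad \frac{ inverse (x^2)}{ inverse (x^3)} = x \rightarrow \infty . \end{aligned}$$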

Cyclic Groups CryptHOL formalises cyclic groups with a generator g. This formalisation extends that of monoids in Isabelle/HOL, meaning an armoury of lemmas is immediately available for use. We use cyclic groups in the formalisation of the Pedersen commitment scheme and the Schnorr, Chaum-Pedersen and Okamoto \(\varSigma \)-protocols. In the formal parts of this paper we denote group multiplication by \(\otimes \) whereas we denote the multiplication of natural numbers by \(\cdot \). In the informal parts of the paper all multiplication is written as ‘\(\cdot \)’.

Formalisation Overview

CryptHOL has been used for a number of formalisations of cryptography thus far. Our work lends weight to the claim that CryptHOL provides a good environment for such formalisations, and in particular that a modular method of formalisation works well for low-level cryptographic primitives.

In this section we first discuss the general method of our formalisation at a high level, in particular how CryptHOL allows the user to make their definitions abstract and then instantiate them for the proofs we consider; this is arguably the most effective general method that Isabelle and CryptHOL allow for. Second, we briefly discuss asymptotic security statements in CryptHOL.

Method of Formalisation

Isabelle’s module system and CryptHOL’s monadic structure allow for a natural hierarchy in our formalisation. We begin our formalisation by abstractly defining the security properties required for both commitment schemes and \(\varSigma \)-protocols. This part of the formalisation is defined over abstract types, giving the flexibility for it to be instantiated for any protocol. The human reader needs only to check the high-level, abstract definitions of security to have confidence in the whole collection of proofs, as all instantiated proofs are made with respect to these definitions. We are able to prove some lemmas at the abstract level and have them at our disposal in any instantiation, thus reducing the workload for future proofs. Some of the properties are technical and uninteresting to the cryptographer—for example, we prove losslessness of various probabilistic programs used in the definitions—however we are also able to reason about properties more generally. For example, to formalise the construction of commitment schemes from \(\varSigma \)-protocols we work at an abstract level, only assuming the existence of a \(\varSigma \)-protocol. This means the instantiated proofs (for the \(\varSigma \)-protocols we consider) come for free once we prove they are \(\varSigma \)-protocols.

We next more explicitly describe the workflow in constructing our formalisation. We do not expect the reader to understand the details of formulas here; these will be covered later. We present the general formalisation approach here so that it does not get lost in the details of the constructions and formalisation later in the paper.

We use Isabelle’s locales to define properties of security relative to fixed parameters and then instantiate these definitions for explicit protocols and prove the security properties as theorems.

To illustrate this formalisation process we outline how we formalise and instantiate the completeness property for \(\varSigma \)-protocols.

Formalisation Process

  1.

    To consider \(\varSigma \)-protocols abstractly and define the completeness property we fix in a locale the probabilistic programs (algorithms) that make up the primitive (i.e. init, response, check) as well as other parameters of a \(\varSigma \)-protocol (\(Rel, S_{raw}, \mathcal {A}_{ss}\), \(challenge\text {-}space, valid\text {-}pub\))—the locale is given in Fig. 5 in Sect. 4; we introduce the remaining parameters in Sect. 4.

  2.

    We use the parameters to define a probabilistic program, \(completeness\text {-}game\), given in Eq. 2 in Sect. 2.2.2 and use it to define the completeness property given in Definition 9: a \(\varSigma \)-protocol is complete if for all valid challenges the completeness game returns true.

  3.

    To instantiate a \(\varSigma \)-protocol and prove it is complete we explicitly define the fixed parameters from the locale, \(\varSigma \text {-}protocol\text {-}base \). To do this we refine the types and define the probabilistic programs that describe the protocol. In the case of the Schnorr \(\varSigma \)-protocol we work with a cyclic group G by fixing it in the locale \(schnorr\text {-}base\), given in (7) in Sect. 5.2.

    Inside this locale we define the instantiated parameters:

    \(init^S, response^S, check^S, R_{DL}, S_{raw}^S, \mathcal {A}_{ss}^S, challenge\text {-}space^S\) and \(valid\text {-}pub^S\)—here the superscript S denotes they are the parameters for the Schnorr protocol, and \(R_{DL}\) is the discrete log relation.

  4.

    We then utilise Isabelle’s locale structure by importing the abstract theory using the sublocale command—this is shown in (8) in Sect. 5. In doing this, not only must the explicit definitions be of the correct type, but one must also discharge any assumptions that come with the locale. This means that our instantiation is valid with respect to the \({\varSigma \text {-}protocol\text {-}base}\) locale and we can refer to its definition of completeness. In this case we must prove that \({Domain(Rel^S) \subseteq valid\text {-}pub^S}\) (the only assumption in the base locale).

  5.

    Any call of a definition from the original locale (in this case \(\varSigma \text {-}protocol\text {-}base\)) requires the definition name to be prefixed by the name we give to the sublocale (in this case \({Schnorr\text {-}\varSigma }\)). The statement of completeness for the Schnorr \(\varSigma \)-protocol is now given by \( Schnorr \text {-}\varSigma . completeness \).

Concrete Versus Asymptotic Security

In our formalisation, we first prove concrete security bounds using reduction-style proofs. That is, we bound an adversary’s advantage as a function of the advantages of other adversaries against the primitives used in the construction. For example, we show in Lemma 30 in Sect. 8.2 that the binding advantage for commitment schemes constructed from \(\varSigma \)-protocols is bounded by the advantage that the (transformed) adversary breaks the hard relation \( Rel \). This is in line with other CryptHOL formalisations [6, 12].

From these concrete statements, we can easily derive more abstract asymptotic security statements. To that end, a security parameter must be introduced. We describe in Sect. 10 how we achieve this with little effort using Isabelle’s locale system. Conceptually, this process replaces a locale parameter such as the cyclic group \({\mathcal {G}} {:}{:} `{ grp }\; { cyclic }\text {-}{ group }\) with a family of cyclic groups \({\mathcal {G}} {:}{:} { nat } \Rightarrow ` grp \; { cyclic }\text {-}{ group }\). And similarly, the challenge space \({ challenge }\text {-}{ space }\) becomes a family of type \({ nat } \Rightarrow `{ challenge }\;{ set }\). This parameterisation is also the reason for the locale parameters \( valid \text {-} pub \) and \( challenge \text {-} space \). Since HOL does not have dependent types, the same abstract type \(` challenge \) must hold the challenge spaces for every possible security parameter value. The parameter \( challenge \text {-} space \) then carves out the right challenge space for the chosen security parameter.

Unfortunately, CryptHOL cannot reason about computational aspects, due to the shallow embedding. We therefore cannot formalise notions like computational binding (Definition 7) that quantify over computationally bounded adversaries. Instead, we capture the underlying reduction argument in a reduction-based security theorem. As an example, for constructing a commitment scheme from a \(\varSigma \)-protocol, the concrete security theorem has the following form: the binding advantage \( bind \text {-} adv (\mathcal {A})\) of an adversary \(\mathcal {A}\) is bounded by the advantage of a different adversary \(\mathcal {A}'\) against the hardness of the underlying relation \( Rel \). This adversary \(\mathcal {A}'\) is obtained by a reduction f, which systematically transforms binding-game adversaries \(\mathcal {A}\) into hardness game adversaries \(\mathcal {A'} = f(\mathcal {A})\). Such statements naturally yield asymptotic security statements of the following form: The binding advantage of a family of adversaries \(\mathcal {A}_\eta \) against the commitment scheme is negligible if the family of reduced adversaries \(f(\mathcal {A}_\eta )\) has negligible advantage against the hardness of the underlying relation.

Such a reduction-based statement captures the key aspects of the security proof. Compared to a computational statement, which quantifies over all computationally bounded adversaries, the reduction f shows up in the security statement itself. This makes the statement more generic in the sense that we need not commit to a particular computational model or complexity class such as polynomial time. Conversely, the reader must manually check that the reduction lies in the desired complexity class.

Formalising \(\varSigma \)-Protocols

In this section we detail our formalisation of \(\varSigma \)-protocols following the definitions given in Sect. 2.1.1. This follows Steps 1 and 2 from the formalisation process outlined in the previous section.

We first define a locale where we fix the parameters required for the definitions of \(\varSigma \)-protocols (Fig. 5). That is we fix, as probabilistic programs, the components of a \(\varSigma \)-protocol:

  • init constructs the initial message sent from P to V, and its corresponding randomness.

  • response is the response sent from P to V.

  • check performs the verification V runs on the response from P.

We also fix the relation Rel, the adversary \(\mathcal {A}_{ss}\) required in the special soundness definition, the \( challenge \text {-} space \) which is the set of all possible challenges and the set \( valid \text {-} pub \) which contains all the valid public inputs. We also require a simulator for the HVZK definition: the simulator outputs a conversation of the form (a, e, z); however, the output challenge e must be the same as the input challenge e. Overall the simulator looks as follows:

$$\begin{aligned} (a,e,z) \leftarrow S(h,e). \end{aligned}$$

To formally model this we fix in the locale the part of the simulator, \(S_{ raw }\), that constructs a and z, and then define the full simulator that outputs (a, e, z) using \(S_{raw}\) as follows:

$$\begin{aligned} S(h,e) = map (\lambda \; (a,z). \; (a,e,z), S_{ raw }(h,e)). \end{aligned}$$

The motivation for this definition is maintaining consistency with the literature. The incurred complexity, namely the introduction of \(S_{raw}\), is needed to ensure that e on both sides (input and output) of the simulator is the same. We show this trivial property with the following lemma.

Lemma 8

\(\mathbf {shows} \; (a,e',z) \in set \text {-} spmf (S(h,e)) \Longrightarrow e = e'\)

To improve the readability of the formalisation we define three type synonyms: the first two define the output type of \(S_{ raw }\) and the type of a conversation respectively, and the third the type of the special soundness adversary.

$$\begin{aligned}&\mathbf {type-synonym} \; (`msg, `response) \; sim\text {-}out = (`msg \; \times \; `response)\\&\mathbf {type-synonym} \; (`msg, `challenge, `response) \; conv\text {-}tuple = \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (`msg \; \times \; `challenge \; \times \; `response) \\&\mathbf {type-synonym} \\&\quad (`pub\text {-}input, `msg, `challenge, `response, `witness) \; prover\text {-}adversary \\&\quad \qquad = `pub\text {-}input \Rightarrow (`msg, `challenge, `response) \; conv\text {-}tuple \\&\quad \quad \qquad \Rightarrow (`msg, `challenge, `response) \; conv\text {-}tuple \Rightarrow `witness \; spmf \end{aligned}$$

The locale where we fix these parameters is given in Fig. 5—note this is the same as the locale given in the example in Sect. 3. The assumption requires that the domain of the relation is contained in the set of valid public inputs. We now give our formalised definitions of \(\varSigma \)-protocols.

Fig. 5: Locale fixing the constants for \(\varSigma \)-protocols

Using the parameters we fixed in the locale \(\varSigma \text {-} protocol \text {-} base \) we define the properties of \(\varSigma \)-protocols. First we define completeness. For this property we define a probabilistic program, \(completeness\text {-}game\), that runs the components of the protocol and outputs the output of check. We repeat the definition from Eq. 2.

$$\begin{aligned} \begin{array}{l} completeness \text {-} game (h,w,e) = do \; \{ \\ \quad (r,a) \leftarrow init ; \\ \quad z \leftarrow response (r, w,e);\\ \quad return ( check (h,a,e,z))\} \\ \end{array} \end{aligned}$$

The definition of completeness is quantified over all public inputs, witnesses and challenges.

Definition 9

$$\begin{aligned} completeness= (\forall h \; w \; e.&\; (h,w) \in Rel \longrightarrow e \in challenge\text {-}space\\&\longrightarrow \mathcal {P}[completeness\text {-}game(h,w,e) = True] = 1) \end{aligned}$$

For special soundness to hold we require the special soundness adversary (\(\mathcal {A}_{ss}\)) to output the witness when given two accepting conversations with respect to the public input h, (a, e, z) and \((a,e',z')\), with distinct challenges. An accepting conversation is a tuple upon which check is satisfied. To capture this formally we must show that for all \(w'\) in the support set (\(set\text {-}spmf\)) of \(\mathcal {A}_{ss}\) the relation is satisfied. Together with this we require that the adversary, \(\mathcal {A}_{ss}\), is lossless; if not, \(\mathcal {A}_{ss}\) may abort, leaving no way to reason about all outputs of \(\mathcal {A}_{ss}\).

Definition 10

$$\begin{aligned}&special\text {-}soundness = (\forall h \; a \; e \; z \; e' \; z'. \; h \in valid\text {-}pub \\&\quad \qquad \longrightarrow e \in challenge\text {-}space \longrightarrow e' \in challenge\text {-}space \longrightarrow e \ne e' \\&\qquad \qquad \qquad \quad \longrightarrow check(h, a, e, z) \longrightarrow check(h, a, e', z') \longrightarrow \\&\qquad \qquad \qquad \qquad lossless(\mathcal {A}_{ss}(h, (a,e,z), (a,e',z'))) \; \wedge \\&\qquad \qquad \qquad \qquad \forall w' \in set\text {-}spmf(\mathcal {A}_{ss}(h, (a,e,z), (a,e',z'))). \; Rel(h,w')) \end{aligned}$$

The paper-based special soundness definition, given in Informal Definition 3, requires the existence of a special soundness adversary. Our formal definition skolemises over this quantifier: we fix the adversary as a parameter in the locale. Such an adversary must thus exist in any instance; its properties are given by our special soundness definition.

The definition of HVZK follows the simulation-based paradigm: we require the output distribution of the simulator S to be equal to the output distribution of the real view of the protocol which is given below.

$$\begin{aligned}&real \text {-} view (h,w,e) = do \; \{\\&\quad (r,a) \leftarrow init ; \\&\quad z \leftarrow response (r, w,e);\\&\quad return(a,e,z) \} \end{aligned}$$

The real view can be defined abstractly as we know the structure of the protocol. This is unlike in general MPC protocols [12] where the real view has to be defined for each MPC protocol considered. We must nevertheless construct a simulator for each instantiated \(\varSigma \)-protocol. As noted in Sect. 2.1.1, we additionally require that the simulator’s output produces an accepting conversation even if the public input h does not belong to the language.

Definition 11

$$\begin{aligned} HVZK&= (\forall e \in challenge\text {-}space. \\&\quad ( \forall (h,w) \in Rel. \; real\text {-}view(h,w,e) = S(h,e)) \\&\qquad \wedge (\forall h \in valid\text {-}pub \; \text {-} \; Domain(Rel). \\&\qquad \quad \forall (a,e,z) \in set\text {-}spmf(S(h,e)). \; check(h,a,e,z))) \end{aligned}$$

Interestingly the second condition holds for all valid public inputs, whether they are in the relation or not, assuming the completeness property holds. We prove this in Lemma 13 after we define the notion of a \(\varSigma \)-protocol.

Definition 12

(\(\varSigma \)-protocol)

$$\begin{aligned} \varSigma \text {-}protocol = completeness \wedge special\text {-}soundness \wedge HVZK \end{aligned}$$

It may appear surprising that in our formalisation of \(\varSigma \)-protocols we do not fix a probabilistic program to output the challenge, as we do for the other components of the protocol. In this case it is not needed, as the verifier, who outputs the challenge, is assumed to be honest. In particular we define the properties over all allowed challenges (\(\forall e \in challenge \text {-} space \)). This is valid when the challenge is always generated honestly, but would not be strong enough if the challenge were not generated honestly—in the case of a corrupt verifier. This extension [31] is considered by full Zero-Knowledge protocols, which we do not consider in this work.

As mentioned above, if the protocol is a \(\varSigma \)-protocol, a stronger property for the second requirement in the HVZK definition holds, namely that the simulator outputs a correct conversation whenever h is a valid public input.

Lemma 13

figure b


We split the proof into the cases depending on whether there exists a w such that \((h,w) \in Rel\). If so, the real and simulated views are equal by the first HVZK property. The result thus follows using the completeness property. Otherwise, we can directly use the second HVZK property as \(h \in valid \text {-} pub \; \text {-} Domain(Rel) \). \(\square \)

The Schnorr \(\varSigma \)-Protocol

In this section we detail how we instantiate our formal definitions of \(\varSigma \)-protocols given in Sect. 4 for the Schnorr \(\varSigma \)-protocol. This achieves Steps 3 - 5 of the formalisation process in Sect. 3.1. We first explain the protocol in Sect. 5.1 and give some intuition and informal arguments as to why the desired properties hold and then in Sect. 5.2 we detail our formalisation.

The Schnorr \(\varSigma \)-Protocol

The Schnorr protocol uses a cyclic group G with generator g and considers the discrete log relation, which on public input h requires the witness w to be the discrete log of h in G: \(h = g ^ w\). The Schnorr \(\varSigma \)-protocol is given in Fig. 6.

Fig. 6: The Schnorr \(\varSigma \)-protocol

The Prover holds (h, w) such that \(h = g ^ w\) and the Verifier holds only h. The initial message sent by P to V is a uniformly sampled group element and the challenge is uniformly sampled from the field of size |G|. The response is constructed by P as \(z = (w \cdot e + r) {mod} |G|\) and sent to V, who accepts or rejects based on whether \(a \cdot h ^ e = g ^ z\).
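As a sanity check, the round in Fig. 6 can be simulated concretely. The Python sketch below (our illustration, not part of the formalisation) runs the protocol in the subgroup of order q = 11 generated by g = 2 in \(\mathbb {Z}^*_{23}\) and checks the verifier's equation for every challenge:

```python
import random

p, q, g = 23, 11, 2   # the subgroup <2> of Z_23^* has prime order 11

def schnorr_round(h, w, e):
    # P commits a = g^r for random r, then responds z = (w*e + r) mod q
    r = random.randrange(q)
    a = pow(g, r, p)
    z = (w * e + r) % q
    # V accepts iff a * h^e = g^z in the group
    return (a * pow(h, e, p)) % p == pow(g, z, p)

w = 7
h = pow(g, w, p)           # public input with witness w
assert all(schnorr_round(h, w, e) for e in range(q))
```

Acceptance for every r and e is exactly the completeness identity \(g^r \cdot (g^w)^e = g^{r + w \cdot e}\), with the exponent reduced modulo the group order.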

Completeness comes directly by unfolding the definitions and proving the identity \(g^r \cdot (g^w)^e = g^{r + w \cdot e}\).

For the special soundness property a witness can be extracted from two accepting conversations (a, e, z) and \((a,e',z')\) by taking \(w = (\frac{z - z'}{e - e'}) {mod} |G|\). This can be seen as follows. Given two accepting conversations (a, e, z) and \((a,e',z')\) we have \(a \cdot h^e = g ^ z\) and \(a \cdot h^{e'} = g ^ {z'}\) which after unfolding \(h = g ^ w\) and rearranging leaves us with \(g^{z - w \cdot e} = g^{z' - w \cdot e'}\) meaning we have \([z - w \cdot e = z' - w \cdot e'] {mod} |G|\). Rearranging this we find \(w = (\frac{z - z'}{e - e'}) {mod} |G|\) as claimed. Note it is important that \([e \ne e'] {mod} |G|\); this comes from \(e,e' < |G|\) (the challenges are from \(\mathbb {Z}_{|G|}\)) and \(e \ne e'\) (a condition on the special soundness property).
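The extraction argument can be replayed concretely. In this sketch (ours; it uses Python's built-in modular inverse `pow(·, -1, q)`, available from Python 3.8, rather than the Bezout-based inverse of the formalisation) two accepting conversations with distinct challenges yield the witness:

```python
p, q, g = 23, 11, 2   # group of prime order 11 generated by 2 in Z_23^*

def extract(conv1, conv2):
    # w = (z - z') / (e - e') mod q; the inverse exists since q is prime
    (a1, e1, z1), (a2, e2, z2) = conv1, conv2
    assert a1 == a2 and e1 != e2
    return ((z1 - z2) * pow(e1 - e2, -1, q)) % q

w = 7
h = pow(g, w, p)
r = 4
a = pow(g, r, p)                     # shared initial message
e1, e2 = 3, 9                        # two distinct challenges
z1, z2 = (w * e1 + r) % q, (w * e2 + r) % q
assert extract((a, e1, z1), (a, e2, z2)) == w
```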

The protocol also satisfies the HVZK property. The intuition behind constructing the simulator for the HVZK property is to work backwards. We would like the response to be independent of w, so we pick it uniformly at random and then reconstruct the initial message. If we sample z uniformly from the field and then set \(a = g ^ z \cdot h ^{-e}\), it can be shown that the resulting conversation distribution equals the conversation distribution of a real execution of the protocol.
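This equality of distributions can be verified exhaustively for a small group. The sketch below (our illustration) enumerates the real view over all random choices r and the simulator over all samples z, using exact rationals, and checks that both induce the same distribution on conversations:

```python
from fractions import Fraction

p, q, g = 23, 11, 2   # group of prime order 11 generated by 2 in Z_23^*
w = 7
h = pow(g, w, p)
e = 5                  # a fixed honest challenge

def real_view():
    # a = g^r, z = (w*e + r) mod q, with r uniform in Z_q
    d = {}
    for r in range(q):
        conv = (pow(g, r, p), e, (w * e + r) % q)
        d[conv] = d.get(conv, 0) + Fraction(1, q)
    return d

def simulator():
    # sample z first, then reconstruct a = g^z * h^{-e}
    d = {}
    h_inv_e = pow(h, (-e) % q, p)   # h^{-e}, using that h has order q
    for z in range(q):
        conv = ((pow(g, z, p) * h_inv_e) % p, e, z)
        d[conv] = d.get(conv, 0) + Fraction(1, q)
    return d

assert real_view() == simulator()
```

The map \(r \mapsto (w \cdot e + r) \; mod \; q\) is a bijection, which is why the two enumerations match outcome for outcome.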

Formalising the Schnorr \(\varSigma \)-Protocol

Throughout our formalisation we work with natural numbers instead of formalising a field construction. Therefore we work modulo q whenever we actually work in a field. One issue we encounter is constructing inverses modulo q. We are required to reason about the inverses of elements in a field in many places in our formalisation, for example the special soundness adversary outputs \(w = (\frac{z - z'}{e - e'}) {mod} |G|\) in the Schnorr protocol.

We formalise such an inverse as follows:

$$\begin{aligned} inv_q(a) = fst ( bezw (a,q)). \end{aligned}$$

Its construction, and the use of the Bezout (\( bezw \)) function, is not trivial. We outline our method in “Appendix A”.
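The idea behind the Bezout-based inverse can be sketched in Python (our analogue of \( bezw \); we assume here only that the extended Euclidean algorithm returns coefficients \(u, v\) with \(u \cdot a + v \cdot q = \gcd (a,q)\), so for a coprime to q the first coefficient, reduced modulo q, is the inverse):

```python
def bezout(a, b):
    # extended Euclid: returns (u, v) with u*a + v*b == gcd(a, b)
    if b == 0:
        return (1, 0)
    u, v = bezout(b, a % b)
    return (v, u - (a // b) * v)

def inv_mod(a, q):
    # first Bezout coefficient, reduced modulo q (mirrors fst(bezw(a, q)))
    u, _ = bezout(a, q)
    return u % q

q = 11
for a in range(1, q):
    assert (a * inv_mod(a, q)) % q == 1
```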

The Schnorr \(\varSigma \)-protocol is defined over a cyclic group of prime order. We use the construction of cyclic groups from [32] to fix a group \(\mathcal {G}\) in the locale we work in as follows.

$$\begin{aligned} \begin{array}{l} \mathbf {locale} \; schnorr\text {-}base = \\ \quad \mathbf {fixes} \; \mathcal {G} \; {:}{:} \; `grp \; cyclic\text {-}group \; \mathbf {(structure)}\\ \quad \mathbf {assumes} \; prime (order (\mathcal {G})) \end{array} \end{aligned}$$

To show the Schnorr \(\varSigma \)-protocol has the desired properties of \(\varSigma \)-protocols we explicitly define the constants introduced in Sect. 4. We define

$$\begin{aligned} init ^{S}, response ^{S}, check ^{S}, R_{DL}, S_{raw}^{S}, \mathcal {A}^{S}_{ss}, challenge \text {-} space ^S, valid \text {-} pub ^S \end{aligned}$$

where the superscript S denotes that these constants are for the Schnorr \(\varSigma \)-protocol. We make these definitions inside the context of the locale. The types of the components of the protocol are made more concrete than in the definitional theory of \(\varSigma \)-protocols; in particular we define the following type synonyms.

  • \(\mathbf {type\text {-}synonym} \; witness = nat \)

  • \(\mathbf {type\text {-}synonym} \; ` grp \; pub \text {-} in = ` grp \)

  • \(\mathbf {type\text {-}synonym} \; ` grp \; msg = ` grp \)

  • \(\mathbf {type\text {-}synonym} \; rand = nat \)

  • \(\mathbf {type\text {-}synonym} \; challenge = nat \)

  • \(\mathbf {type\text {-}synonym} \; response = nat \)

These new types specialize the types from the definitional theory to the Schnorr protocol. For example, the witness, randomness, challenge and response are all naturals and the public input and initial message are group elements.

For the Schnorr \(\varSigma \)-protocol the relation is the discrete log relation, as given informally in Eq. 1; formally this is encoded into Isabelle as

$$\begin{aligned} R_{DL} = \{(h,w). \; h = g^w\}. \end{aligned}$$

The programs \( init ^S, response ^S\) and \( check ^S\) correspond to the stages of the protocol given in Fig. 6.

figure c

A public input is valid if it is in the group, \( valid \text {-} pub ^S = carrier(G) \), and the challenge space is the set of naturals less than the order of G, \( challenge \text {-} space ^S = \{0,\dots ,|G| - 1\}\).

We show these constants are an instantiation of the \(\varSigma \text {-}protocol\text {-}base\) locale (Fig. 5). As explained in Sect. 3.1 we do this using the sublocale command; this is an extension of the sublocale given in Eq. 8.

$$\begin{aligned} \mathbf {sublocale} \; schnorr\text {-}\varSigma&: \; \varSigma \text {-}protocol\text {-}base \; init^S \; response^S \; check^S \nonumber \\&\quad Rel^S \; S_{raw}^S \; \mathcal {A}_{ss}^S \; challenge\text {-}space^S\;valid\text {-}pub^S \; \end{aligned}$$

We also inherit the cyclic group properties of the group G by forming the following locale.

$$\begin{aligned} \mathbf {locale} \; schnorr = schnorr\text {-}base + cyclic\text {-}group(G) \end{aligned}$$

In this context we can prove the desired properties of the Schnorr \(\varSigma \)-protocol. When proving instantiated results we highlight the exact locale that the result corresponds to (in brackets in the statement), in this case it is the \( schnorr \) locale.

Lemma 14

(in schnorr) \( \mathbf {shows} \; Schnorr\text {-}\varSigma .completeness\)


Completeness follows after proving the identity \(g^r \otimes (g^w)^e = g^{r + w \cdot e}\) and passing it as a rewrite rule to the simplifier. \(\square \)

Second, we consider special soundness. To prove this property we construct an adversary that can extract the witness from accepting conversations of the protocol. We gave the construction of this adversary informally in the previous section: given two accepting conversations (a, e, z) and \((a,e',z')\) the adversary outputs \((\frac{z - z'}{e - e'}) {mod} |G|\). The encoding of the adversary in Isabelle must be mindful of whether \(e > e'\); as we are working with naturals, the bounded subtraction in the denominator \(e - e'\) returns 0 if \(e < e'\). So we construct an adversary that accounts for this—we know that \(e \ne e'\) as it is a condition on the conversations given to the adversary.

$$\begin{aligned}&\mathcal {A}_{ss}^S(h,c_1,c_2) = do \; \{ \\&\quad let \; (a,e,z) = c_1; \\&\quad let \; (a', e', z') = c_2; \\&\quad return (if \; e > e' \; then \; (z - z') \cdot inv_G (e - e') {mod} |G|\; \\&\qquad \qquad \qquad else \; (z' - z) \cdot inv _G(e' - e) {mod} |G|) \} \end{aligned}$$

Using this adversary we prove the special soundness property for the Schnorr \(\varSigma \)-protocol.

Lemma 15

(in schnorr) \(\mathbf {shows} \; Schnorr \text {-}\varSigma . special \text {-} soundness \)


The adversary \(\mathcal {A}_{ss}^S\) is clearly lossless—it does not do any probabilistic sampling. Showing the adversary outputs a witness to the relation is proven by using Lemma 31 to rewrite the output of the adversary in a similar manner to a paper proof given in Sect. 5.1. \(\square \)

Finally we consider the honest verifier zero-knowledge property. The proof follows the technique of simulation-based proofs that was formally introduced in Isabelle and CryptHOL in [12]. To prove HVZK we define the simulator, \( S _{raw}^S\), which in turn defines \( Schnorr \text {-}\varSigma .S^S\). We then prove this mimics the real view. The unfolded simulator is formed as follows; recall the intuition of sampling the response first and constructing the initial message from it.

$$\begin{aligned}&Schnorr \text {-}\varSigma .S^S(h,e) = do \; \{ \\&\quad z \leftarrow samp \text {-} uniform (|G|); \\&\quad let \; a = g^z \otimes (h^e)^{-1}; \\&\quad return \; (a,e,z) \} \end{aligned}$$

Lemma 16

(in schnorr) \(\mathbf {shows} \; Schnorr \text {-}\varSigma . HVZK (h,w)\)


First we show the simulator and the real view are equal. The unfolded real view can be written as:

$$\begin{aligned}&Schnorr\text {-}\varSigma .real\text {-}view^S(h,w,e) = do \; \{ \\&\quad r \leftarrow samp \text {-} uniform (|G|);\\&\quad let \; a = g^r; \\&\quad let \; z = (w \cdot e + r) \; mod \; |G|; \\&\quad return \; (a,e,z) \} \\ \end{aligned}$$

The gist of the proof is showing that z constructed in the real view is a uniform sample—as it is in the simulator—this destroys any information passed to V about the witness. To do this we use the following one time pad lemma:

$$\begin{aligned} map (\lambda b. \; (y+b) \; mod \; q, \; samp\text {-}uniform(q)) = samp\text {-}uniform(q) \end{aligned}$$

To use this lemma in the proof we must rewrite some of the terms in the real view. Such rewriting steps are nearly always needed when using lemmas of this kind, as the remaining probabilistic program must no longer depend on b and must therefore be rewritten in terms of the other variables in the probabilistic program.

Second we show the output of the simulator is a valid transcript. This part of the proof follows easily, in a similar manner to the proof of completeness. \(\square \)

Using Lemmas 14, 15 and 16 we show that the Schnorr \(\varSigma \)-protocol satisfies the definition of a \(\varSigma \)-protocol given in Sect. 4.

Theorem 17

(in schnorr) \(\mathbf {shows} \; Schnorr\text {-}\varSigma .\varSigma \text {-}protocol\)

Compound \(\varSigma \)-Protocols

\(\varSigma \)-protocols can be combined to prove knowledge for AND and OR statements. Consider two \(\varSigma \)-protocols, \(\varSigma _0\) and \(\varSigma _1\), with relations \(Rel_0\) and \(Rel_1\) respectively. The AND construction allows the prover to prove they know witnesses \(w_0\) and \(w_1\) such that both \(Rel_0(x_0,w_0)\) and \(Rel_1(x_1,w_1)\) are true and the OR construction allows for the proof of knowledge of a witness such that \(Rel_0(x_0,w)\) or \(Rel_1(x_1,w)\) is true—\((x_0,x_1)\) is the public input. Cryptographers have found many uses for these basic constructions, for example the voting protocols in [21]. In this section we detail our formalisation of the OR construction; details of the AND construction can be found in “Appendix D”.

The OR Construction

The construction of the OR protocol follows the idea that the prover can run the real protocol for the relation for which the witness is known and run the simulator to generate the conversation for the relation for which the witness is not known. By the HVZK property of \(\varSigma \)-protocols the simulated view is equivalent to the real view; therefore the verifier cannot tell which conversation was constructed by the real protocol and which by the simulator. The protocol is shown in Fig. 7. In this section we give only the statements of the lemmas; the proof sketches can be found in “Appendix B”.

Fig. 7

The OR construction for two \(\varSigma \)-protocols, \(\varSigma _0\) and \(\varSigma _1\). L is the boolean algebra that the protocol is run over. \((x_0,x_1)\) is the public input such that \(Rel_0(x_0,w)\) or \(Rel_1(x_1,w)\) is satisfied and b represents the relation that holds, that is we have that \( Rel _b(x_b,w)\)

In the literature [21, 23, 31] the OR construction is considered over bitstrings. However, we only require the one-time-pad property of the xor function, so we are able to generalise the construction to arbitrary boolean algebras. We formalise the concept of a boolean algebra and prove the one-time-pad property, stated in Eq. 9. Using this formalisation we fix an abstract boolean algebra L (in the locale where we formalise the construction); the classical bitstring version of the construction comes by instantiating the parameter L with the boolean algebra of bitstrings of a given length.

$$\begin{aligned} map \; (\lambda a. \; a \oplus x) \; ( uniform ( carrier (L))) = uniform ( carrier (L)) \end{aligned}$$

where L is the boolean algebra with xor function \(\oplus \).
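To illustrate Eq. 9 in its simplest instance, the boolean algebra of n-bit strings, the following Python sketch (a toy illustration of ours, not part of the formalisation) enumerates the uniform distribution exactly and checks that xoring with any fixed x leaves it unchanged:

```python
from collections import Counter

def xor_pushforward(n, x):
    """Distribution of a XOR x when a is uniform over n-bit strings.

    We enumerate the carrier exactly rather than sampling: each n-bit
    value occurs once, i.e. with probability 1/2^n.
    """
    return Counter(a ^ x for a in range(2 ** n))

# The map a -> a XOR x is a bijection on n-bit strings, so the
# pushforward of the uniform distribution is again uniform.
n = 4
uniform = Counter(range(2 ** n))  # each value appears exactly once
for x in range(2 ** n):
    assert xor_pushforward(n, x) == uniform
```

The check succeeds because \(a \mapsto a \oplus x\) is a bijection on the carrier, which is precisely the one-time-pad property used below.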

To formalise the OR construction we fix two \(\varSigma \)-protocols (\(\varSigma _0\) and \(\varSigma _1\)) and their respective components

$$\begin{aligned}&init_0, response_0, check_0, Rel_0, S_{raw,0}, \mathcal {A}_{ss, 0}, challenge\text {-}space_0, valid\text {-}pub_0 \\&init_1, response_1, check_1, Rel_1, S_{raw,1}, \mathcal {A}_{ss, 1}, challenge\text {-}space_1, valid\text {-}pub_1 \end{aligned}$$

as well as a boolean algebra \(L \; {:}{:} ` bool \text {-} alg \; boolean \text {-} algebra \). The only type constraint on the components of \(\varSigma _0\) and \(\varSigma _1\) is that both challenges must be of type \(`bool\text {-}alg\). We allow the types of \(\varSigma _0\) and \(\varSigma _1\) to differ; thus the witness must be a sum type \(w \; {:}{:} \; (`witness_0 + `witness_1)\).

We define the relation,

$$\begin{aligned} Rel_{OR} \; {:}{:} \; ((`pub_0 \times `pub_1) \times (`witness_0 + `witness_1)) \; set \end{aligned}$$

as a set with the following introduction rules:

$$\begin{aligned}&((x_0, x_1), Inl (w_0)) \in Rel_{OR} \; \mathbf {if} \; (x_0, w_0) \in Rel_0 \wedge x_1 \in valid \text {-} pub _1 \\&((x_0, x_1), Inr (w_1)) \in Rel_{OR} \; \mathbf {if} \; (x_1, w_1) \in Rel_1 \wedge x_0 \in valid \text {-} pub _0 \end{aligned}$$

In particular the prover knows a witness for one of the two relations, and knows to which relation the witness belongs. We also require that the public input for which the prover does not know the witness is a valid public input for its respective \(\varSigma \)-protocol.

In the OR construction the initial message is constructed as either the real initial message (of the \(\varSigma \)-protocol for which the prover knows the witness) or the first message of the simulator (of the other \(\varSigma \)-protocol). \( init _{OR}\)’s output has two parts: (1) the randomness consisting of the randomness from \(init_b\) (where \(b \in \{0,1\}\) is the relation for which the prover knows the witness), the random challenge sampled, as well as the response from the conversation that is simulated and (2) the initial messages sent in the protocol, one (and only one) of which is constructed by the simulator.

$$\begin{aligned}&init _{OR}((x_0,x_1), Inl(w_0)) = do \; \{ \\&\quad (r_0,a_0) \leftarrow init_0(x_0,w_0); \\&\quad e_1 \leftarrow uniform ( carrier (L)); \\&\quad (a_1, e_1, z_1) \leftarrow S_1(x_1, e_1); \\&\quad return ( Inl (r_0,e_1,z_1),(a_0,a_1))\} \\&init _{OR}((x_0,x_1), Inr(w_1)) = do \; \{ \\&\quad (r_1,a_1) \leftarrow init_1(x_1,w_1); \\&\quad e_0 \leftarrow uniform ( carrier (L)); \\&\quad (a_0, e_0, z_0) \leftarrow S_0(x_0, e_0);\\&\quad return ( Inr (r_1,e_0,z_0),(a_0,a_1))\} \end{aligned}$$

We recall from Sect. 2.2.2 that \( uniform \) samples uniformly from a set. To respond to a challenge s, the prover constructs a new challenge, to be used in constructing the real response, by xoring s with the challenge e it generated in \(init_{OR}\). The response for the relation the prover does not know is given as the simulated response from the \(init_{OR}\) phase. The inputs to \( response _{OR}\) consist of (1) the randomness outputted by \( init _{OR}\) (a 3-tuple), (2) the witness that is known, and (3) the challenge.

$$\begin{aligned}&response _{OR}(Inl(r_0,e_1, z_1), Inl(w_0),s) = do \; \{ \\&\quad let \; e_0 = s \oplus e_1; \\&\quad z_0 \leftarrow response _0(r_0,w_0,e_0);\\&\quad return ((e_0,z_0),(e_1,z_1)) \} \\&response _{OR}(Inr(r_1,e_0, z_0), Inr(w_1),s) = do \; \{ \\&\quad let \; e_1 = s \oplus e_0; \\&\quad z_1 \leftarrow response _1(r_1,w_1,e_1); \\&\quad return ((e_0,z_0),(e_1,z_1)) \} \end{aligned}$$

To check the responses given by the prover, the verifier checks that both conversations it receives are valid with respect to the \(\varSigma \)-protocols they correspond to, as well as checking that the challenge it provided, s, is the xor of the challenges in the respective conversations—\(s = e_0 \oplus e_1\).

$$\begin{aligned}&check_{OR}((x_0,x_1), (a_0,a_1), s, ((e_0,z_0), (e_1,z_1)))\\&\qquad \quad = (s = e_0 \oplus e_1 \wedge e_0 \in challenge\text {-}space_0 \wedge e_1 \in challenge\text {-}space_1 \\&\quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \wedge check_0(x_0, a_0, e_0, z_0) \wedge \; check_1(x_1, a_1, e_1, z_1)) \end{aligned}$$

The \( challenge \text {-} space \) is defined as the carrier set of L, that is \( challenge \text {-} space _{OR} = carrier (L)\), and the public input \((x_0,x_1)\) is valid if each \(x_i\) is a valid public input with respect to its underlying \(\varSigma \)-protocol, that is:

$$\begin{aligned} valid\text {-}pub_{OR} = \{(x_0,x_1). \; x_0 \in valid\text {-}pub_0 \wedge x_1 \in valid\text {-}pub_1\}. \end{aligned}$$
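To make the OR construction concrete, here is a minimal Python sketch of one honest run, instantiating both \(\varSigma _0\) and \(\varSigma _1\) with toy Schnorr protocols over the order-11 subgroup of \(\mathbb {Z}_{23}^{*}\) and 4-bit challenges. All parameters and names here are illustrative choices of ours, not taken from the formalisation:

```python
import random

# Toy Schnorr Sigma-protocol over the order-11 subgroup of Z_23*
# (hypothetical small parameters, for illustration only).
p, q, g = 23, 11, 2

def schnorr(x, w):
    """Components (init, response, check, simulate) for statement x = g^w mod p."""
    def init():
        r = random.randrange(q)
        return r, pow(g, r, p)              # randomness, first message

    def response(r, e):
        return (r + e * w) % q              # z = r + e*w mod q

    def check(a, e, z):
        return pow(g, z, p) == (a * pow(x, e, p)) % p

    def simulate(e):
        z = random.randrange(q)             # sample the response first
        a = (pow(g, z, p) * pow(pow(x, e, p), -1, p)) % p
        return a, e, z
    return init, response, check, simulate

# Two statements; the prover knows a witness only for the first (b = 0).
w0, w1 = 3, 7
x0, x1 = pow(g, w0, p), pow(g, w1, p)
init0, resp0, check0, sim0 = schnorr(x0, w0)
_, _, check1, sim1 = schnorr(x1, w1)

# init_OR: real run for relation 0, simulated conversation for relation 1.
r0, a0 = init0()
e1 = random.randrange(16)                   # 4-bit challenge from carrier(L)
a1, e1, z1 = sim1(e1)

# response_OR: split the verifier's challenge s via xor.
s = random.randrange(16)
e0 = s ^ e1
z0 = resp0(r0, e0)

# check_OR: both conversations accept and the challenges xor to s.
assert s == e0 ^ e1 and check0(a0, e0, z0) and check1(a1, e1, z1)
```

The final assertion mirrors \(check_{OR}\): the honestly generated conversation accepts by completeness of \(\varSigma _0\), and the simulated one accepts by the HVZK property of \(\varSigma _1\).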

As usual we import the \(\varSigma \text {-}protocol\text {-}base\) locale, this time under the name \(\varSigma \text {-} OR \), so we can reason about the properties of \(\varSigma \)-protocols. First we show completeness.

The proof of the completeness property requires Condition 2 of the HVZK definition in Definition 3. It is required because the simulated conversation in the OR protocol must also be accepting if the verifier is to accept the proof; without Condition 2 we have no guarantee that this is the case.

Lemma 18

(in \(\varSigma \text {-}OR\text {-}proof\)) \(\mathbf {shows} \; \varSigma \text {-}OR.completeness\)

To prove HVZK we use the following simulator; as always, it is constructed by defining \( S _{ raw,OR }\).

$$\begin{aligned} \begin{array}{l} \varSigma \text {-} OR .S_{OR}((x_0,x_1), s) = do \; \{ \\ \quad e_1 \leftarrow uniform ( carrier(L) ); \\ \quad (a_1, e_1, z_1) \leftarrow S_1(x_1, e_1); \\ \quad let \; e_0 = s \oplus e_1; \\ \quad (a_0, e_0, z_0) \leftarrow S_0(x_0, e_0); \\ \quad let \; z = ((e_0,z_0), (e_1,z_1)); \\ \quad return ((a_0, a_1),s,z)\} \end{array} \end{aligned}$$

Note, in constructing the simulator we had a design choice: sample either \(e_1\) or \(e_0\) and construct the other. Either choice results in the same simulator, as can be seen by applying Eq. 9.
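For a toy 4-bit challenge space, this design choice can be checked exhaustively; the following Python sketch (our own illustration) compares the two joint distributions of \((e_0, e_1)\):

```python
from collections import Counter

def sample_e1_first(s, n=4):
    """Joint distribution of (e0, e1): e1 uniform, e0 = s XOR e1."""
    return Counter((s ^ e1, e1) for e1 in range(2 ** n))

def sample_e0_first(s, n=4):
    """Joint distribution of (e0, e1): e0 uniform, e1 = s XOR e0."""
    return Counter((e0, s ^ e0) for e0 in range(2 ** n))

# For every challenge s the two joint distributions coincide (Eq. 9),
# so either design choice yields the same simulator.
for s in range(16):
    assert sample_e1_first(s) == sample_e0_first(s)
```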

Lemma 19

(in \(\varSigma \text {-}OR\text {-}proof\)) \(\mathbf {shows} \; \varSigma \text {-}OR.HVZK\)

To construct the special soundness adversary we condition on the case \(e_0 \ne e'_0\). The reason for this is that in the proof of the special soundness property we show that either \(e_0 \ne e'_0\) or \(e_1 \ne e'_1\) must hold (depending on which relation the witness pertains to). In either case the adversary outputs the witness to the respective relation using the special soundness adversaries from \(\varSigma _0\) or \(\varSigma _1\).

$$\begin{aligned}&\mathcal {A}_{ss,OR} ((x_0,x_1), conv, conv') = do \; \{ \\&\quad let \; ((a_0,a_1),s,((e_0,z_0), (e_1,z_1))) = conv; \\&\quad let \; ((a_0,a_1),s',((e'_0,z'_0), (e'_1,z'_1))) = conv'; \\&\quad if \; (e_0 \ne e'_0) \; then \; do \; \{ \\&\quad \quad {w_0 \leftarrow \mathcal {A}_{ss,0}(x_0, (a_0,e_0,z_0),(a_0,e'_0,z'_0))} ; \\&\quad \quad return (Inl(w_0)) \} \\&\quad else \; do \{ \\&\quad \quad w_1 \leftarrow \mathcal {A}_{ss,1}(x_1, (a_1,e_1,z_1),(a_1,e'_1,z'_1)); \\&\quad \quad return (Inr(w_1)) \} \; \} \end{aligned}$$

Lemma 20

(in \(\varSigma \text {-}OR\text {-}proof\)) \(\mathbf {shows} \; \varSigma \text {-}OR.special\text {-}soundness\)

Using Lemmas 18, 19 and 20 we can prove the OR construction is a \(\varSigma \)-protocol.

Theorem 21

(in \(\varSigma \text {-}OR\text {-}proof\)) \(\mathbf {shows} \; OR\text {-}\varSigma .\varSigma \text {-}protocol\)

Formalising Commitment Schemes

We formalise commitment schemes analogously to \(\varSigma \)-protocols. First we fix the required parameters in the locale, \( commit \text {-} base \), given in Fig. 8.

Fig. 8

Abstract commitment scheme locale

The probabilistic programs \( key \text {-} gen \), \( commit \) and \( verify \) correspond to the three components of a commitment scheme. The key generation algorithm outputs the keys that are available to the committer and verifier. If, for example, all the keys are public then we have \(ck = vk\). The predicate \( valid \text {-} msg \) ensures the messages outputted by the adversary in the hiding game are valid, for example we may require them to be group elements.

Using these fixed parameters we define the correctness, hiding and binding for commitment schemes.

For the correctness property we define the probabilistic program \( correct \text {-} game \).

$$\begin{aligned}&correct \text {-} game (m)= do \; \{ \\&\quad (ck,vk) \leftarrow key \text {-} gen ; \\&\quad (c,d)\leftarrow commit (ck, m);\\&\quad return ( verify (vk, m, c, d))\} \end{aligned}$$

For a commitment scheme to be correct we require that for all valid messages \( correct \text {-} game \) always returns True.

Definition 22

$$\begin{aligned} correct = (\forall m. \; valid\text {-}msg(m) \longrightarrow \mathcal {P}[correct\text {-}game(m) = True] = 1) \end{aligned}$$

When considering the hiding and binding properties we define the advantage an adversary has of winning the corresponding security game as well as perfect hiding and binding.

The hiding game, \( hiding \text {-} game \), is defined as follows.

$$\begin{aligned} \begin{array}{l} hiding \text {-} game \; (\mathcal {A}_1, \mathcal {A}_2) = TRY \; do \; \{\\ \quad (ck,vk) \leftarrow key \text {-} gen ;\\ \quad ((m_0, m_1), \sigma ) \leftarrow \mathcal {A}_1 (vk); \\ \quad \_ \; \leftarrow assert ( valid \text {-} msg (m_0) \wedge valid \text {-} msg ( m_1));\\ \quad b \leftarrow coin ;\\ \quad (c,d) \leftarrow commit (ck, (if \; b \; then \; m_1 \; else \; m_0));\\ \quad b' \leftarrow \mathcal {A}_2(c , \sigma ); \\ \quad return (b = b')\} \; ELSE \; coin \\ \end{array} \end{aligned}$$

In this game the challenger asks the adversary to output two messages, commits to one of them, and hands the commitment back to the adversary, who must determine which message was committed. The adversary is said to win the game if it guesses correctly. Formally the adversary is split into two parts \((\mathcal {A}_1, \mathcal {A}_2)\): the first part outputs the messages and the second its guess at which message was committed to. We highlight that we must check the messages \((m_0,m_1)\) outputted by the adversary are valid; if the assertion fails then the \( ELSE \) branch is invoked and the adversary only wins the game half the time (equivalent to guessing randomly). Also note the two parts of the adversary must be allowed to pass state to each other. The hiding advantage is defined with respect to the hiding game.
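As an illustration of the game's structure, the following Python sketch (a toy of ours, not from the formalisation) plays the hiding game against a deliberately insecure scheme whose commitment is the message itself; the adversary then wins for either coin value:

```python
# A deliberately insecure commitment scheme: the "commitment" is the
# message itself, so hiding fails completely (toy illustration only).
def key_gen():
    return None, None                     # ck, vk (no keys needed)

def commit(ck, m):
    return m, None                        # commitment c = m, no opening value

def hiding_game(b):
    """One run of the hiding game with the coin fixed to b.

    A1 outputs messages (0, 1); A2 reads the message straight off the
    commitment, so it guesses correctly for either coin value.
    """
    ck, vk = key_gen()
    m0, m1 = 0, 1                         # A1's chosen messages (both valid)
    c, d = commit(ck, m1 if b else m0)
    b_guess = (c == m1)                   # A2's guess
    return b == b_guess

# Enumerating both coin values: the adversary wins with probability 1,
# so hiding-advantage = |1 - 1/2| = 1/2, the maximum possible.
assert hiding_game(False) and hiding_game(True)
```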

Definition 23

\( hiding \text {-} advantage (\mathcal {A}) = |\mathcal {P}[hiding\text {-}game(\mathcal {A}) = True] - \frac{1}{2}| \)

Definition 24

\( perfect \text {-} hiding (\mathcal {A}) = (hiding\text {-}advantage(\mathcal {A}) = 0)\)

The binding game requires the adversary to output a commitment c and two message-opening pairs \(((m,d),(m',d'))\) such that both verify correctly with respect to c. The messages outputted by the adversary must be distinct and valid; this is accounted for by the assert statement.

$$\begin{aligned}&binding \text {-} game (\mathcal {A}) = \; TRY \; do \; \{ \\&\quad (ck,vk) \leftarrow key \text {-} gen ; \\&\quad (c,m,d,m',d') \leftarrow \mathcal {A} (ck); \\&\quad \_ \; \leftarrow assert (m \ne m' \wedge valid \text {-} msg(m) \wedge valid \text {-} msg (m')); \\&\quad b \leftarrow verify (vk, m, c, d) ; \\&\quad b' \leftarrow verify (vk, m', c, d'); \\&\quad return (b \wedge b') \} \; ELSE \; return(False) \end{aligned}$$
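The following toy Python sketch (our own illustration) shows the structure of the binding game, together with an adversary that wins it outright against a scheme which commits only to the parity of the message:

```python
# A scheme that commits only to the parity of the message, so binding
# fails: any two messages of equal parity open the same commitment.
def key_gen():
    return None, None

def verify(vk, m, c, d):
    return c == m % 2                     # accepts any m with the right parity

def binding_game(adversary):
    """The binding game: the adversary must open one commitment two ways."""
    ck, vk = key_gen()
    c, m, d, m2, d2 = adversary(ck)
    if m == m2:                           # assert failure: adversary loses
        return False
    return verify(vk, m, c, d) and verify(vk, m2, c, d2)

# An adversary that always wins: 0 and 2 share the commitment c = 0.
assert binding_game(lambda ck: (0, 0, None, 2, None))
```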

Definition 25

\( binding \text {-} advantage (\mathcal {A}) = \mathcal {P}[binding\text {-}game(\mathcal {A}) = True] \)

Definition 26

\( perfect \text {-} binding (\mathcal {A}) = (binding\text {-}advantage(\mathcal {A}) = 0)\)

Commitment Schemes from \(\varSigma \)-Protocols

In this section we first describe the construction from [24] that uses a \(\varSigma \)-protocol and a generator for elements in the relation R to realise a commitment scheme that is perfectly hiding and computationally binding. We then detail our formalisation of the construction at an abstract level. To realise this we fix a \(\varSigma \)-protocol and use its components to construct a commitment scheme and prove it secure. Realising the proof at a general level like this allows us to easily instantiate the result for the \(\varSigma \)-protocols we consider.

Constructing Commitment Schemes from \(\varSigma \)-Protocols

Modern cryptography is based on hardness assumptions: relations that are considered computationally infeasible to break, for example the discrete log assumption given in Eq. 1.

Consider a hard relation R for a \(\varSigma \)-protocol, and let gen be the generator of elements in R; that is, gen outputs h and w such that R(h, w) is satisfied. Using a \(\varSigma \)-protocol for the relation R we can construct the commitment scheme given in Fig. 9. In the key generation phase the verifier runs the generation algorithm, \((h,w) \leftarrow gen\), and sends h to the committer. To commit to a message e the committer runs the simulator on their key h and e; that is, they run \((a,e,z) \leftarrow S(h, e)\), send a to the verifier, and keep e and z as the opening values. In the verification stage the committer sends e and z to the verifier, who uses the check algorithm of the \(\varSigma \)-protocol to confirm that (a, e, z) is an accepting conversation with respect to the public input h.

Fig. 9

A commitment scheme constructed from a \(\varSigma \)-protocol, e is the message being committed to

Correctness comes from the HVZK property of the \(\varSigma \)-protocol. The simulator’s output is the same as the output of a real execution of the protocol, meaning the check algorithm will accept the conversation. The commitment scheme is perfectly hiding because the commitment a is the first message of the \(\varSigma \)-protocol, which is created independently of the challenge (the message being committed to). The binding property follows from the special soundness property of the \(\varSigma \)-protocol: if the committer could output a commitment a and opening values (e, z) and \((e',z')\) such that both (a, e, z) and \((a,e',z')\) are accepting conversations, then by special soundness there exists an adversary that can output the witness w, contradicting the assumption that the relation is hard.
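A minimal Python sketch of this construction, using toy Schnorr components over the order-11 subgroup of \(\mathbb {Z}_{23}^{*}\) (illustrative parameters of ours, not from the paper), makes the correctness argument concrete: the simulator's output always passes the check algorithm:

```python
import random

# Toy parameters: order-11 subgroup of Z_23* (illustration only).
p, q, g = 23, 11, 2

def gen():
    """Generator for the hard relation: (h, w) with h = g^w mod p."""
    w = random.randrange(1, q)
    return pow(g, w, p), w

def simulate(h, e):
    """HVZK simulator of the Schnorr protocol: sample z, compute a."""
    z = random.randrange(q)
    a = (pow(g, z, p) * pow(pow(h, e, p), -1, p)) % p
    return a, e, z

def check(h, a, e, z):
    return pow(g, z, p) == (a * pow(h, e, p)) % p

# The commitment scheme of Fig. 9 built from these components.
def key_gen():
    h, w = gen()
    return h, (h, w)                      # ck = h, vk = (h, w)

def commit(ck, e):
    a, e, z = simulate(ck, e)             # commitment = simulated first message
    return a, z

def verify(vk, e, a, z):
    h, w = vk
    return check(h, a, e, z)

# Correctness: every valid message (challenge) verifies, since the
# simulator always outputs an accepting conversation (HVZK Condition 2).
for e in range(q):
    ck, vk = key_gen()
    a, z = commit(ck, e)
    assert verify(vk, e, a, z)
```

Note that a is computed from the uniformly sampled z before e influences anything the verifier can isolate, which is the intuition behind perfect hiding.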

Formalising the Construction

To formalise this construction we fix the components of a \(\varSigma \)-protocol in a locale and assume they form a \(\varSigma \)-protocol. The locale can be seen in Fig. 10, where the superscript C denotes we are using the parameters to construct a commitment scheme. The only additional parameter we require in this construction beyond what the \(\varSigma \)-protocol provides is a generator,

$$\begin{aligned} gen ^C \; {:}{:} \; (`pub\text {-}input \times `witness) \; spmf \end{aligned}$$

that outputs (h, w) such that the relation is satisfied. Note, \(\mathbf {for}\) indicates that all parameters in the locale have been renamed (compared to the base \(\varSigma \)-protocol locale) and \('+'\) indicates further assumptions are added to the locale.

Fig. 10

The locale fixing the parameters of a \(\varSigma \)-protocol and the assumptions required to prove the commitment scheme construction

Using these fixed parameters we make the assumptions that they form a \(\varSigma \)-protocol and that the generator outputs a tuple for which the relation holds. The losslessness assumptions on the parameters ensure the components of the protocol always return an output; intuitively, this assumes the protocol is executed to completion rather than terminated early.

To formalise the general notion of a hard relation we define a security game played by an adversary who is trying to break the relation: (h, w) is sampled from \( gen ^C\) and h is given to the adversary, who is asked to output \(w'\). The adversary wins the game if \((h,w') \in Rel ^C\).

$$\begin{aligned}&rel\text {-}game(\mathcal {A}) = TRY \; do \; \{ \\&\quad (h,w) \leftarrow gen ^C; \\&\quad w' \leftarrow \mathcal {A}(h); \\&\quad return ((h,w') \in Rel ^C)\} \; ELSE \; return (False) \end{aligned}$$

Using this game we define the relation advantage—the probability an adversary has of winning the game.

Definition 27

$$\begin{aligned} rel\text {-}advantage(\mathcal {A}) = \mathcal {P}[rel\text {-}game(\mathcal {A}) = True] \end{aligned}$$

We show a reduction to this advantage in the proof of the binding property.
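For intuition, the following Python sketch (a toy of ours) plays the relation game for the discrete-log relation over a tiny group, where even a brute-force adversary wins; for a cryptographically sized group the advantage of any feasible adversary is assumed negligible:

```python
import random

# Toy discrete-log relation over the order-11 subgroup of Z_23*.
p, q, g = 23, 11, 2

def gen():
    w = random.randrange(1, q)
    return pow(g, w, p), w

def rel_game(adversary):
    """The relation game: the adversary sees h and must output a witness."""
    h, w = gen()
    w_guess = adversary(h)
    return pow(g, w_guess, p) == h        # does (h, w') lie in the relation?

# In this tiny group exhaustive search wins with probability 1.
brute_force = lambda h: next(w for w in range(q) if pow(g, w, p) == h)
assert rel_game(brute_force)
```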

To formalise the protocol given in Fig. 9 we define the three components \( key \text {-} gen ^C, commit ^C, verify ^C\) that make up the commitment scheme and also what constitutes a valid message by defining \( valid \text {-} msg ^C = (m \in challenge \text {-} space ^C)\). The keys are generated by sampling from \( gen ^C\).

$$\begin{aligned}&key \text {-} gen ^C = do \; \{ \\&\quad (h,w) \leftarrow gen ^C; \\&\quad return (h, (h,w))\} \end{aligned}$$

To commit to a message the committer runs the simulator and outputs the initial message from the simulator as the commitment and holds the response as the opening value.

$$\begin{aligned}&commit ^C(h,e) = do \; \{ \\&\quad (a,e,z) \leftarrow S^C(h,e); \\&\quad return (a, z)\} \end{aligned}$$

Finally the verifier checks if the messages it has received from the committer correspond to an accepting conversation.

$$\begin{aligned} verify ^C((h,w),e,a,z) = check ^C(h,a,e,z) \end{aligned}$$

We now prove that our construction of the commitment scheme meets the desired properties. The \( commit \text {-} base \) locale is imported under the name \(\varSigma \text {-}commit\) thus all definitions are prefixed with this.

$$\begin{aligned} \mathbf {sublocale} \; \varSigma \text {-} commit : \; commit \text {-} base \; key \text {-} gen ^C \; commit ^C \; verify ^C \; valid \text {-} msg ^C \; . \end{aligned}$$

The formal proofs of the security properties broadly follow the intuition given in Sect. 8.1. The proof sketches can be found in “Appendix C”. The correctness and hiding properties are given in Lemmas 28 and 29 below.

Lemma 28

(in \(\varSigma \text {-}commit\)) \(\mathbf {shows} \; \varSigma \text {-} commit . correct \)

Lemma 29

(in \(\varSigma \text {-}commit\)) \(\mathbf {shows} \; \varSigma \text {-} commit . perfect \text {-} hiding (\mathcal {A})\)

Finally we consider the binding property, for which we show a reduction to the relation advantage. To show this reduction we construct an adversary, \( adversary _{rel}\), that plays the relation game using the \(\varSigma \)-protocol's special soundness adversary and the adversary from the binding game: \( adversary _{rel}\) calls the binding adversary, constructs two conversations from its output, passes them as inputs to the special soundness adversary, and outputs the witness it is given.

$$\begin{aligned}&adversary _{rel} (\mathcal {A},h)= do \; \{ \\&\quad (c, e, z, e', z') \leftarrow \mathcal {A}(h); \\&\quad \mathcal {A}_{ss}^C(h, (c,e,z), (c,e',z'))\} \end{aligned}$$
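The extraction underlying this reduction can be sketched in Python for the Schnorr case (a toy illustration of ours): given two accepting conversations with the same first message and distinct challenges, the special soundness adversary recovers the witness as \(w = (z - z')(e - e')^{-1} \bmod q\):

```python
# Extracting a discrete-log witness from two accepting Schnorr
# conversations with the same first message (toy parameters of ours).
p, q, g = 23, 11, 2

def check(h, a, e, z):
    return pow(g, z, p) == (a * pow(h, e, p)) % p

def special_soundness_adversary(h, conv, conv2):
    """From (a,e,z), (a,e',z') with e != e' recover w = (z - z')/(e - e')."""
    a, e, z = conv
    a2, e2, z2 = conv2
    return ((z - z2) * pow(e - e2, -1, q)) % q

# Two accepting conversations, as a hypothetical binding adversary would
# produce: first message a = g^r with r = 4, witness w = 3, challenges 2, 5.
w, r = 3, 4
h, a = pow(g, w, p), pow(g, r, p)
conv  = (a, 2, (r + 2 * w) % q)
conv2 = (a, 5, (r + 5 * w) % q)
assert check(h, *conv) and check(h, *conv2)
assert special_soundness_adversary(h, conv, conv2) == w
```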

Lemma 30

(in \(\varSigma \text {-}commit\))

\(\mathbf {shows} \; \varSigma \text {-} commit . binding \text {-} advantage (\mathcal {A}) \le rel\text {-}advantage( adversary _{rel}(\mathcal {A}))\)

The next section details how we use this general proof to realise the commitment schemes constructed from the \(\varSigma \)-protocols we consider; in particular we show how the security statements for the Pedersen commitment scheme come with very little proof effort.

The Pedersen Commitment Scheme

The Pedersen commitment scheme is a well-known commitment scheme that allows for commitment to an element of \(\mathbb {Z}_p\). In [13] we formalised the Pedersen commitment scheme from scratch. In this work, our general proof of the construction of commitment schemes from \(\varSigma \)-protocols, from Sect. 8, gives the result in a few lines of proof.

Fig. 11

The Pedersen commitment protocol, where the committer commits to message m. No keys are known only to one party; there is a single publicly known key pk

We note the exact instantiation of the general result from Sect. 8 outputs a form of the Pedersen scheme that is slightly different from the traditional version. Specifically, the commitment is taken as \(c = g^d \cdot pk ^{-m}\) rather than the \(c = g^d \cdot pk ^{m}\) commonly presented in the literature; the verification step is modified in the analogous way. This is due to the simulator in the Schnorr protocol taking the inverse of the public input when constructing the initial message. The Pedersen protocol that arises from our formalisation is given in Fig. 11.
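The instantiated scheme can be checked numerically; the following Python sketch (toy parameters of ours, not from the paper) implements the \(pk^{-m}\) form of the commitment and its matching verification:

```python
import random

# The Pedersen scheme as instantiated from the Schnorr protocol: the
# commitment uses pk^(-m) rather than the textbook pk^m.
# Toy parameters: order-11 subgroup of Z_23* (illustration only).
p, q, g = 23, 11, 2

def key_gen():
    w = random.randrange(1, q)
    return pow(g, w, p)                   # public key pk = g^w

def commit(pk, m):
    d = random.randrange(q)               # opening value (the randomness)
    c = (pow(g, d, p) * pow(pow(pk, m, p), -1, p)) % p
    return c, d

def verify(pk, m, c, d):
    # verification adjusted analogously: c * pk^m must equal g^d
    return (c * pow(pk, m, p)) % p == pow(g, d, p)

pk = key_gen()
for m in range(q):
    c, d = commit(pk, m)
    assert verify(pk, m, c, d)
```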

To realise the proof we leverage our proof of the Schnorr protocol and the general proof of the construction from Sect. 8. Figure 12 shows the entire proof effort required to prove the Pedersen commitment scheme secure using these two components. First we import, under the name \( pedersen \), the locale where the general proof is given and prove the import is valid. The correctness and perfect hiding properties come directly from the general proof; this is seen in the proofs that only call on the lemmas \( pedersen.correct \text {-} commit \) and \( pedersen.perfect \text {-} hiding \) respectively. For the binding property in the general proof (Lemma 30) we show a reduction to the hard relation; in any instantiation we must relate this to the hardness assumption corresponding to the commitment scheme that has been constructed. In this case we show the relation advantage in the general construction is equivalent to the discrete log advantage. This is shown by the lemma \( rel \text {-} adv \text {-} eq \text {-} dis \text {-} log \text {-} adv \) in Fig. 12. Using this we show the binding advantage is bounded by the discrete log advantage, completing the reduction for the binding property.

We note that with the general proof, for every \(\varSigma \)-protocol proven secure, we get the corresponding commitment scheme ‘for free’ (with the proof effort shown in Fig. 12).

Fig. 12

The proof (extracted from Isabelle) of the instantiation of the security statements for the Pedersen commitment scheme using the general proof of the construction of commitment schemes from \(\varSigma \)-protocols

Asymptotic Security for the Pedersen and Schnorr Protocols

So far, we have proved concrete security statements. Information-theoretic security notions like perfect hiding can be easily formalised in the concrete setting. Computational properties like computationally binding, however, can only be formalised in this setting by proving bounds in terms of hard problems. We now switch to the asymptotic security setting where we can formally express and prove computational security notions.

To that end, we must introduce a security parameter n to the formalisation and make all definitions and statements depend on n. Then, we can easily derive the conventional asymptotic security statements from the concrete ones. We use Isabelle’s locale instantiation mechanism as shown in Fig. 13 to achieve this with little effort. First we construct a locale that fixes the family of cyclic groups and then import the \( schnorr \text {-}\varSigma \text {-} protocol \) locale for all n. The statement that the Schnorr protocol is a \(\varSigma \)-protocol in the asymptotic setting comes trivially from the concrete setting (lemma \(\varSigma \text {-} protocol \)), as do the statements of correctness (\( asymp \text {-} correct \)) and perfect hiding (\( asymp \text {-} perfect \text {-} hiding \)) for the Pedersen commitment scheme.

It remains to show computational binding for the Pedersen commitment scheme. Here we show that \(\mathcal {A}\)’s advantage against the binding game is negligible if \( adversary \)’s advantage against the discrete log game is negligible. This follows directly from the bound in the concrete case.

Fig. 13

Proving security in the asymptotic setting for the Schnorr \(\varSigma \)-protocol and the Pedersen commitment scheme

Further Protocols and Schemes

We have formalised more protocols beyond those discussed in the main part of this paper. The full outline of our formalisation is given in Fig. 1. Here we briefly discuss the other protocols we formalise and point to the more detailed discussion of them in the appendix.

Compound \(\varSigma \)-Protocols: The AND Construction

In Sect. 6 we described a formalisation of a \(\varSigma \)-protocol for the OR of two statements. We have also formalised the corresponding construction for the AND of two statements. Like in the OR construction we let \(\varSigma _0\) and \(\varSigma _1\) be the underlying \(\varSigma \)-protocols. The relation \( Rel _{ AND }\) is formally defined as:

$$\begin{aligned} Rel _{ AND } = \{((x_0,x_1), (w_0,w_1)). \; (x_0,w_0) \in Rel _0 \wedge (x_1,w_1) \in Rel _1\}. \end{aligned}$$

where \(Rel_0\) and \(Rel_1\) correspond to the relations of the two underlying \(\varSigma \)-protocols. Unlike the OR construction, we define this directly as a set rather than as an inductive set.

The idea of the construction, \(\varSigma _{ AND }\), is simpler than the OR construction. The prover proves both statements in parallel for the same challenge sent by the verifier.

The formal proofs come more easily than in the OR construction as the underlying \(\varSigma \)-protocols are run in parallel, making it easier to use their respective security properties. The sum type required in the OR construction is also avoided: the witness is a tuple \((w_0,w_1) \; {:}{:} \; ` witness _0 \times ` witness _1\) rather than a single element that could be of either type \(` witness _0\) or \(` witness _1\).

Our formalisation of the AND construction is given in “Appendix D”.

The Chaum-Pedersen and Okamoto \(\varSigma \)-Protocols

The Chaum-Pedersen and Okamoto protocols are based on variations of the discrete log assumption. The Chaum-Pedersen protocol is based on the equality of discrete logarithms relation, \(Rel_{CP} = \{((h_0,h_1),w). \; h_0 = g^w \wedge h_1 = g'^w\}\), whereas the Okamoto protocol is based on a relation whereby the public input is just h and the witness comprises a tuple \((w_0,w_1)\): \( Rel _{Oka} = \{(h,(w_0,w_1)). \; h = g ^ {w_0} \cdot g'^{w_1}\}\). Here g and \(g'\) are distinct generators of the cyclic group G.
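The two relations can be illustrated with toy membership checks in Python (parameters are our own; for Okamoto we use the standard two-generator form \(h = g^{w_0} \cdot g'^{w_1}\)):

```python
# Membership checks for the two relations over a toy group: the order-11
# subgroup of Z_23*, with distinct generators g = 2 and g' = 3 = 2^8 mod 23.
p, q, g, g2 = 23, 11, 2, 3

def rel_cp(h0, h1, w):
    """Chaum-Pedersen: equality of discrete logs, h0 = g^w and h1 = g'^w."""
    return h0 == pow(g, w, p) and h1 == pow(g2, w, p)

def rel_oka(h, w0, w1):
    """Okamoto (standard two-generator form): h = g^w0 * g'^w1."""
    return h == (pow(g, w0, p) * pow(g2, w1, p)) % p

assert rel_cp(pow(g, 5, p), pow(g2, 5, p), 5)
assert rel_oka((pow(g, 4, p) * pow(g2, 7, p)) % p, 4, 7)
```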

Naturally both protocols are similar to the Schnorr protocol, which is based on the discrete log assumption. Many similar arguments are used in the formal proofs, especially in the rewriting of various terms. However, it was not always possible to reuse the exact auxiliary lemmas proven for the Schnorr protocol, as the form of the group element constructions is subtly different in each case.

More details on our formalisation of the Chaum-Pedersen and Okamoto \(\varSigma \)-protocols are given in “Appendices E” and “F” respectively.

Rivest Commitment Scheme

The Rivest commitment scheme uses a trusted initialiser to distribute correlated randomness to both parties before the protocol is run. Its formalisation is of interest for two reasons.

Firstly, the trusted initialiser model is different from the standard form of a commitment scheme. So we must consider how to model it in our framework. We choose to model the distributed randomness sent to each party by the trusted initialiser as the keys each party holds in the execution of the protocol—specifically we define the key generation algorithm to output the randomness the trusted initialiser sends to the respective parties.

Secondly, the security results for the Rivest protocol are not obtained from the general result of commitment schemes from \(\varSigma \)-protocols proven in Sect. 8. This is because it is not based on any hardness assumption, and thus there is no associated relation. Commitment schemes without a trusted initialiser cannot be both perfectly hiding and binding [25]. However, as the Rivest protocol utilises a trusted initialiser, it can achieve both perfect hiding and binding without relying on a hardness assumption.

Details of our formalisation of the Rivest commitment scheme can be found in “Appendix G”.

Related Work and Discussion

There are a number of tools that can be used for reduction based cryptographic proofs such as CertiCrypt [4], CryptHOL [6], EasyCrypt [3] and FCF [36]. These tools were all initially designed for game-based cryptographic proofs, however some have been used for simulation-based proofs too; in [11, 12, 14, 29] standalone MPC protocols were considered, whereas more recent work [17, 33] considers composability in the form of Constructive Cryptography and Universal Composability respectively.

We highlight two reasons we believe the choice of using CryptHOL and Isabelle is justified. Firstly, as we have mentioned throughout this paper, CryptHOL provides a strong foundation to formalise cryptography in a modular way. This allows others to pick up and easily extend the work given here. For example, if one wanted to extend the definitions of \(\varSigma \)-protocols to consider witness indistinguishability then one can simply incorporate the definitions into the abstract theory and construct the instantiated proofs in the relevant places. Likewise, if one needed a \(\varSigma \)-protocol or commitment scheme, and its corresponding security properties, in a more complex protocol, we have demonstrated how they can be assumed and general proofs constructed. Thus we feel CryptHOL goes a long way to providing the ability to formally reason about security proofs in the way they are often considered on paper, with a cutting and pasting of properties of underlying primitives. While other frameworks for formalising cryptography have similar concepts (EasyCrypt has a theory cloning mechanism, and CertiCrypt and FCF inherit the module system from Coq), they are not used as extensively as in CryptHOL; for example they do not prove security in the asymptotic setting.

Secondly we highlight what is in our opinion an understated advantage of Isabelle: the Archive of Formal Proofs (AFP). The AFP is a refereed collection of formalisations in Isabelle that is kept up to date for the current Isabelle release. In particular this ensures any formalisation accepted to the AFP can be used and added to with ease. Even if CryptHOL were not to be used for a number of years, one could still download an up-to-date version compatible with the most recent Isabelle release at any point in the future. It is perhaps not quite as obvious how to do this with other frameworks for cryptography that do not have such support behind them. The AFP also means there is a vast infrastructure of mathematical libraries available to the user; this is especially relevant in our instantiations, where the results rely heavily on the underlying number theory, much of which has been formalised already.

The drawback, or barrier to entry, to using CryptHOL is that one needs to understand Isabelle first. While this is not a trivial undertaking, we suggest the effort is not considerably greater than that of learning the intricacies of any other formal cryptographic framework.

Commitment schemes have been studied before in EasyCrypt in [34], where the Pedersen commitment scheme was proven secure. One noticeable difference in the proof effort required lies in the construction of the adversary used to prove computational binding, in particular in outputting the inverse of an element in a field. In EasyCrypt the inverse function is defined with the required property, that is, \(x \ne 0 \Rightarrow x \cdot inv(x) = 1\), and consequently division is defined as \(y \ne 0 \Rightarrow \frac{x}{y} = x \cdot inv(y)\). In Isabelle, on the other hand, we do not axiomatise the property of an inverse, but derive it from the Bezout function. This means our approach could be considered more foundational, and thus warrants the extra proof effort required.
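To illustrate the distinction, the Bezout-style derivation of the inverse can be sketched as follows. This is a minimal Python rendering of the idea, not our Isabelle development; the function names are ours.

```python
def bezout(a, b):
    """Extended Euclid: returns (g, u, v) with u*a + v*b == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, u, v = bezout(b, a % b)
    return (g, v, u - (a // b) * v)

def inv_mod(x, p):
    """Inverse of x modulo a prime p, derived from the Bezout identity:
    u*x + v*p == 1 implies u*x == 1 (mod p), so u mod p is the inverse."""
    g, u, _ = bezout(x % p, p)
    assert g == 1, "x must be nonzero modulo the prime p"
    return u % p
```

The point is that \(x \ne 0 \Rightarrow x \cdot inv(x) = 1\) is here a provable consequence of the Bezout identity rather than a defining axiom of the inverse function.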

\(\varSigma \)-protocols have been considered in [5] using CertiCrypt. The authors first proved secure a general construction of \(\varSigma ^{\phi }\)-protocols that prove knowledge of a preimage under a group homomorphism \(\phi \); the Schnorr and Okamoto \(\varSigma \)-protocols that we formalise are examples of this type. Secondly, they considered the compound statements we formalise in Sect. 6. Their work, however, only considered compound statements over bitstrings, whereas our formalisation is over an arbitrary Boolean algebra, of which bitstrings of a given length are one instance.

Both [5, 34] formalise some of the protocols we consider; however, they do so in different frameworks. For the ongoing development of the area we believe it is important to have up-to-date and usable formalisations in the same framework; therefore we feel our work provides a strong basis for further formalisations in this area.

Differences in the Definitions of \(\varSigma \)-Protocols

There are different definitions of \(\varSigma \)-protocols presented in the literature [5, 20, 21, 23, 31]. We now discuss their differences and the consequences of Cramer’s additional HVZK requirement (Condition 2 in Definition 3). We also outline how Barthe et al. dealt with this issue in their formalisation of \(\varSigma \)-protocols [5].

Damgård’s HVZK definition Damgård’s definition [23] of HVZK does not require the inputs to the real view to satisfy the relation; it only requires that the output distributions of the simulator and the real view are equal. We found two problems with this requirement. First, the real view is not well-defined if the public input is not in the relation: to construct the real view, we must run the prover, and the prover runs only if it gets a witness as input, but there is no such witness when the public input is not in the relation. Accordingly, none of the proofs of HVZK for the \(\varSigma \)-protocols we study would work. For example, without the assumption that \(h = g^w\) (from \((h,w) \in Rel^S\)) in the Schnorr \(\varSigma \)-protocol, we cannot reason about the real view and the simulator being equal. In particular, we have no way of showing that the value \(a = g^z \cdot h^{-e}\) output by the simulator is equal to the initial message constructed in the real view. Second, Damgård assumes in the proofs in [23] that the relation holds for the input. We therefore conclude that Damgård probably intended to include the restriction that \((h,w) \in Rel \) in his definition.
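To make the role of the assumption \(h = g^w\) concrete, the Schnorr real view and simulator can be sketched in Python over a toy group. The parameters and helper names below are illustrative only, not our formalisation.

```python
import random

p, q, g = 23, 11, 2  # toy Schnorr group: g generates a subgroup of order q in Z_p^*

def real_view(w, e, r=None):
    """Prover's transcript: initial message a = g^r, response z = r + e*w mod q."""
    r = random.randrange(q) if r is None else r
    return (pow(g, r, p), e, (r + e * w) % q)

def simulate(h, e, z=None):
    """Simulator: choose z first, then solve for a = g^z * h^(-e) mod p.
    (Three-argument pow with a negative exponent needs Python 3.8+.)"""
    z = random.randrange(q) if z is None else z
    return ((pow(g, z, p) * pow(h, -e, p)) % p, e, z)

def accepts(h, a, e, z):
    """Verifier's check: g^z = a * h^e."""
    return pow(g, z, p) == (a * pow(h, e, p)) % p
```

When \(h = g^w\), the two views range over exactly the same set of transcripts as the randomness varies, which is what the HVZK proof needs; without that assumption the equality cannot be established. The simulator nevertheless produces accepting conversations for any public input, which is the flavour of Cramer's additional requirement.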

Hazay’s and Lindell’s HVZK definition In [31], Hazay and Lindell credit Damgård with providing the ‘basis’ of their presentation of \(\varSigma \)-protocols. Their definition requires the relation to be satisfied on the public input and witness that are inputs to the real view. This corresponds to Condition 1 of Definition 3 in this work.

Damgård [23] and Hazay and Lindell [31] both carry out the OR construction for \(\varSigma \)-protocols with the relation \( Rel _{ OR }\) as defined in Sect. 6.1, with a proof similar to ours. However, their proofs are flawed as the simulator for the HVZK property is unspecified for public inputs h that are not in the language. Accordingly, completeness need not hold.

Cramer’s HVZK definition Cramer [21] additionally requires that the simulator outputs an accepting conversation when the public input is not in the language, which corresponds to Condition 2 in Definition 3 of Sect. 2. This ensures that the completeness proof of the OR construction for \(\varSigma \)-protocols goes through. Lindell has confirmed that it was implicitly assumed in the proof [private communication, 2019]. We therefore conclude that the extended definition should be the standard one.

To our knowledge no real-world \(\varSigma \)-protocol violates the additional requirement, although pathological examples can of course be constructed. In fact, it was straightforward to show the additional requirement for all the \(\varSigma \)-protocols we consider, yet this extended property is rarely required in the literature. However, it is crucial for the OR construction, which allows compound statements to be proved efficiently in zero knowledge.

Barthe et al.’s formalisation and Ciampi et al.’s HVZK definition There is another way to rescue the OR construction without adding Cramer’s requirement, namely changing the definition of \( Rel _ OR \). Barthe et al. [5] also noticed the completeness issue for the OR construction in their formalisation of \(\varSigma \)-protocols. They recovered the proof by defining \( Rel _ OR \) as

$$\begin{aligned} Rel _{OR} = \{((x_0,x_1),w)&. \; ((x_0,w) \in Rel _0 \wedge x_1 \in Domain ( Rel _1)) \nonumber \\&\qquad \vee ((x_1,w) \in Rel _1 \wedge x_0 \in Domain ( Rel _0))\}, \end{aligned}$$

i.e., that both inputs \(x_0\) and \(x_1\) are in the language. Ciampi et al. [20] use the same definition in their paper proofs.

In contrast, our definition (and Damgård’s, Hazay’s and Lindell’s, and Cramer’s) requires only one input \(x_0\) or \(x_1\) to be in the language; the other need only meet syntactic constraints as formalised by \( valid \text {-} pub \). This small difference has a substantial impact on the expressive power of the OR construction. With (12), the languages for the constituent \(\varSigma \)-protocols must be efficiently decidable. Indeed, Ciampi et al. “implicitly assume that the verifier of a protocol for relation R executes the protocol only if the common input x belongs to \(L_{R}\) and rejects immediately common inputs not in \(L_R\)” [19]. For relations like the discrete logarithm, this is not a problem because every group element has a discrete logarithm; the hard part is computing it. However, there are \(\varSigma \)-protocols where deciding the language itself is hard, e.g., Blum’s protocol for a Hamiltonian cycle in a graph [9]. The OR construction with the relation (12) does not work for such \(\varSigma \)-protocols.
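The difference between the two OR-relations can be made concrete with a small Python sketch over the discrete-logarithm relation. The parameters are toy values of our choosing, and the brute-force `in_lang` check stands in for a language-membership test that is exponential in general.

```python
p, q, g = 23, 11, 2  # toy Schnorr group: g generates a subgroup of order q in Z_p^*

def rel(h, w):
    """Discrete-log relation: (h, w) is in Rel iff h = g^w."""
    return h == pow(g, w, p)

def valid_pub(h):
    """Syntactic check only: h is a unit modulo p."""
    return 1 <= h < p

def in_lang(h):
    """Language membership, decided here by brute force over all witnesses."""
    return any(rel(h, w) for w in range(q))

def rel_or_ours(x0, x1, w):
    """Our Rel_OR: one side holds; the other passes only the syntactic check."""
    return (rel(x0, w) and valid_pub(x1)) or (rel(x1, w) and valid_pub(x0))

def rel_or_both_in_lang(x0, x1, w):
    """The variant (12) of Barthe et al.: the other side must be in the language."""
    return (rel(x0, w) and in_lang(x1)) or (rel(x1, w) and in_lang(x0))
```

For a statement \(x_0\) with a witness paired with an \(x_1\) that is a valid group element but lies outside the subgroup generated by \(g\), our relation holds while the variant (12) does not: the latter silently demands that membership of \(x_1\) in the language be decided.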


In this work we have formalised commitment schemes and \(\varSigma \)-protocols using the CryptHOL framework in Isabelle/HOL. The frameworks we provide are modular and thus can easily be used and extended by others. In principle the work we present could have been carried out in other formal frameworks for cryptography.

The merit of formalising cryptography is shown by the issue we uncover regarding the definition of \(\varSigma \)-protocols. While the cryptographer’s intuition may usually suffice, it is important that the correct definitions are presented consistently in the literature.

Our work is limited in that it cannot reason about polynomial runtime, a central concept in modern cryptography. Without being able to express this efficiency notion, the security definitions we provide must be considered without it; however, due to the nature of our reductions this does not pose a significant problem. The main drawback is the inability to formalise the hardness assumptions adequately: we cannot quantify over the set of all efficient adversaries, but only over all adversaries. This limitation of our work is due to CryptHOL not yet having a mechanism for reasoning about runtime. If such a feature is added, it is likely it could be easily integrated with this work, for two reasons: (1) the adversaries we construct are, in general, simple; we do not require rewinding or other procedures for which technical arguments would be needed to determine runtime; and (2) the structure of our proofs is similar to other formalisations using CryptHOL, so any such feature will likely be constructed with this in mind. In particular, our security definitions take adversaries and simulators as explicit arguments rather than quantifying over them. This should allow us to reason in a modular way about runtime as a refinement, without having to rewrite the existing proofs. We hope to integrate such a mechanism into our framework when it becomes available.

Consequently, incorporating the notion of runtime into our framework constitutes future work. Moreover, a logical next step to increase the usability of our framework for others would be to define and reason about full Zero-Knowledge, as this is an extension of the HVZK property of \(\varSigma \)-protocols. We believe this work is also likely to be of interest when formalising the malicious MPC security model, as commitment schemes, \(\varSigma \)-protocols and Zero-Knowledge are commonly used to transfer protocols from semi-honest to malicious security.