A Note on Perfect Correctness by Derandomization

We show a general compiler that transforms a large class of erroneous cryptographic schemes (such as public-key encryption, indistinguishability obfuscation, and secure multiparty computation schemes) into perfectly correct ones. The transformation works for schemes that are correct on all inputs with probability noticeably larger than half, and are secure under parallel repetition. We assume the existence of one-way functions and of functions with deterministic (uniform) time complexity 2^{O(n)} and non-deterministic circuit complexity 2^{Ω(n)}. Our transformation complements previous results that showed how public-key encryption and indistinguishability obfuscation that err on a noticeable fraction of inputs can be turned into ones that for all inputs are often correct, showing that they can be made perfectly correct. The technique relies on the idea of "reverse randomization" [Naor, Crypto 1989] and on Nisan–Wigderson style derandomization, previously used in cryptography to remove interaction from witness-indistinguishable proofs and commitment schemes [Barak, Ong and Vadhan, Crypto 2003].


Introduction
Randomized algorithms are often faster and simpler than their state-of-the-art deterministic counterparts, yet, by their very nature, they are error-prone. This gap has motivated a rich study of derandomization, where a central avenue has been the design of pseudorandom generators [8,28,31] that could offer one universal solution for the problem. This has led to surprising results, intertwining cryptography and complexity theory, and culminating in a derandomization of BPP under worst-case complexity assumptions, namely the existence of functions in E = Dtime(2^{O(n)}) with worst-case circuit complexity 2^{Ω(n)} [24,28].
For cryptographic algorithms, the picture is somewhat more subtle. Indeed, in cryptography, randomness is almost always necessary to guarantee any sense of security. While many cryptographic schemes are perfectly correct even if randomized, some do make errors. For example, in some encryption algorithms, notably the lattice-based ones [1,29], most but not all ciphertexts can be decrypted correctly. Here, however, we cannot resort to general derandomization, as a (completely) derandomized version will most likely be totally insecure.
It gets worse. While for general algorithms infrequent errors are tolerable in practice, for cryptographic algorithms, errors can be (and have been) exploited by adversaries (see [4] and a long line of follow-up works). Thus, the question of eliminating errors is ever more important in the cryptographic context. This question was addressed in a handful of special contexts in cryptography. In the context of interactive proofs, [16,19] show how to turn any interactive proof into one with perfect completeness. In the context of encryption schemes, Goldreich, Goldwasser, and Halevi [17] showed how to partially eliminate errors from lattice-based encryption schemes [1,29]. Subsequent works, starting from that of Dwork, Naor and Reingold [15], show how to partially eliminate errors from any encryption scheme [23,26]. Here, "partial" refers to the fact that they eliminate errors from the encryption and decryption algorithms, but not the key generation algorithm. That is, in their final immunized encryption scheme, it could still be the case that there are bad keys that always cause decryption errors. In the context of indistinguishability obfuscation (IO), Bitansky and Vaikuntanathan [11] show how to partially eliminate errors from any IO scheme. Concretely, assuming subexponential hardness of learning with errors (LWE) [29], they show how to convert any subexponentially secure IO scheme that might err on a fraction of the inputs into one that is correct on all inputs, with high probability over the coins of the obfuscator. This was improved by Ananth, Jain, and Sahai [2], who base the result on polynomial hardness and one-way functions instead of LWE.

This Work. We show how to completely immunize a large class of cryptographic algorithms, turning them into algorithms that make no errors at all. Our most general result concerns cryptographic algorithms (or protocols) that are "secure under parallel repetition." We show:

Theorem 1.1. (Informal) Assume that one-way functions exist, as well as functions with deterministic (uniform) time complexity 2^{O(n)} and non-deterministic circuit complexity 2^{Ω(n)}. Then, any encryption scheme, indistinguishability obfuscation scheme, and multiparty computation protocol that is secure under parallel repetition can be completely immunized against errors.
More precisely, we show that perfect correctness is guaranteed when the transformed scheme or protocol is executed honestly. The security of the transformed scheme or protocol is inherited from the security of the original scheme under parallel repetition. In the default setting of encryption and obfuscation schemes, encryption and obfuscation are always done honestly, and security under parallel repetition is well known to be guaranteed automatically. Accordingly, we obtain the natural notion of perfectly correct encryption and obfuscation. In contrast, in the setting of MPC, corrupted parties may in general affect any part of the computation. In particular, in the case of corrupted parties, the transformed protocol does not provide a better correctness guarantee, but only the same correctness guarantees as the original (repeated) protocol.
We find that perfect correctness is a natural requirement and the ability to generically achieve it for a large class of cryptographic schemes is aesthetically appealing. In addition, while in many applications almost perfect correctness may be sufficient, some applications do require perfectly correct cryptographic schemes. For example, using public-key encryption as a commitment scheme requires perfect correctness, the construction of non-interactive witness-indistinguishable proofs in [10] requires a perfectly correct indistinguishability obfuscation, and the construction of 3-message zero knowledge against uniform verifiers [3] requires perfectly correct delegation schemes.
Our tools, perhaps unsurprisingly given the above discussion, come from the area of derandomization; in particular, we make heavy use of Nisan–Wigderson (NW) type pseudo-random generators. Such NW-generators were previously used by Barak, Ong, and Vadhan [9] to remove interaction from commitment schemes and ZAPs. We use them here for a different purpose, namely to immunize cryptographic algorithms from errors. Below, we elaborate on the similarities and differences.

The Basic Idea
We briefly explain the basic idea behind the transformation, focusing on the case of public-key encryption. Imagine that we have an encryption scheme given by randomized key-generation and encryption algorithms and a deterministic decryption algorithm (Gen, Enc, Dec), where for any message m ∈ {0,1}^n there is a tiny decryption error:

Pr_{(r_g, r_e) ← {0,1}^{poly(n)}}[ Dec_{sk}(Enc_{pk}(m; r_e)) ≠ m : (pk, sk) = Gen(r_g) ] ≤ 2^{−n}.
Can we deterministically choose "good randomness" (r_g, r_e) that leads to correct decryption? This question indeed seems analogous to the question of derandomizing BPP. There, the problem can be solved using Nisan–Wigderson type pseudo-random generators [28]. Such generators can produce a poly(n)-long pseudo-random string using a short random seed of length d(n) = O(log n). They are designed to fool distinguishers of some prescribed polynomial size t(n) and may run in time 2^{O(d)} · t. Derandomization of the BPP algorithm is then simply done by enumerating over all 2^d = n^{O(1)} seeds and taking the majority.
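As a toy illustration of this enumerate-and-vote pattern, the following sketch derandomizes an error-prone procedure by majority over all seeds. The stand-in generator (sha256), the example algorithm, and all parameters are our own illustrative assumptions; sha256 is of course not an NW generator and carries none of its guarantees.

```python
import hashlib

def toy_prg(seed: int, nbytes: int) -> bytes:
    # Stand-in for an NW-type generator: deterministically expands a short
    # seed into a longer string. (sha256 only makes the sketch runnable.)
    return hashlib.sha256(seed.to_bytes(4, "big")).digest()[:nbytes]

def randomized_alg(x: int, r: bytes) -> bool:
    # A BPP-style algorithm deciding "is x even", which errs exactly when
    # the first byte of its randomness is 0 (probability 1/256).
    correct = (x % 2 == 0)
    return (not correct) if r[0] == 0 else correct

def derandomized(x: int, d: int = 8) -> bool:
    # Enumerate all 2^d seeds and output the majority vote.
    votes = [randomized_alg(x, toy_prg(s, 16)) for s in range(2 ** d)]
    return votes.count(True) * 2 > len(votes)
```

Since only about a 1/256 fraction of the 256 pseudo-random strings is "bad", the majority vote is correct on every input: `derandomized(4)` returns True and `derandomized(7)` returns False.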
We can try to use NW-type generators to solve our problem in a similar way. However, the resulting scheme would not be secure; indeed, it will be deterministic, which means it cannot be semantically secure [18]. To get around this, we use the idea of reverse randomization from [14,15,25,27]. For each possible seed i ∈ {0,1}^d for the NW-generator NWPRG, we derive corresponding randomness

(r_g^i, r_e^i) = NWPRG(i) ⊕ (BMYPRG(s_g^i), BMYPRG(s_e^i)).

Here BMYPRG is a Blum–Micali–Yao (a.k.a. cryptographic) pseudo-random generator [7,32], and the seeds (s_g^i, s_e^i) are chosen independently for every i, with the sole restriction that their image is sparse enough (say, they are of total length n/2). Encryption and decryption for any given message are now done in parallel with respect to all 2^d copies of the original scheme, where the final result of decryption is defined to be the majority of the 2^d decrypted messages. Security is now guaranteed by the BMY-type generators and the fact that public-key encryption can be securely performed in parallel. Crucially, the pseudo-randomness of BMY strings is guaranteed despite the fact that their image forms a sparse set. The fact that the set of BMY strings is sparse will be used to argue the perfect correctness of the scheme. In particular, when shifted at random, this set will evade the (tiny) set of "bad randomness" (that leads to decryption errors) with high probability, at least 1 − 2^{n/2} · 2^{−n} = 1 − 2^{−n/2}.
In the actual construction, the image is not shifted truly at random, but rather by an NW-pseudo-random string, and we would like to argue that this suffices to get the desired correctness. To argue that NW-pseudo-randomness is enough, we need to show that with high enough probability (say 0.51) over the choice of the NW string, the shifted image of the BMY generator still evades "bad randomness." This last property may not be efficiently testable deterministically, but it can be tested non-deterministically in fixed polynomial time, by guessing the seeds for the BMY generator that would lead to bad randomness. We accordingly rely on NW generators that fool non-deterministic circuits. Such pseudo-random generators are known under the worst-case assumption that there exist functions in E with non-deterministic circuit complexity 2^{Ω(n)} [30].
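The counting behind the evasion argument can be made concrete in a small sketch. Everything here (the stand-in generators, the planted "bad set", the lengths) is an illustrative assumption of ours, not the paper's construction:

```python
import hashlib

RLEN = 32  # length (bytes) of the randomness each copy consumes

def bmy_prg(seed: bytes) -> bytes:
    # Stand-in cryptographic (BMY-type) PRG with a sparse image:
    # 8-byte seeds expand to 32-byte strings.
    return hashlib.sha256(b"bmy" + seed).digest()[:RLEN]

def nw_string(i: int) -> bytes:
    # Stand-in for the NW-pseudo-random shift derived from seed i.
    return hashlib.sha256(b"nw" + i.to_bytes(4, "big")).digest()[:RLEN]

def derive_randomness(i: int, seed: bytes) -> bytes:
    # Reverse randomization: shift the sparse BMY image by the NW string.
    return bytes(a ^ b for a, b in zip(nw_string(i), bmy_prg(seed)))

def is_bad(r: bytes) -> bool:
    # Toy "bad randomness" set of density 2^-16: leading two bytes zero.
    return r[0] == 0 and r[1] == 0

# One independent BMY seed per NW seed i; the shifted images should
# evade the sparse bad set.
seeds = [hashlib.sha256(b"seed%d" % i).digest()[:8] for i in range(16)]
bad = sum(is_bad(derive_randomness(i, s)) for i, s in enumerate(seeds))
```

With a bad set of density 2^{−16} and 16 copies, a union bound puts the chance of hitting it at roughly 2^{−12}, so one expects `bad` to be 0.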

Related Work on Derandomization in Cryptography
Barak, Ong, and Vadhan [9] were the first to demonstrate how NW-type derandomization can be useful in cryptography. They showed how NW generators can be used to derandomize Naor's commitments [27] and Dwork and Naor's ZAPs [14]. In Sect. 3.1, we discuss in further detail the relation and differences between their work and this work.
Subsequently, Barak, Lindell, and Vadhan [6] used NW generators to obtain a lower bound for 2-message public-coin (plain) zero-knowledge proofs.

Organization. In Sect. 2, we give the required preliminaries. Section 3 presents the transformation itself. In Sect. 4, we discuss several examples of interest where the transformation can be applied.

Preliminaries
In this section, we give the required preliminaries, including standard computational concepts, cryptographic schemes and protocols, and the derandomization tools that we use.

Standard Computational Concepts
We recall standard computational concepts concerning Turing machines and Boolean circuits.
• By algorithm we mean a uniform Turing machine. We say that an algorithm is PPT if it is probabilistic and polynomial time.
• A polynomial-size circuit family C is a sequence of circuits C = {C_λ}_{λ∈N}, such that each circuit C_λ is of polynomial size λ^{O(1)} and has λ^{O(1)} input and output bits.
• We follow the standard habit of modeling any efficient adversary strategy A as a family of polynomial-size circuits. For an adversary A corresponding to a family of polynomial-size circuits {A_λ}_{λ∈N}, we often omit the subscript λ when it is clear from the context. For simplicity, we shall simply call such an adversary a polynomial-size adversary.
• We say that a function f : N → R is negligible if it decays asymptotically faster than any polynomial.
• Two ensembles of random variables X = {X_λ}_{λ∈N} and Y = {Y_λ}_{λ∈N} are said to be computationally indistinguishable, denoted by X ≈_c Y, if for all polynomial-size distinguishers D, there exists a negligible function ν such that for all λ,

|Pr[D(X_λ) = 1] − Pr[D(Y_λ) = 1]| ≤ ν(λ).

Cryptographic Schemes and Protocols
We consider a simple model of cryptographic schemes and protocols that will allow us to describe the transformation generally. In Sect. 4, we discuss how schemes and protocols of interest are captured by this model.

Repeated Executions. For a function k = k(λ), we denote by Π^{⊗k} the k-fold parallel repetition of a scheme Π: each party i ∈ [m] runs k independent copies of Π, executing, for every j ∈ [k], the copy Π(1^λ, x, r_{·j}) with randomness r_{·j} = (r_{1j}, . . . , r_{mj}), in parallel and obtaining the corresponding outputs, namely, y = (y_{ij})_{i∈[m], j∈[k]}.

NW and BMY PRGs
We now define the basic tools required for the main transformation: NW-type PRGs [28] and BMY-type PRGs [7,32]. The transformation itself is given in the next section.

Definition 2.2. (Non-deterministic Circuits) A non-deterministic Boolean circuit C(x, w) takes x as a primary input and w as a witness. We define C(x) := 1 if and only if there exists w such that C(x, w) = 1.

Definition 2.3. (NW-Type PRGs against Non-deterministic Circuits) An algorithm NWPRG is an NW-type pseudo-random generator against non-deterministic circuits of size t(n) if it expands a seed of length d(n) = O(log n) into an n-bit string, runs in time 2^{O(d)} · t, and for every non-deterministic circuit C of size t(n),

|Pr_{s←{0,1}^{d(n)}}[C(NWPRG(s)) = 1] − Pr_{r←{0,1}^n}[C(r) = 1]| ≤ 1/t(n).
We shall rely on the following theorem by Shaltiel and Umans [30] regarding the existence of NW-type PRGs as above, assuming worst-case hardness for non-deterministic circuits.
Assume there exists a function f ∈ E = Dtime(2^{O(n)}) with non-deterministic circuit complexity 2^{Ω(n)}. Then, for any polynomial t(·), there exists an NW-generator against non-deterministic circuits of size t(n). We remark that the above is a worst-case assumption, in the sense that the function f needs to be hard in the worst case (and not necessarily in the average case). The assumption can be seen as a natural generalization of the assumption that EXP ⊄ NP. We also note that there is a universal candidate for the corresponding PRG, obtained by instantiating the hard function with any E-complete language under linear reductions. See further discussion in [9]. We now define BMY-type (a.k.a. cryptographic) PRGs.

The Error-Removing Transformation
We now describe a transformation from any (1 − α)-correct scheme Π for a function f into a perfectly correct one. For a simpler exposition, we restrict attention to the case that the error α is tiny. We later show how this restriction can be removed.

Ingredients. In the following, let λ be a security parameter, let m = m(λ), n = n(λ), ℓ = ℓ(λ) be polynomials, and let α = α(λ) ≤ 2^{−λm−2}. We rely on the following:
Correctness. We now turn to show that the new scheme is perfectly correct.

Proof. We first note that had r_{NW} been chosen truly at random (instead of using NWPRG), then for any input, with high probability over the choice of r_{NW}, the corresponding scheme would have been perfectly correct.

Proof. Fixing any such λ, x, and s = (s_1, . . . , s_m), the string r = r^s_{BMY} ⊕ r_{NW} is distributed uniformly at random. In this case, the scheme is guaranteed to err with probability at most α ≤ 2^{−λm}/4. The claim now follows by taking a union bound over all 2^{λm} tuples s_1, . . . , s_m.
We now claim that a similar property holds with roughly the same probability when r NW is pseudo-random as in the actual transformation.
Proof. Assume toward contradiction that the claim does not hold for some λ ∈ N and x ∈ {0,1}^{n×m}. We construct a non-deterministic distinguisher that breaks NWPRG. The distinguisher has the input x hardwired. Given r_{NW}, it non-deterministically guesses s_1, . . . , s_m, computes r_{BMY} = (BMYPRG(s_1), . . . , BMYPRG(s_m)) and r = r_{NW} ⊕ r_{BMY}, and checks whether f(x) ≠ Π(1^λ, x, r). As we just proved in the previous claim, when r_{NW} is truly random, such a witness s_1, . . . , s_m exists with probability at most 1/4, whereas, by our assumption toward contradiction, when r_{NW} is pseudo-random, such a witness exists with probability larger than 1/t + 1/4. The size of the above distinguisher is some fixed polynomial t′(λ) that depends only on m, n, and the time required to compute Π, f, and BMYPRG. Thus, in the construction we choose t > max{t′, 8}, meaning that the constructed distinguisher indeed breaks NWPRG.
With the last claim, we now conclude the proof of Proposition 3.1. Indeed, for any input x, when emulating the k-fold repetition Π^{⊗k}(1^λ, x, r), the randomness used for the jth copy Π(1^λ, x, r_j) is r_j = NWPRG(j) ⊕ (BMYPRG(s_{j1}), . . . , BMYPRG(s_{jm})). By the last claim, for all but a 1/4 + 1/t ≤ 3/8 fraction of the NW-seeds j, any choice of BMY-seeds s_j yields the correct result y_j = f(x) in the corresponding execution Π(1^λ, x, r_j). In particular, it is always the case that the majority of executions results in y = f(x), as required.

Security. We now observe that the randomness generated according to the transformation is indistinguishable from true randomness. Intuitively, this means that if the original scheme was secure under parallel repetition, when the honest parties use true randomness, it remains as secure when using randomness generated according to the transformation. Examples are given in the next section. Concretely, we consider two distributions r^{tra} and r^{uni} on randomness for the parties in Π^{⊗k}:

Proof. By the security of the BMY PRG, for any i, j: r^{tra}_{ij} ≈_c r^{uni}_{ij}. Since r^{tra}_{ij} (respectively, r^{uni}_{ij}) is generated independently of all other r^{tra}_{i′j′} (respectively, r^{uni}_{i′j′}), the proposition follows by a standard hybrid argument.
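The correctness mechanism just argued, per-copy randomness r_j = NWPRG(j) ⊕ BMYPRG(s_j) followed by a majority vote over the copies, can be sketched end to end. The faulty scheme, the stand-in generators, and all parameters below are our own toy assumptions, not the actual construction:

```python
import hashlib
from collections import Counter

K = 16     # number of copies = number of NW seeds
RLEN = 8   # per-copy randomness length (bytes)

def nw(j):
    return hashlib.sha256(b"nw%d" % j).digest()[:RLEN]

def bmy(s):
    return hashlib.sha256(b"bmy" + s).digest()[:RLEN]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def faulty_scheme(x, r):
    # A (1 - alpha)-correct "scheme" computing f(x) = x^2: it errs
    # exactly when the first byte of its randomness is 0.
    return x * x if r[0] != 0 else -1

def transformed(x, bmy_seeds):
    # Copy j runs with randomness r_j = NWPRG(j) XOR BMYPRG(s_j);
    # the final output is the majority of the K results.
    outs = [faulty_scheme(x, xor(nw(j), bmy(bmy_seeds[j]))) for j in range(K)]
    return Counter(outs).most_common(1)[0][0]

seeds = [hashlib.sha256(b"s%d" % j).digest()[:4] for j in range(K)]
```

`transformed(7, seeds)` returns 49 as long as fewer than half of the 16 derived strings land in the bad set, which for uniform strings fails with probability below 10^{−15}.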

Removing the Assumption Regarding Tiny Error
Above, we assumed that α(λ) ≤ 2^{−λm−2}. We now show how to remove this assumption using standard amplification by parallel repetition. For completeness, we explicitly describe the transformation. • samples k random strings (r_{i1}, . . . , r_{ik}), where r_{ij} ← {0,1}^ℓ.

Proposition 3.3. The new scheme is (1 − 2^{−λm−2})-correct.

Proof. By a Chernoff–Hoeffding bound, each party i obtains the correct output y_i, except with probability 2^{−6λm}. By a union bound, all m parties obtain the correct outputs (y_1, . . . , y_m), except with probability m · 2^{−6λm} < 2^{−λm−2}, as required.
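The quantitative step in this proof can be checked numerically. The short script below (ours, not from the paper) compares the exact probability that a majority of k independent executions errs against the Hoeffding bound:

```python
from math import comb, exp

def majority_error(k, p):
    # Exact probability that at least ceil(k/2) of k independent
    # executions err, each erring with probability p < 1/2.
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range((k + 1) // 2, k + 1))

def hoeffding_bound(k, p):
    # Hoeffding: Pr[#errors >= k/2] <= exp(-2k(1/2 - p)^2).
    return exp(-2 * k * (0.5 - p) ** 2)

# Per-execution error p = 1/4 (a (3/4)-correct scheme):
for k in (11, 51, 101):
    print(k, majority_error(k, 0.25), hoeffding_bound(k, 0.25))
```

The exact error shrinks geometrically in the number of repetitions and always sits below the bound, which is what makes the union bound over all m parties go through.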

The Combination of the Two Steps
We observe that the combination of the two steps also has the simple form of a parallel execution using specially crafted randomness that is pseudo-random. This will be used in the next section to prove security for specific examples in which security under parallel repetition is guaranteed. Specifically, when combining the two steps, the final transformed scheme Π^{tra} consists of three high-level steps:
1. Each party i locally generates randomness r^{tra}_{ij} for j ∈ [K], where K = k · k′.
2. The parties emulate the repeated scheme Π^{⊗K}(1^λ, x, r^{tra}), where r^{tra} = (r^{tra}_{ij})_{i∈[m], j∈[K]}.
3. Each party i obtains outputs (y_{i1}, . . . , y_{iK}) and performs a local postprocessing step to obtain its final output y_i.
By Proposition 3.2, r^{tra} is pseudo-random.

Relation to the Work of Barak, Ong, and Vadhan
Barak, Ong, and Vadhan [9] use NW generators to derandomize Naor's commitments [27] and Dwork and Naor's ZAPs [14]. In these applications, "reverse randomization" is already encapsulated in the constructions of ZAPs and commitments that they start from, and they show that "the random shift" can be derandomized, using the fact that ZAPs and commitments are secure under parallel repetition. Barak, Ong, and Vadhan were not interested in the correctness of a specific computation per se, but rather in the existence of an "incorrect object", namely an accepting proof for a false statement in ZAPs, or a commitment with two inconsistent openings. In their applications, it is in fact enough to use hitting-set generators rather than pseudo-random generators. Such generators are only guaranteed to hit any set of large density that can be recognized by circuits of some prescribed size. (The density of corresponding seeds just has to be positive, and not necessarily large.) Intuitively, the reason that hitting-set generators are enough for their applications is that they only need to deal with a one-sided error. For example, in a ZAP system, one already assumes that true statements are always accepted by the verifier, so when derandomizing they only need to recognize false statements. This is analogous to having an encryption system that is always correct on encryptions of zero, but may make mistakes on encryptions of one. We note, however, that as an assumption, hitting-set generators (against non-deterministic circuits) are sufficient for constructing corresponding pseudo-random generators (against non-deterministic circuits), since they directly imply the required hard functions (see further discussion in [21]).
We note that for the case of bit commitments, Barak, Ong, and Vadhan settle for (hitting-set) generators against uniform distinguishers. We can also rely on (pseudorandom) generators against uniform distinguishers for some specific applications such as correcting errors in bit encryption; however, in general (e.g., for indistinguishability obfuscation), we need to deal with non-uniform distinguishers. Roughly speaking, uniform indistinguishability is sufficient only in cases where the input space can be uniformly enumerated in polynomial time, which is indeed the case for bit commitments or bit encryption, where the inputs are only zero and one.

Examples of Interest
We now discuss three examples of interest.
2. Semantic security: for any polynomial-size distinguisher D, there exists a negligible function μ(·) such that for any two messages m, m′ ∈ M of the same size,

|Pr[D(pk, Enc_{pk}(m)) = 1] − Pr[D(pk, Enc_{pk}(m′)) = 1]| ≤ μ(λ),

where the probability is over the coins of Enc and the choice of pk sampled by Gen(1^λ).
Applying the Transformation. Public-key encryption can be modeled as a three-party scheme consisting of a generator, an encryptor, and a decryptor. The generator has no input and uses its randomness r_1 to generate pk and sk, which are sent to the encryptor and decryptor, respectively. The encryptor has as input a message m and uses its randomness r_2 in order to generate an encryption Enc_{pk}(m; r_2), which is sent to the decryptor. The decryptor has no input nor randomness; it uses the secret key to decrypt and outputs the decrypted message. (In this case, the function computed by the scheme is f(⊥, m, ⊥) = (⊥, ⊥, m).) In the repeated scheme Π^{⊗K}, the generator Gen(1^λ; r_{1j}) is applied K independent times, with fresh randomness r_{1j} for each j ∈ [K], to generate corresponding keys pk = (pk_j)_{j∈[K]}, sk = (sk_j)_{j∈[K]}. Encryption involves K independent encryptions: Enc^{⊗K}_{pk}(m; r_2) := (Enc_{pk_1}(m; r_{21}), . . . , Enc_{pk_K}(m; r_{2K})).
As defined in Sect. 3, when applying the error-removal transformation, the randomness r = (r_{ij} : i ∈ [2], j ∈ [K]) is sampled according to r^{tra} instead of truly at random according to r^{uni}. Decryption is done by decrypting each encryption with the corresponding sk_j and postprocessing the K results as prescribed by the transformation.

Proof. The correctness of the new scheme given by the transformation follows directly from Proposition 3.1. We next observe that the new scheme is also secure. Concretely, for any (infinite sequence of) two messages m, m′ ∈ M,

Enc^{⊗K}_{pk}(m; r^{tra}_2) ≈_c Enc^{⊗K}_{pk}(m; r^{uni}_2) ≈_c Enc^{⊗K}_{pk}(m′; r^{uni}_2) ≈_c Enc^{⊗K}_{pk}(m′; r^{tra}_2).

The first and last indistinguishability relations follow directly from Proposition 3.2.
The fact that Enc^{⊗K}_{pk}(m; r^{uni}_2) ≈_c Enc^{⊗K}_{pk}(m′; r^{uni}_2) follows from the semantic security of the underlying encryption scheme, which is known to be preserved under multiple encryptions (see, e.g., [20]).
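For intuition, here is a toy, deliberately insecure sketch (entirely our own, with a planted key-generation error) of the repeated encryption and majority decryption rule:

```python
import hashlib
from collections import Counter

K = 9  # number of parallel key pairs / encryptions

def gen(r):
    # Toy key generation: sk = r, pk = H(r). A key pair is "bad"
    # (it always decrypts incorrectly) exactly when r[0] == 0.
    return hashlib.sha256(b"pk" + r).digest(), r

def enc(pk, m):
    # Toy one-bit "encryption" under pk (insecure; illustration only).
    return m ^ pk[0]

def dec(sk, c):
    pk = hashlib.sha256(b"pk" + sk).digest()
    m = c ^ pk[0]
    return m if sk[0] != 0 else m ^ 1  # a bad key flips the plaintext bit

def enc_repeated(pks, m):
    # Enc^{(K)}: K independent encryptions of the same message.
    return [enc(pk, m) for pk in pks]

def dec_repeated(sks, cs):
    # Decrypt each copy with its own key and take the majority.
    outs = [dec(sk, c) for sk, c in zip(sks, cs)]
    return Counter(outs).most_common(1)[0][0]

rs = [hashlib.sha256(b"r%d" % j).digest() for j in range(K)]
pks, sks = zip(*[gen(r) for r in rs])
cts = enc_repeated(pks, 1)
```

`dec_repeated(sks, cts)` recovers the message unless at least 5 of the 9 keys are bad, which for uniform key randomness happens with probability below 10^{−9}.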
In [15], Dwork, Naor, and Reingold show how public-key encryption schemes, where decryption errors may occur even for a large fraction of messages, can be transformed into ones that have only a tiny decryption error over the randomness of the scheme. Applying our transformation, we can further turn such schemes into perfectly correct ones.

Indistinguishability Obfuscation
Our second example concerns indistinguishability obfuscation (IO) [5]. We start by recalling the definition.
2. Indistinguishability: for any polynomial-size distinguisher D, there exists a negligible function μ(·) such that for any two circuits C, C′ ∈ C that compute the same function and are of the same size,

|Pr[D(O(C, 1^λ)) = 1] − Pr[D(O(C′, 1^λ)) = 1]| ≤ μ(λ),

where the probability is over the coins of D and O.
Applying the Transformation. IO can be modeled as a two-party scheme consisting of an obfuscator and an evaluator. The obfuscator has as input a circuit C and uses its randomness r_1 in order to create an obfuscated circuit C̃ = O(C, 1^λ; r_1), which is sent to the evaluator. The evaluator has an input x for the circuit and no randomness; it computes C̃(x) and outputs the result. (In this case, the function computed by the scheme is f(C, x) = (⊥, C(x)).) In the repeated scheme Π^{⊗K}, obfuscation involves K independent obfuscations:

O^{⊗K}(C, 1^λ; r_1) := (O(C, 1^λ; r_{11}), . . . , O(C, 1^λ; r_{1K})).

As defined in Sect. 3, when applying the error-removal transformation, the randomness r = (r_{1j} : j ∈ [K]) is sampled according to r^{tra} instead of truly at random according to r^{uni}. Evaluation for input x is done by running each obfuscated circuit on the input x and postprocessing the K results as prescribed by the transformation.
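The repeated obfuscation and majority evaluation can be sketched the same way; the toy "obfuscator" with a planted per-copy error set is our own illustration, not an IO construction:

```python
import random
from collections import Counter

K = 9  # number of independent obfuscations

def obfuscate(circuit, seed):
    # Toy "obfuscator": returns an evaluator that errs on a single input
    # out of 100, chosen by the obfuscation randomness.
    bad = random.Random(seed).randrange(100)
    return lambda x: (not circuit(x)) if x == bad else circuit(x)

def obfuscate_repeated(circuit, seeds):
    # O^{(K)}: K independent obfuscations of the same circuit.
    return [obfuscate(circuit, s) for s in seeds]

def eval_repeated(obfs, x):
    # Evaluate every copy on x and take the majority.
    outs = [ob(x) for ob in obfs]
    return Counter(outs).most_common(1)[0][0]

circuit = lambda x: x % 3 == 0
obfs = obfuscate_repeated(circuit, seeds=range(K))
ok = all(eval_repeated(obfs, x) == circuit(x) for x in range(100))
```

Each copy errs on a 1/100 fraction of inputs, but the majority over 9 copies is wrong at some x only if at least 5 copies planted their error at that same x, so `ok` comes out True except with probability around 10^{−8} over the obfuscation randomness.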

Claim 4.2. If O is a secure indistinguishability obfuscator with correctness error α ≤ 2^{−λm−2}, then the new indistinguishability obfuscation scheme given by Π^{tra} is secure and perfectly correct.
Proof. The correctness of the new scheme given by the transformation follows directly from Proposition 3.1. We now observe that the new scheme is also secure, which follows similarly to the case of public-key encryption considered above. Concretely, for any (infinite sequence of) two equal-size circuits C, C′ ∈ C,

O^{⊗K}(C, 1^λ; r^{tra}_1) ≈_c O^{⊗K}(C, 1^λ; r^{uni}_1) ≈_c O^{⊗K}(C′, 1^λ; r^{uni}_1) ≈_c O^{⊗K}(C′, 1^λ; r^{tra}_1).

The first and last indistinguishability relations follow directly from Proposition 3.2.
The fact that O^{⊗K}(C, 1^λ; r^{uni}_1) ≈_c O^{⊗K}(C′, 1^λ; r^{uni}_1) follows from the security of the underlying obfuscation scheme, which is known to be preserved under multiple obfuscations (see, e.g., [11]).
In [11], Bitansky and Vaikuntanathan show how indistinguishability obfuscation [5], where the obfuscated circuit may err even on a large fraction of inputs, can be transformed into one that has only a tiny error over the randomness of the obfuscator, as required here. Applying our transformation, we can further turn such schemes into perfectly correct ones.

Multi-Party Computation
Our third and last example concerns multi-party computation (MPC) protocols. There are several models for capturing the adversarial capabilities in an MPC protocol. Roughly speaking, our transformation can be applied whenever the protocol is secure under parallel repetition. In the new protocol, perfect correctness will be guaranteed when all the parties behave honestly. The security guarantee given by the new protocol will be inherited from the original repeated protocol. We stress that, in the case of corrupted parties, the transformed protocol does not provide any correctness guarantees beyond those given by the original (repeated) protocol. In particular, if the adversary can inflict a certain correctness error in the original (repeated) protocol, it may also be able to do so in the transformed protocol.

The Formal Model. Since we rely on standard MPC conventions, we shall keep our description relatively light, abstracting out less relevant details (for further reading, see for instance [12,20]). We consider protocols with security against static corruptions according to the real-ideal paradigm, and we restrict attention to the standalone model. In this setting, the adversary A corrupts some set of parties C ⊆ [m], which it fully controls throughout the protocol, and can also choose the inputs for honest parties at the onset of the computation. The adversarial view in the protocol consists of all the communication generated by the honest parties and their respective outputs. We denote by Real_{Π,A}(1^λ, z; r) the polynomial-time process that generates the adversarial view and the outputs of the honest parties in [m] \ C when these parties execute the protocol Π for functionality f with randomness r = (r_{i_1}, . . . , r_{i_{m−|C|}}), against a PPT adversary A with auxiliary input z controlling the parties in C.
The requirement is that the output of this process can be simulated by a PPT process Ideal^S_f(1^λ, z), called the ideal process, where A is replaced by an efficient simulator S. The simulator can only submit inputs x_1, . . . , x_m to f, learn the outputs of the corrupted parties in C, and has to generate the adversarial view. The ideal process outputs the view generated by the simulator as well as the output generated by f for the honest parties.
As before, we denote by Π^{⊗K} the K-fold parallel repetition of a protocol Π for computing f^{⊗K}(x) = (f(x))^K, where each honest party i ∈ [m] \ C, given input x_i, runs K parallel copies of Π, all with the same input x_i, and obtains outputs y_{i1}, . . . , y_{iK}. We consider protocols that are secure under parallel repetition in the following sense.

Definition 4.3. We say that an MPC protocol Π (for some functionality f) is secure under parallel repetition with respect to an ideal process Ideal if for any PPT adversary A and polynomial K(λ) there exists a PPT simulator S such that for any (infinite sequence of) security parameter λ ∈ N and auxiliary input z ∈ {0,1}^{λ^{O(1)}},

Real_{Π^{⊗K},A}(1^λ, z) ≈_c Ideal^S_{f^{⊗K}}(1^λ, z).

Applying the Transformation. We consider applying our transformation to eliminate errors in the case that all parties execute the protocol honestly, while preserving the same level of security under corruptions. We denote by Π^{tra} the protocol for computing f after applying the transformation from Sect. 3, where Π is repeated K times in parallel, the randomness of the parties is derived as defined in the transformation, and the final output of party i is derived by postprocessing the K outputs (y_{i1}, . . . , y_{iK}) as prescribed by the transformation.

Claim 4.3. Assume that Π is a protocol for f that is (1/2 + η)-correct, for some η = λ^{−O(1)}, and secure under parallel repetition. Then Π^{tra} is a secure and perfectly correct protocol for f.
Proof. The perfect correctness of the new protocol Π^{tra}, when all parties behave honestly, follows directly from Proposition 3.1.
We now show that the protocol Π^{tra} is secure. For any PPT adversary A against Π^{tra}, viewing A as an adversary against Π^{⊗K}, let S be its simulator given by Definition 4.3. We show that for any (infinite sequence of) security parameter λ and auxiliary input z,

Real_{Π^{tra},A}(1^λ, z) ≈_c Ideal^S_f(1^λ, z).

Let Π_{pp}^{⊗K} be the protocol where the parties first execute the K-fold repetition Π^{⊗K} and then each party sets its final output by postprocessing its K outputs as specified by the correctness amplification transformation. Then, we first note that, by definition,

Real_{Π^{tra},A}(1^λ, z) = Real_{Π_{pp}^{⊗K},A}(1^λ, z; r^{tra}),

where r^{tra} is the randomness of the honest parties, generated according to our transformation. Next, by Proposition 3.2, it holds that

Real_{Π_{pp}^{⊗K},A}(1^λ, z; r^{tra}) ≈_c Real_{Π_{pp}^{⊗K},A}(1^λ, z; r^{uni}),

where r^{uni} is truly random.
It is left to show that

Real_{Π_{pp}^{⊗K},A}(1^λ, z; r^{uni}) ≈_c Ideal^S_f(1^λ, z).    (1)

Indeed, recall that, by Definition 4.3,

Real_{Π^{⊗K},A}(1^λ, z; r^{uni}) ≈_c Ideal^S_{f^{⊗K}}(1^λ, z).    (2)

Next, note that each of the two distributions in Equation (1) can be efficiently generated given a sample from the respective distribution in Equation (2). This is done by postprocessing the K outputs of each honest party as prescribed by the correctness amplification transformation. Accordingly, any efficient distinguisher for (1) immediately implies an efficient distinguisher for (2). Thus, the indistinguishability in Equation (1) follows from Equation (2). This concludes the proof.