Abstract
Garg et al. (Crypto 2015) initiated the study of cryptographic protocols over noisy channels in the non-interactive setting, namely when only one party speaks. A major question left open by this work is the completeness of finite channels, whose input and output alphabets do not grow with the desired level of security. In this work, we address this question by obtaining the following results:

1.
Completeness of Bit-ROT with Inverse Polynomial Error. We show that bit-ROT (i.e., a Randomized Oblivious Transfer channel in which each of the two messages is a single bit) can be used to realize general randomized functionalities with inverse polynomial error. Towards this, we provide a construction of string-ROT from bit-ROT with inverse polynomial error.

2.
No Finite Channel is Complete with Negligible Error. To complement the above, we show that no finite channel can be used to realize string-ROT with negligible error, implying that the inverse polynomial error in the completeness of bit-ROT is inherent. This holds even with semi-honest parties and for computational security, and is contrasted with the (negligible-error) completeness of string-ROT shown by Garg et al.

3.
Characterization of Finite Channels Enabling Zero-Knowledge Proofs. An important instance of secure computation is zero-knowledge proofs. Noisy channels can potentially be used to realize truly non-interactive zero-knowledge proofs, without trusted common randomness, and with non-transferability and deniability features that cannot be realized in the plain model. Garg et al. obtain such zero-knowledge proofs from the binary erasure channel (BEC) and the binary symmetric channel (BSC). We complete the picture by showing that in fact any non-trivial channel suffices.
1 Introduction
A noisy communication channel is a probabilistic function \(\mathcal C:\mathcal X\rightarrow \mathcal Y\), mapping a sent symbol x to a received symbol y. Standard examples include the binary symmetric channel (BSC), which flips a bit \(x\in \{0,1\}\) with probability \(0<p<1/2\), and the binary erasure channel (BEC), which erases x with probability p. A fundamental question in information-theoretic cryptography is: what cryptographic protocols can be constructed from noisy communication channels? This question has been studied extensively, with respect to various cryptographic tasks and a variety of channels, and has uncovered a rich landscape of structural relationships. Starting with the pioneering work of Wyner [30], who showed that the wiretap channel can be used for secure communication, many works studied the usefulness of noisy channels for additional cryptographic tasks (e.g., [5, 6, 14, 23, 25, 28, 29]). This culminated in a complete characterization of the channels on which oblivious transfer, and hence general secure two-party computation, can be based [12, 13].
Most cryptographic constructions from noisy channels crucially require interaction. While this is not a barrier for some applications, there are several useful settings which are inherently non-interactive. A natural question that arises is what cryptographic tasks can be realized using only one-way noisy channels, namely by protocols over noisy channels in which only one party speaks. The question of realizing secure communication in this setting was the topic of Wyner’s work, and is a central theme in the large body of work on “physical layer security” [8, 24].
A clean way to capture tasks that can potentially be realized using one-way noisy communication is via a sender-receiver functionality, which takes an input from a sender S and delivers a (possibly) randomized output to a receiver R. In more detail, such a sender-receiver functionality is a deterministic or randomized mapping \(f: \mathcal {A} \rightarrow \mathcal {B} \) that takes an input \(a \in \mathcal {A} \) from a sender S and delivers an output \(b=f(a)\) to a receiver R. In the randomized case, the randomness is internal to the functionality; neither S nor R learns it or can influence its choice.
Useful Instances. Several important cryptographic tasks can be captured as sender-receiver functionalities. For instance, a foundational primitive in cryptography is non-interactive zero-knowledge (NIZK) [9, 15], which is typically constructed in the common random string (CRS) model. NIZK proofs can be captured in the sender-receiver framework by a deterministic function that takes an NP-statement and a witness from the sender and outputs the statement along with the output of the verification predicate to the receiver. As noted by Garg et al. [17], secure implementation of this function over a one-way channel provides the first truly non-interactive solution to zero-knowledge proofs, where no trusted common randomness is available to the parties. Moreover, this solution can achieve useful properties of interactive zero-knowledge protocols such as non-transferability and deniability, which are impossible to achieve in the standard non-interactive setting. Another example from [17] is that of randomly generating “puzzles” without giving either party an advantage in solving them. For instance, the sender can transmit to a receiver a random Sudoku challenge, or a random image of a one-way function, while the receiver is guaranteed that the sender has no advantage in solving the puzzle and can only generate a puzzle of the level of difficulty prescribed by the randomized algorithm that generates it. A third example of a useful sender-receiver functionality is randomized blind signatures, which can be used for applications such as e-cash [3, 10, 11].
Blind signatures are captured by a randomized function that takes a message and a signing key from the sender and delivers a signature on some randomized function of the message to the receiver (for instance, by adding a random serial number to a given dollar amount) (Footnote 1). Another use case for such randomized blind signatures is non-interactive certified PKI generation, where an authority can issue to a user signed public keys, while only the users learn the corresponding secret keys. Applications notwithstanding, understanding the cryptographic power of noisy channels with one-way communication is a fundamental question from the theoretical standpoint.
Prior Work. A large body of theoretical and applied work studied how to leverage one-way communication to construct secure message transmission (see, e.g., [4, 24] and references therein). More recently, Garg et al. [17] broadened the scope of this study to include more general cryptographic functionalities. Notably, they showed that one-way communication over the standard BEC or BSC channels suffices for realizing NIZK, or equivalently any deterministic sender-receiver functionality. Moreover, for general (possibly randomized) functionalities, a randomized string-OT channel (string-ROT for short) is complete. A string-ROT channel takes a pair of \(\ell \)-bit strings from the sender and delivers only one of them, chosen at random by the channel, to the receiver. This completeness result was extended in [17] to other channels. However, in all of these general completeness results, the input and output alphabet sizes of the channel grow (superpolynomially) with both the desired level of security and the complexity of the functionality being realized. On the negative side, it was shown in [17] that standard BEC/BSC channels are not complete. A major question that was left open is the existence of a complete finite channel, whose input and output alphabets do not grow with the security parameter or the complexity of the functionality. Furthermore, for the special case of deterministic functionalities (equivalently, NIZK), it was not known whether completeness holds for all non-trivial finite channels.
Next, we describe our framework in a bit more detail, followed by a summary of our results, which essentially settle the above-mentioned questions.
Our Framework. Let \(\mathcal {C} \) be a finite channel. We define a one-way secure computation (OWSC) protocol for a functionality f over channel \(\mathcal {C} \) as a randomized encoder that maps the sender’s input a into a sequence \(\varvec{x} \) of channel inputs, and a decoder that maps the sequence \(\varvec{y} \) of the receiver’s channel outputs into an output b. Given an error parameter \(\epsilon \), the protocol should satisfy the following security requirements: (i) given the sender’s view, which consists of its input a and the message \(\varvec{x} \) that it fed into the channel, the receiver’s output should be distributed as f(a), and (ii) the view of the receiver, namely the message \(\varvec{y} \) it received from the channel, can be simulated from f(a). Note that (i) captures receiver security against a corrupt sender as well as correctness, while (ii) captures sender security against a corrupt receiver.
We will construct OWSC protocols for various functionalities over various finite channels. Of particular interest to us is the randomized \(\ell \)-bit string-ROT channel discussed above, which we denote by \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \), and its finite instance \(\mathcal {C} _{\mathsf {ROT}}^{1} \), which we refer to as the bit-ROT channel.
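To make the channel semantics concrete, here is a minimal Python sketch of the two channels as randomized maps. The function names, and the convention that the receiver's view includes the channel's choice bit b, are our own illustration, not notation from the paper:

```python
import secrets

def string_rot(a0: bytes, a1: bytes):
    """l-bit string-ROT: the channel picks a uniform bit b, hidden from
    the sender, and delivers exactly one of the two sender strings."""
    assert len(a0) == len(a1), "both messages must be l bits long"
    b = secrets.randbits(1)
    return b, (a0, a1)[b]          # the receiver's view

def bit_rot(a0: int, a1: int):
    """bit-ROT: the finite l = 1 instance, operating on single bits."""
    assert a0 in (0, 1) and a1 in (0, 1)
    b = secrets.randbits(1)
    return b, (a0, a1)[b]
```

Over N parallel invocations of `bit_rot`, the receiver obtains, independently for each position, one of the two sender bits; this is the resource used in Sect. 3.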
1.1 Our Results
We are ready to state our results:

1.
Completeness of Bit-ROT with Inverse Polynomial Error. We show that bit-ROT is complete for randomized functionalities with inverse polynomial simulation error. Towards this, we provide a construction of string-ROT from bit-ROT with inverse polynomial error, and appeal to the completeness of string-ROT. This is captured by the following (formal statement in Theorem 7):
Theorem 1
(Informal) The bit-ROT channel (\(\mathcal {C} _{\mathsf {ROT}}^{1} \)) is complete for one-way secure computation, with inverse-polynomial error. This holds for both semi-honest and malicious parties. The protocol establishing completeness can either be efficient in the circuit size, in which case it is computationally secure using any pseudorandom generator, or efficient in the branching program size, in which case it is information-theoretically secure.

2.
No Finite Channel is Complete with Negligible Error. To complement the above positive result, we show that no finite channel is complete for randomized functionalities with negligible error. This is contrasted with the completeness of string-ROT discussed above. In more detail, we prove the following theorem (formal statement in Theorem 9):
Theorem 2
(Informal) No finite channel is complete for one-way secure computation with negligible error, even with semi-honest parties and for computational security. More concretely, string-ROT cannot be implemented in this setting.

3.
Every Non-trivial Finite Channel is Complete for Zero-Knowledge. As discussed above, a particularly compelling use case for one-way communication over noisy channels is truly non-interactive zero-knowledge proofs, without a trusted common randomness setup and with desirable features such as non-transferability and deniability. The results of Garg et al. [17] obtain such NIZK proofs from the binary erasure channel (BEC) and the binary symmetric channel (BSC). This raises the question of whether all non-trivial channels enable NIZK.
We show that this is indeed the case if we define a “trivial” channel to be one that either does not enable communication at all, or is essentially equivalent to a noiseless channel when used by malicious senders. In more detail, we prove the following theorem (see Sect. 5 for a formal statement):
Theorem 3
(Informal) Given a language \(L\in \mathrm {NP} \setminus \mathrm {BPP} \), a one-way secure computation protocol over channel \(\mathcal {C} \) for zero-knowledge for L exists if and only if \(\mathcal {C} \) is non-trivial.
1.2 Our Techniques
In this section we provide an overview of our techniques.
Completeness of Bit-ROT with Inverse Polynomial Error. We show that bit-ROT is complete for randomized functionalities with inverse polynomial error. Towards this, we show, in Theorem 6, that (\(\ell \)-bit) string-ROT can be realized with polynomially many invocations of the bit-ROT channel with inverse-polynomial error. The OWSC protocol is efficient in \(\ell \) and is secure even against malicious adversaries.
In more detail, we use average-case secret sharing, a weak version of ramp secret sharing in which the reconstruction condition is required to hold only for a random set of r players and the privacy condition only for a random set of t players, where r and t are the reconstruction and privacy thresholds, respectively. Theorem 4 provides a construction of an OWSC protocol for string-ROT using bit-ROT given an average-case secret sharing scheme (\({\mathsf {Avg\text{ }SSS}} \)) with a sufficiently small gap parameter. The analysis of this theorem crucially uses the anti-concentration bound for Bernoulli sums over a small window around the mean. In Theorem 5, we construct an efficient \({\mathsf {Avg\text{ }SSS}} \) for N players in which the gap between r and t is inverse polynomial in N and which has an inverse polynomial privacy guarantee. The scheme we construct and its analysis build on techniques for secret sharing with binary shares that were recently introduced by Lin et al. [22] (for a different goal). Our result on efficient realization of string-ROT from bit-ROT directly follows from combining the above two results.
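To get a feel for where the inverse-polynomial error comes from: across N bit-ROT invocations, the number of shares of the first string that the receiver obtains is \(K \sim \mathrm{Bin}(N, 1/2)\), and one of the two strings has at least \(r = N/2 + N^{\delta }\) shares exactly when K falls outside a window of width about \(2N^{\delta }\) around the mean. The anti-concentration bound says this window carries only \(O(N^{\delta - 1/2})\) probability mass. The following exact computation (our own numerical sketch, not code from the paper) illustrates this scaling:

```python
from math import comb

def window_mass(N: int, delta: float) -> float:
    """Exact Pr[|K - N/2| < N**delta] for K ~ Bin(N, 1/2): the 'bad'
    event in which neither string has r = N/2 + N**delta shares."""
    w = N ** delta
    return sum(comb(N, k) for k in range(N + 1)
               if abs(k - N / 2) < w) / 2 ** N

# The window mass shrinks like N**(delta - 1/2) as N grows:
for N in (100, 1000, 10000):
    print(N, window_mass(N, 0.25), N ** (0.25 - 0.5))
```

Each binomial pmf term near the mean is \(O(1/\sqrt{N})\) and the window contains about \(2N^{\delta }\) terms, giving the \(O(N^{\delta -1/2})\) bound; the printed values track this rate.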
Impossibility of String-ROT from Finite Channels with Negligible Error. Next, we show that string-ROT cannot be constructed from bit-ROT with negligible error. We establish our result in two steps. Our first negative result, in Theorem 8, shows that string-ROT cannot be realized with polynomially many invocations of the bit-ROT channel while guaranteeing negligible error. Our proof is inspired by [17]. In more detail, we use an isoperimetric inequality for Boolean hypercubes (Harper’s theorem) to show the existence of strategies that can efficiently guess both input strings in any implementation of string-ROT with a polynomially bounded number of bit-ROT invocations, which violates ROT security. The machine we describe for guessing the two input strings is computationally efficient, hence our impossibility result applies to computationally bounded semi-honest adversaries.
We then extend this result in Theorem 9 to show that no finite channel can be used to realize string-ROT using polynomially many invocations of the channel while guaranteeing negligible error. To show this, we model a channel as a function from the input of the channel and its internal randomness to the output of the channel. We then proceed to prove the impossibility in a manner similar to the impossibility for the bit-ROT channel.
Impossibility of Completeness of Finite Channels with Negligible Error. Theorem 9 shows that string-ROT cannot be realized over any finite channel efficiently (in terms of the number of channel invocations) and with negligible error, even in the computational setting. Since string-ROT is a simple functionality with a small description in many function representation classes, we obtain an impossibility result that rules out a complete channel with negligible error for most function representation classes of interest.
Characterization of Finite Channels Enabling Zero-Knowledge Proofs. It is a fundamental question to understand which channels enable ZK proofs. We give a complete characterization of all finite channels over which an OWSC protocol for the zero-knowledge (proof of knowledge) functionality is possible. In fact, we show that the only channels which do not enable zero-knowledge proofs are “trivial” channels (a proof over a trivial channel translates to a proof over a plain one-way communication channel, which is possible only for languages in \(\mathrm {BPP}\)). Over any other finite channel, we build a statistical zero-knowledge proof of knowledge, which is unconditionally secure. Our result generalizes a result of [17], which gave OWSC zero-knowledge proof protocols over Binary Erasure Channels (BEC) and Binary Symmetric Channels (BSC) only. Extending this result to all non-trivial channels requires new ideas, exploiting a geometric view of channels.
2 Preliminaries
To begin, we define some notation that we will use throughout the paper.
Notation 1
A member of a finite set \(\mathcal {X} \) is represented by x and sampling an independent uniform sample from \(\mathcal {X} \) is denoted by \(x {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {X} \). A vector in \(\mathcal {X} ^n\) is represented by \(\varvec{x} \in \mathcal {X} ^n\), whose coordinate \(i \in [n]\) is represented by either \(x_i\) or \(\varvec{x} (i)\).
For a vector \(\varvec{x} \in \mathcal {X} ^n\) and a set \(A \subseteq [n]\), the restriction of \(\varvec{x} \) to the set A, represented by \(\left. \varvec{x} \right| _{A} \), is the vector with all the coordinates outside of A replaced by an erasure symbol \(\bot \) which is not a member of \(\mathcal {X} \). That is, \(\left. \varvec{x} \right| _{A} (i) = \varvec{x} (i)\) if \(i \in A\) and \(\left. \varvec{x} \right| _{A} (i) = \bot \) otherwise. Finally, \(\varDelta \left( \mu _0,\mu _1 \right) \) denotes the total variation distance between distributions \(\mu _0\) and \(\mu _1\).
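These two pieces of notation can be sketched in a few lines of Python (the helper names are our own; `Counter` frequency tables stand in for finite distributions):

```python
from collections import Counter

BOT = "⊥"  # erasure symbol, outside the alphabet

def restrict(x, A):
    """x|_A: keep the coordinates in A (1-indexed, as i ranges over [n])
    and replace every other coordinate by the erasure symbol."""
    return tuple(x[i - 1] if i in A else BOT for i in range(1, len(x) + 1))

def total_variation(mu0: Counter, mu1: Counter) -> float:
    """Delta(mu0, mu1) = (1/2) * sum_z |mu0(z) - mu1(z)|, after
    normalizing each frequency table to a probability distribution."""
    n0, n1 = sum(mu0.values()), sum(mu1.values())
    return 0.5 * sum(abs(mu0[z] / n0 - mu1[z] / n1)
                     for z in set(mu0) | set(mu1))
```

For example, `restrict(("a", "b", "c"), {2})` yields `(BOT, "b", BOT)`, and two identical frequency tables are at total variation distance 0.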
2.1 Sender-Receiver Functionalities and Channels
This work addresses secure computation tasks that are made possible by one-way communication over a noisy channel. Such tasks can be captured by sender-receiver functionalities, which take an input from a sender S and deliver a (possibly) randomized output to a receiver R. More precisely, a sender-receiver functionality is a randomized mapping \(f: \mathcal {A} \rightarrow \mathcal {B} \) that takes an input \(a \in \mathcal {A} \) from a sender S and delivers an output \(b=f(a)\) to a receiver R. We will sometimes refer to f simply as a function and write \(f(a;\rho )\) when we want to make the internal randomness of f explicit.
In order to realize f, we assume that S and R are given parallel access to a channel \(\mathcal {C}: \mathcal {X} \rightarrow \mathcal {Y} \), which is a senderreceiver functionality that is typically much simpler than the target function f. We will typically view \(\mathcal {C} \) as being finite whereas f will come from an infinite class of functions. We will be interested in the number of invocations of \(\mathcal {C} \) required for realizing f with a given error \(\epsilon \) (if possible at all).
We will be particularly interested in the following channel.
Definition 1
(ROT channel). The \(\ell \)-bit randomized string oblivious transfer channel (or \(\ell \)-bit string-ROT for short), denoted by \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \), takes from S a pair of strings \(\varvec{a} _0, \varvec{a} _1 \in \{0, 1\}^{\ell }\), and delivers to R the pair \((b, \varvec{a} _b)\), where \(b {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\}\) is a uniformly random bit chosen by the channel.
Finally, it is sometimes convenient to assume that a sender-receiver functionality f can additionally take a public input that is known to both parties. For instance, in a zero-knowledge proof such a public input can include the NP-statement, or in blind signatures it can include the receiver’s public verification key (allowing f to check the validity of the secret key). All of our definitions and results can be easily extended to this more general setting.
2.2 Secure Computation with One-Way Communication
A secure protocol for \(f:\mathcal {A} \rightarrow \mathcal {B} \) over a channel \(\mathcal {C} \) is formalized via the standard definitional framework of reductions in secure computation. Our default setting shall be that of information-theoretic security against semi-honest parties, with extensions to the setting of computational security and malicious parties. All our negative results in fact hold for the weakest setting of computational security against semi-honest parties. All our positive results hold for (either information-theoretic or computational) security against malicious parties.
OWSC Protocols. A one-way secure computation protocol for f over \(\mathcal {C} \) specifies a randomized encoder that maps the sender’s input a into a sequence of channel inputs \(\varvec{x} \), and a decoder that maps the receiver’s channel outputs \(\varvec{y} \) into an output b. Given an error parameter \(\epsilon \), the protocol should satisfy the following security requirements: (i) given the sender’s view, which consists of an input a and the message \(\varvec{x} \) that it fed into the channel, the receiver’s output should be distributed as f(a), and (ii) the view of the receiver, namely the message \(\varvec{y} \) it received from the channel, can be simulated from f(a). Note that (i) captures receiver security against a corrupt sender as well as correctness, while (ii) captures sender security against a corrupt receiver. We formalize this below.
Definition 2
(One-way secure computation). Given a randomized function \(f:\mathcal {A} \rightarrow \mathcal {B} \) and a channel \(\mathcal {C}:\mathcal {X} \rightarrow \mathcal {Y} \), a pair of randomized functions \(\langle \mathsf {S}, \mathsf {R}\rangle \), where \(\mathsf {S}: \mathcal {A} \rightarrow \mathcal {X} ^N\) and \(\mathsf {R}: \mathcal {Y} ^N \rightarrow \mathcal {B} \), is said to be an \((N, \epsilon )\) OWSC protocol for f over \(\mathcal {C} \) if there exists a simulator \(\mathsf {Sim}_{\mathsf {R}}: \mathcal {B} \rightarrow \mathcal {Y} ^N\), such that for all \(a \in \mathcal {A} \), letting \(\varvec{x} = \mathsf {S}(a)\) and \(\varvec{y} = \mathcal {C} ^N(\varvec{x})\) denote the result of applying \(\mathcal {C} \) to each coordinate of \(\varvec{x} \), we have \(\varDelta \left( (\varvec{x}, \mathsf {R}(\varvec{y})), (\varvec{x}, f(a)) \right) \le \epsilon \) and \(\varDelta \left( \varvec{y}, \mathsf {Sim}_{\mathsf {R}}(f(a)) \right) \le \epsilon \).
OWSC for Malicious Parties. In this case, our security requirement coincides with UC security, but with simplifications implied by the communication model. Specifically, since a corrupt receiver has no input to the functionality and sends no messages in the protocol, UC security against a malicious receiver is the same as in the semi-honest setting. UC security against a malicious sender, on the other hand, requires that from any arbitrary strategy of the sender, a simulator is able to extract a valid input.
Formally, an OWSC protocol for f over \(\mathcal {C} \) is secure against malicious parties if, in addition to the requirements in Definition 2, there exists a randomized simulator \(\mathsf {Sim}_{\mathsf {S}}:\mathcal {X} ^N \rightarrow \mathcal {A} \) such that for every \(\varvec{x} \in \mathcal {X} ^N\), \(\varDelta \left( \mathsf {R}(\mathcal {C} ^N(\varvec{x})), f(\mathsf {Sim}_{\mathsf {S}}(\varvec{x})) \right) \le \epsilon \).
In our (positive) results in this setting, we shall require the simulator to be computationally efficient as well.
OWSC with Computational Security. We can naturally relax the above definition of (statistical) \((N, \epsilon )\) OWSC to computational \((N, T, \epsilon )\) OWSC, for a distinguisher size bound T, by replacing each statistical distance bound \(\varDelta \left( A,B \right) \le \epsilon \) by the condition that for all circuits C of size T, \(|\Pr (C(A)=1) - \Pr (C(B)=1)|\le \epsilon \).
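A useful sanity check on this relaxation: when the distinguisher's size is unbounded, the best test accepts exactly the outcomes that are likelier under A than under B, and its advantage equals the statistical distance \(\varDelta \left( A,B \right) \), so the computational definition is indeed a weakening of the statistical one. A small sketch over finite distributions (our own illustration, with `Counter` frequency tables standing in for A and B):

```python
from collections import Counter

def best_advantage(muA: Counter, muB: Counter) -> float:
    """Advantage |Pr[C(A)=1] - Pr[C(B)=1]| of the optimal unbounded
    distinguisher C(z) = 1 iff muA(z) > muB(z); this sum of positive
    parts equals the statistical distance Delta(A, B)."""
    nA, nB = sum(muA.values()), sum(muB.values())
    return sum(max(muA[z] / nA - muB[z] / nB, 0.0)
               for z in set(muA) | set(muB))
```

Bounding the distinguisher's size to T can only shrink the achievable advantage, which is what lets computational OWSC tolerate distributions that are statistically far apart.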
Complete Channels for OWSC. So far, we considered OWSC protocols for a concrete function f and with a concrete level of security \(\epsilon \). However, in a cryptographic context, one is typically interested in a single “universal” protocol that takes a description \(\hat{f}\) of a function f and a security parameter \(\lambda \) as inputs and runs in polynomial time in its input length.
To meaningfully specify the goal of such a universal OWSC protocol, we need to fix a representation class \(\mathcal {F} \) that defines an association between a bit-string \(\hat{f}\) and the (deterministic or randomized) function f it represents. The representation classes \(\mathcal {F} \) we will be interested in include circuits (capturing general polynomial-time computations) and branching programs (capturing logarithmic-space computations and logarithmic-depth circuits). The string-ROT channel \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \) can also be viewed as a degenerate function class \(\mathcal {F} \) in which \(\hat{f}=1^\ell \) specifies the string length.
If a channel \(\mathcal {C} \) enables a universal protocol for \(\mathcal {F} \), we say that \(\mathcal {C} \) is OWSC-complete for \(\mathcal {F} \). We will distinguish between completeness with inverse-polynomial error and completeness with negligible error, depending on how fast the error vanishes with \(\lambda \). We will also distinguish between completeness with statistical and computational security. We formalize this notion of completeness below.
Definition 3
(OWSC-complete channel). Let \(\mathcal {F} \) be a function representation class and \(\mathcal {C} \) be a channel. We say that \(\mathcal {C} \) is OWSC-complete for evaluating \(\mathcal {F} \) with (statistical) inverse-polynomial error if for every positive integer c there is a polynomial-time protocol \(\varPi =\langle \mathsf {S},\mathsf {R}\rangle \) that, on common input \((1^\lambda ,\hat{f})\), realizes \((N, \epsilon )\) OWSC of f over \(\mathcal {C} \), where \(\epsilon =\mathcal {O} (\frac{1}{\lambda ^c})\) and \(N=\mathrm {poly} (\lambda ,|\hat{f}|)\). We say that \(\mathcal {C} \) is complete with negligible error if there is a single \(\varPi \) as above such that \(\epsilon \) is negligible in \(\lambda \). We similarly define the computational notions of completeness by requiring the above to hold with \((N, T, \epsilon )\) instead of \((N, \epsilon )\), for an arbitrary polynomial \(T=T(\lambda )\).
As discussed above, useful instances of \(\mathcal {F} \) include circuits, branching programs, and string-ROT. We will assume statistical security against semi-honest parties by default, and will explicitly indicate when security is computational or against malicious parties.
2.3 OWSC Zero-Knowledge Proof of Knowledge
For a language L in \(\mathrm {NP}\), let \(R_L\) denote a polynomial-time computable relation such that \(x \in L\) if and only if for some w of length polynomial in the length of x, we have \(R_L(x, w) = 1\). In the classic problem of zero-knowledge proofs, given a common input \(x\in L\), a polynomial-time prover who has access to a w such that \(R_L(x, w) = 1\) wants to convince a polynomial-time verifier that \(x \in L\), without revealing any additional information about w. On the other hand, if \(x\not \in L\), even a computationally unbounded prover should not be able to make the verifier accept the proof, except with negligible probability.
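For concreteness, here is what such an NP relation looks like for subset-sum (a hypothetical example of ours; the definitions apply to any NP relation). The statement x is a list of numbers together with a target, and the witness w is a tuple of indices:

```python
def R_subset_sum(x, w) -> bool:
    """NP relation for subset-sum: accepts iff the witness w is a tuple
    of distinct valid indices whose selected numbers sum to the target.
    Runs in time polynomial in |x|, as an NP relation must."""
    nums, target = x
    return (len(set(w)) == len(w)
            and all(0 <= i < len(nums) for i in w)
            and sum(nums[i] for i in w) == target)
```

A zero-knowledge proof for this language convinces the verifier that the prover knows such a subset without revealing which indices it contains.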
While classically the prover and the verifier are allowed to interact with each other, or, in the case of Non-Interactive Zero-Knowledge (NIZK), are given a common random string generated by a trusted third party, in a ZK protocol in the OWSC model a single string is transmitted from the prover to the verifier, over a channel \(\mathcal {C}\), with no other trusted setup. We shall require information-theoretic security, with both soundness and zero-knowledge properties defined via simulation. As simulation-based soundness corresponds to a proof of knowledge (PoK), we shall refer to this primitive as \(\mathsf {OWSC} / {\mathcal {C}}\) ZKPoK (Footnote 2).
Definition 4
(OWSC Zero-Knowledge Proof of Knowledge). Given a channel \(\mathcal {C}\), a pair of PPT algorithms \(({\mathsf {P}_{ZK}}, {\mathsf {V}_{ZK}})\) is an \(\mathsf {OWSC} / {\mathcal {C}}\) zero-knowledge proof of knowledge (ZKPoK) for an \(\mathrm {NP}\) language L with an associated relation \(R_L\) if the following hold:
Completeness. There is a negligible function \(\mathrm {negl}\) such that for all \(x \in L\) and w such that \(R_L(x, w) = 1\), \(\Pr \left( {\mathsf {V}_{ZK}} (x, \mathcal {C} ({\mathsf {P}_{ZK}} (x, w))) = 1 \right) \ge 1 - \mathrm {negl} (\lambda )\)
(where the probability is over the randomness of \({\mathsf {P}_{ZK}} \) and \({\mathsf {V}_{ZK}} \) and that of the channel).
Soundness. There exists a probabilistic polynomial-time (PPT) extractor E such that, for all x and all collections of strings \(z_\lambda \) (indexed by \(\lambda \)),
Zero-Knowledge. There exists a PPT simulator S such that, for all \(x \in L\) and w such that \(R_L(x, w) = 1\),
where \(\approx \) represents computational indistinguishability.
In our construction we use the notion of an oblivious zero-knowledge PCP, which was explicitly defined in [17]. In the problem of oblivious zero-knowledge PCP, a prover with access to \(x \in L\) and w such that \(R_L(x, w) = 1\) would like to publish a proof. The verifier’s algorithm probes a constant number of random locations in the published proof and decides to accept or reject, while guaranteeing correctness and soundness. The notion of oblivious zero-knowledge requires that the PCP is zero-knowledge when each bit in the proof is erased with some fixed probability.
Definition 5
(Oblivious ZK-PCP). [17, Definition 1] \(({\mathsf {P}_{oZK}}, {\mathsf {V}_{oZK}})\) is a \((c, \nu )\)-oblivious ZK-PCP with knowledge soundness \(\kappa \) for an NP language L if, when \(\lambda \) is the security parameter, \({\mathsf {P}_{oZK}}, {\mathsf {V}_{oZK}} \) are probabilistic algorithms that run in time polynomial in \(\lambda \) and the length of the input x and satisfy the following conditions.
Completeness. \(\forall (x, w) \in R_L\), when \(\pi {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} {\mathsf {P}_{oZK}} (x, w, \lambda ) \), \(\Pr ({\mathsf {V}_{oZK}} (x,\pi ^*) = 1) = 1\) for all choices of \(\pi ^*\) obtained by erasing arbitrary locations of \(\pi \).
\(c\)-Soundness. There exists a PPT extractor E such that, for all x and purported proofs \(\pi '\), if \((x, E(x, \pi ')) \notin R_L\) then
where the probability is taken over the random choices of g, a function that replaces all but c randomly chosen locations of \(\pi '\) with \(\bot \) (and leaves the other locations untouched).
\(\nu \)-Zero-Knowledge. There exists a PPT simulator S such that, for all \(x \in L\), the following distributions are statistically indistinguishable:

Sample \(\pi {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} {\mathsf {P}_{oZK}} (\lambda , x, w) \), replace each bit in \(\pi \) with \(\bot \) with probability \(1 - \nu \), and output the resultant value.

\(S(x, \lambda )\).
As described in [17], the following result is implied by a construction in [2]:
Proposition 1
[17, Proposition 1]. For any constant \(\nu \in (0, 1)\), there exists a \((3, \nu )\)-oblivious ZK-PCP with knowledge soundness \(\kappa = 1 - \frac{1}{p(\lambda )}\), where \(p(\lambda )\) is some polynomial in \(\lambda \).
3 String-ROT from Bit-ROT with Inverse Polynomial Error
In this section, we construct string-ROT from bit-ROT with inverse polynomial error, and apply this to show that bit-ROT is complete for general sender-receiver functionalities with inverse-polynomial error. Since the intuition was discussed in Sect. 1, we proceed directly with the construction.
3.1 Average-Case Secret Sharing
An N-player average-case secret-sharing scheme, for \(\ell \)-bit secrets with reconstruction threshold r and privacy threshold t, consists of a sharing algorithm \({\mathsf {Share}} \) and a reconstruction algorithm \({\mathsf {Recst}} \), which guarantee that a random subset of t players learns nothing about the secret and that a random set of r players can reconstruct the secret with high probability. This is formalized by the next definition, where the following notation will be useful.
Notation 2
For integers \(1\le s\le N\), we use the following families of subsets of [N]: \(\mathcal {A} _s = \{A \subseteq [N]: |A| = s\}\), \(\mathcal {A} _{\ge s} = \{A \subseteq [N]: |A| \ge s\}\), and \(\mathcal {A} _{\le s} = \{A \subseteq [N]: |A| \le s\}\).
Definition 6
An \((\ell , N, t, r, \epsilon )\) average-case secret-sharing scheme (\({\mathsf {Avg\text{ }SSS}}\), for short) is a pair of randomized algorithms \(\langle {\mathsf {Share}}, {\mathsf {Recst}} \rangle \) such that,
where \(\mathcal {R} \) is the private randomness, satisfying the following properties.
Reconstruction Property: \({\mathsf {Recst}} \) must be able to reconstruct any secret from a uniformly random set of r shares produced by \({\mathsf {Share}} \), with probability at least \(1 - \epsilon \). Formally, for all \(\varvec{s} \in \{0, 1\}^{\ell }\), \(\Pr \left( {\mathsf {Recst}} \left( \left. {\mathsf {Share}} (\varvec{s} ) \right| _{A} \right) = \varvec{s} \right) \ge 1 - \epsilon \),
where the probability is over the randomness used by \({\mathsf {Share}} \) and the choice of \({A {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _r}\).
Privacy Property: t random shares of every pair of secrets are \(\epsilon \)close to each other in statistical distance. Formally, for all \(\varvec{s}, \varvec{s} ' \in \{0, 1\}^{\ell }\), and \(A {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _t \),
We will typically be interested in \((\ell , N, t, r, \epsilon )\)\({\mathsf {Avg\text{ }SSS}}\) where \(\ell , t, r, \epsilon \) are functions of N and require \({\mathsf {Share}}, {\mathsf {Recst}} \) to be probabilistic algorithms with \(\mathrm {poly} (N)\) complexity.
3.2 StringROT from BitROT and Average Case Secret Sharing
In this section, we show that an average-case secret-sharing scheme can be used to reduce stringROT to bitROT. The following theorem demonstrates such a reduction.
Theorem 4
For \(\delta \in (0, \frac{1}{2})\) and for sufficiently large N, given an \((\ell , N, t, r, \epsilon )\)-\({\mathsf {Avg\text{ }SSS}}\), with \(t = \left\lfloor {\frac{N}{2}} \right\rfloor - N^{\delta }\), \(r = \left\lceil {\frac{N}{2}} \right\rceil + N^{\delta }\) and \(\epsilon = N^{\delta - \frac{1}{2}}\), there exists a secure (even against malicious parties) \((N, 4N^{\delta - \frac{1}{2}})\) OWSC protocol for \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \) over \(\mathcal {C} _{\mathsf {ROT}}^{1} \). If the \({\mathsf {Avg\text{ }SSS}}\) scheme is efficient in N, then so is our protocol.
Proof:
Let \(\langle {\mathsf {Share}}, {\mathsf {Recst}} \rangle \) be an \((\ell , N, t, r, \epsilon )\)\({\mathsf {Avg\text{ }SSS}}\). The protocol that realizes \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \) in the \(\mathsf {OWSC} / {\mathcal {C} _{\mathsf {ROT}}^{1}} \) model proceeds as follows.
Let \((\varvec{a} _0, \varvec{a} _1) \in \{0, 1\}^{\ell } \times \{0, 1\}^{\ell }\) be the input to \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \). The sender computes \(\varvec{x} _0 = {\mathsf {Share}} (\varvec{a} _0)\) and \(\varvec{x} _1 = {\mathsf {Share}} (\varvec{a} _1)\). For \(i = 1, \ldots , N\), the sender sends \((\varvec{x} _0(i), \varvec{x} _1(i))\) in the ith invocation of the \(\mathcal {C} _{\mathsf {ROT}}^{1} \) channel.
The receiver gets \(\left. \varvec{x} _0\right| _{A}, \left. \varvec{x} _1\right| _{[N] \setminus A} \), where A is a uniformly random subset of [N]. If \(|A|\ge r\), it uniformly samples \(A_0\subseteq A\) such that \(|A_0|=r\) and outputs \(({\mathsf {Recst}} (\left. \varvec{x} _0\right| _{A_0}),\bot )\), and if \(|[N]\setminus A| \ge r\), it uniformly samples \(A_1\subseteq [N]\setminus A\) such that \(|A_1|=r\) and outputs \((\bot ,{\mathsf {Recst}} (\left. \varvec{x} _1\right| _{A_1}))\). If \(|A|\in (t,r)\), \(\mathsf {R}\) samples \(\varvec{a} _0,\varvec{a} _1 {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0,1\}^{\ell } \) and \(i {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0,1\} \) and outputs \((\varvec{a} _0,\bot )\) if \(i=0\) and \((\bot ,\varvec{a} _1)\) if \(i=1\).
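To make the message flow concrete, the following Python sketch simulates the reduction end to end. The `share`/`recst` stand-ins are hypothetical placeholders (every share is a copy of the secret, so any r shares reconstruct, but there is no privacy); the parameters N, t, r follow Theorem 4 with delta = 1/4.

```python
import random

# Placeholder Share/Recst: every share is a copy of the secret, so any r
# shares reconstruct, but there is NO privacy.  This only exercises the
# protocol's message flow and the receiver's case analysis.
def share(secret, N):
    return [secret] * N

def recst(shares):
    return shares[0]

def run_protocol(a0, a1, N, t, r, ell, rng):
    x0, x1 = share(a0, N), share(a1, N)
    # N invocations of bitROT: for each pair (x0[i], x1[i]) the channel
    # delivers one component uniformly at random and erases the other.
    A = {i for i in range(N) if rng.random() < 0.5}
    comp = set(range(N)) - A
    if len(A) >= r:
        A0 = rng.sample(sorted(A), r)
        return (recst([x0[i] for i in A0]), None)
    if len(comp) >= r:
        A1 = rng.sample(sorted(comp), r)
        return (None, recst([x1[i] for i in A1]))
    # |A| in (t, r): output a fresh random value at a random index.
    fallback = rng.getrandbits(ell)
    return (fallback, None) if rng.random() < 0.5 else (None, fallback)

rng = random.Random(0)
N = 2000
gap = int(N ** 0.25)                      # delta = 1/4
t, r = N // 2 - gap, N // 2 + gap
correct = sum(run_protocol(7, 9, N, t, r, 8, rng) in [(7, None), (None, 9)]
              for _ in range(200))
```

With these parameters the fallback case \(|A| \in (t, r)\) occurs in roughly a fifth of the runs, matching the inverse-polynomial (here, \(N^{-1/4}\)-type) error regime of the theorem.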
Complexity. The complexity of this reduction is N. If \({\mathsf {Avg\text{ }SSS}}\) is efficient, the protocol is efficient as well.
Security. We first show that the receiver’s output is consistent with probability at least \(1 - 3N^{\delta - \frac{1}{2}}\). That is, if the input to the sender is \((\varvec{a} _0,\varvec{a} _1)\), with probability \(1-3N^{\delta -\frac{1}{2}}\), the receiver outputs either \((\bot ,\varvec{a} _1)\) or \((\varvec{a} _0,\bot )\). To show this, we bound the probability of the event \(|A| \in (t, r)\) using an anti-concentration bound on Bernoulli sums and then argue that conditioned on \(|A| \notin (t, r)\), the receiver’s output is consistent with probability \(\ge 1 - \epsilon \).
Claim 1
Let \(X_i\) be i.i.d. \(\text {Bernoulli}(\frac{1}{2})\) random variables for \(i \in [N]\). Then, for all \(\delta \in (0, 1/2)\),
Proof:
This follows from the fact that,
\(\square \)
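As a numerical sanity check (not part of the proof), the claimed anti-concentration bound can be verified by exact binomial computation:

```python
import math

def prob_in_gap(N, delta):
    """Exact Pr[ Binomial(N, 1/2) lies strictly between t and r ]."""
    gap = int(N ** delta)
    t, r = N // 2 - gap, N // 2 + gap
    # Each point mass C(N, j) / 2^N near the center is O(1 / sqrt(N)), and
    # the open interval (t, r) contains fewer than 2 * N^delta integers.
    return sum(math.comb(N, j) for j in range(t + 1, r)) / (1 << N)

N, delta = 4000, 0.25
p = prob_in_gap(N, delta)          # exact probability of the bad event
bound = 2 * N ** (delta - 0.5)     # the claimed bound 2 * N^(delta - 1/2)
```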
Denote the event \(|A|\notin (t,r)\) by E. Since \(r-t=2N^{\delta }\), \(\Pr (E)\ge 1-2N^{\delta -\frac{1}{2}}\) by the above claim. Conditioned on \(|A|\ge r\), A is uniformly distributed in \(\mathcal {A} _{\ge r}\). Hence, \(A_0\) is uniformly distributed in \(\mathcal {A} _r\). The receiver is correct if \({\mathsf {Recst}} (\left. {\mathsf {Share}} (\varvec{a} _0)\right| _{A_0}) = \varvec{a} _0\). By the reconstruction property of \(\langle {\mathsf {Share}},{\mathsf {Recst}} \rangle \), for all \(\varvec{a} _0\in \{0,1\}^{\ell }\), we have
where the probability is over the randomness used by \({\mathsf {Share}} \) and \({A_0 {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _r}\). A similar bound applies for \({\mathsf {Recst}} (\left. {\mathsf {Share}} (\varvec{a} _1)\right| _{A_1})\) conditioned on the event \(|A|\le t\). From these observations, the probability that the receiver outputs \((\varvec{a} _0, \bot )\) or \((\bot , \varvec{a} _1)\) when the sender’s input is \((\varvec{a} _0, \varvec{a} _1)\) can be lower bounded as,
Furthermore, when \(|A| \notin (t, r)\), the events \(|A| \ge r\) and \(N - |A| \ge r\) are equiprobable. That is, the index on which the receiver outputs \(\bot \) is decided entirely by the randomness in the channel. Hence, for all \(\varvec{a} _0,\varvec{a} _1\in \{0,1\}^{\ell }\),
We now analyze security against the receiver. We claim that conditioned on the event \(|A| \le t\), for any \(\varvec{a} _0, \varvec{a} '_0, \varvec{a} _1 \in \{0, 1\}^{\ell }\), the view of the receiver when the input to the sender is \((\varvec{a} _0, \varvec{a} _1)\) is sufficiently close to its view when the sender’s input is \((\varvec{a} '_0, \varvec{a} _1)\). Note that conditioned on \(|A| \le t\), A is a uniformly random set of size at most t. Our claim is that for all \(\varvec{a} _0, \varvec{a} '_0 \in \{0, 1\}^{\ell }\) and \(A {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _{\le t} \),
To show this, note that the output distributions of the following two experiments are the same for every \(\varvec{a} \in \{0, 1\}^{\ell }\):

(1)
Choose \(0 \le k \le t\) with probability \(\Pr _{S {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _{\le t}}(|S|=k)\). When \(A {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _{t} \), let B be a uniformly random subset of A of size k. Output \(\left. {\mathsf {Share}} (\varvec{a})\right| _{B} \).

(2)
\(A {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _{\le t} \), output \(\left. {\mathsf {Share}} (\varvec{a})\right| _{A} \). Hence, the distribution \(\left. {\mathsf {Share}} (\varvec{a} _0)\right| _{A} \) where \(A {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _{\le t} \) can be generated by post-processing the distribution \(\left. {\mathsf {Share}} (\varvec{a} _0)\right| _{A} \) where \(A {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _{t} \). The claim now follows from the privacy guarantee of \({\mathsf {Avg\text{ }SSS}} \) and the fact that statistical distance only decreases under post-processing.
On input \((\bot , \varvec{a} _1)\) the simulator \(\mathsf {Sim}_R\) proceeds as follows: Sample \(\varvec{a} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\}^{\ell } \) and run the algorithm of the sender with input \((\varvec{a},\varvec{a} _1)\), to generate \((\varvec{x} _0, \varvec{x} _1)\). Sample \(A {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _{\le t} \) and output \((\left. \varvec{x} _0\right| _{A}, \left. \varvec{x} _1\right| _{[N] \setminus A})\). The case for \((\varvec{a} _0, \bot )\) is symmetric.
That \(\mathsf {Sim}_R\) satisfies sender’s privacy follows from the following observations: (a) The event \(|A| \notin (t, r)\) happens with probability at least \(1 - 2N^{\delta - \frac{1}{2}}\). (b) \(\varvec{a} _0\) (resp. \(\varvec{a} _1\)) is decoded correctly with probability \(1 - N^{\delta - \frac{1}{2}}\) when \(|A| \ge r\) (resp. \(|A| \le t\)). Furthermore, conditioned on both these events, the receiver’s views for input \((\varvec{a} _0, \varvec{a} _1)\) and for input \((\varvec{a} '_0, \varvec{a} _1)\) are at most \(N^{\delta - \frac{1}{2}}\) apart in statistical distance, for all \(\varvec{a} _0, \varvec{a} '_0 \in \{0, 1\}^{\ell }\). Hence,
UC-Security Against Malicious Adversaries. For any \(\varvec{x} \in \{0,1\}^N \times \{0,1\}^N\), simulator \(\mathsf {Sim}_{\mathsf {S}}\) works as follows. Sample \(A_{\ge r} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _{\ge r} \) and \(A_{\le t} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {A} _{\le t} \) (this can be done efficiently by rejection sampling). Let \((\varvec{b} _0, \bot ) = \mathsf {R}(\left. \varvec{x} \right| _{A_{\ge r}})\) and \((\bot , \varvec{b} _1) = \mathsf {R}(\left. \varvec{x} \right| _{A_{\le t}})\). Sample a uniformly random subset \(A \subseteq [N]\); if \(|A| \in (t, r)\), output \((\varvec{s} _0, \varvec{s} _1)\), where \(\varvec{s} _0, \varvec{s} _1 {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\}^{\ell } \), else output \((\varvec{b} _0, \varvec{b} _1)\).
We claim that the distribution \(\mathcal {C} _{\mathsf {ROT}}^{1} (\mathsf {Sim}_{\mathsf {S}}(\varvec{x}))\) is identical to the output distribution of the receiver when a malicious sender sends \(\varvec{x} \). In the event that \(|A| \in (t, r)\), the output of the receiver is distributed as if the input to the stringROT were a pair of random strings. In the events \(A \in \mathcal {A} _{\le t}\) and \(A \in \mathcal {A} _{\ge r}\), \(\mathsf {R}\) outputs according to a random erasure from \(\mathcal {A} _{\le t}\) and \(\mathcal {A} _{\ge r}\) respectively. This is indeed the distribution generated by the simulator, which proves the theorem. \(\square \)
Remark 1
The OWSC protocol is said to be Las Vegas if it either aborts by returning \(\bot \) or, conditioned on not aborting, is correct, i.e., outputs \((\varvec{a} _0, \bot )\) or \((\bot , \varvec{a} _1)\) with equal probability. Suppose the \({\mathsf {Avg\text{ }SSS}} \) is Las Vegas in the following sense: for every \(A \in \mathcal {A} _r\), \({\mathsf {Recst}} \) either reconstructs the secret correctly or aborts by returning \(\bot \). Then tweaking the above OWSC protocol to output \(\bot \) whenever \(|A| \in (t, r)\) and to return whatever \({\mathsf {Recst}} \) outputs when \(|A| \ge r\) (or \(|[N] \setminus A| \ge r\)) makes the OWSC protocol Las Vegas as well. This guarantees that in Theorem 4, if the \({\mathsf {Avg\text{ }SSS}} \) is Las Vegas, then the OWSC protocol is also Las Vegas. In the next section, we will construct an \({\mathsf {Avg\text{ }SSS}} \) scheme which is Las Vegas.
3.3 Construction of Average Case Secret Sharing
In this section, we construct an average-case secret-sharing scheme. Our construction is similar to the construction of constant-rate secret sharing schemes in [22]. The only difference is that the reconstruction and privacy properties are with respect to random corruptions; hence we are able to use randomized erasure correcting codes with better error parameters. Before we describe the construction, we provide the following definitions.
Definition 7
A function \({\mathsf {Ext}}: \{0, 1\}^d \times \{0, 1\}^n \rightarrow \{0, 1\}^{\ell }\) is a \((k, \epsilon )\) strong seeded extractor if for every random variable X, with alphabet \(\{0, 1\}^n\) and minentropy k, when \(\varvec{z} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\}^d \) and \(\varvec{r} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\}^{\ell } \),
A randomized map \({\mathsf {Ext}} ^{-1}\) is an inverter map of \({\mathsf {Ext}} \) if it maps \(\varvec{z} \in \{0, 1\}^d, \varvec{s} \in \{0, 1\}^{\ell }\) to a sample from the uniform distribution over \(\{0, 1\}^n\) conditioned on \({\mathsf {Ext}} (\varvec{z}, U_n) = \varvec{s} \).
The following lemma describes an improvement of Trevisan’s extractor [27] due to Raz \(\textit{et al}. \) [26]. The statement itself is from [22].
Lemma 1
[22, Lemma 4]. There is an explicit linear \((k, \epsilon )\) strong seeded extractor \({\mathsf {Ext}}: \{0, 1\}^d \times \{0, 1\}^n \rightarrow \{0, 1\}^{\ell }\) with \(d = \mathcal {O} (\log ^3{(n/\epsilon )})\) and \(\ell = k - \mathcal {O} (d)\).
The other component in our construction is an erasure correcting code. Since \({\mathsf {Avg\text{ }SSS}}\) allows for shared randomness between the sharing algorithm \({\mathsf {Share}} \) and the reconstruction algorithm \({\mathsf {Recst}} \), we can use randomized erasure correcting codes.
Definition 8
An \((n, k, r, \epsilon )\)-linear erasure correcting scheme \(({\mathsf {Enc}}, {\mathsf {Dec}})\) consists of a linear encoder \({\mathsf {Enc}}:\{0 , 1\}^{k} \rightarrow \{0, 1\}^n\) and a decoder \({\mathsf {Dec}}:\{0 , 1\}^{n} \rightarrow \{0, 1\}^k\) such that, for all \(\varvec{x} \in \{0, 1\}^k\),
Lemma 2
For all \(k\le r\le n\), there exist efficient \((n, k, r, \epsilon )\)-linear erasure correcting schemes with \(\epsilon = 2^{k - r}\).
A proof of the lemma is provided in the full version [1], where we also argue that the erasure correcting code we construct is Las Vegas, i.e., the decoder either aborts or correctly decodes the message. It can be verified that the \({\mathsf {Avg\text{ }SSS}} \) scheme we construct is Las Vegas whenever the erasure correcting scheme is Las Vegas.
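For intuition, here is a Python sketch of one natural instantiation in the spirit of Lemma 2: a uniformly random k x n generator matrix over GF(2) as the shared randomness, with a Las Vegas decoder that solves a linear system over the surviving positions and aborts when the system is rank deficient. The bit-packing of rows into integers is an implementation choice of this sketch, not the construction from [1].

```python
import random

def keygen(k, n, rng):
    """Shared randomness: a uniformly random k x n generator matrix over
    GF(2); row i is packed into the bits of an integer."""
    return [rng.getrandbits(n) for _ in range(k)]

def enc(G, m):
    """Codeword = XOR of the generator rows selected by the message bits."""
    c = 0
    for i, bit in enumerate(m):
        if bit:
            c ^= G[i]
    return c

def dec(G, k, received):
    """received maps surviving positions to bits.  Solves for the message by
    Gaussian elimination over GF(2); returns None (Las Vegas abort) when the
    surviving columns of G have rank < k."""
    pivot = {}                                  # pivot column -> (mask, rhs)
    for pos, bit in received.items():
        c = sum(((G[i] >> pos) & 1) << i for i in range(k))
        b = bit
        for col, (pc, pb) in pivot.items():     # reduce against known pivots
            if (c >> col) & 1:
                c, b = c ^ pc, b ^ pb
        if c == 0:
            continue
        col = (c & -c).bit_length() - 1
        for pcol in list(pivot):                # keep earlier rows reduced
            pc, pb = pivot[pcol]
            if (pc >> col) & 1:
                pivot[pcol] = (pc ^ c, pb ^ b)
        pivot[col] = (c, b)
    if len(pivot) < k:
        return None                             # rank deficient: abort
    return [pivot[i][1] for i in range(k)]

rng = random.Random(1)
k, n, r = 20, 64, 44                            # failure prob <= 2^(k - r)
G = keygen(k, n, rng)
m = [rng.getrandbits(1) for _ in range(k)]
c = enc(G, m)
surviving = rng.sample(range(n), r)             # r positions survive erasure
received = {pos: (c >> pos) & 1 for pos in surviving}
decoded = dec(G, k, received)
```

The decoder never outputs a wrong message: when the surviving columns have full rank the solution is unique, and otherwise it aborts, which is exactly the Las Vegas property used above.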
Theorem 5
For parameters \(t< n< n+d< r < N\) and \(\ell , \epsilon \), let \({\mathsf {Ext}}: \{0, 1\}^d \times \{0, 1\}^n \rightarrow \{0, 1\}^{\ell }\) be a linear \((n - t, \epsilon )\) strong seeded extractor with inverter map \({\mathsf {Ext}} ^{-1}\). Let \(({\mathsf {Enc}}, {\mathsf {Dec}})\) be an \((N, n + d, r, \epsilon )\)-randomized linear erasure correcting code. Then, \(\langle {\mathsf {Share}}, {\mathsf {Recst}} \rangle \), described below, is an \((\ell , N, t, r, 8\epsilon )\)-\({\mathsf {Avg\text{ }SSS}}\):
where \(\varvec{s} \in \{0, 1\}^{\ell }\) and \(A \subset [N]\), and \((\cdot \Vert \cdot )\) is the concatenation operator.
Proof:
We show that the scheme satisfies the reconstruction and privacy properties.
Reconstruction. By the performance guarantee of the erasure correcting code, for any \(\varvec{v} \in \{0, 1\}^{n + d}\),
Hence, \({\mathsf {Recst}} (\left. \varvec{v} \right| _{A}) = \varvec{s} \), for a random A, with probability \(1 - \epsilon \).
Privacy. We use the following result from [22]:
Lemma 3
[22, Lemma 13]. Let \({\mathsf {Ext}}: \{0, 1\}^d \times \{0, 1\}^n \rightarrow \{0, 1\}^{\ell }\) be a linear \((k, \epsilon )\) strong extractor. Let \(f_A:\{0, 1\}^{n + d} \rightarrow \{0, 1\}^t\) be an affine function with \(t \le n - k\). For any \(\varvec{s}, \varvec{s} ' \in \{0, 1\}^{\ell }\), when \((Z, X) = (U_d, U_n) \mid ({\mathsf {Ext}} (U_d, U_n) = \varvec{s})\) and \((Z', X') = (U_d, U_n) \mid ({\mathsf {Ext}} (U_d, U_n) = \varvec{s} ')\), we have
\({\mathsf {Enc}} \) is a linear function and for any \(A \subseteq [N]\) the restriction operator \(\left. (\cdot )\right| _{A} \) is a projection. Hence, for any \(\varvec{s} \in \{0, 1\}^{\ell }\) and \(A \subseteq [N]\) such that \(|A| = t\), \(\left. {\mathsf {Share}} (\varvec{s})\right| _{A} \) is an affine map with range \(\{0, 1\}^{t}\) applied to \((U_d, U_n) \mid ({\mathsf {Ext}} (U_d, U_n) = \varvec{s})\). The \({\mathsf {Ext}} \) used in the theorem is an \((n - t, \epsilon )\) extractor, hence the privacy follows directly from the above lemma. \(\square \)
For any N and \(\delta \in (0, 1/2)\), Lemma 1 guarantees an explicit linear \((N^\delta , \frac{1}{8N})\) strong seeded extractor \({\mathsf {Ext}}: \{0, 1\}^d \times \{0, 1\}^{\frac{N}{2}} \rightarrow \{0, 1\}^{\ell }\) with \(d = \mathcal {O} (\log ^3{N})\) and \(\ell = N^\delta - \mathcal {O} (\log ^3{N})\). Furthermore, Lemma 2 guarantees an \((N, k, r, \epsilon )\)-linear erasure correcting code for \(k = \frac{N}{2} + d\), \(r = \frac{N}{2} + N^{\delta }\) and \(\epsilon = \frac{1}{8N}\) (in fact, the lemma gives much better maximum error probability guarantees, but we will not need this). Note that both \({\mathsf {Ext}} ^{-1}\) and \(({\mathsf {Enc}}, {\mathsf {Dec}})\) are efficient. Using this extractor and the erasure correcting scheme in Theorem 5, we obtain the following corollary.
Corollary 1
For large enough N and \(\delta \in (0, \frac{1}{2})\), when \(\ell = \frac{N^\delta }{2}, t = \frac{N}{2} - N^{\delta }, r = \frac{N}{2} + N^{\delta }\) and \(\epsilon = \frac{1}{N}\), there exists an efficient \((\ell , N, t, r, \epsilon )\)-\({\mathsf {Avg\text{ }SSS}}\).
Given such an \({\mathsf {Avg\text{ }SSS}}\), we appeal to Theorem 4 to obtain the following theorem.
Theorem 6
For \(\delta \in (0, \frac{1}{2})\), there exists an efficient protocol that realizes \((N, \epsilon )\) secure OWSC for \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \) over \(\mathcal {C} _{\mathsf {ROT}}^{1} \), with \(\epsilon = \mathcal {O} (N^{\delta - \frac{1}{2}})\), and \(\ell = \frac{N^\delta }{2}\). In particular, bitROT is complete for stringROT with inverse-polynomial error.
3.4 General Completeness of BitROT with Inverse Polynomial Error
In the previous section, we showed that bitROT is complete for stringROT with inverse-polynomial error. Garg \(\textit{et al}. \) [17] (Theorem 11) showed that stringROT is complete for arbitrary finite functionalities even for the case of malicious parties, where the (statistical) error is negligible in the ROT string length \(\ell \). Combined with our reduction from stringROT to bitROT, this gives a similar completeness result for bitROT with inverse-polynomial error. Below we extend this to functions represented by branching programs and circuits, where in the latter case we need to settle for computational security using any (black-box) pseudorandom generator. Thus, assuming the existence of a one-way function, bitROT is complete with inverse-polynomial computational error for any polynomial-time computable functionality.
Theorem 7
(BitROT is complete with inverse-polynomial error). The bitROT channel \(\mathcal {C} _{\mathsf {ROT}}^{1} \) is OWSC-complete, with inverse-polynomial error, for evaluating circuits with computational security against malicious parties, assuming a (black-box) pseudorandom generator. Moreover, replacing circuits by branching programs, the same holds unconditionally with inverse-polynomial statistical error.
Proof:
We start by addressing the simpler case of semi-honest parties. In this case, the computational variant follows by combining the reduction from stringROT to bitROT with Yao’s garbled circuit construction [31] in the following way. Given a randomized sender-receiver functionality f(a; r), define a deterministic (two-way) functionality \(f'\) that takes \((a,r_1)\) from the sender and \(r_2\) from the receiver, and outputs \(f(a;r_1\oplus r_2)\) to the receiver. Using Yao’s protocol to securely evaluate \(f'\) with uniformly random choices of \(r_1,r_2\), we get a computationally secure reduction of f to (chosen-input) stringOT where the receiver’s inputs are random. Replacing the random choices of the receiver by the use of a stringROT channel, we get a computational OWSC protocol for f over stringROT using any (black-box) PRG. Finally, applying the reduction from stringROT to bitROT with a suitable choice of parameters, we get the inverse-polynomial completeness result for circuits with semi-honest parties. A similar result for branching programs with statistical (and unconditional) security can be obtained using information-theoretic analogues of garbled circuits [16, 18, 20].
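The randomness-splitting step above is easy to illustrate in code. The functionality f below is a toy stand-in (a 4-bit XOR), not the paper's compiler; the point is only that XOR-sharing the randomness preserves the output distribution of f.

```python
# Toy randomized functionality f(a; r): XOR over 4-bit values.  f and the
# bit width are illustrative assumptions, not the paper's actual compiler.
def f(a, r):
    return a ^ r

def f_prime(a, r1, r2):
    """Deterministic two-input functionality: the sender holds (a, r1), the
    receiver holds r2, and the output is f(a; r1 XOR r2)."""
    return f(a, r1 ^ r2)

# If either party's share is uniform, r1 ^ r2 is uniform, so f' with random
# shares induces exactly the output distribution of f with fresh randomness.
dist_f = sorted(f(5, r) for r in range(16))            # fresh randomness
dist_fp = sorted(f_prime(5, r1, 3) for r1 in range(16))  # fixed r2 = 3
```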
To obtain similar protocols for malicious parties, we appeal to a result of [19], which obtains an analogue of Yao’s protocol with security against malicious parties by only making a black-box use of a pseudorandom generator along with parallel calls to a stringOT oracle.^{Footnote 3} (This result too has an unconditional version for the case of branching programs.) Unlike Yao’s protocol, the protocol from [19] encodes the receiver’s input before feeding it into the parallel OTs. However, this encoding has the property that a random receiver input is mapped to random OT choice bits. Thus, the same reduction as before applies. \(\square \)
The unconditional part of Theorem 7 implies polynomial-time statistically-secure protocols (with inverse-polynomial error) for the complexity classes NC\(^1\) and Logspace. This is a vast generalization of the positive result for \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \). In the result for general circuits, the use of a pseudorandom generator is inherent given the current state of the art on constant-round secure computation.
4 Impossibility of StringROT from BitROT with Negligible Error
In this section we show that stringROT with negligible error is impossible to achieve from bitROT. Moreover, this holds even against a computationally bounded semi-honest adversary.
Theorem 8
For sufficiently large N and \(\ell \ge 2\log {N}\), an \((N, \frac{1}{N^2})\) OWSC protocol for \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \) over \(\mathcal {C} _{\mathsf {ROT}}^{1} \) is impossible even against semi-honest parties. In fact, the same holds even if one settles for OWSC with computational security. That is, there exists a polynomial \(T=T(N)\) such that there is no computational \((N,T, \frac{1}{N^2})\) OWSC protocol for \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \) over \(\mathcal {C} _{\mathsf {ROT}}^{1} \).
Proof:
\(\mathcal {C} _{\mathsf {ROT}}^{1} \) may be equivalently described as a randomized function \(f_{\mathcal {C} _{\mathsf {ROT}}^{1}}\) mapping the input of the channel and the internal randomness of the channel to the output of the channel. Formally, for \((x_0, x_1) \in \{0, 1\} \times \{0, 1\}\), and \(s \in \{0, 1\}\),
Observe that for all \((x_0, x_1) \in \{0, 1\} \times \{0, 1\}\), the following distributions are identical: (1) \(\mathcal {C} _{\mathsf {ROT}}^{1} (x_0, x_1)\) and (2) Sample \(s {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\} \) and output \(f_{\mathcal {C} _{\mathsf {ROT}}^{1}}((x_0, x_1), s)\). Similarly, N invocations of \(\mathcal {C} _{\mathsf {ROT}}^{1} \) are equivalent to the randomized function \(f^{N}_{\mathcal {C} _{\mathsf {ROT}}^{1}} \) which on input \((\varvec{x} _0, \varvec{x} _1) \in \{0, 1\}^N \times \{0, 1\}^N\), samples \(\varvec{s} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\}^N \) and outputs \((\varvec{y} _0, \varvec{y} _1)\), where \((\varvec{y} _0(i), \varvec{y} _1(i)) = f_{\mathcal {C} _{\mathsf {ROT}}^{1}}((\varvec{x} _0(i), \varvec{x} _1(i)), \varvec{s} (i))\).
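In code, this randomized-function view of bitROT can be sketched as follows; the convention that state 0 delivers the first component (and the use of `None` for the erasure symbol) is an assumption of this sketch.

```python
ERASED = None   # stand-in for the erasure symbol

def f_rot(x0, x1, s):
    """bitROT as a deterministic function of the input pair and the channel
    state bit s: state 0 delivers x0, state 1 delivers x1 (our convention)."""
    return (x0, ERASED) if s == 0 else (ERASED, x1)

def f_rot_N(x0s, x1s, s):
    """N parallel invocations with independent state bits s[0..N-1]."""
    return [f_rot(a, b, si) for a, b, si in zip(x0s, x1s, s)]

# Enumerating the uniform state shows the channel law: for every fixed input
# pair, each component is delivered with probability exactly 1/2.
outputs = [f_rot(0, 1, s) for s in (0, 1)]
```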
Suppose \(\langle \mathsf {S}, \mathsf {R}\rangle \) is an \((N, \frac{1}{N^2})\) OWSC protocol for \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \) over \(\mathcal {C} _{\mathsf {ROT}}^{1} \) channel. The joint distribution generated by this protocol for an input (pair of strings) \((\varvec{a} _0, \varvec{a} _1) \in \{0, 1\}^{\ell } \times \{0, 1\}^{\ell }\) is described in Fig. 1. The receiver’s algorithm \(\mathsf {R}\) can be assumed to be deterministic w.l.o.g. since we may fix the randomness in the decoder incurring only a constant hit to the \(\epsilon =\frac{1}{N^2}\) parameter. This is because, for most values of \((\varvec{y} _0, \varvec{y} _1)\), \(\mathsf {R}\) should decode one of the indices with low probability of error and should be almost entirely unsure of the other index. Refer to the full version [1] for a formal proof.
In the sequel, for brevity, we represent the tuples \((\varvec{a} _0, \varvec{a} _1), (\varvec{x} _0, \varvec{x} _1), (\varvec{y} _0, \varvec{y} _1)\) and \((\varvec{b} _0, \varvec{b} _1)\) by \(\varvec{a}, \varvec{x}, \varvec{y} \) and \(\varvec{b} \), respectively, whenever this does not cause confusion. For \((\varvec{a} _0, \varvec{a} _1) \in \{0, 1\}^{\ell } \times \{0, 1\}^{\ell }\), consider the joint distribution \(\langle \mathsf {S}, \mathsf {R}\rangle (\varvec{a} _0, \varvec{a} _1)\) described in Fig. 1. We now make some claims about this distribution.
Lemma 4
There exists a set \(X \subseteq \{0, 1\}^{N} \times \{0, 1\}^{N}\) such that \(\Pr (\varvec{x} \in X) \ge 1 - \frac{2}{N}\) and for all \(\varvec{x} \in X\),
The lemma is a consequence of computational \(\frac{1}{N^2}\)-security against the sender. Intuitively, if \(\Pr (\varvec{x} \in X) < 1 - \frac{2}{N}\), the sender can guess the index of the message output by the receiver with substantial probability. Refer to the full version [1] for a formal proof.
We now design a machine \(\mathsf {M} \) that guesses both \(\varvec{a} _0\) and \(\varvec{a} _1\) from \((\varvec{y} _0, \varvec{y} _1)\) with substantial probability, contradicting sender’s privacy. On receiving \(\varvec{y} \), machine \(\mathsf {M} \) uses the receiver’s strategy \(\mathsf {R}(\varvec{y})\) to decode one of the messages, say \(\varvec{a} _i\), where i is either 1 or 0. It then computes \(\varvec{a} _{1-i}\) by ‘guessing’ a random neighbor of \(\varvec{y} \), say \(\varvec{\hat{y}} \), and computing \(\mathsf {R}(\varvec{\hat{y}})\). We will show that with substantial probability, \(\mathsf {R}(\varvec{\hat{y}})\) yields \(\varvec{a} _{1-i}\), breaking the sender’s privacy property. \(\mathsf {M} \) is formally described in Fig. 2.
Analysis of \(\mathsf {M} \): We show that \(\mathsf {M} \) outputs \((\varvec{a} _0, \varvec{a} _1)\) with substantial probability. We analyze the output of the machine \(\mathsf {M} \) for a fixed \(\varvec{x} \in X\), where X is as guaranteed by Lemma 4. Define the function \(f_{\varvec{x}} : \{0, 1\}^N \rightarrow \{0, 1\}\) such that when \(\varvec{y} = f^{N}_{\mathcal {C} _{\mathsf {ROT}}^{1}} (\varvec{x}, \varvec{s})\), \(f_{\varvec{x}}(\varvec{s}) = 1\) if \(\mathsf {R}(\varvec{y}) = (\varvec{b} _0, \varvec{b} _1)\) with \(\varvec{b} _0 = \bot \), and 0 otherwise. We next observe a property of \(f_{\varvec{x}}\) which is a consequence of an isoperimetric inequality on Boolean hypercubes (Harper’s Lemma). For binary strings \(\varvec{u}, \varvec{v} \in \{0, 1\}^n\), we denote the Hamming distance between them by \(|\varvec{u} - \varvec{v} |\).
Lemma 5
For any function \(f:\{0, 1\}^n \rightarrow \{0, 1\}\), if \(\underset{{\varvec{v} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\}^n}}{\Pr }(f(\varvec{v}) = i) \ge \frac{1}{2}(1 - \frac{1}{\sqrt{n}})\) for each \(i \in \{0, 1\}\), then \(\underset{\varvec{v} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\}^n}{\Pr }(\exists \varvec{\tilde{v}} : |\varvec{v} - \varvec{\tilde{v}} | = 1 \text { and } f(\varvec{\tilde{v}})=1 - f(\varvec{v}) ) \ge \varOmega (\frac{1}{\sqrt{n}})\).
In words, the lemma says that if f is a 2-coloring of the Boolean hypercube, where the colors are (almost) balanced, then a significant fraction of the nodes of the hypercube have a neighbor of a different color.
By Harper’s Lemma, Hamming balls have the smallest vertex boundary amongst all sets of the same probability. W.l.o.g., the probability of \(f(\varvec{v}) = 1\) is at most \(\frac{1}{2}\) and at least \(\frac{1}{2}(1 - \frac{1}{\sqrt{n}})\), and \(\Pr _{\varvec{v} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\}^n}(|\varvec{v} - \varvec{0} | = \left\lfloor \frac{n}{2} \right\rfloor ) \ge \frac{1}{2\sqrt{n}}\), where \(\varvec{0} \) is the all-zero string. Hence the Hamming ball centered at \(\varvec{0} \) with probability at most \(\frac{1}{2}\) and at least \(\frac{1}{2}(1 - \frac{1}{\sqrt{n}})\) has strings with \(\left\lfloor \frac{n}{2}\right\rfloor \) or \(\left\lfloor \frac{n}{2}\right\rfloor - 1\) ones in its boundary. Consequently, the fraction of vertices in this boundary is \(\varOmega (\frac{1}{\sqrt{n}})\).
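The near-extremal case of the lemma, a majority-style coloring whose color classes are Hamming balls, can be checked exhaustively on a small hypercube. This is an illustration only; the threshold 1/(2*sqrt(n)) below is an arbitrary concrete stand-in for the lemma's hidden constant.

```python
import math

def boundary_fraction(n, f):
    """Fraction of {0,1}^n vertices having a neighbor of the opposite color."""
    count = 0
    for v in range(1 << n):
        fv = f(v)
        # v is a boundary vertex if flipping some single bit changes its color
        if any(f(v ^ (1 << i)) != fv for i in range(n)):
            count += 1
    return count / (1 << n)

n = 12
majority = lambda v: 1 if bin(v).count("1") >= n // 2 else 0
frac = boundary_fraction(n, majority)
```

For the majority coloring only the two middle Hamming weight levels (here weights 5 and 6, i.e. \((\binom{12}{5}+\binom{12}{6})/2^{12}\) of the cube) lie on the boundary, which is exactly the \(\varTheta (1/\sqrt{n})\)-fraction behavior the lemma captures.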
For any \(\varvec{x} \in \{0, 1\}^N \times \{0, 1\}^N\), the input to \(\mathsf {M} \) is \(\varvec{y} = f^{N}_{\mathcal {C} _{\mathsf {ROT}}^{1}} (\varvec{x}, \varvec{s})\), where \(\varvec{s} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\}^N \). The process of generating \(\varvec{\hat{y}} \) in \(\mathsf {M} (\varvec{y})\) is equivalent to the following process. Compute \((\varvec{\hat{x}} _0, \varvec{\hat{x}} _1)\) and \(\varvec{\hat{s}} \) as follows: Sample \(j {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} [N]\), set \(\varvec{\hat{s}} (j) = 1 - \varvec{s} (j)\) and \((\varvec{\hat{x}} _0(j), \varvec{\hat{x}} _1(j)) {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0, 1\} \times \{0, 1\} \). For all \(k \ne j\), set \(\varvec{\hat{s}} (k) = \varvec{s} (k)\) and \((\varvec{\hat{x}} _0(k), \varvec{\hat{x}} _1(k)) = (\varvec{x} _0(k), \varvec{x} _1(k))\). Compute \(\varvec{\hat{y}} = f^{N}_{\mathcal {C} _{\mathsf {ROT}}^{1}} (\varvec{\hat{x}}, \varvec{\hat{s}})\). We make the following observations about the above process.

(i.)
\(\varvec{\hat{s}} \) is uniformly distributed over \(\{0, 1\}^N\) and \(|\varvec{s} - \varvec{\hat{s}} | = 1\).

(ii.)
\(\varvec{\hat{y}} = f^{N}_{\mathcal {C} _{\mathsf {ROT}}^{1}} (\varvec{x}, \varvec{\hat{s}})\) with probability \(\frac{1}{2}\).

(iii.)
For any \(\varvec{x} \in X\), \(\Pr (f_{\varvec{x}}(\varvec{s}) = 1 - f_{\varvec{x}}(\varvec{\hat{s}})) \ge \varOmega (\frac{1}{N\sqrt{N}})\).
(i) follows from \(\varvec{s} \) being uniform in \(\{0, 1\}^N\) and \(\varvec{\hat{s}} \) being obtained by flipping the value of a random coordinate of \(\varvec{s} \). (ii) can be verified easily from the process description. When \(\varvec{x} \in X\) and \(\varvec{s} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \{0,1\}^N \), \(\Pr (f_{\varvec{x}}(\varvec{s})=i)\ge \frac{1}{2}(1-\frac{1}{\sqrt{N}})\) for \(i\in \{0,1\}\), by Lemma 4. Hence, by Harper’s Lemma,
Conditioned on the event that such a \(\varvec{\tilde{s}} \) exists, \(\varvec{\hat{s}} = \varvec{\tilde{s}} \) with probability at least \(\frac{1}{N}\). This proves (iii).
\((\varvec{b} _0, \varvec{b} _1)\) is said to be correct if it is either \((\varvec{a} _0, \bot )\) or \((\bot , \varvec{a} _1)\). Let \(E_1\) be the event ‘\(\varvec{b} = \mathsf {R}\left( f^{N}_{\mathcal {C} _{\mathsf {ROT}}^{1}} (\varvec{x}, \varvec{s})\right) \) is correct’. Since \(\varvec{s} \) is uniform in \(\{0, 1\}^N\), by the correctness property, \(E_1\) happens with probability \(1 - \frac{1}{N^2}\). Let \(E_2\) be the event ‘\(\varvec{b} = \mathsf {R}\left( f^{N}_{\mathcal {C} _{\mathsf {ROT}}^{1}} (\varvec{x}, \varvec{\hat{s}})\right) \) is correct’. By (i), \(\varvec{\hat{s}} \) is also uniform in \(\{0, 1\}^N\), hence \(E_2\) happens with probability \(1 - \frac{1}{N^2}\). From (ii) and (iii) we conclude that, when \(\varvec{x} \in X\), \(\mathsf {M} (\varvec{y})\) outputs \((\varvec{\hat{a}} _0, \varvec{\hat{a}} _1)\) (instead of aborting) with probability \(\varOmega (\frac{1}{N\sqrt{N}})\). Since \(\varvec{x} \in X\) happens with probability \(1 - \frac{2}{N}\), we may conclude that with probability at least \((1 - \frac{2}{N})\varOmega (\frac{1}{N\sqrt{N}})\), the following event \(E_3\) occurs: \(\varvec{\hat{y}} = f^{N}_{\mathcal {C} _{\mathsf {ROT}}^{1}} (\varvec{x}, \varvec{\hat{s}})\) and \(\mathsf {M} \) outputs \((\varvec{\hat{a}} _0, \varvec{\hat{a}} _1)\). In the event \(E_1 \cap E_2 \cap E_3\), the machine \(\mathsf {M} \) guesses the input correctly and outputs \((\varvec{a} _0, \varvec{a} _1)\). By a union bound, \(E_1 \cap E_2 \cap E_3\) happens with probability \((1 - \frac{2}{N})\varOmega (\frac{1}{N\sqrt{N}}) - \frac{2}{N^2}\). Hence, \(\mathsf {M} \) predicts \((\varvec{a} _0, \varvec{a} _1)\) with probability \(\varOmega (\frac{1}{N\sqrt{N}})\). This is a contradiction since, when \(\ell = 2\log {N}\) and the protocol is \(\frac{1}{N^2}\)-secure, the adversary can succeed in guessing both inputs with probability at most \(2^{-2\log {N}} + \frac{1}{N^2} = \frac{2}{N^2}\). This proves the theorem. \(\square \)
4.1 Extending Impossibility to All Finite Channels
In this section we show that the negative result from the previous section applies not only to bitROT but, in fact, to all finite channels. W.l.o.g., we consider channels with rational conditional probability matrices. We begin by modeling an arbitrary finite channel as a randomized function.
Definition 9
Consider a channel \(\mathcal {C}: \mathcal {X} \rightarrow \mathcal {Y} \) with rational conditional distribution matrix. We define the states of \(\mathcal {C} \) as a finite set \({\mathcal {C}} . \mathsf {states} \) and the channel function \(f_{\mathcal {C}}: \mathcal {X} \times {\mathcal {C}} . \mathsf {states} \rightarrow \mathcal {Y} \), such that for all \(x \in \mathcal {X} \) and \(y \in \mathcal {Y} \),
We emphasize that our channels are all memoryless, and that “states” in this context should be interpreted as the internal randomness of the channel used in each invocation (uniform distribution over the set \({{\mathcal {C}} . \mathsf {states}}\)).
The existence of \({\mathcal {C}} . \mathsf {states} \) and \(f_{\mathcal {C}}\) is proved in the full version [1]. For the convenience of modeling we have defined \(f_{\mathcal {C}}\) in such a way that the state is chosen uniformly at random from \({\mathcal {C}} . \mathsf {states}\). Given the above definition, for a fixed input \(x \in \mathcal {X} \), the channel \(\mathcal {C} \) essentially samples a state uniformly from \({\mathcal {C}} . \mathsf {states} \) and deterministically maps x to the output y. This model motivates our next observation about multiple uses of the channel.
For a finite N, let \(\varvec{x} = (x_1, \ldots , x_N) \in \mathcal {X} ^N\) and let \(\varvec{y} = (y_1, \ldots , y_N) \in \mathcal {Y} ^N\) be the output of N independent uses of \(\mathcal {C} \) with input \(\varvec{x} \). Then the distribution of \((\varvec{x}, \varvec{y})\) can be thought of as being generated by the following equivalent process: sample \(\varvec{s} = (s_1, \ldots , s_N) \leftarrow ({\mathcal {C}} . \mathsf {states})^N\) uniformly and, for \(i = 1, \ldots , N\), compute \(y_i = f_{\mathcal {C}}(x_i, s_i)\).
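For concreteness, this state-based view of a rational channel can be sketched in code. In the following minimal Python sketch, the matrix W (here an example BSC with flip probability 1/3), the derived state set, and the function f_C are all illustrative stand-ins for \(\mathcal {C} \), \({\mathcal {C}} . \mathsf {states} \) and \(f_{\mathcal {C}}\), not taken from the paper:

```python
import random
from fractions import Fraction
from math import lcm

# A hypothetical finite channel given by a rational conditional
# probability matrix W[x][y] = Pr(C(x) = y); here, a BSC that flips
# its input bit with probability 1/3 (all names are illustrative).
W = {0: {0: Fraction(2, 3), 1: Fraction(1, 3)},
     1: {0: Fraction(1, 3), 1: Fraction(2, 3)}}

# Common denominator D of all entries; take C.states = {0, ..., D-1}.
D = lcm(*[p.denominator for row in W.values() for p in row.values()])
STATES = range(D)

def f_C(x, s):
    """Deterministic channel function: with s uniform over STATES,
    f_C(x, s) is distributed exactly as W[x]."""
    acc = 0
    for y, p in sorted(W[x].items()):
        acc += p.numerator * (D // p.denominator)
        if s < acc:
            return y
    raise ValueError("state out of range")

# Exact check: counting states recovers the conditional distribution.
for x in W:
    for y in W[x]:
        count = sum(1 for s in STATES if f_C(x, s) == y)
        assert Fraction(count, D) == W[x][y]

# N independent uses of C = N i.i.d. uniform states (memorylessness).
xs = [0, 1, 1, 0]
ss = [random.randrange(D) for _ in xs]
ys = [f_C(x, s) for x, s in zip(xs, ss)]
```

A common denominator always exists for a rational matrix, which is why the uniform-state model in Definition 9 is without loss of generality.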
Before we state the next lemma, we set up some notation generalizing Hamming distance to strings over finite alphabets. For \(\varvec{x}, \varvec{\tilde{x}} \in \mathcal {X} ^n\), \(\Vert \varvec{x} - \varvec{\tilde{x}} \Vert = 1\) if they differ in exactly one of the n coordinates, i.e., there exists \(i \in [n]\) such that \(x_i \ne \tilde{x}_i\) and \(x_j = \tilde{x}_j\) for all \(j \ne i\). The following lemma is an extension of the isoperimetric bound in Lemma 5 that we used for proving Theorem 8. The lemma is formally proved in the full version [1].
Lemma 6
Let \(\mathcal {X} \) be a finite set such that \(\vert \mathcal {X} \vert = 2^k\) for some k. For any function \(f: \mathcal {X} ^n \rightarrow \{0, 1\}\), if \(\Pr _{\varvec{x} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {X} ^n}(f(\varvec{x}) = i) \ge \frac{1}{2} - \frac{1}{\sqrt{k \cdot n}}\) for each \(i\in \{ 0, 1\}\), then \(\left| \left\{ (\varvec{x}, \varvec{\tilde{x}}) \in \mathcal {X} ^n \times \mathcal {X} ^n : \Vert \varvec{x} - \varvec{\tilde{x}} \Vert = 1 \text { and } f(\varvec{x}) \ne f(\varvec{\tilde{x}}) \right\} \right| = \varOmega \left( \frac{\vert \mathcal {X} \vert ^n}{\sqrt{k \cdot n}}\right) \).
We are now ready to state the generalization of Theorem 8.
Theorem 9
Let \(\mathcal {C} \) be a finite channel. For sufficiently large N and \(\ell \ge 2\log {N}\), an \((N, \frac{1}{N^2})\)-OWSC protocol for \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \) over \(\mathcal {C} \) is impossible even against semi-honest parties. In fact, the same holds even if one settles for computational security.
Proof:
We proceed in the same way we showed the impossibility in Theorem 8. Towards a contradiction, suppose \(\langle \mathsf {S}, \mathsf {R}\rangle \) is an \((N, \frac{1}{N^2})\)-OWSC protocol for \(\mathcal {C} _{\mathsf {ROT}}^{\ell } \) over \(\mathcal {C} \). The joint distribution generated by the protocol for input \((\varvec{a} _0, \varvec{a} _1)\in \{0, 1\}^{\ell } \times \{0, 1\}^{\ell }\) is described in Fig. 3. We use a machine \(\mathsf {M} \), similar to the one in the proof of Theorem 8, to guess both \(\varvec{a} _0\) and \(\varvec{a} _1\) from the received \(\varvec{y} \) with substantial probability, contradicting the sender’s privacy. The machine is described in Fig. 4. Intuitively, \(\mathsf {M} \) tries to obtain one string from \(\varvec{y} \) (by the correctness of the ROT protocol) and the other string by changing one coordinate of \(\varvec{y} \), hoping to reach a case where the receiver outputs the other string.
Analysis of \(\mathsf {M} \). We show that \(\mathsf {M} \) outputs \((\varvec{a} _0, \varvec{a} _1)\) with substantial probability. As observed in Lemma 4, since the protocol is \(\frac{1}{N^2}\)-secure, due to the receiver’s privacy property, there exists a set \(X\subseteq \mathcal {X} ^N\) such that \(\Pr (\varvec{x} \in X) \ge 1 - \frac{2}{N}\) and, for all \(\varvec{x} \in X\), \(\frac{1}{2} - \frac{1}{N} \le \Pr \left( \varvec{b} _0 = \bot \mid \varvec{x} \right) \le \frac{1}{2} + \frac{1}{N}\), where \((\varvec{b} _0, \varvec{b} _1) = \mathsf {R}(\varvec{y})\).
Fix an \(\varvec{x} \in X\). Recall that for a fixed \(\varvec{x} \in \mathcal {X} ^N\), the output \(\varvec{y} \) of the channel is a deterministic function of the state of the channel \(\varvec{r} \), i.e., \(\varvec{y} = f_{\mathcal {C}}^{N} (\varvec{x}, \varvec{r})\). Here \(f_{\mathcal {C}}^{N} (\varvec{x}, \varvec{r})\) outputs \(\varvec{y} \) such that \(y_i = f_{\mathcal {C}}(x_i, r_i)\). Define the function \(f_{\varvec{x}} : ({\mathcal {C}} . \mathsf {states})^N \rightarrow \{0, 1\}\) as follows: for \(\varvec{r} \in ({\mathcal {C}} . \mathsf {states})^N\), when \(f_{\mathcal {C}}^{N} (\varvec{x}, \varvec{r}) = \varvec{y} \) and \((\varvec{b} _0, \varvec{b} _1) = \mathsf {R}(\varvec{y})\), then \(f_{\varvec{x}}(\varvec{r}) = 0\) if \(\varvec{b} _0 = \bot \) and \(f_{\varvec{x}}(\varvec{r}) = 1\) otherwise. Hence, for all \(\varvec{x} \in X\), the function \(f_{\varvec{x}}\) satisfies \(\Pr _{\varvec{r} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} ({\mathcal {C}} . \mathsf {states})^N}(f_{\varvec{x}}(\varvec{r}) = i) \ge \frac{1}{2} - \frac{1}{N}\) for \(i = 0, 1\). When \(\frac{1}{N^2} \le \frac{1}{k \cdot N}\), invoking Lemma 6, the set of pairs \((\varvec{r}, \varvec{\tilde{r}})\) with \(\Vert \varvec{r} - \varvec{\tilde{r}} \Vert = 1\) and \(f_{\varvec{x}}(\varvec{r}) \ne f_{\varvec{x}}(\varvec{\tilde{r}})\) has size \(\varOmega \left( \frac{2^{kN}}{\sqrt{k \cdot N}}\right) \).
Note that \(\varvec{y} \) is generated from \(\varvec{x} \) and a random state \(\varvec{r} \leftarrow ({\mathcal {C}} . \mathsf {states})^N\) (see Fig. 3). On input \(\varvec{y} \), machine \(\mathsf {M} \) can equivalently be thought of as computing \(\varvec{\tilde{y}} = f_{\mathcal {C}}^{N} (\varvec{\tilde{x}}, \varvec{\tilde{r}})\), where \(\varvec{\tilde{x}} \) and \(\varvec{\tilde{r}} \) are described as follows: choose a random coordinate \(i {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} [N] \) (see Fig. 4); \(\varvec{\tilde{x}} \) is computed as \(\tilde{x}_i {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} \mathcal {X} \) and \(\tilde{x}_j = x_j\) for \(j \ne i\); and \(\varvec{\tilde{r}} \) is computed as \(\tilde{r}_i {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} {\mathcal {C}} . \mathsf {states} \) and \(\tilde{r}_j = r_j\) for \(j \ne i\). We make the following simple observations.

(i).
\(\varvec{\tilde{r}} \) is distributed uniformly in \(({\mathcal {C}} . \mathsf {states})^N\) and \(\Vert \varvec{r} - \varvec{\tilde{r}} \Vert \le 1\).

(ii).
\(\Pr (\varvec{\tilde{x}} = \varvec{x}) = \frac{1}{\vert \mathcal {X} \vert }\).

(iii).
With probability \(\varOmega (\frac{1}{N \sqrt{N}})\), we have \(f_{\varvec{x}}(\varvec{\tilde{r}}) = 1 - f_{\varvec{x}}(\varvec{r})\).
Here, (i) and (ii) are clear from the process. For (iii): for any \(\varvec{s} \in ({\mathcal {C}} . \mathsf {states})^N\) such that \(\Vert \varvec{r} - \varvec{s} \Vert = 1\), we have \(\varvec{\tilde{r}} = \varvec{s} \) with probability \(\frac{1}{N \cdot \vert {\mathcal {C}} . \mathsf {states} \vert } = \frac{1}{2^k \cdot N}\). Hence, when \(\varvec{x} \in X\) and \(\varvec{r} {\mathop {\leftarrow }\limits ^{\scriptscriptstyle {\$}}} ({\mathcal {C}} . \mathsf {states})^N \), the probability of the event ‘\(f_{\varvec{x}}(\varvec{\tilde{r}}) = 1 - f_{\varvec{x}}(\varvec{r})\)’ is at least \(\frac{1}{2^k \cdot N} \cdot \varOmega (\frac{1}{\sqrt{k \cdot N}}) = \varOmega (\frac{1}{N\sqrt{N}})\).
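The resampling process behind these observations can be checked exactly on a toy example. In the following Python sketch, the parameters D, N and the helper resample_dist are illustrative; it verifies that re-randomizing one random coordinate preserves uniformity (observation (i)) and hits each fixed neighbour with probability exactly \(\frac{1}{N \cdot D}\), the quantity used above:

```python
from fractions import Fraction
from itertools import product

# Toy parameters (illustrative): |C.states| = D, channel used N times.
D, N = 2, 3

def resample_dist(r):
    """Distribution of r-tilde given r: pick a coordinate i uniformly
    from [N] and re-draw r_i uniformly from the D states."""
    dist = {}
    for i in range(N):
        for v in range(D):
            rt = r[:i] + (v,) + r[i + 1:]
            dist[rt] = dist.get(rt, Fraction(0)) + Fraction(1, N * D)
    return dist

# Observation (i): r-tilde is uniform when r is uniform.
marginal = {}
for r in product(range(D), repeat=N):
    for rt, p in resample_dist(r).items():
        marginal[rt] = marginal.get(rt, Fraction(0)) + p * Fraction(1, D**N)
assert all(p == Fraction(1, D**N) for p in marginal.values())

# Any fixed neighbour s of r (differing in exactly one coordinate)
# is hit with probability exactly 1/(N*D), as used for observation (iii).
r, s = (0, 0, 0), (1, 0, 0)
assert resample_dist(r)[s] == Fraction(1, N * D)
```

Note that r-tilde may coincide with r (the re-drawn coordinate can repeat its old value), which is exactly what makes the marginal uniform.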
We are now ready to show that \(\mathsf {M} \) outputs \((\varvec{a} _0, \varvec{a} _1)\) with substantial probability. Let \(E_1\) be the event ‘\(\varvec{\tilde{x}} = \varvec{x} \) and \(f_{\varvec{x}}(\varvec{\tilde{r}}) = 1 - f_{\varvec{x}}(\varvec{r})\)’. We have already established that conditioned on any \(\varvec{x} \in X\), the event \(E_1\) occurs with probability \(\varOmega (\frac{1}{N\sqrt{N}})\). Since \(\Pr (\varvec{x} \in X) \ge 1 - \frac{2}{N}\), the probability of \(E_1\) is at least \((1 - \frac{2}{N}) \cdot \varOmega (\frac{1}{N\sqrt{N}})\). Let \(E_2\) be the event ‘\(\mathsf {R}(f_{\mathcal {C}}^{N} (\varvec{x}, \varvec{r}))\) is correct’ and \(E_3\) be the event ‘\(\mathsf {R}(f_{\mathcal {C}}^{N} (\varvec{x}, \varvec{\tilde{r}}))\) is correct’. Since \(\varvec{r} \) and \(\varvec{\tilde{r}} \) are each uniformly distributed in \(({\mathcal {C}} . \mathsf {states})^N\), by the correctness of the protocol, \(E_2\) and \(E_3\) each occur with probability at least \(1 - \frac{1}{N^2}\). In the event \(E_1 \cap E_2 \cap E_3\), the machine \(\mathsf {M} \) guesses the input correctly and outputs \((\varvec{a} _0, \varvec{a} _1)\). By a union bound, \(E_1 \cap E_2 \cap E_3\) happens with probability at least \((1 - \frac{2}{N})\cdot \varOmega (\frac{1}{N\sqrt{N}}) - \frac{2}{N^2}\). Hence, \(\mathsf {M} \) predicts \((\varvec{a} _0, \varvec{a} _1)\) with probability \(\varOmega (\frac{1}{N\sqrt{N}})\). This is a contradiction since, when \(\ell = 2\log {N}\), such a machine cannot exist if the protocol is \(\frac{1}{N^2}\)-secure. This proves the theorem. \(\square \)
5 Zero-Knowledge Proofs from Any Non-trivial Channel
In this section, we characterize the finite channels that allow OWSC of zero-knowledge proofs of knowledge. Our result states that zero-knowledge proofs of knowledge (ZK PoK) can be realized with OWSC over a channel if and only if the channel is non-trivial. A trivial channel is one which is essentially equivalent (as formalized below) to a noiseless channel when used by actively corrupt senders.
Theorem 10
(Informal). Given a language \(L\in \mathrm {NP} {\setminus }\mathrm {BPP} \), an \(\mathsf {OWSC} / {\mathcal {C}}\) zero-knowledge protocol for L exists if and only if \(\mathcal {C} \) is non-trivial.
Previously, this result was known only for two special channels, namely BEC and BSC [17]. To extend it to all non-trivial channels, we need to take a closer look at the properties of abstract channels. To understand what a non-trivial channel is, it is helpful to model a channel geometrically, as we do below.
Redundant Inputs, Core and Trivial Channels. Given a channel \(\mathcal {C}: \mathcal {X} \rightarrow \mathcal {Y} \), for each input \(\alpha \in \mathcal {X} \), define a \(\vert \mathcal {Y} \vert \)-dimensional vector \(\varvec{\mu } _\alpha \), with coordinates indexed by elements of \(\mathcal {Y}\), such that \(\varvec{\mu } _\alpha (\beta ) = \Pr (\mathcal {C} (\alpha )=\beta )\) for each \(\beta \in \mathcal {Y} \). We define the convex polytope \(R_{{\mathcal {C}}} \) associated with \(\mathcal {C} \) as the convex hull of the vectors \(\{ \varvec{\mu } _{\alpha } \mid \alpha \in \mathcal {X} \}\).
Any \(\alpha \in \mathcal {X} \) such that \(\varvec{\mu } _\alpha \) is a convex combination of \(\{ \varvec{\mu } _{\alpha '} \mid \alpha ' \in \mathcal {X} \setminus \{\alpha \}\}\) is a redundant input, because a sender could perfectly simulate the use of \(\alpha \) with a convex combination of other inputs, without being detected (and while possibly obtaining more information about the output at the receiver’s end). Geometrically, a redundant input corresponds to a point in the interior of (possibly a face of) \(R_{{\mathcal {C}}} \), or to one of multiple inputs that share the same vertex of the polytope. Consider a new channel \(\widehat{\mathcal {C}}\) without any redundant inputs, obtained by restricting \(\mathcal {C} \) to a subset of inputs, one for each vertex of the convex hull. \(\widehat{\mathcal {C}}\) is called the core of \(\mathcal {C}\).^{Footnote 4}
We note that \(\mathcal {C}:\mathcal {X} \rightarrow \mathcal {Y} \) can be securely realized over \(\widehat{\mathcal {C}}:\widehat{\mathcal {X}} \rightarrow \mathcal {Y} \), with security (in fact, UC security) against active adversaries. In this protocol, when the sender is given an input \(\alpha \in \mathcal {X} \setminus \widehat{\mathcal {X}} \), it samples an input \(\alpha '\) from \(\widehat{\mathcal {X}} \) according to a distribution that results in the same channel output distribution as produced by \(\alpha \) (this is always possible since \(R_{{\mathcal {C}}} \) is the same as \(R_{{\widehat{\mathcal {C}}}}\)). Correctness (when both parties are honest) and security against a corrupt receiver are immediate from the fact that the output distribution is correct; security against a corrupt sender follows from the fact that its only action in the protocol – sending an input to \(\widehat{\mathcal {C}}\) – can be carried out as it is in the ideal world involving \(\mathcal {C}\), with the same effect. This means that there is a secure OWSC protocol over \(\mathcal {C} \) only if such a protocol exists over \(\widehat{\mathcal {C}}\). In turn, since \(\widehat{\mathcal {C}}\) has no redundant inputs, it suffices to characterize which channels without redundant inputs admit ZK proofs.
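The sender's emulation step can be illustrated concretely. In the following hedged Python sketch, the output distributions mu, the core inputs, and the convex weights are hypothetical example values (a three-output channel where input 'm' is the midpoint of inputs 'a' and 'b'); it shows how a redundant input of \(\mathcal {C} \) is emulated by sampling a core input of \(\widehat{\mathcal {C}}\) with matching convex weights:

```python
import random
from fractions import Fraction as F

# Hypothetical output distributions mu_alpha over outputs {0, 1, 2};
# input 'm' is redundant: mu_m = (1/2) mu_a + (1/2) mu_b.
mu = {'a': [F(4, 5), F(1, 10), F(1, 10)],
      'b': [F(1, 10), F(1, 10), F(4, 5)],
      'm': [F(9, 20), F(1, 10), F(9, 20)]}

core = ['a', 'b']                      # inputs of the core channel
weights = {'a': [F(1), F(0)],          # convex weights expressing each
           'b': [F(0), F(1)],          # input of C over the core inputs
           'm': [F(1, 2), F(1, 2)]}

def send_over_core(alpha):
    """Sender's protocol: emulate C(alpha) using only the core channel,
    by first sampling a core input with the matching convex weights."""
    alpha_core = random.choices(core, weights=weights[alpha])[0]
    return random.choices(range(3), weights=mu[alpha_core])[0]  # one use

# Exact check: the induced output distribution of send_over_core('m')
# equals mu['m'], so the receiver's view is identical.
induced = [sum(weights['m'][j] * mu[core[j]][y] for j in range(2))
           for y in range(3)]
assert induced == mu['m']
```

Since the induced distribution matches exactly, a corrupt receiver cannot distinguish the emulation from a genuine use of the redundant input.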
A channel without any redundant inputs is trivial if the output distributions of its input symbols have pairwise disjoint supports. Such a channel corresponds to a noiseless channel, as the receiver always learns exactly the symbol that was input to the channel. Over a noiseless channel, zero-knowledge proofs exist only for languages in \(\mathrm {BPP}\).
Our main goal then is to show that if a channel \(\mathcal {C} \) without redundant inputs is non-trivial, then every language in \(\mathrm {NP}\) has an \(\mathsf {OWSC} / {\mathcal {C}}\) zero-knowledge protocol. We start by providing some intuition about how we achieve this.
5.1 Intuition Behind the Construction
The ZK protocol involves sending many independently generated copies of an Oblivious ZKPCP over the channel, after encoding them appropriately; the verifier tests the proof using a carefully designed scheme before accepting it. The encoding and testing are designed to ensure, on one hand, erasure of a large fraction of the bits in the proofs (to guarantee zero-knowledge) and, on the other hand, delivery of sufficiently many bits so that the verifier can detect if the transmitted proof is incorrect (for soundness). At a high level, the transmission and testing of the proof take place over three “layers”: (i) an innermost binary-input channel layer at the bottom, (ii) an erasure layer over it, and (iii) an outer PCP layer.
The innermost and outermost layers are used to ensure soundness, while the middle and outermost layers work in tandem to obtain the zero-knowledge property.
Binary-Input Channel Layer. A given channel \(\mathcal {C} \) (without redundant inputs) may have an arbitrary number of inputs, which may give the prover room for cheating in the protocol. The binary-input channel layer involves a mechanism to enforce that the prover (mostly) uses only a prescribed pair of distinct input symbols \(\alpha _0\) and \(\alpha _1\). We require that, over several uses of the channel, if the sender uses a different symbol significantly often, then the receiver can detect this from the empirical distribution of the output symbols it received. This requires that the sender cannot simulate the effect of sending a combination of these two symbols by using a combination of other symbols. In the geometric interpretation of the channel, this corresponds to the requirement that the line segment connecting the two vertices \(\varvec{\mu } _{\alpha _0}\) and \(\varvec{\mu } _{\alpha _1}\) of the polytope \(R_{{\mathcal {C}}} \) actually forms an edge of the polytope. However, for the erasure layer (described below) to work, we require that the output distributions of \(\alpha _0\) and \(\alpha _1\) have intersecting supports. In Lemma 7, we show that in any non-trivial channel \(\mathcal {C} \) (without redundant inputs), there indeed exist \(\alpha _0,\alpha _1\) which satisfy both these requirements simultaneously. Then, in Lemma 8, we show that there is a statistical test—whose parameters are determined by the geometry of the polytope \(R_{{\mathcal {C}}} \)—that can distinguish a sender who sends a long sequence of these two symbols from a sender who uses other symbols in a significant fraction of positions.
Erasure Layer. We can obtain a non-zero probability of perfect erasure by encoding 0 as the pair \((\alpha _0,\alpha _1)\) and 1 as the pair \((\alpha _1,\alpha _0)\), to be transmitted over two independent uses of the channel \(\mathcal {C}\). Since there is some symbol \(\beta \) such that both \(q_0 := \Pr (\mathcal {C} (\alpha _0)=\beta )>0\) and \(q_1:= \Pr (\mathcal {C} (\alpha _1)=\beta )>0\), the probability of the receiver obtaining \((\beta ,\beta )\) is the same positive value \(q_0q_1\), whether 0 or 1 is sent as above.^{Footnote 5} Hence, one can interpret the view of the receiver as obtained by post-processing the output of a BEC with erasure probability \(q_0q_1\), where the erasure symbol is mapped to the outcome \((\beta ,\beta )\).
At the receiver’s end, we use maximum-likelihood decoding that always outputs a bit (rather than allowing an erasure symbol as well); if the likelihood of a received pair of symbols is the same for 0 and 1, it is decoded as a uniformly random bit. Note that if the sender sends a pair \((\alpha _0,\alpha _0)\) or \((\alpha _1,\alpha _1)\), then the decoding strategy has the same effect as when the sender sends the encoding of a random bit – namely, the pair is decoded to a uniformly random bit. Thus, the net effect of these two layers is that the prover communicates with the verifier using bits sent via a BSC, except for a few positions where the sender may arbitrarily control the channel characteristics. While the receiver’s view includes more information than the output of the BSC, it can be entirely simulated from the output of a BEC.
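The encode/decode pair of the erasure layer can be sketched directly. In the following Python sketch, the channel distributions mu (with shared output 1 playing the role of \(\beta \)) and the helper names enc, channel, likelihood, dec are illustrative stand-ins for \({\mathsf {Enc}} \), \(\mathcal {C} \) and \({\mathsf {Dec}} \) in Fig. 6, not the paper's actual parameters:

```python
import random
from fractions import Fraction as F

# Illustrative channel on inputs {a0, a1} with outputs {0, 1, 2};
# output 1 plays the role of beta, lying in both supports.
mu = {'a0': {0: F(3, 4), 1: F(1, 4)},      # Pr(C(a0) = y)
      'a1': {1: F(1, 2), 2: F(1, 2)}}      # Pr(C(a1) = y)

def enc(bit):
    # 0 -> (a0, a1), 1 -> (a1, a0): two independent channel uses
    return ('a0', 'a1') if bit == 0 else ('a1', 'a0')

def channel(sym):
    ys, ps = zip(*mu[sym].items())
    return random.choices(ys, weights=ps)[0]

def likelihood(pair, bit):
    x1, x2 = enc(bit)
    return mu[x1].get(pair[0], F(0)) * mu[x2].get(pair[1], F(0))

def dec(pair):
    """Maximum-likelihood decoding that always outputs a bit; ties
    (e.g. the erasure outcome (beta, beta)) become a random bit."""
    l0, l1 = likelihood(pair, 0), likelihood(pair, 1)
    if l0 == l1:
        return random.randrange(2)
    return 0 if l0 > l1 else 1

# The pair (1, 1) is equally likely under both encodings: the erasure
# probability is q0 * q1 = (1/4) * (1/2) = 1/8 for either bit.
assert likelihood((1, 1), 0) == likelihood((1, 1), 1) == F(1, 8)
```

In this toy example, the tie at \((\beta , \beta )\) occurs with probability \(q_0 q_1 = \frac{1}{8}\) under either encoding, matching the role of \(\rho \) in Lemma 9.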
PCP Layer. At the outermost layer, our proof resembles the \(\mathsf {OWSC} / {BSC}\) ZK protocol of [17], but is in fact somewhat simpler.^{Footnote 6} Here, the prover simply sends several independently generated copies of an Oblivious ZKPCP (routed through the inner layers discussed above). As we noted above, the view of the receiver is obtained by postprocessing the output of a BEC; hence, by choosing the parameters of the ZKPCP appropriately, we can ensure that the receiver’s view can be statistically simulated.
Ensuring soundness requires more work. The receiver, after obtaining the bits decoded from the inner layers (provided that no deviation was detected at the innermost layer), can try to execute the PCP verification on each proof. However, it cannot reject the proof on encountering a single proof that fails the verification, because, even if the prover is honest, the channel can introduce errors in the received bits. As such, the verifier should be prepared to tolerate a certain probability of error. One may expect that if the proof was originally incorrect, then the probability of error would increase. However, this intuition is imprecise: it is plausible that a wrong proof can match or even surpass some honest proofs in the probability of passing the PCP verification.
To deal with this, we note that it is not necessary to carry out the original PCP verification test on the received bits; rather, one should design a statistical test that separates all correct proofs from incorrect proofs, as received through the inner layers. We show that for any predicate used by the original PCP verifier, there is an error-score one can assign to the bits decoded from the BSC, so that the expected error-score of the decoded bits is lower when they originally satisfy the PCP verifier’s predicate. The verifier accepts or rejects the proof by computing the empirical average of the score across all repetitions of the proof and thresholding it appropriately.
We remark that our scoring scheme and its analysis are more direct, and perhaps simpler, compared to the one in [17]. An additional subtlety that arises in our case is that there can be a few positions where the inner layers do not constitute the BSC that we try to enforce. Nevertheless, the above approach remains robust to such deviations, by ensuring that the scores come from a suitably bounded range.
5.2 Properties of Non-trivial Channels
The following lemma shows that if \(\mathcal {C} \) is non-trivial and has no redundant inputs, there is a pair of input symbols \(\alpha _0, \alpha _1\) with properties that we use to enforce the binary-input channel layer in Lemma 8 and to realize the erasure layer in Lemma 9. Proofs of these lemmas are provided in the full version [1] (Fig. 5).
Lemma 7
If \(\mathcal {C}: \mathcal {X} \rightarrow \mathcal {Y} \) without redundant inputs is non-trivial, then there exist distinct symbols \(\alpha _0, \alpha _1 \in \mathcal {X} \), \(\varvec{v} \in [-1, 1]^{\mathcal {Y}}\) and \(\epsilon > 0\) with the following properties:

(i)
\(\exists y \in \mathcal {Y} \) such that \(\varvec{\mu } _{\alpha _0}(y), \varvec{\mu } _{\alpha _1}(y) > 0\).

(ii)
\(\langle \varvec{\mu } _{\alpha _0}, \varvec{v} \rangle = \langle \varvec{\mu } _{\alpha _1}, \varvec{v} \rangle \), and for all \(\alpha \in \mathcal {X} \setminus \{\alpha _0, \alpha _1\}\), \(\langle \varvec{\mu } _{\alpha }, \varvec{v} \rangle - \langle \varvec{\mu } _{\alpha _0}, \varvec{v} \rangle \ge \epsilon \).
In the next lemma, we show that, over several uses of \(\mathcal {C} \), a sender who uses only the symbols \(\alpha _0, \alpha _1\) described in the previous lemma can be distinguished from one that uses other symbols (different from \(\alpha _0, \alpha _1\)) significantly often, using the empirical distribution of the output symbols. The histogram of a vector \(\varvec{y} \in \mathcal {Y} ^m\) is defined as \(\mathsf {hist}_{\varvec{y}} (\beta ) = \frac{1}{m}\vert \{i \in [m] : y_i = \beta \}\vert \) for all \(\beta \in \mathcal {Y} \). The following function is a statistical test that achieves this: \(f_m(\varvec{y}) = \langle \mathsf {hist}_{\varvec{y}},\varvec{v} \rangle - \langle \varvec{\mu } _{\alpha _0},\varvec{v} \rangle \).
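For concreteness, the test can be written out on a toy channel. In the following Python sketch, the distributions mu, the forbidden input 'c', and the separating vector v are illustrative values satisfying the two properties of Lemma 7 (here with \(\epsilon = \frac{1}{3}\)):

```python
from fractions import Fraction as F
from collections import Counter

# Illustrative 3-output channel: a0, a1 are the prescribed inputs and
# 'c' is a third, forbidden input; v is a valid separating vector.
mu = {'a0': [F(1, 2), F(1, 2), F(0)],
      'a1': [F(0), F(1, 2), F(1, 2)],
      'c':  [F(1, 3), F(1, 3), F(1, 3)]}
v = [F(1), F(-1), F(1)]

def inner(p, q):
    return sum(a * b for a, b in zip(p, q))

# Properties of Lemma 7: a0 and a1 score equally, and every other
# input scores at least epsilon higher (here epsilon = 1/3).
assert inner(mu['a0'], v) == inner(mu['a1'], v) == F(0)
epsilon = inner(mu['c'], v) - inner(mu['a0'], v)
assert epsilon > 0

def f_m(ys):
    """The test statistic <hist_y, v> - <mu_a0, v>."""
    m = len(ys)
    hist = Counter(ys)
    h = [F(hist.get(y, 0), m) for y in range(3)]
    return inner(h, v) - inner(mu['a0'], v)

# In expectation, f_m is 0 when only a0/a1 are sent, and it grows
# linearly with the fraction t/m of forbidden symbols.
assert f_m([1, 1, 0, 2]) == F(0)
```

Note that both prescribed inputs put positive probability on output 1, so the example also satisfies the intersecting-supports requirement used by the erasure layer.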
Lemma 8
If a channel \(\mathcal {C} \) without redundant inputs is non-trivial, then there exist \(\alpha _0, \alpha _1 \in \mathcal {X} \), \(\epsilon > 0\) and functions \(f_{{m}} : \mathcal {Y} ^m \rightarrow \mathbb {R}\), for \(m \in \mathbb {N}\), such that for all \(\lambda >0\), when \(\varvec{x} \in \mathcal {X} ^m\), \(t = \vert \{i \in [m] : x_i \notin \{\alpha _0, \alpha _1\}\}\vert \) and \(\varvec{y} = \mathcal {C} (\varvec{x})\),
The following lemma analyzes the coding scheme in Fig. 6, which realizes the erasure layer using the symbols \(\alpha _0, \alpha _1\) described in Lemma 7. The fidelity of the scheme is a consequence of \(\varvec{\mu } _{\alpha _0}\) and \(\varvec{\mu } _{\alpha _1}\) being distinct. As we already observed, receiving \((\beta , \beta )\) in this scheme is effectively the same as receiving an erasure. The lemma shows that, since \(\varvec{\mu } _{\alpha _0}\) and \(\varvec{\mu } _{\alpha _1}\) have intersecting supports, an erasure happens with non-zero probability. The lemma also formalizes the observation that sending an invalid encoding \((\alpha _i, \alpha _i)\) for \(i \in \{0, 1\}\) is effectively the same as sending the valid encoding of a random bit.
Lemma 9
The scheme \(\langle {\mathsf {Enc}}, {\mathsf {Dec}} \rangle \) in Fig. 6 satisfies the following properties:

(i).
\(\Pr \left[ {\mathsf {Dec}} \left( {\mathsf {Enc}} (a)\right) = a\right] = p > \frac{1}{2}\) for \(a \in \{0, 1\}\);

(ii).
\(\Pr \left[ {\mathsf {Dec}} \left( \mathcal {C} (\alpha _i, \alpha _i)\right) = 0\right] = \frac{1}{2}\) for \(i = 0, 1\);

(iii).
Let \(\bot \) be the event that the receiver gets \((\beta , \beta )\) as output, where \(\beta \) is in the support of \(\varvec{\mu } _{\alpha _0}\) and \(\varvec{\mu } _{\alpha _1}\). Then \(\Pr (\bot \mid {\mathsf {Enc}} (a)) = \rho > 0\), for all \(a \in \{0, 1\}\).
The Binary Symmetric Channel (BSC) with parameter p is defined as \(\mathsf {BSC}^{{p}} : \{0, 1\} \rightarrow \{0, 1\}\) such that for \(b \in \{0, 1\}\), \(\Pr (\mathsf {BSC}^{{p}} (b) = b) = p\). Consider the scenario where a configuration \(\varvec{x} \in \{0, 1\}^k\) is sent through \(\mathsf {BSC}^{{p}} \), where \(S \subseteq \{0, 1\}^k\) is the set of acceptable configurations. The following lemma assigns scores \(\{\gamma ^S_{\varvec{y}}\}_{\varvec{y} \in \{0, 1\}^k}\) to the received configurations in such a way that the expected score is 0 when an acceptable configuration \(\varvec{x} \in S\) is sent, and the expected score is a strictly positive constant \(\phi ^S\) when an unacceptable configuration \(\varvec{x} \notin S\) is sent.
Lemma 10
For \(k \in \mathbb {N}\), let \(U = \{0, 1\}^k\) and \(S \subseteq U\). For \(\varvec{x}, \varvec{y} \in U\), define \(p_{\varvec{x} \varvec{y}} = \Pr (\mathsf {BSC}^{{p}} (\varvec{x}) = \varvec{y})\). There exist \(\phi ^S > 0\) and \(\{\gamma ^S_{\varvec{y}}\}_{\varvec{y} \in U} \subseteq [-1, 1]\) such that \(\sum _{\varvec{y} \in U} \gamma ^S_{\varvec{y}} \cdot p_{\varvec{x} \varvec{y}} = 0\) for all \(\varvec{x} \in S\), and \(\sum _{\varvec{y} \in U} \gamma ^S_{\varvec{y}} \cdot p_{\varvec{x} \varvec{y}} = \phi ^S\) for all \(\varvec{x} \notin S\).
Proof:
Consider the matrix \(M \in \mathbb {R}^{U \times U}\) such that \(M_{\varvec{x} \varvec{y}} = p_{\varvec{x} \varvec{y}}\). By the definition of \(\mathsf {BSC}^{{p}} \), when \(\Vert \varvec{x} - \varvec{y} \Vert \) denotes the Hamming distance between \(\varvec{x}, \varvec{y} \in U\), we have \(p_{\varvec{x} \varvec{y}} = (1 - p)^{\Vert \varvec{x} - \varvec{y} \Vert } \cdot p^{k - \Vert \varvec{x} - \varvec{y} \Vert }\). It can be verified that, when \(\otimes \) denotes the tensor operation, \(M = H^{\otimes k}\), where \(H = \begin{pmatrix} p &{} 1-p \\ 1-p &{} p \end{pmatrix}\).
Since \(p \ne \frac{1}{2}\), H is invertible, and since the tensor operation preserves non-singularity, M is an invertible matrix. The existence of \(\phi ^S > 0\) and \(\{\gamma ^S_{\varvec{y}}\}_{\varvec{y} \in U} \subseteq [-1, 1]\) follows directly from the invertibility of M. \(\square \)
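The existence argument can be made concrete by actually solving for the scores. In the following Python sketch, the choices k = 2, p = 2/3 and the set S are illustrative; it builds M, solves for the scores by Gaussian elimination over the rationals, and rescales so that the scores lie in [-1, 1] with expected score exactly 0 on S and a positive constant off S:

```python
from fractions import Fraction as F
from itertools import product

# Illustrative parameters: k = 2 bits and p = 2/3 (the probability
# that BSC^p delivers a bit correctly); S is an example acceptable set.
k, p = 2, F(2, 3)
U = list(product([0, 1], repeat=k))
S = {(0, 0), (1, 1)}

def prob(x, y):
    """p_{xy} = (1-p)^d * p^(k-d), with d the Hamming distance."""
    d = sum(a != b for a, b in zip(x, y))
    return (1 - p)**d * p**(k - d)

def solve(A, b):
    """Gaussian elimination over the rationals (A assumed invertible)."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        pivval = A[c][c]
        A[c] = [v / pivval for v in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                factor = A[r][c]
                A[r] = [vr - factor * vc for vr, vc in zip(A[r], A[c])]
    return [row[n] for row in A]

# Solve M * gamma = target, with target 0 on S and 1 off S; M is the
# k-fold tensor power of H and is invertible since p != 1/2.
M = [[prob(x, y) for y in U] for x in U]
target = [F(0) if x in S else F(1) for x in U]
gamma = solve(M, target)

scale = max(abs(g) for g in gamma)       # rescale into [-1, 1]
gamma = [g / scale for g in gamma]
phi = F(1) / scale                       # expected score off S

for x in U:
    expected = sum(prob(x, U[j]) * gamma[j] for j in range(len(U)))
    assert expected == (F(0) if x in S else phi)
```

Rescaling by the largest score is what yields the bounded range [-1, 1], which the soundness analysis relies on when a few positions deviate from the BSC.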
5.3 Construction and Analysis
The scheme \(\langle {\mathsf {P}_{ZK}}, {\mathsf {V}_{ZK}} \rangle \) is given in Fig. 7. We now formally prove that this is a zeroknowledge proof of knowledge with negligible completeness and soundness error.
We first comment on the strategy of a malicious prover who encodes bits as \((\alpha _i, \alpha _i)\) for \(i = 0, 1\). Notice that the statistical test of thresholding \(f_{{2n \cdot \ell }} (\varvec{y})\) is insensitive to such a malicious strategy. But, by statement (ii) in Lemma 9, a bit that is encoded as \((\alpha _i, \alpha _i)\) is decoded as 0 (resp. 1) with probability \(\frac{1}{2}\). Hence, with respect to decoding, such a malicious strategy is effectively the same as encoding a random bit honestly using \({\mathsf {Enc}} \). Consequently, every malicious prover strategy (including ones that encode bits incorrectly using \((\alpha _i, \alpha _i)\)) can be thought of as a randomized strategy over a subclass of strategies in which each bit is encoded as \((\alpha , \alpha ')\), where \(\alpha \ne \alpha '\). Hence, in the sequel, we analyze soundness only with respect to this class of strategies.
The proof proceeds by bounding the number of bad proofs a malicious sender can send without getting rejected by the tests performed by the verifier. We define \(B_{\text {encoding}}\) as the set of bad proofs in which at least one bit is encoded using symbols outside the set \(\{\alpha _0, \alpha _1\}\). We also define \(B_{\text {incorrect}}\) as the set of proofs in which each bit is correctly encoded using \({\mathsf {Enc}} \), but the proof itself is invalid; this is formalized as the set of proofs from which the extractor E for \(\langle {\mathsf {P}_{oZK}}, {\mathsf {V}_{oZK}} \rangle \) cannot extract a valid witness. We argue soundness by showing that if the sizes of \(B_{\text {encoding}}\) and \(B_{\text {incorrect}}\) are substantial, then \({\mathsf {V}_{ZK}} \) rejects with all but negligible probability. Furthermore, completeness follows from the tests accepting an honest prover with all but negligible probability. These are established in the following claims; see the full version [1] for formal proofs.
Claim 2
If \(B_{\text {encoding}}\) is empty, then the probability that \(f_{{2n \cdot \ell }} (\varvec{y}) \ge \sqrt{\frac{\lambda }{2n\ell }}\) is negligible in \(\lambda \). If \(\vert B_{\text {encoding}}\vert \ge \frac{n \kappa \phi }{6}\), then the probability that \(f_{{2n \cdot \ell }} (\varvec{y}) < \sqrt{\frac{\lambda }{2n\ell }}\) is negligible in \(\lambda \).
Claim 3
If \(B_{\text {encoding}} = B_{\text {incorrect}} = \emptyset \), then \(\frac{1}{n}\sum _{k=1}^n s _k\ge \frac{\kappa \cdot \phi }{12}\) with probability at most \(2e^{-\frac{1}{2}\left( \frac{\ell \lambda \cdot \phi }{12}\right) ^2}\). If \(\vert B_{\text {encoding}}\vert \le n \kappa \phi \) and \(\vert B_{\text {incorrect}}\vert \ge \frac{n}{3}\), then \(\frac{1}{n}\sum _{k=1}^n s _k<\frac{\kappa \cdot \phi }{12}\) with probability at most \(2e^{-\frac{1}{2}\left( \frac{\ell \lambda \cdot \phi }{12}\right) ^2}\).
Below, we argue that \(\langle {\mathsf {P}_{ZK}}, {\mathsf {V}_{ZK}} \rangle \) is a zeroknowledge proof using these claims.
Completeness. The above claims directly imply that if \(\pi _1, \ldots , \pi _n\) are valid proofs which are correctly encoded, then \({\mathsf {V}_{ZK}} \) accepts with all but negligible probability.
Soundness. We build an extractor \(E'\) from E (the extractor for \(\langle {\mathsf {P}_{oZK}}, {\mathsf {V}_{oZK}} \rangle \)) as follows. For each \(i \in [n]\), extractor \(E'\) tries to extract a proof \(\pi ^*_i\) from the encoding of the purported proof \(\pi _i\), rejecting each purported proof \(\pi _i\) that is incorrectly encoded, i.e., with \(i \in B_{\text {encoding}}\). If for some i we have \(R_L(x, E(\pi ^*_i, x)) = 1\), it outputs \(E(\pi ^*_i, x)\); else, it outputs \(\bot \). Clearly, \(E'\) aborts only if \(B_{\text {encoding}} \cup B_{\text {incorrect}} = [n]\). But the above claims imply that \({\mathsf {V}_{ZK}} \) rejects with all but negligible probability whenever \(\vert B_{\text {encoding}} \cup B_{\text {incorrect}}\vert \ge \frac{2n}{3}\).
Zero-Knowledge. By Lemma 9, \({\mathsf {Enc}} \) induces an erasure (\(\bot \) in the lemma) with probability \(\rho > 0\). Recall that the proof uses a \((3, 1-\rho )\)-ZKPCP \(\langle {\mathsf {P}_{oZK}}, {\mathsf {V}_{oZK}} \rangle \). Let S be a simulator for this ZKPCP. The construction of the simulator \(S'\) for \(\langle {\mathsf {P}_{ZK}}, {\mathsf {V}_{ZK}} \rangle \) from S is straightforward: \(S'\) runs n independent executions of \(S(x, \lambda )\) to get \(\pi ^*_1, \ldots , \pi ^*_n\). It is easy to see that if S produced a perfect simulation of the ZKPCP, then \(S'\) would also produce a perfect simulation of the verifier’s view in the ZK proof. Since the simulation by S incurs a negligible error, so does the simulation by \(S'\).
Notes
 1.
In more detail, the sender can generate an anonymous \(\$100\) bill by letting the input be \(m\,=\,\)(Sender-name, 100) and the transmitted message be (m, id) for a random identifier id picked by the functionality. Consider the scenario where multiple \(\$100\) bills are sent to different receivers. The id is needed to prevent double spending. Anonymity comes from the fact that the sender doesn’t learn id, so it cannot associate a particular \(\$100\) bill with the receiver to whom it was sent.
 2.
Indeed, an \(\mathsf {OWSC} / {\mathcal {C}}\) ZK PoK protocol is equivalent to an information-theoretic UC-secure protocol for the ZK functionality in the \(\mathcal {C}\)-hybrid model, with an additional requirement that the protocol involves a single invocation of \(\mathcal {C}\) and no other communication.
 3.
Note that the conceptually simpler approach of applying NIZK proofs is not applicable here, since in the setting of secure computation over noisy channels there is no public transcript to which such a proof can apply.
 4.
The notions of redundancy and core were defined more generally in [21], in the context of two-party functionalities where both parties have inputs and outputs. Here we present simpler definitions that suffice for the case of channels.
 5.
This is essentially identical to the von Neumann extractor trick.
 6.
In [17], an encoding scheme was used to argue that with some probability, the bits sent through the BSC are “erased.” But this encoding turns out to be redundant, as a BSC implicitly guarantees erasures: concretely, a BSC with error probability p can be simulated by post-processing a BEC with erasure probability 2p, where the post-processing decodes the erasure symbol as a uniformly random bit.
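This simulation can be stated in a few lines of Python; the value of p below is an arbitrary example, and the function name is illustrative:

```python
import random
from fractions import Fraction as F

# The footnote's claim: a BSC with error probability p is obtained by
# post-processing a BEC with erasure probability 2p, decoding the
# erasure symbol as a uniformly random bit (p = 1/8 is an example).
p = F(1, 8)

def bsc_from_bec(b):
    if random.random() < 2 * p:          # the BEC erases with prob 2p...
        return random.randrange(2)       # ...decode erasure as a random bit
    return b

# Exactly: Pr(output != b) = Pr(erase) * 1/2 = 2p * (1/2) = p.
flip_prob = 2 * p * F(1, 2)
assert flip_prob == p
```

The non-erased positions are delivered verbatim, so every flip comes from an erasure, which is what makes the receiver's view simulatable from a BEC output.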
References
Agrawal, S., Ishai, Y., Kushilevitz, E., Narayanan, V., Prabhakaran, M., Prabhakaran, V., Rosen, A.: Cryptography from one-way communication: on completeness of finite channels. Cryptology ePrint Archive (2020)
Ajtai, M.: Oblivious rams without cryptogrpahic assumptions. In: STOC 2010, pp. 181–190 (2010)
Bellare, M., et al.: iKP  a family of secure electronic payment protocols. In: USENIX Workshop on Electronic Commerce (1995)
Bellare, M., Tessaro, S., Vardy, A.: Semantic security for the wiretap channel. In: SafaviNaini, R., Canetti, R. (eds.) CRYPTO 2012. LNCS, vol. 7417, pp. 294–311. Springer, Heidelberg (2012). https://doi.org/10.1007/9783642320095_18
Bennett, C.H., Brassard, G., Crepeau, C., Maurer, U.M.: Generalized privacy amplification. IEEE Trans. Inf. Theor. 41(6), 1915–1923 (1995)
Bennett, C.H., Brassard, G., Robert, J.M.: Privacy amplification by public discussion. SIAM J. Comput. 17(2), 210–229 (1988)
Bertsimas, D., Tsitsiklis, J.N.: Introduction to Linear Optimization. Athena Scientific, Nashua (1997)
Bloch, M., Barros, J.: PhysicalLayer Security: from Information Theory to Security Engineering. Cambridge University Press, Cambridge (2011)
Blum, M., Feldman, P., Micali, S.: Proving security against chosen ciphertext attacks. In: Goldwasser, S. (ed.) CRYPTO 1988. LNCS, vol. 403, pp. 256–268. Springer, New York (1990). https://doi.org/10.1007/0387347992_20
Chaum, D.: Blind signatures for untraceable payments. In: Chaum, D., Rivest, R.L., Sherman, A.T. (eds.) Advances in Cryptology, pp. 199–203. Springer, Boston, MA (1983). https://doi.org/10.1007/9781475706024_18
Chaum, D.: Online cash checks. In: Quisquater, J.J., Vandewalle, J. (eds.) EUROCRYPT 1989. LNCS, vol. 434, pp. 288–293. Springer, Heidelberg (1990). https://doi.org/10.1007/3540468854_30
Crepeau, C., Kilian, J.: Achieving oblivious transfer using weakened security assumptions. In: FOCS, pp. 42–52 (1988)
Crépeau, C., Morozov, K., Wolf, S.: Efficient unconditional oblivious transfer from almost any noisy channel. In: Blundo, C., Cimato, S. (eds.) SCN 2004. LNCS, vol. 3352, pp. 47–59. Springer, Heidelberg (2005). https://doi.org/10.1007/9783540305989_4
Damgård, I., Kilian, J., Salvail, L.: On the (Im)possibility of basing oblivious transfer and bit commitment on weakened security assumptions. In: Stern, J. (ed.) EUROCRYPT 1999. LNCS, vol. 1592, pp. 56–73. Springer, Heidelberg (1999). https://doi.org/10.1007/354048910X_5
Feige, U., Lapidot, D., Shamir, A.: Multiple noninteractive zero knowledge proofs based on a single random string. In: FOCS, vol. 1, pp. 308–317, October 1990
Feige, U., Kilian, J., Naor, M.: A minimal model for secure computation (extended abstract). In: STOC, pp. 554–563 (1994)
Garg, S., Ishai, Y., Kushilevitz, E., Ostrovsky, R., Sahai, A.: Cryptography with oneway communication. In: Gennaro, R., Robshaw, M. (eds.) CRYPTO 2015. LNCS, vol. 9216, pp. 191–208. Springer, Heidelberg (2015). https://doi.org/10.1007/9783662480007_10
Ishai, Y., Kushilevitz, E.: Private simultaneous messages protocols with applications. In: ISTCS 1997, pp. 174–184. IEEE Computer Society (1997)
Ishai, Y., Kushilevitz, E., Ostrovsky, R., Prabhakaran, M., Sahai, A.: Efficient noninteractive secure computation. In: Paterson, K.G. (ed.) EUROCRYPT 2011. LNCS, vol. 6632, pp. 406–425. Springer, Heidelberg (2011). https://doi.org/10.1007/9783642204654_23
Kilian, J.: Founding cryptography on oblivious transfer. In: STOC, pp. 20–31 (1988)
Kraschewski, D., Maji, H.K., Prabhakaran, M., Sahai, A.: A full characterization of completeness for twoparty randomized function evaluation. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 659–676. Springer, Heidelberg (2014). https://doi.org/10.1007/9783642552205_36
Lin, F., Cheraghchi, M., Guruswami, V., SafaviNaini, R., Wang, H.: Secret sharing with binary shares. In: ITCS, pp. 53:1–53:20 (2019)
Maurer, U.M.: Perfect cryptographic security from partially independent channels. In: STOC 1991, pp. 561–571 (1991)
Poor, H.V., Schaefer, R.F.: Wireless physical layer security. Proc. Natl. Acad. Sci. 114(1), 19–26 (2017)
Ranellucci, S., Tapp, A., Winkler, S., Wullschleger, J.: On the efficiency of bit commitment reductions. In: Lee, D.H., Wang, X. (eds.) ASIACRYPT 2011. LNCS, vol. 7073, pp. 520–537. Springer, Heidelberg (2011). https://doi.org/10.1007/9783642253850_28
Raz, R., Reingold, O., Vadhan, S.: Extracting all the randomness and reducing the error in trevisan’s extractors. J. Comput. Syst. Sci. 65, 97–128 (2002)
Trevisan, L.: Extractors and pseudorandom generators. J. ACM 48(4), 860–879 (2001)
Winter, A., Nascimento, A.C.A., Imai, H.: Commitment capacity of discrete memoryless channels. In: Paterson, K.G. (ed.) Cryptography and Coding 2003. LNCS, vol. 2898, pp. 35–51. Springer, Heidelberg (2003). https://doi.org/10.1007/9783540409748_4
Wullschleger, J.: Oblivious transfer from weak noisy channels. In: Reingold, O. (ed.) TCC 2009. LNCS, vol. 5444, pp. 332–349. Springer, Heidelberg (2009). https://doi.org/10.1007/9783642004575_20
Wyner, A.D.: The wiretap channel. Bell Syst. Tech. J. 54(8), 1355–1387 (1975)
Yao, A.C.C.: How to generate and exchange secrets (extended abstract). In: FOCS 1986, pp. 162–167 (1986)
Acknowledgements
We thank the anonymous Asiacrypt reviewers for their careful reading and many helpful comments. This research was supported by the Ministry of Science and Technology, Israel and the Department of Science and Technology, Government of India, and in part by the International Centre for Theoretical Sciences (ICTS) during a visit for participating in the program Foundational Aspects of Blockchain Technology (ICTS/Progfabt2020/01). In addition, S. Agrawal was supported by the DST "Swarnajayanti" fellowship and an Indo-French CEFIPRA project; Y. Ishai was supported by ERC Project NTSC (742754), NSF-BSF grant 2015782, ISF grant 2774/20, and BSF grant 2018393; E. Kushilevitz was supported by ISF grant 2774/20, BSF grant 2018393, and NSF-BSF grant 2015782; V. Narayanan and V. Prabhakaran were supported by the Department of Atomic Energy, Government of India, under project no. RTI4001, DAE OM No. 1303/4/2019/R&D-II/DAE/1969 dated 7.2.2020; M. Prabhakaran was supported by the Dept. of Science and Technology, India via the Ramanujan Fellowship; A. Rosen was supported in part by ISF grant No. 1399/17 and Project PROMETHEUS (Grant 780701).
© 2020 International Association for Cryptologic Research
Cite this paper
Agrawal, S., et al.: Cryptography from One-Way Communication: On Completeness of Finite Channels. In: Moriai, S., Wang, H. (eds.) Advances in Cryptology – ASIACRYPT 2020. LNCS, vol. 12493. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-64840-4_22
Print ISBN: 978-3-030-64839-8. Online ISBN: 978-3-030-64840-4