1 Introduction

In the setting of secure two-party computation (2PC), the goal is to allow two mutually distrustful parties to compute some function of their private inputs. The computation should preserve some security properties, even in the face of adversarial behavior by one of the parties. The two most common types of adversaries are malicious adversaries (which may instruct the corrupted party to deviate from the prescribed protocol in an arbitrary way), and semi-honest adversaries (which must follow the instructions of the protocol, but may try to infer additional information based on the view of the corrupted party).

Oblivious transfer (OT) is a two-party functionality, fundamental to 2PC and the more general secure multiparty computation (MPC). It was first introduced by Rabin [28] and Even et al. [13]. In the setting of \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}\), there is a receiver holding a bit \(b\in \{0,1\}\), and a sender holding two bit-messages \(a_0,a_1\in \{0,1\}\). At the end of the interaction, the receiver learns \(a_b\) and nothing else, and the sender learns nothing. OT can be used to construct protocols, both in 2PC and MPC, with various security guarantees [6, 14, 22, 33]. Moreover, giving the parties access to an ideal process that computes OT securely is potentially useful. Constructing protocols in this model, called the OT-hybrid model, could be used to optimize the complexity of real-world, computationally secure protocols for several reasons. First, using the OT-precomputation paradigm of Beaver [4], the heavy computation of OT can often be pushed back to an off-line phase. This off-line phase is performed before the actual inputs for the computation (and possibly even the function to be computed) are known. Later, as the actual computation takes place, the precomputed OTs are very cheaply converted into actual OT interactions. Furthermore, the OT-extension paradigm of [5] offers a way to efficiently implement many OTs using a relatively small number of base OTs. This can be done using only symmetric-key primitives (e.g., one-way functions, pseudorandom generators). Moreover, it can also be used to implement \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-}s\text {-string-OT}\) using a sub-linear (in the security parameter) number of calls to \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}\) and some additional sub-linear work, assuming a strong variant of PRG [17]. Additionally, a variety of computational assumptions suffice to realize OT [27], and OT can even be realized with unconditional security under physical assumptions [10, 11, 21, 26, 32].
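To illustrate the OT-precomputation paradigm mentioned above, the following minimal Python sketch (the function names are ours and the sketch ignores how the offline random OT is actually produced) shows how a random OT correlation generated off-line is cheaply converted into a real \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}\) once the inputs are known.

```python
import secrets

def offline_phase():
    """Offline: generate a random OT correlation (no actual inputs needed yet).
    The sender gets two random bits (r0, r1); the receiver gets a random
    choice bit c together with r_c."""
    r0, r1 = secrets.randbits(1), secrets.randbits(1)
    c = secrets.randbits(1)
    return (r0, r1), (c, r1 if c else r0)

def online_phase(sender_msgs, choice, sender_corr, receiver_corr):
    """Online: convert the precomputed correlation into an OT on the actual
    inputs (a0, a1) and b, using only two cheap masked messages."""
    a0, a1 = sender_msgs
    b = choice
    (r0, r1), (c, rc) = sender_corr, receiver_corr
    d = b ^ c                                                 # receiver -> sender
    e0, e1 = a0 ^ (r1 if d else r0), a1 ^ (r0 if d else r1)   # sender -> receiver
    return (e1 if b else e0) ^ rc                             # receiver recovers a_b

# Example: the receiver with b = 1 learns a1 (and nothing about a0).
sender_corr, receiver_corr = offline_phase()
assert online_phase((0, 1), 1, sender_corr, receiver_corr) == 1
```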

An interesting family of two-party functionalities is the family of client-server functionalities, where only one party – the client – receives an output. In addition to the OT functionality mentioned earlier, client-server functionalities include many other examples. Securely computing some of these functionalities could be useful for many interesting applications, both in theory and in practice.

For client-server functionalities, a well-known result due to Kilian [22] asserts that OT is complete. That is, any two-party client-server functionality can be computed with unconditional security in the OT-hybrid model. Ishai, Prabhakaran, and Sahai [18] further showed that the protocol can be made efficient. Later, it was shown by Ishai et al. [19] that in the OT-hybrid model, every client-server functionality can be computed using a single round. Furthermore, the protocol’s computational and communication complexity are efficient for functions in \(\mathrm {NC}^1\). However, all of these results achieve only statistical security, namely, some error in security is allowed.

For the case of perfect security in this setting, much less is known. Given access to (many parallel) ideal computations for \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}\), Brassard et al. [8] showed how to compute the functionality \(\left( {\begin{array}{c}n\\ 1\end{array}}\right) \text {-}s\text {-string-OT}\), and Wullschleger [30] showed how to compute \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-TO}\), which is the same as \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}\) where the roles of the parties are reversed. Furthermore, the former protocol consists of a single round, in which the parties invoke the OT, with no additional bits sent over the channel between the parties. The latter protocol requires an additional bit to be sent by the server.

Observe that the result of [8] implies that any client-server functionality f can be computed with perfect security against semi-honest corruptions. Indeed, let n be the number of inputs in the client’s domain, and let s be the number of bits required to represent an output of f. The server will send to the \(\left( {\begin{array}{c}n\\ 1\end{array}}\right) \text {-}s\text {-string-OT}\) functionality all of the possible outputs with respect to its input, and the client will send its input. The client then outputs whatever it received from the OT. Clearly, the protocol is secure against semi-honest adversaries; however, in the malicious case this is not true in general. This is due to the fact that the server has complete control over the output of the client. For instance, for the “greater-than” function, the server can force the output of the client to be 1 if and only if the client’s input y is even. Therefore, we are only interested in security against malicious adversaries.
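For concreteness, the short Python sketch below (a hypothetical helper of ours, not part of the paper) spells out this semi-honest protocol: the server sends the whole column of outputs on its input to a single ideal \(\left( {\begin{array}{c}n\\ 1\end{array}}\right) \text {-}s\text {-string-OT}\), and the client selects the entry matching its own input.

```python
def semi_honest_protocol(f, server_input, client_inputs, client_input):
    """Semi-honest computation of a client-server functionality f using one
    ideal 1-out-of-n string OT: the server sends the column f(x, .), the
    client selects the entry indexed by its input."""
    # Server: all possible outputs with respect to its input x.
    ot_table = [f(server_input, y) for y in client_inputs]
    # Ideal OT: the client learns exactly one entry, the server learns nothing.
    choice = client_inputs.index(client_input)
    return ot_table[choice]

# Example with the "greater-than" function on inputs 1..4.
gt = lambda x, y: int(y >= x)
assert semi_honest_protocol(gt, 3, [1, 2, 3, 4], 2) == 0
assert semi_honest_protocol(gt, 3, [1, 2, 3, 4], 4) == 1
```

As the paragraph above notes, a malicious server is free to replace the OT table with arbitrary values, which is exactly why this simple protocol fails against malicious adversaries.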

Ishai et al. [20] studied perfectly secure multiparty computation in the correlated randomness model. They showed that any multiparty client-server functionality can be computed with perfect security, when the parties have access to correlated randomness whose correlation depends on the function to be computed by the parties.

There are also various client-server functionalities that can be computed trivially (even in the plain model). For example, the XOR functionality can be computed by having the server send its input to the client. These simple examples suggest that fairness is not a necessary condition for being able to compute a function perfectly in the client-server model.

Thus, the state of affairs is that most two-party client-server functionalities remain unclassified with respect to perfect security in the OT-hybrid model. In this work we address the following natural questions.

Which client-server functionalities can be computed with perfect security against malicious adversaries in the OT-hybrid model? What is the round complexity of such protocols?

These questions have an obvious theoretical appeal, and answering them could help us gain a better understanding of general secure computation. In addition, perfect security may be useful for designing multiparty protocols with high concrete efficiency, achieved by eliminating the dependency on a security parameter.

We stress that, under the assumption that \(\mathrm {NP}\not \subseteq \mathrm {BPP}\), it is impossible to achieve completeness theorems in our setting, similar to the completeness theorems of Kilian [22]. Indeed, suppose the parties want to compute an \(\mathrm {NP}\) relation with perfect zero-knowledge and perfect soundness. Then this is impossible even when the parties are given access to any ideal functionality with no input (distributing some kind of correlated randomness) [20]. This is due to the fact that if such a protocol did exist, then one could use the simulator to decide the relation, putting it in \(\mathrm {BPP}\). Since \({\text {OT}}\) can be perfectly reduced to a suitable no-input functionality, this implies that no such protocol exists in the \({\text {OT}}\)-hybrid model.

1.1 Our Results

Our main result is that if the parties have access to many parallel ideal computations of \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}\), then most client-server functionalities where the server’s domain is larger than the client’s domain can be computed with perfect full-security in a single round. Interestingly, the set of functions that we are able to compute was previously identified by Asharov [2] in the context of fairness in two-party computation, who named these functions full-dimensional.

Let \(f : \mathcal {X}\times \mathcal {Y}\mapsto \{0,1\}\) be a function, where the server’s domain size \(|\mathcal {X}|\) is larger than the client’s domain size \(|\mathcal {Y}|\). Write \(\mathcal {X}=\left\{ x_1,\ldots ,x_n\right\} \) and \(\mathcal {Y}=\left\{ y_1,\ldots ,y_m\right\} \). We consider the geometric representation of f as \(|\mathcal {X}|\) points over \({\mathbb R}^{|\mathcal {Y}|}\), where the j-th coordinate of the i-th point is simply \(f(x_i, y_j)\). We then consider the convex polytope defined by these points. The function is called full-dimensional if the dimension of the polytope is exactly \(|\mathcal {Y}|\), e.g., a triangle in the plane. We prove the following theorem:

Theorem 1

(Informal). Let \(f:\mathcal {X}\times \mathcal {Y}\mapsto \{0,1\}\) be a client-server functionality. If f is full-dimensional, then it can be computed with perfect full-security in the OT-hybrid model in a single round. Furthermore, the number of OT calls is \(O\left( {\text {poly}}\left( |\mathcal {Y}| \right) \right) \).

In fact, we generalize the above theorem, and we give a similar criterion for randomized non-Boolean functions. The class of functions that our protocol can compute can be further extended by letting the client have inputs that fix the output. This class of functions includes many interesting examples, such as Yao’s “millionaires’ problem” (the “greater-than” function). Here the parties have inputs that range from 1 to n, and the output of the client is 1 if and only if its input is greater than or equal to the server’s input. The communication complexity of our protocol is polynomial in the client’s domain size, and not in its input’s size. For functions with small domain, however, this does improve upon known constructions that achieve statistical security (e.g., the single round protocol by Ishai et al. [19], see Sect. 7 for more details).

It was proven in [2] that the fraction of full-dimensional functions tends to 1 exponentially fast as \(|\mathcal {X}|\) and \(|\mathcal {Y}|\) grow. Specifically, a random function with domain sizes \(|\mathcal {X}|=m+1\) and \(|\mathcal {Y}|=m\) will be full-dimensional with probability at least \(1-p_m\), where \(p_m\) denotes the probability that a random Boolean \(m\times m\) matrix is singular. The value \(p_m\) is conjectured to be \((1/2+o(1))^m\). Currently, the best known upper bound is \((1/\sqrt{2}+o(1))^m\), proved by [31].

Theorem 1 identifies a set of client-server functionalities that are computable with perfect full-security. It does not yield a full characterization of such functions. For example, the status of the equality function \(\mathsf {3EQ}:\left\{ x_1,x_2,x_3\right\} \times \left\{ y_1,y_2,y_3\right\} \mapsto \{0,1\}\), defined as \(\mathsf {3EQ}(x_i,y_j)=1\) if and only if \(i=j\), is currently unknown. However, for the case of Boolean functions (even randomized), we are able to show that the protocol suggested in the proof of Theorem 1 computes only full-dimensional functions.

1.2 Our Techniques

The protocol we suggest is a variation of the protocol of Ishai et al. [19]. Viewing the protocol abstractly, in addition to the computation of some related function, the server will also send (via the OT) a proof of correct behavior. The client will use the OT functionality to learn only a few random bits from the proof so that privacy is preserved. We next give a technical overview of our construction.

In our construction, we make use of perfect randomized encoding (PRE) [1]. A PRE \(\hat{f}\) of a function f is a randomized function, such that for every input x and a uniformly random choice of the randomness r, it is possible to decode \(\hat{f}(x;r)\) and compute f(x) with no error. In addition, the output distribution of \(\hat{f}\) on input x reveals no information about x except what follows from f(x). For our construction, we rely on a property called decomposability. A PRE is said to be decomposable, if it can be written as \(\hat{f}=\left( \hat{f}_1,\ldots ,\hat{f}_n \right) \). Here, each \(\hat{f}_i\) can be written as one of two vectors that depends on the i-th bit of x, i.e., we can write it as \(\mathbf{v}_{i,x_i}\), where \((\mathbf{v}_{i,0},\mathbf{v}_{i,1})\) depends on the randomness r. This definition can be viewed as the perfect version of garbled circuits [24, 33].

Our starting point is the protocol of Ishai et al. [19], which will be dubbed the IKOPS protocol. It is a single round protocol in the OT-hybrid model that achieves statistical security. It allows the parties to compute a “certified OT” functionality. We next give a brief overview of the IKOPS protocol.

The main idea behind the IKOPS protocol is to have the server run an “MPC in the head” [18]. That is, the real server locally emulates the execution of a perfectly secure protocol \(\varPi \) with many virtual servers performing the computation, and 2m virtual clients, denoted \(\mathsf {C}_{1,0},\mathsf {C}_{1,1},\ldots ,\mathsf {C}_{m,0},\mathsf {C}_{m,1}\), receiving output, where m is the number of bits in the real client’s input y. The underlying protocol \(\varPi \) computes (and distributes among the clients) a decomposable PRE \(\hat{f} = (\hat{f}_1,\ldots ,\hat{f}_m)\) of f. Specifically, the inputs of the virtual servers are secret shares of the real server’s input x and randomness r. The output of the virtual client \(\mathsf {C}_{j,b}\) in an execution of \(\varPi \) is \(\hat{f}_j(b;r)\), i.e., the part of the encoding that corresponds to the j-th bit of y being equal to b.

The real client can then use OT in order to recover the correct output of the PRE and reconstruct the output \(f(x,y)\). As part of the “MPC in the head” paradigm, the client and server jointly set up a watchlist (the views of some of the virtual servers) allowing the client to check consistency between the virtual servers’ views and the virtual clients’ views. If there is an inconsistency, the client outputs \(f(x_0,y)\) for some default value \(x_0\in \mathcal {X}\). However, it is unclear how to have the server send only some of the views according to the request of the client. Ishai et al. [19] handle this by letting the client get each view with some constant probability independently of the other views.

The security of the protocol as described so far can still be breached by a malicious server. By tampering with the outputs of the virtual clients, a malicious server could force the output of the real client to be \(f(x,y)\) for some inputs y and force the output to be \(f(x_0,y)\) for other values of y, where the choice is completely determined by the adversary. To overcome this problem, the function f is replaced with a function \(f'\) where each bit \(y_i\) is replaced with \(\kappa \) random bits whose XOR equals \(y_i\), where \(\kappa \) is the security parameter. This modification prevents the adversary from having complete control over the inputs for which the client will output \(f(x_0,y)\) and those for which it will output \(f(x,y)\).
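The re-encoding of the client’s input described above is simply a \(\kappa \)-wise additive (XOR) sharing of each bit. A minimal sketch (the helper names are our own) of the encoding used when forming \(f'\), and of how \(f'\) internally recovers \(y_i\):

```python
import secrets

def encode_bit(y_i, kappa):
    """Replace the bit y_i by kappa random bits whose XOR equals y_i."""
    shares = [secrets.randbits(1) for _ in range(kappa - 1)]
    last = y_i
    for s in shares:
        last ^= s
    return shares + [last]

def decode_bit(shares):
    """What f' computes internally: XOR the kappa bits back into y_i."""
    b = 0
    for s in shares:
        b ^= s
    return b

assert decode_bit(encode_bit(1, kappa=8)) == 1
assert decode_bit(encode_bit(0, kappa=8)) == 0
```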

Two problems arise when trying to use the IKOPS protocol to achieve perfect security. First, a malicious client could potentially receive the views of all virtual servers, and as a result, it could learn the server’s input. Second, with some non-zero probability, a malicious server might still be able to make the client output \(f(x_0,y)\) for some inputs y, but output \(f(x,y)\) for other inputs y.

We solve the former issue by showing how the client can request views deterministically. We would like to have the request be made using \(\left( {\begin{array}{c}n\\ t\end{array}}\right) \text {-}s\text {-string-OT}\), where t bounds the number of corruptions allowed in \(\varPi \), namely, the client asks for exactly t views. However, it is not known whether implementing it in the OT-hybrid model with perfect security is even possible. Therefore, we slightly relax the security requirement, so that a malicious client will not be able to receive more than twice the number of views that an honest client receives. We then let the honest client ask for exactly t/2 of the views. The idea in constructing such a watchlist is the following. For each view of a virtual server, the real server sends (via the OT functionality) either a masking of the view, or a share of the concatenation of the maskings. That is, the server’s input to the OT is \((V_i \oplus r_i,\mathbf{r}[i])\) for every view \(V_i\) of a virtual server \(\mathsf {S}_i\), where \(\mathbf{r}=\left( r_1,\ldots ,r_n \right) \) is a vector of random strings, and \(\mathbf{r}[i]\) is the i-th share of \(\mathbf{r}\), for some threshold secret sharing scheme with a sufficiently large threshold value. As a result, in each invocation of the OT, the client will be able to learn either a masked view or a share, which bounds the number of views it can receive.
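The sketch below (our own illustrative code, not the paper’s construction) shows how the server’s OT input pairs \((V_i \oplus r_i,\mathbf{r}[i])\) could be assembled, treating views and masks as byte strings and using a toy Shamir-style sharing over GF(257) applied byte by byte; the field, the threshold handling, and the function names are assumptions made for the sake of the example.

```python
import secrets

P = 257  # a small prime field; each byte of a secret is shared separately

def shamir_share_byte(byte, t, n):
    """(t+1)-out-of-n Shamir sharing of one byte over GF(257)."""
    coeffs = [byte] + [secrets.randbelow(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def watchlist_ot_inputs(views, t):
    """Build the server's OT inputs: for each virtual-server view V_i the pair
    (V_i XOR r_i, i-th share of the concatenation of all masks r_1,...,r_n)."""
    n = len(views)
    masks = [bytes(secrets.randbits(8) for _ in range(len(v))) for v in views]
    masked_views = [bytes(a ^ b for a, b in zip(v, r))
                    for v, r in zip(views, masks)]
    concat = b"".join(masks)
    byte_shares = [shamir_share_byte(b, t, n) for b in concat]
    # The i-th OT pair carries the i-th share of every byte of the masks.
    shares = [[bs[i] for bs in byte_shares] for i in range(n)]
    return list(zip(masked_views, shares))

pairs = watchlist_ot_inputs([b"view-1", b"view-2", b"view-3", b"view-4"], t=2)
assert len(pairs) == 4   # one (masked view, mask share) pair per virtual server
```

In each OT invocation the client learns exactly one component of the pair, so learning a masked view costs it the corresponding mask share, which is what bounds the number of views a malicious client can unmask.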

To solve the second issue, it will be convenient to represent the server security requirement from a geometric point of view. To simplify the explanation in this introduction, we only focus on deterministic Boolean functions. Recall that we can view the function f as \(|\mathcal {X}|\) points over \({\mathbb R}^{|\mathcal {Y}|}\), where the j-th coordinate of the i-th point is simply \(f(x_i, y_j)\). Observe that all a simulator for a malicious server can do is to send a random input according to some distribution D. The goal of the simulator is to force the distribution of the client’s output to be equivalent to the distribution in the real-world. Thus, perfect simulation of a malicious server is possible if and only if there exists such a distribution D over the server’s inputs in the ideal-world, such that for every input \(y\in \mathcal {Y}\) of the client, \(\Pr _{x\leftarrow D}[f(x,y)=1]=q_y\), where \(q_y\) is the probability the client outputs 1 in the real-world when its input is y. Since for every \(y\in \mathcal {Y}\) the value \(\Pr _{x\leftarrow D}[f(x,y)=1]\) can be written as the same convex combination of the points \(\left\{ f(x_i,y)\right\} _{i=1}^{|\mathcal {X}|}\), the point \(\left( \Pr _{x\leftarrow D}[f(x,y)=1] \right) _{y\in \mathcal {Y}}\) lies inside the convex hull of the points of f. Thus, we can state perfect security as follows. Simulation of an adversary is possible if and only if the vector of outputs \(\left( q_y \right) _{y\in \mathcal {Y}}\) in the real-world is in the convex-hull of the points in \({\mathbb R}^{|\mathcal {Y}|}\) described by f.
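This convex-hull condition is a linear-programming feasibility question. The sketch below (our own helper, using scipy as an arbitrary choice) checks whether a given output vector lies in the convex hull of the rows of \(M_f\) by searching for a probability vector \(\mathbf{p}\) with \(M_f^T\mathbf{p}=\mathbf{q}\).

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(M_f, q):
    """Return True iff q is a convex combination of the rows of M_f,
    i.e., there is a probability vector p with M_f^T p = q."""
    n_rows = M_f.shape[0]
    # Feasibility LP: find p >= 0 with M_f^T p = q and sum(p) = 1.
    A_eq = np.vstack([M_f.T, np.ones((1, n_rows))])
    b_eq = np.concatenate([q, [1.0]])
    res = linprog(c=np.zeros(n_rows), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n_rows, method="highs")
    return res.success

# The rows of M_f for the one-bit XOR function are (0,1) and (1,0).
M_xor = np.array([[0.0, 1.0], [1.0, 0.0]])
assert in_convex_hull(M_xor, np.array([0.5, 0.5]))      # achievable by a random x
assert not in_convex_hull(M_xor, np.array([0.9, 0.9]))  # not a mixture of the rows
```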

Now, consider the IKOPS protocol. It could be the case that the vector of outputs has different errors in each coordinate created by an adversary, and hence is not necessarily inside the convex-hull of the points of f. To fix this issue, instead of having the client output according to a default value in case of an inconsistency, the client will now pick \(x_0\) uniformly at random, and output \(f(x_0,y)\). Stated differently, it outputs according to \(c_y\), where \(\mathbf{c}\) is the center of the polytope. We next (roughly) explain why this results in a perfectly secure protocol. Let p denote the probability of detecting an inconsistency (more precisely, for each y the probability \(p_y\) of detecting an inconsistency is in \([p-\varepsilon ,p+\varepsilon ]\), for some small \(\varepsilon \)). Further define the matrix \(M_f(x,y)=f(x,y)\) (i.e., each row of \(M_f\) describes a point in \({\mathbb R}^{|\mathcal {Y}|}\)). Thus, the output vector of the client is close to the point \(\mathbf{q}=p\cdot \mathbf{c}+ (1-p)\cdot M_f(x,\cdot )\), give or take \(\pm \varepsilon \) in each coordinate, for some small \(\varepsilon \). If p is close to 1, this point \(\mathbf{q}\) is close to \(\mathbf{c}\), and since \(\mathbf{c}\) is an internal point, \(\mathbf{q}\) is also internal for a sufficiently small \(\varepsilon \). Otherwise, the point \(\mathbf{q}\) will be close to the boundary, and it is not immediately clear why perfect security holds. Here, we utilize a special property of the IKOPS protocol’s security. We manage to prove that \(\varepsilon \) is bounded by \(p\cdot \varepsilon '\), for some small \(\varepsilon '\). That is, \(\varepsilon \) depends on p, unlike in the standard security requirement. This property allows us to prove that perfect security holds.

1.3 Related Work

In the 2PC setting, Cleve [9] showed that the functionality of coin-tossing, where the parties output the same random bit, is impossible to compute with full-security, even in the OT-hybrid model. In spite of that, in the seminal work of Gordon et al. [15], later followed by [2, 3, 12, 25], it was discovered that in the OT-hybrid model, most two-party functionalities can be evaluated with full security by efficient protocols. In particular, [3] completes the characterization of symmetric Boolean functions (where both parties receive the same output). However, all known general protocols for such functionalities have round complexity that is super-logarithmic in the security parameter. Moreover, this was proven to be necessary for functions with an embedded XOR [15].

1.4 Organization

In Sect. 2 we provide some notations and definitions that we use in this work, alongside some required mathematical background. Section 3 is dedicated to expressing security in geometrical terms and the formal statement of our result. In Sects. 4 and 5 we present the proof of the main theorem. In Sect. 6 we show that the analysis of our protocol for Boolean functions is tight. Finally, in Sect. 7 we briefly discuss the efficiency of our construction.

2 Preliminaries

2.1 Notations

We use calligraphic letters to denote sets, uppercase for random variables and matrices, lowercase for values, and we use bold characters to denote vectors and points. All logarithms are in base 2. For \(n\in {\mathbb {N}}\), let \([n]=\{1,2,\ldots ,n\}\). For a set \(\mathcal {S}\) we write \(s\leftarrow \mathcal {S}\) to indicate that s is selected uniformly at random from \(\mathcal {S}\). Given a random variable (or a distribution) X, we write \(x\leftarrow X\) to indicate that x is selected according to X. We use \({\text {poly}}\) to denote an unspecified polynomial, and we use \({\text {polylog}}\) to denote an unspecified polylogarithmic function. For a randomized function (or an algorithm) f we write f(x) to denote the random variable induced by the function on input x, and write \(f(x;r)\) to denote the value when the randomness of f is fixed to r.

For a vector \(\mathbf{v}\in {\mathbb R}^n\), we denote its i-th component with \(v_i\) and we let \(\left| \left| \mathbf{v}\right| \right| _{\infty }=\max _i |v_i|\) denote its \(\ell _\infty \) norm. We denote by \( \mathbf{1 }_n\) (\( \mathbf{0 }_n\)) the all-ones (all-zeros) vector of dimension n. A vector \(\mathbf{p}\in {\mathbb R}^n\) is called a probability vector if \(p_i\ge 0\) for every \(i\in [n]\) and \(\sum _{i=1}^n p_i=1\).

For a matrix \(M\in {\mathbb R}^{n\times m}\), we let \(M\left( i,\cdot \right) \) be its i-th row, we let \(M\left( \cdot ,j \right) \) be its j-th column, and we denote by \(M^T\) the transpose of M. For a pair of matrices \(M_1\in {\mathbb R}^{n\times m_1},M_2\in {\mathbb R}^{n\times m_2}\), we denote by \(\left[ M_1||M_2\right] \) the concatenation of \(M_2\) to the right of \(M_1\).

2.2 Cryptographic Tools

Definition 1

The statistical distance between two finite random variables X and Y is

$${\text {SD}}\left( X,Y \right) =\frac{1}{2}\sum _{a}\left|\Pr \left[ X=a\right] -\Pr \left[ Y=a\right] \right|.$$
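For concreteness, a direct Python transcription of this definition for finite distributions given as dictionaries (a small helper of our own, not used elsewhere in the paper):

```python
def statistical_distance(X, Y):
    """SD(X, Y) = 1/2 * sum_a |Pr[X = a] - Pr[Y = a]| for finite distributions
    given as {outcome: probability} dictionaries."""
    support = set(X) | set(Y)
    return 0.5 * sum(abs(X.get(a, 0.0) - Y.get(a, 0.0)) for a in support)

# A fair coin and a coin with bias 3/4 are at statistical distance 1/4.
assert statistical_distance({0: 0.5, 1: 0.5}, {0: 0.25, 1: 0.75}) == 0.25
```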

Secret Sharing Schemes. A \((t+1)\)-out-of-n secret-sharing scheme is a mechanism for sharing data among a set of parties \(\left\{ {\text {P}}_1,\ldots ,{\text {P}}_n\right\} \), such that every set of size at least \(t+1\) can reconstruct the secret, while any smaller set knows nothing about the secret. As a convention, for a secret s and \(i\in [n]\) we let s[i] be the i-th share, namely, the share received by \({\text {P}}_i\). In this work, we rely on Shamir’s secret sharing scheme [29].

In a \((t+1)\)-out-of-n Shamir’s secret sharing scheme over a field \(\mathbb {F}\), where \(|\mathbb {F}|>n\), a secret \(s\in \mathbb {F}\) is shared as follows: A polynomial \(p(\cdot )\) of degree at most t over \(\mathbb {F}\) is picked uniformly at random, conditioned on \(p(0)=s\). Each party \({\text {P}}_i\), for \(1\le i\le n\), receives a share \(s[i]:=p(i)\) (we abuse notation and let i be the element in \(\mathbb {F}\) associated with \({\text {P}}_i\)).
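A minimal, non-optimized sketch of Shamir sharing and reconstruction over a prime field (the particular prime and the helper names are our own choices, made only for illustration):

```python
import secrets

P = 2_147_483_647  # a prime larger than n, used as the field size

def share(s, t, n):
    """Share s with a uniformly random polynomial of degree at most t
    satisfying p(0) = s; party i receives s[i] = p(i)."""
    coeffs = [s] + [secrets.randbelow(P) for _ in range(t)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t+1 (or more) shares."""
    s = 0
    for i, y_i in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        s = (s + y_i * num * pow(den, -1, P)) % P
    return s

secret = 123456
all_shares = share(secret, t=2, n=5)
subset = {i: all_shares[i] for i in (1, 3, 5)}   # any 3 shares suffice
assert reconstruct(subset) == secret
```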

Decomposable Randomized Encoding. We recall the definition of randomized encodings [1, 33]. They are known to exist unconditionally [1, 16].

Definition 2

(Randomized Encoding). Let \(f:\{0,1\}^n\mapsto \mathcal {Z}\) be some function. We say that a function \(\hat{f}:\{0,1\}^n\times \mathcal {R}\mapsto \mathcal {W}\) is a perfect randomized encoding (PRE) of f if the following holds.

  • Correctness: There exists a decoding algorithm \(\mathsf {Dec}\) such that for every \(x\in \{0,1\}^n\)

    $$\Pr _{r\leftarrow \mathcal {R}}\left[ \mathsf {Dec}\left( \hat{f}\left( x;r \right) \right) =f(x)\right] =1.$$
  • Privacy: There exists a randomized algorithm \(\mathsf {Sim}\) such that for every \(x\in \{0,1\}^n\) it holds that

    $$\mathsf {Sim}\left( f(x) \right) \equiv \hat{f}\left( x;r \right) ,$$

    where \(r\leftarrow \mathcal {R}\).

Definition 3

(Decomposable Randomized Encoding). For every \(x\in \{0,1\}^n\), we write \(x=x_1,\ldots ,x_n\), where \(x_i\) is the i-th bit of x. A randomized encoding \(\hat{f}\) is said to be decomposable if it can be written as

$$\hat{f}\left( x;r \right) =\left( \hat{f}_0\left( r \right) ,\hat{f}_1\left( x_1;r \right) ,\ldots ,\hat{f}_n\left( x_n;r \right) \right) ,$$

where each \(\hat{f}_i\), for \(i\in [n]\), can be written as one of two vectors that depends on \(x_i\), i.e., we can write it as \(\mathbf{v}_{i,x_i}\), where \((\mathbf{v}_{i,0},\mathbf{v}_{i,1})\) depends on the randomness r.
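A toy example of a decomposable PRE, for the two-bit XOR function \(f(x_1,x_2)=x_1\oplus x_2\): each piece depends on a single input bit (here the input-independent part \(\hat{f}_0\) is empty), decoding is error-free, and the simulator reproduces the output distribution exactly. The code and names below are illustrative only.

```python
import secrets

def encode(x1, x2, r):
    """Decomposable encoding of f(x1, x2) = x1 XOR x2:
    the first piece depends only on x1, the second only on x2 (given r)."""
    return (x1 ^ r, x2 ^ r)

def decode(w):
    """Perfect correctness: XOR of the two pieces equals f(x)."""
    return w[0] ^ w[1]

def simulate(fx):
    """Perfect privacy: given only f(x), sample the exact output distribution
    of the encoding, namely a uniform pair whose XOR equals f(x)."""
    u = secrets.randbits(1)
    return (u, u ^ fx)

x1, x2 = 1, 0
r = secrets.randbits(1)
assert decode(encode(x1, x2, r)) == (x1 ^ x2)
assert decode(simulate(x1 ^ x2)) == (x1 ^ x2)
```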

2.3 Mathematical Background

Definition 4

(Convex Combination and Convex Hull). Let \(\mathcal {V}=\{\mathbf{v}_1,\ldots ,\mathbf{v}_m\}\subseteq {\mathbb R}^n\) be a set of vectors. A convex combination is a linear combination \(\sum _{i=1}^m \alpha _i\cdot \mathbf{v}_i\) where \(\sum _{i=1}^m\alpha _i=1\) and \(\alpha _i\ge 0\) for all \(1\le i\le m\). The convex hull of \(\mathcal {V}\), denoted

$$ {\mathbf {conv}\left( \mathcal {V} \right) }=\left\{ \sum _{i=1}^m\alpha _i\cdot \mathbf{v}_i\;|\;\sum _{i=1}^m\alpha _i=1\text { and } \alpha _i\ge 0 \text { for all } i\in [m]\right\} ,$$

is the set of all vectors that can be represented as a convex combination of the vectors in \(\mathcal {V}\). For a matrix \(M=[\mathbf{v}_1||\ldots ||\mathbf{v}_m]\) we let \( {\mathbf {conv}\left( M \right) }= {\mathbf {conv}\left( \left\{ \mathbf{v}_1,\ldots ,\mathbf{v}_m\right\} \right) }\).

Definition 5

(Affine Hull). For a set of vectors \(\mathcal {V}=\{\mathbf{v}_1,\ldots ,\mathbf{v}_m\}\subseteq {\mathbb R}^n\), we define their affine hull to be the set

$$ {\mathbf {aff}}\left( \mathcal {V} \right) =\left\{ \sum _{i=1}^m\alpha _i\cdot \mathbf{v}_i\;|\;\sum _{i=1}^m\alpha _i=1\right\} .$$

For a matrix \(M=[\mathbf{v}_1||\ldots ||\mathbf{v}_m]\) we let \( {\mathbf {aff}}\left( M \right) = {\mathbf {aff}}\left( \left\{ \mathbf{v}_1,\ldots ,\mathbf{v}_m\right\} \right) \).

Definition 6

(Affine Independence). A set of points \(\mathbf{v}_1,\ldots ,\mathbf{v}_m\in {\mathbb R}^n\) is said to be affinely independent if whenever \(\sum _{i=1}^m\alpha _i\cdot \mathbf{v}_i= \mathbf{0 }_n\) and \(\sum _{i=1}^m\alpha _i=0\), then \(\alpha _i=0\) for every \(i\in [m]\). Observe that \(\mathbf{v}_1,\ldots ,\mathbf{v}_m\) are affinely independent if and only if \(\mathbf{v}_2-\mathbf{v}_1,\ldots ,\mathbf{v}_m-\mathbf{v}_1\) are linearly independent.

For a square matrix \(M\in {\mathbb R}^{n\times n}\), we denote by \({\text {det}}\left( M \right) \) the determinant of M, and we denote by \(M_{i,j}\) the \((n-1)\times (n-1)\) submatrix obtained by removing the i-th row and j-th column of M. It is well known that:

Fact 2

Let \(M\in {\mathbb R}^{n\times n}\) be an invertible matrix. Then for every \(i,j\in [n]\) it holds that \(\left|M^{-1}\left( i,j \right) \right|=\left|{\text {det}}\left( M_{j,i} \right) /{\text {det}}\left( M \right) \right|\).

2.4 The Model of Computation

We follow the standard ideal vs. real paradigm for defining security. Intuitively, the security notion is defined by describing an ideal functionality, in which both the corrupted and non-corrupted parties interact with a trusted entity. A real-world protocol is deemed secure if an adversary in the real-world cannot cause more harm than an adversary in the ideal-world. This is captured by showing that an ideal-world adversary (simulator) can simulate the full view of the real world adversary.

We focus our attention on the client-server model. In this model a server \(\mathsf {S}\) holds some input x and a client \(\mathsf {C}\) holds some input y. At the end of the interaction the client learns the output of some function of x and y, while the server learns nothing. We further restrict ourselves to allow only a single round of interaction between the two parties. However, as only trivial functionalities are computable in this setting, the parties interact in the \(\mathcal {OT}\text{- }\text {hybrid}\) model. We next formalize the interaction done in this model.

The OT Functionality. We start by formally defining the family of OT functionalities. The \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}\) functionality is a two-party client-server functionality in which the server inputs a pair of bit-messages \(a_0\) and \(a_1\), and the client inputs a single bit b. The server receives \(\bot \) and the client receives \(a_b\). For every natural number \(\ell \ge 1\), we define the functionality \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}^{\ell }\) as follows. Let \(\mathbf{a}=\left( a_0^i,a_1^i \right) _{i=1}^{\ell }\) and let \(\mathbf{b}=\left( b_i \right) _{i=1}^{\ell }\), where \(a_0^i,a_1^i,b_i\in \{0,1\}\) for every i. We let \(\mathbf{a}[\mathbf{b}]:=\left( a^i_{b_i} \right) _{i=1}^{\ell }\). The functionality is then defined as \(\left( \mathbf{a},\mathbf{b} \right) \mapsto \left( \bot ,\mathbf{a}[\mathbf{b}] \right) \). That is, it is equivalent to computing \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}\) \(\ell \) times in parallel. Finally, we let \(\mathcal {OT}=\left\{ \left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}^{\ell }\right\} _{\ell \ge 1}\).

A generalization of \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}\) is the \(\left( {\begin{array}{c}n\\ 1\end{array}}\right) \text {-bit-OT}\) functionality, which lets the client pick one out of n bits \(a_1,a_2,\ldots ,a_n\) supplied by the server, and on input \(i\in [n]\) the client learns \(a_i\). This can be further generalized to \(\left( {\begin{array}{c}n\\ 1\end{array}}\right) \text {-}s\text {-string-OT}\), where the n bits are replaced by strings \(a_1,\ldots ,a_n\in \{0,1\}^s\), and this can be generalized even further to \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \text {-}s\text {-string-OT}\), where the input i of the client is replaced with k inputs \(i_1,\ldots ,i_k\in [n]\), and it receives \(a_{i_1},\ldots ,a_{i_k}\).

The 1-Round \(\varvec{\mathcal {OT}}{} \mathbf -Hybrid \) Model. We next describe the execution in the 1-round \(\mathcal {OT}\)-hybrid model. In the following we fix a (possibly randomized) client-server functionality \(f:\mathcal {X}\times \mathcal {Y}\mapsto \left\{ 0,\ldots ,k-1\right\} \). A protocol \(\varPi \) in the 1-round \(\mathcal {OT}\)-hybrid model with security parameter \(\kappa \) is a triple of randomized functions \(\left( \alpha ,\beta ,\varphi \right) \). The server and the client use the functions \(\alpha \) and \(\beta \), respectively, to obtain the messages they send to the OT. The client then computes some local function \(\varphi \) on its view to obtain an output. Formally, the computation is done as follows.

  • Inputs: The server \(\mathsf {S}\) holds input \(x\in \mathcal {X}\) and the client \(\mathsf {C}\) holds input \(y\in \mathcal {Y}\). In addition, both parties hold the security parameter \(1^{\kappa }\).

  • Parties send inputs to the OT: \(\mathsf {S}\) samples \(2\ell \left( \kappa \right) \) bits \(\mathbf{a}=\alpha \left( x,1^{\kappa } \right) \), and \(\mathsf {C}\) samples \(\ell \left( \kappa \right) \) bits \(\mathbf{b}=\beta \left( y,1^{\kappa } \right) \), for some \(\ell (\cdot )\) determined by the protocol. \(\mathsf {S}\) and \(\mathsf {C}\) send \(\mathbf{a}\) and \(\mathbf{b}\) to the OT functionality, respectively. \(\mathsf {C}\) then receives \(\mathbf{a}[\mathbf{b}]\) from the OT.

  • Outputs: The server \(\mathsf {S}\) outputs nothing, while the client \(\mathsf {C}\) computes the local function \(\varphi \left( y,\mathbf{b},\mathbf{a}[\mathbf{b}],1^{\kappa } \right) \) and outputs its result.

We refer to the \(\ell \left( \kappa \right) \) used in the protocol as the communication complexity (\({\text {CC}}\)) of \(\varPi \).
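The execution above amounts to a single selection step. The following skeleton (our own rendering of the model, with a toy protocol for one-bit XOR as the triple \((\alpha ,\beta ,\varphi )\); the toy protocol is only meant to illustrate correctness, not server security) makes the flow explicit.

```python
def select(a, b):
    """The OT functionality a[b]: in every call the client learns a_i^{b_i}."""
    return tuple(a[i][b[i]] for i in range(len(b)))

def run_protocol(alpha, beta, phi, x, y):
    """One round in the OT-hybrid model: the parties derive their OT inputs
    via alpha and beta, invoke the OT, and the client applies phi."""
    a = alpha(x)              # server's ell pairs of bits
    b = beta(y)               # client's ell choice bits
    return phi(y, b, select(a, b))

# Toy protocol computing the XOR of two bits: the server's single pair is
# (x, 1 - x), the client selects with its own bit and outputs the result.
alpha = lambda x: ((x, 1 - x),)
beta = lambda y: (y,)
phi = lambda y, b, received: received[0]

for x in (0, 1):
    for y in (0, 1):
        assert run_protocol(alpha, beta, phi, x, y) == x ^ y
```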

We consider an adversary \(\mathcal {A}\) that controls a single party. The adversary has access to the full view of that party. We assume the adversary is malicious, that is, it may instruct the corrupted party to deviate from the protocol in any way it chooses. The adversary is non-uniform, and is given an auxiliary input \(\mathsf {aux}\). For simplicity we do not concern ourselves with the efficiency of the protocols or the adversaries, namely, we assume that the parties and the adversary are unbounded.

Fix inputs \(x\in \mathcal {X}\), \(y\in \mathcal {Y}\), and \(\kappa \in {\mathbb {N}}\). For an adversary \(\mathcal {A}\) corrupting the server, we let \({\text {Out}}^{{\text {HYBRID}}}_{\mathcal {A}\left( x,\mathsf {aux} \right) ,\varPi }\left( x,y,1^{\kappa } \right) \) denote the output of the client in a random execution of \(\varPi \). For an adversary \(\mathcal {A}\) corrupting the client, we let \({\text {View}}^{{\text {HYBRID}}}_{\mathcal {A}\left( y,\mathsf {aux} \right) ,\varPi }\left( x,y,1^{\kappa } \right) \) denote the adversary’s view in a random execution of \(\varPi \), when it corrupts the client. This includes its input, auxiliary input, randomness, and the output received from the OT functionality.

The Ideal Model. We now describe the interaction in the ideal model, which specifies the requirements for fully secure computation of the function f with security parameter \(\kappa \). Let \(\mathcal {A}\) be an adversary in the ideal-world, which is given an auxiliary input \(\mathsf {aux}\) and corrupts one of the parties.

The Ideal Model – Full-Security

  • Inputs: The server \(\mathsf {S}\) holds input \(x\in \mathcal {X}\) and the client \(\mathsf {C}\) holds input \(y\in \mathcal {Y}\). The adversary is given an auxiliary input \(\mathsf {aux}\in \{0,1\}^{*}\) and the input of the corrupted party. The trusted party \(\mathsf {T}\) holds \(1^{\kappa }\).

  • Parties send inputs: The honest party sends its input to \(\mathsf {T}\). The adversary sends a value w from its domain as the input for the corrupted party.

  • The trusted party performs computation: \(\mathsf {T}\) selects a random string r and computes \(z=f\left( x,w;r \right) \) if \(\mathsf {C}\) is corrupted and computes \(z=f(w,y;r)\) if \(\mathsf {S}\) is corrupted. \(\mathsf {T}\) then sends z to \(\mathsf {C}\) (which is also given to \(\mathcal {A}\) if \(\mathsf {C}\) is corrupted).

  • Outputs: An honest server outputs nothing, an honest client outputs z, and the malicious party outputs nothing. The adversary outputs some function of its view.

Fix inputs \(x\in \mathcal {X}\), \(y\in \mathcal {Y}\), and \(\kappa \in {\mathbb {N}}\). For an \(\mathcal {A}\) corrupting the server we let \({\text {Out}}^{{\text {IDEAL}}}_{\mathcal {A}\left( x,\mathsf {aux} \right) ,f}\left( x,y,1^{\kappa } \right) \) denote the output of the client in a random execution of the above ideal-world process. For an \(\mathcal {A}\) corrupting the client we let \({\text {View}}^{{\text {IDEAL}}}_{\mathcal {A}\left( y,\mathsf {aux} \right) ,f}\left( x,y,1^{\kappa } \right) \) denote the output of \(\mathcal {A}\) (a description of its view) in such a process.

We next present the definition for security against malicious adversaries. The definition we present is tailored to the setting of the 1-round two-party client-server in the \(\mathcal {OT}\text{- }\text {hybrid}\) model.

Definition 7

(malicious security). Let \(\varPi =(\alpha ,\beta ,\varphi )\) be a protocol for computing f in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model. Let \(\varepsilon (\cdot )\) be a positive function of the security parameter.

  1. 1.

    Correctness: We say that \(\varPi \) is correct if for all \(\kappa \in {\mathbb {N}}\), \(x\in \mathcal {X}\), and \(y\in \mathcal {Y}\)

    $$\Pr \left[ \varphi \left( y,\mathbf{b},\mathbf{a}[\mathbf{b}],1^{\kappa } \right) =f(x,y)\right] =1.$$

    Here, \(\mathbf{a}=\alpha \left( x,1^{\kappa } \right) \), \(\mathbf{b}=\beta \left( y,1^{\kappa } \right) \) and the probability is taken over the random coins of \(\alpha \), \(\beta \), \(\varphi \), and f.

  2. 2.

    Server Security: We say that \(\varPi \) is \(\varepsilon \)-server secure, if for any non-uniform adversary \(\mathcal {A}\) corrupting the server in the \(\mathcal {OT}\text{- }\text {hybrid}\) world, there exists a non-uniform adversary \(\mathsf {Sim}_{\mathcal {A}}\) (called the simulator) corrupting the server in the ideal-world, such that for all \(\kappa \in {\mathbb {N}}\), \(x\in \mathcal {X}\), \(y\in \mathcal {Y}\), and \(\mathsf {aux}\in \{0,1\}^*\) it holds that

    $${\text {SD}}\left( {\text {Out}}^{{\text {HYBRID}}}_{\mathcal {A}\left( x,\mathsf {aux} \right) ,\varPi }\left( x,y,1^{\kappa } \right) ,~{\text {Out}}^{{\text {IDEAL}}}_{\mathsf {Sim}_{\mathcal {A}}\left( x,\mathsf {aux} \right) ,f}\left( x,y,1^{\kappa } \right) \right) \le \varepsilon \left( \kappa \right) .$$

    We say that \(\varPi \) has perfect server security if it is 0-server secure.

  3. 3.

    Client Security: We say that \(\varPi \) is \(\varepsilon \)-client secure, if for any non-uniform adversary \(\mathcal {A}\) corrupting the client in the \(\mathcal {OT}\text{- }\text {hybrid}\) world, there exists a non-uniform simulator \(\mathsf {Sim}_{\mathcal {A}}\) corrupting the client in the ideal-world, such that for all \(\kappa \in {\mathbb {N}}\), \(x\in \mathcal {X}\), \(y\in \mathcal {Y}\), and \(\mathsf {aux}\in \{0,1\}^*\) it holds that

    $${\text {SD}}\left( {\text {View}}^{{\text {HYBRID}}}_{\mathcal {A}\left( y,\mathsf {aux} \right) ,\varPi }\left( x,y,1^{\kappa } \right) ,~{\text {View}}^{{\text {IDEAL}}}_{\mathsf {Sim}_{\mathcal {A}}\left( y,\mathsf {aux} \right) ,f}\left( x,y,1^{\kappa } \right) \right) \le \varepsilon \left( \kappa \right) .$$

    We say that \(\varPi \) has perfect client security if it is 0-client secure.

We say that \(\varPi \) computes f with \(\varepsilon \)-statistical full-security, if \(\varPi \) is correct, is \(\varepsilon \)-server secure, and is \(\varepsilon \)-client secure. Finally, we say that \(\varPi \) computes f with perfect full-security, if it computes f with 0-statistical full-security.

To alleviate notation, from now on we will completely remove \(1^{\kappa }\) from the inputs of the functions \(\alpha \), \(\beta \), and \(\varphi \), and remove \(\kappa \) from \(\ell \) and \(\varepsilon \). Statistical security will now be stated as a function of \(\varepsilon \), and the CC of the protocol as a function of \(\ell \). Observe that aborts in this model are irrelevant. Indeed, an honest server outputs nothing, and if a malicious server aborts then the client can output \(f(x_0,y)\) for some default value \(x_0\in \mathcal {X}\), which can be perfectly simulated. Therefore, throughout the paper we assume without loss of generality that the adversary does not abort the execution.

We next describe the notion of security with input-dependent abort [19]. Generally, it is a relaxation of the standard full-security notion, which allows an adversary to learn at most 1 bit of information by causing the protocol to abort depending on the other party’s inputs. We state only perfect security. Furthermore, the security notion is written with respect only to a malicious server. Since we work in the client-server model, the trusted party does not send to the server any output. Therefore, in this relaxation selective abort attacks [22, 23] are simulatable.

Definition 8

Fix \(f:\mathcal {X}\times \mathcal {Y}\mapsto \left\{ 0,\ldots ,k-1\right\} \). In the input-dependent model, we modify the ideal-world so that the malicious adversary corrupting the server, in addition to sending an input \(x^*\in \mathcal {X}\), also gives the trusted party \(\mathsf {T}\) a predicate \(P:\mathcal {Y}\mapsto \{0,1\}\). \(\mathsf {T}\) then sends to the client \(f(x^*,y)\) if \(P(y)=0\), and \(\bot \) otherwise. We let \({\text {Out}}^{{\text {ID}}}_{\mathcal {A}\left( x,\mathsf {aux} \right) ,f}\left( x,y \right) \) denote the output of the client in a random execution of the above ideal-world process, with \(\mathcal {A}\) corrupting the server.

Let \(\varPi \) be a protocol that computes f in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model. We say that \(\varPi \) has perfect input-dependent security, if for every non-uniform adversary \(\mathcal {A}\) corrupting the server in the \(\mathcal {OT}\text{- }\text {hybrid}\) world, there exists a non-uniform adversary \(\mathsf {Sim}_{\mathcal {A}}\) corrupting the server in the input-dependent ideal-world, such that for all \(x\in \mathcal {X}\), \(y\in \mathcal {Y}\), and \(\mathsf {aux}\in \{0,1\}^*\) it holds that

$${\text {Out}}^{{\text {HYBRID}}}_{\mathcal {A}\left( x,\mathsf {aux} \right) ,\varPi }\left( x,y \right) \equiv {\text {Out}}^{{\text {ID}}}_{\mathsf {Sim}_{\mathcal {A}}\left( x,\mathsf {aux} \right) ,f}\left( x,y \right) .$$

3 A Class of Perfectly Computable Client-Server Functions

In this section, we state the main result of this paper – presenting a large class of two-party client-server functions that are computable with perfect security. We start by presenting a geometric view of security in our model. We take an approach similar to that of [2] to represent the server-security requirement geometrically.

3.1 A Geometrical Representation of the Security Requirements

Boolean Functions. We start by giving the details for (randomized) Boolean functions. For any function \(f:\mathcal {X}\times \mathcal {Y}\mapsto \{0,1\}\) we associate an \(|\mathcal {X}|\times |\mathcal {Y}|\) matrix \(M_f\) defined as \(M_f(x,y)=\Pr \left[ f(x,y)=1\right] \), where the probability is taken over f’s random coins (if f is deterministic, then this value is Boolean). Let \(\mathcal {X}=\left\{ x_1,\ldots ,x_n\right\} \). Observe that in the ideal-world, every strategy that is employed by a simulator corrupting the server can be encoded with a probability vector \(\mathbf{p}\in {\mathbb R}^{n}\), where \(p_i\) corresponds to the probability of sending \(x_i\) to \(\mathsf {T}\). Therefore, if the input of the client is y, then the probability that the output is 1 equals \(\mathbf{p}^T\cdot M_f(\cdot ,y)\). On the other hand, in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model, a malicious server can only choose a string \(\mathbf{a}^*\in \{0,1\}^{2\ell }\) and send it to the OT. Then, on input \(y\in \mathcal {Y}\), the probability that the client outputs 1 is exactly

$$q^{\varPi }_y\left( \mathbf{a}^* \right) :=\Pr \left[ \varphi \left( y,\mathbf{b},\mathbf{a}^*[\mathbf{b}] \right) =1\right] ,$$

where \(\mathbf{b}=\beta \left( y \right) \) and the probability is over the randomness of \(\beta \) and \(\varphi \). This implies that an ideal-world simulator must send a random input \(x^*\in \mathcal {X}\) such that the client will output 1 with probability \(q^{\varPi }_y\left( \mathbf{a}^* \right) \). Thus, perfect security holds if and only if for every \(\mathbf{a}^*\in \{0,1\}^{2\ell }\) there exists a probability vector \(\mathbf{p}\in {\mathbb R}^n\) such that for every \(y\in \mathcal {Y}\)

$$\mathbf{p}^T\cdot M_f\left( \cdot ,y \right) =q^{\varPi }_y\left( \mathbf{a}^* \right) .$$

Equivalently, for every \(\mathbf{a}^*\) the vector \(\mathbf{q}^{\varPi }\left( \mathbf{a}^* \right) :=(q^{\varPi }_y\left( \mathbf{a}^* \right) )_{y\in \mathcal {Y}}\) is inside the convex-hull of the rows of \(M_f\). Further observe that this holds true regardless of the auxiliary input held by a corrupt server.

General Functions. We now extend the above discussion to non-Boolean functions. For every function \(f:\mathcal {X}\times \mathcal {Y}\mapsto \left\{ 0,\ldots ,k-1\right\} \), and every possible output \(z\in \left\{ 0,\ldots ,k-1\right\} \), we associate an \(|\mathcal {X}|\times |\mathcal {Y}|\) matrix \(M_f^{z}\) defined as \(M_f^z\left( x,y \right) =\Pr \left[ f(x,y)=z\right] \). Similarly to the Boolean case, in the ideal world, every strategy that is employed by a corrupt server can be encoded with a probability vector \(\mathbf{p}\in {\mathbb R}^{n}\), hence the probability that the client will output z, on input y, is \(\mathbf{p}^T\cdot M_f^z\left( \cdot ,y \right) \). In the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model, for a string \(\mathbf{a}^*\in \{0,1\}^{2\ell }\) chosen by a malicious server, the probability of outputting z equals

$$q^{\varPi }_{y,z}\left( \mathbf{a}^* \right) :=\Pr \left[ \varphi \left( y,\mathbf{b},\mathbf{a}^*[\mathbf{b}] \right) =z\right] ,$$

where \(\mathbf{b}=\beta \left( y \right) \) and the probability is over the randomness of \(\beta \) and \(\varphi \). Therefore, perfect security holds if and only if for every \(\mathbf{a}^*\in \{0,1\}^{2\ell }\) there exists a probability vector \(\mathbf{p}\in {\mathbb R}^n\) such that for every \(y\in \mathcal {Y}\) and for every \(z\in \left\{ 0,\ldots ,k-1\right\} \)

$$\begin{aligned} \mathbf{p}^T\cdot M^z_f\left( \cdot ,y \right) =q^{\varPi }_{y,z}\left( \mathbf{a}^* \right) . \end{aligned}$$
(1)

Observe that since \(\mathbf{p}\) is a probability vector and since \(\sum _z M_f^z\) is the all-one matrix, it is equivalent to consider only \(k-1\) possible values for z instead of all k values considered in Eq. (1). We next write the perfect security formulation more succinctly.

Let \(M_f=\left[ M_f^{1}||\ldots ||M_f^{k-1}\right] \) be the concatenation of the matrices by columns, and let \(\mathbf{q}^{\varPi }\left( \mathbf{a}^* \right) :=\left( (q^{\varPi }_{y,z}\left( \mathbf{a}^* \right) )_{y\in \mathcal {Y}}\right) _{z\in [k-1]}\). Then Eq. (1) is equivalent to saying that for every \(\mathbf{a}^*\) the vector \(\mathbf{q}^{\varPi }\left( \mathbf{a}^* \right) \) belongs to the convex-hull of the rows of \(M_f\). It will be convenient to index the columns of \(M_f\) with (y, z), i.e., we let \(M_f\left( x,(y,z) \right) =M_f^z\left( x,y \right) \). We now have an equivalent definition of perfect server security.

Lemma 1

Let \(\varPi \) be a protocol for computing some function \(f:\mathcal {X}\times \mathcal {Y}\mapsto \left\{ 0,\ldots ,k-1\right\} \) in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model with \({\text {CC}}\) of \(\ell \). Then \(\varPi \) has perfect server security if and only if for every \(\mathbf{a}^*\in \{0,1\}^{2\ell }\) it holds that

$$\mathbf{q}^{\varPi }\left( \mathbf{a}^* \right) \in {\mathbf {conv}\left( M_f^T \right) }.$$

We next describe another notion of security against a corrupt server. Intuitively, it states that the less a malicious server deviates from the prescribed protocol, the better it can be simulated. Moreover, instead of using the traditional \(\ell _1\) distance (i.e., statistical distance), we phrase the security in terms of the \(\ell _\infty \) norm. This somewhat non-standard definition will later act as a sufficient condition for reducing perfect server-security to perfect client-security.

Definition 9

Let \(f:\left( \mathcal {X}\cup \left\{ \bot \right\} \right) \times \mathcal {Y}\mapsto \left\{ \bot ,0,\ldots ,k-1\right\} \). Assume that \(f(x,y)=\bot \) if and only if \(x=\bot \). Let \(\varPi =\left( \alpha ,\beta ,\varphi \right) \) be a protocol for computing f in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model. We say that \(\varPi \) is strong \(\varepsilon \)-server secure if the following holds. For every message \(\mathbf{a}^*\) sent by a malicious server in the \(\mathcal {OT}\text{- }\text {hybrid}\) world, there exists a probability vector \(\mathbf{p}=\left( p_x \right) _{x\in (\mathcal {X}\cup \left\{ \bot \right\} )}\in {\mathbb R}^{|\mathcal {X}|+1}\) such that

$$\left| \left| \mathbf{q}^{\varPi }\left( \mathbf{a}^* \right) - M^T_f\cdot \mathbf{p}\right| \right| _{\infty }\le \varepsilon \cdot p_\bot .$$

3.2 Stating the Main Result

With the above representation in mind, we are now ready to state our main result. We first recall the definition of a full-dimensional function, as stated in [2].

Definition 10

(full-dimensional function). We say that a function \(f:\mathcal {X}\times \mathcal {Y}\mapsto \left\{ 0,\ldots ,k-1\right\} \) is full-dimensional if

$${\text {dim}}\left( {\mathbf {aff}}\left( M_f^T \right) \right) =(k-1)\cdot |\mathcal {Y}|,$$

namely, the affine-hull defined by the rows of \(M_f\) spans the entire vector space.
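Definition 10 can be checked mechanically: the dimension of the affine hull of the rows of \(M_f\) equals the rank of the differences between the rows and any fixed row. A small numpy sketch (our own helper, not part of the paper):

```python
import numpy as np

def is_full_dimensional(M_f):
    """M_f has one row per server input x and one column per pair (y, z),
    z in {1,...,k-1}.  The function is full-dimensional iff the affine hull
    of the rows spans the whole (k-1)*|Y|-dimensional space."""
    diffs = M_f[1:] - M_f[0]   # shift so the affine hull becomes a linear span
    return np.linalg.matrix_rank(diffs) == M_f.shape[1]

# A Boolean example (k = 2): three points forming a triangle in the plane.
M_triangle = np.array([[0, 0], [0, 1], [1, 0]], dtype=float)
assert is_full_dimensional(M_triangle)

# The 3EQ matrix (the identity) is not full-dimensional: |X| = 3 is not
# larger than (k - 1) * |Y| = 3, so Theorem 3 does not apply to it.
assert not is_full_dimensional(np.eye(3))
```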

Recall that a basis for an affine space of dimension n has cardinality \(n + 1\), and therefore it must hold that \(|\mathcal {X}|>(k-1)\cdot |\mathcal {Y}|\). Thus, the assumption that f is full-dimensional implies this condition. We are now ready to state our main result.

Theorem 3

Let \(f:\mathcal {X}\times \mathcal {Y}\mapsto \left\{ 0,\ldots ,k-1\right\} \) be a full-dimensional function. Then there exists a protocol \(\varPi \) in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model that computes f with perfect full-security. Furthermore, if f is deterministic, the \({\text {CC}}\) is the following. Let \(\gamma _i\) denote the size of the smallest formula for evaluating the i-th bit of \(f(x,y)\), and let \(\gamma =\max _{i} \gamma _i\). Then \(\varPi \) has \({\text {CC}}\) at most

$$\xi \cdot \gamma ^2\cdot \log k\cdot \log |\mathcal {Y}| \cdot {\text {poly}}\left( k\cdot |\mathcal {Y}| \right) ,$$

where \(\xi \in {\mathbb R}^+\) is some global constant independent of the function f.

Although the communication complexity of our protocol is roughly \({\text {poly}}\left( k\cdot |\mathcal {Y}| \right) \), for functions with small client-domain, it does yield a concrete improvement upon known protocols such as the protocol proposed by [19].

A simple corollary of Theorem 3 is that adding constant columns to a full-dimensional function results in a function that can still be computed with perfect security.

Corollary 1

Let \(f:\mathcal {X}\times \mathcal {Y}\mapsto \left\{ 0,\ldots ,k-1\right\} \) be some function. Assume that there exists a subset \(\mathcal {Y}'\subseteq \mathcal {Y}\) that fixes the output distribution of f, i.e., for all \(y\in \mathcal {Y}'\) there exists a distribution \(D_y\) over \(\left\{ 0,\ldots ,k-1\right\} \) such that \(f(x,y)\equiv D_y\) for every \(x\in \mathcal {X}\). If the function \(f':\mathcal {X}\times (\mathcal {Y}\setminus \mathcal {Y}')\mapsto \left\{ 0,\ldots ,k-1\right\} \), defined as \(f'\left( x,y \right) =f(x,y)\), is full-dimensional, then f can be computed in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model with perfect full-security and with the same communication complexity as \(f'\).

Many interesting examples of functionalities that satisfy the constraints in Theorem 3 and Corollary 1 exist. Yao’s millionaires’ problem is an example of such a function. Here, the server and the client each hold a number from 1 to n. The output is 1 if and only if the client’s input is greater than or equal to the server’s input. The matrix of this function has a constant column of 1’s (obtained when taking the client’s input to be n). After removing it, the last row of the matrix becomes the all-0 vector, and the other rows are linearly independent. Therefore the function satisfies the constraints in Corollary 1.
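Continuing this example, the short numpy check below (illustrative code of ours; indices start at 1 as in the text) verifies the two conditions of Corollary 1 for the greater-than function: the column y = n is constant, and the function obtained by removing it is full-dimensional.

```python
import numpy as np

n = 6
# M(x, y) = 1 iff the client's input y is at least the server's input x.
M = np.array([[1.0 if y >= x else 0.0 for y in range(1, n + 1)]
              for x in range(1, n + 1)])

# The column y = n is constantly 1, so it may be dropped (Corollary 1).
assert np.all(M[:, n - 1] == 1.0)
M_prime = M[:, : n - 1]

# f' is full-dimensional: the affine hull of its n rows spans R^{n-1}.
diffs = M_prime[1:] - M_prime[0]
assert np.linalg.matrix_rank(diffs) == n - 1
```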

Theorem 3 clearly follows from the following two lemmata. The first lemma reduces the problem of constructing a perfectly secure protocol, to the task of constructing a protocol with perfect client security and strong statistical server security. The second lemma states that such a protocol exists.

Lemma 2

Let \(f:\mathcal {X}\times \mathcal {Y}\mapsto \left\{ 0,\ldots ,k-1\right\} \) be some function. Define the function \(g:(\mathcal {X}\cup \left\{ \bot \right\} )\times \mathcal {Y}\mapsto \left\{ \bot ,0,\ldots ,k-1\right\} \) as \(g(x,y)=f(x,y)\) if \(x\ne \bot \) and \(g(\bot ,y)=\bot \), for every \(y\in \mathcal {Y}\). Assume that for every \(\varepsilon >0\), there exists a protocol \(\varPi _g(\varepsilon )\) in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model that computes g with correctness, is strong \(\varepsilon \)-server secure, has perfect client security, and has \({\text {CC}}\) at most \(\ell \left( \varepsilon ,|\mathcal {X}|,|\mathcal {Y}|,k \right) \). Then, if f is full-dimensional, there exists a protocol \(\varPi _f\) in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model, that computes f with perfect full-security. Moreover, if f is deterministic then \(\varPi _f\) has \({\text {CC}}\) at most

$$\ell \left( \frac{1}{2n(n+1)!},~|\mathcal {X}|,~|\mathcal {Y}|,~k \right) ,$$

where \(n=(k-1)\cdot |\mathcal {Y}|\).

Lemma 3

Let \(g:(\mathcal {X}\cup \left\{ \bot \right\} )\times \mathcal {Y}\mapsto \left\{ \bot ,0,\ldots ,k-1\right\} \) be a function such that \(g(x,y)=\bot \) if and only if \(x=\bot \). Then for every \(\varepsilon >0\), there exists a protocol \(\varPi _g(\varepsilon )\) in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model that computes g with correctness, is strong \(\varepsilon \)-server secure, and has perfect client security. Furthermore, its communication complexity is the following. Let \(\gamma _i\) denote the size of the smallest formula for evaluating the i-th bit of \(g(x,y)\), and let \(\gamma =\max _{i} \gamma _i\). Then \(\varPi _g(\varepsilon )\) has \({\text {CC}}\) at most

$$\xi \cdot \gamma ^2\cdot \log k\cdot \log |\mathcal {Y}|\cdot {\text {polylog}}\left( \varepsilon ^{-1} \right) ,$$

where \(\xi \in {\mathbb R}^+\) is some global constant independent of the function g and of \(\varepsilon \).

We prove Lemma 2 in Sect. 4 and we prove Lemma 3 in Sect. 5.

4 Proof of Lemma 2

In this section, we reduce the problem of constructing a perfectly secure protocol, to the problem of constructing a protocol that has perfect client security and has strong statistical server security. The idea is to wrap the given protocol for computing g. Whenever the output of \(\varPi _g(\varepsilon )\) is \(\bot \) (for small enough \(\varepsilon \)), the client will choose \(x_0\in \mathcal {X}\) at random and output \(f(x_0,y)\). Stated from a geometric point of view, the client outputs according to a distribution that is consistent with some point that is strictly inside the convex-hull of the rows of \(M_f\) (e.g., the center).
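To make this wrapping concrete, the sketch below (hypothetical helper names, Boolean case k = 2) computes the point \(\mathbf{c}=M_f^T\cdot \mathbf{u}_\mathcal {S}\) for an affinely independent subset \(\mathcal {S}\) of the rows, as defined in the proof below, and implements the client’s fallback rule of outputting 1 with probability \(c_y\) whenever \(\varPi _g\) returns \(\bot \).

```python
import numpy as np

def simplex_center(M_f, S):
    """c = M_f^T u_S, where u_S is uniform over the affinely independent rows
    indexed by S; c_y is the probability of outputting 1 on input y whenever
    Pi_g returns ⊥ (Boolean case, k = 2)."""
    u_S = np.zeros(M_f.shape[0])
    u_S[list(S)] = 1.0 / len(S)
    return M_f.T @ u_S

def fallback_output(c, y, rng=np.random.default_rng()):
    """The client's fallback rule: output 1 with probability c_y, else 0."""
    return int(rng.random() < c[y])

# Greater-than on {1, 2, 3} x {1, 2}: the three rows (1,1), (0,1), (0,0)
# are affinely independent, so S may contain all of them.
M = np.array([[1.0, 1.0], [0.0, 1.0], [0.0, 0.0]])
c = simplex_center(M, S=(0, 1, 2))
assert np.allclose(c, [1 / 3, 2 / 3])   # strictly inside the convex hull
print(fallback_output(c, y=0))
```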

Proof

(of Lemma 2). It is easy to see that if, for some \(\varepsilon >0\), the output of \(\varPi _g(\varepsilon )\) equals \(\bot \) with probability 0 for every \(y\in \mathcal {Y}\), then \(\varPi _g(\varepsilon )\) computes f with perfect security.

Assume otherwise. Let \(n=(k-1)\cdot |\mathcal {Y}|\). Since f is full-dimensional, there exists a subset \(\mathcal {S}=\left\{ \mathbf{x}_0,\ldots ,\mathbf{x}_{n}\right\} \subseteq {\mathbb R}^{n}\) of the rows of \(M_f\) that is affinely independent. Let \(\mathbf{u}_\mathcal {S}\in {\mathbb R}^{|\mathcal {X}|}\) be the vector associated with the uniform distribution over \(\mathcal {S}\) (i.e., \(u_i=1/|\mathcal {S}|\) if the i-th row of \(M_f\) is in \(\mathcal {S}\) and \(u_i=0\) otherwise), and let \(\mathbf{c}=\left( c_{y,z} \right) _{y\in \mathcal {Y},z\in [k-1]}:=M^T_f\cdot \mathbf{u}_{\mathcal {S}}\) be the center of the simplex defined by the points in \(\mathcal {S}\). The protocol \(\varPi _f\) is described as follows.

 

Protocol 4

( \(\varPi _f\) )

Input: Server \(\mathsf {S}\) has input \(x\in \mathcal {X}\) and client \(\mathsf {C}\) has input \(y\in \mathcal {Y}\).

  1. 1.

    The parties execute protocol \(\varPi _g\left( \varepsilon \right) \) with a small enough \(\varepsilon >0\), to be determined by the analysis. Let z be the output \(\mathsf {C}\) receives.

  2. 2.

    If \(z\ne \bot \), then \(\mathsf {C}\) outputs z. Otherwise, it outputs \(z'\in [k-1]\) with probability \(c_{y,z'}\) (and outputs 0 with the complement probability).
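
The client's post-processing in step 2 can be made concrete by the following minimal Python sketch (illustrative only; the function name is ours, and the center \(\mathbf{c}=M_f^T\cdot \mathbf{u}_\mathcal {S}\) is assumed to be given):

```python
import random

def client_postprocess(z, c_y, k):
    """Step 2 of Protocol 4 (sketch).

    z   -- the output of Pi_g(eps): an integer in {0, ..., k-1}, or None for "bot"
    c_y -- the entries (c_{y,1}, ..., c_{y,k-1}) of the simplex center for input y
    k   -- the size of the output range
    """
    if z is not None:
        return z
    # On bot: output z' in {1, ..., k-1} with probability c_{y,z'}, and 0 otherwise.
    weights = [1 - sum(c_y)] + list(c_y)
    return random.choices(range(k), weights=weights)[0]
```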

 

Correctness and perfect client-security follow from the fact that \(\varPi _g\) satisfies these properties. It remains to show that perfect server-security holds. By Lemma 1, it suffices to show that for every \(\mathbf{a}^*\in \{0,1\}^{2\ell }\) sent to the OT by a malicious server, it holds that

$$\begin{aligned} \mathbf{q}^{\varPi _f}\left( \mathbf{a}^* \right) \in {\mathbf {conv}\left( M_f^T \right) }. \end{aligned}$$
(2)

Fix \(\mathbf{a}^*\in \{0,1\}^{2\ell }\). For brevity, we write \(\mathbf{q}^f\) and \(\mathbf{q}^g\) instead of \(\mathbf{q}^{\varPi _f}\left( \mathbf{a}^* \right) \) and \(\mathbf{q}^{\varPi _g(\varepsilon )}\left( \mathbf{a}^* \right) \) respectively. Since \(\varPi _g(\varepsilon )\) is strong \(\varepsilon \)-server secure, it follows that there exists a probability vector \(\mathbf{p}^g\in {\mathbb R}^{|\mathcal {X}|+1}\) such that

$$\begin{aligned} \mathbf{q}^{g}=M_g^T\cdot \mathbf{p}^g+\mathbf{err}, \end{aligned}$$
(3)

where \(\mathbf{err}\in {\mathbb R}^{k\cdot |\mathcal {Y}|}\) satisfies \(\left| \left| \mathbf{err}\right| \right| _{\infty }\le \varepsilon \cdot p^g_\bot \). Let \(\overline{\mathbf{p}}^g=\left( p_x^g \right) _{x\in \mathcal {X}}\) be the vector \(\mathbf{p}^g\) with the \(\bot \)-entry removed. We first show that Eq. (2) follows from the following two claims.

Claim 5

There exists a vector \(\widehat{\mathbf{err}}\in {\mathbb R}^{k\cdot |\mathcal {Y}|}\) satisfying \(\left| \left| \widehat{\mathbf{err}}\right| \right| _{\infty }\le 2\varepsilon \), such that

$$\mathbf{q}^f=M_f^T\cdot \overline{\mathbf{p}}^g+p_\bot \cdot (\mathbf{c}+\widehat{\mathbf{err}}).$$

Claim 6

There exists a small enough \(\varepsilon >0\) such that

$$\mathbf{c}+\widehat{\mathbf{err}}\in {\mathbf {conv}\left( M^T_f \right) },$$

where \(\widehat{\mathbf{err}}\) is the same as in Claim 5.

Indeed, by Claim 6 there exists a probability vector \(\widehat{\mathbf{p}}\in {\mathbb R}^{|\mathcal {X}|}\) such that

$$\mathbf{c}+\widehat{\mathbf{err}}=M_f^T\cdot \widehat{\mathbf{p}}.$$

Thus, by Claim 5

$$\mathbf{q}^f=M_f^T\cdot \overline{\mathbf{p}}^g+p_\bot \cdot (\mathbf{c}+\widehat{\mathbf{err}})=M_f^T\cdot (\overline{\mathbf{p}}^g+p_\bot \cdot \widehat{\mathbf{p}}).$$

Recall that the entries of \(\overline{\mathbf{p}}^g\) sum up to \(1-p_{\bot }\). Therefore \(\overline{\mathbf{p}}^g+p_\bot \cdot \widehat{\mathbf{p}}\) is a probability vector, hence Eq. (2) holds.

To conclude the proof, we next prove Claims 5 and 6.

Proof

(of Claim 5). Let \(\mathbf{err}'=\frac{1}{p_\bot }\cdot \mathbf{err}\). Observe that for every \(y\in \mathcal {Y}\) and \(z\in [k-1]\) it holds that

$$\begin{aligned} q^{f}_{y,z}&=q^{g}_{y,z}+q^{g}_{y,\bot }\cdot c_{y,z}\\&=M_g^T\left( \cdot ,(y,z) \right) \cdot \mathbf{p}^g+{\text {err}}_{y,z}+\left( M_g^T\left( \cdot ,(y,\bot ) \right) \cdot \mathbf{p}^g+{\text {err}}_{y,\bot }\right) \cdot c_{y,z}\\&=M_f^T\left( \cdot ,(y,z) \right) \cdot \overline{\mathbf{p}}^g+{\text {err}}_{y,z}+(p_\bot +{\text {err}}_{y,\bot })\cdot c_{y,z}\\&=M_f^T\left( \cdot ,(y,z) \right) \cdot \overline{\mathbf{p}}^g+p_\bot \cdot (c_{y,z}+{\text {err}}'_{y,z}+{\text {err}}'_{y,\bot }\cdot c_{y,z}), \end{aligned}$$

where the first equality is by the description of \(\varPi _f\), the second is by Eq. (3), and the third follows from the definition of g. Define the vector \(\widehat{\mathbf{err}}\) as follows. For every \(y\in \mathcal {Y}\) and \(z\in [k-1]\) let \(\widehat{{\text {err}}}_{y,z}={\text {err}}'_{y,z}+{\text {err}}'_{y,\bot }\cdot c_{y,z}\). Then

$$\mathbf{q}^f=M_f^T\cdot \overline{\mathbf{p}}^g+p_\bot \cdot (\mathbf{c}+\widehat{\mathbf{err}}).$$

To conclude the proof, we upper-bound \(\left| \left| \widehat{\mathbf{err}}\right| \right| _{\infty }\). It holds that

$$\left| \left| \widehat{\mathbf{err}}\right| \right| _{\infty }\le \left| \left| \mathbf{err}'\right| \right| _{\infty }\cdot (1+\left| \left| \mathbf{c}\right| \right| _{\infty })=\frac{1}{p_\bot }\cdot \left| \left| \mathbf{err}\right| \right| _{\infty }\cdot (1+\left| \left| \mathbf{c}\right| \right| _{\infty })\le 2\varepsilon .$$

Proof

(of Claim 6). One approach would be to use techniques similar to those of [2], namely, take a “small enough” Euclidean ball around \(\mathbf{c}\) and take \(\varepsilon \) to be small enough so that \(\mathbf{c}+\widehat{\mathbf{err}}\) is contained inside the ball. This approach, however, only proves the existence of such an \(\varepsilon \). We take a slightly different approach, which also provides an explicit upper bound on \(\varepsilon \) for deterministic functions.

For every \(i\in [n]\) let \(\overline{\mathbf{x}}_i=\mathbf{x}_i-\mathbf{x}_{0}\), and let \(\overline{\mathcal {S}}=\left\{ \overline{\mathbf{x}}_1,\ldots ,\overline{\mathbf{x}}_n\right\} \), which is a basis for \({\mathbb R}^n\) since \(\mathcal {S}\) is affinely independent. Let \(A=[\overline{\mathbf{x}}_1||\ldots ||\overline{\mathbf{x}}_n]\) be the corresponding change of basis matrix. Then

$$\begin{aligned} \mathbf{c}=M_f^T\cdot \mathbf{u}_\mathcal {S}=\sum _{i=0}^{n} \frac{1}{n+1}\cdot \mathbf{x}_i=\mathbf{x}_{0}+\sum _{i=1}^n \frac{1}{n+1}\cdot \overline{\mathbf{x}}_i=\mathbf{x}_{0}+\frac{1}{n+1}\cdot A\cdot \mathbf{1 }_n. \end{aligned}$$
(4)

Observe that a point \(\mathbf{v}\) is in the convex-hull of \(\mathcal {S}\) if and only if it can be written as \(\mathbf{x}_{0}+\sum _{i=1}^n p_i\cdot \overline{\mathbf{x}}_i\), where the \(p_i\)’s are non-negative real numbers that sum up to at most 1. Indeed, we can write

$$\mathbf{x}_{0}+\sum _{i=1}^n p_i\cdot \overline{\mathbf{x}}_i=\left( 1-\sum _{i=1}^n p_i\right) \cdot \mathbf{x}_{0}+\sum _{i=1}^n p_i\cdot \mathbf{x}_i.$$

Next, as \(\overline{\mathcal {S}}\) forms a basis, there exists a vector \(\widetilde{\mathbf{err}}\in {\mathbb R}^{n}\) such that \(\widehat{\mathbf{err}}=A\cdot \widetilde{\mathbf{err}}\). Then, if \(\left| \left| \widetilde{\mathbf{err}}\right| \right| _{\infty }\le \frac{1}{n(n+1)}\), by Eq. (4) it follows that

$$\mathbf{c}+\widehat{\mathbf{err}}=\mathbf{x}_{0}+A\cdot \left( \frac{1}{n+1}\cdot \mathbf{1 }_n+\widetilde{\mathbf{err}}\right) =\mathbf{x}_{0}+\sum _{i=1}^n p_i\cdot \overline{\mathbf{x}}_i,$$

where \(0\le p_i\le 1/n\) for every \(i\in [n]\), implying that the point is inside \( {\mathbf {conv}\left( \mathcal {S} \right) }\). Thus, it suffices to find \(\varepsilon \) for which \(\left| \left| \widetilde{\mathbf{err}}\right| \right| _{\infty }\le \frac{1}{n(n+1)}\). It holds that

$$\begin{aligned} \left| \left| \widetilde{\mathbf{err}}\right| \right| _{\infty }&=\left| \left| A^{-1}\cdot \widehat{\mathbf{err}}\right| \right| _{\infty }\\&=\max _{i\in [n]}\left\{ \left|A^{-1}(i,\cdot )\cdot \widehat{\mathbf{err}} \right|\right\} \\&\le \max _{i\in [n]}\left\{ \sum _{j=1}^n\left|A^{-1}(i,j)\cdot \widehat{{\text {err}}}_j \right|\right\} \\&=\max _{i\in [n]}\left\{ \sum _{j=1}^n\left|\frac{{\text {det}}\left( A_{j,i} \right) }{{\text {det}}\left( A \right) } \right|\cdot \left|\widehat{{\text {err}}}_j \right|\right\} \\&\le n\cdot \frac{(n-1)!}{|{\text {det}}\left( A \right) |}\cdot 2\varepsilon \\&=\frac{2n!}{\left|{\text {det}}\left( A \right) \right|}\cdot \varepsilon , \end{aligned}$$

where the third equality is by Fact 2, and the second inequality is due to the fact that each entry of A is a real number between \(-1\) and 1. Therefore, taking \(\varepsilon =\frac{|{\text {det}}\left( A \right) |}{2n(n+1)!}\) proves the claim. Observe that if the function f is deterministic, then the entries of A are integers (in \(\left\{ -1,0,1\right\} \)); since A is invertible, \({\text {det}}\left( A \right) \) is a non-zero integer, implying that \(|{\text {det}}\left( A \right) |\ge 1\), and hence taking \(\varepsilon =\frac{1}{2n(n+1)!}\) suffices. Therefore the communication complexity is at most \(\ell \left( \frac{1}{2n(n+1)!},~|\mathcal {X}|,~|\mathcal {Y}|,~k \right) \) in this case.
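
The explicit bound above can be evaluated mechanically for a given matrix \(M_f\). The following Python sketch (illustrative only; the function name is ours, and it uses numpy, floating-point arithmetic, and a naive greedy search for an affinely independent subset of rows) computes \(\varepsilon =|\det (A)|/(2n(n+1)!)\) exactly as in the proof of Claim 6:

```python
import numpy as np
from math import factorial

def epsilon_bound(M_f):
    """Compute eps = |det(A)| / (2 n (n+1)!) as in the proof of Claim 6.

    M_f is given as a |X| x n array whose rows are points in R^n; it is
    assumed to be full-dimensional (its rows affinely span R^n).
    """
    M = np.asarray(M_f, dtype=float)
    n = M.shape[1]
    # Greedily collect n+1 affinely independent rows x_0, ..., x_n.
    chosen = [M[0]]
    for row in M[1:]:
        diffs = np.array([r - chosen[0] for r in chosen[1:]] + [row - chosen[0]])
        if np.linalg.matrix_rank(diffs) == len(diffs):
            chosen.append(row)
        if len(chosen) == n + 1:
            break
    if len(chosen) != n + 1:
        raise ValueError("M_f is not full-dimensional")
    # A = [x_1 - x_0 || ... || x_n - x_0], one difference vector per column.
    A = np.column_stack([x - chosen[0] for x in chosen[1:]])
    return abs(np.linalg.det(A)) / (2 * n * factorial(n + 1))
```

For a deterministic f the entries of A are integers, so the computed value is at least \(1/(2n(n+1)!)\), matching the bound stated in Lemma 2.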

5 Proof of Lemma 3

In this section we fix a function \(g:(\mathcal {X}\cup \left\{ \bot \right\} )\times \mathcal {Y}\mapsto \left\{ \bot ,0,\ldots ,k-1\right\} \) satisfying \(g(x,y)=\bot \) if and only if \(x=\bot \). We show how to construct a protocol for computing g in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model. The protocol we construct has perfect client security and strong statistical server security. It is a modified version of the protocol by Ishai et al. [19], which we next overview. Their protocol is parametrized by \(\varepsilon \), and we denote it by \(\varPi _{{\text {IKOPS}}}(\varepsilon )\). It is a single-round protocol in the \(\mathcal {OT}\text{- }\text {hybrid}\) model that has \(\varepsilon \)-statistical full-security. It is stated for functions computable by \(\mathrm {NC}^1\) circuits; however, this is done only to improve concrete efficiency, which is not a concern in our paper. We therefore restate it for general functions, and bound its communication complexity as a function of \(|\mathcal {X}|\), \(|\mathcal {Y}|\), and k (which are assumed to be finite in our work).

5.1 The Protocol \(\varPi _{{\text {IKOPS}}}\)

We next give the rough idea of \(\varPi _{{\text {IKOPS}}}\). First, we view the inputs x and y as binary strings. The parties will compute a “certified OT” functionality.

The main idea behind \(\varPi _{{\text {IKOPS}}}\) is to have the server run an “MPC in the head” [18]. That is, the real server locally emulates the execution of a perfectly secure protocol \(\varPi \) with many virtual servers performing the computation, and 2m virtual clients, denoted \(\mathsf {C}_{1,0},\mathsf {C}_{1,1},\ldots ,\mathsf {C}_{m,0},\mathsf {C}_{m,1}\), receiving output, where m is the number of bits in the client’s input y. The underlying protocol \(\varPi \) computes a decomposable PRE \(\hat{g} = (\hat{g}_0,\hat{g}_1,\ldots ,\hat{g}_m)\) of g. Specifically, the output of client \(\mathsf {C}_{j,b}\) in an execution of \(\varPi \) corresponds to the j-th component of the PRE, evaluated as if the j-th bit of y equals b.

The real client can then use OT in order to recover the correct output of the PRE and reconstruct the output g(x, y). As part of the “MPC in the head” paradigm, the client further asks the server to send a watchlist (the views of some of the virtual servers) and checks consistency. If there is an inconsistency, then the client outputs \(\bot \). To make sure that the client does not receive too large a watchlist, which would break the privacy requirement, it gets each view with some (constant) probability, independently of the other views (a code sketch of this sampling step is given at the end of Protocol 8 below).

Observe that although the client can use OT in order to receive the correct output from the virtual clients, the two real parties need to use string-OT, while they only have access to bit-OT. This technicality can be overcome using the perfect reduction from \(\left( {\begin{array}{c}n\\ 1\end{array}}\right) \text {-}s\text {-string-OT}\) to \({\text {OT}}\) that was put forward in the elegant work of Brassard et al. [8], which also constitutes one of the few examples of perfect reductions to \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}\) known so far. They proved the following theorem.

Theorem 7

There exists a protocol \(\varPi _{{\text {BCS}}}=\left( \alpha _{{\text {BCS}}},\beta _{{\text {BCS}}},\varphi _{{\text {BCS}}} \right) \) in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) world that computes \(\left( {\begin{array}{c}n\\ 1\end{array}}\right) \text {-}s\text {-string-OT}\) with perfect full-security. Furthermore, its communication complexity is at most \( 5s(n-1)\).

The security of the protocol described so far can still be breached by a malicious server. By tampering with the outputs of the virtual clients, a malicious server could force the output of the real client to be g(x, y) for some inputs y and force the output to be \(\bot \) for other values of y, where the choice is completely determined by the adversary. To overcome this problem, we replace g with a function \(g'\) where each bit \(y_i\) is replaced with \(m'\) random bits whose XOR equals \(y_i\), for some large \(m'\). Here, the adversary does not have complete control over the inputs for which the client will output \(\bot \) and those for which it will output g(x, y). We next describe the protocol formally. We start with some notation.

Notation

Throughout the remainder of this section, the client’s input is a binary string \(\mathbf{y}\) of length m. Let \(m'=m'(\varepsilon )=\left\lceil \log \left( \varepsilon ^{-1} \right) \right\rceil +1\) and let \(\mathsf {Enc}:\{0,1\}^{m}\mapsto \left( \{0,1\}^{m'}\right) ^{m}\) be a randomized function that on input m bits \(y_1,\ldots ,y_m\), outputs \(m\cdot m'\) random bits \(\left( y_i^1,\ldots ,y_i^{m'} \right) _{i\in [m]}\) conditioned on \(\oplus _{j=1}^{m'}y_i^j=y_i\) for every \(i\in [m]\). We also let \(\mathsf {Dec}:\left( \{0,1\}^{m'}\right) ^{m}\mapsto \{0,1\}^{m}\) be the inverse of \(\mathsf {Enc}\), namely,

$$\mathsf {Dec}\left( \left( y_i^1,\ldots ,y_i^{m'} \right) _{i\in [m]} \right) =\left( y_i^1\oplus \ldots \oplus y_i^{m'} \right) _{i\in [m]}.$$

Finally, we let \(g':(\mathcal {X}\cup \left\{ \bot \right\} )\times \left( \{0,1\}^{m'}\right) ^{m}\mapsto \left\{ \bot ,0,\ldots ,k-1\right\} \) be defined as

$$g'\left( x,\left( y_i^1,\ldots ,y_i^{m'} \right) _{i\in [m]} \right) =g\left( x,\mathsf {Dec}\left( \left( y_i^1,\ldots ,y_i^{m'} \right) _{i\in [m]} \right) \right) , $$

and let \(\hat{g}\) be a decomposable PRE of \(g'\).
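
A minimal Python sketch of \(\mathsf {Enc}\) and \(\mathsf {Dec}\) as defined above (illustrative only; the function names are ours):

```python
import secrets
from functools import reduce

def enc(y_bits, m_prime):
    """Enc: split each bit y_i into m' random bits whose XOR equals y_i."""
    encoding = []
    for y in y_bits:
        shares = [secrets.randbits(1) for _ in range(m_prime - 1)]
        shares.append(y ^ reduce(lambda a, b: a ^ b, shares, 0))
        encoding.append(shares)
    return encoding

def dec(encoding):
    """Dec: recover each y_i as the XOR of its m' shares."""
    return [reduce(lambda a, b: a ^ b, shares, 0) for shares in encoding]

# e.g., dec(enc([1, 0, 1], m_prime=5)) == [1, 0, 1]
```

With \(m'=\lceil \log (\varepsilon ^{-1})\rceil +1\), an adversary that tampers with exactly one virtual client per position of some block goes undetected only with probability \(2^{-(m'-1)}\le \varepsilon \); this is the event analyzed in Claim 14 below.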

 

Protocol 8

( \(\varPi _{{\text {IKOPS}}}\left( \varepsilon \right) \) )

Input: Server has input \(x\in (\mathcal {X}\cup \left\{ \bot \right\} )\) and client has input \(\mathbf{y}\in \{0,1\}^m\).

  • \(\alpha \left( x \right) \) :

    1. 1.

      The server \(\mathsf {S}\) runs “MPC in the head” for the following functionality. There are \(n=\varTheta \left( \log \left( \varepsilon ^{-1} \right) \right) \) virtual servers \(\mathsf {S}_1,\ldots ,\mathsf {S}_n\) with inputs and \(2m\cdot m'\) virtual clients \(\mathsf {C}_{1,0},\mathsf {C}_{1,1},\ldots ,\mathsf {C}_{m\cdot m',0},\mathsf {C}_{m\cdot m',1}\) receiving outputs. Each virtual server holds a share of \(\mathsf {S}\)’s input and randomness, where the shares are in an n-out-of-n secret sharing scheme. Each virtual client \(\mathsf {C}_{j,b}\) will receive \(\hat{g}_{j,b}(x)\), namely, it will receive the (j, b)-th component of the decomposable PRE where the first part of the input is fixed to x. In addition, every virtual client will hold \(\hat{g}_0\left( x \right) \), which is the part of \(\hat{g}\) that depends only on x and the randomness.

    2. 2.

      The virtual parties execute a multiparty protocol in order to compute \(\hat{g}\). The protocol used has perfect full-security against \(t=\left\lceil n/3 \right\rceil -1\) corrupted virtual servers and any number of corrupted virtual clients (e.g., the BGW protocol [7]). We also assume that the virtual clients receive messages only in the last round of the protocol.

    3. 3.

      Let \(V_{j,b}\) be the view of \(\mathsf {C}_{j,b}\), and let \(\mathbf{a}_1=\left( \alpha _{{\text {BCS}}}\left( V_{j,0},V_{j,1} \right) \right) _{j\in [m\cdot m']}\).

    4. 4.

      Let \(V_i\) be the view of \(\mathsf {S}_i\). For each \(i\in [n]\) the server creates \(\tilde{\mathbf{a}}_i\) of length \(\left\lceil 2n/t \right\rceil \), where \(V_i\) is located in a randomly chosen entry, while the other entries are \(\bot \) (this allows the server to send each \(V_i\) with probability t/2n). Let \(\mathbf{a}_2=\left( \alpha _{{\text {BCS}}}\left( \tilde{\mathbf{a}}_i \right) \right) _{i\in [n]}\).

    5. 5.

      Output \(\mathbf{a}=(\mathbf{a}_1,\mathbf{a}_2)\).

  • \(\beta \left( \mathbf{y} \right) \) :

    1. 1.

      The client computes \(\left( y_i^1,\ldots ,y_i^{m'} \right) _{i\in [m]}=\mathsf {Enc}\left( \mathbf{y} \right) \).

    2. 2.

      Let \(\mathbf{b}_1=\left( \beta _{{\text {BCS}}}\left( y_j^{j'} \right) \right) _{j\in [m],j'\in [m']}\).

    3. 3.

      Let \(\mathbf{b}_2=\left( \beta _{{\text {BCS}}}\left( 1 \right) \right) _{i\in [\left\lceil 2n/t \right\rceil ]}\) (i.e., a constant vector of length \(\left\lceil 2n/t \right\rceil \)).

    4. 4.

      Output \(\mathbf{b}=(\mathbf{b}_1,\mathbf{b}_2)\).

  • \(\varphi \left( \mathbf{y},\mathbf{b},\mathbf{c}' \right) \) :

    1. 1.

      Let \(\mathbf{c}=\left( \varphi _{{\text {BCS}}}\left( c'_i \right) \right) _{i}\). Write \(\mathbf{c}=(\mathbf{c}_1,\mathbf{c}_2)\), where \(\mathbf{c}_1\) corresponds to the outputs and \(\mathbf{c}_2\) corresponds to the watchlist.

    2. 2.

      For every \(V_{j,b}\) in \(\mathbf{c}_1\), we may write without loss of generality that \(V_{j,b}=\left( V_{j,b}^i \right) _{i\in [n]}\), where \(V_{j,b}^i\) is the message that \(\mathsf {S}_i\) sends to \(\mathsf {C}_{j,b}\).

    3. 3.

      If there exist \(V_{i_1},V_{i_2}\in \mathbf{c}_2\) that are inconsistent, or \(V_i\in \mathbf{c}_2\) and \(V_{j,b}^i\in \mathbf{c}_1\) that are inconsistent, output \(\bot \).

    4. 4.

      Otherwise, apply the PRE decoder on \(\mathbf{c}_1\) to recover the output z.
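
The probabilistic watchlist encoding used in step 4 of \(\alpha \) above can be sketched as follows (a toy Python illustration only; \(\bot \) is represented by None, and the function name is ours):

```python
import secrets
from math import ceil

def encode_watchlist_entry(view, n, t):
    """Step 4 of Protocol 8 (sketch): place the view V_i in a uniformly random
    slot of a vector of length ceil(2n/t), filling the rest with bot (None).
    A client that asks for one fixed slot then obtains V_i with probability
    roughly t/(2n), independently for each i."""
    slots = ceil(2 * n / t)
    vec = [None] * slots
    vec[secrets.randbelow(slots)] = view
    return vec
```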

 

We summarize the properties of the protocol below.

Theorem 9

([19, Theorem 1]). For every \(\varepsilon >0\), \(\varPi _{{\text {IKOPS}}}\left( \varepsilon \right) \) computes g with \(\varepsilon \)-statistical full security. Furthermore, using the PRE from [1, 16] and the BGW protocol, the \({\text {CC}}\) will be the following. Let \(\gamma _i\) denote the size of the smallest formula for evaluating the i-th bit of g(x, y), and let \(\gamma =\max _{i} \gamma _i\). Then, \(\varPi _{{\text {IKOPS}}}\) has \({\text {CC}}\) at most

$$\ell _{{\text {IKOPS}}}=\xi _{{\text {IKOPS}}}\cdot \gamma ^2\cdot \log k\cdot \log |\mathcal {Y}|\cdot {\text {polylog}}\left( \varepsilon ^{-1} \right) ,$$

where \(\xi _{{\text {IKOPS}}}\in {\mathbb R}^+\) is some global constant independent of the function g and of \(\varepsilon \).

Observe that \(\varPi _{{\text {IKOPS}}}\) has a (small) non-zero probability of the client seeing too many views of the virtual servers (in the worst case all of them, which reveals x). Thus, \(\varPi _{{\text {IKOPS}}}\) is not perfectly client secure.

In the following section, we slightly tweak \(\varPi _{{\text {IKOPS}}}\), making the watchlists deterministic, thereby making it perfectly client secure. The new protocol will have the desired properties as stated in Lemma 3.

5.2 Setting up Fixed-Size Watchlists

Recall that the problem with client privacy stems from the fact that the client may watch the internal states of too many virtual servers, breaching the perfect security of the protocol \(\varPi _{{\text {IKOPS}}}\), and thus of the entire construction. To solve this problem, we replace the current watchlist setup with a fixed-size watchlist setup.

In order to achieve the fixed-size watchlist, the parties will use a perfectly secure protocol for computing \(\left( {\begin{array}{c}n\\ t/2\end{array}}\right) \text {-}s\text {-string-OT}\). We do not know, however, whether such a protocol even exists in the \(\mathcal {OT}\text{- }\text {hybrid}\) model. Instead, we relax the security notion a bit, so that we are able to construct the protocol while its security guarantees still suffice for the main protocol. Specifically, we show how, in the \(\mathcal {OT}\text{- }\text {hybrid}\) model, the parties can compute \(\left( {\begin{array}{c}n\\ t/2\end{array}}\right) \text {-}s\text {-string-OT}\) in a single round, where a malicious client will only be able to learn at most t strings rather than t/2. We stress that the construction we suggest does not achieve perfect server security. Instead, it admits perfect input-dependent security. As we show in Sect. 5.3, this will not affect the security properties of our final construction.

Let \(t,n,s\in {\mathbb {N}}\) where \(t<n\), and \(s\ge 1\). For simplicity, we assume that t is even. Let \(f_1\) and \(f_2\) be the \(\left( {\begin{array}{c}n\\ t/2\end{array}}\right) \text {-}s\text {-string-OT}\) and \(\left( {\begin{array}{c}n\\ t\end{array}}\right) \text {-}s\text {-string-OT}\) functionalities, respectively. We next briefly explain the ideas behind the construction. The parties will use protocol \(\varPi _{{\text {BCS}}}\) in order to simulate the computation of n instances of \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-}sn\text {-string-OT}\) in parallel. On input \(\left( x_1,\ldots ,x_n \right) \), the i-th pair of strings the server will send (by first applying \(\alpha _{{\text {BCS}}}\)) consists of a masking of the i-th string \(x_i\), and a Shamir share of the concatenation of all of the masks; that is, the pair will be \((x_i\oplus r_i,\mathbf{r}[i])\), where \(\mathbf{r}=\left( r_1,\ldots ,r_n \right) \). The client will then recover the masked values of the correct outputs alongside the shares, which allow it to reconstruct the outputs. Since for each i the client will learn either a share or a masked string, a malicious client will not be able to learn too many masked strings. The protocol \(\varPi _{{\text {ROT}}}=\left( \alpha _{{\text {ROT}}},\beta _{{\text {ROT}}},\varphi _{{\text {ROT}}} \right) \) for computing \(f_1\) in the 1-round \(\mathcal {OT}\text{- }\text {hybrid}\) model is formally described as follows (a simplified code sketch is given right after Construction 10).

Construction 10

( \(\varPi _{{\text {ROT}}}\) )

Input: Server \(\mathsf {S}\) holds \(\mathbf{x}=\left( x_1,\ldots ,x_n \right) \in \left( \{0,1\}^{s}\right) ^n\), and the client \(\mathsf {C}\) holds \(\mathbf{y}=\left\{ y_1,\ldots ,y_{t/2}\right\} \subseteq [n]\).

  • \(\alpha _{{\text {ROT}}}\left( \mathbf{x} \right) \): Samples n random strings \(r_1,\ldots ,r_n\leftarrow \{0,1\}^s\) independently. For every \(i\in [n]\), let \(\mathbf{r}[i]\in \{0,1\}^{sn}\) be a share of \(\mathbf{r}=(r_1,\ldots ,r_n)\) in an \((n-t)\)-out-of-n Shamir’s secret sharing (we pad \(\mathbf{r}[i]\) if needed). Output \(\mathbf{a}= \big (\alpha _{{\text {BCS}}}\left( (x_i\oplus r_i,\mathbf{r}[i]) \right) \big )_{i\in [n]}\) (the \(x_i\oplus r_i\)’s are also padded accordingly).

  • \(\beta _{{\text {ROT}}}\left( \mathbf{y} \right) \): Output \(\mathbf{b}=(\beta _{{\text {BCS}}}\left( b_1 \right) ,\ldots ,\beta _{{\text {BCS}}}\left( b_n \right) )\), where \(b_i=0\) if and only if \(i\in \mathbf{y}\).

  • \(\varphi _{{\text {ROT}}}\left( \mathbf{y},\mathbf{b},\mathbf{c}' \right) \): Let \(\mathbf{c}=\left( \varphi _{{\text {BCS}}}\left( c'_i \right) \right) _{i=1}^{n}\), let \(\mathbf{c}_1=\left( c_i \right) _{i\in \mathbf{y}}\), and let \(\mathbf{c}_2=\left( c_i \right) _{i\notin \mathbf{y}}\). If the elements in \(\mathbf{c}_2\) agree on a common secret \(\mathbf{r}\in \{0,1\}^{sn}\), then output \(\mathbf{c}_1\oplus \left( r_i \right) _{i\in \mathbf{y}}\). Otherwise, output \(\bot \).
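
The following Python sketch illustrates the mask-and-share idea of Construction 10. It is a simplified toy, not the construction itself: the mask vector \(\mathbf{r}\) is packed into a single element of a toy prime field, Shamir sharing is done over that field, the \(\varPi _{{\text {BCS}}}\) layer is replaced by directly handing the client the selected pair components, and the consistency check that triggers \(\bot \) is omitted. All function names are ours.

```python
import secrets

P = 2**521 - 1  # a Mersenne prime; toy field, assumes s * n < 521

def shamir_share(secret, threshold, n):
    """threshold-out-of-n Shamir sharing of an integer secret over GF(P)."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(threshold - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation at x = 0 (no consistency check in this toy)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

def alpha_rot(xs, t, s):
    """Server: pair each masked string x_i xor r_i with a share of the whole mask vector."""
    n = len(xs)
    rs = [secrets.randbits(s) for _ in range(n)]
    r_packed = sum(r << (s * i) for i, r in enumerate(rs))  # "concatenation" of r_1, ..., r_n
    shares = shamir_share(r_packed, n - t, n)               # (n-t)-out-of-n sharing
    return [(xs[i] ^ rs[i], shares[i]) for i in range(n)]

def phi_rot(selection, received, s, n):
    """Client: received[i] is the masked value for i in `selection`, and a share otherwise."""
    r_packed = shamir_reconstruct([received[i] for i in range(n) if i not in selection])
    rs = [(r_packed >> (s * i)) & ((1 << s) - 1) for i in range(n)]
    return {i: received[i] ^ rs[i] for i in selection}

# toy run: n = 6 strings of s = 8 bits, t = 4, the client asks for t/2 = 2 of them
xs = [secrets.randbits(8) for _ in range(6)]
pairs = alpha_rot(xs, t=4, s=8)
selection = {0, 3}
received = [pairs[i][0] if i in selection else pairs[i][1] for i in range(6)]
assert phi_rot(selection, received, s=8, n=6) == {0: xs[0], 3: xs[3]}
```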

Lemma 4

\(\varPi _{{\text {ROT}}}\) computes \(f_1\) with \({\text {CC}}\) at most \(5\cdot sn^2\), such that the following holds:

  • \(\varPi _{{\text {ROT}}}\) is correct.

  • \(\varPi _{{\text {ROT}}}\) has perfect input-dependent security.

  • For any non-uniform adversary \(\mathcal {A}\) corrupting the client in the \(\mathcal {OT}\text{- }\text {hybrid}\) world, there exists a non-uniform simulator \(\mathsf {Sim}_{\mathcal {A}}\) corrupting the client in the ideal-world of \(f_2\), such that for all \(\mathbf{x}\in \left( \{0,1\}^s\right) ^n\), \(\mathbf{y}\subseteq [n]\) of size t/2, and \(\mathsf {aux}\in \{0,1\}^*\) it holds that

    $${\text {View}}^{{\text {HYBRID}}}_{\mathcal {A}\left( \mathbf{y},\mathsf {aux} \right) ,\varPi _{{\text {ROT}}}}\left( x,\mathbf{y} \right) \equiv {\text {View}}^{{\text {IDEAL}}}_{\mathsf {Sim}_{\mathcal {A}}\left( \mathbf{y},\mathsf {aux} \right) ,f_2}\left( x,\mathbf{y} \right) .$$

    In other words, although the simulator receives t/2 indexes as inputs, it is allowed to ask for t strings from the server’s input.

Intuitively, a malicious server cannot force the client to reconstruct two different secrets \(\mathbf{r}\) for two different inputs. This is due to the fact that for every two different inputs, the set of common \(b_i\)’s that are 1 (i.e., the common shares the client will receive for both inputs) is of size at least \(n-t\). This implies that, except for a certain set of client inputs that the adversary can choose, the client will receive a correct output. As for a malicious client, observe that it can ask for at most t masked values, as otherwise it will not have enough shares to recover the secret \(\mathbf{r}\).

We next incorporate \(\varPi _{{\text {ROT}}}\) into \(\varPi _{{\text {IKOPS}}}\) to get a protocol that is perfectly client-secure. The full proof of Lemma 4 is deferred to Sect. 5.4.

5.3 Upgrading \(\varPi _{{\text {IKOPS}}}\)

We are finally ready to prove Lemma 3. As stated in Sect. 5.2, we replace the randomly chosen watchlist with a deterministic one using \(\varPi _{{\text {ROT}}}\). Formally, the protocol, denoted \(\varPi _{{\text {IKOPS}}}^+\), is described as follows.

Protocol 11

( \(\varPi ^+_{{\text {IKOPS}}}\left( \varepsilon \right) \) )

Input: Server has input \(x\in (\mathcal {X}\cup \left\{ \bot \right\} )\) and client has input \(\mathbf{y}\in \{0,1\}^m\).

  • \(\alpha ^+\left( x \right) \): Output \((\mathbf{a}_1,\mathbf{a}_2)\) as in \(\varPi _{{\text {IKOPS}}}\), with the exception of \(\mathbf{a}_2\) being equal to \(\alpha _{{\text {ROT}}}\left( V_1,\ldots ,V_n \right) \) (recall that \(V_i\) is the view of the virtual server \(\mathsf {S}_i\)).

  • \(\beta ^+\left( \mathbf{y} \right) \): Output \((\mathbf{b}_1,\mathbf{b}_2)\) as in \(\varPi _{{\text {IKOPS}}}\), with the exception of \(\mathbf{b}_2\) being equal to \(\beta _{{\text {ROT}}}\left( \mathcal {W} \right) \), where \(\mathcal {W}\subseteq [n]\) is of size t/2 chosen uniformly at random (recall that \(t=\left\lceil n/3 \right\rceil -1\) bounds the number of corrupted parties in the MPC protocol).

  • \(\varphi ^+\left( \mathbf{y},\mathbf{b},\mathbf{c}' \right) \): Output same as \(\varphi \left( \mathbf{y},\mathbf{b},\mathbf{c}' \right) \), with the exception that we apply \(\varphi _{{\text {ROT}}}\) to recover the outputs and watchlist.

 

Clearly, Lemma 3 follows from the following lemma, asserting the security of \(\varPi _{{\text {IKOPS}}}^+\).

Lemma 5

For every \(\varepsilon >0\), \(\varPi _{{\text {IKOPS}}}^+\left( \varepsilon \right) \) computes g with correctness, is strong \(\varepsilon \)-server secure, and has perfect client security. Furthermore, using the PRE from [1, 16] and the BGW protocol, the \({\text {CC}}\) will be the following. Let \(\gamma _i\) denote the size of the smallest formula for evaluating the i-th bit of g(x, y), and let \(\gamma =\max _{i} \gamma _i\). Then, \(\varPi ^+_{{\text {IKOPS}}}\) has \({\text {CC}}\) at most

$$\ell _{{\text {IKOPS}}}^+=\xi _{{\text {IKOPS}}}^+\cdot \gamma ^2\cdot \log k\cdot \log |\mathcal {Y}|\cdot {\text {polylog}}\left( \varepsilon ^{-1} \right) ,$$

where \(\xi _{{\text {IKOPS}}}^+\in {\mathbb R}^+\) is some global constant independent of the function and of \(\varepsilon \). In comparison to \(\varPi _{{\text {IKOPS}}}\), the only difference in the \({\text {CC}}\) is in the constant and the exponent of \(\log \left( \varepsilon ^{-1} \right) \) taken. Specifically, it holds that

$$\frac{\ell _{{\text {IKOPS}}}^+}{\ell _{{\text {IKOPS}}}}=\frac{\xi _{{\text {IKOPS}}}^+}{\xi _{{\text {IKOPS}}}}\cdot \log ^2\left( \varepsilon ^{-1} \right) .$$

Proof

Correctness trivially holds. We next prove that the protocol is strong \(\varepsilon \)-server secure. Consider a message \(\mathbf{a}^*\) sent by a malicious server holding \(x\in (\mathcal {X}\cup \left\{ \bot \right\} )\) and an auxiliary input \(\mathsf {aux}\in \{0,1\}^*\) in the \(\mathcal {OT}\text{- }\text {hybrid}\) world. We need to show the existence of a certain probability vector \(\mathbf{p}\in {\mathbb R}^{|\mathcal {X}|+1}\). It will be convenient to define \(\mathbf{p}\) via a simulator \(\mathsf {Sim}\), where \(p_{x^*}\) is the probability that \(\mathsf {Sim}\) sends \(x^*\) to \(\mathsf {T}\) as the input.

The idea is to have the simulator check the inconsistencies made by the adversary. This is done via an inconsistency graph, where each vertex corresponds to a virtual party, and each edge corresponds to an inconsistency. There are three cases in which the simulator will send \(\bot \) to \(\mathsf {T}\). The first case is when there is a large vertex cover among the servers. In the \(\mathcal {OT}\text{- }\text {hybrid}\) world, the client will then see an inconsistency with high probability, and hence output \(\bot \). The second case is when there are two virtual clients \(\mathsf {C}_{j,0}\) and \(\mathsf {C}_{j,1}\), corresponding to the same bit of \(\mathsf {Enc}\left( \mathbf{y} \right) \), that are both inconsistent with the same server. Observe that the real client will always see an inconsistency, regardless of its input or randomness. The remaining case is when for each \(j\in [m\cdot m']\), the adversary tampered with exactly one of \(\mathsf {C}_{j,0}\) or \(\mathsf {C}_{j,1}\). Here the real client will fail to notice the inconsistency only if it asks for exactly the virtual clients the adversary did not tamper with, which happens with low probability. For all other cases, the probability that the real client will see an inconsistency is independent of its input. Therefore the simulator can compute it and send \(\bot \) with this probability. When the simulator does not send \(\bot \) as its input, it uses the MPC simulator to reconstruct an effective input.
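
Before the formal description, the following Python sketch illustrates only the graph-theoretic bookkeeping used by the simulator: building the inconsistency graph over the virtual servers and computing a minimum vertex cover by brute force, which is acceptable here since \(n=\varTheta (\log (\varepsilon ^{-1}))\) is small. The predicate inconsistent is an assumed placeholder for the actual pairwise consistency check of views, and the function names are ours.

```python
from itertools import combinations

def inconsistency_graph(views, inconsistent):
    """Vertices are the n virtual servers; an edge {i, j} records that the views
    V_i and V_j contradict each other (`inconsistent` is an assumed predicate)."""
    return [(i, j) for i, j in combinations(range(len(views)), 2)
            if inconsistent(views[i], views[j])]

def minimum_vertex_cover_size(n, edges):
    """Brute-force minimum vertex cover; fine here since n = Theta(log(1/eps))."""
    for size in range(n + 1):
        for cover in combinations(range(n), size):
            c = set(cover)
            if all(i in c or j in c for i, j in edges):
                return size
    return n

# The simulator sends bot whenever minimum_vertex_cover_size(n, edges) > t.
```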

We next formalize the description of the simulator. The simulator holds \(\mathbf{a}^*\) and \(\mathsf {aux}\) as an input.

  1. 1.

    Write \(\mathbf{a}^*=(\mathbf{a}^*_1,\mathbf{a}^*_2)\), where \(\mathbf{a}^*_1\) corresponds to the outputs and \(\mathbf{a}^*_2\) corresponds to the watchlist.

  2. 2.

    Apply the simulator guaranteed by the security of \(\varPi _{{\text {BCS}}}\) to each pair of messages in \(\mathbf{a}^*_1\) to obtain \(V_{1,0},V_{1,1},\ldots ,V_{m\cdot m',0},V_{m\cdot m',1}\), and apply the simulator guaranteed by \(\varPi _{{\text {ROT}}}\) for each pair in \(\mathbf{a}^*_2\) to obtain \(V_1,\ldots ,V_n\) and a predicate P (if the output of the simulator is \(\bot \) instead of views, then send \(\bot \) to \(\mathsf {T}\)).

  3. 3.

    Generate an inconsistency graph \(G'\), with [n] as vertices, and where \(\left\{ i_1,i_2\right\} \) is an edge if and only if \(V_{i_1}\) and \(V_{i_2}\) are inconsistent. Let \(\mathsf {VC}\) be a minimum vertex cover of \(G'\). If \(|\mathsf {VC}|>t\) then send \(\bot \) to \(\mathsf {T}\).

  4. 4.

    Otherwise, pick a subset \(\mathcal {W}\subseteq [n]\) of size t/2 uniformly at random. If there exist \(i_1,i_2\in \mathcal {W}\) with an edge between them in \(G'\), or \(P(\mathcal {W})=1\), then send \(\bot \) to \(\mathsf {T}\).

  5. 5.

    Otherwise, extend \(G'\) into an inconsistency graph G, where there are new vertices \((j,b)\in [m\cdot m']\times \{0,1\}\), and \(\left\{ i,(j,b)\right\} \) is an edge if and only if \(V_{j,b}^i\) is inconsistent with \(V_i\) (i.e., the view \(\mathsf {C}_{j,b}\) received from \(\mathsf {S}_i\) is inconsistent with the view of \(\mathsf {S}_i\)).

  6. 6.

    Let \(\mathcal {S}\subseteq [m\cdot m']\times \{0,1\}\) be the set of vertices corresponding to the virtual clients that have an edge with a vertex in \(\mathcal {W}\). If there exists \(j\in [m]\) such that either

    • \(\left( m'(j-1)+j',0\right) ,(m'(j-1)+j',1)\in \mathcal {S}\) for some \(j'\in [m']\), or

    • for every \(j'\in [m']\) exactly one of the vertices \((m'(j-1)+j',0),(m'(j-1)+j',1)\) is in \(\mathcal {S}\),

    then send \(\bot \) to \(\mathsf {T}\).

  7. 7.

    Otherwise, send \(\bot \) with probability \(1-2^{-e(\mathcal {S})}\), where \(e(\mathcal {S})\) is the number of edges coming out of \(\mathcal {S}\). With the complement probability, apply the (malicious) MPC simulator on the virtual servers \(\mathsf {S}_i\), where \(i\in \mathsf {VC}\), to get an input for each of the virtual servers in \(\mathsf {VC}\). The simulator \(\mathsf {Sim}\) can then use the inputs of the other virtual servers to get an effective input \(x^*\in (\mathcal {X}\cup \left\{ \bot \right\} )\), and send it to \(\mathsf {T}\).

The vector \(\mathbf{p}\) is then defined as \(p_{x^*}=\Pr \left[ \mathsf {Sim}\text { sends }x^*\text { to }\mathsf {T}\right] \). Recall that for every \(\mathbf{y}\in \mathcal {Y}\) and \(z\in \left\{ \bot ,0,\ldots ,k-1\right\} \) we denote

$$q_{y,z}^{\varPi _{{\text {IKOPS}}}^+}\left( \mathbf{a}^* \right) =\Pr \left[ \varphi ^+\left( \mathbf{y},\mathbf{b},\mathbf{a}^*[\mathbf{b}] \right) =z\right] ,$$

where \(\mathbf{b}=\beta ^+\left( \mathbf{y} \right) \) and the probability is over the randomness of \(\beta ^+\) and \(\varphi ^+\). To alleviate notation, we will write \(\mathbf{q}=\mathbf{q}^{\varPi _{{\text {IKOPS}}}^+\left( \varepsilon \right) }\left( \mathbf{a}^* \right) \). Fix \(\mathbf{y}\in \{0,1\}^{m}\) and \(z\in \left\{ \bot ,0,\ldots ,k-1\right\} \). We show that

$$\begin{aligned} \left|q_{\mathbf{y},z}-M_g^T\left( \cdot ,(\mathbf{y},z) \right) \cdot \mathbf{p} \right|\le \varepsilon \cdot p_\bot . \end{aligned}$$
(5)

Observe that since \(\varPi _{{\text {BCS}}}\) and \(\varPi _{{\text {ROT}}}\) have perfect server security (input-dependent, in the case of \(\varPi _{{\text {ROT}}}\)), each \(V_{m'(j-1)+ j',b}\) and each \(V_i\) in the \(\mathcal {OT}\text{- }\text {hybrid}\) world is distributed exactly the same as its counterpart in the ideal world. Therefore, we may condition on the event that they are indeed the same. Furthermore, by the security of \(\varPi _{{\text {ROT}}}\), we may also assume that the watchlist \(\mathcal {W}\) is distributed the same, and that \(P(\mathcal {W})=0\), as otherwise in both worlds the client will output \(\bot \). In the following we fix the views and \(\mathcal {W}\). We next separate the analysis into cases, stated in the following claims (proven below). These claims together immediately imply Eq. (5).

Claim 12

If \(|\mathsf {VC}|>t\) then Eq. (5) holds.

Claim 13

Assume that \(|\mathsf {VC}|\le t\) and that for every \(i\in \mathcal {W}\) and every \(j\in [m]\), there exists \(j'\in [m']\) such that either both \(V_{m'(j-1)+j',0}\) and \(V_{m'(j-1)+j',1}\) are consistent with \(V_i\), or both are inconsistent with \(V_i\). Then Eq. (5) holds. Moreover, the simulation is perfect.

Claim 14

Assume that \(|\mathsf {VC}|\le t\) and that there exist \(i\in \mathcal {W}\) and \(j\in [m]\) such that for every \(j'\in [m']\) exactly one of the views \(V_{m'(j-1)+j',0}\) and \(V_{m'(j-1)+j',1}\) is inconsistent with \(V_i\). Then Eq. (5) holds.

Proof

(of Claim 12). Intuitively, the vertex cover of the graph \(G'\) gives us information on which servers “misbehaved”. A large vertex cover means that a lot of servers have inconsistent views, implying that there are many edges in the graph. Therefore, a random subset of the vertices will contain at least one edge with high probability. We next formalize this intuition.

Since \(|\mathsf {VC}|>t\), the maximum matching in \(G'\) is of size at least \((t+1)/2\). Therefore, in the \(\mathcal {OT}\text{- }\text {hybrid}\) world, the expected number of edges that the client will have in its watchlist is at least \(\frac{t+1}{2}\cdot \frac{\left( {\begin{array}{c}n-2\\ t/2-2\end{array}}\right) }{\left( {\begin{array}{c}n\\ t/2\end{array}}\right) }=\varTheta \left( n \right) \). By applying Hoeffding’s inequality, with probability at least \(1-2^{-\varTheta \left( n \right) }\ge 1-\varepsilon \) the client will output \(\bot \). As in the ideal-world the simulator sends \(\bot \) to \(\mathsf {T}\) with probability 1, Eq. (5) follows.
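
As a sanity check, the expectation used above can be evaluated directly; a tiny Python sketch (illustrative only, the function name is ours, and it assumes \(t\ge 4\) so that the binomial coefficients are defined):

```python
from math import comb

def expected_watched_edges(n, t):
    """Expected number of matching edges that fall entirely inside a uniformly
    random watchlist of size t/2, for a matching of size (t+1)/2 (rounded down);
    each fixed edge is fully watched with probability C(n-2, t/2-2) / C(n, t/2)."""
    k = t // 2
    return ((t + 1) // 2) * comb(n - 2, k - 2) / comb(n, k)

# With t close to n/3 the expectation grows linearly in n, e.g.:
# expected_watched_edges(60, 20) and expected_watched_edges(120, 40)
```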

Proof

(of Claim 13). We separate into two cases. For the first case, assume that there exist \(i\in \mathcal {W}\), \(j\in [m]\), and \(j'\in [m']\) such that both \(V_{m'(j-1)+j',0}\) and \(V_{m'(j-1)+j',1}\) are inconsistent with \(V_i\). Then \((m'(j-1)+j',0),(m'(j-1)+j',1)\in \mathcal {S}\), hence the simulator always sends \(\bot \) in this case. Furthermore, in the \(\mathcal {OT}\text{- }\text {hybrid}\) world, for every input \(\mathbf{y}\in \{0,1\}^m\) the client will see an inconsistency either between \(V_{m'(j-1)+j',0}\) and \(V_i\), or between \(V_{m'(j-1)+j',1}\) and \(V_i\). Thus, Eq. (5) holds with no error.

By the assumptions of the claim, for the second case we may assume that for every \(i\in \mathcal {W}\) and every \(j\in [m]\), there exists \(j'\in [m']\) such that both \(V_{m'(j-1)+j',0}\) and \(V_{m'(j-1)+j',1}\) are consistent with \(V_i\). In this case, in the \(\mathcal {OT}\text{- }\text {hybrid}\) world, the client will see an inconsistency with probability \(1-2^{-e(\mathcal {S})}\). With the complement probability, its output is determined by whatever the virtual servers computed. The output of the client in the ideal-world is either \(\bot \) with probability \(1-2^{-e(\mathcal {S})}\), or it is determined by the MPC simulator. Since the MPC simulator is assumed to be perfect and \(|\mathsf {VC}|\le t\) bounds the number of corrupted servers from above, it follows that Eq. (5) holds with no error.

Proof

(of Claim 14). By construction, the ideal-world simulator always sends \(\bot \) in this case. Additionally, in the \(\mathcal {OT}\text{- }\text {hybrid}\) world, the client uses \(\mathsf {Enc}\) on its input \(\mathbf{y}\) to receive \(m\cdot m'\) random bits \(\left( y_j^{j'} \right) _{j\in [m],j'\in [m']}\) conditioned on \(\oplus _{j'=1}^{m'}y_j^{j'}=y_j\) for every \(j\in [m]\). Since we assume that exactly \(m'\) virtual clients, corresponding to the same input bit \(y_j\), were tampered with by the adversary, it follows that with probability \(2^{-(m'-1)}\le \varepsilon \) the client will see only consistent views. Therefore, for every \(\mathbf{y}\in \{0,1\}^m\) it holds that

$$\left|q_{\mathbf{y},\bot }-M_g^T\left( \cdot ,(\mathbf{y},\bot ) \right) \cdot \mathbf{p} \right|=\left|q_{\mathbf{y},\bot }-p_{\bot } \right|=\Pr \left[ \varphi ^+\left( \mathbf{y},\mathbf{b},\mathbf{a}^*[\mathbf{b}] \right) \ne \bot \right] \le \varepsilon ,$$

and for every \(z\ne \bot \)

$$\left|q_{\mathbf{y},z}-M_g^T\left( \cdot ,(\mathbf{y},z) \right) \cdot \mathbf{p} \right|=\left|\Pr \left[ \varphi ^+\left( \mathbf{y},\mathbf{b},\mathbf{a}^*[\mathbf{b}] \right) =z\right] -0 \right|\le \varepsilon .$$

Equation (5) follows.

We next show that the protocol has perfect client-security. Consider an adversary \(\mathcal {A}\) corrupting the client. We construct the simulator \(\mathsf {Sim}_{\mathcal {A}}\). The construction of the simulator is done in the natural way, namely, it will apply the simulators of \(\varPi _{{\text {BCS}}}\) and \(\varPi _{{\text {ROT}}}\), and then the decoding of the PRE, to receive an output. It can then use the MPC simulator to simulate the views of the virtual servers in its watchlist. Formally, the simulator operates as follows.

  1. 1.

    On input \(\mathbf{y}\in \{0,1\}^m\) and auxiliary input \(\mathsf {aux}\in \{0,1\}^*\), query \(\mathcal {A}\) to receive a message \(\mathbf{b}^*\) to be sent to the OT.

  2. 2.

    Write \(\mathbf{b}^*=(\mathbf{b}^*_1,\mathbf{b}^*_2)\), where \(\mathbf{b}^*_1\) corresponds to the outputs and \(\mathbf{b}^*_2\) corresponds to the watchlist.

  3. 3.

    Apply the simulator guaranteed by the security of \(\varPi _{{\text {BCS}}}\) to each pair of messages in \(\mathbf{b}^*_1\) to obtain \(\left( b_{j} \right) _{j\in [m\cdot m']}\) for some \(b_j\in \{0,1\}\), and apply the simulator \(\mathsf {Sim}_{{\text {ROT}}}\), guaranteed by the security of \(\varPi _{{\text {ROT}}}\), for each pair in \(\mathbf{b}^*_2\) to obtain a set \(\mathcal {W}\subseteq [n]\).

  4. 4.

    Send \(\mathsf {Dec}\left( \left( b_{j} \right) _{j\in [m\cdot m']} \right) \) to \(\mathsf {T}\) to obtain an output z.

  5. 5.

    Apply the PRE simulator on z to obtain outputs \(\left( z_j \right) _{j\in [m\cdot m']}\) for each virtual client.

  6. 6.

    If \(|\mathcal {W}|>t\) then output \(\left( z_j \right) _{j\in [m\cdot m']}\) alongside whatever \(\mathsf {Sim}_{{\text {ROT}}}\) outputs and halt.

  7. 7.

    Otherwise, apply the (semi-honest) MPC simulator on the parties \(\left\{ \mathsf {S}_i\right\} _{i\in \mathcal {W}}\) with random strings as inputs, and on \(\left\{ \mathsf {C}_{j,b_j}\right\} _{j\in [m\cdot m']}\) with \(z_j\) as their respective outputs. Send the output of the MPC simulator to \(\mathsf {Sim}_{{\text {ROT}}}\), output whatever it outputs, and halt.

The security of \(\varPi _{{\text {BCS}}}\) and \(\varPi _{{\text {ROT}}}\) implies that \(\left( b_{j} \right) _{j\in [m\cdot m']}\) and \(\mathcal {W}\) are distributed exactly the same in both worlds. Therefore, the output \(z=g\left( x,\mathsf {Dec}\left( \left( b_{j} \right) _{j\in [m\cdot m']} \right) \right) \) is distributed the same, hence applying the PRE simulator on z will also result in the same distribution. Now, if \(|\mathcal {W}|>t\) then \(\mathsf {Sim}_{{\text {ROT}}}\) is guaranteed to produce a correct view as an output. If \(|\mathcal {W}|\le t\), then the MPC simulator will perfectly generate \(|\mathcal {W}|\) virtual views. Handing them over to \(\mathsf {Sim}_{{\text {ROT}}}\) would result in the view that is distributed the same as in the \(\mathcal {OT}\text{- }\text {hybrid}\) world.

5.4 Proof of Lemma 4

We first prove the following simple claim, stating that the client will always reconstruct a unique secret (if it does not output \(\bot \)).

Claim 15

Consider a message \(\mathbf{a}^*=\left( (a^*_{1,0},a^*_{1,1}),\ldots ,(a^*_{n,0},a^*_{n,1}) \right) \in \{0,1\}^{2sn}\) sent to the OT by a malicious server. Then for any different inputs \(\mathbf{y}_1\ne \mathbf{y}_2\) for the client, either it will output \(\bot \) for at least one of the inputs, or there exists a common secret \(\mathbf{r}\) that will be reconstructed.

Proof

Let \(\mathcal {B}=\left\{ i\in [n]:i\notin \mathbf{y}_1\wedge i\notin \mathbf{y}_2\right\} \). Then \(|\mathcal {B}|\ge n-t\), hence the client – who receives the shares \(\left( a^*_{i,1} \right) _{i\in \mathcal {B}}\) – can reconstruct a secret \(\mathbf{r}\) in case the shares are consistent. This secret will be the same for both \(\mathbf{y}_1\) and \(\mathbf{y}_2\).

We now prove the lemma.

Proof

(of Lemma 4). By construction, it is not hard to see that the protocol is correct. We next prove that the protocol has perfect input-dependent security. Consider an adversary \(\mathcal {A}\) corrupting the server. We construct a simulator \(\mathsf {Sim}_{\mathcal {A}}\) as follows. On input \(\mathbf{x}\) and auxiliary input \(\mathsf {aux}\in \{0,1\}^*\), query \(\mathcal {A}\) to receive a message \(\mathbf{a}^*=\left( (a^*_{1,0},a^*_{1,1}),\ldots ,(a^*_{n,0},a^*_{n,1}) \right) \in \{0,1\}^{2sn}\). If there is no set of \(n-t\) consistent shares among \(\left( a^*_{i,1} \right) _{i\in [n]}\), then \(\mathsf {Sim}_{\mathcal {A}}\) will send the constant 1 predicate alongside some arbitrary input \(\mathbf{x}_0\) to the trusted party \(\mathsf {T}\). Otherwise, let \(\mathcal {B}\) be the maximum set of indexes \(i\in [n]\) such that the \(a^*_{i,1}\) are shares consistent with a single value \(\mathbf{r}\in \left( \{0,1\}^s\right) ^n\). Then \(\mathsf {Sim}_\mathcal {A}\) will send to \(\mathsf {T}\) the input \(\left( \left( a^*_{i,0}\oplus r_i \right) _{i\notin \mathcal {B}},\left( 0^s \right) _{i\in \mathcal {B}}\right) \) with the predicate \(P_\mathcal {B}(\mathbf{y})=1\) if and only if \(\mathbf{y}\cap \mathcal {B}\ne \emptyset \).

To see why the simulator works, observe that \(\mathsf {Sim}_\mathcal {A}\) sends constant 1 predicate if and only if \(\mathcal {A}\) sent at most t consistent shares, forcing \(\mathsf {C}\) to output \(\bot \) in the ideal-world. Since this happens if there are too many inconsistencies, \(\mathsf {C}\) will output \(\bot \) in the \(\mathcal {OT}\text{- }\text {hybrid}\) world as well. Furthermore, if there are at least \(n-t\) shares that are consistent, then by Claim 15, there is a unique secret \(\mathbf{r}\) that can be reconstructed. Therefore, in the \(\mathcal {OT}\text{- }\text {hybrid}\) world, on input \(\mathbf{y}\), \(\mathsf {C}\) will output \(\bot \) if \(\mathbf{y}\cap \mathcal {B}\ne \emptyset \), and output \(\left( \mathbf{a}^*[\beta _{{\text {ROT}}}\left( \mathbf{y} \right) ]_i\oplus r_i \right) _{i\in \mathbf{y}}\) otherwise. Since \(\mathcal {B}\) was chosen to be the maximum set of indexes, the same holds in the input-dependent ideal-world.

We next show that the relaxed security requirement against malicious clients holds. Let \(\mathcal {A}\) be an adversary corrupting the client. The simulator \(\mathsf {Sim}_{\mathcal {A}}\) works as follows. On input \(\mathbf{y}\) and auxiliary input \(\mathsf {aux}\in \{0,1\}^*\), query \(\mathcal {A}\) to receive \(\mathbf{b}^*\in \{0,1\}^{n}\). If there are strictly more than t 0’s in \(\mathbf{b}^*\) then output n random strings, each of length s. Otherwise, send \(\left\{ i\in [n]:b^*_i=0\right\} \) to \(\mathsf {T}\) to receive output \(\left( x_i \right) _{i:b^*_i=0}\). \(\mathsf {Sim}_{\mathcal {A}}\) samples n random strings \(r_1,\ldots ,r_n\leftarrow \{0,1\}^s\). For \(i\in [n]\), let \(\mathbf{r}[i]\in \{0,1\}^{sn}\) be a share of \(\mathbf{r}=(r_1,\ldots ,r_n)\) in an \((n-t)\)-out-of-n Shamir’s secret sharing (pad \(\mathbf{r}[i]\) if needed). The simulator then generates the values

$$\mathbf{a}:=\Big (\left( \alpha _{{\text {BCS}}}(x_i\oplus r_i,\mathbf{r}[i]) \right) _{i:b^*_i=0},\left( \alpha _{{\text {BCS}}}\left( 0^{sn},\mathbf{r}[i] \right) \right) _{i:b^*_i=1}\Big ),$$

where the \(x_i\oplus r_i\)’s are padded accordingly. \(\mathsf {Sim}_\mathcal {A}\) will then compute and output

$$\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}^\ell \left( \mathbf{a},\left( \beta _{{\text {BCS}}}\left( b^*_1 \right) ,\ldots ,\beta _{{\text {BCS}}}\left( b^*_n \right) \right) \right) .$$

\(\mathsf {Sim}_{\mathcal {A}}\) works since, in the case where there are more than t 0’s in \(\mathbf{b}^*\), by the properties of the sharing scheme the view of \(\mathsf {C}\) in the \(\mathcal {OT}\text{- }\text {hybrid}\) world consists only of random values. Otherwise, \(\mathsf {C}\) will receive the masked \(x_i\) for the indexes i on which \(b^*_i=0\), and shares of the masks for the indexes i on which \(b^*_i=1\).

6 Tightness of the Analysis

Recall that our final protocol is a “wrapper” for an upgraded version of the protocol by Ishai et al. [19], namely, protocol \(\varPi _{{\text {IKOPS}}}^+\) from Sect. 5.3. In this section, we prove that for any (randomized) Boolean function f that is not full-dimensional and does not satisfy the constraints in Corollary 1, no “wrapper” protocol for \(\varPi _{{\text {IKOPS}}}^+\) will compute f with perfect full-security. Here, the “wrapper” protocol simply replaces the output \(\bot \) that the client receives from \(\varPi _{{\text {IKOPS}}}^+\) with a (biased) random bit. Formally, any “wrapper” protocol is parametrized with a vector \(\mathbf{v}\in [0,1]^{|\mathcal {Y}|}\) and an \(\varepsilon >0\), and is denoted by \(\varPi _f^{\mathbf{v}}\left( \varepsilon \right) \). Let \(g:(\mathcal {X}\cup \left\{ \bot \right\} )\times \mathcal {Y}\mapsto \left\{ \bot ,0,1\right\} \) be defined as \(g(x,y)=f(x,y)\) if \(x\ne \bot \) and \(g(\bot ,y)=\bot \). The “wrapper” protocol \(\varPi _f^{\mathbf{v}}\) is described as follows.

Protocol 16

(\(\varPi _f^{\mathbf{v}}\left( \varepsilon \right) \)) 

Input: Server \(\mathsf {S}\) has input \(x\in \mathcal {X}\) and client \(\mathsf {C}\) has input \(y\in \mathcal {Y}\).

  1. 1.

    The parties execute protocol \(\varPi _{{\text {IKOPS}}}^+\left( \varepsilon \right) \) in order to compute g. Let z be the output \(\mathsf {C}\) receives.

  2. 2.

    If \(z\ne \bot \), then \(\mathsf {C}\) outputs z. Otherwise, it outputs 1 with probability \(v_{y}\) (and 0 with the complement probability).

 

We next claim that the protocol cannot compute Boolean functions that are not full-dimensional with perfect full-security.

Theorem 17

Let \(f:\mathcal {X}\times \mathcal {Y}\mapsto \{0,1\}\) denote a (possibly randomized) Boolean function that has no constant columns, i.e., \(M_f(\cdot ,y)\) is not constant for every \(y\in \mathcal {Y}\), and is not full-dimensional, i.e., \({\text {dim}}\left( {\mathbf {aff}}\left( M_f^T \right) \right) < |\mathcal {Y}|\). Then for every \(\mathbf{v}\in [0,1]^{|\mathcal {Y}|}\) and every \(\varepsilon >0\), \(\varPi _{f}^{\mathbf{v}}\left( \varepsilon \right) \) does not compute f with perfect full-security.

Proof

Assume towards contradiction that \(\varPi _f^{\mathbf{v}}\left( \varepsilon \right) \) has perfect server security, for some \(\mathbf{v}\) and \(\varepsilon \). We next construct \(|\mathcal {Y}|+1\) adversaries, such that each adversary forces the vector of outputs of the client \(\mathbf{q}^{\varPi _f^{\mathbf{v}}\left( \varepsilon \right) }\), to be a different point inside the convex-hull of the rows of \(M_f\). We then show that these points are affinely independent, giving us a contradiction. First, write each input of the client as a binary string \(\mathbf{y}\) of length m. For every \(\mathbf{y}\in \mathcal {Y}\) define the adversary \(\mathcal {A}_\mathbf{y}\) as follows.

  1. 1.

    Fix an encoding \(\mathbf{y}'=\left( y_j^1,\ldots ,y_j^{m'} \right) _{j\in [m]}\in {\text {Supp}}\left( \mathsf {Enc}(\mathbf{y}) \right) \), and fix some \(x^*_{\mathbf{y}}\in \mathcal {X}\) such that \(f(x^*_{\mathbf{y}},\mathbf{y})\ne v_y\). (such an \(x^*_{\mathbf{y}}\) exists, since \(M_f\) does not have constant columns).

  2. 2.

    Execute \(\varPi _{{\text {IKOPS}}}^{+}\left( \varepsilon \right) \) honestly with input \(x^*_{\mathbf{y}}\), as fixed above, with the following single exception. For every \(i\in [n]\), \(j\in [m]\), and \(j'\in [m']\), modify \(V^i_{m'(j-1)+j',1 - y^{j'}_j}\) such that it is inconsistent with \(V_i\).

Finally, define the adversary \(\mathcal {A}_0\) who picks an arbitrary \(x^*_0\in \mathcal {X}\) as an input, and acts honestly with the exception that it tampers with all \(V^i_{j,b}\)’s, making them inconsistent with the corresponding \(V_i\). Let \(\mathbf{a}^*\left( x_{\mathbf{y}}^* \right) \) be the message \(\mathcal {A}_\mathbf{y}\) sends to the OT.

Let us analyze the client’s vector of outputs \(\mathbf{q}^{\varPi _f^{\mathbf{v}}\left( \varepsilon \right) }\left( \mathbf{a}^*\left( x_{\mathbf{y}}^* \right) \right) \), for any adversary \(\mathcal {A}_{\mathbf{y}}\), for \(\mathbf{y}\in \mathcal {Y}\cup \left\{ 0\right\} \). For brevity, we write \(\mathbf{q}\left( x^*_{\mathbf{y}} \right) \) instead. By definition, \(\mathcal {A}_0\) forces the client to sample its output according to \(\mathbf{v}\), hence \(\mathbf{q}\left( x^*_0 \right) =\mathbf{v}\). Next, fix \(\mathbf{y}\in \mathcal {Y}\). Observe that for every \(\hat{\mathbf{y}}\ne \mathbf{y}\), any encoding of \(\hat{\mathbf{y}}\) differs from \(\mathbf{y}'\) in at least one bit, i.e., \(\hat{y}_j^{j'}\ne y_j^{j'}\) for some \(j\in [m]\) and \(j'\in [m']\), hence on input \(\hat{\mathbf{y}}\), the client will see \(V^i_{m'(j-1)+j',1 - y^{j'}_j}\) for every i. Since the inconsistency is made with every virtual server, on input \(\hat{\mathbf{y}}\), the client will notice it and output \(\bot \) with probability 1. By the description of \(\varPi _f^{\mathbf{v}}\), it follows that

$$\begin{aligned} q_{\hat{\mathbf{y}}}\left( x^*_{\mathbf{y}} \right) =v_{\hat{\mathbf{y}}}, \end{aligned}$$
(6)

for every \(\hat{\mathbf{y}}\ne \mathbf{y}\). On the other hand, on input \(\mathbf{y}\), the client outputs \(\bot \) if and only if \(\mathsf {Enc}\left( \mathbf{y} \right) \ne \mathbf{y}'\), which happens with probability \(1-2^{-m(m'-1)}\). With the complement probability \(2^{-m(m'-1)}\) it does not detect an inconsistency, and outputs \(f(x^*_{\mathbf{y}},\mathbf{y})\). Therefore

$$\begin{aligned} q_\mathbf{y}\left( x^*_{\mathbf{y}} \right) =2^{-m(m'-1)}\cdot f(x^*_{\mathbf{y}},\mathbf{y})+(1-2^{-m(m'-1)})\cdot v_{\mathbf{y}}. \end{aligned}$$
(7)

Thus, Eqs. (6) and (7) yield that

$$\begin{aligned} \mathbf{q}\left( x^*_{\mathbf{y}} \right) = \mathbf{v}- 2^{-m(m'-1)}(v_\mathbf{y}- f(x^*_\mathbf{y},\mathbf{y}))\cdot \mathbf{e}_\mathbf{y}, \end{aligned}$$

where \(\mathbf{e}_\mathbf{y}\) is the \(\mathbf{y}\)-th unit vector in \(\mathbb {R}^{|\mathcal {Y}|}\).

To conclude the proof, observe that \(x^*_{\mathbf{y}}\) was chosen so that \(f(x^*_\mathbf{y},\mathbf{y})\ne v_\mathbf{y}\), implying that the set of points \(\left\{ \mathbf{q}\left( x^*_{\mathbf{y}} \right) \right\} _{\mathbf{y}\in \mathcal {Y}\cup \left\{ 0\right\} }\) is affinely independent. Furthermore, since \(\varPi _f^{\mathbf{v}}\) is assumed to have perfect server security, Lemma 1 implies that all of these points lie inside \( {\mathbf {conv}\left( M_f^T \right) }\). Therefore, \( {\mathbf {aff}}\left( M_f^T \right) ={\mathbb R}^{|\mathcal {Y}|}\), contradicting the assumption that f is not full-dimensional.

7 A Note on Efficiency

While our main goal is to understand the feasibility of perfectly secure 2PC, our construction does confer concrete efficiency benefits for certain parameter ranges. It is instructive to compare our construction with the IKOPS protocol, for deterministic functions (from the right class). Here we focus on the number of OT calls, which are the most expensive part to implement in practice (usually with computational security). Specifically, for simplicity, we consider the number of calls to a \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-}s\text {-string-OT}\) oracle, of any length s, rather than \(\left( {\begin{array}{c}2\\ 1\end{array}}\right) \text {-bit-OT}\). We note that the string length of our OTs is quite a bit larger than in IKOPS (due to the step ensuring perfect client security, where the length is multiplied by the number of servers). However, this comparison is somewhat justified when practical efficiency is the concern, since for particularly long strings a string-OT oracle can be used to pick short PRG seeds, instead of the strings themselves, during a preprocessing phase. This is done by having the server send \(s_0\) and \(s_1\) to the “short” string-OT functionality, while the client, whose input is \(b\in \{0,1\}\), receives \(s_b\). Then, to implement the “long” string-OT during the protocol execution, the sender sends to the client \(\mathsf {G}(s_a)\oplus m_a\), for \(a\in \{0,1\}\), where \(\mathsf {G}\) is a PRG and \(m_0\) and \(m_1\) are the “long” messages.
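
A minimal Python sketch of this preprocessing trick (illustrative only; SHAKE-256 stands in for the PRG \(\mathsf {G}\), and the “short” string-OT is abstracted away by simply handing the client \(s_b\)):

```python
import hashlib, secrets

def prg(seed: bytes, out_len: int) -> bytes:
    """Expand a short seed into out_len pseudorandom bytes (SHAKE-256 as the PRG)."""
    return hashlib.shake_256(seed).digest(out_len)

# Preprocessing: run the "short" string-OT on two fresh seeds.
s0, s1 = secrets.token_bytes(16), secrets.token_bytes(16)
b = 1                      # the client's choice bit
s_b = (s0, s1)[b]          # what the client obtains from the short string-OT

# Online phase: the server sends both long messages masked by the expanded seeds.
m0, m1 = b"long message 0".ljust(64, b"\0"), b"long message 1".ljust(64, b"\0")
c0 = bytes(x ^ y for x, y in zip(prg(s0, len(m0)), m0))
c1 = bytes(x ^ y for x, y in zip(prg(s1, len(m1)), m1))

# The client unmasks only the message matching its choice bit.
m_b = bytes(x ^ y for x, y in zip(prg(s_b, 64), (c0, c1)[b]))
assert m_b == m1
```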

Fix a deterministic function \(f:\mathcal {X}\times \mathcal {Y}\rightarrow \left\{ 0,\ldots ,k-1\right\} \) satisfying the conditions of Corollary 1. The number of calls to string-OT in \(\varPi ^+_{{\text {IKOPS}}}\left( \varepsilon \right) \) and in \(\varPi _{{\text {IKOPS}}}\left( \varepsilon \right) \) is \(\log \left( \varepsilon ^{-1} \right) (\log {|\mathcal {Y}|}+c)\), where c is a constant of roughly 1400 (c is roughly the same in both protocols). When considering our perfectly secure protocol, we set \(\varepsilon =\frac{1}{2n(n+1)!}\), where \(n=(k-1)\cdot |\mathcal {Y}|\). On the one hand, this results in communication complexity that is polynomial in \(|\mathcal {Y}|\) and k, which may be prohibitive for functions with large client-domain or range sizes. On the other hand, for functions with small client-domain and range sizes, we do better than IKOPS even for real-world error ranges, and the advantage grows as the allowed error \(\varepsilon \) decreases. For instance, consider the greater-than function \(3\mathsf {GT}:\{0,1,2\}\times \{0,1,2\}\rightarrow \{0,1\}\), with an error of \(\varepsilon =2^{-40}\) for IKOPS. The communication complexity we obtain is smaller than that of the IKOPS protocol by a factor of roughly \(40/\log (24)\approx 8.724\).