1 Introduction

Secure multiparty computation (MPC) [7, 9, 19, 32] allows a set of mutually distrusting parties to compute any function of their local inputs while guaranteeing (to the extent possible) the privacy of the inputs and the correctness of the outputs. Security is formulated by requiring that a real execution of a protocol is indistinguishable from an ideal execution in which the parties hand their inputs to a trusted party who computes the function and returns the outputs.

The strongest level of security one could hope for is so-called “full security” [8, 19]. Full security ensures guaranteed output delivery, in the sense that all parties learn their outputs without revealing any additional information about the other parties’ inputs. In particular, it implies fairness: malicious parties cannot learn their outputs while preventing honest parties from learning theirs. This level of security is achievable in the presence of an honest majority, either unconditionally [4, 7, 9, 31] (assuming secure point-to-point channels and a broadcast channel) or under standard cryptographic assumptions [18, 19] (assuming a public-key infrastructure).

Without an honest majority, a classical result of Cleve [11] shows that full security, or even fairness alone, is generally impossible. Concretely, there are many natural functionalities such that in every protocol for computing them, malicious parties can gain a significant advantage over honest parties in learning information about the output. Thus, when no honest majority is assumed, it is common to settle for weaker notions of security such as “security with abort” [5, 19, 20, 21, 32].

In this paper, we consider the possibility of achieving full security for functionalities that deliver output to a single party, to which we refer as “functionalities with solitary output” or “solitary functionalities” for short. Such functionalities capture many realistic use-cases of MPC in which different participants play different roles. For instance, consider a (single) employer who wishes to learn some aggregate private information about a group of employees, where the output should remain hidden from the employees. This type of functionality is commonly considered in the non-interactive setting, including the Private Simultaneous Messages (PSM) model of secure computation [15] and its robust variants [1, 6].

Beyond being a natural class of functionalities, the class of solitary functionalities is also interesting because it bypasses all fairness-based impossibility results. Indeed, fairness is not an issue when only one party receives an output, and thus Cleve’s impossibility result does not have any consequences for such functionalities. Therefore, the first question that we ask is a very basic feasibility question in the theory of MPC:

Do all functionalities with solitary output admit a fully secure protocol?

This feasibility question can be contrasted with the state of affairs in other ongoing lines of work on characterizing the functionalities that admit protocols with information-theoretic security, or UC security, or fairness [3, 10, 13, 23, 28], where the high-order bit is already known and the current efforts are focused on trying to fully characterize the realizable functionalities.

We make two main contributions. On the negative side, we settle the high-order bit by proving that some solitary functionalities cannot be computed with full security. This is conceptually intriguing because, as mentioned above, solitary functionalities do not introduce “fairness” problems. So what is the source of difficulty in achieving full security? Our impossibility proof extends Cleve’s original attack in a rather subtle way. In Cleve’s attack, the adversary gains advantage over honest parties by aborting the protocol at a point where it knows significantly more information about the output than the honest parties do. Our new attack, dubbed the “double-dipping attack”, is based on the following rough intuition. (The following simplified description of the attack ignores important subtleties; see Sects. 1.2 and 3 for a more precise version.) The adversary controls a majority of the parties that includes the output party. It instructs one of the parties it controls to abort the protocol just when learning enough (but not all) information about the output. Intuitively, in such a case, the protocol must be run again with default values (in particular, the aborting party’s original input cannot be recovered, as the corrupted parties form a majority). At the end of the protocol, the adversary learns the output of f on two inputs that agree on the honest parties’ input values. This is information that the adversary cannot obtain in the ideal world, hence security fails.

On the positive side, we make progress towards full characterization of the solitary functionalities that admit fully secure protocols. We present such protocols for several natural and useful families of solitary functionalities, including variants of commonly studied MPC problems such as Private Set Intersection. Our positive results apply in many cases where negative results are known for the multi-output variant. We elaborate on both our positive and negative results below.

1.1 Our Results

For our negative result, we present a family \(\varOmega \) of solitary functionalities for which no fully secure protocol exists. A representative example of such a functionality, first considered in the context of “best of both worlds” security [25] (see below), is the following 3-party functionality \(f_\mathsf {eq}\) with two parties \(P_1\) and \(P_2\) receiving inputs \(x,y\in \{1,2,3\}\), respectively, and an output-receiving party Q. The output of \(f_\mathsf {eq}\) is defined as \(f_\mathsf {eq}(x,y)=x\) if \(x= y\) and \(f_\mathsf {eq}(x,y)= \perp \) otherwise. We sketch below how “double dipping” is applied to this functionality, and present the family \(\varOmega \) and the formal impossibility proof in Sect. 3.

Next, in Sect. 4, we present several positive results. We start by proving that fairness implies full security in the following sense: if f is an n-party function, where all parties receive the output, and f can be computed with fairness, then the \((n+1)\)-party solitary functionality \(f'\), with inputs given to \(P_1,\ldots ,P_n\), as in f, and with the output delivered to the output party Q, can be computed with full security. Our next positive result shows that we can go well beyond such fairness-based results; specifically, we consider a family of n-party functionalities that we call functions with “forced output distribution”. Described for the 3-party case, this family includes all functions f(x, y) (with inputs x, y to \(P_1,P_2\), respectively, and output to Q) such that for at least one of the input parties, say \(P_1\), there is a distribution over its input under which the output f(x, y) has the same distribution, no matter what the other input is. Note that such (non-trivial) functions f cannot be computed with fairness, as this would imply fair coin-tossing, which is impossible [11]. Finally, as a third positive result, we consider a family of functionalities that we term “functionalities with fully revealing input”. Described in the 3-party setting above, this family includes all functionalities where one of the parties, say \(P_1\), has an input for which the function f becomes injective (as a function of the other party’s input).
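To make the two criteria concrete, here is a minimal brute-force sketch (an illustration of ours, not part of the formal development); the helper names are hypothetical, the function is given as a finite table, and the forced-distribution criterion is tested only for the uniform distribution, whereas the definition allows an arbitrary distribution over \(P_1\)’s inputs.

```python
from collections import Counter

# A minimal sketch: f is a solitary 3-party function f(x, y) (Q provides no
# input) given as a Python callable over finite domains X and Y.
# The helper names below are ours, for illustration only.

def has_fully_revealing_input(f, X, Y):
    # "Fully revealing input": some x makes y -> f(x, y) injective,
    # so Q's output determines P2's input.
    return any(len({f(x, y) for y in Y}) == len(Y) for x in X)

def uniform_forces_output(f, X, Y):
    # "Forced output distribution", checked only for the uniform distribution
    # over X: with x uniform, the output distribution must not depend on y.
    dists = [Counter(f(x, y) for x in X) for y in Y]
    return all(d == dists[0] for d in dists)

f_eq = lambda x, y: x if x == y else None   # the 3-party example above (None plays the role of "bot")
f_xor = lambda x, y: x ^ y                  # XOR of two input bits

print(has_fully_revealing_input(f_eq, [1, 2, 3], [1, 2, 3]))  # False
print(uniform_forces_output(f_eq, [1, 2, 3], [1, 2, 3]))      # False
print(uniform_forces_output(f_xor, [0, 1], [0, 1]))           # True
```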

We stress that these results fall short of providing a full characterization of the fully secure solitary functionalities, as we give an example of a function that does not fall into any of the families of positive results but nevertheless can be computed with full security. Interestingly, we compute this function using a variant of the GHKL protocol [23] for computing fair two-party functionalities, yet—viewed as a symmetric two-party functionality—it is inherently unfair. We leave the question of finding a full characterization as an intriguing open question for future work.

Example. To demonstrate the usefulness of the above positive and negative results, we consider some variants of the Private Set Intersection (PSI) problem. In this problem, the inputs xy of \(P_1,P_2\) correspond to subsets \(S_1,S_2\) of some domain [m] and the output is the intersection \(S=S_1\cap S_2\). It follows from our negative result that if \(|S_1|=|S_2|=k\), for some fixed k with \(0<k<m/2\), then this function cannot be computed with full security (in fact, the function \(f_\mathsf {eq}\) mentioned above is exactly the case \(k=1\)). On the other hand, for the same inputs, if the required output is only the intersection size, i.e. |S|, then this becomes a functionality with a forced output distribution (e.g., by choosing \(S_1\) as a uniformly random set of size k) and so this functionality can be computed with full security. Similarly, if we allow \(|S_1|,|S_2|\) to be anywhere between k and m then PSI with full security becomes possible (using [m] as a fully revealing input) and, if we allow \(|S_1|,|S_2|\) to be anywhere between 0 and k, this is also possible (using a degenerate version of the forced output distribution, where \(\emptyset \) is selected with probability 1). Other interesting cases, like the case where \(|S_1|,|S_2|\) are between 1 and k, are left as an open problem. (See the full version of the present paper [24] for an analysis of additional variants of PSI, including additional variants where the output is just the intersection size |S|, or just a bit indicating whether \(S=\emptyset \), sometimes referred to as the disjointness function. The full version also includes similar analyses for different natural flavors of Oblivious Transfer (OT)).
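The claim that the intersection size has a forced output distribution can be checked by exact enumeration for small parameters. The following is a minimal sketch of ours (helper names hypothetical): if \(S_1\) is uniform over all size-k subsets of [m], then \(|S_1\cap S_2|\) is hypergeometric and its distribution does not depend on the choice of \(S_2\).

```python
from itertools import combinations
from collections import Counter

# A minimal sketch: enumerate all size-k sets S1 and check that the
# distribution of the intersection SIZE |S1 ∩ S2| is the same for every
# size-k set S2 (so |S| has a forced output distribution), whereas the
# intersection S1 ∩ S2 itself clearly does not.

def size_distribution(m, k, S2):
    return Counter(len(set(S1) & set(S2))
                   for S1 in combinations(range(1, m + 1), k))

m, k = 6, 2
dists = [size_distribution(m, k, S2)
         for S2 in combinations(range(1, m + 1), k)]
print(all(d == dists[0] for d in dists))   # True
print(dists[0])                            # e.g. Counter({1: 8, 0: 6, 2: 1})
```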

Fig. 1. Table summarizing our results vis-à-vis the PSI problem

Finally, as an additional contribution, we analyse the round complexity of computing solitary functionalities with full security. We observe that some of the protocols presented in our positive results are constant-round protocols, while others use a super-logarithmic number of rounds. We prove that, for certain solitary functionalities, full security actually requires super-constant round complexity (see Sect. 5). We leave figuring out the exact round complexity of general solitary functionalities as an intriguing open question for future work.

Feasibility Landscape of Boolean Solitary Functionalities. We conclude this section with a few sentences regarding the “feasibility” landscape of solitary MPC. We focus on functions with Boolean output where the output-receiving party does not provide input; this case is interesting as it is readily comparable to the non-solitary Boolean two-party case (the most well understood instance of fully secure MPC with dishonest majority). We distinguish two cases depending on the size of the input domains. By the fairness criterion, if one party has a strictly bigger input domain than the other, then almost all functionalities are computable with full security, because almost all two-party Boolean functions admit fair protocols in this case [3]. On the other hand, when the parties have exactly the same number of inputs, the fairness criterion does not apply, because almost all two-party Boolean functions are not computable with fairness.Footnote 1 However, by excluding the functions that are computable using a variant of the forced criterion, we can succinctly describe the set of functions whose status is unknown: \(\{M\in \{0,1\}^{n\times n} \mid \exists \mathbf {x}\in {\mathbb R}^n\text { s.t. }M\mathbf {x}=\mathbf {1}_n \wedge \sum _i \mathbf {x}_{i}\le 0\} \). In words, the set corresponds to 0–1 matrices (viewed as matrices over the reals) whose columns span \(\mathbf {1}_n\) with coefficients whose sum is non-positive. While we could not rigorously analyze the measure of this set, we conjecture that it represents a vanishing fraction of the entire space \(\{0,1\}^{n\times n}\); experimental evidence for \(n\le 300\) strongly supports our conjecture (see [24], Appendix A). Thus, the following picture emerges for functionalities with equal-sized input domains: almost all 2-party functionalities cannot be computed fairly, while almost all solitary 3-party functionalities (two inputs and one output) can be computed with full security.
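Membership in this set is a linear feasibility question and is easy to test. The following is a minimal sketch of such a test (an illustration of ours, assuming SciPy’s linprog): minimize \(\sum _i \mathbf {x}_i\) subject to \(M\mathbf {x}=\mathbf {1}_n\) over the reals; M is in the set iff the LP is feasible with optimum at most 0, or unbounded below.

```python
import numpy as np
from scipy.optimize import linprog

# A minimal sketch, assuming SciPy: decide whether a 0-1 matrix M satisfies
# "there exists a real x with M x = 1_n and sum(x) <= 0", by minimizing
# sum(x) subject to M x = 1_n over unconstrained reals.

def in_unknown_set(M):
    n = M.shape[0]
    res = linprog(c=np.ones(n), A_eq=M, b_eq=np.ones(n),
                  bounds=[(None, None)] * n, method="highs")
    if res.status == 2:     # infeasible: 1_n is not in the column span of M
        return False
    if res.status == 3:     # unbounded below: sum(x) can be made arbitrarily small
        return True
    return res.status == 0 and res.fun <= 1e-9

print(in_unknown_set(np.ones((3, 3))))                  # False: every solution has sum 1
print(in_unknown_set(np.array([[1., 0.], [1., 0.]])))   # True: x = (1, t) with t arbitrarily small
```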

1.2 Our Techniques

Next, we elaborate on some of the techniques that we use.

(i) Impossibility result. As mentioned above, for our impossibility result, we use a technique inspired by Cleve’s seminal “biasing” attack on coin-tossing [11]. In Cleve’s attack, the adversary is trying to bias the output of a fair coin-flip. The adversary picks a random round i, and plays honestly until that round. Then, the adversary computes the corrupted party’s backup value for that round, i.e. the output prescribed by the protocol in case the other party aborted at that round. The adversary aborts the corrupted party at that round or the next round depending on the “direction” it is attempting to bias the output to. Intuitively, because the protocol is inherently unfair, the adversary has an advantage in learning the output. Therefore, by aborting prematurely, the adversary alters the distribution of the honest party’s output.

Translating the above attack to our setting is not straightforward, given that the above gives an attack on correctness while we aim for an attack on privacy. For concreteness, we now explain how our impossibility applies to the 3-party functionality \(f_\mathsf {eq}\) described above. Notice that, in an ideal execution, if \(P_1\) chooses its input at random, then the two colluding parties can be certain of \(P_1\)’s input with probability at most 1/3 (i.e. by submitting the right value as \(P_2\)’s input). In the real world, however, there must be some round of the protocol where the joint backup value of \(P_2\) and Q (i.e. the output prescribed by the protocol in case \(P_1\) aborted at that round) contains information about \(P_1\)’s input, while the joint backup value of \(P_1\) and Q does not contain information about \(P_2\)’s input. By aborting \(P_2\) at that round, the adversary can effectively compute the output on two different inputs of \(P_2\) and thus guess \(P_1\)’s input with probability noticeably greater than 1/3.
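The gap can be made explicit by a brute-force computation. The following minimal sketch (an illustration of ours) computes, for \(f_\mathsf {eq}\) with a uniformly random x, the probability that a view determines x with certainty: a single evaluation, as available in the ideal world, succeeds with probability 1/3, while evaluations on two distinct inputs of \(P_2\), which is what double dipping effectively provides, always succeed.

```python
from itertools import combinations

# A minimal sketch: probability (over uniform x in {1,2,3}) that a view of
# f_eq evaluations uniquely determines x.

DOM = [1, 2, 3]

def f_eq(x, y):
    return x if x == y else None

def certainty_prob(view):
    # fraction of x whose view is shared by no other x'
    return sum(all(view(x2) != view(x) for x2 in DOM if x2 != x)
               for x in DOM) / len(DOM)

one_query = max(certainty_prob(lambda x, y=y: f_eq(x, y)) for y in DOM)
two_queries = min(certainty_prob(lambda x, p=p: (f_eq(x, p[0]), f_eq(x, p[1])))
                  for p in combinations(DOM, 2))
print(one_query, two_queries)   # 0.333... 1.0
```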

Rather crudely, the above can be summarized as follows: We define a coin-toss between \(\left\{ P_1, Q\right\} \) and \(\left\{ P_2, Q\right\} \) such that the outcome of the “coin-toss” is tied to some privacy event. By “biasing” the coin-toss, the adversary effectively increases its chance that the privacy event occurs, which results in a privacy breach. It should be noted that this picture is not accurate since, in our setting, the direction of bias is very important and this cannot be guaranteed by Cleve’s attack.

(ii) Protocols. Our transformation from n-party fair protocols (with output to all) to \((n+1)\)-party fully secure protocols with solitary output to Q describes a compiler that takes a fair protocol \(\varPi \) and transforms it into a fully secure protocol \(\varPi '\) with solitary output. The idea is to emulate \(\varPi \) by sharing the view of each party \(P_i\) in the original protocol \(\varPi \) between \(P_i\) and Q in \(\varPi '\). This way, an adversary corrupting a subset of parties not including Q learns nothing, while an adversary corrupting a subset of parties that includes Q only learns the views of the corresponding parties in \(\varPi \). The latter cannot be used to mount an attack, given the presumed security of the original protocol. Our protocols for the forced output distribution class and for the fully revealing input class are very different. Interestingly, these two cases are symmetric in some sense, where each has “problematic” parties. In the former (forced output distribution) case, the problematic party is the one that does not have a forced output distribution. The protocol we propose in this case funnels the communication through the other parties. Thus, by design, the problematic party only contributes to the computation once. For the latter (fully revealing input) case, the problematic parties are the ones without fully revealing input. The protocol we propose for this case funnels the communication through the party with a revealing input, say \(P_1\). Thus, by design, unless \(P_1\) is corrupt (in which case there are no secrets), computation only occurs once.

Related Work. Below, we discuss some related work that deals with full security and other related security notions (in particular, fairness).

In the two-party case, it is known that fairness is equivalent to full security (with guaranteed output delivery): if the corrupted party aborts, the honest party can safely replace the corrupted party’s input by a default value and compute the resulting output locally. In contrast, Cohen and Lindell [12] show that in the multiparty case there are functionalities that admit fair protocols but do not admit fully secure protocols.

Since the work of Cleve [11], it is known that full security, or even fairness, cannot be achieved in general unless there is an honest majority. This led to a rich line of work [2, 3, 14, 23, 30] attempting to characterize which functions can be computed with full security. Most works along this line focused on the two-party case, starting with the results of [23], and culminating in a full characterization for the class of fair Boolean functions with the same output for both parties [3].

Less is known for the multi-party case. Examples of multi-output functions for which fair protocols exist (specifically, n-party OR and 3-party majority) are given in [22]. In [25, 27] (see also [26]), the notion of “Best-of-both-worlds security” is introduced as a hybrid between full security and security with abort. A protocol satisfies this definition if it simultaneously provides full security when there is an honest majority and security with abort otherwise. Note that, in the context of best-of-both-worlds, [25] already gives an example of a 3-party solitary function for which no constant-round protocol exists (concretely, the function \(f_\mathsf {eq}\) mentioned above). This was improved to \(\log n\) rounds in [27].

Open Problems. As mentioned above, the most obvious open problems are obtaining a characterization or at least reducing the gap between the positive and negative results, and working out the exact round complexity for fully secure computation of solitary functionalities. Less obviously, we identify the following interesting open questions.

  1. Our attack in Sect. 3 crucially relies on the rushing capability of the adversary. It would be interesting to show that this is inherent for impossibility or to extend the negative result to the case of a non-rushing adversary.

  2. In this work, we are mainly concerned with the feasibility questions of solitary MPC. Therefore, for obtaining malicious security, our protocols use a generic step that we have not tried to optimize. We leave for future work the interesting question of improving concrete efficiency, as well as designing concretely efficient fully secure protocols for useful special cases such as PSI.

  3. As explained in subsequent sections, broadcast is in general necessary for solitary MPC; however, some functionalities do not require it. While this question is orthogonal to the goal of the paper, it would be interesting to understand which functionalities require broadcast in the solitary setting.

2 Preliminaries

The following models and definitions are adapted from [12, 17].

2.1 Models

In this section we outline the definition of secure computation, following Canetti’s definition approach for the standalone model [8], and highlight some details that are important for our purposes. The following version of the definition is somewhat simplified. We refer the reader to [8] for more complete definitions.

Communication Model. We consider a network of n processors, usually denoted \(P_1,\ldots ,P_n\) and referred to as parties. Each pair of parties is connected via a private, authenticated point-to-point channel. In addition, all parties share a common broadcast channel, which allows each party to send an identical message to all other parties. In some sense, the broadcast channel can be viewed as a medium which “commits” the party to a specific value.Footnote 2

Functionality. A secure computation task is defined by some n-party functionality \(f:X_1\times \ldots \times X_n \rightarrow \varSigma ^n\), specifying the desired mapping from the parties’ inputs to their final outputs. Party \(P_i\)’s input domain is denoted by \(X_i\), for each \(i\in [n]\), and the outputs of the parties are assumed to belong to some alphabet \(\varSigma \). When \(n=3\), the parties’ input domains will be denoted X, Y and Z to make the distinction more explicit. One may also consider randomized functionalities, which take an additional random input; however, in this work we focus on the deterministic case.

Functionality with Solitary Output. An n-party functionality \(f:X_1\times \ldots \times X_n \rightarrow \varSigma ^n\) admits solitary output if it delivers output to (the same) one party alone, i.e. f is of the form \((x_1,\ldots , x_n)\mapsto (\emptyset , \ldots , \emptyset , \sigma ,\emptyset , \ldots , \emptyset )\), where the index of \(\sigma \) does not depend on the input. The output-receiving party will be denoted by Q and, unless stated otherwise, will be identified with \(P_n\). If no confusion arises, we simply write \(f:X_1\times \ldots \times X_n \rightarrow \varSigma \) or \(f:(x_1,\ldots , x_n)\mapsto \sigma \).

Some Notations. Denote by \(\mathcal {P}=\{P_1,\ldots , P_n\}\) the set of all parties. If no confusion arises, we sometimes identify \(\mathcal {P}\) with the numbers in \([n]=\left\{ 1,\ldots ,n\right\} \). Subsets of these parties are denoted by calligraphic letters (\(\mathcal {S},\mathcal {T},\ldots \)), and their complements will be denoted by (\(\overline{\mathcal {S}},\overline{\mathcal {T}},\ldots \)). Random variables are denoted by lower-case boldface (\(\mathbf {x},\mathbf {y},\ldots \)) and distributions by upper-case boldface (\(\mathbf {X},\mathbf {Y},\ldots \)). For a functionality f taking input from \(X_1\times \ldots \times X_n\) we will write \(x_\mathcal {S}\) to denote an element of the subspace \({\times }_{i\in \mathcal {S}} X_i\) and, abusing notation, \(f(x_\mathcal {S},x_{\overline{\mathcal {S}}})\) denotes the value of \(f(x_1,x_2,\ldots ,x_n)\). Furthermore, for integers m and k, we let \({[m] \atopwithdelims ()k}\) denote the subsets of [m] of size exactly k and \(2^{[m]}\) the set of all subsets of [m]. For set \(\mathcal {S}\) and distribution \(\mathbf {S}\), we write \(s\leftarrow \mathcal {S}\) and \(s\leftarrow \mathbf {S}\) to denote that element s is sampled uniformly at random from \(\mathcal {S}\) or according to distribution \(\mathbf {S}\), respectively.

Protocol. Initially, each party \(P_i\) holds an input \(x_i\), a random input \(\rho _i\) and, possibly, a common security parameter \(\kappa \). The parties are restricted to (expected) polynomial time in \(\kappa \). The protocol proceeds in rounds, where in each round each party \(P_i\) may send a “private” message to each party \(P_j\) (including itself) and may broadcast a “public” message, to be received by all parties. The messages \(P_i\) sends in each round may depend on all its inputs (\(x_i,\rho _i\) and \(\kappa \)) and the messages it received in previous rounds. Without loss of generality, we assume that each \(P_i\) sends \(x_i,\rho _i,\kappa \) to itself in the first round, so that the messages it sends in each subsequent round may be determined from the messages received in previous rounds. We assume that the protocol terminates after a fixed number of rounds, denoted r (that may depend on the security parameter \(\kappa \)), and that honest parties never halt prematurely, i.e. honest parties are active at any given round of the protocol. Finally, each party locally computes some output based on its view. We note that our negative results extend to protocols that have expected polynomial number of rounds (in \(\kappa \)) via a simple Markov inequality argument.

Fail-Stop Adversary. We consider a fail-stop t-adversary \(\mathcal {A}\), where the parameter t is referred to as the security threshold. The adversary is an efficient interactive algorithm,Footnote 3 which is initially given the security parameter \(\kappa \) and a random input \(\rho \). Based on these, it may choose a set \(\mathcal {T}\) of at most t parties to corrupt. The adversary then starts interacting with a protocol (either a “real” protocol as above, or an ideal-process or hybrid-process protocol to be defined below), where it takes control of all parties in \(\mathcal {T}\). In particular, it can read their inputs, random inputs, and received messages and, contrary to the malicious case (see below), it can control the messages that parties in \(\mathcal {T}\) send only by deciding whether to send them or to abort. We assume by default that the adversary has a rushing capability: at any round it can first wait to hear all messages sent by uncorrupted parties to parties in \(\mathcal {T}\), and use these to make its decisions whether to abort or continue (some of) the parties he corrupts. Corrupted parties that do not abort send their prescribed messages for the present round, while corrupted parties that abort send a special abort symbol to all parties.Footnote 4

Malicious Adversaries. Adversaries that deviate arbitrarily from the protocol are not discussed in the present paper. Using the GMW compiler [19], our positive results can be extended to malicious adversaries. Negative results trivially extend to such adversaries (since fail-stop is a special kind of malicious adversary).

Security. We consider two types of security known as full security and security with identifiable abort. The former is the focus of the paper, i.e. it corresponds to the security notion we want to realize or rule out. The latter is a weaker security notion that is useful towards realizing our positive results. Informally, a protocol computing f is said to be t-secure if whatever a t-adversary can “achieve” by attacking the protocol, it could have also achieved (by corrupting the same set of parties) in an ideal process in which f is evaluated using a trusted party. To formalize this definition, we have to define what “achieve” means and what the ideal process is. The ideal process for evaluating the functionality f is a protocol \(\pi _f\) involving the n parties and an additional, incorruptible, trusted party TP.

Ideal Model with Full Security. The protocol proceeds as follows: (1) each party \(P_i\) sends its input \(x_i\) to TP; (2) TP computes f on the inputs (using its own random input in the randomized case), and sends to each party its corresponding output. Note that when the adversary corrupts parties \(\mathcal {T}\) in the ideal process, it can pick the inputs sent by parties in \(\mathcal {T}\) to TP (possibly, based on their original inputs) and then output an arbitrary function of its view (including the outputs it received from TP). Honest parties always output the message received from the trusted party and the corrupted parties output nothing.

Ideal Model with Identifiable Abort. In this case, an adversary can abort the computation in the ideal model after learning its outputs, at the cost of revealing to the honest parties the identity of at least one of the corrupted parties. The protocol proceeds as follows: (1) each \(P_i\) sends its input \(x_i\) to TP; (2) TP computes f on the inputs (using its own random input in the randomized case), and sends to each of the corrupted parties its corresponding output; (3) the adversary sends to TP either \((\mathsf {continue}, \emptyset )\) or \((\mathsf {abort},P_i)\), for some \(P_i\) in \(\mathcal {T}\), according to whether it continues the execution or aborts it at the cost of revealing one corrupted party; (4) TP sends the outputs to the honest parties if the adversary continues, or the identity of the corrupted \(P_i\) together with a special abort-symbol if the adversary aborted the computation. Similarly to the previous case, when an adversary corrupts parties in the ideal process, it can pick the inputs sent by parties in \(\mathcal {T}\) to TP (possibly, based on their original inputs) and then output an arbitrary function of its view (including the outputs it received from TP). Honest parties always output the message received from the trusted party and the corrupted parties output nothing.

2.2 Security Definition

To formally define security, we capture what the adversary “achieves” by a random variable concatenating the adversary’s output together with the outputs and the identities of the uncorrupted parties. For a protocol \(\varPi \), adversary \(\mathcal {A}\), input vector x, and security parameter \(\kappa \), let \({{exec}}_{\varPi ,A}(\kappa ,x)\) denote the above random variable, where the randomness is over the random inputs of the uncorrupted parties, the trusted party (if f is randomized), and the adversary. The security of a protocol \(\varPi \) (also referred to as a real-life protocol) is defined by comparing the \({{exec}}\) variable of the protocol \(\varPi \) to that of the ideal process \(\pi ^{\mathsf {type}}_f\), where \(\mathsf {type}\in \left\{ \mathsf {full\_sec},\mathsf {id\_abort}\right\} \) specifies the ideal process to be compared with (either full security or identifiable abort). Formally:

Definition 2.1

We say that a protocol \(\varPi \) t-securely computes f if, for any (real-life) t-adversary \(\mathcal {A}\), there exists (an ideal-process) t-adversary \(\mathcal {A}'\) such that the distribution ensembles \({{exec}}_{\varPi ,\mathcal {A}}(\kappa ,x)\) and \({{exec}}_{\pi ^{\mathsf {type}}_f,\mathcal {A}'}(\kappa ,x)\) are indistinguishable. The security is referred to as perfect, statistical, or computational according to the notion of indistinguishability being achieved. For instance, in the computational case it is required that for any family of polynomial-size circuits \(\{C_\kappa \}\) there exists some negligible function \({\text {neg}}\), such that for any x,

$$\begin{aligned} \left| \Pr \left[ C_\kappa \left( {{exec}}_{\varPi ,\mathcal {A}}(\kappa ,x)\right) =1\right] - \Pr \left[ C_\kappa \left( {{exec}}_{\pi ^{\mathsf {type}}_f,\mathcal {A}'}(\kappa ,x)\right) =1\right] \right| \le {\mathsf {neg}}(\kappa ). \end{aligned}$$

An equivalent form of Definition 2.1 quantifies over all input distributions X rather than specific input vectors x. This equivalent form is convenient for proving our negative results.

Intuitive Discussion. Definition 2.1 asserts that for any real-life t-adversary \(\mathcal {A}\) attacking the real protocol there is an ideal-process t-adversary \(\mathcal {A}'\) which can “achieve” in the ideal process as much as \(\mathcal {A}\) does in real life. The latter means that the output produced by \(\mathcal {A}'\) together with the inputs and outputs of uncorrupted parties in the ideal process is indistinguishable from the output (wlog, the entire view) of \(\mathcal {A}\) concatenated with the inputs and outputs of uncorrupted parties in the real protocol. This concatenation captures both privacy and correctness requirements. On the one hand, it guarantees that the view of \(\mathcal {A}\) does not allow it to gain more information about inputs and outputs of uncorrupted parties than is possible in the ideal process and, on the other hand, it ensures that the inputs and outputs of the uncorrupted parties in the real protocol are consistent with some correct computation of f in the ideal process. We stress that the ideal-world adversary can indeed choose whatever input it likes, and it need not restrict itself to the input chosen by the real-world adversary.

Default Security Threshold. Throughout the paper, we assume that the security threshold is \(t=n-1\), namely an arbitrary strict subset of the parties can be corrupted. We therefore do not mention the parameter t in the rest of the paper.

2.3 Hybrid Model and Composition

Hybrid Model. The hybrid model extends the real model with a trusted party that provides ideal computation for predetermined functionalities. In more detail, the parties communicate with this trusted party as per the specifications of the ideal models described above (either fully secure or identifiable abort, to be specified). Let \(\mathsf {Fn}\) be a functionality. Then, an execution of a protocol \(\varPi \) computing a functionality f in the \(\mathsf {Fn}\)-hybrid model involves the parties interacting as per the real model and, in addition, having access to a trusted party computing \(\mathsf {Fn}\). The protocol proceeds in rounds such that, at any given round, the parties send normal messages as in the standard model or make a single invocation of the functionality \(\mathsf {Fn}\). Security is defined analogously to Definition 2.1 by replacing the real protocol with the hybrid one. The model in question is referred to as the \((\mathsf {Fn},\mathsf {type})\)-hybrid model, depending on the specification of the ideal functionality.

Composition. The hybrid model is useful because it allows cryptographic tasks to be divided into subtasks. In particular, a fully secure hybrid protocol making ideal invocations to an ideal functionality with identifiable abort can be transformed into a fully secure real protocol, if there exists a real protocol for the ideal functionality that is secure with identifiable abort. This technique is captured by Canetti’s sequential composition theorem.

Theorem 2.1

(Canetti [8]). Suppose that protocol \(\varPi \) securely computes f in the \((\mathsf {Fn},\mathsf {id\_abort})\)-hybrid model with full security, and suppose that \(\varPsi \) securely computes \(\mathsf {Fn}\) with identifiable abort in the real model. Then, protocol \(\varPi ^\varPsi \) securely computes f with full security in the real model, where \(\varPi ^\varPsi \) is obtained by replacing ideal invocations of \(\mathsf {Fn}\) with real executions of \(\varPsi \). Furthermore, the quality of the security (computational, statistical or perfect) of the resulting protocol is the weakest among the security of \(\varPi \) and \(\varPsi \).

Finally, we define the notion of backup values. It is immediate from the security definition that any fully secure protocol admits well defined backup values.

Definition 2.2

(Backup values). The following definitions are with respect to a fixed honest execution of an n-party, r-round correct protocol (determined by the parties’ random coins) for solitary functionality f. The \(i^\mathrm{th}\) round backup value of a subset of parties \(\mathcal {Q}=\left\{ Q\right\} \cup \mathcal {S}\subseteq \mathcal {P}\) at round \(i\in [r]\), denoted \(\mathsf {Backup}(\mathcal {Q},i)\), is defined as the value Q would output, if all parties in \(\mathcal {P}\setminus \mathcal {Q}\) abort at round \(i+1\) and no other party aborts. For consistency, we let \(\mathsf {Backup}(\mathcal {Q},r)\) denote the output of the protocol if no parties abort (i.e \(\mathsf {Backup}(\mathcal {Q},r)=\mathsf {Backup}(\mathcal {Q}',r)\), for every \(\mathcal {Q}\) and \(\mathcal {Q}'\)).

3 Impossibility: The Double-Dipping Attack

In this section we prove our main negative result. Namely, we show impossibility of achieving full security for a number of solitary functionalities, including the following natural families:

  • Equality testing with leakage of input (including \(f_\mathsf {eq}\) from the introduction).

  • Private Set Intersection for fixed input size (i.e. PSI as defined in Definition 3.1).

Definition 3.1

Let \(\mathsf {PSI}^\mathrm {id}_{m,k}:{[m] \atopwithdelims ()k}\times {[m] \atopwithdelims ()k} \rightarrow 2^{[m]} \) be such that \(\mathsf {PSI}^\mathrm {id}_{m,k}(S_1,S_2)=S_1\cap S_2\). As a three party functionality, \(\mathsf {PSI}^\mathrm {id}_{m,k}\) receives inputs from \(P_1\) and \(P_2\) and delivers output to an additional party Q.

Namely, \(\mathsf {PSI}^\mathrm {id}_{m,k}\) takes as input two sets of size k and outputs their intersection. We point out that \(f_\mathsf {eq}\equiv \mathsf {PSI}^\mathrm {id}_{m,1}\). In this section, we show impossibility for a class of functions that includes \(\mathsf {PSI}^\mathrm {id}_{m,k}\), for every \(0<k<m/2\). As a warm-up, we sketch our impossibility result for the specific functionality \(f_\mathsf {eq}\); the general case is essentially an extrapolation of this case. We will be using the following notation.

Notation 3.1

Let \(\varPi \) be a three-party, r-round protocol for computing a function \(f:X\times Y \times Z \rightarrow \varSigma \) with solitary output. Define random variables \(\mathbf {a}_0,\ldots , \mathbf {a}_r\) and \(\mathbf {b}_0,\ldots , \mathbf {b}_r\) such that \(\mathbf {a}_i\) is the value of \(\mathsf {Backup}(\left\{ Q,P_1\right\} ,i)\) in a random execution of \(\varPi \) and, similarly, \(\mathbf {b}_i\) is the value of \(\mathsf {Backup}(\left\{ Q,P_2\right\} ,i)\) in a random execution of \(\varPi \), where \(\mathsf {Backup}(\mathcal {Q},i)\) is according to Definition 2.2.

3.1 Warm up

Let \(\varPi \) be a three-party protocol for computing \(f_\mathsf {eq}\). Let \(\mathbf {X}\) and \(\mathbf {Y}\) denote the uniform distribution for the inputs of \(P_1\) and \(P_2\) respectively. We proceed under the following simplifying assumptions for \(\varPi \): for every \(i\in [r]\), it holds that \(\Pr _{x\leftarrow \mathbf {X}}\left[ \mathbf {a}_i=x\right] =1/3\) and \(\Pr _{y\leftarrow \mathbf {Y}}\left[ \mathbf {b}_i=y\right] =1/3\). In words, if \(P_1\) (resp. \(P_2\)) chooses its input uniformly at random, then the backup output of Q and \(P_1\) (resp. Q and \(P_2\)) at round i is equal to the aforementioned input with probability exactly 1/3, regardless of \(P_2\)’s (resp. \(P_1\)’s) choice of input. For the purposes of the present warm-up, we will further assume that \(\mathbf {a}_0\) and \(\mathbf {b}_0\) are independent random variables. Next, we rule out fully secure computation for \(f_\mathsf {eq}\) under these simplifying assumptions. When we tackle the general case in the next subsection, we get rid of these simplifying assumptions by showing additional attacks (adversaries) for the cases where the aforementioned properties do not hold.

We show that there exists an adversary that can guess the honest party’s input with probability noticeably greater than what the ideal model allows. First, in the ideal model with full security, notice that when an honest party \(P_\ell \) chooses its input uniformly at random, then an adversary corrupting \(\left\{ P_{3-\ell }, Q\right\} \) may guess (with certainty) the honest party’s input with probability at most 1/3 (by using the right input for the corrupted party). We show that for any real protocol, there exists an adversary that can guess the input with noticeably greater probability, thus violating security.

Consider two adversaries \(A^{P_1}\) and \(A^{P_2}\) corrupting \(\left\{ Q,P_1\right\} \) and \(\left\{ Q,P_2\right\} \), respectively, acting as follows. The honest party and corrupted party choose their inputs uniformly at random; write x and y for the inputs chosen by \(P_1\) and \(P_2\). The adversary \(A^{P_1}\) chooses a round i uniformly at random. Then, before sending its messages for round i, if \(\mathbf {a}_i\ne x\), the adversary aborts party \(P_1\) without sending further messages and instructs Q to continue honestly with \(P_2\); otherwise, it sends its messages for round i and aborts \(P_1\) alone. The adversary \(A^{P_2}\) chooses a round i uniformly at random. Then, after sending its messages for round i, if \(\mathbf {b}_i\ne y\), the adversary aborts \(P_2\) without sending further messages and instructs Q to continue honestly with \(P_1\); otherwise, it sends its messages for round \(i+1\) and aborts \(P_2\) alone. Adversary \(A^{P_1}\) outputs \(\mathbf {b}_{i-1}\) or \(\mathbf {b}_{i}\) (depending on the round \(P_1\) aborted) and \(A^{P_2}\) outputs \(\mathbf {a}_i\) or \(\mathbf {a}_{i+1}\) (depending on the round \(P_2\) aborted). We show that at least one of the adversaries outputs the honest party’s input with probability noticeably greater than 1/3, in violation of privacy. Next, we compute each of the relevant probabilities.

$$\begin{aligned} \Pr \left[ A^{P_1}\text { outputs } y\right]&= \frac{1}{r}\cdot \sum _{i=1}^r \left( \mathop {\mathrm {Pr}}\limits _{\begin{array}{c} x\leftarrow \mathbf {X}\\ y\leftarrow \mathbf {Y} \end{array}}\left[ {\mathbf {a}_i\ne x\wedge \mathbf {b}_{i-1}=y}\right] \,+\,\mathop {\mathrm {Pr}}\limits _{ \begin{array}{c} x\leftarrow \mathbf {X}\\ y\leftarrow \mathbf {Y} \end{array}}\left[ {\mathbf {a}_i=x\wedge \mathbf {b}_{i}= y} \right] \right) \\ \Pr \left[ A^{P_2}\text { outputs } x\right]&= \frac{1}{r}\cdot \sum _{i=0}^{r-1} \left( \mathop {\mathrm {Pr}}\limits _{ \begin{array}{c} x\leftarrow \mathbf {X}\\ y\leftarrow \mathbf {Y} \end{array}} \left[ {\mathbf {b}_i\ne y\wedge \mathbf {a}_{i}=x}\right] \,+\,\mathop {\mathrm {Pr}}\limits _{ \begin{array}{c} x\leftarrow \mathbf {X}\\ y\leftarrow \mathbf {Y} \end{array}}\left[ {\mathbf {b}_i=y\wedge \mathbf {a}_{i+1}=x}\right] \right) \end{aligned}$$

Next, we compute the average of the two quantities above.

$$\begin{aligned}&\left( \Pr \left[ A^{P_1}\text { outputs } y\right] + \Pr \left[ A^{P_2}\text { outputs } x\right] \right) /2 \\&=\, \frac{1}{2r} \left( \mathop {\mathrm {Pr}}\limits _{ \begin{array}{c} x\leftarrow \mathbf {X}\\ y\leftarrow \mathbf {Y} \end{array}}\left[ {\mathbf {b}_0\ne y\wedge \mathbf {a}_{0}=x}\right] \,+\,\mathop {\mathrm {Pr}}\limits _{ \begin{array}{c} x\leftarrow \mathbf {X}\\ y\leftarrow \mathbf {Y} \end{array}} \left[ {\mathbf {a}_r=x\wedge \mathbf {b}_{r}=y}\right] \,+\,\sum _{i=1}^{r-1} \mathop {\mathrm {Pr}}\limits _{ \begin{array}{c} x\leftarrow \mathbf {X}\\ y\leftarrow \mathbf {Y} \end{array}}\left[ {\mathbf {a}_i= x}\right] + \sum _{i=0}^{r-1} \mathop {\mathrm {Pr}}\limits _{ \begin{array}{c} x\leftarrow \mathbf {X}\\ y\leftarrow \mathbf {Y} \end{array}}\left[ {\mathbf {b}_i= y} \right] \right) \end{aligned}$$

By correctness of the protocol and simplifying assumptions,

$$\begin{aligned} \left( \Pr \left[ A^{P_1}\text { outputs } y\right] + \Pr \left[ A^{P_2}\text { outputs } x\right] \right) /2&= \frac{1}{2r} \cdot \mathop {\mathrm {Pr}}\limits _{ \begin{array}{c} x\leftarrow \mathbf {X}\\ y\leftarrow \mathbf {Y} \end{array}}\left[ {\mathbf {b}_0\ne y\wedge \mathbf {a}_{0}=x}\right] + \frac{1}{3} \\&= \frac{1}{3} + \frac{1}{2r} \cdot \frac{2}{9} \end{aligned}$$

We conclude that at least one of the adversaries can guess with certainty the opponent’s input with probability noticeably greater than 1/3, thus violating privacy.

3.2 General Case

We define a class \(\varOmega \) of 3-party functions, and we show that no function in this class admits a fully secure realization. Intuitively, this class of functions satisfies the following requirement: For both \(\ell \in \left\{ 1,2\right\} \), there is a (non-trivial) partition of the inputs of \(P_\ell \) and a distribution over the inputs of \(P_\ell \) such that if \(P_\ell \) samples its input according to the specified distribution then, with some fixed probability bounded away from both 0 and 1, the output aloneFootnote 5 fully determines which set of the partition \(P_\ell \)’s chosen input belongs to, no matter how the inputs of Q and \(P_{3-\ell }\) were chosen. Furthermore, if both parties sample their inputs according to their respective distributions, then either for both inputs their sets in the partitions are determined from the output alone, or for neither. Formally,

Definition 3.2

The class of functions \(\varOmega \) consists of all functions f satisfying the following conditions, for some \(\gamma _1,\gamma _2\in (0,1)\). There exist distributions \(\mathbf {X}\) and \(\mathbf {Y}\) over X and Y, respectively, such that \(\mathrm {supp}(\mathbf {X})= X\) and \(\mathrm {supp}(\mathbf {Y})= Y\), and partitions \(X_1\ldots X_{k}\) and \(Y_1\ldots Y_{\ell }\) of X and Y, respectively, such that

  1. For every distribution \(\varDelta _1\) over \(X\times Z\),

    \(\Pr _{\begin{array}{c} (x_0,z_0)\leftarrow \varDelta _1 \\ \widetilde{y}\leftarrow \mathbf {Y} \end{array}}\left[ \exists j \text { s.t. }\Pr _{ y'\leftarrow \mathbf {Y}}\left[ y' \in Y_j\mid f(x_0,\widetilde{y},z_0)=f(x_0,y',z_0)\right] =1\right] ~=~\gamma _1\)

  2. For every distribution \(\varDelta _2\) over \(Y\times Z\),

    \(\Pr _{\begin{array}{c} \widetilde{x}\leftarrow \mathbf {X}\\ (y_0,z_0)\leftarrow \varDelta _2 \end{array}}\left[ \exists j \text { s.t. }\Pr _{ x'\leftarrow \mathbf {X}}\left[ x' \in X_j\mid f(\widetilde{x},y_0,z_0)=f(x',y_0,z_0)\right] =1\right] ~=~\gamma _2\)

  3. There exists \(z_0\in Z\) such that, for every \(\sigma \in \varSigma \),

    \(\exists j \text { s.t. }\Pr _{\begin{array}{c} \widetilde{x}\leftarrow \mathbf {X}\\ \widetilde{y}\leftarrow \mathbf {Y} \end{array}}\left[ \widetilde{x} \in X_j\mid f(\widetilde{x},\widetilde{y},z_0)=\sigma \right] =1 \) if and only if \(\exists j \text { s.t. }\Pr _{\begin{array}{c} \widetilde{x}\leftarrow \mathbf {X}\\ \widetilde{y}\leftarrow \mathbf {Y} \end{array}}\left[ \widetilde{y} \in Y_j\mid f(\widetilde{x},\widetilde{y},z_0)=\sigma \right] =1\)

Note that \(\mathsf {PSI}^\mathrm {id}_{m,k}\), with \(0<k<m/2\), satisfies the above definition: define \(\mathbf {X}=\mathbf {Y}\) as the uniform distribution and define partitions \(\left\{ X_x=\left\{ x\right\} \right\} _{x\in X}\) and \(\left\{ Y_y=\left\{ y\right\} \right\} _{y\in Y}\).
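This can be verified by brute force for small parameters. The following minimal sketch (an illustration of ours) uses the observation that, since Q has no input and the inner event in Items 1 and 2 depends only on the fixed input and \(\widetilde{y}\) (resp. \(\widetilde{x}\)), the items hold for every distribution \(\varDelta \) exactly when the inner probability is the same constant \(\gamma \) for every fixed input; for \(\mathsf {PSI}^\mathrm {id}_{m,k}\) this common value is one over the number of size-k subsets of [m].

```python
from itertools import combinations
from fractions import Fraction

# A minimal sketch for PSI^id_{m,k} with uniform X = Y and singleton
# partitions: for each fixed x0, compute the probability over a uniform y
# that the output x0 ∩ y is consistent with no other y' (i.e. the output
# determines y).  Item 1 (and, by symmetry, Item 2) of Definition 3.2 holds
# for every distribution iff this probability is the same for all x0.

def determines_prob(x0, sets):
    hits = sum(1 for y in sets
               if all(yp == y for yp in sets if (x0 & yp) == (x0 & y)))
    return Fraction(hits, len(sets))

m, k = 5, 2
sets = [frozenset(s) for s in combinations(range(1, m + 1), k)]
print({determines_prob(x0, sets) for x0 in sets})   # {Fraction(1, 10)}, i.e. gamma = 1/C(5,2)
```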

Remark 3.1

The class of functions \(\varOmega \) can be generalized in a few ways that we omit for the sake of presentation. The first generalization considers functions that take more than three inputs and can be reduced to functions in \(\varOmega \) by grouping parties together. The second generalization relaxes the requirement on the support of the distributions \(\mathbf {X}\) and \(\mathbf {Y}\) (allowing \(\mathrm {supp}(\mathbf {X})\subsetneq X\) or \(\mathrm {supp}(\mathbf {Y})\subsetneq Y\)). The proof for the latter is almost identical to the one below.

Theorem 3.2

For any \(f\in \varOmega \) and for any protocol \(\varPi \) computing f, at least one of the following holds.

  • There exists an adversary corrupting either \(P_1\) or \(P_2\) that can violate correctness.

  • There exists an adversary corrupting either Q and \(P_1\), or Q and \(P_2\) that can violate privacy.

Hereafter, fix a function f, real numbers \(\gamma _1,\gamma _2\in (0,1)\), distributions \(\mathbf {X}\) and \(\mathbf {Y}\) and partitions \(X_1\ldots X_{k}\) and \(Y_1\ldots Y_{\ell }\), and \(z_0\) satisfying Definition 3.2. It is immediate that \(\gamma _1=\gamma _2\), hence we simply write \(\gamma \) (\(=\gamma _1=\gamma _2\)). We define \(4r+1\) adversaries \(\{A_i^{P_1}\}_{i=1}^r\), \(\{ A_i^{P_2}\}_{i=0}^{r-1}\), \(\{\mathcal {C}^{P_\ell }_i\}_{i=1}^r\) and \(\widetilde{A}^{P_1}_0\) (See Fig. 2). Let \(\varSigma ' \subset \varSigma \) denote all the elements \(\sigma \in \varSigma \) such that there exists j for which \(\Pr _{\begin{array}{c} \widetilde{x}\leftarrow \mathbf {X}\\ \widetilde{y}\leftarrow \mathbf {Y} \end{array}}\left[ \widetilde{y} \in Y_j\mid f(\widetilde{x},\widetilde{y},z_0)=\sigma \right] =1\). Such a \(\varSigma '\) is guaranteed to exist by Item 2 of Definition 3.2.

Fig. 2. Description of the adversaries

Proof

Define \(\widetilde{\mathbf {a}}_0,\ldots , \widetilde{\mathbf {a}}_r\) and \(\widetilde{\mathbf {b}}_0,\ldots , \widetilde{\mathbf {b}}_r\) such that \(\widetilde{\mathbf {a}}_i=1\) (resp. \(\widetilde{\mathbf {b}}_i=1\)) if and only if \(\mathbf {a}_i\in \varSigma '\) (resp. \(\mathbf {b}_i\in \varSigma '\)) and 0 otherwise. In the following, we consider an execution of the protocol where Q uses \(z_0\) as input, \(P_1\) uses input sampled according to \(\mathbf {X}\) and \(P_2\) uses input sampled according to \(\mathbf {Y}\), regardless of whether the parties are corrupted or not.

Claim 3.1

Unless \(\mathcal {C}^{P_1}_i\) or \(\mathcal {C}^{P_2}_i\) violates correctness, it holds that \(\left| \Pr \left[ \widetilde{\mathbf {b}}_i=1\right] -\gamma \right| \le {\text {neg}}(\kappa )\) and \(\left| \Pr \left[ \widetilde{\mathbf {a}}_i=1\right] -\gamma \right| \le {\text {neg}}(\kappa )\), for every \(i\in \{0,\ldots , r-1\}\).

Next, we analyze the probability that \(A_i^{P_1}\) and \(A_i^{P_2}\) output 1. Observe that, by correctness, with all but negligible probability, whenever \(A_i^{P_1}\) (resp. \(A_i^{P_2}\)) outputs 1, the adversary succeeds in guessing the “bucket” the honest party’s input belongs to, with certainty. To prove our theorem, we show that one of the adversaries \(A_i^{P_\ell }\) or \(\widetilde{A}_0^{P_1}\) outputs 1 with probability greater than \(\gamma \), violating privacy.

$$\begin{aligned} \Pr \left[ A_i^{P_1}\text { outputs } 1\right]&= \Pr \left[ \widetilde{\mathbf {a}}_i=0\wedge \widetilde{\mathbf {b}}_{i-1}=1\right] + \Pr \left[ \widetilde{\mathbf {a}}_i=1\wedge \widetilde{\mathbf {b}}_{i}=1\right] \\ \Pr \left[ A_i^{P_2}\text { outputs } 1\right]&= \Pr \left[ \widetilde{\mathbf {b}}_i=0\wedge \widetilde{\mathbf {a}}_{i}=1\right] + \Pr \left[ \widetilde{\mathbf {b}}_i=1\wedge \widetilde{\mathbf {a}}_{i+1}=1\right] \end{aligned}$$

Therefore,

$$\begin{aligned} \sum _{i=1}^r&\Pr \left[ A_i^{P_1}\text { outputs } 1\right] + \sum _{i=0}^{r-1} \Pr \left[ A_i^{P_2}\text { outputs } 1\right] \\&=\,\Pr \left[ \widetilde{\mathbf {b}}_0=0\wedge \widetilde{\mathbf {a}}_{0}=1\right] + \sum _{i=1}^{r-1} \Pr \left[ \widetilde{\mathbf {a}}_i=1 \right] + \sum _{i=0}^{r-1} \Pr \left[ \widetilde{\mathbf {b}}_i=1 \right] + \Pr \left[ \widetilde{\mathbf {a}}_r=1\wedge \widetilde{\mathbf {b}}_{r}=1\right] \nonumber \end{aligned}$$
(1)

Thus

$$\begin{aligned} \sum _{i=1}^r \Pr \left[ A_i^{P_1}\text { outputs } 1\right]&+ \sum _{i=0}^{r-1} \Pr \left[ A_i^{P_2}\text { outputs } 1\right] = \Pr \left[ \widetilde{\mathbf {b}}_0=0\wedge \widetilde{\mathbf {a}}_{0}=1\right] + 2r\cdot \gamma \end{aligned}$$
(2)

The last equation follows by correctness and Items 1 to 3 of Definition 3.2. Next, we argue that \(\mathrm {Pr} [\widetilde{\mathbf {b}}_0=0\wedge \widetilde{\mathbf {a}}_{0}=1 ]\) is a noticeable quantity. If not, then we claim that adversary \(\widetilde{A}_0^{P_1}\) can violate privacy. Suppose that \(\mathrm {Pr} [\widetilde{\mathbf {b}}_0=0\wedge \widetilde{\mathbf {a}}_{0} =1 ]\le {\mathsf {neg}}(\kappa )\) and let \(\rho \) denote the (joint) randomness of parties \(P_1\) and Q. In the presence of adversary \(\widetilde{A}_0^{P_1}\), we claim that the events \(\mathbf {a}_0\notin \varSigma '\) and \(\mathbf {a}_r\notin \varSigma '\) are independent of each other. To prove it, first notice that \(\mathbf {a}_0\) may be viewed as a deterministic function of the inputs of \(P_1\) and Q and \(\rho \), and \(\mathbf {a}_r\) may be viewed as a deterministic function of the inputs of f (the latter assumption holds by correctness, with all but negligible probability). We write \(\mathbf {a}_0(x ,z_0; \rho )\) and \(\mathbf {a}_r(x,y,z_0)\) to make the dependency explicit and compute:

Observe that, for any fixed \(x_0\), the random variables \(\mathbf {a}_0(x_0,z_0; \rho )\) and \(\mathbf {a}_r(x_0,y,z_0)\) are independent random variables. Therefore,

Finally, by correctness and Item 2 of Definition 3.2

The last equality follows from correctness and Item 1 of Definition 3.2. Thus, if \(\mathrm {Pr} [\widetilde{\mathbf {b}}_0=0\wedge \widetilde{\mathbf {a}}_{0}=1 ]\le {\text {neg}}(\kappa )\), then adversary \(\widetilde{A}_0^{P_1}\) outputs 1 with probability \(1-(1-\gamma )^2>\gamma \), in violation of privacy. In conclusion, using an averaging argument in Eq. 2, at least one of \(\{A_i^{P_1}\}_{i=1}^r\), \(\{ A_i^{P_2}\}_{i=0}^{r-1}\) outputs 1 with probability noticeably greater than \(\gamma \) and, thus, violates privacy.

4 Positive Results

In this section, we present our positive results. First, we give a generic transformation from a fully secure n-party protocol with non-solitary output to a fully secure \((n+1)\)-party protocol with solitary output; the latter protocol computes the associated functionality that delivers the output to an additional auxiliary party that does not provide input. In light of the positive results for fair two-party computation, our transformation enables fully secure computation for (almost all) Boolean functions with unequal domain sizes. For instance, it yields a secure protocol for the following PSI variant that escapes our other criteria: from a universe of size n, party \(P_1\) picks a set of size between 1 and k, for some arbitrary fixed \(k\le n-2\), party \(P_2\) picks a set of size between 1 and \(k+1\) (i.e. one party has more inputs to pick from than the other), and party Q receives value 1 if the sets intersect and 0 if not.Footnote 6 Interestingly, this technique yields protocols with super-constant (in fact, super-logarithmic) round complexity since, with few exceptions, a super-logarithmic number of rounds is necessary for fair computation. In Sect. 5, we show that super-constant round complexity is inherent for fully secure MPC with solitary output.

Then, we present a generic protocol for functionalities that satisfy the “forced output distribution” criterion. Intuitively, these are functionalities where (almost) all parties can “force” the distribution of the output to be independent of the other parties’ choice of inputs. These functionalities should be contrasted with the above fair ones, since they are utterly unfair viewed as non-solitary functionalities (they imply coin-tossing). Interestingly, every functionality in this class can be computed in a constant number of rounds.

We also present a generic protocol for functionalities that satisfy the “fully revealing input” criterion. Intuitively, these are functionalities where at least one party has a choice of input that reveals all other parties’ inputs. While this family may appear somewhat pathological from a cryptographic point of view, it contains several natural examples. In particular, it contains a PSI variant where one party may choose the entire universe as input. Similarly to the previous case, every functionality in this class can be computed in a constant number of rounds.

Finally, for a functionality that escapes the above criteria, we design a fully secure protocol that runs in a super-logarithmic number of rounds. This protocol is inspired by the GHKL protocol [23]. We emphasize that the feasibility of this functionality does not follow from the fairness criterion since, viewed as a non-solitary functionality, it cannot be computed fairly. Furthermore, in the next section, we show that super-constant round complexity is inherent for this function.

4.1 Security via Fairness

Let \(f:X_1\times \ldots \times X_n \rightarrow \varSigma \) be an n-party functionality that delivers the same output to all parties. Let \(\varPi \) be a fully secure protocol for f. Write \(m^{(\ell ,\ell ')}_i\in \{0,1\}^{\mu _\kappa }\) for the message sent by \(P_{\ell }\) to \(P_{\ell '}\) at round i. Let \(M_\kappa = \mu _\kappa \cdot n\) denote the total length of messages received by party \(P_\ell \) in a single round (without loss of generality \(\mu _\kappa \) and \(M_\kappa \) do not depend on i, \(\ell '\) or \(\ell \)). In this section, we show how to transform protocol \(\varPi \) into a protocol \(\varPi '\) that computes the associated solitary functionality that delivers the output to one of the parties, or to an additional auxiliary party. We note that the transformation and analysis of the two cases are the same; therefore, we focus only on the latter transformation (i.e. from an n-party to an \((n+1)\)-party protocol, where the output-receiving party does not provide input). The rest of this subsection is dedicated to the proof of the following theorem.

Theorem 4.1

Let \(\varPi \) be a protocol for computing non-solitary functionality f with full security. Then, there exists a protocol \(\varPi '\) that computes with full security the associated \((n+1)\)-party solitary functionality that delivers the output to an additional auxiliary party.

At a high level, to transform the n-party non-solitary protocol \(\varPi \) into an \((n+1)\)-party solitary protocol \(\varPi '\), we have each party \(P_\ell \) in \(\varPi '\) share the view of the party \(P_\ell \) in the original protocol \(\varPi \) between itself and the auxiliary party Q. To do so, we begin by defining protocol \(\varPi \)’s next-message function \(\mathsf {NxtMsg}_\varPi \) that deterministically maps each party \(P_\ell \)’s view until some round i (a view that includes its identity, its input, its private coins and all incoming messages until that round) to all messages that \(P_\ell \) sends at the upcoming round.

Definition 4.1

Let \(\mathsf {NxtMsg}_\varPi \) denote the next message function of r-round protocol \(\varPi \). Formally, \(\mathsf {NxtMsg}_\varPi \) maps \(\mathsf {view}^{P_\ell }_{i}\mapsto (m^{(\ell ,1)}_{i+1 },\ldots ,m^{(\ell ,n)}_{i+1 })\) such that

  1. \(\mathsf {view}^{P_\ell }_{i}\in \{0,1\}^{i\cdot M_\kappa }\) corresponds to the view of party \(P_\ell \) up to and including round i (wlog, we assume that the value of i and the identity of \(P_\ell \) are contained in its view).

  2. If \(i\ne r\), then \(m^{(\ell ,\ell ')}_{i+1}\in \{0,1\}^{\mu _\kappa }\) corresponds to \(P_{\ell }\)’s prescribed message to \(P_{\ell '}\) at round \(i+1\) according to \(\varPi \). If \(i=r\), then \(m^{(\ell ,\ell ')}_{i+1}\in \{0,1\}^{\mu _\kappa }\) corresponds to \(P_{\ell }\)’s prescribed output.

In our protocol design, all messages will be additively shared between a party P and a helper party Q. That is, a message m will be randomly split into \(m_1,m_2\) such that \(m=m_1 \oplus m_2\), with P holding \(m_1\) and Q holding \(m_2\). The functionality \(\mathsf {ShrNxtMsg}_\varPi \) (Fig. 3) describes how the messages of the protocol are generated so as to deliver this sharing: parties P and Q hold \(\mathsf {view}^{P}_{i}\), P’s view up to and including round i, in shared form as \(v_P,v_Q\), and they receive P’s messages for the next round in shared form as well.

Fig. 3. Two-party functionality \(\mathsf {ShrNxtMsg}_\varPi \) for parties P and Q.
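For concreteness, the following Python sketch illustrates the 2-out-of-2 xor sharing used throughout this transformation; the function names split_share and reconstruct are ours and purely illustrative, not part of the protocol specification.

```python
import os

def split_share(m: bytes) -> tuple[bytes, bytes]:
    """2-out-of-2 additive (xor) sharing: m = m1 xor m2."""
    m1 = os.urandom(len(m))                     # P's share, uniformly random
    m2 = bytes(a ^ b for a, b in zip(m, m1))    # Q's share
    return m1, m2

def reconstruct(m1: bytes, m2: bytes) -> bytes:
    """Recover m from both shares; each share alone is uniformly distributed."""
    return bytes(a ^ b for a, b in zip(m1, m2))

msg = b"round-(i+1) message from P_ell to P_j"
share_P, share_Q = split_share(msg)
assert reconstruct(share_P, share_Q) == msg
```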

We describe the protocol for computing a function with an auxiliary party Q that receives the solitary output. The idea is that each party \(P_\ell \) invokes, together with Q, the functionality \(\mathsf {ShrNxtMsg}_\varPi \) in order to create the messages that \(P_\ell \) needs to send to all other parties in the upcoming round. As a result, \(P_{\ell }\) and Q receive the set of messages \((m^{(\ell ,1)}_{i+1},\ldots ,m^{(\ell ,n)}_{i+1})\) in shared form. Then, \(P_\ell \) sends to each other party \(P_j\) its share of the message \(m^{(\ell ,j)}\). The auxiliary party Q holds, in a string \(\mathsf {view}^{Q_\ell }_i\), its share of the view of the messages of party \(P_\ell \) up to and including round i (a different string for each \(P_\ell \)). If some parties abort, the remaining parties proceed according to the specification of the original protocol \(\varPi \), while maintaining the invariant that each \(P_\ell \)’s view from the original protocol is shared between \(P_\ell \) and Q. At the end of the execution, Q together with one of the \(P_\ell \)’s that has not aborted reconstructs the output (which is a deterministic function of their joint views).

Fig. 4. \((n+1)\)-party protocol for solitary f in the \(\mathsf {ShrNxtMsg}_\varPi \)-hybrid model with identifiable abort.
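The following is a minimal sketch of a single round of the transformed protocol, with the ideal \(\mathsf {ShrNxtMsg}_\varPi \) oracle replaced by a local stub and a toy next-message function; the concrete values and names (MU, N, next_msg, shr_nxt_msg) are assumptions made for illustration only.

```python
import os

MU = 16      # assumed message length in bytes (mu_kappa); illustrative only
N = 3        # number of original parties

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(m: bytes) -> tuple[bytes, bytes]:
    r = os.urandom(len(m))
    return r, xor(m, r)

def next_msg(ell: int, view: bytes) -> list[bytes]:
    """Toy stand-in for NxtMsg_Pi: P_ell's next-round message to each P_j."""
    return [bytes([ell, j]) * (MU // 2) for j in range(1, N + 1)]

def shr_nxt_msg(ell: int, v_P: bytes, v_Q: bytes):
    """Stub of the ideal ShrNxtMsg_Pi oracle: reconstruct P_ell's view,
    compute its next messages, and return them freshly 2-out-of-2 shared."""
    msgs = next_msg(ell, xor(v_P, v_Q))
    pairs = [share(m) for m in msgs]
    return [p for p, _ in pairs], [q for _, q in pairs]

# One round for P_1: its view is held shared between P_1 and Q.
v_P, v_Q = share(os.urandom(MU))
shares_P, shares_Q = shr_nxt_msg(1, v_P, v_Q)
# P_1 now forwards shares_P[j-1] to each P_j, while Q appends shares_Q[j-1]
# to its string view^{Q_1}; every message m^{(1,j)} thus stays shared with Q.
```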

The above protocol is described where the output is delivered to the auxiliary party Q (not one of \(P_1,\ldots ,P_n\)). However, as noted at the beginning of this section, this party can be one of the n original parties and simply serve both as itself and as party Q. Observe that, in this case, Q simply sees all the messages that it sends and receives (as it holds both shares of these messages).

Proof of Theorem 4.1. We prove the claim by showing that protocol \(\varPi '\) from Fig. 4 is fully secure in the \(\mathsf {ShrNxtMsg}_\varPi \)-hybrid model with identifiable abort. Then, the theorem follows from composition [8]. Let A be an adversary corrupting up to n parties (of the \(n+1\) parties). Observe that, if party Q is not among the corrupted parties, then A’s view can be trivially simulated since it is just a uniform random string, and it is not hard to see that A cannot affect correctness. It remains to prove that the protocol is secure when Q is among the corrupted parties. Let \(\mathcal {C}\) denote the set of corrupted parties, assuming that \(Q\in \mathcal {C}\). For an adversary A attacking \(\varPi '\) and corrupting the parties in \(\mathcal {C}\), we construct an adversary \(\widetilde{A}\) attacking \(\varPi \) (on the same input distribution and auxiliary information) and corrupting the parties \(\widetilde{\mathcal {C}}=\mathcal {C}\setminus \left\{ Q\right\} \) (there are at most \(n-1\) such parties). Since A’s and \(\widetilde{A}\)’s views are identically distributed (modulo a 2-out-of-2 secret sharing), and since the latter can be simulated in the ideal model with full security, it follows that the former can be simulated as well. Formally, let \(\widetilde{S}\) denote the simulator for \(\widetilde{A}\) and define simulator S for A as follows:

  1. S runs \(\widetilde{S}\) on the relevant inputs, security parameter and auxiliary information. Write \((v_{P_{i}})_{P_{i}\in \widetilde{\mathcal {C}}}\) for \(\widetilde{S}\)’s output corresponding to the joint simulated view of the parties.

  2. S samples \((\nu _{P_{i}})_{P_{i}\in \widetilde{\mathcal {C}}}\) uniformly at random from the relevant space and outputs \((v_{P_{i}}\oplus \nu _{P_{i}})_{P_{i}\in \widetilde{\mathcal {C}}}\) (the simulated views of parties in \(\widetilde{\mathcal {C}}\)) and \((\nu _{P_{i}})_{P_{i}\in \widetilde{\mathcal {C}}}\) (the simulated view of Q).

   \(\square \)

4.2 Functions with Forced Output Distribution

In this section, we present the “Forced Output Distribution” criterion. First, we define the notion.

Definition 4.2

A party \(P_i\ne Q\) admits a forced output distribution for f if there exists a distribution \(\varDelta _i\) over \(X_i\) such that the distribution of the random variable \(f(x_1,\ldots ,x_{i-1},\hat{x}_i,x_{i+1}, \ldots , x_n)|_{\hat{x}_i \leftarrow \varDelta _i}\) is independent of the \((n-1)\)-tuple \((x_1,\ldots ,x_{i-1},x_{i+1}, \ldots , x_n)\).

Intuitively, a party admits a forced output distribution if it can choose its input in a way that “forces” the output, i.e., it makes the output distribution independent of the other parties’ inputs. The theorem below states that if all but one of the parties other than Q admit a forced output distribution, then the functionality is computable with perfect full security in a constant number of rounds, given ideal access with identifiable abort to the functionality \(\mathsf {ShrGn}_f\) (specified below). As a corollary, assuming OT, such functionalities admit a fully secure protocol in the plain model.
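As a toy example (ours, not taken from the paper), consider the three-party xor functionality delivered to Q: every input party admits a forced output distribution by sampling its own input uniformly at random. The following sketch verifies this by exact enumeration.

```python
# Check that in f(x1, x2, x3) = x1 xor x2 xor x3, each P_i admits a forced
# output distribution: sampling its input uniformly makes the output uniform,
# regardless of the other two inputs.
from itertools import product
from fractions import Fraction

def f(x1: int, x2: int, x3: int) -> int:
    return x1 ^ x2 ^ x3

def forced_distribution(i, others):
    """Output distribution when P_i samples uniformly and the rest fix `others`."""
    dist = {0: Fraction(0), 1: Fraction(0)}
    for xi in (0, 1):
        args = list(others)
        args.insert(i, xi)
        dist[f(*args)] += Fraction(1, 2)
    return dist

for i in range(3):
    for others in product((0, 1), repeat=2):
        assert forced_distribution(i, others) == {0: Fraction(1, 2), 1: Fraction(1, 2)}
```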

Theorem 4.2

Assume that at least \(n-1\) of the parties in \(\mathcal {P}\setminus \left\{ Q\right\} \) admit a forced output distribution for functionality f. Then, f is computable with perfect full security in the \(\mathsf {ShrGn}_f\)-hybrid model with identifiable abort. Furthermore, the computation runs in a constant number of rounds.

We now introduce the functionality \(\mathsf {ShrGn}_f\) (Fig. 5); we prove our theorem in the \(\mathsf {ShrGn}_f\)-hybrid model with identifiable abort. This functionality shares the output of f among the parties that invoke it, obliviously choosing a random input on behalf of any party that does not provide one. That is, it hands uniformly random shares to all parties but one, and the last party gets the xor of these shares with the output of the function. We emphasize that this functionality may be invoked by a subset of the n parties, and, as per the ideal model with identifiable abort, the invocation can be aborted by any single party in that set (at the cost of revealing its identity).

Fig. 5. n-party functionality \(\mathsf {ShrGn}_f\).

Without loss of generality, if such a party exists, suppose that \(P_1\) is the party without a forced output distribution (the protocol and our analysis remain sound if all parties have a forced output distribution). The protocol (see Fig. 6) proceeds as follows: the parties invoke the trusted party for computing \(\mathsf {ShrGn}_f\), and obtain shares of the output. Then, in two distinct steps, (1) \(P_1\) sends its share of the output to Q, and (2) all other parties send their shares to Q. In case of abort, there are two scenarios: either \(P_1\) aborts alone, in which case the process starts again without \(P_1\), or, if anyone else aborts at this iteration or the next, the computation halts and Q outputs a value from the forced distribution. Intuitively, the protocol maintains security because it is not useful for the adversary to have any party abort: aborting any party but \(P_1\) halts the execution, while aborting \(P_1\) does not reveal anything about the output (since the honest parties will not send their shares before \(P_1\) sends its own).
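The following toy simulation (our sketch, not the formal protocol of Fig. 6) instantiates this flow for the xor example above, with \(\mathsf {ShrGn}_f\) stubbed in the clear, Q as the output receiver, and the abort behaviour controlled by the abort_by parameter; the restart after a lone abort of \(P_1\) is modelled, in simplified form, by replacing its input with a random one.

```python
import random

def f(x1: int, x2: int, x3: int) -> int:
    return x1 ^ x2 ^ x3                  # the xor example from above

def shr_gn(xs: list) -> list:
    """Stub of ShrGn_f: xor-share f(xs) among the three input parties."""
    s1, s2 = random.randrange(2), random.randrange(2)
    return [s1, s2, f(*xs) ^ s1 ^ s2]

def run(xs: list, abort_by=None) -> int:
    shares = shr_gn(xs)
    if abort_by == 1:                    # P1 aborts alone: restart without P1,
        xs = [random.randrange(2)] + xs[1:]   # its input chosen obliviously
        shares, abort_by = shr_gn(xs), None
    if abort_by is not None:             # any other abort: Q outputs a sample
        return random.randrange(2)       # from the forced distribution D
    return shares[0] ^ shares[1] ^ shares[2]   # Q reconstructs the output

print(run([1, 0, 1]))                    # honest execution: prints f(1,0,1) = 0
print(run([1, 0, 1], abort_by=2))        # P2 aborts: Q outputs a forced sample
```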

Proof of Theorem 4.2. First, note that the distribution \(\mathbf {D}\) in Fig. 6 is well defined since it is unique. Let A denote an adversary corrupting a subset of parties. As in the previous proof, it is straightforward that if A does not corrupt Q then it cannot affect correctness and its view can be trivially simulated. Let \(\mathcal {C}\) be the set of corrupted parties. Define simulator S that does the following: S invokes the trusted party on the inputs of the corrupted parties and receives output \(\mathsf {out}\) from the trusted party. Then, S samples \(\left| \mathcal {C}\right| \) random elements \(\left\{ \sigma '_{C}\right\} _{C\in \mathcal {C}}\) and hands them to the adversary.

  • If \(P_1\) alone aborts, S samples \(\left| \mathcal {C}\right| -1\) fresh random values \(\left\{ \sigma ''_{C}\right\} _{C\in \mathcal {C}\setminus \left\{ P_1\right\} }\), and hands them to the adversary.

  • If any other party aborts (at any point in the simulation), S samples \(d'\leftarrow \mathbf {D}\), hands \(d'\) to the adversary, and outputs whatever A outputs.

  • If no other party aborts, S hands \(\mathsf {out}\) to the adversary and outputs whatever A outputs.   \(\square \)

Fig. 6. n-Party Protocol \(\varPi \) for f with Ideal Access to \(\mathsf {ShrGn}_{f}\) with Identifiable Abort.

4.3 Functions with Fully Revealing Input

In this section, we present the “Fully Revealing Input” criterion. First, we define the notion.

Definition 4.3

Let \(\mathcal {S}\subsetneq \mathcal {P}\). We say that the parties in \(\mathcal {S}\) admit a fully revealing input if there exists \(x_\mathcal {S}\in \underset{P_i\in \mathcal {S}}{\times } X_{i}\) such that the following function is injective:

$$f_{x_\mathcal {S}}: x_{\overline{\mathcal {S}}} \mapsto f(x_\mathcal {S}, x_{\overline{\mathcal {S}}}). $$

The theorem below states that if there exists a fixing of the inputs of \(P_1\) and Q (or any \(P_i\) and Q) that yields an injective function, then the functionality f is computable with full security in a constant number of rounds in the \(\mathsf {ShrGn}_f\)-hybrid model. Similarly to the previous section, assuming OT, it follows as an immediate corollary that functions with a fully revealing input admit a fully secure protocol in the plain model.
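To make the PSI example from the overview concrete, the following check (a toy instance with two input parties over a four-element universe; our choice of parameters, not the paper’s formalization) verifies that fixing one party’s set to the whole universe makes the output an injective function of the other party’s set.

```python
from itertools import combinations

UNIVERSE = frozenset(range(4))

def f(x1: frozenset, x2: frozenset) -> frozenset:
    return x1 & x2                       # intersection delivered to Q only

def powerset(u):
    return [frozenset(c) for r in range(len(u) + 1) for c in combinations(u, r)]

# With x1 fixed to the entire universe, x2 -> f(UNIVERSE, x2) is the identity
# on subsets, hence injective: x1 = UNIVERSE is a fully revealing input.
outputs = [f(UNIVERSE, x2) for x2 in powerset(UNIVERSE)]
assert len(outputs) == len(set(outputs))
```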

Theorem 4.3

Assume there exists i such that \(\left\{ P_i,Q\right\} \) admit a fully revealing input. Then, functionality f is computable with perfect full security in the \(\mathsf {ShrGn}_f\)-hybrid model with identifiable abort. Furthermore, the computation runs in a constant number of rounds.

Without loss of generality, suppose that \(\{P_1,Q\}\) admit a fully revealing input. The protocol (Fig. 7) proceeds as follows: the parties invoke the trusted party for computing \(\mathsf {ShrGn}_f\), and obtain shares of the output. Then, in two distinct steps, (1) all parties but \(P_1\) send their shares of the output to Q, and (2) \(P_1\) sends its share to Q. In case of abort, the process is repeated until it succeeds. Intuitively, the protocol maintains security because the only way to extract more information from the protocol is to corrupt both \(P_1\) and Q. In that case, however, \(P_1\) and Q can provide input in the ideal model that reveals everything about the inputs of the honest parties.

Proof of Theorem 4.3. Let A denote the adversary corrupting a subset of parties. As in the previous proof, it is straightforward that if A does not corrupt both Q and \(P_1\) then it cannot affect correctness and its view can be trivially simulated. If A corrupts both \(P_1\) and Q, then by instructing the simulator to send the fully revealing input in the ideal model, the adversary’s view can be simulated perfectly, regardless of its course of action.    \(\square \)

Fig. 7. n-Party Protocol \(\varPi \) for f in the \(\mathsf {ShrGn}_{f}\)-Hybrid Model with Identifiable Abort.

4.4 Outliers

In this section, we present a protocol for a function that escapes the above criteria but is nevertheless computable with full security. Due to space constraints, we only give here a brief overview of the protocol. For the formal description and security analysis, the reader is referred to the full version of the present paper [24]. Define the functionality f that takes inputs \(x\in \{0,1,2\}\) from \(P_1\) and \(y\in \{0,1,2\}\) from \(P_2\) and delivers f(x, y) to Q such that

$$f(x,y)=\begin{cases}1 & \text{if } x=y\in \{0,1\}\\ 2 & \text{if } x=y=2\\ 0 & \text{otherwise}\end{cases}$$
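For reference, a direct Python encoding of this functionality (only Q receives the result in the actual protocol):

```python
def f(x: int, y: int) -> int:
    assert x in (0, 1, 2) and y in (0, 1, 2)
    if x == y and x in (0, 1):
        return 1
    if x == y == 2:
        return 2
    return 0

assert f(1, 1) == 1 and f(2, 2) == 2 and f(0, 1) == 0
```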

In this section, we show that the functionality f is computable with full security in \(\omega (\log (\kappa ))\) rounds. In what follows, we identify \(\{0,1,2\}\) with \(\{x_0,x_1,x_2\}\) or \(\{y_0,y_1,y_2\}\) to make the distinction between the parties’ input-spaces explicit.

Our protocol is inspired by the GHKL protocol and proceeds as follows. Write x and y for the inputs used by the parties. In a share generation phase, the parties obliviously generate two sequences of values \((a_0,\ldots ,a_r)\) and \((b_0,\ldots ,b_r)\) and an integer \(i^*\in [r]\) such that every value \(a_i\) and \(b_i\) is equal to f(x, y) for indices succeeding \(i^*\); for indices preceding \(i^*\), \(a_i\) is computed by obliviously choosing a fresh input from \(\left\{ y_0,y_1\right\} \) for \(P_2\) and using input x for \(P_1\), and, similarly, \(b_i\) is computed by obliviously choosing a fresh input from \(\left\{ x_0,x_1\right\} \) for \(P_1\) and using input y for \(P_2\). The value of \(i^*\) is chosen according to a suitable distribution. The two sequences are then shared in a 3-out-of-3 additive (modulo 3) secret sharing among the parties. Then, in the share exchange phase, in r iterations, \(P_1\) is instructed to send its share of \(b_i\) to Q, and \(P_2\) is instructed to send its share of \(a_i\) to Q. If party \(P_1\) aborts at round i, then \(P_2\) sends its share of \(b_{i-1}\) to Q, and, similarly, if \(P_2\) aborts at round i, then \(P_1\) sends its share of \(a_{i}\) to Q. Party Q is instructed to output the value it can reconstruct from the shares.

We crucially observe that, prior to \(i^*\), the obliviously chosen input for each party is sampled from \(\{0,1\}\), and not \(\{0,1,2\}\). This seemingly superficial technicality is what enables the protocol to be secure.
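The following sketch illustrates how the dealer-side sequences \((a_0,\ldots ,a_r)\), \((b_0,\ldots ,b_r)\) and \(i^*\) could be produced; it is not the paper’s formal share generation phase. The truncated-geometric choice of \(i^*\) (with parameter p), the convention that round \(i^*\) itself already holds the real output, and the helper names sample_i_star and share3 are assumptions made for concreteness.

```python
import random

def f(x: int, y: int) -> int:
    if x == y and x in (0, 1):
        return 1
    return 2 if x == y == 2 else 0

def sample_i_star(r: int, p: float = 0.25) -> int:
    """Truncated geometric choice of i* (the parameter p is an assumption)."""
    i = 1
    while i < r and random.random() > p:
        i += 1
    return i

def generate_sequences(x: int, y: int, r: int):
    i_star = sample_i_star(r)
    a, b = [], []
    for i in range(r + 1):
        if i < i_star:
            a.append(f(x, random.choice((0, 1))))   # dummy y-input from {y0, y1}
            b.append(f(random.choice((0, 1)), y))   # dummy x-input from {x0, x1}
        else:
            a.append(f(x, y))
            b.append(f(x, y))
    return a, b, i_star

def share3(v: int):
    """3-out-of-3 additive sharing modulo 3."""
    s1, s2 = random.randrange(3), random.randrange(3)
    return s1, s2, (v - s1 - s2) % 3

a_seq, b_seq, i_star = generate_sequences(x=2, y=2, r=20)
shared_a = [share3(v) for v in a_seq]    # each a_i (and similarly b_i) is then
shared_b = [share3(v) for v in b_seq]    # held additively shared by P1, P2, Q
```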

We conclude with the following theorem, which immediately yields full security for f, assuming a protocol for OT.

Theorem 4.4

Protocol \(\varPi \) computes f with statistical full security in the \(\mathsf {ShrGn}^*_f\)-hybrid model with identifiable abort.

5 Lower-Bound on Round-Complexity

In this section, we present a round-complexity lower bound for the three-party solitary functionality f from Sect. 4.4. In what follows, let \(\varPi \) denote a protocol for f, let \(\kappa \) denote the security parameter, and assume the round complexity of \(\varPi \) is set to some value r that is independent of \(\kappa \). It follows as an immediate corollary of the theorem below that no such protocol can be fully secure.

Theorem 5.1

Using the notation above, there exists \(i\in [r]\) such that at least one of the following is true:

  1. An adversary corrupting \(P_2\) and Q violates \(P_1\)’s privacy by aborting \(P_2\) at round i.

  2. An adversary corrupting \(P_1\) and Q violates \(P_2\)’s privacy by aborting \(P_1\) at round i.

  3. An adversary corrupting \(P_1\) violates correctness by aborting at round i.

  4. An adversary corrupting \(P_2\) violates correctness by aborting at round i.

For the proof of the above, the reader is referred to the full version [24] of the present paper.