Abstract
Classical definitions for secure multiparty computation assume the existence of a single adversarial entity controlling the set of corrupted parties. Intuitively, the definition requires that the view of the adversary, corrupting t parties, in a real-world execution can be simulated by an adversary in an ideal model, where parties interact only via a trusted party. No restrictions, however, are imposed on the view of honest parties in the protocol; thus, if honest parties obtain information about the private inputs of other honest parties, it is not counted as a violation of privacy. This is arguably undesirable in many situations that fall into the MPC framework. Nevertheless, there are secure protocols (e.g., the 2-round multiparty protocol of Ishai et al. [CRYPTO 2010] tolerating a single corrupted party) that instruct the honest parties to reveal their private inputs to all other honest parties (once the malicious party is somehow identified).
In this paper, we put forth a new security notion, which we call FaF-security, extending the classical notion. In essence, \((t,h^*)\)-FaF-security requires the view of a subset of up to \(h^*\) honest parties to also be simulatable in the ideal model (in addition to the view of the malicious adversary, corrupting up to t parties). This property should still hold, even if the adversary leaks information to honest parties by sending them non-prescribed messages. We provide a thorough exploration of the new notion, investigating it in relation to a variety of existing security notions. We further investigate the feasibility of achieving FaF-security and show that every functionality can be computed with (computational) \((t,h^*)\)-FaF full-security, if and only if \(2t+h^*<m\). Interestingly, the lower-bound result actually shows that even fair FaF-security is impossible in general when \(2t+h^*\ge m\) (surprisingly, the view of the malicious attacker is not used as the trigger for the attack).
We also investigate the optimal round complexity for \((t,h^*)\)-FaF-secure protocols and give evidence that the leakage of private inputs of honest parties in the protocol of Ishai et al. [CRYPTO 2010] is inherent.
Research supported by ISF grant 152/17, and by the Ariel Cyber Innovation Center in conjunction with the Israel National Cyber Directorate in the Prime Minister’s Office.
1 Introduction
In the setting of secure multiparty computation (MPC), the goal is to allow a set of m mutually distrustful parties to compute some function of their private inputs in a way that preserves some security properties, even in the face of adversarial behavior by some of the parties. Classical security definitions (cf., [25]) assume the existence of a single adversarial entity controlling the set of corrupted parties. The two most common types of adversaries are malicious adversaries (which may instruct the corrupted parties to deviate from the prescribed protocol in any possible way), and semi-honest adversaries (which must follow the instructions of the protocol, but may try to infer additional information based on the joint view of the corrupted parties).
Classical security definition. Some of the most basic security properties that may be desired are correctness, privacy, independence of inputs, fairness, and guaranteed output delivery. A general paradigm for defining the desired security of protocols is known as the ideal vs. real paradigm. This paradigm avoids the need to specify a list of desired properties. Rather, security is defined by describing an ideal functionality, where parties interact via a trusted party to compute the task at hand. A real-world protocol is then deemed secure (against a class \(\mathcal {C}\) of adversaries), if no adversary \(\mathcal {A}\in \mathcal {C}\) can do more harm than an adversary in the ideal world. In some more detail, the definition requires that the view of the adversary, corrupting t parties, in a real-world execution can be simulated by an adversary (corrupting the same t parties) in the ideal world.
Classical instantiations of this paradigm, however, pose no restrictions on the view of honest parties in the protocol. Hence, such definitions do not count it as a violation of privacy if honest parties learn private information about other honest parties. This is arguably undesirable in many situations that fall into the MPC framework. Furthermore, when considering MPC solutions for real-life scenarios, it is hard to imagine that possible users would agree to have their private inputs revealed to honest parties (albeit not to malicious ones). Indeed, there is no guarantee that an honest party would not get corrupted at some later point in the future. If that honest party has learned some sensitive information about another party’s input during the protocol’s execution (say, the password to its bank account), then this information may still be used in a malicious manner. Furthermore, as most of us are reluctant to reveal the password to our bank account even to our own friends, it is natural to consider a model where every uncorrupted party is honest-but-curious by itself, operating simultaneously with the malicious adversary.^{Footnote 1}
There are two manners in which honest parties may come to learn some private information about other parties (in a secure protocol). The first is if the protocol itself instructs the honest parties to reveal some information about their private inputs (which is not implied by the output) to all other honest parties (once all malicious parties are somehow identified). An example of such a protocol is the 2-round m-party protocol (with \(m\ge 5\)) of Ishai et al. [32], tolerating a single malicious party.
Alternatively, honest parties may also be exposed to the private information of other parties if the adversary sends them parts of its view during the execution (although it is not instructed to do so by the protocol). We stress that such an attack is applicable to many classical results in MPC that assume an upper bound of t malicious parties and rely on \((t+1)\)-out-of-m secret sharing. Consider, for example, the BGW protocol [11], which is secure against \(t<m/3\) corruptions. In the first round of the protocol, the parties share their inputs in a \((t+1)\)-out-of-m Shamir’s secret sharing scheme [39]. If an adversary, controlling t parties, sends all its t shares to an honest party, then this honest party can reconstruct the inputs of all other parties.
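To make this attack concrete, here is a minimal Python sketch of Shamir's scheme (the field, party indices, and secret are illustrative toy parameters only): any t shares reveal nothing about a secret hidden by a degree-t polynomial, but the t shares forwarded by the adversary, combined with the receiving honest party's own share, form t+1 points and suffice to reconstruct.

```python
import random

P = 2**61 - 1  # a Mersenne prime; the field over which shares are computed

def share(secret, t, m):
    """Split `secret` into m shares; any t+1 of them reconstruct it
    (the secret is the constant term of a random degree-t polynomial)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, m + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0, using t+1 (or more) shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for k, (xk, _) in enumerate(shares):
            if k != j:
                num = num * (-xk) % P
                den = den * (xj - xk) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

t, m = 2, 7
shares = share(1234567, t, m)
adversary_shares = shares[:t]   # the t shares held by the corrupted parties
honest_share = shares[t]        # the receiving honest party's own share
# t shares alone determine nothing, but t+1 pin down the polynomial:
leaked = reconstruct(adversary_shares + [honest_share])
assert leaked == 1234567
```

Note that the honest party need not deviate at all: merely holding the unsolicited shares alongside its own is enough to determine the secret.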
It may be natural to try to overcome the second type of information leakage by simply instructing honest parties to disregard and erase unsolicited messages sent to them by the adversary. However, in many settings, assuming that the parties are able to reliably erase parts of their state might be unrealistic, due to, e.g., physical limitations on erasures. Moreover, it is not even always clear how to define what should be erased in the first place. Consider, for example, the case where the adversary has some room for action or some redundancy in the messages it is instructed to send by the protocol. In such a case, the adversary can implant additional non-prescribed information about other parties into these messages. Thus, the honest parties receiving these messages are not able to detect the leakage of information. If, say, the adversary implanted a sharing of some private information among a subset of honest parties, then a ‘semi-honest’ entity can reconstruct this information by taking control over the parties in this subset and seeing their internal states.
In this paper, we investigate the following question that arises from the above discussion.
Can the classical notion of security for malicious adversaries be extended to also prevent leakage of private information to (possibly colluding) subsets of (semi-)honest parties?
The issue of honest parties being able to obtain information (not available to them from their inputs and from the output of the functionality) was already briefly mentioned in [38]. They showed how to construct verifiable secret sharing (and thus compute any functionality) with unconditional security, assuming broadcast and an honest majority. Their solution for preventing honest parties from learning additional information was to increase the threshold of the secret sharing used in the protocol. However, this came at the expense of the bound on the number of corrupted parties.
The solution of [38] may seem a natural answer to the above question, and it may further seem that any secure protocol could be turned into one that prevents leakage to honest parties by increasing the bound on the number of corrupted parties. Say, for example, that the protocol should withstand t malicious parties and we wish to avoid leakage to sets of \(h^*\) semi-honest parties. In this case, taking a protocol that is secure (by the classical definition) against \(t+h^*\) malicious parties may seem to suffice for the desired security. However, one must now consider the efficiency toll incurred by increasing the security threshold. Furthermore, it could be the case that increasing the threshold would render the protocol altogether insecure. Indeed, in Sect. 4.1, we give an example of a functionality that cannot be computed in the face of \(t+h^*\) malicious parties, but can be computed with full security in the face of t malicious parties, while avoiding leakage to any subset of \(h^*\) honest parties.
Quite surprisingly, we further show that the approach of increasing the threshold simply does not work in general. In particular, there exist protocols with standard full security against \(t+1\) malicious parties, yet t malicious parties could leak information to an honest party.^{Footnote 2}
1.1 Our Contribution
In this paper, we address the above question by putting forth a new security notion, which we call FaF-security, extending standard (static, malicious) MPC security (in the stand-alone model). We give a full-security variant as well as a security-with-abort variant for the new notion. In essence, \((t,h^*)\)-FaF-security requires that for every malicious adversary \(\mathcal {A}\) corrupting t parties, and for any disjoint subset of \(h^*\) parties, both the view of the adversary and the joint view of the additional \(h^*\) parties can be simulated (separately) in the ideal model. A more elaborate summary of the various definitions is given in Sect. 1.1.1. A comprehensive discussion appears in Sect. 3.
We accompany the new security notion with a thorough investigation of its feasibility and limitations in the various models of interest. Most notably, we discuss the feasibility of achieving full FaF-security against computational adversaries, and show that it is achievable for any functionality if and only if \(2t+h^* < m\). Interestingly, the lower-bound result actually shows that any protocol admits a round in which the adversary can leak the output to some parties without learning it, while not allowing other honest parties to learn it. Hence, even fair FaF-security is impossible in general when \(2t+h^*\ge m\). In Sect. 1.1.2, we elaborate on these results. We also investigate the optimal round complexity of FaF-secure protocols, and the feasibility of obtaining statistical/perfect FaF-security. A summary of these results also appears in Sect. 1.1.2.
Finally, we provide a thorough exploration of how the new notion relates to a variety of existing security notions. Specifically, we show some counterintuitive facts on how FaF-security relates to standard malicious security and mixed-adversaries security. See Sect. 1.1.3 for more on that.
1.1.1 FaF-Security – A Generalization of Classical Security
Before moving on to describe our new security notion in more detail, we first recall the notion of static, malicious, stand-alone security. We stress that while there are stronger security notions, some of which we mention below, this is arguably the most standard notion, serving much of the work on secure multiparty computation. Security is defined via the real vs. ideal paradigm. Here, security is described via an ideal functionality, where all parties (including the adversary) interact with a trusted entity. A malicious adversary is, thus, limited to selecting the inputs of the subset of corrupted parties.
A real-world protocol (for the functionality at hand) is deemed secure if it emulates the ideal setting. In a bit more detail, the protocol is t-secure, for a class \(\mathcal {C}\) of adversaries, if for every adversary \(\mathcal {A}\in \mathcal {C}\), corrupting at most t parties and interacting with the remaining parties, there exists an ideal-world adversary (called a simulator) that outputs a view for the real-world adversary that is distributed closely to its view in an actual random execution of the real protocol. A static adversary is one that chooses which parties to corrupt before the execution of the protocol begins. A formal definition of security is given in Sect. 3 as a special case of FaF-security.
The notion of FaF-security. We now give a more detailed overview of the new notion of security. As above, we follow the real vs. ideal paradigm, and strengthen the requirements of standard security. We say that a protocol \(\varPi \) computes a functionality f with \((t,h^*)\)-FaF-security (with respect to a class \(\mathcal {C}\) of adversaries), if for any adversary \(\mathcal {A}\in \mathcal {C}\) (statically) corrupting at most t parties, the following holds: (i) there exists a simulator \(\mathsf {S}\) that can simulate (in the ideal world) \(\mathcal {A}\)’s view in the real world (so far, this is standard security), and (ii) for any subset \(\mathcal {H}\) (of size at most \(h^*\)) of the uncorrupted parties, there exists a “semi-honest” simulator \(\mathsf {S}_{\mathcal {H}}\), such that, given the parties’ inputs and \(\mathsf {S}\)’s ideal-world view (i.e., its randomness, inputs, auxiliary input, and output received from the trusted party), \(\mathsf {S}_{\mathcal {H}}\) generates a view that is indistinguishable from the real-world view of the parties in \(\mathcal {H}\).
The reason for giving \(\mathsf {S}_{\mathcal {H}}\) the ideal-world view of \(\mathsf {S}\) is that in the real world, nothing prevents the adversary from sending its view to honest parties. Observe that since the definition requires that the adversary is simulatable according to the standard definition, it also protects the parties in \(\mathcal {H}\) from the adversary. This condition is in agreement with our motivation, where the parties in \(\mathcal {H}\) are honest but might later collude in a different protocol. The universal quantifier on \(\mathcal {H}\) yields, for example, that the definition also captures the model where every uncorrupted party is honest-but-curious by itself. The formal definition appears in Sect. 3.
FaF full-security and FaF security-with-abort. So far, we left vague the way that outputs are distributed to the parties by the trusted party in the ideal world. The first option is that the trusted party sends the appropriate output to each of the parties. This captures the notion of full-security, as it guarantees that the honest parties always receive the output of the computation (in addition to other properties, such as correctness and privacy). Cleve [16] showed that (standard) full-security is not generally achievable. This led to a relaxed notion of security, called security-with-abort. This notion is captured very similarly to the above full-security, with the difference being that in the ideal world, the trusted party first gives the output to the adversary, which in turn decides whether the honest parties see the output or not. This notion is naturally augmented with identifiability, by requiring the adversary to identify at least one malicious party in case the output is not given to all honest parties.
In this work, we appropriately define and consider a full-security variant and a security with (identifiable) abort variant of FaF-security. To define FaF security-with-identifiable-abort, we need to account for scenarios where some of the uncorrupted parties learn their output in the real world while others do not. Therefore, in the ideal execution, we explicitly allow the “semi-honest” simulator \(\mathsf {S}_\mathcal {H}\) to receive the output from the trusted party. The formal definition appears in Sect. 3.1.
It is also natural to consider a stronger security notion, where the joint view of the malicious adversary is simulatable together with the view of the parties in \(\mathcal {H}\). In Sect. 5.3, we show that this variant is strictly stronger than the variant defined above. In fact, we show that the GMW protocol [26] satisfies the weaker notion of FaF-security, but not the stronger notion. In the following, we will sometimes refer to the weaker notion as weak FaF-security, and to the stronger notion as strong FaF-security.
A natural property that is highly desirable from any definition is to allow (sequential) composition. We show that both the weak variant and the strong variant of FaF-security satisfy this property. Due to space limitations, the proof is given in the full version [1].
1.1.2 Feasibility and Limitations of FaF-Secure Computation
Our main theorem provides a characterization of the types of adversaries for which we can compute any multiparty functionality with computational FaF full-security.
Theorem 1 (informal)
Let \(t,h^*,m\in {\mathbb {N}}\). Assuming OT and OWP exist, any m-party functionality f can be computed with (weak) computational \((t,h^*)\)-FaF full-security, if and only if \(2t+h^*<m\).
For the positive direction, we first show that the GMW protocol admits FaF security-with-identifiable-abort. Then, we reduce the computation to FaF security-with-identifiable-abort, using a player-elimination technique. That is, the parties compute a functionality whose output is an \((m-t)\)-out-of-m secret sharing of the output of f. Since the joint view of the malicious and semi-honest parties contains \(t+h^*<m-t\) shares, they learn nothing from the output. We stress that the adversary itself cannot see the output unless all honest parties see it, and hence, cannot bias the output.
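The threshold arithmetic behind this step can be sketched as follows (a hypothetical helper for illustration, not part of the paper's construction): the two coalitions together hold \(t+h^*\) shares, and the bound \(2t+h^*<m\) is exactly the condition placing this below the reconstruction threshold \(m-t\).

```python
def output_hidden(m, t, h_star):
    """Does the (m-t)-out-of-m sharing of the output hide it from the
    joint view of t malicious and h* semi-honest parties?"""
    threshold = m - t        # shares needed to reconstruct f's output
    coalition = t + h_star   # shares seen by the two coalitions combined
    # coalition < threshold  <=>  t + h* < m - t  <=>  2t + h* < m
    return coalition < threshold

assert output_hidden(m=5, t=1, h_star=2)       # 2*1 + 2 = 4 < 5
assert not output_hidden(m=5, t=2, h_star=1)   # 2*2 + 1 = 5, not < 5
```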
We now turn to the negative direction. Interestingly, we essentially show that for \(m\le 2t+h^*\), any m-party protocol admits a round in which an adversary (corrupting t parties) can leak the output to some \(h^*\) uncorrupted parties, while not allowing other honest parties to learn the output.
Somewhat surprisingly, for the case where \(t<m/2\), there are protocols where the adversary’s view consists of only random values throughout the execution. Indeed, in our attack, the adversary learns nothing about the output, and furthermore, the view of the adversary is not used as a trigger for the attack.
We next give an overview of the proof. First, by a simple player partitioning argument, we reduce the general mparty case to the 3party case, where \(t=h^*=1\). Let \({\text {A}}\), \({\text {B}}\), and \({\text {C}}\) be three parties. Let f be a oneway permutation. We consider the following functionality. Party \({\text {A}}\) holds a string a, party \({\text {C}}\) holds a string c, and party \({\text {B}}\) holds \(y_{{\text {A}}},y_{{\text {C}}}\). The output of all parties is (a, c) if \(f(a)=y_{{\text {A}}}\) and \(f(c)=y_{{\text {C}}}\), and \(\bot \) otherwise. We assume the strings a and c are sampled uniformly, and that \(y_{{\text {A}}}=f(a), y_{{\text {C}}}=f(c)\).
An averaging argument yields that there must exist a round i, where two parties, say \({\text {A}}\) together with \({\text {B}}\), can recover (a, c) with significantly higher probability than \({\text {C}}\) together with \({\text {B}}\). Our attacker corrupts \({\text {A}}\), acts honestly (using its original input a) until round i, and then aborts (regardless of its view so far). Finally, as the protocol terminates, \({\text {A}}\) sends its entire view to \({\text {B}}\). This allows \({\text {B}}\) to recover (a, c) with significantly higher probability than \({\text {C}}\).
Intuitively, in order to have the output of the honest party \({\text {C}}\) in the ideal world distributed as in the real world (where it is \(\bot \) with noticeable probability), the malicious simulator has to change its input (sent to the trusted party) with high enough probability. However, in this case, the semi-honest simulator for \({\text {B}}\) receives \(\bot \) from the trusted party. Since the only information it has on c is f(c), by the assumed security of f, the simulator for \({\text {B}}\) will not be able to recover c with non-negligible probability. Hence, \({\text {B}}\)’s simulator will fail to generate a valid view for \({\text {B}}\).
We stress that since \({\text {A}}\) aborts at round i, independently of its view, our attack works even if the parties have a simultaneous broadcast channel. The detailed proof appears in Sect. 4.2.
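The functionality underlying the lower bound can be sketched as follows. SHA-256 stands in for the one-way permutation f purely for illustration (it is believed one-way in practice but is not a permutation), and all names here are hypothetical:

```python
import hashlib
import os

def f(x: bytes) -> bytes:
    """Stand-in for the one-way permutation (assumption: SHA-256, which
    is one-way but not actually a permutation)."""
    return hashlib.sha256(x).digest()

def functionality(a, c, y_A, y_C):
    """A holds a, C holds c, B holds (y_A, y_C).
    All parties output (a, c) iff both preimages check out, else None (⊥)."""
    if f(a) == y_A and f(c) == y_C:
        return (a, c)
    return None

# Honest inputs: a and c uniform, y_A = f(a), y_C = f(c).
a, c = os.urandom(16), os.urandom(16)
y_A, y_C = f(a), f(c)
assert functionality(a, c, y_A, y_C) == (a, c)
# If the malicious simulator substitutes a different input for c (or a),
# the honest party's output flips to ⊥ with overwhelming probability:
assert functionality(a, os.urandom(16), y_A, y_C) is None
```

The point of the design is that a semi-honest simulator for \({\text {B}}\), seeing only \(\bot\) and \(f(c)\), cannot produce \(c\) itself without inverting \(f\), which is exactly what the attack forces it to do.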
Low round complexity. Optimal round complexity of protocols is a well-studied question for classical MPC (see, e.g., [7, 8, 24, 32]). Here, we explore the optimal number of rounds required for general computation with (1, 1)-FaF full-security. Our motivation for investigating this question comes from the two-round protocol of Ishai et al. [32], tolerating a single malicious party. In the second round, the honest parties can either complete the computation or are able to detect the malicious party. If a party was detected cheating, then the honest parties reveal their inputs to some of the other honest parties.
Clearly, this is not considered secure according to FaF-security. Indeed, we prove that there are functionalities that cannot be computed with (1, 1)-FaF-security in less than three rounds. We interpret this as evidence that some kind of leakage on the inputs of honest parties is necessary in order to achieve a two-round protocol.^{Footnote 3} The next theorem completes the picture, asserting that for \(m\ge 9\) parties, the optimal number of rounds for (1, 1)-FaF full-security is three.
Theorem 2 (informal)
Let \(m\ge 9\). There exists an m-party functionality that has no 2-round protocol that computes it with (weak) (1, 1)-FaF full-security. On the other hand, assuming that pseudorandom generators exist, for any m-party functionality, there exists a 3-round protocol that computes it with strong (1, 1)-FaF full-security.
We now present an overview of the proof. For the negative direction, we rely on the proof by Gennaro et al. [24] of the impossibility of computing \(\left( x_1,x_2,\bot ,\ldots ,\bot \right) \mapsto x_1\wedge x_2\) in two rounds against two corrupted parties. We observe that the adversary they proposed corrupts one party maliciously and another semi-honestly. Moreover, the semi-honestly corrupted party has no input, hence the actions of the adversary can be adapted to our setting. More concretely, we show that an adversary corrupting \({\text {P}}_2\) can force all of the parties to gain specific information on \(x_1\), yet by sending its view (at the end of the interaction) to a carefully chosen honest party, it can “teach” that party some information about \(x_1\) that no other party has (not even the adversary itself). This proof, in fact, works for any \(m\ge 3\).
For the positive direction, we consider the protocol of Damgård and Ishai [18]. Using share-conversion techniques [17] and the 2-round verifiable secret sharing (VSS) protocol of [23], they were able to construct a 3-round protocol that tolerates \(t<m/5\) corruptions. We follow similar lines as [18]. First, we show how to slightly modify the VSS protocol so that it admits FaF-security. Then, by observing that the parties in the protocol of [18] hold only shares of the other parties’ inputs, we are able to show that by increasing the threshold of the sharing scheme, the protocol admits FaF-security. The construction of the VSS protocol follows similar lines as in [23]. We further show that the protocol can be generalized to admit \((t,h^*)\)-FaF full-security, whenever \(5t+3h^*<m\).
Information-theoretic FaF-security. Information-theoretic security has been studied extensively in the MPC literature, see, e.g., [11, 21, 38]. We further generalize the corruption model to allow non-threshold adversaries (for both the malicious and the semi-honest adversaries). We consider the same adversarial structure as Fitzi et al. [21], called a monotone mixed adversarial structure. Roughly, it states that turning a malicious party into a semi-honest one does not compromise the security of the protocol. As discussed previously, this is not generally the case.
We prove the following theorem, characterizing the types of adversaries for which we can compute any multiparty functionality with information-theoretic security.
Theorem 3 (informal)
Let \(\mathcal {R}\subseteq \left\{ \left( \mathcal {I},\mathcal {H} \right) :\mathcal {I}\cap \mathcal {H}=\emptyset \right\} \) be a monotone mixed adversarial structure over a set of parties \(\mathcal {P}\). Then:

1.
Any m-party functionality f can be computed with \(\mathcal {R}\)-FaF full-security, assuming an available broadcast channel, if and only if
$$\mathcal {I}_1\cup \mathcal {H}_1\cup \mathcal {I}_2\cup \mathcal {H}_2\ne \mathcal {P},$$
for every \((\mathcal {I}_1,\mathcal {H}_1),(\mathcal {I}_2,\mathcal {H}_2)\in \mathcal {R}\).

2.
Any m-party functionality f can be computed with \(\mathcal {R}\)-FaF full-security (without broadcast), if and only if
$$\mathcal {I}_1\cup \mathcal {H}_1\cup \mathcal {I}_2\cup \mathcal {H}_2\ne \mathcal {P}\text { and }\mathcal {I}_1\cup \mathcal {I}_2\cup \mathcal {I}_3\ne \mathcal {P},$$
for every \((\mathcal {I}_1,\mathcal {H}_1),(\mathcal {I}_2,\mathcal {H}_2),(\mathcal {I}_3,\mathcal {H}_3)\in \mathcal {R}\).

3.
Any m-party functionality f can be computed with \(\mathcal {R}\)-FaF full-security, if and only if
$$\mathcal {I}_1\cup \mathcal {H}_1\cup \mathcal {I}_2\cup \mathcal {H}_2\cup \mathcal {I}_3\ne \mathcal {P},$$
for every \((\mathcal {I}_1,\mathcal {H}_1),(\mathcal {I}_2,\mathcal {H}_2),(\mathcal {I}_3,\mathcal {H}_3)\in \mathcal {R}\).
Interestingly, the positive direction holds with respect to strong FaF-security, and the negative direction holds with respect to weak FaF-security. Additionally, as Fitzi et al. [21] showed that the same conditions hold with respect to mixed adversaries, this yields an equivalence between all three notions of security, as far as general MPC goes, for monotone adversarial structures.
The proof follows similar lines as [21]. For the positive direction, we show how the parties can securely emulate a 4-party BGW protocol tolerating a single malicious party. The negative direction is proved by reducing the computation to a functionality known to be impossible to compute securely (according to the standard definition), using a player-partitioning argument. The full treatment of information-theoretic FaF-security with respect to monotone adversarial structures is deferred to the full version of the paper [1].
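For a finite structure, the feasibility conditions of the informal theorem can be checked mechanically. The following is a hypothetical Python helper (names and representation are our own, for illustration): \(\mathcal {R}\) is a list of pairs of frozensets \((\mathcal {I},\mathcal {H})\) over the party set \(\mathcal {P}\).

```python
from itertools import product

def cond_item1(P, R):
    """I1 ∪ H1 ∪ I2 ∪ H2 ≠ P for every pair in R (with broadcast)."""
    return all((I1 | H1 | I2 | H2) != P
               for (I1, H1), (I2, H2) in product(R, repeat=2))

def cond_item2(P, R):
    """Item 1 plus I1 ∪ I2 ∪ I3 ≠ P for every triple (without broadcast)."""
    return cond_item1(P, R) and all(
        (I1 | I2 | I3) != P
        for (I1, _), (I2, _), (I3, _) in product(R, repeat=3))

def cond_item3(P, R):
    """I1 ∪ H1 ∪ I2 ∪ H2 ∪ I3 ≠ P for every triple in R."""
    return all((I1 | H1 | I2 | H2 | I3) != P
               for (I1, H1), (I2, H2), (I3, _) in product(R, repeat=3))

# A toy 5-party example: one corruption pattern with t = h* = 1.
P = frozenset(range(5))
R = [(frozenset({0}), frozenset({1}))]
assert cond_item1(P, R) and cond_item2(P, R) and cond_item3(P, R)
```

Note that a full check would enumerate the monotone closure of \(\mathcal {R}\); the sketch above tests only the pairs explicitly listed.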
1.1.3 The Relation Between FaF Security and Other Definitions
The relation between FaF-security and standard full-security. It is natural to explore how the new definition relates to classical definitions, both in the computational and in the information-theoretic settings.
We start by comparing FaF-security to the standard definition (for static adversaries). It is easy to see that standard t-security does not, in general, imply \((t,h^*)\)-FaF full-security, even for functionalities with no inputs (see Sect. 5.1 for a simple example showing this). Obviously, \((t,h^*)\)-FaF-security readily implies its classical t-security counterpart. One might expect that classical \((t+h^*)\)-security must imply \((t,h^*)\)-FaF-security. We show that this is not the case in general. Specifically, in Example 1, we present a protocol that admits traditional (static) malicious security against t corruptions, yet does not admit \((t-1,1)\)-FaF-security.
In contrast to the above, we claim that adaptive \((t+h^*)\)-security implies strong \((t,h^*)\)-FaF full-security. Recall that an adaptive adversary is one that can choose which parties to corrupt during the execution and after the termination of the protocol, depending on its view. Indeed, strong FaF-security can be seen as a special case of adaptive security. We do believe, however, that the FaF model is of special interest, and specifically, that in many scenarios, the full power of adaptive security is overkill.
The relation between FaF-security and mixed-adversaries security. The notion of “mixed adversaries” was introduced in [21]. It considers a single entity that corrupts a subset \(\mathcal {I}\) maliciously, and another subset \(\mathcal {H}\) semi-honestly. Similarly, the simulator for a mixed adversary is a single simulator controlling the parties in \(\mathcal {I}\cup \mathcal {H}\), with the restriction of only being able to change the inputs of the parties in \(\mathcal {I}\).
It is instructive to compare the mixed-adversary notion to that of FaF-security, which, in turn, can be viewed as if there are two distinct adversaries (which do not collude) – one malicious and one semi-honest. One might expect that \((t,h^*)\)-mixed full-security would imply \((t,h^*)\)-FaF full-security. However, similarly to the case with standard security, we show that this is not generally the case in the computational setting (cf., Example 2).
1.2 Related Works
Security definitions for standard MPC were the subject of much investigation. Notable works introducing various definitions are [5, 6, 13, 25, 37]. The question of achieving (standard) full-security has been given quite some attention; see, e.g., [4, 16, 19, 28, 36] for two parties, and [4, 11, 27] for the multiparty setting.
The definition we propose can also be viewed as if there were two different adversaries, one corrupting actively and the second corrupting passively, where the adversaries cannot exchange messages outside of the environment. Some forms of “decentralized” adversaries were considered in [2, 3, 14, 34], with the motivation of achieving collusion-free protocols. However, unlike our definition, the definitions they proposed were both complicated and did not allow an external entity to corrupt more than a single party.
Fitzi et al. [21] were the first to consider the notion of mixed adversaries. In their model, an adversary can corrupt a subset of the parties actively, and another subset passively. Moreover, their work considered general non-threshold adversary structures. They gave a complete characterization of the adversary structures for which general unconditional MPC is possible, for four different models: perfect security with and without broadcast, and statistical security (with negligible error probability) with and without broadcast. Beerliová-Trubíniová et al. [9] and Hirt et al. [31] further studied adversaries that can additionally fail-corrupt another subset of parties. They gave the exact conditions for general secure function evaluation (SFE) and general MPC to be possible for perfect security, statistical security, and computational security, assuming a broadcast channel. In all these settings, they confirmed the strict separation between SFE and MPC. Koo [35] considered adversaries that can maliciously corrupt certain parties, and, in addition, omission-corrupt others. Omission corruptions allow the adversary to block incoming and outgoing messages. Zikas et al. [41] further refined this model by introducing the notions of send-omission corruptions, where the adversary can selectively block outgoing messages, and receive-omission corruptions, where the adversary can selectively block incoming messages. For a full survey of works on these notions of mixed adversaries, see Zikas [40].
1.3 Organization
In Sect. 2, we present the required preliminaries. In Sect. 3, we formally define our new notion of FaF-security. Then, in Sect. 4, we characterize computational FaF full-security. In Sect. 5, we compare the new definition to other existing notions of security.
2 Preliminaries
We use calligraphic letters to denote sets, uppercase for random variables, lowercase for values, and bold characters for vectors. For \(n\in {\mathbb {N}}\), let \([n]=\{1,2,\ldots ,n\}\). For a set \(\mathcal {S}\) we write \(s\leftarrow \mathcal {S}\) to indicate that s is selected uniformly at random from \(\mathcal {S}\). Given a random variable (or a distribution) X, we write \(x\leftarrow X\) to indicate that x is selected according to X. A PPTM is a probabilistic polynomial-time Turing machine.
A function \(\mu (\cdot )\) is called negligible if for every polynomial \(p(\cdot )\) and all sufficiently large n, it holds that \(\mu (n)<1/p(n)\). For a vector \(\mathbf{v}\) of dimension n, we write \(v_i\) for its ith component, and for \(\mathcal {S}\subseteq [n]\) we write \(\mathbf{v}_{\mathcal {S}}=\left( v_i \right) _{i\in \mathcal {S}}\). For a randomized function (or algorithm) f, we write f(x) to denote the random variable induced by the function on input x, and write f(x; r) to denote the value when the randomness of f is fixed to r. Other preliminaries are standard and, for space considerations, are deferred to the full version [1].
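To make two of these conventions concrete, the following toy sketch (not part of the paper's formalism; the function f and the vectors are hypothetical) illustrates the projection \(\mathbf{v}_{\mathcal {S}}\) and the effect of fixing the randomness of a randomized function via f(x; r):

```python
import random

# Toy illustration (not from the paper) of two notational conventions above:
# the projection v_S = (v_i)_{i in S}, and fixing the randomness of a
# randomized function via f(x; r). The function f here is hypothetical.

def project(v, S):
    """Return v_S, the sub-vector of v indexed by S (1-based, as in the text)."""
    return tuple(v[i - 1] for i in sorted(S))

def f(x, r=None):
    """f(x) samples fresh randomness; f(x; r) is deterministic once r is fixed."""
    if r is None:
        r = random.randrange(2)
    return x ^ r

v = ('a', 'b', 'c', 'd')
assert project(v, {1, 3}) == ('a', 'c')
assert f(1, r=1) == 0       # fixing r makes the value deterministic
```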
3 The New Definition – FaF Full-Security
In this section, we present our new security notion, aiming to strengthen the classical definition of security by imposing privacy restrictions on (subsets of) honest parties, even in the presence of malicious behavior by other parties. Crucially, we wish to prevent the adversary from leaking private information of one subset of parties to another subset, even though neither subset is under its control. The definition is presented alongside the classical definition.

We follow the standard ideal vs. real paradigm for defining security. Intuitively, the security notion is defined by describing an ideal functionality, in which both the corrupted and non-corrupted parties interact with a trusted entity. A real-world protocol is deemed secure if an adversary in the real world cannot cause more harm than an adversary in the ideal world. In the classical definition, this is captured by showing that an ideal-world adversary (simulator) can simulate the full view of the real-world adversary. For FaF-security, we further require that the view of any subset of the uncorrupted parties (including their interaction with the adversary) can be simulated in the ideal world.

To shed light on some of the subtleties in defining the proposed notion, in the full version we review several possible approaches for capturing the desired security notion (avoiding leakage to honest parties), and demonstrate why they fall short of doing so. In Sect. 5, we compare the definition we put forth with the standard full-security definition and with the mixed-adversaries definition.

To make the above intuition more formal, fix a (possibly randomized) \(m\)-ary function \(f = \left\{ f_n:\mathcal {X}^n_1\times \cdots \times \mathcal {X}^n_m \mapsto \mathcal {Y}^n_1\times \cdots \times \mathcal {Y}^n_m\right\} _{n\in {\mathbb {N}}}\) and let \(\varPi \) be a protocol for computing f. We further let \(\mathcal {X}^n=\mathcal {X}^n_1\times \cdots \times \mathcal {X}^n_m\).
The FaF Real Model
An \(m\)-party protocol \(\varPi \) for computing the function f is defined by a set of m interactive probabilistic polynomial-time Turing machines \(\mathcal {P}= \left\{ {\text {P}}_1,\ldots ,{\text {P}}_m\right\} \). Each Turing machine (party) holds the security parameter \(1^n\) as a joint input and a private input \(x_i\in \mathcal {X}^n_i\). The computation proceeds in rounds. In each round, the parties either broadcast and receive messages over a common broadcast channel, or send messages to an individual party over a secure channel. The number of rounds in the protocol is expressed as some function r(n) of the security parameter (where r(n) is bounded by some polynomial). At the end of the protocol, the (honest) parties output a value according to the specification of the protocol. When the security parameter is clear from the context, we remove it from the notation. The view of a party consists of the party's input, randomness, and the messages received throughout the interaction.

We consider two adversaries. The first is a malicious adversary \(\mathcal {A}\) that controls a subset \(\mathcal {I}\subset \mathcal {P}\). The adversary has access to the full view of all corrupted parties, and may instruct the corrupted parties to deviate from the protocol in any way it chooses. We make explicit the fact that the adversary can send messages (even if not prescribed by the protocol) to any uncorrupted party in every round of the protocol, and can do so after all messages for this round were sent (see Remark 1 for more on this). The adversary is non-uniform, and is given an auxiliary input \(z_{\mathcal {A}}\). The second adversary is a semi-honest adversary \(\mathcal {A}_{\mathcal {H}}\) that controls a subset \(\mathcal {H}\subset \mathcal {P}\setminus \mathcal {I}\) of the remaining parties (for the sake of clarity, we will only refer to the parties in \(\mathcal {I}\) as corrupted). Similarly to \(\mathcal {A}\), this adversary has access to the full view of its parties. However, \(\mathcal {A}_{\mathcal {H}}\) cannot instruct the parties to deviate from the prescribed protocol in any way; it may only try to infer information about non-corrupted parties from its view in the protocol (which includes the joint view of the parties in \(\mathcal {H}\)). This adversary is also non-uniform, and is given an auxiliary input \(z_{\mathcal {H}}\). When we say that an adversary is computationally bounded, we mean it is a PPTM. Both adversaries are static, that is, they choose the subset to corrupt prior to the execution of the protocol. For a subset of parties \(\mathcal {S}\subseteq \mathcal {P}\), we let \(\mathbf{x}_{\mathcal {S}}\) be the vector of inputs of the parties in \(\mathcal {S}\); specifically, \(\mathbf{x}_{\mathcal {I}}\) and \(\mathbf{x}_{\mathcal {H}}\) denote the vectors of inputs of the parties controlled by \(\mathcal {A}\) and \(\mathcal {A}_{\mathcal {H}}\), respectively.
We next define the real-world global view for security parameter \(n\in {\mathbb {N}}\), an input sequence \(\mathbf{x}=\left( x_1,\ldots ,x_m \right) \), and auxiliary inputs \(z_{\mathcal {A}},z_{\mathcal {H}}\in \{0,1\}^*\), with respect to adversaries \(\mathcal {A}\) and \(\mathcal {A}_\mathcal {H}\) controlling the parties in \(\mathcal {I}\subset \mathcal {P}\) and \(\mathcal {H}\subset \mathcal {P}\setminus \mathcal {I}\), respectively. Let \(\mathrm{OUT}^{\mathrm{real}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x})\) denote the outputs of the uncorrupted parties (those in \(\mathcal {P}\setminus \mathcal {I}\)) in a random execution of \(\varPi \), with \(\mathcal {A}\) corrupting the parties in \(\mathcal {I}\). Further, let \(\mathrm{VIEW}^{\mathrm{real},\mathcal {A}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x})\) be the adversary's view during an execution of \(\varPi \), which contains its auxiliary input, its random coins, the inputs of the parties in \(\mathcal {I}\), and the messages they see during the execution of the protocol. In addition, we let \(\mathrm{VIEW}^{\mathrm{real},\mathcal {H}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x})\) be the view of \(\mathcal {A}_\mathcal {H}\) during an execution of \(\varPi \) when running alongside \(\mathcal {A}\) (this view consists of the random coins and inputs of the parties in \(\mathcal {H}\) and the messages they see during the execution of the protocol, in particular, any non-prescribed messages sent to them by the adversary).

We denote the global view in the real model by
$$\mathrm{REAL}_{\varPi ,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x})=\left( \mathrm{VIEW}^{\mathrm{real},\mathcal {A}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x}),\,\mathrm{VIEW}^{\mathrm{real},\mathcal {H}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x}),\,\mathrm{OUT}^{\mathrm{real}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x}) \right) .$$
It will be convenient to denote
$$\mathrm{REAL}^{\mathcal {A}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x})=\left( \mathrm{VIEW}^{\mathrm{real},\mathcal {A}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x}),\,\mathrm{OUT}^{\mathrm{real}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x}) \right) ,$$
i.e., the projection of the global view to the view of the adversary and the uncorrupted parties' outputs (those in \(\mathcal {P}\setminus \mathcal {I}\)), and to denote
$$\mathrm{REAL}^{\mathcal {H}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x})=\left( \mathrm{VIEW}^{\mathrm{real},\mathcal {H}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x}),\,\mathrm{OUT}^{\mathrm{real}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x}) \right) .$$
When \(\varPi \) is clear from the context, we will remove it for brevity.
Remark 1
A subtlety in the proposed model is how to deal with messages sent by the adversary at a later point in time, after the protocol execution has terminated. Specifically, if honest parties need to react to such messages, then the protocol has no predefined termination point. It is possible to incorporate a time parameter \(\tau \) into the security definition, asserting that the protocol is secure until time \(\tau \). To keep the definition clean and simple, we sidestep this subtlety by only allowing the real-world adversary to communicate with other (non-corrupted) parties until the last round of the protocol.
The FaF Ideal Model
We next describe the interaction in the FaF full-security ideal model, which specifies the requirements for fully secure FaF computation of the function f with security parameter n. Let \(\mathcal {A}\) be an adversary in the ideal world, which is given an auxiliary input \(z_\mathcal {A}\) and corrupts the subset \(\mathcal {I}\) of the parties, called corrupted. Further, let \(\mathcal {A}_{\mathcal {H}}\) be a semi-honest adversary, which controls a set of parties denoted \(\mathcal {H}\) and is given an auxiliary input \(z_\mathcal {H}\). We stress that the classical formulation of the ideal model assumes \(\mathcal {H}=\emptyset \).

The FaF ideal model – full-security.

Inputs: Each party \({\text {P}}_i\) holds \(1^n\) and \(x_i\in \mathcal {X}^n_i\). The adversaries \(\mathcal {A}\) and \(\mathcal {A}_\mathcal {H}\) are each given an auxiliary input \(z_{\mathcal {A}}\) and \(z_{\mathcal {H}}\in \{0,1\}^*\), respectively, and \(x_i\) for every \({\text {P}}_i\) controlled by them. The trusted party \(\mathsf {T}\) holds \(1^n\).

Parties send inputs: Each uncorrupted party \({\text {P}}_j\in \mathcal {P}\setminus \mathcal {I}\) sends \(x_j\) as its input to \(\mathsf {T}\). The malicious adversary \(\mathcal {A}\) sends a value \(x'_i\in \mathcal {X}^n_i\) as the input for party \({\text {P}}_i\in \mathcal {I}\). Write \(\left( x'_1,\ldots ,x'_m \right) \) for the tuple of inputs received by the trusted party.

The trusted party performs the computation: The trusted party \(\mathsf {T}\) selects a random string r, computes \( \mathbf{y }=\left( y_1,\ldots ,y_m \right) =f\left( x'_1,\ldots ,x'_m;r \right) \), and sends \(y_i\) to each party \({\text {P}}_i\).

The malicious adversary sends its (idealworld) view: \(\mathcal {A}\) sends to \(\mathcal {A}_{\mathcal {H}}\) its randomness, inputs, auxiliary input, and the output received from \(\mathsf {T}\).

Outputs: Each uncorrupted party (i.e., not in \(\mathcal {I}\)) outputs whatever it received from \(\mathsf {T}\); the parties in \(\mathcal {I}\) output nothing. \(\mathcal {A}\) and \(\mathcal {A}_\mathcal {H}\) output some function of their respective views.

Note that we gave \(\mathcal {A}_\mathcal {H}\) the ideal-world view of \(\mathcal {A}\). This is because, in the real world, we cannot prevent the adversary from sending its entire view to the uncorrupted parties. Consider the following example. Suppose three parties compute the functionality \(\left( \bot ,\bot ,\bot \right) \mapsto \left( r,\bot ,r \right) \), where r is some random string. A corrupted \({\text {P}}_1\) can send r to \({\text {P}}_2\) at the end of the interaction, thereby teaching it the output of an honest party. In the ideal world described above, \(\mathcal {A}_{{\text {P}}_2}\) receives r as well, allowing us to simulate this interaction.
We next define the ideal-world global view for security parameter \(n\in {\mathbb {N}}\), an input sequence \(\mathbf{x}=\left( x_1,\ldots ,x_m \right) \), and auxiliary inputs \(z_{\mathcal {A}},z_{\mathcal {H}}\in \{0,1\}^*\), with respect to adversaries \(\mathcal {A}\) and \(\mathcal {A}_\mathcal {H}\) controlling the parties in \(\mathcal {I}\subset \mathcal {P}\) and \(\mathcal {H}\subset \mathcal {P}\setminus \mathcal {I}\), respectively. Let \(\mathrm{OUT}^{\mathrm{ideal}}_{f,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x})\) denote the outputs of the uncorrupted parties (those in \(\mathcal {P}\setminus \mathcal {I}\)) in a random execution of the above ideal-world process, with \(\mathcal {A}\) corrupting the parties in \(\mathcal {I}\). Further, let \(\mathrm{VIEW}^{\mathrm{ideal},\mathcal {A}}_{f,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x})\) be the (simulated, real-world) view description output by \(\mathcal {A}\) in such a process. In addition, we let \(\mathrm{VIEW}^{\mathrm{ideal},\mathcal {H}}_{f,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x})\) be the view description output by \(\mathcal {A}_{\mathcal {H}}\) in such a process, when running alongside \(\mathcal {A}\). We denote the global view in the ideal model by
$$\mathrm{IDEAL}_{f,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x})=\left( \mathrm{VIEW}^{\mathrm{ideal},\mathcal {A}}_{f,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x}),\,\mathrm{VIEW}^{\mathrm{ideal},\mathcal {H}}_{f,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x}),\,\mathrm{OUT}^{\mathrm{ideal}}_{f,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x}) \right) .$$

As in the real model, it will be convenient to denote
$$\mathrm{IDEAL}^{\mathcal {A}}_{f,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x})=\left( \mathrm{VIEW}^{\mathrm{ideal},\mathcal {A}}_{f,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x}),\,\mathrm{OUT}^{\mathrm{ideal}}_{f,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x}) \right) $$
and
$$\mathrm{IDEAL}^{\mathcal {H}}_{f,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x})=\left( \mathrm{VIEW}^{\mathrm{ideal},\mathcal {H}}_{f,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x}),\,\mathrm{OUT}^{\mathrm{ideal}}_{f,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x}) \right) .$$
When f is clear from the context, we will remove it for brevity. We first define correctness.
Definition 1
(correctness). We say that a protocol \(\varPi \) computes a function f if for all \(n\in {\mathbb {N}}\) and for all \(\mathbf{x}\in \mathcal {X}^n\), in an honest execution, the joint output of all parties is identically distributed to a sample of \(f( \mathbf{x })\).
We next present the classical definition of computational security alongside the definition of FaF-security.
Definition 2
(classical malicious and FaF security). Let \(\varPi \) be a protocol for computing f. We say that \(\varPi \) computes f with computational \((t,h^*)\)-FaF full-security if the following holds. For every non-uniform PPTM adversary \(\mathcal {A}\), controlling a set \(\mathcal {I}\subset \mathcal {P}\) of size at most t in the real world, there exists a non-uniform PPTM adversary \(\mathsf {S}_{\mathcal {A}}\), controlling \(\mathcal {I}\) in the ideal model; and for every subset of the remaining parties \(\mathcal {H}\subset \mathcal {P}\setminus \mathcal {I}\) of size at most \(h^*\), controlled by a non-uniform semi-honest PPTM adversary \(\mathcal {A}_\mathcal {H}\), there exists a non-uniform PPTM adversary \(\mathsf {S}_{\mathcal {A},\mathcal {H}}\), controlling \(\mathcal {H}\) in the ideal world, such that
$$\left\{ \mathrm{REAL}^{\mathcal {A}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}})}(n,\mathbf{x}) \right\} _{n,\mathbf{x},z_{\mathcal {A}}}\stackrel{\mathrm{C}}{\equiv }\left\{ \mathrm{IDEAL}^{\mathcal {A}}_{f,\mathsf {S}_{\mathcal {A}}(z_{\mathcal {A}})}(n,\mathbf{x}) \right\} _{n,\mathbf{x},z_{\mathcal {A}}}$$
and
$$\left\{ \mathrm{REAL}^{\mathcal {H}}_{\varPi ,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x}) \right\} _{n,\mathbf{x},z_{\mathcal {A}},z_{\mathcal {H}}}\stackrel{\mathrm{C}}{\equiv }\left\{ \mathrm{IDEAL}^{\mathcal {H}}_{f,\mathsf {S}_{\mathcal {A}}(z_{\mathcal {A}}),\mathsf {S}_{\mathcal {A},\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x}) \right\} _{n,\mathbf{x},z_{\mathcal {A}},z_{\mathcal {H}}},$$
where \(n\in {\mathbb {N}}\), \(\mathbf{x}\in \mathcal {X}^n\), and \(z_{\mathcal {A}},z_{\mathcal {H}}\in \{0,1\}^*\).

We say that \(\varPi \) computes f with computational t-security if it computes it with computational (t, 0)-FaF full-security.

Finally, we say that \(\varPi \) computes f with strong computational \((t,h^*)\)-FaF full-security if, in addition, the global views are indistinguishable, that is,
$$\left\{ \mathrm{REAL}_{\varPi ,\mathcal {A}(z_{\mathcal {A}}),\mathcal {A}_{\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x}) \right\} _{n,\mathbf{x},z_{\mathcal {A}},z_{\mathcal {H}}}\stackrel{\mathrm{C}}{\equiv }\left\{ \mathrm{IDEAL}_{f,\mathsf {S}_{\mathcal {A}}(z_{\mathcal {A}}),\mathsf {S}_{\mathcal {A},\mathcal {H}}(z_{\mathcal {H}})}(n,\mathbf{x}) \right\} _{n,\mathbf{x},z_{\mathcal {A}},z_{\mathcal {H}}}.$$
To abbreviate notation, whenever \(\mathcal {H}=\left\{ {\text {P}}\right\} \) we denote its simulator by \(\mathsf {S}_{\mathcal {A},{\text {P}}}\).

The statistical/perfect security variants of the above definition are obtained naturally by replacing computational indistinguishability with statistical closeness or identical distributions, respectively.
Remark 2
Observe that for the two-party case, since we also protect \(\mathcal {H}\) from \(\mathcal {A}\), (weak) (1, 1)-FaF-security is equivalent to the security notion considered by Beimel et al. [10]. There, security holds if and only if no malicious adversary and no semi-honest adversary can attack the protocol.
Remark 3
Observe that according to the definition, we first fix the malicious simulator and only then fix the set \(\mathcal {H}\) of semi-honest parties. This should be contrasted with the definition of the ideal model, where the malicious simulator \(\mathsf {S}_{\mathcal {A}}\) sends its ideal-world view to the semi-honest simulator \(\mathsf {S}_{\mathcal {A},\mathcal {H}}\), implying that \(\mathsf {S}_{\mathcal {A}}\) should know the identities of the parties in \(\mathcal {H}\). Formally, we let the malicious simulator have an additional tape, on which it writes its ideal-world view, and the semi-honest simulator then reads from it.
f-Hybrid Model. Let f be an \(m\)-ary functionality. The f-hybrid model is identical to the real model of computation discussed above, except that, in addition, each \(m\)-size subset of the parties has access to a trusted party realizing f.
3.1 FaF Security-With-Identifiable-Abort

We also make use of protocols admitting security-with-identifiable-abort. In terms of the definition, the only change is that the ideal-world simulator operates in a different ideal model. We next describe the interaction in the FaF security-with-identifiable-abort ideal model for the computation of the function f with security parameter n. Unlike in the full-security ideal model, here the malicious adversary can instruct the trusted party not to send the output to the honest parties; however, in this case the adversary must publish the identity of a corrupted party. In addition, since there is no guarantee that in the real world the semi-honest parties will not learn the output, we always let the semi-honest parties receive their output in the ideal execution.
Let \(\mathcal {A}\) be a malicious adversary in the ideal world, which is given an auxiliary input \(z_\mathcal {A}\) and corrupts the subset \(\mathcal {I}\) of the parties. Further, let \(\mathcal {A}_\mathcal {H}\) be a semi-honest adversary, which controls a set of parties denoted \(\mathcal {H}\) and is given an auxiliary input \(z_\mathcal {H}\). Just as in the full-security ideal world, the standard formulation of security-with-identifiable-abort assumes \(\mathcal {H}=\emptyset \).

The FaF ideal model – security-with-identifiable-abort.

Inputs: Each party \({\text {P}}_i\) holds \(1^n\) and \(x_i\in \mathcal {X}^n_i\). The adversaries \(\mathcal {A}\) and \(\mathcal {A}_\mathcal {H}\) are each given an auxiliary input \(z_\mathcal {A}\) and \(z_\mathcal {H}\in \{0,1\}^*\), respectively, and \(x_i\) for every \({\text {P}}_i\) controlled by them. The trusted party \(\mathsf {T}\) holds \(1^n\).

Parties send inputs: Each uncorrupted party \({\text {P}}_j\in \mathcal {P}\setminus \mathcal {I}\) sends \(x_j\) as its input to \(\mathsf {T}\). The malicious adversary sends a value \(x'_i\in \mathcal {X}^n_i\) as the input for party \({\text {P}}_i\in \mathcal {I}\). Write \(\left( x'_1,\ldots ,x'_m \right) \) for the tuple of inputs received by the trusted party.

The trusted party performs the computation: The trusted party \(\mathsf {T}\) selects a random string r, computes \( \mathbf{y }=\left( y_1,\ldots ,y_m \right) =f\left( x'_1,\ldots ,x'_m;r \right) \), and sends \( \mathbf{y }_\mathcal {I}\) to \(\mathcal {A}\) and \( \mathbf{y }_\mathcal {H}\) to \(\mathcal {A}_\mathcal {H}\).

The malicious adversary sends its (idealworld) view: \(\mathcal {A}\) sends to \(\mathcal {A}_{\mathcal {H}}\) its randomness, inputs, auxiliary input, and the output received from \(\mathsf {T}\).

The malicious adversary instructs the trusted party to continue or halt: The adversary \(\mathcal {A}\) sends either \( {\texttt {continue}}\) or \(( {\texttt {abort}},{\text {P}}_i)\) for some \({\text {P}}_i\in \mathcal {I}\) to \(\mathsf {T}\). If it sent \( {\texttt {continue}}\), then the trusted party sends \(y_j\) to every honest party \({\text {P}}_j\). Otherwise, if \(\mathcal {A}\) sent \(( {\texttt {abort}},{\text {P}}_i)\), then \(\mathsf {T}\) sends \(( {\texttt {abort}},{\text {P}}_i)\) to each honest party \({\text {P}}_j\).

Outputs: Each uncorrupted party outputs whatever it received from \(\mathsf {T}\) (the parties in \(\mathcal {H}\) output \(( {\texttt {abort}},{\text {P}}_i)\) if they received it in the last step); the parties in \(\mathcal {I}\) output nothing. The adversaries output some function of their respective views.
4 Characterizing Computational FaF-Security

In this section we prove our main theorem regarding FaF-security. We give a complete characterization of the types of adversaries for which we can compute any multiparty functionality with computational FaF full-security. We prove the following result.
Theorem 4
Let \(t,h^*,m\in {\mathbb {N}}\). Then, under the assumption that OT and OWP exist, any \(m\)-party functionality f can be computed with (weak) computational \((t,h^*)\)-FaF full-security if and only if \(2t+h^*<m\). Moreover, the negative direction holds even assuming the availability of simultaneous broadcast.
In Sect. 4.1 we show the positive direction, while in Sect. 4.2 we prove the negative direction.
4.1 Feasibility of FaF-Security

In this section, we prove the positive direction of Theorem 4. In fact, we show how to reduce FaF full-security to FaF security-with-identifiable-abort whenever \(2t+h^*<m\). In addition, we explore the feasibility of both FaF full-security and FaF security-with-identifiable-abort, and provide interesting consequences of these results. We first show that the GMW protocol [26] admits FaF security-with-identifiable-abort for all possible threshold values of t and \(h^*\), and admits FaF full-security assuming \(t+h^*<m/2\). In Sect. 4.1.2 we show that, assuming an uncorrupted majority (i.e., \(t<m/2\)), residual FaF full-security is (perfectly) reducible to FaF security-with-identifiable-abort. The notion of residual security [30] intuitively allows an adversary to learn the output of the function on many choices of inputs for the corrupted parties. A formal definition and some motivation for using the residual security variant appear in Sect. 4.1.2.
4.1.1 Feasibility of FaF Security-With-Identifiable-Abort

We next show that the GMW protocol admits FaF security-with-identifiable-abort, and admits FaF full-security in the presence of an honest majority (i.e., \(t+h^*<m/2\)).
Theorem 5
Let \(m,t,h^*\in {\mathbb {N}}\) be such that \(t+h^*\le m\), and let f be an \(m\)-party functionality. Then, assuming OT exists, there exists a protocol for computing f with (weak) computational \((t,h^*)\)-FaF security-with-identifiable-abort. Moreover, if \(t+h^*<m/2\), then the protocol admits computational \((t,h^*)\)-FaF full-security.
Proof Sketch
We will show that a slight variation of the GMW protocol [26], setting the secret sharing scheme (for sharing the inputs) to a \((t+h^*+1)\)-out-of-\(m\) scheme, admits FaF-security.

Fix an adversary \(\mathcal {A}\) corrupting \(\mathcal {I}\) of size at most t, and let \(\mathcal {H}\subseteq \mathcal {P}\setminus \mathcal {I}\) be of size at most \(h^*\). The semi-honest simulator \(\mathsf {S}_{\mathcal {A},\mathcal {H}}\) works very similarly to the malicious simulator \(\mathsf {S}_{\mathcal {A}}\). The only difference is that the messages it sends to the adversary on behalf of the parties in \(\mathcal {H}\) are the real messages that the protocol instructs them to send (e.g., in the input commitment phase it commits to the real input, unlike \(\mathsf {S}_{\mathcal {A}}\), which commits to 0). Additionally, if the adversary did not abort, then for every output wire held by a party in \(\mathcal {I}\cup \mathcal {H}\), the simulator sets the message received from the honest parties (i.e., from \(\mathcal {P}\setminus (\mathcal {I}\cup \mathcal {H})\)) to be the XOR of the output of that wire and the shares of the wire held by the corrupted and semi-honest parties.

Security follows from the fact that the messages \(\mathsf {S}_{\mathcal {A},\mathcal {H}}\) sends to \(\mathcal {A}\) are consistent with the inputs of the malicious and the semi-honest parties. \(\square \)
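To see concretely why the threshold \(t+h^*+1\) matters, the following minimal Shamir sketch (with an illustrative small field and parameters, not the paper's actual construction) checks that while any \(t+h^*+1\) parties can reconstruct a shared input, the joint coalition of the \(t\) corrupted and \(h^*\) semi-honest parties holds one share too few:

```python
import random

# A minimal Shamir sketch with illustrative parameters (small prime field,
# tiny m, t, h*), showing why the FaF variant shares inputs with threshold
# t + h* + 1: the coalition of t corrupted and h* semi-honest parties holds
# only t + h* shares, one fewer than needed to reconstruct.

P = 2 ** 13 - 1  # a Mersenne prime; for illustration only

def share(secret, threshold, m):
    """threshold-out-of-m Shamir sharing of `secret` over GF(P)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, m + 1)}

def reconstruct(shares):
    """Lagrange-interpolate the sharing polynomial at 0 from {index: share}."""
    secret = 0
    for i, y in shares.items():
        num = den = 1
        for j in shares:
            if j != i:
                num = num * -j % P
                den = den * (i - j) % P
        secret = (secret + y * num * pow(den, P - 2, P)) % P
    return secret

m, t, h_star = 5, 1, 1
shares = share(1234, t + h_star + 1, m)
# Any t + h* + 1 = 3 shares determine the secret; any t + h* = 2 are
# information-theoretically independent of it.
assert reconstruct({i: shares[i] for i in (1, 2, 3)}) == 1234
assert reconstruct({i: shares[i] for i in (2, 4, 5)}) == 1234
```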
4.1.2 Reducing Residual FaF Full-Security to FaF Security-With-Identifiable-Abort

In this section, we present a reduction from residual FaF full-security to FaF security-with-identifiable-abort in the uncorrupted-majority setting. The reduction further has the property that if \(2t+h^*<m\), then (non-residual) FaF full-security is obtained. We first formally define the residual function. Intuitively, the residual of an \(m\)-ary function with respect to a subset \(\mathcal {S}\) of the indices fixes the inputs at the indices in \([m]\setminus \mathcal {S}\). More formally, it is defined as follows.
Definition 3
(Residual Function [29, 30]). Let \(f:\mathcal {X}\mapsto \mathcal {Y}\) be an \(m\)-ary functionality, let \( \mathbf{x }= \left( x_1,\ldots , x_m \right) \) be an input to f, and let \(\mathcal {S}=\left\{ i_1,\ldots , i_{m'}\right\} \subseteq [m]\) be a subset of size \(m'\). The residual function of f for \(\mathcal {S}\) and \( \mathbf{x }\) is an \(m'\)-ary function \(f_{\mathcal {S}, \mathbf{x }}:\mathcal {X}_{i_1}\times \ldots \times \mathcal {X}_{i_{m'}}\mapsto \mathcal {Y}_{i_1}\times \ldots \times \mathcal {Y}_{i_{m'}}\), obtained from f by restricting the input variables indexed by \([m] \setminus \mathcal {S}\) to their values in \( \mathbf{x }\). That is, \(f_{\mathcal {S}, \mathbf{x }}\left( x'_1,\ldots , x'_{m'} \right) = f\left( \hat{x}_1,\ldots ,\hat{x}_m \right) _{\mathcal {S}}\), where \(\hat{x}_k= x_k\) for \(k\notin \mathcal {S}\), and \(\hat{x}_{i_j} = x'_j\) for every \(i_j\in \mathcal {S}\).
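As a concrete illustration, the residual function can be realized as a closure that fixes the inputs outside \(\mathcal {S}\) and restricts the outputs to the parties in \(\mathcal {S}\); the 3-party XOR functionality in the sketch below is a hypothetical example, not one from the paper:

```python
# A concrete sketch of Definition 3: realize the residual function f_{S,x}
# as a closure that fixes the inputs outside S to their values in x and
# restricts the outputs to the parties in S. The 3-party XOR functionality
# below is a hypothetical example.

def residual(f, S, x):
    """f_{S,x}: S is a list of 1-based indices, x the full input vector."""
    S = sorted(S)
    def f_res(*free_inputs):
        full = list(x)
        for j, i in enumerate(S):           # plug free inputs into slots in S
            full[i - 1] = free_inputs[j]
        y = f(*full)
        return tuple(y[i - 1] for i in S)   # outputs of the parties in S
    return f_res

# Every party receives x1 ^ x2 ^ x3.
f = lambda x1, x2, x3: (x1 ^ x2 ^ x3,) * 3
f_res = residual(f, S=[1, 3], x=[0, 1, 0])  # fix P2's input to 1
assert f_res(1, 1) == (1, 1)                # f(1, 1, 1) restricted to {1, 3}
```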
Residual FaF full-security is defined similarly to FaF full-security, with the only exception being that in the ideal world, the two adversaries receive the residual function \(f_{\mathcal {I}, \mathbf{x }}\) instead of a single output (all the uncorrupted parties still receive an output from \(\mathsf {T}\), which they output).

Before stating the result, we first define the functionalities to which we reduce the computation. For an \(m\)-party functionality f, and for \(m'\in \left\{ m-t,\ldots ,m\right\} \), we define the \(m'\)-party functionality \(f'_{m'}( \mathbf{x })\) in the security-with-identifiable-abort model as follows. Let \( \mathbf{y }=\left( y_1,\ldots ,y_m \right) \) be the output of \(f( \mathbf{x })\). Share each \(y_i\) in an \((m-t)\)-out-of-\(m'\) secret sharing scheme in which party \({\text {P}}_i\) is required for reconstruction (this can be done by first sharing \(y_i\) in a 2-out-of-2 secret sharing scheme, giving one of the shares to \({\text {P}}_i\), and sharing the other among the remaining \(m'-1\) parties). The output of party \({\text {P}}_j\) is its respective share of each \(y_i\), i.e., \({\text {P}}_j\) receives \(\left( y_i[j] \right) _{i=1}^m\). We next present the statement; the proof is given in Sect. 4.1.3.
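The "\({\text {P}}_i\) is required" sharing can be sketched as follows. This is a toy XOR-based version in which the second half is shared all-or-nothing among the remaining parties for simplicity, whereas the construction above shares it with a threshold scheme:

```python
import secrets

# A toy sketch of the "P_i is required" sharing used by f'_{m'}: split y
# into a 2-out-of-2 XOR sharing, hand one half to P_i, and distribute the
# other half among the remaining parties. For simplicity the second half is
# XOR-shared all-or-nothing here; the construction uses a threshold scheme.

def xor_all(vals):
    """XOR-fold a list of integers."""
    acc = 0
    for v in vals:
        acc ^= v
    return acc

def share_with_required_party(y, i, parties):
    """Party i gets one XOR half of y; the rest jointly share the other half."""
    a = secrets.randbits(32)
    b = y ^ a                       # (a, b): 2-out-of-2 XOR sharing of y
    others = [p for p in parties if p != i]
    sub = [secrets.randbits(32) for _ in others[:-1]]
    sub.append(b ^ xor_all(sub))    # the sub-shares XOR to b
    shares = dict(zip(others, sub))
    shares[i] = a
    return shares

shares = share_with_required_party(0xdead, i=2, parties=[1, 2, 3, 4])
# All parties together reconstruct y; without P_2's half, b alone is useless.
assert shares[2] ^ xor_all([shares[p] for p in (1, 3, 4)]) == 0xdead
```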
Lemma 1
Let \(m,t,h^*\in {\mathbb {N}}\) be such that \(t+h^*\le m\) and \(t<m/2\), and let f be an \(m\)-party functionality. Then there exists a protocol \(\varPi \) that computes f with strong perfect \((t,h^*)\)-residual FaF full-security in the \(\left( f'_{m-t},\ldots ,f'_{m} \right) \)-hybrid model. Moreover, the protocol satisfies the following.

1.
The malicious security achieved is standard (i.e., not residual) security.

2.
If \(2t+h^*<m\), then \(\varPi \) admits \((t,h^*)\)-FaF full-security in the \(\left( f'_{m-t},\ldots ,f'_{m} \right) \)-hybrid model.
Remark 4
Note that in general, classical generic protocols, such as the GMW protocol, will not achieve FaF full-security, even if we increase the threshold of the secret sharing scheme to \(t+h^*+1\). As an example, consider the 3-party functionality \((a,\bot ,c)\mapsto a\oplus b\oplus c\), where \(b\leftarrow \{0,1\}\), and let \(t,h^*=1\). Using a 2-out-of-3 secret sharing scheme would allow a corrupted \({\text {P}}_1\) to help \({\text {P}}_2\) learn c. Using a 3-out-of-3 secret sharing scheme would allow the adversary to withhold information about the output.

We stress that even standard techniques, such as having the parties compute a functionality whose output is a secret sharing of the original output, fail to achieve security. This is due to the fact that an adversary can abort the execution, forcing the parties in \(\mathcal {H}\) to (possibly) learn an output. Then, after executing the same protocol with one party labeled inactive, the parties in \(\mathcal {H}\) will learn an additional output, which cannot be simulated. In Sect. 4.1.3 we show that such a protocol can achieve residual security, namely, the parties in \(\mathcal {H}\) will not learn more than the value of the function on many choices of inputs for the corrupted parties.
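The first failure mode in Remark 4 can be demonstrated concretely. The sketch below uses a 2-out-of-3 replicated XOR sharing (one standard instantiation of a 2-out-of-3 scheme; the remark is agnostic to the particular scheme): a corrupted \({\text {P}}_1\) forwarding its shares lets the semi-honest \({\text {P}}_2\) reconstruct c, even though \({\text {P}}_2\) never deviates from the protocol.

```python
import secrets

# A sketch of the Remark 4 attack: share P_3's input c with a 2-out-of-3
# replicated scheme (c = c1 ^ c2 ^ c3, and P_i holds the two pieces c_j with
# j != i). A corrupted P_1 forwarding its pieces to the semi-honest P_2
# gives P_2 all three pieces, so P_2 reconstructs c -- a FaF violation even
# though P_2 never deviates from the protocol.

def share_2_of_3(c):
    c1, c2 = secrets.randbits(1), secrets.randbits(1)
    pieces = {1: c1, 2: c2, 3: c ^ c1 ^ c2}
    # P_i holds every piece except the i-th one
    return {i: {j: pieces[j] for j in pieces if j != i} for i in (1, 2, 3)}

c = 1
shares = share_2_of_3(c)
# P_1 leaks its pieces to P_2; together they cover all of c1, c2, c3.
leaked = {**shares[2], **shares[1]}
assert len(leaked) == 3
assert leaked[1] ^ leaked[2] ^ leaked[3] == c
```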
Assuming that OT exists, we can apply the composition theorem to combine Lemma 1 with Theorem 5, and get as a corollary that whenever an uncorrupted majority is present (i.e., \(t<m/2\)), any functionality can be computed with (weak) computational residual FaF full-security.
Corollary 1
Let \(m,t,h^*\in {\mathbb {N}}\) be such that \(t+h^*\le m\) and \(t<m/2\), and let f be an \(m\)-party functionality. Then, assuming OT exists, there exists a protocol \(\varPi \) that computes f with (weak) computational \((t,h^*)\)-residual FaF full-security.

1.
The malicious security achieved is standard (i.e., not residual) security.

2.
If \(2t+h^*<m\), then \(\varPi \) admits \((t,h^*)\)-FaF full-security.

Item 2 of the above corollary concludes the positive direction of Theorem 4. The proof of Lemma 1 is given in Sect. 4.1.3. Before providing the proof, we first discuss some interesting consequences. One interesting family of functionalities to consider in the corollary is the family of no-input functionalities (e.g., coin-tossing). Since there are no inputs, it follows that such functionalities can be computed with (non-residual) FaF full-security.
Corollary 2
Let \(m,t,h^*\in {\mathbb {N}}\) be such that \(t+h^*\le m\) and \(t<m/2\), and let f be an \(m\)-party no-input functionality. Then, assuming OT exists, there exists a protocol \(\varPi \) that computes f with (weak) computational \((t,h^*)\)-FaF full-security.

As a result, in the computational setting, we obtain a separation between (weak) FaF-security and mixed-security. Recall that a mixed adversary is one that controls one subset of the parties maliciously and another subset semi-honestly. Consider the 3-party functionality \(f(\bot ,\bot ,\bot )=\left( b,\bot ,b \right) \), where \(b\leftarrow \{0,1\}\). As we proved in Corollary 2, this functionality can be computed with computational (1, 1)-FaF full-security. However, we claim that f cannot be computed with computational (1, 1)-mixed security.
Theorem 6
No protocol computes f with (1, 1)-mixed full-security.

The proof follows from a simple observation about a result of Ishai et al. [33]. They showed that for any protocol computing the functionality \(g(a,\bot ,c)=(a\oplus c,\bot ,a\oplus c)\), where a and c are chosen uniformly at random, there exists a mixed adversary successfully attacking the protocol. Consequently, the same attack works on any protocol computing f. As a result, we conclude that for no-input functionalities, the definition of security against mixed adversaries is strictly stronger than FaF-security.

Even for various functionalities with inputs, Lemma 1 implies FaF full-security for interesting choices of parameters. For example, consider the 3-party XOR functionality. It can be computed with (1, 1)-FaF full-security, since the input of the honest party can be computed by the semi-honest party's simulator.
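For the 3-party XOR example, the semi-honest simulator's job is essentially a one-liner, as the sketch below shows (the function names are illustrative): the honest party's input is already determined by the output together with the inputs held in the ideal world.

```python
# A sketch for the 3-party XOR functionality (illustrative names): the
# semi-honest simulator can recompute the honest party's input from the
# ideal-world information alone, so nothing P_2 sees in the real protocol
# constitutes extra leakage.

def simulate_honest_input(output, x_corrupted, x_semi_honest):
    """x3 = (x1 ^ x2 ^ x3) ^ x1 ^ x2, derivable in the ideal world."""
    return output ^ x_corrupted ^ x_semi_honest

x1, x2, x3 = 1, 0, 1
assert simulate_honest_input(x1 ^ x2 ^ x3, x1, x2) == x3
```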
4.1.3 Proof of Lemma 1
We next provide the proof of Lemma 1. Recall that for an \(m\)-party functionality f and for \(m'\in \left\{ m-t,\ldots ,m\right\} \), we defined the \(m'\)-party functionality \(f'_{m'}( \mathbf{x })\) in the security-with-identifiable-abort model as follows. Let \( \mathbf{y }=\left( y_1,\ldots ,y_m \right) \) be the output of \(f( \mathbf{x })\). Share each \(y_i\) in an \((m-t)\)-out-of-\(m'\) secret sharing scheme in which party \({\text {P}}_i\) is required for reconstruction. The output of party \({\text {P}}_j\) is its respective share of each \(y_i\), i.e., \({\text {P}}_j\) receives \(\left( y_i[j] \right) _{i=1}^m\).
Proof (of Lemma 1). The protocol \(\varPi \) in the real world is described as follows:
Protocol 7
Input: Party \({\text {P}}_i\) holds an input \(x_i\in \mathcal {X}_i\).
Common input: Security parameter \(1^n\).

1.
The parties call the functionality \(f'_{m'}\), where \(m'\) is the number of active parties, and the inputs of the inactive parties are set to a default value.

2.
If the computation followed through, then the parties broadcast their shares, reconstruct the output, and halt.

3.
Otherwise, the parties hold the identity of a corrupted party. They then go back to Step 1 without said party (updating \(m'\) in the process and setting its input to a default value).
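The control flow of Protocol 7 can be sketched as follows; the callables `call_f_prime` and `reconstruct` are hypothetical stand-ins for the hybrid-model call to \(f'_{m'}\) and for share reconstruction, not part of the paper's formal model:

```python
# A sketch of Protocol 7's control flow. The callables `call_f_prime` and
# `reconstruct` are hypothetical stand-ins for the hybrid-model call to
# f'_{m'} and for share reconstruction.

def protocol7(call_f_prime, reconstruct, parties, inputs, default):
    """Repeat f'_{m'} with identifiable abort, eliminating identified
    cheaters, until a call goes through; then reconstruct from the shares."""
    active = list(parties)
    x = dict(inputs)
    while True:
        result = call_f_prime(active, x)        # Step 1
        if result.ok:
            return reconstruct(result.shares)   # Step 2: broadcast + reconstruct
        active.remove(result.cheater)           # Step 3: drop identified cheater
        x[result.cheater] = default
```

Since each abort removes one corrupted party, the loop runs at most t + 1 times.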
Intuitively, the protocol works since there is an uncorrupted majority, so the parties can always reconstruct the output in case the computation in Step 1 followed through. Moreover, the only information the parties receive in case of an abort during Step 1 is an output of f that is consistent with their inputs. In particular, the adversary cannot leak additional information to any subset of the honest parties. We next present the formal argument.

Fix an adversary \(\mathcal {A}\) corrupting a set of parties \(\mathcal {I}\subset \mathcal {P}\) of size at most t, and let \(\mathcal {H}\subset \mathcal {P}\setminus \mathcal {I}\) be a subset of the uncorrupted parties of size at most \(h^*\). We first construct the simulator \(\mathsf {S}_{\mathcal {A}}\) for the adversary. To prove Item 1 of the “moreover” part, we construct the simulator \(\mathsf {S}_{\mathcal {A}}\) assuming that it receives a single output from the trusted party. This is indeed a stronger result, since a simulator given the residual function can always simulate a simulator that received a single output. With an auxiliary input \(z_{\mathcal {A}}\), the simulator \(\mathsf {S}_{\mathcal {A}}\) does the following:

1.
Let \(m'\) be the number of active parties. Share some garbage value m times independently as follows. Denote by \(y'_j=\left( \hat{y}_i[j] \right) _{i=1}^m\) the shares held by \({\text {P}}_j\), where \(\hat{y}_i\) is a garbage value, shared in an \((m-t)\)-out-of-\(m'\) Shamir secret sharing scheme with respect to party \({\text {P}}_i\).

2.
Send \( \mathbf{y }'_\mathcal {I}\) to \(\mathcal {A}\) to receive the message it sends to \(f'_{m'}\).

3.
If \(\mathcal {A}\) replied with \(( {\texttt {abort}},{\text {P}}_i)\), then go back to Step 1 with \({\text {P}}_i\) labeled inactive.

4.
Otherwise, \(\mathcal {A}\) sent some vector of inputs \(\hat{ \mathbf{x }}_\mathcal {I}\). Pass \(\hat{ \mathbf{x }}_\mathcal {I}\) to the trusted party to receive an output \( \mathbf{y }_\mathcal {I}\). Complete the t shares held by \(\mathcal {A}\) to a sharing of the real output \( \mathbf{y }_\mathcal {I}\) (recall that \(t<m/2\) so this is possible by the properties of the secret sharing scheme).

5.
Output all of the \( \mathbf{y }'_\mathcal {I}\)’s generated and the completed shares, and halt.
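Step 4 of \(\mathsf {S}_{\mathcal {A}}\) relies on the fact that t shares of a garbage value can be completed to a valid sharing of the real output whenever \(t\) is below the threshold. A sketch of this completion (names and the toy field are assumptions for illustration): fix the adversary's shares and the secret at \(x=0\), pad with random points up to the threshold, and interpolate the remaining shares.

```python
import secrets

P = 2**61 - 1  # toy prime field

def interp_at(points, x):
    """Lagrange-interpolate the unique degree-(len(points)-1) polynomial
    through `points` (a dict x -> y) and evaluate it at `x`."""
    total = 0
    for i, y_i in points.items():
        num, den = 1, 1
        for j in points:
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + y_i * num * pow(den, -1, P)) % P
    return total

def complete_shares(fixed, secret, threshold, party_ids):
    """Extend the adversary's `fixed` shares (dict id -> share) to a full
    threshold-out-of-n Shamir sharing of `secret`.  This is possible
    whenever len(fixed) < threshold, exactly as in the simulator, where
    t < m - t."""
    assert len(fixed) < threshold
    points = dict(fixed)
    points[0] = secret                      # the secret sits at x = 0
    free = [i for i in party_ids if i not in points]
    while len(points) < threshold:          # pad to degree threshold - 1
        points[free.pop()] = secrets.randbelow(P)
    return {i: points[i] if i in points else interp_at(points, i)
            for i in party_ids}
```

The completed sharing agrees with the adversary's shares and reconstructs to the real output from any threshold-sized subset.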
We next describe the simulator \(\mathsf {S}_{\mathcal {A},\mathcal {H}}\) for the adversary \(\mathcal {A}_{\mathcal {H}}\) controlling the parties in \(\mathcal {H}\) interacting with \(\mathcal {A}\). The idea is to have the simulator use the shares generated by \(\mathsf {S}_{\mathcal {A}}\) to ensure consistency between their views. Additionally, for the last iteration, where the shares should be reconstructed to the output, we modify the shares not held by \(\mathcal {A}\) so that the output will also be consistent with the generated view. In addition, for every abort that occurred, the simulator will use the residual function to hand over to \(\mathcal {H}\) the output of that iteration. Formally, given an auxiliary input \(z_{\mathcal {H}}\), \(\mathsf {S}_{\mathcal {A},\mathcal {H}}\) operates as follows.

1.
Receive the residual function \(f_{\mathcal {I}, \mathbf{x }}\) from the trusted party, and receive \(( \mathbf{x }_{\mathcal {I}},r,z_{\mathcal {A}})\) – \(\mathsf {S}_{\mathcal {A}}\)’s input, randomness, and the auxiliary input, respectively.

2.
Apply \(\mathsf {S}_{\mathcal {A}}\) to receive its view, which consists of \( \mathbf{y }'_\mathcal {I}\) – shares of some values, held by the adversary.

3.
Query \(\mathcal {A}\) on each \( \mathbf{y }'_\mathcal {I}\) to receive the messages it sends to \(\mathcal {H}\), and in case of an abort, get the identity of a corrupted party.

4.
Complete each \( \mathbf{y }'_\mathcal {I}\) to shares of an output \(\hat{ \mathbf{y }}\) computed using the residual function \(f_{\mathcal {I}, \mathbf{x }}\) (fixing the inputs of the inactive parties to a default value, and the inputs of the active corrupted parties according to the choice of \(\mathcal {A}\)), so that the last \( \mathbf{y }'_\mathcal {I}\) is completed to shares of the real output. Note that by the properties of the secret sharing scheme, this can be done efficiently.

5.
Output all of the completed shares and the messages sent by \(\mathcal {A}\), and halt.
In every iteration, the view generated by \(\mathsf {S}_{\mathcal {A},\mathcal {H}}\) is consistent with the view generated by the malicious simulator \(\mathsf {S}_{\mathcal {A}}\). Moreover, they send exactly the same messages to \(\mathcal {A}\); hence they will receive the same identities of the aborting parties and the same inputs given to the functionalities \(f'_{m'}\). Since these are generated with the same distribution as in the real world, we conclude that the joint view of the two adversaries, together with the output of the honest parties, is identically distributed in both worlds.
Finally, in order to see why Item 2 of the “moreover” part is true, observe that if \(2t+h^*<m\) then \(t+h^*<m-t\), implying that the number of shares that can be held by \(\mathcal {A}\) and \(\mathcal {H}\) is smaller than the secret sharing threshold. Thus, \(\mathsf {S}_{\mathcal {A},\mathcal {H}}\) can use random shares in each iteration (except the last), without using the output.
4.2 Impossibility Result
In this section, we prove the negative direction of Theorem 4. Specifically, we prove the following lemma.
Lemma 2
Let \(m,t,h^*\in {\mathbb {N}}\) be such that \(2t+h^*=m\). Then there exists an m-party functionality that no protocol computes with (weak) computational \((t,h^*)\)-FaF full-security. Moreover, the claim holds even assuming the availability of simultaneous broadcast.
For the proof, we first show that it holds for the 3-party case where \(t=h^*=1\). Then, using a player-partitioning argument, we generalize the result to more than three parties. The following lemma states the result for the 3-party case. Throughout the remainder of the section, we denote the parties by \({\text {A}}\), \({\text {B}}\), and \({\text {C}}\).
Lemma 3
Assume that one-way permutations exist. Then there exists a 3-party functionality that no protocol computes with (weak) computational (1, 1)-FaF full-security. Moreover, the following hold:

1.
The malicious adversary we construct corrupts either \({\text {A}}\) or \({\text {C}}\), while the remaining third party \({\text {B}}\) will be in \(\mathcal {H}\).

2.
The claim holds even assuming the availability of simultaneous broadcast.
The proof of Lemma 2 is deferred to the full version. We next give an overview of the proof of Lemma 3. We assume that each round is composed of 3 broadcast messages, the first sent by \({\text {A}}\), the second by \({\text {B}}\), and the third by \({\text {C}}\) (this is without loss of generality, as we allow the adversary to be rushing). Intuitively, the proof proceeds as follows. By an averaging argument there must exist a round where two parties, say \({\text {A}}\) and \({\text {B}}\), together can reconstruct the output with significantly higher probability than \({\text {C}}\) and \({\text {B}}\). We then have \({\text {A}}\) act honestly (using the original input it held) and abort at that round. As a result, with high probability the output of \({\text {C}}\) will change. Finally, \({\text {A}}\) will send its entire view to \({\text {B}}\), allowing it to recover the correct output with significantly higher probability than \({\text {C}}\). We show that for an appropriate functionality, the advantage of the pair \(({\text {A}},{\text {B}})\) over \(({\text {C}},{\text {B}})\) cannot be simulated.
Proof (of Lemma 3). Let \(f=\left\{ f_n:\{0,1\}^n\mapsto \{0,1\}^n\right\} _{n\in {\mathbb {N}}}\) be a one-way permutation. Define the symmetric 3-party functionality \(\mathsf {Swap}=\) \(\{\mathsf {Swap}_n:\{0,1\}^n\times \{0,1\}^{2n}\times \{0,1\}^n\mapsto \{0,1\}^{2n}\}_{n\in {\mathbb {N}}}\) as follows. Parties \({\text {A}}\) and \({\text {C}}\) each hold a string \(a,c\in \{0,1\}^n\), respectively. Party \({\text {B}}\) holds two strings \(y_{{\text {A}}},y_{{\text {C}}}\in \{0,1\}^n\). The output is then defined to be
$$\mathsf {Swap}_n\left( a,(y_{{\text {A}}},y_{{\text {C}}}),c \right) = {\left\{ \begin{array}{ll} (a,c) &{} \text {if } f_n(a)=y_{{\text {A}}} \text { and } f_n(c)=y_{{\text {C}}},\\ \bot &{} \text {otherwise.} \end{array}\right. }$$
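The input–output behavior of \(\mathsf {Swap}\), as recovered from the proof (everyone learns (a, c) exactly when \({\text {B}}\)'s strings are the images of the other parties' inputs, and \(\bot \) otherwise), can be sketched as follows. The stand-in permutation below is of course not one-way; it only models the interface of \(f_n\):

```python
N = 16  # toy "security parameter"

def toy_perm(x: int) -> int:
    # Multiplication by an odd constant is a permutation of Z_{2^N}.
    # It is NOT one-way; it merely plays the syntactic role of f_n.
    return (x * 0x9E37) % (1 << N)

def swap(a: int, y_A: int, y_C: int, c: int):
    """Everyone learns (a, c) iff B's strings are the images of A's and
    C's inputs under the permutation; otherwise the output is ⊥ (None)."""
    if toy_perm(a) == y_A and toy_perm(c) == y_C:
        return (a, c)
    return None
```

On the input distribution used in the proof (\(y_{{\text {A}}}=f_n(a)\), \(y_{{\text {C}}}=f_n(c)\)), the output is always (a, c); any change to \({\text {A}}\)'s input flips it to \(\bot \).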
Assume for the sake of contradiction that there exists a 3-party protocol \(\varPi \) that computes \(\mathsf {Swap}\) with computational (1, 1)-FaF full-security. We fix a security parameter n, let r denote the number of rounds in \(\varPi \), and consider an evaluation of \(\mathsf {Swap}\) with the output being (a, c). Formally, we consider the following distribution over the inputs.

a, c are each selected from \(\{0,1\}^n\) uniformly at random and independently.

\(y_{{\text {A}}}=f_n(a)\) and \(y_{{\text {C}}}=f_n(c)\).
For \(i\in \left\{ 0,\ldots ,r\right\} \) let \(a_i\) be the final output of \({\text {A}}\) assuming that \({\text {C}}\) aborted after sending i messages. Similarly, for \(i\in \left\{ 0,\ldots ,r\right\} \) we define \(c_i\) to be the final output of \({\text {C}}\) assuming that \({\text {A}}\) aborted after sending i messages. Observe that \(a_r\) and \(c_r\) are the outputs of \({\text {A}}\) and \({\text {C}}\) respectively. We first claim that there exists a round where either \({\text {A}}\) and \({\text {B}}\) gain an advantage in computing the correct output, or \({\text {C}}\) and \({\text {B}}\) gain this advantage.
Claim 8
Either there exists \(i\in \left\{ 0,\ldots ,r\right\} \) such that
$$\Pr \left[ a_i=(a,c)\right] -\Pr \left[ c_i=(a,c)\right] \ge \frac{1-{\text {neg}}(n)}{2r+1},$$
or there exists \(i\in [r]\) such that
$$\Pr \left[ c_i=(a,c)\right] -\Pr \left[ a_{i-1}=(a,c)\right] \ge \frac{1-{\text {neg}}(n)}{2r+1}.$$
The probabilities above are taken over the choice of inputs and of the random coins of the parties.
The proof follows from a simple averaging argument and is given below. We first use the claim to exhibit an attack.
Assume without loss of generality that there exists an \(i\in \left\{ 0,\ldots ,r\right\} \) such that the former inequality holds (the other case is handled analogously). Define a malicious adversary \(\mathcal {A}\) as follows. For the security parameter n, it receives as auxiliary input the round i. Now, \(\mathcal {A}\) corrupts \({\text {A}}\) and has it act honestly (using the party’s original input a) up to and including round i. After receiving the ith message, the adversary instructs \({\text {A}}\) to abort. Finally, the adversary sends its entire view to \({\text {B}}\). We next show that no pair of simulators \(\mathsf {S}_{\mathcal {A}}\) and \(\mathsf {S}_{\mathcal {A},{\text {B}}}\) can produce views for \(\mathcal {A}\) and \({\text {B}}\) so that Eqs. (1) and (2) would hold. For that, we assume towards a contradiction that such simulators do exist. Let \(a^*\in \{0,1\}^n\) be the input that \(\mathsf {S}_{\mathcal {A}}\) sends to the trusted party. Additionally, denote \(q=\Pr \left[ c_{i}=(a,c)\right] \).
We next separate into two cases. For the first case, let us assume that \(\Pr \left[ a^*=a\right] \ge q+1/p(n)\) for some polynomial \(p(\cdot )\) and infinitely many n’s. Let \(\tilde{c}\) denote the output of \({\text {C}}\) in the ideal world. Since \(f_n\) is a permutation we have that
$$\Pr \left[ \tilde{c}=(a,c)\right] =\Pr \left[ a^*=a\right] \ge q+1/p(n).$$
Thus, by comparing the output of \({\text {C}}\) to (a, c) it is possible to distinguish the real world from the ideal world with advantage at least 1/p(n).
For the second case, we assume that \(\Pr \left[ a^*=a\right] \le q+{\text {neg}}(n)\). Here we show how to distinguish the view of \({\text {B}}\) in the real world from its ideal-world counterpart. Recall that in the real world \(\mathcal {A}\) sent its view to \({\text {B}}\). Let \({\text {M}}\) be the algorithm, specified by the protocol, that \({\text {A}}\) and \({\text {B}}\) use to compute their output assuming \({\text {C}}\) has aborted. Namely, \({\text {M}}\) outputs \(a_i\) in the real world. By Claim 8 it holds that \(\Pr \left[ a_i=(a,c)\right] \ge q+\frac{1-{\text {neg}}(n)}{2r+1}\). We next consider the ideal world. Let V be the view generated by \(\mathsf {S}_{\mathcal {A},{\text {B}}}\). We claim that
$$\Pr \left[ {\text {M}}(V)=(a,c)\wedge a^*\ne a\right] \le {\text {neg}}(n).$$
Indeed, since \(f_n\) is a permutation and \({\text {B}}\) does not change the input it sends to \(\mathsf {T}\), whenever \(a^*\ne a\) the output computed by \(\mathsf {T}\) will be \(\bot \). Moreover, as \(f_n\) is one-way it follows that if \({\text {M}}(V)\) did output (a, c), then it can be used to break the security of \(f_n\). This can be done by sampling \(a\leftarrow \{0,1\}^n\), computing \(f_n(a)\), and finally, computing a view V using the simulators and applying \({\text {M}}\) to it (if the input \(a^*\) computed by \(\mathsf {S}_{\mathcal {A}}\) equals a, then abort). We conclude that
$$\Pr \left[ {\text {M}}(V)=(a,c)\right] \le \Pr \left[ a^*=a\right] +{\text {neg}}(n)\le q+{\text {neg}}(n).$$
Therefore, by applying \({\text {M}}\) to the view it is possible to distinguish with advantage at least \(\frac{1-{\text {neg}}(n)}{2r+1}-{\text {neg}}(n)\). To conclude the proof we next prove Claim 8.
Proof (of Claim 8). The proof follows by an averaging argument. By correctness and the fact that \(f_n\) is one-way, it follows that
$$1-{\text {neg}}(n)\le \Pr \left[ a_r=(a,c)\right] -\Pr \left[ c_0=(a,c)\right] =\sum _{i=0}^{r}\left( \Pr \left[ a_i=(a,c)\right] -\Pr \left[ c_i=(a,c)\right] \right) +\sum _{i=1}^{r}\left( \Pr \left[ c_i=(a,c)\right] -\Pr \left[ a_{i-1}=(a,c)\right] \right) .$$
Since there are \(2r+1\) summands, there must exist an i for which one of the differences is at least \(\frac{1-{\text {neg}}(n)}{2r+1}\).
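The pigeonhole step can be checked numerically: the \(2r+1\) telescoping differences sum to \(\Pr [a_r=(a,c)]-\Pr [c_0=(a,c)]\), so the largest of them is at least that gap divided by \(2r+1\). A minimal sketch (the probability vectors are made-up illustrative numbers):

```python
def best_round(a_probs, c_probs):
    """Pigeonhole step of Claim 8: given the success probabilities
    a_0..a_r and c_0..c_r, the 2r+1 telescoping differences sum to
    a_r - c_0, hence one of them is at least (a_r - c_0) / (2r + 1)."""
    r = len(a_probs) - 1
    diffs = [a_probs[i] - c_probs[i] for i in range(r + 1)]           # (A,B) ahead at round i
    diffs += [c_probs[i] - a_probs[i - 1] for i in range(1, r + 1)]   # (C,B) ahead at round i
    assert abs(sum(diffs) - (a_probs[-1] - c_probs[0])) < 1e-9        # the sum telescopes
    return max(diffs)
```

Correctness forces \(a_r\) close to 1 and one-wayness forces \(c_0\) close to 0, so the returned maximum is bounded below by roughly \(1/(2r+1)\).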
Finally, in order to see why Item 2 is true, observe that the attack is not based on the view of \(\mathcal {A}\), hence the same attack works assuming simultaneous broadcast.
Remark 5
Intuitively, we showed that in the real world the parties \({\text {A}}\) and \({\text {B}}\) hold more information about the output than \({\text {B}}\) and \({\text {C}}\) do. To make this statement formal, observe that the proof in fact shows that \(\mathsf {Swap}\) cannot be computed with fairness. Roughly, for fairness to hold we require that either all parties receive an output, or none of them do. To see this, observe that for the functionality at hand, aborting in the ideal world is the same as sending a different input a. Therefore the attack cannot be simulated. We present the formal definition of fairness in the full version.
5 Comparison Between FaF-Security and Other Definitions
In this section, we compare the notion of FaF-security to other existing notions. In Sect. 5.1, we investigate how FaF-security relates to classical full-security. In Sect. 5.2, we review the differences between our notion and the notion of mixed adversaries. In the mixed-adversary scenario, a single adversary controls a set \(\mathcal {I}\) of parties; however, within \(\mathcal {I}\) different limitations are imposed on the behavior (deviation) of different parties. In Sect. 5.3, we show that strong FaF-security is a strictly stronger notion than (weak) FaF-security.
5.1 The Relation Between FaF-Security and Standard Full-Security
We start by comparing FaF-security to the standard definition. It is easy to see that standard t-security does not, in general, imply \((t,h^*)\)-FaF full-security, even for functionalities with no inputs. Consider the following example. Let f be a 3-party no-input functionality defined by \((\bot ,\bot ,\bot )\mapsto (\bot ,\bot ,r)\), where \(r\leftarrow \{0,1\}^n\), and let \(t=h^*=1\). Consider the following protocol: \({\text {P}}_1\) and \({\text {P}}_2\) sample \(r_1,r_2\leftarrow \{0,1\}^n\), respectively, and send the random strings to \({\text {P}}_3\). The output of \({\text {P}}_3\) is then \(r_1\oplus r_2\).
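The example protocol and the leak it enables can be sketched in a few lines (the function name and the string length are illustrative assumptions): a malicious \({\text {P}}_1\) that additionally forwards \(r_1\) lets \({\text {P}}_2\), which already knows \(r_2\), reconstruct \({\text {P}}_3\)'s private output.

```python
import secrets

def run_leaky_xor_protocol(n=16):
    """The protocol for (⊥,⊥,⊥) -> (⊥,⊥,r): P1 and P2 send random strings
    to P3, whose output is their XOR.  A malicious P1 also sends r1 to P2
    (a non-prescribed message), so P2 can compute P3's output."""
    r1 = secrets.randbits(n)   # P1's message to P3 (also leaked to P2)
    r2 = secrets.randbits(n)   # P2's message to P3
    p3_output = r1 ^ r2        # P3's private output
    p2_learns = r2 ^ r1        # P2 combines its own r2 with the leaked r1
    return p3_output, p2_learns
```

This illustrates why the protocol is fully secure against one corruption (each of \(r_1,r_2\) alone is uniform) yet fails FaF-security.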
It is easy to see that the protocol computes f with perfect full-security tolerating a single corruption. However, a malicious \({\text {P}}_1\) can send \(r_1\) to \({\text {P}}_2\) as well, thereby allowing \({\text {P}}_2\) to learn \({\text {P}}_3\)’s output. Indeed, this protocol is insecure according to Definition 2. Obviously, \((t,h^*)\)-FaF-security readily implies its classical t-security counterpart. Conversely, one might expect that classical \((t+h^*)\)-security must imply \((t,h^*)\)-FaF-security. We next show that this is not the case in general. We present an example of a protocol that admits traditional malicious security against t corruptions, yet does not admit \((t-1,1)\)-FaF-security. Intuitively, this somewhat surprising state of affairs is made possible by the fact that in \((t-1,1)\)-FaF-security both the attacker and the two simulators are weaker.
The following example is a simple extension of a known example (cf., [10]), showing that for standard security, there exists a maliciously secure protocol (for computing the two-party, one-sided OR function), but no semi-honest secure one.
Example 1
Let \({\text {A}}\), \({\text {B}}\), and \({\text {C}}\) be three parties with inputs \(a,b,c\in \{0,1\}\), respectively. Consider the 3-party functionality \(3\mathsf {OR}:\{0,1\}^3\mapsto \{0,1\}^3\) defined as \(3\mathsf {OR}\left( a,b,c \right) =\left( \bot ,\bot ,(a\oplus b)\vee c \right) \), with the following protocol for computing it. In the first round, parties \({\text {A}}\) and \({\text {B}}\) secret-share their respective inputs with each other. That is, \({\text {A}}\) selects \(a_1\leftarrow \{0,1\}\) and sends \(a_2= a\oplus a_1\) to \({\text {B}}\), and \({\text {B}}\) selects \(b_2\leftarrow \{0,1\}\) and sends \(b_1= b\oplus b_2\) to \({\text {A}}\). In the second round, \({\text {A}}\) sends \(a_1\oplus b_1\) to \({\text {C}}\) and \({\text {B}}\) sends \(a_2\oplus b_2\) to \({\text {C}}\). Party \({\text {C}}\) outputs \((a_1\oplus b_1\oplus a_2\oplus b_2)\vee c\).
We first claim that the protocol computes \(3\mathsf {OR}\) with perfect full-security tolerating coalitions of size at most 2. Indeed, an adversary that maliciously corrupts \({\text {A}}\), \({\text {B}}\), or both, learns nothing and can be simulated by sending the inputs defined by the shared values. An adversary that maliciously corrupts \({\text {C}}\) can be simulated by sending \(c=0\) to the trusted party, and as a result, learning the same information as in the protocol. For example, by corrupting \({\text {A}}\) and \({\text {C}}\) and sending a and 0 (resp.) to the trusted party, the adversary learns b.
We argue that although the protocol is 2-secure under the standard definition, it does not compute \(3\mathsf {OR}\) with (1, 1)-FaF full-security. Specifically, a semi-honest \({\text {C}}\) cannot be simulated. Take, for example, an adversary \(\mathcal {A}\) that corrupts \({\text {A}}\) maliciously, and let \(\mathcal {H}= \left\{ {\text {C}}\right\} \). In the real world, \(\mathcal {A}\) can reveal b to \({\text {C}}\). However, in the ideal world, this cannot be simulated (when \(c=1\)).
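The attack on Example 1 can be made concrete: \({\text {C}}\) holds the round-2 message \(a_2\oplus b_2\), and from \({\text {A}}\)'s forwarded view \((a,a_1,b_1)\) it recomputes \(a_2=a\oplus a_1\), hence \(b_2\) and then \(b=b_1\oplus b_2\). A sketch (function and flag names are illustrative):

```python
import secrets

def run_3or(a: int, b: int, c: int, a_leaks_view: bool = False):
    """The protocol of Example 1 for 3OR(a,b,c) = (a xor b) or c, together
    with the FaF attack: a malicious A runs the protocol honestly but then
    forwards its view (a, a1, b1) to the semi-honest C."""
    a1 = secrets.randbits(1); a2 = a ^ a1      # A shares a with B
    b2 = secrets.randbits(1); b1 = b ^ b2      # B shares b with A
    m_A, m_B = a1 ^ b1, a2 ^ b2                # round-2 messages to C
    output_C = (m_A ^ m_B) | c                 # C's prescribed output
    leaked_b = None
    if a_leaks_view:
        a2_from_view = a ^ a1                  # C recomputes a2 from A's view
        leaked_b = b1 ^ (m_B ^ a2_from_view)   # = b1 ^ b2 = b
    return output_C, leaked_b
```

When \(c=1\), \({\text {C}}\)'s prescribed output is 1 regardless of b, yet the leak hands it b exactly — which is what no ideal-world simulator can reproduce.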
Remark 6
Example 1 shows that “moving” a party from being malicious to being semi-honest (i.e., taking a party from \(\mathcal {I}\) and moving it to \(\mathcal {H}\)) could potentially break the security of the protocol. Similarly to [10], it is arguably natural to consider a definition that requires a protocol to be \((t,h^*)\)-FaF-secure only if it is also \((t-1,h^*+1)\)-FaF-secure. Our definition does not impose this extra requirement; however, all of our protocols satisfy it.
In contrast to the above example, we claim that adaptive \((t+h^*)\)-security does imply strong \((t,h^*)\)-FaF full-security. Intuitively, this follows from the fact that an adaptive adversary is allowed to corrupt some of the parties after the execution of the protocol has terminated. We formulate the theorem for the full-security setting; however, we stress that it also holds in the security with (identifiable) abort setting.
Theorem 9
Let \( {\texttt {type}}\in \left\{ \text {computational, statistical, perfect}\right\} \) and let \(\varPi \) be an m-party protocol computing some m-party functionality f with \( {\texttt {type}}\) adaptive \((t+h^*)\)-security. Then \(\varPi \) computes f with \( {\texttt {type}}\) \((t,h^*)\)-FaF full-security.
A proof sketch of Theorem 9 is given in the full version.
By applying recent results on adaptive security, we get that there exist constant-round protocols that are FaF secure-with-abort [12, 15, 22].
5.2 The Relation Between FaF-Security and Mixed-Security
The notion of “mixed adversaries” [21, 40] considers a single entity that corrupts a subset \(\mathcal {I}\) maliciously, and another subset \(\mathcal {H}\) semi-honestly.^{Footnote 5} A simulator for a mixed adversary is a single simulator controlling the parties in \(\mathcal {I}\cup \mathcal {H}\). This simulator is restricted so that it is only allowed to change the inputs of the parties in \(\mathcal {I}\) (i.e., the simulator is not allowed to change the inputs of the parties in \(\mathcal {H}\)). We say that a protocol has computational \((t,h^*)\)-mixed full-security, if Eq. (2) is written with respect to a mixed adversary and its simulator.
In comparison, FaF-security can be viewed as if there are two distinct adversaries – one malicious and one semi-honest – making it natural to compare the two definitions. One might expect that \((t,h^*)\)-mixed full-security would imply \((t,h^*)\)-FaF full-security. However, similarly to the case of standard security, we show that this is not generally the case in the computational setting (note that the protocol from Example 1 is not (1, 1)-mixed secure).
Example 2
Consider the 5-party functionality \(f:\left( \{0,1\}^n\right) ^3\times \emptyset ^2\mapsto \left( \{0,1\}^n\right) ^2\times \emptyset ^3\) whose output on input \((x_1,x_2,x_3,\bot ,\bot )\) is defined as follows. If \(x_1=x_2\), then \({\text {P}}_1\) and \({\text {P}}_2\) will each receive a share of a 2-out-of-2 secret sharing of \(x_3\), i.e., \({\text {P}}_1\) will receive \(x_3[1]\) and \({\text {P}}_2\) will receive \(x_3[2]\). If \(x_1\ne x_2\) then \({\text {P}}_1\) and \({\text {P}}_2\) will each receive a string of length n chosen uniformly at random and independently. In both cases, the other three parties will receive no output. We next show a protocol that is secure against any adversary corrupting at most 2 parties (including mixed adversaries), yet does not admit (1, 1)-FaF full-security. In the following we let \(({\text {Gen}},{\text {Enc}},{\text {Dec}})\) be a non-malleable and semantically secure public-key encryption scheme [20].
Protocol 10

1.
The parties will compute a functionality whose output to \({\text {P}}_i\), for \(i\in \left\{ 1,2,3\right\} \), is \(\mathsf {pk}\), and whose output to \({\text {P}}_i\), for \(i\in \left\{ 4,5\right\} \), is \((\mathsf {pk},\mathsf {sk}[i])\), where the \(\mathsf {sk}[i]\)’s are shares of \(\mathsf {sk}\) in a 2-out-of-2 secret sharing, and where \((\mathsf {pk},\mathsf {sk})\leftarrow {\text {Gen}}(1^n)\). This can be done using, say, the GMW protocol [26].

2.
\({\text {P}}_2\) sends \(c_2 \leftarrow {\text {Enc}}_{\mathsf {pk}}(x_2,2)\) to \({\text {P}}_1\).

3.
The parties compute the following 5party functionality g. The input of \({\text {P}}_1\) is \(c_1\leftarrow {\text {Enc}}_{\mathsf {pk}}\left( x_1,1 \right) \), the input of \({\text {P}}_2\) is \(c_2\), and the input of \({\text {P}}_3\) is \(x_3\). The input of \({\text {P}}_i\), for \(i\in \left\{ 4,5\right\} \), is the pair \((\mathsf {pk},\mathsf {sk}[i])\).
The output is defined as follows. \({\text {P}}_3\), \({\text {P}}_4\), and \({\text {P}}_5\) receive no output.

If \({\text {Dec}}_{\mathsf {sk}}\left( c_i \right) =(x_i,i)\) for every \(i\in \left\{ 1,2\right\} \) and \(x_1=x_2\), then \({\text {P}}_1\) will receive \(x_3[1]\) and \({\text {P}}_2\) will receive \(x_3[2]\).

Else, if \({\text {Dec}}_{\mathsf {sk}}\left( c_1 \right) =(x_1,2)\), \({\text {Dec}}_{\mathsf {sk}}\left( c_2 \right) =(x_2,2)\), and \(x_1=x_2\), then \({\text {P}}_1\) will receive a random string \(r\in \{0,1\}^n\) and \({\text {P}}_2\) will receive \((x_3[1],x_3[2])\).

Otherwise, both \({\text {P}}_1\) and \({\text {P}}_2\) will receive random strings \(r_1,r_2\in \{0,1\}^n\) respectively, chosen independently and uniformly.


4.
\({\text {P}}_1\) outputs what it received from g. If \({\text {P}}_2\) received a single random string \(r_2\) from g, then it outputs \(r_2\); if \({\text {P}}_2\) received two strings from g, then it outputs the second one.
Claim 9
Protocol 10 computes f with computational 2-security and with computational (1, 1)-mixed security, yet it does not compute f with computational (1, 1)-FaF full-security.
The proof is deferred to the full version due to space considerations.
5.3 Comparison Between (Weak) FaF-Security and Strong FaF-Security
In this section, we separate the notion of (weak) FaF-security from strong FaF-security in the computational setting. Specifically, we show a protocol that admits (weak) FaF-security, yet does not admit strong FaF-security. We assume the availability of a commitment scheme. Consider the 3-party functionality f mapping \((\bot ,b,\bot )\mapsto (\bot ,\bot ,b)\), where \(b\in \{0,1\}\), and let \(t=h^*=1\). Consider the following protocol: \({\text {P}}_2\) broadcasts a commitment to b, and then sends the decommitment only to \({\text {P}}_3\).
Claim 10
The above protocol computes f with (weak) computational (1, 1)-FaF full-security, yet does not provide strong computational (1, 1)-FaF full-security.
The proof is deferred to the full version due to space considerations. One consequence of the above claim is that protocols in which the parties commit to their inputs, e.g., the GMW protocol, will not satisfy strong FaF-security in general.
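The commit-then-open protocol from Sect. 5.3 can be sketched as follows. The hash-based commitment below is only a stand-in for the generic commitment scheme assumed in the text (the paper does not fix an instantiation), and the function names are illustrative; the point is that a semi-honest \({\text {P}}_1\)'s view contains a commitment that is binding to b, which is what the strong-FaF simulator must contend with.

```python
import hashlib
import secrets

def commit(bit: int):
    """A hash-based commitment standing in for the generic scheme assumed
    in the text (a sketch, not the paper's instantiation)."""
    r = secrets.token_bytes(16)
    return hashlib.sha256(bytes([bit]) + r).hexdigest(), (bit, r)

def open_commitment(com: str, opening) -> int:
    bit, r = opening
    assert hashlib.sha256(bytes([bit]) + r).hexdigest() == com
    return bit

def protocol(b: int) -> int:
    """P2 broadcasts a commitment to b (so P1 sees it as well) and sends
    the opening to P3 alone; P3 outputs b."""
    com, opening = commit(b)              # broadcast: part of P1's view too
    return open_commitment(com, opening)  # P3's output
```

\({\text {P}}_3\) always outputs the committed bit, while \({\text {P}}_1\) sees only the (hiding) commitment.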
Notes
 1.
This is indeed the origin of the term FaF-security (protecting one’s privacy from friends and foes alike).
 2.
It remains open whether preventing leakage using a different protocol is possible.
 3.
Naturally, we do not claim that the protocol must instruct honest parties to leak information. Rather, we prove that a malicious adversary can leak the private information of honest parties.
 4.
For this step to work, we need to assume that the adversary does not change its shares. We can force it to send the correct shares using standard techniques. One way to do so is to sign each output of each \(f'_{m'}\) using a MAC and give the other parties the key for verification. For the sake of clarity of presentation, however, we decide to skip this and assume that the corrupted parties are using correct shares.
 5.
References
Alon, B., Omri, E., Paskin-Cherniavsky, A.: MPC with friends and foes. Cryptology ePrint Archive, Report 2020/701. https://eprint.iacr.org/2020/701
Alwen, J., Shelat, A., Visconti, I.: Collusion-free protocols in the mediated model. In: Wagner, D. (ed.) CRYPTO 2008. LNCS, vol. 5157, pp. 497–514. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-85174-5_28
Alwen, J., Katz, J., Maurer, U., Zikas, V.: Collusion-preserving computation. In: Safavi-Naini, R., Canetti, R. (eds.) CRYPTO 2012. LNCS, vol. 7417, pp. 124–143. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32009-5_9
Asharov, G., Beimel, A., Makriyannis, N., Omri, E.: Complete characterization of fairness in secure two-party computation of Boolean functions. In: Dodis, Y., Nielsen, J.B. (eds.) TCC 2015. LNCS, vol. 9014, pp. 199–228. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46494-6_10
Beaver, D.: Foundations of secure interactive computing. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS, vol. 576, pp. 377–391. Springer, Heidelberg (1992). https://doi.org/10.1007/3-540-46766-1_31
Beaver, D.: Secure multiparty protocols and zeroknowledge proof systems tolerating a faulty minority. J. Cryptol. 4(2), 75–122 (1991). https://doi.org/10.1007/BF00196771
Beaver, D.: Minimal-latency secure function evaluation. In: Preneel, B. (ed.) EUROCRYPT 2000. LNCS, vol. 1807, pp. 335–350. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-45539-6_23
Beaver, D., Micali, S., Rogaway, P.: The round complexity of secure protocols. In: STOC 1990, pp. 503–513. ACM (1990)
Beerliová-Trubíniová, Z., Fitzi, M., Hirt, M., Maurer, U., Zikas, V.: MPC vs. SFE: perfect security in a unified corruption model. In: Canetti, R. (ed.) TCC 2008. LNCS, vol. 4948, pp. 231–250. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78524-8_14
Beimel, A., Malkin, T., Micali, S.: The all-or-nothing nature of two-party secure computation. In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 80–97. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48405-1_6
Ben-Or, M., Goldwasser, S., Wigderson, A.: Completeness theorems for non-cryptographic fault-tolerant distributed computation (extended abstract). In: Proceedings of the 29th Annual Symposium on Foundations of Computer Science (FOCS), pp. 1–10 (1988)
Benhamouda, F., Lin, H., Polychroniadou, A., Venkitasubramaniam, M.: Two-round adaptively secure multiparty computation from standard assumptions. In: Beimel, A., Dziembowski, S. (eds.) TCC 2018. LNCS, vol. 11239, pp. 175–205. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-03807-6_7
Canetti, R.: Security and composition of multiparty cryptographic protocols. J. Cryptol. 13(1), 143–202 (2000). https://doi.org/10.1007/s001459910006
Canetti, R., Vald, M.: Universally composable security with local adversaries. In: Visconti, I., De Prisco, R. (eds.) SCN 2012. LNCS, vol. 7485, pp. 281–301. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32928-9_16
Canetti, R., Poburinnaya, O., Venkitasubramaniam, M.: Equivocating Yao: constant-round adaptively secure multiparty computation in the plain model. In: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pp. 497–509. ACM (2017)
Cleve, R.: Limits on the security of coin flips when half the processors are faulty. In: Proceedings of the 18th Annual ACM Symposium on Theory of Computing (STOC), pp. 364–369 (1986)
Cramer, R., Damgård, I., Ishai, Y.: Share conversion, pseudorandom secret-sharing and applications to secure computation. In: Kilian, J. (ed.) TCC 2005. LNCS, vol. 3378, pp. 342–362. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-30576-7_19
Damgård, I., Ishai, Y.: Constant-round multiparty computation using a black-box pseudorandom generator. In: Shoup, V. (ed.) CRYPTO 2005. LNCS, vol. 3621, pp. 378–394. Springer, Heidelberg (2005). https://doi.org/10.1007/11535218_23
Daza, V., Makriyannis, N.: Designing fully secure protocols for secure two-party computation of constant-domain functions. In: Kalai, Y., Reyzin, L. (eds.) TCC 2017. LNCS, vol. 10677, pp. 581–611. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70500-2_20
Dolev, D., Dwork, C., Naor, M.: Non-malleable cryptography. SIAM Rev. 45(4), 727–784 (2003)
Fitzi, M., Hirt, M., Maurer, U.: General adversaries in unconditional multiparty computation. In: Lam, K.-Y., Okamoto, E., Xing, C. (eds.) ASIACRYPT 1999. LNCS, vol. 1716, pp. 232–246. Springer, Heidelberg (1999). https://doi.org/10.1007/978-3-540-48000-6_19
Garg, S., Sahai, A.: Adaptively secure multiparty computation with dishonest majority. In: Safavi-Naini, R., Canetti, R. (eds.) CRYPTO 2012. LNCS, vol. 7417, pp. 105–123. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32009-5_8
Gennaro, R., Ishai, Y., Kushilevitz, E., Rabin, T.: The round complexity of verifiable secret sharing and secure multicast. In: STOC 2001, pp. 580–589 (2001)
Gennaro, R., Ishai, Y., Kushilevitz, E., Rabin, T.: On 2-round secure multiparty computation. In: Yung, M. (ed.) CRYPTO 2002. LNCS, vol. 2442, pp. 178–193. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45708-9_12
Goldreich, O.: Foundations of Cryptography  Volume 2: Basic Applications. Cambridge University Press, Cambridge (2004)
Goldreich, O., Micali, S., Wigderson, A.: How to play any mental game or a completeness theorem for protocols with honest majority. In: STOC, pp. 218–229 (1987)
Gordon, S.D., Katz, J.: Complete fairness in multiparty computation without an honest majority. In: Reingold, O. (ed.) TCC 2009. LNCS, vol. 5444, pp. 19–35. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00457-5_2
Gordon, S.D., Hazay, C., Katz, J., Lindell, Y.: Complete fairness in secure two-party computation. In: Proceedings of the 40th Annual ACM Symposium on Theory of Computing (STOC), pp. 413–422 (2008)
Halevi, S., Lindell, Y., Pinkas, B.: Secure computation on the web: computing without simultaneous interaction. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 132–150. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22792-9_8
Halevi, S., Ishai, Y., Kushilevitz, E., Rabin, T.: Best possible information-theoretic MPC. In: Beimel, A., Dziembowski, S. (eds.) TCC 2018. LNCS, vol. 11240, pp. 255–281. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-03810-6_10
Hirt, M., Maurer, U., Zikas, V.: MPC vs. SFE: unconditional and computational security. In: Pieprzyk, J. (ed.) ASIACRYPT 2008. LNCS, vol. 5350, pp. 1–18. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-89255-7_1
Ishai, Y., Kushilevitz, E., Paskin, A.: Secure multiparty computation with minimal interaction. In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 577–594. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14623-7_31
Ishai, Y., Katz, J., Kushilevitz, E., Lindell, Y., Petrank, E.: On achieving the “best of both worlds” in secure multiparty computation. SIAM J. Comput. 40(1), 122–141 (2011)
Katz, J., Lindell, Y.: Collusion-free multiparty computation in the mediated model. IACR Cryptology ePrint Archive 2008:533 (2008)
Koo, C.-Y.: Secure computation with partial message loss. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 502–521. Springer, Heidelberg (2006). https://doi.org/10.1007/11681878_26
Makriyannis, N.: On the classification of finite Boolean functions up to fairness. In: Abdalla, M., De Prisco, R. (eds.) SCN 2014. LNCS, vol. 8642, pp. 135–154. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10879-7_9
Micali, S., Rogaway, P.: Secure computation. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS, vol. 576, pp. 392–404. Springer, Heidelberg (1992). https://doi.org/10.1007/3-540-46766-1_32
Rabin, T., Ben-Or, M.: Verifiable secret sharing and multiparty protocols with honest majority. In: STOC 1989, pp. 73–85 (1989)
Shamir, A.: How to share a secret. Commun. ACM 22(11), 612–613 (1979)
Zikas, V.: Generalized corruption models in secure multiparty computation. Ph.D. thesis, ETH Zurich (2010). http://d-nb.info/1005005729
Zikas, V., Hauser, S., Maurer, U.: Realistic failures in secure multiparty computation. In: Reingold, O. (ed.) TCC 2009. LNCS, vol. 5444, pp. 274–293. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00457-5_17
Acknowledgements
We are grateful to Amos Beimel for many helpful discussions.
© 2020 International Association for Cryptologic Research
Alon, B., Omri, E., Paskin-Cherniavsky, A. (2020). MPC with Friends and Foes. In: Micciancio, D., Ristenpart, T. (eds) Advances in Cryptology – CRYPTO 2020. CRYPTO 2020. Lecture Notes in Computer Science, vol 12171. Springer, Cham. https://doi.org/10.1007/978-3-030-56880-1_24
DOI: https://doi.org/10.1007/978-3-030-56880-1_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-56879-5
Online ISBN: 978-3-030-56880-1