The basic framework of the hybrid protocol
We apply our proposed attacking model to the protocols (Moran et al. 2009; Katz 2007; Gordon et al. 2008; Gordon and Katz 2012; Groce and Katz 2012), which include a “preprocessing” stage and a “share-exchanging” stage. Recall that in the first stage, there exists a trusted party. We restate the framework of Groce and Katz (2012) for completeness.
The first stage of our proposed framework
1. Two parties A and B present their private inputs \(x_A\) and \(x_B\) to the trusted party, which correctly computes \(f(x_A,x_B)\).
2. The trusted party selects \(i^*\in \{1,2,\ldots ,n\}\) according to a geometric distribution with parameter p.
3. Random values \(r^A_i\) and \(r^B_i\) are chosen:
(a) When \(i<i^*\) (\(i\in \{1,2,\ldots ,n\}\)), \(r^A_i\) and \(r^B_i\) are chosen uniformly at random from the range of \(f(\cdot )\).
(b) When \(i\ge i^*\), \(r^A_i\) and \(r^B_i\) are set to \(f(x_A,x_B)\).
4. Random shares \(s_i^A\), \(s_i^B\) and \(t_i^A\), \(t_i^B\) of \(r^A_i\) and \(r^B_i\) are chosen such that \(s_i^A\oplus t_i^A=r_i^A\) and \(s_i^B\oplus t_i^B=r_i^B\). Message authentication codes (Black 2000) on \(s_i^A\), \(s_i^B\), \(t_i^A\) and \(t_i^B\) are also generated to guarantee the validity of the shares.
5. \(s_i^A\), \(s_i^B\) and \(t_i^A\), \(t_i^B\), together with their corresponding message authentication codes, are given to A and B, respectively.
There are altogether n rounds in the second stage. A and B exchange their shares in each round.
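The dealer's first stage can be sketched as follows. This is a minimal illustration, not the paper's implementation: it samples \(i^*\) from a truncated geometric distribution, generates a single value \(r_i\) per round (rather than the symmetric pair \(r_i^A\), \(r_i^B\)), splits it into XOR shares, and uses HMAC-SHA256 as a stand-in for the unspecified MAC scheme. All names are hypothetical.

```python
import hmac, hashlib, secrets

def preprocess(f_out: bytes, n: int, p: float, key: bytes):
    """Sketch of the trusted party's first stage for one value per round."""
    # Sample i* from a geometric distribution with parameter p,
    # truncated to {1, ..., n}.
    i_star = 1
    while secrets.randbelow(10**6) / 10**6 >= p and i_star < n:
        i_star += 1
    rounds = []
    for i in range(1, n + 1):
        # Rounds before i* carry a random value; later rounds carry f(x_A, x_B).
        r = secrets.token_bytes(len(f_out)) if i < i_star else f_out
        s = secrets.token_bytes(len(r))              # share held by one party
        t = bytes(x ^ y for x, y in zip(s, r))       # share held by the other
        tag = hmac.new(key, t, hashlib.sha256).digest()  # MAC on the share
        rounds.append((s, t, tag))
    return i_star, rounds
```

Since \(i^* \le n\), the shares of the last round always reconstruct to the true output.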
The second stage of our proposed framework
1. In the \(i\)th round:
(a) A first sends \(t_i^B\) to B. B verifies the validity of the share using the corresponding message authentication code, computes \(r_i^B=t_i^B\oplus s_i^B\), and the protocol moves to the second step.
(b) B sends \(s_i^A\) to A. A verifies the validity of the share using the corresponding message authentication code and computes \(r_i^A=t_i^A\oplus s_i^A\).
2. Each party takes its latest reconstructed value as its final output.
3. If neither party aborts in the \(i\)th round, the protocol proceeds to the \((i+1)\)th round.
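The receiver's side of one round, verify the MAC and then XOR the two shares, can be sketched as below. HMAC-SHA256 again stands in for the unspecified MAC scheme, and the function name is hypothetical; an invalid share triggers an abort, matching steps 1(a) and 1(b).

```python
import hmac, hashlib

def receive_share(key: bytes, own_share: bytes, received: bytes, tag: bytes) -> bytes:
    """Verify the MAC on the received share, then reconstruct r_i."""
    expected = hmac.new(key, received, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        # A forged share is detected, so the receiver aborts.
        raise ValueError("invalid share: MAC check failed, abort")
    # Reconstruct r_i = own_share XOR received share.
    return bytes(x ^ y for x, y in zip(own_share, received))
```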
The utilities of Groce and Katz (2012) are presented in matrix form in Table 2.
Table 2 Utility of Groce and Katz (2012)
Analysis of the attacking model
We apply our attacking model to the protocol of Groce and Katz (2012), in which both parties A and B are rational and no external or internal attacker \(\mathcal {A}\) participates. There are two stages in Groce and Katz (2012): A and B receive shares of the result at the end of the first stage and exchange the shares in the second stage to reconstruct the result. In this paper, we consider practical settings and an attacking model for the second stage.
1. Suppose A and B have private types, honest or dishonest, with incomplete information. Honest means that a party faithfully passes its shares in each round; dishonest means that it aborts in a certain round. Note that we do not consider parties sending fake shares, since forged shares are detected by the message authentication codes (Black 2000).
2. A and B have prior probabilities on the private types: B treats A as honest with probability \(\mu \), and A regards B as honest with probability \(\nu \). Both parties compute their expected utilities at the end of the protocol.
3. The practical attacker \(\mathcal {A}\) owns additional information on the private types of A and B: \(\mathcal {A}\) regards A as honest with probability \(\eta \) and B as honest with probability \(\theta \). We assume that \(\eta >\mu \) and \(\theta >\nu \); otherwise, \(\mathcal {A}\) has no incentive to attack the protocol. Note that A and B learn nothing about \(\eta \) and \(\theta \); they may not even know that \(\mathcal {A}\) owns this additional information.
4. \(\mathcal {A}\) may seek an advantage by attacking one party or both, depending on the utility functions. When we state that \(\mathcal {A}\) corrupts a party, we mean that \(\mathcal {A}\) bribes one party (or both) at the required cost and participates in the protocol in place of that party (or both).
Table 2 assumes that \(b_1>a_1\ge d_1\ge c_1\) and \(b_2>a_2\ge d_2\ge c_2\). Correct means that the party learns the correct output; Incorrect means that the party learns an incorrect output. In Groce and Katz (2012), the utilities of A and B are defined according to Table 2.
In this section, we list the expected utility of \(\mathcal {A}\) and analyze the conditions under which \(\mathcal {A}\) corrupts only one party. The conditions for the other cases, where no one or both parties are corrupted, can then be drawn from these. That is, \(\mathcal {A}\) has incentives to corrupt a party if doing so yields an advantage over the case where no one is corrupted. Therefore, we first derive the expected utilities \(U^A\) and \(U^B\) when no one is corrupted, detailed below. Here, \(U_X^H\) and \(U_X^D\) denote the utility of X when X treats its opponent Y as honest or dishonest, respectively, where \(X,Y\in \{A,B,\mathcal {A}\}\) and \(X\ne Y\) [ref. Eq. (4)].
$$\begin{aligned} \begin{aligned} U^A&=\nu U_A^H+(1-\nu )U_A^D\\&=\nu a_1+(1-\nu )[\varphi d_1+pc_1+(1-p-\varphi )a_1] \\ U^B&=\mu U_B^H+(1-\mu )U_B^D\\&=\mu [\varphi d_2+pb_2+(1-p-\varphi )a_2]\\&\quad +\,(1-\mu )[\varphi d_2+pd_2+(1-p-\varphi )a_2]. \\ \end{aligned} \end{aligned}$$
(4)
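Eq. (4) can be checked numerically. The values below are hypothetical, chosen only to satisfy the orderings \(b_1>a_1\ge d_1\ge c_1\) and \(b_2>a_2\ge d_2\ge c_2\); p and \(\varphi \) are the probabilities of \(i=i^*\) and \(i<i^*\) defined in the analysis.

```python
# Illustrative (hypothetical) utilities satisfying the assumed orderings.
a1, b1, c1, d1 = 2.0, 4.0, 0.0, 1.0
a2, b2, c2, d2 = 2.0, 4.0, 0.0, 1.0
p, phi = 0.3, 0.4          # Pr[i = i*] and Pr[i < i*]
mu, nu = 0.6, 0.5          # prior beliefs about the opponent being honest

# Eq. (4): expected utilities when no party is corrupted.
U_A = nu * a1 + (1 - nu) * (phi * d1 + p * c1 + (1 - p - phi) * a1)
U_B = (mu * (phi * d2 + p * b2 + (1 - p - phi) * a2)
       + (1 - mu) * (phi * d2 + p * d2 + (1 - p - phi) * a2))
```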
Let \(c_A=U^A\) and \(c_B=U^B\) be the maximum costs for \(\mathcal {A}\) to corrupt A and B, respectively. Let \(U^B_{\mathcal {A}B}=U^B_{AB}\) and \(U^A_{A\mathcal {A}}=U^A_{AB}\) [ref. Eq. (5)].
$$\begin{aligned} \begin{aligned} U^\mathcal {A}_{\mathcal {A}B}&=\theta U_\mathcal {A}^H+(1-\theta )U_\mathcal {A}^D-c_A\\&=\theta a_1+(1-\theta )[\varphi d_1+pc_1+(1-p-\varphi )a_1]-c_A\\&=(\theta -\nu )[p(a_1-c_1)+\varphi (a_1-d_1)]\\ U^\mathcal {A}_{A\mathcal {A}}&=\eta U_{\mathcal {A}}^H+(1-\eta )U_{\mathcal {A}}^D-c_B\\&=\eta [\varphi d_2+p b_2+(1-p-\varphi )a_2]\\&\qquad +(1-\eta )[\varphi d_2+pd_2+(1-p-\varphi )a_2]-c_B\\&=(\eta -\mu )p(b_2-d_2).\\ \end{aligned} \end{aligned}$$
(5)
Here, p denotes the probability that \(i=i^*\), the round after which both parties can reconstruct the output. However, if B receives its share and aborts in the \(i^{*}\)th round, B reconstructs the output while A cannot. Let \(\varphi \) denote the probability that \(i<i^*\), in which case both parties reconstruct random values. Recall that we assume \(\theta >\nu \), \(\eta >\mu \), \(b_1>a_1\ge d_1\ge c_1\) and \(b_2>a_2\ge d_2\ge c_2\). It follows that \(U^\mathcal {A}_{\mathcal {A}B}>0\) and \(U^\mathcal {A}_{A\mathcal {A}}>0\). That is, given the necessary information on the types of A and B, \(\mathcal {A}\) has incentives to corrupt one party or both, since the advantages are positive.
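The algebraic simplifications in Eq. (5), and the positivity claim, can be verified numerically under hypothetical parameter values satisfying \(\theta >\nu \), \(\eta >\mu \) and the utility orderings.

```python
# Hypothetical values satisfying theta > nu, eta > mu and the orderings.
a1, b1, c1, d1 = 2.0, 4.0, 0.0, 1.0
a2, b2, c2, d2 = 2.0, 4.0, 0.0, 1.0
p, phi = 0.3, 0.4
mu, nu = 0.6, 0.5
eta, theta = 0.8, 0.7

# Corruption costs c_A = U^A and c_B = U^B from Eq. (4).
c_A = nu * a1 + (1 - nu) * (phi * d1 + p * c1 + (1 - p - phi) * a1)
c_B = (mu * (phi * d2 + p * b2 + (1 - p - phi) * a2)
       + (1 - mu) * (phi * d2 + p * d2 + (1 - p - phi) * a2))

# Expanded forms of Eq. (5).
adv_AB = theta * a1 + (1 - theta) * (phi * d1 + p * c1 + (1 - p - phi) * a1) - c_A
adv_AA = (eta * (phi * d2 + p * b2 + (1 - p - phi) * a2)
          + (1 - eta) * (phi * d2 + p * d2 + (1 - p - phi) * a2) - c_B)

# They agree with the closed forms and are positive, as claimed.
assert abs(adv_AB - (theta - nu) * (p * (a1 - c1) + phi * (a1 - d1))) < 1e-9
assert abs(adv_AA - (eta - mu) * p * (b2 - d2)) < 1e-9
assert adv_AB > 0 and adv_AA > 0
```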
Definition 1
The advantage of an attacker is defined as the additional income obtained by attacking, i.e., the attacker's utility minus the corruption cost.
The advantage describes the attacker's incentives to corrupt parties. Recall that in existing works, malicious attackers sabotage protocols without reason: they are simply assumed to break the security of the protocols.
Proposition 1
The attacker has incentives to corrupt parties if his advantage is positive.
In this paper, we use the notion of advantage to measure the attacker's incentives to corrupt the parties. For example, the attacker has strong incentives to corrupt parties if the advantage is large enough.
Theorem 1
Given \(\theta >\nu \), \(\eta >\mu \), \(b_1>a_1\ge d_1\ge c_1\) and \(b_2>a_2\ge d_2\ge c_2\), it is possible for \(\mathcal {A}\) to corrupt one or two parties.
Proof
(Sketch) The inequalities \(\theta >\nu \) and \(\eta >\mu \) mean that the attacker has additional information about the private types of the parties. Each party learns little about the private type of its opponent and is therefore cautious when participating in the protocol. The attacker, by contrast, masters more information and may take adventurous actions when participating; however, he must first corrupt one or two parties. Equation (5) lists the attacker's advantages when he corrupts A and B, respectively.
Given \(\theta >\nu \), \(\eta >\mu \), \(b_1>a_1\ge d_1\ge c_1\) and \(b_2>a_2\ge d_2\ge c_2\), it follows that \(U^\mathcal {A}_{\mathcal {A}B}>0\) and \(U^\mathcal {A}_{A\mathcal {A}}>0\). Thus, the attacker has incentives to corrupt one or two parties according to Proposition 1. \(\square \)
As mentioned above, the attacker may be an internal party. For example, B may bribe A in the protocol: B bribes A at cost \(c_A\), obtains A's input and then learns the output. We assume the utility is still calculated according to Table 2. It can be derived that B has incentives to bribe A if \(U^A-b_2<c_A<c_1-U^B\).
A random solution for the proposed attacking model
The assumption on the adversary \(\mathcal {A}\) is strong enough for him to corrupt both parties, in which case the adversary's income is no less than \(c_A+c_B\). Such an attacking model is sensible in practice if the adversary dominates the entire system; therefore, measures must be taken to prevent such attacks. The intuition is that the two parties may resort to a cryptographic primitive in order to establish their membership. More specifically, before exchanging their shares, the two parties may, with probability \(\psi >0\), invoke an accumulator, a one-way function (Derler et al. 2015), so as to enforce cooperation. The function of the accumulator is to prove membership without leaking information about any individual member. Note that \(\psi \) need not be 1, since introducing the accumulator into the hybrid protocol increases the computational complexity. Therefore, we choose a proper \(\psi \) that deters the adversary and prevents the attack.
In the scenario where the adversary corrupts both parties, one party (say A) has utility \(c_A\) when he (or she) is corrupted (bribed). Suppose A calls the accumulator and cooperates with B with probability \(\psi \); the expected utility is then \(\psi a_1+(1-\psi )c_A\). The attack is prevented when \(\psi a_1+(1-\psi )c_A>c_A\). For the same reason, the condition for B is \(a_2>c_B\). Recall that in Eq. (2), \(\mathbf {u}_\mathcal {A}\varvec{\gamma }^T\) is at least \(a_1\) and \(\mathbf {u}_\mathcal {A}\varvec{\delta }^T\) is at least \(a_2\). In Eq. (3), \(\varDelta _\mathcal {A}\) is at least \(a_1+a_2\); otherwise, \(\mathcal {A}\) has no incentives to corrupt one or both parties. Therefore, the conditions for the two parties to resist the attack, \(a_1>c_A\) and/or \(a_2>c_B\), are satisfied. That is, the adversary \(\mathcal {A}\) cannot conduct the attack mentioned in Sect. 4.2 when \(\psi >0\).
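The deterrence condition \(\psi a_1+(1-\psi )c_A>c_A\) reduces to \(a_1>c_A\) for any \(\psi >0\), so the size of \(\psi \) does not affect whether the attack is deterred, only the extra expected computational cost. A small numeric check with hypothetical values:

```python
# Hypothetical numbers: a corrupted A receives c_A; invoking the
# accumulator with probability psi yields psi*a1 + (1-psi)*c_A instead.
a1, c_A = 2.0, 1.5

for psi in (0.1, 0.5, 0.9):
    deterred = psi * a1 + (1 - psi) * c_A > c_A
    # For any psi > 0, the attack is deterred exactly when a1 > c_A.
    assert deterred == (a1 > c_A)
```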