Abstract
Adversaries in two-party computation may sabotage a protocol, potentially collapsing information security management. In practice, attackers often breach security protocols with specific incentives. For example, attackers may reap additional rewards by sabotaging computing tasks between two clouds. Unfortunately, most existing research neglects this aspect when discussing the security of protocols. Furthermore, the construction of corrupting both parties is also missing in two-party computation. In this paper, we propose an incentive-driven attacking model in which the attacker weighs corruption costs, benefits and possible consequences. We formalize the utilities of two-party protocols and of the attacker(s), taking into account both corruption costs and attack benefits. Our proposed model can be considered an extension of the seminal work of Groce and Katz (Annual international conference on the theory and applications of cryptographic techniques, Springer, Berlin, pp 81–98, 2012), while making a significant contribution by addressing the corruption of both parties in two-party protocols. To the best of our knowledge, this is the first model of the corruption of both parties in two-party protocols.
Introduction
Two-party protocols allow two distributed and mutually distrustful parties to jointly compute a general function of their private inputs. Two-party protocols are widely used in various fields such as cloud computing (Gao et al. 2018; Tian et al. 2018; Ibtihal et al. 2017; Zheng and Wang 2018), file encryption (Yang et al. 2018), verifiable computation (Chen et al. 2016), keyword searching (Li et al. 2015) and trust evaluation (Jiang et al. 2016, 2018; Jhaveri et al. 2018). Normally, the security of two-party protocols is discussed with respect to a single external attacker, who is assumed to arbitrarily sabotage the protocols by corrupting at most one party. In most cases, however, a realistic attacker sabotages security protocols with certain incentives rather than arbitrary purposes (Haddi and Benchaïba 2015; Zhao et al. 2012; Wu et al. 2014). If the attacker can justify his (or her) attacks, he (or she) will be motivated to launch the corruption such that the protocols are steered toward outcomes that benefit him (or her). On the other hand, corruption is not free for any involved party; otherwise, any party would modify the protocols without any liability (Li et al. 2018; Liu et al. 2018b; Gupta et al. 2016). The setting of corrupting both parties is often missing, since it is considered less important to discuss the security of a protocol when both parties of a two-party protocol are corrupted. However, the case of corrupting both parties is highly relevant for an incentive-driven attacker, who has to weigh corruption costs against possible benefits. It is worth pointing out that semi-honest and malicious adversaries may never reveal their authentic incentives (Li et al. 2017; Yu et al. 2018).
In this paper, we revisit the problem of incentive-driven attackers and corruption costs by significantly extending the framework reported in Groce and Katz (2012). We propose a new incentive-driven attacking model based on the intuition that no party will participate in a protocol without any benefit. Existing works assume that parties are arbitrarily malicious, which does not properly model real-world attackers. Rational parties participate in protocols to maximize their payoffs. Corrupting both parties at the same time is rarely considered, yet it is in line with reality. For example, a malicious attacker learns additional information with respect to a protocol and can maximize his profit when obtaining the inputs of both parties. At the same time, the two parties in the protocol do not learn this additional information and can only obtain their own specific payoffs when following the protocol. The malicious attacker therefore has incentives to bribe both parties at a price that is higher than each party's specific payoff but lower than his maximum profit. On the other hand, the corrupted parties would be willing to sell their inputs and receive the bribe money. Therefore, we formalize this malicious attacker as an incentive-driven attacker and discuss the conditions under which the bribe is feasible. The basic idea of this work is broadly similar to social engineering (Hadnagy 2010; Abraham and Chengalur-Smith 2010). The main contributions of our work are summarized below.

1.
We propose a new incentive-driven attacking model that accounts for the cost of corrupting two-party protocols. Our proposed model describes the price of an attack when the attacker(s) sabotage the protocols; this cost computation influences the decision to launch any attack.

2.
We extensively discuss the required utilities under three corruption cases. The significance of our work is the consideration of corrupting both parties, which distinguishes it from other state-of-the-art techniques.

3.
We instantiate the incentives of the attacker(s) by largely extending the framework reported in Groce and Katz (2012). We take into account the private inputs of the parties involved in the protocols, which may lead to different outcomes, and we substantially investigate the preconditions for the attacker(s) to successfully sabotage the protocols' security.
Comparison against the state of the art
Halpern and Teague (2004) introduced rational parties in secret sharing schemes and secure multi-party computation protocols, where rational parties have incentives to maximize their utilities. Asharov et al. (2011) presented formal definitions for the security of rational protocols with both positive and negative outcomes on two-party protocols. Groce and Katz (2012) redefined utilities for rational parties and proved that the negative outcomes described in Asharov et al. (2011) can be avoided when rational parties are given proper incentives. However, their work did not consider the effect of costs on corruption. Garay et al. (2013) introduced an external attacker and transformed a protocol into a two-party game. They also discussed the relationship between the corruption cost and the utilities when the attacker successfully breaks privacy. However, they did not present specific utility definitions with respect to the attacker's incentives; furthermore, they only considered the corruption of some of the parties instead of all the involved parties. Recently, bitcoin and blockchain (Meng et al. 2018; Lin et al. 2018; Liu et al. 2018a) have been introduced as incentives in multi-/two-party computation for correctness (Kumaresan and Bentov 2014) and fairness (Bentov and Kumaresan 2014; Andrychowicz et al. 2014). Andrychowicz et al. (2014) simulated fairness in two-party computation, where the party that does not learn the output may be awarded financial compensation. Unfortunately, their work does not cover the setting of corruption costs. Wang et al. (2016, 2018) proposed rational secure two-party computation to describe the attacking model from the viewpoint of game theory. Adat et al. (2018) proposed an economic incentive-based risk transfer mechanism that can prevent denial-of-service attacks in the Internet of Things; their economic incentive-based agreement can provide additional security for the Internet of Things if needed.
Other attacking models consist of covert channels (Zhang et al. 2018) and correlated fading channels (Fan et al. 2017), which are out of the scope of this paper.
Our attacking model does not break the security of the protocols per se. The attacker corrupts parties by stealing or bribing, which is the biggest difference from the work reported in Groce and Katz (2012). For the protocols, the attacker learns the output through cost estimation. On the other hand, honest or corrupted parties obtain either pecuniary compensation or certain beneficial outputs. We compare our work against the other approaches and show the comparisons in Table 1.
Outline
Section 2 presents the proposed environmental model and explains the used notions such as outcomes and expected utility. Section 3 shows the attacking model and defines the corresponding utility functions for different corruption cases. At the end of Sect. 3, an instance based on the protocol of Groce and Katz (2012) is presented to show the possibility for an attacker to corrupt one party or both parties. Section 4 concludes this paper and presents future work.
Environmental model
In this paper, we consider a general two-party computation protocol \(\pi \). Two distinct parties, who execute \(\pi \), belong to the set \(\mathcal {P}=\{A,B,C,D,\ldots \}\). Note that parties may be entities such as hosts or machines. Let \(\pi _{P_1P_2}\) denote an instance of \(\pi \), where \(P_1,P_2\in \mathcal {P}\) execute \(\pi \) with their inputs. Each party has his (or her) own identity, so that the parties recognize each other in the specific protocol, and the two parties jointly learn an output from their inputs. Let \(\mathcal {A}\) denote an attacker, either internal or external: an internal attacker is one of the parties inside the protocol, whereas an external attacker stands outside it. Note that an internal attacker belongs to \(\mathcal {P}\) and attacks the other parties by providing proper incentives. Recall that only external attackers are discussed in traditional two-party computation; in this paper, we also consider the possibility of internal attackers. We will discuss this setting in the following sections.
The main target of the attacker is to learn the output of the protocol by attacking the two parties in the protocol. The attacker is assumed to have the following abilities: (1) he (or she) controls the communication channel (e.g., by eavesdropping) between parties A and B; (2) the corruption is not free; (3) he (or she) may corrupt the parties to retrieve their identities (and their private inputs) and play protocol \(\pi \). Here, corrupting one party can take two forms. One is to steal the inputs and identities: the attacker pays the necessary attack costs but pays nothing to the corrupted party. The other is to bribe a party with bribery funds, and the corrupted party hands over its input and identity to the attacker afterward.
Let \(p_A\) and \(p_B\) denote the probabilities that the attacker bribes parties A and B, respectively. The attacker may learn the output by replacing the corrupted parties in protocol \(\pi \), using the stolen or handed-over inputs and identities of the corrupted parties. The difference between the two corruption styles is whether the corrupted parties are paid when they lose their inputs and identities. Neither corruption style is free, and the costs are identical in both. Our model applies to secure two-party computation protocols in which the parties' identities are authenticated; the attacker may steal or bribe parties for their inputs and identities, such as session keys.
Suppose two parties A and B execute protocol \(\pi \). In fact, any two parties belonging to \(\mathcal {P}\) may execute the protocol. Let vectors \(\mathbf {o}_A=\{o^1_A,o^2_A,o^3_A,\ldots ,o^n_A\}\) and \(\mathbf {o}_B=\{o^1_B,o^2_B,o^3_B,\ldots ,o^n_B\}\) denote all the possible outcomes for A and B, respectively, where n denotes the number of possible outcomes. Let vectors \(\varvec{\alpha }=\{\alpha _1,\alpha _2,\alpha _3,\ldots ,\alpha _n\}\) and \(\varvec{\beta }=\{\beta _1,\beta _2,\beta _3,\ldots ,\beta _n\}\) denote the distributions of the corresponding outcomes. Let vectors \(\mathbf {u}_A=\{u^1_A,u^2_A,u^3_A,\ldots ,u^n_A\}\) and \(\mathbf {u}_B=\{u^1_B,u^2_B,u^3_B,\ldots ,u^n_B\}\) denote the corresponding utilities for A and B, respectively. Therefore, parties A and B have the expected utilities \(U^A=\mathbf {u}_A\varvec{\alpha }^T\) and \(U^B=\mathbf {u}_B\varvec{\beta }^T\), respectively.
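The expected utilities above are inner products of a utility vector and an outcome distribution, \(U^A=\mathbf {u}_A\varvec{\alpha }^T\). A minimal sketch in Python follows; the concrete numbers are illustrative, not taken from the paper.

```python
# Expected utility as an inner product: U = sum_i u_i * Pr[outcome i].
def expected_utility(utilities, distribution):
    """Return the expected utility of a party over its outcome distribution."""
    assert abs(sum(distribution) - 1.0) < 1e-9, "distribution must sum to 1"
    return sum(u * p for u, p in zip(utilities, distribution))

u_A = [2.0, 1.0, 0.0]    # illustrative utilities of A's three outcomes
alpha = [0.5, 0.3, 0.2]  # illustrative outcome distribution for A
print(expected_utility(u_A, alpha))  # 1.3
```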
The approach proposed in this paper differs from the established frameworks in the incentives held by the attacker. The attacker \(\mathcal {A}\) does not attack a protocol arbitrarily. Instead, he (or she) sabotages a protocol with specific incentives by paying the necessary costs. We formulate the incentives through utilities: the attacker has incentives to sabotage a protocol if doing so brings positive utility. Therefore, the attacker must additionally account for costs and losses in the computation. For example, the attacker holds more accurate distributions \(\varvec{\gamma }=\{\gamma _1,\gamma _2,\gamma _3,\ldots ,\gamma _n\}\) and \(\varvec{\delta }=\{\delta _1,\delta _2,\delta _3,\ldots ,\delta _n\}\) over the outcomes of parties A and B. In this case, the corresponding outcomes and utilities of \(\mathcal {A}\) are denoted by \(\mathbf {o}_\mathcal {A}=\{o^1_\mathcal {A},o^2_\mathcal {A},o^3_\mathcal {A},\ldots ,o^n_\mathcal {A}\}\) and \(\mathbf {u}_\mathcal {A}=\{u^1_\mathcal {A},u^2_\mathcal {A},u^3_\mathcal {A},\ldots ,u^n_\mathcal {A}\}\), respectively. Note that \(\mathbf {u}_\mathcal {A}\) may differ between the cases of corrupting A and corrupting B. For simplicity, in this paper, we use \(\mathbf {u}_\mathcal {A}\) to denote the corresponding utility of \(\mathcal {A}\) when one party is corrupted. This does not exclude the case where both A and B are corrupted, which is often ignored in two-party computation protocols.
Attacking model and the corresponding utility functions
The attacking model on twoparty protocols consists of three corruption cases: no one is corrupted, only one is corrupted and both are corrupted. In this section, we only present definitions of the utility functions with respect to these cases. In the following section, we instantiate the utilities by using a concrete protocol.
No one is corrupted
A and B execute protocol \(\pi \). \(\mathcal {A}\) corrupts no one. The expected utilities are shown in Eq. (1).
\(U^A_{AB}\), \(U^B_{AB}\) and \(U^\mathcal {A}_{AB}\) denote the expected utilities of A, B and \(\mathcal {A}\), respectively, in the protocol \(\pi _{AB}\). The utility of \(\mathcal {A}\) is a function of the intermediate messages \(m_1,m_2,\ldots ,m_n\) within protocol \(\pi \). Since it is hard to measure the attacker's utility when he (or she) does not participate in the protocol, we assume a function \(f(\cdot )\) of the intermediate messages to denote this utility.
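The display of Eq. (1) is missing from this copy. Assuming the inner-product form of the expected utilities defined in the environmental model and the message function \(f(\cdot )\) just introduced, a plausible reconstruction is:

```latex
U^A_{AB}=\mathbf{u}_A\varvec{\alpha}^T,\qquad
U^B_{AB}=\mathbf{u}_B\varvec{\beta}^T,\qquad
U^{\mathcal{A}}_{AB}=f(m_1,m_2,\ldots,m_n). \tag{1}
```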
One party is corrupted
We only present the expected utilities for the case when A is corrupted; the case when B is corrupted is defined analogously. \(\mathcal {A}\) corrupts A with cost \(c_A\). The corruption cost \(c_A\) should be at least \(U^A_{AB}\); otherwise, A would not accept the corruption. \(\mathcal {A}\) then participates in the protocol \(\pi \) with B in place of A. The expected utilities are demonstrated in Eq. (2).
\(U^A_{\mathcal {A}B}\), \(U^B_{\mathcal {A}B}\) and \(U^\mathcal {A}_{\mathcal {A}B}\) denote the expected utilities of A, B and \(\mathcal {A}\), respectively, in the protocol \(\pi \). \(\varDelta _A\) denotes the utility of A when A does not learn the output.
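The display of Eq. (2) is also missing from this copy. One plausible reading, consistent with the terms defined here and with the bounds on \(\mathbf {u}_\mathcal {A}\varvec{\gamma }^T\) and \(\mathbf {u}_\mathcal {A}\varvec{\delta }^T\) quoted later, is:

```latex
U^A_{\mathcal{A}B}=c_A+\varDelta_A,\qquad
U^B_{\mathcal{A}B}=\mathbf{u}_B\varvec{\beta}^T,\qquad
U^{\mathcal{A}}_{\mathcal{A}B}=\mathbf{u}_\mathcal{A}\varvec{\gamma}^T-c_A, \tag{2}
```

with, symmetrically, \(U^{\mathcal{A}}_{A\mathcal{A}}=\mathbf{u}_\mathcal{A}\varvec{\delta}^T-c_B\) when B is the corrupted party.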
Both parties are corrupted
Generally, the corruption of both parties is often neglected, since it is considered less important for discussing security properties in two-party computation (Goldreich 2009). In this paper, we consider an attacking model on the parties instead of on the protocol itself; that is, the protocol per se is secure. We instead allow the attacker to corrupt both parties at double cost. The balance between the utility gained by the corruption and the costs may be well above 0, which constitutes the incentive for the attacker. In the example of the two millionaires (Yao 1982), assume the inputs of the two millionaires are x and y, respectively. Let \(\hbox {Greater}(x,y)\) return a pair in which each millionaire learns whether his own amount is strictly larger: \((\hbox {yes},\hbox {no})\) if \(x>y\) and \((\hbox {no},\hbox {yes})\) if \(x<y\), where the first value is returned to the first millionaire and the second value to the second millionaire. When \(x>y\) or \(x<y\), corrupting one party and corrupting both parties are equally informative for the attacker. An extreme case is \(x=y\). If the attacker corrupts one party, he (or she) learns that party's input and output, and the output is either yes or no. However, if the attacker corrupts both parties, he (or she) may observe the output (no, no), which means the two millionaires have the same amount of money. If the attacker is another millionaire, corrupting both parties may thus reveal additional information. In this paper, we include the case of corrupting both parties and reason about this practical scenario. It is possible that \(\mathcal {A}\) can learn the output by itself. The expected utilities are depicted in Eq. (3).
\(U^A_{\mathcal {A}\mathcal {A}}\), \( U^B_{\mathcal {A}\mathcal {A}}\) and \(U^\mathcal {A}_{\mathcal {A}\mathcal {A}}\) denote the utilities for parties A, B and \(\mathcal {A}\), respectively. \(\varDelta _\mathcal {A}\) denotes the utility when \(\mathcal {A}\) learns the output. \(\varDelta '_A\) is the utility when A does not learn the output. \(\varDelta '_B\) denotes the utility when B does not learn the output.
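The display of Eq. (3) did not survive in this copy. From the terms defined here (bribes \(c_A\), \(c_B\) paid out, \(\varDelta _\mathcal {A}\) gained by \(\mathcal {A}\)), a plausible reconstruction is:

```latex
U^A_{\mathcal{A}\mathcal{A}}=c_A+\varDelta'_A,\qquad
U^B_{\mathcal{A}\mathcal{A}}=c_B+\varDelta'_B,\qquad
U^{\mathcal{A}}_{\mathcal{A}\mathcal{A}}=\varDelta_\mathcal{A}-c_A-c_B. \tag{3}
```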
We only list the utilities for the general cases; the concrete utilities depend on specific implementations. An instance of the utilities is presented in the following section.
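The two-millionaires case discussed above can be sketched as follows. We spell out the convention (our assumption, consistent with the (no, no) outcome in the text) that each party learns only whether its own input is strictly larger.

```python
# Millionaires' example (Yao 1982): each party learns whether its own
# input is strictly larger. Under this convention, x == y yields
# (no, no), which is only visible when BOTH outputs are known --
# corrupting both parties is strictly more informative in that case.
def greater(x, y):
    """Outputs for (first millionaire, second millionaire)."""
    return ("yes" if x > y else "no", "yes" if y > x else "no")

print(greater(5, 3))  # ('yes', 'no')
print(greater(3, 5))  # ('no', 'yes')
print(greater(4, 4))  # ('no', 'no') -> equality revealed only jointly
```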
An instance for the proposed attacking model
The basic framework of the hybrid protocol
We apply our proposed attacking model to the protocols (Moran et al. 2009; Katz 2007; Gordon et al. 2008; Gordon and Katz 2012; Groce and Katz 2012) that include a “preprocessing” stage and a “share-exchanging” stage. Recall that in the first stage there exists a trusted party. We restate the framework of Groce and Katz (2012) for completeness.
The first stage of our proposed framework

1.
Two parties A and B present their private inputs \(x_A\) and \(x_B\) to the trusted party, which correctly computes \(f(x_A,x_B)\).

2.
The trusted party selects \(i^*\) (\(i^*\in \{1,2,\ldots ,n\}\)) according to a geometric distribution with parameter p.

3.
Random values \(r^A_i\) and \(r^B_i\) are chosen:

(a)
\(r^A_i\) and \(r^B_i\) are randomly chosen in the domain of \(f(\cdot )\) when \(i<i^*\) (\(i\in \{1,2,\ldots ,n\}\)).

(b)
\(r^A_i\) and \(r^B_i\) are set to be \(f(x_A,x_B)\) when \(i\ge i^*\).


4.
Random values \(s_i^A\), \(s_i^B\) and \(t_i^A\), \(t^B_i\) (shares of \(r^A_i\) and \(r^B_i\)) are chosen such that \(s_i^A\oplus t_i^A=r_i^A\) and \(s_i^B\oplus t_i^B=r_i^B\). Message authentication codes (Black 2000) on the values \(s_i^A\), \(s_i^B\), \(t_i^A\) and \(t^B_i\) are also generated to guarantee the validity of the shares.

5.
\(t_i^A\) and \(t_i^B\) with their corresponding message authentication codes are presented to A, and \(s_i^A\) and \(s_i^B\) with theirs to B, matching the exchange pattern of the second stage.
There are altogether n rounds in the second stage. A and B exchange their shares in each round.
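The first stage can be sketched as runnable code. This is a minimal illustration, not the authors' implementation: we model \(f(x_A,x_B)\) as a fixed-length byte string, use HMAC-SHA256 as the message authentication code, and assume (consistent with the exchange rules of the second stage) that A receives the t-shares and B the s-shares; all names and parameters are illustrative.

```python
# Sketch of the "preprocessing" stage with a trusted dealer (steps 1-5).
import hashlib
import hmac
import os
import random

def sample_geometric(p, n):
    """Step 2: first round with a p-probability success, capped at n."""
    i = 1
    while i < n and random.random() > p:
        i += 1
    return i

def mac(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def dealer(f_value, n, p, key, length=16):
    """Return the per-round packages for A and B, plus i*."""
    i_star = sample_geometric(p, n)
    to_A, to_B = [], []
    for i in range(1, n + 1):
        # Step 3: random values before round i*, the real output afterward.
        r_A = os.urandom(length) if i < i_star else f_value
        r_B = os.urandom(length) if i < i_star else f_value
        # Step 4: XOR shares with s_i XOR t_i = r_i, each MACed by the dealer.
        s_A, s_B = os.urandom(length), os.urandom(length)
        t_A, t_B = xor(s_A, r_A), xor(s_B, r_B)
        # Step 5: A holds the t-shares, B the s-shares (with their MACs).
        to_A.append((t_A, mac(key, t_A), t_B, mac(key, t_B)))
        to_B.append((s_A, mac(key, s_A), s_B, mac(key, s_B)))
    return to_A, to_B, i_star
```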
The second stage of our proposed framework

1.
In the \(i\mathrm{th}\) round,

(a)
A firstly passes \(t_i^B\) to B. B ensures the validity of the shares using the corresponding message authentication codes. B computes \(r_i^B=t_i^B\oplus s_i^B\) and the protocol moves to the second step.

(b)
B passes \(s_i^A\) to A. A verifies the validity of the shares using the corresponding message authentication codes. A computes \(r_i^A=t_i^A\oplus s_i^A\).


2.
Each party considers its latest reconstructed value as its final output.

3.
If neither party aborts in the \(i\)th round, the protocol moves to the \((i+1)\)th round.
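One exchange round of the second stage, matching steps (a) and (b) above, can be sketched as follows. The use of HMAC-SHA256 as the MAC and all variable names are our assumptions; A holds \((t_i^A,t_i^B)\) and B holds \((s_i^A,s_i^B)\).

```python
# Sketch of one round of the share-exchanging stage.
import hashlib
import hmac

def mac(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def exchange_round(key, a_package, b_package):
    t_A, tag_tA, t_B, tag_tB = a_package   # held by A
    s_A, tag_sA, s_B, tag_sB = b_package   # held by B
    # (a) A passes t_i^B to B; B verifies its MAC and reconstructs r_i^B.
    if not hmac.compare_digest(mac(key, t_B), tag_tB):
        raise ValueError("B aborts: invalid share from A")
    r_B = bytes(x ^ y for x, y in zip(t_B, s_B))
    # (b) B passes s_i^A to A; A verifies its MAC and reconstructs r_i^A.
    if not hmac.compare_digest(mac(key, s_A), tag_sA):
        raise ValueError("A aborts: invalid share from B")
    r_A = bytes(x ^ y for x, y in zip(t_A, s_A))
    return r_A, r_B
```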
The utilities of Groce and Katz (2012) can be presented in matrix form (see Table 2).
Analysis of the attacking model
We apply our attacking model to the protocol of Groce and Katz (2012), where both parties A and B are rational without the participation of an external/internal attacker \(\mathcal {A}\). There are two stages in Groce and Katz (2012): A and B receive shares of the result at the end of the first stage and exchange the shares in the second stage to reconstruct the result. In this paper, we consider practical settings and an attacking model for the second stage.

1.
Suppose A and B have private types, honest or dishonest, under incomplete information. Here, honest means that the party honestly passes shares in each round, and dishonest means that the party aborts in a certain round. Note that we do not consider parties sending fake shares, since these are detected by the validity check of the message authentication codes (Black 2000).

2.
A and B hold prior probabilities on the private types: B treats A as honest with probability \(\mu \), and A regards B as honest with probability \(\nu \). Both parties hold their expected utilities at the end of the protocol.

3.
The practical attacker \(\mathcal {A}\) owns some additional information on the private types of A and B: \(\mathcal {A}\) regards A as honest with probability \(\eta \) and B as honest with probability \(\theta \). We assume that \(\eta >\mu \) and \(\theta >\nu \); otherwise, \(\mathcal {A}\) has no incentives to attack this protocol. Note that A and B learn nothing about \(\eta \) and \(\theta \). Furthermore, they may not even know that \(\mathcal {A}\) owns the additional information.

4.
\(\mathcal {A}\) may seek some advantage by attacking one party or both of them, depending on the utility functions. Here, when stating that \(\mathcal {A}\) corrupts the parties, we mean that \(\mathcal {A}\) bribes one party or both parties at the required costs and participates in the protocol in their place.
Table 2 shows that \(b_1>a_1\ge d_1\ge c_1\) and \(b_2>a_2\ge d_2\ge c_2\). Correct means that the party learns the correct output, and Incorrect means that the party learns an incorrect output. In Groce and Katz (2012), the utilities of A and B are defined according to Table 2.
In this section, we list the expected utility of \(\mathcal {A}\) and analyze the conditions under which \(\mathcal {A}\) corrupts only one party. The conditions for the other cases, where no one or both parties are corrupted, follow from these. That is, \(\mathcal {A}\) has incentives to corrupt one party if he gains an advantage compared with the case where no one is corrupted. Therefore, we first derive the expected utilities \(U^A\) and \(U^B\) when no one is corrupted, detailed below. Here, \(U_X^H\) and \(U_X^D\) denote the utility of X when X treats his (or her) opponent Y as honest and dishonest, respectively, where \(X,Y\in \{A,B,\mathcal {A}\}\) and \(X\ne Y\) [ref. Eq. (4)].
Let \(c_A=U^A\), \(c_B=U^B\) be the maximum costs for \(\mathcal {A}\) to corrupt A and B, respectively. Let \(U^B_{\mathcal {A}B}=U^B_{AB}\) and \(U^A _{A\mathcal {A}}=U^A_{AB}\) [ref. Eq. (5)].
Here, p denotes the probability that \(i=i^*\), after which round both parties can reconstruct the output. However, B may reconstruct the output while A cannot if B receives its share and then aborts in the \(i^{*}\)th round. Let \(\varphi \) denote the probability that \(i<i^*\), where both parties reconstruct random values. Recall that we assume \(\theta >\nu \), \(\eta >\mu \), \(b_1>a_1\ge d_1\ge c_1\) and \(b_2>a_2\ge d_2\ge c_2\). It follows that \( U^\mathcal {A}_{\mathcal {A}B}>0\) and \(U^\mathcal {A}_{A\mathcal {A}}>0\). That is, given the necessary information on the types of A and B, \(\mathcal {A}\) has incentives to corrupt one party or both, since the advantages are positive.
Definition 1
The advantage is defined as the additional income for an attacker, which is the attacker’s utility minus the corruption cost.
The advantage describes the attacker's incentives to corrupt parties. Recall that in existing works, malicious attackers sabotage protocols without explicit reasons and are simply assumed to break the security of the protocols.
Proposition 1
The attacker has incentives to corrupt parties if his advantage is positive.
In this paper, we utilize the notion of advantage to measure the incentives for the attacker to corrupt the parties. For example, the attacker has strong incentives to corrupt parties if the advantage is large enough.
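Definition 1 and Proposition 1 reduce to a one-line check, sketched below with illustrative numbers.

```python
# Advantage (Definition 1): the attacker's utility minus the corruption
# cost; per Proposition 1, the attacker has incentives only if it is > 0.
def advantage(attacker_utility, corruption_cost):
    return attacker_utility - corruption_cost

def has_incentive(attacker_utility, corruption_cost):
    return advantage(attacker_utility, corruption_cost) > 0

print(has_incentive(5.0, 3.0))  # True: positive advantage
print(has_incentive(2.0, 3.0))  # False: corruption would run at a loss
```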
Theorem 1
Given \(\theta >\nu \), \(\eta >\mu \), \(b_1>a_1\ge d_1\ge c_1\) and \(b_2>a_2\ge d_2\ge c_2\), it is possible for \(\mathcal {A}\) to corrupt one or two parties.
Proof
(Sketch) The inequalities \(\theta >\nu \) and \(\eta >\mu \) mean that the attacker has additional information with respect to the private types of the parties. Each party learns little about the private type of its opponent; therefore, the parties are cautious when they participate in the protocol. On the contrary, the attacker has more information and may take adventurous actions when participating in the protocol. However, he must first corrupt one or both parties. Equation (5) lists the attacker's advantages when he corrupts A and B, respectively.
Given \(\theta >\nu \), \(\eta >\mu \), \(b_1>a_1\ge d_1\ge c_1\) and \(b_2>a_2\ge d_2\ge c_2\), it satisfies that \( U^\mathcal {A}_{\mathcal {A}B}>0\), \(U^\mathcal {A}_{A\mathcal {A}}>0\). Thus, the attacker has incentives to corrupt one or two parties according to Proposition 1. \(\square \)
As mentioned above, the attacker may be an internal party. For example, B may bribe A in the protocol; that is, B bribes A with cost \(c_A\), obtains the input of A and then learns the output. We assume the utility is still calculated according to Table 2. It can be derived that B has incentives to bribe A if \(U^A-b_2<c_A<c_1-U^B\).
A random solution for the proposed attacking model
The assumption on the adversary \(\mathcal {A}\), who is strong enough to corrupt both parties, requires that the adversary's income be no less than \(c_A+c_B\). Nevertheless, such an attacking model is sensible in practice if the adversary dominates the entire system. Therefore, measures must be taken to prevent such attacks. The intuition is that the two parties may resort to cryptographic primitives in order to identify each other's membership. More specifically, the two parties may invoke an accumulator, a one-way function (Derler et al. 2015), with probability \(\psi >0\) so as to enforce cooperation before they exchange their shares. The function of the accumulator is to prove membership without leaking information about any individual member. Note that \(\psi \) need not be 1, since introducing the accumulator into the hybrid protocol may increase the computational complexity. Therefore, we only choose a proper \(\psi \) to deter the adversary and prevent the attack.
In the scenario where the adversary corrupts both parties, one party (say A) has utility \(c_A\) when he (or she) is corrupted (bribed). Suppose A calls an accumulator and cooperates with B with probability \(\psi \); the expected utility is \(\psi a_1+(1-\psi )c_A\). It should satisfy \(\psi a_1+(1-\psi )c_A>c_A\) so that the attack is prevented, and \(a_2>c_B\) is required for the same reason. Recall that in Eq. (2), \(\mathbf {u}_\mathcal {A}\varvec{\gamma }^T\) is at least \(a_1\) and \(\mathbf {u}_\mathcal {A}\varvec{\delta }^T\) is at least \(a_2\), and in Eq. (3), \(\varDelta _\mathcal {A}\) is at least \(a_1+a_2\); otherwise, \(\mathcal {A}\) has no incentives to corrupt one party or both. Therefore, the conditions for the two parties to resist the attack, namely \(a_1>c_A\) or (and) \(a_2>c_B\), are satisfied. That is, the adversary \(\mathcal {A}\) cannot conduct the attack mentioned in Sect. 4.2 when \(\psi >0\).
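The deterrence condition above can be checked directly: a bribed party who calls the accumulator with probability \(\psi \) has expected utility \(\psi a_1+(1-\psi )c_A\), which exceeds the bribe \(c_A\) exactly when \(a_1>c_A\) and \(\psi >0\). The numbers below are illustrative.

```python
# Deterrence check: expected utility of a bribed party A that invokes
# the accumulator with probability psi versus simply taking the bribe.
def deterred(psi, a1, c_A):
    """True when psi*a1 + (1-psi)*c_A > c_A, i.e., the bribe is refused."""
    return psi * a1 + (1 - psi) * c_A > c_A

print(deterred(0.2, 10.0, 6.0))  # True: a1 > c_A and psi > 0
print(deterred(0.0, 10.0, 6.0))  # False: psi = 0 gives no deterrence
```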
Conclusion
Secure two-party computation protocols aim to protect the communication between two distributed parties in the presence of adversaries. In general, these protocols should satisfy specific security requirements such as privacy, correctness and fairness. However, the adversary's abilities are often overestimated, while real-world attacks on the system cannot be fully avoided; discussions under such settings may therefore concern impractical protocols. In this paper, we considered a security problem in a realistic scenario that cannot be directly solved by existing two-party protocols. We proposed a new attacking model in which the attacker has asymmetric information compared with the two parties. The attacker has incentives to corrupt one party or both of them if he (or she) is rewarded enough by corrupting the protocols. We derived the probability of corrupting both parties by giving an instance based on the protocol reported in Groce and Katz (2012). Finally, we compared our work against other techniques in six aspects and concluded that ours achieves better results.
The future work includes the introduction of more intelligent settings like stealing shares instead of bribing parties. Furthermore, we will also consider the collusion between two parties or between the attacker and one party and demonstrate its impact on the communication security.
References
Abraham S, Chengalur-Smith IS (2010) An overview of social engineering malware: trends, tactics, and implications. Technol Soc 32(3):183–196
Adat V, Dahiya A, Gupta BB (2018) Economic incentive based solution against distributed denial of service attacks for IoT customers. In: IEEE international conference on consumer electronics, ICCE 2018, Las Vegas, NV, USA, January 12–14, 2018, pp 1–5. https://doi.org/10.1109/ICCE.2018.8326280
Andrychowicz M, Dziembowski S, Malinowski D, Mazurek Ł (2014) Fair two-party computations via bitcoin deposits. In: International conference on financial cryptography and data security. Springer, Berlin, pp 105–121
Asharov G, Canetti R, Hazay C (2011) Towards a game theoretic view of secure computation. EUROCRYPT 6632:426–445
Bentov I, Kumaresan R (2014) How to use bitcoin to design fair protocols. In: International cryptology conference. Springer, Berlin, pp 421–439
Black JR (2000) Message authentication codes. University of California, Davis
Chen X, Li J, Weng J, Ma J, Lou W (2016) Verifiable computation over large database with incremental updates. IEEE Trans Comput 65(10):3184–3195
Derler D, Hanser C, Slamanig D (2015) Revisiting cryptographic accumulators, additional properties and relations to other primitives. In: Cryptographers' track at the RSA conference. Springer, San Francisco, pp 127–144
Fan L, Lei X, Yang N, Duong TQ, Karagiannidis GK (2017) Secrecy cooperative networks with outdated relay selection over correlated fading channels. IEEE Trans Veh Technol 66(8):7599–7603
Gao C, Cheng Q, He P, Susilo W, Li J (2018) Privacy-preserving Naive Bayes classifiers secure against the substitution-then-comparison attack. Inf Sci 444:72–88
Garay J, Katz J, Maurer U, Tackmann B, Zikas V (2013) Rational protocol design: cryptography against incentive-driven adversaries. In: 2013 IEEE 54th annual symposium on foundations of computer science (FOCS). IEEE, pp 648–657
Goldreich O (2009) Foundations of cryptography: volume 2, basic applications. Cambridge University Press, Cambridge
Gordon SD, Katz J (2012) Partial fairness in secure two-party computation. J Cryptol 25(1):14–40
Gordon SD, Hazay C, Lindell Y, Katz J (2008) Complete fairness in secure two-party computation. In: 40th annual ACM symposium on theory of computing (STOC). Citeseer
Groce A, Katz J (2012) Fair computation with rational players. In: Annual international conference on the theory and applications of cryptographic techniques. Springer, Berlin, pp 81–98
Gupta BB, Agrawal DP, Yamaguchi S (2016) Handbook of research on modern cryptographic solutions for computer and cyber security. IGI Global, Hershey
Haddi FL, Benchaïba M (2015) A survey of incentive mechanisms in static and mobile p2p systems. J Netw Comput Appl 58:108–118
Hadnagy C (2010) Social engineering: the art of human hacking. J Quant Spectrosc Radiat Trans 130(11):51–61
Halpern J, Teague V (2004) Rational secret sharing and multiparty computation: extended abstract. In: STOC 2004: proceedings of the 36th annual ACM symposium on theory of computing. ACM, New York, pp 623–632
Ibtihal M, Driss EO, Hassan N (2017) Homomorphic encryption as a service for outsourced images in mobile cloud computing environment. Int J Cloud Appl Comput 7(2):27–40
Jhaveri RH, Patel NM, Zhong Y, Sangaiah AK (2018) Sensitivity analysis of an attack-pattern discovery based trusted routing scheme for mobile ad hoc networks in industrial IoT. IEEE Access. https://doi.org/10.1109/ACCESS.2018.2822945
Jiang W, Wang G, Bhuiyan MZA, Wu J (2016) Understanding graph-based trust evaluation in online social networks: methodologies and challenges. ACM Comput Surv (CSUR) 49(1):10
Jiang L, Cheng Y, Yang L, Li J, Yan H, Wang X (2018) A trust-based collaborative filtering algorithm for e-commerce recommendation system. J Ambient Intell Humaniz Comput. https://doi.org/10.1007/s12652-018-0887-z
Katz J (2007) On achieving the best of both worlds in secure multiparty computation. In: Proceedings of the thirty-ninth annual ACM symposium on theory of computing. ACM, pp 11–20
Kumaresan R, Bentov I (2014) How to use bitcoin to incentivize correct computations. In: Proceedings of the 2014 ACM SIGSAC conference on computer and communications security. ACM, pp 30–41
Li J, Chen X, Xhafa F, Barolli L (2015) Secure deduplication storage systems supporting keyword search. J Comput Syst Sci 81(8):1532–1541
Li P, Li J, Huang Z, Li T, Gao CZ, Yiu SM, Chen K (2017) Multi-key privacy-preserving deep learning in cloud computing. Future Gener Comput Syst 74:76–85
Li T, Li J, Liu Z, Li P, Jia C (2018) Differentially private Naive Bayes learning over multiple data sources. Inf Sci 444:89–104
Lin Q, Yan H, Huang Z, Chen W, Shen J, Tang Y (2018) An ID-based linearly homomorphic signature scheme and its application in blockchain. IEEE Access. https://doi.org/10.1109/ACCESS.2018.2809426
Liu Z, Wu Z, Li T, Li J, Shen C (2018a) GMM and CNN hybrid method for short utterance speaker recognition. IEEE Trans Ind Inform. https://doi.org/10.1109/TII.2018.2799928
Liu Z, Huang Y, Li J, Cheng X, Shen C (2018b) DivORAM: towards a practical oblivious RAM with variable block size. Inf Sci 447:1–11
Meng W, Tischhauser E, Wang Q, Wang Y, Han J (2018) When intrusion detection meets blockchain technology: a review. IEEE Access. https://doi.org/10.1109/ACCESS.2018.2799854
Moran T, Naor M, Segev G (2009) An optimally fair coin toss. In: Reingold O (ed) Theory of cryptography. Springer, Berlin, pp 1–18
Tian H, Chen Z, Chang CC, Huang Y, Wang T, Huang Z, Cai Y, Chen Y (2018) Public audit for operation behavior logs with error locating in cloud storage. Soft Comput. https://doi.org/10.1007/s00500-018-3038-8
Wang Y, Li T, Chen L, Li P, Leung H, Liu Z, Xu Q (2016) Rational computing protocol based on fuzzy theory. Soft Comput 20(2):429–438
Wang Y, Zhang S, Tang Y, Su Q, Chen B (2018) Rational adversary with flexible utility in secure two-party computation. J Ambient Intell Humaniz Comput, pp 1–15
Wu TY, Lee WT, Guizani N, Wang TM (2014) Incentive mechanism for p2p file sharing based on social network and game theory. J Netw Comput Appl 41:47–55
Yang L, Han Z, Huang Z, Ma J (2018) A remotely keyed file encryption scheme under mobile cloud computing. J Netw Comput Appl 106:90–99
Yao AC (1982) Protocols for secure computations. In: 23rd annual symposium on foundations of computer science (SFCS 1982). IEEE, pp 160–164
Yu C, Li J, Li X, Ren X, Gupta BB (2018) Four-image encryption scheme based on quaternion Fresnel transform, chaos and computer-generated hologram. Multimed Tools Appl 77(4):4585–4608
Zhang X, Tan YA, Liang C, Li Y, Li J (2018) A covert channel over VoLTE via adjusting silence periods. IEEE Access 6:9292–9302
Zhao H, Yang X, Li X (2012) An incentive mechanism to reinforce truthful reports in reputation systems. J Netw Comput Appl 35(3):951–961
Zheng Q, Wang X, Muhammad KK, Wenfang Z, Gupta BB, Wei G (2018) A lightweight authenticated encryption scheme based on chaotic SCML for railway cloud service. IEEE Access 6(99):711–722
Ethics declarations
Funding
This study was funded by the National Natural Science Foundation of China (Grant Numbers 61502218 and 61771231) and the Natural Science Foundation of Shandong Province (Grant Numbers ZR2017MF010 and ZR2017MF062). H. Zhou was funded by the EU H2020 DOMINOES Project (Grant Number 771066).
Conflict of Interest
The authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Additional information
Communicated by B. B. Gupta.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Wang, Y., Metere, R., Zhou, H. et al. Incentive-driven attacker for corrupting two-party protocols. Soft Comput 22, 7733–7740 (2018). https://doi.org/10.1007/s00500-018-3342-3
Keywords
Cost corruption
Incentive-driven adversary
Two-party computation