Timing analysis of leader-based and decentralized Byzantine consensus algorithms
Abstract
We consider the Byzantine consensus problem in a partially synchronous system with strong validity. For this problem, two main algorithms (with different resilience) are described in the literature. These two algorithms assume a leader process. A decentralized variant (a variant without leader) of these two algorithms has also been given in a previous paper. Here, we compare analytically, in a round-based model, the leader-based variant of these algorithms with the decentralized variant. We show that, in most cases, the decentralized variant of the algorithm has a better worst-case execution time. Moreover, for the practically relevant case t≤2 (where t is the maximum number of Byzantine processes), this worst-case execution time is even at least as good as the execution time of the leader-based algorithms in fault-free runs.
Keywords
Distributed algorithms · Fault tolerance · Byzantine consensus · Timing analysis

1 Introduction
Consensus is a fundamental building block for fault-tolerant distributed systems. Algorithms for solving the consensus problem can be classified into two broad categories: leader-based algorithms that use the notion of a (changing) leader (a process with some specific role), and decentralized algorithms, where no such dedicated process is used. Most of the consensus algorithms proposed in the early 1980s, for both synchronous and asynchronous systems,^{1} are decentralized (e.g., [2, 11, 14, 15]). Later, a leader (also called coordinator) was introduced, in order to reduce the message complexity and/or improve the best-case performance (e.g., [5, 7, 10]).
Obviously, there is a tradeoff between the best-case performance and the worst-case performance of leader-based algorithms. For instance, a leader-based algorithm that requires α rounds in the best case (α is usually a constant) would typically require α(t+1) rounds in the worst case (where t is the maximum number of faulty processes). The first question we address is whether the decentralized version of the same algorithm requires fewer than α(t+1) rounds. If it does, then, since the best case of a leader-based algorithm is expected to outperform the best case of its decentralized version, there is an interesting tradeoff to analyze. The second question is to analyze the worst-case performance of the leader-based and the decentralized algorithm in terms of (i) number of rounds and (ii) execution time. The last question we address is whether the performance in terms of number of rounds allows us to predict the performance in terms of execution time.
This work is motivated by the results of Amir et al. [1] and Clement et al. [6]. These two papers have pointed out that the leader-based PBFT Byzantine consensus algorithm [4] is vulnerable to performance degradation. According to these two papers, a malicious leader can introduce latency into the global communication path simply by delaying the message that it has to send. Moreover, a malicious leader can manipulate the protocol timeout and slow down the system throughput without being detected. This has motivated the development of decentralized Byzantine consensus algorithms [3]. The next step, addressed here, is to compare analytically the execution time of decentralized and leader-based consensus algorithms. We study the question analytically in the model considered in [4] for PBFT, namely a partially synchronous system in which the end-to-end message transmission delay δ is unknown.
Our paper analyzes two Byzantine consensus algorithms that ensure strong validity, each one with a decentralized and a leader-based variant.^{2} One of these two algorithms is inspired by Fast Byzantine (FaB) Paxos [12], the other by PBFT [4]. Our analysis shows that there is a significant tradeoff between the leader-based and the decentralized variants. Mainly, it shows the superiority of the decentralized variants over the leader-based variants in different cases: First, the analysis shows that for the decentralized variants the worst-case performance and the fault-free case performance overlap, which is not the case for the leader-based variants. Second, it shows that, in most cases, the worst case of the decentralized variant of our two algorithms is better than the worst case of its leader-based variant. Third, for t≤2, it shows that the worst-case execution time of our decentralized variant is never worse than the execution time of the leader-based variant in fault-free runs.
Finally, our detailed timing analysis confirms that the number of rounds in an algorithm is not necessarily a good predictor for the performance of the algorithm.
Roadmap
In the next section, we give the system model for our analysis and introduce the round model that we use for the description of our algorithms. Section 3 presents in a modular way the consensus algorithms under consideration. In Sect. 4, we give the implementation of the round model. Section 5 contains our main contribution, the analysis and comparison of the algorithms. Section 6 discusses hybrid variants. Finally, we conclude the paper in Sect. 7.
2 Definitions and system model
2.1 System model
We consider a set Π of n processes, among which at most t can be faulty. A faulty process behaves arbitrarily. Non-faulty processes are called correct processes, and \(\mathcal{C}\) denotes the set of correct processes.
Processes communicate through message passing, and the system is partially synchronous [7]. Instead of separate bounds on the process speeds and the transmission delay, we assume that in every run there is a bound δ, unknown to the processes, on the end-to-end transmission delay between correct processes, that is, the time between the sending of a message and the time when this message is actually received (this incorporates the time for the transmission of the message and for possibly several steps until the process makes a receive step that includes this message). This is the same model considered in [4] for PBFT. We do not make use of digital signatures. However, the communication channels are authenticated, i.e., the receiver of a message knows the identity of the sender. In addition, we assume that processes have access to a local non-synchronized clock; for simplicity we assume that this clock is drift-free.
2.2 Round model
As in [7], we consider rounds on top of the system model. This improves the clarity of the algorithms, makes it simpler to change implementation options, and makes the timing analysis easier to understand. In the round model, processing is divided into rounds of message exchange.
Consensus algorithms consist of a sequence of phases, where each phase consists of one or more rounds. For our consensus algorithms, we eventually need a phase in which all rounds are synchronous and the first round is consistent. A round in which \(\mathcal{P}_{\mathrm{cons}}\) eventually holds will be called a WIC round (Weak Interactive Consistency, defined in [13]).
2.3 Byzantine consensus

Strong validity: If all correct processes have the same initial value, this is the only possible decision value.

Agreement: No two correct processes decide differently.

Termination: All correct processes eventually decide.
3 Consensus algorithms
In this section, we first present two consensus algorithms, namely MA and CL, both from [3, 13], that we use for our analysis. Both algorithms require a WIC round. Then we give two implementations of WIC rounds, one leader-based (L), the other decentralized (D).
3.1 Consensus algorithms with WIC rounds
3.1.1 The MA algorithm
Agreement follows from the fact that once a process has decided, at least n−2t correct processes have the same estimate x, and thus no other value will ever be adopted in line 9. A similar argument is used for validity. Termination follows from the fact that in a round 2ϕ−1≥GSR with a consistent reception vector μ^{ r } all correct processes adopt the same value in line 9, and thus decide on this value in round 2ϕ.
3.1.2 The CL algorithm

If a correct process p receives the same estimate v from n−t processes in round 3ϕ−2 (line 14), it accepts v as a valid vote and adds 〈v,ϕ〉 to its prevote_{ p } set (line 15). The prevote set is used later to detect an invalid vote (lines 28–30).

If a correct process p receives the same prevote 〈v,ϕ〉 from n−t processes in round 3ϕ−1 (line 20), it votes v (i.e., vote_{ p }←v) and updates its timestamp to ϕ (i.e., ts_{ p }←ϕ) at line 21.

If a correct process p receives the same vote v with the same timestamp ϕ from 2t+1 processes in round 3ϕ (line 26), it decides v at line 27.
The algorithm is safe with t<n/3. For termination, the three rounds of a phase must eventually be synchronous and the first round must be a WIC round.^{5}
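The three per-round rules of a CL phase can be sketched as follows. This is a minimal illustration of the thresholds only; the names (cl_phase_step, round_msgs) are hypothetical and do not come from the paper's pseudocode:

```python
from collections import Counter

def cl_phase_step(phi, n, t, round_msgs):
    """Apply the three per-round rules of one CL phase (illustrative).

    round_msgs maps a round offset to the list of values received:
      offset 1 (round 3*phi-2): estimates
      offset 2 (round 3*phi-1): prevotes (v, phi)
      offset 3 (round 3*phi):   votes (v, ts)
    Returns (prevote, vote, decision); each is None if its rule did not fire.
    """
    prevote = vote = decision = None
    # Round 3*phi-2: n-t identical estimates v -> accept <v, phi> as a prevote.
    v, c = Counter(round_msgs[1]).most_common(1)[0]
    if c >= n - t:
        prevote = (v, phi)
    # Round 3*phi-1: n-t identical prevotes <v, phi> -> vote v with ts = phi.
    pv, c = Counter(round_msgs[2]).most_common(1)[0]
    if c >= n - t and pv[1] == phi:
        vote = pv
    # Round 3*phi: 2t+1 identical votes <v, phi> -> decide v.
    vt, c = Counter(round_msgs[3]).most_common(1)[0]
    if c >= 2 * t + 1 and vt[1] == phi:
        decision = vt[0]
    return prevote, vote, decision
```

With n=4 and t=1, a phase in which all processes report the same value in every round leads to a decision, since every threshold (n−t=3, then 2t+1=3) is met.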
3.2 Implementation of a WIC round
3.2.1 Leader-based implementation
Algorithm 3, which appears in [13], implements WIC rounds using a leader. It implements a WIC round if eventually the coordinator is correct and all three rounds are synchronous. If a correct process is the coordinator, and round ρ=3 is synchronous, all processes receive the same set of messages from this process in round ρ=3.
In round ρ=2, the coordinator compares the value received from some process p with the value indirectly received from the other processes. If at least 2t+1 identical values have been received, the coordinator keeps that value; otherwise it sets the value to ⊥. This guarantees that if the coordinator keeps v, at least t+1 correct processes have received v from p in round ρ=1. Finally, in round ρ=3 every process sends the values received in round ρ=1, or ⊥, to all. Each process verifies whether at least t+1 processes validate the value that it has received from the coordinator in round ρ=3. Rounds ρ=1 and ρ=3 are used to ensure that a faulty leader cannot forge the message of another process (integrity).
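The coordinator's filtering rule and the per-process validation rule can be sketched as follows (a hedged illustration; coord_filter and validate are hypothetical names, and ⊥ is modeled as None):

```python
from collections import Counter

BOTTOM = None  # stands for the special value ⊥

def coord_filter(values_for_p, t):
    """Round 2 at the coordinator (sketch): keep the value relayed for
    process p only if at least 2t+1 processes reported the same value,
    otherwise replace it by ⊥."""
    v, c = Counter(values_for_p).most_common(1)[0]
    return v if c >= 2 * t + 1 else BOTTOM

def validate(coord_value, echoed_values, t):
    """Round 3 at each process (sketch): accept the value received from the
    coordinator only if at least t+1 processes echoed the same value."""
    if coord_value is BOTTOM:
        return BOTTOM
    return coord_value if echoed_values.count(coord_value) >= t + 1 else BOTTOM
```

With t=1, a value reported by three of four processes survives the filter, and a single dissenting echo cannot prevent validation.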
Since a WIC round can be ensured only with a correct coordinator, we need to ensure that the coordinator is eventually correct. In Sect. 4 we do so by using a rotating coordinator. A WIC round using this leader-based implementation needs three “normal” rounds.
3.2.2 Decentralized implementation
Similarly to the leader-based implementation, Algorithm 4 requires n>3t. On the other hand, a WIC round using this decentralized implementation needs t+1 “normal” rounds.
3.3 The four combinations
Performance of algorithms in terms of number of rounds

           Best case    Worst case
  MA-D     t+2          t+2
  MA-L     4            4(t+1)
  CL-D     t+3          t+3
  CL-L     5            5(t+1)
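The table can be reproduced by a small helper (illustrative only; the hyphenated algorithm names are ours). The factor t+1 for the leader-based variants reflects that up to t faulty leaders may each waste a full phase before a correct leader is reached:

```python
def rounds(alg, t, worst=False):
    """Number of rounds until decision for one consensus instance, following
    the table above (D = decentralized, L = leader-based)."""
    best = {"MA-D": t + 2, "MA-L": 4, "CL-D": t + 3, "CL-L": 5}
    r = best[alg]
    if worst and alg.endswith("L"):
        r *= t + 1  # up to t faulty leaders force t additional views
    return r
```

For t=2, for example, MA-L needs 12 rounds in the worst case while MA-D needs only 4, which is the t≤2 regime discussed in the introduction.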
4 Round implementation
As already mentioned in Sect. 2.1, we consider a partially synchronous system with an unknown bound δ on the end-to-end transmission delay between correct processes. The main technique in the literature for coping with the unknown δ is an adaptive timeout: the first phase of an algorithm starts with a small timeout Γ_{0}, which is increased from time to time. The timeout required by an algorithm can be calculated from the bound δ and the number of rounds needed by one phase of the algorithm. The approach proposed in [7] is to increase the timeout linearly, while more recent works, e.g., PBFT [4], increase the timeout exponentially.
The main question is when to increase the timeout. Increasing the timeout in every phase provides a simple solution, in which all processes adopt the same timeout for a given phase. However, this is not an efficient solution, since processes might increase the timeout unnecessarily. A more efficient solution is to increase the timeout only when a correct process requires it. This occurs typically when a correct process is unable to terminate the algorithm (i.e., decide) with the current timeout. The problem with this solution is that different processes might increase the timeout at different points in time. Therefore, an additional synchronization mechanism is needed in this case.
For leader-based algorithms, a related question is the relationship between leader change and timeout change. Most of the existing protocols apply both timeout and leader modifications at the same time [1, 4, 6, 7, 9, 12]. Our round implementation allows decoupling timeout modification and leader change. We show that such a strategy performs better than the traditional strategies in the worst case.
4.1 The algorithm
Each process p keeps a round number r_{ p } and a view number v_{ p }, both initially equal to 1. While the round number also corresponds to the round number of the consensus algorithm, the view number increases only upon reconfiguration. Thus, the leader and the timeout are functions of the view number. The leader changes whenever the view changes, based on the rotating leader paradigm (line 7); the leader value is simply ignored in decentralized algorithms. The timeout does not necessarily change whenever the view changes.

After line 7, a process starts the input & send part, in which it queries the input queue for new proposals (using a function input(), line 8), initializes new slots in the state vector for each new proposal (line 10), calls the send function of all active consensus instances (line 13), and sends the resulting messages (line 16). The process then sets a timeout for the current round using a deterministic function Γ of its view number v_{ p } (line 17), and starts the receive part, where it collects messages (line 22). This part uses an init/echo message scheme for round synchronization, based on ideas that already appear in [7, 8, 16]; it is described later. Next, in the comp. & output part, the process calls the state transition function of each active instance (line 41), and outputs any new decisions (line 44) using the function output(). Finally, a check is performed at the end of each phase, i.e., only every α rounds (line 45), where α is the number of rounds in a phase; the check is skipped if a view change is already under way (the view changes anyway). The check is whether all instances started at the beginning of the phase have decided (lines 45–46). If not, the process concludes that the current view was not successful (either the current timeout was too small or the coordinator was faulty), and it expresses its intention to start the next view by sending an Init message for view v_{ p }+1 (line 47).
The function init(v) (line 10) returns the initial state of the consensus algorithm for initial value v, while decision(state) (line 42) returns the decision value of the current state of the consensus algorithm, or ⊥ if the process has not yet decided.
Receive part
To prevent a Byzantine process from increasing the round number and view number unnecessarily, the algorithm uses two different types of messages, Init messages and Start messages. Process p expresses the intention to enter a new round r or new view v by sending an Init message. For instance, when the timeout for the current round expires, the process, instead of immediately starting the next round, sends an Init message (line 20) and waits until enough processes time out. If process p in round r_{ p } and view v_{ p } receives at least 2t+1 Init messages for round r_{ p }+1 (line 33), resp. view v_{ p }+1 (line 35), it advances to round r_{ p }+1, resp. to view v_{ p }+1, and sends a Start message with the current round and view (line 16). If the process receives t+1 Init messages for round r+1 with r≥r_{ p }, it immediately enters round r (line 23) and sends an Init message for round r+1. In a similar way, if the process receives t+1 Init messages for view v+1 with v≥v_{ p }, it immediately enters view v (line 28) and sends an Init message for view v+1.
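The round-advancement rules of the receive part can be sketched as follows (an illustrative reduction to round numbers only; views are handled analogously, and handle_init_msgs is a hypothetical name):

```python
def handle_init_msgs(state, init_counts, t):
    """Round-advancement rules (sketch). init_counts[r] is the number of
    distinct processes from which an Init message for round r was received."""
    # 2t+1 Init messages for the next round: start it and send a Start message.
    # The 2t+1 threshold guarantees at least t+1 correct senders.
    if init_counts.get(state["round"] + 1, 0) >= 2 * t + 1:
        state["round"] += 1
        return ("Start", state["round"])
    # t+1 Init messages for some round r+1 with r >= current round: catch up
    # to round r and relay an Init for r+1 (t+1 messages cannot all come from
    # Byzantine processes, so at least one correct process wants round r+1).
    for r_next, count in init_counts.items():
        if count >= t + 1 and r_next - 1 >= state["round"]:
            state["round"] = r_next - 1
            return ("Init", r_next)
    return None  # not enough evidence to move
```

With t=1, three Init messages for the next round trigger a Start, while two Init messages for a far-ahead round only trigger a catch-up and a relayed Init.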
Properties of Algorithm 5
 1.
If one correct process starts round r (resp. view v), then there is at least one correct process that wants to start round r (resp. view v). This is because at most t processes are faulty (see Lemma 1).
 2.
If all correct processes want to start round r+1 (resp. view v+1), then all correct processes eventually start round r+1 (resp. view v+1). This is because n−t≥2t+1 and lines 33–36 (see Lemma 2).
 3.
If one correct process starts round r (resp. view v), then all correct processes eventually start round r (resp. view v). This is because a correct process starts round r (resp. view v) if it receives 2t+1 Init messages for round r (resp. view v). Any other correct process in round r′<r (resp. view v′<v) will receive at least t+1 Init messages for round r (resp. view v). By lines 23 to 26, these correct processes will start round r−1 (resp. view v−1) and will send an Init message for round r (resp. view v), see line 27. From item 2, all correct processes eventually start round r (resp. view v). The complete proof is given by Lemmas 3–5.
4.2 Timing properties of Algorithm 5
 1.
If process p starts round r (resp. view v) at time τ, all correct processes will start round r (resp. view v) by time τ+2δ. This is because p has received 2t+1 Init messages for round r (resp. view v) at time τ. All correct processes receive at least t+1 Init messages by time τ+δ, start round r−1 (resp. view v−1) and send an Init message for round r (resp. view v). This message takes at most δ time to be received by all correct processes. Therefore, all correct processes receive at least 2t+1 Init messages by time τ+2δ, and start round r (resp. view v). The complete proof is given by Lemma 5.
 2.
If a correct process p starts round r (view v) at time τ, it will start round r+1 at the latest by time τ+3δ+Γ(v). By item 1, all correct processes start round r by time τ+2δ. Then they wait for the timeout of round r, which is Γ(v). Therefore, by time τ+2δ+Γ(v) all correct processes time out for round r and send an Init message for round r+1, which takes δ time to be received by all correct processes. Finally, at the latest by time τ+3δ+Γ(v), process p receives 2t+1 Init messages for round r+1 and starts round r+1. The complete proof is given by Lemma 6.
 3.
A timeout Γ(v)≥3δ for round r (view v) ensures that if a correct process starts round r at time τ, it receives all round r messages from all correct processes before the expiration of the timeout (at time τ+3δ). By item 1, all correct processes start round r by time τ+2δ. The message of round r takes an additional δ time. Therefore, a timeout of at least 3δ ensures the stated property. The complete proof is given by Lemma 7.
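The three timing bounds above can be summarized in a small helper (a sketch; round_bounds is our name, not from the paper):

```python
def round_bounds(delta, gamma_v):
    """Worst-case timing bounds for Algorithm 5, following items 1-3:
    - spread: all correct processes start a round within 2*delta of the first;
    - duration: a process starts the next round within 3*delta + Gamma(v);
    - timeout_sufficient: Gamma(v) >= 3*delta guarantees that every correct
      message of the round is received before the timeout expires."""
    spread = 2 * delta
    duration = 3 * delta + gamma_v
    timeout_sufficient = gamma_v >= 3 * delta
    return spread, duration, timeout_sufficient
```

For δ=10, a timeout of 30 is the smallest one that catches all correct messages, and a round then lasts at most 60 time units.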
4.3 Parameterizations of Algorithm 5
Summary of different strategies

  Strategy   A         B              C
  Γ(v)       vΓ_{0}    2^{v−1}Γ_{0}   2^{⌊(v−1)/(t+1)⌋}Γ_{0}
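The three strategies translate directly into a timeout function Γ(v) (a sketch following the table's definitions; strategy C doubles the timeout only once per full leader rotation of t+1 views):

```python
def timeout(strategy, v, t, gamma0):
    """Round timeout Gamma(v) for view v under strategies A, B, C:
    A grows linearly, B doubles every view, C doubles every t+1 views."""
    if strategy == "A":
        return v * gamma0
    if strategy == "B":
        return 2 ** (v - 1) * gamma0
    if strategy == "C":
        return 2 ** ((v - 1) // (t + 1)) * gamma0
    raise ValueError("unknown strategy: " + strategy)
```

For v=3, t=1 and Γ_{0}=2, strategy A yields 6, B yields 8, and C yields 4, since C has only doubled once after one full rotation of two views.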
4.4 Correctness proofs of Algorithm 5
In the sequel, let τ_{ G } denote the first time after which the end-to-end transmission delay bound δ holds. All messages sent before τ_{ G } are received at the latest by time τ_{ G }+δ. Let v_{0} denote the largest view number such that no correct process has sent a Start message for view v_{0} by time τ_{ G }, but some correct process has sent a Start message for view v_{0}−1. Let r_{0} denote the largest round number such that no correct process has sent a Start message for round r_{0} by time τ_{ G }, but some correct process has sent a Start message for round r_{0}−1. We prove the results related to the view number; similar results hold for round numbers:
Lemma 1
Let p be a correct process that sends message 〈Start,−,v,−,p〉 at some time τ_{0}. Then at least one correct process q has sent message 〈Init,v,−,q〉 at a time τ≤τ_{0}.
Proof
Assume by contradiction that no correct process q has sent message 〈Init,v,−,q〉. This means that a correct process can receive at most t messages 〈Init,v,−,−〉 in line 28. Therefore, no correct process executes line 32, and no correct process starts view v because of line 35, which is a contradiction. □
Lemma 2
Let all correct processes p send message 〈Init,v,−,p〉 at some time τ_{0}. Then all correct processes p will send message 〈Start,−,v,−,p〉 by time max{τ_{0},τ_{ G }}+δ.
Proof
If all correct processes p send message 〈Init,v,−,p〉 at some time τ_{0}, then all correct processes are in view v−1 at time τ_{0} by lines 45–47. A correct process q in view v−1 receives at least n−t≥2t+1 messages 〈Init,v,−,p〉 by time τ_{0}+δ if τ_{0}≥τ_{ G }, or by time τ_{ G }+δ if τ_{0}<τ_{ G }. From lines 35 and 36, q starts view v by time max{τ_{0},τ_{ G }}+δ. □
Lemma 3
Every correct process p sends message 〈Start,−,v_{0}−1,−,p〉 by time τ_{ G }+2δ.
Proof
We assume that there is a correct process p with v_{ p }=v_{0}−1 at time τ_{ G }. This means that p has received at least 2t+1 messages 〈Init,v_{0}−1,−,−〉 (line 35). Hence, at least t+1 correct processes are in view v_{0}−2 and have sent a message 〈Init,v_{0}−1,−,−〉. These messages will be received by all correct processes at the latest by time τ_{ G }+δ. Therefore, all correct processes in a view <v_{0}−1 receive at least t+1 messages 〈Init,v_{0}−1,−,−〉 by time τ_{ G }+δ, start view v_{0}−2 (line 31) and send a message 〈Init,v_{0}−1,−,−〉 (line 32). These messages are received by all correct processes by time τ_{ G }+2δ. Because n−t>2t, all correct processes receive at least 2t+1 messages 〈Init,v_{0}−1,−,−〉 by time τ_{ G }+2δ (line 35), start view v_{0}−1 (line 36), and send a message 〈Start,−,v_{0}−1,−,−〉 (line 16). □
Lemma 4
Let p be the first (not necessarily unique) correct process that sends message 〈Start,−,v,r,p〉 with v≥v_{0} at some time τ≥τ_{ G }. Then no correct process sends message 〈Start,−,v+1,−,−〉 before time τ+Γ(v). Moreover, no correct process sends message 〈Init,v+2,−,−〉 before time τ+Γ(v).
Proof
For the Start message, assume by contradiction that process q is the first correct process that sends message 〈Start,−,v+1,1,q〉 before time τ+Γ(v). Process q can send this message only if it receives 2t+1 messages 〈Init,v+1,−,−〉 (line 35). This means that at least t+1 correct processes are in view v and have sent 〈Init,v+1,−,−〉. In order to send 〈Init,v+1,−,−〉, a correct process spends at least Γ(v) time in view v (line 19). So message 〈Start,−,v+1,−,q〉 is sent by correct process q at the earliest at time τ+Γ(v). A contradiction.
For the Init message, since no correct process starts view v+1 before time τ+Γ(v), no correct process sends message 〈Init,v+2,−,q〉 before time τ+Γ(v). □
Lemma 5
Let p be the first (not necessarily unique) correct process that sends message 〈Start,−,v,−,p〉 with v≥v_{0} at some time τ≥τ_{ G }. Then every correct process q sends message 〈Start,−,v,−,q〉 by time τ+2δ.
Proof
Note that by the assumption, all view v≥v_{0} messages are sent at or after τ_{ G }, and thus they are received by all correct processes δ time later. By Lemma 4, there is no message 〈Start,−,v′,−,−〉 with v′>v in the system before τ+Γ(v). Process p sends message 〈Start,−,v,−,p〉 if it receives 2t+1 messages 〈Init,v,−,−〉 (line 35). This means that at least t+1 correct processes are in view v−1 and have sent message 〈Init,v,−,−〉 at the latest by time τ. All correct processes in a view <v receive at least t+1 messages 〈Init,v,−,−〉 at the latest by time τ+δ, start view v−1 (line 31) and send 〈Init,v,−,−〉 (line 32), which is received at most δ time later. Because n−t>2t, every correct process q receives at least 2t+1 messages 〈Init,v,−,−〉 by time τ+2δ (line 35), starts view v (line 36), and sends message 〈Start,−,v,−,q〉 (line 16). □
The following two lemmas hold for round numbers.
Lemma 6
If a correct process p sends message 〈Start,−,v,r,p〉 at time τ>τ_{ G }, it will send message 〈Start,−,v,r+1,p〉 at the latest by time τ+3δ+Γ(v).
Proof
From Lemma 5 (similar result for round numbers), all correct processes q send message 〈Start,−,v,r,q〉 at the latest by time τ+2δ. Then they wait for the timeout of round r, which is Γ(v) (lines 17 and 19). Therefore, by time τ+2δ+Γ(v) all correct processes time out for round r and send message 〈Init,v,r+1,q〉 to all (line 20), which takes δ time to be received by all correct processes. Finally, at the latest by time τ+3δ+Γ(v), process p receives n−t≥2t+1 messages 〈Init,v,r+1,−〉 and starts round r+1 (line 36). □
Lemma 7
A timeout Γ(v)≥3δ for round r ensures that if a correct process p sends message 〈Start,−,v,r,p〉 to all at time τ≥τ_{ G }, it will receive all round r messages 〈Start,−,v,r,q〉 from all correct processes q before the expiration of the timeout (at time τ+3δ).
Proof
From Lemma 5 (similar result for round numbers), all correct processes q send message 〈Start,−,v,r,q〉 to all at the latest by time τ+2δ. The message of round r takes an additional δ time. Therefore, a timeout of at least 3δ ensures the stated property. □
Therefore, we have the following theorem.
Theorem 1
Algorithm 5 with n>3t ensures the existence of a round r_{0} such that \(\forall r \geq r_{0}: \mathcal{P}_{\mathrm{sync}}(r)\).
5 Timing analysis
In this section, we analyze the impact of the strategies A, B and C on our four consensus algorithms. We start with the analysis of the round implementation. Then we use these results to compute the execution time of k consecutive instances of consensus using the four algorithms MA-L, MA-D, CL-L, and CL-D.
Parameters for algorithms MA and CL

           Fault-free case    Worst case
           α      β           α      β
  MA-D     t+2    0           t+2    0
  MA-L     4      0           4      t
  CL-D     t+3    0           t+3    0
  CL-L     5      0           5      t
5.1 Best case analysis
In the best case we have Γ_{0}=δ and there are no faults. Processes start a round at the same time, a round takes 2δ (δ for the timeout and δ for the Init messages), and processes decide at the end of each phase (=α rounds). Therefore, the decision for k consecutive instances of consensus occurs at time 2δαk. Obviously, the algorithm with the smallest α (that is, the leader-based variant, or the decentralized one when t≤2) performs best in this case.
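The best-case formula translates directly into code (a trivial sketch under the assumptions above: Γ_{0}=δ, no faults, a decision every α rounds):

```python
def best_case_time(delta, alpha, k):
    """Best-case decision time for k consecutive consensus instances:
    each round costs 2*delta (delta for the timeout plus delta for the
    Init exchange) and each decision takes one phase of alpha rounds."""
    return 2 * delta * alpha * k
```

For instance, MA-L (α=4) decides three instances by time 240 when δ=10, while MA-D with t=3 (α=5) needs 300, which is the best-case advantage of the leader-based variant for larger t.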
5.2 Worst case analysis
We now compute τ_{ X }(k,α,β), the worst-case execution time until the kth decision when using strategy X∈{A,B,C}. By item 3 in Sect. 4.2 (and Lemma 7), no decision occurs until the round timeout is greater than or equal to 3δ. We denote below with v_{0} the view that corresponds to the first decision (k=1).
5.2.1 Strategy A
5.2.2 Strategy B
5.2.3 Strategy C
5.2.4 Comparison
Table 3 gives α and β for all algorithms we discussed. For the worst case analysis, we distinguish two cases: the worst fault-free case, which is the worst case in terms of timing for a run without faulty processes, and the general worst case, which gives the values for a run in which t processes are faulty.
We first focus on the first instance of consensus, that is, we fix k=1 and assume δ=10Γ_{0}, which gives ⌈log_{2}(3δ/Γ_{0})⌉=5, i.e., the transmission delay is estimated correctly after doubling the timeout five times. The result is depicted in Fig. 2. We first observe, as expected, that the fault-free case and the worst case are the same for the decentralized versions (curves D+A and D+B). For the cases t<3, which are the relevant ones in real systems, the decentralized algorithm decides, for each strategy, faster in the worst case than the leader-based version of the same algorithm does in the fault-free case. For larger t, the leader-based algorithms with strategy B are faster in the fault-free case (L+B dotted curves), but perform worse in the worst case (L+B dashed curves): in the worst case, the execution time of leader-based algorithms with strategy B grows exponentially with the number of faults. This shows the interest of strategy C (L+C dashed curves) in the worst case.
One can also observe that the behavior of the different algorithms shown in Fig. 2 cannot be derived from Table 1, even though the main results match. This means that the performance of the algorithms in terms of number of rounds does not fully predict their performance in terms of execution time; even the same algorithm behaves differently under different timeout strategies. This confirms the need for a detailed timing analysis.
In all graphs, algorithm MA performs better than algorithm CL, since it requires fewer rounds, as shown in Table 1. Otherwise, both algorithms behave similarly.
6 Discussion
There are two important additional issues that we would like to emphasize before concluding the paper: the choice of the partially synchronous system model and the possibility of obtaining a hybrid algorithm.
6.1 System model issue
 1.
Each correct process increases its timeout according to the timeout strategy until its first consensus decision.
 2.
Then the process asks to reset the timeout to Γ_{0} by sending a reset message.
 3.
If a correct process receives 2t+1 reset messages, it resets the timeout to the initial timeout, i.e., Γ_{0}.
 4.
If a correct process receives t+1 reset messages, it sends a reset message.
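The reset rules in steps 2–4 can be sketched as a message handler (illustrative names; on_reset_msg and the state dictionary are not from the paper):

```python
def on_reset_msg(state, sender, t):
    """Timeout-reset rules, sketching steps 2-4: t+1 reset messages make the
    process join in (at least one must come from a correct process); 2t+1
    reset messages reset the timeout to the initial value Gamma_0."""
    state["resets"].add(sender)
    actions = []
    if len(state["resets"]) >= t + 1 and not state["sent_reset"]:
        state["sent_reset"] = True
        actions.append("send_reset")        # step 4: relay a reset message
    if len(state["resets"]) >= 2 * t + 1:
        state["timeout"] = state["gamma0"]  # step 3: reset to Gamma_0
        actions.append("timeout_reset")
    return actions
```

The two thresholds mirror the Init-message rules of Algorithm 5: t+1 messages prove that some correct process wants the reset, while 2t+1 make the reset safe to apply.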
6.2 Hybrid algorithm issue
The second issue concerns the leader-based versus the decentralized WIC round implementation. The leader-based version has better performance in the best case, while the decentralized version performs better in the worst case. By combining the two approaches, we can obtain an algorithm that performs well in both cases. The idea is the following: in the first phase (or view), run the leader-based algorithm, i.e., MA-L or CL-L. If the first view is not successful, i.e., if there is a view change, then switch to the corresponding decentralized algorithm, i.e., MA-D or CL-D.
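The switching rule is a one-liner (a sketch; wic_variant and the hyphenated names are ours): the leader-based variant is used only while the first view lasts.

```python
def wic_variant(view, alg):
    """Hybrid WIC-round selection (sketch): run the leader-based variant in
    the first view; after any view change, switch to the decentralized one."""
    return alg + ("-L" if view == 1 else "-D")
```

This gives the hybrid algorithms (MA-H, CL-H) the best-case α of the leader-based variant while bounding the worst case by a single failed leader-based view plus one decentralized phase.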
Parameters for the hybrid algorithms

           Fault-free case    Worst case
           α      β           α      β
  MA-H     4      0           t+6    0
  CL-H     5      0           t+8    0
7 Conclusion
We compared the leader-based and the decentralized variant of two typical Byzantine consensus algorithms with strong validity in an analytical way using the same round implementation.
Our analysis allows us to better understand the tradeoff between the leader-based and the decentralized variant of an algorithm. The results show a surprisingly clear preference for the decentralized version: it has a better worst-case performance under the best strategy. Moreover, for the practically relevant cases t≤2, the decentralized variant is at least as good as the fault-free case of the leader-based variant. Finally, in the best case, for t≤2, the decentralized variant is at least as good as the leader-based variant.
The results of our detailed timing analysis confirm that the number of rounds is not necessarily a good predictor of the performance of a consensus algorithm.
Footnotes
 1.
In asynchronous systems, using randomization to solve probabilistic consensus.
 2.
A similar study could be done for the consensus algorithms that ensure only weak validity, such as FaB Paxos and PBFT. The results and conclusion would be similar.
 3.
FaB Paxos is expressed using “proposers”, “acceptors” and “learners.” MA is expressed without these roles. Moreover, FaB Paxos solves consensus with weak validity, while MA solves consensus with strong validity. In addition, MA is expressed using rounds.
 4.
PBFT solves a sequence of consensus instances with weak validity, while CL solves consensus with strong validity.
 5.
The proofs are given in [3].
 6.
Note that from \(v = 1 + (t+1)\lceil\log _{2}\frac{3\delta}{\varGamma_{0}} \rceil\) it follows that \(\frac{v-1}{t+1}\) is an integer.
References
 1. Amir Y, Coan B, Kirsch J, Lane J (2008) Byzantine replication under attack. In: DSN’08, pp 197–206
 2. Ben-Or M (1983) Another advantage of free choice (extended abstract): completely asynchronous agreement protocols. In: PODC’83. ACM, New York, pp 27–30. doi:10.1145/800221.806707
 3. Borran F, Schiper A (2010) A leader-free Byzantine consensus algorithm. In: ICDCN. Lecture notes in computer science (LNCS). Springer, Berlin, pp 67–78
 4. Castro M, Liskov B (2002) Practical Byzantine fault tolerance and proactive recovery. ACM Trans Comput Syst 20(4):398–461
 5. Chandra TD, Toueg S (1996) Unreliable failure detectors for reliable distributed systems. J ACM 43(2):225–267
 6. Clement A, Wong E, Alvisi L, Dahlin M, Marchetti M (2009) Making Byzantine fault tolerant systems tolerate Byzantine faults. In: NSDI’09. USENIX Association, Berkeley, pp 153–168
 7. Dwork C, Lynch N, Stockmeyer L (1988) Consensus in the presence of partial synchrony. J ACM 35(2):288–323
 8. Hutle M, Schiper A (2007) Communication predicates: a high-level abstraction for coping with transient and dynamic faults. In: Dependable systems and networks (DSN 2007). IEEE Press, New York, pp 92–100
 9. Kotla R, Alvisi L, Dahlin M, Clement A, Wong E (2007) Zyzzyva: speculative Byzantine fault tolerance. Oper Syst Rev 41(6):45–58. doi:10.1145/1323293.1294267
 10. Lamport L (1998) The part-time parliament. ACM Trans Comput Syst 16(2):133–169
 11. Lamport L, Shostak R, Pease M (1982) The Byzantine generals problem. ACM Trans Program Lang Syst 4(3):382–401. doi:10.1145/357172.357176
 12. Martin JP, Alvisi L (2006) Fast Byzantine consensus. IEEE Trans Dependable Secure Comput 3(3):202–215. doi:10.1109/TDSC.2006.35
 13. Milosevic Z, Hutle M, Schiper A (2009) Unifying Byzantine consensus algorithms with weak interactive consistency. In: OPODIS, pp 300–314
 14. Pease M, Shostak R, Lamport L (1980) Reaching agreement in the presence of faults. J ACM 27(2):228–234. doi:10.1145/322186.322188
 15. Rabin M (1983) Randomized Byzantine generals. In: Proc symposium on foundations of computer science, pp 403–409
 16. Srikanth TK, Toueg S (1987) Optimal clock synchronization. J ACM 34(3):626–645. doi:10.1145/28869.28876