Self-Stabilizing Byzantine Clock Synchronization with Optimal Precision
Abstract

We give a simple analysis of the Lynch and Welch protocol with improved bounds on skew and tolerable difference in clock rates, building upon the main ingredient of their protocol, called approximate agreement.

We give a modified version of the protocol that reduces the frequency and amount of communication between the nodes. The modification adds a step that adjusts the clock rates by another application of approximate agreement. The skew bound achieved is asymptotically optimal for suitable choices of parameters.

We present a method to add self-stabilization to the above protocols while preserving their skew bounds. The heart of the method is a coupling scheme that leverages a self-stabilizing protocol with a larger skew.
Keywords
Distributed algorithm · Stabilization time · Phase and frequency correction

1 Introduction
When designing a synchronous distributed system, the most fundamental question is how to generate a system clock at all the n nodes, i.e., how to periodically generate a distinguished event or pulse at each node so that the actual time of the i^{th} pulse at each node is close to the actual time of the i^{th} pulse of any other node. This clock synchronization problem is easily solved if each node is reliable and equipped with an accurate clock. However, neither is always the case. For instance, in space applications accurate clocks such as quartz oscillators are prone to failure, so less accurate electronic oscillators are preferable, and nodes are subject to radiation-induced transient faults. Thus, nodes have to frequently adjust their clocks by sending and receiving messages and executing a suitable algorithm. The inaccuracy of the clocks is modelled by assigning to each node a clock rate or frequency that may vary, but stays within fixed bounds. We measure the precision of the algorithm by its skew, which is the maximum, over all pulses i and pairs of correct nodes, of the time difference between the i^{th} pulses of the respective nodes.
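As a concrete reading of this definition, the following sketch (illustrative only; the function and variable names are ours, not from the paper) computes the skew of a finished execution from the real times of each pulse at each correct node:

```python
def skew(pulse_times):
    """Skew: maximum, over all pulses i and pairs of correct nodes,
    of the real-time difference between the i-th pulses of the two nodes.

    pulse_times[i][v] is the real time of the i-th pulse at correct node v.
    """
    return max(
        abs(t_v - t_w)
        for times in pulse_times  # one entry per pulse i
        for t_v in times
        for t_w in times
    )
```

For instance, two nodes pulsing at times (0.0, 0.1) and then (1.0, 1.05) yield a skew of 0.1, determined by the first pulse.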
The clock synchronization task is mission-critical, both in terms of performance and reliability. Therefore, fault-tolerant distributed clock synchronization algorithms have found their way into real-world systems with high reliability demands. For example, the Time-Triggered Protocol (TTP) [13] and FlexRay [9, 10] tolerate Byzantine failure (i.e., arbitrary out-of-spec behavior) of less than n/3 nodes and are utilized in cars and airplanes. This means that these algorithms guarantee that correct nodes continue to generate synchronized pulses. They are based on the classic Byzantine clock synchronization algorithm by Lynch and Welch [19].
Another application domain with even more stringent requirements is hardware for spacecraft and satellites. Here, a reliable system clock is in demand despite frequent transient failure of any number of nodes due to radiation. The property of recovering from an unknown state once the transient failures have stopped is known as self-stabilization. This is essential for the space domain, but also highly desirable in systems utilizing TTP or FlexRay. This claim is supported by the presence, in both protocols, of various mechanisms that monitor the nodes and perform resets in case of observed faulty behavior. Thus, it is of interest to devise synchronization algorithms that stabilize on their own, instead of relying on monitoring techniques: these need to be highly reliable as well, or their failure may bring down the system due to erroneous detection of or response to faults.
Thus, self-stabilizing Byzantine clock synchronization algorithms with small skew have critical and useful applications and, accordingly, have received significant attention in the past (e.g., [2, 8]). However, existing algorithms cannot achieve asymptotically optimal skew. Our key motivation and main goal is to build a self-stabilizing Byzantine clock synchronization algorithm with asymptotically optimal skew.
Our Contribution
 1.
We present a simplified analysis of the Lynch-Welch algorithm. We show that the algorithm converges to a steady-state error E ∈ O((𝜗 − 1)d + U), where hardware clock rates are between 1 and 𝜗 and messages take between d − U and d time to arrive at their destination. This works even for very inaccurate clocks: it suffices if 𝜗 ≤ 1.1, although the skew bound goes to infinity as 𝜗 approaches the critical value.^{1} However, for, e.g., 𝜗 ≤ 1.01, Theorem 1 bounds the skew by E(𝜗, d, U) ≤ 2.222(𝜗 − 1)d + 4.533U.
 2.
We give a conceptually simple extension of the previous algorithm that, in addition to changing the (logical) clock values, also adjusts the clock rates using approximate agreement. If the clocks are sufficiently stable, i.e., the maximum rate of change ν of clock rates is sufficiently small, then we can significantly increase the nominal round length T and decrease the frequency of communication without substantially affecting skew. Concretely, if 𝜗 ≤ 1.01, max{F, U}≪ T (where nodes’ clocks are initialized within F of each other), and max{(𝜗 − 1)^{2}T, νT^{2}}≪ U, it is possible to guarantee a skew of O(U) (see Corollary 12 and subsequent explanation), which is asymptotically optimal.
 3.
We introduce a generic scheme that enables making either of these algorithms self-stabilizing. The scheme couples one of the above (non-stabilizing) algorithms with a self-stabilizing Byzantine clock synchronization algorithm of larger skew 2d.^{2} The coupled algorithm is both self-stabilizing and has the original smaller skew of the non-stabilizing algorithm (Theorem 4 and Theorem 5). The self-stabilizing Byzantine clock synchronization algorithm that we utilize is FATAL [4, 5], which already offers a suitable interface to our coupling mechanism.
 1.
A prototype FPGA implementation [12] strongly indicates that these algorithms are also easy to implement in hardware.^{3}
 2.
There is no mathematical analysis of a clock rate or frequency correction scheme in the literature that can be readily applied to yield accurate bounds for simple algorithms. We provide such a tailored analysis of our second algorithm.
In contrast to the above contributions, the coupling scheme we use to combine our non-stabilizing algorithms with the FATAL algorithm showcases a novel technique of independent interest. We leverage FATAL’s clock “beats” to effectively (re)initialize the synchronization algorithm we couple it to. Here, care has to be taken to prevent such resets from occurring during regular operation of the non-stabilizing algorithm, as this could result in large skews or even spurious clock pulses. The solution is a feedback mechanism that enables the synchronization algorithm to actively trigger the next beat of FATAL at the appropriate time. FATAL stabilizes regardless of how these feedback signals behave, while actively triggering beats ensures that all nodes pass the checks which, if failed, result in the respective node being reset. While a specific interface is required from the stabilizing algorithm to permit this approach, it seems likely that most, if not all, self-stabilizing synchronization algorithms could be modified to provide it. Thus, we consider the technique a highly useful separation of the tasks of achieving small skews and of ensuring (fast) stabilization.
Organization of the Paper
After presenting related work and the model, we proceed in the order of the contributions listed above: simplified phase synchronization (Section 4), frequency synchronization (Section 5), and finally the coupling scheme adding selfstabilization (Section 6). Section 7 concludes the paper.
2 Related Work
TTP [13] and FlexRay [9, 10] are both implemented in software (barring minor hardware components). This is sufficient for their application domain, in which synchronous communication between hardware components at frequencies in the megahertz range is required. Solutions fully implemented in hardware are of interest for two reasons. First, having to implement the full software abstraction dramatically increases the number of potential reasons for a node to fail – at least from the point of view of the synchronization algorithm. A slim hardware implementation is thus likely to result in a substantially higher degree of reliability of the clocking mechanism. Second, if higher precision of synchronization is required, the significantly smaller delays incurred by dedicated hardware make it possible to meet these demands.
Apart from these issues, the complexity of a software solution renders TTP and FlexRay unsuitable as fault-tolerant clocking schemes for VLSI circuits. The DARTS project [3, 11] aimed at developing such a scheme, with the goal of coming up with a robust clocking method for space applications. Instead of being based on the Lynch-Welch approach, it implements the fault-tolerant synchronization algorithm by Srikanth and Toueg [18]. Unfortunately, DARTS falls short of its design goals in two ways. First, the Srikanth-Toueg primitive achieves skews of Θ(d), which tend to be significantly larger than those attainable with the Lynch-Welch approach.^{5} Accordingly, the operational frequency DARTS can sustain (without large communication buffers and communication delays of multiple logical rounds) is in the range of 100 MHz, i.e., about an order of magnitude smaller than typical system speeds. Second, DARTS is not self-stabilizing. This means that DARTS – just like TTP and FlexRay – is unlikely to successfully cope with high rates of transient faults. Worse, the rate of transient faults will scale with the number of nodes (and thus the number of sustainable faulty nodes). For space environments, this implies that adding fault-tolerance without self-stabilization cannot be expected to increase the reliability of the system at all.
These concerns inspired a follow-up work called FATAL, which seeks to overcome the downsides of DARTS. From an abstract point of view, FATAL [4, 5] can be interpreted as another incarnation of the Srikanth-Toueg approach. However, FATAL combines tolerance to Byzantine faults with self-stabilization in O(n) time with probability 1 − 2^{−Ω(n)}; after recovery is complete, the algorithm maintains correct operation deterministically. Like DARTS, FATAL and the substantial line of prior work on Byzantine self-stabilizing synchronization algorithms (e.g., [2, 8]) cannot achieve better clock skews than Θ(d). The key motivation for the present paper is to combine the better precision achieved by the Lynch-Welch approach with the self-stabilization properties of FATAL.
Concerning frequency correction, little related work exists. A notable exception is the extension of the interval-based synchronization framework to rate synchronization [16, 17]. In principle, it seems feasible to derive similar results by specialization and minor adaptations of this powerful machinery to our setting. Unfortunately, apart from the technical hurdles involved, an educated guess (based on the amount of necessary specialization and estimates that need to be strengthened) results in worse constants and more involved algorithms, and it is unclear whether our approach to self-stabilization can be fitted to this framework. However, it is worth noting that the overall proof strategies for our (non-stabilizing) phase and frequency correction algorithms bear notable similarities to the generic framework: separately deriving bounds on the precision of measurements, plugging these into a generic convergence argument, and separating the analysis of frequency and phase corrections.

Impossibility Results

In a system of n nodes, no algorithm can tolerate ⌈n/3⌉ Byzantine faults. All mentioned algorithms are optimal in that they tolerate ⌈n/3⌉ − 1 Byzantine faults [6].

To tolerate this number of faults, Ω(n^{2}) communication links are required.^{6} All mentioned algorithms assume full connectivity and communicate by broadcasts (faulty nodes may not adhere to this). Less wellconnected topologies are outside the scope of this work.

The worst-case precision of an algorithm cannot be better than (1 − 1/n)U in a network where communication delays may vary by U [15]. In the fault-free case and with 𝜗 − 1 sufficiently small, this bound can be almost matched (cf. Section 4); all variants of the Lynch-Welch approach match this bound asymptotically, granted sufficiently accurate local clocks.

Trivially, the worst-case precision of any algorithm is at least (𝜗 − 1)T if nodes exchange messages every T time units. Moreover, a simple indistinguishability argument shows a lower bound of (𝜗 − 1)d, regardless of T. In the fault-free case, this is essentially matched by our phase correction algorithm as well.

With faults, the upper bound on the skew of the algorithm increases by factor 1/(1 − α), where α ≈ 1/2 if 𝜗 ≈ 1. It appears plausible that this is optimal under the constraint that the algorithm’s resilience to Byzantine faults is optimal, due to a lower bound on the convergence rate of approximate agreement [7].
3 Model
We assume a fully connected system of n nodes, up to f := ⌊(n − 1)/3⌋ of which may be Byzantine faulty (i.e., arbitrarily deviate from the protocol). We denote by V the set of all nodes and by C ⊆ V the subset of correct nodes, i.e., those that are not faulty.
Communication is by broadcast of “pulses,” which are messages without content: the only information conveyed is when a node transmitted a pulse. Nodes can distinguish between senders; this is used to distinguish the case of multiple pulses being sent by a single (faulty) node from multiple nodes sending one pulse each. Note that faulty nodes are not bound by the broadcast restriction, i.e., they may send a pulse to a subset of the nodes only. The system is semi-synchronous. A pulse sent by node v ∈ C at (Newtonian) time \(p_{v}\in \mathbb {R}_{0}^{+}\) is received by node w ∈ C at time t_{ v w } ∈ [p_{ v } + d − U, p_{ v } + d]; we refer to d as the maximum message delay (or, for short, delay) and to U as the delay uncertainty (or, for short, uncertainty).
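A minimal sketch of this delay model follows. The constants and the random sampling are our illustrative assumptions; in the actual model an adversary, not a random process, picks each delay within the admissible window.

```python
import random

D = 1.0   # assumed maximum message delay d (illustrative value)
U = 0.1   # assumed delay uncertainty (illustrative value)

def reception_time(p_v, rng=random):
    """A pulse sent by a correct node at real time p_v arrives at a
    correct receiver at some time in [p_v + D - U, p_v + D]; here the
    delay is sampled, whereas the model lets an adversary choose it."""
    return p_v + D - U * rng.random()
```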
Executions are eventbased, where an event at node v is the reception of a message, a previously computed (and stored) local time being reached, or the initialization of the algorithm. A node may then perform computations and possibly send a pulse. For simplicity, we assume that these operations take zero time; adapting our results to account for computation time is straightforward.
Problem
 1.
∀v, w ∈ C : |p_{ v }(r) − p_{ w }(r)| ≤ e(r)
 2.
∀v ∈ C : A_{min} ≤ p_{ v }(r + 1) − p_{ v }(r) ≤ A_{max}
3.1 Model for Frequency Correction Algorithms
In order for frequency corrections to be useful, we need to assume that hardware clock rates do not change faster than the algorithm can adjust to keep the effective frequencies aligned.
3.2 Selfstabilization
An algorithm is self-stabilizing if it (re)establishes correct operation from arbitrary states in bounded time. If there is an upper bound on the time this takes in the worst case, we refer to it as the stabilization time.
In Section 6, we will make use of a selfstabilizing pulse synchronization algorithm to “reset” the system from inconsistent initial states. Starting the analysis only from this point, we have a consistent labeling of the pulses (modulo some \(M\in \mathbb {N}\)) that is shared by all correct nodes. For this special case, we can still apply the above problem formulation (w.r.t. this labeling).
4 Phase Synchronization Algorithm
In this section, we give a basic algorithm for Byzantine clock synchronization and show its guarantees in Theorem 1. The basic algorithm is a variant of the one by Lynch and Welch [19], which synchronizes clocks by simulating perpetual synchronous approximate agreement [7] on the times when clock pulses should be generated. We diverge only in terms of communication: instead of round numbers, nodes broadcast content-free pulses. Due to sufficient waiting times between pulses, during regular operation received messages from correct nodes can be correctly attributed to the respective round. In fact, the primary purpose of transmitting round numbers in the Lynch-Welch algorithm is to add recovery properties. Our technique for adding self-stabilization (presented in Section 6) leverages the pulse synchronization algorithm from [4, 5] instead, which requires broadcasting only constant-sized messages.
Before presenting the algorithm and its analysis in Sections 4.2 and 4.3, respectively, we revisit some basic properties of the approximate agreement technique [7]. The results in this section are derivatives of the ones from [7, 19], but adapting them to our setting and notation is essential for deriving our main results in Sections 5 and 6.
4.1 Properties of Approximate Agreement Steps
Abstractly speaking, the synchronization performs approximate agreement steps in each (simulated synchronous) round. In approximate agreement, each node is given an input value and the goal is to let nodes determine values that are close to each other and within the interval spanned by the correct nodes’ inputs.
In the clock synchronization setting, there is the additional obstacle that the communicated values are points in time. Due to delay uncertainty and drifting clocks, the communicated values are subject to a (worstcase) perturbation of at most some \(\delta \in \mathbb {R}^{+}_{0}\). We will determine δ later in our analysis of the clock synchronization algorithms; we assume it to be given for now. The effect of these disturbances is straightforward: they may shift outputs by at most δ in each direction, increasing the range of the outputs by an additive 2δ in each step (in the worst case).
Consider the special case of δ = 0. Intuitively, Algorithm 1 discards the smallest and largest f values each to ensure that values from faulty nodes cannot cause outputs to lie outside the range spanned by the correct nodes’ values. Afterwards, y_{ v } is determined as the midpoint of the interval spanned by the remaining values. Since f < n/3, i.e., n − f ≥ 2f + 1, the median of correct nodes’ values is part of all intervals computed by correct nodes. From this, it is easy to see that \(\|\vec{y}\|\leq \|\vec{x}\|/2\), see Fig. 1. For δ > 0, we simply observe that the resulting values y_{ v }, v ∈ C, are shifted by at most δ compared to the case where δ = 0, resulting in \(\|\vec{y}\|\leq \|\vec{x}\|/2 + 2\delta \). We now prove these properties.
Lemma 1
Proof
Corollary 1
\(\max_{v\in C}\{|y_{v}-x_{v}|\}\leq \|\vec{x}\|+\delta\).
Lemma 2
\(\|\vec{y}\|\leq \|\vec{x}\|/2 + 2\delta\).
Proof
For the general case, observe that \(S_{v}^{f+1}\), \(S_{w}^{f+1}\), \(S_{v}^{n-f}\), and \(S_{w}^{n-f}\) each can be changed by at most δ. This can affect \((S_{v}^{f+1}-S_{w}^{f+1}+S_{v}^{n-f}-S_{w}^{n-f})/2\) by at most 4δ/2 = 2δ; the claim follows. □
4.2 Algorithm
 1.
for all v, w ∈ C, the message that v broadcasts at time t_{ v }(r − 1) + τ_{1}(r) is received by w at a local time from [H_{ w }(t_{ w }(r − 1)), H_{ w }(t_{ w }(r − 1)) + τ_{1}(r) + τ_{2}(r)] and
 2.
for all v ∈ C, T (r) − Δ_{ v } (r) ≥ τ_{1} (r) + τ_{2}(r), i.e., v computes H_{ v }(t_{ v }(r)) before time t_{ v }(r).
Condition 1
Here, e(r) is a bound on the synchronization error in round r, i.e., we will show that \(\|\vec{p}(r)\|\leq e(r)\) for all \(r\in \mathbb {N}\), provided Condition 1 is satisfied. Condition 1 cannot be satisfied for arbitrary 𝜗 > 1 such that e(r) is bounded independently of r. The intuition is that rounds must be long enough to ensure that all pulses from correct nodes are received (i.e., at least 𝜗e(r)), but during this time additional error is built up by drifting clocks; if the approximate agreement step cannot overcome this relative skew increase, round r + 1 has to be even longer, and so on. However, any 𝜗 ≤ 1.1 can be sustained.
Lemma 3
Proof

α goes to 1/2 as 𝜗 goes to 1. For 𝜗 = 1.01, we already have that α ≈ 0.55. Thus, the approach can support fairly large phase drifts.

For 𝜗 ≈ 1, we have that \(\lim _{r\to \infty } e(r)\approx 4U + 2(\vartheta-1)d\). From Corollary 2, one can see that if (𝜗 − 1)d ≪ U, this can be reduced to \(\lim _{r\to \infty } e(r)\approx 2U\).

The lower bound of (1 − 1/n)U [15] shows that this is optimal up to a factor of 2. It is straightforward to verify that in the fault-free case with 𝜗 = 1, the algorithm attains the lower bound.

The convergence is exponential, i.e., for any ε > 0 we have that \(e(r)\leq (1+\varepsilon )\lim _{r\to \infty } e(r)\) for all \(r\geq r_{\varepsilon }\in {\Theta }(\log (F/(\varepsilon \lim _{r\to \infty } e(r))))\).
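The convergence behaviour can be illustrated with a toy iteration of the form e(r+1) = α·e(r) + β, which is the general shape of the recursion behind these remarks; α and β stand in for the contraction factor and the per-round error term, and the concrete values below are our illustrative choices, not from the paper.

```python
def iterate_skew(e1, alpha, beta, rounds):
    """Iterate the (hypothetical) skew recursion e(r+1) = alpha*e(r) + beta.
    For alpha < 1 it converges geometrically to the fixed point
    beta / (1 - alpha), regardless of the initial error e1."""
    e = e1
    for _ in range(rounds):
        e = alpha * e + beta
    return e
```

For example, with α = 1/2 and β = 2U + (𝜗 − 1)d the fixed point β/(1 − α) equals 4U + 2(𝜗 − 1)d, matching the limit stated above; starting from a large initial error F, the excess over the limit halves every round.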
4.3 Analysis
In this section, we prove that Condition 1 is indeed sufficient to ensure that \(\|\vec{p}(r)\|\leq e(r)\) for all \(r\in \mathbb {N}\). In the following, denote by \(\vec {p}(r)\), \(r\in \mathbb {N}_{0}\), the vector of times when the nodes v ∈ C broadcast their r^{th} pulse, i.e., H_{ v }(p_{ v }(r)) = H_{ v }(t_{ v }(r − 1)) + τ_{1}(r). If v ∈ C takes note of the pulse from w ∈ C in round r, the corresponding value τ_{ w v } − τ_{ v v } can be interpreted as an inexact measurement of p_{ w }(r) − p_{ v }(r). This is captured by the following lemma, which provides precise bounds on the incurred error.
Lemma 4
Proof
We remark that if (𝜗 − 1)d < U and U is known, it is beneficial to refrain from having v send a message to itself. Instead, it estimates the arrival time of the message using its hardware clock, yielding the following corollary.
Corollary 2
Proof
In the sequel, we use the bounds provided by Lemma 4. However, the reader should keep in mind that in case (𝜗 − 1)d ≪ U and sufficiently precise bounds on U are known, Corollary 2 shows how to effectively cut the influence of the uncertainty in half.
Using Lemma 4, we can interpret the phase shifts Δ_{ v }(r) as outcomes of an approximate agreement step, yielding the following corollary.
Corollary 3
 1.
\(|{\Delta}_{v}(r)|< \vartheta (\|\vec{p}(r)\|+U)\) and
 2.
\(\max_{v,w\in C}\{p_{v}(r)-{\Delta}_{v}(r)-p_{w}(r)+{\Delta}_{w}(r)\}\leq (5\vartheta-3)\|\vec{p}(r)\|/(2(\vartheta+1))+2\vartheta U\).
Proof
By Lemma 4, we can interpret the values 2(τ_{ w v } − τ_{ v v })/(𝜗 + 1) as measurements of p_{ w }(r) − p_{ v }(r) with error \(\delta =\vartheta U + (\vartheta-1)\|\vec{p}(r)\|/(\vartheta + 1)\). Note that shifting all values by p_{ v }(r) in an approximate agreement step changes the result by exactly p_{ v }(r), implying that p_{ v }(r) −Δ_{ v }(r) equals the result of an approximate agreement step with inputs p_{ w }(r), w ∈ C, and error δ at node v. Thus, the claims follow from Corollary 1 and Lemma 2, noting that 1/2 + 2(𝜗 − 1)/(𝜗 + 1) = (5𝜗 − 3)/(2(𝜗 + 1)). □
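The shift-invariance used in this proof is easy to check numerically; below is a sketch (with our helper names) using the trimmed-midpoint step from Section 4.1:

```python
def midpoint_step(received, f):
    """Trimmed-midpoint approximate agreement step (cf. Section 4.1)."""
    s = sorted(received)
    return (s[f] + s[-(f + 1)]) / 2

def shifted(values, c):
    """Shift every input by the same constant c."""
    return [x + c for x in values]

# Shifting every input by c shifts the output by exactly c: running the
# step on offsets p_w(r) - p_v(r) and on absolute times p_w(r) therefore
# gives results differing only by the constant p_v(r).
```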
To derive a bound on \(\|\vec{p}(r+1)\|\), it remains to analyze the effect of the clock drift between the pulses. To this end, we examine how an established timing relation between actions of two correct nodes deteriorates due to measuring time using the inaccurate hardware clocks.
Lemma 5
Proof
Since hardware clocks are increasing, \(t_{v}^{\prime }\geq t_{v}\) and \(t_{w}^{\prime }\geq t_{w}\). The inequalities follow because hardware clock rates are between 1 and 𝜗 ≥ 1. □
This readily yields a bound on \(\|\vec{p}(r+1)\|\) – provided that all nodes can compute when to send the next pulse on time.
Corollary 4
Proof
This bound hinges on the assumption that the round is executed correctly. We next establish sufficient conditions for this to be the case.
Lemma 6
Proof
It remains to prove that for each v ∈ C, it holds that T(r) −Δ_{ v }(r) ≥ τ_{1}(r) + τ_{2}(r). By the preconditions of the lemma, this is satisfied if \(|{\Delta}_{v}(r)|\leq \vartheta (\|\vec{p}(r)\|+U)\). As we already established the precondition of Corollary 3 for round r, the corollary shows that this inequality is satisfied. □
We have almost all pieces in place to inductively bound \(\|\vec{p}(r)\|\) and determine suitable values for τ_{1}(r), τ_{2}(r), and T(r). The last missing bit is an anchor for the induction, i.e., a bound on \(\|\vec{p}(1)\|\).
Corollary 5
\(\|\vec{p}(1)\|\leq F+(1-1/\vartheta)\tau_{1}(1)=e(1)\).
Proof
Since H_{ v }(0) ∈ [0, F) for all v ∈ C, t_{ v }(0) ∈ [0, F) for all v ∈ C. The claim follows by applying Lemma 5. □
Theorem 1
Proof
To show the first part, inductively use Lemma 6 and Lemma 4 to show that round r is executed correctly and that \(\|\vec{p}(r+1)\|\leq e(r+1)\), respectively; the induction anchor is given by \(\|\vec{p}(1)\|\leq e(1)\) according to Corollary 5. The second part directly follows from Lemma 3. □
5 Phase and Frequency Synchronization Algorithm
In this section, we extend the phase synchronization algorithm to also synchronize frequencies and give the guarantees of the extended algorithm in Theorem 3; a simplified statement is provided by Corollary 12. The basic idea is to apply approximate agreement not only to phase offsets, but also to frequency offsets. To this end, in each round the phase difference is measured twice, and any phase correction is applied only after the second measurement. This enables the nodes to estimate the relative speeds of each other’s clocks, which in turn yields estimates of the differences in clock rates.
Ensuring that this procedure is executed correctly is straightforward by limiting μ_{ v }(r) − 1 to be small, where μ_{ v }(r) is the factor by which node v changes its clock rate during round r. However, constraining this multiplier means that approximate agreement steps cannot be performed correctly in case μ_{ v }(r + 1) would lie outside the valid range of multipliers. This is fixed by introducing a correction that “pulls” frequencies back to the default rate.
Of course, for all this to be meaningful, we need to assume that hardware clock rates do not change faster than the algorithm can adjust the multipliers to keep the effective frequencies aligned. We recall the additional model assumption stated in Section 3.1: we assume that H_{ v } is differentiable (for all v ∈ C) with derivative h_{ v }, where h_{ v } satisfies for \(t,t^{\prime}\in \mathbb {R}^{+}_{0}\) that |h_{ v }(t^{′}) − h_{ v }(t)| ≤ ν|t^{′} − t| for some ν > 0.
5.1 Algorithm
Algorithm 3 gives the pseudocode of our approach. Mostly, the algorithm can be seen as a variant of Algorithm 2 that allows for speeding up clocks by factors μ_{ v }(r) ∈ [1, 𝜗^{2}], where 𝜗h_{ v }(t) is considered the nominal rate at time t.^{11} For simplicity, we fix all local waiting times independently of the round length.
The main difference to Algorithm 2 is that a second pulse is sent before the phase correction is applied, enabling the nodes to determine the rate multipliers for the next round by an approximate agreement step as well. A frequency measurement is obtained by comparing the (observed) relative rate of the clock of node w during a local time interval of length τ_{2} + τ_{3} to the desired relative clock rate of 1. Since the clock of node v is considered to run at speed μ_{ v }(r)h_{ v }(t) during the measurement period, the former takes the form μ_{ v }(r)Δ_{ w v }/(τ_{2} + τ_{3}), where Δ_{ w v } is the time difference between the arrival times of the two pulses from w measured with H_{ v }. The approximate agreement step results in a new multiplier \(\hat {\mu }_{v}(r + 1)\) at node v; we then move this result by a (small) value ε in the direction of the nominal rate multiplier 𝜗 and ensure that we remain within the acceptable multiplier range [1, 𝜗^{2}].
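The rate measurement and multiplier update just described can be sketched as follows. This is a loose paraphrase under our own naming; the exact update and clamping details of Algorithm 3 may differ, and the constant below is an illustrative assumption.

```python
THETA = 1.01  # assumed bound on hardware clock rates (illustrative)

def rate_estimate(mu_v, delta_wv, tau2, tau3):
    """Observed rate of w's clock relative to v's logical clock: w's two
    pulses arrive delta_wv apart on H_v while v's logical clock runs at
    multiplier mu_v; the nominal separation is tau2 + tau3."""
    return mu_v * delta_wv / (tau2 + tau3)

def next_multiplier(mu_hat, eps, theta=THETA):
    """Move the approximate-agreement result mu_hat by eps toward the
    nominal multiplier theta, then clamp to the valid range [1, theta**2]."""
    mu = mu_hat + eps if mu_hat <= theta else mu_hat - eps
    return min(theta ** 2, max(1.0, mu))
```

For example, a result μ̂ = 0.998 below the lower end of the valid range is pulled up and clamped to 1, matching the boundary cases distinguished in the analysis below.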
 1.
for all v, w ∈ C, the message v broadcasts at time t_{ v }(r − 1) + τ_{1}/μ_{ v }(r − 1) is received by w at a local time from [H_{ w }(t_{ w }(r − 1)), H_{ w }(t_{ w }(r − 1)) + τ_{1}/μ_{ v }(r − 1) + τ_{2}/μ_{ w }(r)],
 2.
for all v, w ∈ C, the message v broadcasts at time t_{ v }(r − 1) + τ_{1}/μ_{ v }(r − 1) + (τ_{2} + τ_{3})/μ_{ v }(r) is received by w at a local time from [H_{ w }(t_{ w }(r− 1)) + τ_{1}/μ_{ v }(r− 1) + τ_{2}/μ_{ w }(r), H_{ w }(t_{ w }(r− 1)) + τ_{1}/μ_{ v }(r− 1)+(τ_{2} + τ_{3} + τ_{4})/μ_{ w }(r)], and
 3.
for all v ∈ C, T −Δ_{ v }(r) ≥ τ_{1}/μ_{ v }(r − 1) + (τ_{2} + τ_{3} + τ_{4})/μ_{ v }(r), i.e., v computes H_{ v }(t_{ v }(r)) before time t_{ v }(r).
We now specify the constraints our choices for the parameters must satisfy to ensure that all rounds are executed correctly and both phase and frequency errors converge to small values.
Condition 2
Here, all but the last condition mimic Condition 1, where the bounds on τ_{3} and τ_{4} account for the fact that between the first and the second pulse of each round, the nodes’ opinions on the “synchronized time” slowly drift apart. The lower bound on ε ensures that the pull of the multipliers back toward the nominal one is sufficiently strong to guarantee that, in fact, multipliers will never leave the valid range [1, 𝜗^{2}]. We now show that these constraints can be satisfied provided that 𝜗 is not too large.
Lemma 7
Proof
5.2 Analysis
In the following, denote by \(\vec {p}(r)\) and \(\vec {q}(r)\), \(r\in \mathbb {N}\), the vectors of times when nodes v ∈ C broadcast their first and second pulse in round r, respectively. Thus, we have that H_{ v }(p_{ v }(r)) = H_{ v }(t_{ v }(r − 1)) + τ_{1}/μ_{ v }(r − 1) and H_{ v }(q_{ v }(r)) = H_{ v }(t_{ v }(r − 1)) + τ_{1}/μ_{ v }(r − 1) + (τ_{2} + τ_{3})/μ_{ v }(r).
We will first make use of the analysis we performed for the phase correction algorithm to show that all rounds are executed correctly. Then we will refine the analysis by examining the impact of the frequency correction steps.
5.2.1 Phase Correction Steps
Observe that, since 1 ≤ μ_{ v }(r) ≤ 𝜗^{2} for all \(r\in \mathbb {N}_{0}\) and v ∈ C, we have for all times t that \(1\leq \mu _{v}(r)h_{v}(t)\leq \vartheta ^{3}=\bar {\vartheta }\). We may thus interpret the waiting periods of Algorithm 3 as nodes waiting for τ_{1}, τ_{2}, etc. local time with hardware clocks of drift \(\bar {\vartheta }=\vartheta ^{3}\), and reuse the arguments from Section 4.3 to obtain a series of results.
Corollary 6
For all \(r\in \mathbb {N}\), \(\|\vec{q}(r)\|\leq \|\vec{p}(r)\|+(1-1/\bar{\vartheta})(\tau_{2}+\tau_{3})\).
Proof
By application of Lemma 5. □
Corollary 7
Proof
As for Lemma 6; the pulse used in the frequency correction step is analyzed analogously. □
Theorem 2
Proof
As for Theorem 1, where we replace 𝜗 with \(\bar {\vartheta }\), Lemma 6 with Corollary 7 and Lemma 3 with Lemma 7. However, the induction step requires that we can apply Lemma 6 again in step r + 1 if we could do so in step \(r\in \mathbb {N}\). This readily follows from Condition 2 if e(r + 1) ≤ e(r) for all \(r\in \mathbb {N}\).
5.2.2 Frequency Correction Steps
In the following, we assume that the prerequisites of Theorem 2 are satisfied. In particular, all rounds are executed correctly, i.e., we can assume that correct nodes receive each other’s pulses. We introduce some notation to capture the behavior of the (logical) rates of the nodes’ clocks. This notation may seem somewhat cumbersome; basically, the reader may think of the clock rates h_{ v }(t) as being almost constant, implying that all considered values for a given node v ∈ C are essentially the same, slowly deviating at rate at most ν.
We start by showing that \(\bar {\rho }(r)_{v}\) approximates μ_{ v }(r)h_{ v }(t) well for times t between pulse r and r + 1 of v ∈ C, i.e., we may see \(\bar {\rho }(r)_{v}\) as “the” clock rate of v in round r.
Lemma 8
Proof
Two corollaries relate the progress of the hardware clocks between (i) p_{ v }(r) and q_{ v }(r) and (ii) \(t_{wv}^{\prime }\) and t_{ w v } to \(\bar {\rho }(r)_{v}\), respectively.
Corollary 8
Proof
Corollary 9
Proof
These results put us in the position to prove that 1 − μ_{ v }(r)Δ_{ w v }/(τ_{2} + τ_{3}) is indeed a good estimate of \(\bar {\rho }(r)_{w}-\bar {\rho }(r)_{v}\). Thus, this (computable) value can serve as a proxy for the difference between “the” clock rates of w and v in round r.
Lemma 9
Proof
We remark that the Θ((1 − 1/𝜗^{3})^{2}) factor is, more precisely, bounded as \({\Theta }((1-1/\vartheta ^{3})\|\bar{\rho}(r)\|)\). However, for this to be of use, we would have to choose ε depending on r. Since rule-of-thumb calculations show that this term is unlikely to be significant in any real system and the improvement would not extend to the self-stabilizing variant of the algorithm, we refrained from adding this additional complication.
Given that we can bound the “measurement error” of the frequency correction step by Lemma 9, the results from Section 4.1 can be invoked to show convergence. First, we analyze the properties of \(\hat {\mu }_{v}(r + 1)\), which Lemma 11 then uses to control μ_{ v }(r + 1).
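The case distinction in the proof of Lemma 11 rests on the multiplier being clamped to a feasible range; the following is a minimal sketch of that clamping, where the function name is ours and the range [1, 𝜗²] is read off from the case analysis (the two clamps correspond to Cases 2 and 3).

```python
# Hedged sketch of the multiplier update assumed by the case analysis:
# the raw approximate-agreement value mu_hat is restricted to [1, theta**2],
# so either mu == mu_hat, or mu was clamped up to 1, or down to theta**2.
# Names are ours, not the paper's.

def clamp_multiplier(mu_hat: float, theta: float) -> float:
    """Restrict the logical-rate multiplier to the feasible range [1, theta^2]."""
    return max(1.0, min(theta ** 2, mu_hat))

theta = 1.01
assert clamp_multiplier(1.005, theta) == 1.005     # in range: unchanged
assert clamp_multiplier(0.97, theta) == 1.0        # clamped up (cf. Case 2)
assert clamp_multiplier(1.05, theta) == theta ** 2 # clamped down (cf. Case 3)
```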
Lemma 10
Proof
Lemma 11
Proof
 Case 1: \(\mu _{v}(r + 1)-\hat {\mu }_{v}(r + 1)\leq \varepsilon \) and \(\hat {\mu }_{w}(r + 1)-\mu _{w}(r + 1)\leq \varepsilon \). Because we have that \(\max \{h_{v}(\bar {t}_{v}),h_{w}(\bar {t}_{w})\}\leq \vartheta \), we get$$\begin{array}{@{}rcl@{}} \mu_{v}(r + 1)h_{v}(\bar{t}_{v})-\mu_{w}(r + 1)h_{w}(\bar{t}_{w}) &\leq& (\mu_{v}(r + 1)-\hat{\mu}_{v}(r + 1))h_{v}(\bar{t}_{v})\\ &&+\,\hat{\mu}_{v}(r + 1)h_{v}(\bar{t}_{v})-\hat{\mu}_{w}(r + 1)h_{w}(\bar{t}_{w})\\ &&+\,(\hat{\mu}_{w}(r + 1)-\mu_{w}(r + 1))h_{w}(\bar{t}_{w})\\ &\leq& \frac{2\vartheta-1}{2}\,\|\bar{\rho}(r)\|+ 3\vartheta\varepsilon\,. \end{array} $$
 Case 2: \(\mu _{v}(r + 1)-\hat {\mu }_{v}(r + 1)>\varepsilon \). This implies that μ_{ v }(r + 1) = 1 ≤ μ_{ v }(r).
 \(\hat {\mu }_{w}(r + 1)\leq \vartheta \), i.e., we have that \(\mu _{w}(r + 1)\geq \hat {\mu }_{w}(r + 1)+\varepsilon \). Using Lemma 10, we bound$$\begin{array}{@{}rcl@{}} \mu_{v}(r + 1)h_{v}(\bar{t}_{v})-\mu_{w}(r + 1)h_{w}(\bar{t}_{w}) &\leq& h_{v}(\bar{t}_{v})\mu_{v}(r)-\left( \min\limits_{u\in C}\{ \mu_{u}(r)h_{u}(\bar{t}_{u})\}+\frac{\varepsilon}{2}\right)\\ &\leq& \|\bar{\rho}(r)\|-\frac{\varepsilon}{2}\,. \end{array} $$
 \(\hat {\mu }_{w}(r + 1)> \vartheta \), yielding that μ_{ w }(r + 1) ≥ 𝜗 − ε. It follows that$$\mu_{v}(r + 1)h_{v}(\bar{t}_{v})-\mu_{w}(r + 1)h_{w}(\bar{t}_{w}) \leq h_{v}(\bar{t}_{v})-(\vartheta-\varepsilon) \leq \varepsilon\,. $$

 Case 3: \(\hat {\mu }_{w}(r + 1)-\mu _{w}(r + 1)> \varepsilon \). This implies that μ_{ w }(r + 1) = 𝜗^{2} ≥ μ_{ w }(r).
 \(\hat {\mu }_{v}(r + 1)> \vartheta \), i.e., we have that \(\mu _{v}(r + 1)\leq \hat {\mu }_{v}(r + 1)-\varepsilon \). Using Lemma 10, we bound$$\begin{array}{@{}rcl@{}} \mu_{v}(r + 1)h_{v}(\bar{t}_{v})-\mu_{w}(r + 1)h_{w}(\bar{t}_{w}) &\leq& \left( \max\limits_{u\in C}\{ \mu_{u}(r)h_{u}(\bar{t}_{u})\}-\frac{\varepsilon}{2}\right)-h_{w}(\bar{t}_{w})\mu_{w}(r)\\ &\leq& \|\bar{\rho}(r)\|-\frac{\varepsilon}{2}\,. \end{array} $$
 \(\hat {\mu }_{v}(r + 1)\leq \vartheta \), yielding that μ_{ v }(r + 1) ≤ 𝜗 + ε. It follows that$$\mu_{v}(r + 1)h_{v}(\bar{t}_{v})-\mu_{w}(r + 1)h_{w}(\bar{t}_{w}) \leq (\vartheta+\varepsilon)h_{v}(\bar{t}_{v})-\vartheta^{2} \leq \vartheta\varepsilon\,. $$

It remains to take into account that hardware clock speeds change between rounds using Lemma 8.
Corollary 10
Proof
By applying Lemma 11 and noting that, for all v ∈ C, \(\bar {\rho }(r)_{v}-\bar {\rho }(r + 1)_{v}\leq \nu (T+\tau _{2})\) by Lemma 8. □
We conclude that the steady state frequency error is in O(ε).
Corollary 11
Proof
5.2.3 Steady State Error with Frequency Correction
To make use of Corollary 11, we need to derive a variant of Corollary 4 that allows for better control of \(\|\vec {p}(r + 1)\|\) in case \(\|\bar {\rho }(r)\|\) is small.
Lemma 12
Proof
Plugging this into our machinery, we arrive at the main result of this section.
Theorem 3
Proof
Under reasonable assumptions we can obtain a more readable error bound. Intuitively, we require that (i) 𝜗 is not too large, so that α ≈ 1/2, (ii) rounds are long enough to allow for a sufficiently accurate frequency measurement, which is the case if T ≫ max{F, U}, i.e., rounds are long compared to both the precision F of the initialization and the uncertainty U, and (iii) rounds remain short enough to not let the drifting clocks dominate the error. The third condition amounts to two further constraints: we need that νT^{2} ≪ U, since the rate of change of the speed of clocks enters the skew bound quadratically in T, and we also need that (𝜗 − 1)^{2}T ≪ U, because inaccurate frequency measurements prevent us from synchronizing frequencies better than up to a factor of Θ((𝜗 − 1)^{2}).
Corollary 12

α ≈ 1/2,

ε is chosen minimally such that it satisfies Condition 2,

T ≈ τ_{3} ≫ τ_{2}, which is feasible whenever \(T\gg \bar {\vartheta } (e(1)+d)\), and

\(\max \{(\bar {\vartheta }-1)^{2}T,\nu T^{2}\}\ll U\) .
Proof

Note that 𝜗 ≤ 1.01 implies that β < α < 0.55, \(\bar {\vartheta }< 1.031\), and e(1) ≤ max{1.031F,0.07T + 4.65U}. Thus, the requirements of the corollary are met if max{F, U}≪ T and \(\max \{(\bar {\vartheta }-1)^{2}T,\nu T^{2}\}\ll U\) for the minimal choice of ε, yielding the claim stated in the introduction.

Corollary 12 basically states that increasing T is fine, as long as \(\max \{(\bar {\vartheta }-1)^{2}T,\nu T^{2}\}\ll U\). This improves over Algorithm 2, which requires that (𝜗 − 1)T ≪ U, as it permits transmitting pulses at significantly lower frequencies.
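As an illustrative sanity check of this parameter regime, the following numbers satisfy the stated asymptotic relations; all concrete values are ours, chosen only for illustration.

```python
# Rough numeric check of the parameter regime (all values illustrative, not
# from the paper): rounds long compared to F and U, but short enough that
# nu*T**2 and (theta_bar - 1)**2 * T stay well below the uncertainty U.

theta_bar = 1.001   # clock drift bound (close to 1)
nu = 1e-8           # rate of change of clock speeds
U = 1e-2            # delay uncertainty
F = 1e-1            # initialization precision
T = 1e2             # nominal round length

assert T > 100 * max(F, U)                     # T >> max{F, U}
assert (theta_bar - 1) ** 2 * T < 0.1 * U      # (theta_bar - 1)^2 T << U
assert nu * T ** 2 < 0.1 * U                   # nu T^2 << U
```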

While the error bound of roughly 28U is about a factor of 7 larger than the roughly 4U that Algorithm 2 provides, this is likely to be overly conservative. The source of this difference is our assumption that, in a frequency measurement, the full uncertainty U may skew the observation of the relative clock speed. However, this measurement is based on sending two signals in the same direction over the same communication link in fairly short order. In most settings, the difference in delays will be much smaller than between messages on different communication links. Accordingly, the relative contribution of the frequency measurement to the error is likely to be much smaller in practice.

If this is not the case, one may extend the time span for a frequency measurement over multiple rounds to decrease the effect of the uncertainty. This requires that the accumulated phase corrections do not become so large as to prevent a clear distinction of the frequency-related pulse (whose sending time must not be altered due to phase corrections) from phase-related pulses.^{12} To avoid further complicating the analysis, we refrained from presenting this option; it is used in [16, 17].
6 SelfStabilization
In this section, we propose a generic mechanism that can be used to transform Algorithm 2 and Algorithm 3 into self-stabilizing solutions and give the corresponding main results in Theorem 4 and Theorem 5. An algorithm is self-stabilizing if it (re)establishes correct operation from arbitrary states in bounded time. If there is an upper bound on the time this takes in the worst case, we refer to it as the stabilization time. We stress that, while self-stabilizing solutions to the problem are known, all of them have skew Ω(d); augmenting the Lynch–Welch approach with self-stabilization capabilities thus enables us to achieve an optimal skew bound of O((𝜗 − 1)T + U) in a Byzantine self-stabilizing manner for the first time.
Our approach can be summarized as follows. Nodes locally count their pulses modulo some \(M\in \mathbb {N}\). We use a low-frequency, imprecise, but self-stabilizing synchronization algorithm (called FATAL) from earlier work [4, 5] to generate a “heartbeat.” On each such beat, nodes will locally check whether the next pulse with number 1 modulo M will occur within an expected (local) time window whose size is determined by the precision the algorithm would exhibit after M correctly executed pulses (in the non-stabilizing case). If this is not the case, the node is “reset” such that pulse 1 will occur within this time window.
This simple strategy ensures that a beat forces all nodes to generate a pulse with number 1 modulo M within a bounded time window. Assuming a value of F corresponding to its length in Algorithm 2 or Algorithm 3 hence ensures that the respective algorithm will run as intended—at least up to the point when the next beat occurs. Inconveniently, if the beat is not synchronized with the next occurrence of a pulse 1 mod M, some or all nodes may be reset, breaking the guarantees established by the perpetual application of approximate agreement steps. This issue is resolved by leveraging a feedback mechanism provided by FATAL: FATAL offers a (configurable) time window during which a NEXT signal externally provided to each node may trigger the next beat. If this signal arrives at each correct node at roughly the same time, we can be sure that the corresponding beat is generated shortly thereafter. This allows for sufficient control over when the next beat occurs to prevent any node from ever being reset after the first (correct) beat. Since FATAL stabilizes regardless of how the externally provided signals behave, this suffices to achieve stabilization of the resulting compound algorithm.
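The per-beat reset check described above can be sketched as follows; the window bounds, the decision to reschedule pulse 1 at the window's start, and all names are our assumptions for illustration, not the paper's actual pseudocode.

```python
# Illustrative sketch: on each FATAL beat, a node checks whether its next
# "pulse 1 mod M" would fall inside the local-time window [R_minus, R_plus]
# after the beat; if not, the node is reset so that pulse 1 does fall inside
# the window (here: at the window's start, an assumption made for simplicity).

def on_beat(beat_local_time, next_pulse1_local_time, R_minus, R_plus):
    """Return the (possibly adjusted) local time of the next pulse-1 firing."""
    lo = beat_local_time + R_minus
    hi = beat_local_time + R_plus
    if lo <= next_pulse1_local_time <= hi:
        return next_pulse1_local_time   # in window: no reset
    return lo                           # reset: schedule pulse 1 in the window

assert on_beat(100.0, 105.0, 2.0, 8.0) == 105.0  # within [102, 108]: kept
assert on_beat(100.0, 120.0, 2.0, 8.0) == 102.0  # too late: node is reset
```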
6.1 FATAL
We summarize the properties of FATAL in the following corollary, where each node has the ability to trigger a local NEXT signal perceived by the local instance of FATAL at any time.
Corollary 13 (of [5])
 1.
For all v, w ∈ C, we have that b_{ v }(k) − b_{ w }(k)≤ P.
 2.
If no v ∈ C triggers its NEXT signal during [min_{w∈C}{b_{ w }(k)} + B_{1}, t] for some t ≤ min_{w∈C}{b_{ w }(k)} + B_{1} + B_{2} + B_{3}, then min_{w∈C}{b_{ w }(k + 1)}≥ t.
 3.
If all v ∈ C trigger their NEXT signals during [min_{w∈C}{b_{ w }(k)} + B_{1} + B_{2}, t] for some t ≤ min_{w∈C}{b_{ w }(k)} + B_{1} + B_{2} + B_{3}, then max_{w∈C}{b_{ w }(k + 1)}≤ t + P.
Proof
For ϕ = 1, all statements follow directly from Lemma 3.4 and Corollary 4.16 in [5], noting that nodes will switch from state ready to propose (in the main state machine) in response to a NEXT signal if their timeout T_{3} is expired. Once all correct nodes switched to propose, this results in all nodes switching to accept and generating a beat within d_{ F } time. For ϕ > 1, one simply needs to observe that multiplying each timeout for choices satisfying Condition 3.3 in [5] by ϕ results in another valid choice; the bound on the stabilization time given in Corollary 4.16 scales accordingly. □
6.2 Algorithm
Our self-stabilizing solution utilizes both FATAL and the clock synchronization algorithm with very limited interaction. We already stressed that FATAL will stabilize regardless of the NEXT signals; note that it is not influenced by Algorithm 4 in any other way. Concerning the clock synchronization algorithm (either Algorithm 2 or Algorithm 3), we assume that a “careful” implementation is used that does not maintain state variables for a long time. Concretely, Algorithm 2 will clear memory between loop iterations, and Algorithm 3 will memorize only the new multiplier value μ_{ v }(r + 1), which is explicitly assigned during round r. If this is satisfied, no further consistency checks of variables are required, and it will be straightforward to reuse the analyses from Sections 4.3 and 5.2.
Condition 3 lists the constraints that R^{−} (the minimum local time between a beat and local pulse 1 mod M), R^{+} (the respective maximum local time), and M (the number of pulses between beats) – the parameters of Algorithm 4 – need to satisfy so that we can show that the algorithm is guaranteed to stabilize.
Condition 3

Equation (9) says that resets on a beat force the skew to become bounded by e(1).

Equations (10) and (11) ensure that correct nodes receive the first pulses from all other correct nodes after a beat.

Equation (12) guarantees that these are actually the “round1” pulses also for nodes that have been reset, i.e., there are no spurious pulses from before such a reset that are received during the respective time window.

Equations (13) and (14) make sure that FATAL will ignore any NEXT signals that may still be active when a beat occurs and that there is sufficient time for the first round after the beat to complete.

Equations (15) and (16) enforce that the (now correctly executing) algorithm will trigger the NEXT signals and thus the next beat is well-aligned with the time reference it provides.

Finally, (17) and (18) imply that such a beat will result in no resets.
We need to show that these constraints can be satisfied in conjunction with the ones required by the employed synchronization algorithm.
Lemma 13
Proof
Finally, note that P ∈ O(d_{ F }) and all factors occurring in this proof are constants depending on 𝜗 only, implying that ϕ and M are constants as well. The bound on the stabilization time thus readily follows from Corollary 13 as well. □
In the remainder of the section, we assume (i) that the beat generation algorithm has already stabilized, i.e., the guarantees stated in Corollary 13 hold, (ii) that the executed clock synchronization algorithm is Algorithm 2, and (iii) that Condition 1 holds. The analysis for Algorithm 3 is analogous, where \(\bar {\vartheta }=\vartheta ^{3}\) takes the role of 𝜗 and Condition 2 takes the role of Condition 1; this is formalized by the following corollary and Theorem 5 at the end of this section.
Corollary 14
Proof
6.3 Analysis
Our analysis starts with the first correct beat produced by FATAL, which is perceived at node v ∈ C at time b_{ v }(1). Subsequent beats at v occur at times b_{ v }(2), b_{ v }(3), etc. We first establish that the first beat is guaranteed to “initialize” the synchronization algorithm such that it will run correctly from this point on (neglecting for the moment the possible intervention by further beats). We use this to define the “first” pulse times p_{ v }(1), v ∈ C, as well; we enumerate consecutive pulses accordingly.
Lemma 14
 1.
Each v ∈ C generates a pulse at time p_{ v }(1) ∈ [b + R^{−}/𝜗, b + P + R^{+} + τ_{1}].
 2.
\(\|\vec {p}(1)\|\leq e(1)\) .
 3.
At time p_{ v }(1), v ∈ C sets i := 1.
 4.
w ∈ C receives the pulse sent by v ∈ C at a local time from the range [H_{ w }(p_{ w }(1)) − τ_{1}, H_{ w }(p_{ w }(1)) + τ_{2}].
 5.
This is the only pulse w receives from v at a local time from the range [H_{ w }(p_{ w }(1)) − τ_{1}, H_{ w }(p_{ w }(1)) + τ_{2}].
 6.
Denoting by round 1 the execution of the for-loop in Algorithm 2 during which each v ∈ C sends the pulse at time p_{ v }(1), this round is executed correctly.
Proof
Assume for the moment that min_{v∈C}{b_{ v }(2)} is sufficiently large, i.e., no second beat will occur at any correct node for the times relevant to the proof of the lemma; we will verify this at the end of the proof.
Note that, until we show the last claim, it is not clear that p_{ v }(1) is unique for each v ∈ C. For the moment, let p_{ v }(1) be the first pulse v ∈ C sends during the local time interval [H_{ v }(b_{ v }(1)) + R^{−}, H_{ v }(b_{ v }(1)) + R^{+} + τ_{1}]. With this convention, the third claim is shown as follows. Observe that any v ∈ C that executes the reset function in response to the beat sets i := 0 when doing so. Hence, it will set i := 1 at time p_{ v }(1). Thus, consider v ∈ C that does not execute the reset function. This entails that i = 0 at time b_{ v }(1) and v generates no pulse during local times from [H_{ v }(b_{ v }(1)), H_{ v }(b_{ v }(1)) + R^{−}). Consequently, v will increase i to 1 at time p_{ v }(1).
Lemma 14 serves as induction anchor for the argument showing that all rounds of the algorithm are executed correctly. However, due to possible interference of future beats, for the moment we can merely conclude that this is the case until the next beat; we obtain the following corollary.
Corollary 15
Denote by N the infimum over all times t ≥ b + B_{1} at which some v ∈ C triggers a NEXT signal. If min_{v∈C}{p_{ v }(M) + e(M)}≤ min{N, b + B_{1} + B_{2} + B_{3}}, then all rounds r ∈{1,…, M} are executed correctly and \(\|\vec {p}(r)\|\leq e(r)\).
Proof
Lemma 14 shows that the first beat “initializes” the system such that \(\|\vec {p}(1)\|\leq e(1)\) and the first round is executed correctly. By Corollary 13, min_{v∈C}{b_{ v }(2)}≥ min{N, b + B_{1} + B_{2} + B_{3}}. Hence, after round 1, Algorithm 2 will be executed without interference from Algorithm 4 until (at least) time min_{v∈C}{p_{ v }(M) + e(M)}. For r ∈{2,…, M}, the claim thus follows as in Section 4.3. □
Next, we leverage this insight to prove that the progress of the synchronization algorithm – which will operate correctly at least until the next beat – together with the constraints of Condition 3 ensures the following: the first time when node v ∈ C triggers its NEXT signal after time b + B_{1} falls within the window of opportunity for triggering the next beat provided by FATAL.
Lemma 15
Proof
With respect to the second case, observe that since no NEXT signal is triggered at any v ∈ C after time b + B_{1} until time b + B_{1} + B_{2} + B_{3}, min_{v∈C}{b_{ v }(2)}≥ b + B_{1} + B_{2} + B_{3} by Corollary 13. Thus, Algorithm 2 runs without interference up to this time. Using this, we can establish the same bounds as for the first case. □
This immediately implies that the second beat occurs in response to the NEXT signals, which themselves are aligned with pulse M.
Corollary 16
For all v ∈ C, b_{ v }(2) ∈ [p_{ v }(M), p_{ v }(M) + (𝜗 + 1)e(M) + P].
Proof
Having established this timing relation between \(\vec {b}(2)\) and \(\vec {p}(M)\), we can conclude that no correct node is reset due to the second beat.
Lemma 16
Node v ∈ C does not call the reset function of Algorithm 4 in response to beat b_{ v }(2).
Proof
Repeating the above reasoning for all pairs of beats \(\vec {b}(k)\), \(\vec {b}(k + 1)\), \(k\in \mathbb {N}\), it follows that no correct node is reset by any beat other than the first. Thus, the clock synchronization algorithm is indeed (re)initialized by the first beat to run without any further meddling from Algorithm 4. This implies the same bounds on the steady state error as for the original synchronization algorithm.
Theorem 4
Proof
By Lemma 13, Conditions 1 and 3 can be satisfied such that \(\lim _{r\to \infty } e(r)=((\vartheta -1)T+(3\vartheta -1)U)/\beta \) and T_{0} ∈ O(d_{ F } + d). Hence, we may apply the statements derived in this section.
By Corollary 13, the beat generation mechanism will eventually stabilize. Afterwards, we can apply Lemma 16 to show that the second (correct) beat results in no calls to the reset function in Algorithm 4. In fact, this extends to any beat except for the first: letting beat \(k\in \mathbb {N}\) take the role of beat 1, our reasoning shows that beat k + 1 does not result in a reset at any node. Moreover, applying the same reasoning to Corollary 15, we conclude that all rounds \(r\in \mathbb {N}\) are executed correctly, and that \(\|\vec {p}(r)\|\leq e(r)\). The bound on E follows. □
Observe that, in comparison to Theorem 1, the expression obtained for the steady state error replaces d by O(d_{ F } + d), which is essentially the skew upon initialization by the first beat. In Algorithm 2, we circumvented any dependence on F by varying round lengths over time. For the self-stabilizing solution, this is not possible, since counting rounds locally is not guaranteed to ensure a consistent opinion across all nodes concerning the nominal length of the current round; we are restricted to counting rounds \(\bmod M\in \mathbb {N}\), so any long round length will recur regularly.
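The effect of counting pulses only modulo M can be made concrete with a minimal sketch (the function and all numbers are ours, purely for illustration): any round whose nominal length differs, such as a longer initial round, recurs every M pulses instead of occurring just once.

```python
# Minimal sketch: a node only knows its pulse number modulo M, so a special
# (here: longer) nominal round length tied to pulse number 1 recurs every M
# pulses, rather than being phased out as a global round counter would allow.

M = 8

def nominal_length(i, T_long=12.0, T=10.0):
    """Nominal round length as a function of the locally counted pulse number."""
    return T_long if i % M == 1 else T

lengths = [nominal_length(i) for i in range(1, 2 * M + 1)]
assert lengths.count(12.0) == 2   # the long round recurs once per M pulses
```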
It remains to draw the analogous conclusions for using Algorithm 4 with Algorithm 3 as synchronization algorithm.
Theorem 5
Proof
As for Theorem 4, with Corollary 14 taking the place of Lemma 13 and noting that the convergence argument for the frequencies relies on rounds being executed correctly only (i.e., no assumptions on μ_{ v }(1), v ∈ C, are required). □
We remark that despite the stringent requirements on 𝜗 for the recovery argument to work (i.e., \(\bar {\alpha }<1\)), the actual bound on the precision involves α and β. If 𝜗 ≤ 1.004, we have α ≤ 0.512 and β ≤ 0.502. Concerning stabilization, we remark that it takes O(n) time with probability 1 − 2^{−Ω(n)}, which is directly inherited from FATAL. The subsequent convergence to small skews is not affected by n, and will be much faster for realistic parameters, so we refrain from a more detailed statement.
7 Conclusions
The results derived in this paper demonstrate that the Lynch–Welch synchronization principle is a promising candidate for reliable clock generation, not only in software, but also in hardware. Apart from accurate bounds on the synchronization error depending on the quality of clocks, we present a generic coupling scheme that enables adding self-stabilization properties.
We believe these results to be of practical merit. Concretely, first results from a prototype Field-Programmable Gate Array (FPGA) implementation of Algorithm 2 show a skew of 182 ps [12]. Given the appealing simplicity of the presented algorithms and this excellent performance, we consider the approach a viable candidate for reliable clock generation in fault-tolerant low-level hardware and other areas.
Footnotes
 1.
 2.
All prior self-stabilizing algorithms have at least this skew. It should also be noted that d involves computational delay and turns out to be larger for FATAL, due to issues related to implementation.
 3.
The prototype implementation achieves a skew of 182 ps [12], which is suitable for generating a system clock.
 4.
Constraining feasible clock rates is necessary to avoid that measurement errors result in clocks speeding up or slowing down arbitrarily over time.
 5.
The maximum delay d tends to be at least one or two orders of magnitude larger than the delay uncertainty U.
 6.
If a node has fewer than 2f + 1 neighbors in a system tolerating f faults, it cannot distinguish whether it synchronizes to a group of f correct or f faulty neighbors.
 7.
It is common to define the drift symmetrically, i.e., (1 − ρ)(t^{′}− t) ≤ H_{ v }(t^{′}) − H_{ v }(t) ≤ (1 + ρ)(t^{′}− t) for some 0 < ρ < 1. For ρ ≪ 1 and 𝜗 ≈ 1, up to lower-order terms this is equivalent to setting ρ := (𝜗 − 1)/2 and rescaling the real time axis by a factor of 1 − ρ. The one-sided formulation results in less cluttered notation.
 8.
Discretization can be handled by reinterpreting the discretization error as part of the delay uncertainty. All our algorithms use the hardware clock exclusively to measure bounded time differences.
 9.
Typically, e(r) is a monotone sequence, implying that simply \(E=\lim _{r\to \infty }e(r)\).
 10.
Note that we divide the measured local time differences by factor (𝜗 + 1)/2, the average of the minimum and maximum clock rates. This is an artifact of our more notation-friendly “one-sided” definition of hardware clock rates from [1, 𝜗]; in an implementation, one simply reads the hardware clocks (which exhibit symmetric error) without any scaling.
 11.
Given that hardware clock speeds may differ by at most factor 𝜗, nodes need to be able to increase or decrease their rates by factor 𝜗: a single deviating node may be considered faulty by the algorithm, so each node must be able to bridge this speed difference on its own.
 12.
This issue can be circumvented by having a second, dedicated communication link between each pair of nodes.
Notes
Acknowledgements
Open access funding provided by Max Planck Society. We thank Matthias Függer and Attila Kinali for fruitful discussions, and the anonymous reviewers of an earlier version for valuable comments.
References
 1. Overview of Silicon Oscillators by Linear Technology (retrieved May 2016). http://cds.linear.com/docs/en/productselectorcard/2PB_osccalcfb.pdf
 2. Daliot, A., Dolev, D.: Self-stabilizing Byzantine pulse synchronization. Computing Research Repository, arXiv:0608092 (2006)
 3. Distributed Algorithms for Robust Tick-Synchronization (2005–2008). Research project [retrieved: 05, 2014]. http://ti.tuwien.ac.at/ecs/research/projects/darts
 4. Dolev, D., Függer, M., Lenzen, C., Posch, M., Schmid, U., Steininger, A.: Rigorously modeling self-stabilizing fault-tolerant circuits: An ultra-robust clocking scheme for systems-on-chip. J. Comput. Syst. Sci. 80(4), 860–900 (2014)
 5. Dolev, D., Függer, M., Lenzen, C., Schmid, U.: Fault-tolerant algorithms for tick-generation in asynchronous logic: Robust pulse generation. J. ACM 61(5), 30:1–30:74 (2014)
 6. Dolev, D., Halpern, J.Y., Strong, H.R.: On the possibility and impossibility of achieving clock synchronization. J. Comput. Syst. Sci. 32(2), 230–250 (1986)
 7. Dolev, D., Lynch, N.A., Pinter, S.S., Stark, E.W., Weihl, W.E.: Reaching approximate agreement in the presence of faults. J. ACM 33, 499–516 (1986)
 8. Dolev, S., Welch, J.L.: Self-stabilizing clock synchronization in the presence of Byzantine faults. J. ACM 51(5), 780–799 (2004)
 9. FlexRay Consortium, et al.: FlexRay communications system protocol specification. Version 2.1 (2005)
 10. Függer, M., Armengaud, E., Steininger, A.: Safely stimulating the clock synchronization algorithm in time-triggered systems – a combined formal & experimental approach. IEEE Trans. Indus. Inf. 5(2), 132–146 (2009)
 11. Függer, M., Schmid, U.: Reconciling fault-tolerant distributed computing and systems-on-chip. Distrib. Comput. 24(6), 323–355 (2012)
 12. Huemer, F., Kinali, A., Lenzen, C.: Fault-tolerant clock synchronization with high precision. In: IEEE Symposium on VLSI (ISVLSI), pp. 490–495 (2016)
 13. Kopetz, H., Bauer, G.: The time-triggered architecture. Proc. IEEE 91(1), 112–126 (2003)
 14. Lenzen, C., Rybicki, J.: Self-stabilising Byzantine clock synchronisation is almost as easy as consensus. In: 31st Symposium on Distributed Computing (DISC), to appear (2017)
 15. Lundelius, J., Lynch, N.: An upper and lower bound for clock synchronization. Inf. Control 62(2–3), 190–204 (1984)
 16. Schossmaier, K.: Interval-based Clock State and Rate Synchronization. Ph.D. thesis, Technical University of Vienna (1998)
 17. Schossmaier, K., Weiss, B.: An algorithm for fault-tolerant clock state and rate synchronization. In: 18th Symposium on Reliable Distributed Systems (SRDS), pp. 36–47 (1999)
 18. Srikanth, T.K., Toueg, S.: Optimal clock synchronization. J. ACM 34(3), 626–645 (1987)
 19. Welch, J.L., Lynch, N.A.: A new fault-tolerant algorithm for clock synchronization. Inf. Comput. 77(1), 1–36 (1988)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.