Proactive Leader Election in Asynchronous Shared Memory Systems
In this paper, we present an algorithm for fault-tolerant proactive leader election in asynchronous shared memory systems, together with its formal verification. Roughly speaking, a leader election algorithm is proactive if it tolerates node failures even after a leader has been elected, and (stable) leader election happens periodically. This is needed in systems where a leader is required after every failure to ensure availability, and where there may be no explicit events, such as messages, in the (shared memory) system. Previous algorithms such as Disk Paxos are not proactive.
In our model, individual nodes can fail and reincarnate at any point in time. Each node has a counter that is incremented every period, and the period is the same across all nodes (modulo a maximum drift). Different nodes can be in different epochs at the same time. Our algorithm ensures that there is at most one leader per epoch; hence, if the counter values of some set of nodes match, there can be at most one leader among them. If the nodes satisfy certain timeliness constraints, the leader for the epoch with the highest counter also becomes the leader for the next epoch (the stability property). Our algorithm uses shared memory proportional to the number of processes, which is optimal. We also show how our protocol can be used in clustered shared-disk systems to select a primary network partition. We have represented our protocol as a state machine in the Isabelle/HOL logic system and have proved the safety property of the protocol.
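The at-most-one-leader-per-epoch guarantee can be illustrated with a small sketch. This is not the paper's algorithm; it is a toy model in which a lock plays the role of an atomic shared-memory register update, and all names (`SharedEpochRegistry`, `try_claim`, `leader_of`) are hypothetical. Several nodes race to claim leadership of the same epoch, and exactly one succeeds:

```python
import threading

class SharedEpochRegistry:
    """Toy shared-memory registry enforcing at most one leader per epoch.

    The lock stands in for an atomic register update; in the real protocol
    this guarantee is achieved with per-process shared registers.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._leader = {}  # epoch number -> id of the node that claimed it

    def try_claim(self, epoch, node):
        # Atomically claim leadership of `epoch` if it is still unclaimed.
        with self._lock:
            if epoch not in self._leader:
                self._leader[epoch] = node
                return True
            return False

    def leader_of(self, epoch):
        # Return the node that leads `epoch`, or None if unclaimed.
        with self._lock:
            return self._leader.get(epoch)

registry = SharedEpochRegistry()
results = []

# Five nodes concurrently attempt to become leader of epoch 7.
threads = [
    threading.Thread(target=lambda n=n: results.append((n, registry.try_claim(7, n))))
    for n in range(5)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

winners = [n for n, ok in results if ok]
```

However the scheduler interleaves the threads, `winners` always contains exactly one node, mirroring the per-epoch uniqueness property the protocol proves.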
Keywords: Shared Memory · Failure Detector · Safety Property · Leader Election · Network Partition
- 1. Gafni, E., Lamport, L.: Disk Paxos. In: Proceedings of the International Symposium on Distributed Computing, pp. 330–344 (2000)
- 5. Jayanti, P., Chandra, T.D., Toueg, S.: Fault-tolerant wait-free shared objects. In: Proceedings of the 33rd Annual Symposium on Foundations of Computer Science (1992)
- 6. Chockler, G., Malkhi, D.: Light-Weight Leases for Storage-Centric Coordination. Technical Report MIT-LCS-TR-934, MIT (April 2004)
- 7. Aguilera, M.K., Delporte-Gallet, C., Fauconnier, H., Toueg, S.: Stable Leader Election. In: Proceedings of the 15th International Conference on Distributed Computing, pp. 108–122 (2001)
- 8. Lampson, B.: How to build a highly available system using consensus. In: Babaoğlu, Ö., Marzullo, K. (eds.) WDAG 1996. LNCS, vol. 1151, pp. 1–17. Springer, Heidelberg (1996)
- 9. De Prisco, R., Lampson, B., Lynch, N.: Revisiting the Paxos algorithm. In: Proceedings of the 11th Workshop on Distributed Algorithms (WDAG), Saarbrücken, September 1997, pp. 111–125 (1997)
- 10. Larrea, M., Fernández, A., Arévalo, S.: Optimal implementation of the weakest failure detector for solving consensus. In: Proceedings of the 19th IEEE Symposium on Reliable Distributed Systems, SRDS 2000, Nuremberg, Germany, October 2000, pp. 52–59 (2000)
- 12. Lo, W.-K., Hadzilacos, V.: Using Failure Detectors to Solve Consensus in Asynchronous Shared-Memory Systems. In: Proceedings of the 8th International Workshop on Distributed Algorithms, pp. 280–295 (1994)
- 16. Chockler, G., Malkhi, D.: Active Disk Paxos with Infinitely Many Processes. In: Proceedings of the 21st ACM Symposium on Principles of Distributed Computing (PODC) (August 2002)
- 18. Cristian, F., Fetzer, C.: The timed asynchronous system model. In: Proceedings of the 28th Annual International Symposium on Fault-Tolerant Computing, Munich, Germany, June 1998, pp. 140–149 (1998)