This paper considers the problem of programming a multiple process system so that it continues to operate despite the failure of individual processes. A powerful synchronizing primitive is defined, and it is used to solve some sample problems. An algorithm is then given which implements this primitive under very weak assumptions about the nature of interprocess communication, and a careful informal proof of its correctness is given.
Keywords: Information System, Operating System, Data Structure, Communication Network, Information Theory