Cache-Aware Lock-Free Queues for Multiple Producers/Consumers and Weak Memory Consistency
This paper presents a lock-free FIFO queue data structure. The algorithm supports multiple producers and multiple consumers and is designed to be cache-aware and to work directly on weak memory consistency models. It exploits cache behavior, in concert with lazy updates of shared data and a dynamic lock-free memory management scheme, to reduce unnecessary synchronization and increase performance. Experiments on an 8-way multi-core platform show significantly better performance for the new algorithm compared to previous fast lock-free queue algorithms.
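The paper's own algorithm is not reproduced in this abstract. As a hedged illustration of the general class of structure it concerns, the sketch below shows a bounded lock-free multi-producer/multi-consumer queue using per-slot sequence numbers (a well-known technique due to Dmitry Vyukov, not the authors' algorithm). It relies only on C++11 `std::atomic` acquire/release ordering, and the head/tail counters are padded onto separate cache lines to avoid false sharing; per-cell padding and the paper's lazy-update and memory-reclamation machinery are omitted for brevity.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

// Bounded lock-free MPMC queue (Vyukov-style sequence slots).
// Capacity must be a power of two.
template <typename T>
class MpmcQueue {
    struct Cell {
        std::atomic<size_t> seq;  // slot state: ready-to-write vs ready-to-read
        T data;
    };
    // Producers and consumers contend on different counters; padding them
    // to separate cache lines avoids false sharing between the two sides.
    alignas(64) std::atomic<size_t> enqueue_pos_{0};
    alignas(64) std::atomic<size_t> dequeue_pos_{0};
    std::vector<Cell> buffer_;
    size_t mask_;

public:
    explicit MpmcQueue(size_t capacity_pow2)
        : buffer_(capacity_pow2), mask_(capacity_pow2 - 1) {
        for (size_t i = 0; i < capacity_pow2; ++i)
            buffer_[i].seq.store(i, std::memory_order_relaxed);
    }

    bool enqueue(const T& v) {
        size_t pos = enqueue_pos_.load(std::memory_order_relaxed);
        for (;;) {
            Cell& c = buffer_[pos & mask_];
            size_t seq = c.seq.load(std::memory_order_acquire);
            intptr_t diff = (intptr_t)seq - (intptr_t)pos;
            if (diff == 0) {
                // Slot is free; claim it by advancing the producer counter.
                if (enqueue_pos_.compare_exchange_weak(
                        pos, pos + 1, std::memory_order_relaxed)) {
                    c.data = v;
                    // Publish the item: release pairs with the consumer's acquire.
                    c.seq.store(pos + 1, std::memory_order_release);
                    return true;
                }
            } else if (diff < 0) {
                return false;  // queue is full
            } else {
                pos = enqueue_pos_.load(std::memory_order_relaxed);
            }
        }
    }

    bool dequeue(T& v) {
        size_t pos = dequeue_pos_.load(std::memory_order_relaxed);
        for (;;) {
            Cell& c = buffer_[pos & mask_];
            size_t seq = c.seq.load(std::memory_order_acquire);
            intptr_t diff = (intptr_t)seq - (intptr_t)(pos + 1);
            if (diff == 0) {
                // Slot holds a published item; claim it.
                if (dequeue_pos_.compare_exchange_weak(
                        pos, pos + 1, std::memory_order_relaxed)) {
                    v = c.data;
                    // Mark the slot free for the producer one lap ahead.
                    c.seq.store(pos + mask_ + 1, std::memory_order_release);
                    return true;
                }
            } else if (diff < 0) {
                return false;  // queue is empty
            } else {
                pos = dequeue_pos_.load(std::memory_order_relaxed);
            }
        }
    }
};
```

The per-slot sequence number doubles as both the synchronization point and the full/empty test, so producers and consumers never touch each other's counter on the fast path; this mirrors, in a simplified bounded setting, the paper's goal of cutting unnecessary synchronization on shared data.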