Message Lower Bounds via Efficient Network Synchronization
We present a uniform approach to derive message-time tradeoffs and message lower bounds for synchronous distributed computations using results from communication complexity theory.
Since the models used in the classical theory of communication complexity are inherently asynchronous, lower bounds from that theory do not directly apply in a synchronous setting. To address this issue, we prove a general result, the Synchronous Simulation Theorem (SST), which allows one to obtain message lower bounds for synchronous distributed computations by leveraging lower bounds on communication complexity. The SST is a by-product of a new efficient synchronizer for complete networks, called \(\sigma \), whose simulation overheads, in both time and message complexity, are only logarithmic in the number of synchronous rounds in the CONGEST model. The \(\sigma \) synchronizer is particularly efficient at simulating synchronous algorithms that employ silence, i.e., rounds in which some nodes deliberately send no messages. A curious property of this synchronizer, which sets it apart from its predecessors, is that it is time-compressing: in some cases the simulation can be faster than the original execution.
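To make the synchronizer idea concrete, the following is a minimal toy sketch (not the \(\sigma \) synchronizer of this paper, but an illustration of the general principle behind classical synchronizers such as Awerbuch's): messages are tagged with the round in which they were sent, and a node consumes only the messages belonging to the round it is currently simulating, so that an asynchronous execution faithfully reproduces a synchronous one. The network model, the `step` interface, and the broadcast-then-max example are all illustrative assumptions.

```python
# Toy sketch of round-tagged synchronization on a simulated complete
# network. Each message carries its round number; a node executing
# round r reads only round-r messages, which is the core invariant
# any synchronizer must maintain.
from collections import defaultdict

def run_synchronized(inputs, rounds, step):
    """Simulate `rounds` synchronous rounds among len(inputs) nodes.

    `step(node, state, inbox)` returns (new_state, messages), where
    `messages` maps destination node -> payload.
    """
    n = len(inputs)
    states = list(inputs)
    # mailbox[dest][round] -> list of payloads deliverable in that round
    mailbox = [defaultdict(list) for _ in range(n)]
    for r in range(rounds):
        outgoing = []
        for v in range(n):
            states[v], msgs = step(v, states[v], mailbox[v][r])
            outgoing.append(msgs)
        # Messages sent in round r are tagged for consumption in
        # round r + 1, regardless of actual delivery order.
        for msgs in outgoing:
            for dest, payload in msgs.items():
                mailbox[dest][r + 1].append(payload)
    return states

# Example synchronous algorithm on a 4-node complete network:
# every node broadcasts its current value, then takes the maximum
# of everything it has heard.
def broadcast_then_max(v, state, inbox):
    new_state = max([state] + inbox)
    msgs = {u: new_state for u in range(4) if u != v}
    return new_state, msgs

result = run_synchronized([3, 1, 4, 1], rounds=2, step=broadcast_then_max)
# After two rounds every node holds the global maximum.
```

In this sketch every round is fully chatty; a synchronizer that efficiently handles silent rounds, as \(\sigma \) does, must avoid paying per-round overhead for nodes that send nothing.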
While the SST yields near-optimal message lower bounds for a number of allowed synchronous rounds r that can be as large as polynomial in the size of the network, it fails to provide meaningful bounds when r is very large. To complement the bounds provided by the SST, we then derive message lower bounds for the synchronous message-passing model that are unconditional, that is, independent of r, via direct reductions from multi-party communication complexity.
We apply our approach to show (almost) tight message-time tradeoffs and message lower bounds for several fundamental problems in the synchronous message-passing model of distributed computation. These include sorting, matrix multiplication, and many graph problems. All these lower bounds hold for any distributed algorithm, including randomized Monte Carlo algorithms.