
The Energy Complexity of Diameter and Minimum Cut Computation in Bounded-Genus Networks

Conference paper in Structural Information and Communication Complexity (SIROCCO 2023), Lecture Notes in Computer Science, vol. 13892.

Abstract

This paper investigates the energy complexity of distributed graph problems in multi-hop radio networks, where the energy cost of an algorithm is measured by the maximum number of awake rounds of a vertex. Recent works revealed that some problems, such as broadcast, breadth-first search, and maximal matching, can be solved with energy-efficient algorithms that consume only \({{\text {poly}}}\log n\) energy. However, there exist some problems, such as computing the diameter of the graph, that require \(\varOmega (n)\) energy to solve. To improve energy efficiency for these problems, we focus on a special graph class: bounded-genus graphs. We present algorithms for computing the exact diameter, the exact global minimum cut size, and a \((1\pm \epsilon )\)-approximate s-t minimum cut size with \(\tilde{O}(\sqrt{n})\) energy for bounded-genus graphs. Our approach is based on a generic framework that divides the vertex set into high-degree and low-degree parts and leverages the structural properties of bounded-genus graphs to control the number of certain connected components in the subgraph induced by the low-degree part.


References

  1. Akhoondian Amiri, S., Schmid, S., Siebertz, S.: A local constant factor MDS approximation for bounded genus graphs. In: Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing (PODC), pp. 227–233 (2016)

  2. Ambühl, C.: An optimal bound for the MST algorithm to compute energy efficient broadcast trees in wireless networks. In: Caires, L., Italiano, G.F., Monteiro, L., Palamidessi, C., Yung, M. (eds.) Automata, Languages and Programming, pp. 1139–1150. Springer, Berlin, Heidelberg (2005)

  3. Augustine, J., Moses, Jr, W.K., Pandurangan, G.: Distributed MST computation in the sleeping model: awake-optimal algorithms and lower bounds. arXiv preprint arXiv:2204.08385 (2022)

  4. Bar-Yehuda, R., Goldreich, O., Itai, A.: On the time-complexity of broadcast in multi-hop radio networks: an exponential gap between determinism and randomization. J. Comput. Syst. Sci. 45(1), 104–126 (1992)

  5. Barenboim, L., Maimon, T.: Deterministic logarithmic completeness in the distributed sleeping model. In: Gilbert, S. (ed.) 35th International Symposium on Distributed Computing (DISC). Leibniz International Proceedings in Informatics (LIPIcs), vol. 209, pp. 10:1–10:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Dagstuhl, Germany (2021). https://doi.org/10.4230/LIPIcs.DISC.2021.10

  6. Bender, M.A., Kopelowitz, T., Pettie, S., Young, M.: Contention resolution with log-logstar channel accesses. In: Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC), pp. 499–508 (2016). https://doi.org/10.1145/2897518.2897655

  7. Berenbrink, P., Cooper, C., Hu, Z.: Energy efficient randomised communication in unknown adhoc networks. Theor. Comput. Sci. 410(27), 2549–2561 (2009). https://doi.org/10.1016/j.tcs.2009.02.002

  8. Bordim, J.L., Jiangtao, C., Hayashi, T., Nakano, K., Olariu, S.: Energy-efficient initialization protocols for ad-hoc radio networks. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 83(9), 1796–1803 (2000)

  9. Caragiannis, I., Galdi, C., Kaklamanis, C.: Basic computations in wireless networks. In: Deng, X., Du, D.-Z. (eds.) ISAAC 2005. LNCS, vol. 3827, pp. 533–542. Springer, Heidelberg (2005). https://doi.org/10.1007/11602613_54

  10. Chang, Y., Dani, V., Hayes, T.P., He, Q., Li, W., Pettie, S.: The energy complexity of broadcast. In: Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing (PODC) (2018)

  11. Chang, Y., Kopelowitz, T., Pettie, S., Wang, R., Zhan, W.: Exponential separations in the energy complexity of leader election. In: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pp. 771–783 (2017)

  12. Chang, Y.J., Dani, V., Hayes, T.P., Pettie, S.: The energy complexity of BFS in radio networks. In: Proceedings of the 39th Symposium on Principles of Distributed Computing (PODC), pp. 273–282. ACM (2020). https://doi.org/10.1145/3382734.3405713

  13. Chang, Y.J., Duan, R., Jiang, S.: Near-optimal time-energy trade-offs for deterministic leader election. In: Proceedings of the 33th Annual ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). ACM (2021)

  14. Chang, Y.J., Jiang, S.: The energy complexity of Las Vegas leader election. In: Proceedings of the 34th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 75–86 (2022)

  15. Chatterjee, S., Gmyr, R., Pandurangan, G.: Sleeping is efficient: MIS in \(O(1)\)-rounds node-averaged awake complexity. In: Proceedings of the 39th Symposium on Principles of Distributed Computing (PODC), pp. 99–108. ACM (2020). https://doi.org/10.1145/3382734.3405718

  16. Chlamtac, I., Kutten, S.: On broadcasting in radio networks-problem analysis and protocol design. IEEE Trans. Commun. 33(12), 1240–1246 (1985)

  17. Clementi, A.E.F., Crescenzi, P., Penna, P., Rossi, G., Vocca, P.: On the complexity of computing minimum energy consumption broadcast subgraphs. In: Ferreira, A., Reichel, H. (eds.) STACS 2001. LNCS, vol. 2010, pp. 121–131. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-44693-1_11

  18. Czygrinow, A., Hańćkowiak, M., Wawrzyniak, W.: Fast distributed approximations in planar graphs. In: Taubenfeld, G. (ed.) DISC 2008. LNCS, vol. 5218, pp. 78–92. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-87779-0_6

  19. Dani, V., Gupta, A., Hayes, T.P., Pettie, S.: Wake up and join me! an energy-efficient algorithm for maximal matching in radio networks. Distrib. Comput. (2022)

  20. Dani, V., Hayes, T.P.: How to wake up your neighbors: safe and nearly optimal generic energy conservation in radio networks. arXiv preprint arXiv:2205.12830 (2022)

  21. Dufoulon, F., Moses, Jr, W.K., Pandurangan, G.: Sleeping is superefficient: MIS in exponentially better awake complexity. arXiv preprint arXiv:2204.08359 (2022)

  22. Ephremides, A., Truong, T.V.: Scheduling broadcasts in multihop radio networks. IEEE Trans. Commun. 38(4), 456–460 (1990). https://doi.org/10.1109/26.52656

  23. Ga̧sieniec, L., Kantor, E., Kowalski, D.R., Peleg, D., Su, C.: Energy and time efficient broadcasting in known topology radio networks. In: Pelc, A. (ed.) DISC 2007. LNCS, vol. 4731, pp. 253–267. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-75142-7_21

  24. Ghaffari, M., Haeupler, B.: Distributed algorithms for planar networks II: low-congestion shortcuts, MST, and min-cut. In: Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 202–219. SIAM (2016)

  25. Ghaffari, M., Haeupler, B.: Low-congestion shortcuts for graphs excluding dense minors. In: Proceedings of the 2021 ACM Symposium on Principles of Distributed Computing PODC, pp. 213–221 (2021)

  26. Ghaffari, M., Parter, M.: Near-optimal distributed DFS in planar graphs. In: 31st International Symposium on Distributed Computing (DISC 2017). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik (2017)

  27. Ghaffari, M., Portmann, J.: Average awake complexity of MIS and matching. In: Proceedings of the 34th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 45–55 (2022)

  28. Haeupler, B., Li, J., Zuzic, G.: Minor excluded network families admit fast distributed algorithms. In: Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing (PODC), pp. 465–474 (2018)

  29. Jurdzinski, T., Kutylowski, M., Zatopianski, J.: Energy-efficient size approximation of radio networks with no collision detection. In: Proceedings of the 8th Annual International Conference on Computing and Combinatorics (COCOON), pp. 279–289 (2002)

  30. Jurdzinski, T., Kutylowski, M., Zatopianski, J.: Weak communication in single-hop radio networks: adjusting algorithms to industrial standards. Concurr. Comput. Pract. Exp. 15(11–12), 1117–1131 (2003)

  31. Jurdziński, T., Kutyłowski, M., Zatopiański, J.: Efficient algorithms for leader election in radio networks. In: Proceedings of the 21st Annual ACM Symposium on Principles of Distributed Computing (PODC), pp. 51–57 (2002). https://doi.org/10.1145/571825.571833

  32. Kardas, M., Klonowski, M., Pajak, D.: Energy-efficient leader election protocols for single-hop radio networks. In: Proceedings of the 42nd International Conference on Parallel Processing (ICPP), pp. 399–408 (2013)

  33. Kirousis, L.M., Kranakis, E., Krizanc, D., Pelc, A.: Power consumption in packet radio networks. Theor. Comput. Sci. 243(1), 289–305 (2000). https://doi.org/10.1016/S0304-3975(98)00223-0

  34. Kutyłowski, M., Rutkowski, W.: Adversary immune leader election in ad hoc radio networks. In: Di Battista, G., Zwick, U. (eds.) ESA 2003. LNCS, vol. 2832, pp. 397–408. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-39658-1_37

  35. Lavault, C., Marckert, J.F., Ravelomanana, V.: Quasi-optimal energy-efficient leader election algorithms in radio networks. Inf. Comput. 205(5), 679–693 (2007)

  36. Lenzen, C., Pignolet, Y.A., Wattenhofer, R.: Distributed minimum dominating set approximations in restricted families of graphs. Distrib. Comput. 26(2), 119–137 (2013)

  37. Li, J., Parter, M.: Planar diameter via metric compression. In: Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC), pp. 152–163 (2019)

  38. Miller, G.L., Peng, R., Xu, S.C.: Parallel graph decompositions using random shifts. In: Proceedings of the Twenty-Fifth Annual ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 196–203. ACM (2013)

  39. Nakano, K., Olariu, S.: Randomized leader election protocols in radio networks with no collision detection. In: Goos, G., Hartmanis, J., van Leeuwen, J., Lee, D.T., Teng, S.-H. (eds.) ISAAC 2000. LNCS, vol. 1969, pp. 362–373. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-40996-3_31

  40. Parter, M.: Distributed planar reachability in nearly optimal time. In: 34th International Symposium on Distributed Computing (DISC 2020). Schloss Dagstuhl-Leibniz-Zentrum für Informatik (2020)

  41. Roditty, L., Williams, V.V.: Fast approximation algorithms for the diameter and radius of sparse graphs. In: Proceedings 45th ACM Symposium on Theory of Computing (STOC), pp. 515–524 (2013)

  42. Sen, A., Huson, M.L.: A new model for scheduling packet radio networks. Wirel. Netw. 3(1), 71–82 (1997). https://doi.org/10.1023/A:1019128411323

  43. Takagi, H., Kleinrock, L.: Optimal transmission ranges for randomly distributed packet radio terminals. IEEE Trans. Commun. 32(3), 246–257 (1984). https://doi.org/10.1109/TCOM.1984.1096061

  44. Wawrzyniak, W.: A strengthened analysis of a local algorithm for the minimum dominating set problem in planar graphs. Inf. Process. Lett. 114(3), 94–98 (2014)

Author information

Correspondence to Yi-Jun Chang.

Appendices

A Learning the Topology of \(G^\star \)

By Lemma 10, the task of computing the diameter of a bounded-genus graph G is reduced to computing the diameter of \(G^\star \). In this section, we show that all vertices can learn the graph topology of \(G^\star \) using \(\tilde{O}(\sqrt{n})\) energy.

Recall that \(G_H\) is the graph defined by the vertex set \(V_H\) and the edge set \(\{\{u,v\} \, : \, |C(u,v)| > 0\}\). By Lemma 6, we know that \(|E(G_H)| = O(\sqrt{n})\) and there exists an assignment \(F: E(G_H) \mapsto V_H\) mapping each pair \(\{u,v\} \in E(G_H)\) to one vertex in \(\{u,v\}\) such that each \(w \in V_H\) is the image of at most O(1) pairs. Let \(\mathcal {A}'\) be any deterministic centralized algorithm that finds such an assignment, and fix \(F^\star \) to be the outcome of \(\mathcal {A}'\) on the input \(G_H\). If each vertex \(v \in V\) already knows the graph \(G_H\), then v can locally calculate \(F^\star \) by simulating \(\mathcal {A}'\); determinism ensures that all vertices arrive at the same \(F^\star \).
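Because only determinism matters, the concrete choice of \(\mathcal {A}'\) is free. As an illustration, the following is a minimal centralized sketch (ours, not the paper's) of one algorithm that could play the role of \(\mathcal {A}'\): it repeatedly peels off a minimum-degree vertex, breaking ties by ID, and assigns that vertex all of its remaining incident edges. Assuming, as Lemma 6 implies, that every subgraph of \(G_H\) contains a vertex of O(1) degree, every vertex is assigned O(1) edges; the name assign_edges is illustrative.

```python
def assign_edges(vertices, edges):
    """Deterministically map each edge {u, v} of G_H to one endpoint so
    that every vertex receives only O(1) edges, by min-degree peeling.
    Ties are broken by vertex ID, so all vertices simulating this
    routine on the same input compute the same assignment F*."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(vertices)
    F = {}
    while remaining:
        w = min(remaining, key=lambda v: (len(adj[v]), v))  # min degree, then ID
        for x in adj[w]:
            F[frozenset((w, x))] = w   # w absorbs all of its remaining edges
            adj[x].discard(w)
        adj[w].clear()
        remaining.remove(w)
    return F

# Example: a 4-cycle; every vertex is assigned at most 2 edges.
print(assign_edges([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]))
```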

To learn \(G^\star \), we will let each vertex \(u \in V\) learn the following information:

Basic information \(\mathcal {I}_0(u)\):

For each vertex \(u \in V\), \(\mathcal {I}_0(u)\) contains the following information: (i) whether \(u \in V_H\) or \(u \in V_L\), (ii) the list of vertices in \(N(u) \cap V_H\), and (iii) the set of all pairs \(\{u',v'\} \in E(G_H)\).

If u is in a connected component S of \(G[V_L]\), then \(\mathcal {I}_0(u)\) contains the following additional information: (i) the list of vertices in S, and (ii) the topology of the subgraph \(G^+[S]\).

Information about type-1 components \(\mathcal {I}_1(u)\):

For each \(u \in V_H\), \(\mathcal {I}_1(u)\) contains the graph topology of \(G^+[S']\), for each \(S' = A_1[u]\), \(A_2[u]\), and \(B[u]\).

Information about type-2 components \(\mathcal {I}_2(u)\):

For each \(u \in V_H\), \(\mathcal {I}_2(u)\) contains the following information. For each pair \(\{u,v\} \in E(G_H)\) such that \(F^\star (\{u,v\}) = u\), \(\mathcal {I}_2(u)\) includes the graph topology of \(G^+[S']\), for each \(S' = A_i^k[u,v]\), \(A_i^k[v,u]\), \(B^l[u,v]\), and \(R[u,v]\), for each \(i \in \{1,2\}\), \(k \in \{-\sqrt{n}, \ldots , \sqrt{n}\}\), and \(l \in \{1, \ldots , \sqrt{n}\}\).

The information \(\mathcal {I}_0(u)\) contains the graph topology of \(G_H\), allowing each vertex u to calculate \(F^\star \) locally. Note that \(\mathcal {I}_1(u)\) and \(\mathcal {I}_2(u)\) contain nothing if \(u \in V_L\). The following lemma shows that the graph topology of \(G^\star \) can be learned efficiently given that each vertex \(u \in V\) already knows \(\mathcal {I}_0(u)\), \(\mathcal {I}_1(u)\), and \(\mathcal {I}_2(u)\).

Lemma 11

Given that each \(u \in V\) already knows \(\mathcal {I}_0(u)\), \(\mathcal {I}_1(u)\), and \(\mathcal {I}_2(u)\), using \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy, we can let all vertices in G learn the graph topology of \(G^\star \) w.h.p.

Proof

To learn \(G^\star \), it suffices to know the following information: (i) \(\mathcal {I}_1(u)\) and \(\mathcal {I}_2(u)\) for each \(u \in V_H\), (ii) the graph topology of \(G^+[S]\) for each type-3 component S, and (iii) the graph topology of the subgraph induced by \(V_H\). For each type-3 component S, let \(r_S\) be the smallest-ID vertex in S. In view of the above, to let each vertex learn the topology of \(G^\star \), it suffices to let the following \(O(\sqrt{n})\) vertices broadcast the following information:

  • For each \(u \in V_H\), u broadcasts \(\mathcal {I}_1(u)\), \(\mathcal {I}_2(u)\), and the list of vertices \(N(u) \cap V_H\), which is contained in \(\mathcal {I}_0(u)\).

  • For each \(u \in V_L\) such that \(u = r_S\) for a type-3 component S, u broadcasts the graph topology of \(G^+[S]\). Note that each vertex \(u \in V_L\) can decide locally using the information in \(\mathcal {I}_0(u)\) whether or not u itself is \(r_S\) for a type-3 component S.

Since \(|V_H| = O(\sqrt{n})\) and the number of type-3 components is also \(O(\sqrt{n})\) by Lemma 5, the number of vertices that need to broadcast is \(O(\sqrt{n})\). We run the algorithm of Lemma 1 to find a good labeling \(\mathcal {L}\) of G, and then we use Lemma 2(2) with \(x = O(\sqrt{n})\) to let the above \(O(\sqrt{n})\) vertices broadcast their information. This can be done in \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy. After that, all vertices know the graph topology of \(G^\star \).   \(\square \)

Next, we consider the task of learning the basic information \(\mathcal {I}_0(u)\).

Lemma 12

Using \(\tilde{O}(\sqrt{n})\) time and energy, we can let all vertices \(v\in V\) learn the following information w.h.p.

  • Each \(v \in V\) learns whether \(v \in V_H\) or \(v \in V_L\).

  • If \(v \in V_H\), then v also learns the list of vertices in \(N(v) \cap V_H\).

  • If \(v \in V_L\), then v also learns the two lists of vertices \(N(v) \cap V_L\) and \(N(v) \cap V_H\).

Proof

First, we run \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\) with \(W=1\), \(\epsilon = 1\), \(\mathcal {S}=\mathcal {R}=V\), and \(m_u = 1\), for each \(u \in \mathcal {S}\). This step lets each \(v \in V\) estimate \(\deg (v)\) up to a factor of 2. This step costs \({{\text {poly}}}\log n\) time, by Lemma 27.

After that, we run \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {all}}}\) with \(\mathcal {S} = V\) and \(\mathcal {R}\) being the set of all vertices v whose estimate of \(\deg (v)\) is at most \(2 \sqrt{n}\). The message \(m_v\) for each vertex v is \({\text {ID}}(v)\), and we use the bound \(\varDelta ' = 4\sqrt{n}\) for \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {all}}}\). Recall that \(V_L\) is the set of vertices of degree at most \(\sqrt{n}\), so we must have \(V_L \subseteq \mathcal {R}\). The algorithm of \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {all}}}\) allows each vertex \(v \in \mathcal {R}\) to calculate \(\deg (v)\) precisely. Therefore, after this step, each vertex \(v \in V\) has enough information to decide whether \(v \in V_H\) or \(v \in V_L\). Furthermore, if \(v \in V_L\), then v knows the list of all vertices N(v). This step takes \(\tilde{O}(\sqrt{n})\) time, by Lemma 22.

In order for each vertex to learn all the required vertex lists, we run \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {all}}}\) again with the following parameters: \(\mathcal {S} = V_H\), \(\mathcal {R} = V\), and the message \(m_v\) for each vertex \(v \in \mathcal {S}\) is its \({\text {ID}}(v)\). This time we may use the bound \(\varDelta ' = \sqrt{n} \ge |V_H|\). After the algorithm of \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {all}}}\), each vertex \(v \in V\) knows the list of vertices in \(N(v) \cap V_H\). For each \(v \in V_L\), since v already knows the list of all vertices N(v), it can locally calculate the list \(N(v) \cap V_L\). This step also takes \(\tilde{O}(\sqrt{n})\) time.   \(\square \)

Lemma 13

Using \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy, we can let all vertices v in all connected components S of \(G[V_L]\) learn (i) the vertex set S and (ii) the graph topology of \(G^+[S]\) w.h.p.

Proof

First, we apply Lemma 12 to let all vertices \(v \in V_L\) learn the two lists \(N(v) \cap V_L\) and \(N(v) \cap V_H\). To let all vertices learn the required information in the lemma statement, it suffices to let each vertex \(v \in S\) broadcast the two lists \(N(v) \cap V_L\) and \(N(v) \cap V_H\) to all other vertices in S, for all connected components S of \(G[V_L]\).

We do the above broadcasting task in parallel, for all connected components S of \(G[V_L]\). We use Lemma 1 to let each component S compute a good labeling, and then we use Lemma 2(1) to let each vertex \(v \in S\) broadcast the two lists \(N(v) \cap V_L\) and \(N(v) \cap V_H\) to all other vertices in S. Recall that the degree of any vertex in \(V_L\) is less than \(\sqrt{n}\), so the algorithm of Lemma 2(1) costs \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy.   \(\square \)

For each connected component S of \(G[V_L]\), at the end of the algorithm of Lemma 13, each vertex \(w \in S\) is able to determine the type of S. If S is of type-1, w knows the vertex \(u \in V_H\) such that \(S \in C(u)\). If S is of type-2, w knows the two vertices \(u, v \in V_H\) such that \(S \in C(u,v)\). Given such information, in the following lemma, we design an algorithm for learning the topology of \(G_H\).

Lemma 14

Suppose that each vertex in each type-2 component S already knows (i) the vertex set S and (ii) the graph topology of \(G^+[S]\). Using \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy, all vertices in the graph can learn the set of all pairs \(\{u,v\} \in E(G_H)\) w.h.p.

Proof

First of all, we let all vertices in \(V_H\) agree on a fixed ordering \(V_H = \{v_1, \ldots , v_{|V_H|}\}\) as follows. We use Lemma 1 to compute a good labeling of G, and then we use Lemma 2(2) with \(x = \sqrt{n}\) to let each vertex \(v \in V_H\) broadcast \({\text {ID}}(v)\). After that, we may order \(V_H = \{v_1, \ldots , v_{|V_H|}\}\) in increasing order of \({\text {ID}}\). This step takes \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy.

Next, we consider the task of letting each \(u \in V_H\) learn the list of all \(v \in V_H\) such that \(C(u,v) \ne \emptyset \). We solve this task by \(|V_H|\) invocations of \(\textsf{SR}{\text{- }}\textsf{comm}\). Given a type-2 component \(S \in C(u,v)\), we define \(z_{u,S}\) as the smallest-ID vertex in \(N(v) \cap S\). The vertex \(z_{u,S}\) will be responsible for letting v know that \(C(u,v) \ne \emptyset \). For \(i = 1\) to \(|V_H|\), we do an \(\textsf{SR}{\text{- }}\textsf{comm}\) with \(\mathcal {R} = V_H\) and \(\mathcal {S}\) being the set of all vertices \(z_{v_i,S}\) such that S is a type-2 component with \(v_i \in G^+[S]\). Observe that a vertex \(u \in V_H\) receives a message during the ith iteration if and only if \(C(u, v_i)\ne \emptyset \), i.e., \(\{u, v_i\} \in E(G_H)\). By Lemma 21, this step takes \(|V_H| \cdot {{\text {poly}}}\log n = \tilde{O}(\sqrt{n})\) time.

At the end of the above algorithm, each \(u \in V_H\) knows the list of all \(v \in V_H\) such that \(C(u,v) \ne \emptyset \). In order to let all vertices in G learn the topology of \(G_H\), it suffices to let all \(u \in V_H\) broadcast this information. This can be done using Lemma 2(2) with \(x = \sqrt{n}\), which costs \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy.   \(\square \)

Lemma 15

In \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy, we can let all \(u \in V\) learn \(\mathcal {I}_0(u)\) w.h.p.

Proof

This follows from Lemma 13 and Lemma 14.   \(\square \)

Next, we consider the task of learning \(\mathcal {I}_1(u)\) and \(\mathcal {I}_2(u)\).

Lemma 16

Suppose that each \(v \in V\) knows \(\mathcal {I}_0(v)\). Using \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy, we can let all vertices \(u \in V_H\) learn \(\mathcal {I}_1(u)\) and \(\mathcal {I}_2(u)\) w.h.p.

Proof

Consider any vertex \(u \in V_H\). For each component \(S \in C(u)\), we let \(r_{S,u}\) be the smallest-ID vertex in the set \(S \cap N(u)\). For each \(v \in V_H\) such that \(F^\star (\{u,v\}) = u\), and for each component \(S \in C(u,v)\), we similarly let \(r_{S,u}\) be the smallest-ID vertex in the set \(S \cap N(u)\). As we will later see, \(r_{S,u}\) will be the vertex in S responsible for sending the graph topology \(G^+[S]\) to u in case \(G^+[S]\) belongs to \(\mathcal {I}_1(u)\) or \(\mathcal {I}_2(u)\).

Recall that \(\mathcal {I}_1(u)\) and \(\mathcal {I}_2(u)\) consist of the graph topology \(G^+[S']\) of some selected type-1 and type-2 components \(S'\) such that u belongs to \(G^+[S']\). We will present a generic approach that lets \(u \in V_H\) learn one graph topology in \(\mathcal {I}_1(u)\) and \(\mathcal {I}_2(u)\). As we will later see, the cost of learning one graph topology is \({{\text {poly}}}\log n\) time and energy. If the graph topology to be learned is in C(u), then only u and the vertices \(r_{S,u}\) for all \(S \in C(u)\) need to participate in the algorithm for learning the graph topology. If the graph topology to be learned is in C(u,v), then only u and the vertices \(r_{S,u}\) for all \(S \in C(u,v)\) need to participate. We only describe the algorithms that let \(u \in V_H\) learn \(A_1[u]\) and \(A_2[u]\); the algorithms for learning the remaining graph topologies are analogous.

  • Learning \(A_1[u]\). Recall that \(A_1[u]\) is a component \(S' \in C(u)\) that maximizes \({\text {eccentricity}}(u,S')\). To learn \(A_1[u]\), we use \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {max}}}\) with \(\mathcal {S} = \{r_{S, u} \, : \, S \in C(u)\}\) and \(\mathcal {R} = \{u\}\). The message \(m_v\) of \(v = r_{S, u}\) is the graph topology of \(G^+[S]\), and the key of \(v = r_{S, u}\) is \(k_v = {\text {eccentricity}}(u,S)\). Since each type-1 and type-2 component satisfies \(|S| \le \sqrt{n}\), the maximum possible value of \({\text {eccentricity}}(u,S)\) is \(\sqrt{n}\), so the size of the key space for \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {max}}}\) is \(K = \sqrt{n}\).

    If \(|C(u)| > 0\), then the message that u receives from \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {max}}}\) is the topology of \(G^+[S']\), for a component \(S' \in C(u)\) that attains the maximum value of \({\text {eccentricity}}(u,S')\) among all components in C(u), so u may set \(A_1[u] = S'\). If \(|C(u)| = 0\), the vertex u receives nothing from \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {max}}}\), so u may set \(A_1[u] = \emptyset \). By Lemma 24, the cost of \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {max}}}\) is \(O(\log K \log \varDelta \log n) = {{\text {poly}}}\log n\).

  • Learning \(A_2[u]\). The procedure for learning \(A_2[u]\) is almost exactly the same as that for \(A_1[u]\), with only one difference. Recall that \(A_2[u]\) is a component \(S' \in C(u) \setminus \{A_1[u]\}\) that maximizes \({\text {eccentricity}}(u,S')\), so we need to exclude the component \(A_1[u]\) from participating. To do so, before we apply \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {max}}}\), we use one round to let u send \({\text {ID}}(r_{A_1[u],u})\) to all vertices \(\{r_{S, u} \, : \, S \in C(u)\}\). This allows each \(r_{S,u}\) to learn whether or not \(S = A_1[u]\).

For each \(u \in V_H\), the number of pairs \(\{u,v\}\) such that \(F^\star (\{u,v\}) = u\) is O(1), so the number of graph topologies that u needs to learn in \(\mathcal {I}_1(u)\) and \(\mathcal {I}_2(u)\) is \(O(\sqrt{n})\). The total number of graph topologies to be learned, over all \(u \in V_H\), is at most \(|V_H| \cdot O(\sqrt{n}) = O(n)\). We fix any ordering of these learning tasks and solve them sequentially. For each of these tasks, we use the above generic approach, so the time and energy cost for learning one graph topology is \({{\text {poly}}}\log n\). Since there are O(n) tasks, the overall time complexity is \(O(n) \cdot {{\text {poly}}}\log n = \tilde{O}(n)\). Each vertex participates in \(O(\sqrt{n})\) tasks, so the overall energy complexity is \(O(\sqrt{n}) \cdot {{\text {poly}}}\log n = \tilde{O}(\sqrt{n})\).   \(\square \)

Lemma 17

Using \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy, we can let all vertices in G learn the graph topology of \(G^\star \) w.h.p.

Proof

The lemma follows from combining Lemmas 11, 15 and 16.   \(\square \)

Now we are ready to prove Theorem 1.

Theorem 1

There is an algorithm that computes the diameter in \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy w.h.p. for bounded-genus graphs in \(\textsf{No}{\text{- }}\textsf{CD}\).

Proof

The theorem follows from combining Lemmas 10 and 17.   \(\square \)

B Minimum Cut

In this section, we apply the approach introduced in Sect. 5 to show that (i) the exact global minimum cut size and (ii) an approximate s–t minimum cut size of any bounded-genus graph can be computed with \(\tilde{O}(\sqrt{n})\) energy. As in Sect. 5, we decompose the vertex set into \(V_H\) and \(V_L\) and categorize the connected components of \(G[V_L]\) into three types; the only difference is the information that we extract from type-1 and type-2 components.

Given a cut \(\mathcal {C} = (X, V \setminus X)\) of \(G=(V,E)\), the two vertex sets \(X \ne \emptyset \) and \(V \setminus X \ne \emptyset \) are called the two parts of \(\mathcal {C}\), and the cut edges of \(\mathcal {C}\) are defined as \(\{ \{u,v\} \, : \, u \in X, v \in V \setminus X\}\). The size of a cut \(\mathcal {C}\), which we denote as \(|\mathcal {C}|\), is defined as the number of cut edges of \(\mathcal {C}\). A minimum cut of a graph is a cut \(\mathcal {C}\) that minimizes \(|\mathcal {C}|\) among all possible cuts. An s–t minimum cut of a graph is a cut \(\mathcal {C}\) that minimizes \(|\mathcal {C}|\) among all possible cuts subject to the constraint that s and t belong to different parts. We consider the following definitions:

c(S):

For any type-1 component S, let c(S) be the minimum cut size of \(G^+[S]\).

\(c'(S)\):

For any type-2 component \(S \in C(u,v)\), let \(c'(S)\) be the u–v minimum cut size of \(G^+[S]\).

\(c''(S)\):

For any type-2 component \(S \in C(u,v)\), let \(c''(S)\) be the minimum cut size of \(G^+[S]\) among all cuts such that both u and v are within the same part of the cut.

We make the following observations.

Lemma 18

Let \(\mathcal {C} = (X, V \setminus X)\) be any minimum cut of G. For any vertex \(u \in V_H\), one of the following statements is true:

  • One part of the cut contains all vertices in \(\bigcup _{S \in C(u)} S \cup \{u\}\).

  • The size of the cut is \(\min _{S \in C(u)} c(S)\).

Proof

Suppose that the first statement is false. Then there exists a component \(S' \in C(u)\) such that \(S' \cup \{u\}\) intersects both parts of the cut, so \(\mathcal {C}' = (X \cap (S' \cup \{u\}), (V \setminus X) \cap (S' \cup \{u\}))\) is a cut of \(G^+[S']\). Therefore, \(\min _{S \in C(u)} c(S) \le c(S') \le |\mathcal {C}'| \le |\mathcal {C}|\). To prove that the second statement is true, we just need to show that \(|\mathcal {C}| \le \min _{S \in C(u)} c(S)\). This inequality follows from the observation that for any component \(S \in C(u)\), any cut of \(G^+[S]\) can be extended to a cut of G of the same size by adding all vertices in \(V \setminus (S \cup \{u\})\) to the part of the cut that contains u.    \(\square \)

Lemma 19

Let \(\mathcal {C} = (X, V \setminus X)\) be any minimum cut of G. For two distinct vertices \(u,v \in V_H\), one of the following statements is true:

  • One part of the cut contains all vertices in \(\bigcup _{S \in C(u,v)} S \cup \{u,v\}\).

  • The size of the cut is \(\min _{S \in C(u,v)} c''(S)\).

  • u and v belong to different parts of the cut, and the number of cut edges that have at least one endpoint in \(\bigcup _{S \in C(u,v)} S\) is \(\sum _{S \in C(u,v)} c'(S)\).

Proof

Suppose that the first statement is false. We first focus on the case where u and v belong to the same part of the cut \(\mathcal {C}\). In this case, there exists a component \(S' \in C(u,v)\) such that \(S' \cup \{u,v\}\) intersects both parts of the cut, so \(\mathcal {C}' = (X \cap (S' \cup \{u,v\}), (V \setminus X) \cap (S' \cup \{u,v\}))\) is a cut of \(G^+[S']\) such that u and v belong to the same part of the cut. Therefore, \(\min _{S \in C(u,v)} c''(S) \le c''(S') \le |\mathcal {C}'| \le |\mathcal {C}|\). Similar to the proof of Lemma 18, we also have \(|\mathcal {C}| \le \min _{S \in C(u,v)} c''(S)\), as any cut of \(G^+[S]\) such that u and v belong to the same part can be extended to a cut of G of the same size. Therefore, we must have \(|\mathcal {C}| = \min _{S \in C(u,v)} c''(S)\), that is, the second statement is true.

For the rest of the proof, we consider the case where u and v belong to different parts of the cut \(\mathcal {C}\). For each component \(S \in C(u,v)\), we write \(Z_{S}\) to denote the number of cut edges of \(\mathcal {C}\) that have at least one endpoint in S. Then we must have \(Z_{S} = c'(S)\): since \(\mathcal {C}\) separates u and v, its restriction to \(G^+[S]\) is a u–v cut of \(G^+[S]\), so \(Z_{S} \ge c'(S)\); and if \(Z_{S} > c'(S)\) for some S, then rerouting the cut inside \(G^+[S]\) to a u–v minimum cut of \(G^+[S]\) would yield a smaller cut of G, contradicting that \(\mathcal {C}\) is a minimum cut. Therefore, the number of cut edges that have at least one endpoint in \(\bigcup _{S \in C(u,v)} S\) is \(\sum _{S \in C(u,v)} c'(S)\), that is, the third statement is true.   \(\square \)

Using the above two observations, we prove Theorem 2.

Theorem 2

There is an algorithm that computes the minimum cut size in \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy w.h.p. for bounded-genus graphs in \(\textsf{No}{\text{- }}\textsf{CD}\).

Proof

Bounded-genus graphs have bounded arboricity. The minimum degree of any graph of arboricity \(\alpha \) is at most \(2 \alpha - 1\). The minimum cut size of any graph is at most the minimum degree of the graph. Therefore, there is a constant \(\lambda _0\) such that the minimum cut size of G is at most \(\lambda _0\).

Define the graph \(G^\diamond \) as the result of applying the following operations to G:

  • Remove all type-1 components.

  • For each pair \(\{u,v\}\) of distinct vertices in \(V_H\) with \(|C(u,v)| > 0\), replace C(u,v) with \(\min \{\lambda _0, \sum _{S \in C(u,v)} c'(S)\}\) multi-edges between u and v.

By Lemmas 18 and 19, the minimum cut size of G is the minimum of the following numbers:

  • The minimum value of \(\min _{S \in C(u)} c(S)\) among all \(u \in V_H\) such that \(|C(u)|>0\).

  • The minimum value of \(\min _{S \in C(u,v)} c''(S)\) among all \(u,v \in V_H\) such that \(|C(u,v)|>0\).

  • The minimum cut size of \(G^\diamond \).
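To make this three-way minimum concrete, below is a hedged centralized sketch (ours; the paper's algorithm is distributed and never materializes \(G^\diamond \) at a single vertex). It assumes the quantities c(S), \(c''(S)\), and \(c'(S)\) are already available, encodes the multi-edges of \(G^\diamond \) as integer edge weights, and uses a textbook Stoer–Wagner routine for the minimum cut of \(G^\diamond \); all function names are illustrative.

```python
import math

def stoer_wagner(w):
    """Textbook Stoer-Wagner global minimum cut. w is a symmetric
    adjacency map {u: {v: weight}}; parallel multi-edges are encoded
    as integer weights."""
    w = {u: dict(nbrs) for u, nbrs in w.items()}
    V = set(w)
    best = math.inf
    while len(V) > 1:
        a = next(iter(V))
        conn = {v: w[a].get(v, 0) for v in V if v != a}
        order, cut_of_phase = [a], 0
        while conn:
            z = max(conn, key=conn.get)     # most tightly connected vertex
            cut_of_phase = conn.pop(z)
            order.append(z)
            for v in conn:
                conn[v] += w[z].get(v, 0)
        best = min(best, cut_of_phase)      # weight of the cut of this phase
        s, t = order[-2], order[-1]
        V.remove(t)                         # contract t into s
        for v, wt in w[t].items():
            if v != s:
                w[s][v] = w[s].get(v, 0) + wt
                w[v][s] = w[s][v]
            w[v].pop(t, None)
        del w[t]
    return best

def min_cut_size(edges_rest, c, c2, cprime, lam0):
    """Minimum cut size of G via Lemmas 18 and 19. edges_rest: edges of
    G-diamond other than the added multi-edges (V_H-induced edges and
    edges touching type-3 components); c[u]: the values c(S) for S in
    C(u); c2[(u, v)]: the values c''(S) for S in C(u, v);
    cprime[(u, v)]: the values c'(S) for S in C(u, v)."""
    w = {}
    def add(u, v, wt):
        w.setdefault(u, {})
        w.setdefault(v, {})
        w[u][v] = w[u].get(v, 0) + wt
        w[v][u] = w[u][v]
    for u, v in edges_rest:
        add(u, v, 1)
    for (u, v), vals in cprime.items():     # replace C(u,v) by capped multi-edges
        add(u, v, min(lam0, sum(vals)))
    candidates = [stoer_wagner(w)]
    candidates += [min(vals) for vals in c.values() if vals]
    candidates += [min(vals) for vals in c2.values() if vals]
    return min(candidates)
```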

For each vertex \(u \in V\), we define \(\mathcal {I}^\diamond _0(u)\), \(\mathcal {I}^\diamond _1(u)\), and \(\mathcal {I}^\diamond _2(u)\) as follows.

  • \(\mathcal {I}^\diamond _0(u)\) is the same as the basic information \(\mathcal {I}_0(u)\) defined in Sect. 5.

  • \(\mathcal {I}^\diamond _1(u)\) contains the number \(\min _{S \in C(u)} c(S)\).

  • \(\mathcal {I}^\diamond _2(u)\) contains \(\min _{S \in C(u,v)} c''(S)\) and \(\min \{\lambda _0, \sum _{S \in C(u,v)} c'(S)\}\), for all pairs \(\{u,v\} \in E(G_H)\) such that \(F^\star (\{u,v\}) = u\).

Similar to the proof of Theorem 1, \(\mathcal {I}^\diamond _1(u)\) and \(\mathcal {I}^\diamond _2(u)\) contain nothing if \(u \in V_L\).

As \(\mathcal {I}^\diamond _0(u) = \mathcal {I}_0(u)\), we may use the algorithm of Lemma 15 to let all vertices \(u \in V\) learn the information \(\mathcal {I}^\diamond _0(u)\) using \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy.

The algorithm of Lemma 16 can be modified to let all vertices \(u \in V_H\) learn the information \(\mathcal {I}^\diamond _1(u)\) and \(\mathcal {I}^\diamond _2(u)\). Specifically, the number \(\min _{S \in C(u)} c(S)\) can be learned by the same algorithm for learning \(A_1[u]\) described in the proof of Lemma 16, replacing \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {max}}}\) with \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {min}}}\) and letting \(v = r_{S,u}\) use the key \(k_v = c(S)\). The algorithm for learning \(\min _{S \in C(u,v)} c''(S)\) is similar.

For each pair \(\{u,v\} \in E(G_H)\) such that \(F^\star (\{u,v\}) = u\), to let u learn \(\min \{\lambda _0, \sum _{S \in C(u,v)} c'(S)\}\), we use \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\) with the following parameters:

  • \(\mathcal {S} = \{r_{S, u} \, : \, S \in C(u,v)\}\).

  • \(\mathcal {R} = \{u\}\).

  • \(\epsilon = 1/(2\lambda _0 + 1)\).

  • \(W = \lambda _0\).

  • For each \(S \in C(u,v)\), the message \(m_v\) of the representative \(v = r_{S, u}\) of S is \(\min \{\lambda _0, c'(S)\}\).

After the algorithm of \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\), u learns a \((1\pm \epsilon )\)-approximation of

$$\begin{aligned} \sum _{v \in N^+(u) \cap \mathcal {S}} m_v = \sum _{S \in C(u,v)} \min \{\lambda _0, c'(S)\}. \end{aligned}$$

We claim that this allows u to calculate \(\min \{\lambda _0, \sum _{S \in C(u,v)} c'(S)\}\) precisely. To prove this claim, we break the analysis into two cases. Let x be the approximation of \(\sum _{S \in C(u,v)} \min \{\lambda _0, c'(S)\}\) computed by \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\).

If \(\min \{\lambda _0, \sum _{S \in C(u,v)} c'(S)\} = \lambda _0\), then

$$\begin{aligned} \sum _{v \in N^+(u) \cap \mathcal {S}} m_v = \sum _{S \in C(u,v)} \min \{\lambda _0, c'(S)\} \ge \lambda _0, \end{aligned}$$

which implies

$$\begin{aligned} x \ge (1-\epsilon )\lambda _0 > \lambda _0 - 1/2. \end{aligned}$$

If \(\min \{\lambda _0, \sum _{S \in C(u,v)} c'(S)\} = \sum _{S \in C(u,v)} c'(S)\), then

$$\begin{aligned} \sum _{v \in N^+(u) \cap \mathcal {S}} m_v = \sum _{S \in C(u,v)} \min \{\lambda _0, c'(S)\} = \sum _{S \in C(u,v)} c'(S), \end{aligned}$$

which implies

$$\begin{aligned} x&\in \left[ (1-\epsilon )\sum _{S \in C(u,v)} c'(S), (1+\epsilon )\sum _{S \in C(u,v)} c'(S)\right] \\ {}&\subseteq \left[ \left( \sum _{S \in C(u,v)} c'(S)\right) - \frac{1}{2}, \left( \sum _{S \in C(u,v)} c'(S)\right) +\frac{1}{2}\right] . \end{aligned}$$

Therefore, u can calculate \(\min \{\lambda _0, \sum _{S \in C(u,v)} c'(S)\}\) precisely from x. By Lemma 27, the cost for u to calculate \(\min \{\lambda _0, \sum _{S \in C(u,v)} c'(S)\}\) via \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\) is \({{\text {poly}}}\log n\) time.
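The rounding step can be checked mechanically. The following sanity check (ours; it is not part of the paper's algorithm) verifies that taking \(\min \{\lambda _0, {\text {round}}(x)\}\) is one way for u to recover \(\min \{\lambda _0, \sum _{S \in C(u,v)} c'(S)\}\) exactly from any \((1\pm \epsilon )\)-approximation x with \(\epsilon = 1/(2\lambda _0 + 1)\):

```python
import itertools
import random

def recover(x, lam0):
    """Exact recovery from a (1 +/- eps)-approximation x of
    sum(min(lam0, c) for c in cprimes), with eps = 1 / (2 * lam0 + 1).
    The two cases in the proof show round(x) equals the true sum when
    the sum is below lam0, and is at least lam0 otherwise."""
    return min(lam0, round(x))

lam0 = 3
eps = 1 / (2 * lam0 + 1)
for r in (1, 2, 3):                        # number of components in C(u,v)
    for cprimes in itertools.product(range(1, 2 * lam0 + 1), repeat=r):
        target = min(lam0, sum(cprimes))
        s = sum(min(lam0, c) for c in cprimes)
        for _ in range(100):               # random (1 +/- eps)-approximations
            x = s * random.uniform(1 - eps, 1 + eps)
            assert recover(x, lam0) == target
print("recovery is exact")
```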

For each \(u \in V_H\), the number of pairs \(\{u,v\}\) such that \(F^\star (\{u,v\}) = u\) is O(1), so the number of parameters that u needs to learn in \(\mathcal {I}^\diamond _1(u)\) and \(\mathcal {I}^\diamond _2(u)\) is O(1). The total number of parameters to be learned, over all \(u \in V_H\), is at most \(|V_H| \cdot O(1) = O(\sqrt{n})\). We fix any ordering of these learning tasks and solve them sequentially. The time and energy cost for learning one parameter is \({{\text {poly}}}\log n\). Since there are \(O(\sqrt{n})\) tasks, the overall time complexity for learning \(\mathcal {I}^\diamond _1(u)\) and \(\mathcal {I}^\diamond _2(u)\) for all \(u \in V_H\) is \(O(\sqrt{n}) \cdot {{\text {poly}}}\log n = \tilde{O}(\sqrt{n})\).

In view of the above discussion, the minimum cut size of G can be calculated from the following information: (i) \(\mathcal {I}^\diamond _1(u)\) and \(\mathcal {I}^\diamond _2(u)\) for all \(u \in V_H\), (ii) the topology of \(G^+[S]\) for each type-3 component S, and (iii) the topology of the subgraph induced by \(V_H\). By replacing \(\mathcal {I}_1(u)\) and \(\mathcal {I}_2(u)\) with \(\mathcal {I}^\diamond _1(u)\) and \(\mathcal {I}^\diamond _2(u)\) in the description of the algorithm of Lemma 11, we obtain an algorithm that lets all vertices learn this information using \(\tilde{O}(n^{1.5})\) time and \(\tilde{O}(\sqrt{n})\) energy.   \(\square \)

The proof of Theorem 3 is similar to that of Theorem 2. The main difference in the setting of s–t minimum cut is that if s or t happens to be within a type-1 or a type-2 component S, then we additionally need to learn the topology of \(G^+[S]\). Any type-1 component that does not contain s or t is irrelevant to the s–t minimum cut size.

In the subsequent discussion, we fix s and t to be any two distinct vertices of G. For each \(x \in \{s,t\}\), let \(S_x\) be the type-1 or type-2 component containing x. In case x is not contained in any type-1 or type-2 component, we let \(S_x = \emptyset \). We define \(G^\bullet \) as the result of applying the following operations to G.

  • Remove all type-1 components, except for \(S_s\) and \(S_t\).

  • For each pair \(\{u,v\}\) of distinct vertices in \(V_H\) with \(|C(u,v) \setminus \{S_s, S_t\}| > 0\), replace all components in \(C(u,v) \setminus \{S_s, S_t\}\) with \(\sum _{S \in C(u,v) \setminus \{S_s, S_t\}} c'(S)\) multi-edges between u and v.

Similar to Lemmas 18 and 19, we have the following observation.

Lemma 20

Both G and \(G^\bullet \) have the same s–t minimum cut size.

Proof

Fix \(\mathcal {C} = (X, V \setminus X)\) to be any s–t minimum cut of G, where \(s \in X\) and \(t \in V\setminus X\). To show that both G and \(G^\bullet \) have the same s–t minimum cut size, it suffices to show the following two statements:

  • For each type-1 component S that is neither \(S_s\) nor \(S_t\), we must have either \(S \subseteq X\) or \(S \subseteq V\setminus X\).

  • For each pair \(\{u,v\}\) of distinct vertices in \(V_H\) with \(|C(u,v) \setminus \{S_s, S_t\}| > 0\), if u and v belong to different parts of cut \(\mathcal {C}\), then the number of cut edges of \(\mathcal {C}\) with at least one endpoint in \(\bigcup _{S \in C(u,v) \setminus \{S_s, S_t\}} S\) equals \(\sum _{S \in C(u,v) \setminus \{S_s, S_t\}} c'(S)\).

The first statement follows from the observation that for each \(u \in V_H\), all vertices in \(\bigcup _{S \in C(u) \setminus \{S_s, S_t\}} S\) must belong to the part of the cut \(\mathcal {C}\) that u belongs to: otherwise, moving all vertices in \(\bigcup _{S \in C(u) \setminus \{S_s, S_t\}} S\) to u's part would reduce the number of cut edges, contradicting that \(\mathcal {C}\) is an s–t minimum cut.

To show the second statement, consider a pair \(\{u,v\}\) of distinct vertices in \(V_H\) with \(|C(u,v) \setminus \{S_s, S_t\}| > 0\) such that u and v belong to different parts of the cut \(\mathcal {C}\). Similar to the proof of Lemma 19, for each component \(S \in C(u,v) \setminus \{S_s, S_t\}\), we write \(Z_{S}\) to denote the number of cut edges of \(\mathcal {C}\) that have at least one endpoint in S. Then we must have \(Z_{S} = c'(S)\), since otherwise \(\mathcal {C}\) is not an s–t minimum cut. Therefore, the number of cut edges of \(\mathcal {C}\) that have at least one endpoint in \(\bigcup _{S \in C(u,v) \setminus \{S_s, S_t\}} S\) is \(\sum _{S \in C(u,v) \setminus \{S_s, S_t\}} c'(S)\).   \(\square \)

We are ready to prove Theorem 3.

Theorem 3

There is an algorithm that computes a \((1 \pm \epsilon )\)-approximate s–t minimum cut size in \(\tilde{O}(n^{1.5}) \cdot \epsilon ^{-O(1)}\) time and \(\tilde{O}(\sqrt{n}) \cdot \epsilon ^{-O(1)}\) energy w.h.p. for bounded-genus graphs in \(\textsf{No}{\text{- }}\textsf{CD}\).

Proof

Let \(\tilde{G}^\bullet \) be any graph such that for each pair of vertices \(\{u,v\}\), the number of multi-edges in \(\tilde{G}^\bullet \) is within a \((1 \pm \epsilon )\) factor of the number of multi-edges in \(G^\bullet \). By Lemma 20, the s–t minimum cut size of \(\tilde{G}^\bullet \) is a \((1 \pm \epsilon )\)-approximation of the s–t minimum cut size of G. Therefore, the task of computing the s–t minimum cut size of G is reduced to computing such a graph \(\tilde{G}^\bullet \).

For each \(u \in V_H\), we let \(\mathcal {I}_2^{\bullet }(u)\) contain the number \(\sum _{S \in C(u,v) \setminus \{S_s, S_t\}} c'(S)\) for all pairs \(\{u,v\} \in E(G_H)\) with \(F^\star (\{u,v\}) = u\). The same algorithm for learning \(\mathcal {I}_2^{\diamond }(u)\) presented in the proof of Theorem 2 can be applied here to let all \(u \in V_H\) approximately learn each number in \(\mathcal {I}_2^{\bullet }(u)\) within a \((1 \pm \epsilon )\) factor, by using \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\) with parameter \(\epsilon \). We can tolerate this approximation factor because here our goal is to learn \(\tilde{G}^\bullet \).

In view of the above, a \((1\pm \epsilon )\)-approximation of the s–t minimum cut size of G can be calculated from the following information: (i) \(\mathcal {I}_2^\bullet (u)\) for all \(u \in V_H\), (ii) the topology of \(G^+[S]\) for \(S = S_s\), \(S = S_t\), and each type-3 component S, and (iii) the topology of the subgraph induced by \(V_H\). As in the proof of Theorem 2, we can obtain an algorithm that lets all vertices learn this information using \(\tilde{O}(n^{1.5}) \cdot \epsilon ^{-O(1)}\) time and \(\tilde{O}(\sqrt{n}) \cdot \epsilon ^{-O(1)}\) energy. Here the extra \(\epsilon ^{-O(1)}\) factor is due to the use of \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\), which requires \(\epsilon ^{-O(1)} \cdot {{\text {poly}}}\log n\) time and energy.   \(\square \)

C Algorithms for Communication Between Two Sets of Vertices

In this section, we present our algorithms for \(\textsf{SR}{\text{- }}\textsf{comm}\) and its variants. Recall that \(\textsf{SR}{\text{- }}\textsf{comm}\) requires that each vertex \(v \in \mathcal {R}\) with \(N^+(v) \cap \mathcal {S} \ne \emptyset \) receives a message \(m_u\) from at least one vertex \(u \in N^+(v) \cap \mathcal {S}\) w.h.p.

Lemma 21

([4]) \(\textsf{SR}{\text{- }}\textsf{comm}\) can be solved in time \(O(\log \varDelta \log n)\) and energy \(O(\log \varDelta \log n)\).

Proof

By the definition of \(\textsf{SR}{\text{- }}\textsf{comm}\), each vertex \(v \in \mathcal {S} \cap \mathcal {R}\) is not required to receive any message from other vertices, as we already have \(v \in N^+(v) \cap \mathcal {S}\). Therefore, in the subsequent discussion, we assume that \(\mathcal {S} \cap \mathcal {R} = \emptyset \).

The task \(\textsf{SR}{\text{- }}\textsf{comm}\) with \(\mathcal {S} \cap \mathcal {R} = \emptyset \) can be solved using the well-known decay algorithm of [4], which repeats the following routine for \(C \log n\) times: For \(i = 1\) to \(\log \varDelta \), let each vertex \(u \in \mathcal {S}\) transmit with probability \(2^{-i}\). Each \(v \in \mathcal {R}\) is always listening throughout the procedure. Here \(C > 0\) is some large enough constant to be determined.

Consider a vertex \(v \in \mathcal {R}\) such that \(N(v) \cap \mathcal {S} \ne \emptyset \). Let \(i^\star \) be the largest integer i such that \(2^{i} \le 2|N(v) \cap \mathcal {S}|\). Consider a time slot t where each vertex \(u \in \mathcal {S}\) transmits with probability \(2^{-i^\star }\). For notational simplicity, we write \(n' = |N(v) \cap \mathcal {S}|\) and \(p' = 2^{-i^\star }\). Our choice of \(i^\star \) implies that \(1/n' \ge p' \ge 1/(2n')\). The probability of the event that exactly one vertex in the set \(N(v) \cap \mathcal {S}\) transmits equals \(n' p' (1 - p')^{n'-1} \ge 1/(2e)\). The calculation follows from the inequalities \(n' p' \ge 1/2\) and \((1 - p')^{n'-1} \ge (1 - 1/n')^{n'-1} \ge 1/e\).

If the above event occurs, then v successfully receives a message \(m_u\) from a vertex \(u \in N(v) \cap \mathcal {S}\). The probability that v does not receive any message from vertices in \(N(v) \cap \mathcal {S}\) throughout the entire algorithm is at most \((1 - 1/(2e))^{C \log n} = n^{-\varOmega (C)}\). By setting C to be a large enough constant, the algorithm successfully solves \(\textsf{SR}{\text{- }}\textsf{comm}\) w.h.p., and the time and energy complexities of the algorithm are \(O(\log \varDelta \log n)\).   \(\square \)
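For intuition, here is a toy Monte Carlo rendering (ours) of the decay routine from the perspective of one receiver v: a round is modeled as successful if and only if exactly one of v's neighbors in \(\mathcal {S}\) transmits. The values of n and C below are illustrative.

```python
import random

def decay(n_senders, n=10**9, C=8):
    """Simulate the decay routine for one receiver v with n_senders
    neighbors in S. Sweeping the transmission probability over
    2^-1, ..., 2^-log(Delta) guarantees some round uses a probability
    close to 1/n_senders, where exactly one sender transmits with
    probability at least 1/(2e)."""
    log_delta = max(1, n_senders.bit_length())   # ~ log2 of the max degree
    for _ in range(C * n.bit_length()):          # ~ C log n repetitions
        for i in range(1, log_delta + 1):
            p = 2.0 ** (-i)
            if sum(random.random() < p for _ in range(n_senders)) == 1:
                return True                      # collision-free round: v hears m_u
    return False

print(all(decay(k) for k in (1, 5, 37, 1000)))   # succeeds w.h.p.
```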

Recall that the goal of \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {all}}}\) is to let each vertex \(u \in \mathcal {S} \cap N^+(v)\) deliver a message \(m_u\) to \(v \in \mathcal {R}\), for each \(v \in \mathcal {R}\).

Lemma 22

\(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {all}}}\) can be solved in time \(O(\varDelta ' \log n)\) and energy \(O(\varDelta ' \log n)\), where \(\varDelta '\) is an upper bound on \(|\mathcal {S} \cap N(v)|\), for each \(v \in \mathcal {R}\).

Proof

Consider the algorithm which repeats the following routine for \(C \cdot \varDelta ' \log n\) rounds, for some sufficiently large constant \(C > 0\). In each round, each vertex \(u \in \mathcal {S}\) sends \(m_u\) with probability \(1 / \varDelta '\). Each vertex \(v \in \mathcal {R}\) that does not send in a given round listens in that round.

Let \(e = \{u,v\}\) be any edge with \(u \in \mathcal {S}\) and \(v \in \mathcal {R}\). In one round of the above algorithm, u successfully sends a message to v if (i) all vertices in \(\{v\} \cup (\mathcal {S} \cap N(v)) \setminus \{u\}\) do not send, and (ii) u sends. Therefore, the probability that u successfully sends a message to v is

$$ (1 - 1/\varDelta ')^{|\mathcal {S} \cap N(v)|-1} \cdot (1/\varDelta ') \ge (1 - 1/\varDelta ')^{\varDelta '-1} \cdot (1/\varDelta ') \ge 1/(e\varDelta '). $$

The probability that u does not successfully send a message to v throughout all \(C \cdot \varDelta ' \log n\) rounds is at most \((1 - 1/(e\varDelta '))^{C \cdot \varDelta ' \log n} = n^{-\varOmega (C)}\). Selecting a large enough constant C, by a union bound for all \(u \in \mathcal {S} \cap N(v)\) and all \(v \in \mathcal {R}\), we conclude that the algorithm solves \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {all}}}\) w.h.p. The time and energy complexities are \(O(\varDelta ' \log n)\).   \(\square \)
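As a quick numerical sanity check (ours, not in the paper), the per-round success probability analyzed above can be estimated by simulation and compared against the \(1/(e\varDelta ')\) lower bound; the model assumes v itself is not a sender.

```python
import math
import random

def one_round_success(deg, delta_p, trials=200_000):
    """Estimate the probability that a fixed sender u delivers to v in
    one round: u transmits with probability 1/delta_p while the other
    deg - 1 senders adjacent to v stay silent (v is assumed to listen)."""
    p = 1.0 / delta_p
    hits = sum(
        random.random() < p and all(random.random() >= p for _ in range(deg - 1))
        for _ in range(trials)
    )
    return hits / trials

# With deg = delta_p = 10 the estimate is about 0.0387 >= 1/(10e) ~ 0.0368.
print(one_round_success(10, 10), ">=", 1 / (math.e * 10))
```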

Recall that the task \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {multi}}}\) requires that each vertex \(v \in \mathcal {R}\) receive all distinct messages in \(\bigcup _{u \in N^+(v) \cap \mathcal {S}} \mathcal {M}_u\), where \(\mathcal {M}_u\) is the set of messages held by u.

Lemma 23

\(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {multi}}}\) can be solved in time \(O(M \log \varDelta \log ^2 n)\) and energy \(O(M \log \varDelta \log ^2 n)\), where M is an upper bound on the number of distinct messages in \(\bigcup _{u \in N^+(v) \cap \mathcal {S}} \mathcal {M}_u\), for each \(v \in \mathcal {R}\).

Proof

Consider the algorithm that runs \(\textsf{SR}{\text{- }}\textsf{comm}\) \(C \cdot M \log n\) times, where in each iteration, the sets \((\mathcal {S}',\mathcal {R'})\) for \(\textsf{SR}{\text{- }}\textsf{comm}\) are chosen randomly as follows. We select \(\mathcal {R'}\) as a random subset of \(\mathcal {R}\) such that each \(v \in \mathcal {R}\) joins \(\mathcal {R'}\) with probability 1/2. We select \(\mathcal {S}'\) as a random subset of \(\mathcal {S} \setminus \mathcal {R'}\) such that for each message m, all vertices in \(\mathcal {S} \setminus \mathcal {R'}\) that hold m join \(\mathcal {S}'\) with probability 1/M, using the shared randomness associated with the message m.

Due to the shared randomness, if \(u \in \mathcal {S} \setminus \mathcal {R'}\) joins \(\mathcal {S}'\) due to message m, then all vertices in \(\mathcal {S} \setminus \mathcal {R'}\) holding the same message m also join \(\mathcal {S}'\). Note that a vertex \(u \in \mathcal {S} \setminus \mathcal {R'}\) might hold more than one message, i.e., \(|\mathcal {M}_u| > 1\). The probability that \(u \in \mathcal {S} \setminus \mathcal {R'}\) joins \(\mathcal {S}'\) equals \({\text {Pr}}[{\text {Binomial}}(|\mathcal {M}_u|, 1/M) \ge 1]\), because each message \(m \in \mathcal {M}_u\) lets u join \(\mathcal {S}'\) with probability 1/M independently.

To analyze the algorithm, we focus on one vertex \(v \in \mathcal {R}\) in one iteration of the above algorithm. Consider any message \(m \in \bigcup _{u \in N(v) \cap \mathcal {S}} \mathcal {M}_u \setminus \mathcal {M}_v\). Observe that v receives m if the following three events \(\mathcal {E}_1\), \(\mathcal {E}_2\), and \(\mathcal {E}_3\) occur:

  • \(\mathcal {E}_1\) is the event that v joins \(\mathcal {R}'\).

  • \(\mathcal {E}_2\) is the event that at least one vertex \(u \in N(v) \cap \mathcal {S}\) with \(m \in \mathcal {M}_u\) does not join \(\mathcal {R}'\).

  • \(\mathcal {E}_3\) is the event that the subset of vertices of \(N(v) \cap \mathcal {S} \setminus \mathcal {R'}\) joining \(\mathcal {S}'\) is exactly the set of all vertices \(u \in N(v) \cap \mathcal {S} \setminus \mathcal {R'}\) with \(m \in \mathcal {M}_u\).

If \(\mathcal {E}_1\), \(\mathcal {E}_2\), and \(\mathcal {E}_3\) occur, then \(v \in \mathcal {R}'\), \(N(v) \cap \mathcal {S}' \ne \emptyset \), and all vertices \(u \in N(v) \cap \mathcal {S}'\) satisfy \(m \in \mathcal {M}_u\). Therefore, conditioning on \(\mathcal {E}_1\), \(\mathcal {E}_2\), and \(\mathcal {E}_3\), \(\textsf{SR}{\text{- }}\textsf{comm}\) in this iteration allows v to receive message m.

The way \(\mathcal {R}'\) is selected implies that \({\text {Pr}}[\mathcal {E}_1] = 1/2\) and \({\text {Pr}}[\mathcal {E}_2] \ge 1/2\). Observe that \(\mathcal {E}_1\) and \(\mathcal {E}_2\) are independent events. The way \(\mathcal {S}'\) is selected implies that \({\text {Pr}}[\mathcal {E}_3 | \mathcal {E}_1 \cap \mathcal {E}_2] \ge {\text {Pr}}[{\text {Binomial}}(M, 1/M)=1] = (1/M) \cdot (1 - 1/M)^{M-1} \ge 1/(eM)\). Therefore, the probability that v receives m in this iteration is at least 1/(4eM).

The probability that v does not receive m in all iterations is at most \((1 - 1/(4eM))^{C \cdot M \log n} = n^{-\varOmega (C)}\). Selecting a large enough constant C, by a union bound over all \(v \in \mathcal {R}\) and all \(m \in \bigcup _{u \in N(v) \cap \mathcal {S}} \mathcal {M}_u \setminus \mathcal {M}_v\), we conclude that the algorithm solves \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {multi}}}\) w.h.p. The time and energy complexities are \(O(M \log \varDelta \log ^2 n)\), as the number of iterations is \(O(M \log n)\) and the time complexity of each iteration is \(O(\log \varDelta \log n)\) by Lemma 21.   \(\square \)
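The shared randomness can be made concrete by deriving the per-message coin from a hash of the message and the iteration number: every holder of the same message m then flips the same coin, which is exactly what event \(\mathcal {E}_3\) requires. A small sketch (ours; the function name and hashing scheme are illustrative assumptions):

```python
import hashlib

def joins_S_prime(messages_held, iteration, M):
    """Decide whether a vertex outside R' joins S' in one iteration.
    The shared coin for message m is derived by hashing (iteration, m),
    so either all holders of m join on behalf of m or none do; a vertex
    joins S' if at least one of its messages tells it to."""
    def coin(m):
        h = hashlib.sha256(f"{iteration}:{m}".encode()).digest()
        return int.from_bytes(h[:8], "big") / 2**64 < 1.0 / M
    return any(coin(m) for m in messages_held)

# If a vertex holding only m1 joins, every vertex holding m1 joins too.
M = 5
for it in range(20):
    a = joins_S_prime({"m1"}, it, M)
    b = joins_S_prime({"m1", "m2"}, it, M)
    assert (not a) or b
print("holders of a common message act in lockstep")
```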

Consider the setting where the message \(m_u\) sent from each vertex \(u \in \mathcal {S}\) contains a key \(k_u\) from the key space \([K] = \{1, 2, \ldots , K\}\). Recall that \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {min}}}\) requires that each vertex \(v \in \mathcal {R}\) with \(N^+(v) \cap \mathcal {S} \ne \emptyset \) receives a message \(m_u\) from a vertex \(u \in N^+(v) \cap \mathcal {S}\) such that \(k_u = \min _{u' \in N^+(v) \cap \mathcal {S}} k_{u'}\).

Lemma 24

Both \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {min}}}\) and \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {max}}}\) can be solved in time \(O(K \log \varDelta \log n)\) and energy \(O(\log K \log \varDelta \log n)\). For the special case of \(\mathcal {S} \cap \mathcal {R} = \emptyset \) and \(|\mathcal {R} \cap N(u)| \le 1\) for each \(u \in \mathcal {S}\), the time complexity can be improved to \(O(\log K \log \varDelta \log n)\).

Proof

We only prove the lemma for \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {min}}}\), as the proof for \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {max}}}\) is the same. The proof presented here is analogous to the analysis of a deterministic version of \(\textsf{SR}{\text{- }}\textsf{comm}\) in [10]. Observe that we can do \(\textsf{SR}{\text{- }}\textsf{comm}\) once to let each \(v \in \mathcal {R}\) test whether or not \(N^+(v) \cap \mathcal {S} \ne \emptyset \). If a vertex \(v \in \mathcal {R}\) knows that \(N^+(v) \cap \mathcal {S} = \emptyset \), then v may remove itself from \(\mathcal {R}\). Thus, in the subsequent discussion, we assume \(N^+(v) \cap \mathcal {S} \ne \emptyset \) for each \(v \in \mathcal {R}\).

Let \(v \in \mathcal {R}\), and we define \(f_v = \min _{u \in N^+(v) \cap \mathcal {S}} k_u\). The high-level idea of the algorithm is to conduct a binary search to determine all \(\log K\) bits of the binary representation of \(f_v\).

General Case. Suppose at some moment each vertex \(v \in \mathcal {R}\) already knows the first x bits of \(f_v\). The following procedure allows each \(v \in \mathcal {R}\) to learn the \((x+1)\)th bit of \(f_v\). For each \((x+1)\)-bit binary string s, we do \(\textsf{SR}{\text{- }}\textsf{comm}\) with the following choices of \((\mathcal {S}',\mathcal {R}')\):

  • \(\mathcal {S}'\) is the set of vertices \(u \in \mathcal {S}\) such that the first \(x+1\) bits of \(k_u\) equal s.

  • \(\mathcal {R}'\) is the set of vertices \(v \in \mathcal {R}\) such that the first x bits of \(f_v\) equal the first x bits of s.

In this procedure, we perform \(2^{x+1}\) invocations of \(\textsf{SR}{\text{- }}\textsf{comm}\) in total, but each vertex only participates in at most three of them, as each vertex joins \(\mathcal {S}'\) at most once and joins \(\mathcal {R}'\) at most twice. Thus, the procedure costs \(O(2^x \log \varDelta \log n)\) time and \(O(\log \varDelta \log n)\) energy, by Lemma 21. For each \(v \in \mathcal {R}\), the messages that v receives during the procedure allow v to determine the \((x+1)\)th bit of \(f_v\).

We will run the above procedure for \(\log K\) iterations from \(x=0\) to \(x = \log K - 1\). Observe that in the last iteration, each vertex \(v \in \mathcal {R}\) is guaranteed to receive a message \(m_u\) from a vertex \(u \in N^+(v) \cap \mathcal {S}\) such that \(k_u = f_v = \min _{w \in N^+(v) \cap \mathcal {S}} k_{w}\), so this algorithm allows us to solve \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {min}}}\). The overall time complexity of the algorithm is

$$\begin{aligned} \sum _{x=0}^{\log K - 1} O(2^x \log \varDelta \log n) = O(K \log \varDelta \log n), \end{aligned}$$

and the overall energy complexity of the algorithm is

$$\begin{aligned} \sum _{x=0}^{\log K - 1} O(\log \varDelta \log n) = O(\log K \log \varDelta \log n). \end{aligned}$$
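To make the general case concrete, the following is a minimal Python simulation sketch (ours, for illustration; not part of the algorithm's formal description). It abstracts \(\textsf{SR}{\text{- }}\textsf{comm}\) as an oracle that hands each receiver one arbitrary message from the participating senders in its neighborhood; for simplicity, keys are taken from \(\{0, \ldots , K-1\}\), \(N^+(v)\) is replaced by \(N(v)\), and every receiver is assumed to have at least one sender neighbor (guaranteed by the pre-test in the proof). The names sr_comm and learn_min_keys are ours.

```python
import math

def sr_comm(senders, receivers, neighbors, keys):
    """Oracle for SR-comm: each receiver with at least one participating
    sender in its neighborhood obtains one (arbitrary) such message."""
    out = {}
    for v in receivers:
        candidates = [keys[u] for u in neighbors[v] if u in senders]
        if candidates:
            out[v] = candidates[0]  # which message arrives is adversarial
    return out

def learn_min_keys(S, R, neighbors, keys, K):
    """Each v in R learns f_v = min key over N(v) ∩ S, one bit per level."""
    bits = max(1, math.ceil(math.log2(K)))
    prefix = {v: "" for v in R}              # known bits of f_v
    for x in range(bits):
        for s in range(2 ** (x + 1)):        # 2^(x+1) instances of SR-comm;
            s_str = f"{s:0{x + 1}b}"         # each vertex joins at most 3
            S_p = {u for u in S if f"{keys[u]:0{bits}b}".startswith(s_str)}
            R_p = {v for v in R if prefix[v] == s_str[:x]}
            for v in sr_comm(S_p, R_p, neighbors, keys):
                if len(prefix[v]) == x:      # the two extensions of a prefix
                    prefix[v] = s_str        # are tried in increasing order,
                                             # so the first success is the min
    return {v: int(prefix[v], 2) for v in R}
```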

Special Case. For the rest of the proof, we focus on the special case of \(\mathcal {S} \cap \mathcal {R} = \emptyset \) and \(|\mathcal {R} \cap N(u)| \le 1\) for each \(u\in \mathcal {S}\). These assumptions imply that the sets \((\mathcal {S} \cap N(v)) \cup \{v\}\), over all \(v \in \mathcal {R}\), are pairwise disjoint. The high-level idea is that for each \(v \in \mathcal {R}\), we may let the set of vertices \((\mathcal {S} \cap N(v)) \cup \{v\}\) jointly conduct a binary search to determine all bits of \(f_v = \min _{u \in N(v) \cap \mathcal {S}} k_u\), in parallel for all \(v \in \mathcal {R}\).

Suppose that for each vertex \(v \in \mathcal {R}\), all vertices in the set \((\mathcal {S} \cap N(v)) \cup \{v\}\) already know the first x bits of \(f_v\). We present a more efficient procedure that lets all vertices in the set \((\mathcal {S} \cap N(v)) \cup \{v\}\) learn the \((x+1)\)th bit of \(f_v\).

  • Step 1. Perform \(\textsf{SR}{\text{- }}\textsf{comm}\) with the following choices of \((\mathcal {S}',\mathcal {R}')\):

    • \(\mathcal {R}' = \mathcal {R}\).

    • \(\mathcal {S}'\) is the subset of \(\mathcal {S}\) that contains all vertices \(u \in \mathcal {S}\) satisfying the following conditions:

      • The first x bits of \(k_u\) equal the first x bits of \(f_v\), where v is the unique vertex in \(\mathcal {R} \cap N(u)\).

      • The \((x+1)\)th bit of \(k_u\) is 0.

    This step allows each \(v \in \mathcal {R}\) to learn the \((x+1)\)th bit of \(f_v\). If \(v \in \mathcal {R}\) receives a message in \(\textsf{SR}{\text{- }}\textsf{comm}\), then v knows that the \((x+1)\)th bit of \(f_v\) is 0. Otherwise, v knows that the \((x+1)\)th bit of \(f_v\) is 1.

  • Step 2. Perform \(\textsf{SR}{\text{- }}\textsf{comm}\) with the following choices of \((\mathcal {S}',\mathcal {R}')\):

    • \(\mathcal {R}' = \mathcal {S}\).

    • \(\mathcal {S}' = \mathcal {R}\).

    This step lets each \(v \in \mathcal {R}\) send the \((x+1)\)th bit of \(f_v\) to all vertices in \(\mathcal {S} \cap N(v)\).

The time and energy complexities of this procedure are asymptotically the same as those of \(\textsf{SR}{\text{- }}\textsf{comm}\), namely \(O(\log \varDelta \log n)\). As discussed earlier, to solve \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {min}}}\), all we need to do is run the above procedure from \(x=0\) to \(x = \log K - 1\). The overall time and energy complexities of the algorithm for \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {min}}}\) are \(O(\log K \log \varDelta \log n)\), as there are \(\log K\) iterations.   \(\square \)
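In the special case, one bit-learning step reduces to the two \(\textsf{SR}{\text{- }}\textsf{comm}\) calls above; here is a minimal sketch (ours) for a single receiver's star, where \(\textsf{SR}{\text{- }}\textsf{comm}\) degenerates to "v hears something iff a matching sender in its star transmits". The function name learn_one_bit is ours.

```python
def learn_one_bit(star_keys, prefix, bits):
    """star_keys: keys of S ∩ N(v) for a fixed receiver v; prefix: the
    first x bits of f_v, known to v and to the whole star.
    Returns the prefix extended by the (x+1)-th bit of f_v."""
    # Step 1: senders whose key starts with prefix + '0' transmit;
    # v hears a message iff at least one such sender exists.
    heard = any(f"{k:0{bits}b}".startswith(prefix + "0") for k in star_keys)
    bit = "0" if heard else "1"
    # Step 2: v announces `bit`, so every sender in the star can update
    # its local copy of the prefix (modeled here by the return value).
    return prefix + bit
```

Running this step for \(x = 0, \ldots , \log K - 1\) recovers \(f_v\) with \(O(\log K)\) calls to \(\textsf{SR}{\text{- }}\textsf{comm}\), matching the improved bound.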

For the rest of the section, we consider the task \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\), which requires each vertex \(v \in \mathcal {R}\) to compute a \((1 \pm \epsilon )\)-factor approximation of the summation \(\sum _{u \in N^+(v) \cap \mathcal {S}} m_u\). We need the following fact, whose correctness can be verified by a simple calculation.

Lemma 25

There exist three universal constants \(0< \epsilon _0 < 1\), \(N_0 \ge 1\), and \(c_0 \ge 1\) such that the following statement holds: For any pair of numbers \((N,\epsilon )\) such that \(N \ge N_0\) and \(\epsilon _0 \ge |\epsilon | \ge c_0 / \sqrt{N}\),

$$\begin{aligned} e^{-1}(1 - 0.51 \epsilon ^2) \le (1+\epsilon ) (1 - (1+\epsilon )/N)^{N-1} \le e^{-1}(1 - 0.49 \epsilon ^2). \end{aligned}$$
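As a quick numerical sanity check of Lemma 25 (illustration only, not a proof): since the universal constants \(\epsilon _0, N_0, c_0\) are not specified, the sketch below probes one regime, large N and small \(|\epsilon |\) with \(|\epsilon | \ge 10/\sqrt{N}\), where the inequality can be verified directly; the choice of 10 for \(c_0\) is our assumption.

```python
import math

def middle(N, eps):
    # log1p keeps (1 - (1+eps)/N)^(N-1) accurate for very large N
    return (1 + eps) * math.exp((N - 1) * math.log1p(-(1 + eps) / N))

for N in (10**8,):
    for eps in (0.01, -0.01):                 # eps may be of either sign
        assert abs(eps) >= 10 / math.sqrt(N)  # assumed parameter regime
        lo = math.exp(-1) * (1 - 0.51 * eps**2)
        hi = math.exp(-1) * (1 - 0.49 * eps**2)
        assert lo <= middle(N, eps) <= hi, (N, eps, middle(N, eps))
```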

Note that the parameter \(\epsilon \) in Lemma 25 can be either positive or negative. For the rest of the section, we assume that the message \(m_u\) sent from each vertex \(u \in \mathcal {S}\) is an integer within the range [W]. We first consider the special case of \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\) with \(W = 1\). In this case, \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\) is the same as the approximate counting problem whose goal is to let each \(v \in \mathcal {R}\) compute \(|N^+(v) \cap \mathcal {S}|\), up to a \((1 \pm \epsilon )\)-factor error.

Lemma 26

For \(W = 1\), \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\) can be solved in \(O((1/\epsilon ^5) \log \varDelta \log n)\) time and energy.

Proof

In this proof, we will focus on a slightly different task of estimating \(|N(v) \cap \mathcal {S}|\) within a \((1 \pm \epsilon )\)-factor approximation, for each \(v \in \mathcal {R}\). If each \(v \in \mathcal {R}\) knows such an estimate of \(|N(v) \cap \mathcal {S}|\), then v can locally calculate an estimate of \(|N^+(v) \cap \mathcal {S}|\) within a \((1 \pm \epsilon )\)-factor approximation, thereby solving \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\) for the case of \(W=1\).

Basic Setup. Let \(C > 0\) be a sufficiently large constant. Let \(\epsilon _0, N_0\), and \(c_0\) be the constants in Lemma 25. We assume that \(\epsilon \le \epsilon _0\). If this is not the case, then we may reset \(\epsilon = \epsilon _0\).

The algorithm consists of two phases. The first phase of the algorithm aims to achieve the following goals: For each \(v \in \mathcal {R}\), either (i) v learns the number \(|N(v) \cap \mathcal {S}|\) exactly or (ii) v detects that \(\epsilon \ge 10 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\). For each vertex \(v \in \mathcal {R}\) that calculates the number \(|N(v) \cap \mathcal {S}|\) exactly in the first phase, we remove v from \(\mathcal {R}\). The second phase of the algorithm then solves \(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\) for the remaining vertices in \(\mathcal {R}\). These vertices \(v \in \mathcal {R}\) satisfy \(\epsilon \ge 10 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\).

The First Phase. We define \(Z = (10 c_0 / \epsilon )^2\). The algorithm consists of \(C \cdot Z \log n\) rounds, where we do the following in each round:

  • Each vertex \(u \in \mathcal {S} \cup \mathcal {R}\) flips a biased coin that produces head with probability 1/Z.

  • Each \(u \in \mathcal {S}\) sends \({\text {ID}}(u)\) if the outcome of its coin flip is head.

  • Each vertex \(v \in \mathcal {R}\) listens if the outcome of its coin flip is tail.

For each vertex \(v \in \mathcal {R}\), there are two cases:

  • Suppose that there is a vertex \(u \in N(v) \cap \mathcal {S}\) such that the number of messages that v receives from u is smaller than \(0.5 \cdot (C \log n)/e\). Then v decides that \(\epsilon \ge 10 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\) and proceeds to the second phase.

  • Suppose that for all vertices \(u \in N(v) \cap \mathcal {S}\), the number of messages that v receives from u is at least \(0.5 \cdot (C \log n)/e\). Then v calculates \(|N(v) \cap \mathcal {S}|\) as the number of distinct \({\text {ID}}\)s that v receives.

The time complexity of the first phase of the algorithm is \(C \cdot Z \log n = O((1/\epsilon ^{2}) \log n)\).
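The following is a self-contained simulation sketch of the first phase for a single receiver v (ours, not the paper's pseudocode; the function name and the default constants are hypothetical). It models \(d = |N(v) \cap \mathcal {S}|\) sender neighbors and counts a round as a success for sender u exactly when u's coin is head and all other coins in \((N(v)\cap \mathcal {S}) \cup \{v\}\) are tails.

```python
import math, random

def first_phase(d, eps, c0=1.0, C=40, n=10**4):
    """One receiver with d >= 1 sender neighbors; returns the exact count
    d, or None if v should proceed to the second phase."""
    Z = (10 * c0 / eps) ** 2
    rounds = round(C * Z * math.log(n))
    received = [0] * d                      # per-sender message counts at v
    for _ in range(rounds):
        heads = [random.random() < 1 / Z for _ in range(d)]
        v_tail = random.random() >= 1 / Z   # v listens iff its coin is tail
        if v_tail and sum(heads) == 1:      # exactly one sender: no collision
            received[heads.index(True)] += 1
    threshold = 0.5 * C * math.log(n) / math.e
    if min(received) < threshold:
        return None   # v infers eps >= 10*c0/sqrt(d): go to the second phase
    return d          # otherwise v has heard every neighbor, so the number
                      # of distinct IDs it received equals |N(v) ∩ S| exactly
```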

Analysis. To analyze the algorithm, fix any edge \(\{u,v\}\) with \(u \in \mathcal {S}\) and \(v \in \mathcal {R}\). In one round of the above algorithm, u successfully sends a message to v if and only if (i) the outcome of u’s coin flip is head, and (ii) the outcomes of the coin flips of all vertices in \(((N(v)\cap \mathcal {S}) \cup \{v\}) \setminus \{u\}\) are all tails. This event occurs with probability \(p^\star = (1 - 1/Z)^{|N(v) \cap \mathcal {S}|} \cdot (1/Z)\). Let X be the number of times v receives a message from u. To prove the correctness of the algorithm, we show the following three concentration bounds:

  • If \(v \in \mathcal {R}\) satisfies \(\epsilon \le 10 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\), then \({\text {Pr}}[X \ge 0.8 \cdot (C \log n)/e] = 1 - n^{-\varOmega (C)}\).

  • If \(v \in \mathcal {R}\) satisfies \(\epsilon \ge 20 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\), then \({\text {Pr}}[X \le 0.2 \cdot (C \log n)/e] = 1 - n^{-\varOmega (C)}\).

  • If \(v \in \mathcal {R}\) satisfies \(\epsilon \le 20 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\), then \({\text {Pr}}[X \ge 1] = 1 - n^{-\varOmega (C)}\).

We show the correctness of the algorithm given these concentration bounds. For the case \(\epsilon \ge 20 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\), the second bound implies that the number of messages that v receives from u is smaller than \(0.5 \cdot (C \log n)/e\) w.h.p., so v correctly decides that \(\epsilon \ge 10 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\) and proceeds to the second phase. For the case \(\epsilon \le 20 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\), the third bound, together with a union bound over all \(u \in N(v) \cap \mathcal {S}\), implies that v receives at least one message from each vertex in \(N(v) \cap \mathcal {S}\) w.h.p., so v can calculate \(|N(v) \cap \mathcal {S}|\) exactly. The only remaining thing to show is that when \(\epsilon \le 10 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\), w.h.p. v does not erroneously decide that \(\epsilon \ge 10 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\). This follows from the first bound, which implies that the number of messages that v receives from each \(u \in N(v) \cap \mathcal {S}\) is greater than \(0.5 \cdot (C \log n)/e\) w.h.p.

We prove the three concentration bounds as follows:

  • Suppose that vertex \(v \in \mathcal {R}\) satisfies \(\epsilon \le 10 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\). We show that in this case the number of messages that v receives from \(u \in N(v) \cap \mathcal {S}\) is at least \(0.8 \cdot (C \log n)/e\), with probability \(1 - n^{-\varOmega (C)}\). In this case, we have \(Z = (10 c_0 / \epsilon )^2 \ge |N(v) \cap \mathcal {S}|\), so \(p^\star = (1 - 1/Z)^{|N(v) \cap \mathcal {S}|} \cdot (1/Z) \ge (1 - 1/Z)^{Z} \cdot (1/Z) \ge 0.9 /(e Z)\). The expected value \(\mu \) of X satisfies \(\mu = C \cdot Z \log n \cdot p^\star \ge 0.9 (C \log n)/e\). By a Chernoff bound, \({\text {Pr}}[X \le 0.8 \cdot (C \log n)/e] \le \exp (-\varOmega (C \log n)) = n^{-\varOmega (C)}\).

  • Suppose that vertex \(v \in \mathcal {R}\) satisfies \(\epsilon \ge 20 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\). We show that in this case the number of messages that v receives from \(u \in N(v) \cap \mathcal {S}\) is at most \(0.2 \cdot (C \log n)/e\), with probability \(1 - n^{-\varOmega (C)}\). In this case, we have \(Z = (10 c_0 / \epsilon )^2 \le |N(v) \cap \mathcal {S}| / 4\), so \(p^\star = (1 - 1/Z)^{|N(v) \cap \mathcal {S}|} \cdot (1/Z) \le (1 - 1/Z)^{4Z} \cdot (1/Z) \le 1/(e^4 Z)\). The expected value \(\mu \) of X satisfies \(\mu = C \cdot Z \log n \cdot p^\star \le (C \log n)/ e^4 < 0.1 (C \log n)/ e\). By a Chernoff bound, \({\text {Pr}}[X \ge 0.2 \cdot (C \log n)/e] \le \exp (-\varOmega (C \log n)) = n^{-\varOmega (C)}\).

  • Suppose that vertex \(v \in \mathcal {R}\) satisfies \(\epsilon \le 20c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\). We show that in this case the number of messages that v receives from \(u \in N(v) \cap \mathcal {S}\) is at least 1, with probability \(1 - n^{-\varOmega (C)}\). In this case, we have \(Z = (10 c_0 / \epsilon )^2 \ge |N(v) \cap \mathcal {S}| / 4\), so \(p^\star = (1 - 1/Z)^{|N(v) \cap \mathcal {S}|} \cdot (1/Z) \ge (1 - 1/Z)^{4Z} \cdot (1/Z) \ge 0.9 /(e^4 Z)\). We have \({\text {Pr}}[X < 1] = (1-p^\star )^{CZ \log n} \le (1 - 0.9 /(e^4 Z))^{CZ \log n} = n^{-\varOmega (C)}\).

The Second Phase. Each vertex \(v \in \mathcal {R}\) that has already calculated the number \(|N(v) \cap \mathcal {S}|\) exactly in the first phase removes itself from \(\mathcal {R}\). Every remaining vertex \(v \in \mathcal {R}\) satisfies \(\epsilon \ge 10 c_0 / \sqrt{|N(v) \cap \mathcal {S}|}\).

We consider the sequence of sending probabilities \(p_1 = 2/\varDelta \) and \(p_i = \min \{1, p_{i-1} \cdot (1+ \epsilon )\}\) for \(i > 1\). Let \(i^\star \) be the smallest index i such that \(p_i = 1\); note that \(i^\star = O((1/\epsilon ) \log \varDelta )\).

The second phase of the algorithm consists of \(i^\star \) iterations, where the ith iteration repeats the following procedure \(C \cdot (1/\epsilon ^4) \log n\) times. In each repetition, every vertex \(v \in \mathcal {S} \cup \mathcal {R}\) does the following:

  • v flips a fair coin.

  • If the outcome of the coin flip is head and \(v \in \mathcal {S}\), then v sends with probability \(p_i\).

  • If the outcome of the coin flip is tail and \(v \in \mathcal {R}\), then v listens to the channel.

After finishing the algorithm, each vertex \(v \in \mathcal {R}\) finds an index \(i'\) maximizing the number of messages that v successfully receives during the \(i'\)th iteration. Then v outputs \(2 / p_{i'}\) as its estimate of \(|N(v) \cap \mathcal {S}|\), which we will show is within a factor of \((1\pm \epsilon )\) w.h.p. The time complexity of the second phase of the algorithm is \(i^\star \cdot C \cdot (1/\epsilon ^4) \log n = O((1/\epsilon ^5) \log \varDelta \log n)\).
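A simulation sketch of the second phase for a single receiver v with \(d = |N(v) \cap \mathcal {S}|\) sender neighbors may help (ours, not the paper's pseudocode; the function name and default constants are hypothetical):

```python
import math, random

def second_phase(d, eps, Delta, C=40, n=10**4):
    reps = round(C * (1 / eps**4) * math.log(n))
    probs = [2 / Delta]                      # p_1 = 2/Delta; p_i grows by
    while probs[-1] < 1.0:                   # a (1+eps) factor, capped at 1
        probs.append(min(1.0, probs[-1] * (1 + eps)))
    best_i, best_count = 0, -1
    for i, p in enumerate(probs):
        count = 0
        for _ in range(reps):
            v_listens = random.random() < 0.5             # fair coin: tail
            transmitters = sum(random.random() < 0.5 * p  # head, then send
                               for _ in range(d))         # with prob p
            if v_listens and transmitters == 1:           # no collision
                count += 1
        if count > best_count:               # keep the iteration in which
            best_i, best_count = i, count    # v heard the most messages
    return 2 / probs[best_i]   # w.h.p. a (1 ± eps)-factor estimate of d
```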

Analysis. To show the correctness of the above algorithm, in the subsequent discussion, we focus on a vertex \(v \in \mathcal {R}\) in the ith iteration. We say that i is good for v if \(p_i/2\) is within a \((1\pm 0.6\epsilon )\)-factor of \(1/|N(v) \cap \mathcal {S}|\), and we say that i is bad for v if \(p_i/2\) is not within a \((1\pm \epsilon )\)-factor of \(1/|N(v) \cap \mathcal {S}|\). Our choice of the sequence \((p_1, p_2, \ldots )\) implies that there must be at least one good index i for v: the endpoints of the interval \([(1-0.6\epsilon )/|N(v) \cap \mathcal {S}|,\, (1+0.6\epsilon )/|N(v) \cap \mathcal {S}|]\) differ by a factor \((1+0.6\epsilon )/(1-0.6\epsilon ) > 1+\epsilon \), and the sequence \((p_i/2)\) increases from \(1/\varDelta \le 1/|N(v) \cap \mathcal {S}|\) to \(1/2 \ge 1/|N(v) \cap \mathcal {S}|\) in multiplicative steps of \(1+\epsilon \), so it cannot jump over this interval.

We write \(p_i^{\text {suc}}\) to denote the probability that v successfully receives a message in one round of the ith iteration. From the description of the algorithm, we have

$$\begin{aligned} p_i^{\text {suc}} = (1/2) \cdot |N(v) \cap \mathcal {S}| \cdot (p_i / 2) \cdot (1-(p_i / 2))^{|N(v) \cap \mathcal {S}|-1}. \end{aligned}$$

We define

$$\begin{aligned} p_{\text {good}} = (1/2) \cdot e^{-1}(1 - 0.51 (0.6 \epsilon )^2) \ \ \ \text {and} \ \ \ p_{\text {bad}} = (1/2) \cdot e^{-1}(1 - 0.49 \epsilon ^2). \end{aligned}$$

We claim that (i) \(p_i^{\text {suc}} \ge p_{\text {good}}\) if i is good for v and (ii) \(p_i^{\text {suc}} \le p_{\text {bad}}\) if i is bad for v.

We first prove this claim for the case that i is good for v. For simplicity, we write \(N = |N(v) \cap \mathcal {S}|\). Since i is good, \(p_i / 2 = (1 + \epsilon ')/N\) for some \(\epsilon ' \in [-0.6 \epsilon , 0.6 \epsilon ]\). With this notation, we may rewrite \(p_i^{\text {suc}}\) as

$$\begin{aligned} p_i^{\text {suc}}&= (1/2) \cdot |N(v) \cap \mathcal {S}| \cdot (p_i / 2) \cdot (1-(p_i / 2))^{|N(v) \cap \mathcal {S}|-1}\\&= (1/2) \cdot (1 + \epsilon ') \cdot (1-(1 + \epsilon ')/N)^{N-1}. \end{aligned}$$

By Lemma 25, we infer that \(p_i^{\text {suc}} \ge (1/2) \cdot e^{-1}(1 - 0.51 (\epsilon ')^2) \ge (1/2) \cdot e^{-1}(1 - 0.51 (0.6 \epsilon )^2) = p_{\text {good}}\), since \(|\epsilon '| \le 0.6\epsilon \).

Now consider the case that i is bad for v. Again, we write \(N = |N(v) \cap \mathcal {S}|\). Since i is bad, \(p_i / 2 = (1 + \epsilon ')/N\) for some \(\epsilon ' \notin (-\epsilon , \epsilon )\). The above formula for \(p_i^{\text {suc}}\) still applies, and Lemma 25 implies that \(p_i^{\text {suc}} \le (1/2) \cdot e^{-1}(1 - 0.49 (\epsilon ')^2) \le (1/2) \cdot e^{-1}(1 - 0.49 \epsilon ^2) = p_{\text {bad}}\), since \(|\epsilon '| \ge \epsilon \).

Let X be the number of messages that v receives in the ith iteration of the algorithm. The expected value of X is \(\mu = p_i^{\text {suc}} \cdot C \cdot (1/\epsilon ^4) \log n\). For the case i is good for v, we have \(\mu \ge p_{\text {good}} \cdot C \cdot (1/\epsilon ^4) \log n\), so by a Chernoff bound, we have:

$$\begin{aligned} {\text {Pr}}[X \le (1-0.01 \epsilon ^2) p_{\text {good}} \cdot C \cdot (1/\epsilon ^4) \log n] = e^{-\varOmega (\epsilon ^4 \cdot C \cdot (1/\epsilon ^4) \log n)} = n^{-\varOmega (C)}. \end{aligned}$$

For the case i is bad for v, we have \(\mu \le p_{\text {bad}} \cdot C \cdot (1/\epsilon ^4) \log n\), so by a Chernoff bound, we have:

$$\begin{aligned} {\text {Pr}}[X \ge (1+0.01 \epsilon ^2) p_{\text {bad}} \cdot C \cdot (1/\epsilon ^4) \log n] = e^{-\varOmega (\epsilon ^4 \cdot C \cdot (1/\epsilon ^4) \log n)} = n^{-\varOmega (C)}. \end{aligned}$$

Since \((1-0.01 \epsilon ^2) p_{\text {good}} > (1+0.01 \epsilon ^2) p_{\text {bad}}\), we conclude that w.h.p. the index \(i'\) selected by v is not bad, so \(p_{i'}/2\) is within a \((1\pm \epsilon )\)-factor of \(1/|N(v) \cap \mathcal {S}|\), and hence the estimate \(2/p_{i'}\) calculated by v is within a \((1\pm \epsilon )\)-factor of \(|N(v) \cap \mathcal {S}|\).   \(\square \)

In the following lemma, we extend Lemma 26 to any value of W.

Lemma 27

\(\textsf{SR}{\text{- }}\textsf{comm}^{{\text {apx}}}\) can be solved in \(O((1/\epsilon ^6) \log W \log \varDelta \log n)\) time and energy.

Proof

We let \(\epsilon ' = \Theta (\epsilon )\) be chosen such that \((1+\epsilon ')^2 < 1+\epsilon \) and \((1-\epsilon ')^2 > 1-\epsilon \). We consider the following sequence: \(w_0 = 0\), \(w_1 = 1\), and \(w_i = \min \{ W, (1+\epsilon ')w_{i-1}\}\) for \(i > 1\). Let \(i^\star \) be the smallest index i such that \(w_i = W\).

From \(i = 1\) to \(i^\star \), we run the algorithm of Lemma 26 with the following setting:

  • \(\mathcal {S}'\) is the set of vertices \(u \in \mathcal {S}\) with \(m_u \in (w_{i-1}, w_{i}]\).

  • \(\mathcal {R}' = \mathcal {R}\).

  • The error parameter is \(\epsilon '\).

The algorithm of Lemma 26 lets each \(v \in \mathcal {R}'\) compute a \((1 \pm \epsilon ')\)-factor approximation of \(|N^+(v) \cap \mathcal {S}'|\) using \(O((1/\epsilon ^5) \log \varDelta \log n)\) time and energy.

For each \(v \in \mathcal {R}\), we write \(N_i\) to denote the number of vertices \(u \in N^+(v) \cap \mathcal {S}\) such that \(m_u \in (w_{i-1}, w_{i}]\), and we write \(\tilde{N}_i\) to denote the estimate of \(|N^+(v) \cap \mathcal {S}'|\) computed by v in the ith iteration. We have the following observations:

  • \(\tilde{N}_i\) is a \((1\pm \epsilon ')\)-factor approximation of \(N_i\).

  • \(\sum _{i=1}^{i^\star } w_i N_i\) is a \((1\pm \epsilon ')\)-factor approximation of \(\sum _{u \in N^+(v) \cap \mathcal {S}} m_u\), since each \(m_u \in (w_{i-1}, w_i]\) satisfies \(m_u \le w_i \le (1+\epsilon ') m_u\).

Thus, \(\sum _{i=1}^{i^\star } w_i \tilde{N}_i\), which can be calculated locally at v at the end of the algorithm, is a \((1\pm \epsilon )\)-factor approximation of \(\sum _{u \in N^+(v) \cap \mathcal {S}} m_u\), by our choice of \(\epsilon '\).

By Lemma 26, the time and energy complexities for each iteration are \(O((1/\epsilon ^5)\log \varDelta \log n)\). The total number of iterations is \(i^\star = O((1/\epsilon ) \log W)\). Thus, the overall time and energy complexities are \(O((1/\epsilon ^6) \log W \log \varDelta \log n)\).   \(\square \)
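To illustrate the weight-bucketing reduction of Lemma 27, here is a minimal sketch (ours; the function name is hypothetical) in which the Lemma-26 subroutine is replaced by an exact bucket count, since only the combination step is being illustrated; in the real algorithm each \(N_i\) is known only up to a \((1 \pm \epsilon ')\) factor.

```python
def approx_sum(messages, W, eps_p):
    """(1 + eps')-factor overestimate of sum(messages), messages in [W]."""
    ws = [0.0, 1.0]                          # w_0 = 0, w_1 = 1
    while ws[-1] < W:                        # w_i = min(W, (1+eps')*w_{i-1})
        ws.append(min(float(W), ws[-1] * (1 + eps_p)))
    total = 0.0
    for i in range(1, len(ws)):
        # N_i = number of senders with m_u in (w_{i-1}, w_i]
        N_i = sum(1 for m in messages if ws[i - 1] < m <= ws[i])
        total += ws[i] * N_i                 # each m is rounded up to its
    return total                             # bucket's upper endpoint w_i
```

Since \(w_i \le (1+\epsilon ') m_u\) for every \(m_u\) in bucket i, the returned value overestimates the true sum by at most a \((1+\epsilon ')\) factor; combining this with the \((1\pm \epsilon ')\)-accurate counts yields the claimed \((1\pm \epsilon )\) guarantee.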
