Abstract
Very large-scale networks have become common in distributed systems. To manage these networks efficiently, various techniques are being developed in the distributed computing and networking research communities. In this paper, we focus on one of those techniques, network clustering, i.e., the partitioning of a system into connected subsystems. The clustering we compute is size-oriented: given a parameter K of the algorithm, we compute, as far as possible, clusters of size K.
We present an algorithm to compute a binary hierarchy of nested disjoint clusters. A token browses the network and recruits nodes to its cluster. When a cluster reaches a maximal size defined by a parameter of the algorithm, it is divided when possible, and tokens are created in both of the new clusters. The new clusters are then built and divided in the same fashion. The token browsing scheme chosen is a random walk, in order to ensure local load balancing.
To allow the division of clusters, a spanning tree is built for each cluster. At each division, information on how to route messages between the clusters is stored. The naming process used for the clusters, along with the information stored during each division, allows routing between any two clusters.
Appendix: examples
Construction
In this section, we present an example execution of the recruitment phase and of the division of a cluster. We use K=5.
Initially, node 12 owns the token and sends it to node 8.
When receiving the token (see Fig. 7), node 8 executes On receiving a token message (cf. Algorithm 3). Since node 8 is unclustered (P_8 = 0), it joins cluster x (cf. line 2) and adds its id to the tree (cf. lines 3 and 4). All nodes belonging to cluster x are listed in the token array.
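The recruitment phase can be sketched in a few lines of Python (a minimal sketch under assumed data structures: the graph is an adjacency dict, the token array is a plain list, and `recruit` is a hypothetical helper name, not from the paper):

```python
import random

def recruit(adj, start, K, seed=0):
    """Random-walk token recruitment: the token starts at `start`
    and walks the graph; each unclustered node it visits joins the
    cluster, until the cluster holds K members (division omitted)."""
    rng = random.Random(seed)
    members = [start]                # the token array: recruited ids
    cur = start
    while len(members) < K:
        cur = rng.choice(adj[cur])   # one random-walk step
        if cur not in members:       # an unclustered node joins
            members.append(cur)
    return members
```

Choosing a random walk for the token, as the paper does, means no node needs global knowledge of the cluster's shape, which is what provides the local load balancing mentioned in the abstract.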
Then, isDivisible(Tab) returns 4. Node 8 launches a division wave (cf. lines 12 to 14). In Fig. 7(b), Child0 messages are propagated on the tree up to node 12. Node 8 creates a new token x0 that retains the link (1,4) as a gateway between clusters x0 and x1. Upon reception of a Child0 message, node 4 will create a new token that retains the link (4,1) as a gateway between clusters x1 and x0. It will then inform all its descendants in the tree, with Child1 messages, that they now belong to cluster x1.
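The division step can be sketched as follows (a sketch, not the paper's pseudocode: the spanning tree is given by parent pointers, the tree shape used below is only assumed to be consistent with the figures, and `divide` is a hypothetical helper name). The subtree rooted at the division node becomes cluster x1, the remaining nodes become x0, and the tree link leading to the division node is stored as the gateway between the two new clusters:

```python
def divide(parent, root, d, name):
    """Split a cluster's spanning tree at division node d.
    parent maps each non-root node to its tree parent."""
    children = {}
    for v, p in parent.items():
        children.setdefault(p, []).append(v)
    # collect the subtree of d: these nodes move to cluster name+"1"
    sub, stack = set(), [d]
    while stack:
        v = stack.pop()
        sub.add(v)
        stack.extend(children.get(v, []))
    cluster = {v: name + ("1" if v in sub else "0")
               for v in list(parent) + [root]}
    gateway = (parent[d], d)   # stored by both new tokens
    return cluster, gateway
```

With a tree rooted at node 12 and division node 4 (an assumed shape consistent with the example), the stored gateway is the link (1,4).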
In Fig. 8(a), node 12 has received a Child0 message. It executes On reception of Child0 message (cf. Algorithm 4), joins cluster x0 (cf. line 3) and sends Child0 messages to nodes 15 and 1 (cf. lines 4–6). In Fig. 8(b), node 15 has received the Child0 message. It executes On reception of Child0 message (cf. Algorithm 5) and joins the new cluster (cf. line 3). Node 15 has no child in the tree described by Tab_t, so it does not propagate the message.
A Child0 message is then received by node 1, which executes On reception of Child0 message (cf. Algorithm 5). It joins the new cluster (cf. line 3) and sends Child0 messages to nodes 4 and 6.
In Fig. 8, the token x0 continues its walk. Nodes 45 and 52 join cluster x0.
In Fig. 9(a), node 6 has received a Child0 message. It executes On reception of Child0 message (cf. Algorithm 4) and joins cluster x0 (cf. line 3). Node 6 has no child in Tab_t, so it does not propagate the message.
A Child0 message is then received by node 4, which executes On reception of Child0 message (cf. Algorithm 5). It joins the new cluster x1 (cf. line 3) and sends Child1 messages to nodes 5, 11 and 22. At line 12, it also creates a new token and sends it to a random neighbor, node 12.
Next, the propagation of the division wave continues and both tokens continue their walks. Node 12 will send the token x1 back to node 4.
Finally, in Fig. 9(a), the division wave ends and both tokens continue their walks.
After the division (Fig. 9(b)), tokens x0 and x1 each hold the spanning tree of their own cluster.
Communication mechanism
Suppose that node 26, in cluster x00, wants to send a message to node 5 in cluster x101. With local information, it can compute a path toward cluster x1.
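This cluster-name computation can be sketched from the naming scheme alone (a sketch; `next_cluster` is a hypothetical helper name, not from the paper): P_up is the deepest cluster whose name is a common prefix of both cluster names, and P_next is P_up's child on the destination's side:

```python
def next_cluster(cur, dst):
    """Given the current cluster name and the destination cluster
    name, return the next target cluster (P_next)."""
    i = 0
    while i < min(len(cur), len(dst)) and cur[i] == dst[i]:
        i += 1                  # longest common prefix = P_up
    return dst[: i + 1]         # P_up extended by the destination's bit
```

For node 26 in x00 with destination cluster x101, this yields x1; when the two names are already equal, the routing is intra-cluster and handled separately.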
To send the message, it executes Algorithm 1. Since 26≠5, path=∅ and x00≠x101, it executes line 8: P_up←x. P_up.0 is a prefix of x00 (the cluster of node 26), so at line 10, P_next←x1. Next, at line 14, it computes a path toward x1. Tab_26[x1]=x01 and Tab_26[x01]=8, so node 26 computes a path toward node 8 and sends the message to the next node on the path, node 32. When node 32 receives this message, it executes the same algorithm. Since 32≠5 and path≠∅, at line 17 it forwards the message to the next node on the path, node 8 (cf. Fig. 10).
Node 8, in cluster x01, receives a message aimed at node 5 in cluster x101. With local information, it computes a path toward cluster x1.
When node 8 receives this message, it executes Algorithm 1. Since 8≠5, path=∅ and x01≠x101, it executes line 8: P_up←x. P_up.0 is a prefix of x01 (the cluster of node 8), so at line 10, P_next←x1. Next, at line 14, it computes a path toward x1. Tab_8[x1]=1, so node 8 computes a path toward node 1 and sends the message to the next node on the path, node 45. When node 45 receives this message, it forwards it to the next node on the path (since the path is nonempty), node 52. Node 52 forwards the message to node 12. When node 12 receives this message, it executes the same algorithm. Since 12≠5 and path≠∅, at line 17 it sends the message to node 1 (cf. Fig. 11).
Node 1, in cluster x110, receives the message, aimed at node 5 in cluster x101. With local information, it computes a path toward cluster x10.
When node 1 receives this message, it executes Algorithm 1. Since 1≠5, path=∅ and x110≠x101, it executes line 8: P_up←x1. From now on, the routing takes place inside x1. P_up.1 is a prefix of x110 (the cluster of node 1), so at line 10, P_next←x10. Next, at line 14, it computes a path toward x10. Tab_1[x10]=4, so node 1 computes a path toward node 4 and sends the message to the next node on the path, node 6. When node 6 receives this message, it forwards it to the next node on the path (since the path is nonempty), node 11. When node 11 receives this message, it executes the same algorithm. Since 11≠5 and path≠∅, at line 17 it sends the message to node 4 (cf. Fig. 12).
Node 4, in cluster x101, receives the message, aimed at node 5 in cluster x101. With local information, it computes a path toward node 5.
Node 4 executes Algorithm 1. Since 4≠5, path=∅ and x101=x101, it executes line 4: it computes a path toward node 5 and, at line 17, sends the message to the next node on the path. Tab_4[5]=4, so it sends the message to node 5. Finally, node 5 receives the message (cf. Fig. 13).
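The chained table lookups seen in the traces above (e.g. Tab_26[x1]=x01, then Tab_26[x01]=8) can be sketched as a small resolution loop (a sketch; `resolve` is a hypothetical helper name, and we assume cluster names are strings while node ids are integers, so the loop terminates at a node id):

```python
def resolve(tab, target):
    """Follow routing-table entries (cluster -> cluster, or
    cluster -> node) until a concrete node id is reached."""
    while target in tab:
        target = tab[target]
    return target
```

Each node thus needs only its local table to discover which neighbor to hand the message to; no node stores a global map of the cluster hierarchy.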
Bui, A., Clavière, S. & Sohier, D. Nested clusters with intercluster routing. J Supercomput 65, 1353–1382 (2013). https://doi.org/10.1007/s11227-013-0886-y