Blocking Problem

  • Mohsen Jahanshahi
  • Fathollah Bistouni
Chapter
Part of the Computer Communications and Networks book series (CCN)

Abstract

This chapter focuses on the blocking problem. Different existing solutions to this problem, as well as their scalability, will be analyzed. According to previous works, the two main solutions are as follows: (1) Using small-size crossbar networks to build scalable interconnection networks whose topologies differ from the crossbar. Many topologies have been introduced using this approach, most of which are known as multistage interconnection networks. (2) Using small-size crossbar networks to build scalable crossbar networks. From this perspective, the designed networks are non-blocking, like the crossbar network.

3.1 Introduction

In the early 1950s, von Neumann suggested a simple, cost-effective design for electronic computers in which a single processing unit was connected to a single memory module. During the 1960s, thanks to solid-state components, the cost of large computing machines fell. Subsequently, very large-scale integration (VLSI) evolved, in which thousands of transistors are placed on a single chip. Supercomputers successfully dealt with scientific problems such as climate modeling, aerodynamic aircraft design, and particle physics, which created a strong incentive for the development of parallel computers. Since the 1980s, this technology has played an undeniable role in solving other challenging problems [1].

As discussed in Chap.  1, processors, memory hierarchy, and the interconnection network are vital parts of a parallel system. In other words, the design of an efficient interconnection network is crucial for the efficient construction of multiprocessor systems [2, 3, 4, 5].

One of the important factors in choosing a proper interconnection topology is the blocking problem. If a network is able to handle all possible requests, each of which is a permutation (i.e., a request for parallel connections of \( N \) sources to \( N \) corresponding distinct destinations), then the network is non-blocking. Conversely, a network is blocking if it is unable to handle all such requests without conflict [2, 6, 7, 8].
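
The notion of a permutation request can be sketched in a few lines of code. The helper below is purely illustrative (its name and representation are not from the chapter): a request is modeled as a mapping from sources to destinations, and it is a valid permutation exactly when the \( N \) sources target \( N \) distinct destinations.

```python
def is_permutation_request(request, n):
    """Return True if `request` maps n sources 0..n-1 to n distinct
    destinations 0..n-1. A non-blocking network must be able to realize
    every such permutation as parallel, conflict-free connections."""
    if len(request) != n:
        return False
    return (set(request.keys()) == set(range(n))
            and set(request.values()) == set(range(n)))

# A cyclic shift of 8 terminals is a valid permutation request:
shift = {s: (s + 1) % 8 for s in range(8)}
assert is_permutation_request(shift, 8)
# Two sources aiming at the same destination is not:
assert not is_permutation_request({0: 0, 1: 0, 2: 2, 3: 3}, 4)
```

A blocking network is then simply one for which at least one such request cannot be routed without conflict.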

So far, a large number of interconnection topologies have been introduced. However, few of them can efficiently resolve the blocking problem. For a system with \( N \) terminal nodes, the ideal topology would connect these nodes by a single switch of size \( N \times N \). This type of topology is known as the crossbar. In a crossbar network, any processor in the system can connect to any other processor or memory module, so that many processors can communicate simultaneously without contention. Clearly, a crossbar network is strictly non-blocking for any permutation of connections. A question then arises: if the crossbar network is strictly non-blocking, can the blocking problem be considered solved? Unfortunately, the answer is no. There is an important problem with the use of the crossbar network, namely scalability. The number of available pins and the area of the wiring limit the size of the largest crossbar that can be implemented on a single chip. Although VLSI technology can integrate the crossbar switch hardware into a single chip, the number of pins on a single VLSI chip cannot exceed a certain number [2, 6, 9]. The scalability problem prevents the direct use of the crossbar network for large-size systems; therefore, in practice, the crossbar network can only be used in small-size multiprocessor systems. Nevertheless, there is a reasonable solution that allows taking advantage of crossbar networks in large-size systems: using small-size crossbars as building blocks for networks of larger sizes. By studying the previous works, it can be deduced that this solution can be implemented by two different approaches:

(1) Designing scalable interconnection networks with topologies different from the crossbar, using small-size crossbar networks as switching elements [2, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. So far, a large number of interconnection topologies have been designed using this approach, most of which are known as multistage interconnection networks. (2) Designing scalable crossbar networks using small-size crossbar networks as switching elements [2, 6, 33]. This approach leads to the design of scalable networks that, like the crossbar network, are non-blocking.

In the remainder of this chapter, the two aforementioned approaches will be discussed in more detail. Next, in Chap. 4, on behalf of the first approach, several methods will be introduced to improve the fault-tolerance metric (as a way to mitigate the blocking problem) in multistage interconnection networks. Then, in Chap. 5, a new non-blocking interconnection topology will be proposed on behalf of the second approach.

3.2 Related Works

As discussed in the previous section, the crossbar network suffers from a scalability problem when exploited in large-size systems. The main reason is that a large number of pins are required to implement a large-size crossbar network on a single VLSI chip, whereas the number of pins on a VLSI chip cannot exceed a few hundred. This restricts the size of the crossbar network. The solution to this problem is to use small-size crossbars as building blocks of larger networks, and this solution can be implemented in two different scenarios: (1) Take advantage of small-size crossbar networks as switching elements to build larger networks that are topologically different from the crossbar [2, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. (2) Take advantage of small-size crossbar networks as switching elements to build larger crossbar networks that are topologically equivalent to the crossbar [2, 6, 33]. We will discuss these two scenarios in sub-Sects. 3.2.1 and 3.2.2, respectively.

3.2.1 Construction of Scalable Non-crossbar Networks by Small-Size Crossbars

This approach can be used as a basis for making different topologies. In what follows, important interconnection topologies built on this approach are investigated.

Generally, when the literature discusses banyan-type networks, it refers to typical multistage interconnection networks (MINs) that provide only a single path between each source–destination pair. Consequently, such a network has a single point of failure and fails in the event of a failure in any of its components. So far, a variety of banyan-type topologies, such as the shuffle-exchange network, omega network, baseline network, binary n-cube network, and delta networks, have been presented by researchers. Remarkably, all of these networks typically use 2 × 2 crossbar networks as their switching elements.
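
The single-path property of banyan-type MINs follows from their destination-tag (self-routing) scheme: each bit of the destination address forces the setting of one 2 × 2 switch, so no routing choice remains. The sketch below illustrates this standard scheme; the MSB-first bit order is an assumption that varies between particular banyan topologies.

```python
def banyan_routing_tag(destination, n_stages):
    """Destination-tag self-routing in a banyan-type MIN: the destination
    address in binary is the routing tag. At stage i the traversed 2x2
    switch uses bit i (MSB first): 0 selects the upper output, 1 the
    lower output. Every bit choice is forced, so exactly one path exists
    for each source-destination pair."""
    return [(destination >> (n_stages - 1 - i)) & 1 for i in range(n_stages)]

# 8x8 network (3 stages): destination 5 = 0b101 -> lower, upper, lower.
assert banyan_routing_tag(5, 3) == [1, 0, 1]
assert banyan_routing_tag(0, 3) == [0, 0, 0]
```

Because the tag determines the whole path, a single faulty switch on that path disconnects the pair, which is exactly the single-point-of-failure weakness noted above.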

A MIN called the gamma network has been presented in [12]. A gamma network of size 8 × 8 is shown in Fig. 3.1. This network can establish connections between \( N \) source nodes and \( N \) destination nodes. It is made up of \( \left( {\log_{2} N + 1} \right) \) stages, numbered from 0 through \( (\log_{2} N) \), and uses small-size crossbar networks as switching elements in each stage. The number of crossbar switches in each stage is equal to \( N \). The crossbar switches used in the first, last, and middle stages are of small sizes 1 × 3, 3 × 1, and 3 × 3, respectively. The gamma network can provide different paths for many source–destination pairs, which helps its fault-tolerance capability. However, it cannot provide more than one path when the tag numbers of the source and destination are the same. In these cases, the network has a single point of failure and is non-fault-tolerant. Therefore, the gamma network offers different levels of reliability to different terminal nodes and cannot guarantee fault tolerance in all scenarios.
Fig. 3.1

A gamma network of size 8 × 8
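
The structural counts quoted above for the gamma network can be collected numerically; the helper below is hypothetical (not from the chapter) and simply restates the stage and switch counts given in the text.

```python
import math

def gamma_structure(n):
    """Structural parameters of an N x N gamma network, per the text:
    log2(N)+1 stages numbered 0..log2(N), N crossbar switches per stage;
    switch sizes are 1x3 (first stage), 3x3 (middle stages), 3x1 (last)."""
    stages = int(math.log2(n)) + 1
    sizes = ["1x3"] + ["3x3"] * (stages - 2) + ["3x1"]
    return {"stages": stages, "switches_per_stage": n, "switch_sizes": sizes}

# The 8x8 gamma network of Fig. 3.1: 4 stages of 8 switches each.
assert gamma_structure(8)["stages"] == 4
assert gamma_structure(8)["switch_sizes"] == ["1x3", "3x3", "3x3", "3x1"]
```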

In [13], two new designs of 4-disjoint-path MINs, namely the 4-disjoint gamma interconnection networks (4DGIN-1 and 4DGIN-2), have been proposed in order to improve the fault tolerance and reliability of the gamma network. The 4DGIN-1 and 4DGIN-2 networks are shown in Figs. 3.2 and 3.3, respectively. Consider 4DGIN-1 and 4DGIN-2 networks of size \( N \times N \). The number of switching stages in these new topologies is equal to \( \left( {\log_{2} N + 1} \right) \), numbered from 0 to \( (\log_{2} N) \). Again, small-size crossbar networks are used in each of the switching stages of these fault-tolerant networks.
Fig. 3.2

A 4DGIN-1 network of size 8 × 8

Fig. 3.3

A 4DGIN-2 network of size 8 × 8

A new fault-tolerant MIN topology called the Combining Switches Multistage Interconnection Network (CSMIN) is proposed in [14]. To meet the fault-tolerance metric, there are two different paths between each source–destination pair. When one of the paths fails, the other path can dynamically be used as a successor route for forwarding packets, improving the blocking situation. A CSMIN of size 8 × 8 is shown in Fig. 3.4. Consider a CSMIN of general size \( N \times N \). This topology has \( \left( {\log_{2} N + 1} \right) \) stages, numbered from 0 to \( (\log_{2} N) \), and uses small-size crossbar networks as the switching elements in each stage. The sizes of the crossbar switches in the first and last stages are 2 × 4 and 3 × 2, respectively; the sizes in stage 1 and the intermediate stages are 3 × 3 and 4 × 4, respectively.
Fig. 3.4

A CSMIN network of size 8 × 8

In [15], a new topology called the Fault-tolerant Fully-Chained Combining Switches Multistage Interconnection Network (FCSMIN) has been introduced to eliminate the backtracking penalties of the CSMIN. Figure 3.5 shows an FCSMIN of size 8 × 8. Since the FCSMIN provides several different paths between each source–destination pair, it is able to meet the fault-tolerance parameter. In the FCSMIN, in stages 1 to \( \left( {\log_{2} N - 1} \right) \), one of the original non-straight links of the CSMIN has been changed to a chained link. In addition, the non-straight links between the last two stages of the CSMIN have been removed in the FCSMIN structure. Generally, an FCSMIN of size \( N \times N \) has \( \left( {\log_{2} N + 1} \right) \) switching stages, numbered from 0 to \( (\log_{2} N) \). This network also takes advantage of small-size crossbars within its structure: the sizes of the crossbar switches in the first stage, the middle stages, and the last stage are 2 × 4, 3 × 3, and 2 × 1, respectively.
Fig. 3.5

A FCSMIN network of size 8 × 8

Wei and Lee [16] introduced a new MIN topology, the Extra Group Network (EGN), that provides the fault-tolerance and reliability parameters. An 8 × 8 EGN is shown in Fig. 3.6. Consider an EGN of size \( N \times N \). This topology has \( \left( {N + \frac{N}{m}} \right) \) multiplexers in the input stage, \( \left( {\frac{N}{2} + \frac{N}{2m}} \right) \) switches in each of the intermediate stages, and \( \left( {N + \frac{N}{m}} \right) \) demultiplexers in the output stage (\( m \) is related to the size of the multiplexers and demultiplexers). There are one \( m \times 1 \) multiplexer stage as the input stage, \( \log_{2} \left( {\frac{N}{m}} \right) \) intermediate stages consisting of crossbar switches of size 2 × 2, and one \( 1 \times m \) demultiplexer stage as the output stage. Moreover, this structure is divided into groups in such a way that each single-path network of size \( \frac{N}{m} \), plus its corresponding multiplexers and demultiplexers, forms a group. As the figure shows, the EGN takes advantage of small-size crossbars in the development of its scalable structure.
Fig. 3.6

An EGN network of size 8 × 8
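
The component counts stated for the EGN can be tabulated directly from the formulas in the text; the function name below is a hypothetical helper, and the same counts apply to the IEGN discussed next (only the intermediate switch size changes from 2 × 2 to 3 × 3).

```python
import math

def egn_components(n, m):
    """Component counts of an N x N EGN per the text (m = mux/demux size):
    N + N/m multiplexers, N/2 + N/(2m) switches per intermediate stage,
    N + N/m demultiplexers, and log2(N/m) intermediate stages."""
    return {
        "multiplexers": n + n // m,
        "switches_per_stage": n // 2 + n // (2 * m),
        "demultiplexers": n + n // m,
        "intermediate_stages": int(math.log2(n // m)),
    }

# Example: N = 8, m = 2 gives 12 multiplexers, 6 switches per stage,
# and log2(4) = 2 intermediate stages.
assert egn_components(8, 2)["multiplexers"] == 12
assert egn_components(8, 2)["intermediate_stages"] == 2
```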

In another work [7], a new MIN topology named the Improved Extra Group Network (IEGN) has been proposed. The IEGN is derived from the EGN and aims to improve the fault-tolerance and reliability parameters. Although the IEGN structure has changed compared to the EGN, it still uses small-size crossbar networks in its topology. Some auxiliary links are added to the IEGN that help improve fault tolerance, reliability, and performance even in the presence of faults. These auxiliary links change the size of the crossbars from 2 × 2 to 3 × 3. An IEGN of size 8 × 8 is shown in Fig. 3.7. An IEGN of size \( N \times N \) has \( \left( {N + \frac{N}{m}} \right) \) multiplexers in the input stage, \( \left( {\frac{N}{2} + \frac{N}{2m}} \right) \) crossbar switches in each of the middle stages, and \( \left( {N + \frac{N}{m}} \right) \) demultiplexers in the output stage (here, \( m \) is related to the size of the multiplexers and demultiplexers). In addition, the IEGN uses one \( m \times 1 \) multiplexer stage as the input stage, \( \log_{2} \left( {\frac{N}{m}} \right) \) intermediate stages of 3 × 3 crossbar switches, and one \( 1 \times m \) demultiplexer stage as the output stage.
Fig. 3.7

An IEGN network of size 8 × 8

A new interconnection topology called the Hierarchical Adaptive Switching Interconnection Network (HASIN) has been introduced in [17]. In general, this topology has two structural levels, namely a local level and a global level. The local level makes use of small-size crossbar networks and the global level uses a mesh network. Figure 3.8 shows an example of the HASIN topology consisting of 28 cores. This hierarchical structure reduces the number of hops and exploits communication locality. In addition, since small-size crossbar switches have an efficient structure with no need for buffering, the power consumption is much lower than with a conventional router structure. Therefore, the use of small-size crossbar networks is a good idea to improve performance in the HASIN.
Fig. 3.8

HASIN topology consisting of 28 cores

A new fault-tolerant MIN called the Augmented Shuffle-Exchange Network (ASEN) was proposed in [18]. The main objective of this work was to improve reliability and fault tolerance compared to banyan-type networks such as the shuffle-exchange network (SEN). An ASEN of size 8 × 8 is shown in Fig. 3.9. As can be seen, the ASEN is made up of small-size crossbar networks as switching elements. In fact, the ASEN is a SEN with one switching stage removed and with some auxiliary links, multiplexers, and demultiplexers added. Suppose that the size of the ASEN is \( N \times N \). In this general case, the ASEN has \( \left( {(\log_{2} N) - 1} \right) \) stages, each including \( \left( {\frac{N}{2}} \right) \) crossbar switches. The sizes of the crossbar switches used in stages 1 through \( \left( {(\log_{2} N) - 2} \right) \) and in the last stage are 3 × 3 and 2 × 2, respectively. In addition, there is one 2 × 1 multiplexer stage as the input stage before switching stage 1 and one 1 × 2 demultiplexer stage as the output stage after switching stage \( \left( {(\log_{2} N) - 1} \right) \). The number of multiplexers and demultiplexers is equal to \( N \) for an ASEN of size \( N \times N \). Let us define network complexity as the number of 2 × 2 switching elements in the network. As a result, the network complexity of an \( N \times N \) ASEN is equal to \( \left[ {\left( {\frac{3N}{2}} \right)\left( {1 + \frac{3}{4}\left( {(\log_{2} N) - 2} \right)} \right)} \right] \).
Fig. 3.9

ASEN network of size 8 × 8

A new class of fault-tolerant MINs named Augmented Baseline Networks (ABNs) was proposed in [22]. An ABN of size 16 × 16 is illustrated in Fig. 3.10. In general, an ABN of size \( N \times N \) can be divided into two main groups, each consisting of \( \frac{N}{2} \) sources and \( \frac{N}{2} \) destinations. Each source node in the network can be connected to both groups by multiplexers. Each multiplexer is connected to one input link of a given switch in stage 1, and these multiplexers are of size 4 × 1. In addition, there is one 1 × 2 demultiplexer stage after stage \( \left( {(\log_{2} N) - 2} \right) \), such that each output link of a switch in stage \( \left( {(\log_{2} N) - 2} \right) \) is connected to one demultiplexer. It should be noted that the ABN uses small-size crossbar networks as switching elements in its switching stages. The size of the switches used in stages 1 through \( \left( {(\log_{2} N) - 3} \right) \) is 3 × 3, and the size of the switches used in the last switching stage is 2 × 2. The network complexity of an \( N \times N \) ABN is given by \( \left[ {\left( {\frac{9N}{8}} \right)\left( {\frac{16}{9}\left( {(\log_{2} N) - 3} \right)} \right)} \right] \).
Fig. 3.10

ABN network of size 16 × 16

Another important class of interconnection networks considered by many researchers in this area is replicated MINs [7, 27]. Generally, a replicated MIN is derived from a banyan-type MIN by replicating it in \( L \) layers. Therefore, replicated MINs can provide several different paths between any source–destination pair, which promises better fault tolerance and reliability than banyan-type MINs. Figure 3.11 shows the architecture of an 8 × 8 replicated MIN consisting of two layers \( (L = 2) \) in a three-dimensional view. Consider an \( L \)-layer replicated MIN of size \( N \times N \). In this general case, the number of stages in the network is equal to \( (\log_{2} N) \). All switches used in these stages are small-size crossbars, mainly 2 × 2 crossbars. It is worth noting that, according to the number of network layers, the number of crossbar switches in each stage is equal to \( \left( {\frac{L \times N}{2}} \right) \). In this topology, there is one \( 1 \times L \) demultiplexer for each set of \( L \) input links of peer switches located in stage 1 and one \( L \times 1 \) multiplexer for each set of \( L \) output links of peer switches located in stage \( (\log_{2} N) \). Therefore, the number of demultiplexers and multiplexers is equal to \( N \). The network complexity of an \( N \times N \) \( L \)-layer replicated MIN is calculated as \( \left[ {\left( {\frac{L \times N}{2}} \right)\left( {1 + \left( {\log_{2} N} \right)} \right)} \right] \).
Fig. 3.11

Three-dimensional perspective of an 8 × 8 two-layer replicated MIN
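
The network-complexity formulas quoted for the ASEN, ABN, and replicated MIN can be compared directly in code. These helper names are hypothetical; each function just evaluates the closed-form expression given in the corresponding paragraph above.

```python
import math

def asen_complexity(n):
    # N x N ASEN: (3N/2) * (1 + (3/4)((log2 N) - 2))
    return (3 * n / 2) * (1 + 0.75 * (math.log2(n) - 2))

def abn_complexity(n):
    # N x N ABN: (9N/8) * ((16/9)((log2 N) - 3))
    return (9 * n / 8) * ((16 / 9) * (math.log2(n) - 3))

def replicated_min_complexity(n, layers):
    # N x N L-layer replicated MIN: (L*N/2) * (1 + log2 N)
    return (layers * n / 2) * (1 + math.log2(n))

# For the example sizes used in the text:
assert math.isclose(asen_complexity(8), 21.0)               # 8x8 ASEN
assert math.isclose(abn_complexity(16), 32.0)               # 16x16 ABN
assert math.isclose(replicated_min_complexity(8, 2), 32.0)  # 2-layer 8x8
```

Evaluating such formulas side by side is how the complexity comparisons of later chapters are carried out.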

As can be seen, all topologies discussed in this subsection use small-size crossbar networks as their constituent elements. Since each of these small-size crossbar switches can be implemented on a single chip, the scalability problem is easily solved. However, these topologies, which are mostly multistage interconnection networks, are incapable of providing an efficient solution to the blocking problem. Below, we examine the reasons for this:

In a general view, MINs can be divided into two main groups: (1) single-path (banyan-type) MINs and (2) multipath MINs. Single-path MINs cannot provide more than one path between each source–destination pair. This structure can lead to the blocking problem, since a new connection request may be impossible to satisfy because resources such as links and switches are occupied by existing connections. That is why these networks are known as blocking MINs. In contrast, multipath MINs can provide more than one path between each source–destination pair. When a path is unavailable, the network can switch to an alternative path to handle a connection request. As a result, the existence of multiple paths in the network can improve the fault-tolerance capability and reduce the occurrence of blocking. According to this argument, one approach to alleviating the blocking problem is to improve the fault-tolerance feature of the network. For this reason, fault tolerance in MINs is one of the favorite topics among researchers, and because of the importance of the fault-tolerance parameter in multistage interconnection networks, the next chapter is devoted to some new approaches to improving this feature in these networks.

Fault-tolerant MINs are of interest because of their cost-effectiveness compared to the crossbar network. However, most of these networks cannot provide a fully non-blocking mode, which requires managing all conflicts; thus, most MINs of this kind are also considered blocking MINs. The only two classes of fault-tolerant MINs that may be able to solve the blocking problem are: (1) rearrangeable non-blocking (or simply rearrangeable) MINs, such as the Benes network [31] and (2n − 1)-stage shuffle-exchange networks \( (n = \log_{2} N) \) [28, 29, 30], and (2) the non-blocking Clos network [32].

The main idea of rearrangeable networks for dealing with the blocking problem is the rearrangement of connections. Such a network can respond to all connection requests in every permutation by rearranging current connections if needed. At first glance, these networks seem to be an efficient solution in theory. Nevertheless, a closer look reveals some problems in practice: (a) In uninterruptible applications, rearrangement of existing connections is not acceptable [33]. (b) Rearrangeable networks need a central controller to rearrange connections. However, rearranging connections is very difficult, since processors access the network asynchronously. In fact, when accesses are asynchronous, rearrangeable networks act like blocking networks [2].

If a network can successfully handle all possible permutations without rearrangement of current connections, then the network is non-blocking. The Clos network is the most well-known non-blocking MIN. In essence, the Clos network is a three-stage MIN in which each stage is made of crossbar switches. Clos networks with larger odd numbers of switching stages can be built recursively by replacing each middle-stage switch with a three-stage Clos network. A symmetric Clos network can be defined by the triple \( \left( {m,n,r} \right) \), where \( m \) is the number of switches in the middle stage, \( n \) is the number of incoming links of each switch in the first stage and the number of outgoing links of each switch in the last stage, and \( r \) is the number of switches in each of the first and last stages. In addition, the sizes of the crossbar switches in the first stage, the middle stage, and the last stage are \( n \times m \), \( r \times r \), and \( m \times n \), respectively. Although the Clos network can be non-blocking in theory, there are some important practical issues: (a) It has been proven that the Clos network is non-blocking only if the condition \( m \ge 2n - 1 \) holds [6, 16]. Therefore, there are structural constraints on a non-blocking Clos network. (b) An efficient control mechanism for the allocation of connections is essential, but this mechanism is usually complex in a Clos network [6, 19, 33, 34, 35, 36]. To route a packet in the Clos network, after the packet is sent to a switch in the first stage, any switch in the middle stage can be considered for forwarding it, as long as the link to that switch is free. This middle switch should then choose a free link to a switch in the last stage; when this link is busy, the path is unavailable. Finally, the switch in the last stage delivers the packet on the selected outgoing link. Thus, the problem of routing in the Clos network largely depends on an efficient mechanism for allocating each packet to a middle-stage switch.
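
The structural constraint in point (a) is simple enough to express as a one-line predicate; the sketch below (a hypothetical helper, not from the chapter) encodes the condition \( m \ge 2n - 1 \) for a symmetric three-stage Clos network \( (m, n, r) \).

```python
def clos_is_strictly_nonblocking(m, n):
    """Clos condition [32]: a symmetric three-stage Clos network (m, n, r)
    is strictly non-blocking iff m >= 2n - 1, i.e. there are enough
    middle-stage switches to route any new connection without
    rearranging existing ones (r does not enter the condition)."""
    return m >= 2 * n - 1

# With n = 4 inputs per first-stage switch, at least 2*4 - 1 = 7
# middle-stage switches are required:
assert clos_is_strictly_nonblocking(7, 4)
assert not clos_is_strictly_nonblocking(6, 4)
```

The condition shows the cost of non-blocking operation: nearly doubling the middle stage relative to the per-switch input count.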

Altogether, according to the discussions in this section, MINs can solve the scalability problem raised by the crossbar network. On the other hand, although these networks cannot fully cope with the blocking problem, they can support the important metric of fault tolerance, which helps reduce blocking. Therefore, fault-tolerant MINs are of particular importance in this area. In Chap. 4, some important methods to improve the fault-tolerance metric of MINs will be examined.

3.2.2 Construction of Scalable Crossbar Topologies by Small-Size Crossbars

In this approach, the idea is to build large-size crossbar networks using small-size crossbar networks as switching elements. This approach has several important advantages: First, the blocking problem is easily solved, because crossbar networks are strictly non-blocking. Second, the scalability issue is solved by the use of small-size crossbars. Therefore, this approach can be a more efficient solution than the one discussed in the previous subsection. Some ideas in this area can be found in [2, 6]. However, the number of topologies designed based on this approach is very small. A rare instance is the Multistage Crossbar Network (MCN) [33], a multistage implementation of the crossbar architecture that uses small-size crossbar networks as switching elements.

Figure 3.12 shows an MCN of size 4 × 4, which is made up of 2 × 2 crossbar switches. In the general case, consider an MCN of size \( N \times N \). Its structure is composed of \( (N^{2} ) \) crossbar switches of size 2 × 2. Such crossbar-based networks are useful for building large crossbar networks, promoting scalability through the exploitation of small-size crossbar switches as switching elements. Therefore, thanks to the small-size crossbars, the MCN solves the scalability problem. For some source–destination pairs in the MCN, the number of paths is more than one. The path length is defined as the number of switching elements between a source–destination pair. In the MCN, the path length is not the same for all source–destination pairs; more precisely, it ranges from 1 to \( \left( {2N - 1} \right) \).
Fig. 3.12

A MCN of size 4 × 4

The MCN can be a reasonable solution to the scalability problem. However, the disadvantage of this structure is its high hardware cost. The hardware cost of a network can be calculated as the total number of crosspoints in it [7, 10, 22, 27, 35]. By this definition, the hardware cost of the crossbar network is equal to \( N^{2} \). The number of 2 × 2 crossbar switches in the MCN is equal to \( N^{2} \); thus, its cost is \( 4N^{2} \). This cost is not acceptable, because it is four times the cost of the typical crossbar network.
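
The crosspoint-count comparison above can be checked in a couple of lines (the function names are illustrative, not from the chapter): each of the \( N^{2} \) 2 × 2 switches contributes 4 crosspoints, so the MCN always costs exactly four times the single crossbar.

```python
def crossbar_cost(n):
    # Hardware cost = total crosspoints; a single N x N crossbar has N^2.
    return n * n

def mcn_cost(n):
    # An N x N MCN uses N^2 crossbar switches of size 2x2,
    # each contributing 2*2 = 4 crosspoints: cost = 4 * N^2.
    return 4 * n * n

# The MCN pays a constant 4x crosspoint penalty for its scalability:
assert mcn_cost(4) == 64
assert mcn_cost(64) == 4 * crossbar_cost(64)
```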

In view of the discussions in this subsection, the design of scalable crossbar networks using small-size crossbar switches is a good idea for the construction of scalable non-blocking interconnection networks. However, this approach has been used in the design of only a few interconnection topologies. The MCN is a network built on this approach, and it can solve both the blocking and scalability problems; however, its hardware cost is four times that of the typical crossbar network. Therefore, other designs should be provided to achieve better cost than the MCN. For this purpose, a new interconnection topology named the Scalable Crossbar Network (SCN) will be discussed in Chap. 5. The SCN is designed based on the approach discussed in this subsection and brings the following advantages: (1) It is a non-blocking network. (2) It solves the scalability problem by using small-size crossbar networks as switching elements in its structure. (3) It has the same hardware cost as the typical crossbar network.

As will be discussed in Chap. 5, the SCN can meet all the above requirements. In addition, the routing mechanism of the SCN is fast, cost-effective, and self-routing. Furthermore, the performance analysis conducted in Chap. 5 demonstrates that the SCN obtains very good results in terms of various important metrics, including terminal reliability, mean time to failure, and system failure rate, compared to different interconnection topologies, namely the SEN, SEN+, Benes network, replicated MIN, multilayer MINs, and MCN.

References

  1. Jadhav SS (2009) Advanced computer architecture and computing. Technical Publications
  2. Duato J, Yalamanchili S, Ni LM (2003) Interconnection networks: an engineering approach. Morgan Kaufmann, USA
  3. Dubois M, Annavaram M, Stenström P (2012) Parallel computer organization and design. Cambridge University Press, Cambridge
  4. Culler DE, Singh JP, Gupta A (1999) Parallel computer architecture: a hardware/software approach. Morgan Kaufmann
  5. Agrawal DP (1983) Graph theoretical analysis and design of multistage interconnection networks. IEEE Trans Comput 100(7):637–648
  6. Dally WJ, Towles BP (2004) Principles and practices of interconnection networks. Morgan Kaufmann, San Francisco, CA, USA
  7. Bistouni F, Jahanshahi M (2014) Improved extra group network: a new fault-tolerant multistage interconnection network. J Supercomput 69(1):161–199
  8. Villar JA et al (2013) An integrated solution for QoS provision and congestion management in high-performance interconnection networks using deterministic source-based routing. J Supercomput 66(1):284–304
  9. Hur JY et al (2007) Systematic customization of on-chip crossbar interconnects. In: Reconfigurable computing: architectures, tools and applications. Springer, Berlin, Heidelberg, pp 61–72
  10. Bistouni F, Jahanshahi M (2015) Pars network: a multistage interconnection network with fault-tolerance capability. J Parallel Distrib Comput 75:168–183
  11. Bistouni F, Jahanshahi M (2014) Analyzing the reliability of shuffle-exchange networks using reliability block diagrams. Reliab Eng Syst Saf 132:97–106
  12. Parker DS, Raghavendra CS (1984) The gamma network. IEEE Trans Comput 100(4):367–373
  13. Rajkumar S, Goyal NK (2014) Design of 4-disjoint gamma interconnection network layouts and reliability analysis of gamma interconnection networks. J Supercomput 69(1):468–491
  14. Chen CW, Chung CP (2005) Designing a disjoint paths interconnection network with fault tolerance and collision solving. J Supercomput 34(1):63–80
  15. Nitin SG, Srivastava N (2011) Designing a fault-tolerant fully-chained combining switches multi-stage interconnection network with disjoint paths. J Supercomput 55(3):400–431
  16. Wei S, Lee G (1988) Extra group network: a cost-effective fault-tolerant multistage interconnection network. ACM SIGARCH Comput Archit News 16(2). IEEE Computer Society Press
  17. Matos D et al (2013) Hierarchical and multiple switching NoC with floorplan based adaptability. In: Reconfigurable computing: architectures, tools and applications. Springer, Berlin, Heidelberg, pp 179–184
  18. Kumar VP, Reddy SM (1987) Augmented shuffle-exchange multistage interconnection networks. Computer 20(6):30–40
  19. Vasiliadis DC, Rizos GE, Vassilakis C (2013) Modelling and performance study of finite-buffered blocking multistage interconnection networks supporting natively 2-class priority routing traffic. J Netw Comput Appl 36(2):723–737
  20. Gunawan I (2008) Reliability analysis of shuffle-exchange network systems. Reliab Eng Syst Saf 93(2):271–276
  21. Blake JT, Trivedi KS (1989) Reliability analysis of interconnection networks using hierarchical composition. IEEE Trans Reliab 38(1):111–120
  22. Bansal PK, Joshi RC, Singh K (1994) On a fault-tolerant multistage interconnection network. Comput Electr Eng 20(4):335–345
  23. Blake JT, Trivedi KS (1989) Multistage interconnection network reliability. IEEE Trans Comput 38(11):1600–1604
  24. Nitin, Subramanian A (2008) Efficient algorithms and methods to solve dynamic MINs stability problem using stable matching with complete ties. J Discrete Algorithms 6(3):353–380
  25. Fan CC, Bruck J (2000) Tolerating multiple faults in multistage interconnection networks with minimal extra stages. IEEE Trans Comput 49(9):998–1004
  26. Adams GB, Siegel HJ (1982) The extra stage cube: a fault-tolerant interconnection network for supersystems. IEEE Trans Comput 100(5):443–454
  27. Tutsch D, Hommel G (2008) MLMIN: a multicore processor and parallel computer network topology for multicast. Comput Oper Res 35(12):3807–3821
  28. Çam H (2001) Analysis of shuffle-exchange networks under permutation traffic. In: Switching networks: recent advances. Springer, USA, pp 215–256
  29. Çam H (2003) Rearrangeability of (2n − 1)-stage shuffle-exchange networks. SIAM J Comput 32(3):557–585
  30. Dai H, Shen X (2008) Rearrangeability of 7-stage 16 × 16 shuffle exchange networks. Front Electr Electron Eng China 3(4):440–458
  31. Beneš VE (1965) Mathematical theory of connecting networks and telephone traffic, vol 17. Academic Press
  32. Clos C (1953) A study of non-blocking switching networks. Bell Syst Tech J 32(2):406–424
  33. Kolias C, Tomkos I (2005) Switch fabrics. IEEE Circ Devices Mag 21(5):12–17
  34. Fey D et al (2012) Optical multiplexing techniques for photonic Clos networks in high performance computing architectures. J Supercomput 62(2):620–632
  35. Cuda D, Giaccone P, Montalto M (2012) Design and control of next generation distribution frames. Comput Netw 56(13):3110–3122
  36. Sibai FN (2011) Design and evaluation of low latency interconnection networks for real-time many-core embedded systems. Comput Electr Eng 37(6):958–972

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Engineering, Central Tehran Branch, Islamic Azad University, Tehran, Iran
