Multi-toroidal Interconnects: Using Additional Communication Links to Improve Utilization of Parallel Computers
The three-dimensional torus is a common network interconnect topology for multicomputers due to its simplicity and high scalability. A parallel job submitted to a three-dimensional toroidal machine typically requires an isolated, contiguous, rectangular partition connected as a mesh or a torus. Such partitioning leads to fragmentation and thus reduces machine utilization. In particular, toroidal partitions often require the allocation of additional communication links to close the torus. Because partition isolation makes these links dedicated resources, they become unavailable to other partitions that could otherwise use them. Overall, on toroidal machines, the likelihood of successfully allocating a new partition decreases as the number of existing toroidal partitions grows.
This paper presents a novel "multi-toroidal" interconnect topology that can accommodate multiple adjacent mesh and toroidal partitions simultaneously. We prove that this topology allows every free partition of the machine to be connected as a torus without affecting existing partitions. We also show that for toroidal jobs this interconnect topology increases machine utilization by a factor of 2 to 4 (depending on the workload) compared with three-dimensional toroidal machines, and that this effect holds under different scheduling policies. The BlueGene/L supercomputer being developed by IBM Research is an example of a multi-toroidal interconnect architecture.
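The link-contention effect described above can be illustrated with a deliberately simplified one-dimensional model (an assumption for illustration only, not the paper's actual topology or allocation algorithm): closing a sub-partition of a ring as a torus consumes both its internal links and a wraparound return path through the remaining nodes, so on a plain torus one toroidal partition exhausts the link layer, while an extra layer of links lets an adjacent toroidal partition close as well.

```python
def allocate_toroidal(layers, a, b, n):
    """Try to allocate nodes [a, b) of an n-node ring as a toroidal partition.

    `layers` is a list of sets; layers[i] holds the free link segments of
    link layer i (segment j joins node j and node (j+1) % n). Returns the
    index of the layer used, or None if no single layer can close the torus.
    """
    internal = set(range(a, b))        # mesh links inside the partition
    wrap = set(range(n)) - internal    # return path that closes the torus
    for idx, free in enumerate(layers):
        # Partition isolation: every needed segment must be dedicated,
        # so the whole cycle must be free on one layer.
        if internal <= free and wrap <= free:
            free -= internal | wrap
            return idx
    return None

N = 8
plain = [set(range(N))]                    # plain torus: one link layer
multi = [set(range(N)), set(range(N))]     # toy "multi-toroidal": extra layer

# Two adjacent toroidal partitions, [0..3] and [4..7]:
print(allocate_toroidal(plain, 0, 4, N), allocate_toroidal(plain, 4, 8, N))
# On the plain torus the second allocation fails: the first partition's
# return path already dedicated the links the second one needs.
print(allocate_toroidal(multi, 0, 4, N), allocate_toroidal(multi, 4, 8, N))
# With the extra link layer, both partitions close their torus.
```

In the real three-dimensional multi-toroidal design the extra links are placed per dimension and the allocation problem is correspondingly richer, but the toy model captures why dedicated wraparound links on a plain torus block adjacent toroidal partitions.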