
Cluster Computing, Volume 6, Issue 2, pp 153–159

An SCI-Based PC Cluster Utilizing Coherent Network Cache

  • Sang-Hwa Chung
  • Soo-Cheol Oh

Abstract

Minimizing network access time is critical to building a high-performance PC cluster system. In an SCI-based PC cluster, network access time can be reduced by maintaining a network cache in each cluster node. This paper presents a Network-Cache-Coherent-NUMA (NCC-NUMA) card that utilizes a network cache for SCI-based PC clustering. The NCC-NUMA card plugs directly into the PCI slot of each node and contains shared memory, a network cache, and interconnection modules. The network cache is maintained for the shared memory located on the PCI buses of the cluster nodes. The coherency mechanism between the network cache and the shared memory is based on the IEEE SCI standard. Both a simulator and a prototype NCC-NUMA card were developed to evaluate the performance of the system. In the experiments, the cluster system with the NCC-NUMA card showed considerable performance improvements over an SCI-based cluster without a network cache.

Keywords: cluster system, network cache, CC-NUMA, SCI
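The coherency mechanism named in the abstract follows the IEEE SCI standard (IEEE Std 1596-1992), whose characteristic feature is a distributed doubly linked sharing list: the home memory holds a pointer to the head sharer, and each caching node holds forward and backward links to its neighbors in the list. The C sketch below illustrates that idea only; the structure names, node IDs, and the simplified attach/invalidate operations are assumptions for illustration, not the NCC-NUMA card's actual implementation.

    /* Minimal sketch of an SCI-style distributed sharing list.
     * All names and sizes here are illustrative assumptions,
     * not taken from the paper's hardware. */
    #include <stdio.h>

    #define MAX_NODES 8
    #define NIL      -1

    /* Per-line directory state kept with the home node's shared memory. */
    struct mem_line {
        int head;              /* node ID at the head of the sharing list */
    };

    /* Per-node network-cache entry: links of the doubly linked list. */
    struct cache_line {
        int forw;              /* next sharer toward the tail */
        int back;              /* previous sharer (NIL means the home memory) */
        int valid;
    };

    static struct mem_line   memory = { NIL };
    static struct cache_line cache[MAX_NODES];

    /* A node fetches the line into its network cache and prepends
     * itself to the sharing list, becoming the new head. */
    static void attach_sharer(int node)
    {
        cache[node].forw  = memory.head;   /* old head becomes successor */
        cache[node].back  = NIL;           /* head links back to home memory */
        cache[node].valid = 1;
        if (memory.head != NIL)
            cache[memory.head].back = node;
        memory.head = node;
    }

    /* A write by the head node invalidates all other sharers by
     * walking the forward links, leaving the writer as sole owner. */
    static void invalidate_sharers(int writer)
    {
        int n = cache[writer].forw;
        while (n != NIL) {
            int next = cache[n].forw;
            cache[n].valid = 0;
            cache[n].forw = cache[n].back = NIL;
            n = next;
        }
        cache[writer].forw = NIL;
        memory.head = writer;
    }

    int main(void)
    {
        attach_sharer(2);
        attach_sharer(5);      /* node 5 is now head, node 2 its successor */
        invalidate_sharers(5); /* a write by node 5 purges node 2's copy */
        printf("node 2 valid: %d, head: %d\n", cache[2].valid, memory.head);
        return 0;
    }

Prepending new sharers at the head keeps the attach operation O(1), which is why SCI adopts a linked-list directory rather than a bit-vector: directory storage per line stays constant as the cluster grows.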



Copyright information

© Kluwer Academic Publishers 2003

Authors and Affiliations

  1. School of Electrical and Computer Engineering, Pusan National University, Pusan, Korea
