
Journal of Computer Science and Technology, Volume 22, Issue 5, pp. 641–652

Server-Based Data Push Architecture for Multi-Processor Environments

  • Xian-He Sun
  • Surendra Byna
  • Yong Chen
Regular Paper

Abstract

Data access delay is a major bottleneck in utilizing current high-end computing (HEC) machines. Prefetching, where data are fetched before the CPU demands them, has been considered an effective way to mask data access delay. However, current client-initiated prefetching strategies, in which a computing processor issues prefetching instructions, have many limitations: they do not work well for applications with complex, non-contiguous data access patterns. As technology advances continue to widen the gap between computing and data access performance, trading computing power for reduced data access delay has become a natural choice. In this paper, we present a server-based data-push approach and discuss its associated implementation mechanisms. In the server-push architecture, a dedicated server called the Data Push Server (DPS) initiates data access and proactively pushes data closer to the client in a timely manner. Issues such as what data to fetch, when to fetch, and how to push are studied. To evaluate DPS-based prefetching, the SimpleScalar simulator is modified with a dedicated prefetching engine that pushes data for another processor. Simulation results show that the L1 cache miss rate can be reduced by up to 97% (71% on average) relative to a superscalar processor for SPEC CPU2000 benchmarks that have high cache miss rates.
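The push model described above can be illustrated with a small, hypothetical sketch (not the paper's implementation, and all class and function names here are illustrative): a push server observes a client's block-access stream, detects a constant stride, and inserts the predicted next blocks into the client's cache before they are demanded, so a strided client sees almost no misses after warm-up.

```python
from collections import OrderedDict


class Cache:
    """Tiny LRU cache of block numbers; counts demand misses."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()
        self.misses = 0

    def access(self, block):
        """Demand access by the client; a miss fills the block."""
        if block in self.blocks:
            self.blocks.move_to_end(block)
        else:
            self.misses += 1
            self.insert(block)

    def insert(self, block):
        """Fill a block without counting a miss (used for pushed data)."""
        if block in self.blocks:
            self.blocks.move_to_end(block)
            return
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict LRU
        self.blocks[block] = True


class DataPushServer:
    """Hypothetical DPS: watches the reference stream, detects a constant
    stride, and proactively pushes the next `depth` blocks into the cache."""

    def __init__(self, cache, depth=2):
        self.cache = cache
        self.depth = depth
        self.last = None
        self.stride = None

    def observe(self, block):
        if self.last is not None:
            stride = block - self.last
            if stride == self.stride:  # pattern confirmed: push ahead
                for k in range(1, self.depth + 1):
                    self.cache.insert(block + k * stride)
            self.stride = stride
        self.last = block


def run(trace, capacity, push):
    """Replay a block trace; return the demand-miss count."""
    cache = Cache(capacity)
    dps = DataPushServer(cache) if push else None
    for block in trace:
        cache.access(block)
        if dps:
            dps.observe(block)
    return cache.misses
```

On a 100-reference stride-4 trace, the baseline cache misses on every cold block, while the push server misses only until the stride is confirmed; this mirrors, in miniature, how moving prediction out of the client and into a dedicated server hides access latency for regular patterns.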

Keywords

performance measurement, evaluation, modeling, simulation of multiple-processor systems, cache memory


Supplementary material

Chinese Abstract (PDF, 97 kB)


Copyright information

© Science Press, Beijing, China and Springer Science + Business Media, LLC, USA 2007

Authors and Affiliations

  1. Department of Computer Science, Illinois Institute of Technology, Chicago, U.S.A.
  2. Computing Division, Fermi National Accelerator Laboratory, Batavia, U.S.A.
