Effects of Memory Performance on Parallel Job Scheduling

  • G. Edward Suh
  • Larry Rudolph
  • Srinivas Devadas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2221)

Abstract

We develop a new metric for job scheduling that includes the effects of memory contention amongst simultaneously-executing jobs that share a given level of memory. Rather than assuming each job or process has a fixed, static memory requirement, we consider a general scenario wherein a process' performance monotonically increases as a function of allocated memory, as defined by a miss-rate versus memory size curve. Given a schedule of jobs in a shared-memory multiprocessor (SMP), and an isolated miss-rate versus memory size curve for each job, we use an analytical memory model to estimate the overall memory miss-rate for the schedule. This, in turn, can be used to estimate overall performance. We develop a heuristic algorithm to find a good schedule of jobs on an SMP that minimizes memory contention, thereby improving memory and overall performance.
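
The pipeline the abstract describes, with per-job miss-rate curves feeding a contention model that in turn drives a scheduling heuristic, can be illustrated with a small sketch. The sketch below is not the paper's analytical model or its algorithm; the names miss_rate, refs, partition_memory, and slots_per_slice, and the greedy grouping rule are all hypothetical stand-ins, assuming only the monotone miss-rate versus memory size curves described above.

```python
from itertools import combinations

# A minimal sketch of the scheduling idea, not the paper's actual algorithm.
# Hypothetical inputs: miss_rate[j](pages) is job j's isolated miss-rate curve,
# assumed monotonically non-increasing in allocated memory, and refs[j] is its
# memory-reference count per time slice.

def partition_memory(group, miss_rate, total_pages, step=1):
    """Greedily hand out `total_pages` of shared memory in `step`-page chunks,
    each chunk to the job whose miss rate drops the most.  This stands in for
    the paper's analytical model of contention among co-scheduled jobs."""
    alloc = {j: 0 for j in group}
    pages_left = total_pages
    while pages_left >= step:
        best = max(group,
                   key=lambda j: miss_rate[j](alloc[j]) - miss_rate[j](alloc[j] + step))
        alloc[best] += step
        pages_left -= step
    return alloc

def slice_miss_rate(group, miss_rate, refs, total_pages):
    """Reference-weighted overall miss rate for one time slice of jobs."""
    alloc = partition_memory(group, miss_rate, total_pages)
    total_refs = sum(refs[j] for j in group)
    return sum(refs[j] * miss_rate[j](alloc[j]) for j in group) / total_refs

def greedy_schedule(jobs, miss_rate, refs, total_pages, slots_per_slice):
    """Heuristic scheduler: repeatedly pick the group of still-unscheduled jobs
    whose estimated shared-memory miss rate is lowest and run them together."""
    remaining = list(jobs)
    slices = []
    while remaining:
        k = min(slots_per_slice, len(remaining))
        group = min(combinations(remaining, k),
                    key=lambda g: slice_miss_rate(g, miss_rate, refs, total_pages))
        slices.append(list(group))
        for j in group:
            remaining.remove(j)
    return slices
```

A greedy grouping like this can leave poorly matched jobs for the final time slices; the paper's heuristic and its analytical cache model address the trade-off more carefully, so this sketch should be read only as an illustration of how miss-rate curves can drive schedule selection.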

Keywords

Memory Performance, Time Slice, Good Schedule, Page Fault, Footprint Size

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • G. Edward Suh (1)
  • Larry Rudolph (1)
  • Srinivas Devadas (1)

  1. MIT Laboratory for Computer Science, Cambridge
