Using Run-Time Predictions to Estimate Queue Wait Times and Improve Scheduler Performance

  • Warren Smith
  • Valerie Taylor
  • Ian Foster
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1659)

Abstract

On many computers, a request to run a job is not serviced immediately but instead is placed in a queue and serviced only when resources are released by preceding jobs. In this paper, we build on run-time prediction techniques developed in our previous research to explore two problems. The first is to predict how long applications will wait in a queue before they receive resources. We develop run-time estimates that result in more accurate wait-time predictions than those obtained with other run-time prediction techniques. The second problem is improving scheduling performance. We use run-time predictions to improve the performance of the least-work-first and backfill scheduling algorithms. We find that using our run-time predictor results in lower mean wait times for the workloads with higher offered loads and for the backfill scheduling algorithm.
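To make the wait-time estimation idea concrete, the sketch below (in Python, not taken from the paper) simulates a simple first-come-first-served queue forward in time, using predicted run times for the running and queued jobs to estimate when a newly submitted job would start. The job data, node counts, and the `predicted` field are illustrative assumptions; the paper's actual predictors and scheduler models (least-work-first, backfill) are more elaborate.

"""
Illustrative sketch (not the authors' code): estimate the queue wait time of a
newly submitted job by replaying a simple first-come-first-served schedule,
using *predicted* run times for the running and queued jobs.
All job data below are hypothetical.
"""
import heapq
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int        # nodes requested
    predicted: float  # predicted run time (seconds), e.g. from historical data

def estimate_wait(running, queued, new_job, total_nodes):
    """Return the predicted wait time (seconds) for `new_job`.

    `running` is a list of (remaining_predicted_seconds, nodes) pairs;
    `queued` is the list of Jobs ahead of `new_job`, in FCFS order.
    """
    # Event heap of (finish_time, nodes_released) for currently running jobs.
    events = [(remaining, nodes) for remaining, nodes in running]
    heapq.heapify(events)
    free = total_nodes - sum(nodes for _, nodes in running)
    now = 0.0

    for job in queued + [new_job]:
        # Advance time, releasing nodes, until this job fits.
        while free < job.nodes:
            finish, nodes = heapq.heappop(events)
            now = max(now, finish)
            free += nodes
        if job is new_job:
            return now            # predicted wait time for the new job
        # Start the queued job; it holds its nodes until its predicted finish.
        free -= job.nodes
        heapq.heappush(events, (now + job.predicted, job.nodes))
    return now

# Hypothetical example: 64-node machine, two running jobs, one queued job.
running = [(600.0, 32), (1800.0, 16)]           # (remaining seconds, nodes)
queued = [Job("A", nodes=32, predicted=900.0)]
print(estimate_wait(running, queued, Job("new", nodes=48, predicted=300.0), 64))

In this made-up example the 48-node job is predicted to wait 1500 seconds, since enough nodes only become free when the queued 32-node job is predicted to finish; the accuracy of such wait-time estimates therefore hinges directly on the accuracy of the underlying run-time predictions, which is the point the paper develops.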

Keywords

Scheduling Algorithm · Wait Time · Argonne National Laboratory · Scheduling Performance · Maximum History


Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Warren Smith (1)
  • Valerie Taylor (2)
  • Ian Foster (1)
  1. Mathematics and Computer Science Division, Argonne National Laboratory, Argonne
  2. Electrical and Computer Engineering Department, Northwestern University, Evanston
