Job characteristics of a production parallel scientific workload on the NASA Ames iPSC/860

  • Dror G. Feitelson
  • Bill Nitzberg
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 949)


Statistics of a parallel workload on the 128-node iPSC/860 located at NASA Ames are presented. It is shown that while sequential jobs outnumbered parallel jobs, most of the resources (measured in node-seconds) were consumed by parallel jobs. Moreover, most of the sequential jobs were for system administration. The average runtime of jobs grew with the number of nodes used, so the total resource requirements of large parallel jobs grew more than linearly with the number of nodes. The job submission rate during peak daytime activity was somewhat lower than one job every two minutes, and the average job size was small. At night, the submission rate was low but job sizes and system utilization were high, mainly due to NQS (the Network Queueing System batch facility). Submission rate and utilization over the weekend were lower than on weekdays. The overall utilization was 50%, after accounting for downtime. About 2/3 of the applications were executed repeatedly, some a significant number of times.
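The abstract's central metric is node-seconds, the product of the nodes a job holds and how long it holds them, and its utilization figure divides delivered node-seconds by the node-seconds available after downtime. As a minimal sketch of that arithmetic, using invented job records rather than the paper's actual trace (the job list, window, and downtime values below are all hypothetical):

```python
MACHINE_NODES = 128  # the NASA Ames iPSC/860 had 128 compute nodes

# Each record: (nodes used, runtime in seconds). Values are invented,
# but echo the paper's pattern: many small sequential jobs, a few
# parallel jobs that dominate resource consumption.
jobs = [
    (1, 30),      # sequential job (e.g. system administration)
    (1, 45),
    (32, 3600),   # parallel jobs
    (128, 7200),
]

def node_seconds(nodes, runtime):
    """Resource consumption of one job: nodes held times seconds held."""
    return nodes * runtime

total = sum(node_seconds(n, t) for n, t in jobs)
parallel = sum(node_seconds(n, t) for n, t in jobs if n > 1)

# Share of resources consumed by parallel jobs.
parallel_share = parallel / total

# Utilization over an observation window, after subtracting downtime:
# delivered node-seconds over available node-seconds.
window, downtime = 24 * 3600, 2 * 3600  # one day, two hours down (invented)
utilization = total / (MACHINE_NODES * (window - downtime))
```

With these invented numbers, as in the measured workload, the parallel jobs account for nearly all node-seconds even though they are a minority of the job count.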


Keywords (machine-generated, not supplied by the authors): interarrival time, submission rate, gang scheduling, Unix commands, multiprogramming level




Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Dror G. Feitelson, IBM T. J. Watson Research Center, Yorktown Heights
  • Bill Nitzberg, NASA Ames Research Center, Moffett Field
