International Journal of Parallel Programming, Volume 23, Issue 2, pp 135–160

Coscheduling based on runtime identification of activity working sets

  • Dror G. Feitelson
  • Larry Rudolph
Article

Abstract

This paper introduces a method for runtime identification of sets of interacting activities (“working sets”) with the purpose of coscheduling them, i.e., scheduling them so that all the activities in the set execute simultaneously on distinct processors. The identification is done by monitoring access rates to shared communication objects: activities that access the same objects at a high rate interact frequently, and would therefore benefit from coscheduling. Simulation results show that coscheduling with our runtime identification scheme can give better performance than uncoordinated scheduling based on a single global activity queue. The finer-grained the interactions among the activities in a working set, the greater the performance advantage. Moreover, coscheduling based on automatic runtime identification achieves about the same performance as coscheduling based on manual identification of working sets by the programmer.
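As a rough illustration of the mechanism described in the abstract, the sketch below groups activities whose access rate to the same shared communication object exceeds a threshold, and then gives each resulting working set its own time slot on distinct processors. The function names, threshold value, and data structures are assumptions chosen for illustration, not the paper's actual implementation.

```python
from collections import defaultdict

# Illustrative sketch only: the threshold, names, and merging policy are
# assumptions, not the scheme evaluated in the paper.

RATE_THRESHOLD = 10  # accesses per scheduling quantum treated as "high rate" (assumed)

def identify_working_sets(access_counts):
    """Group activities that access the same shared object above the threshold.

    access_counts maps (activity, shared_object) -> number of accesses
    observed during the last scheduling quantum.
    """
    # Per shared object, collect the activities that use it heavily.
    heavy_users = defaultdict(set)
    for (activity, obj), count in access_counts.items():
        if count >= RATE_THRESHOLD:
            heavy_users[obj].add(activity)

    # Merge overlapping groups so each activity ends up in one working set.
    working_sets = []
    for group in heavy_users.values():
        merged = set(group)
        remaining = []
        for ws in working_sets:
            if ws & merged:
                merged |= ws
            else:
                remaining.append(ws)
        remaining.append(merged)
        working_sets = remaining
    return working_sets

def coschedule(working_sets, num_processors):
    """Yield one time slot per working set, mapping processors to activities
    so all members of the set run simultaneously."""
    for ws in working_sets:
        activities = sorted(ws)[:num_processors]  # truncate if the set is too large
        yield {proc: act for proc, act in enumerate(activities)}

if __name__ == "__main__":
    # Activities A and B share object "queue0" heavily and form one working set;
    # C's access rate is below the threshold and is scheduled separately.
    counts = {("A", "queue0"): 25, ("B", "queue0"): 30, ("C", "buf1"): 2}
    for slot in coschedule(identify_working_sets(counts), num_processors=4):
        print(slot)
```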

Key Words

Coscheduling; gang scheduling; on-line algorithms; activity working set

Copyright information

© Plenum Publishing Corporation 1995

Authors and Affiliations

  • Dror G. Feitelson (1)
  • Larry Rudolph (2)

  1. IBM T. J. Watson Research Center, Yorktown Heights
  2. Institute of Computer Science, The Hebrew University of Jerusalem, Jerusalem, Israel