Building Portable Thread Schedulers for Hierarchical Multiprocessors: The BubbleSched Framework

  • Samuel Thibault
  • Raymond Namyst
  • Pierre-André Wacrenier
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4641)

Abstract

Exploiting the full computational power of today's increasingly hierarchical multiprocessor machines requires a very careful distribution of threads and data over the underlying non-uniform architecture. Unfortunately, most operating systems only provide a poor scheduling API that does not let applications transmit valuable scheduling hints to the system. In a previous paper [1], we showed that using a bubble-based thread scheduler can significantly improve application performance in a portable way. However, since multithreaded applications have varying scheduling requirements, no single scheduler can meet all of these needs. In this paper, we present a framework that allows scheduling experts to implement and experiment with customized thread schedulers. It provides a powerful API for dynamically distributing bubbles across the machine in a high-level, portable, and efficient way. Several examples show how experts can then develop, debug, and tune their own portable bubble schedulers.
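
To give a flavour of the approach, the toy sketch below models how threads can be grouped into nested bubbles and then recursively spread over a hierarchical machine, keeping the members of a bubble together on the same branch of the topology. It is only an illustrative mock-up: the types and functions (entity_t, level_t, spread, ...) are invented for this example and are not the actual BubbleSched/Marcel API, which is documented in [6].

    /* Toy model of bubble scheduling; all names are illustrative placeholders,
     * not the real BubbleSched/Marcel interface. */
    #include <stdio.h>

    #define MAX_CHILDREN 8

    typedef struct entity {
        const char *name;                       /* thread name; NULL for a bubble         */
        struct entity *children[MAX_CHILDREN];  /* threads or sub-bubbles it contains     */
        int nb_children;                        /* 0 means this entity is a plain thread  */
    } entity_t;

    typedef struct level {
        const char *name;                       /* "machine", "node0", "core0", ...       */
        struct level *children[MAX_CHILDREN];
        int nb_children;                        /* 0 means this is a leaf (a single core) */
    } level_t;

    /* Recursive "spread" strategy: distribute the content of a bubble over the
     * children of the current topology level; once a leaf is reached, everything
     * left in the bubble stays on that core. */
    static void spread(entity_t *e, level_t *l)
    {
        if (e->nb_children == 0) {              /* a plain thread: place it on this level */
            printf("place %s on %s\n", e->name, l->name);
            return;
        }
        for (int i = 0; i < e->nb_children; i++) {
            level_t *target = l->nb_children ? l->children[i % l->nb_children] : l;
            spread(e->children[i], target);     /* round-robin entities over sub-levels   */
        }
    }

    int main(void)
    {
        /* A machine with two cores. */
        level_t core0 = { "core0", { 0 }, 0 }, core1 = { "core1", { 0 }, 0 };
        level_t machine = { "machine", { &core0, &core1 }, 2 };

        /* Two threads grouped in one bubble, plus an independent third thread. */
        entity_t t0 = { "thread0", { 0 }, 0 }, t1 = { "thread1", { 0 }, 0 };
        entity_t t2 = { "thread2", { 0 }, 0 };
        entity_t team = { NULL, { &t0, &t1 }, 2 };
        entity_t root = { NULL, { &team, &t2 }, 2 };

        spread(&root, &machine);                /* thread0/thread1 stay together on core0 */
        return 0;
    }

In this toy run, thread0 and thread1, which share a bubble, both land on core0, while the independent thread2 is placed on core1: the nesting of bubbles is what conveys the application's affinity information to the scheduling strategy.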

Keywords

Threads · Scheduling · Bubbles · NUMA · SMP · Multi-Core · SMT

References

  1. Thibault, S.: A flexible thread scheduler for hierarchical multiprocessor machines. In: Second International Workshop on Operating Systems, Programming Environments and Management Tools for High-Performance Computing on Clusters (COSET-2), Cambridge, USA. ICS / ACM / IRISA (2005)
  2. Marathe, J., Mueller, F.: Hardware profile-guided automatic page placement for ccNUMA systems. In: Sixth Symposium on Principles and Practice of Parallel Programming (March 2006)
  3. Shen, X., Gao, Y., Ding, C., Archambault, R.: Lightweight reference affinity analysis. In: 19th ACM International Conference on Supercomputing, Cambridge, MA, USA, pp. 131–140. ACM Press, New York (2005)
  4. Durand, D., Montaut, T., Kervella, L., Jalby, W.: Impact of memory contention on dynamic scheduling on NUMA multiprocessors. In: Int. Conf. on Parallel and Distributed Systems, vol. 7. IEEE Computer Society Press, Los Alamitos (1996)
  5. Hénon, P., Ramet, P., Roman, J.: PaStiX: A parallel sparse direct solver based on a static scheduling for mixed 1D/2D block distributions. In: Proceedings of the 15 IPDPS 2000 Workshops on Parallel and Distributed Processing (January 2000)
  6. Thibault, S.: BubbleSched API, http://runtime.futurs.inria.fr/marcel/doc/
  7. Danjean, V., Namyst, R.: An efficient multi-level trace toolkit for multi-threaded applications. In: EuroPar, Lisbon, Portugal (September 2005)
  8. Barreto, L.P., Muller, G.: Bossa: une approche langage à la conception d'ordonnanceurs de processus (Bossa: a language-based approach to the design of process schedulers). In: Rencontres francophones en Parallélisme, Architecture, Système et Composant (RenPar 14), Hammamet, Tunisia (April 2002)
  9. Steckermeier, M., Bellosa, F.: Using locality information in user-level scheduling. Technical Report TR-95-14, University of Erlangen-Nürnberg (December 1995)
  10. Fedorova, A.: Operating System Scheduling for Chip Multithreaded Processors. PhD thesis, Harvard University, Cambridge, Massachusetts (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Samuel Thibault (1)
  • Raymond Namyst (1)
  • Pierre-André Wacrenier (1)

  1. INRIA Futurs, LaBRI, 351 cours de la Libération, 33405 Talence cedex, France
