Hierarchical Scheduling in Parallel and Cluster Systems

  • Sivarama Dandamudi

Part of the Series in Computer Science (SCS) book series

Table of contents

  1. Front Matter
    Pages i-xxv
  2. Background

    1. Front Matter
      Pages 1-1
    2. Sivarama Dandamudi
      Pages 3-11
    3. Sivarama Dandamudi
      Pages 13-48
    4. Sivarama Dandamudi
      Pages 49-84
  3. Hierarchical Task Queue Organization

    1. Front Matter
      Pages 85-85
    2. Sivarama Dandamudi
      Pages 87-119
    3. Sivarama Dandamudi
      Pages 121-139
    4. Sivarama Dandamudi
      Pages 141-164
  4. Hierarchical Scheduling Policies

    1. Front Matter
      Pages 165-165
    2. Sivarama Dandamudi
      Pages 167-191
    3. Sivarama Dandamudi
      Pages 193-211
    4. Sivarama Dandamudi
      Pages 213-229
  5. Epilog

    1. Front Matter
      Pages 231-231
    2. Sivarama Dandamudi
      Pages 233-237
  6. Back Matter
    Pages 239-251

About this book

Introduction

Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high-performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory, and are becoming commonplace in high-performance graphics workstations. They are called uniform memory access (UMA) multiprocessors because they provide uniform memory access to all processors. They also provide a single address space, which programmers prefer. This architecture, however, cannot be extended even to medium systems with hundreds of processors, due to bus bandwidth limitations.

To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems use, for example, a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to the non-uniform memory access (NUMA) architecture.

Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.

Keywords

Scala, architecture, organization, processor scheduling

Authors and affiliations

  • Sivarama Dandamudi
    • 1
  1. Carleton University, Ottawa, Canada

Bibliographic information

  • DOI https://doi.org/10.1007/978-1-4615-0133-6
  • Copyright Information Kluwer Academic/Plenum Publishers, New York 2003
  • Publisher Name Springer, Boston, MA
  • eBook Packages Springer Book Archive
  • Print ISBN 978-1-4613-4938-9
  • Online ISBN 978-1-4615-0133-6
  • Series Print ISSN 1567-7974