Influences between Performance Based Scheduling and Service Level Agreements

  • Antonella Galizia
  • Alfonso Quarati
  • Michael Schiffers
  • Mark Yampolskiy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7156)

Abstract

The allocation of resources to jobs running on e-Science infrastructures is a key issue for scientific communities. To improve the efficiency of computational jobs, we propose an SLA-aware architecture whose core is a scheduler that relies on resource performance information. For performance characterization we propose a two-level benchmark that includes tests corresponding to specific e-Science applications. To evaluate the proposal, we present simulation results for the proposed architecture.
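The abstract describes a scheduler that combines SLA constraints with two-level (generic kernel plus application-specific) benchmark scores. The following is a minimal, hypothetical sketch of that matchmaking idea, not the authors' implementation: all names (`Resource`, `Job`, `rank_resources`, the `sla_max_load` constraint, and the benchmark score dictionaries) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    kernel_scores: dict   # generic low-level benchmarks, e.g. {"flops": ...}
    app_scores: dict      # application-specific benchmarks, e.g. {"isosurface": ...}
    load: float           # current utilisation in [0, 1]

@dataclass
class Job:
    name: str
    app_class: str        # which application-specific benchmark applies
    sla_max_load: float   # illustrative SLA constraint: reject overloaded resources

def rank_resources(job, resources):
    """Order SLA-admissible resources by the benchmark score for the job's
    application class, falling back to a generic kernel score (flops)."""
    admissible = [r for r in resources if r.load <= job.sla_max_load]
    def score(r):
        return r.app_scores.get(job.app_class,
                                r.kernel_scores.get("flops", 0.0))
    return sorted(admissible, key=score, reverse=True)

# Example: resource B has the better application score, but its load
# violates the job's SLA constraint, so A is selected.
a = Resource("A", {"flops": 10.0}, {"isosurface": 8.0}, load=0.2)
b = Resource("B", {"flops": 12.0}, {"isosurface": 15.0}, load=0.9)
job = Job("j1", "isosurface", sla_max_load=0.5)
ranked = rank_resources(job, [a, b])
```

The sketch only illustrates the interplay the paper studies: SLA terms prune the candidate set, and performance (benchmark) data orders what remains.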

Keywords

resource allocation · benchmarks · scheduling · SLA



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Antonella Galizia (1)
  • Alfonso Quarati (1)
  • Michael Schiffers (2, 4)
  • Mark Yampolskiy (3, 4)
  1. Institute for Applied Mathematics and Information Technologies, National Research Council of Italy, Genoa, Italy
  2. Ludwig-Maximilians-Universität München, Munich, Germany
  3. Leibniz Supercomputing Centre, Garching, Germany
  4. Munich Network Management (MNM) Team, Germany
