Abstract
The drive for performance in parallel computing and the need to evaluate platform upgrades or replacements have made frequent running of benchmark codes commonplace for application and platform evaluation and tuning. NIST is developing a prototype of an automated benchmarking toolset that reduces the manual effort of running such benchmarks and analyzing their results. Our toolset consists of three main modules. A Data Collection and Storage module gathers performance data and implements a central repository for it. A second module provides an integrated mechanism to analyze and visualize the data stored in the repository. An Experiment Control Module assists the user in designing and executing experiments. To reduce development effort, the toolset is built around existing tools and is designed to be easily extensible to support others.
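As a concrete illustration of this three-module architecture, the sketch below shows how experiment control, central storage, and analysis might fit together. It is a minimal sketch under our own assumptions, not the paper's actual design: the function names and schema are hypothetical, SQLite stands in for the central repository, and simple wall-clock timing stands in for real performance instrumentation.

    import sqlite3
    import statistics
    import subprocess
    import time

    def init_repository(path="perf.db"):
        """Storage module: open (or create) the central performance-data repository."""
        db = sqlite3.connect(path)
        db.execute("""CREATE TABLE IF NOT EXISTS runs (
                          benchmark TEXT, platform TEXT,
                          nprocs INTEGER, seconds REAL)""")
        return db

    def run_experiment(db, cmd, platform, nprocs):
        """Experiment-control module: execute one benchmark run and record its timing."""
        start = time.perf_counter()
        subprocess.run(cmd, check=True)   # the benchmark executable itself
        elapsed = time.perf_counter() - start
        db.execute("INSERT INTO runs VALUES (?, ?, ?, ?)",
                   (cmd[0], platform, nprocs, elapsed))
        db.commit()

    def summarize(db, benchmark):
        """Analysis module: aggregate the repository's timings for one benchmark."""
        times = [row[0] for row in db.execute(
            "SELECT seconds FROM runs WHERE benchmark = ?", (benchmark,))]
        return statistics.mean(times), (statistics.stdev(times) if len(times) > 1 else 0.0)

The design point the sketch tries to capture is that every run, whatever the benchmark or platform, lands in one shared repository, so the analysis step can compare platforms or track one platform across upgrades without manual bookkeeping.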
This NIST contribution is not subject to copyright in the United States. Certain commercial items may be identified, but that does not imply recommendation or endorsement by NIST, nor does it imply that those items are necessarily the best available.
Visiting scientist from the University of Maryland, UMIACS.
Guest researcher from the Institut National des Télécommunications, France.
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
Cite this paper
Courson, M., Mink, A., Marçais, G., Traverse, B. (2000). An Automated Benchmarking Toolset. In: Bubak, M., Afsarmanesh, H., Hertzberger, B., Williams, R. (eds) High Performance Computing and Networking. HPCN-Europe 2000. Lecture Notes in Computer Science, vol 1823. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45492-6_50
DOI: https://doi.org/10.1007/3-540-45492-6_50
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-67553-2
Online ISBN: 978-3-540-45492-2