
The Journal of Supercomputing, Volume 15, Issue 2, pp 123–140

Chronos: a Performance Characterization Tool Inside the EDPEPPS Toolset

  • J. Bourgeois
  • F. Spies
  • M. J. Zemerly
  • T. Delaitre

Abstract

The EDPEPPS toolset is the result of a ten man-year research and development effort and integrates many modules to predict and classify the execution times of C/PVM programs mapped onto a cluster of heterogeneous workstations. Within this project, a performance characterization tool called Chronos has been developed to model the processor and C instructions. Chronos can characterize a wide range of machines because it is built around a specialized benchmark. Chronos uses a parameter-based model and characterizes both the machine and the program under study. The execution predictor then evaluates the time spent in each program block according to a generic cache-memory model that simulates most CPU internal cache architectures. Chronos requires no user intervention, as all its operations are automatic. The prediction accuracy of Chronos is demonstrated on a CPU-intensive sequential example.

This tool can be used by designers to quickly predict the average execution time of their applications. Average percentage errors obtained with this tool are below 10%.
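The prediction scheme the abstract describes (per-instruction costs obtained by benchmarking, summed block by block, with memory accesses priced through a cache model) can be illustrated with a minimal sketch. All names, costs, and parameters below are hypothetical placeholders, not Chronos's actual model or interface:

```python
# Illustrative sketch of block-based execution-time prediction.
# Per-instruction costs would come from a characterization benchmark;
# the cache parameters stand in for a generic CPU cache model.

def memory_access_cost(hit_ratio, hit_time, miss_penalty):
    """Average cost of one memory access under a simple hit/miss cache model."""
    return hit_ratio * hit_time + (1.0 - hit_ratio) * (hit_time + miss_penalty)

def predict_time(blocks, instr_costs, cache):
    """Sum, over all program blocks, (execution count) x (cost per pass)."""
    total = 0.0
    for block in blocks:
        # Compute cost of the arithmetic/control instructions in one pass.
        per_pass = sum(instr_costs[op] * n for op, n in block["ops"].items())
        # Add the average cost of the block's memory accesses.
        per_pass += block["mem_accesses"] * memory_access_cost(**cache)
        total += block["count"] * per_pass
    return total

# Hypothetical example: per-instruction costs in microseconds.
instr_costs = {"add_int": 0.01, "mul_float": 0.04, "branch": 0.02}
blocks = [
    {"count": 1000, "ops": {"add_int": 5, "branch": 1}, "mem_accesses": 2},
    {"count": 200, "ops": {"mul_float": 3, "add_int": 2}, "mem_accesses": 4},
]
cache = {"hit_ratio": 0.95, "hit_time": 0.005, "miss_penalty": 0.1}
print(predict_time(blocks, instr_costs, cache))  # predicted total, in microseconds
```

The key design point, as in the abstract, is that machine characterization (the cost table and cache parameters) is separated from program characterization (the block structure and instruction counts), so the same program model can be re-priced for a different workstation by swapping in that machine's benchmark results.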

Keywords: performance characterization, parallel programming environments, CPU modeling, cache memory modeling



Copyright information

© Kluwer Academic Publishers 2000

Authors and Affiliations

  • J. Bourgeois (1)
  • F. Spies (1)
  • M. J. Zemerly (2)
  • T. Delaitre (2)

  1. Laboratoire d'Informatique de Besançon, Université de Franche-Comté, IUT Belfort-Montbéliard, Belfort, France
  2. Centre for Parallel Computing, University of Westminster, London
