Integration of performance evaluations in the design process of CPUs and computer systems

  • Uwe Langer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 977)

Abstract

Performance evaluations often receive insufficient attention during the design of a computer system, owing to the cost of modelling and the lack of quantitative values for model parameters. Nevertheless, the cost could be reduced significantly and parameter values estimated more accurately if information that has already emerged from component designs were made available. This contribution presents a concept that links the different “worlds” of computer design, CPU design, and performance evaluation to enable such an exchange of information. Generic objects for the hardware components and for the workload predefine a hierarchy of control-flow models with associated tasks. Thus, results from CPU analyses can be used in computer-system models to describe the CPU behaviour, just as results from computer analyses can be used for a CPU design. The workload consists of CPU instructions, base algorithms, and processes: standard instructions support the specification of a CPU's instruction set, while base algorithms represent the workload for CPUs and form an interface to the computer-system models, whose workload is composed of base algorithms. The i/o behaviour of a CPU model for each base algorithm is recorded and re-executed in a computer-system model.
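The record-and-replay idea in the last sentence can be illustrated with a minimal sketch. The class and algorithm names below (`CPUModel`, `TraceDrivenCPU`, the `"fft"` base algorithm, and the placeholder trace contents) are hypothetical, not taken from the paper; the sketch only shows the mechanism: a detailed CPU model records the i/o events produced by a base algorithm once, and a system-level model then re-executes that trace instead of simulating the CPU instruction by instruction.

```python
from dataclasses import dataclass, field

@dataclass
class IOEvent:
    time: float      # simulated time of the access
    kind: str        # e.g. "read" or "write"
    address: int

@dataclass
class Trace:
    algorithm: str   # name of the base algorithm this trace belongs to
    events: list = field(default_factory=list)

class CPUModel:
    """Detailed CPU-level model: runs a base algorithm and records
    its i/o behaviour as a timestamped event trace."""
    def run(self, algorithm: str) -> Trace:
        trace = Trace(algorithm)
        # In a real model these events would come from simulating the
        # instruction stream; here we fabricate a tiny placeholder trace.
        for t, addr in enumerate([0x100, 0x104, 0x200]):
            trace.events.append(IOEvent(time=float(t), kind="read", address=addr))
        return trace

class TraceDrivenCPU:
    """Stand-in for the CPU inside a computer-system model:
    re-executes recorded traces instead of simulating instructions."""
    def __init__(self, traces):
        self.traces = {tr.algorithm: tr for tr in traces}

    def replay(self, workload):
        # At system level the workload is composed of base algorithms;
        # concatenate their recorded traces on a common time axis.
        events, offset = [], 0.0
        for alg in workload:
            tr = self.traces[alg]
            for ev in tr.events:
                events.append(IOEvent(offset + ev.time, ev.kind, ev.address))
            offset += tr.events[-1].time + 1.0
        return events

# Record once at CPU level, then replay inside the system model:
cpu = CPUModel()
traces = [cpu.run("fft")]
system_cpu = TraceDrivenCPU(traces)
log = system_cpu.replay(["fft", "fft"])
```

The design choice this mirrors is that the expensive CPU simulation runs once per base algorithm, while the system model consumes only the cheap recorded i/o behaviour, which is what makes early performance evaluation affordable.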

Keywords

computer design · CPU design · performance evaluation · workload hierarchy · generic objects · modelling · simulation · information re-use

Copyright information

© Springer-Verlag Berlin Heidelberg 1995

Authors and Affiliations

  • Uwe Langer
  1. Computer Science Department, Federal Armed Forces University Munich, Neubiberg