Computer Science - Research and Development

Volume 27, Issue 4, pp 227–233

Brainware for green HPC

  • Christian Bischof
  • Dieter an Mey
  • Christian Iwainsky
Special Issue Paper


The reduction of the infrastructure costs of HPC, in particular power consumption, is currently driven mainly by architectural advances in hardware. Recently, in the quest for the EFlop/s, hardware-software codesign has been advocated, owing to the realization that without software support only heroic programmers could use high-end HPC machines. However, in the topically diverse world of universities, the EFlop/s is still very far off for most users, yet it is their computational demands that will shape the HPC landscape for the foreseeable future. Based on experience gained at RWTH Aachen University and in the distributed Computational Science and Engineering support of the UK HECToR program, we argue on economic grounds that HPC hardware and software installations must be complemented by a “brainware” component, i.e., trained HPC specialists who support the performance optimization of users’ codes. This claim is not new in itself, and the establishment of simulation labs at HPC centers reflects it. However, drawing on our experience, we quantify the savings that result from brainware, thus providing an economic argument that sufficient brainware must be an integral part of any “green” HPC installation. It follows that current HPC funding regimes, which favor iron over staff, are fundamentally flawed, and that long-term efficient HPC deployment must emphasize brainware development to a much greater extent.


Keywords: Green IT · Brainware · Performance tuning · Cluster efficiency





Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  • Christian Bischof (1, 2)
  • Dieter an Mey (1)
  • Christian Iwainsky (1)

  1. Center for Computing and Communication, RWTH Aachen University, Aachen, Germany
  2. Institute for Scientific Computing, RWTH Aachen University, Aachen, Germany
