HPPC 2007: Workshop on Highly Parallel Processing on a Chip
Technological developments are bringing parallel computing back into the limelight after some years of absence from the stage of mainstream computing and computer science between the early 1990s and early 2000s. The driving forces behind this return are mainly technological: increasing transistor densities combined with hot chips, leaky transistors, and slow wires – coupled with the infeasibility of extracting significantly more ILP at execution time – make it unlikely that single-processor performance can continue the exponential growth sustained over the last 30 years. To satisfy the demand for application performance, major processor manufacturers are instead counting on doubling the number of processor cores per chip every second year, in accordance with the original formulation of Moore’s law. We are therefore on the brink of entering a new era of highly parallel processing on a chip. However, many fundamental hardware and software issues remain unresolved and may make the transition slower and more painful than is optimistically expected in many quarters. Among the most important of these is convergence on an abstract architecture, programming model, and language that make it possible to easily and efficiently realize the performance potential inherent in these technological developments.