High image quality rendering on scalable parallel systems

  • Rolf Herken
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 796)


The term ‘interactive rendering’ is commonly associated with the special-purpose rendering hardware of current high-end graphics workstations. Although the quality of the images produced by these systems has increased at an astonishing rate over the last few years, the correct simulation of optical phenomena, most notably of global illumination, remains outside their reach. Tricks like reflection mapping can produce the illusion of global illumination, but they compromise the physical correctness of the image and thus fail in applications where correctness is required, such as optical quality control of free-form surfaces in CAD or lighting design in architecture. Furthermore, special-purpose hardware systems do not provide the degree of programmability required for high-end computer animation image quality and complexity.

Currently, the commercial rendering systems modelling the widest range of optical phenomena are software-based. They support a wide variety of surface representations, most notably trimmed free-form surfaces, and can process procedural shaders defined by the user in a special-purpose shading language or as C language code fragments that are dynamically compiled and linked to the rendering software. All of these systems operate essentially in batch mode.

The performance of software-based rendering systems on today's most powerful workstations is still three to five orders of magnitude below that required for real-time interactive work. Rendering a single image in high-end computer animation production, or in an industrial application such as the design of car bodies, can take hours or even days on conventional machines for the typical size of the databases involved.

If we define the term ‘interactive’ to mean that the response time of the system approaches the average time the user needs to create a new situation requiring recomputation, it becomes evident that the current state of the art compromises either image quality or interactive performance.

It is argued that the simultaneous requirements of interactivity and very high image quality suggest the implementation of the rendering software component on truly scalable, distributed memory parallel computers.

This raises a number of implementation problems, in particular with respect to the algorithms and data structures that provide scalable performance on such computer architectures. The main issue is the effective use of distributed memory in the presence of adaptive acceleration techniques that produce irregular memory access patterns. A further requirement, dictated by the intended interactive use of the rendering software, is the ability to update the distributed database incrementally.

Since it is very likely that scalable parallel computer architectures will provide the dominant basis for large-scale computing in the near future, the development of such rendering software is of great industrial relevance. It can be seen as a first step towards high image quality virtual reality systems.

Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • Rolf Herken
  1. Mental images GmbH & Co. KG, Berlin, Germany
