Rendering Natural Phenomena
In computer graphics, rendering is the process of synthetically generating an image – or a sequence of images – of an object from its mathematical and possibly physical description. Natural phenomena are inherently very diverse, and their rendering is therefore a very heterogeneous field with different paradigms and approaches.
As the name implies, the objects and phenomena of interest mainly have a natural origin. However, due to the similar characteristics of the underlying problems, rendering of artificial objects made of natural materials is often considered part of the field as well.
Although the main target of rendering is the creation of images, it is usually not trivial to obtain the mathematical and physical description of the simulated entities (i.e., the input data). Because of this, rendering algorithms may require coupling with simulation methods, which provide the means to computationally generate the required data. Alternatively, acquired or even hand-modeled data can be used in case the simulation proves infeasible for some reason (e.g., too time-consuming).
Categories of Natural Phenomena
Sparse phenomena, involving gases, aerosols, and vapors but also effects of electromagnetism or high-energy particles. These include various well-known meteorological phenomena like atmospheric scattering, clouds, fog, rainbows, or lightning; astronomical phenomena such as auroras, stars, nebulae, and other stellar bodies; and also smaller-scale phenomena like fire, smoke, and dust.
Fluid phenomena, most notably oceans and other large water bodies and effects associated with their surfaces, such as waves and streams. Also medium- and small-scale liquid substances, especially beverages like milk, fruit juices, and coffee; suspensions like blood, paints, and inks; and volcanic phenomena such as lava flow. In addition, especially from the perspective of simulation methodology, it is possible to regard fine-grained solid materials like sand and partly solid substances such as gels, as fluid phenomena.
Solid objects and phenomena. On a large scale, primarily geological formations ranging from mountains to entire planets. On a medium scale, organic entities like vegetation, biological tissues, hair, and fur and inorganic objects such as ice and rock formations, crystals, metals and their alloys, and man-made objects manufactured from these. Among small-scale solid objects, most effort has been focused on rendering precious gems. Additionally, rendering of natural and artificial solid objects exhibiting a layered structure has attracted research attention, for example, coated or painted objects, oxidized and patinated metals, composite materials, gemstones, and many others.
From the perspective of rendering and also modeling, another important distinction can be made between phenomena and objects with well-definable geometry and opaque surfaces and those with a dominating volumetric character (either from the spatial or the optical point of view).
The first group will likely be modeled using geometric primitives and rendered with algorithms suitable for distinguishing between discrete parts of the phenomenon, e.g., the object-air interface. The light interaction will primarily take place at these interfaces, mainly as reflection and refraction. Most opaque solids belong to this category, but so do some fluid substances, such as clear liquids.
On the other hand, the second group will likely utilize a volumetric representation (or a combination with a geometric one) and naturally also volume-rendering methods. Materials with these properties are called participating media, and the dominating optical interactions here will be scattering and absorption. These characteristics are inherent to virtually all sparse phenomena but also to most fluid and many solid ones.
Causes of Color
Emission, due to incandescence (e.g., stellar radiation, volcanic activity, lightning), gas excitation (auroras, artificial gas lamps), or a combination of these (fire). The variation of color is primarily caused by varying energy density. Virtually all natural light originates in these processes.
Geometric causes, such as scattering (atmosphere, clouds, smoke, milk, and generally almost all substances exhibiting volumetric properties or diffuse reflection), dispersion (rainbows, sundogs, snow), diffraction (opals, thin filament-like objects such as spider webs and hair, carapaces of certain bugs), interference (single- or multilayered structures such as bubbles and certain gemstones and insects), and polarization (specular reflection from smooth surfaces). These interactions are usually elastic, i.e., they conserve energy, and the color variation is mainly caused by geometric configurations and relations.
Absorption and reemission, mostly in organic compounds (plant and animal tissues, natural and artificial pigments, dyes, and inks) and all metals, but also in certain minerals and semiconductors and occasionally in clear liquids, most notably pure water. These interactions are usually inelastic, leading to an energy loss described by the Beer-Bouguer law. Color variation is caused by the relative efficiency of reemission at different wavelengths.
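For monochromatic light traveling a distance d through a homogeneous absorbing medium, the Beer-Bouguer law mentioned above can be written as:

```latex
I(d, \lambda) = I_0(\lambda)\, e^{-\alpha(\lambda)\, d}
```

where \alpha(\lambda) is the wavelength-dependent absorption coefficient; it is precisely this dependence on \lambda that gives absorbing materials their color.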
Synthesizing physically plausible images of natural phenomena has arguably been one of the primary foci of rendering from the beginning. However, the complexity of most natural phenomena prevented their plausible rendering until the early 1980s, mostly because of the lack of theoretical understanding and computational power.
A significant milestone was reached with the introduction of radiosity and stochastic ray tracing algorithms in 1984, enabling physically based simulation of global illumination effects. A more general solution was then presented in 1986 by James Kajiya in the form of the rendering equation – a unified mathematical framework that enables the simulation of all effects that conform to geometric optics. These approaches were then extended to rendering of volumetric phenomena in the late 1980s and early 1990s.
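In its standard form, the rendering equation expresses the outgoing radiance at a surface point x as the emitted radiance plus the incoming radiance reflected over the hemisphere Ω:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i
```

where f_r is the bidirectional reflectance distribution function (BRDF) and n is the surface normal; the reflectance models discussed below are instances of f_r.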
Algorithmic improvements have gone hand in hand with the evolution of computing hardware, making the rendering of image sequences and even entire movies feasible by the end of the 1980s. However, probably the biggest breakthrough came with the introduction of parallel programmable graphics processing units (GPUs) to the consumer market in 2001. Their programmability enables researchers and developers to produce specialized algorithms focused on simulating and rendering effects of diverse nature. Programmable GPUs quickly became widespread and enabled the development of algorithms and applications – including video games and other interactive applications – capable of simulating numerous natural phenomena.
The Lambert reflectance model (1760) states that reflection from a sufficiently rough surface is perfectly diffuse, i.e., isotropic with respect to the observation direction. Despite building on physically meaningful assumptions (multiple scattering underneath the surface) and the fact that many materials (such as uncoated paper) conform closely to the model, there is no ideal diffuse reflector.
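A minimal sketch of Lambertian shading; the function name and parameters are illustrative, vectors are assumed normalized, and the 1/π factor normalizes the BRDF so the surface never reflects more than its albedo:

```python
import math

def lambert_diffuse(albedo, normal, light_dir, light_intensity=1.0):
    """Ideal diffuse (Lambertian) reflection: radiance depends only on the
    cosine between surface normal and light direction, not on the viewer."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    cos_theta = max(0.0, n_dot_l)  # light below the horizon contributes nothing
    return albedo / math.pi * light_intensity * cos_theta  # 1/pi normalizes the BRDF

# head-on illumination of a bright diffuse surface
print(lambert_diffuse(0.8, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))
```

Note that the result is identical for every viewing direction, which is exactly the model's defining property.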
The Phong reflectance model (1975) approximates reflection from glossy surfaces by a cosine function raised to an exponent proportional to the surface smoothness. This leads to a reflection in the form of a blurry circular highlight, which gets brighter and smaller with increasing smoothness and in the limit corresponds to the Dirac delta function of a perfect mirror. Despite having no physical basis, the model has been widely adopted and is still used, mainly thanks to its simplicity.
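A minimal sketch of the Phong specular term under the same assumptions (illustrative names, normalized vectors):

```python
import math

def phong_specular(normal, light_dir, view_dir, shininess):
    """Phong glossy highlight: (R . V)^n, where R is the mirror reflection
    of the light direction about the normal and n controls smoothness."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n_dot_l = dot(normal, light_dir)
    # mirror-reflect the light direction about the normal: R = 2(N.L)N - L
    reflect = tuple(2.0 * n_dot_l * n - l for n, l in zip(normal, light_dir))
    r_dot_v = max(0.0, dot(reflect, view_dir))
    return r_dot_v ** shininess

# viewer exactly in the mirror direction: the highlight peaks at 1
print(phong_specular((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 32))
```

Raising the exponent narrows the highlight, which is the model's stand-in for increasing smoothness.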
Particle systems (1983) are ubiquitously used in both interactive and offline applications to model volumetric phenomena (such as clouds). In this sense, particles are small semitransparent entities, often modeled by mapping a texture onto a rectangular geometric primitive that always faces the observer. If used in sufficient numbers, they are able to mask their discrete nature while still being cheaper than the corresponding full volumetric representation (e.g., a 3D voxel grid). However, simulating global illumination effects in conjunction with particle systems requires additional effort, which might hinder their utilization. As a result, it is often necessary to design another empirical model to compute the illumination in the intended way, increasing the amount of work required.
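A minimal particle-system update loop under simple assumed dynamics (constant gravity, finite lifetime); in a renderer, each surviving particle would be drawn as a camera-facing textured quad:

```python
import random

class Particle:
    """A minimal billboard-style particle: position, velocity, remaining life."""
    def __init__(self, pos, vel, life):
        self.pos, self.vel, self.life = list(pos), list(vel), life

def update(particles, dt, gravity=(0.0, -9.81, 0.0)):
    """Advance every particle by one time step and drop the expired ones."""
    for p in particles:
        for i in range(3):
            p.vel[i] += gravity[i] * dt
            p.pos[i] += p.vel[i] * dt
        p.life -= dt
    return [p for p in particles if p.life > 0.0]

# spawn a small burst with randomized velocities and step the system
ps = [Particle((0.0, 0.0, 0.0), (random.uniform(-1, 1), 4.0, 0.0), 1.0)
      for _ in range(100)]
for _ in range(30):
    ps = update(ps, 1.0 / 60.0)
```

Emission rate, lifetime, and the forces applied are the knobs an artist tunes, which illustrates the supervision cost of empirical methods noted below.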
The main downsides of empirical methods are that they often require a lot of artistic supervision during the content creation process and that they seldom work outside the range of phenomena they were designed for.
The Torrance-Sparrow reflectance model (1967) simulates reflection from rough glossy surfaces and as such is an alternative to the Phong model. The method considers the simulated surface to consist of microscopic facets oriented according to a statistical distribution (originally a Gaussian). Each facet is assumed to reflect light according to the Fresnel equations, and additionally, probabilistic shadowing by neighboring facets is taken into account. Although in some situations the model produces results similar to the Phong model, it is energy conserving, correctly handles objects made of conductors, and behaves plausibly at grazing angles, albeit being somewhat more difficult to understand.
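A sketch of a microfacet specular term in this spirit. The Beckmann distribution, Schlick's Fresnel approximation, and the min-based geometry term are common substitutions for the exact formulations of the original model; all cosine parameters are assumed positive:

```python
import math

def fresnel_schlick(cos_theta, f0):
    """Schlick's approximation to the Fresnel reflectance at normal-incidence
    reflectance f0 (the original model uses the full Fresnel equations)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def beckmann_d(n_dot_h, roughness):
    """Beckmann (Gaussian-like) distribution of microfacet normals."""
    cos2 = n_dot_h * n_dot_h
    tan2 = (1.0 - cos2) / cos2
    m2 = roughness * roughness
    return math.exp(-tan2 / m2) / (math.pi * m2 * cos2 * cos2)

def microfacet_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
    """Microfacet BRDF: facet distribution D * Fresnel F * geometric
    masking/shadowing G, normalized by the projected areas."""
    d = beckmann_d(n_dot_h, roughness)
    f = fresnel_schlick(v_dot_h, f0)
    g = min(1.0,
            2.0 * n_dot_h * n_dot_v / v_dot_h,   # masking of the viewer
            2.0 * n_dot_h * n_dot_l / v_dot_h)   # shadowing of the light
    return d * f * g / (4.0 * n_dot_l * n_dot_v)
```

Increasing the roughness spreads the facet normals, which lowers and widens the highlight, unlike Phong's purely empirical exponent.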
Photon mapping (1996) is a general framework for rendering global illumination effects. It is applicable to both surface and volume illumination, which makes it especially suitable for rendering natural phenomena. Its main idea lies in distributing the light energy by shooting and tracing small energy particles, photons. Every interaction of a photon with the simulated environment is recorded, and when all illumination energy has been distributed, the algorithm calculates the energy density at the observed locations by performing local photon density estimation. Much work has been invested into improving the original technique, resulting in one of the most versatile global illumination algorithms to date.
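The density-estimation step can be sketched as follows; a brute-force nearest-neighbor search stands in here for the kd-tree of the actual technique, and each photon record is simplified to a position and a power:

```python
import math

def radiance_estimate(photons, x, k=50):
    """Local density estimation at point x: gather the k nearest photons
    and divide their summed power by the area of the bounding disc."""
    dist2 = lambda p: sum((a - b) ** 2 for a, b in zip(p["pos"], x))
    nearest = sorted(photons, key=dist2)[:k]
    r2 = dist2(nearest[-1])            # squared radius of the gathered disc
    power = sum(p["power"] for p in nearest)
    return power / (math.pi * r2)      # flux per unit area

# three photons on a line; estimating near the first one
photons = [{"pos": (0.0, 0.0), "power": 1.0},
           {"pos": (1.0, 0.0), "power": 1.0},
           {"pos": (3.0, 0.0), "power": 1.0}]
print(radiance_estimate(photons, (0.0, 0.0), k=2))
```

The choice of k trades blur for noise: larger values smooth the estimate but also smear illumination features.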
Originally, physically based methods were used only in offline applications. Recently, however, with the increasing performance and flexibility of programmable hardware, the boundary between offline and interactive methods has blurred. Many physically based rendering algorithms (such as ray tracing or photon mapping) can today be implemented to run interactively, albeit with decreased rendering quality.
Finally, predictive methods represent a step beyond physically based methods. In contrast to empirical approaches, their primary focus is physical correctness and radiometric accuracy. The utilized algorithms must be spectral, unbiased, and support all significant physical phenomena that occur in the simulated environment. They require measured or acquired input data, and the results usually need to be viewed under controlled conditions. The resulting methods typically need validation and are generally very slow, even compared to physically based approaches. As such, they are useful mainly in virtual prototyping applications, such as the automotive industry, architecture, and gem processing.
As mentioned in the “Introduction,” in addition to rendering a phenomenon (and thereby producing an image of its momentary appearance), it is often necessary to obtain data about its spatial and temporal development prior to the rendering. Virtually all natural phenomena are dynamic in some sense, and this dynamism can be considered over the short and the long term.
Short-term development usually stems from the character of the phenomenon itself and its internal dynamics. Its nature can be synthetic (e.g., condensation of water vapor leading to cloud formation), evolutionary (for instance, flow of liquid particles in a stream), or destructive (such as cracking or shattering of an iceberg). Capturing this behavior will often entail using discrete particle or continuous dynamics.
On the other hand, long-term development is usually related to interactions of a phenomenon with its surrounding environment, sometimes referred to as weathering (in case the process is destructive). Many forms of this behavior exist, for instance, terrain formation, soil erosion and cracking, oxidation and patination of metals, organic tissue decomposition, and others.
Terrain rendering methods usually generate the overall terrain morphology by simulating the orogenetic and erosive processes (or simply use satellite data) and add more detailed features by random fractal perturbations.
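The fractal-perturbation step can be illustrated with 1-D midpoint displacement, a minimal sketch in which the `roughness` parameter and the fixed seed are illustrative choices:

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, rng=None):
    """1-D fractal terrain profile: recursively displace each segment's
    midpoint by a random offset whose magnitude shrinks by `roughness`
    at every subdivision level."""
    rng = rng or random.Random(0)      # fixed seed for a repeatable profile
    heights = [left, right]
    scale = 1.0
    for _ in range(depth):
        out = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2.0 + rng.uniform(-scale, scale)
            out += [a, mid]
        out.append(heights[-1])
        heights = out
        scale *= roughness
    return heights

# 65 height samples between two fixed endpoints
profile = midpoint_displacement(0.0, 0.0, depth=6)
```

A 2-D analogue of this scheme (the diamond-square algorithm) is what terrain generators typically apply on top of coarse simulated or satellite morphology.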
Rendering of plants frequently employs so-called L-systems to generate the plants and trees. L-systems are iterated functions described by grammars that imitate branching in real plants. Adding random perturbations to the system can produce plausible-looking plants in an unlimited number of variants that retain the overall character defined by the L-system.
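A minimal L-system expander, shown with a well-known bracketed plant grammar (F = draw forward, +/- = turn, [ ] = push/pop the turtle state, i.e., start/end a branch):

```python
def expand(axiom, rules, iterations):
    """Iteratively rewrite every symbol according to the grammar rules;
    symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# a classic bracketed L-system for a branching plant
plant = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
print(expand("X", plant, 2))
```

Interpreting the resulting string with a turtle-graphics drawer yields the plant's geometry; randomizing the turn angles or rule choices produces the variants mentioned above.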
Rendering of ocean waves can generate larger-scale waves with a fluid dynamics simulation, or synthesize them from trochoidal wave theory using measured wave frequency spectra, and then add random fractal perturbations to break up disturbing repetitive patterns.
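A single trochoidal (Gerstner) wave component can be sketched as follows; a real ocean synthesizer sums many such components, with amplitudes drawn from a measured frequency spectrum, and typically evaluates the sum with an FFT:

```python
import math

def gerstner_wave(x, t, amplitude, wavelength, steepness, speed):
    """Trochoidal (Gerstner) wave: a surface point moves on a circle,
    which sharpens crests and flattens troughs compared to a pure sine.
    Returns the displaced (horizontal, vertical) position of the point."""
    k = 2.0 * math.pi / wavelength           # wave number
    phase = k * (x - speed * t)
    horizontal = x - steepness * amplitude * math.sin(phase)
    vertical = amplitude * math.cos(phase)
    return horizontal, vertical

# sample one wave component at the origin
print(gerstner_wave(0.0, 0.0, 1.0, 10.0, 0.5, 1.0))
```

The steepness parameter controls how far points slide horizontally toward the crests; at zero the wave degenerates to a plain sine profile.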
The phenomenon of interest might be too large and as a result produce overwhelming amounts of data. A good example is acquisition of terrains, which, although today possible via satellite scanning, still does not produce data with resolution sufficient for some applications.
Dynamic phenomena are generally difficult to acquire, especially in cases when the scanning process is slower than the rate of the phenomenon’s significant change. Acquisition of flames and smoke is a good example here.
Scanning processes yield a limited number of instances of the target phenomenon. If many such instances are needed (e.g., in cloud rendering), the cost of the acquisition process might become prohibitive.
Similar to physically based simulation, however, these problems can in some cases be overcome by applying procedural perturbation techniques to the scanned data.
The present knowledge in physics theoretically allows us to explain and hence simulate virtually all observable natural phenomena. The limiting factors in doing so are therefore primarily computational resources. In the future, increasing memory density will enable us to work with larger natural systems, and the growing parallel computing power of CPUs and GPUs will allow simulation and rendering of more complex phenomena. Especially in interactive applications, the current trend of preferring physically based approaches over empirical ones will most likely continue.
- 1. Nassau, K.: The Physics and Chemistry of Color, 2nd edn. Wiley-Interscience, Hoboken (2001). ISBN 0471391069
- 2. Pharr, M., Humphreys, G.: Physically Based Rendering, 2nd edn. Morgan Kaufmann, Burlington (2010). ISBN 0123750792
- 6. Akenine-Möller, T., Haines, E., Hoffman, N.: Real-Time Rendering, 3rd edn. AK Peters, Natick (2008). ISBN 1568814240
- 7. Jensen, H.W.: Global illumination using photon maps. In: Proceedings of EGWR, pp. 91–100. Porto (1996)
- 8. Dorsey, J., Rushmeier, H., Sillion, F.: Digital Modeling of Material Appearance. Morgan Kaufmann, Burlington (2007). ISBN 0122211812