We present a PTG system that combines synthetic, physics-based and example-based approaches to produce vast landscapes composed of different biomes and populated with huge amounts of assets. We chose an incremental pipeline design with a focus on high performance to provide the user with quick results. The pipeline currently consists of four individual main steps, each of which is customizable. Direct visualization of each step improves usability and provides a fast, iterative workflow. If a single step does not meet the user’s expectations, it can easily be repeated. Additionally, intermediate results are cached to allow reuse of finished pipeline steps. This system design offers a good trade-off between partly contradictory requirements such as performance, usability, realism and flexibility.
Figure 1 illustrates the four individual steps of our sequential pipeline. The idea is to first generate a coarse base terrain using noise functions, which is later refined with biome-specific details. To compute realistic biome distributions, we implemented a multi-step climate simulation, which is carefully simplified to meet the performance requirements while maintaining good results. To add biome-specific terrain details, we chose an example-based approach in which digital elevation model (DEM) data are combined with the previously generated base terrain. Finally, it is possible to generate asset distributions following a rule-based local-to-global model.
The advantage of this approach is that we can use the different PTG styles in the individual pipeline steps and concatenate them in a way that brings out their respective strengths, which results in a better trade-off between the requirements. The synthetic noise functions are able to quickly generate a general terrain and are highly adaptable. The biome distribution is then computed using our physically based climate simulation, which yields realistic-looking results while being easily adjustable via transparent parameters. Finally, highly realistic biome-specific terrain features and details are quickly added by overlaying DEM images, an example-based approach. The individual steps of our pipeline and the chosen methods are described in more detail below.
For compatibility with external applications, e.g., modeling tools or 3D rendering engines, we decided to represent our terrains by heightmaps instead of voxels. Moreover, we use different resolutions for the individual steps of our pipeline; for example, the climate simulation requires a less detailed grid (see Fig. 2). The final heightmap can be exported as a set of tiles in a standard file format (grayscale images). It can also be used directly in the Unreal Engine 4 (UE4), which enables us to use built-in techniques like level of detail (LOD), instancing and level streaming. In the following, we will detail the steps of our pipeline.
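As an illustration of the export format, the following minimal sketch writes a floating-point heightmap as a set of 16-bit grayscale PNG tiles. The tile size, the normalization to the full 16-bit range, and the use of NumPy and Pillow are our own illustrative assumptions, not prescribed by the pipeline.

```python
# Minimal sketch: export a float heightmap as 16-bit grayscale PNG tiles.
# Tile size, normalization and the use of Pillow are illustrative choices.
import numpy as np
from PIL import Image

def export_tiles(heightmap: np.ndarray, tile: int = 1024, prefix: str = "terrain"):
    span = float(heightmap.max() - heightmap.min())
    h = (heightmap - heightmap.min()) / max(span, 1e-9)   # normalize to [0, 1]
    h16 = (h * 65535.0).astype(np.uint16)                 # 16-bit grayscale
    rows, cols = h16.shape
    for ty in range(0, rows, tile):
        for tx in range(0, cols, tile):
            img = Image.fromarray(h16[ty:ty + tile, tx:tx + tile])
            img.save(f"{prefix}_{ty // tile}_{tx // tile}.png")
```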
Base terrain
To generate the base terrain, we decided to employ synthetic PTG methods, specifically noise functions. Since this first step only generates a rough terrain, such methods offer the most flexibility and the widest range of possible terrains while also being very fast. Moreover, they are not limited in size or resolution. Physics- and example-based methods would be more restrictive, e.g., impose more constraints between parameters or require specific example images, and their potential benefits of greater realism and more detail are not relevant here, as we refine the terrain in later steps. The drawback of noise functions, the need for tedious fine-tuning to get realistic-looking results, does not apply because only the high-level terrain has to be generated.
We create the rough terrain by relying on common noise functions, more precisely, multiple octaves of simplex noise (using [16]), as this is well suited to generate a general fractal terrain. This method is fast, scalable, easy to use and sufficient as a coherent, coarse basis. The noise parameters, as well as the number of octaves, can be set by the user. However, replacing or adding other noise functions for more diverse base terrains would be an easy modification. A user-definable threshold marks the sea level to distinguish between land and water bodies (see Fig. 3a).
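The following sketch shows this step in minimal form: multi-octave simplex noise sampled on a grid, followed by the sea-level threshold. The third-party Python `noise` package stands in here for the simplex implementation of [16]; all parameter values are illustrative defaults, not the ones used in our system.

```python
# Sketch of the base-terrain step: fractal (multi-octave) simplex noise plus
# a user-defined sea-level threshold. The `noise` package stands in for the
# simplex implementation of [16]; parameter values are illustrative.
import numpy as np
from noise import snoise2

def base_terrain(size=512, octaves=6, persistence=0.5, lacunarity=2.0,
                 scale=256.0, sea_level=0.0, seed=0):
    height = np.empty((size, size), dtype=np.float32)
    for y in range(size):
        for x in range(size):
            # summing octaves of simplex noise yields a coherent fractal terrain
            height[y, x] = snoise2(x / scale, y / scale, octaves=octaves,
                                   persistence=persistence,
                                   lacunarity=lacunarity, base=seed)
    water_mask = height < sea_level   # user-definable sea level: land vs. water
    return height, water_mask
```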
Climate simulation
For the next pipeline step, the computation of the biome distribution, we use a physics-based approach, in contrast to other fully synthetic methods that often rely on noise. We have developed a climate simulation that allows the generation of realistic, or at least plausible, distributions with a couple of easy-to-understand parameters. By comparison, noise functions would, in our view, entail more fine-tuning or result in less realistic terrains, and sketch-based methods would require more manual work, which we want to avoid. However, sophisticated simulations are computationally more expensive, which is why we disregard some effects to simplify the system and focus on reasonable approximations. The goal of our climate simulation is to add physically plausible realism to the terrain while still being moderately fast to compute.
Following our pipeline-based design, the climate simulation is itself composed of multiple sequential steps, namely temperature, wind and precipitation computation, and lastly the biome classification. In detail, our climate simulation works as follows (a compact code sketch of all four steps is given after the list):
- The first step in our climate simulation is the temperature computation. We provide two different interpolation methods: a bilinear interpolation and a sine-based alternative. The former provides more flexibility for the user, while the latter is better suited to model one-dimensional gradients resembling the behavior observable on Earth between the equator and the poles. Both modes account for a height-based temperature falloff to simulate the decline in temperature with increasing altitude, and are easily adjustable with a few parameters.
- The next step is the simulation of the prevailing wind, which distributes the later generated moisture over the terrain. In order to keep performance reasonably high, we use an iterative approach to calculate the wind directions instead of applying a computationally expensive fluid dynamics solver. Our method is a simplified version of the semi-Lagrangian scheme [21]. We dropped the diffusion process and the pressure calculations, as we handle these separately in a later pipeline step. Therefore, we only consider external forces and self-advection to simulate the wind and compute its directions in a vector field. For these two components, we developed a computationally less expensive algorithm.

The basic idea is to specify initial values for the four corners, which act as the external forces. An iterative approach distributes the wind directions over a vector field: in each iteration, the new wind direction of each cell is computed by combining it with its adjacent cell in the wind direction and adding a small random deviation to simulate micro-disturbances. Finally, we additionally consider the closest corner to model the persistence of the external forces. This delivers a plausible smoothing or cancellation behavior along the dynamically moving fringes between the main wind currents.
- In the third step, we use the wind and temperature data to compute a precipitation distribution for the terrain. Again, we decided to use an iterative, simulation-based approach. Basically, cells marked as water represent moisture sources. Evaporation is modeled as a temperature-dependent function; the user can choose between an exponential and a linear version. The wind currents are responsible for distributing the moisture. Most of the moisture gets transported to the neighboring cell in the direction of the wind, but some share is also transferred to the two cells adjacent to the neighbor and the source. The actual distribution depends on the wind’s direction and the previous moisture amount of all affected cells. With this algorithm, it is possible to model some form of dispersion and equalization. The amount of precipitation occurring depends on the local moisture and temperature and is modeled as a two-step process. First, the precipitation arising during moisture transport is computed. By using the previously computed temperature values and calculating the difference between target and source, we can also simulate natural phenomena like rain shadows. Finally, additional precipitation is computed for moisture-holding cells to simulate other, more local causes. Again, exponential or linear formulas can be used. Although we provide reasonable default values, the system can be modified by a set of user parameters steering the formulas and therefore the results.
- The last step of the climate simulation is the classification of the resulting biomes according to the computed properties, in particular, the temperature and precipitation. For this purpose, we use a slightly modified and discretized Whittaker diagram [22] as a lookup table. For each pair of temperature and precipitation values, a specific biome ID is assigned according to the lookup table. In principle, other classification systems are possible, as the lookup table can be freely changed or replaced by the user.
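The compact sketch below shows how the four climate steps fit together on a coarse grid. It is a strongly reduced illustration under our own assumptions: the lapse rate, transport fractions, rain-out formula (here linear, omitting the dispersion to the cells adjacent to neighbor and source) and the tiny Whittaker-style lookup table are illustrative placeholders for the user-adjustable formulas and tables described above.

```python
# Compact, strongly simplified sketch of the climate pipeline on a coarse grid.
# All constants, formulas and the reduced Whittaker-style table below are
# illustrative assumptions, not the exact parameterization of our system.
import numpy as np

rng = np.random.default_rng(42)

def temperature(height, t_equator=30.0, t_pole=-10.0, lapse=6.5):
    """Sine-based north-south gradient plus a height-based falloff."""
    n = height.shape[0]
    lat = np.sin(np.linspace(0.0, np.pi, n))[:, None]   # 1.0 at the "equator" row
    return t_pole + (t_equator - t_pole) * lat - lapse * np.clip(height, 0.0, None)

def wind_field(n, corners, iterations=50, jitter=0.05, persistence=0.1):
    """Iterative wind estimate: blend each cell with its neighbor in the wind
    direction, add random micro-disturbances, and keep a share of the closest
    corner force to model its persistence."""
    wind = np.zeros((n, n, 2), dtype=np.float32)
    wind[0, 0], wind[0, -1] = corners[0][0], corners[0][1]
    wind[-1, 0], wind[-1, -1] = corners[1][0], corners[1][1]
    for _ in range(iterations):
        new = np.empty_like(wind)
        for y in range(n):
            for x in range(n):
                dy = int(np.sign(wind[y, x, 1])); dx = int(np.sign(wind[y, x, 0]))
                ny = min(max(y + dy, 0), n - 1); nx = min(max(x + dx, 0), n - 1)
                v = 0.5 * (wind[y, x] + wind[ny, nx])        # self-advection
                v += jitter * rng.standard_normal(2)         # micro-disturbance
                cy, cx = int(y >= n // 2), int(x >= n // 2)  # closest corner
                new[y, x] = (1.0 - persistence) * v \
                            + persistence * np.asarray(corners[cy][cx])
        wind = new
    return wind

def precipitation(temp, wind, water, steps=80, evap=0.02, rain=0.1):
    """Moisture evaporates over water cells, is carried downwind and rains
    out; extra rain falls where the downwind cell is colder (rain shadows)."""
    n = temp.shape[0]
    moisture = np.zeros_like(temp); precip = np.zeros_like(temp)
    for _ in range(steps):
        moisture[water] += evap * np.maximum(temp[water], 0.0)  # linear evaporation
        new_m = 0.2 * moisture                                  # share staying behind
        for y in range(n):
            for x in range(n):
                dy = int(np.sign(wind[y, x, 1])); dx = int(np.sign(wind[y, x, 0]))
                ty = min(max(y + dy, 0), n - 1); tx = min(max(x + dx, 0), n - 1)
                moved = 0.8 * moisture[y, x]                    # bulk downwind transport
                cooling = max(temp[y, x] - temp[ty, tx], 0.0)   # colder target -> more rain
                fall = min(rain * moved * (1.0 + 0.1 * cooling), moved)
                precip[ty, tx] += fall
                new_m[ty, tx] += moved - fall
        local = rain * new_m                                    # local, in-place rain
        precip += local
        moisture = new_m - local
    return precip

# Reduced Whittaker-style lookup: rows = temperature bins, cols = precipitation bins
BIOME_TABLE = np.array([[0, 0, 1],    # 0 = tundra, 1 = taiga
                        [2, 3, 4],    # 2 = steppe, 3 = forest, 4 = rainforest
                        [5, 3, 4]])   # 5 = desert

def classify(temp, precip):
    t_bin = np.clip(((temp + 10.0) / 20.0).astype(int), 0, BIOME_TABLE.shape[0] - 1)
    p_bin = np.clip((precip / 50.0).astype(int), 0, BIOME_TABLE.shape[1] - 1)
    return BIOME_TABLE[t_bin, p_bin]   # biome ID per cell
```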
Figure 3 shows the results of the temperature (b), wind (c), moisture (d), precipitation (e) and terrain refinement (f) steps.
Terrain refinement
To complete the terrain generation, the rough base terrain is enriched with more realistic details based on the biome distribution provided by the climate simulation. We decided to use an example-based approach, in particular DEMs, to obtain realistic biome-specific terrain details because of the vast pool of freely available DEM data that can be exploited. The DEMs serve as examples which can be blended onto the base terrain. The advantage is that the DEMs inherently provide realistic biome-specific terrain features and details. To get such realistic details, other methods would need a lot more user tuning and manual work, e.g., crafting specific multi-layered noise functions for each biome type, or, in the case of physically based methods, complex computations.
Another aspect that has to be considered for multi-biome terrains is that organic, natural-looking biome transitions are essential. Therefore, we further customize the previously computed biome borders. The basic idea is to initially use user-adjustable, simplex-based fractal noise to distort the borders at a more granular level. For this purpose, we allocate a higher-resolution biome grid. Compared to more sophisticated techniques from the field of texture synthesis, this is a simple and fast-to-compute method which guarantees a result with consistent quality. Depending on the input data, other methods may occasionally produce technically correct but visually unsatisfactory results like straight biome borders (e.g., graph cut).
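A minimal version of this border distortion is sketched below: the higher-resolution grid looks up its biome ID at a noise-perturbed position in the coarse grid, which breaks up straight borders. The `noise` package, the upsampling factor and the amplitude/frequency values are illustrative assumptions.

```python
# Sketch of the border distortion: biome IDs are looked up on a
# higher-resolution grid at noise-perturbed coordinates, which distorts
# straight borders. Factor, amplitude and frequency are illustrative.
import numpy as np
from noise import snoise2

def distort_borders(biomes, factor=8, amplitude=4.0, frequency=0.05, seed=0):
    coarse_n = biomes.shape[0]
    n = coarse_n * factor                       # higher-resolution biome grid
    out = np.empty((n, n), dtype=biomes.dtype)
    for y in range(n):
        for x in range(n):
            # fractal offsets shift the sampling position in the coarse grid
            ox = amplitude * snoise2(x * frequency, y * frequency, octaves=4, base=seed)
            oy = amplitude * snoise2(x * frequency, y * frequency, octaves=4, base=seed + 1)
            cy = int(np.clip((y + oy) / factor, 0, coarse_n - 1))
            cx = int(np.clip((x + ox) / factor, 0, coarse_n - 1))
            out[y, x] = biomes[cy, cx]
    return out
```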
In a second step, we compute a biome-based DEM weighting using a convolution kernel to blend the adjacent biomes and their corresponding DEMs: each DEM weight equals the fraction of the kernel region covered by the corresponding biome. The strength of the resulting blend and the required computation time depend on the size of the kernel, which can be set by the user. The final DEM value can then easily be calculated as a weighted sum over the occurring DEMs. For simplicity, we assume a one-to-one relationship between the DEM texels and the terrain heightmap. Finally, we combine the generated biome-specific detail layer with the base terrain by using a weighted sum of the two heightmaps.
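The following sketch makes this weighting concrete, assuming the one-to-one texel mapping described above. The square kernel, the hypothetical `dems` dictionary (biome ID to co-registered DEM layer) and the final base/detail mixing weight are illustrative assumptions.

```python
# Sketch of the DEM blending: per texel, each biome's DEM weight is the
# fraction of a square kernel covered by that biome; the detail layer is the
# weighted sum of DEM values and is then mixed with the base terrain.
import numpy as np

def refine(base, biomes, dems, kernel=7, detail_weight=0.3):
    # dems: dict mapping biome ID -> DEM heightmap (same resolution as `base`)
    n = biomes.shape[0]
    r = kernel // 2
    detail = np.zeros_like(base)
    for y in range(n):
        for x in range(n):
            window = biomes[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
            ids, counts = np.unique(window, return_counts=True)
            weights = counts / counts.sum()   # biome area fractions in the kernel
            detail[y, x] = sum(w * dems[i][y, x] for i, w in zip(ids, weights))
    # weighted sum of the two heightmaps (mixing weight is illustrative)
    return (1.0 - detail_weight) * base + detail_weight * detail
```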
Asset placement
In the final step of our pipeline, we populate the biomes by placing assets. We have developed an iterative, rule-based local-to-global model that, in contrast to global-to-local models, enables the creation of emergent distributions. Additional advantages are that the model can easily be modified or extended by further constraints and that the individual assets, through the defined rules, inherently consider the biome transitions. We also considered using a global-to-local model in combination with real plant distribution data, but such data are hardly available for all kinds of biomes.
Our system is designed to use pre-modeled meshes, which allows arbitrary generation methods to be used. However, the mesh generation itself, in a modeling sense, is not part of this work. We provide a basic database of pre-defined assets that can easily be extended by the user. Each asset is associated with a set of properties, e.g., clustering probability, shadow tolerance or repelling distance. The placement is done iteratively via the dart-throwing principle, where a random position is sampled and checked against the asset’s constraints. Our sampling approach is generally based on Poisson-disk sampling, where all points are guaranteed to maintain minimum distances from each other. However, we extended this basic approach to also cover more complex multi-object distributions with bilateral constraints. Our approach remains very flexible through the easy-to-understand parameters which steer the placement. We divide the assets into a few main classes, e.g., organic and inorganic, with corresponding relevant parameters, which helps to improve usability. Additionally, assets are partitioned into size categories which are processed iteratively such that smaller assets consider the previously placed bigger ones. With this technique, we achieve more plausible mixed distributions and environments. Generally, depending on the parameters, it is possible to model clustered, random or uniform distributions and anything in between.
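As an illustration of the core loop, the sketch below combines dart throwing with a Poisson-disk-style distance constraint and processes size categories from large to small. The asset names, radii and the simple acceptance rule are illustrative; the full system additionally evaluates per-asset properties such as clustering probability, shadow tolerance and biome constraints.

```python
# Sketch of the rule-based placement: dart throwing with per-asset repelling
# distances, processed from large to small size categories so that smaller
# assets respect the previously placed bigger ones. Names, radii and the
# acceptance rule are illustrative placeholders for the full rule set.
import random

def place_assets(asset_classes, extent, attempts=20000, rng=random.Random(7)):
    placed = []  # (x, y, radius, name)
    # process larger size categories first
    for name, radius in sorted(asset_classes.items(), key=lambda kv: -kv[1]):
        for _ in range(attempts):
            x, y = rng.uniform(0, extent), rng.uniform(0, extent)
            # Poisson-disk-style bilateral constraint: keep the combined
            # repelling distance to everything placed so far
            if all((x - px) ** 2 + (y - py) ** 2 >= (radius + pr) ** 2
                   for px, py, pr, _ in placed):
                placed.append((x, y, radius, name))
    return placed

# Example: trees (big), bushes, stones (small) on a 200 m x 200 m patch
print(len(place_assets({"tree": 4.0, "bush": 1.5, "stone": 0.5}, extent=200.0)))
```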
As seasons have a significant influence on the visual appearance of the terrain cover, each asset can be associated with up to four different meshes representing its seasonal look. The meshes are then swapped automatically according to the current season, which can be changed in real time. Additionally, the Unreal Engine 4 provides instancing, which improves the rendering performance, and a LOD system for dynamically switching between the placed assets’ detail levels.