1 Introduction

The earth is three-dimensional, heterogeneous and, for the most part, inaccessible to direct observation. As a consequence, the static and dynamic parameters governing physical processes below the earth's surface are generally poorly known. A recurrent challenge for geoscientists and engineers is, therefore, to predict the likely nature or behavior of the subsurface from limited data. In all fields of geophysics sensu lato, these forecasts may use physically and mathematically based data processing, such as upward continuation of potential fields, seismic processing, classical processing of ground penetrating radar (Davis and Annan 1989; Nobakht et al. 2013) or reservoir production decline curves (Fetkovich 1980) (Fig. 28.1a), or the resolution of an inverse problem that explicitly uses physical models computing observations from some earth and physical parameters (Fig. 28.1b–d, f–h). In geology, forecasts (e.g., about the location and volume of a specific formation or resource) and geological scenarios involve direct observations and geophysical images (Jackson and Rotevatn 2013; Perrouty et al. 2014; Fig. 28.1e). In this process, the loop may not always close: in the end, the interpretations are not guaranteed to be compatible with the initial geophysical observations. This may or may not be a problem, depending on the purpose of the interpretation. For example, a qualitative match between reflection seismic data and structural interpretations is probably sufficient to discuss fault growth models (Jackson and Rotevatn 2013), whereas such a mismatch can be problematic in other tasks such as natural resource assessment (Caumon 2010; Jessell et al. 2014).

Another practical problem is the interpretation and fusion of several independent data sets corresponding to different physical or geological observations (Corbel and Wellmann 2015; Paasche 2016). Geostatistics (Chiles and Delfiner 2012; Goovaerts 1997) was historically developed with these problems in mind, and is an attractive theoretical framework to recombine point and volume data coming from geophysical images consistently with spatial statistics. However, geological reasoning and statistical reasoning are of a different nature (Frodeman 1995), so honoring spatial statistics is very useful but not always sufficient to represent geological concepts. Therefore, several methodologies have been introduced to explicitly incorporate geological knowledge in subsurface interpretation, all of them explicitly considering geological objects (Fig. 28.1f–h).

Fig. 28.1

Examples of approaches and workflows using geological and geophysical data to make forecasts about the earth or the associated processes, with some illustrative references. a–d Workflows that use no or minimal geological prior information. e Classical use of geophysical data in fundamental and applied geology. f–h Workflows that explicitly incorporate geological parameters in the process

The main focus of this chapter is to review the main frameworks by which geological concepts can be represented in earth models and in inverse methods addressing several types of physics. It thus aims at complementing the existing reviews and discussions of Linde et al. (2015) and Jessell et al. (2014), who address this problem with similar objectives but different perspectives. As the topic is vast, the reader is also referred to previous review papers (Farmer 2005; Lelièvre and Farquharson 2016; Linde et al. 2015; de Marsily et al. 2005; Mosegaard and Hansen 2016; Oliver and Chen 2011; Pyrcz et al. 2015; Zhou et al. 2014a). Several books also present complementary perspectives and more complete descriptions and details (Agterberg 2014; Caers 2011; Mallet 2002, 2014; Perrin and Rainaud 2013; Pyrcz and Deutsch 2014). Section 28.2 provides further motivations for considering geology in geophysical models, and tries to define what “geology” means in that sense. Then, Sect. 28.3 briefly describes the types of parameterizations classically used in computational physics. We discuss some links between these physical parameterizations and the frameworks used to represent geological domains in Sect. 28.4.

2 Motivations for Explicit Geological Parameterizations

A wealth of perspectives is essential and complementary to make progress in the understanding of our planet and its resources. This is exemplified by the various disciplines involved in natural resource characterization, see for instance Ringrose and Bentley (2015). Feedback and interactions between the various approaches generate many types of possible workflows to integrate geological data and produce forecasts, as illustrated in Fig. 28.1. For example, geophysical processing and inverse methods that use minimal geological prior information (Fig. 28.1a–d) are typically considered as data for geological interpretations (Fig. 28.1e). Whereas these “minimal prior” approaches are not this chapter's focus, they are very useful and are always used to some extent in practical studies, because they provide at least a useful first-order view of the geological domain. This is illustrated in particular by the deterministic workflows of Fig. 28.1h, which strive for fit-for-purpose, as simple as possible, subsurface models (Elrafie et al. 2008; Ringrose and Bentley 2015; Williams et al. 2004). They are also conceptually satisfying in the sense that they produce images or forecasts that mainly depend on the physics, hence can be claimed to be parsimonious and objective. As a consequence of this parsimony and of the non-linear nature of most of the physical processes involved, these models make it difficult to evaluate uncertainty (Watson et al. 2013). The term “objective” is also relative, as some choices are always made in these methods. In data processing methods, these subjective choices relate to the underlying model assumptions (e.g., sub-horizontal layers). In inverse methods, choices must also be made about the parameterization and about a statistical model (e.g., the multi-Gaussian model) or a particular regularization (e.g., Mosegaard 2011).

Among the approaches that try to get the most out of the physics with minimal assumptions, recent and promising developments use several types of data and petrophysical models to constrain local anisotropy; see for instance Clapp et al. (2004), Ma et al. (2012), Sava et al. (2014), Zhou et al. (2014b), and recent reviews in geophysical imaging (Meju and Gallardo 2016), reservoir seismology (Bosch et al. 2010), hydrogeophysics (Linde and Doetsch 2016), mineral exploration (Lelièvre and Farquharson 2016) and petroleum exploration (Moorkamp et al. 2016). Two main ideas underlie these approaches. First, some local structural orientations are inferred from borehole data or other geophysical data to constrain the covariance function used during inversion. Second, a petrophysical model is used to exploit the existing correlations between the physical parameters. As these correlations generally depend on the rock type, the model often includes discrete variables that estimate or sample the rock type at a given location. This notion of rock type is close to the notion of lithofacies, so it is a way to integrate geological reasoning into inverse methods.

In the fields of reservoir engineering and hydrogeology, methods incorporating prior geological knowledge in flow and transport models were also developed early on, as discussed in several review papers (Farmer 2005; Linde et al. 2015; de Marsily et al. 2005; Oliver and Chen 2011; Zhou et al. 2014a). One fundamental reason is that flow and transport processes can be highly non-linear, while pressure and concentration measurements are generally quite sparse as compared to the number of potential factors influencing fluid transfers in porous and fractured media. The same observation holds in potential field inversion, where geological prior information can significantly help address the ill-posedness of the inverse problem (Lelièvre and Farquharson 2016). But what does “geological prior” exactly mean?

As noted in particular by Frodeman (1995), geology is an interpretive science which includes a significant component of historical thinking. One aim of geology is to describe the earth in historical terms by identifying the main geological processes and their impact. In terms of scientific philosophy, it is interesting to highlight that geology generally produces refutable scenarios, whereas mathematics is concerned with formal and irrefutable proofs (given some hypotheses). The encounter of these two scientific methods is deeply written in the DNA of Mathematical Geosciences. Advanced methods in physically-based modeling have been developed to quantitatively model geological processes. Some very interesting inverse methods that use such models have been developed recently to quantitatively integrate spatial observations (Charvin et al. 2009; Cross and Lessenger 1999; Gallagher et al. 2009). These methods are ideal in the sense that they could in principle unify geology and geophysics rigorously. However, the interplay of multiple coupled physical and chemical processes at geological time scales remains extremely challenging to model on a computer. The use of such models in an inverse framework is also very challenging, as the number of unknown or poorly known parameters makes the inverse problem highly ill-posed and computationally intractable. This empty-space problem is very general and applies to most inverse problems in geosciences, but it is critical when an explicit time dimension is considered, because the density of information in time-space is then very small (e.g., only a few points typically constrain pressure and temperature in basin studies). This explains why most of the methods in Fig. 28.1e–h do not explicitly consider geological time and instead use an object-based approach, a statistics-based approach or a combination of both to represent the geological prior information and make forecasts in the 3D physical space.

Classically, the object-based strategy is essential to the geological approach. For example, geological mapping typically decomposes a complex reality into discrete and interconnected tectonic, igneous, metamorphic, diagenetic, stratigraphic and sedimentological objects. These object definitions do integrate historical and process-based considerations. For instance, time is explicitly considered in the definition of the remarkable surfaces that sequence stratigraphers use to interpret geoscience data. The characterization of these objects in mathematical and computational terms has been a significant focus of the IAMG for the last 50 years. The statistics-based approach, another clear focus of the IAMG, is complementary to the object-based approach. Indeed, objects are heterogeneous, boundaries between objects may be difficult to define, and objects can be difficult to map from available observations. Statistical reasoning is key to addressing these problems. In this chapter, we will try to explain a few ways in which object-based and statistics-based methods interact in the integration of geological data and physical modeling. For this, we will start from the perspective of what physical modeling needs.

3 Parameterizations for Physical Models

Sambridge et al. (2012), among others, give a very crisp and generic summary of the parameterizations used in most numerical physical modelling methods. In this view, a model \( m\left( {\mathbf{x}} \right) \) is defined at any point \( {\mathbf{x}} \) of the physical space by a set of basis functions:

$$ m\left( {\mathbf{x}} \right) = \sum\limits_{k = 1}^{K} {m_{k} \varphi_{k} \left( {\mathbf{x}} \right)} . $$
(28.1)

For example, in the finite element method with linear triangular elements, a basis function \( \varphi_{k} \) is defined for each mesh vertex \( {\mathbf{x}}_{k} \): \( \varphi_{k} \left( {{\mathbf{x}}_{k} } \right) \) is equal to 1, \( \varphi_{k} \left( {{\mathbf{x}}_{j \ne k} } \right) \) is equal to 0, and \( \varphi_{k} \) decreases linearly to zero over the mesh elements adjacent to \( {\mathbf{x}}_{k} \). The values \( m_{k} \) are the parameter values (e.g., thermal conductivity) associated with the mesh vertices.
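
To make this parameterization concrete, the following sketch (in Python, with purely illustrative node positions, property values and function names) evaluates Eq. (28.1) on a 1D mesh with linear “hat” basis functions; the same principle carries over to triangular or tetrahedral elements in higher dimensions.

```python
# A minimal sketch of Eq. (28.1) in 1D: a model m(x) built from linear "hat"
# basis functions attached to mesh vertices (illustrative values only).
import numpy as np

def hat_basis(x, nodes, k):
    """Piecewise-linear basis: 1 at nodes[k], 0 at the other nodes."""
    phi = np.zeros_like(x, dtype=float)
    if k > 0:  # rising limb on [nodes[k-1], nodes[k]]
        left = (x >= nodes[k - 1]) & (x <= nodes[k])
        phi[left] = (x[left] - nodes[k - 1]) / (nodes[k] - nodes[k - 1])
    if k < len(nodes) - 1:  # falling limb on [nodes[k], nodes[k+1]]
        right = (x >= nodes[k]) & (x <= nodes[k + 1])
        phi[right] = (nodes[k + 1] - x[right]) / (nodes[k + 1] - nodes[k])
    return phi

nodes = np.array([0.0, 1.0, 2.5, 4.0])   # mesh vertices x_k (assumed)
m_k = np.array([2.0, 3.5, 1.0, 2.0])     # e.g., thermal conductivity at vertices
x = np.linspace(0.0, 4.0, 9)
m_x = sum(m_k[k] * hat_basis(x, nodes, k) for k in range(len(nodes)))  # Eq. (28.1)
print(m_x)
```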

The general formulation (28.1) allows computing or approximating the differential operators needed to solve partial differential equations describing physical processes. Many recent advances in computational physics consist of particular choices of basis functions. For instance, in the extended finite element method, the use of Heaviside basis functions to represent internal discontinuities in a mesh was a step change in the computation of fracture growth (Moës et al. 2002). Another very active research field concerns the combination of basis functions at several scales (e.g., Efendiev et al. 2013). These methods have been applied, for instance, in finite volume modeling of flow in porous media to solve the flow and transport equations at two distinct and interacting scales (Jenny et al. 2003; Møyner and Lie 2014).

Equation (28.1) is also compatible with the theory of spatial random fields. At point scale, the values \( m_{k} \) are seldom known below the earth surface. Geostatistics offers many ways to estimate or simulate such values (Chiles and Delfiner 2012; Goovaerts 1997) using statistical parameters inferred from subsurface data. One of these parameters is the variogram, which models the expected dissimilarity between the values of a variable at two locations as a function of the distance separating them. In dual kriging, Eq. (28.1) is also used, as the unknown value is estimated as a linear combination of covariance functions centered on the data points. The use of point-based parameterizations is also much studied in computational physics under the term “meshless methods”, see for instance Liu and Gu (2005). In the practice of geostatistical methods, the values \( m_{k} \) are generally modeled on a Cartesian grid, but recent papers also discuss the application of geostatistics on unstructured grids (Gross and Boucher 2015; Manchuk et al. 2005; Zaytsev et al. 2016), or directly on points (Zagayevskiy and Deutsch 2016). A major advantage of these methods is to estimate or simulate values directly on the physical modeling support, and also to use adaptive resolution depending on the local information density and on the sensitivity between the model parameters and the physical process.
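
As an illustration of this connection, the sketch below (an assumed, minimal example; the exponential covariance, data values and variable names are illustrative) writes simple kriging in its dual form: the weights are obtained once from the data-to-data covariance matrix, and the estimate at any location is a linear combination of covariance functions centered on the data points, exactly as in Eq. (28.1).

```python
# A minimal sketch of simple kriging in dual form (zero-mean residuals assumed).
import numpy as np

def cov(h, sill=1.0, rng=50.0):
    """Isotropic exponential covariance as a function of separation distance h."""
    return sill * np.exp(-3.0 * np.abs(h) / rng)

x_data = np.array([10.0, 35.0, 80.0])   # data locations (1D for brevity)
z_data = np.array([0.4, 1.2, -0.3])     # residuals from a known (here zero) mean

# Dual weights: solve C w = z once, independently of the estimation location.
C = cov(x_data[:, None] - x_data[None, :])
w = np.linalg.solve(C, z_data)

# Estimate anywhere as a sum of covariance "basis functions" centered on the data.
x_est = np.linspace(0.0, 100.0, 5)
z_est = cov(x_est[:, None] - x_data[None, :]) @ w
print(z_est)
```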

Last, but not least, Eq. (28.1) is compatible with a new breed of inverse methods in which the number of parameters \( K \) is variable, see Sambridge et al. (2012) and references therein. These transdimensional inverse methods show much promise to address some of the challenges highlighted in this chapter.

The beauty of Eq. (28.1) lies in its potential to unify object-based geological descriptions and mathematical descriptions. In a sense, the goal of the various workflows described in Fig. 28.1 and in the associated references can be seen as a quest to find “geological basis functions” to model the earth. The purpose of Sect. 28.4 is to try to establish a more explicit correspondence between geological concepts and existing mathematical and computational models for representing geological domains in three-dimensional space. In doing so, we keep in mind that these 3D models will eventually need to be expressed by Eq. (28.1) in physical models.

4 Geological Parameterizations

As discussed in Sect. 28.2, geologists apply the divide-and-conquer principle to analyze the earth. Hundreds of years of geological reasoning have essentially led to the identification of multiple geological features at various scales, classified according to their origin:

  • Tectonic objects: faults, joints, folds, cleavages.

  • Sedimentary objects: stratigraphic units, horizons and unconformities, sedimentary bodies, facies, bedding structures.

  • Intrusive and effusive objects: salt diapirs, salt sheets, shale diapirs, shale mounds, sills or dykes, lava flows, etc.

  • Epigenetic objects (originating from chemical and mineralogical processes after rock formation): metamorphic units, hydrothermalized facies, dissolved rocks (karsts).

These features typically exist at kilometric to micrometric scales (from plates to minerals and fluid inclusions). It is not useful (and not possible) for a model to explicitly represent all objects across these scales. Rather, most modeling approaches hierarchically subdivide the domain to represent a few nested scales (Pyrcz and Deutsch 2014; Ringrose and Bentley 2015).

Two main complementary mathematical and numerical frameworks exist to represent these geological features: spatial random fields and object-based methods. The choice of which framework is most appropriate (or whether and how these frameworks should be combined) depends on the size of the features with regard to the density of observations, and on the likely impact of the features on the question at hand. Whereas the object size can be objectively discussed and characterized, the assessment of the impact of features is often based on rules of thumb derived from experience (Ringrose and Bentley 2015). This may be a source of bias in forecasts. In practical studies, choices may also be constrained by practical considerations, as some methods are implemented only in commercial software or in distinct software packages that are not interoperable. These problems and the need for better, more abstract knowledge integration are also discussed by Perrin and Rainaud (2013).

4.1 Spatial Random Fields

As geological media are not random but result from many physical processes, the corresponding spatial fields are generally correlated in space. The characterization of this correlation structure by statistical inference is an essential aspect of geostatistics (Chiles and Delfiner 2012; Goovaerts 1997). Indeed, trust can be gained when data are numerous enough to provide robust statistics, even though the modeling assumptions themselves may remain questionable (Journel 2005). In inverse modeling of flow and transport in porous media, this has led to many approaches that perturb parameters on a grid while preserving variogram or spatial covariance models (de Marsily et al. 2005; Oliver and Chen 2011; Zhou et al. 2014a).

In geostatistics, one result of the divide-and-conquer strategy used in geology is the definition of many types of discrete categories to describe the physical world. These categories can be localized in space in the form of a geological map (or, in three dimensions, a 3D geological model). From a geostatistical standpoint, categories can be modeled with indicator variables. This has led to significant advances, in particular in the field of multiple-point geostatistics (MPS), to represent discrete facies from sparse data and analog training images. Since the seminal work of Guardiano and Srivastava (1993), a vast community of mathematical geoscientists has embraced this field and made essential advances, see Hu and Chugunova (2008) and Mariethoz and Caers (2014). In particular, MPS have opened concrete and effective ways of using complex (and deliberately subjective) geological prior models in inversion (Linde et al. 2015; Melnikova et al. 2015). MPS have shown, in a number of instances, the impact of applying analog reasoning and scenarios to find sensible sets of solutions to inverse problems and to assess uncertainties. They also provide an interesting formalism to analyze complex geological systems (Scheidt et al. 2016).

However, even though progress can still be made (see for instance Renard et al. 2011), a recurrent challenge with indicator geostatistical approaches is to ensure that some categories are always connected or adjacent to other categories. This is why, to echo a friendly discussion we had with Andre Journel in 2005, I persist in considering that there is more to geological realism than MPS (in its spatial understanding). The Truncated Gaussian and Pluri-Gaussian methods (Armstrong et al. 2011), even though they rely on multi-Gaussian assumptions, enforce continuity conditions that approach geological reasoning in a very interesting way. For instance, they can produce consecutive successions of facies from shallow marine to offshore environments. This type of method is appropriate when the discrete geological categories originate from an underlying continuous variable (in the previous example, this variable can be assimilated to bathymetry, all facies being defined between consecutive threshold values). In the Pluri-Gaussian approach, the application of Boolean operations to simulated random fields is also a way to emulate the succession of geological events (e.g., simulating late diagenetic facies that overprint the depositional facies).
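
The following sketch (an assumed, minimal illustration, with arbitrary covariance range and thresholds) conveys the truncated Gaussian idea: a single continuous Gaussian field, which can be read as a bathymetry proxy, is cut by ordered thresholds so that the simulated facies form a consecutive succession, up to the grid resolution.

```python
# A minimal sketch of truncated Gaussian simulation (not the cited implementations).
import numpy as np

rng = np.random.default_rng(0)
n = 30
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
pts = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)

# Smooth covariance with a 10-cell range; unconditional simulation by Cholesky.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
C = np.exp(-3.0 * (d / 10.0) ** 2) + 1e-6 * np.eye(n * n)  # small nugget for stability
field = np.linalg.cholesky(C) @ rng.standard_normal(n * n)

# Ordered thresholds: transitions pass through the intermediate facies
# (e.g., shoreface -> offshore transition -> offshore), up to grid resolution.
thresholds = [-0.5, 0.5]
facies = np.digitize(field, thresholds).reshape(n, n)  # facies codes 0, 1, 2
print(np.unique(facies, return_counts=True))
```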

In general, spatial random field methods are implemented on grids of fixed resolution. As a result, the discontinuities that may exist in the medium are sampled at that particular resolution. However, some important features such as fractures or shale lenses may be much smaller than the grid resolution, hence cannot be explicitly represented in the grid. Under some hypotheses, this can be addressed by directly modeling a field of equivalent properties assumed representative of the block scale (e.g., equivalent dual porosity and dual permeability fields in fractured media). However, this can be a source of bias in a number of cases (Jackson et al. 2014). The explicit consideration of these objects generally relies on fewer assumptions and provides a way to deal with more complex geometries and with spatial observations, as will be discussed in Sect. 28.4.2. Note that the two approaches are not mutually exclusive and a combination of equivalent and explicit representations is, in general, relevant (Bourbiaux et al. 2002; Maier et al. 2016).

Another important aspect of geological reality is that the orientation and the magnitude of spatial correlation can vary in space. This can be modeled with random fields using locally varying anisotropy (Boisvert et al. 2009; Stroet and Snepvangers 2005; Xu 1996). In geophysics, the use of local anisotropy is illustrated for instance by Clapp et al. (2004) and by the image-guided inversion methods mentioned in Sect. 28.2 and Fig. 28.1d. In the absence of exhaustive data to constrain these orientations, one has to estimate or simulate the orientations away from local observations (Gumiaux et al. 2003; Stroet and Snepvangers 2005; Xu 1996). A practical challenge in the presence of locally varying anisotropy is the inference of geostatistical parameters, as the domain is non-stationary. Object approaches offer another way of dealing with locally varying anisotropy, as will be discussed in the next section.
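
As a rough illustration (a simplified sketch only; the exponential covariance, ranges and orientation trend are assumptions, and positive definiteness of the resulting covariance matrix is not guaranteed in general by such local rotations), the covariance between two points can be evaluated with an anisotropic distance rotated by a locally defined structural orientation:

```python
# A minimal sketch of locally varying anisotropy in covariance evaluation.
import numpy as np

def local_aniso_cov(p, q, angle, ranges=(40.0, 10.0), sill=1.0):
    """Exponential covariance using the anisotropy angle assigned to point p."""
    c, s = np.cos(angle), np.sin(angle)
    dx, dy = q[0] - p[0], q[1] - p[1]
    # Rotate the separation vector into the local (major, minor) anisotropy frame.
    h_major = c * dx + s * dy
    h_minor = -s * dx + c * dy
    h = np.hypot(h_major / ranges[0], h_minor / ranges[1])  # normalized distance
    return sill * np.exp(-3.0 * h)

def angle_at(x, y):
    """Illustrative orientation trend (e.g., interpolated from dip measurements)."""
    return np.deg2rad(30.0 + 0.2 * x)

p, q = (10.0, 20.0), (40.0, 35.0)
print(local_aniso_cov(p, q, angle_at(*p)))
```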

4.2 Object Models

In a general sense, object models directly represent the tectonic, sedimentological, intrusive and epigenetic features listed at the beginning of Sect. 28.4. As geological objects originate from distinct geological processes at different periods of time, they often correspond to contrasts or discontinuities of the physical parameters of interest. This explains why, beyond pure cartographic goals, so much effort is dedicated to object modeling in geosciences.

Geometry and Topology

As discussed by Mallet (2002) and Perrin and Rainaud (2013), geological objects can be represented in geometrical and topological terms. Topology refers to essential characteristics: the dimension of objects (line, surface or volume), whether objects have inclusions or holes, and whether they are connected to other objects. Depending on the type of geological object, some topological configurations are impossible (Caumon et al. 2004). For instance, a chronostratigraphic horizon must be an open surface and may include internal holes due to faults or intrusions. More generally, the continuity of an object can be related to its genesis, hence is a way to constrain geological models. Knowing what is topologically possible and what is not gives precious insights to design modeling methods and to test the validity of geological models (Pellerin et al. 2017; Wellmann et al. 2014). Topological analysis also provides interesting metrics to characterize and understand geological objects such as karsts (Collon et al. 2017), fracture networks (Sanderson and Nixon 2015) and structural models (Lindsay et al. 2013; Pellerin et al. 2015; Thiele et al. 2016a, b). Last, but not least, topology is very important for flow modeling, as it directly relates to the connectivity of permeability conduits and barriers. The links between connectivity and effective flow properties have been much studied at multiple scales in the framework of percolation theory (Berkowitz and Balberg 1993; King et al. 2001). In cases where geological considerations are not sufficient to fully characterize the topology of the medium, specific methods have been proposed to find possible object geometries honoring some prescribed connectivity (Borghi et al. 2012; Collon-Drouaillet et al. 2012; Henrion et al. 2010).
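
A minimal sketch of such a connectivity analysis on a gridded binary model is given below (assumed example; the proportion threshold, grid size and the left-to-right percolation criterion are arbitrary choices, and scipy is used for connected-component labeling).

```python
# A minimal sketch of a topological/connectivity check on a gridded binary model:
# label connected permeable cells and test whether any component spans the domain.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
permeable = rng.random((100, 100)) < 0.55         # binary conduit/barrier model

labels, n_components = ndimage.label(permeable)   # 4-connected components by default
left = set(labels[:, 0]) - {0}                    # component ids touching left edge
right = set(labels[:, -1]) - {0}                  # component ids touching right edge
print("percolates left-to-right:", bool(left & right))
```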

Geometry concerns the embedding of the topological objects in 3D space, and is typically described either analytically (e.g., ellipses for fractures) or numerically (using a mesh). Meshes provide much flexibility to discretize the geometry of rock volumes (geological bodies), surfaces (geological boundaries) and lines (contacts between boundaries). All these geometric components are linked by topological relationships (Pellerin et al. 2017). More fundamentally, meshes are a way to define basis functions approximating the geometry of the true object. For example, a triangulated surface can be defined mathematically as a set of “hat” basis functions centered on the surface nodes (each taking the value 1 at its node and decreasing linearly to zero at the node's neighbors), as in Eq. (28.1). This description is very powerful to devise advanced geometry processing algorithms and to reduce the dimensionality of complex geometrical shapes (Vallet and Lévy 2008). In the framework of inverse modeling, several methods use the meshed model geometry as an unknown parameter (Fullagar et al. 2000; Gjøystdal et al. 1985; Mondal et al. 2010).

Over the past decade, computational advances have also made it possible to consider implicit surfaces to represent geological boundaries. In these approaches, the surfaces are considered as level sets of some three-dimensional scalar field (Calcagno et al. 2008; Cowan et al. 2003; Frank et al. 2007; Henrion et al. 2010). These methods share the same principles as the Truncated Gaussian and Pluri-Gaussian methods (Mannseth 2014), but the underlying random function model is not necessarily Gaussian, and their focus is on the geometry of object boundaries. Level set methods are very powerful to automate geometric modeling tasks such as interpolation and extrapolation. In particular, they have proven very useful in stratigraphic modeling, as a single scalar field can represent a conformable stratigraphic series at once, which opens new possibilities in structural data interpolation (Calcagno et al. 2008; Caumon et al. 2013; Hillier et al. 2014; Laurent et al. 2016). Implicit surfaces also offer convenient ways to consider the geometric model perturbations needed to address inverse problems in geosciences (Cardiff and Kitanidis 2009; Caumon et al. 2007; Noetinger 2013; Zheglova et al. 2013). A major distinction between explicit and implicit surface models concerns topological control: the surface topology has to be chosen before interpolation in explicit methods, whereas it emerges from the interpolation in implicit methods; see also Collon et al. (2016) for further discussion.

As in Pluri-Gaussian simulation, it is possible to indirectly account for geological time in object models using truncation between implicit or explicit objects (Calcagno et al. 2008; Caumon et al. 2009; Gjøystdal et al. 1985). Boolean operations also provide ways to obtain sharp features in object geometry using constructive solid geometry principles (Rongier et al. 2014; Ruiu et al. 2016). In terms of Eq. (28.1), Boolean operations between implicit objects can be described with indicator (or Heaviside) basis functions (Mannseth 2014; Moës et al. 2002): these functions are equal to zero on one side of the interface and equal to 1 on the other side. The representation of faults is a major challenge which is specific to geosciences. Indeed, faults are not just discontinuities or sharp geometric features: they result from the sliding of rock volumes that were previously connected. Several authors have proposed mathematical or numerical solutions to address this problem by considering, directly or indirectly, the displacement between the two sides of a fault (Calcagno et al. 2008; Georgsen et al. 2012; Hale 2013; Holden et al. 2003; Jessell and Valenta 1996; Laurent et al. 2013; Mallet 2002, 2014).
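
The following sketch (purely illustrative; the scalar fields, thresholds and the assumed geological history are invented for the example) combines two implicit objects with truncation rules to obtain a discrete rock-type model on a grid, in the spirit of Heaviside basis functions in Eq. (28.1).

```python
# A minimal sketch of implicit geological objects: boundaries are level sets of
# scalar fields, and truncation (Boolean) rules combine them into rock types.
import numpy as np

n = 100
x, z = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))

# Scalar field whose level sets are conformable (gently folded) horizons.
strati = z + 0.05 * np.sin(6.0 * x)

# Scalar field of an intrusion: negative inside an ellipse, zero on its boundary.
intrusion = ((x - 0.5) / 0.2) ** 2 + ((z - 0.4) / 0.15) ** 2 - 1.0

# Truncation rules applied in the order of the assumed geological history:
# layering first, then the intrusion cross-cuts (overprints) the units.
rock = np.digitize(strati, [0.35, 0.65])   # three stratigraphic units: 0, 1, 2
rock[intrusion < 0] = 3                    # intrusion truncates the layering
print(np.unique(rock, return_counts=True))
```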

From Objects to Physical Parameters

Generally, geological object geometry cannot be described analytically and determining the associated physical parameter fields is not straightforward. In most cases, objects are first discretized in space with a mesh that will support the numerical resolution of the physical equations (Kolditz et al. 2012; Pellerin et al. 2017). This mesh is a numerical translation of Eq. (28.1) that discretizes space into elementary volumes deemed representative of some effective physical properties (the values \( m_{k} \) in Eq. (28.1)).

A possible working assumption is to consider a constant (or analytically defined) parameter value associated with each type of geological object. This principle is used for simplicity in a number of numerical models (Gjøystdal et al. 1985; Jackson et al. 2015). However, as discussed above, heterogeneity exists at many different scales and can affect the physical process below the scale of the objects that are explicitly represented in a numerical model. For example, it is well known in stochastic hydrogeology and reservoir engineering that petrophysical heterogeneity exists within layers or sedimentary facies and impacts flow and transport (see for instance de Marsily et al. 2005 for a review). In many cases, the orientation of heterogeneities within a geological object depends on the object geometry (e.g., crystal orientations in a dyke may be preferentially aligned along the dyke boundaries; sedimentary heterogeneities tend to be more continuous along layers than across them). This can be addressed in modeling by explicitly using locally variable directions of anisotropy (Boisvert et al. 2009) or by considering a geometric transform between two spaces (Mallet 2014; Shtuka et al. 1996). This last option is very promising, as it provides a way to simplify geostatistical modeling and allows the definition of useful geological variables such as the apparent sedimentation rate (Kedzierski et al. 2007; Mallet 2014; Massonnat 1999). Such use of indirect geological parameters is an essential and powerful way to introduce geological principles in earth models.
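
A very simplified sketch of this two-space idea is given below (assumed and illustrative only; it is not the construction of the cited references): property values are defined as a function of a stratigraphic coordinate, so that the heterogeneity follows the folded layering once read back in physical space.

```python
# A minimal sketch of modeling heterogeneity in a "depositional" coordinate u
# and mapping it back to physical (x, z) space.
import numpy as np

rng = np.random.default_rng(1)
n = 120
x, z = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))

# Stratigraphic coordinate: level sets of u are the (gently folded) layers.
u = z + 0.08 * np.sin(5.0 * x)

# Property drawn per thin layer in u-space: continuous along layers,
# variable across them once mapped back to physical space.
n_layers = 40
layer_values = 0.15 + 0.05 * rng.standard_normal(n_layers)
layer_index = np.clip((u * n_layers).astype(int), 0, n_layers - 1)
porosity = layer_values[layer_index]
print(porosity.shape, porosity.min(), porosity.max())
```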

Nonetheless, one should not neglect that object geometry affects model predictions at the two main stages of geostatistical modeling: (1) geostatistical inference (distributions of continuous variables within each subdomain, multivariate relationships between different variables, trends, spatial variability) and (2) geostatistical modeling (interpolation or simulation). Separating integrated modeling into an object-modeling phase and a petrophysical modeling phase is, therefore, relatively easy in the classical case where objects are known, where a clear separation of scales exists between representative elementary volume (REV) properties and object geometry, and where objects do not affect geostatistical parameters. However, uncertainty about object geometry and topology can have a significant impact on statistical parameters (Lallier et al. 2016), which can be a significant source of complexity in practical studies. More generally, finding at what scale explicit object properties and REV effective properties can be separated is a fundamental problem in modeling. Therefore, more research is clearly needed to capture the interactions between object geometric (and topological) parameters and random field parameters.

Object Uncertainty

Geometric uncertainty can be sampled by adding geometric perturbations to an existing reference model (Caumon et al. 2007; Corre et al. 2000; Lecour et al. 2001) or by creating several models after perturbing the data (Lindsay et al. 2013; Wellmann et al. 2010). As the very existence of some objects is also uncertain in many cases, it is also useful to consider object-based stochastic simulation. In random set theory, geometric objects are placed randomly and independently in the domain by combining the simulation of points (a Poisson point process) and the simulation of object shapes around these points (see Chiles and Delfiner 2012 and references therein; Lantuéjoul 2002). Classically, objects are geometric primitives defined analytically, whose shape, orientation and size parameters are simulated from some input distribution. Random set theory places a lot of emphasis on the statistical aspects of this process and on conditioning to spatial data, see in particular Lantuéjoul (2002) and Allard et al. (2006). These models, in particular the Boolean model, have been used to simulate many types of geological objects such as fractures (Chiles 1988), shale lenses (Haldorsen and Lake 1984) or sedimentary channels (Deutsch and Wang 1996; Holden et al. 1998). Extensions of the Boolean model have also been proposed to introduce interactions between objects, such as attraction or repulsion between fractures to reproduce their mechanical interactions (Aydin and Caers 2017; Bonneau et al. 2016; Chiles 1988; Hollund et al. 2002).
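
A minimal, unconditional sketch of this construction is given below (assumed example; the intensity, ellipse shapes and size distributions are arbitrary): object centers follow a Poisson point process and each object is an ellipse with randomly drawn size and orientation, the union of which defines a binary object indicator.

```python
# A minimal sketch of a Boolean model: Poisson-distributed object centers and
# randomly sized, randomly oriented elliptical objects rasterized on a grid.
import numpy as np

rng = np.random.default_rng(42)
nx, ny, intensity = 200, 200, 0.003          # grid size; expected objects per cell
xx, yy = np.meshgrid(np.arange(nx), np.arange(ny))
indicator = np.zeros((ny, nx), dtype=bool)

n_obj = rng.poisson(intensity * nx * ny)     # Poisson number of objects
for _ in range(n_obj):
    cx, cy = rng.uniform(0, nx), rng.uniform(0, ny)      # Poisson point (center)
    a, b = rng.uniform(8, 20), rng.uniform(2, 6)         # semi-axes (e.g., lens shape)
    theta = rng.uniform(0, np.pi)                        # orientation
    u = (xx - cx) * np.cos(theta) + (yy - cy) * np.sin(theta)
    v = -(xx - cx) * np.sin(theta) + (yy - cy) * np.cos(theta)
    indicator |= (u / a) ** 2 + (v / b) ** 2 <= 1.0      # rasterize the ellipse

print("object proportion:", indicator.mean())
```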

From a random set perspective, a deterministic object model is a particular realization of some underlying random set process. In this case, the relatively high data density allows one to focus mainly on the data conditioning problem rather than on the number of objects and on their spatial density. Another focus of deterministic object modeling approaches relates to the expert-guided definition of interactions between objects, using interactive editing tools to ensure that the connectivity between objects is compatible with the geological history of the domain (e.g., how faults branch onto one another and how they displace horizons).

Increasingly complex geometric object parameterizations have recently been introduced in object-based simulation methods. For instance, several authors propose to anchor sedimentary channels on discrete polygonal curves (Mariethoz et al. 2014; Pyrcz et al. 2009; Rongier et al. 2017; Ruiu et al. 2016; Viseur 2004). Other variants consider the bounding surfaces of stratigraphic deposits together with some rules to mimic depositional processes (Graham et al. 2015; Labourdette 2008; Michael et al. 2010; Pyrcz et al. 2005, 2015; Rongier et al. 2017; Ruiu et al. 2016; Sech et al. 2009). As argued in the review of Pyrcz et al. (2015), these models make it possible to consider genetic principles such as erosion, progradation and aggradation of sedimentary deposits in an automatic way. Similarly, pseudo-process-based models have been proposed in the area of fracture modeling to approximate the mechanical interactions and truncations that occur during fracture growth (Bonneau et al. 2013; Davy et al. 2013; Srivastava et al. 2005). At a larger scale, a recent trend has been to simulate possible stochastic geometries where the number and the connectivity of faults are variable (Aydin and Caers 2017; Cherpeau et al. 2010, 2012; Cherpeau and Caumon 2015; Holden et al. 2003; Julio et al. 2015a). In all these approaches, the use of rules is often a means to generate realistic objects and to produce likely connectivities and spatial features without being constrained by some input grid resolution. However, conditioning to dense spatial data sets remains challenging with these approaches. A possible way forward is to consider parameter-rich object models and to apply process-based rules backward in time (Parquer et al. 2016; Ruiu et al. 2015). In all cases, expert control of model realism is also difficult and may call for additional “geological likelihood” functions to scrutinize the realizations (Jessell et al. 2010).

Interestingly, the use of continuous functions around the Poisson points used in object-based simulation (the Random Function Model of Jeulin (2002)) is a possible way to relate random sets to Eq. (28.1). However, formalizing the link between object models and the basis functions used in physical models is not easy, and relies on the assumptions that values are analytically defined on each object and that objects have stationary statistics (Jeulin 2012; Oda 1986). Dealing with more realistic geometries and with sequential Boolean operations that reproduce the succession of geological events calls for further numerical and mathematical developments. Meanwhile, as the statistical properties of random sets are not easily checked in practical cases, the numerical approach relating objects to physics clearly remains an area of much interest (Botella et al. 2016; Cacace and Blöcher 2015; Karimi-Fard and Durlofsky 2016; Merland et al. 2014; Mustapha 2011; Pellerin et al. 2014; Zehner et al. 2015).

5 Conclusions and Challenges

Several complementary ways exist to incorporate geological information in earth models (Fig. 28.2): spatial statistics, geological variables, the geometry and topology of geological objects, and explicit geological process modeling. Links exist between the random field and object-based frameworks in cases where canonical random field theory is applicable (e.g., homogeneous and stationary object densities). This forms the rationale for most modeling methods, where “small” objects are treated through their (spatially correlated) equivalent properties at the representative elementary volume scale, while “large” objects are modeled explicitly using rules and parameters that incorporate geological principles and may be calibrated from data and analogs.

Fig. 28.2

Summary of the various complementary ways to incorporate geological knowledge in earth models

Although geostatistics has proven an invaluable theoretical framework to rigorously describe geological domains, it needs to be complemented by geological reasoning (sensu Frodeman 1995). Namely, considering discrete time steps approximating geological history, as well as geological variables which cannot be directly measured, can significantly help in generating more predictive geological models, which may not always have stationary statistical properties. Geometric and topological interactions between objects have a direct connection to geological history and prove a powerful tool to characterize geological domains.

From a physical modeling perspective, geometric object models allow the representation of small spatial features which can have a large impact on physical processes (Jackson et al. 2015; Julio et al. 2015b; Matthäi et al. 2007). This calls for specific developments in meshing and physical simulation, for example to better account for object features directly in the numerical code (Pichot et al. 2012). In the framework of inverse problems, sensitivity analysis is essential in practical studies. Theoretically, specific methods integrating the probability of existence of objects also need to be considered more widely, such as random vector parameterization (Cherpeau et al. 2012), reversible jump Markov chain Monte Carlo simulation (Green 1995; Sambridge et al. 2012) or ensemble-based methods (Scheidt and Caers 2009). Both in forward and inverse physical models, an additional and significant challenge is to better characterize the multi-scale interactions between geometric and petrophysical parameterizations (basis functions and associated parameter values).