
Can a Training Image Be a Substitute for a Random Field Model?

  • Special Issue
  • Mathematical Geosciences

Abstract

In most multiple-point simulation algorithms, all statistical features are provided by one or several training images (TI) that serve as a substitute for a random field model. However, because in practice the TI is always of finite size, the stochastic nature of multiple-point simulation is questionable. This issue is addressed by considering the case of a sequential simulation algorithm applied to a binary TI that is a genuine realization of an underlying random field. At each step, the algorithm uses templates containing the current target point as well as all previously simulated points. The simulation is validated by checking that all statistical features of the random field (supported by the simulation domain) are retrieved as an average over a large number of outcomes. The results are as follows. It is demonstrated that multiple-point simulation performs well whenever the TI is a complete (infinitely large) realization of a stationary, ergodic random field. As soon as the TI is restricted to a limited domain, the statistical features cannot be obtained exactly, but integral range techniques make it possible to predict how much the TI should be extended to approximate them up to a prespecified precision. Moreover, one can take advantage of extending the TI to reduce the number of disruptions in the execution of the algorithm, which arise when no conditioning template can be found in the TI.
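To fix ideas, the kind of sequential algorithm studied in the paper can be sketched as follows. This is a minimal, purely illustrative one-dimensional version (the paper works on two-dimensional grids): at each step the conditioning template is the whole set of previously simulated values, matched against every position of the TI, and the target point is drawn from the resulting conditional frequency. The function name and the 1-D setting are assumptions of this sketch, not the paper's actual implementation.

```python
import random

def sequential_simulation(ti, size, seed=0):
    """Sequentially simulate a binary sequence from a 1-D training image.

    Illustrative sketch only: the conditioning template at each step is
    the entire set of previously simulated values, matched against every
    position of the TI.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(size):
        k = len(out)
        n0 = n1 = 0
        # scan the TI for occurrences of the current conditioning template
        for i in range(len(ti) - k):
            if list(ti[i:i + k]) == out:
                if ti[i + k] == 1:
                    n1 += 1
                else:
                    n0 += 1
        if n0 + n1 == 0:
            # disruption: no conditioning template found in the TI
            raise RuntimeError("no matching template in the TI")
        p = n1 / (n0 + n1)  # conditional probability of a 1 at the target
        out.append(1 if rng.random() < p else 0)
    return out
```

The `RuntimeError` branch corresponds to the "disruptions" mentioned in the abstract: a conditioning configuration that occurs nowhere in the finite TI.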


Figs. 1–7 (not reproduced)



Acknowledgements

The authors are grateful to the editors and the three anonymous reviewers for their constructive comments on a former version of this paper.

Corresponding author

Correspondence to Xavier Emery.

Appendices

Appendix A: Proof of Algorithm 1

The proof is established by induction. Let \(S_0\) and \(S_1\) be the current level sets of the outcome, and suppose that \(\operatorname{Prob} \{ Z(S_{0})=0, Z(S_{1})=1 \} = \beta > 0\). Put \(p = \lim_{\lambda\rightarrow\infty} \frac{N_{\lambda}}{D_{\lambda}}\) with

$$N_\lambda= \frac{\# [ \eta^I_{T_1} \cap Sq(\lambda) ] }{\# Sq(\lambda)} \qquad D_\lambda= \frac{\# [ \eta^I_{T_0} \cap Sq(\lambda) ] + \# [ \eta^I_{T_1} \cap Sq(\lambda) ]}{ \# Sq(\lambda)}. $$

By the ergodic property (Eq. (1)), one has

$$\begin{aligned} \lim_{\lambda\rightarrow\infty} N_\lambda&= \operatorname{Prob} \bigl\{ Z(S_0) = 0, Z \bigl(S_1 \cup\{\bold x\} \bigr) = 1 \bigr\} \\ \lim_{\lambda\rightarrow\infty} D_\lambda&= \operatorname{Prob} \bigl\{ Z \bigl(S_0 \cup\{ \bold x\} \bigr) = 0, Z(S_1) = 1 \bigr \} + \operatorname{Prob} \bigl\{ Z(S_0) = 0, Z \bigl(S_1 \cup\{\bold x\} \bigr) = 1 \bigr\} \\ &= \operatorname{Prob} \bigl\{ Z(S_0) = 0, Z(S_1) = 1 \bigr\}. \end{aligned}$$

As \(\lim_{\lambda\rightarrow\infty} D_\lambda > 0\), it follows that

$$p = \frac{\lim_{\lambda\rightarrow\infty} N_\lambda}{\lim _{\lambda\rightarrow\infty} D_\lambda} = \operatorname{Prob} \bigl \{ Z(\bold x) = 1 \, \vert\, Z(S_0) = 0, Z(S_1) = 1 \bigr\}. $$

Let \(S'_{0}\) and \(S'_{1}\) be the next level sets obtained once \(\bold x\) has been allocated. Note that p can take any value in [0,1]. If p<1, step (iv) of Algorithm 1 shows that \(\bold x\) can be assigned the value 0, in which case \(S'_{0} = S_{0} \cup\{\bold x\}\) and \(S'_{1} = S_{1}\), and one has \(\operatorname{Prob} \{ Z(S'_{0}) = 0 , Z(S'_{1}) = 1 \} = (1-p) \beta > 0\). Similarly, if p>0, then \(\bold x\) can be assigned the value 1, in which case \(S'_{0} = S_{0}\) and \(S'_{1} = S_{1} \cup\{\bold x\}\), and one has \(\operatorname{Prob} \{ Z(S'_{0}) = 0 , Z(S'_{1}) = 1 \} = p \beta > 0\). Consequently, one has \(\operatorname{Prob} \{ Z(S'_{0}) = 0 , Z(S'_{1}) = 1 \} > 0\) whatever the allocation of \(\bold x\). The induction hypothesis is thus preserved, which proves the correctness of the sequential algorithm for infinite TIs.
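For a finite TI, the ratio \(p = N_\lambda / D_\lambda\) above can be computed by exhaustive template scanning. The sketch below is a hypothetical numpy-based helper (not the paper's code), with the target point \(\bold x\) placed at offset (0, 0) and, for simplicity, all template offsets assumed non-negative; the scanning window \(Sq(\lambda)\) is taken to be the full extent of the TI.

```python
import numpy as np

def conditional_prob(ti, s0, s1):
    """Estimate p = Prob{Z(x) = 1 | Z(S0) = 0, Z(S1) = 1} from a 2-D binary TI.

    s0 and s1 list the (row, col) offsets, relative to the target point x
    at offset (0, 0), of the points conditioned to 0 and to 1.
    """
    rows, cols = ti.shape
    offs = s0 + s1 + [(0, 0)]
    rmax = max(dr for dr, _ in offs)
    cmax = max(dc for _, dc in offs)
    n = d = 0  # counts behind N_lambda and D_lambda
    for r in range(rows - rmax):
        for c in range(cols - cmax):
            if (all(ti[r + dr, c + dc] == 0 for dr, dc in s0)
                    and all(ti[r + dr, c + dc] == 1 for dr, dc in s1)):
                d += 1
                n += int(ti[r, c])  # value at the target point x
    if d == 0:
        # disruption: no conditioning template occurs in the TI
        raise RuntimeError("no matching template in the TI")
    return n / d
```

On a checkerboard TI, for instance, conditioning a neighbour to 0 forces the target to 1, so the estimated p is exactly 1.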

Appendix B: Proof of Eqs. (3) and (4)

To calculate \(\operatorname{Prob} \{ Z (K_{0}) = 0 \}\), the starting point is to express that none of the Boolean objects hits \(K_0\):

$$\operatorname{Prob} \bigl\{ Z (K_0) = 0 \bigr\} = \operatorname{Prob} \bigl\{ \forall\bold u \in\mathbb{Z}^2 , \forall n \leq N( \bold u) , \tau_{\bold{u}} A_{\bold u,n} \cap K_0 = \varnothing\bigr\}. $$

The right-hand side is now expanded using the fact that the Boolean model consists of independent objects, in numbers that are independent Poisson random variables:

$$\begin{aligned} \operatorname{Prob} \bigl\{ Z (K_0) = 0 \bigr\} &= \prod _{\bold u \in\mathbb{Z}^2} \operatorname{Prob} \bigl\{ \forall n \leq N(\bold u) , \tau_{\bold{u}} A_{\bold u,n} \cap K_0 = \varnothing\bigr \} \\ &= \prod_{\bold u \in\mathbb{Z}^2} \sum_{n=0}^\infty \exp(- \theta) \frac {\theta^n}{n!} \bigl(\operatorname{Prob} \{ \tau_{\bold{u}} A \cap K_0 = \varnothing\} \bigr)^n \\ &= \prod_{\bold u \in\mathbb{Z}^2} \exp\bigl( - \theta+ \theta \operatorname{Prob} \{ \tau_{\bold{u}} A \cap K_0 = \varnothing\} \bigr) \\ &= \prod_{\bold u \in\mathbb{Z}^2} \exp\bigl( - \theta \operatorname{Prob} \{ \tau_{\bold{u}} A \cap K_0 \neq \varnothing \} \bigr). \end{aligned}$$

Moreover, one has \(\tau_{\bold{u}} A \cap K_{0} \neq\varnothing\) if and only if \(A \cap\tau_{-\bold u} K_{0} \neq\varnothing\), that is, \(- \bold u \in\delta_{K_{0}} A\). Accordingly,

$$\begin{aligned} \operatorname{Prob} \bigl\{ Z (K_0) = 0 \bigr\} &= \exp\biggl( - \theta\sum_{\bold u \in\mathbb{Z} ^2} \operatorname{Prob} \{ - \bold u \in \delta_{K_0} A \} \biggr) \\ &= \exp\biggl( - \theta\sum_{\bold u \in\mathbb{Z}^2} E \{ 1_{- \bold u \in\delta_{K_0} A} \} \biggr) \\ &= \exp\biggl( - \theta E \biggl\{ \sum_{\bold u \in\mathbb{Z}^2} 1_{- \bold u \in\delta_{K_0} A} \biggr\} \biggr) \\ &= \exp\bigl( - \theta E \{ \# \delta_{K_0} A \} \bigr) \end{aligned}$$

as announced in Eq. (3).
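Equation (3) can be sanity-checked by simulation in a simple discrete case: take A to be a deterministic 2×2 square, so that for \(K_0 = \{(0,0)\}\) the dilation satisfies \(E \{ \# \delta_{K_0} A \} = 4\) and Eq. (3) predicts \(\operatorname{Prob} \{ Z(K_0) = 0 \} = e^{-4\theta}\). The Monte Carlo sketch below is an illustrative helper (not from the paper): germs are restricted to a window wide enough to contain every germ whose translated object could hit \(K_0\), each germ carrying a Poisson(θ) number of objects.

```python
import math
import random

def boolean_model_misses(theta, shapes, k0, rng, margin=3):
    """One draw of a discrete Boolean model; True iff Z(K0) = 0.

    Germs live on a window of Z^2 extending `margin` cells beyond K0,
    assumed wide enough for the object shapes used. Each germ carries a
    Poisson(theta) number of objects, each an independent copy of A
    drawn uniformly from `shapes`.
    """
    k0set = set(k0)
    rmin = min(r for r, _ in k0) - margin
    rmax = max(r for r, _ in k0) + margin
    cmin = min(c for _, c in k0) - margin
    cmax = max(c for _, c in k0) + margin
    for ur in range(rmin, rmax + 1):
        for uc in range(cmin, cmax + 1):
            # sample a Poisson(theta) object count by inversion
            n, p = 0, math.exp(-theta)
            s, u = p, rng.random()
            while u > s:
                n += 1
                p *= theta / n
                s += p
            for _ in range(n):
                a = rng.choice(shapes)
                if any((ur + dr, uc + dc) in k0set for dr, dc in a):
                    return False  # some object hits K0
    return True
```

With θ = 0.1 and a few tens of thousands of draws, the empirical frequency of \(Z(K_0) = 0\) falls close to \(e^{-0.4} \approx 0.670\), as Eq. (3) predicts.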

To prove Eq. (4), rewrite the probability as the expectation of an indicator function:

$$\begin{aligned} \operatorname{Prob} \bigl\{ Z(K_0) = 0, Z(K_1)=1 \bigr \} &= E \biggl\{ 1_{ Z(K_0) = 0 } \, \prod_{\bold x_1 \in K_1} 1_{Z(\bold x_1) = 1} \biggr\} \\ &= E \biggl\{ 1_{ Z(K_0) = 0 } \, \prod_{\bold x_1 \in K_1} ( 1 - 1_{Z(\bold x_1) = 0} ) \biggr\}. \end{aligned}$$

Expanding the product over \(K_1\), one obtains

$$\begin{aligned} \operatorname{Prob} \bigl\{ Z(K_0) = 0, Z(K_1)=1 \bigr \} &= E \biggl\{ 1_{ Z(K_0) = 0 } \sum_{L \subset K_1} (-1)^{\# L} 1_{Z(L) = 0} \biggr\} \\ &= \sum_{L \subset K_1} (-1)^{\# L} E \{ 1_{ Z(K_0 \cup L) = 0 } \} \\ &= \sum_{L \subset K_1} (-1)^{\# L} \operatorname{Prob} \bigl\{ Z(K_0 \cup L) = 0 \bigr\}. \end{aligned}$$

Equation (4) is derived by replacing \(\operatorname{Prob} \{ Z(K_{0} \cup L) = 0 \}\) by its expression given in Eq. (3).
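The expansion used here is a finite inclusion–exclusion identity, and it can be verified exhaustively for any distribution over binary maps on a small set of sites. The sketch below (hypothetical helper names; the pmf is an arbitrary normalized distribution over \(\{0,1\}^4\), standing in for any random field on four sites) compares the two sides:

```python
import random
from itertools import combinations, product

def prob_zero(pmf, sites):
    """Prob{Z(sites) = 0} under a pmf over binary maps (tuples of 0/1)."""
    return sum(p for z, p in pmf.items() if all(z[x] == 0 for x in sites))

def lhs(pmf, k0, k1):
    """Prob{Z(K0) = 0, Z(K1) = 1}, computed directly."""
    return sum(p for z, p in pmf.items()
               if all(z[x] == 0 for x in k0) and all(z[x] == 1 for x in k1))

def rhs(pmf, k0, k1):
    """Inclusion-exclusion: sum over L in K1 of (-1)^#L Prob{Z(K0 u L) = 0}."""
    return sum((-1) ** k * prob_zero(pmf, list(k0) + list(sub))
               for k in range(len(k1) + 1)
               for sub in combinations(k1, k))

# an arbitrary random field on 4 sites: a random pmf over {0,1}^4
rng = random.Random(42)
states = list(product((0, 1), repeat=4))
weights = [rng.random() for _ in states]
pmf = {z: w / sum(weights) for z, w in zip(states, weights)}
```

Because the identity holds pointwise for every realization, the two sides agree exactly (up to floating-point rounding) for any choice of pmf, \(K_0\), and \(K_1\).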

Rights and permissions

Reprints and permissions

About this article

Cite this article

Emery, X., Lantuéjoul, C. Can a Training Image Be a Substitute for a Random Field Model? Math Geosci 46, 133–147 (2014). https://doi.org/10.1007/s11004-013-9492-z
