Abstract
Kriging is a standard method for conditioning surfaces to observations. Kriging works for vertical wells, but may produce surfaces that cross horizontal wells between surface observations. We establish an approach that also works for horizontal wells, where surfaces are modeled as a set of correlated Gaussian random fields. The constraints imposed by the horizontal wells make the conditional surfaces non-Gaussian. We present a method for exact conditional simulation and an approximation for prediction and prediction uncertainty. Thousands of constraints can be handled efficiently without numerical instabilities. The approach is illustrated with synthetic and real examples that show how the constraints influence the surfaces and reduce uncertainty.
References
Abrahamsen, P.: Bayesian kriging for seismic depth conversion of a multi-layer reservoir. In: Soares, A. (ed.) Geostatistics Tróia ’92, pp 385–398. Kluwer Academic Publ., Dordrecht (1993). https://doi.org/10.1007/978-94-011-1739-5_31
Abrahamsen, P.: A review of Gaussian random fields and correlation functions. Report 917, Norwegian Computing Center, P.O. Box 114 Blindern, N-0314 Oslo, Norway (1997)
Abrahamsen, P., Benth, F.E.: Kriging with inequality constraints. Math. Geol. 33(6), 719–744 (2001). https://doi.org/10.1023/A:1011078716252
Abrahamsen, P., Dahle, P., Hauge, V.L., Almendral-Vazquez, A., Vigsnes, M.: Surface prediction using rejection sampling to handle non-linear constraints. Bulletin of Canadian Petroleum Geology 63(4), 304–317 (2015). https://doi.org/10.2113/gscpgbull.63.4.304
Abrahamsen, P., Kvernelv, V., Barker, D.: Simulation of Gaussian random fields using the fast Fourier transform (FFT). In: Proceedings of ECMOR XVI - 16th European Conference on the Mathematics of Oil Recovery, pp. 1–14. EAGE (2018). https://doi.org/10.3997/2214-4609.201802134
Abrahamsen, P., Dahle, P., Kvernelv, V., Sektnan, A., Vázquez, A.A., Aarnes, I.: Cohiba User Manual (2020). https://nr.no/en/industries/natural-resources/cohiba/
Armstrong, M., Galli, A., Beucher, H., Le Loc’h, G., Renard, D., Doligez, B., Eschard, R., Geffroy, F.: Plurigaussian Simulations in Geosciences. Springer-Verlag, Berlin (2011)
Choi, K.-M., Christakos, G., Serre, M.L.: Recent developments in vectorial and multi-point BME. In: Buccianti, A., Nardi, G., Potenza, R. (eds.) Proceedings of IAMG’98, pp 91–96. Isola d’Ischia, Naples (1998)
Chopin, N.: Fast simulation of truncated Gaussian distributions. Statistics and Computing 21(2), 275–288 (2011). https://doi.org/10.1007/s11222-009-9168-1
Dahle, P., Almendral-Vazquez, A., Abrahamsen, P.: Simultaneous prediction of geological surfaces and well paths. In: Petroleum Geostatistics 2015, EAGE, pp 5. https://doi.org/10.3997/2214-4609.201413588 (2015)
Diamond, P.: Interval-valued random functions and the kriging of intervals. Math. Geol. 20(3), 145–165 (1988). https://doi.org/10.1007/BF00890251
Dubrule, O., Kostov, C.: An interpolation method taking into account inequality constraints. 1. Methodology. Math. Geol. 18(1), 33–51 (1986). https://doi.org/10.1007/BF00897654
Emery, X., Arroyo, D., Pelaez, M.: Simulating large Gaussian random vectors subject to inequality constraints by Gibbs sampling. Math. Geol. 46(3), 265–283 (2014). https://doi.org/10.1007/s11004-013-9495-9
Freulon, X., de Fouquet, C.: Conditioning a Gaussian model with inequalities. In: Soares, A. (ed.) Geostatistics Tróia ’92, pp 201–212 (1993). https://doi.org/10.1007/978-94-011-1739-5_17
Fridley, B.L., Dixon, P.: Data augmentation for a Bayesian spatial model involving censored observations. Environmetrics 18(2), 107–123 (2007). https://doi.org/10.31274/RTD-180813-12076
Journel, A.: Constrained interpolation and qualitative information - the soft Kriging approach. Math. Geol. 18(3), 269–305 (1986). https://doi.org/10.1007/BF00898032
Konstantinos, D.K., Konstantinos, E.T., Athanasios, A.R.: On the efficient estimation of the mean of multivariate truncated normal distributions. arXiv:1307.0680 (2014)
Kostov, C., Dubrule, O.: An interpolation method taking into account inequality constraints. 2. Practical approach. Math. Geol. 18(1), 53–73 (1986). https://doi.org/10.1007/BF00897655
Langlais, V.: On the neighborhood search procedure to be used when kriging with constraints. In: Armstrong, M. (ed.) Geostatistics, pp. 603–614. Springer Netherlands, Dordrecht (1989), https://doi.org/10.1007/978-94-015-6844-9_47
Maatouk, H., Bay, X.: Gaussian process emulators for computer experiments with inequality constraints. Math. Geol. 49(5), 557–582 (2017). https://doi.org/10.1007/s11004-017-9673-2
Mannini, A., Pearse, S.: How big an elephant can be. In: 76th EAGE Conference & Exhibition 2014 Amsterdam RAI, The Netherlands, 16-19 June 2014, pp. 1–5 (2014), https://doi.org/10.3997/2214-4609.20141503
Mardia, K.V., Kent, J.T., Bibby, J.M.: Multivariate Analysis. Academic Press Inc., London (1979)
Michalak, A.M.: A Gibbs sampler for inequality-constrained geostatistical interpolation and inverse modeling. Water Resources Research 44(9) (2008). https://doi.org/10.1029/2007WR006645
Militino, A.F., Ugarte, M.D.: Analyzing censored spatial data. Math. Geol. 31(5), 551–561 (1999). https://doi.org/10.1023/A:1007516023962
De Oliveira, V.: Bayesian inference and prediction of Gaussian random fields based on censored data. Journal of Computational and Graphical Statistics 14(1), 95–115 (2005). https://doi.org/10.1198/106186005X27518
Omre, H., Halvorsen, K.B.: The Bayesian bridge between simple and universal kriging. Math. Geol. 21(7), 767–786 (1989). https://doi.org/10.1007/BF00893321
Robert, C.P.: Simulation of truncated normal variables. Statistics and Computing 5, 121–125 (1995). https://doi.org/10.1007/BF00143942
Rosenbaum, S.: Moments of a truncated bivariate normal distribution. Journal of the Royal Statistical Society. Series B (Methodological) 23, 405–408 (1961). http://www.jstor.org/stable/2984029
Stein, M.L.: Prediction and inference for truncated spatial data. Journal of Computational and Graphical Statistics 1(1), 91–110 (1992). https://doi.org/10.2307/1390602
Stenerud, V.R., Kallekleiv, H., Abrahamsen, P., Dahle, P., Skorstad, A., Viken, M.H.A.: Added value by fast and robust conditioning of structural surfaces to horizontal wells for real-world reservoir models. In: Proceedings of the SPE ATCE, page 8 (2012), https://doi.org/10.2118/159746-MS
Tanner, M.A., Wong, W.H.: The calculation of posterior distributions by data augmentation (with discussion). J. Amer. Statist. Assoc. 82(398), 528–550 (1987). https://doi.org/10.2307/2289457
Vigsnes, M., Kolbjørnsen, O., Hauge, V.L., Dahle, P., Abrahamsen, P.: Fast and accurate approximation to kriging using common data neighborhoods. Math. Geol. 49(5), 619–634 (2017). https://doi.org/10.1007/s11004-016-9665-7
Wellmann, J.F., Caumon, G.: 3-D Structural geological models: Concepts, methods, and uncertainties, volume 59 of Advances in Geophysics, pp 1–121. Elsevier, Amsterdam (2018). https://publications.rwth-aachen.de/record/750586
Data availability

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
Appendices
Appendix A: Gibbs sampler for truncated multi-Gaussian distribution
The time-consuming part of the Data Augmentation Algorithm is the rejection sampling in line 5, since acceptance rates are minute in realistic examples. The number of sample points and constraints can be in the hundreds or more. We therefore aim at an efficient rejection sampler for the multivariate truncated Gaussian distribution, where the goal is to obtain a valid sample \(\mathbf{z}^{I}_{(s)}\). One possibility is to use the Gibbs sampler in [27]. The essential idea is to sequentially update each single variable conditioned on all other variables and to loop through all variables many times.
Let \(\mathbf{z} = \mathbf{z}^{I}_{(s)} = \{z_{1}, \dots, z_{n}\}\) be one sample \(s\) of the constrained surface points \(\mathbf{z}^{I}\), with size \(n = 2 \cdot N^{I}\). We next define an iterate \(\mathbf{z}^{k}\) that approximates the sample \(\mathbf{z}\) at each iteration of the Gibbs sampler. Starting with an initial sample \(\mathbf{z}^{0} = \{z_{1}^{0}, \dots, z_{n}^{0}\}\), we obtain sample \(k\) from sample \(k-1\) by sequentially drawing from the univariate distribution
for \(i = 1, \dots, n\). Here \(\mathcal{N}_{z_{i} \in R_{i}}\) is the univariate truncated Gaussian distribution, truncated to an interval \(R_{i} = (-\infty, {z^{I}_{i}})\) or \(R_{i} = ({z^{I}_{i}}, \infty)\), and the symbol \(\underline{i}\) denotes all indices except \(i\). The acceptance rate in each step will normally be quite low, but it can be improved by using the optimal accept-reject algorithm in [27]. The order of the updates is arbitrary, so we may update any permutation \(z_{j_{1}}, \dots, z_{j_{n}}\) of the components \(z_{1}, \dots, z_{n}\). A permutation that handles the hardest constraints first improves the convergence of this sampler.
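For concreteness, a minimal sketch of these single-site updates is given below. This is our illustration rather than the implementation used in the paper; the function name gibbs_sweeps and its arguments are hypothetical, and the univariate truncated draws rely on scipy.stats.truncnorm.

```python
# Minimal sketch of the univariate Gibbs update in Eq. 18 for a truncated
# multivariate Gaussian with mean mu, covariance Sigma, and one-sided
# truncation intervals given by lower/upper (one bound infinite per variable).
import numpy as np
from scipy.stats import truncnorm

def gibbs_sweeps(mu, Sigma, lower, upper, z0, n_sweeps=1000, rng=None):
    """Run n_sweeps full sweeps of single-site Gibbs updates."""
    rng = np.random.default_rng(rng)
    n = len(mu)
    V = np.linalg.inv(Sigma)          # precision matrix, reused in every update
    z = np.array(z0, dtype=float)
    for _ in range(n_sweeps):
        for i in range(n):            # a fixed order; any permutation is valid
            # Conditional moments of z_i given all other components,
            # expressed through the precision matrix V.
            cond_var = 1.0 / V[i, i]
            cond_mean = mu[i] - cond_var * V[i, np.arange(n) != i] @ (
                np.delete(z, i) - np.delete(mu, i))
            sd = np.sqrt(cond_var)
            a = (lower[i] - cond_mean) / sd   # standardized bounds for truncnorm
            b = (upper[i] - cond_mean) / sd
            z[i] = truncnorm.rvs(a, b, loc=cond_mean, scale=sd, random_state=rng)
    return z
```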
This univariate Gibbs sampler will in principle converge to the correct distribution, but it converges extremely slowly because the constrained surface points \(\mathbf{z}^{I}\) are laterally close and therefore highly correlated. The change in each iteration of Eq. 18 is consequently small. To improve on this we need to exploit the geometry of the problem.
A.1 Block Gibbs sampler
For simplicity of the notation, from now on we ignore the trend coefficients β. We introduce the distances from the well depth \(z_{i}^{I}\) to both surfaces above and below
Define next the distance pairs
and collect them into the set of all pairs \(\mathbf {d} = \{\mathbf {d}_{1}, {\dots } ,\mathbf {d}_{N^{I}} \}\).
The constraints Eq. 5 take on a simple form using the new variates, i.e., \(\mathbf {d}_{i} \in \mathcal {R}^{+} = \mathbb {R}^{+} \times \mathbb {R}^{+} \), for all i. We may thus equivalently draw from the distribution in terms of the distance variates
A bivariate Gaussian distribution with mean μ and covariance Σ truncated to the set \(\mathcal {R}^{+} = \mathbb {R}^{+} \times \mathbb {R}^{+}\) will be denoted by \(\mathcal {N}_{\mathcal {R}^{+}} ( {\mu }, {\Sigma })\). The conditional mean and covariance of a pair \(\mathbf{d}_{i}\) given the well points \(\mathbf{w}\) are denoted by \(\hat{\mu}\) and \(\hat{\Sigma}\).
We now present a block version of the Gibbs sampler in Eq. 18 where variates are updated simultaneously in pairs. This pair strategy has proven to be substantially more efficient than the univariate strategy explained above. Starting with initial realization pairs \(\mathbf {d}^{0} = \{{\mathbf {d}_{1}^{0}}, \dots , \mathbf {d}_{N^{I}}^{0} \}\), the block Gibbs algorithm computes sample \(k\) from sample \(k-1\) by sequentially drawing from the bivariate distribution
Again, the symbol \(\underline {i}\) denotes all indices except \(i\). The two-dimensional mean \( {\hat \mu }_{i} = \mathrm {E}\left [{\mathbf {d}_{i} | \mathbf {d}_{\underline {i}}^{k}}\right ]\) and the 2 × 2 covariance matrix \( {\hat {\Sigma }}_{i} = \text {Var}\left ({\mathbf {d}_{i} | \mathbf {d}_{\underline {i}}^{k} }\right )\) are given by the well-known expressions
Here \( {\mu }_{i} = \mathrm {E}\left [{\mathbf {d}_{i}}\right ]\) denotes the 2 × 1 mean of pair \(i\) and \( {\Sigma }_{i,j} = \text {Cov}\left ({\mathbf {d}_{i}, \mathbf {d}_{j}}\right )\) is the 2 × 2 covariance matrix. The vector \(\mu_{\underline{i}}\) excludes index \(i\) from the full mean vector \(\mu = \mathrm{E}[\mathbf{d}]\). The matrix \(\Sigma_{\mathbf{u},\mathbf{v}}\) is the part of the full covariance matrix \(\Sigma = \text{Var}(\mathbf{d})\) with indices \(u \in \mathbf{u}\) and \(v \in \mathbf{v}\).
The computation of the matrix \( {\Sigma }_{\underline {i}, \underline {i}}^{-1}\) of size (n − 2) × (n − 2) can be expedited by pre-computing and storing the full precision matrix \(\mathbf{V} = \Sigma^{-1}\). We reuse parts of this matrix with the help of the following identity
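The identity is assumed here to be the standard Gaussian conditioning identity expressed through the precision matrix, \(\hat{\Sigma}_i = (\mathbf{V}_{i,i})^{-1}\) and \(\hat{\mu}_i = \mu_i - (\mathbf{V}_{i,i})^{-1}\mathbf{V}_{i,\underline{i}}(\mathbf{d}_{\underline{i}} - \mu_{\underline{i}})\), where \(\mathbf{V}_{i,i}\) is the 2 × 2 diagonal block of \(\mathbf{V}\) for pair \(i\). The sketch below is our own illustration of one sweep of the block Gibbs sampler built on this identity; the helper draw_truncated_pair is a hypothetical stand-in for the bivariate rejection sampler of [9].

```python
# Sketch of one sweep of the block Gibbs sampler in Eq. 19. The 2x2 conditional
# moments come from the pre-computed precision matrix V = Sigma^{-1} via
#   Sigma_hat_i = (V_ii)^{-1},
#   mu_hat_i    = mu_i - (V_ii)^{-1} V_{i,rest} (d_rest - mu_rest),
# which avoids inverting an (n-2)x(n-2) matrix for every pair.
import numpy as np

def block_gibbs_sweep(mu, V, d, draw_truncated_pair, rng):
    """mu, d: length-n vectors grouped in consecutive pairs; V: n x n precision."""
    n = len(mu)
    for i in range(0, n, 2):                 # loop over the N^I pairs
        idx = [i, i + 1]
        rest = np.setdiff1d(np.arange(n), idx)
        cond_cov = np.linalg.inv(V[np.ix_(idx, idx)])        # 2 x 2
        cond_mean = mu[idx] - cond_cov @ V[np.ix_(idx, rest)] @ (d[rest] - mu[rest])
        # Draw the pair from the bivariate Gaussian truncated to the positive
        # quadrant, e.g. with the fast rejection sampler of [9].
        d[idx] = draw_truncated_pair(cond_mean, cond_cov, rng)
    return d
```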
Paper [9] presents a satisfactorily fast rejection sampler for this type of distribution. The basic idea behind the fast truncated bivariate sampler is to divide the variable space into cases for which a conveniently chosen proposal distribution gives theoretical acceptance rates of 0.5.
A.2 Iterative approximation of the multivariate mean
The large number of dimensions and the highly correlated points threaten the convergence of the Gibbs sampler in Eq. 19, since the large number of iterations needed for convergence makes it prohibitively expensive. The Gibbs sampler therefore needs a good initial state. We obtain that state by a (non-linear) iterative scheme whose objective is to approximate the mean of the truncated multivariate distribution of all distance pairs. The main idea is that the bivariate mean δ of a truncated pair, given the rest of the pairs obtained in the previous iteration, is used as hard data instead of a randomly drawn vector as in Eq. 19.
For each iteration \(k\), we compute \(\boldsymbol {\delta }^{k}= \{{\boldsymbol {\delta }_{1}^{k}}, \dots , \boldsymbol {\delta }_{N^{I}}^{k} \}\) as the mean of the positively truncated bivariate Gaussian distribution with untruncated mean. That is, the bivariate Gaussian
has the moments computed with formulas Eqs. 20–21 and
The mean of this bivariate truncated Gaussian variable is computed analytically using formulas in [28].
Our experience from applications indicates that 1000 fixed-point iterations are sufficient for a good approximation. In [17] each variable is updated using univariate conditional truncated variables. The authors show that a sufficient condition for this iterative scheme to converge is that the full precision matrix is diagonally dominant. This condition is not satisfied in our application; however, they also show numerically that the iteration gives good approximations to the multivariate mean even when diagonal dominance fails.
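A possible implementation of the analytic truncated mean used in each fixed-point step (our own sketch; the function name is hypothetical) applies the classical partial-moment formulas for a positively truncated bivariate Gaussian, with scipy supplying the quadrant probability. In the fixed-point scheme this mean replaces the randomly drawn pair of the block Gibbs update.

```python
# Sketch of the mean of a bivariate Gaussian truncated to the positive quadrant,
# using Rosenbaum-type partial-moment formulas for the standardized variables.
import numpy as np
from scipy.stats import norm, multivariate_normal

def truncated_positive_quadrant_mean(mean, cov):
    """Mean of (X, Y) ~ N(mean, cov) conditioned on X > 0 and Y > 0."""
    sx, sy = np.sqrt(cov[0, 0]), np.sqrt(cov[1, 1])
    rho = cov[0, 1] / (sx * sy)
    a, b = -mean[0] / sx, -mean[1] / sy          # standardized truncation points
    s = np.sqrt(1.0 - rho ** 2)
    # Quadrant probability P(X~ > a, Y~ > b) for a standard bivariate normal.
    alpha = multivariate_normal.cdf([-a, -b], mean=[0.0, 0.0],
                                    cov=[[1.0, rho], [rho, 1.0]])
    # Partial moments E[X~ 1{X~>a, Y~>b}] and E[Y~ 1{X~>a, Y~>b}].
    ex = norm.pdf(a) * norm.sf((b - rho * a) / s) + rho * norm.pdf(b) * norm.sf((a - rho * b) / s)
    ey = norm.pdf(b) * norm.sf((a - rho * b) / s) + rho * norm.pdf(a) * norm.sf((b - rho * a) / s)
    return np.array([mean[0] + sx * ex / alpha,
                     mean[1] + sy * ey / alpha])
```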
This exploratory analysis of our multivariate truncated distribution needs to be handled with care. Using the Gibbs sampler in Eq. 19 with the initial vector resulting from Eq. 22 underestimates the uncertainty. A remedy that produces better spread is to start the Gibbs sampler with higher conditional variances, which amounts to adding random noise to the initial sample.
Appendix B: Estimation and prediction
The linear regression model Eq. 3 can be written in matrix form as
where
B.1 Estimating the trend coefficients
The best linear unbiased estimator (BLUE) for the coefficients is the generalized least squares (GLS) estimator [22, p. 172]
where the kriging matrix K is the covariance matrix
with elements given by Eq. 4.
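As an illustration, the textbook GLS expressions \(\widehat{\beta} = (\mathbf{F}'\mathbf{K}^{-1}\mathbf{F})^{-1}\mathbf{F}'\mathbf{K}^{-1}\mathbf{z}\) and \(\mathrm{Var}(\widehat{\beta}) = (\mathbf{F}'\mathbf{K}^{-1}\mathbf{F})^{-1}\) can be evaluated with a Cholesky factorization of the kriging matrix instead of an explicit inverse. The sketch below is our own and the function name is hypothetical.

```python
# Sketch of the GLS estimator: z observations, F trend design matrix,
# K kriging covariance matrix with elements from Eq. 4.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gls_estimate(F, K, z):
    cK = cho_factor(K, lower=True)
    KinvF = cho_solve(cK, F)              # K^{-1} F
    Kinvz = cho_solve(cK, z)              # K^{-1} z
    A = F.T @ KinvF                       # F' K^{-1} F  (P x P)
    beta_hat = np.linalg.solve(A, F.T @ Kinvz)
    beta_cov = np.linalg.inv(A)           # covariance of the GLS estimate
    return beta_hat, beta_cov
```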
A Bayesian estimate is obtained by specifying prior means and prior covariances in a P-dimensional Gaussian prior distribution for the coefficient values
The Bayesian estimates of the posterior expectation and covariance are
This estimate is robust for any number of surface observations, N, including zero. If the prior uncertainty vanishes, \( {\Sigma }_{0} \rightarrow \mathbf {0}\), we recover the prior mean \(\mu_{0}\). It can also be shown, under reasonable assumptions, that if \( {\Sigma }_{0} \rightarrow \boldsymbol {\infty }\), we obtain the GLS estimate [26].
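A minimal sketch of this update, assuming it coincides with the standard conjugate Gaussian formulas for the linear model \(\mathbf{z} = \mathbf{F}\beta + \text{residual}\) with residual covariance \(\mathbf{K}\) and prior \(\beta \sim \mathcal{N}(\mu_0, \Sigma_0)\), is:

```python
# Sketch of the standard conjugate Gaussian update for the trend coefficients,
# assumed to correspond to the Bayesian estimates in Eqs. 7-8.
import numpy as np

def bayesian_coefficients(F, K, z, mu0, Sigma0):
    S = F @ Sigma0 @ F.T + K                       # marginal data covariance
    gain = Sigma0 @ F.T @ np.linalg.inv(S)         # "Kalman-type" gain
    mu_post = mu0 + gain @ (z - F @ mu0)
    Sigma_post = Sigma0 - gain @ F @ Sigma0
    return mu_post, Sigma_post
```

The limiting behaviour described above is easy to verify in this form: a vanishing \(\Sigma_0\) gives zero gain and returns \(\mu_0\), while a very large \(\Sigma_0\) approaches the GLS estimate.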
B.2 Conditional distribution given surface observations
The kriging predictor for a surface l is
where the estimated means are obtained from the coefficient estimates as \(\widehat {m}_{l}(\mathbf {x}) = \mathbf {f}^{\prime }_{l}(\mathbf {x})\cdot \widehat { {\beta }}\) and \(\widehat {\mathbf {m}} = \mathbf {F}\cdot \widehat { {\beta }}\). For universal kriging, with unknown trend coefficients, the GLS estimate in Eq. 23 is used for the coefficients, whereas for Bayesian kriging the Bayesian estimates in Eqs. 7–8 are used. The covariance vector \(\mathbf{k}_{l}(\mathbf{x})\) is the covariance between the location of interest and all the surface observations: \(\mathbf {k}_{l}(\mathbf {x}) = \text {Cov}\left ({Z^{l}(\mathbf {x}),\mathbf {z}}\right )\), where the covariances are given by Eq. 4.
The Bayesian prediction error is given by
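As a numerical illustration, the sketch below evaluates the kriging predictor and a prediction standard deviation using the standard universal/Bayesian kriging expressions; these are assumed to correspond to the displayed formulas, and the function and variable names are our own.

```python
# Sketch of the kriging predictor and prediction standard deviation.
# Inputs: k = k_l(x), c0 = Cov(Z^l(x), Z^l(x)), f = f_l(x), F = trend design
# matrix, K = kriging covariance matrix, z = surface observations, and
# beta_hat/beta_cov = (Bayesian or GLS) trend coefficient estimate and its
# covariance (beta_cov = 0 for a fixed, known trend).
import numpy as np

def kriging_predict(k, c0, f, F, K, z, beta_hat, beta_cov):
    Kinv_k = np.linalg.solve(K, k)
    m_hat = F @ beta_hat                   # estimated trend at the observations
    m_x = f @ beta_hat                     # estimated trend at the target location
    pred = m_x + k @ np.linalg.solve(K, z - m_hat)
    g = f - F.T @ Kinv_k                   # sensitivity to the trend coefficients
    var = c0 - k @ Kinv_k + g @ beta_cov @ g
    return pred, np.sqrt(max(var, 0.0))
```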