
A deep learning approach to the inversion of borehole resistivity measurements

Original Paper, published in Computational Geosciences

Abstract

Borehole resistivity measurements are routinely employed to measure the electrical properties of rocks penetrated by a well and to quantify the hydrocarbon pore volume of a reservoir. Depending on the degree of geometrical complexity, inversion techniques are often used to estimate layer-by-layer electrical properties from measurements. When used for well geosteering purposes, it becomes essential to invert the measurements into layer-by-layer values of electrical resistivity in real time. We explore the possibility of using deep neural networks (DNNs) to perform rapid inversion of borehole resistivity measurements. Accordingly, we construct a DNN that approximates the following inverse problem: given a set of borehole resistivity measurements, the DNN is designed to deliver a physically reliable and data-consistent piecewise one-dimensional layered model of the surrounding subsurface. Once the DNN is constructed, we can invert borehole measurements in real time. We illustrate the performance of the DNN for inverting logging-while-drilling (LWD) measurements acquired in high-angle wells via synthetic examples. Numerical results are promising, although further work is needed to achieve the accuracy and reliability required by petrophysicists and drillers.


References

  1. Constable, S., Srnka, L.J.: An introduction to marine controlled-source electromagnetic methods for hydrocarbon exploration. Geophysics 72(2), WA3–WA12 (2007)

  2. Bakr, S.A., Pardo, D., Mannseth, T.: Domain decomposition Fourier FE method for the simulation of 3D marine CSEM measurements. J. Comput. Phys. 255, 456–470 (2013)

  3. Hardage, B.A.: Vertical seismic profiling. Lead. Edge 4(11), 59–59 (1985)

  4. Alvarez-Aramberri, J., Pardo, D.: Dimensionally adaptive hp-finite element simulation and inversion of 2D magnetotelluric measurements. J. Comput. Sci. 18, 95–105 (2017)

  5. Davydycheva, S., Wang, T.: A fast modelling method to solve Maxwell’s equations in 1D layered biaxial anisotropic medium. Geophysics 76(5), F293–F302 (2011)

  6. Ijasan, O., Torres-Verdín, C., Preeg, W.E.: Inversion-based petrophysical interpretation of logging-while-drilling nuclear and resistivity measurements. Geophysics 78(6), D473–D489 (2013)

  7. Davydycheva, S., Homan, D., Minerbo, G.: Triaxial induction tool with electrode sleeve: FD modeling in 3D geometries. J. Appl. Geophys. 67, 98–108 (2004)

  8. Shahriari, M., Rojas, S., Pardo, D., Rodríguez-Rozas, A., Bakr, S.A., Calo, V.M., Muga, I.: A numerical 1.5D method for the rapid simulation of geophysical resistivity measurements. Submitted to J. Comput. Phys. (2017)

  9. Pardo, D., Torres-Verdín, C.: Fast 1D inversion of logging-while-drilling resistivity measurements for the improved estimation of formation resistivity in high-angle and horizontal wells. Geophysics 80(2), E111–E124 (2014)

  10. Key, K.: 1D inversion of multicomponent, multifrequency marine CSEM data: Methodology and synthetic studies for resolving thin resistive layers. Geophysics 74(2), F9–F20 (2009)

  11. Tarantola, A.: Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics (2005)

  12. Vogel, C.: Computational Methods for Inverse Problems. Society for Industrial and Applied Mathematics (2002)

  13. Watzenig, D.: Bayesian inference for inverse problems: statistical inversion. Elektrotechnik & Informationstechnik 124, 240–247 (2007)

  14. Ivakhnenko, A.G.: Cybernetic Predicting Devices. CCM Information Corporation (1973)

  15. Dechter, R.: Learning while searching in constraint-satisfaction problems. In: Proceedings of the Fifth AAAI National Conference on Artificial Intelligence, pp. 178–183 (1986)

  16. Aizenberg, I., Aizenberg, N.N., Vandewalle, J.P.L.: Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications. Springer Science & Business Media (2000)

  17. Lu, L., Zheng, Y., Carneiro, G., Yang, L.: Deep Learning for Computer Vision: Expert Techniques to Train Advanced Neural Networks Using TensorFlow and Keras. Springer, Switzerland (2017)

  18. Yu, D., Deng, L.: Automatic Speech Recognition: A Deep Learning Approach. Springer, London (2017)

  19. Bhanu, B., Kumar, A.: Deep Learning for Biometrics. Springer, Switzerland (2017)

  20. Bougher, B.B.: Machine learning applications to geophysical data analysis. Master’s thesis, The University of British Columbia (2016)

  21. Araya-Polo, M., Dahlke, T., Frogner, C., Zhang, C., Poggio, T., Hohl, D.: Automated fault detection without seismic processing. Lead. Edge 36(3), 208–214 (2017)

  22. Lary, D.J., Alavi, A.H., Gandomi, A.H., Walker, A.L.: Machine learning in geosciences and remote sensing. Geosci. Front. 7(1), 3–10 (2016). Special Issue: Progress of Machine Learning in Geosciences

  23. Hegde, C., Wallace, S., Gray, K.: Using trees, bagging, and random forests to predict rate of penetration during drilling. Soc. Petrol. Eng., 1–12 (2015)

  24. Aulia, A., Rahman, A., Velasco, J.J.Q.: Strategic well test planning using random forest. Soc. Petrol. Eng., 1–23 (2014)

  25. Bize-Forest, N., Lima, L., Baines, V., Boyd, A., Abbots, F., Barnett, A.: Using machine-learning for depositional facies prediction in a complex carbonate reservoir. Soc. Petrophys. Well Log Analysts, 1–11 (2018)

  26. Wang, Y., Cheung, S.W., Chung, E.T., Efendiev, Y., Wang, M.: Deep multiscale model learning. arXiv:1806.04830 (2018)

  27. Higham, C.F., Higham, D.J.: Deep learning: an introduction for applied mathematicians. arXiv:1801.05894 (2018)

  28. Key, K.: 1D inversion of multicomponent, multifrequency marine CSEM data: Methodology and synthetic studies for resolving thin resistive layers. Geophysics 74(2), F9–F20 (2009)

  29. Hornik, K.: Approximation capabilities of multilayer feedforward networks. Neural Netw. 4(2), 251–257 (1991)

  30. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, pp. 318–362. MIT Press, Cambridge (1986)

  31. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

  32. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. U.S.A. 79(8), 2554–2558 (1982)

  33. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. arXiv:1512.03385 (2015)

  34. Werbos, P.J.: Backpropagation through time: what it does and how to do it. Proc. IEEE 78(10), 1550–1560 (1990)

  35. Hochreiter, S., Bengio, Y., Frasconi, P.: Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In: A Field Guide to Dynamical Recurrent Networks. IEEE Press (2001)

  36. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)

  37. Schuster, M., Paliwal, K.K.: Bidirectional recurrent neural networks. IEEE Trans. Signal Process. 45(11), 2673–2681 (1997)

  38. Lipton, Z.C., Berkowitz, J., Elkan, C.: A critical review of recurrent neural networks for sequence learning (2015)

  39. Chollet, F.: Keras. https://github.com/fchollet/keras (2015)


Funding

This article has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no. 777778, the Project of the Spanish Ministry of Economy and Competitiveness with reference MTM2016-76329-R (AEI/FEDER, EU), the BCAM “Severo Ochoa” accreditation of excellence SEV-2018-0718, and the Basque Government through the BERC 2018-2021 program, the EMAITEK program, the two Elkartek projects ArgIA (KK-2019-00068) and MATHEO (KK-2019-00085), and the Consolidated Research Group MATHMODE (IT1294-19) given by the Department of Education.

Mostafa Shahriari has also been supported by the Austrian Ministry for Transport, Innovation and Technology, the Federal Ministry for Digital and Economic Affairs, and the Province of Upper Austria in the frame of the COMET center SCCH.

Carlos Torres-Verdín was partially funded by The University of Texas at Austin Research Consortium on Formation Evaluation, jointly sponsored by Anadarko, Aramco, Baker Hughes, BHP, BP, Chevron, China Oilfield Services Limited, CNOOC International, ConocoPhillips, DEA, Eni, Equinor ASA, ExxonMobil, Halliburton, INPEX, Lundin Norway, Occidental, Oil Search, Petrobras, Repsol, Schlumberger, Shell, Southwestern, Total, Wintershall Dea, and Woodside Petroleum Limited. He is also grateful for the financial support provided by the Brian James Jennings Memorial Endowed Chair in Petroleum and Geosystems Engineering.

Author information


Corresponding author

Correspondence to M. Shahriari.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Convolutional neural networks

Convolutional neural networks (CNNs) [31] are a particular kind of NN, built by replacing the fully connected affine layers \(\boldsymbol {\mathcal {N}}\) with convolutional operators \(\boldsymbol {\mathcal {C}}\) defined by convolution kernels f. Hence, Eq. 3 becomes:

$$ \boldsymbol{\mathcal{I}}_{\boldsymbol{\theta}}(x) = (\boldsymbol{\mathcal{C}}^{\mathbf{f}^{(L)}} \circ {\ldots} \circ \boldsymbol{\mathcal{C}}^{\mathbf{f}^{(l)}} \circ {\ldots} \circ \boldsymbol{\mathcal{C}}^{\mathbf{f}^{(2)}} \circ \boldsymbol{\mathcal{C}}^{\mathbf{f}^{(1)}})(x). $$
(18)

In a discrete setting, at layer l of Eq. 18, the operator \(\boldsymbol {\mathcal {C}}^{\mathbf {f}^{(l)}}\) is determined by the set of convolutional kernels \(\mathbf {f}^{(l)} = \{\mathbf {f}^{(l)}_{s},\ s=1, {\ldots}, c_{l+1}\}\). Each of these kernels transforms an input tensor \(\mathbf{x}^{(l)}\) of dimension \(h_{l} \times w_{l} \times c_{l}\) into an output \(\mathbf {x}_{s}^{(l+1)}\) of dimension \(h_{l} \times w_{l}\). Each kernel is defined by a tensor of dimension \(M_{l} \times N_{l} \times c_{l}\) that acts on its input through a simple convolution-like operation, followed by a non-linear function like the one in Eq. 4:

$$ \mathbf{x}_{s}^{(l+1)}(h,w) = \mathbf{s}\Bigg(\sum\limits_{m=1}^{M_{l}} \sum\limits_{n=1}^{N_{l}} \sum\limits_{c=1}^{c_{l}} \mathbf{f}^{(l)}_{s}(m,n,c) \cdot \mathbf{x}^{(l)}(h+m,\ w+n,\ c)\Bigg). $$
(19)

Application of all the \(c_{l+1}\) convolution kernels of \(\mathbf{f}^{(l)}\) to the input \(\mathbf{x}^{(l)}\) finally results in an output tensor \(\mathbf{x}^{(l+1)}\) of dimension \(h_{l} \times w_{l} \times c_{l+1}\). Each of these convolutional layers \(\boldsymbol {\mathcal {C}}^{\mathbf {f}^{(l)}}\) is followed by a non-linear point-wise function, and the spatial size of the output of each layer is decreased by a fixed projection operator \(\displaystyle \boldsymbol {\mathcal {P}}^{(l)}:\mathbb {R}^{h_{l}\times w_{l}} \rightarrow \mathbb {R}^{h_{l+1}\times w_{l+1}}\). Typically, \(\boldsymbol {\mathcal {P}}^{(l)}\) is defined as a local averaging operation. Again, the dimensionality of the initial input x is eventually transformed into that of an element of the target space \(\mathbb {R}^{P}\).
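To make the indexing in Eq. 19 concrete, the following is a minimal NumPy sketch of one convolutional layer followed by local-average pooling. The tanh activation, the unpadded ("valid") convolution, and the 2x2 pooling window are illustrative assumptions, not the configuration used in this work:

import numpy as np

def conv_layer(x, f, activation=np.tanh):
    # x: input tensor of shape (h_l, w_l, c_l); f: kernels of shape (c_lp1, M_l, N_l, c_l)
    h, w, c_l = x.shape
    c_lp1, M, N, _ = f.shape
    out = np.zeros((h - M + 1, w - N + 1, c_lp1))
    for s in range(c_lp1):                      # one output channel per kernel f_s
        for i in range(h - M + 1):
            for j in range(w - N + 1):
                # Eq. 19: sum over the M x N window and all c_l input channels
                out[i, j, s] = np.sum(f[s] * x[i:i + M, j:j + N, :])
    return activation(out)

def average_pool(x, p=2):
    # local averaging operator P^(l): reduces each spatial dimension by a factor p
    h, w, c = x.shape
    x = x[:h - h % p, :w - w % p, :]
    return x.reshape(h // p, p, w // p, p, c).mean(axis=(1, 3))

# usage with assumed sizes: a 16 x 16 input with 3 channels mapped to 4 channels
x = np.random.rand(16, 16, 3)
f = 0.1 * np.random.rand(4, 3, 3, 3)
y = average_pool(conv_layer(x, f))
print(y.shape)  # (7, 7, 4)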

Appendix B: Recurrent neural networks

Let us first consider a simple neural network with an input, an intermediate, and an output layer, like the one defined in Section 2.1, represented as a directed graph in which nodes store the result of the operations described in Eq. 3 and edges store the weights of the network W, b, as in Fig. 27a. The computations performed by such a network to obtain an output, given an input x, are described as:

$$ \begin{array}{lll} \mathbf{z}^{(1)} & = \mathbf{s}(\mathbf{a}^{(1)}) = \mathbf{s}(\mathbf{W}^{(1)}\cdot \mathbf{x} + \mathbf{b}^{(1)}),\\ \boldsymbol{\mathcal{I}}_{\boldsymbol{\theta}}(\mathbf{x})& = \mathbf{s}(\mathbf{W}^{(2)}\cdot \mathbf{z}^{(1)} + \mathbf{b}^{(2)}), \end{array} $$
(20)

where \(\mathbf{a}^{(1)}\), also known as the activation, denotes the output of the first layer of the network before it passes through the non-linearity s. The key difference between a regular NN and a recurrent neural network (RNN), as shown in Fig. 27b, is that the graph defining an NN is acyclic, whereas an RNN allows internal cycles. This introduces a notion of time, or sequential dependency, into the computations of the network.
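As a point of reference for the recurrent formulation below, here is a minimal NumPy sketch of the forward pass in Eq. 20; the layer widths and the tanh non-linearity are assumptions made only for illustration:

import numpy as np

def feedforward(x, W1, b1, W2, b2, s=np.tanh):
    # Eq. 20: two affine layers, each followed by the point-wise non-linearity s
    z1 = s(W1 @ x + b1)     # hidden state z^(1) = s(a^(1))
    return s(W2 @ z1 + b2)  # network output

# usage with assumed sizes: 10 inputs, 8 hidden units, 3 outputs
rng = np.random.default_rng(0)
x = rng.normal(size=10)
W1, b1 = rng.normal(size=(8, 10)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
print(feedforward(x, W1, b1, W2, b2))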

In our case, we interpret a data sample as a temporal sequence of length T, \(\mathbf{x} = (\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T})\), and the goal is to predict an output sequence p from x. In an RNN, a regular NN is trained to predict \(\mathbf {p}=\boldsymbol {\mathcal {I}}_{\boldsymbol {\theta }}(\mathbf {x}_{t})\) from \(\mathbf{x}_{t}\) for 1 ≤ t ≤ T, but the data is scanned left-to-right, and the previous activation is multiplied by a second set of learnable weights. Hence, the computations needed within an RNN for a forward pass are specified by the following two equations:

$$ \begin{array}{lll} \mathbf{a}_{t}&= \mathbf{W}_{\mathbf{a}\mathbf{x}} \mathbf{x}_{t} + \mathbf{W}_{\mathbf{a}\mathbf{a}}\mathbf{a}_{t-1} + \mathbf{b}_{\mathbf{a}}\\ \boldsymbol{\mathcal{I}}_{\boldsymbol{\theta}}(\mathbf{x}_{t})&=\mathbf{s}(\mathbf{W}_{\mathbf{p} \mathbf{a}} \mathbf{a}_{t} + \mathbf{b}_{\mathbf{p}}), \end{array} $$
(21)

where \(\mathbf{W}_{\mathbf{a}\mathbf{x}}\) is a matrix of conventional weights between the input and the inner layer, \(\mathbf{W}_{\mathbf{a}\mathbf{a}}\) is a matrix holding the recurrent weights that connect the inner layer at one time step to itself at the next time step, \(\mathbf{W}_{\mathbf{p}\mathbf{a}}\) maps the result of the inner-layer computations to the output \(\boldsymbol {\mathcal {I}}_{\boldsymbol {\theta }}(\mathbf {x}_{t})\), and \(\mathbf{b}_{\mathbf{a}}, \mathbf{b}_{\mathbf{p}}\) are bias vectors that allow the layers of the network to learn an offset. None of the weight matrices depends on the temporal index t; they remain fixed across time steps, and the activation of the inner layer is reset before processing each new, independent sequence.
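A minimal NumPy sketch of the forward pass in Eq. 21, scanning a sequence left-to-right; the zero initial activation and the tanh output non-linearity are illustrative assumptions:

import numpy as np

def rnn_forward(xs, W_ax, W_aa, W_pa, b_a, b_p, s=np.tanh):
    # Eq. 21 applied over a sequence xs of length T
    a = np.zeros(W_aa.shape[0])          # activation reset at the start of each sequence
    outputs = []
    for x_t in xs:                       # scan the sequence in temporal order
        a = W_ax @ x_t + W_aa @ a + b_a  # recurrent update of the inner layer
        outputs.append(s(W_pa @ a + b_p))
    return np.stack(outputs)

# usage with assumed sizes: T = 5 time steps, 4 inputs, 6 hidden units, 2 outputs
rng = np.random.default_rng(1)
xs = rng.normal(size=(5, 4))
W_ax, W_aa, W_pa = rng.normal(size=(6, 4)), 0.1 * rng.normal(size=(6, 6)), rng.normal(size=(2, 6))
b_a, b_p = np.zeros(6), np.zeros(2)
print(rnn_forward(xs, W_ax, W_aa, W_pa, b_a, b_p).shape)  # (5, 2)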

The temporal nature of the process described in Eq. 21 is better illustrated if the operations are unfolded, as shown in Fig. 28. Following this representation, an RNN can be interpreted not as a cyclic network, but as a standard network with one layer per time step and weights shared across time steps. It then becomes clear that the network can be trained across many time steps using a variant of the standard backpropagation algorithm, termed backpropagation through time [34, 35].

Starting from these first principles, many different flavors of RNNs have been successfully applied to temporal data over the years. In this work, we make use of two significant advances in the field of RNNs, namely the long short-term memory network (LSTM) and the bidirectional recurrent neural network (BRNN).

LSTM networks [36] are similar to a standard RNN with one inner layer, but a so-called memory cell replaces each ordinary node in this layer. Each memory cell contains a node with a self-connected recurrent edge of fixed weight one, ensuring that the gradient can be propagated across many time steps without vanishing or exploding. BRNNs contain two layers, both linked to the input and the output [37]. These two layers differ: the first has a recurrent connection from past time steps, while in the second the direction of the recurrent connections is reversed, so that computations are performed backward along the sequence. More details about both architectures can be found in [38].

Appendix C: Proposed neural network architecture

The following is a listing of the neural network architecture built in this work in the Keras framework [39]:

(Keras code listing from the original article; not reproduced in this version.)
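As the published listing is not reproduced here, the following is a minimal Keras sketch of a bidirectional LSTM network of the kind described in Appendix B. The input/output shapes, layer widths, loss, and optimizer are assumptions made only for illustration and do not correspond to the authors' exact architecture:

from tensorflow import keras
from tensorflow.keras import layers

# assumed shapes: sequences of 60 logging positions, 10 measurements per position,
# and 3 inverted quantities (e.g., layer resistivities) per position
T, n_meas, n_out = 60, 10, 3

model = keras.Sequential([
    layers.Input(shape=(T, n_meas)),
    # bidirectional LSTM layers scan the trajectory in both directions
    layers.Bidirectional(layers.LSTM(50, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(50, return_sequences=True)),
    # a point-wise dense layer maps each time step to the target quantities
    layers.TimeDistributed(layers.Dense(n_out)),
])

model.compile(optimizer="adam", loss="mse")
model.summary()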


About this article


Cite this article

Shahriari, M., Pardo, D., Picon, A. et al. A deep learning approach to the inversion of borehole resistivity measurements. Comput Geosci 24, 971–994 (2020). https://doi.org/10.1007/s10596-019-09859-y

