Introduction

In thin-bedded reservoirs, the vertical extent of a logging tool's volume of investigation is usually larger than the thickness of the individual thin beds. Therefore, standard logging tools generally do not allow direct measurement of the physical properties of individual thin beds and in some cases cannot even detect the beds themselves (Zorski 1987). The lower limit of bed thickness below which the thin-bed problem becomes significant is defined by the vertical resolution of the deep resistivity tool used during the evaluation process, since the deep resistivity log is a key input in evaluating reservoir hydrocarbons (Passey et al. 2006).

In Poland, the thin-bed problem exists in the sandy–shaly Miocene formations of the Carpathian Foredeep, one of the most important petroleum provinces in the country. In these formations, the main source of errors in gas saturation evaluation is the underestimation of the resistivity of thin, hydrocarbon-bearing beds, which results from the low vertical resolution of induction tools (Zorski 2009). Two induction tools commonly used in the Carpathian Foredeep are the Dual Induction Tool (DIT) and the High-Resolution Array Induction (HRAI) tool. The DIT was introduced in 1962 and provides two resistivity logs, medium and deep, at two radial depths of investigation (30 and 60 in.) with a vertical resolution of around 5–8 ft (Anderson 2001). The HRAI tool was introduced in 2000 and provides resistivity logs at six radial depths of investigation (10, 20, 30, 60, 90, and 120 in.) with 1, 2, and 4 ft vertical resolution (Beste et al. 2000). The vertical resolution of logs provided by the DIT is thus significantly lower than that of logs provided by the HRAI tool. Therefore, the thin-bed problem is especially visible in older boreholes drilled at a time when the DIT was the primary induction tool used for determining formation resistivity, and in the shallowest depth intervals of newer boreholes where the DIT was used instead of the HRAI tool for cost-saving reasons.

In this paper, we show how a global inversion algorithm was used to improve the vertical resolution of DIT logs. Our implementation of the inversion algorithm utilizes a one-dimensional formation model, vertical response functions of the DIT, and a modified simulated annealing algorithm to determine the true vertical distribution of the formation resistivity. To better match the nature of the problem, the probability of selecting model parameters for modification was changed from the uniform random choice typical of simulated annealing to a weighted random choice. This modification allows the algorithm to focus on problematic depth intervals and results in faster optimization of the model of the true vertical distribution of the formation resistivity.

The algorithm was tested on resistivity logs recorded in a borehole drilled in the Carpathian Foredeep in Poland, where the DIT and the HRAI tool were run in the same depth interval.

Iterative inversion

The iterative inversion is the main inversion method used in well logging applications (Passey et al. 2006). In this approach, no attempt is made to reverse physical processes occurring during well logging. Instead, the method utilizes an iterative forward modeling procedure to find a formation model that best explains measured data.

The generic process of the iterative inversion (Fig. 1) starts with the construction of an initial formation model. Next, a synthetic log is generated based on the initial formation model and compared with the measured log. If the difference between measured and synthetic data is small enough, the model is accepted as the solution. Otherwise, the model is changed and the synthetic log is recomputed and compared again with the measured log. This iterative forward modeling procedure is repeated until an acceptable agreement is obtained between measured and synthetic data (Passey et al. 2006; Sen and Stoffa 2013).

Fig. 1

A simplified flowchart of the iterative inversion procedure
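The loop described above can be sketched in a few lines of Python. This is a generic sketch only; `forward`, `misfit`, and `perturb` are hypothetical placeholders for the forward model, the objective function, and the model-update rule, not functions defined in this paper:

```python
import numpy as np

def iterative_inversion(measured, forward, misfit, perturb, m0,
                        tol=1e-3, max_iter=10_000):
    """Generic iterative inversion: repeatedly modify the formation model
    and keep changes that bring the synthetic log closer to the measured log."""
    model = m0.copy()
    best_err = misfit(measured, forward(model))
    for _ in range(max_iter):
        if best_err <= tol:                  # acceptable agreement reached
            break
        trial = perturb(model)               # change the formation model
        err = misfit(measured, forward(trial))
        if err < best_err:                   # keep only improving changes
            model, best_err = trial, err
    return model, best_err

# Toy demonstration with a trivial (identity) forward model.
rng = np.random.default_rng(0)
measured = np.array([1.0, 2.0, 3.0])
model, err = iterative_inversion(
    measured,
    forward=lambda m: m,
    misfit=lambda d, s: float(np.abs(d - s).max()),
    perturb=lambda m: m + rng.normal(0.0, 0.1, m.shape),
    m0=np.zeros(3),
)
```

In practice the `perturb` and acceptance steps are replaced by an optimization method; the simulated annealing variant used in this paper is described below.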

The exact structure of the iterative inversion algorithm depends primarily on the formulation of the forward problem and on the method used to find the formation model that best explains the measured data. Model parameters may be modified manually by the analyst until a satisfactory qualitative or quantitative fit between synthetic and measured data is obtained, but usually local or global optimization methods are used to find the formation model that minimizes the value of an objective function (a quantitative measure of the difference between synthetic and measured data) (Passey et al. 2006; Sen and Stoffa 2013).

Forward problem

Logging tools are used to measure formation parameters at discrete, regularly spaced points along a borehole (Lyle and Williams 1986):

$$\varvec{d} = \left[ {d_{0} , d_{1} , \ldots , d_{M} } \right]$$
(1)

where \(d_{0} , d_{1} , \ldots , d_{M}\) are the log values recorded at depths \(z_{0} , z_{1} , \ldots , z_{M}\) and \(M + 1\) is the number of measurements.

A relation between the value measured by the logging tool at a depth \(z_{i}\) and the true vertical distribution of the measured parameter \(D\left( z \right)\) along the borehole can be described by the convolution equation (Lyle and Williams 1986; Zorski 1987):

$$d\left( {z_{i} } \right) = \mathop \smallint \limits_{ - \infty }^{\infty } D\left( {z_{i} - z^{{\prime }} } \right) \cdot v\left( {z^{{\prime }} } \right){\text{d}}z^{{\prime }}$$
(2)

where \(d\left( {z_{i} } \right)\) is the value recorded at a depth \(z_{i}\), \(D\left( z \right)\)—the true vertical distribution of the measured parameter, \(v\left( {z^{{\prime }} } \right)\)—the vertical response function of the logging tool, and \(z^{{\prime }}\)—position in the formation with respect to \(z_{i}\).

For real logging tools, it can be assumed that the signal measured at a depth \(z_{i}\) comes from a finite depth interval (Lyle and Williams 1986):

$$d\left( {z_{i} } \right) \cong \mathop \smallint \limits_{{z_{i} - p}}^{{z_{i} + p}} D\left( {z_{i} - z^{{\prime }} } \right) \cdot v\left( {z^{{\prime }} } \right){\text{d}}z^{{\prime }}$$
(3)

where \(p\) is a parameter representing the vertical range of investigation of the logging tool.

The true vertical distribution of the measured parameter \(D\left( z \right)\) can be approximated as a sequence of non-overlapping layers with the spacing of the centers of adjacent layers equal to the distance between measurement points (Lyle and Williams 1986). Each layer is perpendicular to the well axis, has a thickness equal to the logging step, and is characterized by a single parameter, whose value corresponds to the value of the logging tool measurement in an infinitely thick layer with the same properties (Fig. 2).

Fig. 2

A formation model. Parameters \(m_{ - r} ,m_{ - r + 1} , \ldots ,m_{M + r}\) characterize the layers within which the corresponding measurement points at depths \(z_{ - r} , z_{ - r + 1} , \ldots , z_{M + r}\) are located

A formation model can be represented as a vector of parameters describing the subsequent layers:

$$\varvec{m} = \left[ {m_{ - r} , m_{ - r + 1} , \ldots , m_{M + r} } \right]$$
(4)

where \(m_{ - r} ,m_{ - r + 1} , \ldots ,m_{M + r}\) are the parameters of layers and \(M + 2r + 1\)—the number of layers.

The vertical response function of the logging tool can be represented as a vector of weights with the spacing of adjacent weights equal to the distance between measurement points:

$$\varvec{v} = \left[ {v_{ - r} , v_{ - r + 1} , \ldots , v_{r} } \right]$$
(5)
$$\sum \limits_{i = - r}^{r} v_{i} = 1$$
(6)

where \(v_{ - r} ,v_{ - r + 1} , \ldots ,v_{r}\) are the logging tool response coefficients and \(2r + 1\)—the number of logging tool response coefficients.

The synthetic log \(\varvec{d}^{{\left( \varvec{s} \right)}}\) can be generated by forward calculation using a discrete version of the relation given by Eq. (3):

$$d_{i}^{\left( s \right)} = \mathop \sum \limits_{n = - r}^{r} m_{i + n} \cdot v_{n} \quad {\text{for}}\; i \in \left[ {0, M} \right]$$
(7)
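Because the discretized response function is a short vector of weights, Eq. (7) is a discrete correlation of the layer parameters with those weights. A minimal NumPy sketch follows; the box-shaped response used here is an arbitrary illustration, not a DIT response function:

```python
import numpy as np

def synthetic_log(m, v):
    """Eq. (7): d_i = sum_{n=-r}^{r} m_{i+n} * v_n for i in [0, M].
    m has M + 2r + 1 entries (layers -r .. M+r), v has 2r + 1 weights.
    np.correlate slides v over m without flipping it, matching Eq. (7)."""
    return np.correlate(m, v, mode='valid')

# Check the vectorized version against an explicit loop over Eq. (7).
rng = np.random.default_rng(0)
r, M = 2, 9
m = rng.uniform(1.0, 10.0, M + 2 * r + 1)
v = np.full(2 * r + 1, 1.0 / (2 * r + 1))      # box response; weights sum to 1 (Eq. 6)
d = synthetic_log(m, v)
d_loop = np.array([sum(m[i + n + r] * v[n + r] for n in range(-r, r + 1))
                   for i in range(M + 1)])
```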

The shape of the vertical response function depends on the physics and geometry of the measurement. In the case of many logging tools (including induction tools), the shape of the vertical response function varies with the properties of the geological formation. This nonlinearity is often neglected, and a constant vertical response function designed for an “average” set of geologic conditions is used to approximate the response across a wider range of conditions (Passey et al. 2006).

Simulated annealing

The simulated annealing algorithm (Kirkpatrick et al. 1983) is based on the analogy between the simulation of annealing of solids and the problem of solving large combinatorial optimization problems. The algorithm can be viewed as a sequence of Metropolis algorithms (Metropolis et al. 1953) adapted to generate sequences of configurations of a combinatorial optimization problem and evaluated at a sequence of decreasing values of the temperature (control parameter without physical meaning) (van Laarhoven and Aarts 1987).

In the course of the optimization process, the temperature is slowly decreased. At each temperature value, the algorithm iteratively applies a sequence of small, random modifications to the model. Each modification is accepted with a probability given by the Metropolis criterion (Kirkpatrick et al. 1983; van Laarhoven and Aarts 1987):

$$P\left( {\Delta E,T} \right) = \left\{ {\begin{array}{*{20}l} {e^{{\frac{ - \Delta E}{T}}} } \hfill & { {\text{if}}\; \Delta E > 0} \hfill \\ 1 \hfill & {{\text{if}}\;\Delta E \le 0} \hfill \\ \end{array} } \right.$$
(8)

where \(\Delta E\) is the difference in the value of an objective function between modified and unmodified models and \(T\)—the temperature value.

The process is repeated until the algorithm reaches the final temperature value or the acceptable objective function value (Kirkpatrick et al. 1983; van Laarhoven and Aarts 1987).

The cooling schedule is designed so that the acceptance probability of a modification which increases the value of the objective function decreases from a value close to 1 near the initial temperature to 0 as the temperature approaches the final value. This allows the algorithm to combine the benefits of a maximally explorative, minimally exploitative random walk with those of a minimally explorative, maximally exploitative hill-climbing algorithm (Kirkpatrick et al. 1983; van Laarhoven and Aarts 1987; Weise 2011).
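The Metropolis acceptance rule of Eq. (8) and a geometric cooling schedule can be illustrated with a minimal simulated annealing loop on a toy one-dimensional objective; this is a generic sketch with arbitrary parameter values, not the log-inversion procedure described later:

```python
import math
import random

def metropolis_accept(delta_E, T, rng):
    """Eq. (8): always accept improvements; accept worsening moves
    with probability exp(-delta_E / T)."""
    return delta_E <= 0 or rng.random() < math.exp(-delta_E / T)

def anneal(f, x0, T0=1.0, Tn=1e-3, dT=0.95, steps_per_T=50, seed=1):
    """Minimal simulated annealing for a one-parameter objective f."""
    rng = random.Random(seed)
    x, T = x0, T0
    while T > Tn:
        for _ in range(steps_per_T):
            x_new = x + rng.gauss(0.0, T)      # step size shrinks with T
            if metropolis_accept(f(x_new) - f(x), T, rng):
                x = x_new
        T *= dT                                # geometric cooling
    return x

# The random walk settles near the minimum of the toy objective at x = 3.
x = anneal(lambda x: (x - 3.0) ** 2, x0=0.0)
```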

The simulated annealing optimization method was previously used in well logging applications by Runge and Runge (1991), Szucs and Civan (1996), and Dobróka and Szabó (2001, 2011, and 2015).

Structure of the algorithm

Input data

The input dataset consists of:

  • measured well log data \((\varvec{d})\),

  • an appropriately discretized logging tool vertical response function \((\varvec{v})\),

  • an initial temperature value \((T_{0} )\),

  • a temperature change coefficient \((\Delta T)\),

  • a final temperature value \((T_{n} )\),

  • an acceptable value of the global objective function \(( {E^{{( \min)}} })\),

  • a parameter which controls the number of iterations per temperature value \(\left( {N_{T} } \right)\), and

  • parameters which control the size of modifications \((a,b,c).\)

Initialization of the optimization procedure

The initial formation model \(\varvec{m}^{\left( 0 \right)}\) is based on the measured log \(\varvec{d}\), which is considered to be a good first approximation of the true vertical distribution of the measured parameter along the borehole:

$$m_{i}^{\left( 0 \right)} = \left\{ {\begin{array}{*{20}l} {d_{0} } \hfill & {{\text{for}}\; i \in \left[ {\left. { - r, 0} \right)} \right.} \hfill \\ {d_{i} } \hfill & {{\text{for}}\;i \in \left[ {0, M} \right]} \hfill \\ {d_{M} } \hfill & {{\text{for}}\;i \in \left( {\left. {M, M + r} \right]} \right.} \hfill \\ \end{array} } \right.$$
(9)
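Equation (9) simply extends the measured log by \(r\) samples on each side by repeating the first and last values, which in NumPy is edge padding:

```python
import numpy as np

def initial_model(d, r):
    """Eq. (9): extend the measured log by r samples on each side,
    repeating the first and last measured values."""
    return np.pad(d, r, mode='edge')

d = np.array([4.0, 6.0, 5.0])      # a tiny measured log, M = 2
m0 = initial_model(d, r=2)          # -> [4, 4, 4, 6, 5, 5, 5]
```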

The initial synthetic log \(\varvec{d}^{\left( 0 \right)}\) generated on the basis of the initial formation model is given by the equation:

$$d_{i}^{\left( 0 \right)} = \mathop \sum \limits_{n = - r}^{r} m_{i + n}^{\left( 0 \right)} \cdot v_{n} \quad {\text{for}}\;i \in \left[ {0, M} \right].$$
(10)

The accuracy of the initial formation model is characterized by the vector of relative error values between the initial synthetic log and the measured log \(\varvec{E}^{\left( 0 \right)}\):

$$E_{i}^{\left( 0 \right)} = \left| {\frac{{d_{i} - d_{i}^{\left( 0 \right)} }}{{d_{i} }}} \right|\quad {\text{for}}\;i \in \left[ {0, M} \right]$$
(11)

and by the global objective function \(E^{\left( 0 \right)}\):

$$E^{\left( 0 \right)} = \sqrt {\frac{1}{M}\mathop \sum \nolimits_{i = 0}^{M} \left( {E_{i}^{\left( 0 \right)} } \right)^{2} } .$$
(12)
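Equations (11) and (12) can be sketched as follows; note that the source normalizes the RMS by \(M\) rather than by the number of samples \(M + 1\), and the sketch follows that convention:

```python
import numpy as np

def relative_errors(d, d_syn):
    """Eq. (11): element-wise relative error between measured and synthetic logs."""
    return np.abs((d - d_syn) / d)

def global_objective(E, M):
    """Eq. (12): RMS of the relative errors, normalized by M as in the source."""
    return float(np.sqrt(np.sum(E ** 2) / M))

d = np.array([2.0, 4.0, 5.0])                       # measured log, M = 2
E = relative_errors(d, np.array([1.0, 4.0, 6.0]))   # -> [0.5, 0.0, 0.2]
E_glob = global_objective(E, M=2)
```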

The calculated data are used to create:

  • the current formation model dataset: \(\left\{ {\begin{array}{*{20}c} {\varvec{m}^{{\left( \varvec{c} \right)}} = \varvec{m}^{\left( 0 \right)} } \\ {\varvec{d}^{{\left( \varvec{c} \right)}} = \varvec{d}^{\left( 0 \right)} } \\ {\varvec{E}^{{\left( \varvec{c} \right)}} = \varvec{E}^{\left( 0 \right)} } \\ \end{array} } \right.,\) and

  • the best formation model dataset: \(\left\{ {\begin{array}{*{20}c} {\varvec{m}^{{\left( \varvec{b} \right)}} = \varvec{m}^{\left( 0 \right)} } \\ {\varvec{d}^{{\left( \varvec{b} \right)}} = \varvec{d}^{\left( 0 \right)} } \\ {E^{\left( b \right)} = E^{\left( 0 \right)} } \\ \end{array} } \right..\)

Optimization procedure

In the course of the optimization procedure, the temperature is iteratively lowered from the initial value \(T_{0}\) to the final value \(T_{n}\) according to the scheme given by the equation:

$$T_{i + 1} = \Delta T \cdot T_{i} .$$
(13)

For each temperature value \(T_{i}\), a sequence of iterations is performed. In every iteration step, the algorithm tries to modify the value of a single model parameter.

Optimization sequence

Initialization of the optimization sequence

The optimization sequence starts with the calculation of the modification probabilities of the individual model parameters at a given temperature. The probability of selecting a specific model parameter is calculated from the mean value of those elements of the current model's relative error vector whose values are affected by the parameter:

$$p_{i}^{{\left( {T_{i} } \right)}} = \left\{ {\begin{array}{*{20}l} {\frac{1}{i + r + 1}\mathop \sum \nolimits_{n = - i }^{r} E_{i + n}^{\left( c \right)} } \hfill & {{\text{for}}\; i \in \left[ {\left. { - r, r} \right)} \right.} \hfill \\ {\frac{1}{2r + 1}\mathop \sum \nolimits_{n = - r}^{r} E_{i + n}^{\left( c \right)} } \hfill & {{\text{for}}\;i \in \left[ {r, M - r} \right]} \hfill \\ {\frac{1}{M - i + r + 1}\mathop \sum \nolimits_{n = - r}^{M - i} E_{i + n}^{\left( c \right)} } \hfill & {{\text{for}}\;i \in \left( {\left. {M - r, M + r} \right]} \right.} \hfill \\ \end{array} } \right.$$
(14)

The probabilities of modifying the model parameters at a given temperature, \(\varvec{P}^{{\left( {\varvec{T}_{\varvec{i}} } \right)}}\), are given by the equation:

$$P_{i}^{{\left( {T_{i} } \right)}} = \begin{array}{*{20}c} {\frac{{p_{i}^{{\left( {T_{i} } \right)}} }}{{\mathop \sum \nolimits_{n = - r}^{M + r} p_{n}^{{\left( {T_{i} } \right)}} }}} & {{\text{for}}\; i \in \left[ { - r, M + r} \right]} \\ \end{array} .$$
(15)

This mechanism is absent in the standard simulated annealing algorithm and was added to adapt the optimization procedure to the specific character of the forward problem. Different parts of a geological formation may present different degrees of complexity. As a consequence, different parts of the formation model may reach acceptable values of the objective function after different numbers of iterations. The mechanism allows the algorithm to focus on those depth intervals where the differences between measured and synthetic data are the largest. Compared with the uniform random selection typical of simulated annealing, this approach allows the algorithm to reduce the value of the objective function more quickly.
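The weighted selection of Eqs. (14) and (15) assigns each layer a weight equal to the mean of the relative-error entries its parameter influences, normalized to sum to 1. A hypothetical Python sketch (array index \(j = i + r\) maps the layer index \(i \in [-r, M+r]\) to 0-based storage):

```python
import numpy as np

def selection_probabilities(E, r):
    """Eqs. (14)-(15): each layer's selection weight is the mean of the
    relative-error entries its parameter influences; the weights are
    then normalized so they sum to 1."""
    M = E.size - 1                      # E holds entries for samples i = 0..M
    n_params = M + 2 * r + 1            # layers i = -r .. M+r
    p = np.empty(n_params)
    for j in range(n_params):
        i = j - r                       # layer index
        lo, hi = max(0, i - r), min(M, i + r)
        p[j] = E[lo:hi + 1].mean()      # mean of the affected error entries
    return p / p.sum()

rng = np.random.default_rng(0)
E = np.array([0.1, 0.1, 0.9, 0.1, 0.1])    # largest misfit at sample 2
P = selection_probabilities(E, r=1)
j = rng.choice(P.size, p=P)                 # weighted random choice of a layer
```

Layers near the large-misfit sample receive proportionally larger selection probabilities, which is what lets the algorithm concentrate its iterations on poorly fit depth intervals.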

Single iteration of the optimization sequence

In the course of the optimization procedure, the algorithm chooses a model parameter \(m_{j}^{{\left( {n_{j} } \right)}}\) with the probability given by the vector \(\varvec{P}^{{\left( {\varvec{T}_{\varvec{i}} } \right)}}\) and selects the part of the current model parameter vector that allows recalculation of the part of the synthetic log whose values are affected by the selected model parameter:

$$\varvec{m}^{{\left( {\varvec{n}_{\varvec{j}} } \right)}} = \left\{ {\begin{array}{*{20}l} {\left[ {m_{ - r}^{\left( c \right)} , m_{ - r + 1}^{\left( c \right)} , \ldots , m_{j + 2r}^{\left( c \right)} } \right]} \hfill & {{\text{if}}\;j \in \left[ {\left. { - r, r} \right)} \right.} \hfill \\ {\left[ {m_{j - 2r}^{\left( c \right)} , m_{j - 2r + 1}^{\left( c \right)} , \ldots , m_{j + 2r}^{\left( c \right)} } \right]} \hfill & {{\text{if}}\;j \in \left[ {r, M - r} \right]} \hfill \\ {\left[ {m_{j - 2r}^{\left( c \right)} , m_{j - 2r + 1}^{\left( c \right)} , \ldots , m_{M + r}^{\left( c \right)} } \right]} \hfill & {{\text{if}}\;j \in \left( {\left. {M - r,M + r} \right]} \right.} \hfill \\ \end{array} } \right.$$
(16)

The value of the chosen model parameter \(m_{j}^{{\left( {n_{j} } \right)}}\) within the selected part of the data is then modified. For this purpose, a perturbation value \(\Delta m\) is randomly drawn from the normal distribution \(N\left( {\mu , \sigma^{2} } \right)\) with mean \(\mu = 0\) and standard deviation \(\sigma\) given by the equation:

$$\sigma = a \cdot m_{j}^{{\left( {n_{j} } \right)}} \cdot \left( {\frac{{\log_{10} T_{i} + \left| {\log_{10} T_{n} } \right|}}{{\log_{10} T_{0} + \left| {\log_{10} T_{n} } \right|}}} \right)^{b},$$
(17)

and the value of the model parameter \(m_{j}^{{\left( {n_{j} } \right)}}\) is modified:

$$m_{j}^{{\left( {n_{j} } \right)}} = \left\{ {\begin{array}{*{20}c} {m_{j}^{{\left( {n_{j} } \right)}} +\Delta m + c} & {{\text{if}}\;\Delta m \ge 0} \\ {m_{j}^{{\left( {n_{j} } \right)}} +\Delta m - c} & {{\text{if}}\;\Delta m < 0} \\ \end{array} } \right.$$
(18)

The value of the bracketed part of Eq. (17) changes from 1 at the initial temperature to 0 at the final temperature. Therefore, the standard deviation of the normal distribution from which the perturbation value \(\Delta m\) is randomly chosen decreases as the procedure advances. Parameters \(a\), \(b\), and \(c\) in Eqs. (17) and (18) allow additional control of the size of the modifications.
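Equations (17) and (18) can be sketched as follows; the form of Eq. (17) assumes \(T_{n} < 1\), so that \(\log_{10} T_{n}\) is negative and the bracketed factor reaches 0 at the final temperature. The parameter values below are arbitrary illustrations:

```python
import math
import random

def perturbation_sigma(m_j, T, T0, Tn, a, b):
    """Eq. (17): the bracketed factor falls from 1 at T = T0 to 0 at T = Tn,
    so the perturbation size shrinks as the temperature is lowered."""
    scale = (math.log10(T) + abs(math.log10(Tn))) / \
            (math.log10(T0) + abs(math.log10(Tn)))
    return a * m_j * scale ** b

def perturb(m_j, sigma, c, rng):
    """Eq. (18): draw dm ~ N(0, sigma^2) and push it away from zero by c."""
    dm = rng.gauss(0.0, sigma)
    return m_j + dm + c if dm >= 0 else m_j + dm - c

rng = random.Random(0)
s0 = perturbation_sigma(10.0, T=10.0, T0=10.0, Tn=0.01, a=0.5, b=2.0)  # a * m_j = 5.0
sn = perturbation_sigma(10.0, T=0.01, T0=10.0, Tn=0.01, a=0.5, b=2.0)  # effectively 0.0
m_new = perturb(10.0, s0, c=0.05, rng=rng)
```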

To establish the impact of the modification on the accuracy of the formation model, the part of the new synthetic log whose values are affected by the modification, \(\varvec{d}^{{\left( {\varvec{n}_{\varvec{j}} } \right)}}\), is generated:

$$d_{i}^{{\left( {n_{j} } \right)}} = \left\{ {\begin{array}{*{20}l} {\mathop \sum \nolimits_{n = - r}^{r} m_{i + n} \cdot v_{n} } \hfill & {{\text{for}}\; i \in \left[ {0, j} \right]} \hfill & {{\text{if}}\;j \in \left[ {\left. { - r,r} \right)} \right.} \hfill \\ {\mathop \sum \nolimits_{n = - r}^{r} m_{i + n} \cdot v_{n} } \hfill & {{\text{for}}\;i \in \left[ {j - r, j + r} \right]} \hfill & {{\text{if}}\;j \in \left[ {r, M - r} \right]} \hfill \\ {\mathop \sum \nolimits_{n = - r}^{r} m_{i + n} \cdot v_{n} } \hfill & {{\text{for}}\;i \in \left[ {j, M} \right]} \hfill & {{\text{if}}\;j \in \left( {\left. {M - r, M + r} \right]} \right.} \hfill \\ \end{array} } \right.$$
(19)

and compared with the corresponding part of the measured log in order to produce the part of the vector of relative error between the new synthetic log and the measured log \(\varvec{E}^{{\left( {\varvec{n}_{\varvec{j}} } \right)}}\):

$$E_{i}^{{\left( {n_{j} } \right)}} = \left\{ {\begin{array}{*{20}l} {\left| {\frac{{d_{i} - d_{i}^{{\left( {n_{j} } \right)}} }}{{d_{i} }}} \right|} \hfill & {{\text{for}}\; i \in \left[ {0, j} \right]} \hfill & {{\text{if}}\; j \in \left[ {\left. { - r,r} \right)} \right.} \hfill \\ {\left| {\frac{{d_{i} - d_{i}^{{\left( {n_{j} } \right)}} }}{{d_{i} }}} \right|} \hfill & {{\text{for}}\;i \in \left[ {j - r, j + r} \right]} \hfill & {{\text{if}}\;j \in \left[ {r, M - r} \right]} \hfill \\ {\left| {\frac{{d_{i} - d_{i}^{{\left( {n_{j} } \right)}} }}{{d_{i} }}} \right|} \hfill & {{\text{for}}\;i \in \left[ {j, M} \right]} \hfill & {{\text{if}}\;j \in \left( {\left. {M - r, M + r} \right]} \right.} \hfill \\ \end{array} } \right.$$
(20)

Then, the local objective function of the new model \(E^{{\left( {n_{j} } \right)}}\) and the local objective function of the current model \(E^{{\left( {c_{j} } \right)}}\) are calculated:

$$E^{{\left( {n_{j} } \right)}} = \left\{ {\begin{array}{*{20}l} {\sqrt {\frac{1}{j + 1}\mathop \sum \nolimits_{i = 0}^{j} \left( {E_{i}^{{\left( {n_{j} } \right)}} } \right)^{2} } } \hfill & {{\text{if}}\; j \in \left[ {\left. { - r,r} \right)} \right.} \hfill \\ {\sqrt {\frac{1}{2r + 1}\mathop \sum \nolimits_{i = j - r}^{j + r} \left( {E_{i}^{{\left( {n_{j} } \right)}} } \right)^{2} } } \hfill & {{\text{if}}\;j \in \left[ {r, M - r} \right]} \hfill \\ {\sqrt {\frac{1}{M - j + 1}\mathop \sum \nolimits_{i = j}^{M} \left( {E_{i}^{{\left( {n_{j} } \right)}} } \right)^{2} } } \hfill & {{\text{if}}\;j \in \left( {\left. {M - r, M + r} \right]} \right.} \hfill \\ \end{array} } \right.$$
(21)
$$E^{{\left( {c_{j} } \right)}} = \left\{ {\begin{array}{*{20}l} {\sqrt {\frac{1}{j + 1}\mathop \sum \nolimits_{i = 0}^{j} \left( {E_{i}^{{\left( {c_{j} } \right)}} } \right)^{2} } } \hfill & {{\text{if}}\; j \in \left[ {\left. { - r,r} \right)} \right.} \hfill \\ {\sqrt {\frac{1}{2r + 1}\mathop \sum \nolimits_{i = j - r}^{j + r} \left( {E_{i}^{{\left( {c_{j} } \right)}} } \right)^{2} } } \hfill & {{\text{if}}\;j \in \left[ {r, M - r} \right]} \hfill \\ {\sqrt {\frac{1}{M - j + 1}\mathop \sum \nolimits_{i = j}^{M} \left( {E_{i}^{{\left( {c_{j} } \right)}} } \right)^{2} } } \hfill & {{\text{if}}\;j \in \left( {\left. {M - r, M + r} \right]} \right.} \hfill \\ \end{array} } \right.$$
(22)

The value of the local objective function of the new model \(E^{{\left( {n_{j} } \right)}}\) is compared with the value of the local objective function of the current model \(E^{{\left( {c_{j} } \right)}}\):

$$\Delta E_{L} = E^{{\left( {c_{j} } \right)}} - E^{{\left( {n_{j} } \right)}} .$$
(23)

The model parameter modification is accepted with the probability given by the Metropolis criterion:

$$P\left( {\Delta E,T} \right) = \left\{ {\begin{array}{*{20}l} {e^{{\frac{{ -\Delta E_{L} }}{{T_{i} }}}} } \hfill & {{\text{if}}\;\Delta E_{L} > 0} \hfill \\ 1 \hfill & {{\text{if}}\;\Delta E_{L} \le 0} \hfill \\ \end{array} } \right.$$
(24)

If the modification is accepted, the current model dataset is updated. The parameter \(m_{j}^{\left( c \right)}\) is substituted by the parameter \(m_{j}^{{\left( {n_{j} } \right)}}\), and the parts of the vectors \(\varvec{d}^{{\left( \varvec{c} \right)}}\) and \(\varvec{E}^{{\left( \varvec{c} \right)}}\) affected by the modification are substituted by the vectors \(\varvec{d}^{{\left( {n_{j} } \right)}}\) and \(\varvec{E}^{{\left( {\varvec{n}_{\varvec{j}} } \right)}} .\)

Finalization of the optimization sequence

The optimization sequence ends after \(N_{T} \left( {M + 2r + 1} \right)\) iterations, where \(M + 2r + 1\) is the number of model parameters. To establish the impact of the entire sequence of modifications on the accuracy of the formation model, the global objective function of the current model \(E^{\left( c \right)}\):

$$E^{\left( c \right)} = \sqrt {\frac{1}{M}\mathop \sum \nolimits_{i = 0}^{M} \left( {E_{i}^{\left( c \right)} } \right)^{2} }$$
(25)

is compared with the global objective function of the best model \(E^{\left( b \right)}\), whose value remains unchanged during the entire optimization sequence:

$$\Delta E_{G} = E^{\left( b \right)} - E^{\left( c \right)} .$$
(26)

The best model dataset is updated only if the value of the current model's global objective function is equal to or less than the value of the best model's objective function:

$$\begin{array}{*{20}c} {\left\{ {\begin{array}{*{20}c} {\varvec{m}^{{\left( \varvec{b} \right)}} = \varvec{m}^{{\left( \varvec{c} \right)}} } \\ {\varvec{d}^{{\left( \varvec{b} \right)}} = \varvec{d}^{{\left( \varvec{c} \right)}} } \\ {E^{\left( b \right)} = E^{\left( c \right)} } \\ \end{array} } \right.} & {{\text{if}}\;\Delta E_{G} \le 0} \\ \end{array}$$
(27)

Finalization of the optimization procedure

The optimization sequence is iteratively repeated until the algorithm reaches the acceptable value of the global objective function \(( {E^{(b)} \le E^{{(\min)}} })\) or the final temperature value \(\left( {T_{i} \le T_{n} } \right).\)

Application of inversion algorithm to DIT logs

The algorithm was tested on resistivity logs recorded in a borehole drilled in the Carpathian Foredeep in Poland, where the DIT and the HRAI tool were run in the same depth interval.

The borehole penetrates a multi-horizon gas field located within thin-bedded sandy–shaly Miocene deposits. The depth interval selected for the test is located within one of the gas horizons encountered in the well. The top 8 m of the selected depth interval were cored. Rock samples indicate that the sedimentary formation in the cored interval consists of mudstones, siltstones, shales, and sandstones. The thicknesses of individual layers within the cored interval range from millimeters within heterolithic complexes to around 70 cm in the case of the relatively thick sandstone layers located at the bottom of the cored interval.

The algorithm was applied to the medium (ILM) and deep (ILD) resistivity logs recorded by the DIT. Values of the inversion parameters (Table 1) were derived from analysis of the behavior of the inversion procedure when applied to synthetic and real data. The acceptable value of the global objective function was set to 0 to allow us to observe the whole optimization procedure. In addition, the repeatability of the results was tested based on 50 independent runs of the algorithm.

Table 1 Values of inversion parameters

Results of inversion were compared with HRAI logs with a depth of investigation similar to that of the DIT logs and 1 ft vertical resolution (Fig. 3). Comparison of the standard DIT logs with the corresponding HRAI logs shows how significantly the lower vertical resolution of the DIT affects the measured data. The complex structure of the thin-bedded formation visible on the HRAI logs is almost invisible on the DIT logs. Values of formation resistivity recorded by the DIT are averaged over large depth intervals. This leads to an underestimation of the resistivity of thin hydrocarbon-bearing sandstone layers and may result in significant errors during interpretation. The inversion procedure manages to restore a significant portion of the information averaged out in the process of recording DIT logs. The models of the vertical distribution of formation resistivity obtained as the result of inversion of the DIT logs provide a level of detail similar to that of the corresponding HRAI logs.

Fig. 3

Results of inversion of DIT logs in comparison with HRAI logs with a similar depth of investigation and 1 ft vertical resolution. ILM—the medium DIT log; RTV_ILM—the model of the true vertical resistivity distribution obtained from the ILM log; dRTV_ILM—the range of RTV_ILM values obtained during 50 independent runs of the algorithm; ILM_S—the synthetic medium DIT log; ILD—the deep DIT log; RTV_ILD—the model of the true vertical resistivity distribution obtained from the ILD log; dRTV_ILD—the range of RTV_ILD values obtained during 50 independent runs of the algorithm; ILD_S—the synthetic deep DIT log; H003—the HRAI log (30 in. depth of investigation and 1 ft vertical resolution); and H006—the HRAI log (60 in. depth of investigation and 1 ft vertical resolution)

Results of the inversion are also highly repeatable. The maximal difference in the value of a specific model parameter across the 50 independent runs of the algorithm ranges from 0.0231 to 0.0942 Ωm, with a mean value of 0.0483 Ωm, in the case of models obtained from the ILM log, and from 0.0389 to 0.1797 Ωm, with a mean value of 0.0936 Ωm, in the case of models obtained from the ILD log. Relative to the mean values of the model parameters, the maximal difference in the value of a specific model parameter across the 50 independent runs amounts to 1.08–2.81% of the mean values of these parameters, with a mean value of 1.89%, in the case of models obtained from the ILM log, and to 1.49–5.91%, with a mean value of 3.53%, in the case of models obtained from the ILD log.

To compare the performance of the modified and standard versions of the optimization procedure, the algorithm was run an additional 50 times with the same values of all inversion parameters and the ILM log as input, but with the weighted random choice mechanism of model parameter selection disabled. A comparison of the results (Fig. 4) shows that the modification allows the algorithm to reduce the objective function value more quickly and to obtain a lower final value of the objective function.

Fig. 4

Objective function value versus the temperature value for the standard simulated annealing algorithm and the modified simulated annealing algorithm. E_S—the mean value of the global objective function for the standard simulated annealing algorithm obtained during 50 independent runs of the algorithm; dE_S—the range of E_S values obtained during 50 independent runs of the algorithm; E_M—the mean value of the global objective function for the modified simulated annealing algorithm obtained during 50 independent runs of the algorithm; dE_M—the range of E_M values obtained during 50 independent runs of the algorithm

Summary

The paper presents the use of a global inversion algorithm to improve the vertical resolution of DIT logs. The algorithm was tested on resistivity logs recorded in a borehole drilled in the Carpathian Foredeep in Poland, where the DIT and the HRAI tool were run in the same depth interval. Resistivity models computed on the basis of DIT logs very closely follow HRAI logs with a similar depth of investigation and 1 ft vertical resolution. The modification introduced to the mechanism of selecting model parameters for modification allowed the algorithm to reduce the objective function value more quickly and to obtain a lower final value of the objective function in comparison with the unmodified algorithm.