1 Introduction—the data system

In the process of well logging interpretation and inversion, the data are usually analyzed in predefined, disjoint depth intervals (interpretation intervals). For a given depth interval, the applied petrophysical model (petrophysical equations) and the associated predefined zone parameters (e.g. formation water resistivity, cementation exponent, shale and matrix characteristics) are the same. In conventional inversion, the estimation of rock parameters is performed depth by depth, usually as a solution to a weakly overdetermined problem, despite the high sensitivity of the model equations to the zone parameters.

The focus of this study is mainly the zone parameter problem. It should be noted that interval inversion has been used by other authors to approximate the depth function of the local parameters (Dobróka, Szabó 2011; Dobróka et al. 2016).

If the zone parameters are included in the inversion, the fitting processes for the data of the different depth points are coupled through the common parameter(s). When the maximum likelihood (ML) method is used, the likelihood function should be extended over the entire interval data set (interval likelihood function).

The input data set for the inversion problem is the sampled well logging measurements \(\left( {{\mathbf{Y}}^{N \times M} } \right)\). Under the assumption of statistically independent measurement errors, the interval likelihood function (the conditional density function of the measurement data given the parameters and the petrophysical model) is as follows:

$$L({\mathbf{Y}}) = pdf\left( {{\mathbf{Y}}\left| {{\mathbf{P}},{\mathbf{p}}_{z} } \right.} \right) = \prod\limits_{i = 1}^{N} {\prod\limits_{j = 1}^{M} {pdf_{i,j} (Y_{i,j} \left| {{\mathbf{P}}_{i} ,{\mathbf{p}}_{z} )} \right.} }$$
(1)

where N is the number of depth points in the interpretation interval, M is the number of logs, i is the depth index, j is the index of the measurement type \(\left( {i \in \left( {1..N} \right);j \in \left( {1..M} \right)} \right)\), \({\mathbf{P}}_{i} \in \Re^{K}\) is the parameter vector (local parameters) associated with the ith depth point, i.e. the ith row vector of the parameter matrix P, K is the number of local parameters at a given depth point, and pz is the vector of zone parameters. In the case of zero-mean, additive Gaussian noise, the interval likelihood function is (Tarantola 1987; Szatmáry 2002):

$$L\left( {\mathbf{Y}} \right) = \prod\limits_{i = 1}^{N} {L_{i} } \left( {{\mathbf{Y}}_{i} } \right) = \frac{{\left( {\prod\limits_{i = 1}^{N} {\prod\limits_{j = 1}^{M} {w_{j} } } } \right)^{0.5} }}{{\left( {2\pi \sigma^{2} } \right)^{0.5 \cdot M \cdot N} }}\exp \left( { - \frac{1}{{2\sigma^{2} }}\sum\limits_{i = 1}^{N} {\sum\limits_{j = 1}^{M} {w_{j} \left( {Y_{i,j} - f_{j} \left( {{\mathbf{P}}_{i} ,{\mathbf{p}}_{z} } \right)} \right)^{2} } } } \right)$$
(2)

where wj is the weight associated with a given measurement type, inversely proportional to the variance of that measurement type \(\left( {w_{j} = \sigma^{2} /\sigma_{j}^{2} } \right)\), σ is the normalization factor for the weights, and fj() is the model function (the direct problem for the jth measurement) over the parameter space. Li() is the local likelihood function connected to the ith depth point. Taking the logarithm of the interval likelihood gives the functional to be minimized (Q) for the whole interpretation interval:

$$Q\left( {{\mathbf{Y}}\left| {{\mathbf{P}}_{{}} ,{\mathbf{p}}_{z} } \right.} \right) = \sum\limits_{i = 1}^{N} {Q_{i} } \left( {{\mathbf{Y}}_{i} \left| {{\mathbf{P}}_{i} ,{\mathbf{p}}_{z} } \right.} \right) = \sum\limits_{i = 1}^{N} {\sum\limits_{j = 1}^{M} {w_{j} \left( {Y_{i,j} - f_{j} \left( {{\mathbf{P}}_{i} ,{\mathbf{p}}_{z} } \right)} \right)^{2} } }$$
(3)

The sum of squares for the interval is the sum of the weighted sums of squares (Qi) of the local (single depth point) fitting problems. The parameters behave as a complex coupled system whose state is determined by the minimum value of Q.
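As a minimal numerical sketch (our own illustration, not code from the paper; the function and variable names are assumptions, and a user-supplied forward model `f` is presumed), the interval functional Q of Eq. 3 is simply the sum of the weighted local misfits Qi:

```python
import numpy as np

def interval_objective(Y, P, p_z, f, w):
    """Interval functional Q of Eq. 3: sum of the weighted local misfits Q_i.

    Y   : (N, M) measured logs, one row per depth point
    P   : (N, K) local parameters, one row per depth point
    p_z : scalar zone parameter (K_z = 1)
    f   : callable, f(P_i, p_z) -> length-M vector of synthetic log values
    w   : (M,) weights w_j = sigma**2 / sigma_j**2
    """
    Q = 0.0
    for i in range(Y.shape[0]):          # loop over depth points
        r = Y[i] - f(P[i], p_z)          # residual vector at depth i
        Q += np.sum(w * r * r)           # local Q_i added to the interval sum
    return Q
```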

For ease of handling, let us convert the elements of the measurement data system Y into the following vector form (y), ordered by depth:

$${\mathbf{y}}^{T} = \left[ {\underbrace {{Y_{1,1} ,Y_{1,2} ...Y_{1,M} }}_{Depth1},\underbrace {{Y_{2,1} ,Y_{2,2} ,...Y_{2,M} }}_{Depth2},...\underbrace {{Y_{N,1} ,Y_{N,2} ,...Y_{N,M} }}_{DepthN}} \right]$$
(4a)

The measurement data vector defined above is an element of a smooth manifold (My). The parameter vector \(\left( {{\mathbf{p}} \in \Re^{{N \cdot K + K_{z} }} } \right)\) to be determined is an element of a manifold (Mp):

$${\mathbf{p}}^{T} = \left[ {\underbrace {{P_{1,1} ,P_{1,2} ,...P_{1,K} }}_{Depth1},\underbrace {{P_{2,1} ,P_{2,2} ,...P_{2,K} }}_{Depth2},...\underbrace {{P_{N,1} ,P_{N,2} ,...P_{N,K} }}_{DepthN},p_{z} } \right]$$
(4b)

Kz is the number of zone parameters involved in the fitting (in this paper we consider the case of one zone parameter, but the statements can be generalized). The elements of the space Mp are mapped by a vector-valued function \(\left( {{\mathbf{f}}:\Re^{{N \cdot K + K_{z} }} \to \Re^{N \cdot M} } \right)\) into the manifold Mf, which is a subspace of My (Mf \(\subset\) My). Hence Mf is parameterizable by elements of the parameter space. Given a proper log set, the relation between Mp and Mf is bijective (and it is also bijective locally, per depth point). The model-dependent vector-valued function f() between the two manifolds is:

$${\mathbf{f}}\left( {{\mathbf{p}},p_{z} } \right)^{T} = \left[ {\underbrace {{f_{1} \left( {{\mathbf{p}}_{1} ,p_{z} } \right),..f_{M} \left( {{\mathbf{p}}_{1} ,p_{z} } \right),}}_{Depth1}\underbrace {{f_{1} \left( {{\mathbf{p}}_{2} ,p_{z} } \right),..f_{M} \left( {{\mathbf{p}}_{2} ,p_{z} } \right)}}_{Depth2}...\underbrace {{f_{1} \left( {{\mathbf{p}}_{N} ,p_{z} } \right),..f_{M} \left( {{\mathbf{p}}_{N} ,p_{z} } \right)}}_{DepthN}} \right]$$
(4c)

In the process of inversion (i.e. minimizing the functional Q), the current element of My (the measurement data vector for the interval) is projected onto Mf and, due to the homeomorphic relationship between Mf and Mp, also into the parameter space. This projection determines not only the parameter estimate (point estimate), but also the confidence intervals of the parameters, through the projection of the confidence interval associated with the measurement error distribution. This homeomorphism is the criterion for the necessary relationship between the measurement complex and the model space that ensures the unambiguity of the inversion. In the inversion, the inverse mapping between the manifolds Mf and Mp is only approximated, using a Taylor series (mostly only up to the linear term).

2 Jacobian matrix

The Jacobian matrix \(\left( {{\mathbf{J}}^{{\left( {N \cdot M \times \left( {N \cdot K + K_{z} } \right)} \right)}} } \right)\) is of fundamental importance in the relationship between the manifolds Mf and Mp and in determining the inversion, i.e. the projection onto Mf. The Jacobian is also extended over the whole depth interval. The derivatives with respect to the local parameters define the matrix elements:

$$J_{{\left( {i - 1} \right) \cdot M + j,\left( {i - 1} \right)K + k}} = \frac{{\partial f_{j} \left( {{\mathbf{p}}_{i} ,p_{z} } \right)}}{{\partial p_{i,k} }}$$
(5a)

Similarly, the derivatives with respect to the zone parameter:

$$J_{{\left( {i - 1} \right) \cdot M + j,NK + 1}} = \frac{{\partial f_{j} \left( {{\mathbf{p}}_{i} ,p_{z} } \right)}}{{\partial p_{z} }}$$
(5b)

The interval Jacobian matrix defined above provides a point-to-point local coordinate system (its column vectors) on the tangent space of Mf to describe the projection of the error vectors. It plays a fundamental role in providing the projection that determines the linearized weighted least squares estimate. As is well known, close to the optimum, where the linear approximation is valid, the projection of Δy (the error vector) onto the parameter domain can be approximated as (Tarantola 1987):

$$\Delta {\mathbf{p}} = \left( {{\mathbf{J}}^{T} {\mathbf{WJ}}} \right)^{ - 1} {\mathbf{J}}^{T} {\mathbf{W}}\Delta {\mathbf{y}}$$
(6)

where

$$\Delta y_{{\left( {i - 1} \right)M + j}} = y_{{\left( {i - 1} \right)M + j}} - f_{j} \left( {{\mathbf{p}}_{i} ,p_{z} } \right)$$

In the case of an additive error model, the estimated error vector belongs to the kernel of the operator on the right-hand side of Eq. 6.

The weights (wj) in the quadratic form (Eq. 3) are also adapted to the “interval” formalism. For ease of handling, they can be written in the form of a diagonal weight matrix \(\left( {{\mathbf{W}}^{M \cdot N \times M \cdot N} } \right)\) with diagonal elements:

$$W_{{\left( {i - 1} \right)M + j,\left( {i - 1} \right)M + j}} = W_{j,j} = w_{j}$$
(7)
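As a minimal sketch (our own illustration, not code from the paper), one linearized update of Eq. 6 can be written with NumPy, storing only the diagonal of W as a vector:

```python
import numpy as np

def gauss_newton_step(J, w_diag, dy):
    """One weighted least-squares update, Eq. 6: dp = (J^T W J)^-1 J^T W dy.

    J      : (N*M, N*K + 1) interval Jacobian (zone-parameter column last)
    w_diag : (N*M,) diagonal of the weight matrix W (Eq. 7)
    dy     : (N*M,) residual vector of Eq. 6
    """
    JtW = J.T * w_diag                   # multiplies column j of J^T by W_jj
    R = JtW @ J                          # R = J^T W J (Eq. 11)
    return np.linalg.solve(R, JtW @ dy)  # solve instead of forming R^-1 explicitly
```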

The structure of the interval Jacobian matrix is shown in Fig. 1.

Fig. 1

The structure of the interval Jacobian matrix. a Jacobian in the case of one zone parameter. b Jacobian without coupling (conventional approach)
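A minimal sketch of how a Jacobian with the structure of Fig. 1a can be assembled, assuming the per-depth derivative blocks of Eqs. 5a-5b are already available (the function name and argument layout are our illustrative assumptions):

```python
import numpy as np

def assemble_interval_jacobian(J_local, J_zone):
    """Interval Jacobian of Fig. 1a for one zone parameter.

    J_local : list of N arrays, each (M, K), the local derivatives of Eq. 5a
    J_zone  : list of N arrays, each (M,), the zone-parameter derivatives of Eq. 5b
    Returns an (N*M, N*K + 1) matrix: block-diagonal local blocks plus the
    rightmost coupling column J_z.
    """
    N = len(J_local)
    M, K = J_local[0].shape
    J = np.zeros((N * M, N * K + 1))
    for i in range(N):
        J[i*M:(i+1)*M, i*K:(i+1)*K] = J_local[i]   # local block of depth point i
        J[i*M:(i+1)*M, -1] = J_zone[i]             # coupling column (zone parameter)
    return J
```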

The coupling part related to the zone parameter is represented by the rightmost column of the matrix (the vector Jz). If there are no common zone parameters in the inversion, then the fitting problem is separable across depth points and the inversion can be performed in the conventional way. The matrix \(\left( {{\mathbf{J}}^{{\mathbf{T}}} {\mathbf{WJ}}} \right)\) is also essential for the inversion and, later, for estimating the covariance matrix of the parameters. Based on the known expression (Tarantola 1987; Szatmáry 2002) for the covariance matrix of the fitted parameters (applying Eq. 6):

$${\mathbf{C}}_{{{\mathbf{\Delta p,\Delta p}}}} = E\left( {\Delta {\mathbf{p}}\Delta {\mathbf{p}}^{T} } \right) = \left( {{\mathbf{J}}^{T} {\mathbf{WJ}}} \right)^{ - 1} {\mathbf{J}}^{T} {\mathbf{W}}E\left( {\Delta {\mathbf{y}}\Delta {\mathbf{y}}^{T} } \right){\mathbf{WJ}}\left( {{\mathbf{J}}^{T} {\mathbf{WJ}}} \right)^{ - 1} = \sigma^{2} \left( {{\mathbf{J}}^{T} {\mathbf{WJ}}} \right)^{ - 1}$$
(8)

In the transformation of the above formula, the diagonal interval covariance matrix of the independent measurement data was used:

$${\mathbf{C}}_{\Delta y,\Delta y} = E\left( {\Delta {\mathbf{y}}\Delta {\mathbf{y}}^{T} } \right) = \sigma^{2} {\mathbf{W}}^{ - 1}$$
(9)

The parameter σ in Eq. 9 can be estimated using the minimum of the Q functional (the denominator is the number of degrees of freedom):

$$\sigma = \sqrt {\frac{{Q_{\min } }}{{N \cdot \left( {M - K} \right) - K_{z} }}}$$
(10)

The following notation is introduced for the contraction of the J-matrix:

$${\mathbf{R}} = {\mathbf{J}}^{{\mathbf{T}}} {\mathbf{WJ}}$$
(11)

The J-matrix and thus the matrix R for the interval can be decomposed into sub-matrices since the parts corresponding to the parameters of each depth point and the zone parameter are separated. The R-matrix of the local parameters associated with the oth depth point:

$$R_{{o;m,n}} = \sum\limits_{{j = 1 + \left( {o - 1} \right)M}}^{{oM}} {W_{{j,j}} J_{{j,m}} } J_{{n,j}}^{T} \;m,n \in \left( {K \cdot \left( {o - 1} \right),K \cdot o} \right)\;o \in \left( {1,N} \right)$$
(12a)

The sub-matrix describing the joint contribution of the zone parameter and the local parameters of the oth depth point:

$$R_{zo;m} = \sum\limits_{{j = 1 + \left( {o - 1} \right)M}}^{oM} {W_{j,j} J_{j,m} } J_{NK + 1,j}^{T}$$
(12b)

The R-matrix element defined by only the zone parameter:

$$R_{zz} = \sum\limits_{o = 1}^{MN} {W_{o,o} J_{o,NK + 1}^{2} }$$
(12c)
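A minimal sketch computing the sub-matrices of Eqs. 12a-12c directly from the per-depth Jacobian blocks (our illustration; variable names are assumptions):

```python
import numpy as np

def arrowhead_blocks(J_local, J_zone, w):
    """Sub-matrices of R = J^T W J for one zone parameter (Eqs. 12a-12c).

    J_local : list of N arrays (M, K), J_zone : list of N arrays (M,),
    w : (M,) weights. Returns the local blocks R_o, the coupling vectors
    R_zo and the scalar R_zz.
    """
    R_loc, R_z, R_zz = [], [], 0.0
    for Ji, jz in zip(J_local, J_zone):
        JtW = Ji.T * w                  # (K, M), column j scaled by w_j
        R_loc.append(JtW @ Ji)          # Eq. 12a
        R_z.append(JtW @ jz)            # Eq. 12b
        R_zz += np.sum(w * jz * jz)     # Eq. 12c, summed over the whole interval
    return R_loc, R_z, R_zz
```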

In the case of interval inversion, the structure of this matrix is also special: it is known as an "arrowhead" matrix (Fig. 2).

Fig. 2

The arrowhead-type hypermatrix structure of the matrix R. a Decomposition in which the sub-matrices for the different depth points are separated. b Decomposition in which the conventional part and the coupling part can be seen (as sub-matrices)

3 Inverse of arrowhead matrix

Equation 8 shows that the covariance matrix of the parameters is determined by the inverse of the matrix R. It is also well known that the covariance matrix of the fitted function values is based on this inverse (Szatmáry 2002):

$${\mathbf{C}}_{{\Delta {\hat{\mathbf{y}}},\Delta {\hat{\mathbf{y}}}}} = \sigma^{2} {\mathbf{JR}}^{ - 1} {\mathbf{J}}^{T}$$
(13)

The inverse of the matrix with the "arrowhead" structure can be expressed analytically (Jakovcevic et al 2015). Consider the following decomposition of matrix R (submatrices are defined in Eq. 12):

$${\mathbf{R}} = \left[ {\begin{array}{*{20}c} {{\mathbf{R}}_{0} } & {{\mathbf{R}}_{z} } \\ {{\mathbf{R}}_{z}^{T} } & {R_{zz} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\mathbf{R}}_{1} } & {} & {} & {{\mathbf{R}}_{z1} } \\ {} & {{\mathbf{R}}_{2} } & {} & {{\mathbf{R}}_{z2} } \\ {} & {} & {...} & {...} \\ {{\mathbf{R}}_{z1}^{T} } & {{\mathbf{R}}_{z2}^{T} } & {} & {R_{zz} } \\ \end{array} } \right]$$
(14)

where R0 is the uncoupled R-type matrix of the local parameters. This matrix contains the previously defined square local matrices (R1, R2, …) in a block-diagonal arrangement. The kth element of the vector Rz is:

$$R_{z;k} = \sum\limits_{j = 1}^{M \cdot N} {J_{k,j}^{T} W_{j,j} } J_{j,NK + 1}$$
(15)

This vector can also be decomposed into sub-vectors per depth point (Fig. 2). The inverse of the matrix R can be generated as the sum of two matrices: the first part is the inverse of the uncoupled part of R, while the second part can be written in dyadic form:

$${\mathbf{R}}^{ - 1} = \left[ {\begin{array}{*{20}c} {{\mathbf{R}}_{0}^{ - 1} } & 0 \\ 0 & {0_{{}} } \\ \end{array} } \right] + \rho \cdot {\mathbf{uu}}^{T}$$
(16)

The definition of vector u:

$${\mathbf{u}} = \left[ \begin{gathered} {\mathbf{R}}_{0}^{ - 1} {\mathbf{R}}_{z} \hfill \\ - 1 \hfill \\ \end{gathered} \right]$$
(17)

The coefficient (ρ) of the second part is:

$$\rho = \frac{1}{{R_{zz} - {\mathbf{R}}_{z}^{T} {\mathbf{R}}_{0}^{ - 1} {\mathbf{R}}_{z} }} = \frac{1}{{{\mathbf{R}}_{z}^{T} \left( {{\mathbf{I}} - {\mathbf{R}}_{0}^{ - 1} } \right){\mathbf{R}}_{z} }} = \frac{1}{{\sum\limits_{i = 1}^{N} {{\mathbf{R}}_{z;i}^{T} \left( {{\mathbf{I}} - {\mathbf{R}}_{i}^{ - 1} } \right){\mathbf{R}}_{z;i} } }}$$
(18)

where I is the identity matrix. The correctness of the above inverse can be verified using the Sherman-Morrison theorem (Sherman, Morrison 1949; Saberi Najafi et al. 2014; Stanimirovic et al. 2019), which applies to matrices consisting of a (block-)diagonal part and a dyadic (rank-one) part (Eq. 16). Thanks to the inverse given in the above analytical form, the covariance matrix (variances and covariances) of the fitted parameters can be studied directly (Fig. 3).

Fig. 3

Structure of arrowhead matrix inverse. The left part contains the R-inverse of conventional inversion \(\left( {{\mathbf{R}}_{0}^{ - 1} } \right)\)
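The analytical inverse of Eqs. 16-18 can be sketched as follows (a minimal illustration using the first form of Eq. 18; function and variable names are our assumptions). The result can be cross-checked against a direct numerical inverse of the assembled R, in the spirit of the Sherman-Morrison verification mentioned above.

```python
import numpy as np
from scipy.linalg import block_diag

def arrowhead_inverse(R_loc, R_z, R_zz):
    """Inverse of the arrowhead matrix R via Eqs. 16-18.

    R_loc : list of N local blocks (K, K), R_z : list of N coupling
    vectors (K,), R_zz : scalar. Returns R^-1 of size (N*K + 1, N*K + 1).
    """
    R0_inv = [np.linalg.inv(Ri) for Ri in R_loc]
    # u = [R0^-1 R_z ; -1], Eq. 17, built block by block
    u = np.concatenate([Ri_inv @ rz for Ri_inv, rz in zip(R0_inv, R_z)] + [[-1.0]])
    # rho = 1 / (R_zz - R_z^T R0^-1 R_z), first form of Eq. 18
    rho = 1.0 / (R_zz - sum(rz @ Ri_inv @ rz for Ri_inv, rz in zip(R0_inv, R_z)))
    n = u.size
    R_inv = np.zeros((n, n))
    R_inv[:-1, :-1] = block_diag(*R0_inv)      # uncoupled part of Eq. 16
    return R_inv + rho * np.outer(u, u)        # dyadic correction of Eq. 16
```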

4 Sigma parameter

Besides the R matrix, the other important factor that determines the covariance matrix of the estimated parameters is σ:

$${\mathbf{C}}_{\Delta p,\Delta p} = \sigma^{2} {\mathbf{R}}^{ - 1}$$
(19)

It also determines the probability distribution of the minimum value of Q (Szatmáry 2002):

$$Q_{\min } = Q\left( {{\hat{\mathbf{p}}}} \right) = \sigma^{2} \chi_{DF}^{2}$$
(20)

It can be estimated using the minimum value of Q, but its value is also affected by the number of degrees of freedom (DF) associated with the estimate. As the number of parameters decreases, the number of degrees of freedom increases, but the value of the squared deviation (Q) increases due to a potentially worse-fitting (or “smoother”) model. For the analysis, let us take as the reference case a conventional depth-by-depth inversion, which ensures the smallest weighted quadratic deviation (Q0); here the parameter pz, which is later considered as the zone parameter, is also fitted per depth point (pz;i) using Eq. 6. Then the reference value of σ for the whole interval is (σ0):

$$\sigma_{0} = \sqrt {\frac{{Q_{0} }}{NM - N(K + 1)}} = \sqrt {\frac{{\sum\limits_{i = 1}^{N} {Q_{0,i} } }}{NM - N(K + 1)}}$$
(21)

If the zone parameter is fitted as a constant (pz) over the entire interval, the number of degrees of freedom increases significantly (to NM − NK − 1), so the variance of the parameters could be significantly reduced, but the Qmin value increases.

The constant zone parameter is not optimal locally at a given depth point, so it differs locally from the reference values (pz;i). This change also induces a change in the local parameters to "compensate", that is, to minimize the Qi value by a correlated change under the new condition. Let us examine the change in Qi,min. The change in the Qi value can be approximated (to linear order):

$$Q_{i} = \sum\limits_{j = 1}^{M} {w_{j} \left( {Y_{i,j} - f_{j} \left( {{\mathbf{P}}_{0,i} ,p_{0z} } \right) - \sum\limits_{k = 1}^{K} {\frac{{\partial f_{j} }}{{\partial P_{i,k} }}\Delta P_{i,k} } - \frac{{\partial f_{j} }}{{\partial p_{z} }}\Delta p_{z;i} } \right)^{2} }$$
(22)

The inequality \(Q_{i} \ge Q_{0,i}\) persists during the change. The increment ΔQi is expressed by a Taylor expansion around the reference (\(Q_{0,i}\)):

$$\Delta Q_{i} = \sum\limits_{j = 1}^{M} {w_{j} \left( {\sum\limits_{k = 1}^{K} {\frac{{\partial f_{j} }}{{\partial P_{i,k} }}\Delta P_{i,k} + \frac{{\partial f_{j} }}{{\partial p_{z} }}\Delta p_{z;i} } } \right)^{2} }$$
(23)

Writing the quadratic form above in matrix form:

$$\Delta Q_{i} = \Delta {\mathbf{P}}_{i}^{T} {\mathbf{J}}_{0,i}^{T} {\mathbf{W}}_{i} {\mathbf{J}}_{0,i} \Delta {\mathbf{P}}_{i} + 2\Delta p_{z;i} {\mathbf{J}}_{0z;i}^{T} {\mathbf{W}}_{i} {\mathbf{J}}_{0,i} \Delta {\mathbf{P}}_{i} + \Delta p_{z;i}^{2} {\mathbf{J}}_{0z;i}^{T} {\mathbf{W}}_{i} {\mathbf{J}}_{0z;i} = \Delta {\mathbf{P}}_{i}^{T} {\mathbf{R}}_{0;i} \Delta {\mathbf{P}}_{i} + 2\Delta p_{z;i} {\mathbf{R}}_{0z;i}^{T} \Delta {\mathbf{P}}_{i} + \Delta p_{z;i}^{2} R_{0;zz}$$
(24)

This defines a quadratic hyper-surface as a function of the parameter change vector (ΔQi(ΔPi, Δpz;i)), which takes the value zero at the origin. With a new minimum search, the direction of the smallest change in Qi can be determined:

$$\frac{{\partial \Delta Q_{i} }}{{\partial \Delta {\mathbf{P}}_{i} }} = 2{\mathbf{R}}_{0;i} \Delta {\mathbf{P}}_{i} + 2\Delta p_{z;i} {\mathbf{R}}_{0z;i} = {\mathbf{0}}$$
(25)

From this, the equation for the parameter change is:

$$\Delta {\mathbf{P}}_{i} = - \Delta p_{z;i} {\mathbf{R}}_{0,i}^{ - 1} {\mathbf{R}}_{0z;i}$$
(26)

Substituting this back into the equation for ΔQi:

$$\Delta Q_{i} = \Delta p_{z;i}^{2} {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0;i}^{ - T} {\mathbf{R}}_{0;i} {\mathbf{R}}_{0;i}^{ - 1} {\mathbf{R}}_{0z;i} - 2\Delta p_{z;i}^{2} {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0;i}^{ - 1} {\mathbf{R}}_{0z;i} + \Delta p_{z;i}^{2} R_{0;zz}$$
(27a)
$$\Delta Q_{i} = \Delta p_{z;i}^{2} {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0;i}^{ - T} {\mathbf{R}}_{0z;i} - 2\Delta p_{z;i}^{2} {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0;i}^{ - 1} {\mathbf{R}}_{0z;i} + \Delta p_{z;i}^{2} R_{0zz} = \Delta p_{z;i}^{2} \left( {R_{0zz} - {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0;i}^{ - 1} {\mathbf{R}}_{0z;i} } \right)$$
(27b)

Summing the increments over all depth points gives the change for the whole interval:

$$\Delta Q = \sum\limits_{i = 1}^{N} {\Delta p_{z,i}^{2} } \left( {R_{0zz} - {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0,i}^{ - 1} {\mathbf{R}}_{0z;i}^{{}} } \right)$$
(28)

Taking this effect into account, the value of σ is:

$$\sigma = \sqrt {\frac{{Q_{0} + \Delta Q}}{NM - NK - 1}} \approx \sqrt {\frac{M}{{M - K - 1/N}}}$$
(29)

By including the zone parameter into the inversion, the improvement in the degree of freedom is not significant.

5 Parameter estimation by interval likelihood

Parameter estimation for the interval inversion can be done using Eq. 6. This linearized formula is often used to find the minimum value of Q, usually by iteration with the R matrix. For interval inversion, the form given in Eq. 16 is preferable. The parameter estimation operator is then split into two parts (except for the zone parameter):

$$\Delta {\mathbf{p}} = {\mathbf{R}}^{ - 1} {\mathbf{J}}^{T} {\mathbf{W}}\Delta {\mathbf{y}} = \underbrace {{\left[ {\begin{array}{*{20}c} {{\mathbf{R}}_{0}^{ - 1} } & 0 \\ 0 & 0 \\ \end{array} } \right]{\mathbf{J}}^{T} {\mathbf{W}}\Delta {\mathbf{y}}}}_{Conventional} + \underbrace {{\rho \cdot {\mathbf{uu}}^{T} {\mathbf{J}}^{T} {\mathbf{W}}\Delta {\mathbf{y}}}}_{Correction}$$
(30)
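A sketch of the split in Eq. 30 (our illustration under the one-zone-parameter case; here R0_inv denotes the block-diagonal inverse of the uncoupled part, and all names are assumptions):

```python
import numpy as np

def split_update(J, w_diag, dy, R0_inv, R_z, R_zz):
    """Interval update of Eq. 30, split into conventional and correction parts.

    J : (N*M, P) interval Jacobian with the zone column last, w_diag : (N*M,)
    diagonal of W, dy : (N*M,) residuals, R0_inv : (P-1, P-1) inverse of the
    uncoupled part, R_z : (P-1,) coupling vector, R_zz : scalar.
    """
    g = (J.T * w_diag) @ dy                                   # J^T W dy
    conventional = np.concatenate([R0_inv @ g[:-1], [0.0]])   # first term of Eq. 30
    u = np.concatenate([R0_inv @ R_z, [-1.0]])                # Eq. 17
    rho = 1.0 / (R_zz - R_z @ R0_inv @ R_z)                   # first form of Eq. 18
    correction = rho * u * (u @ g)                            # dyadic term of Eq. 30
    return conventional, correction                           # dp is their sum
```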

The coefficient of the correction term (ρ) decreases with N; this also indicates that the result of the conventional inversion can be a good starting point for the interval inversion. Because of the large number of parameters, a good choice of initial values is particularly important. If the overdetermination allows, the zone parameter can be estimated per depth point, and from these estimates the initial value can be derived. From the linear approximation (Eq. 28), we can determine the initial value for pz:

$$\Delta Q = \sum\limits_{i = 1}^{N} {\Delta p_{z,i}^{2} } \left( {R_{0zz} - {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0,i}^{ - 1} {\mathbf{R}}_{0z;i} } \right) = \sum\limits_{i = 1}^{N} {\left( {p_{z} - p_{z,i} } \right)^{2} } \left( {R_{0zz} - {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0,i}^{ - 1} {\mathbf{R}}_{0z;i} } \right)$$
(31)

Minimizing the increment gives this initial value estimator:

$$p_{z} = \frac{{\sum\limits_{i = 1}^{N} {p_{z,i} \left( {R_{0zz} - {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0,i}^{ - 1} {\mathbf{R}}_{0z;i}^{{}} } \right)} }}{{\sum\limits_{i = 1}^{N} {\left( {R_{0zz} - {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0,i}^{ - 1} {\mathbf{R}}_{0z;i}^{{}} } \right)} }}$$
(32)
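A minimal sketch of the initial-value estimator of Eq. 32 (our illustration; the per-depth quantities are taken from the reference fit, with the zone-parameter term r_zz read per depth point, as it enters Eqs. 24-28):

```python
import numpy as np

def initial_zone_parameter(p_z_local, R_loc, R_z, r_zz):
    """Weighted average of per-depth zone-parameter estimates (Eq. 32).

    p_z_local : (N,) per-depth estimates p_{z,i} from the reference inversion
    R_loc     : list of N local blocks (K, K) of the reference fit
    R_z       : list of N coupling vectors (K,)
    r_zz      : (N,) per-depth zone-parameter terms sum_j w_j (df_j/dp_z)^2
    """
    weights = np.array([rzz_i - rz @ np.linalg.solve(Ri, rz)
                        for Ri, rz, rzz_i in zip(R_loc, R_z, r_zz)])
    return np.sum(weights * np.asarray(p_z_local)) / np.sum(weights)
```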

The above equations can also be used to study the conventional interpretation practice, i.e. using prior information rather than inversion to arrive at the zone parameter (pa):

$$\begin{gathered} \Delta Q_{\min } = \sum\limits_{i = 1}^{N} {\left( {p_{z} - p_{a} + p_{a} - p_{z,i} } \right)^{2} } \left( {R_{0zz} - {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0,i}^{ - 1} {\mathbf{R}}_{0z;i} } \right) = \hfill \\ = \frac{{\left( {p_{z} - p_{a} } \right)^{2} }}{\rho } - 2\frac{{\left( {p_{z} - p_{a} } \right)^{2} }}{\rho } + \sum\limits_{i = 1}^{N} {\left( {p_{a} - p_{z,i} } \right)^{2} } \left( {R_{0zz} - {\mathbf{R}}_{0z;i}^{T} {\mathbf{R}}_{0,i}^{ - 1} {\mathbf{R}}_{0z;i} } \right) \hfill \\ \Delta Q_{\min } + \frac{{\left( {p_{z} - p_{a} } \right)^{2} }}{\rho } = \Delta Q_{a} \hfill \\ \end{gathered}$$
(33)

This approximate formula shows how the goodness of fit deteriorates if the zone parameter does not come from the inversion and thus may contain some bias. (This is essentially a manifestation of Steiner's theorem.) The formula also shows that the mismatch increases with N.

6 Statistical properties of the estimated parameters

From Eq. (30) it follows that the parameter covariance matrix can also be separated into two parts. The first term contains the local covariance matrices of the local parameters, and the second term contains the effects due to the coupling. The latter contains the covariances between the parameters of different depth points and the covariances between the zone parameter and the local parameters (off-diagonal elements). The magnitude of this correlation is greater the more similar the rocks at the two depth points are (under similar conditions, the local parameters "compensate" in a similar way, which causes the correlation). In other words, if the local rock parameters are similar for two depth points of an interpretation zone, then a small change in the zone parameter (generated by the global minimum search) causes similar changes in the local parameters to reduce the local sums of squares.

The second part (the dyadic term in Eq. 16) also modifies the diagonal elements, which determine the estimated values of the parameter variances.

The coefficient ρ, which determines the importance of the second term, is inversely proportional to the number of depth points (N) in the interval. This means that as N increases, the second term tends to zero, i.e. for large values of N the coupling effects vanish. Asymptotically, the parameters of different depth points become uncorrelated (in the case of Gaussian noise: asymptotic freedom).

6.1 Variances of estimated parameters

The variances of the parameters are the corresponding diagonal elements of the covariance matrix (Eq. 8). The contribution of the diagonal elements of the second matrix represents a "correction" that vanishes as N increases. In the case of the zone parameter, only the second part contributes to the variance:

$$\sigma_{{p_{z} }}^{2} = \sigma^{2} \rho$$
(34)

Thus, of course, for large N, this variance also converges to zero (consistent estimate). The variances of the local parameters, on the other hand, do not tend to zero, but to the corresponding diagonal element in the first term (Fig. 4).
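A small numerical check of this behaviour (a toy example in which one synthetic depth point is replicated N times; all numbers and names are illustrative assumptions):

```python
import numpy as np

# Toy check that rho, and hence the zone-parameter variance sigma^2 * rho (Eq. 34),
# falls off roughly as 1/N when the interval is extended with similar depth points.
rng = np.random.default_rng(0)
M, K = 5, 3
Ji = rng.normal(size=(M, K))           # local Jacobian block of one depth point
jz = rng.normal(size=M)                # zone-parameter column for that depth point
w = np.ones(M)

Ri = (Ji.T * w) @ Ji                   # Eq. 12a
rz = (Ji.T * w) @ jz                   # Eq. 12b
rzz_i = np.sum(w * jz * jz)            # per-depth contribution to R_zz (Eq. 12c)

for N in (10, 100, 1000):
    # first form of Eq. 18 with N identical depth points
    rho = 1.0 / (N * (rzz_i - rz @ np.linalg.solve(Ri, rz)))
    print(N, rho)                      # rho scales as 1/N
```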

Fig. 4

Typical parameter variance trend as a function of the number of depth points. The variance of a local parameter tends to the variance obtained from the local estimation

Estimation of the variance of the kth local parameter (at depth point D):

$$\sigma_{k}^{2} = \sigma^{2} \left( {{\mathbf{R}}_{0}^{ - 1} } \right)_{k,k} + \sigma^{2} \rho \cdot \left( {{\mathbf{R}}_{0}^{ - 1} {\mathbf{R}}_{z} {\mathbf{R}}_{z}^{T} {\mathbf{R}}_{0}^{ - T} } \right)_{k,k} = \sigma^{2} R_{0;k,k}^{ - 1} + \sigma^{2} \rho \sum\limits_{q = 1}^{K} {\sum\limits_{s = 1}^{K} {R_{0D;q,k}^{ - 1} R_{zD;q} } } R_{0D;k,s}^{ - 1} R_{zD;s}$$
(35a)
$$\mathop {{\text{lim}}}\limits_{N \to \infty } \sigma_{k}^{2} = \sigma^{2} \left( {{\mathbf{R}}_{0}^{ - 1} } \right)_{k,k}$$
(35b)

The interval maximum likelihood method can therefore be used to increase the efficiency of the estimation, especially for the zone parameter, which improves steadily as the interval increases.

6.2 Parameter covariances

The covariances can be estimated using the corresponding off-diagonal elements. Normalizing the covariances by the corresponding standard deviations (the square roots of the related variances) gives the correlations.

$${\text{cov}}_{k,l} = \sigma^{2} \left( {{\mathbf{R}}_{0}^{ - 1} } \right)_{k,l} + \sigma^{2} \rho \cdot \left( {{\mathbf{R}}_{0}^{ - 1} {\mathbf{R}}_{z} {\mathbf{R}}_{z}^{T} {\mathbf{R}}_{0}^{ - T} } \right)_{k,l} = \sigma^{2} R_{0;k,l}^{ - 1} + \sigma^{2} \rho \sum\limits_{q = 1}^{K} {\sum\limits_{s = 1}^{K} {R_{0D;q,k}^{ - 1} R_{zD;q} } } R_{0D;l,s}^{ - 1} R_{zD;s}$$
(36)

If the two parameters do not belong to the same depth point (depth points d, D), the covariance is the following:

$${\text{cov}}_{k,l} = \sigma^{2} \rho \cdot \left( {{\mathbf{R}}_{0}^{ - 1} {\mathbf{R}}_{z} {\mathbf{R}}_{z}^{T} {\mathbf{R}}_{0}^{ - T} } \right)_{k,l} = \sigma^{2} \rho \sum\limits_{q = 1}^{K} {\sum\limits_{s = 1}^{K} {R_{0D;q,k}^{ - 1} R_{zD;q} } } R_{0d;l,s}^{ - 1} R_{zd;s}$$
(37)
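A sketch of Eq. 37 for two local parameters at different depth points (our illustration; only the dyadic coupling term contributes in this case, and the names are assumptions):

```python
import numpy as np

def cross_depth_covariance(R_loc_D, R_z_D, R_loc_d, R_z_d, rho, sigma, k, l):
    """Covariance of local parameter k at depth D and parameter l at depth d (Eq. 37).

    R_loc_D, R_loc_d : (K, K) local blocks, R_z_D, R_z_d : (K,) coupling vectors,
    rho : coefficient of Eq. 18, sigma : noise scale of Eq. 10.
    """
    v_D = np.linalg.solve(R_loc_D, R_z_D)   # R_{0;D}^-1 R_{z;D}
    v_d = np.linalg.solve(R_loc_d, R_z_d)   # R_{0;d}^-1 R_{z;d}
    return sigma**2 * rho * v_D[k] * v_d[l]
```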

Equation (24) shows that the covariance between the zone parameter and the local parameters has the opposite sign to the covariance between the local parameters. As can be seen in Fig. 5, as the number of depth points increases, the covariance structure tends to the values obtained with the conventional method (asymptotic freedom in the case of Gaussian noise).

Fig. 5

The absolute value of the correlation coefficient as a function of the number of depth points in the interval. The correlation between the zone parameter and a local parameter is higher than the correlation between two local parameters (which do not belong to the same depth point)

7 Conclusion

If the likelihood function is extended over the entire data set of a depth interval, then the zone parameters can be included in the inversion. In terms of the inversion, this links the parameter changes during the minimum search, which is reflected in the estimated covariance structure. In this study the covariance matrix has been generated in a form that separates the covariance structure associated with conventional inversion from the correction caused by parameter coupling. The coefficient of the correction part decreases with the number of depth points in the interpretation interval. Therefore, the correlation between the zone parameter and the local parameters, or between local parameters at different depth points, vanishes as the number of depth points increases (asymptotic freedom). The variance of the zone parameter also decreases with N, tending to zero for large numbers of depth points, while the variances of the local parameters approach the values of the conventional inversion. Thus, the paper has shown that the zone parameter can be efficiently estimated using the interval likelihood method (ILM), which is essential considering the sensitivity of the petrophysical equations.