
Accuracy of adaptive maximum likelihood algorithm for determination of micro earthquake source coordinates using surface array data in condition of strong coherent noise

  • Original Paper
  • Published in GEM - International Journal on Geomathematics

Abstract

In the paper (Kushnir et al., Int J Geomath 4(2):201–225, 2013) we gave a mathematically rigorous justification of an adaptive algorithm for statistical estimation of the parameters of a micro-earthquake source based on data recorded by a surface array of seismic receivers. The algorithm exploits the well-known statistical maximum likelihood approach to determine the unknown parameters of the probability distribution of the multichannel data generated by a micro-earthquake source and registered by the surface array in the presence of strong noise. In this paper we consider the unique properties of this adaptive maximum-likelihood estimator (AMLE) under conditions in which the noise affecting the surface array receivers contains a strong temporally and spatially correlated (coherent) component of man-made origin. Such conditions are typical when the AMLE algorithm is used to locate micro-earthquakes caused by hydraulic fracturing of the medium at hydrocarbon deposit sites. Under these conditions the AMLE algorithm is capable of suppressing the coherent noise component and hence of significantly improving the accuracy of determining the coordinates of micro-earthquake sources. We theoretically investigate the signal-to-noise ratio (SNR) of the AMLE statistic and show that this SNR tends to infinity as the seismic noise becomes purely coherent. We have also undertaken a computer simulation of micro-earthquake location with the AMLE algorithm, using a deployment of 150 seismic receivers that corresponds to the geometry of a real surface seismic array in the USA. For this array we computed model multidimensional realizations of the micro-earthquake signal, the coherent man-made noise component and the natural noise component. The results of this computer simulation show that, when the power of the coherent man-made component of the modeled noise significantly exceeds the power of its natural component, the errors of micro-earthquake source position determination become negligible.


References

  • Ackerley, N.: Estimating the spectra of small events for the purpose of evaluating microseismic detection thresholds. CSEG GeoConvention 2012: Vision, expanded abstracts (2012)

  • Brillinger, D.: Time Series: Data Analysis and Theory. Holt, Rinehart and Winston, New York (1975)

  • Capon, J., Greenfield, R.J., Kolker, R.J.: Multidimensional maximum likelihood processing of large aperture seismic arrays. Proc. IEEE 55, 192–211 (1967)

  • Capon, J.: Application of detection and estimation theory to large array seismology. Proc. IEEE 57, 170–180 (1970)

  • Chebotareva, I., Rozhkov, M., Tagizade, T.: A method of micro-seismic monitoring of spatial distribution of the emission sources and dissipated radiation, and a device for its implementation. Patent of Russian Federation N 2278401 (2006) (in Russian)

  • Chebotareva, I.: New algorithms of emission tomography for passive seismic monitoring of a producing hydrocarbon deposit: part I. Izv. Phys. Solid Earth 46(3), 187–198 (2010) [original Russian text: Fizika Zemli 46(3), 7–19 (2010)]

  • Duncan, P., Eisner, L.: Reservoir characterization using surface micro-seismic monitoring. Geophysics 75(5), 139–146 (2010)

  • Eisner, L., Williams-Stroud, S., Hill, A., Duncan, P., Thornton, M.: Beyond the dots in the box: microseismicity-constrained fracture models for reservoir simulation. Leading Edge 29(3), 326–333 (2010)

  • Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1985)

  • Kiselevitch, V.L., Nikolaev, A.V., Troitskiy, P.A., Shubik, B.M.: Emission tomography: main ideas, results, and prospects. In: Proceedings of the 61st Annual International Meeting, SEG, expanded abstracts, 1602 (1991)

  • Knopp, K.: Infinite Sequences and Series. Dover Publications, New York (1956)

  • Kushnir, A.F., Rozhkov, N.M., Varypaev, A.V.: Statistically-based approach for monitoring of micro-seismic events. Int. J. Geomath. 4(2), 201–225 (2013)

  • Kushnir, A., Varypaev, A., Dricker, I., Rozhkov, M., Rozhkov, N.: Passive surface microseismic monitoring as a statistical problem: location of weak microseismic signals in the presence of strongly correlated noise. Geophys. Prospect. 62(4), 819–833 (2014)

  • Kushnir, A.F.: Statistical and Computational Methods of Seismic Monitoring. URSS, Moscow (2012) (in Russian)

  • Kushnir, A.F.: Algorithms for adaptive statistical processing of seismic array data. NATO ASI Ser. 303, 565–586 (1996)

  • Peterson, J.: Observations and modeling of seismic background noise. U.S. Geological Survey Open-File Report 93-322 (1993)

  • Ren, Z., Zheng, B., Ma, S., Huang, B., Liu, L., Liang, B.: Hydro-fracture monitoring using vector scanning with surface microseismic data. In: International Conference and Exhibition, Istanbul (2014)

  • Rudin, W.: Functional Analysis. McGraw-Hill, New York (1991)

  • Strang, G.: Introduction to Linear Algebra, 3rd edn. Wellesley-Cambridge Press, Wellesley (2003)

  • Thornton, M.: Resolution and location uncertainties in surface microseismic monitoring. CSEG GeoConvention 2012: Vision, expanded abstracts (2012)

  • Verweij, M.D., de Hoop, A.T.: Determination of seismic wavefields in arbitrary continuously layered media using the modified Cagniard method. Geophys. J. Int. 103(3), 731–754 (1990)

  • Zhang, C., Florêncio, D., Ba, D.E., Zhang, Z.: Maximum likelihood sound source localization and beamforming for directional microphone arrays in distributed meetings. IEEE Trans. Multimed. 10(3), 538–548 (2008)


Author information

Correspondence to A. Varypaev.

Appendices

Appendix 1

Let us consider an m-dimensional random stationary regular time series \(\varvec{v}_t =\left( {v_{1,t} ,\ldots ,v_{m,t} } \right) ^{T}, t\in {\mathbb {Z}}\), and its truncation \(\varvec{v}_{t,N} \) to the discrete time interval \(t\in \overline{1,N} \): \(\varvec{v}_{t,N} =\varvec{v}_t , t\in \overline{1,N} \); \(\varvec{v}_{t,N} \equiv 0, t\notin \overline{1,N} \). The Finite Discrete Fourier Transform (DFFT) of the time sequence \(\varvec{v}_{t,N} \) exists for any length N of the discrete time interval \(t\in \overline{1,N} \) and has the form

$$\begin{aligned} \varvec{u}_N \left( {\lambda _j } \right)= & {} \left( {{\varvec{u}}_{N,k} \left( {\lambda _j } \right) , k\in \overline{1,m} } \right) =\frac{1}{\sqrt{N}}\sum _{t=1}^N {\varvec{v}_{t,N} exp \left\{ {-i\lambda _j t} \right\} } , \nonumber \\ \lambda _j= & {} \frac{2\pi j}{N}, \quad j\in \overline{1,N} . \end{aligned}$$
(6.1)

Denote by \(\varvec{F}_N \left( {\lambda _j } \right) =E\left\{ {\varvec{u}_N \left( {\lambda _j } \right) \varvec{u}_N^*\left( {\lambda _j } \right) } \right\} \) the second-moment matrix of the random complex vectors \(\varvec{u}_N \left( {\lambda _j } \right) \).
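
As a numerical illustration of Eq. (6.1), the following Python sketch computes the DFFT of a truncated multichannel series and estimates the second-moment matrix \(\varvec{F}_N \left( {\lambda _j } \right) \) by Monte Carlo averaging; the AR(1) channel model, the mixing matrix and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Monte Carlo illustration of Eq. (6.1) and of F_N(lambda_j) = E{u_N u_N^*}.
# The AR(1) channel model and all parameters are illustrative assumptions.
rng = np.random.default_rng(0)
m, N, trials = 3, 256, 500
mix = np.array([[1.0, 0.5, 0.0],
                [0.0, 1.0, 0.5],
                [0.0, 0.0, 1.0]])                 # correlates the m channels

F_N = np.zeros((N, m, m), dtype=complex)
for _ in range(trials):
    e = rng.standard_normal((N + 100, m)) @ mix   # correlated innovations
    v = np.zeros_like(e)
    for t in range(1, len(e)):                    # stationary AR(1) recursion
        v[t] = 0.6 * v[t - 1] + e[t]
    v = v[100:]                                   # discard the start-up transient
    # Eq. (6.1): u_N(lambda_j) = N**-0.5 * sum_t v_{t,N} exp(-i lambda_j t);
    # np.fft.fft differs from it only by a phase factor that cancels in u u^*.
    u = np.fft.fft(v, axis=0) / np.sqrt(N)
    F_N += np.einsum('jk,jl->jkl', u, u.conj()) / trials

print(np.round(F_N[N // 4], 3))                   # m x m Hermitian block at lambda = pi/2
```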

Our goal is to derive conditions under which the matrix function \(\varvec{F}_N \left( {\lambda _j } \right) =\left[ {F_{N, k,l} \left( {\lambda _j } \right) ; k,l\in \overline{1,m} } \right] \) converges to the matrix power spectral density

$$\begin{aligned} \varvec{F}\left( \lambda \right) =\sum _{\tau =-\infty }^\infty {\varvec{R}\left( \tau \right) exp \left( {-i\lambda \tau } \right) } , \lambda \in \left( {0,2\pi } \right) \end{aligned}$$

of the time series \(\varvec{v}_t , t\in {\mathbb {Z}}\) (where \(\varvec{R}\left( \tau \right) =E\left\{ {\varvec{v}_t \varvec{v}_{t+\tau }^*} \right\} , t,\tau \in {\mathbb {Z}}\), is the matrix autocovariance function of the time series \(\varvec{v}_t , t\in {\mathbb {Z}}\)), and to show that this convergence has the rate \(\mathbf{O}\left( {N^{-\varepsilon }} \right) \) where \(\varepsilon \ge 1\).

The matrix function \(\varvec{F}_N \left( {\lambda _j } \right) \) can be written in the form

$$\begin{aligned} \varvec{F}_N \left( {\lambda _j } \right)= & {} \left[ {F_{N, k,l} \left( {\lambda _j } \right) ; k,l\in \overline{1,m} } \right] =E\left\{ {\varvec{u}_N \left( {\lambda _j } \right) \varvec{u}_N^*\left( {\lambda _j } \right) } \right\} \nonumber \\= & {} \frac{1}{N}\sum _{t=1}^N {\sum _{r=1}^N {E\left\{ {\varvec{v}_{t,N } \varvec{v}_{r,N}^*} \right\} exp \left\{ {-i\lambda _j \left( {t-r} \right) } \right\} } }. \end{aligned}$$
(6.2)

Let us assign \(\tau =t-r\in \overline{-N+1,N-1} \). Because the time series \(\varvec{v}_t , t\in {\mathbb {Z}}\) is stationary we have:

$$\begin{aligned} E\left\{ {\varvec{v}_{t,N} \varvec{v}_{r,N}^*} \right\} =\varvec{R}\left( \tau \right) , \tau =t-r, \quad \tau \in \overline{-N+1,N-1} \end{aligned}$$
(6.3)

Using Eq. (6.3) we can write Eq. (6.2) in the form:

$$\begin{aligned} \varvec{F}_N \left( {\lambda _j } \right) =\frac{1}{N}\sum _{t=1}^N {\sum _{r=1}^N {\varvec{R}\left( {t-r} \right) exp \left\{ {-i\lambda _j \left( {t-r} \right) } \right\} } } . \end{aligned}$$
(6.4)

The double sum on the right side of Eq. (6.4) amounts to summing the square \(m\times m\) blocks \(\varvec{B}_{t,r} =\varvec{R}\left( {t-r} \right) exp \left\{ {-i\lambda _j \left( {t-r} \right) } \right\} \) of the matrix \(\varvec{Q}_N =\left[ {\varvec{B}_{t,r} ; t,r\in \overline{1,N} }\, \right] \) over its columns (\(t\in \overline{1,N} \)) and rows (\(r\in \overline{1,N} \)). But the value of this double sum can also be calculated by successive summation of the blocks \(\varvec{B}_{t,r} \) located on each block diagonal of the matrix \(\varvec{Q}_N \). This is convenient because on each block diagonal of the matrix \(\varvec{Q}_N \) the blocks \(\varvec{B}_{t,r} \) are identical: they vary only from one diagonal to another, as the value \(\tau =t-r\) ranges over the interval \(\left[ {-N+1, N-1} \right] \).

To sum the elements of the matrix \(\varvec{Q}_N \) along its block diagonals we change the summation variables in the double sum of Eq. (6.4). We use the following variables: \(\tau \in \overline{-N+1,N-1} \), which enumerates the block diagonals of the matrix \(\varvec{Q}_N \), and \(k\in \overline{1,N-\left| \tau \right| } \), which enumerates the blocks on the diagonal with number \(\tau \). Thus we get the following convenient expression for the function \(\varvec{F}_N \left( {\lambda _j } \right) \):

$$\begin{aligned} \varvec{F}_N \left( {\lambda _j } \right)= & {} \frac{1}{N}\sum _{\tau =-N+1}^{N-1} {\sum _{k=1}^{N-\left| \tau \right| } {\varvec{R}\left( \tau \right) exp \left\{ {-i\lambda _j \tau } \right\} } }\nonumber \\= & {} \sum _{\tau =-N+1}^{N-1} {\left( {1-\frac{\left| \tau \right| }{N}} \right) \varvec{R}\left( \tau \right) exp \left\{ {-i\lambda _j \tau } \right\} } ,\nonumber \\ \varvec{F}_N \left( {\lambda _j } \right)= & {} \sum _{\tau =-N}^N \varvec{R} \left( \tau \right) exp \left\{ {-i\lambda _j \tau } \right\} -\frac{1}{N}\sum _{\tau =-N+1}^{N-1} {\left| \tau \right| \varvec{R}} \left( \tau \right) exp \left\{ {-i\lambda _j \tau } \right\} .\qquad \quad \end{aligned}$$
(6.5)
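
The equality of the double-sum form (6.4) and the diagonal-sum form (6.5) can be checked numerically; below is a minimal Python sketch for the scalar case \(m=1\) with an illustrative autocovariance \(R\left( \tau \right) =0.7^{\left| \tau \right| }\).

```python
import numpy as np

# Check of the diagonal-summation identity behind Eq. (6.5), scalar case m = 1,
# with the illustrative autocovariance R(tau) = 0.7**|tau|.
N = 64
lam = 2 * np.pi * 10 / N                          # lambda_j for j = 10
R = lambda tau: 0.7 ** abs(tau)

# Eq. (6.4): double sum over the N x N grid of (t, r)
double_sum = sum(R(t - r) * np.exp(-1j * lam * (t - r))
                 for t in range(1, N + 1) for r in range(1, N + 1)) / N

# Eq. (6.5): single sum over the block diagonals tau = t - r
diag_sum = sum((1 - abs(tau) / N) * R(tau) * np.exp(-1j * lam * tau)
               for tau in range(-N + 1, N))

assert np.isclose(double_sum, diag_sum)           # the two forms of F_N agree
```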

Theorem 1

Let the elements \(R_{k,l} \left( \tau \right) \) of matrix autocovariance function

$$\begin{aligned} \varvec{R}\left( \tau \right) =\left[ {R_{k,l} \left( \tau \right) , k,l\in \overline{1,m} } \right] \end{aligned}$$

of a regular stationary time series \(\varvec{v}_t =\left( {v_{1,t} ,\ldots ,v_{m,t} } \right) ^{T} , t\in {\mathbb {Z}}\) satisfy the requirement

$$\begin{aligned} \sum _{\tau =-\infty }^\infty {\left| \tau \right| \left| {R_{k,l} \left( \tau \right) } \right| } =a_{k,l} <\infty , \quad k,l\in \overline{1,m} \end{aligned}$$
(6.6)

Then:

  (1)

    The elements

    $$\begin{aligned} F_{k,l} \left( \lambda \right) =\sum _{\tau =-\infty }^\infty {R_{k,l} \left( \tau \right) } exp \left\{ {-i\lambda \tau } \right\} , k,l\in \overline{1,m} , \lambda \in \left( {0,2\pi } \right) \end{aligned}$$

    of the matrix power spectral density function \(\varvec{F}\left( \lambda \right) =\left[ {F_{k,l} \left( \lambda \right) , k,l\in \overline{1,m} } \right] \) of the time series \(\varvec{v}_t =\left( {v_{1,t} ,\ldots ,v_{m,t} } \right) ^{T}\) exist and have a derivative uniformly bounded with respect to \(\lambda \in \left( {0,2\pi } \right) \):

    $$\begin{aligned} \mathop {sup }\limits _{\lambda \in \left( {0,2\pi } \right) } \left| {\frac{\partial F_{k,l} \left( \lambda \right) }{\partial \lambda }} \right| <\infty \end{aligned}$$
    (6.7)
  (2)

    For any sequence \(j_N \) such that \(\lambda _{j_N } =\frac{2\pi j_N }{N}\rightarrow \lambda \in \left( {0,2\pi } \right) \) as \(N\rightarrow \infty \), the functions \(F_{N,k,l} \left( {\lambda _{j_N } } \right) \) have the asymptotic representation:

$$\begin{aligned} F_{N,k,l} \left( {\lambda _{j_N } } \right) =F_{k,l} \left( \lambda \right) +O_{k,l,j_N } \left( {N^{-1}} \right) , \quad j\in \overline{1,N} \end{aligned}$$
(6.8)

where the matrix \(\varvec{F}_N \left( {\lambda _j } \right) =\left[ {F_{N,k,l} \left( {\lambda _j } \right) , k,l\in \overline{1,m} } \right] \) is determined by Eq. (6.5); \(\mathop {\overline{lim } }_{N\rightarrow \infty } NO_{k,l,j_N } \left( {N^{-1}} \right) =c_\lambda \); \(O_{k,l,j_N } \left( {N^{-1}} \right) \le O\left( {N^{-1}} \right) \); \(\mathop {\overline{lim } }_{N\rightarrow \infty } N O\left( {N^{-1}} \right) =c<\infty \); and \(c_\lambda , c\) are some constants.

Proof

  (1)

    Under requirement (6.6) the infinite functional series

    $$\begin{aligned} \sum _{\tau =-\infty }^\infty {R_{k,l} \left( \tau \right) } exp \left\{ {-i\lambda \tau } \right\} \quad \hbox {and}\quad \sum _{\tau =-\infty }^\infty {\left( {-i\tau } \right) R_{k,l} \left( \tau \right) } exp \left\{ {-i\lambda \tau } \right\} \end{aligned}$$
    (6.9)

    converge uniformly with respect to \(\lambda \in \left( {0,2\pi } \right) \) by the Weierstrass test (Rudin 1991); hence the functions \(F_{k,l} \left( \lambda \right) \) exist and their derivatives \(\partial F_{k,l} \left( \lambda \right) /\partial \lambda \), being the sums of the second series, are uniformly bounded.

  (2)

    In accordance with Eq. (6.5) we have to prove that the difference on the left side of Eq. (6.8) satisfies:

    $$\begin{aligned} F_{N,k,l} \left( {\lambda _{j_N } } \right) -F_{k,l} \left( \lambda \right)= & {} -\left( \sum _{\tau =-\infty }^\infty {R_{k,l} } \left( \tau \right) exp \left\{ {-i\lambda \tau } \right\} \right. \nonumber \\&\left. -\sum _{\tau =-N}^N {R_{k,l} } \left( \tau \right) exp \left\{ {-i\lambda _{j_N } \tau } \right\} \right) \nonumber \\&-\frac{1}{N}\sum _{\tau =-N+1}^{N-1} {\left| \tau \right| R_{k,l} } \left( \tau \right) exp \left\{ {-i\lambda _{j_N } \tau } \right\} \nonumber \\= & {} O_{k,l,j_N } \left( {N^{-1}} \right) \end{aligned}$$
    (6.10)

    Condition (6.6) of Theorem 1 implies that

    $$\begin{aligned} \left| {\frac{1}{N}\sum _{\tau =-N+1}^{N-1} {\left| \tau \right| R_{k,l} } \left( \tau \right) exp \left\{ {-i\lambda _{j_N } \tau } \right\} } \right|\le & {} \frac{1}{N}\sum _{\tau =-N+1}^{N-1} {\left| \tau \right| } \left| {R_{k,l} \left( \tau \right) } \right| \nonumber \\= & {} O_{k,l} \left( {N^{-1}} \right) \end{aligned}$$
    (6.11)

    It follows also from Eq. (6.6) that

    $$\begin{aligned}&\left| {\sum _{\tau =-\infty }^\infty {R_{k,l} } \left( \tau \right) exp \left\{ {-i\lambda \tau } \right\} -\sum _{\tau =-N}^N {R_{k,l} } \left( \tau \right) exp \left\{ {-i\lambda _{j_N } \tau } \right\} } \right| \nonumber \\&\quad \le \sum _{\left| \tau \right| =N}^\infty {\left| {R_{k.l} \left( \tau \right) } \right| } \rightarrow 0\; \hbox {while}\; N\rightarrow \infty . \end{aligned}$$
    (6.12)

    Let us calculate the rate of convergence to zero of the left side of Eq. (6.12).

The convergence of the series \(\mathop {\sum }_{\tau =-\infty }^\infty {\left| {R_{k,l} \left( \tau \right) } \right| } \), which follows from condition (6.6), implies the following restriction on the “tails” of the functions \(R_{k,l} \left( \tau \right) \): there exists a number \(L\in {\mathbb {Z}}^{+}\) such that the following inequalities hold for \(\left| \tau \right| >L\):

$$\begin{aligned} \left| {R_{k,l} \left( \tau \right) } \right| <g_{k,l} \left| \tau \right| ^{-1-\varepsilon }; \left| \tau \right| >L; k,l\in \overline{1,m} , \end{aligned}$$
(6.13)

where \(\varepsilon >1\).

By the restriction (6.13) the following inequality is valid for any \(N>L\)

$$\begin{aligned} \sum _{\left| \tau \right| =N}^\infty {\left| {R_{k,l} \left( \tau \right) } \right| } <\sum _{\left| \tau \right| =N}^\infty {g_{k,l} \left| \tau \right| ^{-1-\varepsilon }} , \end{aligned}$$
(6.14)

By applying the Maclaurin–Cauchy theorem (Knopp 1956) we get, for any \(N>L\):

$$\begin{aligned}&\sum _{\left| \tau \right| =N}^\infty {g_{k,l} \left| \tau \right| ^{-1-\varepsilon }} = \mathop {lim }\limits _{n\rightarrow \infty } \sum _{\left| \tau \right| =N}^{n-1} {g_{k,l} \left| \tau \right| ^{-1-\varepsilon }}\nonumber \\&\quad \le \mathop {lim }\limits _{n\rightarrow \infty } \mathop {\int }\limits _N^n {g_{k,l} \left| t \right| ^{-1-\varepsilon }} d\left| t \right| =\mathop {lim }\limits _{n\rightarrow \infty } \left( {\frac{g_{k,l} n^{-\varepsilon }}{-\varepsilon }-\frac{g_{k,l} N^{-\varepsilon }}{-\varepsilon }} \right) =\frac{g_{k,l} }{\varepsilon }N^{-\varepsilon }.\qquad \qquad \end{aligned}$$
(6.15)

Hence,

$$\begin{aligned} \sum _{\left| \tau \right| =N}^\infty {\left| {R_{k,l} \left( \tau \right) } \right| <} \frac{g_{k,l} N^{-\varepsilon }}{\varepsilon }=O_{k,l} \left( {N^{-\varepsilon }} \right) \end{aligned}$$
(6.16)

where \(\varepsilon >1\).
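
As a quick numerical illustration of this tail estimate (with illustrative values \(g_{k,l} =1\), \(\varepsilon =2\)), the ratio of the tail sum to \(N^{-\varepsilon }\) stays bounded near the constant \(g_{k,l} /\varepsilon \) appearing in (6.15):

```python
import numpy as np

# Tail estimate (6.15)-(6.16) with illustrative g = 1, eps = 2: the ratio
# tail(N) / N**(-eps) stays bounded (close to g/eps), i.e. the tail is O(N**-eps).
g, eps = 1.0, 2.0
f = g * np.arange(1, 2_000_000, dtype=float) ** (-1 - eps)
for N in (10, 100, 1000):
    tail = f[N - 1:].sum()                        # sum of g*tau**(-1-eps) over tau >= N
    print(N, tail / N ** (-eps))                  # -> g/eps = 0.5
```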

Statement (2) of Theorem 1 follows immediately from Eqs. (6.11), (6.12) and (6.16). \(\square \)
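
Theorem 1 itself can be illustrated numerically. The sketch below uses a scalar series with the illustrative autocovariance \(R\left( \tau \right) =\rho ^{\left| \tau \right| }\), which satisfies condition (6.6) and whose spectral density is known in closed form; the scaled error \(N\left| {F_N \left( \lambda \right) -F\left( \lambda \right) } \right| \) remains bounded, in agreement with the \(O\left( {N^{-1}} \right) \) rate in Eq. (6.8).

```python
import numpy as np

# Rate check for Eq. (6.8): scalar series with R(tau) = rho**|tau| (satisfies (6.6));
# its spectral density is F(lam) = (1 - rho**2) / (1 - 2*rho*cos(lam) + rho**2).
rho, lam = 0.6, np.pi / 3
F_true = (1 - rho ** 2) / (1 - 2 * rho * np.cos(lam) + rho ** 2)

for N in (100, 1000, 10000):
    tau = np.arange(-N + 1, N)
    # F_N(lam) per the first line of Eq. (6.5)
    F_N = np.sum((1 - np.abs(tau) / N) * rho ** np.abs(tau) * np.exp(-1j * lam * tau))
    print(N, N * abs(F_N - F_true))               # stays bounded: the O(1/N) rate
```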

Appendix 2

Proof of Lemma 1.

Statement 1

Let \(\varvec{F}_0 \) be a nonsingular \(m\times m\) matrix, \(\varvec{U}_s \) an \(m\times s\) matrix, and let the \(m\times m\) matrix \(\varvec{F}\) be \(\varvec{F}=\left[ {\sigma ^{2}\varvec{F}_0 +\varvec{U}_s \varvec{U}_s^*} \right] \). Then the following representation of the inverse matrix \(\varvec{F}^{-1}\) is valid:

$$\begin{aligned} {\varvec{F}}^{-1}= \left[ {\sigma ^{-2}\varvec{F}_0^{-1} -\sigma ^{-4}\varvec{F}_0^{-1} \varvec{U}_s \left( {\varvec{I}_s +\sigma ^{-2}\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}\varvec{U}_s^{*} \varvec{F}_0^{-1} } \right] . \end{aligned}$$
(7.1)

Proof

Below Statement 1 is verified directly, that is, it is shown that:

$$\begin{aligned} \varvec{X}= \varvec{F}^{-1} \varvec{F}= & {} \left[ {\sigma ^{-2}\varvec{F}_0^{-1} -\sigma ^{-4}\varvec{F}_0^{-1} \varvec{U}_s \left( {\varvec{I}_s +\sigma ^{-2}\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}\varvec{U}_s^{*} \varvec{F}_0^{-1} } \right] \\&\left[ \sigma ^{2}\varvec{F}_0+\varvec{U}_s \varvec{U}_s^*\right] = \varvec{I}_m . \end{aligned}$$

The matrix \(\varvec{X}\) can be transformed as follows:

$$\begin{aligned}&\varvec{F}_0^{-1/2} \left[ \sigma ^{-2}\varvec{I}_m -\sigma ^{-4}\varvec{F}_0^{-1/2} \varvec{U}_s \left( \varvec{I}_s +\sigma ^{-2}\varvec{U}_s^{*} \varvec{F}_0^{-1/2} \varvec{F}_0^{-1/2} \varvec{U}_s \right) ^{-1}\varvec{U}_s^{*} \varvec{F}_0^{-1/2} \right] \\&\quad \varvec{F}_0^{-1/2} \varvec{F}_0^{1/2} \left[ {\sigma ^{2}\varvec{I}_m +\varvec{F}_0^{-1/2} \varvec{U}_s \varvec{U}_s^*\varvec{F}_0^{-1/2} } \right] \varvec{F}_0^{1/2} . \end{aligned}$$

Introducing the notation \(\varvec{G}_s =\varvec{F}_0^{-1/2} \varvec{U}_s \), we can rewrite \(\varvec{X}\) in a simpler form:

$$\begin{aligned} \varvec{X}= \varvec{F}_0^{-1/2} \left[ {\sigma ^{-2}\varvec{I}_m -\sigma ^{-4}\varvec{G}_s \left( {\varvec{I}_s +\sigma ^{-2}\varvec{G}_s ^{*}\varvec{G}_s } \right) ^{-1}\varvec{G}_s ^{*}} \right] \left[ {\sigma ^{2}\varvec{I}_m +\varvec{G}_s \varvec{G}_s ^{*}} \right] \varvec{F}_0 ^{1/2}. \end{aligned}$$

Multiplying the two expressions in square brackets, we get:

$$\begin{aligned} \varvec{X}= & {} \varvec{F}_0^{-1/2}\left[ \varvec{I}_m -\sigma ^{-2}\varvec{G}_s \left( {\varvec{I}_s +\sigma ^{-2}\varvec{G}_s^*\varvec{G}_s } \right) ^{-1}\varvec{G}_s^*+\sigma ^{-2}\varvec{G}_s \varvec{G}_s^*-\sigma ^{-4}\varvec{G}_s \left( \varvec{I}_s\right. \right. \\&\left. \left. +\,\sigma ^{-2}\varvec{G}_s^*\varvec{G}_s \right) ^{-1}\varvec{G}_s^*\varvec{G}_s \varvec{G}_s^*\right] \varvec{F}_0^{1/2}. \end{aligned}$$

Making elementary algebraic transformations, we get

$$\begin{aligned} \varvec{X}= & {} \varvec{F}_0^{-1/2}\left[ \varvec{I}_m +\sigma ^{-2}\varvec{G}_s \varvec{G}_s^*-\sigma ^{-2}\varvec{G}_s \left[ \left( {\varvec{I}_s +\sigma ^{-2}\varvec{G} _s^{*} \varvec{G}_s } \right) ^{-1}\right. \right. \\&\left. \left. +\,\sigma ^{-2}\left( {\varvec{I}_s +\sigma ^{-2}\varvec{G} _s^{*} \varvec{G}_s } \right) ^{-1}\varvec{G}_s^{*} \varvec{G}_s \right] \varvec{G}_s^*\right] \varvec{F}_0^{1/2} \\= & {} \varvec{F}_0^{-1/2}\left[ \varvec{I}_m +\sigma ^{-2}\varvec{G}_s \varvec{G}_s^*-\sigma ^{-2}\varvec{G}_s \left[ \left( {\varvec{I}_s +\sigma ^{-2}\varvec{G}_s^{ {*}} \varvec{G}_s } \right) ^{-1}\left( \varvec{I}_s \right. \right. \right. \\&\left. \left. \left. +\,\sigma ^{-2}\varvec{G}_s^{ {*}} \varvec{G}_s \right) \right] \varvec{G}_s^{*} \right] \varvec{F}_0^{1/2}\\= & {} \varvec{F}_0^{-1/2}\left[ {\varvec{I}_m +\sigma ^{-2}\varvec{G}_s \varvec{G}_s^*-\sigma ^{-2}\varvec{G}_s \varvec{G}_s^*} \right] \varvec{F}_0^{1/2}=\varvec{I}_m . \end{aligned}$$

Statement 1 is proven. \(\square \)
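
Representation (7.1) is also easy to verify numerically; in the following minimal sketch the dimensions, \(\sigma ^{2}\) and the random positive definite \(\varvec{F}_0 \) are illustrative choices.

```python
import numpy as np

# Direct numerical check of the inversion formula (7.1); the dimensions, sigma^2
# and the random positive definite F_0 are illustrative choices.
rng = np.random.default_rng(1)
m, s, sigma2 = 6, 2, 0.3
A = rng.standard_normal((m, m))
F0 = A @ A.T + m * np.eye(m)                      # nonsingular F_0
Us = rng.standard_normal((m, s))

F = sigma2 * F0 + Us @ Us.T
F0inv = np.linalg.inv(F0)
inner = np.linalg.inv(np.eye(s) + (Us.T @ F0inv @ Us) / sigma2)
# right side of Eq. (7.1)
F_inv_71 = F0inv / sigma2 - (F0inv @ Us @ inner @ Us.T @ F0inv) / sigma2 ** 2

print(np.allclose(F_inv_71, np.linalg.inv(F)))    # True
```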

Statement 2

Representation (7.1) of the inverse matrix \(\varvec{F}^{-1}\) admits the following expansion in powers of \(\sigma ^{2}\):

$$\begin{aligned} \varvec{F}_\sigma ^{-1}= & {} \left[ {\sigma ^{-2}\varvec{F}_0^{-1} -\sigma ^{-4}\varvec{F}_0^{-1} \varvec{U}_s \left( {\varvec{I}_s +\sigma ^{-2}\varvec{U}_s^{ {*}} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}\varvec{U}_s^{ {*}} \varvec{F}_0^{-1} } \right] \\= & {} \sigma ^{-2}\varvec{B}+\varvec{C}+\varvec{O}\left( {\sigma ^{2}} \right) , \end{aligned}$$

where:

$$\begin{aligned} \varvec{B}= & {} \left[ {\varvec{F}_0^{-1} -\varvec{F}_0^{-1} \varvec{U}_s \left( {\varvec{U}_s^{ {*}} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}\varvec{U}_s^{ {*}} \varvec{F}_0^{-1} } \right] \\= & {} \left[ {\varvec{F}_0^{-1} -\varvec{F}_0^{-1} \varvec{Q}_s \left( {\varvec{Q}_s^{ {*}} \varvec{F}_0^{-1} \varvec{Q}_s } \right) ^{-1}\varvec{Q}_s^{ {*}} \varvec{F}_0^{-1} } \right] ;\\ \varvec{C}= & {} \left[ {\varvec{F}_0^{-1} \varvec{U}_s \left( {\varvec{U}_s^{ {*}} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-2}\varvec{U}_s^{ {*}} \varvec{F}_0^{-1} } \right] , \quad \frac{1}{\sigma ^{2}}\left\| {\varvec{O}\left( {\sigma ^{2}} \right) } \right\| \rightarrow const>0\; \mathrm{when}\; \sigma \rightarrow 0 ; \end{aligned}$$

\(\varvec{U}_s =\varvec{Q}_s \varvec{T}_s \), \(\varvec{T}_s \) is an \(s\times s\) diagonal matrix, and \(\varvec{Q}_s \) is an \(m\times s\) matrix.

Proof

The following transformation of representation (7.1) is trivial:

$$\begin{aligned} \varvec{F}_\sigma ^{-1}= & {} \left[ {\sigma ^{-2}\varvec{F}_0^{-1} -\sigma ^{-4}\varvec{F}_0^{-1} \varvec{U}_s \left( {\varvec{I}_s +\sigma ^{-2}\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}\varvec{U}_s^{*} \varvec{F}_0^{-1} } \right] \nonumber \\= & {} \left[ {\sigma ^{-2}\varvec{F}_0^{-1} -\sigma ^{-2}\varvec{F}_0^{-1} \varvec{U}_s \left( {\sigma ^{2}\varvec{I}_s +\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}\varvec{U}_s^{*} \varvec{F}_0^{-1} } \right] . \end{aligned}$$
(7.2)

Substituting into Eq. (7.2) the following decomposition of the inverse matrix \(\left( \sigma ^{2}\varvec{I}_s +\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s \right) ^{-1}\) (see Eq. (3.11)):

$$\begin{aligned} \left( {\sigma ^{2}\varvec{I}_s +\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}= \left( {\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}-\sigma ^{2}\left( {\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-2}+\varvec{O}\left( {\sigma ^{4}} \right) ,\quad \qquad \end{aligned}$$
(7.3)

gives us the following equation

$$\begin{aligned}&\left[ {\sigma ^{-2}\varvec{F}_0^{-1} -\sigma ^{-2}\varvec{F}_0^{-1} \varvec{U}_s \left[ {\left( {\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}-\sigma ^{2}\left( {\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-2}+\varvec{O}\left( {\sigma ^{4}} \right) } \right] \varvec{U}_s^{*} \varvec{F}_0^{-1} } \right] \\&\quad = \left[ {\sigma ^{-2}\varvec{F}_0^{-1} -\sigma ^{-2}\varvec{F}_0^{-1} \varvec{U}_s \left( {\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}\varvec{U}_s^{*} \varvec{F}_0^{-1} } \right] \\&\qquad +\left[ {\varvec{F}_0^{-1} \varvec{U}_s \left( {\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-2}\varvec{U}_s^{*} \varvec{F}_0^{-1} } \right] -\sigma ^{-2}\varvec{O}\left( {\sigma ^{4}} \right) \\&\quad = \sigma ^{-2}\varvec{B}+\varvec{C}+\varvec{O}\left( {\sigma ^{2}} \right) , \end{aligned}$$

where

$$\begin{aligned}&\left[ {\varvec{F}_0^{-1} -\varvec{F}_0^{-1} \varvec{U}_s \left( {\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}\varvec{U}_s^{*} \varvec{F}_0^{-1} } \right] \\&\quad = \left[ {\varvec{F}_0^{-1} -\varvec{F}_0^{-1} \varvec{Q}_s \varvec{T}_s \varvec{T}_s^{-1} \left( {\varvec{Q}_s^{*} \varvec{F}_0^{-1} \varvec{Q}_s } \right) ^{-1}\varvec{T}_s^{*-1} \varvec{T}_s^*\varvec{Q}_s^{*} \varvec{F}_0^{-1} } \right] \\&\quad = \left[ {\varvec{F}_0^{-1} -\varvec{F}_0^{-1} \varvec{Q}_s \left( {\varvec{Q}_s^{*} \varvec{F}_0^{-1} \varvec{Q}_s } \right) ^{-1}\varvec{Q}_s^{*} \varvec{F}_0^{-1} } \right] =\varvec{B}. \end{aligned}$$

Statement 2 is proven. \(\square \)
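
A numerical illustration of Statement 2 (the matrix sizes and the random full-column-rank \(\varvec{U}_s \) are illustrative choices): the error of the two-term expansion \(\sigma ^{-2}\varvec{B}+\varvec{C}\) decays as \(\sigma ^{2}\).

```python
import numpy as np

# Check of Statement 2: || F_sigma^{-1} - (B / sigma^2 + C) || = O(sigma^2).
# The matrix sizes and the random full-column-rank U_s are illustrative choices.
rng = np.random.default_rng(2)
m, s = 6, 2
A = rng.standard_normal((m, m))
F0 = A @ A.T + m * np.eye(m)
Us = rng.standard_normal((m, s))

F0inv = np.linalg.inv(F0)
Minv = np.linalg.inv(Us.T @ F0inv @ Us)           # (U_s^* F_0^{-1} U_s)^{-1}
B = F0inv - F0inv @ Us @ Minv @ Us.T @ F0inv
C = F0inv @ Us @ Minv @ Minv @ Us.T @ F0inv

for sigma2 in (1e-2, 1e-3, 1e-4):
    F_inv = np.linalg.inv(sigma2 * F0 + Us @ Us.T)
    err = np.linalg.norm(F_inv - B / sigma2 - C)
    print(sigma2, err / sigma2)                   # ratio tends to a constant > 0
```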

Statement 3

If \(\varvec{U}_s \) is an \(m\times s\) matrix and \(\varvec{F}_0 \) is a nonsingular \(m\times m\) matrix, then the following asymptotic decomposition is valid for \(\sigma ^{2}\rightarrow 0\):

$$\begin{aligned} \left( {\sigma ^{2}\varvec{I}_s +\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}= \left( {\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-1}-\sigma ^{2}\left( {\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s } \right) ^{-2}+\varvec{O}\left( {\sigma ^{4}} \right) \nonumber \\ \end{aligned}$$
(7.4)

Proof

To simplify the calculations, denote the \(s\times s\) matrix \(\varvec{A}=\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s \) and set \(\sigma ^{2}=\varepsilon \). Then Eq. (7.4) can be rewritten in the form:

$$\begin{aligned} \varvec{C}^{-1}\left( \varepsilon \right) =\left( {\varepsilon \varvec{I}_s +\varvec{A}} \right) ^{-1}=\varvec{A}^{-1} -\varepsilon \varvec{A}^{-2} +\varvec{O}\left( {\varepsilon ^{2}} \right) \end{aligned}$$
(7.5)

Let us apply the Taylor expansion of the matrix function \(\varvec{C}^{-1}\left( \varepsilon \right) \) at \(\varepsilon =0\):

$$\begin{aligned} \varvec{C}^{-1}\left( \varepsilon \right)= & {} \left. \varvec{C}^{-1}\left( \varepsilon \right) \right| _{\varepsilon =0} +\varepsilon \left. \frac{d\varvec{C}^{-1}\left( \varepsilon \right) }{d\varepsilon }\right| _{\varepsilon =0} +\varvec{O}\left( {\varepsilon ^{2}} \right) \nonumber \\= & {} \varvec{A}^{-1}+\varepsilon \left. \frac{d\varvec{C}^{-1}\left( \varepsilon \right) }{d\varepsilon }\right| _{\varepsilon =0} +\varvec{O}\left( {\varepsilon ^{2}} \right) . \end{aligned}$$
(7.6)

Using the well-known formula for the derivative of an inverse matrix function of a scalar parameter, we get:

$$\begin{aligned} \left. \frac{d\varvec{C}^{-1}\left( \varepsilon \right) }{d\varepsilon }\right| _{\varepsilon =0} =-\left. \varvec{C}^{-1}\left( \varepsilon \right) \frac{d\varvec{C}\left( \varepsilon \right) }{d\varepsilon }\varvec{C}^{-1}\left( \varepsilon \right) \right| _{\varepsilon =0} =-\varvec{A}^{-1}\varvec{I}_s \varvec{A}^{-1}=-\varvec{A}^{-2}. \end{aligned}$$
(7.7)

Equations (7.5)–(7.7), in view of the notation for the matrix \(\varvec{A}\), imply Eq. (7.4), which was to be proved. \(\square \)
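
The expansion (7.4) is likewise easy to check numerically; in the sketch below the positive definite matrix \(\varvec{A}\) (standing for \(\varvec{U}_s^{*} \varvec{F}_0^{-1} \varvec{U}_s \)) is an illustrative random choice, and the remainder after the two retained terms scales as \(\sigma ^{4}\).

```python
import numpy as np

# Check of the expansion (7.4): the remainder after the two retained terms of
# (sigma^2 I + A)^{-1} scales as sigma^4; A is an illustrative random choice.
rng = np.random.default_rng(3)
s = 4
G = rng.standard_normal((s, s))
A = G @ G.T + s * np.eye(s)                       # plays the role of U_s^* F_0^{-1} U_s
Ainv = np.linalg.inv(A)

for eps in (1e-1, 1e-2, 1e-3):                    # eps = sigma^2
    exact = np.linalg.inv(eps * np.eye(s) + A)
    approx = Ainv - eps * Ainv @ Ainv             # A^{-1} - sigma^2 A^{-2}
    print(eps, np.linalg.norm(exact - approx) / eps ** 2)   # ~constant => O(sigma^4)
```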


About this article


Cite this article

Kushnir, A., Varypaev, A. Accuracy of adaptive maximum likelihood algorithm for determination of micro earthquake source coordinates using surface array data in condition of strong coherent noise. Int J Geomath 7, 203–237 (2016). https://doi.org/10.1007/s13137-016-0082-3
