
A Reduced Order Model Approach to Inverse Scattering in Lossy Layered Media

Published in: Journal of Scientific Computing

Abstract

We introduce a reduced order model (ROM) methodology for inverse electromagnetic wave scattering in layered lossy media, using data gathered by an antenna that generates a probing wave and measures the time-resolved reflected wave. We recast the wave propagation problem as a passive infinite-dimensional dynamical system, whose transfer function is expressed in terms of the measurements at the antenna. The ROM is a low-dimensional dynamical system that approximates this transfer function. While there are many possible ROM realizations, we are interested in one that preserves passivity and, in addition: (1) is data driven, i.e., constructed only from the measurements; and (2) consists of a matrix with a special sparse algebraic structure, whose entries contain spatially localized information about the unknown dielectric permittivity and electrical conductivity of the layered medium. Here, localized means within the intervals of a special finite difference grid. The main result of the paper is to show with analysis and numerical simulations that these unknowns can be extracted efficiently from the ROM.




Data availability

Data sharing not applicable to this article as no datasets were generated or analysed during the current study. The results of this article are fully reproducible by following the implementation presented in the appendix.

Notes

  1. By \({\widehat{u}}(0,s)\) we mean \(\displaystyle \lim _{T \searrow 0} {\widehat{u}}(T,s)\).

  2. Passive means that the dynamical system does not generate energy internally.

  3. These “truncated spectral measure” measurements are used for convenience, but ROMs obtained from other matching conditions, such as \(D^{\text {ROM}}_n(s_j) = D(s_j)\) for 2n properly chosen \((s_j)_{j=1}^{2n}\), can be used as well.

  4. The regularity assumptions on r(T) and \(\zeta (T)\) can possibly be relaxed, but since we draw conclusions from [9], we use the assumptions made in that study.

  5. The spectrum is defined as the set of \(s \in {\mathbb {C}}\) such that the operator is not boundedly invertible.

  6. In [3] the grid was called “optimal”, but spectrally matched is a more appropriate name.

  7. The Weyl function is denoted by M in [9] and the Laplace frequency by \(\rho \).

References

  1. Beattie, C., Mehrmann, V., Van Dooren, P.: Robust port-Hamiltonian representations of passive systems. Automatica 100, 182–186 (2019)


  2. Benner, P., Goyal, P., Van Dooren, P.: Identification of port-Hamiltonian systems from frequency response data. Syst. Control Lett. 143, 104741 (2020)


  3. Borcea, L., Druskin, V., Knizhnerman, L.: On the continuum limit of a discrete inverse spectral problem on optimal finite difference grids. Commun. Pure Appl. Math. A J. Issued Courant Inst. Math. Sci. 58(9), 1231–1279 (2005)


  4. Borcea, L., Druskin, V., Mamonov, A., Zaslavsky, M.: A model reduction approach to numerical inversion for a parabolic partial differential equation. Inverse Prob. 30(12), 125011 (2014)


  5. Borcea, L., Druskin, V., Mamonov, A., Zaslavsky, M.: Robust nonlinear processing of active array data in inverse scattering via truncated reduced order models. J. Comput. Phys. 381, 1–26 (2019)


  6. Borcea, L., Druskin, V., Mamonov, A., Zaslavsky, M., Zimmerling, J.: Reduced order model approach to inverse scattering. SIAM J. Imaging Sci. 13(2), 685–723 (2020)


  7. Borcea, L., Druskin, V., Mamonov, A.V., Zaslavsky, M.: Untangling the nonlinearity in inverse scattering with data-driven reduced order models. Inverse Prob. 34(6), 065008 (2018). https://doi.org/10.1088/1361-6420/aabb16


  8. Bruckstein, A.M., Levy, B.C., Kailath, T.: Differential methods in inverse scattering. SIAM J. Appl. Math. 45(2), 312–335 (1985)


  9. Buterin, S.A., Yurko, V.A.: Inverse problems for second-order differential pencils with Dirichlet boundary conditions. J. Inverse Ill-posed Prob. 20(5–6), 855–881 (2012)


  10. Chu, M., Golub, G.: Inverse Eigenvalue Problems: Theory, Algorithms, and Applications. Oxford University Press, Oxford (2005)


  11. Coddington, E., Levinson, N.: Theory of Ordinary Differential Equations. McGraw-Hill, New York (1955)


  12. Druskin, V., Mamonov, A.V., Thaler, A.E., Zaslavsky, M.: Direct, nonlinear inversion algorithm for hyperbolic problems via projection-based model reduction. SIAM J. Imag. Sci. 9(2), 684–747 (2016)


  13. Freiling, G., Yurko, V.A.: Inverse Sturm-Liouville Problems and Their Applications. NOVA Science Publishers, New York (2001)


  14. Gugercin, S., Polyuga, R., Beattie, C., Van Der Schaft, A.: Structure-preserving tangential interpolation for model reduction of port-Hamiltonian systems. Automatica 48(9), 1963–1974 (2012)


  15. Gustavsen, B., Semlyen, A.: Rational approximation of frequency domain responses by vector fitting. IEEE Trans. Power Delivery 14(3), 1052–1061 (1999). https://doi.org/10.1109/61.772353


  16. Gustavsen, B., Semlyen, A.: Enforcing passivity for admittance matrices approximated by rational functions. IEEE Trans. Power Syst. 16(1), 97–104 (2001). https://doi.org/10.1109/59.910786


  17. Jacob, B., Zwart, H.: Linear Port-Hamiltonian Systems on Infinite-dimensional Spaces, vol. 223. Springer, Berlin (2012)


  18. Jaulent, M.: The inverse scattering problem for LCRG transmission lines. J. Math. Phys. 23(12), 2286–2290 (1982)


  19. Joubert, W.: Lanczos methods for the solution of nonsymmetric systems of linear equations. SIAM J. Matrix Anal. Appl. 13(3), 926–943 (1992)


  20. Kato, T.: Perturbation Theory for Linear Operators, vol. 132. Springer, Berlin (2013)


  21. Lanczos, C.: An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. J. Res. Nat. Bur. Standards 45, 255–282 (1950)


  22. Markus, A.: Introduction to the Spectral Theory of Polynomial Operator Pencils. American Mathematical Society, USA (2012)


  23. Marshall, T.: Synthesis of RLC ladder networks by matrix tridiagonalization. IEEE Trans. Circuit Theory 16(1), 39–46 (1969)


  24. Morgan, M., Groves, W., Boyd, T.: Reflectionless filter topologies supporting arbitrary low-pass ladder prototypes. IEEE Trans. Circuits Syst. I Regul. Pap. 66, 594–604 (2019)


  25. Pronska, N.: Spectral Properties of Sturm-Liouville Equations with Singular Energy-Dependent Potentials. arXiv preprint arXiv:1212.6671 (2012)

  26. Pronska, N.: Reconstruction of energy-dependent Sturm-Liouville equations from two spectra. Integr. Eqn. Oper. Theory 76(3), 403–419 (2013)


  27. Saad, Y.: The Lanczos biorthogonalization algorithm and other oblique projection methods for solving large unsymmetric systems. SIAM J. Numer. Anal. 19(3), 485–506 (1982)


  28. Sorensen, D.: Passivity preserving model reduction via interpolation of spectral zeros. Syst. Control Lett. 54(4), 347–360 (2005)


  29. Van Der Schaft, A.: Port-Hamiltonian systems: network modeling and control of nonlinear physical systems. In: Irschik, H., Schlacher, K. (eds.) Advanced Dynamics and Control of Structures and Machines, pp. 127–167. Springer (2004)

  30. Willems, J.: Dissipative dynamical systems. Eur. J. Control. 13(2–3), 134–151 (2007)


  31. Yagle, A.E.: One-dimensional inverse scattering problems: an asymmetric two-component wave system framework. Inverse Prob. 5(4), 641 (1989)



Acknowledgements

This research is supported in part by the AFOSR awards FA9550-18-1-0131 and FA9550-20-1-0079, and by ONR award N00014-17-1-2057. The last two authors of this article are working with Rob Remis and Murthy Guddati on the related Project “Krein’s embedding of dissipative data-driven reduced order models”. We would like to thank them for stimulating discussions. We also thank Alex Mamonov and Mikhail Zaslavsky for stimulating discussions on the topic of this paper.

Author information


Corresponding author

Correspondence to Jörn Zimmerling.


Appendices

From the Weyl Function to the Transfer Function

The Weyl function \({{\mathcal {W}}}(s)\) is definedFootnote 7 in [9] as

$$\begin{aligned} {{\mathcal {W}}}(s) = \partial _T \phi (0,s), \end{aligned}$$
(98)

where \(\phi (T,s)\) is the solution of

$$\begin{aligned} {\mathscr {L}}_{q,r}(s) \phi (T,s) = 0, ~~\text{ for }~T \in (0,T_L), \qquad \phi (0,s) = 1, \quad \phi (T_L,s) = 0. \end{aligned}$$
(99)

Let us introduce the following, pairwise linearly independent solutions associated with the operator pencil (33): \(\psi (T,s)\), \(\xi (T,s)\) and \(\eta (T,s)\). The first one satisfies

$$\begin{aligned} {\mathscr {L}}_{q,r}(s) \psi (T,s) = 0, ~~\text{ for }~T \in (0,T_L), \qquad \psi (T_L,s) = 0, \quad \partial _T \psi (T_L,s) = -1, \end{aligned}$$
(100)

the second one satisfies

$$\begin{aligned} {\mathscr {L}}_{q,r}(s) \xi (T,s) = 0, ~~\text{ for }~T \in (0,T_L), \qquad \xi (0,s) = 0, \quad \partial _T \xi (0,s) = 1, \end{aligned}$$
(101)

and the third one satisfies

$$\begin{aligned} {\mathscr {L}}_{q,r}(s) \eta (T,s) = 0, ~~\text{ for }~T \in (0,T_L), \qquad \eta (0,s) = 1, \quad \partial _T \eta (0,s) = 0. \end{aligned}$$
(102)

It is easy to check that the Wronskian

$$\begin{aligned} {\mathbb {W}}_{\psi ,\xi }(T,s) = \psi (T,s) \partial _T \xi (T,s) - \xi (T,s) \partial _T \psi (T,s) \end{aligned}$$
(103)

is constant in T, so we can define

$$\begin{aligned} \varDelta _D(s) = {\mathbb {W}}_{\psi ,\xi }(T,s) ={\mathbb {W}}_{\psi ,\xi }(0,s) = \psi (0,s), \end{aligned}$$
(104)

where we used the boundary condition in (101). Similarly, the Wronskian

$$\begin{aligned} {\mathbb {W}}_{\eta ,\psi }(T,s) = \psi (T,s) \partial _T \eta (T,s) -\eta (T,s) \partial _T \psi (T,s) \end{aligned}$$
(105)

is constant in T, so we can define

$$\begin{aligned} \varDelta _N(s) = {\mathbb {W}}_{\eta ,\psi }(T,s) = {\mathbb {W}}_{\eta ,\psi }(0,s) = -\partial _T \psi (0,s) , \end{aligned}$$
(106)

where we used the boundary condition in (102).
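
These constructions are easy to check numerically. The sketch below assumes, purely for illustration, the concrete pencil form \({\mathscr {L}}_{q,r}(s)u = \partial _T^2 u - [q(T) + s\, r(T) + s^2]u\) together with sample profiles for q(T) and r(T); it integrates \(\psi \) backward from the terminal conditions (100) and \(\xi \) forward from the initial conditions (101), and confirms that the Wronskian (103) is constant in T, with value \(\psi (0,s)\) as in (104).

```python
import numpy as np

# Hypothetical pencil form, assumed for illustration (cf. (33)):
#   L_{q,r}(s) u = u'' - [q(T) + s r(T) + s^2] u = 0  on (0, T_L).
q = lambda T: 0.5 + 0.1 * np.sin(2 * T)   # sample potential (illustrative)
r = lambda T: 0.2 + 0.05 * T              # sample loss function (illustrative)
s, T_L, N = 0.8, 1.0, 4000

def rk4_solve(u0, du0, forward=True):
    """Integrate u'' = [q + s r + s^2] u with classical RK4; return states (u, u')."""
    h = T_L / N if forward else -T_L / N
    Ts = np.linspace(0.0, T_L, N + 1) if forward else np.linspace(T_L, 0.0, N + 1)
    y = np.array([u0, du0], dtype=float)
    out = [y.copy()]
    def f(T, y):
        return np.array([y[1], (q(T) + s * r(T) + s**2) * y[0]])
    for T in Ts[:-1]:
        k1 = f(T, y); k2 = f(T + h/2, y + h/2*k1)
        k3 = f(T + h/2, y + h/2*k2); k4 = f(T + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
        out.append(y.copy())
    return np.array(out)

# psi: terminal data psi(T_L) = 0, psi'(T_L) = -1, integrated backward (cf. (100))
psi = rk4_solve(0.0, -1.0, forward=False)[::-1]   # reversed onto the forward grid
# xi: initial data xi(0) = 0, xi'(0) = 1 (cf. (101))
xi = rk4_solve(0.0, 1.0, forward=True)

# Wronskian (103) on the grid: constant in T, equal to psi(0,s) at T = 0
W = psi[:, 0] * xi[:, 1] - xi[:, 0] * psi[:, 1]
print("Wronskian spread:", W.max() - W.min())
```

The spread of W over the grid is at the level of the RK4 discretization error, and the value at \(T=0\) reproduces \(\varDelta _D(s) = \psi (0,s)\).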

Now it follows that the solution of (99) can be written as

$$\begin{aligned} \phi (T,s) = \frac{\psi (T,s)}{\varDelta _D(s)}, \end{aligned}$$
(107)

and the solution \(w(T,s)\) of the Schrödinger problem (29)–(30) is

$$\begin{aligned} w(T,s) = s \zeta _0 \frac{\psi (T,s)}{\varDelta _N(s)}, \qquad T \in (0,T_L). \end{aligned}$$
(108)

The latter is because \({\mathscr {L}}_{q,r}(s) w(T,s) = 0\) for \(T \in (0, T_L)\) by construction, and at the boundary we have

$$\begin{aligned} \partial _T w(0,s) = - s \zeta _0, \qquad w(T_L,s) = 0. \end{aligned}$$
(109)

Now using (30) we obtain the jump condition

$$\begin{aligned} \partial _T w(0,s) - \partial _T w(0-,s) = - s \zeta _0, \end{aligned}$$
(110)

which corresponds to the Dirac delta forcing \(-s \zeta _0 \delta (T)\) in (29).

Solving for \(\psi (T,s)\) in (107) we get

$$\begin{aligned} w(T,s) = s \zeta _0 \frac{\varDelta _D(s)}{\varDelta _N(s)} \phi (T,s), \end{aligned}$$
(111)

and since \(\phi (0,s) = 1\), the transfer function has the expression

$$\begin{aligned} D(s) = w(0,s) = s \zeta _0 \frac{\varDelta _D(s)}{\varDelta _N(s)}. \end{aligned}$$
(112)

Moreover, taking the T derivative in (111) at \(T = 0\) and using the definition (98) of the Weyl function and the boundary condition (109), we obtain that

$$\begin{aligned} {{\mathcal {W}}}(s) = -\frac{\varDelta _N(s)}{\varDelta _D(s)} . \end{aligned}$$
(113)

This proves Eq. (32).

The poles of the transfer function are the zeroes of the Weyl function and therefore of the Wronskian (106). They correspond to the eigenvalues of the quadratic operator pencil \({\mathscr {L}}_{q,r}(s)\) with domain \({{\mathcal {S}}}_N\) defined in (34). The zeroes of the transfer function are \(s = 0\) and the set of poles of the Weyl function, which are the eigenvalues of the quadratic operator pencil \({\mathscr {L}}_{q,r}(s)\) with domain \({{\mathcal {S}}}_D\) defined in (35).
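
This characterization can be probed numerically by shooting. The sketch below again assumes, for illustration only, the pencil form \({\mathscr {L}}_{q,r}(s)u = \partial _T^2 u - [q(T) + s\, r(T) + s^2]u\) with sample profiles; it computes \(\psi (T,s)\) backward from the terminal conditions (100), forms \(\varDelta _D(s)\), \(\varDelta _N(s)\) as in (104), (106), and evaluates the transfer function (112). It then checks two consequences of the formulas above: the symmetry \(D({\overline{s}}) = \overline{D(s)}\) for real-valued q and r, and the zero of D(s) at \(s = 0\).

```python
import numpy as np

# Assumed pencil form (cf. (33)): L_{q,r}(s) u = u'' - [q + s r + s^2] u.
q = lambda T: 0.5 + 0.1 * np.sin(2 * T)   # illustrative profiles
r = lambda T: 0.2 + 0.05 * T
T_L, zeta0, N = 1.0, 1.0, 4000

def transfer(s):
    """D(s) = s zeta0 Delta_D / Delta_N via backward shooting for psi (cf. (100), (112))."""
    h = -T_L / N
    y = np.array([0.0, -1.0], dtype=complex)     # psi(T_L) = 0, psi'(T_L) = -1
    def f(T, y):
        return np.array([y[1], (q(T) + s * r(T) + s**2) * y[0]])
    T = T_L
    for _ in range(N):                            # classical RK4, backward in T
        k1 = f(T, y); k2 = f(T + h/2, y + h/2*k1)
        k3 = f(T + h/2, y + h/2*k2); k4 = f(T + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); T += h
    Delta_D, Delta_N = y[0], -y[1]                # (104) and (106)
    return s * zeta0 * Delta_D / Delta_N          # (112)

sc = 0.6 + 1.1j
print(transfer(sc), transfer(np.conj(sc)))        # complex conjugates of each other
```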

The Transfer Function for Small Variations of the Loss Function

To use the perturbation theory in [20], consider the first order system formulation (26)–(27). We made the assumption (24) to simplify the boundary conditions satisfied by \(w(T,s)\). Similarly, we shall assume in this appendix that

$$\begin{aligned} \frac{d}{d T } \zeta (T_L) = 0, \end{aligned}$$
(114)

which implies that \({\widehat{w}}(T,s)\) satisfies a homogeneous Neumann boundary condition at \(T = T_L\). The loss function has small variations, so we model it as

$$\begin{aligned} r(T) = r_0 + \alpha \rho (T), \qquad \sup _{T \in (0,T_L)}|\rho (T)|/r_0 = O(1), \qquad 0 < \alpha \ll 1. \end{aligned}$$
(115)

Using (115) in (26) we obtain

$$\begin{aligned} \left[ {{\mathcal {L}}}+ Q(T) + R_\alpha (T) + s I \right] \begin{pmatrix} w(T,s) \\ {\widehat{w}}(T,s) \end{pmatrix} = \begin{pmatrix} \zeta _0 \delta (T) \\ 0 \end{pmatrix}, \qquad T \in (0-,T_L) , \end{aligned}$$
(116)

where \({{\mathcal {L}}}\) is the differential operator (16) and Q(T) is the skew-symmetric multiplication operator defined in terms of \(\zeta (T)\) in (28). The loss function (115) enters through the multiplication operator

$$\begin{aligned} R_\alpha (T) = R_0 + \alpha \begin{pmatrix} \rho (T) &{} 0 \\ 0 &{} 0 \end{pmatrix}, \qquad R_0= \begin{pmatrix} r_0 &{} 0 \\ 0 &{} 0 \end{pmatrix}. \end{aligned}$$
(117)

Note that \(i \big [{{\mathcal {L}}}+ Q(T)\big ]\) acting on the space of functions satisfying the boundary conditions (27) is a self-adjoint (with respect to the Euclidean inner product) indefinite differential operator on a bounded interval and thus has a countable set of real-valued eigenvalues with no finite accumulation point [11, Chapter 7]. The negatives of the squares of these eigenvalues, \((-\theta _j^2)_{j \ge 1}\), are the same as the eigenvalues in the Sturm-Liouville problems

$$\begin{aligned} \left[ \frac{d^2}{d T ^2} - q(T) \right] \varphi _j(T) = - \theta _j^2 \varphi _j(T), \quad \frac{d}{d T }\varphi _j(0-) = \varphi _j(T_L) = 0, \end{aligned}$$
(118)

and

$$\begin{aligned} \left[ \frac{d^2}{d T ^2} - {\widehat{q}}(T) \right] {\widehat{\varphi }}_j(T) = - \theta _j^2 {\widehat{\varphi }}_j(T), \quad {\widehat{\varphi }}_j(0-) = \frac{d}{d T} {\widehat{\varphi }}_j(T_L) = 0, \end{aligned}$$
(119)

where

$$\begin{aligned} q(T)&= \left[ \frac{d}{d T} \ln \zeta ^{-\frac{1}{2}}(T)\right] ^2 +\frac{d^2}{d T ^2} \ln \zeta ^{-\frac{1}{2}}(T) = \zeta ^{\frac{1}{2}}(T) \frac{d^2}{d T ^2} \zeta ^{-\frac{1}{2}}(T), \end{aligned}$$
(120)
$$\begin{aligned} {\widehat{q}}(T)&= \left[ \frac{d}{d T} \ln \zeta ^{-\frac{1}{2}}(T) \right] ^2- \frac{d^2}{d T ^2} \ln \zeta ^{-\frac{1}{2}}(T). \end{aligned}$$
(121)
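
The algebraic identity in (120) can be spot-checked numerically. The sketch below uses an arbitrary smooth positive profile, \(\zeta (T) = 1 + T^2\) (purely illustrative), and central differences to compare the two sides, with \(f = \zeta ^{-1/2}\).

```python
import numpy as np

# Numerical check of the identity (120): with f = zeta^{-1/2},
#   (d/dT ln f)^2 + d^2/dT^2 ln f  =  zeta^{1/2} f''.
# zeta below is an arbitrary smooth positive sample profile (illustrative).
h = 1e-3
T = np.arange(0.1, 1.0, h)
zeta = 1.0 + T**2
f = zeta**-0.5
lnf = np.log(f)

d1 = (lnf[2:] - lnf[:-2]) / (2 * h)                # first derivative of ln f
d2 = (lnf[2:] - 2 * lnf[1:-1] + lnf[:-2]) / h**2   # second derivative of ln f
lhs = d1**2 + d2
fpp = (f[2:] - 2 * f[1:-1] + f[:-2]) / h**2        # f''
rhs = zeta[1:-1]**0.5 * fpp
print(np.max(np.abs(lhs - rhs)))                   # O(h^2) discretization error
```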

The Sturm-Liouville theory gives that the eigenvalues are simple. Assuming that the eigenfunctions \(\varphi _j(T)\) and \({\widehat{\varphi }}_j\) in (118)–(119) are normalized as

$$\begin{aligned} \int _0^{T_L} \varphi _j(T) \varphi _l(T) d T = \int _0^{T_L} {\widehat{\varphi }}_j(T) {\widehat{\varphi }}_l(T) d T = \delta _{jl}, \end{aligned}$$
(122)

then the eigenfunctions of the operator \(i \big [{{\mathcal {L}}}+ Q(T)\big ]\) are

$$\begin{aligned} \varvec{\varphi }_j^{\pm }(T) = \frac{1}{\sqrt{2}} \begin{pmatrix} \varphi _j(T) \\ \pm i {\widehat{\varphi }}_j(T) \end{pmatrix}, \end{aligned}$$
(123)

and the eigenvalues are \(\pm \theta _j\). Equivalently, the eigenvalues of \({{\mathcal {L}}}+ Q(T)\) are \(\pm i \theta _j\).

Now, if we consider the operator \({{\mathcal {L}}}+ Q(T) + R_0\), for the constant loss \(r_0\), the eigenfunctions \(\varvec{\varphi }_j^\pm (T)\) are still orthonormal and are determined by the components of (123) as follows

$$\begin{aligned} \varvec{\varphi }_j^+(T) = \frac{1}{\sqrt{2 + r_0/\lambda _j}} \begin{pmatrix} \varphi _j(T) \\ \\ i \sqrt{1+r_0/\lambda _j} {\widehat{\varphi }}_j(T) \end{pmatrix}, \quad \varvec{\varphi }_j^-(T) = \overline{\varvec{\varphi }_j^+(T)}, \qquad j \ge 1, \end{aligned}$$
(124)

where \(s = \lambda _j\) is the root of

$$\begin{aligned} s(s+ r_0) = -\theta _j^2. \end{aligned}$$
(125)

Indeed, it is easy to check that with \( \varvec{\varphi }_j^+(T) \) given in (124),

$$\begin{aligned} \big [{{\mathcal {L}}}+ Q(T) + R_0\big ] \varvec{\varphi }_j^+(T) + \lambda _j \varvec{\varphi }_j^+(T) = 0, \end{aligned}$$
(126)

is equivalent to

$$\begin{aligned} \big [{{\mathcal {L}}}+ Q(T)\big ] \begin{pmatrix} \varphi _j(T) \\ i {\widehat{\varphi }}_j(T) \end{pmatrix} + \sqrt{\lambda _j(\lambda _j + r_0)} \begin{pmatrix} \varphi _j(T) \\ i {\widehat{\varphi }}_j(T) \end{pmatrix} = 0, \end{aligned}$$
(127)

which leads to (125).
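
For a concrete illustration, the roots of (125) can be computed directly; when \(r_0 < 2\theta _j\) they form a complex conjugate pair with real part \(-r_0/2\), the familiar shift of the lossless eigenvalues \(\pm i \theta _j\) into the left half plane. The values of \(r_0\) and \(\theta _j\) below are illustrative.

```python
import numpy as np

# Roots of s(s + r0) = -theta^2, i.e., s^2 + r0 s + theta^2 = 0 (cf. (125)).
# r0 and the thetas are illustrative sample values.
r0 = 0.3
thetas = np.array([1.0, 2.5, 7.0])
lams = [np.roots([1.0, r0, th**2]) for th in thetas]
for th, lam in zip(thetas, lams):
    print(th, lam)   # complex conjugate pair with Re = -r0/2 when r0 < 2*theta
```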

Finally, if we add the perturbation \({R}_\alpha (T)-R_0\), which is clearly a bounded operator with \(O(\alpha )\) norm, we can use the analytic perturbation theory in [20] to obtain that the eigenvalues and eigenprojections are analytic for \(\alpha \) in some vicinity of 0. Thus, we can use these eigenprojections to express the wave w(Ts) as a series and obtain the expression (46) of the transfer function.
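
A finite-dimensional analogue of this perturbation argument is easy to verify: for a symmetric matrix A with a simple eigenvalue \(\lambda \) and unit eigenvector v, the eigenvalue of \(A + \alpha B\) is \(\lambda + \alpha \, v^T B v + O(\alpha ^2)\). The sketch below, with randomly sampled matrices, checks this first order expansion.

```python
import numpy as np

# First order eigenvalue perturbation: eig(A + alpha*B) ~ eig(A) + alpha * v^T B v.
rng = np.random.default_rng(4)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                       # symmetric, generically simple spectrum
B = np.diag(rng.uniform(0.0, 1.0, 6))   # bounded perturbation (like R_alpha - R_0)

w, V = np.linalg.eigh(A)
alpha = 1e-5
w_pert = np.linalg.eigvalsh(A + alpha * B)
first_order = w + alpha * np.array([V[:, j] @ B @ V[:, j] for j in range(6)])
err = np.max(np.abs(w_pert - first_order))
print(err)   # O(alpha^2)
```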

Passivity and pH Structure

A dynamical system is called passive if it does not generate energy internally [28, 30]. As stated in [28, Section 2], this property is realized if the transfer function satisfies

  1. \(D({\overline{s}}) = \overline{D(s)}\), for all \(s \in {\mathbb {C}}\).

  2. D(s) is analytic for \(\text{ Re }(s) > 0\).

  3. \(D(s) + \overline{D(s)} \ge 0\) for \(\text{ Re }(s) > 0\).

It is obvious from the expression (18) of the transfer function that it satisfies the first condition. The second condition says that the dynamical system is stable, i.e., the poles of D(s) are in the left half of the complex plane. We know from Sect. 2 that these poles are \(\{\lambda _j, \overline{\lambda _j}, ~j \ge 1 \}\), where \(-\lambda _j\) are the eigenvalues of the operator \({{\mathcal {L}}}+ {{\mathcal {Q}}}(T) + R(T)\) acting on the space of functions satisfying the boundary conditions (27). If we let \({{\varvec{V}}}_j(T)\) be the eigenfunctions, then we have

$$\begin{aligned} \left[ {{\mathcal {L}}}+ {{\mathcal {Q}}}(T) + R(T) + \lambda _j I \right] {{\varvec{V}}}_j(T) = 0. \end{aligned}$$
(128)

Taking the real part of the inner product with \({{\varvec{V}}}_j(T)\) and using that \({{\mathcal {L}}}\) and \({{\mathcal {Q}}}(T)\) are skew-symmetric, we get

$$\begin{aligned} 0&= \text{ Re } \left\{ \int _0^{T_L} \overline{{{\varvec{V}}}_j(T)^T} \left[ {{\mathcal {L}}}+ {{\mathcal {Q}}}(T) + R(T) + \lambda _j I \right] {{\varvec{V}}}_j(T) d T\right\} \nonumber \\&=\text{ Re }(\lambda _j) \Vert {{\varvec{V}}}_j\Vert _{L^2(0,T_L)}^2 + \int _0^{T_L} \overline{{{\varvec{V}}}_j(T)^T} R(T) {{\varvec{V}}}_j(T) d T. \end{aligned}$$
(129)

That \(\text{ Re }(\lambda _j) \le 0\) follows from this equation and the fact that the diagonal multiplication operator R(T) is positive semidefinite.
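
A finite-dimensional analogue of this argument is immediate to check: for any real skew-symmetric S and any positive semidefinite diagonal R (random samples below), the eigenvalues of \(-(S + R)\) have nonpositive real part.

```python
import numpy as np

# Discrete analogue of (129): S skew-symmetric, R diagonal psd, then the
# eigenvalues lambda of -(S + R) satisfy Re(lambda) <= 0.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
S = M - M.T                              # skew-symmetric part
R = np.diag(rng.uniform(0.0, 1.0, 8))    # positive semidefinite loss
lam = np.linalg.eigvals(-(S + R))
print(lam.real.max())                    # <= 0 up to roundoff
```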

It remains to verify the third condition, which has the following physical interpretation: Since \(-u(T,s) \mathbf{e}_{x_2}\) is the electric field and \({\widehat{u}}(T,s) \mathbf{e}_{x_1}\) is the magnetic field, the Poynting vector at \(T = 0\), which determines the power flow, is

$$\begin{aligned} \frac{1}{2} \text{ Re } \Big \{ \big [-u(0,s) \mathbf{e}_{x_2}\big ] \times \overline{\big [{\widehat{u}}(0,s) \mathbf{e}_{x_1}\big ]}\Big \} = \frac{1}{2} \text{ Re } \{ u(0,s) \overline{{\widehat{u}}(0,s)} \} {\mathbf{e}}_z = \frac{1}{4} \Big [ D(s) + \overline{D(s)} \Big ] {\mathbf{e}}_z. \end{aligned}$$

Thus, the third condition is equivalent to saying that the power flow is into the medium, i.e., in the positive range direction.

To check this condition, we use the first order system formulation (26) to write

$$\begin{aligned} D(s)&= w(0,s) = \zeta _0 \int _{0-}^{T_L} \big (\delta (T), 0\big ) \left[ {{\mathcal {L}}}+ {{\mathcal {Q}}}(T) + R(T) + s I\right] ^{-1} \begin{pmatrix} \delta (T) \\ 0 \end{pmatrix} dT \\&= \zeta _0 \int _{0-}^{T_L} \big (\delta (T), 0\big ) \left[ -{{\mathcal {L}}}- {{\mathcal {Q}}}(T) + R(T) + s I\right] ^{-1} \begin{pmatrix} \delta (T) \\ 0 \end{pmatrix} d T, \end{aligned}$$

where in the second line we took the adjoint of the inverse and recalled that \({{\mathcal {L}}}\) and \({{\mathcal {Q}}}(T)\) are skew-symmetric. Now we can write

$$\begin{aligned} D(s) + \overline{D(s)}&= \zeta _0 \int _{0-}^{T_L} \big (\delta (T), 0\big ) {\mathcal {P}}(s) \begin{pmatrix} \delta (T) \\ 0 \end{pmatrix} d T, \end{aligned}$$
(130)

where the operator

$$\begin{aligned}&{\mathcal {P}}(s) = \left[ -{{\mathcal {L}}}- {{\mathcal {Q}}}(T) + R(T) + s I\right] ^{-1} + \left[ {{\mathcal {L}}}+ {{\mathcal {Q}}}(T) + R(T) + {\overline{s}} I\right] ^{-1} \end{aligned}$$

can be factorized as

$$\begin{aligned} {\mathcal {P}}(s)&= \left[ -{{\mathcal {L}}}- {{\mathcal {Q}}}(T) + R(T) + s I\right] ^{-1} \Big \{ \left[ {{\mathcal {L}}}+ {{\mathcal {Q}}}(T) + R(T) + {\overline{s}}I \right] \\&\quad + \left[ -{{\mathcal {L}}}- {{\mathcal {Q}}}(T) + R(T) + s I\right] \Big \} \left[ {{\mathcal {L}}}+ {{\mathcal {Q}}}(T) + R(T) + {\overline{s}} I\right] ^{-1} \\&= 2 \left[ -{{\mathcal {L}}}- {{\mathcal {Q}}}(T) + R(T) + s I\right] ^{-1} \left[ R(T) + \text{ Re }(s) I \right] \left[ {{\mathcal {L}}}+ {{\mathcal {Q}}}(T) + R(T) + {\overline{s}} I\right] ^{-1}. \end{aligned}$$

Substituting in (130) and using that

$$\begin{aligned} \left[ {{\mathcal {L}}}+ {{\mathcal {Q}}}(T) + R(T) + {\overline{s}} I\right] ^{-1} \begin{pmatrix} \delta (T) \\ 0 \end{pmatrix} = \zeta _0^{-1} \begin{pmatrix} w(T,s) \\ {\widehat{w}}(T,s) \end{pmatrix}, \end{aligned}$$

we get that for \( \text{ Re }(s) > 0\),

$$\begin{aligned} D(s) + \overline{D(s)}&= \frac{2}{\zeta _0} \int _{0-}^{T_L} \big (\overline{w(T,s)}, \overline{{\widehat{w}}(T,s)} \big ) \left[ R(T) + \text{ Re }(s) I \right] \begin{pmatrix} w(T,s) \\ {\widehat{w}}(T,s) \end{pmatrix} d T \ge 0, \end{aligned}$$
(131)

where the inequality is because R(T) is positive semidefinite.

This proves that our dynamical system is passive. Checking that it has a pH structure is a straightforward verification of [1, Definition 3].
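
The same positivity can be illustrated on a finite-dimensional analogue: with \(\mathbf{A} = S + R\), where S is skew-symmetric and R is a positive semidefinite diagonal matrix (random samples below), the function \(D(s) = \mathbf{e}_1^T \big [\mathbf{A} + s \mathbf{I}\big ]^{-1} \mathbf{e}_1\) has nonnegative real part whenever \(\text{ Re }(s) > 0\), mirroring (131).

```python
import numpy as np

# Finite-dimensional analogue of (130)-(131): A = S + R with S skew-symmetric
# and R a psd diagonal matrix; then Re D(s) >= 0 for Re(s) > 0.
rng = np.random.default_rng(1)
n = 10
M = rng.standard_normal((n, n))
A = (M - M.T) + np.diag(rng.uniform(0.0, 0.5, n))
e1 = np.zeros(n); e1[0] = 1.0

s_samples = [0.1 + 2.0j, 1.0 - 3.0j, 0.01 + 0.5j]   # all with Re(s) > 0
vals = [e1 @ np.linalg.solve(A + s * np.eye(n), e1) for s in s_samples]
print([v.real for v in vals])                       # all nonnegative
```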

Derivation of the ROM Transfer Function

Multiplying Eq. (47) by \(\zeta _j\) and (48) by \(-{\widehat{\zeta }}_j\), we get the following linear system

$$\begin{aligned} \left[ \mathbb {T} + s \, \mathrm{diag}(1,-1,1,-1, \ldots , -1) \right] {{\varvec{U}}}(s) = \frac{\mathbf{e}_1}{{\widehat{\gamma }}_1}, \end{aligned}$$
(132)

where \(\mathbb {T}\) is the tridiagonal matrix

$$\begin{aligned} \mathbb {T} = \begin{pmatrix} r_1 &{} \frac{1}{{\widehat{\gamma }}_1} &{} 0 &{} 0 &{} \ldots &{} 0 &{} 0 \\ \frac{1}{\gamma _1} &{} -{\widehat{r}}_1 &{} - \frac{1}{\gamma _1} &{} 0 &{} \ldots &{} 0 &{} 0 \\ 0 &{} - \frac{1}{{\widehat{\gamma }}_2} &{} r_2 &{} \frac{1}{{\widehat{\gamma }}_2} &{} \ldots &{} 0 &{} 0 \\ &{}&{} &{}\vdots &{} \\ 0 &{}0 &{} 0 &{} 0 &{} \ldots &{} \frac{1}{\gamma _n} &{} - {\widehat{r}}_n \end{pmatrix}. \end{aligned}$$
(133)

We can symmetrize this matrix using the diagonal matrix \( {\varvec{\varGamma }}= \mathrm{diag}({\widehat{\gamma }}_1, \gamma _1, {\widehat{\gamma }}_2, \ldots , \gamma _n), \) so we rewrite (132) as

$$\begin{aligned} \Big [ \widetilde{\mathbb {T}} + s \mathrm {diag}(1,-1,1,-1, \ldots , -1)\Big ] {\varvec{\varGamma }}^{\frac{1}{2}} {{\varvec{U}}}(s) = {\varvec{\varGamma }}^{\frac{1}{2}} \frac{\mathbf{e}_1}{{\widehat{\gamma }}_1} = \frac{\mathbf{e}_1}{\sqrt{{\widehat{\gamma }}_1}}, \end{aligned}$$
(134)

where

$$\begin{aligned} \widetilde{\mathbb {T}} = {\varvec{\varGamma }}^{\frac{1}{2}}{\mathbb {\varvec{T}}}{\varvec{\varGamma }}^{-\frac{1}{2}}= \begin{pmatrix} r_1 &{} \frac{1}{\sqrt{\gamma _1 {\widehat{\gamma }}_1}} &{} 0 &{} 0 &{} \ldots &{} 0 &{} 0 \\ \frac{1}{\sqrt{\gamma _1 {\widehat{\gamma }}_1}} &{} -{\widehat{r}}_1 &{} - \frac{1}{\sqrt{\gamma _1{\widehat{\gamma }}_2}} &{} 0 &{} \ldots &{} 0 &{} 0 \\ 0 &{} - \frac{1}{\sqrt{\gamma _1 {\widehat{\gamma }}_2}} &{} r_2 &{} \frac{1}{\sqrt{\gamma _2 {\widehat{\gamma }}_2}} &{} \ldots &{} 0 &{} 0 \\ &{}&{} &{}\vdots &{} \\ 0 &{}0 &{} 0 &{} 0 &{} \ldots &{} \frac{1}{\sqrt{\gamma _n {\widehat{\gamma }}_n}} &{} - {\widehat{r}}_n \end{pmatrix}. \end{aligned}$$
(135)

Finally, we can factor out the square root of the diagonal matrix in (132) to get

$$\begin{aligned} \big [\mathbf{A} + s \mathbf{I}\big ] \mathrm{diag}(1,i, 1, i , \ldots , i) {\varvec{\varGamma }}^{\frac{1}{2}} {{\varvec{U}}}(s) = \frac{\mathbf{e}_1}{\sqrt{{\widehat{\gamma }}_1}}, \end{aligned}$$
(136)

where

$$\begin{aligned} \mathbf{A} = \mathrm{diag}(1,-i, 1, -i , \ldots ,-i) \widetilde{\mathbb {T}} \mathrm{diag}(1,-i, 1, -i , \ldots ,-i) \end{aligned}$$
(137)

is the matrix given in (52)–(54).

The ROM transfer function is

$$\begin{aligned} D^{\text {ROM}}_n(s)&= \mathbf{e}_1^T {{\varvec{U}}}(s) = \mathbf{e}_1^T {\varvec{\varGamma }}^{-\frac{1}{2}} \mathrm{diag}(1,-i, 1, -i , \ldots ,-i) \big [\mathbf{A} + s \mathbf{I}\big ]^{-1} \frac{\mathbf{e}_1}{\sqrt{{\widehat{\gamma }}_1}} \nonumber \\&= \mathbf{e}_1^T \big [\mathbf{A} + s \mathbf{I}\big ]^{-1} \frac{\mathbf{e}_1}{{{\widehat{\gamma }}_1}} \end{aligned}$$
(138)

as stated in Eq. (53).
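
The chain of transformations (132)–(138) can be verified numerically: build \(\mathbb {T}\) for a ladder with randomly sampled positive parameters, solve (132) directly, and compare the first entry of \({{\varvec{U}}}(s)\) with \(\mathbf{e}_1^T \big [\mathbf{A} + s \mathbf{I}\big ]^{-1} \mathbf{e}_1 / {\widehat{\gamma }}_1\). The sketch below does exactly this; the two values agree to roundoff.

```python
import numpy as np

# Check of the algebra (132)-(138) on a random ladder with n = 3.
rng = np.random.default_rng(2)
n = 3
g, gh = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n)   # gamma_j, gamma_hat_j
r, rh = rng.uniform(0.0, 0.5, n), rng.uniform(0.0, 0.5, n)   # r_j, r_hat_j

# Tridiagonal matrix T of (133)
T = np.zeros((2 * n, 2 * n))
for k in range(n):
    i, j = 2 * k, 2 * k + 1
    T[i, i], T[j, j] = r[k], -rh[k]
    if k == 0:
        T[0, 1] = 1 / gh[0]
    else:
        T[i, i - 1], T[i, i + 1] = -1 / gh[k], 1 / gh[k]
    T[j, j - 1] = 1 / g[k]
    if j + 1 < 2 * n:
        T[j, j + 1] = -1 / g[k]

s = 0.7 + 1.3j
e1 = np.zeros(2 * n); e1[0] = 1.0
Dpm = np.diag([(-1.0)**k for k in range(2 * n)])   # diag(1,-1,1,-1,...)

# Direct solve of (132)
U = np.linalg.solve(T + s * Dpm, e1 / gh[0])
D_direct = U[0]

# Symmetrize with Gamma^(1/2) (cf. (134)-(135)) and build A (cf. (137))
Gamma = np.empty(2 * n)
Gamma[0::2], Gamma[1::2] = gh, g
Shalf = np.diag(np.sqrt(Gamma))
Ttilde = Shalf @ T @ np.linalg.inv(Shalf)
J = np.diag([1.0 if k % 2 == 0 else -1.0j for k in range(2 * n)])
A = J @ Ttilde @ J
D_rom = (e1 @ np.linalg.solve(A + s * np.eye(2 * n), e1)) / gh[0]   # (138)

print(abs(D_direct - D_rom))   # agreement up to roundoff
```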

Proof of Lemma 9

Let us introduce the weighted inner product

$$\begin{aligned} \left\langle \varvec{\varPhi }, \varvec{\varPsi } \right\rangle _{\zeta ^{-1},\zeta } = \int _0^{T_L} \varvec{\varPhi }^\star (T) \begin{pmatrix} \zeta ^{-1}(T) &{} 0 \\ 0 &{} \zeta (T) \end{pmatrix} \varvec{\varPsi } (T) d T, \qquad \forall ~ \varvec{\varPhi }, \varvec{\varPsi } \in \big (L^2([0,T_L])\big )^2. \end{aligned}$$

The linear operator \(i {{\mathcal {L}}}_\zeta \) defined in (79) with boundary conditions (80) is self-adjoint with respect to this inner product and thus has a countable set of real eigenvalues with no finite accumulation point [11, Chapter 7]. This implies that \({{\mathcal {L}}}_\zeta \) has purely imaginary eigenvalues. In fact, \({{\mathcal {L}}}_\zeta \) is related via a similarity transformation to the operator \({{\mathcal {L}}}+ Q(T)\) studied in Appendix B:

$$\begin{aligned} {{\mathcal {L}}}+ Q(T) = \begin{pmatrix} \zeta ^{-\frac{1}{2}}(T) &{} 0 \\ 0 &{} \zeta ^{\frac{1}{2}}(T) \end{pmatrix} {{\mathcal {L}}}_\zeta \begin{pmatrix} \zeta ^{\frac{1}{2}}(T) &{} 0 \\ 0 &{} \zeta ^{-\frac{1}{2}}(T) \end{pmatrix} \end{aligned}$$
(139)

so the eigenvalues are the same \(\{\pm i \theta _j, ~~ j \ge 1\}\). The eigenfunctions of \({{\mathcal {L}}}+ Q(T)\) are the vector-valued functions (123), which are orthonormal in the Euclidean inner product. These define the eigenfunctions of \({{\mathcal {L}}}_\zeta \)

$$\begin{aligned} \varvec{\varPhi }_j = \frac{1}{\sqrt{2}} \begin{pmatrix} \phi _j (T) \\ {\widehat{\phi }}_j(T) \end{pmatrix} = \begin{pmatrix} \zeta ^{\frac{1}{2}}(T) &{} 0 \\ 0 &{} \zeta ^{-\frac{1}{2}}(T) \end{pmatrix} \frac{1}{\sqrt{2}} \begin{pmatrix} \varphi _j (T) \\ {\widehat{\varphi }}_j(T) \end{pmatrix},\qquad j \ge 1, \end{aligned}$$
(140)

and the orthonormality relations (83) follow from (122).

The same discussion applies to the linear operator \({{\mathcal {L}}}_\zeta \) with boundary conditions (81). \(~ \Box \)

Proof of Proposition 10

Recall from Sect. 2.1 that the poles of the transfer function D(s) are the eigenvalues \(\{\lambda _j, \overline{\lambda _j}, ~ j\ge 1\}\) of the operator pencil \({\mathscr {L}}_{q,r}(s)\) defined in (33) acting on the space \({{\mathcal {S}}}_N\) defined in (34), whereas the zeroes of D(s) are the eigenvalues \(\{\mu _j, \overline{\mu _j}, ~ j\ge 1\}\) of the operator (33) acting on the space \({{\mathcal {S}}}_D\) defined in (35). Moreover, as explained in Sect. 2 (recall Eqs. (26) and (29)), \({\mathscr {L}}_{q,r}(s)\) is connected to the first order pencil

$$\begin{aligned} {\mathscr {P}}_{\zeta ,r}(s) = {{\mathcal {L}}}_\zeta + R_\alpha (T) + s I, \end{aligned}$$
(141)

with \(R_\alpha (T)\) defined in (117) and \({{\mathcal {L}}}_\zeta \) defined in (79) as follows: First, we have the similarity transformation

$$\begin{aligned} {{\mathcal {L}}}+ Q(T) + R_\alpha (T) +s I = \begin{pmatrix} \sqrt{\frac{\zeta _0}{\zeta (T)}} &{} 0 \\ 0 &{} \sqrt{\zeta _0 \zeta (T)} \end{pmatrix} {\mathscr {P}}_{\zeta ,r}(s) \begin{pmatrix} \sqrt{\frac{\zeta (T)}{\zeta _0}} &{} 0 \\ 0 &{} \frac{1}{\sqrt{\zeta _0 \zeta (T)}} \end{pmatrix}. \end{aligned}$$
(142)

Second, the first order system

$$\begin{aligned} \Big [{{\mathcal {L}}}+ Q(T) + R_\alpha (T) +s I\Big ] \begin{pmatrix} w(T,s) \\ {\widehat{w}}(T,s) \end{pmatrix} = \mathbf{0}, \end{aligned}$$
(143)

can be written as the second order equation

$$\begin{aligned} {\mathscr {L}}_{q,r}(s) w(T,s) = 0, \end{aligned}$$
(144)

with potential q(T) defined in (23) and the other way around. This implies that \({\mathscr {L}}_{q,r}(s)\) with domain \({{\mathcal {S}}}_N\) has the same eigenvalues \(\{\lambda _j, \overline{\lambda _j}, ~ j\ge 1\}\) as \({\mathscr {P}}_{\zeta ,r}(s) \) with boundary conditions (80). Similarly, \({\mathscr {L}}_{q,r}(s)\) with domain \({{\mathcal {S}}}_D\) has the same eigenvalues \(\{\mu _j, \overline{\mu _j}, ~ j\ge 1\}\) as \({\mathscr {P}}_{\zeta ,r}(s) \) with boundary conditions (81).

We know that the ROM based estimated functions \(\zeta ^{(n)}(T)\), \({\mathfrak {r}}^{(n)}(T)\) and \(\widehat{{\mathfrak {r}}}^{(n)}(T)\) define an operator pencil

$$\begin{aligned} {\mathscr {P}}_{\zeta ^{(n)},{\mathfrak {r}}^{(n)},\widehat{{\mathfrak {r}}}^{(n)}}(s) = {{\mathcal {L}}}_{\zeta ^{(n)}} + \begin{pmatrix} {\mathfrak {r}}^{(n)}(T) &{} 0 \\ 0 &{} \widehat{{\mathfrak {r}}}^{(n)}(T) \end{pmatrix} + s I \end{aligned}$$
(145)

with the following properties:

  1. 1.

    The pencils (145) and (141) with boundary conditions (80) have the same first n eigenvalues \(\lambda _j\), for \(j = 1, \ldots , n\). Moreover, the residues \(y_j\), which equal the jumps of the spectral measures, are also the same, for \(j = 1, \ldots , n.\)

  2. 2.

    In the case of boundary conditions (81), the eigenvalues of (145) are approximately equal to the zeroes \(\mu _j\) of the transfer function. We denote the error in their approximation by o(1) in the limit \(n \rightarrow \infty \).

Now, using the assumption (86) on the loss function we can write

$$\begin{aligned} {\mathscr {P}}_{\zeta ,r}(s) = {\mathscr {P}}_{\zeta ,r_0}(s) + \alpha \begin{pmatrix} \rho (T) &{} 0 \\ 0 &{} 0 \end{pmatrix}, \end{aligned}$$
(146)

which is an \(O(\alpha )\) perturbation of \({\mathscr {P}}_{\zeta ,r_0}(s)\). Since the poles and residues of (145) and (146) match to all orders of \(\alpha \), we can use Proposition 8 to obtain from the O(1) matching that the O(1) primary loss must be \(r_0\) and the dual loss is zero. Moreover, the impedance \(\zeta ^{(n)}(T)\) approximates \(\zeta (T)\) as \(n \rightarrow \infty \). Therefore, we can write pointwise in \((0,T_L)\) that

$$\begin{aligned} \zeta ^{(n)}(T)&= \zeta (T)\big [1 + o(1) + O(\alpha )\big ], \end{aligned}$$
(147)
$$\begin{aligned} {\mathfrak {r}}^{(n)}(T)&= r_0 + \alpha \rho ^{(n)}(T)\big [1 + o(1) + O(\alpha )\big ], \end{aligned}$$
(148)
$$\begin{aligned} \widehat{{\mathfrak {r}}}^{(n)}(T)&=\alpha {\widehat{\rho }}^{(n)}(T)\big [1 + o(1) + O(\alpha )\big ], \end{aligned}$$
(149)

with functions \(\rho ^{(n)}(T)\) and \({\widehat{\rho }}^{(n)}(T)\) independent of \(\alpha \).

Because the same constant \(r_0\) appears in both pencils (145) and (146), we can subtract it from both problems. This shifts the eigenvalues, but since the eigenvalues of the two pencils match, they are shifted in the same way by the subtraction of \(r_0\), i.e., they still match. The new pencils are

$$\begin{aligned} {{\mathcal {L}}}_{\zeta } + \alpha \begin{pmatrix} \rho (T) &{} 0 \\ 0 &{} 0 \end{pmatrix} \quad \text{ and } \quad {{\mathcal {L}}}_{\zeta ^{(n)}} + \alpha \begin{pmatrix} \rho ^{(n)}(T) &{} 0 \\ 0 &{} {\widehat{\rho }}^{(n)}(T)\end{pmatrix}, \end{aligned}$$
(150)

and their eigenvalues are

$$\begin{aligned} \lambda _j = i \theta _j + \alpha \delta \lambda _{j} + O(\alpha ^2), \qquad \mu _j = i \vartheta _j + \alpha \delta \mu _{j} + O(\alpha ^2), \qquad j \ge 1, \end{aligned}$$
(151)

where \(i \theta _j\) and \(i \vartheta _j\) are the purely imaginary eigenvalues of the lossless problem (see Lemma 9). The \(O(\alpha )\) perturbations of these eigenvalues are

$$\begin{aligned} \delta \lambda _{j} = \left\langle \varvec{\varPhi }^+_j, \begin{pmatrix} \rho (T) &{} 0 \\ 0 &{} 0 \end{pmatrix} \varvec{\varPhi }^+_j \right\rangle _{\zeta ^{-1},\zeta } = \left\langle \varvec{\varPhi }^+_j, \begin{pmatrix} \rho ^{(n)}(T) &{} 0 \\ 0 &{} {\widehat{\rho }}^{(n)}(T) \end{pmatrix} \varvec{\varPhi }^+_j \right\rangle _{\zeta ^{-1},\zeta }, \end{aligned}$$
(152)

and similarly

$$\begin{aligned} \delta \mu _{j} = \left\langle \varvec{\varPsi }^+_j, \begin{pmatrix} \rho (T) &{} 0 \\ 0 &{} 0 \end{pmatrix} \varvec{\varPsi }^+_j \right\rangle _{\zeta ^{-1},\zeta } = \left\langle \varvec{\varPsi }^+_j, \begin{pmatrix} \rho ^{(n)}(T) &{} 0 \\ 0 &{} {\widehat{\rho }}^{(n)}(T) \end{pmatrix} \varvec{\varPsi }^+_j \right\rangle _{\zeta ^{-1},\zeta }, \end{aligned}$$
(153)

where \(\varvec{\varPhi }_j^{+}\) and \(\varvec{\varPsi }^+_j,\) for \(j \ge 1\) are the orthonormal eigenfunctions in Lemma 9. Substituting their expressions in these equations gives (89)–(90).
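The first-order corrections above are instances of the standard eigenvalue perturbation formula \(\delta \lambda _j = \langle v_j, B \, v_j \rangle \) for simple eigenvalues. The following sketch checks this numerically for the ordinary symmetric case with the standard inner product (the pencils in the text are instead skew-symmetric in the weighted inner product \(\left\langle \cdot , \cdot \right\rangle _{\zeta ^{-1},\zeta }\)); the matrices here are random illustrations, not the operators of the paper.

```python
import numpy as np

# Toy check of the first-order eigenvalue perturbation used in (151)-(153):
# for symmetric A with orthonormal eigenvectors v_j,
#   eig_j(A + alpha B) = eig_j(A) + alpha <v_j, B v_j> + O(alpha^2).
rng = np.random.default_rng(0)
n, alpha = 6, 1e-4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric "lossless" part
B = rng.standard_normal((n, n)); B = (B + B.T) / 2   # symmetric perturbation

w, V = np.linalg.eigh(A)                     # unperturbed eigenpairs, sorted
w_pert = np.linalg.eigh(A + alpha * B)[0]    # perturbed eigenvalues

# first-order prediction: unperturbed eigenvalue plus the diagonal of B
# expressed in the eigenbasis of A
w_first = w + alpha * np.einsum('ij,jk,ki->i', V.T, B, V)
err = np.max(np.abs(w_pert - w_first))       # should be O(alpha^2)
```

For \(\alpha = 10^{-4}\) the residual `err` is orders of magnitude below the \(O(\alpha )\) size of the perturbation itself, consistent with the quadratic remainder in (151).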

Finally, note that if \(\zeta ^{(n)}(T)\) had an \(O(\alpha )\) error term, it would manifest as an imaginary perturbation of the eigenvalues, because \({{\mathcal {L}}}_{\zeta }\) and \({{\mathcal {L}}}_{\zeta ^{(n)}}\) are skew-symmetric operators with respect to the weighted inner product \(\left\langle \cdot , \cdot \right\rangle _{\zeta ^{-1},\zeta }\). However, the perturbations (152)–(153) are real valued, so the error in the impedance must be of higher order in \(\alpha \). \(\square \)

Setup for the Numerical Simulations

In this appendix we describe how we generate the data used in the inversion results in Figs. 3, 4, 5 and 6. We also give details on the calculation of the Jacobian used in the Gauss-Newton iteration for solving the optimization problem (96).

To generate the data, we solve the system (5)–(6), with boundary conditions (7), (13), using finite differences on a staggered grid with constant step size \(\tau \), except for the first dual step size, as illustrated in the following sketch:

[Sketch of the staggered grid: primary nodes (blue), dual nodes (red), boundary nodes in pale color.]

The primary wave u(T,s) is discretized on the primary grid, at the nodes illustrated in blue, and the dual wave \({\widehat{u}}(T,s)\) is discretized on the dual grid, at the nodes illustrated in red. We suppress the s dependence of the discretized waves in the illustration. The boundary conditions are imposed at the pale colored nodes. The travel time domain is normalized to \(T_L=1\) and we use \(N=3000\) grid steps, so that the discretization never falls below 30 points per wavelength. The derivatives are calculated with the standard, two point forward differentiation rule. The truncated measure data could be calculated using the spectral decomposition of the finite difference matrix. However, to better emulate the measurement process, we evaluate \(D(s) = u_1(s) \) on the interval \(s \in [-i\omega _{\mathrm{max}},i\omega _{\mathrm{max}}]\), discretized at 10,000 equidistant points, and then use the vectorfit algorithm [15, 16] to extract the poles and residues of D(s). The value of \(\omega _{\mathrm{max}}\) depends on n and is given in the following table:

    n     \(\omega _{\mathrm{max}}\)     Used in
    10    93                             Fig. 3 (top)
    40    124                            Fig. 3 (bottom)
    90    281                            Figs. 4 and 6
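The staggered-grid evaluation of the transfer function can be sketched schematically as follows. This is a hedged illustration with constant coefficients and a small grid; the actual system (5)–(6) with boundary conditions (7), (13) and the variable impedance and loss profiles of the paper are not reproduced here.

```python
import numpy as np

# Schematic staggered-grid evaluation of the transfer function D(s) = u_1(s).
# The primary wave lives on N+1 primary nodes, the dual wave on N dual nodes;
# derivatives use the standard two point forward difference rule. A constant
# loss r0 and unit impedance stand in for the true profiles (illustrative only).
N, T_L, r0 = 100, 1.0, 0.5
tau = T_L / N                                    # constant grid step

# forward difference from primary to dual nodes, and its negative adjoint
Dp = (np.eye(N, N + 1, k=1) - np.eye(N, N + 1)) / tau
Dd = -Dp.T

# first-order system matrix acting on the stacked vector (u, u_hat)
A = np.block([[r0 * np.eye(N + 1), Dd],
              [Dp, r0 * np.eye(N)]])
b = np.zeros(2 * N + 1)
b[0] = 1.0 / tau                                 # source at the antenna node

def D(s):
    """First component of (A + s I)^{-1} b at complex frequency s."""
    return np.linalg.solve(A + s * np.eye(2 * N + 1), b)[0]

# sample D(s) on the imaginary axis, as input to the vector fitting step
omega = np.linspace(-93.0, 93.0, 201)
samples = np.array([D(1j * w) for w in omega])
```

Since A and b are real, the samples satisfy \(D({\overline{s}}) = \overline{D(s)}\), which is the conjugate symmetry that pairs the poles \(\lambda _j\) and \(\overline{\lambda _j}\) in the spectral representation.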

The vectorfit algorithm alone does not give a good estimate of the truncated spectral measure transfer function \(D_n^\text {ROM}(s)\) from D(s), because the poles and residues outside the spectral interval of interest have a large contribution, especially when the mean loss \(r_0\) is large. Thus, we proceed as follows. First, we estimate the mean loss \(r_0\) by fitting D(s) at points with \(|s| \gg 1\) to the transfer function

$$\begin{aligned} D^{\mathrm{as}}(s;r_0) = \sum _{j=\left\lfloor \frac{T_L \omega _{\mathrm{max}}}{\pi } + \frac{1}{2}\right\rfloor }^\infty \left[ \frac{y_j^{\mathrm{as}}}{s - \lambda _j^{\mathrm{as}}} + \frac{\overline{y_j^{\mathrm{as}}}}{s - \overline{\lambda _j^{\mathrm{as}}}} \right] , \end{aligned}$$
(154)

with poles and residues given by the asymptotes of the spectrum:

$$\begin{aligned} \lambda _j^{\mathrm{as}} = \frac{i(j-1/2)\pi }{T_L} - \frac{r_0}{2}, \qquad y_j^{\mathrm{as}} = \frac{\zeta _0}{T_L}\Big [ 1 + \frac{i r_0 T_L}{2(j-1/2)\pi }\Big ], \qquad j \gg 1. \end{aligned}$$
(155)

Once we estimate \(r_0\), we use the vectorfit algorithm to estimate the poles and residues of \(D(s) - D^{\mathrm{as}}(s;r_0)\), for \(s \in [-i\omega _{\mathrm{max}},i\omega _{\mathrm{max}}]\). These then define the truncated spectral measure transfer function \(D_n^\text {ROM}(s)\) of the ROM, according to Eq. (42).
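A minimal numerical sketch of the asymptotic tail (154)–(155) follows. The infinite sum must be truncated at a finite index `j_max`, and the values of \(T_L\), \(\zeta _0\) and \(\omega _{\mathrm{max}}\) below are illustrative.

```python
import numpy as np

# Truncated evaluation of the asymptotic transfer function (154), built from
# the high-frequency poles and residues (155). j_max truncates the series.
T_L, zeta0, omega_max = 1.0, 1.0, 93.0

def D_as(s, r0, j_max=20000):
    j0 = int(np.floor(T_L * omega_max / np.pi + 0.5))     # lower summation index
    j = np.arange(j0, j_max)
    lam = 1j * (j - 0.5) * np.pi / T_L - r0 / 2.0         # poles (155)
    y = (zeta0 / T_L) * (1.0 + 1j * r0 * T_L
                         / (2.0 * (j - 0.5) * np.pi))     # residues (155)
    # each pole is paired with its complex conjugate, as in (154)
    return np.sum(y / (s - lam) + np.conj(y) / (s - np.conj(lam)))

val = D_as(50.0, 0.5)
```

Estimating \(r_0\) then amounts to minimizing the misfit between D(s) and \(D^{\mathrm{as}}(s;r_0)\) over \(r_0\) at sample points with \(|s| \gg 1\). Note that the conjugate pairing makes \(D^{\mathrm{as}}(s;r_0)\) real for real s.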

The Jacobian used in the Gauss-Newton iteration for solving problem (96) is approximated numerically using finite differences. We perturb the Fourier coefficients of \(\zeta ^S(T)\) and \(r^S(T)\) by \(\varDelta _{\mathrm{num}}=0.01\) and compute the resulting ROM parameters \({\mathfrak {r}}^{\mathrm{pert}}_j, \widehat{{\mathfrak {r}}} ^{\mathrm{pert}}_j,\zeta ^{\mathrm{pert}}_j\) and \({\widehat{\zeta }}_j^{\mathrm{pert}}\). The entries of the Jacobian corresponding to \(\zeta _j^S\) are \((\zeta ^{\mathrm{pert}}_j - \zeta _j) \varDelta _{\mathrm{num}}^{-1}\), and similarly for the coefficients corresponding to the loss function.
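This finite-difference Jacobian can be organized as in the following generic sketch. Here `rom_params` is a hypothetical placeholder for the map from the Fourier coefficients of \(\zeta ^S(T)\), \(r^S(T)\) to the stacked ROM parameters; that map depends on the full inversion pipeline and is not reproduced here.

```python
import numpy as np

def numerical_jacobian(rom_params, coeffs, delta=0.01):
    """One-sided finite-difference Jacobian of the map coeffs -> ROM parameters.

    rom_params : callable returning a 1D array of ROM parameters
                 (stands in for the paper's parameter map; hypothetical)
    coeffs     : 1D array of Fourier coefficients of zeta^S and r^S
    delta      : perturbation size Delta_num (0.01 in the paper)
    """
    base = rom_params(coeffs)
    J = np.zeros((base.size, coeffs.size))
    for k in range(coeffs.size):
        pert = coeffs.copy()
        pert[k] += delta                          # perturb one coefficient
        J[:, k] = (rom_params(pert) - base) / delta
    return J
```

The one-sided difference has \(O(\varDelta _{\mathrm{num}})\) truncation error, which is adequate here since the Jacobian only steers the Gauss-Newton iteration.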


Borcea, L., Druskin, V. & Zimmerling, J. A Reduced Order Model Approach to Inverse Scattering in Lossy Layered Media. J Sci Comput 89, 1 (2021). https://doi.org/10.1007/s10915-021-01616-7
