Analysis of Dirichlet–Robin Iterations for Solving the Cauchy Problem for Elliptic Equations

The Cauchy problem for general elliptic equations of second order is considered. In a previous paper (Berntsson et al. in Inverse Probl Sci Eng 26(7):1062–1078, 2018), it was suggested that the alternating iterative algorithm suggested by Kozlov and Maz'ya can be convergent, even for large wavenumbers k² in the Helmholtz equation, if the Neumann boundary conditions are replaced by Robin conditions. In this paper, we provide a proof that shows that the Dirichlet–Robin alternating algorithm is indeed convergent for general elliptic operators provided that the parameters in the Robin conditions are chosen appropriately. We also give numerical experiments intended to investigate the precise behaviour of the algorithm for different values of k² in the Helmholtz equation. In particular, we show how the speed of the convergence depends on the choice of Robin parameters.


Introduction
Let Ω be a bounded domain in R^d with a Lipschitz boundary Γ divided into two disjoint parts Γ_0 and Γ_1 such that they have a common Lipschitz boundary in Γ and Γ̄_0 ∪ Γ̄_1 = Γ, see Fig. 1. The Cauchy problem for an elliptic equation is given as problem (1.1), where ν = (ν_1, ..., ν_d) is the outward unit normal to the boundary Γ, D_j = ∂/∂x_j, and a_ji and a are measurable real-valued functions such that a is bounded, a_ij = a_ji, and the coefficients satisfy a uniform ellipticity condition. The conormal operator N is defined as usual by N u = ν_j a_ji D_i u, and the functions f and g are the specified Cauchy data on Γ_0, with a certain noise level. We are seeking real-valued solutions to problem (1.1). We will always assume that u = 0 is the only solution in H^1(Ω) to Lu = 0 satisfying u = 0, N u = 0 on Γ_0, or satisfying u = 0, N u = 0 on Γ_1. This is certainly true for the Helmholtz equation. The Cauchy problem (1.1), which includes the Helmholtz equation [1,12,17,21], arises in many areas of science and engineering related to electromagnetic or acoustic waves, for example in underwater acoustics [8] and in medical applications [22]. The problem is ill-posed in the sense of Hadamard [9].
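In the notation just introduced, and assuming the standard divergence form Lu = D_j(a_{ji} D_i u) + a u (an assumption consistent with the conormal operator N u = ν_j a_{ji} D_i u defined above), the Cauchy problem (1.1) can be sketched as:

```latex
\begin{aligned}
& L u = D_j \bigl( a_{ji} D_i u \bigr) + a u = 0 && \text{in } \Omega, \\
& u = f && \text{on } \Gamma_0, \\
& N u = g && \text{on } \Gamma_0.
\end{aligned}
```

For the Helmholtz equation, a_{ji} = δ_{ji} and a = k², so that Lu = Δu + k²u.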
The alternating iterative algorithm was first introduced by V. A. Kozlov and V. Maz'ya in [13] for solving Cauchy problems for elliptic equations. For the Laplace equation, a Dirichlet-Neumann alternating algorithm for solving the Cauchy problem was suggested in [14], see also [10,11].
It has been noted that the Dirichlet-Neumann algorithm does not always work even if L is the Helmholtz operator Δ + k². Thus, several variants of the alternating iterative algorithm have been proposed, see, for instance, [2,7,18,19], and also [3,4] where an artificial interior boundary was introduced in such a way that convergence was restored. Also, it has been suggested that replacing the Neumann conditions by Robin conditions can improve the convergence [6].
The alternating iterative algorithm has several advantages compared to other methods. Most importantly, it is easy to implement as it only requires solving a sequence of well-posed mixed boundary value problems. In contrast most direct methods, e.g. [16,23] or [12], are based on an analytic solution being available and are thus more difficult to apply for general geometries or in the case of variable coefficients. On the other hand, the alternating iterative algorithm, in its basic form, suffers from slow convergence, see [4], and in the presence of noise additional regularization techniques have to be implemented, see, e.g. [5]. Thus, a practically useful form of the alternating algorithm tends to be more complicated than the variant analyzed in this paper.
In this work, we formulate the Cauchy problem for general elliptic operator of second order and consider the Dirichlet-Robin alternating iterative algorithm. Under the assumption that the elliptic operator with the Dirichlet boundary condition is positive we show that the Dirichlet-Robin algorithm is convergent, provided that parameters in the Robin conditions are chosen appropriately. The proof follows basically the same lines as that in [13] but with certain changes due to more general class of operators and the Robin boundary condition. We also make numerical experiments to investigate more precisely how the choice of the Robin parameters influences the convergence of the iterations.

The Alternating Iterative Procedure
In this section, we describe the Dirichlet-Robin algorithm and introduce the necessary assumption.
Our main assumption is condition (2.1), where H^1(Ω, Γ) consists of functions u ∈ H^1(Ω) vanishing on Γ. It is shown below in Sect. 3.2 that condition (2.1) is equivalent to the existence of two real-valued, measurable, bounded functions μ_0 and μ_1, defined on Γ_0 and Γ_1 respectively, such that (2.2) holds for all u ∈ H^1(Ω) \ {0}. Actually, we prove this for μ_0 = μ_1 equal to a sufficiently large positive constant, but we think it is useful to allow two different functions here (as we will see in the numerical examples, the convergence of the Dirichlet-Robin algorithm slows down when μ_0 and μ_1 become large). With these two bounded, real-valued, measurable functions μ_0 and μ_1 in place, we consider the two auxiliary boundary value problems (2.3) and (2.4). These problems are uniquely solvable in H^1(Ω) according to [20]. The algorithm for solving (1.1) is described as follows: take f_0 = f and g_0 = g + μ_0 f, where f and g are the Cauchy data given in (1.1). Then: (1) the first approximation u_0 is obtained by solving (2.3), where η is an arbitrary initial guess for the Robin condition on Γ_1; (2) having constructed u_{2n}, we find u_{2n+1} by solving (2.4) with φ = u_{2n} on Γ_1; (3) we then obtain u_{2n+2} by solving (2.3) with η = N u_{2n+1} + μ_1 u_{2n+1} on Γ_1.
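Written out, under the same assumption on the divergence form of L as in Sect. 1, the two auxiliary boundary value problems read:

```latex
\text{(2.3)}\quad
\begin{cases}
L u = 0 & \text{in } \Omega,\\
u = f_0 & \text{on } \Gamma_0,\\
N u + \mu_1 u = \eta & \text{on } \Gamma_1,
\end{cases}
\qquad\qquad
\text{(2.4)}\quad
\begin{cases}
L u = 0 & \text{in } \Omega,\\
N u + \mu_0 u = g_0 & \text{on } \Gamma_0,\\
u = \varphi & \text{on } \Gamma_1.
\end{cases}
```

Each iteration thus solves one mixed Dirichlet/Robin problem and one Robin/Dirichlet problem, exchanging traces and conormal data across Γ_1.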

Function Spaces, Weak Solutions and Well-Posedness
In this section, we define the weak solutions to the problems (2.3) and (2.4). We also describe the function spaces involved and show that the problems solved at each iteration step are well-posed.

Function Spaces
As usual, the Sobolev space H^1(Ω) consists of all functions in L^2(Ω) whose first-order weak derivatives belong to L^2(Ω), equipped with the standard inner product; the corresponding norm is denoted by ‖u‖_{H^1(Ω)}. Further, by H^{1/2}(Γ) we mean the space of traces on Γ of functions in H^1(Ω). Also, H^{1/2}(Γ_0) is the space of restrictions to Γ_0 of functions belonging to H^{1/2}(Γ), and H^{−1/2}(Γ_0) and H^{−1/2}(Γ_1) denote the corresponding dual spaces, see [20].

The Bilinear Form a
Lemma 3.1 The assumption (2.1) is equivalent to the existence of a positive constant μ such that (3.2) holds.

Proof Clearly the requirement (3.2) implies (2.1). Now assume that (2.1) holds and let us prove (3.2). The function λ(μ) is monotonically increasing with respect to μ and λ(μ) ≤ λ_0 for all μ. Therefore, there is a limit λ* := lim_{μ→∞} λ(μ) which does not exceed λ_0. Furthermore, λ_0 is the first eigenvalue of the operator −L with the Dirichlet boundary condition, and λ(μ) is the first eigenvalue of −L with the Robin boundary condition N u + μu = 0 on Γ. Our goal is to show that λ(μ) → λ_0 as μ → ∞, or equivalently λ* = λ_0. We denote by u_μ an eigenfunction corresponding to the eigenvalue λ(μ), normalised by ‖u_μ‖_{L^2(Ω)} = 1. It follows that ‖u_μ‖_{H^1(Ω)} ≤ C, where C does not depend on μ. This implies that we can choose a sequence μ_j, 1 ≤ j < ∞, with μ_j → ∞ as j → ∞, such that u_{μ_j} is weakly convergent in H^1(Ω), u_{μ_j} is convergent in L^2(Ω), and μ_j u_{μ_j} is bounded. We denote the limit by u ∈ H^1(Ω). Clearly, ‖u‖_{L^2(Ω)} = 1 and, therefore, u ≠ 0. Moreover, u ∈ H^1(Ω, Γ). Therefore, λ* is an eigenvalue of −L with the Dirichlet boundary condition and u is an eigenfunction corresponding to λ*. Using that λ* ≤ λ_0 and that λ_0 is the first eigenvalue of −L with the Dirichlet boundary condition, we get λ* = λ_0. This proves that λ(μ) → λ_0 as μ → ∞.
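The limiting behaviour λ(μ) → λ_0 in Lemma 3.1 can be observed numerically. The sketch below uses a one-dimensional model problem (an assumption made purely for illustration: L = d²/dx² on (0, 1), so the first Dirichlet eigenvalue of −L is λ_0 = π²) and computes the first eigenvalue under the Robin condition N u + μu = 0, i.e. −u′(0) + μu(0) = 0 and u′(1) + μu(1) = 0, for increasing μ.

```python
import numpy as np

def first_robin_eigenvalue(mu, n=400):
    """First eigenvalue of -u'' on (0,1) with Robin conditions N u + mu*u = 0."""
    h = 1.0 / n
    # One-sided Robin conditions let us eliminate the boundary unknowns:
    #   (u1 - u0)/h = mu*u0  =>  u0 = u1 / (1 + mu*h), similarly at x = 1.
    c = 1.0 / (1.0 + mu * h)
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = 2.0
        if i > 0:
            A[i, i - 1] = -1.0
        if i < n - 2:
            A[i, i + 1] = -1.0
    A[0, 0] -= c          # coupling of the first interior node to the eliminated u0
    A[-1, -1] -= c        # coupling of the last interior node to the eliminated u_n
    A /= h**2
    return np.linalg.eigvalsh(A)[0]

# lambda(mu) increases with mu and approaches lambda_0 = pi^2 as mu -> infinity.
lams = [first_robin_eigenvalue(mu) for mu in (1.0, 10.0, 100.0, 1e4)]
```

The computed values increase monotonically towards π² ≈ 9.87, in line with the monotonicity and limit arguments of the proof.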
According to Lemma 3.1, we can choose two functions μ 0 and μ 1 such that (2.2) holds. Let us introduce the bilinear form on H 1 ( ) Let us show that the norm · μ is equivalent to the standard norm on H 1 ( ).

Lemma 3.2
There exist positive constants C_1 and C_2 such that (3.3) holds.

Proof The second inequality of (3.3) follows directly from the boundedness of the coefficients and of μ_0 and μ_1. To prove the first inequality, we argue by contradiction and assume that it does not hold. This means that we can find a sequence (u_{k_n})_{n=0}^∞ that converges strongly in L^2(Ω). Moreover, the trace operator from H^1(Ω) to L^2(Γ) is compact; hence, the restrictions of u_{k_n} to Γ_0 and Γ_1 converge strongly, in the L^2-norm, to the corresponding restrictions of the limit u. Finally, ∇u_{k_n} converges weakly to ∇u in L^2(Ω). By (3.5), the resulting expression tends to zero as n → ∞. Using these facts and (3.5), we find that lim sup_{n→∞} ∫_Ω |∇u_{k_n}|² dx = 0, which contradicts (3.4). This proves the first inequality in (3.3).
We define the following subspaces of H^1(Ω). First, H^1(Ω, Γ) is the space of functions from H^1(Ω) vanishing on Γ. Second, H^1(Ω, Γ_0) and H^1(Ω, Γ_1) are the spaces of functions from H^1(Ω) vanishing on Γ_0 and Γ_1, respectively. The bilinear form a_μ restricted to H^1(Ω, Γ_0) will be denoted by a_1(u, v), and a_μ restricted to H^1(Ω, Γ_1) by a_0(u, v).

Preliminaries
Let u ∈ H^2(Ω) satisfy the elliptic equation (3.6). By Green's first identity, we obtain (3.7). Adding ∫_{Γ_0} μ_0 u v dS and ∫_{Γ_1} μ_1 u v dS to both sides yields an identity expressing a_μ(u, v) in terms of the conormal derivative. Let H denote the space of weak solutions to (3.6); clearly it is a closed subspace of H^1(Ω). Let us define the conormal derivative of functions from H. We use identity (3.7) to define the conormal derivative N u on Γ. By the extension theorem [15], for any ψ ∈ H^{1/2}(Γ) there exists v ∈ H^1(Ω) with v = ψ on Γ and ‖v‖_{H^1(Ω)} ≤ C ‖ψ‖_{H^{1/2}(Γ)}, where the constant C is independent of ψ. Moreover, this mapping ψ → v can be chosen to be linear.
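For orientation, the identity used here can be sketched as follows (assuming, as above, the divergence form Lu = D_j(a_{ji} D_i u) + a u): for u with Lu = 0 and v ∈ H^1(Ω), Green's first identity gives

```latex
\int_\Omega \bigl( a_{ji}\, D_i u \, D_j v - a\, u v \bigr)\, dx
  = \int_\Gamma (N u)\, v \, dS,
```

and adding the two μ-terms to both sides yields a_μ(u, v) = ∫_{Γ_0}(N u + μ_0 u) v dS + ∫_{Γ_1}(N u + μ_1 u) v dS, which is how (3.7) is used to extend the conormal derivative to u ∈ H.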

Lemma 3.4 Let u ∈ H. Then there exists a bounded linear operator from H to H^{−1/2}(Γ), denoted u ↦ N u, which coincides with the classical conormal derivative N u = ν_j a_{ji} D_i u whenever u ∈ H^2(Ω) and the coefficients a_{ij} are smooth.
Proof Consider the functional F defined by (3.9). Let us show that the right-hand side of (3.9) is independent of the choice of v. If v_1 and v_2 are two extensions of ψ, then v_1 − v_2 ∈ H^1(Ω, Γ) and, therefore, the corresponding values of the right-hand side agree. Hence, the definition of F(ψ) does not depend on v. Next, by the Cauchy–Schwarz inequality and (3.8), we obtain that F is a bounded linear functional on H^{1/2}(Γ).

Remark 3.5 We will use the notation N u for the extension of the conormal derivative to functions from H. For the distribution N u ∈ H^{−1/2}(Γ), the restrictions N u|_{Γ_0} and N u|_{Γ_1} are well defined.

Weak Solutions and Well-Posedness
In this section, we define weak solutions to the boundary value problems (2.3) and (2.4) and show that these problems are well-posed.
We now show that problem (2.3) is well-posed. Let f_0 ∈ H^{1/2}(Γ_0) and η ∈ H^{−1/2}(Γ_1). Then there exists a unique weak solution u ∈ H^1(Ω) to problem (2.3) satisfying the estimate (3.10), where the constant C is independent of f_0 and η.

Proof
The proof presented here is quite standard. Let w ∈ H^1(Ω) satisfy w|_{Γ_0} = f_0 and (3.12) for all v ∈ H^1(Ω, Γ_0). The right-hand side of (3.12) is a continuous linear functional, so we can write it in the form (3.13). By applying the trace theorem, the Cauchy–Schwarz inequality and (3.11), we obtain a bound for this functional. By the Riesz representation theorem, there exists a unique solution h ∈ H^1(Ω, Γ_0) of (3.13). One can then verify, using the triangle inequality and (3.11), that u = w + h satisfies (3.10).
In the same manner, one can show that problem (2.4) is well-posed. We state this last result without proof.

Proposition 3.9 Let g_0 ∈ H^{−1/2}(Γ_0) and φ ∈ H^{1/2}(Γ_1). Then there exists a unique weak solution u ∈ H^1(Ω) to problem (2.4) satisfying a corresponding estimate, where C is independent of g_0 and φ.

Convergence of the Algorithm
We now prove the convergence of the Dirichlet-Robin algorithm. We denote by (u_n(f_0, g_0, η))_{n=0}^∞ the sequence of approximate solutions to (1.1) obtained from the alternating algorithm described in Sect. 2. The iterates depend linearly on f_0, g_0 and η.

Proof Lemma 3.4 together with Remark 3.5 shows that the traces and conormal derivatives appearing in the algorithm are well defined for all n. Therefore, it is sufficient to show that the sequence converges in the case when f_0 = 0, g_0 = 0 and η is an arbitrary element of H^{−1/2}(Γ_1). To simplify the notation, we will denote the elements of this sequence by u_n = u_n(η) instead of u_n(0, 0, η).

Here u_0 solves (2.3) with f_0 = 0, u_{2n} is a solution to (2.3) with f_0 = 0 and η = N u_{2n−1} + μ_1 u_{2n−1}, and u_{2n+1} satisfies (2.4) with g_0 = 0 and φ = u_{2n}. The weak formulation of (2.3) gives one energy identity. Similarly, u_{2n+1} solves problem (2.4) with N u_{2n+1} + μ_0 u_{2n+1} = 0 on Γ_0 and u_{2n+1} = u_{2n} on Γ_1, and the weak formulation of (2.4) yields a second identity. From these relations, we obtain (4.1). We introduce the linear set R consisting of functions η ∈ H^{−1/2}(Γ_1) such that u_n(η) → 0 in H^1(Ω) as n → ∞. Since ‖·‖_μ is a norm and u_n(η) is a linear function of η, we can square both sides of the resulting inequality; since (a_μ(u_n, u_n))_{n=0}^∞ is a decreasing sequence, we obtain (4.2). Since u_0 is a solution to problem (2.3), the first term on the right-hand side of (4.2) is small for all n if j is sufficiently large, and the second term in (4.2) can be made small by choosing n sufficiently large. Therefore, the sequence (u_n(η))_{n=0}^∞ converges to zero in H^1(Ω), and thus η ∈ R.

Numerical Results
In this section, we present some numerical experiments. To conduct our tests, we need to specify a geometry and implement a finite difference method for solving the two well-posed problems that appear during the iterative process. For our tests we chose a relatively simple geometry: let L be a positive number and consider the domain Ω = {(x, y) : 0 < x < 1, 0 < y < L}. We consider the Cauchy problem for the Helmholtz equation in Ω.
Due to the zero Dirichlet boundary condition on the parts of the boundary where x = 0 or x = 1, we keep the approximations equal to zero there at each iteration. Our theoretical result then gives convergence of the Dirichlet-Robin iterations for appropriately chosen Robin parameters, while the Dirichlet-Neumann iterations converge for k² < π². In our finite difference implementation, we introduce a uniform grid on the domain, of size N × M, such that the step size is h = N^{−1} and thus M = round(L h^{−1}), and use a standard O(h²) accurate finite difference scheme. In the case of Robin conditions, on Γ_0 or Γ_1, we use one-sided difference approximations. See [4] for further details. For all the experiments presented in this section, a grid of size N = 401 and M = 201 was used.
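As a concrete illustration of the procedure, the sketch below implements the Dirichlet-Robin iteration for the Helmholtz equation on the strip (0, 1) × (0, L), with the five-point stencil and one-sided Robin approximations described above. The grid size, k² = 5 and μ_0 = μ_1 = 5 are illustrative choices, not the paper's actual parameters (in particular, k² < π² here so that the separated reference solution below is real-valued); this is a minimal sketch, not the authors' code.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N, L = 40, 0.5
h = 1.0 / N
M = round(L / h)                       # uniform step in both directions
k2, mu0, mu1 = 5.0, 5.0, 5.0           # illustrative values only
kap = np.sqrt(np.pi**2 - k2)
x = np.linspace(0.0, 1.0, N + 1)

def exact(xv, yv):
    # Separated solution of Delta u + k2*u = 0 vanishing at x = 0 and x = 1.
    return np.sin(np.pi * xv) * (np.cosh(kap * yv) + np.sinh(kap * yv))

f = exact(x, 0.0)                      # Dirichlet Cauchy data on Gamma_0 = {y = 0}
g = -kap * np.sin(np.pi * x)           # conormal data N u = -u_y on Gamma_0
g0 = g + mu0 * f

def idx(i, j):
    return j * (N + 1) + i

def solve(bottom, top):
    """Helmholtz solve with u = 0 on x = 0, 1; bottom/top are
    ('D', data) for Dirichlet or ('R', data) for Robin boundary rows."""
    n = (N + 1) * (M + 1)
    A = sp.lil_matrix((n, n))
    b = np.zeros(n)
    for j in range(M + 1):
        for i in range(N + 1):
            r = idx(i, j)
            if i == 0 or i == N:               # homogeneous Dirichlet sides
                A[r, r] = 1.0
            elif j in (0, M):
                kind, data = bottom if j == 0 else top
                if kind == 'D':
                    A[r, r] = 1.0
                else:                          # one-sided Robin: N u + mu*u = data
                    mu = mu0 if j == 0 else mu1
                    jn = 1 if j == 0 else M - 1
                    A[r, r] = 1.0 / h + mu
                    A[r, idx(i, jn)] = -1.0 / h
                b[r] = data[i]
            else:                              # interior 5-point Helmholtz stencil
                A[r, r] = -4.0 / h**2 + k2
                for rr in (idx(i-1, j), idx(i+1, j), idx(i, j-1), idx(i, j+1)):
                    A[r, rr] = 1.0 / h**2
    return spla.spsolve(A.tocsr(), b).reshape(M + 1, N + 1)

eta = np.zeros(N + 1)                  # initial guess for the Robin data on y = L
errs = []
for _ in range(50):
    u = solve(('D', f), ('R', eta))    # problem (2.3)
    phi = u[M, :]                      # Dirichlet trace passed to (2.4)
    u = solve(('R', g0), ('D', phi))   # problem (2.4)
    eta = (u[M, :] - u[M - 1, :]) / h + mu1 * u[M, :]
    errs.append(np.max(np.abs(phi - exact(x, L))))
```

With these parameters the iteration converges linearly, and the error in the reconstructed trace at y = L settles at the level of the discretization error of the one-sided Robin approximation.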
We also find that the unknown data at y = L is u(x, L) = sin(πx) (cosh(√(π² − k²) L) + sinh(√(π² − k²) L)), together with the corresponding conormal derivative. The analytical solution is illustrated in Fig. 2. Note that the solution depends on both L and k².
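As a sanity check, one can verify symbolically that the separated solution u(x, y) = sin(πx)(cosh(κy) + sinh(κy)) with κ = √(π² − k²), the form assumed in these experiments, satisfies the Helmholtz equation Δu + k²u = 0:

```python
import sympy

x, y, k = sympy.symbols('x y k', positive=True)
kappa = sympy.sqrt(sympy.pi**2 - k**2)
u = sympy.sin(sympy.pi * x) * (sympy.cosh(kappa * y) + sympy.sinh(kappa * y))

# Residual of the Helmholtz operator applied to u: u_xx + u_yy + k^2 u.
residual = sympy.simplify(sympy.diff(u, x, 2) + sympy.diff(u, y, 2) + k**2 * u)
```

The residual simplifies to zero, since u_xx = −π²u and u_yy = (π² − k²)u.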

Example 5.1
In an initial test, we use Cauchy data f(x) and g(x) obtained by sampling the analytical solution, with k² = 20.5 and L = 0.5, on the grid. It has previously been shown that the Dirichlet-Neumann algorithm, i.e. the case μ_0 = μ_1 = 0, is divergent for this set of parameters [4]. To illustrate the properties of the Dirichlet-Robin algorithm, we pick the initial guess φ^(0)(x) = η^(0) = 0 and compute a sequence of approximations φ^(k)(x) of the exact data f(x), as illustrated in Fig. 2.
For this test, we used the same value for both Robin parameters, i.e. μ := μ_0 = μ_1. The results, displayed in Fig. 3, show that for small values of μ the Dirichlet-Robin algorithm is divergent, but for sufficiently large values of μ we obtain convergence. To see the convergence speed, we display the number of iterations needed for the initial error ‖φ^(0) − f‖_2 to be reduced, or increased, by a factor 10^3. We see that for small values of μ we have divergence, and the speed of the divergence becomes slower as μ is increased. At μ ≈ 2.6, we instead obtain slow convergence. As μ is increased further, the rate of convergence improves up to a point; for very large values of μ, the convergence becomes slower again. It is interesting to note that the transition from divergence to convergence is rather sharp. The optimal choice for μ is just above the minimum required to achieve convergence.

Example 5.2
For our second test, we use the same analytical solution, with k² = 20.5 and L = 0.5. We test the convergence of the Dirichlet-Robin algorithm for a range of values 0 ≤ μ_0, μ_1 ≤ 15. As previously, we find the number of iterations needed for the magnitude of the error to change by a factor 10^3. In Fig. 4 we display the results. We see that both μ_0 and μ_1 need to be positive for the iteration to be convergent. We also see that the effects of μ_0 and μ_1 are slightly different.

Fig. 3 We illustrate the error during the iterations for the case μ = 2.5 (right, red curve) and for μ = 2.7 (right, blue curve). We see that the rate of convergence is clearly linear. We also show the specific dependence on the parameter μ in the Robin conditions (left graph). Here we show the number of iterations needed for the magnitude of the error to change by a factor 10^3

Fig. 4 We illustrate the convergence speed for different values of μ_0 and μ_1 (left graph). The graphs represent level curves for the number of iterations needed to change the error by a factor 10^3. The cases where the iteration diverges are illustrated by negative numbers (blue curves) and the convergent cases correspond to positive values (black curves). We also show the convergence speed as a function of one Robin parameter with the other fixed: the case μ_0 = 5 is displayed (right, black curve) and the case μ_1 = 5 (right, blue curve). Here we see that the curves are similar in shape but not identical

Example 5.3

In the third test, we keep L = 0.5 but vary k² in the range 12.5 < k² < 45.
Recall that k 2 ≈ 12.5 is where the Dirichlet-Neumann algorithm stops working [4].
For this experiment, we use the same value for the parameters, μ := μ_0 = μ_1, in the Robin conditions. We are interested in finding the smallest value of μ needed to obtain convergence as a function of k². The results are shown in Fig. 5. We see that a larger value of k² also means that a larger value of μ is needed to obtain convergence. We also fix k² = 35 and display the number of iterations needed for the initial error ‖φ^(0) − f‖_2 to be reduced, or increased, by a factor 10^3. This illustrates the convergence speed of the iterations. In this case, μ ≈ 12.7 is needed for convergence. A comparison with the results of Example 5.1 shows that the shape of the graph is similar in both cases: we have very slow convergence, or divergence, only in a small region near μ ≈ 12.7.

Fig. 5 We display the minimum Robin parameter μ required for convergence as a function of k², for the case L = 0.5 and μ = μ_0 = μ_1 (left graph). We also show the number of iterations needed to change the initial error by a factor of 10^3 for the case k² = 35 (right graph). Here negative numbers mean divergence and positive numbers correspond to convergent cases

Conclusion
In this paper, we investigate the convergence of a Dirichlet-Robin alternating iterative algorithm for solving the Cauchy problem for general elliptic equations of second order. In the Dirichlet-Robin algorithm, two functions μ_0 and μ_1 are chosen to guarantee the positivity of a certain bilinear form associated with the two well-posed boundary value problems, (2.3) and (2.4), that are solved during the iterations. For the Helmholtz equation, we have shown that if we set μ := μ_0 = μ_1 to be a positive constant, then for small values of μ the Dirichlet-Robin algorithm is divergent, but for sufficiently large values of μ we obtain convergence; however, for very large values of μ the convergence is very slow. We also investigated in detail how μ_0 and μ_1 influence the convergence of the algorithm. The results show that both μ_0 and μ_1 need to be positive for the iteration to be convergent. Finally, we investigated the dependence on k² of the smallest value of μ needed to obtain convergence. The results show that a larger value of k² requires a larger value of μ to obtain convergence.
For future work, we will investigate how to improve the rate of convergence for very large values of μ 0 and μ 1 using methods such as the conjugate gradient method or the generalized minimal residual method. We will also investigate implementing Tikhonov regularization based on the Dirichlet-Robin alternating procedure, see [5]. Also a stopping rule for inexact data will be developed. It will also be interesting to study the convergence of the algorithm in the case of unbounded domains.
Funding Open access funding provided by Linköping University.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.