Inverse Scattering on the Half-Line for ZS-AKNS Systems with Integrable Potentials
R. O. Hryniv and S. S. Manko. Integral Equations and Operator Theory (IEOT), Vol. 84 (2016).

In this paper, we study the inverse scattering problem for ZS-AKNS systems on the half-line with general boundary conditions at the origin. For the class of potentials with certain integrability properties, we give a complete description of the corresponding scattering functions S, justify the algorithm reconstructing the potential and the boundary conditions from S, and prove that the scattering map is a homeomorphism.


Introduction
The main aim of the present paper is to develop the direct and inverse scattering theory for 2 × 2 ZS-AKNS systems (studied by Zakharov and Shabat [84,85] and by Ablowitz, Kaup, Newell, and Segur [1] in the 1970s) on the semi-axis under minimal integrability assumptions on the potentials and under general symmetric boundary conditions. Namely, the scattering problem of interest is given by the ZS-AKNS system

y' = Qy + ik σ_3 y (1.1)

and the boundary condition

e^{−iα} y_1(0, k) − e^{iα} y_2(0, k) = 0, α ∈ [0, π). (1.2)

Here k ∈ C is the spectral parameter,

σ_3 := \begin{pmatrix} 1 & 0 \\ 0 & −1 \end{pmatrix}, Q(x) := \begin{pmatrix} 0 & u(x) \\ \overline{u(x)} & 0 \end{pmatrix}, y(x, k) := \begin{pmatrix} y_1(x, k) \\ y_2(x, k) \end{pmatrix},

and the complex-valued potential u belongs to the Banach space L^1(R_+) ∩ L^p(R_+), with a fixed p ∈ [1, ∞). It is well known (see [57, Ch. XIII.7] and [13,53,56]) that the Dirac operator corresponding to Eq. (1.1) is self-adjoint in the Hilbert space L^2(R_+) × L^2(R_+) under the boundary conditions (1.2); moreover, (1.2) gives the general form of boundary conditions making the above Dirac operator self-adjoint. The Jost solution Ψ is the 2 × 2 matrix solution of (1.1) for real nonzero k obeying the asymptotics

Ψ(x, k) e^{−ikxσ_3} → I, x → +∞. (1.3)

The columns ψ_1 := (ψ_{11}, ψ_{21})^t and ψ_2 := (ψ_{12}, ψ_{22})^t of the Jost solution form a fundamental system of solutions to Eq. (1.1). Next we introduce the scattering function S by requiring that the solution ϕ(x, k) := ψ_1(x, k) + S(k)ψ_2(x, k) of (1.1) should satisfy the boundary condition (1.2); ϕ is called the scattering solution of (1.1). The scattering solution obeys the following asymptotic behavior at infinity:

ϕ(x, k) = \begin{pmatrix} e^{ikx} + o(1) \\ S(k) e^{−ikx} + o(1) \end{pmatrix}, x → +∞.

The direct scattering problem consists in constructing the scattering function S given u and α, while the inverse scattering problem consists in recovering the potential u and the constant α from the scattering function S. More generally, the task is to study properties of the direct and inverse scattering maps (u, α) ↦ S and S ↦ (u, α), respectively.
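For orientation, the direct scattering map (u, α) ↦ S can be evaluated numerically for decaying potentials: integrate the Jost solution of (1.1) inward from a point beyond which u is negligible, and then impose the boundary condition (1.2) on ϕ = ψ_1 + Sψ_2. The sketch below is illustrative only; the convention Q = (0, u; ū, 0), the cut-off L, and the grid size are assumptions made for this example.

```python
import cmath

def scattering_function(u, k, alpha, L=10.0, n=2000):
    """Compute S(k) for the half-line ZS-AKNS system y' = Qy + ik*sigma3*y
    with Q = [[0, u], [conj(u), 0]]: integrate the 2x2 Jost solution from
    x = L (where u is assumed negligible) down to x = 0 by classical RK4,
    then solve the boundary condition (1.2) for S."""
    def rhs(x, P):
        # P = (Psi11, Psi12, Psi21, Psi22); dP/dx = (ik*sigma3 + Q) P
        a, b, c, d = P
        q = u(x)
        A = (1j * k, q, q.conjugate(), -1j * k)
        return (A[0] * a + A[1] * c, A[0] * b + A[1] * d,
                A[2] * a + A[3] * c, A[2] * b + A[3] * d)
    # asymptotic condition (1.3): Psi(L, k) ~ exp(ik L sigma3)
    P = (cmath.exp(1j * k * L), 0.0, 0.0, cmath.exp(-1j * k * L))
    h = -L / n
    x = L
    for _ in range(n):
        k1 = rhs(x, P)
        k2 = rhs(x + h / 2, tuple(p + h / 2 * v for p, v in zip(P, k1)))
        k3 = rhs(x + h / 2, tuple(p + h / 2 * v for p, v in zip(P, k2)))
        k4 = rhs(x + h, tuple(p + h * v for p, v in zip(P, k3)))
        P = tuple(p + h / 6 * (v1 + 2 * v2 + 2 * v3 + v4)
                  for p, (v1, v2, v3, v4) in zip(P, zip(k1, k2, k3, k4)))
        x += h
    # phi = psi1 + S psi2 must satisfy e^{-ia}phi_1(0) = e^{ia}phi_2(0)
    ea, eb = cmath.exp(-1j * alpha), cmath.exp(1j * alpha)
    return -(ea * P[0] - eb * P[2]) / (ea * P[1] - eb * P[3])
```

For u ≡ 0 this returns S(k) = e^{−2iα}; for real k and decaying u the result is unimodular up to discretization error, in agreement with the unimodularity of S established in the paper.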
Dirac systems on the whole line of the form (1.1) appear, e.g., in the inverse scattering method for solving nonlinear Schrödinger equations iu t + u xx + κ|u| 2 u = 0 in dimensionless form, with the constant κ comprising physical features of the corresponding model. In the pioneering papers [84,85] Zakharov and Shabat studied in detail the reflectionless case and derived the corresponding soliton solutions. Basic notions and many references on the inverse scattering method and the theory of solitons can be found in [2,3,86]. Inverse scattering for Dirac systems (1.1) has also appeared in the literature in the context of optical couplers to model the coupling between two waveguide modes with different propagation constants [6,7,42,83]. In [6,7], the authors studied the inverse scattering problem for the Dirac system on the line in the case of a rational reflection coefficient.
Here p and q are real-valued functions and m is a nonnegative constant; they are related to the potential u of (1.1) via u(x) = −q(x) + i p(x) + m (note that m = 0 in the particular case of (1.1)). System (1.4) can be derived by separation of variables from the Dirac equation in R^3 with a spherically symmetric potential V. Here ψ is the four-component wavefunction of a relativistic electron in an electrostatic field V, the α_n are the Pauli matrices, m and E are respectively the particle mass and energy, and c is the speed of light (for more details on the Dirac operator see [44,58,77,80]). Levitan and Gasymov [25–27] were seemingly the first to develop the inverse scattering theory for Dirac systems of the form (1.4). These authors were concerned with Dirac systems on the half-line and on the whole line; in particular, in [25], Gasymov considered the system (of order 2n) of the form (1.4) on the half-line. For potentials p and q with a proper behavior at infinity he gave necessary and sufficient conditions on a function S to be the scattering function for a unique pair of potentials in the corresponding class. Frolov [24] considered this problem for the case n = 1 on the full line. All these investigations are based on the classical inverse theory developed in the works of Gelfand and Levitan [28], Krein [48,49], and Marchenko [63,64]; see also the reviews [16,20] and the books [12,55,65,69]. Reconstruction from the Weyl–Titchmarsh functions for Dirac-type and more general first-order systems was discussed in [13,61,73].
Adapting an algorithm suggested originally by Kay and Moses [12,45], Hinton et al. [34] generalized the results of Frolov to a wider class of potentials. Given a reflection coefficient that is a rational function with n poles in the complex plane, the authors gave a reconstruction algorithm for the potentials of the system for n = 1 and n = 2. The case of an arbitrary n was treated in [46], where the residue theorem with an appropriate contour integration was used to reduce the problem to a linear system of algebraic equations, and several explicit examples were discussed.
In [30,31], the scattering problem for the Dirac operator (1.4) on the line, where p and q are real-valued functions of the Faddeev–Marchenko class and m is positive, was treated by means of the trace formula, in the manner of Deift and Trubowitz [16]. Inverse scattering problems for Dirac systems of the form (1.1) or (1.4) were also discussed in [15,22,47,70,75,76,81]. In particular, Dirac systems with discontinuous coefficients and with the spectral parameter in the boundary conditions were studied in [14,32,62], and systems for non-self-adjoint Dirac operators were discussed in [5,60]. Recently, Demontis and van der Mee in [17–19] studied the problem of reconstructing non-self-adjoint matrix Zakharov–Shabat and AKNS systems. An interesting approach to inverse scattering analysis of more general systems was suggested by Beals and Coifman [8–10] and Beals, Deift, and Tomei [11].

The motivation for the present work was two-fold. Firstly, we intended to solve completely the direct and inverse scattering problems for ZS-AKNS systems on the half-line with potentials of rather little regularity and under general boundary conditions. Namely, the potential function u is only assumed to belong to L^1(R_+) ∩ L^p(R_+) with some p ≥ 1, while the boundary conditions depend on an extra parameter α, and both u and α must be reconstructed from the scattering data. The main results of the paper (Theorems 3.1 and 3.2) completely characterise the range of the scattering map and prove that it is a homeomorphism. We mainly follow the approach of the paper [23], where a similar scattering problem on the whole line was discussed. The most essential component of the inverse scattering map requires solving the corresponding Marchenko equation; this is done by a convergent successive approximation method and thus provides the basis for a reconstruction algorithm.
We also note that the case p = 1 for matrix potentials of arbitrary size was discussed in [17]; however, the analysis of the inverse scattering transform in that paper contains some inaccuracies which we had to fix.
The second impetus for the work stems from the scattering problem for energy-dependent Schrödinger equations of the form

−y'' + q(x)y + 2kp(x)y = k^2 y. (1.5)

Such equations arise in various models of quantum and classical mechanics; for instance, the Klein–Gordon equation [41,51,52,68] modelling interactions between colliding relativistic spinless particles can be reduced to this form. Energy-dependent Schrödinger equations are also used to describe vibrations of mechanical systems in viscous media [82]. In [37–40], the authors studied the inverse scattering problems for energy-dependent Schrödinger equations with regular potentials; see also the papers [4,43,59,66,67,74,78,79]. It turns out that the scattering problem for (1.5) can be transformed to the one for the ZS-AKNS system of the form (1.1)–(1.2). In [35] we use this relation to study the scattering problem for (1.5) with a distributional Miura potential q and a regular potential p under the weakest possible integrability conditions, which explains the choice of the class of potentials in the present work.

The paper is organized as follows. In the next section, we derive the integral representation of the Jost solution, describe properties of the scattering function S, and derive the corresponding Marchenko equation. In Sect. 3 we first discuss properties of some related integral operators and prove solubility of the Marchenko equation. We then justify the algorithm reconstructing the potential function u and the constant α from S and prove the main results of the paper, Theorems 3.1 and 3.2, characterising the range and continuity of the scattering map. The appendix contains the proof of Lemma 3.5 and a discussion of the related Hankel and Toeplitz operators.

Notations
In what follows, L^p(R) and L^p(R_+) will stand for the standard Lebesgue function spaces on the real line R and on the positive half-line R_+, respectively. We shall use the Banach space X_p := L^1(R) ∩ L^p(R) with the norm

‖f‖_{X_p} := ‖f‖_{L^1} + (1 − δ_{1p}) ‖f‖_{L^p},

where δ_{1p} is the Kronecker delta; the space X⁺_p := L^1(R_+) ∩ L^p(R_+) is defined similarly. Next, we introduce the Banach spaces of vector-valued functions on R_+, with the norms defined in the natural way for p < ∞ and for p = ∞. As usual, C^∞_0 denotes the linear space of all infinitely differentiable functions of compact support. By M_2(C) we denote the space of 2 × 2 matrices with complex entries, and |A| for such a matrix A stands for its operator norm in C^2. If Y is any of the above-mentioned function spaces, then Y ⊗ M_2(C) is identified with the space of matrix-valued functions with entries belonging to Y. Finally, it will be convenient to use a non-standard normalization of the Fourier transform f̂ = ℱf of a function f ∈ L^1(R).

The Scattering Solutions
Recall that the Jost solution Ψ of (1.1) is the 2 × 2 matrix solution satisfying Eq. (1.1) and the asymptotic condition (1.3) at infinity. It is useful to factor out the leading behavior of the Jost solution Ψ of system (1.1) by setting

M(x, k) := Ψ(x, k) e^{−ikxσ_3};

then M obeys the equation

M' = QM + ik(σ_3 M − M σ_3) (2.3)

and the asymptotics

M(x, k) → I, x → +∞. (2.4)

The solution of (2.3) possesses a very useful integral representation that (apart from the simple change of variables and integration range, see below) produces the standard triangular representation of Jost solutions; the latter in the context of Schrödinger operators was first suggested by Levin [54] and Marchenko [64] and for ZS-AKNS systems appeared in the pioneering work of Zakharov and Shabat [84]; see also [2,21]. Some useful properties of the corresponding integral kernels (in particular, their continuous dependence on the arguments and on the potential function u in various spaces) were established in [23]; we summarize them in the following proposition.

Proposition 2.1. Suppose that u ∈ X⁺_p; then:
(i) the problem (2.3), (2.4) possesses a unique solution;
(ii) this solution M(·, k) has an integral representation with some matrix-valued kernel Γ; moreover, the mapping u ↦ Γ is continuous in the corresponding topologies.

The above proposition leads in a straightforward manner to the following representation of the Jost solution:
We note that, apart from the straightforward change of variables and integration limits, (2.5) coincides with the standard triangular representation (2.6) of the Jost solutions (cf. [2,21,84]). Next, the matrix-valued functions Ψ and Γ enjoy the following symmetry relations.
Lemma 2.3. Denote by ψ_{jk} and Γ_{jk}, j, k = 1, 2, the entries of Ψ and Γ; then for all real k

ψ_{22} = \overline{ψ_{11}}, ψ_{21} = \overline{ψ_{12}}, Γ_{22} = \overline{Γ_{11}}, Γ_{21} = \overline{Γ_{12}}. (2.7)

Proof. Recalling the definition of the standard Pauli matrix σ_1 and conjugating the system (1.1) with σ_1 shows that Ψ(·, k) = σ_1 \overline{Ψ(·, k)} σ_1 for all real k. Written entry-wise, this equality yields the symmetry relations (2.7) for the entries of the Jost solution Ψ and of the kernel Γ in its integral representation (2.5).
It turns out that the kernel Γ is related to the potential u in a very simple way; namely, as stated in Proposition 2.4 below,

u(x) = −Γ_{12}(x, 0) (2.9)

for almost every x ∈ R_+. This formula will play a crucial role in the inverse theory of Sect. 3. Next, we denote by N the matrix solution to (2.3) that obeys the initial condition (2.10).

Proposition 2.5. Assume that u ∈ X⁺_p; then the problem (2.3), (2.10) has a unique solution N, and this solution admits an integral representation of the corresponding form.

This statement can easily be proved by reformulating the initial value problem for N as a Volterra integral equation and then using the successive approximation method (see Lemma 3.1.4 in [65] for details in the similar case of a Schrödinger equation).
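The successive approximation (Picard) scheme invoked in this proof sketch can be illustrated on a generic scalar Volterra equation f(x) = g(x) + ∫_0^x K(x, t) f(t) dt; this is a model example, not the paper's specific equation for N. The Volterra structure forces convergence, since the m-th correction is bounded by C^m X^m / m!.

```python
def volterra_picard(g, K, X=1.0, n=200, iters=30):
    """Solve f(x) = g(x) + int_0^x K(x,t) f(t) dt on [0, X] by successive
    approximations f_{m+1} = g + V f_m starting from f_0 = g, with the
    integral discretized by the trapezoid rule on a uniform grid."""
    h = X / n
    xs = [i * h for i in range(n + 1)]
    f = [g(x) for x in xs]
    for _ in range(iters):
        new = []
        for i, x in enumerate(xs):
            s = 0.0
            # trapezoid rule for int_0^x K(x,t) f(t) dt
            for j in range(i):
                s += 0.5 * h * (K(x, xs[j]) * f[j] + K(x, xs[j + 1]) * f[j + 1])
            new.append(g(x) + s)
        f = new
    return xs, f
```

With g ≡ 1 and K ≡ 1 the exact solution is f(x) = e^x, which the discretization reproduces to O(h^2).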
We denote by Φ the matrix solution of system (1.1) that obeys the initial condition (2.11). By virtue of (2.2) and Proposition 2.5, Φ can be written as follows.
Corollary 2.6. If u ∈ X⁺_p, then the matrix solution Φ admits an integral representation analogous to that of N.

Let ϕ_1 be the first column of the solution matrix Φ. It satisfies the initial condition (2.11) and thus the boundary condition (1.2). For every fixed k ∈ R, ϕ_1 is a linear combination of the columns ψ_1 and ψ_2 of the Jost solution. Since the coefficient matrix of (1.1) has zero trace, the Liouville theorem shows that det Ψ(x, k) is independent of x, and we find that det Ψ(0, k) = lim_{x→∞} det Ψ(x, k) = 1; calculating now the inverse of Ψ(0, k) and recalling (2.11), we get the explicit relation (2.12) between the vector solutions ϕ_1, ψ_1, and ψ_2 of (1.1).
We introduce the Jost function s via

s(k) := e^{−iα} ψ_{11}(0, k) − e^{iα} ψ_{21}(0, k); (2.13)

then, on account of Lemma 2.3, equality (2.12) takes the form

ϕ_1(·, k) = \overline{s(k)} ψ_1(·, k) + s(k) ψ_2(·, k). (2.14)

By virtue of Corollary 2.2, the Jost function s can be written in terms of the kernel Γ; using now the properties of the kernel Γ of Proposition 2.1, we get the following representation formula.

Corollary 2.7.
For every u ∈ X⁺_p there is an f ∈ X⁺_p such that the corresponding Jost function s can be written as

s(k) = e^{−iα} + f̂(k), k ∈ R. (2.15)

The integral representation (2.15) shows that the function s is the boundary value of the function of the complex variable z defined in the closed upper half-plane \overline{C_+} by the formula (2.16) obtained from (2.15) upon replacing k with z; clearly, this function is continuous and bounded in \overline{C_+} and analytic in C_+. In particular, the function s of (2.16) belongs to the Hardy space H^∞(C_+) of functions that are analytic and bounded in the upper half-plane C_+. In view of (2.5), the components ψ_{11}(0, ·) and ψ_{21}(0, ·) of the Jost solution also admit analytic continuations belonging to the Hardy space H^∞(C_+); moreover, formula (2.13) continues to hold with these analytic continuations for all k in \overline{C_+}.
The contradiction derived proves that s does not vanish on C + .
Next we denote by X̂_p the Banach algebra of Fourier transforms of functions in X_p under pointwise multiplication, and by 1 ∔ X̂_p the unital extension of X̂_p obtained by adjoining the constant functions. In a similar manner we introduce the Banach algebra X̂⁺_p of Fourier transforms of functions in X⁺_p and its unital extension 1 ∔ X̂⁺_p. We observe that every element of 1 ∔ X̂⁺_p admits an extension to C_+ as a bounded analytic function that is continuous up to the boundary; thus 1 ∔ X̂⁺_p is a subset of the Hardy space H^∞(C_+).

We recall that an element f of a commutative Banach algebra A with unity e is said to be invertible if there exists g ∈ A such that fg = e; g is then the inverse of f and is denoted f^{−1}. Next we prove that the Wiener theorem giving the invertibility criteria in the Banach algebras 1 ∔ X̂⁺_p and 1 ∔ X̂_p in the case p = 1 (see [29, Ch. 17]) continues to hold for an arbitrary p > 1.

Lemma 2.9. Suppose that f = β + ĝ, with β ∈ C and g ∈ X_p (resp., g ∈ X⁺_p), is an element of the Banach algebra 1 ∔ X̂_p (resp., of the Banach algebra 1 ∔ X̂⁺_p). Then f is invertible in 1 ∔ X̂_p (resp., in 1 ∔ X̂⁺_p) if and only if β ≠ 0 and f does not vanish on R (resp., on \overline{C_+}).
Corollary 2.10. The Jost function s is an invertible element of the Banach algebra 1 ∔ X̂⁺_p. Therefore there exists g ∈ X⁺_p such that

s(k)^{−1} = e^{iα} + ĝ(k), k ∈ R. (2.17)

Comparing (2.14) and the definition of the scattering solution ϕ, we conclude that ϕ(·, k) = ϕ_1(·, k)/\overline{s(k)} and thus that

S(k) = s(k)/\overline{s(k)}, k ∈ R. (2.18)

In particular, S is unimodular on the real line, so that (S(k))^{−1} = \overline{S(k)} for k ∈ R. Recalling formulae (2.15) and (2.17), we derive the following representation for the scattering function S.

Proposition 2.11. Assume that u ∈ X⁺_p; then there exists F ∈ X_p such that the scattering function S takes the form

S(k) = e^{−2iα} + F̂(k), k ∈ R. (2.19)

We say a function f is continuous on the extended real line R̄ if f is continuous on R and possesses finite and equal limits lim_{k→±∞} f(k). If a function f is continuous on R̄ and does not vanish there, then its winding number W_R(f) along R is the integer

W_R(f) := (1/2πi) [log f(k)]_{k=−∞}^{k=+∞},

where any continuous branch of log f along R is chosen.
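Numerically, the winding number can be evaluated by accumulating the increments of a continuous branch of arg f over a fine grid; the truncation radius and grid size below are illustrative assumptions, suitable for functions that settle to equal limits at ±∞.

```python
import cmath

def winding_number(f, R=300.0, n=100000):
    """Winding number along R of a function continuous and nonvanishing on
    the extended real line: sum the small phase increments of f over a fine
    grid on [-R, R]; each increment is the principal argument of the ratio
    of consecutive values, which is valid because the steps are small."""
    h = 2 * R / n
    total = 0.0
    prev = f(-R)
    for i in range(1, n + 1):
        cur = f(-R + i * h)
        total += cmath.phase(cur / prev)
        prev = cur
    return round(total / (2 * cmath.pi))
```

For the Blaschke-type factor f(k) = (k − i)/(k + i), which has one zero and no pole in C_+, the routine returns 1, in accordance with the argument principle; for scattering functions of the class described here the result should be 0.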
Lemma 2.12. The winding number along R of the scattering function S is equal to zero.

Proof. Since W_R(fg) = W_R(f) + W_R(g) and W_R(\overline{f}) = −W_R(f) whenever all the terms are well defined, in view of (2.18) we get W_R(S) = 2W_R(s), and thus it suffices to prove that W_R(s) = 0.
To this end we recall that the function s is analytic in C_+, continuous up to the boundary R, and does not vanish in \overline{C_+}. Moreover, s(z) → e^{−iα} as z tends to infinity within \overline{C_+}. Indeed, given any ε > 0, we first find a decomposition f = f_0 + f_1 of the function f in (2.16) with f_0 smooth and of compact support and ‖f_1‖_{L^1} < ε.

Now we integrate by parts in the last integral to get a bound of order O(1/|z|) for all nonzero z in \overline{C_+}. Therefore there exists an R > 0 such that |s(z) − e^{−iα}| < 2ε for every z ∈ \overline{C_+} with |z| > R; as ε > 0 was arbitrary, the claim follows. Now take a Möbius transformation z ↦ w mapping C_+ onto the unit disc and denote by g the function s transplanted to the disc. Letting r → 1, we find that the argument increment of g(w) along the circle |w| = 1 is zero, which amounts to saying that the argument increment of the Jost function s along R is zero, i.e., that W_R(s) = 0. The proof is complete.

The Marchenko Equation
Proposition 2.13 states that certain vector-valued functions built from the scattering relation, with n_1 denoting the first column of the matrix solution N of (2.3), (2.10), belong to H^∞_0(C_−; C^2). These Hardy space properties are the starting point for the derivation of the Marchenko equation, which is an integral equation for the kernel Γ of Proposition 2.1 in terms of the function F of (2.19). In the next section, we will solve this equation and prove consistency of the reconstruction algorithm.
Given the function F of (2.19), we set

Ω(ζ) := \begin{pmatrix} 0 & F(ζ) \\ \overline{F(ζ)} & 0 \end{pmatrix}.

Lemma 2.14. Suppose that u ∈ X⁺_p; then the kernel Γ and the matrix Ω satisfy the Marchenko equation

Γ(x, ζ) + Ω(x + ζ) + ∫_0^∞ Γ(x, t) Ω(x + ζ + t) dt = 0 (2.21)

for almost every ζ > 0.

Proof. We begin with identities which, in view of (2.14), relate the scattering solution to the Jost solutions. It follows from Proposition 2.13 and Corollary 2.10 that, for each x ≥ 0, the relevant inclusion into H^∞_0(C_−; C^2) holds. In view of (2.22), the function in question can also be written in the form (2.23). Since (2.23) should give a function in the Hardy space H^∞_0(C_−; C^2), we conclude that (2.24) holds for almost every ζ > 0. Using (2.7), we get from (2.24) the equality (2.21).

Main Results
We say that a function S : R → C belongs to the class S_p if and only if
(1) there are F ∈ X_p and β ∈ [0, π) such that for all k ∈ R it holds

S(k) = e^{−2iβ} + F̂(k); (3.1)

(2) the function S is unimodular on R, i.e. (S(k))^{−1} = \overline{S(k)} for all real k;
(3) the winding number of S is zero.
In a natural manner the class S_p is identified with a subset of the metric space X_p × [0, π) and inherits the topology of the latter. Namely, if F_j ∈ X_p and β_j ∈ [0, π) correspond to S_j ∈ S_p for j = 1, 2, then the distance d_s between S_1 and S_2 equals

d_s(S_1, S_2) := ‖F_1 − F_2‖_{X_p} + |β_1 − β_2|,

and the topology on S_p is generated by this distance.
The set of problems (1.1)–(1.2) is naturally parametrised by the pairs (u, α) ∈ X⁺_p × [0, π) of potentials u and constants α in the boundary conditions and thus becomes a complete metric space under the distance ‖u_1 − u_2‖_{X⁺_p} + |α_1 − α_2|. For brevity, we denote this topological space by U_p.
The results of the previous section show that there is a well-defined mapping S_p : U_p → S_p that assigns to every ZS-AKNS system (1.1)–(1.2) determined by (u, α) ∈ U_p its scattering function S ∈ S_p. The main results of the present paper are formulated in the following two theorems, which characterise the range of the mapping S_p and establish its continuity. Before proving these theorems, we briefly outline our approach. Take an arbitrary S ∈ S_p and denote by F ∈ X_p and β ∈ [0, π) the corresponding function and number in the integral representation (3.1). In Sect. 3.2 we discuss some properties of the related integral operators H_Ω(x) and H⁻_Ω, which are exploited in Sect. 3.3 to prove that the Marchenko equation (2.21) with F associated to S via (3.1) is uniquely soluble for a kernel Γ. This kernel determines a function u via (2.9), and we show that, firstly, the corresponding potential function u belongs to X⁺_p and, secondly, the function (2.5) with this Γ is the Jost solution of the half-line ZS-AKNS system (1.1) with the potential u so constructed and with α equal to the β in the integral representation (3.1) of S. Finally, we justify this reconstruction algorithm by showing that the scattering function of the ZS-AKNS system corresponding to the pair (u, α) coincides with the S ∈ S_p we started with.

Integral Operators
Assume that S is a fixed function in S_p; by definition, S admits a representation (3.1) with F ∈ X_p and β ∈ [0, π). In other words, F is given by the equality

F = ℱ^{−1}(S − e^{−2iβ}). (3.2)

We then form the matrix kernel

Ω(ζ) := \begin{pmatrix} 0 & F(ζ) \\ \overline{F(ζ)} & 0 \end{pmatrix} (3.3)

and, for every x ≥ 0, denote by H_Ω(x) the integral operator

(H_Ω(x)f)(ζ) := ∫_0^∞ f(t) Ω(x + ζ + t) dt, ζ > 0; (3.4)

let also H⁻_Ω stand for the operator on L^1(R_−) defined by the formula (3.5). Observe that H_Ω(x) and H⁻_Ω are Hankel operators on the positive and negative half-axes, respectively, cf. [71]. The lemmas below are partly direct generalizations of Lemmas 3.3.1, 3.3.2, and 3.3.3 in [65] to the case of spaces of vector-valued functions; we therefore only sketch the proofs whenever the details can easily be recovered from [65].

Lemma 3.3.
For every x ≥ 0 and r ≥ 1, H_Ω(x) is a compact operator in L^r(R_+). Likewise, the operator H⁻_Ω is compact in L^r(R_−) for every r ≥ 1.

Proof. We shall prove that H_Ω(x) is bounded in L^∞(R_+) and compact in L^1(R_+), and then use the Riesz–Thorin and Krasnoselskii interpolation theorems [50] to get boundedness and compactness of H_Ω(x) in all intermediate spaces L^r(R_+). The estimate

|(H_Ω(x)f)(ζ)| ≤ ‖f‖_{L^∞} ∫_0^∞ |Ω(s)| ds

yields boundedness in L^∞(R_+). To prove compactness in L^1(R_+), one needs to show that the set K := {H_Ω(x)f : ‖f‖_{L^1} ≤ 1} is precompact in L^1(R_+). Take an arbitrary f ∈ L^1(R_+) of norm at most 1 and set g := H_Ω(x)f. We find that ‖g‖_{L^1} ≤ ‖Ω‖_{L^1} ‖f‖_{L^1}, and thus the set K is bounded in L^1(R_+). In the same manner we find that the shifts g(· + s) converge to g in L^1(R_+) as s → 0 uniformly in f in the unit ball of L^1(R_+); thus the set K is equicontinuous. The inequality

∫_N^∞ |g(ζ)| dζ ≤ ‖f‖_{L^1} ∫_N^∞ |Ω(s)| ds

holds uniformly on K, and thus the latter is tight. Therefore the set K is compact in L^1(R_+) by a vector analogue of the Fréchet–Kolmogorov–Riesz compactness criterion [33]. This completes the proof of the statement on H_Ω(x). The second assertion of the lemma can be established in a similar way.
Remark 3.4. The Riesz–Thorin interpolation theorem and the estimates for the norm of the operator H_Ω(x) in the spaces L^1(R_+) and L^∞(R_+) give the bound

‖H_Ω(x)‖_{L^p(R_+)} ≤ ∫_x^∞ |Ω(s)| ds

for all p ∈ [1, ∞]. Also, boundedness and compactness of H_Ω(x) in L^r(R_+) and of H⁻_Ω in L^r(R_−) can be derived from the general theory of Hankel operators [72].
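The L^1-bound of Remark 3.4 and the decay of H_Ω(x) in x can be checked on a discretization; the sketch below (a scalar kernel and a midpoint grid, both illustrative assumptions) builds the matrix of the Hankel operator of (3.4) and computes its induced L^1 norm, which for this discretization is just the maximal absolute column sum.

```python
import math

def hankel_matrix(Omega, x, T=10.0, n=400):
    """Midpoint-rule discretization of the Hankel operator
    (H f)(z) = int_0^oo f(t) Omega(x+z+t) dt, truncated to [0, T]."""
    h = T / n
    pts = [(i + 0.5) * h for i in range(n)]
    H = [[h * Omega(x + z + t) for t in pts] for z in pts]
    return H, pts

def induced_l1_norm(H):
    """Norm of the discretized operator on the grid L^1 space: the grid
    weights cancel, leaving the maximal absolute column sum."""
    return max(sum(abs(row[j]) for row in H) for j in range(len(H[0])))
```

For Ω(s) = e^{−s} one finds a norm just below 1 at x = 0 and roughly e^{−3} at x = 3, illustrating both the bound ‖H_Ω(x)‖ ≤ ∫_x^∞ |Ω| and the decay of H_Ω(x) as x → ∞ used later for invertibility of I + H_Ω(x).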
The proof of the next lemma makes essential use of the theory of Hankel and Toeplitz operators and is therefore given in Appendix A, along with the necessary definitions and explanations.
Corollary 3.6. For every x ≥ 0 the operator I + H_Ω(x) is boundedly invertible in X⁺_p, the mapping x ↦ (I + H_Ω(x))^{−1} is continuous, and, in particular,

sup_{x≥0} ‖(I + H_Ω(x))^{−1}‖ < ∞. (3.7)

Proof. The first claim follows from the fact that the operator H_Ω(x) is compact in X⁺_p by virtue of Lemma 3.3 and that the equation f + H_Ω(x)f = 0 has no nontrivial solutions in X⁺_p by virtue of Lemma 3.5. The arguments used in the proof of Lemma 3.3 show that H_Ω(x) is a continuous operator-valued function of x tending to zero as x tends to +∞. Since taking the inverse is a continuous operation on the group of bounded invertible operators, we conclude that the mapping x ↦ (I + H_Ω(x))^{−1} is continuous for x ≥ 0 and tends to I as x → ∞, thus yielding (3.7).

Reconstruction of the Potential
Given S ∈ S_p corresponding to an F ∈ X_p and a β ∈ [0, π) in (3.1), we construct the matrix-valued kernel Ω of (3.3) and then consider the integral equation (cf. Eq. (2.21))

Γ(x, ζ) + Ω(x + ζ) + ∫_0^∞ Γ(x, t) Ω(x + ζ + t) dt = 0. (3.8)

Denoting by g := (Γ_{11}, Γ_{12}) and f := (0, F) the first rows of the matrices Γ and Ω respectively, we get the equation

g(x, ·) + f(x + ·) + H_Ω(x) g(x, ·) = 0. (3.9)

We now show that, for each x ≥ 0, Eq. (3.9) has a unique solution g(x, ·) belonging to X⁺_p; then, motivated by Proposition 2.4, we shall take

u(x) := −Γ_{12}(x, 0) (3.10)

as a putative potential of the ZS-AKNS system sought.
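A scalar model of the Marchenko equation can be solved numerically by the successive approximations underlying the reconstruction; the sketch below uses an illustrative exponential kernel F (an assumption made for this example, not the paper's data), for which the exact solution is known in closed form.

```python
import math

def solve_marchenko(F, x, T=10.0, n=200, iters=60):
    """Sketch: solve the scalar model equation
        g(x, z) + F(x + z) + int_0^oo g(x, t) F(x + z + t) dt = 0
    by the successive approximations g_{m+1} = -F(x + .) - H(x) g_m,
    which converge whenever the discretized Hankel operator has norm < 1;
    this holds here since F decays exponentially."""
    h = T / n
    pts = [(i + 0.5) * h for i in range(n)]
    K = [[h * F(x + z + t) for t in pts] for z in pts]   # discretized H(x)
    rhs = [-F(x + z) for z in pts]
    g = [0.0] * n
    for _ in range(iters):
        g = [rhs[i] - sum(Kij * gj for Kij, gj in zip(K[i], g))
             for i in range(n)]
    return pts, g
```

For F(s) = c e^{−s} the exact solution is g(z) = −c e^{−z}/(1 + c/2), which the iteration reproduces to quadrature accuracy; in the actual reconstruction the potential is then read off from boundary values of the kernel, as in (3.10).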

Theorem 3.7.
Under the assumptions of Theorem 3.1, equation (3.9) for each x ∈ R_+ has a unique solution g(x, ·) in X⁺_p; this solution depends therein continuously on x ∈ R_+, and sup_{x≥0} ‖g(x, ·)‖_{X⁺_p} < ∞. Moreover, the following holds:
1. the mapping R_+ ∋ ζ ↦ Γ_{12}(·, ζ) ∈ X⁺_p is continuous, and thus the function u of (3.10) is well defined and belongs to X⁺_p;
2. the mapping S_p ∋ S ↦ u ∈ X⁺_p induced by (3.2), (3.9), and (3.10) is continuous.
Proof. By Corollary 3.6, the operator I + H_Ω(x) is boundedly invertible in X⁺_p, whence the solution of (3.9) is given by

g(x, ·) = −(I + H_Ω(x))^{−1} f(x + ·).

Using the continuity of the shift x ↦ f(x + ·) as a mapping from R_+ to X⁺_p and the continuity and uniform boundedness in x ≥ 0 of the operator (I + H_Ω(x))^{−1} as a mapping in X⁺_p, we find that x ↦ g(x, ·) is a continuous and bounded mapping from R_+ to X⁺_p. This proves the first part of the theorem.
Next we prove assertion (1). Since the set of continuous functions of compact support is dense in every space L^r(R), r ≥ 1, the function F admits the representation F = F_0 + F_1, with F_0 continuous and of compact support and F_1 satisfying ‖F_1‖_{X_p} < 1. Denote by Ω_k, k = 0, 1, the 2 × 2 matrix defined as Ω but with the function F_k in place of F; the integral operator as in (3.4) but with Ω replaced by Ω_k will be denoted by H_{Ω_k}(x). Then ‖H_{Ω_1}(x)‖_{X⁺_p} ≤ ‖F_1‖_{X_p} < 1 by Remark 3.4, so that I + H_{Ω_1}(x) is boundedly invertible in X⁺_p, and with h(x, ·) := H_{Ω_0}(x) g(x, ·) the above solution g(x, ·) of equation (3.9) can be written as

g(x, ·) = −(I + H_{Ω_1}(x))^{−1} (f(x + ·) + h(x, ·)).

We discuss first some properties of the function h = (h_1, h_2). Reasoning similar to that used to prove continuity of g(x, ·) shows that the mapping x ↦ h(x, ·) is continuous as a mapping from R_+ into X⁺_p. Also, the function h is bounded uniformly in x ≥ 0 and ζ ≥ 0; moreover, with b as in (3.12), we have h(x, ζ) = 0 for all x > b and ζ ≥ 0 (we assume that b > 0, as otherwise h ≡ 0 and the proof simplifies). As a result, the function h(·, ζ) belongs to X⁺_p and its norm therein is bounded uniformly in ζ ∈ R_+.

Now we are in a position to prove that the second component Γ_{12}(·, ζ) of g(·, ζ) is, for every ζ ∈ R_+, an element of X⁺_p and that it depends therein continuously on ζ. Writing (I + H_{Ω_1}(x))^{−1} via its Neumann series results in the equality (3.13). We have explicit multilinear formulae for the summands, with dt := dt_1 ⋯ dt_{2n}; the Hölder inequality then yields the bound (3.14). Next we find that, in view of (3.11), the summands involving h obey analogous bounds. Observe also that h_n(x, ζ) = 0 if x > b, with b of (3.12), and ζ ≥ 0. Hence the Neumann series (3.13) converges in X⁺_p absolutely and uniformly in ζ. The above analysis of the summands of this series reveals their continuity as functions of ζ with values in X⁺_p, and thus the sum depends in X⁺_p continuously on ζ ≥ 0.
This completes the proof of assertion (1).
To prove (2) it is sufficient to show that the map G : F ↦ u induced by (3.9) and (3.10) is continuous on the set of F associated with S ∈ S_p. We fix an arbitrary such F ∈ X_p and prove that G is continuous at F. In view of Corollary 3.6, the number

γ := sup_{x≥0} ‖(I + H_Ω(x))^{−1}‖ (3.15)

is finite; we take δ such that γδ ≤ 1/6 and assume that F̃ is associated with an S̃ ∈ S_p and that ‖F − F̃‖_{X_p} < δ. Clearly, the operators I + H_{Ω̃}(x) are then all invertible; moreover, we have the uniform bounds

sup_{x≥0} ‖(I + H_{Ω̃}(x))^{−1}‖ ≤ 2γ.

To avoid unnecessary duplication, we agree to denote with a tilde the objects constructed as above but with F replaced by F̃; in particular, f̃ := (0, F̃), and g̃ is the solution of the corresponding equation

g̃(x, ·) + f̃(x + ·) + H_{Ω̃}(x) g̃(x, ·) = 0.

Then we find that g̃(x, ·) − g(x, ·) is small uniformly in x ≥ 0 and in F̃ in the δ-neighbourhood of F. To estimate the norm of the difference g̃(x, ζ) − g(x, ζ) as a function of x for a fixed ζ, we write it as a Neumann series analogous to (3.13). Recall that the function F is written as F = F_0 + F_1, with F_0 continuous and of compact support and F_1 satisfying ‖F_1‖_{X_p} < ρ for some ρ < 1, and observe that, for δ small enough, every F̃ in the δ-neighbourhood of F may be expressed as F̃ = F_0 + F̃_1, where F̃_1 is such that ‖F̃_1‖_{X_p} < ρ. Now the Neumann series for g̃ − g takes the corresponding form. Using the equality F̃_1 − F_1 = F̃ − F in the multilinear formulae for f_n and f̃_n and estimating each summand as in (3.14), we obtain the corresponding bounds; similar reasoning for h_n results in the relation (3.16). Using (3.16) in estimates similar to those of (3.11), we conclude that the summands of the series for g̃ − g are controlled by ‖F̃ − F‖_{X_p}. Combining the above bounds in the Neumann series for g̃ − g, we conclude that

‖g̃ − g‖ ≤ C ‖F̃ − F‖_{X_p},

with the constant C depending only on F. This yields the continuity of the mapping G and completes the proof of the theorem.
It is important to observe that the matrix solution Γ of (3.8) enjoys the symmetry relations of Lemma 2.3. Namely, the following holds:

Corollary 3.8. Assume that S ∈ S_p. Then the corresponding Marchenko equation (3.8) for each x ≥ 0 possesses a unique solution Γ(x, ·) ∈ X_p ⊗ M_2(C); moreover, the entries Γ_{jk} of Γ satisfy the relations Γ_{22} = \overline{Γ_{11}} and Γ_{21} = \overline{Γ_{12}}.
Proof. Existence and uniqueness of the matrix solution Γ of the Marchenko equation (3.8) follow from the invertibility of the operators I + H_Ω(x), as in the proof of the above theorem. Take σ_1 as in (2.8); then a direct computation shows that σ_1 \overline{Ω} σ_1 = Ω. As a result, Γ satisfies the equality σ_1 \overline{Γ} σ_1 = Γ, which amounts to the required relations for the entries Γ_{jk}.

Remark 3.9. Although Theorem 3.7 discusses solutions of the Marchenko equation (3.8) in which the function F in the matrix Ω arises from a function S ∈ S_p via (3.2), the proof only uses the invertibility of the operators I + H_Ω(x). Denote therefore by F the set of all F ∈ X_p such that the corresponding operators I + H_Ω(x) of (3.4), with Ω of (3.3), are all invertible in X⁺_p. Continuous dependence of H_Ω(x) on x shows that, for every F ∈ F, the number γ of (3.15) is finite. Clearly, every F derived from some S ∈ S_p via (3.2) belongs to F; however, F is much larger. Namely, if F ∈ F and γ is defined via (3.15), then, as follows from the proof of part (2) of the above theorem, F also contains a δ-neighbourhood of F in X_p for some positive δ, so that F is open in X_p.
Therefore the conclusions of Theorem 3.7 remain true for F ∈ F, with (2) replaced by
(2′) the mapping F ∋ F ↦ u ∈ X⁺_p induced by (3.3), (3.9), and (3.10) is locally continuous.
Remark 3.10. In the paper [17] the authors considered an inverse scattering problem for a general matrix Dirac-type system with integrable potentials. However, the proof of the fact that the reconstructed potential is integrable (cf. part (1) of the above theorem) was incomplete. One of the motivations for this work was to suggest an approach that would (after suitable adaptation to the matrix case) fill in the gap in the proof of the paper [17]. We also give a rigorous proof of continuity of the inverse scattering transform.

Consistency of Reconstruction
Finally we shall show that the function u constructed in the previous subsection for some S ∈ S p is the potential of the ZS-AKNS system (1.1) whose scattering function is this S. To this end we first prove that the solution Γ of (3.8) generates the Jost solution of this system via the equality (2.5).
We recall that the set F was defined in Remark 3.9 and that it is open in X_p. The set C^∞_0(R) of infinitely smooth functions of compact support is dense in X_p, and we first take F ∈ F belonging to C^∞_0(R). Clearly, the function g solving the corresponding Eq. (3.9) is then continuously differentiable in x and ζ. Indeed, smoothness in ζ is obvious, while the way H_Ω(x) depends on x shows continuous differentiability in x.
Differentiation of (3.9) in x and ζ and integration by parts on account of (3.10) result in the first relation of Lemma 3.11; multiplying the second row of (3.8) by u(x), we get the second relation. Let us define a vector-valued function ψ₁ = (ψ₁₁, ψ₁₂)ᵗ via (3.17), where (Γ₁₁, Γ₂₁) solves Eq. (3.9). Direct calculations using the relations of Lemma 3.11 show that ψ₁ satisfies the ZS-AKNS system (1.1), in which u is given by (3.10). If F is not smooth but is derived from a given S ∈ S_p via (3.2), we choose a sequence of F_n in F converging to this F in the topology of X_p and then use the continuous dependence of u and Γ on F explained in Theorem 3.7 and Remark 3.9 to derive the following result (see the proof of [36, Lemma 4.4] for details).

Proposition 3.12. Assume that S ∈ S_p and that F is defined via (3.2). Form the matrix kernel Ω of (3.3), solve the Marchenko equation (3.8), and define ψ₁ as in (3.17). Then the function ψ₁ solves the ZS-AKNS system (1.1) with the potential u defined via (3.10).
Now we are in a position to justify the reconstruction procedure.
Lemma 3.13. Assume that S ∈ S p is associated with F ∈ X p and β ∈ [0, π) via (3.2), form the matrix kernel Ω of (3.3), and solve the Marchenko equation (3.8). Then the scattering function of the ZS-AKNS system (1.1), with potential u given by (3.10) and subject to the boundary conditions (1.2) with α := β, coincides with S.
Proof. In view of Proposition 3.12, we need to prove the relation e^{−iβ}ψ₁₁(0, k) − e^{iβ}ψ₁₂(0, k) = 0, with ψ₁ = (ψ₁₁, ψ₁₂)ᵗ defined by (3.17). Set γ(ζ) := e^{−iβ}Γ₁₁(0, ζ) − e^{iβ}Γ₂₁(0, ζ) for ζ ≥ 0 and γ(ζ) := 0 for ζ < 0; a short computation then shows that the left-hand side above can be written as (FΦ)(k), with Φ defined in (3.18); thus it suffices to prove that Φ is equal to zero identically on the real line. First, we take a suitable linear combination of the components of equation (3.9) for x = 0; recalling that γ is zero on R₋, we conclude that Φ(ζ) = 0 for positive ζ.
Next, consider Φ on the negative half-line. To begin with, we recall that S is unimodular. Since S(k) − e^{−2iβ} = (FF)(k), the inverse Fourier transform of the unimodularity relation, after replacing x with t − x and changing the variables in the integral, yields a convolution identity for F. Now, multiplying (3.18) by e^{−iβ}F(x + ζ), integrating in ζ, and using the above relations, we arrive at an equality for Φ(−x). Recalling that both γ and F are integrable, using the Fubini theorem to change the integration order in the last integral, and then applying (3.19) in the first and the last summands, we simplify this equality; for negative x the term Φ(−x) vanishes, as we proved above, so that the equality becomes homogeneous. Therefore the vector (Φ, Φ̄)ᵗ solves the equation f + H⁻_Ω f = 0. In view of (3.18), this solution is in L¹(R₋), and thus Φ ≡ 0 on R₋ by Lemma 3.5.
As a result, Φ vanishes on the whole real line, and the proof is complete.
Note that this lemma is a vector analogue of Theorem 3.3.2 by Marchenko [65].

Proof of the Main Results
We are now in a position to prove the main results of the paper, which completely characterise the scattering data for the ZS-AKNS systems (1.1) subject to the boundary conditions (1.2) and show continuity of the direct and inverse scattering maps.
Proof of Theorem 3.1. Necessity of the inclusion S ∈ S p and of the equality α = β was established in the previous section; thus we only need to show sufficiency of these conditions. Given S in S p , we solve the associated Marchenko equation (3.8) and define a function u via (3.10). By Theorem 3.7, u is in X + p , while by Lemma 3.13 the ZS-AKNS system (1.1) with this u and subject to the boundary conditions (1.2) with α := β has the scattering function equal to S. The proof is complete.
Proof of Theorem 3.2. The scattering map from U_p to S_p is surjective in view of Theorem 3.1.
To show its injectivity, assume that S ∈ S p corresponds to two ZS-AKNS systems associated with the pairs (u j , β j ) ∈ U p , j = 1, 2. Then (2.19) yields β 1 = β 2 , so that F is uniquely determined by S.
Next, denote by Γ^(j) the kernels in the integral representations of the Jost solutions corresponding to the two ZS-AKNS systems. By Lemma 2.14, each Γ^(j) and the kernel Ω of (2.21) constructed for the F of (2.19) are related via the Marchenko equation (2.21). Set Γ := Γ^(1) − Γ^(2); then Γ + H_Ω(x)Γ = 0, which yields Γ ≡ 0 by Lemma 3.5. It follows that both ZS-AKNS systems (u_j, β_j) have the same Jost solutions ψ₁ and ψ₂, and their matrix potentials Q_j are uniquely determined from Eq. (2.1) on account of the fact that the fundamental solution matrix Ψ = (ψ₁, ψ₂) is everywhere non-singular. Therefore Q₁ = Q₂ and consequently u₁ = u₂; thus the scattering map is a bijection between U_p and S_p.
Finally, continuity of the direct scattering map follows from Proposition 2.1, (2.15) and (2.18). Continuity of the inverse scattering map is proved in part (2) of Theorem 3.7. The proof is complete.

Acknowledgements. The authors thank Vladimir Peller for helpful comments on Hankel operators. The research of the first author was partially supported by the Centre for Innovation and Transfer of Natural Sciences and Engineering Knowledge at the University of Rzeszów. The research of the second author was supported by the European Union within the project "Support for research teams on CTU" CZ.1.07/2.3.00/30.0034.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/ by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Appendix A: Hankel and Toeplitz Operators and the Proof of Lemma 3.5
The spectral theory of Hankel and Toeplitz operators is understood best in the Hardy space H², see [71]. For the purpose of this paper, it is most appropriate to consider the Hardy spaces H²(C₊) and H²(C₋) on the half-planes C₊ and C₋ respectively.
We recall that the Hardy space H²(C₊) consists of all functions f that are analytic in the open complex half-plane C₊, belong to L²(R) along the lines R + iy for all y > 0, and satisfy sup_{y>0} ‖f(· + iy)‖_{L²(R)} < ∞; the norm in H²(C₊) is given by the above supremum. Every f ∈ H²(C₊) has non-tangential limits almost everywhere on the real line, and the limit function belongs to L²(R). In this way, every f ∈ H²(C₊) is identified with an element of L²(R), and thus H²(C₊) can be considered as a subspace of L²(R). The Hardy space H²(C₋) is defined similarly.
Next, according to the Paley-Wiener theorem, the Fourier transform F is a unitary mapping from L²(R±) onto H²(C±). Denote by P₊ and P₋ the operators of multiplication by the characteristic functions χ₊ and χ₋ of the positive and negative half-lines respectively; then P± are orthogonal projections such that P₊ + P₋ = I. The operators 𝒫± := FP±F⁻¹ are called the Riesz projectors; they are orthogonal projectors in L²(R) mapping the latter onto H²(C±). Denote also by J the reflection operator in L²(R) given by Jf(x) = f(−x).
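The defining relation 𝒫± = FP±F⁻¹ has a simple discrete analogue that can be sketched with the FFT; the grid size and the identification of the first half of the array with R₊ are illustrative modelling assumptions, not part of the paper.

```python
import numpy as np

def riesz_projections(g):
    """Discrete analogue of the Riesz projectors: conjugate the
    multiplications by chi_+ and chi_- with the Fourier transform."""
    n = len(g)
    f = np.fft.ifft(g)                      # pass to the "time" side
    chi_plus = np.zeros(n)
    chi_plus[: n // 2] = 1.0                # model of chi_{R_+}
    chi_minus = 1.0 - chi_plus
    return np.fft.fft(chi_plus * f), np.fft.fft(chi_minus * f)

rng = np.random.default_rng(0)
g = rng.standard_normal(64) + 1j * rng.standard_normal(64)
gp, gm = riesz_projections(g)

print(np.allclose(gp + gm, g))                    # True: P+ + P- = I
print(np.allclose(riesz_projections(gp)[0], gp))  # True: P+ is idempotent
print(abs(np.vdot(gp, gm)) < 1e-10)               # True: ranges are orthogonal
```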
Given a function ψ ∈ L∞(R), we define the corresponding Hankel operator H_ψ and Toeplitz operator T_ψ on the Hardy space H²(C₊) via H_ψ f := 𝒫₋(ψf) and T_ψ f := 𝒫₊(ψf); the Toeplitz operator is clearly bounded therein. Moreover, the Nehari theorem claims that the norm of H_ψ is given by the distance in L∞(R) from ψ to H∞(C₊); in particular, Hankel operators with symbols in L∞(R) are bounded.
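A finite-dimensional illustration of the Nehari bound ‖H_ψ‖ ≤ ‖ψ‖_∞ is easiest on the discrete side (the Hardy space of the unit disc rather than C₊, an illustrative transplant): the Hankel matrix is built from the negative Fourier coefficients of ψ only, and the particular symbol below is an arbitrary choice.

```python
import numpy as np

def hankel_from_symbol(neg_coeffs, n=64):
    """Hankel matrix H[j,k] = c_{-(j+k+1)} built from the negative
    Fourier coefficients c_{-1}, c_{-2}, ... of a symbol psi."""
    c = np.zeros(2 * n, dtype=complex)
    c[: len(neg_coeffs)] = neg_coeffs
    j, k = np.indices((n, n))
    return c[j + k]

# symbol psi(theta) = 0.5 e^{-i theta} + 0.25 e^{-2 i theta} + 0.3 e^{i theta};
# the analytic coefficient c_1 = 0.3 does not enter the Hankel matrix at all
H = hankel_from_symbol([0.5, 0.25])

theta = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
psi = 0.5 * np.exp(-1j * theta) + 0.25 * np.exp(-2j * theta) + 0.3 * np.exp(1j * theta)

print(np.linalg.norm(H, 2))                             # ≈ 0.6036
print(np.linalg.norm(H, 2) <= np.abs(psi).max())        # True: Nehari bound
```

Here the Hankel norm (0.5 + √0.5)/2 ≈ 0.6036 is well below sup|ψ| ≈ 1.05, in line with the theorem: the norm is the distance to H∞, not ‖ψ‖_∞ itself.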
In view of formula (3.1.5) of [71], if the symbol ψ is unimodular, i.e., if |ψ| = 1 a.e., then the Hankel and Toeplitz operators with symbol ψ satisfy the following identity in H²(C₊):
I − H*_ψ H_ψ = T*_ψ T_ψ. (A.1)

Lemma A.1. Assume that the symbol ψ is unimodular and can be written as ψ = φ₊/φ₋ with some nonzero φ± ∈ H∞(C±). Then the nullspace of the operator I − H*_ψ H_ψ is trivial.

Proof. In view of (A.1), it suffices to show that the nullspace null(T_ψ) of the Toeplitz operator T_ψ is trivial. Assume that f ∈ null(T_ψ); then g := ψf belongs to H²(C₋). The assumption on ψ leads to the equality gφ₋ = fφ₊; however, since gφ₋ ∈ H²(C₋) and fφ₊ ∈ H²(C₊), both sides must vanish identically, and we conclude that f = 0.
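For completeness, the identity behind (A.1) follows by a one-line computation with the Riesz projectors, assuming the standard definitions T_ψ f = 𝒫₊(ψf) and H_ψ f = 𝒫₋(ψf):

```latex
\[
  H_\psi^* H_\psi f + T_\psi^* T_\psi f
    = \mathcal P_+\bigl(\bar\psi\,\mathcal P_-(\psi f)\bigr)
    + \mathcal P_+\bigl(\bar\psi\,\mathcal P_+(\psi f)\bigr)
    = \mathcal P_+\bigl(|\psi|^2 f\bigr) = f ,
\]
```

so that I − H*_ψ H_ψ = T*_ψ T_ψ on H²(C₊) whenever |ψ| = 1 a.e.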
A rich class of Hankel operators is given as follows. Assume that k is an integrable function on R₊ and define an integral operator on L²(R₊) via (H_k f)(t) := ∫₀^∞ k(t+s)f(s) ds. Observe that H_k only depends on the values of k on the positive half-line R₊. Recalling the definition of the operators J and P₋, one can easily verify that H_k f = P₊(k ∗ Jf), where ∗ denotes the usual convolution on the line. Since FJ = JF and F⁻¹f =: g belongs to L²(R₊) for every f ∈ H²(C₊), it follows that FH_kF⁻¹ coincides with the Hankel operator H_ψ on H²(C₊) with the symbol ψ := F(Jk) = JFk ∈ L∞(R). We observe that H_ψ does not change under addition to ψ of any function φ ∈ H∞(C₊); indeed, the distribution F⁻¹φ has its support in [0, ∞), so that supp(JF⁻¹φ) has empty intersection with R₊.
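A discretized instance of this class can be checked directly; the kernel k(t) = e^{−t} and the truncation parameters below are illustrative choices. For this kernel the operator is rank one, H_k f = e^{−t} ∫₀^∞ e^{−s} f(s) ds, with norm ∫₀^∞ e^{−2t} dt = 1/2, consistent with the bound ‖H_k‖ ≤ sup|Fk| = 1.

```python
import numpy as np

# discretization of (H_k f)(t) = int_0^infty k(t+s) f(s) ds on [0, T]
T, n = 30.0, 1200
t = np.linspace(0.0, T, n)
h = t[1] - t[0]
H = np.exp(-(t[:, None] + t[None, :])) * h   # entries depend on t_i + s_j only

# anti-diagonals of the matrix are constant -- the Hankel structure
assert np.isclose(H[0, 5], H[5, 0]) and np.isclose(H[2, 3], H[1, 4])

# for k(t) = e^{-t} the operator is rank one with norm 1/2
norm = np.linalg.norm(H, 2)
print(norm)   # ≈ 0.51; tends to 1/2 as the grid is refined
```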
With these results at hand, we can proceed to the proof of Lemma 3.5.
Proof of Lemma 3.5. We start with the case x = 0 and assume that f := (f₁, f₂)ᵗ is an integrable solution of the equation f + H_Ω(0)f = 0. We first show that f belongs to L²(R₊). To this end we represent F as F₀ + F₁ with a continuous F₀ of compact support and an integrable F₁ satisfying ‖F₁‖_{L¹(R₊)} < 1/2, and denote by H_{Ω_j}(0), j = 0, 1, the Hankel operators defined as H_Ω(0) but with F_j instead of F. The above equation for f can now be written as (I + H_{Ω₁}(0))f = −H_{Ω₀}(0)f; boundedness of F₀ shows that the right-hand side of this equation is a bounded function. On the other hand, I + H_{Ω₁}(0) is boundedly invertible in L∞(R₊) in view of (3.6); therefore f is bounded and thus belongs to L²(R₊). Written componentwise, the equation f + H_Ω(0)f = 0 implies that f₂ satisfies the equality (I − H*_F H_F)f₂ = 0. Recall that F = F⁻¹(S − e^{−2iβ}); therefore, the preceding discussion shows that the operator FH_FF⁻¹ is a Hankel operator with the symbol J(S − e^{−2iβ}) = S − e^{−2iβ}, or with the symbol ψ := S. Observe that F⁻¹ = π⁻¹F*, and thus F(I − H*_F H_F)F⁻¹ = I − H*_ψ H_ψ; therefore, it suffices to show that the nullspace of the operator I − H*_ψ H_ψ is trivial in the Hardy space H²(C₊). As ψ = S is unimodular by assumption and S admits a representation S = s₊/s₋ with some s± ∈ H∞(C±) by Corollary A.3, the claim on the nullspace follows from Lemma A.1. As a result, f₂ = 0, and now the relation f₁ = H_F f₂ shows that f = 0 as required, thus completing the proof for x = 0.
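The bounded invertibility of I + H_{Ω₁}(0) in L∞(R₊) invoked above is a Neumann-series fact; a sketch, assuming (as the notation suggests) that H_{F₁} acts with the kernel F₁(t+s):

```latex
\[
  \bigl|(H_{F_1} f)(t)\bigr|
    \le \|f\|_{L^\infty(\mathbb R_+)} \int_0^\infty |F_1(t+s)|\,ds
    \le \|F_1\|_{L^1(\mathbb R_+)}\,\|f\|_{L^\infty(\mathbb R_+)} ,
\]
```

so that ‖H_{F₁}‖_{L∞→L∞} ≤ ‖F₁‖_{L¹} < 1/2; the analogous bound for the matrix operator H_{Ω₁}(0) makes the Neumann series for (I + H_{Ω₁}(0))⁻¹ converge in L∞(R₊).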
Assume now that f is an integrable solution of the equation f + H_F(x)f = 0 for some x > 0. Extending this solution by zero for t < 0 and setting f_h(·) := f(· − h), we find that, for all h > 0 and t > 0, the shifted function f_h satisfies a similar relation. It follows that, for every h ∈ (0, x), the function g_h := f_h + f_{x−h} satisfies the same equation with x = 0. The first part of the lemma now shows that g_h = 0; taking h = x/2 results in the desired conclusion that f = 0.
The claim on the integral operator H − Ω can be proved analogously.
The following two statements are mathematical folklore; however, the authors are at a loss for a proper reference and therefore have decided to include their short proofs.
Lemma A.2. Assume that a function S ∈ 1 + X₁ does not vanish on R and that the winding number W(S) of S is zero. Then log S(k) − log S(∞) ∈ X₁.
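The winding number W(S) appearing in the lemma is the net increment of arg S(k), divided by 2π, as k runs along R (the curve closes since S(k) → S(∞) at both ends). A numerical sketch, with illustrative grid parameters and test symbols:

```python
import numpy as np

def winding_number(S, kmax=1e4, n=200001):
    """Winding number of k -> S(k) along the real line, computed by
    accumulating the unwrapped phase increments of S."""
    k = np.linspace(-kmax, kmax, n)
    phase = np.unwrap(np.angle(S(k)))
    return int(round((phase[-1] - phase[0]) / (2 * np.pi)))

S = lambda k: (k - 1j) / (k + 1j)   # unimodular on R, S(∞) = 1
print(winding_number(S))                     # 1
print(winding_number(lambda k: S(k) ** 2))   # 2
print(winding_number(lambda k: 1 / S(k)))    # -1
```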
Proof. By definition of 1 + X₁, there are c ∈ C and φ ∈ L¹(R) such that S(k) = c + ∫_R e^{ikt}φ(t) dt. Without loss of generality we may assume that c = 1, as otherwise we can divide through by c = S(∞); also, we choose the branch of the logarithm for which log 1 = 0. Observe that the function S is invertible in the algebra 1 + X₁ by the Wiener theorem. We choose φ₀ in the Schwartz class S(R) sufficiently close to φ in L¹(R) and set S₀(k) := 1 + ∫_R e^{ikt}φ₀(t) dt.
Define now S₁ := S₀/S; then S₁ is uniformly close to 1 on R. In particular, neither S₁ nor S₀ = SS₁ vanishes on R; moreover, W(S₁) = 0 and thus W(S₀) = W(S) + W(S₁) = 0. The Wiener-Levy theorem can now be used to define log S₁ as an element of 1 + X₁ which, in fact, belongs to X₁. Since S₀ − 1 is in the Schwartz class S(R) and W(S₀) = 0, the function log S₀ belongs to S(R) ⊂ X₁. As log S = log S₀ − log S₁, the lemma is proved.

Proof. According to the above lemma, there exists a function g ∈ L¹ such that log S(k) − log S(∞) =