A Matrix Schrödinger Approach to Focusing Nonlinear Schrödinger Equations with Nonvanishing Boundary Conditions

We relate the scattering theory of the focusing AKNS system with equally sized nonvanishing boundary conditions to that of the matrix Schrödinger equation. This (shifted) Miura transformation converts the focusing matrix nonlinear Schrödinger (NLS) equation into a new nonlocal integrable equation. We apply the matrix triplet method of solving the Marchenko integral equations by separation of variables to derive the multisoliton solutions of this nonlocal equation, thus proposing a method to solve the reflectionless matrix NLS equation.

In their seminal 1972 paper, Zakharov and Shabat (1972) showed that the NLS equation can be solved by means of the Inverse Scattering Transform (IST) technique. To this aim, they introduced a scattering problem now known as the Zakharov-Shabat (ZS) system. The ZS system was used to solve the scalar NLS equation with zero and nonzero boundary conditions (Zakharov and Shabat 1972, 1973). In particular, in Zakharov and Shabat (1973) the case of nonzero boundary conditions in the defocusing regime was considered, introducing a spectral parameter belonging to a suitable two-sheeted Riemann surface and studying the analyticity properties of the scattering data on this surface. Moreover, in Ma (1979) it was proven that, in order to develop the IST for the focusing NLS equation with nonvanishing boundary conditions, the associated ZS system again leads one to introduce a spectral parameter λ belonging to a suitable two-sheeted Riemann surface. The introduction of a two-sheeted Riemann surface evidently makes the study of the NLS equation with nonvanishing boundary conditions via the IST much more complicated than in the vanishing case. Furthermore, in 1974 Ablowitz, Kaup, Newell and Segur proposed an alternative but equivalent way to develop the IST for the NLS equation, consisting of associating to this equation the so-called AKNS system (Ablowitz et al. 1974). In the AKNS system, one (matrix) equation represents the spectral equation, whereas a second (matrix) equation describes the time evolution of the scattering data. Similarly to what happens with the ZS system, developing the IST from the AKNS pair is significantly more complicated in the nonvanishing case than in the vanishing case.
Systematic studies of the inverse scattering transform theory (IST) of the (scalar and matrix) NLS equation with nonvanishing boundary conditions have been carried out in the defocusing case in Inoue (1977, 1978), Kato (1981, 1984), Faddeev and Takhtajan (1987), Prinari et al. (2006), Demontis et al. (2013), and in the focusing case in Biondini and Kovačić (2014), Demontis et al. (2014), Ortiz and Prinari (2020), Biondini et al. (2021). In Bilman and Miller (2019), the IST with full account of the spectral singularities leads to rogue wave solutions of the focusing NLS with nonvanishing boundary conditions. In all the papers cited above, a ZS system or an AKNS system is associated to the NLS equation. If one considers the focusing NLS with nonvanishing boundary conditions, it is customary, as remarked above, to introduce a new complex spectral parameter, say λ, defined as λ = √(k² + μ²) (note that λ is defined through a multivalued function). The study of the analyticity properties of the scattering data with respect to the parameter λ is quite difficult and requires special care. In this article, we show how to associate to the NLS equation with nonzero boundary conditions, as its spectral problem, a Schrödinger equation whose potential vanishes at infinity. In this way, to the best of our knowledge, for the first time the IST for the focusing NLS system with nonzero boundary conditions is developed without associating to it the AKNS system (or the Zakharov-Shabat system). The advantage of working with the Schrödinger equation with vanishing boundary conditions instead of the AKNS system is immediate: the construction of the scattering data for the Schrödinger equation with zero boundary conditions does not require the introduction of a new spectral parameter. Consequently, the analyticity properties of the scattering coefficients can be studied in a more transparent way than in the analogous study based on the AKNS system.
Moreover, we accomplish the important task of establishing the connection between the scattering data of the AKNS system and the scattering data of the Schrödinger equation.
In other words, a major obstacle encountered in the above-cited studies of the IST for the nonvanishing NLS systems is the change of variable from the initial spectral parameter k to a new spectral parameter λ = √(k² + μ²), which complicates the analyticity issues for the Jost solutions and scattering coefficients considerably, especially if this change of variable is considered in the entire complex plane. The main purpose of this article is to greatly simplify these issues by relating the focusing NLS equation to a suitable matrix Schrödinger equation, whose spectral parameter (in this case, λ) is typically chosen in the closed upper half-plane C^+ ∪ R. Here we can rely on a substantial body of knowledge on the direct and inverse scattering theory of the scalar Schrödinger equation on the line (Faddeev 1964; Deift and Trubowitz 1979; Calogero and Degasperis 1982; Chadan and Sabatier 1989) and of the matrix Schrödinger equation on the half-line (Aktosun and Weder 2018, 2020) and on the full line (Wadati and Kamijo 1974; Aktosun et al. 2001). In particular, the small-λ asymptotics of the scattering data, which is crucial to a rigorous matrix Schrödinger scattering theory, has been developed in detail in Aktosun et al. (2001).
In this article, we study the focusing m + m AKNS system (1.1), where v is a vector function with n = 2m components, I_m is the identity matrix of order m, σ₃ = I_m ⊕ (−I_m), the potential Q anticommutes with σ₃, and Q is anti-Hermitian: Q† = −Q. The potential Q is to satisfy the integrability condition (1.2), where Q_y is the y-derivative of Q, and its limits Q_{r,l} at the right and left endpoints of the real line satisfy [Q_{r,l}]² = −μ²I_n for some μ > 0.
We pursue an approach that is quite different from the one expounded in Biondini and Kovačić (2014), Demontis et al. (2014), Biondini et al. (2021). Letting L = iσ₃[∂_x I_n − Q] stand for the AKNS Hamiltonian, we easily verify that 𝐋 = L² + μ²·1 is the matrix Schrödinger Hamiltonian given by 𝐋 = −∂²_x I_n + 𝐐, where 1 stands for the identity operator on a suitable function space and 𝐐 = Q² + Q_x + μ²I_n is a matrix Faddeev class Schrödinger potential obtained from Q by the (shifted) Miura transform (Ablowitz and Segur 1981). In other words, 𝐐(·) ∈ L¹(R; (1 + |x|)dx).
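The identity 𝐋 = L² + μ²·1 can be verified by a one-line computation, using only that Q anticommutes with σ₃ (so that σ₃(∂_x − Q)σ₃ = ∂_x + Q):

```latex
\begin{aligned}
L^2 &= i\sigma_3(\partial_x - Q)\, i\sigma_3(\partial_x - Q)
     = -(\partial_x + Q)(\partial_x - Q)
     = -\partial_x^2 + Q_x + Q^2 ,
\end{aligned}
```

so that 𝐋 = L² + μ²·1 = −∂²_x I_n + [Q² + Q_x + μ²I_n], and the bracketed matrix is the (shifted) Miura transform of Q.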
Then any solution v of the AKNS system (1.1) is also a solution of the matrix Schrödinger equation (1.4), 𝐋v = λ²v, where λ = √(k² + μ²) is the conformal transformation from the complex k-plane K cut along the segment [−iμ, iμ] onto the complex λ-plane satisfying λ ∼ k at infinity. This transformation provides a one-to-one correspondence between the open upper/lower half k-plane K^± cut along [−iμ, iμ] and the open upper/lower half λ-plane C^±, as well as a one-to-one correspondence between their boundaries ∂K^± and R and between their closures K^± ∪ ∂K^± and C^± ∪ R (Fig. 1).
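A quick numerical sketch of this map (hypothetical helper `lam_of_k`; writing λ = k√(1 + μ²/k²) with the principal square root fixes the branch, puts the cut exactly on [−iμ, iμ], and enforces λ ∼ k at infinity):

```python
import numpy as np

def lam_of_k(k, mu):
    """Conformal map lambda = sqrt(k^2 + mu^2) from the cut k-plane onto the
    lambda-plane, normalized so that lambda ~ k at infinity (sketch; the
    principal branch of sqrt(1 + (mu/k)^2) places the cut on [-i mu, i mu])."""
    k = complex(k)
    return k * np.sqrt(1.0 + (mu / k) ** 2)

mu = 1.0
# K^+ (upper half k-plane off the cut) is mapped into C^+ ...
samples = [1.0 + 0.5j, -1.0 + 0.5j, 2.0j, 0.3 + 2.0j]
assert all(lam_of_k(k, mu).imag > 0 for k in samples)
# ... and lambda - k -> 0 at infinity, since lambda - k ~ mu^2/(2k)
k = 100.0 + 50.0j
assert abs(lam_of_k(k, mu) - k) < 1e-2
```

The same sampling argument applied to the lower half-plane illustrates the correspondence K^− to C^−.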
In this article, we wish to take advantage of the well-developed direct and inverse scattering theory of the matrix Schrödinger equation with selfadjoint potential [Agranovich and Marchenko (1963), Aktosun and Weder (2018, 2020) on the half-line; Wadati and Kamijo (1974), Aktosun et al. (2001) on the full line], especially the established custom of choosing its spectral variable λ in C^+ ∪ R, in deriving the focusing NLS solutions with nonvanishing boundary conditions. In a previous paper, Demontis and van der Mee (2021), such full-line theory has been made to fit potentials satisfying 𝐐† = σ₃𝐐σ₃. (1.6) The traditional applications of the matrix Schrödinger equation to quantum graphs, quantum wires, and quantum mechanical scattering of particles with internal structure (Berkolaiko 2017; Berkolaiko et al. 2006; Berkolaiko and Kuchment 2013; Berkolaiko and Liu 2017; Boman and Kurasov 2005; Exner et al. 2008; Gerasimenko 1988; Gerasimenko and Pavlov 1988; Gutkin and Smilansky 2001; Harmer 2002, 2004, 2005; Kostrykin and Schrader 1999, 2000; Kuchment 2004, 2005; Kurasov and Nowaczyk 2005, 2010; Kurasov and Stenberg 2002) have led to the almost exclusive development of matrix Schrödinger scattering theory for selfadjoint potentials satisfying 𝐐† = 𝐐 [see Agranovich and Marchenko (1963), Aktosun and Weder (2018, 2020) for the half-line theory and Wadati and Kamijo (1974), Aktosun et al. (2001) for the full-line theory]. Energy losses in such systems naturally lead to potentials whose imaginary part [𝐐 − 𝐐†]/2i has constant sign. In the present context, where 𝐐 satisfies (1.6), we thus require the modified matrix Schrödinger scattering theory given in Demontis and van der Mee (2021) when solving the focusing matrix NLS equation.
Let us discuss the contents of the various sections. In Sect. 2, we introduce the Lax pair {L, A} and the AKNS pair {X, T } whose compatibility conditions lead to an integrable nonlocal equation for Q. We also relate the solutions of this integrable equation to those of a modified matrix NLS equation which is converted into the usual matrix NLS equation by a trivial gauge transformation. Next, in Sects. 3-4 we state the direct and inverse scattering theory of the matrix Schrödinger equation (1.4) with Faddeev class potentials Q satisfying (1.6), disregarding any time dependence. In particular, we introduce the Jost solutions and the scattering coefficients, write them as Fourier transforms of L 1 -functions, and state the Marchenko integral equations to solve the inverse scattering problem. We then go on to derive the time evolution of the scattering data [Sect. 5]. In Sect. 6, we apply the so-called matrix triplet method to derive the multisoliton solutions of the nonlocal integrable equation and the focusing matrix NLS equation by separation of variables in the Marchenko integral equations.
We adopt boldface symbols for many of the quantities pertaining to the matrix Schrödinger equation and calligraphic symbols for many of the quantities pertaining to the AKNS system. We deviate from the practice of Ablowitz et al. (1974, 2004) in allowing right and left to correspond to the real line endpoints involved in defining the Jost solutions, both in the (matrix) Schrödinger and the AKNS cases. Hence, we prioritize traditional notations regarding (matrix) Schrödinger equations (Faddeev 1964; Deift and Trubowitz 1979; Chadan and Sabatier 1989) over those regarding AKNS systems (Ablowitz et al. 1974, 2004).

Lax Pair for the New Integrable Model
It is well known that the matrix NLS system is governed by a Lax pair {L, A} of linear operators (Lax 1968; Ablowitz and Segur 1981; Eckhaus and van Harten 1981), where Q is given by (1.3), Lv = kv is the AKNS eigenvalue problem, and v_t = Av describes the time evolution. Then the zero curvature condition, where 0 denotes the zero operator on a suitable function space, leads to the integrable PDE (2.3), which coincides with the usual matrix NLS equation studied in Ablowitz et al. (1974, 2004), apart from the extra term −2μ²Q.
Then the ∂²_x term vanishes iff 𝐐 = D + Q_x for some D commuting with σ₃ and vanishing as x → ±∞. Using that e^{±xQ_r} = cos(μx)I_n ± (sin(μx)/μ)Q_r to arrive at the estimate ‖e^{±xQ_r}‖ ≤ √(μ² + ‖Q_r‖²)/μ, we can apply Gronwall's inequality to see that E vanishes identically and therefore D = Q² + μ²I_n. Thus, for this particular choice of D, we arrive at the nonlinear evolution equation (2.3) for time invariant matrices Q_{r,l} satisfying [Q_{r,l}]² = −μ²I_n for every t ∈ R. Conversely, substituting 𝐐 = D + Q_x, where D commutes with σ₃, Q_x anticommutes with σ₃, and D vanishes as x → ±∞, into (2.3), we obtain an equation whose block off-diagonal and block diagonal components separate into (2.5), where a matrix commutator appears. The gauge transformation then converts (2.5b) into the usual matrix NLS equation. This is in agreement with Q_t vanishing as x → ±∞ and with the well-known time evolution [see Demontis et al. (2014); cf. Horn and Johnson (1994)], where θ_{r,l} ∈ R are phases. Furthermore, (2.5b) and (1.3) imply the nonlinear equation (2.3). In fact, let us compute the zero curvature condition for the AKNS pair {X, T} given by (2.7). A direct computation then shows, as claimed, that the zero curvature condition for the AKNS pair {X, T} is equivalent to the nonlinear evolution equation (2.3).
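In the scalar reduction, the gauge transformation is the familiar phase rotation removing the extra linear term; a sketch under the standard focusing normalization (assumed here, and not necessarily the paper's exact matrix convention): if q solves iq_t + q_{xx} + 2(|q|² − μ²)q = 0, then the rotation below yields the usual focusing NLS.

```latex
% Scalar sketch, assuming the normalization  i q_t + q_{xx} + 2(|q|^2 - \mu^2) q = 0:
% the phase rotation \tilde q = e^{2i\mu^2 t} q removes the linear term -2\mu^2 q.
\begin{aligned}
i\tilde q_t
  &= e^{2i\mu^2 t}\,(i q_t) - 2\mu^2 \tilde q
   = -\tilde q_{xx} - 2\bigl(|\tilde q|^2 - \mu^2\bigr)\tilde q - 2\mu^2 \tilde q \\
  &= -\tilde q_{xx} - 2|\tilde q|^2 \tilde q ,
\qquad\text{i.e.}\qquad
i\tilde q_t + \tilde q_{xx} + 2|\tilde q|^2 \tilde q = 0 .
\end{aligned}
```

Here we used |q̃| = |q|, so the cubic term is unaffected by the rotation.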

Direct Scattering Theory
In this section, we introduce the Jost solutions and scattering coefficients for the matrix Schrödinger equation (1.4) with Faddeev class potential 𝐐 satisfying (1.6). For the scalar Schrödinger equation with real Faddeev class potential, the direct scattering theory is well documented (Faddeev 1964; Deift and Trubowitz 1979; Calogero and Degasperis 1982; Novikov et al. 1984; Chadan and Sabatier 1989). The matrix theory is discussed at great length in Aktosun and Weder (2018, 2020) for the half-line and in Wadati and Kamijo (1974), Aktosun et al. (2001) for the full line. Here Aktosun et al. (2001) contains the essential small-λ asymptotics of the scattering coefficients that is lacking in Wadati and Kamijo (1974). The adjoint symmetry (1.6) of 𝐐 requires some modifications of the existing theory (cf. Demontis and van der Mee 2021).

n × n Jost Solutions
Let us define the Jost solution from the left F_l(x, λ) and the Jost solution from the right F_r(x, λ) as those solutions of the matrix Schrödinger equation (1.4) which satisfy the asymptotic conditions (3.1), i.e., F_l(x, λ) = e^{iλx}[I_n + o(1)] as x → +∞ and F_r(x, λ) = e^{−iλx}[I_n + o(1)] as x → −∞. Introducing the Faddeev functions m_l(x, λ) = e^{−iλx}F_l(x, λ) and m_r(x, λ) = e^{iλx}F_r(x, λ), we easily define them as the unique solutions of the Volterra integral equations (3.2). Then, for each x ∈ R, m_l(x, λ) and m_r(x, λ) are continuous in λ ∈ C^+ ∪ R, are analytic in λ ∈ C^+, and tend to I_n as λ → ∞ from within C^+ ∪ R. For 0 ≠ λ ∈ R, we can reshuffle (3.2) and arrive at the asymptotic relations (3.4) defining the coefficients A_{r,l}(λ) and B_{r,l}(λ). By the same token, B_{r,l}(λ) is continuous in 0 ≠ λ ∈ R, vanishes as λ → ±∞, and satisfies 2iλB_{r,l}(λ) → −Γ_{r,l} as λ → 0 along the real λ-axis.
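In the scalar specialization, the Volterra equation for m_l can be iterated numerically. The sketch below (hypothetical helper `faddeev_ml`, toy potential 0.2 sech²x, truncated to a finite grid; the kernel is the standard one for the scalar Schrödinger equation) illustrates that m_l(x, λ) equals 1 at the right endpoint and stays close to 1 for large λ:

```python
import numpy as np

def faddeev_ml(Qvals, x, lam, iters=15):
    """Neumann iteration for the left Faddeev function of the scalar
    Schrodinger equation (sketch, truncated to the grid x):
        m(x) = 1 + int_x^inf (exp(2i lam (y-x)) - 1)/(2i lam) Q(y) m(y) dy."""
    m = np.ones_like(x, dtype=complex)
    for _ in range(iters):
        new = np.ones_like(m)
        for i in range(len(x)):
            y = x[i:]
            kern = (np.exp(2j * lam * (y - x[i])) - 1.0) / (2j * lam)
            new[i] = 1.0 + np.trapz(kern * Qvals[i:] * m[i:], y)
        m = new
    return m

x = np.linspace(-10.0, 10.0, 1001)
Qvals = 0.2 / np.cosh(x) ** 2            # toy Faddeev-class potential
m = faddeev_ml(Qvals, x, 5.0)
assert abs(m[-1] - 1.0) < 1e-12          # m_l -> 1 as x -> +infinity
assert np.max(np.abs(m - 1.0)) < 0.2     # m_l close to I for large lambda
```

For a zero potential the iteration returns m ≡ 1 exactly, mirroring the free Jost solution e^{iλx}.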

2n × 2n Jost Solutions
Putting (3.6), where the prime denotes differentiation with respect to x, we obtain (3.7), where 0 ≠ λ ∈ R. Using that F_{r,l}(x, λ) satisfies the linear first-order system (3.7) with traceless system matrix, we see that, for 0 ≠ λ ∈ R, F_{r,l}(x, λ) has a determinant not depending on x ∈ R. Using (3.1), we easily verify that det F_{r,l}(x, λ) = (2iλ)^n for 0 ≠ λ ∈ R.
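The x-independence of the determinant is Liouville's formula: (det F)' = tr(S) det F = 0 for any solution of F' = S(x)F with traceless S. A scalar sketch (toy potential and hand-rolled RK4 integration, not the paper's F_{r,l}):

```python
import numpy as np

# Liouville's formula: if F' = S(x) F with tr S(x) = 0, then det F(x) is constant.
# Scalar Schrodinger equation v'' = (Q - lam^2) v in first-order form (toy data).
lam = 1.0
S = lambda x: np.array([[0.0, 1.0],
                        [1.0 / np.cosh(x) ** 2 - lam ** 2, 0.0]])  # traceless
F, x, h = np.eye(2), -5.0, 1e-3
drift = 0.0
for _ in range(10000):                   # RK4 integration from x = -5 to x = 5
    k1 = S(x) @ F
    k2 = S(x + h / 2) @ (F + h / 2 * k1)
    k3 = S(x + h / 2) @ (F + h / 2 * k2)
    k4 = S(x + h) @ (F + h * k3)
    F = F + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    x += h
    drift = max(drift, abs(np.linalg.det(F) - 1.0))
assert drift < 1e-8                      # det F stays at its initial value 1
```

The same argument applied to the 2n × 2n system (3.7) gives the constancy used above, with the constant fixed by the x → ±∞ asymptotics.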
Let us now apply the x-independence (a proof of this property will be given in Appendix A) of a bilinear expression, involving iσ₃, in any two square matrix solutions V and W of (3.7), to derive identities for the A and B coefficients by equating the asymptotics as x → +∞ to those as x → −∞.
Using V = W = F_r e₁ + F_l e₂, where e₁ = I_n ⊕ 0_{n×n} and e₂ = 0_{n×n} ⊕ I_n, we get (3.10) and (3.11), where 0 ≠ λ ∈ C^+ ∪ R.

Reflection Coefficients
Introducing the reflection coefficients R_{r,l}(λ) = B_{r,l}(λ)A_{r,l}(λ)^{-1} = −A_{l,r}(λ)^{-1}B_{l,r}(−λ) (3.12) and the transmission coefficients A_{r,l}(λ)^{-1}, we obtain a Riemann-Hilbert problem, where the matrix S(λ) containing the A and R quantities is called the scattering matrix; a discussion of the nonsingularity of A_{r,l}(λ) will be presented shortly. Then (3.14) is easily verified, provided 0 ≠ λ ∈ R and det A_{r,l}(λ) ≠ 0. Above, the matrices Γ_{r,l} are defined as suitable λ → 0 limits, where the first limit may be taken from the closed upper half-plane. Then the matrices Γ_r and Γ_l have the same determinant. If Γ_{r,l} is nonsingular, we are said to be in the generic case; if instead Γ_{r,l} is singular, we are said to be in the exceptional case (Aktosun et al. 2001). We are said to be in the superexceptional case if Γ_{r,l} = 0_{n×n} and A_{r,l}(λ) tends to a nonsingular matrix, A_{r,l}(0) say, as λ → 0 from within C^+ ∪ R.
Throughout this article, we assume the absence of spectral singularities, i.e., the absence of nonzero real λ for which det A_{r,l}(λ) = 0. Under this condition, the reflection coefficients R_{r,l}(λ) are continuous in 0 ≠ λ ∈ R.

Triangular Representations
The Jost solutions admit the triangular representations

F_l(x, λ) = e^{iλx}I_n + ∫_x^∞ dy K(x, y) e^{iλy},   (3.15)
F_r(x, λ) = e^{−iλx}I_n + ∫_{−∞}^x dy J(x, y) e^{−iλy}.   (3.16)

The integral equations satisfied by the auxiliary matrix functions K(x, y) and J(x, y), derived in Demontis and van der Mee (2021), imply the estimate (3.17).

Wiener Algebras
For convenience, we introduce the well-known Wiener algebra (Gelfand et al. 1964). By the (continuous) Wiener algebra W, we mean the complex vector space of constants plus Fourier transforms of L¹-functions, endowed with the norm |c| + ‖h‖₁. Here we define the Fourier transform as follows: ĥ(k) = ∫_{−∞}^∞ dy e^{iky} h(y). The invertible elements of the commutative Banach algebra W with unit element are exactly those c + ĥ ∈ W for which c ≠ 0 and c + ĥ(k) ≠ 0 for each k ∈ R (Gelfand et al. 1964).
The algebra W has the two closed subalgebras W^+ and W^− consisting of those c + ĥ ∈ W for which h is supported on R^+ and R^−, respectively. The invertible elements of W^± are exactly those c + ĥ ∈ W^± for which c ≠ 0 and c + ĥ(k) ≠ 0 for each k ∈ C^± ∪ R (Gelfand et al. 1964). Letting W_0^± and W_0 stand for the (nonunital) closed subalgebras of W^± and W consisting of those c + ĥ for which c = 0, we obtain the direct sum decompositions W = W_0^± ⊕ W^∓. We denote the (bounded) projections of W onto W_0^± along W^∓ by Π^±. Throughout this article, we denote the vector spaces of n×m matrices with entries in W, W^±, and W_0^± by W^{n×m}, (W^±)^{n×m}, and (W_0^±)^{n×m}, respectively. We write L¹(R)^{n×m} and L¹(R^±)^{n×m} for the vector spaces of n × m matrices with entries in L¹(R) and L¹(R^±), respectively. Using a submultiplicative matrix norm, we can turn all of these vector spaces into Banach spaces. It is then clear that W^{n×n} and (W^±)^{n×n} are noncommutative Banach algebras with unit element and (W_0^±)^{n×n} are (nonunital) noncommutative Banach algebras. As above, we then define Π^± as the (bounded) projections of W^{n×m} onto (W_0^±)^{n×m} along (W^∓)^{n×m}. The invertible elements of W^{n×n} and (W^±)^{n×n} are exactly those elements whose determinants are invertible elements of W and W^±, respectively. Hence, according to (3.15) and (3.16), for each x ∈ R the Faddeev functions m_{r,l}(x, ·) belong to (W^+)^{n×n}. We then easily prove with the help of (3.4) that 2iλ[I_n − A_{r,l}(λ)] belongs to (W^+)^{n×n} and 2iλB_{r,l}(λ) belongs to W^{n×n}. Assuming the absence of spectral singularities and assuming we are in the generic case, we proved in Demontis and van der Mee (2021) that the reflection coefficients R_{r,l}(λ) belong to (W_0)^{n×n} and the transmission coefficients A_{r,l}(λ)^{-1} to (W^+)^{n×n}.
In the superexceptional case, where Γ_{r,l} = 0_{n×n}, we proved in Demontis and van der Mee (2021) that A_{r,l} ∈ (W^+)^{n×n}, provided 𝐐 ∈ L¹(R; (1 + |x|)²dx); assuming the absence of spectral singularities and using the nonsingularity of A_{r,l}(0), we see that the reflection coefficients R_{r,l}(λ) and the transmission coefficients A_{r,l}(λ)^{-1} belong to W^{n×n}. At present it is not known whether, in the absence of spectral singularities, the reflection and transmission coefficients belong to W^{n×n} in any other exceptional case and for general 𝐐 ∈ L¹(R; (1 + |x|)dx). Under the condition 𝐐 ∈ L¹(R; (1 + |x|)dx), the continuity of the reflection and transmission coefficients at λ = 0 is known for n = 1 (Klaus 1988) and for selfadjoint potentials (Aktosun et al. 2001). In neither case is it known whether these continuous functions belong to W.
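A scalar illustration of the invertibility criterion (toy functions, using the Fourier convention above): with h(y) = e^{−|y|} one has ĥ(k) = 2/(1 + k²), so 1 + ĥ has no real zeros, and its inverse in W is 1 + ĝ with g(y) = −(1/√3)e^{−√3|y|}.

```python
import numpy as np

# h(y) = exp(-|y|)  =>  h_hat(k) = 2/(1+k^2); 1 + h_hat is nonzero on R and
# hence invertible in the Wiener algebra W.  Its inverse is 1 + g_hat with
# g(y) = -(1/sqrt(3)) exp(-sqrt(3)|y|)  (toy example).
k = np.linspace(-20.0, 20.0, 801)
h_hat = 2.0 / (1.0 + k ** 2)
g_hat = -2.0 / (3.0 + k ** 2)            # Fourier transform of g above
assert np.allclose((1.0 + h_hat) * (1.0 + g_hat), 1.0)

# check g_hat against the definition g_hat(k) = int e^{iky} g(y) dy, at k = 2
y = np.linspace(-40.0, 40.0, 400001)
g = -(1.0 / np.sqrt(3.0)) * np.exp(-np.sqrt(3.0) * np.abs(y))
num = np.trapz(g * np.exp(1j * 2.0 * y), y)
assert abs(num - (-2.0 / 7.0)) < 1e-6
```

Both c = 1 and the absence of real zeros of c + ĥ are needed; dropping either condition destroys invertibility in W.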

Inverse Scattering Theory
In this section, we introduce the Marchenko integral equations for the matrix Schrödinger equation (1.4) with Faddeev class potential 𝐐 satisfying (1.6). We make use of the hypothesis that the reflection coefficients R_{r,l} belong to (W_0)^{n×n}, something proved in the generic case but not in the most general exceptional case. For the sake of simplicity, we assume that the poles of A_{r,l}(λ)^{-1} in C^+ are simple. The extension to the multiple pole situation is rather technical but straightforward (Demontis and van der Mee 2008a). Inverse scattering theory is well documented in the scalar case (Faddeev 1964; Deift and Trubowitz 1979; Calogero and Degasperis 1982; Chadan and Sabatier 1989), in the matrix half-line case (Aktosun and Weder 2018, 2020), and in the matrix full-line case (Wadati and Kamijo 1974; Aktosun et al. 2001). The adjoint symmetry (1.6) requires some modifications to the existing theory (cf. Demontis and van der Mee 2021).
Let us write the transmission coefficients in the form (4.1), where λ₁, …, λ_N are the distinct simple poles of A_{r,l}(λ)^{-1} in C^+, τ_{r;s} and τ_{l;s} are the residues of A_r(λ)^{-1} and A_l(λ)^{-1} at λ = λ_s (s = 1, …, N), and A_{r0}(λ) and A_{l0}(λ) are continuous in λ ∈ C^+ ∪ R, are analytic in λ ∈ C^+, and tend to I_n as λ → ∞ from within C^+ ∪ R. Then it is easily proved that τ_{r;s} = −σ₃τ_{l;s}†σ₃ and τ_{l;s} = −σ₃τ_{r;s}†σ₃ whenever λ_s = −λ_s* (cf. Demontis and van der Mee 2021). Let us write the reflection coefficients in the form (4.2), where R̂_{r,l} ∈ L¹(R)^{n×n}. In fact, this has only been proved in the generic case and, under the condition that 𝐐 ∈ L¹(R; (1 + |x|)²dx), in the superexceptional case. Using (3.14a), it follows that R̂_{r,l}(α; t) are σ₃-Hermitian matrices. Then the following Marchenko integral equations can be derived [see Demontis and van der Mee (2021) for details]: (4.3), where the Marchenko integral kernels Ω_{r,l} are given by (4.4). Here N_{r;s} and N_{l;s} are the so-called norming constants defined by (4.5), where λ_s is a (simple) pole of A_{r,l}(λ)^{-1} in C^+ (s = 1, 2, …, N). Then τ_{r;s} and N_{r;s} have the same rank and the same null space; the same is true of τ_{l;s} and N_{l;s}. As in Demontis and van der Mee (2008a), we can prove the adjoint symmetry relations Ω_{r,l}(w) = σ₃Ω_{r,l}(w)†σ₃, (4.6) thus implying the symmetry relations (4.7) for the norming constants, provided λ_s = −λ_s* is a simple pole of A_{r,l}(λ)^{-1}. For the rather tedious details, we refer to Appendix B of Demontis and van der Mee (2021).
Example. Let us now solve the Marchenko integral equations (4.3) in the one-soliton case, where Ω_r(w; t) = e^{−a₀w}N_{r;0}(t) and Ω_l(w; t) = e^{a₀w}N_{l;0}(t) for a suitable eigenvalue λ₀ = ia₀ ∈ C^+. Then separation of variables yields closed-form expressions for K(x, y; t) and J(x, y; t), where the σ₃-Hermitian norming constants N_{r;0}(t) and N_{l;0}(t) will be expressed in terms of their initial values shortly. The off-diagonal parts of these expressions yield explicit formulas for Q_r − Q(x; t) and Q(x; t) − Q_l, respectively.
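The separation of variables can be checked directly in the scalar one-soliton case. The sketch below (toy values a₀ = 1, N_{r;0} = 0.8, time dependence suppressed) verifies that the separated ansatz K(x, y) = f(x)e^{−a₀y} solves the right Marchenko equation in the standard form K(x, y) + Ω_r(x + y) + ∫_x^∞ dz K(x, z)Ω_r(z + y) = 0, which we take here as the scalar form of (4.3):

```python
import numpy as np

a0, N = 1.0, 0.8
Omega = lambda w: N * np.exp(-a0 * w)                 # one-soliton kernel
# separated ansatz obtained by plugging K = f(x) e^{-a0 y} into the equation
f = lambda x: -N * np.exp(-a0 * x) / (1.0 + N * np.exp(-2.0 * a0 * x) / (2.0 * a0))
K = lambda x, y: f(x) * np.exp(-a0 * y)

x, y = 0.3, 0.7
z = np.linspace(x, x + 40.0, 100001)                  # truncated tail integral
resid = K(x, y) + Omega(x + y) + np.trapz(K(x, z) * Omega(z + y), z)
assert abs(resid) < 1e-6                              # the ansatz solves the equation
```

The same computation with Ω_l(w) = e^{a₀w}N_{l;0} and the integral over (−∞, x) handles the left equation.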

Time Evolution of the Scattering Data
In this section, we establish the time evolution of the scattering data of the matrix Schrödinger equation. We then go on to derive the Marchenko integral kernels as a function of time. These results allow us, in Sect. 6, to derive the reflectionless solutions of the integrable nonlocal equation (2.3) and hence of the focusing matrix NLS equation.
Recall that the integrable equation (2.3) arises as the zero curvature condition of the AKNS pair {X, T} given by (2.7). Thus, there exist nonsingular matrices C_{F_r}(λ; t) and C_{F_l}(λ; t) not depending on x ∈ R such that (5.1) holds. Then a simple differentiation yields (5.2), where the two left-hand sides do not depend on x ∈ R. Using (2.4), we now compute the x → ±∞ limits of the two right-hand sides by evaluating the matrix product

(1/(2iλ)) ×
[ iλe^{iλx}I_n    −e^{iλx}I_n  ]   [ −2iλ²σ₃         −2iσ₃Q_{r,l} ]   [ e^{−iλx}I_n      e^{iλx}I_n  ]
[ iλe^{−iλx}I_n    e^{−iλx}I_n ] · [ 2iλ²σ₃Q_{r,l}   −2iλ²σ₃      ] · [ −iλe^{−iλx}I_n   iλe^{iλx}I_n ]

and obtain (5.3), where λ ∈ C^+ ∪ R. Relating F_{r,l}(x, λ; t) by means of the equalities (5.5) [cf. (3.6)], where the factors A_{r,l}(λ; t) are given by the matrices in (5.6), and using that A_l(λ; t) = A_r(λ; t)^{-1}, we obtain from (5.6) the relations (5.7). Therefore, (5.5), (5.6), and (5.7) imply
Proposition 5.1. The reflection coefficients satisfy the differential equations (5.8), where 0 ≠ λ ∈ R. Moreover, for fixed λ the matrices σ₃R_{r,l}(λ; t) have time invariant traces.
Proof. Using (3.12), we compute the time derivative of the right reflection coefficients, where we have not written the dependence on (λ; t). Similarly, we compute the time derivative of the left reflection coefficients. Finally, since σ₃ up_{r,l}(λ)σ₃ = dn_{r,l}(λ), we arrive at (5.9), where the square brackets in the right-hand sides are matrix commutators. Consequently, [σ₃R_{r,l}]_t are traceless matrices.
Let us now derive the time evolution equations for the norming constants. First, writing (5.8) in the form (5.10) and computing the residues at the simple poles λ_s, we get (5.11). Next, using (5.2), we write (5.1) in the form (5.12). Using the standard block structure T = [T₁ T₂; T₃ T₄] as a 2 × 2 block matrix having m × m entries, from (5.12) we easily arrive at the identities (5.13). Differentiating (4.5a) with respect to t, utilizing both of (5.13), and applying (4.5a) as well as its derivative with respect to x, we obtain an identity in which we have omitted the arguments (x, λ_s; t), λ_s, and t. With the help of (5.11a), we rewrite the latter. Using (4.5a) once again and considering the x → +∞ asymptotics of the resulting expression to cancel the resulting common factors iF_l, we obtain (5.14). Analogously, differentiating (4.5b) with respect to t, utilizing both of (5.13), and applying (4.5b) as well as its derivative with respect to x, we obtain an identity in which we have omitted the arguments (x, λ_s; t), λ_s, and t. With the help of (5.11b), we rewrite the latter. Using (4.5b) once again and considering the x → −∞ asymptotics of the resulting expression to cancel the resulting common factors iF_r, we obtain (5.15). As in the proof of Proposition 5.1, we can prove that the matrices σ₃N_{r;s}(t) are similar to one another for all t, and so are the matrices σ₃N_{l;s}(t). Hence, the traces of σ₃N_{r;s}(t) and σ₃N_{l;s}(t) are time independent. Thus, the ranks of N_{r;s}(t) and N_{l;s}(t) are time independent. We recall [see (4.7)] that the norming constants corresponding to eigenvalues symmetrically located with respect to the imaginary axis are each other's σ₃-adjoints.
Let us now derive the differential equations for the Marchenko integral kernels. Using (4.2) and (5.3), we obtain the PDEs (5.16). Here we recall that R̂_{r,l}(α; t) are σ₃-Hermitian for all (α, t) ∈ R². Using (4.2) and Proposition 5.1, we see that the traces of σ₃R̂_{r,l}(α; t) are time independent. Using (5.16) and (4.4) to derive PDEs for the Marchenko integral kernels Ω_{r,l}(w; t), we obtain, with the help of (5.14) and (5.15), the same PDEs, where Ω_{r,l}(w; t) are σ₃-Hermitian for all (w, t) ∈ R². Hence, the reflection kernels R̂_{r,l}(w; t) and the Marchenko integral kernels Ω_{r,l}(w; t) satisfy the same PDEs. Finally, the traces of σ₃Ω_{r,l}(w; t) are time independent.
Recalling that Q_{r,l} are time independent, we observe that the matrices up_{r,l}(λ) and dn_{r,l}(λ) are time independent as well. We easily compute

e^{t up_{r,l}(λ)} = cos(2λkt)I_n + (sin(2λkt)/(2λk))[2iλ²σ₃ + 2λQ_{r,l}],   (5.18a)
e^{t dn_{r,l}(λ)} = cos(2λkt)I_n + (sin(2λkt)/(2λk))[2iλ²σ₃ − 2λQ_{r,l}],   (5.18b)

where k² = λ² − μ² and the expressions (5.18) are even functions of k for fixed λ [cf. Demontis et al. (2014), where these matrix groups also appear]. Using that the initial value problem for the corresponding matrix differential equation has a unique solution, we obtain the solutions of (5.9a) and (5.9b) in closed form. Because of (5.4a), the matrices σ₃R_r(λ; t) are similar for all t and so are the matrices σ₃R_l(λ; t). In the same way, we get the time evolution of the norming constants (5.20), where k_s² = λ_s² − μ² and the expressions (5.20) are even functions of k_s for fixed λ_s. Because of (5.4a), the matrices σ₃N_{r;s}(t) are similar for all t and so are the matrices σ₃N_{l;s}(t).
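Closed forms like (5.18) are instances of the elementary identity e^{tM} = cos(ωt)I + (sin(ωt)/ω)M, valid for any matrix with M² = −ω²I. A numerical sketch (toy 2 × 2 data; the matrix M below is one choice satisfying the constraint with ω = 2λk and k² = λ² − μ², chosen for illustration and not claimed to be the paper's up_{r,l}(λ)):

```python
import numpy as np

# If M @ M == -(w**2) I, then exp(t M) == cos(w t) I + (sin(w t)/w) M.
# Toy data: sigma3 = diag(1,-1) and Q with Q @ Q = -mu^2 I, Q anticommuting
# with sigma3; then M = 2i(lam^2 sigma3 + lam Q) gives w = 2 lam k.
lam, mu, t = 1.5, 1.0, 0.7
sigma3 = np.diag([1.0, -1.0])
Q = np.array([[0.0, mu], [-mu, 0.0]])
M = 2j * (lam ** 2 * sigma3 + lam * Q)
w = 2.0 * lam * np.sqrt(lam ** 2 - mu ** 2)           # w = 2 lam k
assert np.allclose(M @ M, -(w ** 2) * np.eye(2))
# exp(t M) via eigendecomposition (M is diagonalizable with eigenvalues +-iw)
vals, vecs = np.linalg.eig(t * M)
expM = vecs @ np.diag(np.exp(vals)) @ np.linalg.inv(vecs)
assert np.allclose(expM, np.cos(w * t) * np.eye(2) + np.sin(w * t) / w * M)
```

Since only w² = −M² enters, the right-hand side is manifestly an even function of k for fixed λ, matching the remark after (5.18).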

Multisoliton Solutions
In this section, we apply the matrix triplet method to write the reflectionless Marchenko integral kernels in separated form and solve the Marchenko equations by separation of variables. This method has been successfully applied to the Korteweg-de Vries (KdV) equation (Aden and Carl 1996; Aktosun and van der Mee 2006), the NLS equation (Aktosun et al. 2007; Demontis and van der Mee 2008b), the sine-Gordon equation (Schiebold 2002; Aktosun et al. 2010), the modified Korteweg-de Vries (mKdV) equation (Demontis 2011), the Toda lattice equation (Schiebold 1998), and the Heisenberg ferromagnet equation (Demontis et al. 2018, 2019). An introduction to this method can be found in van der Mee (2013). In contrast to earlier work, we allow the time factors in these triplets to be absorbed by both the input and output matrices. Before solving the Marchenko integral equations (4.3), we write the reflectionless Marchenko integral kernels in the form (6.1), where a_s = −iλ_s (s = 1, …, N). Then it is easily proved that, for s = 1, …, N, the norming constants N_{r;s}(t) and N_{l;s}(t) both have the same time-independent rank r_s. In fact, r_s coincides with the rank of the residues τ_{r;s} and τ_{l;s} of A_{r,l}(λ; t)^{-1} at λ = λ_s. Since σ₃N_{r;s}(t) and σ₃N_{l;s}(t) have σ₃N_{r;s}(t) and σ₃N_{l;s}(t) as their respective complex conjugate transposes whenever λ_s = −λ_s*, there exist n × r_s matrices e_{r;s}(t) and e_{l;s}(t) having r_s orthonormal columns and spanning the ranges of σ₃N_{r;s}(t) and σ₃N_{l;s}(t), and time-independent diagonal r_s × r_s matrices d_{r;s} = d_{r;s}† and d_{l;s} = d_{l;s}† having only nonzero diagonal entries, such that

σ₃N_{r;s}(t) = e_{r;s}(t)d_{r;s}e_{r;s}(t)†,   σ₃N_{l;s}(t) = e_{l;s}(t)d_{l;s}e_{l;s}(t)†,   (6.2)

whenever λ_s = −λ_s*.
Furthermore, if λ_s = −λ_s* is purely imaginary and therefore σ₃N_{r;s}(t) and σ₃N_{l;s}(t) are Hermitian matrices, the number of positive and negative diagonal entries of d_{r;s} and d_{l;s} corresponds to the (time-independent) number of positive and negative eigenvalues of σ₃N_{r;s}(t) and σ₃N_{l;s}(t).
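Factorizations like (6.2) come from the spectral theorem; for a Hermitian matrix one keeps the nonzero eigenvalues and the corresponding orthonormal eigenvectors. A small sketch with toy data (not from the paper):

```python
import numpy as np

# Any Hermitian matrix H of rank r can be written H = E @ D @ E.conj().T with
# E having r orthonormal columns and D a real diagonal r x r matrix with
# nonzero entries; the signs of D's entries count H's positive/negative
# eigenvalues (toy 3x3 example of rank 2).
H = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
w, V = np.linalg.eigh(H)
keep = np.abs(w) > 1e-12
E, D = V[:, keep], np.diag(w[keep])
assert E.shape[1] == 2                                # rank of H
assert np.allclose(E.conj().T @ E, np.eye(2))         # orthonormal columns
assert np.allclose(E @ D @ E.conj().T, H)             # the factorization
```

When λ_s is not purely imaginary, σ₃N_{r;s}(t) is no longer Hermitian and the paired eigenvalues ±λ_s must be treated jointly, as in (4.7).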
Now define the matrix triplets as follows:

A_r = A_l = a₁I_{r₁} ⊕ … ⊕ a_N I_{r_N},   C_r = (σ₃e_{r;1} … σ₃e_{r;N}),   C_l = (σ₃e_{l;1} … σ₃e_{l;N}),   (6.5b)

where we have not written the time dependence. Then the Marchenko integral kernels in (6.1) are given by

Ω_r(w; t) = C_r(t)e^{−wA_r}B_r(t),   (6.6a)
Ω_l(w; t) = C_l(t)e^{wA_l}B_l(t),   (6.6b)

where the q × q matrices A_{r,l} have only eigenvalues with positive real parts, B_{r,l}(t) are q × n matrices, and C_{r,l}(t) are n × q matrices.
Let us now depart from arbitrary Marchenko integral kernels (6.6), where the q × q matrices A_{r,l} have only eigenvalues with positive real parts, B_{r,l}(t) are q × n matrices, C_{r,l}(t) are n × q matrices, and the specific expressions (6.4) and (6.5) need not be applied. Solving the Marchenko integral equations (4.3), we get

K(x, y; t) = −W_r(x; t)e^{−yA_r}B_r(t),   (6.7a)
J(x, y; t) = −W_l(x; t)e^{yA_l}B_l(t),   (6.7b)

where

W_r(x; t) = C_r(t)e^{−xA_r} + ∫_x^∞ dz K(x, z; t)C_r(t)e^{−zA_r},

and analogously for W_l(x; t). Substituting (6.7) into (4.3) and solving for W_{r,l}(x; t), we get

W_r(x; t) = C_r(t)e^{−xA_r}[I_q + e^{−xA_r}P_r(t)e^{−xA_r}]^{-1},
W_l(x; t) = C_l(t)e^{xA_l}[I_q + e^{xA_l}P_l(t)e^{xA_l}]^{-1},

provided the inverse matrices exist. Here

P_{r,l}(t) = ∫_0^∞ dz e^{−zA_{r,l}}B_{r,l}(t)C_{r,l}(t)e^{−zA_{r,l}}

are the unique solutions of the Sylvester equations

A_{r,l}P_{r,l}(t) + P_{r,l}(t)A_{r,l} = B_{r,l}(t)C_{r,l}(t).
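The integral representation of P_{r,l} and the Sylvester equation can be cross-checked numerically; a sketch with a toy 2 × 2 triplet (diagonal A, so the Sylvester solution is explicit; data not from the paper):

```python
import numpy as np

# For a triplet (A, B, C) with the eigenvalues of A in the open right
# half-plane, P = int_0^inf e^{-zA} B C e^{-zA} dz solves A P + P A = B C.
A = np.diag([1.0, 2.0])
B = np.array([[1.0], [0.5]])
C = np.array([[2.0, -1.0]])
M = B @ C
a = np.diag(A)
P = M / (a[:, None] + a[None, :])        # closed form for diagonal A
assert np.allclose(A @ P + P @ A, M)     # Sylvester equation holds

# quadrature check of the integral representation of P
z = np.linspace(0.0, 40.0, 100001)
Pq = np.array([[np.trapz(np.exp(-z * a[i]) * M[i, j] * np.exp(-z * a[j]), z)
                for j in range(2)] for i in range(2)])
assert np.allclose(P, Pq)
```

The positivity of the real parts of the eigenvalues of A is exactly what makes the integral converge and the Sylvester equation uniquely solvable.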
More precisely, given (x, t) ∈ R², the Marchenko integral equations (4.3) are uniquely solvable (in an L¹-setting) iff the algebraic equations for W_{r,l}(x; t) are uniquely solvable. Consequently, we obtain (6.8). Using (3.17), we obtain (6.10), and, using the partitioning and assuming the nonsingularity of P_{r,l}(t), we obtain Q(x; t) − Q_r explicitly in terms of the triplet matrices A_r, B_{r,l}(t), C_{r,l}(t), and P_r(t).

Finally, let us apply Proposition A.1 to V(x, λ) = F_l(x, λ) and W(x, λ) = F_r(x, λ). For 0 ≠ λ ∈ R, equating the x → +∞ asymptotics to the x → −∞ asymptotics and dividing the resulting equation by 2λ, we arrive at the two identities (A.3). Identities (A.3) coincide with (3.10). Equation (3.11) can easily be derived from the x-independence of the corresponding bilinear expression for given solutions V(x, λ) and W(x, λ) of (3.7).