Quaternion Algebra Approach to Nonlinear Schr\"odinger Equations with Nonvanishing Boundary Conditions

In this article we apply quaternionic linear algebra and quaternionic linear system theory to develop the inverse scattering transform theory for the nonlinear Schr\"odinger equation with nonvanishing boundary conditions. We also determine its soliton solutions by using triplets of quaternionic matrices.


Introduction
The initial-value problem for the focusing nonlinear Schrödinger (NLS) equation

iq_t + q_{xx} − 2|q|^2 q = 0, (1.1)

with nonvanishing boundary conditions q(x, t) → q_{r,l}(t) as x → ±∞, where q_{r,l}(t) = µ e^{−2iµ^2 t + iθ_{r,l}} for a positive constant µ and phases θ_{r,l} ∈ R, has been abundantly studied using the inverse scattering transform (IST) technique [20,9,14,10]. In [8] the IST with full account of the spectral singularities has led to rogue wave solutions of the focusing NLS equation with nonvanishing boundary conditions. Throughout this article we study instead of (1.1) the NLS-like equation

iq_t + q_{xx} − 2|q|^2 q + 2µ^2 q = 0, (1.2)

obtained from (1.1) by the gauge transformation that replaces q(x, t) by e^{−2iµ^2 t} q(x, t); the gauge-transformed q(x, t) tends to the time invariant limits q_{r,l} = µ e^{iθ_{r,l}} as x → ±∞.
In [17] a new method to solve the initial-value problem of the matrix NLS equation by means of the inverse scattering transform technique was introduced. Instead of determining the time evolution of the scattering data associated with the Zakharov-Shabat system v_x = (−ikσ_3 + Q)v and solving the Marchenko integral equations associated with the time dependent scattering data (as in [14]), we determined the time evolution of the scattering data associated with the matrix Schrödinger equation −ψ_{xx} + Qψ = λ^2 ψ, where Q = Q^2 + Q_x + µ^2 I_2 and λ = √(k^2 + µ^2) is the conformal mapping defined for all complex k cut along [−iµ, iµ] and satisfying λ ∼ k at infinity. Since this conformal mapping k ↦ λ is one-to-one from the upper half-plane C^+ cut along (i0, iµ] onto C^+, this has led to a great simplification compared to the treatment based on the Zakharov-Shabat system v_x = (−ikσ_3 + Q)v given in [9,14,10].
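As a quick illustration (not part of the original analysis), the branch of the conformal mapping k ↦ λ can be realized numerically as λ = k√(1 + (µ/k)^2) with the principal square root, which enforces λ ∼ k at infinity and keeps the image of the cut upper half-plane inside C^+:

```python
import numpy as np

def lam(k, mu=1.0):
    """Conformal mapping lambda = sqrt(k^2 + mu^2) with the branch lambda ~ k
    at infinity, written as k*sqrt(1 + (mu/k)^2) (principal square root)."""
    k = complex(k)
    return k * np.sqrt(1.0 + (mu / k) ** 2)

# lambda ~ k at infinity
assert abs(lam(1e6 + 1e6j) / (1e6 + 1e6j) - 1.0) < 1e-6

# sample points of C+ away from the cut (i0, i*mu] are mapped into C+
for k in (0.3 + 0.2j, -1.5 + 0.7j, -0.1 + 2.0j, 2.0 + 0.01j):
    assert lam(k).imag > 0

# real k is mapped to real lambda with |lambda| >= mu
assert abs(lam(0.7).imag) < 1e-12 and lam(0.7).real >= 1.0
```

For real k the sketch also confirms that the image of the real k-axis avoids the band |λ| < µ, consistent with the cut along [−iµ, iµ].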
In this article we restrict ourselves to solving the initial-value problem for the 1 + 1 focusing NLS equation. The advantage of this restriction is that the potential Q satisfies the symmetry relation

Q(x)^* = σ_2 Q(x) σ_2, (1.4)

where the asterisk denotes complex conjugation without transposition and σ_2 = ( 0 −i ; i 0 ) is the second Pauli matrix. Using the algebra isomorphism between the algebra Σ of complex 2 × 2 matrices S satisfying S^* = σ_2 S σ_2 and the division ring H of quaternions [23], we can reduce the solution of the Marchenko integral equations, and hence of the inverse scattering problem for the matrix Schrödinger equation −ψ_{xx} + Qψ = λ^2 ψ, to calculations involving quaternions.
In this article we rely significantly on the direct and inverse scattering theory for the matrix Schrödinger equation developed in [3,5,6] on the half-line and in [37,30,4] on the full line, albeit with some modifications due to the symmetry relation (1.4). For technical reasons we assume throughout this article a suitable integrability condition on the potential. For the various applications of the matrix Schrödinger equation with selfadjoint potential we refer to [6].
Let us discuss the contents of this article. In Sec. 2 we review the direct and inverse scattering theory of the matrix Schrödinger equation with the symmetry relation (1.4), where we essentially rely on the more general scattering theory given in [16,17]. In Sec. 3 we discuss the time evolution of the scattering data. In Sec. 4 we discuss matrices having quaternion elements and their isomorphic images of double matrix order. Here we rely on the seminal monograph on quaternionic matrices by Rodman [33]. Section 5 is devoted to the multisoliton solutions of the AKNS system with nonvanishing boundary conditions, parametrized by choosing minimal triplets of quaternionic matrices. Results on the invertibility of the Sylvester solutions P_r and P_l appearing in the multisoliton solutions are relegated to Appendix A.

Direct and Inverse Scattering
In this article we discuss the direct and inverse scattering theory for the matrix Schrödinger equation

−ψ_{xx}(x, λ) + Q(x)ψ(x, λ) = λ^2 ψ(x, λ), x ∈ R, (2.1)

where the complex 2 × 2 potential Q satisfies the symmetry relation

Q(x)^* = σ_2 Q(x) σ_2, (2.2)

and hence belongs to the algebra Σ = {S ∈ C^{2×2} : S^* = σ_2 S σ_2}. This potential Q also satisfies the adjoint symmetry relation

Q(x)^† = σ_3 Q(x) σ_3, (2.3)

where σ_3 = ( 1 0 ; 0 −1 ) is the third Pauli matrix. Hence, by virtue of (2.3), all of the results on the direct and inverse scattering theory of (2.1) developed in [16,17] go through in the present situation, although we need to discuss the impact of the more restrictive symmetry relation (2.2) on the results separately.
Let us define the Jost solution from the left F_l(x, λ) and the Jost solution from the right F_r(x, λ) as those solutions of the matrix Schrödinger equation (2.1) which satisfy the asymptotic conditions

F_l(x, λ) = e^{iλx}[I_2 + o(1)], x → +∞, (2.4a)
F_r(x, λ) = e^{−iλx}[I_2 + o(1)], x → −∞. (2.4b)

Calling m_l(x, λ) = e^{−iλx}F_l(x, λ) and m_r(x, λ) = e^{iλx}F_r(x, λ) Faddeev functions, we easily define them as the unique solutions of the Volterra integral equations

m_l(x, λ) = I_2 + ∫_x^∞ dy [(e^{2iλ(y−x)} − 1)/(2iλ)] Q(y) m_l(y, λ), (2.5a)
m_r(x, λ) = I_2 + ∫_{−∞}^x dy [(e^{2iλ(x−y)} − 1)/(2iλ)] Q(y) m_r(y, λ). (2.5b)

Then, for each x ∈ R, m_l(x, λ) and m_r(x, λ) are continuous in λ ∈ C^+ ∪ R, are analytic in λ ∈ C^+, and tend to I_2 as λ → ∞ from within C^+ ∪ R. For 0 ≠ λ ∈ R we can reshuffle (2.5) and arrive at the asymptotic relations

F_l(x, λ) = e^{iλx}A_l(λ) + e^{−iλx}B_l(λ) + o(1), x → −∞, (2.6a)
F_r(x, λ) = e^{−iλx}A_r(λ) + e^{iλx}B_r(λ) + o(1), x → +∞, (2.6b)

where

A_{r,l}(λ) = I_2 − (1/2iλ) ∫_{−∞}^∞ dy Q(y) m_{r,l}(y, λ), (2.7a)
B_{r,l}(λ) = (1/2iλ) ∫_{−∞}^∞ dy e^{∓2iλy} Q(y) m_{r,l}(y, λ). (2.7b)

By the same token, B_{r,l}(λ) is continuous in 0 ≠ λ ∈ R, vanishes as λ → ±∞, and satisfies 2iλB_{r,l}(λ) → −∆_{r,l} as λ → 0 along the real λ-axis.
Using the transformation F(x, λ) ↦ σ_2 F(x, −λ^*)^* σ_2 in the matrix Schrödinger equation (2.1), we easily prove the symmetry relations

F_{r,l}(x, λ) = σ_2 F_{r,l}(x, −λ^*)^* σ_2, λ ∈ C^+ ∪ R. (2.8)

With the help of (2.6) we then obtain the symmetry relations

A_{r,l}(λ) = σ_2 A_{r,l}(−λ^*)^* σ_2, B_{r,l}(λ) = σ_2 B_{r,l}(−λ)^* σ_2.

Introducing the reflection coefficients R_{l,r}(λ) = B_{l,r}(λ) A_{l,r}(λ)^{−1} for 0 ≠ λ ∈ R, we easily obtain the symmetry relations R_{l,r}(λ) = σ_2 R_{l,r}(−λ)^* σ_2, provided det A_{l,r}(λ) ≠ 0. Above we have defined ∆_{l,r} as follows:

∆_{l,r} = lim_{λ→0} 2iλ A_{l,r}(λ) = −lim_{λ→0} 2iλ B_{l,r}(λ),

where the first limit may be taken from the closed upper half-plane. Then the matrices ∆_l and ∆_r have the same determinant. If ∆_{l,r} is nonsingular, we are said to be in the generic case; if instead ∆_{l,r} is singular, we are said to be in the exceptional case (cf. [4]). We are said to be in the superexceptional case if ∆_{l,r} = 0_{2×2} and A_{l,r}(λ) tends to a nonsingular matrix, A_{l,r}(0) say, as λ → 0 from within C^+ ∪ R. It is clear that ∆_{l,r} = σ_2 ∆_{l,r}^* σ_2. Throughout this article (as well as in [17]) we assume the absence of spectral singularities, i.e., the absence of nonzero real λ for which det A_{l,r}(λ) = 0. Under this condition the reflection coefficients R_{l,r}(λ) are continuous in 0 ≠ λ ∈ R. For general potentials Q satisfying (2.2) or (2.3) there may very well be spectral singularities (see [29,8] for focusing AKNS examples), even though spectral singularities do not occur if Q^† = Q [29,6,17].
The Jost solutions admit the triangular representations

F_l(x, λ) = e^{iλx} I_2 + ∫_x^∞ dy K(x, y) e^{iλy}, (2.12a)
F_r(x, λ) = e^{−iλx} I_2 + ∫_{−∞}^x dy J(x, y) e^{−iλy}. (2.12b)

Then the potential Q(x) can be found from the auxiliary functions K(x, y) and J(x, y) as follows:

Q(x) = −2 (d/dx) K(x, x) = 2 (d/dx) J(x, x). (2.13)

Equations (2.8) and (2.12) imply the symmetry relations

K(x, y)^* = σ_2 K(x, y) σ_2, J(x, y)^* = σ_2 J(x, y) σ_2. (2.14)

Thus the auxiliary functions K(x, y) and J(x, y) belong to the algebra Σ.
Let us write the reflection coefficients in the form

R_{l,r}(λ) = ∫_{−∞}^∞ dα e^{∓iλα} R̃_{l,r}(α), 0 ≠ λ ∈ R, (2.15)

where R̃_{l,r} ∈ L^1(R)^{2×2}. Although this Fourier representation has only been proved under the absence of spectral singularities assumption in the generic case and in the superexceptional case [16], we assume it to be true in the most general exceptional case as well. We then easily prove the symmetry relations

R̃_{l,r}(α)^* = σ_2 R̃_{l,r}(α) σ_2. (2.16)

Thus the functions R̃_l(α) and R̃_r(α) belong to the algebra Σ. So far we have only discussed the direct scattering problem for (2.1). The inverse scattering problem can be solved by computing one of the auxiliary functions K(x, y) or J(x, y) as the solution of one of the Marchenko integral equations

K(x, y) + Ω_r(x + y) + ∫_x^∞ dz K(x, z) Ω_r(z + y) = 0_{2×2}, y > x, (2.17a)
J(x, y) + Ω_l(x + y) + ∫_{−∞}^x dz J(x, z) Ω_l(z + y) = 0_{2×2}, y < x, (2.17b)

followed by an application of one of (2.13). Here the Marchenko integral kernels Ω_{l,r}(w) are given by

Ω_r(w) = R̃_r(w) + Σ_{s=1}^N N_{r;s} e^{iλ_s w}, (2.18a)
Ω_l(w) = R̃_l(w) + Σ_{s=1}^N N_{l;s} e^{−iλ_s w}, (2.18b)

where we assume the poles λ_s (s = 1, …, N) of the transmission coefficients A_{l,r}(λ)^{−1} to be simple; in that case the so-called norming constants N_{r;s} and N_{l;s} are defined in terms of the residues τ_{r;s} and τ_{l;s} of A_r(λ)^{−1} and A_l(λ)^{−1} at the simple pole λ_s ∈ C^+ (s = 1, …, N). If there exist multiple poles of A_{l,r}(λ)^{−1} in C^+, then the expressions for Ω_{r,l}(w) − R̃_{r,l}(w) can be derived in a straightforward way as finite sums of polynomials times exponentials, which obviously are entire analytic functions of w. We can then prove the symmetry relations

Ω_{r,l}(w)^* = σ_2 Ω_{r,l}(w) σ_2. (2.20)

Thus the Marchenko kernels Ω_r(w) and Ω_l(w) belong to the algebra Σ. The proof can be based on (a) the unique solvability of the Marchenko equations (for Ω_{r,l} as unknowns with the auxiliary functions assumed to be known) for large enough ±x, (b) the symmetry relations (2.16), and (c) the analyticity of the functions Ω_{r,l}(w) − R̃_{r,l}(w) in w ∈ R. We refer to [15] for the rather technical details.
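To illustrate the closing claim numerically, the following sketch (illustrative only) discretizes the right Marchenko equation, taken here in the standard form K(x, y) + Ω_r(x + y) + ∫_x^∞ dz K(x, z) Ω_r(z + y) = 0, with an artificial quaternion-valued kernel Ω_r(w) = q e^{−w} that does not come from actual scattering data. The computed solution blocks again satisfy the quaternion symmetry, in line with the statement that K(x, ·) belongs to Σ:

```python
import numpy as np

s2 = np.array([[0, -1j], [1j, 0]])            # second Pauli matrix

def in_Sigma(S, tol=1e-10):
    """Check the quaternion condition S* = sigma2 S sigma2 entrywise."""
    return np.allclose(S.conj(), s2 @ S @ s2, atol=tol)

# artificial quaternion-valued Marchenko kernel Omega(w) = q * exp(-w)
q = np.array([[1 + 2j, 3 + 4j], [-3 + 4j, 1 - 2j]])
assert in_Sigma(q)

x, n, h = 0.5, 200, 0.05                      # collocation grid on [x, x + n*h]
y = x + h * np.arange(n)
Omega = lambda w: q * np.exp(-w)

# discretize the integral equation as a block linear system for the
# row of 2x2 blocks K(x, y_j):  K (I + M) = -[Omega(x + y_j)]_j
I = np.eye(2 * n)
M = np.zeros((2 * n, 2 * n), dtype=complex)   # blocks h * Omega(y_i + y_j)
for i in range(n):
    for j in range(n):
        M[2*i:2*i+2, 2*j:2*j+2] = h * Omega(y[i] + y[j])
rhs = np.hstack([-Omega(x + yj) for yj in y]) # 2 x 2n right-hand side
K = rhs @ np.linalg.inv(I + M)                # 2 x 2n row of solution blocks

# every block K(x, y_j) again satisfies the quaternion symmetry
for j in range(n):
    assert in_Sigma(K[:, 2*j:2*j+2])
```

The design point is that I + M has all of its 2 × 2 blocks in Σ, and products and inverses of such block matrices stay blockwise in Σ, so the discrete solution inherits the symmetry exactly.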

Time evolution
Straightforward calculations imply [17] that any solution of the matrix NLS-like equation (1.3) with nonvanishing time invariant limits Q_{r,l} of Q(x; t) as x → ±∞ is a solution of the nonlinear evolution equation (3.2). The pair of 4 × 4 matrices (X, T) is an AKNS pair for the nonlinear evolution equation (3.2) in the sense that the zero curvature condition

X_t − T_x + XT − TX = 0

is satisfied iff Q(x; t) is a solution of (3.2) (see [17]). It is then easily verified that T(x, t, λ) tends to limits T_{r,l}(λ) as x → ±∞. Following [17], we introduce the Jost solutions F_{r,l}(x, λ; t) of the first order system v' = X(x, t, λ)v, where the prime denotes differentiation with respect to x. Letting V(x, λ; t) be a nonsingular 4 × 4 matrix solution of the pair of first order equations

V_x = X(x, t, λ)V, V_t = T(x, t, λ)V, (3.5)

the fact that F_{r,l}(x, λ; t) satisfies the first of (3.5) implies the existence of nonsingular matrices C_{F_{r,l}}(λ; t) not depending on x such that F_{r,l}(x, λ; t) = V(x, λ; t) C_{F_{r,l}}(λ; t). Then a simple differentiation yields an identity whose left-hand side does not depend on x and hence equals the limits of the right-hand side as x → ±∞. Using (3.4) we easily get the time evolution of the transition matrices A_{r,l}(λ; t) and B_{r,l}(λ; t), in which certain combinations of the scattering data are time invariant. Then we easily verify that the symmetry relations of Sec. 2 hold for A_{r,l}(λ; t) and B_{r,l}(λ; t) at each time t (3.9). The reflection coefficients then satisfy linear first order evolution equations (3.10). Defining R̃_{r,l}(α; t) by (2.15), we easily derive the PDEs (3.11) satisfied by R̃_{r,l}(α; t), provided the Fourier representation (2.15) of R̃_{r,l}(α; t) converges for every t ∈ R. Using (3.11) and the time evolution properties of the norming constants [17, (4.4)] we obtain that the reflection kernels R̃_{r,l}(α; t) and the Marchenko integral kernels Ω_{r,l}(w; t) satisfy the same PDEs. We have also seen before that R̃_{r,l}(α; t) and Ω_{r,l}(w; t) belong to the algebra Σ.
Quaternionic matrix algebra

Let Σ stand for the (noncommutative) division ring of complex 2 × 2 matrices S satisfying S^* = σ_2 S σ_2. Then it is easily verified [33] that Σ is isomorphic (as a real unital algebra) to the noncommutative division ring H of quaternions by means of the isomorphism ϕ defined by

ϕ(aI_2 + b(iσ_3) + c(iσ_2) + d(iσ_1)) = a + bi + cj + dk, a, b, c, d ∈ R,

where {1, i, j, k} is the standard quaternion basis and σ_1 = ( 0 1 ; 1 0 ) is the first Pauli matrix. Thus {I_2, iσ_3, iσ_2, iσ_1} is the basis of the real vector space Σ that corresponds to the quaternion basis {1, i, j, k} by means of ϕ.
For S = aI_2 + b(iσ_3) + c(iσ_2) + d(iσ_1) ∈ Σ we see that det S = a^2 + b^2 + c^2 + d^2 coincides with the squared quaternion length of ϕ(S).
The map ϕ has a natural extension as a real algebra isomorphism from Σ^{p×p} onto H^{p×p}, the algebras of p × p matrices with entries in Σ and H, respectively. For p ≠ r there also exists a natural extension from the real linear space Σ^{p×r} onto H^{p×r}.
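The isomorphism ϕ is easy to check computationally. The sketch below (with hypothetical helper names; the article defines ϕ only abstractly) realizes ϕ^{−1} on quadruples (a, b, c, d) and verifies multiplicativity against the Hamilton product, the quaternion symmetry, and the determinant formula:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def phi_inv(q):
    """phi^{-1}: quaternion (a, b, c, d) = a + bi + cj + dk -> 2x2 matrix in Sigma."""
    a, b, c, d = q
    return a * I2 + b * (1j * s3) + c * (1j * s2) + d * (1j * s1)

def qmul(p, q):
    """Hamilton product of quaternions given as 4-tuples."""
    a1, b1, c1, d1 = p; a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

p, q = (1.0, 2.0, -1.0, 0.5), (0.3, -0.7, 2.0, 1.1)
S, T = phi_inv(p), phi_inv(q)

# phi^{-1} is multiplicative: phi^{-1}(pq) = phi^{-1}(p) phi^{-1}(q)
assert np.allclose(phi_inv(qmul(p, q)), S @ T)

# membership in Sigma: S* = sigma2 S sigma2 (conjugation without transposition)
assert np.allclose(S.conj(), s2 @ S @ s2)

# det S equals the squared quaternion length a^2 + b^2 + c^2 + d^2
assert np.isclose(np.linalg.det(S), sum(x * x for x in p))
```

In particular the basis correspondence gives (iσ_3)(iσ_2) = iσ_1, mirroring ij = k in H.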
For later use we introduce the similarity orbit of a matrix S ∈ Σ, i.e., the set {TST^{−1} : 0 ≠ T ∈ Σ}.

Determinants and quaternionic linear algebra
Since multiplication of quaternion numbers is noncommutative, there is no obvious way to define the determinant of square quaternion matrices. Fortunately, the map ϕ allows one to define the determinant of a quaternion p × p matrix M ∈ H^{p×p} as the determinant of the complex 2p × 2p matrix ϕ^{−1}(M) (cf. [33, Ch. 5]). For alternative ways to define determinants of square quaternionic matrices we refer to [35,18,12] and references therein.
Theorem 4.1 Every matrix S ∈ Σ^{p×p} has a nonnegative determinant.

This theorem has been proved by Rodman [33, Th. 5.9.2] using the quaternionic Jordan normal form. Below we present an independent proof based on Schur complements (cf. [19, Sec. 1.7] and references therein). Write S ∈ Σ^{p×p} in 2 × 2 blocks S_{ij} ∈ Σ (i, j = 1, …, p). If det S_{11} > 0, we have the Schur complement identity

det(S) = det(S_{11}) det(S^{(1)}), (4.4)

where the Schur complement S^{(1)} again belongs to Σ^{(p−1)×(p−1)}. Under the induction hypothesis that all matrices in Σ^{(p−1)×(p−1)} have a nonnegative determinant, we see from (4.4) that any matrix S ∈ Σ^{p×p} satisfying det S_{11} > 0 has a nonnegative determinant. If instead det S_{j1} > 0 for some j, we switch the first and j-th double rows without changing the determinant and repeat the above Schur complement argument to conclude that det(S) ≥ 0. If finally det S_{j1} = 0 for every j, the first double column of S vanishes and det(S) = 0.
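A numerical spot-check of this nonnegativity (illustrative only): random matrices in Σ^{p×p}, assembled blockwise from the basis {I_2, iσ_3, iσ_2, iσ_1}, always show a real nonnegative determinant.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [np.eye(2, dtype=complex), 1j * s3, 1j * s2, 1j * s1]

def random_sigma_matrix(p, rng):
    """Random element of Sigma^{p x p}, returned as a complex 2p x 2p matrix."""
    S = np.zeros((2 * p, 2 * p), dtype=complex)
    for i in range(p):
        for j in range(p):
            coeffs = rng.standard_normal(4)
            S[2*i:2*i+2, 2*j:2*j+2] = sum(c * E for c, E in zip(coeffs, basis))
    return S

rng = np.random.default_rng(0)
for p in (1, 2, 3, 5):
    for _ in range(20):
        d = np.linalg.det(random_sigma_matrix(p, rng))
        assert abs(d.imag) < 1e-8 * max(1.0, abs(d))   # determinant is real ...
        assert d.real > -1e-8                          # ... and nonnegative
```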

Jordan normal form and matrix triplets
The following theorem can be obtained from [33, Thm. 5.5.3] upon application of ϕ^{−1}.
Theorem 4.2 For every S ∈ Σ^{p×p} there exist positive integers m_1, …, m_k adding up to p and matrices A^{[1]}, …, A^{[k]} ∈ Σ such that S is similar to the direct sum

J_{m_1}(A^{[1]}) ⊕ … ⊕ J_{m_k}(A^{[k]}) (4.5)

by means of a similarity transformation belonging to Σ^{p×p}. The Σ-Jordan normal form (4.5) is unique up to changing the order of the terms in the direct sum and replacing the matrices A^{[1]}, …, A^{[k]} by matrices in the same similarity orbit.
It should be noted that the Σ-Jordan normal form (or: the quaternionic Jordan normal form discussed at length in [33]) differs from the usual complex Jordan normal form. Since each A^{[s]} ∈ Σ, say with ϕ(A^{[s]}) = a_s + b_s i + c_s j + d_s k, has the complex eigenvalues a_s ± i√(b_s^2 + c_s^2 + d_s^2), the corresponding complex Jordan normal form is obtained from (4.5) as follows:

1. If A^{[s]} is the diagonal matrix a_s I_2, we replace J_{m_s}(A^{[s]}) by the direct sum of Jordan blocks J_{m_s}(a_s) ⊕ J_{m_s}(a_s).

2. If A^{[s]} is not a real multiple of I_2, we replace J_{m_s}(A^{[s]}) by the direct sum of the two Jordan blocks of order m_s at the complex conjugate eigenvalues a_s ± i√(b_s^2 + c_s^2 + d_s^2).
Let (A, B, C) be a triplet consisting of a p × p matrix A with entries in Σ, a p × 1 matrix B with entries in Σ, and a 1 × p matrix C with entries in Σ. Then this matrix triplet is called minimal if the matrix order of A is minimal among all triplets for which Ce^{−zA}B is the same Σ-valued function of z ∈ R. According to Theorem 4.2, given a minimal triplet (A, B, C) of matrices with entries in Σ, there exists an invertible S ∈ Σ^{p×p} such that SAS^{−1} has the Jordan normal form (4.5) and the triplet (SAS^{−1}, SB, CS^{−1}) is minimal.
Theorem 4.3 Suppose (A, B, C) is a triplet of size compatible matrices with entries in Σ, where the eigenvalues of A all have positive real part, and suppose that A has been brought into the Σ-Jordan normal form (4.5). Then the triplet is minimal if and only if no pair of the matrices A^{[1]}, …, A^{[k]} belongs to the same similarity orbit and, for each Jordan block in (4.5), the bottom Σ-entry of the corresponding part of B and the leading Σ-entry of the corresponding part of C are nonzero.

Let us sketch the argument for a single Jordan block J_m(A^{[1]}), for which e^{−zJ_m(A^{[1]})} is an upper triangular Toeplitz matrix with entries in Σ. Letting X be the column with entries X_1, …, X_m ∈ Σ, the identity X^T e^{−zJ_m(A^{[1]})} B = 0_{2×2} for all z ∈ R allows us to express each X_j linearly in terms of X_{j+1}, …, X_m (j = 1, 2, …, m − 1) and to conclude that X_m = 0_{2×2}. Thus X_1 = … = X_m = 0_{2×2}. In other words, if the bottom Σ-entry of B is nonzero, then the pair (A, B) is controllable. In the same way we prove that the pair (C, A) is observable if the leading Σ-entry of C is nonzero.

Soliton solutions using matrix triplets
Let us now solve the right and left Marchenko equations (2.17a) and (2.17b) for reflectionless Marchenko kernels (2.18a) and (2.18b), i.e., for vanishing reflection coefficients R_{r,l}(λ; t).

Minimal matrix triplet representations
Since the Marchenko kernels Ω_{l,r}(w; t) are finite linear combinations of the exponentials e^{±iλ_s w} (s = 1, 2, …, N) and of polynomials in w multiplied by such exponentials, with time dependent coefficients, there exist a square matrix A of even order 2p whose eigenvalues have positive real parts, 2p × 2 matrices B_r and B_l, 2 × 2p matrices C_r and C_l, and a 2p × 2p matrix H commuting with A such that

Ω_r(z, t) = C_r e^{−zA} e^{tH} B_r, Ω_l(z, t) = C_l e^{zA} e^{tH} B_l. (5.1)

The representations (5.1) are chosen in such a way that the order of the complex matrix A is minimal among all representations (5.1) of the same Marchenko kernels Ω_r(z, t) and Ω_l(z, t). In that case 2p coincides with the sum of the algebraic multiplicities of the discrete eigenvalues in C^+ (which is N if the discrete eigenvalues are algebraically simple, as assumed so far). Moreover, for any pair of minimal representations (5.1) [where the matrices in the second pair carry a prime or double prime, respectively], there exist unique nonsingular 2p × 2p complex matrices S and S̄ such that [7, Ch. 1]

(A', B', C', H') = (SAS^{−1}, SB, CS^{−1}, SHS^{−1}), (5.2a)
(A'', B'', C'', H'') = (S̄AS̄^{−1}, S̄B, CS̄^{−1}, S̄HS̄^{−1}). (5.2b)

In other words, choosing the primed and double primed matrix quadruplets to be (A^*, B_{r,l}^* σ_2, σ_2 C_{r,l}^*, H^*), the symmetry relations (2.20) for the Marchenko kernels imply the existence of unique nonsingular 2p × 2p matrices S and S̄ such that

A^* = SAS^{−1}, B_r^* σ_2 = SB_r, σ_2 C_r^* = C_r S^{−1}, H^* = SHS^{−1}, (5.4a)

and similarly for the left triplet with S̄. Taking complex conjugates we get

A = S^* A^* (S^*)^{−1}, −B_r σ_2 = S^* B_r^*, −σ_2 C_r = C_r^* (S^*)^{−1}, H = S^* H^* (S^*)^{−1}. (5.4b)

The uniqueness of the similarity transformations S and S̄ then implies that S^* S = −I_{2p}, i.e., S^* = −S^{−1}, and likewise S̄^* = −S̄^{−1}. We observe that the minimal matrix triplets (A_r, B_r, C_r) and (A_l, B_l, C_l) need not consist of matrices having their entries in Σ, even though the expressions C_r e^{−zA_r} B_r and C_l e^{zA_l} B_l belong to Σ for each z ∈ R.
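The representation (5.1) can be illustrated, discarding the time factor, for a simple reflectionless kernel. The sketch below uses hypothetical eigenvalues λ_s ∈ C^+ and norming constants n_s and realizes Ω(z) = Σ_s n_s e^{iλ_s z} I_2 with A = diag(−iλ_s) ⊗ I_2, whose eigenvalues indeed have positive real parts:

```python
import numpy as np

lams = np.array([0.5 + 1.0j, -0.3 + 2.0j])   # hypothetical discrete eigenvalues in C+
ns   = np.array([1.5 - 0.5j, 0.25 + 1.0j])   # hypothetical norming constants
p = len(lams)

A = np.kron(np.diag(-1j * lams), np.eye(2))  # eigenvalues -i*lam_s, right half-plane
B = np.vstack([n * np.eye(2) for n in ns])   # 2p x 2
C = np.hstack([np.eye(2)] * p)               # 2 x 2p
assert np.all(np.diag(A).real > 0)

def Omega(z):
    # A is diagonal, so e^{-zA} is simply the diagonal matrix of exponentials
    return C @ np.diag(np.exp(-z * np.diag(A))) @ B

for z in (0.0, 0.7, 2.3):
    direct = sum(n * np.exp(1j * l * z) for n, l in zip(ns, lams)) * np.eye(2)
    assert np.allclose(Omega(z), direct)     # C e^{-zA} B reproduces the kernel
```

This is the diagonalizable (algebraically simple) case; Jordan blocks of higher order would produce the polynomial-times-exponential terms mentioned in the text.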
Let us now apply a similarity transformation to the triplets (A_r, B_r, C_r) and (A_l, B_l, C_l) such that the newly found triplets consist of matrices having their entries in Σ. Indeed, letting T = λ^{−1}(Σ_2 + λ^2 S^*), where Σ_2 is the direct sum of p copies of σ_2, |λ| = 1, and λ is chosen such that Σ_2 + λ^2 S^* is nonsingular, we obtain

T^* Σ_2 = ST,

and hence S = T^* Σ_2 T^{−1} (see [36] for a similar argument involving the Ansatz S^* = S^{−1}). Substituting the latter into (5.4a) we get

(T^{−1}AT)^* = Σ_2 (T^{−1}AT) Σ_2, (T^{−1}B)^* = Σ_2 (T^{−1}B) σ_2, (CT)^* = σ_2 (CT) Σ_2,

where we have omitted the subscripts r and l. Hence, the matrix triplet (T^{−1}AT, T^{−1}B, CT) consists of matrices having their entries in Σ. In the same way, by replacing H with T^{−1}HT we arrive at a matrix belonging to Σ^{p×p}.
Since the discrete eigenvalues of the Zakharov-Shabat system correspond to the eigenvalues λ_s ∈ C^+ of the matrix Schrödinger equation (2.1) under the conformal mapping k ↦ λ cut along the segment (i0^+, iµ], the eigenvalues λ_s of (2.1) in C^+ are geometrically simple. Thus the matrix A_{r,l} in the minimal representations (5.1) has a Σ-Jordan structure with exactly two Jordan blocks of the same order per positive eigenvalue, one Jordan block per complex eigenvalue with positive real part, and Jordan blocks of the same order corresponding to complex conjugate eigenvalues (which have positive real part). As a result, there exist quadruplets (A_r, B_r, C_r, H_r) and (A_l, B_l, C_l, H_l) consisting of matrices having their entries in Σ such that A_r and A_l have the above Σ-Jordan normal form and have minimal matrix order among all quadruplets leading to the same Marchenko integral kernels (5.1). Let P_r and P_l denote the solutions of the Sylvester equations

A P_r + P_r A = B_r C_r, (5.7)

and

A P_l + P_l A = B_l C_l. (5.10)

Solving the Marchenko equations (2.17) with the kernels (5.1), we obtain

K(x, y; t) = −C_r [e^{2xA} e^{−tH} + P_r]^{−1} e^{(x−y)A} B_r, (5.8)
J(x, y; t) = −C_l [e^{−2xA} e^{−tH} + P_l]^{−1} e^{(y−x)A} B_l, (5.11)

provided the inverse matrices exist. Moreover, Theorem A.3 implies that the inverse matrix in (5.11) exists for all but finitely many x ∈ R. Furthermore, P_r and P_l belong to Σ^{p×p}. Using (2.13) in (5.8) and (5.11) and differentiating with respect to x we obtain

Q(x; t) = 2 (d/dx) {C_r [e^{2xA} e^{−tH} + P_r]^{−1} B_r}, (5.12a)
Q(x; t) = −2 (d/dx) {C_l [e^{−2xA} e^{−tH} + P_l]^{−1} B_l}. (5.12b)

Since P_r and P_l are nonsingular, these expressions are exponentially decaying as x → ±∞.
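The Sylvester solutions P_{r,l} can be computed numerically; the sketch below assumes the normalization A P + P A = B C (an assumption on the precise form used in the text) and uses hypothetical data with A diagonal, in which case the solution is also available entrywise as P_{ij} = (BC)_{ij}/(a_i + a_j):

```python
import numpy as np

rng = np.random.default_rng(3)
p2 = 4                                       # 2p with p = 2
# hypothetical data: A with eigenvalues of positive real part
A = np.diag([0.5, 0.9, 1.3, 2.0]) + 0j
B = rng.standard_normal((p2, 2)) + 1j * rng.standard_normal((p2, 2))
C = rng.standard_normal((2, p2)) + 1j * rng.standard_normal((2, p2))

# solve the Sylvester equation A P + P A = B C by vectorization:
# vec(A P + P A) = (I (x) A + A^T (x) I) vec(P)  (column-stacking convention)
L = np.kron(np.eye(p2), A) + np.kron(A.T, np.eye(p2))
P = np.linalg.solve(L, (B @ C).flatten(order="F")).reshape((p2, p2), order="F")

assert np.allclose(A @ P + P @ A, B @ C)     # residual check

# for diagonal A the solution is explicit: P_ij = (BC)_ij / (a_i + a_j)
a = np.diag(A)
assert np.allclose(P, (B @ C) / (a[:, None] + a[None, :]))
```

Since the eigenvalues of A lie in the right half-plane, a_i + a_j never vanishes and the Sylvester equation is uniquely solvable, which is what makes the construction well defined.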
Writing B_r = (B_{r,1}  B_{r,2}) in its 2p × 1 columns and C_r = (C_{r,1}; C_{r,2}) in its 1 × 2p rows, and similarly for B_l and C_l, we obtain the following expressions relating the potential to the asymptotic values q_r and q_l:

q(x, t) = q_r + 2C_{r,1} [e^{2xA} e^{−tH} + P_r]^{−1} B_{r,2}, (5.14a)
q(x, t)^* = q_r^* + 2C_{r,2} [e^{2xA} e^{−tH} + P_r]^{−1} B_{r,1}, (5.14b)

provided e^{±2xA} e^{−tH} + P_{r,l} are nonsingular matrices for each x ∈ R. Since P_{r,l} are nonsingular, we get q(x, t) → q_r as x → +∞ and q(x, t) → q_r + 2C_{r,1} P_r^{−1} B_{r,2} as x → −∞. Since µ = |q_l| = |q_r|, the right and left matrix triplets cannot be chosen arbitrarily. Indeed, identifying the latter limit with q_l, (5.14a) implies that C_{r,1} P_r^{−1} B_{r,2} = ½ µ (e^{iθ_l} − e^{iθ_r}). Since |e^{iθ_l} − e^{iθ_r}| ≤ 2, we see that the matrix triplet is to satisfy

|C_{r,1} P_r^{−1} B_{r,2}| ≤ µ,

where e^{i(θ_l − θ_r)}, and hence e^{iθ_l}, can be evaluated from known µ and e^{iθ_r}. This means that the triplet and µ are not independent. Once µ has been chosen to satisfy µ ≥ |C_{r,1} P_r^{−1} B_{r,2}|, it is possible to determine θ_l uniquely up to an additive multiple of 2π. Moreover, we have established the following

Proposition 5.1 If 0 < µ < |C_{r,1} P_r^{−1} B_{r,2}|, no soliton solution exists.

The matrix H commuting with A is easily seen to be given by a Dunford-Taylor contour integral [cf. (5.17)], where k(λ) = √(λ^2 − µ^2) is the conformal mapping from C^+ onto C^+ satisfying k(λ) ∼ λ at infinity and Γ is a closed rectifiable Jordan contour in the upper half-plane which has winding number +1 with respect to each eigenvalue of iA. Let us finally derive the expressions for the transmission coefficients. Substituting (5.8) into (2.12a) and (5.11) into (2.12b), dividing by e^{±iλx}, taking the limits of the resulting equalities as x → ∓∞, and using (2.6a) and (2.6b), we arrive at identities for A_{r,l}(λ), where we have used the nonsingularity of P_{r,l}. Using the Sylvester equations for P_{r,l} we then obtain explicit expressions for the transmission coefficients A_{r,l}(λ)^{−1}. Observe that the transmission coefficients are time-invariant. Using the determinant identity det(I − TS) = det(I − ST) [cf. [22]] and the Sylvester equations for P_{r,l} we easily obtain

det[A_{l,r}(λ)^{−1}] = det(λI_{2p} + iA) / det(λI_{2p} − iA).
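The determinant identity det(I − TS) = det(I − ST) invoked above holds for rectangular factors T (m × n) and S (n × m) and is easy to check directly:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 6, 2
T = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
S = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

d1 = np.linalg.det(np.eye(m) - T @ S)   # determinant of a 6x6 matrix
d2 = np.linalg.det(np.eye(n) - S @ T)   # determinant of a 2x2 matrix
assert np.isclose(d1, d2)               # the two determinants agree
```

This is what reduces the 2p × 2p determinant of the transmission coefficient formula to the displayed ratio of characteristic polynomials of ±iA.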

Examples
In this section we work out various examples of multisoliton solutions based on the minimal quadruplet (A, B, C, H), where H = φ(A) for some function φ that is analytic in a neighborhood of the eigenvalues of A. In fact [cf. (5.17)], φ is expressed in terms of the conformal mapping k(λ) = √(λ^2 − µ^2) from C^+ onto C^+ satisfying k(λ) ∼ λ at infinity.

Example 6.1 (one-soliton solution with real eigenvalue) Consider the minimal triplet (A, B, C) = (aI_2, B, C), where a > 0 and B and C have positive determinants. Then P = (1/2a) BC and

D(x; t) = det(e^{2ax} e^{−tφ(a)} I_2 + P).

Writing BC = d_1 I_2 + V, where d_1 ∈ R and V belongs to the real span of {iσ_1, iσ_2, iσ_3}, and putting d_2 = det V ≥ 0, we get

D(x; t) = (e^{2ax} e^{−tφ(a)} + (d_1/2a))^2 + (d_2/4a^2).

We assume this determinant to be positive for each (x, t) ∈ R^2. In fact, the determinant D(x; t) vanishes at some x ∈ R for given t ∈ R [namely, at x = (1/2a) ln(−(d_1/2a) e^{tφ(a)})] iff d_1 < 0 and d_2 = 0, i.e., iff BC is a negative multiple of I_2. Otherwise, the one-soliton solution follows from (5.14a). Since P = (1/2a) BC with B and C nonsingular, we see that det P = (det B)(det C)/4a^2 > 0, thus confirming our preceding result.
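The vanishing criterion of Example 6.1 can be verified numerically: writing BC = d_1 I_2 + V with V an imaginary quaternion, det(c I_2 + P) = (c + d_1/2a)^2 + det V/4a^2 for every c > 0, which is strictly positive unless BC is a negative multiple of I_2. A sketch with illustrative data:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

a = 0.8
d1, (vb, vc, vd) = -1.2, (0.3, -0.4, 0.5)    # BC = d1*I + V with V imaginary
BC = d1 * np.eye(2) + vb * (1j*s3) + vc * (1j*s2) + vd * (1j*s1)
P = BC / (2 * a)
d2 = vb*vb + vc*vc + vd*vd                   # det V

for c in (0.1, 0.75, 3.0):                   # c plays the role of e^{2ax}e^{-t phi(a)}
    det = np.linalg.det(c * np.eye(2) + P)
    assert np.isclose(det, (c + d1/(2*a))**2 + d2/(4*a*a))
    assert det.real > 0                      # V != 0, so D(x;t) never vanishes

# if instead BC is a negative multiple of I_2, the determinant vanishes at c = -d1/(2a)
Pneg = (-1.2) * np.eye(2) / (2 * a)
assert np.isclose(np.linalg.det((1.2 / (2 * a)) * np.eye(2) + Pneg), 0.0)
```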
Corollary A.2 The matrices P_r defined by (5.7) and P_l defined by (5.10) are invertible.
Theorem A.3 For each x ∈ R, except at finitely many values, the matrices e^{2xA} + P_r and e^{−2xA} + P_l are invertible.
Example 6.1 contains a triplet for which det(e^{2xA} + P_r) = 0 for some x ∈ R.
Proof. In Theorem 4.1 above we have proved the nonnegativity of the determinants of P_{r,l} and of e^{±2xA} e^{−tH} + P_{r,l} for each x ∈ R. Since for each t ∈ R the function det(e^{±2xA} e^{−tH} + P_{r,l}) is entire analytic in x, is nonnegative on the real x-line, tends to +∞ as x → ±∞ along the real line, and tends to det P_{r,l} > 0 as x → ∓∞, there are at most finitely many values of x ∈ R for which the matrix e^{±2xA} e^{−tH} + P_{r,l} is singular.