1 Introduction and Main Results

A great number of technological applications related to data display and non-linear optics use thin films of nematic liquid crystals, cf. [7] for the general theory of nematic liquid crystals. In such devices the local direction of the optical axis of the liquid crystal is represented by a unit vector \(\textbf{n}(x,t)\), called the director, and may be modified by the application of an electric or magnetic field. The interaction of a light beam with the dynamics of the director \(\textbf{n}(x,t)\), under a magnetic field, helps to improve the device performance.

In this paper we consider the model introduced in [1] to describe the motion of the director field of a nematic liquid crystal subjected to an external constant strong magnetic field \(\textbf{H}\), with intensity \(H \in {\mathbb {R}}\), and to a laser beam, under some simplifications and approximations motivated by previous experiments and models (cf. [2, 3, 20, 21] for magneto-optic experiments, and [16, 24] for the simplified director field equation). The system under consideration reads

$$\begin{aligned} \begin{array}{ll} \left\{ \begin{array}{l} i u_{t} + u_{xx}= -\rho u + a |u|^{2}u + H^{2} x^{2} u\, \,\\ \rho _{tt} = (\sigma (v))_{x} - b\rho + |u|^{2}, \end{array} \right.&x\in {\mathbb {R}},\; t\ge 0, \end{array} \end{aligned}$$
(1.1)

where i is the imaginary unit, u(x,t) is a complex valued function representing the wave function associated to the laser beam in the presence of the magnetic field \(\textbf{H}\) orthogonal to the director field, \(\rho =\rho (x,t)\in {\mathbb {R}}\) measures the angle of the director field with the x axis, \(v=\rho _{x}\), and \(a,H \in {\mathbb {R}}, b > 0\) are given constants, with initial data

$$\begin{aligned} u(x,0)=u_{0}(x), \rho (x,0)= \rho _{0}(x), \rho _{t}(x,0)=\rho _{1}(x), x \in {\mathbb {R}}, \end{aligned}$$
(1.2)

and where the function \(\sigma (v)\) is given by

$$\begin{aligned} \sigma (v) = \alpha v + \lambda v^3, \quad \lambda = \frac{2}{3} \gamma (\alpha -\beta ), \end{aligned}$$
(1.3)

where \(\alpha \ge \beta > 0\) are elastic constants of the liquid crystal, cf. [16], and

$$\begin{aligned} \gamma = 4(\chi _a)^{-1} H^{-2} \beta > 0, \end{aligned}$$
(1.4)

where \(\chi _a > 0\) is the anisotropy of the magnetic susceptibility, cf. [20].

In the quasilinear case \(\alpha > \beta , \alpha \simeq \beta \), the existence of a global weak solution to the Cauchy problem for the system (1.1) with the initial data (1.2), in suitable spaces, was established in [1], by applying the compensated compactness method introduced in [22] to a regularisation of the system with a physical viscosity and then using the vanishing viscosity method (cf. also [8, 9] for two examples of this technique applied to related systems of short waves-long waves).

In Sect. 2 we prove, in the general case (\(\lambda \ge 0\)), by application of Theorem 6 in [19], a local in time existence and uniqueness theorem of a classical solution for the Cauchy problem (1.1), (1.2). For this purpose we need to introduce some functional spaces and point out several well known results:

Let A be the linear operator defined in \(L^2({\mathbb {R}})\) by

$$\begin{aligned} Au = u_{xx} - H^2x^2u,\, u\in D(A), H \ne {0}, \end{aligned}$$
(1.5)

where \(D(A)= \big \{ u \in X | Au \in L^2 ({\mathbb {R}}) \big \}\), with

$$\begin{aligned} X = \big \{ u \in H^1({\mathbb {R}}) | xu \in L^2 ({\mathbb {R}}) \big \}. \end{aligned}$$
(1.6)

We also define the norm \(\Vert u\Vert _X^2 = \Vert u_x\Vert _2^2 +\Vert xu\Vert _2^2 \), for \(u\in X\), denoting by \(\Vert .\Vert _p\) the norm \(\Vert .\Vert _{L^p({\mathbb {R}})}\). It can be proved, cf. [23], that if \(u\in X_1=\big \{ u | xu, u_x \in L^2 ({\mathbb {R}}) \big \}\), then \(u\in L^2({\mathbb {R}})\) with

$$\begin{aligned} \Vert u\Vert ^2_2 \le 2^{\frac{1}{2}}\Vert u_x\Vert _2 \Vert xu\Vert _2, \forall u \in X_1, \end{aligned}$$
(1.7)

and so \(X=X_1\), and it is not difficult to prove that the injection of X in \(L^q({\mathbb {R}}), 2\le q < + \infty \), is compact (cf. [11]).

Moreover, it may also be proved, cf. [4], Lemma 9.2.1, that A is self-adjoint in \(L^2({\mathbb {R}})\), \((Au,u) \le 0, \forall u\in D(A)\), and (cf. [14]),

$$\begin{aligned} D(A) = (-A+1)^{-1} L^2({\mathbb {R}}) = \big \{ u \in H^2({\mathbb {R}}) | x^2 u \in L^2 ({\mathbb {R}}) \big \}. \end{aligned}$$
(1.8)

We can now state the first result that will be proved in Sect. 2:

Theorem 1.1

Let \((u_0,\rho _0,\rho _1)\in D(A)\times H^3\times H^2\) and \(\lambda \ge 0\). Then, there exists \( T^* = T^*(u_0,\rho _0,\rho _1) > 0\) such that, for all \(T < T^*\), there exists a unique solution \((u,\rho )\) to the Cauchy problem (1.1), (1.2) with \(u\in C ( [0,T]; D(A) )\cap C^1( [0,T]; L^2 )\) and \( \rho \in C ( [0,T]; H^3 )\cap C^1( [0,T]; H^2 ) \cap C^2 ( [0,T]; H^1)\).

As is well known, in the quasilinear case the local solution, in general, blows up in finite time. In Sect. 3, by deriving suitable estimates, we prove the following result in the semilinear case (\(\alpha = \beta \)):

Theorem 1.2

Let \((u_0,\rho _0,\rho _1)\in D(A)\times H^3\times H^2\) and \(\lambda =0\). Then, there exists a unique global in time solution \((u,\rho )\) to the Cauchy problem (1.1), (1.2), with \(u\in C ( [0,+\infty ); D(A) )\cap C^1( [0,+\infty ); L^2 )\) and \( \rho \in C ( [0,+\infty ); H^3 )\cap C^1( [0,+\infty ); H^2 ) \cap C^2 ( [0,+\infty ); H^1)\).

In the special case of initial data with compact support, we will prove in Sect. 4 the following result:

Theorem 1.3

Assuming the hypothesis of Theorem 1.2, consider the particular case where

$$\begin{aligned} {supp} \big \{u_0,\rho _0,\rho _1\big \} \subset D = ] -\theta , \theta [,\quad \theta > 0. \end{aligned}$$
(1.9)

Then, for each \( t > 0 \) and \( \varepsilon > 0 \), there exists a \(\delta = \delta ( t,\varepsilon ,\Vert u_0\Vert _{H^1} ) > 0 \), such that

$$\begin{aligned} \int _{{\mathbb {R}}\setminus (D + B( 0, \delta )) } [ | u |^2 + |\rho |^2 + |\rho _x|^2 + |\rho _t|^2 ] (x,t) dx \le \varepsilon , \end{aligned}$$
(1.10)

where \(B( 0, \delta ) = \big \{ x\in {\mathbb {R}}| |x| < \delta \big \} \).

The proof of this result follows a technique introduced in [6] in the case of the nonlinear Schrödinger equation.

In Sect. 5, which contains the main result in the paper, we study the existence and possible partial orbital stability of the standing waves for the system (1.1) with \(a=-1\) (attractive case) and \(\lambda \ge 0 \). These solutions are of the form

$$\begin{aligned} ( e^{i\mu t} u(x), \rho (x) ), \, \mu \in {\mathbb {R}}, \end{aligned}$$
(1.11)

and the system (1.1) takes the form (we fix \(\alpha =1\), without loss of generality):

$$\begin{aligned} \begin{array}{ll} \left\{ \begin{array}{l} u_{xx} - H^{2} x^{2} u + |u|^{2}u + \rho u = \mu u \, \, \\ - \rho _{xx}- \lambda (\rho _x^3)_x + b\rho = |u|^{2}, \end{array} \right.&x\in {\mathbb {R}}. \end{array} \end{aligned}$$
(1.12)

We can rewrite this system as a scalar equation

$$\begin{aligned} u_{xx} - H^{2} x^{2} u + |u|^{2}u + \rho (|u|^2) u = \mu u, \end{aligned}$$
(1.13)

where \(\rho (f)\) is the solution to \( - \rho _{xx}- \lambda (\rho _x^3)_x + b\rho = f\). It is not difficult to prove that if \(f \in L^2\), there exists a unique \(\rho \in H^2\) satisfying the previous equation. This allows us, for instance, to prove that \(\rho (|u|^2) u^2 \in L^1\) provided that \(u\in X\). Now, to find nontrivial solutions of this equation belonging to D(A), the domain of the linear operator defined by (1.5), we will closely follow the technique introduced in [11] for the case of the Gross–Pitaevskii equation. More precisely, we consider the energy functional defined in X by (with \(\int . dx = \int _{\mathbb {R}}. dx\)):

$$\begin{aligned} {\mathcal {E}} (u)&= \frac{1}{2} \int |u_x|^2 dx + \frac{1}{2} H^2 \int x^2|u|^2 dx \nonumber \\&\quad - \frac{1}{4} \int |u|^4 dx - \frac{1}{2} \int \rho (|u|^2) |u|^2 dx, \, u\in X, \end{aligned}$$
(1.14)

and we look to solve the following constrained minimization problem for a prescribed \(c>0\):

$$\begin{aligned} {\mathcal {I}}_c = \inf \big \{ {\mathcal {E}} (u), u\in X, \, \text {real}, \int |u(x)|^2 dx = c^2 \big \}. \end{aligned}$$
(1.15)

We start by proving the following result which corresponds to Lemma 1.2 in [11].

Theorem 1.4

We have:

i) The energy functional \({\mathcal {E}}\) is \(C^1\) on X restricted to real-valued functions.

ii) The mapping \(c \rightarrow {\mathcal {I}}_c \) is continuous.

iii) Any minimizing sequence of \({\mathcal {I}}_c\) is relatively compact in X and so, if \(\{u_n\}_{n\in {\mathbb {N}}} \subset X\) is a corresponding minimizing sequence, then there exists \(u\in X\) such that \(\Vert u\Vert ^2_2 = c^2\) and \(\lim _ {n\rightarrow +\infty } u_n = u\) in X. Moreover \(u(x)= u(|x|)\) is radial decreasing and satisfies (1.13) for a certain \(\mu \in {\mathbb {R}}\).

To prove this result we follow the ideas in [11] and introduce the real space \( {\tilde{X}}= \big \{w=(u,v) \in X\times X\big \}\), with u and v real-valued, equipped with the norm

$$\begin{aligned} \Vert w\Vert ^2_{{\tilde{X}}}= \Vert u\Vert ^2_{X} + \Vert v\Vert ^2_{X}, u, v \in X, \end{aligned}$$
(1.16)

and observe that if \(u =u_1 + i u_2\), with \(u_1={\mathcal {R}}e\, u, u_2={\mathcal {I}}m\, u\), the Eq. (1.13) can be written in the system form:

$$\begin{aligned} \left\{ \begin{array}{ll} u_{1xx} - H^{2} x^{2} u_1 + |u|^{2}u_1 + \rho (|u|^2)u_1 = \mu u_1 \\ u_{2xx} - H^{2} x^{2} u_2 + |u|^{2}u_2 + \rho (|u|^2) u_2 = \mu u_2 \\ \end{array}\right. \quad x\in {\mathbb {R}}, \end{aligned}$$
(1.17)

with \(w=(u_1,u_2)\in {\tilde{X}}, u_1={\mathcal {R}}e\, u, u_2={\mathcal {I}}m\, u\).

In the new space \({\tilde{X}}\), the functional defined in  (1.14) takes the form, for \(w=(u_1,u_2)\in {\tilde{X}}\) and \(|w|^4=(|u_1|^2 + |u_2|^2)^2\),

$$\begin{aligned} \mathcal {{{\tilde{E}}}} (w)&= \frac{1}{2} \int |w_x|^2 dx + \frac{1}{2} H^2 \int x^2|w|^2 dx \nonumber \\&\quad - \frac{1}{4} \int |w|^4 dx - \frac{1}{2} \int \rho (|w|^2) |w|^2 dx, \, w\in {\tilde{X}}, \end{aligned}$$
(1.18)

and, for all \(c>0\), we introduce

$$\begin{aligned} \mathcal {{{\tilde{I}}}}_c= \inf \big \{ \mathcal {{{\tilde{E}}}} (w), w \in {\tilde{X}}, \int |w(x)|^2 dx = c^2 \big \}, \end{aligned}$$
(1.19)

and the sets

$$\begin{aligned} {\mathcal {W}}_c = \big \{ u\in X, \Vert u \Vert ^2_2= c^2,{\mathcal {I}}_c = {\mathcal {E}} (u), u>0 \big \}, \end{aligned}$$
$$\begin{aligned} {\mathcal {Z}}_c = \big \{ w\in {\tilde{X}}, \Vert w \Vert ^2_2= c^2,\mathcal {{{\tilde{I}}}}_c = \mathcal {{{\tilde{E}}}} (w) \big \}. \end{aligned}$$

Following [5] and [11], we introduce the following definition:

Definition: The set \({\mathcal {Z}}_c\) is said to be stable if \({\mathcal {Z}}_c \ne \varnothing \) and for all \(\varepsilon > 0\), there exists \(\delta > 0\) such that, for all \(w_0=({u_1}_0,{u_2}_0) \in {\tilde{X}} \), we have, for all \(t\ge 0\),

$$\begin{aligned} \inf _{w \in {\mathcal {Z}}_c} \Vert w_0 - w\Vert _{{\tilde{X}}}<\delta \Longrightarrow \inf _{w \in {\mathcal {Z}}_c}\Vert \psi (.,t) - w\Vert _{{\tilde{X}}} <\varepsilon , \end{aligned}$$

where \(\psi (x,t) = (u_1(x,t),u_2(x,t))\) corresponds to the solution \( u(x,t) = u_1(x,t) + i u_2(x,t) \) of the first equation in the Cauchy problem (1.1),(1.2), with initial data \( u_0(x) = {u_1}_0(x) + i {u_2}_0(x) \) and where \(\rho (x,t)= \rho (|u(x,t)|^2)(x,t)\) satisfies

$$\begin{aligned} - \rho _{xx}- \lambda (\rho _x^3)_x + b\rho = |u(.,t)|^2. \end{aligned}$$

This corresponds to the hypothesis \(\rho _{tt} \simeq 0\), cf. [2, 3, 20]. The local existence and uniqueness in X of the solution to the corresponding Cauchy problem for the Schrödinger equation is a consequence of Theorem 3.5.1 in [4]. It is easy to obtain the global existence of such a solution \(\psi (t)\) if its initial data is close to \({\mathcal {Z}}_c\). Indeed, denote by T the maximal time of existence and suppose that \({\mathcal {Z}}_c\) is stable at least up to the time T. Then, using the stability at time T, we see that \(\psi (T)\) is uniformly bounded in \({\tilde{X}}\). Therefore, we can apply the local existence result with initial data \(\psi (T)\). This contradicts the maximality of T and yields global existence.

Proceeding as in the proof of Theorem 1.4 (see in particular (5.9)), we can show that

$$\begin{aligned} \Vert \rho (|\psi (t)|^2) - \rho (|w|^2) \Vert _{H^1}\le C\Vert \psi (t) +w\Vert _{L^2} \Vert \psi (t) - w \Vert _{{\tilde{X}}}, \end{aligned}$$

where C is a constant not depending on t. So, if \({\mathcal {Z}}_c\) is stable, we derive, under the conditions of the definition,

$$\begin{aligned} \inf _{w \in {\mathcal {Z}}_c} \Vert \rho (.,t) - \rho (|w|^2) \Vert _{H^1} < c_1( \Vert u_0\Vert _2 +c)\varepsilon . \end{aligned}$$
(1.20)

We point out that, if \(w=(u_1,u_2)\in {\mathcal {Z}}_c\), then there exists a Lagrange multiplier \(\mu \in {\mathbb {R}}\) such that w satisfies  (1.17), that is \(u = u_1 + i u_2\) satisfies  (1.13).

We will prove the following result which is a variant of Theorem 2.1 in [11]:

Theorem 1.5

The functional \(\tilde{{\mathcal {E}}}\) is \(C^1\) in \({\tilde{X}}\) and we have

i) For all \(c>0, {\mathcal {I}}_c = \mathcal {{{\tilde{I}}}}_c, {\mathcal {Z}}_c \ne \varnothing \) and \({\mathcal {Z}}_c\) is stable.

ii) For all \(w\in {\mathcal {Z}}_c, |w| \in {\mathcal {W}}_c\).

iii) \({\mathcal {Z}}_c = \big \{ e^{i\theta }u, \theta \in {\mathbb {R}}\big \}\), with u real being a minimizer of (1.15).

The proof of this result is similar to the proof of Theorem 2.1 in [11]. We repeat some parts of the original proof for the sake of completeness. Next, in Sect. 6, also following closely [11], we prove a bifurcation result asserting in particular that all solutions of the minimisation problem (1.15) belong to a bifurcation branch starting from the point \((\lambda _0,0)\) (in the plane \((\mu ,u)\)), where \(\lambda _0\) is the first eigenvalue of the operator \(-\partial _{xx} +H^2 x^2\).

Proposition 1.6

The point \((\lambda _0,0)\) is a bifurcation point for (1.13) in the plane \((\mu ,u)\), where \(-\mu \in {\mathbb {R}}^+\) and \(u\in X\). The branch emanating from this point is unbounded in the \(\mu \) direction (it exists for all \(-\mu >\lambda _0\)). Moreover, solutions to (1.13) belonging to this branch are in fact minimizers of problem (1.15).

As already mentioned, the proof of this proposition follows closely the one of [11, Theorem 3.1]. An important ingredient which has also independent interest is the following uniqueness result.

Proposition 1.7

There exists a unique radial positive solution to (1.13) such that \(\lim _{r\rightarrow \infty } u(r)=0\).

The proof of this proposition is strongly inspired by [15].

Finally, in Sect. 7 we present some numerical simulations illustrating the behaviour of the standing waves according to the intensity of the magnetic field \(\textbf{H}\), and also the limit as the Lagrange multiplier \(-\mu \) approaches the bifurcation value \(\lambda _0\).

2 Local Existence in the General Case

In order to prove Theorem 1.1, let us introduce the Riemann invariants associated to the second equation in the system  (1.1),

$$\begin{aligned} l=w+\int _0^v\sqrt{\alpha +3\lambda \xi ^2}d\xi \quad \text {and}\quad r=w-\int _0^v\sqrt{\alpha +3\lambda \xi ^2}d\xi , \end{aligned}$$
(2.1)

where \(w=\rho _t, v=\rho _x\). We derive

$$\begin{aligned} l-r = 2\int _0^v\sqrt{\alpha +3\lambda \xi ^2}\,d\xi = v\sqrt{\alpha +3\lambda v^2}+\frac{\alpha }{\sqrt{3\lambda }}\,\text {arcsinh}\Big (\sqrt{\tfrac{3\lambda }{\alpha }}\, v\Big ),\quad w=\frac{l+r}{2}. \end{aligned}$$

Noticing that

$$\begin{aligned} \displaystyle f(v)=v\sqrt{\alpha +3\lambda v^2}+\frac{\alpha }{\sqrt{3\lambda }}\,\text {arcsinh}\Big (\sqrt{\tfrac{3\lambda }{\alpha }}\, v\Big ) \end{aligned}$$

is one-to-one and smooth (for \(\lambda =0\) one simply has \(f(v)=2\sqrt{\alpha }\,v\)), we have \(v=f^{-1}(l-r)=v(l,r)\) and, for classical solutions, the Cauchy problem (1.1), (1.2) is equivalent to the system

$$\begin{aligned} \left\{ \begin{array}{lllll} iu_t+u_{xx}- H^2x^2u=-\rho u+ a |u|^2u\\ \\ \rho _t=\frac{1}{2} (l+r)\\ \\ l_t-\sqrt{\alpha +3\lambda v^2}l_x= - b \rho + |u|^2\\ \\ r_t+\sqrt{\alpha +3\lambda v^2}r_x= - b \rho + |u|^2\\ \end{array} \right. \end{aligned}$$
(2.2)

with initial data (cf. (1.5), (1.8)),

$$\begin{aligned} \begin{aligned} u(.,0)=u_0\in D(A)=\big \{ u \in H^2({\mathbb {R}}) | x^2 u \in L^2 ({\mathbb {R}}) \big \}, \\ \rho (.,0)=\rho _0\in H^3({\mathbb {R}}), l(.,0)=l_0\in H^2({\mathbb {R}}),r(.,0)=r_0\in H^2({\mathbb {R}}). \end{aligned} \end{aligned}$$
(2.3)
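In practice, recovering \(v=f^{-1}(l-r)\) is straightforward numerically, since \(f'(v)=2\sqrt{\alpha +3\lambda v^2}\ge 2\sqrt{\alpha }>0\). The following minimal Python sketch (for \(\lambda >0\), with illustrative parameter values; it is only a numerical aside, not part of the analysis) inverts f by Newton's method:

```python
import numpy as np

def f(v, alpha, lam):
    # closed form of 2*int_0^v sqrt(alpha + 3*lam*xi^2) dxi   (lam > 0)
    s = np.sqrt(3.0 * lam)
    return v * np.sqrt(alpha + 3.0 * lam * v**2) + (alpha / s) * np.arcsinh(s * v / np.sqrt(alpha))

def invert_f(y, alpha=1.0, lam=0.5, tol=1e-12, max_iter=100):
    """Recover v = f^{-1}(y) by Newton's method; f'(v) = 2*sqrt(alpha + 3*lam*v^2)."""
    v = y / (2.0 * np.sqrt(alpha))            # the lambda = 0 value as initial guess
    for _ in range(max_iter):
        step = (f(v, alpha, lam) - y) / (2.0 * np.sqrt(alpha + 3.0 * lam * v**2))
        v -= step
        if abs(step) < tol:
            break
    return v

# consistency check: invert_f(f(v0)) should return v0
v0 = 0.7
assert abs(invert_f(f(v0, 1.0, 0.5), 1.0, 0.5) - v0) < 1e-10
```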

In order to apply Kato’s theorem (cf. [19, Thm. 6]) to obtain the existence and uniqueness of a local in time strong solution, cf. Theorem 1.1, for the corresponding Cauchy problem, we need to pass to real spaces, introducing the variables

$$\begin{aligned} u_1={\mathcal {R}}e\, u, u_2={\mathcal {I}}m\, u. \end{aligned}$$
(2.4)

Now, we can pass to the proof of Theorem 1.1:

With \( ({u_{1}}_{0},{u_{2}}_{0}) = (u_1(.,0),u_2(.,0))\), let

$$\begin{aligned} U = (u_1,u_2,\rho ,l,r), U_0 = ({u_{1}}_{0},{u_{2}}_{0},\rho _0,l_0,r_0), \end{aligned}$$
(2.5)

and

$$\begin{aligned}&{\mathcal {A}}(U)=\left[ \begin{array}{ccccc} 0&A&0&0&0\\ -A&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&-\sqrt{\alpha +3\lambda v^2}\,\frac{\partial }{\partial x}&0\\ 0&0&0&0&\sqrt{\alpha +3\lambda v^2}\,\frac{\partial }{\partial x} \end{array}\right] , \\&g(t,U)=\left[ \begin{array}{c} -\rho u_2 + a (u_1^2+u_2^2)u_2\\ \rho u_1 - a (u_1^2+u_2^2)u_1\\ \frac{1}{2}(l+r)\\ -b\rho + |u|^2\\ -b\rho + |u|^2 \end{array}\right] . \end{aligned}$$

The initial value problem (2.2), (2.3) can be written in the form

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{\partial }{\partial t}U+{\mathcal {A}}(U)U=g(t,U)\\ U(.,0)=U_0. \end{array}\right. \end{aligned}$$
(2.6)

Let us take

$$\begin{aligned} U_0 = ({u_{1}}_{0},{u_{2}}_{0},\rho _0,l_0,r_0) \in Y= (D(A))^2\times (H^2({\mathbb {R}}))^3 \end{aligned}$$

(the condition \(\rho _0 \in H^3({\mathbb {R}})\) will be used later). We now set \( Z = (L^2({\mathbb {R}}))^2\times (L^2({\mathbb {R}}))^3\) and \(S=((1-A)I)^2\times ((1-\Delta )I)^3\), which is an isomorphism \(S:Y \rightarrow Z\). Furthermore, we denote by \(W_R\) the open ball in Y of radius R centered at the origin and by \(G(Z,1,\omega )\) the set of linear operators \(\Lambda :\,D(\Lambda )\subset Z\rightarrow Z\) such that:

  • \(-\Lambda \) generates a \(C_0\)-semigroup \(\{e^{-t\Lambda }\}_{t\in {\mathbb {R}}_+}\);

  • for all \(t\ge 0\), \(\Vert e^{-t\Lambda }\Vert \le e^{\omega t}\), where, for all \(U\in W_R\),

    $$\begin{aligned} \omega =\frac{1}{2} \sup _{x\in {\mathbb {R}}}\Vert \frac{\partial }{\partial x}a(\rho ,l,r)\Vert \le c(R), \quad c: [0,+\infty [\rightarrow [0,+\infty [ \text { continuous, and } \end{aligned}$$
    $$\begin{aligned} a(\rho ,l,r)=\left[ \begin{array}{ccc} 0&0&0\\ 0&-\sqrt{\alpha +3\lambda v^2}&0\\ 0&0&\sqrt{\alpha +3\lambda v^2} \end{array}\right] . \end{aligned}$$

By the properties of the operator A (cf. Sect. 1) and following [19, Section 12], we derive

$$\begin{aligned} {\mathcal {A}}: U= (u_1,u_2,\rho ,l,r)\in W_R \rightarrow G(Z,1,\omega ), \end{aligned}$$

and it is easy to see that g verifies, for fixed \(T>0\), \(\Vert g(t,U(t))\Vert _Y\le \theta _R\), \(t\in [0,T]\), \(U\in C([0,T];W_R)\).

For \((\rho ,l,r)\) in a ball \({\tilde{W}}\) in \((H^2({\mathbb {R}}))^3\), we set (see [19, (12.6)]), with [., .] denoting the commutator matrix operator,

$$\begin{aligned} B_0(\rho ,l,r)=[(1-\Delta ),a(\rho ,l,r)](1-\Delta )^{-1}\in {\mathcal {L}}((L^2({\mathbb {R}}))^3). \end{aligned}$$

We now introduce the operator \(B(U)\in {\mathcal {L}}(Z)\), \(U=(u_1,u_2,\rho ,l,r) \in W_R\), written in block form (the upper-left block being the zero operator on the \((u_1,u_2)\) components), by

$$\begin{aligned} B(U)=\left[ \begin{array}{cc} 0&0\\ 0&B_0(\rho ,l,r) \end{array}\right] . \end{aligned}$$

In [19, Section 12], Kato proved that for \((\rho ,l,r)\in {\tilde{W}}\) we have

$$\begin{aligned} (1-\Delta )a(\rho ,l,r)(1-\Delta )^{-1}=a(\rho ,l,r)+B_0(\rho ,l,r). \end{aligned}$$

Hence, we easily derive

$$\begin{aligned} S{\mathcal {A}}(U)S^{-1}={\mathcal {A}}(U)+B(U), U\in W_R. \end{aligned}$$

Now, it is easy to see that conditions (7.1)–(7.7) in Section 7 of [19] are satisfied, so we can apply Theorem 6 in [19] and obtain the result stated in Theorem 1.1, with \( \rho \in C ( [0,T]; H^2 )\cap C^1( [0,T]; H^1 ) \cap C^2 ( [0,T]; L^2)\).

To obtain the requested regularity for \(\rho \) it is enough to remark that, since \(\rho _x=v, \rho _t = w, \rho _0 \in H^3, v_0={\rho _0}_x\in H^2, w_0 =\rho _1 \in H^2\), we deduce \(\rho _x =v \in C ( [0,T]; H^2 ), \rho _t=w \in C ( [0,T]; H^2 )\), and this completes the proof of Theorem 1.1.

3 Global Existence in the Semilinear Case

Now, we consider the semilinear case, that is when \( \alpha = \beta \) and so \(\lambda = 0\).

Hence we pass to the proof of Theorem 1.2. For the unique local in time solution \((u,\rho )\), defined on the interval \([0,T^*[, T^* > 0\), to the Cauchy problem (1.1), (1.2) obtained in Theorem 1.1, we easily deduce the following conservation laws (cf. [1]) in the case \(\lambda \ge 0, \alpha >0 \):

$$\begin{aligned} \int |u(x,t)|^{2}\;dx = \int |u_{0}(x)|^{2}\;dx, \quad t \in [0,T^*[. \end{aligned}$$
(3.1)
$$\begin{aligned} \begin{aligned} E(t)&= \frac{1}{2} \int (\rho _{t}(x,t))^{2}\;dx + \frac{\alpha }{2} \int (\rho _{x} (x,t))^{2}\;dx +\frac{\lambda }{4} \int (\rho _{x}(x,t))^{4}\;dx\\&\quad +\frac{b}{2} \int (\rho (x,t))^{2} \;dx - \int \rho (x,t) |u(x,t)|^{2} \;dx +\int |u_{x}(x,t)|^{2} \;dx\\&\quad + \frac{a}{2} \int |u(x,t)|^{4}\;dx +H^{2} \int x^{2}|u(x,t)|^{2}\;dx = E(0), \quad t \in [0,T^*[. \end{aligned} \end{aligned}$$
(3.2)

Applying the Gagliardo–Nirenberg inequality to the term \(|\frac{a}{2} \int |u(x,t)|^{4}\;dx|\) and using \(b>0\), we easily derive (cf. [1]), for \(t\in [0,T^*[\),

$$\begin{aligned} \begin{aligned}&\int (\rho _{t}(x,t))^{2}\;dx + \int (\rho _{x} (x,t))^{2}\;dx +\lambda \int (\rho _{x}(x,t))^{4}\;dx\\&\quad + \int (\rho (x,t))^{2} \;dx +\int |u_{x}(x,t)|^{2} \;dx + H^{2} \int x^{2}|u(x,t)|^{2}\;dx \le c_1. \end{aligned} \end{aligned}$$
(3.3)

We continue with the proof of Theorem 1.2, in the semilinear case, that is \(\lambda =0\). We have, for \(t\in [0,T^*[,\)

$$\begin{aligned} \Vert \rho (t)\Vert _2 \le \Vert \rho _0\Vert _2 + \int _{0}^{t} \Vert \rho _t(\tau )\Vert _2\, d\tau \le c_2(1+ t). \end{aligned}$$
(3.4)

Next we estimate \(\Vert Au(t)\Vert _2,\Vert \rho _{xt}(t)\Vert _2\) and \(\Vert \rho _{xx}\Vert _2\). For \(\lambda = 0\), the system (2.2) reads

$$\begin{aligned} \left\{ \begin{array}{ll} iu_t+u_{xx}- H^2x^2u=-\rho u+ a |u|^2u\\ \rho _t=\frac{1}{2} (l+r)\\ l_t-\sqrt{\alpha }\, l_x= - b \rho + |u|^2\\ r_t+\sqrt{\alpha } \, r_x= - b \rho + |u|^2\\ \end{array} \right. \end{aligned}$$
(3.5)

with initial data (2.3). To simplify, we assume \(\alpha =\beta =b=1\).

Recall that we have, since \(\lambda =0\),

$$\begin{aligned} \left\{ \begin{array}{l} l= w+v= \rho _{t} + \rho _{x}\\ r=w-v= \rho _{t} - \rho _{x}.\\ \end{array} \right. \end{aligned}$$
(3.6)

From (3.5), we derive

$$\begin{aligned} r_{tx}r_x + r_{xx}r_x = -\rho _x r_x + 2 {\mathcal {R}}e({\bar{u}}u_x)r_x, \end{aligned}$$

and so

$$\begin{aligned} \frac{1}{2}\frac{d}{d t}\int (r_x)^2 dx \le \frac{1}{2}\int [(\rho _x)^2 + (r_x)^2] dx + c_3\int (r_x)^2 dx + c_3, \end{aligned}$$

and a similar estimate for \(l_x.\) We deduce, with \(c_4(t)\) being a positive, increasing and continuous function,

$$\begin{aligned} \Vert r_x(t)\Vert ^2_2 + \Vert l_x(t)\Vert ^2_2 \le c_4(t), t\in [0,T^*[. \end{aligned}$$
(3.7)

Moreover, differentiating the Schrödinger equation in (3.5) with respect to t, multiplying by \({\bar{u}}_t\) and integrating, we derive, formally,

$$\begin{aligned}&{\mathcal {R}}e (u_{tt}{\bar{u}}_t ) + {\mathcal {I}}m [ (u_{xxt} - H^2x^2u_t){\bar{u}}_t] = -{\mathcal {I}}m (\rho _t u{\bar{u}}_t) + a {\mathcal {I}}m[(|u|^2u)_t{\bar{u}}_t],\\&\frac{1}{2} \frac{d}{dt} \int |u_t|^2dx - {\mathcal {I}}m\int u_{xt}{\bar{u}}_{xt}dx = -{\mathcal {I}}m\int \rho _t u{\bar{u}}_{t}dx + 2a{\mathcal {I}}m\int {\mathcal {R}}e( u{\bar{u}}_{t}) u{\bar{u}}_{t}dx \end{aligned}$$

\(\le c_5\big (1+\int |u_t|^2 dx\big )\), by (3.3), and hence

$$\begin{aligned} \int |u_t|^2 dx \le c_6(t), t\in [0,T^*[. \end{aligned}$$
(3.8)

We deduce from (3.5),

$$\begin{aligned} \Vert Au(t) \Vert _2 \le c_7(t), t\in [0,T^*[. \end{aligned}$$
(3.9)

We have by (3.5),

$$\begin{aligned} r_{txx} r_{xx} + r_{xxx} r_{xx} = -\rho _{xx} r_{xx} + 2\frac{d}{d x}[{\mathcal {R}}e(u{\bar{u}}_x)]\, r_{xx} \end{aligned}$$

and so, formally,

$$\begin{aligned} \begin{aligned} \frac{1}{2}\frac{d}{d t}\int (r_{xx})^2 dx&\le \frac{1}{2}\int [(\rho _{xx})^2 + (r_{xx})^2] dx \\&\quad +2 \int ( |u| |u_{xx}| + |u_x|^2) |r_{xx}|dx\\&\le \frac{1}{2}\int [(\rho _{xx})^2 + (r_{xx})^2] dx + c_8(t)\Big (1+\int |r_{xx}|^2 dx\Big ), \end{aligned} \end{aligned}$$
(3.10)

by (3.9) and (1.8). But, by (3.6), we derive

$$\begin{aligned} \rho _{xx} = \frac{1}{2}(l_x - r_x), \end{aligned}$$

and so, by (3.7) and (3.10), we deduce

$$\begin{aligned} \frac{d}{dt} \int (r_{xx})^2 dx \le c_9(t) \int (r_{xx})^2 dx +c_{10}(t) \end{aligned}$$
(3.11)

and similarly

$$\begin{aligned} \frac{d}{dt} \int (l_{xx})^2 dx \le c_9(t) \int (l_{xx})^2 dx +c_{10}(t). \end{aligned}$$
(3.12)

We conclude that

$$\begin{aligned} \Vert r_{xx}\Vert ^2_2+ \Vert l_{xx}\Vert ^2_2 \le c_{11}(t), t \in [0,T^*[, \end{aligned}$$
(3.13)

with \(c_{11}(t)\) being a positive, increasing and continuous function of \(t \ge {0}\). This completes the proof of Theorem 1.2 (the operations that we performed formally can easily be justified by a suitable smoothing procedure).

4 Special Case of Initial Data with Compact Support

We assume the hypothesis of Theorem 1.2, that is, we consider the semilinear case (\(\lambda = 0\)) and, without loss of generality, we take \(\alpha = \beta = b = |a| = 1\). We also assume that the initial data verify (1.9) for a certain \(\theta > 0 \). Following [6, Section 2], if we take \( \phi \in W^{1,\infty } ({\mathbb {R}}) \) real valued and u is the solution of the Schrödinger equation in (1.1), we easily obtain

$$\begin{aligned} {\mathcal {R}}e \int \phi ^2 u_t{\bar{u}}dx + {\mathcal {I}}m \int \phi ^2 u_{xx}{\bar{u}}dx = 0. \end{aligned}$$

We derive

$$\begin{aligned} \Vert \phi u(t)\Vert _2 \le \Vert \phi u_0\Vert _2 + c_0 t \Vert \phi _x\Vert _\infty ,\quad t\ge 0, \end{aligned}$$
(4.1)

where

$$\begin{aligned} c_0 = 2 \sup _{t \ge 0} \Vert u_x(t)\Vert _2. \end{aligned}$$

Moreover, from the wave equation in (1.1) with \(\lambda = 0\), we deduce, for \(t \ge 0\),

$$\begin{aligned} \phi ^2 \rho _{tt}\rho _t - \phi ^2 \rho _{xx}\rho _t = - \phi ^2 \rho \rho _t +\phi ^2 \rho _t |u|^2, \end{aligned}$$
$$\begin{aligned} \begin{aligned} \frac{d}{dt} \int (\phi \rho _t)^2 dx + \frac{d}{dt} \int (\phi \rho _x)^2 dx + \frac{d}{dt} \int (\phi \rho )^2 dx\\ = 2 \int \phi ^2 \rho _t |u|^2 dx \le 2 \Vert \phi \rho _t\Vert _2 \Vert \phi u\Vert _\infty \Vert u_0\Vert _2. \end{aligned} \end{aligned}$$
(4.2)

We assume

$$\begin{aligned} 0 \le \phi \le 1. \end{aligned}$$
(4.3)

We have, by the Gagliardo–Nirenberg inequality and (4.1),

$$\begin{aligned} \begin{aligned}&\Vert \phi u\Vert _\infty \le \Vert \phi u\Vert _2^\frac{1}{2} \Vert (\phi u)_x\Vert _2^\frac{1}{2} \\&\quad \le (\Vert \phi u_0\Vert _2 + c_0 t \Vert \phi _x\Vert _\infty )^\frac{1}{2} ( \Vert \phi _x\Vert _\infty \Vert u_0\Vert _2+ \frac{c_0}{2})^\frac{1}{2}\\&\quad = g_0 (t). \end{aligned} \end{aligned}$$
(4.4)

Now, with

$$\begin{aligned} g_1(t) = g_0 (t) \Vert u_0 \Vert _2, \end{aligned}$$
(4.5)

we deduce, from (4.2), (4.4) and with

$$\begin{aligned} f_1(t) = \int (\phi \rho _t)^2 dx + \int (\phi \rho _x)^2 dx + \int (\phi \rho )^2 dx, \end{aligned}$$
(4.6)
$$\begin{aligned} f_1(t) \le f_1(0) + 2\int _ {0}^{t} g_1(\tau )f_1^\frac{1}{2} (\tau ) d\tau , \end{aligned}$$
$$\begin{aligned} f_1^\frac{1}{2} (t) \le f_1^\frac{1}{2} (0) + \int _ {0}^{t} g_1(\tau ) d\tau \le f_1^\frac{1}{2}(0) + t \Vert u_0\Vert _2 \, g_0(t), t\ge 0. \end{aligned}$$
(4.7)

Hence, if we define

$$\begin{aligned} f(t) = f_1(t) + \Vert \phi u(t)\Vert _2^2, \quad t\ge 0, \end{aligned}$$
(4.8)

we derive, by (4.7), (4.8) and (4.1),

$$\begin{aligned} \begin{aligned} f^\frac{1}{2} (t)&\le f_1^\frac{1}{2} (t) + \Vert \phi u(t)\Vert _2 \le f_1^\frac{1}{2}(0) + t \Vert u_0\Vert _2 g_0(t) + \Vert \phi u_0\Vert _2 +c_0 t\Vert \phi _x\Vert _\infty \\&\le f^\frac{1}{2} (0) + t \Vert u_0\Vert _2(\Vert \phi u_0\Vert _2 + c_0 t \Vert \phi _x\Vert _\infty )^\frac{1}{2} ( \Vert \phi _x\Vert _\infty \Vert u_0\Vert _2+ \frac{c_0}{2})^\frac{1}{2} \\&\quad + \Vert \phi u_0\Vert _2 +c_0 t\Vert \phi _x\Vert _\infty . \end{aligned} \end{aligned}$$
(4.9)

Now, we fix \( t > 0 \) and \( \varepsilon > 0 \) and assume that the initial data verifies  (1.9). We introduce the set \( C = {\mathbb {R}}{\setminus } (D + B( 0, \delta ))\), \(\delta \) to be chosen, and the function \( \phi \in W^{1,\infty } ({\mathbb {R}}), \) real valued, verifying  (4.3), \(\phi = 0\) in D, \(\phi =1\) in C and \( \Vert \phi _x\Vert _\infty =\frac{1}{ \delta } \). We have \(f(0)=0\), \(\phi u_0 = 0\), and so, by (4.9), we easily obtain

$$\begin{aligned} f(t) \le 2c_0\Vert u_0\Vert _2^3 \frac{t^3}{{\delta }^2} + c_0^2\Vert u_0\Vert _2^2\frac{t^3}{\delta } + 2c_0^2\frac{t^2}{{\delta }^2}, \end{aligned}$$
(4.10)

and now we can choose \(\delta \) such that  (1.10) is satisfied. This concludes the proof of Theorem 1.3.
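For instance, recalling that \(c_0= 2 \sup _{t \ge 0} \Vert u_x(t)\Vert _2\) is controlled by the initial data, one admissible (far from optimal) choice in the above argument is

$$\begin{aligned} \delta = \max \Big \{ \Big (\frac{6c_0\Vert u_0\Vert _2^3\, t^3}{\varepsilon }\Big )^{\frac{1}{2}},\; \frac{3c_0^2\Vert u_0\Vert _2^2\, t^3}{\varepsilon },\; \Big (\frac{6c_0^2\, t^2}{\varepsilon }\Big )^{\frac{1}{2}} \Big \}, \end{aligned}$$

since then each of the three terms on the right-hand side of (4.10) is at most \(\varepsilon /3\), while the left-hand side of (1.10) is bounded by f(t) because \(\phi \equiv 1\) on \({\mathbb {R}}\setminus (D + B( 0, \delta ))\).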

5 Existence and Partial Stability of Standing Waves

We will consider the system (1.1) in the attractive case \(a =-1\) and, without loss of generality, we assume that \(\alpha =1\). We want to study the existence and behaviour of standing waves of the system  (1.1), that is, solutions of the form  (1.11). As we have seen in the introduction, we can rewrite this system as the scalar equation (1.13). Following the technique introduced in [11] for the Gross–Pitaevskii equation, we consider the energy functional defined in X by  (1.14). Recall that \( X \subset L^q ({\mathbb {R}}), 2 \le q < + \infty \), with compact injection, and the norm in X is equivalent to the following norm (which by abuse we also denote by \(\Vert . \Vert _{X}\))

$$\begin{aligned} \Vert u \Vert _{X}^2 = \int |u_x|^2 dx + H^2 \int x^2 |u|^2 dx, H \ne 0, u\in {X}. \end{aligned}$$
(5.1)

We now pass to the proof of Theorem 1.4, which is a variant of Lemma 1.2 in [11], whose proof we closely follow. Let \(\{u_n\}\) be a minimizing sequence of \({\mathcal {E}}\) defined by (1.14) in X (real), that is

$$\begin{aligned} u_n \in X, \Vert u_n\Vert _2^2 = c^2, \lim _ {n\rightarrow +\infty } {\mathcal {E}} (u_n) = {\mathcal {I}}_c, \end{aligned}$$

defined by (1.15). Multiplying the equation satisfied by \(\rho (|u|^2)\) by \(\rho \) and integrating by parts, we find

$$\begin{aligned} \int [\rho _x^2 + \lambda \rho _x^4 + b \rho ^2 ] dx= \int |u|^2 \rho dx. \end{aligned}$$

Using Young’s inequality, we get, for a constant C depending on b (we allow this constant to change from line to line),

$$\begin{aligned} \int |u|^2 \rho dx \le \frac{b}{2}\int \rho ^2 dx + C_b \int |u|^4 dx. \end{aligned}$$

So using the two previous lines, we get that

$$\begin{aligned} \int \rho ^2 dx \le C_b \int |u|^4 dx. \end{aligned}$$
(5.2)

From Hölder’s inequality, we obtain that, for some constant \({\tilde{C}}>0\),

$$\begin{aligned} \int \rho |u|^2 dx \le \left( \int \rho ^2 dx \right) ^{\frac{1}{2}} \left( \int |u|^4 dx\right) ^{\frac{1}{2}}\le {\tilde{C}} \int |u|^4 dx. \end{aligned}$$
(5.3)

and, by the Gagliardo–Nirenberg inequality,

$$\begin{aligned} \Vert u\Vert _4^4 \le C \Vert u_x\Vert _2 \Vert u\Vert _2^3, \quad u\in H^1({\mathbb {R}}). \end{aligned}$$
(5.4)

Hence, reasoning as in [11], (1.1) in Lemma 1.2, we derive, for each \(\varepsilon > 0\) and \(u\in X\) such that \(\Vert u\Vert _2^2 = c^2\),

$$\begin{aligned} \Vert u\Vert _4^4 \le \frac{\varepsilon ^2}{2} \Vert u_x\Vert _2^2 + \frac{1}{2\varepsilon ^2} c^6, \end{aligned}$$
(5.5)

and so, for \(u\in X\) such that \(\Vert u\Vert _2^2 = c^2\), we deduce

$$\begin{aligned} {\mathcal {E}}(u) \ge \left( \frac{1}{2} -\frac{\varepsilon ^2}{2} \left( \frac{1}{4}+\frac{{\tilde{C}}}{2} \right) \right) \Vert u_x\Vert _2^2 - \frac{1}{2\varepsilon ^2 }\left( \frac{1}{4}+\frac{{\tilde{C}}}{2}\right) c^6 + \frac{1}{2} H^2\int x^2|u|^2 dx,\qquad \end{aligned}$$
(5.6)

and we can choose \(\varepsilon \) such that \(1- \varepsilon ^2 (\frac{1}{4}+\frac{{\tilde{C}}}{2} ) > 0\).

Hence, the minimizing sequence is bounded in X and there exists a subsequence, still denoted by \(\{u_n\}\), such that \(u_n \rightharpoonup u\) weakly in X. Recalling that the injection of X in \(L^4({\mathbb {R}})\) is compact, we derive

$$\begin{aligned} u_n \rightarrow u\, \textit{in} \, L^4({\mathbb {R}}). \end{aligned}$$
(5.7)

Moreover, by lower semi-continuity, we deduce

$$\begin{aligned} \int (|u_x|^2 + H^2x^2|u|^2) dx \le \liminf _{n \rightarrow \infty }\int (|{u_n}_x|^2 + H^2x^2|u_n|^2) dx. \end{aligned}$$
(5.8)

On the other hand, we have, setting \(f:= \rho (|u|^2) - \rho (|u_n|^2)=:\rho - \rho _n\),

$$\begin{aligned} - f_{xx} - \lambda (\rho _x^3 -(\rho _n)_x^3 )_x +b f = |u|^2 - |u_n|^2. \end{aligned}$$

Notice using Young’s inequality that

$$\begin{aligned} -\int (\rho - \rho _n) (\rho _x^3 -(\rho _n)_x^3 )_x dx =\int (\rho _x^4 +(\rho _n)_x^4 - \rho _x^3 (\rho _n)_x - (\rho _n)_x^3 \rho _x ) dx \ge 0. \end{aligned}$$

So proceeding as in (5.2), we can show that

$$\begin{aligned} \Vert f\Vert _{L^2}^2\le C \Vert |u|^2- |u_n|^2\Vert _{L^2}^2. \end{aligned}$$
(5.9)

Using this last estimate, we deduce that

$$\begin{aligned}&|\int \rho (|u|^2) |u|^2 dx - \int \rho (|u_n|^2) |u_n|^2 dx | \\&\quad \le |\int \rho (|u|^2) (|u|^2 - |u_n|^2) dx| + |\int (\rho (|u|^2) - \rho (|u_n|^2)) |u_n|^2 dx |\\&\quad \le \Vert \rho (|u|^2)\Vert _{L^2} \Vert |u_n|^2 - |u|^2 \Vert _{L^2}+ \Vert u_n\Vert _{L^4}^2 \Vert \rho (|u|^2) - \rho (|u_n|^2) \Vert _{L^2}\rightarrow 0, \end{aligned}$$

since \( |u_n|^2 \rightarrow |u|^2\) in \(L^2({\mathbb {R}})\).

Hence, u is a minimizer of  (1.14), that is

$$\begin{aligned} u\in X, \Vert u\Vert _2^2 = c^2, {\mathcal {E}}(u) = {\mathcal {I}}_c. \end{aligned}$$

We conclude that \( {\mathcal {E}}(u_n) \rightarrow {\mathcal {E}}(u)\) and so

$$\begin{aligned} \int |{u_n}_x|^2 dx+ H^2\int x^2 |u_n|^2 dx \rightarrow \int |u_x|^2 dx+ H^2\int x^2 |u|^2 dx. \end{aligned}$$
(5.10)

Combining this with the weak convergence and (5.8), we derive that \(u_n \rightarrow u\) in X. We denote by \( u^ {\star }\) the Schwarz rearrangement of the real function u (cf. [18] for the definition and general properties). We know that

$$\begin{aligned} \Vert u^ {\star }_x\Vert _2^2 \le \Vert u_x\Vert _2^2, \quad \Vert u^ {\star }\Vert _4^4 = \Vert u\Vert _4^4. \end{aligned}$$

The Polya–Szego inequality asserts that, for any \(f\in W^{1,p}\) with \(p\in [1,\infty ]\),

$$\begin{aligned} \Vert \nabla f\Vert _{L^p} \ge \Vert \nabla f^\star \Vert _{L^p}. \end{aligned}$$

Moreover, by [11], we have

$$\begin{aligned} \int x^2|u^ {\star }|^2 dx< \int x^2|u|^2 dx, \textit{unless} \,\, u= u^ {\star }. \end{aligned}$$
(5.11)

By [12, Theorem 6.3] (see also [13]), we know that

$$\begin{aligned} \int G (v(x)) dx \le \int G (v^\star (x)) dx, \end{aligned}$$

provided that \(G(t)= \int _0^t g (s) ds\) and \(g: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) is such that

$$\begin{aligned} |g(s)|\le K (s+s^l), \end{aligned}$$

where \(K>0\), \(l>1\) and \(s\ge 0\). We want to apply this result for \(G(s)=\rho (s^2) s^2\). So \(g(s)= (\rho (s^2))_s s^2 +2\,s \rho (s^2) \). Observe that \((\rho (s^2))_s :=f \) is the solution to \(- f_{xx} - 3\lambda ((\rho (s^2))_x^2 f_x )_x +bf =2\,s\). Using the maximum principle, we can show that \(g: {\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\). On the other hand, by standard elliptic regularity theory, we have that \(|\rho (s^2)|, |\rho (s^2)_s|\le C(s+s^2 )\), for any \(s>0\). So, [12, Theorem 6.3] yields that

$$\begin{aligned} \int \rho (|u|^2) |u|^2 dx \le \int \rho (|u^\star |^2) |u^\star |^2 dx. \end{aligned}$$

Combining all the previous inequalities, we see that \({\mathcal {E}} (u^{\star }) < {\mathcal {E}} (u)\) unless \(u= u^{\star }\) a.e. and this proves that the minimizers of  (1.14) are non-negative and radial decreasing. This completes the proof of Theorem 1.4.

We now pass to the proof of Theorem 1.5, which follows the lines of the proof of Theorem 2.1 in [11]. For the sake of completeness we repeat some parts of that proof to make it easier to follow.

We recall that, cf. [5], to prove the orbital stability it is enough to prove that \({\mathcal {Z}}_c \ne \varnothing \) and that any sequence \(\big \{w_n=(u_n,v_n)\big \} \subset {\tilde{X}}\) such that \(\Vert w_n\Vert _2^2 \rightarrow c^2\) and \(\tilde{{\mathcal {E}}}(w_n) \rightarrow \tilde{{\mathcal {I}}}_c\) is relatively compact in \({\tilde{X}}\). By the computations in the proof of Theorem 1.4, the sequence \(\{w_n\}\) is bounded in \({\tilde{X}}\), so we can assume that there exist a subsequence, still denoted by \(\{w_n\}\), and \(w=(u,v) \in {\tilde{X}}\) such that \(w_n \rightharpoonup w\) weakly in \({\tilde{X}}\), that is, \(u_n \rightharpoonup u, v_n \rightharpoonup v\) in X. Hence, passing to a further subsequence, still denoted by \(\{w_n\}\), we may assume that there exists

$$\begin{aligned} \lim _{n \rightarrow \infty } \int ( |{u_n}_x|^2 + |{v_n}_x|^2 ) dx. \end{aligned}$$
(5.12)

Now, we introduce \(\varrho _n = |w_n| = ( u_n^2 + v_n^2 )^\frac{1}{2} \), which belongs to X. Following the proof of [11, Theorem 2.1], we have

\({\varrho _n}_x = \frac{u_n {u_n}_x + v_n {v_n}_x}{(u_n^2 + v_n^2)^\frac{1}{2} }\), if \(u_n^2 + v_n^2 > 0\), and \({\varrho _n}_x = 0\), otherwise.

We deduce

$$\begin{aligned} \begin{aligned} \tilde{{\mathcal {E}}}(w_n)-{\mathcal {E}} (\varrho _n)&= \frac{1}{2} \int _{u_n^2 + v_n^2> 0} \frac{(u_n {v_n}_x - v_n {u_n}_x)^2}{u_n^2 + v_n^2}\, dx\\&\quad - \frac{1}{4} \int (|u_n|^2+ |v_n|^2)^2 dx + \frac{1}{4}\int |\varrho _n|^4 dx\\&= \frac{1}{2} \int _{u_n^2 + v_n^2 > 0} \frac{(u_n {v_n}_x - v_n {u_n}_x)^2}{u_n^2 + v_n^2}\, dx. \end{aligned} \end{aligned}$$
(5.13)

Hence, we derive as in [11, Theorem 2.1],

$$\begin{aligned} {\tilde{{\mathcal {I}}}}_c = \lim _{n \rightarrow \infty } \mathcal {{{\tilde{E}}}}(w_n) \ge \limsup _{n \rightarrow \infty } {\mathcal {E}} (\varrho _n) \end{aligned}$$
(5.14)

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \Vert \varrho _n\Vert _2^2 = \lim _{n \rightarrow \infty } \Vert w_n\Vert _2^2 = c^2. \end{aligned}$$
(5.15)

Applying Theorem 1.4 with \(c_n = \Vert \varrho _n\Vert _2\), we obtain

$$\begin{aligned} \liminf _{n \rightarrow \infty } {\mathcal {E}} (\varrho _n) \ge \liminf _{n \rightarrow \infty } {\mathcal {I}}_{c_n} \ge {\mathcal {I}}_c \ge \mathcal {{\tilde{I}} }_c. \end{aligned}$$
(5.16)

Hence, by (5.14) and (5.16), we derive

$$\begin{aligned} \lim _{n \rightarrow \infty } {\mathcal {E}} (\varrho _n)=\lim _{n \rightarrow \infty } \mathcal {{\tilde{E}}}(w_n) = {\mathcal {I}}_c = \mathcal {{\tilde{I}}}_c \,, \end{aligned}$$
(5.17)

and so, by (5.13) and (5.17), we get

$$\begin{aligned} \lim _{n \rightarrow \infty } \int |{u_n}_x|^2 + |{v_n}_x|^2 - | \partial _x \big (( u_n^2+ v_n^2 )^{\frac{1}{2}}\big ) |^2 dx = 0. \end{aligned}$$
(5.18)

We can rewrite this last line as

$$\begin{aligned} \lim _{n \rightarrow \infty } \int ( |{u_n}_x|^2 + |{v_n}_x|^2) dx = \lim _{n \rightarrow \infty } \int |{\varrho _n}_x|^2 dx. \end{aligned}$$
(5.19)

Now, by (5.15), (5.17) and iii) in Theorem 1.4, we conclude that there exists \(\varrho \in X\) such that \(\varrho _n \rightarrow \varrho \) in X and \(\Vert \varrho \Vert _2^2 = c^2 , {\mathcal {E}} (\varrho ) = {\mathcal {I}}_c.\) Moreover \(\varrho \in H^2({\mathbb {R}}) \subset C^1({\mathbb {R}})\) is a solution of (1.13) and \(\varrho > 0\). We prove that \(\varrho =(u^2 + v^2)^{\frac{1}{2}} \) just as in the proof of Theorem 2.1 in [11, p. 279].

Finally, we prove that \(\Vert {w_n}_x\Vert _2^2\rightarrow \Vert w_x\Vert _2^2 \). By applying (5.19) we have \(\lim _{n \rightarrow \infty } \Vert {w_n}_x\Vert _2^2 = \lim _{n \rightarrow \infty } \Vert {\varrho _n}_x\Vert _2^2\) and \(\Vert {\varrho _n}_x\Vert _2^2 \rightarrow \Vert \varrho _x\Vert _2^2\), since \(\varrho _n \rightarrow \varrho \) in X. Hence, \(\Vert w_x\Vert _2^2 \le \lim _{n \rightarrow \infty } \Vert {w_n}_x\Vert _2^2 = \Vert \varrho _x\Vert _2^2.\) But it is easy to see that

$$\begin{aligned} \Vert w_x\Vert _2^2 = \int (|u_x|^2 + |v_x|^2) dx \ge \int _{u^2 + v^2 > 0} \frac{(u u_x + v v_x)^2}{u^2 + v^2}\, dx = \Vert \varrho _x\Vert _2^2, \end{aligned}$$

because \( (u u_x + v v_x)^2 \le (u^2 + v^2) (|u_x|^2 + |v_x|^2)\) by the Cauchy–Schwarz inequality. Hence, \(\Vert {w_n}_x\Vert _2^2 \rightarrow \Vert w_x\Vert _2^2\). We also have that \( w_n \rightharpoonup w\), weakly in \( {{\tilde{X}}} \). In particular, by compactness, \( w_n \rightarrow w\) in \((L^2({\mathbb {R}}))^2 \cap (L^4({\mathbb {R}}))^2.\)

Since \(\mathcal {{\tilde{E}}}(w_n) \rightarrow \mathcal {{\tilde{I}}}_c = \mathcal {{\tilde{E}}}(w)\), we derive that \(\int x^2|w_n|^2dx \rightarrow \int x^2|w|^2dx\) and so \(\Vert w_n\Vert _{{{\tilde{X}}}}^2 \rightarrow \Vert w \Vert _{{{\tilde{X}}}}^2\). We conclude that \(w_n \rightarrow w\) in \( {{\tilde{X}}}\), and this completes the proof of Theorem 1.5.

Remark 5.1

We would like to remark that in the semilinear case, namely when \(\lambda =0\), we can simplify some arguments. Indeed, by applying the Fourier transform to the equation satisfied by \(\rho \) (the second equation in (1.12) with \(\lambda =0\)), we can solve this equation explicitly and derive

$$\begin{aligned} \rho = {\mathcal {F}}^{-1} \Big (\frac{{\mathcal {F}}|u|^2}{b+4\pi ^2\xi ^2}\Big ). \end{aligned}$$
(5.20)

The energy functional is then given by:

$$\begin{aligned} \begin{aligned} {\mathcal {E}} (u)&= \frac{1}{2} \int |u_x|^2 dx + \frac{1}{2} H^2 \int x^2|u|^2 dx \\ {}&\quad - \frac{1}{4} \int |u|^4 dx - \frac{1}{4} \int \frac{|{\mathcal {F}}|u|^2|^2}{1+4\pi ^2\xi ^2} d\xi , \, u\in X. \end{aligned} \end{aligned}$$
(5.21)

We can use directly (5.20) to obtain an estimate on \(\rho \). To prove the symmetry of minimizers, we can use Proposition 3.2 in [17], noticing that \((|u|^2)^{\star } = |u^{\star }|^2\), to deduce that

$$\begin{aligned} \int \frac{|{\mathcal {F}}|u|^2|^2}{1+4\pi ^2\xi ^2} d\xi \le \int \frac{ |{\mathcal {F}}(|u|^2)^{\star }|^2}{1+4\pi ^2\xi ^2} d\xi = \int \frac{ |{\mathcal {F}}(|u^{\star }|^2)|^2}{1+4\pi ^2\xi ^2} d\xi . \end{aligned}$$
(5.22)
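Formula (5.20) can also be evaluated directly with a discrete Fourier transform. A minimal Python sketch (assuming a uniform grid on which \(|u|^2\) has essentially decayed at the endpoints, so that the periodic approximation is harmless; names and parameters are illustrative):

```python
import numpy as np

def rho_from_intensity(u, dx, b=1.0):
    """Semilinear case (lambda = 0): solve -rho_xx + b*rho = |u|^2 via (5.20),
    with the convention F g(xi) = int g(x) exp(-2*pi*i*x*xi) dx."""
    g = np.abs(u) ** 2
    xi = np.fft.fftfreq(g.size, d=dx)                 # frequencies in cycles per unit length
    rho_hat = np.fft.fft(g) / (b + 4.0 * np.pi**2 * xi**2)
    return np.real(np.fft.ifft(rho_hat))

# illustration with a Gaussian profile
x = np.linspace(-20.0, 20.0, 4096)
u = np.exp(-x**2 / 2.0)
rho = rho_from_intensity(u, x[1] - x[0], b=1.0)
```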

6 Bifurcation Structure

This section is devoted to the study of the bifurcation structure of solutions to the minimization problem (1.15); namely, we prove Proposition 1.6. We begin by showing a Pohozaev identity which is also of independent interest.

Lemma 6.1

(Pohozaev identity) Let \(u\in X\) be a solution to (1.13). Then we have

$$\begin{aligned} 2\Vert u_x\Vert _2^2 - 2H^2 \Vert xu\Vert _2^2 -\frac{1}{2} \Vert u\Vert _4^4 + \int u^2 x\rho _x (|u|^2) dx =0. \end{aligned}$$

Proof

To simplify notation, we set \(\rho := \rho (|u|^2)\). Multiplying the Eq. (1.13) by \(xu_x\) and integrating by parts, we get

$$\begin{aligned} \Vert u_x \Vert _2^2 - 3H^2 \Vert xu\Vert _2^2 + \dfrac{1}{2}\Vert u\Vert _{4}^4 + \int u^2 (\rho +x \rho _x) dx- \mu \Vert u\Vert _2^2=0. \end{aligned}$$
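For the reader's convenience, the individual integrations by parts behind this identity read (for real u with sufficient decay, a routine computation):

$$\begin{aligned}&\int u_{xx}\, xu_x\, dx = -\frac{1}{2}\Vert u_x\Vert _2^2, \quad -H^2\int x^2 u\, xu_x\, dx = \frac{3H^2}{2}\Vert xu\Vert _2^2, \quad \int u^3\, xu_x\, dx = -\frac{1}{4}\Vert u\Vert _4^4,\\&\int \rho u\, xu_x\, dx = -\frac{1}{2}\int (\rho + x\rho _x)u^2\, dx, \quad -\mu \int u\, xu_x\, dx = \frac{\mu }{2}\Vert u\Vert _2^2. \end{aligned}$$

Summing these identities and multiplying by \(-2\) gives the displayed relation.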

On the other hand, multiplying the equation by u and integrating by parts, we get

$$\begin{aligned} \Vert u_x \Vert _2^2 +H^2 \Vert xu\Vert ^2_2 - \int \rho u^2 dx - \Vert u\Vert _4^4 + \mu \Vert u\Vert _2^2 =0. \end{aligned}$$
(6.1)

So combining the two previous lines, we find

$$\begin{aligned} 2\Vert u_x \Vert _2^2 - 2H^2 \Vert xu\Vert _2^2 -\frac{1}{2} \Vert u\Vert _4^4 + \int u^2 x\rho _x dx=0. \end{aligned}$$

\(\square \)

Let us denote by \(u_c\) a function achieving the minimum for the problem (1.15) and by \(\mu _c\) its Lagrange multiplier. We also denote by \(\lambda _0\) the first eigenvalue of the harmonic oscillator \(-\partial _{xx}+H^2 x^2 \). We will show that \(\mu _c\) converges to \(-\lambda _0\) when the mass c goes to 0.

Proposition 6.2

We have

$$\begin{aligned} \lim _{c\rightarrow 0} \mu _c =- \lambda _0. \end{aligned}$$

Proof

First, we are going to show that \(-\mu _c \le \lambda _0\). Multiplying the equation satisfied by \(u_c\) by \(u_c\) and integrating by parts, we get

$$\begin{aligned} - c^2 \mu _c = \Vert (u_c)_x \Vert _2^2 +H^2\Vert xu_c\Vert _2^2 -\Vert u_c\Vert _4^4 - \int \rho (|u_c|^2) u_c^2 dx = 2E(u_c) - \dfrac{\Vert u_c\Vert _4^4}{2}. \end{aligned}$$

Thus, we deduce that

$$\begin{aligned} -\mu _c \le \dfrac{2 E(u_c)}{c^2}. \end{aligned}$$

Let \(u_0\) be the eigenfunction associated to \(\lambda _0\) namely \(\Vert u_0\Vert _{X}^2 =\lambda _0\) and \(\Vert u_0\Vert _2=1\). We set \(v_c =cu_0\). Using that \(u_c\) is a minimiser of problem (1.15), we have that \(E(u_c)\le E (v_c)\) and

$$\begin{aligned} \dfrac{2 E(cu_0)}{c^2} =\Vert (u_0)_x \Vert _2^2 +H^2\Vert xu_0\Vert _2^2 - \dfrac{c^2}{2} \int u_0^4 dx - \int \rho (|u_0|^2) u_0^2 dx \le \lambda _0. \end{aligned}$$

This proves that \(-\mu _c \le \lambda _0\).

Using Pohozaev’s identity (see Lemma 6.1) and (6.1), we have

$$\begin{aligned} 2 \Vert ( u_c)_x\Vert _2^2 + \int u_c^2 (\frac{x\rho _x (|u_c|^2)}{2} - \rho (|u_c|^2)) dx - \frac{5}{4} \Vert u_c\Vert _4^4 + \mu _c \Vert u_c\Vert _2^2 =0. \end{aligned}$$

So, recalling that \(-\mu _c \le \lambda _0\), we have for a constant \(M>0\) not depending on c that

$$\begin{aligned} \Vert (u_c)_x\Vert _2^2 \le Mc^2 + M \Vert u_c\Vert _4^4 - \int u_c^2 (\frac{x\rho _x (|u_c|^2)}{2} - \rho (|u_c|^2)) dx. \end{aligned}$$

Notice that, integrating by parts and using radial coordinates,

$$\begin{aligned} \int u_c^2 x\rho _x (|u_c|^2) dx&=\int u_c^2 (x\rho (|u_c|^2))_x dx - \int u_c^2 \rho (|u_c|^2) dx\\&= -2\int u_c (u_c)_r r \rho (|u_c|^2) dr- \int u_c^2 \rho (|u_c|^2) dx \\&\ge - \int u_c^2 \rho (|u_c|^2) dx. \end{aligned}$$

In the last inequality, we used that \(u_r \le 0\). So, by (5.3), we obtain, for some constant M not depending on c,

$$\begin{aligned} \Vert (u_c)_x\Vert _2^2 \le Mc^2 +M \Vert u_c\Vert _4^4 +M \int u_c^2 \rho (|u_c|^2) dx \le Mc^2 +M \Vert u_c\Vert _4^4. \end{aligned}$$

The Gagliardo–Nirenberg’s inequality (5.4) and Young’s inequality then imply that

$$\begin{aligned} \Vert (u_c)_x\Vert _2^2 \le M c^2. \end{aligned}$$

We have, by definition of \(\lambda _0\),

$$\begin{aligned} -\mu _c&= \dfrac{\Vert (u_c)_x\Vert _2^2 +H^2 \Vert x u_c\Vert _2^2}{c^2} - \dfrac{\Vert u_c\Vert _4^4 +\int \rho (|u_c|^2) u_c^2 dx}{c^2} \\&\ge \lambda _0 - \dfrac{\Vert u_c\Vert _4^4 +\int \rho (|u_c|^2) u_c^2 dx}{c^2}. \end{aligned}$$

Then, using (5.3) and Gagliardo–Nirenberg’s inequality (5.4), we deduce that, for some constant k not depending on c,

$$\begin{aligned} -\mu _c \ge \lambda _0 -kc^2. \end{aligned}$$

Taking \(c\rightarrow 0\), the result follows.

\(\square \)

Adapting the proof of Proposition 6.7 of [15], we can prove the uniqueness of the positive radial solution to (1.13), namely Proposition 1.7.

Proof of Proposition 1.7

We denote by \(u(r,\alpha _1)\) the radial solution to (1.13) such that \(u(0,\alpha _1)=\alpha _1\). Suppose that there exist two numbers \(0<\alpha _1<{\tilde{\alpha }}_1\) such that \(u(r,\alpha _1)\) and \(u(r,{\tilde{\alpha }}_1)\) are two positive radial solutions decaying to 0 at infinity. To simplify notation, we set \(u(r)=u(r,\alpha _1)\) and \(\eta (r)=u(r,{\tilde{\alpha }}_1)\). Let \(\psi = \eta - u \). In the following, we denote by \(u'= \partial _r u\). Then \(\psi \) satisfies

$$\begin{aligned} \psi '' - (\lambda +r^2 )\psi + \dfrac{|\eta |^2 \eta - |u|^2 u +\rho (|\eta |^2) \eta - \rho (|u|^2) u}{\eta - u}\psi =0. \end{aligned}$$
(6.2)

Multiplying the previous equation by u and multiplying (1.13) by \(\psi \), taking the difference and integrating by parts, we find

$$\begin{aligned} \psi ' (r) u(r) - u^\prime (r) \psi (r)&= \int _0^r (u^3 +\rho (|u|^2 )u) \psi dx\\&\quad - \int (\eta ^3 + \rho (|\eta |^2) \eta -u^3 -\rho (|u|^2) u) u dx \\&= \int (u^{2} +\rho (|u|^2) - \eta ^{2} - \rho (|\eta |^2)) \eta u dx. \end{aligned}$$

Observe that the left-hand side goes to 0 as \(r\rightarrow \infty \), whereas, if we assume that \(\eta (r)>u(r)\) for all \(r\ge 0\), the right-hand side converges to a negative constant (indeed, by the maximum principle, we can show that \(\rho (|u|^2) -\rho (|\eta |^2)<0\)). So there exists \(\gamma _1\) such that \(\eta (\gamma _1)=u(\gamma _1)\).

Next, we will show that it is in fact the only intersection point between u and \(\eta \). Indeed, suppose by contradiction that there exists \(\gamma _2>\gamma _1\) such that

$$\begin{aligned} 0<\eta (r) < u(r) \ \text {for}\ r\in (\gamma _1,\gamma _2),\ u(\gamma _2)=\eta (\gamma _2). \end{aligned}$$

This implies that

$$\begin{aligned} \psi (r)<0\ \text {for}\ r\in (\gamma _1,\gamma _2),\ \psi ^\prime (\gamma _1) <0,\ \psi ^\prime (\gamma _2)>0\ \text {and}\ \psi (\gamma _1)=\psi (\gamma _2). \end{aligned}$$

Let \(\xi \) be a solution to (here and below we keep the notation of [15], writing the cubic nonlinearity as \(|u|^{p-1}u\) with \(p=3\))

$$\begin{aligned} {\left\{ \begin{array}{ll} \xi '' - (\lambda +r^2 ) \xi +[p |u|^{p-1}+ \partial _u (\rho (|u|^2) u )]\xi = 2r u,\ r>0\\ \xi (0)=0,\ \xi ^\prime (0)= (\lambda -\rho (\alpha ^2) ) \alpha -\alpha ^p. \end{array}\right. } \end{aligned}$$
(6.3)

In fact, we can think of \(\xi \) as \(u^\prime \) noticing that \((\rho (|u|^2) u )_x= u^\prime (\rho (|u|^2) + u \partial _u (\rho (|u|^2)))\). Let

$$\begin{aligned} \chi (r)=p|u|^{p-1} +\partial _u (\rho (|u|^2)u) - \dfrac{|\eta |^{p-1} \eta - |u|^{p-1} u+ \rho (|\eta |^2) \eta -\rho (|u|^2) u}{\eta -u}. \end{aligned}$$

Observe that the function \(u\rightarrow \rho (|u|^2) u\) is convex. Indeed \(\partial _{uu}(\rho (|u|^2) u)= u\partial _{uu} \rho (|u|^2) + 2 \partial _u \rho (|u|^2)\) where \(\partial _u \rho (|u|^2) :=f\) is the solution to

$$\begin{aligned} -f'' - 3 \lambda ((\rho (|u|^2)^\prime )^2 f_x)' +bf = 2u, \end{aligned}$$

and, \(\partial _{uu}\rho (|u|^2)=g\) is the solution to

$$\begin{aligned} -g'' - 3\lambda ((\rho (|u|^2)^\prime )^2 g_x)' +bg = 2 + 6\lambda ((\partial _u \rho (|u|^2))^2)^{\prime \prime }. \end{aligned}$$

By the maximum principle, we see that \(f\ge 0\) and \(g\ge 0\) (since by comparison principle we can show that \(\rho (t x_1)\le t \rho (x_1)\) which implies, using once more comparison principle that \(\rho (t x_1 +(1-t) x_2)\le t \rho (x_1) + (1-t)\rho (x_2)\), for all \(x_1,x_2\ge 0\) and \(t\in [0,1]\)). Using this and the convexity of \(u^p\), we see that \(\chi (r)>0\) when \(r \in (\gamma _1 ,\gamma _2)\). Taking the difference of (6.2) multiplied by \(\xi \) and (6.3) multiplied by \(\psi \) and integrating by parts on \([\gamma _1,r]\), we find

$$\begin{aligned} \xi (r) \psi ^\prime (r) - \xi ^\prime (r) \psi (r) = \xi (\gamma _1) \psi ^\prime (\gamma _1) +\int _{\gamma _1}^r (\chi (s) \xi (s) \psi (s) -2 s u(s) \psi (s))ds. \end{aligned}$$
(6.4)

Taking \(r=\gamma _2\) in the previous identity, we get

$$\begin{aligned} \xi (\gamma _2) \psi ^\prime (\gamma _2) = \xi (\gamma _1) \psi ^\prime (\gamma _1) + \int _{\gamma _1}^{\gamma _2} [\chi (s) \xi (s) \psi (s) -2 s u(s) \psi (s) ]ds. \end{aligned}$$

This is a contradiction since the left-hand side is strictly negative while the right-hand side is strictly positive. This establishes that u and \(\eta \) intersect exactly once.

Finally, we show that \(\eta \) has to change sign. Suppose by contradiction that

$$\begin{aligned} 0<\eta (r) < u(r)\ \text {for}\ r\in (\gamma _1,\infty ). \end{aligned}$$

This implies that \(\psi (r)<0\) for \(r\in (\gamma _1,\infty )\), \(\psi ^\prime (\gamma _1)<0\) and \(\psi (\gamma _1)=0\). Since \(u,u^\prime \) and \(u''\) go to 0 as \(r\rightarrow \infty \) (and the same for \(\eta \)), we see that the left-hand side of (6.4) goes to 0 taking \(r\rightarrow \infty \) whereas the right-hand side converges to a positive constant. Therefore, \(\eta \) cannot be positive everywhere and consequently u is the unique positive radial solution to our equation. \(\square \)

We are finally in position to prove our bifurcation result, i.e. Proposition 1.6.

Proof of Proposition 1.6

Since \(\lambda _0\) is a simple eigenvalue, we can apply standard bifurcation results (see for instance [10, Theorem 2.1]) to deduce that \((\lambda _0,0)\) is indeed a bifurcation point and that the branch is unique provided that we are sufficiently close to the bifurcation point. Next, Proposition 6.2 guarantees that the minimizer \(u_c\) of (1.15) actually belongs to this branch, at least for \(c>0\) small enough. Finally, we use our uniqueness result, Proposition 1.7, to see that the set \(\{u_c,c>0\}\) is connected and therefore included in the bifurcating branch. \(\square \)

7 Numerical Simulations

In this section we perform some numerical simulations to illustrate our results. We investigate the limit \(\mu \rightarrow -\lambda _0\) mentioned in the previous section, and analyse the behaviour of standing waves with the variation of the intensity of the magnetic field \(\textbf{H}\).

7.1 Numerical Method

Our first goal is to numerically approximate the standing waves (1.11), according to the system (1.12). Following [11], we use a shooting method. However, in the present case, the director field angle \(\rho = \rho (|u|^2)\) acts as an additional potential type term, depending on u itself. Due to this, we perform a Picard iteration and look for a fixed point u of the operator \(\varphi \mapsto \Phi (\varphi )\), where \(\Phi (\varphi )\) is the solution of

$$\begin{aligned} u_{xx} - H^2 x^2 u + |u|^2 u + \rho (\varphi ) u = \mu u, \end{aligned}$$
(7.1)

with \(\rho (\varphi )\) solving

$$\begin{aligned} -\rho _{xx} - \lambda (\rho ^3_x)_x + b \rho = \varphi ^2 \end{aligned}$$
(7.2)

and with boundary conditions \(u(0) = u_0>0\), \(\rho (0) = \rho _0 >0\), \(u(\infty ) =u'(0) = \rho (\infty )= \rho '(0) = 0\).

According to the results in the previous sections, we look for u real-valued, even, smooth, vanishing at infinity, strictly positive and decreasing in |x|. For convenience, we shall denote the class of functions verifying these conditions by \({\mathcal {V}}\). Although there is no result giving a similar structure for \(\rho (x)\), it is natural to assume that \(\rho \) satisfies the same hypotheses as u, at least for small \(\lambda \), and so we look for \(u,\rho \in {\mathcal {V}}\).

We now describe our procedure in more detail. First, equations (7.1),(7.2) can be recast as a first-order system:

$$\begin{aligned} \left\{ \begin{aligned}&u_x = w \\&w_x = H^2 x^2 u - |u|^2 u - \rho (\varphi ) u + \mu u \\&\rho _x = v \\&v_x = -\lambda (v^3)_x + b \rho - \varphi ^2, \end{aligned}\right. \end{aligned}$$
(7.3)

with boundary conditions \(u(0) = u_0>0\), \(\rho (0)=\rho _0>0\), \(v(0)=w(0)=0\), and \(\varphi \in {\mathcal {V}}\).

At each stage in the Picard iteration, we need, for a given \(\varphi \in {\mathcal {V}}\), to find \((u,w,\rho ,v)\) solving (7.3). As mentioned, we employ a shooting method, which we now describe. Suppose that we have computed \(\rho (\varphi ), v(\varphi )\), and wish to compute u and w. The idea is to adjust the initial value \(u(0)=u_0\) so that \(u(\infty ) =0\). Following [11], \(u_0\) should verify \(u_0 = \sup \{ \beta>0 : u(x;\beta )>0, x>0\}\), where \(u(x;\beta )\) is the solution of (7.1) with \(u(0) =\beta \), \(u\in {\mathcal {V}}\). At each step of the shooting method, we look for \(u_0\) in an interval \([a_n,b_n]\). We set \(u_{0,n} = (a_n+b_n)/2\) and solve the first two equations of (7.3) using an explicit Euler scheme (which is sufficient for our purposes) with \(w(0) = 0\). Then, if u attains negative values for some x, we set \(a_{n+1} = a_n\), \(b_{n+1} = u_{0,n}\), thus decreasing \(u_{0,n+1}\). Conversely, if u(x) is increasing at some point (so that it does not belong to the class \({\mathcal {V}}\)), we set \(a_{n+1}=u_{0,n}\) and \(b_{n+1} = b_n\), which increases \(u_{0,n+1}\).
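A schematic Python version of this bisection (with illustrative names and thresholds, and with \(\rho \) given as a precomputed array on the grid) could read:

```python
import numpy as np

def shoot_u(u0, x, rho, H, mu):
    """Explicit Euler integration of the (u, w) subsystem of (7.3) for a trial u(0) = u0,
    with w(0) = 0 and a fixed profile rho (array on the grid x)."""
    dx = x[1] - x[0]
    u = np.zeros_like(x)
    w = np.zeros_like(x)
    u[0] = u0
    for j in range(x.size - 1):
        u[j + 1] = u[j] + dx * w[j]
        w[j + 1] = w[j] + dx * (H**2 * x[j]**2 * u[j] - u[j]**3 - rho[j] * u[j] + mu * u[j])
        if abs(u[j + 1]) > 1e6:              # trajectory blowing up: stop early
            return u[: j + 2]
    return u

def find_u0(x, rho, H, mu, lo=0.0, hi=5.0, n_iter=60):
    """Bisection on u(0): decrease it when the trial solution becomes negative,
    increase it when the trial solution starts to grow (i.e. leaves the class V)."""
    for _ in range(n_iter):
        u0 = 0.5 * (lo + hi)
        u = shoot_u(u0, x, rho, H, mu)
        if np.any(u < 0.0):                  # overshoot: decrease u0
            hi = u0
        elif np.any(np.diff(u) > 1e-12):     # undershoot: u stops decreasing, increase u0
            lo = u0
        else:
            break
    return u0
```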

The procedure to compute \(\rho \) and v is similar, except that the behaviour of \(\rho \) exhibits an inverse dependence on the initial value \(\rho (0)\); thus in each iteration of the shooting method the value of \(\rho (0)\) is increased when \(\rho \) becomes negative, and decreased when \(\rho \) becomes increasing.

Let us mention that on each iteration of the shooting method, the equation for v in (7.3) contains a nonlinear term when \(\lambda \ne 0\). The discretized equation reads

$$\begin{aligned} \frac{v_{j+1} -v_j}{dx} = -\lambda \frac{(v_{j+1})^3 - (v_j)^3}{dx} + b\rho _j - (\varphi _j)^2, \end{aligned}$$
(7.4)

and so we use a Newton method at each step to approximately solve for \(v_{j+1}\).
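Writing \(R = v_j + \lambda v_j^3 + dx\,(b\rho _j - \varphi _j^2)\), each step amounts to solving the scalar equation \(z + \lambda z^3 = R\) for \(z = v_{j+1}\). A minimal Newton sketch for this step (the previous value serves as initial guess; names are illustrative):

```python
def solve_v_next(v_j, rho_j, phi_j, dx, lam, b, tol=1e-12, max_iter=50):
    """One step of the discretized v-equation (7.4): solve z + lam*z**3 = R
    for z = v_{j+1} by Newton's method; the derivative 1 + 3*lam*z**2 never vanishes."""
    R = v_j + lam * v_j**3 + dx * (b * rho_j - phi_j**2)
    z = v_j
    for _ in range(max_iter):
        step = (z + lam * z**3 - R) / (1.0 + 3.0 * lam * z**2)
        z -= step
        if abs(step) < tol:
            break
    return z
```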

As a starting point for the Picard iteration, we take \(u^{(0)}(x) \in {\mathcal {V}}\) as the solution with \(\rho =0\), that is, \(u^{(0)}\) solves \(u_{xx} - H^2 x^2 u + |u|^2 u = \mu u\) with \(u^{(0)}\in {\mathcal {V}}\). With this initial guess in hand, we compute \(u^{(1)}(x), w^{(1)}(x)\), and so on, using the shooting method, according to

$$\begin{aligned} \left\{ \begin{aligned}&u^{(n)}_x = w^{(n)} \\&w^{(n)}_x = H^2 x^2 u^{(n)} - (u^{(n)})^3 - \rho (u^{(n-1)}) u^{(n)} + \mu u^{(n)}, \end{aligned}\right. \end{aligned}$$
(7.5)

where \(\rho (u^{(n-1)})\) solves

$$\begin{aligned} \left\{ \begin{aligned}&\rho _x = v, \\&v_x ={} -\lambda (v^3)_x + b \rho - (u^{(n-1)})^2 \end{aligned}\right. \end{aligned}$$

(also using the shooting method), with boundary conditions \(u(0) = u_0>0\), \(\rho (0)=\rho _0>0\), \(v(0)=w(0)=0\).

7.2 Numerical Results

Fig. 1

Numerical approximation of the standing wave u(x) and the director field angle \(\rho (x)\), solutions to (1.12), computed using a shooting method and Picard iteration (Picard iterations in dashed lines). Parameters are \(H=1,\mu =-0.8,\lambda =0.1, b=1\)

In Fig. 1, we plot the standing wave u(x) and the director field angle \(\rho (x)\) calculated according to the procedure described previously. The dashed lines correspond to the iterations of the Picard method. For this simulation, we have used a spatial step \(dx = 0.002\) (corresponding to 3000 spatial points) and 15 Picard iterations.

Next, we illustrate the result of Proposition 6.2. First, it is easy to see that \(u^*(x) = e^{-\frac{H}{2}x^2}\) is the first eigenfunction of the harmonic oscillator \(-\partial _{xx} + H^2 x^2\), with eigenvalue \(\lambda _0=H\). Note that, in our notation, the parameter \(-\mu \) plays the role of the spectral parameter approaching \(\lambda _0\). In parallel with [11], and in accordance with Proposition 6.2, we verify numerically that the \(L^2\) norm of \(u_\mu \) goes to zero as \(\mu \rightarrow -\lambda _0^+\). Taking \(H=\lambda _0=2,\) we show in Fig. 2 the numerical solutions of (1.12) for various values of \(\mu \rightarrow -\lambda _0^+\). We can see that the solutions appear to converge to zero, although the convergence is very slow. In Fig. 3, we show how the \(L^2\) norm of \(u=u_\mu \) varies as the Lagrange multiplier \(\mu \) tends to the value \(-\lambda _0\). Our numerical tests indicate that, although slow, the convergence to zero of the \(L^2\) norm of \(u_\mu \) is verified, in accordance with Proposition 6.2.
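As a quick numerical sanity check of the value \(\lambda _0=H\), independent of the shooting procedure (grid parameters below are illustrative), one can diagonalize a finite-difference discretization of the harmonic oscillator:

```python
import numpy as np

# Discretize -d^2/dx^2 + H^2 x^2 with second-order finite differences and
# homogeneous Dirichlet conditions on a large interval; the smallest
# eigenvalue should be close to lambda_0 = H.
H, L, N = 2.0, 10.0, 1001
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
T = (np.diag(2.0 / dx**2 + H**2 * x**2)
     - np.diag(np.ones(N - 1) / dx**2, 1)
     - np.diag(np.ones(N - 1) / dx**2, -1))
lam0 = np.linalg.eigvalsh(T)[0]
print(lam0)        # approximately 2.0
```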

Fig. 2

Numerical approximation of the standing wave u(x) and the director field angle \(\rho (x)\), solutions to (1.12), with \(\mu \rightarrow -\lambda _0\). Parameters are \(H=\lambda _0 =2,\lambda =0.1, b=1\)

Fig. 3

The norm \(\Vert u\Vert _2^2\) as a function on the Lagrange multiplier \(\mu \) as \(\mu \rightarrow -\lambda _0 = - H =-2\), in \(\log \)-\(\log \) scale. The values of \(\mu \) are the same as in Fig. 2, but \(\mu \) is ranging from \(-1.9\) to \(-1.9999153\), taking 60 values (left). On the right is a zoom on the last 15 values of \(\mu \)

Next, we investigate numerically the behaviour of the standing wave when the intensity of the magnetic field, H, is varied. It turns out that for each set of parameters that we analyzed, there is a maximum (relatively small) value of H such that our numerical method diverges for larger values of H. This may be related to the observation that the behaviour of u (and \(\rho \)) with respect to u(0) is very sensitive to perturbations: any arbitrarily small perturbation of the u(0) found by the shooting method produces a solution which (numerically at least) quickly blows up exponentially. The desired solution appears to be unstable in this sense, and this effect appears more markedly for larger values of H. Still, in Fig. 4 we show the behaviour of the solution for H between 0 and 2, which lets us nevertheless see the general trend. In particular, it is clear that the director field angle becomes more concentrated at the origin for larger values of H.

Fig. 4

Numerical approximation of the standing wave u(x) and the director field angle \(\rho (x)\), solutions to (1.12), with varying magnetic field intensity H. Parameters are \(\mu =0.2\), \(\lambda =0.1,\) \(b=2\)