Abstract
This paper studies the J-method of [E. Jarlebring, S. Kvaal, W. Michiels. SIAM J. Sci. Comput. 36-4:A1978-A2001, 2014] for nonlinear eigenvector problems in a general Hilbert space framework. This is the basis for variational discretization techniques and a mesh-independent numerical analysis. A simple modification of the method mimics an energy-decreasing discrete gradient flow. In the case of the Gross–Pitaevskii eigenvalue problem, we prove global convergence towards an eigenfunction for a damped version of the J-method. More importantly, when the iterations are sufficiently close to an eigenfunction, the damping can be switched off and we recover a local linear convergence rate previously known from the discrete setting. This quantitative convergence analysis is closely connected to the J-method’s unique feature of sensitivity with respect to spectral shifts. Contrary to classical gradient flows, this allows both the selective approximation of excited states as well as the amplification of convergence beyond linear rates in the spirit of the Rayleigh quotient iteration for linear eigenvalue problems. These advantageous convergence properties are demonstrated in a series of numerical experiments involving exponentially localized states under disorder potentials and vortex lattices in rotating traps.
1 Introduction
This paper studies eigenvalues of nonlinear differential operators \({\mathcal {A}}\) on a real Hilbert space V of the form \({\mathcal {A}}(v) = A(v,v),\)
where \(A:V \times V \rightarrow V^*\) is a continuous, bounded mapping that is invariant under scaling of the first argument and real-linear in the second argument. A well-known example that can be cast in such a format is the Gross–Pitaevskii eigenvalue problem (GPEVP) in a bounded domain \({\mathcal {D}}\subset \mathbb {R}^d\) (with \(d=2,3\)). In the strong form the GPEVP reads
Here, \(W \in L^{\infty }({\mathcal {D}},\mathbb {R})\) denotes a non-negative and space-dependent potential, \(\varOmega \in \mathbb {R}\) is the angular velocity, \({\mathcal {L}}_z = - \mathrm {i}\left( x \partial _y - y \partial _x \right) \) is the z-component of the angular momentum, and \(\kappa \ge 0\) regulates the nonlinearity of the problem. This problem is related, e.g., to the modeling of Bose-Einstein condensates [17, 26, 30, 45]. In this context, a solution \(u^*\) represents a stationary quantum state of the condensate, \(|u^{*}|^2\) is the corresponding density, and \(\lambda ^*\) the so-called chemical potential.
The numerical solution of the GPEVP (1) has been studied extensively in recent years. Popular methods for solving the nonlinear eigenvalue problem are, for example, the Self-Consistent Field Iteration (SCF), cf. [19, 23, 24, 29, 52], which requires the solution of a linearized eigenvalue problem in each iteration. Algorithms that belong to the SCF class are, for instance, the Roothaan algorithm [48] or the optimal damping algorithm proposed in [23]. Another important class of iterative methods is based on gradient flows for the energy functional associated with (1), where we mention the Discrete Normalized Gradient Flow (DNGF), cf. [11, 12, 13, 15], which is based on an implicit Euler discretization of the \(L^2\)-gradient flow. Improvements of this approach using conjugate gradients were proposed in [7]. Here we also mention the Projected Sobolev Gradient Flows (PSGFs), cf. [27, 33, 34, 36, 39, 46, 47, 57], which form a subclass of the gradient flow methods. Sobolev gradients are the Riesz representatives of the Fréchet derivative of the energy functional in a suitable Hilbert space. With this, PSGFs are based on computing such Sobolev gradients and then projecting them onto the tangent space of the normalization constraint. An improvement of PSGFs using Riemannian conjugate gradients was suggested in [28]. Another strategy to solve the GPEVP involves a direct minimization of the energy functional, cf. [14, 18], where (1) is written as a nonlinear saddle point problem that is then solved by a Newton-type method. Global convergence results for the GPEVP are very rare in the literature. To the best of our knowledge, global convergence has so far only been established for a damped PSGF suggested in [36]; the corresponding analytical results can be found in [36, 57].
The SCF and PSGF approaches are typically based on the simple linearization of the partial differential operator by replacing the density \(|u^{*}|^2\) in (1) by the density of some given approximation of \(u^*\). In the finite-dimensional case (after spatial discretization) these approaches can be interpreted as generalizations of the inverse iteration (inverse power method). Linear convergence is observed empirically which is in agreement with the convergence theory for linear matrix eigenvalue problems. However, in the presence of clustered eigenvalues, which are typically connected to interesting physical phenomena, linear convergence can be very slow and shifting (as in the shifted inverse iteration or the Rayleigh quotient iteration) is a well-established technique for the acceleration of matrix eigenvalue solvers. The problem is that the aforementioned schemes seem to be unable to achieve any speed-up by a sophisticated shift of the nonlinear operator. In [38], Jarlebring, Kvaal, and Michiels observed that this insensitivity towards spectral shifts is the result of an unsuitable choice of linearization. The authors propose a natural but conceptually different linearization using the Jacobian of the nonlinear operator \({\mathcal {A}}\). The resulting approach does not belong to any of the classes above and we will refer to it as the J-method.
This paper generalizes the J-method and its numerical analysis to an abstract Hilbert space setting. Hence, the resulting iteration scheme is based on a variational formulation which is mesh-independent (cf. [2] for a similar approach in electromagnetism). This then allows the application of any spatial discretization, including finite elements and spectral methods. As discovered in [38], the correct choice of the linearization in the J-method not only leads to a competitive nonlinear eigenvalue solver but also allows major theoretical progress, such as a proof of local linear convergence based on a version of Ostrowski’s theorem (see Sect. 3 for the corresponding result in the Hilbert space setting). The obtained linear convergence rate, which is essentially \(|\lambda ^* + \sigma |/|\mu + \sigma |\), depends on the spectral gap of the \(\sigma \)-shifted Jacobian, where \(\mu + \sigma \) is the second-smallest eigenvalue of the \(\sigma \)-shifted Jacobian around \(u^*\). Our results generalize the findings of [38] from the matrix case to the abstract setting. In the case of the GPEVP, which is discussed in Sect. 4, this marks the first quantified convergence result. Moreover, an adaptive choice of the shifts in the spirit of the Rayleigh quotient iteration amplifies convergence beyond the linear rate in representative numerical experiments; see Sect. 6. At the same time, the sensitivity to spectral shifts facilitates the computation of excited states.
As usual for nonlinear problems, the quantitative convergence results are of local nature in the sense that they require a sufficiently accurate initial approximation. In practical computations this may be a rather challenging task. A simple damping strategy inspired by an energy-dissipative gradient flow [36] resolves this problem. In the case of the GPEVP we prove global convergence of the method towards an eigenfunction with a guaranteed decrease of energy, cf. Sect. 5.
Altogether, the combination of the J-method with damping for globalization and shifting for acceleration provides a powerful methodology for the simulation and analysis of nonlinear PDE eigenvector problems such as the GPEVP.
2 Definition of the J-method in Hilbert spaces
2.1 Nonlinear eigenvector problem
We consider a nonlinear differential operator \({\mathcal {A}}\)
that maps a real Hilbert space V into its dual space \(V^*\). In particular, V is equipped with an \(\mathbb {R}\)-inner product and every \(F\in V^*\) satisfies \(\langle F, v\rangle _{V^*,V} \in \mathbb {R}\) for all \(v\in V\). Note that V is still allowed to contain complex-valued functions. We assume that \({\mathcal {A}}\) has the form \({\mathcal {A}}(v) = A(v,v),\)
where \(A:V \times V \rightarrow V^*\) is continuous, bounded, and (real-)linear in the second argument. Recall that \(A(v,\,\cdot \,)\) being real-linear (cf. [16, Def. 3.50]) means that for all \(v,w_1,w_2\in V\) and \(\alpha \in \mathbb {R}\) we have \(A(v, w_1 + \alpha \, w_2) = A(v, w_1) + \alpha \, A(v, w_2).\)
Furthermore, we assume that \(A(\,\cdot \, , \cdot \,)\) is sufficiently smooth on \(V {\setminus } \{0\} \times V\) (in particular real-Fréchet differentiable in both arguments) and that it is real-scaling invariant in the first argument, i.e., \(A(\alpha \, u, v) = A(u, v)\)
for all \(u,v \in V\) and all \(\alpha \in \mathbb {R}{\setminus } \{ 0\}\).
For the weak formulation of the eigenvalue problem with a nonlinearity in the eigenfunction, we introduce another real Hilbert space H (the pivot space) with \(\mathbb {R}\)-inner product \((\,\cdot \,,\,\cdot \,)_H\)
such that \(V, H, V^*\) forms a Gelfand triple [56, Ch. 23.4]. The corresponding H-norm \(\Vert \cdot \Vert := \Vert \cdot \Vert _H\) will be used for the normalization condition of the eigenfunction. The corresponding embedding \({\mathcal {I}}:V\rightarrow V^*\) is defined via \(\langle {\mathcal {I}}u, v\rangle _{V^*,V} := (u, v)_H, \qquad v \in V,\)
for \(u\in V\). For brevity, we shall write \(\langle \, \cdot \,,\, \cdot \rangle \) for the canonical duality pairing in V.
The goal of this paper is to solve the corresponding PDE eigenvalue problem: find an eigenfunction \(u^*\in V\) with \(\Vert u^* \Vert =1\) and an eigenvalue \(\lambda ^{*}\in \mathbb {R}\) such that \({\mathcal {A}}(u^*) = \lambda ^* {\mathcal {I}}u^*\). This is equivalent to the variational formulation \(\langle {\mathcal {A}}(u^*), v\rangle = \lambda ^*\, (u^*, v)_H \quad \text{for all } v \in V.\)
Throughout the paper, we denote by \({\overline{\alpha }}\) the complex conjugate of a complex number \(\alpha \in \mathbb {C}\), by \(\mathfrak {R}(v)\) the real part of v, and by \(\mathfrak {I}(v)\) the imaginary part.
Example 1
We are particularly interested in the GPEVP (1). Here we consider \(H=L^2({\mathcal {D}}):=L^2({\mathcal {D}},\mathbb {C})\) as a real Hilbert space equipped with the inner product \((u,v)_H := \mathfrak {R}\, (u,v)_{L^2({\mathcal {D}})} = \mathfrak {R} \int _{{\mathcal {D}}} u\, {\bar{v}}\, dx\)
and analogously \(V=H^1_0({\mathcal {D}}):=H^1_0({\mathcal {D}},\mathbb {C})\) as a real Hilbert space equipped with \((u,v)_V := \mathfrak {R}\, (u,v)_{H^1({\mathcal {D}})} = \mathfrak {R} \int _{{\mathcal {D}}} \nabla u\cdot \overline{\nabla v}\, dx.\)
The (real) dual space is denoted by \(V^*= H^{-1}({\mathcal {D}}):=H^{-1}({\mathcal {D}},\mathbb {C})\). The corresponding nonlinear operator reads
Here, we have \(W_{ \text{ R }}(x)= W(x) - \tfrac{1}{4}\, \varOmega ^2\, |x|^2\) and the rotational gradient \(\nabla _{ \text{ R }}\) is given by
for the divergence-free vector field \(R(x,y,z):=(y,-x,0)\) if \(d=3\) and \(R(x,y):=(y,-x)\) if \(d=2\). Note that the nonlinear term is multiplied by \(\Vert u\Vert ^{-2}\) in order to achieve the assumed scaling invariance of A in the first component.
2.2 Linearization and the J-operator
Recall that the real-Fréchet derivative of some \(F:X \rightarrow Y\) at a point \(u \in X\) is a bounded and \(\mathbb {R}\)-linear operator \(F^{\prime }(u):X \rightarrow Y \) with the property \(\lim _{\Vert h\Vert _X \rightarrow 0} \frac{\Vert F(u+h) - F(u) - F^{\prime }(u)\, h \Vert _Y}{\Vert h \Vert _X} = 0, \qquad h \in X {\setminus } \{0\}.\)
We write \(F^{\prime }(u;h):=F^{\prime }(u)h\) for the Fréchet derivative at u in the direction h. In the following, all Fréchet derivatives of operators are understood as real-Fréchet derivatives in this sense. Moreover, we neglect writing the supplement \(h\in X {\setminus } \{0\}\) beneath the limit. For the operator A, we denote the partial Fréchet derivative with respect to the (nonlinear) first component by \(\partial _1 A:V^3 \rightarrow V^*\). Because of the scaling invariance of A, i.e., \(A(u,\cdot \,)=A(\alpha u,\cdot \,)\), we have \(\partial _1 A(u,\cdot \,;u)=0\), which can be seen by \(\partial _1 A(u,\cdot \,;u) = \lim _{t \rightarrow 0} \tfrac{1}{t}\big ( A(u + t\, u,\cdot \,) - A(u,\cdot \,)\big ) = \lim _{t \rightarrow 0} \tfrac{1}{t}\big ( A(u,\cdot \,) - A(u,\cdot \,)\big ) = 0.\)
The Fréchet derivative of \({\mathcal {A}}\) at \(u\in V\) in the direction \(w \in V\) can be expressed as \({\mathcal {A}}^{\prime }(u; w) = \partial _1 A(u,u;w) + A(u,w).\)
Due to \(\partial _1 A(u,\cdot \,;u) = 0\), we conclude \({\mathcal {A}}^{\prime }(u; u) = A(u,u) = {\mathcal {A}}(u).\)
For an element \(u\in V\) we now define the operator \(J(u):V \rightarrow V^*\) by \(J(u)\, w := {\mathcal {A}}^{\prime }(u; w) = A(u,w) + \partial _1 A(u,u;w).\)
Hence, using \(\langle J(u^{*} ) u^{*} , v \rangle = \langle {\mathcal {A}}^{\prime }(u^{*},u^{*}) , v \rangle = \langle {\mathcal {A}}(u^{*}), v \rangle \), we observe that the eigenvalue problem (2) can be rewritten as: find \(u^{*} \in V\) with \(\Vert u^{*} \Vert =1\) and \(\lambda ^{*}\in \mathbb {R}\) such that \(\langle J(u^{*})\, u^{*}, v \rangle = \lambda ^{*}\, (u^{*}, v)_H \quad \text{for all } v \in V.\)
2.3 The shifted J-method
The shifted J-method is defined by applying the inverse power iteration to the reformulated eigenvalue problem (5) based on the linearization J and some spectral shift \(\sigma \). For that, we consider a function \(v\in V\) and \(\sigma \in \mathbb {R}\) such that \(J(v) +\sigma {\mathcal {I}}:V\rightarrow V^*\) is invertible. In practice, one may either choose \(\sigma \) large such that \(J(v) +\sigma {\mathcal {I}}\) is coercive (see Sect. 5.1 for the example of the GPEVP) or close to the negative of the target eigenvalue (as it is done in the numerical experiments of Sect. 6).
For \(F\in V^*\) we define \(u = (J(v) +\sigma {\mathcal {I}})^{-1}F \in V\) as the unique solution of the variational problem \(\langle (J(v) +\sigma {\mathcal {I}})\, u, w \rangle = \langle F, w\rangle \)
for all \(w \in V\). Considering a fixed shift \(\sigma \) such that \(-\sigma \) is not an eigenvalue of J(v), we define \(\psi :V\rightarrow V\) by \(\psi (v) := (J(v) +\sigma {\mathcal {I}})^{-1}\, {\mathcal {I}}v.\)
Including an additional normalization step finally leads to the operator \(\phi :V\rightarrow V\), \(\phi (v) := \psi (v) / \Vert \psi (v) \Vert .\)
Now consider a normalized eigenpair \((u^*, \lambda ^*)\) of the eigenvalue problem (2). Then, due to \(J(u^*)u^* = {\mathcal {A}}(u^*) = \lambda ^* {\mathcal {I}}u^*\), the function \(\psi (u^*)\) satisfies \(\psi (u^*) = (J(u^*) +\sigma {\mathcal {I}})^{-1}\, {\mathcal {I}}u^* = (\lambda ^* + \sigma )^{-1}\, u^*.\)
As a consequence, we observe that \(u^{*}\) is a fixed point of \(\phi \), i.e., \(u^{*} = \phi (u^{*})\). This motivates the following iteration scheme: given \(u^0\in V\) with \(\Vert u^0\Vert =1\) and a shift \(\sigma \) such that \(J(u^0) +\sigma {\mathcal {I}}\) is invertible, compute iterates \(u^{n+1}\) for \(n\ge 0\) by \(u^{n+1} = \alpha _n\, (J(u^n) +\sigma {\mathcal {I}})^{-1} {\mathcal {I}}u^n\)
with the normalization factor \(\alpha _n = 1 / \Vert (J(u^n) +\sigma {\mathcal {I}})^{-1} {\mathcal {I}}u^n\Vert \). For the matrix case, the convergence of this scheme was analyzed in [38]. We emphasize that this scheme is well-defined if the shift \(\sigma \) guarantees the invertibility of \(J(u^n) +\sigma {\mathcal {I}}\). This property has to be checked within the specific application and will be proved for the GPEVP in Sect. 5.1.
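For readers who prefer to see the iteration spelled out, the following Python sketch implements one possible matrix realization of (6). It assumes that the discretized Jacobian is available through a callable Jfun(u) and that the embedding \({\mathcal {I}}\) is represented by a mass matrix M; these names, the demo matrix, and the shift value are illustrative and not taken from the paper. In the demo, Jfun is independent of u, in which case the scheme reduces to the classical shifted inverse power iteration.

```python
import numpy as np

def shifted_j_method(Jfun, M, u0, sigma, tol=1e-10, maxit=100):
    """Iteration (6): u^{n+1} = alpha_n (J(u^n) + sigma*M)^{-1} M u^n, normalized in the M-norm."""
    u = u0 / np.sqrt(u0 @ M @ u0)
    for _ in range(maxit):
        v = np.linalg.solve(Jfun(u) + sigma * M, M @ u)
        u_new = v / np.sqrt(v @ M @ v)                   # normalization factor alpha_n
        if np.linalg.norm(u_new - np.sign(u_new @ M @ u) * u) < tol:
            return u_new
        u = u_new
    return u

# demo: Jfun independent of u (linear problem) -> shifted inverse power iteration;
# it converges to the eigenvector whose eigenvalue is closest to -sigma (here 3.0)
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
J = Q @ np.diag(np.arange(1.0, 7.0)) @ Q.T
u = shifted_j_method(lambda w: J, np.eye(6), rng.standard_normal(6), sigma=-3.2)
print(u @ J @ u)                                         # approx. 3.0
```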
2.4 The damped J-method
In many applications, eigenvalue problems (with a nonlinearity in the eigenfunction) can be equivalently formulated as a constrained energy minimization problem. In this case, the radius of convergence can be considerably enhanced by introducing a variable damping parameter \(\tau _n\). This damping parameter (or step size) is adaptively selected such that the energy is optimally minimized in each iteration. In [36], a geometric justification of this approach was given by interpreting it as the discretization of a certain projected Sobolev gradient flow, where the inner product (with respect to which the Sobolev gradient is computed) is based on a repeated linearization of the differential operator. Even though \(\langle (J(v) +\sigma {\mathcal {I}})\, \cdot \, , \cdot \, \rangle \) typically does not define an inner product (and hence does not allow a geometric interpretation), it is still possible to generalize the J-method defined in (6) in the spirit of the results in [36]. In Sect. 5 below, we shall give a rigorous justification of this approach by proving global convergence of the damped J-method in the context of the GPEVP.
The damping strategy aims to find an (optimized) linear combination of \(u^n\) and \((J(u^n) +\sigma {\mathcal {I}})^{-1}{\mathcal {I}}u^n\). For brevity we introduce \(J_\sigma (u) := J(u)+\sigma {\mathcal {I}}\) and assume for a moment that the shift \(\sigma \) is chosen such that \(J_\sigma (u^n)\) remains coercive.
One step of the damped J-method then reads as follows: given \(u^n\in V\) with \(\Vert u^n\Vert =1\) and a step size \(\tau _n>0\) compute \(\tilde{u}^{n+1} = (1-\tau _n)\, u^n + \tau _n\, \gamma _n\, J_\sigma (u^n)^{-1} {\mathcal {I}}u^n, \qquad u^{n+1} = \tilde{u}^{n+1} / \Vert \tilde{u}^{n+1}\Vert ,\)
with \(\gamma _n^{-1} := (J_\sigma (u^n)^{-1} {\mathcal {I}}u^n , u^n)_H\). This choice provides the important H-orthogonality \((\tilde{u}^{n+1} - u^n, u^n)_H = 0.\)
Due to the normalization in (7), we recover the original J-method (6) for \(\tau _n=1\). Further, the H-orthogonality (8) implies that the mass (i.e., the H-norm) of \({\tilde{u}}^{n+1}\) is greater than or equal to 1 (even strictly larger unless \({u}^{n+1}=u^{n}\)). This can be seen from \(\Vert {\tilde{u}}^{n+1}\Vert ^2 = \Vert u^n\Vert ^2 + \Vert {\tilde{u}}^{n+1} - u^n\Vert ^2 = 1 + \Vert {\tilde{u}}^{n+1} - u^n\Vert ^2 \ge 1.\)
For the example of the GPEVP we will show in Sect. 5 that the iteration (7) is well-defined and that it converges to an eigenstate if the step size \(\tau _n\) is sufficiently small.
To summarize, we have introduced a damped version of the J-method (7) (with typical choice \(\sigma =0\) or \(\sigma \) large) which intends to ensure global convergence and a shifted version (6) to accelerate the convergence (with typical choice \(\sigma \) close to \(-\lambda ^{*}\)). Note, however, that we do not intend to use damping and shifting simultaneously as this seems hard to control numerically. Instead, we propose to apply damping for globalization of convergence and then switch to shifting if a certain accuracy is obtained. This practice then provides a powerful methodology as illustrated in Sect. 6.
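As an illustration of the damped iteration (7), the following Python sketch applies it to a one-dimensional, real-valued Gross–Pitaevskii toy problem without rotation, discretized by central finite differences. The grid, the potential, the fixed damping parameter, the energy convention, and the per-step choice of the shift (taken large enough in the spirit of the coercivity bound of Lemma 3 below) are all illustrative assumptions and not prescribed by the paper.

```python
import numpy as np

# 1D toy discretization on (-L, L) with homogeneous Dirichlet boundary conditions
L, N = 8.0, 256
h = 2 * L / (N + 1)
x = np.linspace(-L + h, L - h, N)
W = 0.5 * x**2                        # harmonic trap (illustrative choice)
kappa, tau = 10.0, 0.5                # repulsion parameter and fixed damping step

Lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / h**2
l2 = lambda u, v: h * np.dot(u, v)    # discrete L^2(D) inner product

def A_op(u):                          # strong form of A(u): -1/2 u'' + W u + kappa u^3 / ||u||^2
    return -0.5 * (Lap @ u) + W * u + kappa * u**3 / l2(u, u)

def J_mat(u):                         # Frechet derivative of the toy operator A_op at u
    m = l2(u, u)
    S = -0.5 * Lap + np.diag(W + 3.0 * kappa * u**2 / m)
    G = -2.0 * kappa * np.outer(u**3, h * u) / m**2      # rank-1 part from the ||u||^{-2} scaling
    return S + G

def energy(u):                        # energy with the (illustrative) convention E'(u) = 2 A(u)
    return 0.5 * l2(-Lap @ u, u) + l2(W * u, u) + 0.5 * kappa * l2(u**2, u**2)

u = np.exp(-x**2 / 4.0); u /= np.sqrt(l2(u, u))          # normalized starting value
for n in range(120):
    sigma = kappa / 3.0 * l2(u**2, u**2) + 1.0           # sufficient shift in the spirit of Lemma 3 (||u|| = 1)
    v = np.linalg.solve(J_mat(u) + sigma * np.eye(N), u)
    gamma = 1.0 / l2(v, u)
    u_tilde = (1.0 - tau) * u + tau * gamma * v          # damped step (7)
    u = u_tilde / np.sqrt(l2(u_tilde, u_tilde))          # normalization
    lam = l2(A_op(u), u)
    res = A_op(u) - lam * u
    if n % 20 == 0:
        print(n, energy(u), np.sqrt(l2(res, res)))
```

In this toy setting one typically observes monotonically decreasing energies and a vanishing residual, in line with the guarantees of Theorems 4 and 6 below.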
3 Abstract local convergence of the shifted J-method
The sensitivity of the J-method to spectral shifts facilitates the numerical approximation of excited states. To see this, we transfer the local convergence result presented in [38] to the Hilbert space setting. This shows that the shifted J-method converges to an eigenfunction \(u^{*}\) of \({\mathcal {A}}\) if the starting function and the shift are sufficiently close to the eigenpair \((u^{*},\lambda ^{*})\) of interest. In this section, we consider the undamped version of the J-method, i.e., we consider the iteration \(u^{n+1} = \phi (u^n)\), including an arbitrary shift \(\sigma \in \mathbb {R}\) and assuming that \(J_\sigma (u^n)\) is invertible.
In order to prove local convergence with a linear rate that depends on the shift, we make use of the following well-known proposition that is a version of the Ostrowski theorem [44] in Banach spaces.
Proposition 1
Let \(\phi :V \rightarrow V\) be a mapping on the Banach space V that is Fréchet-differentiable at a point \(u^{*} \in V\) such that the Fréchet-derivative \(\phi ^{\prime }(u^{*}):V \rightarrow V\) is a bounded linear operator with spectral radius \(\rho ^{*} := \rho \big (\phi ^{\prime }(u^{*})\big ) < 1.\)
Then there is an open neighborhood S of \(u^{*}\), such that for all starting values \(u^{0} \in S\) we have that the fixed point iterations \(u^{n+1} = \phi (u^{n})\), \(n \ge 0\),
converge strongly in V to \(u^{*}\), i.e., \(\Vert u^n - u^{*} \Vert _V \rightarrow 0\) for \(n\rightarrow \infty \). Furthermore, for every \(\varepsilon >0\) there exist a neighborhood \(S_{\varepsilon }\) of \(u^{*}\) and a (finite) constant \(C_\varepsilon >0\) such that \(\Vert u^n - u^{*} \Vert _V \le C_\varepsilon \, (\rho ^{*} + \varepsilon )^n\, \Vert u^0 - u^{*} \Vert _V\) for all \(u^0 \in S_{\varepsilon }\) and \(n \ge 1\).
Hence, \(\rho ^{*}\) defines the asymptotic linear convergence rate of the fixed point iteration.
Proof
The proof is based on the observation that under the assumptions of the lemma and for each \(\varepsilon >0\), there exists an induced operator norm \(\Vert \cdot \Vert _{\varepsilon }\) such that \(\Vert \phi ^{\prime }(u^{*}) \Vert _{\varepsilon } \le \rho ^{*} + \varepsilon \), cf. [1, Lem. 4.3.7]. With this result at hand, we can exploit the norm equivalence together with the definition of Fréchet derivatives to conclude the desired local convergence rate. This simple argument is elaborated e.g. in [32, 50] and generalizes the previous findings obtained in [40].\(\square \)
In the finite dimensional case and under some additional assumptions on the structure of \(\phi ^{\prime }(u^{*})\) it can be shown that \(\Vert u^n - u^{*} \Vert _{V} \le C |\rho ^{*}|^n \Vert u^0 - u^{*} \Vert _{V}\), cf. [42, Th. 10.1.3 and 10.1.4; NR 10.1-5].
In the spirit of Proposition 1 we need to study the spectrum of \(\phi ^{\prime }(u^{*})\) to conclude local convergence.
3.1 Derivative of \(\phi \)
Since we interpret \(\psi \) and \(\phi \) as operators from V to V, we have likewise for the first Fréchet derivative in \(v\in V {\setminus } \{0\}\), \( \psi '(v\,;\, \cdot \,),\ \phi ^{\prime }(v\,;\, \cdot \,) :V \rightarrow V. \) In order to apply Proposition 1, we need to compute the Fréchet derivative of \(\phi \) in \(u^{*}\). As a first step, we compute the derivative of the mapping \(v \mapsto \Vert \psi (v) \Vert \), leading to
or, more compactly, \(\partial _v \Vert \psi (v) \Vert = (\psi ^{\prime }(v), \phi (v))_H\). In order to compute \(\phi ^{\prime }(v)\), we write \(\psi ^{\prime }(v)\) in the form
and conclude that
For \(v=u^{*}\) we can exploit \(\phi (u^{*}) = u^{*}\), \(\Vert u^{*} \Vert =1\), and \(u^* = (\lambda ^*+\sigma )\, \psi (u^{*})\). This implies \(\Vert \psi (u^{*}) \Vert = |\lambda ^{*} + \sigma |^{-1}\) and thus,
To compute \(\psi ^{\prime }(u^{*})\), in turn, we use the identity
Note that \(J^{\prime }(v):V\times V \rightarrow V^*\) denotes here the Fréchet derivative of J in a fixed element \(v\in V {\setminus } \{ 0 \}\). We conclude
Next, we want to verify that \(J^{\prime }(u^{*} ;\, \cdot \,) \psi (u^{*})=0\). For that, we consider the definition of \(J^{\prime }\) which yields
Here we also used that in a neighborhood of \(u^*\) (which excludes the zero element due to \(\Vert u^{*} \Vert =1\)) the operator J is continuous. In summary, we proved \(J^{\prime }(u^{*};w)\, u^{*} = 0\) for all \(w\in V\). Since \(u^*\) equals \(\psi (u^{*})\) up to a multiplicative constant, we conclude that \(J^{\prime }(u^{*}; \cdot )\, \psi (u^{*})=0\) and hence, \(\psi ^{\prime }(u^{*}) = J_\sigma (u^{*})^{-1} {\mathcal {I}}\). Plugging this into previous estimates, we directly obtain
3.2 Local convergence results
Since the J-method is of the form \(u^{n+1} = \phi (u^n)\), we recall from Proposition 1 that the local convergence rate is strongly connected to the spectrum of \(\phi ^{\prime }(u^{*})\). Besides the modified expression in (10), the analysis of this spectrum also requires the following orthogonality result.
Lemma 1
Let \((u,\lambda ) \in V\times \mathbb {R}\) be an eigenpair of \(J(u^*)\). Further, let \((w,\mu ) \in V\times \mathbb {R}\) be an adjoint-eigenpair of \(J(u^*)\) with \(\mu \ne \lambda \), i.e., \(\langle J(u^*)\, v, w\rangle = \mu \, (v, w)_H\)
for all \(v\in V\). Then, it holds that \((u,w)_H=0\).
Proof
By definition we know that \(\langle J(u^*)\, u, v\rangle = \lambda \, (u,v)_H\) for all \(v\in V\). Thus, with the test function \(v=w\) we have \(\lambda \, (u, w)_H = \langle J(u^*)\, u, w\rangle = \mu \, (u, w)_H.\)
The assumption \({\mu }\ne {\lambda }\) then directly implies the H-orthogonality of primal and adjoint eigenfunctions.\(\square \)
We are prepared to study the spectrum of the linear operator \(\phi ^{\prime }(u^*):V\rightarrow V\) to make conclusions on the convergence rate using Proposition 1. The corresponding eigenvalue problem reads: find \(z \in V {\setminus } \{0 \}\) and \(\mu \in \mathbb {R}\) so that
for all \(v \in V\). The subsequent lemma relates the spectra of \(\phi ^{\prime }(u^*)\) and \(J(u^{*})\). Recall that \(J(u^{*})\) is a linear, bounded operator over \(\mathbb {R}\). Thus, its (real) spectrum coincides with the (real) spectrum of the corresponding adjoint operator [37].
Lemma 2
Assume that the (real) eigenvalues of \(J(u^{*})\) are countable and that there exists a basis of corresponding adjoint-eigenfunctions of V. Let the eigenvalue of interest \(\lambda ^*\) be simple and \(\sigma \in \mathbb {R}\) a shift such that \(J_\sigma (u^{*})\) is invertible (note that in particular we have \(\sigma \ne -\lambda ^*\)). Then, the spectra of \(J(u^{*})\) and \(\phi ^{\prime }(u^{*})\) are connected in the following sense: \(\mu \ne \lambda ^*\) is an eigenvalue of \(J(u^{*})\) if and only if \(|\lambda ^*+\sigma | / (\mu +\sigma )\) is an eigenvalue of \(\phi ^{\prime }(u^{*})\) and the eigenvalue \(\lambda ^*\) of \(J(u^{*})\) corresponds to the eigenvalue 0 of \(\phi ^{\prime }(u^{*})\).
Proof
Due to the assumption on the shift, \(J_\sigma (u^{*})\) is invertible and \(J_\sigma (u^{*})^{-1}{\mathcal {I}}\) has the eigenvalues \((\lambda _1+\sigma )^{-1}, (\lambda _2+\sigma )^{-1}, \dots \), including \((\lambda ^*+\sigma )^{-1}\) with eigenfunction \(u^*\). Note that the eigenvalues are shifted but the primal- and adjoint-eigenfunctions are the same as for \(J(u^{*})\). We first consider the eigenvalue \((\lambda ^*+\sigma )^{-1}\) of \(J_\sigma (u^{*})^{-1}{\mathcal {I}}\) with eigenfunction \(u^*\). Here we note that
i.e., 0 is the corresponding eigenvalue of \(\phi ^{\prime }(u^*)\). Now let \((\mu +\sigma )^{-1}\) be an eigenvalue of \(J_\sigma (u^{*})^{-1}{\mathcal {I}}\) with \(\mu \ne \lambda ^*\). Since \(J_\sigma (u^{*})^{-1}{\mathcal {I}}\) is a linear, bounded operator over \(\mathbb {R}\), its (real) spectrum coincides with the (real) spectrum of the corresponding adjoint operator (cf. [37]) such that there exists a corresponding adjoint-eigenfunction w. By definition this means that
for all \(v\in V\). Using that w is also an adjoint-eigenfunction of \(J(u^{*})\) and the orthogonality of Lemma 1, we find that
Thus, w is an adjoint-eigenfunction of \(\phi ^{\prime }(u^{*})\) to the eigenvalue \(|\lambda ^*+\sigma | / (\mu + \sigma )\).
For the reverse direction let \((z,\mu )\) be an eigenpair of \(\phi ^{\prime }(u^*)\), i.e., for all \(v\in V\) we have
For \(\mu =0\) this directly leads to \(z=\pm u^*\), since \(\mu =0\) yields the relation \(J_\sigma (u^*)^{-1} {\mathcal {I}}z = (J_\sigma (u^*)^{-1} {\mathcal {I}}z, u^*)_H\, u^*\). An application of \(J_\sigma (u^*)\) then implies that \(u^*\) and z coincide up to a multiplicative constant. Since both functions are normalized, we get \(z=u^*\) or \(z=-u^*\). In the case \(\mu \ne 0\) we employ the adjoint-eigenfunctions \(v_1, v_2, \dots \) of \(J(u^{*})\) as test functions. This yields
Note that the second term on the right-hand side vanishes due to Lemma 1. The definition of being an adjoint-eigenfunction then leads to
for \(j=1,2,\dots \) . Obviously, \((z, v_j)_H\) cannot vanish for all test functions. This implies that \(\mu = |\lambda ^* + \sigma |/(\lambda _j + \sigma )\) for a certain index j. In other words, \(\mu \) equals an eigenvalue of \(J_\sigma (u^*)^{-1} {\mathcal {I}}\) up to the factor \(|\lambda ^* + \sigma |\).\(\square \)
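The spectral correspondence of Lemma 2 is easy to check numerically in the matrix specialization where J does not depend on u and H is the Euclidean space: the following Python sketch builds \(\phi \) explicitly, approximates \(\phi ^{\prime }(u^{*})\) by central finite differences, and compares its eigenvalues with \(|\lambda ^*+\sigma |/(\lambda _j+\sigma )\). The matrix, the shift, and the chosen eigenpair are of course only illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma, k = 6, 0.3, 2                                   # problem size, shift, index of lambda*
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
J = Q @ np.diag(np.arange(1.0, N + 1.0)) @ Q.T            # symmetric stand-in for J(u*)
lam, V = np.linalg.eigh(J)
lam_star, u_star = lam[k], V[:, k]

def phi(v):                                               # v -> normalized (J + sigma I)^{-1} v
    w = np.linalg.solve(J + sigma * np.eye(N), v)
    return w / np.linalg.norm(w)

eps = 1e-6                                                # central finite-difference Jacobian of phi at u*
Dphi = np.column_stack([(phi(u_star + eps * e) - phi(u_star - eps * e)) / (2 * eps)
                        for e in np.eye(N)])

predicted = np.sort(np.r_[0.0, abs(lam_star + sigma) / (np.delete(lam, k) + sigma)])
computed = np.sort(np.abs(np.linalg.eigvals(Dphi)))
print(np.allclose(predicted, computed, atol=1e-4))        # expect True
```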
With the obtained knowledge about the spectrum of \(\phi ^{\prime }(u^{*})\) we can directly apply Proposition 1 to deduce an abstract local convergence result with a rate depending on the spectrum of \(J(u^*)\) relative to the implemented shift as in the matrix case.
Theorem 1
(abstract local convergence) Consider the assumptions of Lemma 2 and let \(\sigma \in \mathbb {R}\) be a shift sufficiently close to \(-\lambda ^*\). By \(\mu \) we denote the (real) eigenvalue of \(J(u^*)\) which is closest to \(-\sigma \) but different from \(\lambda ^*\). If the shift is selected such that \(\rho ^{*} := \frac{|\lambda ^{*} + \sigma |}{|\mu + \sigma |} < 1,\)
then the iterations of the J-method (6) are locally convergent to \(u^{*}\) in the V-norm. Furthermore, for every \(\varepsilon >0\) there exists a neighborhood \(S_{\varepsilon }\) of \(u^{*}\) and a constant \(C_\varepsilon >0\) such that \(\Vert u^n - u^{*} \Vert _V \le C_\varepsilon \, (\rho ^{*} + \varepsilon )^n\, \Vert u^0 - u^{*} \Vert _V\) for all \(u^0 \in S_{\varepsilon }\)
and \(n\ge 1\).
Below, we apply this abstract result to the GPEVP of Example 1.
4 Quantified local convergence of the shifted J-method for the GPEVP
As already mentioned in the introduction, we want to apply the damped J-method to the GPEVP (1), cf. Example 1. Thus, the task is to find an eigenfunction \(u^* \in H^1_0({\mathcal {D}})\) with \(\Vert u^*\Vert ^2_{L^2({\mathcal {D}})} = 1\) and corresponding eigenvalue \(\lambda ^{*} \in \mathbb {R}\) such that
Recall that the potential satisfies \(W \in L^{\infty }({\mathcal {D}},\mathbb {R})\), whereas \(\kappa \ge 0\) regulates the nonlinearity. Furthermore, \({\mathcal {L}}_z\) is the angular momentum operator with angular velocity \(\varOmega \in \mathbb {R}\). With these properties it is easily seen that the GPEVP can only have real eigenvalues. In the following we will also assume that
This condition can be interpreted as the trapping frequencies being larger than the angular velocity. Physically speaking, this ensures that centrifugal forces do not become too strong compared to the strength of the trapping potential W.
4.1 The Gross–Pitaevskii energy
We consider homogeneous Dirichlet boundary conditions and the short-hand notation \(L^2({\mathcal {D}}):=L^2({\mathcal {D}},\mathbb {C})\), \(H^1_0({\mathcal {D}}):=H^1_0({\mathcal {D}},\mathbb {C})\), and \(H^{-1}({\mathcal {D}}):= H^{-1}({\mathcal {D}},\mathbb {C})\) for the function spaces over \(\mathbb {R}\), cf. the details in Example 1. We set
This means that \(\Vert \cdot \Vert \) equals the \(L^2({\mathcal {D}})\)-norm and \((\,\cdot \,,\,\cdot \,)_H := \mathfrak {R}(\,\cdot \,,\,\cdot \,)_{L^2({\mathcal {D}})}\), where \((u,v)_{L^2({\mathcal {D}})}:=\int _{{\mathcal {D}}} u \,{\bar{v}} dx\). Similarly, we equip V with the inner product \((\,\cdot \,,\,\cdot \,)_V := \mathfrak {R}(\,\cdot \,,\,\cdot \,)_{H^1({\mathcal {D}})}\), where \((u,v)_{H^1({\mathcal {D}})}:=\int _{{\mathcal {D}}} \nabla u \cdot \overline{\nabla v} dx\). As before, the weak formulation is given by \({\mathcal {A}}(u^*) = A(u^*,u^*) = \lambda ^*{\mathcal {I}}u^*\), where the nonlinear operator A is defined in (3). The eigenvalue problem is equivalent to finding the critical points (on the manifold associated with the \(L^2\)-normalization constraint) of the energy functional \(E:V \rightarrow \mathbb {R}\) given by
cf. [27]. Recalling that \(W_{ \text{ R }}(x)=W(x) - \tfrac{1}{4}\, \varOmega ^2\, |x|^2\) and considering assumption (11), we see that the energy functional is bounded from below by a positive constant \(c=c(\varOmega ,{\mathcal {D}})\), i.e.,
Hence, E is weakly lower semi-continuous and bounded from below, which yields the existence of a minimizer that is typically called a ground state.
If \(\varOmega =0\), i.e., in the absence of a rotating potential, then the GPEVP has infinitely many eigenvalues \(0<\lambda _1^{*}<\lambda _2^{*}\le \lambda _3^{*} \le \cdots < \infty \), where the ground state eigenvalue \(\lambda _1^{*}\) is simple, cf. [21, 36]. If \(\varOmega \not =0\), the smallest eigenvalue is typically no longer simple [15]. This case refers to the physical phenomenon of a broken symmetry, where the ground state can have different shapes which differ in their number and location of vortices.
4.2 The J-operator for the GPEVP
In order to formulate the J-method for the GPEVP, we need to compute the linearization J(u) according to (4). Hence, by calculating the Fréchet derivative of \({\mathcal {A}}(u)=A(u,u)\) using (3), we obtain
where
Note that the operator J(u) induces an \(\mathbb {R}\)-bilinear form \( \langle J(u)\,\cdot ,\, \cdot \,\rangle \), i.e., we still consider V as an \(\mathbb {R}\)-vector space and have bilinearity only for multiplicative constants in \(\mathbb {R}\). Recall that we consider the real-Fréchet derivative, which also fits in the standard framework used in quantum mechanics. Moreover, the complex-Fréchet derivative does not exist for the present example.
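Since \({\mathcal {J}}(u)\) involves the complex conjugate of its argument, it is ℝ-linear but not ℂ-linear, and a discrete realization typically acts on the real and imaginary parts separately. The scalar Python sketch below only illustrates this bookkeeping for a map \(w \mapsto a\,w + b\,\bar{w}\); in an actual discretization, a and b become (sparse) matrices, cf. “Appendix B”. The concrete numbers are random and purely illustrative.

```python
import numpy as np

def real_block(a, b):
    """Real 2x2 block of the R-linear map w -> a*w + b*conj(w), acting on (Re w, Im w)."""
    return np.array([[a.real + b.real, b.imag - a.imag],
                     [a.imag + b.imag, a.real - b.real]])

rng = np.random.default_rng(5)
a, b, w = (complex(*rng.standard_normal(2)) for _ in range(3))
pq = real_block(a, b) @ np.array([w.real, w.imag])
f = a * w + b * np.conjugate(w)
print(np.allclose(pq, [f.real, f.imag]))        # True
```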
We will show coercivity of J(u) up to a shift in Lemma 3 below. Before that, it is worth mentioning that the eigenvalue problem can be equivalently expressed in terms of \( {\mathcal {J}}(u)\), which also yields the typical structure with standard inner products. We have the following result.
Proposition 2
Consider the GPEVP with full operator \({\mathcal {J}}\) given by (14) and its real part J given by (13). Then \(\lambda ^{*} \in \mathbb {R}\) is an eigenvalue with \(L^2\)-normalized eigenfunction \(u^{*}\in V\), i.e.,
if and only if
Proof
If \((\lambda ^{*},u^{*}) \in \mathbb {R}\times V\) solves \(\langle {\mathcal {J}}(u^{*}) , v \rangle = \lambda ^{*} ( u^{*} , v)_{L^2({\mathcal {D}})}\), then we can take the real part on both sides and obtain \(\langle J(u^{*}) , v \rangle = \lambda ^{*} \mathfrak {R}( u^{*} , v )_{L^2({\mathcal {D}})} \). Vice versa, if \((\lambda ^{*},u^{*}) \in \mathbb {R}\times V\) solves \(\mathfrak {R}\langle {\mathcal {J}}(u^{*}) , v \rangle = \lambda ^{*}\, \mathfrak {R}(u^{*} , v)_{L^2({\mathcal {D}})}\), then we can use test functions of the form \(\mathrm {i}v \in V\) to obtain \(\mathfrak {I}\langle {\mathcal {J}}(u^{*}) , v \rangle = \lambda ^{*}\, \mathfrak {I}(u^{*} , v)_{L^2({\mathcal {D}})}\). Multiplying the second equation with the complex number \(\mathrm {i}\) and adding it to the first equation readily yields \(\langle {\mathcal {J}}(u^{*}) , v \rangle = \lambda ^{*} (u^{*}, v)_{L^2({\mathcal {D}})}\).\(\square \)
The proposition shows that we can either interpret the eigenvalue problem with real parts only (i.e., with J) or with the full operator \({\mathcal {J}}\). With analogous arguments, we can also prove that \(J(u^{*})^{-1} {\mathcal {I}}= {\mathcal {J}}(u^{*})^{-1} {\mathcal {I}}^\mathbb {C}\), where \(\langle {\mathcal {I}}^\mathbb {C} u, v \rangle := (u, v)_{L^2({\mathcal {D}})}\) for all \(v \in V\).
This justifies that we can interpret the iterations of the damped J-method (7) equivalently with J or \({\mathcal {J}}\). In particular, we have the following conclusion.
Conclusion 2
Consider the GPEVP with full operator \({\mathcal {J}}\) given by (14) and its real part J defined in (13). Given \(u^n\in V\) with \(\Vert u^n\Vert =1\) and a step size \(\tau _n>0\), we assume that \(J_\sigma (u^n)\) is invertible. Then the iterations of the J-method (7) can be equivalently characterized by the \({\mathcal {J}}\)-iteration \(\tilde{u}^{n+1} = (1-\tau _n)\, u^n + \tau _n\, \gamma _n^\mathbb {C}\, {\mathcal {J}}_\sigma (u^n)^{-1} {\mathcal {I}}^\mathbb {C} u^n, \qquad u^{n+1} = \tilde{u}^{n+1} / \Vert \tilde{u}^{n+1}\Vert ,\)
with \({\mathcal {J}}_\sigma (u) := {\mathcal {J}}(u)+\sigma {\mathcal {I}}^\mathbb {C}\) and \((\gamma _n^\mathbb {C})^{-1} := ({\mathcal {J}}_\sigma (u^n)^{-1} {\mathcal {I}}^\mathbb {C} u^n , u^n)_{L^2({\mathcal {D}})}\). The assumed existence of \(J_\sigma (u^n)^{-1} {\mathcal {I}}\) implies the existence (and uniqueness) of \({\mathcal {J}}_\sigma (u^n)^{-1} {\mathcal {I}}^\mathbb {C}\).
4.3 Local convergence
Finally, we apply Theorem 1 to the GPEVP.
Theorem 3
(quantified convergence for the GPEVP) Consider the GPEVP as described in Example 1 and let \(u^n\) denote the iterates generated by the shifted J-method (without damping), i.e., \(u^{n+1} = \alpha _n\, J_\sigma (u^n)^{-1} {\mathcal {I}}u^n\) with \(\alpha _n = 1 / \Vert J_\sigma (u^n)^{-1} {\mathcal {I}}u^n\Vert \).
By \(u^{*}\in V=H^1_0({\mathcal {D}})\) we denote an \(L^2\)-normalized eigenfunction to (1) with eigenvalue \(\lambda ^{*}\). Assume that \(\lambda ^{*}\) is a simple eigenvalue of \(J(u^{*})\) and that the shift \(\sigma \not = - \lambda ^{*}\) is such that \(J_{\sigma }(u^{*})\) has a bounded inverse. Further, let the shift \(\sigma \) be sufficiently close to \(-\lambda ^{*}\) and let \(\mu \ne \lambda ^{*}\) be the eigenvalue of \(J(u^{*})\) for which \(|\mu + \sigma |\) is minimal and \(|\lambda ^{*} + \sigma | < |\mu + \sigma |.\)
Then there exists a neighborhood \(S \subset V\) of \(u^{*}\) such that for all starting values \(u^0 \in S\) we have \(\Vert u^n - u^{*} \Vert _{V} \rightarrow 0 \quad \text{for } n \rightarrow \infty .\)
Furthermore, for every \(\varepsilon >0\) there is a neighborhood \(S_{\varepsilon } \subset V\) of \(u^{*}\) and a constant \(C(\varepsilon )>0\) such that for all \(u^0 \in S_{\varepsilon }\) (and \(n\ge 1\)) it holds \(\Vert u^n - u^{*} \Vert _{V} \le C(\varepsilon )\, \Big ( \tfrac{|\lambda ^{*} + \sigma |}{|\mu + \sigma |} + \varepsilon \Big )^{n}\, \Vert u^0 - u^{*} \Vert _{V}.\)
Hence, if the shift \(\sigma \) is close to \(-\lambda ^{*}\) and \(u^n\) close to \(u^*\), then we have locally a linear convergence with rate \(|\lambda ^{*} + \sigma | / |\mu + \sigma |\).
Proof
By definition of the problem (cf. Example 1) all eigenvalues are real. Furthermore, the spectrum is countable and does not have an accumulation point in \(\mathbb {C}\). This is a direct consequence of the observation that the operator \(J_\sigma (u^{*})^{-1} {\mathcal {I}}:L^2({\mathcal {D}}) \rightarrow L^2({\mathcal {D}})\) is compact for all shifts \(-\sigma \) that are not in the spectrum of \(J(u^{*})\) (i.e., we have compact resolvents). Hence, we can apply Theorem 1 which readily proves the result.\(\square \)
In the finite dimensional case based on a finite difference discretization, a corresponding result was presented in [38]. Motivated by the above convergence result, a practical realization of the iterations can be based on the more natural formulation of the J-method given by (16). This is also the version for which we discuss the implementation in “Appendix B”.
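The rate \(|\lambda ^{*}+\sigma |/|\mu +\sigma |\) suggests updating the shift with the current eigenvalue approximation, \(\sigma = -\lambda ^n\), which is exactly the strategy used in the experiments of Sect. 6. In the linear matrix specialization this coincides with the classical Rayleigh quotient iteration, as the following illustrative Python sketch shows; the matrix and the starting vector are made up, and in the nonlinear case the Jacobian would be reassembled in every step.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
J = Q @ np.diag(np.arange(1.0, N + 1.0)) @ Q.T        # symmetric stand-in for J(u*)
u = rng.standard_normal(N); u /= np.linalg.norm(u)
lam = u @ J @ u
for n in range(6):
    sigma = -lam                                      # Rayleigh shift, cf. Sect. 6
    try:
        v = np.linalg.solve(J + sigma * np.eye(N), u)
    except np.linalg.LinAlgError:
        break                                         # sigma hit an eigenvalue exactly
    u = v / np.linalg.norm(v)
    lam_new = u @ J @ u
    print(n, lam_new, abs(lam_new - lam))             # increments between eigenvalue approximations
    lam = lam_new
```

The printed increments shrink far faster than linearly, mirroring the acceleration beyond the linear rate observed for the shifted J-method in Sect. 6.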
5 Global convergence of the damped J-method for the GPEVP
In this section we come back to the question of invertibility of the operator J (which hence also implies invertibility of \({\mathcal {J}}\)). This will then lead to a globally convergent method.
5.1 Coercivity of the shifted J-operator
We first show that the operator J is – up to a shift – coercive.
Lemma 3
Given \(u\in V\) and assumption (11), the operator J(u) corresponding to the Gross–Pitaevskii operator satisfies a Gårding inequality. More precisely, for any \(\sigma \ge \frac{\kappa }{3} \Vert u \Vert _{L^4({\mathcal {D}})}^4 / \Vert u \Vert ^4\) the bilinear form \(\langle J_\sigma (u)\, \cdot \,,\, \cdot \,\rangle = \langle (J(u) + \sigma {\mathcal {I}})\, \cdot \,,\, \cdot \,\rangle \)
is coercive and thus, the operator \(J_\sigma (u) = J(u)+\sigma {\mathcal {I}}:V\rightarrow V^*\) is invertible.
Proof
Consider \(v\in V\). We start by considering the rotational gradient, for which we observe with Young’s inequality that
Hence, with \(W_{ \text{ R }}= W(x) - \tfrac{1}{4}\, \varOmega ^2|x|^2\) and assumption (11) we have
This leads to
To estimate the negative part, we apply once more Young’s inequality with some parameter \(\mu >0\) and the Cauchy–Schwarz inequality to get
Using this estimate in (18) with the particular choice \(\mu = 3\, \Vert u\Vert ^2/ \Vert u\Vert ^4_{L^4({\mathcal {D}})}\), we obtain
Thus, we conclude
\(\square \)
Recall the definition of the energy E in Sect. 4.1. Motivated by the previous lemma, we also define the shifted energy \(E_\sigma (u) := E(u) + \frac{1}{2} \sigma \Vert u\Vert ^2\) such that \(E_0(u) = E(u)\). For \(u\in V\) with normalization constraint \(\Vert u \Vert = 1\), a sufficient shift in the sense of Lemma 3 is thus given by \(\sigma = \tfrac{4}{3}\, E(u)\).
Note that for a normalized function u we can express the Rayleigh quotient in terms of the energy by
In particular, this formula relates the eigenvalues with the energies of the eigenfunctions.
5.2 Feasibility of the J-method
For the feasibility of the damped J-method, we need to guarantee a priori that \(J_\sigma (u^n)\) stays invertible throughout the iteration process. We fix the shift \(\sigma \) at the beginning of the iteration, e.g., by \(\sigma := \frac{4}{3} E(u^0)\). Now, the aim is to show that the energy of the iterates does not increase such that \(\sigma \ge \frac{4}{3} E(u^n)\) for all \(n\ge 0\).
Lemma 4
Recall the shifted energy \(E_\sigma (u) := E(u) + \frac{1}{2} \sigma \Vert u\Vert ^2\) with E(u) being defined in (12). Further recall the definition of the J-method (7) with iteration steps \({\tilde{u}}^{n+1} = (1-\tau )u^n + \tau \, \gamma _n\, J_\sigma (u^n)^{-1} {\mathcal {I}}u^n\). Then, for any step size \(\tau \le \frac{1}{2}\) we have the following guaranteed estimate for the difference of the shifted energies
Proof
We start by establishing a couple of identities for different evaluations of J. Here we exploit the \(L^2\)-orthogonality (8), i.e., \((u^n,{\tilde{u}}^{n+1}-u^n)=0\). Together with \(\Vert u^n \Vert =1\) this implies \((u^n,{\tilde{u}}^{n+1})=1\). Using these facts, we observe that
Next, using again the \(L^2\)-orthogonality together with the definition of \({\tilde{u}}^{n+1}\) in (7), we have
With this equality, we conclude
Here we need to have a closer look at the term \(\langle J_{\sigma }(u^n) ( {\tilde{u}}^{n+1} - u^n) , u^{n} \rangle \). By the definition of \(J(u^n)\) it is easily seen that
Plugging this into (22) yields for the shifted energy
Since \(\mathfrak {R}\, z = \mathfrak {R}\, {\overline{z}}\) for all \(z\in \mathbb {C}\), we conclude that
Using that
and \(\mathfrak {R}(u^n (\overline{{\tilde{u}}^{n+1} - u^n})) = \mathfrak {R}({\tilde{u}}^{n+1}\overline{u^n} ) - |u^n|^2\) we see that
Finally, an application of the triangle inequality and Young’s inequality yields the estimate
which then implies the desired inequality if \(\tau \le \frac{1}{2}\).\(\square \)
With this result we are now in the position to prove uniform boundedness of the energy of the iterates and even a reduction of the energy.
Theorem 4
(energy reduction) Given \(u^0 \in V\) with \(\Vert u^0\Vert =1\), we let \(\sigma \ge \frac{4}{3} E(u^0)\) and \(W(x) \ge \varOmega ^2 |x|^2\). Then, there exists a \(\tau ^{*}>0\) (that only depends on \(u^0\) and its energy) such that for all \(0 < \tau _n \le \tau ^{*}\) the sequence obtained by the damped J-method (7) is well-posed and strictly energy diminishing for all n, i.e., it holds \(E(u^{n+1}) \le E(u^{n}),\)
where \(E(u^{n+1}) < E(u^{n})\) if \(u^{n}\) is not already a critical point of E and thus an eigenstate.
Proof
We proceed inductively. From estimate (17) in the proof of Lemma 3 we know that \(J_{\sigma }(u^0)\) is coercive with constant 1/2, i.e., \(\Vert \nabla v\Vert ^2 = \Vert v \Vert _{V}^2 \le 2\, \langle J_{\sigma }(u^0)v, v \rangle \). Together with (21) and the \(L^2\)-orthogonality (8), this implies
Hence, we get
Assume that \(\tau _0\le \tau ^{*}\le 2\), where \(\tau ^{*}\) is selected sufficiently small compared to \(C_0\). Then we have that \(\Vert \nabla {\tilde{u}}^{1} - \nabla u^0\Vert <1\) and the Sobolev embedding \(H^1_0({\mathcal {D}}) \hookrightarrow L^4({\mathcal {D}})\) with constant \(C_S\) implies that
Consequently, we can use (19) and (17) to observe that the energy difference fulfills
Hence, if
then we have
where we have used that the iterations increase the mass intermediately, i.e., \(\Vert {\tilde{u}}^{1} \Vert \ge 1\) as shown in (9). Note that since \(u^0\) and \(u^1\) are normalized in \(L^2({\mathcal {D}})\), we can drop the shift, leading to \(E(u^{1} ) \le E(u^0)\). Hence, we have \(\sigma \ge \frac{4}{3}E(u^0) \ge \frac{4}{3}E(u^1)\) and Lemma 3 guarantees that \(J_{\sigma }(u^1)\) is still coercive. Inductively, we can repeat the arguments for \(u^n\) with the same generic constant \(C_0\) to show that for \(\tau _n \le \tau ^{*}\) we have
Since the energy is diminished in every iteration, the coercivity of \(\langle J_\sigma (u^n) \cdot , \cdot \rangle \) is maintained and all the iterations are well-defined. Finally, we note that because of (26) we have \(E_{\sigma }(u^n) = E_{\sigma }({\tilde{u}}^{n+1} )\) if and only if \(u^n={\tilde{u}}^{n+1}\). However, this can only happen if
i.e., if \(u^n\) is already an eigenfunction with eigenvalue \(\lambda =\gamma _n\).\(\square \)
It is interesting to note that the \(L^2\)-norm of \({\tilde{u}}^{n}\) cannot diverge. We see this in the following conclusion.
Conclusion 5
In the setting of Theorem 4 it holds that \(\Vert {\tilde{u}}^{n} \Vert \rightarrow 1\) for \(n\rightarrow \infty \).
Proof
In the proof of Theorem 4 we have seen that
Since \(E_{\sigma }(u^n)\) is monotonically decreasing and bounded from below, we have \(E_{\sigma }(u^n) - E_{\sigma }(u^{n+1} ) \rightarrow 0\). This together with the Poincaré–Friedrichs inequality implies that \(\Vert {\tilde{u}}^{n+1} - u^n \Vert \rightarrow 0\) and hence, by (9), \(\Vert {\tilde{u}}^{n+1} \Vert \rightarrow 1\)
for \(n\rightarrow \infty \).\(\square \)
5.3 Convergence and optimal damping
In this subsection we prove the convergence of the J-method for a suitable choice of damping parameters. We can make practical use of this result by selecting \(\tau _n\) in each iteration step as the minimizer of a simple one-dimensional minimization problem.
Theorem 6
(global convergence) Suppose that the assumptions of Theorem 4 are fulfilled. Additionally assume that \(\tau _n\) is selected such that it does not degenerate, i.e., is uniformly bounded away from zero. Then there exists a limit energy \(E^{*}:=\lim _{n\rightarrow \infty } E(u^{n})\) and, up to a subsequence, we have that the iterates \(u^n\) of the damped J-method converge strongly in \(H^1({\mathcal {D}})\) to a limit \(u^{*}\in V\). The limit is an \(L^2\)-normalized eigenfunction with some eigenvalue \(\lambda ^{*}>0\), i.e., \(\langle {\mathcal {A}}(u^{*}), v \rangle = \lambda ^{*}\, (u^{*}, v)_H \quad \text{for all } v \in V,\)
and we have \(E(u^{*})=E^{*}\). If \(u^{*}\) is the only eigenfunction on the energy level \(E^{*}\), then we have convergence of the full sequence \(u^{n}\).
Proof
The proof is similar to the arguments presented in [36, Th. 4.9]. First, Theorem 4 guarantees the existence of the limit \(E^{*}:=\lim _{n\rightarrow \infty } E(u^n)\). Hence, \(u^n\) is uniformly bounded in V and we can extract a subsequence (still denoted by \(u^n\)) that converges weakly in \(H^1({\mathcal {D}})\) and strongly in \(L^p({\mathcal {D}})\) (for \(p<6\)) to a limit \(u^{*} \in V\) with \(\Vert u^{*} \Vert =1\). Since \(J_{\sigma }(u^{*})\) is a real-linear operator that depends continuously on the data and which induces the coercive bilinear form \(\langle (J(u)+\sigma {\mathcal {I}}) \cdot , \cdot \rangle \), we have that
Together with the strong convergence \(u^{n} \rightarrow u^{*}\) in \(L^4({\mathcal {D}})\), we conclude that
This shows that
Furthermore, we have seen in the proof of Theorem 4 (respectively Conclusion 5) that the strong energy reduction implies that for \(n\rightarrow \infty \)
and consequently we have with \({\tilde{u}}^{n+1} = (1-\tau _n)u^n + \tau _n \gamma ^n J_{\sigma }(u^{n})^{-1} {\mathcal {I}}u^n\) and the boundedness of \(\tau _n\) that
strongly in \(H^1({\mathcal {D}})\). Since we already know that \(u^n\) converges weakly in \(H^1({\mathcal {D}})\) to \(u^{*}\), we can now conclude that this is even a strong convergence and we have
This shows that \(u^{*}\) is an eigenfunction with eigenvalue \(\gamma ^{*}\). The strong convergence in \(H^1({\mathcal {D}})\) also implies convergence of the energies, i.e., it holds \(E^{*} =\lim _{n\rightarrow \infty } E(u^n) = E(u^{*})\).\(\square \)
For all sufficiently small \(\tau _n\), Theorem 4 proves the energy reduction and Theorem 6 global convergence. However, since we do not know a priori what a sufficiently small value for \(\tau _n\) is, we can combine the damped J-method with a line search algorithm that optimizes \(\tau _n\) in each iteration step such that the energy reduction is (quasi) optimal. Theorems 4 and 6 show that such an optimal \(\tau _n\) exists and that it does not degenerate to zero. We stress that finding such a \(\tau _n\) does not require any additional inversions, which makes the procedure very cheap, cf. “Appendix A” for details.
Conclusion 7
(J-method with optimal damping) Consider a shift \(\sigma \) such that the assumptions of Theorem 4 are fulfilled. Given \(u^n\in V\) with \(\Vert u^n\Vert =1\) the next iteration is obtained by selecting the optimal damping parameter with
and defining \(u^{n+1}\) as in (7). The approximations are energy diminishing and converge (up to a subsequence) strongly in V to an \(L^2\)-normalized eigenfunction of the GPEVP.
Finally, if there is no rotation, we can even achieve guaranteed global convergence to the ground state provided that the selected initial value is non-negative.
Proposition 3
(global convergence to ground state) Assume the setting of Theorem 6. Furthermore, we assume that there is no rotation, i.e., \(\varOmega =0\), and consider a non-negative and \(L^2\)-normalized starting value \(u^0 \in V\), i.e., \(u^0 \ge 0\). If \(\tau _n \le 1\) and if the shift parameter \(\sigma >0\) is selected sufficiently large, then the iterates \(u^n\) of the damped J-method converge strongly in \(H^1({\mathcal {D}})\) to the unique (positive) ground state \(u^{*} \ge 0\).
Proof
If \(\varOmega =0\), then the problem can be fully formulated over \(\mathbb {R}\) and admits a unique positive ground state \(u^{*} \in V\), cf. [21]. The only other ground state is \(- u^{*}\). Furthermore, all excited states (i.e., all other eigenfunctions) must necessarily change their sign on \({\mathcal {D}}\), cf. [36, Lem. 5.4]. Hence, if we can verify that the iterates of the damped J-method (7) preserve positivity, then Theorem 6 guarantees that the global \(H^1\)-limit must be the desired ground state \(u^{*}\).
Recall the damped J-iteration from (7) together with (14), which (over \(\mathbb {R}\)) reduces to
where \(S_{\sigma }\) is the linear self-adjoint operator given by
and G characterizes the non-symmetric rank-1 remainder, i.e.,
Analogously to the Sherman–Morrison formula for matrices [49], we can see that
Consequently we can write the effect of the inverse as
Since \(\sigma >0\) and \(W\ge 0\), \(S_{\sigma }\) is a self-adjoint elliptic operator and hence preserves positivity, i.e., we have \(S_{\sigma }^{-1} {\mathcal {I}}v \ge 0\) if \(v\ge 0\). This immediately follows by writing the action of \(S_{\sigma }^{-1}\) as an energy minimization problem. Starting (inductively) from a function \(u^n\ge 0\) we conclude that
If we can ensure that \(1 - (u^n , S_{\sigma }^{-1} {\mathcal {I}}f )_{L^2({\mathcal {D}})} >0\), then we have \(J_{\sigma }(u^n)^{-1} {\mathcal {I}}u^n \ge 0\). Let us hence consider \((u^n , S_{\sigma }^{-1} {\mathcal {I}}f )_{L^2({\mathcal {D}})}\), for which we obtain
Since \(S_{\sigma }\) is self-adjoint, we have
where \( \lambda _\text {min}(S_0)>0\) is the minimal eigenvalue of \(S_0\). Consequently, if the shift is such that
then we have positivity of \(J_{\sigma }(u^n)^{-1} {\mathcal {I}}u^n\). Note that by the energy reduction property, we can bound \(\Vert u^n\Vert ^3_{L^6({\mathcal {D}})}\) uniformly for all n by a constant that only depends on the initial energy \(E(u^0)\). Together with the obvious positivity of \((1-\tau _n)u^n\) for \(\tau _n\le 1\), we conclude the existence of a sufficiently large shift \(\sigma \) so that \(u^{n+1}\ge 0\) for all \(n\ge 0\) and hence global convergence to the ground state.\(\square \)
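The rank-1 structure exploited above is also what makes the J-method cheap to realize in practice: analogously to the Sherman–Morrison formula, one application of \(J_{\sigma }(u^n)^{-1}\) costs two solves with the self-adjoint part \(S_{\sigma }\) (cf. “Appendix B”). The following Python sketch only demonstrates the generic identity for a matrix plus a rank-1 update; the concrete factors f and g of the GPEVP, stemming from the \(\Vert u\Vert ^{-2}\) scaling, are not reproduced here.

```python
import numpy as np

def solve_rank1_update(solve_S, f, g, b):
    """Solve (S + f g^T) x = b using only two solves with S (Sherman-Morrison)."""
    Sb = solve_S(b)
    Sf = solve_S(f)
    return Sb - Sf * (g @ Sb) / (1.0 + g @ Sf)

# small self-contained check with dense matrices (all data illustrative)
rng = np.random.default_rng(1)
S = rng.standard_normal((6, 6)); S = S @ S.T + 6.0 * np.eye(6)   # SPD stand-in for S_sigma
f, b = rng.standard_normal((2, 6))
g = 0.1 * rng.standard_normal(6)                                 # scaled so that 1 + g^T S^{-1} f stays away from 0
x = solve_rank1_update(lambda r: np.linalg.solve(S, r), f, g, b)
print(np.allclose((S + np.outer(f, g)) @ x, b))                  # True
```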
6 Numerical experiments
This section concerns the numerical performance of the proposed J-method enhanced by shifting and/or damping as outlined in Sects. 2.3 and 2.4 . As a general model, we seek critical points of the Gross–Pitaevskii energy (12) in a bounded domain \({\mathcal {D}}=(-L,L)^2\) for some parameter \(L>0\). The particular choice of L as well as the other physical parameters \(\varOmega \), W, and \(\kappa \) will be specified separately in the various model problems below.
For the spatial discretization we always use bilinear finite elements on a Cartesian mesh of width \(h = 2^{-8}L\). We will not investigate discretization errors with respect to the underlying PDE. For approximation properties of discrete eigenfunctions we refer to the analytical results presented in [20, 21, 25, 35] and to [22, 34, 54] for a posteriori estimators and adaptivity. Our focus is the performance of the iterative eigenvalue solver promoted in this paper. As a measure of accuracy we will use the \(L^2({\mathcal {D}})\)-norm of the residual \({\mathcal {A}}(u^n)-\lambda ^n {\mathcal {I}}u^n\) given an approximate finite element eigenpair \((\lambda ^n,u^n)\). We will stop the solver whenever the residual falls below the tolerance \( \text{ TOL }=10^{-8}\).
For a better assessment of the performance of the J-method, we compare it with the projected \(a_z\)-Sobolev gradient flow introduced in [36]: given \(u^0 \in V\) with \(\Vert u^0 \Vert =1\), define for \(n = 1, 2, \ldots \),
We will refer to this approach as the A-method (without shift). Note that every step of the A-method can be interpreted as an energy minimization problem such that the iteration is positivity preserving for the case without rotation. This guarantees global convergence to the ground state for every nonnegative starting value. The corresponding shifted version, which we will also consider in the experiments, uses \(A_\sigma (u^n ,\cdot ) := A(u^n,\cdot ) + \sigma {\mathcal {I}}\) in place of \(A(u^n,\cdot )\) in the definition of \({\hat{u}}^{n+1}\) and a corresponding adjustment of the normalization factor \(\gamma _n\). According to the numerical experiments of [36], this method is representative of the larger class of gradient flows in terms of accuracy-cost ratios. The cost per iteration step of the A- and the J-method is of the same order. Tentatively, the A-method is cheaper by a fixed factor, since (due to the rank-1 matrix that appears in the J-version) an additional linear system per step has to be solved when the Sherman–Morrison formula is used, cf. “Appendix B”. To what extent the computational overhead of the J-method can be reduced by suitable preconditioned iterative solvers is beyond the scope of the paper. Notwithstanding the above, the experiments below clearly show that the J-method easily compensates for its possible computational overhead per step with a notably smaller iteration count.
6.1 Ground state in a harmonic potential
In the first model problem, we consider a harmonic trapping potential with trapping frequencies 1/2, i.e.,
The angular velocity \(\varOmega \) is set to zero and the repulsion parameter to \(\kappa = 1000\). The size of the domain is chosen as \(L=8\). This is larger than the Thomas-Fermi radius of the problem, which can be estimated as \(R^{ \text{ TF }}=\sqrt{2} (\kappa /\pi )^{1/4}\approx 5.97\), cf. [10]. We are interested in computing the ground state, i.e., the global minimizer \(u^\text {gs}\) of the energy (12). Note that, up to sign, \(u^\text {gs}\) is the unique eigenfunction that corresponds to the smallest eigenvalue \(\lambda ^\text {gs}\) which is well-separated from the remaining spectrum. As an initial value for all variants of eigenvalue solvers we use the bi-quadratic bubble
interpolated in the finite element space and normalized in \(L^2({\mathcal {D}})\). For this simple model problem, there are certainly more sophisticated initial guesses such as the Thomas-Fermi approximation. Our uneducated initial guess marks an additional challenge. As ground state energy we computed \(E^\text {gs}:= E(u^\text {gs})\le 6.019\). The corresponding eigenvalue approximation is \(\lambda ^\text {gs}\approx 17.93\). Clearly, the accuracy of these numbers is limited by the choice of the discretization parameter h. Mesh adaptivity as used in [34] or a higher-order method would certainly help to improve on these numbers.
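The quoted Thomas-Fermi radius is a direct evaluation of the stated formula; a one-line check:

```python
import numpy as np
kappa = 1000.0
print(np.sqrt(2.0) * (kappa / np.pi) ** 0.25)   # approx. 5.97
```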
Fig. 1 Computation of the ground state for a harmonic potential, cf. Sect. 6.1 for details. The figure shows the \(L^2({\mathcal {D}})\)-norms of the residuals (logarithmic scale) vs. the iteration count for several methods indicated in the legend
Figure 1 shows the evolution of the residuals during the iteration of several variants of the J- and A-method. Unless specified differently, the general J- and A-methods refer to a combination of damping and shifting. More precisely, we use damping for globalization of convergence until the residual falls below \(10^{-3}\). Then we freeze the step size at \(\tau =1\) and switch to a Rayleigh shift strategy to possibly accelerate convergence, i.e., in each step we choose the shift \(\sigma = -\lambda ^n\) to be the current eigenvalue approximation. According to our numerical experience the coexistence of damping and shifting is hard to control. The transition from damping to shifting is clearly seen in the convergence plot of Fig. 1. We observe linear but fairly slow convergence in the damped phase. As soon as we switch to shifting, the convergence is beyond linear. The A-method performs similarly in the damping phase but diverges as soon as the shift is turned on. It is this phenomenon, already observed in [38] in a less extreme form (see also the second model problem below), that motivated the derivation of a shift-sensitive J-method. Note that the improved rate can be explained by its connection to Newton’s method, cf. [38, Sect. 3.4]. Convergence proofs with higher rate, however, are only given in the discrete setting for linear and particular nonlinear eigenvalue problems [41, 43].
In Fig. 1 we also show results for the variants of the J- and A-methods where either \(\tau \), \(\sigma \), or both are fixed. Their performance is in between the aforementioned combined approaches.
6.2 Exponentially localized ground state in a disorder potential
In the second model problem the non-negative external potential W reflects a high degree of disorder and the repulsion parameter \(\kappa \) is small. In this situation, the low-energy eigenstates essentially localize in the sense of an exponential decay of their moduli.
The numerical approximation of localized Schrödinger eigenstates in the linear case, i.e., for \(\kappa =0\), has recently attracted considerable interest in the fields of computational physics and scientific computing [8, 9, 31, 51, 55]. In particular, the results of [4] provide a mathematical justification of the observed localization. In the nonlinear case the phenomenon is still observable but locality deteriorates with increasing interaction [3, 5, 6]. Here, we choose \(\kappa =1\) which leads to a fairly localized ground state as can be seen in Fig. 2 (right). Its computation turns out to be much more challenging than in the case of a harmonic trapping potential in the sense that convergence rates are slower and iteration counts larger. This is related to a possible clustering of the lowermost eigenvalues. In particular, we expect \(\lambda _1/\lambda _2 \approx 1\) in this example such that shifting can provide a considerable speed-up. Due to the small repulsion parameter \(\kappa \), however, we expect a significant gap within the first few eigenvalues as in the linear case.
Fig. 2 Exponentially localized ground state in a disorder potential, cf. Sect. 6.2. Left: Disorder potential (random i.i.d. checkerboard) taking values 0 (white) and \((2\varepsilon L)^{-2}\) (black) for parameters \(L=8\), \(\varepsilon =2^{-6}\). Right: Corresponding ground state density for \(\varOmega = 0\), \(\kappa = 1\)
Fig. 3 Computation of the exponentially localized ground state in a disorder potential, cf. Sect. 6.2 for details. The figure shows the \(L^2({\mathcal {D}})\)-norms of the residuals (logarithmic scale) vs. the iteration count for several variants of the J- and A-method
We have tested the same solvers as in the previous subsection. The J-method with shifting performs best by far. Surprisingly, the variant without an adaptive time step in the damping phase even performed slightly better. This does not contradict the theory, since we show residuals rather than energies (Fig. 3), and a locally optimal energy decrease does not necessarily lead to better global performance. Still, the difference is not large. As a general recommendation based on our numerical experience, we would favor the adaptive time step because it proved more robust. Another difference compared to the harmonic potential is that, this time, the A-method responds positively to shifting. For a few steps the convergence is indeed accelerated. Thereafter, however, the method falls back to a linear convergence regime that cannot compete with the shifted J-method. Nevertheless, the A-method does not fail completely as it did in the previous example. This may be due to the small value of \(\kappa \) relative to the magnitude of the considered potential.
6.3 Vortex lattices in a fast rotating trap
We close the numerical illustration of the J-method with a qualitative study of vortex lattice states in the presence of fast rotation. We choose a harmonic potential W as in (28) and set \(\varOmega =0.85\), \(\kappa = 1000\), and the size of the computational domain to \(L=10\).
We have computed four different eigenfunctions using the J-method; this was only possible by exploiting the shift-sensitivity of the method. We briefly describe the computational parameters. We use the bi-quadratic bubble (29) as the initial value for all computations. To compute the (tentative) ground state \(u_1\) (see Fig. 4, upper left) we used the combined strategy as before. However, we switched from damping to shifting only once the residual fell below \(10^{-6}\); switching earlier led to states of higher energy. For example, switching at a tolerance of \(10^{-3}\) led to the eigenfunction \(u_2\) depicted in the upper right of Fig. 4. It is interesting to observe that, while \(E_1:=E(u_1)<E(u_2)=:E_2\), the corresponding eigenvalues are ordered the other way around.
Fig. 4 Computation of vortex lattices in a fast rotating trap at different energy levels, cf. Sect. 6.3. The parameters are \(L=10\), \(\varOmega = 0.85\), \(\kappa = 1000\). The upper left panel depicts the density of the tentative ground state; the other three panels show densities corresponding to excited states
Two further excited states are found by restricting the adaptive shift to the interval [15.0, 15.6] for \(u_3\) (see lower left of Fig. 4) and to the interval [15.2, 15.45] for the state \(u_4\), which does not show any vortices (see lower right of Fig. 4). In both cases we used the lower end of the interval as the shift in the damping phase and switched to adaptive shifting at a residual tolerance of \(10^{-4}\) for \(u_3\) and \(10^{-3}\) for \(u_4\). Note that \(u_1\) appears to be the global energy minimizer of the (discretized) problem, whereas the excited states \(u_2,u_3,u_4\) do not necessarily represent the next higher energy levels 2 to 4 but possibly levels of even higher energy.
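A restriction of the adaptive shift to a prescribed interval admits a very simple realization. The snippet below reflects one plausible reading, namely clamping the eigenvalue approximation that enters the Rayleigh-type shift \(\sigma = -\lambda^n\) to the chosen interval; the function name and the default bounds (those quoted for \(u_3\)) are illustrative only and not taken from the paper's implementation.

```python
def clamped_rayleigh_shift(lam_n, lam_min=15.0, lam_max=15.6):
    """Form a Rayleigh-type shift from the current eigenvalue approximation lam_n,
    restricted to the target interval [lam_min, lam_max]."""
    return -min(max(lam_n, lam_min), lam_max)
```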
This rather involved procedure shows that computing such excited states is by no means trivial. We also note that the shifting is not always easy to control: if one shifts too early, in the sense that the approximation is not yet close to the target eigenfunction (e.g., in terms of the number of vortices), the procedure may fail completely. Despite this difficulty, which is intrinsic to the nonlinear eigenvalue problem at hand, the J-method with shifting and damping enables the selective approximation of excited states as well as the amplification of convergence beyond linear rates, in the spirit of the Rayleigh quotient iteration, even in this challenging regime of vortex pattern formation.
7 Conclusion
In this paper we have generalized the J-method proposed in [38] to an abstract Hilbert space setting. This gives rise to a variational formulation that is directly accessible to Galerkin-type discretizations, e.g., based on finite elements. Moreover, we have transferred the proof of local convergence of the J-method from the discrete setting (cf. [38]) to the abstract setting and recovered a quantitative convergence rate that depends on the spectral shift. Since this fast convergence is a purely local feature, we have proposed a damped J-method. For the GPEVP, the damping step can be seen as a discretization of a generalized gradient flow and guarantees a reduction of the energy associated with the Gross–Pitaevskii operator. This energy reduction is the key to the global convergence of the damped method.
We have proposed a combined strategy of damping and shifting, controlled by the residual error. The damping phase guides the iterates into a sufficiently small neighborhood of an eigenfunction, in which the shifting then significantly improves the linear rate of convergence. With a Rayleigh-type shifting strategy, remarkable speed-ups beyond linear convergence are observed. In numerical experiments we have demonstrated the excellent performance of the resulting method and its suitability for both the computation of ground states and the selective computation of excited states. We believe that the proposed strategy can also be an efficient tool for other types of eigenvalue problems with nonlinearities, in particular those that can be rephrased as finding the critical points of constrained energy minimization problems.
References
Abraham, R., Marsden, J.E., Ratiu, T.: Manifolds, tensor analysis, and applications, Applied Mathematical Sciences, vol. 75, 2nd edn. Springer-Verlag, New York (1988). https://doi.org/10.1007/978-1-4612-1029-0
Altmann, R., Froidevaux, M.: PDE eigenvalue iterations with applications in two-dimensional photonic crystals. ESAIM Math. Model. Numer. Anal. (M2AN) 54(5), 1751–1776 (2020)
Altmann, R., Henning, P., Peterseim, D.: Localization and delocalization of ground states of Bose-Einstein condensates under disorder (2020). arXiv:2006.00773
Altmann, R., Henning, P., Peterseim, D.: Quantitative Anderson localization of Schrödinger eigenstates under disorder potentials. Math. Models Methods Appl. Sci. 30(5), 917–955 (2020). https://doi.org/10.1142/S0218202520500190
Altmann, R., Peterseim, D.: Localized computation of eigenstates of random Schrödinger operators. SIAM J. Sci. Comput. 41, B1211–B1227 (2019)
Altmann, R., Peterseim, D., Varga, D.: Localization studies for ground states of the Gross–Pitaevskii equation. PAMM 18(1), e201800343 (2018). https://doi.org/10.1002/pamm.201800343
Antoine, X., Levitt, A., Tang, Q.: Efficient spectral computation of the stationary states of rotating Bose-Einstein condensates by preconditioned nonlinear conjugate gradient methods. J. Comput. Phys. 343, 92–109 (2017). https://doi.org/10.1016/j.jcp.2017.04.040
Arnold, D.N., David, G., Filoche, M., Jerison, D., Mayboroda, S.: Computing spectra without solving eigenvalue problems. SIAM J. Sci. Comput. 41(1), B69–B92 (2019). https://doi.org/10.1137/17M1156721
Arnold, D.N., David, G., Jerison, D., Mayboroda, S., Filoche, M.: Effective confining potential of quantum states in disordered media. Phys. Rev. Lett. 116, 056602 (2016). https://doi.org/10.1103/PhysRevLett.116.056602
Bao, W.: Mathematical models and numerical methods for Bose-Einstein condensation. In: Proceedings of the International Congress of Mathematicians 2014 (2014)
Bao, W., Chern, I.L., Lim, F.Y.: Efficient and spectrally accurate numerical methods for computing ground and first excited states in Bose-Einstein condensates. J. Comput. Phys. 219(2), 836–854 (2006). https://doi.org/10.1016/j.jcp.2006.04.019
Bao, W., Du, Q.: Computing the ground state solution of Bose-Einstein condensates by a normalized gradient flow. SIAM J. Sci. Comput. 25(5), 1674–1697 (2004). https://doi.org/10.1137/S1064827503422956
Bao, W., Shen, J.: A generalized-Laguerre-Hermite pseudospectral method for computing symmetric and central vortex states in Bose-Einstein condensates. J. Comput. Phys. 227(23), 9778–9793 (2008). https://doi.org/10.1016/j.jcp.2008.07.017
Bao, W., Tang, W.: Ground-state solution of Bose-Einstein condensate by directly minimizing the energy functional. J. Comput. Phys. 187(1), 230–254 (2003). https://doi.org/10.1016/S0021-9991(03)00097-4
Bao, W., Wang, H., Markowich, P.A.: Ground, symmetric and central vortex states in rotating Bose-Einstein condensates. Commun. Math. Sci. 3(1), 57–88 (2005). http://projecteuclid.org/euclid.cms/1111095641
Borzì, A., Ciaramella, G., Sprengel, M.: Formulation and numerical solution of quantum control problems. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA (2017). https://doi.org/10.1137/1.9781611974843.ch1
Bose, S.N.: Plancks Gesetz und Lichtquantenhypothese. Zeitschrift für Physik 26(1), 178–181 (1924)
Caliari, M., Ostermann, A., Rainer, S., Thalhammer, M.: A minimisation approach for computing the ground state of Gross–Pitaevskii systems. J. Comput. Phys. 228(2), 349–360 (2009). https://doi.org/10.1016/j.jcp.2008.09.018
Cancès, E.: SCF algorithms for HF electronic calculations. In: Mathematical models and methods for ab initio quantum chemistry, Lecture Notes in Chem., vol. 74, pp. 17–43. Springer, Berlin (2000). https://doi.org/10.1007/978-3-642-57237-1_2
Cancès, E., Chakir, R., He, L., Maday, Y.: Two-grid methods for a class of nonlinear elliptic eigenvalue problems. IMA J. Numer. Anal. 38(2), 605–645 (2018). https://doi.org/10.1093/imanum/drw053
Cancès, E., Chakir, R., Maday, Y.: Numerical analysis of nonlinear eigenvalue problems. J. Sci. Comput. 45(1–3), 90–117 (2010). https://doi.org/10.1007/s10915-010-9358-1
Cancès, E., Dusson, G., Maday, Y., Stamm, B., Vohralík, M.: A perturbation-method-based a posteriori estimator for the planewave discretization of nonlinear Schrödinger equations. C. R. Math. Acad. Sci. Paris 352(11), 941–946 (2014). https://doi.org/10.1016/j.crma.2014.09.014
Cancès, E., Le Bris, C.: Can we outperform the DIIS approach for electronic structure calculations? Int. J. Quantum Chem. 79(2), 82–90 (2000)
Cancès, E., Le Bris, C.: On the convergence of SCF algorithms for the Hartree–Fock equations. M2AN Math. Model. Numer. Anal. 34(4), 749–774 (2000). https://doi.org/10.1051/m2an:2000102
Chen, H., Gong, X., Zhou, A.: Numerical approximations of a nonlinear eigenvalue problem and applications to a density functional model. Math. Methods Appl. Sci. 33(14), 1723–1742 (2010). https://doi.org/10.1002/mma.1292
Dalfovo, F., Giorgini, S., Pitaevskii, L.P., Stringari, S.: Theory of Bose-Einstein condensation in trapped gases. Rev. Mod. Phys. 71(3), 463–512 (1999)
Danaila, I., Kazemi, P.: A new Sobolev gradient method for direct minimization of the Gross–Pitaevskii energy with rotation. SIAM J. Sci. Comput. 32(5), 2447–2467 (2010). https://doi.org/10.1137/100782115
Danaila, I., Protas, B.: Computation of ground states of the Gross–Pitaevskii functional via Riemannian optimization. SIAM J. Sci. Comput. 39(6), B1102–B1129 (2017). https://doi.org/10.1137/17M1121974
Dion, C.M., Cancès, E.: Ground state of the time-independent Gross–Pitaevskii equation. Comput. Phys. Commun. 177(10), 787–798 (2007). https://doi.org/10.1016/j.cpc.2007.04.007
Einstein, A.: Quantentheorie des einatomigen idealen Gases, pp. 261–267. Sitzber. Kgl. Preuss. Akad. Wiss. (1924)
Filoche, M., Mayboroda, S.: Universal mechanism for Anderson and weak localization. Proc. Natl. Acad. Sci. USA 109(37), 14761–14766 (2012). https://doi.org/10.1073/pnas.1120432109
Finta, B.: About Ostrowski’s iterative method in infinite-dimensional spaces. Comput. Math. Appl. 38(9–10), 193–196 (1999). https://doi.org/10.1016/S0898-1221(99)00274-6
García-Ripoll, J.J., Pérez-García, V.M.: Optimizing Schrödinger functionals using Sobolev gradients: applications to quantum mechanics and nonlinear optics. SIAM J. Sci. Comput. 23(4), 1316–1334 (2001). https://doi.org/10.1137/S1064827500377721
Heid, P., Stamm, B., Wihler, T.P.: Gradient flow finite element discretizations with energy-based adaptivity for the Gross–Pitaevskii equation. J. Comput. Phys. 436, 110165 (2021). https://doi.org/10.1016/j.jcp.2021.110165
Henning, P., Målqvist, A., Peterseim, D.: Two-level discretization techniques for ground state computations of Bose-Einstein condensates. SIAM J. Numer. Anal. 52(4), 1525–1550 (2014). https://doi.org/10.1137/130921520
Henning, P., Peterseim, D.: Sobolev gradient flow for the Gross–Pitaevskii eigenvalue problem: global convergence and computational efficiency. SIAM J. Numer. Anal. 58(3), 1744–1772 (2020). https://doi.org/10.1137/18M1230463
Huhtanen, M., Ruotsalainen, S.: Real linear operator theory and its applications. Integral Equ. Oper. Theory 69(1), 113–132 (2011). https://doi.org/10.1007/s00020-010-1825-4
Jarlebring, E., Kvaal, S., Michiels, W.: An inverse iteration method for eigenvalue problems with eigenvector nonlinearities. SIAM J. Sci. Comput. 36(4), A1978–A2001 (2014)
Kazemi, P., Eckart, M.: Minimizing the Gross–Pitaevskii energy functional with the Sobolev gradient—analytical and numerical results. Int. J. Comput. Methods 7(3), 453–475 (2010). https://doi.org/10.1142/S0219876210002301
Kitchen, J.W.: Concerning the convergence of iterates to fixed points. Studia Math. 27, 247–249 (1966). https://doi.org/10.4064/sm-27-3-247-249
Mehrmann, V., Voss, H.: Nonlinear eigenvalue problems: a challenge for modern eigenvalue methods. GAMM-Mitteilungen 27(2), 121–152 (2004). https://doi.org/10.1002/gamm.201490007
Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York-London (1970)
Osborne, M.R.: A new method for the solution of eigenvalue problems. Comput. J. 7(3), 228–232 (1964). https://doi.org/10.1093/comjnl/7.3.228
Ostrowski, A.M.: Solution of equations and systems of equations. Pure and Applied Mathematics, vol. 9, 2nd edn. Academic Press, New York-London (1966)
Pitaevskii, L.P., Stringari, S.: Bose-Einstein Condensation. Oxford University Press, Oxford (2003)
Raza, N., Sial, S., Butt, A.R.: Numerical approximation of time evolution related to Ginzburg-Landau functionals using weighted Sobolev gradients. Comput. Math. Appl. 67(1), 210–216 (2014). https://doi.org/10.1016/j.camwa.2013.11.006
Raza, N., Sial, S., Siddiqi, S.S., Lookman, T.: Energy minimization related to the nonlinear Schrödinger equation. J. Comput. Phys. 228(7), 2572–2577 (2009). https://doi.org/10.1016/j.jcp.2008.12.016
Roothaan, C.C.J.: New developments in molecular orbital theory. Rev. Mod. Phys. 23, 69–89 (1951). https://doi.org/10.1103/RevModPhys.23.69
Sherman, J., Morrison, W.J.: Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. Ann. Math. Statist. 21(1), 124–127 (1950). https://doi.org/10.1214/aoms/1177729893
Shih, M.H.: A further generalization of the Ostrowski theorem in Banach spaces. Proc. Japan Acad. Ser. A Math. Sci. 57(3), 168–170 (1981). http://projecteuclid.org/euclid.pja/1195516492
Steinerberger, S.: Localization of quantum states and landscape functions. Proc. Am. Math. Soc. 145(7), 2895–2907 (2017). https://doi.org/10.1090/proc/13343
Upadhyaya, P., Jarlebring, E., Rubensson, E.H.: A density matrix approach to the convergence of the self-consistent field iteration. Numer. Algebra Control Optim. 11(1), 99–115 (2021). https://doi.org/10.3934/naco.2020018
Woodbury, M.A.: Inverting modified matrices. Statistical Research Group, Memo. Rep. no. 42. Princeton University, Princeton, N. J. (1950)
Xie, H., Xie, M.: Computable error estimates for ground state solution of Bose-Einstein condensates (2016). arXiv:1604.05228
Xie, H., Zhang, L., Owhadi, H.: Fast eigenpairs computation with operator adapted wavelets and hierarchical subspace correction. SIAM J. Numer. Anal. 57(6), 2519–2550 (2019)
Zeidler, E.: Nonlinear functional analysis and its applications. II/A. Springer-Verlag, New York (1990). https://doi.org/10.1007/978-1-4612-0985-0
Zhang, Z.: Exponential convergence of Sobolev gradient descent for a class of nonlinear eigenproblems (2019). arXiv:1912.02135
Acknowledgements
P. Henning also acknowledges the support of the Göran Gustafsson Foundation.
Funding
Open access funding provided by Royal Institute of Technology.
P. Henning acknowledges funding by the Swedish Research Council (grant 2016-03339). The work of D. Peterseim is part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 865751).
Appendices
A Energy-diminishing step size control
We consider the damped J-method (7) in the case of the Gross–Pitaevskii equation. In order to implement an efficient step size control, we consider the function
that we want to minimize for \(\tau \in (0,2)\). Based on \(u^n\), we compute
Note that this implies
With this, we precompute various terms, which are given by
as well as
Note that the terms \(\alpha _i\), \(\beta _i\), and \(\zeta _i\) have to be computed only once per time step (e.g. within one loop over the grid elements). Finally, we define the function
and observe that \(f(\tau )\) is given by
After precomputing \(\alpha _i\), \(\beta _i\), and \(\zeta _i\), the function \(f(\tau )\) can be evaluated cheaply. The minimization step, i.e., \(\tau _n := \arg \min \{ f(\tau ) \mid \tau \in (0,2) \}\), can easily be implemented using, e.g., a golden section search. Note that the energy of \(u^{n+1}\) is then given by \(f(\tau _n)\).
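For illustration, a textbook golden section search over the interval (0,2) might look as follows. The scalar function f is assumed to be the cheaply evaluable function of \(\tau\) built from the precomputed coefficients \(\alpha_i\), \(\beta_i\), \(\zeta_i\); the routine itself is a generic sketch, not the implementation used for the experiments.

```python
import math

def golden_section_min(f, a=0.0, b=2.0, tol=1e-6):
    """Locate an approximate minimizer of f on (a, b) by golden section search;
    f is assumed to be unimodal on this interval."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.618
    c = b - invphi * (b - a)                # lower interior point
    d = a + invphi * (b - a)                # upper interior point
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                         # minimizer lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                               # minimizer lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)
```

A call such as `tau_n = golden_section_min(f)` then yields the step size, and `f(tau_n)` the energy of \(u^{n+1}\).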
B Matrix representation of the \({\mathcal {J}}\)-operator
The \({\mathcal {J}}\)-operator in the case of the GPEVP was derived in Sect. 4.2, and we consider the iteration given by (16). For the implementation, we need to discuss the handling of the nonlinear terms. Using the identity \((u {\overline{v}} + 2 v {\overline{u}}) u\, {\overline{w}} = 2 \mathfrak {R}\left( u {\overline{v}} \right) u\, {\overline{w}} + |u|^2 v\, {\overline{w}}\), we end up with the integrals
Note that we only need to consider real test functions w and decompose u and v into their real and imaginary parts, i.e., \(u=u_R + \mathrm {i}u_I\), \(v=v_R + \mathrm {i}v_I\). This is also done in the finite element discretization, i.e., we work with real vectors of double dimension. For the first integral we note that
Introducing \(M_{v}\) as the mass matrix weighted by v, this leads to the matrix representation
For the second term we have \(|u|^2 v\, {\overline{w}} = |u|^2 \big (v_R w + \mathrm {i}v_I w \big )\). Thus, the corresponding finite element matrix is block diagonal and reads \(\text {diag}(M_{|u|^2}, M_{|u|^2})\). Finally, we have
A simple rearrangement shows that this corresponds to the rank-one matrix
In total, a finite element discretization of the J-method calls for the solution of a linear system composed of several sparse matrices and the aforementioned rank-one update. Such a system can be inverted efficiently using the Sherman–Morrison formula, cf. [49, 53].
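To indicate how the resulting system can be solved, the following sketch applies the Sherman–Morrison formula, reusing a sparse LU factorization of the sparse part. The names S, a, b, and rhs are placeholders for the assembled sparse part of the system matrix, the two rank-one factors, and the right-hand side; they are not the paper's notation.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_rank_one_update(S, a, b, rhs):
    """Solve (S + a b^T) x = rhs via the Sherman-Morrison formula.

    S   : sparse matrix collecting all sparse contributions
    a,b : vectors of the rank-one update a b^T
    rhs : right-hand side vector
    Only solves with S itself are required; they reuse one LU factorization.
    """
    lu = spla.splu(sp.csc_matrix(S))
    y = lu.solve(rhs)                    # S^{-1} rhs
    z = lu.solve(a)                      # S^{-1} a
    return y - z * (b @ y) / (1.0 + b @ z)
```

In this way only two solves with the sparse part are needed, and the rank-one update never has to be formed explicitly.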
Keywords
- Nonlinear eigenvalue problem
- Gross–Pitaevskii equation
- Iterative eigenvalue solvers