1 Introduction

The Bose gas is one of the simplest models in quantum statistical mechanics, and yet it has a rich and complex phenomenology. As such, it has garnered much attention from the mathematical physics community for over half a century. It consists of infinitely many identical Bosons and is used to model a wide range of physical systems, from photons in black body radiation to gasses of helium atoms. Whereas photons do not directly interact with each other, helium atoms do, and this interaction makes such systems much more challenging to study. To account for interactions between Bosons, Bogolyubov [5] introduced a widely used approximation scheme that accurately predicts many observables [23] in the low density regime. Even though Bogolyubov theory is not mathematically rigorous, it has allowed mathematical physicists to develop the necessary intuition to prove a wide variety of results about the Bose gas, such as the low density expansion of the ground state energy of the Bose gas in the thermodynamic limit [1, 13,14,15,16, 34], as well as many other results in scaling limits other than the thermodynamic limit (see [17] for a review, as well as, among many others, [2,3,4, 6, 7, 11, 12, 19, 27, 28, 30,31,32]). In this note, we will focus on the ground state in the thermodynamic limit.

In 1963, Lieb [24,25,26] introduced a new approximation scheme to compute properties of the ground state of Bose gasses, called the simplified approach, which has recently been found to yield surprisingly accurate results [8,9,10, 20]. Indeed, while Bogolyubov theory is accurate at low densities, the simplified approach has been shown to yield asymptotically accurate results at both low and high densities [8, 9] for interaction potentials that are of positive type, as well as reproduce the qualitative behavior of the Bose gas at intermediate densities [10]. In addition to providing a promising tool to study the Bose gas, the derivation of the Simplified approach is different enough from Bogolyubov theory that it may give novel insights into longstanding open problems about the Bose gas.

The original derivation of the Simplified approach [24] is quite general, and applies to any translation invariant system (it even works for Coulomb [26] and hard-core [10] interactions). In the present paper, we extend this derivation to systems that break translation invariance. This allows us to formulate the simplified approach for systems with external potentials, and with a large class of boundary conditions. In addition, it allows us to compute observables in systems with translation invariance, but whose computation requires breaking the translation invariance. We will discuss an example of such an observable: the momentum distribution.

The momentum distribution \({\mathcal {M}}(k)\) is the probability of finding a particle in the state \(e^{ikx}\). Bose gasses are widely expected to form a Bose–Einstein condensate, although this has still not been proven (at least for continuum interacting gasses in the thermodynamic limit). From a mathematical point of view, Bose–Einstein condensation is defined as follows: if the Bose gas consists of N particles, the average number of particles in the constant state (corresponding to \(k=0\) in \(e^{ikx}\)) is of order N. The condensate fraction is defined as the proportion of particles in the constant state. The momentum distribution is an extension of the condensate fraction to a more general family of states. In particular, computing \({\mathcal {M}}(k)\) for \(k\ne 0\) amounts to counting particles that are not in the condensate. This quantity has been used in the recent proof [15, 16] of the energy asymptotics of the Bose gas at low density. A numerical computation of the prediction of the Simplified approach for \({\mathcal {M}}(k)\) has been published in [22].

The main results in this paper fall into two categories. First, we will derive the simplified approach without assuming translation invariance, see Theorem 1. To do so, we will make the so-called “factorization assumption” on the marginals of the ground state wavefunction, see Assumption 1. This allows us to derive a simplified approach for a wide variety of situations in which translation symmetry is broken, such as in the presence of external potentials. Second, we compute a prediction for the momentum distribution using the simplified approach. The simplified approach does not allow us to compute the ground state wavefunction directly, so to compute observables, such as the momentum distribution, we use the Hellmann–Feynman technique and add an operator to the Hamiltonian. In the case of the momentum distribution, this extra operator is a projector onto \(e^{ikx}\), which breaks the translation invariance of the ground state wavefunctions. In Theorem 2, we show how to compute the momentum distribution in the simplified approach using the general result of Theorem 1. In addition, we check that the prediction is credible by comparing it to the prediction of Bogolyubov theory, and find that both approaches agree at low densities and small k, see Theorem 3.

The result in this paper concerns the derivation of the Simplified approach for Bose gasses without translation invariance. As of this writing, this derivation has not been carried out in a mathematically rigorous way. Doing so is an important open problem (as the predictions of the simplified approach are expansive, even more so than those of Bogolyubov theory). However, the simplified approach has not been derived rigorously even in translation invariant settings, and the translation invariant situation seems easier to approach, so justifying the simplified approach in that setting may be a more pressing task. That being said, the Simplified approach has proved to have strong predictive power [10, 22], so the extension presented in this paper has the potential to yield interesting physical predictions, as the translation invariant approach has done (although, in all fairness, the non-translation invariant Simplified approach is computationally more difficult than the translation invariant one). In addition, the derivation of the simplified approach for the trapped Bose gas may shed some light on an extension of Gross–Pitaevskii theory beyond low density regimes. Work in this direction is ongoing.

Instead of providing a derivation of the simplified approach from the many-body Bose gas (which is beyond reach at the moment), this paper aims to put the derivation of the simplified approach in non-translation invariant settings on a firm footing, and make clear what is rigorous, and what is an approximation.

The rest of the paper is structured as follows. In Sect. 2, we specify the model and state the main results precisely. We then prove Theorem 1 in Sect. 3, Theorem 2 in Sect. 4.1, and Theorem 3 in Sect. 4.2. The proofs are largely independent and can be read in any order.

2 The Model and Main Results

Consider N Bosons in a box of volume V denoted by \(\varOmega _V:=[-V^{\frac{1}{3}}/2,V^{\frac{1}{3}}/2]^3\), interacting with each other via a pair potential \(v\in L_{1}(\varOmega _V^2)\) that is symmetric under exchanges of particles: \(v(x,y)\equiv v(y,x)\) and non-negative: \(v(x,y)\geqslant 0\). The Hamiltonian acts on \(L_{2,\textrm{sym}}(\varOmega _V^N)\) as

$$\begin{aligned} {\mathcal {H}}:= -\frac{1}{2}\sum _{i=1}^N\varDelta _i + \sum _{1\leqslant i<j\leqslant N}v(x_i,x_j) + \sum _{i=1}^N P_i \end{aligned}$$
(1)

where \(\varDelta _i\equiv \partial _{x_i}^2\) is the Laplacian with respect to the position of the i-th particle and \(P_i\) is an extra single-particle term of the following form: given a self-adjoint operator \(\varpi \) on \(L_2(\varOmega _V)\),

$$\begin{aligned} P_i:=\mathbbm {1}^{\otimes i-1}\otimes \varpi \otimes \mathbbm {1}^{\otimes N-i}. \end{aligned}$$
(2)

\(\varpi \) can be chosen to be any self-adjoint operator, as long as \({\mathcal {H}}\) is self-adjoint. For instance, if we take \(\varpi \) to be a multiplication operator by a function \(v_0\geqslant 0\), then \(\sum _i P_i\) is the contribution of the external potential \(v_0\). In particular, this potential could be taken to scale with the volume of the box V, as in the Gross–Pitaevskii approach [18, 33]. Alternatively, \(v_0\) could be taken to be a periodic external potential. Or \(\varpi \) could be a projector onto \(e^{ikx}\), which is what we will do below to compute the momentum distribution. Because \(P_i\) acts on a single particle, it can prevent \({\mathcal {H}}\) from being translation invariant (as is the case when \(\varpi \) is the multiplication operator by a non-constant \(v_0>0\)). But even if it does not, because the ground states can be degenerate in the presence of \(P_i\) (see below), the translation invariance of the Hamiltonian does not necessarily carry over to the ground states.
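For concreteness, the two choices of \(\varpi \) used in this paper can be written explicitly as operators on \(L_2(\varOmega _V)\) (the second anticipates (49) below):

$$\begin{aligned} (\varpi f)(x)=v_0(x)f(x), \qquad (\varpi f)(x)=\epsilon e^{ikx}\int _{\varOmega _V} dy\ e^{-iky}f(y) \end{aligned}$$

both of which are self-adjoint: the first because \(v_0\) is real valued, the second because it is a real multiple of \(| e^{ikx}\big >\big < e^{ikx}|\).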

We may impose any boundary condition on the box, as long as the Laplacian is self-adjoint. We will consider the thermodynamic limit, in which \(N,V\rightarrow \infty \), such that

$$\begin{aligned} \frac{N}{V}=\rho \end{aligned}$$
(3)

is fixed. We consider a ground state \(\psi _0\), which is an eigenfunction of \({\mathcal {H}}\) with the lowest eigenvalue \(E_0\):

$$\begin{aligned} {\mathcal {H}}\psi _0=E_0\psi _0. \end{aligned}$$
(4)

When the operator \(\varpi \) is a multiplication operator by a function \(v_0\geqslant 0\) (that is, when it is a single-body potential), the ground state is unique, real, and non-negative (this follows from the Perron–Frobenius theorem and the fact that v and \(v_0\) are non-negative, see e.g. [21, Exercise E5]). In more general settings, this is not necessarily the case. In such a case, the Simplified approach should approximate the properties of one of the ground states, but gives no control over which one: as will be apparent below, the derivation of the Simplified approach does not depend on which eigenstate \(\psi _0\) is, only on the factorization Assumption 1.

This is not to say that the Simplified approach applies to all eigenstates, or even to all ground states. The crucial assumption in the simplified approach is the factorization Assumption 1. As we will discuss in more detail below, this is actually an approximation rather than an assumption, since it can be shown that it cannot hold exactly for any wavefunction. As such, understanding which states satisfy the factorization assumption most accurately is not an easy task. For this reason, we will remain agnostic as to which of the ground states is studied.

In order to take the thermodynamic limit, we will assume that v is uniformly integrable in V:

$$\begin{aligned} |v(x,y)|\leqslant {\bar{v}}(x,y),\quad \int _{{\mathbb {R}}^3} dy\ \bar{v}(x,y)\leqslant c \end{aligned}$$
(5)

where \({\bar{v}}\) and c are independent of V. In addition, we assume that, for any f that is uniformly integrable in V,

$$\begin{aligned} \int dx\ \varpi f(x)\leqslant c. \end{aligned}$$
(6)
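For instance, if \(\varpi \) is the multiplication operator by a bounded external potential \(v_0\geqslant 0\), then (6) holds: for any f satisfying \(|f|\leqslant {\bar{f}}\) with \(\int dx\ {\bar{f}}(x)\leqslant c\),

$$\begin{aligned} \int dx\ \varpi f(x)=\int dx\ v_0(x)f(x)\leqslant \Vert v_0\Vert _\infty \int dx\ {\bar{f}}(x)\leqslant c\Vert v_0\Vert _\infty . \end{aligned}$$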

2.1 The Simplified Approach Without Translation Invariance

The crucial idea of Lieb’s construction [24] is to consider the wave function \(\psi _0\) as a probability distribution, instead of the usual \(|\psi _0|^2\). When \(\varpi \) is the multiplication by \(v_0\geqslant 0\), \(\psi _0\geqslant 0\), so \(\psi _0\), normalized by its \(L_1\) norm, is indeed a probability distribution. In other cases, the probabilistic interpretation of \(\psi _0\) breaks down, and the factorization Assumption 1 can no longer be interpreted in terms of statistical independence. We then define the i-th marginal of \(\psi _0\) as

$$\begin{aligned} {\mathfrak {g}}_i(x_1,\cdots ,x_i):= \frac{\int \frac{dx_{i+1}}{V}\ldots \frac{dx_N}{V}\ \psi _0(x_1,\ldots ,x_N)}{\int \frac{dy_{1}}{V}\ldots \frac{dy_N}{V}\ \psi _0(y_1,\ldots ,y_N)} \end{aligned}$$
(7)

that is

$$\begin{aligned} {\mathfrak {g}}_i(x_1,\ldots ,x_i) \equiv V^i\frac{\int dx_{i+1}\cdots dx_N\ \psi _0(x_1,\ldots ,x_N)}{\int dy_{1}\ldots dy_N\ \psi _0(y_1,\ldots ,y_N)}. \end{aligned}$$
(8)

In particular, for \(i\in \{2,\ldots ,N\}\),

$$\begin{aligned} \int \frac{dx_i}{V}\ {\mathfrak {g}}_i(x_1,\ldots ,x_i)=\mathfrak g_{i-1}(x_1,\ldots ,x_{i-1}),\quad \int \frac{dx}{V}\ \mathfrak g_1(x)=1. \end{aligned}$$
(9)

Because of the symmetry of \(\psi _0\) under exchanges of particles, \({\mathfrak {g}}_i\) is symmetric under the exchange \(x_j\leftrightarrow x_l\) of any two of its arguments.
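The consistency relations (9) follow directly from the definition (7): for \(i\in \{2,\ldots ,N\}\),

$$\begin{aligned} \int \frac{dx_i}{V}\ {\mathfrak {g}}_i(x_1,\ldots ,x_i) = \frac{\int \frac{dx_{i}}{V}\frac{dx_{i+1}}{V}\ldots \frac{dx_N}{V}\ \psi _0(x_1,\ldots ,x_N)}{\int \frac{dy_{1}}{V}\ldots \frac{dy_N}{V}\ \psi _0(y_1,\ldots ,y_N)} ={\mathfrak {g}}_{i-1}(x_1,\ldots ,x_{i-1}) \end{aligned}$$

and, for \(i=1\), the same computation yields \(\int \frac{dx}{V}\ {\mathfrak {g}}_1(x)=1\).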

Remark: If the ground state is not unique, then there may be choices of \(\psi _0\) that are orthogonal to the constant wavefunction, that is, that integrate to 0: \(\int dy_1\ldots d y_N\ \psi _0(y_1,\ldots ,y_N)=0\). The derivation in this paper does not cover such a possibility, as \({\mathfrak {g}}_i\) would be ill defined. We will therefore assume that \(\psi _0\) has a non-trivial overlap with the constant wavefunction: \(\int dy_1\ldots dy_N\ \psi _0(y_1,\ldots ,y_N)\ne 0\) (which is certainly the case whenever the ground state is non-negative).

We rewrite (4) as a family of equations for \({\mathfrak {g}}_i\).

1. Integrating (4) with respect to \(x_1,\ldots ,x_N\), we find that

$$\begin{aligned} E_0= G^{(2)}_0 +F^{(1)}_0 +B_0 \end{aligned}$$
(10)

with

$$\begin{aligned} G^{(2)}_0:= & {} \frac{N(N-1)}{2V^2}\int dxdy\ v(x,y){\mathfrak {g}}_2(x,y) \end{aligned}$$
(11)
$$\begin{aligned} F^{(1)}_0:= & {} \frac{N}{V}\int dx\ \varpi {\mathfrak {g}}_1(x) \end{aligned}$$
(12)

and \(B_0\) is a boundary term:

$$\begin{aligned} B_0=-\frac{N}{2V}\int dx\ \varDelta {\mathfrak {g}}_1(x). \end{aligned}$$
(13)

2. If, now, we integrate (4) with respect to \(x_2,\ldots ,x_N\), we find

$$\begin{aligned} -\frac{\varDelta }{2}{\mathfrak {g}}_1(x) +\varpi {\mathfrak {g}}_1(x) +G^{(2)}_1(x) +G^{(3)}_1(x) +F^{(2)}_1(x) +B_1(x) =E_0\mathfrak g_1(x) \end{aligned}$$
(14)

with

$$\begin{aligned} G^{(2)}_1(x):= & {} \frac{N-1}{V}\int dy\ v(x,y){\mathfrak {g}}_2(x,y) \end{aligned}$$
(15)
$$\begin{aligned} G^{(3)}_1(x):= & {} \frac{(N-1)(N-2)}{2V^2}\int dydz\ v(y,z){\mathfrak {g}}_3(x,y,z) \end{aligned}$$
(16)
$$\begin{aligned} F^{(2)}_1(x):= & {} \frac{N-1}{V}\int dy\ \varpi _y {\mathfrak {g}}_2(x,y) \end{aligned}$$
(17)

in which we use the notation \(\varpi _y\) to indicate that \(\varpi \) applies to \(y\mapsto {\mathfrak {g}}_2(x,y)\), and \(B_1\) is a boundary term

$$\begin{aligned} B_1(x):=-\frac{N-1}{2V}\int dy\ \varDelta _y {\mathfrak {g}}_2(x,y). \end{aligned}$$
(18)

3. If we integrate with respect to \(x_3,\ldots ,x_N\), we find

$$\begin{aligned}{} & {} -\frac{1}{2}(\varDelta _x+\varDelta _y){\mathfrak {g}}_2(x,y) +v(x,y)\mathfrak g_2(x,y) +(\varpi _y+\varpi _x){\mathfrak {g}}_2(x,y) +\nonumber \\{} & {} + G^{(3)}_2(x,y) +G^{(4)}_2(x,y) +F^{(3)}_2(x,y) +B_2(x,y) =E_0{\mathfrak {g}}_2(x,y) \end{aligned}$$
(19)

where, here again, \(\varpi _y\) indicates that \(\varpi \) applies to the y-degree of freedom, whereas \(\varpi _x\) applies to x, with

$$\begin{aligned} G^{(3)}_2(x,y):= & {} \frac{N-2}{V}\int dz\ (v(x,z)+v(y,z))\mathfrak g_3(x,y,z) \end{aligned}$$
(20)
$$\begin{aligned} G^{(4)}_2(x,y):= & {} \frac{(N-2)(N-3)}{2V^2}\int dzdt\ v(z,t)\mathfrak g_4(x,y,z,t) \end{aligned}$$
(21)
$$\begin{aligned} F^{(3)}_2(x,y):= & {} \frac{N-2}{V}\int dz\ \varpi _z {\mathfrak {g}}_3(x,y,z) \end{aligned}$$
(22)

and \(B_2\) is a boundary term

$$\begin{aligned} B_2(x,y):=-\frac{N-2}{2V}\int dz\ \varDelta _z {\mathfrak {g}}_3(x,y,z). \end{aligned}$$
(23)

Inspired by [24], we will make the following approximation.

Assumption 1

(Factorization) We will approximate \({\mathfrak {g}}_i\) by functions \(g_i\), which satisfy the following:

$$\begin{aligned} g_2(x,y)=g_1(x)g_1(y)(1-u_2(x,y)) \end{aligned}$$
(24)

and for \(i=3,4\),

$$\begin{aligned} g_i(x_1,\ldots ,x_i) = \prod _{1\leqslant j<l\leqslant i} W_i(x_j,x_l) \end{aligned}$$
(25)

with

$$\begin{aligned} W_i(x,y)=f_i(x)f_i(y)(1-u_i(x,y)) \end{aligned}$$
(26)

in which, for \(i=2,3,4\) and \(j=3,4\), \(f_j\) and \(u_i\) are bounded independently of V, \(f_j\geqslant 0\), and \(u_i\) is uniformly integrable in V:

$$\begin{aligned} |u_i(x,y)|\leqslant {\bar{u}}_i(x,y),\quad \int dy\ {\bar{u}}_i(x,y)\leqslant c_i \end{aligned}$$
(27)

with \(c_i\) independent of V. We further assume that, for \(i=1,2,3\), \(\forall x_1,\ldots ,x_{i-1}\),

$$\begin{aligned} \lim _{V\rightarrow \infty }\int dx_i\ \varDelta _{x_i} g_i(x_1,\ldots ,x_i)=0 \end{aligned}$$
(28)

in other words, these boundary terms vanish in the thermodynamic limit (these are indeed boundary terms by the divergence theorem).
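Indeed, by the divergence theorem (writing \(n\) for the outward unit normal to \(\partial \varOmega _V\) and \(d\sigma \) for the surface measure, a notation used only in this remark),

$$\begin{aligned} \int _{\varOmega _V}dx_i\ \varDelta _{x_i} g_i(x_1,\ldots ,x_i) =\int _{\partial \varOmega _V}d\sigma (x_i)\ n(x_i)\cdot \nabla _{x_i}g_i(x_1,\ldots ,x_i). \end{aligned}$$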

In other words, \(g_i\) factorizes exactly as a product of pair terms \(W_i\). The \(f_i\) in \(W_i\) allow for \(W_i\) to be modulated by a slowly varying density, which is the main novelty of this paper compared to [24]. The inequality (27) ensures that \(u_i\) decays sufficiently fast on the microscopic scale. Note that, by the symmetry under exchanges of particles, \(u_i(x,y)\equiv u_i(y,x)\).

Note, in addition, that assumption (24) is less general than (25): we impose that, as x and y are far from each other, \(g_2\) converges to \(g_1(x)g_1(y)\). This is necessary: if we merely assumed that \(g_2(x,y)=f_2(x)f_2(y)(1-u_2(x,y))\), we would not necessarily recover that \(f_2=g_1\). However, as we will show below, assumption 1 does imply that \(f_3=g_1\) and \(f_4=g_1\) (up to corrections in \(V^{-1}\) that are irrelevant).
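To illustrate Assumption 1 in a familiar setting (that of [24] and of Corollary 1 below): in the translation invariant case, where \(g_1\equiv 1\) (see (128) below) and one expects the \(u_i\) to depend only on the difference of their arguments, the assumption reduces to

$$\begin{aligned} g_2(x,y)=1-u_2(x-y),\qquad g_i(x_1,\ldots ,x_i)=\prod _{1\leqslant j<l\leqslant i}(1-u_i(x_j-x_l)),\quad i=3,4 \end{aligned}$$

that is, \(f_i\equiv 1\) and the modulation by a slowly varying density disappears.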

Here, we use the term “assumption” because it leads to the simplified approach. However, it is really an approximation rather than an assumption: this factorization will certainly not hold true exactly. At best, one might expect that the assumption holds approximately in the limit of small and large \(\rho \), and for distant points, as numerical evidence suggests in the translation invariant case. In the present paper, we will not attempt a proof that this approximation is accurate, and instead explore its consequences. Suffice it to say that this approximation is one of statistical independence that is reminiscent of phenomena arising in statistical mechanics when the density is low, that is, when the interparticle distances are large. In the current state of the art, we do not have much in the way of an explanation for why this statistical independence should hold (especially in cases where \(\psi _0\) is not even non-negative); instead, we have extensive evidence, both numerical [10] and analytical [8, 9], that this approximation leads to very accurate predictions.

From this point on, we will make no further approximations, and derive the consequences of Assumption 1 in a mathematically rigorous way, thereby making clear what is an approximation and what is not.

The equations of the Simplified approach are derived from Assumption 1, using the eigenvalue Eq. (4) along with

$$\begin{aligned}{} & {} \int \frac{dx}{V}\ g_1(x)=1 \end{aligned}$$
(29)
$$\begin{aligned}{} & {} \int \frac{dy}{V}\ g_2(x,y)=g_1(x) \end{aligned}$$
(30)
$$\begin{aligned}{} & {} \int \frac{dz}{V}\ g_3(x,y,z)=g_2(x,y) \end{aligned}$$
(31)
$$\begin{aligned}{} & {} \int \frac{dz}{V}\frac{dt}{V}\ g_4(x,y,z,t)=g_2(x,y) \end{aligned}$$
(32)

(all of which hold for \({\mathfrak {g}}_i\), by (9)) to compute \(u_i\) and \(f_i\).

In the translation invariant case, the factorization assumption leads to an equation for \(g_2\) alone, as \(g_1\) is constant. When translation invariance is broken, \(g_1\) is no longer constant, and the simplified approach consists in two coupled equations for \(g_1\) and \(g_2\).

Theorem 1

If \(g_i\) satisfies Assumption 1, Eqs. (14) and (19) with \({\mathfrak {g}}_1\) replaced by \(g_1\) and \({\mathfrak {g}}_2\) by \(g_2\), as well as (29)–(32), then \(g_1\) and \(u_2\) satisfy the two coupled equations

$$\begin{aligned}{} & {} \left( -\frac{\varDelta }{2} +\left( \varpi -\left<\varpi \right>\right) +2\left( {\mathcal {E}}(x)-\left<{\mathcal {E}}(y)\right>\right) +\frac{1}{2}\left( {\bar{A}}(x)-\left<{\bar{A}}\right>-{\bar{C}}(x)\right) \right) g_1(x)\nonumber \\{} & {} \quad + \varSigma _1(x) =0 \end{aligned}$$
(33)

and

$$\begin{aligned}{} & {} \left( -\frac{1}{2}(\varDelta _x+\varDelta _y)+v(x,y)-2\rho {\bar{K}}(x,y)+\rho ^2{\bar{L}}(x,y)+{\bar{R}}_2(x,y)\right) \nonumber \\{} & {} \quad \cdot g_1(x)g_1(y)(1-u_2(x,y)) +\varSigma _2(x,y)=0 \end{aligned}$$
(34)

where

(35)
(36)
(37)
(38)
(39)
(40)
(41)

in which \(\varpi _x\) is the action of \(\varpi \) on the x-variable, and similarly for \(\varpi _y\) and

$$\begin{aligned} \varSigma _i\mathop {\longrightarrow }_{V\rightarrow \infty }0 \end{aligned}$$
(42)

pointwise. The prediction for the energy per particle is defined as

$$\begin{aligned} e:=\left<{\mathcal {E}}\right>+\left<\varpi \right>+\varSigma _0 \end{aligned}$$
(43)

where \(\varSigma _0\rightarrow 0\) as \(V\rightarrow \infty \).

This theorem is proved in Sect. 3.

Let us compare this to the equation for u in the Simplified approach in the translation invariant case [10, (5)], [20, (3.15)]:

$$\begin{aligned} -\varDelta u(x)= & {} (1-u(x))\left( v(x)-2\rho K(x)+\rho ^2 L(x)\right) \end{aligned}$$
(44)
$$\begin{aligned} K:= & {} u*S,\quad S(y):=(1-u(y))v(y) \end{aligned}$$
(45)
$$\begin{aligned} L:= & {} u*u*S -2u*(u(u*S)) +\frac{1}{2} \int dydz\ u(y)u(z-x)u(z)u(y-x)S(z-y). \end{aligned}$$
(46)
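Here and below, \(*\) denotes the convolution; spelled out, for instance,

$$\begin{aligned} K(x)=u*S(x)=\int dy\ u(x-y)S(y),\qquad u*u*S(x)=\int dydz\ u(x-y)u(y-z)S(z). \end{aligned}$$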

We will prove that these follow from Theorem 1:

Corollary 1

(Translation invariant case) In the translation invariant case \(v(x,y)\equiv v(x-y)\) and \(\varpi =0\) with periodic boundary conditions, if (33)–(34) has a unique translation invariant solution, then (34) reduces to (44) in the thermodynamic limit.

The idea of the proof is quite straightforward. Equation (34) is very similar to (44), but for the addition of the extra term \({\bar{R}}_2\). An inspection of (41) shows that the terms in \({\bar{R}}_2\) are mostly of the form \(f-\left<f\right>\), which vanish in the translation invariant case, and terms involving \(\varpi \), which is set to 0 in the translation invariant case. The only remaining extra term is \(\bar{C}(x)+{\bar{C}}(y)\), which we will show vanishes in the translation invariant case due to the identity (30).

Theorem 1 is quite general, and can be used to study a trapped Bose gas, in which there is an external potential \(v_0\). In this case, \(\varpi \) is a multiplication operator by \(v_0\). A natural approach is to scale \(v_0\) with the volume: \(v_0(x)={\bar{v}}_0(V^{-1/3}x)\) in such a way that the size of the trap grows as \(V\rightarrow \infty \), thus ensuring a finite local density in the thermodynamic limit. Following the ideas of Gross and Pitaevskii [18, 33], we would then expect to find that (33) and (34) decouple, and that (34) reduces to the translation invariant Eq. (44), with a density that is modulated over the trap. However, the presence of \({\bar{R}}_2\) in (34) and \({\bar{C}}\) in (33) breaks this picture. Further investigation of this question is warranted.

2.2 The Momentum Distribution

The momentum distribution for the Bose gas is defined as

$$\begin{aligned} \mathcal M^{(\textrm{Exact})}(k):=\frac{1}{N}\sum _{i=1}^N\left<\varphi _0\right| P_i\left| \varphi _0\right> \end{aligned}$$
(47)

where \(\varphi _0\) is the ground state of the Hamiltonian

$$\begin{aligned} -\frac{1}{2}\sum _{i=1}^N\varDelta _i + \sum _{1\leqslant i<j\leqslant N}v(x_i-x_j) \end{aligned}$$
(48)

and

$$\begin{aligned} \varpi f:=\epsilon | e^{ikx}\big >\big < e^{ikx}|f \equiv \epsilon e^{ikx}\int dy\ e^{-iky}f(y) \end{aligned}$$
(49)

and \(P_i\) is defined as in (2):

$$\begin{aligned} P_i\psi (x_1,\ldots ,x_N)= \epsilon e^{ikx_i}\int dy_i\ e^{-iky_i}\psi (x_1,\ldots ,x_{i-1},y_i,x_{i+1},\ldots ,x_N). \end{aligned}$$
(50)

Equivalently,

$$\begin{aligned} \mathcal M^{(\textrm{Exact})}(k)=\frac{\partial }{\partial \epsilon }\left. \frac{E_0}{N}\right| _{\epsilon =0} \end{aligned}$$
(51)

where \(E_0\) is the ground-state energy in (4) for the Hamiltonian (48). Using the simplified approach, we do not have access to the ground state wavefunction, so we cannot compute \({\mathcal {M}}\) using (47). Instead, we use the Hellmann-Feynman theorem, which consists in adding \(\sum _iP_i\) to the Hamiltonian. However, doing so does not ensure the uniqueness of the ground state, and thus, we are not guaranteed that the wavefunction \(\psi _0\) is translation invariant. This is why Theorem 1 is needed to compute the momentum distribution within the framework of the Simplified approach. (A similar computation was done in [10], but, there, the derivation of the momentum distribution for the Simplified approach was taken for granted.)
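Heuristically, the equivalence of (47) and (51) is an instance of the Hellmann–Feynman theorem: writing \(P_i=\epsilon \varPi _i\), where \(\varPi _i:=\mathbbm {1}^{\otimes i-1}\otimes | e^{ikx}\big >\big < e^{ikx}|\otimes \mathbbm {1}^{\otimes N-i}\) is independent of \(\epsilon \) (a notation used only in this remark), first order perturbation theory around the ground state \(\varphi _0\) of (48) gives

$$\begin{aligned} \frac{\partial }{\partial \epsilon }\left. \frac{E_0}{N}\right| _{\epsilon =0} =\frac{1}{N}\sum _{i=1}^N\left<\varphi _0\right| \varPi _i\left| \varphi _0\right> \end{aligned}$$

which is (47), with the \(P_i\) there understood at \(\epsilon =1\).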

By Theorem 1, and, in particular, (43), we obtain a natural definition of the prediction of the Simplified approach for the momentum distribution:

$$\begin{aligned} \mathcal M(k):=\frac{\partial }{\partial \epsilon }\left. \left( \left<\mathcal E\right>+\left<\varpi \right>\right) \right| _{\epsilon =0}. \end{aligned}$$
(52)

Theorem 2

(Momentum distribution) Under the assumptions of Theorem 1, using periodic boundary conditions, if v is translation invariant and \(\varpi =0\), and if (33) and (34) have solutions that are twice differentiable in \(\epsilon \), uniformly in V, then, if \(k\ne 0\),

$$\begin{aligned} {\mathcal {M}}(k)=\frac{\partial }{\partial \epsilon }\left. \frac{\rho }{2}\int dx\ (1-u(x))v(x)\right| _{\epsilon =0} \end{aligned}$$
(53)

where

$$\begin{aligned} -\varDelta u(x)=(1-u(x))v(x)-2\rho K(x)+\rho ^2L(x)+\epsilon F(x) \end{aligned}$$
(54)

where K and L are those of the translation invariant Simplified approach (45) and (46) and

$$\begin{aligned} F(x):=-2{\hat{u}}(-k)\cos (kx). \end{aligned}$$
(55)

We thus compute the momentum distribution. To check that our prediction is plausible, we compare it to the Bogolyubov prediction, which can easily be derived from [29, Appendix A]:

$$\begin{aligned} \mathcal M^{(\textrm{Bogolyubov})}(k)=-\frac{1}{2\rho }\left( 1-\frac{k^2+2\rho {\hat{v}}(k)}{\sqrt{k^4+4k^2\rho {\hat{v}}(k)}}\right) \end{aligned}$$
(56)

(this can be obtained by differentiating [29, (A.26)] with respect to \(\epsilon (k)\), which returns the number of particles in the state \(e^{ikx}\), which we divide by \(\rho \) to obtain the momentum distribution). Actually, following the ideas of [23], we replace \({\hat{v}}\) by a so-called “pseudopotential”, which consists in replacing v by a Dirac delta function, while preserving the scattering length:

$$\begin{aligned} {\hat{v}}(k)=4\pi a \end{aligned}$$
(57)

where the scattering length a is defined in [29, Appendix C]. Thus,

$$\begin{aligned} \mathcal M^{(\textrm{Bogolyubov})}(k)=-\frac{1}{2\rho }\left( 1-\frac{k^2+8\pi \rho a}{\sqrt{k^4+16\pi k^2\rho a}}\right) . \end{aligned}$$
(58)

We prove that, for the simple equation, as \(\rho \rightarrow 0\), the prediction for the momentum distribution coincides with Bogolyubov’s, for \(|k|\lesssim \sqrt{\rho a}\). The length scale \(1/\sqrt{\rho a}\) is called the healing length, and is the distance at which pairs of particles correlate [15]. It is reasonable to expect the Bogolyubov approximation to break down beyond this length scale.

The momentum distribution for the simple equation, following the prescription detailed in [8,9,10, 20], is defined as

$$\begin{aligned} \mathcal M^{(\textrm{simpleq})}(k)=\frac{\partial }{\partial \epsilon }\left. \frac{\rho }{2}\int dx\ (1-u(x))v(x)\right| _{\epsilon =0} \end{aligned}$$
(59)

where [8, (1.1)–(1.2)]

$$\begin{aligned} -\varDelta u(x)=(1-u(x))v(x)-4eu+2\rho e u*u+\epsilon F(x),\quad e:=\frac{\rho }{2}\int dx\ (1-u(x))v(x) \end{aligned}$$
(60)

where F was defined in (55).

Theorem 3

Assume that v is translation and rotation invariant (\(v(x,y)\equiv v(|x-y|)\)), and consider periodic boundary conditions. We rescale k:

$$\begin{aligned} \kappa :=\frac{k}{2\sqrt{e}} \end{aligned}$$
(61)

Then, for all \(\kappa \in {\mathbb {R}}^3\),

$$\begin{aligned} \lim _{e\rightarrow 0}\rho {\mathcal {M}}^{(\textrm{simpleq})}(2\sqrt{e}\kappa ) =\lim _{e\rightarrow 0}\rho {\mathcal {M}}^{(\textrm{Bogolyubov})}(2\sqrt{e}\kappa ) =-\frac{1}{2}\left( 1-\frac{\kappa ^2+1}{\sqrt{(\kappa ^2+1)^2-1}}\right) . \end{aligned}$$
(62)

The rotation invariance of v is presumably not necessary. However, the proof of this theorem is based on [9], where rotational symmetry was assumed for convenience.
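As a quick numerical sanity check of the Bogolyubov side of this statement (a minimal sketch, not used in the proof; the potential enters only through the scattering length, set here to \(a=1\)), one can verify that evaluating (58) at \(k=2\sqrt{e}\kappa \), with e set to its leading-order value \(2\pi \rho a\) from (177), reproduces the right-hand side of (62) up to rounding errors:

```python
import numpy as np

def m_bogolyubov(k, rho, a=1.0):
    # Bogolyubov momentum distribution with the pseudopotential vhat = 4*pi*a, see (58)
    return -(1.0 / (2.0 * rho)) * (
        1.0 - (k**2 + 8.0 * np.pi * rho * a) / np.sqrt(k**4 + 16.0 * np.pi * k**2 * rho * a)
    )

def limit_rhs(kappa):
    # right-hand side of (62)
    return -0.5 * (1.0 - (kappa**2 + 1.0) / np.sqrt((kappa**2 + 1.0)**2 - 1.0))

rho, a = 1e-4, 1.0
e = 2.0 * np.pi * rho * a        # leading-order energy per particle, see (177)
kappa = np.linspace(0.1, 5.0, 50)
k = 2.0 * np.sqrt(e) * kappa     # rescaling (61)
# with this choice of e, rho * m_bogolyubov(2 sqrt(e) kappa) matches the limit in (62)
print(np.max(np.abs(rho * m_bogolyubov(k, rho, a) - limit_rhs(kappa))))  # prints a value at the level of rounding errors
```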

3 The Simplified Approach Without Translation Invariance, Proof of Theorem 1

3.1 Factorization

We will first compute \(f_i\) and \(u_i\) for \(i=3,4\) in Assumption 1.

3.1.1 Factorization of \(g_3\)

Lemma 1

Assumption 1 with \(i=2,3\) and (29)–(31) imply that

$$\begin{aligned} g_3(x,y,z)=g_1(x)g_1(y)g_1(z)(1-u_3(x,y))(1-u_3(x,z))(1-u_3(y,z))(1+O(V^{-2})) \end{aligned}$$
(63)

with

$$\begin{aligned} u_3(x,y):= & {} u_2(x,y)+\frac{w_3(x,y)}{V} \end{aligned}$$
(64)
$$\begin{aligned} w_3(x,y):= & {} (1-u_2(x,y))\int dz\ g_1(z)u_2(x,z)u_2(y,z). \end{aligned}$$
(65)

Proof

Using (31) in (25),

$$\begin{aligned} g_2(x_1,x_2)=W_3(x_1,x_2) \int \frac{dx_3}{V}\ W_3(x_1,x_3)W_3(x_2,x_3). \end{aligned}$$
(66)

1. We first expand to order \(V^{-1}\). By (27),

$$\begin{aligned} \int \frac{dz}{V}f_3^2(z)u_3(x,z)=O(V^{-1}) \end{aligned}$$
(67)

so, by (26),

$$\begin{aligned} g_2(x,y)=f_3^2(x)f_3^2(y)(1-u_3(x,y)) \left( \int \frac{dz}{V}\ f_3^2(z) +O(V^{-1})\right) . \end{aligned}$$
(68)

By (24),

$$\begin{aligned} g_1(x)g_1(y)(1-u_2(x,y))=f_3^2(x)f_3^2(y)(1-u_3(x,y))\left( \int \frac{dz}{V}\ f_3^2(z)+O(V^{-1})\right) .\nonumber \\ \end{aligned}$$
(69)

We take \(\int \frac{dy}{V}\cdot \) on both sides of this equation. Now, by (30),

$$\begin{aligned} g_1(x)\int \frac{dy}{V}\ g_1(y)(1-u_2(x,y))=g_1(x) \end{aligned}$$
(70)

so, by (29),

$$\begin{aligned} \int dy\ g_1(y)u_2(x,y)=0. \end{aligned}$$
(71)

Combining this with (67), we find

$$\begin{aligned} g_1(x)=f_3^2(x)\left( \left( \int \frac{dy}{V}f_3^2(y)\right) ^2+O(V^{-1})\right) \end{aligned}$$
(72)

and, integrating once more implies that \(\int \frac{dy}{V}f_3^2(y)=1+O(V^{-1})\). Therefore,

$$\begin{aligned} f_3^2(x)=g_1(x)(1+O(V^{-1})) \end{aligned}$$
(73)

and

$$\begin{aligned} u_3(x,y)=u_2(x,y)(1+O(V^{-1})). \end{aligned}$$
(74)

2. We push the expansion to order \(V^{-2}\): (66) is

$$\begin{aligned} g_2(x,y)=f_3^2(x)f_3^2(y)(1-u_3(x,y))\int \frac{dz}{V}f_3^2(z) \left( 1 -u_3(x,z)-u_3(y,z) +u_3(x,z)u_3(y,z) \right) . \end{aligned}$$
(75)

By (73)–(74) and (24),

$$\begin{aligned}{} & {} f_3^2(x)f_3^2(y)(1-u_3(x,y))\int \frac{dz}{V}f_3^2(z) =g_1(x)g_1(y)(1-u_2(x,y)) \nonumber \\{} & {} \quad \cdot \left( 1+\int \frac{dz}{V}\ (g_1(z)(u_2(x,z)+u_2(y,z)-u_2(x,z)u_2(y,z)))+O(V^{-2})\right) . \end{aligned}$$
(76)

Therefore, by (71),

$$\begin{aligned}{} & {} f_3^2(x)f_3^2(y)(1-u_3(x,y))\int \frac{dz}{V}f_3^2(z)=g_1(x)g_1(y)(1-u_2(x,y)) \nonumber \\{} & {} \quad \cdot \left( 1-\int \frac{dz}{V}g_1(z)u_2(x,z)u_2(y,z)+O(V^{-2})\right) . \end{aligned}$$
(77)

Now, let us apply \(\int \frac{dy}{V}\cdot \) to both sides of the equation. Note that, by (27),

$$\begin{aligned} \int \frac{dy}{V}\ g_1(y)u_2(x,y)\int \frac{dz}{V}g_1(z)u_2(x,z)u_2(y,z)=O(V^{-2}). \end{aligned}$$
(78)

Furthermore, by (71),

$$\begin{aligned} \int \frac{dy}{V}\ g_1(y)u_2(x,y)=0,\quad \int \frac{dy}{V}\ g_1(y)\int \frac{dz}{V}\ g_1(z)u_2(x,z)u_2(y,z)=0 \end{aligned}$$
(79)

and by (73) and (74),

$$\begin{aligned} \int \frac{dy}{V}\ f_3^2(y)u_3(x,y)=\int \frac{dy}{V}\ g_1(y)u_2(x,y)+O(V^{-2})=O(V^{-2}). \end{aligned}$$
(80)

We are thus left with

$$\begin{aligned} f_3^2(x)\left( \int \frac{dy}{V}\ f_3^2(y)\right) ^2 = g_1(x)(1+O(V^{-2})). \end{aligned}$$
(81)

Taking \(\int \frac{dx}{V}\cdot \), we thus find that

$$\begin{aligned} \left( \int \frac{dx}{V} f_3^2(x)\right) ^3=1+O(V^{-2}) \end{aligned}$$
(82)

and

$$\begin{aligned} f_3^2(x)=g_1(x)(1+O(V^{-2})). \end{aligned}$$
(83)

Therefore,

$$\begin{aligned} 1-u_3(x,y)=(1-u_2(x,y))\left( 1-\frac{1}{V}\int dz\ g_1(z)u_2(x,z)u_2(y,z)+O(V^{-2})\right) . \end{aligned}$$
(84)

\(\square \)

3.1.2 Factorization of \(g_4\)

Lemma 2

Assumption 1 and (29)–(32) imply that

$$\begin{aligned} g_4(x_1,x_2,x_3,x_4)= g_1(x_1)g_1(x_2)g_1(x_3)g_1(x_4) \left( \prod _{i<j}(1-u_4(x_i,x_j))\right) (1+O(V^{-2})) \end{aligned}$$
(85)

with

$$\begin{aligned} u_4(x,y):=u_2(x,y)+\frac{2w_3(x,y)}{V} \end{aligned}$$
(86)

where \(w_3\) is the same as in Lemma 1.

Proof

Using (32) in (25),

$$\begin{aligned} g_2(x_1,x_2)=W_4(x_1,x_2)\int \frac{dx_3dx_4}{V^2}\ W_4(x_1,x_3) W_4(x_1,x_4) W_4(x_2,x_3) W_4(x_2,x_4) W_4(x_3,x_4). \end{aligned}$$
(87)

1. We expand to order \(V^{-1}\). By (27),

$$\begin{aligned} \int \frac{dz}{V}f_4^3(z)u_4(x,z)=O(V^{-1}) \end{aligned}$$
(88)

so by (26),

$$\begin{aligned} g_2(x,y)=f_4^3(x)f_4^3(y)(1-u_4(x,y))\left( \int \frac{dzdt}{V^2}f_4^3(z)f_4^3(t)+O(V^{-1})\right) . \end{aligned}$$
(89)

By (24),

$$\begin{aligned} g_1(x)g_1(y)(1-u_2(x,y))= f_4^3(x)f_4^3(y)(1-u_4(x,y))\left( \left( \int \frac{dz}{V}f_4^3(z)\right) ^2+O(V^{-1})\right) . \end{aligned}$$
(90)

Applying \(\int \frac{dy}{V}\cdot \) to both sides of the equation, using (71) and (88),

$$\begin{aligned} g_1(x)=f_4(x)^3\left( \left( \int \frac{dy}{V}\ f_4^3(y)\right) ^3+O(V^{-1})\right) . \end{aligned}$$
(91)

Integrating once more, we have \(\int \frac{dy}{V}f_4^3(y)=1+O(V^{-1})\) and

$$\begin{aligned} f_4^3(x)=g_1(x)(1+O(V^{-1})). \end{aligned}$$
(92)

Therefore,

$$\begin{aligned} u_4(x,y)=u_2(x,y)(1+O(V^{-1})). \end{aligned}$$
(93)

2. We push the expansion to order \(V^{-2}\): by (27),

$$\begin{aligned}{} & {} \int \frac{dzdt}{V^2}u_4(x,z)u_4(y,t)=O(V^{-2}),\quad \int \frac{dzdt}{V^2}u_4(x,z)u_4(z,t)=O(V^{-2}) \end{aligned}$$
(94)
$$\begin{aligned}{} & {} \int \frac{dzdt}{V^2}u_4(x,z)u_4(x,t)=O(V^{-2}) \end{aligned}$$
(95)

so

$$\begin{aligned} g_2(x,y)= & {} f_4^3(x)f_4^3(y)(1-u_4(x,y)) \left( \int \frac{dzdt}{V^2} f_4^3(z)f_4^3(t) \right. \nonumber \\{} & {} \left. + \int \frac{dzdt}{V^2} g_1(z)g_1(t)(-2u_4(x,z)-2u_4(y,z)-u_4(z,t)+2u_4(x,z)u_4(y,z)) +O(V^{-2}) \right) . \end{aligned}$$
(96)

By (92), (93), and (24)

$$\begin{aligned}{} & {} f_4^3(x)f_4^3(y)(1-u_4(x,y))\left( \int \frac{dz}{V}\ f_4^3(z)\right) ^2 = g_1(x)g_1(y)(1-u_2(x,y)) \nonumber \\{} & {} \qquad \cdot \left( 1+ \int \frac{dzdt}{V^2}\ g_1(z)g_1(t)(2u_2(x,z)+2u_2(y,z)+u_2(z,t)-2u_2(x,z)u_2(y,z)) +O(V^{-2}) \right) .\qquad \qquad \end{aligned}$$
(97)

By (71),

$$\begin{aligned}{} & {} f_4^3(x)f_4^3(y)(1-u_4(x,y))\left( \int \frac{dz}{V}\ f_4^3(z)\right) ^2\nonumber \\{} & {} \quad = g_1(x)g_1(y)(1-u_2(x,y))\left( 1-2\int \frac{dz}{V}g_1(z)u_2(x,z)u_2(y,z)+O(V^{-2})\right) .\qquad \qquad \end{aligned}$$
(98)

We apply \(\int \frac{dy}{V}\cdot \) to both sides of the equation. By (78)-(80), we find

$$\begin{aligned} f_4^3(x)\left( \int \frac{dy}{V}f_4^3(y)\right) ^3=g_1(x)(1+O(V^{-2})). \end{aligned}$$
(99)

Taking \(\int \frac{dx}{V}\cdot \), we find that

$$\begin{aligned} \left( \int \frac{dx}{V}\ f_4^3(x)\right) ^4=1+O(V^{-2}) \end{aligned}$$
(100)

and

$$\begin{aligned} f_4^3(x)=g_1(x)(1+O(V^{-2})). \end{aligned}$$
(101)

Therefore,

$$\begin{aligned} 1-u_4(x,y)=(1-u_2(x,y))\left( 1-\frac{2}{V}\int dz\ g_1(z)u_2(x,z)u_2(y,z)+O(V^{-2})\right) . \end{aligned}$$
(102)

\(\square \)

3.2 Consequences of the Factorization

Proof of Theorem 1

We rewrite (10), (14) and (19) using Lemmas 1 and 2.

1. We start with (10): by (5) and (24),

$$\begin{aligned} G_0^{(2)}= \frac{N(N-1)}{2V^2}\int dxdy\ v(x,y)g_1(x)g_1(y)(1-u_2(x,y))+O(V^{-1}) \end{aligned}$$
(103)

so

$$\begin{aligned} E_0= & {} \frac{N(N-1)}{2V^2}\int dxdy\ v(x,y)g_1(x)g_1(y)(1-u_2(x,y)) \nonumber \\{} & {} +\frac{N}{V}\int dx\ \varpi g_1(x) +B_0 +O(V^{-1}). \end{aligned}$$
(104)

2. We now turn to (14): by (5) and (24),

$$\begin{aligned} G_1^{(2)}(x)=\frac{N}{V}g_1(x)\left( \int dy\ v(x,y)g_1(y)(1-u_2(x,y))+O(V^{-2})\right) \end{aligned}$$
(105)

and by Lemma 1,

$$\begin{aligned} G_1^{(3)}(x){} & {} = g_1(x)\left( \frac{N^2}{2V^2}\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(x,y))(1-u_2(x,z)) \right. \nonumber \\{} & {} \quad \left. (1-u_3(y,z))- \frac{3N}{2V^2}\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(y,z)) +O(V^{-1})\right) \nonumber \\ \end{aligned}$$
(106)

(we used (64) to write \(u_3=u_2+O(V^{-1})\); this works fine for \(u_3(x,y)\) and \(u_3(x,z)\) because the integrals over y and z are controlled by \(v(y,z)w_3(x,y)\) and \(v(y,z)w_3(x,z)\) using (5) and (27); in the first term, it does not work for \(u_3(y,z)\), as \(v(y,z)w_3(y,z)\) can only control one of the integrals, and not both; the second term has an extra \(V^{-1}\) that lets us replace \(u_3\) by \(u_2\)) and by (27) and (6),

$$\begin{aligned} F_1^{(2)}(x)= g_1(x)\left( \frac{N}{V}\int dy\ \varpi _y(g_1(y)(1-u_2(x,y))) -\frac{1}{V}\int dy\ \varpi g_1(y) +O(V^{-1}) \right) . \end{aligned}$$
(107)

The first term in \(G_1^{(3)}\) is of order V:

$$\begin{aligned}{} & {} \frac{N^2}{2V^2}\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(x,y))(1-u_2(x,z))(1-u_3(y,z))\nonumber \\{} & {} \quad = \frac{N^2}{2V^2}\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(y,z)) \nonumber \\{} & {} \qquad - \frac{N^2}{2V^3}\int dydz\ v(y,z)g_1(y)g_1(z)w_3(y,z)+\nonumber \\{} & {} \qquad +\frac{N^2}{2V^2}\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(y,z))(-u_2(x,y)\nonumber \\{} & {} \qquad -u_2(x,z)+u_2(x,y)u_2(x,z)) +O(V^{-1}) \end{aligned}$$
(108)

in which the only term of order V is the first one, and is equal to the first term of order V in \(E_0\), and thus cancels out. There is a similar cancellation between the second term of order V in \(F_1^{(2)}\) and \(E_0\). All in all,

$$\begin{aligned} \left( -\frac{\varDelta }{2} +\varpi +{\bar{G}}^{(2)}_1(x) +{\bar{G}}^{(3)}_1(x) +{\bar{F}}^{(2)}_1(x) +{\bar{E}}_0 -B_0 \right) g_1(x) +B_1(x) =g_1(x)O(V^{-1}) \end{aligned}$$
(109)

with, recalling \(\rho :=N/V\),

$$\begin{aligned} {\bar{G}}_1^{(2)}(x):=\rho \int dy\ v(x,y)g_1(y)(1-u_2(x,y)) \end{aligned}$$
(110)

and using (65),

$$\begin{aligned} {\bar{G}}_1^{(3)}(x){} & {} := -\frac{\rho }{2}\int \frac{dydz}{V}\ v(y,z)g_1(y)g_1(z)(1-u_2(y,z))\left( 3+\rho \int dt\ g_1(t)u_2(y,t)u_2(z,t)\right) +\nonumber \\{} & {} \quad + \frac{\rho ^2}{2}\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(y,z))(-u_2(x,y)-u_2(x,z)+u_2(x,y)u_2(x,z)) \end{aligned}$$
(111)
$$\begin{aligned} {\bar{F}}_1^{(2)}(x){} & {} := -\rho \int dy\ \varpi _y(g_1(y)u_2(x,y)) -\int \frac{dy}{V}\ \varpi g_1(y) \end{aligned}$$
(112)
$$\begin{aligned} {\bar{E}}_0{} & {} := \frac{\rho }{2}\int \frac{dxdy}{V}\ v(x,y)g_1(x)g_1(y)(1-u_2(x,y)). \end{aligned}$$
(113)

Rewriting this using (35)–(38), we find (33) with

$$\begin{aligned} \varSigma _1(x):=B_1(x)-B_0g_1(x)+O(V^{-1}). \end{aligned}$$
(114)

3. Finally, we rewrite (19): by (5) and Lemma 1,

$$\begin{aligned} G_2^{(3)}(x,y){} & {} = \frac{N}{V}g_1(x)g_1(y)(1-u_2(x,y)) \nonumber \\{} & {} \quad \cdot \left( \int dz\ (v(x,z)+v(y,z))g_1(z)(1-u_2(x,z))(1-u_2(y,z))+O(V^{-1})\right) \end{aligned}$$
(115)

and by Lemma 2,

$$\begin{aligned} G^{(4)}_2(x,y){} & {} = g_1(x)g_1(y)\left( \frac{N^2}{2V^2}(1-u_4(x,y))\right. \nonumber \\{} & {} \quad \left. \int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_4(z,t))\Pi (x,y,z,t) \right. \nonumber \\{} & {} \qquad \left. - \frac{5N}{2V^2}(1-u_2(x,y))\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_2(z,t)) +O(V^{-1}) \right) \end{aligned}$$
(116)
$$\begin{aligned} \Pi (x,y,z,t){} & {} := (1-u_2(x,z))(1-u_2(x,t))(1-u_2(y,z))(1-u_2(y,t)) \end{aligned}$$
(117)

and by (27) and (6),

$$\begin{aligned} F^{(3)}_2(x,y){} & {} = g_1(x)g_1(y)\left( \frac{N}{V}(1-u_3(x,y))\int dz\ \varpi _z(g_1(z)(1-u_2(x,z))(1-u_2(y,z))) \right. \nonumber \\{} & {} \quad \left. - \frac{2}{V}(1-u_2(x,y))\int dz\ \varpi g_1(z) +O(V^{-1}) \right) . \end{aligned}$$
(118)

The first term in \(G_2^{(4)}\) is of order V: by (86),

$$\begin{aligned}{} & {} \frac{N^2}{2V^2}(1-u_4(x,y))\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_4(z,t)) \Pi (x,y,z,t)\nonumber \\{} & {} \quad = \frac{N^2}{2V^2}(1-u_2(x,y))\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_2(z,t)) \nonumber \\{} & {} \qquad - \frac{N^2}{V^3}w_3(x,y)\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_2(z,t))\nonumber \\{} & {} \qquad -\frac{N^2}{V^3}(1-u_2(x,y))\int dzdt\ v(z,t)g_1(z)g_1(t)w_3(z,t)\nonumber \\{} & {} \qquad + \frac{N^2}{2V^2}(1-u_2(x,y))\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_2(z,t)) \left( \Pi (x,y,z,t)-1\right) +O(V^{-1}) \end{aligned}$$
(119)

in which the only term of order V is the first one, and is equal to the term of order V in \(E_0\), and thus cancels out. There is a similar cancellation between the term of order V in \(F_2^{(3)}\) and \(E_0\). All in all,

$$\begin{aligned}{} & {} \left( -\frac{1}{2}(\varDelta _x+\varDelta _y) +v(x,y) +\varpi _x+\varpi _y +\bar{G}^{(3)}_2(x,y) +{\bar{G}}^{(4)}_2(x,y) +{\bar{F}}^{(3)}_2(x,y) +{\bar{E}}_0 -B_0 \right) \nonumber \\{} & {} \quad \cdot g_1(x)g_1(y)(1-u_2(x,y)) +B_2(x,y) = g_1(x)g_1(y)O(V^{-1}) \end{aligned}$$
(120)

with

$$\begin{aligned} {\bar{G}}_2^{(3)}(x,y):= \rho \int dz\ (v(x,z)+v(y,z))g_1(z)(1-u_2(x,z))(1-u_2(y,z)) \end{aligned}$$
(121)

and by (65),

(122)
(123)
(124)

4. Expanding out \(\Pi \), see (117), we find (34) with

(125)

and

$$\begin{aligned} \varSigma _2(x,y):=B_2(x,y)-B_0g_1(x)g_1(y)(1-u_2(x,y))+O(V^{-1}). \end{aligned}$$
(126)

Using (37) and (38), (125) becomes (41).

5. Finally, (43) follows from (10) with

$$\begin{aligned} \varSigma _0:=B_0+O(V^{-1}). \end{aligned}$$
(127)

\(\square \)

3.3 Sanity Check, Proof of Corollary 1

Proof of Corollary 1

Assuming the translation invariance of the solution, \(g_1(x)\) is constant. By (29),

$$\begin{aligned} g_1(x)=1. \end{aligned}$$
(128)

Furthermore, \(\varpi \equiv 0\). We then have

$$\begin{aligned} {\bar{S}}(x,y)=S(x-y),\quad {\bar{K}}(x,y)=K(x-y),\quad \bar{L}(x,y)=L(x-y) \end{aligned}$$
(129)

(see (45) and (46)). Furthermore,

$$\begin{aligned}{} & {} {\mathcal {E}}(x)\equiv {\mathcal {E}}(y)\equiv \left<\mathcal E\right>=\frac{\rho }{2}\int dy\ S(y) \end{aligned}$$
(130)
$$\begin{aligned}{} & {} {\bar{A}}(x)\equiv {\bar{A}}(y)\equiv \left<{\bar{A}}\right>=\rho ^2 S*u*u(0) \end{aligned}$$
(131)
$$\begin{aligned}{} & {} {\bar{C}}(x)\equiv {\bar{C}}(y) =2\rho ^2\int dz\ u(z)\int dt\ S(t) \end{aligned}$$
(132)

which vanishes by (30). Thus,

$$\begin{aligned} {\bar{R}}_2(x,y)\equiv 0. \end{aligned}$$
(133)

We conclude by taking the thermodynamic limit.

4 The Momentum Distribution

4.1 Computation of the Momentum Distribution, Proof of Theorem 2

Proof of Theorem 2

We use Theorem 1 with \(\varpi \) as in (49). Note that, by (49),

$$\begin{aligned} \int dx\ \varpi f(x)=0 \end{aligned}$$
(134)

which trivially satisfies (6).

1. We change variables in (34) to

$$\begin{aligned} \xi =\frac{x+y}{2},\quad \zeta =x-y \end{aligned}$$
(135)

and find

$$\begin{aligned}{} & {} \left( -\frac{1}{4}\varDelta _\xi -\varDelta _\zeta +v(\zeta ) -2\rho \bar{K}(\xi +{\textstyle \frac{\zeta }{2}},\xi -{\textstyle \frac{\zeta }{2}}) +\rho ^2{\bar{L}}(\xi +{\textstyle \frac{\zeta }{2}},\xi -{\textstyle \frac{\zeta }{2}}) +{\bar{R}}_2(\xi +{\textstyle \frac{\zeta }{2}},\xi -{\textstyle \frac{\zeta }{2}}) \right) \nonumber \\{} & {} \quad \cdot g_1(\xi +{\textstyle \frac{\zeta }{2}})g_1(\xi -{\textstyle \frac{\zeta }{2}}) (1-u_2(\xi +{\textstyle \frac{\zeta }{2}},\xi -{\textstyle \frac{\zeta }{2}})) =-\varSigma _2. \end{aligned}$$
(136)
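The coefficients of the Laplacians in (136) follow from the chain rule for the change of variables (135): since \(\partial _x=\frac{1}{2}\partial _\xi +\partial _\zeta \) and \(\partial _y=\frac{1}{2}\partial _\xi -\partial _\zeta \),

$$\begin{aligned} \varDelta _x+\varDelta _y=\frac{1}{2}\varDelta _\xi +2\varDelta _\zeta ,\qquad -\frac{1}{2}(\varDelta _x+\varDelta _y)=-\frac{1}{4}\varDelta _\xi -\varDelta _\zeta . \end{aligned}$$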

In addition, by (43),

$$\begin{aligned} e=\frac{\rho }{2}\int \frac{d\xi d\zeta }{V}\ g_1(\xi +{\textstyle \frac{\zeta }{2}})g_1(\xi -{\textstyle \frac{\zeta }{2}})v(\zeta )(1-u_2(\xi +{\textstyle \frac{\zeta }{2}},\xi -{\textstyle \frac{\zeta }{2}})) +\int \frac{dx}{V}\ \varpi g_1(x) +\varSigma _1. \end{aligned}$$
(137)

We expand in powers of \(\epsilon \):

$$\begin{aligned} g_1(x)=1+\epsilon g_1^{(1)}(x)+O(\epsilon ^2),\quad u_2(\xi +{\textstyle \frac{\zeta }{2}},\xi -{\textstyle \frac{\zeta }{2}})=u_2^{(0)}(\zeta )+\epsilon u_2^{(1)}(\xi +{\textstyle \frac{\zeta }{2}},\xi -{\textstyle \frac{\zeta }{2}})+O(\epsilon ^2) \end{aligned}$$
(138)

in which we used the fact that, at \(\epsilon =0\), \(g_1(x)|_{\epsilon =0}=1\), see (128). In particular, the terms of order 0 in \(\epsilon \) are independent of \(\xi \). Note, in addition, that, by (29),

$$\begin{aligned} \int \frac{dx}{V}\ g_1^{(1)}(x)=0. \end{aligned}$$
(139)

2. The trick of this proof is to take the average with respect to \(\xi \) on both sides of (136). Since we take periodic boundary conditions, the \(\varDelta _\xi \) term drops out. We will only focus on the first order contribution in \(\epsilon \), and, as was mentioned above, terms of order 0 are independent of \(\xi \). Thus, the average over \(\xi \) will always apply to a single term, either \(g_1^{(1)}\) or \(u_2^{(1)}\). By (139), the terms involving \(g_1^{(1)}\) have zero average. We can therefore replace \(g_1\) by 1. (The previous argument does not apply to the terms in which \(\varDelta _\zeta \) acts on \(g_1\), but these terms have a vanishing average as well because of the periodic boundary conditions.) In particular, by (30) and (24),

$$\begin{aligned} \int \frac{d\xi }{V}\ (1-u_2^{(1)}(\xi +{\textstyle \frac{\zeta }{2}},\xi -{\textstyle \frac{\zeta }{2}})) =1 \end{aligned}$$
(140)

so

$$\begin{aligned} \int \frac{d\xi }{V}\ u_2^{(1)}(\xi +{\textstyle \frac{\zeta }{2}},\xi -{\textstyle \frac{\zeta }{2}}) =0 \end{aligned}$$
(141)

and thus, we can replace \(u_2\) with \(u_2^{(0)}\). Thus, using the translation invariant computation detailed in Sect. 3.3, we find that the average of (136) is

$$\begin{aligned} (-\varDelta +v(\zeta )-2\rho K(\zeta )+\rho ^2 L(\zeta ))(1-u_2^{(0)}(\zeta ))+\epsilon F(\zeta )+O(\epsilon ^2)+\varSigma _2=0 \end{aligned}$$
(142)

where K and L are defined in (45) and  (46) and F comes from the contribution to \({\bar{R}}_2\) of \(\varpi \), see (41):

$$\begin{aligned} F(\zeta ){} & {} :=\epsilon ^{-1}\int \frac{d\xi }{V}\ \left( \varpi _x+\varpi _y-2\left<\varpi \right> +\rho \int dz\ \varpi _z(u_2^{(0)}(\xi +{\textstyle \frac{\zeta }{2}}-z)u_2^{(0)}(\xi -{\textstyle \frac{\zeta }{2}}-z)) \right. \nonumber \\{} & {} \quad \left. - \rho \int dz\ \varpi _zu_2^{(0)}(\xi +{\textstyle \frac{\zeta }{2}}-z) -\rho \int dz\ \varpi _zu_2^{(0)}(\xi -{\textstyle \frac{\zeta }{2}}-z) \right) (1-u_2^{(0)}(\zeta )). \end{aligned}$$
(143)

Similarly, (137) is

$$\begin{aligned} e=\frac{\rho }{2}\int d\zeta \ v(\zeta )(1-u_2^{(0)}(\zeta )) +\int \frac{dx}{V}\ \varpi g_1(x) +\varSigma _1 +O(\epsilon ^2). \end{aligned}$$
(144)

3. Furthermore, by (49),

$$\begin{aligned} \int dz\ \varpi _z f(z)=0 \end{aligned}$$
(145)

for any integrable f, so

$$\begin{aligned} F(\zeta )=\epsilon ^{-1}\int \frac{d\xi }{V}\ \left( \varpi _x+\varpi _y\right) (1-u_2^{(0)}(\zeta )) \end{aligned}$$
(146)

and

$$\begin{aligned} e=\frac{\rho }{2}\int d\zeta \ v(\zeta )(1-u_2^{(0)}(\zeta )) +\varSigma _1 +O(\epsilon ^2). \end{aligned}$$
(147)

Now,

$$\begin{aligned} \varpi _x f(x-y) = \epsilon e^{ikx} \int dz\ e^{-ikz}f(z-y) \end{aligned}$$
(148)

so

$$\begin{aligned} \varpi _x f(\zeta ) = \epsilon e^{ik(\xi +{\textstyle \frac{\zeta }{2}})} \int dz\ e^{-ik(z+(\xi -{\textstyle \frac{\zeta }{2}}))}f(z) = \epsilon e^{ik\zeta } \int dz\ e^{-ikz}f(z) =\epsilon e^{ik\zeta }{\hat{f}}(-k). \end{aligned}$$
(149)

Similarly,

$$\begin{aligned} \varpi _y f(\zeta ) =\epsilon e^{-ik\zeta }{\hat{f}}(-k). \end{aligned}$$
(150)

Thus

$$\begin{aligned} F(\zeta )=2\cos (k\zeta )(\delta (k)-{\hat{u}}_2^{(0)}(-k)). \end{aligned}$$
(151)

Since \(k\ne 0\), the \(\delta \) function drops out. We conclude the proof by combining (142), (147) and (151) and taking the thermodynamic limit.

4.2 The Simple Equation and Bogolyubov Theory, Proof of Theorem 3

Proof of Theorem 3

1. We differentiate (60) with respect to \(\epsilon \) and take \(\epsilon =0\):

$$\begin{aligned} (-\varDelta +v+4e(1-\rho u*))\partial _\epsilon u=-4(\partial _\epsilon e)u+2\rho (\partial _\epsilon e)u*u+F. \end{aligned}$$
(152)

Let

$$\begin{aligned} {\mathfrak {K}}_e:=(-\varDelta +v+4e(1-\rho u*))^{-1} \end{aligned}$$
(153)

(this operator was introduced and studied in detail in [9]). We apply \({\mathfrak {K}}_e\) to both sides, take the scalar product with \(-\rho v/2\), and find

$$\begin{aligned} \partial _\epsilon e=\rho \partial _\epsilon e\int dx\ v(x){\mathfrak {K}}_e(2u(x)-\rho u*u(x)) -\frac{\rho }{2}\int dx\ v(x){\mathfrak {K}}_eF(x) \end{aligned}$$
(154)
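Here we used that, by the definition of e in (60), the scalar product with \(-\frac{\rho }{2}v\) produces the \(\epsilon \)-derivative of e on the left-hand side:

$$\begin{aligned} \partial _\epsilon e=-\frac{\rho }{2}\int dx\ v(x)\,\partial _\epsilon u(x). \end{aligned}$$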

and so, using (59),

$$\begin{aligned} {\mathcal {M}}^{(\textrm{simpleq})}(k)=\partial _\epsilon e =-\frac{\frac{\rho }{2}\int dx\ v(x){\mathfrak {K}}_eF(x)}{1-\rho \int dx\ v(x){\mathfrak {K}}_e(2u(x)-\rho u*u(x))} \end{aligned}$$
(155)

and, by (55),

$$\begin{aligned} {\mathcal {M}}^{(\textrm{simpleq})}(k) =\rho \frac{{\hat{u}}(k)\int dx\ v(x){\mathfrak {K}}_e\cos (kx)}{1-\rho \int dx\ v(x)\mathfrak K_e(2u(x)-\rho u*u(x))}. \end{aligned}$$
(156)

Note that

$$\begin{aligned} \int \frac{dk}{(2\pi )^3}{\mathcal {M}}^{(\textrm{simpleq})}(k) = \frac{\rho \int dx\ v(x){\mathfrak {K}}_e u(x)}{1-\rho \int dx\ v(x){\mathfrak {K}}_e(2u(x)-\rho u*u(x))} \end{aligned}$$
(157)

which is the expression for the uncondensed fraction for the simple equation [10, (38)].

2. By [9, (5.8), (5.27)],

$$\begin{aligned} {\mathcal {M}}^{(\textrm{simpleq})}(k)=\rho \left( {\hat{u}}(k)\int dx\ v(x){\mathfrak {K}}_e\cos (kx)\right) (1+O(\rho e^{-\frac{1}{2}})). \end{aligned}$$
(158)

Furthermore, by the resolvent identity,

$$\begin{aligned} {\mathfrak {K}}_e\cos (kx) = \xi -{\mathfrak {K}}_e(v\xi ),\quad \xi :={\mathfrak {Y}}_e(\cos (kx)):=(-\varDelta +4e(1-\rho u*))^{-1}\cos (kx) \end{aligned}$$
(159)

in terms of which, using the self-adjointness of \({\mathfrak {K}}_e\),

$$\begin{aligned} {\mathcal {M}}^{(\textrm{simpleq})}(k)=\rho {\hat{u}}(k)\left( \int dx\ v(x)\xi (x) - \int dx\ {\mathfrak {K}}_ev(x)(v(x)\xi (x)) \right) . \end{aligned}$$
(160)

3. Now, taking the Fourier transform,

$$\begin{aligned} {\hat{\xi }}(q)\equiv \int dx\ e^{iqx}\xi (x)=\frac{(2\pi )^3}{2}\frac{\delta (k-q)+\delta (k+q)}{q^2+4e(1-\rho {\hat{u}}(q))} \end{aligned}$$
(161)

and so

$$\begin{aligned} \int dx\ v(x)\xi (x) = \int \frac{dq}{(2\pi )^3}{\hat{v}}(q){\hat{\xi }}(q) = \frac{{\hat{v}}(k)}{k^2+4e(1-\rho {\hat{u}}(k))} \end{aligned}$$
(162)

and thus

$$\begin{aligned} \rho {\hat{u}}(k)\int dx\ v(x)\xi = \rho {\hat{v}}(k)\frac{{\hat{u}}(k)}{k^2+4e(1-\rho {\hat{u}}(k))}. \end{aligned}$$
(163)

We recall [8, (4.25)]:

$$\begin{aligned} \rho {\hat{u}}(k)=\frac{k^2}{4e}+1-\sqrt{\left( \frac{k^2}{4e}+1\right) ^2-{\hat{S}}(k)} \end{aligned}$$
(164)

and, by [8, (4.24)],

$$\begin{aligned} {\hat{S}}(0)=1. \end{aligned}$$
(165)
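With the rescaling (61), and assuming (as is implicit in the estimates below) that \({\hat{S}}(2\sqrt{e}\kappa )\rightarrow {\hat{S}}(0)=1\) as \(e\rightarrow 0\), (164) and (165) give

$$\begin{aligned} \rho {\hat{u}}(2\sqrt{e}\kappa )=\kappa ^2+1-\sqrt{(\kappa ^2+1)^2-{\hat{S}}(2\sqrt{e}\kappa )} \mathop {\longrightarrow }_{e\rightarrow 0} \kappa ^2+1-\sqrt{(\kappa ^2+1)^2-1}. \end{aligned}$$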

Therefore, if we rescale

$$\begin{aligned} k=2\sqrt{e}\kappa \end{aligned}$$
(166)

we find

$$\begin{aligned} \rho {\hat{u}}(k)\int dx\ v(x)\xi = \frac{{\hat{v}}(0)}{4e}\frac{\kappa ^2+1-\sqrt{(\kappa ^2+1)^2-1}}{\sqrt{(\kappa ^2+1)^2-1}} +o(e^{-1}). \end{aligned}$$
(167)

4. Now,

$$\begin{aligned} \int dx\ e^{iqx}v(x)\xi (x) = \frac{1}{2}\frac{1}{k^2+4e(1-\rho {\hat{u}}(k))} \int dp\ {\hat{v}}(q-p)(\delta (k-p)+\delta (k+p)) \end{aligned}$$
(168)

so

$$\begin{aligned} \int dx\ e^{iqx}v(x)\xi (x) = \frac{1}{2}\frac{{\hat{v}}(q-k)+{\hat{v}}(q+k)}{k^2+4e(1-\rho {\hat{u}}(k))}. \end{aligned}$$
(169)

Therefore,

$$\begin{aligned} \int dx\ {\mathfrak {K}}_ev(x)(v\xi ) = \frac{1}{2}\frac{1}{k^2+4e(1-\rho {\hat{u}}(k))} \int \frac{dq}{(2\pi )^3}\ \widehat{{\mathfrak {K}}_e v}(q) ({\hat{v}}(k-q)+{\hat{v}}(k+q)) \end{aligned}$$
(170)

which, using the \(q\mapsto -q\) symmetry, is

$$\begin{aligned} \int dx\ {\mathfrak {K}}_ev(x)(v\xi ) = \frac{1}{k^2+4e(1-\rho {\hat{u}}(k))} \int \frac{dq}{(2\pi )^3}\ \widehat{{\mathfrak {K}}_e v}(q) {\hat{v}}(k+q) \end{aligned}$$
(171)

that is,

$$\begin{aligned} \rho {\hat{u}}(k)\int dx\ {\mathfrak {K}}_ev(x)(v\xi ) = \frac{\rho {\hat{u}}(k)}{k^2+4e(1-\rho {\hat{u}}(k))} \int dx\ e^{-ikx} {\mathfrak {K}}_e v(x) v(x) \end{aligned}$$
(172)

in which we rescale

$$\begin{aligned} k=2\sqrt{e}\kappa \end{aligned}$$
(173)

so, by (164)-(165),

$$\begin{aligned} \rho {\hat{u}}(k)\int dx\ {\mathfrak {K}}_ev(x)(v\xi ) = \frac{\kappa ^2+1-\sqrt{(\kappa ^2+1)^2-1}}{4e\sqrt{(\kappa ^2+1)^2-1}} (1+o(1))\int dx\ e^{-i2\sqrt{e}\kappa x} v(x){\mathfrak {K}}_e v(x). \end{aligned}$$
(174)

Therefore, by dominated convergence (using the argument above [9, (5.23)] and the fact that \({\mathfrak {K}}_e\) is positivity preserving), and by [9, (5.23)-(5.24)],

$$\begin{aligned} \rho {\hat{u}}(k)\int dx\ {\mathfrak {K}}_ev(x)(v\xi ) = \frac{\kappa ^2+1-\sqrt{(\kappa ^2+1)^2-1}}{4e\sqrt{(\kappa ^2+1)^2-1}} (-4\pi a+{\hat{v}}(0))+o(e^{-1}). \end{aligned}$$
(175)

5. Inserting (167) and (175) into (160), we find

$$\begin{aligned} {\mathcal {M}}^{(\textrm{simpleq})}(k) = \frac{\pi a}{e}\frac{\kappa ^2+1-\sqrt{(\kappa ^2+1)^2-1}}{\sqrt{(\kappa ^2+1)^2-1}} +o(e^{-1}). \end{aligned}$$
(176)

Finally, we recall [8, (1.23)]:

$$\begin{aligned} e=2\pi \rho a(1+O(\sqrt{\rho })) \end{aligned}$$
(177)

so

$$\begin{aligned} {\mathcal {M}}^{(\textrm{simpleq})}(k) = \frac{1}{2}\frac{\kappa ^2+1-\sqrt{(\kappa ^2+1)^2-1}}{\sqrt{(\kappa ^2+1)^2-1}} +o(e^{-1}). \end{aligned}$$
(178)

6. Finally, by (58)

$$\begin{aligned} {\mathcal {M}}^{(\textrm{Bogolyubov})}(2\sqrt{e}\kappa )=-\frac{1}{2\rho }\left( 1-\frac{\frac{4e}{8\pi \rho a}\kappa ^2+1}{\sqrt{\frac{e^2}{4\pi ^2\rho ^2a^2}\kappa ^4+\frac{e}{\pi \rho a} \kappa ^2}}\right) \end{aligned}$$
(179)

so by (177),

$$\begin{aligned} {\mathcal {M}}^{(\textrm{Bogolyubov})}(2\sqrt{e}\kappa )=-\frac{1}{2\rho }\left( 1-\frac{\kappa ^2+1}{\sqrt{\kappa ^4+2\kappa ^2}}\right) . \end{aligned}$$
(180)

This, together with (178) and the identity \(\kappa ^4+2\kappa ^2=(\kappa ^2+1)^2-1\), implies (62).