1 Drift-diffusion system in higher dimensions

We consider the finite time blow-up of the solution to the drift-diffusion system in higher spatial dimensions. Let u be a solution to the Cauchy problem for the drift-diffusion system:

$$\begin{aligned} \left\{ \begin{aligned}&\partial _t u - \Delta u + \nabla \cdot (u\nabla \psi )=0,&\qquad t>0,\ x\in {\mathbb {R}}^n, \\&\quad -\Delta \psi = u,&\qquad t>0,\ x\in {\mathbb {R}}^n, \\&\ u(0, x)=u_{0}(x),&\qquad x\in {\mathbb {R}}^n, \end{aligned} \right. \end{aligned}$$
(1.1)

where \(u = u(t, x)\) denotes the particle density and \(\psi \) denotes the potential of the particle field. The system (1.1) is relevant to a model of self-interacting particles (see Biler–Nadzieja [7], see also Biler [2]), to semi-conductor device simulation (see [15, 21, 32]), and to a model for the aggregation of mold known as the Keller–Segel system (see [22]). In some models, the equation can be derived as a singular limit of a fluid mechanical approximation of gravitating gaseous stars (cf. [14, 24]). If the solution has enough regularity and integrability, then the problem exhibits an instability of the solution; namely, under certain conditions on the initial data, the solution blows up in finite time. Blow-up results can be found in [2, 6, 13, 18, 27, 28, 33, 34, 37, 43, 44]. On the other hand, global existence and stability are known in the two dimensional case ([20, 25, 26, 35, 36, 37, 38]), and this property is naturally inherited by the system (1.1).

Since u denotes the density of particles, it is natural to consider non-negative solutions; in this case, the \(L^1\) norm of the solution u(t) is preserved in time, which corresponds to the mass conservation law. In this setting, whether the solution exists globally in time or blows up in a finite time is a basic question. If the initial data is large and decays fast at spatial infinity, the solution blows up in a finite time. In this paper, we discuss such instability of the solution under weaker assumptions, allowing initial data that decay more slowly at spatial infinity.

The system (1.1) involves the Poisson equation, and the solution u is influenced by a non-local effect through the Green’s function of the Poisson equation. Hence, the large time behavior of the solution depends largely on the behavior of the solution at spatial infinity, and the weight condition on the data may have a subtle effect on the large time behavior. Therefore, eliminating or relaxing the weight condition is an interesting problem for (1.1).

The local existence of the solution, by both the semi-group approach and the energy method, is now well established (cf. [27]). To state the results, we define some function spaces: for \(s > 0\) and \(1 \le p \le \infty \),

$$\begin{aligned} L^p_s({\mathbb {R}}^n) \equiv \big \{f \in L^p_\textrm{loc}({\mathbb {R}}^n); \ \langle x\rangle ^s f(x) \in L^p({\mathbb {R}}^n)\big \}, \end{aligned}$$

where \(\langle \cdot \rangle = (1 + |\cdot |^2)^{1/2}\). Noting that \(L^p_s({\mathbb {R}}^n) \subset L^1({\mathbb {R}}^n)\) if \(s > n(1 - \frac{1}{p})\), we recall the existence and uniqueness of the solution for the n dimensional drift-diffusion equation in the critical space \(L^{\frac{n}{2}}({\mathbb {R}}^n)\).

Definition. Let \(n \ge 3\) and \(1 \le p < \infty \). For \(u_0 \in L^p({\mathbb {R}}^n)\), we call u a mild solution to the system (1.1) if u(t) solves the integral equation

$$\begin{aligned} u(t) = e^{t \Delta } u_0 - \int _0^t e^{(t - s) \Delta } \nabla \cdot (u(s) \nabla \psi (s))\, ds \end{aligned}$$

in \(C([0, T); L^p({\mathbb {R}}^n))\), where

$$\begin{aligned} \begin{aligned} \psi = (- \Delta )^{- 1} u \equiv \frac{1}{(n - 2)\omega _{n - 1}} |x|^{-(n - 2)} * u, \end{aligned} \end{aligned}$$

with \(\omega _{n - 1} \equiv 2 \pi ^{\frac{n}{2}}/\Gamma \left( \frac{n}{2}\right) \) the surface area of the unit sphere \(S^{n - 1}\), where \(\Gamma (\cdot )\) denotes the gamma function.
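For the reader's convenience, the normalization above can be checked numerically. The following is a minimal sketch (assuming Python with numpy/scipy, which is not part of the paper) evaluating \(\omega _{n - 1}\) and the kernel constant \(1/((n - 2)\omega _{n - 1})\); for \(n = 3\) these reduce to \(4 \pi \) and \(1/(4 \pi )\).

```python
import numpy as np
from scipy.special import gamma

def omega(n):
    # surface area of the unit sphere S^{n-1}: omega_{n-1} = 2 pi^{n/2} / Gamma(n/2)
    return 2 * np.pi**(n / 2) / gamma(n / 2)

def newton_constant(n):
    # constant in (-Delta)^{-1} u = c_n |x|^{-(n-2)} * u, namely 1 / ((n-2) omega_{n-1})
    return 1.0 / ((n - 2) * omega(n))

print(omega(3), 4 * np.pi)                  # 12.566..., 12.566...
print(newton_constant(3), 1 / (4 * np.pi))  # 0.0795..., 0.0795...
```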

Proposition 1.1

(Local well-posedness and conservation laws) Let \(n \ge 3\) and \(\frac{n}{2} \le p < n \). For any \(u_0 \in L^p({\mathbb {R}}^n)\), there exist \(T > 0\) and a unique mild solution \((u, \psi )\) to (1.1) with the initial data \(u_0\) such that \(u \in C([0, T); L^p({\mathbb {R}}^n)) \cap L^\theta (0, T; L^q({\mathbb {R}}^n))\) with \(2/\theta + n/q = 2\) and \(q > \frac{n}{2}\). Moreover, the solution has the higher regularity \(u\in C([0,T);W^{2, p}({\mathbb {R}}^n))\cap C^1((0, T); L^p({\mathbb {R}}^n))\) and is a strong solution of (1.1). Besides, there exists a maximal existence time \(T = T_* \le \infty \) such that if \(T_* < \infty \), then for any \(\frac{n}{2} < p \le \infty \),

$$\begin{aligned} \varlimsup _{t \rightarrow T_*} \Vert u(t)\Vert _p = \infty . \end{aligned}$$

Furthermore, the solution satisfies the following properties:

  1. (1)

    If the initial data \(u_0 \in L^1_b({\mathbb {R}}^n)\) for \(b > 0\), then the solution satisfies

    $$\begin{aligned} u \in C([0, T); L^p({\mathbb {R}}^n) \cap L^1_b({\mathbb {R}}^n)). \end{aligned}$$
  2. (2)

    If \(u_0(x) \ge 0\), then \(u(t, x) \ge 0\) for any \((t, x)\in (0, T) \times {\mathbb {R}}^n\).

  3. (3)

    If \(u_0 \in L^1({\mathbb {R}}^n)\), then

    $$\begin{aligned} \Vert u(t)\Vert _1 = \Vert u_0\Vert _1. \end{aligned}$$
    (1.2)
  4. (4)

    If in addition \(u_0 \in L^1_b({\mathbb {R}}^n)\), where \(b > 0\) and \(p\ge \frac{n}{2}\), then the solution \((u, \psi )\) satisfies

    $$\begin{aligned} H[u(t)] +\int _0^t \int _{{\mathbb {R}}^n}u(\tau )\big |\nabla (\log {u(\tau )}-\psi (\tau ))\big |^2\, dx\, d\tau = H[u_0], \end{aligned}$$
    (1.3)

    where

    $$\begin{aligned} H[u(t)] \equiv \int _{{\mathbb {R}}^n} u(t) \log {u(t)}\, dx - \frac{1}{2} \int _{{\mathbb {R}}^n} u(t) \psi (t)\, dx. \end{aligned}$$
    (1.4)
  5. (5)

    If \(u_0 \in L^1_2({\mathbb {R}}^n)\), then the second moment of the solution satisfies the following identity:

    $$\begin{aligned} \begin{aligned} \int _{{\mathbb {R}}^n} |x - x'|^2 u(t)\, dx&= \int _{{\mathbb {R}}^n} |x - x'|^2 u_0(x)\, dx + 2(n-2)\int _0^t H[u(s)]\, ds + 2 n t \Vert u_0\Vert _1 \\&\quad - 2 (n - 2) \int _0^t \int _{{\mathbb {R}}^n} u(s) \log {u(s)}\, dx\, ds, \end{aligned} \end{aligned}$$
    (1.5)

    where \(x' \in {\mathbb {R}}^n\) is an arbitrary point.

The local well-posedness in the \(L^p\) spaces is essentially due to the earlier works of Weissler [50] and Giga [15], where the nonlinear heat equation and the incompressible Navier–Stokes equations are considered. Since the scaling structure of (1.1) is similar to that of the Navier–Stokes equations, one may apply those theories to obtain the existence and uniqueness of the mild solution, as done by Kurokiba–Ogawa [28] (see also [25] for the critical case and [27] for the weighted space). If the initial data is non-negative and integrable, then the weak maximum principle and the conservation of the average ensure that the weak solution preserves the total mass (1.2). This is a natural consequence of the fact that the equation originally arises from conservation laws (cf. [14, 24]).

On the other hand, the equation has a scaling invariance: for \(\lambda > 0\), the pair

$$\begin{aligned} \left\{ \begin{aligned}&u_\lambda (t, x) \equiv \lambda ^2 u(\lambda ^2 t, \lambda x), \\&\psi _\lambda (t, x) \equiv \psi (\lambda ^2 t, \lambda x) \end{aligned} \right. \end{aligned}$$

again solves the system. The critical space corresponding to this invariant scaling is

$$\begin{aligned} L^{\infty } \big ({\mathbb {R}}_+; L^\frac{n}{2}({\mathbb {R}}^n)\big ) \times L^{\infty }({\mathbb {R}}_+; L^\infty ({\mathbb {R}}^n)) \end{aligned}$$

and in the two dimensional case it is \(L^\infty ({\mathbb {R}}_{+}; L^1({\mathbb {R}}^2)) \times L^\infty ({\mathbb {R}}_+; L^\infty ({\mathbb {R}}^2))\). The global existence in the scaling critical spaces is a direct consequence of Fujita–Kato’s principle.
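As a quick sanity check of the scaling above, the following sketch (assuming Python with numpy/scipy; the heat kernel is used only as a convenient radially symmetric test profile, not as a solution of (1.1)) verifies numerically that the \(L^{\frac{n}{2}}\) norm is invariant under the rescaling, i.e., \(\Vert u_\lambda (t, \cdot )\Vert _{\frac{n}{2}} = \Vert u(\lambda ^2 t, \cdot )\Vert _{\frac{n}{2}}\).

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

n, lam, t = 3, 2.0, 0.3
omega = 2 * np.pi**(n / 2) / gamma(n / 2)               # area of the unit sphere S^{n-1}
u = lambda s, r: (4 * np.pi * s)**(-n / 2) * np.exp(-r**2 / (4 * s))   # test profile

def Lp_norm(f, p):
    # radial integration: ||f||_p = ( int_0^infty |f(r)|^p omega r^{n-1} dr )^{1/p}
    return quad(lambda r: abs(f(r))**p * omega * r**(n - 1), 0, np.inf)[0]**(1 / p)

u_lam = lambda r: lam**2 * u(lam**2 * t, lam * r)       # rescaled profile at time t
print(Lp_norm(u_lam, n / 2))                            # equals the next line
print(Lp_norm(lambda r: u(lam**2 * t, r), n / 2))
```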

Proposition 1.2

(Global existence) Let \(n \ge 3\), \(p \ge \frac{n}{2}\), and \(b > 0\). Assume that the initial data \(u_0\) is non-negative, belongs to \(L^p({\mathbb {R}}^n) \cap L^1_b({\mathbb {R}}^n)\), and satisfies \(\Vert u_0\Vert _{\frac{n}{2}} < B_n\) for some constant \(B_n > 0\) depending only on n.

  1. (1)

    Then the corresponding solution u obtained by Proposition 1.1 exists globally in time.

  2. (2)

    The solution decays in time:

    $$\begin{aligned} \Vert u(t)\Vert _p \le C (1 + t)^{- \frac{n}{2} \left( 1 - \frac{1}{p}\right) }. \end{aligned}$$

One can show that the constant \(B_n\) can be chosen as

$$\begin{aligned} B_n = \frac{8}{n S_n^2} = 8 \pi (n - 2) \left( \frac{\Gamma (\frac{n}{2})}{\Gamma (n)} \right) ^\frac{2}{n} \end{aligned}$$

(see for instance [13], see also [11] for an improved constant), where \(S_n\) denotes the best possible constant of Sobolev’s inequality (cf. Talenti [47]). Under this assumption we see that for some \(T > 0\), it holds

$$\begin{aligned} \Vert u(t)\Vert _\frac{n}{2} \le \Vert u_0\Vert _\frac{n}{2} \end{aligned}$$

for \(t \in [0, T)\). Hence \(\Vert u(t)\Vert _{\frac{n}{2}}\) remains bounded by \(B_n\), which implies a uniform bound for the solution.
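The size of this threshold can be evaluated directly; the following minimal sketch (assuming Python with scipy) computes \(B_n = 8 \pi (n - 2) (\Gamma (\frac{n}{2})/\Gamma (n))^{\frac{2}{n}}\) for small n.

```python
import numpy as np
from scipy.special import gamma

def B(n):
    # B_n = 8 pi (n - 2) (Gamma(n/2) / Gamma(n))^{2/n}
    return 8 * np.pi * (n - 2) * (gamma(n / 2) / gamma(n))**(2.0 / n)

for n in range(3, 8):
    print(n, B(n))
```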

The main subject of this paper is to show an instability result for the mild solution to (1.1). When \(n = 2\), the solution to (1.1) exists globally in time for non-negative initial data \(u_0 \in L^1({\mathbb {R}}^2) \cap L^2_s({\mathbb {R}}^2)\) satisfying

$$\begin{aligned} \int _{{\mathbb {R}}^2} u_0(x)\, dx \le 8 \pi . \end{aligned}$$

On the other hand, if

$$\begin{aligned} \int _{{\mathbb {R}}^2} u_0(x)\, dx > 8 \pi , \end{aligned}$$

then the solution blows up in a finite time. Namely, there exists \(T_\textrm{m} < \infty \) such that

$$\begin{aligned} \varlimsup _{t \rightarrow T_\textrm{m}} \Vert u(t)\Vert _p = \infty \end{aligned}$$

for all \(1 < p \le \infty \) (see [9] for a one dimensional modification).

In Biler [2] and Nagai [33, 34], the finite time blow-up of positive solutions is shown under certain conditions in the two dimensional case (cf. Kurokiba–Ogawa [27]).

In the higher dimensional case, the scaling invariant space shifts to \(L^{\frac{n}{2}}\) and the \(L^1\) conservation law no longer works very well; in this sense, the problem is “super critical”. Moreover, the use of the entropy functional brings a new difficulty in showing the finite time blow-up. Biler [3] obtained a finite time blow-up result in the case of a bounded domain with a boundary condition. Corrias–Perthame–Zaag [13] obtained a finite time blow-up result in the higher dimensional case without using the entropy functional. Namely, there exists a large constant \(M = M(n) > 0\) such that if the initial data satisfies

$$\begin{aligned} M \le \frac{\Vert u_0\Vert _1^\frac{n}{n - 2}}{\displaystyle \int _{{\mathbb {R}}^n}|x-{\bar{x}}|^2 u_0(x)\, dx}, \end{aligned}$$

where \({\bar{x}}\) is the b-th mass center of \(u_0\) (here with \(b = 2\)), defined by

$$\begin{aligned} \int _{{\mathbb {R}}^n} |x - {\bar{x}}|^b u_0(x)\, dx \equiv \inf _{x \in {\mathbb {R}}^n} \int _{{\mathbb {R}}^n} |y - x|^b u_0(y)\, dy. \end{aligned}$$

Then the solution blows up in a finite time. In both results, the initial data and the solution are assumed to be in \(L^1_2({\mathbb {R}}^n)\); namely, the second moment of the solution remains finite for all time up to the maximal existence time. This is indeed a natural condition for the system if we regard \(u(t, x)\, dx\) as a probability measure, assuming \(\Vert u_0\Vert _1 = 1\) and positivity.
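The b-th mass center can also be computed numerically as the minimizer of \(x \mapsto \int |y - x|^b u_0(y)\, dy\). The following is a small sketch (assuming Python with numpy/scipy), in which \(u_0\, dx\) is replaced by samples from an illustrative mixture of two Gaussians that is not taken from the paper; for \(b = 2\) the minimizer is the ordinary center of mass.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# samples representing the probability measure u_0 dx: a two-bump mixture in R^3
Y = np.vstack([rng.normal([0.0, 0.0, 0.0], 0.5, size=(4000, 3)),
               rng.normal([3.0, 0.0, 0.0], 0.5, size=(1000, 3))])

def moment(x, b):
    # Monte Carlo approximation of int |y - x|^b u_0(y) dy
    return np.mean(np.linalg.norm(Y - x, axis=1)**b)

for b in [1.0, 2.0, 3.0]:
    res = minimize(moment, x0=np.zeros(3), args=(b,))
    print(b, res.x)                       # the b-th mass center
print("center of mass:", Y.mean(axis=0))  # coincides with the b = 2 minimizer
```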

The following statement is essentially due to Biler [3] in a bounded domain and was refined for the whole space case by Calvez–Corrias–Ebde [11] and Ogawa–Wakui [41].

Proposition 1.3

(Blow-up criterion) Let \(n \ge 3\) and \(b > 0\). For \(u_0 \in L^{\frac{n}{2}}({\mathbb {R}}^n) \cap L^1_b({\mathbb {R}}^n)\), \(u_0 \ge 0\), assume further that for some constant \(C_{n, b} > 0\),

$$\begin{aligned} H[u_0] < -\frac{n}{b} \Vert u_0\Vert _1 \log \left( \frac{C_{n,b}}{\Vert u_0\Vert _1^{1 + \frac{b}{n}}}\int _{{\mathbb {R}}^n}|x-{\bar{x}}|^bu_0(x)\, dx \right) , \end{aligned}$$
(1.6)

where \({\bar{x}}\) is the b-th mass center of \(u_0\).

  1. (1)

    When \(b \ge 2\), then

    $$\begin{aligned} C_{n, b} \ge \frac{2 \pi e^\frac{n}{n - 2}}{n} \end{aligned}$$
    (1.7)

    and the solution u to (1.1) blows up in a finite time. Namely, for any \(\frac{n}{2} < p \le \infty \), there exists some \(T_* < \infty \) such that

    $$\begin{aligned} \varlimsup _{t \rightarrow T_*} \Vert u(t)\Vert _p = \infty . \end{aligned}$$
    (1.8)
  2. (2)

    If \(u_0 \ge 0\) is radially symmetric and \(b > 0\), then

    $$\begin{aligned} C_{n, b} \ge \frac{b c_{n, b} e^{1 + \frac{b (1 + \delta /n)}{n - 2}}}{n} \end{aligned}$$
    (1.9)

    and the solution u to (1.1) blows up in a finite time in the sense of (1.8), where \(c_{n, b}\) is the constant defined in Proposition 2.1.

  3. (3)

    When \(0< b < 2\), then the solution does not remain uniformly bounded in \(L^{\frac{n}{2}}({\mathbb {R}}^n)\).
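As a rough numerical illustration of the criterion (1.6), the following sketch (assuming Python with numpy; the Gaussian profile and the values of the mass M and width \(\sigma \) are illustrative choices, not taken from the paper) evaluates both sides of (1.6) for \(n = 3\), \(b = 2\), and \(u_0 = M \cdot N(0, \sigma ^2 I_3)\). The entropy and the second moment of Gaussian data are known in closed form, and the potential term is approximated by Monte Carlo.

```python
import numpy as np

def check_condition(M, sigma, n=3, b=2, N=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # int u_0 log u_0 dx for u_0 = M * Gaussian(0, sigma^2 I_n)
    ent = M * np.log(M) - M * (n / 2) * np.log(2 * np.pi * np.e * sigma**2)
    # int u_0 (-Delta)^{-1} u_0 dx = (M^2 / (4 pi)) E[1/|X - Y|] for n = 3, X, Y iid Gaussian
    X = sigma * rng.standard_normal((N, n))
    Y = sigma * rng.standard_normal((N, n))
    pot = M**2 / (4 * np.pi) * np.mean(1.0 / np.linalg.norm(X - Y, axis=1))
    H = ent - 0.5 * pot                          # the free energy H[u_0] as in (1.4)
    moment2 = M * n * sigma**2                   # second moment; the mass center is 0
    C_nb = 2 * np.pi * np.exp(n / (n - 2)) / n   # the constant from (1.7)
    rhs = -(n / b) * M * np.log(C_nb / M**(1 + b / n) * moment2)
    return H, rhs, H < rhs                       # True means that (1.6) holds

print(check_condition(M=1.0, sigma=1.0))     # small data: the condition fails
print(check_condition(M=100.0, sigma=0.2))   # large concentrated data: the condition holds
```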

Our main result is the concentration phenomenon for the blowing-up solution:

Theorem 1.4

Let \(n \ge 3\) and \(b \ge 2\). For \(u_0 \in L^{\frac{n}{2}}({\mathbb {R}}^n) \cap L^1_b({\mathbb {R}}^n)\), \(u_0 \ge 0\), assume that (1.6) holds with \(C_{n, b} = 2 \pi e^\frac{n}{n - 2}/n\).

  1. (1)

    Then the blowing-up solution u to (1.1) concentrates in the following sense: let \(T_* > 0\) be the blow-up time and let \(\{t_k\}_{k \in {\mathbb {N}}}\) satisfy \(t_k \rightarrow T_*\) and \(\Vert u(t_k)\Vert _\frac{n}{2} \rightarrow \infty \) as \(k \rightarrow \infty \). Then for any \(\varepsilon > 0\), there exist a subsequence of \(\{t_k\}_{k \in {\mathbb {N}}}\) and a point \(x_* \in {\mathbb {R}}^n\) such that

    $$\begin{aligned} \left( \frac{2 n}{(n - 2) C_\textrm{HLS}}\right) ^\frac{n}{2} \le \varliminf _{k \rightarrow \infty } \int _{B_\varepsilon (x_*)} u(t_k, x)^\frac{n}{2}\, dx, \end{aligned}$$
    (1.10)

    where \(C_\textrm{HLS}\) is the best possible constant of the Hardy–Littlewood–Sobolev inequality, that is,

    $$\begin{aligned} \int _{{\mathbb {R}}^n}f(-\Delta )^{-1}f\, dx \le C_\textrm{HLS}\Vert f\Vert _1\Vert f\Vert _{\frac{n}{2}}. \end{aligned}$$
    (1.11)
  2. (2)

    Furthermore, if \(T_* < \infty \), then for any \(\varepsilon > 0\), there exist a subsequence of \(\{t_k\}_{k \in {\mathbb {N}}}\) and \(\{x_k\}_{k \in {\mathbb {N}}} \subset {\mathbb {R}}^n\) such that

    $$\begin{aligned} \left( \frac{2 n}{(n - 2) C_\textrm{HLS}}\right) ^\frac{n}{2} \le \varliminf _{k \rightarrow \infty } \int _{B_{\varepsilon \sqrt{T_* - t_k}}(x_k)} u(t_k, x)^\frac{n}{2}\, dx, \end{aligned}$$
    (1.12)

    where \(C_\textrm{HLS}\) is as defined above.

By the assumption (1.6) with \(C_{n, b} = 2 \pi e^\frac{n}{n - 2}/n\), we see that

$$\begin{aligned} - \frac{1}{2} \int _{{\mathbb {R}}^n} u_0(x) (- \Delta )^{- 1} u_0(x)\, dx&< -\frac{n}{2} \Vert u_0\Vert _1 \log \left( \frac{2 \pi e^\frac{n}{n - 2}}{n \Vert u_0\Vert _1^{1 + \frac{2}{n}}}\int _{{\mathbb {R}}^n}|x-{\bar{x}}|^2 u_0(x)\, dx \right) \\&\quad - \int _{{\mathbb {R}}^n} u_0(x) \log {u_0(x)}\, dx. \end{aligned}$$

It follows from the Hardy–Littlewood–Sobolev inequality (1.11) and the Shannon inequality (2.1) (see Proposition 2.1 below) that

$$\begin{aligned} - \frac{C_\textrm{HLS}}{2} \Vert u_0\Vert _1 \Vert u_0\Vert _\frac{n}{2} < -\frac{n}{2} \Vert u_0\Vert _1 \log \left( e^\frac{2}{n - 2} \right) , \end{aligned}$$

which implies that the initial data \(u_0 \ge 0\) satisfies

$$\begin{aligned} \int _{{\mathbb {R}}^n} u_0(x)^\frac{n}{2}\, dx > \left( \frac{2 n}{(n - 2) C_\textrm{HLS}}\right) ^\frac{n}{2}. \end{aligned}$$
(1.13)

The constant on the right hand side coincides with the one appearing in (1.10). We also see that

$$\begin{aligned} B_n = \frac{8}{n S_n^2} < \frac{2 n}{(n - 2) C_\textrm{HLS}}. \end{aligned}$$

for \(n \ge 3\) (see Proposition 2.4 below).

For a radially symmetric solution to (1.1), one can relax the condition on the weighted Lebesgue space. Since the assumption on the initial data differs from that in the case \(b \ge 2\), the constant appearing in the concentration phenomenon changes as follows:

Theorem 1.5

Let \(n \ge 3\) and \(b > 0\), and let \(u_0 \in L^{\frac{n}{2}}({\mathbb {R}}^n) \cap L^1_b({\mathbb {R}}^n)\) be radially symmetric with \(u_0 \ge 0\).

  1. (1)

    If \(b \ge 2\) and \(u_0\) satisfies (1.6) with \(C_{n, b} = 2 \pi e^\frac{n}{n - 2}/n\), then the blowing-up solution u to (1.1) concentrates in the following sense: let \(0< T_* < \infty \) be the blow-up time and \(\{t_k\}_{k \in {\mathbb {N}}}\) satisfy \(t_k \rightarrow T_*\) and \(\Vert u(t_k)\Vert _\frac{n}{2} \rightarrow \infty \) as \(k \rightarrow \infty \). Then for any \(\varepsilon > 0\), there exists a subsequence of \(\{t_k\}_{k \in {\mathbb {N}}}\) such that

    $$\begin{aligned} \left( \frac{2 n}{(n - 2) C_\textrm{HLS}}\right) ^\frac{n}{2} \le \varliminf _{k \rightarrow \infty } \int _{B_{\varepsilon \sqrt{T_* - t_k}}(0)} u(t_k, x)^\frac{n}{2}\, dx. \end{aligned}$$
    (1.14)
  2. (2)

    If \(b > 0\) and \(u_0\) satisfies (1.6) with (1.9) for any \(\delta > 0\), then the blowing-up solution u to (1.1) concentrates in the following sense: let \(0< T_* < \infty \) be the blow-up time and \(\{t_k\}_{k \in {\mathbb {N}}}\) satisfy \(t_k \rightarrow T_*\) and \(\Vert u(t_k)\Vert _\frac{n}{2} \rightarrow \infty \) as \(k \rightarrow \infty \). Then for any \(\varepsilon > 0\), there exists a subsequence of \(\{t_k\}_{k \in {\mathbb {N}}}\) such that

    $$\begin{aligned} \left( \frac{2 (n + \delta )}{(n - 2) C_\textrm{HLS}}\right) ^\frac{n}{2} \le \varliminf _{k \rightarrow \infty } \int _{B_{\varepsilon \sqrt{T_* - t_k}}(0)} u(t_k, x)^\frac{n}{2}\, dx. \end{aligned}$$
    (1.15)

Theorem 1.5 gives the \(L^\frac{n}{2}\)-concentration rate of the radially symmetric blow-up solution and is analogous to the corresponding results for the nonlinear Schrödinger equation (see Merle–Tsutsumi [31] and Tsutsumi [48]).

Remark

In the derivation of (1.13), if we assume that \(u_0\) satisfies (1.6) with (1.9) instead of \(C_{n, b} = 2 \pi e^\frac{n}{n - 2}/n\), then we see that

$$\begin{aligned} \int _{{\mathbb {R}}^n} u_0(x)^\frac{n}{2}\, dx > \left( \frac{2 (n + \delta )}{(n - 2) C_\textrm{HLS}}\right) ^\frac{n}{2}. \end{aligned}$$

While the weight condition is relaxed from \(b \ge 2\) to \(b > 0\) in Theorem 1.5, the assumption on the initial data needs to be tightened when \(b > 0\). The result of Theorem 1.5 (2) follows naturally from an argument analogous to that for Theorem 1.4.

It is worth comparing the above result with the case of the degenerate drift-diffusion system:

$$\begin{aligned} \left\{ \begin{aligned}&\partial _t u - \Delta u^\alpha + \nabla \cdot (u \nabla \psi ) = 0,{} & {} t> 0,\ x \in {\mathbb {R}}^n, \\&\quad - \Delta \psi = u,{} & {} t > 0,\ x \in {\mathbb {R}}^n, \\&u(0, x) = u_0(x),{} & {} x \in {\mathbb {R}}^n, \end{aligned} \right. \end{aligned}$$
(1.16)

where \(\alpha > 1\) denotes the adiabatic constant originating from the pressure term \(P(\rho ) = c \rho ^\alpha \) in the barotropic damped compressible Navier–Stokes–Poisson system (cf. [14, 24]). It is known that there are two critical exponents \(\alpha _* = 2 - 4/(n + 2)\) and \(\alpha ^* = 2 - 2/n\) (see [8, 12, 40, 42, 45, 46, 49]). In the case \(\alpha _* \le \alpha \le \alpha ^*\) (see [23]), the threshold for the global existence of the weak solution to (1.16) is identified as

$$\begin{aligned} \Vert u_0\Vert _1^\beta \Vert u_0\Vert _\alpha ^\gamma < \Vert V\Vert _1^\beta \Vert V\Vert _\alpha ^\gamma , \end{aligned}$$

where

$$\begin{aligned} \beta \equiv \frac{2}{2 - \alpha } - \frac{n}{\alpha },\quad \gamma \equiv n - \frac{2}{2 - \alpha } \end{aligned}$$

and V is the optimizer for the Hardy–Littlewood–Sobolev inequality

$$\begin{aligned} \int _{{\mathbb {R}}^n} f(x) (-\Delta )^{-1} f(x)\, dx \le C_{\textrm{HLS}, \alpha }\Vert f\Vert _1^{1 - \sigma } \Vert f\Vert _{\alpha }^{1 + \sigma } \end{aligned}$$

with

$$\begin{aligned} \sigma \equiv \frac{\alpha }{\alpha - 1} \frac{n - 2}{n} - 1. \end{aligned}$$

In the particular case \(\beta = 0\), i.e., \(\alpha = \alpha _*\), the threshold is

$$\begin{aligned} \int _{{\mathbb {R}}^n} u_0(x)^{\alpha _*}\, dx < \int _{{\mathbb {R}}^n} V(x)^{\alpha _*}\, dx = \left( \frac{2 n}{(n - 2) C_{\textrm{HLS}, \alpha _*}}\right) ^\frac{n}{2}.\end{aligned}$$

From this estimate, the above lower bound for the concentration coincides with the threshold for the global existence of the weak solution to the degenerate drift-diffusion equation in higher dimensions.
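The exponents appearing above satisfy simple algebraic relations; the following small sketch (assuming Python) computes \(\alpha _*\), \(\alpha ^*\), \(\beta \), \(\gamma \), and \(\sigma \) for several n and confirms that \(\beta = 0\) at \(\alpha = \alpha _*\).

```python
def exponents(n, alpha):
    # beta, gamma, sigma as defined above for the degenerate system (1.16)
    beta = 2 / (2 - alpha) - n / alpha
    gamma = n - 2 / (2 - alpha)
    sigma = alpha / (alpha - 1) * (n - 2) / n - 1
    return beta, gamma, sigma

for n in [3, 4, 5, 6]:
    a_star = 2 - 4 / (n + 2)   # alpha_*
    a_sup = 2 - 2 / n          # alpha^*
    print(n, a_star, a_sup, exponents(n, a_star))   # beta vanishes at alpha = alpha_*
```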

The proof of Theorem 1.4 is based on the profile decomposition in \(L^1({\mathbb {R}}^n)\). We take \(\{t_k\}_{k \in {\mathbb {N}}}\) such that \(t_k \rightarrow T_*\) as \(k \rightarrow \infty \) and introduce the rescaled solution sequence

$$\begin{aligned} u_k(x) \equiv \lambda _k^{- n} u(t_k, \lambda _k^{- 1} x)\quad \textrm{with}\ \lambda _k \equiv \Vert u(t_k)\Vert _\frac{n}{2}^\frac{1}{n - 2} \end{aligned}$$

for a blowing-up solution u to (1.1). Then \(\Vert u_k\Vert _1 = \Vert u_0\Vert _1\) and \(\Vert u_k\Vert _\frac{n}{2} = 1\). Bedrossian–Kim [1] showed a profile decomposition in \(L^1({\mathbb {R}}^n)\), but in their construction the profile may depend on k. In this paper, we improve the argument and extract a profile independent of k from \(\{u_k\}_{k \in {\mathbb {N}}}\). In order to exclude the possibility that \(\{u_k\}\) is vanishing, we use the bump function method for the rescaled equation (see Biler [2], Biler–Cieślak–Karch–Zienkiewicz [4], Biler–Karch–Zienkiewicz [5]).
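The normalization of the rescaled sequence follows from a change of variables and can be checked numerically; the following sketch (assuming Python with numpy/scipy; a Gaussian is used merely as a stand-in for \(u(t_k)\)) verifies that \(\Vert u_k\Vert _1 = \Vert u_0\Vert _1\) and \(\Vert u_k\Vert _{\frac{n}{2}} = 1\).

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

n = 3
omega = 2 * np.pi**(n / 2) / gamma(n / 2)
f = lambda r: 5.0 * np.exp(-r**2 / 2)        # radially symmetric stand-in for u(t_k)

def Lp(g, p):
    # ||g||_p computed by radial integration
    return quad(lambda r: abs(g(r))**p * omega * r**(n - 1), 0, np.inf)[0]**(1 / p)

lam = Lp(f, n / 2)**(1 / (n - 2))            # lambda_k = ||u(t_k)||_{n/2}^{1/(n-2)}
fk = lambda r: lam**(-n) * f(r / lam)        # u_k(x) = lambda_k^{-n} u(t_k, lambda_k^{-1} x)
print(Lp(f, 1), Lp(fk, 1))                   # the L^1 norm is preserved
print(Lp(fk, n / 2))                         # the L^{n/2} norm is normalized to 1
```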

2 Preliminaries

The following inequality, originally due to Shannon, is useful for estimating the logarithmic functional (see [29, 41]).

Proposition 2.1

(Generalized Shannon’s inequality) Let \(n \ge 2\) and \(b > 0\). There exists a constant \(C_{n, b} > 0\) such that for any non-negative function \(f \in L^1_b({\mathbb {R}}^n)\),

$$\begin{aligned} - \int _{{\mathbb {R}}^n} f(x) \log {f(x)}\, dx \le \frac{n}{b} \Vert f\Vert _1 \log \left( \frac{C_{n, b}}{\Vert f\Vert _1^{1 + \frac{b}{n}}} \int _{{\mathbb {R}}^n} |x - {\bar{x}}|^b f(x)\, dx \right) , \end{aligned}$$
(2.1)

where \({\bar{x}}\) is the b-th mass center of f and

$$\begin{aligned} C_{n, b} = \frac{b e c_{n, b}}{n},\quad c_{n, b} = \left( \frac{2 \pi ^\frac{n}{2}}{b} \frac{\Gamma (\frac{n}{b})}{\Gamma (\frac{n}{2})} \right) ^\frac{b}{n} \end{aligned}$$

is the best possible. In particular, if \(b = 2\), then

$$\begin{aligned} C_{n, 2} = \frac{2 \pi e}{n}. \end{aligned}$$
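As a numerical check of Proposition 2.1 (a sketch, assuming Python with numpy/scipy), the following evaluates the constant \(C_{n, 2}\) and confirms that the standard Gaussian attains equality in (2.1) for \(b = 2\), up to quadrature error.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def c_nb(n, b):
    # c_{n,b} = ( (2 pi^{n/2} / b) Gamma(n/b) / Gamma(n/2) )^{b/n}
    return (2 * np.pi**(n / 2) / b * gamma(n / b) / gamma(n / 2))**(b / n)

def C_nb(n, b):
    # C_{n,b} = b e c_{n,b} / n
    return b * np.e * c_nb(n, b) / n

n = 3
print(C_nb(n, 2), 2 * np.pi * np.e / n)      # both equal 2 pi e / 3

# the standard Gaussian f(x) = (2 pi)^{-n/2} exp(-|x|^2/2) gives equality for b = 2
omega = 2 * np.pi**(n / 2) / gamma(n / 2)
f = lambda r: (2 * np.pi)**(-n / 2) * np.exp(-r**2 / 2)
entropy = quad(lambda r: -f(r) * np.log(f(r)) * omega * r**(n - 1), 0, 20)[0]
mass    = quad(lambda r:  f(r) * omega * r**(n - 1), 0, 20)[0]
moment2 = quad(lambda r:  f(r) * r**2 * omega * r**(n - 1), 0, 20)[0]
rhs = (n / 2) * mass * np.log(C_nb(n, 2) / mass**(1 + 2 / n) * moment2)
print(entropy, rhs)                          # equal up to quadrature error
```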

Lemma 2.2

Let \(n \ge 2\) and \(b > 0\). There exists a constant \(C > 0\) such that for all \(f \in L^{\frac{n}{2}}({\mathbb {R}}^n) \cap L^1_b({\mathbb {R}}^n)\),

$$\begin{aligned} \Vert f\Vert _1 \le C \Vert f\Vert _\frac{n}{2}^\frac{b}{n - 2 + b} \left( \int _{{\mathbb {R}}^n} |x|^b f(x)\, dx \right) ^\frac{n - 2}{n - 2 + b}. \end{aligned}$$

From this lemma and the mass conservation (1.2), we see that

$$\begin{aligned} \Vert u_0\Vert _1^\frac{n - 2 + b}{n - 2} \left( \int _{{\mathbb {R}}^n} |x|^b u(t)\, dx \right) ^{- 1} \le C^\frac{n - 2 + b}{n - 2} \Vert u(t)\Vert _\frac{n}{2}^\frac{b}{n - 2} \end{aligned}$$

that is, the blow-up speed of the \(L^{\frac{n}{2}}\)-norm of the solution u to (1.1) limits how fast the b-th moment can decay.

The following inequality is well known and will often be used later:

Proposition 2.3

(Hardy-Littlewood-Sobolev inequality) Let \(n \ge 3\). Then there exists a constant \(C_\textrm{HLS} > 0\) depending only on n such that for any function \(f \in L^1({\mathbb {R}}^n) \cap L^{\frac{n}{2}}({\mathbb {R}}^n)\),

$$\begin{aligned} \left\| |\nabla |^{- 1} f\right\| _2^2 \le C_\textrm{HLS} \Vert f\Vert _1 \Vert f\Vert _\frac{n}{2}. \end{aligned}$$
(2.2)

Moreover, there exists an extremal function \(V \in L^1({\mathbb {R}}^n) \cap L^{\frac{n}{2}}({\mathbb {R}}^n)\) which is radially symmetric and decreasing and satisfies the Euler–Lagrange equation

$$\begin{aligned} \left\{ \begin{aligned}&(- \Delta )^{- 1} V = \frac{n}{4} C_\textrm{HLS} V^{\frac{n}{2} - 1} + \frac{1}{2} C_\textrm{HLS} \chi _{\textrm{supp}\,{V}}{} & {} \textrm{in}\ B_R, \\&V > 0{} & {} \textrm{in}\ B_R, \\&V = 0{} & {} \textrm{in}\ {\mathbb {R}}^n \setminus B_R \end{aligned} \right. \end{aligned}$$
(2.3)

for some \(0< R < \infty \).

Proof of Proposition 2.3

By the Hardy–Littlewood–Sobolev inequality, it follows that

$$\begin{aligned} \iint _{{\mathbb {R}}^n \times {\mathbb {R}}^n} \frac{|f(x)| |f(y)|}{|x - y|^{n - 2}}\, dx\, dy \le C_{n, \alpha } \Vert f\Vert _\alpha \Vert f\Vert _r, \end{aligned}$$

where \(1< \alpha , r < \infty \) satisfy

$$\begin{aligned} \frac{1}{\alpha } + \frac{n - 2}{n} + \frac{1}{r} = 2,\quad \mathrm {i.e.},\ \frac{1}{\alpha } - \frac{2}{n} + \frac{1}{r} = 1. \end{aligned}$$

By Hölder’s inequality, we have

$$\begin{aligned} \iint _{{\mathbb {R}}^n \times {\mathbb {R}}^n} \frac{|f(x)| |f(y)|}{|x - y|^{n - 2}}\, dx\, dy \le C_{n, \alpha } \Vert f\Vert _1^{1 - \theta } \Vert f\Vert _\alpha ^{1 + \theta }, \end{aligned}$$

where \(0 \le \theta \le 1\) satisfies

$$\begin{aligned} 1 + \theta = \frac{\alpha }{\alpha - 1} \frac{n - 2}{n}. \end{aligned}$$

In particular, for \(\alpha = \frac{n}{2}\) we have \(\theta = 0\), so that the inequality

$$\begin{aligned} \iint _{{\mathbb {R}}^n \times {\mathbb {R}}^n} \frac{|f(x)| |f(y)|}{|x - y|^{n - 2}}\, dx\, dy \le C_n \Vert f\Vert _1 \Vert f\Vert _\frac{n}{2} \end{aligned}$$

holds for any function \(f \in L^1({\mathbb {R}}^n) \cap L^{\frac{n}{2}}({\mathbb {R}}^n)\). By the definition of the kernel of \((- \Delta )^{- 1}\), this can be rewritten as

$$\begin{aligned} \left| \int _{{\mathbb {R}}^n} f(x) (- \Delta )^{- 1} f(x)\, dx\right| \le C_\textrm{HLS} \Vert f\Vert _1 \Vert f\Vert _\frac{n}{2}, \end{aligned}$$

where \(C_\textrm{HLS}\) denotes the best possible constant of the Hardy–Littlewood–Sobolev inequality, namely \(C_\textrm{HLS}= c_n C_n\) with \(c_n = 1/((n - 2)\omega _{n - 1})\). This implies the inequality (2.2). The attainability of (2.2) and the Euler–Lagrange equation (2.3) are proved by Kimijima–Nakagawa–Ogawa [23] (see also [10]).\(\square \)

Proposition 2.4

(Comparison of constants) The best constant \(C_\textrm{HLS}\) is estimated from above by

$$\begin{aligned} C_\textrm{HLS} < S_n^2, \end{aligned}$$
(2.4)

where \(S_n\) is the best constant of Sobolev’s inequality

$$\begin{aligned} \Vert f\Vert _\frac{2 n}{n - 2} \le S_n \Vert \nabla f\Vert _2, \end{aligned}$$

given by

$$\begin{aligned} S_n^2 = \frac{1}{\pi n (n - 2)} \left( \frac{\Gamma (n)}{\Gamma \left( \frac{n}{2}\right) }\right) ^\frac{2}{n} \end{aligned}$$

for all \(f \in {\dot{H}}^1({\mathbb {R}}^n)\). In particular, for \(n \ge 3\),

$$\begin{aligned} \frac{8}{n S_n^2}< \frac{2 n}{(n - 2) C_\textrm{HLS}}. \end{aligned}$$
(2.5)

Remark

The precise value of the constant \(S_n\) is due to Talenti [47]. In view of (1.10), it is interesting to observe that

$$\begin{aligned} \frac{2 n}{(n - 2) C_\textrm{HLS}}\bigg |_{n = 2} = \frac{2 n}{C_n} \frac{2 \pi ^\frac{n}{2}}{\Gamma (\frac{n}{2})}\bigg |_{n = 2} = 8 \pi . \end{aligned}$$

Indeed, the best constant \(C_\textrm{HLS}\) can be decomposed into the explicit constant and implicit one as

$$\begin{aligned} C_\textrm{HLS} = \frac{1}{(n - 2) \omega _{n - 1}} C_n, \end{aligned}$$

where \(C_n\) is the best constant defined by

$$\begin{aligned} C_n \equiv \sup _{\begin{array}{c} f \in L^1({\mathbb {R}}^n) \cap L^\frac{n}{2}({\mathbb {R}}^n), \\ f \ge 0 \end{array}} \frac{\displaystyle \iint _{{\mathbb {R}}^n \times {\mathbb {R}}^n} \frac{f(x) f(y)}{|x - y|^{n - 2}}\, dx\, dy}{\Vert f\Vert _1 \Vert f\Vert _\frac{n}{2}}. \end{aligned}$$

In particular, the best constant \(C_n\) becomes 1 when \(n = 2\), and then the constant above formally coincides with the threshold \(8 \pi \) in the two dimensional case.

Proof of Proposition 2.4

It is easy to see that the best constant \(S_n^2\) also gives the best constant for the inequality

$$\begin{aligned} \left\| |\nabla |^{- 1} f\right\| _2^2 \le S_n^2 \Vert f\Vert _\frac{2n}{n + 2}^2 \end{aligned}$$

for any \(f \in L^{2n/(n + 2)}({\mathbb {R}}^n)\) by the duality argument (see for instance [40]). Hence, by Hölder’s inequality, one can observe that

$$\begin{aligned} \int _{{\mathbb {R}}^n} f(x) (- \Delta )^{- 1} f(x)\, dx \le S_n^2 \Vert f\Vert _1 \Vert f\Vert _\frac{n}{2} \end{aligned}$$

holds for any \(f \in L^1({\mathbb {R}}^n) \cap L^{\frac{n}{2}}({\mathbb {R}}^n)\). This shows \(C_\textrm{HLS} \le S_n^2\). To see that \(C_\textrm{HLS} < S_n^2\), we assume on the contrary that \(C_\textrm{HLS} = S_n^2\) for \(n \ge 3\). Then we may show that there exists an extremal function \(V \in L^1({\mathbb {R}}^n) \cap L^{\frac{n}{2}}({\mathbb {R}}^n)\) such that it attains the best constant, that is,

$$\begin{aligned} \left\| |\nabla |^{- 1} V\right\| _2^2 = S_n^2 \Vert V\Vert _1 \Vert V\Vert _\frac{n}{2}. \end{aligned}$$

Then by the Hölder and Sobolev inequalities,

$$\begin{aligned} \left\| |\nabla |^{- 1} V\right\| _2^2 = S_n^2 \Vert V\Vert _1 \Vert V\Vert _\frac{n}{2} \ge S_n^2 \Vert V\Vert _\frac{2n}{n + 2}^2 \ge \left\| |\nabla |^{- 1} V\right\| _2^2. \end{aligned}$$

This shows that V is also the extremal function that attains the best possible constant of Sobolev’s inequality, that is,

$$\begin{aligned} \left\| |\nabla |^{- 1} V\right\| _2^2 = S_n^2 \Vert V\Vert _\frac{2n}{n + 2}^2. \end{aligned}$$

The extremal function of this inequality is uniquely identified up to translation and dilation, namely,

$$\begin{aligned} V(x) = a_n \left( 1 + \frac{1}{n (n - 2)} |x|^2\right) ^{- \frac{n + 2}{2}},\ \textrm{where}\ a_n \equiv \left( \frac{2 n}{n - 2}\right) ^\frac{n + 2}{2}. \end{aligned}$$
(2.6)

On the other hand, the extremal function of the Hardy–Littlewood–Sobolev inequality (2.2) must satisfy the Euler–Lagrange equation (2.3). Clearly, the Talenti function (2.6) does not satisfy (2.3). This is a contradiction, and hence, we obtain (2.4). Moreover, it follows from (2.4) that

$$\begin{aligned} \frac{8}{n S_n^2}< \frac{8}{n C_\textrm{HLS}} < \frac{2 n}{(n - 2) C_\textrm{HLS}} \end{aligned}$$

for any \(n \ge 3\). Thus, we conclude that (2.5) holds.

Alternatively, (2.4) can be seen as follows. Since the extremal function of Sobolev’s inequality is uniquely identified up to translation and dilation, one can compute \(\Vert V\Vert _1\Vert V\Vert _\frac{n}{2}\) and \(\Vert V\Vert _{2n/(n+2)}\) explicitly for the Talenti function (2.6). Since it holds that

$$\begin{aligned} \Vert V\Vert _q^q&= \frac{1}{2} a_n^q (n (n - 2))^\frac{n}{2} \omega _{n - 1} B\left( \frac{n}{2}, \frac{q(n+2)}{2}-\frac{n}{2}\right) , \end{aligned}$$

where \(B(\cdot , \cdot )\) is the beta function, the problem reduces to the comparison between

$$\begin{aligned} \begin{aligned}&B\left( \frac{n}{2}, 1\right) B\left( \frac{n}{2}, \frac{n^2}{4}\right) ^\frac{2}{n} =\frac{2}{n} \left( \frac{\Gamma (\frac{n}{2})\Gamma (\frac{n^2}{4})}{\Gamma (\frac{n}{2}+\frac{n^2}{4})} \right) ^\frac{2}{n} \ge \Gamma \left( \frac{n}{2}\right) ^\frac{2}{n} \left( \frac{(2/n)^{\frac{n}{2}}\Gamma (\frac{n^2}{4})}{\Gamma (\frac{(n+1)^2}{4})} \right) ^\frac{2}{n}\\ \textrm{and}\quad&B\left( \frac{n}{2}, \frac{n}{2}\right) ^\frac{n+2}{n} = \left( \frac{\Gamma (\frac{n}{2})^2}{\Gamma (n)}\right) ^{1+\frac{2}{n}} = \Gamma \left( \frac{n}{2}\right) ^{1+\frac{2}{n}} \left( \frac{\Gamma (\frac{n}{2})}{\Gamma (n)}\right) ^{1+\frac{2}{n}}. \end{aligned}\end{aligned}$$

One can check that those values are different from each other by numerical computation (see the sketch following this proof). For higher dimensions, one can show this by applying the Stirling formula

$$\begin{aligned} \Gamma (z) \simeq (2 \pi )^\frac{1}{2} e^{- z} z^{z - \frac{1}{2}}\quad \textrm{for}\ z \gg 1. \end{aligned}$$

The ratio of norms of V is estimated by

$$\begin{aligned} \frac{\Vert V\Vert _1 \Vert V\Vert _\frac{n}{2}}{\Vert V\Vert _\frac{2n}{n + 2}^2}&= \frac{2}{n} \frac{\Gamma (n)}{\Gamma \left( \frac{n}{2}\right) ^2} \left( \frac{\Gamma (n) \Gamma \left( \frac{n^2}{4}\right) }{\Gamma \left( \frac{n}{2}\right) \Gamma \left( \frac{n}{2} + \frac{n^2}{4}\right) } \right) ^\frac{2}{n} \\&\simeq (2 \pi )^{-\frac{1}{2}} 2^{\frac{n}{2} + 2} n^{- \frac{3}{2}} \left( \frac{1}{2} + \frac{1}{n} \right) ^{- \frac{n}{2} - 1 + \frac{1}{n}} > 1\quad \textrm{for}\ n \gg 1. \end{aligned}$$

For example, in the case of \(n = 250\), we compute

$$\begin{aligned} (2 \pi )^{-\frac{1}{2}} 2^{\frac{n}{2} + 2} n^{- \frac{3}{2}} \left( \frac{1}{2} + \frac{1}{n} \right) ^{- \frac{n}{2} - 1 + \frac{1}{n}} = (2 \pi )^{- \frac{1}{2}} 2^{126 - \frac{1}{2}} 5^{- \frac{9}{2}} \left( \frac{125}{63}\right) ^{126 - \frac{1}{250}} > 1. \end{aligned}$$

\(\square \)
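The numerical comparison mentioned above can be carried out, for instance, as in the following sketch (assuming Python with numpy/scipy). The ratio \(\Vert V\Vert _1 \Vert V\Vert _{\frac{n}{2}} / \Vert V\Vert _{2n/(n + 2)}^2\) is evaluated in logarithmic form to avoid overflow; a positive logarithm means that the ratio exceeds 1.

```python
import numpy as np
from scipy.special import gammaln

def log_ratio(n):
    # log of (2/n) * Gamma(n)/Gamma(n/2)^2
    #        * [Gamma(n) Gamma(n^2/4) / (Gamma(n/2) Gamma(n/2 + n^2/4))]^{2/n}
    first = np.log(2.0 / n) + gammaln(n) - 2.0 * gammaln(n / 2.0)
    second = (2.0 / n) * (gammaln(n) + gammaln(n**2 / 4.0)
                          - gammaln(n / 2.0) - gammaln(n / 2.0 + n**2 / 4.0))
    return first + second

for n in [3, 4, 5, 10, 50, 250]:
    print(n, log_ratio(n), log_ratio(n) > 0)   # positive for the values tested
```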

3 Profile decomposition in \(L^1\)

The following decomposition is originally due to Gerard [16] (cf. Nawa [39]) and extended by many authors. Here we show a slightly modified version of the result due to Bedrossian–Kim [1] (see also Hmidi–Keraani [19]).

Lemma 3.1

(Profile decomposition in \(L^1\)) Let \(\{f_k\}_{k \in {\mathbb {N}}} \subset L^1({\mathbb {R}}^n) \cap L \log {L}({\mathbb {R}}^n)\) satisfy

$$\begin{aligned} f_k \ge 0,\quad \Vert f_k\Vert _1 = M,\quad \int _{{\mathbb {R}}^n} f_k(x) \log {f_k(x)}\, dx \le L \end{aligned}$$
(3.1)

for some constants \(M, L > 0\). Then for all \(\varepsilon > 0\), there exist a subsequence of \(\{f_k\}\) (not relabeled), \(J \in {\mathbb {N}}\cup \{0\}\), points \(\{x_k^{(j)}\}_{k \in {\mathbb {N}}, j = 1, \dots , J} \subset {\mathbb {R}}^n\), radii \(\{R^{(j)}\}_{j = 1}^J \subset {\mathbb {R}}_+\), and function sequences \(\{F^{(j)}\}_{j = 1}^J, \{w_k\}_{k \in {\mathbb {N}}}, \{e_k\}_{k \in {\mathbb {N}}} \subset L^1({\mathbb {R}}^n)\) which satisfy

$$\begin{aligned} f_k(x) = \sum _{j = 1}^J F^{(j)}(x - x_k^{(j)}) + w_k(x) + e_k(x)\quad \text {a.a.}\ x \in {\mathbb {R}}^n \end{aligned}$$

with the following properties:

  1. (1)

    The profiles \(\{F^{(j)}\}\) are nonnegative functions with \(\textrm{supp}\,{F^{(j)}} \subset B_{R^{(j)}}(0)\), and for \(j \ne j'\), \(|x_k^{(j)} - x_k^{(j')}| \rightarrow \infty \) as \(k \rightarrow \infty \). Moreover, for each \(k \in {\mathbb {N}}\), the balls \(B_{R^{(j)}}(x_k^{(j)})\), \(j = 1, \dots , J\), and \(\textrm{supp}\,{w_k}\) are all disjoint.

  2. (2)

    \(\{w_k\}\) is the vanishing part, that is, for all \(R > 0\),

    $$\begin{aligned} \lim _{k \rightarrow \infty } \sup _{x \in {\mathbb {R}}^n} \int _{B_R(x)} w_k(y)\, dy = 0, \end{aligned}$$

    and \(0 \le w_k \le f_k\) almost everywhere \(x \in {\mathbb {R}}^n\).

  3. (3)

    \(\{e_k\}\) is the error term, that is,

    $$\begin{aligned} \varlimsup _{k \rightarrow \infty } \Vert e_k\Vert _1 < \varepsilon \end{aligned}$$

    and \(e_k \le f_k\) almost everywhere \(x \in {\mathbb {R}}^n\).

In particular, passing \(k \rightarrow \infty \), we have the almost orthogonality

$$\begin{aligned} \Vert f_k\Vert _1 = \sum _{j = 1}^J \Vert F^{(j)}\Vert _1 + \Vert w_k\Vert _1 + \varepsilon . \end{aligned}$$

Proof of Lemma 3.1

By the argument of the concentration compactness lemma in [30], there exists a subsequence \(\{f_k\}_{k \in {\mathbb {N}}}\) (not relabeled) such that one of the following situations occurs for \(\{f_k\}\): (1) compactness, (2) vanishing, or (3) dichotomy. We fix \(\varepsilon > 0\) arbitrarily.

Case 1. Compactness: If the compactness occurs, then there exists a sequence \(\{x_k^{(1)}\}_{k \in {\mathbb {N}}} \subset {\mathbb {R}}^n\) such that

$$\begin{aligned} \int _{B_{R^{(1)}}(x_k^{(1)})} f_k(y)\, dy \ge M - \frac{\varepsilon }{2}. \end{aligned}$$

for some radius \(R^{(1)} > 0\) independent of k. We set

$$\begin{aligned} {\tilde{f}}_k(x) \equiv f_k\Big (x + x_k^{(1)}\Big ) \chi _{B_{R^{(1)}}(0)}(x) \end{aligned}$$

for \(x \in {\mathbb {R}}^n\). By the assumption (3.1), we see that

$$\begin{aligned} \int _{{\mathbb {R}}^n} {\tilde{f}}_k(x) \log {{\tilde{f}}_k(x)}\, dx \le L. \end{aligned}$$

On the other hand, we have

$$\begin{aligned} \int _{{\mathbb {R}}^n} {\tilde{f}}_k(x) \log {{\tilde{f}}_k(x)}\, dx = \int _{B_{R^{(1)}}(x_k^{(1)})} f_k(x) \log {f_k(x)}\, dx \ge - e^{- 1} |B_{R^{(1)}}|. \end{aligned}$$

Thus, it holds that

$$\begin{aligned} \left| \int _{{\mathbb {R}}^n} {\tilde{f}}_k(x) \log {{\tilde{f}}_k(x)}\, dx\right| \le \max \{L, e^{- 1} |B_{R^{(1)}}|\}, \end{aligned}$$

which implies that \(\{{\tilde{f}}_k\}\) is uniformly integrable. Then the Dunford–Pettis theorem implies that there exist a subsequence \(\{{\tilde{f}}_{k_l}\}_{l \in {\mathbb {N}}} \subset \{{\tilde{f}}_k\}\) and a nonnegative function \({\tilde{F}}^{(1)} \in L^1(B_{R^{(1)}}(0))\) such that \({\tilde{f}}_{k_l}\) converges weakly to \({\tilde{F}}^{(1)}\) in \(L^1(B_{R^{(1)}}(0))\). Thus, we define the profile \(F^{(1)}\) as the zero extension of \({\tilde{F}}^{(1)}\), that is,

$$\begin{aligned} F^{(1)}(x) \equiv \left\{ \begin{aligned}&{\tilde{F}}^{(1)}(x),{} & {} x \in B_{R^{(1)}}(0), \\&0,{} & {} \textrm{otherwise}. \end{aligned} \right. \end{aligned}$$

Moreover, by the weak lower semi-continuity of the norm, we have

$$\begin{aligned} M - \frac{\varepsilon }{2} \le \Vert F^{(1)}\Vert _1 \le M. \end{aligned}$$

On the other hand, we set

$$\begin{aligned} e_{k_l}(x) \equiv f_{k_l}(x) - F^{(1)}(x - x_{k_l}^{(1)}), \end{aligned}$$

then \(e_{k_l}\) converges weakly to 0 in \(L^1\). In this case, we define \(w_k \equiv 0\). Then the claim of Lemma 3.1 holds with \(J = 1\).

Case 2. Vanishing: If the vanishing occurs, it follows that

$$\begin{aligned} \lim _{k \rightarrow \infty } \sup _{x \in {\mathbb {R}}^n} \int _{B_R(x)} f_k(y)\, dy = 0 \end{aligned}$$

for arbitrary \(R > 0\). Define \(w_k \equiv f_k\) and \(e_k \equiv 0\), then we have the claim of Lemma 3.1 with \(J = 0\).

Case 3. Dichotomy: If the dichotomy occurs, there exist \(\mu \in (0, M)\) and non-negative function sequences \(\{f_{k, 1}\}_{k \in {\mathbb {N}}}, \{f_{k, 2}\}_{k \in {\mathbb {N}}} \subset L^1({\mathbb {R}}^n)\) which satisfy

$$\begin{aligned} f_k = f_{k, 1} + f_{k, 2} + h_{k}, \end{aligned}$$

where we define \(h_{k} \equiv f_k - (f_{k, 1} + f_{k, 2})\). More precisely, there exist \(\{x_k^{(1)}\}_{k \in {\mathbb {N}}} \subset {\mathbb {R}}^n\), \(R^{(1)} > 0\), and \(\{R_k^{(1)}\}_{k \in {\mathbb {N}}} \subset {\mathbb {R}}_+\) such that

$$\begin{aligned} f_{k, 1} = f_k|_{B_{R^{(1)}}(x_k^{(1)})},\quad f_{k, 2} = f_k|_{{\mathbb {R}}^n \setminus B_{R_k^{(1)}}(x_k^{(1)})}\quad \textrm{with}\ \lim _{k \rightarrow \infty } R_k^{(1)} = \infty \end{aligned}$$

with the following estimates:

$$\begin{aligned} \left\{ \begin{aligned}&\varlimsup _{k \rightarrow \infty } |\Vert f_{k, 1}\Vert _1 - \mu |< \frac{\varepsilon }{2}, \\&\varlimsup _{k \rightarrow \infty } |\Vert f_{k, 2}\Vert _1 - (M - \mu )|< \frac{\varepsilon }{2}, \\&\varlimsup _{k \rightarrow \infty } \Vert h_{k}\Vert _1 < \frac{\varepsilon }{2}, \end{aligned} \right. \end{aligned}$$

and

$$\begin{aligned} \lim _{k \rightarrow \infty } \textrm{dist}(\textrm{supp}\,{f_{k, 1}}, \textrm{supp}\,{f_{k, 2}}) = \infty . \end{aligned}$$
(3.2)

These properties imply that \(f_{k, 1}\), \(f_{k, 2}\), and \(h_{k}\) play the roles of the compact part, the escaping part, and the error term, respectively. First, similarly to Case 1, we set

$$\begin{aligned} {\tilde{f}}_{k, 1}(x) \equiv f_{k, 1}\left( x + x_k^{(1)}\right) = f_k\Big (x + x_k^{(1)}\Big ) \chi _{B_{R^{(1)}}(0)}(x), \end{aligned}$$

and, using the Dunford–Pettis theorem, we can extract a weakly convergent subsequence of \(\{{\tilde{f}}_{k, 1}\}\), so that we can define a profile \(F^{(1)}\) with compact support satisfying

$$\begin{aligned} \mu - \varepsilon \le \Vert F^{(1)}\Vert _1 \le \mu . \end{aligned}$$

On the other hand, the escaping part \(f_{k, 2}\) satisfies

$$\begin{aligned} \varlimsup _{k \rightarrow \infty } \Vert f_{k, 2}\Vert _1 = M - \mu . \end{aligned}$$

Secondly, we distinguish cases according to whether the mass of the escaping part is small or not. If we can take \(\mu \in (0, M)\) such that

$$\begin{aligned} \varlimsup _{k \rightarrow \infty } \Vert f_{k, 2}\Vert _1 < \frac{\varepsilon }{2}, \end{aligned}$$

then we define \(e_k \equiv f_k - F^{(1)}(x - x_k^{(1)})\), and hence, the claim in Lemma 3.1 holds with \(J = 1\). In the case of

$$\begin{aligned} \varlimsup _{k \rightarrow \infty } \Vert f_{k, 2}\Vert _1 \ge \frac{\varepsilon }{2}, \end{aligned}$$

we reset

$$\begin{aligned} {\bar{f}}_{k, 2} \equiv \frac{1}{\Vert f_{k, 2}\Vert _1} f_{k, 2}. \end{aligned}$$

Then \(\Vert {\bar{f}}_{k, 2}\Vert _1 = 1\). We apply the concentration compactness lemma to \(\{{\bar{f}}_{k, 2}\}_{k \in {\mathbb {N}}}\). Then there exists a subsequence \(\{{\bar{f}}_{k, 2}\}\) (not relabeled) such that one of the following situations occurs for \(\{{\bar{f}}_{k, 2}\}\): (1) compactness, (2) vanishing, or (3) dichotomy.

Case 3.1. Compactness: If the compactness occurs, then there exists \(\{x_k^{(2)}\}_{k \in {\mathbb {N}}} \subset {\mathbb {R}}^n\) such that

$$\begin{aligned} \int _{B_{R^{(2)}}(x_k^{(2)})} {\bar{f}}_{k, 2}(y)\, dy \ge 1 - \frac{\varepsilon }{2 (M - \mu )} \end{aligned}$$

for some radius \(R^{(2)} > 0\). We note that \(|x_k^{(1)} - x_k^{(2)}| \rightarrow \infty \) as \(k \rightarrow \infty \) by (3.2). Then we set

$$\begin{aligned} {\tilde{f}}_{k, 2}(x) \equiv f_{k, 2}(x + x_k^{(2)}) \chi _{B_{R^{(2)}}(0)}(x) \end{aligned}$$

Similarly to Case 1, we get a profile \(F^{(2)}\) as a weak limit of \(\{{\tilde{f}}_{k, 2}\}\) and define

$$\begin{aligned} e_k(x) \equiv f_k(x) - \big (F^{(1)}(x - x_k^{(1)}) + F^{(2)}(x - x_k^{(2)})\big ). \end{aligned}$$

Then the claim of Lemma 3.1 holds with \(J = 2\).

Case 3.2. Vanishing: If the vanishing occurs, similarly to Case 2, we define

$$\begin{aligned} w_k \equiv f_{k, 2},\quad e_k \equiv f_k - F^{(1)}(\cdot - x_k^{(1)}) - f_{k, 2}. \end{aligned}$$

Then we have the claim of Lemma 3.1 with \(J = 1\).

Case 3.3. Dichotomy: If the dichotomy occurs, there exist \(\nu \in (0, 1)\) and non-negative function sequences \(\{{\bar{f}}_{k, 2, 1}\}_{k \in {\mathbb {N}}}, \{{\bar{f}}_{k, 2, 2}\}_{k \in {\mathbb {N}}} \subset L^1({\mathbb {R}}^n)\) which satisfy

$$\begin{aligned} {\bar{f}}_{k, 2} = {\bar{f}}_{k, 2, 1} + {\bar{f}}_{k, 2, 2} + {\bar{h}}_k, \end{aligned}$$

where we define \({\bar{h}}_k \equiv {\bar{f}}_{k, 2} - ({\bar{f}}_{k, 2, 1} + {\bar{f}}_{k, 2, 2})\). For these sequences, there exist \(\{x_k^{(2)}\}_{k \in {\mathbb {N}}} \subset {\mathbb {R}}^n\), \(R^{(2)} > 0\), and \(\{R_k^{(2)}\}_{k \in {\mathbb {N}}} \subset {\mathbb {R}}_+\) such that

$$\begin{aligned} {\bar{f}}_{k, 2, 1} = {\bar{f}}_{k, 2}|_{B_{R^{(2)}}(x_k^{(2)})},\quad {\bar{f}}_{k, 2, 2} = {\bar{f}}_{k, 2}|_{{\mathbb {R}}^n \setminus B_{R_k^{(2)}}(x_k^{(2)})}\quad \textrm{with}\ \lim _{k \rightarrow \infty } R_k^{(2)} = \infty \end{aligned}$$

with the following estimates:

$$\begin{aligned} \left\{ \begin{aligned}&\varlimsup _{k \rightarrow \infty } |\Vert {\bar{f}}_{k, 2, 1}\Vert _1 - \nu |< \frac{\varepsilon }{4 (M - \mu )}, \\&\varlimsup _{k \rightarrow \infty } |\Vert {\bar{f}}_{k, 2, 2}\Vert _1 - (1 - \nu )|< \frac{\varepsilon }{4 (M - \mu )}, \\&\varlimsup _{k \rightarrow \infty } \Vert {\bar{h}}_k\Vert _1 < \frac{\varepsilon }{4 (M - \mu )}. \end{aligned} \right. \end{aligned}$$

By the definition of \({\bar{f}}_{k, 2}\), we can define \(h_{k, 2} \equiv \Vert f_{k, 2}\Vert _1 {\bar{h}}_k\) so that we have

$$\begin{aligned} \varlimsup _{k \rightarrow \infty } \Vert h_{k, 2}\Vert _1 < \frac{\varepsilon }{2^3}. \end{aligned}$$

If we can take \(\nu \in (0, 1)\) such that

$$\begin{aligned} \varlimsup _{k \rightarrow \infty } \Vert f_{k, 2, 2}\Vert _1 < \frac{\varepsilon }{2},\quad f_{k, 2, 2} \equiv \Vert f_{k, 2}\Vert _1 {\bar{f}}_{k, 2, 2} \end{aligned}$$

then we take a subsequence \(\{k_l\}\) which satisfies the above estimates and define

$$\begin{aligned} e_k(x) \equiv f_k(x) - F^{(1)}(x - x_k^{(1)}) - F^{(2)}(x - x_k^{(2)}). \end{aligned}$$

Then the claim in Lemma 3.1 holds with \(J = 2\). In the case of

$$\begin{aligned} \varlimsup _{k \rightarrow \infty } \Vert f_{k, 2, 2}\Vert _1 \ge \frac{\varepsilon }{2},\end{aligned}$$

we apply the same argument repeatedly.

The proof of Lemma 3.1 relies upon an induction argument, and we omit the details. Note that the argument must terminate in finitely many steps. Indeed, assume on the contrary that these steps continue indefinitely. Then it holds that

$$\begin{aligned} \varlimsup _{k \rightarrow \infty } \Vert f_{k, l, 2}\Vert _1 \ge \frac{\varepsilon }{2} \end{aligned}$$

for any \(l \in {\mathbb {N}}\). Summing with respect to l, the total mass \(\Vert f_k\Vert _1\) would diverge, that is,

$$\begin{aligned} \infty > M = \Vert f_k\Vert _1 \ge \sum _{l = 1}^\infty \Vert f_{k, l, 2}\Vert _1 \ge \sum _{l = l_0}^\infty \frac{\varepsilon }{2} = \infty . \end{aligned}$$

This contradicts the assumption of Lemma 3.1.\(\square \)

Remark

In [1], a profile decomposition in \(L^1({\mathbb {R}}^n)\) is also shown. We emphasize that Lemma 3.1 provides profiles independent of k, while the profiles in [1] may depend on k. The independence of the profiles from k is essential in the proof of Theorem 1.4.

4 Concentration

Lemma 4.1

Let the initial data \(u_0\) satisfy the assumption (1.6) with (1.7). Then the corresponding solution u(t) to (1.1) blows up in a finite time \(T_*\), and it holds that for any \(0< t < T_*\),

$$\begin{aligned} \frac{2n}{n - 2} \Vert u_0\Vert _1 \le \left\| |\nabla |^{- 1} u(t)\right\| _2^2. \end{aligned}$$
(4.1)

Proof of Lemma 4.1

For simplicity, we consider the case \(b = 2\). For the solution u(t) to the equation (1.1), by the entropy dissipation identity (1.3) and the definition of the entropy (1.4), we have

$$\begin{aligned} \int _{{\mathbb {R}}^n} u(t) \log {u(t)}\, dx - H[u_0] \le \frac{1}{2} \left\| |\nabla |^{- 1} u(t)\right\| _2^2. \end{aligned}$$

By Shannon’s inequality (2.1), it follows that

$$\begin{aligned} - \frac{n}{2} \Vert u(t)\Vert _1 \log \left( \frac{2 \pi e}{n \Vert u(t)\Vert _1^{1 + \frac{2}{n}}} \int _{{\mathbb {R}}^n} |x - {\bar{x}}|^2 u(t)\, dx \right) - H[u_0] \le \frac{1}{2} \left\| |\nabla |^{- 1} u(t)\right\| _2^2. \end{aligned}$$

Since the mass conservation (1.2) holds, this can be rewritten as

$$\begin{aligned} - \frac{n}{2} \Vert u_0\Vert _1 \log \left( \frac{2 \pi e}{n \Vert u_0\Vert _1^{1 + \frac{2}{n}}} \int _{{\mathbb {R}}^n} |x - {\bar{x}}|^2 u(t)\, dx \right) - H[u_0] \le \frac{1}{2} \left\| |\nabla |^{- 1} u(t)\right\| _2^2. \end{aligned}$$
(4.2)

Meanwhile, by the virial law (1.5) for (1.1), we have

$$\begin{aligned}&\frac{d}{dt} \int _{{\mathbb {R}}^n}|x-{\bar{x}}|^2 u(t)\, dx \nonumber \\&\quad = 2 n \Vert u_0\Vert _1+2(n-2)H[u(t)] -2(n-2)\int _{{\mathbb {R}}^n}u(t)\log u(t)\, dx \nonumber \\&\quad \le 2(n-2) \left[ H[u_0]+\frac{n}{n-2}\Vert u_0\Vert _1 +\frac{n}{2}\Vert u_0\Vert _1 \log \left( \frac{2\pi e \int _{{\mathbb {R}}^n}|x-{\bar{x}}|^2 u(t)\, dx }{ n\Vert u_0\Vert _1^{1+\frac{2}{n}} } \right) \right] , \end{aligned}$$
(4.3)

and hence, under the assumption

$$\begin{aligned} H[u_0]<&\frac{n}{2}\Vert u_0\Vert _1 \log \left( \frac{n\Vert u_0\Vert _1^{1+\frac{2}{n}}}{2\pi e^{\frac{n}{n-2}} \int |x-{\bar{x}}|^2u_0\, dx}\right) \nonumber \\ =&\frac{n}{2}\Vert u_0\Vert _1 \log \left( \frac{n\Vert u_0\Vert _1^{1+\frac{2}{n}}}{2\pi e \int |x-{\bar{x}}|^2u_0\, dx}\right) -\frac{n}{n-2}\Vert u_0\Vert _1, \end{aligned}$$
(4.4)

we have from (4.3) and (4.4)

$$\begin{aligned} \left. \frac{d}{dt}\int _{{\mathbb {R}}^n}|x-{\bar{x}}|^2 u(t)dx \right| _{t=0}\le 0. \end{aligned}$$

Since the right-hand side of (4.3) is increasing with respect to the second moment and is negative at \(t = 0\) by (4.4), a continuity argument shows that the second moment cannot exceed its initial value; hence

$$\begin{aligned} \int _{{\mathbb {R}}^n}|x-{\bar{x}}|^2 u(t)\, dx \le \int _{{\mathbb {R}}^n}|x-{\bar{x}}|^2 u_0\, dx. \end{aligned}$$
(4.5)

Thus, it follows from (4.2) and (4.5) that

$$\begin{aligned} - \frac{n}{2} \Vert u_0\Vert _1 \log \left( \frac{2 \pi e}{n \Vert u_0\Vert _1^{1 + \frac{2}{n}}} \int _{{\mathbb {R}}^n} |x - {\bar{x}}|^2 u_0\, dx \right) - H[u_0] \le \frac{1}{2} \left\| |\nabla |^{- 1} u(t)\right\| _2^2. \end{aligned}$$

From the assumption (4.4), we obtain

$$\begin{aligned} \frac{n}{n - 2} \Vert u_0\Vert _1 \le \frac{1}{2} \left\| |\nabla |^{- 1} u(t)\right\| _2^2 \end{aligned}$$

for any \(0< t < T_*\).\(\square \)

In what follows, we show Theorem 1.4 (1). We assume that the initial data \(u_0 \in L^{\frac{n}{2}}({\mathbb {R}}^n) \cap L^1_b({\mathbb {R}}^n)\) satisfies the blow-up condition (1.6) with (1.7) and that the solution u(t) to (1.1) blows up at \(T_* > 0\); namely,

$$\begin{aligned} \limsup _{t \rightarrow T_*} \Vert u(t)\Vert _\frac{n}{2} = \infty . \end{aligned}$$

Let \(\{t_k\}_{k \in {\mathbb {N}}}\) be a sequence with \(t_k \rightarrow T_*\) that attains the limit superior of \(\Vert u(t)\Vert _{\frac{n}{2}}\), i.e., \(\Vert u(t_k)\Vert _{\frac{n}{2}} \rightarrow \infty \). We introduce the scaling transform, for \(\lambda > 0\),

$$\begin{aligned} S_\lambda u(t, x) \equiv \lambda ^{- n} u(t, \lambda ^{- 1} x). \end{aligned}$$

This scaling is \(L^1\)-invariant, \(\Vert S_\lambda u(t)\Vert _1 = \Vert u(t)\Vert _1 = \Vert u_0\Vert _1\), and it also holds that

$$\begin{aligned} \Vert S_\lambda u(t)\Vert _\frac{n}{2} = \lambda ^{2 - n} \Vert u(t)\Vert _\frac{n}{2}. \end{aligned}$$

For a blow-up solution u(t) and blow-up time sequence \(\{t_k\}\), we set

$$\begin{aligned} \lambda _k \equiv \Vert u(t_k)\Vert _\frac{n}{2}^\frac{1}{n - 2}\quad \textrm{and}\quad \lambda _0 \equiv \Vert u_0\Vert _\frac{n}{2}^\frac{1}{n - 2}. \end{aligned}$$
(4.6)

Then we find that the rescaled solution

$$\begin{aligned} v(t, x) \equiv \lambda _k^{- n} u(t, \lambda _k^{- 1} x)\quad \textrm{for}\ t_{k - 1} < t \le t_k,\ x \in {\mathbb {R}}^n \end{aligned}$$

is bounded in \(L^1({\mathbb {R}}^n) \cap L^\frac{n}{2}({\mathbb {R}}^n)\). Note that the scaling is \(L^1\)-invariant, and hence the \(L^1\)-norm of v is preserved. The rescaled solution v(t, x) now solves the Cauchy problem of the semistationary system

$$\begin{aligned} \left\{ \begin{aligned}&\partial _t v - \lambda _k^2 \Delta v + \lambda _k^n \nabla \cdot (v \nabla \phi ) = 0,{} & {} t_{k - 1} < t \le t_k,\ x \in {\mathbb {R}}^n, \\&\quad - \Delta \phi = v,{} & {} t > 0,\ x \in {\mathbb {R}}^n, \\&v(0, x) = v_0(x) \equiv \lambda _0^{- n} u_0(\lambda _0^{- 1} x),{} & {} x \in {\mathbb {R}}^n, \end{aligned} \right. \end{aligned}$$
(4.7)

where \(t_0 = 0\). The solution preserves its total mass \(\Vert v(t)\Vert _1 = \Vert u(t)\Vert _1 = \Vert u_0\Vert _1\) for any time \(t \ge 0\) and \(\Vert v(t_k)\Vert _\frac{n}{2} = 1\) at \(t = t_k\).

We set the rescaled solution sequence

$$\begin{aligned} u_k(x) \equiv v(t_k, x) = S_{\lambda _k} u(t_k, x) = \lambda _k^{- n} u(t_k, \lambda _k^{-1} x) \end{aligned}$$
(4.8)

which satisfies

$$\begin{aligned} \Vert u_k\Vert _1 = \Vert u_0\Vert _1,\quad \Vert u_k\Vert _\frac{n}{2} = 1. \end{aligned}$$
(4.9)

In this case, one can apply Lemma 3.1 to \(\{u_k\}_{k \in {\mathbb {N}}}\):

Proposition 4.2

(Profile decomposition in \(L^p\)) Let \(\{u_k\}_{k \in {\mathbb {N}}}\) be a non-negative sequence in \(L^1({\mathbb {R}}^n) \cap L^{\frac{n}{2}}({\mathbb {R}}^n)\) with

$$\begin{aligned} \Vert u_k\Vert _1 = \Vert u_0\Vert _1\quad \textrm{and}\quad \Vert u_k\Vert _\frac{n}{2} = 1 \end{aligned}$$

defined above. Then for all \(\varepsilon > 0\), there exist a subsequence of \(\{u_k\}\) (not relabeled), \(J \in {\mathbb {N}}\), points \(\{x_k^{(j)}\}_{k \in {\mathbb {N}}, j = 1, \dots , J} \subset {\mathbb {R}}^n\), radii \(\{R^{(j)}\}_{j = 1}^J \subset {\mathbb {R}}_+\), and function sequences \(\{U^{(j)}\}_{j = 1}^J, \{w_k\}_{k \in {\mathbb {N}}}, \{e_k\}_{k \in {\mathbb {N}}} \subset L^1({\mathbb {R}}^n) \cap L^\frac{n}{2}({\mathbb {R}}^n)\) which satisfy the following properties:

  1. (1)

    \(u_k\) is decomposed as

    $$\begin{aligned} u_k(x) = \sum _{j = 1}^J U^{(j)}\Big (x - x_k^{(j)}\Big ) + w_k(x) + e_k(x)\quad \text {a.a.}\ x \in {\mathbb {R}}^n, \end{aligned}$$
    (4.10)

    where \(\{U^{(j)}\}_{j = 1}^J\) is a nonnegative function sequence with \(\textrm{supp}\,{U^{(j)}} \subset B_{R^{(j)}}(0)\). Moreover, for each \(k \in {\mathbb {N}}\), \(B_{R^{(j)}}(x_k^{(j)})\) and \(\textrm{supp}\,{w_k}\) are all disjoint for any \(j = 1, 2, \dots , J\). For any \(R > 0\), it holds that

    $$\begin{aligned} \lim _{k \rightarrow \infty } \sup _{x \in {\mathbb {R}}^n} \int _{B_R(x)} w_k(y)\, dy = 0 \end{aligned}$$
    (4.11)

    and the error term satisfies

    $$\begin{aligned} \varlimsup _{k \rightarrow \infty } \Vert e_k\Vert _1 < \varepsilon \end{aligned}$$
    (4.12)
  2. (2)

    Almost orthogonality:

    $$\begin{aligned} \Vert u_k\Vert _1 = \sum _{j = 1}^J \Vert U^{(j)}\Vert _1 + \Vert w_k\Vert _1 + \varepsilon , \end{aligned}$$
    (4.13)

    and for any \(j \in \{1, 2, \dots , J\}\), it follows that

    $$\begin{aligned} \Vert U^{(j)}\Vert _\frac{n}{2} \le (1 + \varepsilon ) \Vert u_k\Vert _{L^\frac{n}{2}(B_{R^{(j)}}(x_k^{(j)}))}. \end{aligned}$$
    (4.14)
  3. (3)

    Drift term estimate:

    $$\begin{aligned} \varlimsup _{k \rightarrow \infty } \left\| |\nabla |^{- 1} w_k\right\| _2^2 = \varlimsup _{k \rightarrow \infty } \left\| |\nabla |^{- 1} e_k\right\| _2^2 = 0. \end{aligned}$$
    (4.15)

We note that vanishing in the sense of Lions’ concentration compactness lemma corresponds to \(J = 0\). We emphasize that the sequence \(\{u_k\}_{k \in {\mathbb {N}}}\) is not vanishing, so that one can extract at least one profile from \(\{u_k\}\).

Proof of Proposition 4.2

We apply Lemma 3.1 to \(\{u_k\}_{k \in {\mathbb {N}}} \subset L^1({\mathbb {R}}^n)\). We note that the \(L^\frac{n}{2}\)-boundedness implies that \(\{u_k\}_{k \in {\mathbb {N}}}\) is uniformly integrable. The decomposition (4.10) and the properties (4.11), (4.12), and (4.13) follow directly from Lemma 3.1. By the construction of the profile, we see that

$$\begin{aligned} {\tilde{u}}_k \equiv u_k(\cdot + x_k^{(1)}) \chi _{B_{R^{(1)}}(0)}(\cdot ) \rightharpoonup U^{(1)}\quad \textrm{in}\ L^1(B_{R^{(1)}}(0)). \end{aligned}$$

Since \(\{{\tilde{u}}_k\}\) is uniformly bounded in \(L^\frac{n}{2}(B_{R^{(1)}}(0))\), \(\{{\tilde{u}}_k\}_{k \in {\mathbb {N}}}\) also converges to \(U^{(1)}\) weakly in \(L^\frac{n}{2}(B_{R^{(1)}}(0))\). This argument can be applied for any \(j = 1, 2, \dots , J\). Thus, we obtain (4.14). Moreover, we have

$$\begin{aligned} \Vert e_k\Vert _\frac{n}{2} \le \Vert u_k\Vert _\frac{n}{2} + \sum _{j = 1}^J \Vert U^{(j)}\Vert _\frac{n}{2} + \Vert w_k\Vert _\frac{n}{2} \le J + 2, \end{aligned}$$

which implies that \(\{e_k\}_{k \in {\mathbb {N}}}\) is uniformly bounded in \(L^\frac{n}{2}\).

In what follows, we show (4.15). By the construction of the error term \(e_k\), we have

$$\begin{aligned} \varlimsup _{k \rightarrow \infty } \Vert e_k\Vert _1 = 0. \end{aligned}$$

We note that \(\{e_k\}_{k \in {\mathbb {N}}}\) is uniformly bounded in \(L^\frac{n}{2}({\mathbb {R}}^n)\). Hence, the Hardy–Littlewood–Sobolev inequality (2.2) shows that

$$\begin{aligned} \left\| |\nabla |^{- 1} e_k\right\| _2^2 \le C_\textrm{HLS} \Vert e_k\Vert _1 \Vert e_k\Vert _\frac{n}{2} \le C \Vert e_k\Vert _1 \rightarrow 0\quad \textrm{as}\ k \rightarrow \infty . \end{aligned}$$

Next, we shall estimate the vanishing term \(w_k\). We fix \(\varepsilon > 0\) arbitrarily. For \(R > 0\), we separate the non-local term into three parts as follows:

$$\begin{aligned} \left\| |\nabla |^{- 1} w_k\right\| _2^2&= c_n \left( \iint _{\{|x - y|< R^{- 1}\}} + \iint _{\{R^{- 1}< |x - y| < R\}} + \iint _{\{|x - y| > R\}} \right) \frac{w_k(x) w_k(y)}{|x - y|^{n - 2}}\, dx\, dy \\&\equiv I_1 + I_2 + I_3, \end{aligned}$$

where \(c_n = 1/((n - 2) \omega _{n - 1})\). For the integral \(I_1\), by Young's inequality for convolutions and Hölder's inequality, we have

$$\begin{aligned} I_1 = c_n \int _{{\mathbb {R}}^n} w_k(y) \int _{|x - y|< R^{- 1}} \frac{w_k(x)}{|x - y|^{n - 2}} \, dx\, dy \le C \Vert w_k\Vert _p^2 \Vert |\cdot |^{2 - n} \chi _{|\cdot | < R^{- 1}}(\cdot )\Vert _{q}, \end{aligned}$$

where \(\chi _A(\cdot )\) denotes the characteristic function of a set A, \(1 \le p \le \frac{n}{2}\) and \(1 \le q < \frac{n}{n - 2}\) satisfy

$$\begin{aligned} 2 = \frac{2}{p} + \frac{1}{q}. \end{aligned}$$

Then the integral of the kernel can be estimated as

$$\begin{aligned} \int _{|x| < R^{- 1}} |x|^{- q (n - 2)}\, dx = \omega _{n - 1} \int _0^{R^{- 1}} r^{- q (n - 2) + n - 1}\, dr = \frac{\omega _{n - 1}}{n - q (n - 2)} R^{- n + q (n - 2)}. \end{aligned}$$

By the uniform boundedness of \(L^1\) and \(L^{\frac{n}{2}}\) for \(\{u_k\}\), there exists \(R_0 > 0\) independent of k such that for any \(R > R_0\),

$$\begin{aligned} I_1 \le C R_0^{- \frac{n}{q} + n - 2} < \frac{\varepsilon }{3}. \end{aligned}$$

For the integral \(I_3\), it follows that

$$\begin{aligned} I_3 = c_n \int _{{\mathbb {R}}^n} w_k(y) \int _{|x - y| > R} \frac{w_k(x)}{|x - y|^{n - 2}}\, dx\, dy \le c_n R^{- (n - 2)} \Vert w_k\Vert _1^2. \end{aligned}$$

Thus, there exists \(R_1 > 0\) independent of k such that for any \(R > R_1\),

$$\begin{aligned} I_3 \le C R_1^{- (n - 2)} < \frac{\varepsilon }{3}. \end{aligned}$$

We set \({\tilde{R}} \equiv \max \{R_0, R_1\}\). Lastly, the integral \(I_2\) can be estimated as

$$\begin{aligned} I_2 = c_n \int _{{\mathbb {R}}^n} w_k(y) \int _{{\tilde{R}}^{- 1}< |x - y| < {\tilde{R}}} \frac{w_k(x)}{|x - y|^{n - 2}}\, dx\, dy \le c_n {\tilde{R}}^{n - 2} \Vert w_k\Vert _1 \sup _{x \in {\mathbb {R}}^n} \int _{B_{{\tilde{R}}}(x)} w_k(y)\, dy. \end{aligned}$$

Since \(w_k\) is the vanishing term, we obtain

$$\begin{aligned} \varlimsup _{k \rightarrow \infty } I_2 \le c_n {\tilde{R}}^{n - 2} \lim _{k \rightarrow \infty } \sup _{x \in {\mathbb {R}}^n} \int _{B_{{\tilde{R}}}(x)} w_k(y)\, dy = 0. \end{aligned}$$

Thus, we conclude

$$\begin{aligned} \varlimsup _{k \rightarrow \infty } \left\| |\nabla |^{- 1} w_k\right\| _2 = 0. \end{aligned}$$

In order to extract at least one profile, we shall show that \(\{v(t_k)\}_{k \in {\mathbb {N}} \cup \{0\}}\) is not a vanishing sequence by using the bump function method (see [2, 4, 5]). We set the bump function as

$$\begin{aligned} \eta (x) \equiv (1 - |x|^2)_+^2. \end{aligned}$$

For \(\varepsilon \in (0, 1/\sqrt{3})\), the Hessian of \(\eta \) satisfies

$$\begin{aligned} H \eta \le - c_\varepsilon I\quad \textrm{for}\ |x| \le \varepsilon , \end{aligned}$$
(4.16)

where \(c_\varepsilon \equiv 4 (1 - 3 \varepsilon ^2)\); indeed, for any unit vector \(e \in {\mathbb {R}}^n\), \(e^\top H \eta (x) e = - 4 (1 - |x|^2) + 8 (x \cdot e)^2 \le - 4 + 12 |x|^2 \le - c_\varepsilon \) for \(|x| \le \varepsilon \). For \(R > 0\) and \(a \in {\mathbb {R}}^n\), we define

$$\begin{aligned} \eta _R(x) \equiv R^{- 4} (R^2 - |x - a|^2)_+^2, \end{aligned}$$

which satisfies

$$\begin{aligned} \Delta \eta _R(x) = 4 R^{- 4} (n + 2) |x - a|^2 - 4 n R^{- 2} \ge - 4 n R^{- 2}, \end{aligned}$$
(4.17)

in particular, \(\Delta \eta _R \le 0\) on \(|x - a| < R \sqrt{n/(n + 2)}\), while the concavity (4.16) holds for \(\eta _R\) in the rescaled form \(H \eta _R \le - c_\varepsilon R^{- 2} I\) on \(|x - a| \le \varepsilon R\) for \(\varepsilon \in (0, 1/\sqrt{3})\). Multiplying the equation (4.7) by \(\eta _R\) and integrating over \(x \in {\mathbb {R}}^n\), we have

$$\begin{aligned} \frac{d}{dt} \int _{{\mathbb {R}}^n} v(t) \eta _R\, dx = \lambda _k^2 \int _{{\mathbb {R}}^n} v(t) \Delta \eta _R\, dx + \lambda _k^n \int _{{\mathbb {R}}^n} v(t) \nabla \phi (t) \cdot \nabla \eta _R\, dx. \end{aligned}$$

By the property (4.17), we see that

$$\begin{aligned} \frac{d}{dt} \int _{{\mathbb {R}}^n} v(t) \eta _R\, dx \ge - 4 n R^{- 2} \lambda _k^2 \int _{B_R(a)} v(t)\, dx + \lambda _k^n \int _{{\mathbb {R}}^n} v(t) \nabla \phi (t) \cdot \nabla \eta _R\, dx. \end{aligned}$$

For the second term on the right-hand side, using the Newton potential representation of \(\nabla \phi \) and symmetrization, we decompose

$$\begin{aligned}&\quad \int _{{\mathbb {R}}^n} v(t) \nabla \phi (t) \cdot \nabla \eta _R\, dx \\&= \iint _{{\mathbb {R}}^n \times {\mathbb {R}}^n} v(t, x) (\nabla _x {\mathcal {N}}(x - y) v(t, y)) \cdot \nabla \eta _R(x)\, dx\, dy \\&= \frac{1}{2} \iint _{{\mathbb {R}}^n \times {\mathbb {R}}^n} v(t, x) v(t, y) \nabla _x {\mathcal {N}}(x - y) \cdot (\nabla _x \eta _R(x) - \nabla _y \eta _R(y))\, dx\, dy \equiv J_1 + J_2, \end{aligned}$$

where we set

$$\begin{aligned} \begin{aligned}&J_1 \equiv \frac{1}{2} \iint _{B_{\varepsilon R}(a) \times B_{\varepsilon R}(a)} v(t, x) v(t, y) \nabla _x {\mathcal {N}}(x - y) \cdot (\nabla _x \eta _R(x) - \nabla _y \eta _R(y))\, dx\, dy, \\&J_2 \equiv \frac{1}{2} \iint _{{\mathbb {R}}^n \times {\mathbb {R}}^n \setminus B_{\varepsilon R}(a) \times B_{\varepsilon R}(a)} v(t, x) v(t, y) \nabla _x {\mathcal {N}}(x - y) \cdot (\nabla _x \eta _R(x) - \nabla _y \eta _R(y))\, dx\, dy. \end{aligned} \end{aligned}$$
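The symmetrized expression used in the decomposition above is obtained in the usual way: exchanging the roles of \(x\) and \(y\) and using the antisymmetry \(\nabla {\mathcal {N}}(- z) = - \nabla {\mathcal {N}}(z)\),

$$\begin{aligned} \iint _{{\mathbb {R}}^n \times {\mathbb {R}}^n} v(t, x) v(t, y) \nabla _x {\mathcal {N}}(x - y) \cdot \nabla \eta _R(x)\, dx\, dy = - \iint _{{\mathbb {R}}^n \times {\mathbb {R}}^n} v(t, x) v(t, y) \nabla _x {\mathcal {N}}(x - y) \cdot \nabla \eta _R(y)\, dx\, dy, \end{aligned}$$

and averaging the two representations produces the factor \(1/2\).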

For \(J_1\), the integrand is rewritten as

$$\begin{aligned}&\quad \nabla _x {\mathcal {N}}(x - y) \cdot (\nabla _x \eta _R(x) - \nabla _y \eta _R(y)) \\&= - \frac{1}{\omega _{n - 1}} \frac{x - y}{|x - y|^{n}} \cdot (\nabla _x \eta _R(x) - \nabla _y \eta _R(y)). \end{aligned}$$

We fix any \(\varepsilon \in (0, 1/\sqrt{3})\). By the rescaled concavity bound (4.16) for \(\eta _R\), we have

$$\begin{aligned} J_1&= - \frac{1}{2 \omega _{n - 1}} \iint _{B_{\varepsilon R}(a) \times B_{\varepsilon R}(a)} v(t, x) v(t, y) \frac{x - y}{|x - y|^{n}} \cdot (\nabla _x \eta _R(x) - \nabla _y \eta _R(y))\, dx\, dy \\&\ge \frac{c_\varepsilon R^{- 2}}{2 \omega _{n - 1}} \iint _{B_{\varepsilon R}(a) \times B_{\varepsilon R}(a)} \frac{v(t, x) v(t, y)}{|x - y|^{n - 2}}\, dx\, dy. \end{aligned}$$
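The second inequality is precisely where the concavity enters: since \(B_{\varepsilon R}(a)\) is convex, the segment joining \(x, y \in B_{\varepsilon R}(a)\) stays in the ball, and the rescaled bound \(H \eta _R \le - c_\varepsilon R^{- 2} I\) there yields

$$\begin{aligned} (\nabla _x \eta _R(x) - \nabla _y \eta _R(y)) \cdot (x - y) = \int _0^1 (x - y)^\top H \eta _R(y + \theta (x - y)) (x - y)\, d\theta \le - c_\varepsilon R^{- 2} |x - y|^2. \end{aligned}$$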

Since we know that

$$\begin{aligned} \frac{1}{|x - y|^{n - 2}} \ge \left( \frac{1}{2 \varepsilon R}\right) ^{n - 2}\quad \textrm{for}\ |x - a|, |y - a| \le \varepsilon R,\end{aligned}$$

it follows that

$$\begin{aligned}&\quad \iint _{B_{\varepsilon R}(a) \times B_{\varepsilon R}(a)} \frac{v(t, x) v(t, y)}{|x - y|^{n - 2}}\, dx\, dy \\&\ge \left( \frac{1}{2 \varepsilon R}\right) ^{n - 2} \left( \int _{B_R(a)} v(t, x)\, dx - \int _{\varepsilon R \le |x - a| \le R} v(t, x)\, dx\right) ^2 \\&\ge \left( \frac{1}{2 \varepsilon R}\right) ^{n - 2} \left( \int _{B_R(a)} v(t, x)\, dx\right) ^2 - 2 \left( \frac{1}{2 \varepsilon R}\right) ^{n - 2} \\ {}&\quad \times \int _{B_R(a)} v(t, x)\, dx \int _{\varepsilon R \le |x - a| \le R} v(t, x)\, dx. \end{aligned}$$
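Here the last two steps only use that \(B_R(a)\) decomposes into \(B_{\varepsilon R}(a)\) and the annulus \(\{\varepsilon R \le |x - a| \le R\}\), together with the elementary bound

$$\begin{aligned} \int _{B_{\varepsilon R}(a)} v(t, x)\, dx = A - B, \qquad (A - B)^2 \ge A^2 - 2 A B, \end{aligned}$$

where \(A \equiv \int _{B_R(a)} v(t, x)\, dx\) and \(B \equiv \int _{\varepsilon R \le |x - a| \le R} v(t, x)\, dx\).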

If we set a constant as

$$\begin{aligned} C_\varepsilon \equiv \inf _{|x - a| \ge \varepsilon R} (1 - \eta _R(x)) = 1 - (1 - \varepsilon ^2)^2 > 0, \end{aligned}$$

then we have

$$\begin{aligned} - \int _{\varepsilon R \le |x - a| \le R} v(t, x)\, dx&\ge - \int _{\varepsilon R \le |x - a| \le R} v(t, x) \frac{1 - \eta _R(x)}{C_{\varepsilon }}\, dx \\&\ge - \frac{1}{C_{\varepsilon }} \left( \int _{B_R(a)} v(t, x)\, dx - \int _{{\mathbb {R}}^n} v(t, x) \eta _R(x)\, dx\right) . \end{aligned}$$

Thus, we obtain

$$\begin{aligned} J_1&= \frac{1}{2} \iint _{B_{\varepsilon R}(a) \times B_{\varepsilon R}(a)} v(t, x) v(t, y) \nabla _x {\mathcal {N}}(x - y) \cdot (\nabla _x \eta _R(x) - \nabla _y \eta _R(y))\, dx\, dy \\&\ge \frac{c_\varepsilon R^{- n}}{2 (2 \varepsilon )^{n - 2} \omega _{n - 1}} \int _{B_R(a)} v(t, x)\, dx \left[ \int _{B_R(a)} v(t, x)\, dx \right. \\ {}&\quad \left. - \frac{2}{C_{\varepsilon }} \left( \int _{B_R(a)} v(t, x)\, dx - \int _{{\mathbb {R}}^n} v(t, x) \eta _R(x)\, dx\right) \right] . \end{aligned}$$

On the other hand, recall that \(J_2\), the integral over the remaining region, is defined as

$$\begin{aligned} J_2 \equiv \frac{1}{2} \iint _{{\mathbb {R}}^n \times {\mathbb {R}}^n \setminus B_{\varepsilon R}(a) \times B_{\varepsilon R}(a)} v(t, x) v(t, y) \nabla _x {\mathcal {N}}(x - y) \cdot (\nabla _x \eta _R(x) - \nabla _y \eta _R(y))\, dx\, dy. \end{aligned}$$

In order to estimate \(J_2\), it suffices to consider the integral over

$$\begin{aligned} \{(x, y) \in {\mathbb {R}}^n \times {\mathbb {R}}^n;\ |x - a| < R,\quad |y - a| \ge \varepsilon R\} \end{aligned}$$

by the symmetry property of \(J_2\) and the fact that \(\nabla \eta _R(x)\) vanishes on the set \(\{x \in {\mathbb {R}}^n;\ |x - a| > R\}\). For \(|x - a| < R\) and \(|y - a| \ge \varepsilon R\), we have

$$\begin{aligned} \nabla _x {\mathcal {N}}(x - y) \cdot (\nabla _x \eta _R(x) - \nabla _y \eta _R(y)) \le \frac{2 R^{- 2}}{\omega _{n - 1}} \frac{1}{|x - y|^{n - 2}}. \end{aligned}$$

Thus, we have

$$\begin{aligned} |J_2| \le \frac{2 R^{- 2}}{\omega _{n - 1}} \iint _{\{|x - a| < R\} \times \{|y - a| \ge \varepsilon R\}} \frac{v(t, x) v(t, y)}{|x - y|^{n - 2}}\, dx\, dy. \end{aligned}$$

By the Hardy–Littlewood–Sobolev inequality and \(\Vert v(t_k)\Vert _\frac{n}{2} = 1\), we have

$$\begin{aligned} J_2 \ge - \frac{2 (n - 2) C_\textrm{HLS}}{\omega _{n - 1}} R^{- 2} \int _{B_R(a)} v(t_k, x)\, dx \end{aligned}$$

at \(t = t_k\). Hence, we have

$$\begin{aligned}&\quad \frac{d}{dt} \int _{{\mathbb {R}}^n} v(t) \eta _R\, dx\bigg |_{t = t_k} \\&\ge - 4 n R^{- 2} \lambda _k^2 \int _{B_R(a)} v(t_k)\, dx + \lambda _k^n (J_1 + J_2) \\&\ge - 4 n R^{- 2} \lambda _k^2 \int _{B_R(a)} v(t_k)\, dx - \frac{2 (n - 2) C_\textrm{HLS}}{\omega _{n - 1}} R^{- 2} \lambda _k^n \int _{B_R(a)} v(t_k)\, dx \\&\quad + \frac{c_\varepsilon R^{- n}}{2 (2 \varepsilon )^{n - 2} \omega _{n - 1}} \lambda _k^n \int _{B_R(a)} v(t_k)\, dx \left[ \int _{B_R(a)} v(t_k)\, dx \right. \\ {}&\quad \left. - \frac{2}{C_{\varepsilon }} \left( \int _{B_R(a)} v(t_k)\, dx - \int _{{\mathbb {R}}^n} v(t_k) \eta _R\, dx\right) \right] \\&= R^{- 2} \lambda _k^n \int _{B_R(a)} v(t_k)\, dx \left[ - 4 n \lambda _k^{- (n - 2)} + \frac{c_\varepsilon R^{- (n - 2)}}{2 (2 \varepsilon )^{n - 2} \omega _{n - 1}} \int _{B_R(a)} v(t_k)\, dx \right. \\&\quad \left. - \frac{2 (n - 2) C_\textrm{HLS}}{\omega _{n - 1}} - \frac{c_\varepsilon R^{- (n - 2)}}{C_\varepsilon (2 \varepsilon )^{n - 2} \omega _{n - 1}} \left( \int _{B_R(a)} v(t_k)\, dx - \int _{{\mathbb {R}}^n} v(t_k) \eta _R\, dx\right) \right] . \end{aligned}$$

There exist \(k_0 \in {\mathbb {N}}\) and \(\varepsilon _0 > 0\) such that for any \(k \ge k_0\),

$$\begin{aligned} - 4 n \lambda _{k}^{- (n - 2)} + \frac{c_{\varepsilon _0} R^{- (n - 2)}}{2 (2 \varepsilon _0)^{n - 2} \omega _{n - 1}} \int _{B_R(a)} v(t_{k})\, dx - \frac{2 (n - 2) C_\textrm{HLS}}{\omega _{n - 1}} > C_0 \end{aligned}$$

for some constant \(C_0 > 0\). Thus, there exists \(R_0 = R(\varepsilon _0) > 0\) such that for any \(k \ge k_0\),

$$\begin{aligned} C_0 - \frac{c_{\varepsilon _0} R_0^{- (n - 2)}}{C_{\varepsilon _0} (2 \varepsilon _0)^{n - 2} \omega _{n - 1}} \left( \int _{B_{R_0}(a)} v(t_k)\, dx - \int _{{\mathbb {R}}^n} v(t_{k}) \eta _{R_0}(x)\, dx\right) > 0 \end{aligned}$$

since \(\Vert v(t_k)\Vert _1 = 1\). Therefore, it follows that for any \(k \ge k_0\),

$$\begin{aligned} \int _{{\mathbb {R}}^n} v(t_{k}) \eta _{R_0}\, dx \ge C \end{aligned}$$

for some constant \(C > 0\), which implies that \(\{v(t_k)\}_{k \in {\mathbb {N}}}\) is not a vanishing sequence.\(\square \)

The error estimate (4.15) gives the following decomposition:

Proposition 4.3

(Profile decomposition) There exist an integer \(J \in {\mathbb {N}}\) and a non-negative function sequence \(\{U^{(j)}\}_{j = 1}^J \subset L^1({\mathbb {R}}^n) \cap L^{\frac{n}{2}}({\mathbb {R}}^n)\) with \(\textrm{supp}\,{U^{(j)}} \subset B_{R^{(j)}}(x_k^{(j)})\) for some sequences \(\{x_k^{(j)}\}_{k \in {\mathbb {N}}, j = 1, 2, \dots , J} \subset {\mathbb {R}}^n\) and \(\{R^{(j)}\}_{j = 1}^J\subset {\mathbb {R}}_+\) such that for any \(\varepsilon > 0\), there exists \(k_* \ge 1\) such that for all \(k \ge k_*\),

$$\begin{aligned} (1 - \varepsilon ) \left\| |\nabla |^{- 1} u_k\right\| _2^2 \le \sum _{j = 1}^J \left\| |\nabla |^{- 1} U^{(j)}\right\| _2^2, \end{aligned}$$
(4.18)

where the supports \(\textrm{supp}\,{U^{(j)}}\) are pairwise disjoint and \(|x_k^{(j)} - x_k^{(j')}| \rightarrow \infty \) as \(k \rightarrow \infty \) if \(j \ne j'\).

Proof of Theorem 1.4 (1)

Putting \(t = t_k\) in the inequality (4.1) and rescaling with respect to \(\lambda _k\), the right-hand side is rewritten as

$$\begin{aligned} \left\| |\nabla |^{- 1} u(t_k)\right\| _2^2&= \int _{{\mathbb {R}}^n} u(t_k, x) (- \Delta )^{- 1} u(t_k, x)\, dx \\&= \lambda _k^{2 n} \int _{{\mathbb {R}}^n} u_k(\lambda _k x) (- \Delta )^{- 1} u_k(\lambda _k x)\, dx = \lambda _k^{n - 2} \left\| |\nabla |^{- 1} u_k\right\| _2^2. \end{aligned}$$

By the auxiliary inequality (4.1), it holds that for all \(k = 1, 2, \dots \),

$$\begin{aligned} \frac{2n}{n - 2} \Vert u_0\Vert _1 \le \lambda _k^{n - 2} \left\| |\nabla |^{- 1} u_k\right\| _2^2. \end{aligned}$$
(4.19)

By the above inequality (4.19) and the profile decomposition (4.18), for any \(\varepsilon > 0\), there exists \(k_* \in {\mathbb {N}}\) such that for any \(k \ge k_*\), we see that

$$\begin{aligned} \frac{2 n}{n - 2} \Vert u_0\Vert _1 \le \frac{1}{1 - \varepsilon } \lambda _k^{n - 2} \sum _{j = 1}^J \left\| |\nabla |^{- 1} U^{(j)}\right\| _2^2. \end{aligned}$$

By the Hardy–Littlewood–Sobolev inequality (2.2) and (4.13),

$$\begin{aligned} \frac{2 n}{n - 2} \Vert u_0\Vert _1&\le \frac{C_\textrm{HLS}}{1 - \varepsilon } \lambda _k^{n - 2} \sum _{j = 1}^J \Vert U^{(j)}\Vert _1 \Vert U^{(j)}\Vert _\frac{n}{2} \\ {}&\le \frac{C_\textrm{HLS}}{1 - \varepsilon } \lambda _k^{n - 2} \max _{j = 1, 2, \dots , J} \Vert U^{(j)}\Vert _\frac{n}{2} \sum _{j = 1}^J \Vert U^{(j)}\Vert _1 \\&\le \frac{1 + \varepsilon }{1 - \varepsilon } C_\textrm{HLS} \lambda _k^{n - 2} \Vert u_0\Vert _1 \max _{j = 1, 2, \dots , J} \Vert U^{(j)}\Vert _\frac{n}{2}. \end{aligned}$$

Thus, there exists \(j_0 \in \{1, 2, \dots , J\}\) such that

$$\begin{aligned} \left( \frac{2 n}{(n - 2) C_\textrm{HLS}}\right) ^\frac{n}{2} \le \left( \frac{(1 + \varepsilon ) \lambda _k^{n - 2}}{1 - \varepsilon }\right) ^\frac{n}{2} \int _{{\mathbb {R}}^n} U^{(j_0)}(y)^\frac{n}{2}\, dy. \end{aligned}$$
(4.20)

Moreover, by (4.14), there exist \(x_k^{(j_0)} \in {\mathbb {R}}^n\) and \(R_0 > 0\) such that

$$\begin{aligned} \int _{{\mathbb {R}}^n} U^{(j_0)}(y)^\frac{n}{2}\, dy\le & {} (1 + \varepsilon )^\frac{n}{2} \int _{|y - x_k^{(j_0)}|< R_0} u_k(y)^\frac{n}{2}\, dy \\ {}= & {} (1 + \varepsilon )^\frac{n}{2} \int _{|y - x_k^{(j_0)}| < R_0} \lambda _k^{- \frac{n^2}{2}} u(t_k, \lambda _k^{- 1} y)^\frac{n}{2}\, dy. \end{aligned}$$

Substituting this into the inequality (4.20) and rescaling, we conclude that

$$\begin{aligned} \left( \frac{2 n}{(n - 2) C_\textrm{HLS}}\right) ^\frac{n}{2}&\le \left( \frac{(1 + \varepsilon )^2}{1 - \varepsilon }\right) ^\frac{n}{2} \int _{|x - \lambda _k^{- 1} x_k^{(j_0)}| < \lambda _k^{- 1} R_0} u(t_k, x)^\frac{n}{2}\, dx. \end{aligned}$$

By passing to a subsequence if necessary, there exists a point \(x_*\in {\mathbb {R}}^n\) such that \(y_k \equiv \lambda _k^{-1} x_{k}^{(j_0)} \rightarrow x_*\). Indeed, suppose that \(\{\lambda _k^{-1}x_{k}^{(j_0)}\}_{k \in {\mathbb {N}}}\) is an unbounded sequence; then, along a subsequence, \(\lambda _k^{-1}| x_{k}^{(j_0)}|\rightarrow \infty \) as \(k\rightarrow \infty \). For \(k \in {\mathbb {N}}\) sufficiently large, we see that

$$\begin{aligned} M \equiv \int _{B_{R_0}(x_k^{(j_0)})} U^{(j_0)}(y)\, dy \le \int _{B_{\lambda _k^{-1} R_0}(\lambda _k^{-1} x_k^{(j_0)})} u(t_k, x)\, dx. \end{aligned}$$

Then, we have

$$\begin{aligned}&\int _{B_{\lambda _k^{- 1} R_0}(y_k)} |x - {\bar{x}}|^2 u(t_k, x)\, dx \\&\quad \ge \frac{1}{2} \int _{B_{\lambda _k^{- 1} R_0}(y_k)} |y_k - {\bar{x}}|^2 u(t_k, x)\, dx - \int _{B_{\lambda _k^{- 1} R_0}(y_k)} |x - y_k|^2 u(t_k, x)\, dx \\&\quad \ge \frac{1}{2} M |y_k - {\bar{x}}|^2 - (\lambda _k^{- 1} R_0)^2 \Vert u_0\Vert _1 \end{aligned}$$

for any \(k \ge k_0\), where we used \(|y_k - {\bar{x}}|^2 \le 2 |x - {\bar{x}}|^2 + 2 |x - y_k|^2\). On the other hand, by the assumption on the initial data, there exists a constant \(C_0 > 0\) such that for any \(k \in {\mathbb {N}}\),

$$\begin{aligned} \int _{{\mathbb {R}}^n} |x - {\bar{x}}|^2 u(t_k, x)\, dx \le C_0. \end{aligned}$$
(4.21)

Since \(\{\lambda _k^{- 1} x_k^{(j_0)}\}_{k \in {\mathbb {N}}}\) is not bounded, there exists \(k_1 \in {\mathbb {N}}\) such that for any \(k \ge k_1\),

$$\begin{aligned} \frac{1}{2} M |y_k - {\bar{x}}|^2 - (\lambda _k^{- 1} R_0)^2 \Vert u_0\Vert _1 > C_0, \end{aligned}$$

which contradicts (4.21). Therefore, \(\{\lambda _k^{-1} x_{k}^{(j_0)}\}_{k \in {\mathbb {N}}}\) is bounded, and after passing to a further subsequence there exists a point \(x_*\in {\mathbb {R}}^n\) such that \(\lambda _k^{-1} x_{k}^{(j_0)} \rightarrow x_*\); hence, we obtain (1.10).\(\square \)

5 Proof of Theorem 1.4 (2)

In what follows, we show Theorem 1.4 (2). We assume that the initial data \(u_0 \in L^{\frac{n}{2}}({\mathbb {R}}^n) \cap L^1_b({\mathbb {R}}^n)\) satisfies the blow-up condition (1.6) and the solution u(t) to (1.1) blows up in a finite time. Namely, for some \(T_* < \infty \),

$$\begin{aligned} \limsup _{t \rightarrow T_*} \Vert u(t)\Vert _\frac{n}{2} = \infty . \end{aligned}$$

Let \(\{t_k\}_{k \in {\mathbb {N}}}\) with \(t_k \rightarrow T_*\) be a time sequence attaining this limit superior, so that \(\Vert u(t_k)\Vert _{\frac{n}{2}} \rightarrow \infty \) as \(k \rightarrow \infty \). We introduce the backward self-similar transform

$$\begin{aligned} {\tilde{u}}(s, y) \equiv (T_* - t) u(t, \sqrt{T_* - t} y)\quad \textrm{with}\ s = - \log \left( 1 - \frac{t}{T_*}\right) . \end{aligned}$$

Then \({\tilde{u}}\) satisfies

$$\begin{aligned} \left\{ \begin{aligned}&\partial _s {\tilde{u}} - \Delta {\tilde{u}} + \frac{1}{2} \nabla \cdot (y {\tilde{u}}) - \frac{n - 2}{2} {\tilde{u}} + \nabla \cdot ({\tilde{u}} \nabla {\tilde{\psi }}) = 0,{} & {} s> 0,\ y \in {\mathbb {R}}^n, \\&- \Delta {\tilde{\psi }} = {\tilde{u}},{} & {} s > 0,\ y \in {\mathbb {R}}^n, \\&\quad {\tilde{u}}(0, y) = T_* u_0(\sqrt{T_*} y),{} & {} y \in {\mathbb {R}}^n. \end{aligned} \right. \end{aligned}$$

We note that

$$\begin{aligned} \Vert {\tilde{u}}(s)\Vert _1 = e^{\frac{n - 2}{2} s} M\quad \textrm{with}\ M \equiv \Vert {\tilde{u}}(0)\Vert _1 = T_*^{- \frac{n - 2}{2}} \Vert u_0\Vert _1. \end{aligned}$$
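Indeed, by the change of variables \(x = \sqrt{T_* - t}\, y\), the conservation of mass \(\Vert u(t)\Vert _1 = \Vert u_0\Vert _1\), and \(T_* - t = T_* e^{- s}\),

$$\begin{aligned} \Vert {\tilde{u}}(s)\Vert _1 = (T_* - t)^{1 - \frac{n}{2}} \Vert u(t)\Vert _1 = (T_* - t)^{- \frac{n - 2}{2}} \Vert u_0\Vert _1 = T_*^{- \frac{n - 2}{2}} e^{\frac{n - 2}{2} s} \Vert u_0\Vert _1. \end{aligned}$$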

We set \(\{s_k\}_{k \in {\mathbb {N}}}\) as

$$\begin{aligned} s_k \equiv - \log \left( 1 - \frac{t_k}{T_*}\right) . \end{aligned}$$

We consider the scaling transform with parameter \(\lambda > 0\) given by

$$\begin{aligned} S_\lambda u(t, x) \equiv \lambda ^{- n} u(t, \lambda ^{- 1} x). \end{aligned}$$

This is the \(L^1\)-invariant scaling \(\Vert S_\lambda u(t)\Vert _1 = \Vert u(t)\Vert _1 = \Vert u_0\Vert _1\) and it also holds that

$$\begin{aligned} \Vert S_\lambda u(t)\Vert _\frac{n}{2} = \lambda ^{2 - n} \Vert u(t)\Vert _\frac{n}{2}. \end{aligned}$$
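Indeed, a one-line check:

$$\begin{aligned} \Vert S_\lambda u(t)\Vert _\frac{n}{2}^\frac{n}{2} = \int _{{\mathbb {R}}^n} \lambda ^{- \frac{n^2}{2}} u(t, \lambda ^{- 1} x)^\frac{n}{2}\, dx = \lambda ^{n - \frac{n^2}{2}} \Vert u(t)\Vert _\frac{n}{2}^\frac{n}{2}, \end{aligned}$$

and \((n - \frac{n^2}{2}) \cdot \frac{2}{n} = 2 - n\).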

For a blow-up solution u(t) and blow-up time sequence \(\{t_k\}\), we set

$$\begin{aligned} \lambda _k \equiv \Vert u(t_k)\Vert _\frac{n}{2}^\frac{1}{n - 2} = \Vert {\tilde{u}}(s_k)\Vert _\frac{n}{2}^\frac{1}{n - 2}\quad \textrm{and}\quad \lambda _0 \equiv \Vert u_0\Vert _\frac{n}{2}^\frac{1}{n - 2}. \end{aligned}$$
(5.1)
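The second equality in (5.1) is a consequence of the self-similar change of variables: for \(s = - \log (1 - t/T_*)\),

$$\begin{aligned} \Vert {\tilde{u}}(s)\Vert _\frac{n}{2}^\frac{n}{2} = \int _{{\mathbb {R}}^n} (T_* - t)^\frac{n}{2} u(t, \sqrt{T_* - t}\, y)^\frac{n}{2}\, dy = \Vert u(t)\Vert _\frac{n}{2}^\frac{n}{2}. \end{aligned}$$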

Then we find that the rescaled solution

$$\begin{aligned} v(s, x) \equiv \lambda _k^{- n} {\tilde{u}}(s, \lambda _k^{- 1} x)\quad \textrm{for}\ s_{k - 1} < s \le s_k,\ x \in {\mathbb {R}}^n \end{aligned}$$

is bounded in \(L^\frac{n}{2}({\mathbb {R}}^n)\) and \(\{e^{- \frac{n - 2}{2} s_k} v(s_k)\}_{k \in {\mathbb {N}}}\) is bounded in \(L^1({\mathbb {R}}^n)\). The rescaled solution \(v(s, x)\) now solves the Cauchy problem of a semi-stationary equation

$$\begin{aligned} \left\{ \begin{aligned}&\partial _s v - \lambda _k^2 \Delta v + \frac{\lambda _k^2}{2} \nabla \cdot (y v) - \frac{n - 2}{2} \lambda _k^2 v + \lambda _k^n \nabla \cdot (v \nabla \phi ) = 0,{} & {} s_{k - 1} < s \le s_k,\ y \in {\mathbb {R}}^n, \\&\quad - \Delta \phi = v,{} & {} s_{k - 1} < s \le s_k,\ y \in {\mathbb {R}}^n, \\&v(0, y) = v_0(y) \equiv \lambda _0^{- n} u_0(\lambda _0^{- 1} y),{} & {} y \in {\mathbb {R}}^n, \end{aligned} \right. \end{aligned}$$
(5.2)

where \(s_0 = 0\). The scaling preserves the total mass, \(\Vert v(s)\Vert _1 = \Vert {\tilde{u}}(s)\Vert _1\) for any \(s \ge 0\), and \(\Vert v(s_k)\Vert _\frac{n}{2} = 1\).

We set the rescaled solution sequence

$$\begin{aligned} v_k(y) \equiv v(s_k, y) = S_{\lambda _k} {\tilde{u}}(s_k, y) = \lambda _k^{- n} {\tilde{u}}(s_k, \lambda _k^{-1} y) \end{aligned}$$
(5.3)

which satisfies

$$\begin{aligned} e^{- \frac{n - 2}{2} s_k} \Vert v_k\Vert _1 = M\quad \textrm{and}\quad \Vert v_k\Vert _\frac{n}{2} = 1. \end{aligned}$$
(5.4)
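Both identities in (5.4) follow from the scaling relations recorded above:

$$\begin{aligned} \Vert v_k\Vert _\frac{n}{2} = \lambda _k^{2 - n} \Vert {\tilde{u}}(s_k)\Vert _\frac{n}{2} = \Vert {\tilde{u}}(s_k)\Vert _\frac{n}{2}^{- 1} \Vert {\tilde{u}}(s_k)\Vert _\frac{n}{2} = 1, \qquad \Vert v_k\Vert _1 = \Vert {\tilde{u}}(s_k)\Vert _1 = e^{\frac{n - 2}{2} s_k} M. \end{aligned}$$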

In this case, one can apply Proposition 3.1 with \(\{v_k\}_{k \in {\mathbb {N}}}\):

Proposition 5.1

(Profile decomposition in \(L^p\)) Let \(\{v_k\}_{k \in {\mathbb {N}}}\) be the non-negative sequence in \(L^1({\mathbb {R}}^n) \cap L^{\frac{n}{2}}({\mathbb {R}}^n)\) satisfying (5.4) defined above. Then for all \(\varepsilon > 0\), there exist a subsequence of \(\{v_k\}\) (not relabeled), an integer \(J \in {\mathbb {N}}\), sequences \(\{y_k^{(j)}\}_{k \in {\mathbb {N}}, j = 1, \dots , J} \subset {\mathbb {R}}^n\) and \(\{R^{(j)}\}_{j = 1}^J \subset {\mathbb {R}}_+\), and nonnegative function sequences \(\{V^{(j)}_k\}_{k \in {\mathbb {N}}}^{j = 1, \dots , J}, \{w_k\}_{k \in {\mathbb {N}}}, \{e_k\}_{k \in {\mathbb {N}}} \subset L^1({\mathbb {R}}^n) \cap L^\frac{n}{2}({\mathbb {R}}^n)\) which satisfy the following properties:

  1. (1)

    \(v_k\) is decomposed as

    $$\begin{aligned} v_k(x) = \sum _{j = 1}^J V^{(j)}_k(x - y_k^{(j)}) + w_k(x) + e_k(x)\quad \text {a.a.}\ x \in {\mathbb {R}}^n, \end{aligned}$$
    (5.5)

    where \(\{V^{(j)}_k\}_{k \in {\mathbb {N}}}^{j = 1, \dots , J}\) is a nonnegative function sequence with \(\textrm{supp}\,{V^{(j)}_k} \subset B_{R^{(j)}}(0)\). Moreover, for each \(k \in {\mathbb {N}}\), the balls \(B_{R^{(j)}}(y_k^{(j)})\), \(j = 1, 2, \dots , J\), and \(\textrm{supp}\,{w_k}\) are pairwise disjoint. For any \(R > 0\), it holds that

    $$\begin{aligned} \lim _{k \rightarrow \infty } \sup _{x \in {\mathbb {R}}^n} \int _{B_R(x)} w_k(y)^p\, dy = 0 \quad \text {for any}\ 1 \le p \le \frac{n}{2} \end{aligned}$$
    (5.6)

    and the error term satisfies

    $$\begin{aligned} \varlimsup _{k \rightarrow \infty } \Vert e_k\Vert _\frac{n}{2} < \varepsilon \end{aligned}$$
    (5.7)
  2. (2)

    Almost orthogonality:

    $$\begin{aligned} \Vert v_k\Vert _1 = \sum _{j = 1}^J \Vert V^{(j)}_k\Vert _1 + \Vert w_k\Vert _1 + \Vert e_k\Vert _1, \end{aligned}$$
    (5.8)

    and for any \(j \in \{1, 2, \dots , J\}\), it holds that

    $$\begin{aligned} \Vert V^{(j)}_k\Vert _\frac{n}{2} = \Vert v_k\Vert _{L^\frac{n}{2}(B_{R^{(j)}}(y_k^{(j)}))}. \end{aligned}$$
    (5.9)
  3. (3)

    Drift term estimate:

    $$\begin{aligned} \varlimsup _{k \rightarrow \infty } e^{- \frac{n - 2}{2} s_k} \left\| |\nabla |^{- 1} w_k\right\| _2^2 = \varlimsup _{k \rightarrow \infty } e^{- \frac{n - 2}{2} s_k} \left\| |\nabla |^{- 1} e_k\right\| _2^2 = 0. \end{aligned}$$
    (5.10)

In addition, if \(\{v_k\}_{k \in {\mathbb {N}}}\) is radially symmetric, then \(J = 1\) and \(\{y_k^{(1)}\}_{k \in {\mathbb {N}}}\) is bounded.

For the proof of Proposition 5.1, see the Appendix.

We set

$$\begin{aligned} u_k(x) \equiv \lambda _k^{- n} u(t_k, \lambda _k^{- 1} x) \end{aligned}$$

and

$$\begin{aligned} \left\{ \begin{aligned}&{\tilde{U}}^{(j)}_k(x) \equiv (T_* - t_k)^{- 1} V^{(j)}_k\left( \frac{x}{\sqrt{T_* - t_k}}\right) , \\&{\tilde{w}}_k(x) \equiv (T_* - t_k)^{- 1} w_k\left( \frac{x}{\sqrt{T_* - t_k}}\right) , \\&{\tilde{e}}_k(x) \equiv (T_* - t_k)^{- 1} e_k\left( \frac{x}{\sqrt{T_* - t_k}}\right) . \end{aligned} \right. \end{aligned}$$

Then we see that

$$\begin{aligned} u_k(x) = \sum _{j = 1}^J {\tilde{U}}^{(j)}_k(x - \sqrt{T_* - t_k}\, y_k^{(j)}) + {\tilde{w}}_k(x) + {\tilde{e}}_k(x)\quad \mathrm {a.a.}\ x \in {\mathbb {R}}^n. \end{aligned}$$
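This is nothing but (5.5) after undoing the self-similar change of variables: by the definitions of \(u_k\), \(v_k\), and \({\tilde{u}}\) above,

$$\begin{aligned} u_k(x) = \frac{1}{T_* - t_k}\, v_k\left( \frac{x}{\sqrt{T_* - t_k}}\right) , \end{aligned}$$

and each term of (5.5) rescales into the corresponding tilde quantity, the translations \(y_k^{(j)}\) becoming \(\sqrt{T_* - t_k}\, y_k^{(j)}\).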

The estimate (5.8) implies that

$$\begin{aligned} \sum _{j = 1}^J \Vert {\tilde{U}}^{(j)}_k\Vert _1 \le (1 + \varepsilon ) \Vert u_k\Vert _1 = (1 + \varepsilon ) \Vert u_0\Vert _1. \end{aligned}$$

By (5.10) and changing variables, we have

$$\begin{aligned} \varlimsup _{k \rightarrow \infty } \left\| |\nabla |^{- 1} {\tilde{w}}_k\right\| _2^2 = \varlimsup _{k \rightarrow \infty } \left\| |\nabla |^{- 1} {\tilde{e}}_k\right\| _2^2 = 0, \end{aligned}$$
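The change of variables here is the one dictated by the definitions of \({\tilde{w}}_k\) and \({\tilde{e}}_k\); for instance,

$$\begin{aligned} \left\| |\nabla |^{- 1} {\tilde{w}}_k\right\| _2^2 = (T_* - t_k)^{\frac{n - 2}{2}} \left\| |\nabla |^{- 1} w_k\right\| _2^2 = T_*^{\frac{n - 2}{2}} e^{- \frac{n - 2}{2} s_k} \left\| |\nabla |^{- 1} w_k\right\| _2^2, \end{aligned}$$

so the drift term estimate (5.10) gives the claimed limits.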

in particular, there exists \(k_* \in {\mathbb {N}}\) such that for all \(k \ge k_*\),

$$\begin{aligned} \left\| |\nabla |^{- 1} u_k\right\| _2^2 \le \frac{1}{1 - \varepsilon } \sum _{j = 1}^J \left\| |\nabla |^{- 1} {\tilde{U}}^{(j)}_k\right\| _2^2. \end{aligned}$$

Proof of Theorem 1.4 (2)

By an argument similar to the proof of Theorem 1.4 (1), for any \(\varepsilon > 0\), there exists \(k_* \in {\mathbb {N}}\) such that for any \(k \ge k_*\), we see that

$$\begin{aligned} \frac{2 n}{n - 2} \Vert u_0\Vert _1 \le \lambda _k^{n - 2} \left\| |\nabla |^{- 1} u_k\right\| _2^2&\le \frac{1 + \varepsilon }{1 - \varepsilon } C_\textrm{HLS} \lambda _k^{n - 2} \Vert u_0\Vert _1 \max _{j = 1, 2, \dots , J} \Vert V^{(j)}_k\Vert _\frac{n}{2}. \end{aligned}$$

Thus, there exists \(j_0 \in \{1, 2, \dots , J\}\) such that

$$\begin{aligned} \left( \frac{2 n}{(n - 2) C_\textrm{HLS}}\right) ^\frac{n}{2} \le \left( \frac{1 + \varepsilon }{1 - \varepsilon }\right) ^\frac{n}{2} \lambda _k^\frac{n (n - 2)}{2} \int _{{\mathbb {R}}^n} V^{(j_0)}_k(y)^\frac{n}{2}\, dy. \end{aligned}$$
(5.11)

Moreover, by (5.9), there exists \(R_0 > 0\) such that

$$\begin{aligned} \int _{{\mathbb {R}}^n} V^{(j_0)}_k(y)^\frac{n}{2}\, dy = \int _{|y - y_k^{(j_0)}|< R_0} v_k(y)^\frac{n}{2}\, dy = \int _{|y - y_k^{(j_0)}| < R_0} \lambda _k^{- \frac{n^2}{2}} {\tilde{u}}(s_k, \lambda _k^{- 1} y)^\frac{n}{2}\, dy. \end{aligned}$$

Substituting this into the inequality (5.11) and rescaling, we conclude that

$$\begin{aligned} \left( \frac{2 n}{(n - 2) C_\textrm{HLS}}\right) ^\frac{n}{2}&\le \left( \frac{1 + \varepsilon }{1 - \varepsilon }\right) ^\frac{n}{2} \lambda _k^\frac{n (n - 2)}{2} \lambda _k^{- \frac{n^2}{2}} \int _{|y - y_k^{(j_0)}|< R_0} {\tilde{u}}(s_k, \lambda _k^{- 1} y)^\frac{n}{2}\, dy \\&\le \left( \frac{1 + \varepsilon }{1 - \varepsilon }\right) ^\frac{n}{2} \int _{|x - \lambda _k^{- 1} y_k^{(j_0)}| < \lambda _k^{- 1} R_0} {\tilde{u}}(s_k, x)^\frac{n}{2}\, dx. \end{aligned}$$

Therefore, if we set \(x_k \equiv \lambda _k^{- 1} \sqrt{T_* - t_k}\, y_k^{(j_0)}\), then we obtain (1.12).\(\square \)

6 Radially symmetric case

In this section, we consider the case that \(u_0 \in L^\frac{n}{2}({\mathbb {R}}^n) \cap L^1_b({\mathbb {R}}^n)\) is radially symmetric and nonnegative for \(b > 0\). We assume that the initial data \(u_0\) satisfies the assumption (1.6) with (1.9) for \(\delta > 0\). Let \(u(t, x)\) be a blow-up solution to (1.1). We set the scaling parameter (5.1) and consider the rescaled solution (5.3).

Since the assumption on the data is different from the case \(b \ge 2\), the lower estimate has to be adjusted accordingly.

Lemma 6.1

Let the initial data \(u_0\) be radially symmetric and satisfy (1.6) with (1.9) for \(\delta > 0\), and let the corresponding radially symmetric solution \(u(t)\) to (1.1) blow up in a finite time \(T\). Then it holds for any \(0< t < T\) that

$$\begin{aligned} \frac{2 (n + \delta )}{n-2}\Vert u_0\Vert _1 \le \big \Vert |\nabla |^{-1} u(t)\big \Vert _2^2. \end{aligned}$$
(6.1)

Proof of Theorem 1.5

By applying an argument similar to the proof of Theorem 1.4 (2), we obtain (1.14). By Proposition 5.1, the centers \(\{y_k^{(1)}\}_{k \in {\mathbb {N}}}\) of the profiles \(V^{(1)}_k\) form a bounded sequence. Thus, since \(\lambda _k \rightarrow \infty \) and \(t_k \rightarrow T_*\), we have

$$\begin{aligned} \lambda _k^{- 1} \sqrt{T_* - t_k}\, y_k^{(1)} \rightarrow 0\quad \textrm{as}\ k \rightarrow \infty . \end{aligned}$$

Granting Lemma 6.1, which is proved below, we conclude the analogue of (1.12): for any \(\varepsilon > 0\),

$$\begin{aligned} \left( \frac{2 (n + \delta )}{(n-2)C_\textrm{HLS}}\right) ^\frac{n}{2} \le \varliminf _{k\rightarrow \infty } \int _{B_{\varepsilon \sqrt{T - t_k}}(y_k^{(1)})} u(t_k, x)^\frac{n}{2}\, dx, \end{aligned}$$

which implies (1.15).\(\square \)

To prove Lemma 6.1, we introduce a modified moment and derive its dynamics from a modified virial law. Let \(\phi (x)\) be a radially symmetric smooth function such that

$$\begin{aligned} \phi (x) = \left\{ \begin{aligned}&|x|^2,{} & {} 0\le |x|< 1,\\&\text {smooth},{} & {} 1\le |x|< 2,\\&|x|^b,{} & {} 2\le |x| \end{aligned} \right. \end{aligned}$$

and set \(r=|x|\). We define

$$\begin{aligned} \Phi _R(r)=R^2\phi \Big (\frac{r}{R}\Big ) \end{aligned}$$

for any \(R \ge 1\).
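Concretely, by the definition of \(\phi \),

$$\begin{aligned} \Phi _R(r) = \left\{ \begin{aligned}&r^2,{} & {} 0 \le r< R,\\&R^2 \phi (r/R),{} & {} R \le r< 2 R,\\&R^{2 - b} r^b,{} & {} 2 R \le r, \end{aligned} \right. \end{aligned}$$

so \(\Phi _R\) behaves like the second moment weight near the origin and like the \(b\)-th moment weight, up to the factor \(R^{2 - b}\), at infinity.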

Proposition 6.2

(Modified virial law) For \(\frac{n}{2}\le p<n\) and \(b>0\), let \(u\in C([0,T);L^1_b({\mathbb {R}}^n)\cap L^{p}({\mathbb {R}}^n))\) be a solution to (1.1) with non-negative initial data \(u_{0}\in L^1_b({\mathbb {R}}^n)\), \(u_{0}(x)\ge 0\). If \(n\ge 4\), or \(n=3\) and \(b<1\), then it holds for any \(t\in (0,T)\) that

$$\begin{aligned} \begin{aligned}&\frac{d}{dt}\int _{{\mathbb {R}}^n} \Phi _R(x) u(t) dx \\&\quad \le 2n\Vert u(t)\Vert _1 - (n-2) \int _{{\mathbb {R}}^n} u(t) \psi (t)\, dx + (n-2) \int _{B_R^c(0)}u(t)(-\Delta )^{-1}u(t)\, dx \\&\qquad +\int _{B_R^c(0)}\Psi _R(r) \partial _r^2\psi (t)\psi (t)\, dx +\frac{1}{4}\int _{B_R^c(0)}\Delta ^2\Phi _R |\psi (t)|^2\, dx, \end{aligned} \end{aligned}$$
(6.2)

where

$$\begin{aligned} \Psi _R(r)= \frac{\Phi _R'(r)}{r} -\Phi _R^{''}(r) = \left\{ \begin{aligned}&0,{} & {} |x|\le R,\\&\text {smooth},{} & {} R<|x|\le 2R,\\&-b(b-2)\left( \frac{r}{R}\right) ^{b-2},{} & {} 2R<|x| \end{aligned} \right. \end{aligned}$$

is supported in \(B_R^c(0)\).
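The explicit value of \(\Psi _R\) for \(|x| > 2R\) is a direct computation: there \(\Phi _R(r) = R^{2 - b} r^b\), hence

$$\begin{aligned} \frac{\Phi _R'(r)}{r} = b \left( \frac{r}{R}\right) ^{b - 2}, \qquad \Phi _R''(r) = b (b - 1) \left( \frac{r}{R}\right) ^{b - 2}, \qquad \Psi _R(r) = - b (b - 2) \left( \frac{r}{R}\right) ^{b - 2}. \end{aligned}$$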

Lemma 6.3

Let \(\Psi _R(x)\in L^1({\mathbb {R}}^n)\cap L^{\infty }({\mathbb {R}}^n)\) be a radially symmetric function supported in \(B_R^c(0)\) and let \(u(t)\) be radially symmetric.

  1. (1)

    Then there exists a constant \(C=C(\phi )>0\) depending only on \(\phi \) such that

    $$\begin{aligned} \int _{{\mathbb {R}}^n} \Psi _R(r)\partial _r^2(-\Delta )^{-1}u(t)\, (-\Delta )^{-1}u(t)\, dx \le CR^{-(n-2)} \Vert u_0\Vert _1^2. \end{aligned}$$
  2. (2)

    For any \(R>0\), it holds that

    $$\begin{aligned} \frac{1}{4}\int _{{\mathbb {R}}^n} \Delta ^2 \Phi _R |\psi (t)|^2\, dx \le C(\phi )R^{-(n-2)} \Vert u_0\Vert _1^2. \end{aligned}$$

For the proofs of Proposition 6.2 and Lemma 6.3, see [41].

Proposition 6.4

Let the initial data be radially symmetric, belong to \(L^\frac{n}{2}({\mathbb {R}}^n) \cap L^1_b({\mathbb {R}}^n)\), and satisfy the condition (1.6) with (1.9). Then

$$\begin{aligned} \int _{{\mathbb {R}}^n}|x|^b u(t)\, dx \le \int _{{\mathbb {R}}^n}|x|^b u_0\, dx. \end{aligned}$$
(6.3)

Proof of Proposition 6.4

Set \(M \equiv \Vert u_0\Vert _1\). By Lemma 6.3, we choose \(R > 0\) sufficiently large so that

$$\begin{aligned} \int _{B_R^c(0)}\Psi _R(r) \partial _r^2\psi (t)\psi (t)\, dx +\frac{1}{4}\int _{B_R^c(0)}\Delta ^2\Phi _R |\psi (t)|^2\, dx \le 2 (n - 2) C R^{- (n - 2)} M^2 \le 2 \delta M. \end{aligned}$$
(6.4)

By the modified virial law (6.2), (6.4), and the Shannon inequality (2.1), we have

$$\begin{aligned}&\frac{d}{dt}\int _{{\mathbb {R}}^n}\Phi _R(x) u(t)\, dx \nonumber \\&\quad \le 2(n-2)\left[ H[u(t)]+\frac{n}{n-2}M + \frac{\delta }{n - 2} M -\int _{{\mathbb {R}}^n}u(t)\log u(t)\, dx \right] \nonumber \\&\quad \le 2(n-2) \left[ H[u(0)]+\frac{n + \delta }{n-2}M +\frac{n}{b}M \log \left( \frac{bc_{n, b}e}{nM^{1+\frac{b}{n}}} \int _{{\mathbb {R}}^n} |x|^b u(t)\, dx \right) \right] \nonumber \\&\quad = 2(n-2) \left[ H[u_0] +\frac{n}{b}M \log \left( \frac{bc_{n, b}e^{1+\frac{b(n + \delta )}{n (n - 2)}}}{nM^{1+\frac{b}{n}}} \int _{{\mathbb {R}}^n}|x|^bu(t)\, dx \right) \right] . \end{aligned}$$
(6.5)

Hence, under the condition (1.6), the right-hand side of (6.5) is negative at \(t=0\). Then

$$\begin{aligned} X(t) \equiv \int _{{\mathbb {R}}^n}\Phi _R(x)u(t)\, dx \end{aligned}$$

is a decreasing function of \(t\) on \([0,\eta )\). Since it holds that

$$\begin{aligned} \int _{{\mathbb {R}}^n}|x|^b u(t)\, dx \le R^{b-2}X(t)+R^bM, \end{aligned}$$

the b-th moment of u does not increase if the initial data satisfies

$$\begin{aligned} H[u_0] < -\frac{n}{b}M \log \left( \frac{b c_{n, b} e^{1 + \frac{b (n + \delta )}{n (n - 2)}}}{n M^{1+\frac{b}{n}}} \int _{{\mathbb {R}}^n}|x|^bu_0\, dx \right) , \end{aligned}$$

where \(c_{n, b}\) is the constant appearing in Proposition 2.1. Thus, we obtain the inequality (6.3).\(\square \)

Proof of Lemma 6.1

From (1.3) we have

$$\begin{aligned} \begin{aligned} \int _{{\mathbb {R}}^n}u(t)\log u(t)\, dx-H[u(t)] \le \frac{1}{2} \big \Vert |\nabla |^{-1} u(t)\big \Vert _2^2. \end{aligned}\end{aligned}$$

By the Shannon inequality (Proposition 2.1), it follows that

$$\begin{aligned} \begin{aligned} \frac{n}{b}\Vert u_0\Vert _1 \log \left( \frac{ n\Vert u_0\Vert _1^{1+\frac{b}{n}} }{bc_{n,b}e\int _{{\mathbb {R}}^n}|x-{\bar{x}}|^b u(t)\, dx } \right) -H[u(t)] \le \frac{1}{2} \big \Vert |\nabla |^{-1} u(t)\big \Vert _2^2. \end{aligned} \end{aligned}$$
(6.6)

We obtain from (6.3) and (6.6) that

$$\begin{aligned} \begin{aligned} \frac{n}{b}\Vert u_0\Vert _1 \log \left( \frac{ n\Vert u_0\Vert _1^{1+\frac{b}{n}} }{bc_{n,b} e\int _{{\mathbb {R}}^n}|x-{\bar{x}}|^b u_0(x)\, dx } \right) - H[u_0]&\le \frac{1}{2} \big \Vert |\nabla |^{-1} u(t)\big \Vert _2^2. \end{aligned} \end{aligned}$$

From the assumption (1.6) with (1.9), we have

$$\begin{aligned} \frac{2 (n + \delta )}{n - 2} \Vert u_0\Vert _1 \le \left\| |\nabla |^{- 1} u(t)\right\| _2^2, \end{aligned}$$

and hence, we obtain the inequality (6.1).\(\square \)