1 Introduction

Explaining the intricate behavior of coherent structures in active and passive media composed of many interacting agents, both in mathematical biology and in technology, has attracted considerable attention in the applied mathematics community. These questions are ubiquitous in the collective behavior of animal species, the aggregation of cells by chemical cues or adhesion forces, granular media, and the self-assembly of particles, see for instance [17, 34, 36, 49] and the references therein. In many of these models, particular solutions emerge from consensus of movement, while the relative positions of the agents are determined only by attraction and repulsion effects [23, 28]. These equilibrium shapes at the continuum level can be characterized by probability measures \(\rho \) for which the balance of attractive and repulsive forces holds. This is equivalent to finding probability measures \(\rho \) such that

$$\begin{aligned} \nabla W*\rho = \int _{{\mathbb {R}}^d}\nabla W(\textbf{x}-\textbf{y})\rho (\textbf{y})\,\textrm{d}{\textbf{y}}= 0 \quad \text{ on } \text{ supp }(\rho ) \,, \end{aligned}$$
(1.1)

with \(W:{{\mathbb {R}}^d}\rightarrow (-\infty ,\infty ]\) being an attractive-repulsive interaction potential between the particles. The richness of the shapes of the support of these aggregation equilibria is quite surprising even for simple potentials [37]. Finding particular configurations satisfying (1.1) is a challenging problem due to its highly nonlinear nature: the support of the measure itself is part of the problem, and the regularity of the potential plays a key role. These configurations appear naturally as the steady states of the mean-field dynamics associated to the particle system

$$\begin{aligned} \frac{\,\textrm{d}}{\,\textrm{d}{t}}\textbf{x}_i=-\frac{1}{N} \sum _{j\ne i}\nabla W(\textbf{x}_i-\textbf{x}_j), \quad i=1,\ldots ,N\,. \end{aligned}$$
(1.2)

Notice that the system of ODEs (1.2) is the finite dimensional gradient flow of a discrete interaction energy. Its formal mean-field limit is governed by the nonlocal partial differential equation

$$\begin{aligned} \partial _t \rho + \nabla \cdot (\rho \textbf{u}) = 0\,, \end{aligned}$$
(1.3)

usually referred to as the aggregation equation, where the transport velocity field \(\textbf{u}(t,\textbf{x})\) is given by

$$\begin{aligned} \textbf{u}(t,\textbf{x}) = -\int _{{\mathbb {R}}^d}\nabla W(\textbf{x}-\textbf{y})\rho (t,\textbf{y})\,\textrm{d}{\textbf{y}}\,. \end{aligned}$$
(1.4)

The aggregation Eq. (1.3) is the 2-Wasserstein gradient flow of the total potential energy

$$\begin{aligned} E[\rho ] = \frac{1}{2}\int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d}W(\textbf{x}-\textbf{y})\rho (\textbf{y})\,\textrm{d}{\textbf{y}}\rho (\textbf{x})\,\textrm{d}{\textbf{x}}\,. \end{aligned}$$
(1.5)
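Formally, this gradient flow structure can be read off from the first variation of (1.5): since the potentials considered here are symmetric, \(W(-\textbf{x})=W(\textbf{x})\), the first variation is \(\tfrac{\delta E}{\delta \rho }=W*\rho \), and the continuity Eq. (1.3) with velocity (1.4) can be rewritten, at least for smooth solutions, as

$$\begin{aligned} \partial _t \rho = \nabla \cdot \Big (\rho \,\nabla \tfrac{\delta E}{\delta \rho }\Big ) = \nabla \cdot \big (\rho \,\nabla (W*\rho )\big )\,, \end{aligned}$$

which is the canonical form of a 2-Wasserstein gradient flow.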

In the sequel, we will denote by \(V = V[\rho ] = W*\rho \) the interaction potential generated by the particle density \(\rho \). Notice that \(\rho (\textbf{x})\) is a steady state of (1.3) if \(\textbf{u}(\textbf{x})\) satisfies (1.1), or equivalently \(V[\rho ]\) is constant on each of the connected components of \(\text {supp}\,\rho \), modulo regularity of the velocity field.
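As a concrete illustration of the dynamics (1.2), one can run a simple explicit Euler discretization for a power-law potential and monitor the particle velocities to detect an approximate steady state. The following minimal sketch is purely illustrative (it is not the numerical method of Sect. 10, and the exponents, particle number, step size and initial data are arbitrary choices):

```python
import numpy as np

def velocities(x, a=4.0, b=2.0):
    """Right-hand side of (1.2) for the power-law potential W(x) = |x|^a/a - |x|^b/b."""
    N = len(x)
    diff = x[:, None, :] - x[None, :, :]                 # pairwise differences x_i - x_j
    r = np.linalg.norm(diff, axis=-1, keepdims=True)
    r[np.arange(N), np.arange(N)] = 1.0                  # dummy value on the diagonal
    grad = (r ** (a - 2) - r ** (b - 2)) * diff          # grad W(x_i - x_j)
    grad[np.arange(N), np.arange(N)] = 0.0               # remove the i = j self-interaction
    return -grad.sum(axis=1) / N

def simulate(N=200, d=2, steps=20000, dt=1e-3, seed=0):
    x = 0.5 * np.random.default_rng(seed).standard_normal((N, d))
    for _ in range(steps):
        x = x + dt * velocities(x)                       # explicit Euler step of (1.2)
    return x

x = simulate()
print("max particle speed:", np.abs(velocities(x)).max())   # close to zero near a steady state
```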

We emphasize that even if the interaction potential W is radially symmetric, it is quite challenging to prove or disprove radial symmetry of global or local minimizers of the interaction energy (1.5). Notice that, despite the fact that the energy is rotationally invariant for radially symmetric interaction potentials, the uniqueness of global minimizers of the interaction energy, modulo translations, is not known except in particular cases [32, 40] using the linear interpolation convexity (LIC).

Essentially nothing is known about uniqueness for local minimizers. Actually, in order to discuss local minimizers of the interaction energy (1.5), we obviously need to specify the topology on probability measures that we use to measure the distance. Here, we follow previous works [2, 10, 13] that showed that transport distances between probability measures are the right tool to deal with this variational problem. We remind the reader of the main properties of transport distances in Sect. 2; in particular, the infinity Wasserstein distance \(d_\infty \) plays an important role in writing Euler-Lagrange conditions for local minimizers [2, 14].

The problem of finding global minimizers of the interaction energy, modulo translations, for particular potentials is a classical problem in potential theory [32, 43]. More precisely, for the repulsive logarithmic potential with quadratic confinement \(W(\textbf{x})=\tfrac{|\textbf{x}|^2}{2}-\ln |\textbf{x}|\) in 2D, it is known [32] that the unique global minimizer is the characteristic function of a suitable Euclidean ball. When the repulsive singularity at zero is stronger than Newtonian but still locally integrable, regularity results for \(d_\infty \)-local minimizers have been obtained [13] and the uniqueness is known for particular power-law potential cases again [11, 12, 40]. The existence of compactly supported global minimizers of the interaction energy for generic interaction potentials was obtained in [10, 47] based on the condition of H-stability of interaction potentials.
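As a quick formal check of how (1.1) determines this particular minimizer: on the interior of its support, \(V=W*\rho \) is constant, so taking the Laplacian and using \(\Delta (-\ln |\textbf{x}|)=-2\pi \delta _0\) in 2D formally gives, wherever \(\rho \) has a density,

$$\begin{aligned} 0 = \Delta (W*\rho ) = \Delta \Big (\tfrac{|\cdot |^2}{2}\Big )*\rho + \Delta (-\ln |\cdot |)*\rho = 2 - 2\pi \rho \,, \end{aligned}$$

so that \(\rho \equiv 1/\pi \) there; a unit-mass equilibrium thus occupies a region of area \(\pi \), recovering the normalized characteristic function of the unit disk.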

Some qualitative properties of the support of the minimizers are known depending on the smoothness/singularity of the potential at the origin. Interaction potentials which are at least \(C^2\) smooth at the origin generically lead to minimizers concentrated on finitely many Dirac points, for which particular geometric constraints and explicit forms are known [16, 39]. The dimensionality of the support of the \(d_\infty \)-local minimizers was estimated in [2] in terms of the strength of the repulsive singularity of weakly singular interaction potentials at the origin, showing that the more repulsive the potential is at zero, the larger the support of the \(d_\infty \)-local minimizers becomes. We refer to weakly singular repulsive–attractive potentials as potentials whose repulsive singularity at the origin lies between the Newtonian singularity and smooth quadratic behavior. Other related problems include singular anisotropic potentials [24, 25, 42], in which the explicit form of the global minimizers is known in particular cases, and interaction energies with constraints [9, 26, 30, 31]. Explicit stationary solutions of (1.1) are known for some power-law potentials [22], namely those whose attractive power is an even integer and which are weakly singular at the origin. They were expected to be global minimizers, a conjecture supported by strong numerical evidence [33].

As a conclusion, a key problem not covered by the current literature is to find sufficient conditions, by variational methods, for radial symmetry or the breaking of radial symmetry of \(d_\infty \)-local minimizers for weakly singular repulsive–attractive potentials. In order to attack these issues, our first objective is to further exploit and refine the convexity properties of the interaction energy to study the radial symmetry and uniqueness of \(d_\infty \)-local minimizers of (1.5). The crucial assumption on the interaction energy is the LIC property, which basically means that E is strictly convex along the linear interpolation between any two measures \(\rho _0\) and \(\rho _1\). It is well-known that the global minimizer is unique and radially-symmetric under the LIC assumption for certain particular functionals [38, 40, 42].

The main goal of the first part of this work, Sects. 3, 4 and 5, is to find sufficient conditions leading to radial symmetry of local minimizers of (1.5) and its consequences. Assuming the LIC property, we first show that any \(d_\infty \)-local minimizer of the interaction energy (1.5) is radially-symmetric, see Theorem 3.1. Then, by imposing an extra assumption on the sign of \(\Delta ^2 W\), we obtain the uniqueness of \(d_\infty \)-local minimizers modulo translations, see Theorem 4.1. As a test case of our theory, we apply our results to the power-law potentials \(W(\textbf{x}) = \frac{|\textbf{x}|^a}{a}-\frac{|\textbf{x}|^b}{b}\), and identify the ranges of a and b for which we show the radial symmetry and uniqueness of \(d_\infty \)-local minimizers, see Theorem 5.1. In particular, we prove that some of the steady states given by the explicit formula in [22] are the global minimizers of the interaction energy (1.5), and moreover the unique \(d_\infty \)-local minimizers, see also related results in one dimension [29]. This also confirms accurate numerical simulations of equilibrium measures [33].

We emphasize that Sects. 3, 4 and 5 together lead to the first results in the literature proving radial symmetry and uniqueness of \(d_\infty \)-local minimizers of the interaction energy (1.5) for a general family of interaction potentials. The importance of showing radial symmetry and uniqueness comes not only from the variational viewpoint but also from the evolutionary viewpoint of gradient flows associated to the interaction energy. The connection to the long time asymptotics of the corresponding aggregation equation [1, 2, 3, 4, 6, 8, 15] is not explored in this work, although there are still important open problems under different assumptions on the interaction potential. Nevertheless, we remark that the radial symmetry of steady states and the gradient flow structure of the aggregation equation are crucial properties for showing precise long time asymptotics, both for the aggregation equation with particular interaction potentials in [7, 11, 12, 46] and for the aggregation–diffusion equations in [20, 21, 27, 35, 44, 45].

The next main question that we want to address in this work is to give sufficient conditions allowing for non-radial local/global minimizers and even fractal behavior of the structure of their support. The radial symmetry is broken for interaction potentials at least \(C^2\) smooth at the origin. The support of their stationary states consists of a finite number of isolated Dirac points under suitable conditions [16], and some particular configurations such as simplices appear as asymptotic limits for power-law potentials [39]. Stationary states with complex structure have been reported in the numerical literature and by studying the stability/instability of Delta ring solutions [1, 5, 48]. The dimensionality of the support of stationary states was estimated in [2] as mentioned earlier. In this quest, one can wonder whether fractal behavior of the support of the minimizers appears among the natural family of power-law potentials. It was numerically observed in [2] that stationary states for power-law potentials seem not to show fractal behavior, i.e., the dimension of the support of the steady states seems to be an integer. We show in Sect. 6, Theorem 6.1, that this is in fact the case in one dimension, i.e., fractal behavior is not possible at least in one dimension for some power-law potentials.

The main result of the second part of this work is to give a generic family of potentials for which we have fractal-like behavior. In order to achieve this, Sect. 7 introduces a novel notion of concavity of the interaction potential W allowing us to show certain fractal behavior on superlevel sets of \(d_\infty \)-local minimizers of the interaction energy (1.5). This notion of concavity is based on negative regions of the Fourier transform of the potential, in contrast to the LIC property and a slightly stronger notion of convexity, the Fourier-LIC (FLIC) property defined in Sect. 2. More precisely, the main result, Theorem 7.1, asserts that if the interaction potential is infinitesimal-concave, then any superlevel set of \(d_\infty \)-local minimizers of the interaction energy (1.5) does not contain interior points. Sect. 8 provides explicit constructive examples of infinitesimal-concave potentials in any dimension based on a careful modification of power-law kernels in Fourier variables, see Theorem 8.1. We finally show in Corollary 8.4 that for these potentials the behavior of the support of the corresponding \(d_\infty \)-local minimizers of the interaction energy (1.5) is almost fractal, in the sense that the interior of any superlevel set is empty, and moreover the support does not contain isolated points. A related idea of breaking the symmetry of minimizers of the interaction energy (1.5) via a finite number of unstable Fourier modes for the uniform distribution on the sphere was described in [50].

Finally, Sect. 9 provides another constructive example in which a steady state is the uniform distribution on a Cantor set, see Theorems 9.2 and 9.6. The main idea of this construction is to produce a potential in a recursive hierarchical manner that introduces some kind of concavity at a sequence of small scales. These three Sects. 7, 8 and 9 show altogether that the behavior of the support of \(d_\infty \)-local minimizers of infinitesimal-concave potentials can be highly sophisticated. This is further corroborated by the numerical simulations in Sect. 10 illustrating, by means of particle methods, the intricate fractal-like structures of the steady states. One of the remaining open problems is to prove or disprove the fractal behavior of \(d_\infty \)-local minimizers for weakly singular power-law like interaction potentials in \(d\ge 2\).

2 Preliminaries and convexity

We write \({\mathcal {P}}({\mathbb {R}}^d)\) for the set of Borel probability measures. Given \(1\le p<\infty \), we write \({\mathcal {P}}_p({\mathbb {R}}^d)\) for the subset of \({\mathcal {P}}({\mathbb {R}}^d)\) of measures with finite pth moment. The pth Wasserstein distance \(d_p(\mu ,\nu )\) between two probability measures \(\mu \) and \(\nu \) belonging to \({\mathcal {P}}_p({\mathbb {R}}^d)\) is

$$\begin{aligned} d_p(\mu ,\nu ) = \min _{\pi \in \Pi (\mu ,\nu )} \left( \int _{{\mathbb {R}}^d\times {\mathbb {R}}^d} |\textbf{x}-\textbf{y}|^p \,\textrm{d}\pi (\textbf{x},\textbf{y}) \right) ^{1/p} , \end{aligned}$$

where \(\Pi (\mu ,\nu )\) is the set of transport plans between \(\mu \) and \(\nu \); i.e., \(\Pi (\mu ,\nu )\) is the subset of \({\mathcal {P}}({{\mathbb {R}}^d}\times {{\mathbb {R}}^d})\) of measures with \(\mu \) as first marginal and \(\nu \) as second marginal. We also define the \(\infty \)-Wasserstein distance \(d_\infty (\mu ,\nu )\), whenever \(\mu \) and \(\nu \) are compactly supported, by

$$\begin{aligned} d_\infty (\mu ,\nu ) = \inf _{\pi \in \Pi (\mu ,\nu )} \sup _{(\textbf{x},\textbf{y}) \in \text {supp}\,\pi } |\textbf{y}-\textbf{x}|, \end{aligned}$$

where \(\text {supp}\,\pi \) denotes the support of \(\pi \).
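For intuition, when \(\mu \) and \(\nu \) are one-dimensional empirical measures with the same number of equally weighted atoms, the optimal plan is the monotone rearrangement, so all of these distances can be computed by sorting; a small illustrative sketch (not used anywhere below):

```python
import numpy as np

def wasserstein_1d(x, y, p=2.0):
    """d_p between (1/n) sum_i delta_{x_i} and (1/n) sum_i delta_{y_i} in 1D (equal weights)."""
    xs, ys = np.sort(np.asarray(x, dtype=float)), np.sort(np.asarray(y, dtype=float))
    gaps = np.abs(xs - ys)                       # the monotone rearrangement is optimal in 1D
    if np.isinf(p):
        return gaps.max()                        # d_infty
    return (np.mean(gaps ** p)) ** (1.0 / p)     # d_p

x = np.random.default_rng(1).uniform(0.0, 1.0, 500)
y = x + 0.1                                      # translate the sample by 0.1
print(wasserstein_1d(x, y, 2), wasserstein_1d(x, y, np.inf))   # both ~ 0.1
```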

We will adopt the following notation for the Fourier transform and its inverse

$$\begin{aligned} {\mathcal {F}}[f](\xi ) = {\hat{f}}(\xi ) = \int _{{\mathbb {R}}^d}f(\textbf{x})e^{-i\textbf{x}\cdot \xi }\,\textrm{d}{\textbf{x}},\quad {\mathcal {F}}^{-1}[g](\textbf{x})= {\check{g}}(\textbf{x}) = \frac{1}{(2\pi )^d}\int _{{\mathbb {R}}^d}g(\xi )e^{i\textbf{x}\cdot \xi }\,\textrm{d}{\xi }\,, \end{aligned}$$

for all \(\xi ,\textbf{x}\in {{\mathbb {R}}^d}\). Then \( {\hat{\delta }} = 1,\, {\check{\delta }} = (2\pi )^{-d} \), and

$$\begin{aligned} {\mathcal {F}}\left[ \exp \left( -\frac{|\cdot |^2}{2}\right) \right] (\xi ) =(2\pi )^{d/2}\exp (-\frac{|\xi |^2}{2})\,. \end{aligned}$$
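This normalization is easy to double-check numerically; a quick 1D sanity check of the convention above, via a plain Riemann sum on a truncated grid (purely illustrative), reads:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)     # truncated grid; the Gaussian is negligible at the ends
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)

for xi in (0.0, 0.7, 1.3):
    fhat = np.sum(f * np.exp(-1j * x * xi)) * dx        # hat f(xi) = int f(x) e^{-i x xi} dx
    exact = np.sqrt(2 * np.pi) * np.exp(-xi**2 / 2)     # (2 pi)^{1/2} e^{-|xi|^2/2} for d = 1
    print(xi, abs(fhat - exact))                        # tiny errors
```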

We will use in this work different notions of convexity for the interaction energy functional (1.5). For the sake of notational simplicity, we will denote by E either the energy functional (1.5) acting on probability measures or the bilinear form acting on signed measures associated to the interaction potential W. We also denote particle densities by \(\rho \) whenever they are probability measures and by \(\mu \) when they are signed measures. For notational simplicity, we will use \(\rho (\textbf{x}) \,\,\textrm{d}{\textbf{x}}\) for the integration against the measure \(\rho \), whether or not it can be identified with a Lebesgue integrable function. In the sequel, we also use C and c to refer to generic positive constants.

We start by the simplest notion of linear interpolation convexity (LIC): we say that the interaction energy E is LIC, if for any probability measures \(\rho _0,\rho _1\in {\mathcal {P}}_2({\mathbb {R}}^d)\), \(\rho _0\ne \rho _1\), such that \(E[\rho _0]<\infty \) and \(E[\rho _1]<\infty \) with the same total mass and center of mass, the function \(t\mapsto E[(1-t)\rho _0 + t \rho _1]\) is strictly convex, or equivalently, \(E[\mu ]> 0\) for every nonzero signed measure \(\mu \in {\mathcal {M}}({\mathbb {R}}^d)\) with \(\int _{{\mathbb {R}}^d}\mu (\textbf{x})\,\textrm{d}{\textbf{x}}=\int _{{\mathbb {R}}^d}\textbf{x}\mu (\textbf{x})\,\textrm{d}{\textbf{x}}=0\) and \(E[|\mu |]<\infty \). Notice the equivalence by taking \(\mu =\rho _0-\rho _1\). This notion of convexity has been classically used in statistical mechanics, see [38].
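Indeed, writing \(\mu =\rho _1-\rho _0\) and using that E extends to a quadratic form on signed measures, one has, whenever all the integrals involved are finite,

$$\begin{aligned} E[(1-t)\rho _0+t\rho _1] = E[\rho _0] + t\int _{{\mathbb {R}}^d}(W*\rho _0)(\textbf{x})\,\mu (\textbf{x})\,\textrm{d}{\textbf{x}} + t^2 E[\mu ]\,, \end{aligned}$$

so that \(\frac{\,\textrm{d}^2}{\,\textrm{d}{t}^2}E[(1-t)\rho _0+t\rho _1] = 2E[\mu ]\), and strict convexity in t is precisely the condition \(E[\mu ]>0\).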

A stronger convexity property is the following: if for any \(0<r<R<\infty \), there exists \(c>0\) such that

$$\begin{aligned} E[\mu ] \ge c\int _{r\le |\xi | \le R} |{\hat{\mu }}(\xi )|^2\,\textrm{d}{\xi }\,, \end{aligned}$$
(2.1)

then we say E has Fourier linear interpolation convexity (FLIC). It is straightforward to check that FLIC implies LIC for the interaction energy. As already mentioned in the introduction, these notions of convexity were used in [40], where the author showed that the interaction energy (1.5) associated to the attractive potential \(W(\textbf{x})=\tfrac{|\textbf{x}|^a}{a}\) has the LIC property for \(2\le a \le 4\), and that the interaction energy (1.5) associated to the repulsive potential \(W(\textbf{x})=-\tfrac{|\textbf{x}|^b}{b}\) has the FLIC property for \(-d< b < 0\), since its Fourier transform is shown to be \(c|\xi |^{-d-b}\) for some \(c>0\).
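At least formally (for sufficiently nice W and \(\mu \)), the link with the Fourier side is Plancherel's identity,

$$\begin{aligned} E[\mu ] = \frac{1}{2}\int _{{\mathbb {R}}^d}(W*\mu )(\textbf{x})\,\mu (\textbf{x})\,\textrm{d}{\textbf{x}} = \frac{1}{2(2\pi )^d}\int _{{\mathbb {R}}^d}{\hat{W}}(\xi )\,|{\hat{\mu }}(\xi )|^2\,\textrm{d}{\xi }\,, \end{aligned}$$

so that an interaction potential whose Fourier transform is positive, and bounded below by a positive constant on each annulus \(r\le |\xi |\le R\), yields (2.1).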

It is clear that LIC implies the uniqueness of global minimizers for the interaction energy (1.5) in \({\mathcal {P}}_2({\mathbb {R}}^d)\) with fixed total mass and center of mass. Observe also that the sum of an LIC potential and an (F)LIC potential is an (F)LIC potential. The positivity of the Fourier transform of the interaction potential, related to the FLIC convexity, has been used to prove uniqueness of minimizers for interaction energies with asymmetric potentials, see [24, 42]; it is also related to the concept of H-stability in statistical mechanics used to detect phase transitions in aggregation–diffusion equations [18, 19] and to the existence of compactly supported minimizers for the interaction energy [10, 47].

It is important to remark that linear interpolation convexity of the interaction energy as defined above is totally different from displacement convexity of (1.5) in the optimal transportation sense, see [41]. In fact, the energy functional (1.5) is strictly displacement convex as soon as the potential W is strictly convex.

We now recall the necessary conditions for local minimizers of the interaction energy (1.5) obtained in [2, 14]. The following conditions come from Euler-Lagrange variational arguments using the mass constraint, and they are related to obstacle-like problems as in [13]. Although further hypotheses on the interaction potential W may be considered in various places below, we minimally assume that

[Hypotheses (H) on the interaction potential W.]

From now on, we will assume without loss of generality that the interaction potential \(W\ge 0\). Notice that these assumptions imply that \(V=W*\rho \ge 0\) is always a lower semicontinuous function for all \(\rho \in {\mathcal {P}}({\mathbb {R}}^d)\), see [2, Lemma 2]. The results in [2, Proposition 1], together with [13, Remark 2.3], give

Lemma 2.1

If \(\rho \in {\mathcal {P}}({\mathbb {R}}^d)\) is a compactly supported local energy minimizer in the \(d_\infty \)-sense for the energy E with interaction potential W satisfying (H), then there exists \(\epsilon _0>0\), such that for any \(\textbf{x}\in \text {supp}\,\rho \),

$$\begin{aligned} V(\textbf{y}) \ge V(\textbf{x}),\quad \text{ a.e. } \textbf{y}\in B(\textbf{x};\epsilon _0). \end{aligned}$$

Lemma 2.2

If \(\rho \in {\mathcal {P}}_2({\mathbb {R}}^d)\) is a local energy minimizer in the \(d_2\)-sense for the energy E with interaction potential W satisfying (H), then V is constant on \(\text {supp}\,\rho \) in the sense that \(V(\textbf{x})=C_\rho :=2 E[\rho ]\) \(\rho \)-a.e., and

$$\begin{aligned} V(\textbf{x}) \ge C_\rho ,\quad \text{ a.e. } \textbf{x}\in {{\mathbb {R}}^d}\, . \end{aligned}$$
(2.2)

In order to show the FLIC property of the interaction energy, one can reduce to showing (2.1) for compactly supported signed measures \(\mu \).

Lemma 2.3

Assume the interaction potential W satisfies (H) and that the associated interaction energy E satisfies (2.1) for compactly supported signed measures \(\mu \). Then the interaction energy E is FLIC.

Proof

To see this, assume that we already proved (2.1) for compactly supported \(\mu \in {\mathcal {M}}({\mathbb {R}}^d)\). Then let \(\mu \in {\mathcal {M}}({\mathbb {R}}^d)\) be a signed measure with \(E[|\mu |]<\infty \) and \(\mu =\rho _0-\rho _1\) for some \(\rho _0,\rho _1\in {\mathcal {P}}_2({\mathbb {R}}^d)\) with the same center of mass. Then

$$\begin{aligned} \int _{{{\mathbb {R}}^d}} \mu (\textbf{x})\,\textrm{d}{\textbf{x}} = \int _{{{\mathbb {R}}^d}} \textbf{x}\mu (\textbf{x})\,\textrm{d}{\textbf{x}} = 0\,, \end{aligned}$$
(2.3)

and one can write \(\mu =\mu _+-\mu _-\) as the positive and negative parts, with \(m_\mu :=\int _{{{\mathbb {R}}^d}}\mu _+\,\textrm{d}{\textbf{x}}=\int _{{{\mathbb {R}}^d}}\mu _-\,\textrm{d}{\textbf{x}}>0\) and \(E[\mu _+]<\infty ,\,E[\mu _-]<\infty \).

Fix a compactly supported smooth non-negative radial function \(\psi (\textbf{x})\) with \(\int _{{{\mathbb {R}}^d}}\psi (\textbf{x})\,\textrm{d}{\textbf{x}}=1\). Let \(N\in {\mathbb {N}}\) be sufficiently large, and let

$$\begin{aligned} {\tilde{\mu }}_N = \lambda _N\mu _+\chi _{[-N,N]^d}-\mu _-\chi _{[-N,N]^d},\quad \lambda _N:=\frac{\int _{[-N,N]^d}\mu _-\,\textrm{d}{\textbf{x}}}{\int _{[-N,N]^d}\mu _+\,\textrm{d}{\textbf{x}}} \end{aligned}$$

be a compactly supported signed measure. Since \(\lim _{N\rightarrow \infty }\int _{[-N,N]^d}\mu _+\,\textrm{d}{\textbf{x}} = m_\mu >0\), \(\lambda _N\) is well-defined for sufficiently large N with \(\lim _{N\rightarrow \infty }\lambda _N=1\), and we have \(\int _{{{\mathbb {R}}^d}}{\tilde{\mu }}_N\,\textrm{d}{\textbf{x}}=0\).

Then define

$$\begin{aligned}{} & {} \mu _N(\textbf{x}) = {\tilde{\mu }}_N(\textbf{x}) + c_N(\psi (\textbf{x}-\textbf{x}_N)-\psi (\textbf{x}+\textbf{x}_N)),\\{} & {} |\textbf{x}_N|=1,\,c_N\ge 0,\quad 2c_N\textbf{x}_N = -\int _{{{\mathbb {R}}^d}}\textbf{x}{\tilde{\mu }}_N(\textbf{x})\,\textrm{d}{\textbf{x}}. \end{aligned}$$

Then \(\mu _N\) is compactly supported, satisfies \(\int _{{{\mathbb {R}}^d}} \mu _N(\textbf{x})\,\textrm{d}{\textbf{x}}=0\), and

$$\begin{aligned}\begin{aligned} \int _{{{\mathbb {R}}^d}} \textbf{x}\mu _N(\textbf{x})\,\textrm{d}{\textbf{x}} =&\int _{{{\mathbb {R}}^d}} \textbf{x}{\tilde{\mu }}_N(\textbf{x})\,\textrm{d}{\textbf{x}} + c_N\int _{{{\mathbb {R}}^d}} \textbf{x}\psi (\textbf{x}-\textbf{x}_N)\,\textrm{d}{\textbf{x}}- c_N\int _{{{\mathbb {R}}^d}} \textbf{x}\psi (\textbf{x}+\textbf{x}_N)\,\textrm{d}{\textbf{x}} \\ =&\int _{{{\mathbb {R}}^d}} \textbf{x}{\tilde{\mu }}_N(\textbf{x})\,\textrm{d}{\textbf{x}} + c_N\textbf{x}_N + c_N\textbf{x}_N = 0. \end{aligned}\end{aligned}$$

Also, \(\lim _{N\rightarrow \infty }c_N=0\). Then we will show that

$$\begin{aligned}{} & {} |E[\mu ]-E[\mu _N]| \\{} & {} \quad \le |E[\mu ]-E[\mu \chi _{[-N,N]^d}]|+|E[\mu \chi _{[-N,N]^d}]-E[{\tilde{\mu }}_N]|+|E[{\tilde{\mu }}_N]-E[\mu _N]| \end{aligned}$$

converges to zero as \(N\rightarrow \infty \). For the first term on the RHS, this is a consequence of \(E[|\mu |]<\infty \) since

$$\begin{aligned}{} & {} \left| \iint _{(x,y)\notin [-N,N]^d\times [-N,N]^d}\!\!\!\!\!\!W(\textbf{x}-\textbf{y})\mu (\textbf{y})\mu (\textbf{x})\,\textrm{d}{\textbf{y}}\,\textrm{d}{\textbf{x}}\right| \\{} & {} \quad \le \iint _{(x,y)\notin [-N,N]^d\times [-N,N]^d}\!\!\!\!\!\!W(\textbf{x}-\textbf{y})|\mu (\textbf{y})||\mu (\textbf{x})|\,\textrm{d}{\textbf{y}}\,\textrm{d}{\textbf{x}} \end{aligned}$$

and the RHS converges to zero as \(N\rightarrow \infty \). For the second term, this is a consequence of \(E[|\mu |]<\infty \) and \(\lim _{N\rightarrow \infty }\lambda _N=1\), since \({\tilde{\mu }}_N=\mu \chi _{[-N,N]^d}+(\lambda _N-1)\mu _+\chi _{[-N,N]^d}\) and then

$$\begin{aligned}{} & {} |E[\mu \chi _{[-N,N]^d}]-E[{\tilde{\mu }}_N]| \\{} & {} \quad = \left| (\lambda _N-1)\int _{{{\mathbb {R}}^d}}\int _{{{\mathbb {R}}^d}}W(\textbf{x}-\textbf{y})\mu (\textbf{y})\chi _{[-N,N]^d}(\textbf{y})\,\textrm{d}{\textbf{y}}\mu _+(\textbf{x})\chi _{[-N,N]^d}(\textbf{x})\,\textrm{d}{\textbf{x}}\right. \\{} & {} \quad \quad \, \left. + (\lambda _N-1)^2E[\mu _+\chi _{[-N,N]^d}]\right| \end{aligned}$$

with the last integral and \(E[\mu _+\chi _{[-N,N]^d}]\) being finite.

For the third term, we first take a non-negative compactly-supported radial smooth function \(\Psi \) with \(\Psi (\textbf{x}-\textbf{z})\ge \psi (\textbf{x}-\textbf{x}_1)\) for any \(|\textbf{x}_1|=1\) and

$$\begin{aligned} |\textbf{z}|\le 2\frac{\int _{{{\mathbb {R}}^d}}|\textbf{x}\mu (\textbf{x})|\,\textrm{d}{\textbf{x}}}{\int _{{{\mathbb {R}}^d}} |\mu |\,\textrm{d}{\textbf{x}}}. \end{aligned}$$
(2.4)

Notice that \(E[\Psi ]<\infty \). Take \(M>0\) large enough so that \(\int _{{{\mathbb {R}}^d}}|\mu |\chi _{[-M,M]^d}\,\textrm{d}{\textbf{x}}\ge \frac{1}{2}\int _{{{\mathbb {R}}^d}}|\mu |\,\textrm{d}{\textbf{x}}\). Then the FLIC property for compactly supported measures implies an estimate along a linear interpolation curve

$$\begin{aligned} E\left[ \frac{1}{2}\Lambda _M\Psi (\cdot -\textbf{z}_M)+\frac{1}{2}|\mu |\chi _{[-M,M]^d}\right]&\le \max \{E[\Lambda _M\Psi ],E[|\mu |\chi _{[-M,M]^d}]\} \\&\le \max \left\{ \left( \int _{{{\mathbb {R}}^d}}| \mu |\,\textrm{d}{\textbf{x}}\right) ^2E[\Psi ],E[|\mu |]\right\} =C \end{aligned}$$

with

$$\begin{aligned} \Lambda _M = \frac{1}{\Vert \Psi \Vert _{L^1}}\int _{{{\mathbb {R}}^d}}|\mu |\chi _{[-M,M]^d}\,\textrm{d}{\textbf{x}},\quad \textbf{z}_M = \frac{1}{\int _{{{\mathbb {R}}^d}}|\mu |\chi _{[-M,M]^d}\,\textrm{d}{\textbf{x}}}\int _{{{\mathbb {R}}^d}}\textbf{x}|\mu (\textbf{x})|\chi _{[-M,M]^d}\,\textrm{d}{\textbf{x}} \end{aligned}$$

since both \(\Lambda _M\Psi (\cdot -\textbf{z}_M)\) and \(|\mu |\chi _{[-M,M]^d}\) have the same total mass and center of mass. Also, since \(\int _{{{\mathbb {R}}^d}}|\mu |\chi _{[-M,M]^d}\,\textrm{d}{\textbf{x}}\ge \frac{1}{2}\int _{{{\mathbb {R}}^d}}|\mu |\,\textrm{d}{\textbf{x}}\), \(\textbf{z}_M\) satisfies (2.4). This implies

$$\begin{aligned} \int _{{{\mathbb {R}}^d}}\int _{{{\mathbb {R}}^d}}W(\textbf{x}-\textbf{y})|\mu (\textbf{y})|\chi _{[-M,M]^d}(\textbf{y})\,\textrm{d}{\textbf{y}}\psi (\textbf{x}-\textbf{x}_1)\,\textrm{d}{\textbf{x}} \le C \end{aligned}$$

for any M sufficiently large and \(|\textbf{x}_1|=1\). Taking \(M\rightarrow \infty \), we get

$$\begin{aligned} \int _{{{\mathbb {R}}^d}}\int _{{{\mathbb {R}}^d}}W(\textbf{x}-\textbf{y})|\mu (\textbf{y})|\,\textrm{d}{\textbf{y}}\psi (\textbf{x}-\textbf{x}_1)\,\textrm{d}{\textbf{x}} \le C. \end{aligned}$$

Therefore, we obtain that

$$\begin{aligned}\begin{aligned} |E[{\tilde{\mu }}_N]-E[\mu _N]| \le \,&c_N \int _{{{\mathbb {R}}^d}}\int _{{{\mathbb {R}}^d}}W(\textbf{x}-\textbf{y})|{\tilde{\mu }}_N(\textbf{y})|\,\textrm{d}{\textbf{y}}(\psi (\textbf{x}-\textbf{x}_N)+\psi (\textbf{x}+\textbf{x}_N))\,\textrm{d}{\textbf{x}} \\ {}&+ c_N^2E[\psi (\textbf{x}-\textbf{x}_N)+\psi (\textbf{x}+\textbf{x}_N)] \\ \le \,&c_N C\int _{{{\mathbb {R}}^d}}\int _{{{\mathbb {R}}^d}}W(\textbf{x}-\textbf{y})|\mu (\textbf{y})|\,\textrm{d}{\textbf{y}}\Psi (\textbf{x})\,\textrm{d}{\textbf{x}} + 4c_N^2E[\Psi ] \le C(c_N+c_N^2) \\ \end{aligned}\end{aligned}$$

converges to zero as \(N\rightarrow \infty \), since \(\lim _{N\rightarrow \infty }c_N=0\).

Similarly one can show that \(\lim _{N\rightarrow \infty }\int _{r\le |\xi | \le R} |{\hat{\mu }}_N(\xi )|^2\,\textrm{d}{\xi }=\int _{r\le |\xi | \le R} |{\hat{\mu }}(\xi )|^2\,\textrm{d}{\xi }\). In fact, we write

$$\begin{aligned}{} & {} \left| \int _{r\le |\xi | \le R} |{\hat{\mu }}(\xi )|^2\,\textrm{d}{\xi }-\int _{r\le |\xi | \le R} |{\hat{\mu }}_N(\xi )|^2\,\textrm{d}{\xi } \right| \nonumber \\{} & {} \le \left| \int _{r\le |\xi | \le R} |{\hat{\mu }}(\xi )|^2\,\textrm{d}{\xi }-\int _{r\le |\xi | \le R} |{\mathcal {F}}(\mu \chi _{[-N,N]^d})(\xi )|^2\,\textrm{d}{\xi }\right| \nonumber \\{} & {} + \left| \int _{r\le |\xi | \le R} |{\mathcal {F}}(\mu \chi _{[-N,N]^d})(\xi )|^2\,\textrm{d}{\xi }-\int _{r\le |\xi | \le R} |\hat{{\tilde{\mu }}}_N(\xi )|^2\,\textrm{d}{\xi }\right| \nonumber \\{} & {} + \left| \int _{r\le |\xi | \le R} |\hat{{\tilde{\mu }}}_N(\xi )|^2\,\textrm{d}{\xi }-\int _{r\le |\xi | \le R} |{\hat{\mu }}_N(\xi )|^2\,\textrm{d}{\xi }\right| . \end{aligned}$$
(2.5)

First notice that \({\hat{\mu }}, {\mathcal {F}}(\mu \chi _{[-N,N]^d}), \hat{{\tilde{\mu }}}_N\) are uniformly bounded in \(L^\infty ({{\mathbb {R}}^d})\). The first term in (2.5) converges to zero as \(N\rightarrow \infty \) since

$$\begin{aligned} |{\mathcal {F}}(\mu -\mu \chi _{[-N,N]^d})(\xi )| = |{\mathcal {F}}(\mu \chi _{([-N,N]^d)^c})(\xi )| \le \int _{([-N,N]^d)^c} |\mu |\,\textrm{d}{\textbf{x}} \rightarrow 0,\quad \text { as }N\rightarrow \infty \end{aligned}$$

uniformly in \(\xi \in {{\mathbb {R}}^d}\). The second term converges to zero since

$$\begin{aligned} |{\mathcal {F}}(\mu \chi _{[-N,N]^d}-{\tilde{\mu }}_N)(\xi )| = |(\lambda _N-1){\hat{\mu }}_+(\xi )| \le |\lambda _N-1|\int _{{{\mathbb {R}}^d}} |\mu |\,\textrm{d}{\textbf{x}} \rightarrow 0,\quad \text { as }N\rightarrow \infty . \end{aligned}$$

The third term converges to zero since

$$\begin{aligned}{} & {} |{\mathcal {F}}({\tilde{\mu }}_N-\mu _N)(\xi )| = |c_N{\mathcal {F}}(\psi (\cdot -\textbf{x}_N)-\psi (\cdot +\textbf{x}_N))(\xi )| \\{} & {} \le 2c_N\int _{{{\mathbb {R}}^d}} \psi \,\textrm{d}{\textbf{x}} \rightarrow 0,\quad \text { as }N\rightarrow \infty . \end{aligned}$$

The compactly supported signed measure \(\mu _N\) satisfies (2.3), and therefore satisfies (2.1) by assumption. Then (2.1) for \(\mu \) follows by taking \(N\rightarrow \infty \). \(\square \)

In order to obtain further consequences of the LIC convexity of the interaction energy, we need stronger assumptions on the potential W:

[Hypotheses (H-s) on the interaction potential W.]

These hypotheses can be verified for power-law interaction potentials. An important consequence of the LIC convexity of the interaction energy is:

Theorem 2.4

Assume the interaction energy E associated to a potential W satisfying (H-s) is LIC and that there exists a global minimizer of E in \({\mathcal {P}}_2({\mathbb {R}}^d)\). Then any \(\rho \in {\mathcal {P}}_2({\mathbb {R}}^d)\) satisfying the necessary condition (2.2) for \(d_2\)-local minimizers is the global minimizer.

To show this, we need a technical lemma where the stronger hypotheses are essential.

Lemma 2.5

Consider the interaction energy E associated to a potential W satisfying (H-s) and suppose \(\rho \in {\mathcal {P}}_2({\mathbb {R}}^d)\) is compactly supported and satisfies \(E[\rho ]<\infty \). Let \(\psi \) be a non-negative, radially decreasing, smooth, compactly supported mollifier supported on \(B_1\) with \(\int _{{{\mathbb {R}}^d}}\psi \,\textrm{d}{\textbf{x}}=1\), and denote \(\psi _\alpha (\textbf{x}) = \frac{1}{\alpha ^d}\psi (\frac{\textbf{x}}{\alpha })\). Then \(E[\rho *\psi _\alpha ]<\infty \) for sufficiently small \(\alpha >0\), and

$$\begin{aligned} \lim _{\alpha \rightarrow 0+}E[\rho *\psi _\alpha ]=E[\rho ]. \end{aligned}$$

Proof

Notice it is straightforward to show that \(\rho *\psi _\alpha \) converges weakly to \(\rho \) as \(\alpha \rightarrow 0+\), and to conclude, by the lower semicontinuity of E with respect to weak convergence, that \(E[\rho ]\le \liminf _{\alpha \rightarrow 0+}E[\rho *\psi _\alpha ]\). This lemma improves this \(\liminf \) result to a limit. Let \(R>0\) be such that \(\text {supp}\,\rho \subseteq B_{R/2}\), and write

$$\begin{aligned} E[\rho *\psi _\alpha ]=\frac{1}{2}\int _{{{\mathbb {R}}^d}}(W*\rho *\psi _\alpha )(\textbf{x})(\rho *\psi _\alpha )(\textbf{x})\,\textrm{d}{\textbf{x}}=\frac{1}{2}\int _{B_{R/2}}(W*\psi _\alpha *\psi _\alpha *\rho )(\textbf{x})\rho (\textbf{x})\,\textrm{d}{\textbf{x}}. \end{aligned}$$

Notice that \(\Psi _\alpha =\psi _\alpha *\psi _\alpha \) is also a compactly supported (on \(B_{2\alpha }\)) non-negative radially decreasing smooth function with \(\int _{{{\mathbb {R}}^d}}\Psi _\alpha \,\textrm{d}{\textbf{x}}=1\). Therefore, for any \(\textbf{x}\in B_{R/2}\),

$$\begin{aligned} (W*\Psi _\alpha )(\textbf{x})= & {} \int _{B_{2\alpha }}W(\textbf{x}-\textbf{y})\Psi _\alpha (\textbf{y})\,\textrm{d}{\textbf{y}} = \int _{B_{2\alpha }}W(\textbf{x}-\textbf{y})\int _0^{\Psi _\alpha (\textbf{y})}\,\textrm{d}{h}\,\textrm{d}{\textbf{y}}\\{} & {} \quad = \int _0^{\Psi _\alpha (0)}\!\! \int _{B_{\Psi _\alpha ^{-1}(h)}}W(\textbf{x}-\textbf{y})\,\textrm{d}{\textbf{y}}\,\textrm{d}{h}, \end{aligned}$$

where \(\Psi _\alpha ^{-1}\) denotes the inverse function of \(\Psi _\alpha \), viewed as a decreasing function of \(r\in [0,2\alpha ]\). By the assumptions (H-s), if \(\alpha \le R/2\), we have

$$\begin{aligned}{} & {} \int _0^{\Psi _\alpha (0)}\!\! \int _{B_{\Psi _\alpha ^{-1}(h)}}W(\textbf{x}-\textbf{y})\,\textrm{d}{\textbf{y}}\,\textrm{d}{h} \le C_R\int _0^{\Psi _\alpha (0)} W(\textbf{x})\int _{B_{\Psi _\alpha ^{-1}(h)}}\!\!\!\,\textrm{d}{\textbf{y}}\,\textrm{d}{h} \\{} & {} \quad = C_R W(\textbf{x})\int _{B_{2\alpha }}\Psi _\alpha (\textbf{y})\,\textrm{d}{\textbf{y}} = C_R W(\textbf{x}). \end{aligned}$$

This implies

$$\begin{aligned} E[\rho *\psi _\alpha ]\le \frac{C_R}{2}\int _{B_{R/2}}(W*\rho )(\textbf{x})\rho (\textbf{x})\,\textrm{d}{\textbf{x}} = C_R E[\rho ] \end{aligned}$$

i.e., \(E[\rho *\psi _\alpha ]\) is finite.

Then write

$$\begin{aligned} E[\rho *\psi _\alpha ]=\frac{1}{2}\int _{{{\mathbb {R}}^d}}\int _{{{\mathbb {R}}^d}}(W*\Psi _\alpha )(\textbf{x}-\textbf{y})\rho (\textbf{y})\,\textrm{d}{\textbf{y}}\rho (\textbf{x})\,\textrm{d}{\textbf{x}}. \end{aligned}$$
(2.6)

We now use the continuity assumptions on W in (H-s). If W is continuous on \({\mathbb {R}}^d\), then \((W*\Psi _\alpha )(\textbf{x}-\textbf{y})\) converges to \(W(\textbf{x}-\textbf{y})\) at every \((\textbf{x},\textbf{y})\in {{\mathbb {R}}^d}\times {{\mathbb {R}}^d}\). Otherwise, in case \(W(0)=+\infty \), by the continuity of W away from 0, \((W*\Psi _\alpha )(\textbf{x}-\textbf{y})\) converges to \(W(\textbf{x}-\textbf{y})\) at every \((\textbf{x},\textbf{y})\in {{\mathbb {R}}^d}\times {{\mathbb {R}}^d}\) with \(\textbf{x}\ne \textbf{y}\). We also have that the diagonal set \(\textbf{x}=\textbf{y}\) is negligible with respect to the product measure \(\rho (\textbf{y})\,\textrm{d}{\textbf{y}}\rho (\textbf{x})\,\textrm{d}{\textbf{x}}\) since \(E[\rho ]<\infty \). Therefore we see that \((W*\Psi _\alpha )(\textbf{x}-\textbf{y})\) converges to \(W(\textbf{x}-\textbf{y})\) almost everywhere with respect to the measure \(\rho (\textbf{y})\,\textrm{d}{\textbf{y}}\rho (\textbf{x})\,\textrm{d}{\textbf{x}}\).

In both cases, since we proved that \((W*\Psi _\alpha )(\textbf{x}-\textbf{y})\le C_R W(\textbf{x}-\textbf{y})\) for any \(\textbf{x}\ne \textbf{y}\), the integral in (2.6) is dominated by the integral

$$\begin{aligned} \frac{1}{2}\int _{{{\mathbb {R}}^d}}\int _{{{\mathbb {R}}^d}}C_R W(\textbf{x}-\textbf{y})\rho (\textbf{y})\,\textrm{d}{\textbf{y}}\rho (\textbf{x})\,\textrm{d}{\textbf{x}} = C_R E[\rho ], \end{aligned}$$

and the dominated convergence theorem gives the conclusion. \(\square \)
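The convergence in Lemma 2.5 is easy to visualize numerically; a rough Monte Carlo sketch in 1D (taking \(W(x)=|x|\) and \(\rho \) uniform on \([0,1]\) as a simple test case, with a triangular kernel standing in for the mollifier; all of these choices are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
W = lambda z: np.abs(z)                          # test potential W(x) = |x| in 1D
n = 2000
x = rng.uniform(0, 1, n)                         # samples of rho = uniform on [0, 1]

def energy(samples):                             # Monte Carlo estimate of E[rho] = 0.5 E[W(X - X')]
    return 0.5 * W(samples[:, None] - samples[None, :]).mean()

print("E[rho] ~", energy(x), "(exact value 1/6)")
for alpha in (0.5, 0.1, 0.01):
    z = alpha * (rng.uniform(0, 1, n) - rng.uniform(0, 1, n))   # Z ~ triangular kernel on [-alpha, alpha]
    print("alpha =", alpha, " E[rho * psi_alpha] ~", energy(x + z))
# the mollified energies approach E[rho] as alpha -> 0+
```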

Proof of Theorem 2.4

Notice that the global minimizer is unique once we fix the center of mass according to our discussion above. Let \(\rho _\infty \in {\mathcal {P}}_2({\mathbb {R}}^d)\) be the global energy minimizer with the same center of mass, and assume on the contrary that \(\rho \ne \rho _\infty \). Define \(\rho _1=\rho _\infty *\psi _\alpha \) with \(\psi _\alpha \) as defined in Lemma 2.5, and

$$\begin{aligned} \rho _t = (1-t)\rho + t \rho _1 \end{aligned}$$

as the linear interpolation curve, satisfying \(\frac{\,\textrm{d}^2}{\,\textrm{d}{t}^2}E[\rho _t] \ge 0,\,\forall t\in (0,1)\). Notice that \(E[\rho _\infty ] < E[\rho ]\) since \(\rho \ne \rho _\infty \), taking into account the uniqueness of the global minimizer. By Lemma 2.5, \(E[\rho _1]<E[\rho ]\) if \(\alpha \) is sufficiently small. Then, by the convexity of \(t\mapsto E[\rho _t]\), we see that

$$\begin{aligned} \frac{\,\textrm{d}}{\,\textrm{d}{t}}\Big |_{t=0}E[\rho _t] < 0\,. \end{aligned}$$

On the other hand, notice that, by the bilinearity of E and the necessary condition (2.2) for \(d_2\)-local minimizers, denoting by \(m_0=1\) the common total mass of \(\rho \) and \(\rho _1\),

$$\begin{aligned}{} & {} \frac{\,\textrm{d}}{\,\textrm{d}{t}}\Big |_{t=0}E[\rho _t] = \int _{{\mathbb {R}}^d}V(\textbf{x})(\rho _1(\textbf{x})-\rho (\textbf{x}))\,\textrm{d}{\textbf{x}} \\{} & {} = \int _{{\mathbb {R}}^d}V(\textbf{x})\rho _1(\textbf{x})\,\textrm{d}{\textbf{x}} - C_\rho m_0 \ge \int _{{\mathbb {R}}^d}C_\rho \rho _1(\textbf{x})\,\textrm{d}{\textbf{x}} - C_\rho m_0 = 0 \end{aligned}$$

where the inequality uses the fact that \(\rho _1=\rho _\infty *\psi _\alpha \) is a smooth (in particular absolutely continuous) function, so that the possible zero-measure exceptional set in (2.2) does not contribute. This gives a contradiction. \(\square \)

We now focus on properly defining the concept of steady state for the interaction energy E and the aggregation Eq. (1.3) we are dealing with. We will denote by \(\partial V(\textbf{x})\) the subdifferential of V at the point \(\textbf{x}\in {{\mathbb {R}}^d}\).

Definition 2.6

We say \(\rho \in {\mathcal {P}}({\mathbb {R}}^d)\) is a steady state of the interaction energy E if \(0\in \partial V (\textbf{x})\) \(\rho \)-a.e.

Remark 2.7

Assume that \(\rho \in {\mathcal {P}}({\mathbb {R}}^d)\) is a steady state with \(V(\textbf{x})\in C^1({{\mathbb {R}}^d})\). Then \(\textbf{u}=-\nabla W*\rho \) is continuous on \({{\mathbb {R}}^d}\) and \(\textbf{u}=0\) on \(\text {supp}\,\rho \). Moreover, \(\rho \) is a stationary distributional solution to (1.3).

Notice also that any \(d_\infty \)-local minimizer is a steady state of the interaction energy E.

3 Radial symmetry of \(d_\infty \)-local minimizers and steady states

In this section, we give sufficient conditions for the radial symmetry of \(d_\infty \)-local minimizers and of locally stable steady states (in the sense of Remark 2.7) of the aggregation Eq. (1.3).

Theorem 3.1

Assume \(d\ge 2\) and W satisfies (H), W is radially symmetric, and the interaction energy E is LIC. Then every compactly supported \(d_\infty \)-local minimizer \(\rho \in {\mathcal {P}}({\mathbb {R}}^d)\) is radially symmetric.

Furthermore, if E is FLIC and \(\rho \in {\mathcal {P}}({\mathbb {R}}^d)\) is a compactly supported steady state of the interaction energy E such that \(\nabla ^2 W*\rho \) is continuous and \(\nabla ^2 W*\rho \ge 0\) (i.e., positive semi-definite) on \(\text {supp}\,\rho \), then \(\rho \) is radially symmetric.

This theorem is proven in several steps, and we first focus on the 2D case, arguing by contradiction: for local minimizers we construct better competitors in case of asymmetry, while for steady states we combine a quantitative convexity estimate with the behavior of the Hessian of V near the support. The multi-dimensional case can be treated similarly by choosing a plane of rotation along which \(\rho \) is not rotationally symmetric to reach a contradiction, but it needs some further technical details. Without loss of generality, we may assume due to translational invariance that a given \(d_\infty \)-local minimizer / steady state has zero center of mass \(\int _{{\mathbb {R}}^d}\textbf{x}\rho (\textbf{x})\,\textrm{d}{\textbf{x}}=0\). Let \({\mathcal {R}}_\theta f\) denote the rotation of a function f by the angle \(\theta \):

$$\begin{aligned} ({\mathcal {R}}_\theta f)(\textbf{x}) := f(R_\theta ^{-1}\textbf{x}),\quad R_\theta = \begin{pmatrix} \cos \theta &{} -\sin \theta \\ \sin \theta &{} \cos \theta \end{pmatrix}\,. \end{aligned}$$

Proof of Theorem 3.1 (local minimizer case, for \(d=2\)) Assume on the contrary that a compactly supported \(d_\infty \)-local minimizer \(\rho \) is not radially symmetric. Define

$$\begin{aligned} \rho _\theta := \frac{1}{2}(\rho + {\mathcal {R}}_\theta \rho )\,. \end{aligned}$$

By the radial symmetry of W, we have \(E[\rho ] = E[{\mathcal {R}}_\theta \rho ]\). Then by LIC, we see that \(E[\rho _\theta ]<E[\rho ]\) since \(\rho \ne {\mathcal {R}}_\theta \rho \) for sufficiently small \(\theta \ne 0\). On the other hand, notice that \(d_\infty (\rho ,\rho _\theta ) \le R\theta \) where \(R=\max _{\textbf{x}\in \text {supp}\,\rho }|\textbf{x}|<\infty \), as seen from the transport plan that leaves half of the mass of \(\rho \) in place and rotates the other half by \(R_\theta \). Therefore, by the definition of \(d_\infty \)-local minimizer, we have \(E[\rho _\theta ]\ge E[\rho ]\) for \(|\theta |\) small enough, leading to a contradiction. \(\square \)

To prove the statement on steady states, we first give a lower bound of the linear interpolation convexity.

Lemma 3.2

Assume \(d=2\) and the interaction energy E is FLIC. Assume \(\rho \in {\mathcal {P}}({\mathbb {R}}^d)\) with zero center of mass is not radially-symmetric. Then there exists \(c>0\) such that for small \(\theta >0\),

$$\begin{aligned} E[\rho -{\mathcal {R}}_\theta \rho ] \ge c\theta ^2\,. \end{aligned}$$

To prove this, we first give a lemma:

Lemma 3.3

Let \(f\in L^2({\mathbb {T}})\) be a function defined on the torus \({\mathbb {T}}\). Then for sufficiently small \(|\theta | \le c_f\),

$$\begin{aligned} \Vert f(\cdot ) - f(\cdot -\theta )\Vert _{L^2_{{\mathbb {T}}}}^2 \ge c\theta ^2 \Vert f-{\bar{f}}\Vert _{L^2_{{\mathbb {T}}}}^2,\quad {\bar{f}} = \frac{1}{|{\mathbb {T}}|}\int _{{\mathbb {T}}}f(x)\,\textrm{d}{x} \end{aligned}$$

where c is an absolute constant.

Proof

Notice that by Fourier series expansion, we can write

$$\begin{aligned} \Vert f(\cdot ) - f(\cdot -\theta )\Vert _{L^2}^2 = \sum _{k\in {\mathbb {Z}},\,k\ne 0} |{\hat{f}}(k)(1-e^{ik\theta })|^2 \end{aligned}$$

and

$$\begin{aligned} \Vert f-{\bar{f}}\Vert _{L^2}^2 = \sum _{k\in {\mathbb {Z}},\,k\ne 0} |{\hat{f}}(k)|^2 \,. \end{aligned}$$

Therefore, there exists \(K=K_f\) such that

$$\begin{aligned} \sum _{k\in {\mathbb {Z}},\,k\ne 0, |k|\le K} |{\hat{f}}(k)|^2 \ge \frac{1}{2}\Vert f-{\bar{f}}\Vert _{L^2}^2\,. \end{aligned}$$

We observe that \( |1-e^{ik\theta }|^2 \ge \sin ^2(k\theta ) \ge c\theta ^2\), for all \(|k|\le K\), if \(|\theta |\le 0.1/K\). Then the conclusion follows.

\(\square \)
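Lemma 3.3 is easy to test numerically; a small sketch on a uniform grid of the torus (the test function, the grid size and the values of \(\theta \) are arbitrary):

```python
import numpy as np

def f(x):                                        # an arbitrary smooth test function on the torus
    return np.cos(x) + 0.5 * np.sin(3 * x) + 0.2 * np.cos(7 * x)

n = 4096
x = 2 * np.pi * np.arange(n) / n
rhs = np.mean((f(x) - f(x).mean()) ** 2)         # ||f - fbar||^2 (up to the factor 2*pi)

for theta in (0.1, 0.01, 0.001):
    lhs = np.mean((f(x) - f(x - theta)) ** 2)    # ||f - f(. - theta)||^2 (same normalization)
    print(theta, lhs / (theta ** 2 * rhs))       # the ratio stays bounded away from zero
```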

Proof of Lemma 3.2

Due to the FLIC assumption (2.1) of the interaction energy, it suffices to show that

$$\begin{aligned} \int _{R_1\le |\xi |\le R_2} |{\hat{\rho }}-{\mathcal {R}}_{-\theta }{\hat{\rho }}|^2\,\textrm{d}{\xi } \ge c\theta ^2 \end{aligned}$$

for some \(0<R_1<R_2\). Since \(\rho \) is not radially symmetric, we have the same property for \({\hat{\rho }}\), which further implies

$$\begin{aligned} \int _0^\infty \Vert f(r,\cdot )-{\bar{f}}(r)\Vert _{L^2_{{\mathbb {T}}}}^2 r\,\textrm{d}{r} > 0, \end{aligned}$$

where we denote \(f(r,\phi ) = {\hat{\rho }}(r(\cos \phi ,\sin \phi )^T)\), \(\phi \in [0,2\pi ]\), and \({\bar{f}}(r)=\frac{1}{2\pi }\int _{{\mathbb {T}}}f(r,\phi )\,\textrm{d}{\phi }\) is the angular average of f. Therefore, there exists \(\epsilon >0\) such that

$$\begin{aligned} S:= \{r: \Vert f(r,\phi )-{\bar{f}}(r)\Vert _{L^2_{{\mathbb {T}}}}^2 > \epsilon \} \end{aligned}$$

has positive measure. We may assume \(S\subseteq [R_1,R_2]\) for some \(0<R_1<R_2\), by replacing S with a subset of itself if necessary. For each \(r\in S\), let \(c_r\) denote the constant \(c_f\) in Lemma 3.3 with \(f = f(r,\cdot )\). We can express the set S as a union of nested sets

$$\begin{aligned} S = \bigcup _{\delta>0} S_\delta ,\quad S_\delta :=\{r\in S: c_r > \delta \} \end{aligned}$$

for any \(\delta >0\); since S has positive measure and the sets \(S_\delta \) increase as \(\delta \) decreases to zero, we conclude that \(|S_\delta |>0\) for some \(\delta >0\). Therefore, using Lemma 3.3, we infer that

$$\begin{aligned} \Vert f(r,\cdot )-f(r,\cdot -\theta )\Vert _{L^2_{{\mathbb {T}}}}^2 \ge c\theta ^2 \Vert f(r,\cdot )-{\bar{f}}(r)\Vert _{L^2_{{\mathbb {T}}}}^2 \ge c\epsilon \theta ^2,\quad \forall |\theta |\le \delta , r \in S_\delta \,, \end{aligned}$$

where c is an absolute constant. Therefore, we finally obtain

$$\begin{aligned} \int _{R_1\le |\xi |\le R_2} |{\hat{\rho }}-{\mathcal {R}}_{-\theta }{\hat{\rho }}|^2\,\textrm{d}{\xi } \ge 2\pi \int _{S_\delta } \Vert f(r,\cdot )-f(r,\cdot +\theta )\Vert _{L^2_{{\mathbb {T}}}}^2 r \,\textrm{d}{r} \ge c\epsilon |S_\delta | R_1 \theta ^2 \end{aligned}$$

Then the conclusion follows. \(\square \)

Proof of Theorem 3.1 (steady state case, for \(d=2\))

Assume on the contrary that \(\rho \) is a steady state satisfying the assumptions but not radially symmetric. Define

$$\begin{aligned} \rho _\theta := \frac{1}{2}(\rho + {\mathcal {R}}_\theta \rho )\,. \end{aligned}$$

Then Lemma 3.2 shows that for small \(|\theta |\),

$$\begin{aligned} E[\rho _\theta ] \le E[\rho ] - c\theta ^2 \end{aligned}$$
(3.1)

by noticing that \(\frac{\,\textrm{d}^2}{\,\textrm{d}{t}^2} E[(1-t)\rho +t {\mathcal {R}}_\theta \rho ] = 2E[\rho -{\mathcal {R}}_\theta \rho ]\).

Since \(\rho \) is a steady state, \(\nabla V=0\) on \(\text {supp}\,\rho \) by definition. By the continuity of \(\nabla ^2 V\) and its semi-positive-definiteness on \(\text {supp}\,\rho \), for any \(\epsilon >0\), there exists \(\delta >0\) such that

$$\begin{aligned} \nabla ^2 V(\textbf{x}) \ge -2\epsilon I_d \quad \text{ for } \text{ all } \textbf{x}\in {{\mathbb {R}}^d}\text { with } \textrm{dist}(\textbf{x},\text {supp}\,\rho )\le \delta \,. \end{aligned}$$

Here, we have used the classical notation for the order of square matrices, with \(I_d\) being the identity matrix in dimension d. Therefore, if \(\textbf{x}_1\in \text {supp}\,\rho \) and \(|\textbf{x}_2-\textbf{x}_1|\le \delta \), then

$$\begin{aligned} V(\textbf{x}_2) \ge V(\textbf{x}_1)-\epsilon |\textbf{x}_2-\textbf{x}_1|^2 \end{aligned}$$

by Taylor expansion.

Notice that

$$\begin{aligned} E[\rho _\theta ] = \frac{1}{2}E[\rho ] + \frac{1}{4}\int _{{{\mathbb {R}}^d}} ({\mathcal {R}}_\theta \rho )(\textbf{x}) V(\textbf{x})\,\textrm{d}{\textbf{x}} \qquad \text{ and } \qquad E[\rho ] = \frac{1}{2}\int _{{{\mathbb {R}}^d}} \rho (\textbf{x}) V(\textbf{x})\,\textrm{d}{\textbf{x}} \end{aligned}$$

due to rotational symmetry of W. Then, we have

$$\begin{aligned} \int _{{{\mathbb {R}}^d}} ({\mathcal {R}}_\theta \rho )(\textbf{x}) V(\textbf{x})\,\textrm{d}{\textbf{x}} - 2E[\rho ]= & {} \int _{{{\mathbb {R}}^d}} \Big (({\mathcal {R}}_\theta \rho )(\textbf{x})-\rho (\textbf{x})\Big ) V(\textbf{x})\,\textrm{d}{\textbf{x}}\\= & {} \int _{{{\mathbb {R}}^d}} \Big ( \rho (R_\theta ^{-1}\textbf{x})-\rho (\textbf{x})\Big ) V(\textbf{x})\,\textrm{d}{\textbf{x}} \\= & {} \int _{{{\mathbb {R}}^d}} \rho (\textbf{x}) \Big (V(R_\theta \textbf{x})-V(\textbf{x})\Big )\,\textrm{d}{\textbf{x}}\\\ge & {} -\epsilon (R\theta )^2 \int _{{{\mathbb {R}}^d}} \rho (\textbf{x})\,\textrm{d}{\textbf{x}} = -\epsilon (R\theta )^2 \,, \end{aligned}$$

if \(R\theta <\delta \), where \(R=\max _{\textbf{x}\in \text {supp}\,\rho }|\textbf{x}|<\infty \). Taking \(\epsilon =\frac{c}{R^2}\) where c is given by (3.1), we conclude that

$$\begin{aligned} E[\rho _\theta ] \ge E[\rho ] - \frac{c}{4}\theta ^2 \end{aligned}$$

for \(\theta \) small enough, contradicting (3.1). \(\square \)

We now turn to the general case \(d\ge 3\). Denote \(\textbf{x}= (x_1,x_2,\textbf{y})^T\). Since \(\rho \) is not radially-symmetric, we may assume without loss of generality that the center of mass of \(\rho \) is 0, and

$$\begin{aligned} \rho \ne {\tilde{{\mathcal {R}}}}_\theta \rho ,\quad {\tilde{{\mathcal {R}}}}_\theta \rho (x_1,x_2,\textbf{y}) := \rho (R_\theta ^{-1}(x_1,x_2)^T,\textbf{y}) \end{aligned}$$

for small \(|\theta |\). The local minimizer case can be done similarly as above using \(\rho _\theta = \frac{1}{2}(\rho + {\tilde{{\mathcal {R}}}}_\theta \rho )\) instead. To treat the steady state case, we next generalize Lemma 3.2.

Lemma 3.4

Assume \(d\ge 3\) and the interaction energy E is FLIC. Assume \(\rho \in {\mathcal {P}}({\mathbb {R}}^d)\) with zero center of mass satisfies \(\rho \ne {\tilde{{\mathcal {R}}}}_\theta \rho \) for small \(\theta >0\). Then for small \(\theta >0\), \(E[\rho -{\tilde{{\mathcal {R}}}}_\theta \rho ] \ge c\theta ^2\) for some \(c>0\).

Proof

Due to the FLIC property of E, it suffices to show that

$$\begin{aligned} \int _{R_1\le |\xi |\le R_2} |{\hat{\rho }}-{\tilde{{\mathcal {R}}}}_{-\theta }{\hat{\rho }}|^2\,\textrm{d}{\xi } \ge c\theta ^2 \end{aligned}$$

for some \(0<R_1<R_2\). The condition \(\rho \ne {\tilde{{\mathcal {R}}}}_\theta \rho \) implies the same property of \({\hat{\rho }}\), which implies

$$\begin{aligned} \int _0^\infty \int _{{\mathbb {R}}^{d-2}} \Vert f(r,\cdot ,\eta )-{\bar{f}}(r,\eta )\Vert _{L^2_{{\mathbb {T}}}}^2 \,\textrm{d}{\eta }r\,\textrm{d}{r} > 0 \end{aligned}$$

where we denote \(f(r,\phi ,\eta ) = {\hat{\rho }}(r(\cos \phi ,\sin \phi )^T,\eta )\), \(\phi \in [0,2\pi ]\) and \({\bar{f}}(r,\eta ) = \frac{1}{2\pi }\int _{{\mathbb {T}}}f(r,\phi ,\eta )\,\textrm{d}{\phi } \). Therefore, there exists \(\epsilon >0\) such that

$$\begin{aligned} S:= \{(r,\eta ): \Vert f(r,\phi ,\eta )-{\bar{f}}(r,\eta )\Vert _{L^2_{{\mathbb {T}}}}^2 > \epsilon \} \end{aligned}$$

has positive measure. We may assume \(S\subseteq \{(r,\eta ):\sqrt{r^2+|\eta |^2}\in [R_1,R_2]\}\) for some \(0<R_1<R_2\), by replacing S with a subset of itself if necessary. For each \((r,\eta )\in S\), let \(c_{(r,\eta )}\) denote the constant \(c_f\) in Lemma 3.3 with \(f = f(r,\cdot ,\eta )\). Similarly as in the 2D case, we write S as

$$\begin{aligned} S = \bigcup _{\delta>0} S_\delta ,\quad S_\delta :=\{(r,\eta )\in S: c_{(r,\eta )} > \delta \} \end{aligned}$$

to conclude that \(|S_\delta |>0\) for some \(\delta >0\). Therefore, we obtain

$$\begin{aligned} \Vert f(r,\cdot ,\eta )-f(r,\cdot -\theta ,\eta )\Vert _{L^2_{{\mathbb {T}}}}^2 \ge c\theta ^2 \Vert f(r,\cdot ,\eta )-{\bar{f}}(r,\eta )\Vert _{L^2_{{\mathbb {T}}}}^2 \ge c\epsilon \theta ^2,\quad \forall |\theta |\le \delta , (r,\eta ) \in S_\delta \,, \end{aligned}$$

where c is an absolute constant. As a consequence, we get

$$\begin{aligned} \int _{R_1\le |\xi |\le R_2} |{\hat{\rho }}-{\tilde{{\mathcal {R}}}}_{-\theta }{\hat{\rho }}|^2\,\textrm{d}{\xi } \ge \int _{S_\delta } \int _{{\mathbb {R}}^{d-2}}\Vert f(r,\cdot ,\eta )-f(r,\cdot +\theta ,\eta )\Vert _{L^2_{{\mathbb {T}}}}^2 \,\textrm{d}{\eta }\,2\pi r \,\textrm{d}{r} \ge c\epsilon |S_\delta | R_1 \theta ^2 \,. \end{aligned}$$

Then the conclusion follows. \(\square \)

Proof of Theorem 3.1 (for \(d\ge 3\)) Similarly as above, we define

$$\begin{aligned} \rho _\theta := \frac{1}{2}(\rho + {\tilde{{\mathcal {R}}}}_\theta \rho ) \end{aligned}$$

Then Lemma 3.4 shows that for small \(|\theta |\), \(E[\rho _\theta ] \le E[\rho ] - c\theta ^2\), by noticing that \(\frac{\,\textrm{d}^2}{\,\textrm{d}{t}^2} E[(1-t)\rho +t {\tilde{{\mathcal {R}}}}_\theta \rho ] = 2E[\rho -{\tilde{{\mathcal {R}}}}_\theta \rho ]\). Similar to the \(d=2\) case, we can use the steady state properties and the assumption on \(\nabla ^2 V\) to show that

$$\begin{aligned} E[\rho _\theta ] \ge E[\rho ] - \frac{c}{4}\theta ^2 \end{aligned}$$

if \(|\theta |\) is small enough, leading to a contradiction. \(\square \)

Remark 3.5

(Failure of uniqueness and radial symmetry for \(d_\infty \)-local minimizers of 1D FLIC potentials) For FLIC potentials, the uniqueness of the global minimizer clearly implies its radial symmetry in any dimension. However, for \(d_\infty \)-local minimizers, although the radial symmetry is still true for \(d\ge 2\), it is generally false for \(d=1\). We provide an example of a 1D FLIC potential whose \(d_\infty \)-local minimizers fail to be unique and radially-symmetric (i.e., in 1D, even functions). Define

$$\begin{aligned} W_\epsilon (x) = -\frac{x^2}{2}+\frac{|x|^3}{3} - \epsilon \Big (\phi (|x-1|)+\phi (|x+1|)\Big ) \end{aligned}$$

where \(\epsilon \ge 0\) and \(\phi \) is a non-negative smooth even function supported on \([-1/2,1/2]\) with \(\phi ''(0)<0\). The result in [40] shows that the interaction energy associated to the potential \(W_0\) is FLIC with

$$\begin{aligned} {\hat{W}}_0(\xi ) = c|\xi |^{-4},\quad \forall \xi \ne 0\,. \end{aligned}$$

Since \(|{\hat{\phi }}(\xi )| \le C(1+|\xi |)^{-5}\), there holds \({\hat{W}}_\epsilon (\xi )>0,\,\forall \xi \ne 0\) if \(\epsilon >0\) is small enough, and then \(W_\epsilon \) is FLIC.

Notice now that \(W_\epsilon (x)\) has local minima at \(x=\pm 1\), with

$$\begin{aligned} W_\epsilon ''(1) = W_\epsilon ''(-1) = 1-\epsilon \phi ''(0)=:\lambda > 1 \end{aligned}$$

Also notice that \(W_\epsilon ''(0) = -1\). It follows that for any \(0<\alpha <\tfrac{1}{2}\), \(\rho _\alpha (x) = (\tfrac{1}{2}-\alpha )\delta (x-\frac{1}{2}) + (\tfrac{1}{2}+\alpha )\delta (x+\frac{1}{2})\) is a steady state for \(W_\epsilon \), since

$$\begin{aligned} (W_\epsilon *\rho _\alpha )(x) = (\tfrac{1}{2}-\alpha )W_\epsilon (x-\tfrac{1}{2}) +(\tfrac{1}{2}+\alpha ) W_\epsilon (x+\tfrac{1}{2}) \end{aligned}$$

satisfies \((W_\epsilon *\rho _\alpha )'(\tfrac{1}{2})=(W_\epsilon *\rho _\alpha )'(-\tfrac{1}{2})=0\). Also, since

$$\begin{aligned} (W_\epsilon *\rho _\alpha )''(\tfrac{1}{2}) = -(\tfrac{1}{2}-\alpha )+(\tfrac{1}{2}+\alpha )\lambda ,\quad (W_\epsilon *\rho _\alpha )''(-\tfrac{1}{2}) = (\tfrac{1}{2}-\alpha )\lambda -(\tfrac{1}{2}+\alpha )\,, \end{aligned}$$

we observe that both quantities are positive by taking \(|\alpha |<\frac{\lambda -1}{2(\lambda +1)}\). This implies \(\rho _\alpha \) is a \(d_\infty \)-local minimizer for every such \(\alpha \) (which will be justified in the next paragraph), and this shows that the \(d_\infty \)-local minimizers of \(W_\epsilon \) are non-unique and non-radially-symmetric in general.
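Before justifying the local minimality, these derivative computations can be checked numerically with a concrete bump; a minimal sketch (the specific \(\phi \), \(\epsilon \) and \(\alpha \) below are just one admissible choice, with \(\alpha \) small enough to satisfy the constraint above):

```python
import numpy as np

eps, a = 0.05, 0.005          # a plays the role of alpha; here |alpha| < (lambda-1)/(2(lambda+1))

def phi(r):                   # smooth, even, non-negative bump supported on [-1/2, 1/2], phi''(0) < 0
    r = abs(r)
    return np.exp(-1.0 / (0.25 - r ** 2)) if r < 0.5 else 0.0

def W(x):                     # the potential W_eps of the remark
    return -x ** 2 / 2 + abs(x) ** 3 / 3 - eps * (phi(x - 1.0) + phi(x + 1.0))

def V(x):                     # V_alpha = W_eps * rho_alpha
    return (0.5 - a) * W(x - 0.5) + (0.5 + a) * W(x + 0.5)

h = 1e-4
for x0 in (0.5, -0.5):
    d1 = (V(x0 + h) - V(x0 - h)) / (2 * h)             # first derivative: should be ~ 0
    d2 = (V(x0 + h) - 2 * V(x0) + V(x0 - h)) / h ** 2  # second derivative: should be > 0
    print(x0, d1, d2)
```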

To see that \(\rho _\alpha \) is a \(d_\infty \)-local minimizer, we consider any competitor \(\rho \ne \rho _\alpha \) with the same total mass and center of mass such that \(\beta :=d_\infty (\rho ,\rho _\alpha )\) is small. Then \(\text {supp}\,\rho \subseteq [-\tfrac{1}{2}-\beta ,-\tfrac{1}{2}+\beta ]\cup [\tfrac{1}{2}-\beta ,\tfrac{1}{2}+\beta ]\) with

$$\begin{aligned} \int _{[-\tfrac{1}{2}-\beta ,-\tfrac{1}{2}+\beta ]} \rho \,\textrm{d}{x} = \tfrac{1}{2}+\alpha \qquad \text{ and } \qquad \int _{[\tfrac{1}{2}-\beta ,\tfrac{1}{2}+\beta ]} \rho \,\textrm{d}{x} = \tfrac{1}{2}-\alpha . \end{aligned}$$

Define \(\rho _t = (1-t)\rho _\alpha +t\rho \), and we have \(\frac{\,\textrm{d}^2}{\,\textrm{d}{t}^2}E[\rho _t] > 0\) for any \(0\le t \le 1\) by the LIC property of E (with the interaction potential \(W_\epsilon \)). Denoting \(V_\alpha :=W_\epsilon *\rho _\alpha \), we have

$$\begin{aligned} \frac{\,\textrm{d}}{\,\textrm{d}{t}} \Big |_{t=0}E[\rho _t]{} & {} = \int _{\mathbb {R}}V_\alpha (x)(\rho (x)-\rho _\alpha (x))\,\textrm{d}{x}\\{} & {} = \int _{[-\tfrac{1}{2}-\beta ,-\tfrac{1}{2}+\beta ]} V_\alpha (x)\big (\rho (x)-(\tfrac{1}{2}+\alpha )\delta (x+\tfrac{1}{2})\big )\,\textrm{d}{x}\\{} & {} \quad + \int _{[\tfrac{1}{2}-\beta ,\tfrac{1}{2}+\beta ]} V_\alpha (x)\big (\rho (x)-(\tfrac{1}{2}-\alpha )\delta (x-\tfrac{1}{2})\big )\,\textrm{d}{x} \end{aligned}$$

Since \(V_\alpha \in C^2\) has vanishing first derivative and positive second derivative at \(\pm \tfrac{1}{2}\), we have \(V_\alpha (x)\ge V_\alpha (\pm \tfrac{1}{2})\) for \(|x-(\pm \tfrac{1}{2})|\le \beta \) if \(\beta >0\) is small enough. Then we see that the first integral in the last expression is non-negative since

$$\begin{aligned} \int _{[-\tfrac{1}{2}-\beta ,-\tfrac{1}{2}+\beta ]} \rho \,\textrm{d}{x} = \tfrac{1}{2}+\alpha , \end{aligned}$$

and similarly for the second integral. Therefore we get \(\frac{\,\textrm{d}}{\,\textrm{d}{t}}|_{t=0}E[\rho _t]\ge 0\), which implies \(\frac{\,\textrm{d}}{\,\textrm{d}{t}}E[\rho _t]\ge 0\) for any \(0\le t \le 1\) since \(\frac{\,\textrm{d}^2}{\,\textrm{d}{t}^2}E[\rho _t] > 0\). Therefore

$$\begin{aligned} E[\rho ]-E[\rho _\alpha ]=\int _0^1\frac{\,\textrm{d}}{\,\textrm{d}{t}}E[\rho _t]\,\textrm{d}{t} \ge 0 \end{aligned}$$

which shows that \(\rho _\alpha \) is a \(d_\infty \)-local minimizer.

4 From radially-symmetric steady states to uniqueness of \(d_\infty \)-local minimizer

We first show our main result concerning the uniqueness of minimizers as a consequence of their radial symmetry.

Theorem 4.1

Assume W satisfies (H-s), \(W\in C^4({{\mathbb {R}}^d}\setminus \{0\})\) is radially symmetric, the interaction energy (1.5) associated to the potential W is LIC and

$$\begin{aligned} \Delta ^2 W(\textbf{x}) < 0,\quad \forall \textbf{x}\ne 0 \end{aligned}$$
(4.1)

and

$$\begin{aligned} \Delta W(\textbf{x})>0,\quad \forall |\textbf{x}| \text { sufficiently large.} \end{aligned}$$
(4.2)

Let \(\rho \) be a compactly supported \(d_\infty \)-local minimizer with zero center of mass such that \(\nabla W * \rho \) is continuous. Then \(\rho \) is the unique global minimizer of E over \({\mathcal {P}}_2({\mathbb {R}}^d)\) with zero center of mass.

We want to take advantage of the fourth derivative assumption (4.1) in order to apply maximum principle arguments for certain operators. Note that for radially symmetric functions \(w(\textbf{x})=w(r)\), \(r=|\textbf{x}|\), one has \(\Delta w = w''(r)+\frac{d-1}{r}w'(r)\), so the operator \({\mathcal {L}}\) below with \(a(r)=\frac{d-1}{r}\) is precisely the Laplacian acting on radial functions. We need some preliminary results in this direction.

Lemma 4.2

Let \(f\in C^1([x_1,x_2])\cap C^4((x_1,x_2))\). Assume

$$\begin{aligned} f'(x_1)=f'(x_2)=0,\quad ({\mathcal {L}}^2f)(x)<0,\,\forall x\in (x_1,x_2),\quad {\mathcal {L}}f := f'' + a(x)f' \end{aligned}$$

for some positive function \(a(x)\in C((x_1,x_2])\). Then either \(x_1\) or \(x_2\) is not a local minimum point of f.

Proof

Denote \(g = {\mathcal {L}}f\). By assumption \(({\mathcal {L}}g)(x)<0,\,\forall x\in (x_1,x_2)\). This implies that there exist at most 2 points in \((x_1,x_2)\) where \(g=0\). In fact, suppose \(g(y_1)=g(y_2)=g(y_3)=0\) with \(y_1<y_2<y_3\), then \(({\mathcal {L}}g)(x)<0\) forces g to be positive on \((y_1,y_2)\) and \((y_2,y_3)\) due to the classical maximum principle, and it follows that \(g'(y_2)=0\), \(g''(y_2)\ge 0\) which is a contradiction to \(({\mathcal {L}}g)(y_2)<0\).

Denote \(A(x) = - \int _x^{x_2}a(y)\,\textrm{d}{y}\) which is a non-positive increasing continuous function defined on \((x_1,x_2]\), satisfying \(A'(x)=a(x)\). Then notice that

$$\begin{aligned} \int _{x_1}^{x_2}e^{A(x)}(f''+a(x)f')\,\textrm{d}{x} = \int _{x_1}^{x_2}(e^{A(x)}f')'\,\textrm{d}{x} = e^{A(x_2)}f'(x_2) - \lim _{x\rightarrow x_1+}e^{A(x)}f'(x) = 0 \end{aligned}$$

where the integral is interpreted as an improper integral, and the existence of the last limit follows from the assumption \(f'(x_1)=0\) and the fact that A(x) is negative and increasing for \(x<x_2\). This requires that \(g=f''+a(x)f'\) is positive at some point in \((x_1,x_2)\), and g is negative at some other point in \((x_1,x_2)\), and therefore there exists at least 1 point in \((x_1,x_2)\) where \(g=0\).

Now we separate into the following cases:

  • There exists 1 point y in \((x_1,x_2)\) such that \(g(y)=0\), and \(g|_{(x_1,y)}>0\), \(g|_{(y,x_2)}<0\). In this case,

    $$\begin{aligned} e^{A(x_2-\epsilon )}f'(x_2-\epsilon ) = e^{A(x_2)}f'(x_2)-\int _{x_2-\epsilon }^{x_2}e^{A(x)}g(x)\,\textrm{d}{x} > 0 \end{aligned}$$

    if \(0<\epsilon <x_2-y\). This implies that \(x_2\) is not a local minimum point of f.

  • There exists 1 point y in \((x_1,x_2)\) such that \(g(y)=0\), and \(g|_{(x_1,y)}<0\), \(g|_{(y,x_2)}>0\). In this case,

$$\begin{aligned} e^{A(x_1+\epsilon )}f'(x_1+\epsilon ) = \lim _{x\rightarrow x_1+}e^{A(x)}f'(x)+\int _{x_1}^{x_1+\epsilon }e^{A(x)}g(x)\,\textrm{d}{x} = \int _{x_1}^{x_1+\epsilon }e^{A(x)}g(x)\,\textrm{d}{x} < 0 \end{aligned}$$

    if \(0<\epsilon <y-x_1\). This implies that \(x_1\) is not a local minimum point of f.

  • There exist 2 points \(z_1<z_2\) in \((x_1,x_2)\) such that \(g(z_1)=g(z_2)=0\). Then \(({\mathcal {L}}g)(x)<0\) implies that \(g|_{(x_1,z_1)}<0\), \(g|_{(z_1,z_2)}>0\), \(g|_{(z_2,x_2)}<0\) since otherwise one gets a contradiction at \(z_1\) or \(z_2\) similarly as done above for \(y_2\). Then it follows as in the previous two cases that neither \(x_1\) nor \(x_2\) is a local minimum point of f.

\(\square \)
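
For readers who wish to experiment with Lemma 4.2 numerically, the following short Python sketch checks its contrapositive on a concrete example: if both endpoints are local minimum points of f with \(f'(x_1)=f'(x_2)=0\), then \({\mathcal {L}}^2 f\) must become non-negative somewhere in \((x_1,x_2)\). The test function \(f(x)=-\cos (2\pi x)\) on [0, 1] and the coefficient \(a(x)=(d-1)/x\) with \(d=3\) are choices made only for this illustration and are not used anywhere in the proofs.

```python
import numpy as np
import sympy as sp

x = sp.symbols('x', positive=True)
d = 3
a = (d - 1) / x                               # the radial coefficient used later in Theorem 4.1
f = -sp.cos(2 * sp.pi * x)                    # both endpoints of [0,1] are local minima, f'(0)=f'(1)=0

L = lambda u: sp.diff(u, x, 2) + a * sp.diff(u, x)
L2f = sp.simplify(L(L(f)))

grid = np.linspace(1e-3, 1 - 1e-3, 2000)
vals = sp.lambdify(x, L2f, 'numpy')(grid)
print("f'(0), f'(1) =", sp.diff(f, x).subs(x, 0), sp.diff(f, x).subs(x, 1))
print("max of L^2 f on (0,1):", float(vals.max()))   # positive, as forced by Lemma 4.2
```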

Lemma 4.3

Let \(f\in C^1([x_1,\infty ))\cap C^4((x_1,\infty ))\). Assume

$$\begin{aligned} f'(x_1)=0,\quad ({\mathcal {L}}^2f)(x)<0,\,\forall x\in (x_1,x_2),\quad {\mathcal {L}}f := f'' + a(x)f',\quad ({\mathcal {L}}f)(x_2)>0 \end{aligned}$$

for some \(x_2>x_1\) and some positive function \(a(x)\in C\big ((x_1,x_2]\big )\). Then either \(x_1\) is not a local minimum point of f, or \(x_1\) is the global minimum point of f on \([x_1,x_2]\).

Proof

As in the previous proof, there exist at most 2 points in \((x_1,x_2)\) where \(g={\mathcal {L}}f = 0\). We claim that, under the additional assumption \(g(x_2)>0\), there is in fact at most one such point. Indeed, suppose \(g(y_1)=g(y_2)=0\) with \(y_1<y_2\); then \(({\mathcal {L}}g)(x)<0\) forces g to be positive on \((y_1,y_2)\) due to the classical maximum principle. Moreover, g is also positive on \((y_2,x_2)\) since there are no other zeros of g and \(g(x_2)>0\), and it follows as above that \(g'(y_2)=0\), \(g''(y_2)\ge 0\), which contradicts \(({\mathcal {L}}g)(y_2)<0\). Now, we may separate into the following cases:

  • There exists 1 point y in \((x_1,x_2)\) such that \(g(y)=0\), and \(g|_{(x_1,y)}<0\), \(g|_{(y,x_2)}>0\). Notice that g must change sign at its zero (otherwise we arrive at a contradiction as above) and \(g(x_2)>0\), so this is the only possibility with one zero. In this case, \(x_1\) is not a local minimum point of f, proceeding as in the second case of Lemma 4.2.

  • Assume now that \(g|_{(x_1,x_2)}>0\). In this case

$$\begin{aligned} e^{A(x)}f'(x) = \lim _{z\rightarrow x_1+}e^{A(z)}f'(z) + \int _{x_1}^x e^{A(z)}g(z)\,\textrm{d}{z} = \int _{x_1}^x e^{A(z)}g(z)\,\textrm{d}{z}> 0,\quad \forall x\in (x_1,x_2) \end{aligned}$$

    which implies that f is increasing on \([x_1,x_2]\). Therefore \(x_1\) is a global minimum point of f on \([x_1,x_2]\).

\(\square \)

Proof of Theorem 4.1

Assume first that \(d\ge 2\). By Theorem 3.1, \(\rho \) is radially-symmetric.

Suppose there exists a subset of \((\text {supp}\,\rho )^c\), \(S_1 = \{R_1< |\textbf{x}| < R_2\}\) for some \(0\le R_1<R_2\), with \(\{|\textbf{x}|=R_1\}\subseteq \text {supp}\,\rho \) and \(\{|\textbf{x}|=R_2\}\subseteq \text {supp}\,\rho \). Denote \(({\mathcal {L}}V)(r) = V''(r) + \frac{d-1}{r}V'(r)\) as an operator on the radial direction, then \(\Delta V(\textbf{x}) = ({\mathcal {L}}V)(r)\), \(\Delta ^2 V(\textbf{x}) = ({\mathcal {L}}^2 V)(r) < 0\) for any \(R_1<r<R_2\). Since \(\rho \) is a \(d_\infty \)-local minimizer, we have \(V'(R_1)=V'(R_2)=0\) by Lemma 2.1 and the assumption that \(V\in C^1\). Therefore Lemma 4.2 shows that either \(R_1\) or \(R_2\) is not a local minimum point of V(r), which contradicts Lemma 2.1.

Suppose there exists a subset of \((\text {supp}\,\rho )^c\), \( S_2 = \{|\textbf{x}| < R_2\} \) for some \(R_2>0\), with \(\{|\textbf{x}|=R_2\}\subseteq \text {supp}\,\rho \). Then \(\Delta ^2 V(\textbf{x}) = ({\mathcal {L}}^2 V)(r) < 0\) for any \(0<r<R_2\). Since \(\rho \) is a \(d_\infty \)-local minimizer, we have \(V'(R_2)=0\) by Lemma 2.1 and the assumption that \(V\in C^1\), and \(V'(0)=0\) by the radial symmetry of V. Therefore Lemma 4.2 shows that either 0 or \(R_2\) is not a local minimum point of V(r). By Lemma 2.1, \(R_2\) is a local minimum point of V(r) since \(\{|\textbf{x}|=R_2\}\subseteq \text {supp}\,\rho \), and thus 0 is not a local minimum point of V(r), which implies \(\Delta V(0)\le 0\). Since \(\Delta V\) is radially symmetric and \(\Delta ^2 V(\textbf{x})<0\) for \(|\textbf{x}|<R_2\), we have \(\Delta V(\textbf{x})<0\) for any \(0<|\textbf{x}|<R_2\). This contradicts \(V'(R_2)=0\), as seen by integrating \(\Delta V<0\) over \(S_2\) and applying the divergence theorem.

Therefore \(\text {supp}\,\rho \) is a ball, and we denote its radius by \(R_1\ge 0\). Since \(\Delta V(\textbf{x}) = ({\mathcal {L}}V)(r)>0\) for all r large enough (due to the assumption (4.2)), we may apply Lemma 4.3 on \([R_1,R_2]\) for large \(R_2\); since \(R_1\) is a local minimum point of V(r) by Lemma 2.1, the first alternative of Lemma 4.3 is excluded, and we obtain that \(R_1\) is a global minimum point of V(r) on any such \([R_1,R_2]\). Then the conclusion follows from Theorem 2.4.

For the case \(d=1\), one could argue similarly and show that there cannot be an interval \((x_1,x_2)\subseteq (\text {supp}\,\rho )^c\) with \(\{x_1,x_2\}\subseteq \text {supp}\,\rho \), and conclude that \(\text {supp}\,\rho \) is an interval \([X_1,X_2]\). Then one could obtain similarly that \(X_2\) is the global minimum point of V on \([X_2,\infty )\), and \(X_1\) is the global minimum point of V on \((-\infty ,X_1]\). Then Theorem 2.4 shows that \(\rho \) is the unique global minimizer of E. The radial symmetry of \(\rho \) is obtained by noticing that \(\rho (-x)\) is also a global minimizer with zero center of mass, and therefore equal to \(\rho (x)\). \(\square \)

Remark 4.4

In the previous proof, if \(d\ge 2\) and we know in addition that \(\rho \) does not have an isolated Dirac mass at 0, then in the case of \(S_1\) we can also exclude the possibility of \(R_1=0\), and therefore we may weaken the continuity assumption to ‘\(\nabla W*\rho \) is continuous on \({{\mathbb {R}}^d}\backslash \{0\}\)’.

In case the interaction energy does not necessarily satisfy the FLIC property, we can obtain a similar result for steady states under an additional regularity assumption: \(\nabla W*\rho \) and \(\Delta W*\rho \) are continuous.

Theorem 4.5

Assume W satisfies (H), \(W\in C^4({{\mathbb {R}}^d}\setminus \{0\})\) is radially symmetric with

$$\begin{aligned} \Delta ^2 W(\textbf{x}) < 0,\quad \forall \textbf{x}\ne 0 \end{aligned}$$

and

$$\begin{aligned} \Delta W(\textbf{x})>0,\quad \forall |\textbf{x}| \text { sufficiently large.} \end{aligned}$$

Let \(\rho \) be a compactly supported steady state in the sense of Definition 2.6, with \(\nabla W*\rho \) and \(\Delta W*\rho \) being continuous, and \(\Delta W*\rho \ge 0\) on \(\text {supp}\,\rho \). Then the complement of \(\text {supp}\,\rho \) is connected. If in addition, \(\rho \) is radially-symmetric, or \(d=1\), then \(\rho \) satisfies the \(d_2\)-local minimizer condition (2.2).

If in addition, W satisfies FLIC, then any compactly supported steady state \(\rho \), with the regularity assumption that \(\nabla ^2 W*\rho \) is continuous and positive semi-definite on \(\text {supp}\,{\rho }\), is the global minimizer of the interaction energy.

We first remark that the last part of the previous theorem is a direct consequence of Theorem 3.1 and Theorem 2.4 together with its first part. To prove this result, we need the following lemma, which is a standard maximum principle argument in elliptic theory.

Lemma 4.6

Let \(\Omega \) be a bounded open set, and \(f\in C({\bar{\Omega }})\cap C^2(\Omega )\). If \(\Delta f\le 0\) on \(\Omega \), then \(\min _{\textbf{x}\in {\bar{\Omega }}}f(\textbf{x}) = \min _{\textbf{x}\in \partial \Omega }f(\textbf{x})\). If, in addition, \(f\in C^1({\bar{\Omega }})\), \(\Omega \) is connected, \(\Delta f\le 0\) on \(\Omega \) and \(\nabla f = 0\) on \(\partial \Omega \), then f is constant on \({\bar{\Omega }}\).

Proof of Theorem 4.5

Take R large enough such that \(\text {supp}\,\rho \subseteq \{|\textbf{x}|<R\}\), and by (4.2), we have

$$\begin{aligned} \Delta V(\textbf{x}) = \int _{{{\mathbb {R}}^d}} \Delta W(\textbf{x}-\textbf{y})\rho (\textbf{y})\,\textrm{d}{\textbf{y}} > 0,\quad \forall |\textbf{x}|=R \end{aligned}$$
(4.3)

for R large enough. Denote \(S = \{|\textbf{x}|<R\}\backslash \text {supp}\,\rho \).

STEP 1: Prove that \(\Delta V \ge 0\) on S. First notice that for any \(\textbf{x}\in S\),

$$\begin{aligned} \Delta ^2 V(\textbf{x}) = \int _{{{\mathbb {R}}^d}} \Delta ^2 W(\textbf{x}-\textbf{y})\rho (\textbf{y})\,\textrm{d}{\textbf{y}} < 0 \end{aligned}$$
(4.4)

by (4.1). Then applying Lemma 4.6 to \(\Delta V\) which is continuous on \({{\bar{S}}}\) by assumption, we get

$$\begin{aligned} \inf _{\textbf{x}\in S} \Delta V(\textbf{x}) = \min _{\textbf{x}\in \partial S} \Delta V(\textbf{x}) \,. \end{aligned}$$

Notice that \( \partial S= \partial (\text {supp}\,\rho ) \cup \{|\textbf{x}|=R\}\) and

$$\begin{aligned} \min _{\textbf{x}\in \partial (\text {supp}\,\rho )} \Delta V(\textbf{x}) \ge 0\,, \end{aligned}$$

since \(\Delta V\ge 0\) on \(\text {supp}\,\rho \) by assumption. Combined with (4.3), we get

$$\begin{aligned} \min _{\textbf{x}\in \partial S} \Delta V(\textbf{x}) \ge 0\,, \end{aligned}$$

which implies the claim.

STEP 2: Prove the connectivity of S. Suppose on the contrary that \(S_1\) is a connected component of S whose closure does not intersect \(\{|\textbf{x}|=R\}\). Then \(\partial S_1\subseteq \partial (\text {supp}\,\rho )\), which implies

$$\begin{aligned} \nabla V(\textbf{x}) = 0,\quad \forall \textbf{x}\in \partial S_1\,, \end{aligned}$$

by the continuity of \(\nabla V\) since \(\rho \) is a stationary state. Using the second part of Lemma 4.6 since \(-\Delta V\le 0\) on \(S_1\), we deduce that V is constant on \(S_1\). This fact contradicts (4.4).

STEP 3: Prove the \(d_2\)-local minimizer condition (2.2). Assume that \(d\ge 2\) and \(\rho \) is radially symmetric. Due to the connectedness of S and the radial symmetry of \(\rho \), we have \({{\bar{S}}} = \{R_1 \le |\textbf{x}| \le R\}\) for some \(0\le R_1<R\). Let us denote for simplicity the radial potential generated by \(\rho \) as \(V(\textbf{x})=V(r),\,r=|\textbf{x}|\); then it satisfies

$$\begin{aligned} V'(R_1) = 0,\quad \text{ and } \quad \Delta V(r) \ge 0,\, r>R_1\,. \end{aligned}$$

This implies that V(r) is increasing on \([R_1,\infty )\), which gives the desired conclusion. Indeed, to see this, notice that

$$\begin{aligned} \Delta V(r) = V''(r) + (d-1)\frac{V'(r)}{r} \ge 0,\, r> R_1\,. \end{aligned}$$

Multiplying by \(r^{d-1}\), we get \((r^{d-1}V'(r))' = r^{d-1}\Delta V(r)\ge 0\); integrating from \(R_1\) to r and using \(V'(R_1)=0\) leads to \(V'(r)\ge 0\). The 1D case without the radial symmetry assumption can be handled similarly. \(\square \)

5 Radial symmetry and global minimizers for power-law potentials

We apply the previous results to the power-law potential

$$\begin{aligned} W(\textbf{x}) = \frac{|\textbf{x}|^a}{a} - \frac{|\textbf{x}|^b}{b} \end{aligned}$$
(5.1)

where \(a>b>-d\), with the convention \(\tfrac{|\textbf{x}|^0}{0}:=\ln |\textbf{x}|\).

Let \(\rho \in {\mathcal {P}}\) be compactly supported with zero center of mass. We say that \(\rho \) is mild if either \(d=1\) and \(\nabla W*\rho \) is continuous, or \(d\ge 2\) and \(\nabla W*\rho \) is continuous on \({\mathbb {R}}^d\backslash \{0\}\).

Theorem 5.1

Let W be defined by (5.1). If (a, b) satisfies

$$\begin{aligned} 2\le a \le 4,\quad -d< b < 2 \end{aligned}$$
(5.2)

then any compactly supported \(d_\infty \)-local minimizer \(\rho \) with \(d\ge 2\) is radially-symmetric. If (a, b) further satisfies

$$\begin{aligned} \left. \begin{aligned}&a=2,\quad 2-d \le b< 4-d,\quad \text {when }d\ge 2 \\&2\le a \le 3,\quad 1 \le b < 2,\quad \text {when }d=1 \\ \end{aligned}\right. \end{aligned}$$
(5.3)

then the unique global minimizer \(\rho _\infty \) is mild, and it is the unique mild \(d_\infty \)-local minimizer.

If \(a=2\) and \(2-d<b<\min \{4-d,2\}\) or \(a=4\) and \(2-d<b<{{\bar{b}}} = (2+2d-d^2)/(d+1)\), then the global minimizer is given by the explicit formulas in (5.9) and (5.10).

Remark 5.2

In one dimension, the ranges \(-1<b<2\) for \(a=2\) and \(2<a<3\) for \(b=2\) were discussed in [29]. We now discuss the sharpness of the assumption (5.3) in the case \(d\ge 2\).

It is shown in [1] that for W given by (5.1) with \(a\ge 2\) and \(\frac{(3-d)a-10+7d-d^2}{a+d-3}=:b_{\max }<b<2\), the Dirac Delta on a particular \((d-1)\)-dimensional sphere is a \(d_\infty \)-local minimizer. Notice that \(b_{\max }<4-d\) if \(a>2\). Therefore the assumption \(a=2\) in (5.3) cannot be improved within our framework, in which a necessary step to get the uniqueness of \(d_\infty \)-local minimizers is to show the connectivity of \((\text {supp}\,\rho )^c\) for any \(d_\infty \)-local minimizer \(\rho \), see Theorem 4.5.

In the case \(d\ge 3\), the same reasoning also shows that the upper bound \(4-d\) for b cannot be improved within our framework, because for \(a=2\) and \(4-d=b_{\max }<b<2\) the Dirac Delta on a sphere is a \(d_\infty \)-local minimizer.
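
The algebra behind the threshold \(b_{\max }\) used in this remark is elementary and can be double-checked symbolically. The following minimal Python/SymPy sketch (purely illustrative, not part of the argument) verifies that \(b_{\max }=4-d\) at \(a=2\) and that \(4-d-b_{\max }=(a-2)/(a+d-3)>0\) for \(a>2\) and \(d\ge 2\).

```python
import sympy as sp

a, d = sp.symbols('a d', positive=True)
b_max = ((3 - d) * a - 10 + 7 * d - d**2) / (a + d - 3)

print(sp.simplify(b_max.subs(a, 2) - (4 - d)))   # 0, i.e. b_max = 4 - d when a = 2
print(sp.cancel(4 - d - b_max))                  # (a - 2)/(a + d - 3), positive for a > 2, d >= 2
```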

To prove this theorem, we first analyze the FLIC property of power-law potentials to justify the radial symmetry of \(d_\infty \)-local minimizers. Then, using the radial symmetry, we show that the global minimizer \(\rho _\infty \) is mild for the range of parameters (5.3). In the special case \(a=2\), we obtain a better regularity result, namely that \(\nabla ^2 W*\rho _\infty \) is continuous, by using the explicit formulas given in [22]. Finally we show the uniqueness of mild \(d_\infty \)-local minimizers by applying Theorem 4.1.

5.1 The (F)LIC property and radial symmetry

We first determine the values of (a, b) such that W has FLIC. The author in [40] proved that \(\frac{|\textbf{x}|^a}{a}\) has the LIC property for \(2\le a \le 4\). It is clear, and also used in [40], that \(-\frac{|\textbf{x}|^b}{b}\) has the FLIC property for \(-d< b < 0\), since its Fourier transform is \(c|\xi |^{-d-b}\) for some \(c>0\). We next extend the range of b.

Theorem 5.3

The interaction energy E associated to the potential \(\frac{|\textbf{x}|^a}{a}-\frac{|\textbf{x}|^b}{b}\), with \(-d< b < 2\le a\le 4\), except \(0\le b<1\) in one dimension, has the FLIC property.

As a consequence, any compactly supported \(d_\infty \)-local minimizer of the interaction energy with \(-d< b < 2\le a\le 4\) and \(d\ge 2\) is radially symmetric. In particular, the unique global minimizer for the interaction energy with \(-d< b < 2\le a\le 4\) and \(d\ge 2\) is radially symmetric.

Proof

The range \(-d<b<0\) is covered in [40]. Since W satisfies (H), Lemma 2.3 allows us to reduce to the case of nonzero compactly supported signed measures \(\mu \in {\mathcal {M}}({\mathbb {R}}^d)\), with

$$\begin{aligned} \int _{{{\mathbb {R}}^d}} \mu (\textbf{x})\,\textrm{d}{\textbf{x}} = \int _{{{\mathbb {R}}^d}} \textbf{x}\mu (\textbf{x})\,\textrm{d}{\textbf{x}} = 0\,. \end{aligned}$$
(5.4)

Notice that \(\frac{|\textbf{x}|^a}{a}\) has the LIC property, for \(2\le a \le 4\), as proven in [40]. Since \(\mu \) is compactly supported, we are reduced to showing the FLIC property for the repulsive part of the interaction potential, \(U(\textbf{x}) = -\frac{|\textbf{x}|^b}{b}\). Then we separate into cases:

Case 1 \(2-d<b<2\). Let \( f(\textbf{x}) = \Delta ^{-1}\mu (\textbf{x}) \). Then for \(|\textbf{x}|\) large, by (5.4), we have

$$\begin{aligned} |f(\textbf{x})| \le C|\textbf{x}|^{-d},\quad |\nabla f(\textbf{x})| \le C|\textbf{x}|^{-d-1}\,. \end{aligned}$$
(5.5)

Since \(b<2\), this suffices to justify the integration-by-parts below

$$\begin{aligned} \int _{{{\mathbb {R}}^d}} \int _{{{\mathbb {R}}^d}} U(\textbf{x}-\textbf{y})\Delta f(\textbf{y})\,\textrm{d}{\textbf{y}}\mu (\textbf{x})\,\textrm{d}{\textbf{x}} = \int _{{{\mathbb {R}}^d}} \int _{{{\mathbb {R}}^d}} \Delta U(\textbf{x}-\textbf{y}) f(\textbf{y})\,\textrm{d}{\textbf{y}}\mu (\textbf{x})\,\textrm{d}{\textbf{x}} \,. \end{aligned}$$

Notice that \( \Delta U(\textbf{x}) = - (b+d-2)|\textbf{x}|^{b-2} \) has Fourier transform \( \widehat{(\Delta U)}(\xi ) = - (b+d-2)c|\xi |^{-d-b+2} \), where c is a positive constant. Then

$$\begin{aligned} \int _{{{\mathbb {R}}^d}} \int _{{{\mathbb {R}}^d}} \Delta U(\textbf{x}-\textbf{y}) f(\textbf{y})\,\textrm{d}{\textbf{y}}\,\mu (\textbf{x})\,\textrm{d}{\textbf{x}}&= \int _{{{\mathbb {R}}^d}} \widehat{(\Delta U)}(\xi ) {\hat{f}}(\xi )\bar{{\hat{\mu }}}(\xi )\,\textrm{d}{\xi } \\&= (b+d-2)c\int _{{{\mathbb {R}}^d}} |\xi |^{-d-b} |{\hat{\mu }}(\xi )|^2\,\textrm{d}{\xi }\,. \end{aligned}$$

Notice that \(b+d-2>0\) for all the stated cases. Therefore the conclusion follows.

Case 2 The Newtonian cases \(b=2-d\), with \(d=1,2\). In this case, since \(-U\) is the fundamental solution to the Laplacian (up to a positive constant multiple), we have

$$\begin{aligned}\begin{aligned}&\int _{{{\mathbb {R}}^d}} \int _{{{\mathbb {R}}^d}} U(\textbf{x}-\textbf{y})\mu (\textbf{y})\,\textrm{d}{\textbf{y}}\mu (\textbf{x})\,\textrm{d}{\textbf{x}} = -c\int _{{{\mathbb {R}}^d}} f(\textbf{x})\Delta f(\textbf{x})\,\textrm{d}{\textbf{x}} = c\int _{{{\mathbb {R}}^d}} |\nabla f(\textbf{x})|^2\,\textrm{d}{\textbf{x}}\,, \\ \end{aligned}\end{aligned}$$

where the integration-by-parts is justified by (5.5). Then, since \(\nabla f\) is \(L^1\) by (5.5), we have

$$\begin{aligned} \widehat{\nabla f}(\xi ) = - \frac{i\xi }{|\xi |^2}{\hat{\mu }}(\xi )\,, \end{aligned}$$

which implies

$$\begin{aligned} \int _{{{\mathbb {R}}^d}} |\nabla f(\textbf{x})|^2\,\textrm{d}{\textbf{x}} = \int _{{{\mathbb {R}}^d}} \frac{1}{|\xi |^2}|{\hat{\mu }}(\xi )|^2\,\textrm{d}{\xi }\,. \end{aligned}$$

In both cases the repulsive part of the energy is strictly positive for nonzero \(\mu \), which proves the FLIC property. Finally, we apply Theorem 3.1 to deduce the radial symmetry of any compactly supported \(d_\infty \)-local minimizer of E with \(-d< b < 2\le a\le 4\) and \(d\ge 2\).

Moreover, using [10, 47], the global minimizers of E in this range are compactly supported. Then, the last claim about global minimizers follows directly from the FLIC property. \(\square \)

Remark 5.4

The cases \(0\le b <1\) and \(2\le a\le 4\) in one dimension are not covered by the previous arguments. In fact, the FLIC property is still true for these cases. However, these cases do not have the necessary properties for Theorem 4.1 to be applicable, and thus, we postpone the proof to the appendix.
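
Although it plays no role in the proofs, the FLIC property of Theorem 5.3 can be probed numerically. The following Python sketch discretizes the interaction energy on a one-dimensional grid and checks that \(E[\mu ]>0\) for randomly generated nonzero signed measures \(\mu \) with zero mass and zero center of mass; the exponents \(a=2\), \(b=1.5\), the grid and the random trials are sample choices made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
a_exp, b_exp = 2.0, 1.5
x = np.linspace(-1.0, 1.0, 201)
h = x[1] - x[0]
D = np.abs(x[:, None] - x[None, :])
K = D**a_exp / a_exp - D**b_exp / b_exp        # W(x_i - x_j); note W(0) = 0

for _ in range(5):
    mu = rng.standard_normal(x.size)
    # remove total mass and first moment so that the discretized (5.4) holds
    for p in (np.ones_like(x), x):
        mu = mu - (mu @ p) / (p @ p) * p
    energy = 0.5 * h**2 * (mu @ K @ mu)
    print(f"E[mu] = {energy:.6e}")             # expected to be strictly positive
```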

5.2 Regularity properties

We first show (H-s) for the power-law potentials (5.1).

Lemma 5.5

For W given by (5.1) with \(a\ge 2\) and \(-d<b<2\), there exists \(C_1>0\) such that \(W+C_1\) satisfies (H-s).

Proof

Take \(C_1=\max \{-\inf W,0\}+1 > 0\), and then \(W_1:=W+C_1\) is bounded from below by 1. It suffices to show that for any \(R>0\), there exists \(C_R>0\) such that \(\frac{1}{|B(\textbf{x};r)|}\int _{B(\textbf{x};r)}W_1(\textbf{y})\,\textrm{d}{\textbf{y}} \le C_R W_1(\textbf{x}), \forall |\textbf{x}|\le R,\,0<r\le R\). If \(b>0\), then \(W_1\) is continuous, and the function

$$\begin{aligned} \phi (\textbf{x},r) = \left\{ \begin{aligned}&\frac{\frac{1}{|B(\textbf{x};r)|}\displaystyle \int _{B(\textbf{x};r)}W_1(\textbf{y})\,\textrm{d}{\textbf{y}}}{W_1(\textbf{x})},\quad r\ne 0\\&1,\quad r=0 \end{aligned}\right. \end{aligned}$$

is defined on \(\bar{B(0;R)}\times [0,R]\) and continuous. Therefore \(\phi \) achieves its maximum, and the conclusion follows.

If \(b< 0\), with the constants C depending on R,

$$\begin{aligned} \int _{B(\textbf{x};r)}W_1(\textbf{y})\,\textrm{d}{\textbf{y}} \le C\int _{B(\textbf{x};r)}\,\textrm{d}{\textbf{y}} + C\int _{B(\textbf{x};r)}|\textbf{y}|^b\,\textrm{d}{\textbf{y}} \end{aligned}$$

for any \(|\textbf{x}|\le R\) and \(r\le R\). If \(r\le \frac{|\textbf{x}|}{2}\), then \(\frac{|\textbf{x}|}{2}\le |\textbf{y}| \le \frac{3|\textbf{x}|}{2}\) in the last integral, and

$$\begin{aligned} \int _{B(\textbf{x};r)}|\textbf{y}|^b\,\textrm{d}{\textbf{y}} \le C|\textbf{x}|^b\int _{B(\textbf{x};r)}\,\textrm{d}{\textbf{y}} \le CW_1(\textbf{x})\int _{B(\textbf{x};r)}\,\textrm{d}{\textbf{y}} \end{aligned}$$

and the conclusion follows. If \(r> \frac{|\textbf{x}|}{2}\), then

$$\begin{aligned} \int _{B(\textbf{x};r)}|\textbf{y}|^b\,\textrm{d}{\textbf{y}} \le \int _{B(0;r)}|\textbf{y}|^b\,\textrm{d}{\textbf{y}} = C\int _0^r s^bs^{d-1}\,\textrm{d}{s} = Cr^{b+d} \end{aligned}$$

using the radially-decreasing property of \(|\textbf{y}|^b\) and the assumption \(-d<b<0\). Therefore

$$\begin{aligned} \frac{1}{|B(\textbf{x};r)|}\int _{B(\textbf{x};r)}|\textbf{y}|^b\,\textrm{d}{\textbf{y}}\le Cr^b \le C|\textbf{x}|^b \le CW_1(\textbf{x}) \end{aligned}$$

and the conclusion follows.

The case \(b=0\) (i.e., \(-\frac{|\textbf{x}|^b}{b}:=-\ln |\textbf{x}|\)) can be treated similarly to the \(b<0\) case. \(\square \)

Next we give the mild property of the global minimizer for power-law potentials.

Lemma 5.6

Assume \(a\ge 2\), \(2-d<b<2\), and let W be given by (5.1). Assume \(\rho \in {\mathcal {P}}\) is compactly supported. If either of the following holds:

  • \(d=1\);

  • \(d\ge 2\), and \(\rho \) is radially-symmetric;

then \(\rho \) is mild. In particular, the global minimizer \(\rho _\infty \) for (a, b) satisfying the assumptions of Theorem 5.3 is mild.

Proof

For the case \(d=1\), we have \(1<b<2\). Notice that \(\nabla W(x) = \text {sgn}(x)(|x|^{a-1}-|x|^{b-1})\) is continuous, and therefore \(\nabla W*\rho \) is continuous.

For the case \(2-d<b<2\) with \(d\ge 2\) and \(\rho \) being compactly-supported and radially-symmetric, there exists a measure \({\tilde{\rho }}\) supported on [0, R] for some \(R>0\) such that

$$\begin{aligned} \int _{{\mathbb {R}}^d}\phi (\textbf{y})\rho (\textbf{y})\,\textrm{d}{\textbf{y}} = \int _{[0,R]} \int _{|\textbf{y}|=1}\phi (r\textbf{y})\,\textrm{d}{S(\textbf{y})} {\tilde{\rho }}(r)\,\textrm{d}{r} \end{aligned}$$
(5.6)

for any continuous function \(\phi \), where \(\,\textrm{d}{S(\textbf{y})}\) denotes the surface measure on the unit sphere. We clearly have

$$\begin{aligned} \int _{[0,R]} {\tilde{\rho }}(r)\,\textrm{d}{r}=\frac{1}{|S^{d-1}|} \end{aligned}$$
(5.7)

by taking \(\phi =1\), since \(\rho \) has total mass 1. Identity (5.6) is also applicable to \(W(\textbf{x}-\textbf{y})\) for fixed \(\textbf{x}\) by an approximation argument on the potential, and gives

$$\begin{aligned} (W*\rho )(\textbf{x}) =\int _{[0,R]} \int _{|\textbf{y}|=1}W(s\textbf{e}_1-r\textbf{y})\,\textrm{d}{S(\textbf{y})} {\tilde{\rho }}(r)\,\textrm{d}{r},\quad s=|\textbf{x}| \end{aligned}$$

To analyze the continuity of \(\nabla W*\rho \) for W given by (5.1) with \(a\ge 2\), we only need to consider the potential \(W=-\frac{|\textbf{x}|^b}{b}\), since the contribution of the attractive part is clearly \(C^1\). Also, due to the radial symmetry of \(W*\rho \), we only need to consider the directional derivative along the radial direction, which is

$$\begin{aligned} (\nabla W*\rho )(\textbf{x})\cdot \frac{\textbf{x}}{|\textbf{x}|} = \int _{[0,R]} \int _{|\textbf{y}|=1}\omega '(|s\textbf{e}_1-r\textbf{y}|)\frac{s\textbf{e}_1-r\textbf{y}}{|s\textbf{e}_1-r\textbf{y}|}\cdot \textbf{e}_1\,\textrm{d}{S(\textbf{y})} {\tilde{\rho }}(r)\,\textrm{d}{r} \end{aligned}$$
(5.8)

for \(\textbf{x}\ne 0\), where on the RHS we write \(W(\textbf{x})=\omega (|\textbf{x}|)\). Here (5.8) can be justified as long as the RHS is dominated by an \(L^1\) function uniformly in a neighborhood of s, which we will prove in the rest of the proof. For \(W=-\frac{|\textbf{x}|^b}{b}\), we have

$$\begin{aligned}\begin{aligned} \int _{|\textbf{y}|=1}\Bigg (-|s\textbf{e}_1-r\textbf{y}|^{b-1}\Bigg )&\frac{s\textbf{e}_1-r\textbf{y}}{|s\textbf{e}_1-r\textbf{y}|}\cdot \textbf{e}_1\,\textrm{d}{S(\textbf{y})} \\&= -|S^{d-2}|\int _{0}^\pi \Big ((s-r\cos \theta )^2+(r\sin \theta )^2\Big )^{(b-2)/2}\\&\quad \quad (s-r\cos \theta ) |\sin \theta |^{d-2}\,\textrm{d}{\theta }. \end{aligned}\end{aligned}$$

Here the last integrand is dominated by

$$\begin{aligned} \big ((s-r\cos \theta )^2+(r\sin \theta )^2\big )^{(b-1)/2}&=\big ((s-r)^2 + 2rs(1-\cos \theta )\big )^{(b-1)/2} \\&\le C\min \{|s-r|^{b-1},rs|\theta |^{b-1}\}. \end{aligned}$$

Therefore the RHS of (5.8) is dominated by

$$\begin{aligned} C\int _{[0,R]} \int _{0}^\pi \min \{|s-r|^{b-1},rs|\theta |^{b-1}\}\theta ^{d-2}\,\textrm{d}{\theta } {\tilde{\rho }}(r)\,\textrm{d}{r}, \end{aligned}$$

which is uniformly bounded (by a constant multiple of (5.7)) for \(s\in [\epsilon ,R]\) for any \(\epsilon >0\), as long as \(b>2-d\). This justifies (5.8) and proves the continuity of \((\nabla W*\rho )(\textbf{x})\) for \(s=|\textbf{x}|\ne 0\) by dominated convergence.

Finally, if the assumption of Theorem 5.3 is satisfied for (a, b), then E has the FLIC property, so the global minimizer is unique, compactly supported and radially-symmetric. Therefore the previous argument can be applied to conclude that the global minimizer is mild. \(\square \)

Finally, we recall from [22] the explicit steady state for power-law potentials (5.1) with \(a=2\) and \(2-d<b<\min \{4-d,2\}\), given by:

$$\begin{aligned} \rho _\infty (\textbf{x}) = A(R^2-|\textbf{x}|^2)^{1-\frac{b+d}{2}}\chi _{|\textbf{x}|\le R} \end{aligned}$$
(5.9)

where \(A=\frac{-d\Gamma (\frac{d}{2})\sin \frac{(b+d)\pi }{2}}{(b+d-2)\pi ^{\frac{d}{2}+1}}>0\), and R is uniquely determined by the total mass condition \(\int _{{\mathbb {R}}^d}\rho _\infty \,\textrm{d}{\textbf{x}}=1\). We also have the explicit formula [22] for the case \(a=4\) and \(2-d<b<{{\bar{b}}}<3-d\), given by:

$$\begin{aligned} \rho _\infty (\textbf{x}) = (A_1 R^2+ A_2 (R^2-|\textbf{x}|^2)) (R^2-|\textbf{x}|^2)^{1-\frac{b+d}{2}}\chi _{|\textbf{x}|\le R} \end{aligned}$$
(5.10)

with \(A_1\), \(A_2\) and R uniquely determined by the total mass condition \(\int _{{\mathbb {R}}^d}\rho _\infty \,\textrm{d}{\textbf{x}}=1\) and a second moment condition. Notice that the upper bound \({{\bar{b}}}\) is given by the relation \(A_1+A_2=0\) for which the function given by (5.10) touches 0 at the origin. We refer to [22] for further details.
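
As a complement, the explicit profile (5.9) is easy to evaluate numerically. The short Python sketch below computes A, determines R from the unit mass condition (using the scaling \(\text {mass}(R)\propto R^{2-b}\)), and re-checks the total mass by quadrature; the values \(d=3\) and \(b=0.5\) are sample parameters within the stated range, chosen only for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

d, b = 3, 0.5   # sample values with 2 - d < b < min{4 - d, 2}
A = -d * gamma(d / 2) * np.sin((b + d) * np.pi / 2) / ((b + d - 2) * np.pi**(d / 2 + 1))
surface = 2 * np.pi**(d / 2) / gamma(d / 2)   # |S^{d-1}|

# mass(R) = A |S^{d-1}| R^{2-b} * int_0^1 (1 - s^2)^{1-(b+d)/2} s^{d-1} ds, so solve mass(R) = 1
profile_int, _ = quad(lambda s: (1 - s**2)**(1 - (b + d) / 2) * s**(d - 1), 0, 1)
R = (1.0 / (A * surface * profile_int))**(1.0 / (2 - b))

mass, _ = quad(lambda r: surface * A * (R**2 - r**2)**(1 - (b + d) / 2) * r**(d - 1), 0, R)
print(f"A = {A:.4f}, R = {R:.4f}, recomputed mass = {mass:.6f}")   # mass close to 1
```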

Lemma 5.7

If W is given by (5.1) with \(a=2\) and \(2-d<b<\min \{4-d,2\}\) or \(a=4\) and \(2-d<b<{{\bar{b}}}\), then \(\rho _\infty \) defined in (5.9) or (5.10) is a steady state with \(\nabla ^2 W*\rho _\infty \) being continuous.

Proof

It is proved in [22] that \(\rho _\infty \) defined in (5.9) or (5.10) is a steady state.

It is clear that \(\nabla ^2(\frac{|\textbf{x}|^a}{a}) * \rho _\infty \) is continuous. To deal with \(\nabla ^2(\frac{|\textbf{x}|^b}{b}) * \rho _\infty \), first notice that \(\rho _\infty \in L^p\) for any \(p<\frac{2d}{b+d-2}\). Then by the Hardy-Littlewood-Sobolev inequality,

$$\begin{aligned} |\textbf{x}|^{b-2-\epsilon } * \rho _\infty \in L^q,\quad \forall 1\le q < \infty \end{aligned}$$

for \(\epsilon >0\) small enough, by checking the index relation

$$\begin{aligned} \frac{b+d-2}{2d} < \frac{d-(2-b)}{d} \end{aligned}$$

for \(b>2-d\).

Then notice that

$$\begin{aligned} \Delta \Big (\frac{|\textbf{x}|^b}{b}\Big ) * \rho _\infty = (b+d-2) |\textbf{x}|^{b-2} * \rho _\infty = c|\textbf{x}|^{-d+\epsilon }*\Big ( |\textbf{x}|^{b-2-\epsilon } * \rho _\infty \Big ) \end{aligned}$$

where the last parenthesis is in \(L^q\) for any \(1\le q < \infty \). Therefore we see that \(\Delta \Big (\frac{|\textbf{x}|^b}{b}\Big ) * \rho _\infty \) is continuous by taking q large enough. The continuity of \(\nabla ^2 (\frac{|\textbf{x}|^b}{b}) * \rho _\infty \) follows by taking the Riesz transform on \(\rho _\infty \) which is bounded on \(L^p\). \(\square \)

Finally we prove Theorem 5.1.

Proof of Theorem 5.1

We first notice that for the special case \((a,b)=(2,2-d)\), [13, Theorem 3.4(i)] shows that any \(d_\infty \)-local minimizer \(\rho \) is in \(L^\infty \) with \(W*\rho \) being \(C^{1,1}\). Then the unique \(d_\infty \)-local minimizer is the characteristic function of a ball as shown in [46, Theorem 2.1], coinciding with the formula (5.9). We also refer to [32] for the classical proof that the characteristic function of the ball is the global minimizer. In the rest of the proof, we will assume \((a,b)\ne (2,2-d)\).

Let \(\rho \) be a compactly supported \(d_\infty \)-local minimizer. When \(d\ge 2\) and (5.2) is satisfied, W is FLIC by Theorem 5.3. Then Theorem 3.1 implies that \(\rho \) is radially-symmetric.

If (a, b) further satisfies \(2-d\le b<2\), then the unique global minimizer \(\rho _\infty \) is mild. In fact, for the Newtonian case \(b=2-d\), [13, Theorem 3.4(i)] shows that \(W*\rho _\infty \) is \(C^{1,1}\), and in particular, \(\nabla W*\rho _\infty \) is continuous and thus \(\rho _\infty \) is mild. For the case \(2-d<b<\min \{4-d,2\}\), since \(\rho _\infty \) is radially-symmetric, Lemma 5.6 shows that \(\rho _\infty \) is mild.

If (a, b) further satisfies (5.3), by calculating

$$\begin{aligned} \Delta W(\textbf{x})&= (a+d-2)|\textbf{x}|^{a-2} - (b+d-2)|\textbf{x}|^{b-2}, \\ \Delta ^2 W(\textbf{x})&= (a+d-2)(a-2)(a+d-4)|\textbf{x}|^{a-4} - (b+d-2)(b-2)(b+d-4)|\textbf{x}|^{b-4}, \end{aligned}$$

we see that (4.1) and (4.2) are satisfied under the assumption (5.3) with \((a,b)\ne (2,2-d)\). Also, by Lemma 5.5, W satisfies (H-s) up to adding a constant. Moreover, the positive bound from below on the dimension of the support of \(d_\infty \)-local minimizers, due to [2, Theorem 1], implies that \(\rho \) does not contain an isolated Dirac at 0. Therefore, if the \(d_\infty \)-local minimizer \(\rho \) is mild, then Theorem 4.1 together with Remark 4.4 allows us to conclude that \(\rho \) is the global minimizer.

Finally, if \(a=2\) with \(2-d<b<4-d\), then Lemma 5.7 implies that \(\rho _\infty \) defined in (5.9) is a steady state with \(\nabla ^2W*\rho _\infty \) being continuous (and therefore \(\nabla ^2W*\rho _\infty =0\) on \(\text {supp}\,\rho _\infty =\bar{B(0;R)}\)). Then Theorem 4.5 and Theorem 2.4 imply that \(\rho _\infty \) is the global minimizer.

If \(a=4\) with \(2-d<b<{{\bar{b}}}\), then Lemma 5.7 implies that \(\rho _\infty \) defined in (5.10) is a steady state with \(\nabla ^2W*\rho _\infty \) being continuous. In particular, \((\Delta W*\rho _\infty )(\textbf{x})=0\) for \(|\textbf{x}|=R\). Notice that \(\Delta W\), viewed as a function of \(r=|\textbf{x}|\), is increasing on \(r\in (0,\infty )\). Therefore the radial function \(\Delta W*\rho _\infty \) is increasing on \([R,\infty )\), and thus \((\Delta W*\rho _\infty )(\textbf{x})\ge 0\) for any \(|\textbf{x}|\ge R\). Arguing as in STEP 3 of the proof of Theorem 4.5, this implies that \(\rho _\infty \) satisfies the condition (2.2), and then Theorem 2.4 implies that \(\rho _\infty \) is the global minimizer.

\(\square \)
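
The radial Laplacian computations used in the previous proof (the formulas for \(\Delta W\) and \(\Delta ^2 W\)) can be verified symbolically. Below is a minimal SymPy sketch, for illustration only, that applies the radial operator \(u\mapsto u''+\frac{d-1}{r}u'\) to the profile \(r^p/p\) and compares with the stated expressions.

```python
import sympy as sp

r, p, d = sp.symbols('r p d', positive=True)
radial_laplacian = lambda u: sp.diff(u, r, 2) + (d - 1) / r * sp.diff(u, r)

w = r**p / p                                   # radial profile of |x|^p / p in (5.1)
lap = sp.simplify(radial_laplacian(w))
bilap = sp.simplify(radial_laplacian(lap))
print(sp.simplify(lap - (p + d - 2) * r**(p - 2)))                              # 0
print(sp.simplify(bilap - (p + d - 2) * (p - 2) * (p + d - 4) * r**(p - 4)))    # 0
```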

Remark 5.8

In the case \(d\ge 3\), \(a=2\), \(b=4-d\), one can show that the steady state \(\rho \) in the form of a Dirac Delta on a sphere (with radius R) given by [1] is the global minimizer. In fact, notice that \(\Delta ^2 W(\textbf{x})=0\) for any \(\textbf{x}\ne 0\), and therefore \(\Delta ^2 W*\rho =0\) in B(0; R), which implies that \(\Delta W*\rho \) is constant in B(0; R) by radial symmetry. The steady state \(\rho \) is mild by Lemma 5.6, i.e., \(\nabla W*\rho \) is continuous and vanishes for \(|\textbf{x}|=R\). Then we obtain \(\Delta W*\rho =0\) in B(0; R), which implies that \(W*\rho \) is constant in \(\bar{B(0;R)}\). Similarly to the proof of Lemma 5.6, one can show that \(\Delta W*\rho \) is continuous. Then an argument similar to the last paragraph of the proof of Theorem 5.1 shows that \(\Delta W*\rho \ge 0\) for any \(|\textbf{x}|\ge R\). Therefore we see that \(\rho \) is the global minimizer by Theorem 2.4.

6 \(d_\infty \)-local minimizers of near power-law potentials in 1D are not fractal

Consider a 1D potential of the form

$$\begin{aligned} W(x) = -\frac{|x|^b}{b} + W_1(x) \end{aligned}$$
(6.1)

where \(1<b<2\) and \(W_1\) is smooth. Applying the dimensionality result in [2], we know that \(1\ge \text {dim}(\text {supp}\,\rho )\ge 2-b\) for all \(d_\infty \)-local minimizers. We now prove that the dimension of the support actually attains the maximal value 1. It is in this sense that we say these \(d_\infty \)-local minimizers are not fractal, although this does not exclude the case where \(\text {supp}\,\rho \) is the union of some closed intervals and a set with fractal dimension.

Theorem 6.1

Let \(\rho \in {\mathcal {P}}({\mathbb {R}})\) be a \(d_\infty \)-local minimizer of (6.1) which is supported inside \((-R,R)\), and satisfies \(W*\rho \in C^1([-R,R])\). Then there exists \(c_s>0\) depending on b, R and \(\Vert W_1\Vert _{C^4([-2R,2R])}\), such that \(|\text {supp}\,\rho | \ge c_s\). In particular, \(\text {dim}(\text {supp}\,\rho ) = 1\).

Proof

By [2, Theorem 1], \(\text {supp}\,\rho \) does not contain any isolated point.

STEP 1: Rough estimate of the local mass. We first show that for every connected component of \(\text {supp}\,\rho \) which is a closed interval I, it satisfies

$$\begin{aligned} \int _I \rho (x)\,\textrm{d}{x} \le \frac{1}{b-1}\Vert W_1\Vert _{C^2}|I|^{2-b}, \end{aligned}$$
(6.2)

where the \(C^2\) norm is on \([-2R,2R]\) (and similarly for the other \(C^2,C^4\) norms in this proof). In fact, for any fixed \(x\in I\), we have

$$\begin{aligned} 0 = (W''*\rho )(x) = -(b-1)(|\cdot |^{b-2}*\rho )(x) + (W_1''*\rho )(x) \end{aligned}$$

since \(W*\rho \) is a constant on I. Notice that

$$\begin{aligned} (|\cdot |^{b-2}*\rho )(x) = \int _{[-R,R]} |x-y|^{b-2}\rho (y)\,\textrm{d}{y} \ge \int _I |x-y|^{b-2}\rho (y)\,\textrm{d}{y} \ge |I|^{b-2}\int _I \rho (y)\,\textrm{d}{y} \end{aligned}$$

and

$$\begin{aligned} \left| (W_1''*\rho )(x)\right| = \left| \int _{[-R,R]} W_1''(x-y)\rho (y)\,\textrm{d}{y}\right| \le \Vert W_1\Vert _{C^2} \end{aligned}$$

and (6.2) follows.

Then we show that for every open interval J such that \(J\cap \text {supp}\,\rho \) is nonempty and not connected, it satisfies

$$\begin{aligned} \int _J \rho (x)\,\textrm{d}{x} \le \frac{1}{(b-1)(b-2)(b-3)}|J|^{4-b}\Vert W_1\Vert _{C^4} \end{aligned}$$
(6.3)

In fact, since \(J\cap \text {supp}\,\rho \) is nonempty and not connected, we may take \(x_1,x_2\in J\cap \text {supp}\,\rho \) with \(x_1<x_2\) and \([x_1,x_2]\not \subseteq \text {supp}\,\rho \). Then we may take a maximal open interval \((x_3,x_4)\) in \([x_1,x_2]\backslash \text {supp}\,\rho \).

Then for any \(x\in (x_3,x_4)\), we compute

$$\begin{aligned} W''''(x)&=-(b-1)(b-2)(b-3)|x|^{b-4}+W_1''''(x), \\ W''''*\rho&= -(b-1)(b-2)(b-3)|\cdot |^{b-4}*\rho +W_1''''*\rho \end{aligned}$$
(6.4)

and estimate

$$\begin{aligned}\begin{aligned} (|\cdot |^{b-4}*\rho )(x) =&\int _{[-R,R]} |x-y|^{b-4}\rho (y)\,\textrm{d}{y} \ge \int _{J}|x-y|^{b-4}\rho (y)\,\textrm{d}{y} \ge |J|^{b-4} \int _J \rho (y)\,\textrm{d}{y}\\ \end{aligned}\end{aligned}$$

and

$$\begin{aligned} |(W_1''''*\rho )(x)| \le \Vert W_1\Vert _{C^4}. \end{aligned}$$
(6.5)

If (6.3) were not true, then \((W''''*\rho )(x)<0\) for any \(x\in (x_3,x_4)\). Notice that \(W\in C^1({\mathbb {R}})\), and so is \(W*\rho \). Therefore, applying Lemma 4.2 to \(W*\rho \) on \([x_3,x_4]\), we deduce that either \(x_3\) or \(x_4\) is not a local minimum of \(W*\rho \), and we get a contradiction with the \(d_\infty \)-local minimizer property of \(\rho \) in view of Lemma 2.1.

STEP 2: Decomposition of the support. Assume on the contrary that \(|\text {supp}\,\rho |< c_s\), with the constant \(c_s>0\) to be determined. Then we apply the technical Lemma 6.2, which is postponed below, to \(\text {supp}\,\rho \) (which has no isolated points) with \(\epsilon =c_s\). This gives a cover of \(\text {supp}\,\rho \) by its connected component intervals \(\{I_1,I_2,\ldots \}\) and open intervals \(\{J_1,J_2,\ldots \}\) with the properties listed therein. In particular, we have \( \sum _k|I_k|+\sum _l |J_l|< |\text {supp}\,\rho |+\epsilon < 2c_s\) from item 4 of Lemma 6.2.

By item 3 of Lemma 6.2, we may apply the estimate (6.3) for \(J_1,J_2,\ldots \) to get

$$\begin{aligned}\begin{aligned} \sum _l \int _{J_l}\rho (x)\,\textrm{d}{x} \le&\frac{\Vert W_1\Vert _{C^4}}{(b-1)(b-2)(b-3)} \sum _l|J_l|^{4-b}\\ \le&\frac{\Vert W_1\Vert _{C^4}}{(b-1)(b-2)(b-3)}(2c_s)^{3-b}\sum _l|J_l| \\ \le&\frac{\Vert W_1\Vert _{C^4}}{(b-1)(b-2)(b-3)}(2c_s)^{4-b}. \end{aligned}\end{aligned}$$

Therefore, with the condition

$$\begin{aligned} \frac{\Vert W_1\Vert _{C^4}}{(b-1)(b-2)(b-3)}(2c_s)^{4-b} \le \frac{1}{4}, \end{aligned}$$
(6.6)

we get

$$\begin{aligned} \sum _l \int _{J_l}\rho (x)\,\textrm{d}{x} \le \frac{1}{4}\,. \end{aligned}$$

Since \(\{I_1,I_2,\ldots \}\cup \{J_1,J_2,\ldots \}\) covers \(\text {supp}\,\rho \), we have \(\sum _k \int _{I_k}\rho \,\textrm{d}{x}+\sum _l \int _{J_l}\rho \,\textrm{d}{x} \ge \int _{[-R,R]}\rho \,\textrm{d}{x}= 1\), and then we get

$$\begin{aligned} \sum _k \int _{I_k}\rho (x)\,\textrm{d}{x} \ge \frac{3}{4}. \end{aligned}$$
(6.7)

Then we define

$$\begin{aligned} \delta _k = |I_k|, \quad m_k = \int _{I_k} \rho (x)\,\textrm{d}{x} ,\quad S= \Big \{k: m_k \ge \frac{1}{4c_s}\delta _k\Big \}. \end{aligned}$$
(6.8)

Then \(\sum _k \delta _k \le |\text {supp}\,\rho | < c_s\) since \(\{I_k\}\), as the connected component intervals of \(\text {supp}\,\rho \), are disjoint. Notice that

$$\begin{aligned} \sum _{k\notin S}m_k \le \frac{1}{4c_s}\sum _{k\notin S}\delta _k \le \frac{1}{4c_s}c_s = \frac{1}{4}. \end{aligned}$$

Combined with (6.7), we get

$$\begin{aligned} \sum _{k\in S} m_k \ge \frac{1}{2}. \end{aligned}$$

We will also use the fact that

$$\begin{aligned} m_k \le \frac{1}{b-1}\Vert W_1\Vert _{C^2}|I_k|^{2-b}\le \frac{1}{b-1}\Vert W_1\Vert _{C^2}c_s^{2-b}=:m_s \end{aligned}$$
(6.9)

for every k, which can be seen by applying (6.2) to \(I_k\).

STEP 3: 4-th order derivative estimate. Fix \(k\in S\). Denote \(I_k=[x_1,x_2]\) and then \(\delta _k=|I_k|=x_2-x_1\). Define

$$\begin{aligned} {\tilde{I}}_k = [x_1-C_1m_k ,x_2+C_1m_k ], \end{aligned}$$

where \(C_1\) is a large constant to be determined. We first show

$$\begin{aligned} (W''''*\rho )(x)<0,\quad \forall x\in {\tilde{I}}_k\backslash \text {supp}\,\rho \end{aligned}$$
(6.10)

under a suitable condition (6.11). In fact, by (6.4), \((W''''*\rho )(x)\) is decomposed into two parts, with the second part controlled by (6.5). To control the first part,

$$\begin{aligned}\begin{aligned} (|\cdot |^{b-4}*\rho )(x) =&\int _{[-R,R]} |x-y|^{b-4}\rho (y)\,\textrm{d}{y} \\ \ge&\int _{I_k}|x-y|^{b-4}\rho (y)\,\textrm{d}{y} \ge (\delta _k+C_1m_k)^{b-4}m_k \\ \ge&(4c_sm_k+C_1m_k)^{b-4}m_k = (4c_s+C_1)^{b-4}m_k^{b-3} \ge (4c_s+C_1)^{b-4}m_s^{b-3}, \end{aligned}\end{aligned}$$

where the second inequality uses the fact that \(|x-y|\le \delta _k+C_1m_k\) for \(x\in {\tilde{I}}_k\) and \(y\in I_k\), the third inequality uses the definition of S in (6.8), and the last inequality uses \(m_k\le m_s\) from (6.9). Then (6.10) follows if we assume the condition

$$\begin{aligned} (b-1)(b-2)(b-3)(4c_s+C_1)^{b-4}m_s^{b-3} > \Vert W_1\Vert _{C^4}. \end{aligned}$$
(6.11)

STEP 4: Vacuum regions and exclusion. Using the same argument as in STEP 1, we claim that

$$\begin{aligned} ({\tilde{I}}_k\backslash I_k)\cap \text {supp}\,\rho = \emptyset ,\quad \forall k\in S\,. \end{aligned}$$
(6.12)

Suppose not, then there exists \(y\in ({\tilde{I}}_k\backslash I_k)\cap \text {supp}\,\rho \), which we may assume to satisfy \(x_2<y\le x_2+C_1m_k\) without loss of generality. Then \([x_2,y]\not \subseteq \text {supp}\,\rho \) since \(I_k=[x_1,x_2]\) is a connected component of \(\text {supp}\,\rho \). Then, in view of (6.10), we may find a maximal open interval \((x_3,x_4)\) in \([x_2,y]\backslash \text {supp}\,\rho \) to apply Lemma 4.2 and get a contradiction, similarly to the last paragraph of STEP 1.

For \(I_k = [x_1,x_2]\), define

$$\begin{aligned} {\bar{I}}_k = [x_1-\frac{C_1}{3}m_k ,x_2+\frac{C_1}{3}m_k ] \end{aligned}$$

and we claim that \(\{{\bar{I}}_k:k\in S\}\) are disjoint under a suitable condition (6.13). In fact, suppose \({\bar{I}}_k\cap {\bar{I}}_{k'}\ne \emptyset \) and \(m_k\ge m_{k'}\), then by \(\delta _{k'}\le 4c_sm_{k'}\le 4c_sm_{k}\),

$$\begin{aligned} \text {dist}\,(I_k,I_{k'}) \le \frac{C_1}{3}(m_k+m_{k'}) \le \frac{2C_1}{3}m_k \le C_1m_k - 4c_sm_k\le C_1m_k - \delta _{k'}, \end{aligned}$$

if one assumes the condition

$$\begin{aligned} \frac{C_1}{3} \ge 4c_s. \end{aligned}$$
(6.13)

This implies that \(I_{k'}\subseteq {\tilde{I}}_k\). Since \(I_k\) and \(I_{k'}\) are disjoint and \(I_{k'}\cap \text {supp}\,\rho \ne \emptyset \), we get a contradiction with (6.12).

The disjoint property of \(\{{\bar{I}}_k\}_{k\in S}\) and the fact that \(I_k\subseteq [-R,R]\) show that

$$\begin{aligned} \sum _{k\in S} |{\bar{I}}_k| \le 2\Big (R+\frac{C_1}{3}m_s\Big ) \end{aligned}$$
(6.14)

since \({\bar{I}}_k\subseteq [-(R+\frac{C_1}{3}m_s),(R+\frac{C_1}{3}m_s)]\). On the other hand,

$$\begin{aligned} \sum _{k\in S} |{\bar{I}}_k| \ge \frac{2C_1}{3}\sum _{k\in S}m_k \ge \frac{C_1}{3} \end{aligned}$$
(6.15)

by (6.8). Recall from (6.9) that \(m_s\) can be made arbitrarily small by taking \(c_s\) small. To choose the parameters \(C_1\) and \(c_s\), we first choose \(C_1=9R\), and then choose \(c_s\) small enough so that \(m_s\) is small enough to satisfy (6.6), (6.11), (6.13) and \(m_s<1/9\), to obtain a contradiction between (6.14) and (6.15). \(\square \)

Lemma 6.2

For any closed set \(A\subseteq (-R,R)\) with no isolated points and \(\epsilon >0\), there exists a countable collection of intervals \(\{I_1,I_2,\ldots \}\cup \{J_1,J_2,\ldots \}\) which covers A, satisfying

  • \(\{I_1,I_2,\ldots ,J_1,J_2,\ldots \}\) are subsets of \((-R,R)\).

  • \(\{I_k\}\) are the connected components of A that are closed intervals. \(\{I_k\}\) is a finite or countable collection.

  • For every l, \(J_l\) is an open interval and \(J_l\cap A\) is nonempty and not connected.

  • \(\sum _k |I_k|+\sum _l |J_l| < |A|+\epsilon \).

(Here we do not regard a single point as a closed interval.)

Proof

We first take \(\{I_k\}\) as the collection of all the connected components of A that are closed intervals; this collection is pairwise disjoint, and either finite or countable. Denote \(I=\bigcup _k I_k\), then

$$\begin{aligned} |A| = \sum _k |I_k| + |A\backslash I|\,. \end{aligned}$$

By the definition of the Lebesgue measure, there exists a countable collection of open intervals \(\{J_1,J_2,\ldots \}\) which are subsets of \((-R,R)\) and cover \(A\backslash I\), such that \(\sum _{l=1}^\infty |J_l| < |A\backslash I|+\epsilon \). By deleting those \(J_l\) with empty intersection with \(A\backslash I\), we may assume that \(J_l\cap (A\backslash I)\ne \emptyset \) for every l.

Then for every l, we claim that the nonempty set \(J_l\cap A\) is not connected. To see this, denote \(J_l=(x_1,x_2)\), and suppose \(J_l\cap A\) is connected. Since A does not contain isolated points, \(J_l\cap A\) has to be an interval, which is a subset of a connected component of A that is an interval. Therefore \(J_l\cap A\subseteq I_k\) for some k, and we get a contradiction with \(J_l\cap (A\backslash I)\ne \emptyset \).

\(\square \)

The main conclusion of this section is that, if we were looking for fractal behavior of the support of global minimizers of the interaction energy, then power-law potentials or their smooth perturbations are not the right family of potentials, at least in one dimension.

7 Linear interpolation concavity and its consequences on local minimizers

We say W is linear-interpolation-concave with size \(\delta \) if there exists a nonzero function \(\mu \in L^\infty ({{\mathbb {R}}^d})\) with \(\int _{{\mathbb {R}}^d}\mu \,\textrm{d}{\textbf{x}}=0\) and \(\text {diam}\,(\text {supp}\,\mu ) \le \delta \), such that \(E[\mu ] < 0\). We say W is infinitesimal-concave if W is linear-interpolation-concave with size \(\delta \) for every \(\delta > 0\).
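
To make the definition concrete, here is a small numerical experiment (not used anywhere in the proofs): on a one-dimensional grid one can search for a bounded mean-zero \(\mu \) with small support and negative discretized energy. The potential below, a purely attractive Gaussian well \(W(x)=-e^{-x^2/2}\), is a toy choice whose Fourier transform is negative everywhere, so any such \(\mu \) works and this W is infinitesimal-concave in the sense just defined; it serves only as an illustration of the definition.

```python
import numpy as np

def interaction_energy(W, x, mu, h):
    """Discretized E[mu] = (1/2) * sum_{i,j} W(x_i - x_j) mu_i mu_j h^2 on the grid x."""
    K = W(x[:, None] - x[None, :])
    return 0.5 * h**2 * (mu @ K @ mu)

W = lambda z: -np.exp(-z**2 / 2)                   # toy attractive potential, negative Fourier transform
delta = 0.1
x = np.linspace(-delta / 2, delta / 2, 101)        # supp(mu) has diameter delta
h = x[1] - x[0]
mu = np.sign(x)                                    # bounded and mean-zero on the grid
print("E[mu] =", interaction_energy(W, x, mu, h))  # negative: linear-interpolation-concave with size delta
```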

Theorem 7.1

Assume that the interaction potential W satisfies (H). If W is linear-interpolation-concave with size \(\delta \), then for any \(d_\infty \)-local minimizer \(\rho \) and \(\epsilon _0>0\), \(\{\textbf{x}: \rho (\textbf{x})\ge \epsilon _0\}\) does not contain a ball of radius \(\delta \). In particular, if W is infinitesimal-concave, then for any \(d_\infty \)-local minimizer \(\rho \) and \(\epsilon _0>0\), \(\{\textbf{x}: \rho (\textbf{x})\ge \epsilon _0\}\) has no interior point.

Here the meaning of the condition \(\rho (\textbf{x})\ge \epsilon _0\) is clear if \(\rho \) is a continuous function. Otherwise, we may use the Lebesgue decomposition to write \(\rho =\rho _c+\rho _s\) where \(\rho _c\) is an \(L^1\) function and \(\rho _s\) is singular, i.e., concentrated on a set of Lebesgue measure zero. Then \(\{\textbf{x}: \rho (\textbf{x})\ge \epsilon _0\}\) is interpreted as \(\{\textbf{x}: \rho _c(\textbf{x})\ge \epsilon _0\}\), and the conclusion of Theorem 7.1 is that \(\{\textbf{x}: \rho _c(\textbf{x})\ge \epsilon _0\}\) does not contain a ball of radius \(\delta \) for any representative of \(\rho _c\in L^1\) (representatives may differ on a set of Lebesgue measure zero).

In order to show Theorem 7.1, we need an improvement on the necessary condition for \(d_\infty \)-local minimizers in Lemma 2.1.

Lemma 7.2

Assume that the interaction potential W satisfies (H), and \(\rho \) is a \(d_\infty \)-local minimizer for the corresponding interaction energy. If \(B(\textbf{x};\delta )\subseteq \text {supp}\,\rho \), then \(V=W*\rho \) is a constant on \(B(\textbf{x};\delta )\) almost everywhere.

Proof

We will prove an equivalent statement with \(B(\textbf{x};\delta )\) replaced by a cube \(Q=[x_1,x_1+\delta ]\times \cdots \times [x_d,x_d+\delta ]\). Let \(\epsilon _0\) be as in Lemma 2.1. By replacing \(\epsilon _0\) with a smaller number, we may assume \(\frac{\epsilon _0}{\sqrt{d}}\le \delta \). Take \(\{\textbf{z}_n\}_{n=1}^N\) as the set of points in Q with each coordinate in \(\frac{\epsilon _0}{3\sqrt{d}}{\mathbb {Z}}\). Then for any \(\textbf{y}\in Q\), there exists some \(\textbf{z}_n\) such that \(|\textbf{y}-\textbf{z}_n|\le \frac{\epsilon _0}{3}< \frac{\epsilon _0}{2}\), i.e., the collection of balls \(\{B(\textbf{z}_n;\frac{\epsilon _0}{2})\}\) covers Q.

Suppose it is not true that \(V=W*\rho \) is a constant on Q almost everywhere. Then there exists \(C_1\) such that both

$$\begin{aligned} A_1 = \{\textbf{x}\in Q:V(\textbf{x})<C_1\}\quad \text{ and } \quad A_2 = \{\textbf{x}\in Q:V(\textbf{x})\ge C_1\} \end{aligned}$$

have positive Lebesgue measure. We claim that for any \(n=1,\ldots ,N\),

$$\begin{aligned} \text {if } B\Big (\textbf{z}_n;\frac{\epsilon _0}{2}\Big )\cap A_2\ne \emptyset ,\quad \text {then }|B(\textbf{z}_n;\frac{\epsilon _0}{2})\cap A_1|=0. \end{aligned}$$
(7.1)

In fact, if \(B(\textbf{z}_n;\frac{\epsilon _0}{2})\cap A_2\ne \emptyset \), then we may take \(\textbf{y}\in B(\textbf{z}_n;\frac{\epsilon _0}{2})\cap A_2\). Applying Lemma 2.1 to the point \(\textbf{y}\) shows that \(V(\textbf{y}_1)\ge V(\textbf{y})\ge C_1\) for \(\textbf{y}_1\in B(\textbf{y};\epsilon _0)\) almost everywhere. Since \(|\textbf{y}-\textbf{z}_n|<\frac{\epsilon _0}{2}\), we have \(B(\textbf{z}_n;\frac{\epsilon _0}{2})\subseteq B(\textbf{y};\epsilon _0)\), and the claim follows.

Since \(A_2\) has positive Lebesgue measure, we may take \(\textbf{y}\in A_2\), and take \(\textbf{z}_m\) with \(\textbf{y}\in B(\textbf{z}_m;\frac{\epsilon _0}{2})\). Applying (7.1), we see that \(|B(\textbf{z}_m;\frac{\epsilon _0}{2})\cap A_1|=0\). This implies that \(B(\textbf{z}_{m'};\frac{\epsilon _0}{2})\cap A_2\ne \emptyset \) for any \(\textbf{z}_{m'}\) with \(|\textbf{z}_m-\textbf{z}_{m'}|=\frac{\epsilon _0}{3\sqrt{d}}\), i.e., the closest neighbors of \(\textbf{z}_m\), since \(\textbf{z}_{m'}\) is an interior point of \(B(\textbf{z}_m;\frac{\epsilon _0}{2})\). Applying (7.1) iteratively, we obtain \(|B(\textbf{z}_n;\frac{\epsilon _0}{2})\cap A_1|=0\) for any n. Since \(\{B(\textbf{z}_n;\frac{\epsilon _0}{2})\}\) covers Q, we get \(|A_1|=0\), contradicting the assumption that \(A_1\) has positive measure.

\(\square \)

Proof of Theorem  7.1

The second statement is clearly a consequence of the first one. We prove the first statement by contradiction. Suppose the contrary, so that \(\{\textbf{x}: \rho (\textbf{x})\ge \epsilon _0\}\) contains a ball \(B(\textbf{x}_0;\delta )\). Let \(\mu \) be the function as in the definition of linear-interpolation-concavity. By translation, we may assume \(\text {supp}\,\mu \subseteq B(\textbf{x}_0;\delta )\). Then define the family of probability measures

$$\begin{aligned} \rho _\epsilon = \rho + \epsilon \mu ,\quad 0<\epsilon <\frac{\epsilon _0}{\Vert \mu \Vert _{L^\infty }}\,. \end{aligned}$$

See Fig. 1 for an illustration. Since the generated potential \(W*\rho \) is constant in \(B(\textbf{x}_0;\delta )\) almost everywhere by Lemma 7.2 and \(\mu \) is a mean-zero function supported in \(B(\textbf{x}_0;\delta )\), we have \(\int _{{\mathbb {R}}^d}(W*\rho )(\textbf{x})\mu (\textbf{x})\,\textrm{d}{\textbf{x}}=0\), and then

$$\begin{aligned} E[\rho _\epsilon ] = E[\rho ] + \epsilon \int _{{\mathbb {R}}^d}(W*\rho )(\textbf{x})\mu (\textbf{x})\,\textrm{d}{\textbf{x}} + \epsilon ^2E[\mu ] = E[\rho ] + \epsilon ^2E[\mu ] < E[\rho ] \end{aligned}$$

for any \(\epsilon >0\). This contradicts the assumption that \(\rho \) is a \(d_\infty \)-local minimizer since \(d_\infty (\rho ,\rho _\epsilon ) \le C\epsilon \). \(\square \)

Fig. 1: \(\mu \) in the definition of linear-interpolation-concavity, and its application in the proof of Theorem 7.1

Now we can give sufficient conditions for W to be infinitesimal-concave.

Lemma 7.3

Assume that the interaction potential W is radially symmetric and satisfies (H), and let \(\beta ,\alpha ,c_1,C_1>0\) be given parameters. For any \(\delta >0\), there exists \(R>0\), such that the conditions

  1. \(W(r) \le C_1(1+r)^\beta \).

  2. \({\hat{W}}\) (as a distribution) is a function on any compact set inside \({\mathbb {R}}^d\backslash \{0\}\).

  3. There exists an interval \(J\subseteq {\mathbb {R}}_+\) with \(|J|=R\) such that \({\hat{W}}(\xi ) < -c_1R^{-\alpha },\,\forall |\xi |\in J\).

imply that W is linear-interpolation-concave with size \(\delta \). In particular, if in addition condition 3 is true for any \(R\ge 1\), then W is infinitesimal-concave.

Proof

We may assume \(\delta \le 1\) without loss of generality, and then start by requiring that \(R\ge 1/\delta \ge 1\). We fix a smooth positive radial function \(\phi (\xi )\) supported on \(B(0;\frac{1}{2})\). Then for any \(m>0\), there exists \(C_m\) such that

$$\begin{aligned} |{\check{\phi }}(\textbf{x})| \le C_m(1+|\textbf{x}|)^{-m}. \end{aligned}$$
(7.2)

We need to find a nonzero function \(\mu \in L^\infty ({{\mathbb {R}}^d})\) with \(\int _{{\mathbb {R}}^d}\mu \,\textrm{d}{\textbf{x}}=0\) and \(\text {diam}\,(\text {supp}\,\mu ) \le \delta \), such that \(E[\mu ] < 0\). We define \(\mu \) by

$$\begin{aligned} \mu = \chi _{B(0;\delta /2)}\cdot \Big (\mu _1 - \frac{1}{|B(0;\delta /2)|}\int _{B(0;\delta /2)}\mu _1\,\textrm{d}{\textbf{x}}\Big ),\quad \mu _1 = {\mathcal {F}}^{-1}\Big (\phi (\frac{\cdot - \xi _J\textbf{e}_1}{R})+\phi (\frac{\cdot + \xi _J\textbf{e}_1}{R})\Big ) \end{aligned}$$

where \(\textbf{e}_1=(1,0,\ldots ,0)^T\), and \(\xi _J\) is the center of the interval \(J=J(R)\) given as in condition 3. Notice that \(\mu _1\) is real-valued since \({\hat{\mu }}_1\) is even, by definition. Also, \(\mu _1\) is mean-zero on \({\mathbb {R}}^d\) since \({\hat{\mu }}_1(0)=0\), and \(\mu \) is mean-zero by definition. See Fig. 2 for an illustration.

Fig. 2: Construction of \(\mu \) in the proof of Lemma 7.3. On the Fourier side \({\hat{\mu }}_1\) is supported on the radial interval J where \({\hat{W}}\) is negative. On the physical side \(\mu _1\) is essentially supported on a ball of radius \(O(|J|^{-1})\), with a small tail

STEP 1: We first give a negative upper bound for \(E[\mu _1]\). In fact, notice that

$$\begin{aligned} \text {supp}\,\Big (\phi (\frac{\cdot - \xi _J\textbf{e}_1}{R})\Big ) = B(\xi _J\textbf{e}_1;\frac{R}{2}) \subseteq \{\xi : |\xi |\in J\}. \end{aligned}$$

The condition that \({\hat{W}}\) is a function inside \({\mathbb {R}}^d\backslash \{0\}\) allows us to apply the formula \(E[\mu _1]=\tfrac{1}{2}\int _{{\mathbb {R}}^d}{\hat{W}}|{\hat{\mu }}_1|^2\,\textrm{d}{\xi }\) since \(\mu _1\) is an \(L^1\) function with sufficient decay at infinity and \(0\notin \text {supp}\,{\hat{\mu }}_1\). Therefore we get

$$\begin{aligned} \begin{aligned} E[\mu _1] =&\frac{1}{2}\int _{|\xi |\in J} {\hat{W}}(\xi )\left| \phi \Big (\frac{\xi - \xi _J\textbf{e}_1}{R}\Big )+\phi \Big (\frac{\xi + \xi _J\textbf{e}_1}{R}\Big )\right| ^2\,\textrm{d}{\xi } = \int _{|\xi |\in J} {\hat{W}}(\xi )\left| \phi \Big (\frac{\xi - \xi _J\textbf{e}_1}{R}\Big )\right| ^2\,\textrm{d}{\xi }\\ \le&-c_1R^{-\alpha }\int _{|\xi |\in J} \left| \phi \Big (\frac{\xi - \xi _J}{R}\Big )\right| ^2\,\textrm{d}{\xi } = -c_1\Vert \phi \Vert _{L^2}^2R^{-\alpha +d}. \end{aligned}\end{aligned}$$
(7.3)

STEP 2: Analyze the difference between \(\mu _1\) and \(\mu \) on the physical side. Fix a choice of m with \(m\ge \beta +\alpha +d+1\). Then (7.2) implies

$$\begin{aligned} \Big |{\mathcal {F}}^{-1}\Big (\phi (\frac{\cdot - \xi _J\textbf{e}_1}{R})\Big )(\textbf{x})\Big | = \Big |{\mathcal {F}}^{-1}\Big (\phi (\frac{\cdot }{R})\Big )(\textbf{x})\Big | \le CR^d(1+R|\textbf{x}|)^{-m}. \end{aligned}$$

Therefore, we deduce

$$\begin{aligned} |\mu _1(\textbf{x})| \le CR^d(1+R|\textbf{x}|)^{-m},\quad \Vert \mu _1\Vert _{L^1}\le CR^d\int _{{\mathbb {R}}^d}(1+R|\textbf{x}|)^{-m}\,\textrm{d}{\textbf{x}}=C. \end{aligned}$$
(7.4)

Combined with condition 1, this implies that

$$\begin{aligned} \begin{aligned} |(W*\mu _1)(\textbf{x})| \le&\int _{|\textbf{x}-\textbf{y}|\le |\textbf{x}|}|\mu _1(\textbf{x}-\textbf{y})W(\textbf{y})|\,\textrm{d}{\textbf{y}} + \int _{|\textbf{y}|> |\textbf{x}|}|\mu _1(\textbf{y})W(\textbf{x}-\textbf{y})|\,\textrm{d}{\textbf{y}} \\ \le \,&C(1+|\textbf{x}|)^\beta \Vert \mu _1\Vert _{L^1} + CR^d\int _{|\textbf{y}|>|\textbf{x}|}(1+R|\textbf{y}|)^{-m}(1+|\textbf{x}-\textbf{y}|)^\beta \,\textrm{d}{\textbf{y}}\\ \le \,&C(1+|\textbf{x}|)^\beta \Vert \mu _1\Vert _{L^1} + CR^d\int _{{\mathbb {R}}^d}(1+|\textbf{x}-\textbf{y}|)^{-m}(1+|\textbf{x}-\textbf{y}|)^\beta \,\textrm{d}{\textbf{y}}\\ \le \,&CR^d(1+|\textbf{x}|)^\beta , \end{aligned}\end{aligned}$$
(7.5)

where the second last inequality uses the condition \(R\ge 1\) and the fact \(|\textbf{y}|\ge |\textbf{x}-\textbf{y}|/2\) for any \(|\textbf{y}|>|\textbf{x}|\), and the last inequality uses the definition of m to get the finiteness of the integral.

Since \(\mu _1\) is mean-zero on \({\mathbb {R}}^d\), we have

$$\begin{aligned}\begin{aligned} \left| \int _{B(0;\delta /2)}\mu _1\,\textrm{d}{\textbf{x}}\right| =&\left| \int _{B(0;\delta /2)^c}\mu _1\,\textrm{d}{\textbf{x}}\right| \le CR^d\int _{|\textbf{x}|>\delta /2} (1+R|\textbf{x}|)^{-m}\,\textrm{d}{\textbf{x}} \\ =\,&C\int _{|\textbf{x}|>\delta R/2} (1+|\textbf{x}|)^{-m}\,\textrm{d}{\textbf{x}} \le C(\delta R)^{-m+d} . \end{aligned}\end{aligned}$$

Therefore, we obtain

$$\begin{aligned} \begin{aligned} |\mu (\textbf{x})-\mu _1(\textbf{x})| \le \,&|\chi _{B(0;\delta /2)^c}(\textbf{x})\mu _1(\textbf{x})| + \left| \chi _{B(0;\delta /2)}(\textbf{x})\frac{1}{|B(0;\delta /2)|}\int _{B(0;\delta /2)}\mu _1(\textbf{y})\,\textrm{d}{\textbf{y}}\right| \\ \le \,&|\mu _1(\textbf{x})|\chi _{B(0;\delta /2)^c}(\textbf{x}) + C(\delta R)^{-m+d}\delta ^{-d}\chi _{B(0;\delta /2)}(\textbf{x}). \end{aligned}\end{aligned}$$
(7.6)

Notice that \((W*\chi _{B(0;\delta /2)})(\textbf{x}) \le C(1+|\textbf{x}|)^\beta \) for any \(0<\delta <1\). Therefore, using the assumption \(R\ge 1/\delta \) at the beginning of the proof,

$$\begin{aligned} W*\Big (C(\delta R)^{-m+d}\delta ^{-d}\chi _{B(0;\delta /2)}(\textbf{x})\Big )(\textbf{x}) \le C(\delta R)^{-m+d}(1+|\textbf{x}|)^\beta \le C\delta ^{-d}(1+|\textbf{x}|)^\beta . \end{aligned}$$

Therefore, combined with (7.5) (which is also true if \(\mu _1\) is replaced by \(|\mu _1(\textbf{x})|\chi _{B(0;\delta /2)^c}(\textbf{x})\)), we get

$$\begin{aligned} |(W*\mu )(\textbf{x})| \le C\delta ^{-d}R^d(1+|\textbf{x}|)^\beta , \end{aligned}$$

using \(R\ge 1\) and \(\delta \le 1\). Finally, combining with (7.4), (7.6) and using \(R\ge 1,\,\delta \le 1\),

$$\begin{aligned} |E[\mu ]-E[\mu _1]| \le \,&\frac{1}{2}\int _{{\mathbb {R}}^d}|(W*\mu )(\textbf{x})(\mu (\textbf{x})-\mu _1(\textbf{x}))|\,\textrm{d}{\textbf{x}} + \frac{1}{2}\int _{{\mathbb {R}}^d}|(W*\mu _1)(\textbf{x})(\mu (\textbf{x})-\mu _1(\textbf{x}))|\,\textrm{d}{\textbf{x}} \\ \le \,&C\delta ^{-d}R^d\int _{{\mathbb {R}}^d}(1+|\textbf{x}|)^\beta |\mu (\textbf{x})-\mu _1(\textbf{x})|\,\textrm{d}{\textbf{x}} \\ \le \,&C\delta ^{-d}R^d\int _{B(0;\delta /2)^c}(1+|\textbf{x}|)^\beta |\mu _1(\textbf{x})|\,\textrm{d}{\textbf{x}} + C\delta ^{-2d}R^d(\delta R)^{-m+d}\int _{B(0;\delta /2)}(1+|\textbf{x}|)^\beta \,\textrm{d}{\textbf{x}} \\ \le \,&C\delta ^{-d}R^{2d}\int _{B(0;\delta /2)^c}(1+R|\textbf{x}|)^\beta (1+R|\textbf{x}|)^{-m}\,\textrm{d}{\textbf{x}} + C\delta ^{-2d}R^d(\delta R)^{-m+d}\delta ^d \\ \le \,&C\delta ^{-d}R^d(\delta R)^{-m+\beta +d}+ C\delta ^{-d}R^d(\delta R)^{-m+d}, \end{aligned}$$

which implies

$$\begin{aligned} \frac{|E[\mu ]-E[\mu _1]|}{R^{-\alpha +d}} \le C(\delta ^{-m+\beta }R^{-m+\beta +\alpha +d} + \delta ^{-m}R^{-m+\alpha +d}). \end{aligned}$$
(7.7)

By the choice of m, all the above powers of R are negative. Comparing with (7.3), if R is large enough, one can guarantee that the above RHS is at most \(c_1\Vert \phi \Vert _{L^2}^2/2\), and the conclusion is obtained. \(\square \)

Remark 7.4

Notice that m in the above proof can be chosen arbitrarily large. Therefore, when comparing the RHS of (7.7) with (7.3), for any \(\epsilon >0\) one can choose m large enough so that it suffices to take \(R\sim \delta ^{-1-\epsilon }\).

8 Construction of infinitesimal-concave attractive–repulsive potentials

In this section we aim to construct a class of infinitesimal-concave attractive–repulsive potentials. We start from the Riesz repulsion with quadratic attraction

$$\begin{aligned} W_0(\textbf{x}) = c_{d,\alpha }|\textbf{x}|^{\alpha -d} + \frac{|\textbf{x}|^2}{2},\quad c_{d,\alpha } = \frac{\Gamma ((d-\alpha )/2)}{\pi ^{d/2}2^\alpha \Gamma (\alpha /2)} \end{aligned}$$

where \(0<\alpha <d+2\) is a parameter. Here \(c_{d,\alpha }\) is chosen so that \({\mathcal {F}}[c_{d,\alpha }|\textbf{x}|^{\alpha -d}] = |\xi |^{-\alpha }\) away from \(\xi =0\), and notice that \(c_{d,\alpha }>0\) when \(0<\alpha <d\), and \(c_{d,\alpha }<0\) when \(d<\alpha <d+2\). When \(\alpha =d\), we adopt the convention \(c_{d,\alpha }|\textbf{x}|^{\alpha -d}:=-\frac{2}{\pi ^{d/2}2^\alpha \Gamma (\alpha /2)}\ln |\textbf{x}|\). Finally, in one dimension, we consider the ranges \(0<\alpha \le 1\) and \(2\le \alpha <3\), since for \(1<\alpha <2\) we do not know if \({\mathcal {F}}[c_{1,\alpha }|\textbf{x}|^{\alpha -1}] = |\xi |^{-\alpha }\) holds.

Then define the new potentials by adding a hierarchical sequence of perturbations:

$$\begin{aligned} W(\textbf{x}) = W_0(\textbf{x}) - c_W\sum _{k=1}^\infty \lambda ^{(\alpha -d)k} \exp \Big (-\frac{|\textbf{x}|^2}{2\lambda ^{2k}}\Big ) \end{aligned}$$
(8.1)

where \(0<\lambda <1\) and \(c_W>0\).
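The construction (8.1) is easy to explore numerically. The following is a minimal sketch, not part of the analysis, that evaluates W by truncating the infinite sum of Gaussian perturbations; it assumes NumPy and SciPy are available, restricts to the range \(d<\alpha <d+2\) to avoid the logarithmic convention at \(\alpha =d\), and all parameter values are purely illustrative.

```python
import numpy as np
from scipy.special import gamma

def riesz_constant(d, alpha):
    # c_{d,alpha} normalizing F[c_{d,alpha}|x|^{alpha-d}] = |xi|^{-alpha} (alpha != d)
    return gamma((d - alpha) / 2) / (np.pi**(d / 2) * 2**alpha * gamma(alpha / 2))

def W(r, d=2, alpha=3.0, lam=0.2, c_W=0.2, n_terms=60):
    """Potential (8.1) at radius r, with the sum over k truncated; since
    0 < lam < 1 and alpha > d here, the neglected tail is negligible."""
    r = np.asarray(r, dtype=float)
    out = riesz_constant(d, alpha) * r**(alpha - d) + r**2 / 2
    for k in range(1, n_terms + 1):
        out = out - c_W * lam**((alpha - d) * k) * np.exp(-r**2 / (2 * lam**(2 * k)))
    return out

print(W(np.linspace(1e-3, 2.0, 5)))   # illustrative values only
```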

Theorem 8.1

Let W be given by (8.1). There exist \(c(d,\alpha )\) and \(C(d,\alpha )\) such that the following hold:

  • If \(c_W > c(d,\alpha )\), then W satisfies the conditions in Lemma 7.3 for any \(R\ge 1\), with \(\beta =2\) and the same \(\alpha \). It follows that W is infinitesimal-concave.

  • If \(c_W < C(d,\alpha )\), then for sufficiently small \(\lambda \) (depending on d, \(\alpha \) and \(C(d,\alpha )-c_W\)), W is repulsive for short distances and attractive for long distances: there exists \(R_W>0\) such that \(W'(r)< 0,\,\forall 0<r<R_W\) and \(W'(r)> 0,\,\forall r>R_W\).

Remark 8.2

The explicit expressions of \(c(d,\alpha )\) and \(C(d,\alpha )\) are given in (8.4) and (8.6) respectively. One can show that within the range of \(\alpha \) stated before, \(c(d,\alpha )<C(d,\alpha )\) if and only if

$$\begin{aligned} \frac{d+2}{2}<\alpha<d+2,\text { for }d\ge 2;\quad 2\le \alpha < 3,\text { for }d=1 \end{aligned}$$
(8.2)

after some tedious but easy explicit computations. Therefore, for such \(\alpha \), one can construct W which is infinitesimal-concave and also repulsive for short distances and attractive for long distances.
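As a quick sanity check of Remark 8.2, one can evaluate the explicit constants (8.4) and (8.6) numerically and compare them for sample parameters in the range (8.2). This sketch assumes SciPy's gamma function and is only meant to illustrate the comparison.

```python
import numpy as np
from scipy.special import gamma

def c_lower(d, alpha):
    # c(d,alpha) from (8.4)
    return alpha**(-alpha / 2) * np.exp(alpha / 2) * (2 * np.pi)**(-d / 2)

def C_upper(d, alpha):
    # C(d,alpha) from (8.6); note c_{d,alpha}(d-alpha) > 0 when d < alpha < d+2
    c_da = gamma((d - alpha) / 2) / (np.pi**(d / 2) * 2**alpha * gamma(alpha / 2))
    return (np.e / (d + 2 - alpha))**((d + 2 - alpha) / 2) * c_da * (d - alpha)

for d, alpha in [(2, 3.0), (3, 4.0), (1, 2.5)]:   # all satisfy (8.2)
    print(d, alpha, c_lower(d, alpha), C_upper(d, alpha))
    # expect c(d,alpha) < C(d,alpha) in each case, in line with Remark 8.2
```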

Proof

First claim: Item 1 of the conditions of Lemma 7.3 is clear. To check items 2 and 3, we start by noticing that \({\mathcal {F}}[|\textbf{x}|^2]= -(2\pi )^d\Delta \delta (\xi )\) is supported on \(\{0\}\), and (as a distribution) \({\mathcal {F}}[c_{d,\alpha }|\textbf{x}|^{\alpha -d}] = \frac{1}{|\xi |^\alpha },\,\forall \xi \ne 0\). Therefore, for any \(\xi \ne 0\),

$$\begin{aligned} {\hat{W}}(\xi ) =\,&\frac{1}{|\xi |^\alpha } - c_W\sum _{k=1}^\infty {\mathcal {F}}\Big [\lambda ^{(\alpha -d)k}\exp (-\frac{(\cdot )^2}{2\lambda ^{2k}})\Big ](\xi )\\ =\,&\frac{1}{|\xi |^\alpha } - (2\pi )^{d/2}c_W\sum _{k=1}^\infty \lambda ^{\alpha k} \exp \Big (-\frac{\lambda ^{2k}|\xi |^2}{2}\Big ) \end{aligned}$$

where we use \({\mathcal {F}}[\exp (-\frac{(\cdot )^2}{2})]=(2\pi )^{d/2}\exp (-\frac{|\xi |^2}{2})\). See Fig. 3 for an illustration. This shows item 2 of the conditions of Lemma 7.3.

Fig. 3 The k-th term in the construction of W. On the physical side, it is essentially supported in a ball of radius \(O(\lambda ^k)\), while on the Fourier side, it is essentially supported in a ball of radius \(O(\lambda ^{-k})\); its size at \(|\xi |\sim O(\lambda ^{-k})\) is of the same order as \({\hat{W}}_0\), and \(c_W\) is chosen such that \({\hat{W}}_0(\xi )\) is smaller than the k-th term at this scale.

Assume \(c_W > c(d,\alpha )\) where \(c(d,\alpha )\) will be given later in (8.4). For any fixed \(j\in {\mathbb {Z}}_+\), we claim that

$$\begin{aligned} {\hat{W}}(\xi ) < -c|\xi |^{-\alpha },\quad \text{ for } \text{ all } \xi \text { with } (\sqrt{\alpha }-\epsilon )\lambda ^{-j} \le |\xi | \le (\sqrt{\alpha }+\epsilon )\lambda ^{-j} \end{aligned}$$
(8.3)

for some positive constants \(\epsilon \) and c independent of j. Then, since \(\lambda <1\), item 3 of the conditions in Lemma 7.3 follows because the length \(R=2\epsilon \lambda ^{-j}\) of the interval can be made arbitrarily large by taking j large.

To prove the claim (8.3), notice that

$$\begin{aligned} {\hat{W}}(\xi ) \le \,&\frac{1}{|\xi |^\alpha } - (2\pi )^{d/2}c_W\lambda ^{\alpha j} \exp \Big (-\frac{\lambda ^{2j}|\xi |^2}{2}\Big ) \\ =\,&\lambda ^{\alpha j}\psi _1(\lambda ^{2j}|\xi |^2),\quad \psi _1(y):= \frac{1}{y^{\alpha /2}} - (2\pi )^{d/2}c_We^{-y/2}\,. \end{aligned}$$

Notice that the ratio of the two terms in \(\psi _1\) is

$$\begin{aligned} \psi _2(y) := \frac{e^{-y/2}}{1/y^{\alpha /2}} = y^{\alpha /2}e^{-y/2} \end{aligned}$$

which achieves its maximum value \(\psi _2(\alpha ) = \alpha ^{\alpha /2}e^{-\alpha /2}\) at \(y=\alpha \), since \(\psi _2'(y) = (\frac{\alpha }{2}-\frac{y}{2})y^{(\alpha -2)/2}e^{-y/2}\). Therefore, if

$$\begin{aligned} c_W>c(d,\alpha ):=\alpha ^{-\alpha /2}e^{\alpha /2}(2\pi )^{-d/2} \end{aligned}$$
(8.4)

then \(\psi _1(\alpha )<0\), and by continuity \(\psi _1(y) \le -c < 0\) for some \(c>0\) in a neighborhood of \(\alpha \); in particular, \(\psi _1(y) \le -c < 0\) whenever \(\sqrt{y}\in (\sqrt{\alpha }-\epsilon ,\sqrt{\alpha }+\epsilon )\) for some \(\epsilon >0\), which implies (8.3).
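The mechanism behind (8.3) can also be observed numerically: with \(c_W\) slightly above \(c(d,\alpha )\), the Fourier transform computed above is negative on the annuli \(|\xi |\approx \sqrt{\alpha }\,\lambda ^{-j}\). The following sketch (illustrative parameters, truncated sum) checks this sign condition.

```python
import numpy as np

def W_hat(xi_norm, d=2, alpha=3.0, lam=0.2, c_W=0.2, n_terms=80):
    # Fourier transform of (8.1) away from xi = 0; the |x|^2/2 part only acts at xi = 0
    s = sum(lam**(alpha * k) * np.exp(-lam**(2 * k) * xi_norm**2 / 2)
            for k in range(1, n_terms + 1))
    return xi_norm**(-alpha) - (2 * np.pi)**(d / 2) * c_W * s

d, alpha, lam, c_W = 2, 3.0, 0.2, 0.2     # here c_W is above c(2,3) ~ 0.137
for j in range(1, 6):
    xi = np.sqrt(alpha) * lam**(-j)
    print(j, W_hat(xi, d, alpha, lam, c_W))   # expect negative values, as in (8.3)
```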

Second claim:

STEP 1: We first analyze small \(r=|\textbf{x}|\):

$$\begin{aligned} \begin{aligned} W'(r) =&-c_{d,\alpha }(d-\alpha )r^{\alpha -d-1} + r + c_W\sum _{k=1}^\infty (\lambda ^{(\alpha -d-2)k} r)\exp \Big (-\frac{|\lambda ^{-k} r|^2}{2}\Big ) \\ =&\left( -c_{d,\alpha }(d-\alpha ) + r^{-\alpha +d+2} + c_W\sum _{k=1}^\infty \psi (\lambda ^{-2k} r^2) \right) r^{\alpha -d-1} \\ \end{aligned}\end{aligned}$$
(8.5)

where

$$\begin{aligned} \psi (y):=y^{(-\alpha +d+2)/2} e^{-y/2} . \end{aligned}$$

Since \(\psi '(y) = (\frac{-\alpha +d+2}{2} - \frac{y}{2})y^{(-\alpha +d)/2}e^{-y/2}\), \(\psi \) achieves its maximum at

$$\begin{aligned}\psi (-\alpha +d+2) = (-\alpha +d+2)^{(-\alpha +d+2)/2}e^{-(-\alpha +d+2)/2} = ((-\alpha +d+2)/e)^{(-\alpha +d+2)/2} ,\end{aligned}$$

and it is increasing on \([0,-\alpha +d+2]\) and decreasing on \([-\alpha +d+2,\infty )\).

Taking \(\lambda \) small enough, we can guarantee that \(\lambda \in [0,-\alpha +d+2]\) and \(\lambda ^{-1}\in [-\alpha +d+2,\infty )\). Notice that for any \(r>0\), \(\{\lambda ^{-2k}r^2\}_{k=1}^\infty \cap [\lambda ,\lambda ^{-1})\) has at most one element. Then

$$\begin{aligned}\begin{aligned} \sum _{k=1}^\infty \psi (\lambda ^{-2k} r^2) \le&((-\alpha +d+2)/e)^{(-\alpha +d+2)/2} + \sum _{k=-\infty }^0 \psi (\lambda ^{1-2k})+ \sum _{k=0}^\infty \psi (\lambda ^{-1-2k}) \\ \le \,&((-\alpha +d+2)/e)^{(-\alpha +d+2)/2}+ C\sum _{k=-\infty }^0 \lambda ^{(1-2k)(-\alpha +d+2)/2}+ C\sum _{k=0}^\infty \lambda ^{1+2k} \\ \le \,&((-\alpha +d+2)/e)^{(-\alpha +d+2)/2} + C(\lambda ^{(-\alpha +d+2)/2} + \lambda ), \end{aligned}\end{aligned}$$

where we use \(\psi (y) \le y^{(-\alpha +d+2)/2}\) and \(\psi (y) \le C/y\) to estimate the two summations respectively (C only depending on d and \(\alpha \)). Therefore, putting together (8.5) with the condition that

$$\begin{aligned} c_W<C(d,\alpha ):=(e/(-\alpha +d+2))^{(-\alpha +d+2)/2}c_{d,\alpha }(d-\alpha ), \end{aligned}$$
(8.6)

then there exist \(r_1>0\) and \(\lambda _1>0\), depending on \(C(d,\alpha )-c_W\), such that \(W'(r)<0\) for all \(0<\lambda \le \lambda _1\) and \(0<r\le r_1\), i.e., W is repulsive on \((0,r_1]\).

STEP 2: Then we analyze \(W'(r)\) for \(r\ge r_1\) fixed by the previous step. Notice that

$$\begin{aligned}\begin{aligned} W''(r) =&c_{d,\alpha }(d-\alpha )(d+1-\alpha )r^{\alpha -d-2} + 1 + c_W\sum _{k=1}^\infty \lambda ^{(\alpha -d-2)k} (1-\lambda ^{-2k}r^2)\exp \Big (-\frac{|\lambda ^{-k} r|^2}{2}\Big )\,. \\ \end{aligned}\end{aligned}$$

We separate into two cases:

  • If \(0<\alpha \le d+1\), then \(W_0''(r)=c_{d,\alpha }(d-\alpha )(d+1-\alpha )r^{\alpha -d-2} + 1 > 0\) for all \(r>0\), and is bounded from below by 1 for \(r\ge r_1\). Notice that \(ye^{-y/2}\le C_n y^{-n}\) for any \(n>0\) and \(y>0\). Using this with \(y=\lambda ^{-2k}r^2\) and \(n =(-\alpha +d+3)/2\), we get

    $$\begin{aligned}\begin{aligned} W''(r) \ge&1 - c_WC_n\sum _{k=1}^\infty \lambda ^{(\alpha -d-2)k} \lambda ^{2nk}r^{-2n}\ge 1 - c_WC_nr_1^{-2n}\sum _{k=1}^\infty \lambda ^{k} = 1 - c_WC_nr_1^{-2n}\frac{\lambda }{1-\lambda } \end{aligned}\end{aligned}$$

    which is positive if \(\lambda \) is small enough. Then W(r) is convex as a function of r on \([r_1,\infty )\), which implies that there exists only one point \(R_W\) where \(W'\) changes sign (\(R_W\) always exists because \(W'(r)<0\) for \(r\in (0,r_1]\) and \(W'(r)>0\) for large enough r).

  • If \(d+1< \alpha <d+2\), then explicit calculation shows that there exists \(0<r_2<r_3\) such that \(W_0''(r)>0\) on \([r_2,\infty )\), and \(W_0'(r)<0\) on \((0,r_3]\). If \(r_1<r_3\), let \(-w_1<0\) be an upper bound of \(W_0'\) on \([r_1,r_3]\). Then for \(r_1<r\le r_3\), (8.5) combined with the fact that \(\psi (y)\le C/y\) gives

    $$\begin{aligned}\begin{aligned} W'(r) \le&-w_1 + c_W\sum _{k=1}^\infty \psi (\lambda ^{-2k} r^2) r^{\alpha -d-1} \le -w_1 + c_WC\sum _{k=1}^\infty \lambda ^{2k} r^{-2} r^{\alpha -d-1} \\ \le&-w_1 + c_WCr_1^{\alpha -d-3}\frac{\lambda ^2}{1-\lambda ^2} \end{aligned}\end{aligned}$$

    which is negative if \(\lambda \) is small enough. Therefore \(W'(r)<0\) on \((r_1,r_3]\). Similar to the previous case, there exists only one point \(R_W\in [r_2,\infty )\) where \(W'\) changes sign. This implies \(W'(r)<0\) on \((0,R_W)\) and \(W'(r)>0\) on \((R_W,\infty )\). If \(r_1\ge r_3\), then one can get the conclusion similar to the previous case for \(0<\alpha \le d+1\) since again \(W_0''(r)\) is bounded below by a positive constant due to \(r_2<r_3\le r_1\).

\(\square \)

Finally we show that any \(d_\infty \)-local minimizer of the potential W in (8.1) does not collapse to Dirac masses.

Proposition 8.3

Let W be given by (8.1) and \(C(d,\alpha )\) as in Theorem 8.1. If \(c_W<C(d,\alpha )\) and \(\lambda \) is sufficiently small, then the support of any compactly supported \(d_\infty \)-local minimizer \(\rho \) does not contain any isolated points.

Proof

Assume to the contrary that there is an isolated point \(\textbf{x}_0\in \text {supp}\,\rho \); then there exists \(\epsilon >0\) such that \(\text {supp}\,\rho \cap B(\textbf{x}_0;\epsilon )=\{\textbf{x}_0\}\), and \(\rho = a\delta (\textbf{x}-\textbf{x}_0) + \rho \chi _{B(\textbf{x}_0;\epsilon )^c}\) for some \(a>0\). Then for any \(\textbf{x}\) with \(|\textbf{x}-\textbf{x}_0|<\epsilon /2\),

$$\begin{aligned}\begin{aligned} (W*\rho )(\textbf{x})&=\, aW(\textbf{x}-\textbf{x}_0) + \int _{{\mathbb {R}}^d}W(\textbf{y})\rho (\textbf{x}-\textbf{y})\chi _{B(\textbf{x}_0;\epsilon )^c}(\textbf{x}-\textbf{y})\,\textrm{d}{\textbf{y}} \\&=\,aW(\textbf{x}-\textbf{x}_0) + \int _{|\textbf{y}|\ge \epsilon /2} W(\textbf{y})\rho (\textbf{x}-\textbf{y})\chi _{B(\textbf{x}_0;\epsilon )^c}(\textbf{x}-\textbf{y})\,\textrm{d}{\textbf{y}}\\&=: aW(\textbf{x}-\textbf{x}_0) + {\mathcal {I}}(\textbf{x}). \end{aligned}\end{aligned}$$

Notice that W is a smooth function on \(\{\textbf{y}:|\textbf{y}|\ge \epsilon /2\}\), and its derivatives are bounded on any compact subset. Therefore \({\mathcal {I}}\) is a smooth function on \(\textbf{x}\in B(\textbf{x}_0;\epsilon /2)\).

On the other hand, by the second item of Theorem 8.1, \(W'(r)<0\) for \(0<r<\epsilon _0\) if \(\epsilon _0\) is small. Indeed, by its proof (STEP 1 of the second claim), one can show a quantitative version, i.e., there exists \(c>0\) such that

$$\begin{aligned} W'(r)<-cr^{\alpha -d-1},\quad \forall 0<r<\epsilon _0. \end{aligned}$$

Then, either \(W(0)=\infty \) (when \(0<\alpha \le d\)), or \(W(0)<\infty \) (when \(d<\alpha <d+2\)) with

$$\begin{aligned} W(0)-W(r) \ge c\int _0^r s^{\alpha -d-1}\,\textrm{d}{s} = cr^{\alpha -d}. \end{aligned}$$
(8.7)

Then we separate into the following cases:

  • If \(W(0)=\infty \), then \((W*\rho )(\textbf{x}_0)=\infty \), while \((W*\rho )(\textbf{x})<\infty \) nearby. Then \(\textbf{x}_0\) is not a local minimum of \(W*\rho \), contradicting the assumption that \(\rho \) is a \(d_\infty \)-local minimizer, in view of Lemma 2.1.

  • If \(W(0)<\infty \) and \(\nabla {\mathcal {I}}(\textbf{x}_0)\ne 0\), then for \(0<\epsilon _1<\frac{\epsilon }{2|\nabla {\mathcal {I}}(\textbf{x}_0)|}\), Taylor expansion of \({\mathcal {I}}\) with (8.7) gives

    $$\begin{aligned}\begin{aligned}&(W*\rho )(\textbf{x}_0-\epsilon _1\nabla {\mathcal {I}}(\textbf{x}_0))-(W*\rho )(\textbf{x}_0)\\ =\,&a\Big (W(\epsilon _1|\nabla {\mathcal {I}}(\textbf{x}_0)|)-W(0)\Big ) + \nabla {\mathcal {I}}(\textbf{x}_0)\cdot (-\epsilon _1\nabla {\mathcal {I}}(\textbf{x}_0)) + O(|\epsilon _1\nabla {\mathcal {I}}(\textbf{x}_0)|^2) \\ \le&-\epsilon _1|\nabla {\mathcal {I}}(\textbf{x}_0)|^2(1+O(\epsilon _1)) < 0, \end{aligned}\end{aligned}$$

    if \(\epsilon _1\) is small enough, and we get a similar contradiction.

  • If \(W(0)<\infty \) and \(\nabla {\mathcal {I}}(\textbf{x}_0)= 0\), then for \(\textbf{x}\in B(\textbf{x}_0;\epsilon /2)\), Taylor expansion of \({\mathcal {I}}\) with (8.7) gives

    $$\begin{aligned}\begin{aligned} (W*\rho )(\textbf{x})-(W*\rho )(\textbf{x}_0) =\,&a\Big (W(|\textbf{x}-\textbf{x}_0|)-W(0)\Big ) + O(|\textbf{x}-\textbf{x}_0|^2) \\ \le&-ac|\textbf{x}-\textbf{x}_0|^{\alpha -d} + O(|\textbf{x}-\textbf{x}_0|^2) < 0 \end{aligned}\end{aligned}$$

    if \(|\textbf{x}-\textbf{x}_0|\ne 0\) is small enough, and we get a similar contradiction since \(\alpha <d+2\).

\(\square \)

Finally, we can state our main result of this section. We say that \(\rho \in {\mathcal {P}}({{\mathbb {R}}^d})\) is almost fractal if the support of \(\rho \) does not contain isolated points and the interior of any superlevel set is empty.

Corollary 8.4

Given a potential of the form (8.1) with parameters satisfying (8.2), \(c(d,\alpha )<c_W<C(d,\alpha )\), and \(\lambda \) sufficiently small, any compactly supported \(d_\infty \)-local minimizer \(\rho \) is almost fractal.

Proof

If (8.2) holds, then there exists \(c_W\) with \(c(d,\alpha )<c_W<C(d,\alpha )\). Now, we apply Theorem 8.1, Theorem 7.1, and Proposition 8.3 to conclude that the support of any compactly supported \(d_\infty \)-local minimizer \(\rho \) does not contain isolated points and the superlevel sets do not contain any interior point. \(\square \)

9 Cantor set as steady state

In the 1D case, we construct a potential W such that the uniform distribution on some Cantor set is a steady state satisfying the necessary condition (2.2) for \(d_2\)-local minimizers. The main idea of this construction is to mimic the structure of the potential (8.1) in a recursive, hierarchical manner. We will define W as a piecewise-quadratic potential, so that its steady states are easier to verify than for (8.1). The main strategy is to produce a potential that introduces some kind of concavity at a sequence of small scales.

9.1 Steady state

We will use a hierarchical construction of an interaction potential W, with a fixed positive number \(M>2\) being the size ratio between adjacent layers, in correspondence with \(\lambda ^{-1}\) in the previous section. For notational convenience, we denote

$$\begin{aligned} a_{(j)}:=a M^{-j} \end{aligned}$$

for \(a>0\) and \(j\in {\mathbb {Z}}\).

We first define a uniform measure supported on a Cantor set inside [0, 1]. Define the intervals \(I_{k,l},\,k=0,1,\ldots ,\,l = 0,1,\ldots ,2^k-1\) iteratively by

$$\begin{aligned} I_{0,0} = [0,1],\quad I_{k+1,2l} = I_{k,l}^{\text {left}},\,I_{k+1,2l+1} = I_{k,l}^{\text {right}} \end{aligned}$$

where for an interval \(I=[x_1,x_2]\),

$$\begin{aligned} I^{\text {left}} = [x_1,x_1+\frac{x_2-x_1}{M}],\quad I^{\text {right}} = [x_2-\frac{x_2-x_1}{M},x_2]\,. \end{aligned}$$

Then define the functions

$$\begin{aligned} \rho _k = \Big (\frac{M}{2}\Big )^k \sum _{l=0}^{2^k-1} \chi _{I_{k,l}} \end{aligned}$$
(9.1)

which has total mass 1 and is supported on \(S_k=\bigcup _{l=0}^{2^k-1}I_{k,l}\). The weak limit of \(\rho _k\), denoted by \(\rho _\infty \), is the uniform distribution on a Cantor set \(S=\bigcap _{k=0}^\infty S_k\).
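For readers who wish to experiment with this construction, the following is a small sketch (in Python, with an illustrative value of M) that generates the intervals \(I_{k,l}\) and evaluates the piecewise-constant density \(\rho _k\) of (9.1); the helper names are ours, not part of the construction.

```python
import numpy as np

def cantor_intervals(k, M=12.0):
    """The 2^k intervals I_{k,l} from the iterative left/right construction."""
    intervals = [(0.0, 1.0)]
    for _ in range(k):
        new = []
        for (x1, x2) in intervals:
            h = (x2 - x1) / M
            new += [(x1, x1 + h), (x2 - h, x2)]   # I^left and I^right
        intervals = new
    return intervals

def rho_k(x, k, M=12.0):
    """Density (9.1): value (M/2)^k on S_k, zero elsewhere; total mass 1."""
    x = np.asarray(x, dtype=float)
    ind = sum(((a <= x) & (x <= b)).astype(float) for (a, b) in cantor_intervals(k, M))
    return (M / 2) ** k * ind
```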

We also introduce the following notation: for \(A,B\subseteq {\mathbb {R}}\),

$$\begin{aligned} |A-B| = \{y\in {\mathbb {R}}:y=|x_1-x_2| \text{ for } \text{ some } x_1\in A,\,x_2\in B\}\,. \end{aligned}$$

We first prove the following lemma, which shows that the possible pairwise distances of points in \(S_k\) have a hierarchical structure.

Lemma 9.1

Assume \(M>3\). For any \(k\ge 0\) and \(l_1,l_2\in \{0,1,\ldots ,2^k-1\}\), \(|I_{k,l_1}-I_{k,l_2}|\) is a subset of one of the following disjoint sets: \([0,1_{(k)}],\, [(M-2)_{(k)}, M_{(k)}],\,\ldots ,\,[(M-2)_{(1)}, M_{(1)}]\), and it is a subset of \([0,1_{(k)}]\) if and only if \(l_1=l_2\).

Proof

First notice that \(M>3\) implies that the intervals \([0,1_{(k)}],\, [(M-2)_{(k)}, M_{(k)}],\ldots ,\,[(M-2)_{(1)}, M_{(1)}]\) are disjoint.

The case \(l_1=l_2\) is clear. For \(l_1\ne l_2\), we use induction on k. The case \(k=0\) is vacuous because \(l_1\ne l_2\) cannot occur.

Suppose the conclusion is true for \(k-1\). For \(x_1\in I_{k,l_1}\) and \(x_2\in I_{k,l_2}\), there holds

$$\begin{aligned} I_{k,l_1} \subseteq I_{k-1,\lfloor l_1/2 \rfloor },\quad I_{k,l_2} \subseteq I_{k-1,\lfloor l_2/2 \rfloor }\,. \end{aligned}$$

If \(\lfloor l_1/2 \rfloor \ne \lfloor l_2/2 \rfloor \), then \(|x_1-x_2| \in [(M-2)_{(j)}, M_{(j)}]\) for some integer \(1\le j \le k-1\) depending on \(\lfloor l_1/2 \rfloor ,\lfloor l_2/2 \rfloor ,k-1\) by the induction hypothesis, and this implies the conclusion. If \(\lfloor l_1/2 \rfloor = \lfloor l_2/2 \rfloor =l\) then \(I_{k,l_1}=I_{k-1,l}^{\text {left}}\) and \(I_{k,l_2}=I_{k-1,l}^{\text {right}}\) (or the other way around). Then

$$\begin{aligned} |x_1-x_2| \le |I_{k-1,l}| = M^{-(k-1)} = M_{(k)} \end{aligned}$$

and

$$\begin{aligned} |x_1-x_2| \ge \text {dist}\,(I_{k-1,l}^{\text {left}},I_{k-1,l}^{\text {right}}) = (M-2)_{(k)}. \end{aligned}$$

This finishes the proof. \(\square \)
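Lemma 9.1 can also be checked empirically on sample points of \(S_k\): every pairwise distance falls either in \([0,1_{(k)}]\) or in one of the classes \([(M-2)_{(j)},M_{(j)}]\). A small sketch, reusing cantor_intervals from the code above and assuming \(M>3\), is given below.

```python
import numpy as np

def distance_class(dist, k, M=12.0):
    """Return the interval of Lemma 9.1 containing dist, or None if there is none."""
    if dist <= M ** (-k):
        return (0.0, M ** (-k))
    for j in range(1, k + 1):
        lo, hi = (M - 2) * M ** (-j), M * M ** (-j)
        if lo <= dist <= hi:
            return (lo, hi)
    return None

rng = np.random.default_rng(0)
k, M = 4, 12.0
pts = [rng.uniform(a, b) for (a, b) in cantor_intervals(k, M)]   # one point per I_{k,l}
assert all(distance_class(abs(p - q), k, M) is not None for p in pts for q in pts)
```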

Then we will define a sequence of potentials \(W_k,\,k\ge 0\). In this subsection, we only partially define them by requiring that \(W_k\) is even and

$$\begin{aligned} W_k'(x) = \frac{M}{2M-4}(-\text {sgn}(x) + 2x) +\left\{ \begin{aligned}&0,\quad |x|\le 1_{(k)} \\&a_j\text {sgn}(x) + b_jx,\quad (M-2)_{(j)} \le |x| \le M_{(j)},\,j=1,\ldots ,k \\&\text {smoothly connected},\quad \text {otherwise}\end{aligned}\right. \end{aligned}$$
(9.2)

where \(a_j,b_j\) are constants to be determined. The above lemma shows that the unspecified ‘smoothly connected’ part does not affect \((W_k'*\rho _k)(x),\,(W_k''*\rho _k)(x),\,x\in S_k=\text {supp}\,\rho _k\), and therefore does not affect whether \(\rho _k\) is a steady state of \(W_k\). The constant in front of the Newtonian potential part in (9.2) is chosen for convenience to simplify computations in the next result.

We introduce the notation

$$\begin{aligned} {\bar{\chi }}_S(x) = \chi _S(|x|), \end{aligned}$$

for any \(S\subseteq [0,\infty )\).

Theorem 9.2

Assume \(M>3\). Choose \(a_j,b_j\) by

$$\begin{aligned} a_j = -\Big (M-\frac{1}{2}\Big ),\quad b_j=M^j,\quad j=1,\ldots ,k, \end{aligned}$$

then \(\rho _k\) is a steady state of the interaction energy with the interaction potential \(W_k\) satisfying (9.2).

Proof

We prove the statement by induction on k. The case \(k=0\) is clear, since \(W_0(x)=\frac{M}{2M-4}(-|x| + x^2)\) and \(\rho _0=\chi _{[0,1]}\) is the well-known steady state for Newtonian repulsion with quadratic attraction in one dimension.

Suppose the conclusion holds for \(k-1\). For \(x\in I_{k,2l_0}\) for some \(l_0\), by Lemma 9.1, any \(y\in \text {supp}\,\rho _k=S_k\) satisfies either \(|x-y|\le 1_{(k)}\), or \(|x-y|\in [(M-2)_{(j)},M_{(j)}]\) for some \(1\le j \le k\). Therefore the ‘smoothly connected’ part in (9.2) makes no contribution to \((W_k'*\rho _k)(x)\) or \((W_k''*\rho _k)(x)\), and there holds

$$\begin{aligned} (W_k'*\rho _k)(x) =\,&\frac{M}{2M-4}\Big ((-\text {sgn}(\cdot ) + 2(\cdot ))*\rho _k\Big )(x) \\&+ \sum _{j=1}^k \Big (\Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}\cdot (a_j\text {sgn}(\cdot ) + b_j(\cdot ))\Big ) * \rho _k\Big )(x) \end{aligned}$$

and

$$\begin{aligned}\begin{aligned} (W_k''*\rho _k)(x) =&\frac{M}{2M-4}\Big ((-2\delta _0 + 2)*\rho _k\Big )(x) + \sum _{j=1}^k b_j\Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}* \rho _k\Big )(x) \\ =&\frac{M}{2M-4}(-2\rho _k(x)+2) + \sum _{j=1}^k b_j\Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}* \rho _k\Big )(x) \end{aligned}\end{aligned}$$

where \(\delta _0\) denotes the Dirac delta function. To show the conclusion for k, i.e., that \(\rho _k\) is a steady state for \(W_k\), it suffices to show that \(W_k'*\rho _k\) vanishes on each \(I_{k,2l_0}\), since the same result for \(I_{k,2l_0+1}\) follows by symmetry. This is in turn equivalent to showing

$$\begin{aligned} (W_k''*\rho _k)(x) = 0,\,\forall x\in I_{k,2l_0},\quad (W_k'*\rho _k)(x_1) = 0 \end{aligned}$$
(9.3)

where \(x_1\) denotes the left endpoint of \(I_{k,2l_0}\). The induction hypothesis shows that (9.3) is true when k is replaced by \(k-1\) and \(2l_0\) replaced by any l.

STEP 1: show \((W_k''*\rho _k)(x) = 0,\,\forall x\in I_{k,2l_0}\).

Since \((W_{k-1}''*\rho _{k-1})(x) = 0,\,\forall x\in I_{k-1,l_0}\) and \(I_{k,2l_0}\subseteq I_{k-1,l_0}\), it suffices to show

$$\begin{aligned} \begin{aligned}&0 =\, (W_k''*\rho _k)(x) - (W_{k-1}''*\rho _{k-1})(x) \\&=\, \frac{M}{2M-4}(-2\rho _k(x) + 2\rho _{k-1}(x)) + \sum _{j=1}^k b_j\Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}* \rho _k\Big )(x) \\&\quad - \sum _{j=1}^{k-1} b_j\Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}* \rho _{k-1}\Big )(x)\\&=\, \frac{2M}{2M-4}\Big (-\Big (\frac{M}{2}\Big )^k + \Big (\frac{M}{2}\Big )^{k-1}\Big ) + b_k\Big ({\bar{\chi }}_{[(M-2)_{(k)},M_{(k)}]}* \rho _k\Big )(x) \\&\quad + \sum _{j=1}^{k-1} b_j\Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}* (\rho _k-\rho _{k-1})\Big )(x)\\&=\, -\Big (\frac{M}{2}\Big )^k + b_k\Big ({\bar{\chi }}_{[(M-2)_{(k)},M_{(k)}]}* \rho _k\Big )(x) + \sum _{j=1}^{k-1} b_j\Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}* (\rho _k-\rho _{k-1})\Big )(x). \end{aligned}\end{aligned}$$
(9.4)

Recall that \(\text {supp}\,\rho _k=S_k=\bigcup _{l=0}^{2^k-1}I_{k,l}\). Notice that if \(l\ne 2l_0,2l_0+1\), we have

$$\begin{aligned} \text {dist}\,(I_{k,2l_0},I_{k,l}) \ge \text {dist}\,(I_{k-1,l_0},I_{k-1,\lfloor l/2 \rfloor }) \ge (M-2)_{(k-1)} > 1_{(k-1)}= M_{(k)}, \end{aligned}$$

and \(|I_{k,2l_0}| = 1_{(k)}<(M-2)_{(k)}\). Therefore, in the convolution \(\Big ({\bar{\chi }}_{[(M-2)_{(k)},M_{(k)}]}* \rho _k\Big )(x)\), the only contribution comes from \(\rho _k|_{ I_{k,2l_0+1}}=(\frac{M}{2})^k\chi _{I_{k,2l_0+1}}\), which gives

$$\begin{aligned}\begin{aligned} \Big ({\bar{\chi }}_{[(M-2)_{(k)},M_{(k)}]}* \rho _k\Big )(x) = \Big (\frac{M}{2}\Big )^k\Big ({\bar{\chi }}_{[(M-2)_{(k)},M_{(k)}]}* \chi _{I_{k,2l_0+1}}\Big )(x) = \Big (\frac{M}{2}\Big )^k |I_{k,2l_0+1}| = 2^{-k}, \end{aligned}\end{aligned}$$

where the second equality follows from the fact that \(|x-y|\in [(M-2)_{(k)},M_{(k)}]\) for any \(x\in I_{k,2l_0}\) and \(y\in I_{k,2l_0+1}\).

Next we will show that the last summation in (9.4) vanishes. In fact, notice that the definition of \(\rho _k\) in (9.1) implies that

$$\begin{aligned} \text {supp}\,(\rho _k-\rho _{k-1}) = \text {supp}\,\rho _{k-1} = \bigcup _{l=0}^{2^{k-1}-1} I_{k-1,l} \end{aligned}$$

and we have the following zeroth and first moment conservation conditions in each \(I_{k-1,l}\):

$$\begin{aligned} \int _{I_{k-1,l}} (\rho _k-\rho _{k-1})\,\textrm{d}{x} = \int _{I_{k-1,l}} (\rho _k-\rho _{k-1})x\,\textrm{d}{x} = 0,\quad l=0,1,\ldots ,2^{k-1}-1. \end{aligned}$$
(9.5)

Lemma 9.1 gives that \(|I_{k-1,l_0}-I_{k-1,l}|\) is a subset of one of the following disjoint sets: \([0,1_{(k-1)}],\, [(M-2)_{(k-1)}, M_{(k-1)}],\,\ldots ,\,[(M-2)_{(1)}, M_{(1)}]\). Using the zeroth moment conservation, we get

$$\begin{aligned} \Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}* (\rho _k-\rho _{k-1})|_{I_{k-1,l}}\Big )(x) = 0, \quad x\in I_{k,2l_0}, \end{aligned}$$

for any \(j=1,\ldots ,k-1\), and thus

$$\begin{aligned} \sum _{j=1}^{k-1} b_j\Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}* (\rho _k-\rho _{k-1})\Big )(x) = 0, \quad x\in I_{k,2l_0}. \end{aligned}$$

Therefore, we conclude that the RHS of (9.4) is zero with the condition \(b_k=M^k\).

STEP 2: show \((W_k'*\rho _k)(x_1) = 0\).

Similar to the previous step, it suffices to show

$$\begin{aligned} \begin{aligned}&0 =\, (W_k'*\rho _k)(x_1) - (W_{k-1}'*\rho _{k-1})(x_1) \\&= \frac{M}{2M-4}\Big ((-\text {sgn}(x) + 2x)*(\rho _k-\rho _{k-1})\Big )(x_1) \\&\quad + \sum _{j=1}^k \Big (\Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}\cdot (a_j\text {sgn}(x) + b_jx)\Big ) * \rho _k\Big )(x_1) \\&\quad - \sum _{j=1}^{k-1} \Big (\Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}\cdot (a_j\text {sgn}(x) + b_jx)\Big ) * \rho _{k-1}\Big )(x_1) \\&= \frac{M}{2M-4}\Big ((-\text {sgn}(x) + 2x)*(\rho _k-\rho _{k-1})\Big )(x_1) \\&\quad + \Big (\Big ({\bar{\chi }}_{[(M-2)_{(k)},M_{(k)}]}\cdot (a_k\text {sgn}(x) + b_kx)\Big ) * \rho _k\Big )(x_1) \\&\quad + \sum _{j=1}^{k-1} \Big (\Big ({\bar{\chi }}_{[(M-2)_{(j)},M_{(j)}]}\cdot (a_j\text {sgn}(x) + b_jx)\Big ) * (\rho _k-\rho _{k-1})\Big )(x_1) \\ \end{aligned}\end{aligned}$$
(9.6)

First, by (9.5), \(x*(\rho _k-\rho _{k-1}) = 0\). Therefore, by further using the moment conservation property (9.5),

$$\begin{aligned}\begin{aligned}&\Big ((-\text {sgn}(x) + 2x)*(\rho _k-\rho _{k-1})\Big )(x_1) = \Big ((-\text {sgn}(x))*(\rho _k-\rho _{k-1})\Big )(x_1) \\ =&-\Big (2\chi _{[0,\infty )}*(\rho _k-\rho _{k-1})\Big )(x_1) = 2\int _{-\infty }^{x_1}(\rho _k-\rho _{k-1})\,\textrm{d}{x} = 0, \end{aligned}\end{aligned}$$

since \(x_1\), as the left endpoint of \(I_{k,2l_0}\), is also the left endpoint of \(I_{k-1,l_0}\).

Next, in the second quantity on the RHS of (9.6), similar to STEP 1, the only contribution comes from \(\rho _k|_{I_{k,2l_0+1}}\), and this term can be computed by

$$\begin{aligned}\begin{aligned}&\Big (\Big ({\bar{\chi }}_{[(M-2)_{(k)},M_{(k)}]}\cdot (a_k\text {sgn}(x) + b_kx)\Big ) * \rho _k\Big )(x_1) \\&= \int _{I_{k,2l_0+1}}(a_k\text {sgn}(x_1-x) + b_k(x_1-x))\rho _k(x)\,\textrm{d}{x} \\&= -\Big (\frac{M}{2}\Big )^k \Big (a_k|I_{k,2l_0+1}| + b_k|I_{k,2l_0+1}| \cdot \Big (M-\frac{1}{2}\Big )|I_{k,2l_0+1}|\Big ) \\&= -2^{-k}\Big (a_k + b_k \Big (M-\frac{1}{2}\Big ) M^{-k}\Big )\,. \end{aligned}\end{aligned}$$

Finally, the last quantity on the RHS of (9.6) is zero by (9.5), similar as before.

Therefore we conclude that the RHS of (9.6) is zero with the condition

$$\begin{aligned} a_k = -b_k \Big (M-\frac{1}{2}\Big ) M^{-k} = -\Big (M-\frac{1}{2}\Big ). \end{aligned}$$

\(\square \)
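Theorem 9.2 can also be verified numerically. On the distance classes of Lemma 9.1 the potential \(W_k'\) is piecewise affine, so \((W_k'*\rho _k)\) at a point of \(\text {supp}\,\rho _k\) can be computed exactly by midpoint quadrature on each interval (splitting the interval containing the evaluation point). The sketch below, reusing cantor_intervals from the earlier code and using illustrative parameters, checks that the result vanishes at the left endpoints of the level-k intervals; the 'smoothly connected' part of (9.2) is never needed for these evaluations.

```python
import numpy as np

def Wk_prime(z, k, M):
    """W_k'(z) of (9.2) with a_j = -(M - 1/2), b_j = M^j, evaluated only on the
    distance classes of Lemma 9.1; other z would require the smooth connection."""
    s = np.sign(z)
    val = M / (2 * M - 4) * (-s + 2 * z)               # Newtonian + quadratic part
    if abs(z) <= M ** (-k):
        return val
    for j in range(1, k + 1):
        if (M - 2) * M ** (-j) <= abs(z) <= M * M ** (-j):
            return val + (-(M - 0.5)) * s + M ** j * z
    raise ValueError("z is not a difference of points of S_k")

def conv_at(x, k, M=12.0):
    """(W_k' * rho_k)(x) for x in supp(rho_k); exact since W_k' is affine on each piece."""
    total = 0.0
    for (a, b) in cantor_intervals(k, M):
        pieces = [(a, x), (x, b)] if a < x < b else [(a, b)]
        for (p, q) in pieces:
            if q > p:
                total += (M / 2) ** k * (q - p) * Wk_prime(x - (p + q) / 2, k, M)
    return total

k, M = 3, 12.0
lefts = [a for (a, _) in cantor_intervals(k, M)]
print(max(abs(conv_at(x, k, M)) for x in lefts))   # expect a value at machine-precision level
```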

9.2 The condition (2.2) for \(d_2\)-local minimizer

In this section, we specify our potential by defining the choice of ‘smooth connections’ in (9.2) as

$$\begin{aligned} W_k'(x) = \frac{M}{2M-4} (-1 + 2x) + \omega _k(x) ,\quad x>0 \end{aligned}$$

with

$$\begin{aligned}\begin{aligned} \omega _k(x)= \left\{ \begin{array}{ll} 0, &{} 0<x\le (M-\alpha )_{(k)} \\ (-\frac{3}{2}+\frac{3}{4}\cdot \frac{2}{\alpha -2}(M-2)) - \frac{3}{4}\cdot \frac{2}{\alpha -2}M^k x , &{} (M-\alpha )_{(k)} \le x \le (M-2)_{(k)} \\ -(M-\frac{1}{2}) + M^j x, &{} (M-2)_{(j)} \le x \le M_{(j)},\,j=1,\ldots ,k \\ \frac{1}{2} , &{} 1_{(j)} \le x \le (M-\alpha )_{(j)},\,j=1,\ldots ,k-1 \\ (-\frac{3}{2}+\frac{2}{\alpha -2}(M-2)) - \frac{2}{\alpha -2}M^j x , &{} (M-\alpha )_{(j)} \le x \le (M-2)_{(j)},\, j=1,\ldots ,k-1 \\ \frac{1}{2}, &{} x\ge M_{(1)}=1 \\ \end{array}\right. \end{aligned}\end{aligned}$$

where \(2< \alpha \le M-1\) is a parameter to be determined. The function \(\omega _k\) is extended as an odd function on \({\mathbb {R}}\), consistently with \(W_k'\) being odd. Moreover, \(\omega _k\) is continuous and piecewise linear, and

$$\begin{aligned}\begin{aligned} \omega _k'(x)= \left\{ \begin{array}{ll} 0, &{} 0<x\le (M-\alpha )_{(k)} \\ - \frac{3}{4}\cdot \frac{2}{\alpha -2}M^k, &{} (M-\alpha )_{(k)} \le x \le (M-2)_{(k)} \\ M^j, &{} (M-2)_{(j)} \le x \le M_{(j)},\,j=1,\ldots ,k \\ 0, &{} 1_{(j)} \le x \le (M-\alpha )_{(j)},\,j=1,\ldots ,k-1 \\ - \frac{2}{\alpha -2}M^j, &{} (M-\alpha )_{(j)} \le x \le (M-2)_{(j)},\, j=1,\ldots ,k-1 \\ 0, &{} x\ge M_{(1)}=1 \\ \end{array}\right. \end{aligned}\end{aligned}$$

Notice that for \(k\ge 2\),

$$\begin{aligned} \begin{aligned} W_k''(x)&-W_{k-1}''(x) = \omega _k'(x)-\omega _{k-1}'(x) =\\&\left\{ \begin{array}{ll} -\frac{3}{4}\cdot \frac{2}{\alpha -2}M^k , &{} (M-\alpha )_{(k)} \le x \le (M-2)_{(k)} \\ M^k , &{} (M-2)_{(k)} \le x \le M_{(k)} \\ -\frac{1}{4}\cdot \frac{2}{\alpha -2}M^{k-1} , &{} (M-\alpha )_{(k-1)} \le x \le (M-2)_{(k-1)} \\ 0, &{} \text {otherwise} \\ \end{array}\right. \\ =\,&M^k {\bar{\chi }}_{[(M-2)_{(k)},M_{(k)}]} -\frac{1}{4}\cdot \frac{2}{\alpha -2}M^{k-1}{\bar{\chi }}_{ [(M-\alpha )_{(k-1)} , (M-2)_{(k-1)}]}\\&-\frac{3}{4}\cdot \frac{2}{\alpha -2}M^k {\bar{\chi }}_{[(M-\alpha )_{(k)} , (M-2)_{(k)}]} \end{aligned}\end{aligned}$$
(9.7)

is compactly supported on \(\{(M-\alpha )_{(k)} \le |x| \le (M-2)_{(k-1)}\}\) and has zero mean there. See Fig. 4 for an illustration.

Fig. 4 Decomposition of \(\omega _k(x)=\omega _1(x) + \sum _{j=2}^k (\omega _j(x)-\omega _{j-1}(x))\), in the case \(k=2\)

We first prove the necessary condition (2.2) for \(d_2\)-local minimizers for \(W_K\) and \(\rho _K\) at those \(x_0\) in the interval \((M^{-1},1-M^{-1})\), which corresponds to the inner interval of Fig. 6. This is the first step in a self-similar argument for the full condition (2.2) for \(\rho _K\).

Proposition 9.3

Let \(W_K\) be given as above, and \(\rho _K\) as given in (9.1). If M and \(\alpha \) satisfy

$$\begin{aligned} \frac{1}{3}(M+2) < \alpha \le \frac{2}{5}(M-10) \end{aligned}$$
(9.8)

then for any integer \(K \ge 1\),

$$\begin{aligned} (W_K*\rho _K )(x_0) - (W_K*\rho _K )(M^{-1}) \ge c(x_0)>0,\quad \forall x_0\in (M^{-1},1-M^{-1})\,. \end{aligned}$$
(9.9)

Notice that the set of admissible \(\alpha \) in (9.8) is nonempty if M is large enough; for example, \(M=100\) and \(\alpha =35\).
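The admissible range (9.8) is immediate to check numerically for the sample values just mentioned; a one-line sketch:

```python
M, alpha = 100.0, 35.0
print((M + 2) / 3 < alpha <= 2 * (M - 10) / 5)   # expect True
```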

We first give the following lemma for the convolution of a characteristic function with a compactly supported measure. Its proof is straightforward and thus omitted.

Lemma 9.4

Let \(\mu (x)\) be a non-negative measure supported on \(I_1 = [a_1,b_1]\) and assume \(\mu \) is symmetric around \((a_1+b_1)/2\). For another interval \(I_2=[a_2,b_2]\) such that \(a_2>0\) and \(|I_2|\ge |I_1|\), \(\mu *\chi _{I_2}\) is a non-negative function supported on \([a_1+a_2,b_1+b_2]\), given by

$$\begin{aligned} (\mu *\chi _{I_2})(x) = \left\{ \begin{aligned}&\psi (x-(a_1+a_2)),\quad a_1+a_2 \le x \le b_1+a_2 \\&|\mu |,\quad b_1+a_2 \le x \le a_1+b_2 \\&\psi ((b_1+b_2)-x),\quad a_1+b_2 \le x \le b_1+b_2 \end{aligned}\right. \end{aligned}$$

where \(\psi (y):=\int _{a_1}^{a_1+y} \mu (y_1)\,\textrm{d}{y_1}\), \(|\mu | = \psi (b_1-a_1)\).

The three pieces of the above function \(\mu *\chi _{I_2}\) will be referred to as piece 1, piece 2 and piece 3 respectively.
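Since the three pieces of Lemma 9.4 are used repeatedly below, here is a small illustrative implementation of the formula (the helper name and the passed-in \(\psi \) are ours), together with a check against the elementary case where \(\mu \) is the uniform density on \([0,1]\).

```python
def conv_with_indicator(psi, a1, b1, a2, b2, x):
    """(mu * chi_{[a2,b2]})(x) via the three pieces of Lemma 9.4, where
    psi(y) is the mass of mu on [a1, a1 + y] and psi(b1 - a1) is the total mass."""
    if x < a1 + a2 or x > b1 + b2:
        return 0.0
    if x <= b1 + a2:                      # piece 1
        return psi(x - (a1 + a2))
    if x <= a1 + b2:                      # piece 2: plateau at the total mass
        return psi(b1 - a1)
    return psi((b1 + b2) - x)             # piece 3

# mu = uniform density on [0,1], so psi(y) = y; I_2 = [2,5]
print([conv_with_indicator(lambda y: y, 0.0, 1.0, 2.0, 5.0, x) for x in (2.5, 4.0, 5.5)])
# expect [0.5, 1.0, 0.5]
```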

Proof of Proposition 9.3

Since \(\rho _K\) is a steady state for \(W_K\) by Theorem 9.2 and \(M^{-1}\in \text {supp}\,\rho _K\), we have \((W_K'*\rho _K)(M^{-1})=0\). Then we write

$$\begin{aligned} \begin{aligned} (W_K*\rho _K )(x_0)-(W_K*\rho _K )(M^{-1})&= \int _{M^{-1}}^{x_0} (W_K'*\rho _K )(x_1)\,\textrm{d}{x_1} \\&= \int _{M^{-1}}^{x_0} \int _{M^{-1}}^{x_1}(W_K''*\rho _K )(x)\,\textrm{d}{x}\,\textrm{d}{x_1} \\&= \int _{M^{-1}}^{x_0} (x_0-x)(W_K''*\rho _K )(x)\,\textrm{d}{x} \end{aligned}\end{aligned}$$
(9.10)

and for \(x\in (M^{-1},1/2)\) which is outside \(\text {supp}\,\rho _K \),

$$\begin{aligned} \begin{aligned} (W_K''*\rho _K )(x) =&(W_1''*\rho _K )(x) + ((\omega _2'-\omega _1')*\rho _K )(x) + \sum _{k=3}^K ((\omega _k'-\omega _{k-1}')*\rho _K )(x). \end{aligned}\end{aligned}$$
(9.11)

We deal with each of these three terms in the next three steps starting from the last one.

STEP 1: show that \( \int _{M^{-1}}^{x_0} (x_0-x)((\omega _k'-\omega _{k-1}')*\rho _K )(x)\,\textrm{d}{x}>0\) for any \(3\le k \le K\).

We first decompose \(\rho _K \) as

$$\begin{aligned} \rho _K = \rho _K \chi _{I_{1,1}} + \sum _{j=2}^{k-1} \rho _K \chi _{I_{j,2^{j-1}-2}} + \rho _K \chi _{I_{k-1,2^{k-2}-1}}\,. \end{aligned}$$

Notice that for \(x\in [M^{-1},\frac{1}{2}]\), we have \(\text {dist}\,(I_{1,1},x) \ge \frac{1}{2}-M^{-1}\), \(\text {dist}\,(I_{j,2^{j-1}-2},x) \ge (M-1)_{(j)},\,2\le j\le k-1\), and \(\text {supp}\,(\omega _k'-\omega _{k-1}')\cap {\mathbb {R}}_+ \subseteq [(M-\alpha )_{(k)}, (M-2)_{(k-1)}]\). Therefore, one can check that the supports of these density parts, translated by x, lie to the right of the support of \(\omega _k'-\omega _{k-1}'\), leading to

$$\begin{aligned} \Big ((\rho _K \chi _{I_{1,1}})*(\omega _k'-\omega _{k-1}')\Big )(x) = \Big ((\rho _K \chi _{I_{j,2^{j-1}-2}})*(\omega _k'-\omega _{k-1}')\Big )(x) = 0,\quad 2\le j \le k-1\,. \end{aligned}$$

Then we further decompose

$$\begin{aligned} \rho _K \chi _{I_{k-1,2^{k-2}-1}} = \rho _K \chi _{I_{k,2^{k-1}-2}} + \rho _K \chi _{I_{k,2^{k-1}-1}} \end{aligned}$$

and then the term with \({\bar{\chi }}_{[(M-\alpha )_{(k)} , (M-2)_{(k)}]}\) in (9.7) does not interact with \(\rho _K \chi _{I_{k,2^{k-1}-2}}\) for the same reason as above. Therefore

$$\begin{aligned}\begin{aligned} (\omega _k'-\omega _{k-1}')*\rho _K =\,&M^k {\bar{\chi }}_{[(M-2)_{(k)},M_{(k)}]} * (\rho _K \chi _{I_{k,2^{k-1}-2}}) \\&-\frac{1}{4}\cdot \frac{2}{\alpha -2}M^{k-1}{\bar{\chi }}_{[(M-\alpha )_{(k-1)} , (M-2)_{(k-1)}]} * (\rho _K \chi _{I_{k,2^{k-1}-2}}) \\&+ M^k {\bar{\chi }}_{[(M-2)_{(k)},M_{(k)}]} * (\rho _K \chi _{I_{k,2^{k-1}-1}}) \\&-\frac{3}{4}\cdot \frac{2}{\alpha -2}M^k {\bar{\chi }}_{[(M-\alpha )_{(k)} , (M-2)_{(k)}]} * (\rho _K \chi _{I_{k,2^{k-1}-1}}) \\&-\frac{1}{4}\cdot \frac{2}{\alpha -2}M^{k-1}{\bar{\chi }}_{[(M-\alpha )_{(k-1)} , (M-2)_{(k-1)}]}* (\rho _K \chi _{I_{k,2^{k-1}-1}})\\ :=&{\mathcal {J}}_1 + {\mathcal {J}}_2 + {\mathcal {J}}_3 + {\mathcal {J}}_4 + {\mathcal {J}}_5\,. \end{aligned}\end{aligned}$$

See Fig. 5 for an illustration.

Recall that we focus on \(x\in [M^{-1},1/2]\), and the involved intervals for \(\rho _K\) are \(I_{k,2^{k-1}-2} = [M^{-1}-1_{(k-1)},M^{-1}-1_{(k-1)}+1_{(k)}]\) and \(I_{k,2^{k-1}-1} = [M^{-1}-1_{(k)},M^{-1}]\). Due to (9.1), we get a localized accumulation function of \(\rho _K \) for both intervals

$$\begin{aligned} \psi (y) = \int _{M^{-1}-1_{(k-1)}}^{M^{-1}-1_{(k-1)}+y}\rho _K (y_1)\,\textrm{d}{y_1} = \int _{M^{-1}-1_{(k)}}^{M^{-1}-1_{(k)}+y}\rho _K (y_1)\,\textrm{d}{y_1},\quad 0\le y \le 1_{(k)} \end{aligned}$$

as in Lemma 9.4. By symmetry, \(\psi (y) + \psi (1_{(k)}-y)=\psi (1_{(k)}) = 2^{-k}\), which implies \(\int _0^{1_{(k)}}\psi (y)\,\textrm{d}{y} = \frac{1}{2}\int _0^{1_{(k)}}(\psi (y)+\psi (1_{(k)}-y))\,\textrm{d}{y}=2^{-(k+1)}\cdot 1_{(k)}\). Then we analyze the five integrals \({\mathcal {J}}_1,\ldots ,{\mathcal {J}}_5\) separately.

Positive contribution from \({\mathcal {J}}_1\):

\(\text {supp}\,{\mathcal {J}}_1 = [M^{-1},M^{-1}+1_{(k)}]\), and its expression only has piece 3. Therefore

$$\begin{aligned} {\mathcal {J}}_1(x) = M^k\psi (M^{-1}+1_{(k)}-x) \end{aligned}$$
(9.12)

and

$$\begin{aligned} \int _{M^{-1}}^{M^{-1}+1_{(k)}}{\mathcal {J}}_1(x)\,\textrm{d}{x} = M^k2^{-(k+1)}\cdot 1_{(k)} = \frac{1}{2}\cdot 2^{-k}. \end{aligned}$$

Negative contribution from \({\mathcal {J}}_2\):

\(\text {supp}\,{\mathcal {J}}_2 = [M^{-1}+ (M-\alpha -1)_{(k-1)},M^{-1}+ (M-3)_{(k-1)} + 1_{(k)}]\), and its expression has all 3 full pieces.

$$\begin{aligned}\begin{aligned}&{\mathcal {J}}_2(x) = -\frac{1}{4}\cdot \frac{2}{\alpha -2}M^{k-1}\\&\cdot \left\{ \begin{array}{ll} \psi (x-(M^{-1}+ (M-\alpha -1)_{(k-1)})), &{} M^{-1}+ (M-\alpha -1)_{(k-1)} \le x \le M^{-1}+ (M-\alpha -1)_{(k-1)}+1_{(k)} \\ 2^{-k}, &{} M^{-1}+ (M-\alpha -1)_{(k-1)}+1_{(k)} \le x \le M^{-1}+ (M-3)_{(k-1)} \\ \psi (M^{-1}+ (M-3)_{(k-1)}+1_{(k)}-x), &{} M^{-1}+ (M-3)_{(k-1)} \le x \le M^{-1}+ (M-3)_{(k-1)}+1_{(k)} \end{array}\right. \end{aligned}\end{aligned}$$

and

$$\begin{aligned} \int _{M^{-1}+ (M-\alpha -1)_{(k-1)}}^{M^{-1}+ (M-3)_{(k-1)} + M^{-k}}{\mathcal {J}}_2(y)\,\textrm{d}{y} = -\frac{1}{4}\cdot \frac{2}{\alpha -2}M^{k-1} \cdot 2^{-k}\cdot (\alpha -2)_{(k-1)} = -\frac{1}{2}\cdot 2^{-k}. \end{aligned}$$

Positive contribution from \({\mathcal {J}}_3\):

\(\text {supp}\,{\mathcal {J}}_3 = [M^{-1}+(M-3)_{(k)},M^{-1}+M_{(k)}]\), and its expression has all 3 full pieces.

$$\begin{aligned} {\mathcal {J}}_3(x) =M^k\cdot \left\{ \begin{array}{ll} \psi (x-(M^{-1}+(M-3)_{(k)})), &{} M^{-1}+(M-3)_{(k)} \le x \le M^{-1}+(M-2)_{(k)} \\ 2^{-k}, &{} M^{-1}+(M-2)_{(k)} \le x \le M^{-1}+(M-1)_{(k)} \\ \psi (M^{-1}+M_{(k)}-x), &{} M^{-1}+(M-1)_{(k)} \le x \le M^{-1}+M_{(k)} \end{array}\right. \end{aligned}$$

and

$$\begin{aligned} \int _{M^{-1}+(M-3)_{(k)}}^{M^{-1}+M_{(k)}}{\mathcal {J}}_3(y)\,\textrm{d}{y} = M^k \cdot 2^{-k}\cdot 2_{(k)} = 2\cdot 2^{-k}. \end{aligned}$$

Negative contribution from \({\mathcal {J}}_4\):

\(\text {supp}\,{\mathcal {J}}_4 = [M^{-1}+(M-\alpha -1)_{(k)},M^{-1}+(M-2)_{(k)}]\), and its expression has all 3 full pieces.

$$\begin{aligned}\begin{aligned}&{\mathcal {J}}_4(x) = -\frac{3}{4}\cdot \frac{2}{\alpha -2}M^k\\&\cdot \left\{ \begin{array}{ll} \psi (x-(M^{-1}+(M-\alpha -1)_{(k)})), &{} M^{-1}+(M-\alpha -1)_{(k)} \le x \le M^{-1}+M^{-k}\cdot (M-\alpha ) \\ 2^{-k}, &{} M^{-1}+M^{-k}\cdot (M-\alpha ) \le x \le M^{-1}+M^{-k}\cdot (M-3) \\ \psi (M^{-1}+(M-2)_{(k)}-x), &{} M^{-1}+M^{-k}\cdot (M-3) \le x \le M^{-1}+(M-2)_{(k)} \end{array}\right. \end{aligned}\end{aligned}$$

and

$$\begin{aligned} \int _{M^{-1}+(M-\alpha -1)_{(k)}}^{M^{-1}+(M-2)_{(k)}}{\mathcal {J}}_4(y)\,\textrm{d}{y} = -\frac{3}{4}\cdot \frac{2}{\alpha -2}M^k \cdot 2^{-k}\cdot (\alpha -2)_{(k)} = -\frac{3}{2}\cdot 2^{-k}. \end{aligned}$$

Negative contribution from \({\mathcal {J}}_5\):

\(\text {supp}\,{\mathcal {J}}_5 = [M^{-1}+(M-\alpha )_{(k-1)}-1_{(k)},M^{-1}+(M-2)_{(k-1)}]\), and its expression has all 3 full pieces.

$$\begin{aligned}\begin{aligned}&{\mathcal {J}}_5(x) = -\frac{1}{4}\cdot \frac{2}{\alpha -2}M^{k-1} \\&\cdot \left\{ \begin{array}{ll} \psi (x-(M^{-1}+(M-\alpha )_{(k-1)}-1_{(k)})), &{} M^{-1}+(M-\alpha )_{(k-1)}-1_{(k)} \le x \le M^{-1}+(M-\alpha )_{(k-1)} \\ 2^{-k}, &{} M^{-1}+(M-\alpha )_{(k-1)} \le x \le M^{-1}+(M-2)_{(k-1)}-1_{(k)} \\ \psi (M^{-1}+(M-2)_{(k-1)}-x), &{} M^{-1}+(M-2)_{(k-1)}-1_{(k)} \le x \le M^{-1}+(M-2)_{(k-1)} \end{array}\right. \end{aligned}\end{aligned}$$

and

$$\begin{aligned} \int _{M^{-1}+(M-\alpha )_{(k-1)}-1_{(k)}}^{M^{-1}+(M-2)_{(k-1)}}{\mathcal {J}}_5(y)\,\textrm{d}{y} = -\frac{1}{4}\cdot \frac{2}{\alpha -2}M^{k-1} \cdot 2^{-k}\cdot (\alpha -2)_{(k-1)} = -\frac{1}{2}\cdot 2^{-k} . \end{aligned}$$

STEP 1-1: treat the range \(M^{-1}\le x_0 \le M^{-1}+1_{(k)}\), and quantify the positive contribution from \({\mathcal {J}}_1\).

The assumption (9.8) implies that

$$\begin{aligned} M>4,\quad \alpha \le \frac{M}{2}\,. \end{aligned}$$
(9.13)

Then it is clear that

$$\begin{aligned} \text {supp}\,{\mathcal {J}}_1 \cap \text {supp}\,{\mathcal {J}}_i = \emptyset ,\quad i=2,3,4,5. \end{aligned}$$

Therefore \( \int _{M^{-1}}^{x_0} (x_0-x)((\omega _k'-\omega _{k-1}')*\rho _K )(x)\,\textrm{d}{x}>0\) for \(x_0\in \text {supp}\,{\mathcal {J}}_1=[M^{-1},M^{-1}+1_{(k)}]\). Also, for \(x_0>M^{-1}+1_{(k)}\), we have the positive contribution from \({\mathcal {J}}_1\) as

$$\begin{aligned} \begin{aligned} \int _{M^{-1}}^{x_0}&(x_0-x){\mathcal {J}}_1(x)\,\textrm{d}{x} = \int _{M^{-1}}^{M^{-1}+1_{(k)}} (x_0-x){\mathcal {J}}_1(x)\,\textrm{d}{x}\\ =&\int _{M^{-1}}^{M^{-1}+1_{(k)}} (M^{-1}+(\frac{1}{2})_{(k)}-x){\mathcal {J}}_1(x)\,\textrm{d}{x} \\&\quad + (x_0-M^{-1}-(\frac{1}{2})_{(k)})\int _{M^{-1}}^{M^{-1}+1_{(k)}}{\mathcal {J}}_1(x)\,\textrm{d}{x} \\ =&\int _{M^{-1}}^{M^{-1}+1_{(k)}} (x-M^{-1}-(\frac{1}{2})_{(k)}){\mathcal {J}}_1(2M^{-1}+1_{(k)}-x)\,\textrm{d}{x} \\&\quad + \frac{1}{2}\cdot 2^{-k}\cdot (x_0-M^{-1}-(\frac{1}{2})_{(k)}) \\ =&\frac{1}{2}\int _{M^{-1}}^{M^{-1}+1_{(k)}} (M^{-1}+(\frac{1}{2})_{(k)}-x)({\mathcal {J}}_1(x)-{\mathcal {J}}_1(2M^{-1}+1_{(k)}-x))\,\textrm{d}{x} \\&\quad + \frac{1}{2}\cdot 2^{-k}\cdot (x_0-M^{-1}-(\frac{1}{2})_{(k)}) \\ \ge&\frac{1}{2}\cdot 2^{-k}\cdot (x_0-M^{-1}-(\frac{1}{2})_{(k)}) \end{aligned}\end{aligned}$$
(9.14)

where the inequality uses the decreasing property of \({\mathcal {J}}_1\) due to (9.12) to get the positivity of the integrand.

STEP 1-2: treat the range \(M^{-1}+1_{(k)} \le x_0 \le M^{-1}+1_{(k-1)}=M^{-1}+M_{(k)}\).

For such \(x_0\), (9.14) holds, and \(x_0\notin \text {supp}\,{\mathcal {J}}_2\cup \text {supp}\,{\mathcal {J}}_5\) under the condition (9.13). Then the only negative contribution is from \({\mathcal {J}}_4\), whose support is \(\text {supp}\,{\mathcal {J}}_4 = [M^{-1}+(M-\alpha -1)_{(k)},M^{-1}+(M-2)_{(k)}]\) and has integral \(-\frac{3}{2}\cdot 2^{-k}\). Therefore it suffices to consider \(x_0\in [M^{-1}+(M-\alpha -1)_{(k)},M^{-1}+M_{(k)}]\). Using the negativity and symmetry of \({\mathcal {J}}_4\), we have

$$\begin{aligned}\begin{aligned} \int _{M^{-1}}^{x_0} (x_0-x){\mathcal {J}}_4(x)\,\textrm{d}{x}&\ge \int _{M^{-1}}^{M^{-1}+M_{(k)}} (M^{-1}+M_{(k)}-x){\mathcal {J}}_4(x)\,\textrm{d}{x}\\&= -\frac{3}{2}\cdot 2^{-k}(M^{-1}+M_{(k)}-x_c) = -\frac{3}{2}\cdot 2^{-k}(\frac{\alpha +3}{2})_{(k)} \end{aligned}\end{aligned}$$

where \(x_c=M^{-1}+(M-\frac{\alpha +3}{2})_{(k)}\) is the center of \(\text {supp}\,{\mathcal {J}}_4\). For such \(x_0\), (9.14) gives

$$\begin{aligned} \int _{M^{-1}}^{x_0} (x_0-x){\mathcal {J}}_1(x)\,\textrm{d}{x} \ge \,&\frac{1}{2}\cdot 2^{-k}\cdot (M^{-1}+(M-\alpha -1)_{(k)}-M^{-1} -(\frac{1}{2})_{(k)}) \\ =\,&\frac{1}{2}\cdot 2^{-k}\cdot (M-\alpha -\frac{3}{2})_{(k)} \end{aligned}$$
(9.15)

Therefore, as long as

$$\begin{aligned} M-\alpha -\frac{3}{2} \ge \frac{3}{2}(\alpha +3) \quad \Leftrightarrow \quad \alpha \le \frac{2}{5}(M-6) \end{aligned}$$
(9.16)

which is guaranteed by the assumption (9.8), we have \( \int _{M^{-1}}^{x_0} (x_0-x)({\mathcal {J}}_1(x)+{\mathcal {J}}_4(x))\,\textrm{d}{x}\ge 0\) for \(M^{-1}+1_{(k)} \le x_0 \le M^{-1}+1_{(k-1)}\).

STEP 1-3: treat the range \(M^{-1}+1_{(k-1)} \le x_0 \le 1/2\).

Notice that

$$\begin{aligned} \frac{\partial }{\partial x_0} \int _{M^{-1}}^{x_0} (x_0-x)\sum _{i=1}^5{\mathcal {J}}_i(x)\,\textrm{d}{x} = \int _{M^{-1}}^{x_0} \sum _{i=1}^5{\mathcal {J}}_i(x)\,\textrm{d}{x} \end{aligned}$$

For \(M^{-1}+1_{(k-1)} \le x_0 \le 1/2\), \([M^{-1},x_0]\) contains all the positive contributions: \(\text {supp}\,{\mathcal {J}}_1\cup \text {supp}\,{\mathcal {J}}_3\), therefore

$$\begin{aligned} \int _{M^{-1}}^{x_0} \sum _{i=1}^5{\mathcal {J}}_i(x)\,\textrm{d}{x} \ge \int _{M^{-1}}^{1/2} \sum _{i=1}^5{\mathcal {J}}_i(x)\,\textrm{d}{x} = 0 \end{aligned}$$

Therefore \(\int _{M^{-1}}^{x_0} (x_0-x)\sum _{i=1}^5{\mathcal {J}}_i(x)\,\textrm{d}{x}\) is increasing in \(x_0\) for \(M^{-1}+1_{(k-1)} \le x_0 \le 1/2\), and its positivity follows from STEP 1-2.

STEP 2: show that \( \int _{M^{-1}}^{x_0} (x_0-x)(W_1''*\rho _K )(x)\,\textrm{d}{x}>0\).

\(\omega _1'\) is supported on \([(M-\alpha )_{(1)},1]\subseteq [1/2,1]\). Therefore for \(x\in (M^{-1},1/2)\), the only possible contribution for \(\omega _1'*\rho _K \) comes from \(\rho _K \chi _{I_{1,1}}\), and then

$$\begin{aligned}\begin{aligned} W_1''*\rho _K&= \frac{M}{M-2} + \omega _1'*(\rho _K \chi _{I_{1,1}})\\&= \frac{M}{M-2} + M{\bar{\chi }}_{[(M-2)_{(1)},1]}*(\rho _K \chi _{I_{1,1}}) - \frac{3}{4}\cdot \frac{2}{\alpha -2}M{\bar{\chi }}_{[(M-\alpha )_{(1)},(M-2)_{(1)}]}*(\rho _K \chi _{I_{1,1}}) \\&= \frac{M}{M-2} + {\mathcal {K}}_1 + {\mathcal {K}}_2. \end{aligned}\end{aligned}$$

Applying Lemma 9.4 with x-axis reversed, \({\mathcal {K}}_1\ge 0\) is supported on \([M^{-1},2M^{-1}]\) (after intersecting with \([M^{-1},1/2]\)), and only has piece 1, with expression given by

$$\begin{aligned} {\mathcal {K}}_1(x) = M\psi (2M^{-1}-x),\quad \psi (y) = \int _0^y \rho _K (y_1)\,\textrm{d}{y_1}. \end{aligned}$$

By the symmetry \(\psi (y)+\psi (M^{-1}-y)=1/2\),

$$\begin{aligned} \int _{M^{-1}}^{2M^{-1}}{\mathcal {K}}_1(x)\,\textrm{d}{x} = \frac{1}{2}\cdot \frac{M}{2}\cdot M^{-1} = \frac{1}{4}. \end{aligned}$$

Notice that \(\alpha \ge 4\) by the assumption (9.8). Therefore \({\mathcal {K}}_2\le 0\) is supported on \([M^{-1},\alpha M^{-1}]\subseteq [M^{-1},1/2]\), and has all 3 full pieces. Again, Lemma 9.4 gives

$$\begin{aligned} {\mathcal {K}}_2(x) = -\frac{3}{4}\cdot \frac{2}{\alpha -2}M\cdot \left\{ \begin{array}{ll} \psi (x-M^{-1}), &{} M^{-1} \le x \le 2M^{-1} \\ \frac{1}{2}, &{} 2M^{-1} \le x \le (\alpha -1)M^{-1} \\ \psi (\alpha M^{-1}-x), &{} (\alpha -1)M^{-1} \le x \le \alpha M^{-1} \end{array}\right. \end{aligned}$$
(9.17)

and

$$\begin{aligned} \int _{M^{-1}}^{\alpha M^{-1}}{\mathcal {K}}_2(x)\,\textrm{d}{x} = -\frac{3}{4}\cdot \frac{2}{\alpha -2}M\cdot \frac{1}{2}\cdot (\alpha -2)M^{-1} = -\frac{3}{4}. \end{aligned}$$

Notice that \(W_1''*\rho _K \) is positive at \(M^{-1}\), decreasing on \([M^{-1},2M^{-1}]\), constant on \([2M^{-1},(\alpha -1)M^{-1}]\), increasing on \([(\alpha -1)M^{-1},\alpha M^{-1}]\), and equal to a positive constant on \([\alpha M^{-1},1/2]\). It is clear that \(W_1'*\rho _K\) vanishes at 1/2 by symmetry, and

$$\begin{aligned}\begin{aligned} (W_1'*\rho _K)(M^{-1}) =&\frac{M}{2M-4}\int _{[0,1]}\big (-\text {sgn}(M^{-1}-y)+2(M^{-1}-y)\big )\rho _K(y)\,\textrm{d}{y} \\&+ \int _{I_{1,1}} \big ((M-\frac{1}{2})+M(M^{-1}-y)\big )\rho _K(y)\,\textrm{d}{y} \\ =&\frac{M}{2M-4}\Big (2M^{-1}-2\cdot \frac{1}{2}\Big ) + \frac{1}{2}\Big (M-\frac{1}{2}+1-M\cdot \Big (1-\frac{1}{2M}\Big )\Big ) =0 \end{aligned}\end{aligned}$$

using the fact that the centers of mass of \(\rho _K\) and \(\rho _K\chi _{I_{1,1}}\) are \(\frac{1}{2}\) and \(1-\frac{1}{2M}\) respectively. Then we see that \(W_1*\rho _K \) achieves its minimum on \([M^{-1},1/2]\) at either \(M^{-1}\) or 1/2. Therefore, to show that \((W_1*\rho _K )(x_0)-(W_1*\rho _K )(M^{-1})=\int _{M^{-1}}^{x_0} (x_0-x)(W_1''*\rho _K )(x)\,\textrm{d}{x}>0\), we only need to check it for \(x_0=1/2\).

Similar to (9.14) for \({\mathcal {K}}_1\) and the symmetry of the integrand for \({\mathcal {K}}_2\), we can estimate

$$\begin{aligned} \int _{M^{-1}}^{1/2} \Big (\frac{1}{2}-x\Big )(W_1''*\rho _K )(x)\,\textrm{d}{x} \ge \,&\frac{M}{M-2}\cdot \frac{1}{2}\Big (\frac{1}{2}-M^{-1}\Big )^2 \\&+ \frac{1}{4}\cdot \Big (\frac{1}{2} - \frac{3}{2}M^{-1}\Big ) - \frac{3}{4}\cdot \Big (\frac{1}{2}-\frac{\alpha +1}{2} M^{-1}\Big ) \end{aligned}$$

where we use the fact that the centers of \(\text {supp}\,{\mathcal {K}}_1\) and \(\text {supp}\,{\mathcal {K}}_2\) are \(\frac{3}{2}M^{-1}\) and \(\frac{\alpha +1}{2} M^{-1}\) respectively. The positivity of the RHS is equivalent to

$$\begin{aligned} \alpha > \frac{1}{3}(M+2) \end{aligned}$$
(9.18)

which is guaranteed by (9.8).

Now we show that

$$\begin{aligned} (W_1*\rho _K)(x_0)-(W_1*\rho _K)(M^{-1})= \int _{M^{-1}}^{x_0} (x_0-x)(W_1''*\rho _K )(x)\,\textrm{d}{x}\ge c(x_0)>0 \end{aligned}$$
(9.19)

for any \(x_0\in (M^{-1},1/2)\), with \(c(x_0)\) independent of K. In fact, using the monotone properties of \(W_1*\rho _K\) we obtained, together with the positivity of \(\int _{M^{-1}}^{x_0} (x_0-x)(W_1''*\rho _K )(x)\,\textrm{d}{x}\) at \(x_0=1/2\), it suffices to show (9.19) for \(x_0\) near \(M^{-1}\). Since \(\psi \) is increasing on \([0,M^{-1}]\) with \(\psi (y)+\psi (M^{-1}-y)=1/2\), we have \(\psi (2M^{-1}-x)\ge \psi (x-M^{-1})\) for \(x\in [M^{-1},\frac{3}{2}M^{-1}]\). Together with the fact that \(\alpha \ge 4\), we see that \({\mathcal {K}}_1(x)\ge |{\mathcal {K}}_2(x)|\) for \(x\in [M^{-1},\frac{3}{2}M^{-1}]\), which implies \((W_1''*\rho _K)(x)\ge \frac{M}{M-2}\). Therefore we see (9.19) with \(c(x_0) = \frac{M}{M-2}\int _{M^{-1}}^{x_0} (x_0-x)\,\textrm{d}{x} = \frac{M}{2(M-2)}(x_0-M^{-1})^2>0\) for \(x_0\in (M^{-1},\frac{3}{2}M^{-1}]\).

STEP 3: show that \( \int _{M^{-1}}^{x_0} (x_0-x)((\omega _2'-\omega _1')*\rho _K )(x)\,\textrm{d}{x}>0\) for \(K\ge 2\).

Compared to STEP 1, \({\mathcal {J}}_1,{\mathcal {J}}_3,{\mathcal {J}}_4\) appear in exactly the same form. The terms \({\mathcal {J}}_2,{\mathcal {J}}_5\) are different because they involve convolutions of \(\rho _K \) with \({\bar{\chi }}_{[(M-\alpha )_{(1)},(M-2)_{(1)}]}\) and thus have contributions from \(\rho _K \chi _{I_{1,1}}\), since \([(M-\alpha )_{(1)},(M-2)_{(1)}]\subseteq [1/2,1]\). The new term corresponding to \({\mathcal {J}}_2+{\mathcal {J}}_5\) is

$$\begin{aligned} {\tilde{{\mathcal {J}}}}_2 = -\frac{1}{4}\cdot \frac{2}{\alpha -2}M{\bar{\chi }}_{[(M-\alpha )_{(1)},(M-2)_{(1)}]} * \rho _K \chi _{I_{1,1}} = \frac{1}{3}{\mathcal {K}}_2 \le 0 \end{aligned}$$

where \({\mathcal {K}}_2\) is defined in (9.17). Then STEP 1-3 can be repeated as before. For STEP 1-1, we have the same estimate (9.14) for \({\mathcal {J}}_1\) with \(x_0>M^{-1}+1_{(2)}\). For \(x_0\in (M^{-1},M^{-1}+1_{(2)}]\), we have the lower bound

$$\begin{aligned} \int _{M^{-1}}^{x_0} (x_0-x){\mathcal {J}}_1(x)\,\textrm{d}{x} = \int _{M^{-1}}^{M^{-1}+1_{(2)}} \max \{x_0-x,0\}{\mathcal {J}}_1(x)\,\textrm{d}{x} \ge \int _{M^{-1}}^{x_0} (x_0-x)\frac{M^2}{8}\,\textrm{d}{x} \end{aligned}$$

by replacing \({\mathcal {J}}_1(x)\) with its average on \([M^{-1},M^{-1}+1_{(2)}]\), using the decreasing property of \({\mathcal {J}}_1\) and \(\max (x_0-x,0)\). We also have the lower bound

$$\begin{aligned} \int _{M^{-1}}^{x_0} (x_0-x){\tilde{{\mathcal {J}}}}_2(x)\,\textrm{d}{x} \ge -\int _{M^{-1}}^{x_0} (x_0-x)\frac{1}{3}\cdot \frac{3}{4}\cdot \frac{2}{\alpha -2}M\cdot \frac{1}{2}\,\textrm{d}{x} = -\int _{M^{-1}}^{x_0} (x_0-x)\frac{2}{4(\alpha -2)}M\,\textrm{d}{x} \end{aligned}$$

using (9.17) for \({\tilde{{\mathcal {J}}}}_2=\frac{1}{3}{\mathcal {K}}_2\) and the upper bound \(\psi (y)\le 1/2\). Since \(\alpha \ge 4\) and \(M\ge 4\) by (9.8), we get \(\int _{M^{-1}}^{x_0} (x_0-x)({\mathcal {J}}_1(x)+{\tilde{{\mathcal {J}}}}_2(x))\,\textrm{d}{x}>0\), and thus \(\int _{M^{-1}}^{x_0} (x_0-x)((\omega _2'-\omega _1')*\rho _K)(x)\,\textrm{d}{x}>0\).

For STEP 1-2 (i.e., \(x_0\in [M^{-1}+1_{(2)},2M^{-1}]\subseteq [M^{-1},\alpha M^{-1}]\)), we have an extra negative term \({\tilde{{\mathcal {J}}}}_2\). By (9.17), we have \({\tilde{{\mathcal {J}}}}_2(x) \ge -\frac{1}{4}\cdot \frac{2}{\alpha -2}M\cdot \frac{1}{2}\) for any \(x\in [M^{-1}+1_{(2)},2M^{-1}]\). Therefore its contribution in \( \int _{M^{-1}}^{x_0} (x_0-x)((\omega _2'-\omega _1')*\rho _K )(x)\,\textrm{d}{x}\) can be estimated by

$$\begin{aligned}\begin{aligned} \int _{M^{-1}}^{x_0} (x_0-x){\tilde{{\mathcal {J}}}}_2(x)\,\textrm{d}{x}&\ge \int _{M^{-1}}^{x_0} (x_0-x)\Big (-\frac{1}{4}\cdot \frac{2}{\alpha -2}M\cdot \frac{1}{2}\Big )\,\textrm{d}{x}\\&= -\frac{1}{4(\alpha -2)}M\cdot \frac{1}{2}(x_0-M^{-1})^2 \\&\ge -\frac{M}{8(\alpha -2)}(2M^{-1}-M^{-1})^2 = -\frac{1}{2}\cdot 2^{-2}\cdot \Big (\frac{M}{\alpha -2}\Big )_{(2)} \end{aligned}\end{aligned}$$

where in the last equality we rewrite it in a similar form as the positive contribution (9.15). Therefore, compared to the condition (9.16), we have \( \int _{M^{-1}}^{x_0} (x_0-x)((\omega _2'-\omega _1')*\rho _K )(x)\,\textrm{d}{x}>0\) for \(x_0\in [M^{-1}+1_{(2)},2M^{-1}]\) as long as a more restrictive condition

$$\begin{aligned} M-\alpha -\frac{3}{2} \ge \frac{3}{2}(\alpha +3) + \frac{M}{\alpha -2} \end{aligned}$$
(9.20)

is satisfied. We claim that (9.20) is a consequence of (9.8). In fact, (9.8) implies that \(\alpha \ge 24\). Therefore \(\frac{M}{\alpha -2}\le \frac{24}{22}\cdot \frac{M}{\alpha } \le \frac{24}{22}\cdot 3<4\) by (9.18). Using this, we see that (9.20) can be guaranteed by \(\alpha \le \frac{2}{5}(M-10)\) from (9.8).

Finally, notice that the strictly positive contribution from \(W_1\) (as in STEP 2) appears for every \(K\ge 1\). Therefore we get (9.9) with \(c(x_0)\) independent of K. \(\square \)

Fig. 5 The decomposition of \((\omega _k'-\omega _{k-1}')*\rho _K \) into \({\mathcal {J}}_1,\ldots ,{\mathcal {J}}_5\)

Next we combine Proposition 9.3 with self-similar arguments to obtain (2.2) for \(W_k*\rho _k\), that is, the necessary condition for \(d_2\)-local minimizers.

Proposition 9.5

Assume M and \(\alpha \) satisfy (9.8). Then, for every \(k\ge 1\), \(W_k*\rho _k\) is constant on \(\text {supp}\,\rho _k\); denote this constant by \(c_k\). Moreover, for any \(j\ge 1\) and any \(x_0\not \in \text {supp}\,\rho _j\), there exists \(c(x_0)>0\) such that

$$\begin{aligned} (W_k*\rho _k)(x_0) - c_k \ge c(x_0),\quad \forall k\ge j\,. \end{aligned}$$
(9.21)

See Fig. 6 for an illustration.

Fig. 6 The potential \(W_4*\rho _4\), with \(M=12\) and \(\alpha =5\). The blue dots indicate \(\text {supp}\,\rho _4\). One can see that \(W_4*\rho _4\) is constant on \(\text {supp}\,\rho _4\) and larger elsewhere. Notice that M and \(\alpha \) do not satisfy the condition (9.8). Nevertheless, the conclusion of Proposition 9.5 still holds

Proof

STEP 1: Inside [0, 1].

We start by noticing the self-similar relations

$$\begin{aligned} \rho _k(x) = \frac{M}{2}\rho _{k-1}(M x),\quad \omega _k'(x) = M\omega _{k-1}'(M x),\quad \forall x\in [0,M^{-1}]. \end{aligned}$$
(9.22)

We first use induction on k to prove

$$\begin{aligned} W_k*\rho _k \text{ is } \text{ constant } c_k \text{ on } \text {supp}\,\rho _k\text{, } \text{ and } (W_k*\rho _k)(x) > c_k,\, \forall x\in [0,1]\backslash \text {supp}\,\rho _k. \end{aligned}$$
(9.23)

For the case \(k=1\), Theorem 9.2 implies that \(\rho _1\) is a steady state for \(W_1\), i.e., \(W_1*\rho _1\) is constant on \(I_{1,0}\) and \(I_{1,1}\), and these two constants are the same by symmetry. Combined with Proposition 9.3, we obtain (9.23) for \(k=1\).

Suppose (9.23) is true for \(k-1\). We first apply Proposition 9.3 to see that \((W_k*\rho _k)(x) > (W_k*\rho _k)(M^{-1})=c_k\) for \(x\in (M^{-1},1-M^{-1})\). Notice that \(\text {supp}\,\rho _k\subseteq I_{1,0}\cup I_{1,1}\) and \(\rho _k\) is symmetric about 1/2. By this symmetry, we can reduce ourselves to the interval \(x\in I_{1,0}=[0,M^{-1}]\) to prove that (9.23) holds for \(x\in [0,1]\), since the same conclusion will be true for \(I_{1,1}=[1-M^{-1},1]\), and thus (9.23) is proved for k.

Take \(x\in I_{1,0}=[0,M^{-1}]\), and then we have

$$\begin{aligned}\begin{aligned} (\omega _k'*(\rho _k\chi _{I_{1,0}}))(x)&= \int _0^{M^{-1}}M\omega _{k-1}'(M (x-y))\cdot \frac{M}{2}\rho _{k-1}(M y)\,\textrm{d}{y} \\&= \frac{M}{2}\int _0^1\omega _{k-1}'(M x-y_1)\rho _{k-1}(y_1)\,\textrm{d}{y_1} \\&= \frac{M}{2}(\omega _{k-1}'*\rho _{k-1})(M x) \end{aligned}\end{aligned}$$

by (9.22) and the change of variable \(y_1=M y\). Combined with

$$\begin{aligned}\begin{aligned} ((W_k''-\omega _k')*(\rho _k\chi _{I_{1,0}}))(x)&= \frac{M}{M-2}((-\delta +1)*(\rho _k\chi _{I_{1,0}}))(x) \\&= -\frac{M}{M-2}\rho _k(x) + \frac{M}{M-2}\int _{I_{1,0}}\rho _k(y)\,\textrm{d}{y}\\&= -\frac{M}{M-2}\rho _k(x)+\frac{M}{2M-4}\,, \end{aligned}\end{aligned}$$

we get

$$\begin{aligned} (W_k''*(\rho _k\chi _{I_{1,0}}))(x) = \frac{M}{2}(\omega _{k-1}'*\rho _{k-1})(M x)-\frac{M^2}{2M-4}\rho _{k-1}(M x)+\frac{M}{2M-4}\,. \end{aligned}$$
(9.24)

Since \(|x-I_{1,1}|\subseteq [(M-2)_{(1)},M_{(1)}]\), on which \(W_k''=\frac{M}{M-2}+M\), we have

$$\begin{aligned} (W_k''*(\rho _k\chi _{I_{1,1}}))(x) = \Big (\frac{M}{M-2}+M\Big )\int _{I_{1,1}}\rho _k(y)\,\textrm{d}{y}=\frac{M}{2M-4}+\frac{M}{2}\,. \end{aligned}$$

Adding with (9.24), we get

$$\begin{aligned} (W_k''*\rho _k)(x) &= \frac{M}{2}(\omega _{k-1}'*\rho _{k-1})(M x)-\frac{M^2}{2M-4}\rho _{k-1}(M x)+\frac{M^2}{2M-4} \\ &= \frac{M}{2}(W_{k-1}''*\rho _{k-1})(M x). \end{aligned}$$

Together with \((W_k'*\rho _k)(0)=(W_{k-1}'*\rho _{k-1})(0)=0\) from Theorem 9.2 and integrating twice, we conclude that

$$\begin{aligned} (W_k*\rho _k)(x)-(W_k*\rho _k)(0) = \frac{1}{2M}\big ((W_{k-1}*\rho _{k-1})(M x)-(W_{k-1}*\rho _{k-1})(0)\big ) . \end{aligned}$$
(9.25)

Notice that \(\{Mx:x\in \text {supp}\,\rho _k\cap I_{1,0}\}=\text {supp}\,\rho _{k-1}\). Therefore, the induction hypothesis implies that \(W_k*\rho _k\) is constant on \(\text {supp}\,\rho _k \cap [0,M^{-1}]\), and we call this constant \(c_k\); moreover, \((W_k*\rho _k)(x)>c_k\) for any \(x\in [0,M^{-1}]\backslash \text {supp}\,\rho _k\).

For \(x_0\in [0,1]\backslash \text {supp}\,\rho _j\), to see that the difference \((W_k*\rho _k)(x_0)-c_k\) can be bounded from below uniformly in \(k\ge j\), we notice that iteratively applying (9.25) and its symmetric counterparts gives

$$\begin{aligned} (W_k*\rho _k)(x_0)-(W_k*\rho _k)(0) = \Big (\frac{1}{2M}\Big )^{j'}\big ((W_{k-j'}*\rho _{k-j'})(x_1)-(W_{k-j'}*\rho _{k-j'})(0)\big ) \end{aligned}$$

for some \(0\le j' \le j\) and \(x_1\in [M^{-1},1-M^{-1}]\), for any \(k\ge j\). Then a lower bound for \((W_k*\rho _k)(x_0)-(W_k*\rho _k)(0)\) is given by (9.9) applied at \(x_1\), and this bound is independent of k.

STEP 2: Outside [0, 1]. We now prove

$$\begin{aligned} (W_k*\rho _k)(x_0) > c_k,\quad \forall x_0\in (-\infty ,0)\cup (1,\infty ). \end{aligned}$$
(9.26)

It suffices to treat \(x_0\in (-\infty ,0)\) by symmetry.

If \(x_0\in (M^{-1}-\frac{1}{2},0)\), then we may write

$$\begin{aligned} (W_k*\rho _k)(x_0)-(W_k*\rho _k)(0)=\int _{x_0}^0 (x-x_0)(W_k''*\rho _k)(x)\,\textrm{d}{x} \end{aligned}$$

similarly to (9.10). On this interval, we make use of the decomposition (9.11) and analyze each term separately.

By STEP 1 of the proof of Proposition 9.3 and the symmetry, we get

$$\begin{aligned} \int _{x_0}^0 (x-x_0)((\omega _j'-\omega _{j-1}')*\rho _k)(x)\,\textrm{d}{x} > 0,\quad 3\le j \le k. \end{aligned}$$

For the contribution from \(W_1'' = \frac{M}{M-2}(-\delta +1)+\omega _1'\) (corresponding to STEP 2 of the proof of Proposition 9.3) where \(\text {supp}\,\omega _1'=[(M-\alpha )_{(1)},1]\subseteq [1/2,1]\), we have

$$\begin{aligned} (W_1''*\rho _k)(x) = \frac{M}{M-2}+(\omega _1'*(\rho _k\chi _{I_{1,1}}))(x) >0 \end{aligned}$$

since \(\text {dist}\,(x_0,I_{1,1})\ge (M-1)_{(1)}\) and thus the last convolution only uses the nonzero values of \(\omega _1'\) on \([(M-1)_{(1)},1]\) on which \(\omega _1'=M>0\). Therefore

$$\begin{aligned} \int _{x_0}^0 (x-x_0)(W_1''*\rho _k)(x)\,\textrm{d}{x} > 0. \end{aligned}$$

For the contribution from \(\omega _2'-\omega _1'\), compared to STEP 3 of the proof of Proposition 9.3, we have the same terms \({\mathcal {J}}_1,{\mathcal {J}}_3,{\mathcal {J}}_4\) by symmetry, since they only involve \(\rho _k\chi _{I_{1,0}}\). There is no contribution from \(\rho _k\chi _{I_{1,1}}\) since \(\text {supp}\,(\omega _2'-\omega _1')\subseteq [(M-\alpha )_{(2)},(M-2)_{(1)}]\) and \(\text {dist}\,(x_0,I_{1,1})\ge (M-1)_{(1)}\). Therefore the negative term \({\tilde{{\mathcal {J}}}}_2\) is absent, and we get

$$\begin{aligned} \int _{x_0}^0 (x-x_0)((\omega _2'-\omega _1')*\rho _k)(x)\,\textrm{d}{x} > 0 \end{aligned}$$

by STEP 3 of the proof of Proposition 9.3. Therefore we conclude (9.26) for \(x_0\in (M^{-1}-\frac{1}{2},0)\). The difference \((W_k*\rho _k)(x_0)-c_k\) can be bounded from below uniformly in k because the positive contribution from \(W_1\) appears for every \(k\ge 1\).

If \(x_0\le M^{-1}-\frac{1}{2}\), we will analyze \((W_k'*\rho _k)(x_0)\). We have

$$\begin{aligned} \Big (\frac{M}{2M-4}(-\text {sgn}(x)+2x)*\rho _k\Big )(x_0) &= \frac{M}{2M-4}\Big (1+2\Big (x_0-\frac{1}{2}\Big )\Big ) \\ &= \frac{M}{M-2}x_0 \le -\frac{M}{M-2}\cdot \Big (\frac{1}{2}-\frac{1}{M}\Big ) = -\frac{1}{2} \end{aligned}$$

using the fact that the center of mass of \(\rho _k\) is \(\frac{1}{2}\). For \(\omega _k*\rho _k\), first notice that \(\text {dist}\,(x_0,I_{1,1})> 1-M^{-1}+\frac{1}{2}-M^{-1}>1\); hence we have

$$\begin{aligned} (\omega _k*(\rho _k\chi _{I_{1,1}}))(x_0) = -\frac{1}{2}\int _{I_{1,1}}\rho _k\,\textrm{d}{x} = -\frac{1}{4} \end{aligned}$$

since \(\omega _k=-\frac{1}{2}\) on \((-\infty ,-1]\). Then notice that \(x_0-x\in (-\infty ,M^{-1}-\frac{1}{2}]\) for any \(x\in I_{1,0}\). On \([\frac{1}{2}-M^{-1},\infty )\subseteq [M^{-1},\infty )\), the expression for \(\omega _k\) shows that \(\omega _k\ge -\frac{3}{2}\). Hence, since \(\omega _k\) is odd, \(\omega _k\le \frac{3}{2}\) on \((-\infty ,M^{-1}-\frac{1}{2}]\). Therefore

$$\begin{aligned} (\omega _k*(\rho _k\chi _{I_{1,0}}))(x_0) \le \frac{3}{2}\int _{I_{1,0}}\rho _k\,\textrm{d}{x} = \frac{3}{4}\,. \end{aligned}$$

Combining the above three estimates, we see that \((W_k'*\rho _k)(x_0)\le 0\) for any \(x_0\le M^{-1}-\frac{1}{2}\), i.e., \(W_k*\rho _k\) is non-increasing on \((-\infty ,M^{-1}-\frac{1}{2}]\). Combined with (9.26) for \(x_0\in (M^{-1}-\frac{1}{2},0)\), we get (9.26) for \(x_0\le M^{-1}-\frac{1}{2}\). \(\square \)

Define the limiting potential \(W_\infty \) by the pointwise limit

$$\begin{aligned} W_\infty '(x)=\lim _{k\rightarrow \infty } W_k'(x),\quad \forall x\ne 0. \end{aligned}$$

We remind the reader that the weak limit of \(\rho _k\) is denoted by \(\rho _\infty \), which is the uniform distribution on the Cantor set \(S=\bigcap _{k=0}^\infty S_k\). The main theorem of this section asserts that \(\rho _\infty \) is also a steady state for \(W_\infty \) satisfying the \(d_2\)-local minimizer condition.
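For intuition about the limiting support, the following short Python sketch constructs the level-k intervals of \(S_k\) and draws approximate samples of \(\rho _\infty \); it assumes the rule read off from (9.22) and the definition of the intervals \(I_{k,l}\), namely that each interval of \(S_{k-1}\) keeps only its leftmost and rightmost sub-intervals of relative length \(M^{-1}\).

import numpy as np

def level_intervals(M, k):
    # Intervals I_{k,l} of S_k: each interval keeps its two end sub-intervals of relative length 1/M.
    intervals = [(0.0, 1.0)]
    for _ in range(k):
        refined = []
        for a, b in intervals:
            h = (b - a) / M
            refined.append((a, a + h))
            refined.append((b - h, b))
        intervals = refined
    return intervals  # 2^k intervals, each of length M^{-k}

def sample_rho_infty(M, k, n, seed=0):
    # Approximate samples of rho_infty: equal mass 2^{-k} on each level-k interval.
    rng = np.random.default_rng(seed)
    ivs = np.array(level_intervals(M, k))
    idx = rng.integers(len(ivs), size=n)
    a, b = ivs[idx, 0], ivs[idx, 1]
    return a + (b - a) * rng.random(n)

print(level_intervals(12, 2))  # M = 12: four intervals of length 1/144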

Theorem 9.6

For M and \(\alpha \) satisfying the condition (9.8) in Proposition 9.3, there holds

$$\begin{aligned} (W_\infty *\rho _\infty )(x)=c_\infty ,\, \forall x\in \text {supp}\,\rho _\infty ,\quad (W_\infty *\rho _\infty )(x)> c_\infty ,\, \forall x\notin \text {supp}\,\rho _\infty \end{aligned}$$
(9.27)

for some constant \(c_\infty \). Also, \(W_\infty *\rho _\infty \in C^{1,\gamma }_{loc}\) for some \(\gamma >0\). Therefore, \(W_\infty '*\rho _\infty \) is defined pointwise and satisfies \(W_\infty '*\rho _\infty =0\) on \(\text {supp}\,\rho _\infty \), i.e., \(\rho _\infty \) is a steady state for the interaction energy associated to the potential \(W_\infty \).

Proof

It is clear that \(W_\infty \in L^\infty _{loc}\) and therefore \(W_\infty *\rho _\infty \) is well-defined since \(\rho _\infty \) is compactly supported. We first claim that \(W_k*\rho _k\rightarrow W_\infty *\rho _\infty \) pointwise as \(k\rightarrow \infty \). In fact, we write \((W_k*\rho _k)(x)- (W_\infty *\rho _\infty )(x) = ((W_k*\rho _k)(x)- (W_k*\rho _\infty )(x)) + ((W_k*\rho _\infty )(x)- (W_\infty *\rho _\infty )(x))\) and control the two terms separately.

For the first term, notice that

$$\begin{aligned}\begin{aligned} |(W_k*\rho _k)(x)- (W_k*\rho _\infty )(x)| =&\left| \int _{\mathbb {R}}W_k(x-y)(\rho _k(y)-\rho _\infty (y))\,\textrm{d}{y}\right| \\ \le&\sum _{l=0}^{2^k-1} \left| \int _{I_{k,l}} W_k(x-y)(\rho _k(y)-\rho _\infty (y))\,\textrm{d}{y}\right| \\ \end{aligned}\end{aligned}$$

since \(\rho _k\) and \(\rho _\infty \) are supported inside \(\text {supp}\,\rho _k=\bigcup _{l=0}^{2^k-1} I_{k,l}\). Notice that we have the mass conservation property \(\int _{I_{k,l}} (\rho _k(y)-\rho _\infty (y))\,\textrm{d}{y} = 0\). Therefore, denoting \(y_0\) as the left endpoint of \(I_{k,l}\),

$$\begin{aligned}\begin{aligned}&\left| \int _{I_{k,l}} W_k(x-y)(\rho _k(y)-\rho _\infty (y))\,\textrm{d}{y}\right| \\ =&\left| \int _{I_{k,l}} (W_k(x-y)-W_k(x-y_0))(\rho _k(y)-\rho _\infty (y))\,\textrm{d}{y}\right| \\ \le \,&\Vert W_k'\Vert _{L^\infty ([x-1,x])} |I_{k,l}|\int _{I_{k,l}}|\rho _k(y)-\rho _\infty (y)|\,\textrm{d}{y} \\ \le \,&C(x) M^{-k}2^{-k}, \end{aligned}\end{aligned}$$

where C(x) denotes a constant depending on x, and it is independent of k by the construction of \(W_k\) as a locally Lipschitz function. Summing over l, we get

$$\begin{aligned}\begin{aligned} |(W_k*\rho _k)(x)- (W_k*\rho _\infty )(x)| \le C(x) M^{-k}. \end{aligned}\end{aligned}$$

For the second term, notice that \(\Vert W_k'-W_{k-1}'\Vert _{L^\infty } \le C\) and \(|\text {supp}\,(W_k'-W_{k-1}')| \le CM^{-k}\) by construction. Therefore, after adding suitable constants to \(W_k\), we have \(\Vert W_k-W_{k-1}\Vert _{L^\infty } \le CM^{-k}\). Summing these bounds over \(k+1,k+2,\ldots \) gives \(\Vert W_k-W_\infty \Vert _{L^\infty } \le CM^{-k}\). Therefore, we conclude

$$\begin{aligned} |(W_k*\rho _\infty )(x)- (W_\infty *\rho _\infty )(x)| \le \Vert W_k-W_\infty \Vert _{L^\infty }\cdot \int _{\mathbb {R}}\rho _\infty (y)\,\textrm{d}{y} \le CM^{-k} \end{aligned}$$

and the claimed convergence is proved.

Recalling Proposition 9.5 and applying this convergence to \(x_1,x_2\in \text {supp}\,\rho _\infty \), we see that \((W_\infty *\rho _\infty )(x_1)=(W_\infty *\rho _\infty )(x_2)\) since \(\text {supp}\,\rho _\infty \subseteq \text {supp}\,\rho _k\) for any k. Applying this convergence to \(x_1\in \text {supp}\,\rho _\infty , x_2\notin \text {supp}\,\rho _\infty \), we see that \((W_\infty *\rho _\infty )(x_1)\le (W_\infty *\rho _\infty )(x_2) - c(x_2)\) with \(c(x_2)>0\). This proves (9.27).

Next we prove the (local) Hölder continuity of the velocity field \(u:=-W_\infty '*\rho _\infty \). Fix \(R>0\) large. By construction,

$$\begin{aligned} \Vert W_\infty '\Vert _{L^\infty ([-R,R])}\le C,\quad \Vert W_\infty ''\Vert _{L^\infty ([-R,R]\backslash (-\kappa ,\kappa ))}\le \frac{C}{\kappa } \end{aligned}$$

for any \(\kappa >0\).

Take \(0<\epsilon <1/2\), \(x\in [-(R-2),R-2]\) and write

$$\begin{aligned}\begin{aligned} u(x) -u(x+\epsilon )&= \int _{[0,1]} (W_\infty '(x-y+\epsilon )-W_\infty '(x-y))\rho _\infty (y)\,\textrm{d}{y} \\&= \int _{|x-y|\ge \kappa } (W_\infty '(x-y+\epsilon )-W_\infty '(x-y))\rho _\infty (y)\,\textrm{d}{y} \\&\quad + \int _{|x-y|<\kappa } (W_\infty '(x-y+\epsilon )-W_\infty '(x-y))\rho _\infty (y)\,\textrm{d}{y} \end{aligned}\end{aligned}$$

where \(\kappa >2\epsilon \) is to be chosen. Since \(x\in [-(R-2),R-2]\), \(\text {supp}\,\rho _\infty \subseteq [0,1]\) and \(\epsilon <1/2\), all the arguments in \(W_\infty '\) above are inside \([-R,R]\). Then we estimate the first integral by

$$\begin{aligned} \left| \int _{|x-y|\ge \kappa } (W_\infty '(x-y+\epsilon )-W_\infty '(x-y))\rho _\infty (y)\,\textrm{d}{y}\right| &\le \epsilon \Vert W_\infty ''\Vert _{L^\infty ([-R,R]\backslash (-\kappa /2,\kappa /2))} \int _{[0,1]} \rho _\infty (y)\,\textrm{d}{y} \\ &\le \frac{C\epsilon }{\kappa } \end{aligned}$$

and the second integral by

$$\begin{aligned} \left| \int _{|x-y|<\kappa } (W_\infty '(x-y+\epsilon )-W_\infty '(x-y))\rho _\infty (y)\,\textrm{d}{y}\right| &\le 2\Vert W_\infty '\Vert _{L^\infty ([-R,R])}\int _{|x-y|<\kappa }\rho _\infty (y)\,\textrm{d}{y} \\ &\le C2^{-k} \end{aligned}$$

where \(k=\lfloor -\log _M \frac{2\kappa }{M-2} \rfloor \), and the last inequality follows from the facts that distinct intervals \(I_{k,l}\) and \(I_{k,l'}\) have distance at least \((M-2)_{(k)}\), and that \((x-\kappa ,x+\kappa )\) can intersect \(I_{k,l}\) for at most one value of l if \(2\kappa \le (M-2)_{(k)}\).

Therefore

$$\begin{aligned} |u(x)-u(x+\epsilon )| \le C\Big (\frac{\epsilon }{\kappa } + \kappa ^{\frac{\ln 2}{\ln M}}\Big )\,. \end{aligned}$$

Balancing the two terms, we take \(\kappa = \max \{\epsilon ^{1/(1+\frac{\ln 2}{\ln M})},2\epsilon \}\) and obtain the Hölder continuity of u with exponent \(\gamma =1-1/(1+\frac{\ln 2}{\ln M})\). \(\square \)
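For concreteness, the exponent can be evaluated numerically; note that \(\gamma =1-1/(1+\frac{\ln 2}{\ln M})=\frac{\ln 2}{\ln (2M)}\), so \(\gamma \) decreases slowly as M grows. A short Python check:

import math

# Hölder exponent gamma = 1 - 1/(1 + ln2/lnM) from the proof above; equals ln2/ln(2M).
for M in (12, 70, 200):
    beta = math.log(2) / math.log(M)
    print(M, 1 - 1 / (1 + beta))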

Fig. 7: Large time simulation of (10.1) with \(\lambda =0.15\)

Fig. 8: Large time simulation of (10.1) with \(\lambda =0.1\)

10 Numerical simulations

In this section, we numerically illustrate the main results on the almost fractal behavior of the support of steady states, Theorem 8.1 in Sect. 8 and Theorem 7.1 in Sect. 7. We show several numerical simulations with the potential W constructed in (8.1). We consider a particle gradient flow in 2D:

$$\begin{aligned} {\dot{\textbf{x}}}_i = -\frac{1}{N}\sum _{j\ne i} \nabla W(\textbf{x}_i-\textbf{x}_j),\quad i=1,\ldots ,N \end{aligned}$$
(10.1)

where W is defined by

$$\begin{aligned} W(\textbf{x}) = c_{d,\alpha }|\textbf{x}|^{\alpha -d} + C_2\frac{|\textbf{x}|^2}{2}- c_W\sum _{k=1}^K \lambda ^{(\alpha -d)k} \exp \Big (-\frac{|\textbf{x}|^2}{2\lambda ^{2k}}\Big )\,. \end{aligned}$$

We take the parameters

$$\begin{aligned} N=2000,\quad \alpha = 3,\quad K = 7,\quad c_W = 0.25,\quad C_2 = 0.2\,. \end{aligned}$$

To solve (10.1) numerically, we take the initial data as N random points sampled from the uniform distribution on \([0,0.5]^2\), apply the fourth-order Runge–Kutta method with time step \(\Delta t=0.01\), and stop at the final time \(T=200\). See Figs. 7 and 8 for the results at \(T=200\), for the choices \(\lambda =0.15\) and \(\lambda =0.1\) respectively. Both figures show fractal behavior of the particle distributions. In Fig. 7 one can see 7 layers of the fractal structure, as marked on the picture. In Fig. 8 one can only see 4 layers because the number of particles N is not sufficiently large: at Layer 4 each cluster contains only two or three particles, which are not enough to resolve the next layer.
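For reproducibility, a minimal Python sketch of this experiment is given below. It is only an illustration, not the code used for Figs. 7 and 8: the constant \(c_{d,\alpha }\) (defined earlier in the paper) is replaced by a placeholder value, and N and T are reduced so that the \(O(N^2)\) evaluation of the right-hand side stays cheap; the values of the paper (\(N=2000\), \(T=200\), \(\Delta t=0.01\)) can be restored directly.

import numpy as np

d, alpha, K = 2, 3.0, 7
c_W, C2 = 0.25, 0.2
lam = 0.15                           # lambda; use 0.1 for the experiment of Fig. 8
c_d_alpha = 1.0                      # placeholder for the paper's constant c_{d,alpha}
N, dt, T = 400, 0.01, 20.0           # reduced from N = 2000, T = 200 for a quick run

def velocity(X):
    # Right-hand side of (10.1): -(1/N) sum_{j != i} grad W(x_i - x_j).
    diff = X[:, None, :] - X[None, :, :]          # entry [i, j] = x_i - x_j
    r2 = np.sum(diff**2, axis=-1)
    np.fill_diagonal(r2, 1.0)                     # dummy value; the diagonal is zeroed below
    r = np.sqrt(r2)
    # grad W(z) = [c_{d,alpha}(alpha-d)|z|^{alpha-d-2} + C_2
    #              + c_W sum_k lambda^{(alpha-d)k}/lambda^{2k} exp(-|z|^2/(2 lambda^{2k}))] z
    coef = c_d_alpha * (alpha - d) * r**(alpha - d - 2) + C2
    for k in range(1, K + 1):
        s2 = lam**(2 * k)
        coef += c_W * lam**((alpha - d) * k) / s2 * np.exp(-r2 / (2.0 * s2))
    np.fill_diagonal(coef, 0.0)                   # exclude self-interaction
    return -(coef[:, :, None] * diff).sum(axis=1) / N

rng = np.random.default_rng(0)
X = 0.5 * rng.random((N, d))                      # uniform initial data in [0, 0.5]^2
for _ in range(int(T / dt)):                      # classical fourth-order Runge-Kutta
    k1 = velocity(X)
    k2 = velocity(X + 0.5 * dt * k1)
    k3 = velocity(X + 0.5 * dt * k2)
    k4 = velocity(X + dt * k3)
    X = X + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)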