1 Introduction

The paper is devoted to multipolar Hardy inequalities with weight in \(\mathbb {R}^N\), \(N\ge 3\), for a sufficiently wide class of weight functions. The main difficulty in obtaining the inequalities in the multipolar case lies in the mutual interaction among the poles.

The interest in weighted Hardy inequalities is due to the applications to the study of Kolmogorov operators

$$\begin{aligned} Lu=\Delta u+\frac{\nabla \mu }{\mu }\cdot \nabla u, \end{aligned}$$
(1)

defined on smooth functions, where \(\mu >0\) is a probability density on \(\mathbb {R}^N\), perturbed by inverse-square potentials of multipolar type, and to the related evolution problems

$$\begin{aligned} (P)\quad \left\{ \begin{array}{ll} \partial _tu(x,t)=Lu(x,t)+V(x)u(x,t),\quad \,x\in {{\mathbb {R}}}^N, t>0,\\ u(\cdot ,0)=u_0\ge 0\in L^2(\mathbb {R}^N, \mu (x)dx). \end{array} \right. \end{aligned}$$

In the case of a single pole and of the Lebesgue measure there is a vast literature on this topic. For the classical Hardy inequality we refer, for example, to [15, 17,18,19,20,21].

We focus our attention on multipolar Hardy’s inequalities.

When L is the Schrödinger operator with multipolar inverse-square potentials, some reference results can be found in the literature.

In particular, for the operator

$$\begin{aligned} {\mathcal {L}}=-\Delta -\sum _{i=1}^n\frac{c_i}{|x-a_i|^2}, \end{aligned}$$

\(n\ge 2\), \(c_i\in \mathbb {R}\), for any \(i\in \{1,\ldots , n\}\), Felli et al. [16] proved that the associated quadratic form

$$\begin{aligned} Q(\varphi ):=\int _{\mathbb {R}^N}|\nabla \varphi |^2\,dx -\sum _{i=1}^n c_i\int _{{{\mathbb {R}}}^N}\frac{\varphi ^2}{|x-a_i|^2}\,dx \end{aligned}$$

is positive if \(\sum _{i=1}^nc_i^+<\frac{(N-2)^2}{4}\), where \(c_i^+=\max \{c_i,0\}\); conversely, if \(\sum _{i=1}^nc_i^+>\frac{(N-2)^2}{4}\), there exists a configuration of poles such that Q is not positive. Later, Bosi et al. [1] proved that for any \(c\in \left( 0,\frac{(N-2)^2}{4}\right] \) there exists a positive constant K such that the multipolar Hardy inequality

$$\begin{aligned} c\int _{{\mathbb {R}}^N}\sum _{i=1}^n\frac{\varphi ^2 }{|x-a_i|^2}\, dx\le \int _{{\mathbb {R}}^N} |\nabla \varphi |^2 \, dx + K \int _{\mathbb {R}^N}\varphi ^2 \, dx \end{aligned}$$

holds for any \(\varphi \in H^1(\mathbb {R}^N)\). Cazacu and Zuazua [14], improving a result stated in [1], obtained the inequality

$$\begin{aligned} \frac{(N-2)^2}{n^2}\sum _{\begin{array}{c} i,j=1\\ i< j \end{array}}^{n} \int _{\mathbb {R}^N}\frac{|a_i-a_j|^2}{|x-a_i|^2|x-a_j|^2}\varphi ^2\,dx \le \int _{\mathbb {R}^N}|\nabla \varphi |^2\,dx, \end{aligned}$$

for any \(\varphi \in H^1(\mathbb {R}^N)\), where \(\frac{(N-2)^2}{n^2}\) is the optimal constant (see also [13] for estimates in bounded domains).

For Ornstein–Uhlenbeck type operators

$$\begin{aligned} Lu=\Delta u - \sum _{i=1}^{n}A(x-a_i)\cdot \nabla u, \end{aligned}$$

perturbed by multipolar inverse square potentials

$$\begin{aligned} V(x)=\sum _{i=1}^n \frac{c}{|x-a_i|^2},\quad c>0, \quad a_1,\ldots ,a_n\in \mathbb {R}^N, \end{aligned}$$

weighted multipolar Hardy inequalities with optimal constant, together with existence and nonexistence results for solutions to problem (P), were stated in [10] following Cabré–Martel’s approach in [2], where A is a positive definite real Hermitian \(N\times N\) matrix and \(a_i\in \mathbb {R}^N\), \(i\in \{1,\ldots , n\}\). In such a case, the invariant measure for these operators is the Gaussian measure \(\mu _A (x) dx =\kappa e^{-\frac{1}{2}\sum _{i=1}^{n}\left\langle A(x-a_i), x-a_i\right\rangle }dx\), with \(\kappa \) a normalization constant. The technique used to obtain the inequality exploits the Gaussian functions and yields the result in a simple way. A more delicate issue is to prove the optimality of the constant.

In [12] these results were extended to Kolmogorov operators with a more general drift term, which forces us to use different methods.

The result stated in [14] has been extended to the weighted multipolar case in [6].

In this paper we improve a result in [12]. In particular, we prove that the inequality

$$\begin{aligned} c\int _{{{\mathbb {R}}}^N}\sum _{i=1}^n \frac{\varphi ^2 }{|x-a_i|^2}\, \mu (x)dx\le \int _{{{\mathbb {R}}}^N} |\nabla \varphi |^2 \, \mu (x)dx +k \int _{\mathbb {R}^N}\varphi ^2 \, \mu (x)dx \end{aligned}$$
(2)

holds for any \( \varphi \in H^1_\mu \), with \(c\in ]0,c_o[\), where \(c_o=c_o(N,\mu )\) is the optimal constant, showing the relation between c and the closeness to a single pole and improving the constant k in the estimate. The proof first uses the vector field method (see [22]) extended to the weighted case. Then we overcome the difficulties arising from the mutual interaction among the poles, emphasizing this relation.

The class of weight functions satisfies conditions of a quite general type, in particular integrability conditions needed to obtain a density result which allows us to state inequality (2) for any function in the weighted Sobolev space. Weights of this type were considered in [3,4,5, 11] in the case of a single pole.

So far, the optimal constant on the left-hand side in (2) has been achieved using the IMS truncation method [23, 24] (see [1] in the case of the Lebesgue measure and [12] in the weighted case). As a counterpart, the estimate is not very good when the constant c is close to the constant \(\frac{c_o(N,\mu )}{n}\), as observed in [1] in the unweighted case.

The paper is organized as follows. In Sect. 2 we discuss the weight functions and give an example. In Sect. 3 we prove a preliminary result, introducing suitable estimates that are needed for the main result in Sect. 4.

2 Weight functions

Let \(\mu \ge 0\) be a weight function on \(\mathbb {R}^N\). We define the weighted Sobolev space \(H^1_\mu =H^1(\mathbb {R}^N, \mu (x)dx)\) as the space of functions in \(L^2_\mu :=L^2(\mathbb {R}^N, \mu (x)dx)\) whose weak derivatives belong to \(L_\mu ^2\).

In the proof of the weighted estimates we make use of the vector field method, introduced in [22] in the case of a single pole and extended to the multipolar case in [12]. To this aim we define the vector-valued function

$$\begin{aligned} F(x)=\sum _{i=1}^n \beta \, \frac{x-a_i}{|x-a_i|^2} \mu (x), \quad \beta >0. \end{aligned}$$

The class of weight functions \(\mu \) that we consider fulfills the conditions:

\(H_1)\):
i):

\(\quad \sqrt{\mu }\in H^1_{loc}(\mathbb {R}^N)\);

ii):

\(\quad \mu ^{-1}\in L_{loc}^1(\mathbb {R}^N)\);

\(H_2)\):

there exist constants \(C_\mu , K_\mu \in \mathbb {R}\), \(K_\mu >2-N\), such that

$$\begin{aligned} -\beta \sum _{i=1}^n\frac{(x-a_i)}{|x-a_i|^2}\cdot \frac{\nabla \mu }{\mu }\le C_\mu + K_\mu \sum _{i=1}^n\frac{\beta }{|x-a_i|^2}. \end{aligned}$$

Under the hypotheses i) and ii) in \(H_1)\) the space \(C_c^{\infty }(\mathbb {R}^N)\) is dense in \(H_{\mu }^1\) (see e.g. [25]). So we can regard \(H_{\mu }^1\) as the completion of \(C_c^{\infty }(\mathbb {R}^N)\) with respect to the Sobolev norm

$$\begin{aligned} \Vert \cdot \Vert _{H^1_\mu }^2:= \Vert \cdot \Vert _{L^2_\mu }^2 + \Vert \nabla \cdot \Vert _{L^2_\mu }^2. \end{aligned}$$

The density result allows us to get the weighted inequalities for any function in \(H^1_\mu \). As a consequence of the assumptions on \(\mu \), we get \(F_j\), \(\frac{\partial F_j}{\partial x_j}\in L_{loc}^1(\mathbb {R}^N)\), where \(F_j(x)=\beta \sum _{i=1}^{n}\frac{(x-a_i)_j}{|x-a_i|^2}\mu (x)\). This allows us to integrate by parts in the proof of Theorem 2 in Sect. 4.

An example of weight function satisfying \(H_2)\) is

$$\begin{aligned} \mu (x)=\prod _{j=1}^n\mu _j (x)= e^{-\delta \sum _{j=1}^{n}|x-a_j|^2}, \quad \delta \ge 0. \end{aligned}$$

Let us see it in detail without worrying about the best estimates. We get

$$\begin{aligned} \frac{\nabla \mu }{\mu }=\sum _{j=1}^{n}\frac{\nabla \mu _j}{\mu _j}= -2\delta \sum _{j=1}^{n} (x-a_j). \end{aligned}$$

So, keeping in mind the left-hand side in \(H_2)\),

$$\begin{aligned} -\beta \sum _{i=1}^{n}\frac{(x-a_i)}{|x-a_i|^2}\cdot \frac{\nabla \mu }{\mu }= 2\beta \delta \sum _{i,j=1}^{n} \frac{(x-a_i)\cdot (x-a_j)}{|x-a_i|^2}. \end{aligned}$$

We estimate the scalar products. In \(B(a_k, r_0)\), for any \(k\in \{1, \ldots, n\}\), we get

$$\begin{aligned} 2\beta \delta \sum _{i,j=1}^{n}\frac{(x-a_i)\cdot (x-a_j)}{|x-a_i|^2}&= 2\beta \delta \frac{(x-a_k)\cdot (x-a_k)}{|x-a_k|^2}\nonumber \\&\quad + 2\beta \delta \sum _{\begin{array}{c} i\ne k\\ j=i \end{array}}^{n}\frac{(x-a_i)\cdot (x-a_i)}{|x-a_i|^2}+ 2\beta \delta \sum _{j\ne k}\frac{(x-a_k)\cdot (x-a_j)}{|x-a_k|^2}\nonumber \\&\quad + 2\beta \delta \sum _{\begin{array}{c} i\ne k\\ j\ne i \end{array}}^{n} \frac{(x-a_i)\cdot (x-a_j)}{|x-a_i|^2}= J_1+J_2+J_3+J_4. \end{aligned}$$
(3)

So

$$\begin{aligned} J_1+J_2=2\beta \delta n. \end{aligned}$$

Since

$$\begin{aligned} (x-a_k)\cdot (x-a_j)=\frac{1}{2}\left( |x-a_k|^2+|x-a_j|^2-|a_k-a_j|^2\right) , \end{aligned}$$

\(J_3\) and \(J_4\) can be estimated as follows.

$$\begin{aligned} J_3= & {} \beta \delta \sum _{j\ne k} \left( 1+\frac{|x-a_j|^2-|a_k-a_j|^2}{|x-a_k|^2}\right) \\\le & {} \beta \delta \sum _{j\ne k} \left[ 1+ \frac{(r_0+|a_k-a_j|)^2-|a_k-a_j|^2}{|x-a_k|^2}\right] \end{aligned}$$

and

$$\begin{aligned} J_4= & {} \beta \delta \sum _{\begin{array}{c} i\ne k\\ j\ne i \end{array}}^{n} \left( 1+\frac{|x-a_j|^2-|a_i-a_j|^2}{|x-a_i|^2}\right) \\\le & {} \beta \delta \sum _{\begin{array}{c} i\ne k\\ j\ne i \end{array}}^{n} \left[ 1+ \frac{(r_0+|a_k-a_j|)^2-|a_i-a_j|^2}{(|a_k-a_i|-r_0)^2}\right] . \end{aligned}$$

Then for \(C_\mu \) large enough and \(K_{\mu ,r_0}= \delta \sum _{j\ne k} (r_0^2+2r_0|a_k-a_j|)\) in \(B(a_k, r_0)\) the condition \(H_2)\) holds. For \(x\in \mathbb {R}^N\setminus \bigcup _{k=1}^n B(a_k, r_0)\) we obtain

$$\begin{aligned} \frac{(x-a_i)\cdot (x-a_j)}{|x-a_i|^2}\le \frac{|x-a_j|}{|x-a_i|}\le \hbox {const}. \end{aligned}$$

In fact, if \(|x|>2\max _i |a_i|\),

$$\begin{aligned} \frac{|x|}{2}\le |x|-|a_i| \le |x-a_i|\le |x|+|a_i|\le \frac{3}{2}|x| \end{aligned}$$

for any i, so for |x| large enough we get \(|x-a_i|\sim |x|\). If instead \(|x|\le R=2\max _i |a_i|\),

$$\begin{aligned} r_0\le |x-a_i|\le |x|+|a_i|\le \frac{3}{2}R \end{aligned}$$

for any i.
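As a numerical sanity check (not part of the argument), one can test \(H_2)\) for this Gaussian example at random points in balls around the poles, with \(K_{\mu ,r_0}\) as computed above; the pole configuration, \(\delta \), \(\beta \), \(r_0\) and the (non-optimal, large enough) constant \(C_\mu \) below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
poles = np.array([[0., 0., 0.], [2., 0., 0.], [0., 2., 0.]])  # illustrative poles in R^3
n = len(poles)
delta, beta, r0, C_mu = 0.5, 1.0, 0.3, 20.0  # C_mu chosen large enough, not optimal

def lhs(x):
    # -beta * sum_i (x - a_i)/|x - a_i|^2 . grad(mu)/mu, with grad(mu)/mu = -2*delta*sum_j (x - a_j)
    g = -2.0 * delta * (x - poles).sum(axis=0)
    return -beta * sum((x - a) @ g / ((x - a) @ (x - a)) for a in poles)

def rhs(x, k):
    # C_mu + K_{mu,r0} * sum_i beta/|x - a_i|^2, valid on B(a_k, r0)
    K = delta * sum(r0**2 + 2 * r0 * np.linalg.norm(poles[k] - poles[j])
                    for j in range(n) if j != k)
    return C_mu + K * sum(beta / ((x - a) @ (x - a)) for a in poles)

h2_holds = True
for k in range(n):
    for _ in range(300):
        v = rng.normal(size=3)
        x = poles[k] + v * rng.uniform(1e-3, r0) / np.linalg.norm(v)
        h2_holds = h2_holds and (lhs(x) <= rhs(x, k))
```

Near a pole the left-hand side grows like \(1/|x-a_k|\), while the right-hand side grows like \(K_{\mu ,r_0}\beta /|x-a_k|^2\), so the inequality holds with room to spare.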

For other examples see [12].

3 A preliminary estimate

The next result was stated in [12] (see also [6]). We give a reformulated version suited to our purposes. The estimate represents a preliminary weighted Hardy inequality.

Theorem 1

Let \(N\ge 3\) and \(n\ge 2\). Under hypotheses \(H_1)\) and \(H_2)\) we get

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^N}\sum _{i=1}^{n}\frac{\beta (N+K_\mu -2)-n\beta ^2}{|x-a_i|^2}\varphi ^2\,d\mu \\ {}&\quad + \frac{\beta ^2}{2}\int _{\mathbb {R}^N}\sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^{n} \frac{|a_i-a_j|^2}{|x-a_i|^2|x-a_j|^2}\varphi ^2 \, d\mu \\ {}&\qquad \le \int _{\mathbb {R}^N}|\nabla \varphi |^2 \, d\mu +C_\mu \int _{\mathbb {R}^N}\varphi ^2 \, d\mu \end{aligned} \end{aligned}$$
(4)

for any \( \varphi \in H_\mu ^1\). As a consequence the following inequality holds

$$\begin{aligned}{} & {} c_{N,n,\mu }\int _{\mathbb {R}^N}\sum _{i=1}^{n}\frac{\varphi ^2}{|x-a_i|^2}\,d\mu \nonumber \\{} & {} \qquad + \frac{c_{N,n,\mu }}{2n}\int _{\mathbb {R}^N}\sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^{n} \frac{|a_i-a_j|^2}{|x-a_i|^2|x-a_j|^2}\varphi ^2 \, d\mu \nonumber \\{} & {} \quad \le \int _{\mathbb {R}^N}|\nabla \varphi |^2 \, d\mu +C_\mu \int _{\mathbb {R}^N}\varphi ^2 \, d\mu , \end{aligned}$$
(5)

where \(c_{N,n,\mu }=\frac{(N+K_\mu -2)^2}{4n}\) is the maximum value of the first constant on left-hand side in (4) attained for \(\beta =\frac{N+K_\mu -2}{2 n}\).
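The maximization over \(\beta \) behind \(c_{N,n,\mu }\) is elementary and can be checked numerically; the values of N, n and \(K_\mu \) below are illustrative:

```python
import numpy as np

N, n, K_mu = 5, 3, 0.7          # illustrative values, with K_mu > 2 - N
a = N + K_mu - 2

def first_constant(beta):
    # the first constant on the left-hand side of (4)
    return beta * a - n * beta**2

beta_star = a / (2 * n)         # claimed maximizer
c_star = a**2 / (4 * n)         # claimed maximum c_{N,n,mu}

grid = np.linspace(0.0, a / n, 10001)
max_on_grid = first_constant(grid).max()
```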

The proof of Theorem 1 in [12] is based on the vector field method extended to the multipolar case. In [1] an estimate similar to (5) was obtained in a different way when \(\mu =1\).

We observe that inequality (5) improves on the basic example of a multipolar inequality with weight

$$\begin{aligned} \frac{(N+K_\mu -2)^2}{4n}\int _{\mathbb {R}^N}\sum _{i=1}^{n}\frac{\varphi ^2}{|x-a_i|^2}\,d\mu \le \int _{\mathbb {R}^N}|\nabla \varphi |^2 \, d\mu +C_\mu \int _{\mathbb {R}^N}\varphi ^2 \, d\mu , \end{aligned}$$

which is the natural generalization of the weighted Hardy inequality (see [11])

$$\begin{aligned} \frac{(N+K_\mu -2)^2}{4}\int _{\mathbb {R}^N}\frac{\varphi ^2}{|x|^2}\,d\mu \le \int _{\mathbb {R}^N}|\nabla \varphi |^2 \, d\mu +C_\mu \int _{\mathbb {R}^N}\varphi ^2 \, d\mu . \end{aligned}$$
(6)

Now we focus our attention on the second term on the left-hand side in (4). For simplicity we put

$$\begin{aligned} W(x):=\frac{1}{2}\sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^{n} \frac{|a_i-a_j|^2}{|x-a_i|^2|x-a_j|^2}. \end{aligned}$$
(7)

In \(B(a_i, r_0)\), taking into account that

$$\begin{aligned} W=\frac{1}{|x-a_i|^2}\sum _{j\ne i}\frac{|a_i-a_j|^2}{|x-a_j|^2}+ \sum _{\begin{array}{c} k,j\ne i\\ j>k \end{array}}^{n} \frac{|a_k-a_j|^2}{|x-a_k|^2|x-a_j|^2}, \end{aligned}$$

we have the following estimates for W from above and from below

$$\begin{aligned} \begin{aligned} W&\le \frac{1}{|x-a_i|^2}\sum _{j\ne i}\frac{|a_i-a_j|^2}{(|a_i-a_j|-|x-a_i|)^2} \\ {}&\quad + \sum _{\begin{array}{c} k,j\ne i\\ j>k \end{array}}^{n} \frac{|a_k-a_j|^2}{(|a_i-a_k|-|x-a_i|)^2(|a_i-a_j|-|x-a_i|)^2} \\ {}&\le \frac{n-1}{|x-a_i|^2}\frac{d^2}{(d-r_0)^2}+ \sum _{\begin{array}{c} k,j\ne i\\ j>k \end{array}}^{n} \frac{|a_k-a_j|^2}{(|a_i-a_k|-r_0)^2(|a_i-a_j|-r_0)^2} \\ {}&\le \frac{n-1}{|x-a_i|^2}\frac{d^2}{(d-r_0)^2}+c_1 \end{aligned} \end{aligned}$$

and

$$\begin{aligned} W\ge & {} \frac{1}{|x-a_i|^2}\sum _{j\ne i}\frac{|a_i-a_j|^2}{(|a_i-a_j|+|x-a_i|)^2}\nonumber \\{} & {} + \sum _{\begin{array}{c} k,j\ne i\\ j>k \end{array}}^{n} \frac{|a_k-a_j|^2}{(|a_i-a_k|+|x-a_i|)^2(|a_i-a_j|+|x-a_i|)^2}\nonumber \\\ge & {} \frac{n-1}{|x-a_i|^2}\frac{d^2}{(d+r_0)^2}+ \sum _{\begin{array}{c} k,j\ne i\\ j>k \end{array}}^{n} \frac{|a_k-a_j|^2}{(|a_i-a_k|+r_0)^2(|a_i-a_j|+r_0)^2}\nonumber \\\ge & {} \frac{n-1}{|x-a_i|^2}\frac{d^2}{(d+r_0)^2}+c_2. \end{aligned}$$
(8)
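These two-sided bounds rest only on the triangle inequality applied pole by pole (before the simplification in terms of d), and can be sanity-checked numerically; the pole configuration and radius below are illustrative:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
poles = np.array([[0., 0., 0.], [3., 0., 0.], [0., 4., 0.]])  # illustrative
n, i, r0 = len(poles), 0, 0.5   # work in B(a_i, r0), r0 < min_j |a_i - a_j|

def W(x):
    # the potential (7), summed over unordered pairs
    return sum(np.sum((poles[p] - poles[q])**2) /
               (np.sum((x - poles[p])**2) * np.sum((x - poles[q])**2))
               for p, q in itertools.combinations(range(n), 2))

bounds_hold = True
for _ in range(200):
    v = rng.normal(size=3)
    x = poles[i] + v * rng.uniform(0.05, r0) / np.linalg.norm(v)
    r = np.linalg.norm(x - poles[i])
    lo = hi = 0.0
    for j in range(n):          # terms 1/|x-a_i|^2 * |a_i-a_j|^2/|x-a_j|^2
        if j == i:
            continue
        dij = np.linalg.norm(poles[i] - poles[j])
        lo += dij**2 / ((dij + r)**2 * r**2)
        hi += dij**2 / ((dij - r)**2 * r**2)
    others = [p for p in range(n) if p != i]
    for k, j in itertools.combinations(others, 2):   # pairs not involving a_i
        dkj = np.linalg.norm(poles[k] - poles[j])
        dik = np.linalg.norm(poles[i] - poles[k])
        dij = np.linalg.norm(poles[i] - poles[j])
        lo += dkj**2 / ((dik + r)**2 * (dij + r)**2)
        hi += dkj**2 / ((dik - r)**2 * (dij - r)**2)
    bounds_hold = bounds_hold and (lo <= W(x) <= hi)
```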

When x tends to \(a_i\) we get

$$\begin{aligned} W\sim \frac{n-1}{|x-a_i|^2} \end{aligned}$$
(9)

and, then, keeping in mind inequality (4), we have the asymptotic behaviour

$$\begin{aligned}{} & {} \sum _{i=1}^{n}\frac{\beta (N+K_\mu -2)-n\beta ^2}{|x-a_i|^2} + \frac{\beta ^2}{2}\sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^{n}\frac{|a_i-a_j|^2}{|x-a_i|^2|x-a_j|^2}\nonumber \\{} & {} \quad \sim \left[ \beta (N+K_\mu -2)-n\beta ^2+\beta ^2 (n-1)\right] \frac{1}{|x-a_i|^2}\nonumber \\{} & {} \quad = \left[ \beta (N+K_\mu -2)-\beta ^2\right] \frac{1}{|x-a_i|^2}. \end{aligned}$$
(10)

The maximum value of the constant on the right-hand side in (10) is the best constant in the weighted Hardy inequality with a single pole (see (6)).

4 Weighted multipolar Hardy inequality

The behaviour of the function W in (9) when x tends to the pole \(a_i\) leads us to study the relation between the constant on the left-hand side in weighted Hardy inequalities and the closeness to the single pole. The next result emphasizes this relation and improves a similar inequality stated in [12] in a different way.

Theorem 2

Assume that the conditions \(H_1)\) and \(H_2)\) hold. Then for any \( \varphi \in H^1_\mu \) we get

$$\begin{aligned} c\int _{{{\mathbb {R}}}^N}\sum _{i=1}^n \frac{\varphi ^2 }{|x-a_i|^2}\, \mu (x)dx\le \int _{{{\mathbb {R}}}^N} |\nabla \varphi |^2 \, \mu (x)dx +k \int _{\mathbb {R}^N}\varphi ^2 \, \mu (x)dx \end{aligned}$$
(11)

with \(c\in \left]0,c_o(N+K_\mu )\right[\), where \(c_o(N+K_\mu )=\left( \frac{N+K_\mu -2}{2}\right) ^2\) is the optimal constant, and \(k=k(n, d, \mu )\), \(d:=\min _{\begin{array}{c} 1\le i,j\le n\\ i\ne j \end{array}} |a_i-a_j|/2\).

Proof

By density, it is enough to prove (11) for any \(\varphi \in C_{c}^{\infty }(\mathbb {R}^N)\).

The optimality of the constant \(c_o(N+K_\mu )\) was stated in [12]. We will prove the inequality (11).

We start from the integral

$$\begin{aligned} \int _{\mathbb {R}^N}\varphi ^2 \textrm{div}F \,dx= \beta \int _{\mathbb {R}^N}\sum _{i=1}^n\left[ \frac{N-2}{|x-a_i|^2} \mu (x) + \frac{(x-a_i)}{|x-a_i|^2}\cdot \nabla \mu \right] \varphi ^2 dx \end{aligned}$$
(12)

and integrate by parts, obtaining, via Hölder’s and Young’s inequalities,

$$\begin{aligned}{} & {} \int _{\mathbb {R}^N}\varphi ^2 \textrm{div}F \,dx= -2\int _{\mathbb {R}^N}^{}\varphi F\cdot \nabla \varphi \, dx \nonumber \\{} & {} \quad \le 2\left( \int _{\mathbb {R}^N}|\nabla \varphi |^2 \, \mu (x)dx\right) ^{\frac{1}{2}} \left[ \int _{\mathbb {R}^N}\left| \sum _{i=1}^{n} \frac{ \beta \,(x-a_i)}{|x-a_i|^2} \right| ^2 \,\varphi ^2\, \mu (x)dx\right] ^{\frac{1}{2}} \nonumber \\{} & {} \quad \le \int _{\mathbb {R}^N}|\nabla \varphi |^2 \, \mu (x)dx+ \int _{\mathbb {R}^N}\left| \sum _{i=1}^{n} \frac{\beta \,(x-a_i)}{|x-a_i|^2} \right| ^2 \,\varphi ^2\, \mu (x)dx. \end{aligned}$$
(13)

So from (12), using the estimate (13), we get

$$\begin{aligned}{} & {} \int _{\mathbb {R}^N}\sum _{i=1}^n \frac{\beta (N-2)}{|x-a_i|^2}\varphi ^2\mu (x)dx \le \int _{\mathbb {R}^N}|\nabla \varphi |^2 \, \mu (x)dx\nonumber \\{} & {} \quad + \int _{\mathbb {R}^N} \sum _{i=1}^{n} \frac{\beta ^2}{|x-a_i|^2} \,\varphi ^2\, \mu (x)dx\nonumber \\{} & {} \quad +\int _{\mathbb {R}^N} \sum _{\begin{array}{c} i,j=1\\ j\ne i \end{array}}^n\frac{ \beta ^2\,(x-a_i) \cdot (x-a_j)}{|x-a_i|^2|x-a_j|^2} \,\varphi ^2\, \mu (x)dx\nonumber \\{} & {} \quad -\beta \int _{\mathbb {R}^N}\sum _{i=1}^n\frac{(x-a_i)}{|x-a_i|^2}\cdot \nabla \mu \,\varphi ^2 dx. \end{aligned}$$
(14)

Let \(\varepsilon >0\) be small enough and \(\delta >0\) be such that \(\varepsilon +\delta <\frac{d}{2}\). The next step is to estimate the integral of the mixed term that comes from the square of the sum in (14) by writing

$$\begin{aligned}{} & {} \int _{\mathbb {R}^N} \sum _{\begin{array}{c} i,j=1\\ j\ne i \end{array}}^n \frac{\beta ^2\,(x-a_i)\cdot (x-a_j)}{|x-a_i|^2|x-a_j|^2} \,\varphi ^2\, \mu (x)dx\nonumber \\{} & {} \quad = \int _{\bigcup _{k=1}^n B(a_k,\varepsilon )}\sum _{\begin{array}{c} i,j=1\\ j\ne i \end{array}}^n \frac{\beta ^2\,(x-a_i)\cdot (x-a_j)}{|x-a_i|^2|x-a_j|^2} \,\varphi ^2\, \mu (x)dx\nonumber \\{} & {} \qquad + \int _{\bigcup _{k=1}^n B(a_k,\varepsilon +\delta )\setminus \bigcup _{k=1}^n B(a_k,\varepsilon )}\sum _{\begin{array}{c} i,j=1\\ j\ne i \end{array}}^n \frac{\beta ^2\,(x-a_i)\cdot (x-a_j)}{|x-a_i|^2|x-a_j|^2} \,\varphi ^2\, \mu (x)dx\nonumber \\{} & {} \qquad + \int _{\mathbb {R}^N\setminus \bigcup _{k=1}^n B(a_k,\varepsilon +\delta )}\sum _{\begin{array}{c} i,j=1\\ j\ne i \end{array}}^n \frac{\beta ^2\,(x-a_i)\cdot (x-a_j)}{|x-a_i|^2|x-a_j|^2} \,\varphi ^2\, \mu (x)dx\nonumber \\{} & {} \quad := I_1+I_2+I_3, \end{aligned}$$
(15)

We then rewrite the mixed term in the following way.

$$\begin{aligned}{} & {} \sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^{n}\frac{(x-a_i)\cdot (x-a_j)}{|x-a_i|^2|x-a_j|^2} = \sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^{n}\frac{|x|^2-x\cdot a_i-x\cdot a_j+a_i\cdot a_j}{|x-a_i|^2|x-a_j|^2}\nonumber \\{} & {} \quad = \sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^{n}\frac{\frac{|x-a_i|^2}{2}+\frac{|x-a_j|^2}{2}-\frac{|a_i-a_j|^2}{2}}{|x-a_i|^2|x-a_j|^2}\nonumber \\{} & {} \quad = \sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^{n}\frac{1}{2}\left( \frac{1}{|x-a_i|^2}+\frac{1}{|x-a_j|^2} -\frac{|a_i-a_j|^2}{|x-a_i|^2|x-a_j|^2}\right) \nonumber \\{} & {} \quad = (n-1)\sum _{i=1}^{n}\frac{1}{|x-a_i|^2} -\frac{1}{2}\sum _{\begin{array}{c} i,j=1\\ i\ne j \end{array}}^{n}\frac{|a_i-a_j|^2}{|x-a_i|^2|x-a_j|^2}\nonumber \\{} & {} \quad = (n-1)\sum _{i=1}^{n}\frac{1}{|x-a_i|^2}-W. \end{aligned}$$
(16)
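Identity (16) is purely algebraic, so it can be verified numerically at random configurations (a sanity check, with arbitrary poles and points):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

def check_identity(poles, x):
    # verify sum_{i != j} (x-a_i).(x-a_j)/(|x-a_i|^2 |x-a_j|^2) = (n-1) sum_i 1/|x-a_i|^2 - W
    n = len(poles)
    pairs = list(itertools.permutations(range(n), 2))   # ordered pairs i != j
    sq = [np.sum((x - a)**2) for a in poles]
    mixed = sum((x - poles[i]) @ (x - poles[j]) / (sq[i] * sq[j]) for i, j in pairs)
    W = 0.5 * sum(np.sum((poles[i] - poles[j])**2) / (sq[i] * sq[j]) for i, j in pairs)
    single = (n - 1) * sum(1.0 / s for s in sq)
    scale = max(1.0, abs(single) + W)                   # relative tolerance
    return abs(mixed - (single - W)) <= 1e-9 * scale

identity_holds = all(check_identity(rng.normal(size=(4, 3)), rng.normal(size=3))
                     for _ in range(100))
```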

To estimate the integral \(I_1\) in (15) we use the estimate (8) in Sect. 3 for W in a ball centered in \(a_k\) and the identity (16). We obtain

$$\begin{aligned} \begin{aligned} I_1&\le \beta ^2\sum _{k=1}^{n} \int _{B(a_k,\varepsilon )}\Biggl [ \sum _{i=1}^{n}\frac{n-1}{|x-a_i|^2}- \frac{n-1}{|x-a_k|^2}\frac{d^2}{(d+\varepsilon )^2} \\ {}&\quad - \sum _{\begin{array}{c} i,j\ne k\\ j>i \end{array}}\frac{|a_i-a_j|^2}{(|a_k-a_i|+\varepsilon )^2(|a_k-a_j|+\varepsilon )^2}\Biggr ]\,\varphi ^2\, \mu (x)dx \\ {}&= \beta ^2\sum _{k=1}^{n} \int _{B(a_k,\varepsilon )}\Biggl \{ \frac{n-1}{|x-a_k|^2}\left[ 1-\frac{d^2}{(d+\varepsilon )^2}\right] + \sum _{\begin{array}{c} i=1\\ i\ne k \end{array}}^n \frac{n-1}{|x-a_i|^2} \\ {}&\quad - \sum _{\begin{array}{c} i,j\ne k\\ j>i \end{array}}\frac{|a_i-a_j|^2}{(|a_k-a_i|+\varepsilon )^2(|a_k-a_j|+\varepsilon )^2}\Biggr \}\,\varphi ^2\, \mu (x)dx. \end{aligned} \end{aligned}$$

To complete the estimate of \(I_1\) we observe that in \(B(a_k,\varepsilon )\), for \(i\ne k\), we have

$$\begin{aligned} |x-a_i|\ge |a_k-a_i|-|x-a_k|\ge |a_k-a_i|-\varepsilon \end{aligned}$$

so we get

$$\begin{aligned} \sum _{\begin{array}{c} i=1\\ i\ne k \end{array}}^n \frac{n-1}{|x-a_i|^2}\le \sum _{\begin{array}{c} i=1\\ i\ne k \end{array}}^n \frac{n-1}{(|a_k-a_i|-\varepsilon )^2}. \end{aligned}$$

Then

$$\begin{aligned} I_1\le \beta ^2\sum _{k=1}^{n} \int _{B(a_k,\varepsilon )}\left\{ \frac{n-1}{|x-a_k|^2}\left[ 1-\frac{d^2}{(d+\varepsilon )^2}\right] +c_3\right\} \,\varphi ^2\,\mu (x)dx, \end{aligned}$$

where

$$\begin{aligned} c_3=\sum _{\begin{array}{c} i=1\\ i\ne k \end{array}}^n \frac{n-1}{(|a_k-a_i|-\varepsilon )^2}- \sum _{\begin{array}{c} i,j\ne k\\ j>i \end{array}}\frac{|a_i-a_j|^2}{(|a_k-a_i|+\varepsilon )^2(|a_k-a_j|+\varepsilon )^2}. \end{aligned}$$

For the second integral \(I_2\) we observe that in \(B(a_k,\varepsilon +\delta )\setminus B(a_k,\varepsilon )\), for \(j\ne k\), we have \(|x-a_k|>\varepsilon \) and

$$\begin{aligned} |x-a_j|\ge |a_k-a_j|-|x-a_k|\ge |a_k-a_j|-(\varepsilon +\delta ) \end{aligned}$$

Therefore

$$\begin{aligned} \begin{aligned} I_2&\le \int _{\bigcup _{k=1}^n B(a_k,\varepsilon +\delta )\setminus \bigcup _{k=1}^n B(a_k,\varepsilon )}\sum _{\begin{array}{c} i,j=1\\ j\ne i \end{array}}^n \frac{\beta ^2}{|x-a_i||x-a_j|} \,\varphi ^2\, \mu (x)dx \\ {}&\le \frac{ n\beta ^2}{\varepsilon }\sum _{\begin{array}{c} j=1\\ j\ne k \end{array}}^n \frac{1}{|a_k-a_j|-(\varepsilon +\delta )}\int _{\bigcup _{k=1}^n B(a_k,\varepsilon +\delta )\setminus \bigcup _{k=1}^n B(a_k,\varepsilon )} \varphi ^2\, \mu (x)dx. \end{aligned} \end{aligned}$$

The remaining integral \(I_3\) can be estimated as follows.

$$\begin{aligned} \begin{aligned} I_3&\le \int _{\mathbb {R}^N\setminus \bigcup _{k=1}^n B(a_k,\varepsilon +\delta )}\sum _{\begin{array}{c} i,j=1\\ j\ne i \end{array}}^n \frac{\beta ^2}{|x-a_i||x-a_j|} \,\varphi ^2\, \mu (x)dx \\ {}&\le \frac{ n(n-1)\beta ^2}{(\varepsilon +\delta )^2}\int _{\mathbb {R}^N\setminus \bigcup _{k=1}^n B(a_k,\varepsilon +\delta )} \varphi ^2\, \mu (x)dx. \end{aligned} \end{aligned}$$

Starting from (15) and using the estimates obtained for \(I_1\), \(I_2\) and \(I_3\), we get for \(\varepsilon \) small enough,

$$\begin{aligned}{} & {} \int _{\mathbb {R}^N} \sum _{\begin{array}{c} i,j=1\\ j\ne i \end{array}}^n \frac{\beta ^2\,(x-a_i)\cdot (x-a_j)}{|x-a_i|^2|x-a_j|^2} \,\varphi ^2\, \mu (x)dx\nonumber \\{} & {} \quad \le \int _{\mathbb {R}^N} \sum _{i=1}^{n} \frac{\beta ^2(n-1)c_\varepsilon }{|x-a_i|^2}\,\varphi ^2\,\mu (x)dx +c_4 \int _{\mathbb {R}^N}\,\varphi ^2\,\mu (x)dx, \end{aligned}$$
(17)

where

$$\begin{aligned} c_\varepsilon =1-\frac{d^2}{(d+\varepsilon )^2} \quad \hbox {and}\quad c_4=\frac{ n\beta ^2}{\varepsilon }\sum _{\begin{array}{c} j=1\\ j\ne k \end{array}}^n \frac{1}{|a_k-a_j|-(\varepsilon +\delta )}. \end{aligned}$$

Going back to (14), by (17) and by the hypothesis \(H_2)\), we deduce that

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^N}\sum _{i=1}^{n} \frac{\beta (N+K_\mu -2) -\beta ^2\left[ 1+(n-1)c_\varepsilon \right] }{|x-a_i|^2}\varphi ^2 \, \mu (x)dx \\ {}&\quad \le \int _{\mathbb {R}^N}^{}|\nabla \varphi |^2 \mu (x)dx +\left( c_4 +C_\mu \right) \int _{\mathbb {R}^N} \varphi ^2\, \mu (x)dx. \end{aligned} \end{aligned}$$

The maximum of the function \(\beta \mapsto c=(N+K_\mu -2)\beta -\beta ^2 \left[ 1+(n-1)c_\varepsilon \right] \), for fixed \(\varepsilon \), is \(c_{max}(N+K_\mu )=\frac{(N+K_\mu -2)^2}{4[1+(n-1)c_\varepsilon ]}\), attained at \(\beta _{max}=\frac{N+K_\mu -2}{2[1+(n-1)c_\varepsilon ]}\). \(\square \)
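The final maximization, and the behaviour of the constant as \(\varepsilon \) tends to zero, can be checked numerically; the values of N, \(K_\mu \), n and d below are illustrative:

```python
import numpy as np

N, K_mu, n, d = 4, 0.5, 3, 1.0          # illustrative values
a = N + K_mu - 2

def c_eps(eps):
    # c_eps = 1 - d^2/(d + eps)^2, as in (17); vanishes as eps -> 0
    return 1.0 - d**2 / (d + eps)**2

def c_max(eps):
    # claimed maximum of beta -> a*beta - beta^2*(1 + (n-1)*c_eps)
    return a**2 / (4.0 * (1.0 + (n - 1) * c_eps(eps)))

eps = 0.1
b = 1.0 + (n - 1) * c_eps(eps)
grid = np.linspace(0.0, 2.0 * a / b, 20001)
max_on_grid = (a * grid - b * grid**2).max()
```

As `eps` shrinks, `c_eps(eps)` vanishes and `c_max(eps)` approaches the optimal single-pole constant \((N+K_\mu -2)^2/4\).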

We conclude with some remarks. As \(\varepsilon \) tends to zero, that is, as we get close enough to the single pole, the constant \(c=\frac{(N+K_\mu -2)^2}{4[1+(n-1)c_\varepsilon ]}\) tends to the optimal constant \(c_o(N+K_\mu )\). The constant \(k= c_4+C_\mu \), with \(c_4\) evaluated at \(\beta =\beta _{\textrm{max}}\), is better than the analogous constant in [12].

In the case of the Gaussian measure, the constant \(K_\mu \) tends to zero as the radius \(r_0\) of the ball centered at a single pole tends to zero (cf. the example in Sect. 2).

Finally, we observe that, as a consequence of Theorem 2, we deduce the estimate

$$\begin{aligned} \Vert V^{\frac{1}{2}}\varphi \Vert _{L_\mu ^2(\mathbb {R}^N)}\le c \Vert \varphi \Vert _{H^1_\mu (\mathbb {R}^N)}, \end{aligned}$$

with \(V=\sum _{i=1}^n \frac{1}{|x-a_i|^2}\) and c a constant independent of V and \(\varphi \).

For \(L^p\) estimates and embedding results of this type with some applications to elliptic equations see, for example, [7,8,9].