1 Introduction

In classical equilibrium statistical physics, interacting particle systems are often described via so-called Gibbs measures, either in finite or infinite volume. A possible arrangement of particles (also called a configuration) has a higher probability when it is energetically favorable. This energy is calculated via a Hamiltonian that depends on the chemical potential and the interaction potentials associated with the particles. However, when working with actual data, these potentials cannot be measured directly. What can be measured, or extrapolated from simulation data, are the so-called correlation functions of the underlying Gibbs measure. This gives rise to the inverse problem: given the correlation functions, determine the chemical potential and the interaction potentials.

The problem of determining the chemical potential that induces a given density was first discussed by Chayes and Chayes [1] and Chayes et al. [2], where the case of higher-order interactions is also briefly touched upon.

For the pair-interaction case the inverse problem was investigated in [3] in finite-volume, and in [4, 5] these results were extended and made rigorous in the thermodynamic limit, i.e. in infinite-volume. However, all these results do not give a constructive way of recovering the interaction. Estimation methods for the pair-interaction were developed by Takacs [6] and Fiksel [7] and have since been extended to more general Hamiltonians, see e.g. [8]. Another approach, based on variational methods, was developed in [9]; however, it cannot be used to estimate the chemical potential. A first result on an inverse expansion of the chemical potential is found in [10], where the expansion is given in terms of the one-point correlation function (the density) and the Mayer function; see also [11] for an extension of this result to the non-homogeneous case. This formula, however, requires prior knowledge of the pair-interaction. The goal of this paper is to find an expansion of the chemical potential that depends solely on functions derived from the correlation functions of the Gibbs measure.

The idea we use goes back to [12], where an expansion of this type is used in finite-volume. However, the ansatz used in that work constructs multi-body interactions, and thus the convergence of these inversion formulas cannot be shown using the classical tools of cluster expansions.

In this work, we will use a different approach to prove convergence of the explicit inversion formula for the chemical potential, namely an exponential representation formula. We then use Bell-polynomials to find bounds on the appearing coefficients; these polynomials have been used for a similar purpose independently in [13] to obtain bounds on the virial coefficients. In fact, we can show that this formula converges in both finite and infinite-volume. In particular, as we only make very mild assumptions on the Hamiltonian, which cover many classic models, together with some integrability conditions on the correlation functions, this expansion holds not only for pair-interactions, but also in some cases of multi-body interactions of arbitrary order.

The outline is as follows: In Sect. 2 we introduce the general setting of Gibbs measures we work with and present the main idea of the expansion. In Sect. 3 we formulate our main assumptions as well as the main result, and in Sect. 4 we give a few examples in which these assumptions hold. The last two sections are devoted to the proof of the main result: in Sect. 5 we start by proving a result about the zero-point density, which is then generalized in Sect. 6 to the one-point density to obtain the expansion for the chemical potential.

2 Setting

A point process \({\mathord {\mathbb P}}\) is a probability measure on the set of configurations

$$\begin{aligned} \Gamma = \{\eta \subset {\mathord {\mathbb R}}^d \mid N(\eta \cap \Delta )< \infty {\text { for all }} \Delta \subset {\mathord {\mathbb R}}^d {\text { bounded}}\} \end{aligned}$$

equipped with the \(\sigma \)-algebra \(\mathcal {F}:=\sigma \left( \{N(\cdot \cap \Delta )=m\}\mid m\in {\mathord {\mathbb N}}_0, \,\,\Delta \subset {\mathord {\mathbb R}}^d {\text { bounded}}\right) \). Here \(N(\eta )=\#\eta \) is the number of elements of \(\eta \subset {\mathord {\mathbb R}}^d\). We denote by \(\Gamma _0= \{\gamma \in \Gamma \mid N(\gamma )< \infty \}\) the set of finite configurations. Such a point process is called a Gibbs measure for a chemical potential \(\mu \in {\mathord {\mathbb R}}\) and a Hamiltonian \(H:\Gamma \rightarrow {\mathord {\mathbb R}}\cup \{\infty \}\), and we write that \({\mathord {\mathbb P}}\) is a \((\mu ,H)\)-Gibbs measure, if it is supported on a set of tempered configurations, see e.g. [14] for a discussion, and satisfies the Ruelle-equation, see [15] (therein called the system of equilibrium equations), namely if, for every non-negative function \(G:\Gamma \rightarrow [0,\infty ]\) and every bounded set \(\Lambda \subset {\mathord {\mathbb R}}^d\), there holds

$$\begin{aligned} \int _{\Gamma } G(\eta )\,\,\textrm{d}{\mathord {\mathbb P}}(\eta ) = \int _{\Gamma _{\Lambda ^c}}\sum _{n=0}^\infty \frac{e^{n\mu }}{n!}\int _{\Lambda ^n} G(\{{{\varvec{x}}}_n\}\cup \eta ){e^{-H(\{{{\varvec{x}}}_n\})-W(\{{{\varvec{x}}}_n\}\mid \eta )}}\,\textrm{d}{{\varvec{x}}}_n\,\,\textrm{d}{\mathord {\mathbb P}}(\eta ) \end{aligned}$$

where \(\Lambda ^c={\mathord {\mathbb R}}^d\backslash \Lambda \), \(\Gamma _{\Lambda ^c} = \{\eta \in \Gamma \mid \eta \subset \Lambda ^c\}\) and \(W(\{{{\varvec{x}}}_n\}\mid \eta )\) is the interaction between the particles inside \(\Lambda \) and those outside. If H has finite range, W can be defined as

$$\begin{aligned} W(\{{{\varvec{x}}}_n\}\mid \eta ):= H(\{{{\varvec{x}}}_n\}\cup \eta ) -H(\{{{\varvec{x}}}_n\})-H(\eta ), \qquad {{\varvec{x}}}_n\in \Lambda ^n, \,\eta \in \Gamma _{\Lambda ^c} \end{aligned}$$

with the convention that we set \(W(\{{{\varvec{x}}}_n\}\mid \eta )=+\infty \) if \(H(\{{{\varvec{x}}}_n\}\cup \eta )=+\infty \). For general Hamiltonians it is possible to extend this definition, cf. [14]. However, the specific definition of W is not important for this work. We also assume that the Hamiltonian H is translation-invariant, i.e. for every \(x\in {\mathord {\mathbb R}}^d\) we have that \(H(\eta )= H(\tau _x(\eta ))\) where \(\tau _x(\eta ):= \{y-x \mid y\in \eta \} \), that H is hereditary, i.e. that \(H(\eta \cup \{x\})= +\infty \) for all \(x\in {\mathord {\mathbb R}}^d\) whenever \(H(\eta ) = +\infty \), and that H does not contain any self-interaction of particles, i.e. \(H(\{x\})=0\) for all \(x\in {\mathord {\mathbb R}}^d\). The last assumption we make on H is that H is stable, i.e. there is a constant \(B>0\) such that

$$\begin{aligned} H(\{{{\varvec{x}}}_n\}) \ge -Bn. \end{aligned}$$

In this case, under some additional technical assumptions, e.g. that the interaction has finite range or that the Hamiltonian consists of a regular pair-interaction, there is at least one translation-invariant \((\mu ,H)\)-Gibbs measure \({\mathord {\mathbb P}}\) associated to these parameters, cf. [14]. In the sequel, we will assume that \({\mathord {\mathbb P}}\) is such a translation-invariant Gibbs measure. A family of functions \((j_\Lambda ^{(n)})_{n \ge 0, \,\Lambda \subset {\mathord {\mathbb R}}^d {\text { bounded}}}\) is called the family of finite-volume densities of \({\mathord {\mathbb P}}\), or Janossy densities, if for every bounded set \(\Lambda \subset {\mathord {\mathbb R}}^d\) and every non-negative function \(G:\Gamma \rightarrow [0,\infty ]\) there holds

$$\begin{aligned} \int _{\Gamma } G(\eta \cap \Lambda )\,\,\textrm{d}{\mathord {\mathbb P}}(\eta ) = \sum _{n=0}^\infty \frac{1}{n!} \int _{\Lambda ^n} G(\{{{\varvec{x}}}_n\})j_\Lambda ^{(n)}({{\varvec{x}}}_n)\,\textrm{d}{{\varvec{x}}}_n. \end{aligned}$$

The Ruelle-equation characterizes the Janossy densities: they exist for every \((\mu ,H)\)-Gibbs measure \({\mathord {\mathbb P}}\), and for fixed \(\Lambda \subset {\mathord {\mathbb R}}^d\) they are given by

$$\begin{aligned} j_\Lambda ^{(n)}({{\varvec{x}}}_n) = \int _{\Gamma _{\Lambda ^c}} \exp \Big (n\mu -H(\{{{\varvec{x}}}_n\})-W(\{{{\varvec{x}}}_n\}\mid \eta )\Big ) \,\,\textrm{d}{\mathord {\mathbb P}}(\eta ), \qquad n \in \mathbb {N}_0. \end{aligned}$$
(2.1)

Note that by definition the Janossy densities are symmetric functions, i.e. for every permutation \(\sigma \in \mathcal {S}_n\) there holds \( j_\Lambda ^{(n)}(x_1,\dots ,x_n)= j_\Lambda ^{(n)}(x_{\sigma (1)},\dots ,x_{\sigma (n)}) \). Another family of symmetric functions we are going to need are the correlation functions \((\rho ^{(n)})_{n\ge 0} \), where \(\rho ^{(0)}:=1\); when a point process has Janossy densities they are known to exist and for \({{\varvec{x}}}_n\in ({\mathord {\mathbb R}}^d)^n\) the n-point correlation function is given by

$$\begin{aligned} \rho ^{(n)}({{\varvec{x}}}_n) = \sum _{k=0}^\infty \frac{1}{k!}\int _{\Lambda ^k}j_\Lambda ^{(n+k)}({{\varvec{x}}}_n,{{\varvec{y}}}_k)\,\textrm{d}{{\varvec{y}}}_k, \end{aligned}$$

for any \(\Lambda \) such that \({{\varvec{x}}}_n \in \Lambda ^n\). Note that we write \(j_\Lambda ^{(n+k)}({{\varvec{x}}}_n,{{\varvec{y}}}_k) = j_\Lambda ^{(n+k)}(x_1,\dots ,x_n,y_1,\dots ,y_k)\) for simplicity and that the Janossy densities contain the averaged contributions of the whole space by Eq. (2.1) and thus the right-hand side above is independent of \(\Lambda \), if \({{\varvec{x}}}_n \in \Lambda ^n.\) In particular, if \({\mathord {\mathbb P}}\) is translation-invariant, then \(\rho ^{(1)}=\rho \) is constant. Under the additional assumption that \({\mathord {\mathbb P}}\) satisfies a so-called Ruelle-condition, i.e. there is a \(\xi >0 \) such that

$$\begin{aligned} 0 \le \rho ^{(n)}({{\varvec{x}}}_n) \le \xi ^n \quad {\text { for all }} {{\varvec{x}}}_n \in ({\mathord {\mathbb R}}^{d})^n, \quad n\ge 1 \end{aligned}$$

there also holds the well-known inverse formula

$$\begin{aligned} j_\Lambda ^{(n)}({{\varvec{x}}}_n) = \sum _{k=0}^\infty \frac{(-1)^k}{k!}\int _{\Lambda ^k}\rho ^{(n+k)}({{\varvec{x}}}_n,{{\varvec{y}}}_k)\,\textrm{d}{{\varvec{y}}}_k, \qquad {{\varvec{x}}}_n \in \Lambda ^n, \end{aligned}$$
(2.2)

see e.g. [15].
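As a concrete illustration of the pair Eq. (2.1)/Eq. (2.2), the following minimal Python sketch checks the inversion in the one case where everything is available in closed form, the Poisson process (ideal gas, \(H\equiv 0\), \(e^\mu =\rho \)); the values of \(\rho \) and \(|\Lambda |\) are arbitrary test data, not tied to any model above.

```python
import math

# Hedged sketch: for the Poisson process with intensity rho one has
# rho^(n) = rho**n and j_Lambda^(n) = rho**n * exp(-rho*|Lambda|),
# so Eq. (2.2) can be summed explicitly and compared with the closed form.
rho, vol = 0.7, 2.5            # intensity and |Lambda| (arbitrary test values)

def janossy_series(n, terms=80):
    """Partial sum of Eq. (2.2); the integral over Lambda^k of
    rho^(n+k) = rho**(n+k) equals rho**(n+k) * vol**k."""
    return sum((-1)**k / math.factorial(k) * (rho * vol)**k * rho**n
               for k in range(terms))

def closed_form(n):
    return rho**n * math.exp(-rho * vol)

assert abs(janossy_series(0) - closed_form(0)) < 1e-12
assert abs(janossy_series(1) - closed_form(1)) < 1e-12
# Here log j^(1) - log j^(0) = log(rho) = mu holds exactly, even in
# finite volume, since W vanishes identically for the ideal gas.
assert abs(math.log(janossy_series(1) / janossy_series(0))
           - math.log(rho)) < 1e-12
```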

Remark 2.1

In experiments, when only samples of a Gibbs measure are available, it is straightforward to estimate the correlation functions of the process without prior assumptions on the Hamiltonian. Thus, in contrast to the Janossy densities, the correlation functions are the starting object that is easier to obtain experimentally.

The idea of this work is to find an explicit expansion of \(\log j_\Lambda ^{(1)}\), using the identity Eq. (2.2), in terms that only depend on the correlation functions \((\rho ^{(n)})_{n\ge 0}\). From this, we can then obtain an explicit formula for the chemical potential \(\mu \). The reason is as follows:

By Eq. (2.1) there holds

$$\begin{aligned} \log j_\Lambda ^{(1)}(x)= \mu + \log \int _{\Gamma _{\Lambda ^c}} \exp \Big (-W(\{x\}\mid \eta )\Big ) \,\,\textrm{d}{\mathord {\mathbb P}}(\eta ). \end{aligned}$$

Now it is well-known that both sides of the above equation diverge to \(-\infty \) for \(\Lambda \nearrow {\mathord {\mathbb R}}^d\). In the sequel we will make the following technical assumption, which can be interpreted as an implicit decay condition on the Hamiltonian and which, in particular, is trivially satisfied for finite-range potentials,

$$\begin{aligned} \log \int _{\Gamma _{\Lambda ^c}} \exp \Big (-W(\{x\}\mid \eta )\Big ) \,\,\textrm{d}{\mathord {\mathbb P}}(\eta ) = \log \int _{\Gamma _{\Lambda ^c}} \,\,\textrm{d}{\mathord {\mathbb P}}(\eta )+ \varepsilon (\Lambda ), \end{aligned}$$

where \(\varepsilon (\Lambda )\rightarrow 0 \) as \(\Lambda \nearrow {\mathord {\mathbb R}}^d\). The first part of the right-hand side is equal to \(\log j_\Lambda ^{(0)}\) and thus \(\log j_\Lambda ^{(1)}(x)-\log j_\Lambda ^{(0)} \rightarrow \mu \) as \(\Lambda \nearrow {\mathord {\mathbb R}}^d\).

Our main tool is an exponential representation result that has been used by Ruelle in [16, Sect. 4.4] for the correlation functions, see also [17] for a more general version; we state it here for ease of reading.

Theorem A

(Exponential representation [17]) Let \((F_n)_{n\ge 0}\) be a family of symmetric functions with \(F_n:({\mathord {\mathbb R}}^d)^n\rightarrow {\mathord {\mathbb R}}\) and \(F_0\equiv 1\), and assume there exist \(0<c<1/2\) and \(D>0\) such that for every bounded \(\Lambda \subset {\mathord {\mathbb R}}^d\) there holds

$$\begin{aligned} \int _{\Lambda ^n}|F_n({{\varvec{x}}}_n)|\,\textrm{d}{{\varvec{x}}}_n \le |\Lambda | n! D c^n \end{aligned}$$
(2.3)

for every \(n\in {\mathord {\mathbb N}}\), where \(|\Lambda |\) denotes the Lebesgue measure of the set \(\Lambda \). Then the function \(\Phi :\Gamma _0\rightarrow {\mathord {\mathbb R}}\) defined by

$$\begin{aligned} \Phi (\eta ) = \sum _{k=1}^{N(\eta )} \sum _{\pi \in \Pi _k(\eta )}\prod _{i=1}^k F_{\kappa _i}({\pi _i}), \end{aligned}$$
(2.4)

where \(\Pi _k(\eta ) \) denotes the set of partitions of \(\eta \) into k non-empty sets \(\pi _i\), \(i=1,\dots ,k\), where \(\pi _i\) consists of \(\kappa _i\) elements, satisfies

$$\begin{aligned} \sum _{n=0}^\infty \frac{1}{n!} \int _{\Lambda ^n} \Phi ({{\varvec{x}}}_n)\,\textrm{d}{{\varvec{x}}}_n = \exp \left( \sum _{n=1}^\infty \frac{1}{n!} \int _{\Lambda ^n}F_n({{\varvec{x}}}_n) \,\textrm{d}{{\varvec{x}}}_n\right) . \end{aligned}$$
(2.5)
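If the integrals \(a_n=\int _{\Lambda ^n}F_n({{\varvec{x}}}_n)\,\textrm{d}{{\varvec{x}}}_n\) are treated as plain numbers, the integral of \(\Phi \) over \(\Lambda ^n\) factorizes over the blocks of each partition, and Eq. (2.5) reduces to the classical identity \(\sum _n Y_n(a_1,\dots ,a_n)/n! = \exp \big (\sum _n a_n/n!\big )\) for the complete Bell polynomials \(Y_n\); this is also the link to the Bell-polynomial bounds mentioned in the introduction. A self-contained Python sketch with arbitrary test coefficients:

```python
import math

def set_partitions(s):
    """All partitions of the non-empty list s into non-empty blocks."""
    if len(s) == 1:
        yield [s]
        return
    first, rest = s[0], s[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def complete_bell(n, a):
    """Y_n(a_1,...,a_n): sum over set partitions of an n-element set of
    the product of a_{|block|}, i.e. the scalar analogue of Eq. (2.4)."""
    if n == 0:
        return 1.0
    return sum(math.prod(a.get(len(b), 0.0) for b in part)
               for part in set_partitions(list(range(n))))

# Arbitrary test coefficients a_n (zero for n >= 4); both sides of the
# scalar form of Eq. (2.5) then agree up to a tiny truncation tail.
a = {1: 0.2, 2: -0.05, 3: 0.01}
lhs = sum(complete_bell(n, a) / math.factorial(n) for n in range(10))
rhs = math.exp(sum(v / math.factorial(k) for k, v in a.items()))
assert abs(lhs - rhs) < 1e-8
```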

The last tool we will need are the so-called truncated correlation functions.

Definition 2.2

For a point process with correlation functions \((\rho ^{(n)})_{n\ge 0}\), the truncated correlation functions \((\rho ^{(n)}_T)_{n\ge 1}\) are defined recursively. For \(n=1\) we define \(\rho _T^{(1)}= \rho ^{(1)}=\rho \) and for \(n\ge 2\)

$$\begin{aligned} \rho ^{(n)}_T({{\varvec{x}}}_n)= \rho ^{(n)}({{\varvec{x}}}_n)-\sum _{k=2}^n \sum _{\pi \in \Pi _k(\{{{\varvec{x}}}_n\})}\prod _{i=1}^k \rho _T^{(\kappa _i)}({\pi _i}). \end{aligned}$$
(2.6)
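Eq. (2.6) is a moment-cumulant type recursion and can be evaluated mechanically by enumerating set partitions. The following Python sketch (restricted to spatially constant correlation functions, which suffices to exhibit the combinatorics) implements the recursion and confirms that for the Poisson process, where \(\rho ^{(n)}=\rho ^n\), all truncated correlation functions of order \(n\ge 2\) vanish:

```python
import math

def set_partitions(s):
    """All partitions of the non-empty list s into non-empty blocks."""
    if len(s) == 1:
        yield [s]
        return
    first, rest = s[0], s[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def truncated(corr, n_max):
    """Recursion (2.6) for spatially constant correlation functions:
    rho_T^(n) = rho^(n) minus the sum over partitions into k >= 2
    blocks of the product of lower-order truncated functions."""
    rho_T = {1: corr(1)}
    for n in range(2, n_max + 1):
        rest = sum(math.prod(rho_T[len(b)] for b in part)
                   for part in set_partitions(list(range(n)))
                   if len(part) >= 2)
        rho_T[n] = corr(n) - rest
    return rho_T

# Poisson case rho^(n) = rho**n: complete spatial independence, so all
# truncated correlation functions of order n >= 2 must vanish.
rho = 1.3
rho_T = truncated(lambda n: rho**n, 6)
assert abs(rho_T[1] - rho) < 1e-12
assert all(abs(rho_T[n]) < 1e-9 for n in range(2, 7))
```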

3 Main Result

We will now state our main result, which will be proved in Sect. 6. To formulate our result we introduce the following two assumptions:

Assumption A

Let W be the interaction associated to the Hamiltonian H. We assume that for any bounded set \(\Delta \subset {\mathord {\mathbb R}}^d\) and every \((\mu ,H)\)-Gibbs measure \({\mathord {\mathbb P}}\) there holds

$$\begin{aligned} \lim _{\Lambda \nearrow {\mathord {\mathbb R}}^d}\sup _{x \in \Delta }\left| \log \int _{\Gamma _{\Lambda ^c}} \exp \Big (-W(\{x\}\mid \eta )\Big ) \,\,\textrm{d}{\mathord {\mathbb P}}(\eta ) - \log \int _{\Gamma _{\Lambda ^c}} \,\,\textrm{d}{\mathord {\mathbb P}}(\eta )\right| = 0. \end{aligned}$$

Assumption B

There are constants \(D>0\) and \(q>0\) such that for the truncated correlation functions of \({\mathord {\mathbb P}}\) there holds

$$\begin{aligned} \sup _{x\in {\mathord {\mathbb R}}^d}\int _{\Lambda ^n} \frac{1}{\rho }\left| {\rho }_{T}^{(1+n)}(x,{{\varvec{y}}}_n) \right| \,\textrm{d}{{\varvec{y}}}_n \le n! D q^{n}, \end{aligned}$$
(3.1)

for every bounded set \(\Lambda \subset {\mathord {\mathbb R}}^d\) and every \(n\in {\mathord {\mathbb N}}\).

Theorem 3.1

Let \({\mathord {\mathbb P}}\) be a \((\mu ,H)\)-Gibbs measure that satisfies a Ruelle-condition and Assumptions A and B. If \(q < q_0\), where

$$\begin{aligned} q_0 = \frac{1}{2(2+\zeta D)} \qquad {\text { with }}\qquad \zeta = \frac{1}{2\log 2-1}, \end{aligned}$$
(3.2)

then there holds

$$\begin{aligned} \mu = \log \rho + \sum _{k=1}^\infty \frac{(-1)^k}{k!} \int _{({\mathord {\mathbb R}}^d)^k} \widetilde{\rho }^{\,(1+k)}_T(0,{{\varvec{y}}}_k) \,\textrm{d}{{\varvec{y}}}_k \end{aligned}$$
(3.3)

where the family \((\widetilde{\rho }^{\,(1+k)}_T)_{k\ge 1}\) is recursively defined by \(\widetilde{\rho }^{\,(2)}_T(x,y) = \rho ^{(2)}_T(x,y)/\rho \) and for \(k\ge 2\) by

$$\begin{aligned} \widetilde{\rho }^{(1+k)}_T(x,{{\varvec{y}}}_k) = \frac{\rho ^{(1+k)}_T(x,{{\varvec{y}}}_k)}{\rho } -\sum _{l=2}^k \sum _{\pi \in \Pi _l(\{{{\varvec{y}}}_k\})} \prod _{i=1}^l\widetilde{\rho }^{\,(1+\kappa _i)}_T (x,\pi _i). \end{aligned}$$
(3.4)

Remark 3.2

The functions \((\widetilde{\rho }^{\,(1+k)}_T)_{k\ge 1}\) play a similar role to the so-called Ursell functions in classical cluster expansions, and thus the expansion above can be seen as a type of multi-body cluster expansion.
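As a quick consistency check of Theorem 3.1 (a worked example, not taken from the literature cited above), consider the ideal gas, i.e. \(H\equiv 0\), for which \({\mathord {\mathbb P}}\) is the Poisson process with intensity \(\rho = e^\mu \). Then \(\rho ^{(n)}=\rho ^n\) for every n, so \(\rho _T^{(1+k)}\equiv 0\) for all \(k\ge 1\) and, by the recursion Eq. (3.4), also \(\widetilde{\rho }^{\,(1+k)}_T\equiv 0\). Assumptions A and B hold trivially (here \(W=0\), and any \(q<q_0\) works), every term of the series in Eq. (3.3) vanishes, and the expansion collapses to

$$\begin{aligned} \mu = \log \rho , \end{aligned}$$

the familiar relation between activity and density of the ideal gas.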

4 Examples

We will now take a look at three cases in which the assumptions of Theorem 3.1 are satisfied. First, we will look at a classic superstable pair-interaction, then at a Hamiltonian consisting of multi-body potentials of arbitrary order, which are non-negative and of finite-range; in both settings there always exists at least one translation-invariant \((\mu ,H)\)-Gibbs measure, cf. [14]. Finally, we will look at the Kirkwood-closure process from [18], which can also be shown to be a Gibbs point process for a Hamiltonian consisting of multi-body potentials of every order.

4.1 Superstable Pair-Interaction

Let us consider the case that H consists only of a pair-interaction, i.e. there is a measurable even function \(u:{\mathord {\mathbb R}}^d\rightarrow {\mathord {\mathbb R}}\cup \{+\infty \}\) such that \(H(\{{{\varvec{x}}}_n\}) = \sum _{i<j}u(x_i-x_j)\). We further assume that u is superstable, i.e. that there exist \(r_0>0\) and decreasing positive functions \(\varphi :(0,r_0)\rightarrow {\mathord {\mathbb R}}^+_0\) and \(\psi : [0,\infty )\rightarrow {\mathord {\mathbb R}}^+\) with

$$\begin{aligned} \int _{0}^{r_0} r^{d-1}\varphi (r)\,\textrm{d}r= +\infty \qquad {\text {and}} \qquad \int _{0}^\infty r^{d-1}\psi (r)\,\textrm{d}r<\infty \,, \end{aligned}$$

and

$$\begin{aligned} u(x)&\,\ge \, \varphi (|x|) \qquad {\text { for }}\,\ 0< |x| < r_0\,, \\ |u(x)|&\,\le \, \psi (|x|) \qquad {\text { for }}\,\ |x| \ge r_0\,. \end{aligned}$$

Under these assumptions it is well-known, cf. [16, Chap. 4], that \({\mathord {\mathbb P}}\) satisfies Assumption B if

$$\begin{aligned} e^\mu = \frac{\bar{q}}{e^{2B+1}C(u)} \end{aligned}$$
(4.1)

for some \(\bar{q}<1/2\), where \(C(u):=\int _{{\mathord {\mathbb R}}^d}|e^{-u(x)}-1|\,\textrm{d}x\) and \(B=B(u)>0\) is the stability constant of H, with

$$\begin{aligned} D=\frac{e^\mu e^{-2B}}{(1-\bar{q})}\frac{1-\bar{q}}{1-2\bar{q}}\qquad {\text { and }} \qquad q= \frac{\bar{q}}{1-\bar{q}}. \end{aligned}$$

Let \({\mathord {\mathbb P}}\) be such a \((\mu ,u)\)-Gibbs measure. From [15, Theorem 5.5], it follows immediately that \({\mathord {\mathbb P}}\) satisfies Assumption A. Therein, it is also shown that \({\mathord {\mathbb P}}\) satisfies a Ruelle-condition. We thus obtain the following result:

Corollary 4.1

Let \({\mathord {\mathbb P}}\) be a \((\mu ,u)\)-Gibbs measure with a superstable pair-interaction u. If

$$\begin{aligned} e^\mu < \min \left\{ \frac{1}{3}, \frac{1}{1+ \frac{2}{2+\zeta D}} \right\} \left( e^{2B+1}C(u)\right) ^{-1}, \end{aligned}$$

then the expansion Eq. (3.3) holds for the true chemical potential of \({\mathord {\mathbb P}}\).

4.2 Non-negative Multi-body Interaction of Finite-Range

We say a Hamiltonian has finite range if there exists \(R>0\) such that, whenever \(|x_0-y|>R\), there holds

$$\begin{aligned} W(\{x_0\} \mid \{{{\varvec{x}}}_n\}\cup \{y\}) =W(\{x_0\} \mid \{{{\varvec{x}}}_n\}) \end{aligned}$$
(4.2)

for every set of points \(\{{{\varvec{x}}}_n\} \subset {\mathord {\mathbb R}}^d\). In particular, this means that \(W(\{x_0\} \mid \eta )=0\) for any \(\eta \in \Gamma \) with \({\text {dist}} (\{x_0\}, \eta )>R\). As mentioned, in this case there always exists at least one \((\mu ,H)\)-Gibbs measure, see e.g. [14].

Corollary 4.2

Let \({\mathord {\mathbb P}}\) be a \((\mu ,H)\)-Gibbs measure where H satisfies Eq. (4.2) as above. Then \({\mathord {\mathbb P}}\) satisfies Assumption A.

Proof

Fix some \(\Delta \subset {\mathord {\mathbb R}}^d\) compact and let \(\Lambda \subset {\mathord {\mathbb R}}^d\) be such that \({\text {dist}} (\Delta , \partial \Lambda ) >R\), then Eqs. (4.2) and (2.1) for \(n=1\) imply

$$\begin{aligned} j_\Lambda ^{(1)}(x) = e^\mu \int _{\Gamma _{\Lambda ^c}} \,\,\textrm{d}{\mathord {\mathbb P}}(\eta ) . \end{aligned}$$

This proves the claim. \(\square \)

It was shown by Moraal in [19] that if H is additionally non-negative, then the multi-body version of the Kirkwood-Salsburg operator is bounded and there exists a solution to the multi-body version of the Kirkwood-Salsburg equations. Using the same techniques as in the pair-interaction case, see [16, Sect. 4.4.6], it then follows that \({\mathord {\mathbb P}}\) satisfies Assumption B. Furthermore, \({\mathord {\mathbb P}}\) satisfies a Ruelle-condition with \(\xi =e^\mu \).

Corollary 4.3

Let H be a non-negative Hamiltonian of finite range, \(\mu \in {\mathord {\mathbb R}}\) and \({\mathord {\mathbb P}}\) be a \((\mu ,H)\)-Gibbs measure. If the activity \(e^\mu \) is sufficiently small, then the expansion Eq. (3.3) holds.

Remark 4.4

In [20] Skrypnik extended the results of Moraal to the case that H consists of the sum of a very particular non-negative multi-body interaction of finite-range and a superstable pair-interaction. In this case, it can also be shown that Assumption A is fulfilled.

4.3 The Kirkwood-Closure Process

A point process \({\mathord {\mathbb P}}\) is called the Kirkwood-closure process for a given density \(\rho >0\) and a non-negative even function g on \({\mathord {\mathbb R}}^d\) if the correlation functions of \({\mathord {\mathbb P}}\) satisfy

$$\begin{aligned} \rho ^{(n)}({{\varvec{x}}}_n) = \rho ^n \prod _{1\le i<j \le n} g(x_i-x_j). \end{aligned}$$
(4.3)

It was shown in [18] by Kuna et al. that, if \(C(g):=\int _{{\mathord {\mathbb R}}^d}|g(x)-1|\,\textrm{d}x<\infty \) and there is a \(b\ge 1\) such that \(\prod _{i=1}^n g(x_i-x_0)\le b\) for any \(x_0,x_1,\dots ,x_n \in {\mathord {\mathbb R}}^d\) with \(\prod _{i<j}g(x_i-x_j)>0\), then the Kirkwood-closure process exists for \(\rho < (ebC(g))^{-1}\). This was an extension of the results in [21], where only the case \(g\le 1\) was discussed. The Kirkwood-closure, which in computational physics is often called the Kirkwood superposition, was first suggested in [22] and has been widely used to approximate higher-order correlation functions, so that only \(\rho \) and \(\rho ^{(2)}\) have to be calculated from simulation data. In this case, it is also known that \({\mathord {\mathbb P}}\) satisfies a Ruelle condition with \(\xi = \rho b^{1/2}\) and for every bounded \(\Lambda \subset {\mathord {\mathbb R}}^d\)

$$\begin{aligned} \sup _{x\in {\mathord {\mathbb R}}^d}\int _{\Lambda ^n} \left| {\rho }_{T}^{(1+n)}(x,{{\varvec{y}}}_n) \right| \,\textrm{d}{{\varvec{y}}}_n \le n! D\left( \rho e b C(g)\right) ^{n+1}, \end{aligned}$$

i.e. the Kirkwood-closure process satisfies Assumption B, cf. [18, Sect. 3.1]. In [23] Glötzl gave sufficient conditions that guarantee the existence of a chemical potential \(\mu \) and a Hamiltonian H for which the point process is Gibbs. These conditions are formulated in terms of the Campbell-measure \(\textrm{C}_{{\mathord {\mathbb P}}}\) of a point process \({\mathord {\mathbb P}}\), i.e. the measure on \(({\mathord {\mathbb R}}^d\times \Gamma ,\mathcal {B}({\mathord {\mathbb R}}^d)\otimes \mathcal {F})\) (here \(\mathcal {B}({\mathord {\mathbb R}}^d)\) are the Borel-sets of \({\mathord {\mathbb R}}^d\)) defined by

$$\begin{aligned} \textrm{C}_{{\mathord {\mathbb P}}}(B\times F) := \int _{\Gamma } \sum _{x \in \eta }\mathbbm {1}_{F}(\eta \backslash \{x\}) \mathbbm {1}_B(x)\,\,\textrm{d}{\mathord {\mathbb P}}(\eta ). \end{aligned}$$

Namely, if the Campbell-measure \(\textrm{C}_{{\mathord {\mathbb P}}}\) is absolutely continuous with respect to \(\,\textrm{d}x\times \,\,\textrm{d}{\mathord {\mathbb P}}\) and the density k satisfies some additional assumptions, such a Hamiltonian exists, see Satz 2 in [23]. Due to the simple structure Eq. (4.3) of the correlation functions, a straightforward computation using Eq. (2.2) shows that \({\mathord {\mathbb P}}\) satisfies these assumptions and that the Radon-Nikodym derivative of the Campbell-measure is given by

$$\begin{aligned} k(x,\eta )&= \rho \prod _{w\in \eta } g(x-w) \exp \left( \sum _{k=1}^\infty \frac{(-1)^k}{k!} \int _{({\mathord {\mathbb R}}^d)^k} \left( \prod _{m=1}^k\ g(x-y_m)-1\right) \right. \\&\quad \left. \prod _{w\in \eta }\prod _{m=1}^k g(w-y_m) \rho _T^{(k)}({{\varvec{y}}}_k)\,\textrm{d}{{\varvec{y}}}_k \right) , \end{aligned}$$

for \(x \in {\mathord {\mathbb R}}^d\) and \(\eta \in \Gamma \). Note that, due to the conditions on g, the function k is well-defined on \({\mathord {\mathbb R}}^d \times \Gamma \) under mild decay conditions on g. Since \(k(x,\eta )= \exp (\mu -W(\{x\}\mid \eta ))\), i.e. \(k(x,\emptyset )= e^\mu \), it thus follows that \(\mu \) is given by

$$\begin{aligned} \mu = \log \rho + \sum _{k=1}^\infty \frac{(-1)^k}{k!} \int _{({\mathord {\mathbb R}}^d)^k} \left( \prod _{m=1}^k g(x-y_m)-1\right) \rho _T^{(k)}({{\varvec{y}}}_k)\,\textrm{d}{{\varvec{y}}}_k. \end{aligned}$$

Furthermore, it is easily checked that for the Kirkwood-closure process there holds

$$\begin{aligned} \widetilde{\rho }_T^{\,(1+k)}(x,{{\varvec{y}}}_k) = \left( \prod _{m=1}^k g(x-y_m)-1\right) \rho _T^{(k)}({{\varvec{y}}}_k), \end{aligned}$$
(4.4)

thus Assumption A is not needed in this case to identify the limit in Eq. (6.17), although one could also show that it holds here.

Corollary 4.5

Let g satisfy the assumptions above and let \({\mathord {\mathbb P}}\) be the Kirkwood-closure process. If

$$\begin{aligned} \rho < \frac{1}{(2+\zeta D)ebC(g)} \end{aligned}$$

then the chemical potential \(\mu \) of the Kirkwood-closure process is given by Eq. (3.3).

Remark 4.6

In the case of the Kirkwood-closure it is quite easy to see the connection of the expansion Eq. (3.3) to classic cluster expansions. It follows from Eq. (4.4) and [18, Sect. 3.1] that

$$\begin{aligned} \widetilde{\rho }_T^{\,(1+k)}(\tilde{{{\varvec{y}}}}_{k+1}) = \rho ^{k} \sum _{C\in \mathcal {C}_{k+1}} \prod _{\{i,j\}\in E(C)} \left( g(\tilde{y}_i-\tilde{y}_j)-1\right) \end{aligned}$$

where \(\tilde{{{\varvec{y}}}}_{k+1}=(x,{{\varvec{y}}}_k)\). Thus, in this case we have

$$\begin{aligned} \mu = \log \rho + \sum _{k=1}^\infty \frac{(-\rho )^{k}}{k!} \int _{({\mathord {\mathbb R}}^d)^k} \sum _{C\in \mathcal {C}_{k+1}} \prod _{\{i,j\}\in E(C)} \left( g(\tilde{y}_i-\tilde{y}_j)-1\right) \,\textrm{d}{{\varvec{y}}}_k, \end{aligned}$$

where \(\mathcal {C}_{k+1}\) is the set of connected graphs on \(k+1\) vertices and E(C) is the set of edges of the graph C. This has the same structure as the classic cluster expansion for the density in the case of pair-interactions with \(-\rho \) taking the role of the activity (the exponential of the chemical potential) and \(g-1\) taking the role of the Mayer function.
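For small k the set \(\mathcal {C}_{k+1}\) can be enumerated by brute force over edge subsets, which makes the first terms of the expansion above explicit; a short Python sketch, using only the definition of \(\mathcal {C}_{k+1}\) as connected labelled graphs:

```python
from itertools import combinations

def connected_graphs(n):
    """Enumerate all connected labelled graphs on vertices 0..n-1 by
    brute force over edge subsets (feasible only for small n)."""
    edges = list(combinations(range(n), 2))
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            parent = list(range(n))      # union-find connectivity check
            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v
            for i, j in subset:
                parent[find(i)] = find(j)
            if len({find(v) for v in range(n)}) == 1:
                yield subset

# Number of connected labelled graphs on 2..5 vertices.
counts = [sum(1 for _ in connected_graphs(n)) for n in range(2, 6)]
assert counts == [1, 4, 38, 728]
```

The counts 1, 4, 38, 728 for two to five vertices illustrate how quickly the number of graphs, and hence the combinatorial weight of each order of the expansion, grows.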

Remark 4.7

Using the simpler structure of the family \((\widetilde{\rho }_T^{\,(1+k)})_{k\ge 1}\) in this case, i.e. Eq. (4.4), one can easily improve the bound Eq. (6.7) to

$$\begin{aligned} \sup _{x\in {\mathord {\mathbb R}}^d}\int _{\Lambda ^k} \left| \widetilde{\rho }_T^{\,(1+k)}(x,{{\varvec{y}}}_k) \right| \,\textrm{d}{{\varvec{y}}}_k \le k! M \left( \rho ebC(g)\right) ^k \end{aligned}$$

and obtain a better radius of convergence.

5 A First Step: The Case of the Zero-Point Density

Let \(\Lambda \subset {\mathord {\mathbb R}}^d\) be a bounded set, then in the case of \(n=0\) we get from Eq. (2.1)

$$\begin{aligned} j^{(0)}_\Lambda = \int _{\Gamma _{\Lambda ^c}}\,\,\textrm{d}{\mathord {\mathbb P}}(\eta ). \end{aligned}$$

Furthermore, we know by Eq. (2.2) that

$$\begin{aligned} j_{ \Lambda }^{(0)}= 1+ \sum _{k=1}^\infty \frac{(-1)^k}{k!} \int _{\Lambda ^k}\rho ^{(k)}({{\varvec{y}}}_k)\,\textrm{d}{{\varvec{y}}}_k. \end{aligned}$$

Our goal now is to write the above equation as an exponential; for that we use the truncated correlation functions defined in Eq. (2.6). Note that, after rearranging Eq. (2.6) so that \(\rho ^{(n)}\) is expressed as a sum over partitions, we can multiply both sides by \((-1)^n\) and split this factor into \(\prod _{i=1}^k(-1)^{\kappa _i}\) on the right-hand side, so that we obtain the representation Eq. (2.4) for \(\Phi = (-1)^n\rho ^{(n)}\) with \(F_n= (-1)^n \rho _T^{(n)}\).

Assume that \({\mathord {\mathbb P}}\) satisfies Assumption B for some \(q<1/2\); then in particular

$$\begin{aligned} \int _\Lambda \int _{\Lambda ^n} \left| {\rho }_{T}^{(1+n)}(x,{{\varvec{y}}}_n) \right| \,\textrm{d}{{\varvec{y}}}_n\,\textrm{d}x\le |\Lambda |n! D\rho q^n. \end{aligned}$$
(5.1)

This means the assumptions of Theorem A are satisfied, and we can write

$$\begin{aligned} j_{ \Lambda }^{(0)}= \exp \left( \sum _{k=1}^\infty \frac{(-1)^k}{k!} \int _{\Lambda ^k}\rho ^{(k)}_T({{\varvec{y}}}_k)\,\textrm{d}{{\varvec{y}}}_k\right) \end{aligned}$$

and thus

$$\begin{aligned} \log j_{ \Lambda }^{(0)}= \sum _{k=1}^\infty \frac{(-1)^k}{k!} \int _{\Lambda ^{k}}\rho ^{(k)}_T({{\varvec{y}}}_{k})\,\textrm{d}{{\varvec{y}}}_{k}. \end{aligned}$$
(5.2)

Remark 5.1

Let us consider the same setting as in Sect. 4.1. If in this case we have \(\bar{q}<1/3\) in Eq. (4.1), then \(q= \bar{q}/(1-\bar{q})<1/2\), and thus taking the thermodynamic limit of Eq. (5.2), i.e. dividing by the volume \(|\Lambda |\) and increasing \(\Lambda \) to the whole space \({\mathord {\mathbb R}}^d\), gives

$$\begin{aligned} \lim _{\Lambda \nearrow {\mathord {\mathbb R}}^d}\frac{1}{|\Lambda |}\log j_{ \Lambda }^{(0)} = -\rho +\sum _{k=1}^\infty \frac{(-1)^{k+1}}{(k+1)!} \int _{({\mathord {\mathbb R}}^d)^{k}}\rho ^{(k+1)}_T(0,{{\varvec{y}}}_{k})\,\textrm{d}{{\varvec{y}}}_{k}=-p(\mu ,u), \end{aligned}$$

where p is the infinite-volume grand-canonical pressure, cf. [24]. The above expression follows immediately from the cluster expansions of the pressure and of the truncated correlation functions therein, together with [25].

6 Generalization to the One-Point Density

We now want to generalize this approach to the one-point density \(j^{(1)}_\Lambda \) of an infinite-volume Gibbs measure. Again we will use Eq. (2.2) to find an expansion and Eq. (2.1) to identify the terms. By Eq. (2.2) we have

$$\begin{aligned} j^{(1)}_\Lambda (x)= \sum _{k=0}^\infty \frac{(-1)^k}{k!} \int _{\Lambda ^k}\rho ^{(1+k)}(x,{{\varvec{y}}}_k)\,\textrm{d}{{\varvec{y}}}_k = \rho \left( 1+\sum _{k=1}^\infty \frac{(-1)^k}{k!} \int _{\Lambda ^k}\frac{\rho ^{(k+1)}(x,{{\varvec{y}}}_k)}{\rho }\,\textrm{d}{{\varvec{y}}}_k\right) . \end{aligned}$$
(6.1)

We introduce the family \((F_k)_{k\ge 1}\) that is recursively defined by \(F_1(x,y) = \rho ^{(2)}(x,y)/\rho \) and for \(k\ge 2\) by

$$\begin{aligned} F_k(x,{{\varvec{y}}}_k) = \frac{\rho ^{(1+k)}(x,{{\varvec{y}}}_{k })}{\rho } -\sum _{l=2}^{k }\sum _{\pi \in \Pi _l(\{{{\varvec{y}}}_k\})}\prod _{i=1}^l F_{ \kappa _i}(x,\pi _i). \end{aligned}$$
(6.2)

The ansatz Eq. (6.1) with Eq. (6.2) was first used by Nettleton and Green in [12]; the division by \(\rho \) lets us use Theorem A to obtain the exponential representation in Eq. (6.1), as the series then starts at one. As previously noted, we have \(j^{(1)}_\Lambda \rightarrow 0 \) as \(\Lambda \nearrow {\mathord {\mathbb R}}^d\), thus we need to identify the divergent part of \(\log j^{(1)}_\Lambda \).

Proposition 6.1

For every \(k \in \mathbb {N}\) there holds

$$\begin{aligned} F_k(x,{{\varvec{y}}}_k)= \rho _T^{(k)}({{\varvec{y}}}_k)+ \widetilde{\rho }^{\,(1+k)}_T(x,{{\varvec{y}}}_k), \end{aligned}$$
(6.3)

where the family \((\widetilde{\rho }^{\,(1+k)}_T)_{k\ge 1}\) is the one defined in Eq. (3.4).
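Before turning to the proof, the identity Eq. (6.3) can be verified numerically for small k. The following self-contained Python sketch implements the recursions Eqs. (2.6), (3.4) and (6.2) for a toy family of Kirkwood type on four abstract positions; the numerical values of \(\rho \) and g below are arbitrary test data, not taken from any model in this paper:

```python
import math
from itertools import permutations

def set_partitions(s):
    """All partitions of the non-empty list s into non-empty blocks."""
    if len(s) == 1:
        yield [s]
        return
    first, rest = s[0], s[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

# Toy correlation functions of Kirkwood type on positions 0..3:
# rho^(n)(z_1..z_n) = rho**n * prod_{i<j} g(z_i, z_j), arbitrary test data.
rho = 0.8
g = {frozenset(p): v for p, v in zip(
    [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)],
    [1.4, 0.6, 1.1, 0.9, 1.3, 0.7])}

def corr(zs):                       # rho^(n)
    return rho**len(zs) * math.prod(
        g[frozenset((a, b))] for i, a in enumerate(zs) for b in zs[i + 1:])

def corr_T(zs):                     # truncated, recursion (2.6)
    return corr(zs) - sum(
        math.prod(corr_T(tuple(b)) for b in part)
        for part in set_partitions(list(zs)) if len(part) >= 2)

def tilde(x, ys):                   # recursion (3.4)
    val = corr_T((x,) + ys) / rho
    if len(ys) >= 2:
        val -= sum(math.prod(tilde(x, tuple(b)) for b in part)
                   for part in set_partitions(list(ys)) if len(part) >= 2)
    return val

def F(x, ys):                       # recursion (6.2)
    val = corr((x,) + ys) / rho
    if len(ys) >= 2:
        val -= sum(math.prod(F(x, tuple(b)) for b in part)
                   for part in set_partitions(list(ys)) if len(part) >= 2)
    return val

# Eq. (6.3): F_k(x, y_k) = rho_T^(k)(y_k) + tilde_rho_T^(1+k)(x, y_k).
x, ys = 0, (1, 2, 3)
for k in (1, 2, 3):
    for sub in permutations(ys, k):
        assert abs(F(x, sub) - corr_T(sub) - tilde(x, sub)) < 1e-10
```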

Proof

We proceed by induction over k. For \(k=1\) the claim holds, since \(F_1(x,y)=\rho ^{(2)}(x,y)/\rho =(\rho ^2+\rho _T^{(2)}(x,y))/\rho = \rho _T^{(1)}(y)+\widetilde{\rho }^{\,(2)}_T(x,y)\). By definition Eq. (6.2) of the \(F_k\) there holds

$$\begin{aligned} F_{k+1}(x,{{\varvec{y}}}_{k+1}) = \frac{\rho ^{(k+2)}(x,{{\varvec{y}}}_{k+1})}{\rho } - \sum _{l=2}^{k+1}\sum _{\pi \in \Pi _l(\{{{\varvec{y}}}_{k+1}\})}\prod _{i=1}^l F_{ \kappa _i}(x,\pi _i). \end{aligned}$$
(6.4)

For the second part of the right-hand side above we can use the induction hypothesis to find

$$\begin{aligned} \sum _{l=2}^{k+1}\sum _{\pi \in \Pi _l(\{{{\varvec{y}}}_{k+1}\})}\prod _{i=1}^l F_{ \kappa _i}(x,\pi _i) =\sum _{l=2}^{k+1}\sum _{\pi \in \Pi _l(\{{{\varvec{y}}}_{k+1}\})}\prod _{i=1}^l \left( \rho _T^{(\kappa _i)}(\pi _i)+ \widetilde{\rho }^{\,(1+\kappa _i)}_T(x,\pi _i)\right) \end{aligned}$$

and thus we get

$$\begin{aligned} F_{k+1}(x,{{\varvec{y}}}_{k+1}) = \frac{\rho ^{(k+2)}(x,{{\varvec{y}}}_{k+1})}{\rho }- \sum _{l=2}^{k+1}\sum _{\pi \in \Pi _l(\{{{\varvec{y}}}_{k+1}\})}\prod _{i=1}^l \left( \rho _T^{(\kappa _i)}(\pi _i)+ \widetilde{\rho }^{\,(1+\kappa _i)}_T(x,\pi _i)\right) . \end{aligned}$$
(6.5)

For the first term of the right-hand side of Eq. (6.4), there holds by Eq. (2.6)

$$\begin{aligned} \frac{\rho ^{(k+2)}(x,{{\varvec{y}}}_{k+1})}{\rho } = \frac{1}{\rho } \sum _{l=1}^{k+2}\sum _{\pi \in \Pi _l(\{x,{{\varvec{y}}}_{k+1}\})}\prod _{i=1}^l \rho _T^{(\kappa _i)}(\pi _i). \end{aligned}$$

For any partition \(\pi \in \Pi _l(\{x,{{\varvec{y}}}_{k+1}\})\) we denote by \(\nu \) the index such that \(\pi _\nu =(x,\pi _\nu ')\), i.e. \(\pi _\nu \) is the part of the partition \(\pi \) containing x. We can thus write

$$\begin{aligned} \frac{\rho ^{(k+2)}(x,{{\varvec{y}}}_{k+1})}{\rho } = \sum _{l=1}^{k+2}\sum _{\pi \in \Pi _l(\{x,{{\varvec{y}}}_{k+1}\})} \frac{\rho ^{(\kappa _\nu )}_T( x,{\pi _\nu '})}{\rho }\mathop {\prod }\limits _{\begin{array}{c} i=1 \\ i \ne \nu \end{array}}^l \rho ^{(\kappa _i)}_T( {\pi _i}). \end{aligned}$$
(6.6)

We distinguish three cases:

  1.

    If \(\kappa _{\nu }=k+2\), then \(\pi \) is the partition into one element and contributes the term \(\frac{\rho ^{(k+2)}_T(x,{{\varvec{y}}}_{k+1})}{\rho }\) which only appears in the case \(l=1\) above;

  2.

    If \(\kappa _{\nu }=1\), then \({\rho ^{(\kappa _\nu )}_T}/{\rho }=1\) and we can understand \(\widehat{\pi }:=\pi \backslash \pi _\nu \) as a partition of the elements \({{\varvec{y}}}_{k+1}\) into \(l-1\) parts, i.e. as an element \(\widehat{\pi }\in \Pi _{l-1}(\{{{\varvec{y}}}_{k+1}\})\). Summing over all of the partitions where \(\kappa _{\nu }=1\) then gives

    $$\begin{aligned} \sum _{l=2}^{k+2} \mathop {\sum }\limits _{\begin{array}{c} \pi \in \Pi _l(\{x,{{\varvec{y}}}_{k+1}\})\\ \kappa _\nu =1 \end{array}} \mathop {\prod }\limits _{\begin{array}{c} i=1 \\ i\ne \nu \end{array}}^l \rho _T^{(\kappa _i)}(\pi _i) =\sum _{l=2}^{k+2} \sum _{\widehat{\pi }\in \Pi _{l-1}( {{\varvec{y}}}_{k+1} )} \prod _{i=1}^{l-1} \rho _T^{(\widehat{\kappa }_i)}(\widehat{\pi }_i) \end{aligned}$$

    where \(\widehat{\kappa }_i\) is the number of elements in \(\widehat{\pi }_i\). Shifting the index we get

    $$\begin{aligned} \sum _{l=2}^{k+2} \sum _{\widehat{\pi }\in \Pi _{l-1}( \{{{\varvec{y}}}_{k+1} \})} \prod _{i=1}^{l-1} \rho _T^{(\widehat{\kappa }_i)}(\widehat{\pi }_i)= \sum _{l=1}^{k+1} \sum _{\widehat{\pi }\in \Pi _{l}( \{{{\varvec{y}}}_{k+1}\} )} \prod _{i=1}^{l} \rho _T^{(\widehat{\kappa }_i)}(\widehat{\pi }_i)= \rho ^{(k+1)}({{\varvec{y}}}_{k+1}) \end{aligned}$$

    where we have used Eq. (2.6) for the last equality.

  3.

    If \( \kappa _{\nu }=m+1\) for some \(1\le m\le k\), we can use the induction assumption to write

    $$\begin{aligned} \frac{\rho ^{(1+m)}_T(x,\pi _\nu ')}{\rho } = \sum _{j=1}^{m} \sum _{\widehat{\pi }\in \Pi _j(\{\pi _\nu '\})} \prod _{s=1}^j \widetilde{\rho }^{\,(1+\widehat{\kappa }_s)}_T (x,{\widehat{\pi }_s}). \end{aligned}$$

Plugging the three cases above into Eq. (6.6) we find

$$\begin{aligned}&\frac{\rho ^{(k+2)}(x,{{\varvec{y}}}_{k+1})}{\rho } = \sum _{l=1}^{k+2}\sum _{\pi \in \Pi _l(\{x,{{\varvec{y}}}_{k+1}\})} \frac{\rho ^{(\kappa _\nu )}_T( x,{\pi _\nu '})}{\rho }\mathop {\prod }\limits _{\begin{array}{c} i=1 \\ i \ne \nu \end{array}}^l \rho ^{(\kappa _i)}_T( {\pi _i})\\&=\frac{\rho ^{(k+2)}_T(x,{{\varvec{y}}}_{k+1})}{\rho }+\rho ^{(k+1)}({{\varvec{y}}}_{k+1})\\&\quad +\sum _{l=2}^{k+2}\mathop {\sum }\limits _{\begin{array}{c} \pi \in \Pi _l(\{x,{{\varvec{y}}}_{k+1}\})\\ 2\le \kappa _\nu \le k+1 \end{array}} \sum _{j=1}^{\kappa _\nu -1} \sum _{\widehat{\pi }\in \Pi _j(\{\pi _\nu '\})} \prod _{s=1}^j \widetilde{\rho }^{\,(1+\widehat{\kappa }_s)}_T (x,{\widehat{\pi }_s})\mathop {\prod }\limits _{\begin{array}{c} i=1 \\ i \ne \nu \end{array}}^l \rho ^{(\kappa _i)}_T( {\pi _i}). \end{aligned}$$

Finally, we plug the above into Eq. (6.5) to get

$$\begin{aligned} F_{k+1}(x,{{\varvec{y}}}_{k+1})&= \frac{\rho ^{(k+2)}_T(x,{{\varvec{y}}}_{k+1})}{\rho }+\rho ^{(k+1)}({{\varvec{y}}}_{k+1})\\&+\sum _{l=2}^{k+2}\mathop {\sum }\limits _{\begin{array}{c} \pi \in \Pi _l(\{x,{{\varvec{y}}}_{k+1}\})\\ 2\le \kappa _\nu \le k+1 \end{array}} \sum _{j=1}^{\kappa _\nu -1} \sum _{\widehat{\pi }\in \Pi _j(\{\pi _\nu '\})} \prod _{s=1}^j \widetilde{\rho }^{\,(1+\widehat{\kappa }_s)}_T (x,{\widehat{\pi }_s})\mathop {\prod }\limits _{\begin{array}{c} i=1 \\ i \ne \nu \end{array}}^l \rho ^{(\kappa _i)}_T( {\pi _i})\\&- \sum _{l=2}^{k+1}\sum _{\pi \in \Pi _l(\{{{\varvec{y}}}_{k+1}\})}\prod _{i=1}^l \left( \rho _T^{(\kappa _i)}(\pi _i)+ \widetilde{\rho }^{\,(1+\kappa _i)}_T(x,\pi _i)\right) . \end{aligned}$$

Looking closer at the difference of the last two lines above, we see that all terms containing at least one factor \(\rho _T^{(\kappa _i)}\) and at least one factor \(\widetilde{\rho }^{\,(1+\kappa _j)}_T\) cancel, while the purely \(\rho _T\)-terms of the subtracted sum add up, by Eq. (2.6), to \(\rho ^{(k+1)}({{\varvec{y}}}_{k+1})-\rho ^{(k+1)}_T({{\varvec{y}}}_{k+1})\). We thus get

$$\begin{aligned}&\sum _{l=2}^{k+2}\mathop {\sum }\limits _{\begin{array}{c} \pi \in \Pi _l(\{x,{{\varvec{y}}}_{k+1}\})\\ 2\le \kappa _\nu \le k+1 \end{array}} \sum _{j=1}^{\kappa _\nu -1} \sum _{\widehat{\pi }\in \Pi _j(\{\pi _\nu '\})} \prod _{s=1}^j \widetilde{\rho }^{\,(1+\widehat{\kappa }_s)}_T (x,{\widehat{\pi }_s})\mathop {\prod }\limits _{\begin{array}{c} i=1 \\ i \ne \nu \end{array}}^l \rho ^{(\kappa _i)}_T( {\pi _i})\\&\quad - \sum _{l=2}^{k+1}\sum _{\pi \in \Pi _l(\{{{\varvec{y}}}_{k+1}\})}\prod _{i=1}^l \left( \rho _T^{(\kappa _i)}(\pi _i)+ \widetilde{\rho }^{\,(1+\kappa _i)}_T(x,\pi _i)\right) \\&=-\left( \rho ^{(k+1)}({{\varvec{y}}}_{k+1})-\rho ^{(k+1)}_T({{\varvec{y}}}_{k+1})\right) -\sum _{l=2}^{k+1}\sum _{\pi \in \Pi _l(\{{{\varvec{y}}}_{k+1}\})}\prod _{i=1}^l \widetilde{\rho }^{\,(1+\kappa _i)}_T(x,\pi _i). \end{aligned}$$

We thus have that

$$\begin{aligned} F_{k+1}(x,{{\varvec{y}}}_{k+1})&= \rho ^{(k+1)}_T({{\varvec{y}}}_{k+1}) +\frac{\rho ^{(k+2)}_T(x,{{\varvec{y}}}_{k+1})}{\rho } -\sum _{l=2}^{k+1}\sum _{\pi \in \Pi _l(\{{{\varvec{y}}}_{k+1}\})}\prod _{i=1}^l \widetilde{\rho }^{\,(1+\kappa _i)}_T(x,\pi _i). \end{aligned}$$

Finally, using Eq. (3.4) proves Eq. (6.3). \(\square \)
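As an independent sanity check on Proposition 6.1 (not part of the proof), the identity Eq. (6.3) can be verified numerically on toy data: assign each truncated correlation a scalar value depending only on its order, build \(\rho ^{(n)}\) from Eq. (2.6) as a sum over set partitions, run the recursions Eq. (6.2) and Eq. (3.4), and compare. A minimal sketch in Python with exact rational arithmetic (the helper names `set_partitions`, `blocksum` and the toy values are ad hoc):

```python
from fractions import Fraction
from functools import lru_cache
from math import prod

def set_partitions(items):
    """Yield all partitions of a list into non-empty blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        yield [[first]] + part                       # first forms its own block
        for i in range(len(part)):                   # first joins the i-th block
            yield part[:i] + [part[i] + [first]] + part[i + 1:]

# Toy truncated correlations: rho_T^{(n)} is the constant t[n] (only the
# order matters here).  The values below are arbitrary nonzero choices.
t = {n: Fraction(v) for n, v in enumerate([2, 1, -1, 3, -2, 5, -3], start=1)}
rho = t[1]

def rho_full(n):
    """rho^{(n)} via Eq. (2.6): sum over set partitions of products of t."""
    return sum(prod(t[len(block)] for block in part)
               for part in set_partitions(list(range(n))))

def blocksum(f, k):
    """Sum over partitions of k points into >= 2 blocks of prod f(|block|)."""
    total = Fraction(0)
    for part in set_partitions(list(range(k))):
        if len(part) >= 2:
            total += prod(f(len(block)) for block in part)
    return total

@lru_cache(maxsize=None)
def F(k):      # the recursion Eq. (6.2)
    return rho_full(k + 1) / rho - blocksum(F, k)

@lru_cache(maxsize=None)
def tilde(k):  # the recursion Eq. (3.4) for tilde rho_T^{(1+k)}
    return t[k + 1] / rho - blocksum(tilde, k)

# Proposition 6.1: F_k = rho_T^{(k)} + tilde rho_T^{(1+k)}
for k in range(1, 6):
    assert F(k) == t[k] + tilde(k)
print("Proposition 6.1 verified for k = 1,...,5")
```

Replacing the scalars by genuine functions of the points changes nothing structurally, since both recursions only combine values blockwise.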

We now want to apply Theorem A to Eq. (6.1); in view of Eqs. (6.3) and (3.1), we thus need a bound on the integrals over \((\widetilde{\rho }_T^{\,(1+k)})_{k\ge 1}\).

Theorem 6.2

Let \({\mathord {\mathbb P}}\) be a \((\mu ,H)\)-Gibbs measure that satisfies Assumption B. Then for any bounded \(\Lambda \subset {\mathord {\mathbb R}}^d\)

$$\begin{aligned} \sup _{x\in {\mathord {\mathbb R}}^d}\int _{\Lambda ^k} \left| \widetilde{\rho }_T^{\,(1+k)}(x,{{\varvec{y}}}_k) \right| \,\textrm{d}{{\varvec{y}}}_k \le (k-1)! M (1+\zeta D)^k q^k \end{aligned}$$
(6.7)

for some \(M>0\) and \(\zeta >1\) independent of \(\Lambda \).

Proof

By Eq. (3.4) there holds

$$\begin{aligned} \int _{\Lambda ^k} \widetilde{\rho }_T^{\,(1+k)}(x,{{\varvec{y}}}_k) \,\textrm{d}{{\varvec{y}}}_k = \int _{\Lambda ^k} \frac{\rho ^{(1+k)}_T(x,{{\varvec{y}}}_k)}{\rho } \,\textrm{d}{{\varvec{y}}}_k -\sum _{l=2}^k \sum _{\pi \in \Pi _l(\{{{\varvec{y}}}_k\})} \prod _{i=1}^l\int _{\Lambda ^{\kappa _i}} \widetilde{\rho }^{\,(1+\kappa _i)}_T (x,\pi _i) \!\textrm{d}{\pi _i}. \end{aligned}$$

Note that, because we integrate over all \({{\varvec{y}}}_k\), for a particular partition \(\pi \in \Pi _l(\{{{\varvec{y}}}_k\})\) it does not matter which particular elements of \({{\varvec{y}}}_k\) are contained in which part \(\pi _i\) of \(\pi \); only the sizes of the different parts \(\pi _i\) matter. The resulting sum is exactly the kth complete Bell polynomial, cf. [26], which allows us to write

$$\begin{aligned} \int _{\Lambda ^k} \widetilde{\rho }_T^{\,(1+k)}(x,{{\varvec{y}}}_k) \,\textrm{d}{{\varvec{y}}}_k = -B_k\left( I_1,\dots ,I_{k-1},-\int _{\Lambda ^k} \frac{\rho ^{(1+k)}_T(x,{{\varvec{y}}}_k)}{\rho } \,\textrm{d}{{\varvec{y}}}_k \right) \end{aligned}$$

where \(I_i=\int _{\Lambda ^{i}} \widetilde{\rho }^{\,(1+i)}_T (x,{{\varvec{y}}}_{i}) \,\textrm{d}{{\varvec{y}}}_{i}\). Taking the absolute value above and using the bound from Assumption B we thus get

$$\begin{aligned} \int _{\Lambda ^k} \left| \widetilde{\rho }_T^{\,(1+k)}(x,{{\varvec{y}}}_k) \right| \,\textrm{d}{{\varvec{y}}}_k \le w_k \end{aligned}$$
(6.8)

where the \(w_k\) are recursively defined by \(w_1=Dq\) and for \(k\ge 2\)

$$\begin{aligned} w_k=B_k \left( {w}_{1},\dots , {w}_{k-1},k!Dq^k\right) . \end{aligned}$$
(6.9)
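For illustration (using \(B_2(x_1,x_2)=x_1^2+x_2\) and \(B_3(x_1,x_2,x_3)=x_1^3+3x_1x_2+x_3\)), the first members of the recursion are

$$\begin{aligned} w_2 = 2Dq^2+D^2q^2, \qquad w_3 = B_3\left( w_1,w_2,3!Dq^3\right) = 6Dq^3+6D^2q^3+4D^3q^3. \end{aligned}$$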

We use the ansatz

$$\begin{aligned} w_k = q^k\sum _{l=0}^k a_l^{(k)}D^l, \end{aligned}$$

and want to calculate the coefficients \(a_l^{(k)}\) for \(0\le l\le k\) in \(w_k\) using the well-known recursion, cf. [26],

$$\begin{aligned} B_{k+1}\left( x_1 ,\dots ,x_{k+1}\right) = \sum _{i=0}^{k} \left( {\begin{array}{c}k\\ i\end{array}}\right) B_{k-i}(x_1,\dots ,x_{k-i})x_{i+1}, \end{aligned}$$
(6.10)

where \(B_0:=1\). This recursion, together with the fact that \(w_1=Dq\), already implies that \(a_0^{(k)}=0\) for every k. Lastly, we note that Eq. (6.10) implies that

$$\begin{aligned} B_k(x_1,\dots ,x_{k-1},x_k+y_k) =B_k(x_1,\dots ,x_{k-1},x_k)+y_k. \end{aligned}$$

Putting \(x_i=w_i\) for \(1\le i \le k-1\), \(x_k=q^{k}k!D\) and \(y_k=w_k-q^{k}k!D\) then gives

$$\begin{aligned} B_{k}(w_1,\dots ,w_{k}) = q^{k}\left( 2\sum _{l=1}^{k} a_l^{(k)}D^l-k!D\right) . \end{aligned}$$

Plugging the above into Eq. (6.10) we find

$$\begin{aligned}&B_{k+1}\left( w_1 ,\dots ,w_{k},(k+1)!Dq^{k+1}\right) =(k+1)!Dq^{k+1} \\&+\sum _{i=0}^{k-1} \left( {\begin{array}{c}k\\ i\end{array}}\right) q^{k-i}\left( 2\sum _{l=1}^{k-i} a_l^{(k-i)}D^l-(k-i)!D\right) q^{i+1}\left( \sum _{l=1}^{i+1} a_l^{(i+1)}D^l\right) . \end{aligned}$$

This already shows that \(a^{(k+1)}_1= (k+1)!\), since in all terms resulting from the second sum \(D\) appears with an exponent of at least two. We can thus use the above equality to find

$$\begin{aligned}&a_2^{(k+1)}=\sum _{i=0}^{k-1} \left( {\begin{array}{c}k\\ i\end{array}}\right) (k-i)!(i+1)!, \\&a_m^{(k+1)}= \sum _{i=0}^{k-1} \left( {\begin{array}{c}k\\ i\end{array}}\right) \left( (k-i)! a_{m-1}^{(i+1)}+2(i+1)! a_{m-1}^{(k-i)} +2\sum _{\nu =2}^{m-2} a_\nu ^{(i+1)}a_{m-\nu }^{(k-i)} \right) , \qquad m \ge 3. \end{aligned}$$

A straightforward computation, using \(\left( {\begin{array}{c}k\\ i\end{array}}\right) (k-i)!(i+1)!=k!\,(i+1)\), gives

$$\begin{aligned} a_2^{(k+1)}= \sum _{i=0}^{k-1}\left( {\begin{array}{c}k\\ i\end{array}}\right) (k-i)!(i+1)! = (k+1)!\frac{k}{2}. \end{aligned}$$

We claim that for every \(k,\nu \in \mathbb {N}\) there holds

$$\begin{aligned} a_\nu ^{(k)} = \frac{k!}{\nu !(\nu -1)!} (k-1)\cdot \ldots \cdot (k-\nu +1)b_\nu = \frac{k!}{\nu !}\left( {\begin{array}{c}k-1\\ \nu -1\end{array}}\right) b_\nu \end{aligned}$$
(6.11)

where \((b_\nu )_{\nu \ge 0}\) is the sequence defined by \(b_0=0, b_1=b_2=1\) and for \(\nu \ge 2\)

$$\begin{aligned} b_{\nu +1}= (\nu +2)b_\nu + 2 \sum _{j=2}^{\nu -1} \left( {\begin{array}{c}\nu \\ j\end{array}}\right) b_j b_{\nu -j+1}. \end{aligned}$$
(6.12)

Note that \(a_m^{(k)}=0\) if \(k<m\). We have already established that Eq. (6.11) holds for \(m=0,1,2\) and every k. We prove the claim by induction: let Eq. (6.11) hold for every \(\nu \le m\) and all k, for some \(m\ge 2\); then

$$\begin{aligned} a_{m+1}^{(k+1)}&=\sum _{i=0}^{k-1}\left( {\begin{array}{c}k\\ i\end{array}}\right) (k-i)!\,a_{m}^{(i+1)} + 2\sum _{i=0}^{k-1} \left( {\begin{array}{c}k\\ i\end{array}}\right) \sum _{\nu =1}^{m-1} a_\nu ^{(i+1)}a_{m+1-\nu }^{(k-i)} \nonumber \\&=\sum _{i=0}^{k-1}\left( {\begin{array}{c}k\\ i\end{array}}\right) (k-i)!\frac{(i+1)!}{m!}\left( {\begin{array}{c}i\\ m-1\end{array}}\right) b_m \end{aligned}$$
(6.13)
$$\begin{aligned}&+ 2\sum _{i=0}^{k-1} \left( {\begin{array}{c}k\\ i\end{array}}\right) \sum _{\nu =1}^{m-1} \frac{(i+1)!}{\nu !}\left( {\begin{array}{c}i\\ \nu -1\end{array}}\right) b_\nu \frac{(k-i)!}{(m+1-\nu )!}\left( {\begin{array}{c}k-i-1\\ m-\nu \end{array}}\right) b_{m+1-\nu } . \end{aligned}$$
(6.14)

We start by simplifying the term in Eq. (6.13) and get

$$\begin{aligned} \sum _{i=0}^{k-1}\left( {\begin{array}{c}k\\ i\end{array}}\right) (k-i)!\frac{(i+1)!}{m!}\left( {\begin{array}{c}i\\ m-1\end{array}}\right) b_m =\frac{k!b_m}{m!}m\sum _{i=0}^{k-1}\left( {\begin{array}{c}i+1\\ m\end{array}}\right) =\frac{k!b_m}{m!} \left( {\begin{array}{c}k+1\\ m+1\end{array}}\right) m \end{aligned}$$
(6.15)

by the Vandermonde identity, cf. [27]. For the term in Eq. (6.14), we first swap the inner and outer sums and rearrange the binomial coefficients to get

$$\begin{aligned}&\sum _{i=0}^{k-1} \left( {\begin{array}{c}k\\ i\end{array}}\right) \sum _{\nu =1}^{m-1} \frac{(i+1)!}{\nu !}\left( {\begin{array}{c}i\\ \nu -1\end{array}}\right) b_\nu \frac{(k-i)!}{(m+1-\nu )!}\left( {\begin{array}{c}k-i-1\\ m-\nu \end{array}}\right) b_{m+1-\nu } \\&=\frac{k!}{m!}\sum _{\nu =1}^{m-1} \left( {\begin{array}{c}m\\ \nu -1\end{array}}\right) b_\nu b_{m+1-\nu }\sum _{i=0}^{k-1} \left( {\begin{array}{c}i+1\\ \nu \end{array}}\right) \left( {\begin{array}{c}k-i-1\\ m-\nu \end{array}}\right) . \end{aligned}$$

We note that

$$\begin{aligned} \sum _{i=0}^{k-1} \left( {\begin{array}{c}i+1\\ \nu \end{array}}\right) \left( {\begin{array}{c}k-i-1\\ m-\nu \end{array}}\right) =\left( {\begin{array}{c}k+1\\ m+1\end{array}}\right) , \end{aligned}$$

which is a version of the Chu-Vandermonde identity, see e.g. [27], and thus

$$\begin{aligned} \frac{k!}{m!}\sum _{\nu =1}^{m-1} \left( {\begin{array}{c}m\\ \nu -1\end{array}}\right) b_\nu b_{m+1-\nu }\sum _{i=0}^{k-1} \left( {\begin{array}{c}i+1\\ \nu \end{array}}\right) \left( {\begin{array}{c}k-i-1\\ m-\nu \end{array}}\right) =\frac{k!}{m!}\left( {\begin{array}{c}k+1\\ m+1\end{array}}\right) \sum _{\nu =1}^{m-1} \left( {\begin{array}{c}m\\ \nu -1\end{array}}\right) b_\nu b_{m+1-\nu }. \end{aligned}$$
(6.16)

Plugging Eqs. (6.15) and (6.16) back into Eqs. (6.13) and (6.14) we get

$$\begin{aligned} a_{m+1}^{(k+1)}= \frac{k!}{m!} \left( {\begin{array}{c}k+1\\ m+1\end{array}}\right) \left( m b_m+2 b_m +2\sum _{\nu =2}^{m-1}\left( {\begin{array}{c}m\\ \nu \end{array}}\right) b_\nu b_{m+1-\nu }\right) = \frac{k!}{m!} \left( {\begin{array}{c}k+1\\ m+1\end{array}}\right) b_{m+1} \end{aligned}$$

where we used that, after splitting off the term \(\nu =1\) and substituting \(\nu \mapsto m+1-\nu \) in the remaining sum,

$$\begin{aligned} \sum _{\nu =1}^{m-1}\left( {\begin{array}{c}m\\ \nu -1\end{array}}\right) b_\nu b_{m+1-\nu } =b_m+\sum _{\nu =2}^{m-1}\left( {\begin{array}{c}m\\ \nu \end{array}}\right) b_\nu b_{m+1-\nu }. \end{aligned}$$

The claim is thus proved, since \(\frac{k!}{m!}\left( {\begin{array}{c}k+1\\ m+1\end{array}}\right) =\frac{(k+1)!}{(m+1)!}\left( {\begin{array}{c}k\\ m\end{array}}\right) \), and we arrive at the bound

$$\begin{aligned} \int _{\Lambda ^k} \left| \widetilde{\rho }_T^{\,(1+k)}(x,{{\varvec{y}}}_k) \right| \,\textrm{d}{{\varvec{y}}}_k \le q^k\sum _{m=1}^k\left( {\begin{array}{c}k\\ m\end{array}}\right) \frac{(k-1)!}{(m-1)!} b_{m}D^m. \end{aligned}$$

The sequence \((b_m)_{m\ge 0}\) counts the total partitions of m elements, see [28, 29]. There we also find the asymptotic behavior of \((b_m)_m\): there is an \(M>0\) such that

$$\begin{aligned} b_{m} \le M\,\frac{m^{m-1}\zeta ^m}{e^m} \qquad {\text { where }}\qquad \zeta = \frac{1}{2\log 2-1}, \end{aligned}$$

giving

$$\begin{aligned} \sum _{m=1}^k\left( {\begin{array}{c}k\\ m\end{array}}\right) \frac{(k-1)!}{(m-1)!} b_{m}D^m \le M (k-1)! (1+\zeta D)^k. \end{aligned}$$

\(\square \)
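The combinatorial identities in this proof lend themselves to a machine check (a sketch for verification only; the helper names `padd`, `pmul`, `Bfull` are ad hoc): with \(q=1\), compute the \(w_k\) of Eq. (6.9) as polynomials in \(D\) through the recursion Eq. (6.10), the sequence \((b_\nu )\) through Eq. (6.12), and compare the coefficients with the closed form Eq. (6.11).

```python
from fractions import Fraction
from math import comb, factorial

K = 8  # check orders k = 1,...,K

# (b_nu) from the recursion Eq. (6.12):
# b_{nu+1} = (nu+2) b_nu + 2 sum_{j=2}^{nu-1} C(nu,j) b_j b_{nu-j+1}
b = [0, 1, 1]
for nu in range(2, K):
    b.append((nu + 2) * b[nu]
             + 2 * sum(comb(nu, j) * b[j] * b[nu - j + 1]
                       for j in range(2, nu)))
assert b[1:7] == [1, 1, 4, 26, 236, 2752]  # total partitions (OEIS A000311)

# Polynomials in D as coefficient lists (index = power of D); q is set to 1,
# which only removes the overall factor q^k.
def padd(p, r):
    n = max(len(p), len(r))
    return [(p[i] if i < len(p) else 0) + (r[i] if i < len(r) else 0)
            for i in range(n)]

def pmul(p, r):
    out = [Fraction(0)] * (len(p) + len(r) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(r):
            out[i + j] += a * c
    return out

def pscale(c, p):
    return [c * a for a in p]

# w_k via Eqs. (6.9)/(6.10), unrolled with Bfull[j] := B_j(w_1,...,w_j):
# B_j = sum_{i=0}^{j-1} C(j-1,i) Bfull[j-1-i] w_{i+1}, and
# w_k = k! D + sum_{i=0}^{k-2} C(k-1,i) Bfull[k-1-i] w_{i+1}.
w = {1: [Fraction(0), Fraction(1)]}   # w_1 = D
Bfull = {0: [Fraction(1)]}            # B_0 = 1
for k in range(2, K + 1):
    j = k - 1
    Bfull[j] = [Fraction(0)]
    for i in range(j):
        Bfull[j] = padd(Bfull[j],
                        pscale(comb(j - 1, i), pmul(Bfull[j - 1 - i], w[i + 1])))
    wk = [Fraction(0), Fraction(factorial(k))]   # the k! D term
    for i in range(k - 1):
        wk = padd(wk, pscale(comb(k - 1, i), pmul(Bfull[k - 1 - i], w[i + 1])))
    w[k] = wk

# Closed form Eq. (6.11): the coefficient of D^m in w_k is
# a_m^{(k)} = (k!/m!) C(k-1, m-1) b_m.
for k in range(1, K + 1):
    assert w[k][0] == 0
    for m in range(1, k + 1):
        assert w[k][m] == Fraction(factorial(k), factorial(m)) \
            * comb(k - 1, m - 1) * b[m]
print("Eq. (6.11) verified up to k =", K)
```

Running the sketch confirms both Eq. (6.11) and the identification of \((b_\nu )\) with the total-partition numbers up to the chosen order.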

Proof of Theorem 3.1

Assume that \({\mathord {\mathbb P}}\) is a \((\mu ,H)\)-Gibbs measure satisfying a Ruelle condition as well as Assumptions A and B. By Eq. (6.3), the triangle inequality, Eq. (5.1) and Theorem 6.2, we get for the \(F_n\) of Eq. (6.2) that

$$\begin{aligned}&\int _{\Lambda ^n}\left| F_n(0,{{\varvec{y}}}_n)\right| \,\textrm{d}{{\varvec{y}}}_n \le |\Lambda |(n-1)! D \rho q^{n-1}+M (n-1)! (1+\zeta D)^n q^n \\&\le \left( |\Lambda |\frac{D \rho }{q}+M\right) (n-1)! \left( q(2+\zeta D)\right) ^n. \end{aligned}$$

If now \(q<q_0\) from Eq. (3.2), then by Theorem A there holds

$$\begin{aligned} \log j_\Lambda ^{(1)}(0) = \log \rho + \sum _{k=1}^\infty \frac{(-1)^k}{k!} \int _{\Lambda ^k} \left( \rho _T^{(k)}({{\varvec{y}}}_k)+\widetilde{\rho }^{\,(1+k)}_T(0,{{\varvec{y}}}_k)\right) \,\textrm{d}{{\varvec{y}}}_k. \end{aligned}$$

In view of Eq. (5.2), this is equivalent to

$$\begin{aligned} \log j_\Lambda ^{(1)}(0)-\log j_\Lambda ^{(0)} =\log \rho + \sum _{k=1}^\infty \frac{(-1)^k}{k!} \int _{\Lambda ^k} \widetilde{\rho }^{\,(1+k)}_T(0,{{\varvec{y}}}_k) \,\textrm{d}{{\varvec{y}}}_k. \end{aligned}$$
(6.17)

By Assumption A and Eq. (2.1), the left-hand side of Eq. (6.17) converges to \(\mu \) as \(\Lambda \nearrow {\mathord {\mathbb R}}^d\); since the bound in Eq. (6.7) is independent of \(\Lambda \), the right-hand side converges by dominated convergence, and we arrive at Eq. (3.3).\(\square \)