1 Introduction

The mean-field spherical model in a random external field, which we also call the random field mean-field spherical model (RFMFS), is a variant of the mean-field spherical model (MFS), introduced in [18], in which the homogeneous external field is replaced by a random external field. Due to the introduction of external randomness, as opposed to the internal randomness of the associated Gibbs state, the finite volume Gibbs states (FVGS) of this model form a collection of random probability measures. In this paper, we prove theorems concerning the infinite volume limits of the collection of FVGS for a number of different modes of convergence introduced in the disordered systems literature. Due to model-specific features, we are able to prove these theorems in a significantly more general setting than that of other similar models in the literature.

The Curie–Weiss model in a random external field with independent Bernoulli distributed components was first introduced in [31]. In [1, 30, 31], the thermodynamics and the phase diagram of this model are determined. In [3], the fluctuations of the magnetization are considered for the Curie–Weiss model in a random external field where the Bernoulli distributed components are replaced by independent identically distributed components with finite absolute first moments. We will refer to the general random external field case as the random field Curie–Weiss model (RFCW), and to the Bernoulli distributed external field case as the Bernoulli field Curie–Weiss model (BFCW). One of the key results in [3] is the classification of the magnetization fluctuations in the RFCW model in terms of the “type” and “strength” of the global maximizing points of the free energy. This classification is similar to the classification given by Ellis and Newman in [13] for the magnetization fluctuations of the generalized Curie–Weiss model (GCW), but the external randomness modifies the scaling of the fluctuations.

The infinite volume limits of the finite marginal distributions of the GCW model are given in [13], and the result is that they are convex combinations of product measures, where the number of such product measures is related to the number, type, and strength of the global maximizing points of the free energy of the model. Due to the similarities in results and proof techniques concerning the magnetization fluctuations between the GCW model and the RFCW model, one might suspect that the infinite volume limits would also be similar. This, however, is not necessarily the case.

For the RFCW model, in [4], the authors show that the infinite volume Gibbs states (IVGS), i.e., the elements of the collection of limit points of the collection of FVGS, almost surely belong to the set of convex combinations of certain product measures, where the number of product measures is determined by the number of global maximizing points of the free energy. However, for the BFCW model with the parameters chosen so that there are exactly two global maximizing points, they show that, on a set of probability 1, a countable collection of convex combinations of the two associated product measures can be obtained as limits of convergent subsequences. This phenomenon is known as chaotic size dependence (CSD), a term due to Newman and Stein [26]. The implication is that there does not exist a definitive unique almost sure limit of the FVGS; instead, the limits depend on the subsequence of volumes chosen.

To rectify the problem of chaotic size dependence, other forms of convergence in the infinite volume limit must be considered. Let us briefly motivate two such forms of convergence and introduce the notion of metastates. Since the FVGS are random probability measures, if one considers their distributions as random variables, then each such distribution is essentially a probability measure on probability measures, and we call it a metastate. One can then consider the convergence in distribution of the external field and the FVGS simultaneously. For the resulting limit in distribution, if it exists, one can then take the regular conditional distribution of the random probability measure given the external field, and this is then a metastate which is defined almost surely. This procedure is due to Aizenman and Wehr [2], and the resulting object is typically called the Aizenman–Wehr (A–W) metastate or the conditioned metastate. For the other form of convergence, one considers the empirical measure of the sequence of FVGS and its limit either almost surely or in distribution. These empirical measures are called the Newman–Stein (N–S) metastates due to Newman and Stein [26]. We refer to [9, Chapter 6] for more details and definitions concerning metastates.

In [4], for the BFCW model in the case where there are two global maximizing points, it was shown that the limit in distribution of the random FVGS corresponds to a random probability measure which is split between the associated product measures with probability \(\frac{1}{2}\) each, independently of the external field. A similar result, concerning the A–W metastate, is obtained in [21]. In [21], the author also provides a proof that the N–S metastates corresponding to the FVGS do not converge almost surely, but they do converge in distribution. In doing so, it is also shown that the limit in distribution of the N–S metastates is more general in some sense, since it also contains the result concerning the A–W metastate.

Let us now comment on some of the technical details of the RFCW model and the BFCW model so that we have a reference point for some of the results in this paper. The characterization given in [3] for the free energy of the RFCW model suggests that one can find random external fields which yield any number, type, and strength of the global maximizing points of the free energy. In principle, one could perform a similar analysis for the IVGS of the RFCW model as in [4, 21], but both of these works focused on the BFCW model, and there is an explicit mention of the difficulties associated with the general RFCW models in [4]. There are two features which make the RFCW models technically simple to deal with. The first feature is that the FVGS are probability measures on a compact space, and, as such, the space of probability measures on this compact space is itself compact. As a result, one does not need to provide uniform tightness results for the metastates since they are automatic. The second feature is that the FVGS can be immediately written as a continuous mixture of product states via the Hubbard–Stratonovich transform. Using these ingredients, most of the analysis of these models comes down to the analysis of the free energy and the limiting procedure by which it was obtained.

In this paper, we rigorously determine the IVGS and the limiting metastates of the RFMFS model in the general setting where the components of the external field are independent identically distributed random variables with a finite moment of some order larger than four and non-vanishing variances of both the components and their squares. We show that the IVGS of this model is always either unique or a convex combination of two pure states. We provide a characterization of the phases of this model, and we show that in the spin glass phase the model exhibits CSD. For the spin glass phase, we provide a construction of the A–W metastate and consider the limiting properties of the N–S metastates both in distribution and almost surely. The results obtained are universal in the sense that they hold for any random external field satisfying the assumptions given.

The main results of this paper that concern the case of the random external field are presented in Theorems 2.3.5, 2.3.10, 2.3.15, and 2.3.16. In order, the results are the existence of CSD and the determination of unique almost sure limits of the FVGS, the construction of the A–W metastate from the limit in distribution of the conditioned metastate probability measures, the almost sure divergence of the N–S metastates and the almost sure convergence of a random subsequence of N–S metastates to the A–W metastate, and, finally, the limit in distribution of the N–S metastates. More minor, but contextually important, results are the characterization of the spin glass phase provided in Table 3 and the triviality of the metastates in Theorem 2.3.17.

The methods of proof for this model differ substantially from those for the RFCW. We will first prove results for a deterministic inhomogeneous external field with convergent sample means and non-vanishing convergent sample standard deviations. Such a deterministic inhomogeneous external field will be called strongly varying. For strongly varying external fields, we are able to fully determine the infinite volume Gibbs states by specifying the rates of convergence of the sample means and sample standard deviations. The IVGS and metastate results are then applications of specific instances of strongly varying fields. The technical difficulties of this model are twofold. The first difficulty is that the state space is not compact and thus we must provide a number of uniform tightness results for the different metastates. The second difficulty is that the inclusion of the external field breaks the permutation invariance of the standard mean-field spherical model, and, as a result, we must provide methods of proof for the resolution of finite marginals of certain singular probability distributions which do not require such symmetries.

To our knowledge, the results of this paper for the mean-field spherical model in a strongly varying field and the RFMFS are novel contributions to the literature. In particular, as noted in [9], there are very few explicit constructions of metastates for non-trivial lattice models, and those that do exist, such as the results for the BFCW model, are for specific choices of random external fields. In this paper, we are able to provide an explicit construction of the metastates of the RFMFS model for a large collection of general random external fields, and show that they have a relatively simple and universal structure despite the generality of the random external field.

1.1 Related Works

Recently, the free energy, overlap structure, and thermodynamic fluctuations were studied for the spherical Sherrington–Kirkpatrick model (SSK) and a variant of it in [5,6,7]. The SSK model is similar to our model in the sense that it is a lattice model with a spherical constraint and some form of external randomness. However, the critical difference is that in these models the external randomness typically appears in the interactions between spins. Furthermore, these works are not concerned with the construction of metastates. In general, the literature concerning these types of models is too vast to cite here; instead, we refer to the book [33].

Let us also mention the large and moderate deviation results for the RFCW models given in [23, 24] which initially spurred our interest in these types of models and provided valuable references in their introductions. In particular, there are examples given in [24] which consider random external fields which do not have independent identically distributed components.

In addition to the original introduction of the MFS model in [18], the IVGS of the MFS model can be deduced from results presented in [20]. They are not explicitly mentioned as IVGS, but the finite marginal distributions are deduced there. In general, the work of [20] can be used to understand some of the rigorous methods of proof used in this paper as well, but the core methods of this paper differ significantly due to the lack of permutation invariance of the models.

1.2 Reading Guide

This paper is split into two major sections, Sects. 2 and 3. The main results are contained in Sect. 2, and they are presented along with a significant amount of exposition, intermediate results, and proof sketches. When necessary, detailed proofs of non-trivial results are provided in Sect. 3. In general, the main results of this paper and their proof sketches should be understandable by reading only Sect. 2.

Although this paper is primarily concerned with the random external field, Sect. 2 is split into two subsections which are Sects. 2.2 and 2.3. Many key results are first developed for a deterministic inhomogeneous external field in Sect. 2.2 and then applied to the case of the random external field in Sect. 2.3. The main results concerning the random external field are provided in Sect. 2.3, but most of their proofs rely directly on results developed in Sect. 2.2.

In Sect. 3, we provide the non-trivial proofs of the main results and intermediate results from Sect. 2. There are also intermediate results and proofs which are not found in Sect. 2. To save space and provide clarity, we will typically omit extraneous superscripts and subscripts if it is clear which object is being referred to in a proof. If there is a relevant variable dependence in these indices, we keep them in the proofs. In general, we have tried to remain consistent with the naming of certain key objects so that they are the same throughout the paper.

The appendix contains general definitions and proofs related to the uniform tightness, approximation, and weak convergence of probability measures related to the metastates.

2 Main Results

2.1 Preliminaries

The model of interest in this paper is an equilibrium statistical mechanical model of an unbounded continuous spin system with long-range interactions, a spherical constraint, and an external field which is initially deterministic. Later on, we will consider the case where the external field is random. We will refer to this model and its constituents, which are to be defined, as the RFMFS model.

To define this model, we begin with the mean-field Hamiltonian in an external field \(H_n^{J,h}: \mathbb {R}^n \rightarrow \mathbb {R}\) given by

$$\begin{aligned} H_n^{J,h}[\phi ] := - \frac{J}{2n} \sum _{i,j=1}^n \phi _i \phi _j - \sum _{i=1}^n h_i \phi _i, \end{aligned}$$
(2.1.1)

where \(J > 0\) is a coupling constant, and \(h \in \mathbb {R}^\mathbb {N}\) is an external field. The state space on which this Hamiltonian is defined is continuous and unbounded, as opposed to the discrete and bounded state space of the Ising model. Since each spin interacts with all other spins, the interaction defined by this Hamiltonian is a long-range interaction, and it is often also referred to as a mean-field interaction. The uniform probability measure \(\omega _n\) on the \((n-1)\)-dimensional sphere with radius \(\sqrt{n}\) is formally given by its density \(\omega _n(d \phi )\) on \(\mathbb {R}^n\) which has the formula

$$\begin{aligned} \omega _n (d \phi ) := \frac{\delta (|| \phi ||^2 - n)}{\int _{\mathbb {R}^n} d \phi \ \delta (|| \phi ||^2 - n)} d \phi , \end{aligned}$$

where \(\delta (\cdot )\) is the Dirac delta, \(\Vert \cdot \Vert \) is the Euclidean norm, and \(d \phi \) is the Lebesgue measure. We refer to the utilization of this uniform measure as the spherical constraint in this model. The Gibbs state \(\mu _n^{\beta ,J,h}\) of this model is the probability measure on \(\mathbb {R}^n\) formally given by its density \(\mu _n^{\beta ,J,h} (d \phi )\) on \(\mathbb {R}^n\) which has the formula

$$\begin{aligned} \mu _n^{\beta ,J,h} (d \phi ) := \frac{e^{- \beta H_n^{J,h} [\phi ]}}{Z_n(\beta ,J,h)} \omega _n (d \phi ), \end{aligned}$$
(2.1.2)

where \(\beta > 0\) is the inverse temperature, and \(Z_n(\beta ,J,h)\) is the partition function formally given by

$$\begin{aligned} Z_n (\beta ,J,h) := \int _{\mathbb {R}^n} \omega _n (d \phi ) \ e^{- \beta H_n^{J,h} [\phi ]} . \end{aligned}$$
(2.1.3)

It is common to scale or set certain parameters to fixed values in these kinds of models. We insist on leaving the parameters without rescaling so that their contributions can be seen more transparently in results to come, such as Lemma 3.4.2 or the results presented in Table 1, where the ratio of the coupling constant J and the limiting sample standard deviation \(m^\perp \) (to be introduced) is of interest.

In this paper, a majority of the practical calculations are first done using the formal calculation properties of the \(\delta \)-functions, and these calculations are then given rigorous proofs later on. The Gibbs state is rigorously redefined by its action on \(f \in C_b (\mathbb {R}^n)\), where \(C_b (\mathbb {R}^n)\) is the space of continuous bounded functions on \(\mathbb {R}^n\), given by

$$\begin{aligned} \mu _n^{\beta ,J,h} [f] := \frac{1}{Z_n (\beta ,J,h)} \frac{2}{n^{\frac{n}{2} - 1}} \frac{1}{|\mathbb {S}^{n - 3}|} \int _{\mathbb {S}^{n-1}} d \Omega \ e^{\frac{\beta J}{2} \left( \sum _{i=1}^n \Omega _i \right) ^2 + \beta \sqrt{n} \sum _{i=1}^n h_i \Omega _i} f \left( \sqrt{n} \Omega \right) , \end{aligned}$$
(2.1.4)

where

$$\begin{aligned} Z_n(\beta , J, h) := \frac{2}{n^{\frac{n}{2} - 1}} \frac{1}{|\mathbb {S}^{n - 3}|} \int _{\mathbb {S}^{n-1}} d \Omega \ e^{\frac{\beta J}{2} \left( \sum _{i=1}^n \Omega _i \right) ^2 + \beta \sqrt{n} \sum _{i=1}^n h_i \Omega _i} , \end{aligned}$$
(2.1.5)

and \(d \Omega \) is the integral over the angular part of the hyperspherical coordinates on \(\mathbb {R}^n\). Note that this is a redefinition, and the n-dependent factors in this definition have been chosen for convenience.
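As an informal numerical illustration of the redefinition (2.1.4), and not part of any proof, the following sketch (Python with numpy; the parameter values and the random choice of h are purely illustrative) estimates the action of the Gibbs state on a bounded function by a self-normalized Monte Carlo average over uniform samples on the sphere. The n-dependent prefactors cancel between the numerator and \(Z_n(\beta ,J,h)\), so they do not appear in the estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, J = 50, 2.0, 1.0              # illustrative parameters
h = rng.normal(0.5, 1.0, size=n)       # an arbitrary illustrative external field

def gibbs_expectation(f, samples=100_000):
    """Self-normalized Monte Carlo estimate of mu_n^{beta,J,h}[f], cf. (2.1.4).

    Uniform points on the sphere are obtained by normalizing Gaussian vectors;
    the n-dependent prefactors of (2.1.4) cancel between numerator and Z_n."""
    g = rng.normal(size=(samples, n))
    omega = g / np.linalg.norm(g, axis=1, keepdims=True)    # uniform on S^{n-1}
    log_w = 0.5 * beta * J * omega.sum(axis=1) ** 2 + beta * np.sqrt(n) * omega @ h
    w = np.exp(log_w - log_w.max())                         # stabilized Boltzmann weights
    phi = np.sqrt(n) * omega                                # configurations on the sphere of radius sqrt(n)
    return np.sum(w * f(phi)) / np.sum(w)

# Example: the Gibbs expectation of the first spin component.
print(gibbs_expectation(lambda phi: phi[:, 0]))
```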

Note that we have explicitly defined the Gibbs state by its action on continuous bounded functions and we reserve the term expectation for real-valued random variables that will appear later on in this paper. We will encounter probabilistic objects such as probability measure-valued random variables and \(\mathbb {R}^\mathbb {N}\)-valued random variables which are defined on a common underlying probability space. For such objects, it is convenient to consider many of their properties as expectations with respect to this underlying probability measure, and this is why we want to have distinguished definitions of the Gibbs state and the distribution of a random variable although both are probability measures.

For technical reasons, we will extend the underlying space on which \(\mu _n^{\beta , J,h}\) acts as a probability measure to \(\mathbb {R}^\mathbb {N}\) by “tensoring on 0” to the remaining \(\mathbb {R}^{\mathbb {N} \setminus \{ 1,2,...,n\}}\) portion of the space. For the final time, we redefine \(\mu _n^{\beta ,J,h}:= \mu _n^{\beta ,J,h} \otimes \delta _0\), where \(\delta _0\) is the Dirac measure on the point with all components 0 in \(\mathbb {R}^{\mathbb {N} \setminus \{ 1,2,...,n\}}\). Probability measures on \(\mathbb {R}^\mathbb {N}\) constructed from probability measures on \(\mathbb {R}^n\) by this method will be referred to as 0-tensored versions. A function \(f: \mathbb {R}^\mathbb {N} \rightarrow \mathbb {R}\) is said to be local if there exists a finite index set \(I \subset \mathbb {N}\) and a function \(f': \mathbb {R}^I \rightarrow \mathbb {R}\) such that \(f(x) = f' (\pi _I (x))\), where \(\pi _I\) is the canonical projection from \(\mathbb {R}^\mathbb {N}\) to \(\mathbb {R}^I\). When dealing with local functions, we will typically refer to the representation function \(f'\) as f, and the index set on which the function is local will be called I. The reason why this extension procedure is purely technical is that, for a fixed local function and large enough n, the expectation of this local function is the same for the original Gibbs state and the redefined extended Gibbs state.

For a fixed n, such a probability measure \(\mu _n^{\beta ,J,h}\) is referred to as a finite volume Gibbs state (FVGS), and the collection of FVGS will be denoted by \(\mathcal {G}(\beta ,J,h):= \{ \mu _n^{\beta ,J,h} \}_{n \in \mathbb {N}}\). The entire collection \(\mathcal {G} (\beta ,J,h)\) is then a collection of probability measures on \(\mathbb {R}^\mathbb {N}\).

The two principal objects of interest in this paper are the limiting free energy and the infinite volume Gibbs states (IVGS). We define them here explicitly due to their importance.

Definition 2.1.1

(Limiting free energy) The limiting free energy \(f(\beta ,J,h)\), when it exists, is given by

$$\begin{aligned} f(\beta ,J,h) := \lim _{n \rightarrow \infty } \frac{1}{n} \ln Z_n (\beta ,J,h) . \end{aligned}$$

Note that the limiting free energy is defined here with a different sign and unit convention than the physicist’s limiting free energy.

To define the IVGS, recall first that the ambient space \(\mathbb {R}^\mathbb {N}\) can be equipped with the product topology to make it a topological space which is separable, metrizable, and complete with respect to this metric. Such a topological space is called a Polish space. Recall also that the space of probability measures \(\mathcal {M}_1(\mathbb {R}^\mathbb {N})\) on \(\mathbb {R}^\mathbb {N}\), by virtue of \(\mathbb {R}^\mathbb {N}\) being a Polish space, can be equipped with a topology which is separable, metrizable by the Prokhorov metric d, and complete with respect to this metric, see [14, Chapter 3].

Definition 2.1.2

(Infinite volume Gibbs states) The collection infinite volume Gibbs states \(\mathcal {G}_\infty (\beta ,J,h)\), when it is non-empty, is the collection of probability measures on \(\mathbb {R}^\mathbb {N}\) given by

$$\begin{aligned} \mathcal {G}_\infty (\beta ,J,h) := L (\mathcal {G}(\beta ,J,h)), \end{aligned}$$

where \(L(\mathcal {G}(\beta ,J,h))\) is the collection of limits points of the finite volume Gibbs states.

In this definition, it is useful to use the metric space characterization of a limit point. A limit point of a set A in a metric space X is any point not in A which can be obtained as the limit of a convergent sequence of elements of A.

Since mean-field models typically do not involve boundary conditions or local specifications in the same sense as the classical Gibbsian formalism for lattice spin systems, see [9, Chapter 4], the definition for the IVGS is less involved and does not initially require any further study of the convexity properties of the Gibbs states. This definition of the IVGS reflects the fact that a typical mean-field interaction is invariant under permutations of the underlying spins, and that the notion of increasing volumes in the thermodynamic limit can be replaced by an increasing number of lattice sites, since the structure of the sequence of increasing volumes is irrelevant to the mean-field interaction. This is why the FVGS are labelled as a sequence of probability measures rather than a collection of probability measures indexed by subsets corresponding to volumes.

The metrization of \(\mathcal {M}_1(\mathbb {R}^\mathbb {N})\) by the Prokhorov metric d is somewhat intractable for practical calculations. Instead, we will use the fact that the collection of local bounded Lipschitz functions \({\text {LBL}} (\mathbb {R}^\mathbb {N})\) from \(\mathbb {R}^\mathbb {N}\) to \(\mathbb {R}\) is convergence determining on \(\mathcal {M}_1(\mathbb {R}^\mathbb {N})\), see [14, Chapter 3] and [19, Chapter 13]. A local function f is Lipschitz if it is Lipschitz on \(\mathbb {R}^I\) with the standard Euclidean norm. In terms of the Prokhorov metric, it follows that a sequence of probability measures \(\{ \mu _n \}_{n \in \mathbb {N}}\) converges to a probability measure \(\mu \) with respect to the Prokhorov metric if and only if \(\mu _n[f] \rightarrow \mu [f]\) in the limit as \(n \rightarrow \infty \) for any \(f \in {\text {LBL}} (\mathbb {R}^\mathbb {N})\). By using the collection \({\text {LBL}}(\mathbb {R}^\mathbb {N})\), it follows that \(\mu \in \mathcal {G}_\infty (\beta ,J,h)\) if and only if there exists a subsequence \(\{ n_k \}_{k \in \mathbb {N}}\) such that \(\mu _{n_k}^{\beta ,J,h} [f] \rightarrow \mu [f]\) for any \(f \in {\text {LBL}} (\mathbb {R}^\mathbb {N})\). Convergence with respect to the Prokhorov metric is often called weak convergence, and we will sometimes write \(\mu _n \rightarrow \mu \) weakly in the limit as \(n \rightarrow \infty \), by which we mean that \(d(\mu _n, \mu ) \rightarrow 0\) in the limit as \(n \rightarrow \infty \). If it is clear from the context, we will omit “weakly” and simply say that \(\mu _n \rightarrow \mu \) in the limit as \(n \rightarrow \infty \).

In the literature, the mean-field spherical model was introduced in [18], and it corresponds to the model presented in this paper with the specific choice of external field h in which all components are equal. In [18], the authors computed the limiting free energy, with a different sign and unit convention, of the mean-field spherical model. In [20], the IVGS are indirectly identified for the mean-field spherical model. Historically, mean-field models were introduced as simplified versions of more complicated models which retain enough non-trivial features to be worth studying. For example, the Curie–Weiss model, which is the mean-field version of the classical Ising model, was introduced so that one could exactly study a simplified model which still exhibits interesting thermodynamic phenomena such as phase transitions, anomalous thermodynamic fluctuations, etc. We refer to [16] for more details on the Curie–Weiss model and the classical Ising model. In addition, various features of certain generalizations of the classical Curie–Weiss model were studied in depth in [13]. From this perspective, the mean-field spherical model is the mean-field version of the continuous spin variant of the Ising model known as the Berlin–Kac model introduced in [8].

2.2 Deterministic Inhomogeneous External Field

We will now begin the exposition and presentation of the main results concerning the deterministic inhomogeneous external field. Denote \(m_n^h:= (m_n^{h, \parallel }, m_n^{h, \perp }) \in \mathbb {R}^2\), where \(m_n^{h,\parallel }\) is the finite sample mean and \(m_n^{h,\perp }\) is the finite sample standard deviation of the external field h given by

$$\begin{aligned} m_n^{h,\parallel } := \frac{1}{n} \sum _{i=1}^n h_i, \ m_n^{h,\perp } := \sqrt{\frac{1}{n} \sum _{i=1}^n h_i^2 - \left( m_n^{h, \parallel } \right) ^2} . \end{aligned}$$
(2.2.1)

The magnetization \(M_n: \mathbb {R}^n \rightarrow \mathbb {R}\) is given by

$$\begin{aligned} M_n[\phi ] := \sum _{i=1}^n \phi _i = \left\langle 1_n, \phi \right\rangle , \end{aligned}$$
(2.2.2)

where \(1 \in \mathbb {R}^\mathbb {N}\) is the vector with all components 1, \(1_n:= \pi _{\{ 1,2,...,n\}} (1)\), and \(\left\langle \cdot , \cdot \right\rangle \) is the Euclidean inner-product. Observe that the Hamiltonian can be written in the following form

$$\begin{aligned} H_n^{J,h} [\phi ] = - \frac{J}{2 n} M_n[\phi ]^2 - \sum _{i=1}^n h_i \phi _i = - \frac{J}{2 n} \left\langle 1_n, \phi \right\rangle ^2 - \left\langle h_n, \phi \right\rangle , \end{aligned}$$

where \(h_n:= \pi _{\{ 1,2,...,n\}} (h)\). Suppose now that h satisfies \(m_n^{h,\perp } \not = 0\), and let us consider the plane \(W_n^h \subset \mathbb {R}^n\) spanned by the vectors \(1_n\) and \(h_n\). An orthonormal basis of \(W_n^h\) is given by the unit vectors \(\{ w_{1,n}, w^h_{2,n} \}\) defined by

$$\begin{aligned} w_{1,n} := \frac{1_n}{\sqrt{n}}, \ w^h_{2,n} := \frac{h_n - m_n^{h,\parallel } 1_n}{\sqrt{n} m_n^{h,\perp }} . \end{aligned}$$
(2.2.3)

The Hamiltonian can be written as

$$\begin{aligned} H_n^{J,h} [\phi ] = - \frac{J}{2} \left\langle w_{1,n}, \phi \right\rangle ^2 - \sqrt{n} m_n^{h, \parallel } \left\langle w_{1,n}, \phi \right\rangle - \sqrt{n} m_n^{h, \perp }\left\langle w^h_{2,n}, \phi \right\rangle . \end{aligned}$$
(2.2.4)
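The following short numerical check (Python with numpy; the field and configuration below are arbitrary illustrations, not part of any proof) verifies that \(\{ w_{1,n}, w^h_{2,n} \}\) is orthonormal and that the rewritten Hamiltonian in Equation (2.2.4) agrees with the original definition (2.1.1).

```python
import numpy as np

rng = np.random.default_rng(1)
n, J = 200, 1.5
h = rng.exponential(1.0, size=n)      # any field with m_n^{h,perp} != 0
phi = rng.normal(size=n)              # an arbitrary spin configuration

m_par = h.mean()
m_perp = np.sqrt((h ** 2).mean() - m_par ** 2)
one_n = np.ones(n)
w1 = one_n / np.sqrt(n)
w2 = (h - m_par * one_n) / (np.sqrt(n) * m_perp)

# Orthonormality of {w1, w2}, cf. (2.2.3).
assert np.allclose([w1 @ w1, w2 @ w2, w1 @ w2], [1.0, 1.0, 0.0])

# Original Hamiltonian (2.1.1) versus the rewritten form (2.2.4).
H_original = -J / (2 * n) * phi.sum() ** 2 - h @ phi
H_rewritten = (-J / 2 * (w1 @ phi) ** 2
               - np.sqrt(n) * m_par * (w1 @ phi)
               - np.sqrt(n) * m_perp * (w2 @ phi))
assert np.allclose(H_original, H_rewritten)
```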

Let \(\{ v_{k,n}^h \}_{k=3}^n\) be an orthonormal basis of \(\left( W_n^h \right) ^\perp \), where \(\left( W_n^h \right) ^\perp \) is the orthogonal complement of \(W_n^h \). Let \(O^h_n\) be the orthogonal change of coordinates \(O^h_n: \mathbb {R}^n \rightarrow \mathbb {R}^n\) given by \(\left( O^h_n(\phi ) \right) _1 = \left\langle w_{1,n}, \phi \right\rangle \), \(\left( O^h_n(\phi ) \right) _2 = \left\langle w^h_{2,n}, \phi \right\rangle \), and \(\left( O^h_n(\phi ) \right) _k = \left\langle v_{k,n}^h, \phi \right\rangle \) for \(k=3,...,n\). Using this change of coordinates and the hyperspherical change of coordinates, formally, we have

$$\begin{aligned} \int _{\mathbb {R}^n} d \phi \ e^{- \beta H_n^{J,h} [\phi ]} \delta \left( \Vert \phi \Vert ^2 - n \right)&= \frac{|\mathbb {S}^{n - 3}| n^{\frac{n}{2} - 1}}{2} \int _{B(0,1)} dz \ e^{n \left( \frac{\beta J}{2} x^2 + \beta \left\langle m_n^h, z \right\rangle \right) } (1 - || z ||^2)^{\frac{n - 4}{2}} , \end{aligned}$$
(2.2.5)

where \(z = (x,y) \in \mathbb {R}^2\), \(|\mathbb {S}^{n - 3}|\) is the surface area of the \(n-3\)-dimensional unit sphere, and \(B(0,1) \subset \mathbb {R}^2\) is the open 2-dimensional unit ball. This formal calculation leading to Equation (2.2.5) is given a rigorous proof in Lemma 3.1.1.

Let us remark that if \(m_n^{h, \perp } = 0\), then \(h_n\) is a vector with equal components, and the calculation is the same as for the standard mean-field spherical model in [18]. This is why the standard mean-field spherical model concerns a homogeneous external field, while the model presented in this paper concerns an inhomogeneous external field. From here on, whenever we refer to an external field, we mean an inhomogeneous external field unless otherwise stated.

2.2.1 Limiting Free Energy

To continue, we state the first condition that the external field must satisfy as a definition.

Definition 2.2.1

An external field h is strongly varying if

$$\begin{aligned} \lim _{n \rightarrow \infty } m_n^h = m := (m^\parallel , m^\perp ) \in \mathbb {R} \times (0, \infty ) . \end{aligned}$$

Analogously, one could define a weakly varying external field as an external field for which the limit \(m \in \mathbb {R} \times \{ 0 \}\). In other words, the “strongly varying” part of the field is associated with a non-vanishing limiting sample standard deviation. Note also that the strongly varying condition ensures that \(m_n^{h, \perp } \not = 0\) for large enough n, and that a strongly varying external field is thus also inhomogeneous. From this point onwards, unless explicitly stated otherwise, we will assume that the external field is strongly varying.

Denote the exponential tilting function \(\psi _n^{\beta ,J,h}\) to be the function \(\psi _n^{\beta ,J,h}: B(0,1) \rightarrow \mathbb {R}\) given by

$$\begin{aligned} \psi _n^{\beta ,J,h} (z) := \frac{\beta J}{2} x^2 + \beta \left\langle m_n^h,z \right\rangle + \frac{1}{2} \ln (1 - || z ||^2) . \end{aligned}$$
(2.2.6)

By setting \(f \equiv 1\) in Lemma 3.1.1, we see that the exponential tilting function is related to the partition function \(Z_n(\beta ,J,h)\) by the following formula

$$\begin{aligned} Z_n(\beta ,J,h) = \int _{B(0,1)} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n^h, z \right\rangle } e^{(n - 4) \psi _n^{\beta ,J,h} (z)} . \end{aligned}$$
(2.2.7)

Motivated by this representation, we denote the limiting exponential tilting function \(\psi ^{\beta ,J,m}\) to be the function \(\psi ^{\beta ,J,m}: B(0,1) \rightarrow \mathbb {R}\) given by

$$\begin{aligned} \psi ^{\beta ,J,m} (z) := \frac{\beta J}{2} x^2 + \beta \left\langle m,z \right\rangle + \frac{1}{2} \ln (1 - || z ||^2) . \end{aligned}$$
(2.2.8)

For the exponential tilting functions, it follows that

$$\begin{aligned} \left| \psi _n^{\beta , J, h} (z) - \psi ^{\beta ,J,m} (z) \right| \le \beta || m_n^h - m || , \end{aligned}$$

from which it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \sup _{z \in B(0,1)} \left| \psi _n^{\beta , J, h} (z) - \psi ^{\beta ,J,m} (z) \right| = 0 . \end{aligned}$$

By using this uniform convergence, we have the first main result.

Theorem 2.2.2

Let h be a strongly varying external field.

It follows that

$$\begin{aligned} f (\beta , J, h) := \lim _{n \rightarrow \infty } \frac{1}{n} \ln Z_n (\beta , J, h) = \sup _{z \in B(0,1)} \psi ^{\beta ,J,m}(z) . \end{aligned}$$

The proof of this result, see Sect. 3.3, is a routine large deviations calculation once we show that the set of global maximizing points \(M^*(\beta ,J,m)\) of \(\psi ^{\beta ,J,m}\) is compact and non-empty, see Lemma 3.3.1 for the proof.
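As an informal illustration of Theorem 2.2.2, the variational formula can be evaluated numerically; the following sketch (Python with numpy, a simple grid search over B(0,1); the parameter values are purely illustrative) computes \(\sup _{z \in B(0,1)} \psi ^{\beta ,J,m}(z)\).

```python
import numpy as np

def limiting_free_energy(beta, J, m_par, m_perp, grid=2001):
    """Grid evaluation of sup_{z in B(0,1)} psi^{beta,J,m}(z), cf. (2.2.8)."""
    xs = np.linspace(-1.0, 1.0, grid)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    R2 = X ** 2 + Y ** 2
    inside = R2 < 1.0 - 1e-9
    psi = np.full_like(X, -np.inf)
    psi[inside] = (0.5 * beta * J * X[inside] ** 2
                   + beta * (m_par * X[inside] + m_perp * Y[inside])
                   + 0.5 * np.log1p(-R2[inside]))
    return psi.max()

print(limiting_free_energy(beta=2.0, J=1.0, m_par=0.3, m_perp=0.5))
```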

One can see from this theorem that if we allowed for a weakly varying external field, i.e., \(m^\perp = 0\), then we would recover the same free energy as for the standard mean-field spherical model presented in [18]. With respect to the manipulation of the Hamiltonian in Equation (2.2.4), when one applies a homogeneous external field, which is characterized by the external field h having equal components, we see that there is a single magnetization “component” which corresponds to the projection of \(\phi \) along the unit vector \(w_{1,n}\). If the external field is not homogeneous, there exists a second magnetization component corresponding to the projection of \(\phi \) along the unit vector \(w^h_{2,n}\). Subject to the strongly varying condition, this perpendicular magnetization component is non-vanishing in the limit, and it is the distinguishing feature between the standard mean-field spherical model in a homogeneous external field and the mean-field spherical model in a strongly varying field.

2.2.2 Partial Classification of Infinite Volume Gibbs States

Our next main result concerns a partial classification of the IVGS. With reference to Lemma 3.1.1, we begin by defining the mixture probability measure \(\rho _n^{\beta ,J,h}\) on B(0, 1) by its density

$$\begin{aligned} \rho _n^{\beta ,J,h} (dz) = \frac{e^{2 \beta J x^2 + 4 \beta \left\langle m_n^h, z \right\rangle } e^{(n - 4) \psi _n^{\beta ,J,h} (z)}}{Z_n(\beta ,J,h)} dz , \end{aligned}$$
(2.2.9)

where dz is the Lebesgue measure on B(0, 1). Next, we define the “microcanonical” probability measure on \(\mathbb {R}^\mathbb {N}\) as the 0-tensored version of the probability measure given via its action on continuous bounded functions \(f \in C_b (\mathbb {R}^n)\) by

$$\begin{aligned} f \mapsto \nu _n^{z,h}[f] := \frac{1}{|\mathbb {S}^{n - 3}|}\int _{\mathbb {S}^{n - 3}} d \Omega \ f \left( \sqrt{n} x w_{1,n} + \sqrt{n} y w^h_{2,n} + \sqrt{1 - || z ||^2}\sqrt{n} \sum _{j=3}^n \Omega _j v^h_{j,n} \right) . \end{aligned}$$
(2.2.10)

As a direct application of Lemma 3.1.1 along with the definitions given by Equation (2.2.9) and Equation (2.2.10), we have the following central representation result for the FVGS.

Lemma 2.2.3

Let h be a strongly varying external field.

It follows that

$$\begin{aligned} \mu ^{\beta , J, h}_n =\int _{B(0,1)} \rho _n^{\beta , J, h}(dz) \ \nu _n^{z,h} . \end{aligned}$$
(2.2.11)

By using a similar calculation to the one used for the proof of the form of the limiting free energy, we show that the collection of mixture probability measures \(\{ \rho _n^{\beta ,J,h}\}_{n \in \mathbb {N}}\) is uniformly tight and its limit points are probability measures supported on the set \(M^*(\beta ,J,m)\).

Lemma 2.2.4

Let h be a strongly varying external field.

It follows that the collection of probability measures \(\{ \rho ^{\beta ,J,h}_n\}_{n \in \mathbb {N}}\) is uniformly tight and

$$\begin{aligned} L \left( \{ \rho _n^{\beta , J, h} \}_{n \in \mathbb {N}} \right) \subset \left\{ \rho \in \mathcal {M}_1 (B(0,1)) : {\text {supp}} (\rho ) \subset M^*(\beta ,J,m)\right\} . \end{aligned}$$

For the full proof, see Sect. 3.3. Note that the definition of the support we are using is \({\text {supp}}(\rho ):= \{ x \in B(0,1): \forall \delta> 0, \ \rho (B(x, \delta )) > 0 \}\).

For this model, we show that the structure of \(M^*(\beta ,J,m)\) is simple and completely characterizable. It is either a set with one element or two elements depending on the parameters of the model, see Lemmas 3.4.1 and 3.4.2 for the proofs. For the parameter range where \(m^\parallel = 0\) and \(m^\perp < J\) simultaneously, the transition from a single element to two elements is marked by a critical inverse temperature \(\beta _c:= \frac{J}{(J - m^\perp )(J + m^\perp )}\), such that the set consists of a single element when \(\beta \le \beta _c\) and of two elements when \(\beta > \beta _c\). When the mean is non-vanishing, i.e., \(m^\parallel \not = 0\), the set consists of a single element \(z^*\) such that \(x^* > 0\) if \(m^\parallel > 0\) and \(x^* < 0\) if \(m^\parallel < 0\). For the other parameter ranges, the set consists of a single element \(z^0\) such that \(x^0 = 0\).

We summarize these results in Table 1.

Table 1 Parametric ranges for the structure of the set of global maximizing points of the limiting exponential tilting function for the strongly varying deterministic inhomogeneous external field

Let us now classify these parameter ranges in the following way. We will say that we are in the pure state (PS) parameter range if the set \(M^*(\beta ,J,m)\) consists of a single element. This parameter range can be deduced from the first three rows of the above table. We will say that we are in the mixed state (MS) parameter range if the set \(M^*(\beta ,J,m)\) consists of two elements. This parameter range can be deduced from the last row of the above table. We will always refer to the two elements of this set as \(z^+\) and \(z^-\), where \(x^+ > 0\).
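This classification can be checked numerically; the following sketch (Python with numpy; the grid search and the chosen parameters are purely illustrative) locates the maximizers of \(\psi ^{\beta ,J,m}\) on each half of B(0,1) for \(m^\parallel = 0\) and \(m^\perp < J\), below and above the critical inverse temperature \(\beta _c\).

```python
import numpy as np

def half_ball_maximizers(beta, J, m_par, m_perp, grid=2001):
    """Grid maximizers of psi^{beta,J,m} on {x>0} and {x<0}, with their values."""
    xs = np.linspace(-0.999, 0.999, grid)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    R2 = X ** 2 + Y ** 2
    vals = np.where(R2 < 0.999,
                    0.5 * beta * J * X ** 2 + beta * (m_par * X + m_perp * Y)
                    + 0.5 * np.log1p(-np.minimum(R2, 0.999)),
                    -np.inf)
    out = {}
    for label, mask in (("x>0", X > 0), ("x<0", X < 0)):
        masked = np.where(mask, vals, -np.inf)
        i, j = np.unravel_index(np.argmax(masked), masked.shape)
        out[label] = (round(float(X[i, j]), 3), round(float(Y[i, j]), 3),
                      round(float(masked[i, j]), 6))
    return out

J, m_perp = 1.0, 0.5
beta_c = J / ((J - m_perp) * (J + m_perp))
# PS range: both restricted maximizers sit at x ~ 0, i.e., a single global maximizer.
print(half_ball_maximizers(0.5 * beta_c, J, 0.0, m_perp))
# MS range: two symmetric maximizers z^+ and z^- with equal values of psi.
print(half_ball_maximizers(2.0 * beta_c, J, 0.0, m_perp))
```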

As for the probability measure \(\nu _n^{z,h}\), we show that it satisfies a type of uniform convergence in the variable z to a sufficiently regular probability measure on \(\mathbb {R}^\mathbb {N}\) in the limit as \(n \rightarrow \infty \). To begin with, denote \(\eta \) to be the probability measure on \(\mathbb {R}^\mathbb {N}\) given via its action on \(f \in {\text {LBL}} (\mathbb {R}^\mathbb {N})\) by

$$\begin{aligned} f \mapsto \eta [f] := \left( \int _{\mathbb {R}^{I}} d \phi \ e^{- \frac{|| \phi ||^2}{2}}\right) ^{-1} \int _{\mathbb {R}^{I}} d \phi \ e^{- \frac{|| \phi ||^2}{2}} f (\phi ) , \end{aligned}$$
(2.2.12)

where \(I \subset \mathbb {N}\) is the local index set of f. This probability measure on \(\mathbb {R}^\mathbb {N}\) is the countable dimensional version of the standard finite dimensional Gaussian probability measure.

We will need the projection \(P_n^h: \mathbb {R}^n \rightarrow \mathbb {R}^n\) onto the subspace \(W_n^h\) given by the formula

$$\begin{aligned} P_n^h (\phi ) = \left\langle \phi , w_{1,n}\right\rangle w_{1,n} + \left\langle \phi , w^h_{2,n}\right\rangle w^h_{2,n} . \end{aligned}$$

Let \(T_n^{z,h}: \mathbb {R}^\mathbb {N} \rightarrow \mathbb {R}^{n} \times \mathbb {R}^{\mathbb {N} {\setminus } [n]} \subset \mathbb {R}^\mathbb {N}\) be the transport map given by

$$\begin{aligned} (T_n^{z,h}(\phi ))_1 = \sqrt{n} x w_{1,n} + \sqrt{n} y w^h_{2,n} + \sqrt{1 - || z ||^2} \sqrt{n} \frac{\phi - P^h_n (\phi )}{|| \phi - P^h_n (\phi )||} \end{aligned}$$

and \((T_n^{z,h} (\phi ))_2 = 0\), where \(z = (x,y) \in B(0,1)\). For the given probability measures and transport maps, we show that \(\nu _n^{z,h} = { T_{n}^{z,h} }_* \eta \), see Lemma 3.2.1, where \({ T_{n}^{z,h} }_* \eta \) is the pushforward measure of \(\eta \) by \(T_n^{z,h}\).

Continuing, note that

$$\begin{aligned} \lim _{n \rightarrow \infty } \sqrt{n} \pi _I (w_{1,n}) = 1_I, \ \lim _{n \rightarrow \infty } \sqrt{n} \pi _I (w^h_{2,n}) = \lim _{n \rightarrow \infty } \frac{h_I - m_n^{h, \parallel } 1_I}{m_n^{h, \perp }} = \frac{h_I - m^\parallel 1_I}{m^\perp } , \end{aligned}$$

where \(I \subset \mathbb {N}\) is any finite index set. Let \(T_\infty ^{z,h}: \mathbb {R}^\mathbb {N} \rightarrow \mathbb {R}^\mathbb {N}\) be the transport map given by

$$\begin{aligned} T_\infty ^{z,h} (\phi ) := \sqrt{1 - || z ||^2} \phi + x 1 + y \frac{h - m^\parallel 1}{m^\perp } , \end{aligned}$$
(2.2.13)

where \(z = (x,y) \in B(0,1)\) and \(m^\perp \not = 0\). Using this transport map, consider the probability measure \(\nu _\infty ^{z,h}\) on \(\mathbb {R}^\mathbb {N}\) given by \(\nu _\infty ^{z,h}:={T^{z,h}_\infty }_* \eta \). We show the following uniform convergence result concerning the convergence of \(\nu _n^{z,h}\) to \(\nu _\infty ^{z,h}\).

Lemma 2.2.5

Let h be a strongly varying external field.

It follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \sup _{z \in B(0,1)} \left| \nu _n^{z,h} [f] - \nu _\infty ^{z,h} [f] \right| = 0 \end{aligned}$$

for any \(f \in {\text {LBL}} (\mathbb {R}^\mathbb {N})\).

For the full proof, see Sect. 3.2. The proof of the uniform convergence result relies on specific asymptotic properties of the spherical constraint. The main technical difficulties involve the lack of symmetries such as permutation invariance or translation invariance which are present in the MFS model.
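As an informal illustration of Lemma 2.2.5, and not a substitute for its proof, the following sketch (Python with numpy; the field, the point z, and the observable are illustrative choices, and the sample values of \(m_n^h\) are used as a proxy for m) compares \(\nu _n^{z,h}[f]\) and \(\nu _\infty ^{z,h}[f]\) for a local bounded Lipschitz observable by Monte Carlo, using the pushforward representations \(\nu _n^{z,h} = { T_{n}^{z,h} }_* \eta \) and \(\nu _\infty ^{z,h} = {T^{z,h}_\infty }_* \eta \).

```python
import numpy as np

rng = np.random.default_rng(2)
n, samples = 500, 10_000
h = rng.uniform(0.0, 2.0, size=n)                 # illustrative strongly varying field
m_par = h.mean()
m_perp = np.sqrt((h ** 2).mean() - m_par ** 2)
x, y = 0.3, 0.4                                   # a fixed point z = (x, y) in B(0,1)
scale = np.sqrt(1.0 - x ** 2 - y ** 2)

one_n = np.ones(n)
w1 = one_n / np.sqrt(n)
w2 = (h - m_par * one_n) / (np.sqrt(n) * m_perp)

def f(phi12):
    """A local bounded Lipschitz observable of the first two spins."""
    return np.tanh(phi12[:, 0] + phi12[:, 1])

# nu_n^{z,h}[f]: pushforward of the Gaussian measure eta by T_n^{z,h}.
g = rng.normal(size=(samples, n))
proj = np.outer(g @ w1, w1) + np.outer(g @ w2, w2)          # projection P_n^h onto W_n^h
perp = g - proj
perp *= np.sqrt(n) / np.linalg.norm(perp, axis=1, keepdims=True)
phi_n = np.sqrt(n) * (x * w1 + y * w2) + scale * perp
nu_n = f(phi_n[:, :2]).mean()

# nu_infty^{z,h}[f]: pushforward of eta by T_infty^{z,h}, cf. (2.2.13).
g2 = rng.normal(size=(samples, 2))
phi_inf = scale * g2 + x + y * (h[:2] - m_par) / m_perp
nu_inf = f(phi_inf).mean()

print(nu_n, nu_inf)   # close for large n, up to Monte Carlo error
```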

By combining the limit point result Lemma 2.2.4 for the collection of mixture probability measures \(\{ \rho _n^{\beta ,J,h} \}_{n \in \mathbb {N}}\), the uniform convergence result from Lemma 2.2.5, and repeated use of Prokhorov’s theorem, see [14, Chapter 3], we have the following partial classification of the IVGS.

Theorem 2.2.6

Let h be a strongly varying external field.

  1.

    For the pure state parameter range, we have

    $$\begin{aligned} \mathcal {G}_\infty (\beta ,J,h) = \left\{ \nu _\infty ^{z^*,h} \right\} . \end{aligned}$$
  2.

    For the mixed state parameter range, we have

    $$\begin{aligned} \mathcal {G}_\infty (\beta ,J,h) \subset {\text {conv}} \left( \nu _\infty ^{z^+,h}, \ \nu _\infty ^{z^-,h} \right) := \{ \lambda \nu _\infty ^{z^+,h} + (1 - \lambda ) \nu _\infty ^{z^-,h} : \lambda \in [0,1] \} . \end{aligned}$$

For the full proofs, see Corollaries 3.3.4 and 3.3.3.

In the PS parameter range, we see that there exists a unique IVGS. Since the proof of this result relies on Prokhorov’s theorem, we do not obtain a complete classification of the collection of IVGS for the MS parameter range. Indeed, it is possible that there could still exist a single unique limit, but this method of proof does not provide it.

Let us emphasize that the method of proof used in Theorem 2.2.6 relies heavily on Prokhorov’s theorem, and the uniqueness of the IVGS in the PS parameter range is due to the fact that the only probability measure supported on a single point is the Dirac measure at that point. In general, when \(M^*(\beta ,J,m)\) does not consist of a single point, to fully characterize the limit points of the mixture measures as in Lemma 2.2.4, one would need to characterize the probability measures supported on \(M^*(\beta ,J,m)\) that can be obtained as weak limits of convergent subsequences of the mixture measures \(\{ \rho _n^{\beta ,J,h}\}_{n \in \mathbb {N}}\). Later on in this paper, we prove two theorems concerning this phenomenon. For the deterministic inhomogeneous external field, subject to additional assumptions on the vectors \(\{ m_n^h \}_{n \in \mathbb {N}}\), we show that in the MS parameter range the IVGS can still consist of a single unique point, see Theorem 2.2.9. For the random external field, we show that all convex combinations of the pure states can be obtained, see Theorem 2.3.5. These results show that, in order to give results concerning the opposite inclusion or a characterization of weakly converging subsequences of the mixture measures, one must control the relative weights associated with the FVGS, to be introduced in Equation (2.2.15). We will discuss these results in depth once they are proven.

2.2.3 Full Classification of the Infinite Volume Gibbs States

In the MS parameter range, there are two global maximizing points \(z^\pm \in M^*(\beta ,J,m)\). They are related by the fact that \(x^- = - x^+ < 0\) and \(y^- = y^+\). This suggests studying the mixture probability measure \(\rho _n^{\beta ,J,h}\) conditioned to the positive and negative half-discs \(B_{+} (0,1):= B(0,1) \cap ((0, \infty ) \times \mathbb {R})\) and \(B_- (0,1):= B(0,1) \cap ((-\infty , 0) \times \mathbb {R})\), which is equivalent to conditioning the FVGS on the event that the magnetization is positive or negative, respectively. The conditioned FVGS \(\mu _n^{\beta ,J,h, \pm }\) act on \(f \in C_b (\mathbb {R}^\mathbb {N})\) by

$$\begin{aligned} \mu _n^{\beta ,J,h,\pm }[f] := \frac{\mu _n^{\beta ,J,h} [ \mathbbm {1}(\pm M_n> 0) f]}{\mu _n^{\beta ,J,h} [ \mathbbm {1}(\pm M_n > 0)]} . \end{aligned}$$
(2.2.14)

To accompany the conditioned FVGS, the weights \(W_n^{\beta ,J,h, \pm }\) are given by

$$\begin{aligned} W_n^{\beta ,J,h, \pm } := \mu _n^{\beta ,J,h} [ \mathbbm {1}(\pm M_n > 0)] . \end{aligned}$$
(2.2.15)

This conditioning yields a representation of the form

$$\begin{aligned} \mu _n^{\beta ,J,h} = W_n^{\beta ,J,h,+} \mu _n^{\beta ,J,h,+} + (1- W_n^{\beta ,J,h,+}) \mu _n^{\beta ,J,h,-} . \end{aligned}$$
(2.2.16)

By reusing the proof of Theorem 2.2.6 for the PS parameter range, we show that the conditioned probability measures \(\mu _n^{\beta ,J,h, \pm }\) converge weakly in the limit as \(n \rightarrow \infty \).

Lemma 2.2.7

Let h be a strongly varying external field.

For the mixed state parameter range, it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mu _n^{\beta ,J,h, \pm } = \nu _{\infty }^{z^\pm , h} . \end{aligned}$$

For the full proof, see Sect. 3.5.

It thus follows that the convergence properties of the weights \(W_n^{\beta ,J,h, \pm }\) determine the limiting structure of the FVGS. By rearranging the form of the weights, we see that

$$\begin{aligned} W_n^{\beta , J, h, +} = \frac{1}{1 + \frac{\int _{B_-(0,1)} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n^h, z \right\rangle } e^{(n - 4)\psi _n^{\beta ,J,h}(z)}}{\int _{B_+(0,1)} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n^h, z \right\rangle } e^{(n - 4)\psi _n^{\beta ,J,h}(z)}}} . \end{aligned}$$
(2.2.17)
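As an informal numerical illustration of this representation (Python with numpy; the Riemann-sum quadrature, the parameter values, and the prescribed choice \(m_n^h = (1/n, m^\perp )\), which corresponds to the linear case \(\delta = 1\) with \(\gamma ^\parallel = 1\) of Lemma 2.2.8 below, are all illustrative), the following sketch evaluates the weight \(W_n^{\beta ,J,h,+}\) directly from the two-dimensional integrals in Equation (2.2.17).

```python
import numpy as np

def weight_plus(beta, J, m_n, n, grid=1201):
    """W_n^{beta,J,h,+} from the two-dimensional integral representation (2.2.17)."""
    xs = np.linspace(-0.999, 0.999, grid)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    R2 = X ** 2 + Y ** 2
    inside = R2 < 0.999
    psi_n = (0.5 * beta * J * X ** 2 + beta * (m_n[0] * X + m_n[1] * Y)
             + 0.5 * np.log1p(-np.minimum(R2, 0.999)))
    log_integrand = np.where(inside,
                             2 * beta * J * X ** 2
                             + 4 * beta * (m_n[0] * X + m_n[1] * Y)
                             + (n - 4) * psi_n,
                             -np.inf)
    log_integrand -= log_integrand.max()        # common factor cancels in the ratio below
    integrand = np.exp(log_integrand)
    ratio = integrand[X < 0].sum() / integrand[X > 0].sum()   # Riemann sums; dz cancels
    return 1.0 / (1.0 + ratio)

# Mixed state range (m^par = 0, m^perp < J, beta > beta_c) with m_n^{h,par} = 1/n.
J, m_perp, beta = 1.0, 0.5, 3.0
for n in (100, 1000, 10_000):
    print(n, weight_plus(beta, J, (1.0 / n, m_perp), n))
```

For this choice, the computed weights approach \(\frac{1}{1 + e^{-2 \beta x^+}}\) as n grows, in line with the linear case of Lemma 2.2.8 below.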

To resolve the convergence properties, we introduce a sequence of local maximizing points \(\{ z_n^* \}_{n \in \mathbb {N}}\) such that each \(z_n^*\) is a local maximizing point of \(\psi _n^{\beta ,J,h}\), \(z_n^*\) satisfies the critical point equation \(\nabla \psi _n^{\beta ,J,h} (z_n^*) = 0\), and \(z_n^* \rightarrow z^*\) in the limit as \(n \rightarrow \infty \), where \(z^*\) is a global maximizing point of \(\psi ^{\beta ,J,m}\), see Lemma 3.5.1 for the construction. From the proof presented for Lemma 3.4.2, we know that the Hessian \(H[\psi ^{\beta ,J,m}]\) of \(\psi ^{\beta ,J,m}\) is negative definite at the points \(z^\pm \). With these observations, we show that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{n \int _{B_{\pm } (0,1)} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n^h, z \right\rangle } e^{(n - 4)\psi _n^{\beta ,J,h}(z)}}{e^{n \psi _n^{\beta ,J,h}(z_n^{\pm })}} = \frac{1}{(1 - || z^\pm ||^2)^2}\int _{\mathbb {R}^2} dz \ e^{\frac{1}{2} \left\langle z, H[\psi ^{\beta ,J,m}](z^\pm ) z \right\rangle } , \end{aligned}$$
(2.2.18)

where \(\{ z_n^\pm \}_{n \in \mathbb {N}}\) is the collection of local maximizing points in their respective half-discs of B(0, 1). The proof, see Lemma 3.5.3, is essentially a modification of a similar proof for Laplace-type integrals which can be found in [35, Chapter 2]. The idea and method of constructing sequences of local maximizing points in this way is a model-specific adaptation of the same method presented in [3, 4].

Let us also remark that all of the proofs that are presented in Sect. 3.5 use the same notation and sequence of local maximizing points \(\{ z_n^* \}_{n \in \mathbb {N}}\), and that the proofs and techniques presented here are only valid when the Hessian of \(\psi ^{\beta ,J,m}\) at \(z^*\) is negative definite. This point is also emphasized below the proof of Lemma 3.5.1.

In the following exposition, we will present a series of results concerning the rates of convergence of various quantities appearing in these calculations. We will use the symbol \(\approx \) to indicate that the corresponding relations hold in the large n limit with a suitable error term for the desired application. In this notation, the weights satisfy

$$\begin{aligned} W_n^{\beta ,J,h,+} \approx \frac{1}{1 + e^{n (\psi _n^{\beta ,J,h} (z_n^-) - \psi _n^{\beta ,J,h} (z_n^+))}} . \end{aligned}$$

Using the critical point equations, we show that the local maximizing points satisfy

$$\begin{aligned} z_n^\pm - z^\pm \approx - \beta H[\psi ^{\beta ,J,m}]^{-1} (z^\pm ) (m_n^h - m) , \end{aligned}$$

see Lemma 3.5.2 for the proof. As a direct application of this result, we show that the exponential tilting functions evaluated at the local maximizing points satisfy

$$\begin{aligned} \psi _n^{\beta ,J,h} (z_n^\pm ) - \psi ^{\beta ,J,m} (z^\pm ) \approx \beta \left\langle m_n^h - m, z^\pm \right\rangle - \frac{\beta ^2}{2} \left\langle m_n^h - m, H[\psi ^{\beta ,J,m}]^{-1} (z^\pm ) (m_n^h - m) \right\rangle , \end{aligned}$$

see Lemma 3.5.4 for the proof. Finally, using the fact that \(x^- = - x^+\), \(y^+ = y^-\), and \(\psi ^{\beta ,J,m} (z^+) = \psi ^{\beta ,J,m} (z^-)\), and the previous result, we show that

$$\begin{aligned} \psi _n^{\beta ,J,h} (z_n^-) - \psi _n^{\beta ,J,h} (z_n^+) \approx - 2 \beta (m_n^{h, \parallel } - m^\parallel ) x^+ . \end{aligned}$$

By combining all of these results together, we can compute the limit of the weights by specifying the rates of convergence of the sample mean and sample standard deviation.

Lemma 2.2.8

Let h be a strongly varying external field, and suppose that we are in the mixed state parameter range.

Suppose there is \(\delta \in [0, \infty )\) such that \(n^\delta (m_n^h - m) \rightarrow \gamma := (\gamma ^{\parallel }, \gamma ^\perp ) \in \mathbb {R}^2\) in the limit as \(n \rightarrow \infty \).

  1.

    If \(\delta \in [0,1)\) and \(\gamma ^{\parallel } \not = 0\), it follows that

    $$\begin{aligned} \lim _{n \rightarrow \infty } W_n^{\beta ,J,h,+} = \mathbbm {1}(\gamma ^\parallel > 0) . \end{aligned}$$
  2.

    If \(\delta = 1\), it follows that

    $$\begin{aligned} \lim _{n \rightarrow \infty } W_n^{\beta ,J,h,+} = \frac{1}{1 + e^{- 2 \beta x^+ \gamma ^\parallel }} . \end{aligned}$$
  3.

    If \(\delta \in (1, \infty )\), it follows that

    $$\begin{aligned} \lim _{n \rightarrow \infty } W_n^{\beta ,J,h,+} = \frac{1}{2} . \end{aligned}$$

For the full proof, see Sect. 3.5.

Combining together the conditioned representation from Equation (2.2.16), the weak convergence of the conditioned probability measures from Lemma 2.2.7, and the asymptotics of the weights from Lemma 2.2.8, we present the full classification of the IVGS given sublinear, linear, and superlinear rates of convergence of \(m_n^h\) to m in the limit as \(n \rightarrow \infty \).

Theorem 2.2.9

Let h be a strongly varying external field, and suppose that we are in the mixed state parameter range.

Suppose there is \(\delta \in [0, \infty )\) such that \(n^\delta (m_n^h - m) \rightarrow \gamma := (\gamma ^{\parallel }, \gamma ^\perp ) \in \mathbb {R}^2\) in the limit as \(n \rightarrow \infty \).

  1.

    If \(\delta \in [0,1)\) and \(\gamma ^{\parallel } \not = 0\), it follows that

    $$\begin{aligned} \lim _{n \rightarrow \infty } \mu _n^{\beta , J, h} = \mathbbm {1}(\gamma ^\parallel > 0) \nu _\infty ^{z^+,h} + \mathbbm {1}(\gamma ^\parallel < 0) \nu _\infty ^{z^-,h}. \end{aligned}$$
  2.

    If \(\delta = 1\), it follows that

    $$\begin{aligned} \lim _{n \rightarrow \infty } \mu _n^{\beta , J, h} = \frac{1}{1 + e^{- 2 \beta x^+ \gamma ^\parallel }} \nu _\infty ^{z^+,h} + \frac{1}{1 + e^{ 2 \beta x^+ \gamma ^\parallel }} \nu _\infty ^{z^-,h} . \end{aligned}$$
  3.

    If \(\delta \in (1, \infty )\), it follows that

    $$\begin{aligned} \lim _{n \rightarrow \infty } \mu _n^{\beta , J, h} = \frac{1}{2} \nu _\infty ^{z^+,h} + \frac{1}{2}\nu _\infty ^{z^-,h} . \end{aligned}$$

This result improves on Theorem 2.2.6 for the mixed state parameter range. In particular, this result shows that when the external field is strongly varying, one can construct any IVGS by a specific choice of the rate of convergence of the sample mean and sample standard deviation. Later on in this paper, in Theorems 2.3.5 and 2.3.9, we will give concrete probabilistic examples which apply Theorem 2.2.9 for \(\delta = \frac{1}{2}\) and \(\delta = 1\) for some \(\gamma \). However, we were unable to find a general way to construct deterministic inhomogeneous external fields which would realize the other possible asymptotic rates presented in Theorem 2.2.9. In general, this problem concerns the simultaneous control of the Cesàro sums of the sequences \(\{ h_i \}_{i \in \mathbb {N}}\) and \(\{ h_i^2 \}_{i \in \mathbb {N}}\), which seems difficult. A result in this direction for the RFCW model is the so-called generalized quasi-average method utilized in [4], but this method, from our perspective, involves a perturbed external field rather than a fixed external field.

2.2.4 Summary and Remarks

Let us now summarize these results for the strongly varying external field. When the parallel magnetization component is non-vanishing, i.e., \(m^\parallel \not = 0\), irrespective of all other details of the model, there is a single unique IVGS. When the parallel magnetization component is vanishing, i.e., \(m^\parallel = 0\), and the perpendicular magnetization component is large enough, i.e., \(m^\perp \ge J\), there is a single unique IVGS. When the parallel magnetization component is vanishing, the perpendicular magnetization component is small enough, i.e., \(m^\perp < J\), and the inverse temperature is small enough, i.e., \(\beta \le \beta _c\), there is a single unique IVGS. Finally, when the parallel magnetization component is vanishing, the perpendicular magnetization component is small enough, and the inverse temperature is large enough, i.e., \(\beta > \beta _c\), the IVGS can be realized as any convex combination of the pure states \(\nu _\infty ^{z^+,h}\) and \(\nu _\infty ^{z^-,h}\), subject to the sublinear, linear, and superlinear rates of convergence assumptions for the sample mean and sample standard deviation.

To our knowledge, the results concerning the limiting free energy, classification of IVGS, and rate of convergence analysis are novel contributions in the literature for this specific model, and, in general, this level of detail and specification is rare even for similar models. The last point about details and specification will be discussed further for the random external field.

For a deterministic inhomogeneous external field, a result in the literature in the same spirit is the classification of the IVGS for the classical Curie–Weiss model presented in [10]. The major difference to our work is that the intent of that work is to realize convex combinations of pure states in the standard Curie–Weiss model by perturbing the Curie–Weiss Hamiltonian with a small symmetry breaking external field. This would be similar to studying the weakly varying case for our model, which we explicitly exclude. Very briefly, the method of solution for that model involves using the Hubbard–Stratonovich transform to write the perturbed finite volume Gibbs states as a mixture of product states. In our case, we do not, and in some sense cannot, apply the Hubbard–Stratonovich transform, nor is there an immediate product structure. This is why one needs the uniform convergence lemma, Lemma 2.2.5 in this paper, used in the proof of one of the main results. In [20], the authors were able to utilize the permutation invariance of the standard mean-field spherical model for a relatively simple proof using Wasserstein distances for weak convergence. In this work, due to the permutation invariance breaking external field, we needed a different technique, which is the uniform convergence lemma.

Let us also remark that our paper does not make use of the method of steepest descent utilized by the authors in the original work which introduced the spherical model [8]. We only mention this since there are several instances of the utilization of this method for spherical models, yet we opted for a direct approach since it was possible. The method of steepest descent has been applied to study the spherical model in a specific deterministic non-homogeneous external field in [28] and the limiting Gibbs states of the spherical model with a small homogeneous external field in [11].

2.3 Random External Field

Next, we begin the presentation and specification of our results when the deterministic inhomogeneous external field is replaced by a random external field. A measurable map \(h: (\Omega , \mathcal {F}, \mathbb {P}) \rightarrow (\mathbb {R}^\mathbb {N}, \mathcal {B}(\mathbb {R}^\mathbb {N}))\) is said to be a random external field, where \((\Omega , \mathcal {F}, \mathbb {P})\) is a probability triple, and \((\mathbb {R}^\mathbb {N}, \mathcal {B}(\mathbb {R}^\mathbb {N}))\) is a measurable space, where \(\mathcal {B}(\mathbb {R}^\mathbb {N})\) is the Borel \(\sigma \)-algebra associated with the product topology on \(\mathbb {R}^\mathbb {N}\). Since the map \(h \mapsto \mu _n^{\beta ,J,h}\) is continuous and thus measurable, it follows that \(\omega \mapsto h(\omega ) \mapsto \mu _n^{\beta ,J,h(\omega )}\) is also measurable, and thus \(\mu _n^{\beta ,J,h}\), interpreted as a probability measure-valued random variable \(\mu _n^{\beta ,J,h(\cdot )}: (\Omega , \mathcal {F}, \mathbb {P}) \rightarrow (\mathcal {M}_1 (\mathbb {R}^\mathbb {N}), \mathcal {B}(\mathcal {M}_1 (\mathbb {R}^\mathbb {N})))\), is a random probability measure, see Lemma 3.6.2 for this justification. The collection of FVGS is then a collection of probability measure-valued random variables, and we are interested in studying the limiting properties of this collection subject to additional assumptions on the random external field.

Let us also briefly remark on the distinction between convergence in distribution and weak convergence of random probability measures. If \(\mathcal {S}\) is a Polish space and \(\{ X_n \}_{n \in \mathbb {N}}\) and X are \(\mathcal {S}\)-valued random variables, we say that \(X_n\) converges to X in distribution if the probability distributions of \(X_n\) converge weakly to the probability distribution of X. For random probability measures \(\{ \mu _n \}_{n \in \mathbb {N}}\) and \(\mu \), when we say that \(\mu _n\) converges to \(\mu \) in distribution, we mean it in the sense that we just explained. When we say that \(\mu _n\) converges to \(\mu \) almost surely, we mean that \(d(\mu _n, \mu ) \rightarrow 0\) in the limit as \(n \rightarrow \infty \) almost surely, which is equivalent to saying that \(\mu _n\) converges weakly to \(\mu \) almost surely. We will try to stay consistent with this terminology so that weak convergence is reserved for probability measures and convergence in distribution is reserved for random variables.

For the following assumptions to the random external field, we will need the concept of possible values of a random walk. Let \(\{ X_i \}_{i \in \mathbb {N}}\) be a collection of independent identically X-distributed \(\mathbb {R}^d\)-valued random variables. Denote \(\{ S_n' \}_{n \in \mathbb {N}}\) to be the centred random walk with step length \(X - \mathbb {E} X\) given by \(S_n':= \sum _{i=1}^n (X_i - \mathbb {E} X_i)\). We say that a point \(x \in \mathbb {R}^d\) is a possible value of \(S_n'\) if for any \(\varepsilon > 0\) there exists \(n \in \mathbb {N}\) such that \(\mathbb {P} (|| S_n' - x || < \varepsilon ) > 0\). Denote the collection of possible values by P. We say that a point \(x \in \mathbb {R}^d\) is a recurrent value of \(S_n'\) if for any \(\varepsilon > 0\), we have \(\mathbb {P} (|| S_n' - x|| < \varepsilon \text { infinitely often}) = 1\).

We present the following further assumptions for the random external field.

Assumption 2.3.1

  1. (A1)

    The components of h are independent \(h_0\)-distributed real-valued random variables such that \(\mathbb {E} h_0^2 < \infty \) and \(\mathbb {V} h_0 > 0\), where \(\mathbb {V} h_0:= \mathbb {E} h_0^2 - \left( \mathbb {E} h_0 \right) ^2\)

  2. (A2)

    The random variable \(h_0\) satisfies \(\mathbb {E} h_0^4 < \infty \) and \(\mathbb {V} h_0^2 > 0\)

  3. (A3)

    The set of possible values P of the centred random walk with step length \((h_0 - \mathbb {E} h_0, h_0^2 - \mathbb {E} h_0^2)\) satisfies \(\pi _1 (P) = \mathbb {R}\), where \(\pi _1 (\cdot )\) is the canonical projection to the first coordinate

  4. (A4)

    The random variable \(h_0\) satisfies \(\mathbb {E} h_0^{4 + \xi } < \infty \) for some \(\xi > 0\)

Note that the moment conditions of (A2) imply the moment conditions of (A1). For the rest of this paper, we will denote \(\{ S_n \}_{n \in \mathbb {N}}\) to be the random walk with step length \((h_0, h_0^2)\) and \(\{ S_n'\}_{n \in \mathbb {N}}\) to be the centred random walk with step length \((h_0 - \mathbb {E} h_0, h_0^2 - \mathbb {E} h_0^2)\).

2.3.1 Self-averaging of the Limiting Free Energy

In terms of the random walk \(\{ S_n \}_{n \in \mathbb {N}}\), we can write the vector \(m_n^h\) as

$$\begin{aligned} m^{h,\parallel }_n = \frac{{(S_n)}_1}{n}, \ m_n^{h, \perp } = \sqrt{\frac{{(S_n)}_2}{n} - \left( \frac{{(S_n)}_1}{n}\right) ^2} . \end{aligned}$$

From this simple observation, as an application of the strong law of large numbers for the random walk \(\{ S_n \}_{n \in \mathbb {N}}\), we show that the sequence of vectors \(\{ m_n^h \}_{n \in \mathbb {N}}\) satisfies a strong law of large numbers.

Lemma 2.3.2

Let h be a random external field which satisfies (A1).

It follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } m_n^h = \left( \mathbb {E} h_0, \sqrt{\mathbb {V} h_0}\right) := m , \end{aligned}$$

almost surely.

For the proof, see Lemma 3.6.3.
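As a quick sanity check of this law of large numbers, one can simulate the running sums directly. The following Python sketch is purely illustrative and not part of the proofs; the Gaussian distribution chosen for \(h_0\) and all numerical values are assumptions made only for the example.

```python
import numpy as np

# Illustrative check of Lemma 2.3.2: m_n^h, built from the running sums of
# (h_i, h_i^2), should converge almost surely to m = (E h_0, sqrt(V h_0)).
# Here h_0 is Gaussian with mean 0.5 and variance 4.0 purely for illustration.
rng = np.random.default_rng(0)
n = 100_000
h = rng.normal(loc=0.5, scale=2.0, size=n)

S1 = np.cumsum(h)        # (S_n)_1 = h_1 + ... + h_n
S2 = np.cumsum(h ** 2)   # (S_n)_2 = h_1^2 + ... + h_n^2
ns = np.arange(1, n + 1)

m_par = S1 / ns                                            # m_n^{h, par}
m_perp = np.sqrt(np.maximum(S2 / ns - m_par ** 2, 0.0))    # m_n^{h, perp}

print("m_n^h at n = 10^5:", (m_par[-1], m_perp[-1]))
print("limit m          :", (0.5, 2.0))
```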

We see that the condition \(\mathbb {V} h_0 > 0\) ensures that the random external field h is strongly varying almost surely. Subject to assumption (A1), it immediately follows that the limiting free energy is given by

$$\begin{aligned} f(\beta ,J,h) = \lim _{n \rightarrow \infty } \frac{1}{n} \ln Z_n (\beta ,J,h) = \sup _{z \in B(0,1)} \psi ^{\beta ,J,m} (z) \end{aligned}$$

almost surely. Note that although the partition functions \(Z_n (\beta ,J,h)\) are random variables, the limiting free energy is a deterministic quantity. We show that the collection of finite volume free energies is uniformly integrable and thus we have the following result concerning the self-averaging of the limiting free energy.

Theorem 2.3.3

Let h be a random external field which satisfies (A1).

It follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n} \mathbb {E} \ln Z_n (\beta ,J,h) = \lim _{n \rightarrow \infty } \frac{1}{n} \ln Z_n (\beta ,J,h) = \sup _{z \in B(0,1)} \psi ^{\beta ,J,m} (z) \end{aligned}$$

almost surely.

For the proof, see Sect. 3.6.

Since the random external field is almost surely strongly varying, the classification of the parameter ranges for the pure states and mixed states is the same as for the deterministic inhomogeneous external field. The updated values of m and \(\beta _c\) are \(m^\parallel = \mathbb {E} h_0\), \(m^\perp = \sqrt{\mathbb {V} h_0}\), and \(\beta _c = \frac{J}{(J - \sqrt{\mathbb {V} h_0}) (J + \sqrt{\mathbb {V} h_0})}\). We present the updated values in Table 2.
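For instance, if \(h_0\) is uniformly distributed on [0, 1], so that \(\mathbb {E} h_0 = \frac{1}{2}\) and \(\mathbb {V} h_0 = \frac{1}{12}\), then \(m = \left( \frac{1}{2}, \frac{1}{\sqrt{12}}\right) \) and, for \(J = 1\), the formula above gives \(\beta _c = \frac{1}{\left( 1 - \frac{1}{\sqrt{12}}\right) \left( 1 + \frac{1}{\sqrt{12}}\right) } = \frac{12}{11}\).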

Table 2 Parametric ranges for the structure of the set of global maximizing points of the limiting exponential tilting function for the random external field

As a direct application of Theorem 2.2.6, we have the same partial classification of the IVGS for the random external field as for the deterministic field, the only difference being that the classification only holds almost surely. For the random external field, the IVGS can be characterized almost surely also in the MS parameter range, subject to further assumptions on the random external field. Recall that the first three rows of Table 2 describe the PS parameter range and the last row describes the MS parameter range.

2.3.2 Chaotic Size Dependence

For the PS parameter range, the IVGS is unique almost surely and its proof only relies on assumption (A1). For the MS parameter range, we begin by noting that the sequence of vectors \(\{ m_n^h \}_{n \in \mathbb {N}}\) can be written entirely in terms of the centred random walk \(\{ S_n' \}_{n \in \mathbb {N}}\) by

$$\begin{aligned} n(m_n^{h, \parallel } - m^\parallel ) = \left( S_n' \right) _1, \ n(m_n^{h, \perp } - m^\perp ) = \frac{1}{m_n^{h, \perp } + m^\perp } \left( S_n' \right) _2 - \frac{m_n^{h, \parallel } + m^\parallel }{m_n^{h, \perp } + m^\perp } \left( S_n' \right) _1 . \end{aligned}$$
(2.3.1)

Subject to assumption (A2), it follows that \(\frac{1}{\sqrt{n}} S_n' \rightarrow G\) in distribution in the limit as \(n \rightarrow \infty \), where G is a non-degenerate 2-dimensional Gaussian, and, as a result, the recurrent and possible values of the centred random walk \(\{ S_n' \}_{n \in \mathbb {N}}\) are the same, see [12, Chapter 5]. Using the recurrence of the centred random walk, we show that the sequence of vectors \(\{ m_n^h \}_{n \in \mathbb {N}}\) satisfies a similar recurrence result.

Lemma 2.3.4

Let h be a random external field which satisfies (A1) and (A2).

It follows that

$$\begin{aligned} \left\{ \left( p_1, \frac{1}{2 \sqrt{\mathbb {E} h_0^2}} p_2 \right) : p := (p_1,p_2) \in P \right\} \subset L \left( \left\{ n (m_n^h - m)\right\} _{n \in \mathbb {N}} \right) \end{aligned}$$

almost surely.

For the proof, see Lemma 3.6.3. Let us also briefly remark on proofs of this kind, which involve many steps that hold almost surely. If there is an at most countable collection of statements which all hold almost surely, then they all hold simultaneously almost surely. In the proofs, there will typically be a number of such almost sure statements which are used in the order they appear. These proofs should be read so that one collects all almost sure statements made in the proof, and the set of probability 1 on which the theorem holds is the intersection of the corresponding events.

As an application of Theorem 2.2.9 in the case where \(\delta = 1\) and \(\gamma = \left( p_1, \frac{1}{2 \sqrt{\mathbb {E} h_0^2}} p_2\right) \), we have the following complete classification of the IVGS almost surely.

Theorem 2.3.5

Let h be a random external field which satisfies (A1).

For the pure state parameter range, it follows that

$$\begin{aligned} \mathcal {G}_\infty (\beta ,J,h) = \left\{ \nu _\infty ^{z^*,h} \right\} \end{aligned}$$

almost surely.

If h also satisfies (A2) and (A3), for the mixed state parameter range, it follows that

$$\begin{aligned} \mathcal {G}_\infty (\beta ,J,h) = {\text {conv}} \left( \nu _\infty ^{z^+,h}, \nu _\infty ^{z^-,h}\right) \end{aligned}$$

almost surely.

The proof of this result, see Sect. 3.6, is given for the case where \(\pi _1 (P)\) is not necessarily the whole space.

The phenomenon proven in this result is referred to as chaotic size dependence(CSD) due to Newman and Stein [25]. For more physically relevant models, this property is distressing in the sense that it predicts that the IVGS depend on the way the subsequences of finite volumes are selected when the external field is random. This property has been studied for a variety of systems including the BFCW in [4, 21]. For the Ising model with random boundary conditions, a result of this type was obtained in [34]. To our knowledge, our result is novel for this particular model, and the generality of the result is greater than similar results obtained for other models with random external fields. In particular, the works of [4, 21, 30] give a certain emphasis to the case where the random external field has components which are Bernoulli distributed. In addition, in [21], it is remarked that a random external field with continuous components, as opposed to the discrete components of the Bernoulli field, is expected to realize all convex combinations of the pure states. This is indeed the case in this model for any \(h_0\) satisfying (A1), (A2) and (A3).

2.3.3 Construction of the Aizenman–Wehr Metastate

Since almost sure convergence is too strong a form of convergence for the FVGS, we will instead consider weaker forms of convergence which ultimately result in constructions of limiting objects similar to the IVGS. These constructions have been introduced in the disordered systems literature, and we will reference them as they appear in this paper. For more details and exposition, we refer to [9, Chapter 6]. In addition, since we are dealing with random probability measures, for their convergence properties we refer to [17, Chapter 4].

We begin with the collection of joint probability measures \(\{ K_n^{\beta ,J}\}_{n \in \mathbb {N}}\) which act on \(f \in C_b (\mathbb {R}^\mathbb {N} \times \mathbb {R}^\mathbb {N})\) by

$$\begin{aligned} K_n^{\beta ,J} [f] := \mathbb {E} \mu _n^{\beta ,J,h} [f(h, \cdot )], \end{aligned}$$

where the expectation \(\mu _n^{\beta ,J,h} [f(h, \cdot )]\) is taken with respect to the second argument. Note that the marginal distribution of the first component is simply the distribution of h, and the marginal distribution of the second component is given by the intensity measure \(\mathbb {E} \mu _n^{\beta ,J,h}\) which acts on \(f \in C_b (\mathbb {R}^\mathbb {N})\) by \(f \mapsto \mathbb {E} \mu _n^{\beta ,J,h} [f]\). We denote the weak limit, when it exists, of the joint probability measures by \(K^{\beta ,J}\).

Next, we will consider the collection of metastate probability measures \(\{ \mathcal {K}_{n}^{\beta ,J}\}_{n \in \mathbb {N}}\) which are the probability distributions of the \(\mathbb {R}^\mathbb {N} \times \mathcal {M}_1 (\mathbb {R}^\mathbb {N})\)-valued random variables \((h, \mu _n^{\beta ,J,h})\). Note that the marginal distribution of the first component of the metastate probability measure is the distribution of h, and the marginal distribution of the second component is the distribution of \(\mu _n^{\beta ,J,h}\). We denote the probability measure corresponding to the limit in distribution, when it exists, of the metastate probability measures by \(\mathcal {K}^{\beta ,J}\). Since \(\mathcal {K}^{\beta ,J}\) is a probability measure on \(\mathbb {R}^\mathbb {N} \times \mathcal {M}_1 (\mathbb {R}^\mathbb {N})\), we can obtain a random probability measure on probability measures by taking the regular conditional distribution of the second argument given the first argument.

Definition 2.3.6

A conditioned metastate probability measure or Aizenman–Wehr metastate \(\kappa ^{\beta ,J,h}\), when it exists, is a measurable map \(\kappa ^{\beta ,J, \cdot }: \mathbb {R}^\mathbb {N} \rightarrow \mathcal {M}_1 (\mathcal {M}_1 (\mathbb {R}^\mathbb {N}))\) which satisfies

$$\begin{aligned} \int _{\mathbb {R}^\mathbb {N} \times \mathcal {M}_1 (\mathbb {R}^\mathbb {N})} \mathcal {K}^{\beta ,J} (dh, d \mu ) \ f (h, \mu ) = \mathbb {E} \kappa ^{\beta ,J,h} [f(h, \cdot )] \end{aligned}$$

for all \(f \in C_b (\mathbb {R}^\mathbb {N} \times \mathcal {M}_1 (\mathbb {R}^\mathbb {N}))\).

This construction is due to Aizenman and Wehr [2], and the name metastate refers to the fact that the resulting object is a probability measure on probability measures. Although we gave here the definition of a conditioned metastate probability measure, we will still refer to the upcoming construction as the conditioned metastate probability measure.

Let us now remark on some properties of the joint probability measures and the metastate probability measures. If \(f \in C_b (\mathbb {R}^\mathbb {N} \times \mathbb {R}^\mathbb {N})\), then it follows that the map \((h, \mu ) \mapsto \mu [f(h, \cdot )]\) is continuous and bounded. As a result, if the weak limit of the metastate probability measures exists in the limit as \(n \rightarrow \infty \), we must have

$$\begin{aligned} \int _{\mathbb {R}^\mathbb {N} \times \mathcal {M}_1 (\mathbb {R}^\mathbb {N})} \mathcal {K}^{\beta ,J} (dh, d \mu ) \ \mu [f(h, \cdot )]&= \lim _{n \rightarrow \infty } \mathbb {E} \mu _n^{\beta ,J,h} [f(h, \cdot )]\\&= \int _{\mathbb {R}^\mathbb {N} \times \mathbb {R}^\mathbb {N}} K^{\beta ,J} (dh, d \phi ) \ f (h, \phi ) . \end{aligned}$$

It follows that the weak limit of the metastate probability measures completely determines the weak limit of the joint probability measures. As a result, the joint probability measures are in some sense redundant if one has limiting results pertaining to the metastate probability measures. In this paper, we will use the joint probability measures primarily as a tool to prove uniform tightness results of the metastate probability measures. To that end, although we will not make immediate use of the following result, we show the uniform tightness of the collection of intensity measures \(\{ \mathbb {E} \mu _n^{\beta ,J,h} \}_{n \in \mathbb {N}}\).

Lemma 2.3.7

Let h be a random external field which satisfies (A1).

It follows that the collection of intensity measures \( \left\{ \mathbb {E} \mu _n^{\beta , J, h} \right\} _{n \in \mathbb {N}}\) is uniformly tight.

For the full proof, see Sect. 3.7. The uniform tightness of the various metastate probability measures follows by combining this result with Lemmas A.2.1 and A.2.3.

For the MS parameter range, recall that the random variable \(m_n^h\) can be written in terms of the 2-dimensional centred random walk \(S_n'\) presented in Equation (2.3.1). By using the multivariate delta method, we have the following central limit theorem for the sequence of vectors \(\{ m_n^h \}_{n \in \mathbb {N}}\).

Lemma 2.3.8

Let h be a random external field which satisfies (A1) and (A2).

It follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } (h, \sqrt{n} (m_n^h - m)) = (h,G) \end{aligned}$$

in distribution, where G is a non-degenerate 2-dimensional Gaussian random variable independent of h.

For the proof, see Lemma 3.6.3. The multivariate delta method is a standard tool of statistics; see [29] for more direct references and discussion.
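The content of this central limit theorem can also be checked numerically. The following Python sketch is only an illustration and not part of the proofs; the uniform distribution chosen for \(h_0\) and all numerical values are assumptions made for the example.

```python
import numpy as np

# Rough Monte Carlo check of Lemma 2.3.8 (multivariate delta method):
# sqrt(n) (m_n^h - m) should be approximately Gaussian with covariance
# A Sigma A^T, where Sigma = Cov(h_0, h_0^2) and A is the Jacobian of
# (s_1, s_2) -> (s_1, sqrt(s_2 - s_1^2)) at (E h_0, E h_0^2).
# h_0 uniform on [0, 1] is an illustrative assumption only.
rng = np.random.default_rng(1)
n, reps = 4_000, 1_500
h = rng.uniform(0.0, 1.0, size=(reps, n))

m_par = h.mean(axis=1)
m_perp = np.sqrt((h ** 2).mean(axis=1) - m_par ** 2)
m = np.array([0.5, np.sqrt(1.0 / 12.0)])               # (E h_0, sqrt(V h_0))
fluct = np.sqrt(n) * (np.column_stack([m_par, m_perp]) - m)

Sigma = np.array([[1.0 / 12.0, 1.0 / 12.0],            # Cov(h_0, h_0^2) for U(0,1)
                  [1.0 / 12.0, 4.0 / 45.0]])
A = np.array([[1.0, 0.0],
              [-m[0] / m[1], 1.0 / (2.0 * m[1])]])
print("empirical covariance:\n", np.cov(fluct.T))
print("delta-method prediction:\n", A @ Sigma @ A.T)
```

For this choice of \(h_0\), the delta-method prediction is the diagonal matrix with entries \(\frac{1}{12}\) and \(\frac{1}{60}\), and the two printed matrices should agree up to Monte Carlo error.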

By using Skorohod’s representation theorem, see [19, Chapter 17], we can construct another probability space on which the convergence in distribution of \((h, \sqrt{n} (m_n^h - m)) \rightarrow (h,G)\) is elevated to almost sure convergence. On this new probability space, subject to a slight abuse of notation, we can apply the previous main result Theorem 2.2.9 in the case where \(\delta = \frac{1}{2}\) and \(\gamma = G\) almost surely. Using these methods, we have the following result concerning the weak limit of the metastate probability measures.

Theorem 2.3.9

Let h be a random external field which satisfies (A1) and (A2).

For the mixed state parameter range, it follows

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathcal {K}_n^{\beta , J}[f] = \frac{1}{2} \int _{\Omega } d \mathbb {P} \ f(h, \nu _\infty ^{z^+,h}) + \frac{1}{2} \int _{\Omega } d \mathbb {P} \ f(h, \nu _\infty ^{z^-,h}) := \mathcal {K}^{\beta ,J} [f] \end{aligned}$$

for any \(f \in C_b (\mathbb {R}^\mathbb {N} \times \mathcal {M}_1 (\mathbb {R}^\mathbb {N}))\).

For the full proof, see Sect. 3.7.

As a direct corollary, using the fact that \(h \mapsto \nu _\infty ^{z^\pm ,h}\) is a continuous mapping, we construct the Aizenman–Wehr metastate of this model.

Theorem 2.3.10

Let h be a random external field which satisfies (A1) and (A2).

For the mixed state parameter range, the Aizenman–Wehr metastate is given by

$$\begin{aligned} \kappa ^{\beta ,J,h} := \frac{1}{2} \delta _{\nu _\infty ^{z^+,h}} + \frac{1}{2} \delta _{\nu _\infty ^{z^-,h}} . \end{aligned}$$

To our knowledge, this is a novel result for this particular model, and it is more general than the corresponding results available for similar models. Similar results have been obtained in [4] and [21] for the BFCW model. We also emphasize that the proof of weak convergence of the metastate probability measures is almost a direct corollary of the previous main result Theorem 2.2.9 by using Skorohod’s representation theorem. This proof strategy does not seem to be utilized in either [4] or [21].

2.3.4 Phase Characterization

To better understand this result, we will characterize this model in terms of the phase characterization of disordered systems given in [27]. This characterization describes the phases in terms of the expectation and variance of the magnetization density. We can give an equivalent characterization of the RFMFS model.

If we return to the representation Lemma 3.1.1, we see that the magnetization density of this model is given by

$$\begin{aligned} \mu _n^{\beta ,J,h} \left[ \frac{M_n}{n} \right] = \int _{B(0,1)} \rho _n^{\beta ,J,h} (dz) \ x . \end{aligned}$$

We have the following result.

Lemma 2.3.11

Let h be a random external field which satisfies (A1) and (A2).

For the pure state parameter range, it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {E} \mu _n^{\beta ,J,h} \left[ \frac{M_n}{n} \right] = x^*, \ \lim _{n \rightarrow \infty } \mathbb {V} \mu _n^{\beta ,J,h} \left[ \frac{M_n}{n} \right] = 0 . \end{aligned}$$

For the mixed state parameter range, it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {E} \mu _n^{\beta ,J,h} \left[ \frac{M_n}{n} \right] = 0, \ \lim _{n \rightarrow \infty } \mathbb {V} \mu _n^{\beta ,J,h} \left[ \frac{M_n}{n} \right] = \left( x^+ \right) ^2 > 0 . \end{aligned}$$

The proof of this statement is a direct application of the convergence in distribution of the expectation of magnetization, see Lemma 3.7.1.

We present the phase characterization for the RFMFS model in Table 3.

Table 3 Characterization of the phases of the RFMFS model

We see that the transition from the PS parameter range to the MS parameter range involves the phase transition from the ordered paramagnetic phase to the spin glass phase. In the spin glass phase, the model exhibits chaotic size dependence and a unique “splitting” of the pure states unlike that of the deterministic inhomogeneous external field model. In particular, the A–W metastate describes a model whose limiting states are the positive and negative magnetization states, each occurring with equal probability.

2.3.5 Convergence of the Newman–Stein Metastates

We introduce the collection of empirical metastates or Newman–Stein metastates \(\{ \overline{\kappa }_N^{\beta ,J,h}\}_{N \in \mathbb {N}}\) which are given by

$$\begin{aligned} \overline{\kappa }_N^{\beta ,J,h} := \frac{1}{N} \sum _{n=1}^N \delta _{\mu _n^{\beta ,J,h}} \end{aligned}$$
(2.3.2)

almost surely. The collection of probability distributions of the N–S metastates is denoted by \(\{ \overline{\mathcal {K}}_N^{\beta ,J,h}\}_{N \in \mathbb {N}}\). The N–S metastates were introduced in [26] by Newman and Stein.

To begin the study of their convergence properties, we first consider their uniform tightness. The almost sure uniform tightness of the collection of N–S metastates follows from the almost sure uniform tightness of \(\mathcal {G}(\beta ,J,h)\), which can be deduced from either Theorem 2.3.5 or Lemmas 3.3.2 and A.2.2. The uniform tightness of the collection of probability measures of N–S metastates follows from the uniform tightness of the collection of intensity measures given in Lemmas 2.3.7 and A.2.3. We state these two results as a lemma.

Lemma 2.3.12

Let h be a random external field which satisfies (A1).

It follows that the collection of Newman–Stein metastates \(\{ \overline{\kappa }_N^{\beta ,J,h}\}_{N \in \mathbb {N}}\) is uniformly tight almost surely, and the collection of probability measures of Newman–Stein metastates \(\{ \overline{\mathcal {K}}^{\beta ,J}_N\}_{N \in \mathbb {N}}\) is uniformly tight.

Given the uniform tightness of these collections, it is enough to study expectations of the form

$$\begin{aligned} \overline{\kappa }_N^{\beta ,J,h} [P] := \frac{1}{N} \sum _{n=1}^N P (\mu _n^{\beta ,J,h} [f_1],..., \mu _n^{\beta ,J,h} [f_m]), \end{aligned}$$

where \(P: \mathbb {R}^m \rightarrow \mathbb {R}\) is a finite degree polynomial in m variables and \(\{ f_i \}_{i=1}^m\) is a finite collection belonging to \({\text {LBL}} (\mathbb {R}^\mathbb {N})\), see Lemma A.3.3. In the case of almost sure convergence, one considers the limit of such expectations as \(N \rightarrow \infty \) almost surely, and, in the case of weak convergence of the probability distributions of the N–S metastates, one considers the limit of such expectations in distribution as \(N \rightarrow \infty \).

To study the limits, we begin by introducing a collection of sets \(\{ A_{n, \delta } \}_{n \in \mathbb {N}}\) given by

$$\begin{aligned} A_{n, \delta } := \left( \left[ - n^{- \frac{1}{2} + \delta }, - n^{-\frac{1}{2} - \delta }\right] \cup \left[ n^{-\frac{1}{2} - \delta }, n^{- \frac{1}{2} + \delta }\right] \right) \times \left[ - n^{- \frac{1}{2} + \delta }, n^{-\frac{1}{2} + \delta }\right] \subset \mathbb {R}^2 , \end{aligned}$$

where \(0< \delta < \frac{1}{6}\). We will use this collection of sets as “conditioning sets” for the N–S metastates. We show the following three results for this collection. First, subject to the addition of assumption (A4), we show that

$$\begin{aligned} \lim _{N \rightarrow \infty } \frac{1}{N} \sum _{n=1}^{N} \mathbbm {1}(m_n^h - m \not \in A_{n, \delta }) = 0 , \end{aligned}$$

almost surely, see Lemma 3.8.1 for the proof.

This result allows one to consider the Newman–Stein metastates only “along” the sets \(A_{n, \delta }\). Using the control of the fluctuation of \(m_n^h - m\) provided by conditioning on the sets \(A_{n, \delta }\), along with the asymptotics developed for the weights \(W_n^{\beta ,J,h,+}\) in Lemma 3.5.4, we show that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbbm {1}(m_n^h - m \in A_{n, \delta }) \left| W_n^{\beta ,J,h,+} - \mathbbm {1} \left( \sum _{i=1}^n h_i > 0 \right) \right| = 0 , \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbbm {1}(m_n^h - m \in A_{n, \delta }) \left| \mu _n^{\beta ,J,h, \pm } [f] - \nu _\infty ^{z^\pm ,h} [f] \right| = 0 , \end{aligned}$$

where \(f \in {\text {LBL}} (\mathbb {R}^\mathbb {N})\), see Lemma 3.8.2 for the full proof. Combining these results together, we have the following result concerning the pathwise asymptotics of the N–S metastates.

Lemma 2.3.13

Let h be a random external field which satisfies (A1), (A2), and (A4).

For the mixed state parameter range, it follows that

$$\begin{aligned} \overline{\kappa }_{N}^{\beta ,J,h} [P] = T_{N}^+ P (\nu _\infty ^{z^+,h} [f_1],..., \nu _\infty ^{z^+,h} [f_m]) + (1 - T_{N}^+) P (\nu _\infty ^{z^-,h} [f_1],..., \nu _\infty ^{z^-,h} [f_m]) + o(1) \end{aligned}$$

almost surely, where

$$\begin{aligned} T_N^+ := \frac{1}{N} \sum _{n=1}^N \mathbbm {1} \left( \sum _{i=1}^n h_i > 0 \right) . \end{aligned}$$

For the full proof and notation, see Sect. 3.8. The results presented and proved here for conditioning sets are model specific adaptations of the ideas and methods concerning “regular sets” presented in [21]. In the above result, we are using the standard little-o asymptotic notation.

The limiting structure of the N–S metastates is then determined by the properties of the collection of random variables \(\{ T_N^+ \}_{N \in \mathbb {N}}\), which correspond to the fraction of time that the 1-dimensional random walk with step length \(h_0\) spends on the positive half-line. Results concerning this collection of random variables are classical, and we refer to [32, Chapter 4]. We will utilize the following two results. The first result concerns the convergence in distribution of \(\{ T_N^+\}_{N \in \mathbb {N}}\) to an arcsine distributed random variable. The second, related result, concerning the characterization of the set of limit points of \(\{ T_N^+\}_{N \in \mathbb {N}}\), follows by using the Hewitt–Savage 0–1 law, see [19, Chapter 12], and the convergence in distribution to an arcsine distributed random variable.

Lemma 2.3.14

Let h be a random external field which satisfies (A1) and (A2).

It follows that

$$\begin{aligned} \lim _{N \rightarrow \infty } (h, T_N^+) = (h, \alpha ) \end{aligned}$$

in distribution, where \(\alpha \) is an arcsine distributed random variable independent of h given by its distribution function

$$\begin{aligned} \mathbb {P}(\alpha \le x) = \frac{\arcsin (2x - 1)}{\pi } + \frac{1}{2} \end{aligned}$$

for \(x \in [0,1]\), and

$$\begin{aligned} L \left( \{ T_N^+ \}_{N \in \mathbb {N}} \right) = [0,1] \end{aligned}$$

almost surely.

For the full proof, see Sect. 3.8.
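Since \(T_N^+\) is an explicit functional of the random walk, its limiting arcsine law is easy to observe numerically. The following Python sketch is illustrative only and not part of the proofs; the centred Gaussian distribution chosen for \(h_0\), in line with the mixed state parameter range where \(m^\parallel = \mathbb {E} h_0 = 0\), and the numerical parameters are assumptions made for the example.

```python
import numpy as np

# Heuristic check of Lemma 2.3.14: T_N^+ = (1/N) * #{n <= N : h_1 + ... + h_n > 0}
# should be approximately arcsine distributed for large N.
rng = np.random.default_rng(2)
N, reps = 2_000, 2_000
steps = rng.normal(size=(reps, N))        # centred h_0, illustrative choice
walks = np.cumsum(steps, axis=1)
T_plus = (walks > 0).mean(axis=1)

arcsine_cdf = lambda x: np.arcsin(2.0 * x - 1.0) / np.pi + 0.5
for x in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(x, (T_plus <= x).mean(), arcsine_cdf(x))
```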

By combining the almost sure uniform tightness of the N–S metastates from Lemma 2.3.12, the pathwise asymptotics from Lemma 2.3.13, and the limit point density from Lemma 2.3.14, we have the following CSD result for the almost sure convergence of the N–S metastates.

Theorem 2.3.15

Let h be a random external field which satisfies (A1), (A2), and (A4).

For the mixed state parameter range, it follows that

$$\begin{aligned} {\text {conv}} (\delta _{\nu _\infty ^{z^+,h}}, \delta _{\nu _\infty ^{z^-,h}}) =L \left( \left\{ \overline{\kappa }^{\beta ,J,h}_N \right\} _{N \in \mathbb {N}}\right) \end{aligned}$$

almost surely.

In particular, it follows that \(\{ \overline{\kappa }^{\beta ,J,h}_N \}_{N \in \mathbb {N}}\) does not converge almost surely but there does exist a random subsequence \(\{ N_{k} \}_{k \in \mathbb {N}}\) such that

$$\begin{aligned} \overline{\kappa }_{N_{k}}^{\beta ,J,h} \rightarrow \kappa ^{\beta ,J,h} \end{aligned}$$

almost surely.

See Sect. 3.8 for the full proof.

By combining the almost sure uniform tightness of the N–S metastates from Lemma 2.3.12, the pathwise asymptotics from Lemma 2.3.13, and the convergence in distribution to the arcsine distributed random variable, we have the following convergence in distribution result.

Theorem 2.3.16

Let h be a random external field which satisfies (A1), (A2), and (A4).

For the mixed state parameter range, it follows that

$$\begin{aligned} \lim _{N \rightarrow \infty } \overline{\kappa }^{\beta ,J,h}_N = \alpha \delta _{\nu _\infty ^{z^+,h}} + (1 - \alpha ) \delta _{\nu _{\infty }^{z^-,h}} := \overline{\kappa }^{\beta ,J,h}, \end{aligned}$$

in distribution, where \(\alpha \) is an arcsine distributed random variable independent of h.

See Sect. 3.8 for the full proof.

The Newman–Stein metastates were introduced as a way to obtain some form of almost sure convergence for the FVGS. However, as can be seen from these results, such convergence does not hold here, but at least one can realize the A–W metastate as the limit along a random subsequence of the N–S metastates.

The convergence in distribution of the N–S metastates clearly shows the pathwise dependence of the model. In some sense, the presence of the arcsine random variable is a result of the pathwise dependence of the weights of the FVGS. Since the weights behave like indicator functions for large enough n, the result is somewhat expected.

This result is novel for this specific model, and similar, almost identical, results have been obtained in [21] for the BFCW model. The biggest difference between the proof techniques is that, for Bernoulli components, one can work directly with 1-dimensional simple random walks. For this model, it seems necessary to use some methods of non-linear statistics for 2-dimensional random walks, as in the proof of Lemma 3.8.1.

2.3.6 Triviality of Metastates in the Pure State Parameter Range

We have not yet discussed the metastates for the pure state parameter range. This is because they are trivial due to the almost sure convergence of the FVGS from Theorem 2.3.5. As a direct application of Lemma A.3.1, we have the following result.

Theorem 2.3.17

Let h be a random external field which satisfies (A1).

For the pure state parameter range, we have

$$\begin{aligned} \lim _{n \rightarrow \infty } (h, \mu _n^{\beta ,J,h}) = (h,\nu _\infty ^{z^*,h}) \end{aligned}$$

in distribution, and

$$\begin{aligned} \lim _{N \rightarrow \infty } \overline{\kappa }_N^{\beta ,J,h} = \delta _{\nu _\infty ^{z^*,h}} \end{aligned}$$

almost surely.

2.3.7 Summary and Remarks

As we earlier remarked, the results obtained are novel for this particular model and they are universal for random external fields in the precise sense given by the assumptions of the theorems concerning the random variable \(h_0\).

In the introduction, we noted that the RFCW model, in principle, should allow one to study limiting free energies with many global maximizing points with varying strengths. For the RFMFS model, due possibly to the spherical constraint, the only options are that the global maximizing point is unique, in which case the structure of the metastates is trivial, or there are exactly two global maximizing points of quadratic type, which constitute the simplest form of global maximizing points. The results for the case of two global maximizing points are then almost identical to the results concerning the metastate obtained for the BFCW model. As such, it is natural to see many methods repeated, such as the construction of the sequence of local maximizing points of the exponential tilting functions, and the conditioning sets used for the analysis of the N–S metastate. The most significant difference between the BFCW model and the RFMFS model is that the main calculations for the BFCW model concerning the limiting free energy involve the analysis of a smooth function on an unbounded set where all derivatives contain some form of randomness, while the analogous analysis for the RFMFS model involves a two-dimensional smooth function on a bounded set such that the randomness vanishes for derivatives of order two or higher.

As a final remark, let us comment on the methods and the usability of the arguments presented here for cases other than that of independent identically distributed components. The proofs concerning the CSD phenomenon rely heavily on the recurrence properties of random walks. In a similar fashion, the proofs concerning the convergence of the N–S metastates rely on specific properties of random walks leading to the arcsine law. On the other hand, the construction of the A–W metastate relies on the strong law of large numbers, or generic almost sure convergence, to resolve the almost sure convergence of the “microcanonical” measures, and on the permutation invariance of h to prove uniform tightness, while the weak convergence relies on the convergence in distribution of the magnetization vectors \(\{ \sqrt{n} (m_n^h - m) \}_{n \in \mathbb {N}}\), which in turn comes down to proving results concerning the scaled sums of the sequence \(\{ (h_i, h_i^2)\}_{i \in \mathbb {N}}\). Since the results concerning weak convergence use Skorohod’s representation theorem, one can, in principle, study external fields satisfying the other required properties but with slower or possibly faster rates of asymptotic convergence in distribution of the magnetization vectors. To accommodate this type of analysis, one can also use the methods of proof for the asymptotics of the magnetization vectors to consider cases where the rate of convergence is not the same for each individual component of the sequence of vectors. For examples of this type of analysis for large deviations of the RFCW model, see [24].

3 Proofs of Results

3.1 Rigorous Delta Function Calculation

For this first calculation, we will need the following two properties concerning the integral over the sphere which are presented and proved in the appendix of [22]. These two properties are orthogonal invariance and the decomposition of the sphere into subdimensional spheres.

If \(f \in C_b (\mathbb {R}^n)\) and \(O: \mathbb {R}^n \rightarrow \mathbb {R}^n\) is an orthogonal transformation, then it follows that

$$\begin{aligned} \int _{\mathbb {R}^n} d \phi \ \delta (|| \phi ||^2 - n) f (\phi ) = \int _{\mathbb {R}^n} d \phi \ \delta (|| \phi ||^2 - n) (f \circ O^{-1} ) (\phi ) . \end{aligned}$$

If \(f \in C_b (\mathbb {R}^n)\) and \(1< k < n\), then it follows that

$$\begin{aligned}&\int _{\mathbb {R}^n} d \phi \ \delta (|| \phi ||^2 - n) f (\phi )\\&\quad = \frac{1}{2} \int _{\mathbb {R}^k} d \phi ' \left( n - || \phi '||^2 \right) ^{\frac{n - k}{2} - 1} \mathbbm {1}(|| \phi '||^2 < n) \int _{\mathbb {S}^{n - k - 1}} d \Omega \ f \left( \phi ', \sqrt{n - || \phi '||^2} \Omega \right) , \end{aligned}$$

where we have identified \(\mathbb {R}^n = \mathbb {R}^k \oplus \mathbb {R}^{n - k}\) for the argument of f.
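For \(k = 1\), the decomposition states that the first coordinate of a uniformly distributed point on the sphere of radius \(\sqrt{n}\) has density proportional to \((n - x^2)^{\frac{n-1}{2} - 1}\) on \(|x| < \sqrt{n}\). The following Python sketch checks this by Monte Carlo sampling; it is purely illustrative and all numerical parameters are chosen only for the example.

```python
import numpy as np

# Monte Carlo sketch of the sphere decomposition with k = 1: the first
# coordinate of a uniform point on sqrt(n) * S^{n-1} has density proportional
# to (n - x^2)^{(n-1)/2 - 1} on |x| < sqrt(n).
rng = np.random.default_rng(3)
n, samples = 10, 200_000
g = rng.normal(size=(samples, n))
sphere = np.sqrt(n) * g / np.linalg.norm(g, axis=1, keepdims=True)
x = sphere[:, 0]

edges = np.linspace(-np.sqrt(n), np.sqrt(n), 41)
hist, _ = np.histogram(x, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
density = (n - centers ** 2) ** ((n - 1) / 2.0 - 1.0)
density /= np.sum(density * np.diff(edges))        # normalize on the grid
print(np.max(np.abs(hist - density)))              # discrepancy shrinks with more samples
```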

Recall the collection of vectors \(w_{1,n}\), \(w_{2,n}^h\), and \(\{ v_{j,n}^h \}_{j=3}^n\) and the orthogonal change of coordinates \(O_n^h\) given in the surrounding text of Equations (2.2.3), (2.1.1), and (2.2.5). The following lemma is a rigorous version of the formal calculation presented in Equation (2.2.5).

Lemma 3.1.1

Suppose that h satisfies \(m_n^{h, \perp } \not = 0\).

It follows that

$$\begin{aligned}&\frac{2}{n^{\frac{n}{2} - 1}} \frac{1}{|\mathbb {S}^{n - 3}|}\int _{\mathbb {S}^{n-1}} d \Omega \ e^{\frac{\beta J}{2} \left( \sum _{i=1}^n \Omega _i \right) ^2 + \beta \sqrt{n} \sum _{i=1}^n h_i \Omega _i} f (\sqrt{n} \Omega ) \\&\quad = \int _{B(0,1)} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n^h, z \right\rangle } e^{(n - 4) \left( \frac{\beta J}{2} x^2 + \beta \left\langle m_n^h, z \right\rangle + \frac{1}{2} \ln (1 - || z ||^2)\right) } \\&\qquad \times \frac{1}{|\mathbb {S}^{n - 3}|}\int _{\mathbb {S}^{n - 3}} d \Omega \ f \left( \sqrt{n}x w_{1,n} + \sqrt{n}y w^h_{2,n} + \sqrt{1 - || z ||^2}\sqrt{n} \sum _{j=3}^n \Omega _j v_{j,n}^h \right) \end{aligned}$$

for \(f \in C_b (\mathbb {R}^n)\).

Proof

Using the orthogonal invariance property, we have

$$\begin{aligned}&\int _{\mathbb {S}^{n-1}} d \Omega \ e^{\frac{\beta J}{2} \left( \sum _{i=1}^n \Omega _i \right) ^2 + \beta \sqrt{n} \sum _{i=1}^n h_i \Omega _i} f (\sqrt{n} \Omega ) \\&\quad = \int _{\mathbb {S}^{n-1}} d \Omega \ e^{\frac{\beta J n}{2} \Omega _1^2 + \beta n m_n^{h, \parallel } \Omega _1 + \beta n m_n^{h, \perp } \Omega _2} f \left( \sqrt{n} \left( \Omega _1 w_{1,n} + \Omega _2 w_{2,n} + \sum _{j=3}^n \Omega _j v_{j,n} \right) \right) . \end{aligned}$$

Using the subdimensional sphere decomposition, we have

$$\begin{aligned}&\frac{2}{n^{\frac{n}{2} - 1}} \frac{1}{|\mathbb {S}^{n - 3}|}\int _{\mathbb {S}^{n-1}} d \Omega \ e^{\frac{\beta J n}{2} \Omega _1^2 + \beta n m_n^{h, \parallel } \Omega _1 + \beta n m_n^{h, \perp } \Omega _2} f \left( \sqrt{n} \left( \Omega _1 w_{1,n} + \Omega _2 w_{2,n} + \sum _{j=3}^n \Omega _j v_{j,n} \right) \right) \\&\quad = \int _{B(0,1)} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n^h, z \right\rangle } e^{(n - 4) \left( \frac{\beta J}{2} x^2 + \beta \left\langle m_n^h, z \right\rangle + \frac{1}{2} \ln (1 - || z ||^2)\right) } \\&\qquad \times \frac{1}{|\mathbb {S}^{n - 3}|}\int _{\mathbb {S}^{n - 3}} d \Omega \ f \left( \sqrt{n} x w_{1,n} + \sqrt{n} y w_{2,n} + \sqrt{1 - || z ||^2}\sqrt{n} \sum _{j=3}^n \Omega _j v_{j,n} \right) , \end{aligned}$$

as desired. \(\square \)

3.2 Uniform Convergence of Microcanonical Probability Measures

Denote \(\eta _n\) to be the probability measure on \(\mathbb {R}^n\) obtained by setting \(I = [n]\) in Equation (2.2.12). We have the following lemma concerning the relationship between \(\eta \), \(T_n^{z,h}\), and \(\nu _n^{z,h}\).

Lemma 3.2.1

Suppose that h satisfies \(m_n^{h, \perp } \not = 0\).

It follows that \(\nu _n^{z,h} = {T_n^{z,h}}_* \eta \).

Proof

It is enough to prove that if \(f \in C_b(\mathbb {R}^n)\), then \({T_n}_* \eta _n [f] = \nu _n [f \circ \pi _n]\). To that end, note that

$$\begin{aligned}&\int _{\mathbb {R}^n} d\phi \ e^{- \frac{|| \phi ||^2}{2}} \left( f \circ \left( T_n \right) _1 \right) (\phi ) \\&\quad = \int _{\mathbb {R}^n} d\phi \ e^{- \frac{|| P_n(\phi )||^2 + || \phi - P_n(\phi )||^2}{2}} f\left( \sqrt{n}x w_{1,n} + \sqrt{n}y w_{2,n} + \sqrt{1 - || z ||^2} \sqrt{n} \frac{\phi - P_n (\phi )}{|| \phi - P_n (\phi )||} \right) . \end{aligned}$$

Furthermore, we have

$$\begin{aligned} \frac{\phi - P_n (\phi )}{|| \phi - P_n (\phi )||} := \frac{1}{\sqrt{\sum _{j=3}^n \left\langle v_{j,n}, \phi \right\rangle ^2}}\sum _{j=3}^n \left\langle v_{j,n}, \phi \right\rangle v_{j,n} . \end{aligned}$$

Using the orthogonal change of coordinates \(O_n\), it follows that

$$\begin{aligned}&\int _{\mathbb {R}^n} d\phi \ e^{- \frac{|| P_n(\phi )||^2 + || \phi - P_n(\phi )||^2}{2}} f\left( \sqrt{n}x w_{1,n} + \sqrt{n} y w_{2,n} + \sqrt{1 - || z ||^2} \sqrt{n} \frac{\phi - P_n (\phi )}{|| \phi - P_n (\phi )||} \right) \\&\quad = \int _{\mathbb {R}^2} d \phi ' \ e^{- \frac{|| \phi '||^2}{2}} \int _{\mathbb {R}^{n - 2}} d \phi \ e^{- \frac{|| \phi ||^2}{2}} f\left( \sqrt{n} x w_{1,n} + \sqrt{n} y w_{2,n} + \sqrt{1 - || z ||^2} \sqrt{n} \sum _{j=3}^n \frac{\phi _j}{|| \phi ||} v_{j,n} \right) . \end{aligned}$$

The integral over \(\mathbb {R}^2\) is redundant since the integrand does not depend on \(\phi '\). By change of coordinates to the hyperspherical coordinates for the integral over \(\mathbb {R}^{n-2}\), we have

$$\begin{aligned}&\left( \int _{\mathbb {R}^n} d\phi \ e^{- \frac{|| \phi ||^2}{2}} \right) ^{-1} \int _{\mathbb {R}^n} d\phi \ e^{- \frac{|| \phi ||^2}{2}} \left( f \circ \left( T_n \right) _1 \right) (\phi ) \\&\quad = \frac{1}{|\mathbb {S}^{n - 3}|}\int _{\mathbb {S}^{n - 3}} d \Omega \ f \left( \sqrt{n} x w_{1,n} + \sqrt{n} y w_{2,n} + \sqrt{1 - || z ||^2}\sqrt{n} \sum _{j=3}^n \Omega _j v_{j,n} \right) , \end{aligned}$$

from which the result follows. \(\square \)

We can now prove the result concerning the uniform convergence of \(\nu _n^{z,h}\) to \(\nu _\infty ^{z,h}\).

Proof of Lemma 2.2.5

By rescaling, it is enough to prove this claim for \(f \in {\text {LBL}} (\mathbb {R}^\mathbb {N})\) such that the function and its Lipschitz constant are bounded above by 1. Let I be the index set that f is local on. For large enough n, using the Lipschitz property, we have

$$\begin{aligned} \left| \nu _n [f] - \nu _\infty [f] \right| \le \eta _n \left[ \left| \left| (\pi _I \circ T_n) (\phi ) - (\pi _I \circ T_\infty ) (\phi ) \right| \right| \right] \end{aligned}$$

Using the explicit form of the transport maps, we have

$$\begin{aligned} \left| \left| (\pi _I \circ T_n) (\phi ) - (\pi _I \circ T_\infty ) (\phi ) \right| \right|&\le \sqrt{|I|} \left| \frac{m_n^{\parallel }}{m_n^\perp } - \frac{m^\parallel }{m^\perp } \right| \\&\quad + \sqrt{|I|} \max _{i \in I} |h_i| \left| \frac{1}{m_n^\perp } - \frac{1}{m^\perp } \right| \\&\quad + \left| \left| \pi _I \left( \phi - \sqrt{n} \frac{\phi - P_n^h (\phi )}{\left| \left| \phi - P_n^h (\phi ) \right| \right| }\right) \right| \right| . \end{aligned}$$

For the last term, we begin by noting that

$$\begin{aligned} \phi - \sqrt{n} \frac{\phi - P_n (\phi )}{\left| \left| \phi - P_n (\phi ) \right| \right| } = P_n(\phi ) + (\phi - P_n(\phi )) \left( 1 - \frac{\sqrt{n}}{|| \phi - P_n(\phi )||} \right) . \end{aligned}$$

It follows that

$$\begin{aligned}&\left| \left| \pi _I \left( \phi - \sqrt{n} \frac{\phi - P_n (\phi )}{\left| \left| \phi - P_n (\phi ) \right| \right| }\right) \right| \right| \\&\quad \le || \pi _I (P_n (\phi ))|| + \left| 1 - \frac{\sqrt{n}}{|| \phi - P_n(\phi )||} \right| \left( || \pi _I (P_n (\phi ))|| + || \pi _I (\phi ) ||\right) \end{aligned}$$

Now, note that

$$\begin{aligned} \pi _I (P_n (\phi )) = \frac{1_I}{\sqrt{n}} \left\langle w_{1,n}, \phi \right\rangle + \frac{h_I - m_n^\parallel 1_I}{\sqrt{n} m_n^\perp } \left\langle w_{2,n}, \phi \right\rangle . \end{aligned}$$

Using the projection to form an orthogonal change of coordinates, we have

$$\begin{aligned}&\eta _n \left[ || \pi _I (P_n (\phi ))|| \right] \\&\quad =\frac{1}{\sqrt{n}}\left( \int _{\mathbb {R}^2} dxdy e^{- \frac{x^2 + y^2}{2}}\right) ^{-1} \int _{\mathbb {R}^2} dxdy \ e^{- \frac{x^2 + y^2}{2}} \left| \left| 1_I x + \frac{h_I - m_n^\parallel 1_I}{ m_n^\perp } y \right| \right| . \end{aligned}$$

Using dominated convergence, this term is vanishing in the limit as \(n \rightarrow \infty \). Continuing, by the Cauchy–Schwarz inequality, we have

$$\begin{aligned}&\eta _n \left[ \left| 1 - \frac{\sqrt{n}}{|| \phi - P_n(\phi )||} \right| || \pi _I (P_n(\phi ))||\right] \\&\quad \le \left( \eta _n \left[ \left| 1 - \frac{\sqrt{n}}{|| \phi - P_n(\phi )||} \right| ^2 \right] \right) ^\frac{1}{2} \left( \eta _n \left[ || \pi _I (P_n(\phi ))||^2 \right] \right) ^\frac{1}{2} . \end{aligned}$$

The second term in the product on the right-hand side of the inequality will converge to something finite using the same calculation as in the previous integral. For the first term, using the projection to form an orthogonal change of coordinates and using hyperspherical coordinates, we have

$$\begin{aligned} \eta _n \left[ \left| 1 - \frac{\sqrt{n}}{|| \phi - P_n(\phi )||} \right| ^2 \right] = \left( \int _0^\infty dr \ r^{n - 3} e^{- \frac{n r^2}{2}}\right) ^{-1} \int _0^\infty dr \ r^{n - 3} e^{- \frac{n r^2}{2}} \left| 1 - \frac{1}{r} \right| ^2 . \end{aligned}$$

By using the Laplace method, see [35, Chapter 2], one can conclude that this term vanishes in the limit as \(n \rightarrow \infty \). By essentially repeating steps used for the previous term, one can see that the term

$$\begin{aligned} \eta _n \left[ \left| 1 - \frac{\sqrt{n}}{|| \phi - P_n(\phi )||} \right| || \pi _I (\phi ))||\right] \end{aligned}$$

is vanishing in the limit as \(n \rightarrow \infty \). Collecting the inequalities and vanishing terms, it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \eta _n \left[ \left| \left| \pi _I \left( \phi - \sqrt{n} \frac{\phi - P_n (\phi )}{\left| \left| \phi - P_n (\phi ) \right| \right| }\right) \right| \right| \right] = 0 . \end{aligned}$$

Now, returning to the first inequality concerning the transport maps, it follows that

$$\begin{aligned} \left| \nu _n [f] - \nu _\infty [f] \right|&\le \eta _n \left[ \left| \left| (\pi _I \circ T_n) (\phi ) - (\pi _I \circ T_\infty ) (\phi ) \right| \right| \right] \\&\le \sqrt{|I|} \left| \frac{m_n^{\parallel }}{m_n^\perp } - \frac{m^\parallel }{m^\perp } \right| \\&\quad + \sqrt{|I|} \max _{i \in I} |h_i| \left| \frac{1}{m_n^\perp } - \frac{1}{m^\perp } \right| \\&\quad + \eta _n \left[ \left| \left| \pi _I \left( \phi - \sqrt{n} \frac{\phi - P_n (\phi )}{\left| \left| \phi - P_n (\phi ) \right| \right| }\right) \right| \right| \right] . \end{aligned}$$

The right hand side of this inequality is vanishing in the limit as \(n \rightarrow \infty \) and it does not depend on \(z \in B(0,1)\). The result follows. \(\square \)
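The Laplace-method step used above can also be observed numerically: the ratio of radial integrals corresponding to \(\eta _n \left[ \left| 1 - \frac{\sqrt{n}}{|| \phi - P_n(\phi )||} \right| ^2 \right] \) concentrates at \(r = 1\) and vanishes as n grows. The following Python sketch is purely illustrative and not part of the proof.

```python
import numpy as np

# Illustrative check of the Laplace-method step: the ratio
#   int_0^inf r^{n-3} e^{-n r^2/2} |1 - 1/r|^2 dr / int_0^inf r^{n-3} e^{-n r^2/2} dr
# should vanish as n -> infinity, since the weight concentrates at r = 1.
rs = np.linspace(1e-6, 5.0, 400_001)
for n in (10, 100, 1_000, 10_000):
    log_w = (n - 3) * np.log(rs) - 0.5 * n * rs ** 2
    w = np.exp(log_w - log_w.max())          # normalize in log-space to avoid overflow
    ratio = np.sum(w * (1.0 - 1.0 / rs) ** 2) / np.sum(w)
    print(n, ratio)
```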

3.3 Limiting Free Energy and Uniform Tightness of Infinite Volume Gibbs States

The following result shows that the set \(M^*(\beta ,J,m)\) of global maximizing points of \(\psi ^{\beta ,J,m}\) is non-empty and compact.

Lemma 3.3.1

Let h be a strongly varying external field.

It follows that \(M^*(\beta ,J,m)\) is non-empty and compact.

Proof

First, by direct calculation, for any \(z \in B(0,1)\), we have

$$\begin{aligned} \psi (z) < \frac{\beta J}{2} + \beta |m^\parallel | + \beta m^\perp + \frac{1}{2} \ln (1 - || z ||^2) . \end{aligned}$$

Because of the logarithmic term, there exists \(0< R < 1\) such that for any \(z \in B(0,1)\) satisfying \(|| z ||^2 > R^2\), we have \(\psi (z) < -1\). The set \(\overline{B}(0,R)\) is compact, the mapping \(\psi \) is continuous there, and \(\psi (0) = 0 > - 1\). This implies that \(\psi \) attains a maximum value at some point in \(\overline{B}(0,R)\) which is greater than or equal to 0, and this value is necessarily strictly larger than the value of \(\psi \) at any point of \(B(0,1) {\setminus } \overline{B}(0,R)\), which implies that it is actually a global maximum, thus proving that \(M^*\) is non-empty. In fact, this shows that \(M^* \subset \overline{B}(0,R) \subset B(0,1)\).

It is now enough to prove that \(M^*\) is closed since it is contained in a compact set. Let \(\{ z_k^* \}_{k \in \mathbb {N}}\) be a sequence of elements in \(M^*\) such that \(z_k^* \rightarrow z^*\). Suppose that \(z^*\) is not a global maximum point of \(\psi \). It follows that there must exist some \(z \in B(0,1)\) such that \(\psi (z^*) < \psi (z)\). By pointwise convergence and continuity of \(\psi \), for large enough k, we have

$$\begin{aligned} \psi (z_k^*) - \psi (z^*)< \psi (z) - \psi (z^*) \iff \psi (z_k^*) < \psi (z), \end{aligned}$$

which is a contradiction since \(z_k^*\) is a global maximum point. It follows that \(z^* \in M^*\) which shows that \(M^*\) is closed. \(\square \)

Using the fact that \(M^*(\beta ,J,m)\) is non-empty and compact, we can compute the limiting free energy.

Proof of Theorem 2.2.2

Observe that

$$\begin{aligned}&\left| \ln \frac{\int _{B(0,1)} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n, z \right\rangle } e^{(n - 4)\psi _n(z)}}{\int _{B(0,1)} dz \ e^{(n - 4)\psi (z)}} \right| \\&\quad \le 2 \beta J + 4 \beta || m_n || + (n - 4) \sup _{z \in B(0,1)} |\psi _n(z) - \psi (z)| . \end{aligned}$$

Let \(\varepsilon > 0\) be arbitrary but small. By using the compactness of \(M^*\) and the continuity of \(\psi \), it follows that there exists a set \(A_\varepsilon \subset B(0,1)\) with positive Lebesgue measure such that \(\left| \psi (z) - \sup _{z' \in B(0,1)} \psi (z') \right| \le \varepsilon \) for all \(z \in A_\varepsilon \). By using such a set, it follows that

$$\begin{aligned} \frac{n-4}{n} \left( \sup _{z \in B(0,1)} \psi (z) - \varepsilon \right) + \frac{1}{n} \ln |A_\varepsilon | \le \frac{1}{n} \ln \int _{B(0,1)} dz \ e^{(n - 4)\psi (z)} \le \frac{n-4}{n} \sup _{z \in B(0,1)} \psi (z), \end{aligned}$$

where \(|A_\varepsilon |\) is the Lebesgue measure of \(A_\varepsilon \). By first taking the limit as \(n \rightarrow \infty \) followed by the limit as \(\varepsilon \rightarrow 0^+\), it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n} \ln \int _{B(0,1)} dz \ e^{(n - 4)\psi (z)} = \sup _{z \in B(0,1)} \psi (z) . \end{aligned}$$

By combining the first observation with this limit, the result follows. \(\square \)
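The Laplace-type behaviour used in this proof can also be checked numerically: for fixed \(\beta \), J, and m, the quantity \(\frac{1}{n} \ln \int _{B(0,1)} dz \ e^{(n-4) \psi (z)}\) approaches \(\sup _{z \in B(0,1)} \psi (z)\) as n grows. The following Python sketch evaluates the integral on a grid; it is purely illustrative and the parameter values are assumptions made for the example.

```python
import numpy as np

# Illustrative check of the Laplace principle behind Theorem 2.2.2:
# (1/n) ln int_{B(0,1)} e^{(n-4) psi(z)} dz -> sup psi as n -> infinity.
# The parameters below are illustrative choices only.
beta, J, m_par, m_perp = 1.0, 1.0, 0.3, 0.5

xs = np.linspace(-0.999, 0.999, 801)
X, Y = np.meshgrid(xs, xs)
inside = X ** 2 + Y ** 2 < 1.0
log_arg = np.where(inside, 1.0 - X ** 2 - Y ** 2, 1.0)
psi = np.where(
    inside,
    0.5 * beta * J * X ** 2 + beta * (m_par * X + m_perp * Y) + 0.5 * np.log(log_arg),
    -np.inf,
)
dA = (xs[1] - xs[0]) ** 2
vals = psi[inside]
sup_psi = vals.max()

for n in (50, 200, 1_000, 5_000):
    # log-sum-exp over the grid to avoid overflow
    log_integral = (n - 4) * sup_psi + np.log(np.sum(np.exp((n - 4) * (vals - sup_psi))) * dA)
    print(n, log_integral / n, sup_psi)
```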

We can now prove the following result concerning the uniform tightness and limiting structure of the collection of probability measures \(\{ \rho _n^{\beta , J, h} \}_{n \in \mathbb {N}}\).

Proof of Lemma 2.2.4

Fix \(0< R < 1\) such that \(M^*(\beta ,J,m) \subset B(0,R)\), which is possible by Lemma 3.3.1, and let \(\varepsilon > 0\) be small enough such that \(K:= \{ z \in B(0,1): d(z, M^*) \le \varepsilon \} \subset \overline{B}(0, R)\). By essentially repeating the arguments of Theorem 2.2.2 and Lemma 3.3.1, one can show that

$$\begin{aligned} \limsup _{n \rightarrow \infty } \frac{1}{n} \ln \rho _n (K^c) \le \sup _{z \in \overline{K^c} \cap B(0,1)} \psi (z) - \sup _{z \in B(0,1)} \psi (z) < 0. \end{aligned}$$

By rewriting

$$\begin{aligned} \rho _n(K) = 1 - e^{n \left( \frac{1}{n} \ln \rho _n (K^c) \right) }, \end{aligned}$$

it follows that

$$\begin{aligned} \inf _{k \ge n} \rho _k (K) \ge 1 - e^{n \sup _{k \ge n} \frac{1}{k} \ln \rho _k (K^c)} , \end{aligned}$$

from which it follows that \(\lim _{n \rightarrow \infty } \rho _n (K) = 1\) which proves uniform tightness.

Next, let \(z \in \left( M^* \right) ^c\) and \(\delta _1, \delta _2 > 0\) small enough such that \(\overline{B}(z, \delta _1) \subset B(z, \delta _2)\) and \(B(z, \delta _2) \cap M^* = \emptyset \). By a similar argument to the uniform tightness argument above, one can show that

$$\begin{aligned} \lim _{n \rightarrow \infty } \rho _n (B(z, \delta _1)) = 0 . \end{aligned}$$

Now, by Prokhorov’s theorem, let \(\rho \) be a probability measure obtained as a convergent subsequence \(\{ \rho _{n_k} \}_{k \in \mathbb {N}}\) of the uniformly tight collection of probability measures \(\{ \rho _n \}_{n \in \mathbb {N}}\). It follows that

$$\begin{aligned} \rho (B(z, \delta _1)) \le \liminf _{k \rightarrow \infty } \rho _{n_k} (B(z, \delta _1)) = 0 . \end{aligned}$$

This implies that the complement of \(M^*\) and the support of \(\rho \) are disjoint which in turn implies that \(\rho \) must be supported by \(M^*\). \(\square \)

We conclude with the proof of the following partial classification of the IVGS.

Lemma 3.3.2

Let h be a strongly varying external field.

It follows that \(\mathcal {G} (\beta ,J,h)\) is uniformly tight and

$$\begin{aligned} \mathcal {G}_\infty (\beta ,J,h) \subset \left\{ \mu \in \mathcal {M}_1 (\mathbb {R}^\mathbb {N}) : {\text {supp}} (\rho ) \subset M^*(\beta ,J,m), \ \mu = \int _{M^*(\beta ,J,m)} \rho (dz) \ \nu _\infty ^{z,h} \right\} . \end{aligned}$$

Proof

Let \(\{ \mu _{n_k} \}_{k \in \mathbb {N}}\) be an arbitrary subsequence. By Lemma 2.2.4, the collection of probability measures \(\{ \rho _{n_k} \}_{k \in \mathbb {N}}\) is uniformly tight which implies, by Prokhorov’s theorem, that for any subsequence \(\{ \rho _{n_k} \}_{k \in \mathbb {N}}\) of this collection there exists a convergent subsubsequence \(\{ \rho _{n_{k_j}}\}_{j \in \mathbb {N}}\) with a limit probability distribution \(\rho \). We have

$$\begin{aligned} \left| \mu _{n_{k_j}}[f] - \int _{M^*} \rho (dz) \ \nu _\infty ^z [f] \right|&\le \sup _{z \in B(0,1)} \left| \nu _{n_{k_j}}^z [f] - \nu _\infty ^z[f]\right| \\ {}&\quad + \left| \int _{M^*} \rho (dz) \ \nu _\infty ^z [f] - \int _{B(0,1)} \rho _{n_{k_j}}(dz) \ \nu _\infty ^z [f] \right| \end{aligned}$$

for any \(f \in {\text {LBL}}(\mathbb {R}^\mathbb {N})\). By Lemmas 2.2.5 and 2.2.4, it follows that

$$\begin{aligned} \lim _{j \rightarrow \infty } \mu _{n_{k_j}}[f] = \int _{M^*} \rho (dz) \ \nu _\infty ^z [f] \end{aligned}$$

for any \(f \in {\text {LBL}}(\mathbb {R}^\mathbb {N})\), which implies that \(\mathcal {G}\) is uniformly tight. The same argument applied to a weakly convergent subsequence \(\{ \mu _{n_k} \}_{k \in \mathbb {N}}\) shows that

$$\begin{aligned} \mathcal {G}_\infty \subset \left\{ \mu \in \mathcal {M}_1 (\mathbb {R}^\mathbb {N}) : {\text {supp}} (\rho ) \subset M^*, \ \mu = \int _{M^* } \rho (dz) \ \nu _\infty ^{z} \right\} . \end{aligned}$$

\(\square \)

The central applications of this result concern the case where \(M^*(\beta ,J,m)\) is finite. For that case, we have the following corollaries.

Corollary 3.3.3

Let h be a strongly varying external field and suppose that \(M^*(\beta ,J,m)\) is finite.

It follows that

$$\begin{aligned} \mathcal {G}_\infty (\beta ,J,h) \subset {\text {conv}} \left( \left\{ \nu _\infty ^{z^*,h} \right\} _{z^* \in M^*(\beta ,J,m)} \right) . \end{aligned}$$

Proof

If \(M^*\) is finite and \(\rho \) is a probability measure supported by \(M^*\), it follows that \(\rho \) is given by the normalized weighted sum of delta measures. To be exact, we have

$$\begin{aligned} \rho = \sum _{z^* \in M^*} \rho (z^*) \delta _{z^*} , \end{aligned}$$

where \(\rho (z^*):= \rho (\{ z^* \})\). Combining Lemma 3.3.2 and this observation, it follows that

$$\begin{aligned} \int _{M^*} \rho (dz) \ \nu _\infty ^z = \sum _{z^* \in M^*} \rho (z^*) \nu _{\infty }^{z^*} , \end{aligned}$$

from which the result follows. \(\square \)

In the case where \(M^*(\beta ,J,m)\) is a single element, we have the following special case.

Corollary 3.3.4

Let h be a strongly varying external field and suppose that \(M^*(\beta ,J,m) = \{ z^* \}\).

It follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mu _n^{\beta ,J,h} = \nu _\infty ^{z^*,h} \end{aligned}$$

weakly.

Proof

From Lemma 3.3.2, it follows that \(\mathcal {G}\) is uniformly tight. From Corollary 3.3.3, it follows that every weakly convergent subsequence of \(\mathcal {G}\) converges to \(\nu _\infty ^{z^*,h}\) which implies that the full sequence converges weakly to the same limit. \(\square \)

3.4 Global Maximizing Points of the Limiting Exponential Tilting Function

We will determine the number of points in \(M^*(\beta ,J,m)\) depending on the given parameters. This first result considers the case where \(m^\parallel \not = 0\).

Lemma 3.4.1

Let h be a strongly varying external field and suppose that \(m^\parallel \not = 0\).

It follows that \(\psi ^{\beta ,J,m}\) has a unique global maximizing point.

Proof

We will use the fact that any local extrema of a differentiable function on an open set are necessarily critical points. This leads us to consider the critical point equation

$$\begin{aligned} \nabla [\psi ] (x^*, y^*) = 0 \iff \beta J x^* + \beta m^\parallel - \frac{x^*}{1 - {x^*}^2 - {y^*}^2} = 0, \ \beta m^\perp - \frac{y^*}{1 - {x^*}^2 - {y^*}^2} = 0 . \end{aligned}$$

From the second component of the critical point equation, we can solve for \(y^*\) in terms of \(x^*\) by rearranging the equation to a quadratic in \(y^*\) and solving for the root which satisfies \(-1< y^* < 1\). This root is given by

$$\begin{aligned} y^* = \sqrt{1 + \left( \frac{1}{2 \beta m^\perp } \right) ^2 - {x^*}^2} - \frac{1}{2 \beta m^\perp } . \end{aligned}$$

By direct computation, we have

$$\begin{aligned} 1 - {x^*}^2 - {y^*}^2 = 2 \left( \frac{1}{2 \beta m^\perp } \right) \left( \sqrt{1 + \left( \frac{1}{2 \beta m^\perp } \right) ^2 - {x^*}^2} - \frac{1}{2 \beta m^\perp } \right) . \end{aligned}$$

Plugging in this value to the first component of the critical point equation, we obtain

$$\begin{aligned} \beta J x^* + \beta m^\parallel - \frac{x^*}{2 \left( \frac{1}{2 \beta m^\perp } \right) \left( \sqrt{1 + \left( \frac{1}{2 \beta m^\perp } \right) ^2 - {x^*}^2} - \frac{1}{2 \beta m^\perp } \right) } = 0 . \end{aligned}$$

Denote the function \(F: (-1, 1) \rightarrow \mathbb {R}\) by

$$\begin{aligned} F(x) := \beta J x + \beta m^\parallel - \frac{x}{2 \left( \frac{1}{2 \beta m^\perp } \right) \left( \sqrt{1 + \left( \frac{1}{2 \beta m^\perp } \right) ^2 - {x}^2} - \frac{1}{2 \beta m^\perp } \right) } , \end{aligned}$$

so that the possible values of \(x^*\) are given by the roots of F. We are, however, not interested in all the roots of this equation. Because we are searching for \((x^*, y^*)\) which correspond to the global maximum value of \(\psi \), we can employ a symmetry argument for \(\psi \) to show that any maximum value point must lie in a certain quadrant of the unit ball. To be exact, suppose that \(m^\parallel > 0\). If there existed a point \((x^*, y^*)\) such that \(-1< x^* < 0\) and the point corresponds to a global maximum point of \(\psi \), then this would be a contradiction since \(\psi (- x^*, y^*) > \psi (x^*, y^*)\) by direct calculation, which would imply that it is not actually a global maximum point. A similar argument holds for \(m^\parallel < 0\), and a supposed maximum point which would satisfy \(0< x^* < 1\). This argument shows that we are only interested in finding the roots of F on the interval (0, 1) if \(m^\parallel > 0\), and \((-1,0)\) if \(m^\parallel < 0\).

For the root analysis of F, let us first remark that \(x = 0\) is never a root of F, since \(F(0) = \beta m^\parallel \not = 0\). We only mention this since the symmetry argument used to show that the maximum point must reside on either the positive or the negative interval only holds if \(x \not = 0\). We will work case by case. First, suppose that \(m^\parallel > 0\). We can now restrict ourselves to the interval (0, 1). By a direct computation, we have

$$\begin{aligned} F(0) = \beta m^\parallel > 0, \ \lim _{x \rightarrow 1^{-}} F(x) = - \infty . \end{aligned}$$

Since F is a continuous function on [0, 1), this implies that F must have at least one root on the interval (0, 1). Next, by direct computation we have

$$\begin{aligned} F''(x)&= -\frac{1}{2a}\frac{x^3}{(1 + a^2 - x^2)^\frac{3}{2} (\sqrt{1 + a^2 - x^2} - a)^2} \\&\quad -\frac{1}{2a} \frac{2 x^3}{(1 + a^2 - x^2) (\sqrt{1 + a^2 - x^2} - a)^3} \\&\quad -\frac{1}{2a} \frac{3x}{(1 + a^2 - x^2)^\frac{1}{2} (\sqrt{1 + a^2 - x^2} - a)^2} \\ \end{aligned}$$

where \(a = \frac{1}{2 \beta m^\perp } > 0\). It is clear that \(F''(x) < 0\) for all \(x \in (0,1)\), which implies that \(F'\) is strictly decreasing on the interval (0, 1). If \(F'(0) \le 0\), then \(F'(x) < 0\) for all \(x \in (0,1)\), so F is strictly decreasing on (0, 1) and has exactly one root there. If \(F'(0) > 0\), then \(F'\) must vanish at some intermediate value \(z \in (0,1)\): otherwise \(F'(x) > 0\) for all \(x \in (0,1)\), so F would be strictly increasing, which contradicts the fact that F starts at a positive value and decreases without bound as \(x \rightarrow 1^{-}\). Since \(F' > 0\) on (0, z), the function F is strictly increasing there, so \(F(z)> F(0) > 0\) and F has no roots on (0, z]. Since \(F' < 0\) on (z, 1), the function F is strictly decreasing there and tends to \(-\infty \), so it has exactly one root on (z, 1). This analysis shows that if \(m^\parallel > 0\), then F has a unique root on the interval (0, 1).

If \(m^\parallel < 0\), define \(G: (0,1) \rightarrow \mathbb {R}\) by

$$\begin{aligned} G(x) := - F(- x) = \beta J x + \beta \left( - m^\parallel \right) - \frac{x}{2 \left( \frac{1}{2 \beta m^\perp } \right) \left( \sqrt{1 + \left( \frac{1}{2 \beta m^\perp } \right) ^2 - {x}^2} - \frac{1}{2 \beta m^\perp } \right) } . \end{aligned}$$

The analysis that we did for F when \(m^\parallel > 0\) holds verbatim for the function G since \((- m^\parallel ) > 0\) in this case. As such, the function G has a unique root on (0, 1), which implies that the function F has a unique root on \((-1,0)\) when \(m^\parallel < 0\).

Since the points where the global maximum value of \(\psi \) is attained are also critical points, the above analysis shows that if \(m^\parallel \not = 0\), then there exists a unique point at which the maximum value is attained. \(\square \)
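
As a quick numerical sanity check of the root analysis above (not part of the proof), one can scan F for sign changes on the relevant interval. The following Python sketch uses illustrative parameter values for \(\beta \), J, \(m^\parallel \), and \(m^\perp \); these values are assumptions of the sketch, not of the lemma.

```python
import numpy as np

# Illustrative parameters (assumptions of this sketch): m_par > 0, m_perp > 0.
beta, J, m_par, m_perp = 1.5, 1.0, 0.3, 0.4
a = 1.0 / (2.0 * beta * m_perp)

def F(x):
    # F(x) = beta*J*x + beta*m_par - x / (2a*(sqrt(1 + a^2 - x^2) - a)), as defined above.
    return beta * J * x + beta * m_par - x / (2.0 * a * (np.sqrt(1.0 + a**2 - x**2) - a))

# Count sign changes of F on (0, 1); Lemma 3.4.1 predicts exactly one.
xs = np.linspace(1e-6, 1.0 - 1e-6, 200_000)
signs = np.sign(F(xs))
print("sign changes of F on (0,1):", int(np.sum(np.diff(signs) != 0)))
```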

The second result concerns the case where \(m^\parallel = 0\).

Lemma 3.4.2

Let h be a strongly varying external field and suppose that \(m^\parallel = 0\).

One of the following three holds:

  1.

    If \(m^\perp \ge J\), then for all \(\beta > 0\), there exists a unique global maximum point of \(\psi ^{\beta ,J,m}\) given by

    $$\begin{aligned} x^0 = 0, \ y^0 = \sqrt{1 + \left( \frac{1}{2 \beta m^\perp }\right) ^2} - \frac{1}{2 \beta m^\perp } . \end{aligned}$$
  2.

    If \(m^\perp < J\), then for all \(\beta \le \frac{J}{(J - m^\perp ) (J + m^\perp )}\), there exists a unique global maximum point of \(\psi ^{\beta ,J,m}\) given by

    $$\begin{aligned} x^0 = 0, y^0 = \sqrt{1 + \left( \frac{1}{2 \beta m^\perp }\right) ^2} - \frac{1}{2 \beta m^\perp } . \end{aligned}$$
  3.

    If \(m^\perp < J\), then for all \(\beta > \frac{J}{(J - m^\perp ) (J + m^\perp )}\), there exist two global maximum points of \(\psi ^{\beta ,J,m}\) given by

    $$\begin{aligned} x^{\pm } = \pm \sqrt{1 - \frac{1}{\beta J} - \frac{{m^\perp }^2}{J^2} }, \ y^{\pm } = \frac{m^\perp }{J} . \end{aligned}$$

Proof

We begin by noting that any local extrema of a differentiable function on an open set are also critical points. The critical point equation is given by

$$\begin{aligned} \nabla [\psi ] (x^*,y^*) = 0 \iff x^* \left( \beta J - \frac{1}{1 - {x^*}^2 - {y^*}^2} \right) = 0, \ \beta m^\perp - \frac{y^*}{1 - {x^*}^2 - {y^*}^2} = 0 . \end{aligned}$$

Let us now proceed case by case. Recall that the second component of the critical point equation can be used to solve for the value of \(y^*\) in terms of \(x^*\). The first component of the critical point equation can then be solved, and the three possible candidates for a global maximum point are classified by whether or not \(x^*\) vanishes. Let us denote \(x^0 = 0\), \(x^+ \in (0,1)\), and \(x^- \in (-1,0)\), for the possible values of \(x^*\). Let us emphasize that the existence of these solutions depends on the parameters \(\beta \), J, and \(m^\perp \), and that the candidate global maximum points do not all exist simultaneously.

First, let us consider \(x^* = x^0 = 0\). The corresponding \(y^0\) value is given by

$$\begin{aligned} y^0 = \sqrt{1 + \left( \frac{1}{2 \beta m^\perp }\right) ^2} - \frac{1}{2 \beta m^\perp } . \end{aligned}$$

Furthermore, one can verify that \(H_{12} [ \psi ] (z^0) = 0 = H_{21} [ \psi ] (z^0)\), and \(H_{22} [ \psi ] (z^0) < 0\), where \(H[\psi ] (z^0)\) is the Hessian of \(\psi \) evaluated at the point \(z^0\). This implies that the sign of \(H_{11} [ \psi ] (z^0)\) determines whether or not this is a true local maximum. A direct computation shows that

$$\begin{aligned} H_{11} [ \psi ](z^0) = \beta J - \sqrt{\beta ^2 {m^\perp }^2 + \left( \frac{1}{2} \right) ^2} - \frac{1}{2} . \end{aligned}$$

One can immediately see that if \(m^\perp \ge J\), then \(H_{11} [ \psi ] (z^0) < 0\) for all \(\beta > 0\). If \(m^\perp < J\), we can solve the condition \(H_{11} [ \psi ] (z^0) \le 0\) by considering it in terms of a quadratic in \(\beta \). This analysis yields the following

$$\begin{aligned} H_{11} [ \psi ] (z^0) \le 0 \iff 0 < \beta \le \frac{J}{(J - m^\perp )(J + m^\perp )} . \end{aligned}$$

Note that in the above equivalence, equality holds on one side exactly when it holds on the other. Let us now summarize these properties. If \(m^\perp \ge J\), then \(\psi \) has a local maximum at \(z^0\) for all \(\beta > 0\), which can be verified by the negative definiteness of its Hessian at the critical point. If \(m^\perp < J\), the same Hessian argument verifies a local maximum at \(z^0\), but only for \(\beta \) strictly below \(\frac{J}{(J - m^\perp )(J + m^\perp )}\). At the boundary value \(\beta = \frac{J}{(J - m^\perp )(J + m^\perp )}\), we have \(H_{11} [ \psi ] (z^0) = 0\), so the Hessian matrix at the critical point has determinant 0, and, without further analysis, we cannot conclude whether this point is a maximum, a minimum, or a saddle point. We will return to this borderline case later.

Let us now consider the solutions \(z^\pm \) which satisfy

$$\begin{aligned} \beta J - \frac{1}{1 - {x^{\pm }}^2 - {y^{\pm }}^2} = 0 . \end{aligned}$$

By plugging this equation into the second component of the critical point equation, we immediately find that

$$\begin{aligned} y^\pm = \frac{m^\perp }{J} . \end{aligned}$$

Because of this property, it follows that if \(m^\perp \ge J\), then there are no solutions to the critical point equation in B(0, 1) with \(x^* \not = 0\). If \(m^\perp < J\), we can solve for the value of \(x^{\pm }\) from the first component of the critical point equation. When it exists, this value is given by

$$\begin{aligned} x^{\pm } = \pm \sqrt{1 - \frac{1}{\beta J} - \frac{{m^\perp }^2}{J^2} } . \end{aligned}$$

In order for \(x^{\pm } \in (-1,1)\), we must have

$$\begin{aligned} \beta > \frac{J}{(J - m^\perp ) (J + m^\perp )} . \end{aligned}$$

We will consider the Hessian matrix at this point to verify that these are local maxima. A direct computation shows that

$$\begin{aligned} \det \left( H [ \psi ] (z^\pm ) \right) = \frac{1 + x^2 + y^2}{(1 - x^2 - y^2)^3} - \beta J \left( \frac{1}{1 - x^2 - y^2} + \frac{2 y^2}{(1 - x^2 - y^2)^2} \right) . \end{aligned}$$

In our case, we have

$$\begin{aligned} \det \left( H [\psi ] (z^\pm ) \right) = 2 (\beta J)^3 - 2 (\beta J)^2 - 2 \beta J (\beta m^\perp )^2 = 2 (\beta J)^2 \left( \beta J \left( 1 - \left( \frac{m^\perp }{J}\right) ^2 \right) - 1 \right) . \end{aligned}$$

It follows that

$$\begin{aligned} \det \left( H [ \psi ] (z^\pm ) \right)> 0 \iff \beta > \frac{J}{(J - m^\perp ) (J + m^\perp )} . \end{aligned}$$

Since \(H_{22}[\psi ] (z^\pm ) < 0\), this confirms that when \(m^\perp < J\), and

$$\begin{aligned} \beta > \frac{J}{(J - m^\perp ) (J + m^\perp )} , \end{aligned}$$

the given \(z^\pm \) are local maxima of \(\psi \).

Finally, observe that the supremum of \(\psi \) over B(0, 1) is attained at some point of B(0, 1), since \(\psi (z) \rightarrow - \infty \) as \(|| z || \rightarrow 1^{-}\), and any point at which it is attained must be a critical point. In the parameter ranges of the first two cases, \(z^0\) is the only solution of the critical point equation in B(0, 1), so it is the unique global maximizing point. In the parameter range of the third case, the solutions are \(z^0\) and \(z^{\pm }\); the point \(z^0\) is not a local maximum there since \(H_{11} [ \psi ] (z^0) > 0\), and, by the symmetry \(\psi (x,y) = \psi (-x,y)\), we have \(\psi (z^+) = \psi (z^-)\). It follows that \(z^{\pm }\) are precisely the two global maximizing points and the result follows. \(\square \)
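
For illustration only, the case distinction can be checked numerically by maximizing \(\psi \) on a grid. The sketch below assumes the explicit form \(\psi ^{\beta ,J,m}(x,y) = \frac{\beta J x^2}{2} + \beta m^\perp y + \frac{1}{2} \ln (1 - x^2 - y^2)\) (with \(m^\parallel = 0\)), which is the form consistent with the critical point equation above; the parameter values are illustrative choices of the sketch.

```python
import numpy as np

# Case (3) parameters: m_perp < J and beta above J / ((J - m_perp)(J + m_perp)) = 4/3.
beta, J, m_perp = 4.0, 1.0, 0.5

def psi(x, y):
    # Form consistent with the critical point equation above (m_par = 0).
    return 0.5 * beta * J * x**2 + beta * m_perp * y + 0.5 * np.log(1.0 - x**2 - y**2)

g = np.linspace(-0.999, 0.999, 1201)
X, Y = np.meshgrid(g, g)
inside = X**2 + Y**2 < 0.998
vals = np.full_like(X, -np.inf)
vals[inside] = psi(X[inside], Y[inside])

i, j = np.unravel_index(np.argmax(vals), vals.shape)
x_pred = np.sqrt(1.0 - 1.0 / (beta * J) - (m_perp / J)**2)   # formula from case (3)
print("grid argmax      :", (X[i, j], Y[i, j]))
print("predicted maxima :", (x_pred, m_perp / J), (-x_pred, m_perp / J))
```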

3.5 Asymptotic Analysis of Sequences of Local Maximizing Points

In this section, we will use the multivariate Taylor’s formula as presented in [15, Chapter 8].

The following result shows that the conditioned FVGS are always weakly convergent.

Proof of Lemma 2.2.7

Observe that

$$\begin{aligned} \mu _n^{\beta , J, h, \pm } = \frac{1}{\rho _n^{\beta , J, h} (B_{\pm } (0,1))} \int _{B_\pm (0,1)} \rho _n^{\beta , J, h}(dz) \nu _n^{z,h} . \end{aligned}$$

If we define the probability measure \(\rho _n^{\beta , J, h, \pm }\) on \(B_{\pm } (0,1)\) by

$$\begin{aligned} \rho _n^{\beta , J, h, \pm }(dz) := \frac{\rho _n^{\beta , J, h}(dz) \mathbbm {1}(z \in B_\pm (0,1))}{\rho _n^{\beta , J, h} (B_{\pm } (0,1))} , \end{aligned}$$

then, by replicating the proof of Corollary 3.3.4, it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \rho _n^{\beta , J, h, \pm } = \delta _{z^{\pm }} . \end{aligned}$$

From this weak limit and the uniform convergence provided by Lemma 2.2.5, the result follows. \(\square \)

The following result provides the construction of a suitably converging sequence of local maximizing points of the exponential tilting functions.

Lemma 3.5.1

Let h be a strongly varying external field.

It follows that for every global maximizing point \(z^*\) of \(\psi ^{\beta ,J,m}\) there exists a sequence of points \(\{ z_n^*\}_{n \in \mathbb {N}}\) such that \(z_n^*\) are local maximum points of \(\psi _n^{\beta , J, h}\) satisfying \(z_n^* \rightarrow z^*\) as \(n \rightarrow \infty \) and \(\nabla [\psi _n^{\beta , J, h} ] (z_n^*) = 0\) for large enough n.

Proof

Let \(B(z^*, \delta ) \subset B(0,1)\) be small enough that \(z^*\) is the only global maximum point of \(\psi \) in \(\overline{B}(z^*, \delta )\). By compactness, the functions \(\psi _n\) must attain their maxima \(z_n^*\) in \(\overline{B}(z^*, \delta )\). Let \( \{ z^*_{n_k} \}_{k \in \mathbb {N}}\) be any convergent subsequence of \(\{ z^*_n\}_{n \in \mathbb {N}}\) and let us denote \(z^*_0:= \lim _{k \rightarrow \infty } z^*_{n_k}\). By definition, for all \(k \in \mathbb {N}\), we have

$$\begin{aligned} \psi _{n_k}(z^*) \le \psi _{n_k} (z_{n_k}^*) . \end{aligned}$$

Using the uniform convergence of \(\psi _n\) to \(\psi \) and the continuity of \(\psi \), taking the limit \(k \rightarrow \infty \), we have

$$\begin{aligned} \psi (z^*) \le \psi (z_0^*) . \end{aligned}$$

Since \(z^*\) is the unique global maximum point of \(\psi \) in \(\overline{B}(z^*, \delta )\), it follows that \(z^* = z_0^*\). Since the sequence \(\{ z_n^*\}_{n \in \mathbb {N}}\) is bounded, and this property holds for any convergent subsequence, it follows that \(z_n^* \rightarrow z^*\). Because \(z_n^* \rightarrow z^*\), it follows that for large enough n the \(z_n^*\) must belong to \(B(z^*, \delta )\) and not \(\partial B(z^*, \delta )\) which guarantees that \(\nabla [\psi _n] (z_n^*) = 0\). \(\square \)

For the rest of the proofs in this section, \(z^* \in B(0,1)\) will always be a global maximum point of \(\psi ^{\beta ,J,m}\) such that the Hessian \(H[\psi ^{\beta ,J,m}] (z^*)\) at \(z^*\) is negative definite, and there exists a sequence of points \(\{ z_n^* \}_{n \in \mathbb {N}}\) satisfying \(z_n^* \rightarrow z^*\) in the limit as \(n \rightarrow \infty \) and \(\nabla [\psi _n^{\beta ,J,h}] (z_n^*) = 0\) for large enough n.

The following result characterizes the rate of convergence of the sequence of local maximizing points in terms of the sample mean and sample standard deviation.

Lemma 3.5.2

Let h be a strongly varying external field.

It follows that there exists a sequence of invertible matrices \(\{ H_n \}_{n \in \mathbb {N}}\) such that \(H_n \rightarrow H[\psi ^{\beta , J,m}] (z^*)\) as \(n \rightarrow \infty \) and we have

$$\begin{aligned} z_n^* - z^* = - \beta H_n^{-1} (m_n^h - m) \end{aligned}$$

for large enough n.

Proof

Using the respective critical point equations for \(\psi \) and \(\psi _n\), we can form the following pair of difference equations

$$\begin{aligned} \beta (m_n^\parallel - m^\parallel )&= \frac{x_n^*}{1 - {x_n^*}^2 - {y_n^*}^2} - \beta J x_n^* - \left( \frac{x^*}{1 - {x^*}^2 - {y^*}^2} - \beta J x^* \right) , \\ \beta (m_n^\perp - m^\perp )&= \frac{y_n^*}{1 - {x^*_n}^2 - {y_n^*}^2} - \frac{y^*}{1 - {x^*}^2 - {y^*}^2}. \end{aligned}$$

We consider two functions \(C_1, C_2: B(0,1) \rightarrow \mathbb {R}\) given by

$$\begin{aligned} C_1(x,y)&:= \frac{x}{1 - x^2 - y^2} - \beta J x - \left( \frac{x^*}{1 - {x^*}^2 - {y^*}^2} - \beta J x^* \right) , \\ C_2(x,y)&:= \frac{y}{1 - x^2 - y^2} - \frac{y^*}{1 - {x^*}^2 - {y^*}^2} . \end{aligned}$$

Observe that for any multi-index \(\alpha \in \mathbb {N}^2\) such that \(|\alpha | \ge 1\), we have \((\partial ^{\alpha } C_1)(x,y) = - \left( \partial ^\alpha \partial _1 \psi \right) (x,y)\) and \((\partial ^{\alpha } C_2)(x,y) = - \left( \partial ^\alpha \partial _2 \psi \right) (x,y)\). Furthermore, if we define \(C: B(0,1) \rightarrow \mathbb {R}^2\) by \(C(x,y):= (C_1 (x,y), C_2(x,y))\), then \(D[C] = - H \left[ \psi \right] \).

The difference equations can thus be expressed as

$$\begin{aligned} \beta (m_n - m) = C(z_n^*) . \end{aligned}$$

Since \(H[\psi ](z^*)\) is negative definite, \(D[C] (z^*) = - H[\psi ](z^*)\) is a real symmetric positive definite matrix. By considering the Taylor series of \(C_1\) and \(C_2\) developed around the point \(z^*\), we have

$$\begin{aligned} \beta (m_n - m) = \left( - H[\psi ](z^*) + S(z_n^* - z^*) \right) (z_n^* - z^*), \end{aligned}$$

where

$$\begin{aligned} S_{ij} (z_n^* - z^*) = - \sum _{k=1}^2 \int _0^1 dt \ (1 - t) (\partial _k \partial _j \partial _i \psi ) (z^* + t (z_n^* - z^*)) (z_n^* - z^*)_k . \end{aligned}$$

Because \(S(z_n^* - z^*) \rightarrow 0\) as \(n \rightarrow \infty \), and \(H[\psi ](z^*)\) is negative definite, it follows that \(- H[\psi ](z^*) + S(z_n^* - z^*)\) is invertible for large enough \(n \in \mathbb {N}\). For large enough \(n \in \mathbb {N}\), we thus have

$$\begin{aligned} z_n^* - z^* = - \beta \left( H[\psi ](z^*) - S(z_n^* - z^*) \right) ^{-1} (m_n - m) . \end{aligned}$$

Now, setting

$$\begin{aligned} H_n := H[\psi ](z^*) - S(z_n^* - z^*), \end{aligned}$$

the result follows. \(\square \)
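
The linearization \(z_n^* - z^* \approx - \beta H^{-1}(m_n^h - m)\) can be illustrated numerically. The sketch below assumes the explicit form \(\psi ^{\beta ,J,m}(x,y) = \frac{\beta J x^2}{2} + \beta \left\langle m, (x,y) \right\rangle + \frac{1}{2} \ln (1 - x^2 - y^2)\) consistent with the critical point equations above, uses illustrative parameter values, and solves the critical point equations by Newton iteration (an implementation choice of the sketch, not the method of the paper).

```python
import numpy as np

beta, J = 1.5, 1.0
m = np.array([0.3, 0.4])       # (m_par, m_perp); illustrative values
dm = np.array([2e-3, -1e-3])   # small perturbation playing the role of m_n^h - m

def grad_psi(z, field):
    x, y = z
    D = 1.0 - x**2 - y**2
    return np.array([beta * J * x + beta * field[0] - x / D,
                     beta * field[1] - y / D])

def hess_psi(z):
    x, y = z
    D = 1.0 - x**2 - y**2
    return np.array([[beta * J - 1.0 / D - 2.0 * x**2 / D**2, -2.0 * x * y / D**2],
                     [-2.0 * x * y / D**2, -1.0 / D - 2.0 * y**2 / D**2]])

def critical_point(field, z0):
    # Newton iteration for grad psi = 0, started inside the basin of the maximizer.
    z = np.array(z0, dtype=float)
    for _ in range(50):
        z = z - np.linalg.solve(hess_psi(z), grad_psi(z, field))
    return z

z_star = critical_point(m, (0.6, 0.3))
z_n = critical_point(m + dm, z_star)
print("actual difference   :", z_n - z_star)
print("predicted difference:", -beta * np.linalg.solve(hess_psi(z_star), dm))
```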

The following result is an adaptation of the Laplace method to this specific model.

Lemma 3.5.3

Let h be a strongly varying external field.

Suppose that there exists an open set \(B \subset B(0,1)\) such that \(z^*\) is the unique maximum point of \(\psi ^{\beta ,J,m}\) in B.

It follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{n \int _{B} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n^h, z \right\rangle } e^{(n - 4)\psi ^{\beta ,J,m}_n(z)}}{e^{n \psi ^{\beta ,J,m}_n(z_n^*)}} = \frac{1}{(1 - || z^* ||^2)^2}\int _{\mathbb {R}^2} dz \ e^{\frac{1}{2} \left\langle z, H[\psi ^{\beta ,J,m}](z^*) z \right\rangle } . \end{aligned}$$

Proof

First, using the fact that \(z_n^* \rightarrow z^*\), for large enough n, it follows that there exists \(0< c< \delta < C\) such that \(B(z^*, c) \subset B(z_n^*, \delta ) \subset \overline{B}(z^*, C) \subset B\), where \(\delta \) and c are fixed but small. Denote \(B_{n,\delta } = B(z_n^*, \delta )\). We have

$$\begin{aligned}&\int _{B \setminus B_{n,\delta }} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n, z \right\rangle } e^{(n - 4) (\psi _n(z) - \psi _n(z_n^*))} \\&\quad \le |B (0,1)| e^{\beta \left( 2J + 4 |m_n^\parallel | + 4 m_n^\perp \right) } e^{(n - 4) \Delta _n}, \end{aligned}$$

where

$$\begin{aligned} \Delta _n := \sup _{z \in B \setminus B_{n, \delta }} \psi _n(z) - \sup _{z \in B} \psi _n(z) < \sup _{z \in B \setminus B(z^*, c)} \psi _n(z) - \sup _{z \in B} \psi _n(z). \end{aligned}$$

Using the uniform convergence of \(\psi _n \rightarrow \psi \), one sees that

$$\begin{aligned} \lim _{n \rightarrow \infty } \Delta _n := \Delta < 0, \end{aligned}$$

and it follows that

$$\begin{aligned} \int _{B \setminus B_{n,\delta }} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n, z \right\rangle } e^{(n - 4) (\psi _n(z) - \psi _n(z_n^*))} = O\left( e^{(n - 4) \Delta } \right) . \end{aligned}$$

Next, by changing variables, we have

$$\begin{aligned}&\int _{ B_{n, \delta }} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n, z \right\rangle } e^{(n - 4) (\psi _n(z) - \psi _n(z_n^*))} \\&\quad = \frac{1}{n-4} \int _{B(0, \delta \sqrt{n-4})} dz \ e^{2 \beta J \left( x_n^* + \frac{x}{\sqrt{n - 4}}\right) ^2 + 4 \beta \left\langle m_n, z_n^* + \frac{z}{\sqrt{n - 4}} \right\rangle } e^{(n-4) (\psi _n (z_n^* + \frac{z}{\sqrt{n-4}}) - \psi _n(z_n^*))} . \end{aligned}$$

Now, note that

$$\begin{aligned} (n - 4) \left( \psi _n \left( z_n^* + \frac{z}{\sqrt{n-4}} \right) - \psi _n(z_n^*) \right)&= \frac{1}{2} \left\langle z, H[\psi ] (z_n^*)z \right\rangle \\&\quad + \frac{1}{(n - 4)^{\frac{1}{2}}} \sum _{|\alpha | = 3} R_\alpha \left( z_n^*, \frac{z}{\sqrt{n - 4}}\right) z^\alpha , \end{aligned}$$

where \(R_\alpha \) is given by

$$\begin{aligned} R_\alpha \left( z_n^*, \frac{z}{\sqrt{n - 4}}\right) = \frac{|\alpha |}{\alpha !} \int _0^1 dt \ (1 - t)^{|\alpha | - 1} \left( \partial ^\alpha \psi \right) \left( z_n^* + t \frac{z}{\sqrt{n - 4}}\right) . \end{aligned}$$

Because \(z_n^* \rightarrow z^*\), for large enough n there exists \(\delta _1 > \delta \) such that \(B(z^*, \delta _1) \subset B\) and we have

$$\begin{aligned} \left| \left( \partial ^\alpha \psi \right) \left( z_n^* + t \frac{z}{\sqrt{n - 4}}\right) \right| \le \max _{|\alpha | = 3, z' \in B(z^*, \delta _1)} |\partial ^\alpha \psi (z')| \end{aligned}$$

for \(z \in B(0, \delta \sqrt{n - 4})\). It follows that

$$\begin{aligned} \left| \frac{1}{(n - 4)^{\frac{1}{2}}}\sum _{|\alpha | = 3} R_\alpha \left( z_n^*, \frac{z}{\sqrt{n-4}} \right) z^\alpha \right|&\le \max _{|\alpha | = 3, z' \in B(z^*, \delta _1)} |\partial ^\alpha \psi (z')| \frac{1}{(n - 4)^{\frac{1}{2}}} \sum _{|\alpha | = 3} |z^\alpha | \\&\le \delta D \left( \max _{|\alpha | = 3, z' \in B(z^*, \delta _1)} |\partial ^\alpha \psi (z')| \right) || z ||^2 \end{aligned}$$

for some fixed constant \(D > 0\) and every \(z \in B(0, \delta \sqrt{n - 4})\). It follows that for large enough n and small enough \(\delta > 0\), we have

$$\begin{aligned}&\mathbbm {1}(z \in B(0, \delta \sqrt{n - 4}))e^{(n-4) (\psi _n (z_n^* + \frac{z}{\sqrt{n-4}}) - \psi _n(z_n^*))}\\&\quad \le e^{\frac{1}{2} \left\langle z, \left( H[\psi ](z_n^*) + 2 \delta D \max _{|\alpha | = 3, z' \in B(z^*, \delta _1)} |\partial ^\alpha \psi (z')| I \right) z \right\rangle } , \end{aligned}$$

where I is the identity matrix of dimension \(2 \times 2\). Because \(H[\psi ](z_n^*) \rightarrow H[\psi ] (z^*)\), and \(\delta > 0\) can be chosen arbitrarily small but fixed, it follows that for large enough n and small enough \(\delta > 0\), there exists a positive definite quadratic form Q such that

$$\begin{aligned} e^{\frac{1}{2} \left\langle z, \left( H[\psi ](z_n^*) + 2 \delta D \max _{|\alpha | = 3, z' \in B(z^*, \delta _1)} |\partial ^\alpha \psi (z')| I \right) z \right\rangle } \le e^{- \left\langle z, Q z \right\rangle } . \end{aligned}$$

Finally, it is clear that for large enough n and fixed \(\delta > 0\), there exists a constant \(E > 0\) such that

$$\begin{aligned} \mathbbm {1}(z \in B(0, \delta \sqrt{n - 4}))e^{2 \beta J \left( x_n^* + \frac{x}{\sqrt{n - 4}}\right) ^2 + 4 \beta \left\langle m_n, z_n^* + \frac{z}{\sqrt{n - 4}} \right\rangle } \le e^E . \end{aligned}$$

From these observations, by dominated convergence, it follows that

$$\begin{aligned}&\lim _{n \rightarrow \infty } \frac{(n-4) \int _{B_{n, \delta }} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n^h, z \right\rangle } e^{(n - 4)\psi ^{\beta ,J,m}_n(z)}}{e^{(n-4) \psi ^{\beta ,J,m}_n(z_n^*)}} \\&\quad = e^{2 \beta J {x^*}^2 + 4 \beta \left\langle m, z^* \right\rangle } \int _{\mathbb {R}^2} dz \ e^{\frac{1}{2} \left\langle z, H[\psi ](z^*) z \right\rangle } . \end{aligned}$$

Now, including the exponentially decreasing integral over \(B {\setminus } B_{n, \delta }\), it follows that

$$\begin{aligned}&\lim _{n \rightarrow \infty } \frac{(n-4) \int _{B} dz \ e^{2 \beta J x^2 + 4 \beta \left\langle m_n^h, z \right\rangle } e^{(n - 4)\psi ^{\beta ,J,m}_n(z)}}{e^{(n-4) \psi ^{\beta ,J,m}_n(z_n^*)}} \\ {}&= e^{2 \beta J {x^*}^2 + 4 \beta \left\langle m, z^* \right\rangle } \int _{\mathbb {R}^2} dz \ e^{\frac{1}{2} \left\langle z, H[\psi ](z^*) z \right\rangle } . \end{aligned}$$

To conclude, write the ratio in the statement of the lemma as \(\frac{n}{n-4} \, e^{-4 \psi ^{\beta ,J,m}_n(z_n^*)}\) multiplied by the ratio above. Since \(e^{-4 \psi _n(z_n^*)} \rightarrow e^{-4 \psi (z^*)} = e^{- 2 \beta J {x^*}^2 - 4 \beta \left\langle m, z^* \right\rangle } (1 - || z^* ||^2)^{-2}\), the exponential prefactors cancel and the result follows. \(\square \)
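
For illustration only, the Gaussian limit above can be compared against a direct numerical evaluation at a moderately large n. The sketch below takes \(m_n^h = m\) for simplicity (so that \(z_n^* = z^*\)), assumes the explicit form of \(\psi ^{\beta ,J,m}\) consistent with the critical point equations above, and uses illustrative parameter values; agreement is only up to the finite-n corrections neglected in the limit.

```python
import numpy as np

beta, J = 1.5, 1.0
m = np.array([0.3, 0.4])           # illustrative (m_par, m_perp)
n = 4000

def psi(x, y):
    return 0.5 * beta * J * x**2 + beta * (m[0] * x + m[1] * y) \
        + 0.5 * np.log(1.0 - x**2 - y**2)

# Grid over the unit ball; z* located by grid search (m_n^h = m, so z_n^* = z^*).
g = np.linspace(-0.995, 0.995, 2001)
X, Y = np.meshgrid(g, g)
inside = X**2 + Y**2 < 0.99
vals = np.full_like(X, -np.inf)
vals[inside] = psi(X[inside], Y[inside])
i, j = np.unravel_index(np.argmax(vals), vals.shape)
xs, ys = X[i, j], Y[i, j]

# Left-hand side of the lemma, evaluated by Riemann summation with B = B(0, 1).
dx = g[1] - g[0]
integrand = np.zeros_like(X)
integrand[inside] = np.exp(2 * beta * J * X[inside]**2
                           + 4 * beta * (m[0] * X[inside] + m[1] * Y[inside])
                           + (n - 4) * (vals[inside] - psi(xs, ys)))
lhs = n * np.sum(integrand) * dx**2 * np.exp(-4 * psi(xs, ys))

# Right-hand side: (1 - ||z*||^2)^{-2} times the Gaussian integral with matrix -H.
D = 1.0 - xs**2 - ys**2
H = np.array([[beta * J - 1 / D - 2 * xs**2 / D**2, -2 * xs * ys / D**2],
              [-2 * xs * ys / D**2, -1 / D - 2 * ys**2 / D**2]])
rhs = (1.0 / D**2) * 2 * np.pi / np.sqrt(np.linalg.det(-H))
print(lhs, rhs)   # close for large n
```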

The following result concerns the asymptotics of the difference \(\psi _n^{\beta , J, h} (z_n^\pm ) - \psi ^{\beta ,J,m} (z^\pm )\).

Lemma 3.5.4

Let h be a strongly varying external field.

It follows that there exist two sequences of matrices \(\{ Q_n \}_{n \in \mathbb {N}}\) and \(\{ H_n \}_{n \in \mathbb {N}}\) such that \(Q_n \rightarrow H[\psi ^{\beta ,J,m}] (z^*)\) and \(H_n \rightarrow H[\psi ^{\beta ,J,m}] (z^*)\) as \(n \rightarrow \infty \) and we have

$$\begin{aligned} \psi ^{\beta ,J,m}_n(z_n^*) - \psi ^{\beta ,J,m}(z^*)&= \beta \left\langle m_n^h - m, z^* \right\rangle - \frac{\beta ^2}{2} \left\langle m_n^h - m, Q_n^{-1} (m_n^h - m) \right\rangle \\ {}&+ \sum _{|\alpha | = 3} R_\alpha (-\beta H_n^{-1} (m_n^h - m)) (H_n^{-1} (m_n^h - m))^\alpha \end{aligned}$$

for large enough \(n \in \mathbb {N}\), where \(R_\alpha \) are functions which are continuous at 0.

Proof

We begin with the simple observation

$$\begin{aligned} \psi _n(z_n^*) - \psi (z^*) = \psi _n(z_n^*) - \psi (z_n^*) + \psi (z_n^*) - \psi (z^*) . \end{aligned}$$

The individual differences can be evaluated separately. For the first difference, we have

$$\begin{aligned} \psi _n(z_n^*) - \psi (z_n^*)&= \beta \left\langle m_n - m, z^* \right\rangle + \beta \left\langle m_n - m, z_n^* - z^* \right\rangle \end{aligned}$$

For the second, we have

$$\begin{aligned} \psi (z_n^*) - \psi (z^*) = \frac{1}{2} \left\langle z_n^* - z^*, H[\psi ](z^*) (z_n^* - z^*) \right\rangle + \sum _{|\alpha | = 3} R_\alpha (z_n^* - z^*) (z_n^* - z^*)^\alpha , \end{aligned}$$

where

$$\begin{aligned} R_\alpha (z_n^* - z^*) = \frac{|\alpha |}{\alpha !} \int _0^1 dt \ (1 - t)^{|\alpha | - 1} (\partial ^{\alpha } \psi ) (z^* + t (z_n^* - z^*)) . \end{aligned}$$

Combining the differences, we see that

$$\begin{aligned} \psi _n(z_n^*) - \psi (z^*)&= \beta \left\langle m_n - m, z^* \right\rangle + \beta \left\langle m_n - m, z_n^* - z^* \right\rangle + \frac{1}{2} \left\langle z_n^* - z^*, H[\psi ] (z^*) (z_n^* - z^*) \right\rangle \\ {}&\quad + \sum _{|\alpha | = 3} R_\alpha (z_n^* - z^*) (z_n^* - z^*)^\alpha . \end{aligned}$$

Now, we apply Lemma 3.5.2 to convert the \(z_n^* - z^*\) terms to \(m_n - m\) terms to get

$$\begin{aligned} \psi _n(z_n^*) - \psi (z^*)&= \beta \left\langle m_n - m, z^* \right\rangle - \frac{\beta ^2}{2} \left\langle m_n - m, Q_n^{-1} (m_n - m) \right\rangle \\&\quad + \sum _{|\alpha | = 3} R_\alpha (-\beta H_n^{-1} (m_n - m)) (H_n^{-1} (m_n - m))^\alpha \end{aligned}$$

where

$$\begin{aligned} Q_n := (2 H_n^{-1} - H_n^{-1} H[\psi ] (z^*) H_n^{-1})^{-1} , \end{aligned}$$

from which the result follows. \(\square \)

The following result characterizes the asymptotics of the weight \(W_n^{\beta , J, h, +}\) via the rate of convergence of \(m_n^h - m\).

Proof of Lemma 2.2.8

By applying Lemma 3.5.3 to the form of the weight \(W_n^{+}\) given in Equation (2.2.17), there exists a sequence \(\{ a_n \}_{n \in \mathbb {N}}\) such that \(a_n \rightarrow 1\) and

$$\begin{aligned} W_n^{+} = \frac{1}{1 + a_n e^{n (\psi _n (z^-_n) - \psi _n (z_n^+))}} . \end{aligned}$$

Suppose that the asymptotic relation \(\lim _{n \rightarrow \infty } n^\delta (m_n - m) = \gamma \in \mathbb {R}^2\) holds for some \(\delta > 0\). Using the asymptotics from Lemma 3.5.4 applied to the differences in the exponent of the previous expression, we have

$$\begin{aligned} \lim _{n \rightarrow \infty } n^\delta (\psi _n (z_n^+) - \psi (z^+) - (\psi _n(z_n^-) - \psi (z^-))) = 2 \beta x^+ \gamma ^\parallel . \end{aligned}$$

Now, by considering the specific \(\delta \) and \(\gamma \) given in the enumerated assumptions, the result follows from the following limit

$$\begin{aligned} \lim _{n \rightarrow \infty } W_n^+ = \frac{1}{1 + \lim _{n \rightarrow \infty } e^{- n^{1 - \delta } n^\delta (\psi _n (z_n^+) - \psi (z^+) - (\psi _n(z_n^-) - \psi (z^-)))}} . \end{aligned}$$

\(\square \)

3.6 Random External Fields, Random Walks, and Chaotic Size Dependence

The following two results concern the measurability of the map \(\omega \in \Omega \mapsto h \mapsto \mu _n^{\beta ,J,h}\).

Lemma 3.6.1

The mapping \(h \mapsto \mu _n^{\beta , J, h}\) is continuous.

Proof

Since the topologies on both spaces are given by metrics and the local continuous bounded functions on \(\mathbb {R}^\mathbb {N}\) are convergence determining, it is enough to show that if \(\{ h_k \}_{k \in \mathbb {N}}\) is a sequence of elements in \(\mathbb {R}^\mathbb {N}\) such that \(h_k \rightarrow h\) then we have \(\mu _n^{\beta , J, h_k} [f] \rightarrow \mu _n^{\beta , J, h} [f]\) as \(k \rightarrow \infty \) for any local continuous bounded function \(f \in C_b(\mathbb {R}^\mathbb {N})\). This follows by using dominated convergence to change the order of the limit and integration in the definition of \(\mu _n^{\beta ,J,h}\) given in Equation (2.1.4). \(\square \)

Lemma 3.6.2

Suppose that \(h: \Omega \rightarrow \mathbb {R}^\mathbb {N}\) is measurable.

It follows that \(\omega \mapsto h \mapsto \mu _n^{\beta , J, h}: \Omega \rightarrow \mathcal {M}_1(\mathbb {R}^\mathbb {N})\) is measurable.

Proof

By Lemma 3.6.1, the mapping \(h \mapsto \mu _n^{\beta , J, h}\) is continuous and thus measurable. The result follows since the composition of measurable functions is measurable. \(\square \)

The following lemma relates the limiting properties of two-dimensional random walks with step length \((h_0, h_0^2)\) to the limiting properties of the sequence \(\{ m_n^h \}_{n \in \mathbb {N}}\).

Lemma 3.6.3

Let h be a random external field which satisfies (A1) and (A2).

The sequence of random variables \(\{ m_n^h \}_{n \in \mathbb {N}}\) has the following limit properties:

  1.

    A strong law of large numbers of the form

    $$\begin{aligned} \lim _{n \rightarrow \infty } m_n^h = m = (\mathbb {E} h_0, \sqrt{\mathbb {V} h_0}) \end{aligned}$$

    almost surely.

  2.

    A central limit theorem of the form

    $$\begin{aligned} \lim _{n \rightarrow \infty } (h, \sqrt{n} (m_n^h - m)) = (h,G) \end{aligned}$$

    weakly, where G is a non-degenerate 2-dimensional Gaussian random variable independent of h.

  3.

    A recurrence result of the form

    $$\begin{aligned} \left\{ \left( p_1, \frac{p_2}{2 m^\perp } - \frac{p_1 m^\parallel }{m^\perp } \right) : p \in P\right\} \subset L \left( \left\{ n(m_n^h - m) \right\} _{n=1}^\infty \right) , \end{aligned}$$

    almost surely, where P is the set of possible values of the random walk with step length \((h_0 - \mathbb {E}h_0, h_0^2 - \mathbb {E} h_0^2)\)

Proof

Denote the random walk \(\{ S_n \}_{n=1}^\infty \) with step length \((h_0, h_0^2)\) as a sequence of \(\mathbb {R}^2\)-valued random variables by setting

$$\begin{aligned} \left( S_n \right) _1 := \sum _{i=1}^n h_i, \ \left( S_n \right) _2 := \sum _{i=1}^n h_i^2 . \end{aligned}$$

The mean \(\mathbb {E} S_n\) of \(S_n\) is given by \(\mathbb {E} S_n = n \left( \mathbb {E} h_0, \mathbb {E} h_0^2 \right) \). It follows that

$$\begin{aligned} m_n^\parallel = \left( \frac{S_n}{n} \right) _1, \ m_n^\perp = \sqrt{\left( \frac{S_n}{n} \right) _2 - \left( \left( \frac{S_n}{n} \right) _1 \right) ^2} . \end{aligned}$$

By the strong law of large numbers, it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } m_n = \left( \mathbb {E} h_0, \sqrt{\mathbb {V} (h_0)}\right) \end{aligned}$$

almost surely. We can thus set \(m^\parallel := \mathbb {E} h_0\) and \(m^\perp := \sqrt{\mathbb {V}h_0}\).

By the central limit theorem, it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \left( h, \sqrt{n} \left( \frac{S_n}{n} - (\mathbb {E} h_0, \mathbb {E} h_0^2) \right) \right) := (h, G_0) \end{aligned}$$

in distribution, where \(G_0\) is a 2-dimensional Gaussian random variable with mean 0 and covariance matrix \(\Sigma _0\) independent of h given by

$$\begin{aligned} \Sigma _0 := \begin{bmatrix} \mathbb {V} h_0 &{} \mathbb {E} h_0^3 - \mathbb {E} h_0 \mathbb {E} h_0^2 \\ \mathbb {E} h_0^3 - \mathbb {E} h_0 \mathbb {E} h_0^2 &{} \mathbb {V} h_0^2 \end{bmatrix} . \end{aligned}$$

Let \(D:= \{ (x,y) \in \mathbb {R}^2: y - x^2 > 0 \}\). Define a function \(f: D \rightarrow \mathbb {R}^2\) by setting \(f(x,y):= (x, \sqrt{y - x^2})\). Observe that \(f\left( \frac{S_n}{n} \right) = m_n\). By the multivariate delta method, it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \left( h, \sqrt{n} (m_n - m) \right)&= \lim _{n \rightarrow \infty } \left( h, \sqrt{n} \left( f \left( \frac{S_n}{n} \right) - f(\mathbb {E} h_0, \mathbb {E} h_0^2) \right) \right) \\ {}&= \left( h, D[f](\mathbb {E} h_0, \mathbb {E} h_0^2) G_0 \right) \end{aligned}$$

weakly where \(G_0\) is again independent of h. Let us then denote \(G:= D[f](\mathbb {E} h_0, \mathbb {E} h_0^2) G_0\) and note that G is a 2-dimensional Gaussian random variable with mean 0 and covariance matrix \(\Sigma \) defined by

$$\begin{aligned} \Sigma := D[f](\mathbb {E} h_0, \mathbb {E} h_0^2) \Sigma _0 D[f](\mathbb {E} h_0, \mathbb {E} h_0^2)^T \end{aligned}$$

independent of h.

Denote the centred random walk \(\{ S_n' \}_{n=1}^\infty \) defined by \(S_n':= S_n - \mathbb {E} S_n\). A standard theorem of random walks, found for instance in [12, Chapter 5], states that if a 2-dimensional scaled random walk \(\{ \frac{S_n'}{\sqrt{n}} \}_{n=1}^\infty \) converges in distribution to a non-degenerate 2-dimensional Gaussian random variable, then the random walk \(\{ S_n' \}_{n=1}^\infty \) is recurrent. Thus the centred random walk \(\{ S_n'\}_{n=1}^\infty \) is recurrent and we will denote its set of recurrent values by P. Note also that by the same standard theorem, the set P is closed.

Since P is a closed subset of the separable space \(\mathbb {R}^2\), it follows that there exists a sequence \(\{ q_i \}_{i=1}^\infty \) of elements in P which is dense in P. For \(i,j \in \mathbb {N}\), define the set \(\Omega _{i,j}\) by

$$\begin{aligned} \Omega _{i,j} := \left\{ || S'_{n} - q_i || < \frac{1}{j} \text { infinitely often}\right\} . \end{aligned}$$

By the definition of recurrence, we have \(\mathbb {P}(\Omega _{i,j}) = 1\) for any \(i,j \in \mathbb {N}\). It follows that the set \(\Omega ' \subset \Omega \) defined by

$$\begin{aligned} \Omega ' := \bigcap _{i,j \in \mathbb {N}} \Omega _{i,j} \end{aligned}$$

satisfies \(\mathbb {P}(\Omega ') = 1\). Choose any realization of the random walk \(\{ S_n \}_{n = 1}^\infty \) from the set \(\Omega '\). For this realization, let \(p \in P\) be any recurrent value. It follows that there exists a subsequence \(\{ q_{i_k} \}_{k=1}^\infty \) such that \(q_{i_k} \rightarrow p\). Using the sets \(\Omega _{i_{k}, k}\), construct a subsequence \(\{ n_k \}_{k=1}^\infty \) such that \(|| S'_{n_k} - q_{i_k} || < \frac{1}{k}\). For such a subsequence, it is clear that \(\lim _{k \rightarrow \infty } S'_{n_k} = p\). Such a construction is possible for any recurrent value p, and thus we have shown that

$$\begin{aligned} P \subset L \left( \{ S_n' \}_{n=1}^\infty \right) \end{aligned}$$

almost surely.

Returning now to the random variable \(m_n - m\), let \(\{ n_k \}_{k = 1}^\infty \) be a subsequence such that \(S'_{n_k} \rightarrow p\). It follows that

$$\begin{aligned} n_k(m^\parallel _{n_k} - m^\parallel ) = \left( S'_{n_k} \right) _1 \end{aligned}$$

and

$$\begin{aligned} n_k (m^\perp _{n_k} - m^\perp ) = \frac{\left( S'_{n_k}\right) _2}{m^\perp _{n_k} + m^\perp } - \frac{\left( S'_{n_k} \right) _1 \left( m^\parallel _{n_k} + m^\parallel \right) }{m^\perp _{n_k} + m^\perp } . \end{aligned}$$

It follows that

$$\begin{aligned} \lim _{k \rightarrow \infty } n_k (m_{n_k} - m) = \left( p_1, \frac{p_2}{2 m^\perp } - \frac{p_1 m^\parallel }{m^\perp } \right) . \end{aligned}$$

Combining this with the previous result, we have

$$\begin{aligned} \left\{ \left( p_1, \frac{p_2}{2 m^\perp } - \frac{p_1 m^\parallel }{m^\perp } \right) : p \in P\right\} \subset L \left( \left\{ n(m_n - m) \right\} _{n=1}^\infty \right) \end{aligned}$$

almost surely. \(\square \)
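
The first two limit properties can be illustrated by simulation. The sketch below uses an illustrative Gaussian field distribution (an assumption of the sketch; the lemma only requires (A1) and (A2)) and compares the empirical covariance of \(\sqrt{n}(m_n^h - m)\) with the delta-method covariance \(\Sigma \) computed above.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 0.2, 5_000, 1_000            # illustrative: h_i i.i.d. N(mu, 1)
h = rng.normal(mu, 1.0, size=(reps, n))

m_par = h.mean(axis=1)
m_perp = np.sqrt((h**2).mean(axis=1) - m_par**2)
m = np.array([mu, 1.0])                    # (E h_0, sqrt(Var h_0))

# Strong law of large numbers: the sample means concentrate around m.
print("average m_n over replicas:", m_par.mean(), m_perp.mean())

# Central limit theorem: empirical covariance of sqrt(n)(m_n - m).
fluct = np.sqrt(n) * (np.stack([m_par, m_perp], axis=1) - m)
print("empirical covariance:\n", np.cov(fluct.T))

# Delta-method covariance Sigma = Df Sigma_0 Df^T with the Gaussian moments of N(mu, 1).
Eh, Eh2, Eh3, Eh4 = mu, mu**2 + 1, mu**3 + 3 * mu, mu**4 + 6 * mu**2 + 3
Sigma0 = np.array([[Eh2 - Eh**2, Eh3 - Eh * Eh2],
                   [Eh3 - Eh * Eh2, Eh4 - Eh2**2]])
Df = np.array([[1.0, 0.0],
               [-Eh / np.sqrt(Eh2 - Eh**2), 0.5 / np.sqrt(Eh2 - Eh**2)]])
print("delta-method covariance:\n", Df @ Sigma0 @ Df.T)
```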

Using the strong law of large numbers for the sequence of random variables \(\{ m_n^h \}_{n \in \mathbb {N}}\), we have the following self-averaging result for the limiting free energy.

Proof of Theorem 2.3.3

By reusing the proof of Theorem 2.2.6, and the strong law of large numbers from Lemma 3.6.3, it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n} \ln Z_n (\beta ,J,h) = \sup _{z \in B(0,1)} \psi ^{\beta ,J,m} (z) \end{aligned}$$

almost surely. It is enough to prove that the sequence of random variables \(\{ \frac{1}{n} \ln Z_n (\beta ,J,h) \}_{n \in \mathbb {N}}\) is uniformly integrable. To that end, we immediately have the following inequality

$$\begin{aligned} \left| \frac{1}{n} \ln Z_n (\beta ,J,h) - \frac{1}{n} \ln Z_n (\beta ,J) \right| \le \beta || m_n || , \end{aligned}$$

where

$$\begin{aligned} Z_n (\beta ,J) := \int _{B(0,1)} \frac{dz}{(1 - || z ||^2)^2} e^{n \left( \frac{\beta J x^2}{2} + \frac{1}{2} \ln (1 - || z ||^2) \right) } . \end{aligned}$$

We have

$$\begin{aligned} \sqrt{\mathbb {E} \left| \frac{1}{n} \ln Z_n (\beta ,J,h) \right| ^2} \le \frac{1}{n} \ln Z_n (\beta ,J) + \beta \sqrt{\mathbb {E} h_0^2}, \end{aligned}$$

since

$$\begin{aligned} \mathbb {E} || m_n ||^2 = \mathbb {E} \frac{1}{n} \sum _{i=1}^n h_i^2 = \mathbb {E} h_0^2 . \end{aligned}$$

By using the Laplace method, it is clear that the sequence \(\{ \frac{1}{n} \ln Z_n (\beta ,J)\}_{n \in \mathbb {N}}\) is convergent and thus bounded. Combining all of these observations together, we find that the square moments of the sequence \(\{ \frac{1}{n} \ln Z_n (\beta ,J,h) \}_{n \in \mathbb {N}}\) are uniformly bounded and thus the sequence is uniformly integrable and the result follows. \(\square \)

Recall the phase characterization presented in Table 2 for the random external field. Using the first and the third properties from Lemma 3.6.3, we present the chaotic size dependence of the FVGS.

Proof of Theorem 2.3.5

By Lemma 3.6.3, it follows that \(m_n \rightarrow m = (\mathbb {E} h_0, \sqrt{\mathbb {V} h_0})\) almost surely. By applying Theorem 2.2.6 to each realization in this set of probability 1, all of the results follow except for the inclusion of the convex combinations of \(\nu _\infty ^{z^\pm }\) among the limit points of \(\{ \mu _n \}_{n \in \mathbb {N}}\).

For this result, by using the recurrence result from Lemma 3.6.3, for any \(p \in P\), there exists a subsequence \(\{ n_k \}_{k \in \mathbb {N}}\) such that

$$\begin{aligned} n_k(m_{n_k} - m) \rightarrow \left( p_1, \frac{p_2}{2 m^\perp } - \frac{p_1 m^\parallel }{m^\perp } \right) \end{aligned}$$

almost surely. It is not difficult to see that the collection of results leading to and including Theorem 2.2.9 also hold for subsequences. The conditions of this subsequence version of Theorem 2.2.9 are satisfied with \(\delta = 1\) and \(\gamma = \left( p_1, \frac{p_2}{2\,m^\perp } - \frac{p_1 m^\parallel }{m^\perp } \right) \) which implies that

$$\begin{aligned} \mu _{n_k} \rightarrow \frac{1}{1 + e^{- 2 \beta x^+ p_1}} \nu _\infty ^{z^+} + \frac{1}{1 + e^{ 2 \beta x^+ p_1}} \nu _\infty ^{z^-} \end{aligned}$$

almost surely. Since this holds for any \(p \in P\), it follows that

$$\begin{aligned} \left\{ \frac{1}{1 + e^{- 2 \beta x^+ p_1}} \nu _\infty ^{z^+} + \frac{1}{1 + e^{ 2 \beta x^+ p_1}} \nu _\infty ^{z^-} \right\} _{p_1 \in \pi _1 (P)}&\subset \mathcal {G}_\infty (\beta ,J,h) \end{aligned}$$

almost surely. If \(\pi _1 (P) = \mathbb {R}\), then for any \(\alpha \in (0,1)\) there exists \(p_1\) such that \(\alpha = \frac{1}{1 + e^{- 2 \beta x^+ p_1}}\). It follows that

$$\begin{aligned} \left\{ \alpha \nu _\infty ^{z^+} + (1 - \alpha ) \nu _\infty ^{z^-} \right\} _{\alpha \in (0,1)}&\subset \mathcal {G}_\infty (\beta ,J,h) . \end{aligned}$$

Since the set of all limit points is closed, it follows that

$$\begin{aligned} \overline{\left\{ \alpha \nu _\infty ^{z^+} + (1 - \alpha ) \nu _\infty ^{z^-} \right\} _{\alpha \in (0,1)}} = {\text {conv}} \left( \nu _\infty ^{z^+}, \ \nu _\infty ^{z^-} \right) \subset \mathcal {G}_\infty (\beta ,J,h) \end{aligned}$$

almost surely, from which the final result follows. \(\square \)

3.7 Weak Convergence of the Metastate Probability Measures

We have the following uniform tightness result.

Proof of Lemma 2.3.7

Let \(I \subset \mathbb {N}\) be a finite index set. For \(n \ge \max (I)\), by finite permutation invariance of the distribution of h, the intensity measure \(\mathbb {E} \mu _n\) satisfies the following permutation invariance property

$$\begin{aligned} \mathbb {E} \mu _n [f \circ \pi _{I}] = \mathbb {E} \mu _n [f \circ \pi _{[|I|]}] \end{aligned}$$

for any continuous bounded function \(f: \mathbb {R}^{|I|} \rightarrow \mathbb {R}\), where \([|I|]:= \{ 1,2,..., |I| \}\). If \(J \subset \mathbb {N}\) is any other finite index set such that \(|J| = |I|\), then, for \(n \ge \max (I \cup J)\) it follows that

$$\begin{aligned} \mathbb {E} \mu _n [f \circ \pi _{I}] = \mathbb {E} \mu _n [f \circ \pi _{[|I|]}] = \mathbb {E} \mu _n [f \circ \pi _{J}] \end{aligned}$$

for any continuous bounded function \(f: \mathbb {R}^{|I|} \rightarrow \mathbb {R}\).

Let \(I \subset \mathbb {N}\) be any finite index set as before. For \(j \in \mathbb {N} \cup \{ 0 \}\), define the sets \(I_j \subset \mathbb {N}\) by \(I_j:= [|I|] + j |I|:= \{ i + j |I|: i \in [|I|] \}\). The sets \(I_j\) are disjoint and, for \(n \ge \max (I) + |I|\), there exists \(k \in \mathbb {N} \) such that \(\bigcup _{j=0}^{k-1} I_j \subset [n] \subset \bigcup _{j=0}^{k} I_j\). In other words, the first k translates \(I_0, I_1,..., I_{k-1}\) are contained in [n], while \(k+1\) such translates suffice to cover [n].

For \(n \ge \max (I) + |I|\), and k as before, it follows that \(k |I| \le n \le (k + 1)|I|\). Combining together all of the above observations, it follows that

$$\begin{aligned} \mathbb {E} \mu _n \left[ || \pi _{I}||^2 \right] = \mathbb {E} \mu _n \left[ || \pi _{I_{0}}||^2 \right]&= \frac{1}{k} \sum _{j=0}^{k-1} \mathbb {E} \mu _n \left[ || \pi _{I_{j}}||^2 \right] \\ {}&\le \frac{1}{k} \sum _{i=1}^n \mathbb {E} \mu _n \left[ || \pi _{i} ||^2 \right] \\ {}&\le |I| \left( 1 + \frac{1}{n - |I|}\right) . \end{aligned}$$

In the above inequality, we used the spherical constraint to conclude that

$$\begin{aligned} \sum _{i=1}^n \mathbb {E} \mu _n \left[ || \pi _{i} ||^2 \right] = n , \end{aligned}$$

and the last line follows by considering the inequality \(k |I| \le n \le (k + 1)|I|\). By Chebyshev’s inequality, we have

$$\begin{aligned} \mathbb {E} \mu _n (|| \pi _{I} || > K) \le \frac{|I|}{K^2} \left( 1 + \frac{1}{n - |I|}\right) , \end{aligned}$$

where \(K > 0\). It follows that

$$\begin{aligned} \lim _{K \rightarrow \infty } \lim _{n \rightarrow \infty } \mathbb {E} \mu _n (|| \pi _{I} || > K) = 0 \end{aligned}$$

which implies the uniform tightness of the marginal of \(\mathbb {E} \mu _n\) on the index set I. Since I was an arbitrary finite index set, and the proof above holds so long as n is large enough, it follows that any finite marginal of the sequence of intensity measures \(\{ \mathbb {E} \mu _n \}_{n \in \mathbb {N}}\) is uniformly tight and thus the sequence of intensity measures itself is uniformly tight. \(\square \)

We can now provide the proof of the convergence in distribution of the random variable \((h, \mu _n^{\beta ,J,h})\).

Proof of Theorem 2.3.9

First, recall from Lemma 3.6.3 that \(\left( h, \sqrt{n}(m_n - m) \right) \rightarrow (h,G)\) weakly, where G is a non-degenerate 2-dimensional Gaussian random variable independent of h. By the Skorohod representation theorem, there exists a probability space \((\Omega ', \mathcal {F}', \mathbb {P}')\) on which this weak convergence can be elevated to \(\mathbb {P}'\)-almost sure convergence and the distributions of these new random variables \((h',\sqrt{n}(m_n' - m))\) and \((h', G')\) are the same as the corresponding random variables in the original space. To be completely exact, the Skorohod representation theorem generates a random variable \(Y_n'\) such that \(\sqrt{n} (m_n - m)\) and \(Y_n'\) agree in distribution and we then define \(m_n'\) such that \(m_n' = \frac{1}{\sqrt{n}} Y_n' + m\). Define the set \(\Omega '' \subset \Omega '\) by

$$\begin{aligned} \Omega '' = \{ G'_1 \not = 0 \} \cap \left\{ \lim _{n \rightarrow \infty } n^\frac{1}{2} (m_n' - m) = G' \right\} . \end{aligned}$$

Since both sets in the intersection are sets of probability 1, it follows that \(\Omega ''\) is a set of probability 1, and we have

$$\begin{aligned} \lim _{n \rightarrow \infty } n^\frac{1}{2} (m_n' - m) = G' . \end{aligned}$$

almost surely, where \(G'_1 \not = 0\) almost surely. By applying Theorem 2.2.9 with \(\delta = \frac{1}{2}\) and \(\gamma = G'\), it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \left( h', \mu _n^{\beta ,J,h'} \right) = \mathbbm {1}(G'_1 > 0) \left( h', \nu _\infty ^{z^+,h'} \right) + \mathbbm {1}(G'_1 < 0) \left( h', \nu _\infty ^{z^-, h'} \right) \end{aligned}$$

almost surely. Finally, since \((h', G')\) and \((h, G)\) agree in distribution, it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {E} f \left( h, \mu _n^{\beta ,J,h} \right)&= \lim _{n \rightarrow \infty } \mathbb {E}' f \left( h', \mu _n^{\beta ,J,h'} \right) \\&= \mathbb {P}'(G_1' > 0) \mathbb {E}' f \left( h', \nu _\infty ^{z^+,h'} \right) + \mathbb {P}'(G_1' < 0) \mathbb {E}' f \left( h', \nu _\infty ^{z^-,h'} \right) \\&=\frac{1}{2} \mathbb {E} f \left( h, \nu _\infty ^{z^+,h} \right) + \frac{1}{2} \mathbb {E} f \left( h, \nu _\infty ^{z^-,h} \right) \end{aligned}$$

for any \(f \in C_b (\mathbb {R}^\mathbb {N} \times \mathcal {M}_1 (\mathbb {R}^\mathbb {N}))\). \(\square \)

For the spin glass characterization given in [27], we have the following calculation of the distribution of the magnetization density.

Lemma 3.7.1

Let h be a random external field which satisfies (A1) and (A2).

For the pure state parameter range, we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {E} f \left( \mu _n^{\beta ,J,h} \left[ \frac{M_n}{n} \right] \right) = f(x^*) , \end{aligned}$$

and, for the mixed state parameter range, we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {E} f \left( \mu _n^{\beta ,J,h} \left[ \frac{M_n}{n} \right] \right) = \frac{1}{2} f(x^+) + \frac{1}{2} f(x^-) , \end{aligned}$$

for any continuous \(f \in C_b (\mathbb {R})\).

Proof

From Lemma 3.1.1, we see that

$$\begin{aligned} \mu _n \left[ f \left( \frac{M_n }{n} \right) \right] = \int _{B(0,1)} \rho _n^{\beta ,J,h} (dz) \ f(x) . \end{aligned}$$

By Lemma 2.2.4, for the pure state parameter range, we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \rho _n^{\beta ,J,h} \rightarrow \delta _{z^*} \end{aligned}$$

almost surely. The first result follows. The second result follows by essentially repeating the proof of Theorem 2.3.9 to show that

$$\begin{aligned} \lim _{n \rightarrow \infty } \rho _n^{\beta ,J,h} = \mathbbm {1}(G > 0) \delta _{z^+} + \mathbbm {1}(G < 0) \delta _{z^-} \end{aligned}$$

in distribution, where \(\rho _n^{\beta ,J,h}\) is understood to be a random probability measure on B(0, 1), and G is a standard 1-dimensional Gaussian random variable independent of h. \(\square \)

3.8 Convergence of the Newman–Stein Metastates

The following result concerns the almost sure convergence of the Cesàro sum of indicator functions of \(A_{n, \delta }\).

Lemma 3.8.1

Let h be a random external field which satisfies (A1), (A2), and (A4).

For the mixed state parameter range, it follows that

$$\begin{aligned} \lim _{N \rightarrow \infty } \frac{1}{N} \sum _{n=1}^{N} \mathbbm {1} \left( m_n^h - m \not \in A_{n, \delta } \right) = 0 \end{aligned}$$

almost surely.

Proof

Denote the sequence \(C_N\) by

$$\begin{aligned} C_N := \frac{1}{N} \sum _{n=1}^{N} \mathbbm {1} \left( m_n - m \not \in A_{n, \delta } \right) . \end{aligned}$$

For each N there exists K such that \(2^K \le N \le 2^{K + 1}\). For such a K, we have

$$\begin{aligned} \frac{C_{2^K}}{2} \le C_N \le 2 C_{2^{K+1}} . \end{aligned}$$

It follows that if \(C_{2^{K}} \rightarrow 0\) almost surely in the limit as \(K \rightarrow \infty \), then \(C_N \rightarrow 0\) almost surely in the limit as \(N \rightarrow \infty \). By Chebyshev’s inequality, we have the following estimate

$$\begin{aligned} \mathbb {P}(C_N > \varepsilon ) \le \frac{1}{\varepsilon N} \sum _{n=1}^N \mathbb {P}(m_n - m \not \in A_{n, \delta }) . \end{aligned}$$

For the individual terms in the sum, we use the following rough union bound

$$\begin{aligned} \mathbb {P}(m_n - m \not \in A_{n, \delta }) \le 3 \max \{ \mathbb {P}(m_n^{\parallel } - m^\parallel \not \in \pi _1 (A_{n, \delta })), \mathbb {P}(m_n^{\perp } - m^\perp \not \in \pi _2 (A_{n, \delta })) \} . \end{aligned}$$

Note that \(\pi _1 (A_{n, \delta }) = \pi _2 (A_{n, \delta })\). To estimate the probabilities, we will use Berry–Esseen type uniform bounds for non-linear smooth functions of independent random vectors provided in [29, Section 3]. For \(m_n^{\parallel } - m^\parallel \), one can apply standard Berry–Esseen bounds with finite third moments, see [19, Chapter 15], to obtain

$$\begin{aligned} \mathbb {P}(m_n^{\parallel } - m^\parallel \not \in \pi _1 (A_{n, \delta })) = \mathbb {P} \left( |G| < \sqrt{\mathbb {E} h_0^2} n^{- \delta }\right) + \mathbb {P} \left( |G| > \sqrt{\mathbb {E} h_0^2} n^{ \delta }\right) + O \left( \frac{1}{\sqrt{n}}\right) , \end{aligned}$$

where G is a standard 1-dimensional Gaussian random variable, and O is the standard big-O asymptotic notation. For \(m_n^\perp - m^\perp \), we consider the function f given by

$$\begin{aligned} f(x,y) := \sqrt{y - x^2 + \mathbb {E} h_0^2} - \sqrt{\mathbb {E} h_0^2} , \end{aligned}$$

where \(y - x^2 + \mathbb {E} h_0^2 > 0\), which is related to \(m_n^\perp - m^\perp \) by

$$\begin{aligned} m_n^\perp - m^\perp = f \left( \frac{1}{n} \sum _{i=1}^n h_i, \frac{1}{n} \sum _{i=1}^n (h_i^2 - \mathbb {E} h_0^2)\right) . \end{aligned}$$

This function is at least twice continuously differentiable in a neighborhood of the origin and its gradient is non-vanishing at the origin. The non-linear Berry–Esseen bound with a finite moment of order \(4 + \xi \) gives us

$$\begin{aligned} \mathbb {P}(m_n^\perp - m^\perp \not \in \pi _2 (A_{n, \delta }))&= \mathbb {P} \left( |G| < \sqrt{\frac{\mathbb {E} h_0^4 - \left( \mathbb {E} h_0^2\right) ^2}{4 \mathbb {E} h_0^2}} n^{- \delta }\right) \\&\quad + \mathbb {P} \left( |G| > \sqrt{\frac{\mathbb {E} h_0^4 - \left( \mathbb {E} h_0^2\right) ^2}{4 \mathbb {E} h_0^2}} n^{ \delta }\right) \\&\quad + O \left( \frac{1}{n^{\frac{\xi }{4}}} \right) , \end{aligned}$$

where G is as before. For the Gaussian random variables, a rough standard estimate for small value probabilities and large value probabilities shows that

$$\begin{aligned} \mathbb {P}(|G| < a n^{- \delta }) = a O \left( \frac{1}{n^\delta } \right) , \ \mathbb {P}(|G| > b n^\delta ) = b O \left( \frac{1}{n^\delta } \right) \end{aligned}$$

for \(a,b > 0\). Denote \(\chi = \min \{ \frac{1}{2}, \frac{\xi }{4}, \delta \}\). Compiling together all of the above bounds, we have

$$\begin{aligned} \mathbb {P}(m_n - m \not \in A_{n, \delta }) = O \left( \frac{1}{n^\chi }\right) . \end{aligned}$$

Returning to the Cesàro sum, we have

$$\begin{aligned} \mathbb {P}(C_N > \varepsilon ) = \frac{1}{\varepsilon } O \left( \frac{1}{N} \sum _{n=1}^N \frac{1}{n^\chi } \right) = \frac{1}{\varepsilon } \frac{1}{N} \sum _{n=1}^N \frac{1}{\left( \frac{n}{N} \right) ^\chi } O \left( \frac{1}{N^\chi }\right) . \end{aligned}$$

Using Riemann sums, it follows that

$$\begin{aligned} \lim _{N \rightarrow \infty } \frac{1}{N} \sum _{n=1}^N \frac{1}{\left( \frac{n}{N} \right) ^\chi } = \int _0^1 dx \ \frac{1}{x^\chi } < \infty . \end{aligned}$$

Combining together these results, we see that

$$\begin{aligned} \mathbb {P}(C_N > \varepsilon ) = \frac{1}{\varepsilon } O\left( N^{- \chi } \right) . \end{aligned}$$

Now, we will apply the Borel–Cantelli lemma, see [19, Chapter 2]. For any rational \(\varepsilon > 0\), we have

$$\begin{aligned} \sum _{K=1}^\infty \mathbb {P}(C_{2^K} > \varepsilon ) = \frac{1}{\varepsilon } O \left( \sum _{K=1}^\infty \left( \frac{1}{2^\chi } \right) ^K \right) . \end{aligned}$$

This implies that the sum on the left-hand side is finite for any rational \(\varepsilon > 0\). By the Borel–Cantelli lemma, \(\mathbb {P}(C_{2^K} > \varepsilon \text { infinitely often}) = 0\) for every rational \(\varepsilon > 0\), which implies that \(C_{2^K} \rightarrow 0\) almost surely as \(K \rightarrow \infty \), and thus \(C_N \rightarrow 0\) almost surely as \(N \rightarrow \infty \), as desired. \(\square \)
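
The Riemann-sum step above, namely \(\frac{1}{N} \sum _{n=1}^N n^{-\chi } = O(N^{-\chi })\), can be checked directly; the following sketch compares the Cesàro average with \(N^{-\chi }/(1-\chi )\) for an illustrative value of \(\chi \).

```python
import numpy as np

chi = 0.25   # illustrative exponent in (0, 1)
for N in (10**3, 10**4, 10**5, 10**6):
    cesaro = np.sum(np.arange(1, N + 1, dtype=float) ** (-chi)) / N
    print(N, cesaro, N ** (-chi) / (1.0 - chi))   # the two columns approach each other
```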

Along the sets \(A_{n, \delta }\), we have explicit control over the convergence of realizations of the weights \(W_n^{\beta , J, h, +}\) and the evaluation maps \(\mu _n^{\beta , J, h, \pm } [f]\). This control is presented in the following result.

Lemma 3.8.2

Let h be a random external field which satisfies (A1) and (A2).

For the mixed state parameter range, it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbbm {1} \left( m_n^h - m \in A_{n, \delta } \right) \left| W_n^{\beta ,J,h,+} - \mathbbm {1} \left( \sum _{i=1}^n h_i > 0 \right) \right| = 0 \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbbm {1} \left( m_n^h - m \in A_{n, \delta } \right) \left| \mu _n^{\beta ,J,h,\pm } [f] - \nu _\infty ^{z^\pm ,h} [f] \right| = 0 \end{aligned}$$

for any \(f \in {\text {LBL}} (\mathbb {R}^\mathbb {N})\).

Proof

Let us first split the set \(A_{n, \delta }\) into two disjoint sets \(A_{n,\delta }^+\) and \(A_{n, \delta }^-\) satisfying \(A_{n, \delta } = A_{n, \delta }^+ \cup A_{n, \delta }^-\) defined by

$$\begin{aligned} A_{n, \delta }^\pm = A_{n, \delta } \cap \left\{ \pm \sum _{i=1}^n h_i > 0 \right\} . \end{aligned}$$

From the proof of Lemma 2.2.8, recall that

$$\begin{aligned} W_n^+ = \frac{1}{1 + a_n e^{-n (\psi _n (z_n^+) - \psi (z^+) - (\psi _n(z_n^-) - \psi (z^-)))}}, \end{aligned}$$

where \(\{ a_n \}_{n \in \mathbb {N}}\) is a sequence which converges to 1 so long as \(m_n \rightarrow m\). If we consider a realization in the set \(A_{n, \delta }^+\) for large enough n, then, by considering the asymptotics presented in Lemma 3.5.4, it follows that

$$\begin{aligned}&\psi _n (z_n^+) - \psi (z^+) - (\psi _n(z_n^-) - \psi (z^-)) \\&\quad \ge 2 \beta x^+ (m_n^\parallel - m^\parallel ) - b_n \sum _{|\alpha | = 2} |(m_n - m)^\alpha | - c_n \sum _{|\alpha |=3} |(m_n - m)^\alpha | \\&\quad \ge 2 \beta x^+ n^{- \frac{1}{2} - \delta } - b_n \alpha _2 n^{-1 + 2 \delta } - c_n \alpha _3 n^{-\frac{3}{2} + 3 \delta } \\&\quad = n^{- \frac{1}{2} - \delta } (2 \beta x^+ - b_n \alpha _2 n^{- \frac{1}{2} + 3 \delta } - c_n \alpha _3 n^{-1 + 4 \delta }), \end{aligned}$$

where the sequences \(\{ b_n \}_{n \in \mathbb {N}}\) and \(\{ c_n \}_{n \in \mathbb {N}}\) consist of terms arising from the matrices \(Q_n\) and \(H_n\) along with the error terms \(R_\alpha \) present in Lemma 3.5.4. These sequences converge to some non-negative constants when \(m_n \rightarrow m\). The constants \(\alpha _2\) and \(\alpha _3\) are related to the number of two-dimensional multi-indices of degree 2 and 3, respectively. Since \(\delta \in (0, \frac{1}{6})\), it follows that

$$\begin{aligned} 2 \beta x^+ - b_n \alpha _2 n^{- \frac{1}{2} + 3 \delta } - c_n \alpha _3 n^{-1 + 4 \delta } \rightarrow 2 \beta x^+ > 0 \end{aligned}$$

when \(m_n \rightarrow m\). Combining together these observations, for large enough n, we have

$$\begin{aligned} \mathbbm {1} \left( m_n - m \in A^+_{n, \delta } \right) \left| W_n^+ - \mathbbm {1} \left( \sum _{i=1}^n h_i > 0 \right) \right| \le a_n e^{- n^{ \frac{1}{2} - \delta } (2 \beta x^+ - b_n \alpha _2 n^{- \frac{1}{2} + 3 \delta } - c_n \alpha _3 n^{-1 + 4 \delta }) }, \end{aligned}$$

from which it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbbm {1} \left( m_n - m \in A^+_{n, \delta } \right) \left| W_n^+ - \mathbbm {1} \left( \sum _{i=1}^n h_i > 0 \right) \right| = 0 . \end{aligned}$$

A similar analysis done for \(A_{n, \delta }^-\) shows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbbm {1} \left( m_n - m \in A^-_{n, \delta } \right) \left| W_n^+ - \mathbbm {1} \left( \sum _{i=1}^n h_i > 0 \right) \right| = 0 . \end{aligned}$$

Combining these observations, the first result follows. The second result follows from Equation  (2.2.17) since \(\mu _n^\pm [f] \rightarrow \nu _\infty ^{z^\pm } [f]\) when \(m_n \rightarrow m\). \(\square \)

Combining Lemma 3.8.1 with Lemma 3.8.2, we have the following asymptotic difference result.

Proof of Lemma 2.3.13

First, let us denote

$$\begin{aligned} \nu _n := \mathbbm {1} \left( \sum _{i=1}^n h_i> 0 \right) \nu _\infty ^{z^+} + \left( 1 - \mathbbm {1} \left( \sum _{i=1}^n h_i > 0 \right) \right) \nu _\infty ^{z^-} . \end{aligned}$$

By using the fact that P is Lipschitz in its variables, we have

$$\begin{aligned}&\left| P \left( \mu _n [f_1],..., \mu _n [f_m] \right) - P \left( \nu _n [f_1],..., \nu _n [f_m] \right) \right| \\ {}&\quad \le a \sum _{j=1}^m \left| \mu _n^+ [f_j] - \nu _\infty ^{z^+} [f_j] \right| + b \sum _{j=1}^m \left| \mu _n^- [f_j] - \nu _\infty ^{z^-} [f_j] \right| + c \left| W_n^+ - \mathbbm {1} \left( \sum _{i=1}^n h_i > 0\right) \right| \end{aligned}$$

where a, b, and c are positive constants that depend on the coefficients of P and the bounds of each \(f_j\). By combining this inequality with Lemma 3.8.2, we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbbm {1} \left( m_n - m \in A_{n, \delta } \right) \left| P \left( \mu _n [f_1],..., \mu _n [f_m] \right) - P \left( \nu _n [f_1],..., \nu _n [f_m] \right) \right| = 0 . \end{aligned}$$

Furthermore, one can see that

$$\begin{aligned} P \left( \nu _n[f_1],..., \nu _n[f_m] \right)&= \mathbbm {1} \left( \sum _{i=1}^n h_i> 0 \right) P \left( \nu _\infty ^{z^+} [f_1],..., \nu _\infty ^{z^+} [f_m] \right) \\ {}&\quad + \left( 1 - \mathbbm {1} \left( \sum _{i=1}^n h_i > 0 \right) \right) P \left( \nu _\infty ^{z^-} [f_1],..., \nu _\infty ^{z^-} [f_m] \right) . \end{aligned}$$

It follows that

$$\begin{aligned} \lim _{N \rightarrow \infty } \frac{1}{N} \sum _{n=1}^N \mathbbm {1} \left( m_n - m \in A_{n, \delta } \right) \left| P \left( \mu _n [f_1],..., \mu _n [f_m] \right) - P \left( \nu _n [f_1],..., \nu _n [f_m] \right) \right| = 0 . \end{aligned}$$

By combining this result and Lemma 3.8.1, the result follows. \(\square \)

We determine the limit in distribution of the sequence \(\{ (h, T_N^+ )\}_{N \in \mathbb {N}}\) and the limit points of the sequence \(\{ T_N^+\}_{N \in \mathbb {N}}\) almost surely.

Proof of Lemma 2.3.14

For the first result, note that the random variable \(T^+_N\) converges in distribution to an arcsine distributed random variable \(\alpha \) in the limit as \(N \rightarrow \infty \), see [32, Chapter 4]. It follows that the sequence \(\{ (h, T_N^+)\}_{N \in \mathbb {N}}\) is uniformly tight. Reusing the almost sure convergence argument from Lemma 3.8.1, it follows that

$$\begin{aligned} \lim _{N \rightarrow \infty } \frac{1}{N} \sum _{n=1}^{N} \mathbbm {1} \left( \frac{1}{n} \sum _{i=1}^n h_i = m_n^\parallel \not \in \pi _1 (A_{n, \delta }) \right) = 0 \end{aligned}$$

almost surely, where \(\delta > 0\). Note that assumption (A2) guarantees that \(h_0\) has a finite absolute third moment. For the following steps, we fix \(\delta \in (0, \frac{1}{2})\). If \(I \subset \mathbb {N}\) is any finite index set and \(K \subset \mathbb {R}^I\) is a compact set, then, along the sets \(\pi _1 (A_{n, \delta })\), we have

$$\begin{aligned}&\lim _{n \rightarrow \infty } \mathbbm {1}(\pi _I (h) \in K) \mathbbm {1} \left( \frac{1}{n} \sum _{i=1}^n h_i \in \pi _1 (A_{n, \delta }) \right) \left| \mathbbm {1} \left( \sum _{i=1}^n h_i> 0 \right) - \mathbbm {1} \left( \sum _{i \in \{ 1,2,...,n\} \setminus I} h_i > 0 \right) \right| \\&\quad = 0 . \end{aligned}$$

Let \(f: \mathbb {R}^\mathbb {N} \rightarrow \mathbb {R}\) be a local continuous function with compact support, and let \(L \in {\text {BL}}(\mathbb {R})\). Let I denote the index set on which f is local, and let K be the compact set in \(\mathbb {R}^I\) on which it is supported. By combining the above limits with the almost sure convergence of the Cesàro sum, it follows that

$$\begin{aligned} \lim _{N \rightarrow \infty } \left| \mathbb {E} f(h) L (T_{N}^+) - \mathbb {E} f(h) \mathbb {E} L \left( \frac{1}{N} \sum _{n=1}^{N} \mathbbm {1} \left( \sum _{i \in \{ 1,2,...,n \} \setminus I} h_i > 0 \right) \right) \right| = 0 . \end{aligned}$$

Here the expectation factorizes because the modified indicator functions depend only on the coordinates of h outside I and are therefore independent of f(h). Since \(T_N^+\) converges in distribution to an arcsine distributed random variable, and removing the finitely many coordinates indexed by I does not affect this limit, it follows that

$$\begin{aligned} \lim _{N \rightarrow \infty } \mathbb {E} f(h) L (T_{N}^+) = \mathbb {E} f (h) \mathbb {E} L (\alpha ) . \end{aligned}$$

Finally, since the subalgebra generated by products of compactly supported local continuous functions and bounded Lipschitz functions is separating, it follows that

$$\begin{aligned} \lim _{N \rightarrow \infty } (h, T^+_N) = (h, \alpha ) \end{aligned}$$

in distribution, where \(\alpha \) is independent of h.

For the second result, let \(\{ q_k \}_{k \in \mathbb {N}}\) be an enumeration of the rationals in [0, 1]. Define the sets

$$\begin{aligned} A_{k,j} := \left\{ T_N^+ \in \left[ q_k - \frac{1}{j}, q_k + \frac{1}{j}\right] \text { i.o.} \right\} . \end{aligned}$$

The sets \(A_{k,j}\) are measurable with respect to the exchangeable \(\sigma \)-algebra, and we have

$$\begin{aligned} \mathbb {P}(A_{k,j}) \ge \limsup _{N \rightarrow \infty } \mathbb {P} \left( T_N^+ \in \left[ q_k - \frac{1}{j}, q_k + \frac{1}{j}\right] \right) = \mathbb {P} \left( \alpha \in \left[ q_k - \frac{1}{j}, q_k + \frac{1}{j}\right] \right) > 0 . \end{aligned}$$

By the Hewitt–Savage 0–1 law, see [19, Chapter 12], the sets \(A_{k,j}\) have probability 1, and, as a result, the intersection \(\bigcap _{k,j \in \mathbb {N}} A_{k,j}\) is also a set of probability 1. Furthermore, by a random subsequence construction for random walks similar to that in the proof of Lemma 3.6.3, for any \(\lambda \in [0,1]\), there exists a random subsequence \(N_k\) such that \(T^+_{N_k} \rightarrow \lambda \), which implies that

$$\begin{aligned}{}[0,1] \subset L \left( \{ T_N^+ \}_{N \in \mathbb {N}} \right) . \end{aligned}$$

For the opposite inclusion, it is enough to observe that \(T_N^+ \in [0,1]\) for every N, so the limit of any convergent subsequence must also belong to [0, 1]. The result follows. \(\square \)
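The arcsine limit used in the proof of Lemma 2.3.14 can also be checked numerically. The following sketch is illustrative only; the choice of a field with standard normal components is an assumption made for convenience. It samples independent realizations of the partial sums and compares the empirical distribution function of \(T_N^+\) with the arcsine distribution function \(x \mapsto \frac{2}{\pi } \arcsin \sqrt{x}\).

\begin{verbatim}
# Empirical check of the arcsine limit of T_N^+ (illustration only).
import numpy as np

rng = np.random.default_rng(1)
N, samples = 20_000, 2_000
T = np.empty(samples)
for k in range(samples):
    S = np.cumsum(rng.standard_normal(N))  # toy field with standard normal components
    T[k] = np.mean(S > 0)                  # T_N^+ for this realization

arcsine_cdf = lambda x: (2.0 / np.pi) * np.arcsin(np.sqrt(x))
for x in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"x = {x:4.2f}   empirical P(T_N^+ <= x) = {np.mean(T <= x):.3f}"
          f"   arcsine CDF = {arcsine_cdf(x):.3f}")
\end{verbatim}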

We now present the full almost sure divergence result and the random subsequence convergence result for the Newman–Stein metastates.

Proof of Theorem 2.3.15

By Lemma 2.3.12, we know that the N–S metastates are uniformly tight almost surely. Let \(\{ \overline{\kappa }_{N_k} \}_{k \in \mathbb {N}}\) be any weakly convergent subsequence. By Lemma 2.3.13, it follows that

$$\begin{aligned} \overline{\kappa }_{N_k} [P] = T_{N_k}^+ P (\nu _\infty ^{z^+} [f_1],..., \nu _\infty ^{z^+} [f_m]) + (1 - T_{N_k}^+) P (\nu _\infty ^{z^-} [f_1],..., \nu _\infty ^{z^-} [f_m]) + o(1) , \end{aligned}$$

almost surely, where P and \(f_i\) are as in Lemma 2.3.13. Since \(T^+_{N_k} \in [0,1]\), there exists a subsubsequence \(\{ T^+_{N_{k_j}}\}_{j \in \mathbb {N}}\) such that \(T^+_{N_{k_j}} \rightarrow \lambda \in [0,1]\) in the limit as \(j \rightarrow \infty \). Since this is a subsubsequence of a weakly convergent subsequence and the functions P form a separating subalgebra, it follows that

$$\begin{aligned} \lim _{k \rightarrow \infty } \overline{\kappa }_{N_k} = \lambda \delta _{\nu _\infty ^{z^+}} + (1 - \lambda )\delta _{\nu _\infty ^{z^-}} \end{aligned}$$

almost surely. This shows that

$$\begin{aligned} L \left( \left\{ \overline{\kappa }_N^{\beta ,J,h} \right\} _{N \in \mathbb {N}} \right) \subset {\text {conv}} (\delta _{\nu _\infty ^{z^+, h}}, \delta _{\nu _\infty ^{z^-, h}}) , \end{aligned}$$

almost surely. For the opposite inclusion, by combining Lemmas 2.3.14 and 2.3.13, for any \(\lambda \in [0,1]\), there exists a subsequence \(\{ N_{k} \}_{k \in \mathbb {N}}\) such that

$$\begin{aligned} \overline{\kappa }_{N_k} [P] = \lambda P (\nu _\infty ^{z^+} [f_1],..., \nu _\infty ^{z^+} [f_m]) + (1 - \lambda ) P (\nu _\infty ^{z^-} [f_1],..., \nu _\infty ^{z^-} [f_m]) + o(1) \end{aligned}$$

almost surely. Since the N–S metastates are uniformly tight almost surely and the functions P form a separating subalgebra, it follows that there exists a subsubsequence \(\{ N_{k_j}\}_{j \in \mathbb {N}}\) such that

$$\begin{aligned} \lim _{j \rightarrow \infty } \overline{\kappa }_{N_{k_j}} = \lambda \delta _{\nu _\infty ^{z^+}} + (1 - \lambda )\delta _{\nu _\infty ^{z^-}} \end{aligned}$$

almost surely. To be explicit, the closure \(\overline{\left\{ \overline{\kappa }_N \right\} }_{N \in \mathbb {N}}\) is compact almost surely by uniform tightness, and \(\overline{\left\{ \overline{\kappa }_{N_k} \right\} }_{k \in \mathbb {N}}\), being a closed subset of a compact set, is compact almost surely as well; this is how the subsubsequence \(\{ N_{k_j}\}_{j \in \mathbb {N}}\) is obtained. This shows that

$$\begin{aligned} {\text {conv}} (\delta _{\nu _\infty ^{z^+}}, \delta _{\nu _\infty ^{z^-}}) \subset L \left( \left\{ \overline{\kappa }_N \right\} _{N \in \mathbb {N}} \right) \end{aligned}$$

almost surely, from which the results follow. \(\square \)
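The chaotic size dependence expressed by Theorem 2.3.15 can be visualized in a toy computation; the sketch below is illustrative only, and the standard normal field is an assumption. Along a single realization, the running values of \(T_N^+\) keep oscillating, and one can search for volumes N at which \(T_N^+\) is close to any prescribed \(\lambda \in [0,1]\). In a finite run the trajectory need not come arbitrarily close to every target, so the sketch merely reports the closest visit.

\begin{verbatim}
# Running values of T_N^+ along one field realization (illustration only).
import numpy as np

rng = np.random.default_rng(2)
N_max = 1_000_000
S = np.cumsum(rng.standard_normal(N_max))          # one realization of the toy field
T = np.cumsum(S > 0) / np.arange(1, N_max + 1)     # T_N^+ for N = 1, ..., N_max

for lam in (0.1, 0.5, 0.9):
    n_best = int(np.argmin(np.abs(T - lam))) + 1   # volume with T_N^+ closest to lambda
    print(f"lambda = {lam}: closest value T_N^+ = {T[n_best - 1]:.4f} at N = {n_best}")
\end{verbatim}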

We have the following convergence in distribution result.

Proof of Theorem 2.3.16

Since the collection of probability measures of N–S metastates is uniformly tight by Lemma 2.3.12, it is enough to prove convergence in distribution of random variables of the form \(\overline{\kappa }_N [P]\), where P is as in Lemma 2.3.13. By Lemma 2.3.13 and dominated convergence, it follows that

$$\begin{aligned} \mathbb {E} f \left( \overline{\kappa }_N [P] \right)&= \mathbb {E} f \left( T_{N}^+ P (\nu _\infty ^{z^+} [f_1],..., \nu _\infty ^{z^+} [f_m]) + (1 - T_{N}^+) P (\nu _\infty ^{z^-} [f_1],..., \nu _\infty ^{z^-} [f_m]) \right) \\&\quad + o(1) \end{aligned}$$

for any \(f \in {\text {BL}}(\mathbb {R})\). By applying Lemma 2.3.14, it follows that

$$\begin{aligned} \lim _{N \rightarrow \infty }\overline{\kappa }_N = \alpha \delta _{\nu _\infty ^{z^+}} + (1 - \alpha ) \delta _{\nu _\infty ^{z^-}} \end{aligned}$$

in distribution, where \(\alpha \) is an arcsine distributed random variable independent of h. \(\square \)
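Finally, as an illustrative sanity check of Theorem 2.3.16, again with a toy standard normal field and stand-in values for \(P (\nu _\infty ^{z^\pm } [f_1],..., \nu _\infty ^{z^\pm } [f_m])\), one can compare \(\mathbb {E} f (\overline{\kappa }_N [P])\), approximated through the surrogate \(T_N^+ P^+ + (1 - T_N^+) P^-\), with \(\mathbb {E} f (\alpha P^+ + (1 - \alpha ) P^-)\), where \(\alpha \) is sampled from the arcsine (Beta(1/2, 1/2)) distribution and f is a bounded Lipschitz test function chosen for the sketch. The two sample means should agree up to sampling error.

\begin{verbatim}
# Distributional comparison for Theorem 2.3.16 (illustration only).
import numpy as np

rng = np.random.default_rng(3)
N, samples = 20_000, 2_000
P_plus, P_minus = 1.19, -0.24          # stand-ins for P(nu^+[f_1], ...), P(nu^-[f_1], ...)

kappa_P = np.empty(samples)            # surrogate for kappa_bar_N[P] over field realizations
for k in range(samples):
    S = np.cumsum(rng.standard_normal(N))
    T = np.mean(S > 0)
    kappa_P[k] = T * P_plus + (1.0 - T) * P_minus

alpha = rng.beta(0.5, 0.5, size=samples)           # arcsine = Beta(1/2, 1/2) on [0, 1]
limit_P = alpha * P_plus + (1.0 - alpha) * P_minus

f = lambda x: np.minimum(np.abs(x), 1.0)           # a bounded Lipschitz test function
print(f"E f(kappa_bar_N[P])            ~ {f(kappa_P).mean():.3f}")
print(f"E f(alpha P^+ + (1-alpha) P^-) ~ {f(limit_P).mean():.3f}")
\end{verbatim}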