Multiplicative arithmetic functions and the generalized Ewens measure

Random integers, sampled uniformly from $[1,x]$, share similarities with random permutations, sampled uniformly from $S_n$. These similarities include the Erd\H{o}s--Kac theorem on the distribution of the number of prime factors of a random integer, and Billingsley's theorem on the largest prime factors of a random integer. In this paper we extend this analogy to non-uniform distributions. Given a multiplicative function $\alpha \colon \mathbb{N} \to \mathbb{R}_{\ge 0}$, one may associate with it a measure on the integers in $[1,x]$, where $n$ is sampled with probability proportional to the value $\alpha(n)$. Analogously, given a sequence $\{ \theta_i\}_{i \ge 1}$ of non-negative reals, one may associate with it a measure on $S_n$ that assigns to a permutation a probability proportional to a product of weights over the cycles of the permutation. This measure is known as the generalized Ewens measure. We study the case where the mean value of $\alpha$ over primes tends to some positive $\theta$, as well as the weights $\alpha(p) \approx (\log p)^{\gamma}$. In both cases, we obtain results in the integer setting which are in agreement with those in the permutation setting.


Introduction
The analogy between permutations and integers is a well-established one; see the surveys [6, Ch. 1] and [28]. The analogy has led to advances both for permutations and for integers, see e.g. [26, 27, 17, 18]. This analogy always involves comparing a uniformly drawn integer in $[1, x]$ with a uniformly drawn permutation from $S_n$, where $n \approx \log x$. Our results suggest that the analogy persists even when the chosen measures are not uniform.
We let $C_i(\pi)$ be the number of cycles of $\pi$ of length $i$, and denote by $C(\pi)$ the total number of cycles of $\pi$.
In recent years, there has been significant activity in the study of permutations sampled according to cycle weights [58, 54, 37, 10, 41, 44, 43, 22, 13, 15, 51, 49]; this model is related to the study of the quantum Bose gas in statistical mechanics, see e.g. [8, 9, 19]. To state the model, let $\theta_1, \dots, \theta_n$ be non-negative reals (not all zero). The probability of a permutation $\pi$ with respect to the weights $\theta_i$ is defined to be
$$\mathbb{P}_{n,\theta_i}(\pi) = \frac{1}{h_n \, n!} \prod_{i=1}^{n} \theta_i^{C_i(\pi)}, \qquad (1.1)$$
where $h_n$ is the normalization constant, known as the partition function, given by
$$h_n = \frac{1}{n!} \sum_{\sigma \in S_n} \prod_{i=1}^{n} \theta_i^{C_i(\sigma)}.$$
We let $\pi_{n,\theta_i}$ be the random permutation whose probability distribution is (1.1). The measure $\mathbb{P}_{n,\theta_i}$ is called a generalized Ewens measure.
We now describe an analogous measure on the positive integers up to $x$, which is the main object of study in this paper. Given a positive integer $m \in \mathbb{N}$, denote by $p_1(m) \ge p_2(m) \ge \dots$ the prime factors of $m$ (repeated according to their multiplicity), arranged in non-increasing order. We have $\log p_1(m) + \log p_2(m) + \dots = \log m$.
We denote by $\Omega(m)$ the number of prime factors of $m$, counted with multiplicity. If $p^k \mid n$ and $p^{k+1} \nmid n$, we write $p^k \| n$. This $k$ is known as the multiplicity of $p$ in $n$, and is denoted $\nu_p(n)$.
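For small inputs these statistics are straightforward to compute by trial division; a minimal sketch (function names are ours):

```python
def factorization(m):
    """Return the prime factorization of m as a dict p -> nu_p(m)."""
    factors = {}
    p = 2
    while p * p <= m:
        while m % p == 0:
            factors[p] = factors.get(p, 0) + 1
            m //= p
        p += 1
    if m > 1:  # leftover prime factor larger than sqrt of the original m
        factors[m] = factors.get(m, 0) + 1
    return factors

def big_omega(m):
    """Omega(m): number of prime factors counted with multiplicity."""
    return sum(factorization(m).values())

def small_omega(m):
    """omega(m): number of distinct prime factors."""
    return len(factorization(m))
```

For example, $360 = 2^3 \cdot 3^2 \cdot 5$ gives $\Omega(360) = 6$ and $\omega(360) = 3$.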
A function $\alpha \colon \mathbb{N} \to \mathbb{C}$ is called multiplicative if $\alpha(1) = 1$ and $\alpha(nm) = \alpha(n)\alpha(m)$ for every coprime $n, m \in \mathbb{N}$. Given a non-negative multiplicative function $\alpha$, we define a measure on the positive integers up to $x$ by
$$\mathbb{P}_{x,\alpha}(m) = \frac{1}{S(x)} \prod_{p^k \| m} \alpha(p^k) = \frac{\alpha(m)}{S(x)}, \qquad 1 \le m \le x, \qquad (1.2)$$
where the product is over (maximal) prime powers dividing $m$, and $S(x)$ is the normalization constant
$$S(x) = \sum_{m \le x} \alpha(m).$$
We let $N_{x,\alpha}$ be the random integer whose probability distribution is (1.2). In this paper we consider two different families of multiplicative measures on the integers, and compare our results with corresponding generalized Ewens measures.
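For moderate $x$, the measure (1.2) can be simulated exactly by computing all weights and sampling proportionally. A minimal sketch, using the prototypical weight $\alpha(n) = \theta^{\omega(n)}$ with $\theta = 2$ (the choice of weight here is only an example):

```python
import random

def alpha(n, theta=2.0):
    """Example multiplicative weight alpha(n) = theta^{omega(n)}."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        count += 1
    return theta ** count

def sample_N(x, rng=random.Random(0)):
    """Sample an integer in [1, x] with probability proportional to alpha(n)."""
    support = range(1, int(x) + 1)
    weights = [alpha(n) for n in support]
    return rng.choices(support, weights=weights, k=1)[0]
```

This brute-force sampler is only illustrative: it enumerates all of $[1, x]$, so it is not usable for the large $x$ relevant to the asymptotic statements.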

Constant mean value
We consider multiplicative functions $\alpha \colon \mathbb{N} \to \mathbb{R}_{\ge 0}$ satisfying two conditions, (1.3) and (1.4), for some $\theta > 0$, $d > -1$, $a \in (0, 1)$, $\eta \in (0, 1/2]$ and $r \in (0, 2)$; here $p$ denotes a prime number. The first condition is
$$\sum_{p \le x} \frac{\alpha(p) \log p}{p^d} \sim \theta x. \qquad (1.3)$$
Recall that the prime number theorem says that $\sum_{p \le x} \log p \sim x$. Thus, (1.3) should be interpreted as $\alpha(p)/p^d$ being, on average, of size $\theta$, and it is a common condition in multiplicative number theory. Condition (1.4) is of a more technical nature. We did not strive to find the most general conditions for our theorem to hold, but rather to find conditions which are easy to work with, lead to short proofs and are satisfied for natural examples. Our result for these weights is the following.

Theorem 1.1. Suppose $\alpha$ satisfies (1.3)-(1.4). Then, as $x \to \infty$,
$$\frac{\Omega(N_x) - \theta \log\log x}{\sqrt{\theta \log\log x}} \xrightarrow{d} N(0, 1), \qquad (1.5)$$
and
$$\left( \frac{\log p_1(N_x)}{\log N_x}, \frac{\log p_2(N_x)}{\log N_x}, \dots \right) \xrightarrow{d} \mathrm{PD}(\theta), \qquad (1.6)$$
where $\mathrm{PD}(\theta)$ is the Poisson-Dirichlet distribution with parameter $\theta$ (defined in §2).
Here the arrows indicate convergence in distribution. The proof of the first part of Theorem 1.1 applies to $\omega(N_x)$ as well, where $\omega(n)$ counts the number of prime factors of $n$ without multiplicity, and the result is the same. The prototypical example of a function $\alpha$ satisfying the conditions is $\theta^{\omega(n)}$ (with $d = 0$, $\eta = 1/2$, $r = 1$ and any $a > 0$).
The case $\alpha \equiv 1$ of (1.5) is the Erdős-Kac theorem [23]. Our proof of it generalizes the proof that Billingsley [11] gave for the original Erdős-Kac theorem. The case $\alpha \equiv 1$ of (1.6) is Billingsley's theorem [12]. Our proof of it generalizes the proof of Donnelly and Grimmett [16], who elucidated Billingsley's result.
The Ewens measure with parameter $\theta > 0$ is a measure on $S_n$, which may be defined by taking $\theta_i = \theta$ in the definition of the generalized Ewens measure. The partition function in this case is $\binom{n+\theta-1}{n}$. This measure first appeared in the study of population genetics [25]. The Ewens measure has found many practical applications, through its connection with Kingman's coalescent process [36] and its occurrence in non-parametric Bayesian statistics [5].
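For intuition, a permutation with the Ewens($\theta$) distribution can be generated by the classical Chinese-restaurant insertion process: element $k+1$ opens a new cycle with probability $\theta/(\theta+k)$, and otherwise is inserted after a uniformly chosen earlier element. Each new cycle contributes a factor $\theta$, recovering the weight $\theta^{C(\pi)}$. A minimal sketch (function names are ours):

```python
import random

def ewens_permutation(n, theta, rng=random.Random(1)):
    """Sample a permutation of {0,...,n-1} from the Ewens(theta) measure."""
    nxt = []  # nxt[i] = image of i under the permutation
    for k in range(n):
        if k == 0 or rng.random() < theta / (theta + k):
            nxt.append(k)          # k starts a new cycle (a fixed point for now)
        else:
            j = rng.randrange(k)   # insert k into an existing cycle, after j
            nxt.append(nxt[j])
            nxt[j] = k
    return nxt

def cycle_count(nxt):
    """Number of cycles C(pi) of the permutation i -> nxt[i]."""
    seen, cycles = [False] * len(nxt), 0
    for i in range(len(nxt)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = nxt[j]
    return cycles
```

The product of the insertion probabilities is $\theta^{C(\pi)}/(\theta(\theta+1)\cdots(\theta+n-1))$, which is exactly the Ewens probability of $\pi$.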
The similarity of Theorem 1.1 and Theorem 1.2 is most apparent for functions $\alpha$ with $\alpha(p) \approx \theta$. It suggests an analogy between permutations chosen according to the Ewens measure and integers chosen according to multiplicative weights. We now discuss previous work.

Erdős-Kac
In a series of works, Alladi [1, 2, 3, 4] proved a generalization of the Erdős-Kac theorem involving weights $\alpha$ as well. His proof uses the combinatorial sieve, and he requires $\alpha$ to satisfy a 'level-of-distribution' condition which is not always easily verified. A related (but simpler) sieve-theoretic approach to Erdős-Kac and its generalizations was introduced by Granville and Soundararajan [30]. This approach was used by Khan, Milinovich and Subedi to prove an Erdős-Kac theorem with weights given by $d_k$, the $k$-th divisor function [35].
See Elliott [20, 21] for a treatment of the Erdős-Kac theorem with weights given by the standard divisor function $d_2$ and its real powers. Tenenbaum proved in [53, Cor. 2.5] an impressively general weighted Erdős-Kac theorem, but unlike Theorem 1.1, he requires $\alpha(p)/p^d$ to be uniformly bounded. Both Elliott and Tenenbaum use characteristic functions and complex analysis, while we avoid these.

Billingsley
Arratia, Kochman and Miller proved an analogue of Billingsley's theorem for normed arithmetic semigroups satisfying certain growth conditions [7, Thm. 2]. A commutative semigroup $S$ is called a normed arithmetic semigroup if it contains an identity element and admits unique factorization into 'prime' elements. Furthermore, it should come equipped with a multiplicative norm function $s \mapsto |s| \in \mathbb{R}_{>0}$, such that $N(x) = \#\{s \in S : |s| \le x\}$ is finite for each $x > 0$. There is a small overlap between [7, Thm. 2] and the second part of Theorem 1.1, as there are multiplicative functions $\alpha$ satisfying (1.3)-(1.4) and coinciding with $\alpha_S(n) := \#\{s \in S : |s| = n\}$ for some normed arithmetic semigroup $S$.
The specific choice (1.8) simplifies the computations greatly. Maples, Nikeghbali and Zeindler [41, Cor. 1.2] were able to prove that $C(\pi_{n,\theta_i})$ converges, after an explicit normalization, to a normal distribution (for $\theta_n$ as in (1.8), and also for scalar multiples of it).
Next we describe a result of Ercolani and Ueltschi [22, Thm. 5.1] about a permutation statistic which we have yet to discuss, $L_1(\pi)$. This is the length of the cycle of a permutation $\pi$ which contains the element 1. In the Ewens case $\theta_i = \theta$, it is known that $L_1(\pi_{n,\theta_i})/n$ converges in distribution to a beta distribution [22, §6]. For polynomially-growing weights, Ercolani and Ueltschi proved that $L_1(\pi_{n,\theta_i})$ exhibits a very different behavior. First, the order of magnitude of $L_1(\pi_{n,\theta_i})$ in this case is $n^{1/(\gamma+1)} = o(n)$ and not $n$. Second, the limiting distribution is a gamma distribution, whose definition is recalled in §2.
Theorem 1.4 (Ercolani and Ueltschi). Let $\gamma > 0$, and take $\theta_n$ as in (1.8). Then, as $n \to \infty$, $L_1(\pi_{n,\theta_i})/n^{1/(\gamma+1)}$ converges in distribution to a gamma-distributed random variable.

See Dereich and Mörters [15] for finer results about $L_1(\pi_{n,\theta_i})$ for similar weights. We derive number-theoretic analogues of Theorems 1.3 and 1.4. Since there is no such thing as 'a prime divisor of $n$ containing a fixed element', we must turn to a different interpretation of $L_1(\pi)$.

Definition 1.5. Let $a = \{a_j\}_{j \ge 1}$ be a sequence of non-negative reals summing to $0 < S < \infty$. A size-biased sampling of an element from $a$ is a random variable $X$ whose distribution is given by
$$\mathbb{P}(X = a_j) = \frac{a_j}{S}, \qquad j \ge 1.$$

Suppose that $\mathbb{P}$ is some conjugation-invariant measure on $S_n$ (e.g. $\mathbb{P}_{n,\theta_i}$). If $\pi \in S_n$ is sampled according to $\mathbb{P}$, then the distribution of $L_1(\pi)$ coincides with the distribution of a typical cycle of $\pi$, that is, of a size-biased sampling of an element from $\{\ell_i(\pi)\}_{i \ge 1}$. See Lemma 2.3 below for the proof. It is now clear how to define an integer analogue of $L_1(\pi_{n,\theta_i})$: given $N_x$, we define $P_1(N_x)$ by letting $\log P_1(N_x)$ be a size-biased sampling of an element from $\{\log p_i(N_x)\}_{i \ge 1}$. We think of $P_1(N_x)$ as a typical prime divisor of $N_x$.
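The typical prime divisor $P_1(m)$ of a fixed integer $m$ can be simulated directly: list the prime factors with multiplicity and pick one with probability proportional to its logarithm. A minimal sketch (helper names are ours):

```python
import math
import random

def prime_factors_with_multiplicity(m):
    """Prime factors of m, repeated according to multiplicity."""
    out, p = [], 2
    while p * p <= m:
        while m % p == 0:
            out.append(p)
            m //= p
        p += 1
    if m > 1:
        out.append(m)
    return out

def typical_prime_divisor(m, rng=random.Random(0)):
    """Size-biased sample: prime p | m chosen with probability log p / log m."""
    primes = prime_factors_with_multiplicity(m)
    weights = [math.log(p) for p in primes]
    return rng.choices(primes, weights=weights, k=1)[0]
```

For $m = 12 = 2^2 \cdot 3$, the factor 2 is chosen with probability $2\log 2/\log 12$ and the factor 3 with probability $\log 3/\log 12$.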
In the integer setting, the polynomial weights (1.7) correspond to $\alpha(p) \approx K \log^\gamma p$.
Theorem 1.6. As $x \to \infty$ we have
$$\frac{\log P_1(N_x)}{(\log x)^{1/(\gamma+1)}} \xrightarrow{d} Y,$$
where $Y$ has the gamma$(\gamma+1, (K\Gamma(\gamma+1))^{1/(\gamma+1)})$ distribution, and
$$\mathbb{E}\,\omega(N_x) \asymp (\log x)^{\gamma/(\gamma+1)}.$$
The proof of the second part of Theorem 1.6 applies to $\omega(N_x)$ as well. It is interesting that although the measure $\mathbb{P}_{x,\alpha}$ assigns larger weights to larger primes, the typical prime factors of $N_x$ are much smaller than in the case of uniformly drawn integers between 1 and $x$. Indeed, it follows from the first part of Theorem 1.6 that $\log P_1(N_x) = o(\log x)$, while from Theorem 1.1 it follows that $\log P_1(N_x) = \Theta(\log x)$ for uniform integers.
The main ingredient in the proof of Theorem 1.6 is the asymptotics of $\sum_{n \le x} \alpha(n)$ for multiplicative $\alpha$ obeying (1.9)-(1.10). This unusual sum was studied by Schwarz [50] and Marenich [42]. As they both appeal to the same Tauberian theorem [34, Thm. 1], no error term is obtained. We prove the following estimate.

Conventions
In the arguments below, we think of the function $\alpha$ as fixed, and write $N_x$ for $N_{x,\alpha}$. We denote the set of prime numbers by $\mathcal{P}$, and reserve the letter $p$ for primes. The letters $C$ and $c$ always denote positive constants, which may vary from line to line. Unless otherwise stated, $C$ and $c$ depend only on the arithmetic function $\alpha$ considered. The arguments in the proofs always hold for sufficiently large $x$. The notation $A \gg 1$ indicates that $A$ is sufficiently large.
The second author was supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreements nos 786758 and 851318).We thank the anonymous referee for useful comments and suggestions.

Preliminaries from probability theory
We denote by beta$(\alpha, \beta)$ the beta distribution with shape parameters $\alpha$ and $\beta$, whose density with respect to Lebesgue measure on $[0,1]$ is given by
$$f(t) = \frac{t^{\alpha-1}(1-t)^{\beta-1}}{B(\alpha,\beta)}, \qquad B(\alpha,\beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}.$$
We denote by gamma$(\alpha, \beta)$ the gamma distribution with parameters $\alpha$ and $\beta$, whose density with respect to Lebesgue measure on $[0,\infty)$ is given by
$$f(t) = \frac{\beta^{\alpha} t^{\alpha-1} e^{-\beta t}}{\Gamma(\alpha)}.$$
We define the Poisson-Dirichlet distribution with parameter $\theta$, denoted $\mathrm{PD}(\theta)$, as follows. Let $Y_1, Y_2, \dots$ be an i.i.d. sequence of random variables with beta$(1,\theta)$ distribution. Define the sequence
$$Z_1 = Y_1, \qquad Z_j = (1-Y_1)(1-Y_2)\cdots(1-Y_{j-1})\,Y_j \quad (j \ge 2).$$
Intuitively, $Z_1$ takes a beta$(1,\theta)$-distributed fraction of the unit interval. Conditioned on $Z_1$, $Z_2$ takes a beta$(1,\theta)$-distributed fraction of the remaining part of the interval, and so on. Finally, the $\mathrm{PD}(\theta)$ distribution is defined to be the distribution of the sequence $(Z_j)_{j \ge 1}$ arranged in non-increasing order.
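The stick-breaking construction just described is easy to simulate, truncating at finitely many sticks (the truncation level is ours; the leftover mass after many sticks is negligible):

```python
import random

def poisson_dirichlet(theta, n_sticks=1000, rng=random.Random(0)):
    """Approximate PD(theta) sample via stick-breaking with beta(1, theta) fractions."""
    remaining, pieces = 1.0, []
    for _ in range(n_sticks):
        y = rng.betavariate(1.0, theta)  # Y_j ~ beta(1, theta)
        pieces.append(remaining * y)     # Z_j = (1-Y_1)...(1-Y_{j-1}) Y_j
        remaining *= 1.0 - y
    return sorted(pieces, reverse=True)  # PD(theta): non-increasing rearrangement
```

The pieces before rearrangement are the sequence $(Z_j)$; sorting them in non-increasing order gives (a truncation of) a PD($\theta$) sample, with total mass $1 - \prod_j (1-Y_j)$, which is exponentially close to 1.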
In the proof of Theorem 1.1, we establish the convergence in distribution to $\mathrm{PD}(\theta)$ via convergence of a certain sequence to a sequence of independent beta$(1,\theta)$ random variables. To this end we define the size-biased permutation of a sequence of random variables. Let $X_1, X_2, \dots$ be a non-increasing sequence of random variables such that $\sum_{j=1}^{\infty} X_j = 1$ almost surely.
A size-biased permutation $(\tilde X_i)_i$ of the sequence $(X_i)_i$ is a random reordering of the elements of the sequence such that for any $j \ge 1$,
$$\mathbb{P}\big(\tilde X_j = X_i \,\big|\, \tilde X_1, \dots, \tilde X_{j-1}\big) = \frac{X_i}{1 - \tilde X_1 - \dots - \tilde X_{j-1}}$$
for each index $i$ not chosen previously. The size-biased permutation can be used to reconstruct the beta$(1,\theta)$ random variables from the $\mathrm{PD}(\theta)$ distribution in the following sense.
Proposition 2.1. Let $X_1, X_2, \dots$ be a non-increasing sequence of random variables with $\sum_{j=1}^{\infty} X_j = 1$ almost surely. Then the sequence $(X_1, X_2, \dots)$ has the $\mathrm{PD}(\theta)$ distribution if and only if its size-biased permutation $(\tilde X_j)_j$ has the same distribution as the stick-breaking sequence $(Z_j)_{j \ge 1}$ constructed from i.i.d. beta$(1,\theta)$ random variables. For a discussion and references for Proposition 2.1, see the introduction of [48]. We use the following result in order to prove the convergence to $\mathrm{PD}(\theta)$ in Theorem 1.1.
where $Y_1, Y_2, \dots$ is a sequence of i.i.d. beta$(1,\theta)$ random variables. Then, we have (2.2). Let $r$ be the function that takes a sequence and returns the same sequence in non-increasing order. Since $r \circ g$ is continuous in the product topology on $[0,1]^{\mathbb{N}}$, we have by (2.1) that $(r \circ g)$ of the given sequence converges in distribution, yielding (2.3). By the definition of the Poisson-Dirichlet distribution, $r$ applied to the stick-breaking sequence has the $\mathrm{PD}(\theta)$ distribution. Thus, (2.3) simplifies to (2.2), as needed.
Lemma 2.3. Let $\mathbb{P}$ be a conjugation-invariant measure on $S_n$. Given $\pi \in S_n$, let $L_1(\pi)$ be the size of the cycle containing 1, and let $\mathrm{Typ}(\pi)$ be a random variable which is a size-biased sampling of a cycle of $\pi$, according to Definition 1.5. Then $L_1(\pi)$ and $\mathrm{Typ}(\pi)$ have the same distribution, where $\pi$ is a permutation drawn according to $\mathbb{P}$.
Proof. By conjugation-invariance, for any $k \in \mathbb{N}$ we have $\mathbb{P}(L_1(\pi) = k) = \mathbb{P}(L_i(\pi) = k)$ for every $i$, where $L_i(\pi)$ is the size of the cycle of $\pi$ containing $i$. Letting $U_n$ be a uniformly drawn integer from $\{1, 2, \dots, n\}$, we have just shown that $L_1(\pi)$ and $L_{U_n}(\pi)$ have the same distribution. The size of the cycle of $\pi$ which contains $U_n$ is a size-biased sampling of a cycle of $\pi$, which concludes the proof.
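The size-biased permutation from §2 can itself be simulated by repeated size-biased sampling without replacement; a minimal sketch (function name is ours):

```python
import random

def size_biased_permutation(xs, rng=random.Random(0)):
    """Return a size-biased random reordering of the non-negative sequence xs."""
    xs = list(xs)
    out = []
    while xs:
        total = sum(xs)
        u, acc, i = rng.random() * total, 0.0, 0
        while acc + xs[i] < u:  # pick index i with probability xs[i] / total
            acc += xs[i]
            i += 1
        out.append(xs.pop(i))   # remove the chosen mass and repeat
    return out
```

At each step an element is chosen with probability proportional to its mass among the elements not yet chosen, exactly as in the defining conditional probabilities of $(\tilde X_j)_j$.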
3.2 Proof of second part of Theorem 1.1
Proof. By the multiplicativity of $\alpha$, for the sum on the right-hand side we have the naive lower bound (3.12). The first sum on the right-hand side of (3.12) can be estimated by applying Corollary 3.2 twice, and the second sum can be bounded from above by (3.3). We thus obtain the claim for sufficiently large $x$, as needed. Here we made use of $\alpha(p_i)/p_i^{d+1} \ll p_i^{-1/2}$ (by (1.4)) and $p_i \ge x^\delta$.

Conclusion of proof
From Corollary 3.2 it follows that for any given $\varepsilon \in (0,1)$, the probability that $\log N_x \le (1-\varepsilon)\log x$ tends to 0. Hence, $\log N_x/\log x$ tends in distribution to 1. Thus, it suffices to prove that
$$\left( \frac{\log p_1(N_x)}{\log x}, \frac{\log p_2(N_x)}{\log x}, \dots \right) \xrightarrow{d} \mathrm{PD}(\theta)$$
as $x \to \infty$. By Proposition 2.2, it suffices to prove the following proposition.
Proposition 3.6. We have, as $x \to \infty$,
$$(\tilde X_1, \tilde X_2, \dots) \xrightarrow{d} \big(Y_1, (1-Y_1)Y_2, (1-Y_1)(1-Y_2)Y_3, \dots\big),$$
where $Y_1, Y_2, \dots$ is a sequence of i.i.d. beta$(1,\theta)$ random variables and where $(\tilde X_j)_{j=1}^{\infty}$ is a size-biased permutation (defined in §2) of the sequence $(\log p_j(N_x)/\log x)_{j \ge 1}$.

Remark 3.7. To connect this proposition with number-theoretic terms, we introduce a sequence $P_j$ of 'typical prime divisors' of $N_x$, defined by $P_j = N_x^{\tilde X_j}$. The asymptotic behavior of these typical primes might be of independent interest. It follows from the proposition that the $P_j$ for $j \ge 1$ satisfy the corresponding limit law.

Proof. Fix $k \ge 1$ and $0 < a_j < b_j < 1$ for any $1 \le j \le k$. By the portmanteau lemma it suffices to show the lim inf bound (3.13). We have (3.14), where the inner sum is over sequences of $k$ primes $p_1, \dots, p_k$ satisfying the stated constraints for any $1 \le j \le k+1$. By the definition of a size-biased permutation, when $p_1, \dots, p_k$ are distinct and $p_j \| n$, we have the corresponding lower bound on the conditional probability, for any $n \le x$. Thus, changing the order of summation in (3.14) we obtain (3.15). Next, we would like to use Lemma 3.5 in order to lower bound the inner sum in (3.15). We have $x_j = p_j x_{j+1} \le (\delta x_j)^{b_j} x_{j+1} \le x_j^{b_j} x_{j+1}$, and therefore the assumptions of Lemma 3.5 hold for any $1 \le j \le k$; using also Corollary 3.2, we obtain (3.16) for sufficiently large $x$. Consider the innermost sum. We define the function $g$ and apply Lemma 3.4 with $\alpha(n)/n^d$ and this $g$, where in the last equality we performed the change of variables $y = \log t/\log x_k$. Substituting the last estimate into (3.16), we get the same expression with $k$ replaced by $k-1$. Thus, for sufficiently large $x$ depending on $\delta$, the required lower bound on the lim inf holds. Since $\delta$ is arbitrary, it follows that (3.13) holds, as needed.
3.3 Proof of first part of Theorem 1.1

Preparatory results
We need the following results from probability, which are given as Remarks 1, 2 and 3 in [11].
Lemma 3.8. Let $D_x$, $E_x$ be random variables defined for any $x \gg 1$.
Recall that $\mathcal{P}$ is the set of primes.

Lemma 3.9. We have $\mathbb{E}\,|\Omega(N_x) - \omega(N_x)| = O(1)$ as $x \to \infty$.

Proof. Consider the multiplicative function $\tilde\alpha(n) := \alpha(n)\,t^{\Omega(n)-\omega(n)}$, where we choose $1 < t < 2/r$, so that $\tilde\alpha$ still satisfies (1.3)-(1.4) with the same parameters, except $r$ replaced with $rt$. Applying Corollary 3.2 with $\alpha$ and $\tilde\alpha$, we obtain that $\mathbb{E}\,t^{\Omega(N_x)-\omega(N_x)} = O(1)$, which, by Jensen's inequality for instance, implies that $\mathbb{E}\,|\Omega(N_x) - \omega(N_x)|$ is bounded as $x \to \infty$.
For $x \gg 1$, we define the subset
$$P_x := \mathcal{P} \cap \big[\log^4 x,\ \exp\big(\exp(\log\log x - \log^{1/3}\log x)\big)\big].$$
For each prime $p \gg 1$, define a Bernoulli random variable $X_p$ such that $\mathbb{P}(X_p = 1) = \alpha(p)/p^{d+1}$ (for $p \gg 1$, this is in $[0,1]$), and such that the different $X_p$ are independent. Define $\sigma_x$ by
$$\sigma_x^2 := \mathrm{Var}\Big(\sum_{p \in P_x} X_p\Big) = \sum_{p \in P_x} \frac{\alpha(p)}{p^{d+1}}\Big(1 - \frac{\alpha(p)}{p^{d+1}}\Big).$$
We define the random variables $A_x$, $B_x$ and $C_x$ used in the remainder of the proof.

Lemma 3.10. We have $\sum_{p \in P_x} \alpha(p)/p^{d+1} = \theta\log\log x + O(\log^{1/3}\log x)$, and the same estimate holds for $\sigma_x^2$.

Proof. Applying Lemma 3.4 with $\alpha(p)/p^d$ in place of $\alpha$, $g(t) = 1/t$ and the interval $[\log^4 x, \exp(\exp(\log\log x - \log^{1/3}\log x))]$, we find that $\sum_{p \in P_x} \alpha(p)/p^{d+1} = \theta\log\log x + O(\log^{1/3}\log x)$. Since $\sum_{p \in \mathcal{P}} \alpha^2(p)/p^{2(d+1)} = O(1)$, the estimate for $\sigma_x^2$ follows from the first estimate.
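For illustration, the independent model $\sum_p X_p$ is easy to simulate. A minimal sketch using the illustrative success probabilities $\min(\theta/p, 1)$ (corresponding to $\alpha(p) = \theta p^d$) and, for simplicity, all primes up to $x$ rather than the precise set $P_x$:

```python
import random

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes p <= n."""
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, n + 1, i):
                is_p[j] = False
    return [i for i in range(2, n + 1) if is_p[i]]

def bernoulli_model_sum(x, theta=1.0, rng=random.Random(0)):
    """One sample of sum_p X_p with P(X_p = 1) = min(theta/p, 1), over p <= x."""
    return sum(1 for p in primes_up_to(x) if rng.random() < min(theta / p, 1.0))
```

By the lemma above (with the actual range $P_x$), the mean of this sum is $\theta\log\log x + O(\log^{1/3}\log x)$, which grows very slowly; even for astronomically large $x$ the model has only a handful of successes on average.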
Lemma 3.11. For each integer $k \ge 1$, we have $\sup_{x \gg 1} \mathbb{E}\,|C_x|^k < \infty$.

Proof. Let $Y_p := X_p - \alpha(p)/p^{d+1}$. We have the expansion (3.17), with mixed moments $S(l_1, \dots, l_m)$. As $\mathbb{E}\,Y_p = 0$, it follows that $S(l_1, \dots, l_m)$ vanishes if $l_i = 1$ for some $i$, and so we may restrict the summation in (3.17) accordingly.

Conclusion of proof
By Lemma 3.9 and the first part of Lemma 3.8 with $D_x = 1$ and $E_x = (\Omega(N_x) - \omega(N_x))/\sqrt{\theta\log\log x}$, it follows that (1.5) holds if and only if $A_x \xrightarrow{d} N(0,1)$. Now let $D_x = \sqrt{\theta\log\log x}/\sigma_x$. We have $D_x \xrightarrow{d} 1$. Moreover, in the sum in the second fraction in (3.23), there can be at most one non-zero term with $p$ greater than $\sqrt{x}$, and so we may use Lemma 3.10 and (3.4) to obtain that the remaining expression tends to 0 by Lemma 3.4.

Let $\alpha \colon \mathbb{N} \to \mathbb{R}_{\ge 0}$ be a multiplicative function satisfying (4.1), and define the Dirichlet series $F(s) = \sum_{n \ge 1} \alpha(n)/n^s$ of $\alpha$. For $\Re s > 1$ we have $F(s) = \varphi(s)e^{K \cdot G(s)}$, where $\varphi$ is differentiable, bounded and has bounded derivative on $\Re s \ge 1$.
Proof. By multiplicativity of $\alpha$, we may write $F(s)$ as an Euler product with local error factors $E_p(s)$. Using (4.1) and the triangle inequality, we obtain $|E_p(s)| \le C/(p\log^2 p)$ for $\Re s \ge 1$. We turn to bound the derivative of $E_p$. Let $p_0 > 0$ be such that for $p \ge p_0$ we have $|E_p(s)| \le 1/2$ for any $s$ with $\Re s \ge 1$. We factor $F$ accordingly as $\psi_1\psi_2$, where $\log z$ is the principal branch of the logarithm. The function $\psi_1$ is trivially differentiable, bounded and has bounded derivative on $\Re s \ge 1$. As for $\psi_2$, it follows that $\psi_2$ is also differentiable, bounded and has bounded derivative on $\Re s \ge 1$. Thus, the same is true for $\varphi := \psi_1\psi_2$, as needed.
Lemma 4.2. Let $\alpha \colon \mathbb{N} \to \mathbb{R}_{\ge 0}$ be a multiplicative function satisfying (1.10). Then $\mathbb{E}\,\Omega(N_x) - \mathbb{E}\,\omega(N_x) = O(1)$.

Proof. Write $\omega(N_x)$ as $\sum_{p \le x} 1_{p \mid N_x}$ and $\Omega(N_x)$ as $\sum_{p \le x}\sum_{k \ge 1} 1_{p^k \mid N_x}$, and bound the difference termwise.

Fix $\gamma > 0$ and define the Dirichlet series
$$G(s) := \sum_{p \in \mathcal{P}} \frac{\log^\gamma p}{p^s}.$$

Lemma 4.3. Fix a non-negative integer $k$. We have
$$\sum_{p \in \mathcal{P}} \frac{\log^{\gamma+k} p}{p^s} = \frac{\Gamma(\gamma+k)}{(s-1)^{\gamma+k}} + B_k + o(1) \qquad (4.2)$$
for $\Re s > 1$ as $s \to 1$, where $B_k$ is a real constant depending on $\gamma$ and $k$. Here $(s-1)^\gamma = \exp(\gamma\log(s-1))$ is defined using the principal branch of the logarithm.
Proof. We start with the case $k = 0$. Let $E(t) = (\sum_{p \le t}\log p) - t$ be the error term in the prime number theorem. Using integration by parts we obtain, for $\Re s > 1$, a main term together with remainders $\psi_1$ and $\psi_2$, as in (4.3). When $\Re s \ge 1$ we may bound $\psi_1$ using the known result that $E(t) \ll te^{-c\sqrt{\log t}}$, and $\psi_1'$ may be bounded similarly, so both are bounded on $\Re s \ge 1$. We turn to estimate $\psi_2$. Using integration by parts we obtain (4.5) for $\Re s > 1$. In order to compute the integral on the right-hand side of (4.5), we perform the change of variables $w = (s-1)\log t$, obtaining (4.6). Letting $C_R$ be the circular contour from $R$ to $R\frac{s-1}{|s-1|}$, we compute that $\lim_{R\to\infty}\int_{C_R} w^{\gamma-1}e^{-w}\,dw = 0$ when $\Re s > 1$. Thus, we may deform the contour $\{(s-1)w : w \ge 0\}$ in (4.6) to the positive real line, obtaining (4.7). Substituting (4.7) into (4.5), and then substituting (4.4) and (4.5) into (4.3), we obtain (4.2) with $k = 0$. Next we consider the case $k \ge 1$. Repeating the above arguments with $\gamma$ replaced by $\gamma + k$ gives the desired result for any $k \ge 1$.
The following lemma bounds $\Re(G(\sigma + it))$ when $t$ is not too large.
Lemma 4.4. There exists $c > 0$ with the following property. For $\sigma > 1$ sufficiently close to 1, and for any $t \in \mathbb{R}$ with $1 \le |t| \le e^{1/(\sigma-1)}$, we have
$$\Re\big(G(\sigma+it)\big) \le (1-c)\,G(\sigma).$$
To prove Lemma 4.4 we use bounds on primes in short intervals. Information of the form we need is already found in a work of Hoheisel [33]. For ease of presentation, we use a stronger result.

Proof of Lemma 4.4. As $\Re(G(\sigma+it))$ is an even function of $t$, we may assume that $t > 0$. Consider the set of integers $M$. Clearly, for $\sigma$ close enough to 1 we have $|M| \ge t/(\sigma-1)$. For any $n \in M$ we define the interval $A_n$. For any $n \in M$ and $p \in A_n$ we have $\cos(t\log p) \le 0$, and therefore the contribution of such primes to $\Re(G(\sigma+it))$ is non-positive. Since $\log^\gamma p/p^\sigma$ is decreasing for sufficiently large $p$ (independently of $\sigma \ge 1$), and as $\min_{n \in M}\min A_n \to \infty$ as $\sigma \to 1^+$ (uniformly in $t \ge 1$), we get that, when $\sigma$ is close enough to 1, (4.10) holds, where in the second inequality we used (4.8). Indeed, the conditions of Theorem 4.5 hold for $[e^{-\pi/t}x_n, x_n]$, as for any $n \in M$ the length of this interval is at least $cx_n^{0.9}$. Summing (4.10) over $n \in M$, we get (4.11), where in the last inequality we used Lemma 4.3 with $k = 0$. From (4.9) and (4.11) we obtain the desired bound.
It turns out that when $|t| \ge e^{1/(\sigma-1)}$, the result of Lemma 4.4 does not hold, and $\Re(G(\sigma+it))$ might be as large as $G(\sigma)$. We shall show that the reals $t \in \mathbb{R}$ for which $\Re(G(\sigma+it))$ is as large as $G(\sigma)$ are quite rare. More precisely, we will show in the following lemma that if $t_1, t_2$ are such that $\Re(G(\sigma+it_j))$ is large, then the same holds for $t_1 - t_2$, and therefore, by Lemma 4.4, $t_1$ and $t_2$ must be far away from each other.

Lemma 4.6. Let $\{a_n\}_{n \ge 1}$ be a sequence of non-negative reals with $\sum_{n=1}^{\infty} a_n < \infty$. Consider the function
$$f(t) = \sum_{n=1}^{\infty} a_n\cos(t\log n), \qquad t \in \mathbb{R}.$$
For any $0 < \varepsilon < 1$ and any $t_1, t_2 \in \mathbb{R}$ for which $f(t_1)$ and $f(t_2)$ are within an $\varepsilon$-fraction of $f(0)$, the same (with $\varepsilon$ replaced by $8\sqrt{\varepsilon}$) holds for $t_1 - t_2$.

Proof. Let $0 < \varepsilon < 1$ and define the sets $A_{t_1}, A_{t_2}$. By the assumption on $t_1$, (4.12) holds. The same argument shows that (4.12) holds for $t_2$ in place of $t_1$ as well. Therefore, by the union bound, $A_{t_1} \cap A_{t_2}$ carries most of the mass. Now, for any $n \in A_{t_1} \cap A_{t_2}$ and $i = 1, 2$, we have (4.13). Thus, for any such $n$, (4.14) holds. From (4.13) and (4.14) we obtain the claim.

Fix $K > 0$. The function $G'(s)$ is monotone-increasing for real $s > 1$, with $\lim_{s\to\infty} G'(s) = 0$ and $\lim_{s\to 1^+} G'(s) = -\infty$. For $x > 1$, we let $\sigma = \sigma_x$ be the unique real solution to
$$K\,G'(\sigma) = -\log x. \qquad (4.15)$$
The point $\sigma$ plays the role of the saddle point in the proof of Theorem 1.7. The following is a corollary of Lemma 4.3.
Corollary 4.7. As $x \to \infty$ we have
$$\sigma_x - 1 \sim \Big(\frac{K\Gamma(\gamma+1)}{\log x}\Big)^{1/(\gamma+1)},$$
together with the corresponding asymptotics for the derivatives of $G$ at $\sigma_x$ in powers of $\log x$.

4.1 Proof of Theorem 1.7
If we replace $x$ with $\lfloor x\rfloor + \frac{1}{2}$ in (1.11), then the left-hand side does not change, while the function on the right-hand side changes by a factor of $1 + O(1/x)$, which can be absorbed into the relative error term. Thus, in proving Theorem 1.7 we may assume without loss of generality that $x$ is of the form $m + \frac{1}{2}$ for some positive integer $m$ (i.e. a half-integer). We denote by $F(s)$ the Dirichlet series of $\alpha$, which by Lemma 4.1 is of the form $F(s) = \varphi(s)e^{K\cdot G(s)}$. (We have used the fact that $x$ is a half-integer, so that $|\log(x/n)| \ge C/x$.) Set $t_x$ for some sufficiently small $\delta > 0$. We decompose the integral on the right-hand side of (4.17) into three ranges and denote by $I_1$, $I_2$, $I_3$ the integrals over these respective domains.

We begin by computing the asymptotics for $I_1$, which gives the main term. When $|t| \le t_x$ we have, by Corollary 4.7 and since $\varphi$ has bounded derivative, a first-order approximation of $\varphi(\sigma+it)$. A second-order Taylor approximation of $G(\sigma+it)$ around $t = 0$ then applies, where $z := \min\{\gamma-\delta, 2\}/(2(\gamma+1))$. We thus obtain an expression which, by Corollary 4.7, can be simplified to (4.21), where $B$ is defined in (1.12) and $A_\alpha$ is defined in (1.13).

Next we bound $I_2$. Using Lemma 4.3 with $s = \sigma + it$, where $t_x \le |t| \le 1$, a second-order Taylor approximation shows that the integrand is small; the second inequality holds for sufficiently small $\delta$ and follows from Corollary 4.7. Thus we obtain the bound (4.22) on $I_2$.

We now show that the contribution from $I_3$ is negligible as well. Fix $\varepsilon \in (0,1)$ such that $8\sqrt{\varepsilon}$ is strictly less than the constant $c$ from Lemma 4.4. Consider the set $S$. We have, by the definition of $S$, a bound on the integral over the complement of $S$, where in the last inequality we used Corollary 4.7. We now study the integral over $S$. Applying Lemma 4.6 with the sequence $a_n := 1_{\{n\text{ is prime}\}}\log^\gamma n/n^\sigma$ (and $a_0 = 0$), we find that for any $t_1, t_2 \in S$, $\Re(G(\sigma + i(t_1-t_2)))$ is large as well; therefore, by Lemma 4.4 and the choice of $\varepsilon$, any two points of $S$ are either close together or separated by at least $e^{1/(\sigma-1)}$. Thus, for sufficiently large $x$, the integral over $S$ is negligible. Combining the estimates for the integrals over $[1, x^2] \setminus S$ and $[1, x^2] \cap S$, we obtain (4.23). We conclude the proof by plugging the estimates (4.18), (4.21), (4.22) and (4.23) into (4.17).
4.2 Proof of Theorem 1.6

Auxiliary results
An important step in the proof is understanding the asymptotic behavior of $\mathbb{P}(p \mid N_x)$. We shall see that $\mathbb{P}(p \mid N_x) \approx \alpha(p)S(x/p)/S(x)$, and so we begin by studying the ratio $S(x/h)/S(x)$. Observe that for $x \ge 1$, $h \ge 1$, (4.24) holds by Theorem 1.7.
Proof. Let $h_x := \exp((\log x)^{(\gamma+4)/(4\gamma+4)})$. Suppose that $h \le h_x$. By a first-order Taylor approximation, we get an estimate with error $O(1/(\log x)^c)$. Thus, by Theorem 1.7 applied with $x$ and $x/h$, we obtain the first part of the lemma. We turn to prove the second part of the lemma, using the first part and (4.24).

Conclusion of proof
We begin with the first part of the theorem. We abbreviate $P_1(N_x)$ as $P_1$. We must show (4.26), where $Y$ has the gamma$(\gamma+1, (K\Gamma(\gamma+1))^{1/(\gamma+1)})$ distribution. For any prime $p$ with $\log p$ in the relevant range determined by $a(\log x)^{1/(\gamma+1)}$, we estimate the probability as in (4.27). A change of variables in the last integral shows that it equals $\mathbb{P}(a \le Y \le b)$. Taking $x$ to infinity in (4.27), we obtain (4.26), as needed.
We turn to the second part of the theorem. By Lemma 4.9 and (1.9) we have an expression whose error term is bounded by a constant. In order to estimate the sum, we split it into three sums $S_1$, $S_2$ and $S_3$, over the respective ranges $p < \exp((\log x)^{\delta_1})$, $\exp((\log x)^{\delta_1}) \le p \le \exp((\log x)^{\delta_2})$ and $\exp((\log x)^{\delta_2}) < p \le x$, where $\delta_1 = 1/(2(\gamma+1))$ and $\delta_2 = (\gamma+4)/(4\gamma+4)$. We bound $S_1$ using (4.24), where in the last inequality we used Lemma 3.4 with $\alpha \equiv 1$ and $g(t) = \log^\gamma t/t$. We bound $S_3$ using the second part of Lemma 4.8.
Since all the accumulated error terms are of order smaller than $(\log x)^{\gamma/(\gamma+1)}$, this establishes the estimate for the expectation of $\omega$. The expectation of $\Omega$ behaves in the same way by Lemma 4.2.

it follows by Lemma 3.10 and the first part of Lemma 3.8 that $A_x \xrightarrow{d} N(0,1)$ if and only if $B_x \xrightarrow{d} N(0,1)$. By the second part of Lemma 3.8, to establish $B_x \xrightarrow{d} N(0,1)$ it suffices to show that $\mathbb{E}\,B_x^k \to \mathbb{E}\,X^k$ for each $k$, where $X \sim N(0,1)$. By Lemma 3.13 this is equivalent to $\mathbb{E}\,C_x^k \to \mathbb{E}\,X^k$. Since the random variables $X_p - \alpha(p)/p^{d+1}$ are uniformly bounded as $p$ varies, and since the denominator of $C_x$ tends to infinity by Lemma 3.10, we have $C_x \xrightarrow{d} N(0,1)$ by the Lindeberg-Feller theorem (also known as the central limit theorem for triangular arrays) [47, Thm. 4.7]. Thus, the moments of $C_x$ converge to those of $X$ by Lemma 3.11 and the last part of Lemma 3.8.

Polynomially-growing weights

Lemma 4.1. Fix a function $f \colon \mathcal{P} \to \mathbb{R}_{>0}$ on primes such that $\log f(p) = o(\log p)$, and let $F$ be the associated Dirichlet series. Then for $\Re s > 1$, $F(s) = \varphi(s)e^{K\cdot G(s)}$, where $\varphi$ is differentiable, bounded and has bounded derivative on $\Re s \ge 1$.

By an effective version of Perron's formula [52, Thm. II.2.3] we have (4.17). For $|t| \le t_x$, (4.20) holds, where in the first equality we used the fact that $|G'''(\sigma+it)| \le |G'''(\sigma)|$, and in the second equality we used Corollary 4.7 and the definition of $\sigma$ in (4.15). From (4.19) and (4.20), we get the main-term estimate; and when $h \ge h_x$, we get the second part of Lemma 4.8.