Large prime gaps and probabilistic models

We introduce a new probabilistic model of the primes consisting of integers that survive the sieving process when a random residue class is selected for every prime modulus below a specific bound. From a rigorous analysis of this model, we obtain heuristic upper and lower bounds for the size of the largest prime gap in the interval $[1,x]$. Our results are stated in terms of the extremal bounds in the interval sieve problem. The same methods also allow us to rigorously relate the validity of the Hardy-Littlewood conjectures for an arbitrary set (such as the actual primes) to lower bounds for the largest gaps within that set.


Introduction
In this paper, we introduce a new probabilistic model R ⊂ N for the primes P := {2, 3, 5, ...} which can be analyzed rigorously to make a variety of heuristic predictions. In contrast to the well-known prime model C of Cramér [6] and the subsequent refinement G of Granville [16], in which random sets are formed by including positive integers with specific probabilities, the model R proposed here is comprised of integers that survive the sieve when a random residue class is selected for every prime modulus below a specific bound. We determine the asymptotic behavior of the largest gap function G_R(x) for the set R, where for any subset A ⊂ N, G_A(x) denotes the largest gap between consecutive elements of A in [1, x]. We conjecture that the primes P have similar behavior. Our bounds, given in Theorem 1.1 below, are stated in terms of the extremal bounds in the interval sieve problem.
At present, the strongest unconditional lower bound on G_P(x) is due to Ford, Green, Konyagin, Maynard, and Tao [11], who have shown that G_P(x) ≫ (log x · log_2 x · log_4 x)/log_3 x for sufficiently large x, where log_k x denotes the k-fold iterated natural logarithm of x. The strongest unconditional upper bound is G_P(x) ≪ x^{0.525}, a result due to Baker, Harman, and Pintz [2]. Assuming the Riemann Hypothesis, Cramér [5] showed that G_P(x) ≪ x^{1/2} log x.
1.1. Cramér's random model. In 1936, Cramér [6] introduced a probabilistic model C of primes, where each natural number n ≥ 3 is selected for inclusion in C with probability 1/log n, the events n ∈ C being jointly independent in n. By Hoeffding's inequality (or Lemma 3.3 below), for any fixed ε > 0 one has |C ∩ [1, x]| = ∫_2^x dt/log t + O(x^{1/2+ε})  (1.1) with probability one. The analogous statement for primes is equivalent to the Riemann Hypothesis. Cramér [6] also proved that lim sup_{x→∞} G_C(x)/log^2 x = 1 almost surely, and remarked: "Obviously we may take this as a suggestion that, for the particular sequence of ordinary prime numbers p_n, some similar relation may hold." Later, Shanks [40] conjectured the stronger bound G_P(x) ~ log^2 x, also based on the analysis of a random model very similar to Cramér's. This is a natural conjecture in light of the fact that G_C(x) ~ log^2 x  (1.2) holds with probability one (although (1.2) does not appear to have been observed before). In the literature, the statements G_P(x) = O(log^2 x) and G_P(x) ≪ log^2 x are sometimes referred to as "Cramér's conjecture." Several authors have made refined conjectures; e.g., Cadwell [4] suggested that G_P(x) is well approximated by (log x)(log x − log_2 x), a conjecture strongly supported by numerical calculations of gaps. We refer the reader to Granville [16] or Soundararajan [41] for additional information about the Cramér model and subsequent developments.
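Cramér's limit theorem above is easy to probe empirically. The following is a minimal sketch (not from the paper; the height 10^6 and the seed are arbitrary choices): it samples C up to 10^6 and compares the largest gap with log^2 x. At this modest height the ratio typically lands somewhat below 1, mirroring the slow convergence in the lim sup.

```python
import math
import random

def cramer_model(N, seed=0):
    """Sample Cramér's random set C: each n >= 3 is included
    independently with probability 1/log n."""
    rng = random.Random(seed)
    return [n for n in range(3, N + 1) if rng.random() < 1.0 / math.log(n)]

def largest_gap(elements):
    """G_A(x): largest gap between consecutive elements of the sorted list."""
    return max(b - a for a, b in zip(elements, elements[1:]))

N = 10**6
C = cramer_model(N)
ratio = largest_gap(C) / math.log(N) ** 2
print(f"G_C({N}) / log^2({N}) = {ratio:.3f}")
```

Re-running with different seeds shows the fluctuation in G_C(x) that the almost-sure statement averages out.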
Tables of prime gaps have been computed up to 10^18 and beyond (see [35]); thus sup_{x ≤ 10^18} G_P(x)/log^2 x ≈ 0.9206, a consequence of the gap of size 1132 following the prime 1693182318746371. See also Figure 1 for a plot of G_P(x) versus various approximations. On the other hand, for any fixed finite set H of integers, Cramér's model satisfies |{n ≤ x : n + h ∈ C for all h ∈ H}| ~ x/log^{|H|} x with probability one, whereas the analogous assertion for prime numbers is false in general (for example, there is no integer n such that n + h is prime for all h ∈ {0, 1, 2}). The reason for the disparity is simple: for any prime p, every prime other than p must lie in one of the residue classes {1, ..., p − 1} modulo p (we refer to this as the bias of the primes modulo p), whereas C is equidistributed over all residue classes modulo p.
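For comparison with the table-based value quoted above, one can recompute the same ratio at a smaller height. The sketch below (illustrative only, not part of the paper's arguments) sieves the primes up to 10^6 and reports the largest gap and its ratio to log^2 x.

```python
import math

def max_prime_gap(N):
    """Largest gap between consecutive primes up to N,
    computed with the sieve of Eratosthenes."""
    sieve = bytearray([1]) * (N + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, N + 1, p)))
    primes = [n for n in range(N + 1) if sieve[n]]
    return max(q - p for p, q in zip(primes, primes[1:]))

N = 10**6
g = max_prime_gap(N)
print(f"G_P({N}) = {g}, ratio to log^2 x: {g / math.log(N) ** 2:.4f}")
```

The largest gap below 10^6 is 114 (following the prime 492113), giving a ratio near 0.6; the ratio creeps upward with x, consistent with the value ≈ 0.9206 at 10^18.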
See Pintz [36] and Section 2.5 below for further discussion of flaws in the Cramér model.
1.2. Granville's random model. To correct this flaw in the Cramér model C, Granville [16] altered the model, constructing a random set G as follows. For each interval (x, 2x] (with x a power of two, say), let A be a parameter satisfying A = log^{1−o(1)} x as x → ∞, and put Q := ∏_{p ≤ A} p. Discard those n for which (n, Q) > 1, and select for inclusion in G each of the remaining integers n ∈ (x, 2x] with probability Q/(φ(Q) log n), where φ is the Euler totient function, the events n ∈ G being jointly independent in n. Since φ(Q)/Q is the density in Z of the set of integers coprime to Q, this model captures the correct global distribution of primes; that is, an analog of (1.1) holds with C replaced by G. Unlike Cramér's model, however, Granville's model also captures the bias of primes in residue classes modulo the primes p ≤ A. In particular, for any finite set H of integers, Granville's set satisfies the appropriate analog of the Hardy-Littlewood conjectures for counts of prime k-tuples (see (1.4) below).
In contrast with the Cramér model, Granville's random set G satisfies G_G(x) ≥ (ξ − o(1)) log^2 x, where ξ := 2e^{−γ} = 1.1229...,  (1.3) with probability one. Granville establishes (1.3) by choosing starting points a with Q | a. If y ≍ log^2 x, then there are about y/log y numbers n ∈ [a, a + y] that are coprime to every p ≤ A; this is a factor ξ smaller than the corresponding quantity for a random starting point a, and it accounts for the difference between (1.2) and (1.3). We elaborate on this idea in our analysis of G_R(x).
1.3. A new probabilistic model for primes. Hardy and Littlewood [19] conjectured that the asymptotic relation |{n ≤ x : n + h ∈ P for all h ∈ H}| = (S(H) + o(1)) x/log^k x  (1.4) holds as x → ∞ for each fixed finite set H of integers, where k := |H| and S(H) := ∏_p (1 − |H mod p|/p)(1 − 1/p)^{−k}.  (1.5) Note that the left side of (1.4) is bounded if |H mod p| = p for some prime p, since then for every integer n one has p | n + h for some h ∈ H; in this case, S(H) = 0. We say that H is admissible if |H mod p| < p for every prime p. To motivate our model set R, we first reinterpret (1.4) probabilistically. The rapid convergence of the product (1.5) implies that S(H) is well approximated by the truncation V_H(z)/Θ_z^k, where V_H(z) := ∏_{p ≤ z} (1 − |H mod p|/p) and Θ_z := ∏_{p ≤ z} (1 − 1/p). We interpret V_H(z) as a product of local densities, and Θ_z as a kind of global density. In order to match the global density of the primes as closely as possible, we take z = z(t) to be the largest prime for which 1/Θ_{z(t)} ≤ log t; this is well-defined for t ≥ e^2, and by the prime number theorem we have z(t) = t^{e^{−γ}+o(1)}.  (1.7) It follows that the right side of (1.4) is asymptotic to ∫_2^x V_H(z(t)) dt.
On the other hand, the quantity V_H(z) can be written probabilistically as V_H(z) = P(H ⊂ S_z),  (1.8) where P denotes probability over a uniform choice of residue classes a_p mod p, one for every prime p, with the random variables a_p mod p being jointly independent in p, and S_z is the random set S_z := Z \ ⋃_{p ≤ z} (a_p mod p).  (1.9) Thus, H ⊂ S_z is the event that H survives sieving by random residue classes modulo the primes p ≤ z. Consequently, for admissible H, (1.4) takes the form |{n ≤ x : n + h ∈ P for all h ∈ H}| ~ ∫_2^x P(H ⊂ S_{z(t)}) dt.
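The identity (1.8) is easy to test numerically. Below is a small hypothetical check (not from the paper; the tuple H = {0, 2}, the cutoff z = 7, and the trial count are arbitrary choices): it compares the product of local densities V_H(z) with a Monte Carlo estimate of P(H ⊂ S_z).

```python
import random

def primes_upto(z):
    return [p for p in range(2, z + 1)
            if all(p % q for q in range(2, int(p ** 0.5) + 1))]

def V(H, z):
    """Product of local densities prod_{p<=z} (1 - |H mod p|/p)."""
    out = 1.0
    for p in primes_upto(z):
        out *= 1 - len({h % p for h in H}) / p
    return out

def survival_freq(H, z, trials=200_000, seed=1):
    """Monte Carlo estimate of P(H ⊂ S_z): draw one uniform class a_p per
    prime p <= z and count trials in which no h ∈ H falls in any class."""
    rng = random.Random(seed)
    ps = primes_upto(z)
    hits = 0
    for _ in range(trials):
        survived = True
        for p in ps:
            a_p = rng.randrange(p)          # the random residue class a_p mod p
            if any(h % p == a_p for h in H):
                survived = False
                break
        if survived:
            hits += 1
    return hits / trials

H, z = {0, 2}, 7
exact, approx = V(H, z), survival_freq(H, z)
print(exact, approx)
```

For this choice, V(H, 7) = (1/2)(1/3)(3/5)(5/7) = 1/14, and the empirical survival frequency agrees to within Monte Carlo error.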
Thus, (1.4) asserts that the probability that a random shift of H lies in P is asymptotically the same as the probability that H lies in a randomly sifted set. Motivated by this probabilistic interpretation of (1.4), we now define R := {n ∈ N : n ≥ e^2, n ∈ S_{z(n)}}  (1.10) as our random set of integers. Note that the number of primes being sieved out increases as n increases, in order to mimic the slowly decreasing density of the primes. This can be compared with the description of P via the sieve of Eratosthenes, in which z(n) is replaced by n^{1/2} and the a_p are replaced by 0. We believe that the random set R is a useful model for the primes, especially for studying local statistics such as gaps. On the other hand, the analysis of R presents more difficulties than that of C or G, owing to the more complicated coupling between events such as n_1 ∈ R and n_2 ∈ R for n_1 ≠ n_2.
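The definition (1.10) is concrete enough to simulate directly. The sketch below is illustrative and not from the paper (the prime bound 2000, the height 30000, and the seed are arbitrary choices): it draws one random class a_p per prime, computes the cutoff z(n) from the Mertens product condition 1/Θ_z ≤ log n, and builds a sample of R. The sample's density tracks 1/log n, and its largest gap is of the order log^2 x.

```python
import math
import random

def primes_upto(N):
    return [p for p in range(2, N + 1)
            if all(p % q for q in range(2, int(p ** 0.5) + 1))]

PS = primes_upto(2000)                 # comfortably exceeds z(30000)
rng = random.Random(2)
A = {p: rng.randrange(p) for p in PS}  # one uniform residue class per prime

# running values of 1/Theta_z = prod_{p<=z} (1 - 1/p)^{-1}
INV_THETA = []
t = 1.0
for p in PS:
    t *= p / (p - 1)
    INV_THETA.append(t)

R, zi = [], -1                         # zi = index of the current cutoff prime z(n)
for n in range(3, 30001):
    L = math.log(n)
    while zi + 1 < len(PS) and INV_THETA[zi + 1] <= L:
        zi += 1                        # enlarge the sieving range as n grows
    if all(n % p != A[p] for p in PS[: zi + 1]):
        R.append(n)                    # n survives every class a_p, p <= z(n)

gap = max(b - a for a, b in zip(R, R[1:]))
print(f"|R ∩ [1,30000]| = {len(R)}, largest gap = {gap}")
```

The count of survivors is close to ∫_2^{30000} dt/log t, illustrating that the model reproduces the global density of the primes by construction.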
1.4. Large gaps from the model. The behavior of G_R(x) is intimately tied to extremal properties of the interval sieve. To describe this connection, for any y ≥ 2 let W_y denote the (deterministic) quantity W_y := min |S_{(y/log y)^{1/2}} ∩ [0, y]|,  (1.11) where S_z is defined in (1.9) and the minimum in (1.11) is taken over all choices of the residue classes {a_p mod p : p ≤ (y/log y)^{1/2}}. The quantity W_y satisfies the two-sided bounds recorded in (1.12), the lower bound being a consequence of Iwaniec's theory of the linear sieve (see [12, Theorem 12.14] or [21]), and the upper bound resulting from the particular choice a_p := 0 mod p for all primes p ≤ (y/log y)^{1/2}. There is a folklore conjecture that the upper bound in (1.12) is closer to the truth. The problem of bounding W_y belongs to a circle of problems centered on the question of the maximum number of primes in an interval of given length; see, e.g., [20] and [9].
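The two competing choices behind (1.11)-(1.12) can be compared numerically for a small y. The following sketch is only an illustration (the value y = 10000, the seed, and the 40-sample average are arbitrary choices; it does not attempt the true minimization defining W_y): it counts survivors in [0, y] when sieving by all primes p ≤ (y/log y)^{1/2}, once with the classes a_p = 0 and once averaged over random choices. Both counts come out of order y/log y, consistent with the fact that the upper and lower bounds in (1.12) have the same order of magnitude.

```python
import math
import random

def primes_upto(z):
    return [p for p in range(2, z + 1)
            if all(p % q for q in range(2, int(p ** 0.5) + 1))]

def survivors(y, classes):
    """|S_z ∩ [0, y]| for a given choice of residue classes a_p mod p."""
    return sum(all(n % p != a for p, a in classes.items()) for n in range(y + 1))

y = 10_000
z = math.isqrt(int(y / math.log(y)))   # the sieving cutoff (y/log y)^{1/2}
ps = primes_upto(z)

zero_count = survivors(y, {p: 0 for p in ps})
rng = random.Random(4)
rand_counts = [survivors(y, {p: rng.randrange(p) for p in ps})
               for _ in range(40)]
mean_random = sum(rand_counts) / len(rand_counts)
print(zero_count, mean_random, y / math.log(y))
```

At this small height the two counts are close; distinguishing the extremal behavior of W_y from the typical behavior is exactly the difficulty of the interval sieve problem.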
The function g(u) is evidently increasing, and by (1.12) we obtain the two-sided bounds (1.14) on g(u), so Theorem 1.1 implies that for every ε > 0, almost surely g((ξ − ε) log^2 x) ≤ G_R(x) ≤ g((ξ + ε) log^2 x) for all large x. It seems likely that g((ξ ± ε) log^2 x) ~ g(ξ log^2 x) as ε → 0, although we cannot prove this. Assuming this, Theorem 1.1 leads us to the following prediction for gaps between primes:
Conjecture 1.2 (Asymptotic for largest gap in the primes). We have G_P(x) ~ g(ξ log^2 x) as x → ∞.
Assuming the previously mentioned folklore conjecture that the lower bound in (1.14) is asymptotically tight, in the sense that g(u) ~ u as u → ∞, we are then led to the prediction that G_P(x) ~ ξ log^2 x. This matches the lower bound (1.3) for the gap in the Granville model G.
1.5. Hardy-Littlewood from the model. It has been conjectured that a much more precise version of (1.4) holds (see, e.g., Montgomery and Soundararajan [28]), namely |{n ≤ x : n + h ∈ P for all h ∈ H}| = S(H) ∫_2^x dt/log^k t + O(x^{1/2+o(1)}).  (1.16) There is some computational evidence for this strong estimate for certain small sets H; see Section 2.1. Granville's model set G, by contrast, satisfies the analogous relation with an error term that cannot be made smaller than O(x/log^{|H|+1} x). This occurs because G captures the bias of P only modulo the primes p ≤ A; that is, the set G satisfies the analog of (1.16) with S(H) replaced by the truncated singular series S_A(H).
The model set R given by (1.10) has been designed with the Hardy-Littlewood conjectures in mind. We establish a uniform analog of (1.16) that holds in a wide range of H.
Theorem 1.3 (Hardy-Littlewood conjecture for the random model). Fix c ∈ [1/2, 1) and ε > 0. Almost surely, the analog of (1.16) for R holds with error term O(x^{c+ε}), uniformly for all admissible tuples H satisfying |H| ≤ log^c x and lying in the stated range.
In particular, when c = 1/2 the error term is O(x^{1/2+o(1)}), which matches (1.16) provided that H ⊆ [0, exp{log^{1/2} x · log_2 x}] and |H| ≤ log^{1/2} x. As we invoke the Borel-Cantelli lemma in the proof, the constant implied by the O-symbol exists almost surely, but we cannot give any uniform bound on it. This remark applies to the next result as well.
For the special case H = {0} we have the following more precise statement.
Theorem 1.4 (Riemann hypothesis for the random model). Fix c > 3/2. Almost surely, we have |R ∩ [1, x]| = ∫_2^x dt/log t + O(x^{1/2} (log x)^c). Similar results can be obtained for any fixed tuple H; we leave this to the interested reader.
1.6. Large gaps from Hardy-Littlewood. The results stated above have a partial deterministic converse: we show that any set of integers satisfying a uniform analog of the Hardy-Littlewood conjecture (1.16) has large gaps. The maximal length of the gaps depends on the range of uniformity in (1.16), and comes close to order log^2 x under a strong uniformity assumption. Our result extends a theorem of Gallagher [14], who showed that if, for every fixed k ∈ N and real c > 1, the primes obey the Hardy-Littlewood conjectures uniformly for every admissible k-tuple H ⊂ [0, c log x], then the prime gaps, normalized by log x, asymptotically follow an exponential distribution. His approach applies to any set A in place of the primes P. Theorem 1.5 assumes that A satisfies such a uniform analog of (1.16) over all tuples H ⊂ [0, log^2 x] with |H| ≤ κ log x/(2 log_2 x), and concludes a corresponding lower bound on G_A(x) for all large x, in which the implied constant is absolute.
We also have the following variant of Theorem 1.5, which has a stronger conclusion but requires a uniform Hardy-Littlewood conjecture for larger tuples (of cardinality as large as log x · log_2 x); on the other hand, this conjecture is only needed in a certain averaged sense.
Theorem 1.6 (Averaged Hardy-Littlewood implies large gaps). Fix 0 < c < 1. Suppose that A ⊂ N satisfies the averaged Hardy-Littlewood type conjecture (1.18) uniformly for k ≤ Cy/log x and log x ≤ y ≤ (log^2 x) log_2 x, where C is a sufficiently large absolute constant. Then G_A(x) satisfies a lower bound expressed in terms of the function g defined in (1.13).
One could combine Theorem 1.3 with Theorem 1.5 (taking κ := (log x)^{c−1+ε} with fixed c < 1, say) to obtain results similar to Theorem 1.1. However, the conclusion is considerably weaker than that of Theorem 1.1, and this approach does not appear to come close to recovering the bounds we obtain by a direct argument.
Below we summarize, in rough form, the various results and conjectures for the primes P, the random models C, G, R, and arbitrary sets A obeying a Hardy-Littlewood type conjecture. For each set, the summary records whether a Hardy-Littlewood conjecture holds (for instance, it fails for C, since the singular series is missing), in some cases for tuples of size up to (log x) log_2 x, together with the asymptotic largest gap up to x. Of course, one can combine these conclusions with the unconditional bounds in (1.14), or the conjecture g(u) ~ u, to obtain further rigorous or predicted upper and lower bounds for the largest gap.
1.7. Open problems.
(1) Improve upon the bounds (1.12); alternatively, give a heuristic reason why the upper bound in (1.12) should be closer to the truth.
(2) Show that g(a) ~ g(b) whenever a ~ b. This would clean up the statement of Theorem 1.1.

Background and Further Remarks
The discussion here is not needed for the proofs of the main theorems and may be omitted on the first reading.
2.1. Remarks on the Hardy-Littlewood conjectures. For any H ⊆ [0, y], we have S(H) ≤ e^{O(|H| log_2 y)} (see Lemma 3.4 below), and thus when y ≤ (log x)^{O(1)}, the main terms in (1.16) and (1.17) are smaller than one for c_1 log x/log_2 x ≤ |H| ≤ exp{(log x)^{c_2}}, where c_1, c_2 > 0 are appropriate constants. Therefore, we cannot have a genuine asymptotic when |H| > c_1 log x/log_2 x. In the case of primes, it may be that (1.16) fails already when |H| > log x/log_2 x, owing to potentially large fluctuations both in the size of S(H) and in the prime counts themselves. We note that Elsholtz [8] has shown, for any c > 0, an upper bound for the left side of (1.16) in the range |H| ≥ c log x, where the implied function o(1) in his bound depends on c. On the other hand, there are admissible tuples with |H| ≫ log x for which the left side of (1.16) is zero (see [8] for a construction of such H).
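As a concrete illustration of the size of S(H) (not from the paper; the tuples and the truncation point 10^5 are arbitrary choices), the truncated Euler product below evaluates S({0, 2}), recovering the twin prime constant 2C_2 ≈ 1.3203, and S({0, 2, 6}); the rapid convergence noted after (1.5) makes a modest truncation sufficient.

```python
def primes_upto(N):
    s = bytearray([1]) * (N + 1)
    s[0] = s[1] = 0
    for p in range(2, int(N ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(range(p * p, N + 1, p)))
    return [i for i, v in enumerate(s) if v]

def singular_series(H, z=10**5):
    """Truncated product prod_{p<=z} (1 - |H mod p|/p) (1 - 1/p)^{-|H|}."""
    k, out = len(H), 1.0
    for p in primes_upto(z):
        nu = len({h % p for h in H})
        out *= (1 - nu / p) * (1 - 1 / p) ** (-k)
    return out

s2 = singular_series({0, 2})      # twin prime constant 2*C_2 = 1.32032...
s3 = singular_series({0, 2, 6})
print(s2, s3)
```

The tail of the product beyond p > z contributes only 1 + O(k^2/(z log z)), so the truncation error here is far below the printed precision.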
Our assumption in Theorem 1.6 is more speculative in light of the above remarks, since we must deal with tuples H satisfying k = |H| > log x. Also, simply considering subsets H of the primes in (y/2, y] (which are automatically admissible), we see that there are at least (y/(k log y))^k > (log x)^{k/2} tuples H in the summation, and this means that when k > log x, (1.18) implies a great deal of cancellation in the error terms of (1.17) over tuples H.

2.2. The cutoff z(t). In [37], Pólya suggests using the truncation x^{1/e^γ} to justify the Hardy-Littlewood conjectures. The observation that the cutoff √x leads to erroneous prime counts was made by Hardy and Littlewood [19, Section 4.3] and is occasionally referred to as "the Mertens paradox" (see [31]). In discussing the probabilistic heuristic for counting the number of primes below x, Hardy and Littlewood write (here p denotes a prime): "One might well replace p < √n by p < n, in which case we should obtain a probability half as large. This remark is in itself enough to show the unsatisfactory character of the argument," and later, "Probability is not a notion of pure mathematics, but of philosophy or physics."
2.3. Connection to Jacobsthal's function. Any improvement of the lower bound in (1.12) leads to a corresponding improvement of the known upper bound on Jacobsthal's function J(w), which we define to be the largest gap occurring in the set of integers having no prime factor ≤ w. Equivalently, J(w) is the largest gap in S_w. Iwaniec [21] proved that J(w) ≪ w^2 using his linear sieve bounds. Using Montgomery and Vaughan's explicit version of the Brun-Titchmarsh inequality [29], the cardinality of the set S_w(y) := [0, y] ∩ S_w for w > (y/log y)^{1/2} can be bounded from below explicitly; if the resulting lower bound is positive, it follows that J(w) < y. Suppose, for example, that W_y ≥ αy/log y for large y, where 0 < α ≤ 1 is fixed. Mertens' estimates then imply that J(w) ≤ w^{1+e^{−α/2}+o(1)} (w → ∞), which improves Iwaniec's upper bound.
We remark that all of the unconditional lower bounds on G_P(x), including the current record [11], have utilized the simple inequality G_P(x) ≫ J(y), where y ~ log x.
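Jacobsthal's function is easy to compute for tiny w by brute force over one full period of the sieved set (by the Chinese remainder theorem the gap structure is periodic modulo the primorial, which also explains why the largest gap in S_w is deterministic). The sketch below is illustrative only; it confirms, e.g., that J(7) = 10.

```python
def jacobsthal(w):
    """J(w): largest gap between consecutive integers with no prime
    factor <= w, scanned over one full period [1, Q + 1], where Q is
    the product of the primes up to w."""
    ps = [p for p in range(2, w + 1) if all(p % q for q in range(2, p))]
    Q = 1
    for p in ps:
        Q *= p
    coprime = [n for n in range(1, Q + 2) if all(n % p for p in ps)]
    return max(b - a for a, b in zip(coprime, coprime[1:]))

for w in (2, 3, 5, 7, 11, 13):
    print(w, jacobsthal(w))
```

Both endpoints 1 and Q + 1 are coprime to Q, so the scan over [1, Q + 1] captures every gap of the periodic pattern.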

2.4. The interval sieve problem and exceptional zeros. The problem of determining W_y asymptotically is connected with the famous problem of exceptional zeros of Dirichlet L-functions (also known as Siegel zeros or Landau-Siegel zeros); see, e.g., [7, Sections 14, 20, 21, 22] for background on these and [22] for further discussion.
Definition 2.1. We say that exceptional zeros exist if there is an infinite set E ⊂ N such that for every q ∈ E there is a real Dirichlet character χ_q modulo q and a real zero 1 − δ_q of L(s, χ_q), that is, L(1 − δ_q, χ_q) = 0, with δ_q = o(1/log q) as q → ∞.
Theorem 2.2. Suppose that exceptional zeros exist. Then lim inf_{y→∞} W_y log y / y = 0. Hence, we almost surely have lim sup_{x→∞} G_R(x)/log^2 x = ∞. Our proof of Theorem 2.2, given in Section 9, is quantitative, exhibiting an upper bound for W_y in terms of the decay of δ_q. Siegel's theorem [7, Sec. 21] implies that log(1/δ_q)/log q → 0, but we cannot say anything about the rate at which this occurs (i.e., the bound is ineffective). If the rate of decay to zero is extremely slow, then our proof shows that, for infinitely many y, W_y = y/(f(y) log y) with f(y) → ∞ extremely slowly. Consequently, G_R(x) is infinitely often close to the upper bound in (1.15).
The related quantity W*_y, defined as in (1.11) but with a maximum in place of the minimum, is known by the theory of upper bound sieves to satisfy W*_y ≤ (2 + o(1)) y/log y (see, e.g., [30]), and it is well known that an improvement of the constant two would imply that exceptional zeros do not exist; see, e.g., Selberg's paper [39]. Theorem 2.2 (in the contrapositive) similarly asserts that an improvement of the constant zero in the trivial lower bound W_y ≥ 0 · y/log y implies that exceptional zeros do not exist. Extending our ideas and those of Selberg, Granville [17] has recently shown that if exceptional zeros exist, then the linear sieve bounds are optimal for any real r > 1, where f, F denote the lower and upper linear sieve functions. In particular, f(r) = 0 for r ≤ 2 and f(r) > 0 for r > 2.
It is widely believed that exceptional zeros do not exist, and this is a famous unsolved problem. Theorem 2.2 indicates that in order to fully understand W_y, it is necessary to solve this problem. Iwaniec's lectures [22] give a nice overview of the problem of exceptional zeros, of attempts to prove that they do not exist, and of various consequences of their existence. In the paper [10], the second author shows that if there is a sequence of moduli q with δ_q ≪ (log q)^{−2}, then one can deduce larger lower bounds for J(w) and G_P(x) than are currently known unconditionally.

2.5. Primes in longer intervals. With probability one, the Cramér model satisfies π_C(x + y) − π_C(x) ~ y/log x  (2.1) as long as x → ∞, y ≤ x, and y/log^2 x → ∞. However, Maier [25] has shown that the analogous statement for primes is false: for any fixed A > 1 one has lim inf_{x→∞} (π(x + log^A x) − π(x))/log^{A−1} x < 1 < lim sup_{x→∞} (π(x + log^A x) − π(x))/log^{A−1} x.  (2.2) The disparity between (2.1) and (2.2) again stems from the uniform distribution of C in residue classes modulo primes. Both models G and R satisfy the analogs of (2.2); we omit the proofs. Moreover, the ideas behind Theorem 1.1 can be used to sharpen (2.2), by replacing the right sides of the inequalities by quantities defined in terms of the extremal behavior of |[0, y] ∩ S_{y^{1/u}}| for fixed u > 1; we refer the reader to [23, Exercise 30.1] for details. The authors thank Dimitris Koukoulopoulos for this observation.
By contrast, on the Riemann Hypothesis, Selberg [38] showed that π(x + y) − π(x) ~ y/log x holds for almost all x, provided that y = y(x) ≤ x satisfies y/log^2 x → ∞ as x → ∞.
On a related note, Granville and Lumley [18] have developed heuristics and conjectures concerning the maximum number of primes ≤ x lying in intervals of length L, where L varies between log x and log^2 x.
2.6. Remarks on the singular series and prime gaps. If y is small compared to x, the difference π_C(x + y) − π_C(x) is a random variable with (essentially) a binomial distribution. Letting y → ∞ with y/log x fixed, the result is a Poisson distribution: for any real λ > 0 and any integer k ≥ 0, we have (1/x)|{m ≤ x : |C ∩ (m, m + λ log m]| = k}| → e^{−λ} λ^k/k!  (2.3) with probability one. In particular, using C as a model for the primes P, this leads to the conjecture (2.4) that the same asymptotic holds with C replaced by P. Gallagher [14] showed that if the Hardy-Littlewood conjectures (1.4) hold uniformly for H ⊂ [0, log^2 x] with fixed cardinality |H|, then (2.4) follows. His analysis relies on the relation ∑_{H ⊆ [1,y], |H| = k} S(H) ~ y^k/k!  (y → ∞),  (2.5) which asserts that the singular series has an average value of one. Sharper versions of (2.5) exist (see, e.g., Montgomery and Soundararajan [28]); such results, however, are uniform only in a range |H| ≪ log^2 y or so, far too restrictive for our use. Reinterpreting the sum on the left side of (2.5) probabilistically, as we have done above, allows us to handle a much larger range of sizes |H|. In particular, it is possible to deduce from a uniform version of (1.16) a uniform version of (2.4), although we have not done so in this paper.
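The Poisson limit (2.3) can be observed in simulation. The sketch below is illustrative and not from the paper (it resamples the model's coin flips in each window, which does not affect the limiting statistics; the height, λ = 2, sample size, and seed are arbitrary choices): it counts points of C in windows of length λ log x and compares the empirical mean and empty-window frequency with the Poisson values λ and e^{−λ}.

```python
import math
import random

def window_counts(x, lam, samples=4000, seed=3):
    """Counts |C ∩ (m, m + y]|, y = λ log x, for random m ∈ [x, 2x)."""
    rng = random.Random(seed)
    y = int(lam * math.log(x))
    counts = []
    for _ in range(samples):
        m = rng.randrange(x, 2 * x)
        counts.append(sum(rng.random() < 1.0 / math.log(n)
                          for n in range(m + 1, m + y + 1)))
    return counts

x, lam = 10**7, 2.0
counts = window_counts(x, lam)
mean = sum(counts) / len(counts)
p_empty = counts.count(0) / len(counts)
print(f"mean = {mean:.3f} (Poisson: {lam}), "
      f"P(empty) = {p_empty:.3f} (e^-λ = {math.exp(-lam):.3f})")
```

The empty-window probability is the k = 0 case of (2.3) and is exactly the statistic that governs the gap distribution discussed above.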
We take this occasion to mention a recent unconditional theorem of Mastrostefano [26, Theorem 1.1], which is related to (2.5), and which states that for any integer m ≥ 0 there is an ε = ε(m) > 0 such that, whenever 0 < λ < ε, the number of windows of length λ log x containing exactly m primes has at least the expected order of magnitude. Establishing the Poisson distribution (2.3) unconditionally, even for some fixed λ, seems very difficult.
The proof is very short, and we sketch it here as a prelude to the proof of Theorem 1.1. Consider the elements of G in (x, 2x] for x a power of two. In accordance with (1.14), let y satisfy log^2 x ≪ y = o(log^2 x · log_2 x), and put A := (y/log y)^{1/2}, so that A = o(log x). Let θ := ∏_{p ≤ A} (1 − 1/p)^{−1} ~ (e^γ/2) log y and Q := ∏_{p ≤ A} p. For simplicity, we suppose that each n ∈ (x, 2x] with (n, Q) = 1 is chosen for inclusion in G with probability θ/log x; this modification has a negligible effect on the size of the largest gap. Fix ε > 0 arbitrarily small. Let X_m denote the event (m, m + y] ∩ G = ∅, and let D_m denote the number of integers in (m, m + y] all of whose prime factors exceed A. If we take y := g((ξ + ε) log^2 x), then W_y log y ~ (ξ + ε) log^2 x by our assumptions. Summing over x and applying Borel-Cantelli, we see that almost surely only finitely many X_m occur.
For the lower bound, we take y := g((ξ − ε) log^2 x) and restrict to special values of m, namely m ≡ b mod Q, where b is chosen to minimize the number of integers in (b, b + y] that are coprime to Q. Let M := {x < m ≤ 2x : m ≡ b mod Q} and let N be the number of m ∈ M for which X_m occurs. By the above argument, EN is at least |M| times a lower bound for P(X_m). By assumption, |M| = x^{1−o(1)}, and hence the right side is > x^{ε/2} for large x. Similarly, one bounds the variance of N. By Chebyshev's inequality, P(N < (1/2)EN) ≪ 1/EN ≪ x^{−ε/2}. Considering all x and using Borel-Cantelli, we conclude that almost surely every sufficiently large dyadic interval (x, 2x] contains an m for which X_m occurs. We remark that our lower bound argument works equally well for the Cramér model, establishing (1.2); there we take A = Q = θ = b = 1, and the details are simpler.

Preliminaries
3.1. Notation. The indicator function of any set T is denoted 1_T(n). We select residue classes a_p mod p uniformly and independently at random for each prime p, and then for any set of primes Q we denote by A_Q the ordered tuple (a_p : p ∈ Q); often we condition our probabilities on A_Q for a fixed choice of Q.
Probability, expectation, and variance are denoted by P, E, and V, respectively. We use P_Q and E_Q to denote the probability and expectation, respectively, with respect to the random tuple A_Q. When Q is the set of primes in an interval, the corresponding abbreviations are used, and the function in question is assumed to be positive. For a set H of integers, we denote H − H := {h − h′ : h, h′ ∈ H}, and for any integer m, H + m := {h + m : h ∈ H}.

3.2. Various inequalities. We collect here some standard inequalities from sieve theory and probability that are used in the rest of the paper.
Lemma 3.1 (Sieve upper bound). Uniformly for 2 ≤ w ≤ y, primes p ≤ y, and integers a, we have |{n ≤ y : n ≡ a mod p, (n, ∏_{q ≤ w, q ≠ p} q) = 1}| ≪ (y/p)/(1 + min{log w, log(y/p)}).
Lemma 3.2 (Azuma's inequality [1]). Suppose that X_0, ..., X_n is a martingale with |X_{j+1} − X_j| ≤ c_j for each j. Then P(|X_n − X_0| ≥ t) ≤ 2 exp(−t^2/(2 ∑_j c_j^2)).
Lemma 3.3 (Bennett's inequality [3]). Suppose that X_1, ..., X_n are independent random variables such that for each j, EX_j = 0 and |X_j| ≤ M with probability one. Then P(|∑_j X_j| ≥ t) ≤ 2 exp(−(σ^2/M^2) h(Mt/σ^2)), where σ^2 := ∑_j VX_j and h(u) := (1 + u) log(1 + u) − u.
The lemma now follows since k ≤ t^{1/100}.

Uniform Hardy-Littlewood from the model
In this section, we prove Theorems 1.3 and 1.4 using the first and second moment bounds, (4.1) and (4.2), provided by Proposition 4.1. Before turning to the proof of the proposition, we first indicate how it is used to prove the two theorems, starting with Theorem 1.4.
Proof of Theorem 1.4. Fix c > 3/2. For any integers u ≥ 2 and v ≥ 0, we let ∆(u, v) denote the discrepancy between |R ∩ (u, v]| and its expected size. We apply Proposition 4.1 in the case H = {0}, k = 1, and D = 0. By (4.1), if v ≤ √u then the expectation has the required size. Let x be a large integer. For integers h, m with 2√x ≤ 2^m ≤ x and 0 ≤ h ≤ x/2^m − 1, let G_m,h be the event that the count on the corresponding dyadic block is close to its mean. For large x, (4.3) implies a suitable variance bound; hence, Chebyshev's inequality bounds the probability that G_m,h fails. Let F_x denote the event that G_m,h holds for all such h, m. By a union bound, we see that P(F_x) = 1 − O((log x)^{2−2c}). On the event F_x, for any integer y with 1 ≤ y ≤ x, we have ∆(2, y) ≪ x^{1/2} (log x)^c.
Since 2c − 2 > 1, the Borel-Cantelli lemma implies that with probability one, F_{2^s} is true for all large integers s. On this event, ∆(2, x) ≪ x^{1/2} (log x)^c for all real x ≥ 2, proving the theorem.
We next turn to Theorem 1.3. The number of tuples H under consideration does not exceed u^{100/log_2 u} = u^{o(1)} as u → ∞.
We again invoke the moment bounds in Proposition 4.1, which give bounds of the shape bu^{2λ−1+o(1)}, where the implied function o(1) is uniform over all such H, a, and b. For integers h, m with 2√u ≤ 2^m ≤ u and 0 ≤ h ≤ u/2^m − 1, let G_h,m be the event that the desired estimate holds for all H satisfying (4.4). Again, if u is large enough, the expectation of the left side is at most (1/2) u^{λ+ε/2}, uniformly over all h, m, H. By a union bound and Chebyshev's inequality, the probability that some G_h,m fails is suitably small. Furthermore, as in the proof of Theorem 1.4, we see that if u is large enough (in terms of c, ε) and G_h,m holds for all h, m, then F_u holds. By Borel-Cantelli, almost surely F_{2^s} is true for all sufficiently large integers s.
Now assume that we are in the event that F_{2^s} holds for all s ≥ s_0. Let x be sufficiently large that x ≥ 2^{3s_0+1}, choose t with 2^{t−1} < x ≤ 2^t, and let H be an admissible tuple satisfying the hypotheses of Theorem 1.3. Note that whenever x^{1/3} ≤ u = 2^s ≤ x, condition (4.4) holds. Thus, using (3.2), we obtain the estimate required for Theorem 1.3.
The following lemma is needed in the proof of Proposition 4.1; it concerns a fixed admissible tuple H with k := |H| ≥ 1.
Proof. We begin with a simple bound, and the stated estimate then follows by multiple applications of (1.7). This completes the proof.
Proof of Proposition 4.1. Suppose that H ⊂ [0, D] with k := |H| ≤ log x/(log_2 x)^2. We may assume that D is an integer. Write ν_p := |H mod p| for every prime p. Since z(t) is increasing and ψ_u is decreasing in u, the expected count can be estimated directly. Estimate (3.2) implies that S(H) = x^{o(1)}, and this proves the estimate (4.1) of the proposition.
For the second moment bound, let v be a parameter in [4k, log x] and set Q := ∏_{p ≤ v} p. Given integers n_1 and n_2 with x < n_1 < n_2 ≤ x + y, define m := n_2 − n_1. We consider separately the primes ≤ v and those > v. For technical reasons, we use the trivial bound EX_{n_1} X_{n_2} ≤ ψ_{n_1} ≤ ψ_x when m ∈ H − H; the total contribution from such terms is ≤ ψ_x k^2 y, which is an acceptable error term for (4.2). Now suppose that m ∉ H − H. For any prime p > v and integer a ∈ (−p/2, p/2), let λ_a(p) := |(H ∩ (H + a)) mod p|. Then, given v < p ≤ z(x + y) and m, the corresponding local factor can be computed, where a is the unique integer such that a ≡ m mod p and |a| < p/2.
If an integer r has a prime factor outside the interval I(n_1, a), or if r is not squarefree, we set f_a(r) := 0. Then (since m ∉ H − H, we always have m − a ≠ 0), recalling (4.7), we obtain an expansion in terms of the quantities f_a(d_a).
We now fix n_1 and sum over n_2. Let D(n_1) be the set of vectors d = (d_a) in which each d_a is squarefree with all of its prime factors in I(n_1, a); that is, D(n_1) is the set of all possible vectors of the numbers d_a. We compute the resulting sum, where we have dropped the condition n_2 − n_1 ∉ H − H on the right side. A crucial observation is that for every d ∈ D(n_1), the components d_a are pairwise coprime. Indeed, if a, a′ are two distinct elements of H − H and a prime p > max{v, 2|a|, 2|a′|} divides both d_a and d_{a′}, then p divides both m − a and m − a′ for some m; this implies a ≡ a′ (mod p), a contradiction. Hence, the innermost sum is a sum over a single residue class modulo d := Q ∏_a d_a. For any e ∈ Z we have by (4.5) a corresponding estimate, where we used that k ≤ log x together with a bound for the primes in (z(x), z(x + y + d)]. Therefore, combining (4.9) and (4.10), and reinserting the terms with n_2 − n_1 ∈ H − H, for each n_1 we obtain the stated bound. Extending the first sum over d to all pairwise coprime tuples d composed of prime factors in (v, z(n_1)], and applying (4.8) again, we find the analogous estimate. Finally, summing over n_1, and using that X_n^2 = X_n, we arrive at a bound for the second moment. Comparing this with (4.6), it follows that the variance in question satisfies (4.11). To bound T, we consider two cases. First, suppose that k ≤ (log x)^{1/2}/log_2 x, and let v := 4k. In this case, we argue crudely, using (4.8) and ν_p ≤ k for all p. The prime number theorem implies that log Q ≪ v, and thus QT ≪ (log x) k^2. Therefore, (4.11) implies (4.2).
Next, suppose that k > (log x)^{1/2}/log_2 x, with k as in (4.12), and choose v so that v ≥ 4k and Q = x^{o(1)}. For a parameter U ≤ x^5, to be chosen later, we split D(n_1) into the parts D^−_U and D^+_U. We begin with D^−_U. For any parameter α > 0 we have, by (4.8), a corresponding bound, with α ≥ 1 by (4.12); recalling (4.13), this controls the contribution of D^−_U. Next, we turn to D^+_U and make use of the special structure of D(n_1). For any parameter β ∈ [0, 1) we proceed analogously. Note that each prime p can appear at most once in the double product, since p | (m − a) and p | (m − a′) implies p | (a − a′), which forces a = a′. We split the last product into two pieces according to whether p ≤ w or p > w, where w is a parameter to be chosen later. For any m ∉ H − H, the contribution of the primes p ≤ w is acceptable for large x. We bound the contribution of the larger primes trivially, using the fact that any integer m − a is divisible by ≪ log x/log_2 x such primes (here it is crucial that m ≠ a). We now choose β; by (4.12) we have β ≥ 0, and clearly β < 1. It follows that the contribution of D^+_U is controlled. Comparing (4.14) with (4.15), we choose U so that the two bounds balance; the exponent of y is then 4 + o(1) ≤ 5 for large x. This gives the required bound on T. Inserting this into (4.11) yields the inequality (4.2), and completes the proof of Proposition 4.1.

Random sieving by small primes
Throughout the sequel, we employ the notation introduced in (5.1). Throughout this section, we assume that x and y are large real numbers satisfying the relation (5.2), where W_y is given by (1.11) and α, β are fixed with 0 < α < β. Note that (1.12) and (5.2) yield the estimates (5.3). We adopt the convention that any constants implied by O and ≪ may depend on α, β but are independent of other parameters.
We define S_w(y) := [0, y] ∩ S_w and, when the value of y is clear from context, we put S_w := |S_w(y)|. Using a variety of tools, we give sharp probability bounds for S_w at five "checkpoint" values w_1 < w_2 < w_3 < w_4 < w_5 (defined below), with each S_{w_{i+1}} controlled in terms of S_{w_i} for i = 1, 2, 3, 4. Our arguments are summarized as follows, each stage treating a range of primes. The most delicate part of the argument is dealing with primes p near log x, that is, w_1 ≤ p ≤ w_3 (see Lemmas 5.1 and 5.2). To initialize the argument, we observe from the definition (1.11) of W_y the lower bound S_{w_1} ≥ W_y.  (5.4) Now we successively increase the sieving range from S_{w_1} to S_{w_2}, and so on, up to S_{w_5}.
Lemma 5.1 (Sieving for w_1 < p ≤ w_2). Let w_1 := (y/log y)^{1/2} and w_2 := log x · log_3 x. With probability one, a corresponding lower bound on S_{w_2} holds.
Proof. In this section and the next one, we adopt the notation R_p for the residue class a_p mod p. We begin from the Buchstab identity (5.5). The sieve upper bound (Lemma 3.1) and Mertens' theorem together imply a bound in terms of C_y := y/(S_{w_1} log y).
By (5.2) and (5.3) we have (5.7). Using (5.2) and the lower bound, inserting this bound into (5.6), we find that. The function $z(\log_3 x - \log z)$ is increasing for $z \le e^{-1}\log_2 x$; hence by (5.7) we have, and the stated result follows from (5.5).
Lemma 5.2 (Sieving for $w_2 < p \le w_3$). Let $w_2 := \log x \log_3 x$ and $w_3 := \log x\,(\log_2 x)^2$. Conditional on $A_{w_2}$ satisfying $S_{w_2} \ge \frac{1}{2}W_y$, we have

Proof. As in the previous lemma, we start with (5.8). The variables $X_p$ are independent with mean value zero, and by the sieve upper bound (Lemma 3.1) it follows that, for some absolute constant $c > 0$. Using Montgomery's large sieve inequality (see [12, Equation (9.18)] or [27]), we deduce (5.10). We apply Bennett's inequality (Lemma 3.3) with $t := S_{w_2}/(2\log_3 x)$. By (5.9), (5.10) and (5.3), we have, and therefore, where the last bound follows from (1.12) and our assumption that $S_{w_2} \ge \frac{1}{2}W_y$. Lemma 3.3 now shows that, for some constant $c > 0$. Thus, with probability at least $1 - O(x^{-100})$ we have, for sufficiently large $x$. Recalling (5.8), the proof is complete.

Lemma 5.3 (Sieving for $w_3 < p \le w_4$). Let $w_3 := \log x\,(\log_2 x)^2$ and $w_4 := y^{4/3}$. Conditional on $A_{w_3}$ satisfying $S_{w_3} \ge \frac{1}{4}W_y$, we have, with probability $1 - O(x^{-100})$.
Proof. The sequence $X_0, X_1, \ldots, X_m$ is a martingale, since, where we have used (1.12) in the last step. We apply Azuma's inequality (Lemma 3.2). In the case that $p_{j+1} \le y$, Lemma 3.1 shows that for any value of $R_{p_{j+1}}$ we have. Consequently, the lemma follows.

Random sieving by large primes
In this section, we adopt the notation of the previous section; however, we do not assume the inequalities (5.2) and (5.3), except in Corollary 6.2 below. We do assume that $y$ is sufficiently large. Sieving by large primes ($p > y^4$, say) is easier, because there is a relatively low probability that $S \cap R_p \ne \emptyset$, and we are able to deploy combinatorial methods.

Lemma 6.1 (Sieving for $w_4 < p \le w_5$). Let $v$ be a real number greater than $w_4 := y^{4/3}$, and let $\vartheta \in [y^{-1/4}, 1)$. Conditional on $A_{w_4}$, we have

Proof. Put $S := S_{w_4}(y)$, $\ell := |S| = S_{w_4}$, and let $P$ be the set of primes in $(w_4, v]$. The random residue classes $\{R_p : p \in P\}$ give rise to a bipartite graph $G$ with vertex sets $S$ and $P$, and with an edge connecting $s \in S$ and $p \in P$ if and only if $s \in R_p$ (i.e., $s \equiv a_p \bmod p$). Since $0 \le s \le y < w_4$, for every $p$ there is at most one vertex $s$ joined to it. For any $s \in S$, let $d(s)$ be its degree, and let $S^+$ be the set of vertices in $S$ of positive degree. Finally, we denote by $\mathbf{d}$ the vector $(d(s) : s \in S^+)$. In this manner, the random residue classes $\{R_p : p \in P\}$ determine a subset $S^+ \subset S$ and a vector $\mathbf{d}$.
For any subset $T = \{t_1, \ldots, t_m\}$ of $S$ and any vector $\mathbf{r} = (r_1, \ldots, r_m)$ with positive integer entries, let $E(T, \mathbf{r})$ be the event that the random graph $G$ described above has $S^+ = T$ and $\mathbf{d} = \mathbf{r}$. Since $S \subset [0, y]$ and $w_4 > y$, we have $|S \cap R_p| \le 1$ for all $p \in P$, and thus. Fixing the primes $p_1, \ldots, p_h \in P$ with $R_p \cap S \ne \emptyset$, there are $\binom{h}{r_1, \ldots, r_m}$ ways to choose the edges of the graph connecting the $p_i$ to $T$. Consequently,. Relaxing the conditions on the last sum in (6.1), we find that. For fixed $m$, there are $\binom{\ell}{m}$ choices for $T$; thus, summing over all $r_1, \ldots, r_m$, we conclude that (6.2) holds. The complete sum over $m$ of the right-hand side of (6.2) equals $Ve^U$, and the peak occurs when $m = \ell(1 - e^{-U}) + O(1)$. We also have. Standard large-deviation results for the binomial distribution (such as Lemma 3.2) imply that for any $\delta > 0$,. Recalling that $\ell := S_{w_4}$, we see that the inequality holds for all large $x$, since $w_4 := y^{4/3}$ and $\ell \le y$. Combining the results above, we conclude that $e^{-\vartheta^2\ell/8 + O(\ell^2/w_4)} \le e^{-\vartheta^2\ell/10}$ for all large $x$, and the proof is complete.
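The bipartite graph underlying the proof of Lemma 6.1 is easy to build explicitly. The toy sketch below uses hypothetical parameters ($y = 100$, a stand-in survivor set $S$ of multiples of 3, primes up to $5y$, a fixed seed; none of this matches the paper's actual ranges) and verifies the key structural fact that each prime $p > y$ is joined to at most one vertex of $S$:

```python
import random

def primes_in(lo, hi):
    """Primes p with lo < p <= hi, by trial division (fine at this scale)."""
    return [n for n in range(max(lo + 1, 2), hi + 1)
            if all(n % d for d in range(2, int(n ** 0.5) + 1))]

rng = random.Random(2)
y = 100
S = list(range(0, y + 1, 3))   # stand-in for the survivor set S_{w_4}(y)
P = primes_in(y, 5 * y)        # "large" primes p > y

# One random class R_p per prime; edge s -- p iff s lies in R_p.
a = {p: rng.randrange(p) for p in P}
neighbors = {p: [s for s in S if s % p == a[p]] for p in P}
degree = {s: sum(1 for p in P if s % p == a[p]) for s in S}
S_plus = [s for s in S if degree[s] > 0]   # vertices of positive degree
max_prime_degree = max(len(v) for v in neighbors.values())
print(len(S_plus), max_prime_degree)
```

Since every $s \in S$ satisfies $0 \le s \le y < p$, a class $a_p \bmod p$ meets $[0, y]$ in at most one point, so `max_prime_degree` never exceeds 1.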
Combining Lemmas 5.1, 5.2, 5.3 and 6.1 (with $v := y^8$ and $\vartheta := y^{-1/10}$), we obtain the following result.

Corollary 6.2 (Sieving for $w_1 < p \le w_5$). Assume (5.2), and let $w_1 := (y/\log y)^{1/2}$ and $w_5 := y^8$. Conditional on $A_{w_1}$, we have with probability $1 - O(x^{-100})$ that

Our next result is a very general tool for handling primes larger than $y^4$.

Lemma 6.3 (Sieving for $w_5 < p \le z$, I). Let $w \ge y^4$ and let $P$ be a set of primes larger than $w$ such that $\sum_{p \in P} 1/p \le 1/10$. Let $S \subseteq \mathcal{S}_w$ with $|S| \le 10y$, and such that for every $p \in P$ the elements of $S$ are distinct modulo $p$. Conditional on $A_w$, we have, for all $0 \le g \le |S|$: where

Proof. Put $\ell := |S|$, and assume that $\ell \ge 1$ (the case $\ell = 0$ being trivial). Take $m := \ell - g$, and let $T$, $\mathbf{r}$, $E(T, \mathbf{r})$ and $h$ be defined as in Lemma 6.1, with $|T| = m = \ell - g$. As before (see (6.1)), we have. For any prime $p \in P$ the elements of $S$ lie in distinct residue classes modulo $p$; this implies that $P_P(E(T, \mathbf{r}))$ can be nonzero only when $m = h$ in (6.4) (that is, when every $r_j = 1$). Let $T_h$ be the sum over $p_1, \ldots, p_h$ in (6.4). Then. Also, note that. Hence, summing over all vectors $\mathbf{r}$, we find that. This completes the proof.
Corollary 6.4 (Sieving for $w_5 < p \le z$, II). Uniformly for $y^4 \le w \le z^{1/2}$, we have

Proof. Let $\Theta := \Theta_{w,z}$. By Lemma 6.3, with $S := \mathcal{S}_w \cap [0, y]$ and $P$ the set of primes in $(w, z]$, we have

The next lemma has a weaker conclusion than Lemma 6.3, but it is more general, and it is needed for a second moment argument below, in which we derive a lower bound for the largest prime gap in $[0, x]$.

Lemma 6.5 (Sieving for $w_5 < p \le z$, III). Let $w$ and $z$ be real numbers with $y^8 \le w \le z^{1/2}$. Let $S \subset \mathcal{S}_w \cap [0, e^y]$ with $|S| \le y$, and suppose that for every prime $p > w$, no more than two numbers in $S$ lie in any given residue class modulo $p$. Then

Proof. Put $\ell := |S|$, let $P$ be the set of primes in $(w, z]$, and put $Q := \{p \in P : p \mid s - s' \text{ for some } s, s' \in S,\ s \ne s'\}$.

Since every element of $S$ lies in $[0, e^y]$, each nonzero difference $s - s'$ has at most $y/\log w$ prime factors exceeding $w$, and hence

$|Q| \le \binom{\ell}{2}\,\frac{y}{\log w} \le y^3$ (6.5)

holds if $y$ is large enough.
By assumption, for every $p \in Q$ we have $|S \cap R_p| \le 2$. Let $E_m$ be the event that $S \cap R_p \ne \emptyset$ holds for precisely $m$ primes $p \in Q$. Since for any prime $p \in P$ the probability that $S \cap R_p \ne \emptyset$ does not exceed $\ell/p$, we have, using (6.5), the bound (6.6). Now $P_Q(E_0) = 1 - O(y^4/w)$ by (6.6), so we conclude that. This completes the proof.

The behavior of the largest gap
In this section, we use the estimates from the previous sections to complete the proof of Theorem 1.1. In Theorems 7.1 and 7.2 below, we suppose that (7.1) holds. We also note that $u < W_{g(u)+1}\,\log(g(u) + 1) \le (W_{g(u)} + 1)\log(g(u) + 1)$.
Let $z := z(x)$. The probability that $R \cap [0, x]$ contains a gap of size at least $y$ does not exceed the probability that $\mathcal{S}_z \cap [0, x]$ contains such a gap, which in turn is at most. Let $w_1 := (y/\log y)^{1/2}$ and $w_5 := y^8$ as before, and put $\eta := \frac{\log_4 x}{\log_3 x}$. Applying Corollary 6.2 together with (7.3), it follows that with probability $1 - O(x^{-100})$ we have $S_{w_5} \le \frac{(1 + 2\varepsilon/3)\,\xi(\log x)^2}{32\log_2 x}$, using (7.1) in the final step. Fix $A_{w_5}$ so that $S_{w_5}$ satisfies this inequality. Taking into account that $\Theta_{w_5,z} = \frac{32\log_2 x}{\xi\log x}\left(1 + O\!\left(\frac{1}{\log_2 x}\right)\right)$, Lemma 6.3 now shows that, as required.
Let $z := z(x/2)$, $w_1 := (y/\log y)^{1/2}$, $w_5 := y^8$ and $\eta := \frac{\log_4 x}{\log_3 x}$. In particular, $z \sim (x/2)^{1/e^\gamma}$ by (1.7). It suffices to show that with high probability $\mathcal{S}_z \cap (x/2, x]$ contains a gap of size at least $y$, for this implies that $R$ has a gap of size at least $y$ within $[0, x]$. For the sake of brevity, we write. That is, $F(u, v)$ counts the elements of $[u, u + y]$ that survive sieving by the primes $\le v$; in particular, $S_w = F(0, w)$. There is some vector $(b_p)_{p \le w_1}$ such that exactly $W_y$ integers in $[0, y]$ avoid the residue classes $(b_p \bmod p)_{p \le w_1}$. Let $E$ be the event that this bound holds for every $u \in U$. By the union bound, $P_{w_1,w_5}(E) \ge 1 - O(x^{-99})$. Conditioning on $E$, we put $U_r := \{u \in U : F(u, w_5) = r\}$ for each $r \ge 0$.
The sets $U_r$ depend only on $A_{w_5}$, and $U_r = \emptyset$ unless $r = (\frac{1}{16} + O(\eta))W_y$, by (7.6). Rather than work with all $r$, we focus on a popular value of $r$; thus, let $r_0$ be fixed with the property that $|U_{r_0}| \ge |U_r|$ for all $r$. By (7.5), we have. Combining (7.4) with (7.6) and (7.1), we have. Next, let $M := \#\{u \in U_{r_0} : F(u, z) = 0\}$, which counts those intervals indexed by $u \in U_{r_0}$ for which $\mathcal{F}(u, w_5)$ is covered by $\bigcup_{w_5 < p \le z} R_p$. We analyze $M$ using first and second moments. First, by Lemma 6.3,. To bound the second moment of $M$, we apply Lemma 6.5 with $S := \mathcal{F}(u, w_5) \cup \mathcal{F}(u', w_5)$, where $u$ and $u'$ are distinct elements of $U_{r_0}$. The hypotheses of Lemma 6.5 are satisfied, since any residue class modulo a prime $p > w_5 > y$ contains at most two elements of $S$ (at most one from each of the two intervals). We obtain. By (7.7), (7.8) and (7.9) we have, for large $x$, and hence we bound the variance by. Thus, Chebyshev's inequality implies. In particular, with probability at least $1 - O(y^{-4}) = 1 - O((\log x)^{-8})$ there is an interval $[u, u + y] \subset (x/2, x]$ completely sieved out by $A_z$.
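The first/second moment step rests on the classical bound $P(M = 0) \le \mathrm{Var}(M)/\mathbb{E}[M]^2$, a consequence of Chebyshev's inequality. The following toy computation replaces $M$ by an exactly computable binomial variable (an assumption made purely for illustration; the real $M$ is only approximately binomial) and checks the bound numerically:

```python
from math import comb

def binom_pmf(n, p, k):
    """Exact Binomial(n, p) probability mass at k."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Toy stand-in: M ~ Bin(n, p) plays the role of the count M of fully
# sieved intervals.
n, p = 50, 0.1
mean = n * p                 # E[M]
var = n * p * (1 - p)        # Var(M)

p_zero = binom_pmf(n, p, 0)              # exact P(M = 0)
second_moment_bound = var / mean ** 2    # Chebyshev: P(M = 0) <= Var(M) / E[M]^2
print(p_zero, second_moment_bound)
```

Here the exact probability $0.9^{50} \approx 0.005$ is comfortably below the second-moment bound $0.18$; the same inequality, applied with the moments supplied by Lemmas 6.3 and 6.5, drives the argument in the text.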
Proof of Theorem 1.1. Let $x_j := 2^j$, where $j$ varies over the positive integers, and let $\varepsilon > 0$ be fixed. Theorem 7.1 implies that for large $j$ we have. The convergence of $\sum_j x_j^{-\varepsilon/2}$ implies, via the Borel-Cantelli lemma, that almost surely there is a $J$ such that $G_R(x_j) \le g((1 + \varepsilon)\,\xi \log_2 x_{j-1})$ for $j \ge J$.
As $G_R$ and $g$ are both increasing functions, the above relation implies that for all $x_{j-1} < x \le x_j$ and $j > J$ we have. In a similar manner, Theorem 7.2 and Borel-Cantelli imply that almost surely there is a $J$ such that. As before, this implies that
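For a concrete sense of the quantity $G_R(x)$, the sketch below computes the largest gap in a small simulated sieve (toy parameters $x = 5\cdot10^4$, sieving only by primes up to 100, and a fixed seed; this is far from the true cutoff $z(x)$), together with the average gap, which the largest gap must always meet or exceed:

```python
import random

def largest_gap(A):
    """G_A(x): the largest difference between consecutive elements of A."""
    A = sorted(A)
    return max((b - a for a, b in zip(A, A[1:])), default=0)

def small_primes(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

rng = random.Random(7)
x, w = 50_000, 100
# Survivors of random sieving by primes up to w only (a toy stand-in
# for sieving up to the true cutoff z(x)).
classes = {p: rng.randrange(p) for p in small_primes(w)}
A = [n for n in range(x + 1) if all(n % p != a for p, a in classes.items())]

G = largest_gap(A)
avg_gap = (max(A) - min(A)) / (len(A) - 1)
print(G, avg_gap)
```

Since the gaps between consecutive survivors sum to $\max A - \min A$, the largest gap is at least the average gap; the interesting content of Theorem 1.1 is how much larger it typically is.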

Large gaps from Hardy-Littlewood
To prove Theorems 1.5 and 1.6, we start with a simple inclusion-exclusion result (a special case of the Bonferroni inequalities, or the "Brun pure sieve").

Lemma 8.1 (Brun's sieve). Suppose that $y \ge 1$, let $N$ and $A$ be sets of positive integers, and define $T$ and $U_K$ as follows. Then, for any even $K$ we have $T \le U_K$, and for any odd $K$ we have $T \ge U_K$.
Proof. For any integers $K, m \ge 0$, let. Observe that, and the lemma is proved.
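Lemma 8.1's alternating bounds can be checked numerically. The sketch below uses a toy setup (sifting $[1, 1000]$ by five small primes, not the sets $N$, $A$ of the lemma): it computes the truncated inclusion-exclusion sums $U_K$ and compares them with the exact sifted count $T$:

```python
from itertools import combinations
from math import prod

def sifted_count(N, primes):
    """T: integers in [1, N] divisible by none of the given primes."""
    return sum(1 for n in range(1, N + 1) if all(n % p for p in primes))

def truncated_inclusion_exclusion(N, primes, K):
    """U_K = sum_{k <= K} (-1)^k * sum_{|I| = k} floor(N / prod(I))."""
    return sum((-1) ** k * sum(N // prod(I) for I in combinations(primes, k))
               for k in range(K + 1))

N, primes = 1000, [2, 3, 5, 7, 11]
T = sifted_count(N, primes)
U = [truncated_inclusion_exclusion(N, primes, K) for K in range(len(primes) + 1)]
print(T, U)
```

Each even-$K$ truncation overshoots $T$ and each odd-$K$ truncation undershoots it, exactly as the lemma asserts, with equality once all terms are included.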
Proof of Theorem 1.5. Although Theorem 1.5 concerns the behavior of a specific set $A$, our first task is to express the gap-counting function for $A$ in terms of the random quantities with which we have been working in the past few sections. First, observe that (1.17) with $H = \{0\}$ implies that, and it follows trivially that $G_A(x) \gg \log x$. Therefore, by adjusting the implied constant in the conclusion of the theorem, we may assume that, for a sufficiently large constant $D$.
Let $x$ be a large real number, put $N := [x/2, x]$, and let $y$ and $K$ be integer parameters to be chosen later, with $K$ odd and $K \le \frac{\kappa\log x}{2\log_2 x}$. Define $T$ and $U_K$ as in Lemma 8.1. Since $T \ge U_K$ by Lemma 8.1, our aim is to show that $U_K \ge 1$. Using (1.17), we see that

Proof of Theorem 1.6. Again, let $N := (x/2, x]$, and define $C$ as in the previous proof. We apply Lemma 8.1 with. Now put $w_1 := (y/\log y)^{1/2}$, and let $A_{w_1}$ be fixed such that $S_{w_1} = W_y$. This occurs with probability $x^{-o(1)}$, since $(y/\log y)^{1/2} = o(\log x)$ by (8.6). Conditional on $A_{w_1}$, Corollary 6.2 implies that with probability at least $1 - O(x^{-100})$ we have $S_w = (\frac{1}{16} + O(\eta))W_y$, where we have used (8.5) in the last step. Inserting this last estimate into (8.10), we conclude that

$P_z(S_z = 0) \ge e^{-(1+O(\eta))(1-\varepsilon)c\log x} \ge x^{-(1-\varepsilon/2)c}$. (8.11)

In particular, the right-hand side of (8.11) has larger order than the right-hand sides of (8.8) and (8.9). Thus, inserting (8.8), (8.9) and (8.11) into (8.7), we conclude that $U_K \ge 1$ if $x$ is sufficiently large depending on $\varepsilon$. By a simple diagonalization argument, the same claim then holds for some $\varepsilon = \varepsilon(x) = o(1)$ going to zero sufficiently slowly as $x \to \infty$. This completes the proof of Theorem 1.6.

The influence of exceptional zeros
In this section, we show that the existence of exceptional zeros implies that W y is rather smaller than the upper bound in (1.12) infinitely often.
Proof of Theorem 9.1. By Siegel's theorem [7, §21], for any $\varepsilon > 0$ we have $\delta_q \gg_\varepsilon q^{-\varepsilon}$. We conclude that $\pi(qy + 1; q, 1) \ll \delta_q\,\frac{qy}{\varphi(q)}$; this may also be deduced from Gallagher's prime number theorem [13, Theorem 7]. Define the residue classes $a_p$ by $qa_p + 1 \equiv 0 \bmod p$ for $p \nmid q$, and let $T$ denote the set of $n \le y$ with $n \not\equiv a_p \bmod p$ for all $p \nmid q$ with $p \le (y/\log y)^{1/2}$. Then for any $n \in T$, the number $qn + 1$ is either prime or a product of two primes exceeding $(y/\log y)^{1/2}$. We then make a greedy choice of $a_p$ for $p \mid q$, successively choosing $a_p$ so that the class $a_p \bmod p$ covers a proportion at least $1/p$ of the remaining elements of $T$. This shows that

$W_y \le \frac{\varphi(q)}{q}|T| \le \frac{\varphi(q)}{q}\Big(\pi(qy + 1; q, 1) + \sum_{(y/\log y)^{1/2} < p \le \sqrt{qy+1}} \pi\big(\tfrac{qy+1}{p};\, q,\, \bar{p}\big)\Big),$

where $\bar{p}$ is the inverse of $p$ modulo $q$. Siegel's theorem implies that $\log y \le q^{o(1)}$. Applying the Brun-Titchmarsh theorem to the sum over $p$, we see that

$W_y \ll \frac{\varphi(q)}{q}\Big(\frac{qy\,\delta_q}{\varphi(q)} + \frac{qy\,\log(q\log y)}{\varphi(q)\log^2 y}\Big) \ll \Big(\delta_q + \frac{\log q}{\log^2 y}\Big)y \ll \delta_q\,y.$

This completes the proof.
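The greedy choice of $a_p$ in the proof rests on the pigeonhole principle: some residue class modulo $p$ contains at least a $1/p$ proportion of the remaining set, so removing it shrinks the set by at least that factor. A minimal sketch (hypothetical starting set $\{1, \ldots, 499\}$ and primes 2, 3, 5, 7, chosen only for illustration):

```python
from collections import Counter

def greedy_remove(T, p):
    """Remove the residue class mod p containing the most elements of T.
    By pigeonhole, that class holds at least |T|/p of the elements."""
    counts = Counter(t % p for t in T)
    a, _ = counts.most_common(1)[0]
    return [t for t in T if t % p != a], a

history = []
T = list(range(1, 500))    # hypothetical starting set
for p in [2, 3, 5, 7]:     # stand-ins for the primes dividing q
    before = len(T)
    T, a = greedy_remove(T, p)
    history.append((p, before, len(T)))
print(history, len(T))
```

After processing all the primes, at most a $\prod_p (1 - 1/p)$ fraction of the original set survives, which is the source of the factor $\varphi(q)/q$ in the bound for $W_y$.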
Proof of Theorem 2.2. Let $q \in Q$, and apply Theorem 9.1 with $y = y_q$ defined by (9.1). By assumption, $\frac{\log y_q}{\log q} \to \infty$ as $q \to \infty$, and hence $\delta_q = \frac{\log q}{\log^2 y_q} = o\Big(\frac{1}{\log y_q}\Big)$.
This shows that $W_{y_q} = o(y_q/\log y_q)$, and the remaining parts of Theorem 2.2 follow immediately.

$S(H)\displaystyle\int_2^x \frac{dt}{\log^{|H|} t}$ (1.4) holds for any finite set $H \subset \mathbb{Z}$, where $S(H)$ is the singular series given by $S(H) := \prod_p \Big(1 - \frac{|H \bmod p|}{p}\Big)\Big(1 - \frac{1}{p}\Big)^{-|H|}$.

2.7. The maximal gap in Granville's model. The claimed bounds in Theorem 1.1 are also satisfied by Granville's random set $G$, i.e., one has. If $Q$ is the set of primes in $(c, d]$, we write $A_{c,d}$, $P_{c,d}$ and $E_{c,d}$; if $Q$ is the set of primes $\le c$, we write $A_c$, $P_c$ and $E_c$. In particular, $P_{c,d}$ refers to the probability over random $A_{c,d}$, often with conditioning on $A_c$. Throughout the paper, any implied constants in the symbols $O$, $\ll$ and $\gg$ are absolute (independent of all parameters) unless otherwise indicated. The notations $F \ll G$, $G \gg F$ and $F = O(G)$ are all equivalent to the statement that the inequality $|F| \le c|G|$ holds with some constant $c > 0$. We write $F \asymp G$ to indicate that $F \ll G$ and $G \ll F$ both hold. The notation $o(1)$ indicates a function that tends to zero as $x \to \infty$, as in expressions like $1 - o(1)$.

Lemma 3.1 (Upper bound sieve, [30, Theorem 3.8]). For $1 \le w \le p \le y$ with $p$ prime, $b \in \mathbb{Z}/p\mathbb{Z}$, and an arbitrary interval $I$ of length $y$, we have, uniformly, $\#\{n \in I : n \equiv b \bmod p,\ (n, q_w) = 1\}$

Proposition 4.1 (First and second moment bounds). Suppose that $x$ and $y$ are integers with $x \ge 3$ and $\sqrt{x} \le y \le x$, and suppose that $0 \le D \le \sqrt{x}$. Let $H \subset [0, D]$ be an admissible tuple with $k := |H| \le \frac{\log x}{(\log_2 x)^2}$, and put $X_n := \sum_{h \in H} 1_R(n + h)$ for $n \in \mathbb{N}$.
Assume that the event $E_m$ occurs, and fix $A_Q$. If $S$ has precisely $n$ elements covered by $\bigcup_{p \in Q} R_p$, then $0 \le n \le 2m$, the upper bound being a consequence of our hypothesis on $S$. Put $S' := \{s \in S : s \notin R_p \text{ for all } p \in Q\}$, so that $|S'| = \ell - n$. Lemma 6.3 implies that, where $E := \frac{Kx^{1-\kappa}}{y} + \frac{1}{K}$. By Lemma 3.5, replacing $S(H)/\log^k t$ with $V_H(z(t))$ induces an additive error of size $O(E)$, since $\kappa \le 1/2$. Also, (1.8) implies that. Since $K$ is odd, the sum on $k$ is a lower bound for $P(S_{z(t)} = 0)$; adding the term $k = K + 1$ switches the inequality (cf. the proof of Lemma 8.1), and thus. Let $w := y^4$ and $z := z(x/2)$. The upper bound sieve (Lemma 3.1) implies the crude bound $S_w \le Cy/\log y$ for some absolute constant $C$. We now put. Corollary 6.4 and the crude bound $\Theta_{w,z} \le \frac{8\log y}{\log x}$ imply that $S(H)\,dt + O(E)$ for all large $x$, where we used (8.3) in the last step. It remains to show that $P_{z(t)}(S_{z(t)} = 0)$ is substantially larger, which Lemma 6.3 implies immediately. Let $w := y^8$ and fix $A_w$. By Lemma 6.3 we have $P(S_{z(t)} = 0) - E_{z(t)}\binom{S_{z(t)}}{K+1}\,dt + O(E)$, (8.7) where, because the function $S_{z(t)}(H)$ appears already in (1.18), as does the averaging over $H$, we have