SMOOTH NUMBERS AND THE DICKMAN ρ FUNCTION

We establish an asymptotic formula for Ψ(x, y) whose shape is xρ(log x/log y) times correction factors. These factors take into account the contributions of zeta zeros and prime powers, and the formula can be regarded as an (approximate) explicit formula for Ψ(x, y). With this formula at hand we prove oscillation results for Ψ(x, y), which resolve a question of Hildebrand on the range of validity of Ψ(x, y) ≍ xρ(log x/log y). We also address a question of Pomerance on the range of validity of Ψ(x, y) ≥ xρ(log x/log y). Along the way we improve classical estimates for Ψ(x, y) and, on the Riemann Hypothesis, uncover an unexpected phase transition of Ψ(x, y) at y = (log x)^{3/2+o(1)}.


1. Introduction
A positive integer is called y-smooth if each of its prime factors does not exceed y. We denote the number of y-smooth integers not exceeding x by Ψ(x, y). We assume throughout that x ≥ y ≥ 2. Let ρ : [0, ∞) → (0, ∞) be the Dickman function, defined as ρ(t) = 1 for t ≤ 1 and via the delay differential equation tρ′(t) = −ρ(t − 1) for t > 1. Dickman [8] showed that

(1.1) Ψ(x, y) ∼ xρ(log x/log y)  (x → ∞)

holds when y ≥ x^ε. For this reason, it is useful to introduce the parameter u := log x/log y.
In fact, in this range he proves the slightly stronger estimate Ψ(x, y) = xρ(u) exp(O_ε(log(u + 1)/log y)).
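For illustration (not part of the argument), ρ can be computed numerically straight from the definition above; the following sketch (the function name and step scheme are ours) integrates the delay differential equation with a fixed-step trapezoidal rule.

```python
import math

def dickman_rho(u, m=2000):
    """Evaluate Dickman's rho by integrating t*rho'(t) = -rho(t-1),
    with rho(t) = 1 for t <= 1, using a trapezoidal rule with m grid
    points per unit length."""
    if u <= 1:
        return 1.0
    h = 1.0 / m
    n_steps = int(math.ceil((u - 1.0) * m))
    vals = [1.0]                     # vals[j] approximates rho(1 + j*h)

    def lagged(j):                   # rho((1 + j*h) - 1) = rho(j*h)
        return 1.0 if j <= m else vals[j - m]

    for j in range(n_steps):
        t = 1.0 + j * h
        # trapezoidal step for rho(t+h) = rho(t) - \int_t^{t+h} rho(s-1)/s ds
        vals.append(vals[-1] - 0.5 * h * (lagged(j) / t + lagged(j + 1) / (t + h)))
    return vals[-1]

print(dickman_rho(2.0))   # close to 1 - log 2 ≈ 0.3069
```

On [1, 2] the equation integrates to ρ(u) = 1 − log u, so ρ(2) = 1 − log 2, which the scheme reproduces to several digits.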
The key input to (1.4) is Landau's Oscillation Theorem; see Remark 2 for a refinement. In [16, 22], Pomerance asked whether Ψ(x, y) ≥ xρ(u) holds for all x/2 ≥ y ≥ 1. In [16, p. 274], Granville proved that Ψ(x, y) > 2xρ(u) holds for y ≤ c(log x)(log log x)/log log log x if x ≥ C. Below we extend Granville's range considerably. Moreover, if RH is true, we show that Ψ(x, y) ≥ xρ(u) holds when y ∈ [(log x)^{2−ε}, (log x)^{2+ε}]. For y near (log x)^2, the question lies beyond RH in a precise sense, but we indicate that a positive answer follows from a conjecture of Montgomery and Vaughan on the size of the remainder term in the PNT. If RH is false, the Ω_− result in (1.4) applies instead.

For y ≥ 2 and ℜs > 0 we define the partial zeta function ζ(s, y) := ∏_{p≤y} (1 − p^{−s})^{−1}. We define the entire function I(s) := ∫_0^s (e^v − 1)v^{−1} dv. We denote the Euler-Mascheroni constant by γ. The Laplace transform of ρ is given by [4] as ∫_0^∞ ρ(t)e^{−st} dt = e^{γ+I(−s)}. If a meromorphic function has a removable singularity (e.g. ζ(s)(s − 1) at s = 1), we identify its value there with its limit. When we differentiate a bivariate function (e.g. ζ(s, y)) we always do so with respect to the first variable. In sums and products over p, p is understood to be prime. Throughout, ε > 0 is an arbitrary fixed constant.

2. A formula and its investigation

2.1. On two saddle points. Let f : ℝ → ℝ be given by f(t) := t log x + log(F_2(t, y)).

It satisfies
The function f is convex since I′′(−r) = r^{−2}e^{−r}(e^r − (r + 1)) > 0. We define β := 1 − ξ(u)/log y for u = log x/log y, where for u ≥ 1, ξ(u) denotes the largest real solution of e^ξ = 1 + uξ (so that ξ(1) = 0). We often shorten ξ(u) to ξ when no confusion may arise. The function f attains its global minimum at β because it is readily verified that f′(β) = 0. The asymptotics of β are well understood thanks to the next lemma.
The following lemma is useful in simplifying the order of magnitude of y^α and y^β.

Lemma 2.5. For x ≥ y ≥ 2 we have y^{1−β} ≍ u log(u + 1). For x ≥ y ≥ ε log x we have y^{1−α} ≍_ε u log(u + 1).
Proof. The estimate for y^{1−β} follows from the definition of β and Lemma 2.1; the estimate for y^{1−α} then follows from it by Lemma 2.4.

The following identities are tautological in their domain of definition: (2.11). Alladi proved [2, Eq. (3.9)] an estimate which can be phrased as (2.12). Dividing (2.8) by (2.12) and employing (2.11), we obtain a corresponding formula for Ψ(x, y). When y/log x → ∞, (2.4) and (2.10) show α/β ∼ 1 and K(α − 1) ∼ K(β − 1) ∼ K(−log log x/log y). If further u → ∞, then B(x, y) ∼ 1 by (2.9) and (2.3). Lemma 2.6 then implies (1.1) as long as u and y/log x both tend to ∞. The savings in (2.8) and (2.12) are sharp. We can show that the lower-order terms within the error terms are close, which allows us to prove in §7.4 the following.

Proposition 2.7. If y > 1 + log x then the error terms in Lemma 2.6 are O(1/(α log x)).
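For concreteness, the saddle point α can be computed from its defining relation ∑_{p≤y} log p/(p^α − 1) = log x (used again in §7) by bisection, since the left-hand side is strictly decreasing in α. A self-contained sketch (function names are ours):

```python
import math

def primes_upto(y):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (y + 1)
    sieve[0:2] = b"\x00\x00"
    for d in range(2, int(math.isqrt(y)) + 1):
        if sieve[d]:
            sieve[d * d::d] = bytearray(len(range(d * d, y + 1, d)))
    return [p for p in range(2, y + 1) if sieve[p]]

def saddle_alpha(x, y):
    """Solve sum_{p <= y} log p / (p^alpha - 1) = log x for alpha by
    bisection; the left-hand side strictly decreases in alpha."""
    ps = primes_upto(y)
    target = math.log(x)

    def g(a):
        return sum(math.log(p) / (p ** a - 1.0) for p in ps)

    lo, hi = 1e-9, 3.0          # g(lo) is huge, g(hi) < target for our x
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For x = 10^8, y = 100 this produces a root α ∈ (0, 1), consistent with the argument in §7 that α < 1 once x is large compared to y.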
Working in a zero-free region of ζ, we choose the following logarithms of ζ(s, y) and F(s, y), where log(ζ(s)(s − 1)) is chosen to be real-valued for s > 1. In further using Lemma 2.6 it will be of crucial importance to split G as G = G_1 G_2. We compute the Mellin transform of log G_1(s, y) (Proposition 5.1) and then, via Landau's Oscillation Theorem, obtain the following lemma, essential to the proof of Theorem 1.1.
Lemma 2.8 is proved in §5.1. For s = 0 it goes back to Landau. For s = 1 it is due to Diamond and Pintz [7], and our standard proof follows theirs.
The function log G_1(s, y) and its derivatives with respect to s can be expressed as sums over zeros of ζ (Corollaries 5.4-5.5). This allows us to prove in §5.3 the following.

Lemma 2.9. Fix i ≥ 0. For ε − 2 ≤ s ≤ 1/ε and x ≥ 4 we have the corresponding estimates.

In §5.4 we prove the following illuminating representations of log G_1.
and the same holds with K(α − 1) suitably replaced. We zoom in on the behavior at y ≍ log x, which was considered by Erdős [9] (who studied y = log x), Erdős and van Lint [10], de Bruijn [6] and Granville [14]. In [14] it is explained why "the real difficulty lies in this range".
Proof. We require the following identity [7, Lem. 2.2]: (5.1), valid for ℜs > max{1 − s_0, 0}. For any constant b we have {Mb}(s) = s^{−1}b. For A_3, first suppose s_0 < 1. We shall use the identity (5.2) from [7]. To verify (5.2) we check that both sides have the same derivative and tend to 0 as ℜz → ∞. Applying (5.2) with z = (s_0 + s − 1)/(1 − s_0) and substituting t = x^{1−s_0}, we obtain (5.3) for ℜs > max{1 − s_0, 0}, where in the last equality we integrated by parts. Substituting v = e^{u/(1−s_0)} we find (5.4). If s_0 = 1 then A_3 ≡ 0 and (5.4) still holds. If s_0 > 1, the left-hand side of (5.3) still agrees with its right-hand side by the uniqueness principle, and so (5.4) persists. For A_2, we apply (5.1) with x^a in place of x, where T is as in the statement of the proposition. We have {M log a}(s) = s^{−1} log a. By (5.3) with s_0 = 2, with s/a in place of s, and with the substitution x = u^a, we find the analogous formula. We now sum the Mellin transforms obtained above.

To establish Lemma 2.8, fix ε > 0 and s > −2. Suppose that log G_1(s, x) < x^{ϑ−s−ε} (resp. log G_1(s, x) > −x^{ϑ−s−ε}) for x ≥ C_{ε,s}. We reach a contradiction by applying Landau's Oscillation Theorem [23, Lem. 15.1] to an eventually positive function.

5.2. Explicit formulas for G_1. Given x > 0 and s ∈ ℂ we let S_1(x, s) := ∑′_{n≤x} Λ(n)n^{−s}, where the prime on the summation indicates that if x is a prime power, the last term of the sum should be multiplied by 1/2. Landau [21] established an explicit formula for S_1(x, s) when ζ(s) ≠ 0. In §A we establish a truncated version of it, stated in the lemma below. We denote by x′ the prime power closest to x not equal to x, and set ⟨x⟩ := |x − x′|.

Lemma 5.2. Suppose ζ(s) ≠ 0. For x ≥ 4 and T ≥ 2 + |ℑs| we define R_1(x, T, s) by the expression below, in which the first sum is over non-trivial zeros ρ of ζ.

Next we need the identity [1, p. 228] (5.7), where log z is chosen to be real-valued for z > 0 (it is in fact equivalent to (5.1)).
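As a sanity check on such explicit formulas, S_1(x, s) can be evaluated by brute force from its definition; a minimal sketch (trial-division Λ and our function names), including the halving convention when x itself is a prime power:

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k (k >= 1), else 0; trial division
    suffices for the small n used here."""
    if n < 2:
        return 0.0
    m = n
    for d in range(2, int(math.isqrt(n)) + 1):
        if m % d == 0:
            while m % d == 0:
                m //= d
            return math.log(d) if m == 1 else 0.0
    return math.log(n)               # no divisor found: n is prime

def S1(x, s=0.0):
    """S_1(x, s) = sum'_{n <= x} Lambda(n) n^{-s}; the last term is
    halved when x is a prime power (the primed-sum convention)."""
    total = 0.0
    for n in range(2, int(x) + 1):
        lam = von_mangoldt(n)
        if lam:
            total += (0.5 if n == x else 1.0) * lam * n ** (-s)
    return total
```

For s = 0 this is the Chebyshev function ψ(x) (with the boundary convention); e.g. S_1(10, 0) = 3 log 2 + 2 log 3 + log 5 + log 7.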
where log(ζ(z)(z − 1)) is real-valued for z > 1 and defined on {s + t : t ≥ 0}.

Proof. We start with an integral identity (cf. [25, Prop. 1]). We integrate both sides of (5.5) along {s + t : t ≥ 0}. We may interchange sum and integral because the sum over ρ is finite, while the integral of the k-sum converges absolutely. It remains to show that the boundary term vanishes. The substitution (1 − s − t) log x = v allows us to evaluate the integral, and the required limit then follows from (5.7). From the definition of log G_1 and Lemma 5.2 we deduce the following.

Corollary 5.4. Suppose further that ζ(s + t) ≠ 0 for t ≥ 0. Then, with R_0 estimated in (5.8), the stated expansion holds.

Applying Cauchy's integral formula to the first part of Corollary 5.4 we get the following.

Corollary 5.5. Fix i ≥ 2 and a > 0. Let x ≥ 4. Suppose that ζ(z) ≠ 0 for |z − s| ≤ a/log x. Then the stated bound holds for T ≥ 2 + |ℑs| + a/log x.

5.3. Proof of Lemma 2.9. We explain the case i = 0; the general case is similar. Under our assumptions, the relevant bound holds for every zero of ζ. For the first estimate we apply Corollary 5.4 with T = L(x)^c and use the Vinogradov-Korobov zero-free region to bound the sum over the zeros. For the second estimate we apply Corollary 5.4 with T = x. Since ∑_ρ 1/|ρ|^2 converges [23, Thm. 10.13], we have ∑_{|ρ|≤x} |x^ρ/ρ^2| ≪ x^ϑ. We conclude by Lemma 5.2 with s = 0; the last inequality used there is Exercise 1 in [23, p. 430].

6. Study of G_2: proof of Proposition 2.12
The PNT with error term shows, via integration by parts, that for s ∈ [0, 1] we have (6.1). For x ≠ 0 let Ei(x) be the exponential integral, to be understood in the principal value sense. By performing the change of variables v = (1 − 2s) log t in (6.1), we see that it can be expressed through Ei.

Proof. If x ≪_ε 1 then log G_{2,2}(s, x) ≪_ε 1, so we may assume x ≥ C_ε. In the same way we showed (6.1), we find that the contribution of k = 3 to log G_{2,2}(s, x) is acceptable, so we omit this case. We consider the contribution of k ≥ max{2/s, log_2 x} (base-2 logarithm) to log G_{2,2}. For such k we obtain (6.2), a bound ≪ x^{−s}, where in the last inequality we used s ≥ ε/log x. This is an acceptable bound. Estimate (2.17) for i = 0 is a direct consequence of (6.1) and Lemma 6.1; the case i > 0 is similar and so is omitted. We now assume 1/4 ≥ s ≥ ε/log x and establish (2.18). The contribution of k ≥ max{2/s, log_2 x} to log G_2 is ≪ x^{−s} as in (6.2). We now consider 2 ≤ k < max{2/s, log_2 x}. If x^{1/k} < p ≤ √x we get a contribution of ≪_ε x^{1/2−s}, similarly to (6.3). We handle 2 ≤ k < max{2/s, log_2 x} and √x < p ≤ x by the PNT and integration by parts; to bound the resulting contribution we may extend the k-sum within the integral to the full range.

7. Proofs of Lemma 3.5 and Proposition 2.7

Lemma 7.1. Fix 2 ≤ k ≤ 5. Suppose x ≥ C_ε. Let I be the closed interval with endpoints α and β. For t ∈ I, (7.1) holds.

Proof. Suppose x ≥ y ≥ (1 + ε) log x. As shown in Lemma 4 of [20], g^{(k)} can be written in terms of a polynomial Q_{k−1} of degree k − 1 with non-negative coefficients, so (−1)^k g^{(k)}(t) is positive and monotone for t > 0. By the same lemma, g^{(k)}(α) ≍ (−1)^k (log x)(log y)^{k−1} for x ≥ y ≥ log x. It remains to show that g^{(k)}(β) is also of order (−1)^k (log x)(log y)^{k−1}. Since β ≥ c_ε/log y by Corollary 2.2, the same lemma gives an analogous two-sided bound. By definition of β, its left-hand side is ≍ (log x)(log y)^{k−1}, and the sum on its right-hand side is bounded above in [20, Eq. (7.1)] by a quantity which is ≪_ε log x by definition of β. This finishes the proof of (7.1). For f^{(k)}, observe that I^{(k)}(v) ≍ e^v/(v + 1) uniformly for v ≥ 0 [26, Lem. 4.5], and e^v/(v + 1) ≍ u as long as 0 ≤ v = ξ(u) + O(1). Hence, by monotonicity of I^{(k)}, it suffices to show that 0 ≤ (1 − t) log y = ξ(u) + O_ε(1) holds for t ∈ {α, β}. For t = β this is trivial. For t = α, (1 − α) log y = ξ(u) + O_ε(1) follows from Lemma 2.4, so it is left to show α < 1. By definition ∑_{p≤y} log p/(p^α − 1) = log x, and at α = 1 the sum is log y − γ + o(1) [23, p. 182], which is < log x when x ≥ C, so α < 1.
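The principal-value Ei used in §6 can be evaluated through the classical everywhere-convergent series Ei(x) = γ + log|x| + ∑_{k≥1} x^k/(k · k!), valid for all real x ≠ 0 (a standard identity, not taken from this paper); a sketch:

```python
import math

EULER_GAMMA = 0.5772156649015329

def Ei(x):
    """Exponential integral via the series
    Ei(x) = gamma + log|x| + sum_{k>=1} x^k / (k * k!);
    for x < 0 this agrees with the principal-value definition."""
    if x == 0:
        raise ValueError("Ei is undefined at 0")
    total = EULER_GAMMA + math.log(abs(x))
    term = 1.0                       # will hold x^k / k!
    for k in range(1, 200):
        term *= x / k
        total += term / k
        if abs(term / k) < 1e-18 * max(1.0, abs(total)):
            break
    return total
```

For moderate |x| the series converges rapidly; e.g. Ei(1) ≈ 1.8951178 and Ei(−1) = −E_1(1) ≈ −0.2193839.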
7.3. Proof of Lemma 3.5, third part. The square of B can be written as a ratio. To prove (3.7) we estimate its numerator and denominator using (2.3) and (2.9), respectively. This also shows that B(x, y) is bounded when y ≥ ε log x. We turn to (3.6). The denominator is ≍_ε (log x)(log y) by Lemma 7.1, and the numerator is estimated in Lemma 7.3 in terms of α − β and (log G)^{(2)}. Estimating α − β using (3.5) gives (3.6).
7.4. Proof of Proposition 2.7. The range 2 log x ≥ y > 1 + log x is already covered by Lemma 2.6, because α ≍ 1/log y in this range by (2.10). If log y ≤ √(log x) and y ≥ 2 log x, we make use of the Main Theorem of Saha, Sankaranarayanan and Suzuki [24], which applies in the current range. We also use Smida's result [26, Thm. 1]. We divide these two estimates to get the formulas in Lemma 2.6, with the term 1 + O(u^{−1}) replaced by 1 + 1/8 times an explicit quantity. This is estimated in Lemma 7.3 as 1 + O(1/(α log x)) once we invoke (3.5), (2.14) and (2.17).

In the sum over (x/2, 2x) we consider separately n = x′ and n ≠ x′. We find a bound which, when we let K tend to ∞, tends to 0.
(3) We have β ≤ 1, with equality if and only if y = x. (4) We have β ≥ 0 if and only if y ≥ 1 + log x, and β = 0 if and only if y = 1 + log x.

From this, (2.6) follows by using log(1 + t) ≪ |t| for |t| ≤ 1/2. The second part of the lemma is similar to the first and is left to the reader. The third part follows from ξ being 0 at v = 1 and strictly increasing. For the last part we need to solve β ≥ 0, that is, log y ≥ ξ(u). Since ξ is strictly increasing, it suffices to solve log y = ξ(u). Exponentiating and using e^{ξ(u)} = 1 + uξ(u), this gives y = e^{ξ(u)} = 1 + uξ(u) = 1 + u log y = 1 + log x.

Let g : (0, ∞) → ℝ be given by g(t) := t log x + log ζ(t, y). The points α and β are known to be close.
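Part (4) can be checked numerically. The sketch below assumes the definition β = 1 − ξ(u)/log y used above, with ξ(u) the largest real solution of e^ξ = 1 + uξ (function names are ours):

```python
import math

def xi(u):
    """Largest real solution of e^v = 1 + u*v for u >= 1 (xi(1) = 0),
    by bisection on h(v) = e^v - 1 - u*v, which is negative just to
    the right of 0 and tends to +infinity."""
    if u <= 1:
        return 0.0
    lo, hi = 1e-12, 2.0 * math.log(u + 2.0) + 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.exp(mid) - 1.0 - u * mid < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def beta(x, y):
    """beta = 1 - xi(u)/log y with u = log x / log y (the definition
    assumed here; it matches the characterisations in the lemma)."""
    u = math.log(x) / math.log(y)
    return 1.0 - xi(u) / math.log(y)
```

At y = 1 + log x one has ξ(u) = log y exactly, since then uξ(u) = log x forces e^{ξ(u)} = 1 + log x = y; hence β vanishes there, matching part (4).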