On a Result of Hayman Concerning the Maximum Modulus Set

The set of points where an entire function achieves its maximum modulus is known as the maximum modulus set. In 1951, Hayman studied the structure of this set near the origin. Following work of Blumenthal, he showed that, near zero, the maximum modulus set consists of a collection of disjoint analytic curves, and provided an upper bound for the number of these curves. In this paper, we establish the exact number of these curves for all entire functions, except for a “small” set whose Taylor series coefficients satisfy a certain simple, algebraic condition. Moreover, we give new results concerning the structure of this set near the origin, and make an interesting conjecture regarding the most general case. We prove this conjecture for polynomials of degree less than four.


Introduction
Suppose that f is an entire function, and define the maximum modulus by
M(r, f) := max_{|z|=r} |f(z)|, for r ≥ 0.
The maximum modulus set of f is then the set
M(f) := {z ∈ C : |f(z)| = M(|z|, f)}.    (1.1)
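For readers who wish to experiment, the quantity M(r, f) is easy to approximate numerically by sampling |f| on the circle of radius r. The following Python sketch (the function name is ours, not from the paper) does this for polynomials.

```python
import numpy as np

def max_modulus(coeffs, r, samples=2000):
    """Approximate M(r, p) for the polynomial p(z) = coeffs[0] + coeffs[1]*z + ...
    by sampling |p| on the circle |z| = r."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = r * np.exp(1j * theta)
    # np.polyval expects the highest-degree coefficient first, so reverse
    p = np.polyval(list(coeffs)[::-1], z)
    return np.abs(p).max()

# For p(z) = 1 + z, the maximum on |z| = 0.5 is attained at z = 0.5.
print(max_modulus([1.0, 1.0], 0.5))  # prints 1.5
```

Since the grid includes the angle 0, the example above recovers M(0.5, p) = 1.5 exactly.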
It is a simple observation that if a ≠ 0, m ∈ Z, and f̃(z) := az^m f(z) for an entire function f, then M(f̃) = M(f). Thus, following Hayman [Hay51], we will assume that f has the form
f(z) := 1 + az^k + higher order terms, for a ≠ 0 and k ∈ N.    (1.2)
Throughout the paper f always has this form, and, in particular, the variables a and k are fixed by this equation. We are interested in the structure of M(f) near the origin. Hayman [Hay51, Theorem I part (iii)] proved the following.
Theorem A. If f is an entire function of the form (1.2), then, near the origin, M(f) consists of at most k analytic curves only meeting at zero. Moreover, these curves make angles of 2mπ/k with each other, for some m ∈ N.
In this paper, we strengthen Hayman's result by giving the exact number of such curves for any entire function outside an exceptional set. To give a precise description of this set, we require the following definitions, the first of which is straightforward.
Definition 1.1. Let f be an entire function of the form (1.2). We define the inner degree of f as the maximal µ := µ_f ∈ N such that f(z) = g(z^µ) for some entire function g. Note that the inner degree of f always divides k.

The second definition is more complicated. Suppose that f is an entire function of the form (1.2), so that we can write
f(z) = 1 + az^k + Σ_{σ=k+1}^∞ b_σ z^σ.
For each ℓ ≥ k, define a polynomial
p_ℓ(z) := 1 + az^k + Σ_{σ=k+1}^{ℓ} b_σ z^σ.
It is immediate that there is some least ℓ ≥ k such that µ_{p_ℓ} = µ_f. We stress that f may itself be a polynomial, and it is possible that p_ℓ = f.

Definition 1.2. Suppose that f is an entire function of the form (1.2), and let ℓ be as defined above. We say that f is exceptional if there exist m ∈ {1, . . ., 2k − 1}, m′ ∈ Z, and σ ∈ {k + 1, . . ., ℓ}, such that b_σ ≠ 0 and also
arg b_σ = σ(arg a − mπ)/k + m′π.    (1.3)

Observe that it is straightforward to determine whether an entire function is exceptional, simply by examining the coefficients in its Taylor series. Indeed, we only need to check finitely many such coefficients, even when f is transcendental. Note also that no polynomial with only two terms is exceptional; indeed, it is easy to explicitly check the conclusion of Theorem 1.3, below, in this case.
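Both the inner degree µ_f and the index ℓ of Definition 1.2 are simple arithmetic functions of the exponents appearing in the series. The following Python sketch (our own illustration; the helper names are ours) computes them for a polynomial given by its coefficient list [c_0, c_1, …]:

```python
from functools import reduce
from math import gcd

def inner_degree(coeffs):
    """Inner degree of p(z) = sum coeffs[i] * z**i: the largest mu with
    p(z) = g(z**mu) for some g, i.e. the gcd of the non-constant exponents."""
    exps = [i for i, c in enumerate(coeffs) if i > 0 and c != 0]
    return reduce(gcd, exps)

def truncation_index(coeffs):
    """The least ell >= k such that the truncation p_ell has the same
    inner degree as the whole polynomial (the ell of Definition 1.2)."""
    exps = [i for i, c in enumerate(coeffs) if i > 0 and c != 0]
    mu, g = reduce(gcd, exps), 0
    for e in exps:
        g = gcd(g, e)
        if g == mu:
            return e

# For f(z) = 1 + z^2 + z^6 + z^7: mu_f = gcd(2, 6, 7) = 1, and the gcd of the
# exponents only drops to 1 once the z^7 term is included, so ell = 7.
```

For instance, inner_degree([1, 0, 1, 0, 0, 0, 1, 1]) is 1 while truncation_index of the same list is 7, so only the coefficients up to degree 7 need inspecting in Definition 1.2.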
Our first result establishes the number of curves that form M(f ) near the origin for any f that is not exceptional.
Theorem 1.3. Let f be an entire function of the form (1.2) that is not exceptional. Then, near the origin, M(f) consists of exactly µ_f analytic curves that only meet at zero.
Remark. Note that Theorem 1.3 tells us, in a precise sense, that for "most" entire functions f, the set M(f) has µ_f components near the origin. For, if f is exceptional, then any sufficiently small perturbation of finitely many of its coefficients gives rise to an entire function that is not exceptional.
In addition, we are able to provide more information on the number and asymptotic behaviour of the curves that make up M(f) near the origin, for any entire function f. We remark that this result is not completely new; compare to [Hay51, Theorem 1] and [Blu07] (see [Val49, II.3]). However, we obtain more explicit estimates and include proofs for completeness.
Set Σ := {0, . . ., k − 1}. For each j ∈ Σ, define the angle
ω_j := (2jπ − arg a)/k,    (1.4)
and, for r, φ > 0, the sector
S_j(r, φ) := {ρe^{iθ} : 0 < ρ ≤ r and |θ − ω_j| < φ}.    (1.5)
For a finite set T, we use #T to denote the number of elements of T. For an entire function f and a set T ⊂ C we set
M(f|_T) := {z ∈ T : |f(z)| = max{|f(w)| : w ∈ T and |w| = |z|}}.

Theorem 1.4. Suppose that f is an entire function of the form (1.2). Then there exist R, φ > 0, a set J := J_f ⊂ Σ, and disjoint analytic curves {γ_j}_{j∈Σ}, such that
M(f) ∩ {z ∈ C : 0 < |z| ≤ R} = ⋃_{j∈J} γ_j,    (1.6)
and with the following properties.
(a) Each γ_j is contained in the sector S_j(R, φ), and γ_j = M(f|_{S_j(R,φ)}).
(b) Each γ_j contains exactly one point of each modulus in (0, R].
(c) Each γ_j is tangent at the origin to the radial line {z ∈ C : arg z = ω_j}. In particular, arg z = ω_j + O(|z|^{1/2}) as z → 0 along γ_j.
(d) The cardinal #J is a multiple of the inner degree µ of f. Moreover, if j ∈ J, j′ ∈ Σ and j′ ≡ j mod k/µ, then j′ ∈ J, and γ_{j′} = e^{2πin/µ} γ_j for some n ∈ Z.
Remark. Theorem 1.4(d) implies the following. The components of M(f) near the origin are contained in a disjoint union of families of analytic curves. Each family contains µ_f such curves, and the curves within each family are obtained from each other by rotations of multiples of 2π/µ_f radians around the origin. There is at least one of these families, and at most k/µ_f.
Observe that for an entire function f, Theorem 1.4 implies in particular that the number of components of M(f) near the origin is at least its inner degree; that is, #J_f ≥ µ_f. We distinguish the case of strict inequality.

Definition 1.5. We say that an entire function f of the form (1.2) is magic if #J_f > µ_f.

Theorem 1.3 tells us that all magic entire functions are exceptional. The simplest example of a polynomial that is magic seems to be the cubic p(z) := 1 + z² + iz³; see Figure 1 and also Theorem 1.7. It is an open question to identify necessary and sufficient conditions for an entire function to be magic. It is also an open question to establish the size of #J_f in the case where f is magic. We conjecture the following, which, if true, would give a complete answer to the question of the number of disjoint curves in M(f) near the origin.
Conjecture 1.6. If f is magic, then #J_f = 2µ_f.

Although we have not been able to identify all magic entire functions, the following gives a complete result for quadratic and cubic polynomials.
Theorem 1.7. Suppose that p is a polynomial of the form (1.2). If p is a quadratic, then p is not magic. If p is a cubic, then p is magic if and only if p(z) = 1 + az² + bz³, where a, b ≠ 0 and Re(ba^{−3/2}) = 0.
Remark. It is straightforward to check that Theorem 1.7 implies that, for cubic polynomials, p is exceptional exactly when p is magic. It is tempting to conjecture that the same holds more generally.
The following is an immediate consequence of the proof of Theorem 1.7.
Corollary 1.8.Conjecture 1.6 holds for polynomials of degree less than four.
We observe finally that, if p is a polynomial, then our results can also be used to study the structure of M(p) near infinity. This is for the following reason. Suppose that the degree of p is n, and let q be the reciprocal polynomial, defined by q(z) := z^n p(1/z). As observed in [PS20a, Proposition 3.3], we have that z ∈ M(q) \ {0} if and only if 1/z ∈ M(p) \ {0}. Hence the structure of M(p) near infinity is completely determined by the structure of M(q) near the origin.
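The reciprocal-polynomial correspondence is easy to see in coordinates: if p has coefficients (c_0, …, c_n), then q(z) = z^n p(1/z) has the reversed coefficients. A small numerical sketch (ours, not from [PS20a]) illustrating the inversion z ↦ 1/z between the two maximum modulus sets:

```python
import numpy as np

def reciprocal(coeffs):
    """Coefficients of q(z) = z**n * p(1/z) for p of degree n: the reversed list."""
    return list(coeffs)[::-1]

def argmax_angle(coeffs, r, samples=3600):
    """Angle in [0, 2*pi) at which |p| is largest on the circle |z| = r,
    up to the sampling resolution."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = r * np.exp(1j * theta)
    vals = np.abs(np.polyval(list(coeffs)[::-1], z))
    return theta[int(np.argmax(vals))]

# p(z) = 1 + (1+i)z, so q(z) = z * p(1/z) = (1+i) + z.
p = [1.0, 1.0 + 1.0j]
q = reciprocal(p)
tp = argmax_angle(p, 2.0)  # maximiser of |p| on |z| = 2
tq = argmax_angle(q, 0.5)  # maximiser of |q| on |z| = 1/2
# z in M(q) iff 1/z in M(p), so the two maximiser angles are negatives mod 2*pi.
```

Here tp is close to 7π/4 and tq close to π/4, consistent with the inversion.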
Acknowledgments.We would like to thank Peter Strulo for programming assistance leading to Figure 1, and Argyrios Christodoulou for helpful feedback.

Proof of Theorem 1.4
The goal of this section is to prove Theorem 1.4. We use the following, which is easy to check.

Lemma 2.1. Suppose that f is an entire function of the form (1.2). Then, as r → 0,
f(re^{iθ}) = 1 + ar^k e^{ikθ} + O(r^{k+1}),    (2.1)
and consequently
|f(re^{iθ})|² = 1 + 2|a|r^k cos(kθ + arg a) + O(r^{k+1}).    (2.2)

For the rest of the section, let us fix an entire function f as in (1.2), that is, f(z) := 1 + az^k + higher order terms, for a ≠ 0 and k ∈ N.

Observation 2.2. It follows by inspection of (2.1) that all partial derivatives with respect to θ of the O(·) term in (2.2) are also O(r^{k+1}).
Recall from the introduction that for each j ∈ Σ, we defined the angle ω_j in (1.4) and, for φ, r > 0, the sector S_j(r, φ) in (1.5).
Proof of Theorem 1.4. Observe that M(f) is contained in the set of points re^{iθ} at which θ ↦ |f(re^{iθ})| is locally maximised. Using (2.2), and by Observation 2.2, we have that
∂/∂θ |f(re^{iθ})|² = −2|a|kr^k sin(kθ + arg a) + O(r^{k+1}),    (2.3)
and
∂²/∂θ² |f(re^{iθ})|² = −2|a|k²r^k cos(kθ + arg a) + O(r^{k+1}).    (2.4)
Fix r₁, φ > 0 sufficiently small, with the property that for all 0 < r ≤ r₁ and re^{iθ} ∈ ⋃_{j=0}^{k−1} S_j(r₁, φ), the second derivative in (2.4) is negative. Reducing r₁ and φ if necessary, we can deduce that for each 0 < r ≤ r₁ and j ∈ Σ, there is exactly one point re^{iθ} ∈ S_j(r₁, φ) at which the derivative in (2.3) is zero; the fact that there is at least one such point follows from (2.3), and the fact that there is at most one follows from (2.4).
Next, for each j ∈ Σ and 0 < r ≤ r₁, let z_j(r) denote the unique point re^{iθ} ∈ S_j(r₁, φ) at which the derivative in (2.3) is zero, and set
γ_j^{r₁} := {z_j(r) : 0 < r ≤ r₁}.
Note that γ_j^{r₁} is the solution set in S_j(r₁, φ) to (2.3) being zero. Using a change of variables or the implicit function theorem, see [Hay51, Lemma 4] or [Val49, II.3], one can see that γ_j^{r₁} is an analytic curve. It is easy to see that γ_j^{r₁} contains exactly one point of each modulus.
Thus, we have shown that there exist r₁ > 0 and a collection {γ_j^{r₁}}_{j∈Σ} of disjoint analytic curves such that γ_j^{r₁} = M(f|_{S_j(r₁,φ)}), for j ∈ Σ, and
M(f) ∩ {z : 0 < |z| ≤ r₁} ⊂ ⋃_{j∈Σ} γ_j^{r₁}.
By results of Blumenthal [Blu07], see [PS20a, Section 3], it follows that there exists R < r₁ such that
M(f) ∩ {z : 0 < |z| ≤ R} = ⋃_{j∈J} γ_j^R, for some subset J ⊆ Σ.
We deduce (a) and (b).
Next we prove (c). First, note that by (2.2), for each j ∈ J, the curve γ_j := γ_j^R is asymptotic to the set of points where the term cos(kθ + arg a) is maximised. It follows that γ_j is tangent at the origin to the radial line L_j of argument ω_j. It remains to estimate the rate at which points of γ_j tend to L_j as we move towards the origin.
For each 0 < r ≤ R, denote the argument of the point of γ_j of modulus r by ω_j + θ_r. Fix j, and let z = re^{i(ω_j+θ_r)} ∈ γ_j. Then, by (2.2), as r → 0, we have
2|a|r^k (1 − cos(kθ_r)) = O(r^{k+1}).
Since 1 − cos(kθ_r) ≥ 0, and since this term is comparable to θ_r² when θ_r is small, it follows that θ_r = O(r^{1/2}) as r → 0. We deduce (c).
In order to prove (d), let j ∈ J, so that γ_j ⊂ M(f). If n ∈ Z, then, since f(z) = g(z^µ) for some entire function g, where µ is the inner degree of f, we have f(z) = f(ze^{2πin/µ}), and we can deduce that {ze^{2πin/µ} : z ∈ γ_j} ⊂ M(f). Recall that γ_j ⊂ S_j(R, φ), where the sectors {S_j(R, φ)}_{j∈Σ} are pairwise disjoint and obtained from each other by rotations of multiples of 2π/k radians around the origin. Since µ divides k, multiplication by e^{2πin/µ} maps S_j(R, φ) to S_{j′}(R, φ), where j′ ≡ j + nk/µ mod k. This means that if j ∈ J and j′ ∈ Σ, then j′ ∈ J whenever j′ ≡ j mod k/µ. In particular, the elements of Σ can be grouped into k/µ equivalence classes, each containing µ elements, and each class is either contained in J or disjoint from J. Hence, #J must be a multiple of µ. We have shown (d), which completes the proof of the theorem.
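The construction above can be checked numerically: for a concrete polynomial one can track, on circles of shrinking radius, the angle in a fixed sector at which the modulus is maximised, and observe the bound θ_r = O(r^{1/2}) of part (c). A Python sketch (our own, using the example p(z) = 1 + z² + iz³, for which a = 1, k = 2 and ω₀ = 0):

```python
import numpy as np

def maximiser_angle(coeffs, r, halfwidth=0.6, samples=20001):
    """Angle theta in (-halfwidth, halfwidth) at which |p(r e^{i theta})| is
    largest; this approximates the point of gamma_0 (tangent to omega_0 = 0)
    of modulus r."""
    theta = np.linspace(-halfwidth, halfwidth, samples)
    z = r * np.exp(1j * theta)
    vals = np.abs(np.polyval(list(coeffs)[::-1], z))
    return theta[int(np.argmax(vals))]

p = [1.0, 0.0, 1.0, 1.0j]  # 1 + z^2 + i z^3
for r in (0.2, 0.1, 0.05):
    # the angle shrinks with r, comfortably within the O(r^{1/2}) bound of (c)
    print(r, maximiser_angle(p, r))
```

For this example the maximiser angle in fact decays roughly linearly in r, which is stronger than the general O(r^{1/2}) estimate.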

Auxiliary results
To prove Theorem 1.3, we will begin by proving the same result restricted to polynomials.We need to do this because a key feature of our proof is the use of induction on the number of terms in the polynomial.We will complete the proof of Theorem 1.3 for transcendental entire functions in Section 5.
In particular we first prove the following.
Theorem 3.1. Let p be a polynomial of the form (1.2) that is not exceptional. Then, near the origin, M(p) consists of exactly µ_p analytic curves tending to zero.
The proof of this result is split over two sections. In this first section we give some results required in the proof of Theorem 3.1, which we state separately since they do not require the assumption that p is not exceptional; in particular, these results may be useful for future applications. We then complete the proof of Theorem 3.1 in Section 4.
Throughout this section and Section 4, we fix a polynomial p of the form (1.2). Note that if p(z) = 1 + az^k, then Theorem 3.1 follows trivially. Hence we can assume that p has at least three terms. Let p̂ be the polynomial of degree less than n such that
p(z) = p̂(z) + bz^n,    (3.1)
for some b ≠ 0, where n ∈ N is the degree of p. Note that, in particular, p̂ is a polynomial of the form (1.2), whose non-constant term of least degree is the same as that of p, that is, az^k. The polynomials p and p̂ will remain fixed from now on.
We next introduce some notation that will be used extensively in both this section and the next. By Theorem 1.4(d) we know that J_p consists of one or more disjoint sets, each of which contains all the elements of Σ that are congruent to each other modulo k/µ_p. If j ∈ Σ, then we use [j]_p to denote this set, that is,
[j]_p := {j′ ∈ Σ : j′ ≡ j mod k/µ_p}.
We use the analogous notation [j]_p̂ for the polynomial p̂, with µ_p̂ in place of µ_p.
In addition, let R > 0, and let {γ_j}_{j∈Σ} be the collection of curves provided by Theorem 1.4, so that (1.6) holds. Then, for 0 < r ≤ R and j ∈ Σ, we let z_j(r) denote the unique point on γ_j of modulus r. Moreover, reducing R if necessary, if {γ̂_j}_{j∈Σ} is the corresponding set of curves provided by Theorem 1.4 applied to p̂, then we let ẑ_j(r) denote the unique point on γ̂_j of modulus r. This completes the definition of the notation.
We next give an observation which allows us to estimate the square of the modulus of p in terms of that of p̂ at each point in the plane. This is an immediate consequence of Lemma 2.1.

Observation 3.2. We have, as r → 0, that
|p(re^{iθ})|² = |p̂(re^{iθ})|² + 2|b|r^n cos(arg b + nθ) + O(r^{n+k}),    (3.2)
and all partial derivatives of the O(·) term with respect to θ are also O(r^{n+k}).
The first lemma in this section is key to the proof of Theorem 3.1. Roughly speaking, it says that, close to the origin, |p|² at the point ẑ_j(r) (which is where p̂ takes its maximum modulus) is very close to |p|² at the point z_j(r) (which is where p takes its maximum modulus).

Lemma 3.3. For each j ∈ Σ, we have, as r → 0,
|p(ẑ_j(r))|² = |p(z_j(r))|² + O(r^{n+1/2}).

Proof. To prove this result, it helps to simplify notation. Suppose that j ∈ Σ is fixed. Then, define real analytic functions f, g, h : R × R → R by
f(r, θ) := |p(re^{i(ω_j+θ)})|²,  g(r, θ) := |p̂(re^{i(ω_j+θ)})|²,
and finally,
h(r, θ) := 2|b|r^n cos(arg b + n(ω_j + θ)).    (3.3)
Note that all derivatives of f and g with respect to θ are O(r^k) as r → 0, because the first non-constant term, az^k, dominates. We also have that
h(r, θ) = O(r^n),    (3.4)
and all derivatives of h with respect to θ are also O(r^n) as r → 0. Finally, it follows from the definitions, along with Observation 3.2, that
f(r, θ) = g(r, θ) + h(r, θ) + O(r^{n+k}),    (3.5)
and all the higher order derivatives of the O(·) term are also O(r^{n+k}).

Recall that, for r sufficiently small, z_j(r) and ẑ_j(r) are the respective points on the curves indexed by j where p and p̂ attain the maximum modulus. Let us write z_j(r) = re^{i(ω_j+θ_r)} and ẑ_j(r) = re^{i(ω_j+θ̂_r)}, where the angles θ_r and θ̂_r are both functions of r. In particular, it follows from the definitions that
f′(r, θ_r) = 0 and g′(r, θ̂_r) = 0,    (3.6)
where the dash denotes differentiation with respect to θ.

Moreover, note that by (3.3), and the bounds on the derivatives of h, we have that
g′′(r, θ̂_r) + h′′(r, θ̂_r) = −2|a|k²r^k cos(kθ̂_r + kω_j + arg a) + O(r^{k+1}),
and cos(kθ̂_r + kω_j + arg a) can be taken to be bounded away from zero. It then follows by (3.4) that
g′(r, θ_r) = O(r^n).    (3.7)

Claim. We have that |θ_r − θ̂_r| = O(r^{n−k}) as r → 0.

Proof of claim. Set ε(r) := |θ_r − θ̂_r|. It follows from (3.7), together with (3.6) and the estimate on g′′ above, that there are constants C₁, C₂ > 0 such that, for all sufficiently small r > 0,
C₂ε(r)² − ε(r) + C₁r^{n−k} ≥ 0.
This is a quadratic in ε(r), with zeros at (1 ± (1 − 4C₁C₂r^{n−k})^{1/2})/(2C₂). Assuming that r > 0 is small, one of these zeros is close to 1/C₂, and the other lies in (0, 2C₁r^{n−k}). Since ε(r) is small and positive, we can deduce that ε(r) lies in this second interval, for all sufficiently small values of r.

We can now deduce from this claim, together with (3.5), (3.6), the real analyticity of f and h, and our earlier estimates for the sizes of the derivatives of f and h, that
f(r, θ̂_r) = f(r, θ_r) + O(r^{n+1/2}),
as required. Note that in the last step we have used that g′(r, θ̂_r) = 0, and also that 2n − k > n + 1/2, since n ≥ k + 1.

Now, for each j ∈ Σ, we let
t_j := 2|b| cos(nω_j + arg b),
where we recall that ω_j := (2jπ − arg a)/k. Our next lemma allows us to compare the magnitude of p on the different curves γ_j.
Lemma 3.4. Let j, j′ ∈ J_p̂. Then, as r → 0,
|p(z_j(r))|² − |p(z_{j′}(r))|² = (t_j − t_{j′})r^n + O(r^{n+1/2}).    (3.8)

Proof. Let us write ẑ_j(r) = re^{i(ω_j+θ̂_r)} and ẑ_{j′}(r) = re^{i(ω_{j′}+θ̂′_r)}, where the angles θ̂_r and θ̂′_r are functions of r. Then, by Theorem 1.4(c), θ̂_r and θ̂′_r are both O(r^{1/2}) as r → 0. By this, and by Observation 3.2, as r → 0 we have that
|p(ẑ_j(r))|² = |p̂(ẑ_j(r))|² + 2|b|r^n cos(arg b + n(ω_j + θ̂_r)) + O(r^{n+k}) = |p̂(ẑ_j(r))|² + t_j r^n + O(r^{n+1/2}),
where in the last line we have used that cos nθ̂_r = 1 + O(r) and sin nθ̂_r = O(r^{1/2}). Also, arguing similarly, we have
|p(ẑ_{j′}(r))|² = |p̂(ẑ_{j′}(r))|² + t_{j′} r^n + O(r^{n+1/2}).
Since j, j′ ∈ J_p̂, we also have that |p̂(ẑ_j(r))| = |p̂(ẑ_{j′}(r))|. Hence, by Lemma 3.3, we obtain (3.8).

It follows from Lemma 3.4 that the magnitudes of the quantities t_j, for j ∈ Σ, are important for determining the size of |p(z)|. It proves useful to know exactly when two of these terms can be equal. This is the content of the following lemma.
Lemma 3.5. Suppose j, j′ ∈ Σ. Then t_j = t_{j′} if and only if there is an integer m such that one of the following holds. Either
n(ω_j − ω_{j′}) = 2mπ,    (3.9)
or
2 arg b + n(ω_j + ω_{j′}) = 2mπ.    (3.10)

Proof. This is a straightforward consequence of the fact that cos z₁ = cos z₂ if and only if there is an integer m such that either z₁ − z₂ = 2mπ or z₁ + z₂ = 2mπ. The details are omitted.
The next lemma gives a simple relationship between the condition (3.9) and the sets [j]_p.

Lemma 3.6. Suppose that j, j′ ∈ J_p̂ with j′ ∈ [j]_p̂. Then (3.9) holds if and only if j′ ∈ [j]_p.
Proof. First, we note that since p(z) = 1 + az^k + ⋯ + bz^n = p̂(z) + bz^n, and µ_p ≤ µ_p̂, there are natural numbers A₀, A₁, A₂ such that µ_p̂ = A₀µ_p, k = A₁µ_p and n = A₂µ_p. Moreover, A₀ and A₂ are coprime, since if they shared a factor A₃ > 1, then we could replace µ_p with A₃µ_p.
Since j′ ∈ [j]_p̂, it follows by the definition of [j]_p̂ that there is an integer B₀ such that
j − j′ = B₀k/µ_p̂.
Suppose first that (3.9) holds. Since ω_j − ω_{j′} = 2π(j − j′)/k, we can deduce that mA₀ = B₀A₂. Since m is an integer and A₀ and A₂ are coprime, m = B₁A₂, where B₁ is an integer. Hence
j − j′ = B₀k/µ_p̂ = B₁k/µ_p,
as required.
In the other direction, suppose that j′ ∈ [j]_p. Then there is an integer B₁ such that
j − j′ = B₁k/µ_p.
It follows that
n(ω_j − ω_{j′}) = 2πn(j − j′)/k = 2πA₂B₁,
and so (3.9) holds.
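Condition (3.9) and the classes [j]_p are both purely arithmetic, so Lemma 3.6 can be sanity-checked by brute force in the simplest setting p(z) = 1 + z^k + bz^n with b generic, where the inner degree is gcd(k, n). The following sketch (ours) verifies the equivalence for a few pairs (k, n):

```python
from math import gcd, pi

def omega(j, k, arg_a=0.0):
    # the angles of (1.4)
    return (2 * j * pi - arg_a) / k

# For p(z) = 1 + z^k + b z^n (b generic), mu_p = gcd(k, n). Check that
# n*(omega_j - omega_j') is a multiple of 2*pi  <=>  j = j' mod k/mu_p.
for k, n in [(2, 3), (2, 4), (3, 6), (4, 6)]:
    mu = gcd(k, n)
    for j in range(k):
        for jp in range(k):
            ratio = n * (omega(j, k) - omega(jp, k)) / (2 * pi)
            is_multiple = min(ratio % 1, 1 - ratio % 1) < 1e-9
            same_class = (j - jp) % (k // mu) == 0
            assert is_multiple == same_class
print("Lemma 3.6 check passed for the sampled cases")
```

This is only a spot check of the simplest case of the lemma, not a proof; it does, however, exercise both directions of the equivalence.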
Our last general lemma allows us to compare J_q and J_q̃ for two related entire functions q, q̃ of the form (1.2), provided that q satisfies an additional condition. In fact, q is necessarily a polynomial, but q̃ may be a polynomial or may be transcendental. Note that J_q and J_q̃ are the subsets of Σ provided by Theorem 1.4, and these are both well defined sufficiently close to the origin.

Lemma 3.7. Suppose that q is a polynomial of the form (1.2), of degree n. Let {γ_j}_{j∈Σ} be the set of curves provided by Theorem 1.4 applied to q, and for all sufficiently small values of r, let z_j(r) denote the point on γ_j of modulus r. Suppose also that there exist c, R > 0 such that
|q(z_j(r))|² ≥ |q(z_{j′}(r))|² + cr^n, for j ∈ J_q, j′ ∈ Σ \ J_q, and 0 < r ≤ R.    (3.11)
If q̃ is an entire function such that, for some ñ > n and b̃ ≠ 0, its power series is
q̃(z) = q(z) + b̃z^ñ + . . .,    (3.12)
then J_q̃ ⊂ J_q.
Proof. Suppose, by way of contradiction, that there exists j′ ∈ J_q̃ \ J_q. Choose j ∈ J_q, and let r > 0 be small. Let {γ̃_j}_{j∈Σ} be the set of curves provided by Theorem 1.4 applied to q̃. For all sufficiently small values of r, let z̃_j(r) denote the point on γ̃_j of modulus r. By Theorem 1.4(a), since both γ_j and γ̃_j are contained in the same sector S_j(r, φ) for some r, φ > 0, and γ_j = M(q|_{S_j(r,φ)}), we have that
|q(z_j(r))|² ≥ |q(z̃_j(r))|², for r > 0 small.    (3.13)
By (3.12), (3.11), and (3.13), there are positive constants c, K such that, for small values of r > 0, we have
|q̃(z̃_j(r))|² ≥ |q̃(z̃_{j′}(r))|² + cr^n − Kr^ñ.
For sufficiently small values of r this is a contradiction, since n < ñ and, since j′ ∈ J_q̃, for such values |q̃(z̃_{j′}(r))|² ≥ |q̃(z̃_j(r))|².

Proof of Theorem 3.1
In this section we prove Theorem 3.1, which is a consequence of the following, in which we continue the notation of Theorem 1.4.

Theorem 4.1. Suppose that p is a polynomial of the form (1.2) that is not exceptional. Continuing the notation of Theorem 1.4, and reducing R > 0 if necessary, the following both hold.
(i) There exists c > 0 such that (3.11) holds with p in place of q.
(ii) There exists j ∈ Σ such that J_p = [j]_p.
We will use the following simple observation, which is critical to our argument in this section, and indeed underlies the definition of "exceptional". It follows immediately from Definition 1.2, and in particular (1.3).

Observation 4.2. Suppose that p is not exceptional. Then (3.10) does not hold.
Proof of Theorem 4.1. We prove the result by induction on the number of non-zero terms in p. When p contains two non-zero terms, we have that
p(z) = 1 + az^k.
Clearly, the maximum modulus of p is achieved exactly when az^k is real and positive, in other words, when arg a + k arg z is a multiple of 2π. Then J_p = Σ, and so Theorem 4.1(i) holds trivially. Note that Theorem 4.1(ii) is also straightforward, since in this case µ_p = k, and so [j]_p = Σ = J_p for any j ∈ Σ.

Now suppose that the theorem has been proved for polynomials with at most N non-zero terms, and that p has N + 1 non-zero terms. Let p̂ be the polynomial defined in (3.1). Note that p̂ has fewer terms than p, and, since p is not exceptional, neither is p̂. Hence, we can assume that both the inductive conclusions apply to p̂.
Observe that, by the inductive hypothesis, and by the definitions of p and p̂, the conditions of Lemma 3.7 are satisfied, with p̂ in place of q and p in place of q̃. Hence J_p ⊂ J_p̂.
Claim. Set t_max := max{t_j : j ∈ J_p̂}. Then
J_p = {j ∈ J_p̂ : t_j = t_max}.    (4.1)

Proof of claim. Note first, by Lemma 3.4, that if j, j′ ∈ J_p̂, then (3.8) holds. This implies that if t_j < t_{j′}, then j ∉ J_p. Hence J_p ⊂ {j ∈ J_p̂ : t_j = t_max}. For the reverse inclusion, choose j′ ∈ J_p, so that t_{j′} = t_max. Suppose that j ∈ J_p̂ and that t_j = t_max. We need to show that j ∈ J_p. Since t_j = t_{j′}, it follows from Lemma 3.5 that either (3.9) or (3.10) holds. Since p is not exceptional, it follows from Observation 4.2 that (3.10) cannot hold. By Theorem 4.1(ii) applied to p̂, we know that J_p̂ = [j]_p̂, and so j′ ∈ J_p ⊂ J_p̂ = [j]_p̂. We can deduce from Lemma 3.6 that j ∈ [j′]_p, and the claim then follows from Theorem 1.4(d).
We now show that Theorem 4.1(i) holds for p. To see this, let j ∈ J_p and j′ ∈ J_p̂ \ J_p. By (4.1), it must hold that t_{j′} < t_j. Theorem 4.1(i) then follows from (3.8).
It remains to show that Theorem 4.1(ii) holds for p. Fix j, j′ ∈ J_p. We are required to show that j ∈ [j′]_p. In fact, this follows exactly as in the second part of the proof of the claim above.
Proof of Theorem 3.1. This is a direct consequence of Theorem 1.4 and Theorem 4.1(ii).

Proof of Theorem 1.3
It remains to use Theorem 3.1 to complete the proof of Theorem 1.3. This is, in fact, quite straightforward. Suppose that f is entire, not exceptional, and of the form (1.2). Let p := p_ℓ be the polynomial constructed in the definition of "exceptional"; note that p has the same inner degree as f, and is not exceptional. Then, by Theorem 4.1(ii), there exists j ∈ Σ such that J_p = [j]_p, and, in particular, #J_p = µ_p.
Observe that, by Theorem 4.1(i) applied to p, and by the definitions of f and p, the conditions of Lemma 3.7 are satisfied, with p in place of q and f in place of q̃. Hence J_f ⊂ J_p. The result now follows since, by Theorem 1.4, J_f contains at least µ_f elements, and µ_f = µ_p.

Proof of Theorem 1.7
In this section we prove Theorem 1.7. Suppose that p is a polynomial of the form (1.2), and let n be its degree. Observe that if k = n, then the inner degree µ_p = k, and so, by Theorem 1.4(d), #J_p = µ_p and p is not magic. If k = 1, then, by Theorem 1.4, #J_p = 1 and p is not magic. In particular, these are the only two possible cases for quadratic polynomials, and so they cannot be magic. Likewise, if p is cubic, then it can be magic only if k = 2.
Suppose that p is a cubic polynomial, and that k = 2; that is, p(z) = 1 + az² + bz³, where a, b ≠ 0. Let β ∈ C be such that aβ² = 1, and set b′ := bβ³ = ba^{−3/2}. It is easier to consider the polynomial q(z) := p(βz) = 1 + z² + b′z³. Writing z = re^{iθ}, a calculation gives
|q(z)|² − |q(−z)|² = 4|b′|r³ cos φ + O(r⁵), where φ := 3θ + arg b′.
Suppose that |θ| is small, which is the case if z is near the positive real axis. If the real part of b′ is positive, then cos φ is positive for all sufficiently small |θ|, and we can deduce that |q(z)|² > |q(−z)|². By Theorem 1.4(c), near the origin, M(q) lies near the real axis. It follows that M(q) has only one component near the origin, which is asymptotic to the positive real axis, and q is not magic.
If the real part of b′ is negative, then cos φ is negative for all sufficiently small |θ|, and we can deduce that |q(z)|² < |q(−z)|². As above, it follows that M(q) has only one component near the origin, which is asymptotic to the negative real axis, and again q is not magic.
Finally, suppose that b′ is purely imaginary. Then a calculation shows that q(−z̄) is the complex conjugate of q(z), and so |q(z)| = |q(−z̄)|. It follows that M(q) has exactly two components near the origin, which are obtained from each other by reflection in the imaginary axis. In particular, q is magic.
Since z ∈ M(q) if and only if βz ∈ M(p), we can deduce that p is magic if and only if b′ = ba^{−3/2} is purely imaginary, that is, if and only if Re(ba^{−3/2}) = 0, as required.
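Theorem 1.7 lends itself to a numerical check: counting how many separate arcs of the circle |z| = r come within a tiny tolerance of the maximum modulus distinguishes the magic cubic 1 + z² + iz³ (two components) from its non-magic perturbations (one component). A Python sketch (ours; the radius and tolerance are ad hoc choices):

```python
import numpy as np

def count_max_arcs(coeffs, r, samples=4000, tol=1e-6):
    """Number of disjoint arcs of the circle |z| = r on which |p| is within
    tol of its maximum; for small r this approximates the number of
    components of M(p) near the origin."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = r * np.exp(1j * theta)
    m = np.abs(np.polyval(list(coeffs)[::-1], z))
    near = m >= m.max() - tol
    # count circular runs of consecutive True values
    return int(np.sum(near & ~np.roll(near, 1)))

print(count_max_arcs([1, 0, 1, 1j], 0.3))         # magic cubic: 2 arcs
print(count_max_arcs([1, 0, 1, 1], 0.3))          # Re(b a^{-3/2}) != 0: 1 arc
print(count_max_arcs([1, 0, 1, 0.01 + 1j], 0.3))  # small perturbation: 1 arc
```

The last example mirrors Figure 1: an arbitrarily small perturbation of the coefficient of z³ destroys the symmetry, and one of the two components of the maximum modulus set disappears.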

Figure 1. Computer generated graphics of M(p) and M(p̃) near the origin. By Theorem 1.7, the polynomial p is magic, and so M(p) has two components near the origin. The polynomial p̃, which is only a small perturbation of p, is not magic, and so M(p̃) has only one component near the origin.