On Localization of Zeros of an Entire Function of Finite Order of Growth

The aim of the article is to find conditions on the coefficients of the Taylor expansion of an entire function of finite order of growth in C that guarantee a specified number of zeros. If the Taylor coefficients of f are real, we also give conditions to determine whether the numbers of real and complex zeros are finite or infinite.


The Absence of Zeros
Let the function f = f(z) be holomorphic in a neighborhood of the origin in the complex plane C. Denote by γ_r the circle of radius r centered at the origin: γ_r = {z : |z| = r}, r > 0.

Theorem 1 A function f is a non-vanishing entire function of finite order of growth
if and only if for sufficiently small r there exists k_0 ∈ N such that the condition (2) is fulfilled. The minimal such k_0 is the order of the entire function f.
Recall that an entire function f(z) has finite order (of growth) if there exists a positive number A such that |f(z)| < exp(|z|^A) for all sufficiently large |z|. The infimum of such A is called the order of the function f.
Proof Let f be a function of finite order of growth that has no zeros in C. Then it is well known that it has the form f(z) = e^ϕ(z), where ϕ(z) is a polynomial of some degree k_0 (see, e.g., [7, Ch. 7, Sec. 1.5]), and the condition (2) follows. Conversely, suppose that the condition (2) is fulfilled. Since f(z) is holomorphic in a neighborhood of the origin and f(0) ≠ 0, the values of f(z) lie in a neighborhood of f(0), and this neighborhood does not contain the point 0 for sufficiently small |z|. Therefore, the holomorphic function ϕ(z) = ln f(z) (with the branch fixed by ln 1 = 0) is defined in a neighborhood of the origin. Then, for sufficiently small r, the coefficients of ϕ are given by the representation (3). When the condition (2) is fulfilled, we see that a_k = 0 for k > k_0. Therefore, ϕ(z) is a polynomial of degree k_0. Consequently, f(z) = e^ϕ(z) is an entire function of finite order k_0.
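The integral characterization of Theorem 1 can be checked numerically. Below is a minimal sketch (the concrete function f(z) = exp(z + 2z^2) and the radius r = 0.1 are assumptions chosen for illustration): the Cauchy-type coefficients a_k of ϕ(z) = ln f(z), computed by quadrature over γ_r, vanish for k > 2, matching the fact that this f is zero-free of order 2.

```python
import cmath

def log_f(z):
    # f(z) = exp(z + 2 z^2), an entire function of order 2 without zeros;
    # near the origin the principal logarithm recovers ln f(z) = z + 2 z^2
    return cmath.log(cmath.exp(z + 2 * z ** 2))

def a(k, r=0.1, N=2048):
    # Cauchy coefficient a_k = (1 / 2 pi i) * integral over gamma_r of ln f(z) / z^(k+1) dz,
    # approximated by the trapezoid rule on the circle |z| = r
    total = 0j
    for m in range(N):
        z = r * cmath.exp(2j * cmath.pi * m / N)
        total += log_f(z) * z ** (-k)
    return total / N

# a_1 = 1, a_2 = 2, and a_k = 0 for k > 2: the condition holds with k_0 = 2
```

The trapezoid rule on a circle converges very fast for analytic integrands, so a moderate N already gives the coefficients to machine precision.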
Example 1 Let the coefficients be chosen as indicated, and substitute these values into (4). When k = 1, the determinant is not equal to zero. For k > 1 all determinants are equal to zero, since the first two columns are the same.
Then the function f (z) is of order 1 and has no zeros in the complex plane.

Auxiliary Statements
Let a function f(z) of the form (1) be an entire function of finite order of growth. Denote by α_1, α_2, . . . , α_n, . . . its zeros (every zero appears as many times as its multiplicity) in the order of non-decreasing absolute value: |α_1| ≤ |α_2| ≤ · · · ≤ |α_n| ≤ · · · . Their number may be finite or infinite.
Recall the Hadamard decomposition (5) for such functions (see, e.g., [7, Ch. 7]), where Q(z) is a polynomial of degree q ≤ ρ, s is the multiplicity of the zero of the function f at the origin, p is some integer, and p ≤ ρ.
The expression (6) is called the canonical product, and the integer p is the genus of the canonical product of f. The genus of an entire function f(z) is max{q, p}. If ρ′ is the order of the canonical product (6), then ρ = max{q, ρ′}. Let us consider the series (7). The infimum of positive γ for which the series (7) converges is called the rate of convergence of the zeros of the canonical product Π(z).
In what follows we consider the power sums σ_k = Σ_j α_j^{−k} with positive integer exponents k. Let us relate the integrals in (2) to the power sums σ_k of the zeros.
Formula (3) relates the integrals in (2) to the expansion coefficients of ϕ(z) = ln f(z) in a neighborhood of the origin. Let us express the integral in terms of power sums of zeros, using the Hadamard formula. We consider the case s = 0, that is, f(0) ≠ 0. In a sufficiently small neighborhood of the origin we have, according to the Hadamard formula (5), a series representation for ϕ(z). The series for ϕ(z) converges absolutely and uniformly in a sufficiently small neighborhood of the origin, since the zeros α_j are bounded away from it.
A direct transformation of the corresponding expression leads to the following statement (Lemma 1). Similarly, we can consider the case of q > p. In any case we arrive at the same relation.

Corollary 2
The following equality is true. It follows from (3), Lemma 1, and Corollary 2 that:

Corollary 3
The following relations are true. These formulas relate the power sums σ_k to the coefficients of the function f. In the case when σ_1 is an absolutely convergent series, such formulas were considered in [4, §2].
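The relations of Corollary 3 can be illustrated on a small example. Assuming the convention σ_k = Σ_j α_j^{−k} used above, the Taylor coefficient a_k of ln f satisfies a_k = −σ_k/k; the zeros 2, 3, 5 below are an arbitrary invented choice.

```python
import sympy as sp

z = sp.symbols('z')
roots = [sp.Integer(2), sp.Integer(3), sp.Integer(5)]      # invented zeros, all |alpha| > 1

f = sp.expand((1 - z / 2) * (1 - z / 3) * (1 - z / 5))     # function with exactly these zeros
phi = sp.expand(sp.series(sp.log(f), z, 0, 5).removeO())   # Taylor expansion of ln f

for k in range(1, 5):
    sigma_k = sum(r ** (-k) for r in roots)                # power sum of reciprocal zeros
    assert phi.coeff(z, k) == -sigma_k / k                 # a_k = -sigma_k / k
```

The identity follows term by term from ln(1 − z/α) = −Σ_k z^k/(k α^k).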

Finite Number of Zeros
Consider an entire function of finite order of growth of the form (1). In this section we find conditions on the coefficients of the function under which it has a finite number of zeros. First of all, we need to find the order ρ of the function f. To do this, we use the well-known formula expressing ρ through the Taylor coefficients (see, e.g., [7, Ch. 7]). If ρ is a fractional number, then the function has an infinite number of zeros. In this section we assume that ρ is an integer.
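For reference, the standard formula expressing the order through the Taylor coefficients c_n of f is ρ = lim sup_{n→∞} (n ln n)/ln(1/|c_n|). A quick numeric sanity check for f(z) = e^z, where c_n = 1/n! and ρ = 1:

```python
import math

# rho = limsup_{n -> infinity} (n ln n) / ln(1 / |c_n|); for f(z) = e^z, c_n = 1 / n!
def order_estimate(n):
    return n * math.log(n) / math.lgamma(n + 1)   # lgamma(n + 1) = ln n!

r = order_estimate(10 ** 6)                        # approaches rho = 1 as n grows
```

By Stirling's formula the estimate behaves like 1/(1 − 1/ln n), so convergence is slow but visible.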
Consider a sequence of complex numbers s_0, s_1, s_2, . . . . This sequence defines an infinite Hankel matrix S of the form (9). The consecutive principal minors of the matrix S are denoted by D_0, D_1, D_2, . . . ; we also assume that D_{−1} = 1. If for each p ∈ N there is a non-zero minor of S of order p, then the matrix has infinite rank. If, starting with some p, all minors are equal to zero, then the matrix S has a finite rank; the smallest value of such p is called the rank of the matrix. Here are two statements about Hankel matrices of finite rank p (see [1, Ch. 16, §10]).

Corollary 4 (Kronecker)
If an infinite Hankel matrix S of the form (9) has a finite rank p, then D_{p−1} ≠ 0.

Corollary 5 (Frobenius)
If the minor D_{p−1} of an infinite Hankel matrix is non-zero, D_{p−1} ≠ 0, and the minors D_p = D_{p+1} = · · · = D_{p+j} = · · · = 0, then the rank of the matrix S is finite and is equal to p.
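The Kronecker and Frobenius criteria are easy to test in exact arithmetic. Here s_j = 2^j + 3^j is an invented rank-2 sequence (a "moment" sequence of the two-point set {2, 3}); D_0 and D_1 are non-zero while D_2 = D_3 = D_4 = 0, so the rank is 2:

```python
import sympy as sp

s = [2 ** j + 3 ** j for j in range(12)]    # invented rank-2 sequence: "moments" of {2, 3}

def D(p):
    # consecutive principal minor: determinant of the leading p x p Hankel block
    return sp.Matrix(p, p, lambda j, k: s[j + k]).det()

minors = [D(p) for p in range(1, 6)]        # D_0, D_1, D_2, D_3, D_4
# D_0 = 2 and D_1 = 1 are non-zero, while D_2 = D_3 = D_4 = 0:
# by the Frobenius criterion the rank of the Hankel matrix is 2
```

Exact integer arithmetic avoids the rounding issues a floating-point determinant would have here.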

Theorem 3 An infinite Hankel matrix has a finite rank p if and only if there exist p numbers c_1, . . . , c_p such that s_q = c_1 s_{q−1} + · · · + c_p s_{q−p} for all q ≥ p.
This theorem is given in [1, Ch. 16, §10, Theorem 7].

Theorem 4 An infinite Hankel matrix S of the form (9) has finite rank if and only if the sum R(z) of the series (10) is a rational function of z. In this case, the rank of the matrix S coincides with the number of poles of R(z). Each pole is counted as many times as its multiplicity.
This statement is given in [1, Ch. 16, §10]. Consider again an entire function f(z) of integer order ρ. By the properties of entire functions, the power sums σ_k are absolutely convergent series when k > ρ. Let s_j = σ_{2k_0+j}, where 2k_0 > ρ + 1 and j = 0, 1, . . . . Consider an infinite Hankel matrix S of the form (9).

Theorem 5 A function f has a finite number of zeros if and only if the rank of the matrix S is finite. The number of distinct zeros of the function f is equal to the rank of the matrix S.
Proof Assume that α_1, . . . , α_p are the zeros of the function f; their number is finite (each root is counted as many times as its multiplicity). Consider the polynomial with these roots. The coefficients c_j of this polynomial can be found with the help of the classical Newton formulas. When j > p, these formulas become a linear recurrence relation for the power sums. Passing to the sums s_j = σ_{2k_0+j}, we obtain the same recurrence for the s_j. Thus, Theorem 3 shows that the rank of S is finite and it does not exceed p. Suppose now that the rank of S is finite and is equal to q. According to Theorem 4, this rank is the number of poles of the rational function R(z) (formula (10)). Each pole is counted as many times as its multiplicity.
The series R(z) converges (absolutely and uniformly) for z lying outside of a disk centered at the origin; this disk contains all (yet unknown) poles of the function R(z). Transform R(z) assuming that |α_m z| > 1 for all α_m. Changing the order of summation of the series is justified because they converge absolutely. By the hypothesis, R(z) is a rational function. Let us show that this series contains only a finite number of terms. Consider the function R*(w) obtained by this transformation and find its convergence domain. The roots are arranged in the order of increasing absolute value. Let |w| ≤ r < |α_1|; then the series is majorized by a convergent series, which converges by the choice of k_0. Thus, the series R*(w) converges absolutely and uniformly inside the disk {|w| < |α_1|}. Separating out the roots with the same absolute value |α_1| = · · · = |α_n|, we obtain the representation (12). The first sum is a finite sum of fractions with poles at α_1, . . . , α_n. The second sum is a series that defines a holomorphic function for |w| < |α_{n+1}| (by the same reasoning as above). Since R*(w) is a rational function, the second series in (12) is also a rational function. Consider a representation R*(w) = P(w)/Q(w), where P(w) and Q(w) are polynomials. Then P(w) = Q(w)R*(w). Since the left-hand side of this expression is a polynomial, Q(α_1) = · · · = Q(α_n) = 0. If s is the degree of Q, then Q(α_{n+1}) = · · · = Q(α_s) = 0. Therefore, the series R*(w) contains a finite number of fractions, equal to the number q of distinct roots α_j. So the rank of S is q. If the number of all roots (counted with multiplicities) is equal to p, then the first part of the theorem shows that q ≤ p.
Note that by Corollary 3 the power sums s j are expressed in terms of Taylor coefficients of the function f .
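Theorem 5 admits a direct numeric sketch (the zero set below is invented, and k_0 = 2 is an arbitrary admissible choice, since ρ = 0 for a polynomial): building the Hankel matrix from s_j = σ_{2k_0+j} = Σ_m α_m^{−(2k_0+j)} and computing its numerical rank recovers the number of distinct zeros.

```python
import numpy as np

alphas = [1.2, 1.2, 1.5, -1.3]   # invented zeros with multiplicity: 3 distinct, all |alpha| > 1
k0 = 2                           # admissible since rho = 0 for a polynomial, so 2*k0 > rho + 1

# s_j = sigma_{2 k0 + j} = sum over the zeros of alpha^(-(2 k0 + j))
s = [sum(a ** (-(2 * k0 + j)) for a in alphas) for j in range(12)]

S6 = np.array([[s[j + k] for k in range(6)] for j in range(6)])
rank = np.linalg.matrix_rank(S6)  # expected: the number of distinct zeros, i.e. 3
```

The double zero at 1.2 does not raise the rank: the matrix is a sum of one rank-one term per distinct zero.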
Assume that an entire function f has real Taylor coefficients; then its zeros are either real or come in complex-conjugate pairs. Note that in this case all the power sums σ_k and, accordingly, the numbers s_j are real.
Now we raise the question of the number of real and complex zeros. Since the function f (z) has a finite number of distinct zeros that is equal to the rank of the Hankel matrix S, the solution of this problem is reduced to the classical problem of finding the number of distinct real roots of a polynomial (see, e.g., [1, Ch. 16, §9]).
Since s_j = σ_{2k_0+j}, the truncated Hankel form can be written as in (15), in terms of linear forms Z_m. The linear forms are linearly independent because the determinant composed of their coefficients is a Vandermonde determinant, which is distinct from zero. If the forms Z_m and Z_k are complex conjugate, then we can pass to their real and imaginary parts. These forms are still linearly independent but become real.
In the relation (15), each real root corresponds to a square and a pair of conjugate roots corresponds to a difference of squares. Now we use the Frobenius theorem on the rank and signature of a Hankel form S_p(x, x) (see, e.g., [1]). For example, if using Corollary 3 we obtain that σ_k = 1 for k ≥ 2, then the rank of the Hankel matrix S is equal to 1; by Theorem 5 the number of roots is equal to 1, and by Theorem 6 this root is real.

Remark 1
The above statements show that in the case of a finite number of zeros the study of entire functions reduces to the study of polynomials. To study other features related to the root localization in the context of [1,2] one needs to factorize a function f (z), i.e., to extract a polynomial from this function (see [10]). The roots of this polynomial coincide with the roots of the function f .

Infinite Number of Zeros
From the previous section we obtain the following statement.

Corollary 6 A function f(z) has an infinite number of zeros if and only if the rank of the matrix S of the form (9), where s_j = σ_{2k_0+j}, is infinite.

The condition for the rank to be infinite is, by Corollary 5, that there is a strictly increasing sequence of positive integers j_k, k = 1, 2, . . . , such that all D_{j_k} ≠ 0.
In what follows we need some properties of infinite Hankel matrices of infinite rank.
Consider a sequence of complex numbers s_0, s_1, . . . , s_k, . . . satisfying the condition (17). Let 0 ≤ i_1 < · · · < i_k and 0 ≤ j_1 < · · · < j_k. Introduce the minor consisting of the elements of S standing at the intersection of the rows i_1, . . . , i_k and the columns j_1, . . . , j_k. In particular, when (17) is fulfilled, the following inequalities are true.
Consider an infinite Hankel form S(x, x) of the form (19). This double series converges absolutely, for example, when |x_j| ≤ j^{−2}; indeed, this follows by taking into account the condition (17). In what follows we assume that all D_j ≠ 0. Examples of such matrices are presented later.
Consider the truncated Hankel matrix S p of the form (13) and truncated Hankel form S p (x, x) of the form (14).
Reduction of the quadratic form to a sum of squares gives the identity (20) (see [1, Ch. 10, §3]), where the X_k are the corresponding linear forms.

Lemma 2 implies that
Setting the remaining variables to zero, we find that for such x_q there exists a limit lim_{p→∞} X_k^{(p)}. On the other hand, for the same x_q there exists the limit of the truncated forms S_p(x, x). Therefore we obtain the relation (21). Note that, in view of the identity (20), when squaring X_k and substituting the result into (21), each product x_j x_k in S(x, x) collects only a finite number of terms.
Thus, we obtain the following statement.

Proposition 2 If a Hankel matrix S of the form (9) satisfies the condition (17) and all D_p ≠ 0, then the relation (21) holds, and the series in (21) converge.
Let us write the equality (21) in another form, as the system (22), and treat (22) as an infinite system of equations with respect to x_p, p = 0, 1, . . . . Denote the infinite matrix of this system by A and its elements by a_{j,k}. The matrix is upper triangular with unit main diagonal.
Consider the cofactors A(j, k) of the elements a_{j,k} of the matrix A. These cofactors are well defined since below a certain row there is only the unit main diagonal; the Laplace formula then shows that every A(j, k) is the determinant of a finite matrix. It is clear that A(j, j) = 1 and A(j, k) = 0 for j > k. Let us consider the infinite matrix B that consists of the elements A(k, j). This is also an upper triangular matrix. Multiplying the matrix A by B according to the rule of matrix multiplication, we find that the sums Σ_{j=1}^{∞} a_{j,k} A(j, s) are finite; they are equal to 1 if s = k and to 0 if s ≠ k. This follows from the rule for finding the inverse of a finite matrix with unit determinant. Therefore AB = I, where I is the infinite identity matrix.
Multiplying the equality (22) by B, we obtain expressions for x_q in terms of infinite series in Y_k. These series converge by Proposition 2.
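The triangular structure that makes the infinite inverse B well defined can be seen on finite truncations: for a unit upper-triangular matrix, inverting a leading principal block gives exactly the leading block of the inverse, so the truncations are consistent. A small numpy sketch (the random matrix is an arbitrary stand-in for A):

```python
import numpy as np

rng = np.random.default_rng(0)

# a unit upper-triangular truncation of A: ones on the diagonal, arbitrary entries above
A = np.triu(rng.standard_normal((8, 8)), k=1) + np.eye(8)
B = np.linalg.inv(A)              # also unit upper-triangular

# each entry B[j, k] depends only on the finite block of A between row j and column k,
# so inverting a leading block reproduces the leading block of the inverse
assert np.allclose(np.linalg.inv(A[:5, :5]), B[:5, :5])
assert np.allclose(np.diag(B), 1.0)
assert np.allclose(B, np.triu(B))
```

This is the finite-dimensional shadow of the cofactor construction: every entry of B is a finite determinant.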

Lemma 3 If the series Σ_{n=1}^{∞} a_{k_1,n} · b_{n,k_2} (24) converges absolutely for all 1 ≤ k_1, k_2 ≤ m, then the equality (25) holds and the resulting series converges absolutely.
To prove the equality (25), we apply the Binet-Cauchy formula (23) to finite submatrices of the matrix A of order m × n and to finite submatrices of the matrix B of order n × m, and then take the limit as n → ∞. The absolute convergence of the series (24) ensures the convergence of the resulting series.
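The finite Binet-Cauchy formula used in this limiting argument states that det(AB) = Σ_J det A[:, J] · det B[J, :], the sum running over all m-element subsets J of the inner indices. A direct numeric check on random matrices (the sizes are arbitrary):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
m, n = 2, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

lhs = np.linalg.det(A @ B)
# Binet-Cauchy: det(AB) is the sum over all m-element index sets J
# of det A[:, J] times det B[J, :]
rhs = sum(np.linalg.det(A[:, list(J)]) * np.linalg.det(B[list(J), :])
          for J in combinations(range(n), m))
assert np.isclose(lhs, rhs)
```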
For the function f we introduce an infinite matrix Φ whose elements are built from the zeros of f, and denote by Φ^T the transpose of Φ.
In what follows we assume that all |α j | > 1. Therefore, for such f Proposition 2 is valid.
Consider the matrix Φ_m that consists of the first m rows of the matrix Φ.

Proposition 3 The determinants D m have a representation in the form
and this series converges absolutely.
Proof We apply formula (25) to the matrices Φ_m and Φ_m^T, using the fact that a matrix and its transpose have the same determinant. Here s_{j+k} is an infinite inner product of an infinite row of the matrix Φ_m and an infinite column of the matrix Φ_m^T.

Consider now the Hankel form (19) with real variables x_j. We have previously obtained that this form is an absolutely convergent double series when |x_k| ≤ k^{−2}, k ≥ 1.
Transform the form S(x, x); the change of the order of summation is justified since the corresponding series converge. Denoting the resulting linear forms by Z_m, we arrive at the representation (27). If a zero β_m is real, then Z_m² > 0. If a zero is complex, then the corresponding pair contributes a difference of squares of the real forms P_m = Re Z_m and Q_m = Im Z_m. This means that a square in the representation (27) corresponds to a real zero and a difference of squares corresponds to a complex zero. Therefore, the relation (27) can be written in the form (28), where r_m is 1 or −1 and the linear infinite forms F_m are real. We shall show that if all D_m > 0, then all r_m = 1 in (28). Suppose that r_{m_0} = −1 for some m_0. Consider the system of equations (29). All equations in this system are different.
Move all x j with j > k to the right hand side of the system (29). Then we have a convergent series in the right hand side of the system. The coefficients at x 0 , . . . , x k−1 form a Vandermonde matrix (or its real or imaginary part). Therefore, we express x 0 , . . . , x k−1 from the system (29) as convergent series in other variables. Substituting x 0 , . . . , x k−1 into the system (29), we obtain that the right hand side is equal to zero.
It is clear that, substituting these solutions into F_{m_0}, this form cannot be identically equal to zero, because the equation F_{m_0} = 0 is not a consequence of the system (29). Then there exist x_k, . . . , such that F_{m_0} ≠ 0.
Recall that the form F_m is obtained from the form Z_m, and all the variables satisfy |x_j| ≤ c_1 for a constant c_1. We assume that |β_m| > 1 for all m. Then we obtain a bound δ for the tail of the form; this tail is the remainder of a convergent series, so it can be made arbitrarily small, uniformly with respect to the x_j. Choosing sufficiently large k such that δ < |F_{m_0}|, we find that S(x, x) takes a negative value in this case.

Proof Assume that a function f has imaginary zeros ±iγ_j, γ_j ∈ R. Then the Taylor expansion of the canonical product for f contains only even powers of z. Indeed, the factors in (5) combine in conjugate pairs into factors depending only on z², so the product depends on z². Therefore, the canonical product (6) that corresponds to the function f is a function of z². We introduce the function H(w) whose zeros are exactly the numbers −γ_n². Therefore, H(w) has only real zeros. The function H(w) is also an entire function of finite order of growth, and its order is half the order of the canonical product Π(z). So in our case the order of growth of the function H(w) is less than 1.
Consider the power sums for the function H(w). It is clear that such a power sum is equal to the corresponding sum σ_{2k} of the function f(z).
Since by Theorem 2 we have p ≤ ρ and q ≤ ρ, in our case p = 0 and q = 0. Therefore, the Hadamard decomposition of the function H(w) reduces to a pure canonical product. It is clear that if all zeros of this function are real, then they are negative if and only if the Taylor expansion of H(w) has non-negative coefficients c_{2j}. These coefficients are the coefficients of the Taylor expansion of the canonical product Π(z).
A necessary condition for all zeros to be imaginary is that the Taylor coefficients of the canonical product Π(z) are non-negative and the odd-numbered coefficients are equal to 0.
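This reduction can be illustrated symbolically. For an invented finite model with zeros ±2i and ±3i, the product Π(1 + z²/γ²) depends on z only through z², and the substitution z² → w produces a function H(w) with negative real zeros and non-negative coefficients:

```python
import sympy as sp

z, w = sp.symbols('z w')

# invented purely imaginary zeros: +/- 2i and +/- 3i
f = sp.expand((1 + z ** 2 / 4) * (1 + z ** 2 / 9))

# f depends on z only through z^2; substituting z^2 -> w gives H(w)
H = sp.expand(f.subs(z ** 2, w))
assert sorted(sp.solve(H, w)) == [-9, -4]   # zeros of H are -gamma^2: real and negative

# even-numbered Taylor coefficients of f are positive, odd-numbered ones vanish
coeffs = [f.coeff(z, k) for k in range(5)]
```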
Let us find the condition on the Taylor coefficients of the function f(z) which guarantees non-negativity of the coefficients of the canonical product. Because the order of growth of the function is ρ < 2, its Hadamard expansion takes the form f(z) = e^{b_1 z} Π(z), where b_1 may be equal to zero.
Using Proposition 2, Lemma 1, and (3), we express the Taylor coefficients of the canonical product Π(z) through the coefficients of f. It follows from this relation and Theorem 7 that the theorem is proved.
Remark 2 If b_1 = 0, then the function f(z) coincides with its canonical product and the condition (33) takes a simpler equivalent form.

Recall the definition of the type of an entire function. Let f(z) have a finite order ρ. If there exists a positive number K such that |f(z)| < exp(K|z|^ρ) for all sufficiently large |z|, then the function f has a finite type.
The infimum of such numbers K is called the type of the function and is denoted κ. A function f has a minimal type if κ = 0.
If a function f(z) is an entire function either of order ρ < 1, or of order ρ = 1 and of minimal type κ = 0, then its Hadamard expansion reduces to the canonical product (see, e.g., [7, Ch. 7, §2]). Therefore, the following statement is true.

Proof In this case the entire function coincides with its canonical product, and the previous considerations cover this statement.

Example 4 Let us consider the following function
This function has the order of growth ρ = 1 and it coincides with its canonical product. The even-numbered coefficients of the Taylor expansion of f (z) are positive and the odd-numbered coefficients are zero. Therefore, the necessary condition of Theorem 8 is satisfied.
Consider the determinants of Theorem 8. Formula (31) shows that the power sums with even indices are σ_{2n} = (−1)^n 2^{2n−1} B_n / (2n)! for n ≥ 1, and s_n = σ_{2n+2}. The determinant Δ_m then takes a form analogous to that of Example 3: the element standing at the intersection of the j-th row and the s-th column is negative if and only if j + s is odd. Then this determinant is equal to the determinant Δ_m from Example 3. In fact, any product of elements taken one from each row and each column contains an even number of elements with an odd sum of row and column indices. Therefore, this example is reduced to Example 3.

Now consider the case when the D_m may be positive or negative. Suppose, as before, that all D_m ≠ 0. We introduce the sequence (34): D_{−1}D_0, D_0D_1, . . . , D_mD_{m+1}, . . . . Recall that D_{−1} = 1.

Theorem 9
Assume that a function f has real coefficients. If the sequence (34) contains exactly m negative numbers then the function f has m distinct pairs of complex conjugate zeros and an infinite number of real zeros. If the sequence (34) has an infinite number of negative numbers then the function f (z) has an infinite number of complex conjugate zeros.
The proof essentially repeats the proof of Theorem 7 and the arguments from [1, Ch. 10, §1]. If the sequence (34) contains exactly m negative numbers, then we choose sufficiently large k with k > m. Suppose that the sequence Z_1, . . . , Z_k contains s squares with negative sign and s > m. Then we set to zero in the expression (21) those X_k which enter (21) with negative sign; there are exactly m such equations. We also set to zero the squares with positive sign in the relation (28); there are exactly k − s such equations, and (k − s) + m < k. Hence this system of fewer than k equations has a non-zero solution. Using the inequality (30), we see that for such a solution the expression (21) is non-negative while the expression (28) is strictly less than zero. This is impossible. The case s < m is treated similarly.
It is clear that if there is an infinite number of negative numbers in the sequence (34) then the number of complex zeros is infinite.
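A numeric sketch of the finite-count case of Theorem 9 (the zero set {1.5, 2 ± i} and k_0 = 2 are invented for illustration): the sequence D_{−1}D_0, D_0D_1, D_1D_2 computed from s_j = σ_{2k_0+j} contains exactly one negative product, matching the single complex-conjugate pair.

```python
import numpy as np

alphas = [1.5, 2 + 1j, 2 - 1j]   # invented distinct zeros: one real, one conjugate pair
k0 = 2

# s_j = sigma_{2 k0 + j}; the imaginary parts cancel, so take the real part
s = [sum(a ** (-(2 * k0 + j)) for a in alphas).real for j in range(10)]

def D(p):
    # determinant of the leading p x p Hankel block built from the s_j
    return np.linalg.det(np.array([[s[j + k] for k in range(p)] for j in range(p)]))

minors = [1.0] + [D(p) for p in range(1, 4)]           # D_{-1} = 1, D_0, D_1, D_2
products = [minors[i] * minors[i + 1] for i in range(3)]
negatives = sum(1 for x in products if x < 0)          # one negative product: one complex pair
```

Here D_0 and D_1 are positive while D_2 is negative, so only the last product D_1 D_2 is negative.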

Remark 3
The sequence (34) always has an infinite number of positive elements, while the number of real zeros of the function f(z) can be finite or infinite. Therefore, this sequence cannot be used to obtain a condition for the existence of a finite number of real zeros.
Remark 4 If some of the roots satisfy |α_j| ≤ 1, then we can consider the function f(rz), where r > 0. The zeros of this function are α_j/r; therefore, for sufficiently small r, they are greater than 1 in absolute value. Under this change the sums σ_k are multiplied by r^k, and the minors D_p are multiplied by positive powers of r. As this takes place, the determinants D_p remain non-zero and their signs are unchanged. Moreover, under such a transformation real roots remain real and complex roots remain complex; thus Theorems 7-9 remain true.

Remark 5
If the sequence (34) has zero elements then Theorem 9 remains true if signs of sequence elements are arranged according to the Frobenius rule (16). We also note that we have excluded the case where D m are equal to zero, starting from a certain number, since the number of zeros of the function is infinite.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.