Abstract
The aim of this article is to find conditions on the Taylor coefficients of an entire function of finite order of growth in \( \mathbb C \) that guarantee a specified number of zeros. If the Taylor coefficients of f are real, we also give conditions determining whether the number of zeros is finite or infinite and whether the zeros are real or imaginary.
Localization of zeros of polynomials is a classical problem with a long history (see, e.g., [1, Ch. 16], [2, 3]). For algebraic equations, root localization has been studied in [4] with the use of complex analysis. For entire functions, however, this problem has not yet been considered. Nevertheless, equations and systems of equations built from exponential polynomials appear, for example, in chemical kinetics (see [5, 6]). This raises the question of the number of zeros of such functions, of the number of real or imaginary zeros, etc.
1 The Absence of Zeros
Let the function \(f = f (z)\) be holomorphic in a neighborhood of the origin in the complex plane \(\mathbb C\):
\( f(z)=\sum_{k=0}^{\infty}b_kz^{k}. \qquad (1) \)
Denote by \( \gamma _r \) the circle of radius r centered at the origin: \(\gamma _r=\{z\in \mathbb C: |z|=r\}\).
Theorem 1
A function f is a non-vanishing entire function of finite order of growth if and only if for sufficiently small r there exists \( k_0 \in \mathbb N \) such that
\( \frac{1}{2\pi i}\int_{\gamma _r}\frac{f'(z)}{z^{k}f(z)}\,dz=0\quad\text{for all } k>k_0. \qquad (2) \)
The minimal \( k_0 \) is the order of the entire function f.
Recall that an entire function f(z) has a finite order (of growth) if there exists a positive number A such that
\( |f(z)|\leqslant e^{|z|^{A}} \)
for all sufficiently large |z|.
The infimum of such A is called the order of the function f.
Proof
Let f be a function of finite order of growth that has no zeros in \( \mathbb C \); then it is well known that it has the form \(f(z)=e^{\varphi (z)}\), where \(\varphi (z)\) is a polynomial of some degree \(k_0\) (see, e.g., [7, Ch. 7, Sec. 1.5]). Then \(\dfrac{f'(z)}{z^{k}f(z)}=\dfrac{\varphi '(z)}{z^{k}}\), where \(\varphi '(z)\) is a polynomial of degree \(k_0-1\), so the integrals in (2) vanish for all \(k>k_0\).
Conversely, suppose that the condition (2) is fulfilled. Since f(z) is holomorphic in a neighborhood of the origin and \( f (0) \ne 0 \), the values of f(z) lie in a neighborhood of f(0) , and this neighborhood does not contain the point 0 for sufficiently small |z| . Therefore, the holomorphic function \(\varphi (z)=\ln f(z)\) (\(\ln 1=0 \)) is defined in a neighborhood of the origin.
Let
\( \varphi (z)=\sum_{k=0}^{\infty}a_kz^{k}. \)
Then, for sufficiently small r we have
\( \frac{1}{2\pi i}\int_{\gamma _r}\frac{f'(z)}{z^{k}f(z)}\,dz=ka_k,\quad k\geqslant 1. \qquad (3) \)
When the condition (2) is fulfilled, we see that \( a_k = 0 \) for \(k>k_0\). Therefore, \( \varphi (z) \) is a polynomial of degree \( k_0\). Consequently, \( f (z) = e ^ {\varphi (z)} \) is an entire function of finite order \(k_0\). \(\square \)
There exists a recursive relationship between the coefficients of f and \( \varphi (z) \) (see, e.g., [4, §2, Lemma 2.3]).
Lemma 1
The following relations are true:
\( kb_k=\sum_{j=1}^{k}ja_jb_{k-j},\quad k\geqslant 1\ \ (b_0=1), \)
and
\( ka_k=(-1)^{k-1}\begin{vmatrix} b_1 & 1 & 0 & \ldots & 0\\ 2b_2 & b_1 & 1 & \ldots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ (k-1)b_{k-1} & b_{k-2} & b_{k-3} & \ldots & 1\\ kb_k & b_{k-1} & b_{k-2} & \ldots & b_1 \end{vmatrix}. \)
Therefore, we have the following statement.
Corollary 1
For a function f to be an entire non-vanishing function of finite order \( k_0 \), it is necessary and sufficient that
\( \begin{vmatrix} b_1 & 1 & 0 & \ldots & 0\\ 2b_2 & b_1 & 1 & \ldots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ kb_k & b_{k-1} & b_{k-2} & \ldots & b_1 \end{vmatrix}=0\quad\text{for all } k>k_0, \qquad (4) \)
where \( k_0 \) is the minimal number with this property.
Example 1
Let \( f(z)=e^{z} \), i.e., \(b_0=1\), \(b_k=\dfrac{1}{k!}\), \(k> 0\).
Substitute these values into (4). When \( k = 1 \), the determinant is not equal to zero. For \( k>1 \) all determinants are equal to zero since the first two columns are the same. Then the function f(z) is of order 1 and has no zeros in the complex plane.
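This computation can be repeated numerically. Below is a minimal sketch, assuming the recursion \( kb_k=\sum_{j=1}^{k}ja_jb_{k-j} \) (which follows from \(f'=\varphi 'f\) with \(b_0=1\)); for \(f(z)=e^z\) the logarithm is \(\varphi (z)=z\), so \(a_1=1\) and all higher \(a_k\) vanish:

```python
from fractions import Fraction
from math import factorial

def log_coeffs(b, n):
    # Coefficients a_1, ..., a_n of phi = ln f computed from the Taylor
    # coefficients b_0 = 1, b_1, ... of f via k*b_k = sum_{j=1}^k j*a_j*b_{k-j}.
    a = [Fraction(0)] * (n + 1)
    for k in range(1, n + 1):
        s = sum(j * a[j] * b[k - j] for j in range(1, k))
        a[k] = (k * b[k] - s) / k
    return a

# f(z) = e^z: b_k = 1/k!, so phi(z) = z, i.e. a_1 = 1 and a_k = 0 for k > 1,
# confirming that f is non-vanishing of order 1 (Theorem 1, Corollary 1)
b = [Fraction(1, factorial(k)) for k in range(11)]
a = log_coeffs(b, 10)
print(a[1], all(a[k] == 0 for k in range(2, 11)))  # 1 True
```

Exact rational arithmetic is used so that the vanishing of the \(a_k\) is detected exactly, not up to rounding.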
2 Auxiliary Statements
Let a function f(z) of the form (1) be an entire function of finite order of growth. Denote by \( \alpha _1, \ \alpha _2, \ \ldots , \alpha _n, \ldots \) its zeros (every zero appears as many times as its multiplicity) in the order of increasing absolute value \(|\alpha _1|\leqslant |\alpha _2|\leqslant \cdots \leqslant |\alpha _n|\leqslant \cdots \). Their number may be finite or infinite.
Recall the Hadamard decomposition for such functions (see, e.g., [7, Ch. 7, Sec. 2.3], [8, Ch. 8, Theorem 8.2.4]).
Theorem 2
If f(z) is an entire function of finite order \( \rho \) then
\( f(z)=z^{s}e^{Q(z)}\prod_{n=1}^{\infty}\Bigl(1-\frac{z}{\alpha _n}\Bigr)e^{P_n(z)}, \qquad (5) \)
where \(P_n(z)=\dfrac{z}{\alpha _n}+\dfrac{z^{2}}{2\alpha _n^{2}}+\cdots +\dfrac{z^{p}}{p\alpha _n^{p}}\), Q(z) is a polynomial of degree \( q\leqslant \rho \), s is the multiplicity of the zero of the function f at the origin, and p is some integer with \(p\leqslant \rho \).
The infinite product in (5) converges absolutely and locally uniformly in \( \mathbb C \). (Recall that a sequence of holomorphic functions converges locally uniformly in an open set U if it converges uniformly on every compact subset of U.) In what follows we assume for simplicity that \(f(0)=1\). The polynomial Q(z) is of the form
\( Q(z)=d_0+d_1z+\cdots +d_qz^{q}. \)
Here \(d_0=0\), since \(f(0)=1\).
The expression
\( \Phi (z)=\prod_{n=1}^{\infty}\Bigl(1-\frac{z}{\alpha _n}\Bigr)e^{P_n(z)} \qquad (6) \)
is called the canonical product, and the integer p is the genus of the canonical product of f. The genus of the entire function f(z) is \(\max \{q,p\}\). If \( \rho '\) is the order of the canonical product (6), then \(\rho =\max \{q,\rho '\}\).
Let us consider the following series:
\( \sum_{n=1}^{\infty}\frac{1}{|\alpha _n|^{\gamma }}. \qquad (7) \)
The infimum of positive \( \gamma \) for which the series (7) converges is called the rate of convergence of zeros of the canonical product \(\Phi (z)\).
It is well known (see, e.g., [7, Ch. 7, Sec. 2.2], [8, Sec. 8, §8.2.5]) that the rate of convergence of zeros of the canonical product is equal to its order.
Then the sums of zeros in negative powers
\( \sigma _k=\sum_{n=1}^{\infty}\frac{1}{\alpha _n^{k}} \)
are absolutely convergent series when \(k>\rho '\), in particular when \(k>\rho \). It is also known that \(\rho '-1\leqslant p\leqslant \rho '\) (see, e.g., [8, Sec. 8, §8.2.7]).
In what follows we consider the power sums with positive integer exponents k. Let us relate integrals in (2) to power sums \(\sigma _k\) of zeros.
Formula (3) relates the integrals in (2) to the expansion coefficients of \( \varphi (z) = \ln f (z) \) in a neighborhood of the origin. Let us express the integral in terms of power sums of zeros, using the Hadamard formula. We consider the case of \(s=0\), that is \( f(0)\ne 0\).
In a sufficiently small neighborhood of the origin we have (according to the Hadamard formula (5))
\( \varphi (z)=\ln f(z)=Q(z)+\sum_{n=1}^{\infty}\Bigl[\ln \Bigl(1-\frac{z}{\alpha _n}\Bigr)+P_n(z)\Bigr], \)
where \(P_n(z)= \dfrac{z}{\alpha _{n}}+\dfrac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\dfrac{z^{p}}{p\alpha _{n}^{p}}\).
The series for \( \varphi (z) \) converges absolutely and uniformly in a sufficiently small neighborhood of the origin since the zeros \( \alpha _j \) are bounded away from it.
It is obvious that
\( \ln \Bigl(1-\frac{z}{\alpha _n}\Bigr)+P_n(z)=-\sum_{k=p+1}^{\infty}\frac{z^{k}}{k\alpha _n^{k}}. \)
Let us transform the following expression:
\( \sum_{n=1}^{\infty}\sum_{k=p+1}^{\infty}\frac{z^{k}}{k\alpha _n^{k}}=\sum_{k=p+1}^{\infty}\frac{z^{k}}{k}\sum_{n=1}^{\infty}\frac{1}{\alpha _n^{k}}=\sum_{k=p+1}^{\infty}\frac{\sigma _k}{k}z^{k}. \)
Then
\( \varphi (z)=Q(z)-\sum_{k=p+1}^{\infty}\frac{\sigma _k}{k}z^{k}. \)
Thus we have the following statement
Proposition 1
Let f(z) be an entire function of finite order of growth \( \rho \) of the form (5) with \( f (0) = 1 \). If \(q\leqslant p\) then
\( \varphi (z)=\ln f(z)=Q(z)-\sum_{k=p+1}^{\infty}\frac{\sigma _k}{k}z^{k}. \)
Similarly, we can consider the case of \( q>p \). In any case we have
Corollary 2
The following equality is true:
\( \sigma _k=-ka_k=-\frac{1}{2\pi i}\int_{\gamma _r}\frac{f'(z)}{z^{k}f(z)}\,dz\quad\text{for } k>\rho. \)
It follows from (3), Lemma 1, and Corollary 2 that
Corollary 3
The following relations are true:
\( \sigma _k=(-1)^{k}\begin{vmatrix} b_1 & 1 & 0 & \ldots & 0\\ 2b_2 & b_1 & 1 & \ldots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ kb_k & b_{k-1} & b_{k-2} & \ldots & b_1 \end{vmatrix},\quad k>\rho. \)
These formulas relate the power sums \( \sigma _k \) to the coefficients of the function f. In the case when \( \sigma _1 \) is an absolutely convergent series such formulas were considered in [4, §2].
3 Finite Number of Zeros
Consider an entire function of finite order of growth of the form (1). In this section we find conditions on the coefficients of the function under which the function has a finite number of zeros. First of all, we need to find the order \( \rho \) of the function f. To do this, we use the formula (see, e.g., [7, Ch. 7, §2], [8, Ch. 8, Sec. 8.3])
\( \rho =\varlimsup_{k\rightarrow \infty }\frac{k\ln k}{\ln \frac{1}{|b_k|}}. \)
If \( \rho \) is a fractional number then the function has an infinite number of zeros. In this section we assume that \( \rho \) is an integer.
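The order can be estimated from the coefficients; a quick numerical sanity check, assuming the classical formula \(\rho =\varlimsup_k\, k\ln k/\ln (1/|b_k|)\), for \(f(z)=e^z\) with \(b_k=1/k!\) (true order 1; the convergence is slow, so the estimate approaches 1 from above):

```python
from math import lgamma, log

def order_estimate(k):
    # k*ln(k) / ln(1/|b_k|) with b_k = 1/k!, i.e. ln(1/|b_k|) = ln k! = lgamma(k+1)
    return k * log(k) / lgamma(k + 1)

# the estimates decrease slowly toward the true order rho = 1
print(order_estimate(200), order_estimate(2000))
```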
We need some results about infinite Hankel matrices. They can be found in [1, Ch. 16, §10] and in [9, Ch. 2].
Consider a sequence of complex numbers \(s_0,\ s_1,\ s_2,\ldots \). This sequence defines an infinite Hankel matrix
\( S=\begin{pmatrix} s_0 & s_1 & s_2 & \ldots\\ s_1 & s_2 & s_3 & \ldots\\ s_2 & s_3 & s_4 & \ldots\\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \qquad (9) \)
The consecutive principal minors of the matrix S are denoted by \(D_0,\ D_1, \ D_2, \ldots \):
\( D_m=\det \bigl(s_{j+k}\bigr)_{j,k=0}^{m},\quad m=0,1,2,\ldots \)
We also assume that \(D_{-1}=1\).
If for each \( p \in \mathbb N \) there is a non-zero minor of S of order p, then the matrix has infinite rank. If, starting with some p, all minors of order greater than p are equal to zero, then the matrix S has finite rank; the smallest such p is called the rank of the matrix. Here are two statements about Hankel matrices of finite rank p (see [1, Ch. 16, §10]).
Corollary 4
(Kronecker) If an infinite Hankel matrix S of the form (9) has a finite rank p then \(D_{p-1}\ne 0\).
The converse statement is also true (see [9, §11]).
Corollary 5
(Frobenius) If the minor of an infinite Hankel matrix \( D_{p-1} \ne 0 \) and the minors \( D_p = D_{p+1} = \cdots = D_{p+j} = \cdots = 0 \), then the rank of the matrix S is finite and is equal to p.
Theorem 3
An infinite Hankel matrix has a finite rank p if and only if there exist p constants \( c_1, \ c_2, \ldots , c_p \) such that
\( s_q=c_1s_{q-1}+c_2s_{q-2}+\cdots +c_ps_{q-p}\quad\text{for all } q\geqslant p. \)
This theorem is given in [1, Ch. 16, §10, Theorem 7].
Theorem 4
A matrix S has a finite rank p if and only if the sum of the series
\( R(z)=\sum_{j=0}^{\infty}\frac{s_j}{z^{j+1}} \qquad (10) \)
is a rational function of z. In this case, the rank of the matrix S coincides with the number of poles of R(z), each pole counted as many times as its multiplicity.
This statement is given in [1, Ch. 16, §10].
Consider again an entire function f(z) of integer order \( \rho \). By the properties of entire functions, the power sums \( \sigma _k \) are absolutely convergent series when \(k>\rho \). Let \(s_j=\sigma _{2k_0+j}\), \(2k_0>\rho +1\), \(j=0,1,\ldots \). Consider an infinite Hankel matrix S of the form (9).
Theorem 5
A function f has a finite number of zeros if and only if the rank of the matrix S is finite. The number of distinct zeros of the function f is equal to the rank of the matrix S.
Proof
Assume that \( \alpha _1, \ldots , \alpha _p \) are the zeros of the function f; their number is finite, and each zero is counted as many times as its multiplicity. Then
\( \sigma _k=\sum_{m=1}^{p}\frac{1}{\alpha _m^{k}} \qquad (11) \)
and \(s_j=\sigma _{2k_0+j}, \ j\geqslant 0\).
Consider a monic polynomial P(x) of degree p with zeros at \( 1 / \alpha _1, \ldots , 1 / \alpha _p \):
\( P(x)=\prod_{m=1}^{p}\Bigl(x-\frac{1}{\alpha _m}\Bigr)=x^{p}-c_1x^{p-1}-c_2x^{p-2}-\cdots -c_p. \)
The coefficients \(c_j\) of this polynomial can be found with the help of the classical Newton formulas. When \( j> p \) they have the form
\( \sigma _j-c_1\sigma _{j-1}-c_2\sigma _{j-2}-\cdots -c_p\sigma _{j-p}=0, \)
or
\( \sigma _j=c_1\sigma _{j-1}+c_2\sigma _{j-2}+\cdots +c_p\sigma _{j-p}. \)
Taking the sums \( \sigma _{2k_0+j} \) as the \( s_j \), we obtain
\( s_j=c_1s_{j-1}+c_2s_{j-2}+\cdots +c_ps_{j-p},\quad j\geqslant p. \)
Thus, Theorem 3 shows that the rank of S is finite and it does not exceed p.
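The Newton-type recursion above can be illustrated on a toy example. A sketch with hypothetical zeros \(\alpha _1=2\), \(\alpha _2=3\) (so \(p=2\)), assuming the sign convention \(P(x)=x^{p}-c_1x^{p-1}-\cdots -c_p\):

```python
from fractions import Fraction as F

alphas = [F(2), F(3)]                             # hypothetical zeros, |alpha| > 1
sigma = lambda k: sum(1 / a ** k for a in alphas)  # power sums of the 1/alpha_m

# P(x) = (x - 1/2)(x - 1/3) = x^2 - c1*x - c2, so c1 = 5/6, c2 = -1/6
c1, c2 = F(5, 6), F(-1, 6)

# the recursion sigma_j = c1*sigma_{j-1} + c2*sigma_{j-2} holds for j >= p = 2
ok = all(sigma(j) == c1 * sigma(j - 1) + c2 * sigma(j - 2) for j in range(2, 12))
print(ok)  # True
```

The same recursion then holds for the shifted sums \(s_j=\sigma _{2k_0+j}\), which is what Theorem 3 uses.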
Suppose now that the rank of S is finite and is equal to q. According to Theorem 4, this rank is the number of poles of the rational function R(z) (formula (10)). Each pole is counted as many times as its multiplicity.
The series R(z) converges (absolutely and uniformly) for z lying outside a disk centered at the origin; the disk contains all the (yet unknown) poles of the function R(z). Transform R(z), assuming that \( | \alpha _m z |> 1 \) for all \( \alpha _m \):
\( R(z)=\sum_{j=0}^{\infty}\frac{1}{z^{j+1}}\sum_{m}\frac{1}{\alpha _m^{2k_0+j}}=\sum_{m}\frac{1}{\alpha _m^{2k_0}}\cdot \frac{1}{z-\frac{1}{\alpha _m}}. \)
Changing the order of summation of the series is justified because they converge absolutely. By the hypothesis, R(z) is a rational function. Let us show that this series contains only a finite number of terms. Consider the following function:
\( R^{*}(w)=R\Bigl(\frac{1}{w}\Bigr)=\sum_{m}\frac{w}{\alpha _m^{2k_0-1}(\alpha _m-w)}. \)
Let us analyze this series to find its convergence domain. The roots are arranged in the order of increasing absolute value. Let \(|w|\leqslant r<|\alpha _1|\); then
\( |\alpha _m-w|\geqslant |\alpha _m|-|w|\geqslant |\alpha _m|\Bigl(1-\frac{r}{|\alpha _1|}\Bigr). \)
That is, \(\dfrac{|w|}{|\alpha _m|}\leqslant \dfrac{r}{|\alpha _1|}\), and we have
\( \Bigl|\frac{w}{\alpha _m^{2k_0-1}(\alpha _m-w)}\Bigr|\leqslant \frac{r}{1-\frac{r}{|\alpha _1|}}\cdot \frac{1}{|\alpha _m|^{2k_0}}. \)
The last series converges by the choice of \(k_0\). Thus, the series \( R^* (w) \) converges absolutely and uniformly in the disk \(\{|w|< |\alpha _1|\}\). Separating out the roots with the same absolute value \(| \alpha _1 | = \cdots = | \alpha _n | \), we obtain
\( R^{*}(w)=\sum_{m=1}^{n}\frac{w}{\alpha _m^{2k_0-1}(\alpha _m-w)}+\sum_{m=n+1}^{\infty}\frac{w}{\alpha _m^{2k_0-1}(\alpha _m-w)}. \qquad (12) \)
The first sum is a finite sum of fractions with poles at \(\alpha _1,\ldots , \alpha _n\). The second sum is a series that defines a holomorphic function for \( | w | <| \alpha _ {n + 1} | \) (by the same reasoning as above). Since \( R^* (w) \) is a rational function, the second series in (12) is also a rational function. Consider a representation
\( R^{*}(w)=\frac{P(w)}{Q(w)}, \)
where P(w) and Q(w) are polynomials. Then \(P(w)=Q(w)R^*(w)\). Since the left-hand side of this equality is a polynomial, \(Q(\alpha _1)=\cdots = Q(\alpha _n)=0\). If s is the degree of Q, then the same reasoning applied to the remaining poles gives \(Q(\alpha _{n+1})=\cdots =Q(\alpha _s)=0\). Therefore, the series \( R^*(w) \) contains only a finite number of fractions, equal to the number q of distinct roots \(\alpha _j\). So the rank of S is q. If the number of all roots (counted with multiplicities) is equal to p, then the first part of the theorem shows that \(p\geqslant q\). \(\square \)
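Theorem 5 can be checked on a small example. A sketch with hypothetical zeros 2, 3, 3 (three zeros, two distinct values) and \(k_0=2\); the rank of the truncated Hankel matrix is computed exactly over the rationals and indeed equals the number of distinct zeros:

```python
from fractions import Fraction as F

def hankel_rank(s, n):
    """Exact rank of the n x n Hankel matrix (s[j+k]) via Gaussian elimination."""
    M = [[s[j + k] for k in range(n)] for j in range(n)]
    rank, row = 0, 0
    for col in range(n):
        piv = next((r for r in range(row, n) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        for r in range(row + 1, n):
            f = M[r][col] / M[row][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[row])]
        row += 1
        rank += 1
    return rank

# hypothetical zeros 2, 3, 3 (multiplicities counted), k0 = 2, s_j = sigma_{2k0+j}
zeros = [F(2), F(3), F(3)]
s = [sum(1 / a ** (4 + j) for a in zeros) for j in range(12)]
print(hankel_rank(s, 6))  # 2, the number of distinct zeros
```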
Note that by Corollary 3 the power sums \(s_j \) are expressed in terms of Taylor coefficients of the function f.
Assume that an entire function f has real coefficients, then it has either real or complex conjugate zeros. Note that in this case all the power sums \( \sigma _k \) and, accordingly, the numbers \( s_j \) are real.
Now we raise the question of the number of real and complex zeros. Since the function f(z) has a finite number of distinct zeros that is equal to the rank of the Hankel matrix S, the solution of this problem is reduced to the classical problem of finding the number of distinct real roots of a polynomial (see, e.g., [1, Ch. 16, §9]).
Consider an infinite Hankel matrix S of the form (9) with \(s_j=\sigma _{2k_0+j}\). The rank of the matrix is p. We introduce a truncated matrix \(S_{p}\):
\( S_p=\bigl(s_{j+k}\bigr)_{j,k=0}^{p-1} \qquad (13) \)
and a truncated Hankel quadratic form
\( S_{p}(x,x)=\sum_{j,k=0}^{p-1}s_{j+k}x_jx_k. \qquad (14) \)
The distinct zeros of the function f are \( \beta _1, \ldots , \beta _p \) with multiplicities \( n_1, \ldots , n_p \), respectively.
Since \(s_j=\sigma _{2k_0+j}\),
\( S_p(x,x)=\sum_{j,k=0}^{p-1}\Bigl(\sum_{m=1}^{p}\frac{n_m}{\beta _m^{2k_0+j+k}}\Bigr)x_jx_k=\sum_{m=1}^{p}n_mZ_m^{2}. \qquad (15) \)
The linear forms
\( Z_m=\sum_{j=0}^{p-1}\frac{x_j}{\beta _m^{k_0+j}},\quad m=1,\ldots ,p, \)
are linearly independent because the determinant composed of their coefficients is a Vandermonde determinant, which is distinct from zero. If the forms \( Z_m \) and \( Z_k \) are complex conjugate, then we can consider \(\dfrac{1}{2}(Z_m+Z_k)\) and \(\dfrac{1}{2i}(Z_m-Z_k)\) instead. These forms are still linearly independent but become real.
In the relation (15) each real root corresponds to a square, and a pair of conjugate roots corresponds to a difference of squares. Now we use the Frobenius theorem on the rank and signature of the Hankel form \(S_{p}(x,x)\) (see, e.g., [1, Ch. 10, §10]). Note that we have \(D_{m-1}\ne 0\) for \(m=0,\ldots , p\).
Suppose that for some \( h <k \) the minors \(D_h\ne 0\), \(D_k\ne 0\) and all intermediate minors are equal to zero, i.e., \(D_{h+1}=\cdots =D_{k-1}=0\). Assign signs to these zero determinants by the rule (see [1, Ch. 10, §10, Theorem 24])
\( \mathrm{sign}\, D_{h+j}=(-1)^{\frac{j(j-1)}{2}}\,\mathrm{sign}\, D_h,\quad j=1,\ldots ,k-h-1. \qquad (16) \)
Theorem 6
The number of distinct real zeros of f with real coefficients is equal to the difference between the number of ‘non-changes of signs’ and the number of ‘changes of sign’ in the sequence \(D_{-1},\ D_0, \ D_1,\ldots , D_{p-1}\).
Example 2
Let us consider the function \(f(z)=(1-z)e^z\), \(f(0)=1\). The order of growth is \(\rho =1\). Then
\( \varphi (z)=\ln f(z)=z+\ln (1-z)=-\sum_{k=2}^{\infty}\frac{z^{k}}{k}. \)
Using Corollary 3 we obtain that \( \sigma _k = 1 \) for \( k \geqslant 2 \), i.e., the rank of the Hankel matrix S is equal to 1. Then by Theorem 5 the number of roots is equal to 1 and by Theorem 6 this root is real.
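Example 2 can be verified directly from the Taylor coefficients. A sketch, assuming the recursion \(kb_k=\sum_{j=1}^{k}ja_jb_{k-j}\) of Lemma 1 and \(\sigma _k=-ka_k\) for \(k>\rho \):

```python
from fractions import Fraction as F
from math import factorial

# Taylor coefficients of f(z) = (1 - z)e^z: b_0 = 1, b_k = 1/k! - 1/(k-1)!
b = [F(1)] + [F(1, factorial(k)) - F(1, factorial(k - 1)) for k in range(1, 13)]

# coefficients a_k of phi = ln f via k*b_k = sum_{j=1}^k j*a_j*b_{k-j} (b_0 = 1)
a = [F(0)] * 13
for k in range(1, 13):
    a[k] = (k * b[k] - sum(j * a[j] * b[k - j] for j in range(1, k))) / k

# sigma_k = -k*a_k equals 1 for every k >= 2, so the Hankel matrix of the s_j
# is the all-ones matrix of rank 1: one zero, and it is real (z = 1)
print(all(-k * a[k] == 1 for k in range(2, 13)))  # True
```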
Remark 1
The above statements show that in the case of a finite number of zeros the study of entire functions reduces to the study of polynomials. To study other features related to root localization in the context of [1, 2], one needs to factorize the function f(z), i.e., to extract a polynomial from it (see [10]). The roots of this polynomial coincide with the roots of the function f.
4 Infinite Number of Zeros
From the previous section we obtain
Corollary 6
A function f(z) has an infinite number of zeros if and only if the rank of the matrix S of the form (9) is infinite, where \(s_j=\sigma _{2k_0+j}\).
The condition for the rank to be infinite is, by Corollary 5, that there be a strictly increasing sequence of positive integers \(j_k\), \(k=1,2,\ldots \), such that all \(D_{j_k}\ne 0\).
In what follows we need some properties of infinite Hankel matrices of infinite rank.
Consider a sequence of complex numbers \(s_0,\ s_1,\ \ldots , s_k, \ldots \) such that
\( \sum_{k=0}^{\infty}|s_k|\leqslant C<\infty. \qquad (17) \)
Let \( 0 \leqslant i_1<\cdots <i_k \), \( 0 \leqslant j_1<\cdots <j_k \). Introduce the minor
\( M_k\binom{i_1\ \ldots\ i_k}{j_1\ \ldots\ j_k}=\det \bigl(s_{i_l+j_m}\bigr)_{l,m=1}^{k}. \)
It consists of the elements of S standing at the intersections of the rows \( i_1, \ldots , i_k \) and the columns \(j_1,\ldots , j_k\). In particular,
\( D_{k-1}=M_k\binom{0\ \ 1\ \ \ldots\ \ k-1}{0\ \ 1\ \ \ldots\ \ k-1}. \)
Lemma 2
If the condition (17) is fulfilled then the following inequalities are true:
\( \Bigl|M_k\binom{i_1\ \ldots\ i_k}{j_1\ \ldots\ j_k}\Bigr|\leqslant C^{k} \qquad (18) \)
for all \(j_1,\ldots , j_k\). In particular, if \( C=1 \) then \(M_k\leqslant 1\). If \(C<1\) then \(M_k\leqslant C^{k}\rightarrow 0\) as \(k\rightarrow \infty \).
Proof
We prove Lemma 2 by induction on k. When \( k = 1 \), the condition (18) obviously holds. We proceed from k to \( k + 1 \). Expanding the determinant
\( M_{k+1}\binom{i_1\ \ldots\ i_{k+1}}{j_1\ \ldots\ j_{k+1}} \)
with respect to the last row, we have
\( \bigl|M_{k+1}\bigr|\leqslant \sum_{m=1}^{k+1}\bigl|s_{i_{k+1}+j_m}\bigr|\cdot \Bigl|M_k\binom{i_1\ \ldots\ i_k}{j_1\ \ldots\ [j_m]\ \ldots\ j_{k+1}}\Bigr|\leqslant C^{k}\sum_{m=1}^{k+1}\bigl|s_{i_{k+1}+j_m}\bigr|\leqslant C^{k+1}, \)
taking into account that
\( \sum_{m=1}^{k+1}\bigl|s_{i_{k+1}+j_m}\bigr|\leqslant \sum_{l=0}^{\infty}|s_l|\leqslant C. \)
The symbol \( [j_m] \) denotes that the determinant has no column with the number \(j_m \). \(\square \)
Consider an infinite Hankel form
\( S(x,x)=\sum_{j,k=0}^{\infty}s_{j+k}x_jx_k. \qquad (19) \)
This double series converges absolutely, for example, when \(|x_j|\leqslant j^{-2}.\)
Indeed, taking into account the condition (17), we have
\( \sum_{j,k=0}^{\infty}|s_{j+k}|\,|x_j|\,|x_k|\leqslant C\Bigl(\sum_{j=0}^{\infty}|x_j|\Bigr)^{2}<\infty. \)
In what follows we assume that all \( D_j \ne 0 \). Examples of such matrices will be presented later.
Consider the truncated Hankel matrix \( S_ {p} \) of the form (13) and truncated Hankel form \(S_{p}(x,x)\) of the form (14).
Reduction of the quadratic form to a sum of squares gives (see [1, Ch. 10, §3])
\( S_p(x,x)=\sum_{k=0}^{p-1}\frac{D_{k-1}}{D_k}X_k^{2}, \)
where
\( X_k=x_k+\sum_{q=k+1}^{p-1}c_{k,q}x_q. \qquad (20) \)
Lemma 2 implies that
Setting
we find that for such \(x_q \) there exists a limit
On the other hand, for the same \( x_q\) there exists the limit
Besides that,
where
Therefore we have
\( S(x,x)=\sum_{k=0}^{\infty}\frac{D_{k-1}}{D_k}X_k^{2}. \qquad (21) \)
Note that in view of the identity (20), when squaring \(X_k \) and substituting the result into (21), each product \( x_jx_k \) in S(x, x) contains only a finite number of terms.
Thus, we obtain the following statement.
Proposition 2
If a Hankel matrix S of the form (9) satisfies the condition (17) and all \( D_p \ne 0 \), then the relation (21) holds, where each of the series
\( X_k=x_k+\sum_{q=k+1}^{\infty}c_{k,q}x_q \)
is absolutely convergent for \(|x_q|\leqslant \frac{1}{q^2}\).
Write the equality (21) in another form:
\( Y_k=\sum_{q=0}^{\infty}a_{k,q}x_q,\quad k=0,1,\ldots , \qquad (22) \)
where \(Y_k=X_k\), \(a_{k,k}=1\), and \(a_{k,q}=0\) for \(q<k\).
Let us treat the system (22) as an infinite system of equations with respect to \(x_p\), \(p=0,1,\ldots \). Denote the infinite matrix of this system by A; its elements are denoted by \(a_{j,k}\). The matrix is upper triangular with unit main diagonal.
Consider the cofactors A(j, k) of the elements \( a_{j,k} \) of the matrix A. These cofactors are well defined, since below a certain row the matrix has units on the main diagonal, and the Laplace formula shows that each A(j, k) is the determinant of a finite matrix. It is clear that \( A(j, j) = 1 \) and \( A(j, k) = 0 \) for \( j> k \). Consider the infinite matrix B consisting of the elements A(k, j). This is also an upper triangular matrix. Multiplying the matrix A by B according to the rule of matrix multiplication, we find that the sums
\( \sum_{m=0}^{\infty}a_{k,m}A(s,m) \)
are finite; they are equal to 1 if \( s = k \) and to 0 if \( s \ne k \). This follows from the rule for finding the inverse of a finite matrix with unit determinant. Therefore
\( A\cdot B=I, \)
where I is the infinite identity matrix.
Multiplying the equality (22) by B, we obtain expressions for \( x_q \) in terms of infinite series in the \( Y_k \). These series converge by Proposition 2.
Let us consider an entire function of finite order of growth \( \rho \) of the form (1) with an infinite number of zeros \(\alpha _1,\ldots ,\alpha _n,\ldots \), power sums \(\sigma _k\) of the form (11), and \(s_j=\sigma _{2k_0+j}\). We check whether the condition (17) is fulfilled for the \(s_j \). We have
\( |s_j|\leqslant \sum_{m=1}^{\infty}\frac{1}{|\alpha _m|^{2k_0+j}} \)
if all \(|\alpha _j|>1\). The following inequality holds:
\( \sum_{j=0}^{\infty}|s_j|\leqslant \sum_{m=1}^{\infty}\frac{1}{|\alpha _m|^{2k_0}}\sum_{j=0}^{\infty}\frac{1}{|\alpha _m|^{j}}\leqslant \frac{1}{1-\frac{1}{|\alpha _1|}}\sum_{m=1}^{\infty}\frac{1}{|\alpha _m|^{2k_0}}, \)
by virtue of the monotonic growth of the absolute values \(|\alpha _j|\). Therefore, the series (17) converges, and
\( C=\frac{1}{1-\frac{1}{|\alpha _1|}}\sum_{m=1}^{\infty}\frac{1}{|\alpha _m|^{2k_0}}<\infty. \)
Recall the Binet–Cauchy formula for a product of rectangular matrices. Let
\( A=\bigl(a_{jk}\bigr),\quad j=1,\ldots ,m,\ k=1,\ldots ,n,\qquad B=\bigl(b_{kj}\bigr),\quad k=1,\ldots ,n,\ j=1,\ldots ,m,\qquad m\leqslant n. \)
If the matrix
\( C=\bigl(c_{jl}\bigr),\quad j,l=1,\ldots ,m, \)
is the product \(C=A\cdot B\), then (see, e.g., [1, Ch. 1, §2])
\( \det C=\sum_{1\leqslant k_1<k_2<\cdots <k_m\leqslant n}A\binom{1\ \ 2\ \ \ldots\ \ m}{k_1\ k_2\ \ldots\ k_m}B\binom{k_1\ k_2\ \ldots\ k_m}{1\ \ 2\ \ \ldots\ \ m}. \qquad (23) \)
Suppose that A and B are infinite rectangular matrices of the form
\( A=\bigl(a_{jk}\bigr),\quad j=1,\ldots ,m,\ k\in \mathbb N,\qquad B=\bigl(b_{kj}\bigr),\quad k\in \mathbb N,\ j=1,\ldots ,m. \)
Lemma 3
If the series
\( \sum_{k=1}^{\infty}a_{k_1k}b_{kk_2} \qquad (24) \)
converges absolutely for all \( 1\leqslant k_1,k_2\leqslant m\), then we have
\( \det (A\cdot B)=\sum_{1\leqslant k_1<k_2<\cdots <k_m<\infty}A\binom{1\ \ 2\ \ \ldots\ \ m}{k_1\ k_2\ \ldots\ k_m}B\binom{k_1\ k_2\ \ldots\ k_m}{1\ \ 2\ \ \ldots\ \ m}, \qquad (25) \)
and the resulting series converges absolutely.
To prove the equality (25), we apply the Binet–Cauchy formula (23) to finite submatrices of the matrix A of order \( m \times n \) and to finite submatrices of the matrix B of order \( n \times m \), and then take the limit as \(n\rightarrow \infty \). Convergence of the resulting series ensures convergence of the series (24).
For the function f we introduce an infinite matrix \( \Delta \) with elements
\( \delta _{jk}=\frac{1}{\alpha _k^{\,k_0+j}},\quad j=0,1,\ldots ,\quad k=1,2,\ldots , \)
and let \( \Delta '\) be the transpose of the matrix \(\Delta \).
In what follows we assume that all \( | \alpha _j|>1 \). Therefore, for such f Proposition 2 is valid.
Consider a matrix \( \Delta _m \) that consists of the first m rows of the matrix \(\Delta \).
Proposition 3
The determinants \( D_m \) have a representation in the form
\( D_m=\sum_{1\leqslant k_1<k_2<\cdots <k_{m+1}<\infty}\left[\Delta \binom{0\ \ 1\ \ \ldots\ \ m}{k_1\ k_2\ \ldots\ k_{m+1}}\right]^{2}, \qquad (26) \)
and this series converges absolutely.
Proof
We apply formula (25) to the matrices \( \Delta _m \) and \( \Delta _m '\). We use the following facts: a matrix and its transpose have the same determinant, and
\( s_{j+k}=\sum_{n=1}^{\infty}\frac{1}{\alpha _n^{\,k_0+j}}\cdot \frac{1}{\alpha _n^{\,k_0+k}}. \)
Therefore, \( s_{j+k} \) is an infinite inner product of the rows of the matrix \( \Delta _m \) and the columns of the matrix \(\Delta _m'\). \(\square \)
Theorem 7
Assume that a function f(z) has real coefficients. All zeros of f are real if and only if \(D_m>0\), \(m=0,1,\ldots \).
Proof
Suppose that all zeros of f(z) are real. Using formula (26), we obtain that \( D_m \) is a sum of non-negative minors, some of them are strictly positive since they are Vandermonde determinants with different columns.
Let all \( D_m> 0 \). Consider an infinite Hankel form S(x, x) of the form (19) with real variables \( x_j \). We have previously obtained that this form is an absolutely convergent double series when \(|x_k|\leqslant k^{-2}\), \(k\geqslant 1\).
Assume that \( \beta _1, \ \beta _2, \ldots , \beta _k \ldots \) are distinct zeros of the function f with multiplicities \(n_1,\ n_2, \ldots , n_k,\ldots \), respectively.
Transform the form S(x, x):
\( S(x,x)=\sum_{j,k=0}^{\infty}s_{j+k}x_jx_k=\sum_{j,k=0}^{\infty}\Bigl(\sum_{m}\frac{n_m}{\beta _m^{2k_0+j+k}}\Bigr)x_jx_k=\sum_{m}n_m\Bigl(\sum_{j=0}^{\infty}\frac{x_j}{\beta _m^{k_0+j}}\Bigr)^{2}. \)
The change of the order of summation is justified since the corresponding series converge.
Denote
\( Z_m=\sum_{j=0}^{\infty}\frac{x_j}{\beta _m^{k_0+j}}; \)
then we find that
\( S(x,x)=\sum_{m}n_mZ_m^{2}. \qquad (27) \)
If a zero \( \beta _m \) is real then \(Z_m^2\geqslant 0\). If a pair of zeros is complex conjugate then \(Z_m^2+{\overline{Z}}_m^2=2\bigl(P_m^2-Q_m^2\bigr)\), where \(P_m=\mathrm{Re}\, Z_m\) and \(Q_m= \mathrm{Im}\, Z_m\). This means that a real zero contributes a square to the representation (27), while a pair of complex conjugate zeros contributes a difference of squares. Therefore, the relation (27) can be written as
\( S(x,x)=\sum_{m}r_mF_m^{2}, \qquad (28) \)
where each \( r_m \) is 1 or \( -1 \) and the infinite linear forms \(F_m \) are real.
We shall show that if all \( D_m> 0 \), then all \( r_m = 1 \) in (28). Suppose that \( r_{m_0} = - 1 \) for some \( m_0 \). Consider the system of equations
\( F_m=0,\quad m=1,\ldots ,k+1,\ m\ne m_0. \qquad (29) \)
All equations in this system are different.
Move all \(x_j \) with \( j> k \) to the right hand side of the system (29). Then we have a convergent series in the right hand side of the system. The coefficients at \( x_0, \ldots , x_ {k-1} \) form a Vandermonde matrix (or its real or imaginary part). Therefore, we express \(x_0,\ldots , x_{k-1}\) from the system (29) as convergent series in other variables. Substituting \(x_0,\ldots , x_{k-1}\) into the system (29), we obtain that the right hand side is equal to zero.
It is clear that if we substitute these solutions into \( F_{m_0} \), the form cannot be identically equal to zero, because the equation \(F_{m_0}=0\) is not a consequence of the system of Eq. (29). Then there exist \( x_{k}, \ldots \) such that \(F_{m_0}\ne 0\).
Recall that the form \(F_m \) is obtained from the form \( Z_m \) and that all the variables \( | x_j | \) are bounded by a constant \( c_1 \). Then we obtain
\( |F_m|\leqslant c_1\sum_{j=0}^{\infty}\frac{1}{|\beta _m|^{k_0+j}}. \)
We assume that \( | \beta _m |> 1 \) for all m. Therefore,
\( |F_m|\leqslant \frac{c_1}{|\beta _m|^{k_0}}\cdot \frac{1}{1-\frac{1}{|\beta _m|}}. \qquad (30) \)
Then
\( \Bigl|\sum_{m>k}r_mF_m^{2}\Bigr|\leqslant c_1^{2}\sum_{m>k}\frac{1}{|\beta _m|^{2k_0}}\Bigl(1-\frac{1}{|\beta _m|}\Bigr)^{-2}=\delta. \)
The last expression is the remainder of the convergent series, so it can be made arbitrarily small uniformly with respect to \( x_j \). Choosing sufficiently large k such that \( \delta <| F_{m_0} | \), we find that S(x, x) has a negative value in this case.
At the same time we have the representation (21) for S(x, x) . By the hypothesis of the theorem it is non-negative. This contradicts our assumption that \(r_{m_0}=-1\). \(\square \)
Example 3
Consider the following entire function f(z) of order \(\rho =\dfrac{1}{2}\):
\( f(z)=\frac{\sin \sqrt z}{\sqrt z}=\sum_{k=0}^{\infty}\frac{(-1)^{k}z^{k}}{(2k+1)!}, \)
whose zeros are \(\alpha _n=\pi ^2n^2\), \(n=1,2,\ldots \)
It is known that
\( \sum_{n=1}^{\infty}\frac{1}{n^{2k}}=\frac{2^{2k-1}\pi ^{2k}}{(2k)!}B_k, \)
where the \( B_n \) are Bernoulli numbers (see, e.g., [11, Ch. 12, Section 449]). They are all positive.
It follows from (3) and Corollary 2 that
\( \sigma _k=\sum_{n=1}^{\infty}\frac{1}{\alpha _n^{k}}=\frac{2^{2k-1}}{(2k)!}B_k, \)
and \(B_1=\dfrac{1}{6},\ B_2=\dfrac{1}{30},\ B_3=\dfrac{1}{42},\ B_4= \dfrac{1}{30},\ B_5=\dfrac{5}{66},\ldots \) . Set \( s_n = \sigma _ {n + 1} \), then \( D_m \) is defined as
Factoring out powers of 2 from the determinant \( D_m\), we find that the sign of \( D_m \) coincides with the sign of the determinant with the entries \(\dfrac{B_{j+k+1}}{(2(j+k+1))!}\). Therefore, \(\Delta _1'=s_0'=\dfrac{B_1}{2!}=\dfrac{1}{12}>0\) and
\( \Delta _2'=\begin{vmatrix}\dfrac{1}{12} & \dfrac{1}{720}\\ \dfrac{1}{720} & \dfrac{1}{30240}\end{vmatrix}=\dfrac{1}{362880}-\dfrac{1}{518400}>0. \)
Similarly, we have
where \(c>0\).
This is fully consistent with Theorem 7. Moreover, Theorem 7 shows that all \( D_m \) composed of Bernoulli numbers are positive.
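The positivity of the first few determinants can be checked exactly. A sketch assuming zeros \(\alpha _n=\pi ^2n^2\) (consistent with the Bernoulli values above), so that \(\sigma _k=\zeta (2k)/\pi ^{2k}\), using the classical values \(\zeta (2)/\pi ^2=1/6\), \(\zeta (4)/\pi ^4=1/90\), \(\zeta (6)/\pi ^6=1/945\), \(\zeta (8)/\pi ^8=1/9450\), \(\zeta (10)/\pi ^{10}=1/93555\):

```python
from fractions import Fraction as F

# sigma_k = sum_n 1/(pi^2 n^2)^k = zeta(2k)/pi^(2k), as exact rationals
sigma = {1: F(1, 6), 2: F(1, 90), 3: F(1, 945), 4: F(1, 9450), 5: F(1, 93555)}

def det(M):
    # Laplace expansion along the first row; exact over Fractions, fine for tiny matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

s = [sigma[n + 1] for n in range(5)]   # s_n = sigma_{n+1}
D = [det([[s[j + k] for k in range(m + 1)] for j in range(m + 1)]) for m in range(3)]
print([d > 0 for d in D])  # [True, True, True]: consistent with all zeros real
```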
Consider the case when the function f(z) has only imaginary zeros. In the case of polynomials, conditions under which all roots are imaginary can be stated in terms of inners [2, §2.4].
Theorem 8
Let an entire function f(z) of the form (1) with real Taylor coefficients have order of growth \(\rho <2\). For the function f to have only imaginary roots it is necessary and sufficient that the determinants
\( D_m=\det \bigl(s_{j+k}\bigr)_{j,k=0}^{m},\qquad s_n=\sigma _{2n+2}, \qquad (32) \)
are positive for all m, and the conditions
\( c_{2k}\geqslant 0,\qquad c_{2k+1}=0,\qquad k=0,1,\ldots ,\quad\text{where } c_k=\sum_{j=0}^{k}\frac{(-b_1)^{j}}{j!}\,b_{k-j}, \qquad (33) \)
are satisfied.
Proof
Assume that a function f has imaginary zeros \( \pm i \gamma _j \), \( \gamma _j \in \mathbb R \). Then the Taylor expansion of the canonical product for f contains only even powers of z. Indeed, pairing the factors corresponding to \(i\gamma _n\) and \(-i\gamma _n\), the product in (5) has the form
\( \prod_{n=1}^{\infty}\Bigl(1-\frac{z}{i\gamma _n}\Bigr)e^{\frac{z}{i\gamma _n}}\Bigl(1+\frac{z}{i\gamma _n}\Bigr)e^{-\frac{z}{i\gamma _n}}=\prod_{n=1}^{\infty}\Bigl(1+\frac{z^{2}}{\gamma _n^{2}}\Bigr). \)
So this product depends on \( z^2 \). Therefore, the canonical product (6) that corresponds to the function f takes the form
\( \Phi (z)=\prod_{n=1}^{\infty}\Bigl(1+\frac{z^{2}}{\gamma _n^{2}}\Bigr)=\sum_{j=0}^{\infty}c_{2j}z^{2j}. \)
We introduce the function
\( H(w)=\sum_{j=0}^{\infty}c_{2j}w^{j}=\prod_{n=1}^{\infty}\Bigl(1+\frac{w}{\gamma _n^{2}}\Bigr). \)
The zeros of this function are exactly the numbers \( - \gamma _n ^ 2 \); hence H(w) has only real zeros. The function H(w) is also an entire function of finite order of growth, its order being half the order of the canonical product \(\Phi (z)\). So in our case the order of growth of the function H(w) is less than 1.
Consider the power sums for the function H(w):
\( \widetilde{\sigma }_k=\sum_{n=1}^{\infty}\frac{1}{(-\gamma _n^{2})^{k}}. \qquad (31) \)
It is clear that this power sum is equal to the sum of \( \sigma _{2k} \) of the function f(z).
Since by Theorem 2 \(p\leqslant \rho \) and \(q\leqslant \rho \), in our case \(p=0\) and \(q=0\).
Therefore, the Hadamard decomposition of the function H(w) has the form
\( H(w)=\prod_{n=1}^{\infty}\Bigl(1+\frac{w}{\gamma _n^{2}}\Bigr). \)
It is clear that if all zeros of this function are real then they are negative if and only if the Taylor expansion of H(w) has non-negative coefficients \(c_{2j}\). These coefficients are the coefficients of the Taylor expansion for the canonical product \(\Phi (z)\).
Thus, a necessary condition for all zeros to be imaginary is that the even-numbered Taylor coefficients of the canonical product \( \Phi (z) \) be non-negative and the odd-numbered coefficients be equal to 0.
Let us find the condition on the Taylor coefficients of the function f(z) which guarantees non-negativity of the coefficients of the canonical product. Because the order of growth of the function is \( \rho <2 \), its Hadamard expansion takes the form
\( f(z)=e^{d_1z}\prod_{n=1}^{\infty}\Bigl(1-\frac{z}{\alpha _n}\Bigr)e^{\frac{z}{\alpha _n}}, \)
where \(Q(z)=d_1z\) (the degree of Q is less than 2) and \( d_1\) may be equal to zero.
Using Proposition 1, Lemma 1 and (3), we find that \(d_1=b_1\).
Then \(\Phi (z)=e^{-b_1z}\cdot f(z)\). Therefore,
\( \Phi (z)=\Bigl(\sum_{j=0}^{\infty}\frac{(-b_1)^{j}}{j!}z^{j}\Bigr)\Bigl(\sum_{k=0}^{\infty}b_kz^{k}\Bigr). \)
So the Taylor coefficients of the canonical product \( \Phi (z) \) are
\( c_k=\sum_{j=0}^{k}\frac{(-b_1)^{j}}{j!}\,b_{k-j},\quad k=0,1,\ldots \)
It follows from this relation and Theorem 7 that the theorem is proved. \(\square \)
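The passage from f to its canonical product can be checked on a toy example. A sketch with the hypothetical test function \(f(z)=e^{z}\cosh z=(e^{2z}+1)/2\) (all its zeros are those of \(\cosh z\), hence imaginary, and \(b_1=1\)), assuming \(\Phi (z)=e^{-b_1z}f(z)\) expanded as a Cauchy product:

```python
from fractions import Fraction as F
from math import factorial

# f(z) = e^z * cosh z = (e^{2z} + 1)/2: b_0 = 1, b_k = 2^{k-1}/k! for k >= 1
b = [F(1)] + [F(2 ** (k - 1), factorial(k)) for k in range(1, 11)]
b1 = b[1]  # equals 1

# Taylor coefficients of Phi(z) = e^{-b1 z} f(z), via the Cauchy product
c = [sum((-b1) ** j / factorial(j) * b[k - j] for j in range(k + 1))
     for k in range(11)]

# Phi should be cosh z: even coefficients 1/(2k)! >= 0, odd coefficients 0
even_ok = all(c[2 * k] == F(1, factorial(2 * k)) for k in range(6))
odd_ok = all(c[2 * k + 1] == 0 for k in range(5))
print(even_ok, odd_ok)  # True True
```

Here the canonical product is recovered exactly as \(\cosh z\), whose even coefficients are non-negative and odd coefficients vanish, in agreement with the condition (33).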
Remark 2
If \(b_1 = 0 \) then the function f(z) coincides with its canonical product and the condition (33) is equivalent to
\( b_{2k}\geqslant 0,\qquad b_{2k+1}=0,\qquad k=0,1,\ldots \)
Recall the definition of a type of entire function.
Let f(z) have a finite order \(\rho \). If there exists a positive number K such that
\( |f(z)|\leqslant e^{K|z|^{\rho }} \)
for all sufficiently large |z|, then the function f has finite type.
The infimum of such numbers K is called the type of the function and is denoted \(\kappa \). A function f has a minimal type if \(\kappa =0\).
If the function f(z) is an entire function either of order \(\rho < 1\), or of order \(\rho =1\) and of minimal type \(\kappa =0\), then its Hadamard expansion has the form (see, e.g., [7, Ch. 7, §2])
\( f(z)=\prod_{n=1}^{\infty}\Bigl(1-\frac{z}{\alpha _n}\Bigr). \)
Therefore, the following statement is true
Corollary 7
Let f be an entire function of minimal type and order \(\rho \leqslant 1\) with real coefficients. All zeros of function f are imaginary if and only if all minors of the form (32) are positive, the even-numbered Taylor coefficients of the function f are non-negative and the odd-numbered coefficients are zero.
Proof
In this case the entire function coincides with its canonical product, and the previous considerations yield this statement. \(\square \)
Example 4
Let us consider the following function:
\( f(z)=\frac{\sinh (\pi z)}{\pi z}=\sum_{k=0}^{\infty}\frac{\pi ^{2k}z^{2k}}{(2k+1)!}=\prod_{n=1}^{\infty}\Bigl(1+\frac{z^{2}}{n^{2}}\Bigr), \)
with zeros \(\pm in\), \(n=1,2,\ldots \)
This function has the order of growth \(\rho =1\) and it coincides with its canonical product. The even-numbered coefficients of the Taylor expansion of f(z) are positive and the odd-numbered coefficients are zero. Therefore, the necessary condition of Theorem 8 is satisfied.
Consider the determinants of Theorem 8. Formula (31) gives
Therefore, the power sums \(\sigma _{2n}\) with even numbers are
and \(s_{n}=\sigma _{2n+2}\). The determinant \( \Delta _m \) in Example 3 takes the form
The element of this determinant at the intersection of the j-th row and the s-th column is negative if and only if \( j + s \) is odd. Then this determinant is equal to the determinant \( \Delta _m\) from Example 3: indeed, every product of elements taken one from each row and each column contains an even number of elements for which the sum of the row and column numbers is odd.
Therefore, this example is reduced to Example 3.
Now consider the case when the \( D_m \) may be positive or negative. Suppose, as before, that all \( D_m \ne 0 \). We introduce the sequence
\( \frac{D_{-1}}{D_0},\ \frac{D_0}{D_1},\ \ldots ,\ \frac{D_{m-1}}{D_m},\ \ldots \qquad (34) \)
Recall that \(D_{-1}=1\).
Theorem 9
Assume that a function f has real coefficients. If the sequence (34) contains exactly m negative numbers then the function f has m distinct pairs of complex conjugate zeros and an infinite number of real zeros. If the sequence (34) has an infinite number of negative numbers then the function f(z) has an infinite number of complex conjugate zeros.
The proof essentially repeats the proof of Theorem 7 and the arguments from [1, Ch. 10, §1]. If the sequence (34) contains exactly m negative numbers, we choose a sufficiently large \( k> m \). Suppose that the sequence \( Z_1, \ldots , Z_k \) contains s squares with negative sign and \( s> m \). Then we set equal to zero in the expression (21) those \( X_j \) that enter (21) with negative sign; there are exactly m such equations. We also set equal to zero the squares with positive sign in the relation (28); there are exactly \(k-s\) such equations, and \((k-s)+m<k\). Then these equations have a non-zero solution. Using the inequality (30), we see that the expression (21) is non-negative while the expression (28) is strictly negative. This is impossible. The case \( s <m \) is treated similarly. It is clear that if there is an infinite number of negative numbers in the sequence (34) then the number of complex zeros is infinite. \(\square \)
Remark 3
The sequence (34) always has an infinite number of positive elements, while the number of real zeros of the function f(z) can be finite or infinite. Therefore, this sequence cannot be used to obtain a condition for the existence of a finite number of real zeros.
Remark 4
If some of the roots satisfy \( | \alpha _j|\leqslant 1 \), then we can consider the function f(rz), where \( r> 0 \). The zeros of this function are \(\alpha _j/r\); therefore, for sufficiently small r they are greater than 1 in absolute value. Then the sums \( \sigma _k \) are multiplied by \( r^k \) and the minors \( D_p \) are multiplied by \( r^{2k_0 + p (p-1)} \). As this takes place, the determinants \( D_p \) remain non-zero and their signs are not changed. Moreover, under such a transformation real roots remain real and complex roots remain complex; thus Theorems 7–9 remain true.
Remark 5
If the sequence (34) has zero elements then Theorem 9 remains true if the signs of the sequence elements are assigned according to the Frobenius rule (16). We also note that we have excluded the case where the \( D_m \) are equal to zero starting from a certain number, since the number of zeros of the function is infinite.
References
Gantmacher, F.R.: The Theory of Matrices. Chelsea Publishing Company, New York (1959)
Jury, E.I.: Inners and Stability of Dynamic Systems. Wiley, New York (1974)
Krein, M.G., Naimark, M.A.: The method of symmetric and Hermitian forms in the theory of the separation of the roots of algebraic equations. Linear Multilinear Algebra 10, 265–308 (1981)
Bykov, V.I., Kytmanov, A.M., Lazman, M.Z.: Elimination Methods in Polynomial Computer Algebra. Kluwer Academic Publishers, Basel (1998)
Bykov, V.I.: Modeling of the Critical Phenomena in Chemical Kinetics. Komkniga, Moscow (2006) (in Russian)
Bykov, V.I., Tsybenova, S.B.: Non-linear Models of Chemical Kinetics. KRASAND, Moscow (2011) (in Russian)
Markushevich, A.I.: The Theory of Analytic Functions, vol. 2. Nauka, Moscow (1968) (in Russian)
Titchmarsh, E.C.: The Theory of Functions. Oxford University Press, Oxford (1939)
Iokhvidov, I.S.: Hankel and Toeplitz Matrices and Forms. Birkhäuser Verlag, Boston (1982)
Kytmanov, A.M., Naprienko, Ya.M.: One approach to finding the resultant of two entire functions. Complex Var. Elliptic Equ. doi:10.1080/17476933.2016.1218855
Fikhtengol’ts, G.M.: Course of Differential and Integral Calculus, vol. 2. Nauka, Moscow (1967) (in Russian)
Communicated by Filippo Bracci.
The authors were supported by RFBR Grant 15-01-00277, by Grant NSh-9149.2016.1, and by the grant of the Russian Federation Government for research under supervision of leading scientist at the Siberian Federal University, Contract No. 14.Y26.31.0006.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Kytmanov, A.M., Khodos, O.V. On Localization of Zeros of an Entire Function of Finite Order of Growth. Complex Anal. Oper. Theory 11, 393–416 (2017). https://doi.org/10.1007/s11785-016-0606-8