Localization of zeros of polynomials is a classical problem with a long history (see, e.g., [1, Ch. 16], [2, 3]). For algebraic equations, root localization has been studied in [4] using complex analysis. For entire functions, however, this problem has not yet been considered. Nevertheless, equations and systems of equations built from exponential polynomials appear, for example, in chemical kinetics studies (see [5, 6]). This raises the question of the number of zeros of such functions, the number of real or complex zeros, etc.

1 The Absence of Zeros

Let the function \(f = f (z)\) be holomorphic in a neighborhood of the origin in the complex plane \(\mathbb C\):

$$\begin{aligned} f(z)=\sum _{k=0}^\infty b_k z^k,\quad f(0)=b_0=1. \end{aligned}$$
(1)

Denote by \( \gamma _r \) the circle of radius r centered at the origin

$$\begin{aligned} \gamma _r=\{z:\ |z|=r\}, \quad r>0. \end{aligned}$$

Theorem 1

A function f is a non-vanishing entire function of finite order of growth if and only if for sufficiently small r there exists \( k_0 \in \mathbb N \) such that

$$\begin{aligned} \int _{\gamma _r}\frac{1}{z^k}\frac{df}{f}=0\quad \text{ for } \text{ all }\quad k\geqslant k_0. \end{aligned}$$
(2)

The minimal \( k_0 \) is the order of the entire function f.

Recall that an entire function f(z) has a finite order (of growth) if there exists a positive number A such that

$$\begin{aligned} f(z)=O(e^{R^A})\quad \text{ for }\quad | z | = R \rightarrow + \infty . \end{aligned}$$

The infimum of such A is called the order of the function f.

Proof

Let f be a function of finite order of growth that has no zeros in \( \mathbb C \). Then it is well known that it has the form \(f(z)=e^{\varphi (z)}\), where \(\varphi (z)\) is a polynomial of some degree \(k_0\) (see, e.g., [7, Ch. 7, Sec. 1.5]). Then

$$\begin{aligned} \int _{\gamma _r}\frac{1}{z^k}\frac{df}{f}=\int _{\gamma _r}\frac{1}{z^k}\varphi '(z)\,dz=0 \quad \text{ for }\quad k> k_0. \end{aligned}$$

Conversely, suppose that the condition (2) is fulfilled. Since f(z) is holomorphic in a neighborhood of the origin and \( f (0) \ne 0 \), for sufficiently small |z| the values of f(z) lie in a neighborhood of f(0) that does not contain the point 0. Therefore, the holomorphic function \(\varphi (z)=\ln f(z)\) (with the branch \(\ln 1=0 \)) is defined in a neighborhood of the origin.

Let

$$\begin{aligned} \varphi (z)=\sum _{k=0}^\infty a_k z^k,\quad a_0=\ln f(0)=\ln b_0. \end{aligned}$$

Then, for sufficiently small r we have

$$\begin{aligned} \frac{1}{2\pi i}\int _{\gamma _r}\frac{1}{z^k}\frac{df}{f}=\frac{1}{2\pi i}\int _{\gamma _r}\frac{1}{z^k}\varphi '(z)\,dz=ka_k. \end{aligned}$$
(3)

When the condition (2) is fulfilled, we see that \( a_k = 0 \) for \(k>k_0\). Therefore, \( \varphi (z) \) is a polynomial of degree \( k_0\). Consequently, \( f (z) = e ^ {\varphi (z)} \) is an entire function of finite order \(k_0\). \(\square \)
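Formula (3) can be illustrated numerically. In the sketch below (Python), the test function \(\varphi (z)=z+2z^2\), the radius, and the grid size are our arbitrary choices; the contour integral over \(\gamma _r\) is discretized by the trapezoidal rule, which is spectrally accurate on a circle.

```python
import numpy as np

# Check formula (3) for f = exp(phi), phi(z) = z + 2 z^2 (our test choice):
# (1/2 pi i) * integral over gamma_r of z^{-k} f'/f dz should equal k * a_k,
# with a_1 = 1, a_2 = 2 and a_k = 0 for k >= 3.

def contour_integral(k, r=0.1, n=4096):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = r * np.exp(1j * t)
    dz = 1j * z * (2.0 * np.pi / n)      # dz = i z dt along the circle
    dlogf = 1.0 + 4.0 * z                # phi'(z) = f'/f
    return np.sum(dlogf * z ** (-k) * dz) / (2j * np.pi)

assert abs(contour_integral(1) - 1.0) < 1e-8     # 1 * a_1
assert abs(contour_integral(2) - 4.0) < 1e-8     # 2 * a_2
assert abs(contour_integral(3)) < 1e-8           # 3 * a_3 = 0
```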

There exists a recursive relationship between the coefficients of f and \( \varphi (z) \) (see, e.g., [4, §2, Lemma 2.3]).

Lemma 1

The following relations are true:

$$\begin{aligned} a_k=\frac{(-1)^{k-1}}{k b_0^k} \begin{vmatrix} b_1&\quad b_0&\quad 0&\quad \cdots&\quad 0\\ 2b_2&\quad b_1&\quad b_0&\quad \cdots&\quad 0\\ \cdots&\quad \cdots&\quad \cdots&\quad \cdots&\quad \cdots \\ kb_k&\quad b_{k-1}&\quad b_{k-2}&\quad \cdots&\quad b_1\\ \end{vmatrix}, \end{aligned}$$

and

$$\begin{aligned} b_k=\frac{b_0}{k!} \begin{vmatrix} a_1&\quad -1&\quad 0&\quad \cdots&\quad 0\\ 2a_2&\quad a_1&\quad -2&\quad \cdots&\quad 0\\ \cdots&\quad \cdots&\quad \cdots&\quad \cdots&\quad \cdots \\ ka_k&\quad (k-1)a_{k-1}&\quad (k-2)a_{k-2}&\quad \cdots&\quad a_1\\ \end{vmatrix}. \end{aligned}$$
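For small k these relations are easy to check symbolically. The sketch below (Python with SymPy; the number of checked coefficients K and the coefficient names are our choices) compares the determinant expression for \(a_k\) with the Taylor coefficients of \(\varphi =\ln f\), taking \(b_0=1\).

```python
import sympy as sp

# Symbolic check of Lemma 1's first formula with b_0 = 1: the determinant
# expression for a_k must reproduce the Taylor coefficients of ln f.

z = sp.symbols('z')
K = 4
bs = sp.symbols(f'b1:{K + 1}')                   # b_1, ..., b_K
b = [sp.Integer(1), *bs]
f = sum(b[k] * z ** k for k in range(K + 1))
phi = sp.expand(sp.log(f).series(z, 0, K + 1).removeO())

def a_from_b(k):
    M = sp.zeros(k, k)
    for i in range(1, k + 1):                    # row i: i*b_i, b_{i-1}, ...
        M[i - 1, 0] = i * b[i]
        for j in range(1, k):
            M[i - 1, j] = b[i - j] if i - j >= 0 else 0
    return (-1) ** (k - 1) * M.det() / k         # b_0 = 1

for k in range(1, K + 1):
    assert sp.expand(phi.coeff(z, k) - a_from_b(k)) == 0
```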

Therefore, we have the following statement.

Corollary 1

For a function f to be an entire non-vanishing function of finite order \( k_0 \), it is necessary and sufficient that

$$\begin{aligned} \begin{vmatrix} b_1&\quad b_0&\quad 0&\quad \cdots&\quad 0\\ 2b_2&\quad b_1&\quad b_0&\quad \cdots&\quad 0\\ \cdots&\quad \cdots&\quad \cdots&\quad \cdots&\quad \cdots \\ kb_k&\quad b_{k-1}&\quad b_{k-2}&\quad \cdots&\quad b_1\\ \end{vmatrix}=0 \quad \text{ for } \quad k>k_0. \end{aligned}$$
(4)

where \( k_0 \) is the minimal number with this property.

Example 1

Let

$$\begin{aligned} f(z)=e^z=1+\sum _{k=1}^\infty \frac{z^k}{k!}, \end{aligned}$$

i.e., \(b_0=1\), \(b_k=\dfrac{1}{k!}\), \(k> 0\).

Substitute these values into (4). When \( k = 1 \), the determinant is not equal to zero. For \( k>1 \) all the determinants are equal to zero, since the first two columns coincide. Hence the function f(z) is of order 1 and has no zeros in the complex plane.
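This computation is easy to reproduce with a computer algebra system; the helper below rebuilds the determinant in (4) from the coefficients \(b_k\) (a sketch in Python/SymPy, with the truncation range chosen by us).

```python
import sympy as sp

# Check of criterion (4) for f(z) = e^z, i.e. b_k = 1/k!: the determinant is
# nonzero for k = 1 and vanishes for k > 1 (the first two columns coincide).

def criterion_det(b, k):
    M = sp.zeros(k, k)
    for i in range(1, k + 1):
        M[i - 1, 0] = i * b[i]
        for j in range(1, k):
            M[i - 1, j] = b[i - j] if i - j >= 0 else 0
    return M.det()

b = [sp.Rational(1, sp.factorial(n)) for n in range(8)]
assert criterion_det(b, 1) != 0
assert all(criterion_det(b, k) == 0 for k in range(2, 7))
```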

2 Auxiliary Statements

Let a function f(z) of the form (1) be an entire function of finite order of growth. Denote by \( \alpha _1, \ \alpha _2, \ \ldots , \alpha _n, \ldots \) its zeros (every zero appears as many times as its multiplicity) in the order of increasing absolute value \(|\alpha _1|\leqslant |\alpha _2|\leqslant \cdots \leqslant |\alpha _n|\leqslant \cdots \). Their number may be finite or infinite.

Recall the Hadamard decomposition for such functions (see, e.g., [7, Ch. 7, Sec. 2.3], [8, Ch. 8, Theorem 8.2.4]).

Theorem 2

If f(z) is an entire function of finite order \( \rho \) then

$$\begin{aligned} f(z)= z^se^{Q(z)}\prod _{n=1}^{\infty }\left( 1-\frac{z}{\alpha _n}\right) e^{\frac{z}{\alpha _{n}}+\frac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\frac{z^{p}}{p\alpha _{n}^{p}}}, \end{aligned}$$
(5)

where Q(z) is a polynomial of degree \( q\leqslant \rho \), s is the multiplicity of a zero of the function f at the origin, p is some integer, and \(p\leqslant \rho \).

The infinite product in (5) converges absolutely and locally uniformly in \( \mathbb C \). (Recall that a sequence of holomorphic functions converges locally uniformly in an open set U, if it converges uniformly on every compact subset of U.) In what follows we assume for simplicity that \(f(0)=1\). The polynomial Q(z) is of the form

$$\begin{aligned} Q(z)=\sum _{j=1}^q d_jz^j. \end{aligned}$$

Here \(d_0=0\), since \(f(0)=1\).

The expression

$$\begin{aligned} \Phi (z)=\prod _{n=1}^{\infty }\left( 1-\frac{z}{\alpha _n}\right) e^{\frac{z}{\alpha _{n}}+\frac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\frac{z^{p}}{p\alpha _{n}^{p}}} \end{aligned}$$
(6)

is called the canonical product, and the integer p is the genus of the canonical product of f. The genus of an entire function f(z) is \(\max \{q,p\}\). If \( \rho '\) is the order of the canonical product (6), then \(\rho =\max \{q,\rho '\}\).

Let us consider the following series

$$\begin{aligned} \sum _{n=1}^\infty \frac{1}{|\alpha _n|^\gamma }. \end{aligned}$$
(7)

The infimum of positive \( \gamma \), for which the series (7) converges, is called the rate of convergence of zeros of the canonical product \(\Phi (z)\).

It is well known (see, e.g., [7, Ch. 7, Sec. 2.2], [8, Sec. 8, §8.2.5]) that the rate of convergence of zeros of the canonical product is equal to its order.

Then sums of zeros in negative powers

$$\begin{aligned} \sigma _k=\sum _{n=1}^\infty \frac{1}{\alpha _n^k}, \quad k\in \mathbb N, \end{aligned}$$

are absolutely convergent when \(k>\rho '\), and in particular when \(k>\rho \). It is also known that \(\rho '-1\leqslant p\leqslant \rho '\) (see, e.g., [8, Sec. 8, §8.2.7]).

In what follows we consider the power sums with positive integer exponents k. Let us relate integrals in (2) to power sums \(\sigma _k\) of zeros.

Formula (3) relates the integrals in (2) to the expansion coefficients of \( \varphi (z) = \ln f (z) \) in a neighborhood of the origin. Let us express the integral in terms of power sums of zeros, using the Hadamard formula. We consider the case of \(s=0\), that is \( f(0)\ne 0\).

In a sufficiently small neighborhood of the origin we have (according to the Hadamard formula (5))

$$\begin{aligned} \varphi (z)=Q(z)+\sum _{n=1}^\infty \ln \left[ \left( 1-\frac{z}{\alpha _n}\right) e^{P_n(z)}\right] , \end{aligned}$$

where \(P_n(z)= \dfrac{z}{\alpha _{n}}+\dfrac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\dfrac{z^{p}}{p\alpha _{n}^{p}}\).

The series for \( \varphi (z) \) converges absolutely and uniformly in a sufficiently small neighborhood of the origin since the zeros \( \alpha _j \) are bounded away from it.

It is obvious that

$$\begin{aligned} \frac{1}{2\pi i}\int _{\gamma _r}\frac{1}{z^k}\cdot dQ(z)= {\left\{ \begin{array}{ll} kd_k \quad \text{ for }\quad 1\leqslant k\leqslant q,\\ 0\quad \quad \,\text{ for }\quad k>q. \end{array}\right. } \end{aligned}$$

Let us transform the following expression

$$\begin{aligned}&d\ln \left[ \left( 1-\frac{z}{\alpha _n}\right) e^{P_n(z)}\right] =\frac{d\left[ \left( 1-\frac{z}{\alpha _{n}}\right) e^{\frac{z}{\alpha _{n}}+\frac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\frac{z^{p}}{p\alpha _{n}^{p}}}\right] }{\left( 1-\frac{z}{\alpha _{n}}\right) e^{\frac{z}{\alpha _{n}}+\frac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\frac{z^p}{p\alpha _{n}^{p}}}}\\&\quad =\frac{d\left( 1-\frac{z}{\alpha _{n}}\right) e^{\frac{z}{\alpha _{n}}+\frac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\frac{z^{p}}{p\alpha _{n}^{p}}}+\left( 1-\frac{z}{\alpha _{n}}\right) e^{\frac{z}{\alpha _{n}}+\frac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\frac{z^{p}}{p\alpha _{n}^{p}}}d\left( \frac{z}{\alpha _{n}}+\frac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\frac{z^{p}}{p\alpha _{n}^{p}}\right) }{\left( 1-\frac{z}{\alpha _{n}}\right) e^{\frac{z}{\alpha _{n}}+\frac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\frac{z^p}{p\alpha _{n}^{p}}}}\\&\quad =\frac{d\left( 1-\frac{z}{\alpha _n}\right) }{\left( 1-\frac{z}{\alpha _n}\right) }+d\left( \frac{z}{\alpha _{n}}+\frac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\frac{z^{p}}{p\alpha _{n}^{p}}\right) \\&\quad =\frac{dz}{z-\alpha _n}+\left( \frac{1}{\alpha _{n}}+\frac{z}{\alpha _{n}^{2}}+\cdots +\frac{z^{p-1}}{\alpha _{n}^{p}}\right) dz\\&\quad =\frac{dz}{z-\alpha _n}+\frac{1}{\alpha _n}\left[ \frac{\left( \frac{z^{p}}{\alpha _n^{p}}-1\right) }{\left( \frac{z}{\alpha _n}-1\right) }\right] dz=\frac{dz}{z-\alpha _n}+\frac{(z^{p}-\alpha _n^{p})dz}{\alpha _n^{p}(z-\alpha _n)}=\frac{z^{p}dz}{\alpha _n^{p}(z-\alpha _n)}. \end{aligned}$$

Then

$$\begin{aligned}&\frac{1}{2\pi i}\sum _{n=1}^{\infty }\int \limits _{\gamma _{r}}\frac{d\left[ \left( 1-\frac{z}{\alpha _{n}}\right) e^{\frac{z}{\alpha _{n}}+\frac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\frac{z^{p}}{p\alpha _{n}^{p}}}\right] }{z^k\left( 1-\frac{z}{\alpha _{n}}\right) e^{\frac{z}{\alpha _{n}}+\frac{z^{2}}{2\alpha _{n}^{2}}+\cdots +\frac{z^p}{p\alpha _{n}^{p}}}}=\frac{1}{2\pi i}\sum _{n=1}^{\infty }\int \limits _{\gamma _{r}}\frac{z^{p-k} dz}{\alpha _n^{p}(z-\alpha _n)}\\&\quad ={\left\{ \begin{array}{ll} 0, \quad \quad \quad \quad \quad \quad \quad \quad \;\;\,\text{ if }\quad k\leqslant p,\\ -\sum \limits _{n=1}^\infty \frac{1}{\alpha _n^{k}}=-\sigma _k,\quad \text{ if }\quad k>p.\end{array}\right. } \end{aligned}$$

Thus, we have the following statement.

Proposition 1

Let f(z) be an entire function of finite order of growth \( \rho \) of the form (5) and \( f (0) = 1 \). If \(q\leqslant p\) then

$$\begin{aligned} \frac{1}{2\pi i}\int _{\gamma _r}\frac{1}{z^k}\frac{df}{f}= {\left\{ \begin{array}{ll} kd_k \quad \text{ for }\quad 1\leqslant k\leqslant q,\\ 0\quad \quad \,\text{ for }\quad q<k\leqslant p,\\ -\sigma _k \;\,\text{ for }\quad k>p. \end{array}\right. } \end{aligned}$$

Similarly, we can consider the case of \( q>p \). In any case we have

Corollary 2

The following equality is true

$$\begin{aligned} \frac{1}{2\pi i}\int _{\gamma _r}\frac{1}{z^k}\frac{df}{f}=-\sigma _k,\quad \text{ if }\quad k>\rho . \end{aligned}$$

It follows from (3), Lemma 1, and Corollary 2 that

Corollary 3

The following relations are true

$$\begin{aligned} \sigma _k=-\frac{(-1)^{k-1}}{b_0^k} \begin{vmatrix} b_1&\quad b_0&\quad 0&\quad \cdots&\quad 0\\ 2b_2&\quad b_1&\quad b_0&\quad \cdots&\quad 0\\ \cdots&\quad \cdots&\quad \cdots&\quad \cdots&\quad \cdots \\ kb_k&\quad b_{k-1}&\quad b_{k-2}&\quad \cdots&\quad b_1\\\end{vmatrix} \quad \text{ for }\quad k>\rho . \end{aligned}$$
(8)

These formulas relate the power sums \( \sigma _k \) to the coefficients of the function f. In the case when \( \sigma _1 \) is an absolutely convergent series such formulas were considered in [4, §2].
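As an illustration of (8), take the polynomial \(f(z)=(1-z)(1-z/2)\) with zeros 1 and 2, so that \(\sigma _k=1+2^{-k}\); the test function is our choice, and the computation is a SymPy sketch.

```python
import sympy as sp

# Illustration of formula (8): for f(z) = (1 - z)(1 - z/2) (zeros 1 and 2)
# the determinant expression must give sigma_k = 1 + 2^{-k}.

z = sp.symbols('z')
f = sp.expand((1 - z) * (1 - z / 2))
b = [f.coeff(z, n) for n in range(8)]            # b_0 = 1, b_n = 0 for n > 2

def sigma(k):
    M = sp.zeros(k, k)
    for i in range(1, k + 1):
        M[i - 1, 0] = i * b[i]
        for j in range(1, k):
            M[i - 1, j] = b[i - j] if i - j >= 0 else 0
    return -(-1) ** (k - 1) * M.det()            # formula (8) with b_0 = 1

for k in range(1, 7):
    assert sigma(k) == 1 + sp.Rational(1, 2) ** k
```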

3 Finite Number of Zeros

Consider an entire function of finite order of growth of the form (1). In this section we find conditions for the coefficients of the function whereby the function has a finite number of zeros. First of all, we need to find the order \( \rho \) of the function f. To do this, we use the formula (see, e.g., [7, Ch. 7, §2], [8, Ch. 8, Sec. 8.3])

$$\begin{aligned} \varliminf \limits _{n\rightarrow \infty }\frac{\ln (1/|b_n|)}{n\ln n}=\frac{1}{\rho }. \end{aligned}$$

If \( \rho \) is a fractional number then the function has an infinite number of zeros. In this section we assume that \( \rho \) is an integer.
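A numerical illustration of this formula for \(f=e^z\) (the cutoff n is our arbitrary choice; note that convergence is slow, with error of order \(1/\ln n\), hence the loose bound):

```python
import math

# Order formula illustration: for f(z) = e^z we have b_n = 1/n!, so
# ln(1/|b_n|)/(n ln n) = ln(n!)/(n ln n) -> 1, giving rho = 1.

n = 10 ** 6
ratio = math.lgamma(n + 1) / (n * math.log(n))   # ln(n!) computed via lgamma
assert 0.9 < ratio < 1.0
```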

We need some results about infinite Hankel matrices. They can be found in [1, Ch. 16, §10] and in [9, Ch. 2].

Consider a sequence of complex numbers \(s_0,\ s_1,\ s_2,\ \ldots \). This sequence defines an infinite Hankel matrix

$$\begin{aligned} S= \begin{pmatrix} s_0&{}\quad s_1&{}\quad s_2&{}\quad \cdots \\ s_1&{}\quad s_2&{}\quad s_3&{}\quad \cdots \\ s_2&{}\quad s_3&{}\quad s_4&{}\quad \cdots \\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \\ \end{pmatrix}. \end{aligned}$$
(9)

The consecutive principal minors of the matrix S are denoted by \(D_0,\ D_1, \ D_2, \ldots \):

$$\begin{aligned} D_p=|s_{j+k}|_0^{p}, \quad p=0,1,\ldots \ . \end{aligned}$$

We also assume that \(D_{-1}=1\).

If for each \( p \in \mathbb N \) there is a non-zero minor of S of order p, then the matrix has infinite rank. If, starting from some order, all minors vanish, then the matrix S has finite rank: the rank is the largest p for which S has a non-zero minor of order p. Here are two statements about matrices S of finite rank p (see [1, Ch. 16, §10]).

Corollary 4

(Kronecker) If an infinite Hankel matrix S of the form (9) has a finite rank p then \(D_{p-1}\ne 0\).

The converse statement is also true (see [9, §11]).

Corollary 5

(Frobenius) If for an infinite Hankel matrix the minor \( D_{p-1} \ne 0 \) and the minors \( D_p = D_{p + 1} = \cdots = D_{p + j} = \cdots = 0 \), then the rank of the matrix S is finite and is equal to p.

Theorem 3

An infinite Hankel matrix has a finite rank p if and only if there exist p numbers \( c_1, \ c_2, \ldots , c_p \) such that

$$\begin{aligned} s_j=\sum _{m=1}^p c_m s_{j-m}, \quad j > p. \end{aligned}$$

This theorem is given in [1, Ch. 16, §10, Theorem 7].
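A small numeric illustration of Theorem 3 (the test sequence below is our choice): the terms \((1/2)^j+(1/3)^j\) satisfy a two-term recursion, so the corresponding Hankel matrix has rank 2.

```python
import numpy as np

# s_j = (1/2)^j + (1/3)^j satisfies s_j = (5/6) s_{j-1} - (1/6) s_{j-2},
# since 1/2 and 1/3 are the roots of t^2 - (5/6) t + 1/6 = 0; hence the
# Hankel matrix (s_{j+k}) has rank 2.

s = [0.5 ** j + (1.0 / 3.0) ** j for j in range(12)]
for j in range(2, 12):
    assert abs(s[j] - (5.0 / 6.0) * s[j - 1] + (1.0 / 6.0) * s[j - 2]) < 1e-12

H = np.array([[s[j + k] for k in range(5)] for j in range(5)])
assert np.linalg.matrix_rank(H) == 2
```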

Theorem 4

A matrix S has a finite rank p if and only if the sum of the series

$$\begin{aligned} R(z)=\frac{s_0}{z}+\frac{s_1}{z^2}+\frac{s_2}{z^3}+\cdots \end{aligned}$$
(10)

is a rational function in z. In this case, the rank of the matrix S coincides with the number of poles of R(z) . Each pole is counted as many times as its multiplicity.

This statement is given in [1, Ch. 16, §10].

Consider again an entire function f(z) of integer order \( \rho \). By the properties of entire functions, the power sums \( \sigma _k \) are absolutely convergent series when \(k>\rho \). Let \(s_j=\sigma _{2k_0+j}\), \(2k_0>\rho +1\), \(j=0,1,\ldots \). Consider an infinite Hankel matrix S of the form (9).

Theorem 5

A function f has a finite number of zeros if and only if the rank of the matrix S is finite. The number of distinct zeros of the function f is equal to the rank of the matrix S.

Proof

Assume that the function f has a finite number of zeros \( \alpha _1, \ldots , \alpha _p \) (each zero is counted as many times as its multiplicity). Then

$$\begin{aligned} \sigma _k=\sum _{j=1}^p \frac{1}{\alpha _j^k}, \quad k\geqslant k_0>\rho , \end{aligned}$$
(11)

and \(s_j=\sigma _{2k_0+j}, \ j\geqslant 0\).

Consider a monic polynomial P(z) of degree p with zeros at \( 1 / \alpha _1, \ldots , \) \( 1 / \alpha _p \)

$$\begin{aligned} P(z)=z^p+c_1z^{p-1}+\cdots +c_{p-1}z+c_p. \end{aligned}$$

The coefficients \(c_j\) of this polynomial can be found with the help of the classical Newton formulas

$$\begin{aligned} \sigma _j+c_1\sigma _{j-1}+\cdots + c_{j-1}\sigma _1 +jc_j=0,\quad 1\leqslant j\leqslant p. \end{aligned}$$

When \( j> p \) they have the form

$$\begin{aligned} \sigma _j+c_1\sigma _{j-1}+\cdots + c_p \sigma _{j-p}=0 \end{aligned}$$

or

$$\begin{aligned} \sigma _j=-c_1\sigma _{j-1}-\cdots -c_p\sigma _{j-p}. \end{aligned}$$

Since \( s_j=\sigma _{2k_0+j} \), \( j\geqslant 0 \), we obtain

$$\begin{aligned} s_j=-s_{j-1}c_1-\cdots -s_{j-p}c_p. \end{aligned}$$

Thus, Theorem 3 shows that the rank of S is finite and it does not exceed p.

Suppose now that the rank of S is finite and is equal to q. According to Theorem 4, this rank is the number of poles of the rational function R(z) (formula (10)). Each pole is counted as many times as its multiplicity.

The series R(z) converges (absolutely and uniformly) for z lying outside a disk centered at the origin, and this disk contains all the (yet unknown) poles of the function R(z). Let us transform R(z) assuming that \( |\alpha _m z|>1 \) for all \( \alpha _m \):

$$\begin{aligned} R(z)= & {} \frac{s_0}{z}+\frac{s_1}{z^2}+\frac{s_2}{z^3}+\cdots =\frac{1}{z}\sum _{k=0}^\infty \frac{1}{z^k}\left( \sum _{m=1}^\infty \frac{1}{\alpha _m^{2k_0+k}}\right) \\= & {} \frac{1}{z}\sum _{m=1}^\infty \left( \sum _{k=0}^\infty \frac{1}{\alpha _m^{2k_0}}\cdot \frac{1}{\alpha _m^kz^k}\right) = \frac{1}{z}\sum _{m=1}^\infty \frac{1}{\alpha _m^{2k_0}}\left( \sum _{k=0}^{\infty }\frac{1}{(\alpha _m z)^k}\right) \\= & {} \frac{1}{z}\sum _{m=1}^\infty \frac{1}{\alpha _m^{2k_0}}\cdot \frac{\alpha _mz}{\alpha _mz-1}= \sum _{m=1}^\infty \frac{1}{\alpha _m^{2k_0}}\cdot \frac{\alpha _m}{\alpha _mz-1}. \end{aligned}$$

Changing the order of summation of the series is justified because they converge absolutely. By the hypothesis, R(z) is a rational function. Let us show that this series contains only a finite number of terms. Consider the following function

$$\begin{aligned} R^*(w)=R\left( \frac{1}{w}\right) =\sum _{m=1}^\infty \frac{1}{\alpha _m^{2k_0}}\cdot \frac{\alpha _m}{ \frac{\alpha _m}{w}-1}=\sum _{m=1}^\infty \frac{1}{\alpha _m^{2k_0-1}}\cdot \frac{w}{\alpha _m-w}. \end{aligned}$$

Let us analyze this series to find its convergence domain. The roots are arranged in the order of increasing absolute value. Let \(|w|\leqslant r<|\alpha _1|\), then

$$\begin{aligned} \left| 1-\frac{w}{\alpha _m}\right| \geqslant 1-\frac{|w|}{|\alpha _m|}\geqslant 1-\frac{r}{|\alpha _1|}=c. \end{aligned}$$

Here we used that \(\dfrac{|w|}{|\alpha _m|}\leqslant \dfrac{r}{|\alpha _1|}\). Therefore we have

$$\begin{aligned} \sum _{m=1}^\infty \frac{1}{|\alpha _m|^{2k_0-1}}\cdot \frac{1}{\left| 1-\frac{w}{\alpha _m}\right| }\leqslant \frac{1}{c}\sum _{m=1}^\infty \frac{1}{|\alpha _m|^{2k_0-1}}. \end{aligned}$$

The last series converges by the choice of \(k_0\). Thus, the series \( R^* (w) \) converges absolutely and uniformly inside the disk \(\{|w|< |\alpha _1|\}\). Separating out the roots with the same absolute value \(| \alpha _1 | = \cdots = | \alpha _n | \), we obtain

$$\begin{aligned} R^*(w)=\sum _{m=1}^n\frac{1}{\alpha _m^{2k_0-1}}\cdot \frac{1}{ \frac{\alpha _m}{w}-1}+ \sum _{m=n+1}^\infty \frac{1}{\alpha _m^{2k_0-1}}\cdot \frac{1}{ \frac{\alpha _m}{w}-1}. \end{aligned}$$
(12)

The first sum is a finite sum of fractions with poles at \(\alpha _1,\ldots , \alpha _n\). The second sum is a series that defines a holomorphic function for \( |w| <|\alpha _{n+1}| \) (this follows by the same reasoning as above). Since \( R^*(w) \) is a rational function, the second series in (12) is also a rational function. Consider a representation

$$\begin{aligned} R^*(w) =\frac{P(w)}{Q(w)}, \end{aligned}$$

where P(w) and Q(w) are polynomials. Then \(P(w)=Q(w)R^*(w)\). Since the left-hand side of this equality is a polynomial, Q(w) must vanish at every pole of \( R^*(w) \); in particular, \(Q(\alpha _1)=\cdots = Q(\alpha _n)=0\). Since Q has at most \(\deg Q\) zeros, \( R^*(w) \) has only finitely many poles, so the series \( R^*(w) \) contains only a finite number of fractions, and this number equals the number q of distinct roots \(\alpha _j\). So the rank of S is equal to q. If the number of all roots (counted with multiplicities) is equal to p, then the first part of the proof shows that \(p\geqslant q\). \(\square \)
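The closed form for R(z) obtained in the proof above can be spot-checked numerically. The zeros \(\{2,3\}\), the shift \(2k_0=4\), and the point \(z=1.5\) (so that \(|\alpha _m z|>1\)) are all arbitrary choices of ours.

```python
# Compare the Laurent series of R(z) with the derived closed form
# sum_m alpha_m^{-2 k_0} * alpha_m / (alpha_m z - 1).

alphas, two_k0, z = [2.0, 3.0], 4, 1.5
series = sum(sum(a ** -(two_k0 + k) for a in alphas) / z ** (k + 1)
             for k in range(200))
closed = sum(a ** -two_k0 * a / (a * z - 1) for a in alphas)
assert abs(series - closed) < 1e-12
```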

Note that by Corollary 3 the power sums \(s_j \) are expressed in terms of Taylor coefficients of the function f.

Assume that an entire function f has real coefficients; then its zeros are either real or form complex conjugate pairs. Note that in this case all the power sums \( \sigma _k \) and, accordingly, the numbers \( s_j \) are real.

Now we raise the question of the number of real and complex zeros. Since the function f(z) has a finite number of distinct zeros that is equal to the rank of the Hankel matrix S, the solution of this problem is reduced to the classical problem of finding the number of distinct real roots of a polynomial (see, e.g., [1, Ch. 16, §9]).

Consider an infinite Hankel matrix S of the form (9) with \(s_j=\sigma _{2k_0+j}\). The rank of the matrix is p. We introduce a truncated matrix \(S_{p}\):

$$\begin{aligned} S_{p}= \begin{pmatrix} s_0&{}\quad s_1&{}\quad s_2&{}\quad \cdots &{}\quad s_{p-1}\\ s_1&{}\quad s_2&{}\quad s_3&{}\quad \cdots &{}\quad s_p \\ s_2&{}\quad s_3&{}\quad s_4&{}\quad \cdots &{}\quad s_{p+1} \\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \\ s_{p-1}&{}\quad s_p&{}\quad s_{p+1}&{}\quad \cdots &{}\quad s_{2p-2}\end{pmatrix}, \end{aligned}$$
(13)

and a truncated Hankel quadratic form

$$\begin{aligned} S_{p}(x,x)=\sum _{k,j=0}^{p-1}s_{j+k}x_jx_k. \end{aligned}$$
(14)

Let the distinct zeros of the function f be \( \beta _1, \ldots , \beta _p \) with multiplicities \( n_1, \ldots , n_p \), respectively.

Since \(s_j=\sigma _{2k_0+j}\),

$$\begin{aligned} S_{p}(x,x)= & {} \sum _{k,j=0}^{p-1}\sum _{m=1}^p\frac{n_m}{\beta _m^{j+k +2k_0}}x_jx_k\nonumber \\= & {} \sum _{m=1}^p\frac{n_m}{\beta _m^{2k_0}}\left( x_0+\frac{x_ 1}{\beta _m}+\cdots +\frac{x_{p-1}}{\beta _m^{p-1}}\right) ^2. \end{aligned}$$
(15)

The linear forms

$$\begin{aligned} Z_m=\frac{1}{\beta _m^{2k_0}}\cdot \left( x_0+\frac{x_ 1}{\beta _m}+\cdots +\frac{x_{p-1}}{\beta _m^{p-1}}\right) ,\quad 1\leqslant m\leqslant p, \end{aligned}$$

are linearly independent because the determinant composed of their coefficients is, up to the nonzero factors \(\beta _m^{-2k_0}\), the Vandermonde determinant, and it is distinct from zero. If the forms \( Z_m \) and \( Z_k \) are complex conjugate then we can consider \(\dfrac{1}{2}(Z_m+Z_k)\) and \(\dfrac{1}{2i}(Z_m-Z_k)\) instead. These forms are still linearly independent but become real.

In the relation (15) each real root corresponds to a square, and each pair of conjugate roots corresponds to a difference of squares. Now we use the Frobenius theorem on the rank and signature of a Hankel form \(S_{p}(x,x)\) (see, e.g., [1, Ch. 10, §10]). Note that we have \(D_{m-1}\ne 0\) for \(m=0,\ldots , p\).

Suppose that for some \( h <k \) the minors \(D_h\ne 0\), \(D_k\ne 0\) and all intermediate minors are equal to zero, i.e., \(D_{h+1}=\cdots =D_{k-1}=0\). Assign signs to these zero determinants (see [1, Ch. 10, §10, Theorem 24])

$$\begin{aligned} \mathrm{sign}\, D_{h+j}=(-1)^{\frac{j(j-1)}{2}}\mathrm{sign\,} D_h, \quad 1\leqslant j\leqslant k-h-1. \end{aligned}$$
(16)

Theorem 6

The number of distinct real zeros of f with real coefficients is equal to the difference between the number of ‘non-changes of signs’ and the number of ‘changes of sign’ in the sequence \(D_{-1},\ D_0, \ D_1,\ldots , D_{p-1}\).
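A numerical sketch of this sign rule, under the convention \(D_m=\det (s_{j+k})_{j,k=0}^{m}\), \(D_{-1}=1\), and assuming all the minors involved are nonzero; the test zeros and the shift \(2k_0=4\) are our choices.

```python
import numpy as np

# Count real zeros by the sign rule of Theorem 6: the difference between
# sign preservations and sign changes in D_{-1}, D_0, ..., D_{p-1}.

def real_root_count(zeros, two_k0=4):
    p = len(zeros)
    s = [sum((1.0 / a) ** (two_k0 + j) for a in zeros).real
         for j in range(2 * p)]
    D = [1.0] + [np.linalg.det(np.array([[s[j + k] for k in range(m + 1)]
                                         for j in range(m + 1)]))
                 for m in range(p)]
    signs = np.sign(D)
    return sum(1 if signs[i] == signs[i + 1] else -1 for i in range(p))

assert real_root_count([1.0, 2.0]) == 2      # two real zeros
assert real_root_count([1j, -1j]) == 0       # conjugate pair, no real zeros
```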

Example 2

Let us consider the function \(f(z)=(1-z)e^z\), \(f(0)=1\). The order of growth is \(\rho =1\). Then

$$\begin{aligned} \ln f(z)=z+ \ln (1-z)=z-\sum _{k=1}^\infty \frac{z^k}{k}=-\sum _{k=2}^\infty \frac{z^k}{k}. \end{aligned}$$

Using Corollary 3 we obtain that \( \sigma _k = 1 \) for \( k \geqslant 2 \), i.e., the rank of the Hankel matrix S is equal to 1. Then by Theorem 5 the number of roots is equal to 1 and by Theorem 6 this root is real.
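A numeric companion to Example 2 and Theorem 5, using `numpy.linalg.matrix_rank` as a stand-in for the exact rank; the matrix sizes and the shift \(k_0\) are our choices.

```python
import numpy as np

# For f(z) = (1 - z) e^z all s_j = 1, so the Hankel matrix of ones has
# rank 1.  For comparison, zeros {1, 2} give sigma_k = 1 + 2^{-k} and a
# Hankel matrix of rank 2.

S1 = np.ones((6, 6))
assert np.linalg.matrix_rank(S1) == 1

k0 = 3
sig = [1.0 + 0.5 ** (k0 + j) for j in range(11)]
S2 = np.array([[sig[j + k] for k in range(6)] for j in range(6)])
assert np.linalg.matrix_rank(S2) == 2
```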

Remark 1

The above statements show that in the case of a finite number of zeros the study of entire functions reduces to the study of polynomials. To study other features related to root localization in the context of [1, 2], one needs to factorize the function f(z), i.e., to extract a polynomial from it (see [10]). The roots of this polynomial coincide with the roots of the function f.

4 Infinite Number of Zeros

From the previous section we obtain

Corollary 6

A function f(z) has an infinite number of zeros if and only if the rank of the matrix S of the form (9) is infinite, where \(s_j=\sigma _{2k_0+j}\).

The condition for the rank to be infinite is, by Corollary 5, that there is a strictly increasing sequence of positive integers \(j_k\), \(k=1,2,\ldots \), such that all \(D_{j_k}\ne 0\).

In what follows we need some properties of infinite Hankel matrices of infinite rank.

Consider a sequence of complex numbers \(s_0,\ s_1,\ \ldots , s_k, \ldots \) such that

$$\begin{aligned} \sum _{k=0}^\infty |s_k|=C<\infty . \end{aligned}$$
(17)

Let \( 0 \leqslant i_1<\cdots <i_k \), \( 0 \leqslant j_1<\cdots <j_k \). Introduce the minor

$$\begin{aligned} S\begin{pmatrix} i_{1}&{}\quad \cdots &{}\quad i_k\\ j_1&{}\quad \cdots &{}\quad j_k \end{pmatrix}. \end{aligned}$$

It consists of the elements of S, standing at the intersection of the rows \( i_1, \ldots , i_k \) and the columns \(j_1,\ldots , j_k\). In particular,

$$\begin{aligned} D_k=S\begin{pmatrix} 0&{}\quad \cdots &{}\quad k\\ 0&{}\quad \cdots &{}\quad k \end{pmatrix}. \end{aligned}$$

Lemma 2

If the condition (17) is fulfilled then the following inequalities are true

$$\begin{aligned} M_k=\left| S\begin{pmatrix} 0&{}\quad \cdots &{}\quad k-1\\ j_1&{}\quad \cdots &{}\quad j_k \end{pmatrix}\right| \leqslant C^k \end{aligned}$$
(18)

for all \(j_1,\ldots , j_k\). In particular, if \( C=1 \) then \(M_k\leqslant 1\). If \(C<1\) then

$$\begin{aligned} M_k\rightarrow 0\quad \text{ for } \quad k\rightarrow \infty . \end{aligned}$$

Proof

We prove Lemma 2 by induction on k. When \( k = 1 \), the condition (18) obviously holds. We proceed from k to \( k + 1 \). Expanding the determinant

$$\begin{aligned} S\begin{pmatrix} 0&{}\quad \cdots &{}\quad k\\ j_1&{}\quad \cdots &{}\quad j_{k+1} \end{pmatrix} \end{aligned}$$

with respect to the last row, we have

$$\begin{aligned} \left| S\begin{pmatrix} 0&{}\quad \cdots &{}\quad k\\ j_1&{}\quad \cdots &{}\quad j_{k+1} \end{pmatrix}\right|= & {} \left| \sum _{m=1}^{k+1} (-1)^{k+1+m}s_{k+j_m} S\begin{pmatrix} 0&{}\quad \cdots &{}\cdots &{}\quad \cdots &{}\quad k-1\\ j_1&{}\quad \cdots &{}[j_m]&{}\quad \cdots &{}\quad j_{k+1} \end{pmatrix} \right| \\\leqslant & {} \sum _{m=1}^{k+1} |s_{k+j_m}| C^{k}\leqslant C^{k+1}, \end{aligned}$$

taking into account that

$$\begin{aligned} \sum _{m=1}^{k+1} |s_{k+j_m}|\leqslant \sum _{n=0}^\infty |s_n|=C. \end{aligned}$$

The symbol \( [j_m] \) denotes that the determinant has no column with the number \(j_m \). \(\square \)
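Lemma 2 is easy to spot-check numerically on a summable test sequence (the sequence and the seed are our choices; C is the sum over the terms we keep, and the discarded tail, of order \(0.4^{40}\), does not enter the minors being tested).

```python
import numpy as np

# Verify |S(0..k-1; j_1..j_k)| <= C^k for random column choices, where
# s_n is a rapidly decaying test sequence and C = sum |s_n|.

rng = np.random.default_rng(0)
s = rng.standard_normal(40) * 0.4 ** np.arange(40)
C = np.abs(s).sum()
k = 4
for _ in range(200):
    cols = np.sort(rng.choice(30, size=k, replace=False))
    minor = np.linalg.det(np.array([[s[i + j] for j in cols]
                                    for i in range(k)]))
    assert abs(minor) <= C ** k + 1e-9
```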

Consider an infinite Hankel form

$$\begin{aligned} S(x,x)=\sum _{j,k=0}^\infty s_{j+k}x_j x_k. \end{aligned}$$
(19)

This double series converges absolutely, for example, when \(|x_j|\leqslant j^{-2}.\)

Indeed, taking into account the condition (17), we have

$$\begin{aligned} |S(x,x)|\leqslant C\sum _{j,k=0}^\infty |x_j||x_k|= C\left( \sum _{j=0}^\infty |x_j|\right) ^2. \end{aligned}$$

In what follows we assume that all \( D_j \ne 0 \). Examples of such matrices we present later.

Consider the truncated Hankel matrix \( S_ {p} \) of the form (13) and truncated Hankel form \(S_{p}(x,x)\) of the form (14).

Reduction of quadratic forms to the sum of the squares gives (see [1, Ch. 10, §3])

$$\begin{aligned} S_{p}(x,x)=\sum _{k=0}^{p-1}\frac{1}{D_{k-1}D_k} (X^{(p)}_k)^2, \end{aligned}$$
(20)

where

$$\begin{aligned} X^{(p)}_k=\sum _{q=k}^{p-1}S\begin{pmatrix} 0&{}\quad \cdots &{}\quad k-1&{}\quad k\\ 0&{}\quad \cdots &{}\quad k-1&{}\quad q \end{pmatrix}x_q. \end{aligned}$$

Lemma 2 implies that

$$\begin{aligned} |X_k^{(p)}|\leqslant C^{k+1}\sum _{q=k}^{p-1}|x_q|. \end{aligned}$$

Setting

$$\begin{aligned} |x_q|\leqslant \frac{1}{q^2}, \end{aligned}$$

we find that for such \(x_q \) there exists a limit

$$\begin{aligned} \lim _{p\rightarrow \infty }X_k^{(p)}=X_k=\sum _{q=k}^{\infty }S\begin{pmatrix} 0&{}\quad \cdots &{}\quad k-1&{}\quad k\\ 0&{}\quad \cdots &{}\quad k-1&{}\quad q\end{pmatrix}x_q. \end{aligned}$$

On the other hand, for the same \( x_q\) there exists the limit

$$\begin{aligned} \lim _{p\rightarrow \infty } S_p(x,x)=S(x,x). \end{aligned}$$

Besides that,

$$\begin{aligned} |X_k|\leqslant C_1C^{k+1}, \end{aligned}$$

where

$$\begin{aligned} C_1=\sum _{j=1}^\infty \frac{1}{j^2}. \end{aligned}$$

Therefore we have

$$\begin{aligned} S(x,x)=\sum _{k=0}^{\infty }\frac{1}{D_{k-1}D_k} X_k^2. \end{aligned}$$
(21)

Note that, in view of the identity (20), when squaring \(X_k \) and substituting the result into (21), the coefficient of each product \( x_jx_k \) in S(x,x) contains only a finite number of terms.
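The decomposition (20) can be verified numerically on a small truncated form. Here \(D_m\) denotes the leading \((m+1)\times (m+1)\) minor \(\det (s_{j+k})_{j,k=0}^{m}\) with \(D_{-1}=1\); the sequence \(s_j\) and the vector x are arbitrary choices of ours with all \(D_m\ne 0\).

```python
import numpy as np

# Check S_p(x,x) = sum_k (X_k^{(p)})^2 / (D_{k-1} D_k) for p = 3, where
# X_k^{(p)} is built from the bordered minors of formula (20).

s = np.array([3.0, 1.0, 2.0, 0.5, 1.5])          # s_0, ..., s_4
p = 3
S = np.array([[s[j + k] for k in range(p)] for j in range(p)])
D = [1.0] + [np.linalg.det(S[:m + 1, :m + 1]) for m in range(p)]

def bordered(k, q):                              # rows 0..k, cols 0..k-1 and q
    return np.linalg.det(S[:k + 1][:, list(range(k)) + [q]]) if k > 0 else s[q]

x = np.array([0.7, -1.2, 0.4])
lhs = x @ S @ x
rhs = sum(sum(bordered(k, q) * x[q] for q in range(k, p)) ** 2
          / (D[k] * D[k + 1]) for k in range(p))
assert abs(lhs - rhs) < 1e-9
```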

Thus, we obtain the following statement.

Proposition 2

If a Hankel matrix S of the form (9) satisfies the condition (17) and all \( D_p \ne 0 \) then the relation (21) holds, where the series

$$\begin{aligned} X_k=\sum _{q=k}^{\infty }S\begin{pmatrix} 0&{}\quad \cdots &{}\quad k-1&{}\quad k\\ 0&{}\quad \cdots &{}\quad k-1&{}\quad q \end{pmatrix}x_q,\quad k=0,1,\ldots , \end{aligned}$$

are absolutely convergent for \(|x_q|\leqslant \frac{1}{q^2}\).

Write the equality (21) in another form

$$\begin{aligned} S(x,x)=\sum _{k=0}^{\infty }\frac{D_k}{D_{k-1}} Y_k^2, \end{aligned}$$

where

$$\begin{aligned} Y_k=\frac{1}{D_k}\sum _{q=k}^{\infty }S\begin{pmatrix} 0&{}\quad \cdots &{}\quad k-1&{}\quad k\\ 0&{}\quad \cdots &{}\quad k-1&{}\quad q \end{pmatrix}x_q. \end{aligned}$$
(22)

Let us treat the system (22) as an infinite system of equations with respect to \(x_p\), \(p=0,1,\ldots \). Denote by A the infinite matrix of this system, with elements \(a_{j,k}\). The matrix is upper triangular with units on the main diagonal.

Consider the cofactors A(j, k) of the elements \( a_{j,k} \) of the matrix A. These cofactors are well defined: since beyond a certain row the matrix has only units on the main diagonal, the Laplace formula shows that every A(j, k) is the determinant of a finite matrix. It is clear that \( A(j, j) = 1 \) and \( A(j, k) = 0 \) for \( j> k \). Now consider the infinite matrix B consisting of the elements A(k, j). This is also an upper triangular matrix. Multiplying the matrix A by B according to the rule of matrix multiplication, we find that the sums

$$\begin{aligned} \sum _{j=1}^\infty a_{j,k}A(j,s) \end{aligned}$$

are finite; they are equal to 1 if \( s = k \) and to 0 if \( s \ne k \). This follows from the rule for computing the inverse of a finite matrix with unit determinant. Therefore

$$\begin{aligned} AB=BA =I, \end{aligned}$$

where I is the infinite identity matrix.

Multiplying the equality (22) by B, we obtain expressions for \( x_q \) in term of infinite series with respect to \( Y_k \). These series obviously converge by Proposition 2.

Let us consider an entire function of finite order of growth \( \rho \) of the form (1) with an infinite number of zeros \(\alpha _1,\ldots ,\alpha _n,\ldots \), power sums \(\sigma _k\) of the form (11) and \(s_j=\sigma _{2k_0+j}\). We check whether the condition (17) is fulfilled for \(s_j \). We have

$$\begin{aligned} \sum _{k=0}^\infty |s_k|= & {} \sum _{k=0}^\infty \left| \sum _{j=1}^\infty \frac{1}{\alpha _j^{2k_0+k}}\right| \leqslant \sum _{k=0}^\infty \sum _{j=1}^\infty \frac{1}{|\alpha _j|^{2k_0+k}}\\= & {} \sum _{j=1}^\infty \sum _{k=0}^\infty \frac{1}{|\alpha _j|^{2k_0+k}}=\sum _{j=1}^\infty \frac{1}{|\alpha _j|^{2k_0}}\sum _{k=0}^\infty \frac{1}{|\alpha _j|^{k}}=\sum _{j=1}^\infty \frac{1}{|\alpha _j|^{2k_0-1}}\cdot \frac{1}{|\alpha _j|-1}, \end{aligned}$$

if all \(|\alpha _j|>1\).

The following inequality holds

$$\begin{aligned} \frac{1}{|\alpha _j|-1}\leqslant \frac{1}{|\alpha _1|-1} \end{aligned}$$

by virtue of the monotonically increasing sequence of absolute values \(|\alpha _j|\). Therefore, the series (17) converges and

$$\begin{aligned} \sum _{k=0}^\infty |s_k|\leqslant \frac{1}{|\alpha _1|-1}\cdot \sum _{j=1}^\infty \frac{1}{|\alpha _j|^{2k_0-1}} = C. \end{aligned}$$

Recall the Binet–Cauchy formula for a product of rectangular matrices. Let

$$\begin{aligned} A=\begin{pmatrix} a_{11}&{}\quad a_{12}&{}\quad \cdots &{}\quad a_{1n}\\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \\ a_{m1}&{}\quad a_{m2}&{}\quad \cdots &{}\quad a_{mn} \end{pmatrix},\quad \text{ and } \quad B=\begin{pmatrix} b_{11}&{}\quad b_{12}&{}\quad \cdots &{}\quad b_{1m}\\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \\ b_{n1}&{}\quad b_{n2}&{}\quad \cdots &{}\quad b_{nm}\end{pmatrix}. \end{aligned}$$

If the matrix

$$\begin{aligned} C=\begin{pmatrix} c_{11}&{}\quad c_{12}&{}\quad \cdots &{}\quad c_{1m}\\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \\ c_{m1}&{}\quad c_{m2}&{}\quad \cdots &{}\quad c_{mm}\end{pmatrix}, \end{aligned}$$

is the product \(C=A\cdot B\), then (see, e.g., [1, Ch. 1, §2])

$$\begin{aligned} \det C=\sum _{1\leqslant k_1<k_2<\cdots <k_m\leqslant n}A\begin{pmatrix} 1&{}\quad 2&{}\quad \cdots &{}\quad m\\ k_1&{}\quad k_2&{}\quad \cdots &{}\quad k_m\\ \end{pmatrix} \cdot B\begin{pmatrix} k_1&{}\quad k_2&{}\quad \cdots &{}\quad k_m\\ 1&{}\quad 2&{}\quad \cdots &{}\quad m\\ \end{pmatrix}.\quad \end{aligned}$$
(23)
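The finite formula (23) is easy to verify directly on a small integer example (the matrices below are arbitrary illustrative data, not from the text):

```python
from itertools import combinations

# Finite Binet-Cauchy check with m = 2, n = 4.
A = [[1, 2, 3, 4],
     [0, 1, -1, 2]]          # 2 x 4
B = [[2, 1],
     [0, 3],
     [1, 1],
     [-1, 2]]                # 4 x 2

# C = A * B is a 2 x 2 matrix.
C = [[sum(A[i][t] * B[t][j] for t in range(4)) for j in range(2)]
     for i in range(2)]
det_C = C[0][0] * C[1][1] - C[0][1] * C[1][0]

# Right-hand side of (23): sum over column pairs k1 < k2 of the products
# of the corresponding 2 x 2 minors of A and B.
rhs = 0
for k1, k2 in combinations(range(4), 2):
    minor_A = A[0][k1] * A[1][k2] - A[0][k2] * A[1][k1]
    minor_B = B[k1][0] * B[k2][1] - B[k2][0] * B[k1][1]
    rhs += minor_A * minor_B

assert det_C == rhs
```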

Suppose that A and B are infinite rectangular matrices of the form

$$\begin{aligned} A=\begin{pmatrix} a_{11}&{}\quad a_{12}&{}\quad \cdots &{}\quad a_{1n}&{}\quad \cdots \\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \\ a_{m1}&{}\quad a_{m2}&{}\quad \cdots &{}\quad a_{mn}&{}\quad \cdots \\ \end{pmatrix},\qquad B=\begin{pmatrix} b_{11}&{}\quad b_{12}&{}\quad \cdots &{}\quad b_{1m}\\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \\ b_{n1}&{}\quad b_{n2}&{}\quad \cdots &{}\quad b_{nm}\\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \end{pmatrix}. \end{aligned}$$

Lemma 3

If the series

$$\begin{aligned} \sum _{n=1}^\infty a_{k_1 n}\cdot b_{n k_2} \end{aligned}$$
(24)

converges absolutely for all \( 1\leqslant k_1,k_2\leqslant m\), then we have

$$\begin{aligned} \det C=\sum _{1\leqslant k_1<k_2<\cdots <k_m}A\begin{pmatrix} 1&{}\quad 2&{}\quad \cdots &{}\quad m\\ k_1&{}\quad k_2&{}\quad \cdots &{}\quad k_m\\ \end{pmatrix} \cdot B\begin{pmatrix} k_1&{}\quad k_2&{}\quad \cdots &{}\quad k_m\\ 1&{}\quad 2&{}\quad \cdots &{}\quad m\\ \end{pmatrix} \end{aligned}$$
(25)

and the resulting series converges absolutely.

To prove the equality (25), we apply the Binet–Cauchy formula (23) to finite submatrices of the matrix A of order \( m \times n \) and to finite submatrices of the matrix B of order \( n \times m \), and then pass to the limit as \(n\rightarrow \infty \). The absolute convergence of the series (24) justifies the passage to the limit and ensures the absolute convergence of the series (25).

For the function f we introduce an infinite matrix \( \Delta \) with elements

$$\begin{aligned} \delta _{kj}=\dfrac{1}{\alpha _j^{2k_0+k}},\quad k=0,1,\ldots ,\quad j=1,2,\ldots , \end{aligned}$$

and \( \Delta '\) is the transpose of matrix \(\Delta \).

In what follows we assume that all \( | \alpha _j|>1 \). Therefore, for such f Proposition 2 is valid.

Consider a matrix \( \Delta _m \) that consists of the first m rows of the matrix \(\Delta \).

Proposition 3

The determinants \( D_m \) have a representation in the form

$$\begin{aligned} D_m=\sum _{1\leqslant k_1<k_2<\cdots <k_m}\left[ \Delta _m\begin{pmatrix} 1&{}2&{}\cdots &{}m\\ k_1&{}k_2&{}\cdots &{}k_m\\ \end{pmatrix}\right] ^2, \end{aligned}$$
(26)

and this series converges absolutely.

Proof

We apply formula (25) to the matrices \( \Delta _m \) and \( \Delta _m '\). We use the following facts: a matrix and its transpose have the same determinant, and

$$\begin{aligned} s_k=\sigma _{2k_0+k}=\sum _{j=1}^\infty \frac{1}{\alpha _j^{2k_0+k}}. \end{aligned}$$

Therefore, \( s_{j+k} \) is the (infinite) inner product of the j-th row of the matrix \( \Delta _m \) with the k-th column of the matrix \(\Delta _m'\). \(\square \)

Theorem 7

Assume that a function f(z) has real coefficients. All zeros of f are real if and only if \(D_m>0\), \(m=0,1,\ldots \).

Proof

Suppose that all zeros of f(z) are real. Using formula (26), we obtain that \( D_m \) is a sum of squares of minors, some of which are strictly positive, since they are Vandermonde-type determinants with distinct columns.

Let all \( D_m> 0 \). Consider an infinite Hankel form S(x, x) of the form (19) with real variables \( x_j \). We have previously obtained that this form is an absolutely convergent double series when \(|x_k|\leqslant k^{-2}\), \(k\geqslant 1\).

Assume that \( \beta _1, \ \beta _2, \ldots , \beta _k \ldots \) are distinct zeros of the function f with multiplicities \(n_1,\ n_2, \ldots , n_k,\ldots \), respectively.

Transform the form S(x, x):

$$\begin{aligned} S(x,x)= & {} \sum _{j,k=0}^\infty s_{j+k}x_jx_k=\sum _{j,k=0}^\infty \sum _{m=1}^\infty \frac{n_m}{\beta _m^{2k_0+j+k}}x_jx_k\\= & {} \sum _{m=1}^\infty \sum _{j,k=0}^\infty \frac{n_m}{\beta _m^{2k_0+j+k}}x_jx_k=\sum _{m=1}^\infty \frac{n_m}{\beta _m^{2k_0}}\left( x_0+\frac{x_1}{\beta _m}+\cdots +\frac{x_k}{\beta _m^k}+\cdots \right) ^2. \end{aligned}$$

The change of the order of summation is justified since the corresponding series converge.
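For finitely many zeros and finitely many variables this rearrangement is an exact algebraic identity, which can be checked in exact rational arithmetic (the zeros, multiplicities, and values of x below are made up for illustration):

```python
from fractions import Fraction as F

# Finite check of the rearrangement of S(x, x): two sample zeros
# beta = 2, 3 with multiplicities 1, 2, k_0 = 1, and three variables x.
betas = [(F(2), 1), (F(3), 2)]
k0 = 1
x = [F(1), F(1, 2), F(-1, 3)]
K = len(x)

def s(n):  # power sums s_n = sum_m mult_m * beta_m^(-(2*k0 + n))
    return sum(mult * b ** -(2 * k0 + n) for b, mult in betas)

# Left-hand side: the Hankel form sum_{j,k} s_{j+k} x_j x_k.
lhs = sum(s(j + k) * x[j] * x[k] for j in range(K) for k in range(K))

# Right-hand side: sum over zeros of mult * beta^(-2*k0) * (linear form)^2.
rhs = sum(mult * b ** (-2 * k0) * sum(x[k] * b ** -k for k in range(K)) ** 2
          for b, mult in betas)

assert lhs == rhs  # exact equality of the two arrangements
```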

Denote

$$\begin{aligned} Z_m=\frac{1}{\beta _m^{k_0}}\left( x_0+\frac{x_1}{\beta _m}+\cdots +\frac{x_k}{\beta _m^k}+\cdots \right) , \end{aligned}$$

then we find that

$$\begin{aligned} S(x,x)=\sum _{m=1}^\infty n_mZ_m^2. \end{aligned}$$
(27)

If a zero \( \beta _m \) is real then \(Z_m^2\geqslant 0\). If a zero is complex then, pairing it with its conjugate, we obtain \(Z_m^2+{\overline{Z}}_m^2=2\big (P_m^2-Q_m^2\big )\), where \(P_m=\mathrm{Re}\, Z_m\) and \(Q_m= \mathrm{Im}\, Z_m\). This means that in the representation (27) a real zero contributes a square, while a pair of complex conjugate zeros contributes a difference of squares. Therefore, the relation (27) can be written as

$$\begin{aligned} S(x,x)=\sum _{m=1}^\infty r_m F_m^2, \end{aligned}$$
(28)

where \( r_m \) is 1 or \( -1 \) and linear infinite forms \(F_m \) are real.

We shall show that if all \( D_m> 0 \), then all \( r_m = 1 \) in (28). Suppose that \( r_ {m_0} = - 1 \) for some \( m_0 \). Consider a system of equations

$$\begin{aligned} F_0=0,\ \ldots ,\ F_{m_0-1}=0,\ F_{m_0+1}=0,\ \ldots ,\ F_k=0 \quad \; \text{ for } \text{ sufficiently } \text{ large }\ k. \end{aligned}$$
(29)

All equations in this system are different.

Move all \(x_j \) with \( j> k \) to the right hand side of the system (29). Then the right hand side consists of convergent series. The coefficients of \( x_0, \ldots , x_{k-1} \) form a Vandermonde matrix (or its real or imaginary part). Therefore, we can express \(x_0,\ldots , x_{k-1}\) from the system (29) as convergent series in the remaining variables. Substituting these expressions back into the system (29) turns each equation into an identity.

It is clear that, after substituting these solutions into \( F_{m_0} \), this form cannot be identically equal to zero, because the equation \(F_{m_0}=0\) is not a consequence of the system (29). Hence there exist \( x_{k}, \ldots \) such that \(F_{m_0}\ne 0\).

Recall that the form \(F_m \) is obtained from the form \( Z_m \) and all variables \( | x_j | \) are bounded by a constant \( c_1 \). Then we obtain

$$\begin{aligned} |Z_m|\leqslant & {} \frac{ n_m}{|\beta _m|^{2k_0}}\left( |x_0|+\left| \frac{x_1}{\beta _m}\right| +\cdots +\left| \frac{x_k}{\beta _m^k}\right| +\cdots \right) \\\leqslant & {} \frac{c_1 n_m}{|\beta _m|^{2k_0}} \left( 1+\left| \frac{1}{\beta _m}\right| +\cdots +\left| \frac{1}{\beta _m^k}\right| +\cdots \right) =\frac{c_1 n_m}{|\beta _m|^{2k_0}}\cdot \frac{|\beta _m|}{|\beta _m|-1}. \end{aligned}$$

We assume that \( | \beta _m |\geqslant |\beta _1|> 1 \) for all m, so that \(\dfrac{|\beta _m|}{|\beta _m|-1}\leqslant \dfrac{|\beta _1|}{|\beta _1|-1}=c_2\). Therefore,

$$\begin{aligned} |Z_m|\leqslant \frac{c_1c_2\, n_m}{|\beta _m|^{2k_0}}. \end{aligned}$$

Then

$$\begin{aligned} \sum _{m=p}^\infty |Z_m|\leqslant c_1c_2\sum _{m=p}^\infty \frac{n_m}{|\beta _m|^{2k_0}}=\delta . \end{aligned}$$
(30)

The last expression is the remainder of a convergent series, so it can be made arbitrarily small uniformly with respect to the \( x_j \). Choosing sufficiently large k such that \( \delta <| F_{m_0} | \), we find that S(x, x) takes a negative value in this case.

At the same time we have the representation (21) for S(x, x). By the hypothesis of the theorem it is non-negative. This contradicts our assumption that \(r_{m_0}=-1\). \(\square \)

Example 3

Consider the following entire function f(z) of order \(\rho =\dfrac{1}{2}\):

$$\begin{aligned} f(z)=\frac{\sin \sqrt{z}}{\sqrt{z}}=\sum _{n=0}^\infty (-1)^n\frac{z^n}{(2n+1)!}. \end{aligned}$$

It is known that

$$\begin{aligned} \ln \frac{\sin \sqrt{z}}{\sqrt{z}}=-\sum _{n=1}^\infty \frac{2^{2n-1} B_n}{(2n)!}\cdot \frac{z^n}{n}, \end{aligned}$$
(31)

where \( B_n \) are Bernoulli numbers (see, e.g., [11, Ch. 12, Section 449]). They are all positive.

It follows from (3) and Corollary 2 that

$$\begin{aligned} \sigma _n= \frac{2^{2n-1} B_n}{(2n)!}\quad \text{ for }\quad n\geqslant 1, \end{aligned}$$

and \(B_1=\dfrac{1}{6},\ B_2=\dfrac{1}{30},\ B_3=\dfrac{1}{42},\ B_4= \dfrac{1}{30},\ B_5=\dfrac{5}{66},\ldots \) . Set \( s_n = \sigma _ {n + 1} \), then \( D_m \) is defined as

$$\begin{aligned} D_m=\begin{vmatrix} s_0&\quad s_1&\quad \cdots&\quad s_{m-1}\\ s_1&\quad s_2&\quad \cdots&\quad s_m\\ \cdots&\quad \cdots&\quad \cdots&\quad \cdots \\ s_{m-1}&\quad s_m&\quad \cdots&\quad s_{2m-2}\end{vmatrix}. \end{aligned}$$

Factoring out powers of 2 from the determinant \( D_m\), we find that the sign of \( D_m \) coincides with the sign of the determinant

$$\begin{aligned} \Delta _m'= \begin{vmatrix} \frac{B_1}{2!}&\quad \frac{B_2}{4!}&\quad \cdots&\quad \frac{B_m}{(2m)!}\\ \frac{B_2}{4!}&\quad \frac{B_3}{6!}&\quad \cdots&\quad \frac{B_{m+1}}{(2m+2)!}\\ \cdots&\quad \cdots&\quad \cdots&\quad \cdots \\ \frac{B_m}{(2m)!}&\quad \frac{B_{m+1}}{(2m+2)!}&\quad \cdots&\quad \frac{B_{2m-1}}{(4m-2)!}\end{vmatrix}. \end{aligned}$$

Therefore, \(\Delta _1'=\dfrac{B_1}{2!}=\dfrac{1}{12}>0\) and

$$\begin{aligned} \Delta _2'= \begin{vmatrix} \frac{1}{12}&\quad \frac{1}{30\cdot 24}\\ \frac{1}{30\cdot 24}&\quad \frac{1}{42\cdot 720} \end{vmatrix}= \frac{1}{12\cdot 720}\begin{vmatrix} 1&\quad \frac{1}{ 60}\\ 1&\quad \frac{1}{42} \end{vmatrix} >0. \end{aligned}$$

Similarly, we have

$$\begin{aligned} \Delta _3'=\begin{vmatrix} \frac{1}{12}&\quad \frac{1}{30\cdot 24}&\quad \frac{1}{42\cdot 720}\\ \frac{1}{30\cdot 24}&\quad \frac{1}{42\cdot 720}&\quad \frac{1}{30\cdot 8!}\\ \frac{1}{42\cdot 720}&\quad \frac{1}{30\cdot 8!}&\quad \frac{5}{66\cdot (10)!} \end{vmatrix}= c\begin{vmatrix} 1&\quad \frac{1}{30}&\quad \frac{1}{210}\\ 1&\quad \frac{1}{21}&\quad \frac{1}{140}\\ 1&\quad \frac{1}{20}&\quad \frac{1}{99}\end{vmatrix}>0, \end{aligned}$$

where \(c>0\).

This is fully consistent with Theorem 7. Moreover, Theorem 7 shows that all \( D_m \) composed of Bernoulli numbers are positive.
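The positivity of the first few of these Bernoulli-number determinants can also be confirmed in exact rational arithmetic (the determinant routine below is a straightforward permutation expansion, adequate only for small m):

```python
from fractions import Fraction as F
from math import factorial
from itertools import permutations

# Bernoulli numbers in the convention of the text: B_1 = 1/6, B_2 = 1/30, ...
B = [F(1, 6), F(1, 30), F(1, 42), F(1, 30), F(5, 66)]

def sigma(n):  # sigma_n = 2^(2n-1) * B_n / (2n)!
    return F(2 ** (2 * n - 1)) * B[n - 1] / factorial(2 * n)

def det(M):  # exact determinant by permutation expansion
    n = len(M)
    total = F(0)
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = F(1)
        for i in range(n):
            term *= M[i][p[i]]
        total += sign * term
    return total

s = [sigma(n + 1) for n in range(5)]  # s_n = sigma_{n+1}
for m in (1, 2, 3):
    Dm = det([[s[j + k] for k in range(m)] for j in range(m)])
    assert Dm > 0  # all zeros of sin(sqrt(z))/sqrt(z) are real
```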

Consider the case when the function f(z) has only imaginary zeros. For polynomials, a condition for the roots to be imaginary can be stated in terms of inners [2, §2.4].

Theorem 8

Let an entire function f(z) of the form (1) with real Taylor coefficients have the order of growth \(\rho <2\). For the function f to have only imaginary roots it is necessary and sufficient that the determinants

$$\begin{aligned} \Delta _m=\begin{vmatrix} s_0&\quad s_2&\quad \cdots&\quad s_{2m-2}\\ s_2&\quad s_4&\quad \cdots&\quad s_{2m}\\ \cdots&\quad \cdots&\quad \cdots&\quad \cdots \\ s_{2m-2}&\quad s_{2m}&\quad \cdots&\quad s_{4m-4}\end{vmatrix}, \end{aligned}$$
(32)

are positive for all m, and the conditions

$$\begin{aligned} \sum _{m=0}^n\frac{b_{n-m}(-b_1)^m}{m!}{\left\{ \begin{array}{ll} {=}0\quad \text{ for } \;\text{ odd }\; n,\\ {\geqslant } 0\quad \text{ for } \;\text{ even }\; n. \end{array}\right. } \end{aligned}$$
(33)

are satisfied.

Proof

Assume that a function f has imaginary zeros \( \pm i \gamma _j \), \( \gamma _j \in \mathbb R \). Then the Taylor decomposition of the canonical product for f contains only even powers of z. Indeed, the product in (5) has the form

$$\begin{aligned}&\left( 1-\frac{z}{i\gamma _n}\right) e^{\frac{z}{i\gamma _{n}}+\frac{z^{2}}{2(i\gamma _{n})^{2}}+\cdots +\frac{z^{p}}{p(i\gamma _{n})^{p}}}\cdot \left( 1+\frac{z}{i\gamma _n}\right) e^{\frac{z}{-i\gamma _{n}}+\frac{z^{2}}{2{(-i\gamma _{n}})^{2}}+\cdots +\frac{z^{p}}{p(-i\gamma _{n})^{p}}}\\&\quad =\left( 1+\frac{z^2}{\gamma ^2_n}\right) e^{\frac{z^{2}}{-\gamma _{n}^{2}}+\cdots +\left( \frac{z^{p}}{p(i\gamma _{n})^{p}}+\frac{({-z})^{p}}{p(i\gamma _{n})^{p}}\right) }. \end{aligned}$$

So this product depends on \( z^2 \). Therefore, the canonical product (6) that corresponds to the function f takes the form:

$$\begin{aligned} \Phi (z)=\sum _{j=0}^\infty c_jz^{j}=\sum _{j=0}^\infty c_{2j}z^{2j}. \end{aligned}$$

We introduce the function

$$\begin{aligned} H(w)=\sum _{j=0}^\infty c_{2j}w^{j}. \end{aligned}$$

Zeros of this function are exactly the numbers \( - \gamma _n ^ 2 \). Therefore, H(w) has only real zeros. The function H(w) is also an entire function of finite order of growth, and its order is half the order of the canonical product \(\Phi (z)\). So in our case the order of growth of the function H(w) is less than 1.

Consider the power sums for the function H(w):

$$\begin{aligned} \sum _{n=1}^\infty \big (-\gamma _n^{-2}\big )^k. \end{aligned}$$

Since the zeros of f come in pairs \( \pm i\gamma _n \), this power sum is equal to one half of the power sum \( \sigma _{2k} \) of the function f(z).

Since by Theorem 2 \(p\leqslant \rho \) and \(q\leqslant \rho \), in our case \(p=0\) and \(q=0\).

Therefore, the Hadamard decomposition of the function H(w) has the form

$$\begin{aligned} H(w)= \prod _{n=1}^{\infty }\left( 1-\frac{w}{\beta _n}\right) . \end{aligned}$$

It is clear that if all zeros of this function are real then they are negative if and only if the Taylor expansion of H(w) has non-negative coefficients \(c_{2j}\). These coefficients are the coefficients of the Taylor expansion for the canonical product \(\Phi (z)\).

The necessary condition for all zeros to be imaginary is that the even-numbered Taylor coefficients of the canonical product \( \Phi (z) \) are non-negative and the odd-numbered coefficients are equal to 0.

Let us find the condition on the Taylor coefficients of the function f(z) which guarantees non-negativity of the coefficients of the canonical product. Since the order of growth of the function is \( \rho <2 \), its Hadamard expansion takes the form:

$$\begin{aligned} f(z)= e^{d_1z}\cdot \Phi (z), \end{aligned}$$

where the polynomial in the exponent has degree less than 2, so the coefficient \( d_1\) may be equal to zero.

Using Proposition 2, Lemma 1 and (3), we find that \(d_1=b_1\).

Then \(\Phi (z)=e^{-b_1z}\cdot f(z)\). Therefore,

$$\begin{aligned} \Phi (z)= & {} \sum _{m=0}^\infty \frac{(-b_1z)^m}{m!}\cdot \sum _{k=0}^\infty b_kz^k=\sum _{n=0}^\infty z^n\sum _{m+k=n}\frac{b_k(-b_1)^m}{m!} \\= & {} \sum _{n=0}^\infty z^n\sum _{m=0}^n\frac{b_{n-m}(-b_1)^m}{m!}. \end{aligned}$$

So the Taylor coefficients of the canonical product \( \Phi (z) \) are

$$\begin{aligned} c_n=\sum _{m=0}^n\frac{b_{n-m}(-b_1)^m}{m!}. \end{aligned}$$

It follows from this relation and Theorem 7 that the theorem is proved. \(\square \)

Remark 2

If \(b_1 = 0 \) then the function f(z) coincides with its canonical product and the condition (33) is equivalent to

$$\begin{aligned} b_{2n}\geqslant 0, \quad b_{2n-1}=0. \end{aligned}$$
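As a sanity check of condition (33), take the hypothetical example \(f(z)=e^{z}\cosh z=(e^{2z}+1)/2\) (not from the text): its zeros \(i\pi (n+\tfrac{1}{2})\) are imaginary, \(b_1=1\), and its canonical product is \(\cosh z\), so the sums in (33) must reproduce the Taylor coefficients of \(\cosh z\):

```python
from fractions import Fraction as F
from math import factorial

# Taylor coefficients of f(z) = e^z * cosh(z) = (e^{2z} + 1)/2:
# b_0 = 1 and b_n = 2^n / (2 * n!) for n >= 1.
N = 10
b = [F(2 ** n, 2 * factorial(n)) for n in range(N + 1)]
b[0] = F(1)

# The sums c_n from (33); b_1 = 1 here.
c = [sum(b[n - m] * (-b[1]) ** m / factorial(m) for m in range(n + 1))
     for n in range(N + 1)]

for n in range(N + 1):
    if n % 2 == 1:
        assert c[n] == 0                   # odd coefficients vanish
    else:
        assert c[n] == F(1, factorial(n))  # coefficients of cosh(z)
```

This works because \(c_n\) is exactly the n-th Taylor coefficient of \(e^{-b_1 z}f(z)=\cosh z\).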

Recall the definition of a type of entire function.

Let f(z) have a finite order \(\rho \). If there exists a positive number K such that

$$\begin{aligned} f(z)=O(e^{KR^\rho })\quad \text{ for }\quad | z | = R \rightarrow + \infty \end{aligned}$$

then the function f has a finite type.

The infimum of such numbers K is called the type of the function and is denoted \(\kappa \). A function f has a minimal type if \(\kappa =0\).

If function f(z) is an entire function either of order \(\rho < 1\) or of order \(\rho =1\) and \(\kappa =0\), then its Hadamard expansion has the form (see, e.g., [7, Ch. 7, §2])

$$\begin{aligned} f(z)= \prod _{n=1}^{\infty }\left( 1-\frac{z}{\alpha _n}\right) . \end{aligned}$$

Therefore, the following statement holds.

Corollary 7

Let f be an entire function of minimal type and order \(\rho \leqslant 1\) with real coefficients. All zeros of function f are imaginary if and only if all minors of the form (32) are positive, the even-numbered Taylor coefficients of the function f are non-negative and the odd-numbered coefficients are zero.

Proof

In this case the entire function coincides with its canonical product, and the preceding argument covers this statement. \(\square \)
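The product expansion above can be illustrated on the function \(f(z)=\sin \sqrt{z}/\sqrt{z}\) of Example 3 (order \(1/2\), zeros \(\alpha _n=\pi ^2n^2\)); the truncation level below is an assumption:

```python
from math import sin, pi

# Truncated Hadamard product for f(z) = sin(sqrt(z))/sqrt(z), whose
# zeros are pi^2 * n^2; compare with the closed form at z = 4.
z, N = 4.0, 200000
product = 1.0
for n in range(1, N + 1):
    product *= 1.0 - z / (pi * pi * n * n)

assert abs(product - sin(2.0) / 2.0) < 1e-4  # sin(sqrt(4))/sqrt(4)
```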

Example 4

Let us consider the following function

$$\begin{aligned} f(z)=\frac{\sinh z}{z}=\sum _{k=0}^\infty \frac{z^{2k}}{(2k+1)!}. \end{aligned}$$

This function has the order of growth \(\rho =1\) and it coincides with its canonical product. The even-numbered coefficients of the Taylor expansion of f(z) are positive and the odd-numbered coefficients are zero. Therefore, the necessary condition of Theorem 8 is satisfied.

Consider the determinants of Theorem 8. Formula (31) gives

$$\begin{aligned} \ln \frac{\sinh z}{ z}= \ln \frac{\sin iz}{ iz}=-\sum _{n=1}^\infty \frac{(-1)^n2^{2n-1} B_n}{(2n)!}\cdot \frac{z^{2n}}{n}. \end{aligned}$$

Therefore, the power sums with even indices are

$$\begin{aligned} \sigma _{2n}= \frac{(-1)^n2^{2n} B_n}{(2n)!}\quad \text{ for }\quad n\geqslant 1, \end{aligned}$$

and \(s_{n}=\sigma _{2n+2}\). The determinant \( \Delta _m' \) of Example 3 now takes the form

$$\begin{aligned} \Delta _m'= \begin{vmatrix} \frac{B_1}{2!}&\quad \frac{-B_2}{4!}&\quad \cdots&\quad \frac{(-1)^{m-1}B_m}{(2m)!}\\ \frac{-B_2}{4!}&\quad \frac{B_3}{6!}&\quad \cdots&\quad \frac{(-1)^{m}B_{m+1}}{(2m+2)!}\\ \cdots&\quad \cdots&\quad \cdots&\quad \cdots \\ \frac{(-1)^{m-1}B_m}{(2m)!}&\quad \frac{(-1)^{m}B_{m+1}}{(2m+2)!}&\quad \cdots&\quad \frac{B_{2m-1}}{(4m-2)!}\end{vmatrix}. \end{aligned}$$

The element of this determinant at the intersection of the j-th row and the s-th column is negative if and only if \( j + s \) is odd. Then this determinant is equal to the determinant \( \Delta _m'\) from Example 3. Indeed, every product of elements taken one from each row and each column contains an even number of elements with odd sum of row and column indices, since \(\sum _j \big (j+\sigma (j)\big )=2(1+\cdots +m)\) is even for any permutation \(\sigma \).

Therefore, this example is reduced to Example 3.
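The sign-pattern argument amounts to a diagonal similarity: if \(M'_{js}=(-1)^{j+s}M_{js}\), then \(M'=DMD\) with \(D=\mathrm{diag}(+1,-1,+1,\ldots )\), so \(\det M'=\det M\). A quick check on arbitrary illustrative data:

```python
def det3(A):  # explicit 3 x 3 determinant
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

M = [[1, 2, 3],
     [2, 5, 7],
     [3, 7, 2]]

# Flip the sign of every entry whose (0-indexed) row + column sum is odd.
signed = [[(-1) ** (i + j) * M[i][j] for j in range(3)] for i in range(3)]

assert det3(signed) == det3(M)  # the determinant is unchanged
```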

Now consider the case when the determinants \( D_m \) may have either sign. Suppose, as before, that all \( D_m \ne 0 \). We introduce the sequence

$$\begin{aligned} D_{-1}D_0,\ D_0D_1,\ldots , D_mD_{m+1},\ldots , \end{aligned}$$
(34)

Recall that \(D_{-1}=1\).

Theorem 9

Assume that a function f has real coefficients. If the sequence (34) contains exactly m negative numbers then the function f has m distinct pairs of complex conjugate zeros and an infinite number of real zeros. If the sequence (34) has an infinite number of negative numbers then the function f(z) has an infinite number of complex conjugate zeros.

The proof essentially repeats the proof of Theorem 7 and the arguments from [1, Ch. 10, §1]. If the sequence (34) contains exactly m negative numbers, then we choose a sufficiently large k with \( k> m \). Suppose that the sequence \( Z_1 \), \( \ldots , Z_k \) contains s squares with negative sign and \( s> m \). Then we set to zero in the expression (21) those \( X_k \) which enter (21) with negative sign; there are exactly m such equations. We also set to zero the squares with positive sign in the relation (28); there are exactly \(k-s\) such equations, and \((k-s)+m<k\). Hence this system of equations has a nonzero solution. Using the inequality (30), we see that at this solution the expression (21) is non-negative while the expression (28) is strictly negative. This is impossible. The case \( s <m \) is treated similarly. It is clear that if there are infinitely many negative numbers in the sequence (34), then the number of complex zeros is infinite.\(\square \)

Remark 3

The sequence (34) always has an infinite number of positive elements, while the number of real zeros of the function f(z) can be finite or infinite. Therefore, this sequence cannot be used to obtain a condition for the existence of only finitely many real zeros.

Remark 4

If some of the roots satisfy \( | \alpha _j|\leqslant 1 \), then we can consider the function f(rz), where \( r> 0 \). The zeros of this function are \(\alpha _j/r\). Therefore, for sufficiently small r they are greater than 1 in absolute value. Then the sums \( \sigma _k \) are multiplied by \( r^k \) and the minors \( d_p \) are multiplied by \( r^{2k_0 + p (p-1)} \). As this takes place, the determinants \( D_p \) remain nonzero and their signs do not change. Moreover, under such a transformation real roots remain real and complex roots remain complex, thus Theorems 7–9 remain true.

Remark 5

If the sequence (34) has zero elements, then Theorem 9 remains true provided the signs of the sequence elements are assigned according to the Frobenius rule (16). We also note that we have excluded the case where the \( D_m \) vanish from some index on, since the number of zeros of the function is infinite.