1 Introduction

Let (R, E) be an exponential ring, i.e. a commutative unital ring R together with a monoid homomorphism \(E: (R, 0, +) \rightarrow (R^{*}, 1, \cdot )\). The ring of exponential polynomials over (R, E) in the variables \(z_1, \ldots , z_n\) is defined by recursion, and it is denoted by \(R[z_1, \ldots , z_n]^E\). Clearly, \(R[z_1, \ldots , z_n]^E\) carries an exponential map which makes it into an exponential ring, and if R is a domain then \(R[z_1, \ldots , z_n]^E\) is also a domain (for details see [10]). In this paper we focus on complex exponential polynomials in one variable with only one iteration of exponentiation, i.e. we consider exponential polynomials over \(\mathbb {C}\) in one variable of the form

$$\begin{aligned} f(z)= \lambda _1 e^{\mu _1z}+ \cdots + \lambda _N e^{\mu _Nz}, \end{aligned}$$
(1)

where \(\mu _i, \lambda _i \in \mathbb {C}\) for \(i= 1, \ldots , N\), with \(\mu _i\not =\mu _j\) for \(i \not = j\), and \(\lambda _i\not =0\) if \(f\not =0\).

We denote by \(\mathcal {E}\) the set of such exponential polynomials. It is easy to verify that \(\mathcal {E}\) is a subring of \(\mathbb {C}[z]^E,\) and it is clearly not an exponential ring. The ring \(\mathcal {E}\) is a domain whose units are of the form \( \lambda e^{\mu z}\) with \(\lambda \not =0\).

We are interested in analyzing the set of zeros of \(f \in \mathcal {E}\). A fundamental characterization of exponential polynomials with no zeros is due to Henson and Rubel (see [3]).

Theorem 1.1

Let \(f(\overline{z}) \in \mathbb {C}[\overline{z}]^E\). Then \(f(\overline{z})\) has no roots in \(\mathbb {C}\) iff \(f(\overline{z}) = e^{h(\overline{z})}\), for some \(h(\overline{z}) \in \mathbb {C}[\overline{z}]^E\).

A consequence of the previous result is the following (see [4]):

Theorem 1.2

A non-constant exponential polynomial \(f(z) \in \mathbb {C}[z]^E\) always has infinitely many zeros unless it is of the form \(f(z)=(z - a_1)^{n_1} \cdot \cdots \cdot (z - a_n)^{n_n} \cdot e^{g(z)}\), where \(a_1, \ldots , a_n \in \mathbb {C}\), \(n_1, \ldots , n_n \in \mathbb {N}\), and \(g(z) \in \mathbb {C}[z]^E\).

Theorem 1.2 implies that exponential polynomials in \(\mathcal {E}\) have either no root or infinitely many roots.

Shapiro [9] conjectured that if two exponential polynomials in \(\mathcal {E}\) have infinitely many common zeros, then in fact they have a common factor which belongs to \(\mathcal {E}\). In [12] van der Poorten and Tijdeman proved that this is true for certain exponential polynomials in \(\mathcal {E}\) (namely, for simple exponential polynomials: see Sect. 2). In [1] the authors extended the result of van der Poorten and Tijdeman to all polynomials in \(\mathcal {E}\), using the following famous conjecture in transcendental number theory:

Schanuel’s Conjecture (SC): Let \(\lambda _1, \ldots , \lambda _n \in \mathbb {C}\). Then the transcendence degree of \(\mathbb {Q}(\lambda _1, \ldots , \lambda _n, e^{\lambda _1}, \ldots , e^{\lambda _n})\) over \(\mathbb {Q}\) is greater than or equal to the dimension of the \(\mathbb {Q}\)-vector space generated by \(\lambda _1, \ldots , \lambda _n\).

In this paper we give some further results on the zero sets of exponential polynomials in \(\mathcal {E}\). More precisely, in Sect. 3 we characterize polynomials in \(\mathcal {E}\) which have the same zero set in terms of radical ideals; i.e. we obtain the following result:

Theorem 1.3

(SC)

(1) Let \(f, g \in \mathcal {E}\) and suppose \(V(f) = V(g)\). Then \(rad(f) = rad(g)\) (see Theorem 3.5).

(2) Let \(f, g \in \mathcal {E}\). Then \(V(f) - V(g)\) is finite iff \(rad(g) \subseteq rad(f)\) (see Proposition 3.6).

Finally, in Sect. 4 we introduce a notion of multiplicity for exponential polynomials in the ring \(\mathcal {E}\), using a derivation on \(\mathcal {E}\). We give a bound for the multiplicity in terms of the number of summands which appear in the polynomial \(f \in \mathcal {E}\). We obtain the following:

Proposition 1.4

Let \(f(z) \in \mathcal {E}\) be written as in (1); then the multiplicity of any root of f(z) is at most \(N-2\) (see Proposition 4.2).

2 Factorization for exponential polynomials

In this section we see another important result involving \(\mathcal {E}\), i.e. the ‘Almost Factorization theorem’. Ritt in [7] was the first to give such a factorization for exponential polynomials of height 1 over \(\mathbb {C}\), but this result has been extended over the years to larger classes of exponential polynomial rings (see [2, 5, 11]). We briefly review the main ideas in Ritt’s factorization for exponential polynomials in \(\mathcal {E}\). The following basic notions are needed.

Definition 2.1

An element f in \(\mathcal {E}\) is irreducible if there are no non-units g and h in \(\mathcal {E}\) such that \(f = gh.\)

Definition 2.2

Let \(f = \sum _{i=1}^{N}\alpha _ie^{\mu _i z}\) be an exponential polynomial in \(\mathcal {E}\). The support of f, denoted by supp(f), is the \(\mathbb {Q}\)-vector space generated by \(\mu _{1}, \ldots , \mu _{N}\).

Definition 2.3

An exponential polynomial f(z) of \(\mathcal {E}\) is simple if \(\dim supp(f) = 1\).

A simple exponential polynomial in \(\mathcal {E}\) is, up to a unit, a polynomial in \(e^{\mu z}\), for some \(\mu \in \mathbb {C}\). An example of a simple exponential polynomial in \(\mathcal {E}\) is

$$\begin{aligned} g(z) = \frac{e^{2\pi i z} -e^{-2\pi i z}}{2i} = \sin (2\pi z). \end{aligned}$$
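As a quick numerical sanity check (ours, not part of the original text; the sample points are arbitrary), the following Python sketch verifies the displayed identity at a few complex arguments:

```python
import cmath

def g(z: complex) -> complex:
    """g(z) = (e^{2*pi*i*z} - e^{-2*pi*i*z}) / (2i), a simple exponential polynomial."""
    return (cmath.exp(2j * cmath.pi * z) - cmath.exp(-2j * cmath.pi * z)) / 2j

# The identity g(z) = sin(2*pi*z) holds for every complex z.
for z in [0.25, 1.0 + 0.5j, -2.3, 0.1 - 0.7j]:
    assert abs(g(z) - cmath.sin(2 * cmath.pi * z)) < 1e-9
```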

Remark 2.4

A simple exponential polynomial factorizes, up to units, into a finite product of factors of the form \(1 - \alpha e^{\mu z},\) where \(\alpha ,\mu \in \mathbb {C}^{*}\). This is an easy consequence of the fact that the complex field is algebraically closed.
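The remark can be illustrated concretely (a hypothetical example of ours, not from Ritt's paper): writing \(w = e^{\mu z}\), a simple exponential polynomial becomes an ordinary polynomial in w, which factors over \(\mathbb {C}\); each linear factor \(w - r\) rescales, up to a unit, to \(1 - \alpha e^{\mu z}\) with \(\alpha = 1/r\). A minimal Python sketch for \(p(w) = w^2 - 3w + 2\):

```python
import cmath

# Treat a simple exponential polynomial as a polynomial in w = e^{mu*z}.
# Hypothetical example: f(z) = e^{2 mu z} - 3 e^{mu z} + 2, i.e. p(w) = w^2 - 3w + 2.
a, b, c = 1, -3, 2
disc = cmath.sqrt(b * b - 4 * a * c)
r1, r2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)   # roots of p(w)

# Up to the unit a*r1*r2, f factors as (1 - w/r1)(1 - w/r2), each factor of
# the form 1 - alpha * e^{mu z} with alpha = 1/r_i.
mu, z = 1.0, 0.3 + 0.2j
w = cmath.exp(mu * z)
f = w**2 - 3 * w + 2
factored = a * r1 * r2 * (1 - w / r1) * (1 - w / r2)
assert abs(f - factored) < 1e-9
```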

For the factorization theorem of Ritt the following lemma is crucial.

Lemma 2.5

Let \(f(z)=\sum _{i=1}^{N}\alpha _ie^{\mu _i z}\) and \(g(z) = \sum _{j=1}^{M}l_je^{m_j z}\) be non-zero exponential polynomials in \(\mathcal {E}\). If f is divisible by g then supp(ag) is contained in supp(bf), for some units a and b, i.e. every element of supp(ag) is a linear combination of elements of supp(bf) with rational coefficients.

Remark 2.6

Note that if f is a simple exponential polynomial and g divides f then g is also simple.

The factorization theorem that we need is the following (see [7]).

Theorem 2.7

Let \(f(z) = \lambda _1 e^{\mu _1 z} + \cdots + \lambda _N e^{\mu _N z}\), where \(\lambda _i, \mu _i \in \mathbb {C}\). Then f can be written uniquely up to the order of factors and multiplication by units as

$$\begin{aligned}f(z) = g_1 \cdot \cdots \cdot g_k \cdot h_1 \cdot \cdots \cdot h_m, \end{aligned}$$

where the \(g_j\)’s are simple exponential polynomials with \(supp(g_{j_1})\not = supp(g_{j_2})\) for \(j_1\not = j_2\), and the \(h_l\)’s are irreducible polynomials in \(\mathcal {E}\).

Moreover, Ritt in [8] proved the following result, which we will need later:

Theorem 2.8

If a quotient of exponential polynomials is an entire function then it is an exponential polynomial.

The previous result is equivalent to saying that if every zero of \(f(z) = \sum _{i=1}^N \lambda _i e^{\mu _i z}\) is a zero of a polynomial \(g(z) = \sum _{j=1}^{M}l_je^{m_j z}\), then f(z) divides g(z). (Taking multiplicities into account; in particular this holds if the multiplicity of each zero of g is at least the multiplicity of the corresponding zero of f.)

3 Zero set and radical ideals

In this section we investigate the zero sets of polynomials in \(\mathcal {E}\). We will use Shapiro’s Conjecture, which is now a theorem (see [1, 12]); more precisely, using the Factorization Theorem, we can split the result into two theorems:

Theorem 3.1

([12]) Let \(f, g \in \mathcal {E}\), with f a simple exponential polynomial and g an arbitrary exponential polynomial. If f and g have infinitely many common zeros, then they have a common factor \(h \in \mathcal {E}\).

Theorem 3.2

([1]) (SC) Let \(f, g \in \mathcal {E}\), with f an irreducible exponential polynomial. If f and g have infinitely many common zeros, then f divides g.

We recall the following definition:

Definition 3.3

Let (f) be an ideal of \(\mathcal {E}\). The radical ideal of (f), denoted by rad(f), is the set of exponential polynomials \(g \in \mathcal {E}\) such that \(g^n \in (f)\) for some \(n \ge 1\).

Given \(f \in \mathcal {E}\), we denote its zero set by \(V(f) = \{ \alpha \in \mathbb {C}: f(\alpha ) = 0\}\).

If a simple exponential polynomial f has infinitely many roots, then by the Pigeonhole Principle one factor of f, say \(1-ae^{\mu z}\), has infinitely many zeros, and these are of the form

$$\begin{aligned} z=\frac{(2k\pi i-\log a)}{\mu }, \end{aligned}$$
(2)

with \(k\in \mathbb {Z}\), for a fixed value of \(\log a\).
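A small numerical check of formula (2) (ours; the parameter values are arbitrary nonzero complex numbers, and `cmath.log` fixes the branch of the logarithm):

```python
import cmath

a, mu = 2 + 1j, 3 - 0.5j          # hypothetical nonzero parameters
f = lambda z: 1 - a * cmath.exp(mu * z)

# Zeros from (2): z_k = (2*k*pi*i - log a) / mu, for a fixed branch of log a.
zeros = [(2 * k * cmath.pi * 1j - cmath.log(a)) / mu for k in range(-3, 4)]
assert all(abs(f(z)) < 1e-9 for z in zeros)
```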

We now analyze the case when two simple exponential polynomials have more than one root in common.

Lemma 3.4

Let \(f(z) = 1 - a e^{\mu z}\) and \(g(z) = 1 - b e^{\delta z}\), where \(a\ne b\), and \(\mu \ne \delta \). If \(V(f) \cap V(g)\) has cardinality greater than 1, then there exist \(m, n \in \mathbb {Z}\) with \(m, n \ne 0\), such that \(a^n = b^m\) and \(n \mu = m \delta \).

Proof

Let \(z_0 \in V(f) \cap V(g)\). By (2) we have that there exist \(k, l \in \mathbb {Z}\) such that

$$\begin{aligned} \frac{-\log a + 2k\pi i}{\mu } = \frac{-\log b + 2l\pi i}{\delta }; \end{aligned}$$
(3)

and so

$$\begin{aligned} \delta (-\log a + 2k\pi i)= \mu (-\log b + 2l\pi i) \end{aligned}$$

hence

$$\begin{aligned} \delta k - \mu l = (2\pi i)^{-1}(\delta \log a - \mu \log b). \end{aligned}$$

Let \(z_1 \in V(f) \cap V(g)\) and \(z_1\not = z_0\). There exist \(r,s \in \mathbb {Z}\) such that \(r \not = k, s \not = l,\) and

$$\begin{aligned} \delta r - \mu s = (2\pi i)^{-1}(\delta \log a - \mu \log b). \end{aligned}$$

So we obtain

$$\begin{aligned}\delta k - \mu l = \delta r - \mu s,\end{aligned}$$

hence

$$\begin{aligned}\frac{\delta }{\mu } = \frac{l-s}{k-r}.\end{aligned}$$

Both \(l-s\) and \(k-r\) are nonzero integers, since \(r \not = k\) and \(s \not = l\). If \(n = l-s\) and \(m =k-r\), then \(n\mu = m\delta \). Moreover, from \(e^{\mu z_0} = a^{-1}\) and \(e^{\delta z_0} = b^{-1}\), we obtain \(e^{n\mu z_0} = a^{-n}\) and \(e^{m\delta z_0} = b^{-m}\). Since \(n\mu = m\delta \), we have \(a^n = b^m\). \(\square \)
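A concrete instance of the lemma (a hypothetical example of ours): with \(a = 4\), \(\mu = 2\), \(b = 2\), \(\delta = 1\) we have \(a^1 = b^2\) and \(1 \cdot \mu = 2 \cdot \delta \), and indeed \(f = 1 - 4e^{2z}\) and \(g = 1 - 2e^{z}\) share infinitely many zeros:

```python
import cmath

# Hypothetical data realizing the conclusion of Lemma 3.4 with n = 1, m = 2.
a, mu = 4.0, 2.0
b, delta = 2.0, 1.0
n, m = 1, 2
assert a**n == b**m and n * mu == m * delta

f = lambda z: 1 - a * cmath.exp(mu * z)
g = lambda z: 1 - b * cmath.exp(delta * z)

# Every zero of g, namely z = -log 2 + 2*k*pi*i, is also a zero of f,
# so V(f) ∩ V(g) is infinite for these parameters.
for k in range(-3, 4):
    z = -cmath.log(2) + 2 * k * cmath.pi * 1j
    assert abs(g(z)) < 1e-9 and abs(f(z)) < 1e-9
```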

Using the previous results and notions, we are able to give a relation between zero sets and radical ideals:

Theorem 3.5

(SC) Let \(f, g \in \mathcal {E}\) and suppose \(V(f) = V(g)\). Then, \(rad(f) = rad(g)\).

Proof

We start by proving that, if \(V(f) \subseteq V(g)\), then \(rad(g) \subseteq rad(f)\).

If \(V(f) = V(g) = \emptyset \), then f and g have no roots in \(\mathbb {C}\); by the Henson–Rubel result (Theorem 1.1), f and g are invertible elements of \(\mathcal {E}\), so the implication is trivial.

Moreover, we notice that if \(\mid V(f) \mid < \infty \), then by the Hadamard Factorization Theorem for entire functions of order 1 with finitely many zeros, and by the form of the elements of \(\mathcal {E}\), we have \(f = ae^{\alpha z + b}\) with \(a, \alpha , b \in \mathbb {C}\); this implies that \((f) = \mathcal {E}\), so the implication is again trivial. Hence we may assume that \(\mid V(f) \mid = \infty \).

We first consider the following special cases:

Case 1: If f is a simple polynomial, then up to a constant f is a finite product of the form \(f = \prod (1 - ae^{\alpha z})\) with \(a, \alpha \in \mathbb {C}\). It is easy to see that the zeros of such a product have bounded multiplicity, say at most \(n_0\). Since we are assuming that every zero of f is a zero of g, every zero of f is a zero of \(g^{n_0}\) with at least the same multiplicity, so by Ritt’s Theorem 2.8 f divides \(g^{n_0}\). This implies that \(g^{n_0} \in (f)\), and so \(g \in rad(f)\). More precisely, \((g) \subseteq rad(f)\), hence \(rad(g) \subseteq rad(f)\).

Case 2: If f is irreducible and has infinitely many common zeros with g, then by Shapiro’s Theorem (assuming Schanuel’s Conjecture, see Theorem 3.2) f divides g, so \(g \in (f) \subseteq rad(f)\). We can conclude that \(rad(g) \subseteq rad(f)\).

Now we analyze the general case. Let \(f \in \mathcal {E}\). By Ritt’s Factorization Theorem, up to order and multiplication by units, we can write f uniquely as

$$\begin{aligned} f(z) = S_1 \cdot \cdots \cdot S_k \cdot I_1 \cdot \cdots \cdot I_m\end{aligned}$$

where the \(S_j\) are simple polynomials with \(supp(S_{j_1})\not = supp(S_{j_2})\) for \(j_1\not = j_2\), and the \(I_i\) are irreducible polynomials in \(\mathcal {E}\). Since we are assuming that \(V(f) \subseteq V(g)\), in particular we have \(V(S_j) \subseteq V(g)\) for every \(j= 1, \ldots , k\). Applying Case 1 to each \(S_j\), we obtain that \(S_j\) divides \(g^{n_j}\). Moreover, \(V(I_i) \subseteq V(g)\) for every \(i = 1, \ldots , m\). Applying Case 2 to each \(I_i\), we obtain that \(I_i\) divides \(g^{m_i}\). Altogether, f divides \(g^{\sum n_j + \sum m_i}\). So we obtain that \(rad(g) \subseteq rad(f)\).

In a similar way we can prove that if \(V(g) \subseteq V(f)\) then \(rad(f) \subseteq rad(g)\). \(\square \)

Proposition 3.6

(SC) Let \(f, g \in \mathcal {E}\). Then \(V(f) - V(g)\) is finite iff \(rad(g) \subseteq rad(f)\).

Proof

Assume that \(|V(f) - V(g) |= n\), and denote \(V(f) - V(g) = \{a_1, \ldots , a_n \}\). By Theorem 3.2, there exist irreducible exponential polynomials \(I_1, \ldots , I_n\) such that \(I_i(a_i) = 0\) and \(I_i\) does not divide f, for \(i=1, \ldots , n\). If we consider the product \(g' = g \cdot I_1 \cdot \cdots \cdot I_n\), it is clear that \(V(f) \subseteq V(g')\). Applying the first part of the proof of Theorem 3.5, we obtain \((g') \subseteq rad(f)\), hence \((g) \subseteq rad(f)\). So we have proved that \(rad(g) \subseteq rad(f)\).

The converse is trivial. \(\square \)

4 Multiplicity of roots

In order to define a multiplicity we have to introduce a derivation on the ring \(\mathcal {E}\).

Recall that a derivation \(\delta \) on a commutative ring R is a map \(\delta : R \rightarrow R\) such that

$$\begin{aligned} \begin{aligned} \delta (x + y)&= \delta (x) + \delta (y);\\ \delta (xy)&= x\delta (y) + \delta (x)y. \end{aligned} \end{aligned}$$

We usually write \(x', x'', \ldots , x^{(n)}\) for \(\delta (x), \delta (\delta (x)), \ldots \). The polynomial ring R[z] has the standard derivation \((\sum _{i = 0}^{N} \alpha _iz^{i})' = \sum _{i=1}^{N} i\alpha _iz^{i-1}\). It is an exercise to verify that this derivation extends uniquely to one on \(\mathcal {E}\) in such a way that \((e^{\alpha z})' = \alpha e^{\alpha z}\) and \(\alpha ' = 0\), for all \(\alpha \in \mathbb {C}\).
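As an illustration (our own sketch, not from the paper), elements of \(\mathcal {E}\) can be represented as finite maps \(\mu _i \mapsto \lambda _i\); the derivation then sends \(\lambda e^{\mu z}\) to \(\lambda \mu e^{\mu z}\), and the Leibniz rule can be checked mechanically:

```python
from collections import defaultdict

# Represent an element sum_i lambda_i e^{mu_i z} of E as a dict {mu_i: lambda_i}.
# (The representation is ours, chosen for the sketch.)

def deriv(f):
    """(lambda e^{mu z})' = lambda * mu * e^{mu z}, extended additively."""
    return {mu: lam * mu for mu, lam in f.items() if lam * mu != 0}

def mul(f, g):
    """Product in E: exponents add, coefficients multiply."""
    h = defaultdict(complex)
    for mu, lam in f.items():
        for nu, kap in g.items():
            h[mu + nu] += lam * kap
    return {mu: lam for mu, lam in h.items() if lam != 0}

def add(f, g):
    h = defaultdict(complex, f)
    for mu, lam in g.items():
        h[mu] += lam
    return {mu: lam for mu, lam in h.items() if lam != 0}

# Leibniz rule (fg)' = f'g + fg' on a sample pair.
f = {0: 1, 1: -2}          # 1 - 2 e^{z}
g = {2j: 3, -1: 1}         # 3 e^{2iz} + e^{-z}
assert deriv(mul(f, g)) == add(mul(deriv(f), g), mul(f, deriv(g)))
```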

In his paper [10], van den Dries introduced a degree for exponential polynomials with many iterations of exponentiation, which is an ordinal and is denoted by ord. We can give the definition for our polynomials. More precisely, let f be the following:

$$\begin{aligned}f(z) = \alpha _1e^{\beta _1 z} + \cdots + \alpha _N e^{\beta _N z};\end{aligned}$$

we can define

$$\begin{aligned}Ord(f) = \left\{ \begin{array}{ll} N \cdot \omega &{} \text{if } \beta _i \not = 0 \text{ for all } i,\\ (N - 1) \cdot \omega &{} \text{if there exists } i \text{ such that } \beta _i = 0. \end{array} \right. \end{aligned}$$

Claim: If a nonzero ideal I of \(\mathcal {E}\) is closed under derivation, then I is trivial, i.e. \(I = \mathcal {E}\) (see [6, 10]).

Proof

Suppose that I is closed under derivation and \(I \not = (0)\). Take an element \(g \in I\) of minimal order. Since I is closed under derivation, \(g' \in I\); moreover, \(g \cdot q \in I\) for any invertible element q of \(\mathcal {E}\). By a result of van den Dries, either \(ord(g') < ord(g)\), or there exists an invertible element \(q \in \mathcal {E}\) such that \(ord(q \cdot g) < ord(g)\); in both cases we get a contradiction. So \(I = \mathcal {E}\). \(\square \)

Definition 4.1

Let \(f(z) \in {\mathcal {E}}\) and let c be a root of f. We say that c has multiplicity k if \(f'(c) = \cdots = f^{(k)}(c) = 0\) and \(f^{(k+1)}(c) \not = 0\).
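Under Definition 4.1, the multiplicity of a root can be computed by evaluating successive derivatives. The following Python sketch (ours, with a hypothetical example) does this for \(f(z) = (1-e^z)^2 = 1 - 2e^z + e^{2z}\), which has \(N = 3\) summands and a root of multiplicity \(1 = N - 2\) at 0:

```python
import cmath

def eval_deriv(f, j, c):
    """j-th derivative of f = sum lam * e^{mu z} (given as a list of (lam, mu)) at c."""
    return sum(lam * mu**j * cmath.exp(mu * c) for lam, mu in f)

def multiplicity(f, c, max_k=10):
    """Multiplicity of the root c in the sense of Definition 4.1:
    largest k with f'(c) = ... = f^{(k)}(c) = 0 (k = 0 for a simple root)."""
    assert abs(eval_deriv(f, 0, c)) < 1e-9      # c must be a root of f
    k = 0
    while k + 1 <= max_k and abs(eval_deriv(f, k + 1, c)) < 1e-9:
        k += 1
    return k

# f(z) = (1 - e^z)^2 = 1 - 2 e^z + e^{2z}: N = 3 summands, root at 0.
f = [(1, 0), (-2, 1), (1, 2)]
assert multiplicity(f, 0) == 1      # equals N - 2, the bound of Proposition 4.2
```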

Proposition 4.2

Let \(f(z) \in \mathcal {E}\) be of order \(N \cdot \omega \); then the multiplicity of any root of f(z) is at most \(N-2\).

Proof

Suppose that \(f(z) = \alpha _1e^{\beta _1 z} + \cdots + \alpha _N e^{\beta _N z}\) and, without loss of generality, that \(f(0) = 0\). Suppose by contradiction that the multiplicity of 0 is \(N-1\). Setting \(Y_1 = e^{\beta _1 z}, \ldots , Y_N = e^{\beta _N z}\), we have the following system:

$$\begin{aligned} \left\{ \begin{array}{lll} \alpha _1Y_1 + \cdots + \alpha _NY_N&{} = 0 &{}\\ \alpha _1\beta _1Y_1+ \cdots + \alpha _N \beta _NY_N &{} = 0 &{}\\ \ldots &{} = 0 &{}\\ \alpha _1\beta _1^{N-1}Y_1+ \cdots + \alpha _N \beta _N^{N-1}Y_N &{} = 0 &{} \end{array} \right. \end{aligned}$$
(4)

In matrix notation we can write (4) as follows:

$$\begin{aligned}\left( \begin{array}{llll} \alpha _{1} &{} \alpha _{2} &{}\ldots &{}\alpha _N\\ \alpha _{1} \beta _1&{} \alpha _{2} \beta _2 &{}\ldots &{}\alpha _N \beta _N\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \alpha _{1} \beta _1^{N-1}&{} \alpha _{2} \beta _2^{N-1} &{}\ldots &{}\alpha _N \beta _N^{N-1} \end{array}\right) \left( \begin{array}{l} Y_1\\ Y_2\\ \vdots \\ Y_N \end{array}\right) = \left( \begin{array}{l} 0\\ 0\\ \vdots \\ 0 \\ \end{array}\right) .\end{aligned}$$

The system (4) has \((1, \ldots , 1)\) as a nonzero solution (evaluating the \(Y_i\) at \(z = 0\)), therefore the associated matrix has zero determinant:

$$\begin{aligned}\left| \begin{array}{llcl} \alpha _{1} &{} \alpha _{2} &{}\ldots &{}\alpha _N\\ \alpha _{1} \beta _1&{} \alpha _{2} \beta _2 &{}\ldots &{}\alpha _N \beta _N\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \alpha _{1} \beta _1^{N-1}&{} \alpha _{2} \beta _2^{N-1} &{}\ldots &{}\alpha _N \beta _N^{N-1} \end{array}\right| = 0,\end{aligned}$$

that is

$$\begin{aligned}(\alpha _1 \cdot \ldots \cdot \alpha _N) \cdot \left| \begin{array}{llcl} 1 &{} 1 &{} \cdots &{} 1\\ \beta _{1}&{} \beta _{2} &{} \ldots &{} \beta _{N}\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \beta _{1}^{N-1} &{} \beta _{2}^{N-1} &{} \ldots &{} \beta _{N}^{N-1} \end{array}\right| = 0.\end{aligned}$$

The last matrix is a Vandermonde matrix, whose determinant is nonzero since the \(\beta _i\) are pairwise distinct; therefore \(\alpha _1 \cdot \ldots \cdot \alpha _N = 0\), which is a contradiction.

\(\square \)
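The key fact used above, that a Vandermonde determinant with pairwise distinct nodes is nonzero, can be checked exactly with rational arithmetic (a sketch of ours; the \(\beta _i\) values are arbitrary distinct rationals):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        # sign of the permutation via its inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        term = Fraction(-1) ** inv
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

# Vandermonde matrix with rows (beta_1^i, ..., beta_N^i): its determinant
# equals prod_{i<j} (beta_j - beta_i), hence is nonzero for distinct beta_i.
betas = [Fraction(1), Fraction(2), Fraction(-3), Fraction(1, 2)]
V = [[b ** i for b in betas] for i in range(len(betas))]
expected = Fraction(1)
for i in range(len(betas)):
    for j in range(i + 1, len(betas)):
        expected *= betas[j] - betas[i]
assert det(V) == expected and expected != 0
```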

5 Conflict of interest and data availability statement

We declare that there are no conflicts of interest. Moreover, data sharing is not applicable to this paper as no new data were created or analyzed in this study.