1 Introduction

A linear recurrence sequence is a sequence \( (G_n)_{n=0}^{\infty } \) given by a recursive formula of the shape \( G_{n+k} = c_{k-1} G_{n+k-1} + \cdots + c_0 G_n \) together with the k initial values \( G_0,\ldots ,G_{k-1} .\) This definition makes sense over any field K. It is well known that any linear recurrence sequence has an explicit representation of the form

$$\begin{aligned} G_n = b_1(n) \beta _1^n + \cdots + b_r(n) \beta _r ^n \end{aligned}$$
(1)

with polynomials \( b_1,\ldots ,b_r \) having coefficients in \( K(\beta _1,\ldots ,\beta _r) \) and elements \( \beta _1,\ldots ,\beta _r \) which are algebraic over K; this representation is the so-called Binet formula. It is also well known (see [2] for references and a proof) that if \( G_n \) takes values in a number field, then under natural and non-restrictive conditions, i.e., \( \beta _1,\ldots ,\beta _r \) are algebraic integers, no ratio \( \beta _i / \beta _j \) for \( i \ne j \) is a root of unity, and \( \max _{i=1,\ldots ,r} \left| \beta _i \right| > 1 ,\) for large enough n,  we have

$$\begin{aligned} \left| G_n \right| \ge \left( \max _{i=1,\ldots ,r} \left| \beta _i \right| \right) ^{n(1-\varepsilon )} \end{aligned}$$
(2)

for any fixed \( \varepsilon > 0 .\) An analogous result holds true if \( G_n \) takes values in a function field in one variable over \( \mathbb {C},\) as shown in [2].
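For instance, the Fibonacci sequence given by \( F_{n+2} = F_{n+1} + F_n \) and \( F_0 = 0, F_1 = 1 \) has the representation

$$\begin{aligned} F_n = \frac{1}{\sqrt{5}} \left( \frac{1+\sqrt{5}}{2} \right) ^n - \frac{1}{\sqrt{5}} \left( \frac{1-\sqrt{5}}{2} \right) ^n , \end{aligned}$$

so here \( r = 2 ,\) \( b_1 = 1/\sqrt{5} ,\) \( b_2 = -1/\sqrt{5} ,\) \( \beta _1 = (1+\sqrt{5})/2 \) and \( \beta _2 = (1-\sqrt{5})/2 ;\) the conditions before (2) are satisfied and the bound reads \( F_n \ge \beta _1^{n(1-\varepsilon )} \) for large enough n.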

There is a natural generalization of linear recurrence sequences. If we allow more than one parameter, we can generalize (1) to

$$\begin{aligned} G(n_1,\ldots ,n_s) = \sum _{i=1}^{k} f_i(n_1,\ldots ,n_s) \alpha _{i1}^{n_1} \cdots \alpha _{is}^{n_s} \end{aligned}$$

where s and k are positive integers, \( f_1,\ldots ,f_k \) are polynomials in s variables, and \( n_1,\ldots ,n_s \) are non-negative integers. Such polynomial-exponential functions \( G : \mathbb {N}_0^s \rightarrow K \) are called multi-recurrences, where we have denoted the set of non-negative integers by \( \mathbb {N}_0 .\) We say that G is defined over a field K if the coefficients of \( f_1,\ldots ,f_k \) and the bases \( \alpha _{i1}, \ldots , \alpha _{is} \) for \( i = 1,\ldots ,k \) are in K. If G is defined over K,  then it takes values in K. For more information about recurrence sequences, we refer to [4]. Van der Poorten and Schlickewei claimed in [3] a bound similar to (2) for multi-recurrences defined over number fields. The purpose of the present paper is to provide a proof of that bound. We will do this in the same way as in [2] for the case of linear recurrence sequences and use the same auxiliary result due to Evertse [1], which is cited as Theorem 5 below.
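A simple example with \( s = 2 \) and \( k = 2 ,\) defined over \( \mathbb {Q},\) is

$$\begin{aligned} G(n_1,n_2) = (n_1 + n_2)\, 2^{n_1} 3^{n_2} + 5^{n_1} 7^{n_2} ; \end{aligned}$$

for \( s = 1 ,\) one recovers exactly the shape of the representation (1) of a linear recurrence sequence.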

2 Notation and result

In the sequel, we shall use the abbreviation

$$\begin{aligned} f_i(\mathbf {n}) \varvec{\alpha }_i^{\mathbf {n}} \end{aligned}$$

for the expression

$$\begin{aligned} f_i(n_1,\ldots ,n_s) \alpha _{i1}^{n_1} \cdots \alpha _{is}^{n_s}, \end{aligned}$$

i.e., we will indicate by boldface letters that the considered object is a vector as opposed to a single number. Moreover, for a vector \( \mathbf {n}\in \mathbb {Z}^s ,\) we consider its norm

$$\begin{aligned} \left| \mathbf {n} \right| = \left| n_1 \right| + \cdots + \left| n_s \right| . \end{aligned}$$
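For instance, for \( s = 2 \) and \( \mathbf {n}= (3,2) ,\) the abbreviation \( f_i(\mathbf {n}) \varvec{\alpha }_i^{\mathbf {n}} \) stands for \( f_i(3,2) \alpha _{i1}^{3} \alpha _{i2}^{2} ,\) and \( \left| \mathbf {n} \right| = 5 .\)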

In what follows, we are interested in multi-recurrences G as defined above. Our main result is the following theorem:

Theorem 1

Let K be a number field and s a positive integer. Consider the polynomial-exponential function

$$\begin{aligned} G(\mathbf {n}) = \sum _{i=1}^{k} f_i(\mathbf {n}) \varvec{\alpha }_i^{\mathbf {n}} \end{aligned}$$

with non-zero algebraic integers \( \alpha _{ij} \in K \) for \( i=1,\ldots ,k \) and \( j=1,\ldots ,s ,\) and polynomials \( f_i(X_1,\ldots ,X_s) \in K[X_1,\ldots ,X_s] \) for \( i=1,\ldots ,k .\) Fix \( \varepsilon > 0 .\) Assume that there is an index \( i_0 ,\) \( 1 \le i_0 \le k ,\) such that there is no subset \( I \subseteq \left\{ 1,\ldots ,k \right\} \) with \( i_0 \in I \) and

$$\begin{aligned} \sum _{i \in I} f_i(\mathbf {n}) \varvec{\alpha }_i^{\mathbf {n}} = 0. \end{aligned}$$

Then,  for \( \left| \mathbf {n} \right| \) large enough,  we have

$$\begin{aligned} \left| G(\mathbf {n}) \right| \ge \left| f_{i_0}(\mathbf {n}) \varvec{\alpha }_{i_0}^{\mathbf {n}} \right| e^{-\varepsilon \left| \mathbf {n} \right| }. \end{aligned}$$
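To make the statement concrete, the following small numerical sketch compares both sides of the above inequality for a toy multi-recurrence over \( \mathbb {Q}\) (the function, the choice \( i_0 = 1 \) and the value of \( \varepsilon \) are picked purely for illustration and are not taken from the results above).

```python
# Numerical illustration of Theorem 1 (not part of the proof).  Toy example:
#   G(n1, n2) = 3 * 2^n1 * 3^n2 - 5^n2   over K = Q,
# i.e. f_1 = 3, (alpha_11, alpha_12) = (2, 3) and f_2 = -1, (alpha_21, alpha_22) = (1, 5).
# With i_0 = 1 no subsum containing the first term vanishes, since
# 3 * 2^n1 * 3^n2 is divisible by 3 while 5^n2 is not.

from math import exp

EPS = 0.3  # an arbitrarily chosen epsilon for this sketch


def first_term(n1: int, n2: int) -> int:
    """The term f_1(n) * alpha_1^n = 3 * 2^n1 * 3^n2."""
    return 3 * 2**n1 * 3**n2


def G(n1: int, n2: int) -> int:
    """The toy multi-recurrence G(n1, n2)."""
    return first_term(n1, n2) - 5**n2


# Theorem 1 only asserts the inequality for |n| = n1 + n2 large enough, so any
# violations reported below are expected to be confined to small |n|.
for n1 in range(30):
    for n2 in range(30):
        lhs = abs(G(n1, n2))
        rhs = first_term(n1, n2) * exp(-EPS * (n1 + n2))
        if lhs < rhs:
            print(f"bound violated at n = ({n1}, {n2}), |n| = {n1 + n2}")
```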

Remark 2

The condition concerning \( i_0 \) in the above theorem is indeed necessary and was already stated in [3]: the size of \( G(\mathbf {n}) \) cannot be bounded from below by a term belonging to a vanishing subsum.
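For instance, for

$$\begin{aligned} G(n_1,n_2) = 2^{n_1} 3^{n_2} - 2^{n_1} 3^{n_2} + 1 \end{aligned}$$

and \( i_0 = 1 ,\) the subsum over \( I = \left\{ 1,2 \right\} \) vanishes identically, and \( \left| G(\mathbf {n}) \right| = 1 \) cannot satisfy the conclusion of Theorem 1 for any \( 0< \varepsilon < \log 2 ,\) since \( \left| 2^{n_1} 3^{n_2} \right| e^{-\varepsilon \left| \mathbf {n} \right| } \) is unbounded along \( \mathbf {n}= (n_1,0) .\)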

Remark 3

We emphasize that the statement of Theorem 1 also holds, with a completely analogous proof, when the standard absolute value in the proven lower bound is replaced by any other valuation \( \left| \cdot \right| _{\mu } \) on K.

Remark 4

The bound in Theorem 1 holds for all \( \mathbf {n}\) with \( \left| \mathbf {n} \right| \ge B .\) Unfortunately, this lower bound B cannot be given explicitly since it depends, among other things, on the ineffective constant given by Theorem 5 below. More precisely, it is influenced by a threshold beyond which an exponential function becomes larger than a polynomial function having ineffective coefficients.

We are only able to prove the result for number fields. To the knowledge of the authors, an analogous result in the function field case, i.e., a version of [2, Theorem 2.1] for multi-recurrences, has not yet been found and proven. We leave this as an open question.

3 Preliminaries

In our proof, we will need the following result of Evertse. The reader will find it as [1, Theorem 2]. We use the notation

$$\begin{aligned} \left\Vert \mathbf {x} \right\Vert := \max _{\begin{array}{c} k=0,\ldots ,t \\ i=1,\ldots ,D \end{array}} \left| \sigma _i(x_k) \right| \end{aligned}$$

with \( \left\{ \sigma _1, \ldots , \sigma _D \right\} \) the set of all embeddings of K into \( \mathbb {C}\) and \( \mathbf {x}= (x_0,x_1,\ldots ,x_t) .\) Moreover, we denote by \( \mathcal {O}_K \) the ring of integers in the number field K and by \( M_K \) the set of places of the number field K.
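For instance, for \( K = \mathbb {Q}(\sqrt{2}) \) (so \( D = 2 ,\) with \( \sigma _1 \) the identity and \( \sigma _2 \) determined by \( \sigma _2(\sqrt{2}) = -\sqrt{2} \)), \( t = 1 \) and \( \mathbf {x}= (1+\sqrt{2},\, 3) ,\) we get

$$\begin{aligned} \left\Vert \mathbf {x} \right\Vert = \max \left( \left| 1+\sqrt{2} \right| , \left| 1-\sqrt{2} \right| , 3 \right) = 3. \end{aligned}$$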

Theorem 5

Let t be a non-negative integer and S a finite set of places in K,  containing all infinite places. Then for every \( \varepsilon > 0 ,\) a constant C exists,  depending only on \( \varepsilon , S, K, t ,\) such that for each non-empty subset T of S and every vector \( \mathbf {x}= (x_0,x_1,\ldots ,x_t) \in \mathcal {O}_K^{t+1} \) with

$$\begin{aligned} x_{i_0} + x_{i_1} + \cdots + x_{i_s} \ne 0, \end{aligned}$$

for each non-empty subset \( \left\{ i_0,i_1,\ldots ,i_s \right\} \) of \( \left\{ 0,1,\ldots ,t \right\} ,\) the inequality

$$\begin{aligned} \left( \prod _{k=0}^{t} \prod _{\nu \in S} \left| x_k \right| _{\nu } \right) \prod _{\nu \in T} \left| x_0 + x_1 + \cdots + x_t \right| _{\nu } \ge C \left( \prod _{\nu \in T} \max _{k=0,\ldots ,t} \left| x_k \right| _{\nu } \right) \left\Vert \mathbf {x} \right\Vert ^{-\varepsilon } \end{aligned}$$

is valid.

Moreover, the next lemma is used in the proof of our theorem. It is the analogue of [1, Lemma 1] for vectors.

Lemma 6

Let K be a number field of degree D,  let \( f(X_1,\ldots ,X_s) \in K[X_1,\ldots ,X_s] \) be a polynomial of absolute degree m and let T be a non-empty set of places in K. Then there exists a positive constant c,  depending only on K and f,  such that for all \( \mathbf {n}\in \mathbb {Z}^s \) with \( \mathbf {n}\ne 0 \) and \( f(\mathbf {n}) \ne 0 ,\) it holds that

$$\begin{aligned} \prod _{\nu \in T} \left| f(\mathbf {n}) \right| _{\nu } \le \prod _{\nu \in M_K} \max \left( 1, \left| f(\mathbf {n}) \right| _{\nu } \right) \le c \left| \mathbf {n} \right| ^{Dm}. \end{aligned}$$

Proof

Obviously we have \( \left| f(\mathbf {n}) \right| _{\nu } \le \max \left( 1, \left| f(\mathbf {n}) \right| _{\nu } \right) .\) Thus the first inequality

$$\begin{aligned} \prod _{\nu \in T} \left| f(\mathbf {n}) \right| _{\nu } \le \prod _{\nu \in M_K} \max \left( 1, \left| f(\mathbf {n}) \right| _{\nu } \right) \end{aligned}$$

is trivial. Note that for each \( \mathbf {n},\) there are only finitely many places \( \nu \) with \( \left| f(\mathbf {n}) \right| _{\nu } \ne 1 \) and therefore the products are finite.

There are at most D infinite places. Moreover, bounding each monomial of f at an infinite place \( \nu \) by the absolute value of its (embedded) coefficient times \( \left| \mathbf {n} \right| ^m \) shows that there is a positive constant \( c_1 \) such that

$$\begin{aligned} \left| f(\mathbf {n}) \right| _{\nu } \le c_1 \left| \mathbf {n} \right| ^m \end{aligned}$$

holds for all infinite places \( \nu .\) Moreover, for all but finitely many finite places \( \nu ,\) we have

$$\begin{aligned} \left| f(\mathbf {n}) \right| _{\nu } \le 1. \end{aligned}$$

These finitely many places depend only on (the denominators of) the coefficients of f and are independent of \( \mathbf {n}.\) For the finitely many remaining finite places \( \nu ,\) there is a positive constant \( c_2 ,\) independent of \( \mathbf {n},\) such that

$$\begin{aligned} \left| f(\mathbf {n}) \right| _{\nu } \le c_2. \end{aligned}$$

We may assume that \( c_1 > 1 \) and \( c_2 > 1 .\) Putting things together, we get

$$\begin{aligned} \prod _{\nu \in M_K} \max \left( 1, \left| f(\mathbf {n}) \right| _{\nu } \right) \le c \left| \mathbf {n} \right| ^{Dm} \end{aligned}$$

for a new constant c. \(\square \)
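To illustrate the shape of the bound in Lemma 6 in the simplest case, take \( K = \mathbb {Q}\) (so \( D = 1 \)), \( s = 1 \) and \( f(X_1) = X_1/2 \) (so \( m = 1 \)). For \( n \in \mathbb {Z}\) with \( n \ne 0 ,\) the only finite place that can contribute a factor larger than 1 is the one corresponding to the prime 2, and a direct computation gives

$$\begin{aligned} \prod _{\nu \in M_{\mathbb {Q}}} \max \left( 1, \left| n/2 \right| _{\nu } \right) \le 2 \left| n \right| , \end{aligned}$$

in accordance with \( c \left| n \right| ^{Dm} \) for \( c = 2 .\)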

4 Proof of Theorem 1

Before proving Theorem 1, let us mention that the proof follows the same strategy as, and is the multi-recurrence version of, the proof of the corresponding result for linear recurrences given by the authors in the appendix of [2].

Proof of Theorem 1

Since the bases \( \alpha _{ij} \) of the exponential parts of G are algebraic integers, we can find a non-zero integer z such that \( z f_i(\mathbf {n}) \varvec{\alpha }_i^{\mathbf {n}} \) is an algebraic integer for all \( i=1,\ldots ,k \) and all non-negative integers \( n_1,\ldots ,n_s .\) Choose S as a finite set of places in K containing all infinite places as well as all finite places \( \nu \) with \( \left| \alpha _{ij} \right| _{\nu } \ne 1 \) for some \( i \in \left\{ 1,\ldots ,k \right\} \) and \( j \in \left\{ 1,\ldots ,s \right\} ;\) then all \( \alpha _{ij} \) are S-units. Let \( \mu \) be such that \( \left| \cdot \right| _{\mu } = \left| \cdot \right| \) is the usual absolute value on \( \mathbb {C}.\) In particular, we have \( \mu \in S .\) Further define \( T = \left\{ \mu \right\} .\)
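For instance, if \( K = \mathbb {Q},\) \( k = 1 ,\) \( \varvec{\alpha }_1 = (2,3) \) and \( f_1(X_1,X_2) = X_1/6 ,\) then \( z = 6 \) is a possible choice, and S has to contain the infinite place as well as the places corresponding to the primes 2 and 3.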

To simplify the notation, we may assume that the index \( i_0 \) from the theorem equals 1. By renumbering summands, we can assume that

$$\begin{aligned} G(\mathbf {n}) = \sum _{i=1}^{\ell } f_i(\mathbf {n}) \varvec{\alpha }_i^{\mathbf {n}} \end{aligned}$$

for an integer \( \ell \) with \( 1 \le \ell \le k \) has no vanishing subsum. Indeed, there are only finitely many possible subsums with this property and we can perform the following steps for each of them analogously. At the end of the proof, we can put the cases together by choosing the largest occurring bound for \( \left| \mathbf {n} \right| .\)

Thus we can apply Theorem 5 and get

$$\begin{aligned} \left( \prod _{i=1}^{\ell } \prod _{\nu \in S} \left| z f_i(\mathbf {n}) \varvec{\alpha }_i^{\mathbf {n}} \right| _{\nu } \right) \left| zG(\mathbf {n}) \right| \ge C \max _{i=1,\ldots ,\ell } \left| z f_i(\mathbf {n}) \varvec{\alpha }_i^{\mathbf {n}} \right| \left\Vert z\mathbf {x} \right\Vert ^{-\varepsilon '} \end{aligned}$$

for \( \mathbf {x}= \left( f_1(\mathbf {n}) \varvec{\alpha }_1^{\mathbf {n}}, \ldots , f_{\ell }(\mathbf {n}) \varvec{\alpha }_{\ell }^{\mathbf {n}} \right) \) and an \( \varepsilon ' \) to be fixed later. Using that z is a fixed integer and that the \( \alpha _{ij} \) are S-units, we get

$$\begin{aligned} \left( \prod _{i=1}^{\ell } \prod _{\nu \in S} \left| f_i(\mathbf {n}) \right| _{\nu } \right) \left| G(\mathbf {n}) \right| \ge C_1 \left| f_1(\mathbf {n}) \varvec{\alpha }_1^{\mathbf {n}} \right| \left\Vert \mathbf {x} \right\Vert ^{-\varepsilon '}. \end{aligned}$$
(3)

Let m denote the maximum of the absolute degrees of the polynomials \( f_1,\ldots ,f_k .\) Then there exists a constant \( A > 1 ,\) which is independent of \( \mathbf {n}\) and \( \varepsilon ' \) (and which can be chosen uniformly for all of the finitely many cases mentioned in the second paragraph of this proof), satisfying

$$\begin{aligned} \left\Vert \mathbf {x} \right\Vert&= \max _{\begin{array}{c} i=1,\ldots ,\ell \\ t=1,\ldots ,D \end{array}} \left| \sigma _t \left( f_i(\mathbf {n}) \varvec{\alpha }_i^{\mathbf {n}} \right) \right| \\&\le \max _{\begin{array}{c} i=1,\ldots ,\ell \\ t=1,\ldots ,D \end{array}} \left| \sigma _t \left( f_i(\mathbf {n}) \right) \right| \cdot \max _{\begin{array}{c} i=1,\ldots ,\ell \\ t=1,\ldots ,D \end{array}} \left| \sigma _t \left( \varvec{\alpha }_i^{\mathbf {n}} \right) \right| \\&\le C_2 \left| \mathbf {n} \right| ^m \prod _{j=1}^{s} \max _{\begin{array}{c} i=1,\ldots ,\ell \\ t=1,\ldots ,D \end{array}} \left| \sigma _t \left( \alpha _{ij}^{n_j} \right) \right| \\&\le C_2 \left| \mathbf {n} \right| ^m A^{\left| \mathbf {n} \right| }. \end{aligned}$$

Inserting this upper bound into inequality (3) yields

$$\begin{aligned} \left( \prod _{i=1}^{\ell } \prod _{\nu \in S} \left| f_i(\mathbf {n}) \right| _{\nu } \right) \left| G(\mathbf {n}) \right| \ge \left| f_1(\mathbf {n}) \varvec{\alpha }_1^{\mathbf {n}} \right| C_3 \left| \mathbf {n} \right| ^{-m\varepsilon '} A^{-\left| \mathbf {n} \right| \varepsilon '}. \end{aligned}$$
(4)

Now we apply Lemma 6 to the double product in the last displayed inequality. This gives us

$$\begin{aligned} \prod _{i=1}^{\ell } \prod _{\nu \in S} \left| f_i(\mathbf {n}) \right| _{\nu } \le \prod _{i=1}^{\ell } C_4^{(i)} \left| \mathbf {n} \right| ^{Dm} \le C_5 \left| \mathbf {n} \right| ^{Dm\ell } \end{aligned}$$

and together with inequality (4) the lower bound

$$\begin{aligned} \left| G(\mathbf {n}) \right|&\ge \left| f_1(\mathbf {n}) \varvec{\alpha }_1^{\mathbf {n}} \right| C_6 \left| \mathbf {n} \right| ^{-Dm\ell -m\varepsilon '} A^{-\left| \mathbf {n} \right| \varepsilon '} \\&\ge \left| f_1(\mathbf {n}) \varvec{\alpha }_1^{\mathbf {n}} \right| A^{-2\varepsilon ' \left| \mathbf {n} \right| } \end{aligned}$$

where the last inequality holds for \( \left| \mathbf {n} \right| \) large enough. Thus, choosing \( \varepsilon ' \) such that \( 2\varepsilon ' \log (A) = \varepsilon ,\) we end up with

$$\begin{aligned} \left| G(\mathbf {n}) \right| \ge \left| f_1(\mathbf {n}) \varvec{\alpha }_1^{\mathbf {n}} \right| e^{-2\varepsilon ' \log (A) \left| \mathbf {n} \right| } = \left| f_1(\mathbf {n}) \varvec{\alpha }_1^{\mathbf {n}} \right| e^{-\varepsilon \left| \mathbf {n} \right| } \end{aligned}$$

and the theorem is proven. \(\square \)