On the size of a linear combination of two linear recurrence sequences over function fields

Let $G_n$ and $H_m$ be two non-degenerate linear recurrence sequences defined over a function field $F$ in one variable over $\mathbb{C}$, and let $\mu$ be a valuation on $F$. We prove that under suitable conditions there are effectively computable constants $c_1$ and $C'$ such that the bound
$$\mu(G_n - H_m) \le \mu(G_n) + C'$$
holds for $\max(n, m) > c_1$.


Introduction
Linear recurrence sequences have been studied by many authors, both in the past and in current research. Here, by a linear recurrence sequence we mean a polynomial-exponential function from the set $\mathbb{N}_0$ of non-negative integers into a given field $F$, of the form
$$G_n = a_1(n)\,\alpha_1^n + \cdots + a_d(n)\,\alpha_d^n,$$
where the $\alpha_i$ are called the characteristic roots of the linear recurrence sequence and the coefficients $a_i(n)$ are polynomials in $n$. It is well known that such a sequence satisfies a linear recurrence relation. We say that the sequence $(G_n)_{n \in \mathbb{N}_0}$ is defined over the field $F$ if all characteristic roots $\alpha_i$ as well as all coefficients of all polynomials $a_i(n)$ belong to $F$. The recurrence sequence is called non-degenerate if no ratio $\alpha_i/\alpha_j$ of two distinct characteristic roots ($i \ne j$) is a root of unity in the case that $F$ is a number field, and if no such ratio is contained in the field of constants in the case that $F$ is a function field in one variable over $\mathbb{C}$.
In [2] the author, together with Fuchs, gave a bound on the size of the $n$-th element of such a linear recurrence sequence defined over a function field; see Proposition 7 below. In the appendix of [2] they also provide a proof of a well-known bound on the growth of $G_n$ in the case that $F$ is a number field.
Recently, Pethő [6] considered the size of the difference of two linear recurrence sequences over number fields. More precisely, it is proven there that for two recurrences $A_n$ and $B_m$ taking only integer values, under some technical conditions ($A_n$ has a dominant root, i.e. there is a unique characteristic root $\alpha$ with maximal absolute value, $B_m$ has a pair of complex conjugate dominating characteristic roots, and some further assumptions), the difference $A_n - B_m$ cannot be much smaller in absolute value than $A_n$ itself.

2020 Mathematics Subject Classification: 11B37. Key words and phrases: linear recurrence sequence, growth. Supported by the Austrian Science Fund (FWF) under project I4406.
The purpose of the present paper is to prove an analogous bound in the setting of function fields in one variable over the field of complex numbers.

Notation and results
Throughout this paper we denote by $F$ a function field in one variable over $\mathbb{C}$ and by $g$ the genus of $F$. For the convenience of the reader we give a short wrap-up of the notion of valuations, which can also be found e.g. in [2, 3]: For $c \in \mathbb{C}$ and $f(x) \in \mathbb{C}(x)^*$, where $\mathbb{C}(x)$ is the rational function field over $\mathbb{C}$, we denote by $\nu_c(f)$ the unique integer such that
$$f(x) = (x - c)^{\nu_c(f)}\,\frac{p(x)}{q(x)}$$
for polynomials $p, q$ with $p(c)\,q(c) \ne 0$. Moreover, for $f = p/q$ with polynomials $p, q$ we set $\nu_\infty(f) = \deg q - \deg p$. Additionally, we set $\nu(0) = \infty$ for each $\nu$ from above. These functions $\nu : \mathbb{C}(x) \to \mathbb{Z} \cup \{\infty\}$ are, up to equivalence, all valuations on $\mathbb{C}(x)$. If $\nu_c(f) > 0$, then $c$ is called a zero of $f$, and if $\nu_c(f) < 0$, then $c$ is called a pole of $f$, where $c \in \mathbb{C} \cup \{\infty\}$. For a finite extension $F$ of $\mathbb{C}(x)$, each valuation on $\mathbb{C}(x)$ can be extended to no more than $[F : \mathbb{C}(x)]$ valuations on $F$. This again gives, up to equivalence, all valuations on $F$. Both in $\mathbb{C}(x)$ and in $F$ the sum formula
$$\sum_{\nu} \nu(f) = 0$$
holds for each nonzero $f$, where the sum is taken over all valuations of the considered function field. Moreover, valuations satisfy $\nu(fg) = \nu(f) + \nu(g)$ and $\nu(f + g) \ge \min(\nu(f), \nu(g))$ for all $f, g \in F$. For more information about valuations we refer to [7].
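To make these definitions concrete, the valuations of a rational function in $\mathbb{C}(x)$ can be computed mechanically from its numerator and denominator. The following sketch is our own illustration, not part of the paper's machinery (all helper names are ours); it evaluates $\nu_c$ and $\nu_\infty$ for $f(x) = (x-1)^2/(x+2)$ and checks the sum formula.

```python
def synthetic_div(p, c):
    """Divide a polynomial p (coefficient list, highest degree first)
    by (x - c) via synthetic division; return (quotient, remainder)."""
    quotient, acc = [], 0
    for a in p:
        acc = acc * c + a
        quotient.append(acc)
    return quotient[:-1], quotient[-1]

def mult_at(p, c):
    """Order of vanishing of the nonzero polynomial p at the point c."""
    order = 0
    while True:
        q, r = synthetic_div(p, c)
        if r != 0:
            return order
        p, order = q, order + 1

def nu(num, den, c):
    """nu_c(f) for f = num/den at a finite point c."""
    return mult_at(num, c) - mult_at(den, c)

def nu_inf(num, den):
    """nu_infinity(f) = deg(den) - deg(num)."""
    return (len(den) - 1) - (len(num) - 1)

# f(x) = (x - 1)^2 / (x + 2)
num, den = [1, -2, 1], [1, 2]
print(nu(num, den, 1))     # 2: double zero at x = 1
print(nu(num, den, -2))    # -1: simple pole at x = -2
print(nu_inf(num, den))    # -1: simple pole at infinity
# sum formula: 2 + (-1) + (-1) = 0
```

All other valuations of $f$ vanish, so the three printed values already exhaust the sum formula.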
For a finite set $S$ of valuations on $F$, we denote by $\mathcal{O}_S^*$ the set of $S$-units in $F$, i.e.
$$\mathcal{O}_S^* = \{ f \in F^* : \nu(f) = 0 \text{ for all } \nu \notin S \}.$$
Lastly, we call two elements $\alpha, \beta \in F^*$ multiplicatively independent if $\alpha^r \beta^s \in \mathbb{C}$ for $r, s \in \mathbb{Z}$ implies $r = s = 0$.
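As a concrete instance of this definition (an example of ours), the elements $\alpha = x$ and $\beta = x + 1$ of $\mathbb{C}(x)$ are multiplicatively independent: if $x^r (x+1)^s \in \mathbb{C}$, then applying the valuations $\nu_0$ and $\nu_{-1}$ gives

```latex
\nu_0\!\left(x^r (x+1)^s\right) = r = 0,
\qquad
\nu_{-1}\!\left(x^r (x+1)^s\right) = s = 0,
```

since every element of $\mathbb{C}^*$ has valuation $0$ at every place. By contrast, $\alpha = x$ and $\beta = x^2$ are multiplicatively dependent, as $\alpha^2 \beta^{-1} = 1 \in \mathbb{C}$.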
Our first result is now the following theorem, which states that there cannot be much cancellation in the expression $aG_n - bH_m$ if both indices are large:

Theorem 1. Let
$$G_n = a_1(n)\,\alpha_1^n + \cdots + a_d(n)\,\alpha_d^n \quad \text{and} \quad H_m = b_1(m)\,\beta_1^m + \cdots + b_t(m)\,\beta_t^m$$
be two non-degenerate linear recurrence sequences defined over $F$. Assume that $\alpha_1 \notin \mathbb{C}$, and that for any $j \in \{1, \ldots, t\}$ the pair $(\alpha_1, \beta_j)$ is multiplicatively independent. Furthermore, let $\mu$ be a valuation on $F$ such that $\mu(\alpha_1) \le \mu(\alpha_i)$ for $i \in \{1, \ldots, d\}$. Fix $a, b \in F^*$. Then there exist effectively computable constants $c_0$ and $C$, independent of $n$ and $m$, such that for $\min(n, m) > c_0$ we have
$$\mu(aG_n - bH_m) \le \mu(G_n) + C.$$

The non-degeneracy condition already implies that there is at most one characteristic root in each of the two linear recurrences which is constant. If we require all characteristic roots to be non-constant, then we can prove a little bit more:

Theorem 2. Let
$$G_n = a_1(n)\,\alpha_1^n + \cdots + a_d(n)\,\alpha_d^n \quad \text{and} \quad H_m = b_1(m)\,\beta_1^m + \cdots + b_t(m)\,\beta_t^m$$
be two non-degenerate linear recurrence sequences defined over $F$. Assume that no $\alpha_i$ and no $\beta_j$ is contained in $\mathbb{C}$, and that for any $j \in \{1, \ldots, t\}$ the pair $(\alpha_1, \beta_j)$ is multiplicatively independent. Furthermore, let $\mu$ be a valuation on $F$ such that $\mu(\alpha_1) \le \mu(\alpha_i)$ for $i \in \{1, \ldots, d\}$. Fix $a, b \in F^*$. Then there exist effectively computable constants $c_1$ and $C'$, independent of $n$ and $m$, such that for $\max(n, m) > c_1$ we have
$$\mu(aG_n - bH_m) \le \mu(G_n) + C'.$$

In the case $\mu(aG_n) \ne \mu(bH_m)$ the inequality directly follows from the strict triangle inequality. Thus the power of the above theorems concentrates on the case $\mu(aG_n) = \mu(bH_m)$. There they give a nontrivial upper bound, whereas in that case only the trivial lower bound $\mu(aG_n - bH_m) \ge \min(\mu(aG_n), \mu(bH_m))$ is available. Rephrased in words, our theorems state that for large indices the recurrence $H_m$ cannot cancel out too much from $G_n$ if at least one "size-determining" root $\alpha_1$ is independent of the roots of $H_m$.
The assumption that $\alpha_1$ is multiplicatively independent of each characteristic root of the second recurrence sequence is needed to avoid situations like $H_m := G_{2m}$, where $G_n - H_m$ is zero for $n = 2m$ arbitrarily large, so that the statement of the theorems cannot hold. That things are different if the two considered linear recurrence sequences are too similar can also be seen in the results of other authors; see e.g. [5]. Let us mention that, as in Corollary 4 in [5], we can deduce here that under the assumptions of Theorem 2 the solutions $(n, m)$ of $aG_n = bH_m$ are bounded effectively from above.
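The obstruction in the example $H_m := G_{2m}$ can be made explicit for a single-root recurrence (a sketch of ours): if $G_n = a\,\alpha^n$ with $\alpha = \alpha_1$, then

```latex
H_m := G_{2m} = a \left( \alpha^2 \right)^m,
\qquad \beta_1 = \alpha^2,
\qquad \alpha_1^{2}\,\beta_1^{-1} = 1 \in \mathbb{C},
```

so $(\alpha_1, \beta_1)$ is multiplicatively dependent via $(r, s) = (2, -1)$, and indeed $G_n - H_m$ vanishes whenever $n = 2m$.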
From Theorem 1 to Theorem 2 we extended the range in which the bound for the valuation holds, from $\min(n, m) > c_0$ to $\max(n, m) > c_1$, at the cost of slightly stronger assumptions. The restriction $\max(n, m) > c_1$ cannot be removed completely. Indeed, there may be sporadic solutions of $aG_n - bH_m = 0$, whence $\mu(aG_n - bH_m) = \infty$ is possible for small indices.
To illustrate the result, we formulate the following corollary, which follows immediately from Theorem 1 by choosing $\mu = \nu_\infty$ for the function field $\mathbb{C}(x)$. An analogous corollary can be formulated for Theorem 2.
Corollary 3. Let
$$G_n = a_1(n)\,\alpha_1^n + \cdots + a_d(n)\,\alpha_d^n \quad \text{and} \quad H_m = b_1(m)\,\beta_1^m + \cdots + b_t(m)\,\beta_t^m$$
be two non-degenerate linear recurrence sequences of polynomials in $\mathbb{C}[x]$ where all the characteristic roots are polynomials as well. Assume that $\alpha_1 \notin \mathbb{C}$, and that for any $j \in \{1, \ldots, t\}$ the pair $(\alpha_1, \beta_j)$ is multiplicatively independent. Furthermore, assume that $\deg \alpha_1 \ge \deg \alpha_i$ for $i \in \{1, \ldots, d\}$, and fix $a, b \in \mathbb{C}[x] \setminus \{0\}$. Then there exist effectively computable constants $c_0$ and $C$, independent of $n$ and $m$, such that for $\min(n, m) > c_0$ we have
$$\deg(aG_n - bH_m) \ge \deg(aG_n) - C.$$
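A toy instance of the corollary (our own numerical check, not from the paper): take $G_n = x^n$ and $H_m = (x+1)^m$, so $\alpha_1 = x$ and $\beta_1 = x + 1$ are multiplicatively independent and of equal degree. The sketch below verifies that $\deg(G_n - H_m) \ge \deg G_n - 1$ for small indices, i.e. $C = 1$ suffices here; the only cancellation of leading terms occurs for $n = m$.

```python
from math import comb

def poly_sub(p, q):
    """Subtract sparse polynomials given as {degree: coefficient} dicts."""
    r = dict(p)
    for d, a in q.items():
        r[d] = r.get(d, 0) - a
        if r[d] == 0:
            del r[d]
    return r

def deg(p):
    """Degree of a sparse polynomial; -inf for the zero polynomial."""
    return max(p) if p else float("-inf")

def G(n):
    return {n: 1}                                 # G_n = x^n

def H(m):
    return {k: comb(m, k) for k in range(m + 1)}  # H_m = (x + 1)^m

for n in range(1, 8):
    for m in range(1, 8):
        assert deg(poly_sub(G(n), H(m))) >= n - 1

print("deg(G_n - H_m) >= deg(G_n) - 1 for all 1 <= n, m <= 7")
```

For $n = m$ the leading terms $x^n$ cancel and the difference has degree exactly $n - 1$, so the constant $C = 1$ is sharp for this pair.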

Preliminaries
In the next section we will make use of height functions in function fields. Let us therefore define the height of an element $f \in F^*$ by
$$\mathcal{H}(f) = -\sum_{\nu} \min(0, \nu(f)),$$
where the sum is taken over all valuations of the function field $F/\mathbb{C}$. Additionally, we define $\mathcal{H}(0) = \infty$. This height function satisfies some basic properties, which are listed in the lemma below and proven in [4]:

Lemma 4. Denote as above by $\mathcal{H}$ the height on $F/\mathbb{C}$. Then for $f, g \in F^*$ the following properties hold:
(a) $\mathcal{H}(f) \ge 0$ and $\mathcal{H}(f) = \mathcal{H}(1/f)$;
(b) $\mathcal{H}(f) = 0$ if and only if $f \in \mathbb{C}^*$;
(c) $\mathcal{H}(fg) \le \mathcal{H}(f) + \mathcal{H}(g)$;
(d) $\mathcal{H}(f + g) \le \mathcal{H}(f) + \mathcal{H}(g)$;
(e) $\mathcal{H}(f^n) = |n|\,\mathcal{H}(f)$ for $n \in \mathbb{Z}$.

Moreover, the following result due to Brownawell and Masser will be used when proving our statements. It is an immediate consequence of Theorem B in [1]:

Proposition 5 (Brownawell–Masser). Let $F/\mathbb{C}$ be a function field in one variable of genus $g$. Moreover, for a finite set $S$ of valuations, let $u_1, \ldots, u_k$ be $S$-units and
$$1 + u_1 + \cdots + u_k = 0,$$
where no proper subsum of the left hand side vanishes. Then we have
$$\max_{i = 1, \ldots, k} \mathcal{H}(u_i) \le \frac{k(k-1)}{2} \left( |S| + 2g - 2 \right).$$

Furthermore, we will use the following function field analogue of the Schmidt subspace theorem. A proof can be found in [8]:

Proposition 6 (Zannier). Let $F/\mathbb{C}$ be a function field in one variable of genus $g$, let $\varphi_1, \ldots, \varphi_n \in F$ be linearly independent over $\mathbb{C}$ and let $r \in \{0, 1, \ldots, n\}$. Let $S$ be a finite set of places of $F$ containing all the poles of $\varphi_1, \ldots, \varphi_n$ and all the zeros of $\varphi_1, \ldots, \varphi_r$. Put $\sigma = \varphi_1 + \cdots + \varphi_n$. Then the quantity
$$\sum_{\nu \in S} \left( \nu(\sigma) - \min_{i = 1, \ldots, n} \nu(\varphi_i) \right)$$
is bounded from above by an effectively computable constant depending only on $n$, $r$, $|S|$ and $g$.

In addition, the next proposition will be applied in our proofs. It is proven as Theorem 1 in [2], and we state it here combined with the paragraph immediately preceding Theorem 1 in [2]:

Proposition 7. Let $(G_n)_{n=0}^{\infty}$ be a non-degenerate linear recurrence sequence taking values in $F$ with power sum representation
$$G_n = a_1(n)\,\alpha_1^n + \cdots + a_t(n)\,\alpha_t^n.$$
Let $L$ be the splitting field of the characteristic polynomial of that sequence, i.e. $L = F(\alpha_1, \ldots, \alpha_t)$. Moreover, let $\mu$ be a valuation on $L$.
Then there are effectively computable constants $C^+$ and $C^-$, independent of $n$, such that for every sufficiently large $n$ the inequality
$$C^- + n \min_{i = 1, \ldots, t} \mu(\alpha_i) \;\le\; \mu(G_n) \;\le\; C^+ + n \min_{i = 1, \ldots, t} \mu(\alpha_i)$$
holds. Note that an inspection of the proof of the last proposition shows that it is possible to calculate an (admittedly rather complicated) bound $N_0$ such that "sufficiently large $n$" can be replaced by $n \ge N_0$.
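As an illustration of the height (our own example): for $f = p/q$ in lowest terms, the negative valuations of $f$ occur exactly at its poles, so $\mathcal{H}(f) = \max(\deg p, \deg q)$. For $f(x) = (x-1)^2/(x+2)$, which has a double zero at $1$, a simple pole at $-2$ and a simple pole at infinity, a quick check reads:

```python
# f(x) = (x - 1)^2 / (x + 2), written in lowest terms
# nonzero valuations: double zero at x = 1, simple poles at x = -2 and at infinity
valuations = {"x=1": 2, "x=-2": -1, "infinity": -1}

assert sum(valuations.values()) == 0                   # sum formula
height = -sum(min(0, v) for v in valuations.values())  # H(f) = -sum min(0, nu(f))
assert height == 2                                     # = max(deg p, deg q) = max(2, 1)
print(height)  # 2
```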
Last but not least, we will need the following small lemma about multiplicatively independent elements, which is proven in [3]:

Lemma 8. Let $\gamma, \delta \in F \setminus \mathbb{C}$ be multiplicatively independent, let $n, m \in \mathbb{N}$, and assume that
$$\mathcal{H}(\gamma^n \delta^m) \le L.$$
Then there exists an effectively computable constant $L'$, depending only on $\gamma$, $\delta$, $g$ and $L$, such that $\max(n, m) \le L'$.
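Lemma 8 can be illustrated by the pair $\gamma = x$, $\delta = x + 1$ (an example of ours): the product $\gamma^n \delta^m$ is a polynomial of degree $n + m$ whose only pole lies at infinity, so its height equals $n + m$ and

```latex
\mathcal{H}\!\left( x^n (x+1)^m \right) = n + m \le L
\quad \Longrightarrow \quad
\max(n, m) \le L,
```

i.e. in this case one may take $L' = L$. The independence assumption is essential: for the multiplicatively dependent pair $\gamma = x$, $\delta = x^{-1}$ one has $\mathcal{H}(\gamma^n \delta^n) = \mathcal{H}(1) = 0$ for every $n$, while $\max(n, n)$ is unbounded.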

Proofs
We have now prepared all auxiliary results needed for proving our theorems. Thus we can start with the proof of our first theorem.
Proof of Theorem 1. First note that $aG_n$ is again a non-degenerate linear recurrence sequence with the same characteristic roots as $G_n$, and that $\mu(aG_n) = \mu(a) + \mu(G_n)$. The analogue holds for $bH_m$. So, without loss of generality, we may assume $a = b = 1$.
Let us rewrite the linear recurrence sequences in a more suitable manner. Writing the coefficient polynomials as $a_i(n) = a_{i0} + a_{i1} n + \cdots + a_{ie_i} n^{e_i}$ with $a_{ij} \in F$, we have
$$G_n = \sum_{i=1}^{d} \left( a_{i0} + a_{i1} n + \cdots + a_{ie_i} n^{e_i} \right) \alpha_i^n. \tag{1}$$
Now fix, for each $i \in \{1, \ldots, d\}$, a maximal $\mathbb{C}$-linearly independent subset $\{\pi_{i1}, \ldots, \pi_{ik_i}\}$ of $\{a_{i0}, \ldots, a_{ie_i}\}$. Using these elements, we can write (1) as
$$G_n = \sum_{i=1}^{d} \sum_{g=1}^{k_i} P_{ig}(n)\,\pi_{ig}\,\alpha_i^n,$$
where the $P_{ig}(n)$ are polynomials with complex coefficients. Analogously, we get
$$H_m = \sum_{j=1}^{t} \sum_{h=1}^{\ell_j} Q_{jh}(m)\,\psi_{jh}\,\beta_j^m,$$
where the $Q_{jh}(m)$ are polynomials and $\psi_{j1}, \ldots, \psi_{j\ell_j}$ is linearly independent over $\mathbb{C}$ for any $j \in \{1, \ldots, t\}$. Together these representations yield
$$G_n - H_m = \sum_{i=1}^{d} \sum_{g=1}^{k_i} P_{ig}(n)\,\pi_{ig}\,\alpha_i^n - \sum_{j=1}^{t} \sum_{h=1}^{\ell_j} Q_{jh}(m)\,\psi_{jh}\,\beta_j^m. \tag{2}$$

In order to be able to apply Proposition 6, we would need the summands in (2) to be linearly independent over $\mathbb{C}$. Therefore we will check this in the sequel and make changes where necessary. The procedure for doing so is as follows: we assume that we are given an arbitrary but fixed pair $(n, m)$ of indices and, considering several cases, deduce that then either $\min(n, m) \le c_0$, which falls outside the scope of the statement, where we only claim something for $\min(n, m) > c_0$, or a sum related to (2) (but in general slightly modified) consists of $\mathbb{C}$-linearly independent summands. During this procedure, the bound $c_0$ will be updated several (but only finitely many) times without changing its label, i.e. it is always denoted by $c_0$. As an initial value we choose $c_0$ large enough such that every polynomial value $P_{ig}(n)$ and $Q_{jh}(m)$ is nonzero whenever $\min(n, m) > c_0$.

Now suppose that the summands in (2) are linearly dependent over $\mathbb{C}$. Then there are complex numbers $\lambda_{ig}, \gamma_{jh} \in \mathbb{C}$, not all zero, such that
$$\sum_{i=1}^{d} \sum_{g=1}^{k_i} \lambda_{ig}\,P_{ig}(n)\,\pi_{ig}\,\alpha_i^n + \sum_{j=1}^{t} \sum_{h=1}^{\ell_j} \gamma_{jh}\,Q_{jh}(m)\,\psi_{jh}\,\beta_j^m = 0. \tag{3}$$
Note that the $\lambda_{ig}$ and $\gamma_{jh}$ may depend on $(n, m)$, which we assume to be fixed for this consideration. Now we consider a minimal vanishing subsum of (3), i.e. one such that no subsubsum of this subsum vanishes. In particular, all $\lambda_{ig}$ and $\gamma_{jh}$ appearing in this minimal vanishing subsum are nonzero. Moreover, we fix a finite set $S$ of valuations such that all $\alpha_i$, $\beta_j$, $\pi_{ig}$ and $\psi_{jh}$ are $S$-units and such that $\mu \in S$, and define the constant $C_{\mathrm{aux}}$ as the bound obtained from Proposition 5 applied with $k$ equal to the total number of summands in (3). Both $S$ and $C_{\mathrm{aux}}$ are independent of $n$ and $m$. We distinguish between six cases:

Case 1: The minimal vanishing subsum contains only summands with the same factor $\alpha_i^n$. Recalling that $\{\pi_{i1}, \ldots, \pi_{ik_i}\}$ is linearly independent over $\mathbb{C}$, we see that this case is not possible.
Case 2: The minimal vanishing subsum contains only summands with the same factor $\beta_j^m$. Recalling that $\psi_{j1}, \ldots, \psi_{j\ell_j}$ is linearly independent over $\mathbb{C}$, we see that this case is not possible either.
Case 3: The minimal vanishing subsum contains summands with the factors $\alpha_i^n$ and $\alpha_j^n$, respectively, where $i \ne j$. Dividing the minimal vanishing subsum by a summand containing the factor $\alpha_j^n$ and then applying Proposition 5 (note that all summands are $S$-units since the $\lambda_{ig}$ and the values $P_{ig}(n)$ are constants) yields
$$\mathcal{H}\!\left( \frac{\lambda_{ig}\,P_{ig}(n)\,\pi_{ig}\,\alpha_i^n}{\lambda_{jg'}\,P_{jg'}(n)\,\pi_{jg'}\,\alpha_j^n} \right) \le C_{\mathrm{aux}}$$
for some indices $g, g'$. By Lemma 4, this implies
$$n\,\mathcal{H}(\alpha_i/\alpha_j) \le C_{\mathrm{aux}} + \mathcal{H}(\pi_{ig}) + \mathcal{H}(\pi_{jg'}). \tag{4}$$
Since $(G_n)$ is non-degenerate, $\alpha_i/\alpha_j \notin \mathbb{C}$ and hence $\mathcal{H}(\alpha_i/\alpha_j) \ge 1$. The upper bound in (4) is independent of $n$ and $m$, and thus, for an updated $c_0$, we get $\min(n, m) \le n \le c_0$.
Case 4: The minimal vanishing subsum contains summands with the factors $\beta_i^m$ and $\beta_j^m$, respectively, where $i \ne j$. This case is handled completely analogously to the previous one.
Case 5: The minimal vanishing subsum contains summands with the factors $\alpha_1^n$ and $\beta_j^m$, respectively. Dividing the minimal vanishing subsum by a summand containing the factor $\beta_j^m$ and then applying Proposition 5 yields
$$\mathcal{H}\!\left( \frac{\lambda_{1g}\,P_{1g}(n)\,\pi_{1g}\,\alpha_1^n}{\gamma_{jh}\,Q_{jh}(m)\,\psi_{jh}\,\beta_j^m} \right) \le C_{\mathrm{aux}}$$
for some indices $g, h$. By Lemma 4, this implies
$$\mathcal{H}\!\left( \alpha_1^n\,\beta_j^{-m} \right) \le C_{\mathrm{aux}} + \mathcal{H}(\pi_{1g}) + \mathcal{H}(\psi_{jh}) =: L.$$
From this we get either, again by Lemma 4, the bound $n\,\mathcal{H}(\alpha_1) \le L$ in the case $\beta_j \in \mathbb{C}$ (recall $\alpha_1 \notin \mathbb{C}$, so $\mathcal{H}(\alpha_1) \ge 1$), or, in the case $\beta_j \notin \mathbb{C}$, the bound $\max(n, m) \le L'$ by Lemma 8 applied with $\gamma = \alpha_1$ and $\delta = \beta_j^{-1}$. In both subcases the upper bound is independent of $n$ and $m$, and thus we get $\min(n, m) \le c_0$ for an updated $c_0$.
Case 6: The minimal vanishing subsum contains summands with the factors $\alpha_i^n$ and $\beta_j^m$, respectively, where $i \ne 1$. In particular, we may assume that no summand with a factor $\alpha_1^n$ is contained. Then we can solve the minimal vanishing subsum for one of the appearing terms of the shape $Q_{jh}(m)\,\psi_{jh}\,\beta_j^m$, i.e. express this term as a $\mathbb{C}$-linear combination of the remaining terms in this subsum. Now we insert this expression for $Q_{jh}(m)\,\psi_{jh}\,\beta_j^m$ into (2), collect terms which differ only by a constant factor, and get recurrences $G'_n$ as well as $H'_m$ with the following properties: the new coefficients may depend on the considered pair $(n, m)$; all expressions of the shape $\pi_{ig}\,\alpha_i^n$ or $\psi_{jh}\,\beta_j^m$ appearing in $G'_n - H'_m$ also appear in $G_n - H_m$ (in general with different coefficients in $\mathbb{C}$); no summand containing $\pi_{1g}\,\alpha_1^n$ got lost; and $G'_n - H'_m$ has fewer summands than $G_n - H_m$.

Next we check whether the summands in $G'_n - H'_m$ are linearly independent over $\mathbb{C}$. If not, then we proceed with $G'_n - H'_m$ exactly as we did above with $G_n - H_m$. Observe that we automatically end up in Case 6 again, since we are only interested in $\min(n, m) > c_0$. Here we perform the same reduction process to get $G''_n - H''_m$. As each reduction step reduces the number of summands, this iteration ends after finitely many steps, and after renumbering terms (note that $\alpha_1$ stays $\alpha_1$, since terms containing $\alpha_1$ cannot be removed during the reduction process) we get
$$G_n - H_m = G^*_n - H^*_m = \sum_{i=1}^{d^*} \sum_{g=1}^{k^*_i} P^*_{ig}(n)\,\pi^*_{ig}\,\alpha_i^n - \sum_{j=1}^{t^*} \sum_{h=1}^{\ell^*_j} Q^*_{jh}(m)\,\psi^*_{jh}\,\beta_j^m. \tag{5}$$
Note that $d^* \ge 1$ and $k^*_1 \ge 1$, i.e. $\alpha_1$ appears on the right hand side. The summands in the expression on the right hand side of equation (5) are now linearly independent over $\mathbb{C}$, because we only consider $\min(n, m) > c_0$ and no further reduction steps were possible. Nevertheless, which summands from $G_n - H_m$ still appear in $G^*_n - H^*_m$ may depend on the considered pair $(n, m)$. However, this will not be a problem in the sequel, since the number of summands is bounded uniformly (cf. our definition of $C_{\mathrm{aux}}$).

At this point we are able to apply Proposition 6. By our choice of $S$, each summand on the right hand side of equation (5) is an $S$-unit. Denote these summands by $\varphi_1, \ldots, \varphi_N$ and put $\sigma := G_n - H_m = \varphi_1 + \cdots + \varphi_N$. Applying Proposition 6 yields an effectively computable constant $C_6$, independent of $n$ and $m$, such that
$$\sum_{\nu \in S} \left( \nu(\sigma) - \min_{i = 1, \ldots, N} \nu(\varphi_i) \right) \le C_6. \tag{6}$$
Since each summand in the sum on the left hand side of inequality (6) is non-negative, and since $\mu \in S$, we get
$$\mu(G_n - H_m) \le \min_{i = 1, \ldots, N} \mu(\varphi_i) + C_6 \le \mu\!\left( P^*_{1g}(n)\,\pi^*_{1g}\,\alpha_1^n \right) + C_6 = n\,\mu(\alpha_1) + \mu(\pi^*_{1g}) + C_6.$$
From this, using $\mu(\alpha_1) = \min_{i = 1, \ldots, d} \mu(\alpha_i)$ and the lower bound of Proposition 7, we infer
$$\mu(G_n - H_m) \le n \min_{i = 1, \ldots, d} \mu(\alpha_i) + \mu(\pi^*_{1g}) + C_6 \le \mu(G_n) - C^- + \mu(\pi^*_{1g}) + C_6 \le \mu(G_n) + C,$$
where in the second to last step we have used Proposition 7 and $c_0$ becomes updated for the last time so that Proposition 7 is applicable. This proves the theorem.

The assumptions of our second theorem contain all assumptions of Theorem 1, so it is not surprising that its proof builds on Theorem 1.

Proof of Theorem 2. By Theorem 1, there exist constants $c_0$ and $C$ such that for $\min(n, m) > c_0$ we have
$$\mu(aG_n - bH_m) \le \mu(G_n) + C.$$
It remains to consider the cases in which one index is small. So let, firstly, $m \le c_0$ be fixed. Then $H_m$ is fixed as well. Since there are only finitely many such cases, we can perform the following for each of these cases, writing $H^{(m)}$ for $H_m$ in the calculations to emphasize that we consider a fixed value of $m$ each time. Put $\alpha_{d+1} := 1$ and consider the linear recurrence sequence
$$aG_n - bH^{(m)} = a\,a_1(n)\,\alpha_1^n + \cdots + a\,a_d(n)\,\alpha_d^n - b\,H^{(m)}\,\alpha_{d+1}^n$$
in the index $n$. Since no $\alpha_i$ is contained in $\mathbb{C}$, no ratio of its characteristic roots $\alpha_1, \ldots, \alpha_{d+1}$ is constant, so this sequence is again non-degenerate. Hence Proposition 7, together with $\min_{i \le d+1} \mu(\alpha_i) \le \mu(\alpha_1)$ and the lower bound of Proposition 7 applied to $G_n$, yields effectively computable constants $c_{1,(m)}$ and $C'_{(m)}$ such that
$$\mu(aG_n - bH^{(m)}) \le \mu(G_n) + C'_{(m)}$$
for $n > c_{1,(m)}$.

Consider now the second possibility, namely that $n \le c_0$ is fixed. Then $G_n$ is fixed as well. Since there are only finitely many such cases, we can perform the following for each of these cases, writing $G^{(n)}$ for $G_n$ in the calculations to emphasize that we consider a fixed value of $n$ each time. Put $\beta_{t+1} := 1$ and consider the linear recurrence sequence
$$aG^{(n)} - bH_m = a\,G^{(n)}\,\beta_{t+1}^m - b\,b_1(m)\,\beta_1^m - \cdots - b\,b_t(m)\,\beta_t^m$$
in the index $m$. As above, it is non-degenerate since no $\beta_j$ is contained in $\mathbb{C}$, and Proposition 7, together with $\min_{j \le t+1} \mu(\beta_j) \le \mu(\beta_{t+1}) = 0$ and the fact that $\mu(G^{(n)})$ takes only finitely many values, yields effectively computable constants $c_{1,(n)}$ and $C'_{(n)}$ such that
$$\mu(aG^{(n)} - bH_m) \le \mu(G^{(n)}) + C'_{(n)}$$
for $m > c_{1,(n)}$. Finally, put
$$c_1 := \max\Bigl( c_0,\ \max_{m \le c_0} c_{1,(m)},\ \max_{n \le c_0} c_{1,(n)} \Bigr), \qquad C' := \max\Bigl( C,\ \max_{m \le c_0} C'_{(m)},\ \max_{n \le c_0} C'_{(n)} \Bigr).$$
For these constants it holds that $\mu(aG_n - bH_m) \le \mu(G_n) + C'$ whenever $\max(n, m) > c_1$, and the theorem is proven.