1 Introduction

There are many papers in the literature concerning Diophantine m-tuples, which are sets of m distinct positive integers \(\{a_1,\ldots ,a_m\}\) such that \(a_ia_j+1\) is a square for all \(1\le i<j\le m\) (see [4], for example). A variation of this classical problem is obtained if one replaces the set of squares by some other subset of the positive integers, such as k-th powers for some fixed \(k\ge 3\), perfect powers, primes, or members of some linearly recurrent sequence (see [7, 12–14, 18]). In this paper, we study this problem with the set of values of the k-generalized Fibonacci numbers for some integer \(k\ge 2\). Recall that these numbers, denoted by \(F_n^{(k)}\), satisfy the recurrence

$$\begin{aligned} F_{n+k}^{(k)}=F_{n+k-1}^{(k)}+\cdots +F_n^{(k)} \end{aligned}$$

and start with \(0,0,\ldots ,0\) (\(k-1\) times) followed by 1. In terms of notation, we assume that \(F_i^{(k)}=0\) for \(i=-(k-2),-(k-3),\ldots ,0\), and \(F_1^{(k)}=1\). For \(k=2\) we obtain the Fibonacci numbers and for \(k=3\) the Tribonacci numbers. Our result is the following:
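As a quick numerical illustration (ours, not part of the paper), the recurrence and initial conditions can be implemented directly; the function name `kgen_fib` is our own:

```python
def kgen_fib(k, n_max):
    """Return F_1^{(k)}, ..., F_{n_max}^{(k)} for the k-generalized
    Fibonacci sequence: k-1 zeros followed by 1, then each term is
    the sum of the preceding k terms."""
    # Seed with F_{-(k-2)}^{(k)} = ... = F_0^{(k)} = 0 and F_1^{(k)} = 1.
    seq = [0] * (k - 1) + [1]
    while len(seq) < n_max + k - 1:
        seq.append(sum(seq[-k:]))
    return seq[k - 1:]  # drop the padding zeros, start at F_1^{(k)}

# k = 2 recovers the Fibonacci numbers, k = 3 the Tribonacci numbers.
print(kgen_fib(2, 10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(kgen_fib(3, 10))  # [1, 1, 2, 4, 7, 13, 24, 44, 81, 149]
```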

Theorem 1

Let \(k\ge 2\) be fixed. Then there are only finitely many triples of positive integers \(1<a<b<c\) such that

$$\begin{aligned} ab+1=F_x^{(k)},\quad ac+1=F_y^{(k)},\quad bc+1=F_z^{(k)} \end{aligned}$$
(1)

hold for some integers \(x, y, z\).

Our result generalizes the results obtained in [8, 11] and [13], where this problem was treated for the cases \(k=2\) and \(k=3\). In [13] it was shown that there does not exist a triple of positive integers \(a, b, c\) such that \(ab+1,ac+1,bc+1\) are all Fibonacci numbers. In [11] it was shown that there is no Tribonacci Diophantine quadruple, that is, a set of four positive integers \(\{a_1,a_2,a_3,a_4\}\) such that \(a_ia_j+1\) is a member of the Tribonacci sequence (the 3-generalized Fibonacci sequence) for \(1\le i<j\le 4\), and in [8] it was proved that there are only finitely many Tribonacci Diophantine triples. In the current paper, we prove the same finiteness result for all triples having values in the sequence of k-generalized Fibonacci numbers.

For the proof of Theorem 1, we proceed as follows. In Sect. 2, we recall some properties of the k-generalized Fibonacci sequence \(F_n^{(k)}\) which we will need, and we prove two lemmata. The first lemma shows that any \(k-1\) roots of the characteristic polynomial are multiplicatively independent. In the second lemma, the greatest common divisor of \(F_x^{(k)}-1\) and \(F_y^{(k)}-1\) for \(2<y<x\) is estimated. In Sect. 3, we assume that Theorem 1 is false and, using the Subspace theorem, derive a finite expansion of infinitely many solutions. In Sect. 4, we use a parametrization lemma, which is proved using results about the finiteness of the number of non-degenerate solutions to S-unit equations. Applying it to the finite expansion leads us to a condition on the leading coefficient, which turns out to be impossible. This contradiction is obtained by showing that a certain Diophantine equation has no solutions; this last Diophantine equation has been treated in particular cases in [2] and [16].

2 Preliminaries

There are already many results in the literature about \((F_n^{(k)})_{n\ge 0}\). We will only use what we need, which are the following properties. The sequence \((F_n^{(k)})_{n\ge 0}\) is linearly recurrent of characteristic polynomial

$$\begin{aligned} \varPsi _k(X)=X^k-X^{k-1}-\cdots -X-1. \end{aligned}$$

The polynomial \(\varPsi _k(X)\) is separable and irreducible in \({\mathbb Q}[X]\), and the Galois group thus acts transitively on the roots, which we denote by \(\alpha _1,\ldots ,\alpha _k\). If k is even or prime, the Galois group is in fact \(S_k\) (see [16] for these statements). The polynomial \(\varPsi _k(X)\) has exactly one root outside the unit disk; without loss of generality, we assume that this root is \(\alpha _1>1\) (formally, this root also depends on k, but in what follows we shall omit the dependence on k of this and the other roots of \(\varPsi _k(X)\) in order to avoid notational clutter). Thus,

$$\begin{aligned} \varPsi _k(X)=\prod _{i=1}^k (X-\alpha _i),\quad {\mathrm{where}}\quad |\alpha _i|<1,\quad i=2,\ldots ,k. \end{aligned}$$

Observe that \(\alpha _1\alpha _2\cdots \alpha _k=(-1)^{k-1}\). Note also that

$$\begin{aligned} \varPsi _k(X)=X^k-\left( X^{k-1}+\cdots +1\right) =X^k-\frac{X^k-1}{X-1}=\frac{X^{k+1}-2X^k+1}{X-1}, \end{aligned}$$

a representation which is sometimes useful. Furthermore,
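This identity can be verified by multiplying coefficient lists directly; the helper names below are ours, a small sketch rather than part of the text:

```python
def psi_coeffs(k):
    # Coefficients of Psi_k(X) = X^k - X^{k-1} - ... - X - 1, highest degree first.
    return [1] + [-1] * k

def poly_mul(p, q):
    # Schoolbook polynomial multiplication on coefficient lists.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

for k in range(2, 8):
    lhs = poly_mul([1, -1], psi_coeffs(k))   # (X - 1) * Psi_k(X)
    rhs = [1, -2] + [0] * (k - 1) + [1]      # X^{k+1} - 2 X^k + 1
    assert lhs == rhs
print("identity (X - 1) * Psi_k(X) = X^{k+1} - 2 X^k + 1 verified for k = 2..7")
```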

$$\begin{aligned} 2-\frac{1}{k}<\alpha _1<2 \end{aligned}$$
(2)

(see Lemma 3 in [3]). The Binet formula of \((F_n^{(k)})_{n\ge 0}\) is given by

$$\begin{aligned} F_n^{(k)}=\sum _{i=1}^k f_i \alpha _i^n\quad {\mathrm{for~all}}\quad n\ge 0, \end{aligned}$$
(3)

where

$$\begin{aligned} f_i=\frac{(\alpha _i-1)\alpha _i^{-1}}{2+(k+1)(\alpha _i-2)},\quad i=1,\ldots ,k \end{aligned}$$
(4)

(see Theorem 1 in [3]). We have

$$\begin{aligned} |F_n^{(k)}-f_1\alpha _1^n|<\frac{1}{2}\quad {\mathrm{for~all}}\quad n\ge 1 \end{aligned}$$
(5)

and

$$\begin{aligned} f_1 < 1 \end{aligned}$$
(6)

(see Theorem 2 in [3]). We also need the fact that

$$\begin{aligned} \alpha _1^{n-2}<F_n^{(k)}<\alpha _1^{n-1} \end{aligned}$$
(7)

(see [1]).
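The properties (2), (3), (5) and (7) above can be checked numerically for small k and n; the following sketch (our own, assuming `numpy` is available) does so by computing the roots of \(\varPsi _k\):

```python
import numpy as np

def check_props(k, n_max=20):
    """Numerically verify (2), (3), (5) and (7) for one value of k."""
    roots = np.roots([1] + [-1] * k)         # roots of Psi_k
    i1 = int(np.argmax(np.abs(roots)))
    alpha1 = roots[i1].real                   # the dominant root alpha_1
    assert 2 - 1 / k < alpha1 < 2             # bound (2)
    assert abs(np.prod(roots) - (-1) ** (k - 1)) < 1e-8
    # Binet coefficients f_i from (4).
    f = [(a - 1) / (a * (2 + (k + 1) * (a - 2))) for a in roots]
    seq = [0] * (k - 1) + [1]                 # recurrence, F_1 preceded by zeros
    while len(seq) < n_max + k - 1:
        seq.append(sum(seq[-k:]))
    for n in range(1, n_max + 1):
        Fn = seq[n + k - 2]
        assert abs(Fn - sum(fi * a ** n for fi, a in zip(f, roots))) < 1e-4  # (3)
        assert abs(Fn - f[i1].real * alpha1 ** n) < 1 / 2                    # (5)
        if n >= 3:
            assert alpha1 ** (n - 2) < Fn < alpha1 ** (n - 1)                # (7)
    return alpha1

for k in (2, 3, 4, 5):
    check_props(k)
print("properties (2), (3), (5), (7) verified for k = 2..5")
```

For \(k=2\) the returned \(\alpha _1\) is the golden ratio, for \(k=3\) the Tribonacci constant \(\approx 1.8393\).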

Furthermore, the following property is of importance; it follows from the fact that there is no non-trivial multiplicative relation between the conjugates of a Pisot number (cf. [15]). Since in our case it is rather easy to verify, we shall present a proof of what we need.

Lemma 1

Each set of \(k-1\) different roots (e.g., \(\{\alpha _1, \ldots , \alpha _{k-1}\}\)) is multiplicatively independent.

Proof

We shall prove the statement only for the set \(\{\alpha _1,\ldots ,\alpha _{k-1}\}\). The general statement follows easily by the transitivity of the Galois group of the irreducible polynomial \(\varPsi _k(X)\).

Let us denote by \(\mathbb {L} := \mathbb {Q}(\alpha _1, \dots , \alpha _k)\) the splitting field of \(\varPsi _k\) over \(\mathbb {Q}\) and by \(\mathcal {O}_\mathbb {L}\) the ring of integers in \(\mathbb {L}\). Note that the roots \(\alpha _1,\ldots ,\alpha _k\) are certainly in the group of units \(\mathcal {O}_\mathbb {L}^\times \) which follows from

$$\begin{aligned} \alpha _i^{-1} = (-1)^{k-1}(\alpha _1 \cdots \alpha _{i-1} \cdot \alpha _{i+1} \cdots \alpha _k) \end{aligned}$$

for \(i=1,\ldots ,k\). The extension \(\mathbb {L}/\mathbb {Q}\) is Galois. We denote by d its degree and by \(G:={\text {Gal}}(\mathbb {L}/\mathbb {Q})\) the Galois group of \(\mathbb {L}\) over \(\mathbb {Q}\). Let \(G=\{\sigma _1,\ldots ,\sigma _d\}\). We consider the map \(\lambda :\mathcal {O}_\mathbb {L}^\times \rightarrow \mathbb {R}^d\) defined by \( x \mapsto (\log \vert \sigma _1(x) \vert ,\log \vert \sigma _2(x)\vert ,\ldots ,\) \(\log \vert \sigma _d(x)\vert )\). Observe that by the product formula we have for \(x\in \mathbb {L}^\times \) that

$$\begin{aligned} \prod _{v\in M_\mathbb {L}}\vert x\vert _v=1; \end{aligned}$$

and for \(x\in \mathcal {O}_\mathbb {L}^\times \) this leads to

$$\begin{aligned} \prod _{\begin{array}{c} v \in M_{\mathbb {L}} \\ v \mid \infty \end{array}} \vert x \vert _v = 1 \end{aligned}$$

since for every finite place v we have \(\vert x\vert _v=1\). This means that the image of \(\lambda \) lies in the hyperplane defined by \(X_1+\cdots +X_d=0\) of \(\mathbb {R}^d\).

We will use the property that \(\lambda \) is a homomorphism (see e.g., [17]). So if we can prove that the \(k-1\) vectors \(\lambda (\alpha _1),\ldots ,\lambda (\alpha _{k-1})\) are linearly independent, this will prove the statement.

Since \(\varPsi _k\) is irreducible over \(\mathbb {Q}\), the Galois group G of \(\mathbb {L}\) over \(\mathbb {Q}\) acts transitively on \(\{\alpha _1,\ldots ,\alpha _k\}\), i.e., for each \(i = 1, \dots , k\) there is some Galois automorphism which sends \(\alpha _i\) to \(\alpha _1\); without loss of generality let \(\sigma _1,\ldots ,\sigma _k\) be such that \(\sigma _i(\alpha _i) = \alpha _1\). Observe that \(\sigma _i^{-1}(\alpha _1)=\alpha _i\) for \(i=1,\ldots ,k\). We have

$$\begin{aligned} \lambda (\alpha _1)= & {} (\log \vert \alpha _1\vert ,\log \vert \sigma _2(\alpha _1)\vert ,\ldots ,\log \vert \sigma _{k-1}(\alpha _1)\vert ,\ldots ),\\ \lambda (\alpha _2)= & {} (\log \vert \sigma _1(\alpha _2)\vert ,\log \vert \alpha _1\vert ,\ldots ,\log \vert \sigma _{k-1}(\alpha _2)\vert ,\ldots ),\\ \vdots&\\ \lambda (\alpha _{k-1})= & {} (\log \vert \sigma _1(\alpha _{k-1})\vert ,\log \vert \sigma _2(\alpha _{k-1})\vert ,\ldots ,\log \vert \alpha _{1}\vert ,\ldots ). \end{aligned}$$

We will show that the matrix \((\log \vert \sigma _i(\alpha _j)\vert )_{i=1,\ldots ,k-1;j=1,\ldots ,k-1}\) consisting of the first \(k-1\) entries of these \(k-1\) vectors has rank \(k-1\) implying the statement of the lemma.

Observe now that \(\alpha _1\cdots \alpha _k=(-1)^{k-1}\) and thus \(\vert \sigma _i(\alpha _1\cdots \alpha _k)\vert =1\) for all \(i=1,\ldots ,k-1\). It follows that \(\sum _{j=1}^k\log \vert \sigma _i(\alpha _j)\vert =0\) and thus

$$\begin{aligned} \sum _{j=1}^{k-1}\log \vert \sigma _i(\alpha _j)\vert =-\log \vert \sigma _i(\alpha _k)\vert >0 \end{aligned}$$

for all \(i=1,\ldots ,k-1\); here \(\vert \sigma _i(\alpha _k)\vert <1\), since \(\sigma _i(\alpha _k)\ne \alpha _1\) (recall that \(\sigma _i(\alpha _i)=\alpha _1\) and \(i\ne k\)). Hence the transpose of the matrix above is strictly diagonally dominant: the diagonal entries, which are all equal to \(\log \vert \alpha _1\vert \), are positive, all other entries are negative, and each row-sum (in the transpose matrix) of the off-diagonal entries is in absolute value less than the corresponding diagonal entry. It follows that the matrix is regular, which is what we wanted to show. \(\square \)
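Lemma 1 can be sanity-checked numerically: for small exponent vectors, products of powers of \(k-1\) of the roots should stay away from 1. The following sketch (ours; it assumes `numpy` and only tests a finite exponent range, so it is an illustration, not a proof):

```python
import itertools
import numpy as np

def min_distance_to_one(k, bound):
    """Smallest |alpha_1^{m_1} * ... * alpha_{k-1}^{m_{k-1}} - 1| over all
    nonzero integer exponent vectors with |m_i| <= bound.  Multiplicative
    independence (Lemma 1) predicts this stays bounded away from 0."""
    roots = np.roots([1] + [-1] * k)
    # Sort so that alpha_1 (largest modulus) comes first; drop alpha_k.
    roots = sorted(roots, key=abs, reverse=True)[:k - 1]
    best = np.inf
    for m in itertools.product(range(-bound, bound + 1), repeat=k - 1):
        if any(m):
            prod = np.prod([a ** e for a, e in zip(roots, m)])
            best = min(best, abs(prod - 1))
    return best

# The minimum stays well away from 0, as Lemma 1 predicts.
print(min_distance_to_one(3, 10))
print(min_distance_to_one(4, 5))
```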

Finally, we prove the following result, which generalizes Proposition 1 in [8]. Observe that the upper bound depends now on k.

Lemma 2

Let \(x>y\ge 3\). Then

$$\begin{aligned} \gcd \left( F_x^{(k)}-1,F_y^{(k)}-1\right) <\alpha _1^{\frac{kx}{k+1}}. \end{aligned}$$
(8)

Proof

We may assume that \(y\ge 4\), since for \(y=3\), we get \(F_y^{(k)}-1=1\), and there is nothing to prove. We put \(d=\gcd \left( F_x^{(k)}-1,F_y^{(k)}-1\right) \). Let \(\kappa \) be a constant to be determined later. If \(y\le \kappa x+1\), then

$$\begin{aligned} d\le F_y^{(k)}-1<F_y^{(k)}<\alpha _1^{y-1}\le \alpha _1^{\kappa x}. \end{aligned}$$
(9)

From now on, we assume that \(y>\kappa x+1\). Using (3) and (5), we write

$$\begin{aligned} {\begin{matrix} F_x^{(k)} &{} = f_1 \alpha _1^x+\zeta _x,\quad |\zeta _x|<1/2,\\ F_y^{(k)} &{} = f_1 \alpha _1^y+\zeta _y,\quad |\zeta _y|<1/2. \end{matrix}} \end{aligned}$$
(10)

We put \({\mathbb K}={\mathbb Q}(\alpha _1)\). We let \(\lambda =x-y<(1-\kappa )x-1\), and note that

$$\begin{aligned} d\mid \big (F_x^{(k)}-1\big )-\alpha _1^{\lambda } \big (F_y^{(k)}-1\big )\quad {\mathrm{in}}\quad {\mathbb K}. \end{aligned}$$

We write

$$\begin{aligned} d\eta =\big (F_x^{(k)}-1\big )-\alpha _1^{\lambda } \big (F_y^{(k)}-1\big ), \end{aligned}$$

where \(\eta \) is some algebraic integer in \({\mathbb K}\). Note that the right-hand side above is not zero, for if it were, we would get \(\alpha _1^{\lambda }=\big (F_x^{(k)}-1\big )/\big (F_y^{(k)}-1\big )\in {\mathbb Q}\), which is false for \(\lambda >0\). We compute norms from \({\mathbb K}\) to \({\mathbb Q}\). Observe that

$$\begin{aligned} \left| \big (F_x^{(k)}-1\big )-\alpha _1^{\lambda }\big (F_y^{(k)}-1\big )\right|= & {} \left| \big (f_1\alpha _1^x+\zeta _x-1\big )-\alpha _1^{\lambda }\big (f_1\alpha _1^{y}+\zeta _y-1\big )\right| \\= & {} \left| \alpha _1^{\lambda }(1-\zeta _y)-(1-\zeta _x)\right| \\\le & {} \frac{3}{2}\alpha _1^{\lambda }-\frac{1}{2}<\frac{3}{2}\alpha _1^{\lambda }<\alpha _1^{\lambda +1}\le \alpha _1^{(1-\kappa )x}. \end{aligned}$$

In the above, we used the fact that

$$\begin{aligned} -1/2<\zeta _x,\zeta _y<1/2 \end{aligned}$$

(see (5)) as well as (2). Further, let \(\sigma _i\) be any Galois automorphism that maps \(\alpha _1\) to \(\alpha _i\). Then for \(i\ge 2\), we have

$$\begin{aligned} \left| \sigma _i\left( \big (F_x^{(k)}-1\big )-\alpha _1^{\lambda } \big (F_y^{(k)}-1\big )\right) \right|= & {} \left| \big (F_x^{(k)}-1\big )-\alpha _i^{\lambda } \big (F_y^{(k)}-1\big )\right| \\<&F_x^{(k)}-1+F_y^{(k)}-1<\alpha _1^{x-1}+\alpha _1^{y-1}-2\\< & {} \alpha _1^{x-1}\big (1+\alpha _1^{-1}\big )\le \alpha _1^x. \end{aligned}$$

We then have

$$\begin{aligned} d^k\le & {} |N_{{\mathbb K}/{\mathbb Q}}(d\eta )|\\\le & {} \left| N_{{\mathbb K}/{\mathbb Q}}\left( \big (F_x^{(k)}-1\big )-\alpha _1^{\lambda } \big (F_y^{(k)}-1 \big )\right) \right| \\= & {} \left| \prod _{i=1}^k \sigma _i \left( \big (F_x^{(k)}-1\big )-\alpha _1^{\lambda } \big (F_y^{(k)}-1\big ) \right) \right| \\< & {} \alpha _1^{(1-\kappa )x} (\alpha _1^{x})^{k-1}=\alpha _1^{(k-\kappa )x}. \end{aligned}$$

Hence,

$$\begin{aligned} d\le \alpha _1^{(1-\kappa /k)x}. \end{aligned}$$
(11)

In order to balance (9) and (11), we choose \(\kappa \) such that \(\kappa =1-\kappa /k\), giving \(\kappa =k/(k+1)\), and the lemma is proved. \(\square \)
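Lemma 2 can likewise be checked on small cases; the following sketch (ours, assuming `numpy`) computes the largest ratio between the gcd and the bound \(\alpha _1^{kx/(k+1)}\) in a finite range:

```python
from math import gcd
import numpy as np

def max_gcd_ratio(k, x_max=25):
    """Largest gcd(F_x^{(k)} - 1, F_y^{(k)} - 1) / alpha_1^(k*x/(k+1))
    over 3 <= y < x <= x_max; Lemma 2 predicts this stays below 1."""
    alpha1 = max(np.roots([1] + [-1] * k), key=abs).real
    seq = [0] * (k - 1) + [1]
    while len(seq) < x_max + k - 1:
        seq.append(sum(seq[-k:]))
    F = lambda n: seq[n + k - 2]   # F(n) = F_n^{(k)}
    return max(gcd(F(x) - 1, F(y) - 1) / alpha1 ** (k * x / (k + 1))
               for x in range(4, x_max + 1) for y in range(3, x))

for k in (2, 3, 4):
    print(k, max_gcd_ratio(k))
```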

3 Parametrizing the Solutions

In order to simplify the notation, we shall from now on write \(F_n\) instead of \(F_n^{(k)}\); we still mean the nth k-generalized Fibonacci number. The arguments in this section follow those of [8]. We will show that if there are infinitely many solutions to (1), then all of them can be parametrized by finitely many expressions of the form given for c in (18) below.

We assume that there are infinitely many solutions to (1). Then, for each integer solution (abc), we have

$$\begin{aligned} a = \sqrt{\frac{(F_x - 1)(F_y - 1)}{F_z - 1}}, \, b = \sqrt{\frac{(F_x - 1)(F_z - 1)}{F_y - 1}}, \, c = \sqrt{\frac{(F_y - 1)(F_z - 1)}{F_x - 1}}. \end{aligned}$$

From

$$\begin{aligned} \alpha _1^{x+y-2} \ge F_x F_y> (F_x - 1)(F_y - 1) \ge F_z - 1 \ge \alpha _1^{z-2} - 1 > \alpha _1^{z-3} \end{aligned}$$

we see that \(x+y > z - 1\) and thus \(y \ge z/2\). In order to get a similar correspondence for x and z, we denote \(d_1 := \gcd (F_y - 1, F_z - 1)\) and \(d_2 := \gcd (F_x - 1, F_z - 1)\), such that \(F_z - 1 \mid d_1 d_2\). Then we use Lemma 2 to obtain

$$\begin{aligned} \alpha _1^{x-1}> F_x> F_x-1 \ge d_2 \ge \frac{F_z - 1}{d_1} \ge \frac{\alpha _1^{z-2} - 1}{\alpha _1^{\frac{kz}{k+1}}} > \alpha _1^{z -\frac{kz}{k+1} -3} \end{aligned}$$

and hence

$$\begin{aligned} x > \left( 1 - \frac{k}{k+1} \right) z - 2, \end{aligned}$$

which we can write as \(x > C_1 z\) for some small constant \(C_1 < 1\) (depending only on k), when z is sufficiently large.

Next, we do a Taylor series expansion for c which was given by

$$\begin{aligned} c = \sqrt{\frac{(F_y - 1)(F_z - 1)}{F_x - 1}}. \end{aligned}$$
(12)

Using the power sum representations of \(F_x, F_y, F_z\), we get

$$\begin{aligned} c= & {} \sqrt{f_1} \alpha _1^{(-x+y+z)/2} \\&\quad \cdot \left( 1 + (-1/f_1) \alpha _1^{-x} + (f_2/f_1) \alpha _2^x \alpha _1^{-x} + \dots + (f_k/f_1) \alpha _k^x \alpha _1^{-x}\right) ^{-1/2} \\&\quad \cdot \left( 1 + (-1/f_1) \alpha _1^{-y} + (f_2/f_1) \alpha _2^y \alpha _1^{-y} + \dots + (f_k/f_1) \alpha _k^y \alpha _1^{-y} \right) ^{1/2} \\&\quad \cdot \left( 1 + (-1/f_1) \alpha _1^{-z} + (f_2/f_1) \alpha _2^z \alpha _1^{-z} + \dots + (f_k/f_1) \alpha _k^z \alpha _1^{-z}\right) ^{1/2}. \end{aligned}$$

We then use the binomial expansion to obtain

$$\begin{aligned}&\left( 1 + (-1/f_1) \alpha _1^{-x} + (f_2/f_1) \alpha _2^x \alpha _1^{-x} + \dots + (f_k/f_1) \alpha _k^x \alpha _1^{-x} \right) ^{1/2} \\&\quad = \sum _{j=0}^T \left( {\begin{array}{c}1/2\\ j\end{array}}\right) \left( (-1/f_1) \alpha _1^{-x} + (f_2/f_1) \alpha _2^x \alpha _1^{-x} + \dots + (f_k/f_1) \alpha _k^x \alpha _1^{-x} \right) ^j + {\mathcal {O}}\left( \alpha _1^{-(T+1) x}\right) , \end{aligned}$$

where \(\mathcal {O}\) has the usual meaning, using estimates from [9], and where T is some index which we will specify later. Since \(x < z\) and \(z < x/C_1\), the remainder term can also be written as \(\mathcal {O}\big (\alpha _1^{-T \Vert x\Vert /C_1}\big )\), where \(\Vert x\Vert =\max \{x,y,z\}=z\). Doing the same for y and z and multiplying those expressions gives

$$\begin{aligned} c=\sqrt{f_1} \alpha _1^{(-x+y+z)/2} \left( 1 + \sum _{j=1}^{n-1} d_j M_j\right) + \mathcal {O}(\alpha _1^{-T \Vert x\Vert /C_1}), \end{aligned}$$
(13)

where the integer n depends only on T, where \(J=\{1,\ldots ,n-1\}\) is a finite index set, \(d_j\) are non-zero coefficients in \({\mathbb L}:={\mathbb Q}(\alpha _1,\ldots ,\alpha _k)\), and \(M_j\) is a monomial of the form

$$\begin{aligned} M_j=\prod _{i=1}^k \alpha _i^{L_{i,j}(\mathbf{x})}, \end{aligned}$$

in which \(\mathbf{x}=(x,y,z)\), and \(L_{i,j}(\mathbf{x})\) are linear forms in \(\mathbf{x}\in {\mathbb R}^3\) with integer coefficients which are all non-negative if \(i=2,\ldots ,k\) and non-positive if \(i=1\). Note that each monomial \(M_j\) is “small”; that is, there exists a constant \(\kappa > 0\) (which we can even choose independently of k) such that

$$\begin{aligned} |M_j| \le e^{-\kappa x}\quad {\mathrm{for ~all}}\quad j\in J. \end{aligned}$$
(14)

This follows directly from

$$\begin{aligned} |M_j|= & {} |\alpha _1|^{L_{1,j}(\mathbf{x})} \cdot |\alpha _2|^{L_{2,j}(\mathbf{x})} \cdots |\alpha _k|^{L_{k,j}(\mathbf{x})} \\\le & {} (2 - 1/k)^{L_{1,j}(\mathbf{x})} \cdot 1 \cdots 1 \\\le & {} (3/2)^{-x} \\\le & {} e^{-\kappa x}\quad {\mathrm{for ~all}}\quad j\in J. \end{aligned}$$

The aim of this section is to apply a version of the Subspace theorem given in [5] to show that there is a finite expansion of c involving terms as in (13); the version we are going to use can also be found in Sect. 3 of [10]. For the set-up—in particular the notion of heights—we refer to the mentioned papers.

We work with the field \({\mathbb L}={\mathbb Q}(\alpha _1,\ldots ,\alpha _k)\) and let S be the finite set of infinite places (which are normalized so that the Product Formula holds, cf. [5]). Observe that \(\alpha _1,\ldots ,\alpha _k\) are S-units. According to whether \(-x+y+z\) is even or odd, we set \(\epsilon = 0\) or \(\epsilon = 1\) respectively, such that

$$\begin{aligned} \alpha _1^{(-x+y+z-\epsilon )/2} \in \mathbb {L}. \end{aligned}$$

By passing to a still infinite subset of the solutions, we may assume that \(\epsilon \) is the same for all of them.

Using the fixed integer n (depending on T) from above, we now define \(n+1\) linearly independent linear forms in the indeterminates \((C, Y_0, \dots , Y_{n-1})\). For the standard infinite place \(\infty \) on \(\mathbb {C}\), we set

$$\begin{aligned} l_{0, \infty }(C, Y_0, \dots , Y_{n-1}) := C - \sqrt{f_1 \alpha _1^\epsilon } Y_0 - \sqrt{f_1 \alpha _1^\epsilon } \sum _{j=1}^{n-1} d_{j} Y_j, \end{aligned}$$
(15)

where \(\epsilon \in \{0,1\}\) is as explained above, and

$$\begin{aligned} l_{i, \infty }(C, Y_0, \dots , Y_{n-1}) := Y_{i-1} \quad {\mathrm{for }}\; i \in \{1, \dots , n\}. \end{aligned}$$

For all other places v in S, we define

$$\begin{aligned} l_{0, v} := C, \quad l_{i, v} := Y_{i-1} \quad {\mathrm{for }}\; i = 1, \dots , n. \end{aligned}$$

We will show that there is some \(\delta > 0\), such that the inequality

$$\begin{aligned} \prod _{v\in S}\prod _{i=0}^{n}\frac{\vert l_{i,v}(\mathbf{y})\vert _{v}}{\vert \mathbf{y}\vert _{v}} <\left( \prod _{v\in S}\vert \det (l_{0,v},\ldots ,l_{n,v})\vert _{v}\right) \cdot \mathcal {H}(\mathbf{y})^{-(n+1)-\delta }\, \end{aligned}$$
(16)

is satisfied for all vectors

$$\begin{aligned} \mathbf{y} = \left( c, \alpha _1^{(-x+y+z-\epsilon )/2}, \alpha _1^{(-x+y+z-\epsilon )/2} M_1, \dots , \alpha _1^{(-x+y+z-\epsilon )/2} M_{n-1}\right) . \end{aligned}$$

The use of the correct \(\epsilon \in \{0,1\}\) guarantees that these vectors are indeed in \(\mathbb {L}^{n+1}\).

First, notice that the determinant in (16) equals 1 for all places v. Thus (16) reduces to

$$\begin{aligned} \prod _{v\in S}\prod _{i=0}^{n}\frac{\vert l_{i,v}(\mathbf{y})\vert _{v}}{\vert \mathbf{y}\vert _{v}} < \mathcal {H}(\mathbf{y})^{-(n+1)-\delta }, \end{aligned}$$

and the double product on the left-hand side can be split up into

$$\begin{aligned} \vert c - \sqrt{f_1 \alpha _1^\epsilon } y_0 - \sqrt{f_1 \alpha _1^\epsilon } \sum _{j=1}^{n-1} d_{j} y_j \vert _\infty \cdot \prod _{\begin{array}{c} v \in M_{\mathbb {L}, \infty }, v \ne \infty \end{array}} \vert c \vert _v \cdot \prod _{v \in S \backslash M_{\mathbb {L}, \infty }} \vert c \vert _v \cdot \prod _{j=1}^{n-1} \prod _{v \in S} \vert y_j \vert _v. \end{aligned}$$

Now notice that the last double product equals 1 due to the Product Formula and that

$$\begin{aligned} \prod _{v \in S \backslash M_{\mathbb {L}, \infty }} \vert c \vert _v \le 1, \end{aligned}$$

since \(c \in \mathbb {Z}\). An upper bound on the number of infinite places in \(\mathbb {L}\) is k! and hence

$$\begin{aligned} \prod _{\begin{array}{c} v \in M_{\mathbb {L}, \infty }, v \ne \infty \end{array}} \vert c \vert _v< & {} \left( \frac{(F_y - 1) (F_z - 1)}{F_x - 1} \right) ^{k!} \\\le & {} \left( f_1 \alpha _1^y + \cdots + f_k \alpha _k^y - 1\right) ^{k!} \left( f_1 \alpha _1^z + \cdots + f_k \alpha _k^z - 1\right) ^{k!} \\\le & {} (1 \cdot 2^{\Vert x \Vert } + 1/2)^{2 \cdot k!} \end{aligned}$$

using (6) and (5). Finally, the first factor is just

$$\begin{aligned} \Big | \sqrt{f_1 \alpha _1^\epsilon } \alpha ^{(-x+y+z-\epsilon )/2} \sum _{j \ge n} d_j M_j \Big |, \end{aligned}$$

which, by (13), is smaller than some expression of the form \(C_2 \alpha _1^{-T \Vert x\Vert / C_1}\). Therefore, we have

$$\begin{aligned} \prod _{v\in S}\prod _{i=0}^{n}\frac{\vert l_{i,v}(\mathbf{y})\vert _{v}}{\vert \mathbf{y}\vert _{v}} < C_2 \alpha _1^{-T \Vert x\Vert / C_1} \cdot (2^{\Vert x \Vert } + 1/2)^{2 \cdot k!}. \end{aligned}$$

Now we choose T (and the corresponding n) in such a way that

$$\begin{aligned} C_2 \alpha _1^{-T\Vert x\Vert /C_1} < \alpha _1^{-\frac{T\Vert x\Vert }{2C_1}} \end{aligned}$$

and

$$\begin{aligned} (2^{\Vert x\Vert } + 1/2)^{2 \cdot k!} < \alpha _1^{\frac{T\Vert x\Vert }{4C_1}} \end{aligned}$$

holds. Then we can write

$$\begin{aligned} \prod _{v\in S}\prod _{i=0}^{n}\frac{\vert l_{i,v}(\mathbf{y})\vert _{v}}{\vert \mathbf{y}\vert _{v}} < \alpha _1^{\frac{-T \Vert x\Vert }{2C_1}}. \end{aligned}$$
(17)

For the height of our vector \(\mathbf{y}\), we estimate

$$\begin{aligned} {\mathcal {H}}(\mathbf{y})\le & {} C_3 \cdot {\mathcal {H}}(c) \cdot {\mathcal {H}}\left( \alpha _1^\frac{-x+y+z-\epsilon }{2}\right) ^n \cdot \prod _{j=1}^{n-1} \mathcal {H}(M_j) \\\le & {} C_3(2^{\Vert x\Vert } + 1/2)^{k!} \prod _{j=1}^{n-1} \alpha _1^{C_4 \Vert x\Vert } \\\le & {} \alpha _1^{C_5 \Vert x\Vert }, \end{aligned}$$

with suitable constants \(C_3, C_4, C_5\). For the second estimate, we used that

$$\begin{aligned} {\mathcal {H}}(M_j) \le {\mathcal {H}}(\alpha _1)^{C_{\alpha _1}(\mathbf{x})}{\mathcal {H}}(\alpha _2)^{C_{\alpha _2}(\mathbf{x})} \cdots {\mathcal {H}}(\alpha _k)^{C_{\alpha _k}(\mathbf{x})} \end{aligned}$$

and bounded it by the maximum of those expressions. Furthermore we have

$$\begin{aligned} {\mathcal {H}}\left( \alpha _1^\frac{-x+y+z-\epsilon }{2}\right) ^n \le \alpha _1^{n \Vert x\Vert }, \end{aligned}$$

which just changes our constant \(C_4\).

Now finally, the estimate

$$\begin{aligned} \alpha _1^{-\frac{T \Vert x\Vert }{2C_1}} \le \alpha _1^{-\delta C_5\Vert x\Vert } \end{aligned}$$

is satisfied when we pick \(\delta \) small enough.

So all the conditions for the Subspace theorem are met. Since we assumed that there are infinitely many solutions (x, y, z) of (1), we obtain infinitely many vectors \(\mathbf{y}\) satisfying (16), and we can conclude that all of them lie in finitely many proper linear subspaces. Therefore, there must be at least one proper linear subspace which contains infinitely many of them, and we see that there exist a finite set \(J_c\) and (new) coefficients \(e_j\) for \(j\in J_c\) in \({\mathbb {L}}\) such that we have

$$\begin{aligned} c=\alpha _1^{(-x+y+z-\epsilon )/2} \left( e_0+\sum _{j\in J_c} e_j M_j\right) \end{aligned}$$
(18)

with (new) non-zero coefficients \(e_j\) and monomials \(M_j\) as before.

Likewise, we can find finite expressions of this form for a and b.

4 Proof of the Theorem

We use the following parametrization lemma:

Lemma 3

Suppose we have infinitely many solutions of (1). Then there exists a line in \(\mathbb {R}^3\) given by

$$\begin{aligned} x(t) = r_1 t + s_1, \quad y(t) = r_2 t + s_2, \quad z(t) = r_3 t + s_3 \end{aligned}$$

with rationals \(r_1, r_2, r_3, s_1, s_2, s_3\), such that infinitely many of the solutions (xyz) are of the form (x(n), y(n), z(n)) for some integer n.

Proof

Assume that (1) has infinitely many solutions. We already deduced in Sect. 3 that c can be written in the form

$$\begin{aligned} c=\alpha _1^{(-x+y+z-\epsilon )/2} \left( e_{c,0}+\sum _{j\in J_c} e_{c,j} M_{c,j}\right) \end{aligned}$$

with \(J_c\) being a finite set, \(e_{c,j}\) being coefficients in \(\mathbb {L}\) for \(j \in J_c \cup \{0\}\) and \(M_{c,j} = \prod _{i=1}^k \alpha _i^{L_{c,i,j}(\mathbf{x})}\) with \(\mathbf{x}=(x,y,z)\). In the same manner, we can write

$$\begin{aligned} b=\alpha _1^{(x-y+z-\epsilon )/2} \left( e_{b,0}+\sum _{j\in J_b} e_{b,j} M_{b,j}\right) . \end{aligned}$$

Since \(1 + bc = F_z = f_1 \alpha _1^z + \cdots + f_k \alpha _k^z\), we get

$$\begin{aligned} f_1 \alpha _1^z + \cdots + f_k \alpha _k^z - \alpha _1^{z-\epsilon } \left( e_{b,0}+\sum _{j\in J_b} e_{b,j} M_{b,j}\right) \left( e_{c,0}+\sum _{j\in J_c} e_{c,j} M_{c,j}\right) = 1. \end{aligned}$$
(19)

Substituting

$$\begin{aligned} \alpha _k = \frac{(-1)^{k-1}}{\alpha _1 \cdots \alpha _{k-1}} \end{aligned}$$

into (19), we obtain an equation of the form

$$\begin{aligned} \sum _{j \in J} e_j \alpha _1^{L_{1,j}(\mathbf{x})} \cdots \alpha _{k-1}^{L_{k-1,j}(\mathbf{x})} = 0, \end{aligned}$$
(20)

where again J is some finite set, \(e_j\) are non-zero coefficients in \(\mathbb {L}\) and \(L_{i,j}\) are linear forms in \(\mathbf{x}\) with integer coefficients.

This is an S-unit equation with respect to the multiplicative group generated by \(\{\alpha _1, \dots , \alpha _k, -1\}\). We may assume that infinitely many of the solutions \(\mathbf{x}\) are non-degenerate solutions of (20), by replacing the equation by one given by a suitable vanishing subsum if necessary.

We may also assume that \((L_{1,i}, \dots , L_{k-1,i}) \ne (L_{1,j}, \dots , L_{k-1,j})\) for all \(i \ne j\), because otherwise we could simply merge the two corresponding terms.

Therefore, for \(i \ne j\), the theorem on non-degenerate solutions to S-unit equations (see [6]) yields that the values

$$\begin{aligned} \alpha _1^{L_{1,i}(\mathbf{x}) - L_{1,j}(\mathbf{x})} \cdots \alpha _{k-1}^{L_{k-1,i}(\mathbf{x}) - L_{k-1,j}(\mathbf{x})} \end{aligned}$$

lie in a finite set of numbers. By Lemma 1, \(\alpha _1, \dots , \alpha _{k-1}\) are multiplicatively independent, and thus the exponents \((L_{1,i} - L_{1,j})(\mathbf{x}), \ldots , (L_{k-1,i} - L_{k-1,j})(\mathbf{x})\) take the same values for infinitely many \(\mathbf{x}\). Since these linear forms are not all identically zero, this implies that there is some non-trivial linear form L defined over \(\mathbb {Q}\) and some constant \(c_0\in \mathbb {Q}\) with \(L(\mathbf{x}) = c_0\) for infinitely many \(\mathbf{x}\). So there exist rationals \(r_i, s_i, t_i\) for \(i = 1, 2, 3\) such that we can parametrize

$$\begin{aligned} x = r_1 p + s_1 q + t_1, \quad y = r_2 p + s_2 q + t_2, \quad z = r_3 p + s_3 q + t_3 \end{aligned}$$

with infinitely many pairs \((p,q) \in \mathbb {Z}^2\).

We can assume that \(r_i, s_i, t_i\) are all integers. If not, we define \(\varDelta \) as the least common multiple of the denominators of \(r_i, s_i\) (\(i= 1, 2,3\)) and let \(p_0, q_0\) be such that for infinitely many pairs (pq) we have \(p \equiv p_0 \mod \varDelta \) and \(q \equiv q_0 \mod \varDelta \). Then \(p = p_0 + \varDelta \lambda , q = q_0 + \varDelta \mu \) and

$$\begin{aligned} x= & {} (r_1\varDelta ) \lambda +(s_1\varDelta ) \mu +(r_1p_0+s_1q_0+t_1)\\ y= & {} (r_2\varDelta ) \lambda +(s_2\varDelta ) \mu +(r_2p_0+s_2q_0+t_2)\\ z= & {} (r_3\varDelta ) \lambda +(s_3\varDelta ) \mu +(r_3p_0+s_3q_0+t_3). \end{aligned}$$

Since \(r_i \varDelta \), \(s_i \varDelta \) and xyz are all integers, \(r_i p_0 + s_i q_0 + t_i\) are integers as well. Replacing \(r_i\) by \(r_i \varDelta \), \(s_i\) by \(s_i \varDelta \) and \(t_i\) by \(r_i p_0 + s_i q_0 + t_i\), we can indeed assume that all coefficients \(r_i, s_i, t_i\) in our parametrization are integers.

Using a similar argument as in the beginning of the proof, we get that our equation is of the form

$$\begin{aligned} \sum _{j \in J} e'_j \alpha _1^{L'_{1,j}(\mathbf{r})} \cdots \alpha _{k-1}^{L'_{k-1,j}(\mathbf{r})} = 0, \end{aligned}$$

where \(\mathbf{r} := (\lambda , \mu )\), J is a finite set of indices, \(e_j'\) are new non-zero coefficients in \(\mathbb {L}\) and \(L'_{i,j}(\mathbf{r})\) are linear forms in \(\mathbf{r}\) with integer coefficients. Again we may assume that we have \((L'_{1,i}(\mathbf{r}), \dots , L'_{k-1,i}(\mathbf{r})) \ne (L'_{1,j}(\mathbf{r}), \dots , L'_{k-1,j}(\mathbf{r}))\) for any \(i \ne j\).

Applying the theorem of non-degenerate solutions to S-unit equations once more, we obtain a finite set of numbers \(\varLambda \), such that for some \(i \ne j\), we have

$$\begin{aligned} \alpha _1^{(L'_{1,i} - L'_{1,j})(\mathbf{r})} \cdots \alpha _{k-1}^{(L'_{k-1,i} - L'_{k-1,j})(\mathbf{r})} \in \varLambda . \end{aligned}$$

So every \(\mathbf{r}\) lies on one of a finite collection of lines, and since we had infinitely many \(\mathbf{r}\), there must be some line which contains infinitely many solutions, which proves our lemma. \(\square \)

We apply this lemma and define \(\varDelta \) as the least common multiple of the denominators of \(r_1, r_2, r_3\). Infinitely many of our n will be in the same residue class modulo \(\varDelta \), which we shall call r. Writing \(n = m \varDelta + r\), we get

$$\begin{aligned} (x,y,z) = ((r_1 \varDelta ) m + (r r_1 + s_1), (r_2 \varDelta ) m + (r r_2 + s_2), (r_3 \varDelta ) m + (r r_3 + s_3) ). \end{aligned}$$

Replacing n by m, \(r_i\) by \(r_i \varDelta \) and \(s_i\) by \(r r_i + s_i\), we can even assume that \(r_i, s_i\) are integers. So we have

$$\begin{aligned} \frac{-x+y+z-\epsilon }{2} = \frac{(-r_1+r_2+r_3)m}{2} + \frac{-s_1+s_2+s_3 - \epsilon }{2}. \end{aligned}$$

This holds for infinitely many m, so we can choose a still infinite subset such that all of them are in the same residue class \(\delta \) modulo 2 and we can write \(m = 2 \ell + \delta \) with fixed \(\delta \in \{0,1\}\). Thus we have

$$\begin{aligned} \frac{-x+y+z-\epsilon }{2} = (-r_1 + r_2 + r_3) \ell + \eta , \end{aligned}$$

where \(\eta \in \mathbb {Z}\) or \(\eta \in \mathbb {Z} + 1/2\).

Using this representation, we can write (18) as

$$\begin{aligned} c(\ell ) = \alpha _1^{(-r_1+r_2+r_3)\ell + \eta } \left( e_0+\sum _{j\in J_c} e_j M_j\right) \end{aligned}$$
(21)

for infinitely many \(\ell \), where

$$\begin{aligned} M_j=\prod _{i=1}^k \alpha _i^{L_{i,j}(\mathbf{x})}, \end{aligned}$$

as before and \(\mathbf{x} = \mathbf{x}(\ell ) = (x(2\ell + \delta ), y(2\ell + \delta ), z(2\ell + \delta ))\). From this, we will now derive a contradiction.

First we observe that there are only finitely many solutions of (21) with \(c(\ell ) = 0\). This can be shown by using the fact that a simple non-degenerate linear recurrence has only finite zero-multiplicity (see [6] for an explicit bound). We apply this statement to the linear recurrence in \(\ell \); it only remains to check that no quotient of two distinct roots of the form \(\alpha _1^{L_{1,i}(\mathbf{x}(\ell ))} \cdots \alpha _k^{L_{k,i}(\mathbf{x}(\ell ))}\) is a root of unity or, in other words, that

$$\begin{aligned} \left( \alpha _1^{m_1} \alpha _2^{m_2} \cdots \alpha _k^{m_k}\right) ^n = 1 \end{aligned}$$
(22)

has no solutions with \(n \in \mathbb {Z}\setminus \{0\}\), \(m_1 < 0\) and \(m_i > 0\) for \(i = 2, \dots , k\). Assume that relation (22) holds. Replacing \(\alpha _1\) by \((-1)^{k-1}(\alpha _2\cdots \alpha _k)^{-1}\) and squaring gives

$$\begin{aligned} \left( \alpha _2^{2(m_2-m_1)}\cdots \alpha _k^{2(m_k-m_1)}\right) ^n=1. \end{aligned}$$

Applying Lemma 1 to the set \(\{\alpha _2,\ldots ,\alpha _k\}\), we get \(2n(m_2-m_1)=\cdots =2n(m_k-m_1)=0\); since \(n\ne 0\), this gives \(m_1=m_2=\cdots =m_k\), which is impossible because of the signs of \(m_1\) and \(m_2,\ldots ,m_k\).

So we have confirmed that \(c(\ell ) \ne 0\) holds for infinitely many of our solutions. We use (12) and write

$$\begin{aligned} (F_x - 1) c^2 = (F_y - 1)(F_z - 1). \end{aligned}$$
(23)

Then we insert the finite expansion (21) in \(\ell \) for c into (23). Furthermore, we use the Binet formula (3) and write \(F_x, F_y, F_z\) as power sums in x, y and z, respectively. Using the parametrization \((x,y,z) = (r_1 m + s_1, r_2 m + s_2, r_3 m + s_3)\) with \(m = 2\ell \) or \(m = 2\ell +1\) as above, we obtain expansions in \(\ell \) on both sides of (23). Since there must be infinitely many solutions in \(\ell \), the largest terms on both sides have to grow at the same rate. In order to find the largest terms, we distinguish some cases: If we assume that \(e_0 \ne 0\) for infinitely many of our solutions, then \(e_0 \alpha _1^{(-x+y+z-\epsilon )/2}\) is the largest term in the expansion of c and we have

$$\begin{aligned} f_1 \alpha _1^x e_0^2 \alpha _1^{-x+y+z-\epsilon } = f_1 \alpha _1^y f_1 \alpha _1^z. \end{aligned}$$

It follows that \(e_0^2 = f_1 \alpha _1^{\epsilon }\). Note that the case \(e_0 = 0\) for infinitely many of our solutions is impossible: then the right-hand side of (23) would grow faster than the left-hand side, so (23) could hold for only finitely many of our \(\ell \). Hence \(e_0 = \sqrt{f_1 \alpha _1^\epsilon }\) with \(\epsilon \in \{0,1\}\). This contradicts the following lemma, which turns out to be slightly more involved than in the special case of Tribonacci numbers (cf. [8]).

Lemma 4

\(\sqrt{f_1} \notin \mathbb {L}\) and \(\sqrt{f_1\alpha _1} \notin \mathbb {L}\).

Proof

Suppose that \(\sqrt{f_1\alpha _1^{\epsilon }}\in \mathbb {L}\) for some \(\epsilon \in \{0,1\}\). Then there is \(\beta \in \mathbb {L}\) such that \(f_1\alpha _1^\epsilon =\beta ^2\). Using (4), we get that

$$\begin{aligned} \frac{(\alpha _1-1)\alpha _1^{-\epsilon }}{2+(k+1)(\alpha _1-2)}=\beta ^2. \end{aligned}$$

Computing norms over \({\mathbb {Q}}\), we get that

$$\begin{aligned} \left| \frac{N_{{\mathbb {L}}/{\mathbb {Q}}}(\alpha _1)^{-\epsilon } N_{{\mathbb {L}}/{\mathbb {Q}}}(\alpha _1-1)}{N_{{\mathbb {L}}/{\mathbb {Q}}}(2+(k+1)(\alpha _1-2))}\right| = N_{{\mathbb {L}}/{\mathbb {Q}}}(\beta )^2=\square , \end{aligned}$$
(24)

where \(\square \) denotes a rational square. Note that

$$\begin{aligned} \left| N_{{\mathbb {L}}/{\mathbb {Q}}}(\alpha _1)\right| =\left| \prod _{i=1}^k \alpha _i\right| =|(-1)^k \cdot (-1)|=1, \end{aligned}$$

and

$$\begin{aligned} \left| N_{{\mathbb {L}}/{\mathbb {Q}}}(\alpha _1-1)\right| =\left| \prod _{i=1}^k (\alpha _i-1)\right| =\left| \varPsi _k(1)\right| =k-1, \end{aligned}$$

and finally that

$$\begin{aligned} \left| N_{{\mathbb {L}}/{\mathbb {Q}}}(2+(k+1)(\alpha _1-2))\right| &= \left| N_{{\mathbb {L}}/{\mathbb {Q}}}((k+1)\alpha _1-2k)\right| \\ &= \left| \prod _{i=1}^k ((k+1)\alpha _i-2k)\right| \\ &= (k+1)^{k} \left| \prod _{i=1}^k (2k/(k+1)-\alpha _i)\right| \\ &= (k+1)^{k} \left| \varPsi _k(2k/(k+1))\right| \\ &= (k+1)^k\left| \frac{X^{k+1}-2X^k+1}{X-1}\Big |_{X=2k/(k+1)}\right| \\ &= \frac{2^{k+1} k^k-(k+1)^{k+1}}{k-1}. \end{aligned}$$
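
These norm values can be spot-checked numerically for small k by approximating the roots \(\alpha _1,\ldots ,\alpha _k\) of the characteristic polynomial \(x^k-x^{k-1}-\cdots -x-1\). The following sketch is an illustration, not part of the paper; the helper name `check_norms` is ours, and it assumes `numpy` is available.

```python
import numpy as np

def check_norms(k):
    """Numerically verify the three norm computations for a given k >= 2."""
    # Roots of the characteristic polynomial x^k - x^{k-1} - ... - x - 1.
    alphas = np.roots([1] + [-1] * k)

    # |N(alpha_1)| = |prod alpha_i| = 1
    assert abs(abs(np.prod(alphas)) - 1) < 1e-6

    # |N(alpha_1 - 1)| = |Psi_k(1)| = k - 1
    assert abs(abs(np.prod(alphas - 1)) - (k - 1)) < 1e-6

    # |N((k+1) alpha_1 - 2k)| = (2^{k+1} k^k - (k+1)^{k+1}) / (k - 1)
    lhs = abs(np.prod((k + 1) * alphas - 2 * k))
    rhs = (2 ** (k + 1) * k ** k - (k + 1) ** (k + 1)) / (k - 1)
    assert abs(lhs - rhs) < 1e-6 * rhs

for k in range(2, 10):
    check_norms(k)
```

For instance, for \(k=2\) the third norm is \(|(3\alpha _1-4)(3\alpha _2-4)|=5\), matching \((2^3\cdot 2^2-3^3)/1=5\).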

Hence, equation (24) leads to

$$\begin{aligned} \frac{2^{k+1} k^k-(k+1)^{k+1}}{(k-1)^2}=\square . \end{aligned}$$

Since \((k-1)^2\) is a perfect square, it follows that

$$\begin{aligned} 2^{k+1} k^k-(k+1)^{k+1}=w^2 \end{aligned}$$
(25)

for some integer w. But this equation has no integer solutions, as is proved in the theorem below. This concludes the proof. \(\square \)
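
Independently of the proof below, the claim can be spot-checked for small k with exact integer arithmetic. The following sketch is not from the paper; it uses `math.isqrt` for an exact perfect-square test, and the helper name `is_square` is ours.

```python
from math import isqrt

def is_square(n):
    """Exact perfect-square test for a non-negative integer."""
    r = isqrt(n)
    return r * r == n

# 2^{k+1} k^k - (k+1)^{k+1} is positive for k >= 2 and, per Theorem 2,
# never a perfect square; verify this for a range of k using exact
# integer arithmetic.
for k in range(2, 200):
    n = 2 ** (k + 1) * k ** k - (k + 1) ** (k + 1)
    assert n > 0 and not is_square(n), k
```

For example, \(k=2\) gives \(2^3\cdot 2^2-3^3=5\) and \(k=3\) gives \(2^4\cdot 3^3-4^4=176\), neither of which is a square.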

In order to finish the proof, we have the following result, which might be of independent interest since particular cases were considered before in [2] and [16].

Theorem 2

The Diophantine equation (25) has no positive integer solutions \((k,w)\) with \(k\ge 2\).

Proof

The cases \(k\equiv 1,2\pmod 4\) have already been treated in both [2] and [16]. We treat the remaining cases. If \(k\equiv 0\pmod 4\), then the left-hand side of (25) is congruent to \(-1\pmod 4\) and therefore cannot be a square. Finally, assume that \(k\equiv 3\pmod 4\). Then \(k+1\) is even, so the left-hand side of (25) equals \(2^{k+1}\left( k^k-((k+1)/2)^{k+1}\right) \); hence \(2^{(k+1)/2}\mid w\), and putting \(w_1=w/2^{(k+1)/2}\), we get

$$\begin{aligned} k^k-((k+1)/2)^{k+1}=w_1^2. \end{aligned}$$

We then get

$$\begin{aligned} k^k = w_1^2+((k+1)/2)^{k+1}. \end{aligned}$$
(26)

Note that the two numbers on the right-hand side of (26) are coprime: if a prime p divides \(w_1\) and \((k+1)/2\), then p divides the left-hand side of (26), so p divides both k and \((k+1)/2\), and hence also \(k-2\cdot ((k+1)/2)=-1\), a contradiction. Since \(k+1\) is even, \(((k+1)/2)^{k+1}\) is a perfect square, so the right-hand side of (26) is a sum of two coprime squares; therefore all of its odd prime factors must be congruent to 1 modulo 4. This contradicts the fact that the left-hand side \(k^k\) with \(k\equiv 3\pmod 4\) has a prime factor congruent to 3 modulo 4. This finishes the proof of this theorem. \(\square \)
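
The congruence facts used in this case analysis can be checked mechanically. The sketch below is an illustration, not part of the paper; it verifies, with exact integer arithmetic, the residue \(-1\pmod 4\) for \(k\equiv 0\pmod 4\), the power of 2 extracted when \(k\equiv 3\pmod 4\), and the coprimality \(\gcd (k,(k+1)/2)=1\) underlying (26).

```python
from math import gcd

for k in range(2, 100):
    n = 2 ** (k + 1) * k ** k - (k + 1) ** (k + 1)
    if k % 4 == 0:
        # Left-hand side of (25) is -1 mod 4, hence not a square.
        assert n % 4 == 3
    elif k % 4 == 3:
        # 2^{k+1} exactly divides n (the cofactor is odd), so any w
        # with w^2 = n satisfies 2^{(k+1)/2} | w.
        assert n % 2 ** (k + 1) == 0
        assert (n // 2 ** (k + 1)) % 2 == 1
        # The two terms on the right-hand side of (26) share no prime:
        # gcd(k, (k+1)/2) = 1.
        assert gcd(k, (k + 1) // 2) == 1
```

For example, \(k=3\) gives \(n=176=2^4\cdot 11\) with odd cofactor 11, and \(\gcd (3,2)=1\).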