1 Introduction

Once we interpret the famous product

$$\begin{aligned} (1-y) (1-y^2) (1-y^3)(1-y^4)(1-y^5) \cdots \end{aligned}$$
(1)

of Euler in the setting of integer partitions, the nature of the resulting series boils down to a relatively simple sign-reversing pairing of partitions, showing that the series expansion has coefficients from \( \{ -1, 0, 1\}\). The proof is made even simpler by the geometric representation of partitions using Ferrers’ diagrams, which makes it clear which partitions cannot be paired, and provides an explicit formula for the exponents of the terms with \( \pm 1 \) coefficients. This, of course, is the beautiful pentagonal number theorem of Euler [8], together with Fabian Franklin’s combinatorial proof of the result [10].

The reciprocal of Euler’s product is the generating function of unrestricted integer partitions, and the above determination of the coefficients gives the surprising recursion for the partition function where the indices decrease by generalized pentagonal numbers.

The starting point of the present paper is the consideration of the analogous product

$$\begin{aligned} (1-y) (1-y^2) (1-y^3)(1-y^5)(1-y^8)(1-y^{13}) \cdots , \end{aligned}$$
(2)

where the exponents run through the sequence of Fibonacci numbers \( (F_n )_{n \ge 2}\): \(1, 2, 3, 5, 8, 13, \ldots \) instead.

The main result of the paper is that the coefficients of the expansion of (2) are also from \( \{ -1, 0, 1\}\) (Theorem 9.1). They can be efficiently calculated, and straightforward formulas can be obtained when various interesting sequences of integers are taken as exponents.

There are some surprising properties of the expansion in (2). For one thing, the indices of the so-called canonical Fibonacci representation of an exponent n play a central role in the calculation of the corresponding expansion coefficient in the development of (2). The canonical representation of n (also called the Zeckendorf representation) is the unique representation \( n = F_{k_1} + F_{k_2} + \cdots + F_{k_r}\) where \(k_j - k_{j+1} \ge 2\) for \(j = 1, 2, \ldots , r-1\) and \(k_r \ge 2\). The partition associated to n is then \(( k_1 \ge k_2 \ge \cdots \ge k_r)\) with parts differing by at least two, and the smallest part \(\ge 2\). For the analysis of the coefficients of the expansion of (2) the parts can be taken to be remainders \( k_i\) modulo 4; we show that the coefficient of \(y^n\) in the expansion only depends on the vector \(\varvec{x}= ( x_1, x_2, \ldots , x_r)\) of these remainders (Theorem 8.1). Therefore, the expansion coefficients are determined by such vectors \(\varvec{x}\), or equivalently strings \(x = x_1 x_2 \cdots x_r\) with \( x_i \in \{0,1,2,3 \}\). Also, there are conditions in terms of forbidden subwords of the string x which guarantee that the associated expansion coefficient vanishes (Theorem 13.2).

It is possible to compute the value of the coefficient of \(y^n\) in (2) explicitly in various special cases. For instance, if n has canonical representation \( n = F_{k_1} + F_{k_2} + \cdots + F_{k_r}\) with \(r \ge 1\), then closed form expressions can be found for \( r=1,2,3\) as given in (21),(25) and (30), respectively. Another type of result gives that if for some fixed \( p \in \{0,1,2,3\}\), \( k_i \equiv p \!\! \mod 4\) for all i, then the coefficient of \(y^n\) is given by 1, \(-\left\lfloor \frac{p}{2} \right\rfloor \), \(\left\lfloor \frac{p}{2} \right\rfloor -1 \), depending on whether \( r \!\! \mod 3 \) is 0, 1, or 2, respectively (Theorem 12.1). Another result of this type is that the coefficient vanishes whenever \(r \ge 2\) and \( k_1-k_2 \equiv 3 \!\! \mod 4\) (Corollary 11.1).

There are other combinatorial aspects of the expansion (2). For example, the coefficients \( \vartheta (\varvec{x})\) over all \(\varvec{x}\) (i.e., over all positive integers) are completely determined by a monoid of 25 matrices \(M_1, M_2, \ldots , M_{25}\), each a \(2\times 2\) matrix determining a formula for the coefficients depending on the canonical representation of the exponent n (Theorems 11.1, 13.1).

The monoid associated with these 25 matrices naturally defines a finite Markov chain, and assuming that the elements of the vector of residues \( \varvec{x}\) are picked independently and uniformly, asymptotic probabilities for the values of the coefficients can be determined as a function of the number of summands r of the canonical representation of the exponent (Theorem 17.1).

2 Related Work

The interest in this subject was initiated by the work of Carlitz, especially by his studies on Fibonacci representations [3, 4], and the properties of the product \((1+y) (1+y^2) (1+y^3)(1+y^5) \cdots \), which is the expansion (2) studied here but with plus signs. We make use of his idea of adding an auxiliary variable to the generating function to derive the central recursions. Carlitz also proved a curious result on a property of Fibonacci representations that we make use of repeatedly.

In addition to Carlitz, Klarner [14, 15] also studied the number of representations of an integer in terms of Fibonacci numbers, compiling tables of data on these quantities. Earlier work on Fibonacci representations can be found in Daykin [5], Hoggatt [11], and Ferns [9], among many others. In fact, the fascination with the pretty numerical properties of the Fibonacci sequence has produced a wealth of results which are too numerous to be listed here.

3 Preliminaries

The well-known Fibonacci sequence is defined by \(F_0=0\), \(F_1 = 1\), and \( F_n=F_{n-1} + F_{n-2}\) for \( n \ge 2\).

A monoid is a set with an associative operation having an identity element. The free monoid on a set \(\Sigma \) is the set \(\Sigma ^*\) of all finite sequences of zero or more elements of \(\Sigma \). We will also denote \( \Sigma ^*\) by \( \langle \Sigma \rangle \) and refer to it as the monoid generated by \( \Sigma \). The elements of \(\Sigma ^*\) are called words or strings. The unique string in \(\Sigma ^*\) with no elements is denoted by \( \epsilon \). A string u is a factor or a subword of w if there exist strings \(x,y \in \Sigma ^*\) with \(w = xuy\).

A partition \(\lambda \) of a positive integer n is a weakly decreasing sequence of positive integers \(\lambda =(\lambda _{1}\ge \lambda _{2}\ge \cdots \ge \lambda _r)\) with \(n=\lambda _{1}+\lambda _{2}+\cdots +\lambda _r\). Each \(\lambda _{i}\) is a part of \(\lambda \) and r is the number of parts of \( \lambda \).

A finite Markov chain is specified by a number of states \(s_1, s_2, \ldots , s_m\), and a sequence of steps through these states so that when the process is in state \(s_i\), there is a probability \(p_{ij}\) that in the next step, it will be in state \(s_j\). The \( m \times m\) matrix \( P = [p_{ij}]\) is called the transition matrix of the chain. Its entries are nonnegative with each row sum equal to 1. To specify the process completely, we provide P and a starting state. The probability of moving from state \(s_i\) to \(s_j\) in k steps is the ijth entry \( p_{ij}^{(k)}\) of the matrix \(P^k\). A set of states communicate with each other if the process can move from any state to any other in the set. If some power of the transition matrix has all positive entries, then the chain is called regular. A state is absorbing if once entered, it cannot be left. A chain is an absorbing chain if it has at least one absorbing state and if it is possible to reach an absorbing state (possibly in many steps) from any state. We refer the reader to [13] for details.
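As a minimal illustration (the two-state transition matrix below is a hypothetical example, not one arising from the paper), the k-step probabilities can be read off from the matrix power \(P^k\):

```python
def mat_mult(A, B):
    # product of two matrices given as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_power(P, k):
    n = len(P)
    R = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(k):
        R = mat_mult(R, P)
    return R

# hypothetical 2-state chain; each row of P sums to 1
P = [[0.9, 0.1],
     [0.5, 0.5]]
P3 = mat_power(P, 3)  # P3[i][j]: probability of moving from s_i to s_j in 3 steps
```

Here every entry of P is already positive, so this particular chain is regular.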

4 Euler’s Expansion

Our starting point is the famous product of Euler [8]. A wonderful account of this product and its impact can be found in Andrews [2].

The combinatorial interpretation of the coefficients of the power series expansion of (3) and Franklin’s proof of it are a favorite topic in any introduction to combinatorics course and can be found in many sources in the literature (see for example [1, 7, 16]). Computing a few terms of the product, we obtain the following expansion.

$$\begin{aligned} \prod _{k\ge 1} (1 - y^k )\! =\! 1-y\!-\!y^2\!+\!y^5\!+\!y^7\!-\!y^{12}\!-\!y^{15}\!+\!y^{22}\!+\!y^{26}\!-\!y^{35}\!-\!y^{40}\!+\!y^{51} \pm \cdots \nonumber \\ \end{aligned}$$
(3)

Here, the coefficients lie in \(\{-1,0,1\}\) and show quite simple periodic behavior. The exponents of the \(\pm 1\) terms are the sequence of pentagonal and second pentagonal numbers. The pentagonal numbers are \( 1, 5, 12, 22, 35, \ldots \), given by the formula \( {\textstyle \frac{1}{2}}m(3m-1)\), and the second pentagonal numbers are \(2, 7, 15, 26, 40, \ldots \), given by the formula \( {\textstyle \frac{1}{2}}m(3m+1)\), both for \( m \ge 1\).

Euler’s pentagonal number theorem gives the expansion of the product in (3) in terms of these numbers as shown in (4).

$$\begin{aligned} \prod _{k\ge 1} (1 - y^k ) = 1 + \sum _{m\ge 1} (-1)^m \left( y^{{\textstyle \frac{1}{2}}m(3m-1)} + y^{{\textstyle \frac{1}{2}}m(3m+1)} \right) \,. \end{aligned}$$
(4)

As is well known, the reciprocal of (3) is the generating function of the number p(n) of unrestricted partitions, which in view of (4) gives Euler’s surprising recursion for it.
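Concretely, comparing coefficients in \(\left( \sum _{n} p(n) y^n \right) \prod _{k\ge 1} (1-y^k) = 1\) and using (4) gives \(p(n) = \sum _{m\ge 1} (-1)^{m+1} \left( p\big (n - {\textstyle \frac{1}{2}}m(3m-1)\big ) + p\big (n - {\textstyle \frac{1}{2}}m(3m+1)\big ) \right) \). A short Python sketch of this recursion:

```python
def partition_numbers(N):
    """p(0), ..., p(N) via Euler's pentagonal number recursion."""
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        m, total = 1, 0
        while m * (3 * m - 1) // 2 <= n:
            sign = 1 if m % 2 == 1 else -1
            total += sign * p[n - m * (3 * m - 1) // 2]      # pentagonal number
            if m * (3 * m + 1) // 2 <= n:
                total += sign * p[n - m * (3 * m + 1) // 2]  # second pentagonal
            m += 1
        p[n] = total
    return p
```

This computes p(n) in roughly \(O(n\sqrt{n})\) arithmetic operations, far faster than enumerating partitions.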

5 The Main Product

In analogy with Euler’s product where the exponents are \(1,2,3,4, \ldots \), we consider the infinite product where the exponents run through the Fibonacci numbers \((F_n)_{n\ge 2}\): \(1,2,3,5,8,13, \ldots \). A few terms of this expansion are

$$\begin{aligned} \prod _{k\ge 2} (1 - y^{F_k} ) = 1-y-y^2+y^4+y^7-y^8+y^{11}-y^{12}-y^{13}+y^{14}+y^{18} \pm \cdots \nonumber \\ \end{aligned}$$
(5)

Evidently the reciprocal of (5) is the generating function of integer partitions into parts that are Fibonacci numbers \(F_2, F_3, F_4, \ldots \).

Note that the Fibonacci number \(F_1 = 1\) is not used in (5) to avoid having two different kinds of 1, as we already have \(F_2 = 1\). Coefficients of the powers \(y^n\) in the expansion for the first few values are as shown in Table 1. There seems to be no simple periodic behavior of the location of the \( \pm 1\) coefficients.

Table 1 Partial list of exponents in (5) for which the coefficient is \(-1\), 0, and 1, respectively

We let v(n) denote the coefficient of \( y^n\) in the series expansion in (5) and refer to it as an expansion coefficient. Reading from the list in Table 1, we have \( v(0) = 1\), \(v(1)= -1\), \(v(2) = -1\), \(v(3)= 0 \), \(v(4) = 1\), etc., with this notation.
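The coefficients v(n) are easily generated by truncated multiplication of the factors in (5); a minimal Python sketch:

```python
def v_coeffs(N):
    """Coefficients v(0), ..., v(N) of prod_{k >= 2} (1 - y^{F_k})."""
    v = [0] * (N + 1)
    v[0] = 1
    a, b = 1, 2                      # F_2 = 1, F_3 = 2
    while a <= N:
        # multiply by the factor (1 - y^a), descending so old values are used
        for i in range(N, a - 1, -1):
            v[i] -= v[i - a]
        a, b = b, a + b
    return v
```

The output reproduces the terms displayed in (5).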

5.1 Fibonacci Representations

Every positive integer n can be written in terms of Fibonacci numbers as a sum

$$\begin{aligned} n = F_{k_1} + F_{k_2} + \cdots + F_{k_r}, \end{aligned}$$
(6)

where \( \, k_1> k_2> \cdots > k_r \ge 2 \, \) and r depends on n. We call (6) a Fibonacci representation, or simply a representation of n. By a theorem of Zeckendorf [17], the representation (6) is unique if we impose the conditions:

$$\begin{aligned} k_j - k_{j+1} \ge 2 ~~~( j = 1, 2, \ldots , r-1), ~~~ k_r \ge 2 \, . \end{aligned}$$
(7)

This unique Fibonacci representation of n will be called its canonical representation (it is also referred to as the Zeckendorf representation in the literature). To aid in our calculations, the following notation will be useful.

Given a vector of indices \(\varvec{k}\) satisfying (7), we let \( n( \varvec{k}) = n(k_1, k_2, \ldots , k_r) = F_{k_1} + F_{k_2} + \cdots + F_{k_r}\). In the notation \( n= n( \varvec{k}, k_{i+1} , \ldots , k_r)\), \(\varvec{k}\) denotes the vector of indices \( (k_1, k_2, \ldots , k_i)\). Additionally, for a vector \({\varvec{u}} = ( u_1, u_2, \ldots , u_s)\) and scalar \( \rho \), we set \( {\varvec{u}} - \rho = ( u_1 - \rho , u_2 - \rho , \ldots , u_s - \rho ) \,\).

Example 1

Take \(n=117\). Then \(n = F_{11}+F_8+F_5+F_3= n({\varvec{u}})\) where \({\varvec{u}} =(11,8,5,3)\). We have \(n({\varvec{u}} -1 ) = F_{10}+F_7+F_4+F_2= 72\). The vector \({\varvec{u}} -2 = (9,6,3,1)\) gives the representation of \(45 = F_9+F_6+F_3+F_1\), but this is not canonical because of the presence of \(F_1\), and \(F_9 + F_6+F_3+F_2\) is not canonical either because of the presence of consecutive indices. We write \(45 = n(9,6,4)\).
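The canonical representation can be computed greedily, repeatedly taking the largest Fibonacci number not exceeding what remains; a minimal Python sketch (the helper names are ours):

```python
def fib(k):
    # F_0 = 0, F_1 = 1, F_k = F_{k-1} + F_{k-2}
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def zeckendorf(n):
    """Indices k_1 > k_2 > ... > k_r >= 2 of the canonical representation."""
    k = 2
    while fib(k + 1) <= n:
        k += 1                       # largest index with F_k <= n
    indices = []
    while n > 0:
        if fib(k) <= n:
            indices.append(k)
            n -= fib(k)
            k -= 2                   # the remainder is < F_{k-1}, so gaps are >= 2
        else:
            k -= 1
    return indices
```

The greedy choice automatically enforces condition (7), since after subtracting \(F_k\) the remainder is smaller than \(F_{k-1}\).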

Example 2

There are a total of five Fibonacci representations of \(n=106\) as given below, the first being the canonical representation.

$$\begin{aligned}{} & {} F_{11}+F_7+F_4+F_2\\{} & {} F_{11}+F_6+F_5+F_4+F_2\\{} & {} F_{10}+F_9+F_7+F_4+F_2\\{} & {} F_{10}+F_9+F_6+F_5+F_4+F_2\\{} & {} F_{10}+F_8+F_7+F_6+F_5+F_4+F_2 \,. \end{aligned}$$

It is also common to use binary strings to encode such representations by using their Fibonacci indicator “digits,” where the rightmost digit corresponds to \(F_2\). For the above five representations of 106, these strings are 1000100101, 1000011101, 110100101, 110011101, and 101111101, respectively. The encoding of a canonical representation is called a Fibonacci string. Fibonacci strings of a given length form the vertices of a Fibonacci cube. Fibonacci cubes form a family of networks which has many interesting combinatorial properties [6, 12].
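The indicator-string encoding is easy to sketch in Python (the helper name is ours):

```python
def fib_string(indices):
    """Binary indicator string of a Fibonacci representation;
    the rightmost digit corresponds to F_2."""
    k1 = max(indices)
    return "".join("1" if k in indices else "0" for k in range(k1, 1, -1))
```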

Table 2 is a list of all Fibonacci representations of the integers \(1,2, \ldots , 12.\) As examples, for \(n=3\), there is a single representation with an even and a single representation with an odd number of terms, and therefore the coefficient of \( y^3\) in (5) is 0. For \(n=8\), there are two representations with an odd number of terms and one representation with an even number of terms, so the coefficient of \( y^8\) in (5) is \(-1\).

Table 2 Fibonacci representations of \(1, 2, \ldots , 12\), with the canonical representation given in boldface red
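Equivalently, v(n) counts the representations of n with an even number of terms minus those with an odd number. A brute-force Python check along these lines (the enumeration helper is ours):

```python
def fib(k):
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def representations(n, max_k=None):
    """All Fibonacci representations of n as decreasing index lists, indices >= 2."""
    if max_k is None:
        max_k = 2
        while fib(max_k + 1) <= n:
            max_k += 1
    if n == 0:
        return [[]]
    reps = []
    for k in range(max_k, 1, -1):
        if fib(k) <= n:
            for rest in representations(n - fib(k), k - 1):
                reps.append([k] + rest)
    return reps

def v_signed(n):
    # representations with an even number of terms count +1, odd count -1
    return sum((-1) ** len(r) for r in representations(n))
```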

If n has a Fibonacci representation as given in (6), then we use the following notation due to Carlitz:

$$\begin{aligned} e(n) = F_{k_1-1} + F_{k_2-1} + \cdots + F_{k_r-1} \,, \end{aligned}$$
(8)

with \(e(0) = 0\).

If \( n = n(k_1, k_2, \ldots , k_r) = n( \varvec{k},k_r)\) is the canonical representation of n, we see that

$$\begin{aligned} e( n ) = \left\{ \begin{array}{ll} n( \varvec{k}-1, k_r-1) &{} \text{ if } k_r > 2, \vspace{1mm}\\ n( \varvec{k}-1) + 1 &{} \text{ if } k_r =2 \,. \end{array} \right. \end{aligned}$$

Carlitz [3] proved the following interesting result, which justifies the use of an arbitrary Fibonacci representation in (8).

Theorem 5.1

(Carlitz) The value e(n) is independent of the Fibonacci representation in (6) chosen for n.
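For instance, a quick Python check confirms that e takes the same value on all five representations of 106 from Example 2 (the helper names are ours):

```python
def fib(k):
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def e_of(indices):
    # e applied to a representation n = F_{k_1} + ... + F_{k_r}
    return sum(fib(k - 1) for k in indices)

reps_106 = [[11, 7, 4, 2], [11, 6, 5, 4, 2], [10, 9, 7, 4, 2],
            [10, 9, 6, 5, 4, 2], [10, 8, 7, 6, 5, 4, 2]]
common = {e_of(r) for r in reps_106}  # a single value, as the theorem predicts
```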

6 Recursion à la Carlitz

Following along the lines of an idea by Carlitz, let

$$\begin{aligned} \Phi (x,y) = \prod _{s \ge 1} ( 1 - x^{F_s} y^{F_{s+1}}) = \sum _{m,n \ge 0} \alpha (m,n) x^m y^n \,. \end{aligned}$$
(9)

Then

$$\begin{aligned} \Phi (y, x y) = \prod _{s \ge 1} ( 1 - x^{F_{s+1}} y^{F_{s+2}}) = \frac{\Phi (x, y)}{ 1 - x y} \,. \end{aligned}$$

Equating coefficients in the resulting identity \(( 1 - x y) \Phi (y, xy) = \Phi (x,y)\) we obtain

$$\begin{aligned} \alpha (m,n) = \alpha (n-m, m) - \alpha (n-m,m-1) \end{aligned}$$
(10)

where \(\alpha (0,0)=1\), and \(\alpha (m,n) = 0 \) if either argument is negative.

Note that the expansion of the product (9) can be written in the form:

$$\begin{aligned} 1+ \sum _{k_1> k_2> \cdots > k_r \ge 2 } (-1)^r x^{F_{k_1 -1}+F_{k_2-1} + \cdots + F_{k_r -1} } y^{F_{k_1}+F_{k_2} + \cdots +F_{k_r} } \end{aligned}$$
(11)

over all \(r \ge 1\). Therefore, if the exponent of y in a summand of (11) is n, then the exponent of x in that summand is e(n).

Remark 1

For any given n, \( \alpha (m,n) = 0\) unless \( m = e(n)\). We will use this observation repeatedly.

Using Carlitz’s theorem, we can write

$$\begin{aligned} \Phi (x,y) = \sum _{n \ge 0} v(n) x^{e(n)} y^n \,, \end{aligned}$$

where

$$\begin{aligned} \prod _{s \ge 2 } (1 - y^{F_s} ) = \sum _{n \ge 0} v(n) y^n \end{aligned}$$
(12)

with \(v(0) = 1\). Therefore

$$\begin{aligned} v(n) = \alpha (e(n), n ) = \alpha (n-e(n),e(n)) - \alpha (n-e(n),e(n)-1) \,, \end{aligned}$$
(13)

in which the second equality is a consequence of (10) with \( m = e(n)\).
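Recursion (10) together with (13) and Remark 1 yields a direct method to compute v(n); a memoized Python sketch (the greedy helper for e(n) is ours):

```python
from functools import lru_cache

def fib(k):
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def e(n):
    # e(n) from the canonical representation of n, computed greedily
    total, k = 0, 2
    while fib(k + 1) <= n:
        k += 1
    while n > 0:
        if fib(k) <= n:
            total += fib(k - 1)
            n -= fib(k)
            k -= 2
        else:
            k -= 1
    return total

@lru_cache(maxsize=None)
def alpha(m, n):
    # recursion (10); zero if either argument is negative
    if m < 0 or n < 0:
        return 0
    if (m, n) == (0, 0):
        return 1
    return alpha(n - m, m) - alpha(n - m, m - 1)

def v(n):
    return alpha(e(n), n)  # by (13) and Remark 1
```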

We prove a sequence of three lemmas and collect the partial results together as Proposition 6.1.

Lemma 6.1

Suppose \(n = n( \varvec{k},k_r )\) with \(k_r = 2t +1\). Then \(~ v(n) = v(e(n))\).

Proof

In view of (13), it suffices to show that \(n-e(n) = e(e(n))\) and \(\alpha (n-e(n),e(n)-1) = 0 \). We have

$$\begin{aligned} e(n )= & {} F_{k_1 -1}+F_{k_2-1} + \cdots + F_{k_r -1} ,\\ n-e(n )= & {} F_{k_1 -2}+F_{k_2-2} + \cdots + F_{k_r -2} \, . \end{aligned}$$

Since \(k_r \ge 3\), it follows that

$$\begin{aligned} n - e(n) = e(e(n)) \,. \end{aligned}$$
(14)

To prove that \(\alpha (n-e(n),e(n)-1) = 0 \), we start with the standard Fibonacci identities

$$\begin{aligned} F_{2t-1} + \cdots + F_5 + F_3= & {} F_{2t} - 1 \,, \end{aligned}$$
(15)
$$\begin{aligned} F_{2t-2}+ \cdots + F_4 + F_2= & {} F_{2t-1} - 1 \,. \end{aligned}$$
(16)

Identity (15) implies that

$$\begin{aligned} e(n) - 1 = F_{k_1 -1}+F_{k_2-1} + \cdots + F_{k_{r -1} -1} + ( F_{2t-1} + \cdots + F_5 + F_3). \end{aligned}$$

Since this is a Fibonacci representation, Carlitz’s theorem with identity (16) imply that

$$\begin{aligned} e(e(n )-1)= & {} F_{k_1 -2}+F_{k_2-2} + \cdots + F_{k_{r-1} -2} + (F_{2t-2}+ \cdots + F_4+ F_2 ) \\= & {} F_{k_1 -2}+F_{k_2-2} + \cdots + F_{k_{r-1} -2} + F_{2t-1} -1 \\= & {} F_{k_1 -2}+F_{k_2-2} + \cdots + F_{k_{r-1} -2} + F_{k_r-2} -1 \\= & {} n - e(n) -1 \,. \end{aligned}$$

Therefore, the first argument in \(\alpha (n-e(n),e(n)-1) \) is not equal to \( e( e(n) -1)\), and so this term must vanish by Remark 1.

The following useful lemma is another consequence of the pair of Fibonacci identities in (15) and (16).

Lemma 6.2

For \( t\ge 1\),

$$\begin{aligned} e(F_{2t+1} - 1 )= & {} F_{2t} \end{aligned}$$
(17)
$$\begin{aligned} e(F_{2t} - 1 )= & {} F_{2t-1} -1 \,. \end{aligned}$$
(18)

Next, we consider the case in which \( k_r\) is even.

Lemma 6.3

Suppose \(n=n( \varvec{k},k_r) \) with \(k_r = 2t\). Put \( n_1 = n(\varvec{k})\). Then

$$\begin{aligned} v(n) = \left\{ \begin{array}{ll} v(e(n)) - v (e(n)-1) &{} \text{ if } 2t > 2 \vspace{1mm} \\ - v(e(n_1)) &{} \text{ if } 2t = 2 \,. \end{array} \right. \end{aligned}$$

Proof

For \(2t > 2\), (14) holds as before. Using the two identities (15) and (16) one more time, we have

$$\begin{aligned} e(n) -1= & {} F_{k_1 -1}+F_{k_2-1} + \cdots + F_{k_{r-1} -1} + F_{2t-1} -1 \nonumber \\= & {} F_{k_1 -1}+F_{k_2-1} + \cdots + F_{k_{r-1} -1} + (F_{2t-2} + \cdots + F_4 + F_2) \,,\nonumber \\ \end{aligned}$$
(19)
$$\begin{aligned} e(e(n)-1)= & {} F_{k_1 -2}+F_{k_2-2} + \cdots + F_{k_{r-1} -2} + ( F_{2t-3} + \cdots + F_3 + F_1) \nonumber \\= & {} F_{k_1 -2}+F_{k_2-2} + \cdots + F_{k_{r-1} -2} + F_{2t-2}~. \end{aligned}$$
(20)

Therefore, \( \, e( e(n) -1) = e(e(n)) \, \). The proof of the case \(2t>2\) then follows by recursion (13). For \(k_r=2t = 2\),

$$\begin{aligned} n - e(n)= & {} F_{k_1 -2}+F_{k_2-2} + \cdots + F_{k_{r-1} -2} = e(e(n_1)) \,, \\ e(n)-1= & {} F_{k_1 -1}+F_{k_2-1} + \cdots + F_{k_{r-1} -1} = e(n_1) \,. \end{aligned}$$

Also, since

$$\begin{aligned} e(n) = F_{k_1 -1}+F_{k_2-1} + \cdots + F_{k_{r-1} -1} + F_2 \,, \end{aligned}$$

we have

$$\begin{aligned} e(e(n)) = F_{k_1 -2}+F_{k_2-2} + \cdots + F_{k_{r-1} -2} + F_1 = n - e(n) +1 \,. \end{aligned}$$

From (10) we conclude that for \( k_r = 2\), \( v(n) = - v (e(n_1))\).

We collect these results in Proposition 6.1.

Proposition 6.1

Let n have the canonical Fibonacci representation \(n = F_{k_1}+F_{k_2} + \cdots + F_{k_r}\). Then

$$\begin{aligned} v(n) = \left\{ \begin{array}{ll} \hspace{2.5mm} v(e(n))&{} \text{ if } k_r = 2t+1 \vspace{1mm} \\ \hspace{2.5mm} v(e(n)) - v (e(n)-1) &{} \text{ if } k_r = 2t > 2 \vspace{1mm} \\ - v(e(n_1)) &{} \text{ if } k_r = 2t = 2 \,. \end{array} \right. \end{aligned}$$

where \(n_1 = F_{k_1}+F_{k_2} + \cdots + F_{k_{r-1}}\) and e is the operator defined in (8).

6.1 A Special Case: \(r=1\)

It is possible to give an explicit formula for the expansion coefficients for \( r=1\), i.e., when the exponent \(n = F_{k_1}\) is a Fibonacci number.

Lemma 6.4

Let \(x_1 = k_1 \!\! \mod 4\), so that \( x_1 \in \{ 0,1,2,3 \}\). Then

$$\begin{aligned} v(F_{k_1}) = - \left\lfloor \frac{x_1}{2} \right\rfloor \,. \end{aligned}$$
(21)

Proof

We have

$$\begin{aligned} v(F_{2k+1})= & {} \alpha (F_{2k}, F_{2k+1} )\\= & {} \alpha (F_{2k-1}, F_{2k}) - \alpha (F_{2k-1},F_{2k}-1) \\= & {} v(F_{2k}) - \alpha (F_{2k-1},F_{2k}-1) \end{aligned}$$

by (13). From (18), \( e(F_{2k} - 1 ) = F_{2k-1} -1 \), and therefore, \(\alpha (F_{2k-1},F_{2k}-1)=0\), the first parameter not being the e of the second. Thus, \( v(F_{2k+1})= v(F_{2k})\,\). Since \(v(F_2) = -1\) and \( v(F_4) = 0\) directly from (5), the lemma follows.

Remark 2

Proposition 6.1 together with Lemma 6.4 provide a recursive algorithm to compute v(n). As an example, for \(n = 18 = F_7 + F_5\),

$$\begin{aligned} v(18) = \vartheta ( 7, 5) = \vartheta (6,4)= & {} \vartheta (5,3) - \vartheta (5,2) \\= & {} \vartheta (4,2) + \vartheta (4) = - \vartheta (3) + 0 = 1 \,. \end{aligned}$$

However, this approach to compute v(n) is not very illuminating.
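Still, the recursive algorithm of Remark 2 is short; a Python sketch of Proposition 6.1 with base case (21) (the helper names are ours):

```python
def fib(k):
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def zeck(n):
    # canonical representation, computed greedily
    k, indices = 2, []
    while fib(k + 1) <= n:
        k += 1
    while n > 0:
        if fib(k) <= n:
            indices.append(k)
            n -= fib(k)
            k -= 2
        else:
            k -= 1
    return indices

def e(n):
    return sum(fib(k - 1) for k in zeck(n))

def v(n):
    """v(n) by the recursion of Proposition 6.1."""
    if n == 0:
        return 1
    ks = zeck(n)
    if len(ks) == 1:
        return -((ks[0] % 4) // 2)       # Lemma 6.4, formula (21)
    kr = ks[-1]
    if kr % 2 == 1:                      # k_r odd
        return v(e(n))
    if kr > 2:                           # k_r even, > 2
        return v(e(n)) - v(e(n) - 1)
    return -v(e(n - 1))                  # k_r = 2: n_1 = n - F_2 = n - 1
```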

Suppose that e is defined as in (8). We recursively define its powers for \( t \ge 1\) by setting \( e^t(n) = e( e^{t-1}(n))\) with \(e^0(n) = n\). Additionally, given a vector of indices \(\varvec{k}\) satisfying (7), we let \( \mathcal {E}( \varvec{k}) = e(n)\) where \( n = n( \varvec{k})= F_{k_1} + F_{k_2} + \cdots + F_{k_r}\). In the notation \( \mathcal {E}( \varvec{k}, k_{i+1} , \ldots , k_r)\), \(\varvec{k}\) will denote the vector of indices \( (k_1, k_2, \ldots , k_i)\).

6.2 A Lemma on Staircases

We need a result on the expansion coefficients where the canonical representation of n ends with a “staircase.” This result is used in the proof of the main recursion for the coefficients, given as Proposition 7.1 in the next section.

Lemma 6.5

For \( s \ge 1\), \( ~ \vartheta ( \varvec{k}, 2s, 2s-2, \ldots , 4, 2) = (-1)^s \vartheta ( \mathcal {E}^{2s-1} ( \varvec{k})) \, . \)

Proof

For \( s=1\), we need to show \(\vartheta ( \varvec{k}, 2) = - \vartheta ( \mathcal {E}( \varvec{k})) \), but this is precisely the case \(k_r=2t=2\) of Proposition 6.1. For \( s >1\), again by Proposition 6.1 we can write

$$\begin{aligned} \vartheta ( \varvec{k}, 2s, 2s-2, \ldots , 4, 2)= & {} - \vartheta ( \mathcal {E}( \varvec{k}, 2s, 2s-2, \ldots , 4 )) \\= & {} - \vartheta ( \varvec{k}-1, 2s-1, \ldots , 5, 3 ) \\= & {} - \vartheta ( \varvec{k}-2, 2s-2, \ldots , 4, 2) \, . \end{aligned}$$

By induction on s,

$$\begin{aligned} \vartheta ( \varvec{k}-2, 2s-2, \ldots , 4, 2)= & {} (-1)^{s-1} \vartheta (\mathcal {E}^{2s-3} (\varvec{k}-2) ) \\= & {} (-1)^{s-1} \vartheta (\mathcal {E}^{2s-1} (\varvec{k}) ) \end{aligned}$$

and the lemma follows.

7 Main Recursion for the Expansion Coefficients

Proposition 7.1

Suppose \(n = n( k_1, k_2, \ldots , k_r)\). Then for \(k_r = 2t\),

$$\begin{aligned} v(n) = \vartheta (\varvec{k}, k_r) = \left\{ \begin{array}{ll} - \vartheta ( \mathcal {E}^{2t-1} (\varvec{k})) &{} \text{ if } t ~\text{ is } \text{ odd },\vspace{1mm} \\ \vartheta ( \mathcal {E}^{2t-2} (\varvec{k})) - \vartheta (\mathcal {E}^{2t-1} (\varvec{k})) &{} \text{ if } t ~\text{ is } \text{ even }, \end{array} \right. \end{aligned}$$

and for \(k_r = 2t+1\),

$$\begin{aligned} v(n) = \vartheta (\varvec{k}, k_r) = \left\{ \begin{array}{ll} - \vartheta ( \mathcal {E}^{2t} (\varvec{k}) ) &{} \text{ if } t ~\text{ is } \text{ odd }, \vspace{1mm} \\ \vartheta ( \mathcal {E}^{2t-1} (\varvec{k})) - \vartheta (\mathcal {E}^{2t} (\varvec{k})) &{} \text{ if } t ~\text{ is } \text{ even }. \end{array} \right. \end{aligned}$$

Proof

If \(k_r = 2 t= 2\), then the conclusion is the third case of Proposition 6.1. If \( 2t > 2\), then the second case of Proposition 6.1 and the expression (19) for \( e(n)-1\) imply that

$$\begin{aligned} \vartheta ( \varvec{k}, k_r) = \vartheta ( e(\varvec{k}), 2t -1) - \vartheta (e(\varvec{k}), 2t-2, \ldots , 4, 2) \, . \end{aligned}$$

Using Lemma 6.5 with \( s = t-1\),

$$\begin{aligned} \vartheta ( \varvec{k}, k_r)= & {} \vartheta ( e(\varvec{k}), 2t -1) + (-1)^t \vartheta (e^{2t-2}( \varvec{k}) ) \\= & {} \vartheta ( e^2 ( \varvec{k}) , 2t -2) + (-1)^t \vartheta (e^{2t-2}( \varvec{k}) ) \end{aligned}$$

where the last equality is a consequence of the first part of Proposition 6.1. If \(2t-2 >2\), we can apply the same expansion to the first term above to obtain

$$\begin{aligned} \vartheta ( e^2 ( \varvec{k}) , 2t -2) = \vartheta ( e^3 ( \varvec{k}) , 2t -3) + (-1)^{t-1} \vartheta (e^{2(t-1)-3}( e^3(\varvec{k})) ) \end{aligned}$$

and therefore,

$$\begin{aligned} \vartheta ( \varvec{k}, k_r)= & {} \vartheta ( e^3(\varvec{k}), 2t -3) + (-1)^{t-1} \vartheta ( e^{2t-2} ( \varvec{k})) + (-1)^t \vartheta (e^{2t-2}( \varvec{k}) ) \\= & {} \vartheta ( e^3 ( \varvec{k}) , 2t -3) \, . \end{aligned}$$

Continuing this way, there is a cancellation for t odd, and we obtain

$$\begin{aligned} \vartheta ( \varvec{k}, k_r) = \vartheta ( e^{2t -3} ( \varvec{k}) , 3) \, . \end{aligned}$$

Using the first and the third parts of Proposition 6.1, this gives

$$\begin{aligned} \vartheta ( \varvec{k}, k_r) = \vartheta ( e^{2t -2} ( \varvec{k}) , 2) = - \vartheta ( e^{2t -1} ( \varvec{k}) ) \end{aligned}$$

for \( k_r = 2t\), t odd. When \( k_r = 2t\) and t is even,

$$\begin{aligned} \vartheta ( \varvec{k}, k_r)= & {} \vartheta ( e^{2t - 2} (\varvec{k}), 2) + \vartheta (e^{2t-2}( \varvec{k}) ) \\= & {} - \vartheta ( e^{2t - 1} (\varvec{k})) + \vartheta (e^{2t-2}( \varvec{k}) ) \, . \end{aligned}$$

For \( k_r = 2t+1\), the first part of Proposition 6.1 implies that

$$\begin{aligned} \vartheta ( \varvec{k}, k_r) = \vartheta ( e(\varvec{k}), 2t) \end{aligned}$$

and the result follows from the \(k_r = 2t \) part of the proof with \( e ( \varvec{k})\) playing the part of \(\varvec{k}\).

The different cases in Proposition 7.1 depend on the value of \(k_r \) modulo 4. They can be combined to write the recursion for v(n) in the form

$$\begin{aligned} v(n) = \vartheta (\varvec{k}, k_r) = \left\{ \begin{array}{ll} \hspace{2.5mm} \vartheta ( \mathcal {E}^{k_r -2 } (\varvec{k})) - \vartheta (\mathcal {E}^{k_r -1 } (\varvec{k})) &{} \text{ if } k_r ~ \equiv 0, 1 \!\!\! \mod 4 \, , \vspace{1mm}\\ - \vartheta ( \mathcal {E}^{k_r-1} (\varvec{k}) ) &{} \text{ if } k_r ~ \equiv 2, 3 \!\!\! \mod 4 \,. \end{array} \right. \end{aligned}$$

There is an even simpler formulation if we allow ourselves some further notation by setting \( x_i = k_i \!\! \mod 4\) for every index i. Using also the notation of Example 1, we record this as a proposition in the following form.

Proposition 7.2

The expansion coefficients in (5) satisfy

$$\begin{aligned} v(n) = \vartheta (\varvec{k}, k_r)= & {} \left( 1- \left\lfloor \frac{x_r}{2} \right\rfloor \right) \vartheta ( \mathcal {E}^{k_r -2 } (\varvec{k})) - \vartheta (\mathcal {E}^{k_r -1 } (\varvec{k})) \\= & {} \left( 1- \left\lfloor \frac{x_r}{2} \right\rfloor \right) \vartheta ( \varvec{k}-k_r +2) - \vartheta (\varvec{k}- k_r+1 ) \, . \end{aligned}$$

8 Invariance Modulo 4

A surprising application of Proposition 7.2 is the following result.

Theorem 8.1

Suppose n has the canonical representation \(n = F_{k_1}+F_{k_2} + \cdots + F_{k_r}\). Then v(n) depends only on the values of \( k_1, k_2, \ldots , k_r\) modulo 4.

Proof

This is clearly so if \(r=1\) because of the formula (21) for this case. The formula given in Proposition 7.2 for \(r>1\) depends on \(x_r \equiv k_r \!\! \mod 4\), and on the values for the numbers represented by \( \mathcal {E}^{k_r -2 } (\varvec{k})\) and \(\mathcal {E}^{k_r -1 } (\varvec{k}) \). The largest index \(k_1\) in the canonical representations of these numbers is smaller than that of n, and the proof follows by induction.

Remark 3

This result means that if \(n = F_{k_1}+F_{k_2} + \cdots + F_{k_r}\) and \(n' = F_{k_1'}+F_{k_2'} + \cdots + F_{k_r'}\) with \( k_i \equiv k_i' \!\! \mod 4\) for all i, then \(v(n)= v(n')\).

Since v(n) only depends on the vector of remainders \( \varvec{x}= (x_1, x_2, \ldots , x_r)\) of the canonical representation \(n = F_{k_1}+F_{k_2} + \cdots + F_{k_r}\) with \( x_i \equiv k_i \!\! \mod 4\), \(1 \le i \le r\), we denote v(n) by \(\vartheta (x_1, x_2, \ldots , x_r)\) or equivalently by \(\vartheta ( \varvec{x})\). Similarly, we denote n by \( n ( \varvec{x})\). Of course, given a vector \(\varvec{x}= (x_1, x_2, \ldots , x_r)\) with \( x_i \in \{ 0,1,2,3\}\), there are infinitely many integers n whose canonical representation indices modulo 4 are the given \(x_i\), i.e., infinitely many \(n = n ( \varvec{x})\).

Remark 4

From now on arithmetic operations involving parameters \( x_i\) and vector \( \varvec{x}\) are understood to be modulo 4 to avoid cumbersome notation. For instance, when we write \( x_1 - x_2 - 2\), we mean \( (x_1 - x_2 - 2) \!\!\! \mod 4\). As another example, the notation \(\left\lfloor \frac{x_2-x_3 - 1}{2} \right\rfloor \) is a shorthand for \( \left\lfloor \frac{(x_2-x_3 - 1) \!\!\! \mod 4}{2} \right\rfloor \).

9 Expansion Coefficients Lie in \(\{ -1, 0 , 1 \}\)

Since the expansion coefficients only depend on the remainder of the indices of the canonical representation modulo 4, the recursion of Proposition 7.2 can be restated as follows.

$$\begin{aligned} \vartheta (\varvec{x}, x_r)= & {} \left( 1- \left\lfloor \frac{x_r}{2} \right\rfloor \right) \vartheta ( \mathcal {E}^{x_r -2 } (\varvec{x})) - \vartheta (\mathcal {E}^{x_r -1 } (\varvec{x})) \end{aligned}$$
(22)
$$\begin{aligned}= & {} \left( 1- \left\lfloor \frac{x_r}{2} \right\rfloor \right) \vartheta ( \varvec{x}-x_r -2) - \vartheta (\varvec{x}- x_r -3 ) \,, \end{aligned}$$
(23)

where \( \varvec{x}=( x_1, x_2, \ldots , x_{r-1})\). This can be split into four recursions depending on the value of \(x_r\):

$$\begin{aligned} \nonumber \vartheta (\varvec{x}, 0)= & {} \vartheta ( \mathcal {E}^{2 } (\varvec{x})) - \vartheta (\mathcal {E}^{3 } (\varvec{x})) = \vartheta ( \varvec{x}+2) - \vartheta (\varvec{x}+1 )\\ \vartheta (\varvec{x}, 1)= & {} \vartheta ( \mathcal {E}^{3 } (\varvec{x})) - \vartheta (\varvec{x}) = \vartheta ( \varvec{x}+ 1) - \vartheta (\varvec{x})\\ \nonumber \vartheta (\varvec{x}, 2 )= & {} - \vartheta (\mathcal {E}(\varvec{x})) = - \vartheta (\varvec{x}+ 3 ) \\ \nonumber \vartheta (\varvec{x}, 3)= & {} - \vartheta (\mathcal {E}^{2 } (\varvec{x})) = - \vartheta (\varvec{x}+2 ) ~. \end{aligned}$$
(24)
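These four recursions compute the coefficients directly on remainder vectors; a short Python sketch with base case (21):

```python
def theta(x):
    """theta(x_1, ..., x_r) via the four recursions (24)."""
    x = tuple(xi % 4 for xi in x)
    if len(x) == 1:
        return -(x[0] // 2)              # base case, formula (21)
    head, xr = x[:-1], x[-1]
    def shift(c):
        # add c to every remaining remainder, mod 4
        return tuple((xi + c) % 4 for xi in head)
    if xr == 0:
        return theta(shift(2)) - theta(shift(1))
    if xr == 1:
        return theta(shift(1)) - theta(shift(0))
    if xr == 2:
        return -theta(shift(3))
    return -theta(shift(2))              # xr == 3
```

For example, \(18 = F_7 + F_5\) has remainder vector \((3,1)\), and the sketch returns \(\vartheta (3,1) = 1\), in agreement with Remark 2.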

Next, we prove that the expansion coefficients lie in \(\{-1,0,1\}\). To do this we strengthen the induction hypothesis and prove the following result.

Theorem 9.1

The expansion coefficients v(n) lie in \(\{-1,0,1\}\). Furthermore, \(\vartheta ( \varvec{x})\) and \(\vartheta ( \mathcal {E}( \varvec{x})) = \vartheta ( \varvec{x}-1)\) never have opposite signs.

Proof

This is the case for \(r=1\) given in Lemma 6.4. For \( r > 1\), from (24) we have

$$\begin{aligned} \vartheta (\varvec{x}, 0)= & {} \vartheta ( \varvec{x}+2) - \vartheta ( \mathcal {E}( \varvec{x}+2) ) \\ \vartheta (\varvec{x}, 1)= & {} \vartheta ( \varvec{x}+ 1) - \vartheta ( \mathcal {E}(\varvec{x}+1) ) \,. \end{aligned}$$

These differences are in \(\{-1,0,1\}\), since the summands in each do not have opposite signs and they are in \(\{-1,0,1\}\) by the induction hypothesis. This holds for the last two cases in (24) also. To complete the induction proof, we need to show that \(a = \vartheta (\varvec{x}, x_r) \) and \(b= \vartheta ( \mathcal {E}( \varvec{x}, x_r)) \) never have opposite signs, i.e., \(ab \ne -1\). For \( x_r = 0\), we have \(a = \vartheta (\varvec{x}, 0 ) \) and \(b= \vartheta ( \mathcal {E}( \varvec{x}, 0 )) = \vartheta (\varvec{x}-1 ,3) \). From (24)

$$\begin{aligned} a= & {} \vartheta ( \varvec{x}+2) - \vartheta (\mathcal {E}(\varvec{x}+2) ), \\ b= & {} - \vartheta (\mathcal {E}(\varvec{x}+2) ) \end{aligned}$$

so that \(ab = \vartheta (\mathcal {E}(\varvec{x}+2) )^2 - \vartheta (\mathcal {E}(\varvec{x}+2) ) \vartheta ( \varvec{x}+2) \ne -1\). For \(x_r = 3\),

$$\begin{aligned} a= & {} \vartheta (\varvec{x}, 3 ) = - \vartheta ( \mathcal {E}^2 (\varvec{x})) , \\ b= & {} \vartheta ( \mathcal {E}( \varvec{x}, 3 )) = \vartheta (\varvec{x}-1 ,2) = -\vartheta (\mathcal {E}^2 (\varvec{x})), \end{aligned}$$

so that \( ab = \vartheta ( \mathcal {E}^2 (\varvec{x}))^2 \ne -1 \). The proofs of the remaining cases are similar.
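Theorem 9.1 also lends itself to a direct numerical check. The following sketch (ours; the paper's own computations use Mathematica) expands the product (2) up to degree 500 and inspects the resulting coefficients:

```python
# Expand prod_{k>=2} (1 - y^{F_k}) up to degree N and collect coefficients.
N = 500

# Fibonacci numbers F_2, F_3, ... not exceeding N: 1, 2, 3, 5, 8, 13, ...
fibs = []
a, b = 1, 2
while a <= N:
    fibs.append(a)
    a, b = b, a + b

coeff = [0] * (N + 1)
coeff[0] = 1
for f in fibs:
    # multiply the truncated series by (1 - y^f)
    for n in range(N, f - 1, -1):
        coeff[n] -= coeff[n - f]

print(sorted(set(coeff)))  # prints [-1, 0, 1]
```

Raising N checks the statement over any desired range of exponents.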

10 Another Special Case: \(r=2\)

When \(r=2\), n is the sum of two nonconsecutive Fibonacci numbers, \( n = F_{k_1} + F_{k_2}\) with \(k_1 - k_2 \ge 2\) and \(k_2 \ge 2\).

Proposition 10.1

For \(r=2\), the expansion coefficients are given by

$$\begin{aligned} \nonumber \vartheta (x_1 , x_2)= & {} \left( 1- \left\lfloor \frac{x_2}{2} \right\rfloor \right) \vartheta ( x_1 -x_2 -2) - \vartheta (x_1 - x_2-3 ) \\= & {} \left( \left\lfloor \frac{x_2}{2}\right\rfloor -1 \right) \left\lfloor \frac{x_1-x_2-2}{2}\right\rfloor + \left\lfloor \frac{x_1-x_2-3}{2}\right\rfloor \,. \end{aligned}$$
(25)

Proof

The formula follows from (23) and (21), keeping in mind that the operations involving the \(x_i\) are performed modulo 4.

Example 3

For \( n = F_{12} + F_{8}\), \(x_1=x_2=0\) and \( v(n) = \vartheta (0,0) = - \left\lfloor \frac{2}{2} \right\rfloor + \left\lfloor \frac{1}{2} \right\rfloor = -1\); for \( n = F_{13} + F_{6}\), \(x_1 = 1\), \(x_2 = 2\) and \( v(n)= \vartheta (1,2) = \left\lfloor \frac{0}{2} \right\rfloor = 0\); and for \( n = F_{15} + F_{9}\), \(x_1= 3\), \(x_2=1\) and \( v(n) = \vartheta (3,1) = - \left\lfloor \frac{0}{2} \right\rfloor + \left\lfloor \frac{3}{2} \right\rfloor = 1\).

Remark 5

Note that for \( r=2\), the expansion coefficients vanish whenever \( x_1 - x_2 \equiv 3 \!\! \mod 4\), as a consequence of (25). We will prove (Corollary 11.1) that this holds for arbitrary \( r\ge 2\).
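Both Example 3 and Remark 5 are easy to confirm mechanically. A small sketch (ours), with all differences reduced modulo 4 before the floors are taken, as in the text:

```python
def theta2(x1, x2):
    # formula (25) for r = 2; index arithmetic is performed modulo 4
    d2 = (x1 - x2 - 2) % 4
    d3 = (x1 - x2 - 3) % 4
    return (x2 // 2 - 1) * (d2 // 2) + d3 // 2

# the three values of Example 3
assert (theta2(0, 0), theta2(1, 2), theta2(3, 1)) == (-1, 0, 1)

# Remark 5: the coefficient vanishes whenever x1 - x2 = 3 modulo 4
assert all(theta2(x1, x2) == 0
           for x1 in range(4) for x2 in range(4) if (x1 - x2) % 4 == 3)
print("Example 3 and Remark 5 confirmed")
```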

11 The Matrix Recursion

When we have a linear homogeneous recursion, it is usually helpful to formulate it as a matrix recursion. For constant coefficients, we then use linear algebra techniques to compute the powers of the matrix and obtain Binet-like formulas for the n-th term of the sequence. When the coefficients are not constant, the matrices involved need to have special properties if we are to have any hope of writing the resulting matrix product in a simple form.

For \( 2 \le i \le r-1\), define the \( 2 \times 2 \) integer matrices

$$\begin{aligned} B_i = \begin{bmatrix} \lfloor \frac{x_i - x_{i+1}}{2}\rfloor &{} \lfloor \frac{x_i - x_{i+1}-1}{2}\rfloor \\ -1 &{} -1 \end{bmatrix}, \end{aligned}$$
(26)

where the arithmetic operations involving the \(x_i\)’s are done modulo 4 as before.

Theorem 11.1

Given \( (\varvec{x}, x_r ) = (x_1, x_2 , \ldots , x_{r-1}, x_{r})\) with \( r \ge 2 \) and \( x_i \in \{ 0,1,2,3 \}\), set

$$\begin{aligned} M(\varvec{x}, x_r) = B_2 B_3 \cdots B_{r-1} \end{aligned}$$
(27)

where the empty product for \( r=2\) is taken to be the \(2 \times 2 \) identity matrix. Then the value of the expansion coefficient \( \vartheta (\varvec{x}, x_r)\) is given by the vector–matrix–vector product

$$\begin{aligned} \vartheta (\varvec{x}, x_r) = \begin{bmatrix} \vartheta (x_1 - x_2 - 2),&\!\! \vartheta (x_1-x_2 -3) \end{bmatrix} M(\varvec{x}, x_r) \begin{bmatrix} 1- \left\lfloor \frac{x_r}{2}\right\rfloor \\ -1 \end{bmatrix} \, . \end{aligned}$$
(28)

Proof

For \( r=2\), from formula (25) and Lemma 6.4 we obtain the following expression.

$$\begin{aligned} \vartheta (x_1, x_2)= & {} \left( \left\lfloor \frac{x_2}{2}\right\rfloor -1 \right) \left\lfloor \frac{x_1-x_2-2}{2}\right\rfloor + \left\lfloor \frac{x_1-x_2-3}{2}\right\rfloor \\= & {} \begin{bmatrix} \vartheta (x_1 - x_2 - 2),&\!\! \vartheta (x_1-x_2 -3) \end{bmatrix} \begin{bmatrix} 1- \left\lfloor \frac{x_2}{2}\right\rfloor \\ -1 \end{bmatrix} \,. \end{aligned}$$

Let \( v_1 = \vartheta (x_1 - x_2 - 2)\) and \( v_2 = \vartheta (x_1-x_2 -3) \). For \( r>2\) by induction

$$\begin{aligned} \vartheta (\varvec{x}) = [v_1, v_2] M( \varvec{x}) \begin{bmatrix} 1- \left\lfloor \frac{x_{r-1}}{2}\right\rfloor \\ -1 \end{bmatrix} \, . \end{aligned}$$
(29)

Using recursion (23),

$$\begin{aligned} \vartheta ( \varvec{x}, x_r )= & {} \left( 1- \left\lfloor \frac{x_r}{2} \right\rfloor \right) [v_1, v_2] M( \varvec{x}- x_r -2) \begin{bmatrix} 1- \lfloor \frac{x_{r-1} - x_r-2}{2}\rfloor \\ -1 \end{bmatrix} \\{} & {} - [v_1, v_2] M( \varvec{x}- x_r -3) \begin{bmatrix} 1- \lfloor \frac{x_{r-1} - x_r-3}{2}\rfloor \\ -1 \end{bmatrix} \\= & {} [v_1, v_2] M( \varvec{x}) \left( \left( 1- \left\lfloor \frac{x_r}{2} \right\rfloor \right) \begin{bmatrix} 1- \lfloor \frac{x_{r-1} - x_r-2}{2}\rfloor \\ -1 \end{bmatrix} - \begin{bmatrix} 1- \lfloor \frac{x_{r-1} - x_r-3}{2}\rfloor \\ -1 \end{bmatrix} \right) \,. \end{aligned}$$

Therefore, our proof obligation is to show that

$$\begin{aligned} B_{r-1} \begin{bmatrix} 1- \lfloor \frac{x_{r}}{2}\rfloor \\ -1 \end{bmatrix} = \left( 1- \left\lfloor \frac{x_r}{2} \right\rfloor \right) \begin{bmatrix} 1- \lfloor \frac{x_{r-1} - x_r-2}{2}\rfloor \\ -1 \end{bmatrix} - \begin{bmatrix} 1- \lfloor \frac{x_{r-1} - x_r-3}{2}\rfloor \\ -1 \end{bmatrix} . \end{aligned}$$

The above identity is immediately verified using Mathematica by checking the equality of both sides for the sixteen possible pairs of values of \( x_{r-1} \) and \( x_r\) modulo 4.
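As an independent check of Theorem 11.1, formula (28) can be compared against a direct expansion of the product (2). The sketch below (ours; the paper uses Mathematica) computes the canonical representation greedily, reduces the indices modulo 4, and evaluates (28) with the single-part values \(\vartheta (x) = -\lfloor x/2 \rfloor\) read off from the left vector in (28):

```python
def fib_indices(limit):
    out, a, b, k = [], 1, 2, 2      # (k, F_k) pairs starting at F_2 = 1
    while a <= limit:
        out.append((k, a))
        a, b, k = b, a + b, k + 1
    return out

def zeckendorf(n, pairs):
    ks = []                          # greedy canonical representation
    for k, f in reversed(pairs):
        if f <= n:
            ks.append(k)
            n -= f
    return ks

def theta(xs):
    if len(xs) == 1:
        return -(xs[0] // 2)         # single-part base case (Lemma 6.4)
    d2, d3 = (xs[0] - xs[1] - 2) % 4, (xs[0] - xs[1] - 3) % 4
    v = [-(d2 // 2), -(d3 // 2)]     # left row vector of (28)
    for i in range(1, len(xs) - 1):  # right-multiply by B_2 ... B_{r-1}
        d = (xs[i] - xs[i + 1]) % 4
        B = ((d // 2, ((d - 1) % 4) // 2), (-1, -1))
        v = [v[0] * B[0][0] + v[1] * B[1][0], v[0] * B[0][1] + v[1] * B[1][1]]
    return v[0] * (1 - xs[-1] // 2) - v[1]

N = 1000
pairs = fib_indices(N)
coeff = [0] * (N + 1)
coeff[0] = 1
for _, f in pairs:                   # expand prod (1 - y^{F_k}) up to degree N
    for n in range(N, f - 1, -1):
        coeff[n] -= coeff[n - f]
assert all(theta([k % 4 for k in zeckendorf(n, pairs)]) == coeff[n]
           for n in range(1, N + 1))
print("formula (28) matches the expansion for all n <= 1000")
```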

An immediate consequence of Theorem 11.1 is the following corollary.

Corollary 11.1

If n has canonical representation \( n = F_{k_1} + F_{k_2} + \cdots + F_{k_r}\) with \(r \ge 2\) and \( k_1-k_2 \equiv 3 \!\! \mod 4\), then \(v(n) = 0\).

Proof

Since both \( v_1 = \vartheta (x_1 - x_2 - 2)\) and \( v_2 = \vartheta (x_1-x_2 -3) \) vanish when \(x_1 - x_2 \equiv 3 \!\! \mod 4\), the corollary follows from (28).

11.1 Formula for \(r=3\)

For \(r=3\), the matrix formulation gives

$$\begin{aligned} M(x_1, x_2, x_3 ) = \begin{bmatrix} \left\lfloor \frac{x_2 - x_3}{2}\right\rfloor &{} \left\lfloor \frac{x_2 - x_3-1}{2}\right\rfloor \\ -1 &{} -1 \end{bmatrix} ~. \end{aligned}$$

Therefore, the formula for the expansion coefficient \( \vartheta (x_1,x_2,x_3)\) is

$$\begin{aligned} \nonumber \vartheta (x_1, x_2, x_3 )= & {} \left\lfloor \frac{x_1-x_2-2}{2}\right\rfloor \left( \left\lfloor \frac{x_2-x_3}{2}\right\rfloor \left( \left\lfloor \frac{x_3}{2}\right\rfloor -1 \right) + \left\lfloor \frac{x_2-x_3-1}{2}\right\rfloor \right) \\{} & {} - \left\lfloor \frac{x_3}{2}\right\rfloor \left\lfloor \frac{x_2-x_3-3}{2}\right\rfloor \,. \end{aligned}$$
(30)

The formulas (21), (25) and (30) provide closed form expressions for the expansion coefficients for \( r=1, 2 ,3\). Evidently, these formulas get unwieldy very quickly.

12 The A-Matrices and a Special Case

Among all possible \(2 \times 2 \) matrices \(B_i\), there are only four distinct ones, depending on the value of the difference \(x_{i}- x_{i+1} \) modulo 4. We will refer to these four matrices indexed by the possible remainders 0, 1, 2, 3 as the A-matrices. They are given in (31).

$$\begin{aligned} A_0= \begin{bmatrix} 0 &{} 1 \\ -1 &{} -1 \end{bmatrix},~ A_1 = \begin{bmatrix} 0 &{} 0 \\ -1 &{} -1 \end{bmatrix},~ A_2= \begin{bmatrix} 1 &{} 0 \\ -1 &{} -1 \end{bmatrix},~ A_3 = \begin{bmatrix} 1 &{} 1 \\ -1 &{} -1 \end{bmatrix}. \end{aligned}$$
(31)

We can use the matrix recursion formulation to obtain formulas for the expansion coefficients explicitly in certain other special cases. Suppose in \( \varvec{x}= (x_1, x_2, \ldots , x_r) \) with \( r\ge 3\), all digits have the same remainder p modulo 4, for some \( p \in \{0,1,2,3 \}\). In this case each A-matrix that is a factor of \(M (\varvec{x})\) is equal to \(A_0\) as given in (31), and the expansion coefficient is given by

$$\begin{aligned} \begin{bmatrix} - \left\lfloor \frac{x_1 - x_2 -2}{2} \right\rfloor ,&\!\! - \left\lfloor \frac{x_1 - x_2 -3}{2} \right\rfloor \end{bmatrix} A_0^{r-2} \begin{bmatrix} 1 - \left\lfloor \frac{x_r}{2} \right\rfloor \\ -1 \end{bmatrix} = [ - 1 , 0] A_0^{r-2} \begin{bmatrix} 1 - \left\lfloor \frac{p}{2} \right\rfloor \\ -1 \end{bmatrix} \,. \end{aligned}$$
(32)

We calculate that for \( m \ge 0\),

$$\begin{aligned} A_0^m = \left\{ \begin{array}{ll} \begin{bmatrix} 1 &{} 0 \\ 0 &{} 1 \end{bmatrix} &{} \text{ if } m = 0,3,6, \ldots \vspace{1.5mm} \\ \begin{bmatrix} 0 &{} 1 \\ -1 &{} -1 \end{bmatrix} &{} \text{ if } m = 1,4,7, \ldots \vspace{1.5mm} \\ \begin{bmatrix} -1 &{} -1 \\ 1 &{} 0 \end{bmatrix}&\text{ if } m = 2,5,8, \ldots \end{array} \right. \end{aligned}$$
(33)

This gives the following result.

Theorem 12.1

Suppose n has canonical representation \( n = F_{k_1} + F_{k_2} + \cdots + F_{k_r}\) with \(r \ge 1\) where for some fixed \(p \in \{0,1,2,3\}\), \( k_i \equiv p \!\! \mod 4\), for all \(i=1,2, \ldots , r\). Then the expansion coefficient \( \vartheta ( \varvec{x})\) is given by

$$\begin{aligned} \vartheta ( \varvec{x}) = \left\{ \begin{array}{ll} 1 &{} \text{ if } r= 3k , \vspace{1mm} \\ -\left\lfloor \frac{p}{2} \right\rfloor &{} \text{ if } r= 3k+1, \vspace{1mm} \\ \left\lfloor \frac{p}{2} \right\rfloor -1 &{} \text{ if } r= 3k+2, \end{array} \right. \end{aligned}$$
(34)

for \(k \ge 0\).

Proof

For \( r\ge 3\), the proof is immediate from the vector–matrix–vector product in (32) and the expression for the powers of \(A_0\) in (33). We check that the values for \( r=1\) and \(r=2\) already obtained for \( x_1= p \) (Lemma 6.4) and \( x_1 = x_2 =p \) (Proposition 10.1) are also accounted for by the formula in (34).
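Both (33) and (34) reduce to the fact that \(A_0\) has multiplicative order 3. A sketch of the verification (ours):

```python
# Verify that A_0^3 = I, which produces the period-3 pattern of (33),
# and reproduce formula (34) from the product (32).
A0 = ((0, 1), (-1, -1))
I = ((1, 0), (0, 1))

def mul(M, B):
    return tuple(tuple(sum(M[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

powers = [I, A0, mul(A0, A0)]
assert mul(powers[2], A0) == I            # A_0^3 = I
assert powers[2] == ((-1, -1), (1, 0))    # matches the m = 2 case of (33)

for p in range(4):
    for r in range(3, 12):
        M = powers[(r - 2) % 3]
        row = (-M[0][0], -M[0][1])        # [-1, 0] . A_0^(r-2), as in (32)
        val = row[0] * (1 - p // 2) - row[1]
        assert val == (1, -(p // 2), p // 2 - 1)[r % 3]  # formula (34)
print("formula (34) confirmed for r = 3, ..., 11")
```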

13 The Monoid of M-Matrices

The four A-matrices \(A_0, A_1, A_2, A_3\) defined in (31) generate a monoid

$$\begin{aligned} \langle A_0, A_1, A_2, A_3 \rangle = \{ A_{i_1} A_{i_2} \cdots A_{i_r} \, \vert \, 0 \le i_1 , i_2, \ldots , i_r \le 3, r \ge 0 \} \end{aligned}$$
(35)

where the empty product is taken to be the \(2 \times 2 \) identity matrix I. The elements of this monoid are precisely the possible matrices \( M( \varvec{x})\) that appear in (29) in the computation of the expansion coefficients. We will refer to them as the M-matrices. The set of M-matrices was determined by experimentation with products of the A-matrices in (31) using Mathematica. Once determined, it is straightforward to prove that these matrices indeed constitute the monoid \(\langle A_0, A_1, A_2, A_3 \rangle \). It is interesting that there are exactly 25 M-matrices. These are labeled \(M_1\) through \(M_{25}\) as shown in Table 3.

Table 3 The 25 elements of the monoid of M-matrices generated by the four A-matrices

Theorem 13.1

The monoid generated by the A-matrices consists of the 25 matrices \(M_1\) through \(M_{25}\) of Table 3.

Proof

The proof is in two parts. First, we show that each matrix \(M_i\) is a product of A-matrices: the last column of Table 4 gives the lexicographically smallest representation of the M-matrix in that row as a product of A-matrices. The proof is completed by showing that the set \(\{ M_1, \ldots , M_{25} \}\) is closed under right multiplication by the A-matrices, as shown in the columns labeled \(A_0\) through \(A_3\) of Table 4. These computations were verified in Mathematica.
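The closure computation is also easy to reproduce independently of Table 4; a sketch (ours) that builds the monoid by repeated right multiplication, starting from the identity:

```python
# Generate the monoid <A_0, A_1, A_2, A_3> by breadth-first closure under
# right multiplication, starting from the 2x2 identity matrix.
def mul(M, B):
    return tuple(tuple(sum(M[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

A = (((0, 1), (-1, -1)), ((0, 0), (-1, -1)),
     ((1, 0), (-1, -1)), ((1, 1), (-1, -1)))
I = ((1, 0), (0, 1))

monoid, frontier = {I}, {I}
while frontier:
    frontier = {mul(M, Ak) for M in frontier for Ak in A} - monoid
    monoid |= frontier
print(len(monoid))  # 25, in agreement with Theorem 13.1
```

The zero matrix, the absorbing element of the monoid, appears in the closure as well.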

Remark 6

For any given integer \(n = n(\varvec{x})\), the matrix \(M(\varvec{x})\) that determines its expansion coefficient via (28) is one of the matrices \(M_1, M_2, \ldots , M_{25}\). So positive integers are partitioned into 25 equivalence classes determined by their canonical representation and the resulting M-matrices calculated through (26) and (27). For \(r\le 2\), this matrix is the identity matrix.

Table 4 The table of M-matrices and the effect of right multiplication of the \(M_i\) by the four A-matrices

A few identities immediately derived from (31) are

$$\begin{aligned} A_1 A_3 = A_3 A_3 = 0 \,, \end{aligned}$$
(36)

as well as \( A_2 A_2 = I\), \(A_1 A_1 = A_2 A_1\), and \( A_0 A_1 = A_3 A_1\). Of particular interest are the products of the A-matrices that vanish, since a vanishing product guarantees that the associated expansion coefficient is zero. For example, the identity \(A_1 A_3 = 0\) means that if \(x= x_1x_2 \cdots x_r \) is the string of residues of the indices of the canonical representation modulo 4, and \(x_2 x_3 \cdots x_r\) contains a subword \( x_i x_{i+1} x_{i+2} \) which is one of 030, 101, 212, or 323, then the expansion coefficient is zero. Similarly, since \(A_3 A_3 = 0 \) and Corollary 11.1 covers the case \( x_1 -x_2 \equiv 3 \!\! \mod 4\), the existence of a subword of x which is one of 012, 123, 230, or 301 implies that the expansion coefficient is zero. Therefore, we have the following theorem.

Theorem 13.2

Suppose n has canonical representation \(n = F_{k_1}+F_{k_2} + \cdots + F_{k_r}\) and \(x= x_1x_2 \cdots x_r \) is the string of residues of the indices modulo 4. If \(x_2 x_3 \cdots x_r\) contains any one of the subwords 101, 212, 323, 030, or if x contains any of the subwords 012, 123, 230, 301 then \(v(n)=0\).

Any vanishing product of A-matrices can be turned into a property of the expansion coefficients. For example, we compute that \(A_1 A_0 A_1 = 0 \) and \( A_3 A_0 A_1 = 0 \) also hold. These are immediately obtained from the identity \(A_0 A_1 = A_3 A_1\) along with the pair of vanishing products in (36). So if the string of residues \(x_2 x_3 \cdots x_r\) of the indices of the canonical representation modulo 4 contains a subword which is one of 0332, 1003, 2110, 3221; or 0110, 1221, 2332 or 3003, then the expansion coefficient is zero.

There are more examples like this. For instance, the following are products of some A-matrices of length 4 that evaluate to the zero matrix:

$$\begin{aligned} A_1A_2A_2A_3 , A_1A_2A_0A_3 , A_1A_0A_2A_1 , A_3A_2A_2A_3 , A_3A_2A_0A_3 , A_3A_0A_2A_1 \,. \end{aligned}$$
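All of the displayed identities and vanishing products are quick to verify with exact integer arithmetic; a sketch (ours):

```python
def mul(*Ms):
    # exact 2x2 integer product of a word in the A-matrices
    P = ((1, 0), (0, 1))
    for B in Ms:
        P = tuple(tuple(sum(P[i][k] * B[k][j] for k in range(2))
                        for j in range(2)) for i in range(2))
    return P

A0, A1, A2, A3 = (((0, 1), (-1, -1)), ((0, 0), (-1, -1)),
                  ((1, 0), (-1, -1)), ((1, 1), (-1, -1)))
Z, I = ((0, 0), (0, 0)), ((1, 0), (0, 1))

assert mul(A1, A3) == Z and mul(A3, A3) == Z          # (36)
assert mul(A2, A2) == I
assert mul(A1, A1) == mul(A2, A1) and mul(A0, A1) == mul(A3, A1)
assert mul(A1, A0, A1) == Z and mul(A3, A0, A1) == Z  # length-3 products
for word in ((A1, A2, A2, A3), (A1, A2, A0, A3), (A1, A0, A2, A1),
             (A3, A2, A2, A3), (A3, A2, A0, A3), (A3, A0, A2, A1)):
    assert mul(*word) == Z                            # length-4 products
print("all identities verified")
```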

Remark 7

In general, identities satisfied by the A-matrices yield forbidden subwords of the string of residues \(x_2 x_3 \cdots x_r \) of the indices of the canonical representation modulo 4, which excludes \(x_1\). This is because the product of the A-matrices that make up \(M ( \varvec{x})\) is independent of \(x_1\), as can be seen from (27).

Incidentally, Table 4 gives a straightforward algorithm to solve the problem of finding the word in the last column to which a given element of the monoid \(\langle A_0, A_1, A_2, A_3 \rangle \) is equal. For example given the word \(w = A_2 A_1 A_0 A_3 A_0 A_0\), we process from left to right to find \(\epsilon \rightarrow M_{20} \rightarrow M_{15} \rightarrow M_{10} \rightarrow M_9 \rightarrow M_{14} \rightarrow M_{13}\). Therefore, \(A_2 A_1 A_0 A_3 A_0 A_0\) is equal to \( A_1A_2\).

14 The Exact Distribution of the Expansion Coefficients

Since the value of the expansion coefficient of an integer \( n = n ( \varvec{x}) \) with r summands in its canonical representation is determined by the residue string \(x= x_1 x_2 \cdots x_r\), we can use the explicit formulas for the \( r=1\) and \(r=2\) cases from Lemma 6.4 and Proposition 10.1, and for \( r>2\), the vector–matrix–vector formula of (28), to compute the distribution of the values \(-1, 0, 1\) of the expansion coefficients as a function of r. These brute-force values are given in Table 5.

Table 5 The exact distribution of the values \(-1, 0 , 1\) of the expansion coefficients among the \(4^r\) possibilities \( x= x_1 x_2 \cdots x_r\) for \( 1 \le r \le 11\)
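The entries of Table 5 for small r can be reproduced by enumerating all \(4^r\) residue strings and evaluating the coefficient through (28), with the single-part base case \(\vartheta (x) = -\lfloor x/2 \rfloor\). A sketch (ours):

```python
from itertools import product

def theta(xs):
    # coefficient from the residue string: single-part base case for r = 1,
    # the vector-matrix-vector formula (28) for r >= 2 (mod 4 arithmetic)
    if len(xs) == 1:
        return -(xs[0] // 2)
    d2, d3 = (xs[0] - xs[1] - 2) % 4, (xs[0] - xs[1] - 3) % 4
    v = [-(d2 // 2), -(d3 // 2)]
    for i in range(1, len(xs) - 1):  # right-multiply by B_2 ... B_{r-1}
        d = (xs[i] - xs[i + 1]) % 4
        B = ((d // 2, ((d - 1) % 4) // 2), (-1, -1))
        v = [v[0] * B[0][0] + v[1] * B[1][0], v[0] * B[0][1] + v[1] * B[1][1]]
    return v[0] * (1 - xs[-1] // 2) - v[1]

table = {}
for r in range(1, 7):
    counts = {-1: 0, 0: 0, 1: 0}
    for xs in product(range(4), repeat=r):
        counts[theta(xs)] += 1
    table[r] = counts
    print(r, counts)
```

For \(r = 1\) this gives the counts \(2, 2, 0\) of the values \(-1, 0, 1\), and for \(r = 2\) the counts \(2, 8, 6\), matching the initial values of the recursions in Sect. 18.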

If we plot the exact values of these probabilities, we see that the values for the \(-1 \) and 1 coefficients oscillate around a common value (Fig. 1).

Fig. 1
figure 1

The plot of the probabilities of the \(-1,0, 1\) expansion coefficients as a function of the number of parts \(r=1,2, \ldots , 13\)

Let \(u_r\) denote the empirical probability that the expansion coefficient is nonzero (this is the sum of the green and the blue lines in Fig. 1). To get an idea of how this ratio decreases using the available numerical data, we compute the values of \( u_{r+1} / u_r\) for \(r = 1, 2, \ldots , 12\). This gives the sequence of ratios

$$\begin{aligned} 1,~ \frac{3}{4}, ~ \frac{5}{6}, ~ \frac{9}{10}, ~ \frac{5}{6}, ~ \frac{5}{6}, ~ \frac{43}{50}, ~ \frac{73}{86}, ~ \frac{123}{146}, ~ \frac{209}{246}, ~ \frac{355}{418}, ~ \frac{601}{710} ~. \end{aligned}$$

The decimal expansions of the last three ratios are approximately 0.8495, 0.8492, 0.8464. We will show in Theorem 17.1 that the probabilities of both \(-1\) and 1 go to zero as \( ~\frac{2+\Phi }{10} \left( \frac{\Phi }{2} \right) ^r ~\) as r goes to infinity, where \( \Phi = (1 + \sqrt{5})/2\) is the golden ratio. For this we turn the M-matrices into a Markov chain.

15 M-Matrices as a Finite Markov Chain

We assume that the modulo 4 remainders \(x_i\) of the indices in the canonical representation of \(n = n( \varvec{x})\) are distributed uniformly and independently. Thus, for a given r, the components \(x_i \in \{0,1,2,3\}\) of \( \varvec{x}= ( x_1, x_2, \ldots , x_r)\) are each picked independently with probability \( {\textstyle \frac{1}{4}}\). The Markov chain we construct has as its states \(M_1\) through \(M_{25}\). There is a transition from state \(M_i\) to state \(M_j\) if \(M_i A_k = M_j\) according to Table 4, where \( A_k\) is one of \(A_0\), \(A_1\), \(A_2\), \(A_3\). The probability of each transition is \({\textstyle \frac{1}{4}}\). We refer to the resulting finite Markov chain as the M-chain. The underlying state diagram of the M-chain (without the probabilities labeled) is given in Fig. 2. Since all transitions from \(M_{12}\) are back to itself, \(M_{12}\) is absorbing. The process starts at state \(M_{23}\).

Fig. 2
figure 2

The Markov chain (M-chain) defined by the matrices \(M_1, M_2, \ldots , M_{25}\) as states. The start state is \(M_{23}\); \(M_{12}\) is the absorbing state. The probability is \(\frac{1}{4}\) for each transition

Fig. 3
figure 3

Sketch of the structure of the M-chain with the communicating classes \(C_1\), \(C_2\), \(C_3\), \(C_4\) and \(C_5\) as given in (37)

15.1 The Nature of the M-Chain

The M-chain decomposes into strongly connected components (communicating classes), which are given in (37).

$$\begin{aligned} \nonumber C_1= & {} \{ M_{12} \}\\ \nonumber C_2= & {} \{ M_1, M_5, M_7, M_{18}, M_{22}, M_{25} \}\\ C_3= & {} \{ M_9, M_{10}, M_{11}, M_{13}, M_{14}, M_{15} \} \\ \nonumber C_4= & {} \{ M_4, M_6, M_8, M_{17}, M_{21}, M_{24} \}\\ \nonumber C_5= & {} \{ M_2, M_3, M_{16}, M_{19}, M_{20}, M_{23} \}\,. \nonumber \end{aligned}$$
(37)

The restriction of the chain to each of the classes is regular, in the sense that the transition matrix restricted to the elements of the class has a power with all entries positive. For example, the cubes of the restricted matrices for the classes \(C_2, C_3, C_4\) are identical and are given by the matrix on the left in (38), whereas the cube of the restricted transition matrix for \(C_5\) is the one on the right in (38).

$$\begin{aligned} \begin{bmatrix} \frac{3}{32} &{} \frac{1}{32} &{} \frac{1}{32} &{} \frac{3}{32} &{} \frac{3}{32} &{} \frac{5}{32} \\ \frac{5}{32} &{} \frac{3}{32} &{} \frac{3}{32} &{} \frac{1}{32} &{} \frac{1}{32} &{} \frac{3}{32} \\ \frac{1}{16} &{} \frac{1}{16} &{} \frac{1}{16} &{} \frac{1}{16} &{} \frac{1}{16} &{} \frac{5}{16} \\ \frac{5}{16} &{} \frac{1}{16} &{} \frac{1}{16} &{} \frac{1}{16} &{} \frac{1}{16} &{} \frac{1}{16} \\ \frac{3}{32} &{} \frac{1}{32} &{} \frac{1}{32} &{} \frac{3}{32} &{} \frac{3}{32} &{} \frac{5}{32} \\ \frac{5}{32} &{} \frac{3}{32} &{} \frac{3}{32} &{} \frac{1}{32} &{} \frac{1}{32} &{} \frac{3}{32} \end{bmatrix} ~,~~~ \begin{bmatrix} \frac{1}{64} &{} \frac{1}{64} &{} \frac{1}{32} &{} \frac{1}{64} &{} \frac{1}{32} &{} \frac{1}{64} \\ \frac{1}{64} &{} \frac{1}{64} &{} \frac{1}{64} &{} \frac{1}{32} &{} \frac{1}{64} &{} \frac{1}{32} \\ \frac{1}{32} &{} \frac{1}{32} &{} \frac{1}{64} &{} \frac{1}{64} &{} \frac{1}{64} &{} \frac{1}{64} \\ \frac{1}{32} &{} \frac{1}{32} &{} \frac{1}{64} &{} \frac{1}{64} &{} \frac{1}{64} &{} \frac{1}{64} \\ \frac{1}{64} &{} \frac{1}{64} &{} \frac{1}{64} &{} \frac{1}{32} &{} \frac{1}{64} &{} \frac{1}{32} \\ \frac{1}{64} &{} \frac{1}{64} &{} \frac{1}{32} &{} \frac{1}{64} &{} \frac{1}{32} &{} \frac{1}{64} \end{bmatrix} ~. \end{aligned}$$
(38)

The schematic of the patterns of communication between these classes is as shown in Fig. 3.
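This class structure can be recovered (a sketch, ours) without reference to the \(M_i\) labels, by generating the monoid of Theorem 13.1, forming the transition relation by right multiplication, and grouping states by mutual reachability:

```python
# Build the abstract M-chain on the monoid generated by the A-matrices and
# compute its communicating classes via a naive transitive closure.
def mul(M, B):
    return tuple(tuple(sum(M[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

A = (((0, 1), (-1, -1)), ((0, 0), (-1, -1)),
     ((1, 0), (-1, -1)), ((1, 1), (-1, -1)))
I = ((1, 0), (0, 1))
monoid, frontier = {I}, {I}
while frontier:
    frontier = {mul(M, Ak) for M in frontier for Ak in A} - monoid
    monoid |= frontier

succ = {M: {mul(M, Ak) for Ak in A} for M in monoid}
reach = {M: set(s) for M, s in succ.items()}
changed = True
while changed:  # only 25 states, so speed is no concern
    changed = False
    for M in monoid:
        new = set().union(*(reach[v] for v in reach[M])) - reach[M]
        if new:
            reach[M] |= new
            changed = True
classes = {frozenset(v for v in monoid if M in reach[v] and v in reach[M])
           for M in monoid}
print(sorted(len(c) for c in classes))  # [1, 6, 6, 6, 6], as in (37)
```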

16 Eigenvalues and Expansion Coefficient Probabilities

If we are in state \(M_i\) of the Markov chain with remainders \(x= x_1 x_2 \cdots x_r \) with \(r\ge 3\), then the coefficient is given by

$$\begin{aligned} \left[ - \left\lfloor \frac{x_1 - x_2 -2}{2} \right\rfloor , - \left\lfloor \frac{x_1 - x_2 -3}{2} \right\rfloor \right] M_i \begin{bmatrix} 1- \lfloor \frac{x_{r}}{2}\rfloor \\ -1 \end{bmatrix} \,. \end{aligned}$$

For example for \(i=2\), this value is

$$\begin{aligned} - \left\lfloor \frac{x_r}{2} \right\rfloor \left\lfloor \frac{x_1 - x_2 -2}{2} \right\rfloor + \left\lfloor \frac{x_1 - x_2 -3}{2} \right\rfloor \,. \end{aligned}$$
(39)

Assuming that the \(x_i\) are equally likely and counting the frequency of the values \( -1, 0, 1 \) taken by (39) for \( 0 \le x_1, x_2, x_r \le 3\), we find that the probability that the value of the expansion coefficient at \(M_2\) is \( -1, 0 , 1\) is \( \frac{1}{8}, \frac{1}{2}, \frac{3}{8} \), respectively. A similar calculation for the M-matrix \(M_3\) gives the probabilities of \( -1, 0, 1\) as \(\frac{1}{2}, \frac{1}{2}, 0 \). Calculating these probabilities for each \(M_i\) in Mathematica, we obtain three 25-dimensional vectors \(V_{-1}, V_0, V_1\), whose i-th coordinates give the probabilities of the coefficient being \(-1\), 0, and 1, respectively, at that state.

$$\begin{aligned} V_{-1}= & {} \left[ {\textstyle \frac{1}{4}},{\textstyle \frac{1}{8}},{\textstyle \frac{1}{2}},{\textstyle \frac{1}{8}},0,{\textstyle \frac{1}{8}},{\textstyle \frac{1}{2}},{\textstyle \frac{1}{4}}, {\textstyle \frac{1}{4}},0,{\textstyle \frac{1}{2}},0,0,{\textstyle \frac{1}{4}},0,{\textstyle \frac{1}{8}},{\textstyle \frac{1}{4}},0,{\textstyle \frac{1}{8}}, {\textstyle \frac{1}{2}},{\textstyle \frac{1}{8}},{\textstyle \frac{1}{4}},{\textstyle \frac{1}{8}},{\textstyle \frac{1}{8}},0\right] \\ V_{0}= & {} \left[ {\textstyle \frac{3}{4}},{\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}},{\textstyle \frac{3}{4}},{\textstyle \frac{3}{4}},{\textstyle \frac{3}{4}},{\textstyle \frac{1}{2}}, {\textstyle \frac{1}{2}},{\textstyle \frac{3}{4}},{\textstyle \frac{3}{4}},{\textstyle \frac{1}{2}},1, {\textstyle \frac{1}{2}},{\textstyle \frac{3}{4}},{\textstyle \frac{3}{4}},{\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}}, {\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}}, {\textstyle \frac{1}{2}},{\textstyle \frac{3}{4}},{\textstyle \frac{3}{4}},{\textstyle \frac{1}{2}},{\textstyle \frac{3}{4}},{\textstyle \frac{3}{4}}\right] \\ V_{1}= & {} \left[ 0,{\textstyle \frac{3}{8}},0,{\textstyle \frac{1}{8}},{\textstyle \frac{1}{4}},{\textstyle \frac{1}{8}},0,{\textstyle \frac{1}{4}},0,{\textstyle \frac{1}{4}},0,0,{\textstyle \frac{1}{2}},0, {\textstyle \frac{1}{4}},{\textstyle \frac{3}{8}},{\textstyle \frac{1}{4}}, {\textstyle \frac{1}{2}},{\textstyle \frac{3}{8}},0,{\textstyle \frac{1}{8}},0,{\textstyle \frac{3}{8}},{\textstyle \frac{1}{8}},{\textstyle \frac{1}{4}}\right] \end{aligned}$$

We let \( V_{-1}(i), V_0(i)\) and \(V_1(i)\) denote the i-th coordinate of these vectors for \( 1 \le i \le 25\). The spectrum of the transition matrix P of the M-chain is as follows.

$$\begin{aligned} \text{ Spec } \, (P) = \left( \begin{array}{ccccccccc} 1 &{} \frac{1}{4} \left( 1+\sqrt{5}\right) &{} \frac{1}{2} &{} 0 &{} -\frac{1}{4} &{} \frac{1}{4}\left( 1-\sqrt{5}\right) &{} -\frac{1}{2} \vspace{1mm} \\ 1 &{} 3 &{} 4 &{} 9 &{} 2 &{} 3 &{} 3 \end{array} \right) ~, \end{aligned}$$

where the first row lists the eigenvalues of P and the second row gives their multiplicities. The minimal polynomial of P is found to be

$$\begin{aligned} \textstyle {\frac{1}{64}} (x-1) x (2 x-1) (2 x+1) (4 x+1) \left( 4 x^2-2 x-1\right) \,. \end{aligned}$$

Since the roots of the minimal polynomial are distinct, P is diagonalizable and can be written in the form \( P = Q D Q^{-1}\) where D is diagonal. Using Mathematica to do this calculation, we compute the \( 25 \times 25\) basis change matrix Q and the diagonal matrix D. The eigenvalues in D are

$$\begin{aligned}{} & {} -{\textstyle \frac{1}{2}},-{\textstyle \frac{1}{2}},-{\textstyle \frac{1}{2}},-{\textstyle \frac{1}{4}},-{\textstyle \frac{1}{4}},0,0,0,0,0,0,0,0,0,{\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}}, {\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}},1,\\{} & {} {\textstyle \frac{1}{4}}\left( 1-\sqrt{5}\right) ,{\textstyle \frac{1}{4}}\left( 1-\sqrt{5}\right) , {\textstyle \frac{1}{4}}\left( 1-\sqrt{5}\right) ,{\textstyle \frac{1}{4}}\left( 1+\sqrt{5}\right) ,{\textstyle \frac{1}{4}}\left( 1+\sqrt{5}\right) , {\textstyle \frac{1}{4}}\left( 1+\sqrt{5}\right) . \end{aligned}$$

This ordering of the eigenvalues down the diagonal is provided by Mathematica 9 when the command JordanDecomposition is used to find D and Q.

17 Asymptotic Expansions

Using the fact that the k-step probabilities of the chain are given by \( P^k = Q D^k Q^{-1}\), we calculate the exact entries in row 23 of \(P^k\), which corresponds to the start state \(M_{23}\). The elements \(p_1^{(k)}, p_2^{(k)}, \ldots , p_{25}^{(k)}\) of the 23rd row of \(P^k\) from the first column to the 25th are found to be

$$\begin{aligned} p_1^{(k)} = p_{25}^{(k)}= & {} \frac{4^{-k}}{24} \left( -2 \left( 8 (-1)^k+2^k\right) +3 \left( 3-\sqrt{5}\right) \left( 1+\sqrt{5}\right) ^k \right. \\{} & {} \left. +3 \left( 3+\sqrt{5}\right) \left( 1-\sqrt{5}\right) ^k\right) \\ p_2^{(k)} = p_3^{(k)} = p_{19}^{(k)} = p_{23}^{(k)}= & {} \frac{4^{-k}}{6} \left( 2 (-1)^k+2^k\right) \\ p_4^{(k)} = p_{15}^{(k)}= & {} \frac{4^{-k}}{8} \left( 3 (-2)^k-2^k-\left( 1+\sqrt{5}\right) \left( 1-\sqrt{5}\right) ^k\right. \\{} & {} \left. -\left( 1-\sqrt{5}\right) \left( 1+\sqrt{5}\right) ^k\right) \\ p_5^{(k)} = p_7^{(k)} = p_{18}^{(k)}= p_{22}^{(k)}= & {} \frac{4^{-k}}{120} \left( 10 \left( 4 (-1)^k-2^k\right) -3 \left( 5+3 \sqrt{5}\right) \left( 1-\sqrt{5}\right) ^k \right. \\{} & {} \left. -3 \left( 5-3 \sqrt{5}\right) \left( 1+\sqrt{5}\right) ^k\right) \\ p_6^{(k)} = p_8^{(k)} = p_{13}^{(k)} = p_{14}^{(k)}= & {} \frac{4^{-k}}{120} \left( 5 \left( 3 (-2)^k-8 (-1)^k-2^k\right) +3 \left( 5-\sqrt{5}\right) \left( 1+\sqrt{5}\right) ^k \right. \\{} & {} \left. +3 \left( 5+\sqrt{5}\right) \left( 1-\sqrt{5}\right) ^k\right) \\ p_9^{(k)} = p_{24}^{(k)}= & {} \frac{4^{-k}}{24} \left( -9 (-2)^k+16 (-1)^k-2^k-3 \left( 1+\sqrt{5}\right) \left( 1-\sqrt{5}\right) ^k \right. \\{} & {} \left. -3 \left( 1-\sqrt{5}\right) \left( 1+\sqrt{5}\right) ^k\right) \\ p_{10}^{(k)} = p_{11}^{(k)}= p_{17}^{(k)} = p_{21}^{(k)}= & {} \frac{4^{-k}}{40} \left( -5 \left( (-2)^k+2^k\right) +\left( 5-\sqrt{5}\right) \left( 1+\sqrt{5}\right) ^k \right. \\{} & {} \left. +\left( 5+\sqrt{5}\right) \left( 1-\sqrt{5}\right) ^k\right) \\ p_{12}^{(k)}= & {} \frac{4^{-k}}{20} \left( 5\ 2^{k+1} \left( 2^{k+1}+1\right) - \left( 15+7 \sqrt{5}\right) \left( 1+\sqrt{5}\right) ^k \right. \\{} & {} \left. -\left( 15 - 7 \sqrt{5}\right) \left( 1-\sqrt{5}\right) ^k\right) \\ p_{16}^{(k)} = p_{20}^{(k)}= & {} \frac{4^{-k}}{6} \left( 2^k-4 (-1)^k\right) \, . \\ \end{aligned}$$
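The expression for \(p_{12}^{(k)}\) can be cross-checked without Table 4 by running the chain exactly in rational arithmetic, starting from the identity matrix (the state \(M_{23}\)) and treating the zero matrix as the absorbing state \(M_{12}\). Writing the irrational part as the integer sequence \(t_k = (15+7\sqrt{5})(1+\sqrt{5})^k + (15-7\sqrt{5})(1-\sqrt{5})^k\), which satisfies \(t_0 = 30\), \(t_1 = 100\), \(t_k = 2t_{k-1} + 4t_{k-2}\), a sketch (ours):

```python
from fractions import Fraction

A = (((0, 1), (-1, -1)), ((0, 0), (-1, -1)),
     ((1, 0), (-1, -1)), ((1, 1), (-1, -1)))
Z = ((0, 0), (0, 0))

def mul(M, B):
    return tuple(tuple(sum(M[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# exact distribution over states after k steps, starting from the identity
dist = {((1, 0), (0, 1)): Fraction(1)}
t = [30, 100]       # t_0, t_1 of the integer sequence described above
for k in range(1, 13):
    new = {}
    for M, p in dist.items():
        for Ak in A:
            P = mul(M, Ak)
            new[P] = new.get(P, Fraction(0)) + p / 4
    dist = new
    if k >= 2:
        t.append(2 * t[-1] + 4 * t[-2])
    expected = Fraction(5 * 2 ** (k + 1) * (2 ** (k + 1) + 1) - t[k],
                        20 * 4 ** k)
    assert dist.get(Z, Fraction(0)) == expected
print("absorption probabilities match the closed form for k = 1, ..., 12")
```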

For any i, the probability that starting in state \(M_{23}\), the M-chain is in state \(M_i\) after k steps is \(p_i^{(k)}\). In state \(M_i\), the probability that the expansion coefficient is \(-1\) is \(V_{-1}(i)\). Summing the contribution over i, the probability that after a k-step walk on the M-chain starting at \(M_{23}\), the coefficient will equal \(-1\) is found to be

$$\begin{aligned} \sum _{i=1}^{25} V_{-1} (i) p_i^{(k)} = \frac{4^{-k}}{40} \left( \left( 5+2 \sqrt{5}\right) \left( 1+\sqrt{5}\right) ^k +\left( 5-2 \sqrt{5}\right) \left( 1-\sqrt{5}\right) ^k -5 (-2)^k \right) , \end{aligned}$$

after the simplifications made by Mathematica. Similarly, the probability that after a k-step walk on the M-chain the coefficient will be 1 is

$$\begin{aligned} \sum _{i=1}^{25} V_{1} (i) p_i^{(k)} = \frac{4^{-k}}{40} \left( \left( 5+2 \sqrt{5}\right) \left( 1+\sqrt{5}\right) ^k +\left( 5-2 \sqrt{5}\right) \left( 1-\sqrt{5}\right) ^k +5 (-2)^k \right) , \end{aligned}$$

and the probability that after a k-step walk on the M-chain the coefficient will be 0 is

$$\begin{aligned} \sum _{i=1}^{25} V_{0} (i) p_i^{(k)} = 1 - \frac{4^{-k}}{20} \left( \left( 5+2 \sqrt{5}\right) \left( 1+\sqrt{5}\right) ^k +\left( 5-2 \sqrt{5}\right) \left( 1-\sqrt{5}\right) ^k \right) . \end{aligned}$$

Next, we look at these expressions in terms of the number of summands of the canonical representation \(r = k+2\). Making the substitution and simplifying, we obtain the following expressions for the probability as a function of r, that the expansion coefficient is \(-1, 1\), and 0, respectively:

$$\begin{aligned}{} & {} \frac{4^{-r}}{20} \left( \left( 5+ \sqrt{5}\right) \left( 1+\sqrt{5}\right) ^r +\left( 5- \sqrt{5}\right) \left( 1-\sqrt{5}\right) ^r -10 (-2)^r \right) \,, \end{aligned}$$
(40)
$$\begin{aligned}{} & {} \frac{4^{-r}}{20} \left( \left( 5+ \sqrt{5}\right) \left( 1+\sqrt{5}\right) ^r +\left( 5- \sqrt{5}\right) \left( 1-\sqrt{5}\right) ^r +10 (-2)^r \right) \, , \end{aligned}$$
(41)
$$\begin{aligned}{} & {} 1 - \frac{4^{-r}}{10} \left( \left( 5+ \sqrt{5}\right) \left( 1+\sqrt{5}\right) ^r +\left( 5-\sqrt{5}\right) \left( 1-\sqrt{5}\right) ^r \right) \,. \end{aligned}$$
(42)

The above expressions yield the following theorem in which the asymptotic probabilities are written in terms of the golden ratio.

Theorem 17.1

The asymptotic probabilities that the expansion coefficient of an exponent whose canonical representation has r summands equals \(-1\) and that it equals 1 are both \( \frac{2+\Phi }{10} \left( \frac{\Phi }{2} \right) ^r \), and the asymptotic probability that it equals 0 is \( 1 - \frac{2+\Phi }{5} \left( \frac{\Phi }{2} \right) ^r \), where \( \Phi \) is the golden ratio.

Remark 8

In Theorem 17.1, one important point is that the probability is defined as the density of the canonical representation of the integers in terms of the Fibonacci numbers.

18 Recursions

From the form of the expressions in (40), (41) and (42), we can work backwards from the Binet-style formulas, using the roots of the characteristic equation, to recover the recurrence relations that give rise to them. Keeping the denominator as the total number of possibilities \(4^r\) in each expression, the numerators are found to satisfy the following recursions for \( r \ge 3\):

$$\begin{aligned} \begin{array}{llll} ~~~&{}m_r = 8 m_{r-2} + 8 m_{r-3} &{} &{} (m_0=0, m_1=2, m_2 = 2) , \\ ~~~&{} o_r = 8 o_{r-2} + 8 o_{r-3} &{} &{} (o_0=1, o_1=0, o_2 = 6) , \\ ~~~&{} z_r = 6 z_{r-1} -4 z_{r-2} -16 z_{r-3} &{} &{} (z_0=0, z_1=2, z_2 = 8) ,\\ \end{array} \end{aligned}$$

for the number \(m_r\), \(o_r\), \(z_r\) of \(-1, 1\), and 0 coefficients, respectively, as a function of r.
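As a consistency check (ours), the irrational part of (40)–(42) can be carried as the integer sequence \(s_r = (5+\sqrt{5})(1+\sqrt{5})^r + (5-\sqrt{5})(1-\sqrt{5})^r\), which satisfies \(s_0 = 10\), \(s_1 = 20\), \(s_r = 2 s_{r-1} + 4 s_{r-2}\); then \(m_r = (s_r - 10(-2)^r)/20\), \(o_r = (s_r + 10(-2)^r)/20\), and \(z_r = 4^r - s_r/10\). The recursions above reproduce exactly these values:

```python
# Run the three recursions and compare against the Binet-style closed forms
# of (40)-(42), carried exactly via the integer sequence s_r.
m, o, z, s = [0, 2, 2], [1, 0, 6], [0, 2, 8], [10, 20, 80]
for r in range(3, 21):
    m.append(8 * m[r - 2] + 8 * m[r - 3])
    o.append(8 * o[r - 2] + 8 * o[r - 3])
    z.append(6 * z[r - 1] - 4 * z[r - 2] - 16 * z[r - 3])
    s.append(2 * s[r - 1] + 4 * s[r - 2])
for r in range(21):
    assert 20 * m[r] == s[r] - 10 * (-2) ** r   # numerator of (40)
    assert 20 * o[r] == s[r] + 10 * (-2) ** r   # numerator of (41)
    assert m[r] + o[r] + z[r] == 4 ** r         # counts partition the strings
print("recursions agree with the closed forms for r <= 20")
```

The characteristic polynomials match as well: \((x^2-2x-4)(x+2) = x^3-8x-8\) for \(m_r\) and \(o_r\), and \((x^2-2x-4)(x-4) = x^3-6x^2+4x+16\) for \(z_r\).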

Remark 9

The interesting point about these three recursions is that even though they have been obtained from probabilistic considerations by looking at the behavior of the Markov chain of M-matrices, they hold for the actual values computed by brute force and represented in Table 5. There is probably a simpler combinatorial explanation.

19 Conclusions and Remarks

The main result presented here is that the coefficients of the expansion of (2) are from \( \{ -1, 0, 1\}\), as is the case in the expansion of Euler’s product (1).

The coefficients in the development of (2) can be efficiently calculated, and explicit formulas can be obtained when various interesting sequences of integers are taken as exponents. However, a result like the pentagonal number theorem, with explicit identification of the \( \pm 1 \) coefficients and a simple combinatorial proof using Ferrers’ diagrams with Fibonacci numbers as parts, seems difficult. Instead of the partitions with distinct parts that occur in the combinatorial interpretation of the expansion of (1), the relevant partitions for this problem seem to be the integer partitions formed by the indices of the canonical Fibonacci representation of n.

There are conditions in terms of forbidden subwords of the string x, whose letters are the modulo 4 remainders of the indices of canonical representation of an exponent, which guarantee that the associated expansion coefficient vanishes.

A more extensive formal language oriented study of the properties of the coefficients is made possible by the construction of a deterministic finite automaton over the alphabet \(\Sigma = \{ A_0, A_1, A_2,A_3 \}\) which has as its states the states of the M-chain in Fig. 2, and where the transitions are labeled with the letters of \(\Sigma \) as indicated by right multiplication in Table 4.

Various submonoids of the M-matrices give results on the coefficients of subsequences of exponents. We have considered the case of \(\langle A_0\rangle \) in Sect. 12. The analysis of this submonoid gives the result in Theorem 12.1 in which the parts in the canonical representation of the exponent are all equivalent to the same remainder p modulo 4. Study of various other submonoids gives results of a similar nature from which asymptotic expressions can also be obtained. The simplification of the analysis in these cases is made possible by making use of the relations satisfied by the A-matrices as discussed briefly in Sect. 13.

An example of a result based on the study of submonoids and the uniqueness of the canonical representation is Proposition 19.1, which we state here without proof.

Proposition 19.1

Writing the product in (2) as

$$\begin{aligned} \prod _{k\ge 2} (1 - y^{F_{k}}) = \prod _{k\ge 1} (1 - y^{F_{4k}}) \prod _{k\ge 1} (1 - y^{F_{4k+1}}) \prod _{k\ge 0} (1 - y^{F_{4k+2}}) \prod _{k\ge 0} (1 - y^{F_{4k+3}}) \,, \end{aligned}$$

the power series expansion of any one of the products on the right, or of the product of any pair of them, has coefficients that lie in \(\{-1,0,1\}\).