1 Introduction

It is well known that scalar homogeneous linear difference equations of order \(r\ge 2\), defined by

$$\begin{aligned} y_{n+r}=a_1(n)y_{n+r-1} + \cdots + a_{r}(n)y_{n}, \,\,\,\,\,\, \text{ for } \,\,\, n\, \ge \,0, \end{aligned}$$
(1.1)

where the coefficients \(a_1(n), \dots , a_{r}(n)\) are functions of n, occur in several fields of mathematics and applied sciences. Several methods have been provided in the literature for solving Eq. (1.1) (see, for example, [15, 17], and references therein). Recently, the homogeneous linear difference equations (1.1) with periodic coefficients, i.e., \(a_j(n+p)=a_j(n)\), have been solved in [4, 5], using properties of the generalized Fibonacci sequences in the algebra of square matrices. More precisely, in [4], Eq. (1.1) has been studied under its equivalent matrix equation:

$$\begin{aligned} {Y}(n+1)= {C}(n) {Y}(n),\,\,\,\,for\,\,n\, \ge \,0, \end{aligned}$$
(1.2)

where Y(n) and C(n) are given as follows:

$$\begin{aligned} C(n)=\begin{pmatrix} a_{1}(n) &{} a_{2}(n)&{}\cdots &{}a_{r}(n) \\ 1 &{} 0 &{} \cdots &{} 0 \\ 0 &{} 1 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ 0 &{} \cdots &{} 1&{}0 \end{pmatrix} \text{ and } Y(n)=\begin{pmatrix} y(n+r-1) \\ y(n+r-2) \\ \vdots \\ y(n) \end{pmatrix}. \end{aligned}$$

In the sequel, we use the notation \(\displaystyle C(n)=C[a_{1}(n),\,\,\, a_{2}(n),\,\,\, \cdots ,\,\,\, a_{r}(n)]_{r\times r}\) for these companion matrices of order r. Since \(\displaystyle Y(n+1)=C(n)\cdots C(1)C(0)Y(0)\), the main problem in studying the matrix equation (1.2) reduces to the study of the product of companion matrices:

$$\begin{aligned} B(n)=C(n)C(n-1)\cdots C(1)C(0), \text{ for } n\ge 1. \end{aligned}$$

In recent years, the product of companion matrices has attracted much attention, because this product occurs in various fields of mathematics and applied sciences, such as Floquet theory for linear difference systems (see [4, 5, 16, 17]). Diverse methods for computing the product of companion matrices have been proposed in the literature. For instance, in [16], the authors developed an explicit formula for the entries of the product of companion matrices. Then, they applied their results to solve linear difference equations with variable coefficients. Another expression for the product of companion matrices was obtained in [17], based on the study of solutions of non-homogeneous and homogeneous linear difference equations of order N with variable coefficients. Recently, it was shown in [4, 5] that the product of companion matrices plays a central role in investigating a large class of periodic discrete homogeneous difference equations via generalized Fibonacci sequences. Moreover, with generalized Fibonacci sequences as the key tool, there are still some interesting and relevant problems that can be examined.

In this paper, we aim to study the linear difference matrix equations defined by

$$\begin{aligned} Y_{n+r}=A_1(n) Y_{n+r-1} + \cdots + A_{r}(n)Y_{n}, \,\,\,\,\,\, \text{ for } \,\,\, n\, \ge \, 0, \end{aligned}$$
(1.3)

where \(Y_0, \cdots , Y_{r-1}\) are in \(\mathbb {C}^{d}\) and stand for the initial values, and the coefficients \(A_1(n), \cdots , A_{r}(n)\) are square matrices in \(\mathbb {C}^{d\times d}\), the algebra of square matrices of order d with complex coefficients, representing p-periodic matrix functions of n, that is, \(A_{j}(n+p)=A_j(n)\) for every \(n\ge 0\), where \(p=\min \{N\ge 1 : A_j(n+N)=A_j(n), \text{ for } j=1,\dots , r\, \text{ and } n\ge 0\}\).

The class of discrete linear matrix equations (1.3) appears in many applied fields, such as economics, population dynamics, and signal processing. For instance, periodic matrix models are often used to study seasonal temporal variation of structured populations (see [6] for example). They can also occur in many practical control systems (see [20] for example).

In our exploration, we study properties of the periodic matrix difference equations (1.3) through their close relation with the product of companion matrices in blocks. First, we formulate the main result on the solutions of the linear matrix difference equation (1.3) in terms of products of companion matrices in blocks and powers of matrices in blocks. As a matter of fact, we utilize the generalized Cayley–Hamilton theorem to derive a new result that allows us to compute the powers of matrices in blocks. Moreover, we outline a recursive method leading to two algorithms for computing the finite product of companion matrices in blocks. To highlight the importance of our results, special cases, significant examples and applications are provided.

The outline of this study is as follows. Section 2 is devoted to some basic properties of the periodic matrix difference equations (1.3), where the product of periodic companion matrices in blocks is considered. Section 3 concerns the study of the powers of matrices in the algebra of square matrices in blocks. More precisely, using the generalized Cayley–Hamilton Theorem and the linear recursiveness in the algebra of square matrices in blocks, we give an explicit expression of the powers of a square matrix in blocks. Here, the Kronecker product (or tensor product) of matrices plays a central role. In Sect. 4, we develop two algorithms for computing the finite product of companion matrices in blocks, where a recursive sequence of matrices is considered. In Sect. 5, gathering the results of Sects. 2, 3 and 4, we employ them to examine some special classes of periodic matrix difference equations (1.3).

2 Periodic matrix difference equations: general setting

In the same way as in the scalar case, the matrix equation associated to Eq. (1.3) is given by

$$\begin{aligned} Z(n+1)=C(n)Z(n)\,\,\,\,for\,\,\, n \ge \,0, \end{aligned}$$
(2.1)

where \(Z(n)=(Y(n+r-1), Y(n+r-2),...,Y(n))^\top \in \mathbb {C}^{dr}\) and C(n) is a matrix of order dr, i.e., in \(\mathbb {C}^{dr\times dr}\) given by

$$\begin{aligned} C(n)=C[A_{1}(n), \cdots , A_{r}(n)]_{r\times r}=\begin{pmatrix} A_{1}(n) &{} A_{2}(n) &{}\cdots &{} A_{r}(n) \\ \mathbf {1}_{d\times d} &{} \Theta _{d\times d} &{}\cdots &{}\Theta _{d\times d}\\ \vdots &{} \ddots &{}\ddots &{}\vdots \\ \Theta _{d\times d} &{}\cdots &{} \mathbf {1}_{d\times d} &{}\Theta _{d\times d} \end{pmatrix}_{r\times r}, \end{aligned}$$
(2.2)

where \(\mathbf {1}_{d\times d}\) and \(\Theta _{d\times d}\) are, respectively, the identity matrix and the zero matrix of order \({d\times d}\). We observe that the matrix C(n) is a companion matrix in blocks. In the sequel, we use the notation \(\displaystyle C(n)=C[A_{1}(n),\,\,\, A_{2}(n),\,\,\, \cdots ,\,\,\, A_{r}(n)]_{r\times r}\) for these companion matrices in blocks of order r. As in the scalar case, the main problem in studying the matrix equation (2.1) reduces to the study of the following product of companion matrices in blocks:

$$\begin{aligned} B(n)=C(n-1)\cdots C(0), \text{ for } n\ge 1. \end{aligned}$$

Since \(A_j(n+p)=A_j(n)\), for every j (\(1\le j\le r\)) and \(n\ge 0\), we then infer that \(C(n+p)=C(n)\), for every \(n\ge 0\), where \(p\ge 1\) is the period. Here we are concerned with the finite product of companion matrices in blocks. It is worthwhile to point out that this class of companion matrices in blocks arises in various mathematical and applied fields (see, for example, [18]). In this work, we will emphasize its key role for providing the solutions of Eq. (2.1).

Let us consider the matrix equations (2.1) and (2.2) related to the periodic matrix difference equation (1.3). Suppose that \(n=kp\). Then we have

$$\begin{aligned} Z(kp)=C(kp-1)Z(kp-1)=C(kp-1)C(kp-2)\cdots C(kp-p)Z(kp-p)=BZ(kp-p), \end{aligned}$$

where \(Z(kp)=(Y_{kp+r-1},Y_{kp+r-2},\cdots ,Y_{kp})^\top \) and \(B=C(p-1)\cdots C(0)\). Due to the periodicity condition, it follows that Eq. (2.1) takes the form

$$\begin{aligned} Z(kp)=B^{k}Z(0) \end{aligned}$$

where \(Z(0)=(Y_{r-1},\cdots ,Y_{0})^\top \). For \(n=kp+1\), we get

$$\begin{aligned} Z(kp+1)=C(kp)Z(kp)=C(0)B^{k}Z(0), \end{aligned}$$

and for \(n=kp+p-1\), we have

$$\begin{aligned} Z(kp+p-1)=C(kp+p-2)Z(kp+p-2)=C(p-2)\cdots C(0) B^{k} Z(0). \end{aligned}$$

Thus, for every \(n \equiv i\,[p]\), i.e., \(n=kp+i\) (\(0\le i\le p-1\)), the solution of the matrix equations (2.1) and (2.2) related to the periodic matrix difference equation (1.3) is given as follows.

Theorem 2.1

Consider the periodic matrix equations (2.1) and (2.2) of period \(p\ge 2\), with the initial conditions vector \(Z(0)=(Y_{r-1},\cdots ,Y_{0})^\top \). Then, the solution Z(n) of (2.1) and (2.2) is given by

$$\begin{aligned} Z(kp+i+1)= C(i) \cdots C(0) B^{k}Z(0) \,\, , \,\, i\, = \, 0,\, \cdots ,\, p-1, \end{aligned}$$
(2.3)

where \(B=C(p-1) \cdots C(0)\).

Theorem 2.1 shows that there is a close link between periodic matrix difference equations and products of companion matrices in blocks. More precisely, expression (2.3) involves a finite product of companion matrices in blocks and the powers of the matrix \(B=C(p-1) \cdots C(0)\), which is itself a finite product of companion matrices in blocks.
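
For readers who wish to experiment, the following short Python (numpy) sketch checks formula (2.3) numerically; the dimensions d, r, the period p and the random blocks \(A_j(n)\) are arbitrary test data of our own choosing, not data from the paper.

```python
# A numerical check of Theorem 2.1 (a sketch): d, r, the period p and the
# blocks A_j(n) below are arbitrary test data of our own choosing.
import numpy as np

rng = np.random.default_rng(0)
d, r, p = 2, 3, 4                       # block size, order, period
A = rng.standard_normal((p, r, d, d))   # A[n, j-1] = A_j(n), extended p-periodically

def companion(n):
    """Block companion matrix C(n) of (2.2), of size dr x dr."""
    C = np.zeros((r * d, r * d))
    for j in range(r):
        C[:d, j*d:(j+1)*d] = A[n % p, j]          # top block row A_1(n), ..., A_r(n)
    for i in range(1, r):
        C[i*d:(i+1)*d, (i-1)*d:i*d] = np.eye(d)   # subdiagonal identity blocks
    return C

Z0 = rng.standard_normal(r * d)         # stacked initial vectors Y_{r-1}, ..., Y_0
B = np.linalg.multi_dot([companion(n) for n in range(p - 1, -1, -1)])  # C(p-1)...C(0)

k, i = 3, 2                             # check (2.3) for n = kp + i
lhs = Z0.copy()
for n in range(k * p + i + 1):          # brute-force iteration of Z(n+1) = C(n) Z(n)
    lhs = companion(n) @ lhs
rhs = np.linalg.multi_dot([companion(t) for t in range(i, -1, -1)]) \
      @ np.linalg.matrix_power(B, k) @ Z0
assert np.allclose(lhs, rhs)
```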

To establish more results concerning the explicit representation of the solutions of the periodic matrix equations (2.1) and (2.2), we are led to study the two following problems. The first one is related to the powers of matrices in blocks, and the second concerns the finite product of companion matrices in blocks. For the first problem, our approach revolves around the generalized Cayley–Hamilton Theorem, whereas for the second problem we build two recursive algorithms for computing this finite product of companion matrices in blocks.

3 Linear recursive relation in the algebra of square matrices, generalized Cayley–Hamilton theorem and powers of matrices in blocks

Recently, great interest has been brought to the use of the Kronecker product in the algebra of square matrices. Indeed, the techniques of this operation have proven to be very important for studying several problems in various fields of mathematics and other exact or applied sciences, especially in the resolution of matrix and matrix differential equations (see, for example, [1, 2, 12] and references therein).

3.1 Kronecker product and linear recursive relation in the algebra of square matrices in blocks

In this subsection, we are interested in the use of the matrix Kronecker product for studying some linear recursive relations in the algebra of square matrices in blocks, and their use for the computation of the powers of matrices in blocks through the generalized Cayley–Hamilton Theorem. In fact, using the Kronecker product, we extend the results of [3] to the algebra of square matrices in blocks.

For the sake of clarity, let us recall that the Kronecker product can be defined for two matrices of arbitrary size over any ring. In the sequel of this study, we consider only square matrices whose entries are in the field of real numbers \(\mathbb {R}\) or complex numbers \(\mathbb {C}\) (see, for example, [11, 19]). Let us start by recalling the definition of the Kronecker product. To this end, let \(\mathbb {C}^{d\times d}\) and \(\mathbb {C}^{r\times r}\) be the algebras of square matrices of order \(d\ge 1\) and \(r\ge 1\), respectively.

Definition 3.1

The Kronecker product of the matrix \(A=(a_{ij})_{1\le i,j\le r}\in \mathbb {C}^{r\times r}\) with the matrix \(B=(b_{ij})_{1\le i,j\le d}\in \mathbb {C}^{d\times d}\) is defined as follows:

$$\begin{aligned} A\otimes B=(a_{i,j}B)_{1\le i,j\le r}. \end{aligned}$$
(3.1)

Note that there are other names for the Kronecker product, such as tensor product, direct product or left direct product (see, for example, [11]). For more details, an interesting overview of the Kronecker product is given by K. Schnack in [19]. The Kronecker tensor product has several important algebraic properties; we recall those we will use in this section. Let us first remark that for \(r=1\), we have \(A=a_{1,1}\in \mathbb {C}^{1\times 1}=\mathbb {C}\), thus the tensor product (3.1) takes the form

$$\begin{aligned} A\otimes B=(a_{i,j}B)_{1\le i,j\le 1}=a_{1,1}B, \end{aligned}$$

which shows that the tensor product coincides with the usual multiplication of matrices by scalars. Equivalently, the tensor product can be viewed as an extension of the usual multiplication of matrices by scalars.
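
The following minimal numpy illustration of Definition 3.1 may be helpful; np.kron implements exactly the block pattern \((a_{ij}B)\) of (3.1), and the case \(r=1\) recovers scalar multiplication as noted above.

```python
# A small illustration of Definition 3.1 with numpy; np.kron realizes exactly
# the block pattern (a_ij B) of (3.1).
import numpy as np

A = np.array([[1, 2],
              [3, 4]])                 # in C^{r x r}, r = 2
B = np.array([[0, 1],
              [1, 0]])                 # in C^{d x d}, d = 2

K = np.kron(A, B)                      # element of C^{rd x rd}
assert np.allclose(K[:2, 2:], 2 * B)   # the (1, 2) block is a_{12} B
assert np.allclose(np.kron(np.array([[5]]), B), 5 * B)   # r = 1: scalar multiplication
```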

Expression (3.1) shows that \( A\otimes B\) is an element of \(GL(r, \mathbb {C}^{d\times d})\), the algebra of square matrices of order \(r\ge 1\), with coefficients in \(\mathbb {C}^{d\times d}\). Moreover, we can also see that \( A\otimes B\) can be identified with an element of \(\mathbb {C}^{rd\times rd}\), the algebra of square matrices of order rd, with coefficients in \(\mathbb {C}\). Therefore, we have the following known isomorphisms of algebras:

$$\begin{aligned} \mathbb {C}^{r\times r}\otimes \mathbb {C}^{d\times d}\simeq \mathbb {C}^{rd\times rd}\simeq GL(r, \mathbb {C}^{d\times d})\simeq GL(d, \mathbb {C}^{r\times r}). \end{aligned}$$

In the sequel, we will use the notation \(\mathbb {M}_{_{d\odot r}}\) to designate, without distinction, any of the previous realizations of \(\mathbb {C}^{r\times r}\otimes \mathbb {C}^{d\times d}\).

A method for computing the powers of the matrices of \(\mathbb {C}^{r\times r}\), the algebra of square matrices, has been considered in [3]. This method is based on linear recursive sequences of Fibonacci type in the algebra of square matrices \(\mathbb {C}^{r\times r}\), and can be extended here as follows. More precisely, for computing the powers of a matrix in blocks, we introduce the notion of linear recursive sequences of Fibonacci type in the algebra of square matrices \(\mathbb {M}_{_{d\odot r}}=GL(r,\mathbb {C}^{d\times d})\). Let \(A_{0}\), \(A_{1}\), \(\cdots \) , \(A_{r-1}\) be a family of commuting matrices in \(GL(r,\mathbb {C}^{d\times d})\), and \(B_{0},B_{1}, \cdots ,B_{r-1}\) \((r \ge 2)\) a given sequence of \(GL(r,\mathbb {C}^{d\times d})\). Let \(\{Y_{n}\}_{n\ge 0}\) be the sequence of \(GL(r,\mathbb {C}^{d\times d})\) defined by \(Y_{0} = B_{0}, Y_{1} = B_{1}, \cdots , Y_{r-1} = B_{r-1}\) and the matrix difference equation of Fibonacci type,

$$\begin{aligned} Y_{n+1} = A_{0}Y_{n}+A_{1}Y_{n-1}+\cdots +A_{r-1}Y_{n-r+1}\,\,\, \text{ for } \,\, n \ge r-1. \end{aligned}$$

In other words, the sequence \(\{Y_{n}\}_{n\ge 0}\) is a generalized Fibonacci sequence, where \(A_{0}\), \(A_{1}\), \(\cdots \) , \(A_{r-1}\) are the coefficients, and \(Y_{0}, Y_{1}, \cdots , Y_{r-1}\) stand for the initial conditions. As was shown in [3], we have

$$\begin{aligned} Y_{n} =\rho (n,r)W_{0} + \rho (n-1,r)W_{1} +\cdots +\rho (n-r+1,r)W_{r-1},\,\, \text{ for } \text{ every } n\ge r, \end{aligned}$$
(3.2)

where \(W_{s} = A_{r-1}B_{s} +\cdots +A_{s}B_{r-1}\) for \(s = 0, 1,\cdots , r-1\) and

$$\begin{aligned}&\rho (n,r)= \sum _{k_0+2k_1+\cdots +rk_{r-1}=n-r}\frac{(k_0+\cdots +k_{r-1})!}{k_0!\cdots k_{r-1}!}A_{0}^{k_0}\cdots A_{r-1}^{k_{r-1} }, \text{ for } \text{ every } n\ge r \end{aligned}$$
(3.3)

with \(\rho (r,r) = \mathbf {1}_{r\times r}\otimes \mathbf {1}_{d\times d}=diag(\mathbf {1}_{d\times d}, \, \dots ,\, \mathbf {1}_{d\times d})\) (the r-by-r block diagonal matrix whose diagonal entries are all \(\mathbf {1}_{d\times d}\)) and \(\rho (n,r) = \Theta _{r\times r}\otimes \Theta _{d\times d}\) if \( n < r\).

The preceding expressions (3.2) and (3.3), combined with the generalized Cayley–Hamilton theorem, are useful for computing the powers of the matrix in blocks \(B=C(p-1) \cdots C(0)\). For this purpose, we employ this result on generalized Fibonacci sequences, which allows us to obtain a tractable expression for the powers of a block matrix A of \(GL(r, \mathbb {C}^{d\times d})\).
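
As an illustration, the following Python sketch checks formulas (3.2) and (3.3) for \(r=3\), with the block algebra identified with \(\mathbb {C}^{rd\times rd}\); the commuting coefficients \(A_i\) (taken here as \(\mathbf {1}_{r\times r}\otimes M^{i+1}\) for a fixed matrix M) and the random initial values are test data of our own choosing.

```python
# A sketch checking (3.2)-(3.3) for r = 3; the commuting coefficients
# A_i = 1_{rxr} (x) M^{i+1} and the random initial values are test data.
import numpy as np
from itertools import product
from math import factorial

d, r, N = 2, 3, 9
M = np.array([[1.0, 1.0], [0.0, 1.0]])
A = [np.kron(np.eye(r), np.linalg.matrix_power(M, i + 1)) for i in range(r)]
rng = np.random.default_rng(1)
Binit = [rng.standard_normal((r * d, r * d)) for _ in range(r)]   # Y_0, ..., Y_{r-1}

def rho(n):
    """rho(n, r) of (3.3): sum over k_0 + 2 k_1 + ... + r k_{r-1} = n - r."""
    if n < r:
        return np.zeros((r * d, r * d))
    S, m = np.zeros((r * d, r * d)), n - r
    for ks in product(*(range(m // (i + 1) + 1) for i in range(r))):
        if sum((i + 1) * k for i, k in enumerate(ks)) != m:
            continue
        coef = factorial(sum(ks))
        for k in ks:
            coef //= factorial(k)                 # multinomial coefficient
        T = coef * np.eye(r * d)
        for i, k in enumerate(ks):
            T = T @ np.linalg.matrix_power(A[i], k)
        S += T
    return S

Y = list(Binit)
for n in range(r - 1, N):            # Y_{n+1} = A_0 Y_n + ... + A_{r-1} Y_{n-r+1}
    Y.append(sum(A[i] @ Y[n - i] for i in range(r)))

W = [sum(A[r - 1 - t] @ Binit[s + t] for t in range(r - s)) for s in range(r)]
closed = sum(rho(N - s) @ W[s] for s in range(r))  # right-hand side of (3.2)
assert np.allclose(Y[N], closed)
```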

3.2 Generalized Cayley–Hamilton theorem and powers of companion matrices in blocks

We first recall the generalized Cayley–Hamilton Theorem for matrices given in [13, 14]. Let us consider the square matrix in blocks:

$$\begin{aligned} A=\begin{pmatrix} A_{11} &{}\cdots &{} A_{1r} \\ \vdots &{} \ddots &{}\vdots \\ A_{r1} &{}\cdots &{} A_{rr} \end{pmatrix}, \end{aligned}$$
(3.4)

where the \(A_{ij}\in \mathbb {C}^{d\times d}\) commute pairwise, i.e., \(A_{ij}A_{kl}=A_{kl}A_{ij}\) for every i, j, k, l (\(1\le i,j,k,l\le r\)). Consider now the Cayley–Hamilton theorem for block matrices, applied to the matrix A defined by (3.4). Following Kaczorek [13, Theorem 4] (for more details see also [14]), the matrix characteristic polynomial of A is given by

Definition 3.2

The matrix characteristic polynomial of the square matrix in blocks A, defined by (3.4), is

$$\begin{aligned} P(S)=det[\mathbf {1}_{r\times r}\otimes S - A]=S^r-D_0S^{r-1}+\cdots -D_{r-1}, \end{aligned}$$
(3.5)

where \(S\in \mathbb {C}^{d\times d}\) is the matrix (block) eigenvalue of A, and \(\otimes \) denotes the Kronecker product of matrices.

The matrix determinant (3.5) is obtained by developing the determinant of the matrix \([\mathbf {1}_{r\times r}\otimes S - A]\) considering its commuting blocks as scalar entries (see [13, 14]); in particular, the matrices \(D_i\) \((i=0,\cdots , r-1)\) are obtained in this way. Then, we have

$$\begin{aligned} P(A)=A^r - \sum _{i=0}^{r-1}(\mathbf {1}_{r\times r}\otimes D_{r-1-i})A^i=0 \end{aligned}$$
(3.6)

We now turn our attention to the theory of generalized Fibonacci sequence, to extend some properties established in the case of matrices with scalar coefficients, to the case of matrices in blocks. Equation (3.6) leads to get

$$\begin{aligned} A^n = (\mathbf {1}_{r\times r}\otimes D_{0})A^{n-1} +(\mathbf {1}_{r\times r}\otimes D_{1})A^{n-2}+\cdots +(\mathbf {1}_{r\times r}\otimes D_{r-1})A^{n-r}, \end{aligned}$$

for every \(n\ge r\). We observe that the sequence \(\{A^n\}_{n\ge 0}\) is nothing but a generalized Fibonacci sequence of order r, with matrix coefficients \( \mathbf {1}_{r\times r}\otimes D_{i}\) (\(0\le i\le r-1\)) and initial conditions \(A^{0},A^{1},\cdots , A^{r-1}\). In an entirely similar way to the scalar-coefficient case treated in [3], we obtain the following result for block matrices.

Theorem 3.3

Let A be a matrix in blocks and \( P(S)=det[\mathbf {1}_{r\times r}\otimes S - A]=S^r-D_0S^{r-1}+\cdots -D_{r-1}\) be the matrix characteristic polynomial of A. Then, we have

$$\begin{aligned} A^n=\rho (n,r)W_{0}+\rho (n-1,r)W_{1}+\cdots +\rho (n-r+1,r)W_{r-1}, \end{aligned}$$
(3.7)

for any \(n \ge r\), where \(W_{s}=(\mathbf {1}_{r\times r}\otimes D_{r-1})A^s+\cdots +(\mathbf {1}_{r\times r}\otimes D_{s})A^{r-1}\) for \(s=0,1,\cdots ,r-1\), and \(\rho (n,r)\) is given by \(\rho (r,r)=\mathbf {1}_{r\times r}\otimes \mathbf {1}_{d\times d}\), \(\rho (q,r)=\Theta _{r\times r}\otimes \Theta _{d\times d}\) for \(q<r\), and

$$\begin{aligned} \rho (n,r)=\sum _{k_0+2k_1+\dots +rk_{r-1}=n-r} \frac{(k_0+\cdots +k_{r-1})!}{k_0!k_1!\cdots k_{r-1}!}(\mathbf {1}_{r\times r}\otimes D_{0})^{k_0}\cdots (\mathbf {1}_{r\times r}\otimes D_{r-1})^{k_{r-1}}. \end{aligned}$$
(3.8)

It seems to us that the result of Theorem 3.3 does not appear in the current literature. Compared with related results on this subject, we establish here a handy expression that can be a key to solving various questions in this area, notably those on similar matrix equations (see, for example, [2, 7,8,9, 13, 14]).

To shed more light on the content of Theorem 3.3, we examine the following special situation. Suppose that \(r=2\) and \( A=\begin{pmatrix} A_{11} &{} A_{12} \\ \Theta _{d\times d} &{} A_{22} \end{pmatrix}\), where \( A_{11},A_{12}\) and \(A_{22}\) are matrices of order d satisfying the commutativity condition \(A_{ij}A_{kl}=A_{kl}A_{ij}\) (\(1\le i,j,k,l\le 2\)). Then, we have \( P(S)=det[\mathbf {1}_{2\times 2}\otimes S - A]=S^2-(A_{11}+A_{22})S + A_{11}A_{22}\). Employing expressions (3.7) and (3.8), we obtain

$$\begin{aligned} A^n=\rho (n,2)W_{0}+\rho (n-1,2)W_{1}, \end{aligned}$$

for every \(n\ge 1\), where

$$\begin{aligned} W_0= \begin{pmatrix} A_{11}^2&{} A_{11}A_{12}+A_{22}A_{12} \\ \Theta _{d\times d} &{} A_{22}^2 \end{pmatrix} \text{ and }\; W_1=\begin{pmatrix} -A_{11}^2A_{22} &{} -A_{11}A_{22}A_{12} \\ \Theta _{d\times d} &{} -A_{11}A_{22}^2 \end{pmatrix}. \end{aligned}$$

What is more, in this case, we have \( \rho (n,2)=\sum _{k_0+2k_1=n-2} \frac{(k_0+k_{1})!}{k_0!k_1!}(\mathbf {1}_{2\times 2}\otimes (A_{11}+A_{22}))^{k_0} (\mathbf {1}_{2\times 2}\otimes (- A_{11}A_{22}))^{k_{1}}\). Now, if we suppose the condition \(A_{11}=-A_{22}\), we obtain the following explicit expressions of \( \rho (n,2)\):

$$\begin{aligned} \rho (n,2)= \left\{ \begin{aligned}&(\mathbf {1}_{2\times 2}\otimes A_{11})^{2k-2}\;\;\;\text{ if }\;\;\; n=2k,\\&\mathbf {1}_{2\times 2}\otimes \Theta _{d\times d} \;\;\;\text{ if }\;\;\; n=2k+1. \end{aligned} \right. \end{aligned}$$

Therefore, we have the following proposition.

Proposition 3.4

Under the preceding data, we have

$$\begin{aligned} A^n= \left\{ \begin{aligned}&\begin{pmatrix} A_{11}^{2k}&{} \Theta _{d\times d} \\ \Theta _{d\times d} &{} A_{11}^{2k} \end{pmatrix} \;\;\text{ if }\;\; n=2k \; (k\ge 1),\\&\begin{pmatrix} A_{11}^{2k+1} &{} A_{11}^{2k}A_{12} \\ \Theta _{d\times d}&{} -A_{11}^{2k+1} \end{pmatrix} \;\;\text{ if }\;\; n=2k+1 \; (k\ge 0). \end{aligned} \right. \end{aligned}$$

As a numerical application of Proposition 3.4, consider the matrix \( A=\begin{pmatrix} A_{11} &{} A_{12} \\ \Theta _{2\times 2} &{} A_{22} \end{pmatrix}\), where \(A_{11}=\begin{pmatrix} 2&{} 0 \\ 0 &{} 2 \end{pmatrix}\), \(A_{12}=\begin{pmatrix} 1&{} 1\\ 2 &{}1 \end{pmatrix}\), \(A_{21}= \Theta _{2\times 2}\) (null matrix of order 2) and \(A_{22}=-2\times \mathbf {1}_{2\times 2}\). Then, a direct computation shows that

$$\begin{aligned} A^n= \left\{ \begin{aligned}&2^{2k}\times \begin{pmatrix} \mathbf {1}_{2\times 2} &{}\Theta _{2\times 2}\\ \Theta _{2\times 2}&{} \mathbf {1}_{2\times 2} \end{pmatrix} \;\;\text{ if }\;\; n=2k \; (k\ge 1),\\&2^{2k}\times \begin{pmatrix} A_{11} &{}A_{12}\\ \Theta _{2\times 2}&{} -A_{11} \end{pmatrix}=2^{2k}\,A \;\;\text{ if }\;\; n=2k+1 \; (k\ge 0). \end{aligned} \right. \end{aligned}$$

Proposition 3.4 and its numerical application illustrate the efficient role of Theorem 3.3. Moreover, our main goal is to apply Theorem 3.3 to calculate the powers of the matrix \(B=C(p-1) \cdots C(0)\), with the aim of providing solutions of the periodic matrix difference equations (2.1) and (2.2) in some special cases, which will be further exploited in Sect. 5.
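
A quick numerical check, with numpy, of the powers of the matrix A of the application above (the matrices are those of the example):

```python
# Direct check of A^{2k} = 2^{2k} 1_{4x4} and A^{2k+1} = 2^{2k} A for the
# numerical application above (A_11 = 2*1, A_22 = -2*1, A_12 = [[1,1],[2,1]]).
import numpy as np

A11 = 2 * np.eye(2)
A12 = np.array([[1.0, 1.0], [2.0, 1.0]])
A = np.block([[A11, A12], [np.zeros((2, 2)), -A11]])

for k in range(1, 5):
    even = np.linalg.matrix_power(A, 2 * k)
    assert np.allclose(even, 2 ** (2 * k) * np.eye(4))   # A^{2k} = 2^{2k} 1_{4x4}
    odd = np.linalg.matrix_power(A, 2 * k + 1)
    assert np.allclose(odd, 2 ** (2 * k) * A)            # A^{2k+1} = 2^{2k} A
```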

4 Two algorithms for the product of companion matrices in blocks

4.1 Algorithm 1: product of block companion matrices

In this section, we develop the first algorithm for computing the finite product of companion matrices in blocks. Recall that this product appears in the solutions of the matrix expressions (2.3) of Theorem 2.1.

Let us consider the companion matrix in blocks (2.2), namely,

$$\begin{aligned} C(m)=C[A_{1}(m),\,\,\, A_{2}(m),\,\,\, \cdots ,\,\,\, A_{r}(m)]_{r\times r}, \end{aligned}$$

where \(A_{1}(m), \cdots , A_{r}(m)\) are matrices of order d. We shall give an explicit formula for the matrix

$$\begin{aligned} B^{(m)}=C(1)\cdots C(m)=B^{(m-1)}C(m). \end{aligned}$$

The main idea behind this algorithm is to build an iterative formula that calculates recursively the entries \(B_{ij}^{(m)}\) of the matrix \(B^{(m)}\), starting from the entries of the matrix C(1). More precisely, this recursive process is based on a sequence of matrices \(D_{j}^{(k)}(m)\), whose entries are constructed recursively from the given sequence \(A_{j}(m)\). To this end, we set

$$\begin{aligned} B^{(m)}=\begin{pmatrix} B_{11}^{(m)} &{}\cdots &{} B_{1r}^{(m)} \\ \vdots &{} \ddots &{}\vdots \\ B_{r1}^{(m)} &{}\cdots &{}B_{rr}^{(m)} \end{pmatrix}, \end{aligned}$$

where \(B_{ij}^{(m)}\) (\(1\le i,j\le r\)) are matrices of order d. The steps of our first algorithm are as follows. Let \(D_{j}^{(0)}(m)=\mathbf {1}_{d\times d}\), \(D_{j}^{(1)}(m)=A_{j}(m)\), and adopt the convention \(A_{l}(m)=\Theta _{d\times d}\) when \(l>r\), so that all terms with out-of-range indices vanish. Then, we have

$$\begin{aligned} \begin{aligned} B_{ij}^{(m)}&=B_{i1}^{(m-1)} D_{j}^{(1)}(m)+B_{i,j+1}^{(m-1)}D_{j}^{(0)}(m)\\&=[B_{i1}^{(m-2)}A_{1}(m-1)+B_{i2}^{(m-2)}\mathbf {1}_{d\times d}]D_{j}^{(1)}(m) \\&\quad +[B_{i1}^{(m-2)}D_{j+1}^{(1)}(m-1)+B_{i,j+2}^{(m-2)}D_{j+1}^{(0)}(m-1)]D_{j}^{(0)}(m)\\&=B_{i1}^{(m-2)}[A_{1}(m-1)D_{j}^{(1)}(m)+A_{j+1}(m-1)D_{j}^{(0)}(m)]\\&\quad +B_{i2}^{(m-2)}D_{j}^{(1)}(m)+B_{i,j+2}^{(m-2)}D_{j}^{(0)}(m). \end{aligned} \end{aligned}$$

Let us define \(D_{j}^{(2)}(m)\) by the following relation:

$$\begin{aligned} D_{j}^{(2)}(m)=A_{1}(m-1)D_{j}^{(1)}(m)+A_{j+1}(m-1)D_{j}^{(0)}(m). \end{aligned}$$

Thus, by substituting \(D_{j}^{(2)}(m)\) in the last formula of \(B_{ij}^{(m)}\), we obtain

$$\begin{aligned} {\begin{matrix} B_{ij}^{(m)}&{}=B_{i1}^{(m-2)}D_{j}^{(2)}(m)+B_{i2}^{(m-2)}D_{j}^{(1)}(m)+B_{i,j+2}^{(m-2)}D_{j}^{(0)}(m)\\ &{}=[B_{i1}^{(m-3)}D_{1}^{(1)}(m-2)+B_{i2}^{(m-3)}D_{1}^{(0)}(m-2)]D_{j}^{(2)}(m)\\ &{}+[B_{i1}^{(m-3)}D_{2}^{(1)}(m-2)+B_{i3}^{(m-3)}D_{2}^{(0)}(m-2)]D_{j}^{(1)}(m)\\ &{}+[B_{i1}^{(m-3)}D_{j+2}^{(1)}(m-2)+B_{i,j+3}^{(m-3)}D_{j+2}^{(0)}(m-2)]D_{j}^{(0)}(m). \end{matrix}} \end{aligned}$$

Now, let us define \(D_{j}^{(3)}(m)\) by taking

$$\begin{aligned} D_{j}^{(3)}(m)=A_{1}(m-2)D_{j}^{(2)}(m)+A_{2}(m-2)D_{j}^{(1)}(m)+A_{j+2}(m-2)D_{j}^{(0)}(m), \end{aligned}$$

hence, we show that \(B_{ij}^{(m)}\) is given by

$$\begin{aligned} B_{ij}^{(m)}=B_{i1}^{(m-3)}D_{j}^{(3)}(m)+B_{i2}^{(m-3)}D_{j}^{(2)}(m)+B_{i3}^{(m-3)}D_{j}^{(1)}(m)+B_{ij+3}^{(m-3)}D_{j}^{(0)}(m). \end{aligned}$$

Using the same recurrent process above, we obtain

$$\begin{aligned} {\begin{matrix} B_{ij}^{(m)}&{}=[B_{i1}^{(m-4)}D_{1}^{(1)}(m-3)+B_{i2}^{(m-4)}D_{1}^{(0)}(m-3)]D_{j}^{(3)}(m)\\ &{}+[B_{i1}^{(m-4)}D_{2}^{(1)}(m-3)+B_{i3}^{(m-4)}D_{2}^{(0)}(m-3)]D_{j}^{(2)}(m)\\ &{}+[B_{i1}^{(m-4)}D_{3}^{(1)}(m-3)+B_{i4}^{(m-4)}D_{3}^{(0)}(m-3)]D_{j}^{(1)}(m)\\ &{}+[B_{i1}^{(m-4)}D_{j+3}^{(1)}(m-3)+B_{i,j+4}^{(m-4)}D_{j+3}^{(0)}(m-3)]D_{j}^{(0)}(m). \end{matrix}} \end{aligned}$$

We can continue this process by taking

$$\begin{aligned} D_{j}^{(4)}(m)=A_{1}(m-3)D_{j}^{(3)}(m)+A_{2}(m-3)D_{j}^{(2)}(m)+A_{3}(m-3)D_{j}^{(1)}(m)+A_{j+3}(m-3)D_{j}^{(0)}(m), \end{aligned}$$

and thus, we get

$$\begin{aligned} {\begin{matrix} B_{ij}^{(m)}&{}=B_{i1}^{(m-4)}D_{j}^{(4)}(m)+B_{i2}^{(m-4)}D_{j}^{(3)}(m)+B_{i3}^{(m-4)}D_{j}^{(2)}(m)\\ {} &{}+ B_{i4}^{(m-4)}D_{j}^{(1)}(m)+B_{ij+4}^{(m-4)}D_{j}^{(0)}(m). \end{matrix}} \end{aligned}$$

Finally, by induction we obtain the following result.

Theorem 4.1

Consider the block matrix C(m) given by (2.2), where \(A_{1}(m), \cdots , A_{r}(m)\) are matrices of order d. Then the entries \(\{B_{ij}^{(m)}\}_{1\le i,j\le r}\) of the product of the partitioned companion matrices \(B^{(m)}=C(1)\cdots C(m)\) are given as follows.

For every \(m>r\), we have

$$\begin{aligned} B_{ij}^{(m)}=B_{i1}^{(1)}D_{j}^{(m-1)}(m)+B_{i2}^{(1)}D_{j}^{(m-2)}(m) +\cdots +B_{ir}^{(1)}D_{j}^{(m-r)}(m). \end{aligned}$$
(4.1)

For every \( m\le r\), we have

$$\begin{aligned} B_{ij}^{(m)}=B_{i1}^{(1)}D_{j}^{(m-1)}(m)+B_{i2}^{(1)}D_{j}^{(m-2)}(m)+\cdots +B_{i,m-1}^{(1)}D_{j}^{(1)}(m)+B_{i,j+m-1}^{(1)}D_{j}^{(0)}(m), \end{aligned}$$
(4.2)

where

$$\begin{aligned} \left\{ \begin{aligned} D_{j}^{(0)}(m)&=\mathbf {1}_{d\times d},\, \, D_{j}^{(1)}(m)=A_{j}(m),\\ D_{j}^{(2)}(m)&=A_{1}(m-1)D_{j}^{(1)}(m)+A_{j+1}(m-1)D_{j}^{(0)}(m), \end{aligned} \right. \end{aligned}$$

and for every \(k>2\), we set

$$\begin{aligned} {\begin{matrix} D_{j}^{(k)}(m)&{}=A_{1}(m-k+1)D_{j}^{(k-1)}(m)+A_{2}(m-k+1)D_{j}^{(k-2)}(m)\\ &{}+ \cdots +\\ &{}+A_{k-1}(m-k+1)D_{j}^{(1)}(m)+A_{j+k-1}(m-k+1)D_{j}^{(0)}(m). \end{matrix}} \end{aligned}$$

Theorem 4.1 shows the main role of the bi-indexed relations (4.1) and (4.2) for generating the \(B_{ij}^{(m)}\) from the entries of the matrix \(B^{(1)}=C(1)\), using the matrices \(D_{j}^{(k)}(m)\).

It should be made clear that, since the product of matrices is not commutative, the order of the matrices in formulas (4.1) and (4.2) needs to be respected.
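
The following Python sketch turns Theorem 4.1 into a small program: it builds the matrices \(D_j^{(k)}(m)\) recursively and compares the entries produced by formula (4.1) with the direct product \(C(1)\cdots C(m)\). The dimensions d, r, m and the random blocks \(A_j(n)\) are test data of our own choosing.

```python
# A sketch of Algorithm 1 (Theorem 4.1) with numpy, on random test blocks.
import numpy as np

rng = np.random.default_rng(2)
d, r, m = 2, 3, 6
A = {(j, n): rng.standard_normal((d, d))
     for j in range(1, r + 1) for n in range(1, m + 1)}      # A[(j, n)] = A_j(n)
Aor0 = lambda j, n: A.get((j, n), np.zeros((d, d)))          # convention A_l = 0, l > r

def C(n):
    """Block companion matrix C(n) = C[A_1(n), ..., A_r(n)]."""
    top = np.hstack([A[(j, n)] for j in range(1, r + 1)])
    sub = np.hstack([np.eye(d * (r - 1)), np.zeros((d * (r - 1), d))])
    return np.vstack([top, sub])

def D(j, k):
    """D_j^{(k)}(m), computed by the recursion of Theorem 4.1."""
    if k == 0:
        return np.eye(d)
    if k == 1:
        return Aor0(j, m)
    return sum(Aor0(t, m - k + 1) @ D(j, k - t) for t in range(1, k)) \
        + Aor0(j + k - 1, m - k + 1)                         # A_{j+k-1} D_j^{(0)}

direct = np.linalg.multi_dot([C(n) for n in range(1, m + 1)])  # C(1) ... C(m)
B1 = C(1)                                                    # B^{(1)} = C(1)
for i in range(1, r + 1):
    for j in range(1, r + 1):                                # entry formula (4.1), m > r
        Bij = sum(B1[(i-1)*d:i*d, (s-1)*d:s*d] @ D(j, m - s)
                  for s in range(1, r + 1))
        assert np.allclose(Bij, direct[(i-1)*d:i*d, (j-1)*d:j*d])
```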

For more illustration of Theorem 4.1, we examine the following special case.

Proposition 4.2

Consider the companion matrix in blocks:

$$\begin{aligned} C(m)=C[A_{1}(m),A_{2}(m)]_{2\times 2}=\begin{pmatrix} A_{1}(m) &{} A_{2}(m)\\ 1_{d \times d} &{} \Theta _{d\times d} \end{pmatrix}. \end{aligned}$$

Then, for every \(m\ge 2\), the entries of the matrix

$$\begin{aligned} B^{(m)}=\begin{pmatrix} B_{11}^{(m)} &{} B_{12}^{(m)}\\ B_{21}^{(m)} &{} B_{22}^{(m)} \end{pmatrix}, \end{aligned}$$

are given as follows:

$$\begin{aligned} \left\{ \begin{aligned} B_{11}^{(m)}&=A_1(1) D_{1}^{(m-1)}(m)+A_2(1) D_{1}^{(m-2)}(m), \, \, \, \, B_{22}^{(m)}= D_{2}^{(m-1)}(m),\\ B_{12}^{(m)}&=A_1(1) D_{2}^{(m-1)}(m)+A_{2}(1) D_{2}^{(m-2)}(m), \, \, \, \, B_{21}^{(m)}= D_{1}^{(m-1)}(m), \end{aligned} \right. \end{aligned}$$

where

$$\begin{aligned} D_{j}^{(m-1)}(m)= \left\{ \begin{aligned}&\mathbf {1}_{d\times d} \, \text{ if } \, m=1, \text{ for } \, j=1, \, 2,\\&A_{j}(m)\, \text{ if } \, m=2, \text{ for } \, j=1,2,\\&A_{1}(2) D_{j}^{(m-2)}(m)+ A_{2}(2) D_{j}^{(m-3)}(m) \, \text{ if }\, m>2, \text{ for }\, j=1,2. \end{aligned} \right. \end{aligned}$$

That is, for \(m=2\), by a straightforward computation, we get

$$\begin{aligned} \left\{ \begin{aligned} B_{11}^{(2)}&=A_1(1) D_{1}^{(1)}(2)+A_2(1) D_{1}^{(0)}(2),\, \, \, \,&B_{12}^{(2)}&=A_1(1) D_{2}^{(1)}(2),\\ B_{21}^{(2)}&= D_{1}^{(1)}(2), \, \, \, \,&B_{22}^{(2)}&=D_{2}^{(1)}(2), \end{aligned} \right. \end{aligned}$$

where \(D_{1}^{(0)}(2)=D_{2}^{(0)}(2)=\mathbf {1}_{d\times d}\), \(D_{1}^{(1)}(2)=A_{1}(2)\) and \(D_{2}^{(1)}(2)=A_{2}(2)\). In addition, for \(m=3\), we obtain

$$\begin{aligned} \left\{ \begin{aligned} D_{1}^{(1)}(3)&=A_{1}(3),\, \, \, \,&D_{2}^{(1)}(3)&=A_{2}(3),\\ D_{1}^{(2)}(3)&=A_{1}(2)A_{1}(3)+A_{2}(2),\, \, \, \,&D_{2}^{(2)}(3)&=A_{1}(2)A_{2}(3). \end{aligned} \right. \end{aligned}$$

Thence, we obtain

$$\begin{aligned} \left\{ \begin{aligned} B_{11}^{(3)}&=A_1(1)A_1(2) A_1(3)+ A_1(1)A_2(2)+A_2(1)A_1(3), \, \,&B_{21}^{(3)}&= A_1(2) A_1(3)+ A_2(2),\\ B_{12}^{(3)}&=A_1(1)A_1(2) A_2(3)+ A_2(1)A_2(3), \, \, \, \,&B_{22}^{(3)}&= A_1(2) A_2(3). \end{aligned} \right. \end{aligned}$$

Consider the following numerical example. Suppose that

$$\begin{aligned} C(1)=\begin{pmatrix} C_{11}(1) &{} C_{12}(1)\\ \mathbf {1}_{2\times 2} &{} \Theta _{2\times 2} \end{pmatrix}, C(2)=\begin{pmatrix} C_{11}(2) &{} \mathbf {1}_{2\times 2}\\ \mathbf {1}_{2\times 2} &{} \Theta _{2\times 2} \end{pmatrix}, C(3)=\begin{pmatrix} C_{11}(3) &{} C_{12}(3)\\ \mathbf {1}_{2\times 2} &{} \Theta _{2\times 2} \end{pmatrix}, \end{aligned}$$

where \(\Theta _{2\times 2}\) is the null matrix of order 2 and \(\mathbf {1}_{2\times 2}\) is the identity matrix of order 2, \(C_{11}(1)=\begin{pmatrix} 1&{} 0 \\ 0 &{} 2 \end{pmatrix}\), \(C_{12}(1)=\begin{pmatrix} 1&{} 1 \\ 1 &{} 1 \end{pmatrix}\), \(C_{11}(2)=\begin{pmatrix} 1&{} -1 \\ 0 &{} 0 \end{pmatrix}\), \(C_{11}(3)=\begin{pmatrix} 2&{} 1 \\ 1 &{} 1 \end{pmatrix}\) and \(C_{12}(3)=\begin{pmatrix} 1&{} 0 \\ -1 &{} 0 \end{pmatrix}\). By applying the formula above, we obtain the entries of the matrix \(B^{(3)}=C(1)C(2)C(3)\) as follows:

$$\begin{aligned} B^{(3)}=\begin{pmatrix} B_{11}^{(3)} &{} B_{12}^{(3)}\\ B_{21}^{(3)} &{} B_{22}^{(3)} \end{pmatrix}, \end{aligned}$$

where \(B_{11}^{(3)}=\begin{pmatrix} 5 &{} 2\\ 3 &{} 4 \end{pmatrix}\), \(B_{12}^{(3)}=\begin{pmatrix} 2 &{} 0 \\ 0 &{} 0 \end{pmatrix}\), \(B_{21}^{(3)}=\begin{pmatrix} 2 &{} 0\\ 0 &{} 1 \end{pmatrix} \) and \( B_{22}^{(3)}=\begin{pmatrix} 2 &{} 0 \\ 0 &{} 0 \end{pmatrix}\).
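
This value of \(B^{(3)}\) can be reproduced directly with numpy:

```python
# Direct numpy computation of the block product B^{(3)} = C(1)C(2)C(3) above.
import numpy as np

I2, O2 = np.eye(2), np.zeros((2, 2))
C1 = np.block([[np.diag([1.0, 2.0]), np.ones((2, 2))], [I2, O2]])
C2 = np.block([[np.array([[1.0, -1.0], [0.0, 0.0]]), I2], [I2, O2]])
C3 = np.block([[np.array([[2.0, 1.0], [1.0, 1.0]]),
                np.array([[1.0, 0.0], [-1.0, 0.0]])], [I2, O2]])

B3 = C1 @ C2 @ C3
assert np.allclose(B3, [[5, 2, 2, 0],
                        [3, 4, 0, 0],
                        [2, 0, 2, 0],
                        [0, 1, 0, 0]])
```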

Now, we turn to the companion matrix of order r, related to Eqs. (1.1) and (1.2), defined by

$$\begin{aligned} C(m)= C[a_{1}(m),\,\,\, a_{2}(m),\,\,\, \cdots ,\,\,\, a_{r}(m)]_{r\times r}, \end{aligned}$$

where \(a_{j}: \mathbb {N} \rightarrow \mathbb {R}\), \(1\le j\le r\), are scalar functions of m. Since our method is recursive, it also yields explicit formulas for the entries \(\{\alpha _{ij}^{(m)}\}_{1\le i,j\le r}\) of the product of companion matrices \(B^{(m)}=C(1) \cdots C(m)\). Proceeding analogously to Theorem 4.1, we obtain the expression of \(\alpha _{ij}^{(m)}\) given by the following corollary.

Corollary 4.3

Let \(\alpha _{ij}^{(m)}\) be the (ij)-entry of the product \(B^{(m)}\). Then, for \(m>r\), we have

$$\begin{aligned} \alpha _{ij}^{(m)}= D_{j}^{(m-1)}(m) \alpha _{i1}^{(1)}+ D_{j}^{(m-2)}(m) \alpha _{i2}^{(1)}+\cdots + D_{j}^{(m-r)}(m) \alpha _{ir}^{(1)}, \end{aligned}$$

and for \(m\le r\),

$$\begin{aligned} \alpha _{ij}^{(m)}= D_{j}^{(m-1)}(m) \alpha _{i1}^{(1)}+ D_{j}^{(m-2)}(m) \alpha _{i2}^{(1)}+\cdots + D_{j}^{(1)}(m) \alpha _{i,m-1}^{(1)}+ D_{j}^{(0)}(m) \alpha _{i,j+m-1}^{(1)}, \end{aligned}$$

where

$$\begin{aligned} D_{j}^{(0)}(m)=1,\,\, \,\, D_{j}^{(1)}(m)=a_{j}(m),\,\, \,\, D_{j}^{(2)}(m)=a_{1}(m-1)D_{j}^{(1)}(m)+a_{j+1}(m-1)D_{j}^{(0)}(m), \end{aligned}$$

and for every \(k>2\), we set

$$\begin{aligned} {\begin{matrix} D_{j}^{(k)}(m)=&{}a_{1}(m-k+1)D_{j}^{(k-1)}(m)+a_{2}(m-k+1)D_{j}^{(k-2)}(m)+\cdots +\\ &{}a_{k-1}(m-k+1)D_{j}^{(1)}(m)+a_{j+k-1}(m-k+1)D_{j}^{(0)}(m). \end{matrix}} \end{aligned}$$

To shed more light on the result of Corollary 4.3, we study the case \(m=3\). Let \( \{a_{1}(1), a_{2}(1), a_{3}(1), a_{1}(2), a_{2}(2), a_{3}(2), a_{1}(3), a_{2}(3) , a_{3}(3)\}\) be a set of real or complex numbers. Consider the following three companion matrices:

$$\begin{aligned} C(1)= C[a_{1}(1), a_{2}(1), a_{3}(1)],\;\; C(2)= C[a_{1}(2), a_{2}(2), a_{3}(2)],\;\; C(3)= C[a_{1}(3), a_{2}(3), a_{3}(3)]. \end{aligned}$$

Applying the result of Corollary 4.3, a direct computation gives

$$\begin{aligned} B^{(3)}= C(1)C(2)C(3)= \begin{pmatrix} \alpha _{11}&{} \alpha _{12}&{} \alpha _{13} \\ a_{1}(3)a_{1}(2)+a_{2}(2) &{} a_{2}(3)a_{1}(2) +a_{3}(2) &{}a_{3}(3)a_{1}(2) \\ a_{1}(3) &{} a_{2}(3)&{}a_{3}(3) \end{pmatrix}, \end{aligned}$$

where \(\alpha _{11}=a_{1}(3) a_{1}(2)a_{1}(1)+a_{1}(3)a_{2}(1)+a_{2}(2)a_{1}(1)+a_{3}(1)\), \(\alpha _{12}= a_{2}(3)a_{1}(2)a_{1}(1)+a_{2}(3)a_{2}(1)+a_{3}(2)a_{1}(1)\) and \(\alpha _{13}=a_{3}(3)a_{1}(2)a_{1}(1)+a_{3}(3)a_{2}(1)\). We illustrate this situation by the following numerical application.

Example 4.4

A straightforward computation of the following product of companion matrices, \( B^{(3)} =\begin{pmatrix} -1 &{} 2 &{} 3 \\ 1 &{} 0 &{}0\\ 0 &{} 1 &{}0 \end{pmatrix} \begin{pmatrix} 2 &{} 2 &{} -1 \\ 1 &{} 0 &{}0\\ 0 &{} 1 &{}0 \end{pmatrix} \begin{pmatrix} 4 &{}1 &{} 2 \\ 1 &{} 0 &{}0\\ 0 &{} 1 &{}0 \end{pmatrix}\), permits us to obtain \(B^{(3)} =\begin{pmatrix} 1 &{} 1 &{} 0\\ 10 &{} 1 &{}4\\ 4&{} 1 &{}2 \end{pmatrix}.\) However, the usefulness of this algorithm shows up for large m, where the computation by hand becomes heavy or infeasible.
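
The example and the closed form of \(\alpha _{11}\) above can be checked with a few lines of numpy:

```python
# A few lines of numpy checking Example 4.4 and the closed form of alpha_11.
import numpy as np

C = lambda a1, a2, a3: np.array([[a1, a2, a3], [1, 0, 0], [0, 1, 0]], float)
B3 = C(-1, 2, 3) @ C(2, 2, -1) @ C(4, 1, 2)
assert np.allclose(B3, [[1, 1, 0], [10, 1, 4], [4, 1, 2]])

a1 = {1: -1, 2: 2, 3: 4}; a2 = {1: 2, 2: 2, 3: 1}; a3 = {1: 3, 2: -1, 3: 2}
alpha11 = a1[3]*a1[2]*a1[1] + a1[3]*a2[1] + a2[2]*a1[1] + a3[1]
assert np.isclose(B3[0, 0], alpha11)   # alpha_11 of Corollary 4.3
```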

4.2 Algorithm 2: product of companion matrices in blocks

In this section, we provide another recursive algorithm to calculate the entries of the matrix \(B^{(m)}=C(m)\cdots C(1)=C(m)B^{(m-1)}\). Our approach rests on the techniques of generalized Fibonacci sequences in the algebra of square matrices in blocks, given in Sect. 3.

We set

$$\begin{aligned} C(k)=\begin{pmatrix} A_{1}(k) &{} A_{2}(k) &{}\cdots &{} A_{r}(k) \\ \mathbf {1}_{d\times d} &{} \Theta _{d\times d} &{}\cdots &{}\Theta _{d\times d} \\ \vdots &{} \ddots &{}\ddots &{}\vdots \\ \Theta _{d\times d} &{}\cdots &{} \mathbf {1}_{d\times d} &{}\Theta _{d\times d} \end{pmatrix}, \end{aligned}$$

where \(A_{1}(k), \cdots , A_{r}(k)\) are matrices of order d, and

$$\begin{aligned} B^{(m)}=\begin{pmatrix} B_{11}^{(m)} &{}\cdots &{} B_{1r}^{(m)} \\ \vdots &{} \ddots &{}\vdots \\ B_{r1}^{(m)} &{}\cdots &{}B_{rr}^{(m)} \end{pmatrix}. \end{aligned}$$

To express recursively the entries \(B_{ij}^{(m)}\) of the matrix \(B^{(m)}\), we consider a family of generalized Fibonacci sequences in the algebra \(GL(r, \mathbb {C}^{d\times d})\), defined as follows.

Definition 4.5

Let \(1\le s \le m\). We consider the r families \(\{Y_{n}^{(k)}(s)\}_{n\ge 0}\), \(1\le k \le r\), of generalized Fibonacci sequences defined in \(GL(r, \mathbb {C}^{d\times d})\) by

$$\begin{aligned} Y_{n+1}^{(k)}(s)=A_{1}(s) Y_{n}^{(k)}(s)+\cdots +A_{r}(s) Y_{n-r+1}^{(k)}(s), \end{aligned}$$
(4.3)

with mutually different sets of initial conditions defined as follows. For \(s=1\), the initial conditions of the sequence \(\{Y_{n}^{(k)}(1)\}_{n\ge 0}\) are given by

$$\begin{aligned} Y_{q}^{(k)}(1)= \left\{ \begin{aligned}&\mathbf {1}_{d\times d} \text{ if } \,\, q=r-k \,\,\, (0\le q\le r-1), (1\le k\le r), \\&\Theta _{d\times d} \text{ otherwise }. \end{aligned} \right. \end{aligned}$$
(4.4)

For \(s\ge 2\) and \(1 \le k \le r\), the initial conditions of the sequence \(\{Y_{n}^{(k)}(s)\}_{n\ge 0}\) are related to those \(\{Y_{n}^{(k)}(s-1)\}_{n\ge 0}\), as follows:

$$\begin{aligned} Y_{l}^{(k)}(s)= Y_{l+1}^{(k)}(s-1) = B_{r-l,k}^{(s-1)}, \; \text{ for } \text{ every } \, 0\le l \le r-1\; \text{ and }\, 2 \le s \le m. \end{aligned}$$
(4.5)

Therefore, using a straightforward computation, it ensues that we can rewrite the matrix \(B^{(m-1)}\) under the form:

$$\begin{aligned} B^{(m-1)} =\begin{pmatrix} Y_{r-1}^{(1)}(m)&{}\cdots &{}Y_{r-1}^{(j)}(m)&{}\cdots &{}Y_{r-1}^{(r)}(m) \\ Y_{r-2}^{(1)}(m)&{}\cdots &{}Y_{r-2}^{(j)}(m)&{}\cdots &{}Y_{r-2}^{(r)}(m) \\ \vdots &{} \cdots &{}\cdots &{}\vdots \\ Y_{r-i}^{(1)}(m)&{}\cdots &{}Y_{r-i}^{(j)}(m)&{}\cdots &{}Y_{r-i}^{(r)}(m) \\ \vdots &{} \cdots &{}\cdots &{}\vdots \\ Y_{0}^{(1)}(m)&{}\cdots &{}Y_{0}^{(j)}(m)&{}\cdots &{}Y_{0}^{(r)}(m) \end{pmatrix}. \end{aligned}$$

Thence, employing the recursive relation (4.3) satisfied at order r by \(Y_n^{(k)}(m)\), we get

$$\begin{aligned} B^{(m)} = \begin{pmatrix} Y_{r}^{(1)}(m)&{}\cdots &{}Y_{r}^{(j)}(m)&{}\cdots &{}Y_{r}^{(r)}(m) \\ Y_{r-1}^{(1)}(m)&{}\cdots &{}Y_{r-1}^{(j)}(m)&{}\cdots &{}Y_{r-1}^{(r)}(m) \\ \vdots &{} \cdots &{}\cdots &{}\vdots \\ Y_{r-i+1}^{(1)}(m)&{}\cdots &{}Y_{r-i+1}^{(j)}(m)&{}\cdots &{}Y_{r-i+1}^{(r)}(m) \\ \vdots &{} \cdots &{}\cdots &{}\vdots \\ Y_{1}^{(1)}(m)&{}\cdots &{}Y_{1}^{(j)}(m)&{}\cdots &{}Y_{1}^{(r)}(m) \end{pmatrix}. \end{aligned}$$

By induction, we observe that for \(m<r \,\,(m\ge 1)\), we obtain

$$\begin{aligned} B^{(m)}=\begin{pmatrix} Y_{r}^{(1)}(m) &{} Y_{r}^{(2)}(m) &{}\cdots &{} Y_{r}^{(r)}(m) \\ Y_{r}^{(1)}(m-1) &{} Y_{r}^{(2)}(m-1) &{}\cdots &{} Y_{r}^{(r)}(m-1) \\ \vdots &{} \cdots &{}\cdots &{}\vdots \\ Y_{r}^{(1)}(1) &{} Y_{r}^{(2)}(1) &{}\cdots &{} Y_{r}^{(r)}(1) \\ Y_{r-1}^{(1)}(1) &{} Y_{r-1}^{(2)}(1) &{}\cdots &{} Y_{r-1}^{(r)}(1) \\ \vdots &{} \cdots &{}\cdots &{}\vdots \\ Y_{m}^{(1)}(1) &{} Y_{m}^{(2)}(1) &{}\cdots &{} Y_{m}^{(r)}(1) \end{pmatrix}. \end{aligned}$$

For \(m\ge r\), we observe that by induction, we get

$$\begin{aligned} B_{ij}^{(m)}=Y_{r-i+1}^{(j)}(m)= B_{i-1,j}^{(m-1)}= \cdots = B_{1j}^{(m-i+1)}= Y_{r}^{(j)}(m-i+1). \end{aligned}$$

for every \(1\le i,j\le r\). Hereafter, we derive that

$$\begin{aligned} B^{(m)}=\begin{pmatrix} Y_{r}^{(1)}(m) &{} Y_{r}^{(2)}(m) &{}\cdots &{} Y_{r}^{(r)}(m) \\ Y_{r}^{(1)}(m-1) &{} Y_{r}^{(2)}(m-1) &{}\cdots &{} Y_{r}^{(r)}(m-1) \\ \vdots &{} \cdots &{}\cdots &{}\vdots \\ Y_{r}^{(1)}(m-r+1) &{} Y_{r}^{(2)}(m-r+1) &{}\cdots &{} Y_{r}^{(r)}(m-r+1) \end{pmatrix}. \end{aligned}$$

The main idea here is to take advantage of the fact that the \(\{Y_{n}^{(k)}(s)\}_{n\ge 0}\) are r-generalized Fibonacci sequences for \(1\le k \le r\). Indeed, we have established that the entries of the matrix \(B^{(m)}\) are obtained by considering only the initial conditions defined by (4.4) and (4.5), and the terms \(Y_{r}^{(j)}(m-i+1)\) (\(1\le i,j\le r\)) of the Fibonacci sequences once we reach the order r.

We can observe that for the two preceding algorithms, the commutativity condition \(A_j(k)A_i(k)=A_i(k)A_j(k)\) is not necessary.
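
As an illustration, the following Python sketch implements Algorithm 2 literally: it propagates the r generalized Fibonacci sequences (4.3) with the initial data (4.4)-(4.5) and compares the resulting matrix with the direct product \(C(m)\cdots C(1)\). The random blocks \(A_j(s)\) are test data of our own choosing, and no commutativity is assumed.

```python
# A sketch of Algorithm 2 with numpy; d, r, m and the blocks A_j(s) are
# random test data (no commutativity is required).
import numpy as np

rng = np.random.default_rng(3)
d, r, m = 2, 3, 5
A = {(j, s): rng.standard_normal((d, d))
     for j in range(1, r + 1) for s in range(1, m + 1)}      # A[(j, s)] = A_j(s)

def C(s):
    top = np.hstack([A[(j, s)] for j in range(1, r + 1)])
    sub = np.hstack([np.eye(d * (r - 1)), np.zeros((d * (r - 1), d))])
    return np.vstack([top, sub])

# state Y[k-1] = [Y_0^{(k)}(s), ..., Y_{r-1}^{(k)}(s)]; (4.4) gives stage s = 1
Y = [[np.eye(d) if q == r - k else np.zeros((d, d)) for q in range(r)]
     for k in range(1, r + 1)]
for s in range(1, m + 1):
    for k in range(r):
        Yr = sum(A[(j, s)] @ Y[k][r - j] for j in range(1, r + 1))  # one step of (4.3)
        Y[k] = Y[k][1:] + [Yr]           # shift (4.5): initial data for stage s + 1

B = np.block([[Y[j][r - i] for j in range(r)] for i in range(1, r + 1)])
assert np.allclose(B, np.linalg.multi_dot([C(s) for s in range(m, 0, -1)]))  # C(m)...C(1)
```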

5 Study of the p-periodic matrix difference equations in blocks: some special cases

Let us first recall that for solving the discrete linear matrix equation (1.3) of order r, the first step consists in transforming this equation into a discrete linear matrix equation (2.1) of order 1. Thus, the resolution of Eq. (1.3) is equivalent to that of Eq. (2.1), through the computation of products and powers of the companion matrices in blocks (2.2). Therefore, the solution of (1.3) is obtained via Theorem 2.1, Theorem 3.3 and Algorithms 1 and 2. This procedure will be illustrated in this section.

More precisely, in this section we make use of all the material provided in the preceding sections to explore some special cases of the p-periodic matrix difference equations in blocks. Particular cases are treated and examples are given, to make this study more accessible.

5.1 Solutions of the matrix equation \(Y_{n+2}=A(n)Y_{n}\), where A(n) is p-periodic

Consider the periodic matrix difference equation:

$$\begin{aligned} Y_{n+2}=A(n)Y_{n},\;\;\;\; n\ge 0, \end{aligned}$$
(5.1)

where A(n) is a p-periodic square matrix of order d (with period \(p\ge 2\)), and \(Y_{0},Y_{1}\) stand for the initial conditions. We assume that \(A(i)A(j)=A(j)A(i)\) for \(0\le i,j\le p-1\). Equation (5.1) can be written as the following matrix equation:

$$\begin{aligned} Z(n+1)=C(n)Z(n),\;\;\ n\ge 0, \end{aligned}$$
(5.2)

where

$$\begin{aligned} C(n)=\begin{pmatrix} \Theta _{d\times d} &{} A(n) \\ \mathbf {1}_{d\times d} &{} \Theta _{d\times d} \end{pmatrix} \text{ and } Z(n)=\begin{pmatrix} Y_{n+1} \\ Y_{n} \end{pmatrix}. \end{aligned}$$

It follows that C(n) is p-periodic, since A(n) is p-periodic. We consider the matrix \(B=C(p-1)C(p-2)\cdots C(1)C(0)\). Thus, employing Theorem 2.1, we need to distinguish the two cases \(p=2\) and \(p>2\). If \(p=2\), then for every \(n=2k\) (\(k\ge 1\)), the matrix equation (5.2) takes the form:

$$\begin{aligned} Z(2k)=B^{k}Z(0), \end{aligned}$$

where \(Z(2k)=(Y_{2k+1},Y_{2k})^\top \), \(Z(0)=(Y_{1},Y_{0})^\top \) the vector of initial conditions, and \(B=C(1)C(0)\). In addition, if \(p>2\), for every \(n=kp+i\) (\(k\ge 1\)) (\(0 \le i \le p-1\)), the matrix equation (5.2) takes the form:

$$\begin{aligned} Z(kp+i+1)=C(i)C(i-1)\cdots C(0)B^{k}Z(0), \end{aligned}$$

where \(Z(kp+i+1)=(Y_{kp+i+1},Y_{kp+i})^\top \), \(Z(0)=(Y_{1},Y_{0})^\top \) the vector of initial conditions.

We start by giving the expression of B in terms of \(A(p-1),\cdots , A(1),A(0)\), using Algorithm 1 (see Sect. 4.1); recall that here \(r=2\). For \(p=2\), a straightforward application of Algorithm 1 shows that

$$\begin{aligned} B=C(1)C(0)=\begin{pmatrix} B_{11}^{(2)} &{} B_{12}^{(2)} \\ B_{21}^{(2)} &{} B_{22}^{(2)} \end{pmatrix}, \end{aligned}$$

where the entries of the matrix B are given by

$$\begin{aligned} B_{11}^{(2)}=A(1)D_{1}^{(0)}(2), \, B_{12}^{(2)}= \Theta _{d\times d}, \, B_{21}^{(2)}=D_{1}^{(1)}(2) \text{ and } \, B_{22}^{(2)}= D_{2}^{(1)}(2), \end{aligned}$$

where \(D_{1}^{(0)}(2)=\mathbf {1}_{d\times d}\), \(D_{1}^{(1)}(2)= \Theta _{d\times d}\) and \(D_{2}^{(1)}(2)=A(0)\). Thus, when the matrix A(n) is 2-periodic, the solutions of the 2-periodic matrix equation (5.1) are given by the following proposition.

Proposition 5.1

The unique solution of the 2-periodic matrix difference equation \(Y_{n+2}=A(n)Y_{n},\;\ n\ge 0\) (A(n) is 2-periodic), with the prescribed initial conditions \(Y_{0}\) and \(Y_{1}\), is given by

$$\begin{aligned} Y_{2n+1}= A(1)^{n} Y_{1} \,\,\, \text{ and } \,\,\,Y_{2n}= A(0)^{n} Y_{0}. \end{aligned}$$

Example 5.2

Consider the scalar linear difference equation of the form

$$\begin{aligned} x_{n+2}=a(n)x_{n}, \end{aligned}$$

where \(a : \mathbb {N} \rightarrow \mathbb {R}\) is a 2-periodic scalar function of n and \(x_{0}\), \(x_{1}\) stand for the initial conditions. The matrix A(n) then reduces to the single element a(n); thus, the unique solution is given by Proposition 5.1 as follows:

$$\begin{aligned} x_{2n+1}=a(1)^{n}x_{1}, \, \, x_{2n}=a(0)^{n}x_{0}. \end{aligned}$$
(5.3)

This class of equations has been studied in [5]. The method used consists in transforming equation (5.1) into the equivalent linear difference equation with constant coefficients,

$$\begin{aligned} x_{n+4}=c_{0}x_{n+2}+c_{1}x_{n}, \end{aligned}$$
(5.4)

where \(c_{0}\), \(c_{1}\) are given by the characteristic polynomial \(P(z)=z^{2}-(a(0)+a(1))z+a(0)a(1)=(z-a(0))(z-a(1))\) of the matrix \(B=C(1)C(0)\). Using the formulas of Corollary 4.3, we obtain \(B=\begin{pmatrix} \alpha _{11}^{(2)} &{} \alpha _{12}^{(2)} \\ \alpha _{21}^{(2)} &{} \alpha _{22}^{(2)} \end{pmatrix} =\begin{pmatrix} a(1) &{} 0 \\ 0 &{} a(0) \end{pmatrix}\). Therefore, the solutions of Eq. (5.4) are given in [5, Proposition 3.6] as follows:

$$\begin{aligned} {\begin{matrix} x_{2n}&=\frac{a(0)^{n}}{a(0)-a(1)} \left( \frac{A_{0}}{a(0)}+\frac{A_{2}}{a(0)^2}\right) +\frac{a(1)^{n}}{a(1)-a(0)}\left( \frac{A_{0}}{a(1)}+\frac{A_{2}}{a(1)^{2}}\right) , \end{matrix}} \end{aligned}$$
(5.5)
$$\begin{aligned} {\begin{matrix} x_{2n+1}&=\frac{a(0)^{n}}{a(0)-a(1)} \left( \frac{A_{1}}{a(0)}+\frac{A_{3}}{a(0)^2}\right) +\frac{a(1)^{n}}{a(1)-a(0)}\left( \frac{A_{1}}{a(1)}+\frac{A_{3}}{a(1)^{2}}\right) . \end{matrix}} \end{aligned}$$
(5.6)

In addition, starting from (5.5) and (5.6), a direct computation implies that \(x_{2n}=a(0)^{n}x_{0}\) and \(x_{2n+1}=a(1)^{n}x_{1}\). Therefore, the two solutions (5.3) and (5.5)–(5.6) coincide.

Now, we turn to the case when \(r=2\) and the period \(p\ge 3\). In this case, the entries of the matrix \(B=C(p-1)\cdots C(0)\) are given by

$$\begin{aligned} B_{11}^{(p)}=A(p-1)D_{1}^{(p-2)}(p),\, B_{12}^{(p)}= A(p-1)D_{2}^{(p-2)}(p),\, B_{21}^{(p)}=D_{1}^{(p-1)}(p), \, B_{22}^{(p)}= D_{2}^{(p-1)}(p), \end{aligned}$$

where

$$\begin{aligned} D_{j}^{(0)}(p)=\mathbf {1}_{d\times d},\, \, D_{j}^{(1)}(p)=A_{j}(0), \, \, \, D_{j}^{(2)}(p)=A_{1}(1)D_{j}^{(1)}(p)+A_{j+1}(1)D_{j}^{(0)}(p), \end{aligned}$$

and for every \(k>2\), we have

$$\begin{aligned} D_{j}^{(k)}(p)=A_{1}(k-1)D_{j}^{(k-1)}(p)+\cdots +A_{k-1}(k-1)D_{j}^{(1)}(p)+A_{j+k-1}(k-1)D_{j}^{(0)}(p). \end{aligned}$$

We need to distinguish two cases. When p is even, a straightforward computation, shows that

$$\begin{aligned} B=\begin{pmatrix} A(p-1)A(p-3)\cdots A(1) &{} \Theta _{d \times d} \\ \Theta _{d \times d} &{} A(p-2)A (p-4) \cdots A(0) \end{pmatrix}. \end{aligned}$$

Hence, for every \(k\ge 2\), we have

$$\begin{aligned} B^{k}=\begin{pmatrix} (A(p-1)A(p-3)\cdots A(1))^{k} &{} \Theta _{d \times d} \\ \Theta _{d \times d} &{} (A(p-2)A(p-4) \cdots A(0))^{k} \end{pmatrix}. \end{aligned}$$

However, when p is odd, we have

$$\begin{aligned} B=\begin{pmatrix} \Theta _{d\times d} &{} A(p-1)A(p-3)\cdots A(0) \\ A(p-2)A(p-4) \cdots A(1) &{} \Theta _{d\times d} \end{pmatrix}. \end{aligned}$$

In this case for calculating the powers \(B^{k}\), for \(k\ge 2\), we need to utilize Theorem 3.3 of Sect. 3.2. Indeed, in this case, \( P(S)=det[\mathbf {1}_{2\times 2}\otimes S - B ]=S^2-A(p-1)A(p-2)\cdots A(1)A(0)\), and it follows from Theorem 3.3 that for every \(k\ge 1\), we have

$$\begin{aligned} B^k=\rho (k,2)W_{0}+\rho (k-1,2)W_{1}, \end{aligned}$$

with

$$\begin{aligned} W_0= \begin{pmatrix} W_{11}(0) &{} \Theta _{d\times d}\\ \Theta _{d\times d} &{} W_{22}(0) \end{pmatrix} \text{ and } W_1=\begin{pmatrix} \Theta _{d\times d} &{} W_{12}(1)\\ W_{21}(1) &{} \Theta _{d\times d} \end{pmatrix}, \end{aligned}$$

where \(W_{11}(0)=W_{22}(0)=A(p-1)A(p-2)\cdots A(1)A(0)\), \(W_{12}(1)=A(p-1)^{2}A(p-2)A(p-3)^{2}\cdots A(1)A(0)^{2}\) and \(W_{21}(1)=A(p-1)A(p-2)^{2}A(p-3)\cdots A(1)^{2}A(0)\). In addition, once again, we need to distinguish two cases, k odd or even, to give the expression of \(\rho (k,2)\):

$$\begin{aligned} \rho (k,2)= \left\{ \begin{aligned}&\mathbf {1}_{2\times 2}\otimes (A(p-1)A(p-2)\cdots A(1)A(0))^{\frac{k}{2}-1}, \;\; \text{ if } \;\; k \;\; \text{ is } \text{ even },\\&\mathbf {1}_{2\times 2}\otimes \Theta _{d\times d}, \;\; \text{ if } \;\; k \;\; \text{ is } \text{ odd }. \end{aligned} \right. \end{aligned}$$

Thence, we obtain

$$\begin{aligned} B^{k}= \begin{pmatrix} B_{11}(k)&{} \Theta _{d\times d}\\ \Theta _{d\times d} &{} B_{22}(k) \end{pmatrix}\; \text{ for } k \text{ even }, \end{aligned}$$

and

$$\begin{aligned} B^{k}=\begin{pmatrix} \Theta _{d\times d}&{} B_{12}(k)\\ B_{21}(k) &{} \Theta _{d\times d} \end{pmatrix}\; \text{ for } k \text{ odd }, \end{aligned}$$

where \(B_{11}(k)=B_{22}(k)=(A(p-1)A(p-2)\cdots A(1)A(0))^{\frac{k}{2}}\), \(B_{12}(k)=(A(p-2) A(p-4)\cdots A(1))^{\frac{k-1}{2}}(A(p-1)A(p-3)\cdots A(0))^{\frac{k+1}{2}}\) and \(B_{21}(k)=(A(p-1) A(p-3)\cdots A(0))^{\frac{k-1}{2}} (A(p-2)A(p-4)\cdots A(1))^{\frac{k+1}{2}}\).

With these results at our disposal, we can express the solution of the matrix equation \(Y_{n+2}=A(n)Y_{n}\). Indeed, when A(n) is periodic of period \(p>2\), we have the following result.

Proposition 5.3

Let \(p>2\) be an even integer. Consider the p-periodic matrix equation \(Y_{n+2}=A(n)Y_{n},\;\ n\ge 0\), with the initial conditions vector \((Y_{1},Y_{0})^{\top }\). Then, for every \(n=kp+i\) (\(i=0,\cdots ,p-1\)), the unique solution is given by

$$\begin{aligned} \left\{ \begin{aligned} Y_{kp+1}&= (A(p-1)A(p-3)\cdots A(1))^{k}Y_{1}, \;\;\;(i=0),\\ Y_{kp+2}&= (A(p-2)A(p-4)\cdots A(2))^{k} A(0)^{k+1}Y_{0}, \;\;\;(i=1),\\ Y_{kp+p}&= (A(p-2)A(p-4)\cdots A(0))^{k+1}Y_{0}, \;\; \;(i=p-1), \end{aligned} \right. \end{aligned}$$

and for every \(i=2,\cdots ,p-2\), we have

$$\begin{aligned} Y_{kp+i+1}=\left\{ \begin{aligned}&(A(p-1)A(p-3)\cdots A(i+1))^{k} (A(i-1) A(i-3)\cdots A(1))^{k+1}Y_{1}, \; \text{ if } i \text{ is } \text{ even },\\&(A(p-2)A(p-4)\cdots A(i+1))^{k} (A(i-1) A(i-3)\cdots A(0))^{k+1}Y_{0}, \; \text{ if } i \text{ is } \text{ odd }. \end{aligned} \right. \end{aligned}$$

Similarly, when the period p is odd, we have to consider two cases. When k is even, the vector solution is given by

$$\begin{aligned} \left\{ \begin{aligned} Y_{kp+1}&= D Y_{1},\\ Y_{kp+i+1}&= A(i-1)A(i-3)\cdots A(1) D Y_{1}, \;\;\text{ if } i \text{ is } \text{ even },\\ Y_{kp+i+1}&= A(i-1)A(i-3)\cdots A(0) D Y_{0}\;\; \text{ if } i \text{ is } \text{ odd }, \end{aligned} \right. \end{aligned}$$

where \(D=(A(p-1)A(p-2)\cdots A(0))^{\frac{k}{2}}.\)

When k is odd, we get

$$\begin{aligned} \left\{ \begin{aligned} Y_{kp+1}&= B_{1}^{k} Y_{0},\\ Y_{kp+i+1}&= A(i-1)A(i-3)\cdots A(1) B_{1}^{k} Y_{0}, \;\; \text{ if } i \text{ is } \text{ even },\\ Y_{kp+i+1}&= A(i-1)A(i-3)\cdots A(0) B_{2}^{k} Y_{1}, \;\; \text{ if } i \text{ is } \text{ odd }, \end{aligned} \right. \end{aligned}$$

where \(B_1^k=(A(p-2) A(p-4)\cdots A(1))^{\frac{k-1}{2}}(A(p-1)A(p-3)\cdots A(0))^{\frac{k+1}{2}}\) and \(B_2^k=(A(p-1) A(p-3)\cdots A(0))^{\frac{k-1}{2}} (A(p-2)A(p-4)\cdots A(1))^{\frac{k+1}{2}}\).

For more illustration, we propose the following example.

Example 5.4

Consider the 3-periodic matrix equation (5.1) of order 2, such that \(A(0)=\begin{pmatrix} 1 &{} 1 \\ 0 &{} 0 \end{pmatrix}\), \(A(1)=\begin{pmatrix} -1 &{} -1 \\ 0 &{} 0 \end{pmatrix}\) and \(A(2)=\begin{pmatrix} 2 &{} 1 \\ 0 &{} 1 \end{pmatrix}\). Thus, we have \(B=C(2)C(1)C(0)=\begin{pmatrix} \Theta _{2\times 2} &{} A(2)A(0) \\ A(1) &{} \Theta _{2\times 2} \end{pmatrix}=\begin{pmatrix} \Theta _{2\times 2} &{} B_{12} \\ B_{21} &{} \Theta _{2\times 2} \end{pmatrix}\), where \(B_{12}=\begin{pmatrix} 2 &{} 2\\ 0 &{} 0 \end{pmatrix}\) and \(B_{21}=\begin{pmatrix} -1 &{} -1\\ 0 &{} 0 \end{pmatrix}\). By a simple verification, we remark that \(A(i)A(j)=A(j)A(i)\) for \(0\le i,j \le 2\). Then, applying the results of Proposition 5.3, the solutions of the 3-periodic matrix equation (5.1) are described as follows. If k is even, we have

$$\begin{aligned} Y_{kp+1}=E_{1}^{\frac{k}{2}} Y_{1},\, Y_{kp+2}=E_{2}^{\frac{k}{2}} A(0)^{\frac{k}{2}+1} Y_{0},\, Y_{kp+3}=E_{3}^{\frac{k}{2}} A(1)^{\frac{k}{2}+1} Y_{1}, \end{aligned}$$

and, if k is odd, we get

$$\begin{aligned} Y_{kp+1}=A(1)^{\frac{k-1}{2}}E_{3}^{\frac{k+1}{2}} Y_{0},\, Y_{kp+2}=A(2)^{\frac{k-1}{2}} E_{4}^{\frac{k+1}{2}} Y_{1},\, Y_{kp+3}=A(1)^{\frac{k+1}{2}}E_{3}^{\frac{k+1}{2}} Y_{0}, \end{aligned}$$

where \(E_1=A(2)A(1)A(0)\), \(E_2=A(2)A(1)\), \(E_3=A(2)A(0)\) and \(E_4=A(1)A(0)\).
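
The setting of Example 5.4 can also be tested numerically; the sketch below iterates \(Y_{n+2}=A(n)Y_n\) directly and compares it with the closed form of Theorem 2.1, from which the formulas above are derived (the initial vectors are random test data of our own choosing).

```python
# Numerical test for Example 5.4: direct iteration of Y_{n+2} = A(n mod 3) Y_n
# against Z(kp+i+1) = C(i)...C(0) B^k Z(0) of Theorem 2.1.
import numpy as np

A = [np.array([[1., 1.], [0., 0.]]),
     np.array([[-1., -1.], [0., 0.]]),
     np.array([[2., 1.], [0., 1.]])]
p = 3
C = lambda n: np.block([[np.zeros((2, 2)), A[n % p]],
                        [np.eye(2), np.zeros((2, 2))]])
B = C(2) @ C(1) @ C(0)

rng = np.random.default_rng(4)
Y = [rng.standard_normal(2), rng.standard_normal(2)]         # Y_0, Y_1
for n in range(40):
    Y.append(A[n % p] @ Y[n])                                # Y_{n+2} = A(n) Y_n

Z0 = np.concatenate([Y[1], Y[0]])
for k in range(2, 6):
    for i in range(p):
        lhs = np.concatenate([Y[k*p + i + 2], Y[k*p + i + 1]])   # Z(kp+i+1)
        rhs = np.linalg.multi_dot([C(t) for t in range(i, -1, -1)]
                                  + [np.linalg.matrix_power(B, k), Z0])
        assert np.allclose(lhs, rhs)
```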

5.2 Solutions of the matrix equation \(Y_{n+r} =A(n)Y_{n}\), where A(n) is p-periodic

In this subsection, we apply Algorithm 2 to solve the equation

$$\begin{aligned} Y_{n+r} =A(n)Y_{n}, n\ge 0, \end{aligned}$$
(5.7)

where A(n) is a p-periodic matrix (with period \(p\ge 2\)) of order d, and \(Y_{0},\cdots ,Y_{r-1}\) stand for the initial conditions. We assume that \(A(i)A(j)=A(j)A(i)\) for \( 0\le i,j\le p-1\). In a similar way as before, we can formulate Eq. (5.7) as the following matrix equation:

$$\begin{aligned} Z(n+1)=C(n)Z(n),\;\;\;\;\;\; n\ge 0, \end{aligned}$$

where C(n) is the companion matrix in blocks \(C(n)=C[\Theta _{d\times d}, \cdots , \Theta _{d\times d}, A(n)]_{r\times r}\) and Z(n) is the column vector \(Z(n)=\left( Y_{n+r-1}, Y_{n+r-2}, \cdots , Y_{n}\right) ^{T}\). Consider the product of companion matrices in blocks \( B=C(p-1)\cdots C(0)\). For \( 1\le m\le p-1\), the product of companion matrices in blocks \(B^{(m)}=C(m-1)\cdots C(0)\) is given by \(B^{(m)}=\begin{pmatrix} B_{11}^{(m)} &{}\cdots &{} B_{1r}^{(m)} \\ \vdots &{} \ddots &{}\vdots \\ B_{r1}^{(m)} &{}\cdots &{}B_{rr}^{(m)} \end{pmatrix}\). We point out that for \(m=p\) we have \(B=B^{(p)}\). To obtain the form of the matrix B, we apply Algorithm 2 (Algorithm 1 can also be used here). We need to distinguish three cases. When \(p=r\), a direct computation using Algorithm 2 gives

$$\begin{aligned} B=\begin{pmatrix} Y_{r}^{(r-1)}(0) &{} Y_{r}^{(r-1)}(1) &{}\cdots &{} Y_{r}^{(r-1)}(r-1) \\ \vdots &{} \cdots &{}\cdots &{}\vdots \\ Y_{r}^{(0)}(0) &{} Y_{r}^{(0)}(1) &{}\cdots &{} Y_{r}^{(0)}(r-1) \end{pmatrix}. \end{aligned}$$

Thus, we obtain

$$\begin{aligned} B= diag(A(p-1), A(p-2),...,A(0))_{r\times r}. \end{aligned}$$

Therefore, for every \(k\ge 1\), we have

$$\begin{aligned} B^{k}= diag(A(p-1)^{k}, A(p-2)^{k},...,A(0)^{k})_{r\times r}. \end{aligned}$$

In this case, the solution of the p-periodic matrix equation (5.7) is given by the following proposition.

Proposition 5.5

Consider the p-periodic matrix difference equation (5.7) with the initial conditions vector \((Y_{r-1},\cdots ,Y_{0})^\top \). Suppose that the period p satisfies \(p=r\). Then, for \(n=kp\), the solution of Eq. (5.7) is given by

$$\begin{aligned} Y_{kp+r}=A(0)^{k+1}Y_{0},\,\, Y_{kp+r-1}=A(p-1)^{k} Y_{r-1},\,\, \cdots ,\,\,Y_{kp+1}=A(1)^{k}Y_{1}, \end{aligned}$$

and if \(n=kp+i\), for \(i=1,\cdots ,p-2\), we have

$$\begin{aligned}&Y_{kp+i+r}=A(i)^{k+1}Y_{i},\,\, \cdots ,\,\,Y_{kp+r}=A(0)^{k+1} Y_{0},\\&Y_{kp+r-1}=A(p-1)^{k} Y_{r-1},\, \cdots ,\, Y_{kp+i+1}=A(i+1)^{k}Y_{i+1}. \end{aligned}$$

Finally, for \(n=(k+1)p-1\), we have

$$\begin{aligned} Y_{(k+1)p+r-1}=A(p-1)^{k+1}Y_{r-1}, \,\,\cdots ,\,\,Y_{(k+1)p}=A(0)^{k+1}Y_{0}. \end{aligned}$$
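
A small numerical check of Proposition 5.5 may be useful: for \(p=r\), each residue class evolves independently under its own coefficient matrix. The test matrices below are random choices of our own.

```python
# Checking Proposition 5.5 for p = r = 3: Y_{kr+j} = A(j)^k Y_j.
import numpy as np

rng = np.random.default_rng(5)
d, r = 2, 3                                          # here p = r
A = [rng.standard_normal((d, d)) for _ in range(r)]

Y = [rng.standard_normal(d) for _ in range(r)]       # Y_0, ..., Y_{r-1}
for n in range(30):
    Y.append(A[n % r] @ Y[n])                        # Y_{n+r} = A(n) Y_n

for k in range(1, 8):
    for j in range(r):
        assert np.allclose(Y[k*r + j],
                           np.linalg.matrix_power(A[j], k) @ Y[j])
```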

When \(p<r\), in a similar way by employing the Algorithm 2, we get

$$\begin{aligned} B=\begin{pmatrix} Y_{r}^{(p-1)}(0) &{} Y_{r}^{(p-1)}(1) &{}\cdots &{} Y_{r}^{(p-1)}(r-1) \\ \vdots &{} \cdots &{}\cdots &{}\vdots \\ Y_{r}^{(0)}(0) &{} Y_{r}^{(0)}(1) &{}\cdots &{} Y_{r}^{(0)}(r-1) \\ Y_{r-1}^{(0)}(0) &{} Y_{r-1}^{(0)}(1) &{}\cdots &{} Y_{r-1}^{(0)}(r-1) \\ \vdots &{} \cdots &{}\cdots &{}\vdots \\ Y_{p}^{(0)}(0) &{} Y_{p}^{(0)}(1) &{}\cdots &{} Y_{p}^{(0)}(r-1) \end{pmatrix}. \end{aligned}$$

By a straightforward computation, we obtain

$$\begin{aligned} B_{i,r-p+i}=A(p-i) \,\,\,(i=1,\cdots ,p), \,\, B_{i,i-p}=\mathbf {1}_{d\times d} \,\,\,(i=p+1,\cdots ,r), \, \text{ and } B_{i,j}=\Theta _{d\times d} \,\,\text{ otherwise. } \end{aligned}$$

Thence, we have

$$\begin{aligned} B=\begin{pmatrix} B_1 &{} B_2 \\ B_3 &{} B_4 \end{pmatrix}, \end{aligned}$$

where \(B_1=\Theta _{p\times (r-p)}\) is the null block matrix of size \(p\times (r-p)\), \(B_{2}=diag(A(p-1),A(p-2),\dots ,A(0))_{p\times p}\) (block diagonal matrix of size \(p\times p\)), \(B_{3}=\mathbf {1}_{(r-p)\times (r-p)}\) (block identity matrix of size \((r-p)\times (r-p)\)), and \(B_4=\Theta _{(r-p)\times p}\) is the null block matrix of size \((r-p)\times p\) (all blocks being of order d).

To express the solution of Eq. (5.7) when \(p<r\), we need to compute the powers \(B^{k}\) of the matrix B given above using Theorem 3.3. Unfortunately, it is not straightforward to derive the expression of the matrix characteristic polynomial \( P(S)=det[\mathbf {1}_{r\times r}\otimes S - B]=S^r-D_0S^{r-1}+\cdots -D_{r-1}\) of B. Therefore, we propose to examine the following example, with \(p=3\) and \(r=5\).

Example 5.6

Consider the matrix equation

$$\begin{aligned} Y_{n+5}=A(n)Y_{n}, \end{aligned}$$

where A(n) is periodic with period \(p=3\), such that \(A(0)=\begin{pmatrix} 1 &{} 1\\ 1 &{} 1 \end{pmatrix},\) \(A(1)=\begin{pmatrix} 2 &{} -1\\ -1 &{} 2 \end{pmatrix}\) and \(A(2)=\begin{pmatrix} 0 &{} 1\\ 1 &{} 0 \end{pmatrix}.\)

Thus, applying Algorithm 2, we get the entries of the matrix \(B=C(2)C(1)C(0)\) as follows:

$$\begin{aligned} B=\begin{pmatrix} \Theta _{2\times 2}&{} \Theta _{2\times 2} &{} A(2) &{} \Theta _{2\times 2} &{} \Theta _{2\times 2} \\ \Theta _{2\times 2}&{} \Theta _{2\times 2} &{} \Theta _{2\times 2} &{} A(1) &{} \Theta _{2\times 2}\\ \Theta _{2\times 2}&{} \Theta _{2\times 2} &{} \Theta _{2\times 2} &{} \Theta _{2\times 2} &{} A(0)\\ \mathbf {1}_{2\times 2} &{} \Theta _{2\times 2}&{} \Theta _{2\times 2} &{} \Theta _{2\times 2} &{} \Theta _{2\times 2}\\ \Theta _{2\times 2} &{} \mathbf {1}_{2\times 2} &{} \Theta _{2\times 2} &{} \Theta _{2\times 2} &{} \Theta _{2\times 2} \end{pmatrix}. \end{aligned}$$
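
This B can be reproduced by a direct multiplication of the three companion matrices in blocks with the concrete \(A(0)\), \(A(1)\), \(A(2)\) of the example; the short Python check below is illustrative (the helper `companion_block` is an assumed construction, not the paper's code).

```python
# Direct check of the displayed B for Example 5.6 (d = 2, r = 5, p = 3).
import numpy as np

d, r, p = 2, 5, 3
A = [np.array([[1, 1], [1, 1]]),      # A(0)
     np.array([[2, -1], [-1, 2]]),    # A(1)
     np.array([[0, 1], [1, 0]])]      # A(2)

def companion_block(An):
    C = np.zeros((r * d, r * d))
    C[:d, (r - 1) * d:] = An
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)
    return C

B = companion_block(A[2]) @ companion_block(A[1]) @ companion_block(A[0])
# Nonzero blocks should be (1,3) = A(2), (2,4) = A(1), (3,5) = A(0),
# and (4,1) = (5,2) = identity.
for i in range(r):
    for j in range(r):
        blk = B[i * d:(i + 1) * d, j * d:(j + 1) * d]
        if not np.allclose(blk, 0):
            print((i + 1, j + 1), blk.astype(int).tolist())
```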

Let \(n=kp+i\) (\(i=0,1,2\)) and \(k=5k'+s\) (\(s=0,1,2,3,4\)), and set \(D=A(2)A(1)A(0)\). The solution of the 3-periodic matrix difference equation \(Y_{n+5}=A(n)Y_{n}\), subject to the initial conditions vector \((Y_{4},Y_{3},Y_{2},Y_{1},Y_{0})^\top \), is given by

(1) When \(k=5k'\), we have

    $$\begin{aligned} \left\{ \begin{aligned} Y_{kp+i+r-l}&=D^{k'}A(l)Y_{l}, \;\; \text{ for }\;\; l=i,i-1,\cdots ,0,\\ Y_{kp+i+r-l}&=D^{k'}Y_{i+r-l},\;\; \text{ for }\;\; l=i+1,\cdots ,r-1. \end{aligned} \right. \end{aligned}$$
(2) When \(k=5k'+1\), we have

    $$\begin{aligned} \left\{ \begin{aligned} Y_{kp+7}&=A(1)^{k'}(A(0)A(2))^{k'+1}Y_{0}, \\ Y_{kp+l+1}&=D^{k'}A(l-1)Y_{l-1},\;\;\text{ for }\;\; 1\le l \le 5,\\ Y_{kp+1}&=D^{k'}Y_{4}. \\ \end{aligned} \right. \end{aligned}$$
(3) When \(k=5k'+2\), we have

    $$\begin{aligned} \left\{ \begin{aligned} Y_{kp+l+1}&=D^{k'}A(l+2)Y_{l+2}, \;\; \text{ for }\;\; 0\le l\le 2, \\ Y_{kp+l+1}&=D^{k'}A(l)A(l+2)Y_{l-3},\;\; \text{ for }\;\; l=3,4,5,6. \end{aligned} \right. \end{aligned}$$
(4) When \(k=5k'+3\), we have

    $$\begin{aligned} \left\{ \begin{aligned} Y_{kp+7}&=D^{k'+1}Y_{1}, \\ Y_{kp+6}&=D^{k'+1}Y_{0},\\ Y_{kp+l+1}&=D^{k'}A(l)A(l+2)Y_{l} \;\; \text{ for }\;\; l=0,1,2,3,4.\\ \end{aligned} \right. \end{aligned}$$
(5) When \(k=5k'+4\), we have

    $$\begin{aligned} \left\{ \begin{aligned} Y_{kp+l+1}&=D^{k'+1}Y_{l-2}, \;\; \text{ for }\;\; l=2,3,4,5,6,\\ Y_{kp+l+1}&=D^{k'}A(l)A(l+2)Y_{l+3}, \;\; \text{ for }\;\; l=0,1. \end{aligned} \right. \end{aligned}$$

Finally, for \(p>r\), we once again need to distinguish two cases. The first case is \(p=kr\) with \(k>1\). Following the same approach, again using Algorithm 2, we get \(\displaystyle B=(B_{i,j})_{1\le i,j\le r}\), where

$$\begin{aligned} B_{i,i}=A(p-i)A(p-r-i)\cdots A(r-i), \, \,\, B_{i,j}=\Theta _{d\times d} \,\, \text{ for } \,\,i\not = j. \end{aligned}$$

Thus, we obtain

$$\begin{aligned} B^{(p)}=diag(B_{1},B_{2},\cdots , B_{r})_{r\times r}, \end{aligned}$$

where \(B_{i}=A(p-i)A(p-r-i)\cdots A(r-i)\) (\(i=1,\cdots ,r\)). Therefore, for every \(k\ge 1\), we have

$$\begin{aligned} B^{k}=diag(B_{1}^{k},B_{2}^{k},\cdots , B_{r}^{k})_{r\times r}. \end{aligned}$$
(5.8)
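
The block-diagonal form of B, from which (5.8) follows, can be checked numerically for a small instance; the sketch below uses assumed dimensions and random coefficients.

```python
# Minimal sketch for p = kr (here k = 2, r = 3): B should be block diagonal
# with B_i = A(p-i) A(p-r-i) ... A(r-i). Sizes and coefficients are assumptions.
import numpy as np

d, r, k = 2, 3, 2
p = k * r
rng = np.random.default_rng(5)
A = [rng.standard_normal((d, d)) for _ in range(p)]

def companion_block(An):
    C = np.zeros((r * d, r * d))
    C[:d, (r - 1) * d:] = An
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)
    return C

B = np.eye(r * d)
for n in range(p):
    B = companion_block(A[n]) @ B

for i in range(1, r + 1):        # check the diagonal block B_i (1-indexed)
    Bi = np.eye(d)
    for t in range(k):           # product A(p-i) A(p-r-i) ... A(r-i)
        Bi = Bi @ A[p - i - t * r]
    print(np.allclose(B[(i - 1) * d:i * d, (i - 1) * d:i * d], Bi))  # expected: True
```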

The second case is \(p=kr+s\) with \(s=1,\cdots ,r-1\). Then, a direct computation shows that the entries of the matrix \(\displaystyle B=(B_{i,j})_{1\le i,j\le r}\) are given by

$$\begin{aligned} \left\{ \begin{aligned} B_{i,r-s+i}&=A(p-i)A(p-i-r)\cdots A(p-i-kr), \text{ if } \;i=1,\cdots ,s,\\ B_{i,i-s}&=A(p-i)A(p-i-r)\cdots A(p-i-(k-1)r), \text{ if } \;i=s+1,\cdots ,r,\\ B_{i,j}&=\Theta _{d\times d} \,\,\text{ otherwise. } \end{aligned} \right. \end{aligned}$$

In other words, we have

$$\begin{aligned} B=\begin{pmatrix} \Theta _{s\times (r-s)} &{} B_{1}\\ B_2 &{} \Theta _{(r-s)\times s} \end{pmatrix}, \end{aligned}$$

where \(\Theta _{k\times m}\) is the null matrix of order \(k\times m\) and

$$\begin{aligned} B_{1}=diag(B_{1}^{(1)}, B_{1}^{(2)},...,B_{1}^{(s)})_{s\times s} \, \, \text{ and } \, \, B_{2}=diag(B_{2}^{(s+1)}, B_{2}^{(s+2)},...,B_{2}^{(r)})_{(r-s)\times (r-s)}, \end{aligned}$$

such that \(B_{1}^{(i)}=A(p-i)A(p-r-i)\cdots A(p-kr-i)\) for \(i=1,\cdots ,s\) and \(B_{2}^{(i)}=A(p-i)A(p-r-i)\cdots A(p-(k-1)r-i)\) for \(i=s+1,\cdots ,r\).
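
The anti-diagonal block form of B for \(p=kr+s\) can also be validated numerically. The following sketch (with assumed dimensions \(d=2\), \(r=3\), \(k=2\), \(s=1\), hence \(p=7\), and random coefficients) rebuilds B from the formulas for \(B_{1}^{(i)}\) and \(B_{2}^{(i)}\) and compares it with the direct product of companion matrices.

```python
# Hedged sketch for p = kr + s (0 < s < r): B should equal [[0, B1], [B2, 0]]
# with the block-diagonal B1, B2 described above. Assumed sizes/coefficients.
import numpy as np

d, r = 2, 3
k, s = 2, 1
p = k * r + s                                    # p = 7
rng = np.random.default_rng(3)
A = [rng.standard_normal((d, d)) for _ in range(p)]

def companion_block(An):
    C = np.zeros((r * d, r * d))
    C[:d, (r - 1) * d:] = An
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)
    return C

B = np.eye(r * d)
for n in range(p):
    B = companion_block(A[n]) @ B

def prod(args):                                  # left-to-right product A(args[0]) A(args[1]) ...
    P = np.eye(d)
    for a in args:
        P = P @ A[a]
    return P

expected = np.zeros((r * d, r * d))
for i in range(1, s + 1):        # B_{i, r-s+i} = A(p-i) A(p-i-r) ... A(p-i-kr)
    expected[(i - 1) * d:i * d, (r - s + i - 1) * d:(r - s + i) * d] = \
        prod([p - i - t * r for t in range(k + 1)])
for i in range(s + 1, r + 1):    # B_{i, i-s} = A(p-i) A(p-i-r) ... A(p-i-(k-1)r)
    expected[(i - 1) * d:i * d, (i - s - 1) * d:(i - s) * d] = \
        prod([p - i - t * r for t in range(k)])
print(np.allclose(B, expected))  # expected: True
```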

To illustrate further, we examine the particular case of the matrix equation (5.7) of order \(r=3\), where A(n) is periodic with period \(p=4\). By Theorem 3.3, the solution is given as follows:

$$\begin{aligned} Z(kp+i+1)=C(i)\cdots C(0) B^{k} Z(0),\;\; \text{ for }\;\; i=0,1,2,3, \end{aligned}$$

where \(B=C(3)C(2)C(1)C(0)\). Then, using the generalized Cayley–Hamilton Theorem 3.3 for computing the powers \(B^{k}\) of the matrix B, we get the following result.

Proposition 5.7

Let \(n=kp+i\) (\(i=0,\cdots ,3\)) and \(k=3k'+s\) (\(s=0,1,2\)). The solution of the matrix difference equation \(Y_{n+3}=A(n)Y_{n}\), where A(n) is periodic with period \(p=4\), subject to the initial conditions vector \((Y_{2},Y_{1},Y_{0})^\top \), is given by

(1) When \(k=3k'\), we have

    $$\begin{aligned} \left\{ \begin{aligned} Y_{kp+6}&=D^{k'}A(3)A(0)Y_{0},\\ Y_{kp+l+1}&=D^{k'}A(l-2)Y_{l-2}, \;\;\text{ for }\;\; l=2,3,4,\\ Y_{kp+l+1}&=D^{k'}Y_{l+1},\;\;\text{ for }\;\; l=0,1. \\ \end{aligned} \right. \end{aligned}$$
(2) When \(k=3k'+1\), we have

    $$\begin{aligned} \left\{ \begin{aligned} Y_{kp+l+1}&=D^{k'}A(l-1)A(l-2)A(l-3)Y_{l-4}, \;\;\text{ for }\;\; l=4,5,\\ Y_{kp+l+1}&=D^{k'}A(l-1)A(l+2)Y_{l-1}, \;\;\text{ for }\;\; l=1,2,3,\\ Y_{kp+1}&=D^{k'}A(2)Y_{2}. \\ \end{aligned} \right. \end{aligned}$$
(3) When \(k=3k'+2\), we have

    $$\begin{aligned} \left\{ \begin{aligned} Y_{kp+l+1}&=D^{k'+1}Y_{l-3}, \;\;\text{ for }\;\; l=3,4,5,\\ Y_{kp+l+1}&=D^{k'}\Big (\prod _{j=0,\, j\ne l+1}^{3}A(j)\Big )Y_{l}, \;\;\text{ for }\;\; l=0,1,2, \\ \end{aligned} \right. \end{aligned}$$

where \(D=A(3)A(2)A(1)A(0)\).
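
Independently of the case-by-case closed forms, the underlying product formula \(Z(kp+i+1)=C(i)\cdots C(0)B^{k}Z(0)\) can be cross-checked against direct iteration of the recursion. Below is a minimal sketch for \(r=3\), \(p=4\); sizes and coefficients are illustrative assumptions.

```python
# Cross-check of Z(kp+i+1) = C(i)...C(0) B^k Z(0) for r = 3, p = 4 against
# direct iteration of Y_{n+3} = A(n mod 4) Y_n. Assumed sizes/coefficients.
import numpy as np

d, r, p = 2, 3, 4
rng = np.random.default_rng(4)
A = [rng.standard_normal((d, d)) for _ in range(p)]
Y = [rng.standard_normal(d) for _ in range(r)]    # initial values Y_0, Y_1, Y_2

def companion_block(An):
    C = np.zeros((r * d, r * d))
    C[:d, (r - 1) * d:] = An
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)
    return C

C = [companion_block(A[n]) for n in range(p)]     # C(0), ..., C(3)
B = C[3] @ C[2] @ C[1] @ C[0]

N = 60
for n in range(N):                                # direct iteration
    Y.append(A[n % p] @ Y[n])

k, i = 3, 2
Z = np.linalg.matrix_power(B, k) @ np.concatenate([Y[2], Y[1], Y[0]])  # Z(0) stacked
for m in range(i + 1):                            # apply C(0), then C(1), ..., C(i)
    Z = C[m] @ Z
target = np.concatenate([Y[k * p + i + r - j] for j in range(r)])      # Z(kp+i+1)
print(np.allclose(Z, target))                     # expected: True
```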

6 Discussion and concluding remarks

In this paper, we have studied a class of periodic matrix difference equations. While formulating the result on the solutions of this class of equations, we were led to deal with two new problems. The first one concerns the expression of the powers of matrices in blocks. To this aim, we proposed a method for computing the powers of matrices in blocks, based on the linear recursive sequences of Fibonacci type in the algebra of square matrices and the generalized Cayley–Hamilton theorem. Here, the combinatorial expression for the linear sequences of Fibonacci type in the algebra of square matrices \(GL(r, \mathbb {C}^{d\times d})\) and the Kronecker product play a central role. The second problem deals with the computation of the product of companion matrices in blocks. To this end, we developed two recursive algorithms for calculating the entries of the resulting matrix product: Algorithm 1 is an iterative process based on a sequence of matrices, while Algorithm 2 relies solely on a family of Fibonacci sequences in the algebra of square matrices. General results are established and special cases are considered. To the best of our knowledge, the results of this investigation constitute a pilot study for solving periodic matrix difference equations.

It is worth noting that, for reasons of clarity and simplicity, the matrices in the examples illustrating our results are mostly small. However, the general results and algorithms show that our method also works for matrices of large size. On the other hand, implementing the two algorithms in code is of genuine interest, both for treating matrices of large size and for studying, as a concrete application, the periodic matrix model of Samuelson–Hicks. Partial results have been established which illustrate that this type of method can be used effectively.

Finally, the recent literature shows that the generalized Cayley–Hamilton Theorem constitutes an important tool for dealing with various applied and theoretical topics. In particular, it can be used as a new technique for solving some matrix and matrix differential equations (see, for example, [2, 7,8,9, 13, 14]). As for the periodic matrix model of Samuelson–Hicks, it seems to us that our results and algorithms can also be used effectively for studying some topics related to the generalized Cayley–Hamilton Theorem.