Periodic matrix difference equations and companion matrices in blocks: some applications

This study is devoted to some periodic matrix difference equations, through their associated product of companion matrices in blocks. Linear recursive sequences in the algebra of square matrices in blocks and the generalized Cayley–Hamilton theorem are considered for working out some results about the powers of matrices in blocks. Two algorithms for computing the finite product of periodic companion matrices in blocks are built. Illustrative examples and applications are considered to demonstrate the effectiveness of our approach.


Introduction
It is well known that the scalar homogeneous linear difference equations of order r ≥ 2, defined by y_{n+r} = a_1(n) y_{n+r−1} + · · · + a_r(n) y_n, for n ≥ 0, (1.1) where the coefficients a_1(n), . . . , a_r(n) are functions of n, occur in several fields of mathematics and the applied sciences. Several methods have been provided in the literature for solving Eq. (1.1) (see, for example, [15,17] and references therein). Recently, the homogeneous linear difference equations (1.1) with periodic coefficients, i.e., a_j(n + p) = a_j(n), have been solved in [4,5], using properties of the generalized Fibonacci sequences in the algebra of square matrices. More precisely, in [4], Eq. (1.1) has been studied through its equivalent matrix equation Y(n + 1) = C(n)Y(n), for n ≥ 0, (1.2) where Y(n) = (y_{n+r−1}, . . . , y_n)^T and C(n) is the companion matrix of the coefficients a_1(n), . . . , a_r(n).
In recent years, the product of companion matrices has attracted much attention, because this product occurs in various fields of mathematics and applied sciences, such as the Floquet theory of linear difference equations (see [4,5,16,17]). Diverse methods for computing the product of companion matrices have been proposed in the literature. For instance, in [16], the authors developed an explicit formula for the entries of the product of companion matrices; they then applied their results to solve linear difference equations with variable coefficients. Another expression for the product of companion matrices was obtained in [17], based on the study of solutions of non-homogeneous and homogeneous linear difference equations of order N with variable coefficients. Recently, it was shown in [4,5] that the product of companion matrices plays a central role in investigating a large class of periodic discrete homogeneous difference equations via generalized Fibonacci sequences. Moreover, through the key tool of generalized Fibonacci sequences, there remain some interesting and relevant problems to be examined.
In this paper, we aim to study the linear matrix difference equations defined by Y_{n+r} = A_1(n) Y_{n+r−1} + · · · + A_r(n) Y_n, for n ≥ 0, (1.3) where Y_0, · · · , Y_{r−1} are in C^d and stand for the initial values, and the coefficients A_1(n), · · · , A_r(n) are square matrices in C^{d×d}, the algebra of square matrices of order d with complex coefficients, representing p-periodic matrix functions of n, that is, A_j(n + p) = A_j(n) for every n ≥ 0, where p = min{N ∈ N, N ≥ 1 : A_j(n + N) = A_j(n), for j = 1, . . . , r and n ≥ 0}. The class of discrete linear matrix equations (1.3) appears in many applied fields, such as economics, population dynamics, and signal processing. For instance, periodic matrix models are often used to study seasonal temporal variation of structured populations (see [6], for example). They can also occur in many practical control systems (see [20], for example).
In our exploration, we study properties of some periodic matrix difference equations (1.3) through their close relation with the product of companion matrices in blocks. First, we formulate the main result on the solutions of the linear matrix difference equation (1.3), through the product of companion matrices in blocks and powers of matrices in blocks. As a matter of fact, we utilize the generalized Cayley-Hamilton theorem to derive a new result that allows us to compute the powers of matrices in blocks. Moreover, we outline the recursive method underlying two algorithms for computing the finite product of companion matrices in blocks. To highlight the importance of our results, special cases, significant examples, and applications are provided.
The outline of this study is as follows. Section 2 is devoted to some basic properties of the periodic matrix difference equations (1.3), where the product of periodic companion matrices in blocks is considered. Section 3 concerns the study of the powers of matrices in the algebra of square matrices in blocks. More precisely, using the generalized Cayley-Hamilton Theorem and linear recursiveness in the algebra of square matrices in blocks, we give an explicit expression for the powers of a square matrix in blocks; here, the Kronecker product (or tensor product) of matrices plays a central role. In Sect. 4, we develop two algorithms for computing the finite product of companion matrices in blocks, where a recursive sequence of matrices is considered. In Sect. 5, gathering the results of Sects. 2, 3, and 4, we employ them to examine some special classes of periodic matrix difference equations (1.3).

Periodic matrix difference equations: general setting
In the same way as in the scalar case, the matrix equation associated with Eq. (1.3) is given by Y(n + 1) = C(n)Y(n), for n ≥ 0, (2.1) where Y(n) = (Y_{n+r−1}, · · · , Y_n)^T ∈ C^{dr} and C(n) is a matrix of order dr, i.e., in C^{dr×dr}, given by (2.2): the block companion matrix whose first block-row is A_1(n), A_2(n), · · · , A_r(n) and whose subdiagonal carries identity blocks, where 1_{d×d} and 0_{d×d} are, respectively, the identity matrix and the zero matrix of order d × d. We observe that the matrix C(n) is a companion matrix in blocks. In the sequel, we use the notation C(n) = C[A_1(n), A_2(n), · · · , A_r(n)]_{r×r} for these companion matrices in blocks of order r. As in the scalar case, the main problem in studying the matrix equation (2.1) reduces to the study of the product of companion matrices in blocks C(n) · · · C(1)C(0). Since A_j(n + p) = A_j(n) for every j (1 ≤ j ≤ r) and n ≥ 0, we infer that C(n + p) = C(n) for every n ≥ 0, where p ≥ 1 is the period. Here we are concerned with the finite product of companion matrices in blocks. It is worthwhile to point out that this class of companion matrices in blocks arises in various mathematical and applied fields (see, for example, [18]). In this work, we will emphasize its key role in providing the solutions of Eq. (1.3).
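To make the block companion structure of C(n) concrete, the following sketch (in Python with NumPy; the helper name `block_companion` is ours, not from the paper) assembles C[A_1(n), · · · , A_r(n)] so that Y(n + 1) = C(n)Y(n) reproduces the recurrence (1.3):

```python
import numpy as np

def block_companion(blocks):
    """Assemble the block companion matrix C[A_1, ..., A_r] of order d*r:
    the first block-row holds A_1, ..., A_r, and identity blocks 1_{d x d}
    sit on the subdiagonal, matching the structure of C(n) in Eq. (2.2)."""
    r = len(blocks)
    d = blocks[0].shape[0]
    C = np.zeros((d * r, d * r), dtype=complex)
    for j, A in enumerate(blocks):
        C[:d, j * d:(j + 1) * d] = A          # first block-row: A_1, ..., A_r
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)  # shift blocks
    return C
```

Applying the assembled matrix to the stacked state (Y_{n+r−1}, · · · , Y_n)^T produces (Y_{n+r}, · · · , Y_{n+1})^T with Y_{n+r} = A_1(n)Y_{n+r−1} + · · · + A_r(n)Y_n.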
Thus, for every n ≡ i [p], i.e., n = kp + i (0 ≤ i ≤ p − 1), the solution of the matrix equations (2.1) and (2.2) related to the periodic matrix difference equation (1.3) is given as follows.

Theorem 2.1 Let B = C(p − 1) · · · C(0). Then, for every n = kp + i (0 ≤ i ≤ p − 1), the solution of Eq. (2.1) is given by Y(n) = C(i − 1) · · · C(0) B^k Y(0). (2.3)
Theorem 2.1 shows that there is a close link between periodic matrix difference equations and products of companion matrices in blocks. More precisely, in expression (2.3) there appear a finite product of companion matrices in blocks and the powers of the matrix B = C(p − 1) · · · C(0), which is itself a finite product of companion matrices in blocks.
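The factorization (2.3) can be checked numerically on illustrative data; the sketch below assumes nothing beyond Eq. (2.1) and the periodicity C(n + p) = C(n), with the period, sizes, and matrices chosen arbitrarily for illustration:

```python
import numpy as np

# Numerical check of Theorem 2.1: for n = k*p + i, the solution of
# Y(n+1) = C(n) Y(n) with p-periodic C(n) factors as
# Y(n) = C(i-1) ... C(0) @ B^k @ Y(0), where B = C(p-1) ... C(0).
rng = np.random.default_rng(0)
p, m = 3, 4                        # period and matrix size (illustrative)
Cs = [rng.standard_normal((m, m)) for _ in range(p)]  # one period of C(n)
C = lambda n: Cs[n % p]

Y0 = rng.standard_normal(m)
n = 11                             # n = 3*3 + 2, so k = 3, i = 2
Y = Y0.copy()
for j in range(n):                 # direct iteration of Y(n+1) = C(n) Y(n)
    Y = C(j) @ Y

B = np.linalg.multi_dot(Cs[::-1])  # B = C(p-1) ... C(0)
k, i = divmod(n, p)
prefix = np.linalg.multi_dot([Cs[j] for j in range(i)][::-1])  # C(i-1)...C(0)
Y_formula = prefix @ np.linalg.matrix_power(B, k) @ Y0
assert np.allclose(Y, Y_formula)
```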
To establish further results concerning the explicit representation of the solutions of the periodic matrix equations (2.1) and (2.2), we are led to study the two following problems. The first is related to the powers of matrices in blocks, and the second concerns the finite product of companion matrices in blocks. For the first problem, our approach revolves around the generalized Cayley-Hamilton Theorem, whereas for the second we build two recursive algorithms for computing this finite product of companion matrices in blocks.

Kronecker product and linear recursive relation in the algebra of square matrices in blocks
In this subsection, we are interested in the use of the matrix Kronecker product for studying some linear recursive relations in the algebra of square matrices in blocks, and in their use for the computation of the powers of matrices in blocks through the generalized Cayley-Hamilton Theorem. In fact, using the Kronecker product, we extend the results of [3] to the algebra of square matrices in blocks.
For the sake of clarity, let us recall that the Kronecker product can be defined for two matrices of arbitrary size over any ring. In the sequel of this study, we consider only square matrices whose entries lie in the field of real numbers R or complex numbers C (see, for example, [11,19]). Let us start by recalling the definition of the Kronecker product. To that end, let C^{d×d} and C^{r×r} be the algebras of square matrices of order d ≥ 1 and r ≥ 1, respectively.

Definition 3.1
The Kronecker product of the matrix A = (a_{ij})_{1≤i,j≤r} ∈ C^{r×r} with the matrix B = (b_{ij})_{1≤i,j≤d} ∈ C^{d×d} is defined as the block matrix A ⊗ B = (a_{ij} B)_{1≤i,j≤r}. (3.1) Note that other names exist for the Kronecker product, such as tensor product, direct product, or left direct product (see, for example, [11]). For more details, an interesting overview of the Kronecker product is given by K. Schnack in [19]. The Kronecker product has several important algebraic properties; we recall those that will be used in this section. Let us first remark that for r = 1 we have A = a_{1,1} ∈ C^{1×1} = C, so the tensor product (3.1) takes the form a_{1,1} ⊗ B = a_{1,1}B, which shows that the tensor product coincides with the usual multiplication of matrices by scalars. Equivalently, the tensor product can be viewed as an extension of the usual multiplication of matrices by scalars.
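In NumPy, `np.kron` implements Definition 3.1 directly; the following small check illustrates the block structure, the scalar case r = 1, and the classical mixed-product property (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
K = np.kron(A, B)                 # the block matrix (a_ij * B), per (3.1)
assert K.shape == (4, 4)
assert np.array_equal(K[:2, 2:4], 2 * B)   # the (1,2)-block is a_12 * B

# r = 1: the Kronecker product reduces to scalar multiplication
assert np.array_equal(np.kron(np.array([[3]]), B), 3 * B)

# mixed-product property: (A x B)(C x D) = (AC) x (BD)
C = np.array([[2, 0], [1, 1]])
D = np.eye(2, dtype=int)
assert np.array_equal(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
```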
Expression (3.1) shows that A ⊗ B is an element of GL(r, C^{d×d}), the algebra of square matrices of order r ≥ 1 with coefficients in C^{d×d}. Moreover, A ⊗ B can also be identified with an element of C^{rd×rd}, the algebra of square matrices of order rd with coefficients in C. Therefore, we have the following known isomorphisms of algebras: C^{r×r} ⊗ C^{d×d} ≅ GL(r, C^{d×d}) ≅ C^{rd×rd}. In the sequel, we will use the notation M_r^d to designate, without distinction, the previous realizations of C^{r×r} ⊗ C^{d×d}.
A method for computing the powers of the matrices of C^{r×r}, the algebra of square matrices, was considered in [3]. This method is based on linear recursive sequences of Fibonacci type in the algebra of square matrices C^{r×r}, and it can be extended here as follows. More precisely, for computing the powers of a matrix in blocks, we introduce the notion of linear recursive sequences of Fibonacci type in the algebra of square matrices M_r^d = GL(r, C^{d×d}). Let A_0, A_1, · · · , A_{r−1} be a family of commuting matrices in GL(r, C^{d×d}), and B_0, B_1, · · · , B_{r−1} (r > 2) a given sequence of GL(r, C^{d×d}) serving as initial values of the recursion of order r with coefficients A_0, A_1, · · · , A_{r−1}. In other words, the sequence {Y_n}_{n≥0} so defined is called a generalized Fibonacci sequence, where A_0, A_1, · · · , A_{r−1} are the coefficients and Y_0, Y_1, · · · , Y_{r−1} stand for the initial conditions. As was shown in [3], we have the expressions (3.2) and (3.3), where W_s = A_{r−1}B_s + · · · + A_s B_{r−1} for s = 0, 1, · · · , r − 1, and with ρ(r, r) = 1_{r×r} = diag(1_{d×d}, . . . , 1_{d×d}) = 1_{r×r} ⊗ 1_{d×d} (the r-by-r diagonal matrix in which the entries of the main diagonal are all 1_{d×d}) and ρ(n, r) = 0_{r×r} ⊗ 0_{d×d} if n < r.
The preceding expressions (3.2) and (3.3), combined with the generalized Cayley-Hamilton theorem, are useful for computing the powers of the matrix in blocks B = C(p − 1) · · · C(0). For this purpose, we employ the result on the generalized Fibonacci sequence, which allows us to obtain a tractable expression for the powers of a block matrix A of GL(r, C^{d×d}).

Generalized Cayley-Hamilton theorem and powers of companion matrices in blocks
We first recall the generalized Cayley-Hamilton Theorem for matrices given in [13,14]. Let us consider the square matrix in blocks A = (A_{ij})_{1≤i,j≤r}, with blocks A_{ij} in C^{d×d}. (3.4) Following [13,14], the matrix characteristic polynomial of the square matrix in blocks A, defined by (3.4), is P(S) = det[1_{r×r} ⊗ S − A] = S^r − D_0 S^{r−1} + · · · − D_{r−1}, (3.5) where S ∈ C^{d×d} is the matrix (block) eigenvalue of A and ⊗ denotes the Kronecker product of matrices.
The matrix determinant (3.5) is obtained by expanding the determinant of the matrix [1_{r×r} ⊗ S − A], considering its commuting blocks as scalar entries (see [13,14]). More precisely, it was shown in [13,14] that the matrices D_i (i = 0, · · · , r − 1) are obtained by expanding this determinant block-wise, with the blocks treated as scalar entries. Then, we have the generalized Cayley-Hamilton identity (3.6), obtained by substituting A for the block eigenvalue S in P(S). We now turn our attention to the theory of the generalized Fibonacci sequence, to extend some properties established in the case of matrices with scalar coefficients to the case of matrices in blocks. Equation (3.6) yields a linear recurrence (3.7) for the powers A^n, valid for every n ≥ r. We observe that the sequence {A^n}_{n≥0} is nothing other than a generalized Fibonacci sequence of order r, with matrix coefficients determined by the D_i. In an entirely similar way to that followed for matrices with scalar coefficients in [3], we obtain the following result for the block matrix.
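The way the generalized Cayley-Hamilton identity turns the powers A^n into a generalized Fibonacci sequence can be illustrated on an r = 2 upper-triangular block matrix with commuting blocks, where P(S) = S^2 − (A_11 + A_22)S + A_11 A_22; the matrices below are our own illustrative choices, not an example from the paper:

```python
import numpy as np

# r = 2 block matrix with pairwise commuting blocks (all polynomials in N):
I = np.eye(2)
N = np.array([[0., 1.], [0., 0.]])
A11, A12, A22 = 2 * I + N, I.copy(), 3 * I + N
A = np.block([[A11, A12], [np.zeros((2, 2)), A22]])

D0 = A11 + A22        # block-wise "trace" coefficient of P(S)
D1 = A11 @ A22        # block-wise "determinant" coefficient of P(S)
T0 = np.kron(np.eye(2), D0)
T1 = np.kron(np.eye(2), D1)
# generalized Cayley-Hamilton: A^2 - (1 x D0) A + (1 x D1) = 0
assert np.allclose(A @ A - T0 @ A + T1, 0)

# hence A^n = T0 A^{n-1} - T1 A^{n-2}: a generalized Fibonacci recursion
P_prev, P_curr = np.eye(4), A
for _ in range(2, 9):
    P_prev, P_curr = P_curr, T0 @ P_curr - T1 @ P_prev
assert np.allclose(P_curr, np.linalg.matrix_power(A, 8))
```

Multiplying the identity P(A) = 0 on the right by A^{n−2} gives the two-term recurrence used in the loop, which is the r = 2 instance of (3.7).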

Theorem 3.3 Let A be a matrix in blocks and P(S) its matrix characteristic polynomial. Then A^n is given by the combinatorial expression with ρ(r, r) = 1_{r×r} ⊗ 1_{d×d}, and ρ(p, r) = 1_{r×r} ⊗ 0_{d×d} (the zero matrix) for p < r.

To the best of our knowledge, the result of Theorem 3.3 does not appear in the current literature. Compared with related results on this subject, we establish here a handy expression that can serve as a key for resolving diverse questions, notably those concerning similar matrix equations (see, for example, [2,7-9,13,14]).
To shed more light on the content of Theorem 3.3, we examine the following special situation. Suppose that r = 2 and A = (A_11 A_12 ; 0_{d×d} A_22), where A_11, A_12, and A_22 are matrices of order d that, in addition, commute pairwise. Employing expressions (3.7) and (3.8), we obtain the corresponding expression for A^n.
What is more, in this case the sum defining ρ(n, 2) runs over the indices k_0 + 2k_1 = n − 2, and we obtain the following explicit expressions of ρ(n, 2). Therefore, we have the following proposition.

Proposition 3.4 Under the preceding data, we have
As a numerical application, a concrete matrix A can be treated; Proposition 3.4 and its numerical application illustrate the efficient role of Theorem 3.3. Moreover, our main goal is to apply Theorem 3.3 to calculate the powers of the matrix B = C(p − 1) · · · C(0), with the aim of providing solutions of the periodic matrix difference equations (2.1) and (2.2) in some special cases, which will be further exploited in Sect. 5.

Algorithm 1: product of block companion matrices
In this section, we develop the first algorithm for computing the finite product of companion matrices in blocks. Recall that this product appears in the solution expression (2.3) of Theorem 2.1. Let us consider the companion matrix in blocks (2.2), namely C(m) = C[A_1(m), · · · , A_r(m)]_{r×r}, where A_1(m), · · · , A_r(m) are matrices of order d. We shall give an explicit formula for the matrix B^(m) = C(1) · · · C(m). The main idea behind this algorithm is to build an iterative formula that computes recursively the entries B^(m)_{ij} of the matrix B^(m), starting from the entries of the matrix C(1). More precisely, this recursive process is based on a sequence of matrices D^(k)(m), whose entries are constructed recursively from the given sequences A_j(m).
To this end, we first fix the initial data, and the steps of our first algorithm are as follows. Starting from the initial matrices, we define D^(2)_j by a recursive relation; substituting D^(2)_j(m) into the formula for B^(m)_{ij} and applying the same recursive process again, we continue with D^(3)_j, and so on. Finally, by recurrence, we have the following result.
Theorem 4.1 For every m ≤ r, we have formula (4.1), and for every k > 2, we set the recursive relation (4.2). It should be made clear that, since the product of matrices is not commutative, the order of the matrices in formulas (4.1) and (4.2) needs to be respected.
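A minimal sketch of the recursion behind Algorithm 1 (the helper `product_of_companions` is our own reading, not the paper's exact bookkeeping in (4.1)-(4.2)): because C(m) has the A_j(m) in its first block-row and identities on the subdiagonal, the update B^(m) = B^(m−1) C(m) only involves the first block-column of B^(m−1), block-wise B^(m)_{ij} = B^(m−1)_{i1} A_j(m) + B^(m−1)_{i,j+1} (with the second term absent for j = r):

```python
import numpy as np

def product_of_companions(A_seq, d):
    """A_seq[m] = [A_1(m+1), ..., A_r(m+1)], each a d x d array.
    Returns the r x r grid of blocks of B = C(1) ... C(len(A_seq))."""
    r = len(A_seq[0])
    zero = np.zeros((d, d))
    # start from the identity, stored block-wise
    B = [[np.eye(d) if i == j else zero for j in range(r)] for i in range(r)]
    for A in A_seq:   # right-multiply by the next companion matrix
        B = [[B[i][0] @ A[j] + (B[i][j + 1] if j + 1 < r else zero)
              for j in range(r)] for i in range(r)]
    return B
```

Only r^2 block products per step are needed, instead of the r^3 of a generic block matrix multiplication.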
To illustrate Theorem 4.1 further, we examine the following special case.

Proposition 4.2 Consider the companion matrix in blocks:
Then, for every m ≥ 2, the entries of the matrix B^(m) = C(1) · · · C(m) are given explicitly. For m = 2, a straightforward computation gives the corresponding entries, and for m = 3 we obtain the analogous expressions. Consider the following numerical example. Suppose that the blocks are matrices of order 2, where 0_{2×2} is the null matrix of order 2 and 1_{2×2} is the identity matrix of order 2, and C_{11}(1) = (1 0 ; 0 2). Our method, being recursive and novel, then yields the entries {α^(m)_{ij}}_{1≤i,j≤r} of the product of companion matrices B^(m) = C(1) · · · C(m). Proceeding analogously to Theorem 4.1, we obtain a new expression of α^(m)_{ij}, given by the following corollary.

Corollary 4.3 Let α^(m)_{ij} be the (i, j)-entry of the product B^(m). Then, for m > r, we have
To shed more light on the preceding result of Corollary 4.3, we study the case m = 3. Let a set of real or complex numbers be given, and consider the following three companion matrices. Applying the result of Corollary 4.3, a direct computation leads to the entries of B^(3). We illustrate this situation with the following numerical application.

Algorithm 2: product of companion matrices in blocks
In this section, we provide another recursive algorithm to calculate the entries of the matrix B^(m) = C(m) · · · C(1) = C(m)B^(m−1). Our approach rests on the techniques of generalized Fibonacci sequences in the algebra of square matrices in blocks, given in Sect. 3.2. We set up a family of generalized Fibonacci sequences {Y^(k)_n(s)}_{n≥0} satisfying the recursive relation (4.3) of order r, where A_1(k), · · · , A_r(k) are matrices of order d, with mutually different sets of initial conditions defined as follows. For s = 1, the initial conditions of the sequence {Y^(k)_n(1)}_{n≥0} are given explicitly; for every 0 ≤ l ≤ r − 1 and 2 ≤ s ≤ m, they are given by (4.5). Therefore, a straightforward computation shows that the matrix B^(m−1) can be rewritten in terms of these sequences. Then, employing the recursive relation (4.3), satisfied at order r by the Y^(k)_n(s), we observe by induction that the entries of B^(m) are obtained for m < r (m ≥ 1) and, again by induction, for m ≥ r and every 1 ≤ i, j ≤ r. The main idea here is to take advantage of the fact that the sequences {Y^(k)_n(s)} satisfy the same recurrence and differ only in their initial conditions. We can observe that, for the two preceding algorithms, the commutativity condition is not necessary. More precisely, in the following section we are interested in making use of all the material provided in the above sections for exploring some special cases of the p-periodic matrix difference equations in blocks. Some particular cases are treated and some examples are given, to make this study more accessible.
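Algorithm 2's left-multiplication viewpoint from Sect. 4.2 can be sketched as follows (the helper name is ours): multiplying by C(m) on the left shifts the block-rows down and regenerates the first block-row, so each block-column of B^(m) evolves as a generalized Fibonacci sequence with coefficients A_1(m), · · · , A_r(m):

```python
import numpy as np

def product_of_companions_left(A_seq, d):
    """A_seq[k] = [A_1(k+1), ..., A_r(k+1)], each a d x d array.
    Returns the r x r grid of blocks of B = C(len(A_seq)) ... C(1)."""
    r = len(A_seq[0])
    zero = np.zeros((d, d))
    rows = [[np.eye(d) if i == j else zero for j in range(r)] for i in range(r)]
    for A in A_seq:
        # new first block-row: sum_k A_k(m) B_{kj}; other rows shift down
        top = [sum(A[k] @ rows[k][j] for k in range(r)) for j in range(r)]
        rows = [top] + rows[:-1]
    return rows
```

Each step is exactly one iteration of the order-r recurrence (4.3), applied column by column, with the identity blocks playing the role of the differing initial conditions.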

Solutions of the matrix equation Y n+2 = A(n)Y n , where A(n) is p-periodic
Consider the periodic matrix difference equation Y_{n+2} = A(n)Y_n, for n ≥ 0, (5.1) where A(n) is a p-periodic (with period p ≥ 2) square matrix of order d, and Y_0, Y_1 stand for the initial conditions. We assume that A(i)A(j) = A(j)A(i) for 0 ≤ i, j ≤ p − 1. Equation (5.1) can be written in the matrix form (5.2), with the companion matrix in blocks C(n) = C[0_{d×d}, A(n)]_{2×2}. It follows that C(n) is p-periodic, since A(n) is p-periodic. We consider the matrix B = C(p − 1)C(p − 2) · · · C(1)C(0). Thus, employing Theorem 2.1, we need to distinguish the two cases p = 2 and p > 2. If p = 2, then for every n = 2k (k ≥ 1), the matrix equation (5.2) takes the form Y(2k) = B^k Y(0). We start by giving the expression of B in terms of A(p − 1), · · · , A(1), A(0), using Algorithm 1 (see Sect. 4.1). For the sake of clarity, we consider the case r = 2. For p = 2, a straightforward application of Algorithm 1 shows that B = diag(A(1), A(0)), where the entries of B are obtained from the recursive quantities D^(k)_1(2) of the algorithm.
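For p = 2 the block-diagonal form of B, and the resulting decoupled solution Y_{2k} = A(0)^k Y_0, Y_{2k+1} = A(1)^k Y_1, can be verified numerically; note that this particular factorization does not use the commutativity assumption (the matrices below are illustrative random data):

```python
import numpy as np

# Y_{n+2} = A(n) Y_n with 2-periodic A(n): since C(n) = [[0, A(n)], [I, 0]],
# B = C(1) C(0) = diag(A(1), A(0)), so even and odd indices decouple.
rng = np.random.default_rng(2)
d = 3
A = [rng.standard_normal((d, d)) for _ in range(2)]   # A(0), A(1)
Y = [rng.standard_normal(d), rng.standard_normal(d)]  # Y_0, Y_1
for n in range(2, 11):                                # direct iteration
    Y.append(A[n % 2] @ Y[n - 2])

for k in range(1, 5):
    assert np.allclose(Y[2 * k], np.linalg.matrix_power(A[0], k) @ Y[0])
    assert np.allclose(Y[2 * k + 1], np.linalg.matrix_power(A[1], k) @ Y[1])
```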

Example 5.2 Consider the scalar linear difference equation of the form
where a : N → R is a 2-periodic scalar function of n and x_0, x_1 stand for the initial conditions. Then, the matrix A(n) reduces to the single element a(n), so the unique solution of Eq. (5.1) is given by Proposition 5.1 as x_{2n} = a(0)^n x_0 and x_{2n+1} = a(1)^n x_1. (5.3) This class of equations was studied in [5]; the method used there consists in transforming the equation, which leads to the expressions (5.5)-(5.6). Starting from (5.5)-(5.6), a direct computation implies that x_{2n} = a(0)^n x_0 and x_{2n+1} = a(1)^n x_1. Therefore, the two solutions (5.3) and (5.5)-(5.6) coincide. Now we turn to the case when r = 2 and the period p ≥ 3. In this case, the entries of the matrix B^(2) = C(1)C(0) are given explicitly, and for every k > 2 we have a corresponding recursion. We need to distinguish two cases. When p is even, a straightforward computation shows that B takes a simple form; hence, for every k ≥ 2, the powers B^k follow directly. However, when p is odd, in order to calculate the powers B^k for k ≥ 2, we need to utilize Theorem 3.3 of Sect. 3.2. Indeed, it follows from Theorem 3.3 that, for every k ≥ 1, B^k can be expressed through ρ(k, 2). In addition, we again need to distinguish two cases, k odd or k even, to give the expression of ρ(k, 2). With these results at our disposal, we can express the solution of the matrix equation Y_{n+2} = A(n)Y_n. Indeed, when A(n) is periodic of period p > 2, we have the following result.

Proposition 5.3 Let p ≥ 3 be an even integer. Consider the p-periodic matrix equation Y_{n+2} = A(n)Y_n, n ≥ 0, with the initial conditions vector (Y_1, Y_0)^T. Then, for every n = kp + i (i = 0, · · · , p − 1), the unique solution is given explicitly; in particular, for every i = 3, · · · , p − 2, we have the corresponding expression. Similarly, when the period p is odd, we have to consider two cases: when k is even, the vector solution is given by one expression, and when k is odd, we get another. For more illustration, we propose the following example.
and, if k is odd, we get

Solutions of the matrix equation Y_{n+r} = A(n)Y_n, where A(n) is p-periodic
In this subsection, we apply Algorithm 2 to solve the equation Y_{n+r} = A(n)Y_n, for n ≥ 0, (5.7) where A(n) is a p-periodic matrix (with period p ≥ 2) of order d, and Y_0, · · · , Y_{r−1} stand for the initial conditions. We assume that A(i)A(j) = A(j)A(i) for 0 ≤ i, j ≤ p − 1. In a similar way as before, we can formulate Eq. (5.7) in a matrix form analogous to (5.2). Consider the product of companion matrices in blocks B = C(p − 1) · · · C(0) and, for 1 ≤ m ≤ p − 1, the partial product of companion matrices in blocks B^(m). We point out that for m = p we have B = B^(p). To give the form of the matrix B, we propose to apply Algorithm 2 (Algorithm 1 can also be used here). We need to distinguish three cases. When p = r, a direct computation using Algorithm 2 shows that B = diag(A(p − 1), · · · , A(0)); therefore, for every k ≥ 1, we have B^k = diag(A(p − 1)^k, · · · , A(0)^k). In this case, the solution of the p-periodic matrix equation (5.7) is given by the following proposition.

Proposition 5.5
Consider the p-periodic matrix difference equation (5.7) with the initial conditions vector (Y_{r−1}, · · · , Y_0)^T. Suppose that the period p satisfies p = r. Then, for every n = kp + i (i = 0, · · · , p − 1), the solution of Eq. (5.7) is given by Y_{kp+i} = A(i)^k Y_i; in particular, this covers the three cases n = kp, n = kp + i for i = 1, · · · , p − 2, and n = (k + 1)p − 1. When p < r, in a similar way, employing Algorithm 2 and a straightforward computation, we obtain the form of B. To express the solution of Eq. (5.7) when p < r, we need to compute the powers B^k of the matrix B given above, using Theorem 3.3. Unfortunately, it is not straightforward to derive in general the expression of the matrix polynomial P(S) = det[1_{r×r} ⊗ S − B] = S^r − D_0 S^{r−1} + · · · − D_{r−1} of B. Therefore, we propose to examine the following example, with p = 3 and r = 5.
(5) When k = 5k′ + 4, we have Y_{kp+l+1} = D^{k′+1} Y_{l−2}, for l = 2, 3, 4, 5, 6. Finally, for p > r, we once again need to discuss two cases. The first case is p = kr with k > 1. Following the same approach, again using Algorithm 2, we get B = (B_{i,j})_{1≤i,j≤r}, whose entries are computed explicitly; therefore, for every k ≥ 1, the powers B^k follow. The second case is p = kr + s with s = 1, · · · , r − 1. Then a direct computation gives the entries of the matrix B = (B_{i,j})_{1≤i,j≤r}, where 0_{k×m} denotes the null matrix of order k × m. (3) When k = 3k′ + 2, we have Y_{kp+l+1} = D^{k′+1} Y_{l−3} for l = 3, 4, 5, and Y_{kp+l+1} = D^{k′}(∏ A(j)) Y_l for l = 0, 1, 2, where we denote D = A(3)A(2)A(1)A(0).
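The p = r case of Proposition 5.5 can be verified numerically: iterating Y_{n+r} = A(n)Y_n over one period sends (Y_{n+r−1}, · · · , Y_n) to (A(r−1)Y_{r−1}, · · · , A(0)Y_0), so B = diag(A(p − 1), · · · , A(0)) and Y_{kr+i} = A(i)^k Y_i (the sizes and matrices below are illustrative):

```python
import numpy as np

# Y_{n+r} = A(n) Y_n with p = r: each residue class i modulo r evolves
# independently under A(i), so Y_{k*r + i} = A(i)^k Y_i.
rng = np.random.default_rng(3)
d, r = 2, 3                        # p = r = 3, illustrative sizes
A = [rng.standard_normal((d, d)) for _ in range(r)]   # A(0), ..., A(r-1)
Y = [rng.standard_normal(d) for _ in range(r)]        # Y_0, ..., Y_{r-1}
for n in range(r, 4 * r):          # direct iteration of the recurrence
    Y.append(A[n % r] @ Y[n - r])

for k in range(1, 4):
    for i in range(r):
        assert np.allclose(Y[k * r + i],
                           np.linalg.matrix_power(A[i], k) @ Y[i])
```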

Discussion and concluding remarks
In this paper, we have studied a class of periodic matrix difference equations. While formulating the result on the solutions of this class of equations, we were led to deal with two new problems. The first concerns the expression of the powers of matrices in blocks. To this aim, we proposed a method for computing the powers of matrices in blocks, based on the linear recursive sequences of Fibonacci type in the algebra of square matrices and the generalized Cayley-Hamilton theorem. Here, the combinatorial expression for the linear sequences of Fibonacci type in the algebra of square matrices GL(r, C^{d×d}) and the Kronecker product play a central role. The second problem deals with the computation of the product of companion matrices in blocks. To this end, we developed two recursive algorithms to calculate the entries of the resulting matrix product: Algorithm 1 is an iterative process based on a sequence of matrices, while Algorithm 2 relies on a family of Fibonacci sequences in the algebra of square matrices. General results are established and special cases are considered. To the best of our knowledge, the results of this investigation constitute a pilot study in solving periodic matrix difference equations. It is worth noting that, for reasons of clarity and simplicity, the matrices in the examples illustrating our results are mostly small. However, the general results and algorithms show that our method can handle matrices of large size. On the other hand, programming implementations of the two algorithms are of real interest, both for treating matrices of large size and for studying concrete applications such as the periodic matrix model of Samuelson-Hicks. Partial results have been established, which illustrate that this type of method can be used effectively.
Finally, the recent literature shows that the generalized Cayley-Hamilton Theorem constitutes an important tool for dealing with various applied and theoretical topics. In particular, the generalized Cayley-Hamilton Theorem can be used as a new technique for solving some matrix equations and matrix differential equations (see, for example, [2,7-9,13,14]). As for the periodic matrix model of Samuelson-Hicks, it seems to us that our results and algorithms can also be used effectively for studying some related topics through the generalized Cayley-Hamilton Theorem.