Abstract
This study is devoted to periodic matrix difference equations, approached through their associated products of companion matrices in blocks. Linear recursive sequences in the algebra of square matrices in blocks and the generalized Cayley–Hamilton theorem are used to establish results on the powers of matrices in blocks. Two algorithms for computing finite products of periodic companion matrices in blocks are built. Illustrative examples and applications demonstrate the effectiveness of our approach.
1 Introduction
It is well known that the scalar homogeneous linear difference equations of order \(r\ge 2\), defined by
where the coefficients \(a_1(n), \dots , a_{r}(n)\) are functions of n, occur in several fields of mathematics and applied sciences. Several methods have been provided in the literature for solving Eq. (1.1) (see, for example, [15, 17], and references therein). Recently, the homogeneous linear difference equations (1.1) with periodic coefficients, i.e., \(a_j(n+p)=a_j(n)\), have been solved in [4, 5], using properties of the generalized Fibonacci sequences in the algebra of square matrices. More precisely, in [4], Eq. (1.1) has been studied under its equivalent matrix equation:
where Y(n) and C(n) are given as follows:
In the sequel, we use the notation \(\displaystyle C(n)=C[a_{1}(n),\,\,\, a_{2}(n),\,\,\, \cdots ,\,\,\, a_{r}(n)]_{r\times r}\) for these companion matrices of order r. Since \(\displaystyle Y(n+1)=C(n)\cdots C(1)C(0)Y(0)\), the main problem in studying the matrix equation (1.2) reduces to the study of the product of companion matrices:
In recent years, the product of companion matrices has attracted much attention, because it occurs in various fields of mathematics and applied sciences, such as the Floquet theory of linear difference systems (see [4, 5, 16, 17]). Diverse methods for computing the product of companion matrices have been proposed in the literature. For instance, in [16], the authors developed an explicit formula for the entries of the product of companion matrices, and then applied their results to solve linear difference equations with variable coefficients. Another expression for the product of companion matrices was obtained in [17], based on the study of solutions of non-homogeneous and homogeneous linear difference equations of order N with variable coefficients. Recently, it was shown in [4, 5] that the product of companion matrices plays a central role in investigating a large class of periodic discrete homogeneous difference equations via generalized Fibonacci sequences. Moreover, via generalized Fibonacci sequences, several interesting and relevant problems remain to be examined.
In this paper, we aim to study the linear difference matrix equations defined by
where \(Y_0, \cdots , Y_{r-1}\) are in \(\mathbb {C}^{d}\) and stand for the initial values, and the coefficients \(A_1(n), \cdots , A_{r}(n)\) are square matrices in \(\mathbb {C}^{d\times d}\), the algebra of square matrices of order d with complex coefficients, representing p-periodic matrix functions of n, that is, \(A_{j}(n+p)=A_j(n)\) for every \(n\ge 0\), where \(p=\min \{N\in \mathbb {N},\, N\ge 1 : A_j(n+N)=A_j(n) \text{ for } j=1,..., r\, \text{ and } n\ge 0\}\).
The class of discrete linear matrix equations (1.3) appears in many applied fields, such as economics, population dynamics, and signal processing. For instance, periodic matrix models are often used to study seasonal temporal variation of structured populations (see [6] for example). They can also occur in many practical control systems (see [20] for example).
In our exploration, we study properties of the periodic matrix difference equations (1.3) through their close relation with products of companion matrices in blocks. First, we formulate the main result on the solutions of the linear matrix difference equation (1.3) in terms of products of companion matrices in blocks and powers of a matrix in blocks. As a matter of fact, we utilize the generalized Cayley–Hamilton theorem to derive a new result that allows us to compute the powers of matrices in blocks. Moreover, we outline a recursive method leading to two algorithms for computing finite products of companion matrices in blocks. To highlight the importance of our results, special cases, significant examples and applications are provided.
The outline of this study is as follows. Section 2 is devoted to some basic properties of the periodic matrix difference equations (1.3), where the product of periodic companion matrices in blocks is considered. Section 3 concerns the study of the powers of matrices in the algebra of square matrices in blocks. More precisely, using the generalized Cayley–Hamilton theorem and linear recursiveness in the algebra of square matrices in blocks, we give an explicit expression for the powers of a square matrix in blocks. Here, the Kronecker product (or tensor product) of matrices plays a central role. In Sect. 4, we develop two algorithms for computing the finite product of companion matrices in blocks, where a recursive sequence of matrices is considered. In Sect. 5, gathering the results of Sects. 2, 3 and 4, we employ them to examine some special classes of the periodic matrix difference equations (1.3).
2 Periodic matrix difference equations: general setting
In the same way to the scalar case, the matrix equation associated to Eq. (1.3) is given by
where \(Z(n)=(Y(n+r-1), Y(n+r-2),...,Y(n))^\top \in \mathbb {C}^{dr}\) and C(n) is a matrix of order dr, i.e., in \(\mathbb {C}^{dr\times dr}\) given by
where \(\mathbf {1}_{d\times d}\) and \(\Theta _{d\times d}\) are, respectively, the identity matrix and the zero matrix of order \({d\times d}\). We observe that the matrix C(n) is a companion matrix in blocks. In the sequel, we use the notation \(\displaystyle C(n)=C[A_{1}(n),\,\,\, A_{2}(n),\,\,\, \cdots ,\,\,\, A_{r}(n)]_{r\times r}\) for these companion matrices in blocks of order r. As in the scalar case, the main problem in studying the matrix equation (2.1) reduces to the study of the following product of companion matrices in blocks:
Since \(A_j(n+p)=A_j(n)\), for every j (\(1\le j\le r\)) and \(n\ge 0\), we then infer that \(C(n+p)=C(n)\), for every \(n\ge 0\), where \(p\ge 1\) is the period. Here we are concerned with the finite product of companion matrices in blocks. It is worthwhile to point out that this class of companion matrices in blocks arises in various mathematical and applied fields (see, for example, [18]). In this work, we will emphasize its key role for providing the solutions of Eq. (2.1).
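As a concrete illustration of this block structure, the companion matrix in blocks can be assembled numerically. The following sketch (in Python with NumPy; `block_companion` is a hypothetical helper, not from the paper) builds C(n) from its first block row \(A_1(n),\dots ,A_r(n)\), with identity blocks \(\mathbf {1}_{d\times d}\) on the subdiagonal:

```python
import numpy as np

def block_companion(blocks):
    """Assemble C = C[A_1, ..., A_r]: the blocks A_j fill the first block
    row, identity blocks sit on the subdiagonal, zero blocks elsewhere."""
    r, d = len(blocks), blocks[0].shape[0]
    C = np.zeros((r * d, r * d))
    for j, A in enumerate(blocks):
        C[:d, j * d:(j + 1) * d] = A                      # first block row
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)  # subdiagonal 1_{d x d}
    return C

A1 = np.array([[1., 1.], [0., 1.]])
A2 = np.array([[2., 0.], [0., 3.]])
C = block_companion([A1, A2])                    # r = 2, d = 2, order rd = 4
assert np.array_equal(C[2:, :2], np.eye(2))      # identity block in position (2,1)
assert np.array_equal(C[2:, 2:], np.zeros((2, 2)))
```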
Let us consider the matrix equations (2.1) and (2.2) related to the periodic matrix difference equation (1.3). Suppose that \(n=kp\). Then we have
where \(Z(kp)=(Y_{kp+r-1},Y_{kp+r-2},\cdots ,Y_{kp})^\top \) and \(B=C(p-1)\cdots C(0)\). Due to the periodicity condition, we show that Eq. (2.1) takes the form
where \(Z(0)=(Y_{r-1},\cdots ,Y_{0})^\top \). For \(n=kp+1\), we get
and for \(n=kp+p-1\), we have
Thus, for every \(n \equiv i[p]\), i.e., \(n=kp+i\) (\(0\le i\le p-1\)), the solution of the matrix equations (2.1) and (2.2) related to the periodic matrix difference equation (1.3) is given as follows.
Theorem 2.1
Consider the periodic matrix equations (2.1) and (2.2) of period \(p\ge 2\), with the initial conditions vector \(Z(0)=(Y_{r-1},\cdots ,Y_{0})^\top \). Then, the solution Z(n) of (2.1) and (2.2) is given by
where \(B=C(p-1) \cdots C(0)\).
Theorem 2.1 shows that there is a close link between periodic matrix difference equations and products of companion matrices in blocks. More precisely, expression (2.3) involves a finite product of companion matrices in blocks and the powers of the matrix \(B=C(p-1) \cdots C(0)\), which is itself a finite product of companion matrices in blocks.
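Theorem 2.1 can be checked numerically: iterating \(Z(n+1)=C(n)Z(n)\) with p-periodic C(n) must agree with the closed form \(Z(kp+i)=C(i-1)\cdots C(0)\,B^{k}Z(0)\). A minimal sketch follows, with random matrices standing in for the companion matrices in blocks (the decomposition uses only periodicity, not the companion structure):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
p, m = 3, 4                                # period p; m plays the role of dr
C = [rng.standard_normal((m, m)) for _ in range(p)]   # C(0), ..., C(p-1)
Z0 = rng.standard_normal(m)

B = reduce(lambda acc, Cn: Cn @ acc, C, np.eye(m))    # B = C(p-1)...C(0)

def Z_iter(n):
    """Iterate Z(j+1) = C(j mod p) Z(j) up to Z(n)."""
    Z = Z0
    for j in range(n):
        Z = C[j % p] @ Z
    return Z

def Z_closed(k, i):
    """Closed form of Theorem 2.1: Z(kp+i) = C(i-1)...C(0) B^k Z(0)."""
    prefix = reduce(lambda acc, Cn: Cn @ acc, C[:i], np.eye(m))
    return prefix @ np.linalg.matrix_power(B, k) @ Z0

for k in range(4):
    for i in range(p):
        assert np.allclose(Z_iter(k * p + i), Z_closed(k, i))
```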
To establish further results concerning the explicit representation of the solutions of the periodic matrix equations (2.1) and (2.2), we are led to study the two following problems. The first is related to the powers of matrices in blocks, and the second concerns the finite product of companion matrices in blocks. For the first problem, our approach revolves around the generalized Cayley–Hamilton theorem, whereas for the second we build two recursive algorithms for computing this finite product of companion matrices in blocks.
3 Linear recursive relation in the algebra of square matrices, generalized Cayley–Hamilton theorem and powers of matrices in blocks
Recently, great interest has been brought to the use of the Kronecker product in the algebra of square matrices. Indeed, the techniques based on this operation have proven to be very important for studying several problems in various fields of mathematics and other exact or applied sciences, especially in the resolution of matrix equations and matrix differential equations (see, for example, [1, 2, 12] and references therein).
3.1 Kronecker product and linear recursive relation in the algebra of square matrices in blocks
In this subsection, we are interested in the use of the matrix Kronecker product for studying some linear recursive relations in the algebra of square matrices in blocks, and their use for the computation of the powers of matrices in blocks through the generalized Cayley–Hamilton theorem. In fact, using the Kronecker product, we extend the results of [3] to the algebra of square matrices in blocks.
For the sake of clarity, let us recall that the Kronecker product can be defined for two matrices of arbitrary size over any ring. In the sequel of this study, we consider only square matrices whose entries are in the field of real numbers \(\mathbb {R}\) or complex numbers \(\mathbb {C}\) (see, for example, [11, 19]). Let us start by recalling the definition of the Kronecker product. To this end, let \(\mathbb {C}^{d\times d}\) and \(\mathbb {C}^{r\times r}\) be the algebras of square matrices of order \(d\ge 1\) and \(r\ge 1\), respectively.
Definition 3.1
The Kronecker product of the matrix \(A=(a_{ij})_{1\le i,j\le r}\in \mathbb {C}^{r\times r}\) with the matrix \(B=(b_{ij})_{1\le i,j\le d}\in \mathbb {C}^{d\times d}\) is defined as follows:
Note that there are other names for the Kronecker product, such as tensor product, direct product or left direct product (see, for example, [11]). For more details, an interesting overview of the Kronecker product is given by K. Schnack in [19]. The Kronecker product has several important algebraic properties; we recall those we will use in this section. Let us first remark that for \(r=1\), we have \(A=a_{1,1}\in \mathbb {C}^{1\times 1}=\mathbb {C}\); thus the tensor product (3.1) takes the form
which shows that the tensor product coincides with the usual multiplication of matrices by scalars. Equivalently, the tensor product can be viewed as an extension of the usual multiplication of matrices by scalars.
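Definition 3.1 and the \(r=1\) remark can be illustrated with NumPy's `np.kron`, which implements exactly this blockwise product:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])             # A in C^{r x r}, r = 2
B = np.array([[0, 1], [1, 0]])             # B in C^{d x d}, d = 2
K = np.kron(A, B)                          # block (i, j) of K equals a_ij * B

assert K.shape == (4, 4)
assert np.array_equal(K[0:2, 2:4], 2 * B)  # block (1, 2) is a_12 * B = 2B

# r = 1: the Kronecker product reduces to scalar multiplication
a = np.array([[3.0]])
assert np.array_equal(np.kron(a, B), 3.0 * B)
```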
Expression (3.1) shows that \( A\otimes B\) is an element of \(GL(r, \mathbb {C}^{d\times d})\), the algebra of square matrices of order \(r\ge 1\), with coefficients in \(\mathbb {C}^{d\times d}\). Moreover, we can also see that \( A\otimes B\) can be identified with an element of \(\mathbb {C}^{rd\times rd}\), the algebra of square matrices of order rd, with coefficients in \(\mathbb {C}\). Therefore, we have the following known isomorphisms of algebras:
In the sequel, we will use the notation \(\mathbb {M}_{_{d\odot r}}\) to designate without distinction the previous notations of \(\mathbb {C}^{r\times r}\otimes \mathbb {C}^{d\times d}\).
A method for computing the powers of matrices in \(\mathbb {C}^{r\times r}\), the algebra of square matrices, has been considered in [3]. This method is based on linear recursive sequences of Fibonacci type in the algebra of square matrices \(\mathbb {C}^{r\times r}\), and can be extended here as follows. More precisely, for computing the powers of a matrix in blocks, we introduce the notion of linear recursive sequences of Fibonacci type in the algebra of square matrices \(\mathbb {M}_{_{d\odot r}}=GL(r,\mathbb {C}^{d\times d})\). Let \(A_{0}\), \(A_{1}\), \(\cdots \) , \(A_{r-1}\) be a family of commuting matrices in \(GL(r,\mathbb {C}^{d\times d})\), and \(B_{0},B_{1}, \cdots ,B_{r-1}\) \((r \ge 2)\) a given sequence in \(GL(r,\mathbb {C}^{d\times d})\). Let \(\{Y_{n}\}_{n\ge 0}\) be the sequence in \(GL(r,\mathbb {C}^{d\times d})\) defined by \(Y_{0} = B_{0}, Y_{1} = B_{1}, \cdots , Y_{r-1} = B_{r-1}\) and the matrix difference equation of Fibonacci type,
In other words, the sequence \(\{Y_{n}\}_{n\ge 0}\) is called a generalized Fibonacci sequence, where \(A_{0}\), \(A_{1}\), \(\cdots \) , \(A_{r-1}\) are the coefficients and \(Y_{0}, Y_{1}, \cdots , Y_{r-1}\) stand for the initial conditions. As shown in [3], we have
where \(W_{s} = A_{r-1}B_{s} +\cdots +A_{s}B_{r-1}\) for \(s = 0, 1,\cdots , r-1\) and
with \(\rho (r,r) = \mathbf {1}_{r\times r}\otimes \mathbf {1}_{d\times d}=diag(\mathbf {1}_{d\times d}, \, ...,\, \mathbf {1}_{d\times d})\) (the r-by-r block diagonal matrix whose main diagonal entries are all \(\mathbf {1}_{d\times d}\)) and \(\rho (n,r) = \Theta _{r\times r}\otimes \Theta _{d\times d}\), if \( n < r\).
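Such a generalized Fibonacci sequence of matrices is straightforward to evaluate recursively. The sketch below assumes the Fibonacci-type recursion has the standard form \(Y_{m}=A_{0}Y_{m-1}+\cdots +A_{r-1}Y_{m-r}\) (consistent with the coefficients and initial conditions described above); with \(1\times 1\) blocks and \(A_0=A_1=1\) it reproduces the classical Fibonacci numbers:

```python
import numpy as np

def gen_fibonacci(A, Y_init, n):
    """Y_n for the Fibonacci-type recursion (assumed standard form)
    Y_m = A[0] Y_{m-1} + ... + A[r-1] Y_{m-r},
    with initial conditions Y_init = [Y_0, ..., Y_{r-1}]."""
    r = len(A)
    Y = list(Y_init)
    for m in range(r, n + 1):
        Y.append(sum(A[i] @ Y[m - 1 - i] for i in range(r)))
    return Y[n]

# Sanity check: 1x1 blocks with A_0 = A_1 = 1 give the classical Fibonacci numbers
F10 = gen_fibonacci([np.eye(1), np.eye(1)],
                    [np.zeros((1, 1)), np.ones((1, 1))], 10)
assert F10[0, 0] == 55.0
```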
The preceding expressions (3.2) and (3.3), combined with the generalized Cayley–Hamilton theorem, are useful for computing the powers of the matrix in blocks \(B=C(p-1) \cdots C(0)\). For this purpose, we employ the result on the generalized Fibonacci sequence, which allows us to obtain a tractable expression for the powers of a block matrix A in \(GL(r, \mathbb {C}^{d\times d})\).
3.2 Generalized Cayley–Hamilton theorem and powers of companion matrices in blocks
We first recall the generalized Cayley–Hamilton Theorem for matrices given in [13, 14]. Let us consider the square matrix in blocks:
where the \(A_{ij}\in \mathbb {C}^{d\times d}\) commute pairwise, i.e., \(A_{ij}A_{kl}=A_{kl}A_{ij}\) for all i, j, k, l (\(1\le i,j,k,l\le r\)). Consider the Cayley–Hamilton theorem for block matrices applied to the matrix A defined by (3.4). Following Kaczorek [13, Theorem 4] (for more details see also [14]), the matrix characteristic polynomial of A is given by
Definition 3.2
The matrix characteristic polynomial of the square matrix in blocks A, defined by (3.4), is
where \(S\in \mathbb {C}^{d\times d}\) is the matrix (block) eigenvalue of A, and \(\otimes \) denotes the Kronecker product of matrices.
The matrix determinant (3.5) is obtained by developing the determinant of the matrix with its commuting blocks treated as scalar entries (see [13, 14]). More precisely, it was shown in [13, 14] that the matrices \(D_i\) (\(i=0,\cdots , r-1\)) are obtained by developing the determinant of the matrix \([\mathbf {1}_{r\times r}\otimes S-A]\), considering its blocks as scalar entries. Then, we have
We now turn our attention to the theory of generalized Fibonacci sequences, in order to extend some properties established for matrices with scalar coefficients to the case of matrices in blocks. Equation (3.6) leads to
for every \(n\ge r\). We observe that the sequence \(\{A^n\}_{n\ge 0}\) is nothing but a generalized Fibonacci sequence of order r, with matrix coefficients \( \mathbf {1}_{r\times r}\otimes D_{r-i}\) (\(0\le i\le r-1\)) and initial conditions \(A^{0},A^{1},\cdots , A^{r-1}\). Proceeding in an entirely similar way to the scalar-coefficient case in [3], we obtain the following result for block matrices.
Theorem 3.3
Let A be a matrix in blocks and \( P(S)=det[\mathbf {1}_{r\times r}\otimes S - A]=S^r-D_0S^{r-1}+\cdots -D_{r-1}\) be the matrix characteristic polynomial of A. Then, we have
for any \(n \ge r\), where \(W_{s}=(\mathbf {1}_{r\times r}\otimes D_{r-1})A^s+\cdots +(\mathbf {1}_{r\times r}\otimes D_{s})A^{r-1}\) for \(s=0,1,\cdots ,r-1\), and \(\rho (n,r)\) is given by \(\rho (r,r)=\mathbf {1}_{r\times r}\otimes \mathbf {1}_{d\times d}\), with \(\rho (n,r)=\mathbf {1}_{r\times r}\otimes \Theta _{d\times d}\) for \(n<r\), and
To the best of our knowledge, the result of Theorem 3.3 does not appear in the current literature. Compared with related results on this subject, the expression established here is tractable and can serve as a key tool for resolving diverse questions, notably those on similar matrix equations (see, for example, [2, 7,8,9, 13, 14]).
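The generalized Cayley–Hamilton relation underlying Theorem 3.3 can be verified numerically for \(r=2\): with pairwise-commuting blocks, the block-determinant coefficients are \(D_0=A_{11}+A_{22}\) and \(D_1=A_{11}A_{22}-A_{12}A_{21}\), and A must annihilate its matrix characteristic polynomial. A minimal sketch, using a concrete choice of commuting blocks:

```python
import numpy as np

d = 2
A11 = 2 * np.eye(d)
A12 = np.array([[1., 1.], [2., 1.]])
A21 = np.zeros((d, d))
A22 = -2 * np.eye(d)                      # the four blocks commute pairwise

A = np.block([[A11, A12], [A21, A22]])
D0 = A11 + A22                            # block "trace"
D1 = A11 @ A22 - A12 @ A21                # block "determinant"

# Generalized Cayley-Hamilton: A^2 - (1_{2x2} (x) D0) A + (1_{2x2} (x) D1) = 0
R = A @ A - np.kron(np.eye(2), D0) @ A + np.kron(np.eye(2), D1)
assert np.allclose(R, 0)
```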
To shed more light on the content of Theorem 3.3, we examine the following special situation. Suppose that \(r=2\) and \( A=\begin{pmatrix} A_{11} &{} A_{12} \\ \Theta _{d\times d} &{} A_{22} \end{pmatrix}\), where \( A_{11},A_{12}\) and \(A_{22}\) are matrices of order d satisfying the commutativity condition \(A_{ij}A_{kl}=A_{kl}A_{ij}\) (\(1\le i,j,k,l\le 2\)). Then, we have \( P(S)=det[\mathbf {1}_{2\times 2}\otimes S - A]=S^2-(A_{11}+A_{22})S + A_{11}A_{22}\). Employing expressions (3.7) and (3.8), we obtain
for every \(n\ge 1\), where
Moreover, in this case, we have \( \rho (n,2)=\sum _{k_0+2k_1=n-2} \frac{(k_0+k_{1})!}{k_0!k_1!}(\mathbf {1}_{2\times 2}\otimes (A_{11}+A_{22}))^{k_0} (\mathbf {1}_{2\times 2}\otimes (- A_{11}A_{22}))^{k_{1}}\). Now, if we suppose that \(A_{11}=-A_{22}\), we obtain the following explicit expressions for \( \rho (n,2)\):
Therefore, we have the following proposition.
Proposition 3.4
Under the preceding data, we have
As a numerical application of Proposition 3.4, consider the matrix \( A=\begin{pmatrix} A_{11} &{} A_{12} \\ \Theta _{2\times 2} &{} A_{22} \end{pmatrix}\), where \(A_{11}=\begin{pmatrix} 2&{} 0 \\ 0 &{} 2 \end{pmatrix}\), \(A_{12}=\begin{pmatrix} 1&{} 1\\ 2 &{}1 \end{pmatrix}\), \(A_{21}= \Theta _{2\times 2}\) (null matrix of order 2) and \(A_{22}=-2\times \mathbf {1}_{2\times 2}\). Then, a direct computation shows that
where \(A_{22}(k)=\begin{pmatrix} 2&{}2 \\ 1 &{}2 \end{pmatrix}\).
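With the data of this numerical application, a direct check shows that \(A^{2}=4\cdot \mathbf {1}_{4\times 4}\), so the powers split by parity: \(A^{2k}=4^{k}\,\mathbf {1}_{4\times 4}\) and \(A^{2k+1}=4^{k}A\). A short verification of this parity pattern:

```python
import numpy as np

A11 = 2 * np.eye(2)
A12 = np.array([[1., 1.], [2., 1.]])
A22 = -2 * np.eye(2)                      # note A11 = -A22, as in Proposition 3.4
A = np.block([[A11, A12], [np.zeros((2, 2)), A22]])

assert np.allclose(A @ A, 4 * np.eye(4))  # A^2 = 4 * identity of order 4
for k in range(1, 5):
    assert np.allclose(np.linalg.matrix_power(A, 2 * k), 4.0**k * np.eye(4))
    assert np.allclose(np.linalg.matrix_power(A, 2 * k + 1), 4.0**k * A)
```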
Proposition 3.4 and its numerical application illustrate the efficient role of Theorem 3.3. Moreover, our main goal is to apply Theorem 3.3 to calculate the powers of the matrix \(B=C(p-1) \cdots C(0)\), with the aim of providing solutions of the periodic matrix difference equations (2.1) and (2.2) in some special cases, which will be further exploited in Sect. 5.
4 Two algorithms for the product of companion matrices in blocks
4.1 Algorithm 1: product of block companion matrices
In this section, we develop the first algorithm for computing the finite product of companion matrices in blocks. Recall that this product appears in the solutions of the matrix expressions (2.3) of Theorem 2.1.
Let us consider the companion matrix in blocks (2.2), namely,
where \(A_{1}(m), \cdots , A_{r}(m)\) are matrices of order d. We shall give an explicit formula for the matrix
The main idea behind this algorithm is to build an iterative formula that recursively calculates the entries \(B_{ij}^{(m)}\) of the matrix \(B^{(m)}\), starting from the entries of the matrix C(1). More precisely, this recursive process is based on a sequence of matrices \(D^{(k)}(m)\), whose entries are constructed recursively from the given sequence \(A_{j}(m)\). To this end, we set
where \(B_{ij}^{(m)}\) (\(1\le i,j\le r\)) are matrices of order d. The steps of our first algorithm are as follows. Let \(D_{j}^{(0)}(m)=\mathbf {1}_{d\times d}\), \(D_{j}^{(1)}(m)=A_{j}(m)\) and \(D_{j+k}^{(l)}(m)=\Theta _{d\times d}\) when \(j+k>r\). Then, we have
Let us define \(D_{j}^{(2)}(m)\) by the following relation:
Thus, by substituting \(D_{j}^{(2)}(m)\) in the last formula of \(B_{ij}^{(m)}\), we obtain
Now, let us define \(D_{j}^{(3)}(m)\) by taking
hence, we show that \(B_{ij}^{(m)}\) is given by
Using the same recurrent process above, we obtain
We can continue this process by taking
and thus, we get
Finally, by recurrence we have the following result.
Theorem 4.1
Consider the block matrix C(m) given by (2.2), where \(A_{1}(m), \cdots , A_{r}(m)\) are matrices of order d. Then the entries \(\{B_{ij}^{(m)}\}_{1\le i,j\le r}\) of the product of the partitioned companion matrices \(B^{(m)}=C(1)\cdots C(m)\) are given as follows.
For every \(m>r\), we have
For every \( m\le r\), we have
where
and for every \(k>2\), we set
Theorem 4.1 shows the main role of the bi-indexed relations (4.1) and (4.2), for generating the \(B_{ij}^{(m)}\) from the entries of the matrix \(B^{(1)}=C(1)\), using the matrix \(D_{j}^{(k)}(m)\).
It should be made clear that, since the product of matrices is not commutative, the order of the matrices in formulas (4.1) and (4.2) needs to be respected.
To further illustrate Theorem 4.1, we examine the following special case.
Proposition 4.2
Consider the companion matrix in blocks:
Then, for every \(m\ge 2\), the entries of the matrix
are given as follows:
where
That is, for \(m=2\), by a straightforward computation, we get
where \(D_{1}^{(0)}(2)=D_{2}^{(0)}(2)=\mathbf {1}_{d\times d}\), \(D_{1}^{(1)}(2)=A_{1}(2)\) and \(D_{2}^{(1)}(2)=A_{2}(2)\). In addition, for \(m=3\), we obtain
Thence, we obtain
Consider the following numerical example. Suppose that
where \(\Theta _{2\times 2}\) is the null matrix of order 2 and \(\mathbf {1}_{2\times 2}\) is the identity matrix of order 2, \(C_{11}(1)=\begin{pmatrix} 1&{} 0 \\ 0 &{} 2 \end{pmatrix}\), \(C_{12}(1)=\begin{pmatrix} 1&{} 1 \\ 1 &{} 1 \end{pmatrix}\), \(C_{11}(2)=\begin{pmatrix} 1&{} -1 \\ 0 &{} 0 \end{pmatrix}\), \(C_{11}(3)=\begin{pmatrix} 2&{} 1 \\ 1 &{} 1 \end{pmatrix}\) and \(C_{12}(3)=\begin{pmatrix} 1&{} 0 \\ -1 &{} 0 \end{pmatrix}\). By applying the formula above, we obtain the entries of the matrix \(B^{(3)}=C(1)C(2)C(3)\) as follows:
where \(B_{11}^{(3)}=\begin{pmatrix} 5 &{} 2\\ 4 &{} 4 \end{pmatrix}\), \(B_{12}^{(3)}=\begin{pmatrix} -2 &{} 0 \\ 0 &{} 0 \end{pmatrix}\), \(B_{21}^{(3)}=\begin{pmatrix} 1 &{} 0\\ 0 &{} 1 \end{pmatrix} \) and \( B_{22}^{(3)}=\begin{pmatrix} -2 &{} 0 \\ 0 &{} 0 \end{pmatrix}\).
Now, we turn to the companion matrix of order r, related to Eqs. (1.1) and (1.2), defined by
where \(a_{j}: \mathbb {N} \rightarrow \mathbb {R}\), \(1\le j\le r\), are scalar functions of m. Since our method is recursive, it naturally yields explicit formulas for the entries \(\{\alpha _{ij}^{(m)}\}_{1\le i,j\le r}\) of the product of companion matrices \(B^{(m)}=C(1) \cdots C(m)\). Proceeding analogously to Theorem 4.1, we obtain a new expression for \(\alpha _{ij}^{(m)}\), given by the following corollary.
Corollary 4.3
Let \(\alpha _{ij}^{(m)}\) be the (i, j)-entry of the product \(B^{(m)}\). Then, for \(m>r\), we have
and for \(m\le r\),
where
and for every \(k>2\), we set
To shed more light on the result of Corollary 4.3, we study the case \(m=3\). Let \( \{a_{1}(1), a_{2}(1), a_{3}(1), a_{1}(2), a_{2}(2), a_{3}(2), a_{1}(3), a_{2}(3) , a_{3}(3)\}\) be a set of real or complex numbers. Consider the following three companion matrices:
Applying the result of Corollary 4.3, a direct computation leads to get
where \(\alpha _{11}=a_{1}(3) a_{1}(2)a_{1}(1)+a_{1}(3)a_{2}(1)+a_{1}(2)a_{1}(1)+a_{3}(1)\), \(\alpha _{12}= a_{2}(3)a_{1}(2)a_{1}(1)+a_{2}(3)a_{2}(1)+a_{3}(2)a_{1}(1)\) and \(\alpha _{13}=a_{3}(3)a_{1}(2)a_{1}(1)+a_{3}(3)a_{2}(1)\). We illustrate this situation by the following numerical application.
Example 4.4
A straightforward computation of the following product of companion matrices, \( B^{(3)} =\begin{pmatrix} -1 &{} 2 &{} 3 \\ 1 &{} 0 &{}0\\ 0 &{} 1 &{}0 \end{pmatrix} \begin{pmatrix} 2 &{} 2 &{} -1 \\ 1 &{} 0 &{}0\\ 0 &{} 1 &{}0 \end{pmatrix} \begin{pmatrix} 4 &{}1 &{} 2 \\ 1 &{} 0 &{}0\\ 0 &{} 1 &{}0 \end{pmatrix}\), permits us to obtain \(B^{(3)} =\begin{pmatrix} 1 &{} 1 &{} 0\\ 10 &{} 1 &{}4\\ 4&{} 1 &{}2 \end{pmatrix}.\) Meanwhile, the usefulness of this algorithm becomes apparent for large m, where the computation becomes heavy and infeasible by hand.
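The product in Example 4.4 is easy to confirm by direct multiplication:

```python
import numpy as np

C1 = np.array([[-1, 2, 3], [1, 0, 0], [0, 1, 0]])
C2 = np.array([[ 2, 2, -1], [1, 0, 0], [0, 1, 0]])
C3 = np.array([[ 4, 1,  2], [1, 0, 0], [0, 1, 0]])

B3 = C1 @ C2 @ C3                         # B^(3) from Example 4.4
assert np.array_equal(B3, np.array([[1, 1, 0], [10, 1, 4], [4, 1, 2]]))
```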
4.2 Algorithm 2: product of companion matrices in blocks
In this section, we provide another recursive algorithm to calculate the entries of the matrix \(B^{(m)}=C(m)\cdots C(1)=C(m)B^{(m-1)}\). Our approach rests on the techniques of generalized Fibonacci sequences in the algebra of square matrices in blocks, given in Sect. 3.2.
We set
where \(A_{1}(k), \cdots , A_{r}(k)\) are matrices of order d, and
To express recursively the entries \(B_{ij}^{(m)}\) of the matrix \(B^{(m)}\), we consider a family of generalized Fibonacci sequences in the algebra \(GL(r, \mathbb {C}^{d\times d})\), defined as follows.
Definition 4.5
Let \(1\le s \le m\). We consider the r families \(\{Y_{n}^{(k)}(s)\}_{n\ge 0}\), \(1\le k \le r\), of generalized Fibonacci sequences defined in \(GL(r, \mathbb {C}^{d\times d})\) by
with mutually different sets of initial conditions defined as follows. For \(s=1\), the initial conditions of the sequence \(\{Y_{n}^{(k)}(1)\}_{n\ge 0}\) are given by
For \(s\ge 2\) and \(1 \le k \le r\), the initial conditions of the sequence \(\{Y_{n}^{(k)}(s)\}_{n\ge 0}\) are related to those of \(\{Y_{n}^{(k)}(s-1)\}_{n\ge 0}\), as follows:
Therefore, a straightforward computation shows that we can rewrite the matrix \(B^{(m-1)}\) in the form:
Thence, employing the recursive relation (4.3) satisfied at order r by \(Y_n^{(k)}(m)\), we get
By induction, we observe that for \(m<r \,\,(m\ge 1)\), we obtain
For \(m\ge r\), we observe that by induction, we get
for every \(1\le i,j\le r\). Hereafter, we derive that
The main idea here is to take advantage of the fact that \(\{Y_{n}^{(k)}(s)\}_{n\ge 0}\) are r-generalized Fibonacci sequences for \(1\le k \le r\). Indeed, we have established that the entries of the matrix \(B^{(m)}\) are obtained by considering only the initial conditions defined by (4.4) and (4.5), and the terms of the Fibonacci sequence \(Y_{r}^{(j)}(m-j+1)\) (\(1\le j\le r\)) once we reach the order r.
We can observe that for the two preceding algorithms, the commutativity condition \(A_j(k)A_i(k)=A_i(k)A_j(k)\) is not necessary.
5 Study of the p-periodic matrix difference equations in blocks: some special cases
Let us first recall that, to solve the discrete linear matrix equation (1.3) of order r, the first step consists in transforming it into a discrete linear matrix equation (2.1) of order 1. Thus, the resolution of Eq. (1.3) is equivalent to that of Eq. (2.1), through the computation of the powers of the companion matrix in blocks (2.2). Therefore, the solution of (1.3) is obtained via Theorems 2.1 and 3.3 and Algorithms 1 and 2. This procedure will be illustrated in this section.
More precisely, in this section we make use of all the material provided in the preceding sections to explore some special cases of the p-periodic matrix difference equations in blocks. Particular cases are treated and examples are given, to make this study more accessible.
5.1 Solutions of the matrix equation \(Y_{n+2}=A(n)Y_{n}\), where A(n) is p-periodic
Consider the periodic matrix difference equation:
where A(n) is a p-periodic (with period \(p\ge 2\)) square matrix of order d, and \(Y_{0},Y_{1}\) stand for the initial conditions. We assume that \(A(i)A(j)=A(j)A(i)\) for \(0\le i,j\le p-1\). Equation (5.1) can be written in the following matrix form:
where
It follows that C(n) is p-periodic, since A(n) is p-periodic. We consider the matrix \(B=C(p-1)C(p-2)\cdots C(1)C(0)\). Thus, employing Theorem 2.1, we need to distinguish the two cases \(p=2\) and \(p>2\). If \(p=2\), then for every \(n=2k\) (\(k\ge 1\)), the matrix equation (5.2) takes the form:
where \(Z(2k)=(Y_{2k+1},Y_{2k})^\top \), \(Z(0)=(Y_{1},Y_{0})^\top \) is the vector of initial conditions, and \(B=C(1)C(0)\). In addition, if \(p>2\), then for every \(n=kp+i\) (\(k\ge 1\), \(0 \le i \le p-1\)), the matrix equation (5.2) takes the form:
where \(Z(kp+i+1)=(Y_{kp+i+1},Y_{kp+i})^\top \) and \(Z(0)=(Y_{1},Y_{0})^\top \) is the vector of initial conditions.
We start by giving the expression of B in terms of \(A(p-1),\cdots , A(1),A(0)\), using Algorithm 1 (see Sect. 4.1). For the sake of clarity, we consider the case \(r=2\). For \(p=2\), a straightforward application of Algorithm 1 shows that
where the entries of the matrix B are given by
where \(D_{1}^{(0)}(2)=\mathbf {1}_{d\times d}\), \(D_{1}^{(1)}(2)= \Theta _{d\times d}\) and \(D_{2}^{(1)}(2)=A(0)\). Thus, when the matrix A(n) is 2-periodic, the solutions of the 2-periodic matrix equation (5.1) are given by the following proposition.
Proposition 5.1
The unique solution of the 2-periodic matrix difference equation \(Y_{n+2}=A(n)Y_{n},\;\ n\ge 0\) (A(n) is 2-periodic), with the prescribed initial conditions \(Y_{0}\) and \(Y_{1}\), is given by
Example 5.2
Consider the scalar linear difference equation of the form
where \(a : \mathbb {N} \rightarrow \mathbb {R}\) is a 2-periodic scalar function of n and \(x_{0}\), \(x_{1}\) stand for the initial conditions. Then, the matrix A(n) reduces to the single entry a(n); thus the unique solution of Eq. (5.1) is given by Proposition 5.1 as follows:
This class of equations has been studied in [5]. The method used consists in transforming equation (5.1) into the equivalent linear difference equation with constant coefficients,
where \(c_{0}\), \(c_{1}\) are the coefficients of the characteristic polynomial \(P(z)=z^{2}-(a(0)+a(1))z+a(0)a(1)=(z-a(0))(z-a(1))\) of the matrix \(B=C(1)C(0)\). Using the formulas of Corollary 4.3, we obtain \(B=\begin{pmatrix} \alpha _{11}^{(2)} &{} \alpha _{12}^{(2)} \\ \alpha _{21}^{(2)} &{} \alpha _{22}^{(2)} \end{pmatrix} =\begin{pmatrix} a(1) &{} 0 \\ 0 &{} a(0) \end{pmatrix}\). Therefore, the solutions of Eq. (5.4) are given in [5, Proposition 3.6] as follows:
In addition, starting from (5.5) and (5.6), a direct computation implies that \(x_{2n}=a(0)^{n}x_{0}\) and \(x_{2n+1}=a(1)^{n}x_{1}\). Therefore, the two solutions (5.3) and (5.5)–(5.6) coincide.
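The closed form of Example 5.2 can be checked against direct iteration of \(x_{n+2}=a(n)x_{n}\) (the values of a(0), a(1) and the initial conditions below are illustrative choices, not from the paper):

```python
# x_{n+2} = a(n) x_n with a 2-periodic coefficient sequence a(n)
a = [3.0, 5.0]                  # illustrative values of a(0), a(1)
x0, x1 = 2.0, 7.0               # initial conditions

x = [x0, x1]
for n in range(20):
    x.append(a[n % 2] * x[n])

# Closed form: x_{2n} = a(0)^n x_0 and x_{2n+1} = a(1)^n x_1
for n in range(10):
    assert x[2 * n] == a[0] ** n * x0
    assert x[2 * n + 1] == a[1] ** n * x1
```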
Now, we turn to the case where \(r=2\) and the period is \(p\ge 3\). In this case, the entries of the matrix \(B=C(p-1)\cdots C(0)\) are given by
where
and for every \(k>2\), we have
We need to distinguish two cases. When p is even, a straightforward computation shows that
Hence, for every \(k\ge 2\), we have
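The even-p structure can be confirmed numerically: for \(r=2\) and even p, the product \(B=C(p-1)\cdots C(0)\) is block diagonal, with diagonal blocks \(A(p-1)A(p-3)\cdots A(1)\) and \(A(p-2)A(p-4)\cdots A(0)\) (a sketch with \(p=4\); random blocks suffice here, since this diagonal structure does not itself use commutativity):

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)
p, d = 4, 2                                       # even period p
A = [rng.standard_normal((d, d)) for _ in range(p)]   # A(0), ..., A(3)
Zb = np.zeros((d, d))

C = lambda An: np.block([[Zb, An], [np.eye(d), Zb]])      # C(n) for r = 2
B = reduce(lambda acc, An: C(An) @ acc, A, np.eye(2 * d)) # C(3)C(2)C(1)C(0)

assert np.allclose(B[:d, d:], 0) and np.allclose(B[d:, :d], 0)  # off-diagonal zero
assert np.allclose(B[:d, :d], A[3] @ A[1])        # A(p-1)A(p-3)...A(1)
assert np.allclose(B[d:, d:], A[2] @ A[0])        # A(p-2)A(p-4)...A(0)
```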
However, when p is odd, we have
In this case, to calculate the powers \(B^{k}\) for \(k\ge 2\), we need to utilize Theorem 3.3 of Sect. 3.2. Indeed, in this case, \( P(S)=det[\mathbf {1}_{2\times 2}\otimes S - B ]=S^2-A(p-1)A(p-2)\cdots A(1)A(0)\), and it follows from Theorem 3.3 that for every \(k\ge 1\), we have
with
where \(W_{11}(0)=W_{22}(0)=A(p-1)A(p-2)\cdots A(1)A(0)\), \(W_{12}(1)=A(p-1)A(p-2)^{2}A(p-3)\cdots A(1)^{2}A(0)\) and \(W_{21}(1)=A(p-1)^{2}A(p-2)A(p-3)^{2}\cdots A(0)^{2}\). In addition, once again, we need to distinguish the two cases k odd and k even, to give the expression of \(\rho (k,2)\):
Thence, we obtain
and
where \(B_{11}(k)=B_{22}(k)=(A(p-1)A(p-2)\cdots A(1)A(0))^{\frac{k}{2}}\), \(B_{12}(k)=(A(p-2) A(p-4)\cdots A(1))^{\frac{k-1}{2}-1}(A(p-1)A(p-3)\cdots A(0))^{\frac{k+1}{2}}\) and \(B_{21}(k)=(A(p-1) A(p-3)\cdots A(0))^{\frac{k-1}{2}} (A(p-2)A(p-4)\cdots A(1))^{\frac{k+1}{2}}\).
With these results at our disposal, we can express the solution of the matrix equation \(Y_{n+2}=A(n)Y_{n}\). Indeed, when A(n) is periodic of period \(p>2\), we have the following result.
Proposition 5.3
Let \(p>2\) be an even integer. Consider the p-periodic matrix equation \(Y_{n+2}=A(n)Y_{n},\;\ n\ge 0\), with the initial conditions vector \((Y_{1},Y_{0})^{\top }\). Then, for every \(n=kp+i\) (\(i=0,\cdots ,p-1\)), the unique solution is given by
and for every \(i=3,\cdots ,p-2\), we have
Similarly, when the period p is odd, we have to consider two cases. For k is even, the vector solution is given by
where \(D=(A(p-1)A(p-2)\cdots A(0))^{\frac{k}{2}}.\)
When k is odd, we get
where \(B_1^k=(A(p-2) A(p-4)\cdots A(1))^{\frac{k-1}{2}}(A(p-1)A(p-3)\cdots A(0))^{\frac{k+1}{2}}\) and \(B_2^k=(A(p-1) A(p-3)\cdots A(0))^{\frac{k-1}{2}} (A(p-2)A(p-4)\cdots A(1))^{\frac{k+1}{2}}\).
For further illustration, we propose the following example.
Example 5.4
Consider the 3-periodic matrix equation (5.1) of order 2, with \(A(0)=\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}\), \(A(1)=\begin{pmatrix} -1 & -1 \\ 0 & 0 \end{pmatrix}\) and \(A(2)=\begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix}\). Thus, we have \(B=C(2)C(1)C(0)=\begin{pmatrix} \Theta _{2\times 2} & A(2)A(0) \\ A(1) & \Theta _{2\times 2} \end{pmatrix}=\begin{pmatrix} \Theta _{2\times 2} & B_{12} \\ B_{21} & \Theta _{2\times 2} \end{pmatrix}\), where \(B_{12}=\begin{pmatrix} 2 & 2\\ 0 & 0 \end{pmatrix}\) and \(B_{21}=\begin{pmatrix} -1 & -1\\ 0 & 0 \end{pmatrix}\). A simple verification shows that \(A(i)A(j)=A(j)A(i)\) for \(0\le i,j \le 2\). Then, applying the results of Proposition 5.3, the solutions of the 3-periodic matrix equation (5.1) are described as follows. If k is even, we have
and, if k is odd, we get
where \(E_1=A(2)A(1)A(0)\), \(E_2=A(2)A(1)\), \(E_3=A(2)A(0)\) and \(E_4=A(1)A(0)\).
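The commutation hypothesis and the resulting block powers in this example are easy to check numerically; a short sketch of our own:

```python
import numpy as np

A = [np.array([[1., 1.], [0., 0.]]),    # A(0)
     np.array([[-1., -1.], [0., 0.]]),  # A(1)
     np.array([[2., 1.], [0., 1.]])]    # A(2)

# Pairwise commutation, as remarked in the example.
for i in range(3):
    for j in range(3):
        assert np.allclose(A[i] @ A[j], A[j] @ A[i])

I2, Z = np.eye(2), np.zeros((2, 2))
def C(An):                               # block companion for r = 2
    return np.block([[Z, An], [I2, Z]])

B = C(A[2]) @ C(A[1]) @ C(A[0])
# B = [[0, A(2)A(0)], [A(1), 0]], with A(2)A(0) = [[2,2],[0,0]] = B_12.
assert np.allclose(B, np.block([[Z, A[2] @ A[0]], [A[1], Z]]))
E1 = A[2] @ A[1] @ A[0]
# Even powers collapse to diag(E1^m, E1^m).
assert np.allclose(np.linalg.matrix_power(B, 4),
                   np.block([[E1 @ E1, Z], [Z, E1 @ E1]]))
print("example checks passed")
```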
5.2 Solutions of the matrix equation \(Y_{n+r} =A(n)Y_{n}, A(n)\) is p-periodic
In this subsection, we apply Algorithm 2 to solve the equation
where A(n) is a p-periodic matrix (with period \(p\ge 2\)) of order d, and \(Y_{0},\cdots ,Y_{r-1}\) stand for the initial conditions. We assume that \(A(i)A(j)=A(j)A(i)\) for \( 0\le i,j\le p-1\). In a similar way as before, we can write Eq. (5.7) in the following matrix form:
where C(n) is the companion matrix in blocks \(C(n)=C[\Theta _{d\times d}, \cdots , \Theta _{d\times d}, A(n)]_{r\times r}\) and Z(n) is the column vector \(Z(n)=\left( Y_{n+r-1}, Y_{n+r-2}, \cdots , Y_{n}\right) ^{\top }\). Consider the product of companion matrices in blocks \( B=C(p-1)\cdots C(0)\). For \( 1\le m\le p-1\), the partial product of companion matrices in blocks \(B^{(m)}=C(m-1)\cdots C(0)\) is given by \(B^{(m)}=\begin{pmatrix} B_{11}^{(m)} &\cdots & B_{1r}^{(m)} \\ \vdots & \ddots &\vdots \\ B_{r1}^{(m)} &\cdots &B_{rr}^{(m)} \end{pmatrix}\). We point out that for \(m=p\) we have \(B=B^{(p)}\). To obtain the form of the matrix B, we apply Algorithm 2 (Algorithm 1 can also be used here). We need to distinguish three cases. When \(p=r\), a direct computation using Algorithm 2 allows us to have
Thus, we obtain
Therefore, for every \(k\ge 1\), we have
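The collapse of B when \(p=r\) can be illustrated with a short NumPy sketch of our own, with randomly generated blocks; the companion structure assumed here (A(n) in the top-right block, identities on the block subdiagonal) is the one used throughout, and the product reduces to \(B={\text {diag}}(A(p-1),\dots ,A(0))\), so \(B^{k}\) is block diagonal with the k-th powers.

```python
import numpy as np

def block_companion(An, r):
    # C(n) = C[0, ..., 0, A(n)]: A(n) in the top-right block,
    # identity blocks on the block subdiagonal.
    d = An.shape[0]
    C = np.zeros((r * d, r * d))
    C[:d, (r - 1) * d:] = An
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)
    return C

rng = np.random.default_rng(0)
d, r = 2, 3
p = r                              # the case p = r
A = [rng.standard_normal((d, d)) for _ in range(p)]
B = np.eye(r * d)
for n in range(p):                 # B = C(p-1) ... C(1) C(0)
    B = block_companion(A[n], r) @ B

# For p = r the product collapses to diag(A(p-1), ..., A(0)).
expected = np.zeros_like(B)
for i in range(r):
    expected[i * d:(i + 1) * d, i * d:(i + 1) * d] = A[p - 1 - i]
assert np.allclose(B, expected)
print("B = diag(A(p-1), ..., A(0)) confirmed for p = r =", r)
```

No commutativity is needed for this structural fact; only the powers of B use it.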
In this case, the solution of the p-periodic matrix equation (5.7) is given by the following proposition.
Proposition 5.5
Consider the p-periodic matrix difference equation (5.7) with the initial conditions vector \((Y_{r-1},\cdots ,Y_{0})^\top \). Suppose that the period p satisfies \(p=r\). Then, for \(n=kp\), the solution of Eq. (5.7) is given by
and if \(n=kp+i\), for \(i=1,\cdots ,p-2\), we have
Finally, for \(n=(k+1)p-1\), we have
When \(p<r\), employing Algorithm 2 in a similar way, we get
By a straightforward computation, we obtain
Thence, we have
where \(B_1=\Theta _{p\times (r-p)}\) is the null matrix of size \(p\times (r-p)\), \(B_{2}={\text {diag}}(A(p-1),A(p-2),\dots ,A(0))\) is the block-diagonal matrix of order \(p\times p\), \(B_{3}=\mathbf {1}_{(r-p)\times (r-p)}\) is the identity matrix of order \(r-p\), and \(B_4=\Theta _{(r-p)\times p}\) is the null matrix of size \((r-p)\times p\).
To express the solution of Eq. (5.7) when \(p<r\), we need to compute the powers \(B^{k}\) of the matrix B given above using Theorem 3.3. Unfortunately, it is not straightforward to derive the expression of the matrix polynomial \( P(S)=\det [\mathbf {1}_{r\times r}\otimes S - B]=S^r-D_0S^{r-1}+\cdots -D_{r-1}\) of B. Therefore, we examine the following example with \(p=3\) and \(r=5\).
Example 5.6
Consider the matrix equation
where A(n) is periodic with period \(p=3\), such that \(A(0)=\begin{pmatrix} 1 & 1\\ 1 & 1 \end{pmatrix},\) \(A(1)=\begin{pmatrix} 2 & -1\\ -1 & 2 \end{pmatrix}\) and \(A(2)=\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}.\)
Thus, applying Algorithm 2, we get the entries of the matrix \(B=C(2)C(1)C(0)\) as follows:
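The same entries can also be obtained by direct block multiplication; the following NumPy sketch (our own check, not from the paper) computes B for the data of this example and confirms the \(p<r\) block pattern described above.

```python
import numpy as np

def block_companion(An, r):
    # C(n) = C[0, ..., 0, A(n)]_{r x r} in blocks.
    d = An.shape[0]
    C = np.zeros((r * d, r * d))
    C[:d, (r - 1) * d:] = An
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)
    return C

A = [np.array([[1., 1.], [1., 1.]]),    # A(0)
     np.array([[2., -1.], [-1., 2.]]),  # A(1)
     np.array([[0., 1.], [1., 0.]])]    # A(2)
d, r, p = 2, 5, 3
B = (block_companion(A[2], r) @ block_companion(A[1], r)
     @ block_companion(A[0], r))

# Expected pattern for p < r:
# B = [[0_{p x (r-p)}, diag(A(p-1),...,A(0))], [I_{r-p}, 0_{(r-p) x p}]].
expected = np.zeros((r * d, r * d))
for i in range(p):
    expected[i * d:(i + 1) * d,
             (r - p + i) * d:(r - p + i + 1) * d] = A[p - 1 - i]
expected[p * d:, :(r - p) * d] = np.eye((r - p) * d)
assert np.allclose(B, expected)
print("B matches the p < r block pattern")
```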
Let \(n=kp+i\) (\(i=0,1,2\)) and \(k=5k'+s\) (\(s=0,1,2,3,4\)), and set \(D=A(2)A(1)A(0)\). The solution of the 3-periodic matrix difference equation \(Y_{n+5}=A(n)Y_{n}\) subject to the initial conditions vector \((Y_{4},Y_{3},Y_{2},Y_{1},Y_{0})^\top \) is given by
(1) When \(k=5k'\), we have
$$\begin{aligned} \left\{ \begin{aligned} Y_{kp+i+r-l}&=D^{k'}A(l)Y_{l}, \;\; \text{ for }\;\; l=i,i-1,\cdots ,0,\\ Y_{kp+i+r-l}&=D^{k'}Y_{i+r-l},\;\; \text{ for }\;\; l=i+1,\cdots ,r-1. \end{aligned} \right. \end{aligned}$$

(2) When \(k=5k'+1\), we have
$$\begin{aligned} \left\{ \begin{aligned} Y_{kp+7}&=A(1)^{k'}(A(0)A(2))^{k'+1}Y_{0}, \\ Y_{kp+l+1}&=D^{k'}A(l-1)Y_{l-1},\;\;\text{ for }\;\; 1\le l \le 5,\\ Y_{kp+1}&=D^{k'}Y_{4}. \\ \end{aligned} \right. \end{aligned}$$

(3) When \(k=5k'+2\), we have
$$\begin{aligned} \left\{ \begin{aligned} Y_{kp+l+1}&=D^{k'}A(l+2)Y_{l+2}, \;\; \text{ for }\;\; 0\le l\le 2, \\ Y_{kp+l+1}&=D^{k'}A(l)A(l+2)Y_{l-3},\;\; \text{ for }\;\; l=3,4,5,6. \end{aligned} \right. \end{aligned}$$

(4) When \(k=5k'+3\), we have
$$\begin{aligned} \left\{ \begin{aligned} Y_{kp+7}&=D^{k'+1}Y_{1}, \\ Y_{kp+6}&=D^{k'+1}Y_{0},\\ Y_{kp+l+1}&=D^{k'}A(l)A(l+2)Y_{l} \;\; \text{ for }\;\; l=0,1,2,3,4.\\ \end{aligned} \right. \end{aligned}$$

(5) When \(k=5k'+4\), we have
$$\begin{aligned} \left\{ \begin{aligned} Y_{kp+l+1}&=D^{k'+1}Y_{l-2}, \;\; \text{ for }\;\; l=2,3,4,5,6,\\ Y_{kp+2}&=D^{k'}A(l)A(l+2)Y_{l+3}, \;\; \text{ for }\;\; l=0,1. \end{aligned} \right. \end{aligned}$$
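As a sanity check (our own, not part of the original exposition), one can iterate the recurrence \(Y_{n+5}=A(n \bmod 3)Y_{n}\) directly from hypothetical initial vectors and compare the state vector with powers of B, confirming the reduction \(Z(kp)=B^{k}Z(0)\) on which the case analysis above rests.

```python
import numpy as np

A = [np.array([[1., 1.], [1., 1.]]),
     np.array([[2., -1.], [-1., 2.]]),
     np.array([[0., 1.], [1., 0.]])]
d, r, p = 2, 5, 3

# Iterate Y_{n+5} = A(n mod 3) Y_n from random initial vectors Y_0..Y_4.
rng = np.random.default_rng(1)
Y = [rng.standard_normal(d) for _ in range(r)]
N = 3 * p * 4                       # several full periods
for n in range(N):
    Y.append(A[n % p] @ Y[n])

def block_companion(An):
    C = np.zeros((r * d, r * d))
    C[:d, (r - 1) * d:] = An
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)
    return C

B = block_companion(A[2]) @ block_companion(A[1]) @ block_companion(A[0])

def Z(n):
    # State vector Z(n) = (Y_{n+4}, Y_{n+3}, ..., Y_n)^T.
    return np.concatenate([Y[n + r - 1 - j] for j in range(r)])

for k in range(1, 5):
    assert np.allclose(Z(k * p), np.linalg.matrix_power(B, k) @ Z(0))
print("Z(kp) = B^k Z(0) verified")
```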
Finally, for \(p>r\), once again we need to discuss two cases. The first case is \(p=kr\) with \(k>1\). Following the same approach, again using Algorithm 2, we get \(\displaystyle B=(B_{i,j})_{1\le i,j\le r}\), where
Thus, we obtain
where \(B_{i}=A(p-i)A(p-r-i)\cdots A(r-i)\) (\(i=1,\cdots ,r\)). Therefore, for every \(k\ge 1\), we have
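This block-diagonal structure of B for \(p=kr\) can be confirmed numerically; a sketch of our own with randomly chosen blocks and \(k=2\) factors per diagonal block:

```python
import numpy as np

def block_companion(An, r):
    d = An.shape[0]
    C = np.zeros((r * d, r * d))
    C[:d, (r - 1) * d:] = An
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)
    return C

rng = np.random.default_rng(3)
d, r, p = 2, 3, 6                      # p = kr with k = 2
A = [rng.standard_normal((d, d)) for _ in range(p)]
B = np.eye(r * d)
for n in range(p):
    B = block_companion(A[n], r) @ B   # B = C(5)C(4)...C(0)

# Predicted: B = diag(B_1, ..., B_r) with B_i = A(p-i)A(p-r-i)...A(r-i).
expected = np.zeros_like(B)
for i in range(1, r + 1):
    expected[(i - 1) * d:i * d,
             (i - 1) * d:i * d] = A[p - i] @ A[p - r - i]  # two factors, k = 2
assert np.allclose(B, expected)
print("B = diag(B_1, ..., B_r) confirmed for p = kr")
```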
The second case is when \(p=kr+s\) and \(s=1,\cdots ,r-1\). Then, a direct computation shows that the entries of the matrix \(\displaystyle B=(B_{i,j})_{1\le i,j\le r}\) are given by
In other words, we have
where \(\Theta _{k\times m}\) is the null matrix of order \(k\times m\) and
such that \(B_{1}^{(i)}=A(p-i)A(p-r-i)\cdots A(p-kr-i)\) for \(i=1,\cdots ,s\) and \(B_{2}^{(i)}=A(p-i)A(p-r-i)\cdots A(p-(k-1)r-i)\) for \(i=s+1,\cdots ,r\).
For further illustration, we examine the particular case of the matrix equation (5.7) of order 3, where A(n) is periodic of period 4. The solution is given by Theorem 3.3 as follows:
where \(B=C(3)C(2)C(1)C(0)\). Then, using the generalized Cayley–Hamilton Theorem 3.3 for computing the powers \(B^{k}\) of the matrix B, we get the following result.
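Before stating the result, the block pattern of \(B=C(3)C(2)C(1)C(0)\) predicted by the \(p=kr+s\) case (here \(k=1\), \(s=1\)) can be verified numerically; a sketch of our own with randomly chosen blocks:

```python
import numpy as np

def block_companion(An, r):
    d = An.shape[0]
    C = np.zeros((r * d, r * d))
    C[:d, (r - 1) * d:] = An
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)
    return C

rng = np.random.default_rng(2)
d, r, p = 2, 3, 4                      # p = kr + s with k = 1, s = 1
A = [rng.standard_normal((d, d)) for _ in range(p)]
B = np.eye(r * d)
for n in range(p):
    B = block_companion(A[n], r) @ B   # B = C(3)C(2)C(1)C(0)

# Predicted pattern: B_1^{(1)} = A(3)A(0) in position (1,3),
# with A(2) and A(1) on the block subdiagonal.
Z = np.zeros((d, d))
expected = np.block([[Z, Z, A[3] @ A[0]],
                     [A[2], Z, Z],
                     [Z, A[1], Z]])
assert np.allclose(B, expected)
print("B = C(3)C(2)C(1)C(0) has the predicted block pattern")
```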
Proposition 5.7
Let \(n=kp+i\) (\(i=0,\cdots ,3\)) and \(k=3k'+s\) (\(s=0,1,2\)). The solution of the matrix difference equation \(Y_{n+3}=A(n)Y_{n}\), where A(n) is periodic with period \(p=4\), subject to the initial conditions vector \((Y_{2},Y_{1},Y_{0})^\top \), is given by
(1) When \(k=3k'\), we have
$$\begin{aligned} \left\{ \begin{aligned} Y_{kp+6}&=D^{k'}A(3)A(0)Y_{0},\\ Y_{kp+l+1}&=D^{k'}A(l-2)Y_{l-2}, \;\;\text{ for }\;\; l=2,3,4,\\ Y_{kp+l+1}&=D^{k'}Y_{l+1},\;\;\text{ for }\;\; l=0,1. \\ \end{aligned} \right. \end{aligned}$$

(2) When \(k=3k'+1\), we have
$$\begin{aligned} \left\{ \begin{aligned} Y_{kp+l+1}&=D^{k'}A(l-1)A(l-2)A(l-3)Y_{l-4}, \;\;\text{ for }\;\; l=4,5,\\ Y_{kp+l+1}&=D^{k'}A(l-1)A(l+2)Y_{l-1}, \;\;\text{ for }\;\; l=1,2,3,\\ Y_{kp+1}&=D^{k'}A(2)Y_{2}. \\ \end{aligned} \right. \end{aligned}$$

(3) When \(k=3k'+2\), we have
$$\begin{aligned} \left\{ \begin{aligned} Y_{kp+l+1}&=D^{k'+1}Y_{l-3}, \;\;\text{ for }\;\; l=3,4,5,\\ Y_{kp+l+1}&=D^{k'}(\prod _{{j=0}_{j\ne l+1}}^{3}A(j))Y_{l}, \;\;\text{ for }\;\; l=0,1,2, \\ \end{aligned} \right. \end{aligned}$$
where \(D=A(3)A(2)A(1)A(0)\).
6 Discussion and concluding remarks
In this paper, we have studied a class of periodic matrix difference equations. While formulating the results on the solutions of this class of equations, we were led to deal with two new problems. The first concerns the expression of the powers of matrices in blocks. To this aim, we proposed a method for computing the powers of matrices in blocks, based on linear recursive sequences of Fibonacci type in the algebra of square matrices and the generalized Cayley–Hamilton theorem. Here, the combinatorial expression for the linear sequences of Fibonacci type in the algebra of square matrices \(GL(r, \mathbb {C}^{d\times d})\) and the Kronecker product play a central role. The second problem deals with the computation of the product of companion matrices in blocks. To this end, we developed two recursive algorithms to calculate the entries of the resulting matrix product: Algorithm 1 is an iterative process based on a sequence of matrices, while Algorithm 2 relies solely on a family of Fibonacci sequences in the algebra of square matrices. General results are established and special cases are considered. To the best of our knowledge, the results of this investigation constitute a pilot study on solving periodic matrix difference equations.
It is worth noting that, for reasons of clarity and simplicity, the matrices in the examples illustrating our results are mostly small. However, the general results and algorithms show that our method can handle matrices of large size. On the other hand, implementing the two algorithms is of practical interest, both for treating matrices of large size and for studying concrete applications such as the periodic matrix model of Samuelson–Hicks. Partial results have been established, which illustrate that this type of method can be used effectively.
Finally, recent literature shows that the generalized Cayley–Hamilton theorem constitutes an important tool for dealing with various applied and theoretical topics. In particular, it can be used as a new technique for solving some matrix and matrix differential equations (see, for example, [2, 7,8,9, 13, 14]). As for the periodic matrix model of Samuelson–Hicks, it seems to us that our results and algorithms can also be used effectively for studying some topics related to the generalized Cayley–Hamilton theorem.
References
Al Zhour, Z.; Kilicman, A.: Some applications of the convolution and Kronecker products of matrices. In: Proceeding of the International Conference on Math, UUM, Kedah, Malaysia. pp. 551–562 (2005)
Al Zhour, Z.: New techniques for solving some matrix and matrix differential equations. Ain Shams Eng. J. 6(1), 347–354 (2015)
Ben Taher, R.; Rachidi, M.: Linear recurrence relations in the algebra of matrices and applications. Linear Algebra Appl. 330, 15–24 (2001)
Ben Taher, R.; Benkhaldoun, H.; Rachidi, M.: On some class of periodic-discrete homogeneous difference equations via Fibonacci sequences. J. Differ. Equ. Appl. 22(9), 1292–1306 (2016)
Ben Taher, R.; Benkhaldoun, H.: Solving the linear difference equation with periodic coefficients via Fibonacci sequences. Linear Multilinear Algebra. (2018). https://doi.org/10.1080/03081087.2018.1497584
Cushing, J.M.; Ackleh, A.S.: A net reproductive number for periodic matrix models. J. Biol. Dyn. 6, 166–188 (2012)
Dassios, I.; Szajowski, K.: Bayesian optimal control for a non-autonomous stochastic discrete time system. Appl. Math. Comput. Elsevier 274, 556–564 (2016)
Dassios, I.; Baleanu, D.; Kalogeropoulos, G.: On non-homogeneous singular systems of fractional nabla difference equations. Appl. Math. Comput. Elsevier 227, 112–131 (2014)
Dassios, I.: On non-homogeneous generalized linear discrete time systems. Circ. Syst. Signal Process. 31(5), 1699–1712 (2012)
Heijman, W.J.M.; van Mouche, P.H.M.: Floquet theory and economic dynamics II, WASS Working paper; No. 15–2015 (2015). https://doi.org/10.13140/RG.2.1.1067.4009
Horn, R.A.; Johnson, C.R.: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991)
Jódar, L.; Abou-Kandil, H.: Kronecker products and coupled matrix Riccati differential systems. Linear Algebra Appl. 121, 39–51 (1989)
Kaczorek, T.: New extension of the Cayley–Hamilton Theorem with applications. Proceedings 19th European Conference on Modeling and Simulation, Editors Y. Merkuryev, R. Zobel and E. Kerchkoffs. ECMS (2005)
Kaczorek, T.: An extension of the Cayley–Hamilton theorem for a standard pair of block matrices. Appl. Math. Comput. Sci. 8(3), 511–516 (1998)
Kittappa, R.K.: A representation of the solution of the nth order linear difference equation with variable coefficients. Linear Algebra Appl. 193, 211–222 (1993)
Lim, A.; Dai, J.: On product of companion matrices. Linear Algebra Appl. 435(11), 2921–2935 (2011)
Mallik, R.K.: Solutions of linear difference equation with variable coefficients. J. Math. Anal. Appl. 222(1), 79–91 (1998)
Schacke, K.: On the Kronecker Product. Master’s Thesis, University of Waterloo (2004)
Yang, Y.: An efficient LQR design for discrete-time linear periodic system based on a novel lifting method. Automatica 87, 383–388 (2018)
Acknowledgements
The authors are very grateful to the anonymous referees for their valuable suggestions that clearly improved this article. The third author MR expresses his sincere thanks to the INMA and the UFMS for their valuable support and encouragements. His special thanks to several members of the UFMS for their precious help, especially, Professor Patricia Sandero and Adriane Eidman.
Ethics declarations
Funding
The authors received no specific funding for this work.
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Benkhaldoun, H., Ben Taher, R. & Rachidi, M. Periodic matrix difference equations and companion matrices in blocks: some applications. Arab. J. Math. 10, 555–574 (2021). https://doi.org/10.1007/s40065-021-00332-2